From patchwork Sun May 8 14:25:48 2022
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 110901
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Xueming Li
To: Maxime Coquelin
Subject: [PATCH v3 1/7] vdpa/mlx5: fix interrupt trash that leads to segment fault
Date: Sun, 8 May 2022 17:25:48 +0300
Message-ID: <20220508142554.560354-2-xuemingl@nvidia.com>
In-Reply-To: <20220508142554.560354-1-xuemingl@nvidia.com>
References: <20220224132820.1939650-1-xuemingl@nvidia.com> <20220508142554.560354-1-xuemingl@nvidia.com>
List-Id: DPDK patches and discussions
Disable the interrupt-unregister timeout: keep retrying until the callback
is really unregistered, so that an invalid FD can no longer reach the
interrupt thread and cause a segmentation fault.

Fixes: 62c813706e41 ("vdpa/mlx5: map doorbell")
Cc: matan@mellanox.com
Cc: stable@dpdk.org

Signed-off-by: Xueming Li
Reviewed-by: Maxime Coquelin
---
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 20 ++++++++------------
 1 file changed, 8 insertions(+), 12 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index 3416797d289..2e517beda24 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -17,7 +17,7 @@
 static void
-mlx5_vdpa_virtq_handler(void *cb_arg)
+mlx5_vdpa_virtq_kick_handler(void *cb_arg)
 {
 	struct mlx5_vdpa_virtq *virtq = cb_arg;
 	struct mlx5_vdpa_priv *priv = virtq->priv;
@@ -59,20 +59,16 @@ static int
 mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq)
 {
 	unsigned int i;
-	int retries = MLX5_VDPA_INTR_RETRIES;
 	int ret = -EAGAIN;
 
-	if (rte_intr_fd_get(virtq->intr_handle) != -1) {
-		while (retries-- && ret == -EAGAIN) {
+	if (rte_intr_fd_get(virtq->intr_handle) >= 0) {
+		while (ret == -EAGAIN) {
 			ret = rte_intr_callback_unregister(virtq->intr_handle,
-							mlx5_vdpa_virtq_handler,
-							virtq);
+					mlx5_vdpa_virtq_kick_handler, virtq);
 			if (ret == -EAGAIN) {
-				DRV_LOG(DEBUG, "Try again to unregister fd %d "
-					"of virtq %d interrupt, retries = %d.",
-					rte_intr_fd_get(virtq->intr_handle),
-					(int)virtq->index, retries);
-
+				DRV_LOG(DEBUG, "Try again to unregister fd %d of virtq %hu interrupt",
+					rte_intr_fd_get(virtq->intr_handle),
+					virtq->index);
 				usleep(MLX5_VDPA_INTR_RETRIES_USEC);
 			}
 		}
@@ -359,7 +355,7 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
 		goto error;
 	if (rte_intr_callback_register(virtq->intr_handle,
-				       mlx5_vdpa_virtq_handler,
+				       mlx5_vdpa_virtq_kick_handler,
 				       virtq)) {
 		rte_intr_fd_set(virtq->intr_handle, -1);
 		DRV_LOG(ERR,
			"Failed to register virtq %d interrupt.",

From patchwork Sun May 8 14:25:49 2022
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 110902
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Xueming Li
To: Maxime Coquelin
Subject: [PATCH v3 2/7] vdpa/mlx5: fix dead loop when process interrupted
Date: Sun, 8 May 2022 17:25:49 +0300
Message-ID: <20220508142554.560354-3-xuemingl@nvidia.com>
In-Reply-To: <20220508142554.560354-1-xuemingl@nvidia.com>
References: <20220224132820.1939650-1-xuemingl@nvidia.com> <20220508142554.560354-1-xuemingl@nvidia.com>
List-Id: DPDK patches and discussions
In Ctrl+C handling, the kick-handling thread sometimes gets an endless
stream of EAGAIN errors and falls into a dead loop. Kicks happen
frequently on a real system due to busy traffic or the retry mechanism.
This patch bounds the read retries, kicks the firmware anyway, and skips
setting the hardware notifier on a potential device error; the notifier
can still be set by the next successful kick request.

Fixes: 62c813706e41 ("vdpa/mlx5: map doorbell")
Cc: stable@dpdk.org

Signed-off-by: Xueming Li
Reviewed-by: Maxime Coquelin
---
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index 2e517beda24..2696d54b412 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -23,11 +23,11 @@ mlx5_vdpa_virtq_kick_handler(void *cb_arg)
 	struct mlx5_vdpa_priv *priv = virtq->priv;
 	uint64_t buf;
 	int nbytes;
+	int retry;
 
 	if (rte_intr_fd_get(virtq->intr_handle) < 0)
 		return;
-
-	do {
+	for (retry = 0; retry < 3; ++retry) {
 		nbytes = read(rte_intr_fd_get(virtq->intr_handle), &buf,
 			      8);
 		if (nbytes < 0) {
@@ -39,7 +39,9 @@ mlx5_vdpa_virtq_kick_handler(void *cb_arg)
 				virtq->index, strerror(errno));
 		}
 		break;
-	} while (1);
+	}
+	if (nbytes < 0)
+		return;
 	rte_write32(virtq->index, priv->virtq_db_addr);
 	if (virtq->notifier_state == MLX5_VDPA_NOTIFIER_STATE_DISABLED) {
 		if (rte_vhost_host_notifier_ctrl(priv->vid, virtq->index, true))
From patchwork Sun May 8 14:25:50 2022
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 110903
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Xueming Li
To: Maxime Coquelin
Subject: [PATCH v3 3/7] vdpa/mlx5: no kick handling during shutdown
Date: Sun, 8 May 2022 17:25:50 +0300
Message-ID: <20220508142554.560354-4-xuemingl@nvidia.com>
In-Reply-To: <20220508142554.560354-1-xuemingl@nvidia.com>
References: <20220224132820.1939650-1-xuemingl@nvidia.com> <20220508142554.560354-1-xuemingl@nvidia.com>
List-Id: DPDK patches and discussions
When QEMU suspends a VM, the hardware notifier is unmapped while the vCPU
thread may still be active and writing to the notifier through the kick
socket. The PMD kick-handler thread then tries to install the hardware
notifier through the client socket; in such a case the attempt times out
and slows down device close. This patch skips the hardware notifier
installation if the VQ or the device is in the middle of shutdown.
Signed-off-by: Xueming Li
Reviewed-by: Maxime Coquelin
---
 drivers/vdpa/mlx5/mlx5_vdpa.c       | 17 ++++++++++-------
 drivers/vdpa/mlx5/mlx5_vdpa.h       |  8 +++++++-
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 12 +++++++++++-
 3 files changed, 28 insertions(+), 9 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index 749c9d097cf..48f20d9ecdb 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -252,13 +252,15 @@ mlx5_vdpa_dev_close(int vid)
 	}
 	mlx5_vdpa_err_event_unset(priv);
 	mlx5_vdpa_cqe_event_unset(priv);
-	if (priv->configured)
+	if (priv->state == MLX5_VDPA_STATE_CONFIGURED) {
 		ret |= mlx5_vdpa_lm_log(priv);
+		priv->state = MLX5_VDPA_STATE_IN_PROGRESS;
+	}
 	mlx5_vdpa_steer_unset(priv);
 	mlx5_vdpa_virtqs_release(priv);
 	mlx5_vdpa_event_qp_global_release(priv);
 	mlx5_vdpa_mem_dereg(priv);
-	priv->configured = 0;
+	priv->state = MLX5_VDPA_STATE_PROBED;
 	priv->vid = 0;
 	/* The mutex may stay locked after event thread cancel - initiate it. */
 	pthread_mutex_init(&priv->vq_config_lock, NULL);
@@ -277,7 +279,8 @@ mlx5_vdpa_dev_config(int vid)
 		DRV_LOG(ERR, "Invalid vDPA device: %s.", vdev->device->name);
 		return -EINVAL;
 	}
-	if (priv->configured && mlx5_vdpa_dev_close(vid)) {
+	if (priv->state == MLX5_VDPA_STATE_CONFIGURED &&
+	    mlx5_vdpa_dev_close(vid)) {
 		DRV_LOG(ERR, "Failed to reconfigure vid %d.", vid);
 		return -1;
 	}
@@ -291,7 +294,7 @@ mlx5_vdpa_dev_config(int vid)
 		mlx5_vdpa_dev_close(vid);
 		return -1;
 	}
-	priv->configured = 1;
+	priv->state = MLX5_VDPA_STATE_CONFIGURED;
 	DRV_LOG(INFO, "vDPA device %d was configured.", vid);
 	return 0;
 }
@@ -373,7 +376,7 @@ mlx5_vdpa_get_stats(struct rte_vdpa_device *vdev, int qid,
 		DRV_LOG(ERR, "Invalid device: %s.", vdev->device->name);
 		return -ENODEV;
 	}
-	if (!priv->configured) {
+	if (priv->state == MLX5_VDPA_STATE_PROBED) {
 		DRV_LOG(ERR, "Device %s was not configured.",
 				vdev->device->name);
 		return -ENODATA;
@@ -401,7 +404,7 @@ mlx5_vdpa_reset_stats(struct rte_vdpa_device *vdev, int qid)
 		DRV_LOG(ERR, "Invalid device: %s.", vdev->device->name);
 		return -ENODEV;
 	}
-	if (!priv->configured) {
+	if (priv->state == MLX5_VDPA_STATE_PROBED) {
 		DRV_LOG(ERR, "Device %s was not configured.",
 				vdev->device->name);
 		return -ENODATA;
@@ -594,7 +597,7 @@ mlx5_vdpa_dev_remove(struct mlx5_common_device *cdev)
 		TAILQ_REMOVE(&priv_list, priv, next);
 	pthread_mutex_unlock(&priv_list_lock);
 	if (found) {
-		if (priv->configured)
+		if (priv->state == MLX5_VDPA_STATE_CONFIGURED)
 			mlx5_vdpa_dev_close(priv->vid);
 		if (priv->var) {
 			mlx5_glue->dv_free_var(priv->var);
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index 22617924eac..cc83d7cba3d 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -113,9 +113,15 @@ enum {
 	MLX5_VDPA_EVENT_MODE_ONLY_INTERRUPT
 };
 
+enum mlx5_dev_state {
+	MLX5_VDPA_STATE_PROBED = 0,
+	MLX5_VDPA_STATE_CONFIGURED,
+	MLX5_VDPA_STATE_IN_PROGRESS /* Shutting down. */
+};
+
 struct mlx5_vdpa_priv {
 	TAILQ_ENTRY(mlx5_vdpa_priv) next;
-	uint8_t configured;
+	enum mlx5_dev_state state;
 	pthread_mutex_t vq_config_lock;
 	uint64_t no_traffic_counter;
 	pthread_t timer_tid;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index 2696d54b412..4c34983da41 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -25,6 +25,11 @@ mlx5_vdpa_virtq_kick_handler(void *cb_arg)
 	int nbytes;
 	int retry;
 
+	if (priv->state != MLX5_VDPA_STATE_CONFIGURED && !virtq->enable) {
+		DRV_LOG(ERR, "device %d queue %d down, skip kick handling",
+			priv->vid, virtq->index);
+		return;
+	}
 	if (rte_intr_fd_get(virtq->intr_handle) < 0)
 		return;
 	for (retry = 0; retry < 3; ++retry) {
@@ -43,6 +48,11 @@ mlx5_vdpa_virtq_kick_handler(void *cb_arg)
 	if (nbytes < 0)
 		return;
 	rte_write32(virtq->index, priv->virtq_db_addr);
+	if (priv->state != MLX5_VDPA_STATE_CONFIGURED && !virtq->enable) {
+		DRV_LOG(ERR, "device %d queue %d down, skip kick handling",
+			priv->vid, virtq->index);
+		return;
+	}
 	if (virtq->notifier_state == MLX5_VDPA_NOTIFIER_STATE_DISABLED) {
 		if (rte_vhost_host_notifier_ctrl(priv->vid, virtq->index, true))
 			virtq->notifier_state = MLX5_VDPA_NOTIFIER_STATE_ERR;
@@ -541,7 +551,7 @@ mlx5_vdpa_virtq_enable(struct mlx5_vdpa_priv *priv, int index, int enable)
 	DRV_LOG(INFO, "Update virtq %d status %sable -> %sable.", index,
 		virtq->enable ? "en" : "dis", enable ? "en" : "dis");
-	if (!priv->configured) {
+	if (priv->state == MLX5_VDPA_STATE_PROBED) {
 		virtq->enable = !!enable;
 		return 0;
 	}

From patchwork Sun May 8 14:25:51 2022
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 110904
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Xueming Li
To: , Maxime Coquelin
CC:
Subject: [PATCH v3 4/7] vdpa/mlx5: reuse resources in reconfiguration
Date: Sun, 8 May 2022 17:25:51 +0300
Message-ID: <20220508142554.560354-5-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20220508142554.560354-1-xuemingl@nvidia.com>
References: <20220224132820.1939650-1-xuemingl@nvidia.com> <20220508142554.560354-1-xuemingl@nvidia.com>
To speed up device resume, create reusable resources during device probe and release them only when the device is removed. The reused resources include the TIS, TD, VAR doorbell mmap, error-handling event channel and interrupt handler, UAR, Rx event channel, NULL MR, and the steering domain and table.

Signed-off-by: Xueming Li
Reviewed-by: Maxime Coquelin
---
 drivers/vdpa/mlx5/mlx5_vdpa.c       | 167 +++++++++++++++++++++-------
 drivers/vdpa/mlx5/mlx5_vdpa.h       |   9 ++
 drivers/vdpa/mlx5/mlx5_vdpa_event.c |  23 ++--
 drivers/vdpa/mlx5/mlx5_vdpa_mem.c   |  11 --
 drivers/vdpa/mlx5/mlx5_vdpa_steer.c |  30 +----
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c |  44 --------
 6 files changed, 149 insertions(+), 135 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c index 48f20d9ecdb..4408aeccfbd 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa.c @@ -5,6 +5,7 @@ #include #include #include +#include #include #include @@ -49,6 +50,8 @@ TAILQ_HEAD(mlx5_vdpa_privs, mlx5_vdpa_priv) priv_list = TAILQ_HEAD_INITIALIZER(priv_list); static pthread_mutex_t priv_list_lock = PTHREAD_MUTEX_INITIALIZER; +static void mlx5_vdpa_dev_release(struct mlx5_vdpa_priv *priv); + static struct mlx5_vdpa_priv * mlx5_vdpa_find_priv_resource_by_vdev(struct rte_vdpa_device *vdev) { @@ -250,7 +253,6 @@ mlx5_vdpa_dev_close(int vid) DRV_LOG(ERR, "Invalid vDPA device: %s.", vdev->device->name); return -1; } - mlx5_vdpa_err_event_unset(priv); mlx5_vdpa_cqe_event_unset(priv); if (priv->state == MLX5_VDPA_STATE_CONFIGURED) { ret |= mlx5_vdpa_lm_log(priv); @@ -258,7 +260,6 @@ mlx5_vdpa_dev_close(int vid) } mlx5_vdpa_steer_unset(priv); mlx5_vdpa_virtqs_release(priv); - mlx5_vdpa_event_qp_global_release(priv); mlx5_vdpa_mem_dereg(priv); priv->state = MLX5_VDPA_STATE_PROBED; priv->vid = 0; @@ -288,7 +289,7 @@ mlx5_vdpa_dev_config(int vid) if
(mlx5_vdpa_mtu_set(priv)) DRV_LOG(WARNING, "MTU cannot be set on device %s.", vdev->device->name); - if (mlx5_vdpa_mem_register(priv) || mlx5_vdpa_err_event_setup(priv) || + if (mlx5_vdpa_mem_register(priv) || mlx5_vdpa_virtqs_prepare(priv) || mlx5_vdpa_steer_setup(priv) || mlx5_vdpa_cqe_event_setup(priv)) { mlx5_vdpa_dev_close(vid); @@ -507,13 +508,90 @@ mlx5_vdpa_config_get(struct mlx5_kvargs_ctrl *mkvlist, DRV_LOG(DEBUG, "no traffic max is %u.", priv->no_traffic_max); } +static int +mlx5_vdpa_create_dev_resources(struct mlx5_vdpa_priv *priv) +{ + struct mlx5_devx_tis_attr tis_attr = {0}; + struct ibv_context *ctx = priv->cdev->ctx; + uint32_t i; + int retry; + + for (retry = 0; retry < 7; retry++) { + priv->var = mlx5_glue->dv_alloc_var(ctx, 0); + if (priv->var != NULL) + break; + DRV_LOG(WARNING, "Failed to allocate VAR, retry %d.", retry); + /* Wait Qemu release VAR during vdpa restart, 0.1 sec based. */ + usleep(100000U << retry); + } + if (!priv->var) { + DRV_LOG(ERR, "Failed to allocate VAR %u.", errno); + rte_errno = ENOMEM; + return -rte_errno; + } + /* Always map the entire page. */ + priv->virtq_db_addr = mmap(NULL, priv->var->length, PROT_READ | + PROT_WRITE, MAP_SHARED, ctx->cmd_fd, + priv->var->mmap_off); + if (priv->virtq_db_addr == MAP_FAILED) { + DRV_LOG(ERR, "Failed to map doorbell page %u.", errno); + priv->virtq_db_addr = NULL; + rte_errno = errno; + return -rte_errno; + } + DRV_LOG(DEBUG, "VAR address of doorbell mapping is %p.", + priv->virtq_db_addr); + priv->td = mlx5_devx_cmd_create_td(ctx); + if (!priv->td) { + DRV_LOG(ERR, "Failed to create transport domain."); + rte_errno = errno; + return -rte_errno; + } + tis_attr.transport_domain = priv->td->id; + for (i = 0; i < priv->num_lag_ports; i++) { + /* 0 is auto affinity, non-zero value to propose port. 
*/ + tis_attr.lag_tx_port_affinity = i + 1; + priv->tiss[i] = mlx5_devx_cmd_create_tis(ctx, &tis_attr); + if (!priv->tiss[i]) { + DRV_LOG(ERR, "Failed to create TIS %u.", i); + return -rte_errno; + } + } + priv->null_mr = mlx5_glue->alloc_null_mr(priv->cdev->pd); + if (!priv->null_mr) { + DRV_LOG(ERR, "Failed to allocate null MR."); + rte_errno = errno; + return -rte_errno; + } + DRV_LOG(DEBUG, "Dump fill Mkey = %u.", priv->null_mr->lkey); +#ifdef HAVE_MLX5DV_DR + priv->steer.domain = mlx5_glue->dr_create_domain(ctx, + MLX5DV_DR_DOMAIN_TYPE_NIC_RX); + if (!priv->steer.domain) { + DRV_LOG(ERR, "Failed to create Rx domain."); + rte_errno = errno; + return -rte_errno; + } +#endif + priv->steer.tbl = mlx5_glue->dr_create_flow_tbl(priv->steer.domain, 0); + if (!priv->steer.tbl) { + DRV_LOG(ERR, "Failed to create table 0 with Rx domain."); + rte_errno = errno; + return -rte_errno; + } + if (mlx5_vdpa_err_event_setup(priv) != 0) + return -rte_errno; + if (mlx5_vdpa_event_qp_global_prepare(priv)) + return -rte_errno; + return 0; +} + static int mlx5_vdpa_dev_probe(struct mlx5_common_device *cdev, struct mlx5_kvargs_ctrl *mkvlist) { struct mlx5_vdpa_priv *priv = NULL; struct mlx5_hca_attr *attr = &cdev->config.hca_attr; - int retry; if (!attr->vdpa.valid || !attr->vdpa.max_num_virtio_queues) { DRV_LOG(ERR, "Not enough capabilities to support vdpa, maybe " @@ -537,25 +615,10 @@ mlx5_vdpa_dev_probe(struct mlx5_common_device *cdev, priv->num_lag_ports = attr->num_lag_ports; if (attr->num_lag_ports == 0) priv->num_lag_ports = 1; + pthread_mutex_init(&priv->vq_config_lock, NULL); priv->cdev = cdev; - for (retry = 0; retry < 7; retry++) { - priv->var = mlx5_glue->dv_alloc_var(priv->cdev->ctx, 0); - if (priv->var != NULL) - break; - DRV_LOG(WARNING, "Failed to allocate VAR, retry %d.\n", retry); - /* Wait Qemu release VAR during vdpa restart, 0.1 sec based. 
*/ - usleep(100000U << retry); - } - if (!priv->var) { - DRV_LOG(ERR, "Failed to allocate VAR %u.", errno); + if (mlx5_vdpa_create_dev_resources(priv)) goto error; - } - priv->err_intr_handle = - rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED); - if (priv->err_intr_handle == NULL) { - DRV_LOG(ERR, "Fail to allocate intr_handle"); - goto error; - } priv->vdev = rte_vdpa_register_device(cdev->dev, &mlx5_vdpa_ops); if (priv->vdev == NULL) { DRV_LOG(ERR, "Failed to register vDPA device."); @@ -564,19 +627,13 @@ mlx5_vdpa_dev_probe(struct mlx5_common_device *cdev, } mlx5_vdpa_config_get(mkvlist, priv); SLIST_INIT(&priv->mr_list); - pthread_mutex_init(&priv->vq_config_lock, NULL); pthread_mutex_lock(&priv_list_lock); TAILQ_INSERT_TAIL(&priv_list, priv, next); pthread_mutex_unlock(&priv_list_lock); return 0; - error: - if (priv) { - if (priv->var) - mlx5_glue->dv_free_var(priv->var); - rte_intr_instance_free(priv->err_intr_handle); - rte_free(priv); - } + if (priv) + mlx5_vdpa_dev_release(priv); return -rte_errno; } @@ -596,22 +653,48 @@ mlx5_vdpa_dev_remove(struct mlx5_common_device *cdev) if (found) TAILQ_REMOVE(&priv_list, priv, next); pthread_mutex_unlock(&priv_list_lock); - if (found) { - if (priv->state == MLX5_VDPA_STATE_CONFIGURED) - mlx5_vdpa_dev_close(priv->vid); - if (priv->var) { - mlx5_glue->dv_free_var(priv->var); - priv->var = NULL; - } - if (priv->vdev) - rte_vdpa_unregister_device(priv->vdev); - pthread_mutex_destroy(&priv->vq_config_lock); - rte_intr_instance_free(priv->err_intr_handle); - rte_free(priv); - } + if (found) + mlx5_vdpa_dev_release(priv); return 0; } +static void +mlx5_vdpa_release_dev_resources(struct mlx5_vdpa_priv *priv) +{ + uint32_t i; + + mlx5_vdpa_event_qp_global_release(priv); + mlx5_vdpa_err_event_unset(priv); + if (priv->steer.tbl) + claim_zero(mlx5_glue->dr_destroy_flow_tbl(priv->steer.tbl)); + if (priv->steer.domain) + claim_zero(mlx5_glue->dr_destroy_domain(priv->steer.domain)); + if (priv->null_mr) + 
claim_zero(mlx5_glue->dereg_mr(priv->null_mr)); + for (i = 0; i < priv->num_lag_ports; i++) { + if (priv->tiss[i]) + claim_zero(mlx5_devx_cmd_destroy(priv->tiss[i])); + } + if (priv->td) + claim_zero(mlx5_devx_cmd_destroy(priv->td)); + if (priv->virtq_db_addr) + claim_zero(munmap(priv->virtq_db_addr, priv->var->length)); + if (priv->var) + mlx5_glue->dv_free_var(priv->var); +} + +static void +mlx5_vdpa_dev_release(struct mlx5_vdpa_priv *priv) +{ + if (priv->state == MLX5_VDPA_STATE_CONFIGURED) + mlx5_vdpa_dev_close(priv->vid); + mlx5_vdpa_release_dev_resources(priv); + if (priv->vdev) + rte_vdpa_unregister_device(priv->vdev); + pthread_mutex_destroy(&priv->vq_config_lock); + rte_free(priv); +} + static const struct rte_pci_id mlx5_vdpa_pci_id_map[] = { { RTE_PCI_DEVICE(PCI_VENDOR_ID_MELLANOX, diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h index cc83d7cba3d..e0ba20b953c 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.h +++ b/drivers/vdpa/mlx5/mlx5_vdpa.h @@ -233,6 +233,15 @@ int mlx5_vdpa_event_qp_create(struct mlx5_vdpa_priv *priv, uint16_t desc_n, */ void mlx5_vdpa_event_qp_destroy(struct mlx5_vdpa_event_qp *eqp); +/** + * Create all the event global resources. + * + * @param[in] priv + * The vdpa driver private structure. + */ +int +mlx5_vdpa_event_qp_global_prepare(struct mlx5_vdpa_priv *priv); + /** * Release all the event global resources. 
* diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c index f8d910b33f8..7167a98db0f 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c @@ -40,11 +40,9 @@ mlx5_vdpa_event_qp_global_release(struct mlx5_vdpa_priv *priv) } /* Prepare all the global resources for all the event objects.*/ -static int +int mlx5_vdpa_event_qp_global_prepare(struct mlx5_vdpa_priv *priv) { - if (priv->eventc) - return 0; priv->eventc = mlx5_os_devx_create_event_channel(priv->cdev->ctx, MLX5DV_DEVX_CREATE_EVENT_CHANNEL_FLAGS_OMIT_EV_DATA); if (!priv->eventc) { @@ -389,22 +387,30 @@ mlx5_vdpa_err_event_setup(struct mlx5_vdpa_priv *priv) flags = fcntl(priv->err_chnl->fd, F_GETFL); ret = fcntl(priv->err_chnl->fd, F_SETFL, flags | O_NONBLOCK); if (ret) { + rte_errno = errno; DRV_LOG(ERR, "Failed to change device event channel FD."); goto error; } - + priv->err_intr_handle = + rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED); + if (priv->err_intr_handle == NULL) { + DRV_LOG(ERR, "Fail to allocate intr_handle"); + goto error; + } if (rte_intr_fd_set(priv->err_intr_handle, priv->err_chnl->fd)) goto error; if (rte_intr_type_set(priv->err_intr_handle, RTE_INTR_HANDLE_EXT)) goto error; - if (rte_intr_callback_register(priv->err_intr_handle, - mlx5_vdpa_err_interrupt_handler, - priv)) { + ret = rte_intr_callback_register(priv->err_intr_handle, + mlx5_vdpa_err_interrupt_handler, + priv); + if (ret != 0) { rte_intr_fd_set(priv->err_intr_handle, 0); DRV_LOG(ERR, "Failed to register error interrupt for device %d.", priv->vid); + rte_errno = -ret; goto error; } else { DRV_LOG(DEBUG, "Registered error interrupt for device%d.", @@ -453,6 +459,7 @@ mlx5_vdpa_err_event_unset(struct mlx5_vdpa_priv *priv) mlx5_glue->devx_destroy_event_channel(priv->err_chnl); priv->err_chnl = NULL; } + rte_intr_instance_free(priv->err_intr_handle); } int @@ -575,8 +582,6 @@ mlx5_vdpa_event_qp_create(struct mlx5_vdpa_priv *priv, uint16_t desc_n, uint16_t 
log_desc_n = rte_log2_u32(desc_n); uint32_t ret; - if (mlx5_vdpa_event_qp_global_prepare(priv)) - return -1; if (mlx5_vdpa_cq_create(priv, log_desc_n, callfd, &eqp->cq)) return -1; attr.pd = priv->cdev->pdn; diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_mem.c b/drivers/vdpa/mlx5/mlx5_vdpa_mem.c index 599079500b0..62f5530e91d 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_mem.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_mem.c @@ -34,10 +34,6 @@ mlx5_vdpa_mem_dereg(struct mlx5_vdpa_priv *priv) SLIST_INIT(&priv->mr_list); if (priv->lm_mr.addr) mlx5_os_wrapped_mkey_destroy(&priv->lm_mr); - if (priv->null_mr) { - claim_zero(mlx5_glue->dereg_mr(priv->null_mr)); - priv->null_mr = NULL; - } if (priv->vmem) { free(priv->vmem); priv->vmem = NULL; @@ -196,13 +192,6 @@ mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv) if (!mem) return -rte_errno; priv->vmem = mem; - priv->null_mr = mlx5_glue->alloc_null_mr(priv->cdev->pd); - if (!priv->null_mr) { - DRV_LOG(ERR, "Failed to allocate null MR."); - ret = -errno; - goto error; - } - DRV_LOG(DEBUG, "Dump fill Mkey = %u.", priv->null_mr->lkey); for (i = 0; i < mem->nregions; i++) { reg = &mem->regions[i]; entry = rte_zmalloc(__func__, sizeof(*entry), 0); diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_steer.c b/drivers/vdpa/mlx5/mlx5_vdpa_steer.c index a0fd2776e57..d4b4375c886 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_steer.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_steer.c @@ -45,14 +45,6 @@ void mlx5_vdpa_steer_unset(struct mlx5_vdpa_priv *priv) { mlx5_vdpa_rss_flows_destroy(priv); - if (priv->steer.tbl) { - claim_zero(mlx5_glue->dr_destroy_flow_tbl(priv->steer.tbl)); - priv->steer.tbl = NULL; - } - if (priv->steer.domain) { - claim_zero(mlx5_glue->dr_destroy_domain(priv->steer.domain)); - priv->steer.domain = NULL; - } if (priv->steer.rqt) { claim_zero(mlx5_devx_cmd_destroy(priv->steer.rqt)); priv->steer.rqt = NULL; @@ -248,11 +240,7 @@ mlx5_vdpa_steer_update(struct mlx5_vdpa_priv *priv) int ret = mlx5_vdpa_rqt_prepare(priv); if (ret == 0) { - 
mlx5_vdpa_rss_flows_destroy(priv); - if (priv->steer.rqt) { - claim_zero(mlx5_devx_cmd_destroy(priv->steer.rqt)); - priv->steer.rqt = NULL; - } + mlx5_vdpa_steer_unset(priv); } else if (ret < 0) { return ret; } else if (!priv->steer.rss[0].flow) { @@ -268,26 +256,10 @@ mlx5_vdpa_steer_update(struct mlx5_vdpa_priv *priv) int mlx5_vdpa_steer_setup(struct mlx5_vdpa_priv *priv) { -#ifdef HAVE_MLX5DV_DR - priv->steer.domain = mlx5_glue->dr_create_domain(priv->cdev->ctx, - MLX5DV_DR_DOMAIN_TYPE_NIC_RX); - if (!priv->steer.domain) { - DRV_LOG(ERR, "Failed to create Rx domain."); - goto error; - } - priv->steer.tbl = mlx5_glue->dr_create_flow_tbl(priv->steer.domain, 0); - if (!priv->steer.tbl) { - DRV_LOG(ERR, "Failed to create table 0 with Rx domain."); - goto error; - } if (mlx5_vdpa_steer_update(priv)) goto error; return 0; error: mlx5_vdpa_steer_unset(priv); return -1; -#else - (void)priv; - return -ENOTSUP; -#endif /* HAVE_MLX5DV_DR */ } diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c index 4c34983da41..5ab63930ce8 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c @@ -3,7 +3,6 @@ */ #include #include -#include #include #include @@ -120,20 +119,6 @@ mlx5_vdpa_virtqs_release(struct mlx5_vdpa_priv *priv) if (virtq->counters) claim_zero(mlx5_devx_cmd_destroy(virtq->counters)); } - for (i = 0; i < priv->num_lag_ports; i++) { - if (priv->tiss[i]) { - claim_zero(mlx5_devx_cmd_destroy(priv->tiss[i])); - priv->tiss[i] = NULL; - } - } - if (priv->td) { - claim_zero(mlx5_devx_cmd_destroy(priv->td)); - priv->td = NULL; - } - if (priv->virtq_db_addr) { - claim_zero(munmap(priv->virtq_db_addr, priv->var->length)); - priv->virtq_db_addr = NULL; - } priv->features = 0; memset(priv->virtqs, 0, sizeof(*virtq) * priv->nr_virtqs); priv->nr_virtqs = 0; @@ -462,8 +447,6 @@ mlx5_vdpa_features_validate(struct mlx5_vdpa_priv *priv) int mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv) { - struct 
mlx5_devx_tis_attr tis_attr = {0}; - struct ibv_context *ctx = priv->cdev->ctx; uint32_t i; uint16_t nr_vring = rte_vhost_get_vring_num(priv->vid); int ret = rte_vhost_get_negotiated_features(priv->vid, &priv->features); @@ -485,33 +468,6 @@ mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv) (int)nr_vring); return -1; } - /* Always map the entire page. */ - priv->virtq_db_addr = mmap(NULL, priv->var->length, PROT_READ | - PROT_WRITE, MAP_SHARED, ctx->cmd_fd, - priv->var->mmap_off); - if (priv->virtq_db_addr == MAP_FAILED) { - DRV_LOG(ERR, "Failed to map doorbell page %u.", errno); - priv->virtq_db_addr = NULL; - goto error; - } else { - DRV_LOG(DEBUG, "VAR address of doorbell mapping is %p.", - priv->virtq_db_addr); - } - priv->td = mlx5_devx_cmd_create_td(ctx); - if (!priv->td) { - DRV_LOG(ERR, "Failed to create transport domain."); - return -rte_errno; - } - tis_attr.transport_domain = priv->td->id; - for (i = 0; i < priv->num_lag_ports; i++) { - /* 0 is auto affinity, non-zero value to propose port. 
*/ - tis_attr.lag_tx_port_affinity = i + 1; - priv->tiss[i] = mlx5_devx_cmd_create_tis(ctx, &tis_attr); - if (!priv->tiss[i]) { - DRV_LOG(ERR, "Failed to create TIS %u.", i); - goto error; - } - } priv->nr_virtqs = nr_vring; for (i = 0; i < nr_vring; i++) if (priv->virtqs[i].enable && mlx5_vdpa_virtq_setup(priv, i))

From patchwork Sun May 8 14:25:52 2022
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 110906
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Xueming Li
To: , Maxime Coquelin
CC:
Subject: [PATCH v3 5/7] vdpa/mlx5: cache and reuse hardware resources
Date: Sun, 8 May 2022 17:25:52 +0300
Message-ID: <20220508142554.560354-6-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20220508142554.560354-1-xuemingl@nvidia.com>
References: <20220224132820.1939650-1-xuemingl@nvidia.com> <20220508142554.560354-1-xuemingl@nvidia.com>
Resources normally do not change across device suspend and resume. When huge resources are allocated to the VM, such as a large memory size or many queues, the time spent releasing and recreating them becomes significant. To speed this up, this patch reuses resources such as the VM MR and virtq memory when they are unchanged.

Signed-off-by: Xueming Li
Reviewed-by: Maxime Coquelin
---
 drivers/vdpa/mlx5/mlx5_vdpa.c       | 11 ++++-
 drivers/vdpa/mlx5/mlx5_vdpa.h       | 12 ++++-
 drivers/vdpa/mlx5/mlx5_vdpa_mem.c   | 27 ++++++++++-
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 73 +++++++++++++++++++++--------
 4 files changed, 99 insertions(+), 24 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c index 4408aeccfbd..fb5d9276621 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa.c @@ -241,6 +241,13 @@ mlx5_vdpa_mtu_set(struct mlx5_vdpa_priv *priv) return kern_mtu == vhost_mtu ? 0 : -1; } +static void +mlx5_vdpa_dev_cache_clean(struct mlx5_vdpa_priv *priv) +{ + mlx5_vdpa_virtqs_cleanup(priv); + mlx5_vdpa_mem_dereg(priv); +} + static int mlx5_vdpa_dev_close(int vid) { @@ -260,7 +267,8 @@ mlx5_vdpa_dev_close(int vid) } mlx5_vdpa_steer_unset(priv); mlx5_vdpa_virtqs_release(priv); - mlx5_vdpa_mem_dereg(priv); + if (priv->lm_mr.addr) + mlx5_os_wrapped_mkey_destroy(&priv->lm_mr); priv->state = MLX5_VDPA_STATE_PROBED; priv->vid = 0; /* The mutex may stay locked after event thread cancel - initiate it.
*/ @@ -663,6 +671,7 @@ mlx5_vdpa_release_dev_resources(struct mlx5_vdpa_priv *priv) { uint32_t i; + mlx5_vdpa_dev_cache_clean(priv); mlx5_vdpa_event_qp_global_release(priv); mlx5_vdpa_err_event_unset(priv); if (priv->steer.tbl) diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h index e0ba20b953c..540bf87a352 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.h +++ b/drivers/vdpa/mlx5/mlx5_vdpa.h @@ -289,13 +289,21 @@ int mlx5_vdpa_err_event_setup(struct mlx5_vdpa_priv *priv); void mlx5_vdpa_err_event_unset(struct mlx5_vdpa_priv *priv); /** - * Release a virtq and all its related resources. + * Release virtqs and resources except that to be reused. * * @param[in] priv * The vdpa driver private structure. */ void mlx5_vdpa_virtqs_release(struct mlx5_vdpa_priv *priv); +/** + * Cleanup cached resources of all virtqs. + * + * @param[in] priv + * The vdpa driver private structure. + */ +void mlx5_vdpa_virtqs_cleanup(struct mlx5_vdpa_priv *priv); + /** * Create all the HW virtqs resources and all their related resources. * @@ -323,7 +331,7 @@ int mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv); int mlx5_vdpa_virtq_enable(struct mlx5_vdpa_priv *priv, int index, int enable); /** - * Unset steering and release all its related resources- stop traffic. + * Unset steering - stop traffic. * * @param[in] priv * The vdpa driver private structure. 
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_mem.c b/drivers/vdpa/mlx5/mlx5_vdpa_mem.c index 62f5530e91d..d6e3dd664b5 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_mem.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_mem.c @@ -32,8 +32,6 @@ mlx5_vdpa_mem_dereg(struct mlx5_vdpa_priv *priv) entry = next; } SLIST_INIT(&priv->mr_list); - if (priv->lm_mr.addr) - mlx5_os_wrapped_mkey_destroy(&priv->lm_mr); if (priv->vmem) { free(priv->vmem); priv->vmem = NULL; @@ -149,6 +147,23 @@ mlx5_vdpa_vhost_mem_regions_prepare(int vid, uint8_t *mode, uint64_t *mem_size, return mem; } +static int +mlx5_vdpa_mem_cmp(struct rte_vhost_memory *mem1, struct rte_vhost_memory *mem2) +{ + uint32_t i; + + if (mem1->nregions != mem2->nregions) + return -1; + for (i = 0; i < mem1->nregions; i++) { + if (mem1->regions[i].guest_phys_addr != + mem2->regions[i].guest_phys_addr) + return -1; + if (mem1->regions[i].size != mem2->regions[i].size) + return -1; + } + return 0; +} + #define KLM_SIZE_MAX_ALIGN(sz) ((sz) > MLX5_MAX_KLM_BYTE_COUNT ? \ MLX5_MAX_KLM_BYTE_COUNT : (sz)) @@ -191,6 +206,14 @@ mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv) if (!mem) return -rte_errno; + if (priv->vmem != NULL) { + if (mlx5_vdpa_mem_cmp(mem, priv->vmem) == 0) { + /* VM memory not changed, reuse resources. */ + free(mem); + return 0; + } + mlx5_vdpa_mem_dereg(priv); + } priv->vmem = mem; for (i = 0; i < mem->nregions; i++) { reg = &mem->regions[i]; diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c index 5ab63930ce8..0dfeb8fce24 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c @@ -66,10 +66,33 @@ mlx5_vdpa_virtq_kick_handler(void *cb_arg) DRV_LOG(DEBUG, "Ring virtq %u doorbell.", virtq->index); } +/* Release cached VQ resources. 
 */
+void
+mlx5_vdpa_virtqs_cleanup(struct mlx5_vdpa_priv *priv)
+{
+	unsigned int i, j;
+
+	for (i = 0; i < priv->caps.max_num_virtio_queues * 2; i++) {
+		struct mlx5_vdpa_virtq *virtq = &priv->virtqs[i];
+
+		for (j = 0; j < RTE_DIM(virtq->umems); ++j) {
+			if (virtq->umems[j].obj) {
+				claim_zero(mlx5_glue->devx_umem_dereg
+					(virtq->umems[j].obj));
+				virtq->umems[j].obj = NULL;
+			}
+			if (virtq->umems[j].buf) {
+				rte_free(virtq->umems[j].buf);
+				virtq->umems[j].buf = NULL;
+			}
+			virtq->umems[j].size = 0;
+		}
+	}
+}
+
 static int
 mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq)
 {
-	unsigned int i;
	int ret = -EAGAIN;

	if (rte_intr_fd_get(virtq->intr_handle) >= 0) {
@@ -94,13 +117,6 @@ mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq)
		claim_zero(mlx5_devx_cmd_destroy(virtq->virtq));
	}
	virtq->virtq = NULL;
-	for (i = 0; i < RTE_DIM(virtq->umems); ++i) {
-		if (virtq->umems[i].obj)
-			claim_zero(mlx5_glue->devx_umem_dereg
-					(virtq->umems[i].obj));
-		rte_free(virtq->umems[i].buf);
-	}
-	memset(&virtq->umems, 0, sizeof(virtq->umems));
	if (virtq->eqp.fw_qp)
		mlx5_vdpa_event_qp_destroy(&virtq->eqp);
	virtq->notifier_state = MLX5_VDPA_NOTIFIER_STATE_DISABLED;
@@ -120,7 +136,6 @@ mlx5_vdpa_virtqs_release(struct mlx5_vdpa_priv *priv)
		claim_zero(mlx5_devx_cmd_destroy(virtq->counters));
	}
	priv->features = 0;
-	memset(priv->virtqs, 0, sizeof(*virtq) * priv->nr_virtqs);
	priv->nr_virtqs = 0;
 }

@@ -215,6 +230,8 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
	ret = rte_vhost_get_vhost_vring(priv->vid, index, &vq);
	if (ret)
		return -1;
+	if (vq.size == 0)
+		return 0;
	virtq->index = index;
	virtq->vq_size = vq.size;
	attr.tso_ipv4 = !!(priv->features & (1ULL << VIRTIO_NET_F_HOST_TSO4));
@@ -259,24 +276,42 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
	}
	/* Setup 3 UMEMs for each virtq.
 */
	for (i = 0; i < RTE_DIM(virtq->umems); ++i) {
-		virtq->umems[i].size = priv->caps.umems[i].a * vq.size +
-							priv->caps.umems[i].b;
-		virtq->umems[i].buf = rte_zmalloc(__func__,
-				virtq->umems[i].size, 4096);
-		if (!virtq->umems[i].buf) {
+		uint32_t size;
+		void *buf;
+		struct mlx5dv_devx_umem *obj;
+
+		size = priv->caps.umems[i].a * vq.size + priv->caps.umems[i].b;
+		if (virtq->umems[i].size == size &&
+		    virtq->umems[i].obj != NULL) {
+			/* Reuse registered memory. */
+			memset(virtq->umems[i].buf, 0, size);
+			goto reuse;
+		}
+		if (virtq->umems[i].obj)
+			claim_zero(mlx5_glue->devx_umem_dereg
+					(virtq->umems[i].obj));
+		if (virtq->umems[i].buf)
+			rte_free(virtq->umems[i].buf);
+		virtq->umems[i].size = 0;
+		virtq->umems[i].obj = NULL;
+		virtq->umems[i].buf = NULL;
+		buf = rte_zmalloc(__func__, size, 4096);
+		if (buf == NULL) {
			DRV_LOG(ERR, "Cannot allocate umem %d memory for virtq"
				" %u.", i, index);
			goto error;
		}
-		virtq->umems[i].obj = mlx5_glue->devx_umem_reg(priv->cdev->ctx,
-						virtq->umems[i].buf,
-						virtq->umems[i].size,
-						IBV_ACCESS_LOCAL_WRITE);
-		if (!virtq->umems[i].obj) {
+		obj = mlx5_glue->devx_umem_reg(priv->cdev->ctx, buf, size,
+				IBV_ACCESS_LOCAL_WRITE);
+		if (obj == NULL) {
			DRV_LOG(ERR, "Failed to register umem %d for virtq %u.",
				i, index);
			goto error;
		}
+		virtq->umems[i].size = size;
+		virtq->umems[i].buf = buf;
+		virtq->umems[i].obj = obj;
+reuse:
		attr.umems[i].id = virtq->umems[i].obj->umem_id;
		attr.umems[i].offset = 0;
		attr.umems[i].size = virtq->umems[i].size;

From patchwork Sun May 8 14:25:53 2022
From: Xueming Li
To: Maxime Coquelin
Cc: dev@dpdk.org
Subject: [PATCH v3 6/7] vdpa/mlx5: support device cleanup callback
Date: Sun, 8 May 2022 17:25:53 +0300
Message-ID: <20220508142554.560354-7-xuemingl@nvidia.com>
In-Reply-To: <20220508142554.560354-1-xuemingl@nvidia.com>
References: <20220224132820.1939650-1-xuemingl@nvidia.com>
 <20220508142554.560354-1-xuemingl@nvidia.com>
List-Id: DPDK patches and discussions

This patch supports the device cleanup callback API, which is called when
the device is disconnected from the VM. Cached resources such as the VM MR
and VQ memory are released.
Signed-off-by: Xueming Li
Reviewed-by: Maxime Coquelin
---
 drivers/vdpa/mlx5/mlx5_vdpa.c | 23 +++++++++++++++++++++++
 drivers/vdpa/mlx5/mlx5_vdpa.h |  1 +
 2 files changed, 24 insertions(+)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index fb5d9276621..b1d5487080d 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -270,6 +270,8 @@ mlx5_vdpa_dev_close(int vid)
	if (priv->lm_mr.addr)
		mlx5_os_wrapped_mkey_destroy(&priv->lm_mr);
	priv->state = MLX5_VDPA_STATE_PROBED;
+	if (!priv->connected)
+		mlx5_vdpa_dev_cache_clean(priv);
	priv->vid = 0;
	/* The mutex may stay locked after event thread cancel - initiate it. */
	pthread_mutex_init(&priv->vq_config_lock, NULL);
@@ -294,6 +296,7 @@ mlx5_vdpa_dev_config(int vid)
		return -1;
	}
	priv->vid = vid;
+	priv->connected = true;
	if (mlx5_vdpa_mtu_set(priv))
		DRV_LOG(WARNING, "MTU cannot be set on device %s.",
				vdev->device->name);
@@ -431,12 +434,32 @@ mlx5_vdpa_reset_stats(struct rte_vdpa_device *vdev, int qid)
	return mlx5_vdpa_virtq_stats_reset(priv, qid);
 }

+static int
+mlx5_vdpa_dev_cleanup(int vid)
+{
+	struct rte_vdpa_device *vdev = rte_vhost_get_vdpa_device(vid);
+	struct mlx5_vdpa_priv *priv;
+
+	if (vdev == NULL)
+		return -1;
+	priv = mlx5_vdpa_find_priv_resource_by_vdev(vdev);
+	if (priv == NULL) {
+		DRV_LOG(ERR, "Invalid vDPA device: %s.", vdev->device->name);
+		return -1;
+	}
+	if (priv->state == MLX5_VDPA_STATE_PROBED)
+		mlx5_vdpa_dev_cache_clean(priv);
+	priv->connected = false;
+	return 0;
+}
+
 static struct rte_vdpa_dev_ops mlx5_vdpa_ops = {
	.get_queue_num = mlx5_vdpa_get_queue_num,
	.get_features = mlx5_vdpa_get_vdpa_features,
	.get_protocol_features = mlx5_vdpa_get_protocol_features,
	.dev_conf = mlx5_vdpa_dev_config,
	.dev_close = mlx5_vdpa_dev_close,
+	.dev_cleanup = mlx5_vdpa_dev_cleanup,
	.set_vring_state = mlx5_vdpa_set_vring_state,
	.set_features = mlx5_vdpa_features_set,
	.migration_done = NULL,
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index 540bf87a352..24bafe85b44 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -121,6 +121,7 @@ enum mlx5_dev_state {

 struct mlx5_vdpa_priv {
	TAILQ_ENTRY(mlx5_vdpa_priv) next;
+	bool connected;
	enum mlx5_dev_state state;
	pthread_mutex_t vq_config_lock;
	uint64_t no_traffic_counter;

From patchwork Sun May 8 14:25:54 2022
From: Xueming Li
To: Maxime Coquelin
Cc: dev@dpdk.org
Subject: [PATCH v3 7/7] vdpa/mlx5: make statistics counter persistent
Date: Sun, 8 May 2022 17:25:54 +0300
Message-ID: <20220508142554.560354-8-xuemingl@nvidia.com>
In-Reply-To: <20220508142554.560354-1-xuemingl@nvidia.com>
References: <20220224132820.1939650-1-xuemingl@nvidia.com>
 <20220508142554.560354-1-xuemingl@nvidia.com>
List-Id: DPDK patches and discussions

To speed up device suspend and resume, make the statistics counters
persistent across reconfiguration until the device gets removed.

Signed-off-by: Xueming Li
Reviewed-by: Maxime Coquelin
---
 doc/guides/vdpadevs/mlx5.rst        |  6 ++++++
 drivers/vdpa/mlx5/mlx5_vdpa.c       | 19 +++++++----------
 drivers/vdpa/mlx5/mlx5_vdpa.h       |  1 +
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 32 +++++++++++------------------
 4 files changed, 26 insertions(+), 32 deletions(-)

diff --git a/doc/guides/vdpadevs/mlx5.rst b/doc/guides/vdpadevs/mlx5.rst
index acb791032ad..3ded142311e 100644
--- a/doc/guides/vdpadevs/mlx5.rst
+++ b/doc/guides/vdpadevs/mlx5.rst
@@ -109,3 +109,9 @@ Upon potential hardware errors, mlx5 PMD try to recover, give up if failed 3
 times in 3 seconds, virtq will be put in disable state. User should check log
 to get error information, or query vdpa statistics counter to know error type
 and count report.
+
+Statistics
+^^^^^^^^^^
+
+The device statistics counter persists in reconfiguration until the device gets
+removed. User can reset counters by calling function rte_vdpa_reset_stats().
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index b1d5487080d..76fa5d4299e 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -388,12 +388,7 @@ mlx5_vdpa_get_stats(struct rte_vdpa_device *vdev, int qid,
		DRV_LOG(ERR, "Invalid device: %s.", vdev->device->name);
		return -ENODEV;
	}
-	if (priv->state == MLX5_VDPA_STATE_PROBED) {
-		DRV_LOG(ERR, "Device %s was not configured.",
-				vdev->device->name);
-		return -ENODATA;
-	}
-	if (qid >= (int)priv->nr_virtqs) {
+	if (qid >= (int)priv->caps.max_num_virtio_queues * 2) {
		DRV_LOG(ERR, "Too big vring id: %d for device %s.", qid,
			vdev->device->name);
		return -E2BIG;
@@ -416,12 +411,7 @@ mlx5_vdpa_reset_stats(struct rte_vdpa_device *vdev, int qid)
		DRV_LOG(ERR, "Invalid device: %s.", vdev->device->name);
		return -ENODEV;
	}
-	if (priv->state == MLX5_VDPA_STATE_PROBED) {
-		DRV_LOG(ERR, "Device %s was not configured.",
-				vdev->device->name);
-		return -ENODATA;
-	}
-	if (qid >= (int)priv->nr_virtqs) {
+	if (qid >= (int)priv->caps.max_num_virtio_queues * 2) {
		DRV_LOG(ERR, "Too big vring id: %d for device %s.", qid,
			vdev->device->name);
		return -E2BIG;
@@ -695,6 +685,11 @@ mlx5_vdpa_release_dev_resources(struct mlx5_vdpa_priv *priv)
	uint32_t i;

	mlx5_vdpa_dev_cache_clean(priv);
+	for (i = 0; i < priv->caps.max_num_virtio_queues * 2; i++) {
+		if (!priv->virtqs[i].counters)
+			continue;
+		claim_zero(mlx5_devx_cmd_destroy(priv->virtqs[i].counters));
+	}
	mlx5_vdpa_event_qp_global_release(priv);
	mlx5_vdpa_err_event_unset(priv);
	if (priv->steer.tbl)
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index 24bafe85b44..e7f3319f896 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -92,6 +92,7 @@ struct mlx5_vdpa_virtq {
	struct rte_intr_handle *intr_handle;
	uint64_t err_time[3]; /* RDTSC time of recent errors. */
	uint32_t n_retry;
+	struct mlx5_devx_virtio_q_couners_attr stats;
	struct mlx5_devx_virtio_q_couners_attr reset;
 };

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index 0dfeb8fce24..e025be47d27 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -127,14 +127,9 @@ void
 mlx5_vdpa_virtqs_release(struct mlx5_vdpa_priv *priv)
 {
	int i;
-	struct mlx5_vdpa_virtq *virtq;

-	for (i = 0; i < priv->nr_virtqs; i++) {
-		virtq = &priv->virtqs[i];
-		mlx5_vdpa_virtq_unset(virtq);
-		if (virtq->counters)
-			claim_zero(mlx5_devx_cmd_destroy(virtq->counters));
-	}
+	for (i = 0; i < priv->nr_virtqs; i++)
+		mlx5_vdpa_virtq_unset(&priv->virtqs[i]);
	priv->features = 0;
	priv->nr_virtqs = 0;
 }

@@ -590,7 +585,7 @@ mlx5_vdpa_virtq_stats_get(struct mlx5_vdpa_priv *priv, int qid,
			  struct rte_vdpa_stat *stats, unsigned int n)
 {
	struct mlx5_vdpa_virtq *virtq = &priv->virtqs[qid];
-	struct mlx5_devx_virtio_q_couners_attr attr = {0};
+	struct mlx5_devx_virtio_q_couners_attr *attr = &virtq->stats;
	int ret;

	if (!virtq->counters) {
@@ -598,7 +593,7 @@ mlx5_vdpa_virtq_stats_get(struct mlx5_vdpa_priv *priv, int qid,
			"is invalid.", qid);
		return -EINVAL;
	}
-	ret = mlx5_devx_cmd_query_virtio_q_counters(virtq->counters, &attr);
+	ret = mlx5_devx_cmd_query_virtio_q_counters(virtq->counters, attr);
	if (ret) {
		DRV_LOG(ERR, "Failed to read virtq %d stats from HW.", qid);
		return ret;
@@ -608,37 +603,37 @@ mlx5_vdpa_virtq_stats_get(struct mlx5_vdpa_priv *priv, int qid,
		return ret;
	stats[MLX5_VDPA_STATS_RECEIVED_DESCRIPTORS] = (struct rte_vdpa_stat) {
		.id = MLX5_VDPA_STATS_RECEIVED_DESCRIPTORS,
-		.value = attr.received_desc - virtq->reset.received_desc,
+		.value = attr->received_desc - virtq->reset.received_desc,
	};
	if (ret == MLX5_VDPA_STATS_COMPLETED_DESCRIPTORS)
		return ret;
	stats[MLX5_VDPA_STATS_COMPLETED_DESCRIPTORS] = (struct rte_vdpa_stat) {
		.id = MLX5_VDPA_STATS_COMPLETED_DESCRIPTORS,
-		.value = attr.completed_desc - virtq->reset.completed_desc,
+		.value = attr->completed_desc - virtq->reset.completed_desc,
	};
	if (ret == MLX5_VDPA_STATS_BAD_DESCRIPTOR_ERRORS)
		return ret;
	stats[MLX5_VDPA_STATS_BAD_DESCRIPTOR_ERRORS] = (struct rte_vdpa_stat) {
		.id = MLX5_VDPA_STATS_BAD_DESCRIPTOR_ERRORS,
-		.value = attr.bad_desc_errors - virtq->reset.bad_desc_errors,
+		.value = attr->bad_desc_errors - virtq->reset.bad_desc_errors,
	};
	if (ret == MLX5_VDPA_STATS_EXCEED_MAX_CHAIN)
		return ret;
	stats[MLX5_VDPA_STATS_EXCEED_MAX_CHAIN] = (struct rte_vdpa_stat) {
		.id = MLX5_VDPA_STATS_EXCEED_MAX_CHAIN,
-		.value = attr.exceed_max_chain - virtq->reset.exceed_max_chain,
+		.value = attr->exceed_max_chain - virtq->reset.exceed_max_chain,
	};
	if (ret == MLX5_VDPA_STATS_INVALID_BUFFER)
		return ret;
	stats[MLX5_VDPA_STATS_INVALID_BUFFER] = (struct rte_vdpa_stat) {
		.id = MLX5_VDPA_STATS_INVALID_BUFFER,
-		.value = attr.invalid_buffer - virtq->reset.invalid_buffer,
+		.value = attr->invalid_buffer - virtq->reset.invalid_buffer,
	};
	if (ret == MLX5_VDPA_STATS_COMPLETION_ERRORS)
		return ret;
	stats[MLX5_VDPA_STATS_COMPLETION_ERRORS] = (struct rte_vdpa_stat) {
		.id = MLX5_VDPA_STATS_COMPLETION_ERRORS,
-		.value = attr.error_cqes - virtq->reset.error_cqes,
+		.value = attr->error_cqes - virtq->reset.error_cqes,
	};
	return ret;
 }

@@ -649,11 +644,8 @@ mlx5_vdpa_virtq_stats_reset(struct mlx5_vdpa_priv *priv, int qid)
	struct mlx5_vdpa_virtq *virtq = &priv->virtqs[qid];
	int ret;

-	if (!virtq->counters) {
-		DRV_LOG(ERR, "Failed to read virtq %d statistics - virtq "
-			"is invalid.", qid);
-		return -EINVAL;
-	}
+	if (virtq->counters == NULL) /* VQ not enabled. */
+		return 0;
	ret = mlx5_devx_cmd_query_virtio_q_counters(virtq->counters,
						    &virtq->reset);
	if (ret)