From patchwork Thu Feb 24 13:28:14 2022
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 108263
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Xueming Li
Cc: Matan Azrad, Viacheslav Ovsiienko, Maxime Coquelin
Subject: [PATCH 1/7] vdpa/mlx5: fix interrupt trash that leads to segment fault
Date: Thu, 24 Feb 2022 21:28:14 +0800
Message-ID: <20220224132820.1939650-2-xuemingl@nvidia.com>
In-Reply-To: <20220224132820.1939650-1-xuemingl@nvidia.com>
References: <20220224132820.1939650-1-xuemingl@nvidia.com>
List-Id: DPDK patches and discussions

Disable the interrupt unregister timeout to avoid an interrupt thread
segmentation fault caused by an invalid FD: if unregistering gives up
after a bounded number of retries, the callback stays registered on an
FD that is about to become invalid.

Fixes: 62c813706e41 ("vdpa/mlx5: map doorbell")
Cc: matan@mellanox.com
Cc: stable@dpdk.org

Signed-off-by: Xueming Li
---
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 20 ++++++++------------
 1 file changed, 8 insertions(+), 12 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index 3416797d289..de324506cb9 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -17,7 +17,7 @@
 static void
-mlx5_vdpa_virtq_handler(void *cb_arg)
+mlx5_vdpa_virtq_kick_handler(void *cb_arg)
 {
 	struct mlx5_vdpa_virtq *virtq = cb_arg;
 	struct mlx5_vdpa_priv *priv = virtq->priv;
@@ -59,20 +59,16 @@ static int
 mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq)
 {
 	unsigned int i;
-	int retries = MLX5_VDPA_INTR_RETRIES;
 	int ret = -EAGAIN;
 
-	if (rte_intr_fd_get(virtq->intr_handle) != -1) {
-		while (retries-- && ret == -EAGAIN) {
+	if (rte_intr_fd_get(virtq->intr_handle) >= 0) {
+		while (ret == -EAGAIN) {
 			ret = rte_intr_callback_unregister(virtq->intr_handle,
-							mlx5_vdpa_virtq_handler,
-							virtq);
+					mlx5_vdpa_virtq_kick_handler, virtq);
 			if (ret == -EAGAIN) {
-				DRV_LOG(DEBUG, "Try again to unregister fd %d "
-					"of virtq %d interrupt, retries = %d.",
-					rte_intr_fd_get(virtq->intr_handle),
-					(int)virtq->index, retries);
-
+				DRV_LOG(DEBUG, "Try again to unregister fd %d of virtq %hu interrupt",
+					rte_intr_fd_get(virtq->intr_handle),
+					(int)virtq->index);
 				usleep(MLX5_VDPA_INTR_RETRIES_USEC);
 			}
 		}
@@ -359,7 +355,7 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
 		goto error;
 	if (rte_intr_callback_register(virtq->intr_handle,
-				       mlx5_vdpa_virtq_handler,
+				       mlx5_vdpa_virtq_kick_handler,
 				       virtq)) {
 		rte_intr_fd_set(virtq->intr_handle, -1);
 		DRV_LOG(ERR, "Failed to register virtq %d interrupt.",
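The core of the fix is replacing a bounded retry loop with one that retries until the unregister really succeeds. A minimal standalone sketch of why the bounded variant is dangerous, under assumed names (`fake_unregister` stands in for `rte_intr_callback_unregister`, which keeps returning -EAGAIN while the callback is still executing; none of this is driver code):

```c
#include <errno.h>

/* Models an interrupt handle whose callback is still in flight:
 * each unregister attempt lets the callback drain a little further. */
struct fake_handle {
	int callback_running; /* ticks down toward 0 as the callback drains */
};

/* Mimics rte_intr_callback_unregister(): -EAGAIN while the callback runs. */
static int fake_unregister(struct fake_handle *h)
{
	if (h->callback_running > 0) {
		h->callback_running--;
		return -EAGAIN;
	}
	return 0; /* safely unregistered */
}

/* Old scheme: bounded retries. May give up with -EAGAIN, leaving the
 * callback registered on an FD that is about to become invalid. */
static int unregister_bounded(struct fake_handle *h, int retries)
{
	int ret = -EAGAIN;

	while (retries-- && ret == -EAGAIN)
		ret = fake_unregister(h);
	return ret;
}

/* New scheme: retry until the unregister actually succeeds
 * (the real code sleeps MLX5_VDPA_INTR_RETRIES_USEC between tries). */
static int unregister_until_done(struct fake_handle *h)
{
	int ret = -EAGAIN;

	while (ret == -EAGAIN)
		ret = fake_unregister(h);
	return ret;
}
```

With a callback that needs five attempts to drain, the bounded variant gives up and leaks the registration, while the unbounded one terminates as soon as the callback finishes.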
From patchwork Thu Feb 24 13:28:15 2022
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 108264
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Xueming Li
Cc: Matan Azrad, Viacheslav Ovsiienko, Maxime Coquelin
Subject: [PATCH 2/7] vdpa/mlx5: fix dead loop when process interrupted
Date: Thu, 24 Feb 2022 21:28:15 +0800
Message-ID: <20220224132820.1939650-3-xuemingl@nvidia.com>
In-Reply-To: <20220224132820.1939650-1-xuemingl@nvidia.com>
References: <20220224132820.1939650-1-xuemingl@nvidia.com>
List-Id: DPDK patches and discussions

During Ctrl+C handling, the kick handling thread sometimes gets an
endless EAGAIN error and falls into a dead loop. Kicks happen
frequently on a real system due to busy traffic or the retry
mechanism. This patch bounds the read retries, kicks the firmware
anyway, and skips setting the hardware notifier on a potential device
error; the notifier can be set by the next successful kick request.

Fixes: 62c813706e41 ("vdpa/mlx5: map doorbell")
Cc: stable@dpdk.org

Signed-off-by: Xueming Li
---
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index de324506cb9..e1e05924a40 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -23,11 +23,11 @@ mlx5_vdpa_virtq_kick_handler(void *cb_arg)
 	struct mlx5_vdpa_priv *priv = virtq->priv;
 	uint64_t buf;
 	int nbytes;
+	int retry;
 
 	if (rte_intr_fd_get(virtq->intr_handle) < 0)
 		return;
-
-	do {
+	for (retry = 0; retry < 3; ++retry) {
 		nbytes = read(rte_intr_fd_get(virtq->intr_handle), &buf,
 			      8);
 		if (nbytes < 0) {
@@ -39,7 +39,9 @@ mlx5_vdpa_virtq_kick_handler(void *cb_arg)
 				virtq->index, strerror(errno));
 		}
 		break;
-	} while (1);
+	}
+	if (nbytes < 0)
+		return;
 	rte_write32(virtq->index, priv->virtq_db_addr);
 	if (virtq->notifier_state == MLX5_VDPA_NOTIFIER_STATE_DISABLED) {
 		if (rte_vhost_host_notifier_ctrl(priv->vid, virtq->index, true))
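Conversely, the kick handler itself must not retry forever. The bounded-retry read pattern the patch adopts can be sketched in isolation like this (a rough illustration with assumed names; the pluggable `read_fn` and the fake reader are ours, not the driver's API):

```c
#include <errno.h>
#include <stddef.h>

/* Pluggable reader so the pattern can be exercised without an eventfd:
 * returns bytes read, or -1 with errno set, like read(2). */
typedef int (*read_fn)(void *ctx);

/* Try the kick-FD read at most 3 times instead of looping forever.
 * A negative result tells the caller to skip the doorbell write. */
static int read_kick(read_fn rd, void *ctx)
{
	int nbytes = -1;
	int retry;

	for (retry = 0; retry < 3; ++retry) {
		nbytes = rd(ctx);
		if (nbytes < 0) {
			if (errno == EINTR || errno == EAGAIN)
				continue; /* transient: retry, but bounded */
		}
		break; /* success or hard error: stop */
	}
	return nbytes;
}

/* Fake reader: the first fail_n calls fail with EAGAIN, then succeed. */
static int attempts;
static int fail_n;

static int fake_read(void *ctx)
{
	(void)ctx;
	if (attempts++ < fail_n) {
		errno = EAGAIN;
		return -1;
	}
	return 8; /* one 8-byte kick event */
}
```

A transient EAGAIN still gets retried and succeeds, but an endless EAGAIN stream (the interrupted-process case) now makes the handler give up after three attempts instead of deadlocking.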
From patchwork Thu Feb 24 13:28:16 2022
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 108265
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Xueming Li
Cc: Matan Azrad, Viacheslav Ovsiienko
Subject: [PATCH 3/7] vdpa/mlx5: no kick handling during shutdown
Date: Thu, 24 Feb 2022 21:28:16 +0800
Message-ID: <20220224132820.1939650-4-xuemingl@nvidia.com>
In-Reply-To: <20220224132820.1939650-1-xuemingl@nvidia.com>
References: <20220224132820.1939650-1-xuemingl@nvidia.com>
List-Id: DPDK patches and discussions

When QEMU suspends a VM, the HW notifier is unmapped while the vCPU
thread may still be active and writing to the notifier through the
kick socket. In that case, the PMD kick handler thread's attempt to
install the HW notifier through the slave socket times out and slows
down device close. This patch skips HW notifier installation when the
VQ or the device is in the middle of shutdown.

Signed-off-by: Xueming Li
---
 drivers/vdpa/mlx5/mlx5_vdpa.c       | 17 ++++++++++-------
 drivers/vdpa/mlx5/mlx5_vdpa.h       |  8 +++++++-
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 12 +++++++++++-
 3 files changed, 28 insertions(+), 9 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index 749c9d097cf..48f20d9ecdb 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -252,13 +252,15 @@ mlx5_vdpa_dev_close(int vid)
 	}
 	mlx5_vdpa_err_event_unset(priv);
 	mlx5_vdpa_cqe_event_unset(priv);
-	if (priv->configured)
+	if (priv->state == MLX5_VDPA_STATE_CONFIGURED) {
 		ret |= mlx5_vdpa_lm_log(priv);
+		priv->state = MLX5_VDPA_STATE_IN_PROGRESS;
+	}
 	mlx5_vdpa_steer_unset(priv);
 	mlx5_vdpa_virtqs_release(priv);
 	mlx5_vdpa_event_qp_global_release(priv);
 	mlx5_vdpa_mem_dereg(priv);
-	priv->configured = 0;
+	priv->state = MLX5_VDPA_STATE_PROBED;
 	priv->vid = 0;
 	/* The mutex may stay locked after event thread cancel - initiate it. */
 	pthread_mutex_init(&priv->vq_config_lock, NULL);
@@ -277,7 +279,8 @@ mlx5_vdpa_dev_config(int vid)
 		DRV_LOG(ERR, "Invalid vDPA device: %s.", vdev->device->name);
 		return -EINVAL;
 	}
-	if (priv->configured && mlx5_vdpa_dev_close(vid)) {
+	if (priv->state == MLX5_VDPA_STATE_CONFIGURED &&
+	    mlx5_vdpa_dev_close(vid)) {
 		DRV_LOG(ERR, "Failed to reconfigure vid %d.", vid);
 		return -1;
 	}
@@ -291,7 +294,7 @@ mlx5_vdpa_dev_config(int vid)
 		mlx5_vdpa_dev_close(vid);
 		return -1;
 	}
-	priv->configured = 1;
+	priv->state = MLX5_VDPA_STATE_CONFIGURED;
 	DRV_LOG(INFO, "vDPA device %d was configured.", vid);
 	return 0;
 }
@@ -373,7 +376,7 @@ mlx5_vdpa_get_stats(struct rte_vdpa_device *vdev, int qid,
 		DRV_LOG(ERR, "Invalid device: %s.", vdev->device->name);
 		return -ENODEV;
 	}
-	if (!priv->configured) {
+	if (priv->state == MLX5_VDPA_STATE_PROBED) {
 		DRV_LOG(ERR, "Device %s was not configured.",
 			vdev->device->name);
 		return -ENODATA;
@@ -401,7 +404,7 @@ mlx5_vdpa_reset_stats(struct rte_vdpa_device *vdev, int qid)
 		DRV_LOG(ERR, "Invalid device: %s.", vdev->device->name);
 		return -ENODEV;
 	}
-	if (!priv->configured) {
+	if (priv->state == MLX5_VDPA_STATE_PROBED) {
 		DRV_LOG(ERR, "Device %s was not configured.",
 			vdev->device->name);
 		return -ENODATA;
@@ -594,7 +597,7 @@ mlx5_vdpa_dev_remove(struct mlx5_common_device *cdev)
 		TAILQ_REMOVE(&priv_list, priv, next);
 	pthread_mutex_unlock(&priv_list_lock);
 	if (found) {
-		if (priv->configured)
+		if (priv->state == MLX5_VDPA_STATE_CONFIGURED)
 			mlx5_vdpa_dev_close(priv->vid);
 		if (priv->var) {
 			mlx5_glue->dv_free_var(priv->var);
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index 22617924eac..cc83d7cba3d 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -113,9 +113,15 @@ enum {
 	MLX5_VDPA_EVENT_MODE_ONLY_INTERRUPT
 };
 
+enum mlx5_dev_state {
+	MLX5_VDPA_STATE_PROBED = 0,
+	MLX5_VDPA_STATE_CONFIGURED,
+	MLX5_VDPA_STATE_IN_PROGRESS /* Shutting down. */
+};
+
 struct mlx5_vdpa_priv {
 	TAILQ_ENTRY(mlx5_vdpa_priv) next;
-	uint8_t configured;
+	enum mlx5_dev_state state;
 	pthread_mutex_t vq_config_lock;
 	uint64_t no_traffic_counter;
 	pthread_t timer_tid;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index e1e05924a40..b1d584ca8b0 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -25,6 +25,11 @@ mlx5_vdpa_virtq_kick_handler(void *cb_arg)
 	int nbytes;
 	int retry;
 
+	if (priv->state != MLX5_VDPA_STATE_CONFIGURED && !virtq->enable) {
+		DRV_LOG(ERR, "device %d queue %d down, skip kick handling",
+			priv->vid, virtq->index);
+		return;
+	}
 	if (rte_intr_fd_get(virtq->intr_handle) < 0)
 		return;
 	for (retry = 0; retry < 3; ++retry) {
@@ -43,6 +48,11 @@ mlx5_vdpa_virtq_kick_handler(void *cb_arg)
 	if (nbytes < 0)
 		return;
 	rte_write32(virtq->index, priv->virtq_db_addr);
+	if (priv->state != MLX5_VDPA_STATE_CONFIGURED && !virtq->enable) {
+		DRV_LOG(ERR, "device %d queue %d down, skip kick handling",
+			priv->vid, virtq->index);
+		return;
+	}
 	if (virtq->notifier_state == MLX5_VDPA_NOTIFIER_STATE_DISABLED) {
 		if (rte_vhost_host_notifier_ctrl(priv->vid, virtq->index, true))
 			virtq->notifier_state = MLX5_VDPA_NOTIFIER_STATE_ERR;
@@ -541,7 +551,7 @@ mlx5_vdpa_virtq_enable(struct mlx5_vdpa_priv *priv, int index, int enable)
 	DRV_LOG(INFO, "Update virtq %d status %sable -> %sable.", index,
 		virtq->enable ? "en" : "dis", enable ? "en" : "dis");
-	if (!priv->configured) {
+	if (priv->state == MLX5_VDPA_STATE_PROBED) {
 		virtq->enable = !!enable;
 		return 0;
 	}
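The patch's replacement of the single `configured` byte with a three-state lifecycle enum can be sketched in isolation. The gate below mirrors the check added at the top of the kick handler; `struct dev` and `kick_allowed` are illustrative helpers of ours, not driver code:

```c
/* Three-state lifecycle in place of the old 0/1 `configured` flag
 * (state names mirror the patch). */
enum dev_state {
	STATE_PROBED = 0,	/* driver loaded, device not configured */
	STATE_CONFIGURED,	/* datapath live: kicks may touch hardware */
	STATE_IN_PROGRESS	/* shutting down: skip notifier setup */
};

struct dev {
	enum dev_state state;
	int enable;		/* per-queue enable flag */
};

/* Kick-handler gate: only touch hardware when the device is fully
 * configured, or the queue itself is still explicitly enabled.
 * With a plain boolean, "shutting down" and "not yet configured"
 * were indistinguishable and the handler could race device close. */
static int kick_allowed(const struct dev *d)
{
	return !(d->state != STATE_CONFIGURED && !d->enable);
}
```

A configured device always accepts kicks; a device mid-shutdown rejects them unless the specific queue is still enabled, which is exactly the condition the patch adds before installing the HW notifier.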
From patchwork Thu Feb 24 13:28:17 2022
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 108266
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Xueming Li
Cc: Matan Azrad, Viacheslav Ovsiienko
Subject: [PATCH 4/7] vdpa/mlx5: reuse resources in reconfiguration
Date: Thu, 24 Feb 2022 21:28:17 +0800
Message-ID: <20220224132820.1939650-5-xuemingl@nvidia.com>
In-Reply-To: <20220224132820.1939650-1-xuemingl@nvidia.com>
References: <20220224132820.1939650-1-xuemingl@nvidia.com>
List-Id: DPDK patches and discussions
X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org To speed up device resume, create reuseable resources during device probe state, release when device remove. Reused resources includes TIS, TD, VAR Doorbell mmap, error handling event channel and interrupt handler, UAR, Rx event channel, NULL MR, steer domain and table. Signed-off-by: Xueming Li --- drivers/vdpa/mlx5/mlx5_vdpa.c | 165 +++++++++++++++++++++------- drivers/vdpa/mlx5/mlx5_vdpa.h | 9 ++ drivers/vdpa/mlx5/mlx5_vdpa_event.c | 23 ++-- drivers/vdpa/mlx5/mlx5_vdpa_mem.c | 11 -- drivers/vdpa/mlx5/mlx5_vdpa_steer.c | 25 +---- drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 44 -------- 6 files changed, 147 insertions(+), 130 deletions(-) diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c index 48f20d9ecdb..7e57ae715a8 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa.c @@ -5,6 +5,7 @@ #include #include #include +#include #include #include @@ -49,6 +50,8 @@ TAILQ_HEAD(mlx5_vdpa_privs, mlx5_vdpa_priv) priv_list = TAILQ_HEAD_INITIALIZER(priv_list); static pthread_mutex_t priv_list_lock = PTHREAD_MUTEX_INITIALIZER; +static void mlx5_vdpa_dev_release(struct mlx5_vdpa_priv *priv); + static struct mlx5_vdpa_priv * mlx5_vdpa_find_priv_resource_by_vdev(struct rte_vdpa_device *vdev) { @@ -250,7 +253,6 @@ mlx5_vdpa_dev_close(int vid) DRV_LOG(ERR, "Invalid vDPA device: %s.", vdev->device->name); return -1; } - mlx5_vdpa_err_event_unset(priv); mlx5_vdpa_cqe_event_unset(priv); if (priv->state == MLX5_VDPA_STATE_CONFIGURED) { ret |= mlx5_vdpa_lm_log(priv); @@ -258,7 +260,6 @@ mlx5_vdpa_dev_close(int vid) } mlx5_vdpa_steer_unset(priv); mlx5_vdpa_virtqs_release(priv); - mlx5_vdpa_event_qp_global_release(priv); mlx5_vdpa_mem_dereg(priv); priv->state = MLX5_VDPA_STATE_PROBED; priv->vid = 0; @@ -288,7 +289,7 @@ mlx5_vdpa_dev_config(int vid) if 
 (mlx5_vdpa_mtu_set(priv))
 		DRV_LOG(WARNING, "MTU cannot be set on device %s.",
 			vdev->device->name);
-	if (mlx5_vdpa_mem_register(priv) || mlx5_vdpa_err_event_setup(priv) ||
+	if (mlx5_vdpa_mem_register(priv) || mlx5_vdpa_virtqs_prepare(priv) ||
 	    mlx5_vdpa_steer_setup(priv) || mlx5_vdpa_cqe_event_setup(priv)) {
 		mlx5_vdpa_dev_close(vid);
@@ -507,13 +508,88 @@ mlx5_vdpa_config_get(struct mlx5_kvargs_ctrl *mkvlist,
 	DRV_LOG(DEBUG, "no traffic max is %u.", priv->no_traffic_max);
 }
 
+static int
+mlx5_vdpa_create_dev_resources(struct mlx5_vdpa_priv *priv)
+{
+	struct mlx5_devx_tis_attr tis_attr = {0};
+	struct ibv_context *ctx = priv->cdev->ctx;
+	uint32_t i;
+	int retry;
+
+	for (retry = 0; retry < 7; retry++) {
+		priv->var = mlx5_glue->dv_alloc_var(ctx, 0);
+		if (priv->var != NULL)
+			break;
+		DRV_LOG(WARNING, "Failed to allocate VAR, retry %d.", retry);
+		/* Wait Qemu release VAR during vdpa restart, 0.1 sec based. */
+		usleep(100000U << retry);
+	}
+	if (!priv->var) {
+		DRV_LOG(ERR, "Failed to allocate VAR %u.", errno);
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
+	/* Always map the entire page. */
+	priv->virtq_db_addr = mmap(NULL, priv->var->length, PROT_READ |
+			PROT_WRITE, MAP_SHARED, ctx->cmd_fd,
+			priv->var->mmap_off);
+	if (priv->virtq_db_addr == MAP_FAILED) {
+		DRV_LOG(ERR, "Failed to map doorbell page %u.", errno);
+		priv->virtq_db_addr = NULL;
+		rte_errno = errno;
+		return -rte_errno;
+	}
+	DRV_LOG(DEBUG, "VAR address of doorbell mapping is %p.",
+		priv->virtq_db_addr);
+	priv->td = mlx5_devx_cmd_create_td(ctx);
+	if (!priv->td) {
+		DRV_LOG(ERR, "Failed to create transport domain.");
+		rte_errno = errno;
+		return -rte_errno;
+	}
+	tis_attr.transport_domain = priv->td->id;
+	for (i = 0; i < priv->num_lag_ports; i++) {
+		/* 0 is auto affinity, non-zero value to propose port. */
+		tis_attr.lag_tx_port_affinity = i + 1;
+		priv->tiss[i] = mlx5_devx_cmd_create_tis(ctx, &tis_attr);
+		if (!priv->tiss[i]) {
+			DRV_LOG(ERR, "Failed to create TIS %u.", i);
+			return -rte_errno;
+		}
+	}
+	priv->null_mr = mlx5_glue->alloc_null_mr(priv->cdev->pd);
+	if (!priv->null_mr) {
+		DRV_LOG(ERR, "Failed to allocate null MR.");
+		rte_errno = errno;
+		return -rte_errno;
+	}
+	DRV_LOG(DEBUG, "Dump fill Mkey = %u.", priv->null_mr->lkey);
+	priv->steer.domain = mlx5_glue->dr_create_domain(ctx,
+					MLX5DV_DR_DOMAIN_TYPE_NIC_RX);
+	if (!priv->steer.domain) {
+		DRV_LOG(ERR, "Failed to create Rx domain.");
+		rte_errno = errno;
+		return -rte_errno;
+	}
+	priv->steer.tbl = mlx5_glue->dr_create_flow_tbl(priv->steer.domain, 0);
+	if (!priv->steer.tbl) {
+		DRV_LOG(ERR, "Failed to create table 0 with Rx domain.");
+		rte_errno = errno;
+		return -rte_errno;
+	}
+	if (mlx5_vdpa_err_event_setup(priv) != 0)
+		return -rte_errno;
+	if (mlx5_vdpa_event_qp_global_prepare(priv))
+		return -rte_errno;
+	return 0;
+}
+
 static int
 mlx5_vdpa_dev_probe(struct mlx5_common_device *cdev,
 		    struct mlx5_kvargs_ctrl *mkvlist)
 {
 	struct mlx5_vdpa_priv *priv = NULL;
 	struct mlx5_hca_attr *attr = &cdev->config.hca_attr;
-	int retry;
 
 	if (!attr->vdpa.valid || !attr->vdpa.max_num_virtio_queues) {
 		DRV_LOG(ERR, "Not enough capabilities to support vdpa, maybe "
@@ -537,25 +613,10 @@ mlx5_vdpa_dev_probe(struct mlx5_common_device *cdev,
 	priv->num_lag_ports = attr->num_lag_ports;
 	if (attr->num_lag_ports == 0)
 		priv->num_lag_ports = 1;
+	pthread_mutex_init(&priv->vq_config_lock, NULL);
 	priv->cdev = cdev;
-	for (retry = 0; retry < 7; retry++) {
-		priv->var = mlx5_glue->dv_alloc_var(priv->cdev->ctx, 0);
-		if (priv->var != NULL)
-			break;
-		DRV_LOG(WARNING, "Failed to allocate VAR, retry %d.\n", retry);
-		/* Wait Qemu release VAR during vdpa restart, 0.1 sec based. */
-		usleep(100000U << retry);
-	}
-	if (!priv->var) {
-		DRV_LOG(ERR, "Failed to allocate VAR %u.", errno);
+	if (mlx5_vdpa_create_dev_resources(priv))
 		goto error;
-	}
-	priv->err_intr_handle =
-		rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
-	if (priv->err_intr_handle == NULL) {
-		DRV_LOG(ERR, "Fail to allocate intr_handle");
-		goto error;
-	}
 	priv->vdev = rte_vdpa_register_device(cdev->dev, &mlx5_vdpa_ops);
 	if (priv->vdev == NULL) {
 		DRV_LOG(ERR, "Failed to register vDPA device.");
@@ -564,19 +625,13 @@ mlx5_vdpa_dev_probe(struct mlx5_common_device *cdev,
 	}
 	mlx5_vdpa_config_get(mkvlist, priv);
 	SLIST_INIT(&priv->mr_list);
-	pthread_mutex_init(&priv->vq_config_lock, NULL);
 	pthread_mutex_lock(&priv_list_lock);
 	TAILQ_INSERT_TAIL(&priv_list, priv, next);
 	pthread_mutex_unlock(&priv_list_lock);
 	return 0;
-
 error:
-	if (priv) {
-		if (priv->var)
-			mlx5_glue->dv_free_var(priv->var);
-		rte_intr_instance_free(priv->err_intr_handle);
-		rte_free(priv);
-	}
+	if (priv)
+		mlx5_vdpa_dev_release(priv);
 	return -rte_errno;
 }
 
@@ -596,22 +651,48 @@ mlx5_vdpa_dev_remove(struct mlx5_common_device *cdev)
 	if (found)
 		TAILQ_REMOVE(&priv_list, priv, next);
 	pthread_mutex_unlock(&priv_list_lock);
-	if (found) {
-		if (priv->state == MLX5_VDPA_STATE_CONFIGURED)
-			mlx5_vdpa_dev_close(priv->vid);
-		if (priv->var) {
-			mlx5_glue->dv_free_var(priv->var);
-			priv->var = NULL;
-		}
-		if (priv->vdev)
-			rte_vdpa_unregister_device(priv->vdev);
-		pthread_mutex_destroy(&priv->vq_config_lock);
-		rte_intr_instance_free(priv->err_intr_handle);
-		rte_free(priv);
-	}
+	if (found)
+		mlx5_vdpa_dev_release(priv);
 	return 0;
 }
 
+static void
+mlx5_vdpa_release_dev_resources(struct mlx5_vdpa_priv *priv)
+{
+	uint32_t i;
+
+	mlx5_vdpa_event_qp_global_release(priv);
+	mlx5_vdpa_err_event_unset(priv);
+	if (priv->steer.tbl)
+		claim_zero(mlx5_glue->dr_destroy_flow_tbl(priv->steer.tbl));
+	if (priv->steer.domain)
+		claim_zero(mlx5_glue->dr_destroy_domain(priv->steer.domain));
+	if (priv->null_mr)
+		claim_zero(mlx5_glue->dereg_mr(priv->null_mr));
+	for (i = 0; i < priv->num_lag_ports; i++) {
+		if (priv->tiss[i])
+			claim_zero(mlx5_devx_cmd_destroy(priv->tiss[i]));
+	}
+	if (priv->td)
+		claim_zero(mlx5_devx_cmd_destroy(priv->td));
+	if (priv->virtq_db_addr)
+		claim_zero(munmap(priv->virtq_db_addr, priv->var->length));
+	if (priv->var)
+		mlx5_glue->dv_free_var(priv->var);
+}
+
+static void
+mlx5_vdpa_dev_release(struct mlx5_vdpa_priv *priv)
+{
+	if (priv->state == MLX5_VDPA_STATE_CONFIGURED)
+		mlx5_vdpa_dev_close(priv->vid);
+	mlx5_vdpa_release_dev_resources(priv);
+	if (priv->vdev)
+		rte_vdpa_unregister_device(priv->vdev);
+	pthread_mutex_destroy(&priv->vq_config_lock);
+	rte_free(priv);
+}
+
 static const struct rte_pci_id mlx5_vdpa_pci_id_map[] = {
 	{
 		RTE_PCI_DEVICE(PCI_VENDOR_ID_MELLANOX,
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index cc83d7cba3d..e0ba20b953c 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -233,6 +233,15 @@ int mlx5_vdpa_event_qp_create(struct mlx5_vdpa_priv *priv, uint16_t desc_n,
  */
 void mlx5_vdpa_event_qp_destroy(struct mlx5_vdpa_event_qp *eqp);
 
+/**
+ * Create all the event global resources.
+ *
+ * @param[in] priv
+ *   The vdpa driver private structure.
+ */
+int
+mlx5_vdpa_event_qp_global_prepare(struct mlx5_vdpa_priv *priv);
+
 /**
  * Release all the event global resources.
  *
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
index f8d910b33f8..7167a98db0f 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
@@ -40,11 +40,9 @@ mlx5_vdpa_event_qp_global_release(struct mlx5_vdpa_priv *priv)
 }
 
 /* Prepare all the global resources for all the event objects.*/
-static int
+int
 mlx5_vdpa_event_qp_global_prepare(struct mlx5_vdpa_priv *priv)
 {
-	if (priv->eventc)
-		return 0;
 	priv->eventc = mlx5_os_devx_create_event_channel(priv->cdev->ctx,
 			   MLX5DV_DEVX_CREATE_EVENT_CHANNEL_FLAGS_OMIT_EV_DATA);
 	if (!priv->eventc) {
@@ -389,22 +387,30 @@ mlx5_vdpa_err_event_setup(struct mlx5_vdpa_priv *priv)
 	flags = fcntl(priv->err_chnl->fd, F_GETFL);
 	ret = fcntl(priv->err_chnl->fd, F_SETFL, flags | O_NONBLOCK);
 	if (ret) {
+		rte_errno = errno;
 		DRV_LOG(ERR, "Failed to change device event channel FD.");
 		goto error;
 	}
-
+	priv->err_intr_handle =
+		rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+	if (priv->err_intr_handle == NULL) {
+		DRV_LOG(ERR, "Fail to allocate intr_handle");
+		goto error;
+	}
 	if (rte_intr_fd_set(priv->err_intr_handle, priv->err_chnl->fd))
 		goto error;
 	if (rte_intr_type_set(priv->err_intr_handle, RTE_INTR_HANDLE_EXT))
 		goto error;
-	if (rte_intr_callback_register(priv->err_intr_handle,
-				       mlx5_vdpa_err_interrupt_handler,
-				       priv)) {
+	ret = rte_intr_callback_register(priv->err_intr_handle,
+					 mlx5_vdpa_err_interrupt_handler,
+					 priv);
+	if (ret != 0) {
 		rte_intr_fd_set(priv->err_intr_handle, 0);
 		DRV_LOG(ERR, "Failed to register error interrupt for device %d.",
 			priv->vid);
+		rte_errno = -ret;
 		goto error;
 	} else {
 		DRV_LOG(DEBUG, "Registered error interrupt for device%d.",
@@ -453,6 +459,7 @@ mlx5_vdpa_err_event_unset(struct mlx5_vdpa_priv *priv)
 		mlx5_glue->devx_destroy_event_channel(priv->err_chnl);
 		priv->err_chnl = NULL;
 	}
+	rte_intr_instance_free(priv->err_intr_handle);
 }
 
 int
@@ -575,8 +582,6 @@ mlx5_vdpa_event_qp_create(struct mlx5_vdpa_priv *priv, uint16_t desc_n,
 	uint16_t log_desc_n = rte_log2_u32(desc_n);
 	uint32_t ret;
 
-	if (mlx5_vdpa_event_qp_global_prepare(priv))
-		return -1;
 	if (mlx5_vdpa_cq_create(priv, log_desc_n, callfd, &eqp->cq))
 		return -1;
 	attr.pd = priv->cdev->pdn;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_mem.c b/drivers/vdpa/mlx5/mlx5_vdpa_mem.c
index 599079500b0..62f5530e91d 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_mem.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_mem.c
@@ -34,10 +34,6 @@ mlx5_vdpa_mem_dereg(struct mlx5_vdpa_priv *priv)
 	SLIST_INIT(&priv->mr_list);
 	if (priv->lm_mr.addr)
 		mlx5_os_wrapped_mkey_destroy(&priv->lm_mr);
-	if (priv->null_mr) {
-		claim_zero(mlx5_glue->dereg_mr(priv->null_mr));
-		priv->null_mr = NULL;
-	}
 	if (priv->vmem) {
 		free(priv->vmem);
 		priv->vmem = NULL;
@@ -196,13 +192,6 @@ mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv)
 	if (!mem)
 		return -rte_errno;
 	priv->vmem = mem;
-	priv->null_mr = mlx5_glue->alloc_null_mr(priv->cdev->pd);
-	if (!priv->null_mr) {
-		DRV_LOG(ERR, "Failed to allocate null MR.");
-		ret = -errno;
-		goto error;
-	}
-	DRV_LOG(DEBUG, "Dump fill Mkey = %u.", priv->null_mr->lkey);
 	for (i = 0; i < mem->nregions; i++) {
 		reg = &mem->regions[i];
 		entry = rte_zmalloc(__func__, sizeof(*entry), 0);
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_steer.c b/drivers/vdpa/mlx5/mlx5_vdpa_steer.c
index a0fd2776e57..e42868486e7 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_steer.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_steer.c
@@ -45,14 +45,6 @@ void
 mlx5_vdpa_steer_unset(struct mlx5_vdpa_priv *priv)
 {
 	mlx5_vdpa_rss_flows_destroy(priv);
-	if (priv->steer.tbl) {
-		claim_zero(mlx5_glue->dr_destroy_flow_tbl(priv->steer.tbl));
-		priv->steer.tbl = NULL;
-	}
-	if (priv->steer.domain) {
-		claim_zero(mlx5_glue->dr_destroy_domain(priv->steer.domain));
-		priv->steer.domain = NULL;
-	}
 	if (priv->steer.rqt) {
 		claim_zero(mlx5_devx_cmd_destroy(priv->steer.rqt));
 		priv->steer.rqt = NULL;
@@ -248,11 +240,7 @@ mlx5_vdpa_steer_update(struct mlx5_vdpa_priv *priv)
 	int ret = mlx5_vdpa_rqt_prepare(priv);
 
 	if (ret == 0) {
-		mlx5_vdpa_rss_flows_destroy(priv);
-		if (priv->steer.rqt) {
-			claim_zero(mlx5_devx_cmd_destroy(priv->steer.rqt));
-			priv->steer.rqt = NULL;
-		}
+		mlx5_vdpa_steer_unset(priv);
 	} else if (ret < 0) {
 		return ret;
 	} else if (!priv->steer.rss[0].flow) {
@@ -269,17 +257,6 @@ int
 mlx5_vdpa_steer_setup(struct mlx5_vdpa_priv *priv)
 {
 #ifdef HAVE_MLX5DV_DR
-	priv->steer.domain = mlx5_glue->dr_create_domain(priv->cdev->ctx,
-						  MLX5DV_DR_DOMAIN_TYPE_NIC_RX);
-	if (!priv->steer.domain) {
-		DRV_LOG(ERR, "Failed to create Rx domain.");
-		goto error;
-	}
-	priv->steer.tbl = mlx5_glue->dr_create_flow_tbl(priv->steer.domain, 0);
-	if (!priv->steer.tbl) {
-		DRV_LOG(ERR, "Failed to create table 0 with Rx domain.");
-		goto error;
-	}
 	if (mlx5_vdpa_steer_update(priv))
 		goto error;
 	return 0;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index b1d584ca8b0..6bda9f1814a 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -3,7 +3,6 @@
  */
 #include
 #include
-#include
 #include
 #include
@@ -120,20 +119,6 @@ mlx5_vdpa_virtqs_release(struct mlx5_vdpa_priv *priv)
 		if (virtq->counters)
 			claim_zero(mlx5_devx_cmd_destroy(virtq->counters));
 	}
-	for (i = 0; i < priv->num_lag_ports; i++) {
-		if (priv->tiss[i]) {
-			claim_zero(mlx5_devx_cmd_destroy(priv->tiss[i]));
-			priv->tiss[i] = NULL;
-		}
-	}
-	if (priv->td) {
-		claim_zero(mlx5_devx_cmd_destroy(priv->td));
-		priv->td = NULL;
-	}
-	if (priv->virtq_db_addr) {
-		claim_zero(munmap(priv->virtq_db_addr, priv->var->length));
-		priv->virtq_db_addr = NULL;
-	}
 	priv->features = 0;
 	memset(priv->virtqs, 0, sizeof(*virtq) * priv->nr_virtqs);
 	priv->nr_virtqs = 0;
@@ -462,8 +447,6 @@ mlx5_vdpa_features_validate(struct mlx5_vdpa_priv *priv)
 int
 mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv)
 {
-	struct mlx5_devx_tis_attr tis_attr = {0};
-	struct ibv_context *ctx = priv->cdev->ctx;
 	uint32_t i;
 	uint16_t nr_vring = rte_vhost_get_vring_num(priv->vid);
 	int ret =
 rte_vhost_get_negotiated_features(priv->vid, &priv->features);
@@ -485,33 +468,6 @@ mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv)
 			(int)nr_vring);
 		return -1;
 	}
-	/* Always map the entire page. */
-	priv->virtq_db_addr = mmap(NULL, priv->var->length, PROT_READ |
-			PROT_WRITE, MAP_SHARED, ctx->cmd_fd,
-			priv->var->mmap_off);
-	if (priv->virtq_db_addr == MAP_FAILED) {
-		DRV_LOG(ERR, "Failed to map doorbell page %u.", errno);
-		priv->virtq_db_addr = NULL;
-		goto error;
-	} else {
-		DRV_LOG(DEBUG, "VAR address of doorbell mapping is %p.",
-			priv->virtq_db_addr);
-	}
-	priv->td = mlx5_devx_cmd_create_td(ctx);
-	if (!priv->td) {
-		DRV_LOG(ERR, "Failed to create transport domain.");
-		return -rte_errno;
-	}
-	tis_attr.transport_domain = priv->td->id;
-	for (i = 0; i < priv->num_lag_ports; i++) {
-		/* 0 is auto affinity, non-zero value to propose port. */
-		tis_attr.lag_tx_port_affinity = i + 1;
-		priv->tiss[i] = mlx5_devx_cmd_create_tis(ctx, &tis_attr);
-		if (!priv->tiss[i]) {
-			DRV_LOG(ERR, "Failed to create TIS %u.", i);
-			goto error;
-		}
-	}
 	priv->nr_virtqs = nr_vring;
 	for (i = 0; i < nr_vring; i++)
 		if (priv->virtqs[i].enable && mlx5_vdpa_virtq_setup(priv, i))

From patchwork Thu Feb 24 13:28:18 2022
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 108269
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Xueming Li
Cc: Matan Azrad, Viacheslav Ovsiienko
Subject: [PATCH 5/7] vdpa/mlx5: cache and reuse hardware resources
Date: Thu, 24 Feb 2022 21:28:18 +0800
Message-ID: <20220224132820.1939650-6-xuemingl@nvidia.com>
In-Reply-To: <20220224132820.1939650-1-xuemingl@nvidia.com>
References: <20220224132820.1939650-1-xuemingl@nvidia.com>
List-Id: DPDK patches and discussions

During device suspend and resume, resources normally do not change.
When huge resources are allocated to a VM, such as a large memory size
or many queues, the time spent releasing and recreating them becomes
significant. To speed this up, this patch reuses resources such as the
VM MR and the virtq memory when they have not changed.

Signed-off-by: Xueming Li
---
 drivers/vdpa/mlx5/mlx5_vdpa.c       | 11 ++++-
 drivers/vdpa/mlx5/mlx5_vdpa.h       | 12 ++++-
 drivers/vdpa/mlx5/mlx5_vdpa_mem.c   | 27 ++++++++++-
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 73 +++++++++++++++++++++--------
 4 files changed, 99 insertions(+), 24 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index 7e57ae715a8..dbaa590d5d1 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -241,6 +241,13 @@ mlx5_vdpa_mtu_set(struct mlx5_vdpa_priv *priv)
 	return kern_mtu == vhost_mtu ? 0 : -1;
 }
 
+static void
+mlx5_vdpa_dev_cache_clean(struct mlx5_vdpa_priv *priv)
+{
+	mlx5_vdpa_virtqs_cleanup(priv);
+	mlx5_vdpa_mem_dereg(priv);
+}
+
 static int
 mlx5_vdpa_dev_close(int vid)
 {
@@ -260,7 +267,8 @@ mlx5_vdpa_dev_close(int vid)
 	}
 	mlx5_vdpa_steer_unset(priv);
 	mlx5_vdpa_virtqs_release(priv);
-	mlx5_vdpa_mem_dereg(priv);
+	if (priv->lm_mr.addr)
+		mlx5_os_wrapped_mkey_destroy(&priv->lm_mr);
 	priv->state = MLX5_VDPA_STATE_PROBED;
 	priv->vid = 0;
 	/* The mutex may stay locked after event thread cancel - initiate it. */
@@ -661,6 +669,7 @@ mlx5_vdpa_release_dev_resources(struct mlx5_vdpa_priv *priv)
 {
 	uint32_t i;
 
+	mlx5_vdpa_dev_cache_clean(priv);
 	mlx5_vdpa_event_qp_global_release(priv);
 	mlx5_vdpa_err_event_unset(priv);
 	if (priv->steer.tbl)
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index e0ba20b953c..540bf87a352 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -289,13 +289,21 @@ int mlx5_vdpa_err_event_setup(struct mlx5_vdpa_priv *priv);
 void mlx5_vdpa_err_event_unset(struct mlx5_vdpa_priv *priv);
 
 /**
- * Release a virtq and all its related resources.
+ * Release virtqs and resources except that to be reused.
  *
  * @param[in] priv
  *   The vdpa driver private structure.
  */
 void mlx5_vdpa_virtqs_release(struct mlx5_vdpa_priv *priv);
 
+/**
+ * Cleanup cached resources of all virtqs.
+ *
+ * @param[in] priv
+ *   The vdpa driver private structure.
+ */
+void mlx5_vdpa_virtqs_cleanup(struct mlx5_vdpa_priv *priv);
+
 /**
  * Create all the HW virtqs resources and all their related resources.
  *
@@ -323,7 +331,7 @@ int mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv);
 int mlx5_vdpa_virtq_enable(struct mlx5_vdpa_priv *priv, int index, int enable);
 
 /**
- * Unset steering and release all its related resources- stop traffic.
+ * Unset steering - stop traffic.
  *
  * @param[in] priv
  *   The vdpa driver private structure.
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_mem.c b/drivers/vdpa/mlx5/mlx5_vdpa_mem.c index 62f5530e91d..d6e3dd664b5 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_mem.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_mem.c @@ -32,8 +32,6 @@ mlx5_vdpa_mem_dereg(struct mlx5_vdpa_priv *priv) entry = next; } SLIST_INIT(&priv->mr_list); - if (priv->lm_mr.addr) - mlx5_os_wrapped_mkey_destroy(&priv->lm_mr); if (priv->vmem) { free(priv->vmem); priv->vmem = NULL; @@ -149,6 +147,23 @@ mlx5_vdpa_vhost_mem_regions_prepare(int vid, uint8_t *mode, uint64_t *mem_size, return mem; } +static int +mlx5_vdpa_mem_cmp(struct rte_vhost_memory *mem1, struct rte_vhost_memory *mem2) +{ + uint32_t i; + + if (mem1->nregions != mem2->nregions) + return -1; + for (i = 0; i < mem1->nregions; i++) { + if (mem1->regions[i].guest_phys_addr != + mem2->regions[i].guest_phys_addr) + return -1; + if (mem1->regions[i].size != mem2->regions[i].size) + return -1; + } + return 0; +} + #define KLM_SIZE_MAX_ALIGN(sz) ((sz) > MLX5_MAX_KLM_BYTE_COUNT ? \ MLX5_MAX_KLM_BYTE_COUNT : (sz)) @@ -191,6 +206,14 @@ mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv) if (!mem) return -rte_errno; + if (priv->vmem != NULL) { + if (mlx5_vdpa_mem_cmp(mem, priv->vmem) == 0) { + /* VM memory not changed, reuse resources. */ + free(mem); + return 0; + } + mlx5_vdpa_mem_dereg(priv); + } priv->vmem = mem; for (i = 0; i < mem->nregions; i++) { reg = &mem->regions[i]; diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c index 6bda9f1814a..c42846ecb3c 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c @@ -66,10 +66,33 @@ mlx5_vdpa_virtq_kick_handler(void *cb_arg) DRV_LOG(DEBUG, "Ring virtq %u doorbell.", virtq->index); } +/* Release cached VQ resources. 
*/ +void +mlx5_vdpa_virtqs_cleanup(struct mlx5_vdpa_priv *priv) +{ + unsigned int i, j; + + for (i = 0; i < priv->caps.max_num_virtio_queues * 2; i++) { + struct mlx5_vdpa_virtq *virtq = &priv->virtqs[i]; + + for (j = 0; j < RTE_DIM(virtq->umems); ++j) { + if (virtq->umems[j].obj) { + claim_zero(mlx5_glue->devx_umem_dereg + (virtq->umems[j].obj)); + virtq->umems[j].obj = NULL; + } + if (virtq->umems[j].buf) { + rte_free(virtq->umems[j].buf); + virtq->umems[j].buf = NULL; + } + virtq->umems[j].size = 0; + } + } +} + static int mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq) { - unsigned int i; int ret = -EAGAIN; if (rte_intr_fd_get(virtq->intr_handle) >= 0) { @@ -94,13 +117,6 @@ mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq) claim_zero(mlx5_devx_cmd_destroy(virtq->virtq)); } virtq->virtq = NULL; - for (i = 0; i < RTE_DIM(virtq->umems); ++i) { - if (virtq->umems[i].obj) - claim_zero(mlx5_glue->devx_umem_dereg - (virtq->umems[i].obj)); - rte_free(virtq->umems[i].buf); - } - memset(&virtq->umems, 0, sizeof(virtq->umems)); if (virtq->eqp.fw_qp) mlx5_vdpa_event_qp_destroy(&virtq->eqp); virtq->notifier_state = MLX5_VDPA_NOTIFIER_STATE_DISABLED; @@ -120,7 +136,6 @@ mlx5_vdpa_virtqs_release(struct mlx5_vdpa_priv *priv) claim_zero(mlx5_devx_cmd_destroy(virtq->counters)); } priv->features = 0; - memset(priv->virtqs, 0, sizeof(*virtq) * priv->nr_virtqs); priv->nr_virtqs = 0; } @@ -215,6 +230,8 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index) ret = rte_vhost_get_vhost_vring(priv->vid, index, &vq); if (ret) return -1; + if (vq.size == 0) + return 0; virtq->index = index; virtq->vq_size = vq.size; attr.tso_ipv4 = !!(priv->features & (1ULL << VIRTIO_NET_F_HOST_TSO4)); @@ -259,24 +276,42 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index) } /* Setup 3 UMEMs for each virtq. 
*/ for (i = 0; i < RTE_DIM(virtq->umems); ++i) { - virtq->umems[i].size = priv->caps.umems[i].a * vq.size + - priv->caps.umems[i].b; - virtq->umems[i].buf = rte_zmalloc(__func__, - virtq->umems[i].size, 4096); - if (!virtq->umems[i].buf) { + uint32_t size; + void *buf; + struct mlx5dv_devx_umem *obj; + + size = priv->caps.umems[i].a * vq.size + priv->caps.umems[i].b; + if (virtq->umems[i].size == size && + virtq->umems[i].obj != NULL) { + /* Reuse registered memory. */ + memset(virtq->umems[i].buf, 0, size); + goto reuse; + } + if (virtq->umems[i].obj) + claim_zero(mlx5_glue->devx_umem_dereg + (virtq->umems[i].obj)); + if (virtq->umems[i].buf) + rte_free(virtq->umems[i].buf); + virtq->umems[i].size = 0; + virtq->umems[i].obj = NULL; + virtq->umems[i].buf = NULL; + buf = rte_zmalloc(__func__, size, 4096); + if (buf == NULL) { DRV_LOG(ERR, "Cannot allocate umem %d memory for virtq" " %u.", i, index); goto error; } - virtq->umems[i].obj = mlx5_glue->devx_umem_reg(priv->cdev->ctx, - virtq->umems[i].buf, - virtq->umems[i].size, - IBV_ACCESS_LOCAL_WRITE); - if (!virtq->umems[i].obj) { + obj = mlx5_glue->devx_umem_reg(priv->cdev->ctx, buf, size, + IBV_ACCESS_LOCAL_WRITE); + if (obj == NULL) { DRV_LOG(ERR, "Failed to register umem %d for virtq %u.", i, index); goto error; } + virtq->umems[i].size = size; + virtq->umems[i].buf = buf; + virtq->umems[i].obj = obj; +reuse: attr.umems[i].id = virtq->umems[i].obj->umem_id; attr.umems[i].offset = 0; attr.umems[i].size = virtq->umems[i].size;
From patchwork Thu Feb 24 13:28:19 2022 X-Patchwork-Submitter: Xueming Li X-Patchwork-Id: 108267 X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Xueming Li CC: Matan Azrad, Viacheslav Ovsiienko Subject: [PATCH 6/7] vdpa/mlx5: support device cleanup callback Date: Thu, 24 Feb 2022 21:28:19 +0800 Message-ID: <20220224132820.1939650-7-xuemingl@nvidia.com> In-Reply-To: <20220224132820.1939650-1-xuemingl@nvidia.com> References: <20220224132820.1939650-1-xuemingl@nvidia.com>
This patch supports the device cleanup callback API, which is called when the device is disconnected from the VM. Cached resources such as the VM MR and VQ memory are released.
Signed-off-by: Xueming Li --- drivers/vdpa/mlx5/mlx5_vdpa.c | 23 +++++++++++++++++++++++ drivers/vdpa/mlx5/mlx5_vdpa.h | 1 + 2 files changed, 24 insertions(+) diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c index dbaa590d5d1..c83b1141482 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa.c @@ -270,6 +270,8 @@ mlx5_vdpa_dev_close(int vid) if (priv->lm_mr.addr) mlx5_os_wrapped_mkey_destroy(&priv->lm_mr); priv->state = MLX5_VDPA_STATE_PROBED; + if (!priv->connected) + mlx5_vdpa_dev_cache_clean(priv); priv->vid = 0; /* The mutex may stay locked after event thread cancel - initiate it. */ pthread_mutex_init(&priv->vq_config_lock, NULL); @@ -294,6 +296,7 @@ mlx5_vdpa_dev_config(int vid) return -1; } priv->vid = vid; + priv->connected = true; if (mlx5_vdpa_mtu_set(priv)) DRV_LOG(WARNING, "MTU cannot be set on device %s.", vdev->device->name); @@ -431,12 +434,32 @@ mlx5_vdpa_reset_stats(struct rte_vdpa_device *vdev, int qid) return mlx5_vdpa_virtq_stats_reset(priv, qid); } +static int +mlx5_vdpa_dev_clean(int vid) +{ + struct rte_vdpa_device *vdev = rte_vhost_get_vdpa_device(vid); + struct mlx5_vdpa_priv *priv; + + if (vdev == NULL) + return -1; + priv = mlx5_vdpa_find_priv_resource_by_vdev(vdev); + if (priv == NULL) { + DRV_LOG(ERR, "Invalid vDPA device: %s.", vdev->device->name); + return -1; + } + if (priv->state == MLX5_VDPA_STATE_PROBED) + mlx5_vdpa_dev_cache_clean(priv); + priv->connected = false; + return 0; +} + static struct rte_vdpa_dev_ops mlx5_vdpa_ops = { .get_queue_num = mlx5_vdpa_get_queue_num, .get_features = mlx5_vdpa_get_vdpa_features, .get_protocol_features = mlx5_vdpa_get_protocol_features, .dev_conf = mlx5_vdpa_dev_config, .dev_close = mlx5_vdpa_dev_close, + .dev_cleanup = mlx5_vdpa_dev_clean, .set_vring_state = mlx5_vdpa_set_vring_state, .set_features = mlx5_vdpa_features_set, .migration_done = NULL, diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h index 
540bf87a352..24bafe85b44 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.h +++ b/drivers/vdpa/mlx5/mlx5_vdpa.h @@ -121,6 +121,7 @@ enum mlx5_dev_state { struct mlx5_vdpa_priv { TAILQ_ENTRY(mlx5_vdpa_priv) next; + bool connected; enum mlx5_dev_state state; pthread_mutex_t vq_config_lock; uint64_t no_traffic_counter;
From patchwork Thu Feb 24 13:28:20 2022 X-Patchwork-Submitter: Xueming Li X-Patchwork-Id: 108268 X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Xueming Li CC: Matan Azrad, Viacheslav Ovsiienko Subject: [PATCH 7/7] vdpa/mlx5: make statistics counter persistent Date: Thu, 24 Feb 2022 21:28:20 +0800 Message-ID: <20220224132820.1939650-8-xuemingl@nvidia.com> In-Reply-To: <20220224132820.1939650-1-xuemingl@nvidia.com> References: <20220224132820.1939650-1-xuemingl@nvidia.com>
To speed up device suspend and resume, make the counters persistent across reconfiguration until the device gets removed.
Signed-off-by: Xueming Li --- doc/guides/vdpadevs/mlx5.rst | 6 ++++++ drivers/vdpa/mlx5/mlx5_vdpa.c | 19 +++++++---------- drivers/vdpa/mlx5/mlx5_vdpa.h | 1 + drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 32 +++++++++++------------------ 4 files changed, 26 insertions(+), 32 deletions(-) diff --git a/doc/guides/vdpadevs/mlx5.rst b/doc/guides/vdpadevs/mlx5.rst index acb791032ad..3ded142311e 100644 --- a/doc/guides/vdpadevs/mlx5.rst +++ b/doc/guides/vdpadevs/mlx5.rst @@ -109,3 +109,9 @@ Upon potential hardware errors, mlx5 PMD try to recover, give up if failed 3 times in 3 seconds, virtq will be put in disable state. User should check log to get error information, or query vdpa statistics counter to know error type and count report. + +Statistics +^^^^^^^^^^ + +The device statistics counter persists in reconfiguration until the device gets +removed. User can reset counters by calling function rte_vdpa_reset_stats().
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c index c83b1141482..92ef7777169 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa.c @@ -388,12 +388,7 @@ mlx5_vdpa_get_stats(struct rte_vdpa_device *vdev, int qid, DRV_LOG(ERR, "Invalid device: %s.", vdev->device->name); return -ENODEV; } - if (priv->state == MLX5_VDPA_STATE_PROBED) { - DRV_LOG(ERR, "Device %s was not configured.", - vdev->device->name); - return -ENODATA; - } - if (qid >= (int)priv->nr_virtqs) { + if (qid >= (int)priv->caps.max_num_virtio_queues * 2) { DRV_LOG(ERR, "Too big vring id: %d for device %s.", qid, vdev->device->name); return -E2BIG; @@ -416,12 +411,7 @@ mlx5_vdpa_reset_stats(struct rte_vdpa_device *vdev, int qid) DRV_LOG(ERR, "Invalid device: %s.", vdev->device->name); return -ENODEV; } - if (priv->state == MLX5_VDPA_STATE_PROBED) { - DRV_LOG(ERR, "Device %s was not configured.", - vdev->device->name); - return -ENODATA; - } - if (qid >= (int)priv->nr_virtqs) { + if (qid >= (int)priv->caps.max_num_virtio_queues * 2) { DRV_LOG(ERR, "Too big vring id: %d for device %s.", qid, vdev->device->name); return -E2BIG; @@ -693,6 +683,11 @@ mlx5_vdpa_release_dev_resources(struct mlx5_vdpa_priv *priv) uint32_t i; mlx5_vdpa_dev_cache_clean(priv); + for (i = 0; i < priv->caps.max_num_virtio_queues * 2; i++) { + if (!priv->virtqs[i].counters) + continue; + claim_zero(mlx5_devx_cmd_destroy(priv->virtqs[i].counters)); + } mlx5_vdpa_event_qp_global_release(priv); mlx5_vdpa_err_event_unset(priv); if (priv->steer.tbl) diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h index 24bafe85b44..e7f3319f896 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.h +++ b/drivers/vdpa/mlx5/mlx5_vdpa.h @@ -92,6 +92,7 @@ struct mlx5_vdpa_virtq { struct rte_intr_handle *intr_handle; uint64_t err_time[3]; /* RDTSC time of recent errors. 
*/ uint32_t n_retry; + struct mlx5_devx_virtio_q_couners_attr stats; struct mlx5_devx_virtio_q_couners_attr reset; }; diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c index c42846ecb3c..d2c91b25db1 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c @@ -127,14 +127,9 @@ void mlx5_vdpa_virtqs_release(struct mlx5_vdpa_priv *priv) { int i; - struct mlx5_vdpa_virtq *virtq; - for (i = 0; i < priv->nr_virtqs; i++) { - virtq = &priv->virtqs[i]; - mlx5_vdpa_virtq_unset(virtq); - if (virtq->counters) - claim_zero(mlx5_devx_cmd_destroy(virtq->counters)); - } + for (i = 0; i < priv->nr_virtqs; i++) + mlx5_vdpa_virtq_unset(&priv->virtqs[i]); priv->features = 0; priv->nr_virtqs = 0; } @@ -590,7 +585,7 @@ mlx5_vdpa_virtq_stats_get(struct mlx5_vdpa_priv *priv, int qid, struct rte_vdpa_stat *stats, unsigned int n) { struct mlx5_vdpa_virtq *virtq = &priv->virtqs[qid]; - struct mlx5_devx_virtio_q_couners_attr attr = {0}; + struct mlx5_devx_virtio_q_couners_attr *attr = &virtq->stats; int ret; if (!virtq->counters) { @@ -598,7 +593,7 @@ mlx5_vdpa_virtq_stats_get(struct mlx5_vdpa_priv *priv, int qid, "is invalid.", qid); return -EINVAL; } - ret = mlx5_devx_cmd_query_virtio_q_counters(virtq->counters, &attr); + ret = mlx5_devx_cmd_query_virtio_q_counters(virtq->counters, attr); if (ret) { DRV_LOG(ERR, "Failed to read virtq %d stats from HW.", qid); return ret; @@ -608,37 +603,37 @@ mlx5_vdpa_virtq_stats_get(struct mlx5_vdpa_priv *priv, int qid, return ret; stats[MLX5_VDPA_STATS_RECEIVED_DESCRIPTORS] = (struct rte_vdpa_stat) { .id = MLX5_VDPA_STATS_RECEIVED_DESCRIPTORS, - .value = attr.received_desc - virtq->reset.received_desc, + .value = attr->received_desc - virtq->reset.received_desc, }; if (ret == MLX5_VDPA_STATS_COMPLETED_DESCRIPTORS) return ret; stats[MLX5_VDPA_STATS_COMPLETED_DESCRIPTORS] = (struct rte_vdpa_stat) { .id = MLX5_VDPA_STATS_COMPLETED_DESCRIPTORS, - .value = attr.completed_desc - 
virtq->reset.completed_desc, + .value = attr->completed_desc - virtq->reset.completed_desc, }; if (ret == MLX5_VDPA_STATS_BAD_DESCRIPTOR_ERRORS) return ret; stats[MLX5_VDPA_STATS_BAD_DESCRIPTOR_ERRORS] = (struct rte_vdpa_stat) { .id = MLX5_VDPA_STATS_BAD_DESCRIPTOR_ERRORS, - .value = attr.bad_desc_errors - virtq->reset.bad_desc_errors, + .value = attr->bad_desc_errors - virtq->reset.bad_desc_errors, }; if (ret == MLX5_VDPA_STATS_EXCEED_MAX_CHAIN) return ret; stats[MLX5_VDPA_STATS_EXCEED_MAX_CHAIN] = (struct rte_vdpa_stat) { .id = MLX5_VDPA_STATS_EXCEED_MAX_CHAIN, - .value = attr.exceed_max_chain - virtq->reset.exceed_max_chain, + .value = attr->exceed_max_chain - virtq->reset.exceed_max_chain, }; if (ret == MLX5_VDPA_STATS_INVALID_BUFFER) return ret; stats[MLX5_VDPA_STATS_INVALID_BUFFER] = (struct rte_vdpa_stat) { .id = MLX5_VDPA_STATS_INVALID_BUFFER, - .value = attr.invalid_buffer - virtq->reset.invalid_buffer, + .value = attr->invalid_buffer - virtq->reset.invalid_buffer, }; if (ret == MLX5_VDPA_STATS_COMPLETION_ERRORS) return ret; stats[MLX5_VDPA_STATS_COMPLETION_ERRORS] = (struct rte_vdpa_stat) { .id = MLX5_VDPA_STATS_COMPLETION_ERRORS, - .value = attr.error_cqes - virtq->reset.error_cqes, + .value = attr->error_cqes - virtq->reset.error_cqes, }; return ret; } @@ -649,11 +644,8 @@ mlx5_vdpa_virtq_stats_reset(struct mlx5_vdpa_priv *priv, int qid) struct mlx5_vdpa_virtq *virtq = &priv->virtqs[qid]; int ret; - if (!virtq->counters) { - DRV_LOG(ERR, "Failed to read virtq %d statistics - virtq " - "is invalid.", qid); - return -EINVAL; - } + if (virtq->counters == NULL) /* VQ not enabled. */ + return 0; ret = mlx5_devx_cmd_query_virtio_q_counters(virtq->counters, &virtq->reset); if (ret)