From patchwork Sun May 8 14:25:50 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 110903
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Xueming Li
To: , Maxime Coquelin
CC:
Subject: [PATCH v3 3/7] vdpa/mlx5: no kick handling during shutdown
Date: Sun, 8 May 2022 17:25:50 +0300
Message-ID: <20220508142554.560354-4-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20220508142554.560354-1-xuemingl@nvidia.com>
References: <20220224132820.1939650-1-xuemingl@nvidia.com>
 <20220508142554.560354-1-xuemingl@nvidia.com>
When QEMU suspends a VM, the HW notifier is un-mmapped while the vCPU
thread may still be active and writing to the notifier through the kick
socket. The PMD kick handler thread then tries to install the HW notifier
through the client socket; in that case it times out and slows down device
close.

This patch skips HW notifier installation if the VQ or the device is in
the middle of shutdown.

Signed-off-by: Xueming Li
Reviewed-by: Maxime Coquelin
---
 drivers/vdpa/mlx5/mlx5_vdpa.c       | 17 ++++++++++-------
 drivers/vdpa/mlx5/mlx5_vdpa.h       |  8 +++++++-
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 12 +++++++++++-
 3 files changed, 28 insertions(+), 9 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index 749c9d097cf..48f20d9ecdb 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -252,13 +252,15 @@ mlx5_vdpa_dev_close(int vid)
 	}
 	mlx5_vdpa_err_event_unset(priv);
 	mlx5_vdpa_cqe_event_unset(priv);
-	if (priv->configured)
+	if (priv->state == MLX5_VDPA_STATE_CONFIGURED) {
 		ret |= mlx5_vdpa_lm_log(priv);
+		priv->state = MLX5_VDPA_STATE_IN_PROGRESS;
+	}
 	mlx5_vdpa_steer_unset(priv);
 	mlx5_vdpa_virtqs_release(priv);
 	mlx5_vdpa_event_qp_global_release(priv);
 	mlx5_vdpa_mem_dereg(priv);
-	priv->configured = 0;
+	priv->state = MLX5_VDPA_STATE_PROBED;
 	priv->vid = 0;
 	/* The mutex may stay locked after event thread cancel - initiate it. */
 	pthread_mutex_init(&priv->vq_config_lock, NULL);
@@ -277,7 +279,8 @@ mlx5_vdpa_dev_config(int vid)
 		DRV_LOG(ERR, "Invalid vDPA device: %s.", vdev->device->name);
 		return -EINVAL;
 	}
-	if (priv->configured && mlx5_vdpa_dev_close(vid)) {
+	if (priv->state == MLX5_VDPA_STATE_CONFIGURED &&
+	    mlx5_vdpa_dev_close(vid)) {
 		DRV_LOG(ERR, "Failed to reconfigure vid %d.", vid);
 		return -1;
 	}
@@ -291,7 +294,7 @@ mlx5_vdpa_dev_config(int vid)
 		mlx5_vdpa_dev_close(vid);
 		return -1;
 	}
-	priv->configured = 1;
+	priv->state = MLX5_VDPA_STATE_CONFIGURED;
 	DRV_LOG(INFO, "vDPA device %d was configured.", vid);
 	return 0;
 }
@@ -373,7 +376,7 @@ mlx5_vdpa_get_stats(struct rte_vdpa_device *vdev, int qid,
 		DRV_LOG(ERR, "Invalid device: %s.", vdev->device->name);
 		return -ENODEV;
 	}
-	if (!priv->configured) {
+	if (priv->state == MLX5_VDPA_STATE_PROBED) {
 		DRV_LOG(ERR, "Device %s was not configured.",
 				vdev->device->name);
 		return -ENODATA;
@@ -401,7 +404,7 @@ mlx5_vdpa_reset_stats(struct rte_vdpa_device *vdev, int qid)
 		DRV_LOG(ERR, "Invalid device: %s.", vdev->device->name);
 		return -ENODEV;
 	}
-	if (!priv->configured) {
+	if (priv->state == MLX5_VDPA_STATE_PROBED) {
 		DRV_LOG(ERR, "Device %s was not configured.",
 				vdev->device->name);
 		return -ENODATA;
@@ -594,7 +597,7 @@ mlx5_vdpa_dev_remove(struct mlx5_common_device *cdev)
 		TAILQ_REMOVE(&priv_list, priv, next);
 	pthread_mutex_unlock(&priv_list_lock);
 	if (found) {
-		if (priv->configured)
+		if (priv->state == MLX5_VDPA_STATE_CONFIGURED)
 			mlx5_vdpa_dev_close(priv->vid);
 		if (priv->var) {
 			mlx5_glue->dv_free_var(priv->var);
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index 22617924eac..cc83d7cba3d 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -113,9 +113,15 @@ enum {
 	MLX5_VDPA_EVENT_MODE_ONLY_INTERRUPT
 };
 
+enum mlx5_dev_state {
+	MLX5_VDPA_STATE_PROBED = 0,
+	MLX5_VDPA_STATE_CONFIGURED,
+	MLX5_VDPA_STATE_IN_PROGRESS /* Shutting down. */
+};
+
 struct mlx5_vdpa_priv {
 	TAILQ_ENTRY(mlx5_vdpa_priv) next;
-	uint8_t configured;
+	enum mlx5_dev_state state;
 	pthread_mutex_t vq_config_lock;
 	uint64_t no_traffic_counter;
 	pthread_t timer_tid;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index 2696d54b412..4c34983da41 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -25,6 +25,11 @@ mlx5_vdpa_virtq_kick_handler(void *cb_arg)
 	int nbytes;
 	int retry;
 
+	if (priv->state != MLX5_VDPA_STATE_CONFIGURED && !virtq->enable) {
+		DRV_LOG(ERR, "device %d queue %d down, skip kick handling",
+			priv->vid, virtq->index);
+		return;
+	}
 	if (rte_intr_fd_get(virtq->intr_handle) < 0)
 		return;
 	for (retry = 0; retry < 3; ++retry) {
@@ -43,6 +48,11 @@ mlx5_vdpa_virtq_kick_handler(void *cb_arg)
 	if (nbytes < 0)
 		return;
 	rte_write32(virtq->index, priv->virtq_db_addr);
+	if (priv->state != MLX5_VDPA_STATE_CONFIGURED && !virtq->enable) {
+		DRV_LOG(ERR, "device %d queue %d down, skip kick handling",
+			priv->vid, virtq->index);
+		return;
+	}
 	if (virtq->notifier_state == MLX5_VDPA_NOTIFIER_STATE_DISABLED) {
 		if (rte_vhost_host_notifier_ctrl(priv->vid, virtq->index, true))
 			virtq->notifier_state = MLX5_VDPA_NOTIFIER_STATE_ERR;
@@ -541,7 +551,7 @@ mlx5_vdpa_virtq_enable(struct mlx5_vdpa_priv *priv, int index, int enable)
 
 	DRV_LOG(INFO, "Update virtq %d status %sable -> %sable.",
 		index, virtq->enable ? "en" : "dis", enable ? "en" : "dis");
-	if (!priv->configured) {
+	if (priv->state == MLX5_VDPA_STATE_PROBED) {
 		virtq->enable = !!enable;
 		return 0;
 	}
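
For readers following the logic of the change, below is a minimal standalone
sketch of the state guard the patch adds to the kick handler. The enum values
and the check mirror the diff above, but the priv/virtq structs, the
install_hw_notifier() stub, and main() are invented scaffolding for
illustration only; they are not the mlx5 vdpa driver code.

/* Sketch: early-return guard in the kick handler during shutdown. */
#include <stdbool.h>
#include <stdio.h>

enum mlx5_dev_state {
	MLX5_VDPA_STATE_PROBED = 0,	/* Probed, not configured. */
	MLX5_VDPA_STATE_CONFIGURED,	/* Fully configured. */
	MLX5_VDPA_STATE_IN_PROGRESS	/* Shutting down. */
};

struct priv {
	int vid;
	enum mlx5_dev_state state;
};

struct virtq {
	int index;
	bool enable;
	struct priv *priv;
};

/* Stand-in for the client-socket HW notifier install that can time out
 * while QEMU is suspending the VM. */
static void
install_hw_notifier(struct virtq *virtq)
{
	printf("installing HW notifier for queue %d\n", virtq->index);
}

/* Mimics the guard added to mlx5_vdpa_virtq_kick_handler(): if the device
 * is no longer CONFIGURED and the queue is disabled, skip kick handling
 * instead of blocking on the notifier install. */
static void
kick_handler(struct virtq *virtq)
{
	struct priv *priv = virtq->priv;

	if (priv->state != MLX5_VDPA_STATE_CONFIGURED && !virtq->enable) {
		printf("device %d queue %d down, skip kick handling\n",
		       priv->vid, virtq->index);
		return;
	}
	install_hw_notifier(virtq);
}

int
main(void)
{
	struct priv priv = { .vid = 0, .state = MLX5_VDPA_STATE_IN_PROGRESS };
	struct virtq vq = { .index = 1, .enable = false, .priv = &priv };

	kick_handler(&vq);	/* Skipped: device is shutting down. */
	priv.state = MLX5_VDPA_STATE_CONFIGURED;
	kick_handler(&vq);	/* Proceeds: device is configured. */
	return 0;
}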