From patchwork Thu Feb 24 14:38:03 2022
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 108289
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Xueming Li
Cc: Matan Azrad, Viacheslav Ovsiienko, Maxime Coquelin
Subject: [PATCH v1 1/7] vdpa/mlx5: fix interrupt trash that leads to segment fault
Date: Thu, 24 Feb 2022 22:38:03 +0800
Message-ID: <20220224143809.1977642-2-xuemingl@nvidia.com>
In-Reply-To: <20220224143809.1977642-1-xuemingl@nvidia.com>
References: <20220224132820.1939650-1-xuemingl@nvidia.com>
 <20220224143809.1977642-1-xuemingl@nvidia.com>
List-Id: DPDK patches and discussions

Disable the timeout on interrupt unregister retries, so that an invalid
FD can no longer cause a segmentation fault in the interrupt thread.

Fixes: 62c813706e41 ("vdpa/mlx5: map doorbell")
Cc: matan@mellanox.com
Cc: stable@dpdk.org

Signed-off-by: Xueming Li
---
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 20 ++++++++------------
 1 file changed, 8 insertions(+), 12 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index 3416797d289..de324506cb9 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -17,7 +17,7 @@
 static void
-mlx5_vdpa_virtq_handler(void *cb_arg)
+mlx5_vdpa_virtq_kick_handler(void *cb_arg)
 {
 	struct mlx5_vdpa_virtq *virtq = cb_arg;
 	struct mlx5_vdpa_priv *priv = virtq->priv;
@@ -59,20 +59,16 @@ static int
 mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq)
 {
 	unsigned int i;
-	int retries = MLX5_VDPA_INTR_RETRIES;
 	int ret = -EAGAIN;
 
-	if (rte_intr_fd_get(virtq->intr_handle) != -1) {
-		while (retries-- && ret == -EAGAIN) {
+	if (rte_intr_fd_get(virtq->intr_handle) >= 0) {
+		while (ret == -EAGAIN) {
 			ret = rte_intr_callback_unregister(virtq->intr_handle,
-							mlx5_vdpa_virtq_handler,
-							virtq);
+					mlx5_vdpa_virtq_kick_handler, virtq);
 			if (ret == -EAGAIN) {
-				DRV_LOG(DEBUG, "Try again to unregister fd %d "
-					"of virtq %d interrupt, retries = %d.",
-					rte_intr_fd_get(virtq->intr_handle),
-					(int)virtq->index, retries);
-
+				DRV_LOG(DEBUG, "Try again to unregister fd %d of virtq %hu interrupt",
+					rte_intr_fd_get(virtq->intr_handle),
+					(int)virtq->index);
 				usleep(MLX5_VDPA_INTR_RETRIES_USEC);
 			}
 		}
@@ -359,7 +355,7 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
 		goto error;
 	if (rte_intr_callback_register(virtq->intr_handle,
-				       mlx5_vdpa_virtq_handler,
+				       mlx5_vdpa_virtq_kick_handler,
 				       virtq)) {
 		rte_intr_fd_set(virtq->intr_handle, -1);
 		DRV_LOG(ERR, "Failed to register virtq %d interrupt.",
From patchwork Thu Feb 24 14:38:04 2022
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 108288
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Xueming Li
Cc: Matan Azrad, Viacheslav Ovsiienko, Maxime Coquelin
Subject: [PATCH v1 2/7] vdpa/mlx5: fix dead loop when process interrupted
Date: Thu, 24 Feb 2022 22:38:04 +0800
Message-ID: <20220224143809.1977642-3-xuemingl@nvidia.com>
In-Reply-To: <20220224143809.1977642-1-xuemingl@nvidia.com>
References: <20220224132820.1939650-1-xuemingl@nvidia.com>
 <20220224143809.1977642-1-xuemingl@nvidia.com>
During Ctrl+C handling, the kick handling thread sometimes gets an
endless EAGAIN error and falls into a dead loop. Kicks happen
frequently on a real system due to busy traffic or the retry
mechanism. This patch bounds the read retries so the handler returns
instead of looping forever, and skips setting the hardware notifier
on a potential device error; the notifier can be set on the next
successful kick request.

Fixes: 62c813706e41 ("vdpa/mlx5: map doorbell")
Cc: stable@dpdk.org

Signed-off-by: Xueming Li
---
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index de324506cb9..e1e05924a40 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -23,11 +23,11 @@ mlx5_vdpa_virtq_kick_handler(void *cb_arg)
 	struct mlx5_vdpa_priv *priv = virtq->priv;
 	uint64_t buf;
 	int nbytes;
+	int retry;
 
 	if (rte_intr_fd_get(virtq->intr_handle) < 0)
 		return;
-
-	do {
+	for (retry = 0; retry < 3; ++retry) {
 		nbytes = read(rte_intr_fd_get(virtq->intr_handle), &buf,
 			      8);
 		if (nbytes < 0) {
@@ -39,7 +39,9 @@ mlx5_vdpa_virtq_kick_handler(void *cb_arg)
 				virtq->index, strerror(errno));
 		}
 		break;
-	} while (1);
+	}
+	if (nbytes < 0)
+		return;
 	rte_write32(virtq->index, priv->virtq_db_addr);
 	if (virtq->notifier_state == MLX5_VDPA_NOTIFIER_STATE_DISABLED) {
 		if (rte_vhost_host_notifier_ctrl(priv->vid, virtq->index, true))
From patchwork Thu Feb 24 14:38:05 2022
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 108291
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Xueming Li
Cc: Matan Azrad, Viacheslav Ovsiienko
Subject: [PATCH v1 3/7] vdpa/mlx5: no kick handling during shutdown
Date: Thu, 24 Feb 2022 22:38:05 +0800
Message-ID: <20220224143809.1977642-4-xuemingl@nvidia.com>
In-Reply-To: <20220224143809.1977642-1-xuemingl@nvidia.com>
References: <20220224132820.1939650-1-xuemingl@nvidia.com>
 <20220224143809.1977642-1-xuemingl@nvidia.com>

When QEMU suspends a VM, the HW notifier is unmapped while the vCPU
thread may still be active and writing to the notifier through the
kick socket. In that case the PMD kick-handler thread, trying to
install the HW notifier through the client socket, times out and
slows down device close. This patch skips HW notifier installation
when the VQ or the device is in the middle of shutdown.
Signed-off-by: Xueming Li
---
 drivers/vdpa/mlx5/mlx5_vdpa.c       | 17 ++++++++++-------
 drivers/vdpa/mlx5/mlx5_vdpa.h       |  8 +++++++-
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 12 +++++++++++-
 3 files changed, 28 insertions(+), 9 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index 8dfaba791dc..a93a9e78f7f 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -252,13 +252,15 @@ mlx5_vdpa_dev_close(int vid)
 	}
 	mlx5_vdpa_err_event_unset(priv);
 	mlx5_vdpa_cqe_event_unset(priv);
-	if (priv->configured)
+	if (priv->state == MLX5_VDPA_STATE_CONFIGURED) {
 		ret |= mlx5_vdpa_lm_log(priv);
+		priv->state = MLX5_VDPA_STATE_IN_PROGRESS;
+	}
 	mlx5_vdpa_steer_unset(priv);
 	mlx5_vdpa_virtqs_release(priv);
 	mlx5_vdpa_event_qp_global_release(priv);
 	mlx5_vdpa_mem_dereg(priv);
-	priv->configured = 0;
+	priv->state = MLX5_VDPA_STATE_PROBED;
 	priv->vid = 0;
 	/* The mutex may stay locked after event thread cancel - initiate it. */
 	pthread_mutex_init(&priv->vq_config_lock, NULL);
@@ -277,7 +279,8 @@ mlx5_vdpa_dev_config(int vid)
 		DRV_LOG(ERR, "Invalid vDPA device: %s.", vdev->device->name);
 		return -EINVAL;
 	}
-	if (priv->configured && mlx5_vdpa_dev_close(vid)) {
+	if (priv->state == MLX5_VDPA_STATE_CONFIGURED &&
+	    mlx5_vdpa_dev_close(vid)) {
 		DRV_LOG(ERR, "Failed to reconfigure vid %d.", vid);
 		return -1;
 	}
@@ -291,7 +294,7 @@ mlx5_vdpa_dev_config(int vid)
 		mlx5_vdpa_dev_close(vid);
 		return -1;
 	}
-	priv->configured = 1;
+	priv->state = MLX5_VDPA_STATE_CONFIGURED;
 	DRV_LOG(INFO, "vDPA device %d was configured.", vid);
 	return 0;
 }
@@ -373,7 +376,7 @@ mlx5_vdpa_get_stats(struct rte_vdpa_device *vdev, int qid,
 		DRV_LOG(ERR, "Invalid device: %s.", vdev->device->name);
 		return -ENODEV;
 	}
-	if (!priv->configured) {
+	if (priv->state == MLX5_VDPA_STATE_PROBED) {
 		DRV_LOG(ERR, "Device %s was not configured.",
 			vdev->device->name);
 		return -ENODATA;
@@ -401,7 +404,7 @@ mlx5_vdpa_reset_stats(struct rte_vdpa_device *vdev, int qid)
 		DRV_LOG(ERR, "Invalid device: %s.", vdev->device->name);
 		return -ENODEV;
 	}
-	if (!priv->configured) {
+	if (priv->state == MLX5_VDPA_STATE_PROBED) {
 		DRV_LOG(ERR, "Device %s was not configured.",
 			vdev->device->name);
 		return -ENODATA;
@@ -590,7 +593,7 @@ mlx5_vdpa_dev_remove(struct mlx5_common_device *cdev)
 		TAILQ_REMOVE(&priv_list, priv, next);
 	pthread_mutex_unlock(&priv_list_lock);
 	if (found) {
-		if (priv->configured)
+		if (priv->state == MLX5_VDPA_STATE_CONFIGURED)
 			mlx5_vdpa_dev_close(priv->vid);
 		if (priv->var) {
 			mlx5_glue->dv_free_var(priv->var);

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index 22617924eac..cc83d7cba3d 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -113,9 +113,15 @@ enum {
 	MLX5_VDPA_EVENT_MODE_ONLY_INTERRUPT
 };
 
+enum mlx5_dev_state {
+	MLX5_VDPA_STATE_PROBED = 0,
+	MLX5_VDPA_STATE_CONFIGURED,
+	MLX5_VDPA_STATE_IN_PROGRESS /* Shutting down. */
+};
+
 struct mlx5_vdpa_priv {
 	TAILQ_ENTRY(mlx5_vdpa_priv) next;
-	uint8_t configured;
+	enum mlx5_dev_state state;
 	pthread_mutex_t vq_config_lock;
 	uint64_t no_traffic_counter;
 	pthread_t timer_tid;

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index e1e05924a40..b1d584ca8b0 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -25,6 +25,11 @@ mlx5_vdpa_virtq_kick_handler(void *cb_arg)
 	int nbytes;
 	int retry;
 
+	if (priv->state != MLX5_VDPA_STATE_CONFIGURED && !virtq->enable) {
+		DRV_LOG(ERR, "device %d queue %d down, skip kick handling",
+			priv->vid, virtq->index);
+		return;
+	}
 	if (rte_intr_fd_get(virtq->intr_handle) < 0)
 		return;
 	for (retry = 0; retry < 3; ++retry) {
@@ -43,6 +48,11 @@ mlx5_vdpa_virtq_kick_handler(void *cb_arg)
 	if (nbytes < 0)
 		return;
 	rte_write32(virtq->index, priv->virtq_db_addr);
+	if (priv->state != MLX5_VDPA_STATE_CONFIGURED && !virtq->enable) {
+		DRV_LOG(ERR, "device %d queue %d down, skip kick handling",
+			priv->vid, virtq->index);
+		return;
+	}
 	if (virtq->notifier_state == MLX5_VDPA_NOTIFIER_STATE_DISABLED) {
 		if (rte_vhost_host_notifier_ctrl(priv->vid, virtq->index, true))
 			virtq->notifier_state = MLX5_VDPA_NOTIFIER_STATE_ERR;
@@ -541,7 +551,7 @@ mlx5_vdpa_virtq_enable(struct mlx5_vdpa_priv *priv, int index, int enable)
 	DRV_LOG(INFO, "Update virtq %d status %sable -> %sable.", index,
 		virtq->enable ? "en" : "dis", enable ? "en" : "dis");
-	if (!priv->configured) {
+	if (priv->state == MLX5_VDPA_STATE_PROBED) {
 		virtq->enable = !!enable;
 		return 0;
 	}
From: Xueming Li
To: 
CC: , Matan Azrad , "Viacheslav Ovsiienko" 
Subject: [PATCH v1 4/7] vdpa/mlx5: reuse resources in reconfiguration
Date: Thu, 24 Feb 2022 22:38:06 +0800
Message-ID: <20220224143809.1977642-5-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20220224143809.1977642-1-xuemingl@nvidia.com>
References: <20220224132820.1939650-1-xuemingl@nvidia.com> <20220224143809.1977642-1-xuemingl@nvidia.com>
List-Id: DPDK patches and discussions

To speed up device resume, create reusable resources at device probe and release them at device remove. The reused resources include the TIS, TD, VAR doorbell mmap, error-handling event channel and interrupt handler, UAR, Rx event channel, NULL MR, and steering domain and table.

Signed-off-by: Xueming Li --- drivers/vdpa/mlx5/mlx5_vdpa.c | 165 +++++++++++++++++++++------- drivers/vdpa/mlx5/mlx5_vdpa.h | 9 ++ drivers/vdpa/mlx5/mlx5_vdpa_event.c | 23 ++-- drivers/vdpa/mlx5/mlx5_vdpa_mem.c | 11 -- drivers/vdpa/mlx5/mlx5_vdpa_steer.c | 25 +---- drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 44 -------- 6 files changed, 147 insertions(+), 130 deletions(-) diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c index a93a9e78f7f..ee35c36624b 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa.c @@ -5,6 +5,7 @@ #include #include #include +#include #include #include @@ -49,6 +50,8 @@ TAILQ_HEAD(mlx5_vdpa_privs, mlx5_vdpa_priv) priv_list = TAILQ_HEAD_INITIALIZER(priv_list); static pthread_mutex_t priv_list_lock = PTHREAD_MUTEX_INITIALIZER; +static void mlx5_vdpa_dev_release(struct mlx5_vdpa_priv *priv); + static struct mlx5_vdpa_priv * mlx5_vdpa_find_priv_resource_by_vdev(struct rte_vdpa_device *vdev) { @@ -250,7 +253,6 @@ mlx5_vdpa_dev_close(int vid) DRV_LOG(ERR, "Invalid vDPA device: %s.", vdev->device->name); return -1; } - mlx5_vdpa_err_event_unset(priv); mlx5_vdpa_cqe_event_unset(priv); if (priv->state == MLX5_VDPA_STATE_CONFIGURED) { ret |= mlx5_vdpa_lm_log(priv); @@ -258,7 +260,6 @@ mlx5_vdpa_dev_close(int vid) } mlx5_vdpa_steer_unset(priv); mlx5_vdpa_virtqs_release(priv); - mlx5_vdpa_event_qp_global_release(priv); mlx5_vdpa_mem_dereg(priv); priv->state = MLX5_VDPA_STATE_PROBED; priv->vid = 0; @@ -288,7 +289,7 @@ mlx5_vdpa_dev_config(int vid) if (mlx5_vdpa_mtu_set(priv))
DRV_LOG(WARNING, "MTU cannot be set on device %s.", vdev->device->name); - if (mlx5_vdpa_mem_register(priv) || mlx5_vdpa_err_event_setup(priv) || + if (mlx5_vdpa_mem_register(priv) || mlx5_vdpa_virtqs_prepare(priv) || mlx5_vdpa_steer_setup(priv) || mlx5_vdpa_cqe_event_setup(priv)) { mlx5_vdpa_dev_close(vid); @@ -504,12 +505,87 @@ mlx5_vdpa_config_get(struct rte_devargs *devargs, struct mlx5_vdpa_priv *priv) DRV_LOG(DEBUG, "no traffic max is %u.", priv->no_traffic_max); } +static int +mlx5_vdpa_create_dev_resources(struct mlx5_vdpa_priv *priv) +{ + struct mlx5_devx_tis_attr tis_attr = {0}; + struct ibv_context *ctx = priv->cdev->ctx; + uint32_t i; + int retry; + + for (retry = 0; retry < 7; retry++) { + priv->var = mlx5_glue->dv_alloc_var(ctx, 0); + if (priv->var != NULL) + break; + DRV_LOG(WARNING, "Failed to allocate VAR, retry %d.", retry); + /* Wait Qemu release VAR during vdpa restart, 0.1 sec based. */ + usleep(100000U << retry); + } + if (!priv->var) { + DRV_LOG(ERR, "Failed to allocate VAR %u.", errno); + rte_errno = ENOMEM; + return -rte_errno; + } + /* Always map the entire page. */ + priv->virtq_db_addr = mmap(NULL, priv->var->length, PROT_READ | + PROT_WRITE, MAP_SHARED, ctx->cmd_fd, + priv->var->mmap_off); + if (priv->virtq_db_addr == MAP_FAILED) { + DRV_LOG(ERR, "Failed to map doorbell page %u.", errno); + priv->virtq_db_addr = NULL; + rte_errno = errno; + return -rte_errno; + } + DRV_LOG(DEBUG, "VAR address of doorbell mapping is %p.", + priv->virtq_db_addr); + priv->td = mlx5_devx_cmd_create_td(ctx); + if (!priv->td) { + DRV_LOG(ERR, "Failed to create transport domain."); + rte_errno = errno; + return -rte_errno; + } + tis_attr.transport_domain = priv->td->id; + for (i = 0; i < priv->num_lag_ports; i++) { + /* 0 is auto affinity, non-zero value to propose port. 
*/ + tis_attr.lag_tx_port_affinity = i + 1; + priv->tiss[i] = mlx5_devx_cmd_create_tis(ctx, &tis_attr); + if (!priv->tiss[i]) { + DRV_LOG(ERR, "Failed to create TIS %u.", i); + return -rte_errno; + } + } + priv->null_mr = mlx5_glue->alloc_null_mr(priv->cdev->pd); + if (!priv->null_mr) { + DRV_LOG(ERR, "Failed to allocate null MR."); + rte_errno = errno; + return -rte_errno; + } + DRV_LOG(DEBUG, "Dump fill Mkey = %u.", priv->null_mr->lkey); + priv->steer.domain = mlx5_glue->dr_create_domain(ctx, + MLX5DV_DR_DOMAIN_TYPE_NIC_RX); + if (!priv->steer.domain) { + DRV_LOG(ERR, "Failed to create Rx domain."); + rte_errno = errno; + return -rte_errno; + } + priv->steer.tbl = mlx5_glue->dr_create_flow_tbl(priv->steer.domain, 0); + if (!priv->steer.tbl) { + DRV_LOG(ERR, "Failed to create table 0 with Rx domain."); + rte_errno = errno; + return -rte_errno; + } + if (mlx5_vdpa_err_event_setup(priv) != 0) + return -rte_errno; + if (mlx5_vdpa_event_qp_global_prepare(priv)) + return -rte_errno; + return 0; +} + static int mlx5_vdpa_dev_probe(struct mlx5_common_device *cdev) { struct mlx5_vdpa_priv *priv = NULL; struct mlx5_hca_attr *attr = &cdev->config.hca_attr; - int retry; if (!attr->vdpa.valid || !attr->vdpa.max_num_virtio_queues) { DRV_LOG(ERR, "Not enough capabilities to support vdpa, maybe " @@ -533,25 +609,10 @@ mlx5_vdpa_dev_probe(struct mlx5_common_device *cdev) priv->num_lag_ports = attr->num_lag_ports; if (attr->num_lag_ports == 0) priv->num_lag_ports = 1; + pthread_mutex_init(&priv->vq_config_lock, NULL); priv->cdev = cdev; - for (retry = 0; retry < 7; retry++) { - priv->var = mlx5_glue->dv_alloc_var(priv->cdev->ctx, 0); - if (priv->var != NULL) - break; - DRV_LOG(WARNING, "Failed to allocate VAR, retry %d.\n", retry); - /* Wait Qemu release VAR during vdpa restart, 0.1 sec based. 
*/ - usleep(100000U << retry); - } - if (!priv->var) { - DRV_LOG(ERR, "Failed to allocate VAR %u.", errno); + if (mlx5_vdpa_create_dev_resources(priv)) goto error; - } - priv->err_intr_handle = - rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED); - if (priv->err_intr_handle == NULL) { - DRV_LOG(ERR, "Fail to allocate intr_handle"); - goto error; - } priv->vdev = rte_vdpa_register_device(cdev->dev, &mlx5_vdpa_ops); if (priv->vdev == NULL) { DRV_LOG(ERR, "Failed to register vDPA device."); @@ -560,19 +621,13 @@ mlx5_vdpa_dev_probe(struct mlx5_common_device *cdev) } mlx5_vdpa_config_get(cdev->dev->devargs, priv); SLIST_INIT(&priv->mr_list); - pthread_mutex_init(&priv->vq_config_lock, NULL); pthread_mutex_lock(&priv_list_lock); TAILQ_INSERT_TAIL(&priv_list, priv, next); pthread_mutex_unlock(&priv_list_lock); return 0; - error: - if (priv) { - if (priv->var) - mlx5_glue->dv_free_var(priv->var); - rte_intr_instance_free(priv->err_intr_handle); - rte_free(priv); - } + if (priv) + mlx5_vdpa_dev_release(priv); return -rte_errno; } @@ -592,22 +647,48 @@ mlx5_vdpa_dev_remove(struct mlx5_common_device *cdev) if (found) TAILQ_REMOVE(&priv_list, priv, next); pthread_mutex_unlock(&priv_list_lock); - if (found) { - if (priv->state == MLX5_VDPA_STATE_CONFIGURED) - mlx5_vdpa_dev_close(priv->vid); - if (priv->var) { - mlx5_glue->dv_free_var(priv->var); - priv->var = NULL; - } - if (priv->vdev) - rte_vdpa_unregister_device(priv->vdev); - pthread_mutex_destroy(&priv->vq_config_lock); - rte_intr_instance_free(priv->err_intr_handle); - rte_free(priv); - } + if (found) + mlx5_vdpa_dev_release(priv); return 0; } +static void +mlx5_vdpa_release_dev_resources(struct mlx5_vdpa_priv *priv) +{ + uint32_t i; + + mlx5_vdpa_event_qp_global_release(priv); + mlx5_vdpa_err_event_unset(priv); + if (priv->steer.tbl) + claim_zero(mlx5_glue->dr_destroy_flow_tbl(priv->steer.tbl)); + if (priv->steer.domain) + claim_zero(mlx5_glue->dr_destroy_domain(priv->steer.domain)); + if (priv->null_mr) + 
claim_zero(mlx5_glue->dereg_mr(priv->null_mr)); + for (i = 0; i < priv->num_lag_ports; i++) { + if (priv->tiss[i]) + claim_zero(mlx5_devx_cmd_destroy(priv->tiss[i])); + } + if (priv->td) + claim_zero(mlx5_devx_cmd_destroy(priv->td)); + if (priv->virtq_db_addr) + claim_zero(munmap(priv->virtq_db_addr, priv->var->length)); + if (priv->var) + mlx5_glue->dv_free_var(priv->var); +} + +static void +mlx5_vdpa_dev_release(struct mlx5_vdpa_priv *priv) +{ + if (priv->state == MLX5_VDPA_STATE_CONFIGURED) + mlx5_vdpa_dev_close(priv->vid); + mlx5_vdpa_release_dev_resources(priv); + if (priv->vdev) + rte_vdpa_unregister_device(priv->vdev); + pthread_mutex_destroy(&priv->vq_config_lock); + rte_free(priv); +} + static const struct rte_pci_id mlx5_vdpa_pci_id_map[] = { { RTE_PCI_DEVICE(PCI_VENDOR_ID_MELLANOX, diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h index cc83d7cba3d..e0ba20b953c 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.h +++ b/drivers/vdpa/mlx5/mlx5_vdpa.h @@ -233,6 +233,15 @@ int mlx5_vdpa_event_qp_create(struct mlx5_vdpa_priv *priv, uint16_t desc_n, */ void mlx5_vdpa_event_qp_destroy(struct mlx5_vdpa_event_qp *eqp); +/** + * Create all the event global resources. + * + * @param[in] priv + * The vdpa driver private structure. + */ +int +mlx5_vdpa_event_qp_global_prepare(struct mlx5_vdpa_priv *priv); + /** * Release all the event global resources. 
* diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c index f8d910b33f8..7167a98db0f 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c @@ -40,11 +40,9 @@ mlx5_vdpa_event_qp_global_release(struct mlx5_vdpa_priv *priv) } /* Prepare all the global resources for all the event objects.*/ -static int +int mlx5_vdpa_event_qp_global_prepare(struct mlx5_vdpa_priv *priv) { - if (priv->eventc) - return 0; priv->eventc = mlx5_os_devx_create_event_channel(priv->cdev->ctx, MLX5DV_DEVX_CREATE_EVENT_CHANNEL_FLAGS_OMIT_EV_DATA); if (!priv->eventc) { @@ -389,22 +387,30 @@ mlx5_vdpa_err_event_setup(struct mlx5_vdpa_priv *priv) flags = fcntl(priv->err_chnl->fd, F_GETFL); ret = fcntl(priv->err_chnl->fd, F_SETFL, flags | O_NONBLOCK); if (ret) { + rte_errno = errno; DRV_LOG(ERR, "Failed to change device event channel FD."); goto error; } - + priv->err_intr_handle = + rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED); + if (priv->err_intr_handle == NULL) { + DRV_LOG(ERR, "Fail to allocate intr_handle"); + goto error; + } if (rte_intr_fd_set(priv->err_intr_handle, priv->err_chnl->fd)) goto error; if (rte_intr_type_set(priv->err_intr_handle, RTE_INTR_HANDLE_EXT)) goto error; - if (rte_intr_callback_register(priv->err_intr_handle, - mlx5_vdpa_err_interrupt_handler, - priv)) { + ret = rte_intr_callback_register(priv->err_intr_handle, + mlx5_vdpa_err_interrupt_handler, + priv); + if (ret != 0) { rte_intr_fd_set(priv->err_intr_handle, 0); DRV_LOG(ERR, "Failed to register error interrupt for device %d.", priv->vid); + rte_errno = -ret; goto error; } else { DRV_LOG(DEBUG, "Registered error interrupt for device%d.", @@ -453,6 +459,7 @@ mlx5_vdpa_err_event_unset(struct mlx5_vdpa_priv *priv) mlx5_glue->devx_destroy_event_channel(priv->err_chnl); priv->err_chnl = NULL; } + rte_intr_instance_free(priv->err_intr_handle); } int @@ -575,8 +582,6 @@ mlx5_vdpa_event_qp_create(struct mlx5_vdpa_priv *priv, uint16_t desc_n, uint16_t 
log_desc_n = rte_log2_u32(desc_n); uint32_t ret; - if (mlx5_vdpa_event_qp_global_prepare(priv)) - return -1; if (mlx5_vdpa_cq_create(priv, log_desc_n, callfd, &eqp->cq)) return -1; attr.pd = priv->cdev->pdn; diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_mem.c b/drivers/vdpa/mlx5/mlx5_vdpa_mem.c index 599079500b0..62f5530e91d 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_mem.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_mem.c @@ -34,10 +34,6 @@ mlx5_vdpa_mem_dereg(struct mlx5_vdpa_priv *priv) SLIST_INIT(&priv->mr_list); if (priv->lm_mr.addr) mlx5_os_wrapped_mkey_destroy(&priv->lm_mr); - if (priv->null_mr) { - claim_zero(mlx5_glue->dereg_mr(priv->null_mr)); - priv->null_mr = NULL; - } if (priv->vmem) { free(priv->vmem); priv->vmem = NULL; @@ -196,13 +192,6 @@ mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv) if (!mem) return -rte_errno; priv->vmem = mem; - priv->null_mr = mlx5_glue->alloc_null_mr(priv->cdev->pd); - if (!priv->null_mr) { - DRV_LOG(ERR, "Failed to allocate null MR."); - ret = -errno; - goto error; - } - DRV_LOG(DEBUG, "Dump fill Mkey = %u.", priv->null_mr->lkey); for (i = 0; i < mem->nregions; i++) { reg = &mem->regions[i]; entry = rte_zmalloc(__func__, sizeof(*entry), 0); diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_steer.c b/drivers/vdpa/mlx5/mlx5_vdpa_steer.c index a0fd2776e57..e42868486e7 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_steer.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_steer.c @@ -45,14 +45,6 @@ void mlx5_vdpa_steer_unset(struct mlx5_vdpa_priv *priv) { mlx5_vdpa_rss_flows_destroy(priv); - if (priv->steer.tbl) { - claim_zero(mlx5_glue->dr_destroy_flow_tbl(priv->steer.tbl)); - priv->steer.tbl = NULL; - } - if (priv->steer.domain) { - claim_zero(mlx5_glue->dr_destroy_domain(priv->steer.domain)); - priv->steer.domain = NULL; - } if (priv->steer.rqt) { claim_zero(mlx5_devx_cmd_destroy(priv->steer.rqt)); priv->steer.rqt = NULL; @@ -248,11 +240,7 @@ mlx5_vdpa_steer_update(struct mlx5_vdpa_priv *priv) int ret = mlx5_vdpa_rqt_prepare(priv); if (ret == 0) { - 
mlx5_vdpa_rss_flows_destroy(priv); - if (priv->steer.rqt) { - claim_zero(mlx5_devx_cmd_destroy(priv->steer.rqt)); - priv->steer.rqt = NULL; - } + mlx5_vdpa_steer_unset(priv); } else if (ret < 0) { return ret; } else if (!priv->steer.rss[0].flow) { @@ -269,17 +257,6 @@ int mlx5_vdpa_steer_setup(struct mlx5_vdpa_priv *priv) { #ifdef HAVE_MLX5DV_DR - priv->steer.domain = mlx5_glue->dr_create_domain(priv->cdev->ctx, - MLX5DV_DR_DOMAIN_TYPE_NIC_RX); - if (!priv->steer.domain) { - DRV_LOG(ERR, "Failed to create Rx domain."); - goto error; - } - priv->steer.tbl = mlx5_glue->dr_create_flow_tbl(priv->steer.domain, 0); - if (!priv->steer.tbl) { - DRV_LOG(ERR, "Failed to create table 0 with Rx domain."); - goto error; - } if (mlx5_vdpa_steer_update(priv)) goto error; return 0; diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c index b1d584ca8b0..6bda9f1814a 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c @@ -3,7 +3,6 @@ */ #include #include -#include #include #include @@ -120,20 +119,6 @@ mlx5_vdpa_virtqs_release(struct mlx5_vdpa_priv *priv) if (virtq->counters) claim_zero(mlx5_devx_cmd_destroy(virtq->counters)); } - for (i = 0; i < priv->num_lag_ports; i++) { - if (priv->tiss[i]) { - claim_zero(mlx5_devx_cmd_destroy(priv->tiss[i])); - priv->tiss[i] = NULL; - } - } - if (priv->td) { - claim_zero(mlx5_devx_cmd_destroy(priv->td)); - priv->td = NULL; - } - if (priv->virtq_db_addr) { - claim_zero(munmap(priv->virtq_db_addr, priv->var->length)); - priv->virtq_db_addr = NULL; - } priv->features = 0; memset(priv->virtqs, 0, sizeof(*virtq) * priv->nr_virtqs); priv->nr_virtqs = 0; @@ -462,8 +447,6 @@ mlx5_vdpa_features_validate(struct mlx5_vdpa_priv *priv) int mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv) { - struct mlx5_devx_tis_attr tis_attr = {0}; - struct ibv_context *ctx = priv->cdev->ctx; uint32_t i; uint16_t nr_vring = rte_vhost_get_vring_num(priv->vid); int ret = 
rte_vhost_get_negotiated_features(priv->vid, &priv->features); @@ -485,33 +468,6 @@ mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv) (int)nr_vring); return -1; } - /* Always map the entire page. */ - priv->virtq_db_addr = mmap(NULL, priv->var->length, PROT_READ | - PROT_WRITE, MAP_SHARED, ctx->cmd_fd, - priv->var->mmap_off); - if (priv->virtq_db_addr == MAP_FAILED) { - DRV_LOG(ERR, "Failed to map doorbell page %u.", errno); - priv->virtq_db_addr = NULL; - goto error; - } else { - DRV_LOG(DEBUG, "VAR address of doorbell mapping is %p.", - priv->virtq_db_addr); - } - priv->td = mlx5_devx_cmd_create_td(ctx); - if (!priv->td) { - DRV_LOG(ERR, "Failed to create transport domain."); - return -rte_errno; - } - tis_attr.transport_domain = priv->td->id; - for (i = 0; i < priv->num_lag_ports; i++) { - /* 0 is auto affinity, non-zero value to propose port. */ - tis_attr.lag_tx_port_affinity = i + 1; - priv->tiss[i] = mlx5_devx_cmd_create_tis(ctx, &tis_attr); - if (!priv->tiss[i]) { - DRV_LOG(ERR, "Failed to create TIS %u.", i); - goto error; - } - } priv->nr_virtqs = nr_vring; for (i = 0; i < nr_vring; i++) if (priv->virtqs[i].enable && mlx5_vdpa_virtq_setup(priv, i))

From patchwork Thu Feb 24 14:38:07 2022
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 108292
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Xueming Li
To: 
CC: , Matan Azrad , "Viacheslav Ovsiienko" 
Subject: [PATCH v1 5/7] vdpa/mlx5: cache and reuse hardware resources
Date: Thu, 24 Feb 2022 22:38:07 +0800
Message-ID: <20220224143809.1977642-6-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20220224143809.1977642-1-xuemingl@nvidia.com>
References:
<20220224132820.1939650-1-xuemingl@nvidia.com> <20220224143809.1977642-1-xuemingl@nvidia.com>
During device suspend and resume, the resources normally do not change. When large resources are allocated to a VM, such as a huge memory size or many queues, the time spent releasing and recreating them becomes significant. To speed this up, this patch reuses resources such as the VM MR and virtq memory when they have not changed.

Signed-off-by: Xueming Li --- drivers/vdpa/mlx5/mlx5_vdpa.c | 11 ++++- drivers/vdpa/mlx5/mlx5_vdpa.h | 12 ++++- drivers/vdpa/mlx5/mlx5_vdpa_mem.c | 27 ++++++++++- drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 73 +++++++++++++++++++++-------- 4 files changed, 99 insertions(+), 24 deletions(-) diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c index ee35c36624b..f794cb9bd61 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa.c @@ -241,6 +241,13 @@ mlx5_vdpa_mtu_set(struct mlx5_vdpa_priv *priv) return kern_mtu == vhost_mtu ?
0 : -1; } +static void +mlx5_vdpa_dev_cache_clean(struct mlx5_vdpa_priv *priv) +{ + mlx5_vdpa_virtqs_cleanup(priv); + mlx5_vdpa_mem_dereg(priv); +} + static int mlx5_vdpa_dev_close(int vid) { @@ -260,7 +267,8 @@ mlx5_vdpa_dev_close(int vid) } mlx5_vdpa_steer_unset(priv); mlx5_vdpa_virtqs_release(priv); - mlx5_vdpa_mem_dereg(priv); + if (priv->lm_mr.addr) + mlx5_os_wrapped_mkey_destroy(&priv->lm_mr); priv->state = MLX5_VDPA_STATE_PROBED; priv->vid = 0; /* The mutex may stay locked after event thread cancel - initiate it. */ @@ -657,6 +665,7 @@ mlx5_vdpa_release_dev_resources(struct mlx5_vdpa_priv *priv) { uint32_t i; + mlx5_vdpa_dev_cache_clean(priv); mlx5_vdpa_event_qp_global_release(priv); mlx5_vdpa_err_event_unset(priv); if (priv->steer.tbl) diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h index e0ba20b953c..540bf87a352 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.h +++ b/drivers/vdpa/mlx5/mlx5_vdpa.h @@ -289,13 +289,21 @@ int mlx5_vdpa_err_event_setup(struct mlx5_vdpa_priv *priv); void mlx5_vdpa_err_event_unset(struct mlx5_vdpa_priv *priv); /** - * Release a virtq and all its related resources. + * Release virtqs and resources except that to be reused. * * @param[in] priv * The vdpa driver private structure. */ void mlx5_vdpa_virtqs_release(struct mlx5_vdpa_priv *priv); +/** + * Cleanup cached resources of all virtqs. + * + * @param[in] priv + * The vdpa driver private structure. + */ +void mlx5_vdpa_virtqs_cleanup(struct mlx5_vdpa_priv *priv); + /** * Create all the HW virtqs resources and all their related resources. * @@ -323,7 +331,7 @@ int mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv); int mlx5_vdpa_virtq_enable(struct mlx5_vdpa_priv *priv, int index, int enable); /** - * Unset steering and release all its related resources- stop traffic. + * Unset steering - stop traffic. * * @param[in] priv * The vdpa driver private structure. 
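The reuse decision in this series hinges on comparing the cached guest memory layout with the one freshly fetched from vhost: if every region's guest physical address and size match, the registered MRs can be kept. A standalone sketch of that comparison (the `vm_region` struct and `mem_layout_equal` name are hypothetical stand-ins, modeled on the patch's `mlx5_vdpa_mem_cmp`):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for one guest memory region, mirroring the
 * guest_phys_addr/size fields that the patch's comparison inspects. */
struct vm_region {
	uint64_t guest_phys_addr;
	uint64_t size;
};

/* Return true when both layouts describe identical regions, i.e. the
 * cached MR resources can be reused instead of re-registered. */
static bool
mem_layout_equal(const struct vm_region *a, size_t a_n,
		 const struct vm_region *b, size_t b_n)
{
	size_t i;

	if (a_n != b_n)
		return false;
	for (i = 0; i < a_n; i++) {
		if (a[i].guest_phys_addr != b[i].guest_phys_addr ||
		    a[i].size != b[i].size)
			return false;
	}
	return true;
}
```

On resume, a layout match lets the driver free the freshly queried table and keep the existing registrations; any mismatch falls back to the full deregister/re-register cycle, as the mlx5_vdpa_mem.c hunks below do.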
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_mem.c b/drivers/vdpa/mlx5/mlx5_vdpa_mem.c
index 62f5530e91d..d6e3dd664b5 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_mem.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_mem.c
@@ -32,8 +32,6 @@ mlx5_vdpa_mem_dereg(struct mlx5_vdpa_priv *priv)
 		entry = next;
 	}
 	SLIST_INIT(&priv->mr_list);
-	if (priv->lm_mr.addr)
-		mlx5_os_wrapped_mkey_destroy(&priv->lm_mr);
 	if (priv->vmem) {
 		free(priv->vmem);
 		priv->vmem = NULL;
@@ -149,6 +147,23 @@ mlx5_vdpa_vhost_mem_regions_prepare(int vid, uint8_t *mode, uint64_t *mem_size,
 	return mem;
 }
 
+static int
+mlx5_vdpa_mem_cmp(struct rte_vhost_memory *mem1, struct rte_vhost_memory *mem2)
+{
+	uint32_t i;
+
+	if (mem1->nregions != mem2->nregions)
+		return -1;
+	for (i = 0; i < mem1->nregions; i++) {
+		if (mem1->regions[i].guest_phys_addr !=
+		    mem2->regions[i].guest_phys_addr)
+			return -1;
+		if (mem1->regions[i].size != mem2->regions[i].size)
+			return -1;
+	}
+	return 0;
+}
+
 #define KLM_SIZE_MAX_ALIGN(sz) ((sz) > MLX5_MAX_KLM_BYTE_COUNT ? \
 				MLX5_MAX_KLM_BYTE_COUNT : (sz))
 
@@ -191,6 +206,14 @@ mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv)
 	if (!mem)
 		return -rte_errno;
+	if (priv->vmem != NULL) {
+		if (mlx5_vdpa_mem_cmp(mem, priv->vmem) == 0) {
+			/* VM memory not changed, reuse resources. */
+			free(mem);
+			return 0;
+		}
+		mlx5_vdpa_mem_dereg(priv);
+	}
 	priv->vmem = mem;
 	for (i = 0; i < mem->nregions; i++) {
 		reg = &mem->regions[i];
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index 6bda9f1814a..c42846ecb3c 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -66,10 +66,33 @@ mlx5_vdpa_virtq_kick_handler(void *cb_arg)
 	DRV_LOG(DEBUG, "Ring virtq %u doorbell.", virtq->index);
 }
 
+/* Release cached VQ resources.
+ */
+void
+mlx5_vdpa_virtqs_cleanup(struct mlx5_vdpa_priv *priv)
+{
+	unsigned int i, j;
+
+	for (i = 0; i < priv->caps.max_num_virtio_queues * 2; i++) {
+		struct mlx5_vdpa_virtq *virtq = &priv->virtqs[i];
+
+		for (j = 0; j < RTE_DIM(virtq->umems); ++j) {
+			if (virtq->umems[j].obj) {
+				claim_zero(mlx5_glue->devx_umem_dereg
+					(virtq->umems[j].obj));
+				virtq->umems[j].obj = NULL;
+			}
+			if (virtq->umems[j].buf) {
+				rte_free(virtq->umems[j].buf);
+				virtq->umems[j].buf = NULL;
+			}
+			virtq->umems[j].size = 0;
+		}
+	}
+}
+
 static int
 mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq)
 {
-	unsigned int i;
 	int ret = -EAGAIN;
 
 	if (rte_intr_fd_get(virtq->intr_handle) >= 0) {
@@ -94,13 +117,6 @@ mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq)
 		claim_zero(mlx5_devx_cmd_destroy(virtq->virtq));
 	}
 	virtq->virtq = NULL;
-	for (i = 0; i < RTE_DIM(virtq->umems); ++i) {
-		if (virtq->umems[i].obj)
-			claim_zero(mlx5_glue->devx_umem_dereg
-					(virtq->umems[i].obj));
-		rte_free(virtq->umems[i].buf);
-	}
-	memset(&virtq->umems, 0, sizeof(virtq->umems));
 	if (virtq->eqp.fw_qp)
 		mlx5_vdpa_event_qp_destroy(&virtq->eqp);
 	virtq->notifier_state = MLX5_VDPA_NOTIFIER_STATE_DISABLED;
@@ -120,7 +136,6 @@ mlx5_vdpa_virtqs_release(struct mlx5_vdpa_priv *priv)
 		claim_zero(mlx5_devx_cmd_destroy(virtq->counters));
 	}
 	priv->features = 0;
-	memset(priv->virtqs, 0, sizeof(*virtq) * priv->nr_virtqs);
 	priv->nr_virtqs = 0;
 }
 
@@ -215,6 +230,8 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
 	ret = rte_vhost_get_vhost_vring(priv->vid, index, &vq);
 	if (ret)
 		return -1;
+	if (vq.size == 0)
+		return 0;
 	virtq->index = index;
 	virtq->vq_size = vq.size;
 	attr.tso_ipv4 = !!(priv->features & (1ULL << VIRTIO_NET_F_HOST_TSO4));
@@ -259,24 +276,42 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
 	}
 	/* Setup 3 UMEMs for each virtq.
 	 */
 	for (i = 0; i < RTE_DIM(virtq->umems); ++i) {
-		virtq->umems[i].size = priv->caps.umems[i].a * vq.size +
-					priv->caps.umems[i].b;
-		virtq->umems[i].buf = rte_zmalloc(__func__,
-				virtq->umems[i].size, 4096);
-		if (!virtq->umems[i].buf) {
+		uint32_t size;
+		void *buf;
+		struct mlx5dv_devx_umem *obj;
+
+		size = priv->caps.umems[i].a * vq.size + priv->caps.umems[i].b;
+		if (virtq->umems[i].size == size &&
+		    virtq->umems[i].obj != NULL) {
+			/* Reuse registered memory. */
+			memset(virtq->umems[i].buf, 0, size);
+			goto reuse;
+		}
+		if (virtq->umems[i].obj)
+			claim_zero(mlx5_glue->devx_umem_dereg
+				   (virtq->umems[i].obj));
+		if (virtq->umems[i].buf)
+			rte_free(virtq->umems[i].buf);
+		virtq->umems[i].size = 0;
+		virtq->umems[i].obj = NULL;
+		virtq->umems[i].buf = NULL;
+		buf = rte_zmalloc(__func__, size, 4096);
+		if (buf == NULL) {
 			DRV_LOG(ERR, "Cannot allocate umem %d memory for virtq"
 				" %u.", i, index);
 			goto error;
 		}
-		virtq->umems[i].obj = mlx5_glue->devx_umem_reg(priv->cdev->ctx,
-							virtq->umems[i].buf,
-							virtq->umems[i].size,
-							IBV_ACCESS_LOCAL_WRITE);
-		if (!virtq->umems[i].obj) {
+		obj = mlx5_glue->devx_umem_reg(priv->cdev->ctx, buf, size,
+					       IBV_ACCESS_LOCAL_WRITE);
+		if (obj == NULL) {
 			DRV_LOG(ERR, "Failed to register umem %d for virtq %u.",
 				i, index);
 			goto error;
 		}
+		virtq->umems[i].size = size;
+		virtq->umems[i].buf = buf;
+		virtq->umems[i].obj = obj;
+reuse:
 		attr.umems[i].id = virtq->umems[i].obj->umem_id;
 		attr.umems[i].offset = 0;
 		attr.umems[i].size = virtq->umems[i].size;

From patchwork Thu Feb 24 14:38:08 2022
From: Xueming Li
Cc: Matan Azrad, Viacheslav Ovsiienko
Subject: [PATCH v1 6/7] vdpa/mlx5: support device cleanup callback
Date: Thu, 24 Feb 2022 22:38:08 +0800
Message-ID: <20220224143809.1977642-7-xuemingl@nvidia.com>
In-Reply-To: <20220224143809.1977642-1-xuemingl@nvidia.com>
References: <20220224132820.1939650-1-xuemingl@nvidia.com> <20220224143809.1977642-1-xuemingl@nvidia.com>
This patch supports the device cleanup callback API, which is called when the device is disconnected from the VM. Cached resources such as the VM MR and VQ memory are released.
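The interplay between the `connected` flag and the device state decides when the cached resources are finally freed: a close with a live front-end keeps the cache for reuse, while a disconnect (or a close with no front-end) drops it. A hypothetical reduction of that logic, with simplified types that only model the state machine, not the real driver structures:

```c
#include <stdbool.h>

/* Hypothetical reduction: cached VM MR and VQ memory survive dev_close()
 * while the vhost connection is alive, and are freed either by the
 * cleanup callback after a disconnect or by a close when no front-end
 * is connected. */
enum dev_state { STATE_PROBED, STATE_CONFIGURED };

struct dev {
	enum dev_state state;
	bool connected;
	bool cache_allocated;
};

static void cache_clean(struct dev *d) { d->cache_allocated = false; }

static void dev_config(struct dev *d)
{
	d->connected = true;
	d->state = STATE_CONFIGURED;
	d->cache_allocated = true;	/* MRs + VQ umems registered */
}

static void dev_close(struct dev *d)
{
	d->state = STATE_PROBED;
	if (!d->connected)		/* no front-end left: nothing to reuse */
		cache_clean(d);
}

/* Mirrors the .dev_cleanup vDPA op: called on vhost disconnect. */
static void dev_cleanup(struct dev *d)
{
	if (d->state == STATE_PROBED)
		cache_clean(d);
	d->connected = false;
}
```

A suspend/resume cycle is then config → close → config: the close sees `connected` still true and keeps the cache, which is what makes the resume fast.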
Signed-off-by: Xueming Li
---
 drivers/vdpa/mlx5/mlx5_vdpa.c | 23 +++++++++++++++++++++++
 drivers/vdpa/mlx5/mlx5_vdpa.h |  1 +
 2 files changed, 24 insertions(+)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index f794cb9bd61..a64445cd8b5 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -270,6 +270,8 @@ mlx5_vdpa_dev_close(int vid)
 	if (priv->lm_mr.addr)
 		mlx5_os_wrapped_mkey_destroy(&priv->lm_mr);
 	priv->state = MLX5_VDPA_STATE_PROBED;
+	if (!priv->connected)
+		mlx5_vdpa_dev_cache_clean(priv);
 	priv->vid = 0;
 	/* The mutex may stay locked after event thread cancel - initiate it. */
 	pthread_mutex_init(&priv->vq_config_lock, NULL);
@@ -294,6 +296,7 @@ mlx5_vdpa_dev_config(int vid)
 		return -1;
 	}
 	priv->vid = vid;
+	priv->connected = true;
 	if (mlx5_vdpa_mtu_set(priv))
 		DRV_LOG(WARNING, "MTU cannot be set on device %s.",
 				vdev->device->name);
@@ -431,12 +434,32 @@ mlx5_vdpa_reset_stats(struct rte_vdpa_device *vdev, int qid)
 	return mlx5_vdpa_virtq_stats_reset(priv, qid);
 }
 
+static int
+mlx5_vdpa_dev_clean(int vid)
+{
+	struct rte_vdpa_device *vdev = rte_vhost_get_vdpa_device(vid);
+	struct mlx5_vdpa_priv *priv;
+
+	if (vdev == NULL)
+		return -1;
+	priv = mlx5_vdpa_find_priv_resource_by_vdev(vdev);
+	if (priv == NULL) {
+		DRV_LOG(ERR, "Invalid vDPA device: %s.", vdev->device->name);
+		return -1;
+	}
+	if (priv->state == MLX5_VDPA_STATE_PROBED)
+		mlx5_vdpa_dev_cache_clean(priv);
+	priv->connected = false;
+	return 0;
+}
+
 static struct rte_vdpa_dev_ops mlx5_vdpa_ops = {
 	.get_queue_num = mlx5_vdpa_get_queue_num,
 	.get_features = mlx5_vdpa_get_vdpa_features,
 	.get_protocol_features = mlx5_vdpa_get_protocol_features,
 	.dev_conf = mlx5_vdpa_dev_config,
 	.dev_close = mlx5_vdpa_dev_close,
+	.dev_cleanup = mlx5_vdpa_dev_clean,
 	.set_vring_state = mlx5_vdpa_set_vring_state,
 	.set_features = mlx5_vdpa_features_set,
 	.migration_done = NULL,
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index
540bf87a352..24bafe85b44 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -121,6 +121,7 @@ enum mlx5_dev_state {
 
 struct mlx5_vdpa_priv {
 	TAILQ_ENTRY(mlx5_vdpa_priv) next;
+	bool connected;
 	enum mlx5_dev_state state;
 	pthread_mutex_t vq_config_lock;
 	uint64_t no_traffic_counter;

From patchwork Thu Feb 24 14:38:09 2022
From: Xueming Li
Cc: Matan Azrad, Viacheslav Ovsiienko
Subject: [PATCH v1 7/7] vdpa/mlx5: make statistics counter persistent
Date: Thu, 24 Feb 2022 22:38:09 +0800
Message-ID: <20220224143809.1977642-8-xuemingl@nvidia.com>
In-Reply-To: <20220224143809.1977642-1-xuemingl@nvidia.com>
References: <20220224132820.1939650-1-xuemingl@nvidia.com> <20220224143809.1977642-1-xuemingl@nvidia.com>
To speed up device suspend and resume, make the statistics counters persistent across reconfiguration until the device is removed.

Signed-off-by: Xueming Li
---
 doc/guides/vdpadevs/mlx5.rst        |  6 ++++++
 drivers/vdpa/mlx5/mlx5_vdpa.c       | 19 +++++++----------
 drivers/vdpa/mlx5/mlx5_vdpa.h       |  1 +
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 32 +++++++++++------------------
 4 files changed, 26 insertions(+), 32 deletions(-)

diff --git a/doc/guides/vdpadevs/mlx5.rst b/doc/guides/vdpadevs/mlx5.rst
index 30f0b62eb41..070208d3952 100644
--- a/doc/guides/vdpadevs/mlx5.rst
+++ b/doc/guides/vdpadevs/mlx5.rst
@@ -182,3 +182,9 @@ Upon potential hardware errors, mlx5 PMD try to recover, give up if failed
 3 times in 3 seconds, virtq will be put in disable state. User should check log
 to get error information, or query vdpa statistics counter to know error type
 and count report.
+
+Statistics
+^^^^^^^^^^
+
+The device statistics counter persists in reconfiguration until the device gets
+removed. User can reset counters by calling function rte_vdpa_reset_stats().
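The persistence scheme in the code below rests on two copies of the counters: the cumulative values read from hardware, and a baseline captured at reset time; the reported statistic is their difference, so "reset" never touches the hardware counter object itself. A minimal sketch with hypothetical struct names (the real attribute struct is `mlx5_devx_virtio_q_couners_attr`):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical counter attribute: hardware counters are cumulative, so
 * the driver keeps a "reset" baseline and reports the difference. */
struct q_counters {
	uint64_t received_desc;
	uint64_t completed_desc;
};

struct virtq_stats {
	struct q_counters hw;		/* last values read from hardware */
	struct q_counters reset;	/* baseline captured at reset time */
};

/* Reported value = cumulative HW counter minus the reset baseline. */
static uint64_t stat_received(const struct virtq_stats *s)
{
	return s->hw.received_desc - s->reset.received_desc;
}

/* Resetting copies the current HW counters into the baseline; the HW
 * counter object itself persists across reconfiguration. */
static void stats_reset(struct virtq_stats *s)
{
	s->reset = s->hw;
}
```

Because only the baseline changes on reset, the counter object can stay allocated from first use until device removal, which is what lets this patch drop the destroy/recreate on every reconfiguration.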
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index a64445cd8b5..e9038e3904e 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -388,12 +388,7 @@ mlx5_vdpa_get_stats(struct rte_vdpa_device *vdev, int qid,
 		DRV_LOG(ERR, "Invalid device: %s.", vdev->device->name);
 		return -ENODEV;
 	}
-	if (priv->state == MLX5_VDPA_STATE_PROBED) {
-		DRV_LOG(ERR, "Device %s was not configured.",
-				vdev->device->name);
-		return -ENODATA;
-	}
-	if (qid >= (int)priv->nr_virtqs) {
+	if (qid >= (int)priv->caps.max_num_virtio_queues * 2) {
 		DRV_LOG(ERR, "Too big vring id: %d for device %s.", qid,
 			vdev->device->name);
 		return -E2BIG;
@@ -416,12 +411,7 @@ mlx5_vdpa_reset_stats(struct rte_vdpa_device *vdev, int qid)
 		DRV_LOG(ERR, "Invalid device: %s.", vdev->device->name);
 		return -ENODEV;
 	}
-	if (priv->state == MLX5_VDPA_STATE_PROBED) {
-		DRV_LOG(ERR, "Device %s was not configured.",
-				vdev->device->name);
-		return -ENODATA;
-	}
-	if (qid >= (int)priv->nr_virtqs) {
+	if (qid >= (int)priv->caps.max_num_virtio_queues * 2) {
 		DRV_LOG(ERR, "Too big vring id: %d for device %s.", qid,
 			vdev->device->name);
 		return -E2BIG;
@@ -689,6 +679,11 @@ mlx5_vdpa_release_dev_resources(struct mlx5_vdpa_priv *priv)
 	uint32_t i;
 
 	mlx5_vdpa_dev_cache_clean(priv);
+	for (i = 0; i < priv->caps.max_num_virtio_queues * 2; i++) {
+		if (!priv->virtqs[i].counters)
+			continue;
+		claim_zero(mlx5_devx_cmd_destroy(priv->virtqs[i].counters));
+	}
 	mlx5_vdpa_event_qp_global_release(priv);
 	mlx5_vdpa_err_event_unset(priv);
 	if (priv->steer.tbl)
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index 24bafe85b44..e7f3319f896 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -92,6 +92,7 @@ struct mlx5_vdpa_virtq {
 	struct rte_intr_handle *intr_handle;
 	uint64_t err_time[3]; /* RDTSC time of recent errors.
 */
 	uint32_t n_retry;
+	struct mlx5_devx_virtio_q_couners_attr stats;
 	struct mlx5_devx_virtio_q_couners_attr reset;
 };
 
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index c42846ecb3c..d2c91b25db1 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -127,14 +127,9 @@
 void
 mlx5_vdpa_virtqs_release(struct mlx5_vdpa_priv *priv)
 {
 	int i;
-	struct mlx5_vdpa_virtq *virtq;
 
-	for (i = 0; i < priv->nr_virtqs; i++) {
-		virtq = &priv->virtqs[i];
-		mlx5_vdpa_virtq_unset(virtq);
-		if (virtq->counters)
-			claim_zero(mlx5_devx_cmd_destroy(virtq->counters));
-	}
+	for (i = 0; i < priv->nr_virtqs; i++)
+		mlx5_vdpa_virtq_unset(&priv->virtqs[i]);
 	priv->features = 0;
 	priv->nr_virtqs = 0;
 }
@@ -590,7 +585,7 @@ mlx5_vdpa_virtq_stats_get(struct mlx5_vdpa_priv *priv, int qid,
 			  struct rte_vdpa_stat *stats, unsigned int n)
 {
 	struct mlx5_vdpa_virtq *virtq = &priv->virtqs[qid];
-	struct mlx5_devx_virtio_q_couners_attr attr = {0};
+	struct mlx5_devx_virtio_q_couners_attr *attr = &virtq->stats;
 	int ret;
 
 	if (!virtq->counters) {
@@ -598,7 +593,7 @@ mlx5_vdpa_virtq_stats_get(struct mlx5_vdpa_priv *priv, int qid,
 			"is invalid.", qid);
 		return -EINVAL;
 	}
-	ret = mlx5_devx_cmd_query_virtio_q_counters(virtq->counters, &attr);
+	ret = mlx5_devx_cmd_query_virtio_q_counters(virtq->counters, attr);
 	if (ret) {
 		DRV_LOG(ERR, "Failed to read virtq %d stats from HW.", qid);
 		return ret;
@@ -608,37 +603,37 @@ mlx5_vdpa_virtq_stats_get(struct mlx5_vdpa_priv *priv, int qid,
 		return ret;
 	stats[MLX5_VDPA_STATS_RECEIVED_DESCRIPTORS] = (struct rte_vdpa_stat) {
 		.id = MLX5_VDPA_STATS_RECEIVED_DESCRIPTORS,
-		.value = attr.received_desc - virtq->reset.received_desc,
+		.value = attr->received_desc - virtq->reset.received_desc,
 	};
 	if (ret == MLX5_VDPA_STATS_COMPLETED_DESCRIPTORS)
 		return ret;
 	stats[MLX5_VDPA_STATS_COMPLETED_DESCRIPTORS] = (struct rte_vdpa_stat) {
 		.id = MLX5_VDPA_STATS_COMPLETED_DESCRIPTORS,
-		.value = attr.completed_desc -
			 virtq->reset.completed_desc,
+		.value = attr->completed_desc - virtq->reset.completed_desc,
 	};
 	if (ret == MLX5_VDPA_STATS_BAD_DESCRIPTOR_ERRORS)
 		return ret;
 	stats[MLX5_VDPA_STATS_BAD_DESCRIPTOR_ERRORS] = (struct rte_vdpa_stat) {
 		.id = MLX5_VDPA_STATS_BAD_DESCRIPTOR_ERRORS,
-		.value = attr.bad_desc_errors - virtq->reset.bad_desc_errors,
+		.value = attr->bad_desc_errors - virtq->reset.bad_desc_errors,
 	};
 	if (ret == MLX5_VDPA_STATS_EXCEED_MAX_CHAIN)
 		return ret;
 	stats[MLX5_VDPA_STATS_EXCEED_MAX_CHAIN] = (struct rte_vdpa_stat) {
 		.id = MLX5_VDPA_STATS_EXCEED_MAX_CHAIN,
-		.value = attr.exceed_max_chain - virtq->reset.exceed_max_chain,
+		.value = attr->exceed_max_chain - virtq->reset.exceed_max_chain,
 	};
 	if (ret == MLX5_VDPA_STATS_INVALID_BUFFER)
 		return ret;
 	stats[MLX5_VDPA_STATS_INVALID_BUFFER] = (struct rte_vdpa_stat) {
 		.id = MLX5_VDPA_STATS_INVALID_BUFFER,
-		.value = attr.invalid_buffer - virtq->reset.invalid_buffer,
+		.value = attr->invalid_buffer - virtq->reset.invalid_buffer,
 	};
 	if (ret == MLX5_VDPA_STATS_COMPLETION_ERRORS)
 		return ret;
 	stats[MLX5_VDPA_STATS_COMPLETION_ERRORS] = (struct rte_vdpa_stat) {
 		.id = MLX5_VDPA_STATS_COMPLETION_ERRORS,
-		.value = attr.error_cqes - virtq->reset.error_cqes,
+		.value = attr->error_cqes - virtq->reset.error_cqes,
 	};
 	return ret;
 }
@@ -649,11 +644,8 @@ mlx5_vdpa_virtq_stats_reset(struct mlx5_vdpa_priv *priv, int qid)
 	struct mlx5_vdpa_virtq *virtq = &priv->virtqs[qid];
 	int ret;
 
-	if (!virtq->counters) {
-		DRV_LOG(ERR, "Failed to read virtq %d statistics - virtq "
-			"is invalid.", qid);
-		return -EINVAL;
-	}
+	if (virtq->counters == NULL) /* VQ not enabled. */
+		return 0;
 	ret = mlx5_devx_cmd_query_virtio_q_counters(virtq->counters,
 						    &virtq->reset);
 	if (ret