From patchwork Fri Apr 8 07:55:51 2022
X-Patchwork-Submitter: Li Zhang
X-Patchwork-Id: 109474
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Li Zhang
To: Maxime Coquelin, Chenbo Xia, Xiaolong Ye, Xiao Wang
Cc: Yajun Wu
Subject: [RFC 01/15] examples/vdpa: fix vDPA device remove
Date: Fri, 8 Apr 2022 10:55:51 +0300
Message-ID: <20220408075606.33056-2-lizh@nvidia.com>
In-Reply-To: <20220408075606.33056-1-lizh@nvidia.com>
References: <20220408075606.33056-1-lizh@nvidia.com>
From: Yajun Wu

Add a call to rte_dev_remove() on vDPA example application exit;
otherwise rte_dev_remove() is never called and the devices probed by
the application stay attached.

Fixes: edbed86d1cc ("examples/vdpa: introduce a new sample for vDPA")
Cc: stable@dpdk.org

Signed-off-by: Yajun Wu
---
 examples/vdpa/main.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/examples/vdpa/main.c b/examples/vdpa/main.c
index bd66deca85..19753f6e09 100644
--- a/examples/vdpa/main.c
+++ b/examples/vdpa/main.c
@@ -593,6 +593,10 @@ main(int argc, char *argv[])
 		vdpa_sample_quit();
 	}
 
+	RTE_DEV_FOREACH(dev, "class=vdpa", &dev_iter) {
+		rte_dev_remove(dev);
+	}
+
 	/* clean up the EAL */
 	rte_eal_cleanup();

From patchwork Fri Apr 8 07:55:52 2022
X-Patchwork-Submitter: Li Zhang
X-Patchwork-Id: 109475
X-Patchwork-Delegate: maxime.coquelin@redhat.com
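The fix in RFC 01/15 above is an iterate-and-remove pass over every vdpa-class device at exit. As a rough, self-contained analogy in plain C (none of these `fake_*` names are DPDK API; they stand in for `RTE_DEV_FOREACH` and `rte_dev_remove()`):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the bus/device list DPDK iterates. */
struct fake_dev {
	struct fake_dev *next;
};

struct fake_bus {
	struct fake_dev *head;
};

/* "Probe" one device onto the bus. */
static struct fake_dev *
fake_probe(struct fake_bus *bus)
{
	struct fake_dev *d = calloc(1, sizeof(*d));

	if (d == NULL)
		return NULL;
	d->next = bus->head;
	bus->head = d;
	return d;
}

/*
 * Mirror of the exit loop in the patch: walk every probed device and
 * remove (free) it, so nothing stays attached after the app exits.
 */
static int
fake_remove_all(struct fake_bus *bus)
{
	int removed = 0;

	while (bus->head != NULL) {
		struct fake_dev *d = bus->head;

		bus->head = d->next;
		free(d);
		removed++;
	}
	return removed;
}
```

The point of the patch is exactly this: without an explicit removal loop before `rte_eal_cleanup()`, nothing ever detaches the probed devices.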
From: Li Zhang
Cc: Yajun Wu
Subject: [RFC 02/15] vdpa/mlx5: support pre create virtq resource
Date: Fri, 8 Apr 2022 10:55:52 +0300
Message-ID: <20220408075606.33056-3-lizh@nvidia.com>
In-Reply-To: <20220408075606.33056-1-lizh@nvidia.com>
References: <20220408075606.33056-1-lizh@nvidia.com>
From: Yajun Wu

The motivation of this change is to reduce vDPA device queue creation
time by creating some queue resources at vDPA device probe stage.

In a VM live migration scenario, this saves about 0.8 ms per queue
creation and thus reduces the live-migration network downtime.

To create queue resources (umem/counter) in advance, the driver needs
to know the virtio queue depth and the maximum number of queues the VM
will use. Introduce two new devargs: queues (max queue pair number) and
queue_size (queue depth). Both arguments must be provided; if only one
is given, it is ignored and nothing is pre-created.

The queues and queue_size values must also match the vhost
configuration the driver receives later. Otherwise the pre-created
resources are either wasted or missing, or must be destroyed and
recreated (in case of a queue_size mismatch).

Pre-created umem/counter resources are kept alive until vDPA device
removal.

Signed-off-by: Yajun Wu
---
 doc/guides/vdpadevs/mlx5.rst  | 14 +++++++
 drivers/vdpa/mlx5/mlx5_vdpa.c | 75 ++++++++++++++++++++++++++++++++++-
 drivers/vdpa/mlx5/mlx5_vdpa.h |  2 +
 3 files changed, 89 insertions(+), 2 deletions(-)

diff --git a/doc/guides/vdpadevs/mlx5.rst b/doc/guides/vdpadevs/mlx5.rst
index 3ded142311..0ad77bf535 100644
--- a/doc/guides/vdpadevs/mlx5.rst
+++ b/doc/guides/vdpadevs/mlx5.rst
@@ -101,6 +101,20 @@ for an additional list of options shared with other mlx5 drivers.
 
   - 0, HW default.
 
+- ``queue_size`` parameter [int]
+
+  - 1 - 1024, virtio queue depth for pre-creating queue resources, to speed up
+    first-time queue creation. Set it together with the ``queues`` devarg.
+
+  - 0, default value, no virtq resources are pre-created.
+
+- ``queues`` parameter [int]
+
+  - 1 - 128, max number of virtio queue pairs (each including 1 Rx queue and
+    1 Tx queue) to pre-create queue resources for, to speed up first-time
+    queue creation. Set it together with the ``queue_size`` devarg.
+
+  - 0, default value, no virtq resources are pre-created.
 
 Error handling
 ^^^^^^^^^^^^^^
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index 534ba64b02..57f9b05e35 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -244,7 +244,9 @@ mlx5_vdpa_mtu_set(struct mlx5_vdpa_priv *priv)
 static void
 mlx5_vdpa_dev_cache_clean(struct mlx5_vdpa_priv *priv)
 {
-	mlx5_vdpa_virtqs_cleanup(priv);
+	/* Clean pre-created resources in dev removal only. */
+	if (!priv->queues)
+		mlx5_vdpa_virtqs_cleanup(priv);
 	mlx5_vdpa_mem_dereg(priv);
 }
 
@@ -494,6 +496,12 @@ mlx5_vdpa_args_check_handler(const char *key, const char *val, void *opaque)
 		priv->hw_max_latency_us = (uint32_t)tmp;
 	} else if (strcmp(key, "hw_max_pending_comp") == 0) {
 		priv->hw_max_pending_comp = (uint32_t)tmp;
+	} else if (strcmp(key, "queue_size") == 0) {
+		priv->queue_size = (uint16_t)tmp;
+	} else if (strcmp(key, "queues") == 0) {
+		priv->queues = (uint16_t)tmp;
+	} else {
+		DRV_LOG(WARNING, "Invalid key %s.", key);
 	}
 	return 0;
 }
@@ -524,9 +532,68 @@ mlx5_vdpa_config_get(struct mlx5_kvargs_ctrl *mkvlist,
 	if (!priv->event_us &&
 	    priv->event_mode == MLX5_VDPA_EVENT_MODE_DYNAMIC_TIMER)
 		priv->event_us = MLX5_VDPA_DEFAULT_TIMER_STEP_US;
+	if ((priv->queue_size && !priv->queues) ||
+	    (!priv->queue_size && priv->queues)) {
+		priv->queue_size = 0;
+		priv->queues = 0;
+		DRV_LOG(WARNING, "Please provide both queue_size and queues.");
+	}
 	DRV_LOG(DEBUG, "event mode is %d.", priv->event_mode);
 	DRV_LOG(DEBUG, "event_us is %u us.", priv->event_us);
 	DRV_LOG(DEBUG, "no traffic max is %u.", priv->no_traffic_max);
+	DRV_LOG(DEBUG, "queues is %u, queue_size is %u.", priv->queues,
+		priv->queue_size);
+}
+
+static int
+mlx5_vdpa_virtq_resource_prepare(struct mlx5_vdpa_priv *priv)
+{
+	uint32_t index;
+	uint32_t i;
+
+	if (!priv->queues)
+		return 0;
+	for (index = 0; index < (priv->queues * 2); ++index) {
+		struct mlx5_vdpa_virtq *virtq = &priv->virtqs[index];
+
+		if (priv->caps.queue_counters_valid) {
+			if (!virtq->counters)
+				virtq->counters =
+					mlx5_devx_cmd_create_virtio_q_counters
+						(priv->cdev->ctx);
+			if (!virtq->counters) {
+				DRV_LOG(ERR, "Failed to create virtq counters for virtq"
+					" %d.", index);
+				return -1;
+			}
+		}
+		for (i = 0; i < RTE_DIM(virtq->umems); ++i) {
+			uint32_t size;
+			void *buf;
+			struct mlx5dv_devx_umem *obj;
+
+			size = priv->caps.umems[i].a * priv->queue_size +
+					priv->caps.umems[i].b;
+			buf = rte_zmalloc(__func__, size, 4096);
+			if (buf == NULL) {
+				DRV_LOG(ERR, "Cannot allocate umem %d memory for virtq"
+					" %u.", i, index);
+				return -1;
+			}
+			obj = mlx5_glue->devx_umem_reg(priv->cdev->ctx, buf,
+					size, IBV_ACCESS_LOCAL_WRITE);
+			if (obj == NULL) {
+				rte_free(buf);
+				DRV_LOG(ERR, "Failed to register umem %d for virtq %u.",
+					i, index);
+				return -1;
+			}
+			virtq->umems[i].size = size;
+			virtq->umems[i].buf = buf;
+			virtq->umems[i].obj = obj;
+		}
+	}
+	return 0;
 }
 
 static int
@@ -604,6 +671,8 @@ mlx5_vdpa_create_dev_resources(struct mlx5_vdpa_priv *priv)
 		return -rte_errno;
 	if (mlx5_vdpa_event_qp_global_prepare(priv))
 		return -rte_errno;
+	if (mlx5_vdpa_virtq_resource_prepare(priv))
+		return -rte_errno;
 	return 0;
 }
 
@@ -638,6 +707,7 @@ mlx5_vdpa_dev_probe(struct mlx5_common_device *cdev,
 	priv->num_lag_ports = 1;
 	pthread_mutex_init(&priv->vq_config_lock, NULL);
 	priv->cdev = cdev;
+	mlx5_vdpa_config_get(mkvlist, priv);
 	if (mlx5_vdpa_create_dev_resources(priv))
 		goto error;
 	priv->vdev = rte_vdpa_register_device(cdev->dev, &mlx5_vdpa_ops);
@@ -646,7 +716,6 @@ mlx5_vdpa_dev_probe(struct mlx5_common_device *cdev,
 		rte_errno = rte_errno ? rte_errno : EINVAL;
 		goto error;
 	}
-	mlx5_vdpa_config_get(mkvlist, priv);
 	SLIST_INIT(&priv->mr_list);
 	pthread_mutex_lock(&priv_list_lock);
 	TAILQ_INSERT_TAIL(&priv_list, priv, next);
@@ -684,6 +753,8 @@ mlx5_vdpa_release_dev_resources(struct mlx5_vdpa_priv *priv)
 {
 	uint32_t i;
 
+	if (priv->queues)
+		mlx5_vdpa_virtqs_cleanup(priv);
 	mlx5_vdpa_dev_cache_clean(priv);
 	for (i = 0; i < priv->caps.max_num_virtio_queues * 2; i++) {
 		if (!priv->virtqs[i].counters)
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index e7f3319f89..f6719a3c60 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -135,6 +135,8 @@ struct mlx5_vdpa_priv {
 	uint8_t hw_latency_mode; /* Hardware CQ moderation mode. */
 	uint16_t hw_max_latency_us; /* Hardware CQ moderation period in usec. */
 	uint16_t hw_max_pending_comp; /* Hardware CQ moderation counter. */
+	uint16_t queue_size; /* virtq depth for pre-creating virtq resource */
+	uint16_t queues; /* Max virtq pair for pre-creating virtq resource */
 	struct rte_vdpa_device *vdev; /* vDPA device. */
 	struct mlx5_common_device *cdev; /* Backend mlx5 device. */
 	int vid; /* vhost device id. */

From patchwork Fri Apr 8 07:55:53 2022
X-Patchwork-Submitter: Li Zhang
X-Patchwork-Id: 109477
X-Patchwork-Delegate: maxime.coquelin@redhat.com
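To make the RFC 02/15 behavior above concrete: each pre-created umem buffer follows the linear sizing formula `size = a * queue_size + b` (the `a`/`b` coefficients come from device capabilities), and the `queues`/`queue_size` devargs are an all-or-nothing pair. A minimal, self-contained sketch in plain C — the coefficient values below are invented for illustration, and the struct/function names are not the driver's:

```c
#include <assert.h>
#include <stdint.h>

/* Per-umem sizing coefficients, as in priv->caps.umems[i].{a,b}. */
struct umem_cap {
	uint32_t a;
	uint32_t b;
};

/* Size of one pre-created umem buffer: a * queue_size + b. */
static uint32_t
umem_size(const struct umem_cap *cap, uint16_t queue_size)
{
	return cap->a * queue_size + cap->b;
}

/*
 * Devargs validation, mirroring mlx5_vdpa_config_get(): queues and
 * queue_size must both be provided; otherwise both are reset to 0 and
 * nothing is pre-created.
 */
static int
precreate_enabled(uint16_t *queues, uint16_t *queue_size)
{
	if ((*queue_size && !*queues) || (!*queue_size && *queues)) {
		*queues = 0;
		*queue_size = 0;
	}
	return *queues != 0 && *queue_size != 0;
}
```

With, say, hypothetical coefficients a = 16 and b = 4096 and queue_size = 256, each umem would be 16 * 256 + 4096 = 8192 bytes, allocated once per virtq at probe time instead of on the first queue setup.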
From: Li Zhang
Cc: Yajun Wu
Subject: [RFC 03/15] common/mlx5: add DevX API to move QP to reset state
Date: Fri, 8 Apr 2022 10:55:53 +0300
Message-ID: <20220408075606.33056-4-lizh@nvidia.com>
In-Reply-To: <20220408075606.33056-1-lizh@nvidia.com>
References: <20220408075606.33056-1-lizh@nvidia.com>
From: Yajun Wu

Support setting a QP to the RESET state.

Signed-off-by: Yajun Wu
---
 drivers/common/mlx5/mlx5_devx_cmds.c |  7 +++++++
 drivers/common/mlx5/mlx5_prm.h       | 17 +++++++++++++++++
 2 files changed, 24 insertions(+)

diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index d02ac2a678..a2943c9a58 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -2255,11 +2255,13 @@ mlx5_devx_cmd_modify_qp_state(struct mlx5_devx_obj *qp, uint32_t qp_st_mod_op,
 		uint32_t rst2init[MLX5_ST_SZ_DW(rst2init_qp_in)];
 		uint32_t init2rtr[MLX5_ST_SZ_DW(init2rtr_qp_in)];
 		uint32_t rtr2rts[MLX5_ST_SZ_DW(rtr2rts_qp_in)];
+		uint32_t qp2rst[MLX5_ST_SZ_DW(2rst_qp_in)];
 	} in;
 	union {
 		uint32_t rst2init[MLX5_ST_SZ_DW(rst2init_qp_out)];
 		uint32_t init2rtr[MLX5_ST_SZ_DW(init2rtr_qp_out)];
 		uint32_t rtr2rts[MLX5_ST_SZ_DW(rtr2rts_qp_out)];
+		uint32_t qp2rst[MLX5_ST_SZ_DW(2rst_qp_out)];
 	} out;
 	void *qpc;
 	int ret;
@@ -2302,6 +2304,11 @@ mlx5_devx_cmd_modify_qp_state(struct mlx5_devx_obj *qp, uint32_t qp_st_mod_op,
 		inlen = sizeof(in.rtr2rts);
 		outlen = sizeof(out.rtr2rts);
 		break;
+	case MLX5_CMD_OP_QP_2RST:
+		MLX5_SET(2rst_qp_in, &in, qpn, qp->id);
+		inlen = sizeof(in.qp2rst);
+		outlen = sizeof(out.qp2rst);
+		break;
 	default:
 		DRV_LOG(ERR, "Invalid or unsupported QP modify op %u.",
 			qp_st_mod_op);
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 44b18225f6..cca6bfc6d4 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -3653,6 +3653,23 @@ struct mlx5_ifc_init2init_qp_in_bits {
 	u8 reserved_at_800[0x80];
 };
 
+struct mlx5_ifc_2rst_qp_out_bits {
+	u8 status[0x8];
+	u8 reserved_at_8[0x18];
+	u8 syndrome[0x20];
+	u8 reserved_at_40[0x40];
+};
+
+struct mlx5_ifc_2rst_qp_in_bits {
+	u8 opcode[0x10];
+	u8 uid[0x10];
+	u8 vhca_tunnel_id[0x10];
+	u8 op_mod[0x10];
+	u8 reserved_at_80[0x8];
+	u8 qpn[0x18];
+	u8 reserved_at_a0[0x20];
+};
+
 struct mlx5_ifc_dealloc_pd_out_bits {
 	u8 status[0x8];
 	u8 reserved_0[0x18];

From patchwork Fri Apr 8 07:55:54 2022
X-Patchwork-Submitter: Li Zhang
X-Patchwork-Id: 109476
X-Patchwork-Delegate: maxime.coquelin@redhat.com
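The point of the RFC 03/15 change above is that mlx5_devx_cmd_modify_qp_state() can now also drive a QP back to RESET (QP_2RST), alongside the existing forward transitions RST→INIT→RTR→RTS, which is what lets the driver reuse an event QP instead of destroying it. A toy state-machine sketch of those transitions in plain C (a simplified model, not the DevX interface; in mlx5 the 2RST modify is accepted from any state):

```c
#include <assert.h>

/* Simplified QP states. */
enum qp_state { QP_RST, QP_INIT, QP_RTR, QP_RTS };

/*
 * Return the new state if the transition is one of the modify ops the
 * driver handles after this patch, or -1 if unsupported.
 */
static int
qp_modify(enum qp_state cur, enum qp_state target)
{
	if (target == QP_RST)	/* QP_2RST: any state -> RESET (new). */
		return QP_RST;
	if (cur == QP_RST && target == QP_INIT)	/* RST2INIT */
		return QP_INIT;
	if (cur == QP_INIT && target == QP_RTR)	/* INIT2RTR */
		return QP_RTR;
	if (cur == QP_RTR && target == QP_RTS)	/* RTR2RTS */
		return QP_RTS;
	return -1;	/* e.g. skipping a state is not supported */
}
```

Resetting to QP_RST and then replaying RST2INIT/INIT2RTR/RTR2RTS is the reuse cycle the following patches build on.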
From: Li Zhang
CC: Yajun Wu
Subject: [RFC 04/15] vdpa/mlx5: support event qp reuse
Date: Fri, 8 Apr 2022 10:55:54 +0300
Message-ID: <20220408075606.33056-5-lizh@nvidia.com>
In-Reply-To: <20220408075606.33056-1-lizh@nvidia.com>
From: Yajun Wu

To speed up queue creation, the event QP and CQ are created only once,
and each virtq creation reuses the same event QP and CQ. Because FW sets
the event QP to the error state during virtq destroy, the event QP must
be modified back to the RESET state and then to the RTS state as usual.
This saves about 1.5 ms per virtq creation.

After a SW QP reset, the QP PI/CI both become 0 while the CQ PI/CI keep
their previous values. Add a new field, qp_pi, to track the SW QP PI
independently of the CQ CI.

Add a new function, mlx5_vdpa_drain_cq, to drain CQ CQEs after virtq
release.

Signed-off-by: Yajun Wu
---
 drivers/vdpa/mlx5/mlx5_vdpa.c       |  8 ++++
 drivers/vdpa/mlx5/mlx5_vdpa.h       | 12 +++++-
 drivers/vdpa/mlx5/mlx5_vdpa_event.c | 60 +++++++++++++++++++++++++++--
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c |  6 +--
 4 files changed, 78 insertions(+), 8 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index 57f9b05e35..03ad01c156 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -269,6 +269,7 @@ mlx5_vdpa_dev_close(int vid)
 	}
 	mlx5_vdpa_steer_unset(priv);
 	mlx5_vdpa_virtqs_release(priv);
+	mlx5_vdpa_drain_cq(priv);
 	if (priv->lm_mr.addr)
 		mlx5_os_wrapped_mkey_destroy(&priv->lm_mr);
 	priv->state = MLX5_VDPA_STATE_PROBED;
@@ -555,7 +556,14 @@ mlx5_vdpa_virtq_resource_prepare(struct mlx5_vdpa_priv *priv)
 		return 0;
 	for (index = 0; index < (priv->queues * 2); ++index) {
 		struct mlx5_vdpa_virtq *virtq = &priv->virtqs[index];
+		int ret = mlx5_vdpa_event_qp_prepare(priv, priv->queue_size,
+					-1, &virtq->eqp);
+		if (ret) {
+			DRV_LOG(ERR, "Failed to create event QPs for virtq %d.",
+				index);
+			return -1;
+		}
 		if (priv->caps.queue_counters_valid) {
 			if (!virtq->counters)
 				virtq->counters =
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index f6719a3c60..bf82026e37 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -55,6 +55,7 @@ struct mlx5_vdpa_event_qp {
 	struct mlx5_vdpa_cq cq;
 	struct mlx5_devx_obj *fw_qp;
 	struct mlx5_devx_qp sw_qp;
+	uint16_t qp_pi;
 };
 
 struct mlx5_vdpa_query_mr {
@@ -226,7 +227,7 @@ int mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv);
  * @return
  *   0 on success, -1 otherwise and rte_errno is set.
  */
-int mlx5_vdpa_event_qp_create(struct mlx5_vdpa_priv *priv, uint16_t desc_n,
+int mlx5_vdpa_event_qp_prepare(struct mlx5_vdpa_priv *priv, uint16_t desc_n,
 			      int callfd, struct mlx5_vdpa_event_qp *eqp);
 
 /**
@@ -479,4 +480,13 @@ mlx5_vdpa_virtq_stats_get(struct mlx5_vdpa_priv *priv, int qid,
  */
 int
 mlx5_vdpa_virtq_stats_reset(struct mlx5_vdpa_priv *priv, int qid);
+
+/**
+ * Drain virtq CQ CQE.
+ *
+ * @param[in] priv
+ *   The vdpa driver private structure.
+ */
+void
+mlx5_vdpa_drain_cq(struct mlx5_vdpa_priv *priv);
 #endif /* RTE_PMD_MLX5_VDPA_H_ */
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
index 7167a98db0..b43dca9255 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
@@ -137,7 +137,7 @@ mlx5_vdpa_cq_poll(struct mlx5_vdpa_cq *cq)
 		};
 		uint32_t word;
 	} last_word;
-	uint16_t next_wqe_counter = cq->cq_ci;
+	uint16_t next_wqe_counter = eqp->qp_pi;
 	uint16_t cur_wqe_counter;
 	uint16_t comp;
 
@@ -156,9 +156,10 @@ mlx5_vdpa_cq_poll(struct mlx5_vdpa_cq *cq)
 		rte_io_wmb();
 		/* Ring CQ doorbell record. */
 		cq->cq_obj.db_rec[0] = rte_cpu_to_be_32(cq->cq_ci);
+		eqp->qp_pi += comp;
 		rte_io_wmb();
 		/* Ring SW QP doorbell record. */
-		eqp->sw_qp.db_rec[0] = rte_cpu_to_be_32(cq->cq_ci + cq_size);
+		eqp->sw_qp.db_rec[0] = rte_cpu_to_be_32(eqp->qp_pi + cq_size);
 	}
 	return comp;
 }
@@ -232,6 +233,25 @@ mlx5_vdpa_queues_complete(struct mlx5_vdpa_priv *priv)
 	return max;
 }
 
+void
+mlx5_vdpa_drain_cq(struct mlx5_vdpa_priv *priv)
+{
+	unsigned int i;
+
+	for (i = 0; i < priv->caps.max_num_virtio_queues * 2; i++) {
+		struct mlx5_vdpa_cq *cq = &priv->virtqs[i].eqp.cq;
+
+		mlx5_vdpa_queue_complete(cq);
+		if (cq->cq_obj.cq) {
+			cq->cq_obj.cqes[0].wqe_counter =
+				rte_cpu_to_be_16(UINT16_MAX);
+			priv->virtqs[i].eqp.qp_pi = 0;
+			if (!cq->armed)
+				mlx5_vdpa_cq_arm(priv, cq);
+		}
+	}
+}
+
 /* Wait on all CQs channel for completion event. */
 static struct mlx5_vdpa_cq *
 mlx5_vdpa_event_wait(struct mlx5_vdpa_priv *priv __rte_unused)
@@ -574,14 +594,44 @@ mlx5_vdpa_qps2rts(struct mlx5_vdpa_event_qp *eqp)
 	return 0;
 }
 
+static int
+mlx5_vdpa_qps2rst2rts(struct mlx5_vdpa_event_qp *eqp)
+{
+	if (mlx5_devx_cmd_modify_qp_state(eqp->fw_qp, MLX5_CMD_OP_QP_2RST,
+					  eqp->sw_qp.qp->id)) {
+		DRV_LOG(ERR, "Failed to modify FW QP to RST state(%u).",
+			rte_errno);
+		return -1;
+	}
+	if (mlx5_devx_cmd_modify_qp_state(eqp->sw_qp.qp,
+			MLX5_CMD_OP_QP_2RST, eqp->fw_qp->id)) {
+		DRV_LOG(ERR, "Failed to modify SW QP to RST state(%u).",
+			rte_errno);
+		return -1;
+	}
+	return mlx5_vdpa_qps2rts(eqp);
+}
+
 int
-mlx5_vdpa_event_qp_create(struct mlx5_vdpa_priv *priv, uint16_t desc_n,
+mlx5_vdpa_event_qp_prepare(struct mlx5_vdpa_priv *priv, uint16_t desc_n,
 			  int callfd, struct mlx5_vdpa_event_qp *eqp)
 {
 	struct mlx5_devx_qp_attr attr = {0};
 	uint16_t log_desc_n = rte_log2_u32(desc_n);
 	uint32_t ret;
 
+	if (eqp->cq.cq_obj.cq != NULL && log_desc_n == eqp->cq.log_desc_n) {
+		/* Reuse existing resources. */
+		eqp->cq.callfd = callfd;
+		/* FW will set event qp to error state in q destroy. */
+		if (!mlx5_vdpa_qps2rst2rts(eqp)) {
+			rte_write32(rte_cpu_to_be_32(RTE_BIT32(log_desc_n)),
+				    &eqp->sw_qp.db_rec[0]);
+			return 0;
+		}
+	}
+	if (eqp->fw_qp)
+		mlx5_vdpa_event_qp_destroy(eqp);
 	if (mlx5_vdpa_cq_create(priv, log_desc_n, callfd, &eqp->cq))
 		return -1;
 	attr.pd = priv->cdev->pdn;
@@ -608,8 +658,10 @@ mlx5_vdpa_event_qp_create(struct mlx5_vdpa_priv *priv, uint16_t desc_n,
 	}
 	if (mlx5_vdpa_qps2rts(eqp))
 		goto error;
+	eqp->qp_pi = 0;
 	/* First ringing. */
-	rte_write32(rte_cpu_to_be_32(RTE_BIT32(log_desc_n)),
+	if (eqp->sw_qp.db_rec)
+		rte_write32(rte_cpu_to_be_32(RTE_BIT32(log_desc_n)),
 			&eqp->sw_qp.db_rec[0]);
 	return 0;
 error:
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index e025be47d2..28cef69a58 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -87,6 +87,8 @@ mlx5_vdpa_virtqs_cleanup(struct mlx5_vdpa_priv *priv)
 			}
 			virtq->umems[j].size = 0;
 		}
+		if (virtq->eqp.fw_qp)
+			mlx5_vdpa_event_qp_destroy(&virtq->eqp);
 	}
 }
 
@@ -117,8 +119,6 @@ mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq)
 		claim_zero(mlx5_devx_cmd_destroy(virtq->virtq));
 	}
 	virtq->virtq = NULL;
-	if (virtq->eqp.fw_qp)
-		mlx5_vdpa_event_qp_destroy(&virtq->eqp);
 	virtq->notifier_state = MLX5_VDPA_NOTIFIER_STATE_DISABLED;
 	return 0;
 }
@@ -246,7 +246,7 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
 			MLX5_VIRTQ_EVENT_MODE_QP :
 			MLX5_VIRTQ_EVENT_MODE_NO_MSIX;
 	if (attr.event_mode == MLX5_VIRTQ_EVENT_MODE_QP) {
-		ret = mlx5_vdpa_event_qp_create(priv, vq.size, vq.callfd,
+		ret = mlx5_vdpa_event_qp_prepare(priv, vq.size, vq.callfd,
 						&virtq->eqp);
 		if (ret) {
 			DRV_LOG(ERR, "Failed to create event QPs for virtq %d.",

From patchwork Fri Apr 8 07:55:55 2022
X-Patchwork-Submitter: Li Zhang
X-Patchwork-Id: 109481
X-Patchwork-Delegate: maxime.coquelin@redhat.com
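The qp_pi bookkeeping in the event-QP-reuse patch can be modeled in isolation. The sketch below (hypothetical struct and function names, not the driver's) shows why the SW QP producer index has to be tracked separately from the CQ consumer index once QP reset is in play: a reset zeroes the QP counters while the CQ counters keep running, and the SW QP doorbell value is derived from qp_pi plus the CQ size.

```c
#include <stdint.h>

/* Minimal model of the two 16-bit counters involved (illustrative
 * names; the real fields live in mlx5_vdpa_event_qp / mlx5_vdpa_cq). */
struct eqp_model {
	uint16_t qp_pi; /* SW QP producer index, zeroed by a QP reset */
	uint16_t cq_ci; /* CQ consumer index, survives a QP reset */
};

/* Consume 'comp' completions: both counters advance together,
 * wrapping naturally at 2^16 like the hardware counters do. */
static void eqp_complete(struct eqp_model *e, uint16_t comp)
{
	e->cq_ci += comp;
	e->qp_pi += comp;
}

/* A QP reset restarts the QP counters at 0 but not the CQ's -- this is
 * why the doorbell can no longer be derived from cq_ci after the patch. */
static void eqp_qp_reset(struct eqp_model *e)
{
	e->qp_pi = 0;
}

/* Doorbell record value as computed after the patch:
 * qp_pi + cq_size (previously cq_ci + cq_size). */
static uint16_t eqp_doorbell(const struct eqp_model *e, uint16_t cq_size)
{
	return (uint16_t)(e->qp_pi + cq_size);
}
```

Before the patch, qp_pi and cq_ci were always equal, so using cq_ci worked; after a reset they diverge, and only the qp_pi-based value rings the SW QP correctly.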
From: Li Zhang
Subject: [RFC 05/15] common/mlx5: extend virtq modifiable fields
Date: Fri, 8 Apr 2022 10:55:55 +0300
Message-ID: <20220408075606.33056-6-lizh@nvidia.com>
In-Reply-To: <20220408075606.33056-1-lizh@nvidia.com>
A virtq configuration can be modified after the virtq creation.
Added the following modifiable fields:
1. Address fields: desc_addr/used_addr/available_addr
2. hw_available_index
3. hw_used_index
4. virtio_q_type
5. version type
6. queue mkey
7. feature bit mask: tso_ipv4/tso_ipv6/tx_csum/rx_csum
8. event mode: event_mode/event_qpn_or_msix

Signed-off-by: Li Zhang
---
 drivers/common/mlx5/mlx5_devx_cmds.c | 70 +++++++++++++++++++++++-----
 drivers/common/mlx5/mlx5_devx_cmds.h |  6 ++-
 drivers/common/mlx5/mlx5_prm.h       | 13 +++++-
 3 files changed, 76 insertions(+), 13 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index a2943c9a58..fd5b5dd378 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -545,6 +545,15 @@ mlx5_devx_cmd_query_hca_vdpa_attr(void *ctx,
 	vdpa_attr->log_doorbell_stride =
 		MLX5_GET(virtio_emulation_cap, hcattr,
 			 log_doorbell_stride);
+	vdpa_attr->vnet_modify_ext =
+		MLX5_GET(virtio_emulation_cap, hcattr,
+			 vnet_modify_ext);
+	vdpa_attr->virtio_net_q_addr_modify =
+		MLX5_GET(virtio_emulation_cap, hcattr,
+			 virtio_net_q_addr_modify);
+	vdpa_attr->virtio_q_index_modify =
+		MLX5_GET(virtio_emulation_cap, hcattr,
+			 virtio_q_index_modify);
 	vdpa_attr->log_doorbell_bar_size =
 		MLX5_GET(virtio_emulation_cap, hcattr,
 			 log_doorbell_bar_size);
@@ -2065,27 +2074,66 @@ mlx5_devx_cmd_modify_virtq(struct mlx5_devx_obj *virtq_obj,
 	MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_type,
 		 MLX5_GENERAL_OBJ_TYPE_VIRTQ);
 	MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_id, virtq_obj->id);
-	MLX5_SET64(virtio_net_q, virtq, modify_field_select, attr->type);
+	MLX5_SET64(virtio_net_q, virtq, modify_field_select,
+		attr->mod_fields_bitmap);
 	MLX5_SET16(virtio_q, virtctx, queue_index, attr->queue_index);
-	switch (attr->type) {
-	case MLX5_VIRTQ_MODIFY_TYPE_STATE:
+	if (!attr->mod_fields_bitmap) {
+		DRV_LOG(ERR, "Failed to modify VIRTQ for no type set.");
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+	if (attr->mod_fields_bitmap & MLX5_VIRTQ_MODIFY_TYPE_STATE)
 		MLX5_SET16(virtio_net_q, virtq, state, attr->state);
-		break;
-	case MLX5_VIRTQ_MODIFY_TYPE_DIRTY_BITMAP_PARAMS:
+	if (attr->mod_fields_bitmap &
+	    MLX5_VIRTQ_MODIFY_TYPE_DIRTY_BITMAP_PARAMS) {
 		MLX5_SET(virtio_net_q, virtq, dirty_bitmap_mkey,
 			 attr->dirty_bitmap_mkey);
 		MLX5_SET64(virtio_net_q, virtq, dirty_bitmap_addr,
 			 attr->dirty_bitmap_addr);
 		MLX5_SET(virtio_net_q, virtq, dirty_bitmap_size,
 			 attr->dirty_bitmap_size);
-		break;
-	case MLX5_VIRTQ_MODIFY_TYPE_DIRTY_BITMAP_DUMP_ENABLE:
+	}
+	if (attr->mod_fields_bitmap &
+	    MLX5_VIRTQ_MODIFY_TYPE_DIRTY_BITMAP_DUMP_ENABLE)
 		MLX5_SET(virtio_net_q, virtq, dirty_bitmap_dump_enable,
 			 attr->dirty_bitmap_dump_enable);
-		break;
-	default:
-		rte_errno = EINVAL;
-		return -rte_errno;
+	if (attr->mod_fields_bitmap & MLX5_VIRTQ_MODIFY_TYPE_QUEUE_PERIOD) {
+		MLX5_SET(virtio_q, virtctx, queue_period_mode,
+			attr->hw_latency_mode);
+		MLX5_SET(virtio_q, virtctx, queue_period_us,
+			attr->hw_max_latency_us);
+		MLX5_SET(virtio_q, virtctx, queue_max_count,
+			attr->hw_max_pending_comp);
+	}
+	if (attr->mod_fields_bitmap & MLX5_VIRTQ_MODIFY_TYPE_ADDR) {
+		MLX5_SET64(virtio_q, virtctx, desc_addr, attr->desc_addr);
+		MLX5_SET64(virtio_q, virtctx, used_addr, attr->used_addr);
+		MLX5_SET64(virtio_q, virtctx, available_addr,
+			attr->available_addr);
+	}
+	if (attr->mod_fields_bitmap & MLX5_VIRTQ_MODIFY_TYPE_HW_AVAILABLE_INDEX)
+		MLX5_SET16(virtio_net_q, virtq, hw_available_index,
+			attr->hw_available_index);
+	if (attr->mod_fields_bitmap & MLX5_VIRTQ_MODIFY_TYPE_HW_USED_INDEX)
+		MLX5_SET16(virtio_net_q, virtq, hw_used_index,
+			attr->hw_used_index);
+	if (attr->mod_fields_bitmap & MLX5_VIRTQ_MODIFY_TYPE_Q_TYPE)
+		MLX5_SET16(virtio_q, virtctx, virtio_q_type, attr->q_type);
+	if (attr->mod_fields_bitmap & MLX5_VIRTQ_MODIFY_TYPE_VERSION_1_0)
+		MLX5_SET16(virtio_q, virtctx, virtio_version_1_0,
+			attr->virtio_version_1_0);
+	if (attr->mod_fields_bitmap & MLX5_VIRTQ_MODIFY_TYPE_Q_MKEY)
+		MLX5_SET(virtio_q, virtctx, virtio_q_mkey, attr->mkey);
+	if (attr->mod_fields_bitmap &
+	    MLX5_VIRTQ_MODIFY_TYPE_QUEUE_FEATURE_BIT_MASK) {
+		MLX5_SET16(virtio_net_q, virtq, tso_ipv4, attr->tso_ipv4);
+		MLX5_SET16(virtio_net_q, virtq, tso_ipv6, attr->tso_ipv6);
+		MLX5_SET16(virtio_net_q, virtq, tx_csum, attr->tx_csum);
+		MLX5_SET16(virtio_net_q, virtq, rx_csum, attr->rx_csum);
+	}
+	if (attr->mod_fields_bitmap & MLX5_VIRTQ_MODIFY_TYPE_EVENT_MODE) {
+		MLX5_SET16(virtio_q, virtctx, event_mode, attr->event_mode);
+		MLX5_SET(virtio_q, virtctx, event_qpn_or_msix, attr->qp_id);
 	}
 	ret = mlx5_glue->devx_obj_modify(virtq_obj->obj, in, sizeof(in),
 					 out, sizeof(out));
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index 1bac18c59d..d93be8fe2c 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -74,6 +74,9 @@ struct mlx5_hca_vdpa_attr {
 	uint32_t log_doorbell_stride:5;
 	uint32_t log_doorbell_bar_size:5;
 	uint32_t queue_counters_valid:1;
+	uint32_t vnet_modify_ext:1;
+	uint32_t virtio_net_q_addr_modify:1;
+	uint32_t virtio_q_index_modify:1;
 	uint32_t max_num_virtio_queues;
 	struct {
 		uint32_t a;
@@ -464,7 +467,7 @@ struct mlx5_devx_virtq_attr {
 	uint32_t tis_id;
 	uint32_t counters_obj_id;
 	uint64_t dirty_bitmap_addr;
-	uint64_t type;
+	uint64_t mod_fields_bitmap;
 	uint64_t desc_addr;
 	uint64_t used_addr;
 	uint64_t available_addr;
@@ -474,6 +477,7 @@ struct mlx5_devx_virtq_attr {
 		uint64_t offset;
 	} umems[3];
 	uint8_t error_type;
+	uint8_t q_type;
 };
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index cca6bfc6d4..4cc1427b9b 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -1798,7 +1798,9 @@ struct mlx5_ifc_virtio_emulation_cap_bits {
 	u8 virtio_queue_type[0x8];
 	u8 reserved_at_20[0x13];
 	u8 log_doorbell_stride[0x5];
-	u8 reserved_at_3b[0x3];
+	u8 vnet_modify_ext[0x1];
+	u8 virtio_net_q_addr_modify[0x1];
+	u8 virtio_q_index_modify[0x1];
 	u8 log_doorbell_bar_size[0x5];
 	u8 doorbell_bar_offset[0x40];
 	u8 reserved_at_80[0x8];
@@ -3020,6 +3022,15 @@ enum {
 	MLX5_VIRTQ_MODIFY_TYPE_STATE = (1UL << 0),
 	MLX5_VIRTQ_MODIFY_TYPE_DIRTY_BITMAP_PARAMS = (1UL << 3),
 	MLX5_VIRTQ_MODIFY_TYPE_DIRTY_BITMAP_DUMP_ENABLE = (1UL << 4),
+	MLX5_VIRTQ_MODIFY_TYPE_QUEUE_PERIOD = (1UL << 5),
+	MLX5_VIRTQ_MODIFY_TYPE_ADDR = (1UL << 6),
+	MLX5_VIRTQ_MODIFY_TYPE_HW_AVAILABLE_INDEX = (1UL << 7),
+	MLX5_VIRTQ_MODIFY_TYPE_HW_USED_INDEX = (1UL << 8),
+	MLX5_VIRTQ_MODIFY_TYPE_Q_TYPE = (1UL << 9),
+	MLX5_VIRTQ_MODIFY_TYPE_VERSION_1_0 = (1UL << 10),
+	MLX5_VIRTQ_MODIFY_TYPE_Q_MKEY = (1UL << 11),
+	MLX5_VIRTQ_MODIFY_TYPE_QUEUE_FEATURE_BIT_MASK = (1UL << 12),
+	MLX5_VIRTQ_MODIFY_TYPE_EVENT_MODE = (1UL << 13),
 };
 
 struct mlx5_ifc_virtio_q_bits {

From patchwork Fri Apr 8 07:55:56 2022
X-Patchwork-Submitter: Li Zhang
X-Patchwork-Id: 109479
X-Patchwork-Delegate: maxime.coquelin@redhat.com
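The switch from a single `type` to a `mod_fields_bitmap` means one modify command can now update several virtq field groups at once. A small sketch using the flag values from the mlx5_prm.h hunk above (the counting helper is illustrative, not part of the driver):

```c
#include <stdint.h>

/* Modify-field flags copied from the mlx5_prm.h hunk; each bit selects
 * one group of virtq fields to update in a single modify command. */
#define MLX5_VIRTQ_MODIFY_TYPE_STATE                    (1UL << 0)
#define MLX5_VIRTQ_MODIFY_TYPE_DIRTY_BITMAP_PARAMS      (1UL << 3)
#define MLX5_VIRTQ_MODIFY_TYPE_DIRTY_BITMAP_DUMP_ENABLE (1UL << 4)
#define MLX5_VIRTQ_MODIFY_TYPE_ADDR                     (1UL << 6)
#define MLX5_VIRTQ_MODIFY_TYPE_HW_AVAILABLE_INDEX       (1UL << 7)
#define MLX5_VIRTQ_MODIFY_TYPE_HW_USED_INDEX            (1UL << 8)
#define MLX5_VIRTQ_MODIFY_TYPE_EVENT_MODE               (1UL << 13)

/* Count how many field groups a bitmap selects -- a stand-in for the
 * chain of 'if (attr->mod_fields_bitmap & ...)' blocks that replaced
 * the old one-type-per-call switch in mlx5_devx_cmd_modify_virtq(). */
static int modify_groups_selected(uint64_t bitmap)
{
	int n = 0;

	while (bitmap) {
		bitmap &= bitmap - 1; /* clear the lowest set bit */
		n++;
	}
	return n;
}
```

With the old interface each MLX5_VIRTQ_MODIFY_TYPE_* value required its own command; with the bitmap, a combination such as STATE | ADDR | HW_AVAILABLE_INDEX can be applied in a single mlx5_devx_cmd_modify_virtq() call, which is what the later pre-create patch relies on.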
From: Li Zhang
Subject: [RFC 06/15] vdpa/mlx5: pre-create virtq in the probe
Date: Fri, 8 Apr 2022 10:55:56 +0300
Message-ID: <20220408075606.33056-7-lizh@nvidia.com>
In-Reply-To: <20220408075606.33056-1-lizh@nvidia.com>
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org

The dev_config operation is called during the live migration (LM) process. LM time is critical because all the VM packets are dropped directly at that time. Move the virtq creation to probe time and, in the dev_config stage, only modify the configuration later using the new ability to modify the virtq. This optimization accelerates the LM process and reduces its time by 70%.

Signed-off-by: Li Zhang
---
 drivers/vdpa/mlx5/mlx5_vdpa.h       |   4 +
 drivers/vdpa/mlx5/mlx5_vdpa_lm.c    |  13 +-
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 257 +++++++++++++++++-----------
 3 files changed, 170 insertions(+), 104 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h index bf82026e37..e5553079fe 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.h +++ b/drivers/vdpa/mlx5/mlx5_vdpa.h @@ -80,6 +80,7 @@ struct mlx5_vdpa_virtq { uint16_t vq_size; uint8_t notifier_state; bool stopped; + uint32_t configured:1; uint32_t version; struct mlx5_vdpa_priv *priv; struct mlx5_devx_obj *virtq; @@ -489,4 +490,7 @@ mlx5_vdpa_virtq_stats_reset(struct mlx5_vdpa_priv *priv, int qid); */ void mlx5_vdpa_drain_cq(struct mlx5_vdpa_priv *priv); + +bool +mlx5_vdpa_is_modify_virtq_supported(struct mlx5_vdpa_priv *priv); #endif /* RTE_PMD_MLX5_VDPA_H_ */ diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_lm.c b/drivers/vdpa/mlx5/mlx5_vdpa_lm.c index 43a2b98255..a8faf0c116 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_lm.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_lm.c @@ -12,14 +12,17 @@ int mlx5_vdpa_logging_enable(struct mlx5_vdpa_priv *priv, int enable) { struct mlx5_devx_virtq_attr attr = { - .type = MLX5_VIRTQ_MODIFY_TYPE_DIRTY_BITMAP_DUMP_ENABLE, + .mod_fields_bitmap = + MLX5_VIRTQ_MODIFY_TYPE_DIRTY_BITMAP_DUMP_ENABLE, .dirty_bitmap_dump_enable = enable, }; + struct mlx5_vdpa_virtq *virtq; int i; for (i = 0; i < priv->nr_virtqs;
++i) { attr.queue_index = i; - if (!priv->virtqs[i].virtq) { + virtq = &priv->virtqs[i]; + if (!virtq->configured) { DRV_LOG(DEBUG, "virtq %d is invalid for dirty bitmap " "enabling.", i); } else if (mlx5_devx_cmd_modify_virtq(priv->virtqs[i].virtq, @@ -37,10 +40,11 @@ mlx5_vdpa_dirty_bitmap_set(struct mlx5_vdpa_priv *priv, uint64_t log_base, uint64_t log_size) { struct mlx5_devx_virtq_attr attr = { - .type = MLX5_VIRTQ_MODIFY_TYPE_DIRTY_BITMAP_PARAMS, + .mod_fields_bitmap = MLX5_VIRTQ_MODIFY_TYPE_DIRTY_BITMAP_PARAMS, .dirty_bitmap_addr = log_base, .dirty_bitmap_size = log_size, }; + struct mlx5_vdpa_virtq *virtq; int i; int ret = mlx5_os_wrapped_mkey_create(priv->cdev->ctx, priv->cdev->pd, priv->cdev->pdn, @@ -54,7 +58,8 @@ mlx5_vdpa_dirty_bitmap_set(struct mlx5_vdpa_priv *priv, uint64_t log_base, attr.dirty_bitmap_mkey = priv->lm_mr.lkey; for (i = 0; i < priv->nr_virtqs; ++i) { attr.queue_index = i; - if (!priv->virtqs[i].virtq) { + virtq = &priv->virtqs[i]; + if (!virtq->configured) { DRV_LOG(DEBUG, "virtq %d is invalid for LM.", i); } else if (mlx5_devx_cmd_modify_virtq(priv->virtqs[i].virtq, &attr)) { diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c index 28cef69a58..ef5bf1ef01 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c @@ -75,6 +75,7 @@ mlx5_vdpa_virtqs_cleanup(struct mlx5_vdpa_priv *priv) for (i = 0; i < priv->caps.max_num_virtio_queues * 2; i++) { struct mlx5_vdpa_virtq *virtq = &priv->virtqs[i]; + virtq->configured = 0; for (j = 0; j < RTE_DIM(virtq->umems); ++j) { if (virtq->umems[j].obj) { claim_zero(mlx5_glue->devx_umem_dereg @@ -111,11 +112,12 @@ mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq) rte_intr_fd_set(virtq->intr_handle, -1); } rte_intr_instance_free(virtq->intr_handle); - if (virtq->virtq) { + if (virtq->configured) { ret = mlx5_vdpa_virtq_stop(virtq->priv, virtq->index); if (ret) DRV_LOG(WARNING, "Failed to stop virtq %d.", virtq->index); + 
virtq->configured = 0; claim_zero(mlx5_devx_cmd_destroy(virtq->virtq)); } virtq->virtq = NULL; @@ -138,7 +140,7 @@ int mlx5_vdpa_virtq_modify(struct mlx5_vdpa_virtq *virtq, int state) { struct mlx5_devx_virtq_attr attr = { - .type = MLX5_VIRTQ_MODIFY_TYPE_STATE, + .mod_fields_bitmap = MLX5_VIRTQ_MODIFY_TYPE_STATE, .state = state ? MLX5_VIRTQ_STATE_RDY : MLX5_VIRTQ_STATE_SUSPEND, .queue_index = virtq->index, @@ -153,7 +155,7 @@ mlx5_vdpa_virtq_stop(struct mlx5_vdpa_priv *priv, int index) struct mlx5_vdpa_virtq *virtq = &priv->virtqs[index]; int ret; - if (virtq->stopped) + if (virtq->stopped || !virtq->configured) return 0; ret = mlx5_vdpa_virtq_modify(virtq, 0); if (ret) @@ -209,51 +211,54 @@ mlx5_vdpa_hva_to_gpa(struct rte_vhost_memory *mem, uint64_t hva) } static int -mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index) +mlx5_vdpa_virtq_sub_objs_prepare(struct mlx5_vdpa_priv *priv, + struct mlx5_devx_virtq_attr *attr, + struct rte_vhost_vring *vq, int index) { struct mlx5_vdpa_virtq *virtq = &priv->virtqs[index]; - struct rte_vhost_vring vq; - struct mlx5_devx_virtq_attr attr = {0}; uint64_t gpa; int ret; unsigned int i; - uint16_t last_avail_idx; - uint16_t last_used_idx; - uint16_t event_num = MLX5_EVENT_TYPE_OBJECT_CHANGE; - uint64_t cookie; - - ret = rte_vhost_get_vhost_vring(priv->vid, index, &vq); - if (ret) - return -1; - if (vq.size == 0) - return 0; - virtq->index = index; - virtq->vq_size = vq.size; - attr.tso_ipv4 = !!(priv->features & (1ULL << VIRTIO_NET_F_HOST_TSO4)); - attr.tso_ipv6 = !!(priv->features & (1ULL << VIRTIO_NET_F_HOST_TSO6)); - attr.tx_csum = !!(priv->features & (1ULL << VIRTIO_NET_F_CSUM)); - attr.rx_csum = !!(priv->features & (1ULL << VIRTIO_NET_F_GUEST_CSUM)); - attr.virtio_version_1_0 = !!(priv->features & (1ULL << - VIRTIO_F_VERSION_1)); - attr.type = (priv->features & (1ULL << VIRTIO_F_RING_PACKED)) ? 
+ uint16_t last_avail_idx = 0; + uint16_t last_used_idx = 0; + + if (virtq->virtq) + attr->mod_fields_bitmap = MLX5_VIRTQ_MODIFY_TYPE_STATE | + MLX5_VIRTQ_MODIFY_TYPE_ADDR | + MLX5_VIRTQ_MODIFY_TYPE_HW_AVAILABLE_INDEX | + MLX5_VIRTQ_MODIFY_TYPE_HW_USED_INDEX | + MLX5_VIRTQ_MODIFY_TYPE_VERSION_1_0 | + MLX5_VIRTQ_MODIFY_TYPE_Q_TYPE | + MLX5_VIRTQ_MODIFY_TYPE_Q_MKEY | + MLX5_VIRTQ_MODIFY_TYPE_QUEUE_FEATURE_BIT_MASK | + MLX5_VIRTQ_MODIFY_TYPE_EVENT_MODE; + attr->tso_ipv4 = !!(priv->features & (1ULL << VIRTIO_NET_F_HOST_TSO4)); + attr->tso_ipv6 = !!(priv->features & (1ULL << VIRTIO_NET_F_HOST_TSO6)); + attr->tx_csum = !!(priv->features & (1ULL << VIRTIO_NET_F_CSUM)); + attr->rx_csum = !!(priv->features & (1ULL << VIRTIO_NET_F_GUEST_CSUM)); + attr->virtio_version_1_0 = + !!(priv->features & (1ULL << VIRTIO_F_VERSION_1)); + attr->q_type = + (priv->features & (1ULL << VIRTIO_F_RING_PACKED)) ? MLX5_VIRTQ_TYPE_PACKED : MLX5_VIRTQ_TYPE_SPLIT; /* * No need event QPs creation when the guest in poll mode or when the * capability allows it. */ - attr.event_mode = vq.callfd != -1 || !(priv->caps.event_mode & (1 << - MLX5_VIRTQ_EVENT_MODE_NO_MSIX)) ? - MLX5_VIRTQ_EVENT_MODE_QP : - MLX5_VIRTQ_EVENT_MODE_NO_MSIX; - if (attr.event_mode == MLX5_VIRTQ_EVENT_MODE_QP) { - ret = mlx5_vdpa_event_qp_prepare(priv, vq.size, vq.callfd, - &virtq->eqp); + attr->event_mode = vq->callfd != -1 || + !(priv->caps.event_mode & (1 << MLX5_VIRTQ_EVENT_MODE_NO_MSIX)) ? 
+ MLX5_VIRTQ_EVENT_MODE_QP : MLX5_VIRTQ_EVENT_MODE_NO_MSIX; + if (attr->event_mode == MLX5_VIRTQ_EVENT_MODE_QP) { + ret = mlx5_vdpa_event_qp_prepare(priv, + vq->size, vq->callfd, &virtq->eqp); if (ret) { - DRV_LOG(ERR, "Failed to create event QPs for virtq %d.", + DRV_LOG(ERR, + "Failed to create event QPs for virtq %d.", index); return -1; } - attr.qp_id = virtq->eqp.fw_qp->id; + attr->mod_fields_bitmap |= MLX5_VIRTQ_MODIFY_TYPE_EVENT_MODE; + attr->qp_id = virtq->eqp.fw_qp->id; } else { DRV_LOG(INFO, "Virtq %d is, for sure, working by poll mode, no" " need event QPs and event mechanism.", index); @@ -265,77 +270,82 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index) if (!virtq->counters) { DRV_LOG(ERR, "Failed to create virtq couners for virtq" " %d.", index); - goto error; + return -1; } - attr.counters_obj_id = virtq->counters->id; + attr->counters_obj_id = virtq->counters->id; } /* Setup 3 UMEMs for each virtq. */ - for (i = 0; i < RTE_DIM(virtq->umems); ++i) { - uint32_t size; - void *buf; - struct mlx5dv_devx_umem *obj; - - size = priv->caps.umems[i].a * vq.size + priv->caps.umems[i].b; - if (virtq->umems[i].size == size && - virtq->umems[i].obj != NULL) { - /* Reuse registered memory. */ - memset(virtq->umems[i].buf, 0, size); - goto reuse; - } - if (virtq->umems[i].obj) - claim_zero(mlx5_glue->devx_umem_dereg + if (virtq->virtq) { + for (i = 0; i < RTE_DIM(virtq->umems); ++i) { + uint32_t size; + void *buf; + struct mlx5dv_devx_umem *obj; + + size = + priv->caps.umems[i].a * vq->size + priv->caps.umems[i].b; + if (virtq->umems[i].size == size && + virtq->umems[i].obj != NULL) { + /* Reuse registered memory. 
*/ + memset(virtq->umems[i].buf, 0, size); + goto reuse; + } + if (virtq->umems[i].obj) + claim_zero(mlx5_glue->devx_umem_dereg (virtq->umems[i].obj)); - if (virtq->umems[i].buf) - rte_free(virtq->umems[i].buf); - virtq->umems[i].size = 0; - virtq->umems[i].obj = NULL; - virtq->umems[i].buf = NULL; - buf = rte_zmalloc(__func__, size, 4096); - if (buf == NULL) { - DRV_LOG(ERR, "Cannot allocate umem %d memory for virtq" + if (virtq->umems[i].buf) + rte_free(virtq->umems[i].buf); + virtq->umems[i].size = 0; + virtq->umems[i].obj = NULL; + virtq->umems[i].buf = NULL; + buf = rte_zmalloc(__func__, + size, 4096); + if (buf == NULL) { + DRV_LOG(ERR, "Cannot allocate umem %d memory for virtq" " %u.", i, index); - goto error; - } - obj = mlx5_glue->devx_umem_reg(priv->cdev->ctx, buf, size, - IBV_ACCESS_LOCAL_WRITE); - if (obj == NULL) { - DRV_LOG(ERR, "Failed to register umem %d for virtq %u.", + return -1; + } + obj = mlx5_glue->devx_umem_reg(priv->cdev->ctx, + buf, size, IBV_ACCESS_LOCAL_WRITE); + if (obj == NULL) { + DRV_LOG(ERR, "Failed to register umem %d for virtq %u.", i, index); - goto error; - } - virtq->umems[i].size = size; - virtq->umems[i].buf = buf; - virtq->umems[i].obj = obj; + rte_free(buf); + return -1; + } + virtq->umems[i].size = size; + virtq->umems[i].buf = buf; + virtq->umems[i].obj = obj; reuse: - attr.umems[i].id = virtq->umems[i].obj->umem_id; - attr.umems[i].offset = 0; - attr.umems[i].size = virtq->umems[i].size; + attr->umems[i].id = virtq->umems[i].obj->umem_id; + attr->umems[i].offset = 0; + attr->umems[i].size = virtq->umems[i].size; + } } - if (attr.type == MLX5_VIRTQ_TYPE_SPLIT) { + if (attr->q_type == MLX5_VIRTQ_TYPE_SPLIT) { gpa = mlx5_vdpa_hva_to_gpa(priv->vmem, - (uint64_t)(uintptr_t)vq.desc); + (uint64_t)(uintptr_t)vq->desc); if (!gpa) { DRV_LOG(ERR, "Failed to get descriptor ring GPA."); - goto error; + return -1; } - attr.desc_addr = gpa; + attr->desc_addr = gpa; gpa = mlx5_vdpa_hva_to_gpa(priv->vmem, - (uint64_t)(uintptr_t)vq.used); 
+ (uint64_t)(uintptr_t)vq->used); if (!gpa) { DRV_LOG(ERR, "Failed to get GPA for used ring."); - goto error; + return -1; } - attr.used_addr = gpa; + attr->used_addr = gpa; gpa = mlx5_vdpa_hva_to_gpa(priv->vmem, - (uint64_t)(uintptr_t)vq.avail); + (uint64_t)(uintptr_t)vq->avail); if (!gpa) { DRV_LOG(ERR, "Failed to get GPA for available ring."); - goto error; + return -1; } - attr.available_addr = gpa; + attr->available_addr = gpa; } - ret = rte_vhost_get_vring_base(priv->vid, index, &last_avail_idx, - &last_used_idx); + ret = rte_vhost_get_vring_base(priv->vid, + index, &last_avail_idx, &last_used_idx); if (ret) { last_avail_idx = 0; last_used_idx = 0; @@ -345,24 +355,71 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index) "virtq %d.", priv->vid, last_avail_idx, last_used_idx, index); } - attr.hw_available_index = last_avail_idx; - attr.hw_used_index = last_used_idx; - attr.q_size = vq.size; - attr.mkey = priv->gpa_mkey_index; - attr.tis_id = priv->tiss[(index / 2) % priv->num_lag_ports]->id; - attr.queue_index = index; - attr.pd = priv->cdev->pdn; - attr.hw_latency_mode = priv->hw_latency_mode; - attr.hw_max_latency_us = priv->hw_max_latency_us; - attr.hw_max_pending_comp = priv->hw_max_pending_comp; - virtq->virtq = mlx5_devx_cmd_create_virtq(priv->cdev->ctx, &attr); + attr->hw_available_index = last_avail_idx; + attr->hw_used_index = last_used_idx; + attr->q_size = vq->size; + attr->mkey = priv->gpa_mkey_index; + attr->tis_id = priv->tiss[(index / 2) % priv->num_lag_ports]->id; + attr->queue_index = index; + attr->pd = priv->cdev->pdn; + attr->hw_latency_mode = priv->hw_latency_mode; + attr->hw_max_latency_us = priv->hw_max_latency_us; + attr->hw_max_pending_comp = priv->hw_max_pending_comp; + if (attr->hw_latency_mode || attr->hw_max_latency_us || + attr->hw_max_pending_comp) + attr->mod_fields_bitmap |= MLX5_VIRTQ_MODIFY_TYPE_QUEUE_PERIOD; + return 0; +} + +bool +mlx5_vdpa_is_modify_virtq_supported(struct mlx5_vdpa_priv *priv) +{ + return 
(priv->caps.vnet_modify_ext && + priv->caps.virtio_net_q_addr_modify && + priv->caps.virtio_q_index_modify) ? true : false; +} + +static int +mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index) +{ + struct mlx5_vdpa_virtq *virtq = &priv->virtqs[index]; + struct rte_vhost_vring vq; + struct mlx5_devx_virtq_attr attr = {0}; + int ret; + uint16_t event_num = MLX5_EVENT_TYPE_OBJECT_CHANGE; + uint64_t cookie; + + ret = rte_vhost_get_vhost_vring(priv->vid, index, &vq); + if (ret) + return -1; + if (vq.size == 0) + return 0; virtq->priv = priv; - if (!virtq->virtq) + virtq->stopped = 0; + ret = mlx5_vdpa_virtq_sub_objs_prepare(priv, &attr, + &vq, index); + if (ret) { + DRV_LOG(ERR, "Failed to setup update virtq attr" + " %d.", index); goto error; - claim_zero(rte_vhost_enable_guest_notification(priv->vid, index, 1)); - if (mlx5_vdpa_virtq_modify(virtq, 1)) + } + if (!virtq->virtq) { + virtq->index = index; + virtq->vq_size = vq.size; + virtq->virtq = mlx5_devx_cmd_create_virtq(priv->cdev->ctx, + &attr); + if (!virtq->virtq) + goto error; + attr.mod_fields_bitmap = MLX5_VIRTQ_MODIFY_TYPE_STATE; + } + attr.state = MLX5_VIRTQ_STATE_RDY; + ret = mlx5_devx_cmd_modify_virtq(virtq->virtq, &attr); + if (ret) { + DRV_LOG(ERR, "Failed to modify virtq %d.", index); goto error; - virtq->priv = priv; + } + claim_zero(rte_vhost_enable_guest_notification(priv->vid, index, 1)); + virtq->configured = 1; rte_write32(virtq->index, priv->virtq_db_addr); /* Setup doorbell mapping. 
*/ virtq->intr_handle = @@ -553,7 +610,7 @@ mlx5_vdpa_virtq_enable(struct mlx5_vdpa_priv *priv, int index, int enable) return 0; DRV_LOG(INFO, "Virtq %d was modified, recreate it.", index); } - if (virtq->virtq) { + if (virtq->configured) { virtq->enable = 0; if (is_virtq_recvq(virtq->index, priv->nr_virtqs)) { ret = mlx5_vdpa_steer_update(priv);

From patchwork Fri Apr 8 07:55:57 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Li Zhang
X-Patchwork-Id: 109478
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Li Zhang
To: , , ,
CC: ,
Subject: [RFC 07/15] vdpa/mlx5: optimize datapath-control synchronization
Date: Fri, 8 Apr 2022 10:55:57 +0300
Message-ID: <20220408075606.33056-8-lizh@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20220408075606.33056-1-lizh@nvidia.com>
References: <20220408075606.33056-1-lizh@nvidia.com>
MIME-Version: 1.0
The driver used a single global lock for all the synchronization between the datapath and the control path. It is better to group each critical section only with the others that must be synchronized with it. Replace the global lock with the following locks:
1. Virtq locks (per virtq) synchronize datapath polling and parallel configurations of the same virtq.
2. A doorbell lock synchronizes doorbell updates, which are shared by all the virtqs in the device.
3. A steering lock protects updates of the shared steering objects.

Signed-off-by: Li Zhang
---
 drivers/vdpa/mlx5/mlx5_vdpa.c       | 24 ++++---
 drivers/vdpa/mlx5/mlx5_vdpa.h       | 13 ++--
 drivers/vdpa/mlx5/mlx5_vdpa_event.c | 97 ++++++++++++++++++-----------
 drivers/vdpa/mlx5/mlx5_vdpa_lm.c    | 34 +++++++---
 drivers/vdpa/mlx5/mlx5_vdpa_steer.c |  7 ++-
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 83 +++++++++++++++++-------
 6 files changed, 180 insertions(+), 78 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c index 03ad01c156..e99c86b3d6 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa.c @@ -135,6 +135,7 @@ mlx5_vdpa_set_vring_state(int vid, int vring, int state) struct rte_vdpa_device *vdev = rte_vhost_get_vdpa_device(vid); struct mlx5_vdpa_priv *priv = mlx5_vdpa_find_priv_resource_by_vdev(vdev); + struct mlx5_vdpa_virtq *virtq; int ret; if (priv == NULL) { @@ -145,9 +146,10 @@ mlx5_vdpa_set_vring_state(int vid, int vring, int state) DRV_LOG(ERR, "Too big vring id: %d.", vring); return -E2BIG; } - pthread_mutex_lock(&priv->vq_config_lock); + virtq = &priv->virtqs[vring]; + pthread_mutex_lock(&virtq->virtq_lock); ret = mlx5_vdpa_virtq_enable(priv, vring, state); - pthread_mutex_unlock(&priv->vq_config_lock); + pthread_mutex_unlock(&virtq->virtq_lock); return ret; } @@ -267,7 +269,9 @@
mlx5_vdpa_dev_close(int vid) ret |= mlx5_vdpa_lm_log(priv); priv->state = MLX5_VDPA_STATE_IN_PROGRESS; } + pthread_mutex_lock(&priv->steer_update_lock); mlx5_vdpa_steer_unset(priv); + pthread_mutex_unlock(&priv->steer_update_lock); mlx5_vdpa_virtqs_release(priv); mlx5_vdpa_drain_cq(priv); if (priv->lm_mr.addr) @@ -276,8 +280,6 @@ mlx5_vdpa_dev_close(int vid) if (!priv->connected) mlx5_vdpa_dev_cache_clean(priv); priv->vid = 0; - /* The mutex may stay locked after event thread cancel - initiate it. */ - pthread_mutex_init(&priv->vq_config_lock, NULL); DRV_LOG(INFO, "vDPA device %d was closed.", vid); return ret; } @@ -549,15 +551,21 @@ mlx5_vdpa_config_get(struct mlx5_kvargs_ctrl *mkvlist, static int mlx5_vdpa_virtq_resource_prepare(struct mlx5_vdpa_priv *priv) { + struct mlx5_vdpa_virtq *virtq; uint32_t index; uint32_t i; + for (index = 0; index < priv->caps.max_num_virtio_queues * 2; + index++) { + virtq = &priv->virtqs[index]; + pthread_mutex_init(&virtq->virtq_lock, NULL); + } if (!priv->queues) return 0; for (index = 0; index < (priv->queues * 2); ++index) { - struct mlx5_vdpa_virtq *virtq = &priv->virtqs[index]; + virtq = &priv->virtqs[index]; int ret = mlx5_vdpa_event_qp_prepare(priv, priv->queue_size, - -1, &virtq->eqp); + -1, virtq); if (ret) { DRV_LOG(ERR, "Failed to create event QPs for virtq %d.", @@ -713,7 +721,8 @@ mlx5_vdpa_dev_probe(struct mlx5_common_device *cdev, priv->num_lag_ports = attr->num_lag_ports; if (attr->num_lag_ports == 0) priv->num_lag_ports = 1; - pthread_mutex_init(&priv->vq_config_lock, NULL); + rte_spinlock_init(&priv->db_lock); + pthread_mutex_init(&priv->steer_update_lock, NULL); priv->cdev = cdev; mlx5_vdpa_config_get(mkvlist, priv); if (mlx5_vdpa_create_dev_resources(priv)) @@ -797,7 +806,6 @@ mlx5_vdpa_dev_release(struct mlx5_vdpa_priv *priv) mlx5_vdpa_release_dev_resources(priv); if (priv->vdev) rte_vdpa_unregister_device(priv->vdev); - pthread_mutex_destroy(&priv->vq_config_lock); rte_free(priv); } diff --git 
a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h index e5553079fe..3fd5eefc5e 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.h +++ b/drivers/vdpa/mlx5/mlx5_vdpa.h @@ -82,6 +82,7 @@ struct mlx5_vdpa_virtq { bool stopped; uint32_t configured:1; uint32_t version; + pthread_mutex_t virtq_lock; struct mlx5_vdpa_priv *priv; struct mlx5_devx_obj *virtq; struct mlx5_devx_obj *counters; @@ -126,7 +127,8 @@ struct mlx5_vdpa_priv { TAILQ_ENTRY(mlx5_vdpa_priv) next; bool connected; enum mlx5_dev_state state; - pthread_mutex_t vq_config_lock; + rte_spinlock_t db_lock; + pthread_mutex_t steer_update_lock; uint64_t no_traffic_counter; pthread_t timer_tid; int event_mode; @@ -222,14 +224,15 @@ int mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv); * Number of descriptors. * @param[in] callfd * The guest notification file descriptor. - * @param[in/out] eqp - * Pointer to the event QP structure. + * @param[in/out] virtq + * Pointer to the virt-queue structure. * * @return * 0 on success, -1 otherwise and rte_errno is set. */ -int mlx5_vdpa_event_qp_prepare(struct mlx5_vdpa_priv *priv, uint16_t desc_n, - int callfd, struct mlx5_vdpa_event_qp *eqp); +int +mlx5_vdpa_event_qp_prepare(struct mlx5_vdpa_priv *priv, uint16_t desc_n, + int callfd, struct mlx5_vdpa_virtq *virtq); /** * Destroy an event QP and all its related resources. 
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c index b43dca9255..2b0f5936d1 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c @@ -85,12 +85,13 @@ mlx5_vdpa_cq_arm(struct mlx5_vdpa_priv *priv, struct mlx5_vdpa_cq *cq) static int mlx5_vdpa_cq_create(struct mlx5_vdpa_priv *priv, uint16_t log_desc_n, - int callfd, struct mlx5_vdpa_cq *cq) + int callfd, struct mlx5_vdpa_virtq *virtq) { struct mlx5_devx_cq_attr attr = { .use_first_only = 1, .uar_page_id = mlx5_os_get_devx_uar_page_id(priv->uar.obj), }; + struct mlx5_vdpa_cq *cq = &virtq->eqp.cq; uint16_t event_nums[1] = {0}; int ret; @@ -102,10 +103,11 @@ mlx5_vdpa_cq_create(struct mlx5_vdpa_priv *priv, uint16_t log_desc_n, cq->log_desc_n = log_desc_n; rte_spinlock_init(&cq->sl); /* Subscribe CQ event to the event channel controlled by the driver. */ - ret = mlx5_os_devx_subscribe_devx_event(priv->eventc, - cq->cq_obj.cq->obj, - sizeof(event_nums), event_nums, - (uint64_t)(uintptr_t)cq); + ret = mlx5_glue->devx_subscribe_devx_event(priv->eventc, + cq->cq_obj.cq->obj, + sizeof(event_nums), + event_nums, + (uint64_t)(uintptr_t)virtq); if (ret) { DRV_LOG(ERR, "Failed to subscribe CQE event."); rte_errno = errno; @@ -167,13 +169,17 @@ mlx5_vdpa_cq_poll(struct mlx5_vdpa_cq *cq) static void mlx5_vdpa_arm_all_cqs(struct mlx5_vdpa_priv *priv) { + struct mlx5_vdpa_virtq *virtq; struct mlx5_vdpa_cq *cq; int i; for (i = 0; i < priv->nr_virtqs; i++) { + virtq = &priv->virtqs[i]; + pthread_mutex_lock(&virtq->virtq_lock); cq = &priv->virtqs[i].eqp.cq; if (cq->cq_obj.cq && !cq->armed) mlx5_vdpa_cq_arm(priv, cq); + pthread_mutex_unlock(&virtq->virtq_lock); } } @@ -220,13 +226,18 @@ mlx5_vdpa_queue_complete(struct mlx5_vdpa_cq *cq) static uint32_t mlx5_vdpa_queues_complete(struct mlx5_vdpa_priv *priv) { - int i; + struct mlx5_vdpa_virtq *virtq; + struct mlx5_vdpa_cq *cq; uint32_t max = 0; + uint32_t comp; + int i; for (i = 0; i < priv->nr_virtqs; i++) { - 
struct mlx5_vdpa_cq *cq = &priv->virtqs[i].eqp.cq; - uint32_t comp = mlx5_vdpa_queue_complete(cq); - + virtq = &priv->virtqs[i]; + pthread_mutex_lock(&virtq->virtq_lock); + cq = &virtq->eqp.cq; + comp = mlx5_vdpa_queue_complete(cq); + pthread_mutex_unlock(&virtq->virtq_lock); if (comp > max) max = comp; } @@ -253,7 +264,7 @@ mlx5_vdpa_drain_cq(struct mlx5_vdpa_priv *priv) } /* Wait on all CQs channel for completion event. */ -static struct mlx5_vdpa_cq * +static struct mlx5_vdpa_virtq * mlx5_vdpa_event_wait(struct mlx5_vdpa_priv *priv __rte_unused) { #ifdef HAVE_IBV_DEVX_EVENT @@ -265,7 +276,8 @@ mlx5_vdpa_event_wait(struct mlx5_vdpa_priv *priv __rte_unused) sizeof(out.buf)); if (ret >= 0) - return (struct mlx5_vdpa_cq *)(uintptr_t)out.event_resp.cookie; + return (struct mlx5_vdpa_virtq *) + (uintptr_t)out.event_resp.cookie; DRV_LOG(INFO, "Got error in devx_get_event, ret = %d, errno = %d.", ret, errno); #endif @@ -276,7 +288,7 @@ static void * mlx5_vdpa_event_handle(void *arg) { struct mlx5_vdpa_priv *priv = arg; - struct mlx5_vdpa_cq *cq; + struct mlx5_vdpa_virtq *virtq; uint32_t max; switch (priv->event_mode) { @@ -284,7 +296,6 @@ mlx5_vdpa_event_handle(void *arg) case MLX5_VDPA_EVENT_MODE_FIXED_TIMER: priv->timer_delay_us = priv->event_us; while (1) { - pthread_mutex_lock(&priv->vq_config_lock); max = mlx5_vdpa_queues_complete(priv); if (max == 0 && priv->no_traffic_counter++ >= priv->no_traffic_max) { @@ -292,32 +303,37 @@ mlx5_vdpa_event_handle(void *arg) priv->vdev->device->name); mlx5_vdpa_arm_all_cqs(priv); do { - pthread_mutex_unlock - (&priv->vq_config_lock); - cq = mlx5_vdpa_event_wait(priv); - pthread_mutex_lock - (&priv->vq_config_lock); - if (cq == NULL || - mlx5_vdpa_queue_complete(cq) > 0) + virtq = mlx5_vdpa_event_wait(priv); + if (virtq == NULL) break; + pthread_mutex_lock( + &virtq->virtq_lock); + if (mlx5_vdpa_queue_complete( + &virtq->eqp.cq) > 0) { + pthread_mutex_unlock( + &virtq->virtq_lock); + break; + } + pthread_mutex_unlock( + 
&virtq->virtq_lock); } while (1); priv->timer_delay_us = priv->event_us; priv->no_traffic_counter = 0; } else if (max != 0) { priv->no_traffic_counter = 0; } - pthread_mutex_unlock(&priv->vq_config_lock); mlx5_vdpa_timer_sleep(priv, max); } return NULL; case MLX5_VDPA_EVENT_MODE_ONLY_INTERRUPT: do { - cq = mlx5_vdpa_event_wait(priv); - if (cq != NULL) { - pthread_mutex_lock(&priv->vq_config_lock); - if (mlx5_vdpa_queue_complete(cq) > 0) - mlx5_vdpa_cq_arm(priv, cq); - pthread_mutex_unlock(&priv->vq_config_lock); + virtq = mlx5_vdpa_event_wait(priv); + if (virtq != NULL) { + pthread_mutex_lock(&virtq->virtq_lock); + if (mlx5_vdpa_queue_complete( + &virtq->eqp.cq) > 0) + mlx5_vdpa_cq_arm(priv, &virtq->eqp.cq); + pthread_mutex_unlock(&virtq->virtq_lock); } } while (1); return NULL; @@ -339,7 +355,6 @@ mlx5_vdpa_err_interrupt_handler(void *cb_arg __rte_unused) struct mlx5_vdpa_virtq *virtq; uint64_t sec; - pthread_mutex_lock(&priv->vq_config_lock); while (mlx5_glue->devx_get_event(priv->err_chnl, &out.event_resp, sizeof(out.buf)) >= (ssize_t)sizeof(out.event_resp.cookie)) { @@ -351,10 +366,11 @@ mlx5_vdpa_err_interrupt_handler(void *cb_arg __rte_unused) continue; } virtq = &priv->virtqs[vq_index]; + pthread_mutex_lock(&virtq->virtq_lock); if (!virtq->enable || virtq->version != version) - continue; + goto unlock; if (rte_rdtsc() / rte_get_tsc_hz() < MLX5_VDPA_ERROR_TIME_SEC) - continue; + goto unlock; virtq->stopped = true; /* Query error info. 
*/ if (mlx5_vdpa_virtq_query(priv, vq_index)) @@ -384,8 +400,9 @@ mlx5_vdpa_err_interrupt_handler(void *cb_arg __rte_unused) for (i = 1; i < RTE_DIM(virtq->err_time); i++) virtq->err_time[i - 1] = virtq->err_time[i]; virtq->err_time[RTE_DIM(virtq->err_time) - 1] = rte_rdtsc(); +unlock: + pthread_mutex_unlock(&virtq->virtq_lock); } - pthread_mutex_unlock(&priv->vq_config_lock); #endif } @@ -533,11 +550,18 @@ mlx5_vdpa_cqe_event_setup(struct mlx5_vdpa_priv *priv) void mlx5_vdpa_cqe_event_unset(struct mlx5_vdpa_priv *priv) { + struct mlx5_vdpa_virtq *virtq; void *status; + int i; if (priv->timer_tid) { pthread_cancel(priv->timer_tid); pthread_join(priv->timer_tid, &status); + /* The mutex may stay locked after event thread cancel, re-initialize it. */ + for (i = 0; i < priv->nr_virtqs; i++) { + virtq = &priv->virtqs[i]; + pthread_mutex_init(&virtq->virtq_lock, NULL); + } } priv->timer_tid = 0; } @@ -614,8 +638,9 @@ mlx5_vdpa_qps2rst2rts(struct mlx5_vdpa_event_qp *eqp) int mlx5_vdpa_event_qp_prepare(struct mlx5_vdpa_priv *priv, uint16_t desc_n, - int callfd, struct mlx5_vdpa_event_qp *eqp) + int callfd, struct mlx5_vdpa_virtq *virtq) { + struct mlx5_vdpa_event_qp *eqp = &virtq->eqp; struct mlx5_devx_qp_attr attr = {0}; uint16_t log_desc_n = rte_log2_u32(desc_n); uint32_t ret; @@ -632,7 +657,8 @@ mlx5_vdpa_event_qp_prepare(struct mlx5_vdpa_priv *priv, uint16_t desc_n, } if (eqp->fw_qp) mlx5_vdpa_event_qp_destroy(eqp); - if (mlx5_vdpa_cq_create(priv, log_desc_n, callfd, &eqp->cq)) + if (mlx5_vdpa_cq_create(priv, log_desc_n, callfd, virtq) || + !eqp->cq.cq_obj.cq) return -1; attr.pd = priv->cdev->pdn; attr.ts_format = @@ -650,8 +676,8 @@ mlx5_vdpa_event_qp_prepare(struct mlx5_vdpa_priv *priv, uint16_t desc_n, attr.ts_format = mlx5_ts_format_conv(priv->cdev->config.hca_attr.qp_ts_format); ret = mlx5_devx_qp_create(priv->cdev->ctx, &(eqp->sw_qp), - attr.num_of_receive_wqes * - MLX5_WSEG_SIZE, &attr, SOCKET_ID_ANY); + attr.num_of_receive_wqes * MLX5_WSEG_SIZE, + &attr,
SOCKET_ID_ANY); if (ret) { DRV_LOG(ERR, "Failed to create SW QP(%u).", rte_errno); goto error; @@ -668,3 +694,4 @@ mlx5_vdpa_event_qp_prepare(struct mlx5_vdpa_priv *priv, uint16_t desc_n, mlx5_vdpa_event_qp_destroy(eqp); return -1; } + diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_lm.c b/drivers/vdpa/mlx5/mlx5_vdpa_lm.c index a8faf0c116..efebf364d0 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_lm.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_lm.c @@ -25,11 +25,18 @@ mlx5_vdpa_logging_enable(struct mlx5_vdpa_priv *priv, int enable) if (!virtq->configured) { DRV_LOG(DEBUG, "virtq %d is invalid for dirty bitmap " "enabling.", i); - } else if (mlx5_devx_cmd_modify_virtq(priv->virtqs[i].virtq, + } else { + struct mlx5_vdpa_virtq *virtq = &priv->virtqs[i]; + + pthread_mutex_lock(&virtq->virtq_lock); + if (mlx5_devx_cmd_modify_virtq(priv->virtqs[i].virtq, &attr)) { - DRV_LOG(ERR, "Failed to modify virtq %d for dirty " + pthread_mutex_unlock(&virtq->virtq_lock); + DRV_LOG(ERR, "Failed to modify virtq %d for dirty " "bitmap enabling.", i); - return -1; + return -1; + } + pthread_mutex_unlock(&virtq->virtq_lock); } } return 0; @@ -61,10 +68,19 @@ mlx5_vdpa_dirty_bitmap_set(struct mlx5_vdpa_priv *priv, uint64_t log_base, virtq = &priv->virtqs[i]; if (!virtq->configured) { DRV_LOG(DEBUG, "virtq %d is invalid for LM.", i); - } else if (mlx5_devx_cmd_modify_virtq(priv->virtqs[i].virtq, - &attr)) { - DRV_LOG(ERR, "Failed to modify virtq %d for LM.", i); - goto err; + } else { + struct mlx5_vdpa_virtq *virtq = &priv->virtqs[i]; + + pthread_mutex_lock(&virtq->virtq_lock); + if (mlx5_devx_cmd_modify_virtq( + priv->virtqs[i].virtq, + &attr)) { + pthread_mutex_unlock(&virtq->virtq_lock); + DRV_LOG(ERR, + "Failed to modify virtq %d for LM.", i); + goto err; + } + pthread_mutex_unlock(&virtq->virtq_lock); } } return 0; @@ -79,6 +95,7 @@ mlx5_vdpa_dirty_bitmap_set(struct mlx5_vdpa_priv *priv, uint64_t log_base, int mlx5_vdpa_lm_log(struct mlx5_vdpa_priv *priv) { + struct mlx5_vdpa_virtq *virtq; uint64_t 
features; int ret = rte_vhost_get_negotiated_features(priv->vid, &features); int i; @@ -90,10 +107,13 @@ mlx5_vdpa_lm_log(struct mlx5_vdpa_priv *priv) if (!RTE_VHOST_NEED_LOG(features)) return 0; for (i = 0; i < priv->nr_virtqs; ++i) { + virtq = &priv->virtqs[i]; if (!priv->virtqs[i].virtq) { DRV_LOG(DEBUG, "virtq %d is invalid for LM log.", i); } else { + pthread_mutex_lock(&virtq->virtq_lock); ret = mlx5_vdpa_virtq_stop(priv, i); + pthread_mutex_unlock(&virtq->virtq_lock); if (ret) { DRV_LOG(ERR, "Failed to stop virtq %d for LM " "log.", i); diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_steer.c b/drivers/vdpa/mlx5/mlx5_vdpa_steer.c index d4b4375c88..4cbf09784e 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_steer.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_steer.c @@ -237,19 +237,24 @@ mlx5_vdpa_rss_flows_create(struct mlx5_vdpa_priv *priv) int mlx5_vdpa_steer_update(struct mlx5_vdpa_priv *priv) { - int ret = mlx5_vdpa_rqt_prepare(priv); + int ret; + pthread_mutex_lock(&priv->steer_update_lock); + ret = mlx5_vdpa_rqt_prepare(priv); if (ret == 0) { mlx5_vdpa_steer_unset(priv); } else if (ret < 0) { + pthread_mutex_unlock(&priv->steer_update_lock); return ret; } else if (!priv->steer.rss[0].flow) { ret = mlx5_vdpa_rss_flows_create(priv); if (ret) { DRV_LOG(ERR, "Cannot create RSS flows."); + pthread_mutex_unlock(&priv->steer_update_lock); return -1; } } + pthread_mutex_unlock(&priv->steer_update_lock); return 0; } diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c index ef5bf1ef01..c2c5386075 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c @@ -24,13 +24,17 @@ mlx5_vdpa_virtq_kick_handler(void *cb_arg) int nbytes; int retry; + pthread_mutex_lock(&virtq->virtq_lock); if (priv->state != MLX5_VDPA_STATE_CONFIGURED && !virtq->enable) { + pthread_mutex_unlock(&virtq->virtq_lock); DRV_LOG(ERR, "device %d queue %d down, skip kick handling", priv->vid, virtq->index); return; } - if 
(rte_intr_fd_get(virtq->intr_handle) < 0) + if (rte_intr_fd_get(virtq->intr_handle) < 0) { + pthread_mutex_unlock(&virtq->virtq_lock); return; + } for (retry = 0; retry < 3; ++retry) { nbytes = read(rte_intr_fd_get(virtq->intr_handle), &buf, 8); @@ -44,9 +48,14 @@ mlx5_vdpa_virtq_kick_handler(void *cb_arg) } break; } - if (nbytes < 0) + if (nbytes < 0) { + pthread_mutex_unlock(&virtq->virtq_lock); return; + } + rte_spinlock_lock(&priv->db_lock); rte_write32(virtq->index, priv->virtq_db_addr); + rte_spinlock_unlock(&priv->db_lock); + pthread_mutex_unlock(&virtq->virtq_lock); if (priv->state != MLX5_VDPA_STATE_CONFIGURED && !virtq->enable) { DRV_LOG(ERR, "device %d queue %d down, skip kick handling", priv->vid, virtq->index); @@ -66,6 +75,30 @@ mlx5_vdpa_virtq_kick_handler(void *cb_arg) DRV_LOG(DEBUG, "Ring virtq %u doorbell.", virtq->index); } +/* Virtq must be locked before calling this function. */ +static void +mlx5_vdpa_virtq_unregister_intr_handle(struct mlx5_vdpa_virtq *virtq) +{ + int ret = -EAGAIN; + + if (rte_intr_fd_get(virtq->intr_handle) >= 0) { + while (ret == -EAGAIN) { + ret = rte_intr_callback_unregister(virtq->intr_handle, + mlx5_vdpa_virtq_kick_handler, virtq); + if (ret == -EAGAIN) { + DRV_LOG(DEBUG, "Try again to unregister fd %d of virtq %hu interrupt", + rte_intr_fd_get(virtq->intr_handle), + virtq->index); + pthread_mutex_unlock(&virtq->virtq_lock); + usleep(MLX5_VDPA_INTR_RETRIES_USEC); + pthread_mutex_lock(&virtq->virtq_lock); + } + } + rte_intr_fd_set(virtq->intr_handle, -1); + } + rte_intr_instance_free(virtq->intr_handle); +} + /* Release cached VQ resources. 
*/ void mlx5_vdpa_virtqs_cleanup(struct mlx5_vdpa_priv *priv) @@ -75,6 +108,7 @@ mlx5_vdpa_virtqs_cleanup(struct mlx5_vdpa_priv *priv) for (i = 0; i < priv->caps.max_num_virtio_queues * 2; i++) { struct mlx5_vdpa_virtq *virtq = &priv->virtqs[i]; + pthread_mutex_lock(&virtq->virtq_lock); virtq->configured = 0; for (j = 0; j < RTE_DIM(virtq->umems); ++j) { if (virtq->umems[j].obj) { @@ -90,28 +124,17 @@ mlx5_vdpa_virtqs_cleanup(struct mlx5_vdpa_priv *priv) } if (virtq->eqp.fw_qp) mlx5_vdpa_event_qp_destroy(&virtq->eqp); + pthread_mutex_unlock(&virtq->virtq_lock); } } + static int mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq) { int ret = -EAGAIN; - if (rte_intr_fd_get(virtq->intr_handle) >= 0) { - while (ret == -EAGAIN) { - ret = rte_intr_callback_unregister(virtq->intr_handle, - mlx5_vdpa_virtq_kick_handler, virtq); - if (ret == -EAGAIN) { - DRV_LOG(DEBUG, "Try again to unregister fd %d of virtq %hu interrupt", - rte_intr_fd_get(virtq->intr_handle), - virtq->index); - usleep(MLX5_VDPA_INTR_RETRIES_USEC); - } - } - rte_intr_fd_set(virtq->intr_handle, -1); - } - rte_intr_instance_free(virtq->intr_handle); + mlx5_vdpa_virtq_unregister_intr_handle(virtq); if (virtq->configured) { ret = mlx5_vdpa_virtq_stop(virtq->priv, virtq->index); if (ret) @@ -128,10 +151,15 @@ mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq) void mlx5_vdpa_virtqs_release(struct mlx5_vdpa_priv *priv) { + struct mlx5_vdpa_virtq *virtq; int i; - for (i = 0; i < priv->nr_virtqs; i++) - mlx5_vdpa_virtq_unset(&priv->virtqs[i]); + for (i = 0; i < priv->nr_virtqs; i++) { + virtq = &priv->virtqs[i]; + pthread_mutex_lock(&virtq->virtq_lock); + mlx5_vdpa_virtq_unset(virtq); + pthread_mutex_unlock(&virtq->virtq_lock); + } priv->features = 0; priv->nr_virtqs = 0; } @@ -250,7 +278,7 @@ mlx5_vdpa_virtq_sub_objs_prepare(struct mlx5_vdpa_priv *priv, MLX5_VIRTQ_EVENT_MODE_QP : MLX5_VIRTQ_EVENT_MODE_NO_MSIX; if (attr->event_mode == MLX5_VIRTQ_EVENT_MODE_QP) { ret = mlx5_vdpa_event_qp_prepare(priv, - 
vq->size, vq->callfd, &virtq->eqp); + vq->size, vq->callfd, virtq); if (ret) { DRV_LOG(ERR, "Failed to create event QPs for virtq %d.", @@ -420,7 +448,9 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index) } claim_zero(rte_vhost_enable_guest_notification(priv->vid, index, 1)); virtq->configured = 1; + rte_spinlock_lock(&priv->db_lock); rte_write32(virtq->index, priv->virtq_db_addr); + rte_spinlock_unlock(&priv->db_lock); /* Setup doorbell mapping. */ virtq->intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED); @@ -537,6 +567,7 @@ mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv) uint32_t i; uint16_t nr_vring = rte_vhost_get_vring_num(priv->vid); int ret = rte_vhost_get_negotiated_features(priv->vid, &priv->features); + struct mlx5_vdpa_virtq *virtq; if (ret || mlx5_vdpa_features_validate(priv)) { DRV_LOG(ERR, "Failed to configure negotiated features."); @@ -556,9 +587,17 @@ return -1; } priv->nr_virtqs = nr_vring; - for (i = 0; i < nr_vring; i++) - if (priv->virtqs[i].enable && mlx5_vdpa_virtq_setup(priv, i)) - goto error; + for (i = 0; i < nr_vring; i++) { + virtq = &priv->virtqs[i]; + if (virtq->enable) { + pthread_mutex_lock(&virtq->virtq_lock); + if (mlx5_vdpa_virtq_setup(priv, i)) { + pthread_mutex_unlock(&virtq->virtq_lock); + goto error; + } + pthread_mutex_unlock(&virtq->virtq_lock); + } + } return 0; error: mlx5_vdpa_virtqs_release(priv);

From patchwork Fri Apr 8 07:55:58 2022
X-Patchwork-Submitter: Li Zhang
X-Patchwork-Id: 109480
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Li Zhang
Subject: [RFC 08/15] vdpa/mlx5: add multi-thread management for configuration
Date: Fri, 8 Apr 2022 10:55:58 +0300
Message-ID: <20220408075606.33056-9-lizh@nvidia.com>
In-Reply-To: <20220408075606.33056-1-lizh@nvidia.com>
References: <20220408075606.33056-1-lizh@nvidia.com>
List-Id: DPDK patches and discussions

The LM process involves many object creations and destructions on both the source and the destination servers. As the LM time grows, so does the packet drop of the VM. To shorten the LM time, the mlx5 FW configurations need to be done in parallel, so add internal multi-thread management in the driver for it. A new devarg defines the number of threads and their CPU affinity. The management is shared between all the devices of the driver. Since the event_core also affects the datapath events thread, reduce the priority of the datapath event thread to allow fast configuration of the devices doing the LM.
Signed-off-by: Li Zhang --- doc/guides/vdpadevs/mlx5.rst | 11 +++ drivers/vdpa/mlx5/meson.build | 1 + drivers/vdpa/mlx5/mlx5_vdpa.c | 41 ++++++++ drivers/vdpa/mlx5/mlx5_vdpa.h | 36 +++++++ drivers/vdpa/mlx5/mlx5_vdpa_cthread.c | 129 ++++++++++++++++++++++++++ drivers/vdpa/mlx5/mlx5_vdpa_event.c | 2 +- drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 8 +- 7 files changed, 223 insertions(+), 5 deletions(-) create mode 100644 drivers/vdpa/mlx5/mlx5_vdpa_cthread.c diff --git a/doc/guides/vdpadevs/mlx5.rst b/doc/guides/vdpadevs/mlx5.rst index 0ad77bf535..b75a01688d 100644 --- a/doc/guides/vdpadevs/mlx5.rst +++ b/doc/guides/vdpadevs/mlx5.rst @@ -78,6 +78,17 @@ for an additional list of options shared with other mlx5 drivers. CPU core number to set polling thread affinity to, default to control plane cpu. +- ``max_conf_threads`` parameter [int] + + Allow the driver to use internal threads to speed up configuration. + All the threads are created on the same core as the event completion queue scheduling thread. + + - 0, default, don't use internal threads for configuration. + + - 1 - 256, number of internal threads in addition to the caller thread (8 is suggested). + This value, if not 0, should be the same for all the devices; + the first probe takes it, together with the event_core, for all the multi-thread configurations in the driver.
+ - ``hw_latency_mode`` parameter [int] The completion queue moderation mode: diff --git a/drivers/vdpa/mlx5/meson.build b/drivers/vdpa/mlx5/meson.build index 0fa82ad257..9d8dbb1a82 100644 --- a/drivers/vdpa/mlx5/meson.build +++ b/drivers/vdpa/mlx5/meson.build @@ -15,6 +15,7 @@ sources = files( 'mlx5_vdpa_virtq.c', 'mlx5_vdpa_steer.c', 'mlx5_vdpa_lm.c', + 'mlx5_vdpa_cthread.c', ) cflags_options = [ '-std=c11', diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c index e99c86b3d6..eace0e4c9e 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa.c @@ -50,6 +50,8 @@ TAILQ_HEAD(mlx5_vdpa_privs, mlx5_vdpa_priv) priv_list = TAILQ_HEAD_INITIALIZER(priv_list); static pthread_mutex_t priv_list_lock = PTHREAD_MUTEX_INITIALIZER; +struct mlx5_vdpa_conf_thread_mng conf_thread_mng; + static void mlx5_vdpa_dev_release(struct mlx5_vdpa_priv *priv); static struct mlx5_vdpa_priv * @@ -493,6 +495,29 @@ mlx5_vdpa_args_check_handler(const char *key, const char *val, void *opaque) DRV_LOG(WARNING, "Invalid event_core %s.", val); else priv->event_core = tmp; + } else if (strcmp(key, "max_conf_threads") == 0) { + if (tmp) { + priv->use_c_thread = true; + if (!conf_thread_mng.initializer_priv) { + conf_thread_mng.initializer_priv = priv; + if (tmp > MLX5_VDPA_MAX_C_THRD) { + DRV_LOG(WARNING, + "Invalid max_conf_threads %s " + "and set max_conf_threads to %d", + val, MLX5_VDPA_MAX_C_THRD); + tmp = MLX5_VDPA_MAX_C_THRD; + } + conf_thread_mng.max_thrds = tmp; + } else if (tmp != conf_thread_mng.max_thrds) { + DRV_LOG(WARNING, + "max_conf_threads is PMD argument and not per device, " + "only the first device configuration set it, current value is %d " + "and will not be changed to %d.", + conf_thread_mng.max_thrds, (int)tmp); + } + } else { + priv->use_c_thread = false; + } } else if (strcmp(key, "hw_latency_mode") == 0) { priv->hw_latency_mode = (uint32_t)tmp; } else if (strcmp(key, "hw_max_latency_us") == 0) { @@ -521,6 +546,9 @@ 
mlx5_vdpa_config_get(struct mlx5_kvargs_ctrl *mkvlist, "hw_max_latency_us", "hw_max_pending_comp", "no_traffic_time", + "queue_size", + "queues", + "max_conf_threads", NULL, }; @@ -725,6 +753,13 @@ mlx5_vdpa_dev_probe(struct mlx5_common_device *cdev, pthread_mutex_init(&priv->steer_update_lock, NULL); priv->cdev = cdev; mlx5_vdpa_config_get(mkvlist, priv); + if (priv->use_c_thread) { + if (conf_thread_mng.initializer_priv == priv) + if (mlx5_vdpa_mult_threads_create(priv->event_core)) + goto error; + __atomic_fetch_add(&conf_thread_mng.refcnt, 1, + __ATOMIC_RELAXED); + } if (mlx5_vdpa_create_dev_resources(priv)) goto error; priv->vdev = rte_vdpa_register_device(cdev->dev, &mlx5_vdpa_ops); @@ -739,6 +774,8 @@ mlx5_vdpa_dev_probe(struct mlx5_common_device *cdev, pthread_mutex_unlock(&priv_list_lock); return 0; error: + if (conf_thread_mng.initializer_priv == priv) + mlx5_vdpa_mult_threads_destroy(false); if (priv) mlx5_vdpa_dev_release(priv); return -rte_errno; @@ -806,6 +843,10 @@ mlx5_vdpa_dev_release(struct mlx5_vdpa_priv *priv) mlx5_vdpa_release_dev_resources(priv); if (priv->vdev) rte_vdpa_unregister_device(priv->vdev); + if (priv->use_c_thread) + if (__atomic_fetch_sub(&conf_thread_mng.refcnt, + 1, __ATOMIC_RELAXED) == 1) + mlx5_vdpa_mult_threads_destroy(true); rte_free(priv); } diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h index 3fd5eefc5e..4e7c2557b7 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.h +++ b/drivers/vdpa/mlx5/mlx5_vdpa.h @@ -73,6 +73,22 @@ enum { MLX5_VDPA_NOTIFIER_STATE_ERR }; +#define MLX5_VDPA_MAX_C_THRD 256 + +/* Generic mlx5_vdpa_c_thread information. 
*/ +struct mlx5_vdpa_c_thread { + pthread_t tid; +}; + +struct mlx5_vdpa_conf_thread_mng { + void *initializer_priv; + uint32_t refcnt; + uint32_t max_thrds; + pthread_mutex_t cthrd_lock; + struct mlx5_vdpa_c_thread cthrd[MLX5_VDPA_MAX_C_THRD]; +}; +extern struct mlx5_vdpa_conf_thread_mng conf_thread_mng; + struct mlx5_vdpa_virtq { SLIST_ENTRY(mlx5_vdpa_virtq) next; uint8_t enable; @@ -126,6 +142,7 @@ enum mlx5_dev_state { struct mlx5_vdpa_priv { TAILQ_ENTRY(mlx5_vdpa_priv) next; bool connected; + bool use_c_thread; enum mlx5_dev_state state; rte_spinlock_t db_lock; pthread_mutex_t steer_update_lock; @@ -496,4 +513,23 @@ mlx5_vdpa_drain_cq(struct mlx5_vdpa_priv *priv); bool mlx5_vdpa_is_modify_virtq_supported(struct mlx5_vdpa_priv *priv); + +/** + * Create configuration multi-threads resource + * + * @param[in] cpu_core + * CPU core number to set configuration threads affinity to. + * + * @return + * 0 on success, a negative value otherwise. + */ +int +mlx5_vdpa_mult_threads_create(int cpu_core); + +/** + * Destroy configuration multi-threads resource + * + */ +void +mlx5_vdpa_mult_threads_destroy(bool need_unlock); #endif /* RTE_PMD_MLX5_VDPA_H_ */ diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c b/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c new file mode 100644 index 0000000000..ba7d8b63b3 --- /dev/null +++ b/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c @@ -0,0 +1,129 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ +#include +#include +#include + +#include +#include +#include +#include +#include +#include + +#include + +#include "mlx5_vdpa_utils.h" +#include "mlx5_vdpa.h" + +static void * +mlx5_vdpa_c_thread_handle(void *arg) +{ + /* To be added later. 
*/ + return arg; +} + +static void +mlx5_vdpa_c_thread_destroy(uint32_t thrd_idx, bool need_unlock) +{ + if (conf_thread_mng.cthrd[thrd_idx].tid) { + pthread_cancel(conf_thread_mng.cthrd[thrd_idx].tid); + pthread_join(conf_thread_mng.cthrd[thrd_idx].tid, NULL); + conf_thread_mng.cthrd[thrd_idx].tid = 0; + if (need_unlock) + pthread_mutex_init(&conf_thread_mng.cthrd_lock, NULL); + } +} + +static int +mlx5_vdpa_c_thread_create(int cpu_core) +{ + const struct sched_param sp = { + .sched_priority = sched_get_priority_max(SCHED_RR), + }; + rte_cpuset_t cpuset; + pthread_attr_t attr; + uint32_t thrd_idx; + char name[32]; + int ret; + + pthread_mutex_lock(&conf_thread_mng.cthrd_lock); + pthread_attr_init(&attr); + ret = pthread_attr_setschedpolicy(&attr, SCHED_RR); + if (ret) { + DRV_LOG(ERR, "Failed to set thread sched policy = RR."); + goto c_thread_err; + } + ret = pthread_attr_setschedparam(&attr, &sp); + if (ret) { + DRV_LOG(ERR, "Failed to set thread priority."); + goto c_thread_err; + } + for (thrd_idx = 0; thrd_idx < conf_thread_mng.max_thrds; + thrd_idx++) { + ret = pthread_create(&conf_thread_mng.cthrd[thrd_idx].tid, + &attr, mlx5_vdpa_c_thread_handle, + (void *)&conf_thread_mng); + if (ret) { + DRV_LOG(ERR, "Failed to create vdpa multi-threads %d.", + thrd_idx); + goto c_thread_err; + } + CPU_ZERO(&cpuset); + if (cpu_core != -1) + CPU_SET(cpu_core, &cpuset); + else + cpuset = rte_lcore_cpuset(rte_get_main_lcore()); + ret = pthread_setaffinity_np( + conf_thread_mng.cthrd[thrd_idx].tid, + sizeof(cpuset), &cpuset); + if (ret) { + DRV_LOG(ERR, "Failed to set thread affinity for " + "vdpa multi-threads %d.", thrd_idx); + goto c_thread_err; + } + snprintf(name, sizeof(name), "vDPA-mthread-%d", thrd_idx); + ret = pthread_setname_np( + conf_thread_mng.cthrd[thrd_idx].tid, name); + if (ret) + DRV_LOG(ERR, "Failed to set vdpa multi-threads name %s.", + name); + else + DRV_LOG(DEBUG, "Thread name: %s.", name); + } + pthread_mutex_unlock(&conf_thread_mng.cthrd_lock); + 
return 0; +c_thread_err: + for (thrd_idx = 0; thrd_idx < conf_thread_mng.max_thrds; + thrd_idx++) + mlx5_vdpa_c_thread_destroy(thrd_idx, false); + pthread_mutex_unlock(&conf_thread_mng.cthrd_lock); + return -1; +} + +int +mlx5_vdpa_mult_threads_create(int cpu_core) +{ + pthread_mutex_init(&conf_thread_mng.cthrd_lock, NULL); + if (mlx5_vdpa_c_thread_create(cpu_core)) { + DRV_LOG(ERR, "Cannot create vDPA configuration threads."); + mlx5_vdpa_mult_threads_destroy(false); + return -1; + } + return 0; +} + +void +mlx5_vdpa_mult_threads_destroy(bool need_unlock) +{ + uint32_t thrd_idx; + + if (!conf_thread_mng.initializer_priv) + return; + for (thrd_idx = 0; thrd_idx < conf_thread_mng.max_thrds; + thrd_idx++) + mlx5_vdpa_c_thread_destroy(thrd_idx, need_unlock); + pthread_mutex_destroy(&conf_thread_mng.cthrd_lock); + memset(&conf_thread_mng, 0, sizeof(struct mlx5_vdpa_conf_thread_mng)); +} diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c index 2b0f5936d1..b45fbac146 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c @@ -507,7 +507,7 @@ mlx5_vdpa_cqe_event_setup(struct mlx5_vdpa_priv *priv) pthread_attr_t attr; char name[16]; const struct sched_param sp = { - .sched_priority = sched_get_priority_max(SCHED_RR), + .sched_priority = sched_get_priority_max(SCHED_RR) - 1, }; if (!priv->eventc) diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c index c2c5386075..b884da4ded 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c @@ -43,7 +43,7 @@ mlx5_vdpa_virtq_kick_handler(void *cb_arg) errno == EWOULDBLOCK || errno == EAGAIN) continue; - DRV_LOG(ERR, "Failed to read kickfd of virtq %d: %s", + DRV_LOG(ERR, "Failed to read kickfd of virtq %d: %s.", virtq->index, strerror(errno)); } break; @@ -57,7 +57,7 @@ mlx5_vdpa_virtq_kick_handler(void *cb_arg) rte_spinlock_unlock(&priv->db_lock); pthread_mutex_unlock(&virtq->virtq_lock); if 
(priv->state != MLX5_VDPA_STATE_CONFIGURED && !virtq->enable) { - DRV_LOG(ERR, "device %d queue %d down, skip kick handling", + DRV_LOG(ERR, "device %d queue %d down, skip kick handling.", priv->vid, virtq->index); return; } @@ -215,7 +215,7 @@ mlx5_vdpa_virtq_query(struct mlx5_vdpa_priv *priv, int index) return -1; } if (attr.state == MLX5_VIRTQ_STATE_ERROR) - DRV_LOG(WARNING, "vid %d vring %d hw error=%hhu", + DRV_LOG(WARNING, "vid %d vring %d hw error=%hhu.", priv->vid, index, attr.error_type); return 0; } @@ -377,7 +377,7 @@ mlx5_vdpa_virtq_sub_objs_prepare(struct mlx5_vdpa_priv *priv, if (ret) { last_avail_idx = 0; last_used_idx = 0; - DRV_LOG(WARNING, "Couldn't get vring base, idx are set to 0"); + DRV_LOG(WARNING, "Couldn't get vring base, idx are set to 0."); } else { DRV_LOG(INFO, "vid %d: Init last_avail_idx=%d, last_used_idx=%d for " "virtq %d.", priv->vid, last_avail_idx, From patchwork Fri Apr 8 07:55:59 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Li Zhang X-Patchwork-Id: 109482 X-Patchwork-Delegate: maxime.coquelin@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 63933A00BE; Fri, 8 Apr 2022 09:57:58 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 8E61742874; Fri, 8 Apr 2022 09:57:22 +0200 (CEST) Received: from NAM10-BN7-obe.outbound.protection.outlook.com (mail-bn7nam10on2043.outbound.protection.outlook.com [40.107.92.43]) by mails.dpdk.org (Postfix) with ESMTP id 11BED42864 for ; Fri, 8 Apr 2022 09:57:20 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; 
From: Li Zhang
Subject: [RFC 09/15] vdpa/mlx5: add task ring for MT management
Date: Fri, 8 Apr 2022 10:55:59 +0300
Message-ID: <20220408075606.33056-10-lizh@nvidia.com>
In-Reply-To: <20220408075606.33056-1-lizh@nvidia.com>
References: <20220408075606.33056-1-lizh@nvidia.com>
The configuration thread tasks need a container to support multiple tasks
assigned to a thread in parallel. Use an rte_ring container per thread to
manage the thread tasks without locks. The caller thread from the user
context opens a task to a thread and enqueues it to the thread's ring.
The thread polls its ring and dequeues tasks. That is why the ring should
be in multi-producer and single-consumer mode. An atomic counter manages
the task completion notification. The threads report errors to the caller
by a dedicated error counter per task.

Signed-off-by: Li Zhang
---
 drivers/vdpa/mlx5/mlx5_vdpa.h         |  17 ++++
 drivers/vdpa/mlx5/mlx5_vdpa_cthread.c | 118 +++++++++++++++++++++++++-
 2 files changed, 133 insertions(+), 2 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index 4e7c2557b7..2bbb868ec6 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -74,10 +74,22 @@ enum {
 };
 
 #define MLX5_VDPA_MAX_C_THRD 256
+#define MLX5_VDPA_MAX_TASKS_PER_THRD 4096
+#define MLX5_VDPA_TASKS_PER_DEV 64
+
+/* Generic task information and size must be multiple of 4B. */
+struct mlx5_vdpa_task {
+	struct mlx5_vdpa_priv *priv;
+	uint32_t *remaining_cnt;
+	uint32_t *err_cnt;
+	uint32_t idx;
+} __rte_packed __rte_aligned(4);
 
 /* Generic mlx5_vdpa_c_thread information. */
 struct mlx5_vdpa_c_thread {
 	pthread_t tid;
+	struct rte_ring *rng;
+	pthread_cond_t c_cond;
 };
 
 struct mlx5_vdpa_conf_thread_mng {
@@ -532,4 +544,9 @@ mlx5_vdpa_mult_threads_create(int cpu_core);
  */
 void
 mlx5_vdpa_mult_threads_destroy(bool need_unlock);
+
+bool
+mlx5_vdpa_task_add(struct mlx5_vdpa_priv *priv,
+		uint32_t thrd_idx,
+		uint32_t num);
 #endif /* RTE_PMD_MLX5_VDPA_H_ */
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c b/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
index ba7d8b63b3..8475d7788a 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
@@ -11,17 +11,106 @@
 #include
 #include
 #include
+#include
 #include
 
 #include "mlx5_vdpa_utils.h"
 #include "mlx5_vdpa.h"
 
+static inline uint32_t
+mlx5_vdpa_c_thrd_ring_dequeue_bulk(struct rte_ring *r,
+		void **obj, uint32_t n, uint32_t *avail)
+{
+	uint32_t m;
+
+	m = rte_ring_dequeue_bulk_elem_start(r, obj,
+		sizeof(struct mlx5_vdpa_task), n, avail);
+	n = (m == n) ? n : 0;
+	rte_ring_dequeue_elem_finish(r, n);
+	return n;
+}
+
+static inline uint32_t
+mlx5_vdpa_c_thrd_ring_enqueue_bulk(struct rte_ring *r,
+		void * const *obj, uint32_t n, uint32_t *free)
+{
+	uint32_t m;
+
+	m = rte_ring_enqueue_bulk_elem_start(r, n, free);
+	n = (m == n) ? n : 0;
+	rte_ring_enqueue_elem_finish(r, obj,
+		sizeof(struct mlx5_vdpa_task), n);
+	return n;
+}
+
+bool
+mlx5_vdpa_task_add(struct mlx5_vdpa_priv *priv,
+		uint32_t thrd_idx,
+		uint32_t num)
+{
+	struct rte_ring *rng = conf_thread_mng.cthrd[thrd_idx].rng;
+	struct mlx5_vdpa_task task[MLX5_VDPA_TASKS_PER_DEV];
+	uint32_t i;
+
+	MLX5_ASSERT(num <= MLX5_VDPA_TASKS_PER_DEV);
+	for (i = 0 ; i < num; i++) {
+		task[i].priv = priv;
+		/* To be added later. */
+	}
+	if (!mlx5_vdpa_c_thrd_ring_enqueue_bulk(rng, (void **)&task, num, NULL))
+		return -1;
+	for (i = 0 ; i < num; i++)
+		if (task[i].remaining_cnt)
+			__atomic_fetch_add(task[i].remaining_cnt, 1,
+				__ATOMIC_RELAXED);
+	/* wake up conf thread. */
+	pthread_mutex_lock(&conf_thread_mng.cthrd_lock);
+	pthread_cond_signal(&conf_thread_mng.cthrd[thrd_idx].c_cond);
+	pthread_mutex_unlock(&conf_thread_mng.cthrd_lock);
+	return 0;
+}
+
 static void *
 mlx5_vdpa_c_thread_handle(void *arg)
 {
-	/* To be added later. */
-	return arg;
+	struct mlx5_vdpa_conf_thread_mng *multhrd = arg;
+	pthread_t thread_id = pthread_self();
+	struct mlx5_vdpa_priv *priv;
+	struct mlx5_vdpa_task task;
+	struct rte_ring *rng;
+	uint32_t thrd_idx;
+	uint32_t task_num;
+
+	for (thrd_idx = 0; thrd_idx < multhrd->max_thrds;
+		thrd_idx++)
+		if (multhrd->cthrd[thrd_idx].tid == thread_id)
+			break;
+	if (thrd_idx >= multhrd->max_thrds) {
+		DRV_LOG(ERR, "Invalid thread_id 0x%lx in vdpa multi-thread",
+			thread_id);
+		return NULL;
+	}
+	rng = multhrd->cthrd[thrd_idx].rng;
+	while (1) {
+		task_num = mlx5_vdpa_c_thrd_ring_dequeue_bulk(rng,
+			(void **)&task, 1, NULL);
+		if (!task_num) {
+			/* No task and condition wait. */
+			pthread_mutex_lock(&multhrd->cthrd_lock);
+			pthread_cond_wait(
+				&multhrd->cthrd[thrd_idx].c_cond,
+				&multhrd->cthrd_lock);
+			pthread_mutex_unlock(&multhrd->cthrd_lock);
+		}
+		priv = task.priv;
+		if (priv == NULL)
+			continue;
+		__atomic_fetch_sub(task.remaining_cnt,
+			1, __ATOMIC_RELAXED);
+		/* To be added later. */
+	}
+	return NULL;
 }
 
 static void
@@ -34,6 +123,10 @@ mlx5_vdpa_c_thread_destroy(uint32_t thrd_idx, bool need_unlock)
 		if (need_unlock)
 			pthread_mutex_init(&conf_thread_mng.cthrd_lock, NULL);
 	}
+	if (conf_thread_mng.cthrd[thrd_idx].rng) {
+		rte_ring_free(conf_thread_mng.cthrd[thrd_idx].rng);
+		conf_thread_mng.cthrd[thrd_idx].rng = NULL;
+	}
 }
 
 static int
@@ -45,6 +138,7 @@ mlx5_vdpa_c_thread_create(int cpu_core)
 	rte_cpuset_t cpuset;
 	pthread_attr_t attr;
 	uint32_t thrd_idx;
+	uint32_t ring_num;
 	char name[32];
 	int ret;
 
@@ -60,8 +154,26 @@ mlx5_vdpa_c_thread_create(int cpu_core)
 		DRV_LOG(ERR, "Failed to set thread priority.");
 		goto c_thread_err;
 	}
+	ring_num = MLX5_VDPA_MAX_TASKS_PER_THRD / conf_thread_mng.max_thrds;
+	if (!ring_num) {
+		DRV_LOG(ERR, "Invalid ring number for thread.");
+		goto c_thread_err;
+	}
 	for (thrd_idx = 0; thrd_idx < conf_thread_mng.max_thrds;
 		thrd_idx++) {
+		snprintf(name, sizeof(name), "vDPA-mthread-ring-%d",
+			thrd_idx);
+		conf_thread_mng.cthrd[thrd_idx].rng = rte_ring_create_elem(name,
+			sizeof(struct mlx5_vdpa_task), ring_num,
+			rte_socket_id(),
+			RING_F_MP_HTS_ENQ | RING_F_MC_HTS_DEQ |
+			RING_F_EXACT_SZ);
+		if (!conf_thread_mng.cthrd[thrd_idx].rng) {
+			DRV_LOG(ERR,
+				"Failed to create vdpa multi-threads %d ring.",
+				thrd_idx);
+			goto c_thread_err;
+		}
 		ret = pthread_create(&conf_thread_mng.cthrd[thrd_idx].tid,
 				&attr, mlx5_vdpa_c_thread_handle,
 				(void *)&conf_thread_mng);
@@ -91,6 +203,8 @@ mlx5_vdpa_c_thread_create(int cpu_core)
 				name);
 		else
 			DRV_LOG(DEBUG, "Thread name: %s.", name);
+		pthread_cond_init(&conf_thread_mng.cthrd[thrd_idx].c_cond,
+			NULL);
 	}
 	pthread_mutex_unlock(&conf_thread_mng.cthrd_lock);
 	return 0;

From patchwork Fri Apr 8 07:56:00 2022
X-Patchwork-Submitter: Li Zhang
X-Patchwork-Id: 109483
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Li Zhang
Subject: [RFC 10/15] vdpa/mlx5: add MT task for VM memory registration
Date: Fri, 8 Apr 2022 10:56:00 +0300
Message-ID: <20220408075606.33056-11-lizh@nvidia.com>
In-Reply-To: <20220408075606.33056-1-lizh@nvidia.com>
References: <20220408075606.33056-1-lizh@nvidia.com>

The driver creates a direct MR object of the HW for each VM memory
region, which maps the VM physical address to the actual physical
address. Later, after all the MRs are ready, the driver creates an
indirect MR to group all the direct MRs into one virtual space from
the HW perspective.

Create direct MRs in parallel using the MT mechanism. After completion,
the master thread creates the indirect MR needed for the following
virtq configurations.

This optimization accelerates the LM process and reduces its time by 5%.
Signed-off-by: Li Zhang
---
 drivers/vdpa/mlx5/mlx5_vdpa.c         |   1 -
 drivers/vdpa/mlx5/mlx5_vdpa.h         |  31 ++-
 drivers/vdpa/mlx5/mlx5_vdpa_cthread.c |  47 ++++-
 drivers/vdpa/mlx5/mlx5_vdpa_mem.c     | 268 +++++++++++++++++---------
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c   |   6 +-
 5 files changed, 256 insertions(+), 97 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index eace0e4c9e..8dd8e6a2a0 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -768,7 +768,6 @@ mlx5_vdpa_dev_probe(struct mlx5_common_device *cdev,
 		rte_errno = rte_errno ? rte_errno : EINVAL;
 		goto error;
 	}
-	SLIST_INIT(&priv->mr_list);
 	pthread_mutex_lock(&priv_list_lock);
 	TAILQ_INSERT_TAIL(&priv_list, priv, next);
 	pthread_mutex_unlock(&priv_list_lock);
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index 2bbb868ec6..3316ce42be 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -59,7 +59,6 @@ struct mlx5_vdpa_event_qp {
 };
 
 struct mlx5_vdpa_query_mr {
-	SLIST_ENTRY(mlx5_vdpa_query_mr) next;
 	union {
 		struct ibv_mr *mr;
 		struct mlx5_devx_obj *mkey;
@@ -76,10 +75,17 @@ enum {
 #define MLX5_VDPA_MAX_C_THRD 256
 #define MLX5_VDPA_MAX_TASKS_PER_THRD 4096
 #define MLX5_VDPA_TASKS_PER_DEV 64
+#define MLX5_VDPA_MAX_MRS 0xFFFF
+
+/* Vdpa task types. */
+enum mlx5_vdpa_task_type {
+	MLX5_VDPA_TASK_REG_MR = 1,
+};
 
 /* Generic task information and size must be multiple of 4B. */
 struct mlx5_vdpa_task {
 	struct mlx5_vdpa_priv *priv;
+	enum mlx5_vdpa_task_type type;
 	uint32_t *remaining_cnt;
 	uint32_t *err_cnt;
 	uint32_t idx;
@@ -101,6 +107,14 @@ struct mlx5_vdpa_conf_thread_mng {
 };
 extern struct mlx5_vdpa_conf_thread_mng conf_thread_mng;
 
+struct mlx5_vdpa_vmem_info {
+	struct rte_vhost_memory *vmem;
+	uint32_t entries_num;
+	uint64_t gcd;
+	uint64_t size;
+	uint8_t mode;
+};
+
 struct mlx5_vdpa_virtq {
 	SLIST_ENTRY(mlx5_vdpa_virtq) next;
 	uint8_t enable;
@@ -176,7 +190,7 @@ struct mlx5_vdpa_priv {
 	struct mlx5_hca_vdpa_attr caps;
 	uint32_t gpa_mkey_index;
 	struct ibv_mr *null_mr;
-	struct rte_vhost_memory *vmem;
+	struct mlx5_vdpa_vmem_info vmem_info;
 	struct mlx5dv_devx_event_channel *eventc;
 	struct mlx5dv_devx_event_channel *err_chnl;
 	struct mlx5_uar uar;
@@ -187,11 +201,13 @@ struct mlx5_vdpa_priv {
 	uint8_t num_lag_ports;
 	uint64_t features; /* Negotiated features. */
 	uint16_t log_max_rqt_size;
+	uint16_t last_c_thrd_idx;
+	uint16_t num_mrs; /* Number of memory regions. */
 	struct mlx5_vdpa_steer steer;
 	struct mlx5dv_var *var;
 	void *virtq_db_addr;
 	struct mlx5_pmd_wrapped_mr lm_mr;
-	SLIST_HEAD(mr_list, mlx5_vdpa_query_mr) mr_list;
+	struct mlx5_vdpa_query_mr **mrs;
 	struct mlx5_vdpa_virtq virtqs[];
 };
 
@@ -548,5 +564,12 @@ mlx5_vdpa_mult_threads_destroy(bool need_unlock);
 bool
 mlx5_vdpa_task_add(struct mlx5_vdpa_priv *priv,
 		uint32_t thrd_idx,
-		uint32_t num);
+		enum mlx5_vdpa_task_type task_type,
+		uint32_t *bulk_refcnt, uint32_t *bulk_err_cnt,
+		void **task_data, uint32_t num);
+int
+mlx5_vdpa_register_mr(struct mlx5_vdpa_priv *priv, uint32_t idx);
+bool
+mlx5_vdpa_c_thread_wait_bulk_tasks_done(uint32_t *remaining_cnt,
+		uint32_t *err_cnt, uint32_t sleep_time);
 #endif /* RTE_PMD_MLX5_VDPA_H_ */
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c b/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
index 8475d7788a..22e24f7e75 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
@@ -47,16 +47,23 @@ mlx5_vdpa_c_thrd_ring_enqueue_bulk(struct rte_ring *r,
 bool
 mlx5_vdpa_task_add(struct mlx5_vdpa_priv *priv,
 		uint32_t thrd_idx,
-		uint32_t num)
+		enum mlx5_vdpa_task_type task_type,
+		uint32_t *remaining_cnt, uint32_t *err_cnt,
+		void **task_data, uint32_t num)
 {
 	struct rte_ring *rng = conf_thread_mng.cthrd[thrd_idx].rng;
 	struct mlx5_vdpa_task task[MLX5_VDPA_TASKS_PER_DEV];
+	uint32_t *data = (uint32_t *)task_data;
 	uint32_t i;
 
 	MLX5_ASSERT(num <= MLX5_VDPA_TASKS_PER_DEV);
 	for (i = 0 ; i < num; i++) {
 		task[i].priv = priv;
-		/* To be added later. */
+		task[i].type = task_type;
+		task[i].remaining_cnt = remaining_cnt;
+		task[i].err_cnt = err_cnt;
+		task[i].idx = data[i];
 	}
 	if (!mlx5_vdpa_c_thrd_ring_enqueue_bulk(rng, (void **)&task, num, NULL))
 		return -1;
@@ -71,6 +78,23 @@ mlx5_vdpa_task_add(struct mlx5_vdpa_priv *priv,
 	return 0;
 }
 
+bool
+mlx5_vdpa_c_thread_wait_bulk_tasks_done(uint32_t *remaining_cnt,
+		uint32_t *err_cnt, uint32_t sleep_time)
+{
+	/* Check and wait all tasks done. */
+	while (__atomic_load_n(remaining_cnt,
+		__ATOMIC_RELAXED) != 0) {
+		rte_delay_us_sleep(sleep_time);
+	}
+	if (__atomic_load_n(err_cnt,
+		__ATOMIC_RELAXED)) {
+		DRV_LOG(ERR, "Tasks done with error.");
+		return true;
+	}
+	return false;
+}
+
 static void *
 mlx5_vdpa_c_thread_handle(void *arg)
 {
@@ -81,6 +105,7 @@ mlx5_vdpa_c_thread_handle(void *arg)
 	struct rte_ring *rng;
 	uint32_t thrd_idx;
 	uint32_t task_num;
+	int ret;
 
 	for (thrd_idx = 0; thrd_idx < multhrd->max_thrds;
 		thrd_idx++)
@@ -102,13 +127,29 @@ mlx5_vdpa_c_thread_handle(void *arg)
 				&multhrd->cthrd[thrd_idx].c_cond,
 				&multhrd->cthrd_lock);
 			pthread_mutex_unlock(&multhrd->cthrd_lock);
+			continue;
 		}
 		priv = task.priv;
 		if (priv == NULL)
 			continue;
-		__atomic_fetch_sub(task.remaining_cnt,
+		switch (task.type) {
+		case MLX5_VDPA_TASK_REG_MR:
+			ret = mlx5_vdpa_register_mr(priv, task.idx);
+			if (ret) {
+				DRV_LOG(ERR,
+					"Failed to register mr %d.", task.idx);
+				__atomic_fetch_add(task.err_cnt, 1,
+					__ATOMIC_RELAXED);
+			}
+			break;
+		default:
+			DRV_LOG(ERR, "Invalid vdpa task type %d.",
+				task.type);
+			break;
+		}
+		if (task.remaining_cnt)
+			__atomic_fetch_sub(task.remaining_cnt,
 				1, __ATOMIC_RELAXED);
-		/* To be added later. */
 	}
 	return NULL;
 }
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_mem.c b/drivers/vdpa/mlx5/mlx5_vdpa_mem.c
index d6e3dd664b..3d17ca88af 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_mem.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_mem.c
@@ -18,24 +18,30 @@ void
 mlx5_vdpa_mem_dereg(struct mlx5_vdpa_priv *priv)
 {
 	struct mlx5_vdpa_query_mr *entry;
-	struct mlx5_vdpa_query_mr *next;
+	uint32_t i;
 
-	entry = SLIST_FIRST(&priv->mr_list);
-	while (entry) {
-		next = SLIST_NEXT(entry, next);
-		if (entry->is_indirect)
-			claim_zero(mlx5_devx_cmd_destroy(entry->mkey));
-		else
-			claim_zero(mlx5_glue->dereg_mr(entry->mr));
-		SLIST_REMOVE(&priv->mr_list, entry, mlx5_vdpa_query_mr, next);
-		rte_free(entry);
-		entry = next;
+	if (priv->mrs) {
+		for (i = 0; i < priv->num_mrs; i++) {
+			entry = (struct mlx5_vdpa_query_mr *)&priv->mrs[i];
+			if (entry->is_indirect) {
+				if (entry->mkey)
+					claim_zero(
+					mlx5_devx_cmd_destroy(entry->mkey));
+			} else {
+				if (entry->mr)
+					claim_zero(
+					mlx5_glue->dereg_mr(entry->mr));
+			}
+		}
+		rte_free(priv->mrs);
+		priv->mrs = NULL;
+		priv->num_mrs = 0;
 	}
-	SLIST_INIT(&priv->mr_list);
-	if (priv->vmem) {
-		free(priv->vmem);
-		priv->vmem = NULL;
+	if (priv->vmem_info.vmem) {
+		free(priv->vmem_info.vmem);
+		priv->vmem_info.vmem = NULL;
 	}
+	priv->gpa_mkey_index = 0;
 }
 
 static int
@@ -167,72 +173,29 @@ mlx5_vdpa_mem_cmp(struct rte_vhost_memory *mem1, struct rte_vhost_memory *mem2)
 #define KLM_SIZE_MAX_ALIGN(sz) ((sz) > MLX5_MAX_KLM_BYTE_COUNT ? \
 				MLX5_MAX_KLM_BYTE_COUNT : (sz))
 
-/*
- * The target here is to group all the physical memory regions of the
- * virtio device in one indirect mkey.
- * For KLM Fixed Buffer Size mode (HW find the translation entry in one
- * read according to the guest physical address):
- * All the sub-direct mkeys of it must be in the same size, hence, each
- * one of them should be in the GCD size of all the virtio memory
- * regions and the holes between them.
- * For KLM mode (each entry may be in different size so HW must iterate
- * the entries):
- * Each virtio memory region and each hole between them have one entry,
- * just need to cover the maximum allowed size(2G) by splitting entries
- * which their associated memory regions are bigger than 2G.
- * It means that each virtio memory region may be mapped to more than
- * one direct mkey in the 2 modes.
- * All the holes of invalid memory between the virtio memory regions
- * will be mapped to the null memory region for security.
- */
-int
-mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv)
+static int
+mlx5_vdpa_create_indirect_mkey(struct mlx5_vdpa_priv *priv)
 {
 	struct mlx5_devx_mkey_attr mkey_attr;
-	struct mlx5_vdpa_query_mr *entry = NULL;
-	struct rte_vhost_mem_region *reg = NULL;
-	uint8_t mode = 0;
-	uint32_t entries_num = 0;
-	uint32_t i;
-	uint64_t gcd = 0;
+	struct mlx5_vdpa_query_mr *mrs =
+		(struct mlx5_vdpa_query_mr *)priv->mrs;
+	struct mlx5_vdpa_query_mr *entry;
+	struct rte_vhost_mem_region *reg;
+	uint8_t mode = priv->vmem_info.mode;
+	uint32_t entries_num = priv->vmem_info.entries_num;
+	struct rte_vhost_memory *mem = priv->vmem_info.vmem;
+	struct mlx5_klm klm_array[entries_num];
+	uint64_t gcd = priv->vmem_info.gcd;
+	int ret = -rte_errno;
 	uint64_t klm_size;
-	uint64_t mem_size;
-	uint64_t k;
 	int klm_index = 0;
-	int ret;
-	struct rte_vhost_memory *mem = mlx5_vdpa_vhost_mem_regions_prepare
-			      (priv->vid, &mode, &mem_size, &gcd, &entries_num);
-	struct mlx5_klm klm_array[entries_num];
+	uint64_t k;
+	uint32_t i;
 
-	if (!mem)
-		return -rte_errno;
-	if (priv->vmem != NULL) {
-		if (mlx5_vdpa_mem_cmp(mem, priv->vmem) == 0) {
-			/* VM memory not changed, reuse resources. */
-			free(mem);
-			return 0;
-		}
-		mlx5_vdpa_mem_dereg(priv);
-	}
-	priv->vmem = mem;
+	/* If it is the last entry, create indirect mkey. */
 	for (i = 0; i < mem->nregions; i++) {
+		entry = &mrs[i];
 		reg = &mem->regions[i];
-		entry = rte_zmalloc(__func__, sizeof(*entry), 0);
-		if (!entry) {
-			ret = -ENOMEM;
-			DRV_LOG(ERR, "Failed to allocate mem entry memory.");
-			goto error;
-		}
-		entry->mr = mlx5_glue->reg_mr_iova(priv->cdev->pd,
-				(void *)(uintptr_t)(reg->host_user_addr),
-				reg->size, reg->guest_phys_addr,
-				IBV_ACCESS_LOCAL_WRITE);
-		if (!entry->mr) {
-			DRV_LOG(ERR, "Failed to create direct Mkey.");
-			ret = -rte_errno;
-			goto error;
-		}
-		entry->is_indirect = 0;
 		if (i > 0) {
 			uint64_t sadd;
 			uint64_t empty_region_sz = reg->guest_phys_addr -
@@ -265,11 +228,10 @@ mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv)
 			klm_array[klm_index].address = reg->guest_phys_addr + k;
 			klm_index++;
 		}
-		SLIST_INSERT_HEAD(&priv->mr_list, entry, next);
 	}
 	memset(&mkey_attr, 0, sizeof(mkey_attr));
 	mkey_attr.addr = (uintptr_t)(mem->regions[0].guest_phys_addr);
-	mkey_attr.size = mem_size;
+	mkey_attr.size = priv->vmem_info.size;
 	mkey_attr.pd = priv->cdev->pdn;
 	mkey_attr.umem_id = 0;
 	/* Must be zero for KLM mode. */
@@ -278,25 +240,159 @@ mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv)
 	mkey_attr.pg_access = 0;
 	mkey_attr.klm_array = klm_array;
 	mkey_attr.klm_num = klm_index;
-	entry = rte_zmalloc(__func__, sizeof(*entry), 0);
-	if (!entry) {
-		DRV_LOG(ERR, "Failed to allocate memory for indirect entry.");
-		ret = -ENOMEM;
-		goto error;
-	}
+	entry = &mrs[mem->nregions];
 	entry->mkey = mlx5_devx_cmd_mkey_create(priv->cdev->ctx, &mkey_attr);
 	if (!entry->mkey) {
 		DRV_LOG(ERR, "Failed to create indirect Mkey.");
-		ret = -rte_errno;
-		goto error;
+		rte_errno = -ret;
+		return ret;
 	}
 	entry->is_indirect = 1;
-	SLIST_INSERT_HEAD(&priv->mr_list, entry, next);
 	priv->gpa_mkey_index = entry->mkey->id;
 	return 0;
+}
+
+/*
+ * The target here is to group all the physical memory regions of the
+ * virtio device in one indirect mkey.
+ * For KLM Fixed Buffer Size mode (HW finds the translation entry in one
+ * read according to the guest physical address):
+ * All the sub-direct mkeys of it must be in the same size, hence, each
+ * one of them should be in the GCD size of all the virtio memory
+ * regions and the holes between them.
+ * For KLM mode (each entry may be in different size so HW must iterate
+ * the entries):
+ * Each virtio memory region and each hole between them have one entry,
+ * just need to cover the maximum allowed size(2G) by splitting entries
+ * which their associated memory regions are bigger than 2G.
+ * It means that each virtio memory region may be mapped to more than
+ * one direct mkey in the 2 modes.
+ * All the holes of invalid memory between the virtio memory regions
+ * will be mapped to the null memory region for security.
+ */
+int
+mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv)
+{
+	void *mrs;
+	uint8_t mode = 0;
+	int ret = -rte_errno;
+	uint32_t i, thrd_idx, data[1];
+	uint32_t remaining_cnt = 0, err_cnt = 0, task_num = 0;
+	struct rte_vhost_memory *mem = mlx5_vdpa_vhost_mem_regions_prepare
+		(priv->vid, &mode, &priv->vmem_info.size,
+		&priv->vmem_info.gcd, &priv->vmem_info.entries_num);
+
+	if (!mem)
+		return -rte_errno;
+	if (priv->vmem_info.vmem != NULL) {
+		if (mlx5_vdpa_mem_cmp(mem, priv->vmem_info.vmem) == 0) {
+			/* VM memory not changed, reuse resources. */
+			free(mem);
+			return 0;
+		}
+		mlx5_vdpa_mem_dereg(priv);
+	}
+	priv->vmem_info.vmem = mem;
+	priv->vmem_info.mode = mode;
+	priv->num_mrs = mem->nregions;
+	if (!priv->num_mrs || priv->num_mrs >= MLX5_VDPA_MAX_MRS) {
+		DRV_LOG(ERR,
+			"Invalid number of memory regions.");
+		goto error;
+	}
+	/* The last one is indirect mkey entry. */
+	priv->num_mrs++;
+	mrs = rte_zmalloc("mlx5 vDPA memory regions",
+		sizeof(struct mlx5_vdpa_query_mr) * priv->num_mrs, 0);
+	priv->mrs = mrs;
+	if (!priv->mrs) {
+		DRV_LOG(ERR, "Failed to allocate private memory regions.");
+		goto error;
+	}
+	if (priv->use_c_thread) {
+		uint32_t main_task_idx[mem->nregions];
+
+		for (i = 0; i < mem->nregions; i++) {
+			thrd_idx = i % (conf_thread_mng.max_thrds + 1);
+			if (!thrd_idx) {
+				main_task_idx[task_num] = i;
+				task_num++;
+				continue;
+			}
+			thrd_idx = priv->last_c_thrd_idx + 1;
+			if (thrd_idx >= conf_thread_mng.max_thrds)
+				thrd_idx = 0;
+			priv->last_c_thrd_idx = thrd_idx;
+			data[0] = i;
+			if (mlx5_vdpa_task_add(priv, thrd_idx,
+				MLX5_VDPA_TASK_REG_MR,
+				&remaining_cnt, &err_cnt,
+				(void **)&data, 1)) {
+				DRV_LOG(ERR,
+					"Fail to add task mem region (%d)", i);
+				main_task_idx[task_num] = i;
+				task_num++;
+			}
+		}
+		for (i = 0; i < task_num; i++) {
+			ret = mlx5_vdpa_register_mr(priv,
+				main_task_idx[i]);
+			if (ret) {
+				DRV_LOG(ERR,
					"Failed to register mem region %d.", i);
+				goto error;
+			}
+		}
+		if (mlx5_vdpa_c_thread_wait_bulk_tasks_done(&remaining_cnt,
+			&err_cnt, 100)) {
+			DRV_LOG(ERR,
+			"Failed to wait register mem region tasks ready.");
+			goto error;
+		}
+	} else {
+		for (i = 0; i < mem->nregions; i++) {
+			ret = mlx5_vdpa_register_mr(priv, i);
+			if (ret) {
+				DRV_LOG(ERR,
+					"Failed to register mem region %d.", i);
+				goto error;
+			}
+		}
+	}
+	ret = mlx5_vdpa_create_indirect_mkey(priv);
+	if (ret) {
+		DRV_LOG(ERR, "Failed to create indirect mkey.");
+		goto error;
+	}
+	return 0;
 error:
-	rte_free(entry);
 	mlx5_vdpa_mem_dereg(priv);
 	rte_errno = -ret;
 	return ret;
 }
+
+int
+mlx5_vdpa_register_mr(struct mlx5_vdpa_priv *priv, uint32_t idx)
+{
+	struct rte_vhost_memory *mem = priv->vmem_info.vmem;
+	struct mlx5_vdpa_query_mr *mrs =
+		(struct mlx5_vdpa_query_mr *)priv->mrs;
+	struct mlx5_vdpa_query_mr *entry;
+	struct rte_vhost_mem_region *reg;
+	int ret;
+
+	reg = &mem->regions[idx];
+	entry = &mrs[idx];
+	entry->mr = mlx5_glue->reg_mr_iova
+		(priv->cdev->pd,
+		(void *)(uintptr_t)(reg->host_user_addr),
+		reg->size, reg->guest_phys_addr,
+		IBV_ACCESS_LOCAL_WRITE);
+	if (!entry->mr) {
+		DRV_LOG(ERR, "Failed to create direct Mkey.");
+		ret = -rte_errno;
+		return ret;
+	}
+	entry->is_indirect = 0;
+	return 0;
+}
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index b884da4ded..3be09f218f 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -350,21 +350,21 @@ mlx5_vdpa_virtq_sub_objs_prepare(struct mlx5_vdpa_priv *priv,
 		}
 	}
 	if (attr->q_type == MLX5_VIRTQ_TYPE_SPLIT) {
-		gpa = mlx5_vdpa_hva_to_gpa(priv->vmem,
+		gpa = mlx5_vdpa_hva_to_gpa(priv->vmem_info.vmem,
 					(uint64_t)(uintptr_t)vq->desc);
 		if (!gpa) {
 			DRV_LOG(ERR, "Failed to get descriptor ring GPA.");
 			return -1;
 		}
 		attr->desc_addr = gpa;
-		gpa = mlx5_vdpa_hva_to_gpa(priv->vmem,
+		gpa = mlx5_vdpa_hva_to_gpa(priv->vmem_info.vmem,
 					(uint64_t)(uintptr_t)vq->used);
 		if (!gpa) {
 			DRV_LOG(ERR, "Failed to get GPA for used ring.");
 			return -1;
 		}
 		attr->used_addr = gpa;
-		gpa = mlx5_vdpa_hva_to_gpa(priv->vmem,
+		gpa = mlx5_vdpa_hva_to_gpa(priv->vmem_info.vmem,
 					(uint64_t)(uintptr_t)vq->avail);
 		if (!gpa) {
 			DRV_LOG(ERR, "Failed to get GPA for available ring.");

From patchwork Fri Apr 8 07:56:01 2022
X-Patchwork-Submitter: Li Zhang
X-Patchwork-Id: 109484
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Li Zhang
Subject: [RFC 11/15] vdpa/mlx5: add virtq creation task for MT management
Date: Fri, 8 Apr 2022 10:56:01 +0300
Message-ID: <20220408075606.33056-12-lizh@nvidia.com>
In-Reply-To: <20220408075606.33056-1-lizh@nvidia.com>
References: <20220408075606.33056-1-lizh@nvidia.com>
List-Id: DPDK patches and discussions

The virtq object and all its sub-resources use a lot of FW commands and
can be accelerated by the MT management. Split the virtqs creation
between the configuration threads. This accelerates the LM process and
reduces its time by 20%.
Signed-off-by: Li Zhang
---
 drivers/vdpa/mlx5/mlx5_vdpa.h         |   9 +-
 drivers/vdpa/mlx5/mlx5_vdpa_cthread.c |  14 +++
 drivers/vdpa/mlx5/mlx5_vdpa_event.c   |   2 +-
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c   | 148 +++++++++++++++++++-------
 4 files changed, 133 insertions(+), 40 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index 3316ce42be..35221f5ddc 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -80,6 +80,7 @@ enum {
 /* Vdpa task types. */
 enum mlx5_vdpa_task_type {
 	MLX5_VDPA_TASK_REG_MR = 1,
+	MLX5_VDPA_TASK_SETUP_VIRTQ,
 };
 
 /* Generic task information and size must be multiple of 4B. */
@@ -117,12 +118,12 @@ struct mlx5_vdpa_vmem_info {
 
 struct mlx5_vdpa_virtq {
 	SLIST_ENTRY(mlx5_vdpa_virtq) next;
-	uint8_t enable;
 	uint16_t index;
 	uint16_t vq_size;
 	uint8_t notifier_state;
-	bool stopped;
 	uint32_t configured:1;
+	uint32_t enable:1;
+	uint32_t stopped:1;
 	uint32_t version;
 	pthread_mutex_t virtq_lock;
 	struct mlx5_vdpa_priv *priv;
@@ -565,11 +566,13 @@ bool
 mlx5_vdpa_task_add(struct mlx5_vdpa_priv *priv,
 		uint32_t thrd_idx,
 		enum mlx5_vdpa_task_type task_type,
-		uint32_t *bulk_refcnt, uint32_t *bulk_err_cnt,
+		uint32_t *remaining_cnt, uint32_t *err_cnt,
 		void **task_data, uint32_t num);
 int
 mlx5_vdpa_register_mr(struct mlx5_vdpa_priv *priv, uint32_t idx);
 bool
 mlx5_vdpa_c_thread_wait_bulk_tasks_done(uint32_t *remaining_cnt,
 		uint32_t *err_cnt, uint32_t sleep_time);
+int
+mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index, bool reg_kick);
 #endif /* RTE_PMD_MLX5_VDPA_H_ */
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c b/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
index 22e24f7e75..a2d1ddb1e1 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
@@ -100,6 +100,7 @@ mlx5_vdpa_c_thread_handle(void *arg)
 {
 	struct mlx5_vdpa_conf_thread_mng *multhrd = arg;
 	pthread_t thread_id = pthread_self();
+	struct mlx5_vdpa_virtq *virtq;
 	struct mlx5_vdpa_priv *priv;
 	struct mlx5_vdpa_task task;
 	struct rte_ring *rng;
@@ -142,6 +143,19 @@ mlx5_vdpa_c_thread_handle(void *arg)
 					__ATOMIC_RELAXED);
 			}
 			break;
+		case MLX5_VDPA_TASK_SETUP_VIRTQ:
+			virtq = &priv->virtqs[task.idx];
+			pthread_mutex_lock(&virtq->virtq_lock);
+			ret = mlx5_vdpa_virtq_setup(priv,
+				task.idx, false);
+			if (ret) {
+				DRV_LOG(ERR,
+					"Failed to setup virtq %d.", task.idx);
+				__atomic_fetch_add(
+					task.err_cnt, 1, __ATOMIC_RELAXED);
+			}
+			pthread_mutex_unlock(&virtq->virtq_lock);
+			break;
 		default:
 			DRV_LOG(ERR, "Invalid vdpa task type %d.",
 				task.type);
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
index b45fbac146..f782b6b832 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
@@ -371,7 +371,7 @@ mlx5_vdpa_err_interrupt_handler(void *cb_arg __rte_unused)
 		goto unlock;
 	if (rte_rdtsc() / rte_get_tsc_hz() < MLX5_VDPA_ERROR_TIME_SEC)
 		goto unlock;
-	virtq->stopped = true;
+	virtq->stopped = 1;
 	/* Query error info. */
 	if (mlx5_vdpa_virtq_query(priv, vq_index))
 		goto log;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index 3be09f218f..127b1cee7f 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -108,8 +108,9 @@ mlx5_vdpa_virtqs_cleanup(struct mlx5_vdpa_priv *priv)
 	for (i = 0; i < priv->caps.max_num_virtio_queues * 2; i++) {
 		struct mlx5_vdpa_virtq *virtq = &priv->virtqs[i];
 
+		if (virtq->index != i)
+			continue;
 		pthread_mutex_lock(&virtq->virtq_lock);
-		virtq->configured = 0;
 		for (j = 0; j < RTE_DIM(virtq->umems); ++j) {
 			if (virtq->umems[j].obj) {
 				claim_zero(mlx5_glue->devx_umem_dereg
@@ -128,7 +129,6 @@ mlx5_vdpa_virtqs_cleanup(struct mlx5_vdpa_priv *priv)
 	}
 }
 
-
 static int
 mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq)
 {
@@ -188,7 +188,7 @@ mlx5_vdpa_virtq_stop(struct mlx5_vdpa_priv *priv, int index)
 	ret = mlx5_vdpa_virtq_modify(virtq, 0);
 	if (ret)
 		return -1;
-	virtq->stopped = true;
+	virtq->stopped = 1;
 	DRV_LOG(DEBUG, "vid %u virtq %u was stopped.", priv->vid, index);
 	return mlx5_vdpa_virtq_query(priv, index);
 }
@@ -408,7 +408,38 @@ mlx5_vdpa_is_modify_virtq_supported(struct mlx5_vdpa_priv *priv)
 }
 
 static int
-mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
+mlx5_vdpa_virtq_doorbell_setup(struct mlx5_vdpa_virtq *virtq,
+		struct rte_vhost_vring *vq, int index)
+{
+	virtq->intr_handle =
+		rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+	if (virtq->intr_handle == NULL) {
+		DRV_LOG(ERR, "Fail to allocate intr_handle");
+		return -1;
+	}
+	if (rte_intr_fd_set(virtq->intr_handle, vq->kickfd))
+		return -1;
+	if (rte_intr_fd_get(virtq->intr_handle) == -1) {
+		DRV_LOG(WARNING, "Virtq %d kickfd is invalid.", index);
+	} else {
+		if (rte_intr_type_set(virtq->intr_handle,
+			RTE_INTR_HANDLE_EXT))
+			return -1;
+		if (rte_intr_callback_register(virtq->intr_handle,
+			mlx5_vdpa_virtq_kick_handler, virtq)) {
+			rte_intr_fd_set(virtq->intr_handle, -1);
+			DRV_LOG(ERR, "Failed to register virtq %d interrupt.",
+				index);
+			return -1;
+		}
+		DRV_LOG(DEBUG, "Register fd %d interrupt for virtq %d.",
+			rte_intr_fd_get(virtq->intr_handle), index);
+	}
+	return 0;
+}
+
+int
+mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index, bool reg_kick)
 {
 	struct mlx5_vdpa_virtq *virtq = &priv->virtqs[index];
 	struct rte_vhost_vring vq;
@@ -452,33 +483,11 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
 	rte_write32(virtq->index, priv->virtq_db_addr);
 	rte_spinlock_unlock(&priv->db_lock);
 	/* Setup doorbell mapping. */
-	virtq->intr_handle =
-		rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
-	if (virtq->intr_handle == NULL) {
-		DRV_LOG(ERR, "Fail to allocate intr_handle");
-		goto error;
-	}
-
-	if (rte_intr_fd_set(virtq->intr_handle, vq.kickfd))
-		goto error;
-
-	if (rte_intr_fd_get(virtq->intr_handle) == -1) {
-		DRV_LOG(WARNING, "Virtq %d kickfd is invalid.", index);
-	} else {
-		if (rte_intr_type_set(virtq->intr_handle, RTE_INTR_HANDLE_EXT))
-			goto error;
-
-		if (rte_intr_callback_register(virtq->intr_handle,
-					       mlx5_vdpa_virtq_kick_handler,
-					       virtq)) {
-			rte_intr_fd_set(virtq->intr_handle, -1);
+	if (reg_kick) {
+		if (mlx5_vdpa_virtq_doorbell_setup(virtq, &vq, index)) {
 			DRV_LOG(ERR, "Failed to register virtq %d interrupt.",
 				index);
 			goto error;
-		} else {
-			DRV_LOG(DEBUG, "Register fd %d interrupt for virtq %d.",
-				rte_intr_fd_get(virtq->intr_handle),
-				index);
 		}
 	}
 	/* Subscribe virtq error event. */
@@ -494,7 +503,6 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
 		rte_errno = errno;
 		goto error;
 	}
-	virtq->stopped = false;
 	/* Initial notification to ask Qemu handling completed buffers. */
 	if (virtq->eqp.cq.callfd != -1)
 		eventfd_write(virtq->eqp.cq.callfd, (eventfd_t)1);
@@ -564,10 +572,12 @@ mlx5_vdpa_features_validate(struct mlx5_vdpa_priv *priv)
 int
 mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv)
 {
-	uint32_t i;
-	uint16_t nr_vring = rte_vhost_get_vring_num(priv->vid);
 	int ret = rte_vhost_get_negotiated_features(priv->vid, &priv->features);
+	uint16_t nr_vring = rte_vhost_get_vring_num(priv->vid);
+	uint32_t remaining_cnt = 0, err_cnt = 0, task_num = 0;
+	uint32_t i, thrd_idx, data[1];
 	struct mlx5_vdpa_virtq *virtq;
+	struct rte_vhost_vring vq;
 
 	if (ret || mlx5_vdpa_features_validate(priv)) {
 		DRV_LOG(ERR, "Failed to configure negotiated features.");
@@ -587,16 +597,82 @@ mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv)
 		return -1;
 	}
 	priv->nr_virtqs = nr_vring;
-	for (i = 0; i < nr_vring; i++) {
-		virtq = &priv->virtqs[i];
-		if (virtq->enable) {
+	if (priv->use_c_thread) {
+		uint32_t main_task_idx[nr_vring];
+
+		for (i = 0; i < nr_vring; i++) {
+			virtq = &priv->virtqs[i];
+			if (!virtq->enable)
+				continue;
+			thrd_idx = i % (conf_thread_mng.max_thrds + 1);
+			if (!thrd_idx) {
+				main_task_idx[task_num] = i;
+				task_num++;
+				continue;
+			}
+			thrd_idx = priv->last_c_thrd_idx + 1;
+			if (thrd_idx >= conf_thread_mng.max_thrds)
+				thrd_idx = 0;
+			priv->last_c_thrd_idx = thrd_idx;
+			data[0] = i;
+			if (mlx5_vdpa_task_add(priv, thrd_idx,
+				MLX5_VDPA_TASK_SETUP_VIRTQ,
+				&remaining_cnt, &err_cnt,
+				(void **)&data, 1)) {
+				DRV_LOG(ERR, "Fail to add "
+					"task setup virtq (%d).", i);
+				main_task_idx[task_num] = i;
+				task_num++;
+			}
+		}
+		for (i = 0; i < task_num; i++) {
+			virtq = &priv->virtqs[main_task_idx[i]];
 			pthread_mutex_lock(&virtq->virtq_lock);
-			if (mlx5_vdpa_virtq_setup(priv, i)) {
+			if (mlx5_vdpa_virtq_setup(priv,
+				main_task_idx[i], false)) {
 				pthread_mutex_unlock(&virtq->virtq_lock);
 				goto error;
 			}
 			pthread_mutex_unlock(&virtq->virtq_lock);
 		}
+		if (mlx5_vdpa_c_thread_wait_bulk_tasks_done(&remaining_cnt,
+			&err_cnt, 2000)) {
+			DRV_LOG(ERR,
+			"Failed to wait virt-queue setup tasks ready.");
+			goto error;
+		}
+		for (i = 0; i < nr_vring; i++) {
+			/* Setup doorbell mapping in order for Qemu. */
+			virtq = &priv->virtqs[i];
+			pthread_mutex_lock(&virtq->virtq_lock);
+			if (!virtq->enable || !virtq->configured) {
+				pthread_mutex_unlock(&virtq->virtq_lock);
+				continue;
+			}
+			if (rte_vhost_get_vhost_vring(priv->vid, i, &vq)) {
+				pthread_mutex_unlock(&virtq->virtq_lock);
+				goto error;
+			}
+			if (mlx5_vdpa_virtq_doorbell_setup(virtq, &vq, i)) {
+				pthread_mutex_unlock(&virtq->virtq_lock);
+				DRV_LOG(ERR,
+				"Failed to register virtq %d interrupt.", i);
+				goto error;
+			}
+		}
+	} else {
+		for (i = 0; i < nr_vring; i++) {
+			virtq = &priv->virtqs[i];
+			if (virtq->enable) {
+				pthread_mutex_lock(&virtq->virtq_lock);
+				if (mlx5_vdpa_virtq_setup(priv, i, true)) {
+					pthread_mutex_unlock(
+						&virtq->virtq_lock);
+					goto error;
+				}
+				pthread_mutex_unlock(&virtq->virtq_lock);
+			}
+		}
 	}
 	return 0;
 error:
@@ -660,7 +736,7 @@ mlx5_vdpa_virtq_enable(struct mlx5_vdpa_priv *priv, int index, int enable)
 		mlx5_vdpa_virtq_unset(virtq);
 	}
 	if (enable) {
-		ret = mlx5_vdpa_virtq_setup(priv, index);
+		ret = mlx5_vdpa_virtq_setup(priv, index, true);
 		if (ret) {
 			DRV_LOG(ERR, "Failed to setup virtq %d.", index);
 			return ret;

From patchwork Fri Apr 8 07:56:02 2022
X-Patchwork-Submitter: Li Zhang
X-Patchwork-Id: 109485
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Li Zhang
Subject: [RFC 12/15] vdpa/mlx5: add virtq LM log task
Date: Fri, 8 Apr 2022 10:56:02 +0300
Message-ID: <20220408075606.33056-13-lizh@nvidia.com>
In-Reply-To: <20220408075606.33056-1-lizh@nvidia.com>
References: <20220408075606.33056-1-lizh@nvidia.com>
List-Id: DPDK patches and discussions

Split the virtqs LM log between the configuration threads. This
accelerates the LM process and reduces its time by 20%.
Signed-off-by: Li Zhang
---
 drivers/vdpa/mlx5/mlx5_vdpa.h         |  3 +
 drivers/vdpa/mlx5/mlx5_vdpa_cthread.c | 34 ++++++++++
 drivers/vdpa/mlx5/mlx5_vdpa_lm.c      | 90 ++++++++++++++++++++++-----
 3 files changed, 110 insertions(+), 17 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index 35221f5ddc..e08931719f 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -72,6 +72,8 @@ enum {
 	MLX5_VDPA_NOTIFIER_STATE_ERR
 };
 
+#define MLX5_VDPA_USED_RING_LEN(size) \
+	((size) * sizeof(struct vring_used_elem) + sizeof(uint16_t) * 3)
 #define MLX5_VDPA_MAX_C_THRD 256
 #define MLX5_VDPA_MAX_TASKS_PER_THRD 4096
 #define MLX5_VDPA_TASKS_PER_DEV 64
@@ -81,6 +83,7 @@ enum {
 enum mlx5_vdpa_task_type {
 	MLX5_VDPA_TASK_REG_MR = 1,
 	MLX5_VDPA_TASK_SETUP_VIRTQ,
+	MLX5_VDPA_TASK_STOP_VIRTQ,
 };
 
 /* Generic task information and size must be multiple of 4B. */
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c b/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
index a2d1ddb1e1..0e54226a90 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
@@ -104,6 +104,7 @@ mlx5_vdpa_c_thread_handle(void *arg)
 	struct mlx5_vdpa_priv *priv;
 	struct mlx5_vdpa_task task;
 	struct rte_ring *rng;
+	uint64_t features;
 	uint32_t thrd_idx;
 	uint32_t task_num;
 	int ret;
@@ -156,6 +157,39 @@ mlx5_vdpa_c_thread_handle(void *arg)
 			}
 			pthread_mutex_unlock(&virtq->virtq_lock);
 			break;
+		case MLX5_VDPA_TASK_STOP_VIRTQ:
+			virtq = &priv->virtqs[task.idx];
+			pthread_mutex_lock(&virtq->virtq_lock);
+			ret = mlx5_vdpa_virtq_stop(priv,
+					task.idx);
+			if (ret) {
+				DRV_LOG(ERR,
+					"Failed to stop virtq %d.",
+					task.idx);
+				__atomic_fetch_add(
+					task.err_cnt, 1,
+					__ATOMIC_RELAXED);
+				pthread_mutex_unlock(&virtq->virtq_lock);
+				break;
+			}
+			ret = rte_vhost_get_negotiated_features(
+				priv->vid, &features);
+			if (ret) {
+				DRV_LOG(ERR,
+					"Failed to get negotiated features virtq %d.",
+					task.idx);
+				__atomic_fetch_add(
+					task.err_cnt, 1,
+					__ATOMIC_RELAXED);
+				pthread_mutex_unlock(&virtq->virtq_lock);
+				break;
+			}
+			if (RTE_VHOST_NEED_LOG(features))
+				rte_vhost_log_used_vring(
+					priv->vid, task.idx, 0,
+					MLX5_VDPA_USED_RING_LEN(virtq->vq_size));
+			pthread_mutex_unlock(&virtq->virtq_lock);
+			break;
 		default:
 			DRV_LOG(ERR, "Invalid vdpa task type %d.",
 				task.type);
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_lm.c b/drivers/vdpa/mlx5/mlx5_vdpa_lm.c
index efebf364d0..07575ea8a9 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_lm.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_lm.c
@@ -89,39 +89,95 @@ mlx5_vdpa_dirty_bitmap_set(struct mlx5_vdpa_priv *priv, uint64_t log_base,
 	return -1;
 }
 
-#define MLX5_VDPA_USED_RING_LEN(size) \
-	((size) * sizeof(struct vring_used_elem) + sizeof(uint16_t) * 3)
-
 int
 mlx5_vdpa_lm_log(struct mlx5_vdpa_priv *priv)
 {
+	uint32_t remaining_cnt = 0, err_cnt = 0, task_num = 0;
+	uint32_t i, thrd_idx, data[1];
 	struct mlx5_vdpa_virtq *virtq;
 	uint64_t features;
-	int ret = rte_vhost_get_negotiated_features(priv->vid, &features);
-	int i;
+	int ret;
 
+	ret = rte_vhost_get_negotiated_features(priv->vid, &features);
 	if (ret) {
 		DRV_LOG(ERR, "Failed to get negotiated features.");
 		return -1;
 	}
-	if (!RTE_VHOST_NEED_LOG(features))
-		return 0;
-	for (i = 0; i < priv->nr_virtqs; ++i) {
-		virtq = &priv->virtqs[i];
-		if (!priv->virtqs[i].virtq) {
-			DRV_LOG(DEBUG, "virtq %d is invalid for LM log.", i);
-		} else {
+	if (priv->use_c_thread && priv->nr_virtqs) {
+		uint32_t main_task_idx[priv->nr_virtqs];
+
+		for (i = 0; i < priv->nr_virtqs; i++) {
+			virtq = &priv->virtqs[i];
+			if (!virtq->configured) {
+				DRV_LOG(DEBUG,
+					"virtq %d is invalid for LM log.", i);
+				continue;
+			}
+			thrd_idx = i % (conf_thread_mng.max_thrds + 1);
+			if (!thrd_idx) {
+				main_task_idx[task_num] = i;
+				task_num++;
+				continue;
+			}
+			thrd_idx = priv->last_c_thrd_idx + 1;
+			if (thrd_idx >= conf_thread_mng.max_thrds)
+				thrd_idx = 0;
+			priv->last_c_thrd_idx = thrd_idx;
+			data[0] = i;
+			if (mlx5_vdpa_task_add(priv, thrd_idx,
+				MLX5_VDPA_TASK_STOP_VIRTQ,
+				&remaining_cnt, &err_cnt,
+				(void **)&data, 1)) {
+				DRV_LOG(ERR, "Fail to add "
+					"task stop virtq (%d).", i);
+				main_task_idx[task_num] = i;
+				task_num++;
+			}
+		}
+		for (i = 0; i < task_num; i++) {
+			virtq = &priv->virtqs[main_task_idx[i]];
 			pthread_mutex_lock(&virtq->virtq_lock);
-			ret = mlx5_vdpa_virtq_stop(priv, i);
+			ret = mlx5_vdpa_virtq_stop(priv,
+					main_task_idx[i]);
+			if (ret) {
+				pthread_mutex_unlock(&virtq->virtq_lock);
+				DRV_LOG(ERR,
+					"Failed to stop virtq %d.", i);
+				return -1;
+			}
+			if (RTE_VHOST_NEED_LOG(features))
+				rte_vhost_log_used_vring(priv->vid, i, 0,
+				MLX5_VDPA_USED_RING_LEN(virtq->vq_size));
 			pthread_mutex_unlock(&virtq->virtq_lock);
+		}
+		if (mlx5_vdpa_c_thread_wait_bulk_tasks_done(&remaining_cnt,
+			&err_cnt, 2000)) {
+			DRV_LOG(ERR,
+			"Failed to wait virt-queue setup tasks ready.");
+			return -1;
+		}
+	} else {
+		for (i = 0; i < priv->nr_virtqs; i++) {
+			virtq = &priv->virtqs[i];
+			pthread_mutex_lock(&virtq->virtq_lock);
+			if (!virtq->configured) {
+				pthread_mutex_unlock(&virtq->virtq_lock);
+				DRV_LOG(DEBUG,
+					"virtq %d is invalid for LM log.", i);
+				continue;
+			}
+			ret = mlx5_vdpa_virtq_stop(priv, i);
 			if (ret) {
-				DRV_LOG(ERR, "Failed to stop virtq %d for LM "
-					"log.", i);
+				pthread_mutex_unlock(&virtq->virtq_lock);
+				DRV_LOG(ERR,
+					"Failed to stop virtq %d for LM log.", i);
 				return -1;
 			}
+			if (RTE_VHOST_NEED_LOG(features))
+				rte_vhost_log_used_vring(priv->vid, i, 0,
+				MLX5_VDPA_USED_RING_LEN(virtq->vq_size));
+			pthread_mutex_unlock(&virtq->virtq_lock);
 		}
-		rte_vhost_log_used_vring(priv->vid, i, 0,
-			MLX5_VDPA_USED_RING_LEN(priv->virtqs[i].vq_size));
 	}
 	return 0;
 }

From patchwork Fri Apr 8 07:56:03 2022
X-Patchwork-Submitter: Li Zhang
X-Patchwork-Id: 109486
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Li Zhang
Subject: [RFC 13/15] vdpa/mlx5: add device close task
Date: Fri, 8 Apr 2022 10:56:03 +0300
Message-ID: <20220408075606.33056-14-lizh@nvidia.com>
In-Reply-To: <20220408075606.33056-1-lizh@nvidia.com>
References: <20220408075606.33056-1-lizh@nvidia.com>
List-Id: DPDK patches and discussions

Split the virtqs device close tasks after stopping virt-queue between the configuration threads. This accelerates the LM process and reduces its time by 50%. 
Signed-off-by: Li Zhang --- drivers/vdpa/mlx5/mlx5_vdpa.c | 51 +++++++++++++++++++++++++-- drivers/vdpa/mlx5/mlx5_vdpa.h | 8 +++++ drivers/vdpa/mlx5/mlx5_vdpa_cthread.c | 20 ++++++++++- drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 14 ++++++++ 4 files changed, 90 insertions(+), 3 deletions(-) diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c index 8dd8e6a2a0..d349682a83 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa.c @@ -245,7 +245,7 @@ mlx5_vdpa_mtu_set(struct mlx5_vdpa_priv *priv) return kern_mtu == vhost_mtu ? 0 : -1; } -static void +void mlx5_vdpa_dev_cache_clean(struct mlx5_vdpa_priv *priv) { /* Clean pre-created resource in dev removal only. */ @@ -254,6 +254,26 @@ mlx5_vdpa_dev_cache_clean(struct mlx5_vdpa_priv *priv) mlx5_vdpa_mem_dereg(priv); } +static bool +mlx5_vdpa_wait_dev_close_tasks_done(struct mlx5_vdpa_priv *priv) +{ + uint32_t timeout = 0; + + /* Check and wait all close tasks done. */ + while (__atomic_load_n(&priv->dev_close_progress, + __ATOMIC_RELAXED) != 0 && timeout < 1000) { + rte_delay_us_sleep(10000); + timeout++; + } + if (priv->dev_close_progress) { + DRV_LOG(ERR, + "Failed to wait close device tasks done vid %d.", + priv->vid); + return true; + } + return false; +} + static int mlx5_vdpa_dev_close(int vid) { @@ -271,6 +291,27 @@ mlx5_vdpa_dev_close(int vid) ret |= mlx5_vdpa_lm_log(priv); priv->state = MLX5_VDPA_STATE_IN_PROGRESS; } + if (priv->use_c_thread) { + if (priv->last_c_thrd_idx >= + (conf_thread_mng.max_thrds - 1)) + priv->last_c_thrd_idx = 0; + else + priv->last_c_thrd_idx++; + __atomic_store_n(&priv->dev_close_progress, + 1, __ATOMIC_RELAXED); + if (mlx5_vdpa_task_add(priv, + priv->last_c_thrd_idx, + MLX5_VDPA_TASK_DEV_CLOSE_NOWAIT, + NULL, NULL, NULL, 1)) { + DRV_LOG(ERR, + "Fail to add dev close task. 
"); + goto single_thrd; + } + priv->state = MLX5_VDPA_STATE_PROBED; + DRV_LOG(INFO, "vDPA device %d was closed.", vid); + return ret; + } +single_thrd: pthread_mutex_lock(&priv->steer_update_lock); mlx5_vdpa_steer_unset(priv); pthread_mutex_unlock(&priv->steer_update_lock); @@ -278,10 +319,12 @@ mlx5_vdpa_dev_close(int vid) mlx5_vdpa_drain_cq(priv); if (priv->lm_mr.addr) mlx5_os_wrapped_mkey_destroy(&priv->lm_mr); - priv->state = MLX5_VDPA_STATE_PROBED; if (!priv->connected) mlx5_vdpa_dev_cache_clean(priv); priv->vid = 0; + __atomic_store_n(&priv->dev_close_progress, 0, + __ATOMIC_RELAXED); + priv->state = MLX5_VDPA_STATE_PROBED; DRV_LOG(INFO, "vDPA device %d was closed.", vid); return ret; } @@ -302,6 +345,8 @@ mlx5_vdpa_dev_config(int vid) DRV_LOG(ERR, "Failed to reconfigure vid %d.", vid); return -1; } + if (mlx5_vdpa_wait_dev_close_tasks_done(priv)) + return -1; priv->vid = vid; priv->connected = true; if (mlx5_vdpa_mtu_set(priv)) @@ -839,6 +884,8 @@ mlx5_vdpa_dev_release(struct mlx5_vdpa_priv *priv) { if (priv->state == MLX5_VDPA_STATE_CONFIGURED) mlx5_vdpa_dev_close(priv->vid); + if (priv->use_c_thread) + mlx5_vdpa_wait_dev_close_tasks_done(priv); mlx5_vdpa_release_dev_resources(priv); if (priv->vdev) rte_vdpa_unregister_device(priv->vdev); diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h index e08931719f..b6392b9d66 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.h +++ b/drivers/vdpa/mlx5/mlx5_vdpa.h @@ -84,6 +84,7 @@ enum mlx5_vdpa_task_type { MLX5_VDPA_TASK_REG_MR = 1, MLX5_VDPA_TASK_SETUP_VIRTQ, MLX5_VDPA_TASK_STOP_VIRTQ, + MLX5_VDPA_TASK_DEV_CLOSE_NOWAIT, }; /* Generic task information and size must be multiple of 4B. */ @@ -206,6 +207,7 @@ struct mlx5_vdpa_priv { uint64_t features; /* Negotiated features. */ uint16_t log_max_rqt_size; uint16_t last_c_thrd_idx; + uint16_t dev_close_progress; uint16_t num_mrs; /* Number of memory regions. 
*/ struct mlx5_vdpa_steer steer; struct mlx5dv_var *var; @@ -578,4 +580,10 @@ mlx5_vdpa_c_thread_wait_bulk_tasks_done(uint32_t *remaining_cnt, uint32_t *err_cnt, uint32_t sleep_time); int mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index, bool reg_kick); +void +mlx5_vdpa_vq_destroy(struct mlx5_vdpa_virtq *virtq); +void +mlx5_vdpa_dev_cache_clean(struct mlx5_vdpa_priv *priv); +void +mlx5_vdpa_virtq_unreg_intr_handle_all(struct mlx5_vdpa_priv *priv); #endif /* RTE_PMD_MLX5_VDPA_H_ */ diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c b/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c index 0e54226a90..07efa0cb16 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c @@ -63,7 +63,8 @@ mlx5_vdpa_task_add(struct mlx5_vdpa_priv *priv, task[i].type = task_type; task[i].remaining_cnt = remaining_cnt; task[i].err_cnt = err_cnt; - task[i].idx = data[i]; + if (data) + task[i].idx = data[i]; } if (!mlx5_vdpa_c_thrd_ring_enqueue_bulk(rng, (void **)&task, num, NULL)) return -1; @@ -190,6 +191,23 @@ mlx5_vdpa_c_thread_handle(void *arg) MLX5_VDPA_USED_RING_LEN(virtq->vq_size)); pthread_mutex_unlock(&virtq->virtq_lock); break; + case MLX5_VDPA_TASK_DEV_CLOSE_NOWAIT: + mlx5_vdpa_virtq_unreg_intr_handle_all(priv); + pthread_mutex_lock(&priv->steer_update_lock); + mlx5_vdpa_steer_unset(priv); + pthread_mutex_unlock(&priv->steer_update_lock); + mlx5_vdpa_virtqs_release(priv); + mlx5_vdpa_drain_cq(priv); + if (priv->lm_mr.addr) + mlx5_os_wrapped_mkey_destroy( + &priv->lm_mr); + if (!priv->connected) + mlx5_vdpa_dev_cache_clean(priv); + priv->vid = 0; + __atomic_store_n( + &priv->dev_close_progress, 0, + __ATOMIC_RELAXED); + break; default: DRV_LOG(ERR, "Invalid vdpa task type %d.", task.type); diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c index 127b1cee7f..c1281be5f2 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c @@ -99,6 +99,20 @@ 
mlx5_vdpa_virtq_unregister_intr_handle(struct mlx5_vdpa_virtq *virtq) rte_intr_instance_free(virtq->intr_handle); } +void +mlx5_vdpa_virtq_unreg_intr_handle_all(struct mlx5_vdpa_priv *priv) +{ + uint32_t i; + struct mlx5_vdpa_virtq *virtq; + + for (i = 0; i < priv->nr_virtqs; i++) { + virtq = &priv->virtqs[i]; + pthread_mutex_lock(&virtq->virtq_lock); + mlx5_vdpa_virtq_unregister_intr_handle(virtq); + pthread_mutex_unlock(&virtq->virtq_lock); + } +} + /* Release cached VQ resources. */ void mlx5_vdpa_virtqs_cleanup(struct mlx5_vdpa_priv *priv) From patchwork Fri Apr 8 07:56:04 2022
X-Patchwork-Submitter: Li Zhang
X-Patchwork-Id: 109487
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Li Zhang
CC: Yajun Wu
Subject: [RFC 14/15] vdpa/mlx5: add virtq sub-resources creation
Date: Fri, 8 Apr 2022 10:56:04 +0300
Message-ID: <20220408075606.33056-15-lizh@nvidia.com>
In-Reply-To: <20220408075606.33056-1-lizh@nvidia.com>
References: <20220408075606.33056-1-lizh@nvidia.com>
Pre-create the virt-queue sub-resources at device probe stage and then modify the virt-queues at device config stage. The steer table also needs to support dummy virt-queues. This accelerates the LM process and reduces its time by 40%. Signed-off-by: Li Zhang Signed-off-by: Yajun Wu --- drivers/vdpa/mlx5/mlx5_vdpa.c | 68 ++++++-------------- drivers/vdpa/mlx5/mlx5_vdpa.h | 17 +++-- drivers/vdpa/mlx5/mlx5_vdpa_event.c | 9 ++- drivers/vdpa/mlx5/mlx5_vdpa_steer.c | 15 +++-- drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 99 +++++++++++++++++++++-------- 5 files changed, 117 insertions(+), 91 deletions(-) diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c index d349682a83..eaca571e3e 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa.c @@ -624,65 +624,37 @@ mlx5_vdpa_config_get(struct mlx5_kvargs_ctrl *mkvlist, static int mlx5_vdpa_virtq_resource_prepare(struct mlx5_vdpa_priv *priv) { - struct mlx5_vdpa_virtq *virtq; + uint32_t max_queues = priv->queues * 2; uint32_t index; - uint32_t i; + struct mlx5_vdpa_virtq *virtq; for (index = 0; index < priv->caps.max_num_virtio_queues * 2; index++) { virtq = &priv->virtqs[index]; pthread_mutex_init(&virtq->virtq_lock, NULL); } - if (!priv->queues) + if (!priv->queues || !priv->queue_size) return 0; - for (index = 0; index < (priv->queues * 2); ++index) { + for (index = 0; index < max_queues; ++index) + if (mlx5_vdpa_virtq_single_resource_prepare(priv, + index)) + goto error; + if (mlx5_vdpa_is_modify_virtq_supported(priv)) + if (mlx5_vdpa_steer_update(priv, true)) + goto error; + return 0; +error: + for (index = 0; index < max_queues; ++index) { + virtq = &priv->virtqs[index]; - int ret = mlx5_vdpa_event_qp_prepare(priv, priv->queue_size, - -1, virtq); - - if (ret) { - DRV_LOG(ERR, "Failed to create event QPs for virtq 
%d.", - index); - return -1; - } - if (priv->caps.queue_counters_valid) { - if (!virtq->counters) - virtq->counters = - mlx5_devx_cmd_create_virtio_q_counters - (priv->cdev->ctx); - if (!virtq->counters) { - DRV_LOG(ERR, "Failed to create virtq couners for virtq" - " %d.", index); - return -1; - } - } - for (i = 0; i < RTE_DIM(virtq->umems); ++i) { - uint32_t size; - void *buf; - struct mlx5dv_devx_umem *obj; - - size = priv->caps.umems[i].a * priv->queue_size + - priv->caps.umems[i].b; - buf = rte_zmalloc(__func__, size, 4096); - if (buf == NULL) { - DRV_LOG(ERR, "Cannot allocate umem %d memory for virtq" - " %u.", i, index); - return -1; - } - obj = mlx5_glue->devx_umem_reg(priv->cdev->ctx, buf, - size, IBV_ACCESS_LOCAL_WRITE); - if (obj == NULL) { - rte_free(buf); - DRV_LOG(ERR, "Failed to register umem %d for virtq %u.", - i, index); - return -1; - } - virtq->umems[i].size = size; - virtq->umems[i].buf = buf; - virtq->umems[i].obj = obj; + if (virtq->virtq) { + pthread_mutex_lock(&virtq->virtq_lock); + mlx5_vdpa_virtq_unset(virtq); + pthread_mutex_unlock(&virtq->virtq_lock); + } } - return 0; + if (mlx5_vdpa_is_modify_virtq_supported(priv)) + mlx5_vdpa_steer_unset(priv); + return -1; } static int diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h index b6392b9d66..00700261ec 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.h +++ b/drivers/vdpa/mlx5/mlx5_vdpa.h @@ -277,13 +277,15 @@ int mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv); * The guest notification file descriptor. * @param[in/out] virtq * Pointer to the virt-queue structure. + * @param[in] reset + * If true, the event qp will be reset. * * @return * 0 on success, -1 otherwise and rte_errno is set. */ int mlx5_vdpa_event_qp_prepare(struct mlx5_vdpa_priv *priv, uint16_t desc_n, - int callfd, struct mlx5_vdpa_virtq *virtq); + int callfd, struct mlx5_vdpa_virtq *virtq, bool reset); /** * Destroy an event QP and all its related resources. 
@@ -403,11 +405,13 @@ void mlx5_vdpa_steer_unset(struct mlx5_vdpa_priv *priv); * * @param[in] priv * The vdpa driver private structure. + * @param[in] is_dummy + * If set, the steering is updated with dummy queues to prepare the resources. * * @return * 0 on success, a negative value otherwise. */ -int mlx5_vdpa_steer_update(struct mlx5_vdpa_priv *priv); +int mlx5_vdpa_steer_update(struct mlx5_vdpa_priv *priv, bool is_dummy); /** * Setup steering and all its related resources to enable RSS traffic from the @@ -581,9 +585,14 @@ mlx5_vdpa_c_thread_wait_bulk_tasks_done(uint32_t *remaining_cnt, int mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index, bool reg_kick); void -mlx5_vdpa_vq_destroy(struct mlx5_vdpa_virtq *virtq); -void mlx5_vdpa_dev_cache_clean(struct mlx5_vdpa_priv *priv); void mlx5_vdpa_virtq_unreg_intr_handle_all(struct mlx5_vdpa_priv *priv); +bool +mlx5_vdpa_virtq_single_resource_prepare(struct mlx5_vdpa_priv *priv, + int index); +int +mlx5_vdpa_qps2rst2rts(struct mlx5_vdpa_event_qp *eqp); +void +mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq); #endif /* RTE_PMD_MLX5_VDPA_H_ */ diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c index f782b6b832..c7be9d5f38 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c @@ -618,7 +618,7 @@ mlx5_vdpa_qps2rts(struct mlx5_vdpa_event_qp *eqp) return 0; } -static int +int mlx5_vdpa_qps2rst2rts(struct mlx5_vdpa_event_qp *eqp) { if (mlx5_devx_cmd_modify_qp_state(eqp->fw_qp, MLX5_CMD_OP_QP_2RST, @@ -638,7 +638,7 @@ mlx5_vdpa_qps2rst2rts(struct mlx5_vdpa_event_qp *eqp) int mlx5_vdpa_event_qp_prepare(struct mlx5_vdpa_priv *priv, uint16_t desc_n, - int callfd, struct mlx5_vdpa_virtq *virtq) + int callfd, struct mlx5_vdpa_virtq *virtq, bool reset) { struct mlx5_vdpa_event_qp *eqp = &virtq->eqp; struct mlx5_devx_qp_attr attr = {0}; @@ -649,11 +649,10 @@ mlx5_vdpa_event_qp_prepare(struct mlx5_vdpa_priv *priv, uint16_t desc_n, /* Reuse existing resources. 
 */
	eqp->cq.callfd = callfd;
	/* FW will set event qp to error state in q destroy. */
-	if (!mlx5_vdpa_qps2rst2rts(eqp)) {
+	if (reset && !mlx5_vdpa_qps2rst2rts(eqp))
 		rte_write32(rte_cpu_to_be_32(RTE_BIT32(log_desc_n)),
 				&eqp->sw_qp.db_rec[0]);
-		return 0;
-	}
+	return 0;
 	}
 	if (eqp->fw_qp)
 		mlx5_vdpa_event_qp_destroy(eqp);
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_steer.c b/drivers/vdpa/mlx5/mlx5_vdpa_steer.c
index 4cbf09784e..f7f6dce45c 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_steer.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_steer.c
@@ -57,7 +57,7 @@ mlx5_vdpa_steer_unset(struct mlx5_vdpa_priv *priv)
  * -1 on error.
  */
 static int
-mlx5_vdpa_rqt_prepare(struct mlx5_vdpa_priv *priv)
+mlx5_vdpa_rqt_prepare(struct mlx5_vdpa_priv *priv, bool is_dummy)
 {
 	int i;
 	uint32_t rqt_n = RTE_MIN(MLX5_VDPA_DEFAULT_RQT_SIZE,
@@ -67,15 +67,18 @@ mlx5_vdpa_rqt_prepare(struct mlx5_vdpa_priv *priv)
 			sizeof(uint32_t), 0);
 	uint32_t k = 0, j;
 	int ret = 0, num;
+	uint16_t nr_vring = is_dummy ? priv->queues * 2 : priv->nr_virtqs;
 
 	if (!attr) {
 		DRV_LOG(ERR, "Failed to allocate RQT attributes memory.");
 		rte_errno = ENOMEM;
 		return -ENOMEM;
 	}
-	for (i = 0; i < priv->nr_virtqs; i++) {
+	for (i = 0; i < nr_vring; i++) {
 		if (is_virtq_recvq(i, priv->nr_virtqs) &&
-		    priv->virtqs[i].enable && priv->virtqs[i].virtq) {
+			(is_dummy || (priv->virtqs[i].enable &&
+			priv->virtqs[i].configured)) &&
+			priv->virtqs[i].virtq) {
 			attr->rq_list[k] = priv->virtqs[i].virtq->id;
 			k++;
 		}
@@ -235,12 +238,12 @@ mlx5_vdpa_rss_flows_create(struct mlx5_vdpa_priv *priv)
 }
 
 int
-mlx5_vdpa_steer_update(struct mlx5_vdpa_priv *priv)
+mlx5_vdpa_steer_update(struct mlx5_vdpa_priv *priv, bool is_dummy)
 {
 	int ret;
 
 	pthread_mutex_lock(&priv->steer_update_lock);
-	ret = mlx5_vdpa_rqt_prepare(priv);
+	ret = mlx5_vdpa_rqt_prepare(priv, is_dummy);
 	if (ret == 0) {
 		mlx5_vdpa_steer_unset(priv);
 	} else if (ret < 0) {
@@ -261,7 +264,7 @@ mlx5_vdpa_steer_update(struct mlx5_vdpa_priv *priv)
 int
 mlx5_vdpa_steer_setup(struct mlx5_vdpa_priv *priv)
 {
-	if (mlx5_vdpa_steer_update(priv))
+	if (mlx5_vdpa_steer_update(priv, false))
 		goto error;
 	return 0;
 error:
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index c1281be5f2..4a74738d9c 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -143,10 +143,10 @@ mlx5_vdpa_virtqs_cleanup(struct mlx5_vdpa_priv *priv)
 	}
 }
 
-static int
+void
 mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq)
 {
-	int ret = -EAGAIN;
+	int ret;
 
 	mlx5_vdpa_virtq_unregister_intr_handle(virtq);
 	if (virtq->configured) {
@@ -154,12 +154,12 @@ mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq)
 		if (ret)
 			DRV_LOG(WARNING, "Failed to stop virtq %d.",
 				virtq->index);
-		virtq->configured = 0;
 		claim_zero(mlx5_devx_cmd_destroy(virtq->virtq));
+		virtq->index = 0;
+		virtq->virtq = NULL;
+		virtq->configured = 0;
 	}
-	virtq->virtq = NULL;
 	virtq->notifier_state = MLX5_VDPA_NOTIFIER_STATE_DISABLED;
-	return 0;
 }
 
 void
@@ -172,6 +172,9 @@ mlx5_vdpa_virtqs_release(struct mlx5_vdpa_priv *priv)
 		virtq = &priv->virtqs[i];
 		pthread_mutex_lock(&virtq->virtq_lock);
 		mlx5_vdpa_virtq_unset(virtq);
+		if (i < (priv->queues * 2))
+			mlx5_vdpa_virtq_single_resource_prepare(
+					priv, i);
 		pthread_mutex_unlock(&virtq->virtq_lock);
 	}
 	priv->features = 0;
@@ -255,7 +258,8 @@ mlx5_vdpa_hva_to_gpa(struct rte_vhost_memory *mem, uint64_t hva)
 static int
 mlx5_vdpa_virtq_sub_objs_prepare(struct mlx5_vdpa_priv *priv,
 		struct mlx5_devx_virtq_attr *attr,
-		struct rte_vhost_vring *vq, int index)
+		struct rte_vhost_vring *vq,
+		int index, bool is_prepare)
 {
 	struct mlx5_vdpa_virtq *virtq = &priv->virtqs[index];
 	uint64_t gpa;
@@ -274,11 +278,15 @@ mlx5_vdpa_virtq_sub_objs_prepare(struct mlx5_vdpa_priv *priv,
 			MLX5_VIRTQ_MODIFY_TYPE_Q_MKEY |
 			MLX5_VIRTQ_MODIFY_TYPE_QUEUE_FEATURE_BIT_MASK |
 			MLX5_VIRTQ_MODIFY_TYPE_EVENT_MODE;
-	attr->tso_ipv4 = !!(priv->features & (1ULL << VIRTIO_NET_F_HOST_TSO4));
-	attr->tso_ipv6 = !!(priv->features & (1ULL << VIRTIO_NET_F_HOST_TSO6));
-	attr->tx_csum = !!(priv->features & (1ULL << VIRTIO_NET_F_CSUM));
-	attr->rx_csum = !!(priv->features & (1ULL << VIRTIO_NET_F_GUEST_CSUM));
-	attr->virtio_version_1_0 =
+	attr->tso_ipv4 = is_prepare ? 1 :
+		!!(priv->features & (1ULL << VIRTIO_NET_F_HOST_TSO4));
+	attr->tso_ipv6 = is_prepare ? 1 :
+		!!(priv->features & (1ULL << VIRTIO_NET_F_HOST_TSO6));
+	attr->tx_csum = is_prepare ? 1 :
+		!!(priv->features & (1ULL << VIRTIO_NET_F_CSUM));
+	attr->rx_csum = is_prepare ? 1 :
+		!!(priv->features & (1ULL << VIRTIO_NET_F_GUEST_CSUM));
+	attr->virtio_version_1_0 = is_prepare ? 1 :
 		!!(priv->features & (1ULL << VIRTIO_F_VERSION_1));
 	attr->q_type =
 		(priv->features & (1ULL << VIRTIO_F_RING_PACKED)) ?
@@ -287,12 +295,12 @@ mlx5_vdpa_virtq_sub_objs_prepare(struct mlx5_vdpa_priv *priv,
 	 * No need event QPs creation when the guest in poll mode or when the
 	 * capability allows it.
 	 */
-	attr->event_mode = vq->callfd != -1 ||
+	attr->event_mode = is_prepare || vq->callfd != -1 ||
 	!(priv->caps.event_mode & (1 << MLX5_VIRTQ_EVENT_MODE_NO_MSIX)) ?
 		MLX5_VIRTQ_EVENT_MODE_QP : MLX5_VIRTQ_EVENT_MODE_NO_MSIX;
 	if (attr->event_mode == MLX5_VIRTQ_EVENT_MODE_QP) {
-		ret = mlx5_vdpa_event_qp_prepare(priv,
-				vq->size, vq->callfd, virtq);
+		ret = mlx5_vdpa_event_qp_prepare(priv, vq->size,
+				vq->callfd, virtq, !virtq->virtq);
 		if (ret) {
 			DRV_LOG(ERR,
 				"Failed to create event QPs for virtq %d.",
@@ -317,7 +325,7 @@ mlx5_vdpa_virtq_sub_objs_prepare(struct mlx5_vdpa_priv *priv,
 		attr->counters_obj_id = virtq->counters->id;
 	}
 	/* Setup 3 UMEMs for each virtq. */
-	if (virtq->virtq) {
+	if (!virtq->virtq) {
 		for (i = 0; i < RTE_DIM(virtq->umems); ++i) {
 			uint32_t size;
 			void *buf;
@@ -342,7 +350,7 @@ mlx5_vdpa_virtq_sub_objs_prepare(struct mlx5_vdpa_priv *priv,
 			buf = rte_zmalloc(__func__, size, 4096);
 			if (buf == NULL) {
-				DRV_LOG(ERR, "Cannot allocate umem %d memory for virtq"
+				DRV_LOG(ERR, "Cannot allocate umem %d memory for virtq."
					" %u.", i, index);
 				return -1;
 			}
@@ -363,7 +371,7 @@ mlx5_vdpa_virtq_sub_objs_prepare(struct mlx5_vdpa_priv *priv,
 			attr->umems[i].size = virtq->umems[i].size;
 		}
 	}
-	if (attr->q_type == MLX5_VIRTQ_TYPE_SPLIT) {
+	if (!is_prepare && attr->q_type == MLX5_VIRTQ_TYPE_SPLIT) {
 		gpa = mlx5_vdpa_hva_to_gpa(priv->vmem_info.vmem,
 					   (uint64_t)(uintptr_t)vq->desc);
 		if (!gpa) {
@@ -386,21 +394,23 @@ mlx5_vdpa_virtq_sub_objs_prepare(struct mlx5_vdpa_priv *priv,
 		}
 		attr->available_addr = gpa;
 	}
-	ret = rte_vhost_get_vring_base(priv->vid,
+	if (!is_prepare) {
+		ret = rte_vhost_get_vring_base(priv->vid,
 			index, &last_avail_idx, &last_used_idx);
-	if (ret) {
-		last_avail_idx = 0;
-		last_used_idx = 0;
-		DRV_LOG(WARNING, "Couldn't get vring base, idx are set to 0.");
-	} else {
-		DRV_LOG(INFO, "vid %d: Init last_avail_idx=%d, last_used_idx=%d for "
+		if (ret) {
+			last_avail_idx = 0;
+			last_used_idx = 0;
+			DRV_LOG(WARNING, "Couldn't get vring base, idx are set to 0.");
+		} else {
+			DRV_LOG(INFO, "vid %d: Init last_avail_idx=%d, last_used_idx=%d for "
				"virtq %d.", priv->vid, last_avail_idx,
				last_used_idx, index);
+		}
 	}
 	attr->hw_available_index = last_avail_idx;
 	attr->hw_used_index = last_used_idx;
 	attr->q_size = vq->size;
-	attr->mkey = priv->gpa_mkey_index;
+	attr->mkey = is_prepare ? 0 : priv->gpa_mkey_index;
 	attr->tis_id = priv->tiss[(index / 2) % priv->num_lag_ports]->id;
 	attr->queue_index = index;
 	attr->pd = priv->cdev->pdn;
@@ -413,6 +423,39 @@ mlx5_vdpa_virtq_sub_objs_prepare(struct mlx5_vdpa_priv *priv,
 	return 0;
 }
 
+bool
+mlx5_vdpa_virtq_single_resource_prepare(struct mlx5_vdpa_priv *priv,
+		int index)
+{
+	struct mlx5_devx_virtq_attr attr = {0};
+	struct mlx5_vdpa_virtq *virtq;
+	struct rte_vhost_vring vq = {
+		.size = priv->queue_size,
+		.callfd = -1,
+	};
+	int ret;
+
+	virtq = &priv->virtqs[index];
+	virtq->index = index;
+	virtq->vq_size = vq.size;
+	virtq->configured = 0;
+	virtq->virtq = NULL;
+	ret = mlx5_vdpa_virtq_sub_objs_prepare(priv, &attr, &vq, index, true);
+	if (ret) {
+		DRV_LOG(ERR,
+			"Cannot prepare setup resource for virtq %d.", index);
+		return true;
+	}
+	if (mlx5_vdpa_is_modify_virtq_supported(priv)) {
+		virtq->virtq =
+			mlx5_devx_cmd_create_virtq(priv->cdev->ctx, &attr);
+		virtq->priv = priv;
+		if (!virtq->virtq)
+			return true;
+	}
+	return false;
+}
+
 bool
 mlx5_vdpa_is_modify_virtq_supported(struct mlx5_vdpa_priv *priv)
 {
@@ -470,7 +513,7 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index, bool reg_kick)
 	virtq->priv = priv;
 	virtq->stopped = 0;
 	ret = mlx5_vdpa_virtq_sub_objs_prepare(priv, &attr,
-				&vq, index);
+				&vq, index, false);
 	if (ret) {
 		DRV_LOG(ERR, "Failed to setup update virtq attr"
 			" %d.", index);
@@ -742,7 +785,7 @@ mlx5_vdpa_virtq_enable(struct mlx5_vdpa_priv *priv, int index, int enable)
 	if (virtq->configured) {
 		virtq->enable = 0;
 		if (is_virtq_recvq(virtq->index, priv->nr_virtqs)) {
-			ret = mlx5_vdpa_steer_update(priv);
+			ret = mlx5_vdpa_steer_update(priv, false);
 			if (ret)
 				DRV_LOG(WARNING, "Failed to disable steering "
					"for virtq %d.", index);
@@ -757,7 +800,7 @@ mlx5_vdpa_virtq_enable(struct mlx5_vdpa_priv *priv, int index, int enable)
 	}
 	virtq->enable = 1;
 	if (is_virtq_recvq(virtq->index, priv->nr_virtqs)) {
-		ret = mlx5_vdpa_steer_update(priv);
+		ret = mlx5_vdpa_steer_update(priv, false);
 		if (ret)
 			DRV_LOG(WARNING, "Failed to enable steering "
				"for virtq %d.", index);

From patchwork Fri Apr 8 07:56:05 2022
X-Patchwork-Submitter: Li Zhang
X-Patchwork-Id: 109488
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Li Zhang
Subject: [RFC 15/15] vdpa/mlx5: prepare virtqueue resource creation
Date: Fri, 8 Apr 2022 10:56:05 +0300
Message-ID: <20220408075606.33056-16-lizh@nvidia.com>
In-Reply-To: <20220408075606.33056-1-lizh@nvidia.com>
References: <20220408075606.33056-1-lizh@nvidia.com>
Split the virtq resource creation between the configuration threads. The virtq resources also need to be pre-created again after virtq destruction. This accelerates the LM (live migration) process and reduces its time by 30%.

Signed-off-by: Li Zhang
---
 drivers/vdpa/mlx5/mlx5_vdpa.c         | 69 +++++++++++++++++++++++----
 drivers/vdpa/mlx5/mlx5_vdpa.h         |  7 ++-
 drivers/vdpa/mlx5/mlx5_vdpa_cthread.c | 14 +++++-
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c   | 35 ++++++++++----
 4 files changed, 104 insertions(+), 21 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index eaca571e3e..15ce30bc49 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -275,13 +275,17 @@ mlx5_vdpa_wait_dev_close_tasks_done(struct mlx5_vdpa_priv *priv)
 }
 
 static int
-mlx5_vdpa_dev_close(int vid)
+_internal_mlx5_vdpa_dev_close(int vid, bool release_resource)
 {
 	struct rte_vdpa_device *vdev = rte_vhost_get_vdpa_device(vid);
-	struct mlx5_vdpa_priv *priv =
-		mlx5_vdpa_find_priv_resource_by_vdev(vdev);
+	struct mlx5_vdpa_priv *priv;
 	int ret = 0;
 
+	if (!vdev) {
+		DRV_LOG(ERR, "Invalid vDPA device.");
+		return -1;
+	}
+	priv = mlx5_vdpa_find_priv_resource_by_vdev(vdev);
 	if (priv == NULL) {
 		DRV_LOG(ERR, "Invalid vDPA device: %s.", vdev->device->name);
 		return -1;
@@ -291,7 +295,7 @@ mlx5_vdpa_dev_close(int vid)
 		ret |= mlx5_vdpa_lm_log(priv);
 		priv->state = MLX5_VDPA_STATE_IN_PROGRESS;
 	}
-	if (priv->use_c_thread) {
+	if (priv->use_c_thread && !release_resource) {
 		if (priv->last_c_thrd_idx >=
			(conf_thread_mng.max_thrds - 1))
 			priv->last_c_thrd_idx = 0;
@@ -315,7 +319,7 @@ mlx5_vdpa_dev_close(int vid)
 	pthread_mutex_lock(&priv->steer_update_lock);
 	mlx5_vdpa_steer_unset(priv);
 	pthread_mutex_unlock(&priv->steer_update_lock);
-	mlx5_vdpa_virtqs_release(priv);
+	mlx5_vdpa_virtqs_release(priv, release_resource);
 	mlx5_vdpa_drain_cq(priv);
 	if (priv->lm_mr.addr)
 		mlx5_os_wrapped_mkey_destroy(&priv->lm_mr);
@@ -329,6 +333,12 @@ mlx5_vdpa_dev_close(int vid)
 	return ret;
 }
 
+static int
+mlx5_vdpa_dev_close(int vid)
+{
+	return _internal_mlx5_vdpa_dev_close(vid, false);
+}
+
 static int
 mlx5_vdpa_dev_config(int vid)
 {
@@ -624,8 +634,9 @@ mlx5_vdpa_config_get(struct mlx5_kvargs_ctrl *mkvlist,
 static int
 mlx5_vdpa_virtq_resource_prepare(struct mlx5_vdpa_priv *priv)
 {
+	uint32_t remaining_cnt = 0, err_cnt = 0, task_num = 0;
 	uint32_t max_queues = priv->queues * 2;
-	uint32_t index;
+	uint32_t index, thrd_idx, data[1];
 	struct mlx5_vdpa_virtq *virtq;
 
 	for (index = 0; index < priv->caps.max_num_virtio_queues * 2;
@@ -635,10 +646,48 @@ mlx5_vdpa_virtq_resource_prepare(struct mlx5_vdpa_priv *priv)
 	}
 	if (!priv->queues || !priv->queue_size)
 		return 0;
-	for (index = 0; index < max_queues; ++index)
-		if (mlx5_vdpa_virtq_single_resource_prepare(priv,
-			index))
+	if (priv->use_c_thread) {
+		uint32_t main_task_idx[max_queues];
+
+		for (index = 0; index < max_queues; ++index) {
+			virtq = &priv->virtqs[index];
+			thrd_idx = index % (conf_thread_mng.max_thrds + 1);
+			if (!thrd_idx) {
+				main_task_idx[task_num] = index;
+				task_num++;
+				continue;
+			}
+			thrd_idx = priv->last_c_thrd_idx + 1;
+			if (thrd_idx >= conf_thread_mng.max_thrds)
+				thrd_idx = 0;
+			priv->last_c_thrd_idx = thrd_idx;
+			data[0] = index;
+			if (mlx5_vdpa_task_add(priv, thrd_idx,
+				MLX5_VDPA_TASK_PREPARE_VIRTQ,
+				&remaining_cnt, &err_cnt,
+				(void **)&data, 1)) {
+				DRV_LOG(ERR, "Fail to add "
+				"task prepare virtq (%d).", index);
+				main_task_idx[task_num] = index;
+				task_num++;
+			}
+		}
+		for (index = 0; index < task_num; ++index)
+			if (mlx5_vdpa_virtq_single_resource_prepare(priv,
+				main_task_idx[index]))
+				goto error;
+		if (mlx5_vdpa_c_thread_wait_bulk_tasks_done(&remaining_cnt,
+			&err_cnt, 2000)) {
+			DRV_LOG(ERR,
+			"Failed to wait virt-queue prepare tasks ready.");
 			goto error;
+		}
+	} else {
+		for (index = 0; index < max_queues; ++index)
+			if (mlx5_vdpa_virtq_single_resource_prepare(priv,
+				index))
+				goto error;
+	}
 	if (mlx5_vdpa_is_modify_virtq_supported(priv))
 		if (mlx5_vdpa_steer_update(priv, true))
 			goto error;
@@ -855,7 +904,7 @@ static void
 mlx5_vdpa_dev_release(struct mlx5_vdpa_priv *priv)
 {
 	if (priv->state == MLX5_VDPA_STATE_CONFIGURED)
-		mlx5_vdpa_dev_close(priv->vid);
+		_internal_mlx5_vdpa_dev_close(priv->vid, true);
 	if (priv->use_c_thread)
 		mlx5_vdpa_wait_dev_close_tasks_done(priv);
 	mlx5_vdpa_release_dev_resources(priv);
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index 00700261ec..477f2fdde0 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -85,6 +85,7 @@ enum mlx5_vdpa_task_type {
 	MLX5_VDPA_TASK_SETUP_VIRTQ,
 	MLX5_VDPA_TASK_STOP_VIRTQ,
 	MLX5_VDPA_TASK_DEV_CLOSE_NOWAIT,
+	MLX5_VDPA_TASK_PREPARE_VIRTQ,
 };
 
 /* Generic task information and size must be multiple of 4B. */
@@ -355,8 +356,12 @@ void mlx5_vdpa_err_event_unset(struct mlx5_vdpa_priv *priv);
  *
  * @param[in] priv
  *   The vdpa driver private structure.
+ * @param[in] release_resource
+ *   The vdpa driver releases resources without preparing new ones.
  */
-void mlx5_vdpa_virtqs_release(struct mlx5_vdpa_priv *priv);
+void
+mlx5_vdpa_virtqs_release(struct mlx5_vdpa_priv *priv,
+		bool release_resource);
 
 /**
  * Cleanup cached resources of all virtqs.
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c b/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
index 07efa0cb16..97109206d2 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
@@ -196,7 +196,7 @@ mlx5_vdpa_c_thread_handle(void *arg)
 			pthread_mutex_lock(&priv->steer_update_lock);
 			mlx5_vdpa_steer_unset(priv);
 			pthread_mutex_unlock(&priv->steer_update_lock);
-			mlx5_vdpa_virtqs_release(priv);
+			mlx5_vdpa_virtqs_release(priv, false);
 			mlx5_vdpa_drain_cq(priv);
 			if (priv->lm_mr.addr)
 				mlx5_os_wrapped_mkey_destroy(
@@ -208,6 +208,18 @@ mlx5_vdpa_c_thread_handle(void *arg)
 				&priv->dev_close_progress, 0,
 				__ATOMIC_RELAXED);
 			break;
+		case MLX5_VDPA_TASK_PREPARE_VIRTQ:
+			ret = mlx5_vdpa_virtq_single_resource_prepare(
+					priv, task.idx);
+			if (ret) {
+				DRV_LOG(ERR,
+					"Failed to prepare virtq %d.",
+					task.idx);
+				__atomic_fetch_add(
+					task.err_cnt, 1,
+					__ATOMIC_RELAXED);
+			}
+			break;
 		default:
 			DRV_LOG(ERR, "Invalid vdpa task type %d.",
 				task.type);
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index 4a74738d9c..de6eab9bc6 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -113,6 +113,16 @@ mlx5_vdpa_virtq_unreg_intr_handle_all(struct mlx5_vdpa_priv *priv)
 	}
 }
 
+static void
+mlx5_vdpa_vq_destroy(struct mlx5_vdpa_virtq *virtq)
+{
+	/* Clean pre-created resource in dev removal only. */
+	claim_zero(mlx5_devx_cmd_destroy(virtq->virtq));
+	virtq->index = 0;
+	virtq->virtq = NULL;
+	virtq->configured = 0;
+}
+
 /* Release cached VQ resources. */
 void
 mlx5_vdpa_virtqs_cleanup(struct mlx5_vdpa_priv *priv)
@@ -125,6 +135,8 @@ mlx5_vdpa_virtqs_cleanup(struct mlx5_vdpa_priv *priv)
 		if (virtq->index != i)
 			continue;
 		pthread_mutex_lock(&virtq->virtq_lock);
+		if (virtq->virtq)
+			mlx5_vdpa_vq_destroy(virtq);
 		for (j = 0; j < RTE_DIM(virtq->umems); ++j) {
 			if (virtq->umems[j].obj) {
 				claim_zero(mlx5_glue->devx_umem_dereg
@@ -154,29 +166,34 @@ mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq)
 		if (ret)
 			DRV_LOG(WARNING, "Failed to stop virtq %d.",
 				virtq->index);
-		claim_zero(mlx5_devx_cmd_destroy(virtq->virtq));
-		virtq->index = 0;
-		virtq->virtq = NULL;
-		virtq->configured = 0;
+		mlx5_vdpa_vq_destroy(virtq);
 	}
 	virtq->notifier_state = MLX5_VDPA_NOTIFIER_STATE_DISABLED;
 }
 
 void
-mlx5_vdpa_virtqs_release(struct mlx5_vdpa_priv *priv)
+mlx5_vdpa_virtqs_release(struct mlx5_vdpa_priv *priv,
+		bool release_resource)
 {
 	struct mlx5_vdpa_virtq *virtq;
-	int i;
+	uint32_t i, max_virtq;
 
-	for (i = 0; i < priv->nr_virtqs; i++) {
+	max_virtq = (release_resource &&
+		(priv->queues * 2) > priv->nr_virtqs) ?
+		(priv->queues * 2) : priv->nr_virtqs;
+	for (i = 0; i < max_virtq; i++) {
 		virtq = &priv->virtqs[i];
 		pthread_mutex_lock(&virtq->virtq_lock);
 		mlx5_vdpa_virtq_unset(virtq);
-		if (i < (priv->queues * 2))
+		if (!release_resource && i < (priv->queues * 2))
 			mlx5_vdpa_virtq_single_resource_prepare(
					priv, i);
 		pthread_mutex_unlock(&virtq->virtq_lock);
 	}
+	if (!release_resource && priv->queues &&
+		mlx5_vdpa_is_modify_virtq_supported(priv))
+		if (mlx5_vdpa_steer_update(priv, true))
+			mlx5_vdpa_steer_unset(priv);
 	priv->features = 0;
 	priv->nr_virtqs = 0;
 }
@@ -733,7 +750,7 @@ mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv)
 	}
 	return 0;
 error:
-	mlx5_vdpa_virtqs_release(priv);
+	mlx5_vdpa_virtqs_release(priv, true);
 	return -1;
 }