From patchwork Tue Aug 17 13:44:21 2021
X-Patchwork-Submitter: Michael Baum
X-Patchwork-Id: 96992
X-Patchwork-Delegate: rasland@nvidia.com
From: Michael Baum
Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
Date: Tue, 17 Aug 2021 16:44:21 +0300
Message-ID: <20210817134441.1966618-2-michaelba@nvidia.com>
In-Reply-To: <20210817134441.1966618-1-michaelba@nvidia.com>
References: <20210817134441.1966618-1-michaelba@nvidia.com>
Subject: [dpdk-dev] [RFC 01/21] net/mlx5: fix shared device context creation error flow

In shared device context creation, there are two validations after the
MR btree memory allocation. When one of them fails, the MR btree memory
was not freed, which caused a memory leak.

Free it in the error flow.

Fixes: 632f0f19056f ("net/mlx5: manage shared counters in three-level table")
Cc: stable@dpdk.org

Signed-off-by: Michael Baum
---
 drivers/net/mlx5/mlx5.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index f84e061fe7..f0ec2d1279 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1254,6 +1254,8 @@ mlx5_alloc_shared_dev_ctx(const struct mlx5_dev_spawn_data *spawn,
 	MLX5_ASSERT(sh);
 	if (sh->cnt_id_tbl)
 		mlx5_l3t_destroy(sh->cnt_id_tbl);
+	if (sh->share_cache.cache.table)
+		mlx5_mr_btree_free(&sh->share_cache.cache);
 	if (sh->tis)
 		claim_zero(mlx5_devx_cmd_destroy(sh->tis));
 	if (sh->td)
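For context on the error flow being fixed: mlx5 teardown paths release resources in reverse order of creation, each release guarded by a check, so one error label can serve every failure point. A minimal, self-contained sketch of that pattern (hypothetical resources, not the driver's real objects):

    #include <errno.h>
    #include <stdlib.h>

    struct ctx {
        void *btree;   /* stands in for sh->share_cache.cache.table */
        void *cnt_tbl; /* stands in for sh->cnt_id_tbl */
    };

    static int
    ctx_create(struct ctx *c)
    {
        c->btree = calloc(1, 64);
        if (c->btree == NULL)
            goto error;
        c->cnt_tbl = calloc(1, 64);
        if (c->cnt_tbl == NULL)
            goto error;
        return 0;
    error:
        /* Release only what was actually created, newest first. */
        if (c->cnt_tbl != NULL)
            free(c->cnt_tbl);
        if (c->btree != NULL)
            free(c->btree);
        c->cnt_tbl = NULL;
        c->btree = NULL;
        return -ENOMEM;
    }

The bug fixed by this patch is the classic variant of this pattern where one resource (the MR btree) is simply missing from the error label.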
From patchwork Tue Aug 17 13:44:22 2021
X-Patchwork-Submitter: Michael Baum
X-Patchwork-Id: 96993
X-Patchwork-Delegate: rasland@nvidia.com
From: Michael Baum
Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
Date: Tue, 17 Aug 2021 16:44:22 +0300
Message-ID: <20210817134441.1966618-3-michaelba@nvidia.com>
In-Reply-To: <20210817134441.1966618-1-michaelba@nvidia.com>
References: <20210817134441.1966618-1-michaelba@nvidia.com>
Subject: [dpdk-dev] [RFC 02/21] net/mlx5: fix PCI probing error flow

In PCI probing, an internal probe function is called several times, once
per PF. When one of them fails, the previously probed PFs are not
destroyed, which may cause a memory leak. Destroy them.

Fixes: 08c2772fc747 ("net/mlx5: support list of representor PF")
Cc: stable@dpdk.org

Signed-off-by: Michael Baum
---
 drivers/net/mlx5/linux/mlx5_os.c | 13 ++++++++++++-
 drivers/net/mlx5/mlx5.c          |  2 +-
 drivers/net/mlx5/mlx5.h          |  1 +
 3 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 5f8766aa48..3d204f99f7 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -2700,9 +2700,20 @@ mlx5_os_pci_probe(struct rte_pci_device *pci_dev)

 	if (eth_da.nb_ports > 0) {
 		/* Iterate all port if devargs pf is range: "pf[0-1]vf[...]". */
-		for (p = 0; p < eth_da.nb_ports; p++)
+		for (p = 0; p < eth_da.nb_ports; p++) {
 			ret = mlx5_os_pci_probe_pf(pci_dev, &eth_da,
 						   eth_da.ports[p]);
+			if (ret)
+				break;
+		}
+		if (ret) {
+			DRV_LOG(ERR, "Probe of PCI device " PCI_PRI_FMT " "
+				"aborted due to probing failure of PF %u",
+				pci_dev->addr.domain, pci_dev->addr.bus,
+				pci_dev->addr.devid, pci_dev->addr.function,
+				eth_da.ports[p]);
+			mlx5_net_remove(&pci_dev->device);
+		}
 	} else {
 		ret = mlx5_os_pci_probe_pf(pci_dev, &eth_da, 0);
 	}

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index f0ec2d1279..02ea2e781e 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -2386,7 +2386,7 @@ mlx5_eth_find_next(uint16_t port_id, struct rte_device *odev)
  * @return
  *   0 on success, the function cannot fail.
  */
-static int
+int
 mlx5_net_remove(struct rte_device *dev)
 {
 	uint16_t port_id;

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index e02714e231..3581414b78 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1483,6 +1483,7 @@ int mlx5_udp_tunnel_port_add(struct rte_eth_dev *dev,
 			      struct rte_eth_udp_tunnel *udp_tunnel);
 uint16_t mlx5_eth_find_next(uint16_t port_id, struct rte_device *odev);
 int mlx5_dev_close(struct rte_eth_dev *dev);
+int mlx5_net_remove(struct rte_device *dev);
 bool mlx5_is_hpf(struct rte_eth_dev *dev);
 bool mlx5_is_sf_repr(struct rte_eth_dev *dev);
 void mlx5_age_event_prepare(struct mlx5_dev_ctx_shared *sh);
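The shape of the fix above, breaking out of the per-PF loop on the first failure and then rolling back whatever was probed before it, can be shown as a small standalone sketch (the probe/remove callbacks here are hypothetical stand-ins, not the real mlx5 entry points):

    #include <stdio.h>

    typedef int (*probe_pf_t)(unsigned int pf);   /* stands in for mlx5_os_pci_probe_pf() */
    typedef void (*remove_all_t)(void);           /* stands in for mlx5_net_remove() */

    static int
    probe_all_pfs(probe_pf_t probe, remove_all_t remove_all,
                  const unsigned int *pfs, unsigned int nb_pfs)
    {
        unsigned int p;
        int ret = 0;

        for (p = 0; p < nb_pfs; p++) {
            ret = probe(pfs[p]);
            if (ret)
                break;
        }
        if (ret) {
            /* Undo every PF probed before the failing one. */
            fprintf(stderr, "probing failure of PF %u, aborting\n", pfs[p]);
            remove_all();
        }
        return ret;
    }

Exporting mlx5_net_remove() (the mlx5.c/mlx5.h hunks above) is what makes the rollback callable from the OS-specific probe code.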
From patchwork Tue Aug 17 13:44:23 2021
X-Patchwork-Submitter: Michael Baum
X-Patchwork-Id: 96995
X-Patchwork-Delegate: rasland@nvidia.com
From: Michael Baum
Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
Date: Tue, 17 Aug 2021 16:44:23 +0300
Message-ID: <20210817134441.1966618-4-michaelba@nvidia.com>
In-Reply-To: <20210817134441.1966618-1-michaelba@nvidia.com>
References: <20210817134441.1966618-1-michaelba@nvidia.com>
Subject: [dpdk-dev] [RFC 03/21] common/mlx5: add context device structure

Add a context device structure which contains the ctx and pd of the
device. In addition, provide prepare and release functions for this
structure.

Signed-off-by: Michael Baum
---
 drivers/common/mlx5/linux/mlx5_common_os.c   | 144 ++++++++++++-
 drivers/common/mlx5/mlx5_common.c            | 166 +++++++++++++++
 drivers/common/mlx5/mlx5_common.h            |  48 +++++
 drivers/common/mlx5/version.map              |   3 +
 drivers/common/mlx5/windows/mlx5_common_os.c | 207 ++++++++++++++++++-
 5 files changed, 562 insertions(+), 6 deletions(-)

diff --git a/drivers/common/mlx5/linux/mlx5_common_os.c b/drivers/common/mlx5/linux/mlx5_common_os.c
index 9e0c823c97..6f78897390 100644
--- a/drivers/common/mlx5/linux/mlx5_common_os.c
+++ b/drivers/common/mlx5/linux/mlx5_common_os.c
@@ -23,6 +23,22 @@
 const struct mlx5_glue *mlx5_glue;
 #endif

+/* Environment variable to control the doorbell register mapping. */
+#define MLX5_SHUT_UP_BF "MLX5_SHUT_UP_BF"
+#if defined(RTE_ARCH_ARM64)
+#define MLX5_SHUT_UP_BF_DEFAULT "0"
+#else
+#define MLX5_SHUT_UP_BF_DEFAULT "1"
+#endif
+
+/* Default PMD specific parameter value. */
+#define MLX5_TXDB_UNSET (-1)
+
+/* MLX5_TX_DB_NC supported values. */
+#define MLX5_TXDB_CACHED 0
+#define MLX5_TXDB_NCACHED 1
+#define MLX5_TXDB_HEURISTIC 2
+
 int
 mlx5_get_pci_addr(const char *dev_path, struct rte_pci_addr *pci_addr)
 {
@@ -401,6 +417,127 @@ mlx5_glue_constructor(void)
 	mlx5_glue = NULL;
 }

+static int
+mlx5_config_doorbell_mapping_env(int dbnc)
+{
+	char *env;
+	int value;
+
+	MLX5_ASSERT(rte_eal_process_type() == RTE_PROC_PRIMARY);
+	/* Get environment variable to store. */
+	env = getenv(MLX5_SHUT_UP_BF);
+	value = env ? !!strcmp(env, "0") : MLX5_TXDB_UNSET;
+	if (dbnc == MLX5_TXDB_UNSET)
+		setenv(MLX5_SHUT_UP_BF, MLX5_SHUT_UP_BF_DEFAULT, 1);
+	else
+		setenv(MLX5_SHUT_UP_BF,
+		       dbnc == MLX5_TXDB_NCACHED ? "1" : "0", 1);
+	return value;
+}
+
+static void
+mlx5_restore_doorbell_mapping_env(int value)
+{
+	MLX5_ASSERT(rte_eal_process_type() == RTE_PROC_PRIMARY);
+	/* Restore the original environment variable state. */
+	if (value == MLX5_TXDB_UNSET)
+		unsetenv(MLX5_SHUT_UP_BF);
+	else
+		setenv(MLX5_SHUT_UP_BF, value ? "1" : "0", 1);
+}
+
+/**
+ * Function API to open IB device using DevX.
+ *
+ * This function calls the Linux glue APIs to open a device.
+ *
+ * @param dev_ctx
+ *   Pointer to the context device data structure.
+ * @param dev
+ *   Pointer to the generic device.
+ * @param dbnc
+ *   Device argument help configure the environment variable.
+ * @param classes
+ *   Chosen classes come from device arguments.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_os_devx_open_device(struct mlx5_dev_ctx *dev_ctx, struct rte_device *dev,
+			 int dbnc, uint32_t classes)
+{
+	struct ibv_device *ibv;
+	struct ibv_context *ctx = NULL;
+	int dbmap_env;
+
+	ibv = mlx5_os_get_ibv_dev(dev);
+	if (!ibv)
+		return -rte_errno;
+	DRV_LOG(INFO, "Dev information matches for device \"%s\".", ibv->name);
+	/*
+	 * Configure environment variable "MLX5_BF_SHUT_UP" before the device
+	 * creation. The rdma_core library checks the variable at device
+	 * creation and stores the result internally.
+	 */
+	dbmap_env = mlx5_config_doorbell_mapping_env(dbnc);
+	/* Try to open IB device with DV. */
+	errno = 0;
+	ctx = mlx5_glue->dv_open_device(ibv);
+	/*
+	 * The environment variable is not needed anymore, all device creation
+	 * attempts are completed.
+	 */
+	mlx5_restore_doorbell_mapping_env(dbmap_env);
+	if (ctx == NULL && classes != MLX5_CLASS_ETH) {
+		DRV_LOG(ERR, "Failed to open IB device \"%s\".", ibv->name);
+		rte_errno = errno ? errno : ENODEV;
+		return -rte_errno;
+	}
+	dev_ctx->ctx = ctx;
+	return 0;
+}
+
+/**
+ * Allocate Protection Domain object and extract its pdn using DV API.
+ *
+ * @param[out] dev_ctx
+ *   Pointer to the context device data structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_os_pd_create(struct mlx5_dev_ctx *dev_ctx)
+{
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+	struct mlx5dv_obj obj;
+	struct mlx5dv_pd pd_info;
+	int ret;
+
+	dev_ctx->pd = mlx5_glue->alloc_pd(dev_ctx->ctx);
+	if (dev_ctx->pd == NULL) {
+		DRV_LOG(ERR, "Failed to allocate PD.");
+		return errno ? -errno : -ENOMEM;
+	}
+	obj.pd.in = dev_ctx->pd;
+	obj.pd.out = &pd_info;
+	ret = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_PD);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Fail to get PD object info.");
+		mlx5_glue->dealloc_pd(dev_ctx->pd);
+		dev_ctx->pd = NULL;
+		return -errno;
+	}
+	dev_ctx->pdn = pd_info.pdn;
+	return 0;
+#else
+	(void)dev_ctx;
+	DRV_LOG(ERR, "Cannot get pdn - no DV support.");
+	return -ENOTSUP;
+#endif /* HAVE_IBV_FLOW_DV_SUPPORT */
+}
+
 struct ibv_device *
 mlx5_os_get_ibv_device(const struct rte_pci_addr *addr)
 {
@@ -423,8 +560,13 @@ mlx5_os_get_ibv_device(const struct rte_pci_addr *addr)
 		ibv_match = ibv_list[n];
 		break;
 	}
-	if (ibv_match == NULL)
+	if (ibv_match == NULL) {
+		DRV_LOG(WARNING,
+			"No Verbs device matches PCI device " PCI_PRI_FMT ","
+			" are kernel drivers loaded?",
+			addr->domain, addr->bus, addr->devid, addr->function);
 		rte_errno = ENOENT;
+	}
 	mlx5_glue->free_device_list(ibv_list);
 	return ibv_match;
 }

diff --git a/drivers/common/mlx5/mlx5_common.c b/drivers/common/mlx5/mlx5_common.c
index 459cf4bcc4..be3d0f2627 100644
--- a/drivers/common/mlx5/mlx5_common.c
+++ b/drivers/common/mlx5/mlx5_common.c
@@ -41,6 +41,20 @@ static inline void mlx5_cpu_id(unsigned int level,
 }
 #endif

+/*
+ * Device parameter to force doorbell register mapping to non-cached region
+ * eliminating the extra write memory barrier.
+ */
+#define MLX5_TX_DB_NC "tx_db_nc"
+
+/* Default PMD specific parameter value. */
+#define MLX5_TXDB_UNSET (-1)
+
+/* MLX5_TX_DB_NC supported values. */
+#define MLX5_TXDB_CACHED 0
+#define MLX5_TXDB_NCACHED 1
+#define MLX5_TXDB_HEURISTIC 2
+
 RTE_LOG_REGISTER_DEFAULT(mlx5_common_logtype, NOTICE)

 /* Head of list of drivers. */
@@ -88,6 +102,83 @@ driver_get(uint32_t class)
 	return NULL;
 }

+/**
+ * Verify and store value for device argument.
+ *
+ * @param[in] key
+ *   Key argument to verify.
+ * @param[in] val
+ *   Value associated with key.
+ * @param opaque
+ *   User data.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_common_args_check(const char *key, const char *val, void *opaque)
+{
+	int *dbnc = opaque;
+	signed long tmp;
+
+	errno = 0;
+	tmp = strtol(val, NULL, 0);
+	if (errno) {
+		rte_errno = errno;
+		DRV_LOG(WARNING, "%s: \"%s\" is not a valid integer", key, val);
+		return -rte_errno;
+	}
+	if (strcmp(MLX5_TX_DB_NC, key) == 0) {
+		if (tmp != MLX5_TXDB_CACHED &&
+		    tmp != MLX5_TXDB_NCACHED &&
+		    tmp != MLX5_TXDB_HEURISTIC) {
+			DRV_LOG(ERR, "Invalid Tx doorbell mapping parameter.");
+			rte_errno = EINVAL;
+			return -rte_errno;
+		}
+		*dbnc = tmp;
+	}
+	return 0;
+}
+
+/**
+ * Parse Tx doorbell mapping parameter.
+ *
+ * @param devargs
+ *   Device arguments structure.
+ * @param dbnc
+ *   Pointer to get into doorbell mapping parameter.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_parse_db_map_arg(struct rte_devargs *devargs, int *dbnc)
+{
+	struct rte_kvargs *kvlist;
+	int ret = 0;
+
+	if (devargs == NULL)
+		return 0;
+	kvlist = rte_kvargs_parse(devargs->args, NULL);
+	if (kvlist == NULL) {
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+	if (rte_kvargs_count(kvlist, MLX5_TX_DB_NC)) {
+		ret = rte_kvargs_process(kvlist, MLX5_TX_DB_NC,
+					 mlx5_common_args_check, dbnc);
+		if (ret) {
+			rte_errno = EINVAL;
+			rte_kvargs_free(kvlist);
+			return -rte_errno;
+		}
+	}
+	rte_kvargs_free(kvlist);
+	return 0;
+}
+
+
 static int
 devargs_class_handler(__rte_unused const char *key,
 		      const char *class_names, void *opaque)
@@ -219,6 +310,81 @@ mlx5_dev_to_pci_str(const struct rte_device *dev, char *addr, size_t size)
 #endif
 }

+/**
+ * Uninitialize context device and release all its resources.
+ *
+ * @param dev_ctx
+ *   Pointer to the context device data structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+void
+mlx5_dev_ctx_release(struct mlx5_dev_ctx *dev_ctx)
+{
+	if (dev_ctx->pd != NULL) {
+		claim_zero(mlx5_os_dealloc_pd(dev_ctx->pd));
+		dev_ctx->pd = NULL;
+	}
+	if (dev_ctx->ctx != NULL) {
+		claim_zero(mlx5_glue->close_device(dev_ctx->ctx));
+		dev_ctx->ctx = NULL;
+	}
+}
+
+/**
+ * Initialize context device and allocate all its resources.
+ *
+ * @param dev_ctx
+ *   Pointer to the context device data structure.
+ * @param dev
+ *   Pointer to mlx5 device structure.
+ * @param classes_loaded
+ *   Chosen classes come from device arguments.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_dev_ctx_prepare(struct mlx5_dev_ctx *dev_ctx, struct rte_device *dev,
+		     uint32_t classes_loaded)
+{
+	int dbnc = MLX5_TXDB_UNSET;
+	int ret;
+
+	dev_ctx->numa_node = dev->numa_node;
+	/*
+	 * Parse Tx doorbell mapping parameter. It helps to configure
+	 * environment variable "MLX5_BF_SHUT_UP" before the device creation.
+	 */
+	ret = mlx5_parse_db_map_arg(dev->devargs, &dbnc);
+	if (ret < 0)
+		return ret;
+	/*
+	 * Open device using DevX.
+	 * If DevX isn't supported, ctx field remains NULL.
+	 */
+	ret = mlx5_os_devx_open_device(dev_ctx, dev, dbnc, classes_loaded);
+	if (ret < 0)
+		return ret;
+	/*
+	 * When DevX is not supported and the classes selected by the user can
+	 * also work with Verbs, the mlx5_os_devx_open_device function returns
+	 * 0 although no device has been created at this time.
+	 * Later they will try to create again in Verbs.
+	 */
+	if (dev_ctx->ctx == NULL)
+		return 0;
+	/* Allocate Protection Domain object and extract its pdn. */
+	ret = mlx5_os_pd_create(dev_ctx);
+	if (ret)
+		goto error;
+	return ret;
+error:
+	mlx5_dev_ctx_release(dev_ctx);
+	return ret;
+}
+
 static void
 dev_release(struct mlx5_common_device *dev)
 {

diff --git a/drivers/common/mlx5/mlx5_common.h b/drivers/common/mlx5/mlx5_common.h
index a772371200..609953b70e 100644
--- a/drivers/common/mlx5/mlx5_common.h
+++ b/drivers/common/mlx5/mlx5_common.h
@@ -324,6 +324,46 @@ void mlx5_common_init(void);
  * from devargs, locating target RDMA device and probing with it.
  */

+/**
+ * Shared device context structure.
+ * Contains HW device objects which belong to same device with multiple drivers.
+ */
+struct mlx5_dev_ctx {
+	void *ctx; /* Verbs/DV/DevX context. */
+	void *pd; /* Protection Domain. */
+	uint32_t pdn; /* Protection Domain Number. */
+	int numa_node; /* Numa node of device. */
+};
+
+/**
+ * Uninitialize context device and release all its resources.
+ *
+ * @param dev_ctx
+ *   Pointer to the context device data structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_internal
+void mlx5_dev_ctx_release(struct mlx5_dev_ctx *dev_ctx);
+
+/**
+ * Initialize context device and allocate all its resources.
+ *
+ * @param dev_ctx
+ *   Pointer to the context device data structure.
+ * @param dev
+ *   Pointer to mlx5 device structure.
+ * @param classes_loaded
+ *   Chosen classes come from device arguments.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_internal
+int mlx5_dev_ctx_prepare(struct mlx5_dev_ctx *dev_ctx, struct rte_device *dev,
+			 uint32_t classes_loaded);
+
 /**
  * Initialization function for the driver called during device probing.
  */
@@ -419,4 +459,12 @@
 __rte_internal
 bool
 mlx5_dev_is_pci(const struct rte_device *dev);

+/* mlx5_common_os.c */
+
+int mlx5_os_devx_open_device(struct mlx5_dev_ctx *dev_ctx,
+			     struct rte_device *dev, int dbnc,
+			     uint32_t classes);
+int mlx5_os_pd_create(struct mlx5_dev_ctx *dev_ctx);
+
+
 #endif /* RTE_PMD_MLX5_COMMON_H_ */

diff --git a/drivers/common/mlx5/version.map b/drivers/common/mlx5/version.map
index e5cb6b7060..6a88105d02 100644
--- a/drivers/common/mlx5/version.map
+++ b/drivers/common/mlx5/version.map
@@ -9,6 +9,9 @@ INTERNAL {

 	mlx5_common_init;

+	mlx5_dev_ctx_release;
+	mlx5_dev_ctx_prepare;
+
 	mlx5_common_verbs_reg_mr; # WINDOWS_NO_EXPORT
 	mlx5_common_verbs_dereg_mr; # WINDOWS_NO_EXPORT

diff --git a/drivers/common/mlx5/windows/mlx5_common_os.c b/drivers/common/mlx5/windows/mlx5_common_os.c
index 5031bdca26..5d178b0452 100644
--- a/drivers/common/mlx5/windows/mlx5_common_os.c
+++ b/drivers/common/mlx5/windows/mlx5_common_os.c
@@ -7,6 +7,7 @@
 #include 
 #include 
+#include 
 #include 
 #include 

@@ -17,7 +18,7 @@
 #include "mlx5_malloc.h"

 /**
- * Initialization routine for run-time dependency on external lib
+ * Initialization routine for run-time dependency on external lib.
  */
 void
 mlx5_glue_constructor(void)
@@ -25,7 +26,7 @@ mlx5_glue_constructor(void)
 }

 /**
- * Allocate PD. Given a devx context object
+ * Allocate PD. Given a DevX context object
  * return an mlx5-pd object.
  *
  * @param[in] ctx
@@ -37,8 +38,8 @@ mlx5_glue_constructor(void)
 void *
 mlx5_os_alloc_pd(void *ctx)
 {
-	struct mlx5_pd *ppd = mlx5_malloc(MLX5_MEM_ZERO,
-		sizeof(struct mlx5_pd), 0, SOCKET_ID_ANY);
+	struct mlx5_pd *ppd = mlx5_malloc(MLX5_MEM_ZERO, sizeof(struct mlx5_pd),
+					  0, SOCKET_ID_ANY);
 	if (!ppd)
 		return NULL;
@@ -60,7 +61,7 @@ mlx5_os_alloc_pd(void *ctx)
  *   Pointer to mlx5_pd.
  *
  * @return
- *   Zero if pd is released successfully, negative number otherwise.
+ *   Zero if pd is released successfully, negative number otherwise.
  */
 int
 mlx5_os_dealloc_pd(void *pd)
@@ -72,6 +73,202 @@ mlx5_os_dealloc_pd(void *pd)
 	return 0;
 }

+/**
+ * Detect if a devx_device_bdf object has identical DBDF values to the
+ * rte_pci_addr found in bus/pci probing.
+ *
+ * @param[in] devx_bdf
+ *   Pointer to the devx_device_bdf structure.
+ * @param[in] addr
+ *   Pointer to the rte_pci_addr structure.
+ *
+ * @return
+ *   1 on Device match, 0 on mismatch.
+ */
+static int
+mlx5_match_devx_bdf_to_addr(struct devx_device_bdf *devx_bdf,
+			    struct rte_pci_addr *addr)
+{
+	if (addr->domain != (devx_bdf->bus_id >> 8) ||
+	    addr->bus != (devx_bdf->bus_id & 0xff) ||
+	    addr->devid != devx_bdf->dev_id ||
+	    addr->function != devx_bdf->fnc_id) {
+		return 0;
+	}
+	return 1;
+}
+
+/**
+ * Detect if a devx_device_bdf object matches the rte_pci_addr
+ * found in bus/pci probing.
+ * Compare both the Native/PF BDF and the raw_bdf representing a VF BDF.
+ *
+ * @param[in] devx_bdf
+ *   Pointer to the devx_device_bdf structure.
+ * @param[in] addr
+ *   Pointer to the rte_pci_addr structure.
+ *
+ * @return
+ *   1 on Device match, 0 on mismatch, rte_errno code on failure.
+ */
+static int
+mlx5_match_devx_devices_to_addr(struct devx_device_bdf *devx_bdf,
+				struct rte_pci_addr *addr)
+{
+	int err;
+	struct devx_device mlx5_dev;
+
+	if (mlx5_match_devx_bdf_to_addr(devx_bdf, addr))
+		return 1;
+	/*
+	 * Didn't match on Native/PF BDF, could still match a VF BDF,
+	 * check it next.
+	 */
+	err = mlx5_glue->query_device(devx_bdf, &mlx5_dev);
+	if (err) {
+		DRV_LOG(ERR, "query_device failed");
+		rte_errno = err;
+		return rte_errno;
+	}
+	if (mlx5_match_devx_bdf_to_addr(&mlx5_dev.raw_bdf, addr))
+		return 1;
+	return 0;
+}
+
+/**
+ * Look for DevX device that match to given rte_device.
+ *
+ * @param dev
+ *   Pointer to the generic device.
+ *
+ * @return
+ *   A device match on success, NULL otherwise and rte_errno is set.
+ */
+static struct devx_device_bdf *
+mlx5_os_get_devx_device(struct rte_device *dev)
+{
+	int n;
+	struct devx_device_bdf *devx_list;
+	struct devx_device_bdf *orig_devx_list;
+	struct devx_device_bdf *devx_match = NULL;
+	struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev);
+	struct rte_pci_addr *addr = &pci_dev->addr;
+
+	errno = 0;
+	devx_list = mlx5_glue->get_device_list(&n);
+	if (devx_list == NULL) {
+		rte_errno = errno ? errno : ENOSYS;
+		DRV_LOG(ERR, "Cannot list devices, is DevX enabled?");
+		return NULL;
+	}
+	orig_devx_list = devx_list;
+	while (n-- > 0) {
+		int ret = mlx5_match_devx_devices_to_addr(devx_list, addr);
+		if (!ret) {
+			devx_list++;
+			continue;
+		}
+		if (ret != 1) {
+			rte_errno = ret;
+			goto exit;
+		}
+		devx_match = devx_list;
+		break;
+	}
+	if (devx_match == NULL) {
+		/* No device matches, just complain and bail out. */
+		DRV_LOG(WARNING,
+			"No DevX device matches PCI device " PCI_PRI_FMT ","
+			" is DevX Configured?",
+			addr->domain, addr->bus, addr->devid, addr->function);
+		rte_errno = ENOENT;
+	}
+exit:
+	mlx5_glue->free_device_list(orig_devx_list);
+	return devx_match;
+}
+
+/**
+ * Function API open device under Windows.
+ *
+ * This function calls the Windows glue APIs to open a device.
+ *
+ * @param[out] dev_ctx
+ *   Pointer to the context device data structure.
+ * @param dev
+ *   Pointer to the generic device.
+ * @param dbnc
+ *   Device argument help configure the environment variable.
+ * @param classes
+ *   Chosen classes come from device arguments.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_os_devx_open_device(struct mlx5_dev_ctx *dev_ctx, struct rte_device *dev,
+			 int dbnc, uint32_t classes)
+{
+	RTE_SET_USED(dbnc);
+	struct devx_device_bdf *devx_bdf_dev = NULL;
+	struct mlx5_context *mlx5_ctx;
+
+	if (classes != MLX5_CLASS_ETH) {
+		DRV_LOG(WARNING,
+			"The chosen classes are not supported on Windows.");
+		rte_errno = ENOTSUP;
+		return -rte_errno;
+	}
+	devx_bdf_dev = mlx5_os_get_devx_device(dev);
+	if (devx_bdf_dev == NULL)
+		return -rte_errno;
+	/* Try to open DevX device with DV. */
+	mlx5_ctx = mlx5_glue->open_device(devx_bdf_dev);
+	if (mlx5_ctx) {
+		DRV_LOG(ERR, "Failed to open DevX device.");
+		rte_errno = errno;
+		return -rte_errno;
+	}
+	if (mlx5_glue->query_device(devx_bdf_dev, &mlx5_ctx->mlx5_dev)) {
+		DRV_LOG(ERR, "Failed to query device context fields.");
+		claim_zero(mlx5_glue->close_device(mlx5_ctx));
+		rte_errno = errno;
+		return -rte_errno;
+	}
+	dev_ctx->ctx = mlx5_ctx;
+	return 0;
+}
+
+/**
+ * Allocate Protection Domain object and extract its pdn using DV API.
+ *
+ * @param[out] dev_ctx
+ *   Pointer to the context device data structure.
+ *
+ * @return
+ *   0 on success, a negative value otherwise.
+ */
+int
+mlx5_os_pd_create(struct mlx5_dev_ctx *dev_ctx)
+{
+	struct mlx5_pd *pd;
+
+	pd = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*pd), 0, SOCKET_ID_ANY);
+	if (!pd)
+		return -1;
+	struct mlx5_devx_obj *obj = mlx5_devx_cmd_alloc_pd(dev_ctx->ctx);
+	if (!obj) {
+		mlx5_free(pd);
+		return -1;
+	}
+	pd->obj = obj;
+	pd->pdn = obj->id;
+	pd->devx_ctx = dev_ctx->ctx;
+	dev_ctx->pd = pd;
+	dev_ctx->pdn = pd->pdn;
+	return 0;
+}
+
 /**
  * Register umem.
  *
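Taken together, the API added in this patch is meant to be called from a driver probe path roughly as follows. This is only a usage sketch based on the declarations above; the include list and the choice of MLX5_CLASS_COMPRESS as the class argument are assumptions for illustration, not part of the patch:

    #include <errno.h>
    #include <rte_dev.h>
    #include <rte_memory.h>

    #include <mlx5_common.h>
    #include <mlx5_malloc.h>

    static int
    example_probe(struct rte_device *dev)
    {
        struct mlx5_dev_ctx *dev_ctx;
        int ret;

        dev_ctx = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*dev_ctx),
                              RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
        if (dev_ctx == NULL)
            return -ENOMEM;
        /* Opens the Verbs/DevX context and allocates the PD + pdn. */
        ret = mlx5_dev_ctx_prepare(dev_ctx, dev, MLX5_CLASS_COMPRESS);
        if (ret < 0) {
            mlx5_free(dev_ctx);
            return ret;
        }
        /* ... create class-specific objects from dev_ctx->ctx / pd / pdn ... */

        /* Teardown mirrors the setup. */
        mlx5_dev_ctx_release(dev_ctx);
        mlx5_free(dev_ctx);
        return 0;
    }

The point of the structure is that the ctx/pd/pdn triple is created once by the common code and then shared by whichever class drivers (net, compress, crypto, regex, vDPA) attach to the same device; the following patches in the series convert the drivers to this usage.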
From patchwork Tue Aug 17 13:44:24 2021
X-Patchwork-Submitter: Michael Baum
X-Patchwork-Id: 96994
X-Patchwork-Delegate: rasland@nvidia.com
From: Michael Baum
Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
Date: Tue, 17 Aug 2021 16:44:24 +0300
Message-ID: <20210817134441.1966618-5-michaelba@nvidia.com>
In-Reply-To: <20210817134441.1966618-1-michaelba@nvidia.com>
References: <20210817134441.1966618-1-michaelba@nvidia.com>
DM6PR12MB4926 Subject: [dpdk-dev] [RFC 04/21] compress/mlx5: use context device structure X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Use common context device structure as a priv field. Signed-off-by: Michael Baum --- drivers/compress/mlx5/mlx5_compress.c | 110 ++++++++++---------------- 1 file changed, 42 insertions(+), 68 deletions(-) diff --git a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c index 883e720ec1..e906ddb066 100644 --- a/drivers/compress/mlx5/mlx5_compress.c +++ b/drivers/compress/mlx5/mlx5_compress.c @@ -35,14 +35,12 @@ struct mlx5_compress_xform { struct mlx5_compress_priv { TAILQ_ENTRY(mlx5_compress_priv) next; - struct ibv_context *ctx; /* Device context. */ + struct mlx5_dev_ctx *dev_ctx; /* Device context. */ struct rte_compressdev *cdev; void *uar; - uint32_t pdn; /* Protection Domain number. */ uint8_t min_block_size; uint8_t sq_ts_format; /* Whether SQ supports timestamp formats. */ /* Minimum huffman block size supported by the device. */ - struct ibv_pd *pd; struct rte_compressdev_config dev_config; LIST_HEAD(xform_list, mlx5_compress_xform) xform_list; rte_spinlock_t xform_sl; @@ -185,7 +183,7 @@ mlx5_compress_qp_setup(struct rte_compressdev *dev, uint16_t qp_id, struct mlx5_devx_create_sq_attr sq_attr = { .user_index = qp_id, .wq_attr = (struct mlx5_devx_wq_attr){ - .pd = priv->pdn, + .pd = priv->dev_ctx->pdn, .uar_page = mlx5_os_get_devx_uar_page_id(priv->uar), }, }; @@ -228,24 +226,24 @@ mlx5_compress_qp_setup(struct rte_compressdev *dev, uint16_t qp_id, qp->priv = priv; qp->ops = (struct rte_comp_op **)RTE_ALIGN((uintptr_t)(qp + 1), RTE_CACHE_LINE_SIZE); - if (mlx5_common_verbs_reg_mr(priv->pd, opaq_buf, qp->entries_n * - sizeof(struct mlx5_gga_compress_opaque), + if (mlx5_common_verbs_reg_mr(priv->dev_ctx->pd, opaq_buf, + qp->entries_n * sizeof(struct mlx5_gga_compress_opaque), &qp->opaque_mr) != 0) { rte_free(opaq_buf); DRV_LOG(ERR, "Failed to register opaque MR."); rte_errno = ENOMEM; goto err; } - ret = mlx5_devx_cq_create(priv->ctx, &qp->cq, log_ops_n, &cq_attr, - socket_id); + ret = mlx5_devx_cq_create(priv->dev_ctx->ctx, &qp->cq, log_ops_n, + &cq_attr, socket_id); if (ret != 0) { DRV_LOG(ERR, "Failed to create CQ."); goto err; } sq_attr.cqn = qp->cq.cq->id; sq_attr.ts_format = mlx5_ts_format_conv(priv->sq_ts_format); - ret = mlx5_devx_sq_create(priv->ctx, &qp->sq, log_ops_n, &sq_attr, - socket_id); + ret = mlx5_devx_sq_create(priv->dev_ctx->ctx, &qp->sq, log_ops_n, + &sq_attr, socket_id); if (ret != 0) { DRV_LOG(ERR, "Failed to create SQ."); goto err; @@ -465,7 +463,8 @@ mlx5_compress_addr2mr(struct mlx5_compress_priv *priv, uintptr_t addr, if (likely(lkey != UINT32_MAX)) return lkey; /* Take slower bottom-half on miss. 
*/ - return mlx5_mr_addr2mr_bh(priv->pd, 0, &priv->mr_scache, mr_ctrl, addr, + return mlx5_mr_addr2mr_bh(priv->dev_ctx->pd, 0, &priv->mr_scache, + mr_ctrl, addr, !!(ol_flags & EXT_ATTACHED_MBUF)); } @@ -689,57 +688,19 @@ mlx5_compress_dequeue_burst(void *queue_pair, struct rte_comp_op **ops, static void mlx5_compress_hw_global_release(struct mlx5_compress_priv *priv) { - if (priv->pd != NULL) { - claim_zero(mlx5_glue->dealloc_pd(priv->pd)); - priv->pd = NULL; - } if (priv->uar != NULL) { mlx5_glue->devx_free_uar(priv->uar); priv->uar = NULL; } } -static int -mlx5_compress_pd_create(struct mlx5_compress_priv *priv) -{ -#ifdef HAVE_IBV_FLOW_DV_SUPPORT - struct mlx5dv_obj obj; - struct mlx5dv_pd pd_info; - int ret; - - priv->pd = mlx5_glue->alloc_pd(priv->ctx); - if (priv->pd == NULL) { - DRV_LOG(ERR, "Failed to allocate PD."); - return errno ? -errno : -ENOMEM; - } - obj.pd.in = priv->pd; - obj.pd.out = &pd_info; - ret = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_PD); - if (ret != 0) { - DRV_LOG(ERR, "Fail to get PD object info."); - mlx5_glue->dealloc_pd(priv->pd); - priv->pd = NULL; - return -errno; - } - priv->pdn = pd_info.pdn; - return 0; -#else - (void)priv; - DRV_LOG(ERR, "Cannot get pdn - no DV support."); - return -ENOTSUP; -#endif /* HAVE_IBV_FLOW_DV_SUPPORT */ -} - static int mlx5_compress_hw_global_prepare(struct mlx5_compress_priv *priv) { - if (mlx5_compress_pd_create(priv) != 0) - return -1; - priv->uar = mlx5_devx_alloc_uar(priv->ctx, -1); + priv->uar = mlx5_devx_alloc_uar(priv->dev_ctx->ctx, -1); if (priv->uar == NULL || mlx5_os_get_devx_uar_reg_addr(priv->uar) == NULL) { rte_errno = errno; - claim_zero(mlx5_glue->dealloc_pd(priv->pd)); DRV_LOG(ERR, "Failed to allocate UAR."); return -1; } @@ -775,7 +736,8 @@ mlx5_compress_mr_mem_event_cb(enum rte_mem_event event_type, const void *addr, /* Iterate all the existing mlx5 devices. 
*/ TAILQ_FOREACH(priv, &mlx5_compress_priv_list, next) mlx5_free_mr_by_addr(&priv->mr_scache, - priv->ctx->device->name, + mlx5_os_get_ctx_device_name + (priv->dev_ctx->ctx), addr, len); pthread_mutex_unlock(&priv_list_lock); break; @@ -788,60 +750,70 @@ mlx5_compress_mr_mem_event_cb(enum rte_mem_event event_type, const void *addr, static int mlx5_compress_dev_probe(struct rte_device *dev) { - struct ibv_device *ibv; struct rte_compressdev *cdev; - struct ibv_context *ctx; + struct mlx5_dev_ctx *dev_ctx; struct mlx5_compress_priv *priv; struct mlx5_hca_attr att = { 0 }; struct rte_compressdev_pmd_init_params init_params = { .name = "", .socket_id = dev->numa_node, }; + const char *ibdev_name; + int ret; if (rte_eal_process_type() != RTE_PROC_PRIMARY) { DRV_LOG(ERR, "Non-primary process type is not supported."); rte_errno = ENOTSUP; return -rte_errno; } - ibv = mlx5_os_get_ibv_dev(dev); - if (ibv == NULL) + dev_ctx = mlx5_malloc(MLX5_MEM_ZERO, sizeof(struct mlx5_dev_ctx), + RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY); + if (dev_ctx == NULL) { + DRV_LOG(ERR, "Device context allocation failure."); + rte_errno = ENOMEM; return -rte_errno; - ctx = mlx5_glue->dv_open_device(ibv); - if (ctx == NULL) { - DRV_LOG(ERR, "Failed to open IB device \"%s\".", ibv->name); + } + ret = mlx5_dev_ctx_prepare(dev_ctx, dev, MLX5_CLASS_COMPRESS); + if (ret < 0) { + DRV_LOG(ERR, "Failed to create device context."); + mlx5_free(dev_ctx); rte_errno = ENODEV; return -rte_errno; } - if (mlx5_devx_cmd_query_hca_attr(ctx, &att) != 0 || + ibdev_name = mlx5_os_get_ctx_device_name(dev_ctx->ctx); + if (mlx5_devx_cmd_query_hca_attr(dev_ctx->ctx, &att) != 0 || att.mmo_compress_en == 0 || att.mmo_decompress_en == 0 || att.mmo_dma_en == 0) { DRV_LOG(ERR, "Not enough capabilities to support compress " "operations, maybe old FW/OFED version?"); - claim_zero(mlx5_glue->close_device(ctx)); + mlx5_dev_ctx_release(dev_ctx); + mlx5_free(dev_ctx); rte_errno = ENOTSUP; return -ENOTSUP; } - cdev = rte_compressdev_pmd_create(ibv->name, dev, + cdev = rte_compressdev_pmd_create(ibdev_name, dev, sizeof(*priv), &init_params); if (cdev == NULL) { - DRV_LOG(ERR, "Failed to create device \"%s\".", ibv->name); - claim_zero(mlx5_glue->close_device(ctx)); + DRV_LOG(ERR, "Failed to create device \"%s\".", ibdev_name); + mlx5_dev_ctx_release(dev_ctx); + mlx5_free(dev_ctx); return -ENODEV; } DRV_LOG(INFO, - "Compress device %s was created successfully.", ibv->name); + "Compress device %s was created successfully.", ibdev_name); cdev->dev_ops = &mlx5_compress_ops; cdev->dequeue_burst = mlx5_compress_dequeue_burst; cdev->enqueue_burst = mlx5_compress_enqueue_burst; cdev->feature_flags = RTE_COMPDEV_FF_HW_ACCELERATED; priv = cdev->data->dev_private; - priv->ctx = ctx; + priv->dev_ctx = dev_ctx; priv->cdev = cdev; priv->min_block_size = att.compress_min_block_size; priv->sq_ts_format = att.sq_ts_format; if (mlx5_compress_hw_global_prepare(priv) != 0) { rte_compressdev_pmd_destroy(priv->cdev); - claim_zero(mlx5_glue->close_device(priv->ctx)); + mlx5_dev_ctx_release(priv->dev_ctx); + mlx5_free(priv->dev_ctx); return -1; } if (mlx5_mr_btree_init(&priv->mr_scache.cache, @@ -849,7 +821,8 @@ mlx5_compress_dev_probe(struct rte_device *dev) DRV_LOG(ERR, "Failed to allocate shared cache MR memory."); mlx5_compress_hw_global_release(priv); rte_compressdev_pmd_destroy(priv->cdev); - claim_zero(mlx5_glue->close_device(priv->ctx)); + mlx5_dev_ctx_release(priv->dev_ctx); + mlx5_free(priv->dev_ctx); rte_errno = ENOMEM; return -rte_errno; } @@ -885,7 +858,8 @@ 
mlx5_compress_dev_remove(struct rte_device *dev) mlx5_mr_release_cache(&priv->mr_scache); mlx5_compress_hw_global_release(priv); rte_compressdev_pmd_destroy(priv->cdev); - claim_zero(mlx5_glue->close_device(priv->ctx)); + mlx5_dev_ctx_release(priv->dev_ctx); + mlx5_free(priv->dev_ctx); } return 0; }
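The probe and remove hunks above reduce to a single prepare/release pattern around the shared device context. A minimal sketch of that lifecycle, assuming the mlx5_dev_ctx fields (ctx, pd, pdn) and the prepare/release helpers behave as the diff suggests; the function name below is illustrative, error handling is trimmed, and this is not a standalone build:

static int
compress_dev_ctx_lifecycle_sketch(struct rte_device *dev)
{
	struct mlx5_dev_ctx *dev_ctx;

	/* Allocated by the PMD, filled in by the common code. */
	dev_ctx = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*dev_ctx),
			      RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
	if (dev_ctx == NULL)
		return -ENOMEM;
	/* Opens the device and prepares ctx/pd/pdn for the given class. */
	if (mlx5_dev_ctx_prepare(dev_ctx, dev, MLX5_CLASS_COMPRESS) < 0) {
		mlx5_free(dev_ctx);
		return -ENODEV;
	}
	/*
	 * The PMD then uses dev_ctx->ctx for DevX/verbs calls, dev_ctx->pd
	 * for MR registration and dev_ctx->pdn for DevX object attributes,
	 * instead of carrying private ctx/pd/pdn fields.
	 */
	mlx5_dev_ctx_release(dev_ctx);
	mlx5_free(dev_ctx);
	return 0;
}

Centralizing the open/PD/pdn handling in one structure is what lets the compress, crypto and regex PMDs in this series drop their private copies of the same code.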
From patchwork Tue Aug 17 13:44:25 2021
X-Patchwork-Submitter: Michael Baum
X-Patchwork-Id: 96996
From: Michael Baum
Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
Date: Tue, 17 Aug 2021 16:44:25 +0300
Message-ID: <20210817134441.1966618-6-michaelba@nvidia.com>
In-Reply-To: <20210817134441.1966618-1-michaelba@nvidia.com>
Subject: [dpdk-dev] [RFC 05/21] crypto/mlx5: use context device structure

Use common context device structure as a priv field. Signed-off-by: Michael Baum --- drivers/crypto/mlx5/mlx5_crypto.c | 114 ++++++++++---------------- drivers/crypto/mlx5/mlx5_crypto.h | 4 +- drivers/crypto/mlx5/mlx5_crypto_dek.c | 5 +- 3 files changed, 49 insertions(+), 74 deletions(-) diff --git a/drivers/crypto/mlx5/mlx5_crypto.c b/drivers/crypto/mlx5/mlx5_crypto.c index b3d5200ca3..7cb5bb5445 100644 --- a/drivers/crypto/mlx5/mlx5_crypto.c +++ b/drivers/crypto/mlx5/mlx5_crypto.c @@ -347,7 +347,8 @@ mlx5_crypto_addr2mr(struct mlx5_crypto_priv *priv, uintptr_t addr, if (likely(lkey != UINT32_MAX)) return lkey; /* Take slower bottom-half on miss. */ - return mlx5_mr_addr2mr_bh(priv->pd, 0, &priv->mr_scache, mr_ctrl, addr, + return mlx5_mr_addr2mr_bh(priv->dev_ctx->pd, 0, &priv->mr_scache, + mr_ctrl, addr, !!(ol_flags & EXT_ATTACHED_MBUF)); } @@ -621,7 +622,7 @@ mlx5_crypto_indirect_mkeys_prepare(struct mlx5_crypto_priv *priv, struct mlx5_umr_wqe *umr; uint32_t i; struct mlx5_devx_mkey_attr attr = { - .pd = priv->pdn, + .pd = priv->dev_ctx->pdn, .umr_en = 1, .crypto_en = 1, .set_remote_rw = 1, @@ -631,7 +632,8 @@ for (umr = (struct mlx5_umr_wqe *)qp->umem_buf, i = 0; i < qp->entries_n; i++, umr = RTE_PTR_ADD(umr, priv->wqe_set_size)) { attr.klm_array = (struct mlx5_klm *)&umr->kseg[0]; - qp->mkey[i] = mlx5_devx_cmd_mkey_create(priv->ctx, &attr); + qp->mkey[i] = mlx5_devx_cmd_mkey_create(priv->dev_ctx->ctx, + &attr); if (!qp->mkey[i]) { DRV_LOG(ERR, "Failed to allocate indirect mkey."); return -1; @@ -670,7 +672,7 @@ mlx5_crypto_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id, rte_errno = ENOMEM; return -rte_errno; } - if (mlx5_devx_cq_create(priv->ctx, &qp->cq_obj, log_nb_desc, + if (mlx5_devx_cq_create(priv->dev_ctx->ctx, &qp->cq_obj, log_nb_desc, &cq_attr, socket_id) != 0) { DRV_LOG(ERR, "Failed to create CQ."); goto error; @@ -681,7 +683,7 @@ rte_errno = ENOMEM; goto error; } - qp->umem_obj = mlx5_glue->devx_umem_reg(priv->ctx, + qp->umem_obj = mlx5_glue->devx_umem_reg(priv->dev_ctx->ctx, (void *)(uintptr_t)qp->umem_buf, umem_size, IBV_ACCESS_LOCAL_WRITE); @@ -697,7 +699,7 @@ goto error; } qp->mr_ctrl.dev_gen_ptr = &priv->mr_scache.dev_gen; - attr.pd = priv->pdn; + attr.pd = priv->dev_ctx->pdn; attr.uar_index = mlx5_os_get_devx_uar_page_id(priv->uar); attr.cqn = qp->cq_obj.cq->id; attr.log_page_size = rte_log2_u32(sysconf(_SC_PAGESIZE)); @@ -708,7 +710,7 @@ attr.wq_umem_offset = 0; attr.dbr_umem_id = qp->umem_obj->umem_id; attr.dbr_address = RTE_BIT64(log_nb_desc) * priv->wqe_set_size; - qp->qp_obj =
mlx5_devx_cmd_create_qp(priv->dev_ctx->ctx, &attr); if (qp->qp_obj == NULL) { DRV_LOG(ERR, "Failed to create QP(%u).", rte_errno); goto error; @@ -782,58 +784,20 @@ static struct rte_cryptodev_ops mlx5_crypto_ops = { static void mlx5_crypto_hw_global_release(struct mlx5_crypto_priv *priv) { - if (priv->pd != NULL) { - claim_zero(mlx5_glue->dealloc_pd(priv->pd)); - priv->pd = NULL; - } if (priv->uar != NULL) { mlx5_glue->devx_free_uar(priv->uar); priv->uar = NULL; } } -static int -mlx5_crypto_pd_create(struct mlx5_crypto_priv *priv) -{ -#ifdef HAVE_IBV_FLOW_DV_SUPPORT - struct mlx5dv_obj obj; - struct mlx5dv_pd pd_info; - int ret; - - priv->pd = mlx5_glue->alloc_pd(priv->ctx); - if (priv->pd == NULL) { - DRV_LOG(ERR, "Failed to allocate PD."); - return errno ? -errno : -ENOMEM; - } - obj.pd.in = priv->pd; - obj.pd.out = &pd_info; - ret = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_PD); - if (ret != 0) { - DRV_LOG(ERR, "Fail to get PD object info."); - mlx5_glue->dealloc_pd(priv->pd); - priv->pd = NULL; - return -errno; - } - priv->pdn = pd_info.pdn; - return 0; -#else - (void)priv; - DRV_LOG(ERR, "Cannot get pdn - no DV support."); - return -ENOTSUP; -#endif /* HAVE_IBV_FLOW_DV_SUPPORT */ -} - static int mlx5_crypto_hw_global_prepare(struct mlx5_crypto_priv *priv) { - if (mlx5_crypto_pd_create(priv) != 0) - return -1; - priv->uar = mlx5_devx_alloc_uar(priv->ctx, -1); + priv->uar = mlx5_devx_alloc_uar(priv->dev_ctx->ctx, -1); if (priv->uar) priv->uar_addr = mlx5_os_get_devx_uar_reg_addr(priv->uar); if (priv->uar == NULL || priv->uar_addr == NULL) { rte_errno = errno; - claim_zero(mlx5_glue->dealloc_pd(priv->pd)); DRV_LOG(ERR, "Failed to allocate UAR."); return -1; } @@ -966,7 +930,8 @@ mlx5_crypto_mr_mem_event_cb(enum rte_mem_event event_type, const void *addr, /* Iterate all the existing mlx5 devices. 
*/ TAILQ_FOREACH(priv, &mlx5_crypto_priv_list, next) mlx5_free_mr_by_addr(&priv->mr_scache, - priv->ctx->device->name, + mlx5_os_get_ctx_device_name + (priv->dev_ctx->ctx), addr, len); pthread_mutex_unlock(&priv_list_lock); break; @@ -979,9 +944,8 @@ mlx5_crypto_mr_mem_event_cb(enum rte_mem_event event_type, const void *addr, static int mlx5_crypto_dev_probe(struct rte_device *dev) { - struct ibv_device *ibv; struct rte_cryptodev *crypto_dev; - struct ibv_context *ctx; + struct mlx5_dev_ctx *dev_ctx; struct mlx5_devx_obj *login; struct mlx5_crypto_priv *priv; struct mlx5_crypto_devarg_params devarg_prms = { 0 }; @@ -993,6 +957,7 @@ mlx5_crypto_dev_probe(struct rte_device *dev) .max_nb_queue_pairs = RTE_CRYPTODEV_PMD_DEFAULT_MAX_NB_QUEUE_PAIRS, }; + const char *ibdev_name; uint16_t rdmw_wqe_size; int ret; @@ -1001,57 +966,66 @@ mlx5_crypto_dev_probe(struct rte_device *dev) rte_errno = ENOTSUP; return -rte_errno; } - ibv = mlx5_os_get_ibv_dev(dev); - if (ibv == NULL) + dev_ctx = mlx5_malloc(MLX5_MEM_ZERO, sizeof(struct mlx5_dev_ctx), + RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY); + if (dev_ctx == NULL) { + DRV_LOG(ERR, "Device context allocation failure."); + rte_errno = ENOMEM; return -rte_errno; - ctx = mlx5_glue->dv_open_device(ibv); - if (ctx == NULL) { - DRV_LOG(ERR, "Failed to open IB device \"%s\".", ibv->name); + } + ret = mlx5_dev_ctx_prepare(dev_ctx, dev, MLX5_CLASS_CRYPTO); + if (ret < 0) { + DRV_LOG(ERR, "Failed to create device context."); + mlx5_free(dev_ctx); rte_errno = ENODEV; return -rte_errno; } - if (mlx5_devx_cmd_query_hca_attr(ctx, &attr) != 0 || + ibdev_name = mlx5_os_get_ctx_device_name(dev_ctx->ctx); + if (mlx5_devx_cmd_query_hca_attr(dev_ctx->ctx, &attr) != 0 || attr.crypto == 0 || attr.aes_xts == 0) { DRV_LOG(ERR, "Not enough capabilities to support crypto " "operations, maybe old FW/OFED version?"); - claim_zero(mlx5_glue->close_device(ctx)); + mlx5_dev_ctx_release(dev_ctx); + mlx5_free(dev_ctx); rte_errno = ENOTSUP; return -ENOTSUP; } ret = mlx5_crypto_parse_devargs(dev->devargs, &devarg_prms); if (ret) { DRV_LOG(ERR, "Failed to parse devargs."); - claim_zero(mlx5_glue->close_device(ctx)); + mlx5_dev_ctx_release(dev_ctx); + mlx5_free(dev_ctx); return -rte_errno; } - login = mlx5_devx_cmd_create_crypto_login_obj(ctx, + login = mlx5_devx_cmd_create_crypto_login_obj(dev_ctx->ctx, &devarg_prms.login_attr); if (login == NULL) { DRV_LOG(ERR, "Failed to configure login."); - claim_zero(mlx5_glue->close_device(ctx)); + mlx5_dev_ctx_release(dev_ctx); + mlx5_free(dev_ctx); return -rte_errno; } - crypto_dev = rte_cryptodev_pmd_create(ibv->name, dev, - &init_params); + crypto_dev = rte_cryptodev_pmd_create(ibdev_name, dev, &init_params); if (crypto_dev == NULL) { - DRV_LOG(ERR, "Failed to create device \"%s\".", ibv->name); - claim_zero(mlx5_glue->close_device(ctx)); + DRV_LOG(ERR, "Failed to create device \"%s\".", ibdev_name); + mlx5_dev_ctx_release(dev_ctx); + mlx5_free(dev_ctx); return -ENODEV; } - DRV_LOG(INFO, - "Crypto device %s was created successfully.", ibv->name); + DRV_LOG(INFO, "Crypto device %s was created successfully.", ibdev_name); crypto_dev->dev_ops = &mlx5_crypto_ops; crypto_dev->dequeue_burst = mlx5_crypto_dequeue_burst; crypto_dev->enqueue_burst = mlx5_crypto_enqueue_burst; crypto_dev->feature_flags = MLX5_CRYPTO_FEATURE_FLAGS; crypto_dev->driver_id = mlx5_crypto_driver_id; priv = crypto_dev->data->dev_private; - priv->ctx = ctx; + priv->dev_ctx = dev_ctx; priv->login_obj = login; priv->crypto_dev = crypto_dev; if (mlx5_crypto_hw_global_prepare(priv) != 0) { 
rte_cryptodev_pmd_destroy(priv->crypto_dev); - claim_zero(mlx5_glue->close_device(priv->ctx)); + mlx5_dev_ctx_release(priv->dev_ctx); + mlx5_free(priv->dev_ctx); return -1; } if (mlx5_mr_btree_init(&priv->mr_scache.cache, @@ -1059,7 +1033,8 @@ mlx5_crypto_dev_probe(struct rte_device *dev) DRV_LOG(ERR, "Failed to allocate shared cache MR memory."); mlx5_crypto_hw_global_release(priv); rte_cryptodev_pmd_destroy(priv->crypto_dev); - claim_zero(mlx5_glue->close_device(priv->ctx)); + mlx5_dev_ctx_release(priv->dev_ctx); + mlx5_free(priv->dev_ctx); rte_errno = ENOMEM; return -rte_errno; } @@ -1109,7 +1084,8 @@ mlx5_crypto_dev_remove(struct rte_device *dev) mlx5_crypto_hw_global_release(priv); rte_cryptodev_pmd_destroy(priv->crypto_dev); claim_zero(mlx5_devx_cmd_destroy(priv->login_obj)); - claim_zero(mlx5_glue->close_device(priv->ctx)); + mlx5_dev_ctx_release(priv->dev_ctx); + mlx5_free(priv->dev_ctx); } return 0; } diff --git a/drivers/crypto/mlx5/mlx5_crypto.h b/drivers/crypto/mlx5/mlx5_crypto.h index d49b0001f0..7ae05f0b00 100644 --- a/drivers/crypto/mlx5/mlx5_crypto.h +++ b/drivers/crypto/mlx5/mlx5_crypto.h @@ -19,13 +19,11 @@ struct mlx5_crypto_priv { TAILQ_ENTRY(mlx5_crypto_priv) next; - struct ibv_context *ctx; /* Device context. */ + struct mlx5_dev_ctx *dev_ctx; /* Device context. */ struct rte_cryptodev *crypto_dev; void *uar; /* User Access Region. */ volatile uint64_t *uar_addr; - uint32_t pdn; /* Protection Domain number. */ uint32_t max_segs_num; /* Maximum supported data segs. */ - struct ibv_pd *pd; struct mlx5_hlist *dek_hlist; /* Dek hash list. */ struct rte_cryptodev_config dev_config; struct mlx5_mr_share_cache mr_scache; /* Global shared MR cache. */ diff --git a/drivers/crypto/mlx5/mlx5_crypto_dek.c b/drivers/crypto/mlx5/mlx5_crypto_dek.c index 67b1fa3819..91c06fffbb 100644 --- a/drivers/crypto/mlx5/mlx5_crypto_dek.c +++ b/drivers/crypto/mlx5/mlx5_crypto_dek.c @@ -94,7 +94,7 @@ mlx5_crypto_dek_create_cb(void *tool_ctx __rte_unused, void *cb_ctx) struct mlx5_crypto_dek *dek = rte_zmalloc(__func__, sizeof(*dek), RTE_CACHE_LINE_SIZE); struct mlx5_devx_dek_attr dek_attr = { - .pd = ctx->priv->pdn, + .pd = ctx->priv->dev_ctx->pdn, .key_purpose = MLX5_CRYPTO_KEY_PURPOSE_AES_XTS, .has_keytag = 1, }; @@ -117,7 +117,8 @@ mlx5_crypto_dek_create_cb(void *tool_ctx __rte_unused, void *cb_ctx) return NULL; } memcpy(&dek_attr.key, cipher_ctx->key.data, cipher_ctx->key.length); - dek->obj = mlx5_devx_cmd_create_dek_obj(ctx->priv->ctx, &dek_attr); + dek->obj = mlx5_devx_cmd_create_dek_obj(ctx->priv->dev_ctx->ctx, + &dek_attr); if (dek->obj == NULL) { rte_free(dek); return NULL;
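Both the compress and crypto patches delete the same open-coded PD creation helper, and the regex patch below drops its regex_get_pdn() twin. The common layer is expected to provide this once, via the mlx5_os_pd_create() prototype that appears as context in the mlx5_common.h hunk of the Windows patch further down. A minimal sketch of such a helper, reconstructed from the code removed here; the function name is illustrative and the actual common implementation is not part of this excerpt:

#ifdef HAVE_IBV_FLOW_DV_SUPPORT
static int
dev_ctx_pd_create_sketch(struct mlx5_dev_ctx *dev_ctx)
{
	struct mlx5dv_obj obj;
	struct mlx5dv_pd pd_info;

	/* Allocate the Protection Domain once per device context. */
	dev_ctx->pd = mlx5_glue->alloc_pd(dev_ctx->ctx);
	if (dev_ctx->pd == NULL)
		return errno ? -errno : -ENOMEM;
	/* Query the PD number needed by DevX object attributes. */
	obj.pd.in = dev_ctx->pd;
	obj.pd.out = &pd_info;
	if (mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_PD) != 0) {
		mlx5_glue->dealloc_pd(dev_ctx->pd);
		dev_ctx->pd = NULL;
		return -errno;
	}
	dev_ctx->pdn = pd_info.pdn;
	return 0;
}
#endif /* HAVE_IBV_FLOW_DV_SUPPORT */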
From patchwork Tue Aug 17 13:44:26 2021
X-Patchwork-Submitter: Michael Baum
X-Patchwork-Id: 96997
From: Michael Baum
Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
Date: Tue, 17 Aug 2021 16:44:26 +0300
Message-ID: <20210817134441.1966618-7-michaelba@nvidia.com>
In-Reply-To: <20210817134441.1966618-1-michaelba@nvidia.com>
Subject: [dpdk-dev] [RFC 06/21] regex/mlx5: use context device structure

Use common context device structure as a priv field.
Signed-off-by: Michael Baum --- drivers/regex/mlx5/mlx5_regex.c | 59 +++++++++++----------- drivers/regex/mlx5/mlx5_regex.h | 23 +-------- drivers/regex/mlx5/mlx5_regex_control.c | 12 ++--- drivers/regex/mlx5/mlx5_regex_fastpath.c | 18 +++---- drivers/regex/mlx5/mlx5_rxp.c | 64 ++++++++++++------------ 5 files changed, 72 insertions(+), 104 deletions(-) diff --git a/drivers/regex/mlx5/mlx5_regex.c b/drivers/regex/mlx5/mlx5_regex.c index f17b6df47f..11b24cde39 100644 --- a/drivers/regex/mlx5/mlx5_regex.c +++ b/drivers/regex/mlx5/mlx5_regex.c @@ -110,7 +110,8 @@ mlx5_regex_mr_mem_event_cb(enum rte_mem_event event_type, const void *addr, /* Iterate all the existing mlx5 devices. */ TAILQ_FOREACH(priv, &mlx5_mem_event_list, mem_event_cb) mlx5_free_mr_by_addr(&priv->mr_scache, - priv->ctx->device->name, + mlx5_os_get_ctx_device_name + (priv->dev_ctx->ctx), addr, len); pthread_mutex_unlock(&mem_event_list_lock); break; @@ -123,25 +124,31 @@ mlx5_regex_mr_mem_event_cb(enum rte_mem_event event_type, const void *addr, static int mlx5_regex_dev_probe(struct rte_device *rte_dev) { - struct ibv_device *ibv; struct mlx5_regex_priv *priv = NULL; - struct ibv_context *ctx = NULL; + struct mlx5_dev_ctx *dev_ctx = NULL; struct mlx5_hca_attr attr; char name[RTE_REGEXDEV_NAME_MAX_LEN]; + const char *ibdev_name; int ret; uint32_t val; - ibv = mlx5_os_get_ibv_dev(rte_dev); - if (ibv == NULL) + dev_ctx = rte_zmalloc("mlx5 context device", sizeof(*dev_ctx), + RTE_CACHE_LINE_SIZE); + if (dev_ctx == NULL) { + DRV_LOG(ERR, "Device context allocation failure."); + rte_errno = ENOMEM; return -rte_errno; - DRV_LOG(INFO, "Probe device \"%s\".", ibv->name); - ctx = mlx5_glue->dv_open_device(ibv); - if (!ctx) { - DRV_LOG(ERR, "Failed to open IB device \"%s\".", ibv->name); + } + ret = mlx5_dev_ctx_prepare(dev_ctx, rte_dev, MLX5_CLASS_REGEX); + if (ret < 0) { + DRV_LOG(ERR, "Failed to create device context."); + rte_free(dev_ctx); rte_errno = ENODEV; return -rte_errno; } - ret = mlx5_devx_cmd_query_hca_attr(ctx, &attr); + ibdev_name = mlx5_os_get_ctx_device_name(dev_ctx->ctx); + DRV_LOG(INFO, "Probe device \"%s\".", ibdev_name); + ret = mlx5_devx_cmd_query_hca_attr(dev_ctx->ctx, &attr); if (ret) { DRV_LOG(ERR, "Unable to read HCA capabilities."); rte_errno = ENOTSUP; @@ -152,7 +159,7 @@ mlx5_regex_dev_probe(struct rte_device *rte_dev) rte_errno = ENOTSUP; goto dev_error; } - if (mlx5_regex_engines_status(ctx, 2)) { + if (mlx5_regex_engines_status(dev_ctx->ctx, 2)) { DRV_LOG(ERR, "RegEx engine error."); rte_errno = ENOMEM; goto dev_error; @@ -165,13 +172,13 @@ mlx5_regex_dev_probe(struct rte_device *rte_dev) goto dev_error; } priv->sq_ts_format = attr.sq_ts_format; - priv->ctx = ctx; + priv->dev_ctx = dev_ctx; priv->nb_engines = 2; /* attr.regexp_num_of_engines */ - ret = mlx5_devx_regex_register_read(priv->ctx, 0, + ret = mlx5_devx_regex_register_read(priv->dev_ctx->ctx, 0, MLX5_RXP_CSR_IDENTIFIER, &val); if (ret) { DRV_LOG(ERR, "CSR read failed!"); - return -1; + goto dev_error; } if (val == MLX5_RXP_BF2_IDENTIFIER) priv->is_bf2 = 1; @@ -189,18 +196,12 @@ mlx5_regex_dev_probe(struct rte_device *rte_dev) * registers writings, it is safe to allocate UAR with any * memory mapping type. 
*/ - priv->uar = mlx5_devx_alloc_uar(ctx, -1); + priv->uar = mlx5_devx_alloc_uar(dev_ctx->ctx, -1); if (!priv->uar) { DRV_LOG(ERR, "can't allocate uar."); rte_errno = ENOMEM; goto error; } - priv->pd = mlx5_glue->alloc_pd(ctx); - if (!priv->pd) { - DRV_LOG(ERR, "can't allocate pd."); - rte_errno = ENOMEM; - goto error; - } priv->regexdev->dev_ops = &mlx5_regexdev_ops; priv->regexdev->enqueue = mlx5_regexdev_enqueue; #ifdef HAVE_MLX5_UMR_IMKEY @@ -238,15 +239,15 @@ mlx5_regex_dev_probe(struct rte_device *rte_dev) return 0; error: - if (priv->pd) - mlx5_glue->dealloc_pd(priv->pd); if (priv->uar) mlx5_glue->devx_free_uar(priv->uar); if (priv->regexdev) rte_regexdev_unregister(priv->regexdev); dev_error: - if (ctx) - mlx5_glue->close_device(ctx); + if (dev_ctx) { + mlx5_dev_ctx_release(dev_ctx); + rte_free(dev_ctx); + } if (priv) rte_free(priv); return -rte_errno; @@ -274,14 +275,14 @@ mlx5_regex_dev_remove(struct rte_device *rte_dev) NULL); if (priv->mr_scache.cache.table) mlx5_mr_release_cache(&priv->mr_scache); - if (priv->pd) - mlx5_glue->dealloc_pd(priv->pd); if (priv->uar) mlx5_glue->devx_free_uar(priv->uar); if (priv->regexdev) rte_regexdev_unregister(priv->regexdev); - if (priv->ctx) - mlx5_glue->close_device(priv->ctx); + if (priv->dev_ctx) { + mlx5_dev_ctx_release(priv->dev_ctx); + rte_free(priv->dev_ctx); + } rte_free(priv); } return 0; diff --git a/drivers/regex/mlx5/mlx5_regex.h b/drivers/regex/mlx5/mlx5_regex.h index 514f3408f9..c7a57e6f1b 100644 --- a/drivers/regex/mlx5/mlx5_regex.h +++ b/drivers/regex/mlx5/mlx5_regex.h @@ -58,7 +58,7 @@ struct mlx5_regex_db { struct mlx5_regex_priv { TAILQ_ENTRY(mlx5_regex_priv) next; - struct ibv_context *ctx; /* Device context. */ + struct mlx5_dev_ctx *dev_ctx; /* Device context. */ struct rte_regexdev *regexdev; /* Pointer to the RegEx dev. */ uint16_t nb_queues; /* Number of queues. */ struct mlx5_regex_qp *qps; /* Pointer to the qp array. */ @@ -68,7 +68,6 @@ struct mlx5_regex_priv { MLX5_RXP_EM_COUNT]; uint32_t nb_engines; /* Number of RegEx engines. */ struct mlx5dv_devx_uar *uar; /* UAR object. */ - struct ibv_pd *pd; TAILQ_ENTRY(mlx5_regex_priv) mem_event_cb; /**< Called by memory event callback. */ struct mlx5_mr_share_cache mr_scache; /* Global shared MR cache. */ @@ -77,26 +76,6 @@ struct mlx5_regex_priv { uint8_t has_umr; /* The device supports UMR. 
*/ }; -#ifdef HAVE_IBV_FLOW_DV_SUPPORT -static inline int -regex_get_pdn(void *pd, uint32_t *pdn) -{ - struct mlx5dv_obj obj; - struct mlx5dv_pd pd_info; - int ret = 0; - - obj.pd.in = pd; - obj.pd.out = &pd_info; - ret = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_PD); - if (ret) { - DRV_LOG(DEBUG, "Fail to get PD object info"); - return ret; - } - *pdn = pd_info.pdn; - return 0; -} -#endif - /* mlx5_regex.c */ int mlx5_regex_start(struct rte_regexdev *dev); int mlx5_regex_stop(struct rte_regexdev *dev); diff --git a/drivers/regex/mlx5/mlx5_regex_control.c b/drivers/regex/mlx5/mlx5_regex_control.c index 8ce2dabb55..125425a955 100644 --- a/drivers/regex/mlx5/mlx5_regex_control.c +++ b/drivers/regex/mlx5/mlx5_regex_control.c @@ -83,8 +83,8 @@ regex_ctrl_create_cq(struct mlx5_regex_priv *priv, struct mlx5_regex_cq *cq) int ret; cq->ci = 0; - ret = mlx5_devx_cq_create(priv->ctx, &cq->cq_obj, cq->log_nb_desc, - &attr, SOCKET_ID_ANY); + ret = mlx5_devx_cq_create(priv->dev_ctx->ctx, &cq->cq_obj, + cq->log_nb_desc, &attr, SOCKET_ID_ANY); if (ret) { DRV_LOG(ERR, "Can't create CQ object."); memset(cq, 0, sizeof(*cq)); @@ -147,18 +147,14 @@ regex_ctrl_create_sq(struct mlx5_regex_priv *priv, struct mlx5_regex_qp *qp, .state = MLX5_SQC_STATE_RDY, }; struct mlx5_regex_sq *sq = &qp->sqs[q_ind]; - uint32_t pd_num = 0; int ret; sq->log_nb_desc = log_nb_desc; sq->sqn = q_ind; sq->ci = 0; sq->pi = 0; - ret = regex_get_pdn(priv->pd, &pd_num); - if (ret) - return ret; - attr.wq_attr.pd = pd_num; - ret = mlx5_devx_sq_create(priv->ctx, &sq->sq_obj, + attr.wq_attr.pd = priv->dev_ctx->pdn; + ret = mlx5_devx_sq_create(priv->dev_ctx->ctx, &sq->sq_obj, MLX5_REGEX_WQE_LOG_NUM(priv->has_umr, log_nb_desc), &attr, SOCKET_ID_ANY); if (ret) { diff --git a/drivers/regex/mlx5/mlx5_regex_fastpath.c b/drivers/regex/mlx5/mlx5_regex_fastpath.c index 786718af53..2a04713b9f 100644 --- a/drivers/regex/mlx5/mlx5_regex_fastpath.c +++ b/drivers/regex/mlx5/mlx5_regex_fastpath.c @@ -138,7 +138,8 @@ mlx5_regex_addr2mr(struct mlx5_regex_priv *priv, struct mlx5_mr_ctrl *mr_ctrl, if (likely(lkey != UINT32_MAX)) return lkey; /* Take slower bottom-half on miss. 
*/ - return mlx5_mr_addr2mr_bh(priv->pd, 0, &priv->mr_scache, mr_ctrl, addr, + return mlx5_mr_addr2mr_bh(priv->dev_ctx->pd, 0, &priv->mr_scache, + mr_ctrl, addr, !!(mbuf->ol_flags & EXT_ATTACHED_MBUF)); } @@ -634,7 +635,7 @@ setup_sqs(struct mlx5_regex_priv *priv, struct mlx5_regex_qp *queue) static int setup_buffers(struct mlx5_regex_priv *priv, struct mlx5_regex_qp *qp) { - struct ibv_pd *pd = priv->pd; + struct ibv_pd *pd = priv->dev_ctx->pd; uint32_t i; int err; @@ -724,6 +725,7 @@ mlx5_regexdev_setup_fastpath(struct mlx5_regex_priv *priv, uint32_t qp_id) .klm_array = &klm, .klm_num = 1, .umr_en = 1, + .pd = priv->dev_ctx->pdn, }; uint32_t i; int err = 0; @@ -740,19 +742,11 @@ mlx5_regexdev_setup_fastpath(struct mlx5_regex_priv *priv, uint32_t qp_id) setup_sqs(priv, qp); if (priv->has_umr) { -#ifdef HAVE_IBV_FLOW_DV_SUPPORT - if (regex_get_pdn(priv->pd, &attr.pd)) { - err = -rte_errno; - DRV_LOG(ERR, "Failed to get pdn."); - mlx5_regexdev_teardown_fastpath(priv, qp_id); - return err; - } -#endif for (i = 0; i < qp->nb_desc; i++) { attr.klm_num = MLX5_REGEX_MAX_KLM_NUM; attr.klm_array = qp->jobs[i].imkey_array; - qp->jobs[i].imkey = mlx5_devx_cmd_mkey_create(priv->ctx, - &attr); + qp->jobs[i].imkey = mlx5_devx_cmd_mkey_create + (priv->dev_ctx->ctx, &attr); if (!qp->jobs[i].imkey) { err = -rte_errno; DRV_LOG(ERR, "Failed to allocate imkey."); diff --git a/drivers/regex/mlx5/mlx5_rxp.c b/drivers/regex/mlx5/mlx5_rxp.c index 380037e24c..7bd854883f 100644 --- a/drivers/regex/mlx5/mlx5_rxp.c +++ b/drivers/regex/mlx5/mlx5_rxp.c @@ -167,7 +167,7 @@ rxp_init_rtru(struct mlx5_regex_priv *priv, uint8_t id, uint32_t init_bits) uint32_t poll_value; uint32_t expected_value; uint32_t expected_mask; - struct ibv_context *ctx = priv->ctx; + struct ibv_context *ctx = priv->dev_ctx->ctx; int ret = 0; /* Read the rtru ctrl CSR. 
*/ @@ -284,6 +284,7 @@ rxp_program_rof(struct mlx5_regex_priv *priv, const char *buf, uint32_t len, uint32_t rof_rule_addr; uint64_t tmp_write_swap[4]; struct mlx5_rxp_rof_entry rules[8]; + struct ibv_context *ctx = priv->dev_ctx->ctx; int i; int db_free; int j; @@ -313,7 +314,7 @@ rxp_program_rof(struct mlx5_regex_priv *priv, const char *buf, uint32_t len, tmp_addr = rxp_get_reg_address(address); if (tmp_addr == UINT32_MAX) goto parse_error; - ret = mlx5_devx_regex_register_read(priv->ctx, id, + ret = mlx5_devx_regex_register_read(ctx, id, tmp_addr, ®_val); if (ret) goto parse_error; @@ -337,7 +338,7 @@ rxp_program_rof(struct mlx5_regex_priv *priv, const char *buf, uint32_t len, tmp_addr = rxp_get_reg_address(address); if (tmp_addr == UINT32_MAX) goto parse_error; - ret = mlx5_devx_regex_register_read(priv->ctx, id, + ret = mlx5_devx_regex_register_read(ctx, id, tmp_addr, ®_val); if (ret) goto parse_error; @@ -359,7 +360,7 @@ rxp_program_rof(struct mlx5_regex_priv *priv, const char *buf, uint32_t len, tmp_addr = rxp_get_reg_address(address); if (tmp_addr == UINT32_MAX) goto parse_error; - ret = mlx5_devx_regex_register_read(priv->ctx, id, + ret = mlx5_devx_regex_register_read(ctx, id, tmp_addr, ®_val); if (ret) goto parse_error; @@ -395,7 +396,7 @@ rxp_program_rof(struct mlx5_regex_priv *priv, const char *buf, uint32_t len, if (tmp_addr == UINT32_MAX) goto parse_error; - ret = mlx5_devx_regex_register_read(priv->ctx, id, + ret = mlx5_devx_regex_register_read(ctx, id, tmp_addr, ®_val); if (ret) { DRV_LOG(ERR, "RXP CSR read failed!"); @@ -418,17 +419,16 @@ rxp_program_rof(struct mlx5_regex_priv *priv, const char *buf, uint32_t len, */ temp = val; ret |= mlx5_devx_regex_register_write - (priv->ctx, id, + (ctx, id, MLX5_RXP_RTRU_CSR_DATA_0, temp); temp = (uint32_t)(val >> 32); ret |= mlx5_devx_regex_register_write - (priv->ctx, id, + (ctx, id, MLX5_RXP_RTRU_CSR_DATA_0 + MLX5_RXP_CSR_WIDTH, temp); temp = address; ret |= mlx5_devx_regex_register_write - (priv->ctx, id, MLX5_RXP_RTRU_CSR_ADDR, - temp); + (ctx, id, MLX5_RXP_RTRU_CSR_ADDR, temp); if (ret) { DRV_LOG(ERR, "Failed to copy instructions to RXP."); @@ -506,13 +506,14 @@ mlnx_set_database(struct mlx5_regex_priv *priv, uint8_t id, uint8_t db_to_use) int ret; uint32_t umem_id; - ret = mlx5_devx_regex_database_stop(priv->ctx, id); + ret = mlx5_devx_regex_database_stop(priv->dev_ctx->ctx, id); if (ret < 0) { DRV_LOG(ERR, "stop engine failed!"); return ret; } umem_id = mlx5_os_get_umem_id(priv->db[db_to_use].umem.umem); - ret = mlx5_devx_regex_database_program(priv->ctx, id, umem_id, 0); + ret = mlx5_devx_regex_database_program(priv->dev_ctx->ctx, + id, umem_id, 0); if (ret < 0) { DRV_LOG(ERR, "program db failed!"); return ret; @@ -523,7 +524,7 @@ mlnx_set_database(struct mlx5_regex_priv *priv, uint8_t id, uint8_t db_to_use) static int mlnx_resume_database(struct mlx5_regex_priv *priv, uint8_t id) { - mlx5_devx_regex_database_resume(priv->ctx, id); + mlx5_devx_regex_database_resume(priv->dev_ctx->ctx, id); return 0; } @@ -588,13 +589,13 @@ program_rxp_rules(struct mlx5_regex_priv *priv, const char *buf, uint32_t len, { int ret; uint32_t val; + struct ibv_context *ctx = priv->dev_ctx->ctx; ret = rxp_init_eng(priv, id); if (ret < 0) return ret; /* Confirm the RXP is initialised. 
*/ - if (mlx5_devx_regex_register_read(priv->ctx, id, - MLX5_RXP_CSR_STATUS, &val)) { + if (mlx5_devx_regex_register_read(ctx, id, MLX5_RXP_CSR_STATUS, &val)) { DRV_LOG(ERR, "Failed to read from RXP!"); return -ENODEV; } @@ -602,14 +603,14 @@ program_rxp_rules(struct mlx5_regex_priv *priv, const char *buf, uint32_t len, DRV_LOG(ERR, "RXP not initialised..."); return -EBUSY; } - ret = mlx5_devx_regex_register_read(priv->ctx, id, + ret = mlx5_devx_regex_register_read(ctx, id, MLX5_RXP_RTRU_CSR_CTRL, &val); if (ret) { DRV_LOG(ERR, "CSR read failed!"); return -1; } val |= MLX5_RXP_RTRU_CSR_CTRL_GO; - ret = mlx5_devx_regex_register_write(priv->ctx, id, + ret = mlx5_devx_regex_register_write(ctx, id, MLX5_RXP_RTRU_CSR_CTRL, val); if (ret) { DRV_LOG(ERR, "Can't program rof file!"); @@ -622,7 +623,7 @@ program_rxp_rules(struct mlx5_regex_priv *priv, const char *buf, uint32_t len, } if (priv->is_bf2) { ret = rxp_poll_csr_for_value - (priv->ctx, &val, MLX5_RXP_RTRU_CSR_STATUS, + (ctx, &val, MLX5_RXP_RTRU_CSR_STATUS, MLX5_RXP_RTRU_CSR_STATUS_UPDATE_DONE, MLX5_RXP_RTRU_CSR_STATUS_UPDATE_DONE, MLX5_RXP_POLL_CSR_FOR_VALUE_TIMEOUT, id); @@ -632,29 +633,27 @@ program_rxp_rules(struct mlx5_regex_priv *priv, const char *buf, uint32_t len, } DRV_LOG(DEBUG, "Rules update took %d cycles", ret); } - if (mlx5_devx_regex_register_read(priv->ctx, id, MLX5_RXP_RTRU_CSR_CTRL, + if (mlx5_devx_regex_register_read(ctx, id, MLX5_RXP_RTRU_CSR_CTRL, &val)) { DRV_LOG(ERR, "CSR read failed!"); return -1; } val &= ~(MLX5_RXP_RTRU_CSR_CTRL_GO); - if (mlx5_devx_regex_register_write(priv->ctx, id, + if (mlx5_devx_regex_register_write(ctx, id, MLX5_RXP_RTRU_CSR_CTRL, val)) { DRV_LOG(ERR, "CSR write failed!"); return -1; } - ret = mlx5_devx_regex_register_read(priv->ctx, id, MLX5_RXP_CSR_CTRL, - &val); + ret = mlx5_devx_regex_register_read(ctx, id, MLX5_RXP_CSR_CTRL, &val); if (ret) return ret; val &= ~MLX5_RXP_CSR_CTRL_INIT; - ret = mlx5_devx_regex_register_write(priv->ctx, id, MLX5_RXP_CSR_CTRL, - val); + ret = mlx5_devx_regex_register_write(ctx, id, MLX5_RXP_CSR_CTRL, val); if (ret) return ret; rxp_init_rtru(priv, id, MLX5_RXP_RTRU_CSR_CTRL_INIT_MODE_L1_L2); if (priv->is_bf2) { - ret = rxp_poll_csr_for_value(priv->ctx, &val, + ret = rxp_poll_csr_for_value(ctx, &val, MLX5_RXP_CSR_STATUS, MLX5_RXP_CSR_STATUS_INIT_DONE, MLX5_RXP_CSR_STATUS_INIT_DONE, @@ -670,9 +669,7 @@ program_rxp_rules(struct mlx5_regex_priv *priv, const char *buf, uint32_t len, DRV_LOG(ERR, "Failed to resume engine!"); return ret; } - return ret; - } static int @@ -680,7 +677,7 @@ rxp_init_eng(struct mlx5_regex_priv *priv, uint8_t id) { uint32_t ctrl; uint32_t reg; - struct ibv_context *ctx = priv->ctx; + struct ibv_context *ctx = priv->dev_ctx->ctx; int ret; ret = mlx5_devx_regex_register_read(ctx, id, MLX5_RXP_CSR_CTRL, &ctrl); @@ -758,9 +755,10 @@ rxp_db_setup(struct mlx5_regex_priv *priv) goto tidyup_error; } /* Register the memory. 
*/ - priv->db[i].umem.umem = mlx5_glue->devx_umem_reg(priv->ctx, - priv->db[i].ptr, - MLX5_MAX_DB_SIZE, 7); + priv->db[i].umem.umem = mlx5_glue->devx_umem_reg + (priv->dev_ctx->ctx, + priv->db[i].ptr, + MLX5_MAX_DB_SIZE, 7); if (!priv->db[i].umem.umem) { DRV_LOG(ERR, "Failed to register memory!"); ret = ENODEV; goto tidyup_error; } @@ -804,14 +802,14 @@ mlx5_regex_rules_db_import(struct rte_regexdev *dev, } if (rule_db_len == 0) return -EINVAL; - if (mlx5_devx_regex_register_read(priv->ctx, 0, + if (mlx5_devx_regex_register_read(priv->dev_ctx->ctx, 0, MLX5_RXP_CSR_BASE_ADDRESS, &ver)) { DRV_LOG(ERR, "Failed to read Main CSRs Engine 0!"); return -1; } /* Need to ensure RXP not busy before stop! */ for (id = 0; id < priv->nb_engines; id++) { - ret = rxp_stop_engine(priv->ctx, id); + ret = rxp_stop_engine(priv->dev_ctx->ctx, id); if (ret) { DRV_LOG(ERR, "Can't stop engine."); ret = -ENODEV; goto tidyup_error; } @@ -823,7 +821,7 @@ ret = -ENODEV; goto tidyup_error; } - ret = rxp_start_engine(priv->ctx, id); + ret = rxp_start_engine(priv->dev_ctx->ctx, id); if (ret) { DRV_LOG(ERR, "Can't start engine."); ret = -ENODEV;
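After this patch the regex control and fast path read the PD number straight from the shared context instead of resolving it per call through regex_get_pdn(). A condensed sketch of the SQ creation path under that assumption; the function name is illustrative, and SQ attribute fields other than the state and the WQ PD are omitted:

static int
regex_sq_create_sketch(struct mlx5_regex_priv *priv, struct mlx5_regex_sq *sq,
		       uint16_t log_nb_desc)
{
	struct mlx5_devx_create_sq_attr attr = {
		.state = MLX5_SQC_STATE_RDY,
	};

	/* PD number comes from the shared context, no per-call lookup. */
	attr.wq_attr.pd = priv->dev_ctx->pdn;
	return mlx5_devx_sq_create(priv->dev_ctx->ctx, &sq->sq_obj,
				   MLX5_REGEX_WQE_LOG_NUM(priv->has_umr,
							  log_nb_desc),
				   &attr, SOCKET_ID_ANY);
}

The UMR indirect mkeys in the fast path follow the same pattern, filling .pd of the mkey attribute from priv->dev_ctx->pdn before calling mlx5_devx_cmd_mkey_create() on priv->dev_ctx->ctx.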
From patchwork Tue Aug 17 13:44:27 2021
X-Patchwork-Submitter: Michael Baum
X-Patchwork-Id: 96998
From: Michael Baum
Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
Date: Tue, 17 Aug 2021 16:44:27 +0300
Message-ID: <20210817134441.1966618-8-michaelba@nvidia.com>
In-Reply-To: <20210817134441.1966618-1-michaelba@nvidia.com>
Subject: [dpdk-dev] [RFC 07/21] net/mlx5: improve probe function on Windows

Some improvements: - use an auxiliary function to find the matching device. - use a local spawn variable instead of pointing to a list with a single member. Signed-off-by: Michael Baum --- drivers/common/mlx5/mlx5_common.h | 2 + drivers/common/mlx5/version.map | 1 + drivers/common/mlx5/windows/mlx5_common_os.c | 2 +- drivers/net/mlx5/windows/mlx5_os.c | 196 +++---------------- 4 files changed, 26 insertions(+), 175 deletions(-) diff --git a/drivers/common/mlx5/mlx5_common.h b/drivers/common/mlx5/mlx5_common.h index 609953b70e..10061f364f 100644 --- a/drivers/common/mlx5/mlx5_common.h +++ b/drivers/common/mlx5/mlx5_common.h @@ -465,6 +465,8 @@ int mlx5_os_devx_open_device(struct mlx5_dev_ctx *dev_ctx, struct rte_device *dev, int dbnc, uint32_t classes); int mlx5_os_pd_create(struct mlx5_dev_ctx *dev_ctx); +__rte_internal +struct devx_device_bdf *mlx5_os_get_devx_device(struct rte_device *dev); #endif /* RTE_PMD_MLX5_COMMON_H_ */ diff --git a/drivers/common/mlx5/version.map b/drivers/common/mlx5/version.map index 6a88105d02..18856c198e 100644 --- a/drivers/common/mlx5/version.map +++ b/drivers/common/mlx5/version.map @@ -145,6 +145,7 @@ INTERNAL { mlx5_os_dealloc_pd; mlx5_os_dereg_mr; mlx5_os_get_ibv_dev; # WINDOWS_NO_EXPORT + mlx5_os_get_devx_device; mlx5_os_reg_mr; mlx5_os_umem_dereg; mlx5_os_umem_reg; diff --git a/drivers/common/mlx5/windows/mlx5_common_os.c b/drivers/common/mlx5/windows/mlx5_common_os.c index 5d178b0452..12819383c1 100644 --- a/drivers/common/mlx5/windows/mlx5_common_os.c +++ b/drivers/common/mlx5/windows/mlx5_common_os.c @@ -144,7 +144,7 @@ mlx5_match_devx_devices_to_addr(struct devx_device_bdf *devx_bdf, * @return * A device match on success, NULL otherwise and rte_errno is set.
*/ -static struct devx_device_bdf * +struct devx_device_bdf * mlx5_os_get_devx_device(struct rte_device *dev) { int n; diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c index 7e1df1c751..0ff9e70d96 100644 --- a/drivers/net/mlx5/windows/mlx5_os.c +++ b/drivers/net/mlx5/windows/mlx5_os.c @@ -904,68 +904,6 @@ mlx5_os_set_allmulti(struct rte_eth_dev *dev, int enable) return -ENOTSUP; } -/** - * Detect if a devx_device_bdf object has identical DBDF values to the - * rte_pci_addr found in bus/pci probing - * - * @param[in] devx_bdf - * Pointer to the devx_device_bdf structure. - * @param[in] addr - * Pointer to the rte_pci_addr structure. - * - * @return - * 1 on Device match, 0 on mismatch. - */ -static int -mlx5_match_devx_bdf_to_addr(struct devx_device_bdf *devx_bdf, - struct rte_pci_addr *addr) -{ - if (addr->domain != (devx_bdf->bus_id >> 8) || - addr->bus != (devx_bdf->bus_id & 0xff) || - addr->devid != devx_bdf->dev_id || - addr->function != devx_bdf->fnc_id) { - return 0; - } - return 1; -} - -/** - * Detect if a devx_device_bdf object matches the rte_pci_addr - * found in bus/pci probing - * Compare both the Native/PF BDF and the raw_bdf representing a VF BDF. - * - * @param[in] devx_bdf - * Pointer to the devx_device_bdf structure. - * @param[in] addr - * Pointer to the rte_pci_addr structure. - * - * @return - * 1 on Device match, 0 on mismatch, rte_errno code on failure. - */ -static int -mlx5_match_devx_devices_to_addr(struct devx_device_bdf *devx_bdf, - struct rte_pci_addr *addr) -{ - int err; - struct devx_device mlx5_dev; - - if (mlx5_match_devx_bdf_to_addr(devx_bdf, addr)) - return 1; - /** - * Didn't match on Native/PF BDF, could still - * Match a VF BDF, check it next - */ - err = mlx5_glue->query_device(devx_bdf, &mlx5_dev); - if (err) { - DRV_LOG(ERR, "query_device failed"); - rte_errno = err; - return rte_errno; - } - if (mlx5_match_devx_bdf_to_addr(&mlx5_dev.raw_bdf, addr)) - return 1; - return 0; -} - /** * DPDK callback to register a PCI device. * @@ -981,39 +919,15 @@ int mlx5_os_net_probe(struct rte_device *dev) { struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev); - struct devx_device_bdf *devx_bdf_devs, *orig_devx_bdf_devs; - /* - * Number of found IB Devices matching with requested PCI BDF. - * nd != 1 means there are multiple IB devices over the same - * PCI device and we have representors and master. - */ - unsigned int nd = 0; - /* - * Number of found IB device Ports. nd = 1 and np = 1..n means - * we have the single multiport IB device, and there may be - * representors attached to some of found ports. - * Currently not supported. - * unsigned int np = 0; - */ - - /* - * Number of DPDK ethernet devices to Spawn - either over - * multiple IB devices or multiple ports of single IB device. - * Actually this is the number of iterations to spawn. 
- */ - unsigned int ns = 0; - /* - * Bonding device - * < 0 - no bonding device (single one) - * >= 0 - bonding device (value is slave PF index) - */ - int bd = -1; - struct mlx5_dev_spawn_data *list = NULL; + struct mlx5_dev_spawn_data spawn = { .pf_bond = -1 }; + struct devx_device_bdf *devx_bdf_match = mlx5_os_get_devx_device(dev); struct mlx5_dev_config dev_config; unsigned int dev_config_vf; - int ret, err; + int ret; uint32_t restore; + if (devx_bdf_match == NULL) + return -rte_errno; if (rte_eal_process_type() == RTE_PROC_SECONDARY) { DRV_LOG(ERR, "Secondary process is not supported on Windows."); return -ENOTSUP; @@ -1024,67 +938,14 @@ mlx5_os_net_probe(struct rte_device *dev) strerror(rte_errno)); return -rte_errno; } - errno = 0; - devx_bdf_devs = mlx5_glue->get_device_list(&ret); - orig_devx_bdf_devs = devx_bdf_devs; - if (!devx_bdf_devs) { - rte_errno = errno ? errno : ENOSYS; - DRV_LOG(ERR, "cannot list devices, is ib_uverbs loaded?"); - return -rte_errno; - } - /* - * First scan the list of all Infiniband devices to find - * matching ones, gathering into the list. - */ - struct devx_device_bdf *devx_bdf_match[ret + 1]; - - while (ret-- > 0) { - err = mlx5_match_devx_devices_to_addr(devx_bdf_devs, - &pci_dev->addr); - if (!err) { - devx_bdf_devs++; - continue; - } - if (err != 1) { - ret = -err; - goto exit; - } - devx_bdf_match[nd++] = devx_bdf_devs; - } - devx_bdf_match[nd] = NULL; - if (!nd) { - /* No device matches, just complain and bail out. */ - DRV_LOG(WARNING, - "no DevX device matches PCI device " PCI_PRI_FMT "," - " is DevX Configured?", - pci_dev->addr.domain, pci_dev->addr.bus, - pci_dev->addr.devid, pci_dev->addr.function); - rte_errno = ENOENT; - ret = -rte_errno; - goto exit; - } - /* - * Now we can determine the maximal - * amount of devices to be spawned. - */ - list = mlx5_malloc(MLX5_MEM_ZERO, - sizeof(struct mlx5_dev_spawn_data), - RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY); - if (!list) { - DRV_LOG(ERR, "spawn data array allocation failure"); - rte_errno = ENOMEM; - ret = -rte_errno; - goto exit; - } - memset(&list[ns].info, 0, sizeof(list[ns].info)); - list[ns].max_port = 1; - list[ns].phys_port = 1; - list[ns].phys_dev = devx_bdf_match[ns]; - list[ns].eth_dev = NULL; - list[ns].pci_dev = pci_dev; - list[ns].pf_bond = bd; - list[ns].ifindex = -1; /* Spawn will assign */ - list[ns].info = + memset(&spawn.info, 0, sizeof(spawn.info)); + spawn.max_port = 1; + spawn.phys_port = 1; + spawn.phys_dev = devx_bdf_match; + spawn.eth_dev = NULL; + spawn.pci_dev = pci_dev; + spawn.ifindex = -1; /* Spawn will assign */ + spawn.info = (struct mlx5_switch_info){ .master = 0, .representor = 0, @@ -1125,29 +986,16 @@ mlx5_os_net_probe(struct rte_device *dev) dev_config.dv_flow_en = 1; dev_config.decap_en = 0; dev_config.log_hp_size = MLX5_ARG_UNSET; - list[ns].numa_node = pci_dev->device.numa_node; - list[ns].eth_dev = mlx5_dev_spawn(&pci_dev->device, - &list[ns], - &dev_config); - if (!list[ns].eth_dev) - goto exit; - restore = list[ns].eth_dev->data->dev_flags; - rte_eth_copy_pci_info(list[ns].eth_dev, pci_dev); + spawn.numa_node = pci_dev->device.numa_node; + spawn.eth_dev = mlx5_dev_spawn(dev, &spawn, &dev_config); + if (!spawn.eth_dev) + return -rte_errno; + restore = spawn.eth_dev->data->dev_flags; + rte_eth_copy_pci_info(spawn.eth_dev, pci_dev); /* Restore non-PCI flags cleared by the above call. 
*/ - list[ns].eth_dev->data->dev_flags |= restore; - rte_eth_dev_probing_finish(list[ns].eth_dev); - ret = 0; -exit: - /* - * Do the routine cleanup: - * - free allocated spawn data array - * - free the device list - */ - if (list) - mlx5_free(list); - MLX5_ASSERT(orig_devx_bdf_devs); - mlx5_glue->free_device_list(orig_devx_bdf_devs); - return ret; + spawn.eth_dev->data->dev_flags |= restore; + rte_eth_dev_probing_finish(spawn.eth_dev); + return 0; } /** From patchwork Tue Aug 17 13:44:28 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michael Baum X-Patchwork-Id: 96999 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 1FC6BA0548; Tue, 17 Aug 2021 15:46:35 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id D60D5411ED; Tue, 17 Aug 2021 15:45:32 +0200 (CEST) Received: from NAM10-BN7-obe.outbound.protection.outlook.com (mail-bn7nam10on2041.outbound.protection.outlook.com [40.107.92.41]) by mails.dpdk.org (Postfix) with ESMTP id A672A411DD for ; Tue, 17 Aug 2021 15:45:29 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=O9C3MP9BA1AEsAcn9g3046i/HARJO7TWNrqiTK7fucNC/ym0ynwdzUqogFgCCuT8+f1DZJwQYXweG077EPsRi/rd2QBntHnIHF4IiOouQW2HLOH3Zeby2svc/2lkXdw3022KxZSX9DJ0G2T+pKQugdJ8uUY7TvAst5V8vlhyR4j1WdRF03kgelVG9Z8XAL2efeP1/G5l2t0QIisQ5q2HGIsolQpQScDCUFIiipxNzWQoUipPnKqHY9+ev31aebzRAGHf2n8EznivqbOv+nLeSDJCMiN8w2S0Zpo7q05YFuEBn10vrByMwZswWS5uCThNUQ/qRxVEZkc8+wU605SLjA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=8Zx1kK4VZC4/lrxToNWcmpObEPh8PdI6sG0yMr3/OnM=; b=DZWyrTnkDwltBdqSM/RW4T/l81pBS7yLJ+WltKSaIVU/4OKzIdpFFPG4S94vRs1yx1ypd6J1KLYxsR0X1+DujI4bYKvfu66PQ0kvesXB0DKapuvWuchskykLp0IyWU2a/607U1kJc+WkZhUrcFX23huzwt7ThnhCIoZL5bIXB4ZJW7eQqo6AgRK9CzjlXb/l+HIpLoCRriNbmp/2Vp4T+Mo1w/bYsmrnnIAXKyjcvvtW0k6aRUKm8IhUCjXO/xRNtbbtpulxX+YnCO0rBX1e4B7KZkPfRAzb0DBhYg+A2wWUB72RjCzdKaOfZDaiHZT9wItc7P6vUQAVPy2cMRUhYw== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 216.228.112.32) smtp.rcpttodomain=dpdk.org smtp.mailfrom=nvidia.com; dmarc=pass (p=quarantine sp=none pct=100) action=none header.from=nvidia.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=8Zx1kK4VZC4/lrxToNWcmpObEPh8PdI6sG0yMr3/OnM=; b=Cdi0aqdEY5axVlWfeAFPLberWiZdHsaAF9cMX25wJVVmOlmLcgSiNkahNC/DDJf7zI43goueznJZZu7XR9EYV2c4XEHlv+eYBnLujyHZXdS7jZ5BuwAOVffQYkLiznDBKaEQgQotUBxZSfYDHlVxpG1DlZ3dbvWgJUO8/thzt//0oQwc8R3Xk+ckP1weAtvuuLDjF21BoPixn5J3URJkqeef7T2n5fGLosqiswT3vW1M5SlagoqziYEjOoqMaKIg99ewwyaCp5bTaJ8AQTXtwQh0YNc8nFT7V2tCbPzPV3+TyKciCWeceSYupz5qvoDzQqa8+D+VXiyDv6zCHC3UqQ== Received: from MWHPR04CA0053.namprd04.prod.outlook.com (2603:10b6:300:6c::15) by SA0PR12MB4480.namprd12.prod.outlook.com (2603:10b6:806:99::10) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4415.14; Tue, 17 Aug 2021 13:45:28 +0000 Received: from CO1NAM11FT034.eop-nam11.prod.protection.outlook.com (2603:10b6:300:6c:cafe::bf) by 
MWHPR04CA0053.outlook.office365.com via Frontend Transport; Tue, 17 Aug 2021 13:45:28 +0000
From: Michael Baum
To:
CC: Matan Azrad , Raslan Darawsheh , Viacheslav Ovsiienko
Date: Tue, 17 Aug 2021 16:44:28 +0300
Message-ID: <20210817134441.1966618-9-michaelba@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210817134441.1966618-1-michaelba@nvidia.com>
References: <20210817134441.1966618-1-michaelba@nvidia.com>
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-Id:
43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a; Ip=[216.228.112.32]; Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: CO1NAM11FT034.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA0PR12MB4480 Subject: [dpdk-dev] [RFC 08/21] net/mlx5: improve probe function on Linux X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" some improvements: - Update parameters for mlx5_device_bond_pci_match function. - Fix spelling and typos in comments. - Prevent breaking lines on drv logs. Signed-off-by: Michael Baum --- drivers/net/mlx5/linux/mlx5_os.c | 96 ++++++++++++++------------------ 1 file changed, 42 insertions(+), 54 deletions(-) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index 3d204f99f7..375bc55e79 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -1984,14 +1984,14 @@ mlx5_dev_spawn_data_cmp(const void *a, const void *b) /** * Match PCI information for possible slaves of bonding device. * - * @param[in] ibv_dev - * Pointer to Infiniband device structure. + * @param[in] ibdev_name + * Name of Infiniband device. * @param[in] pci_dev * Pointer to primary PCI address structure to match. * @param[in] nl_rdma * Netlink RDMA group socket handle. * @param[in] owner - * Rerepsentor owner PF index. + * Representor owner PF index. * @param[out] bond_info * Pointer to bonding information. * @@ -2000,7 +2000,7 @@ mlx5_dev_spawn_data_cmp(const void *a, const void *b) * positive index of slave PF in bonding. */ static int -mlx5_device_bond_pci_match(const struct ibv_device *ibv_dev, +mlx5_device_bond_pci_match(const char *ibdev_name, const struct rte_pci_addr *pci_dev, int nl_rdma, uint16_t owner, struct mlx5_bond_info *bond_info) @@ -2013,27 +2013,25 @@ mlx5_device_bond_pci_match(const struct ibv_device *ibv_dev, int ret; /* - * Try to get master device name. If something goes - * wrong suppose the lack of kernel support and no - * bonding devices. + * Try to get master device name. If something goes wrong suppose + * the lack of kernel support and no bonding devices. */ memset(bond_info, 0, sizeof(*bond_info)); if (nl_rdma < 0) return -1; - if (!strstr(ibv_dev->name, "bond")) + if (!strstr(ibdev_name, "bond")) return -1; - np = mlx5_nl_portnum(nl_rdma, ibv_dev->name); + np = mlx5_nl_portnum(nl_rdma, ibdev_name); if (!np) return -1; /* - * The Master device might not be on the predefined - * port (not on port index 1, it is not garanted), - * we have to scan all Infiniband device port and - * find master. + * The master device might not be on the predefined port(not on port + * index 1, it is not guaranteed), we have to scan all Infiniband + * device ports and find master. */ for (i = 1; i <= np; ++i) { /* Check whether Infiniband port is populated. 
*/ - ifindex = mlx5_nl_ifindex(nl_rdma, ibv_dev->name, i); + ifindex = mlx5_nl_ifindex(nl_rdma, ibdev_name, i); if (!ifindex) continue; if (!if_indextoname(ifindex, ifname)) @@ -2058,8 +2056,9 @@ mlx5_device_bond_pci_match(const struct ibv_device *ibv_dev, snprintf(tmp_str, sizeof(tmp_str), "/sys/class/net/%s", ifname); if (mlx5_get_pci_addr(tmp_str, &pci_addr)) { - DRV_LOG(WARNING, "can not get PCI address" - " for netdev \"%s\"", ifname); + DRV_LOG(WARNING, + "Cannot get PCI address for netdev \"%s\".", + ifname); continue; } /* Slave interface PCI address match found. */ @@ -2218,9 +2217,8 @@ mlx5_os_pci_probe_pf(struct rte_pci_device *pci_dev, struct rte_pci_addr pci_addr; DRV_LOG(DEBUG, "checking device \"%s\"", ibv_list[ret]->name); - bd = mlx5_device_bond_pci_match - (ibv_list[ret], &owner_pci, nl_rdma, owner_id, - &bond_info); + bd = mlx5_device_bond_pci_match(ibv_list[ret]->name, &owner_pci, + nl_rdma, owner_id, &bond_info); if (bd >= 0) { /* * Bonding device detected. Only one match is allowed, @@ -2240,9 +2238,9 @@ mlx5_os_pci_probe_pf(struct rte_pci_device *pci_dev, /* Amend owner pci address if owner PF ID specified. */ if (eth_da.nb_representor_ports) owner_pci.function += owner_id; - DRV_LOG(INFO, "PCI information matches for" - " slave %d bonding device \"%s\"", - bd, ibv_list[ret]->name); + DRV_LOG(INFO, + "PCI information matches for slave %d bonding device \"%s\"", + bd, ibv_list[ret]->name); ibv_match[nd++] = ibv_list[ret]; break; } else { @@ -2281,23 +2279,19 @@ mlx5_os_pci_probe_pf(struct rte_pci_device *pci_dev, if (nl_rdma >= 0) np = mlx5_nl_portnum(nl_rdma, ibv_match[0]->name); if (!np) - DRV_LOG(WARNING, "can not get IB device \"%s\"" - " ports number", ibv_match[0]->name); + DRV_LOG(WARNING, + "Cannot get IB device \"%s\" ports number.", + ibv_match[0]->name); if (bd >= 0 && !np) { - DRV_LOG(ERR, "can not get ports" - " for bonding device"); + DRV_LOG(ERR, "Cannot get ports for bonding device."); rte_errno = ENOENT; ret = -rte_errno; goto exit; } } - /* - * Now we can determine the maximal - * amount of devices to be spawned. - */ + /* Now we can determine the maximal amount of devices to be spawned. */ list = mlx5_malloc(MLX5_MEM_ZERO, - sizeof(struct mlx5_dev_spawn_data) * - (np ? np : nd), + sizeof(struct mlx5_dev_spawn_data) * (np ? np : nd), RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY); if (!list) { DRV_LOG(ERR, "spawn data array allocation failure"); @@ -2339,10 +2333,9 @@ mlx5_os_pci_probe_pf(struct rte_pci_device *pci_dev, } ret = -1; if (nl_route >= 0) - ret = mlx5_nl_switch_info - (nl_route, - list[ns].ifindex, - &list[ns].info); + ret = mlx5_nl_switch_info(nl_route, + list[ns].ifindex, + &list[ns].info); if (ret || (!list[ns].info.representor && !list[ns].info.master)) { /* @@ -2350,9 +2343,8 @@ mlx5_os_pci_probe_pf(struct rte_pci_device *pci_dev, * Netlink, let's try to perform the task * with sysfs. */ - ret = mlx5_sysfs_switch_info - (list[ns].ifindex, - &list[ns].info); + ret = mlx5_sysfs_switch_info(list[ns].ifindex, + &list[ns].info); } if (!ret && bd >= 0) { switch (list[ns].info.name_type) { @@ -2465,10 +2457,9 @@ mlx5_os_pci_probe_pf(struct rte_pci_device *pci_dev, } ret = -1; if (nl_route >= 0) - ret = mlx5_nl_switch_info - (nl_route, - list[ns].ifindex, - &list[ns].info); + ret = mlx5_nl_switch_info(nl_route, + list[ns].ifindex, + &list[ns].info); if (ret || (!list[ns].info.representor && !list[ns].info.master)) { /* @@ -2476,9 +2467,8 @@ mlx5_os_pci_probe_pf(struct rte_pci_device *pci_dev, * Netlink, let's try to perform the task * with sysfs. 
*/ - ret = mlx5_sysfs_switch_info - (list[ns].ifindex, - &list[ns].info); + ret = mlx5_sysfs_switch_info(list[ns].ifindex, + &list[ns].info); } if (!ret && (list[ns].info.representor ^ list[ns].info.master)) { @@ -2487,11 +2477,10 @@ mlx5_os_pci_probe_pf(struct rte_pci_device *pci_dev, !list[ns].info.representor && !list[ns].info.master) { /* - * Single IB device with - * one physical port and + * Single IB device with one physical port and * attached network device. - * May be SRIOV is not enabled - * or there is no representors. + * May be SRIOV is not enabled or there is no + * representors. */ DRV_LOG(INFO, "no E-Switch support detected"); ns++; @@ -2508,10 +2497,9 @@ mlx5_os_pci_probe_pf(struct rte_pci_device *pci_dev, } /* * New kernels may add the switch_id attribute for the case - * there is no E-Switch and we wrongly recognized the - * only device as master. Override this if there is the - * single device with single port and new device name - * format present. + * there is no E-Switch and we wrongly recognized the only + * device as master. Override this if there is the single + * device with single port and new device name format present. */ if (nd == 1 && list[0].info.name_type == MLX5_PHYS_PORT_NAME_TYPE_UPLINK) { From patchwork Tue Aug 17 13:44:29 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michael Baum X-Patchwork-Id: 97000 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id A6F52A0548; Tue, 17 Aug 2021 15:46:43 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 615CC411F8; Tue, 17 Aug 2021 15:45:34 +0200 (CEST) Received: from NAM12-MW2-obe.outbound.protection.outlook.com (mail-mw2nam12on2075.outbound.protection.outlook.com [40.107.244.75]) by mails.dpdk.org (Postfix) with ESMTP id 18BA7411EA for ; Tue, 17 Aug 2021 15:45:32 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=hlX2ai5Jqr9kDF0U+jEEhlCKD3+gfZrsR/dzCPHk6YDSoZ30lu/6wtyMgcgm0+rJ0zTzaxi1dC13rfPzTZ/5bVA5JaIOefc2KN2anORRwCcZWlqb3lwKik6H/k7zl6UrJvMao0+RydHbWf1V3dMM5qAGOYHiX3yi53VcHQAQlHvi4hBHOkatVlgkYRIxmKUEI4hyPTOEJqaAjxhuEXbIkkpdwu6sSVE2vFtgbSfuXqKr23B++8JbpDY2fANxdVTFsfUUwxTybAHCcOXaHY+IlkUQ/SJN/Yq0Om0H2YU7DCY9Ak5jQmtPXcWS4raoCn7mBApeqKI1aMAjM+9YrSzlpw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=QENo6o0ltxtxbAsDVsS71dyHsJi3DS3GmBjVZvPdDww=; b=e5PxDGdH44kBGdwirpev5F2bb7Syd94ypSlbA6mfqubr6WwBNMjGHqBQ/XzvwfeT+FFk5MYzuPO+/wP8VigXurAWouPwYK26WR3BxC29i2Uak/1HScDAy/5Xq7UNafB4sczR7WlCo4j1QNCPytjTXa4otJ1WtgcGZNiOZqo8DyNvXwy646ONnnH+FsNYbux2begm8L8+tXwbtGSgvdweR4PsEktdAdK6AzXuX32Ew8t7HZ5N+lQJ9NKSQpefyyFkFs34494V+/3LyvsUxTRglk7AE0OgjIoS+8oI3YPvftciWsvQGnbSxZPxlXMSojlFNSYrn5h4yLduzz1u8KqtGA== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 216.228.112.34) smtp.rcpttodomain=dpdk.org smtp.mailfrom=nvidia.com; dmarc=pass (p=quarantine sp=none pct=100) action=none header.from=nvidia.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; 
From: Michael Baum
To:
CC: Matan Azrad , Raslan Darawsheh , Viacheslav Ovsiienko
Date: Tue, 17 Aug 2021 16:44:29 +0300
Message-ID: <20210817134441.1966618-10-michaelba@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210817134441.1966618-1-michaelba@nvidia.com>
References: <20210817134441.1966618-1-michaelba@nvidia.com>
MIME-Version: 1.0
X-Microsoft-Antispam-Message-Info:
tDOZi3d6wB5VcnZDNh55BjpWGuPiBHXm3t8xFzJEAwq1AXnsY9UdWBZ0mS0pQ6j3EkC5Yj827ZSaonzvnojl9J6Mr7l7E1gzTrK6X5pZAtZdxJfoTf9IWNzb3p3qRUTCYFaft4pxOqQkYFUgRb/bTb+LzmmhmiNbQJIBqtEXfCAki6d3WBsVowyE5wfnNhD8gDtYs9SVFh0srf8LU9JJ8upW/G55XM9ve3mohdBjjvyHle3KS96OvZUtRyYtq5+L3g/4yg7LrzxCu9h5Uh4HtAWRV7uiR0h6Zy9BY1DbFl5gzM7dRsML3APLM+vu9ShZxIJr7jbJSaP0FOEqVln9Pi4trhMGit4sc514m5dICr+9kywzlJVWghV38q6Z+u0J+De1RuNdXaIoKN4qzwIL7gBu7rr8e5Z29Jvv7emJs/GihGx0kQ2W2M4afZTdo6pI/Fef2RLEftaTevyEZiVkkGv5xucvU9793KhBDYFwCZpAhelBfgksMSF1C+LMbMlNRXebqJHemqqeCzeVlLLZ6PcFePhq+i1HHSp+bYV0zdFv7KTySMQbIWqOE2Lpsu8ZCzsgArr351UkujR5xJwtpmldjZC0FTKA8AOfz/Kea056mdvGrxQdB6sl5XWRKt2cmsGiXgxOhcEviKANUqkZYsSYZoh2kUPj3AaociWxP+16sqe1lVid9iyZXhqF6osF+YXDut3yDLBPe+jJzOUB0A== X-Forefront-Antispam-Report: CIP:216.228.112.34; CTRY:US; LANG:en; SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:mail.nvidia.com; PTR:schybrid03.nvidia.com; CAT:NONE; SFS:(4636009)(396003)(376002)(136003)(346002)(39860400002)(46966006)(36840700001)(55016002)(70206006)(86362001)(7696005)(36860700001)(356005)(82310400003)(70586007)(2906002)(2616005)(8936002)(82740400003)(36756003)(26005)(186003)(5660300002)(16526019)(6286002)(47076005)(426003)(316002)(83380400001)(107886003)(7636003)(478600001)(54906003)(4326008)(8676002)(6916009)(336012)(1076003); DIR:OUT; SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Aug 2021 13:45:29.9017 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: 996ad888-92ff-4ed5-f922-08d961854709 X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a; Ip=[216.228.112.34]; Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: CO1NAM11FT007.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: CY4PR1201MB2500 Subject: [dpdk-dev] [RFC 09/21] net/mlx5: improve spawn function X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add name as a parameter to spawn structure. Signed-off-by: Michael Baum --- drivers/net/mlx5/linux/mlx5_os.c | 24 ++++++++++-------------- drivers/net/mlx5/mlx5.c | 5 ++--- drivers/net/mlx5/mlx5.h | 1 + drivers/net/mlx5/windows/mlx5_os.c | 1 + 4 files changed, 14 insertions(+), 17 deletions(-) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index 375bc55e79..b4670fad6e 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -938,7 +938,7 @@ mlx5_representor_match(struct mlx5_dev_spawn_data *spawn, * Verbs device parameters (name, port, switch_info) to spawn. * @param config * Device configuration parameters. - * @param config + * @param eth_da * Device arguments. * * @return @@ -997,12 +997,11 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, /* Bonding device. 
*/ if (!switch_info->representor) { err = snprintf(name, sizeof(name), "%s_%s", - dpdk_dev->name, - mlx5_os_get_dev_device_name(spawn->phys_dev)); + dpdk_dev->name, spawn->phys_dev_name); } else { err = snprintf(name, sizeof(name), "%s_%s_representor_c%dpf%d%s%u", dpdk_dev->name, - mlx5_os_get_dev_device_name(spawn->phys_dev), + spawn->phys_dev_name, switch_info->ctrl_num, switch_info->pf_num, switch_info->name_type == @@ -1227,8 +1226,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, if (err) { DRV_LOG(WARNING, "can't query devx port %d on device %s", - spawn->phys_port, - mlx5_os_get_dev_device_name(spawn->phys_dev)); + spawn->phys_port, spawn->phys_dev_name); vport_info.query_flags = 0; } } @@ -1238,18 +1236,14 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, if (!priv->vport_meta_mask) { DRV_LOG(ERR, "vport zero mask for port %d" " on bonding device %s", - spawn->phys_port, - mlx5_os_get_dev_device_name - (spawn->phys_dev)); + spawn->phys_port, spawn->phys_dev_name); err = ENOTSUP; goto error; } if (priv->vport_meta_tag & ~priv->vport_meta_mask) { DRV_LOG(ERR, "invalid vport tag for port %d" " on bonding device %s", - spawn->phys_port, - mlx5_os_get_dev_device_name - (spawn->phys_dev)); + spawn->phys_port, spawn->phys_dev_name); err = ENOTSUP; goto error; } @@ -1260,8 +1254,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, (switch_info->representor || switch_info->master)) { DRV_LOG(ERR, "can't deduce vport index for port %d" " on bonding device %s", - spawn->phys_port, - mlx5_os_get_dev_device_name(spawn->phys_dev)); + spawn->phys_port, spawn->phys_dev_name); err = ENOTSUP; goto error; } else { @@ -2314,6 +2307,7 @@ mlx5_os_pci_probe_pf(struct rte_pci_device *pci_dev, list[ns].max_port = np; list[ns].phys_port = i; list[ns].phys_dev = ibv_match[0]; + list[ns].phys_dev_name = ibv_match[0]->name; list[ns].eth_dev = NULL; list[ns].pci_dev = pci_dev; list[ns].pf_bond = bd; @@ -2410,6 +2404,7 @@ mlx5_os_pci_probe_pf(struct rte_pci_device *pci_dev, list[ns].max_port = 1; list[ns].phys_port = 1; list[ns].phys_dev = ibv_match[i]; + list[ns].phys_dev_name = ibv_match[i]->name; list[ns].eth_dev = NULL; list[ns].pci_dev = pci_dev; list[ns].pf_bond = -1; @@ -2732,6 +2727,7 @@ mlx5_os_auxiliary_probe(struct rte_device *dev) spawn.phys_dev = mlx5_os_get_ibv_dev(dev); if (spawn.phys_dev == NULL) return -rte_errno; + spawn.phys_dev_name = mlx5_os_get_dev_device_name(spawn.phys_dev); ret = mlx5_auxiliary_get_ifindex(dev->name); if (ret < 0) { DRV_LOG(ERR, "failed to get ethdev ifindex: %s", dev->name); diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 02ea2e781e..08c9a6ec6f 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1107,7 +1107,7 @@ mlx5_alloc_rxtx_uars(struct mlx5_dev_ctx_shared *sh, */ struct mlx5_dev_ctx_shared * mlx5_alloc_shared_dev_ctx(const struct mlx5_dev_spawn_data *spawn, - const struct mlx5_dev_config *config) + const struct mlx5_dev_config *config) { struct mlx5_dev_ctx_shared *sh; int err = 0; @@ -1120,8 +1120,7 @@ mlx5_alloc_shared_dev_ctx(const struct mlx5_dev_spawn_data *spawn, pthread_mutex_lock(&mlx5_dev_ctx_list_mutex); /* Search for IB context by device name. 
*/ LIST_FOREACH(sh, &mlx5_dev_ctx_list, next) { - if (!strcmp(sh->ibdev_name, - mlx5_os_get_dev_device_name(spawn->phys_dev))) { + if (!strcmp(sh->ibdev_name, spawn->phys_dev_name)) { sh->refcnt++; goto exit; } diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 3581414b78..9a8e34535c 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -137,6 +137,7 @@ struct mlx5_dev_spawn_data { int numa_node; /**< Device numa node. */ struct mlx5_switch_info info; /**< Switch information. */ void *phys_dev; /**< Associated physical device. */ + const char *phys_dev_name; /**< Name of physical device. */ struct rte_eth_dev *eth_dev; /**< Associated Ethernet device. */ struct rte_pci_device *pci_dev; /**< Backend PCI device. */ struct mlx5_bond_info *bond_info; diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c index 0ff9e70d96..2f5c29662e 100644 --- a/drivers/net/mlx5/windows/mlx5_os.c +++ b/drivers/net/mlx5/windows/mlx5_os.c @@ -942,6 +942,7 @@ mlx5_os_net_probe(struct rte_device *dev) spawn.max_port = 1; spawn.phys_port = 1; spawn.phys_dev = devx_bdf_match; + spawn.phys_dev_name = mlx5_os_get_dev_device_name(devx_bdf_match); spawn.eth_dev = NULL; spawn.pci_dev = pci_dev; spawn.ifindex = -1; /* Spawn will assign */ From patchwork Tue Aug 17 13:44:30 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michael Baum X-Patchwork-Id: 97003 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 9DE41A0548; Tue, 17 Aug 2021 15:47:05 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 87A3741214; Tue, 17 Aug 2021 15:45:40 +0200 (CEST) Received: from NAM02-DM3-obe.outbound.protection.outlook.com (mail-dm3nam07on2077.outbound.protection.outlook.com [40.107.95.77]) by mails.dpdk.org (Postfix) with ESMTP id 9791F411FB for ; Tue, 17 Aug 2021 15:45:36 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=PLCZIlUybfpSQBcDzKzeW/fYIX1akbVLQqu2lkBxj6U0hc4MGXez9WYou4YkdzRrzeuYZebheTrYPkdOyX0kCZEHbOCJbqp/YCEbBn1dq7O01VFrqK0KzahcZlUpChJYD8Pw5nNQd2bGwWAdyr9mnFcUARB/enA01uDjUuEXFYgj2AyhBKXlC5XB1ONkBPfNuFLZUAq0r8fskC38PN2sPdyeBYbgebT8t80q2wa0jQ1M0xtH9jE0gln8kuGquj5SqGc0MiEoxmCW4jtDC+IGhvXVfA785NlEH5R9i8fHgmpeG4+e5hYnaLQQiG2G/Wo+dGOWUW2iCzfER72wQNItRQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=eSq61LMnycilc+5wrnNjt1Rg9/VztTjOpDczQYYXa94=; b=CjUHh8/PkezsnIbWf4ANwNTTjJzSKaU7/d/UPx8nIAC8UiJqlHYu6eUYheXSoDdWZlR+YIUbxMygzUSh7cW8H6h5JxPsVFthESS+1XOBQZHUCx49CoxlCBt155ypqXQWA1/MtunwPFa5yjfgUYSRkL32wqY0dQ3eieUsdE6CBH+TNlPuHKLQDKUyxMhUMcKZoFFWlnV0wzobcAOA8VlUlPBXMuFuaxqTSDX4kVicxtVFCoYFJmFabe+OtwaD9hK10ESkYxxLssrExIF/RbmINfbI8flS40sUw4m3o6wd3zXuENu23Ph0WnYA+4FqYOKH9pAVVa1hz3w9FPlT6d44eQ== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 216.228.112.36) smtp.rcpttodomain=dpdk.org smtp.mailfrom=nvidia.com; dmarc=pass (p=quarantine sp=none pct=100) action=none header.from=nvidia.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; 
From: Michael Baum
To:
CC: Matan Azrad , Raslan Darawsheh , Viacheslav Ovsiienko
Date: Tue, 17 Aug 2021 16:44:30 +0300
Message-ID: <20210817134441.1966618-11-michaelba@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210817134441.1966618-1-michaelba@nvidia.com>
References: <20210817134441.1966618-1-michaelba@nvidia.com>
MIME-Version: 1.0
X-Microsoft-Antispam-Message-Info:
hjF3O7/CSKWgUOim2VcNRVpvY/Oobm5VlZ9YCLjeICC+qbz18h4SjMGPYBz9iIErmYl7MlF6JiinzYBFUvYNDlhqsb+wRJ1RXZZpX1AHCP0YMmStWL4uusC5cUZB7wkvwZcPrdEpn7+Iq3LwJMnTXB9iU0Qv6S4cCeW547yq9eiLeB0sN2v8onAeFVCN1XDxiXEXfVMd+6zdM/+848hTHXVL6izoOoKhlcEYgizbMTlxWhilb8iPX+EeZ38J4wC3iznvbhAcIE8ZKHRbkppnNMqQ4AXCL/UGA1FB2F2hf4tFry/VYHvdzFanYqACGXkJAibHeuFyKglhr8tyHsujEstSv5Y9ZIIE1Dfa7ROUk3j1BhFy9S749VChTebApprwt1oxB68JzTHWVnG6qQjfGEBj8QVbfp1aj8FBmJuOs8nm4W2UFHc/xoEeY0/jcVVbD1PEM31W3IeDYqbR4CRwXtSiiL1Pgyc1sgypTz0XLMkdyuOf2JwXcE335+e8bWcbNtDr9P0rJtJ+oB+RgbcU4Rl7zFOsgLLJyLkARpP9KjNN7SoJF0U32Vo1Ts0BVJNDXHT8u/V/gCqSzSk3Zy1afYRa/B2ZrqjqgDofvwBCHUnFf3I6rAeNCcMZ+w5c29eO+7fFKMH3XHsCAzI94mIWbjppZM+s3OX9JU+iPC0o4ZyzLQttf+J/b762yd8gFQWxiKTbq1Z+d4xTaGqnuyft18pTJliMqs38Q9gbVh/CKyU= X-Forefront-Antispam-Report: CIP:216.228.112.36; CTRY:US; LANG:en; SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:mail.nvidia.com; PTR:schybrid05.nvidia.com; CAT:NONE; SFS:(4636009)(396003)(39860400002)(346002)(376002)(136003)(36840700001)(46966006)(70206006)(7636003)(6286002)(70586007)(86362001)(356005)(186003)(7696005)(82740400003)(107886003)(26005)(6916009)(8676002)(2906002)(8936002)(16526019)(82310400003)(47076005)(36860700001)(36756003)(30864003)(336012)(1076003)(54906003)(426003)(5660300002)(55016002)(6666004)(316002)(478600001)(2616005)(4326008)(83380400001)(579004)(559001)(309714004); DIR:OUT; SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Aug 2021 13:45:32.9509 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: b4d787b6-dcc6-4c0b-ec62-08d9618548ee X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a; Ip=[216.228.112.36]; Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: BN8NAM11FT030.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW3PR12MB4380 Subject: [dpdk-dev] [RFC 10/21] net/mlx5: use context device structure X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Use common context device structure as a sh field. 
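
A minimal sketch of the intended layout follows; only the ctx, pd and numa_node fields and the sh->dev_ctx accesses are taken from the diff below, everything else (including the helper) is an assumed illustration, not the patch's exact definitions:

struct mlx5_dev_ctx {                   /* Common, bus-agnostic device context. */
	void *ctx;                      /* Verbs/DevX device context. */
	void *pd;                       /* Protection Domain. */
	int numa_node;                  /* Device NUMA node. */
};

struct mlx5_dev_ctx_shared {
	/* ... existing shared fields ... */
	struct mlx5_dev_ctx *dev_ctx;   /* Common context kept as an sh field. */
};

/* Hypothetical helper: call sites move from sh->ctx to the common structure. */
static inline void *
mlx5_sh_dev_ctx_get(struct mlx5_dev_ctx_shared *sh)
{
	return sh->dev_ctx->ctx;
}

The diff then mechanically replaces direct sh->ctx and sh->pd uses with sh->dev_ctx->ctx and sh->dev_ctx->pd.
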
Signed-off-by: Michael Baum --- drivers/common/mlx5/mlx5_common.c | 2 +- drivers/common/mlx5/mlx5_common.h | 6 +- drivers/common/mlx5/version.map | 2 +- drivers/common/mlx5/windows/mlx5_common_os.c | 2 +- drivers/net/mlx5/linux/mlx5_ethdev_os.c | 8 +- drivers/net/mlx5/linux/mlx5_mp_os.c | 9 +- drivers/net/mlx5/linux/mlx5_os.c | 432 ++++++++++--------- drivers/net/mlx5/linux/mlx5_verbs.c | 55 +-- drivers/net/mlx5/mlx5.c | 103 +++-- drivers/net/mlx5/mlx5.h | 12 +- drivers/net/mlx5/mlx5_devx.c | 34 +- drivers/net/mlx5/mlx5_flow.c | 6 +- drivers/net/mlx5/mlx5_flow_aso.c | 24 +- drivers/net/mlx5/mlx5_flow_dv.c | 51 +-- drivers/net/mlx5/mlx5_flow_verbs.c | 4 +- drivers/net/mlx5/mlx5_mr.c | 14 +- drivers/net/mlx5/mlx5_txpp.c | 17 +- drivers/net/mlx5/windows/mlx5_ethdev_os.c | 14 +- drivers/net/mlx5/windows/mlx5_os.c | 113 ++--- 19 files changed, 453 insertions(+), 455 deletions(-) diff --git a/drivers/common/mlx5/mlx5_common.c b/drivers/common/mlx5/mlx5_common.c index be3d0f2627..ffd2c2c129 100644 --- a/drivers/common/mlx5/mlx5_common.c +++ b/drivers/common/mlx5/mlx5_common.c @@ -152,7 +152,7 @@ mlx5_common_args_check(const char *key, const char *val, void *opaque) * @return * 0 on success, a negative errno value otherwise and rte_errno is set. */ -static int +int mlx5_parse_db_map_arg(struct rte_devargs *devargs, int *dbnc) { struct rte_kvargs *kvlist; diff --git a/drivers/common/mlx5/mlx5_common.h b/drivers/common/mlx5/mlx5_common.h index 10061f364f..c4e86c3175 100644 --- a/drivers/common/mlx5/mlx5_common.h +++ b/drivers/common/mlx5/mlx5_common.h @@ -459,14 +459,16 @@ __rte_internal bool mlx5_dev_is_pci(const struct rte_device *dev); +__rte_internal +int +mlx5_parse_db_map_arg(struct rte_devargs *devargs, int *dbnc); + /* mlx5_common_os.c */ int mlx5_os_devx_open_device(struct mlx5_dev_ctx *dev_ctx, struct rte_device *dev, int dbnc, uint32_t classes); int mlx5_os_pd_create(struct mlx5_dev_ctx *dev_ctx); -__rte_internal -struct devx_device_bdf *mlx5_os_get_devx_device(struct rte_device *dev); #endif /* RTE_PMD_MLX5_COMMON_H_ */ diff --git a/drivers/common/mlx5/version.map b/drivers/common/mlx5/version.map index 18856c198e..a1a8bae5bd 100644 --- a/drivers/common/mlx5/version.map +++ b/drivers/common/mlx5/version.map @@ -9,6 +9,7 @@ INTERNAL { mlx5_common_init; + mlx5_parse_db_map_arg; # WINDOWS_NO_EXPORT mlx5_dev_ctx_release; mlx5_dev_ctx_prepare; @@ -145,7 +146,6 @@ INTERNAL { mlx5_os_dealloc_pd; mlx5_os_dereg_mr; mlx5_os_get_ibv_dev; # WINDOWS_NO_EXPORT - mlx5_os_get_devx_device; mlx5_os_reg_mr; mlx5_os_umem_dereg; mlx5_os_umem_reg; diff --git a/drivers/common/mlx5/windows/mlx5_common_os.c b/drivers/common/mlx5/windows/mlx5_common_os.c index 12819383c1..5d178b0452 100644 --- a/drivers/common/mlx5/windows/mlx5_common_os.c +++ b/drivers/common/mlx5/windows/mlx5_common_os.c @@ -144,7 +144,7 @@ mlx5_match_devx_devices_to_addr(struct devx_device_bdf *devx_bdf, * @return * A device match on success, NULL otherwise and rte_errno is set. 
*/ -struct devx_device_bdf * +static struct devx_device_bdf * mlx5_os_get_devx_device(struct rte_device *dev) { int n; diff --git a/drivers/net/mlx5/linux/mlx5_ethdev_os.c b/drivers/net/mlx5/linux/mlx5_ethdev_os.c index f34133e2c6..b4bbf841cc 100644 --- a/drivers/net/mlx5/linux/mlx5_ethdev_os.c +++ b/drivers/net/mlx5/linux/mlx5_ethdev_os.c @@ -324,7 +324,7 @@ int mlx5_read_clock(struct rte_eth_dev *dev, uint64_t *clock) { struct mlx5_priv *priv = dev->data->dev_private; - struct ibv_context *ctx = priv->sh->ctx; + struct ibv_context *ctx = priv->sh->dev_ctx->ctx; struct ibv_values_ex values; int err = 0; @@ -778,7 +778,7 @@ mlx5_dev_interrupt_handler(void *cb_arg) struct rte_eth_dev *dev; uint32_t tmp; - if (mlx5_glue->get_async_event(sh->ctx, &event)) + if (mlx5_glue->get_async_event(sh->dev_ctx->ctx, &event)) break; /* Retrieve and check IB port index. */ tmp = (uint32_t)event.element.port_num; @@ -987,10 +987,10 @@ mlx5_set_link_up(struct rte_eth_dev *dev) int mlx5_is_removed(struct rte_eth_dev *dev) { - struct ibv_device_attr device_attr; + struct ibv_device_attr dev_attr; struct mlx5_priv *priv = dev->data->dev_private; - if (mlx5_glue->query_device(priv->sh->ctx, &device_attr) == EIO) + if (mlx5_glue->query_device(priv->sh->dev_ctx->ctx, &dev_attr) == EIO) return 1; return 0; } diff --git a/drivers/net/mlx5/linux/mlx5_mp_os.c b/drivers/net/mlx5/linux/mlx5_mp_os.c index 3a4aa766f8..53e372694c 100644 --- a/drivers/net/mlx5/linux/mlx5_mp_os.c +++ b/drivers/net/mlx5/linux/mlx5_mp_os.c @@ -29,6 +29,7 @@ mlx5_mp_os_primary_handle(const struct rte_mp_msg *mp_msg, const void *peer) (const struct mlx5_mp_param *)mp_msg->param; struct rte_eth_dev *dev; struct mlx5_priv *priv; + struct mlx5_dev_ctx *dev_ctx; struct mr_cache_entry entry; uint32_t lkey; int ret; @@ -41,10 +42,11 @@ mlx5_mp_os_primary_handle(const struct rte_mp_msg *mp_msg, const void *peer) } dev = &rte_eth_devices[param->port_id]; priv = dev->data->dev_private; + dev_ctx = priv->sh->dev_ctx; switch (param->type) { case MLX5_MP_REQ_CREATE_MR: mp_init_msg(&priv->mp_id, &mp_res, param->type); - lkey = mlx5_mr_create_primary(priv->sh->pd, + lkey = mlx5_mr_create_primary(dev_ctx->pd, &priv->sh->share_cache, &entry, param->args.addr, priv->config.mr_ext_memseg_en); @@ -55,7 +57,7 @@ mlx5_mp_os_primary_handle(const struct rte_mp_msg *mp_msg, const void *peer) case MLX5_MP_REQ_VERBS_CMD_FD: mp_init_msg(&priv->mp_id, &mp_res, param->type); mp_res.num_fds = 1; - mp_res.fds[0] = ((struct ibv_context *)priv->sh->ctx)->cmd_fd; + mp_res.fds[0] = ((struct ibv_context *)dev_ctx->ctx)->cmd_fd; res->result = 0; ret = rte_mp_reply(&mp_res, peer); break; @@ -202,7 +204,8 @@ mp_req_on_rxtx(struct rte_eth_dev *dev, enum mlx5_mp_req_type type) mp_init_msg(&priv->mp_id, &mp_req, type); if (type == MLX5_MP_REQ_START_RXTX) { mp_req.num_fds = 1; - mp_req.fds[0] = ((struct ibv_context *)priv->sh->ctx)->cmd_fd; + mp_req.fds[0] = + ((struct ibv_context *)priv->sh->dev_ctx->ctx)->cmd_fd; } ret = rte_mp_request_sync(&mp_req, &mp_rep, &ts); if (ret) { diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index b4670fad6e..e2a7c3d09c 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -214,7 +214,7 @@ mlx5_os_get_dev_attr(void *ctx, struct mlx5_dev_attr *device_attr) static void * mlx5_alloc_verbs_buf(size_t size, void *data) { - struct mlx5_dev_ctx_shared *sh = data; + struct mlx5_dev_ctx *dev_ctx = data; void *ret; size_t alignment = rte_mem_page_size(); if (alignment == (size_t)-1) { @@ -224,7 
+224,7 @@ mlx5_alloc_verbs_buf(size_t size, void *data) } MLX5_ASSERT(data != NULL); - ret = mlx5_malloc(0, size, alignment, sh->numa_node); + ret = mlx5_malloc(0, size, alignment, dev_ctx->numa_node); if (!ret && size) rte_errno = ENOMEM; return ret; @@ -290,7 +290,7 @@ __mlx5_discovery_misc5_cap(struct mlx5_priv *priv) metadata_reg_c_0, 0xffff); } #endif - matcher = mlx5_glue->dv_create_flow_matcher(priv->sh->ctx, + matcher = mlx5_glue->dv_create_flow_matcher(priv->sh->dev_ctx->ctx, &dv_attr, tbl); if (matcher) { priv->sh->misc5_cap = 1; @@ -389,7 +389,7 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv) void *domain; /* Reference counter is zero, we should initialize structures. */ - domain = mlx5_glue->dr_create_domain(sh->ctx, + domain = mlx5_glue->dr_create_domain(sh->dev_ctx->ctx, MLX5DV_DR_DOMAIN_TYPE_NIC_RX); if (!domain) { DRV_LOG(ERR, "ingress mlx5dv_dr_create_domain failed"); @@ -397,7 +397,7 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv) goto error; } sh->rx_domain = domain; - domain = mlx5_glue->dr_create_domain(sh->ctx, + domain = mlx5_glue->dr_create_domain(sh->dev_ctx->ctx, MLX5DV_DR_DOMAIN_TYPE_NIC_TX); if (!domain) { DRV_LOG(ERR, "egress mlx5dv_dr_create_domain failed"); @@ -407,8 +407,8 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv) sh->tx_domain = domain; #ifdef HAVE_MLX5DV_DR_ESWITCH if (priv->config.dv_esw_en) { - domain = mlx5_glue->dr_create_domain - (sh->ctx, MLX5DV_DR_DOMAIN_TYPE_FDB); + domain = mlx5_glue->dr_create_domain(sh->dev_ctx->ctx, + MLX5DV_DR_DOMAIN_TYPE_FDB); if (!domain) { DRV_LOG(ERR, "FDB mlx5dv_dr_create_domain failed"); err = errno; @@ -816,7 +816,7 @@ static void mlx5_queue_counter_id_prepare(struct rte_eth_dev *dev) { struct mlx5_priv *priv = dev->data->dev_private; - void *ctx = priv->sh->ctx; + void *ctx = priv->sh->dev_ctx->ctx; priv->q_counters = mlx5_devx_cmd_queue_counter_alloc(ctx); if (!priv->q_counters) { @@ -833,7 +833,7 @@ mlx5_queue_counter_id_prepare(struct rte_eth_dev *dev) .wq_type = IBV_WQT_RQ, .max_wr = 1, .max_sge = 1, - .pd = priv->sh->pd, + .pd = priv->sh->dev_ctx->pd, .cq = cq, }); if (wq) { @@ -934,6 +934,8 @@ mlx5_representor_match(struct mlx5_dev_spawn_data *spawn, * * @param dpdk_dev * Backing DPDK device. + * @param dev_ctx + * Pointer to the context device data structure. * @param spawn * Verbs device parameters (name, port, switch_info) to spawn. 
* @param config @@ -950,6 +952,7 @@ mlx5_representor_match(struct mlx5_dev_spawn_data *spawn, */ static struct rte_eth_dev * mlx5_dev_spawn(struct rte_device *dpdk_dev, + struct mlx5_dev_ctx *dev_ctx, struct mlx5_dev_spawn_data *spawn, struct mlx5_dev_config *config, struct rte_eth_devargs *eth_da) @@ -1073,10 +1076,9 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, config->dv_xmeta_en = MLX5_XMETA_MODE_META16; } mlx5_malloc_mem_select(config->sys_mem_en); - sh = mlx5_alloc_shared_dev_ctx(spawn, config); + sh = mlx5_alloc_shared_dev_ctx(spawn, dev_ctx, config); if (!sh) return NULL; - config->devx = sh->devx; #ifdef HAVE_MLX5DV_DR_ACTION_DEST_DEVX_TIR config->dest_tir = 1; #endif @@ -1093,7 +1095,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, #ifdef HAVE_IBV_DEVICE_STRIDING_RQ_SUPPORT dv_attr.comp_mask |= MLX5DV_CONTEXT_MASK_STRIDING_RQ; #endif - mlx5_glue->dv_query_device(sh->ctx, &dv_attr); + mlx5_glue->dv_query_device(sh->dev_ctx->ctx, &dv_attr); if (dv_attr.flags & MLX5DV_CONTEXT_FLAGS_MPW_ALLOWED) { if (dv_attr.flags & MLX5DV_CONTEXT_FLAGS_ENHANCED_MPW) { DRV_LOG(DEBUG, "enhanced MPW is supported"); @@ -1170,7 +1172,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, #endif config->mpls_en = mpls_en; /* Check port status. */ - err = mlx5_glue->query_port(sh->ctx, spawn->phys_port, &port_attr); + err = mlx5_glue->query_port(sh->dev_ctx->ctx, spawn->phys_port, + &port_attr); if (err) { DRV_LOG(ERR, "port query failed: %s", strerror(err)); goto error; @@ -1220,7 +1223,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, * register is defined by mask. */ if (switch_info->representor || switch_info->master) { - err = mlx5_glue->devx_port_query(sh->ctx, + err = mlx5_glue->devx_port_query(sh->dev_ctx->ctx, spawn->phys_port, &vport_info); if (err) { @@ -1377,7 +1380,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, config->mps == MLX5_MPW ? "legacy " : "", config->mps != MLX5_MPW_DISABLED ? "enabled" : "disabled"); if (config->devx) { - err = mlx5_devx_cmd_query_hca_attr(sh->ctx, &config->hca_attr); + err = mlx5_devx_cmd_query_hca_attr(sh->dev_ctx->ctx, + &config->hca_attr); if (err) { err = -err; goto error; @@ -1600,7 +1604,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, err = config->hca_attr.access_register_user ? mlx5_devx_cmd_register_read - (sh->ctx, MLX5_REGISTER_ID_MTUTC, 0, + (sh->dev_ctx->ctx, MLX5_REGISTER_ID_MTUTC, 0, reg, MLX5_ST_SZ_DW(register_mtutc)) : ENOTSUP; if (!err) { uint32_t ts_mode; @@ -1741,12 +1745,12 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, if (!priv->mtr_profile_tbl) goto error; /* Hint libmlx5 to use PMD allocator for data plane resources */ - mlx5_glue->dv_set_context_attr(sh->ctx, + mlx5_glue->dv_set_context_attr(sh->dev_ctx->ctx, MLX5DV_CTX_ATTR_BUF_ALLOCATORS, (void *)((uintptr_t)&(struct mlx5dv_ctx_allocators){ .alloc = &mlx5_alloc_verbs_buf, .free = &mlx5_free_verbs_buf, - .data = sh, + .data = dev_ctx, })); /* Bring Ethernet device up. */ DRV_LOG(DEBUG, "port %u forcing Ethernet interface up", @@ -1923,9 +1927,10 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, eth_dev->data->dev_private = NULL; } if (eth_dev != NULL) { - /* mac_addrs must not be freed alone because part of + /* + * mac_addrs must not be freed alone because part of * dev_private - **/ + */ eth_dev->data->mac_addrs = NULL; rte_eth_dev_release_port(eth_dev); } @@ -2144,6 +2149,8 @@ mlx5_os_config_default(struct mlx5_dev_config *config) * * @param[in] pci_dev * PCI device information. + * @param dev_ctx + * Pointer to the context device data structure. 
* @param[in] req_eth_da * Requested ethdev device argument. * @param[in] owner_id @@ -2154,8 +2161,9 @@ mlx5_os_config_default(struct mlx5_dev_config *config) */ static int mlx5_os_pci_probe_pf(struct rte_pci_device *pci_dev, + struct mlx5_dev_ctx *dev_ctx, struct rte_eth_devargs *req_eth_da, - uint16_t owner_id) + uint16_t owner_id, uint8_t devx) { struct ibv_device **ibv_list; /* @@ -2181,13 +2189,14 @@ mlx5_os_pci_probe_pf(struct rte_pci_device *pci_dev, * < 0 - no bonding device (single one) * >= 0 - bonding device (value is slave PF index) */ - int bd = -1; + int bd; struct mlx5_dev_spawn_data *list = NULL; struct mlx5_dev_config dev_config; unsigned int dev_config_vf; struct rte_eth_devargs eth_da = *req_eth_da; struct rte_pci_addr owner_pci = pci_dev->addr; /* Owner PF. */ struct mlx5_bond_info bond_info; + const char *ibdev_name = mlx5_os_get_ctx_device_name(dev_ctx->ctx); int ret = -1; errno = 0; @@ -2206,38 +2215,22 @@ mlx5_os_pci_probe_pf(struct rte_pci_device *pci_dev, int nl_rdma = mlx5_nl_init(NETLINK_RDMA); unsigned int i; - while (ret-- > 0) { - struct rte_pci_addr pci_addr; + bd = mlx5_device_bond_pci_match(ibdev_name, &owner_pci, nl_rdma, + owner_id, &bond_info); + if (bd >= 0) { + /* Amend owner pci address if owner PF ID specified. */ + if (eth_da.nb_representor_ports) + owner_pci.function += owner_id; + DRV_LOG(INFO, + "PCI information matches for slave %d bonding device \"%s\".", + bd, ibdev_name); + nd++; + } else { + while (ret-- > 0) { + struct rte_pci_addr pci_addr; - DRV_LOG(DEBUG, "checking device \"%s\"", ibv_list[ret]->name); - bd = mlx5_device_bond_pci_match(ibv_list[ret]->name, &owner_pci, - nl_rdma, owner_id, &bond_info); - if (bd >= 0) { - /* - * Bonding device detected. Only one match is allowed, - * the bonding is supported over multi-port IB device, - * there should be no matches on representor PCI - * functions or non VF LAG bonding devices with - * specified address. - */ - if (nd) { - DRV_LOG(ERR, - "multiple PCI match on bonding device" - "\"%s\" found", ibv_list[ret]->name); - rte_errno = ENOENT; - ret = -rte_errno; - goto exit; - } - /* Amend owner pci address if owner PF ID specified. */ - if (eth_da.nb_representor_ports) - owner_pci.function += owner_id; - DRV_LOG(INFO, - "PCI information matches for slave %d bonding device \"%s\"", - bd, ibv_list[ret]->name); - ibv_match[nd++] = ibv_list[ret]; - break; - } else { - /* Bonding device not found. */ + DRV_LOG(DEBUG, "checking device \"%s\"", + ibv_list[ret]->name); if (mlx5_get_pci_addr(ibv_list[ret]->ibdev_path, &pci_addr)) continue; @@ -2246,22 +2239,26 @@ mlx5_os_pci_probe_pf(struct rte_pci_device *pci_dev, owner_pci.devid != pci_addr.devid || owner_pci.function != pci_addr.function) continue; - DRV_LOG(INFO, "PCI information matches for device \"%s\"", + DRV_LOG(INFO, + "PCI information matches for device \"%s\"", ibv_list[ret]->name); ibv_match[nd++] = ibv_list[ret]; } } ibv_match[nd] = NULL; - if (!nd) { - /* No device matches, just complain and bail out. */ - DRV_LOG(WARNING, - "no Verbs device matches PCI device " PCI_PRI_FMT "," - " are kernel drivers loaded?", - owner_pci.domain, owner_pci.bus, - owner_pci.devid, owner_pci.function); - rte_errno = ENOENT; - ret = -rte_errno; - goto exit; + if (bd >= 0 && nd > 1) { + /* + * Bonding device detected. Only one match is allowed, the + * bonding is supported over multi-port IB device, there should + * be no matches on representor PCI functions or non VF LAG + * bonding devices with specified address. 
+ */ + DRV_LOG(ERR, + "Multiple PCI match on bonding device \"%s\" found.", + ibdev_name); + rte_errno = ENOENT; + ret = -rte_errno; + goto exit; } if (nd == 1) { /* @@ -2270,11 +2267,11 @@ mlx5_os_pci_probe_pf(struct rte_pci_device *pci_dev, * number and check the representors existence. */ if (nl_rdma >= 0) - np = mlx5_nl_portnum(nl_rdma, ibv_match[0]->name); + np = mlx5_nl_portnum(nl_rdma, ibdev_name); if (!np) DRV_LOG(WARNING, "Cannot get IB device \"%s\" ports number.", - ibv_match[0]->name); + ibdev_name); if (bd >= 0 && !np) { DRV_LOG(ERR, "Cannot get ports for bonding device."); rte_errno = ENOENT; @@ -2306,15 +2303,12 @@ mlx5_os_pci_probe_pf(struct rte_pci_device *pci_dev, list[ns].bond_info = &bond_info; list[ns].max_port = np; list[ns].phys_port = i; - list[ns].phys_dev = ibv_match[0]; - list[ns].phys_dev_name = ibv_match[0]->name; + list[ns].phys_dev_name = ibdev_name; list[ns].eth_dev = NULL; list[ns].pci_dev = pci_dev; list[ns].pf_bond = bd; - list[ns].ifindex = mlx5_nl_ifindex - (nl_rdma, - mlx5_os_get_dev_device_name - (list[ns].phys_dev), i); + list[ns].ifindex = mlx5_nl_ifindex(nl_rdma, + ibdev_name, i); if (!list[ns].ifindex) { /* * No network interface index found for the @@ -2403,17 +2397,15 @@ mlx5_os_pci_probe_pf(struct rte_pci_device *pci_dev, list[ns].bond_info = NULL; list[ns].max_port = 1; list[ns].phys_port = 1; - list[ns].phys_dev = ibv_match[i]; - list[ns].phys_dev_name = ibv_match[i]->name; + list[ns].phys_dev_name = ibdev_name; list[ns].eth_dev = NULL; list[ns].pci_dev = pci_dev; list[ns].pf_bond = -1; list[ns].ifindex = 0; if (nl_rdma >= 0) - list[ns].ifindex = mlx5_nl_ifindex - (nl_rdma, - mlx5_os_get_dev_device_name - (list[ns].phys_dev), 1); + list[ns].ifindex = mlx5_nl_ifindex(nl_rdma, + ibdev_name, + 1); if (!list[ns].ifindex) { char ifname[IF_NAMESIZE]; @@ -2477,7 +2469,7 @@ mlx5_os_pci_probe_pf(struct rte_pci_device *pci_dev, * May be SRIOV is not enabled or there is no * representors. */ - DRV_LOG(INFO, "no E-Switch support detected"); + DRV_LOG(INFO, "No E-Switch support detected."); ns++; break; } @@ -2546,12 +2538,11 @@ mlx5_os_pci_probe_pf(struct rte_pci_device *pci_dev, /* Default configuration. */ mlx5_os_config_default(&dev_config); + dev_config.devx = devx; dev_config.vf = dev_config_vf; dev_config.allow_duplicate_pattern = 1; - list[i].numa_node = pci_dev->device.numa_node; - list[i].eth_dev = mlx5_dev_spawn(&pci_dev->device, - &list[i], - &dev_config, + list[i].eth_dev = mlx5_dev_spawn(&pci_dev->device, dev_ctx, + &list[i], &dev_config, ð_da); if (!list[i].eth_dev) { if (rte_errno != EBUSY && rte_errno != EEXIST) @@ -2671,7 +2662,8 @@ mlx5_os_parse_eth_devargs(struct rte_device *dev, * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int -mlx5_os_pci_probe(struct rte_pci_device *pci_dev) +mlx5_os_pci_probe(struct rte_pci_device *pci_dev, struct mlx5_dev_ctx *dev_ctx, + uint8_t devx) { struct rte_eth_devargs eth_da = { .nb_ports = 0 }; int ret = 0; @@ -2684,8 +2676,8 @@ mlx5_os_pci_probe(struct rte_pci_device *pci_dev) if (eth_da.nb_ports > 0) { /* Iterate all port if devargs pf is range: "pf[0-1]vf[...]". 
*/ for (p = 0; p < eth_da.nb_ports; p++) { - ret = mlx5_os_pci_probe_pf(pci_dev, ð_da, - eth_da.ports[p]); + ret = mlx5_os_pci_probe_pf(pci_dev, dev_ctx, ð_da, + eth_da.ports[p], devx); if (ret) break; } @@ -2698,14 +2690,15 @@ mlx5_os_pci_probe(struct rte_pci_device *pci_dev) mlx5_net_remove(&pci_dev->device); } } else { - ret = mlx5_os_pci_probe_pf(pci_dev, ð_da, 0); + ret = mlx5_os_pci_probe_pf(pci_dev, dev_ctx, ð_da, 0, devx); } return ret; } /* Probe a single SF device on auxiliary bus, no representor support. */ static int -mlx5_os_auxiliary_probe(struct rte_device *dev) +mlx5_os_auxiliary_probe(struct rte_device *dev, struct mlx5_dev_ctx *dev_ctx, + uint8_t devx) { struct rte_eth_devargs eth_da = { .nb_ports = 0 }; struct mlx5_dev_config config; @@ -2721,22 +2714,19 @@ mlx5_os_auxiliary_probe(struct rte_device *dev) /* Set default config data. */ mlx5_os_config_default(&config); config.sf = 1; + config.devx = devx; /* Init spawn data. */ spawn.max_port = 1; spawn.phys_port = 1; - spawn.phys_dev = mlx5_os_get_ibv_dev(dev); - if (spawn.phys_dev == NULL) - return -rte_errno; - spawn.phys_dev_name = mlx5_os_get_dev_device_name(spawn.phys_dev); + spawn.phys_dev_name = mlx5_os_get_ctx_device_name(dev_ctx->ctx); ret = mlx5_auxiliary_get_ifindex(dev->name); if (ret < 0) { DRV_LOG(ERR, "failed to get ethdev ifindex: %s", dev->name); return ret; } spawn.ifindex = ret; - spawn.numa_node = dev->numa_node; /* Spawn device. */ - eth_dev = mlx5_dev_spawn(dev, &spawn, &config, ð_da); + eth_dev = mlx5_dev_spawn(dev, dev_ctx, &spawn, &config, ð_da); if (eth_dev == NULL) return -rte_errno; /* Post create. */ @@ -2750,38 +2740,8 @@ mlx5_os_auxiliary_probe(struct rte_device *dev) return 0; } -/** - * Net class driver callback to probe a device. - * - * This function probe PCI bus device(s) or a single SF on auxiliary bus. - * - * @param[in] dev - * Pointer to the generic device. - * - * @return - * 0 on success, the function cannot fail. - */ -int -mlx5_os_net_probe(struct rte_device *dev) -{ - int ret; - - if (rte_eal_process_type() == RTE_PROC_PRIMARY) - mlx5_pmd_socket_init(); - ret = mlx5_init_once(); - if (ret) { - DRV_LOG(ERR, "unable to init PMD global data: %s", - strerror(rte_errno)); - return -rte_errno; - } - if (mlx5_dev_is_pci(dev)) - return mlx5_os_pci_probe(RTE_DEV_TO_PCI(dev)); - else - return mlx5_os_auxiliary_probe(dev); -} - static int -mlx5_config_doorbell_mapping_env(const struct mlx5_dev_config *config) +mlx5_config_doorbell_mapping_env(int dbnc) { char *env; int value; @@ -2790,11 +2750,11 @@ mlx5_config_doorbell_mapping_env(const struct mlx5_dev_config *config) /* Get environment variable to store. */ env = getenv(MLX5_SHUT_UP_BF); value = env ? !!strcmp(env, "0") : MLX5_ARG_UNSET; - if (config->dbnc == MLX5_ARG_UNSET) + if (dbnc == MLX5_ARG_UNSET) setenv(MLX5_SHUT_UP_BF, MLX5_SHUT_UP_BF_DEFAULT, 1); else setenv(MLX5_SHUT_UP_BF, - config->dbnc == MLX5_TXDB_NCACHED ? "1" : "0", 1); + dbnc == MLX5_TXDB_NCACHED ? "1" : "0", 1); return value; } @@ -2810,104 +2770,163 @@ mlx5_restore_doorbell_mapping_env(int value) } /** - * Extract pdn of PD object using DV API. + * Function API to open IB device using Verbs. + * + * This function calls the Linux glue APIs to open a device. * - * @param[in] pd - * Pointer to the verbs PD object. - * @param[out] pdn - * Pointer to the PD object number variable. + * @param dev_ctx + * Pointer to the context device data structure. + * @param dev + * Pointer to the generic device. 
+ * @param dbnc + * Device argument help configure the environment variable. * * @return - * 0 on success, error value otherwise. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ -int -mlx5_os_get_pdn(void *pd, uint32_t *pdn) +static int +mlx5_verbs_open_device(struct mlx5_dev_ctx *dev_ctx, struct rte_device *dev, + int dbnc) { -#ifdef HAVE_IBV_FLOW_DV_SUPPORT - struct mlx5dv_obj obj; - struct mlx5dv_pd pd_info; - int ret = 0; + struct ibv_device *ibv; + struct ibv_context *ctx = NULL; + int dbmap_env; - obj.pd.in = pd; - obj.pd.out = &pd_info; - ret = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_PD); - if (ret) { - DRV_LOG(DEBUG, "Fail to get PD object info"); + ibv = mlx5_os_get_ibv_dev(dev); + if (!ibv) + return -rte_errno; + DRV_LOG(INFO, "Dev information matches for device \"%s\".", ibv->name); + /* + * Configure environment variable "MLX5_BF_SHUT_UP" before the device + * creation. The rdma_core library checks the variable at device + * creation and stores the result internally. + */ + dbmap_env = mlx5_config_doorbell_mapping_env(dbnc); + /* Try to open IB device with Verbs. */ + errno = 0; + ctx = mlx5_glue->open_device(ibv); + /* + * The environment variable is not needed anymore, all device creation + * attempts are completed. + */ + mlx5_restore_doorbell_mapping_env(dbmap_env); + if (!ctx) { + DRV_LOG(ERR, "Failed to open IB device \"%s\".", ibv->name); + rte_errno = errno ? errno : ENODEV; + return -rte_errno; + } + /* Hint libmlx5 to use PMD allocator for data plane resources */ + mlx5_glue->dv_set_context_attr(ctx, MLX5DV_CTX_ATTR_BUF_ALLOCATORS, + (void *)((uintptr_t)&(struct mlx5dv_ctx_allocators){ + .alloc = &mlx5_alloc_verbs_buf, + .free = &mlx5_free_verbs_buf, + .data = dev_ctx, + })); + dev_ctx->ctx = ctx; + return 0; +} + +/** + * Initialize context device and allocate all its resources. + * + * @param dev_ctx + * Pointer to the context device data structure. + * @param dev + * Pointer to mlx5 device structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +static int +mlx5_verbs_dev_ctx_prepare(struct mlx5_dev_ctx *dev_ctx, struct rte_device *dev) +{ + int dbnc = MLX5_ARG_UNSET; + int ret; + + /* + * Parse Tx doorbell mapping parameter. It helps to configure + * environment variable "MLX5_BF_SHUT_UP" before the device creation. + */ + ret = mlx5_parse_db_map_arg(dev->devargs, &dbnc); + if (ret < 0) + return ret; + /* Open device using Verbs. */ + ret = mlx5_verbs_open_device(dev_ctx, dev, dbnc); + if (ret < 0) return ret; + /* Allocate Protection Domain object. */ + dev_ctx->pd = mlx5_glue->alloc_pd(dev_ctx->ctx); + if (dev_ctx->pd == NULL) { + DRV_LOG(ERR, "Failed to allocate PD."); + rte_errno = errno ? errno : ENOMEM; + claim_zero(mlx5_glue->close_device(dev_ctx->ctx)); + dev_ctx->ctx = NULL; + return -rte_errno; } - *pdn = pd_info.pdn; return 0; -#else - (void)pd; - (void)pdn; - return -ENOTSUP; -#endif /* HAVE_IBV_FLOW_DV_SUPPORT */ } + /** - * Function API to open IB device. + * Net class driver callback to probe a device. * - * This function calls the Linux glue APIs to open a device. + * This function probe PCI bus device(s) or a single SF on auxiliary bus. * - * @param[in] spawn - * Pointer to the IB device attributes (name, port, etc). - * @param[out] config - * Pointer to device configuration structure. - * @param[out] sh - * Pointer to shared context structure. + * @param[in] dev + * Pointer to the generic device. * * @return - * 0 on success, a positive error value otherwise. 
+ * 0 on success, a negative errno value otherwise and rte_errno is set. */ int -mlx5_os_open_device(const struct mlx5_dev_spawn_data *spawn, - const struct mlx5_dev_config *config, - struct mlx5_dev_ctx_shared *sh) +mlx5_os_net_probe(struct rte_device *dev) { - int dbmap_env; - int err = 0; + struct mlx5_dev_ctx *dev_ctx; + uint8_t devx = 0; + int ret; - pthread_mutex_init(&sh->txpp.mutex, NULL); + dev_ctx = mlx5_malloc(MLX5_MEM_ZERO, sizeof(struct mlx5_dev_ctx), + RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY); + if (dev_ctx == NULL) { + DRV_LOG(ERR, "Device context allocation failure."); + rte_errno = ENOMEM; + return -rte_errno; + } /* - * Configure environment variable "MLX5_BF_SHUT_UP" - * before the device creation. The rdma_core library - * checks the variable at device creation and - * stores the result internally. + * Initialize context device and allocate all its resources. + * Try to do it with DV first, then usual Verbs. */ - dbmap_env = mlx5_config_doorbell_mapping_env(config); - /* Try to open IB device with DV first, then usual Verbs. */ - errno = 0; - sh->ctx = mlx5_glue->dv_open_device(spawn->phys_dev); - if (sh->ctx) { - sh->devx = 1; - DRV_LOG(DEBUG, "DevX is supported"); - /* The device is created, no need for environment. */ - mlx5_restore_doorbell_mapping_env(dbmap_env); + ret = mlx5_dev_ctx_prepare(dev_ctx, dev, MLX5_CLASS_ETH); + if (ret < 0) { + goto error; + } else if (dev_ctx->ctx) { + devx = 1; + DRV_LOG(DEBUG, "DevX is supported."); } else { - /* The environment variable is still configured. */ - sh->ctx = mlx5_glue->open_device(spawn->phys_dev); - err = errno ? errno : ENODEV; - /* - * The environment variable is not needed anymore, - * all device creation attempts are completed. - */ - mlx5_restore_doorbell_mapping_env(dbmap_env); - if (!sh->ctx) - return err; - DRV_LOG(DEBUG, "DevX is NOT supported"); - err = 0; - } - if (!err && sh->ctx) { - /* Hint libmlx5 to use PMD allocator for data plane resources */ - mlx5_glue->dv_set_context_attr(sh->ctx, - MLX5DV_CTX_ATTR_BUF_ALLOCATORS, - (void *)((uintptr_t)&(struct mlx5dv_ctx_allocators){ - .alloc = &mlx5_alloc_verbs_buf, - .free = &mlx5_free_verbs_buf, - .data = sh, - })); + ret = mlx5_verbs_dev_ctx_prepare(dev_ctx, dev); + if (ret < 0) + goto error; + DRV_LOG(DEBUG, "DevX is NOT supported."); } - return err; + if (rte_eal_process_type() == RTE_PROC_PRIMARY) + mlx5_pmd_socket_init(); + ret = mlx5_init_once(); + if (ret) { + DRV_LOG(ERR, "unable to init PMD global data: %s", + strerror(rte_errno)); + goto error; + } + if (mlx5_dev_is_pci(dev)) + ret = mlx5_os_pci_probe(RTE_DEV_TO_PCI(dev), dev_ctx, devx); + else + ret = mlx5_os_auxiliary_probe(dev, dev_ctx, devx); + if (ret) + goto error; + return ret; +error: + mlx5_dev_ctx_release(dev_ctx); + mlx5_free(dev_ctx); + return ret; } /** @@ -2921,18 +2940,18 @@ mlx5_os_open_device(const struct mlx5_dev_spawn_data *spawn, void mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh) { + struct ibv_context *ctx = sh->dev_ctx->ctx; int ret; int flags; sh->intr_handle.fd = -1; - flags = fcntl(((struct ibv_context *)sh->ctx)->async_fd, F_GETFL); - ret = fcntl(((struct ibv_context *)sh->ctx)->async_fd, - F_SETFL, flags | O_NONBLOCK); + flags = fcntl(ctx->async_fd, F_GETFL); + ret = fcntl(ctx->async_fd, F_SETFL, flags | O_NONBLOCK); if (ret) { DRV_LOG(INFO, "failed to change file descriptor async event" " queue"); } else { - sh->intr_handle.fd = ((struct ibv_context *)sh->ctx)->async_fd; + sh->intr_handle.fd = ctx->async_fd; sh->intr_handle.type = RTE_INTR_HANDLE_EXT; if 
(rte_intr_callback_register(&sh->intr_handle, mlx5_dev_interrupt_handler, sh)) { @@ -2943,8 +2962,7 @@ mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh) if (sh->devx) { #ifdef HAVE_IBV_DEVX_ASYNC sh->intr_handle_devx.fd = -1; - sh->devx_comp = - (void *)mlx5_glue->devx_create_cmd_comp(sh->ctx); + sh->devx_comp = (void *)mlx5_glue->devx_create_cmd_comp(ctx); struct mlx5dv_devx_cmd_comp *devx_comp = sh->devx_comp; if (!devx_comp) { DRV_LOG(INFO, "failed to allocate devx_comp."); diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c index d4fa202ac4..7c266981cd 100644 --- a/drivers/net/mlx5/linux/mlx5_verbs.c +++ b/drivers/net/mlx5/linux/mlx5_verbs.c @@ -249,9 +249,9 @@ mlx5_rxq_ibv_cq_create(struct rte_eth_dev *dev, uint16_t idx) cq_attr.mlx5.flags |= MLX5DV_CQ_INIT_ATTR_FLAGS_CQE_PAD; } #endif - return mlx5_glue->cq_ex_to_cq(mlx5_glue->dv_create_cq(priv->sh->ctx, - &cq_attr.ibv, - &cq_attr.mlx5)); + return mlx5_glue->cq_ex_to_cq + (mlx5_glue->dv_create_cq(priv->sh->dev_ctx->ctx, + &cq_attr.ibv, &cq_attr.mlx5)); } /** @@ -288,7 +288,7 @@ mlx5_rxq_ibv_wq_create(struct rte_eth_dev *dev, uint16_t idx) .max_wr = wqe_n >> rxq_data->sges_n, /* Max number of scatter/gather elements in a WR. */ .max_sge = 1 << rxq_data->sges_n, - .pd = priv->sh->pd, + .pd = priv->sh->dev_ctx->pd, .cq = rxq_obj->ibv_cq, .comp_mask = IBV_WQ_FLAGS_CVLAN_STRIPPING | 0, .create_flags = (rxq_data->vlan_strip ? @@ -323,10 +323,11 @@ mlx5_rxq_ibv_wq_create(struct rte_eth_dev *dev, uint16_t idx) .two_byte_shift_en = MLX5_MPRQ_TWO_BYTE_SHIFT, }; } - rxq_obj->wq = mlx5_glue->dv_create_wq(priv->sh->ctx, &wq_attr.ibv, - &wq_attr.mlx5); + rxq_obj->wq = mlx5_glue->dv_create_wq(priv->sh->dev_ctx->ctx, + &wq_attr.ibv, &wq_attr.mlx5); #else - rxq_obj->wq = mlx5_glue->create_wq(priv->sh->ctx, &wq_attr.ibv); + rxq_obj->wq = mlx5_glue->create_wq(priv->sh->dev_ctx->ctx, + &wq_attr.ibv); #endif if (rxq_obj->wq) { /* @@ -378,8 +379,8 @@ mlx5_rxq_ibv_obj_new(struct rte_eth_dev *dev, uint16_t idx) MLX5_ASSERT(tmpl); tmpl->rxq_ctrl = rxq_ctrl; if (rxq_ctrl->irq) { - tmpl->ibv_channel = - mlx5_glue->create_comp_channel(priv->sh->ctx); + tmpl->ibv_channel = mlx5_glue->create_comp_channel + (priv->sh->dev_ctx->ctx); if (!tmpl->ibv_channel) { DRV_LOG(ERR, "Port %u: comp channel creation failure.", dev->data->port_id); @@ -542,12 +543,13 @@ mlx5_ibv_ind_table_new(struct rte_eth_dev *dev, const unsigned int log_n, /* Finalise indirection table. 
*/ for (j = 0; i != (unsigned int)(1 << log_n); ++j, ++i) wq[i] = wq[j]; - ind_tbl->ind_table = mlx5_glue->create_rwq_ind_table(priv->sh->ctx, - &(struct ibv_rwq_ind_table_init_attr){ + ind_tbl->ind_table = mlx5_glue->create_rwq_ind_table + (priv->sh->dev_ctx->ctx, + &(struct ibv_rwq_ind_table_init_attr){ .log_ind_tbl_size = log_n, .ind_tbl = wq, .comp_mask = 0, - }); + }); if (!ind_tbl->ind_table) { rte_errno = errno; return -rte_errno; @@ -609,7 +611,7 @@ mlx5_ibv_hrxq_new(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq, } #endif qp = mlx5_glue->dv_create_qp - (priv->sh->ctx, + (priv->sh->dev_ctx->ctx, &(struct ibv_qp_init_attr_ex){ .qp_type = IBV_QPT_RAW_PACKET, .comp_mask = @@ -625,12 +627,12 @@ mlx5_ibv_hrxq_new(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq, .rx_hash_fields_mask = hash_fields, }, .rwq_ind_tbl = ind_tbl->ind_table, - .pd = priv->sh->pd, + .pd = priv->sh->dev_ctx->pd, }, &qp_init_attr); #else qp = mlx5_glue->create_qp_ex - (priv->sh->ctx, + (priv->sh->dev_ctx->ctx, &(struct ibv_qp_init_attr_ex){ .qp_type = IBV_QPT_RAW_PACKET, .comp_mask = @@ -646,7 +648,7 @@ mlx5_ibv_hrxq_new(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq, .rx_hash_fields_mask = hash_fields, }, .rwq_ind_tbl = ind_tbl->ind_table, - .pd = priv->sh->pd, + .pd = priv->sh->dev_ctx->pd, }); #endif if (!qp) { @@ -715,7 +717,7 @@ static int mlx5_rxq_ibv_obj_drop_create(struct rte_eth_dev *dev) { struct mlx5_priv *priv = dev->data->dev_private; - struct ibv_context *ctx = priv->sh->ctx; + struct ibv_context *ctx = priv->sh->dev_ctx->ctx; struct mlx5_rxq_obj *rxq = priv->drop_queue.rxq; if (rxq) @@ -739,7 +741,7 @@ mlx5_rxq_ibv_obj_drop_create(struct rte_eth_dev *dev) .wq_type = IBV_WQT_RQ, .max_wr = 1, .max_sge = 1, - .pd = priv->sh->pd, + .pd = priv->sh->dev_ctx->pd, .cq = rxq->ibv_cq, }); if (!rxq->wq) { @@ -779,7 +781,7 @@ mlx5_ibv_drop_action_create(struct rte_eth_dev *dev) goto error; rxq = priv->drop_queue.rxq; ind_tbl = mlx5_glue->create_rwq_ind_table - (priv->sh->ctx, + (priv->sh->dev_ctx->ctx, &(struct ibv_rwq_ind_table_init_attr){ .log_ind_tbl_size = 0, .ind_tbl = (struct ibv_wq **)&rxq->wq, @@ -792,7 +794,7 @@ mlx5_ibv_drop_action_create(struct rte_eth_dev *dev) rte_errno = errno; goto error; } - hrxq->qp = mlx5_glue->create_qp_ex(priv->sh->ctx, + hrxq->qp = mlx5_glue->create_qp_ex(priv->sh->dev_ctx->ctx, &(struct ibv_qp_init_attr_ex){ .qp_type = IBV_QPT_RAW_PACKET, .comp_mask = IBV_QP_INIT_ATTR_PD | @@ -805,7 +807,7 @@ mlx5_ibv_drop_action_create(struct rte_eth_dev *dev) .rx_hash_fields_mask = 0, }, .rwq_ind_tbl = ind_tbl, - .pd = priv->sh->pd + .pd = priv->sh->dev_ctx->pd }); if (!hrxq->qp) { DRV_LOG(DEBUG, "Port %u cannot allocate QP for drop queue.", @@ -893,7 +895,7 @@ mlx5_txq_ibv_qp_create(struct rte_eth_dev *dev, uint16_t idx) qp_attr.qp_type = IBV_QPT_RAW_PACKET, /* Do *NOT* enable this, completions events are managed per Tx burst. 
*/ qp_attr.sq_sig_all = 0; - qp_attr.pd = priv->sh->pd; + qp_attr.pd = priv->sh->dev_ctx->pd; qp_attr.comp_mask = IBV_QP_INIT_ATTR_PD; if (txq_data->inlen_send) qp_attr.cap.max_inline_data = txq_ctrl->max_inline_data; @@ -901,7 +903,7 @@ mlx5_txq_ibv_qp_create(struct rte_eth_dev *dev, uint16_t idx) qp_attr.max_tso_header = txq_ctrl->max_tso_header; qp_attr.comp_mask |= IBV_QP_INIT_ATTR_MAX_TSO_HEADER; } - qp_obj = mlx5_glue->create_qp_ex(priv->sh->ctx, &qp_attr); + qp_obj = mlx5_glue->create_qp_ex(priv->sh->dev_ctx->ctx, &qp_attr); if (qp_obj == NULL) { DRV_LOG(ERR, "Port %u Tx queue %u QP creation failure.", dev->data->port_id, idx); @@ -947,7 +949,8 @@ mlx5_txq_ibv_obj_new(struct rte_eth_dev *dev, uint16_t idx) } cqe_n = desc / MLX5_TX_COMP_THRESH + 1 + MLX5_TX_COMP_THRESH_INLINE_DIV; - txq_obj->cq = mlx5_glue->create_cq(priv->sh->ctx, cqe_n, NULL, NULL, 0); + txq_obj->cq = mlx5_glue->create_cq(priv->sh->dev_ctx->ctx, cqe_n, + NULL, NULL, 0); if (txq_obj->cq == NULL) { DRV_LOG(ERR, "Port %u Tx queue %u CQ creation failure.", dev->data->port_id, idx); @@ -1070,7 +1073,7 @@ mlx5_rxq_ibv_obj_dummy_lb_create(struct rte_eth_dev *dev) #if defined(HAVE_IBV_DEVICE_TUNNEL_SUPPORT) && defined(HAVE_IBV_FLOW_DV_SUPPORT) struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_dev_ctx_shared *sh = priv->sh; - struct ibv_context *ctx = sh->ctx; + struct ibv_context *ctx = sh->dev_ctx->ctx; struct mlx5dv_qp_init_attr qp_init_attr = {0}; struct { struct ibv_cq_init_attr_ex ibv; @@ -1114,7 +1117,7 @@ mlx5_rxq_ibv_obj_dummy_lb_create(struct rte_eth_dev *dev) &(struct ibv_qp_init_attr_ex){ .qp_type = IBV_QPT_RAW_PACKET, .comp_mask = IBV_QP_INIT_ATTR_PD, - .pd = sh->pd, + .pd = sh->dev_ctx->pd, .send_cq = sh->self_lb.ibv_cq, .recv_cq = sh->self_lb.ibv_cq, .cap.max_recv_wr = 1, diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 08c9a6ec6f..f5f325d35a 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -910,7 +910,8 @@ mlx5_flex_parser_ecpri_alloc(struct rte_eth_dev *dev) * start after the common header that with the length of a DW(u32). */ node.sample[1].flow_match_sample_field_base_offset = sizeof(uint32_t); - prf->obj = mlx5_devx_cmd_create_flex_parser(priv->sh->ctx, &node); + prf->obj = mlx5_devx_cmd_create_flex_parser(priv->sh->dev_ctx->ctx, + &node); if (!prf->obj) { DRV_LOG(ERR, "Failed to create flex parser node object."); return (rte_errno == 0) ? 
-ENODEV : -rte_errno; @@ -967,6 +968,7 @@ mlx5_alloc_rxtx_uars(struct mlx5_dev_ctx_shared *sh, uint32_t uar_mapping, retry; int err = 0; void *base_addr; + void *ctx = sh->dev_ctx->ctx; for (retry = 0; retry < MLX5_ALLOC_UAR_RETRY; ++retry) { #ifdef MLX5DV_UAR_ALLOC_TYPE_NC @@ -985,7 +987,7 @@ mlx5_alloc_rxtx_uars(struct mlx5_dev_ctx_shared *sh, */ uar_mapping = 0; #endif - sh->tx_uar = mlx5_glue->devx_alloc_uar(sh->ctx, uar_mapping); + sh->tx_uar = mlx5_glue->devx_alloc_uar(ctx, uar_mapping); #ifdef MLX5DV_UAR_ALLOC_TYPE_NC if (!sh->tx_uar && uar_mapping == MLX5DV_UAR_ALLOC_TYPE_BF) { @@ -1004,7 +1006,7 @@ mlx5_alloc_rxtx_uars(struct mlx5_dev_ctx_shared *sh, DRV_LOG(DEBUG, "Failed to allocate Tx DevX UAR (BF)"); uar_mapping = MLX5DV_UAR_ALLOC_TYPE_NC; sh->tx_uar = mlx5_glue->devx_alloc_uar - (sh->ctx, uar_mapping); + (ctx, uar_mapping); } else if (!sh->tx_uar && uar_mapping == MLX5DV_UAR_ALLOC_TYPE_NC) { if (config->dbnc == MLX5_TXDB_NCACHED) @@ -1017,7 +1019,7 @@ mlx5_alloc_rxtx_uars(struct mlx5_dev_ctx_shared *sh, DRV_LOG(DEBUG, "Failed to allocate Tx DevX UAR (NC)"); uar_mapping = MLX5DV_UAR_ALLOC_TYPE_BF; sh->tx_uar = mlx5_glue->devx_alloc_uar - (sh->ctx, uar_mapping); + (ctx, uar_mapping); } #endif if (!sh->tx_uar) { @@ -1044,8 +1046,7 @@ mlx5_alloc_rxtx_uars(struct mlx5_dev_ctx_shared *sh, } for (retry = 0; retry < MLX5_ALLOC_UAR_RETRY; ++retry) { uar_mapping = 0; - sh->devx_rx_uar = mlx5_glue->devx_alloc_uar - (sh->ctx, uar_mapping); + sh->devx_rx_uar = mlx5_glue->devx_alloc_uar(ctx, uar_mapping); #ifdef MLX5DV_UAR_ALLOC_TYPE_NC if (!sh->devx_rx_uar && uar_mapping == MLX5DV_UAR_ALLOC_TYPE_BF) { @@ -1057,7 +1058,7 @@ mlx5_alloc_rxtx_uars(struct mlx5_dev_ctx_shared *sh, DRV_LOG(DEBUG, "Failed to allocate Rx DevX UAR (BF)"); uar_mapping = MLX5DV_UAR_ALLOC_TYPE_NC; sh->devx_rx_uar = mlx5_glue->devx_alloc_uar - (sh->ctx, uar_mapping); + (ctx, uar_mapping); } #endif if (!sh->devx_rx_uar) { @@ -1098,6 +1099,8 @@ mlx5_alloc_rxtx_uars(struct mlx5_dev_ctx_shared *sh, * * @param[in] spawn * Pointer to the device attributes (name, port, etc). + * @param dev_ctx + * Pointer to the context device data structure. * @param[in] config * Pointer to device configuration structure. 
* @@ -1107,6 +1110,7 @@ mlx5_alloc_rxtx_uars(struct mlx5_dev_ctx_shared *sh, */ struct mlx5_dev_ctx_shared * mlx5_alloc_shared_dev_ctx(const struct mlx5_dev_spawn_data *spawn, + struct mlx5_dev_ctx *dev_ctx, const struct mlx5_dev_config *config) { struct mlx5_dev_ctx_shared *sh; @@ -1137,13 +1141,13 @@ mlx5_alloc_shared_dev_ctx(const struct mlx5_dev_spawn_data *spawn, rte_errno = ENOMEM; goto exit; } - sh->numa_node = spawn->numa_node; + sh->devx = config->devx; + sh->numa_node = dev_ctx->numa_node; if (spawn->bond_info) sh->bond = *spawn->bond_info; - err = mlx5_os_open_device(spawn, config, sh); - if (!sh->ctx) - goto error; - err = mlx5_os_get_dev_attr(sh->ctx, &sh->device_attr); + pthread_mutex_init(&sh->txpp.mutex, NULL); + sh->dev_ctx = dev_ctx; + err = mlx5_os_get_dev_attr(sh->dev_ctx->ctx, &sh->device_attr); if (err) { DRV_LOG(DEBUG, "mlx5_os_get_dev_attr() failed"); goto error; @@ -1151,39 +1155,27 @@ mlx5_alloc_shared_dev_ctx(const struct mlx5_dev_spawn_data *spawn, sh->refcnt = 1; sh->max_port = spawn->max_port; sh->reclaim_mode = config->reclaim_mode; - strncpy(sh->ibdev_name, mlx5_os_get_ctx_device_name(sh->ctx), + strncpy(sh->ibdev_name, mlx5_os_get_ctx_device_name(sh->dev_ctx->ctx), sizeof(sh->ibdev_name) - 1); - strncpy(sh->ibdev_path, mlx5_os_get_ctx_device_path(sh->ctx), + strncpy(sh->ibdev_path, mlx5_os_get_ctx_device_path(sh->dev_ctx->ctx), sizeof(sh->ibdev_path) - 1); /* - * Setting port_id to max unallowed value means - * there is no interrupt subhandler installed for - * the given port index i. + * Setting port_id to max unallowed value means there is no interrupt + * subhandler installed for the given port index i. */ for (i = 0; i < sh->max_port; i++) { sh->port[i].ih_port_id = RTE_MAX_ETHPORTS; sh->port[i].devx_ih_port_id = RTE_MAX_ETHPORTS; } - sh->pd = mlx5_os_alloc_pd(sh->ctx); - if (sh->pd == NULL) { - DRV_LOG(ERR, "PD allocation failure"); - err = ENOMEM; - goto error; - } if (sh->devx) { - err = mlx5_os_get_pdn(sh->pd, &sh->pdn); - if (err) { - DRV_LOG(ERR, "Fail to extract pdn from PD"); - goto error; - } - sh->td = mlx5_devx_cmd_create_td(sh->ctx); + sh->td = mlx5_devx_cmd_create_td(sh->dev_ctx->ctx); if (!sh->td) { DRV_LOG(ERR, "TD allocation failure"); err = ENOMEM; goto error; } tis_attr.transport_domain = sh->td->id; - sh->tis = mlx5_devx_cmd_create_tis(sh->ctx, &tis_attr); + sh->tis = mlx5_devx_cmd_create_tis(sh->dev_ctx->ctx, &tis_attr); if (!sh->tis) { DRV_LOG(ERR, "TIS allocation failure"); err = ENOMEM; @@ -1263,10 +1255,6 @@ mlx5_alloc_shared_dev_ctx(const struct mlx5_dev_spawn_data *spawn, mlx5_glue->devx_free_uar(sh->devx_rx_uar); if (sh->tx_uar) mlx5_glue->devx_free_uar(sh->tx_uar); - if (sh->pd) - claim_zero(mlx5_os_dealloc_pd(sh->pd)); - if (sh->ctx) - claim_zero(mlx5_glue->close_device(sh->ctx)); mlx5_free(sh); MLX5_ASSERT(err > 0); rte_errno = err; @@ -1278,7 +1266,7 @@ mlx5_alloc_shared_dev_ctx(const struct mlx5_dev_spawn_data *spawn, * all allocated resources and close handles. * * @param[in] sh - * Pointer to mlx5_dev_ctx_shared object to free + * Pointer to mlx5_dev_ctx_shared object to free. */ void mlx5_free_shared_dev_ctx(struct mlx5_dev_ctx_shared *sh) @@ -1318,7 +1306,7 @@ mlx5_free_shared_dev_ctx(struct mlx5_dev_ctx_shared *sh) /* * Ensure there is no async event handler installed. * Only primary process handles async device events. 
- **/ + */ mlx5_flow_counters_mng_close(sh); if (sh->aso_age_mng) { mlx5_flow_aso_age_mng_close(sh); @@ -1336,16 +1324,12 @@ mlx5_free_shared_dev_ctx(struct mlx5_dev_ctx_shared *sh) mlx5_glue->devx_free_uar(sh->tx_uar); sh->tx_uar = NULL; } - if (sh->pd) - claim_zero(mlx5_os_dealloc_pd(sh->pd)); if (sh->tis) claim_zero(mlx5_devx_cmd_destroy(sh->tis)); if (sh->td) claim_zero(mlx5_devx_cmd_destroy(sh->td)); if (sh->devx_rx_uar) mlx5_glue->devx_free_uar(sh->devx_rx_uar); - if (sh->ctx) - claim_zero(mlx5_glue->close_device(sh->ctx)); MLX5_ASSERT(sh->geneve_tlv_option_resource == NULL); pthread_mutex_destroy(&sh->txpp.mutex); mlx5_free(sh); @@ -1548,10 +1532,8 @@ mlx5_dev_close(struct rte_eth_dev *dev) } if (!priv->sh) return 0; - DRV_LOG(DEBUG, "port %u closing device \"%s\"", - dev->data->port_id, - ((priv->sh->ctx != NULL) ? - mlx5_os_get_ctx_device_name(priv->sh->ctx) : "")); + DRV_LOG(DEBUG, "port %u closing device \"%s\"", dev->data->port_id, + ((priv->sh->dev_ctx->ctx != NULL) ? priv->sh->ibdev_name : "")); /* * If default mreg copy action is removed at the stop stage, * the search will return none and nothing will be done anymore. @@ -2374,6 +2356,33 @@ mlx5_eth_find_next(uint16_t port_id, struct rte_device *odev) return port_id; } +/** + * Finds the device context that match the device. + * The existence of multiple ethdev per pci device is only with representors. + * On such case, it is enough to get only one of the ports as they all share + * the same device context. + * + * @param dev + * Pointer to the device. + * + * @return + * Pointer to the device context if found, NULL otherwise. + */ +static struct mlx5_dev_ctx * +mlx5_get_dev_ctx(struct rte_device *dev) +{ + struct mlx5_priv *priv; + uint16_t port_id; + + port_id = rte_eth_find_next_of(0, dev); + if (port_id == RTE_MAX_ETHPORTS) + return NULL; + priv = rte_eth_devices[port_id].data->dev_private; + if (priv == NULL) + return NULL; + return priv->sh->dev_ctx; +} + /** * Callback to remove a device. * @@ -2388,6 +2397,7 @@ mlx5_eth_find_next(uint16_t port_id, struct rte_device *odev) int mlx5_net_remove(struct rte_device *dev) { + struct mlx5_dev_ctx *dev_ctx = mlx5_get_dev_ctx(dev); uint16_t port_id; int ret = 0; @@ -2401,6 +2411,11 @@ mlx5_net_remove(struct rte_device *dev) else ret |= rte_eth_dev_close(port_id); } + + if (dev_ctx) { + mlx5_dev_ctx_release(dev_ctx); + mlx5_free(dev_ctx); + } return ret == 0 ? 0 : -EIO; } diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 9a8e34535c..1e52b9ac9a 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -1140,9 +1140,7 @@ struct mlx5_dev_ctx_shared { uint32_t reclaim_mode:1; /* Reclaim memory. */ uint32_t max_port; /* Maximal IB device port index. */ struct mlx5_bond_info bond; /* Bonding information. */ - void *ctx; /* Verbs/DV/DevX context. */ - void *pd; /* Protection Domain. */ - uint32_t pdn; /* Protection Domain number. */ + struct mlx5_dev_ctx *dev_ctx; /* Device context. */ uint32_t tdn; /* Transport Domain number. */ char ibdev_name[MLX5_FS_NAME_MAX]; /* SYSFS dev name. 
*/ char ibdev_path[MLX5_FS_PATH_MAX]; /* SYSFS dev path for secondary */ @@ -1497,7 +1495,8 @@ void mlx5_age_event_prepare(struct mlx5_dev_ctx_shared *sh); int mlx5_args(struct mlx5_dev_config *config, struct rte_devargs *devargs); struct mlx5_dev_ctx_shared * mlx5_alloc_shared_dev_ctx(const struct mlx5_dev_spawn_data *spawn, - const struct mlx5_dev_config *config); + struct mlx5_dev_ctx *dev_ctx, + const struct mlx5_dev_config *config); void mlx5_free_shared_dev_ctx(struct mlx5_dev_ctx_shared *sh); void mlx5_free_table_hash_list(struct mlx5_priv *priv); int mlx5_alloc_table_hash_list(struct mlx5_priv *priv); @@ -1766,13 +1765,10 @@ int mlx5_flow_meter_flush(struct rte_eth_dev *dev, void mlx5_flow_meter_rxq_flush(struct rte_eth_dev *dev); /* mlx5_os.c */ + struct rte_pci_driver; int mlx5_os_get_dev_attr(void *ctx, struct mlx5_dev_attr *dev_attr); void mlx5_os_free_shared_dr(struct mlx5_priv *priv); -int mlx5_os_open_device(const struct mlx5_dev_spawn_data *spawn, - const struct mlx5_dev_config *config, - struct mlx5_dev_ctx_shared *sh); -int mlx5_os_get_pdn(void *pd, uint32_t *pdn); int mlx5_os_net_probe(struct rte_device *dev); void mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh); void mlx5_os_dev_shared_handler_uninstall(struct mlx5_dev_ctx_shared *sh); diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c index a1db53577a..3cafd46837 100644 --- a/drivers/net/mlx5/mlx5_devx.c +++ b/drivers/net/mlx5/mlx5_devx.c @@ -276,12 +276,12 @@ mlx5_rxq_create_devx_rq_resources(struct rte_eth_dev *dev, uint16_t idx) rq_attr.wq_attr.end_padding_mode = priv->config.hw_padding ? MLX5_WQ_END_PAD_MODE_ALIGN : MLX5_WQ_END_PAD_MODE_NONE; - rq_attr.wq_attr.pd = priv->sh->pdn; + rq_attr.wq_attr.pd = priv->sh->dev_ctx->pdn; rq_attr.counter_set_id = priv->counter_set_id; /* Create RQ using DevX API. */ - return mlx5_devx_rq_create(priv->sh->ctx, &rxq_ctrl->obj->rq_obj, - wqe_size, log_desc_n, &rq_attr, - rxq_ctrl->socket); + return mlx5_devx_rq_create(priv->sh->dev_ctx->ctx, + &rxq_ctrl->obj->rq_obj, wqe_size, log_desc_n, + &rq_attr, rxq_ctrl->socket); } /** @@ -365,8 +365,8 @@ mlx5_rxq_create_devx_cq_resources(struct rte_eth_dev *dev, uint16_t idx) cq_attr.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->devx_rx_uar); log_cqe_n = log2above(cqe_n); /* Create CQ using DevX API. 
*/ - ret = mlx5_devx_cq_create(sh->ctx, &rxq_ctrl->obj->cq_obj, log_cqe_n, - &cq_attr, sh->numa_node); + ret = mlx5_devx_cq_create(sh->dev_ctx->ctx, &rxq_ctrl->obj->cq_obj, + log_cqe_n, &cq_attr, sh->numa_node); if (ret) return ret; cq_obj = &rxq_ctrl->obj->cq_obj; @@ -442,7 +442,7 @@ mlx5_rxq_obj_hairpin_new(struct rte_eth_dev *dev, uint16_t idx) attr.wq_attr.log_hairpin_data_sz - MLX5_HAIRPIN_QUEUE_STRIDE; attr.counter_set_id = priv->counter_set_id; - tmpl->rq = mlx5_devx_cmd_create_rq(priv->sh->ctx, &attr, + tmpl->rq = mlx5_devx_cmd_create_rq(priv->sh->dev_ctx->ctx, &attr, rxq_ctrl->socket); if (!tmpl->rq) { DRV_LOG(ERR, @@ -486,8 +486,7 @@ mlx5_rxq_devx_obj_new(struct rte_eth_dev *dev, uint16_t idx) MLX5DV_DEVX_CREATE_EVENT_CHANNEL_FLAGS_OMIT_EV_DATA; tmpl->devx_channel = mlx5_os_devx_create_event_channel - (priv->sh->ctx, - devx_ev_flag); + (priv->sh->dev_ctx->ctx, devx_ev_flag); if (!tmpl->devx_channel) { rte_errno = errno; DRV_LOG(ERR, "Failed to create event channel %d.", @@ -602,7 +601,8 @@ mlx5_devx_ind_table_new(struct rte_eth_dev *dev, const unsigned int log_n, ind_tbl->queues_n); if (!rqt_attr) return -rte_errno; - ind_tbl->rqt = mlx5_devx_cmd_create_rqt(priv->sh->ctx, rqt_attr); + ind_tbl->rqt = mlx5_devx_cmd_create_rqt(priv->sh->dev_ctx->ctx, + rqt_attr); mlx5_free(rqt_attr); if (!ind_tbl->rqt) { DRV_LOG(ERR, "Port %u cannot create DevX RQT.", @@ -770,7 +770,7 @@ mlx5_devx_hrxq_new(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq, mlx5_devx_tir_attr_set(dev, hrxq->rss_key, hrxq->hash_fields, hrxq->ind_table, tunnel, &tir_attr); - hrxq->tir = mlx5_devx_cmd_create_tir(priv->sh->ctx, &tir_attr); + hrxq->tir = mlx5_devx_cmd_create_tir(priv->sh->dev_ctx->ctx, &tir_attr); if (!hrxq->tir) { DRV_LOG(ERR, "Port %u cannot create DevX TIR.", dev->data->port_id); @@ -936,7 +936,7 @@ mlx5_txq_obj_hairpin_new(struct rte_eth_dev *dev, uint16_t idx) attr.wq_attr.log_hairpin_data_sz - MLX5_HAIRPIN_QUEUE_STRIDE; attr.tis_num = priv->sh->tis->id; - tmpl->sq = mlx5_devx_cmd_create_sq(priv->sh->ctx, &attr); + tmpl->sq = mlx5_devx_cmd_create_sq(priv->sh->dev_ctx->ctx, &attr); if (!tmpl->sq) { DRV_LOG(ERR, "Port %u tx hairpin queue %u can't create SQ object.", @@ -994,15 +994,15 @@ mlx5_txq_create_devx_sq_resources(struct rte_eth_dev *dev, uint16_t idx, .tis_lst_sz = 1, .tis_num = priv->sh->tis->id, .wq_attr = (struct mlx5_devx_wq_attr){ - .pd = priv->sh->pdn, + .pd = priv->sh->dev_ctx->pdn, .uar_page = mlx5_os_get_devx_uar_page_id(priv->sh->tx_uar), }, .ts_format = mlx5_ts_format_conv(priv->sh->sq_ts_format), }; /* Create Send Queue object with DevX. */ - return mlx5_devx_sq_create(priv->sh->ctx, &txq_obj->sq_obj, log_desc_n, - &sq_attr, priv->sh->numa_node); + return mlx5_devx_sq_create(priv->sh->dev_ctx->ctx, &txq_obj->sq_obj, + log_desc_n, &sq_attr, priv->sh->numa_node); } #endif @@ -1058,8 +1058,8 @@ mlx5_txq_devx_obj_new(struct rte_eth_dev *dev, uint16_t idx) return 0; } /* Create completion queue object with DevX. 
*/ - ret = mlx5_devx_cq_create(sh->ctx, &txq_obj->cq_obj, log_desc_n, - &cq_attr, priv->sh->numa_node); + ret = mlx5_devx_cq_create(sh->dev_ctx->ctx, &txq_obj->cq_obj, + log_desc_n, &cq_attr, sh->numa_node); if (ret) { DRV_LOG(ERR, "Port %u Tx queue %u CQ creation failure.", dev->data->port_id, idx); diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 4762fa0f5f..b97790cf38 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -7604,7 +7604,7 @@ mlx5_flow_create_counter_stat_mem_mng(struct mlx5_dev_ctx_shared *sh) } mem_mng = (struct mlx5_counter_stats_mem_mng *)(mem + size) - 1; size = sizeof(*raw_data) * MLX5_COUNTERS_PER_POOL * raws_n; - mem_mng->umem = mlx5_os_umem_reg(sh->ctx, mem, size, + mem_mng->umem = mlx5_os_umem_reg(sh->dev_ctx->ctx, mem, size, IBV_ACCESS_LOCAL_WRITE); if (!mem_mng->umem) { rte_errno = errno; @@ -7615,10 +7615,10 @@ mlx5_flow_create_counter_stat_mem_mng(struct mlx5_dev_ctx_shared *sh) mkey_attr.addr = (uintptr_t)mem; mkey_attr.size = size; mkey_attr.umem_id = mlx5_os_get_umem_id(mem_mng->umem); - mkey_attr.pd = sh->pdn; + mkey_attr.pd = sh->dev_ctx->pdn; mkey_attr.relaxed_ordering_write = sh->cmng.relaxed_ordering_write; mkey_attr.relaxed_ordering_read = sh->cmng.relaxed_ordering_read; - mem_mng->dm = mlx5_devx_cmd_mkey_create(sh->ctx, &mkey_attr); + mem_mng->dm = mlx5_devx_cmd_mkey_create(sh->dev_ctx->ctx, &mkey_attr); if (!mem_mng->dm) { mlx5_os_umem_dereg(mem_mng->umem); rte_errno = errno; diff --git a/drivers/net/mlx5/mlx5_flow_aso.c b/drivers/net/mlx5/mlx5_flow_aso.c index e11327a11b..6b90d0d7c1 100644 --- a/drivers/net/mlx5/mlx5_flow_aso.c +++ b/drivers/net/mlx5/mlx5_flow_aso.c @@ -103,7 +103,7 @@ mlx5_aso_reg_mr(struct mlx5_dev_ctx_shared *sh, size_t length, DRV_LOG(ERR, "Failed to create ASO bits mem for MR."); return -1; } - ret = sh->share_cache.reg_mr_cb(sh->pd, mr->addr, length, mr); + ret = sh->share_cache.reg_mr_cb(sh->dev_ctx->pd, mr->addr, length, mr); if (ret) { DRV_LOG(ERR, "Failed to create direct Mkey."); mlx5_free(mr->addr); @@ -309,24 +309,27 @@ mlx5_aso_queue_init(struct mlx5_dev_ctx_shared *sh, enum mlx5_access_aso_opc_mod aso_opc_mod) { uint32_t sq_desc_n = 1 << MLX5_ASO_QUEUE_LOG_DESC; + struct mlx5_dev_ctx *dev_ctx = sh->dev_ctx; switch (aso_opc_mod) { case ASO_OPC_MOD_FLOW_HIT: if (mlx5_aso_reg_mr(sh, (MLX5_ASO_AGE_ACTIONS_PER_POOL / 8) * sq_desc_n, &sh->aso_age_mng->aso_sq.mr, 0)) return -1; - if (mlx5_aso_sq_create(sh->ctx, &sh->aso_age_mng->aso_sq, 0, - sh->tx_uar, sh->pdn, MLX5_ASO_QUEUE_LOG_DESC, - sh->sq_ts_format)) { + if (mlx5_aso_sq_create(dev_ctx->ctx, &sh->aso_age_mng->aso_sq, + 0, sh->tx_uar, dev_ctx->pdn, + MLX5_ASO_QUEUE_LOG_DESC, + sh->sq_ts_format)) { mlx5_aso_dereg_mr(sh, &sh->aso_age_mng->aso_sq.mr); return -1; } mlx5_aso_age_init_sq(&sh->aso_age_mng->aso_sq); break; case ASO_OPC_MOD_POLICER: - if (mlx5_aso_sq_create(sh->ctx, &sh->mtrmng->pools_mng.sq, 0, - sh->tx_uar, sh->pdn, MLX5_ASO_QUEUE_LOG_DESC, - sh->sq_ts_format)) + if (mlx5_aso_sq_create(dev_ctx->ctx, &sh->mtrmng->pools_mng.sq, + 0, sh->tx_uar, dev_ctx->pdn, + MLX5_ASO_QUEUE_LOG_DESC, + sh->sq_ts_format)) return -1; mlx5_aso_mtr_init_sq(&sh->mtrmng->pools_mng.sq); break; @@ -335,9 +338,10 @@ mlx5_aso_queue_init(struct mlx5_dev_ctx_shared *sh, if (mlx5_aso_reg_mr(sh, 64 * sq_desc_n, &sh->ct_mng->aso_sq.mr, 0)) return -1; - if (mlx5_aso_sq_create(sh->ctx, &sh->ct_mng->aso_sq, 0, - sh->tx_uar, sh->pdn, MLX5_ASO_QUEUE_LOG_DESC, - sh->sq_ts_format)) { + if (mlx5_aso_sq_create(dev_ctx->ctx, &sh->ct_mng->aso_sq, 
0, + sh->tx_uar, dev_ctx->pdn, + MLX5_ASO_QUEUE_LOG_DESC, + sh->sq_ts_format)) { mlx5_aso_dereg_mr(sh, &sh->ct_mng->aso_sq.mr); return -1; } diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 5bb6d89a3f..6a336ac128 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -3684,8 +3684,8 @@ flow_dv_encap_decap_create_cb(void *tool_ctx, void *cb_ctx) } *resource = *ctx_resource; resource->idx = idx; - ret = mlx5_flow_os_create_flow_action_packet_reformat(sh->ctx, domain, - resource, + ret = mlx5_flow_os_create_flow_action_packet_reformat(sh->dev_ctx->ctx, + domain, resource, &resource->action); if (ret) { mlx5_ipool_free(sh->ipool[MLX5_IPOOL_DECAP_ENCAP], idx); @@ -5485,7 +5485,7 @@ flow_dv_modify_create_cb(void *tool_ctx, void *cb_ctx) else ns = sh->rx_domain; ret = mlx5_flow_os_create_flow_action_modify_header - (sh->ctx, ns, entry, + (sh->dev_ctx->ctx, ns, entry, data_len, &entry->action); if (ret) { mlx5_ipool_free(sh->mdh_ipools[ref->actions_num - 1], idx); @@ -6096,6 +6096,7 @@ flow_dv_counter_pool_prepare(struct rte_eth_dev *dev, { struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_flow_counter_mng *cmng = &priv->sh->cmng; + struct mlx5_dev_ctx *dev_ctx = priv->sh->dev_ctx; struct mlx5_flow_counter_pool *pool; struct mlx5_counters tmp_tq; struct mlx5_devx_obj *dcs = NULL; @@ -6107,7 +6108,7 @@ flow_dv_counter_pool_prepare(struct rte_eth_dev *dev, if (fallback) { /* bulk_bitmap must be 0 for single counter allocation. */ - dcs = mlx5_devx_cmd_flow_counter_alloc(priv->sh->ctx, 0); + dcs = mlx5_devx_cmd_flow_counter_alloc(dev_ctx->ctx, 0); if (!dcs) return NULL; pool = flow_dv_find_pool_by_id(cmng, dcs->id); @@ -6125,7 +6126,7 @@ flow_dv_counter_pool_prepare(struct rte_eth_dev *dev, *cnt_free = cnt; return pool; } - dcs = mlx5_devx_cmd_flow_counter_alloc(priv->sh->ctx, 0x4); + dcs = mlx5_devx_cmd_flow_counter_alloc(dev_ctx->ctx, 0x4); if (!dcs) { rte_errno = ENODATA; return NULL; @@ -6477,16 +6478,17 @@ flow_dv_mtr_pool_create(struct rte_eth_dev *dev, struct mlx5_aso_mtr **mtr_free) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_aso_mtr_pools_mng *pools_mng = - &priv->sh->mtrmng->pools_mng; + struct mlx5_dev_ctx *dev_ctx = priv->sh->dev_ctx; + struct mlx5_aso_mtr_pools_mng *pools_mng = &priv->sh->mtrmng->pools_mng; struct mlx5_aso_mtr_pool *pool = NULL; struct mlx5_devx_obj *dcs = NULL; uint32_t i; uint32_t log_obj_size; log_obj_size = rte_log2_u32(MLX5_ASO_MTRS_PER_POOL >> 1); - dcs = mlx5_devx_cmd_create_flow_meter_aso_obj(priv->sh->ctx, - priv->sh->pdn, log_obj_size); + dcs = mlx5_devx_cmd_create_flow_meter_aso_obj(dev_ctx->ctx, + dev_ctx->pdn, + log_obj_size); if (!dcs) { rte_errno = ENODATA; return NULL; @@ -6508,8 +6510,7 @@ flow_dv_mtr_pool_create(struct rte_eth_dev *dev, pools_mng->n_valid++; for (i = 1; i < MLX5_ASO_MTRS_PER_POOL; ++i) { pool->mtrs[i].offset = i; - LIST_INSERT_HEAD(&pools_mng->meters, - &pool->mtrs[i], next); + LIST_INSERT_HEAD(&pools_mng->meters, &pool->mtrs[i], next); } pool->mtrs[0].offset = 0; *mtr_free = &pool->mtrs[0]; @@ -9181,7 +9182,7 @@ flow_dev_geneve_tlv_option_resource_register(struct rte_eth_dev *dev, } } else { /* Create a GENEVE TLV object and resource. 
*/ - obj = mlx5_devx_cmd_create_geneve_tlv_option(sh->ctx, + obj = mlx5_devx_cmd_create_geneve_tlv_option(sh->dev_ctx->ctx, geneve_opt_v->option_class, geneve_opt_v->option_type, geneve_opt_v->option_len); @@ -10539,7 +10540,8 @@ flow_dv_matcher_create_cb(void *tool_ctx, void *cb_ctx) dv_attr.priority = ref->priority; if (tbl->is_egress) dv_attr.flags |= IBV_FLOW_ATTR_FLAGS_EGRESS; - ret = mlx5_flow_os_create_flow_matcher(sh->ctx, &dv_attr, tbl->tbl.obj, + ret = mlx5_flow_os_create_flow_matcher(sh->dev_ctx->ctx, &dv_attr, + tbl->tbl.obj, &resource->matcher_object); if (ret) { mlx5_free(resource); @@ -11958,8 +11960,8 @@ flow_dv_age_pool_create(struct rte_eth_dev *dev, struct mlx5_devx_obj *obj = NULL; uint32_t i; - obj = mlx5_devx_cmd_create_flow_hit_aso_obj(priv->sh->ctx, - priv->sh->pdn); + obj = mlx5_devx_cmd_create_flow_hit_aso_obj(priv->sh->dev_ctx->ctx, + priv->sh->dev_ctx->pdn); if (!obj) { rte_errno = ENODATA; DRV_LOG(ERR, "Failed to create flow_hit_aso_obj using DevX."); @@ -12371,13 +12373,15 @@ flow_dv_ct_pool_create(struct rte_eth_dev *dev, { struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_aso_ct_pools_mng *mng = priv->sh->ct_mng; + struct mlx5_dev_ctx *dev_ctx = priv->sh->dev_ctx; struct mlx5_aso_ct_pool *pool = NULL; struct mlx5_devx_obj *obj = NULL; uint32_t i; uint32_t log_obj_size = rte_log2_u32(MLX5_ASO_CT_ACTIONS_PER_POOL); - obj = mlx5_devx_cmd_create_conn_track_offload_obj(priv->sh->ctx, - priv->sh->pdn, log_obj_size); + obj = mlx5_devx_cmd_create_conn_track_offload_obj(dev_ctx->ctx, + dev_ctx->pdn, + log_obj_size); if (!obj) { rte_errno = ENODATA; DRV_LOG(ERR, "Failed to create conn_track_offload_obj using DevX."); @@ -17123,8 +17127,7 @@ flow_dv_destroy_sub_policy_with_rxq(struct rte_eth_dev *dev, break; case MLX5_FLOW_FATE_QUEUE: sub_policy = mtr_policy->sub_policys[domain][0]; - __flow_dv_destroy_sub_policy_rules(dev, - sub_policy); + __flow_dv_destroy_sub_policy_rules(dev, sub_policy); break; default: /*Other actions without queue and do nothing*/ @@ -17173,8 +17176,8 @@ mlx5_flow_discover_dr_action_support(struct rte_eth_dev *dev) goto err; dv_attr.match_criteria_enable = flow_dv_matcher_enable(mask.buf); __flow_dv_adjust_buf_size(&mask.size, dv_attr.match_criteria_enable); - ret = mlx5_flow_os_create_flow_matcher(sh->ctx, &dv_attr, tbl->obj, - &matcher); + ret = mlx5_flow_os_create_flow_matcher(sh->dev_ctx->ctx, &dv_attr, + tbl->obj, &matcher); if (ret) goto err; __flow_dv_adjust_buf_size(&value.size, dv_attr.match_criteria_enable); @@ -17242,7 +17245,7 @@ mlx5_flow_dv_discover_counter_offset_support(struct rte_eth_dev *dev) 0, 0, 0, NULL); if (!tbl) goto err; - dcs = mlx5_devx_cmd_flow_counter_alloc(priv->sh->ctx, 0x4); + dcs = mlx5_devx_cmd_flow_counter_alloc(sh->dev_ctx->ctx, 0x4); if (!dcs) goto err; ret = mlx5_flow_os_create_flow_action_count(dcs->obj, UINT16_MAX, @@ -17251,8 +17254,8 @@ mlx5_flow_dv_discover_counter_offset_support(struct rte_eth_dev *dev) goto err; dv_attr.match_criteria_enable = flow_dv_matcher_enable(mask.buf); __flow_dv_adjust_buf_size(&mask.size, dv_attr.match_criteria_enable); - ret = mlx5_flow_os_create_flow_matcher(sh->ctx, &dv_attr, tbl->obj, - &matcher); + ret = mlx5_flow_os_create_flow_matcher(sh->dev_ctx->ctx, &dv_attr, + tbl->obj, &matcher); if (ret) goto err; __flow_dv_adjust_buf_size(&value.size, dv_attr.match_criteria_enable); diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c index b93fd4d2c9..2c132a8c16 100644 --- a/drivers/net/mlx5/mlx5_flow_verbs.c +++ 
b/drivers/net/mlx5/mlx5_flow_verbs.c @@ -198,7 +198,7 @@ flow_verbs_counter_create(struct rte_eth_dev *dev, { #if defined(HAVE_IBV_DEVICE_COUNTERS_SET_V42) struct mlx5_priv *priv = dev->data->dev_private; - struct ibv_context *ctx = priv->sh->ctx; + struct ibv_context *ctx = priv->sh->dev_ctx->ctx; struct ibv_counter_set_init_attr init = { .counter_set_id = counter->shared_info.id}; @@ -210,7 +210,7 @@ flow_verbs_counter_create(struct rte_eth_dev *dev, return 0; #elif defined(HAVE_IBV_DEVICE_COUNTERS_SET_V45) struct mlx5_priv *priv = dev->data->dev_private; - struct ibv_context *ctx = priv->sh->ctx; + struct ibv_context *ctx = priv->sh->dev_ctx->ctx; struct ibv_counters_init_attr init = {0}; struct ibv_counter_attach_attr attach; int ret; diff --git a/drivers/net/mlx5/mlx5_mr.c b/drivers/net/mlx5/mlx5_mr.c index 44afda731f..b7297f22fe 100644 --- a/drivers/net/mlx5/mlx5_mr.c +++ b/drivers/net/mlx5/mlx5_mr.c @@ -84,7 +84,7 @@ mlx5_rx_addr2mr_bh(struct mlx5_rxq_data *rxq, uintptr_t addr) struct mlx5_mr_ctrl *mr_ctrl = &rxq->mr_ctrl; struct mlx5_priv *priv = rxq_ctrl->priv; - return mlx5_mr_addr2mr_bh(priv->sh->pd, &priv->mp_id, + return mlx5_mr_addr2mr_bh(priv->sh->dev_ctx->pd, &priv->mp_id, &priv->sh->share_cache, mr_ctrl, addr, priv->config.mr_ext_memseg_en); } @@ -108,7 +108,7 @@ mlx5_tx_addr2mr_bh(struct mlx5_txq_data *txq, uintptr_t addr) struct mlx5_mr_ctrl *mr_ctrl = &txq->mr_ctrl; struct mlx5_priv *priv = txq_ctrl->priv; - return mlx5_mr_addr2mr_bh(priv->sh->pd, &priv->mp_id, + return mlx5_mr_addr2mr_bh(priv->sh->dev_ctx->pd, &priv->mp_id, &priv->sh->share_cache, mr_ctrl, addr, priv->config.mr_ext_memseg_en); } @@ -177,7 +177,7 @@ mlx5_mr_update_ext_mp_cb(struct rte_mempool *mp, void *opaque, return; DRV_LOG(DEBUG, "port %u register MR for chunk #%d of mempool (%s)", dev->data->port_id, mem_idx, mp->name); - mr = mlx5_create_mr_ext(sh->pd, addr, len, mp->socket_id, + mr = mlx5_create_mr_ext(sh->dev_ctx->pd, addr, len, mp->socket_id, sh->share_cache.reg_mr_cb); if (!mr) { DRV_LOG(WARNING, @@ -193,7 +193,7 @@ mlx5_mr_update_ext_mp_cb(struct rte_mempool *mp, void *opaque, mlx5_mr_insert_cache(&sh->share_cache, mr); rte_rwlock_write_unlock(&sh->share_cache.rwlock); /* Insert to the local cache table */ - mlx5_mr_addr2mr_bh(sh->pd, &priv->mp_id, &sh->share_cache, + mlx5_mr_addr2mr_bh(sh->dev_ctx->pd, &priv->mp_id, &sh->share_cache, mr_ctrl, addr, priv->config.mr_ext_memseg_en); } @@ -253,8 +253,8 @@ mlx5_net_dma_map(struct rte_device *rte_dev, void *addr, } priv = dev->data->dev_private; sh = priv->sh; - mr = mlx5_create_mr_ext(sh->pd, (uintptr_t)addr, len, SOCKET_ID_ANY, - sh->share_cache.reg_mr_cb); + mr = mlx5_create_mr_ext(sh->dev_ctx->pd, (uintptr_t)addr, len, + SOCKET_ID_ANY, sh->share_cache.reg_mr_cb); if (!mr) { DRV_LOG(WARNING, "port %u unable to dma map", dev->data->port_id); @@ -409,7 +409,7 @@ mlx5_mr_update_mp_cb(struct rte_mempool *mp __rte_unused, void *opaque, if (data->ret < 0) return; /* Register address of the chunk and update local caches. 
*/ - lkey = mlx5_mr_addr2mr_bh(priv->sh->pd, &priv->mp_id, + lkey = mlx5_mr_addr2mr_bh(priv->sh->dev_ctx->pd, &priv->mp_id, &priv->sh->share_cache, data->mr_ctrl, (uintptr_t)memhdr->addr, priv->config.mr_ext_memseg_en); diff --git a/drivers/net/mlx5/mlx5_txpp.c b/drivers/net/mlx5/mlx5_txpp.c index 4f6da9f2d1..ff1c3d204c 100644 --- a/drivers/net/mlx5/mlx5_txpp.c +++ b/drivers/net/mlx5/mlx5_txpp.c @@ -49,7 +49,7 @@ static int mlx5_txpp_create_event_channel(struct mlx5_dev_ctx_shared *sh) { MLX5_ASSERT(!sh->txpp.echan); - sh->txpp.echan = mlx5_os_devx_create_event_channel(sh->ctx, + sh->txpp.echan = mlx5_os_devx_create_event_channel(sh->dev_ctx->ctx, MLX5DV_DEVX_CREATE_EVENT_CHANNEL_FLAGS_OMIT_EV_DATA); if (!sh->txpp.echan) { rte_errno = errno; @@ -104,7 +104,7 @@ mlx5_txpp_alloc_pp_index(struct mlx5_dev_ctx_shared *sh) MLX5_SET(set_pp_rate_limit_context, &pp, rate_mode, sh->txpp.test ? MLX5_DATA_RATE : MLX5_WQE_RATE); sh->txpp.pp = mlx5_glue->dv_alloc_pp - (sh->ctx, sizeof(pp), &pp, + (sh->dev_ctx->ctx, sizeof(pp), &pp, MLX5DV_PP_ALLOC_FLAGS_DEDICATED_INDEX); if (sh->txpp.pp == NULL) { DRV_LOG(ERR, "Failed to allocate packet pacing index."); @@ -232,7 +232,7 @@ mlx5_txpp_create_rearm_queue(struct mlx5_dev_ctx_shared *sh) .tis_lst_sz = 1, .tis_num = sh->tis->id, .wq_attr = (struct mlx5_devx_wq_attr){ - .pd = sh->pdn, + .pd = sh->dev_ctx->pdn, .uar_page = mlx5_os_get_devx_uar_page_id(sh->tx_uar), }, .ts_format = mlx5_ts_format_conv(sh->sq_ts_format), @@ -245,7 +245,7 @@ mlx5_txpp_create_rearm_queue(struct mlx5_dev_ctx_shared *sh) int ret; /* Create completion queue object for Rearm Queue. */ - ret = mlx5_devx_cq_create(sh->ctx, &wq->cq_obj, + ret = mlx5_devx_cq_create(sh->dev_ctx->ctx, &wq->cq_obj, log2above(MLX5_TXPP_REARM_CQ_SIZE), &cq_attr, sh->numa_node); if (ret) { @@ -259,7 +259,7 @@ mlx5_txpp_create_rearm_queue(struct mlx5_dev_ctx_shared *sh) /* Create send queue object for Rearm Queue. */ sq_attr.cqn = wq->cq_obj.cq->id; /* There should be no WQE leftovers in the cyclic queue. */ - ret = mlx5_devx_sq_create(sh->ctx, &wq->sq_obj, + ret = mlx5_devx_sq_create(sh->dev_ctx->ctx, &wq->sq_obj, log2above(MLX5_TXPP_REARM_SQ_SIZE), &sq_attr, sh->numa_node); if (ret) { @@ -409,7 +409,7 @@ mlx5_txpp_create_clock_queue(struct mlx5_dev_ctx_shared *sh) sh->txpp.ts_p = 0; sh->txpp.ts_n = 0; /* Create completion queue object for Clock Queue. 
*/ - ret = mlx5_devx_cq_create(sh->ctx, &wq->cq_obj, + ret = mlx5_devx_cq_create(sh->dev_ctx->ctx, &wq->cq_obj, log2above(MLX5_TXPP_CLKQ_SIZE), &cq_attr, sh->numa_node); if (ret) { @@ -444,9 +444,10 @@ mlx5_txpp_create_clock_queue(struct mlx5_dev_ctx_shared *sh) sq_attr.packet_pacing_rate_limit_index = sh->txpp.pp_id; sq_attr.wq_attr.cd_slave = 1; sq_attr.wq_attr.uar_page = mlx5_os_get_devx_uar_page_id(sh->tx_uar); - sq_attr.wq_attr.pd = sh->pdn; + sq_attr.wq_attr.pd = sh->dev_ctx->pdn; sq_attr.ts_format = mlx5_ts_format_conv(sh->sq_ts_format); - ret = mlx5_devx_sq_create(sh->ctx, &wq->sq_obj, log2above(wq->sq_size), + ret = mlx5_devx_sq_create(sh->dev_ctx->ctx, &wq->sq_obj, + log2above(wq->sq_size), &sq_attr, sh->numa_node); if (ret) { rte_errno = errno; diff --git a/drivers/net/mlx5/windows/mlx5_ethdev_os.c b/drivers/net/mlx5/windows/mlx5_ethdev_os.c index c709dd19be..352dfa9331 100644 --- a/drivers/net/mlx5/windows/mlx5_ethdev_os.c +++ b/drivers/net/mlx5/windows/mlx5_ethdev_os.c @@ -38,7 +38,7 @@ mlx5_get_mac(struct rte_eth_dev *dev, uint8_t (*mac)[RTE_ETHER_ADDR_LEN]) return -rte_errno; } priv = dev->data->dev_private; - context_obj = (mlx5_context_st *)priv->sh->ctx; + context_obj = (mlx5_context_st *)priv->sh->dev_ctx->ctx; memcpy(mac, context_obj->mlx5_dev.eth_mac, RTE_ETHER_ADDR_LEN); return 0; } @@ -66,7 +66,7 @@ mlx5_get_ifname(const struct rte_eth_dev *dev, char (*ifname)[MLX5_NAMESIZE]) return -rte_errno; } priv = dev->data->dev_private; - context_obj = (mlx5_context_st *)priv->sh->ctx; + context_obj = (mlx5_context_st *)priv->sh->dev_ctx->ctx; strncpy(*ifname, context_obj->mlx5_dev.name, MLX5_NAMESIZE); return 0; } @@ -93,7 +93,7 @@ mlx5_get_mtu(struct rte_eth_dev *dev, uint16_t *mtu) return -rte_errno; } priv = dev->data->dev_private; - context_obj = (mlx5_context_st *)priv->sh->ctx; + context_obj = (mlx5_context_st *)priv->sh->dev_ctx->ctx; *mtu = context_obj->mlx5_dev.mtu_bytes; return 0; } @@ -253,7 +253,7 @@ mlx5_link_update(struct rte_eth_dev *dev, int wait_to_complete) return -rte_errno; } priv = dev->data->dev_private; - context_obj = (mlx5_context_st *)priv->sh->ctx; + context_obj = (mlx5_context_st *)priv->sh->dev_ctx->ctx; dev_link.link_speed = context_obj->mlx5_dev.link_speed / (1000 * 1000); dev_link.link_status = (context_obj->mlx5_dev.link_state == 1 && !mlx5_is_removed(dev)) @@ -359,7 +359,8 @@ mlx5_read_clock(struct rte_eth_dev *dev, uint64_t *clock) int err; struct mlx5_devx_clock mlx5_clock; struct mlx5_priv *priv = dev->data->dev_private; - mlx5_context_st *context_obj = (mlx5_context_st *)priv->sh->ctx; + mlx5_context_st *context_obj = + (mlx5_context_st *)priv->sh->dev_ctx->ctx; err = mlx5_glue->query_rt_values(context_obj, &mlx5_clock); if (err != 0) { @@ -383,7 +384,8 @@ int mlx5_is_removed(struct rte_eth_dev *dev) { struct mlx5_priv *priv = dev->data->dev_private; - mlx5_context_st *context_obj = (mlx5_context_st *)priv->sh->ctx; + mlx5_context_st *context_obj = + (mlx5_context_st *)priv->sh->dev_ctx->ctx; if (*context_obj->shutdown_event_obj.p_flag) return 1; diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c index 2f5c29662e..f6a7fbaca1 100644 --- a/drivers/net/mlx5/windows/mlx5_os.c +++ b/drivers/net/mlx5/windows/mlx5_os.c @@ -240,50 +240,6 @@ mlx5_os_set_nonblock_channel_fd(int fd) return -ENOTSUP; } -/** - * Function API open device under Windows - * - * This function calls the Windows glue APIs to open a device. - * - * @param[in] spawn - * Pointer to the device attributes (name, port, etc). 
- * @param[out] config - * Pointer to device configuration structure. - * @param[out] sh - * Pointer to shared context structure. - * - * @return - * 0 on success, a positive error value otherwise. - */ -int -mlx5_os_open_device(const struct mlx5_dev_spawn_data *spawn, - const struct mlx5_dev_config *config, - struct mlx5_dev_ctx_shared *sh) -{ - RTE_SET_USED(config); - int err = 0; - struct mlx5_context *mlx5_ctx; - - pthread_mutex_init(&sh->txpp.mutex, NULL); - /* Set numa node from pci probe */ - sh->numa_node = spawn->pci_dev->device.numa_node; - - /* Try to open device with DevX */ - rte_errno = 0; - sh->ctx = mlx5_glue->open_device(spawn->phys_dev); - if (!sh->ctx) { - DRV_LOG(ERR, "open_device failed"); - err = errno; - return err; - } - sh->devx = 1; - mlx5_ctx = (struct mlx5_context *)sh->ctx; - err = mlx5_glue->query_device(spawn->phys_dev, &mlx5_ctx->mlx5_dev); - if (err) - DRV_LOG(ERR, "Failed to query device context fields."); - return err; -} - /** * DV flow counter mode detect and config. * @@ -328,6 +284,8 @@ mlx5_flow_counter_mode_config(struct rte_eth_dev *dev __rte_unused) * * @param dpdk_dev * Backing DPDK device. + * @param dev_ctx + * Pointer to the context device data structure. * @param spawn * Verbs device parameters (name, port, switch_info) to spawn. * @param config @@ -341,6 +299,7 @@ mlx5_flow_counter_mode_config(struct rte_eth_dev *dev __rte_unused) */ static struct rte_eth_dev * mlx5_dev_spawn(struct rte_device *dpdk_dev, + struct mlx5_dev_ctx *dev_ctx, struct mlx5_dev_spawn_data *spawn, struct mlx5_dev_config *config) { @@ -378,21 +337,20 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, goto error; } mlx5_malloc_mem_select(config->sys_mem_en); - sh = mlx5_alloc_shared_dev_ctx(spawn, config); + sh = mlx5_alloc_shared_dev_ctx(spawn, dev_ctx, config); if (!sh) return NULL; - config->devx = sh->devx; /* Initialize the shutdown event in mlx5_dev_spawn to * support mlx5_is_removed for Windows. */ - err = mlx5_glue->devx_init_showdown_event(sh->ctx); + err = mlx5_glue->devx_init_showdown_event(sh->dev_ctx->ctx); if (err) { DRV_LOG(ERR, "failed to init showdown event: %s", strerror(errno)); goto error; } DRV_LOG(DEBUG, "MPW isn't supported"); - mlx5_os_get_dev_attr(sh->ctx, &device_attr); + mlx5_os_get_dev_attr(sh->dev_ctx->ctx, &device_attr); config->swp = 0; config->ind_table_max_size = sh->device_attr.max_rwq_indirection_table_size; @@ -485,7 +443,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, config->cqe_comp = 0; } if (config->devx) { - err = mlx5_devx_cmd_query_hca_attr(sh->ctx, &config->hca_attr); + err = mlx5_devx_cmd_query_hca_attr(sh->dev_ctx->ctx, + &config->hca_attr); if (err) { err = -err; goto error; @@ -508,7 +467,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, err = config->hca_attr.access_register_user ? 
mlx5_devx_cmd_register_read - (sh->ctx, MLX5_REGISTER_ID_MTUTC, 0, + (sh->dev_ctx->ctx, MLX5_REGISTER_ID_MTUTC, 0, reg, MLX5_ST_SZ_DW(register_mtutc)) : ENOTSUP; if (!err) { uint32_t ts_mode; @@ -701,7 +660,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, if (eth_dev != NULL) { /* mac_addrs must not be freed alone because part of * dev_private - **/ + */ eth_dev->data->mac_addrs = NULL; rte_eth_dev_release_port(eth_dev); } @@ -919,15 +878,13 @@ int mlx5_os_net_probe(struct rte_device *dev) { struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev); + struct mlx5_dev_ctx *dev_ctx; struct mlx5_dev_spawn_data spawn = { .pf_bond = -1 }; - struct devx_device_bdf *devx_bdf_match = mlx5_os_get_devx_device(dev); struct mlx5_dev_config dev_config; unsigned int dev_config_vf; int ret; uint32_t restore; - if (devx_bdf_match == NULL) - return -rte_errno; if (rte_eal_process_type() == RTE_PROC_SECONDARY) { DRV_LOG(ERR, "Secondary process is not supported on Windows."); return -ENOTSUP; @@ -938,11 +895,20 @@ mlx5_os_net_probe(struct rte_device *dev) strerror(rte_errno)); return -rte_errno; } + dev_ctx = mlx5_malloc(MLX5_MEM_ZERO, sizeof(struct mlx5_dev_ctx), + RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY); + if (dev_ctx == NULL) { + DRV_LOG(ERR, "Device context allocation failure."); + rte_errno = ENOMEM; + return -rte_errno; + } + ret = mlx5_dev_ctx_prepare(dev_ctx, dev, MLX5_CLASS_ETH); + if (ret < 0) + goto error; memset(&spawn.info, 0, sizeof(spawn.info)); spawn.max_port = 1; spawn.phys_port = 1; - spawn.phys_dev = devx_bdf_match; - spawn.phys_dev_name = mlx5_os_get_dev_device_name(devx_bdf_match); + spawn.phys_dev_name = mlx5_os_get_ctx_device_name(dev_ctx->ctx); spawn.eth_dev = NULL; spawn.pci_dev = pci_dev; spawn.ifindex = -1; /* Spawn will assign */ @@ -972,6 +938,7 @@ mlx5_os_net_probe(struct rte_device *dev) /* Default configuration. */ memset(&dev_config, 0, sizeof(struct mlx5_dev_config)); dev_config.vf = dev_config_vf; + dev_config.devx = 1; dev_config.mps = 0; dev_config.dbnc = MLX5_ARG_UNSET; dev_config.rx_vec_en = 1; @@ -987,16 +954,21 @@ mlx5_os_net_probe(struct rte_device *dev) dev_config.dv_flow_en = 1; dev_config.decap_en = 0; dev_config.log_hp_size = MLX5_ARG_UNSET; - spawn.numa_node = pci_dev->device.numa_node; - spawn.eth_dev = mlx5_dev_spawn(dev, &spawn, &dev_config); - if (!spawn.eth_dev) - return -rte_errno; + spawn.eth_dev = mlx5_dev_spawn(dev, dev_ctx, &spawn, &dev_config); + if (!spawn.eth_dev) { + ret = -rte_errno; + goto error; + } restore = spawn.eth_dev->data->dev_flags; rte_eth_copy_pci_info(spawn.eth_dev, pci_dev); /* Restore non-PCI flags cleared by the above call. */ spawn.eth_dev->data->dev_flags |= restore; rte_eth_dev_probing_finish(spawn.eth_dev); return 0; +error: + mlx5_dev_ctx_release(dev_ctx); + mlx5_free(dev_ctx); + return ret; } /** @@ -1016,25 +988,4 @@ mlx5_os_set_reg_mr_cb(mlx5_reg_mr_t *reg_mr_cb, *dereg_mr_cb = mlx5_os_dereg_mr; } -/** - * Extract pdn of PD object using DevX - * - * @param[in] pd - * Pointer to the DevX PD object. - * @param[out] pdn - * Pointer to the PD object number variable. - * - * @return - * 0 on success, error value otherwise. 
- */ -int -mlx5_os_get_pdn(void *pd, uint32_t *pdn) -{ - if (!pd) - return -EINVAL; - - *pdn = ((struct mlx5_pd *)pd)->pdn; - return 0; -} - const struct mlx5_flow_driver_ops mlx5_flow_verbs_drv_ops = {0};

From patchwork Tue Aug 17 13:44:31 2021
X-Patchwork-Submitter: Michael Baum
X-Patchwork-Id: 97001
X-Patchwork-Delegate: rasland@nvidia.com
From: Michael Baum
Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
Date: Tue, 17 Aug 2021 16:44:31 +0300
Message-ID: <20210817134441.1966618-12-michaelba@nvidia.com>
In-Reply-To: <20210817134441.1966618-1-michaelba@nvidia.com>
References: <20210817134441.1966618-1-michaelba@nvidia.com>
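The patch that follows stops caching numa_node in the shared context (sh) and reads it from the common device context instead. As a rough, standalone illustration of the resulting pattern -- the structs here are stand-ins, not the driver's mlx5_dev_ctx definition -- every allocation and queue-creation path takes the NUMA node from one place:

    #include <stdio.h>
    #include <stdlib.h>

    /* Stand-in for the two mlx5_dev_ctx fields this sketch needs. */
    struct dev_ctx_stub {
        void *ctx;       /* device context handle */
        int numa_node;   /* NUMA node of the backing physical device */
    };

    /* In the driver this would be mlx5_malloc()/mlx5_devx_cq_create() taking
     * dev_ctx->numa_node; plain malloc() keeps the sketch self-contained. */
    static void *
    alloc_on_dev_node(const struct dev_ctx_stub *dev_ctx, size_t size)
    {
        printf("allocating %zu bytes on NUMA node %d\n", size, dev_ctx->numa_node);
        return malloc(size);
    }

    int
    main(void)
    {
        struct dev_ctx_stub dev_ctx = { .ctx = NULL, .numa_node = 1 };
        void *obj = alloc_on_dev_node(&dev_ctx, 256);

        free(obj);
        return 0;
    }

Keeping a single copy of the node avoids the shared-context field drifting from the device it fronts.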
BN8NAM11FT005.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR12MB1706 Subject: [dpdk-dev] [RFC 11/21] net/mlx5: move NUMA node field to context device X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Remove numa node field from sh structure, and use instead in context device structure. Signed-off-by: Michael Baum --- drivers/net/mlx5/mlx5.c | 3 +-- drivers/net/mlx5/mlx5.h | 1 - drivers/net/mlx5/mlx5_devx.c | 11 ++++++----- drivers/net/mlx5/mlx5_txpp.c | 10 +++++----- 4 files changed, 12 insertions(+), 13 deletions(-) diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index f5f325d35a..b695f2f6d3 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1142,7 +1142,6 @@ mlx5_alloc_shared_dev_ctx(const struct mlx5_dev_spawn_data *spawn, goto exit; } sh->devx = config->devx; - sh->numa_node = dev_ctx->numa_node; if (spawn->bond_info) sh->bond = *spawn->bond_info; pthread_mutex_init(&sh->txpp.mutex, NULL); @@ -1207,7 +1206,7 @@ mlx5_alloc_shared_dev_ctx(const struct mlx5_dev_spawn_data *spawn, */ err = mlx5_mr_btree_init(&sh->share_cache.cache, MLX5_MR_BTREE_CACHE_N * 2, - sh->numa_node); + dev_ctx->numa_node); if (err) { err = rte_errno; goto error; diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 1e52b9ac9a..f6d8e1d817 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -1145,7 +1145,6 @@ struct mlx5_dev_ctx_shared { char ibdev_name[MLX5_FS_NAME_MAX]; /* SYSFS dev name. */ char ibdev_path[MLX5_FS_PATH_MAX]; /* SYSFS dev path for secondary */ struct mlx5_dev_attr device_attr; /* Device properties. */ - int numa_node; /* Numa node of backing physical device. */ LIST_ENTRY(mlx5_dev_ctx_shared) mem_event_cb; /**< Called by memory event callback. */ struct mlx5_mr_share_cache share_cache; diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c index 3cafd46837..787c771167 100644 --- a/drivers/net/mlx5/mlx5_devx.c +++ b/drivers/net/mlx5/mlx5_devx.c @@ -366,7 +366,7 @@ mlx5_rxq_create_devx_cq_resources(struct rte_eth_dev *dev, uint16_t idx) log_cqe_n = log2above(cqe_n); /* Create CQ using DevX API. */ ret = mlx5_devx_cq_create(sh->dev_ctx->ctx, &rxq_ctrl->obj->cq_obj, - log_cqe_n, &cq_attr, sh->numa_node); + log_cqe_n, &cq_attr, sh->dev_ctx->numa_node); if (ret) return ret; cq_obj = &rxq_ctrl->obj->cq_obj; @@ -981,6 +981,7 @@ mlx5_txq_create_devx_sq_resources(struct rte_eth_dev *dev, uint16_t idx, uint16_t log_desc_n) { struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_dev_ctx *dev_ctx = priv->sh->dev_ctx; struct mlx5_txq_data *txq_data = (*priv->txqs)[idx]; struct mlx5_txq_ctrl *txq_ctrl = container_of(txq_data, struct mlx5_txq_ctrl, txq); @@ -994,15 +995,15 @@ mlx5_txq_create_devx_sq_resources(struct rte_eth_dev *dev, uint16_t idx, .tis_lst_sz = 1, .tis_num = priv->sh->tis->id, .wq_attr = (struct mlx5_devx_wq_attr){ - .pd = priv->sh->dev_ctx->pdn, + .pd = dev_ctx->pdn, .uar_page = mlx5_os_get_devx_uar_page_id(priv->sh->tx_uar), }, .ts_format = mlx5_ts_format_conv(priv->sh->sq_ts_format), }; /* Create Send Queue object with DevX. 
*/ - return mlx5_devx_sq_create(priv->sh->dev_ctx->ctx, &txq_obj->sq_obj, - log_desc_n, &sq_attr, priv->sh->numa_node); + return mlx5_devx_sq_create(dev_ctx->ctx, &txq_obj->sq_obj, log_desc_n, + &sq_attr, dev_ctx->numa_node); } #endif @@ -1059,7 +1060,7 @@ mlx5_txq_devx_obj_new(struct rte_eth_dev *dev, uint16_t idx) } /* Create completion queue object with DevX. */ ret = mlx5_devx_cq_create(sh->dev_ctx->ctx, &txq_obj->cq_obj, - log_desc_n, &cq_attr, sh->numa_node); + log_desc_n, &cq_attr, sh->dev_ctx->numa_node); if (ret) { DRV_LOG(ERR, "Port %u Tx queue %u CQ creation failure.", dev->data->port_id, idx); diff --git a/drivers/net/mlx5/mlx5_txpp.c b/drivers/net/mlx5/mlx5_txpp.c index ff1c3d204c..b49a47bd77 100644 --- a/drivers/net/mlx5/mlx5_txpp.c +++ b/drivers/net/mlx5/mlx5_txpp.c @@ -247,7 +247,7 @@ mlx5_txpp_create_rearm_queue(struct mlx5_dev_ctx_shared *sh) /* Create completion queue object for Rearm Queue. */ ret = mlx5_devx_cq_create(sh->dev_ctx->ctx, &wq->cq_obj, log2above(MLX5_TXPP_REARM_CQ_SIZE), &cq_attr, - sh->numa_node); + sh->dev_ctx->numa_node); if (ret) { DRV_LOG(ERR, "Failed to create CQ for Rearm Queue."); return ret; @@ -261,7 +261,7 @@ mlx5_txpp_create_rearm_queue(struct mlx5_dev_ctx_shared *sh) /* There should be no WQE leftovers in the cyclic queue. */ ret = mlx5_devx_sq_create(sh->dev_ctx->ctx, &wq->sq_obj, log2above(MLX5_TXPP_REARM_SQ_SIZE), &sq_attr, - sh->numa_node); + sh->dev_ctx->numa_node); if (ret) { rte_errno = errno; DRV_LOG(ERR, "Failed to create SQ for Rearm Queue."); @@ -401,7 +401,7 @@ mlx5_txpp_create_clock_queue(struct mlx5_dev_ctx_shared *sh) sh->txpp.tsa = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, MLX5_TXPP_REARM_SQ_SIZE * sizeof(struct mlx5_txpp_ts), - 0, sh->numa_node); + 0, sh->dev_ctx->numa_node); if (!sh->txpp.tsa) { DRV_LOG(ERR, "Failed to allocate memory for CQ stats."); return -ENOMEM; @@ -411,7 +411,7 @@ mlx5_txpp_create_clock_queue(struct mlx5_dev_ctx_shared *sh) /* Create completion queue object for Clock Queue. 
*/ ret = mlx5_devx_cq_create(sh->dev_ctx->ctx, &wq->cq_obj, log2above(MLX5_TXPP_CLKQ_SIZE), &cq_attr, - sh->numa_node); + sh->dev_ctx->numa_node); if (ret) { DRV_LOG(ERR, "Failed to create CQ for Clock Queue."); goto error; @@ -448,7 +448,7 @@ mlx5_txpp_create_clock_queue(struct mlx5_dev_ctx_shared *sh) sq_attr.ts_format = mlx5_ts_format_conv(sh->sq_ts_format); ret = mlx5_devx_sq_create(sh->dev_ctx->ctx, &wq->sq_obj, log2above(wq->sq_size), - &sq_attr, sh->numa_node); + &sq_attr, sh->dev_ctx->numa_node); if (ret) { rte_errno = errno; DRV_LOG(ERR, "Failed to create SQ for Clock Queue.");

From patchwork Tue Aug 17 13:44:32 2021
X-Patchwork-Submitter: Michael Baum
X-Patchwork-Id: 97002
X-Patchwork-Delegate: rasland@nvidia.com
From: Michael Baum
Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
Date: Tue, 17 Aug 2021 16:44:32 +0300
Message-ID: <20210817134441.1966618-13-michaelba@nvidia.com>
In-Reply-To: <20210817134441.1966618-1-michaelba@nvidia.com>
References: <20210817134441.1966618-1-michaelba@nvidia.com>
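The patch that follows moves the RoCE-disable helpers into the common code so the vDPA class can request an IB device with RoCE already turned off. The sysfs leg of that logic reduces to the standalone sketch below; the PCI address is only a placeholder, and writing the roce_enable knob needs root and an mlx5 device behind it:

    #include <stdio.h>

    /* Read /sys/bus/pci/devices/<BDF>/roce_enable and write "0" if RoCE is
     * currently enabled. Mirrors the flow of the sysfs helper in the patch
     * below, without the driver's logging and errno handling. */
    static int
    sysfs_roce_disable(const char *pci_addr)
    {
        char path[128];
        int enable;
        FILE *f;

        snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/roce_enable", pci_addr);
        f = fopen(path, "rb");
        if (f == NULL)
            return -1;                  /* knob not exposed for this device */
        if (fscanf(f, "%d", &enable) != 1) {
            fclose(f);
            return -1;
        }
        fclose(f);
        if (enable == 0)
            return 0;                   /* already disabled, nothing to do */
        f = fopen(path, "wb");
        if (f == NULL)
            return -1;
        fprintf(f, "0\n");
        fclose(f);
        return 0;
    }

    int
    main(void)
    {
        /* Example PCI address only; replace with a real mlx5 device BDF. */
        return sysfs_roce_disable("0000:08:00.0") == 0 ? 0 : 1;
    }

In the patch the Netlink/devlink path is attempted first and sysfs is the fallback.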
X-MS-Exchange-CrossTenant-Network-Message-Id: e2caf681-6833-40de-37e9-08d961854a32 X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a; Ip=[216.228.112.32]; Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: CO1NAM11FT023.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: CY4PR1201MB2535 Subject: [dpdk-dev] [RFC 12/21] common/mlx5: add ROCE disable in context device creation X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add option to get IB device after disabling RoCE. It is relevant if there is vDPA class in device arguments list. Signed-off-by: Michael Baum --- drivers/common/mlx5/linux/mlx5_common_os.c | 126 ++++++++++++++++++++- 1 file changed, 125 insertions(+), 1 deletion(-) diff --git a/drivers/common/mlx5/linux/mlx5_common_os.c b/drivers/common/mlx5/linux/mlx5_common_os.c index 6f78897390..4a94865241 100644 --- a/drivers/common/mlx5/linux/mlx5_common_os.c +++ b/drivers/common/mlx5/linux/mlx5_common_os.c @@ -15,6 +15,7 @@ #include #include "mlx5_common.h" +#include "mlx5_nl.h" #include "mlx5_common_log.h" #include "mlx5_common_os.h" #include "mlx5_glue.h" @@ -39,6 +40,9 @@ const struct mlx5_glue *mlx5_glue; #define MLX5_TXDB_NCACHED 1 #define MLX5_TXDB_HEURISTIC 2 +#define MLX5_VDPA_MAX_RETRIES 20 +#define MLX5_VDPA_USEC 1000 + int mlx5_get_pci_addr(const char *dev_path, struct rte_pci_addr *pci_addr) { @@ -417,6 +421,123 @@ mlx5_glue_constructor(void) mlx5_glue = NULL; } +/* Try to disable ROCE by Netlink\Devlink. */ +static int +mlx5_nl_roce_disable(const char *addr) +{ + int nlsk_fd = mlx5_nl_init(NETLINK_GENERIC); + int devlink_id; + int enable; + int ret; + + if (nlsk_fd < 0) + return nlsk_fd; + devlink_id = mlx5_nl_devlink_family_id_get(nlsk_fd); + if (devlink_id < 0) { + ret = devlink_id; + DRV_LOG(DEBUG, + "Failed to get devlink id for ROCE operations by Netlink."); + goto close; + } + ret = mlx5_nl_enable_roce_get(nlsk_fd, devlink_id, addr, &enable); + if (ret) { + DRV_LOG(DEBUG, "Failed to get ROCE enable by Netlink: %d.", + ret); + goto close; + } else if (!enable) { + DRV_LOG(INFO, "ROCE has already disabled(Netlink)."); + goto close; + } + ret = mlx5_nl_enable_roce_set(nlsk_fd, devlink_id, addr, 0); + if (ret) + DRV_LOG(DEBUG, "Failed to disable ROCE by Netlink: %d.", ret); + else + DRV_LOG(INFO, "ROCE is disabled by Netlink successfully."); +close: + close(nlsk_fd); + return ret; +} + +/* Try to disable ROCE by sysfs. 
*/ +static int +mlx5_sys_roce_disable(const char *addr) +{ + FILE *file_o; + int enable; + int ret; + + MKSTR(file_p, "/sys/bus/pci/devices/%s/roce_enable", addr); + file_o = fopen(file_p, "rb"); + if (!file_o) { + rte_errno = ENOTSUP; + return -ENOTSUP; + } + ret = fscanf(file_o, "%d", &enable); + if (ret != 1) { + rte_errno = EINVAL; + ret = EINVAL; + goto close; + } else if (!enable) { + ret = 0; + DRV_LOG(INFO, "ROCE has already disabled(sysfs)."); + goto close; + } + fclose(file_o); + file_o = fopen(file_p, "wb"); + if (!file_o) { + rte_errno = ENOTSUP; + return -ENOTSUP; + } + fprintf(file_o, "0\n"); + ret = 0; +close: + if (ret) + DRV_LOG(DEBUG, "Failed to disable ROCE by sysfs: %d.", ret); + else + DRV_LOG(INFO, "ROCE is disabled by sysfs successfully."); + fclose(file_o); + return ret; +} + +static int +mlx5_roce_disable(struct rte_device *dev) +{ + char pci_addr[PCI_PRI_STR_SIZE] = { 0 }; + + if (mlx5_dev_to_pci_str(dev, pci_addr, sizeof(pci_addr)) < 0) + return -rte_errno; + /* Firstly try to disable ROCE by Netlink and fallback to sysfs. */ + if (mlx5_nl_roce_disable(pci_addr) != 0 && + mlx5_sys_roce_disable(pci_addr) != 0) + return -rte_errno; + return 0; +} + +static struct ibv_device * +mlx5_vdpa_get_ibv_dev(struct rte_device *dev) +{ + struct ibv_device *ibv; + int retry; + + if (mlx5_roce_disable(dev) != 0) { + DRV_LOG(WARNING, "Failed to disable ROCE for \"%s\".", + dev->name); + return NULL; + } + /* Wait for the IB device to appear again after reload. */ + for (retry = MLX5_VDPA_MAX_RETRIES; retry > 0; --retry) { + ibv = mlx5_os_get_ibv_dev(dev); + if (ibv != NULL) + return ibv; + usleep(MLX5_VDPA_USEC); + } + DRV_LOG(ERR, + "Cannot get IB device after disabling RoCE for \"%s\", retries exceed %d.", + dev->name, MLX5_VDPA_MAX_RETRIES); + rte_errno = EAGAIN; + return NULL; +} + static int mlx5_config_doorbell_mapping_env(int dbnc) { @@ -471,7 +592,10 @@ mlx5_os_devx_open_device(struct mlx5_dev_ctx *dev_ctx, struct rte_device *dev, struct ibv_context *ctx = NULL; int dbmap_env; - ibv = mlx5_os_get_ibv_dev(dev); + if (classes & MLX5_CLASS_VDPA) + ibv = mlx5_vdpa_get_ibv_dev(dev); + else + ibv = mlx5_os_get_ibv_dev(dev); if (!ibv) return -rte_errno; DRV_LOG(INFO, "Dev information matches for device \"%s\".", ibv->name); From patchwork Tue Aug 17 13:44:33 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michael Baum X-Patchwork-Id: 97004 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id BFE1EA0548; Tue, 17 Aug 2021 15:47:15 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 09C7D41232; Tue, 17 Aug 2021 15:45:42 +0200 (CEST) Received: from NAM11-BN8-obe.outbound.protection.outlook.com (mail-bn8nam11on2051.outbound.protection.outlook.com [40.107.236.51]) by mails.dpdk.org (Postfix) with ESMTP id 152E3411F3 for ; Tue, 17 Aug 2021 15:45:40 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=e4f/F3JpnLm8h87BNm9XyhS1mKcXFFBlqIOzvD8BfY8bw6ouGENYMjN0c5nf/Stl2+DhVuq40FhHSRvFgJSkl6Kse8VC9/AnbaERxUbULIDK1QXuCU0aMSG2fuhmtFqL6GcHLZnUUW7YvehJnSjGHoDrr/6aNEQWhylUEiSe447nKB12TMwV+RyFMJNQ1+6Kf1bFzp8rU9nFzVVGvc6lgKPwan3scbkSPZT9e4367tLmLjtyL2ytB6LqXlCOEXJpAh6xVk58W733Ka/DC3qQrAxiDOnXsYIXz+/s8rRAMIUFjKDioVRhxTcMhQ0G/uKTglBLShvpxH8Z2/7OluhXYg== 
From: Michael Baum
Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
Date: Tue, 17 Aug 2021 16:44:33 +0300
Message-ID: <20210817134441.1966618-14-michaelba@nvidia.com>
In-Reply-To: <20210817134441.1966618-1-michaelba@nvidia.com>
References: <20210817134441.1966618-1-michaelba@nvidia.com>
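The patch that follows drops the vDPA driver's private ibv context, PD and pdn and points the private structure at the shared device context instead. A minimal sketch of the resulting layout, using stand-in structs rather than the real mlx5 definitions:

    #include <stdio.h>

    /* Stand-ins for the common-device-context fields the vDPA code reads
     * after this change (ctx, pd, pdn). */
    struct dev_ctx_stub {
        void *ctx;          /* device context (ibv_context in the driver) */
        void *pd;           /* protection domain object */
        unsigned int pdn;   /* protection domain number */
    };

    struct vdpa_priv_stub {
        struct dev_ctx_stub *dev_ctx;  /* replaces priv->ctx/priv->pd/priv->pdn */
    };

    /* Helpers that used to take priv->pdn now dereference the shared context,
     * so the PD is created and released in exactly one place. */
    static void
    create_queue(const struct vdpa_priv_stub *priv)
    {
        printf("creating queue with pdn=%u on ctx=%p\n",
               priv->dev_ctx->pdn, priv->dev_ctx->ctx);
    }

    int
    main(void)
    {
        struct dev_ctx_stub dev_ctx = { .ctx = NULL, .pd = NULL, .pdn = 42 };
        struct vdpa_priv_stub priv = { .dev_ctx = &dev_ctx };

        create_queue(&priv);
        return 0;
    }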
Email X-MS-Office365-Filtering-Correlation-Id: 3ee4cda5-7324-4143-da51-08d961854baf X-MS-TrafficTypeDiagnostic: BYAPR12MB3477: X-Microsoft-Antispam-PRVS: X-MS-Oob-TLC-OOBClassifiers: OLM:2; X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0; X-Microsoft-Antispam-Message-Info: +sz/qS+b922EqgqJycEuo70WGWhNeGGDBvl7nZeKYGb5Q06uR279Im5llui5cfjxWXIagQUSOy+z8XD6Vg0sO9MD77NO3aWBnui90qdm9yFdxBEpZBp1kpGvplVThqAuRYR+Uf5PjOJJavNWW6e6YxSuBBXYc2r6ydK03t8P5fwdOS9U1SuFNryHOaHETSgGsZQSNCP0uzInuTvaKbgJvRwJK5jXTi6ySTDdXfg/fFFqcidhG57f+0TyDNc0OWdW5zalmn139ENocaYouRloOD7WijNY0PGh870QKObp93VJKziQu61+heefpxKpS+dfvInTIMgD1Hj7yvLtuRMfZ1YN2/kfjQjVmsp8EifjcC1W69bxuv1A36Dckh50h8aCsi8rmdv0LZWhVJNi+LF4ikfanfF4tzGY/m7XPt7zQaKV11weKgsuDVtQUtXvchBR2yyxJwQJ1+0g6+P6yKg9rxbZj3GRUd/kCYjGdq8IToHioXMfsgz6a4WsXzb3rlqiRYNPhVtM0HvKXtpzeQGvDWLIhfQxo7my+9lIVZPEw2Dj6fliUs3D3TciHuTiPNI9sYh0FI2xkt9N8CdiwsI+W6iC/C2633RjMZlgesp9IOkZOE2YkUYcz60b9N4t6d/jlElkR1H7CGtGb4O9hnNCYXolSCsZV4ZXhC+K0tHW859GybVNJ1byK8nWZgk3Y34l11+v2BKsb2+lkFFGaGSCdQ== X-Forefront-Antispam-Report: CIP:216.228.112.34; CTRY:US; LANG:en; SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:mail.nvidia.com; PTR:schybrid03.nvidia.com; CAT:NONE; SFS:(4636009)(36840700001)(46966006)(7636003)(86362001)(36860700001)(107886003)(1076003)(2906002)(6286002)(316002)(4326008)(82310400003)(54906003)(36756003)(7696005)(16526019)(186003)(26005)(83380400001)(70586007)(336012)(47076005)(5660300002)(6916009)(2616005)(508600001)(426003)(8936002)(55016002)(70206006)(8676002)(356005)(30864003); DIR:OUT; SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Aug 2021 13:45:37.6167 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: 3ee4cda5-7324-4143-da51-08d961854baf X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a; Ip=[216.228.112.34]; Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: CO1NAM11FT019.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR12MB3477 Subject: [dpdk-dev] [RFC 13/21] vdpa/mlx5: use context device structure X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Use common context device structure as a priv field. Signed-off-by: Michael Baum --- drivers/vdpa/mlx5/mlx5_vdpa.c | 185 ++++------------------------ drivers/vdpa/mlx5/mlx5_vdpa.h | 4 +- drivers/vdpa/mlx5/mlx5_vdpa_event.c | 19 +-- drivers/vdpa/mlx5/mlx5_vdpa_lm.c | 6 +- drivers/vdpa/mlx5/mlx5_vdpa_mem.c | 13 +- drivers/vdpa/mlx5/mlx5_vdpa_steer.c | 10 +- drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 16 +-- 7 files changed, 61 insertions(+), 192 deletions(-) diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c index 6d17d7a6f3..f773ac8711 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa.c @@ -189,37 +189,6 @@ mlx5_vdpa_features_set(int vid) return 0; } -static int -mlx5_vdpa_pd_create(struct mlx5_vdpa_priv *priv) -{ -#ifdef HAVE_IBV_FLOW_DV_SUPPORT - priv->pd = mlx5_glue->alloc_pd(priv->ctx); - if (priv->pd == NULL) { - DRV_LOG(ERR, "Failed to allocate PD."); - return errno ? 
-errno : -ENOMEM; - } - struct mlx5dv_obj obj; - struct mlx5dv_pd pd_info; - int ret = 0; - - obj.pd.in = priv->pd; - obj.pd.out = &pd_info; - ret = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_PD); - if (ret) { - DRV_LOG(ERR, "Fail to get PD object info."); - mlx5_glue->dealloc_pd(priv->pd); - priv->pd = NULL; - return -errno; - } - priv->pdn = pd_info.pdn; - return 0; -#else - (void)priv; - DRV_LOG(ERR, "Cannot get pdn - no DV support."); - return -ENOTSUP; -#endif /* HAVE_IBV_FLOW_DV_SUPPORT */ -} - static int mlx5_vdpa_mtu_set(struct mlx5_vdpa_priv *priv) { @@ -238,7 +207,8 @@ mlx5_vdpa_mtu_set(struct mlx5_vdpa_priv *priv) DRV_LOG(DEBUG, "Vhost MTU is 0."); return ret; } - ret = mlx5_get_ifname_sysfs(priv->ctx->device->ibdev_path, + ret = mlx5_get_ifname_sysfs(mlx5_os_get_ctx_device_path + (priv->dev_ctx->ctx), request.ifr_name); if (ret) { DRV_LOG(DEBUG, "Cannot get kernel IF name - %d.", ret); @@ -289,10 +259,6 @@ mlx5_vdpa_dev_close(int vid) mlx5_vdpa_virtqs_release(priv); mlx5_vdpa_event_qp_global_release(priv); mlx5_vdpa_mem_dereg(priv); - if (priv->pd) { - claim_zero(mlx5_glue->dealloc_pd(priv->pd)); - priv->pd = NULL; - } priv->configured = 0; priv->vid = 0; /* The mutex may stay locked after event thread cancel - initiate it. */ @@ -320,8 +286,7 @@ mlx5_vdpa_dev_config(int vid) if (mlx5_vdpa_mtu_set(priv)) DRV_LOG(WARNING, "MTU cannot be set on device %s.", vdev->device->name); - if (mlx5_vdpa_pd_create(priv) || mlx5_vdpa_mem_register(priv) || - mlx5_vdpa_err_event_setup(priv) || + if (mlx5_vdpa_mem_register(priv) || mlx5_vdpa_err_event_setup(priv) || mlx5_vdpa_virtqs_prepare(priv) || mlx5_vdpa_steer_setup(priv) || mlx5_vdpa_cqe_event_setup(priv)) { mlx5_vdpa_dev_close(vid); @@ -343,7 +308,7 @@ mlx5_vdpa_get_device_fd(int vid) DRV_LOG(ERR, "Invalid vDPA device: %s.", vdev->device->name); return -EINVAL; } - return priv->ctx->cmd_fd; + return ((struct ibv_context *)priv->dev_ctx->ctx)->cmd_fd; } static int @@ -472,98 +437,6 @@ static struct rte_vdpa_dev_ops mlx5_vdpa_ops = { .reset_stats = mlx5_vdpa_reset_stats, }; -/* Try to disable ROCE by Netlink\Devlink. */ -static int -mlx5_vdpa_nl_roce_disable(const char *addr) -{ - int nlsk_fd = mlx5_nl_init(NETLINK_GENERIC); - int devlink_id; - int enable; - int ret; - - if (nlsk_fd < 0) - return nlsk_fd; - devlink_id = mlx5_nl_devlink_family_id_get(nlsk_fd); - if (devlink_id < 0) { - ret = devlink_id; - DRV_LOG(DEBUG, "Failed to get devlink id for ROCE operations by" - " Netlink."); - goto close; - } - ret = mlx5_nl_enable_roce_get(nlsk_fd, devlink_id, addr, &enable); - if (ret) { - DRV_LOG(DEBUG, "Failed to get ROCE enable by Netlink: %d.", - ret); - goto close; - } else if (!enable) { - DRV_LOG(INFO, "ROCE has already disabled(Netlink)."); - goto close; - } - ret = mlx5_nl_enable_roce_set(nlsk_fd, devlink_id, addr, 0); - if (ret) - DRV_LOG(DEBUG, "Failed to disable ROCE by Netlink: %d.", ret); - else - DRV_LOG(INFO, "ROCE is disabled by Netlink successfully."); -close: - close(nlsk_fd); - return ret; -} - -/* Try to disable ROCE by sysfs. 
*/ -static int -mlx5_vdpa_sys_roce_disable(const char *addr) -{ - FILE *file_o; - int enable; - int ret; - - MKSTR(file_p, "/sys/bus/pci/devices/%s/roce_enable", addr); - file_o = fopen(file_p, "rb"); - if (!file_o) { - rte_errno = ENOTSUP; - return -ENOTSUP; - } - ret = fscanf(file_o, "%d", &enable); - if (ret != 1) { - rte_errno = EINVAL; - ret = EINVAL; - goto close; - } else if (!enable) { - ret = 0; - DRV_LOG(INFO, "ROCE has already disabled(sysfs)."); - goto close; - } - fclose(file_o); - file_o = fopen(file_p, "wb"); - if (!file_o) { - rte_errno = ENOTSUP; - return -ENOTSUP; - } - fprintf(file_o, "0\n"); - ret = 0; -close: - if (ret) - DRV_LOG(DEBUG, "Failed to disable ROCE by sysfs: %d.", ret); - else - DRV_LOG(INFO, "ROCE is disabled by sysfs successfully."); - fclose(file_o); - return ret; -} - -static int -mlx5_vdpa_roce_disable(struct rte_device *dev) -{ - char pci_addr[PCI_PRI_STR_SIZE] = { 0 }; - - if (mlx5_dev_to_pci_str(dev, pci_addr, sizeof(pci_addr)) < 0) - return -rte_errno; - /* Firstly try to disable ROCE by Netlink and fallback to sysfs. */ - if (mlx5_vdpa_nl_roce_disable(pci_addr) != 0 && - mlx5_vdpa_sys_roce_disable(pci_addr) != 0) - return -rte_errno; - return 0; -} - static int mlx5_vdpa_args_check_handler(const char *key, const char *val, void *opaque) { @@ -632,39 +505,26 @@ mlx5_vdpa_config_get(struct rte_devargs *devargs, struct mlx5_vdpa_priv *priv) static int mlx5_vdpa_dev_probe(struct rte_device *dev) { - struct ibv_device *ibv; struct mlx5_vdpa_priv *priv = NULL; - struct ibv_context *ctx = NULL; + struct mlx5_dev_ctx *dev_ctx = NULL; struct mlx5_hca_attr attr; - int retry; int ret; - if (mlx5_vdpa_roce_disable(dev) != 0) { - DRV_LOG(WARNING, "Failed to disable ROCE for \"%s\".", - dev->name); - return -rte_errno; - } - /* Wait for the IB device to appear again after reload. 
*/ - for (retry = MLX5_VDPA_MAX_RETRIES; retry > 0; --retry) { - ibv = mlx5_os_get_ibv_dev(dev); - if (ibv != NULL) - break; - usleep(MLX5_VDPA_USEC); - } - if (ibv == NULL) { - DRV_LOG(ERR, "Cannot get IB device after disabling RoCE for " - "\"%s\", retries exceed %d.", - dev->name, MLX5_VDPA_MAX_RETRIES); - rte_errno = EAGAIN; + dev_ctx = mlx5_malloc(MLX5_MEM_ZERO, sizeof(struct mlx5_dev_ctx), + RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY); + if (dev_ctx == NULL) { + DRV_LOG(ERR, "Device context allocation failure."); + rte_errno = ENOMEM; return -rte_errno; } - ctx = mlx5_glue->dv_open_device(ibv); - if (!ctx) { - DRV_LOG(ERR, "Failed to open IB device \"%s\".", ibv->name); + ret = mlx5_dev_ctx_prepare(dev_ctx, dev, MLX5_CLASS_VDPA); + if (ret < 0) { + DRV_LOG(ERR, "Failed to create device context."); + mlx5_free(dev_ctx); rte_errno = ENODEV; return -rte_errno; } - ret = mlx5_devx_cmd_query_hca_attr(ctx, &attr); + ret = mlx5_devx_cmd_query_hca_attr(dev_ctx->ctx, &attr); if (ret) { DRV_LOG(ERR, "Unable to read HCA capabilities."); rte_errno = ENOTSUP; @@ -692,8 +552,8 @@ mlx5_vdpa_dev_probe(struct rte_device *dev) priv->qp_ts_format = attr.qp_ts_format; if (attr.num_lag_ports == 0) priv->num_lag_ports = 1; - priv->ctx = ctx; - priv->var = mlx5_glue->dv_alloc_var(ctx, 0); + priv->dev_ctx = dev_ctx; + priv->var = mlx5_glue->dv_alloc_var(dev_ctx->ctx, 0); if (!priv->var) { DRV_LOG(ERR, "Failed to allocate VAR %u.", errno); goto error; @@ -718,8 +578,10 @@ mlx5_vdpa_dev_probe(struct rte_device *dev) mlx5_glue->dv_free_var(priv->var); rte_free(priv); } - if (ctx) - mlx5_glue->close_device(ctx); + if (dev_ctx) { + mlx5_dev_ctx_release(dev_ctx); + mlx5_free(dev_ctx); + } return -rte_errno; } @@ -748,7 +610,10 @@ mlx5_vdpa_dev_remove(struct rte_device *dev) } if (priv->vdev) rte_vdpa_unregister_device(priv->vdev); - mlx5_glue->close_device(priv->ctx); + if (priv->dev_ctx) { + mlx5_dev_ctx_release(priv->dev_ctx); + mlx5_free(priv->dev_ctx); + } pthread_mutex_destroy(&priv->vq_config_lock); rte_free(priv); } diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h index 2a04e36607..dc9ba1c3c2 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.h +++ b/drivers/vdpa/mlx5/mlx5_vdpa.h @@ -132,10 +132,8 @@ struct mlx5_vdpa_priv { uint16_t hw_max_pending_comp; /* Hardware CQ moderation counter. */ struct rte_vdpa_device *vdev; /* vDPA device. */ int vid; /* vhost device id. */ - struct ibv_context *ctx; /* Device context. */ + struct mlx5_dev_ctx *dev_ctx; /* Device context. */ struct mlx5_hca_vdpa_attr caps; - uint32_t pdn; /* Protection Domain number. */ - struct ibv_pd *pd; uint32_t gpa_mkey_index; struct ibv_mr *null_mr; struct rte_vhost_memory *vmem; diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c index 3541c652ce..056a3c2936 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c @@ -48,7 +48,7 @@ mlx5_vdpa_event_qp_global_prepare(struct mlx5_vdpa_priv *priv) { if (priv->eventc) return 0; - priv->eventc = mlx5_os_devx_create_event_channel(priv->ctx, + priv->eventc = mlx5_os_devx_create_event_channel(priv->dev_ctx->ctx, MLX5DV_DEVX_CREATE_EVENT_CHANNEL_FLAGS_OMIT_EV_DATA); if (!priv->eventc) { rte_errno = errno; @@ -61,7 +61,7 @@ mlx5_vdpa_event_qp_global_prepare(struct mlx5_vdpa_priv *priv) * registers writings, it is safe to allocate UAR with any * memory mapping type. 
*/ - priv->uar = mlx5_devx_alloc_uar(priv->ctx, -1); + priv->uar = mlx5_devx_alloc_uar(priv->dev_ctx->ctx, -1); if (!priv->uar) { rte_errno = errno; DRV_LOG(ERR, "Failed to allocate UAR."); @@ -115,8 +115,8 @@ mlx5_vdpa_cq_create(struct mlx5_vdpa_priv *priv, uint16_t log_desc_n, uint16_t event_nums[1] = {0}; int ret; - ret = mlx5_devx_cq_create(priv->ctx, &cq->cq_obj, log_desc_n, &attr, - SOCKET_ID_ANY); + ret = mlx5_devx_cq_create(priv->dev_ctx->ctx, &cq->cq_obj, log_desc_n, + &attr, SOCKET_ID_ANY); if (ret) goto error; cq->cq_ci = 0; @@ -397,7 +397,8 @@ mlx5_vdpa_err_event_setup(struct mlx5_vdpa_priv *priv) int flags; /* Setup device event channel. */ - priv->err_chnl = mlx5_glue->devx_create_event_channel(priv->ctx, 0); + priv->err_chnl = + mlx5_glue->devx_create_event_channel(priv->dev_ctx->ctx, 0); if (!priv->err_chnl) { rte_errno = errno; DRV_LOG(ERR, "Failed to create device event channel %d.", @@ -598,9 +599,9 @@ mlx5_vdpa_event_qp_create(struct mlx5_vdpa_priv *priv, uint16_t desc_n, return -1; if (mlx5_vdpa_cq_create(priv, log_desc_n, callfd, &eqp->cq)) return -1; - attr.pd = priv->pdn; + attr.pd = priv->dev_ctx->pdn; attr.ts_format = mlx5_ts_format_conv(priv->qp_ts_format); - eqp->fw_qp = mlx5_devx_cmd_create_qp(priv->ctx, &attr); + eqp->fw_qp = mlx5_devx_cmd_create_qp(priv->dev_ctx->ctx, &attr); if (!eqp->fw_qp) { DRV_LOG(ERR, "Failed to create FW QP(%u).", rte_errno); goto error; @@ -611,7 +612,7 @@ mlx5_vdpa_event_qp_create(struct mlx5_vdpa_priv *priv, uint16_t desc_n, rte_errno = ENOMEM; goto error; } - eqp->umem_obj = mlx5_glue->devx_umem_reg(priv->ctx, + eqp->umem_obj = mlx5_glue->devx_umem_reg(priv->dev_ctx->ctx, (void *)(uintptr_t)eqp->umem_buf, umem_size, IBV_ACCESS_LOCAL_WRITE); @@ -631,7 +632,7 @@ mlx5_vdpa_event_qp_create(struct mlx5_vdpa_priv *priv, uint16_t desc_n, attr.dbr_umem_id = eqp->umem_obj->umem_id; attr.ts_format = mlx5_ts_format_conv(priv->qp_ts_format); attr.dbr_address = RTE_BIT64(log_desc_n) * MLX5_WSEG_SIZE; - eqp->sw_qp = mlx5_devx_cmd_create_qp(priv->ctx, &attr); + eqp->sw_qp = mlx5_devx_cmd_create_qp(priv->dev_ctx->ctx, &attr); if (!eqp->sw_qp) { DRV_LOG(ERR, "Failed to create SW QP(%u).", rte_errno); goto error; diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_lm.c b/drivers/vdpa/mlx5/mlx5_vdpa_lm.c index f391813745..1e9a946708 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_lm.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_lm.c @@ -39,7 +39,7 @@ mlx5_vdpa_dirty_bitmap_set(struct mlx5_vdpa_priv *priv, uint64_t log_base, struct mlx5_devx_mkey_attr mkey_attr = { .addr = (uintptr_t)log_base, .size = log_size, - .pd = priv->pdn, + .pd = priv->dev_ctx->pdn, .pg_access = 1, }; struct mlx5_devx_virtq_attr attr = { @@ -54,7 +54,7 @@ mlx5_vdpa_dirty_bitmap_set(struct mlx5_vdpa_priv *priv, uint64_t log_base, DRV_LOG(ERR, "Failed to allocate mem for lm mr."); return -1; } - mr->umem = mlx5_glue->devx_umem_reg(priv->ctx, + mr->umem = mlx5_glue->devx_umem_reg(priv->dev_ctx->ctx, (void *)(uintptr_t)log_base, log_size, IBV_ACCESS_LOCAL_WRITE); if (!mr->umem) { @@ -62,7 +62,7 @@ mlx5_vdpa_dirty_bitmap_set(struct mlx5_vdpa_priv *priv, uint64_t log_base, goto err; } mkey_attr.umem_id = mr->umem->umem_id; - mr->mkey = mlx5_devx_cmd_mkey_create(priv->ctx, &mkey_attr); + mr->mkey = mlx5_devx_cmd_mkey_create(priv->dev_ctx->ctx, &mkey_attr); if (!mr->mkey) { DRV_LOG(ERR, "Failed to create Mkey for lm."); goto err; diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_mem.c b/drivers/vdpa/mlx5/mlx5_vdpa_mem.c index a13bde5a0b..bec83eddde 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_mem.c +++ 
b/drivers/vdpa/mlx5/mlx5_vdpa_mem.c @@ -193,7 +193,7 @@ mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv) if (!mem) return -rte_errno; priv->vmem = mem; - priv->null_mr = mlx5_glue->alloc_null_mr(priv->pd); + priv->null_mr = mlx5_glue->alloc_null_mr(priv->dev_ctx->pd); if (!priv->null_mr) { DRV_LOG(ERR, "Failed to allocate null MR."); ret = -errno; @@ -209,7 +209,7 @@ mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv) DRV_LOG(ERR, "Failed to allocate mem entry memory."); goto error; } - entry->umem = mlx5_glue->devx_umem_reg(priv->ctx, + entry->umem = mlx5_glue->devx_umem_reg(priv->dev_ctx->ctx, (void *)(uintptr_t)reg->host_user_addr, reg->size, IBV_ACCESS_LOCAL_WRITE); if (!entry->umem) { @@ -220,9 +220,10 @@ mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv) mkey_attr.addr = (uintptr_t)(reg->guest_phys_addr); mkey_attr.size = reg->size; mkey_attr.umem_id = entry->umem->umem_id; - mkey_attr.pd = priv->pdn; + mkey_attr.pd = priv->dev_ctx->pdn; mkey_attr.pg_access = 1; - entry->mkey = mlx5_devx_cmd_mkey_create(priv->ctx, &mkey_attr); + entry->mkey = mlx5_devx_cmd_mkey_create(priv->dev_ctx->ctx, + &mkey_attr); if (!entry->mkey) { DRV_LOG(ERR, "Failed to create direct Mkey."); ret = -rte_errno; @@ -267,7 +268,7 @@ mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv) } mkey_attr.addr = (uintptr_t)(mem->regions[0].guest_phys_addr); mkey_attr.size = mem_size; - mkey_attr.pd = priv->pdn; + mkey_attr.pd = priv->dev_ctx->pdn; mkey_attr.umem_id = 0; /* Must be zero for KLM mode. */ mkey_attr.log_entity_size = mode == MLX5_MKC_ACCESS_MODE_KLM_FBS ? @@ -281,7 +282,7 @@ mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv) ret = -ENOMEM; goto error; } - entry->mkey = mlx5_devx_cmd_mkey_create(priv->ctx, &mkey_attr); + entry->mkey = mlx5_devx_cmd_mkey_create(priv->dev_ctx->ctx, &mkey_attr); if (!entry->mkey) { DRV_LOG(ERR, "Failed to create indirect Mkey."); ret = -rte_errno; diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_steer.c b/drivers/vdpa/mlx5/mlx5_vdpa_steer.c index 383f003966..ae2ca9ccac 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_steer.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_steer.c @@ -98,7 +98,8 @@ mlx5_vdpa_rqt_prepare(struct mlx5_vdpa_priv *priv) attr->rqt_max_size = rqt_n; attr->rqt_actual_size = rqt_n; if (!priv->steer.rqt) { - priv->steer.rqt = mlx5_devx_cmd_create_rqt(priv->ctx, attr); + priv->steer.rqt = mlx5_devx_cmd_create_rqt(priv->dev_ctx->ctx, + attr); if (!priv->steer.rqt) { DRV_LOG(ERR, "Failed to create RQT."); ret = -rte_errno; @@ -116,6 +117,7 @@ static int __rte_unused mlx5_vdpa_rss_flows_create(struct mlx5_vdpa_priv *priv) { #ifdef HAVE_MLX5DV_DR + struct ibv_context *ctx = priv->dev_ctx->ctx; struct mlx5_devx_tir_attr tir_att = { .disp_type = MLX5_TIRC_DISP_TYPE_INDIRECT, .rx_hash_fn = MLX5_RX_HASH_FN_TOEPLITZ, @@ -204,12 +206,12 @@ mlx5_vdpa_rss_flows_create(struct mlx5_vdpa_priv *priv) tir_att.rx_hash_field_selector_outer.selected_fields = vars[i][HASH]; priv->steer.rss[i].matcher = mlx5_glue->dv_create_flow_matcher - (priv->ctx, &dv_attr, priv->steer.tbl); + (ctx, &dv_attr, priv->steer.tbl); if (!priv->steer.rss[i].matcher) { DRV_LOG(ERR, "Failed to create matcher %d.", i); goto error; } - priv->steer.rss[i].tir = mlx5_devx_cmd_create_tir(priv->ctx, + priv->steer.rss[i].tir = mlx5_devx_cmd_create_tir(ctx, &tir_att); if (!priv->steer.rss[i].tir) { DRV_LOG(ERR, "Failed to create TIR %d.", i); @@ -268,7 +270,7 @@ int mlx5_vdpa_steer_setup(struct mlx5_vdpa_priv *priv) { #ifdef HAVE_MLX5DV_DR - priv->steer.domain = mlx5_glue->dr_create_domain(priv->ctx, + priv->steer.domain = 
mlx5_glue->dr_create_domain(priv->dev_ctx->ctx, MLX5DV_DR_DOMAIN_TYPE_NIC_RX); if (!priv->steer.domain) { DRV_LOG(ERR, "Failed to create Rx domain."); diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c index f530646058..d7c2d70947 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c @@ -250,7 +250,7 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index) if (priv->caps.queue_counters_valid) { if (!virtq->counters) virtq->counters = mlx5_devx_cmd_create_virtio_q_counters - (priv->ctx); + (priv->dev_ctx->ctx); if (!virtq->counters) { DRV_LOG(ERR, "Failed to create virtq couners for virtq" " %d.", index); @@ -269,7 +269,8 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index) " %u.", i, index); goto error; } - virtq->umems[i].obj = mlx5_glue->devx_umem_reg(priv->ctx, + virtq->umems[i].obj = mlx5_glue->devx_umem_reg + (priv->dev_ctx->ctx, virtq->umems[i].buf, virtq->umems[i].size, IBV_ACCESS_LOCAL_WRITE); @@ -322,11 +323,11 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index) attr.mkey = priv->gpa_mkey_index; attr.tis_id = priv->tiss[(index / 2) % priv->num_lag_ports]->id; attr.queue_index = index; - attr.pd = priv->pdn; + attr.pd = priv->dev_ctx->pdn; attr.hw_latency_mode = priv->hw_latency_mode; attr.hw_max_latency_us = priv->hw_max_latency_us; attr.hw_max_pending_comp = priv->hw_max_pending_comp; - virtq->virtq = mlx5_devx_cmd_create_virtq(priv->ctx, &attr); + virtq->virtq = mlx5_devx_cmd_create_virtq(priv->dev_ctx->ctx, &attr); virtq->priv = priv; if (!virtq->virtq) goto error; @@ -434,6 +435,7 @@ int mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv) { struct mlx5_devx_tis_attr tis_attr = {0}; + struct ibv_context *ctx = priv->dev_ctx->ctx; uint32_t i; uint16_t nr_vring = rte_vhost_get_vring_num(priv->vid); int ret = rte_vhost_get_negotiated_features(priv->vid, &priv->features); @@ -457,7 +459,7 @@ mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv) } /* Always map the entire page. */ priv->virtq_db_addr = mmap(NULL, priv->var->length, PROT_READ | - PROT_WRITE, MAP_SHARED, priv->ctx->cmd_fd, + PROT_WRITE, MAP_SHARED, ctx->cmd_fd, priv->var->mmap_off); if (priv->virtq_db_addr == MAP_FAILED) { DRV_LOG(ERR, "Failed to map doorbell page %u.", errno); @@ -467,7 +469,7 @@ mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv) DRV_LOG(DEBUG, "VAR address of doorbell mapping is %p.", priv->virtq_db_addr); } - priv->td = mlx5_devx_cmd_create_td(priv->ctx); + priv->td = mlx5_devx_cmd_create_td(ctx); if (!priv->td) { DRV_LOG(ERR, "Failed to create transport domain."); return -rte_errno; @@ -476,7 +478,7 @@ mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv) for (i = 0; i < priv->num_lag_ports; i++) { /* 0 is auto affinity, non-zero value to propose port. 
*/ tis_attr.lag_tx_port_affinity = i + 1; - priv->tiss[i] = mlx5_devx_cmd_create_tis(priv->ctx, &tis_attr); + priv->tiss[i] = mlx5_devx_cmd_create_tis(ctx, &tis_attr); if (!priv->tiss[i]) { DRV_LOG(ERR, "Failed to create TIS %u.", i); goto error;

From patchwork Tue Aug 17 13:44:34 2021
X-Patchwork-Submitter: Michael Baum
X-Patchwork-Id: 97005
X-Patchwork-Delegate: rasland@nvidia.com
From: Michael Baum
Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
Date: Tue, 17 Aug 2021 16:44:34 +0300
Message-ID: <20210817134441.1966618-15-michaelba@nvidia.com>
In-Reply-To: <20210817134441.1966618-1-michaelba@nvidia.com>
References: <20210817134441.1966618-1-michaelba@nvidia.com>
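The patch that follows changes the class-driver probe and remove callbacks to take a struct mlx5_common_device pointer, reaching the generic bus device through its dev field. A stand-in sketch of the new calling convention (types simplified, not the real DPDK declarations):

    #include <stdio.h>

    struct rte_device_stub {           /* stand-in for struct rte_device */
        const char *name;
        int numa_node;
    };

    struct common_device_stub {        /* stand-in for struct mlx5_common_device */
        struct rte_device_stub *dev;   /* generic bus device */
        unsigned int classes_loaded;   /* which class drivers attached */
    };

    /* New-style probe: anything that needs the rte_device goes through
     * cdev->dev, leaving room to hang shared per-device state on cdev. */
    static int
    class_probe(struct common_device_stub *cdev)
    {
        printf("probing %s on socket %d\n", cdev->dev->name, cdev->dev->numa_node);
        return 0;
    }

    int
    main(void)
    {
        struct rte_device_stub rdev = { .name = "0000:08:00.0", .numa_node = 0 };
        struct common_device_stub cdev = { .dev = &rdev, .classes_loaded = 0 };

        return class_probe(&cdev);
    }

This is what allows later patches in the series to attach the shared device context to the common device rather than to each class driver separately, as the commit message notes.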
Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: BN8NAM11FT022.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR12MB4637 Subject: [dpdk-dev] [RFC 14/21] mlx5: update device sent to probing X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Use mlx5 device structure as a parameter to probe function. This structure will contain the shared device context. Signed-off-by: Michael Baum --- drivers/common/mlx5/mlx5_common.c | 4 ++-- drivers/common/mlx5/mlx5_common.h | 10 +++++++-- drivers/common/mlx5/mlx5_common_private.h | 6 ------ drivers/compress/mlx5/mlx5_compress.c | 12 +++++------ drivers/crypto/mlx5/mlx5_crypto.c | 15 +++++++------- drivers/net/mlx5/linux/mlx5_os.c | 25 ++++++++++++----------- drivers/net/mlx5/mlx5.c | 6 +++--- drivers/net/mlx5/mlx5.h | 4 ++-- drivers/net/mlx5/windows/mlx5_os.c | 10 ++++----- drivers/regex/mlx5/mlx5_regex.c | 12 +++++------ drivers/vdpa/mlx5/mlx5_vdpa.c | 12 +++++------ 11 files changed, 59 insertions(+), 57 deletions(-) diff --git a/drivers/common/mlx5/mlx5_common.c b/drivers/common/mlx5/mlx5_common.c index ffd2c2c129..0870ee0718 100644 --- a/drivers/common/mlx5/mlx5_common.c +++ b/drivers/common/mlx5/mlx5_common.c @@ -404,7 +404,7 @@ drivers_remove(struct mlx5_common_device *dev, uint32_t enabled_classes) while (enabled_classes) { driver = driver_get(RTE_BIT64(i)); if (driver != NULL) { - local_ret = driver->remove(dev->dev); + local_ret = driver->remove(dev); if (local_ret == 0) dev->classes_loaded &= ~RTE_BIT64(i); else if (ret == 0) @@ -438,7 +438,7 @@ drivers_probe(struct mlx5_common_device *dev, uint32_t user_classes) ret = -EEXIST; goto probe_err; } - ret = driver->probe(dev->dev); + ret = driver->probe(dev); if (ret < 0) { DRV_LOG(ERR, "Failed to load driver %s", driver->name); diff --git a/drivers/common/mlx5/mlx5_common.h b/drivers/common/mlx5/mlx5_common.h index c4e86c3175..c5f2a6285f 100644 --- a/drivers/common/mlx5/mlx5_common.h +++ b/drivers/common/mlx5/mlx5_common.h @@ -335,6 +335,12 @@ struct mlx5_dev_ctx { int numa_node; /* Numa node of device. */ }; +struct mlx5_common_device { + struct rte_device *dev; + TAILQ_ENTRY(mlx5_common_device) next; + uint32_t classes_loaded; +}; + /** * Uninitialize context device and release all its resources. * @@ -367,12 +373,12 @@ int mlx5_dev_ctx_prepare(struct mlx5_dev_ctx *dev_ctx, struct rte_device *dev, /** * Initialization function for the driver called during device probing. */ -typedef int (mlx5_class_driver_probe_t)(struct rte_device *dev); +typedef int (mlx5_class_driver_probe_t)(struct mlx5_common_device *dev); /** * Uninitialization function for the driver called during hot-unplugging. */ -typedef int (mlx5_class_driver_remove_t)(struct rte_device *dev); +typedef int (mlx5_class_driver_remove_t)(struct mlx5_common_device *dev); /** * Driver-specific DMA mapping. 
After a successful call the device diff --git a/drivers/common/mlx5/mlx5_common_private.h b/drivers/common/mlx5/mlx5_common_private.h index a038330375..04c0af3763 100644 --- a/drivers/common/mlx5/mlx5_common_private.h +++ b/drivers/common/mlx5/mlx5_common_private.h @@ -16,12 +16,6 @@ extern "C" { /* Common bus driver: */ -struct mlx5_common_device { - struct rte_device *dev; - TAILQ_ENTRY(mlx5_common_device) next; - uint32_t classes_loaded; -}; - int mlx5_common_dev_probe(struct rte_device *eal_dev); int mlx5_common_dev_remove(struct rte_device *eal_dev); int mlx5_common_dev_dma_map(struct rte_device *dev, void *addr, uint64_t iova, diff --git a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c index e906ddb066..8348ea8ea3 100644 --- a/drivers/compress/mlx5/mlx5_compress.c +++ b/drivers/compress/mlx5/mlx5_compress.c @@ -748,7 +748,7 @@ mlx5_compress_mr_mem_event_cb(enum rte_mem_event event_type, const void *addr, } static int -mlx5_compress_dev_probe(struct rte_device *dev) +mlx5_compress_dev_probe(struct mlx5_common_device *dev) { struct rte_compressdev *cdev; struct mlx5_dev_ctx *dev_ctx; @@ -756,7 +756,7 @@ mlx5_compress_dev_probe(struct rte_device *dev) struct mlx5_hca_attr att = { 0 }; struct rte_compressdev_pmd_init_params init_params = { .name = "", - .socket_id = dev->numa_node, + .socket_id = dev->dev->numa_node, }; const char *ibdev_name; int ret; @@ -773,7 +773,7 @@ mlx5_compress_dev_probe(struct rte_device *dev) rte_errno = ENOMEM; return -rte_errno; } - ret = mlx5_dev_ctx_prepare(dev_ctx, dev, MLX5_CLASS_COMPRESS); + ret = mlx5_dev_ctx_prepare(dev_ctx, dev->dev, MLX5_CLASS_COMPRESS); if (ret < 0) { DRV_LOG(ERR, "Failed to create device context."); mlx5_free(dev_ctx); @@ -791,7 +791,7 @@ mlx5_compress_dev_probe(struct rte_device *dev) rte_errno = ENOTSUP; return -ENOTSUP; } - cdev = rte_compressdev_pmd_create(ibdev_name, dev, + cdev = rte_compressdev_pmd_create(ibdev_name, dev->dev, sizeof(*priv), &init_params); if (cdev == NULL) { DRV_LOG(ERR, "Failed to create device \"%s\".", ibdev_name); @@ -840,13 +840,13 @@ mlx5_compress_dev_probe(struct rte_device *dev) } static int -mlx5_compress_dev_remove(struct rte_device *dev) +mlx5_compress_dev_remove(struct mlx5_common_device *dev) { struct mlx5_compress_priv *priv = NULL; pthread_mutex_lock(&priv_list_lock); TAILQ_FOREACH(priv, &mlx5_compress_priv_list, next) - if (priv->cdev->device == dev) + if (priv->cdev->device == dev->dev) break; if (priv) TAILQ_REMOVE(&mlx5_compress_priv_list, priv, next); diff --git a/drivers/crypto/mlx5/mlx5_crypto.c b/drivers/crypto/mlx5/mlx5_crypto.c index 7cb5bb5445..44656225d2 100644 --- a/drivers/crypto/mlx5/mlx5_crypto.c +++ b/drivers/crypto/mlx5/mlx5_crypto.c @@ -942,7 +942,7 @@ mlx5_crypto_mr_mem_event_cb(enum rte_mem_event event_type, const void *addr, } static int -mlx5_crypto_dev_probe(struct rte_device *dev) +mlx5_crypto_dev_probe(struct mlx5_common_device *dev) { struct rte_cryptodev *crypto_dev; struct mlx5_dev_ctx *dev_ctx; @@ -953,7 +953,7 @@ mlx5_crypto_dev_probe(struct rte_device *dev) struct rte_cryptodev_pmd_init_params init_params = { .name = "", .private_data_size = sizeof(struct mlx5_crypto_priv), - .socket_id = dev->numa_node, + .socket_id = dev->dev->numa_node, .max_nb_queue_pairs = RTE_CRYPTODEV_PMD_DEFAULT_MAX_NB_QUEUE_PAIRS, }; @@ -973,7 +973,7 @@ mlx5_crypto_dev_probe(struct rte_device *dev) rte_errno = ENOMEM; return -rte_errno; } - ret = mlx5_dev_ctx_prepare(dev_ctx, dev, MLX5_CLASS_CRYPTO); + ret = mlx5_dev_ctx_prepare(dev_ctx, dev->dev, 
MLX5_CLASS_CRYPTO); if (ret < 0) { DRV_LOG(ERR, "Failed to create device context."); mlx5_free(dev_ctx); @@ -990,7 +990,7 @@ mlx5_crypto_dev_probe(struct rte_device *dev) rte_errno = ENOTSUP; return -ENOTSUP; } - ret = mlx5_crypto_parse_devargs(dev->devargs, &devarg_prms); + ret = mlx5_crypto_parse_devargs(dev->dev->devargs, &devarg_prms); if (ret) { DRV_LOG(ERR, "Failed to parse devargs."); mlx5_dev_ctx_release(dev_ctx); @@ -1005,7 +1005,8 @@ mlx5_crypto_dev_probe(struct rte_device *dev) mlx5_free(dev_ctx); return -rte_errno; } - crypto_dev = rte_cryptodev_pmd_create(ibdev_name, dev, &init_params); + crypto_dev = rte_cryptodev_pmd_create(ibdev_name, dev->dev, + &init_params); if (crypto_dev == NULL) { DRV_LOG(ERR, "Failed to create device \"%s\".", ibdev_name); mlx5_dev_ctx_release(dev_ctx); @@ -1065,13 +1066,13 @@ mlx5_crypto_dev_probe(struct rte_device *dev) } static int -mlx5_crypto_dev_remove(struct rte_device *dev) +mlx5_crypto_dev_remove(struct mlx5_common_device *dev) { struct mlx5_crypto_priv *priv = NULL; pthread_mutex_lock(&priv_list_lock); TAILQ_FOREACH(priv, &mlx5_crypto_priv_list, next) - if (priv->crypto_dev->device == dev) + if (priv->crypto_dev->device == dev->dev) break; if (priv) TAILQ_REMOVE(&mlx5_crypto_priv_list, priv, next); diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index e2a7c3d09c..812aadaaa4 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -2655,21 +2655,22 @@ mlx5_os_parse_eth_devargs(struct rte_device *dev, * * This function spawns Ethernet devices out of a given PCI device. * - * @param[in] pci_dev - * PCI device information. + * @param[in] dev + * Pointer to mlx5 device structure. * * @return * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int -mlx5_os_pci_probe(struct rte_pci_device *pci_dev, struct mlx5_dev_ctx *dev_ctx, +mlx5_os_pci_probe(struct mlx5_common_device *dev, struct mlx5_dev_ctx *dev_ctx, uint8_t devx) { + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->dev); struct rte_eth_devargs eth_da = { .nb_ports = 0 }; int ret = 0; uint16_t p; - ret = mlx5_os_parse_eth_devargs(&pci_dev->device, ð_da); + ret = mlx5_os_parse_eth_devargs(dev->dev, ð_da); if (ret != 0) return ret; @@ -2687,7 +2688,7 @@ mlx5_os_pci_probe(struct rte_pci_device *pci_dev, struct mlx5_dev_ctx *dev_ctx, pci_dev->addr.domain, pci_dev->addr.bus, pci_dev->addr.devid, pci_dev->addr.function, eth_da.ports[p]); - mlx5_net_remove(&pci_dev->device); + mlx5_net_remove(dev); } } else { ret = mlx5_os_pci_probe_pf(pci_dev, dev_ctx, ð_da, 0, devx); @@ -2873,13 +2874,13 @@ mlx5_verbs_dev_ctx_prepare(struct mlx5_dev_ctx *dev_ctx, struct rte_device *dev) * This function probe PCI bus device(s) or a single SF on auxiliary bus. * * @param[in] dev - * Pointer to the generic device. + * Pointer to the common device. * * @return * 0 on success, a negative errno value otherwise and rte_errno is set. */ int -mlx5_os_net_probe(struct rte_device *dev) +mlx5_os_net_probe(struct mlx5_common_device *dev) { struct mlx5_dev_ctx *dev_ctx; uint8_t devx = 0; @@ -2896,14 +2897,14 @@ mlx5_os_net_probe(struct rte_device *dev) * Initialize context device and allocate all its resources. * Try to do it with DV first, then usual Verbs. 
*/ - ret = mlx5_dev_ctx_prepare(dev_ctx, dev, MLX5_CLASS_ETH); + ret = mlx5_dev_ctx_prepare(dev_ctx, dev->dev, MLX5_CLASS_ETH); if (ret < 0) { goto error; } else if (dev_ctx->ctx) { devx = 1; DRV_LOG(DEBUG, "DevX is supported."); } else { - ret = mlx5_verbs_dev_ctx_prepare(dev_ctx, dev); + ret = mlx5_verbs_dev_ctx_prepare(dev_ctx, dev->dev); if (ret < 0) goto error; DRV_LOG(DEBUG, "DevX is NOT supported."); @@ -2916,10 +2917,10 @@ mlx5_os_net_probe(struct rte_device *dev) strerror(rte_errno)); goto error; } - if (mlx5_dev_is_pci(dev)) - ret = mlx5_os_pci_probe(RTE_DEV_TO_PCI(dev), dev_ctx, devx); + if (mlx5_dev_is_pci(dev->dev)) + ret = mlx5_os_pci_probe(dev, dev_ctx, devx); else - ret = mlx5_os_auxiliary_probe(dev, dev_ctx, devx); + ret = mlx5_os_auxiliary_probe(dev->dev, dev_ctx, devx); if (ret) goto error; return ret; diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index b695f2f6d3..e0b180e83c 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -2394,13 +2394,13 @@ mlx5_get_dev_ctx(struct rte_device *dev) * 0 on success, the function cannot fail. */ int -mlx5_net_remove(struct rte_device *dev) +mlx5_net_remove(struct mlx5_common_device *dev) { - struct mlx5_dev_ctx *dev_ctx = mlx5_get_dev_ctx(dev); + struct mlx5_dev_ctx *dev_ctx = mlx5_get_dev_ctx(dev->dev); uint16_t port_id; int ret = 0; - RTE_ETH_FOREACH_DEV_OF(port_id, dev) { + RTE_ETH_FOREACH_DEV_OF(port_id, dev->dev) { /* * mlx5_dev_close() is not registered to secondary process, * call the close function explicitly for secondary process. diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index f6d8e1d817..26b23a6053 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -1481,7 +1481,7 @@ int mlx5_udp_tunnel_port_add(struct rte_eth_dev *dev, struct rte_eth_udp_tunnel *udp_tunnel); uint16_t mlx5_eth_find_next(uint16_t port_id, struct rte_device *odev); int mlx5_dev_close(struct rte_eth_dev *dev); -int mlx5_net_remove(struct rte_device *dev); +int mlx5_net_remove(struct mlx5_common_device *dev); bool mlx5_is_hpf(struct rte_eth_dev *dev); bool mlx5_is_sf_repr(struct rte_eth_dev *dev); void mlx5_age_event_prepare(struct mlx5_dev_ctx_shared *sh); @@ -1768,7 +1768,7 @@ void mlx5_flow_meter_rxq_flush(struct rte_eth_dev *dev); struct rte_pci_driver; int mlx5_os_get_dev_attr(void *ctx, struct mlx5_dev_attr *dev_attr); void mlx5_os_free_shared_dr(struct mlx5_priv *priv); -int mlx5_os_net_probe(struct rte_device *dev); +int mlx5_os_net_probe(struct mlx5_common_device *dev); void mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh); void mlx5_os_dev_shared_handler_uninstall(struct mlx5_dev_ctx_shared *sh); void mlx5_os_set_reg_mr_cb(mlx5_reg_mr_t *reg_mr_cb, diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c index f6a7fbaca1..f21fb60272 100644 --- a/drivers/net/mlx5/windows/mlx5_os.c +++ b/drivers/net/mlx5/windows/mlx5_os.c @@ -869,15 +869,15 @@ mlx5_os_set_allmulti(struct rte_eth_dev *dev, int enable) * This function spawns Ethernet devices out of a given device. * * @param[in] dev - * Pointer to the generic device. + * Pointer to the common device. * * @return * 0 on success, a negative errno value otherwise and rte_errno is set. 
*/ int -mlx5_os_net_probe(struct rte_device *dev) +mlx5_os_net_probe(struct mlx5_common_device *dev) { - struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev); + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->dev); struct mlx5_dev_ctx *dev_ctx; struct mlx5_dev_spawn_data spawn = { .pf_bond = -1 }; struct mlx5_dev_config dev_config; @@ -902,7 +902,7 @@ mlx5_os_net_probe(struct rte_device *dev) rte_errno = ENOMEM; return -rte_errno; } - ret = mlx5_dev_ctx_prepare(dev_ctx, dev, MLX5_CLASS_ETH); + ret = mlx5_dev_ctx_prepare(dev_ctx, dev->dev, MLX5_CLASS_ETH); if (ret < 0) goto error; memset(&spawn.info, 0, sizeof(spawn.info)); @@ -954,7 +954,7 @@ mlx5_os_net_probe(struct rte_device *dev) dev_config.dv_flow_en = 1; dev_config.decap_en = 0; dev_config.log_hp_size = MLX5_ARG_UNSET; - spawn.eth_dev = mlx5_dev_spawn(dev, dev_ctx, &spawn, &dev_config); + spawn.eth_dev = mlx5_dev_spawn(dev->dev, dev_ctx, &spawn, &dev_config); if (!spawn.eth_dev) { ret = -rte_errno; goto error; diff --git a/drivers/regex/mlx5/mlx5_regex.c b/drivers/regex/mlx5/mlx5_regex.c index 11b24cde39..78fa90797c 100644 --- a/drivers/regex/mlx5/mlx5_regex.c +++ b/drivers/regex/mlx5/mlx5_regex.c @@ -122,7 +122,7 @@ mlx5_regex_mr_mem_event_cb(enum rte_mem_event event_type, const void *addr, } static int -mlx5_regex_dev_probe(struct rte_device *rte_dev) +mlx5_regex_dev_probe(struct mlx5_common_device *mlx5_dev) { struct mlx5_regex_priv *priv = NULL; struct mlx5_dev_ctx *dev_ctx = NULL; @@ -139,7 +139,7 @@ mlx5_regex_dev_probe(struct rte_device *rte_dev) rte_errno = ENOMEM; return -rte_errno; } - ret = mlx5_dev_ctx_prepare(dev_ctx, rte_dev, MLX5_CLASS_REGEX); + ret = mlx5_dev_ctx_prepare(dev_ctx, mlx5_dev->dev, MLX5_CLASS_REGEX); if (ret < 0) { DRV_LOG(ERR, "Failed to create device context."); rte_free(dev_ctx); @@ -184,7 +184,7 @@ mlx5_regex_dev_probe(struct rte_device *rte_dev) priv->is_bf2 = 1; /* Default RXP programming mode to Shared. 
*/ priv->prog_mode = MLX5_RXP_SHARED_PROG_MODE; - mlx5_regex_get_name(name, rte_dev); + mlx5_regex_get_name(name, mlx5_dev->dev); priv->regexdev = rte_regexdev_register(name); if (priv->regexdev == NULL) { DRV_LOG(ERR, "Failed to register RegEx device."); @@ -212,7 +212,7 @@ mlx5_regex_dev_probe(struct rte_device *rte_dev) priv->regexdev->enqueue = mlx5_regexdev_enqueue_gga; #endif priv->regexdev->dequeue = mlx5_regexdev_dequeue; - priv->regexdev->device = rte_dev; + priv->regexdev->device = mlx5_dev->dev; priv->regexdev->data->dev_private = priv; priv->regexdev->state = RTE_REGEXDEV_READY; priv->mr_scache.reg_mr_cb = mlx5_common_verbs_reg_mr; @@ -254,13 +254,13 @@ mlx5_regex_dev_probe(struct rte_device *rte_dev) } static int -mlx5_regex_dev_remove(struct rte_device *rte_dev) +mlx5_regex_dev_remove(struct mlx5_common_device *mlx5_dev) { char name[RTE_REGEXDEV_NAME_MAX_LEN]; struct rte_regexdev *dev; struct mlx5_regex_priv *priv = NULL; - mlx5_regex_get_name(name, rte_dev); + mlx5_regex_get_name(name, mlx5_dev->dev); dev = rte_regexdev_get_device_by_name(name); if (!dev) return 0; diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c index f773ac8711..6771445582 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa.c @@ -503,7 +503,7 @@ mlx5_vdpa_config_get(struct rte_devargs *devargs, struct mlx5_vdpa_priv *priv) } static int -mlx5_vdpa_dev_probe(struct rte_device *dev) +mlx5_vdpa_dev_probe(struct mlx5_common_device *dev) { struct mlx5_vdpa_priv *priv = NULL; struct mlx5_dev_ctx *dev_ctx = NULL; @@ -517,7 +517,7 @@ mlx5_vdpa_dev_probe(struct rte_device *dev) rte_errno = ENOMEM; return -rte_errno; } - ret = mlx5_dev_ctx_prepare(dev_ctx, dev, MLX5_CLASS_VDPA); + ret = mlx5_dev_ctx_prepare(dev_ctx, dev->dev, MLX5_CLASS_VDPA); if (ret < 0) { DRV_LOG(ERR, "Failed to create device context."); mlx5_free(dev_ctx); @@ -558,13 +558,13 @@ mlx5_vdpa_dev_probe(struct rte_device *dev) DRV_LOG(ERR, "Failed to allocate VAR %u.", errno); goto error; } - priv->vdev = rte_vdpa_register_device(dev, &mlx5_vdpa_ops); + priv->vdev = rte_vdpa_register_device(dev->dev, &mlx5_vdpa_ops); if (priv->vdev == NULL) { DRV_LOG(ERR, "Failed to register vDPA device."); rte_errno = rte_errno ? 
rte_errno : EINVAL; goto error; } - mlx5_vdpa_config_get(dev->devargs, priv); + mlx5_vdpa_config_get(dev->dev->devargs, priv); SLIST_INIT(&priv->mr_list); pthread_mutex_init(&priv->vq_config_lock, NULL); pthread_mutex_lock(&priv_list_lock); @@ -586,14 +586,14 @@ mlx5_vdpa_dev_probe(struct rte_device *dev) } static int -mlx5_vdpa_dev_remove(struct rte_device *dev) +mlx5_vdpa_dev_remove(struct mlx5_common_device *dev) { struct mlx5_vdpa_priv *priv = NULL; int found = 0; pthread_mutex_lock(&priv_list_lock); TAILQ_FOREACH(priv, &priv_list, next) { - if (priv->vdev->device == dev) { + if (priv->vdev->device == dev->dev) { found = 1; break; } From patchwork Tue Aug 17 13:44:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michael Baum X-Patchwork-Id: 97006 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id D6C2AA0548; Tue, 17 Aug 2021 15:47:30 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id A5FD441220; Tue, 17 Aug 2021 15:45:44 +0200 (CEST) Received: from NAM02-SN1-obe.outbound.protection.outlook.com (mail-sn1anam02on2055.outbound.protection.outlook.com [40.107.96.55]) by mails.dpdk.org (Postfix) with ESMTP id 0B2194123F for ; Tue, 17 Aug 2021 15:45:43 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=bBh66MppsPRyncd8m2R3bB73raRK7NL1m7RkAE/h+O71GgI1vmTWYJccwB029twnC0c6M/1jALSfYsUktVnClAfw+1g8UpFZkV/Iew1hj2pZTTKAU/xOyF3NzAyRnB2zCab0y3P0xTbYcDJhQiGv4tFo6fQsfaPLdhRFI/aFxqW1XTxlhidNaQ2abVy1fXtgua5d+BqNlmhBD2GsZKACsiFVtrpgtO52P1zRRD1efQJSCCU47dpV22YWV61HQq4XRFelFXf0qIyWEntELtm8I7Ff2knBcNNe2GPOxmK3DJBVHXQwrgzo3CIRbyvfKIDAA+wKYge1INcynqd8ZpajWA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=/29qP/xMolmUxuiUQn90BxKebbd5RlbqJhraKg+Sj/M=; b=kOHnmGCQYBZ3VIqW0pLWT9FtjfpM8mym7XcIr8OfQomsZJs8x7xVUto9dNzLrLymwdXCrexS0YWKyOfJ/iFM3MCrR0HwTsoyCVrPkAUt2C/0POPnOhl3KmmmeWSaVSKpCbdEMVGeQO6BAFn5kGMrKB1XFMiHrIOSW8XOWF/uPi7DlYyYJM0S+DEnEzGVZ2xqJZRKW2B1VthBlKGv86P8d9mM9rm8rJEf+Az522iVNzl6zBG9GcKVjwgRAcIzHQ4z0SfQBdI43TWmEBgLcEAPOSSxOFGTPqQs+4WhmouOqi2Xs3jh06JmFwQ1tkIM7yWroFt5RJcl+LF7TO4XTG847A== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 216.228.112.35) smtp.rcpttodomain=dpdk.org smtp.mailfrom=nvidia.com; dmarc=pass (p=quarantine sp=none pct=100) action=none header.from=nvidia.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=/29qP/xMolmUxuiUQn90BxKebbd5RlbqJhraKg+Sj/M=; b=C4/KMwoRP3sPaMR+1zopMfM1WCZ9uWFFFm1Dy649+/g4R3xVpdPb/xwbK4t0t8n3LFbQ6j5tUhXB6AWe1NAFeRH9y1wtUELxE5lpdMi2ghN5mRD6aPHGpieTRIZsjgy+QIJh6/OramhzaJnBY7DYaxM0G1w01hBsALKo7SFWVy+xKxOMZa8Guxq9V6qTxBIsvlVrNn7dJfabdQolbr8OSXKo/umF4qFaGGN+TnjvqwhtFVd06xZrKSvmYca237BIyqkwq+mwZyjmUgzI87s7UYG78CbKoft8oHPimMkVEuaXwwyZ5kQ/d/5D0l1OVAe1Jw5OFwyRfy9KWmZcv1mw/w== Received: from BN6PR13CA0023.namprd13.prod.outlook.com (2603:10b6:404:10a::33) by BN9PR12MB5211.namprd12.prod.outlook.com (2603:10b6:408:11c::24) with Microsoft SMTP Server (version=TLS1_2, 
cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4415.17; Tue, 17 Aug 2021 13:45:41 +0000
From: Michael Baum
To: 
CC: Matan Azrad , Raslan Darawsheh , Viacheslav Ovsiienko
Date: Tue, 17 Aug 2021 16:44:35 +0300
Message-ID: <20210817134441.1966618-16-michaelba@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210817134441.1966618-1-michaelba@nvidia.com>
References: <20210817134441.1966618-1-michaelba@nvidia.com>
MIME-Version: 1.0
X-OriginatorOrg: Nvidia.com
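To make the interface change of RFC 14/21 above easier to follow: a class driver's probe and remove callbacks now receive the common mlx5 device and reach the generic EAL device through its dev field. The sketch below is illustrative only and is not part of the patch; the mlx5_dummy_* names are invented, while the callback prototypes, the mlx5_class_driver registration style and the dev->dev accesses follow the hunks shown above.

/*
 * Minimal sketch of a class driver adapted to the RFC 14/21 prototypes.
 * Assumes the struct mlx5_common_device definition moved into
 * drivers/common/mlx5/mlx5_common.h by the patch above; the mlx5_dummy_*
 * names are hypothetical.
 */
#include <rte_common.h>
#include <rte_dev.h>

#include <mlx5_common.h>

static int
mlx5_dummy_dev_probe(struct mlx5_common_device *dev)
{
        /* The generic EAL device is now one level down: dev->dev. */
        int socket_id = dev->dev->numa_node;

        RTE_SET_USED(socket_id);
        /* Class specific initialization would follow here. */
        return 0;
}

static int
mlx5_dummy_dev_remove(struct mlx5_common_device *dev)
{
        /* Lookups compare stored rte_device pointers against dev->dev. */
        RTE_SET_USED(dev);
        return 0;
}

static struct mlx5_class_driver mlx5_dummy_driver = {
        .drv_class = MLX5_CLASS_COMPRESS, /* any single class, for illustration */
        .name = "mlx5_dummy",
        .probe = mlx5_dummy_dev_probe,
        .remove = mlx5_dummy_dev_remove,
};

RTE_INIT(mlx5_dummy_init)
{
        mlx5_class_driver_register(&mlx5_dummy_driver);
}

The registration block mirrors how the existing compress/crypto/regex/vdpa PMDs register with the common layer; only the callback signatures change in this patch.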
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Aug 2021 13:45:40.3848 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: 45ff0919-9fec-4adc-cd5a-08d961854d5a X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a; Ip=[216.228.112.35]; Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: BN8NAM11FT065.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN9PR12MB5211 Subject: [dpdk-dev] [RFC 15/21] mlx5: share context device structure between drivers X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Create and initialize context device structure ones in common probing, and give a pointer to it for each driver. Signed-off-by: Michael Baum --- drivers/common/mlx5/mlx5_common.c | 40 +++++++++++++++++++++------ drivers/common/mlx5/mlx5_common.h | 30 +------------------- drivers/common/mlx5/version.map | 2 -- drivers/compress/mlx5/mlx5_compress.c | 31 ++------------------- drivers/crypto/mlx5/mlx5_crypto.c | 34 ++--------------------- drivers/net/mlx5/linux/mlx5_os.c | 36 ++++++++---------------- drivers/net/mlx5/mlx5.c | 32 --------------------- drivers/net/mlx5/windows/mlx5_os.c | 22 ++------------- drivers/regex/mlx5/mlx5_regex.c | 35 ++++------------------- drivers/vdpa/mlx5/mlx5_vdpa.c | 35 ++++------------------- 10 files changed, 64 insertions(+), 233 deletions(-) diff --git a/drivers/common/mlx5/mlx5_common.c b/drivers/common/mlx5/mlx5_common.c index 0870ee0718..b500e7834e 100644 --- a/drivers/common/mlx5/mlx5_common.c +++ b/drivers/common/mlx5/mlx5_common.c @@ -319,7 +319,7 @@ mlx5_dev_to_pci_str(const struct rte_device *dev, char *addr, size_t size) * @return * 0 on success, a negative errno value otherwise and rte_errno is set. */ -void +static void mlx5_dev_ctx_release(struct mlx5_dev_ctx *dev_ctx) { if (dev_ctx->pd != NULL) { @@ -345,7 +345,7 @@ mlx5_dev_ctx_release(struct mlx5_dev_ctx *dev_ctx) * @return * 0 on success, a negative errno value otherwise and rte_errno is set. 
*/ -int +static int mlx5_dev_ctx_prepare(struct mlx5_dev_ctx *dev_ctx, struct rte_device *dev, uint32_t classes_loaded) { @@ -386,12 +386,36 @@ mlx5_dev_ctx_prepare(struct mlx5_dev_ctx *dev_ctx, struct rte_device *dev, } static void -dev_release(struct mlx5_common_device *dev) +mlx5_common_dev_release(struct mlx5_common_device *dev) { TAILQ_REMOVE(&devices_list, dev, next); + mlx5_dev_ctx_release(&dev->ctx); rte_free(dev); } +static struct mlx5_common_device * +mlx5_common_dev_create(struct rte_device *eal_dev, uint32_t classes) +{ + struct mlx5_common_device *dev; + int ret; + + dev = rte_zmalloc("mlx5_common_device", sizeof(*dev), 0); + if (!dev) { + DRV_LOG(ERR, "Device allocation failure."); + rte_errno = ENOMEM; + return NULL; + } + ret = mlx5_dev_ctx_prepare(&dev->ctx, eal_dev, classes); + if (ret) { + DRV_LOG(ERR, "Failed to create device context."); + rte_free(dev); + return NULL; + } + dev->dev = eal_dev; + TAILQ_INSERT_HEAD(&devices_list, dev, next); + return dev; +} + static int drivers_remove(struct mlx5_common_device *dev, uint32_t enabled_classes) { @@ -477,11 +501,9 @@ mlx5_common_dev_probe(struct rte_device *eal_dev) classes = MLX5_CLASS_ETH; dev = to_mlx5_device(eal_dev); if (!dev) { - dev = rte_zmalloc("mlx5_common_device", sizeof(*dev), 0); + dev = mlx5_common_dev_create(eal_dev, classes); if (!dev) - return -ENOMEM; - dev->dev = eal_dev; - TAILQ_INSERT_HEAD(&devices_list, dev, next); + return -rte_errno; new_device = true; } else { /* Validate combination here. */ @@ -498,7 +520,7 @@ mlx5_common_dev_probe(struct rte_device *eal_dev) return 0; class_err: if (new_device) - dev_release(dev); + mlx5_common_dev_release(dev); return ret; } @@ -514,7 +536,7 @@ mlx5_common_dev_remove(struct rte_device *eal_dev) /* Matching device found, cleanup and unload drivers. */ ret = drivers_remove(dev, dev->classes_loaded); if (ret != 0) - dev_release(dev); + mlx5_common_dev_release(dev); return ret; } diff --git a/drivers/common/mlx5/mlx5_common.h b/drivers/common/mlx5/mlx5_common.h index c5f2a6285f..644dc58bc9 100644 --- a/drivers/common/mlx5/mlx5_common.h +++ b/drivers/common/mlx5/mlx5_common.h @@ -339,37 +339,9 @@ struct mlx5_common_device { struct rte_device *dev; TAILQ_ENTRY(mlx5_common_device) next; uint32_t classes_loaded; + struct mlx5_dev_ctx ctx; }; -/** - * Uninitialize context device and release all its resources. - * - * @param dev_ctx - * Pointer to the context device data structure. - * - * @return - * 0 on success, a negative errno value otherwise and rte_errno is set. - */ -__rte_internal -void mlx5_dev_ctx_release(struct mlx5_dev_ctx *dev_ctx); - -/** - * Initialize context device and allocate all its resources. - * - * @param dev_ctx - * Pointer to the context device data structure. - * @param dev - * Pointer to mlx5 device structure. - * @param classes_loaded - * Chosen classes come from device arguments. - * - * @return - * 0 on success, a negative errno value otherwise and rte_errno is set. - */ -__rte_internal -int mlx5_dev_ctx_prepare(struct mlx5_dev_ctx *dev_ctx, struct rte_device *dev, - uint32_t classes_loaded); - /** * Initialization function for the driver called during device probing. 
*/ diff --git a/drivers/common/mlx5/version.map b/drivers/common/mlx5/version.map index a1a8bae5bd..4b24833ecb 100644 --- a/drivers/common/mlx5/version.map +++ b/drivers/common/mlx5/version.map @@ -10,8 +10,6 @@ INTERNAL { mlx5_common_init; mlx5_parse_db_map_arg; # WINDOWS_NO_EXPORT - mlx5_dev_ctx_release; - mlx5_dev_ctx_prepare; mlx5_common_verbs_reg_mr; # WINDOWS_NO_EXPORT mlx5_common_verbs_dereg_mr; # WINDOWS_NO_EXPORT diff --git a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c index 8348ea8ea3..93b0cc8ea6 100644 --- a/drivers/compress/mlx5/mlx5_compress.c +++ b/drivers/compress/mlx5/mlx5_compress.c @@ -751,43 +751,25 @@ static int mlx5_compress_dev_probe(struct mlx5_common_device *dev) { struct rte_compressdev *cdev; - struct mlx5_dev_ctx *dev_ctx; + struct mlx5_dev_ctx *dev_ctx = &dev->ctx; struct mlx5_compress_priv *priv; struct mlx5_hca_attr att = { 0 }; struct rte_compressdev_pmd_init_params init_params = { .name = "", .socket_id = dev->dev->numa_node, }; - const char *ibdev_name; - int ret; + const char *ibdev_name = mlx5_os_get_ctx_device_name(dev_ctx->ctx); if (rte_eal_process_type() != RTE_PROC_PRIMARY) { DRV_LOG(ERR, "Non-primary process type is not supported."); rte_errno = ENOTSUP; return -rte_errno; } - dev_ctx = mlx5_malloc(MLX5_MEM_ZERO, sizeof(struct mlx5_dev_ctx), - RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY); - if (dev_ctx == NULL) { - DRV_LOG(ERR, "Device context allocation failure."); - rte_errno = ENOMEM; - return -rte_errno; - } - ret = mlx5_dev_ctx_prepare(dev_ctx, dev->dev, MLX5_CLASS_COMPRESS); - if (ret < 0) { - DRV_LOG(ERR, "Failed to create device context."); - mlx5_free(dev_ctx); - rte_errno = ENODEV; - return -rte_errno; - } - ibdev_name = mlx5_os_get_ctx_device_name(dev_ctx->ctx); if (mlx5_devx_cmd_query_hca_attr(dev_ctx->ctx, &att) != 0 || att.mmo_compress_en == 0 || att.mmo_decompress_en == 0 || att.mmo_dma_en == 0) { DRV_LOG(ERR, "Not enough capabilities to support compress " "operations, maybe old FW/OFED version?"); - mlx5_dev_ctx_release(dev_ctx); - mlx5_free(dev_ctx); rte_errno = ENOTSUP; return -ENOTSUP; } @@ -795,8 +777,6 @@ mlx5_compress_dev_probe(struct mlx5_common_device *dev) sizeof(*priv), &init_params); if (cdev == NULL) { DRV_LOG(ERR, "Failed to create device \"%s\".", ibdev_name); - mlx5_dev_ctx_release(dev_ctx); - mlx5_free(dev_ctx); return -ENODEV; } DRV_LOG(INFO, @@ -812,8 +792,6 @@ mlx5_compress_dev_probe(struct mlx5_common_device *dev) priv->sq_ts_format = att.sq_ts_format; if (mlx5_compress_hw_global_prepare(priv) != 0) { rte_compressdev_pmd_destroy(priv->cdev); - mlx5_dev_ctx_release(priv->dev_ctx); - mlx5_free(priv->dev_ctx); return -1; } if (mlx5_mr_btree_init(&priv->mr_scache.cache, @@ -821,8 +799,6 @@ mlx5_compress_dev_probe(struct mlx5_common_device *dev) DRV_LOG(ERR, "Failed to allocate shared cache MR memory."); mlx5_compress_hw_global_release(priv); rte_compressdev_pmd_destroy(priv->cdev); - mlx5_dev_ctx_release(priv->dev_ctx); - mlx5_free(priv->dev_ctx); rte_errno = ENOMEM; return -rte_errno; } @@ -858,8 +834,7 @@ mlx5_compress_dev_remove(struct mlx5_common_device *dev) mlx5_mr_release_cache(&priv->mr_scache); mlx5_compress_hw_global_release(priv); rte_compressdev_pmd_destroy(priv->cdev); - mlx5_dev_ctx_release(priv->dev_ctx); - mlx5_free(priv->dev_ctx); + priv->dev_ctx = NULL; } return 0; } diff --git a/drivers/crypto/mlx5/mlx5_crypto.c b/drivers/crypto/mlx5/mlx5_crypto.c index 44656225d2..4f390c8bf4 100644 --- a/drivers/crypto/mlx5/mlx5_crypto.c +++ b/drivers/crypto/mlx5/mlx5_crypto.c @@ -945,7 
+945,7 @@ static int mlx5_crypto_dev_probe(struct mlx5_common_device *dev) { struct rte_cryptodev *crypto_dev; - struct mlx5_dev_ctx *dev_ctx; + struct mlx5_dev_ctx *dev_ctx = &dev->ctx; struct mlx5_devx_obj *login; struct mlx5_crypto_priv *priv; struct mlx5_crypto_devarg_params devarg_prms = { 0 }; @@ -957,7 +957,7 @@ mlx5_crypto_dev_probe(struct mlx5_common_device *dev) .max_nb_queue_pairs = RTE_CRYPTODEV_PMD_DEFAULT_MAX_NB_QUEUE_PAIRS, }; - const char *ibdev_name; + const char *ibdev_name = mlx5_os_get_ctx_device_name(dev_ctx->ctx); uint16_t rdmw_wqe_size; int ret; @@ -966,51 +966,28 @@ mlx5_crypto_dev_probe(struct mlx5_common_device *dev) rte_errno = ENOTSUP; return -rte_errno; } - dev_ctx = mlx5_malloc(MLX5_MEM_ZERO, sizeof(struct mlx5_dev_ctx), - RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY); - if (dev_ctx == NULL) { - DRV_LOG(ERR, "Device context allocation failure."); - rte_errno = ENOMEM; - return -rte_errno; - } - ret = mlx5_dev_ctx_prepare(dev_ctx, dev->dev, MLX5_CLASS_CRYPTO); - if (ret < 0) { - DRV_LOG(ERR, "Failed to create device context."); - mlx5_free(dev_ctx); - rte_errno = ENODEV; - return -rte_errno; - } - ibdev_name = mlx5_os_get_ctx_device_name(dev_ctx->ctx); if (mlx5_devx_cmd_query_hca_attr(dev_ctx->ctx, &attr) != 0 || attr.crypto == 0 || attr.aes_xts == 0) { DRV_LOG(ERR, "Not enough capabilities to support crypto " "operations, maybe old FW/OFED version?"); - mlx5_dev_ctx_release(dev_ctx); - mlx5_free(dev_ctx); rte_errno = ENOTSUP; return -ENOTSUP; } ret = mlx5_crypto_parse_devargs(dev->dev->devargs, &devarg_prms); if (ret) { DRV_LOG(ERR, "Failed to parse devargs."); - mlx5_dev_ctx_release(dev_ctx); - mlx5_free(dev_ctx); return -rte_errno; } login = mlx5_devx_cmd_create_crypto_login_obj(dev_ctx->ctx, &devarg_prms.login_attr); if (login == NULL) { DRV_LOG(ERR, "Failed to configure login."); - mlx5_dev_ctx_release(dev_ctx); - mlx5_free(dev_ctx); return -rte_errno; } crypto_dev = rte_cryptodev_pmd_create(ibdev_name, dev->dev, &init_params); if (crypto_dev == NULL) { DRV_LOG(ERR, "Failed to create device \"%s\".", ibdev_name); - mlx5_dev_ctx_release(dev_ctx); - mlx5_free(dev_ctx); return -ENODEV; } DRV_LOG(INFO, "Crypto device %s was created successfully.", ibdev_name); @@ -1025,8 +1002,6 @@ mlx5_crypto_dev_probe(struct mlx5_common_device *dev) priv->crypto_dev = crypto_dev; if (mlx5_crypto_hw_global_prepare(priv) != 0) { rte_cryptodev_pmd_destroy(priv->crypto_dev); - mlx5_dev_ctx_release(priv->dev_ctx); - mlx5_free(priv->dev_ctx); return -1; } if (mlx5_mr_btree_init(&priv->mr_scache.cache, @@ -1034,8 +1009,6 @@ mlx5_crypto_dev_probe(struct mlx5_common_device *dev) DRV_LOG(ERR, "Failed to allocate shared cache MR memory."); mlx5_crypto_hw_global_release(priv); rte_cryptodev_pmd_destroy(priv->crypto_dev); - mlx5_dev_ctx_release(priv->dev_ctx); - mlx5_free(priv->dev_ctx); rte_errno = ENOMEM; return -rte_errno; } @@ -1085,8 +1058,7 @@ mlx5_crypto_dev_remove(struct mlx5_common_device *dev) mlx5_crypto_hw_global_release(priv); rte_cryptodev_pmd_destroy(priv->crypto_dev); claim_zero(mlx5_devx_cmd_destroy(priv->login_obj)); - mlx5_dev_ctx_release(priv->dev_ctx); - mlx5_free(priv->dev_ctx); + priv->dev_ctx = NULL; } return 0; } diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index 812aadaaa4..c8134f064f 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -2882,31 +2882,24 @@ mlx5_verbs_dev_ctx_prepare(struct mlx5_dev_ctx *dev_ctx, struct rte_device *dev) int mlx5_os_net_probe(struct mlx5_common_device *dev) { - struct 
mlx5_dev_ctx *dev_ctx; + struct mlx5_dev_ctx *dev_ctx = &dev->ctx; uint8_t devx = 0; int ret; - dev_ctx = mlx5_malloc(MLX5_MEM_ZERO, sizeof(struct mlx5_dev_ctx), - RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY); - if (dev_ctx == NULL) { - DRV_LOG(ERR, "Device context allocation failure."); - rte_errno = ENOMEM; - return -rte_errno; - } /* - * Initialize context device and allocate all its resources. - * Try to do it with DV first, then usual Verbs. + * Context device and all its resources are created and initialized + * while common probing, using DevX API. When DevX isn't supported, + * we are trying to create them by Verbs only for net driver. + * Here, we check if the ctx creates successfully, and if not try to + * create it by Verbs. */ - ret = mlx5_dev_ctx_prepare(dev_ctx, dev->dev, MLX5_CLASS_ETH); - if (ret < 0) { - goto error; - } else if (dev_ctx->ctx) { + if (dev_ctx->ctx) { devx = 1; DRV_LOG(DEBUG, "DevX is supported."); } else { ret = mlx5_verbs_dev_ctx_prepare(dev_ctx, dev->dev); if (ret < 0) - goto error; + return -rte_errno; DRV_LOG(DEBUG, "DevX is NOT supported."); } if (rte_eal_process_type() == RTE_PROC_PRIMARY) @@ -2915,19 +2908,12 @@ mlx5_os_net_probe(struct mlx5_common_device *dev) if (ret) { DRV_LOG(ERR, "unable to init PMD global data: %s", strerror(rte_errno)); - goto error; + return -rte_errno; } if (mlx5_dev_is_pci(dev->dev)) - ret = mlx5_os_pci_probe(dev, dev_ctx, devx); + return mlx5_os_pci_probe(dev, dev_ctx, devx); else - ret = mlx5_os_auxiliary_probe(dev->dev, dev_ctx, devx); - if (ret) - goto error; - return ret; -error: - mlx5_dev_ctx_release(dev_ctx); - mlx5_free(dev_ctx); - return ret; + return mlx5_os_auxiliary_probe(dev->dev, dev_ctx, devx); } /** diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index e0b180e83c..085bf87abc 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -2355,33 +2355,6 @@ mlx5_eth_find_next(uint16_t port_id, struct rte_device *odev) return port_id; } -/** - * Finds the device context that match the device. - * The existence of multiple ethdev per pci device is only with representors. - * On such case, it is enough to get only one of the ports as they all share - * the same device context. - * - * @param dev - * Pointer to the device. - * - * @return - * Pointer to the device context if found, NULL otherwise. - */ -static struct mlx5_dev_ctx * -mlx5_get_dev_ctx(struct rte_device *dev) -{ - struct mlx5_priv *priv; - uint16_t port_id; - - port_id = rte_eth_find_next_of(0, dev); - if (port_id == RTE_MAX_ETHPORTS) - return NULL; - priv = rte_eth_devices[port_id].data->dev_private; - if (priv == NULL) - return NULL; - return priv->sh->dev_ctx; -} - /** * Callback to remove a device. * @@ -2396,7 +2369,6 @@ mlx5_get_dev_ctx(struct rte_device *dev) int mlx5_net_remove(struct mlx5_common_device *dev) { - struct mlx5_dev_ctx *dev_ctx = mlx5_get_dev_ctx(dev->dev); uint16_t port_id; int ret = 0; @@ -2411,10 +2383,6 @@ mlx5_net_remove(struct mlx5_common_device *dev) ret |= rte_eth_dev_close(port_id); } - if (dev_ctx) { - mlx5_dev_ctx_release(dev_ctx); - mlx5_free(dev_ctx); - } return ret == 0 ? 
0 : -EIO; } diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c index f21fb60272..d269cf2f74 100644 --- a/drivers/net/mlx5/windows/mlx5_os.c +++ b/drivers/net/mlx5/windows/mlx5_os.c @@ -878,7 +878,7 @@ int mlx5_os_net_probe(struct mlx5_common_device *dev) { struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->dev); - struct mlx5_dev_ctx *dev_ctx; + struct mlx5_dev_ctx *dev_ctx = &dev->ctx; struct mlx5_dev_spawn_data spawn = { .pf_bond = -1 }; struct mlx5_dev_config dev_config; unsigned int dev_config_vf; @@ -895,16 +895,6 @@ mlx5_os_net_probe(struct mlx5_common_device *dev) strerror(rte_errno)); return -rte_errno; } - dev_ctx = mlx5_malloc(MLX5_MEM_ZERO, sizeof(struct mlx5_dev_ctx), - RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY); - if (dev_ctx == NULL) { - DRV_LOG(ERR, "Device context allocation failure."); - rte_errno = ENOMEM; - return -rte_errno; - } - ret = mlx5_dev_ctx_prepare(dev_ctx, dev->dev, MLX5_CLASS_ETH); - if (ret < 0) - goto error; memset(&spawn.info, 0, sizeof(spawn.info)); spawn.max_port = 1; spawn.phys_port = 1; @@ -955,20 +945,14 @@ mlx5_os_net_probe(struct mlx5_common_device *dev) dev_config.decap_en = 0; dev_config.log_hp_size = MLX5_ARG_UNSET; spawn.eth_dev = mlx5_dev_spawn(dev->dev, dev_ctx, &spawn, &dev_config); - if (!spawn.eth_dev) { - ret = -rte_errno; - goto error; - } + if (!spawn.eth_dev) + return -rte_errno; restore = spawn.eth_dev->data->dev_flags; rte_eth_copy_pci_info(spawn.eth_dev, pci_dev); /* Restore non-PCI flags cleared by the above call. */ spawn.eth_dev->data->dev_flags |= restore; rte_eth_dev_probing_finish(spawn.eth_dev); return 0; -error: - mlx5_dev_ctx_release(dev_ctx); - mlx5_free(dev_ctx); - return ret; } /** diff --git a/drivers/regex/mlx5/mlx5_regex.c b/drivers/regex/mlx5/mlx5_regex.c index 78fa90797c..3772007d24 100644 --- a/drivers/regex/mlx5/mlx5_regex.c +++ b/drivers/regex/mlx5/mlx5_regex.c @@ -125,51 +125,36 @@ static int mlx5_regex_dev_probe(struct mlx5_common_device *mlx5_dev) { struct mlx5_regex_priv *priv = NULL; - struct mlx5_dev_ctx *dev_ctx = NULL; + struct mlx5_dev_ctx *dev_ctx = &mlx5_dev->ctx; struct mlx5_hca_attr attr; char name[RTE_REGEXDEV_NAME_MAX_LEN]; - const char *ibdev_name; + const char *ibdev_name = mlx5_os_get_ctx_device_name(dev_ctx->ctx); int ret; uint32_t val; - dev_ctx = rte_zmalloc("mlx5 context device", sizeof(*dev_ctx), - RTE_CACHE_LINE_SIZE); - if (dev_ctx == NULL) { - DRV_LOG(ERR, "Device context allocation failure."); - rte_errno = ENOMEM; - return -rte_errno; - } - ret = mlx5_dev_ctx_prepare(dev_ctx, mlx5_dev->dev, MLX5_CLASS_REGEX); - if (ret < 0) { - DRV_LOG(ERR, "Failed to create device context."); - rte_free(dev_ctx); - rte_errno = ENODEV; - return -rte_errno; - } - ibdev_name = mlx5_os_get_ctx_device_name(dev_ctx->ctx); DRV_LOG(INFO, "Probe device \"%s\".", ibdev_name); ret = mlx5_devx_cmd_query_hca_attr(dev_ctx->ctx, &attr); if (ret) { DRV_LOG(ERR, "Unable to read HCA capabilities."); rte_errno = ENOTSUP; - goto dev_error; + return -rte_errno; } else if (!attr.regex || attr.regexp_num_of_engines == 0) { DRV_LOG(ERR, "Not enough capabilities to support RegEx, maybe " "old FW/OFED version?"); rte_errno = ENOTSUP; - goto dev_error; + return -rte_errno; } if (mlx5_regex_engines_status(dev_ctx->ctx, 2)) { DRV_LOG(ERR, "RegEx engine error."); rte_errno = ENOMEM; - goto dev_error; + return -rte_errno; } priv = rte_zmalloc("mlx5 regex device private", sizeof(*priv), RTE_CACHE_LINE_SIZE); if (!priv) { DRV_LOG(ERR, "Failed to allocate private memory."); rte_errno = ENOMEM; - goto 
dev_error; + return -rte_errno; } priv->sq_ts_format = attr.sq_ts_format; priv->dev_ctx = dev_ctx; @@ -244,10 +229,6 @@ mlx5_regex_dev_probe(struct mlx5_common_device *mlx5_dev) if (priv->regexdev) rte_regexdev_unregister(priv->regexdev); dev_error: - if (dev_ctx) { - mlx5_dev_ctx_release(dev_ctx); - rte_free(dev_ctx); - } if (priv) rte_free(priv); return -rte_errno; @@ -279,10 +260,6 @@ mlx5_regex_dev_remove(struct mlx5_common_device *mlx5_dev) mlx5_glue->devx_free_uar(priv->uar); if (priv->regexdev) rte_regexdev_unregister(priv->regexdev); - if (priv->dev_ctx) { - mlx5_dev_ctx_release(priv->dev_ctx); - rte_free(priv->dev_ctx); - } rte_free(priv); } return 0; diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c index 6771445582..2b1b521313 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa.c @@ -506,34 +506,19 @@ static int mlx5_vdpa_dev_probe(struct mlx5_common_device *dev) { struct mlx5_vdpa_priv *priv = NULL; - struct mlx5_dev_ctx *dev_ctx = NULL; struct mlx5_hca_attr attr; int ret; - dev_ctx = mlx5_malloc(MLX5_MEM_ZERO, sizeof(struct mlx5_dev_ctx), - RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY); - if (dev_ctx == NULL) { - DRV_LOG(ERR, "Device context allocation failure."); - rte_errno = ENOMEM; - return -rte_errno; - } - ret = mlx5_dev_ctx_prepare(dev_ctx, dev->dev, MLX5_CLASS_VDPA); - if (ret < 0) { - DRV_LOG(ERR, "Failed to create device context."); - mlx5_free(dev_ctx); - rte_errno = ENODEV; - return -rte_errno; - } - ret = mlx5_devx_cmd_query_hca_attr(dev_ctx->ctx, &attr); + ret = mlx5_devx_cmd_query_hca_attr(dev->ctx.ctx, &attr); if (ret) { DRV_LOG(ERR, "Unable to read HCA capabilities."); rte_errno = ENOTSUP; - goto error; + return -rte_errno; } else if (!attr.vdpa.valid || !attr.vdpa.max_num_virtio_queues) { DRV_LOG(ERR, "Not enough capabilities to support vdpa, maybe " "old FW/OFED version?"); rte_errno = ENOTSUP; - goto error; + return -rte_errno; } if (!attr.vdpa.queue_counters_valid) DRV_LOG(DEBUG, "No capability to support virtq statistics."); @@ -544,7 +529,7 @@ mlx5_vdpa_dev_probe(struct mlx5_common_device *dev) if (!priv) { DRV_LOG(ERR, "Failed to allocate private memory."); rte_errno = ENOMEM; - goto error; + return -rte_errno; } priv->caps = attr.vdpa; priv->log_max_rqt_size = attr.log_max_rqt_size; @@ -552,8 +537,8 @@ mlx5_vdpa_dev_probe(struct mlx5_common_device *dev) priv->qp_ts_format = attr.qp_ts_format; if (attr.num_lag_ports == 0) priv->num_lag_ports = 1; - priv->dev_ctx = dev_ctx; - priv->var = mlx5_glue->dv_alloc_var(dev_ctx->ctx, 0); + priv->dev_ctx = &dev->ctx; + priv->var = mlx5_glue->dv_alloc_var(priv->dev_ctx->ctx, 0); if (!priv->var) { DRV_LOG(ERR, "Failed to allocate VAR %u.", errno); goto error; @@ -578,10 +563,6 @@ mlx5_vdpa_dev_probe(struct mlx5_common_device *dev) mlx5_glue->dv_free_var(priv->var); rte_free(priv); } - if (dev_ctx) { - mlx5_dev_ctx_release(dev_ctx); - mlx5_free(dev_ctx); - } return -rte_errno; } @@ -610,10 +591,6 @@ mlx5_vdpa_dev_remove(struct mlx5_common_device *dev) } if (priv->vdev) rte_vdpa_unregister_device(priv->vdev); - if (priv->dev_ctx) { - mlx5_dev_ctx_release(priv->dev_ctx); - mlx5_free(priv->dev_ctx); - } pthread_mutex_destroy(&priv->vq_config_lock); rte_free(priv); } From patchwork Tue Aug 17 13:44:36 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michael Baum X-Patchwork-Id: 97007 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: 
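To summarize the ownership change in RFC 15/21 above: the shared context (struct mlx5_dev_ctx) is now allocated and prepared exactly once by the common layer, embedded in struct mlx5_common_device, and only borrowed by the class drivers, which no longer allocate or release it themselves. The condensed sketch below restates that flow with error handling trimmed; the helper and field names come from the hunks above, while common_dev_create_sketch and mlx5_dummy_class_probe are invented names for illustration (the first belongs conceptually in mlx5_common.c, the second in any class PMD).

/*
 * Common layer (conceptually inside drivers/common/mlx5/mlx5_common.c,
 * which already has the needed includes): create the device and prepare
 * its shared context once for all classes.  Condensed from
 * mlx5_common_dev_create() above; list insertion and detailed error
 * logging are omitted.
 */
static struct mlx5_common_device *
common_dev_create_sketch(struct rte_device *eal_dev, uint32_t classes)
{
        struct mlx5_common_device *dev;

        dev = rte_zmalloc("mlx5_common_device", sizeof(*dev), 0);
        if (dev == NULL) {
                rte_errno = ENOMEM;
                return NULL;
        }
        if (mlx5_dev_ctx_prepare(&dev->ctx, eal_dev, classes) != 0) {
                rte_free(dev);
                return NULL;
        }
        dev->dev = eal_dev;
        return dev;
}

/*
 * Class driver side (any PMD): the context is borrowed, never owned.
 * DRV_LOG is the per-driver logging macro these files already define.
 */
static int
mlx5_dummy_class_probe(struct mlx5_common_device *dev)
{
        struct mlx5_dev_ctx *dev_ctx = &dev->ctx;
        const char *ibdev_name = mlx5_os_get_ctx_device_name(dev_ctx->ctx);

        DRV_LOG(INFO, "Probing device \"%s\".", ibdev_name);
        /*
         * On any failure the driver simply returns; the common layer keeps
         * ownership of dev->ctx and releases it in mlx5_common_dev_release().
         */
        return 0;
}

This is also why the per-driver mlx5_dev_ctx_release()/mlx5_free() error paths disappear from all five PMDs in the diff above.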
From: Michael Baum
To: 
CC: Matan Azrad , Raslan Darawsheh , Viacheslav Ovsiienko
Date: Tue, 17 Aug 2021 16:44:36 +0300
Message-ID: <20210817134441.1966618-17-michaelba@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210817134441.1966618-1-michaelba@nvidia.com>
References: <20210817134441.1966618-1-michaelba@nvidia.com>
MIME-Version: 1.0
Subject: [dpdk-dev] [RFC 16/21] common/mlx5: add HCA attributes to context device structure
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions
Errors-To: 
dev-bounces@dpdk.org Sender: "dev" Add HCA attributes structure as a field of context device structure. It query in common probing, and check if the device supports the chosen classes. Signed-off-by: Michael Baum --- drivers/common/mlx5/mlx5_common.c | 67 ++++++++++++++++++++++++++++++- drivers/common/mlx5/mlx5_common.h | 9 +++-- 2 files changed, 70 insertions(+), 6 deletions(-) diff --git a/drivers/common/mlx5/mlx5_common.c b/drivers/common/mlx5/mlx5_common.c index b500e7834e..e4c1984700 100644 --- a/drivers/common/mlx5/mlx5_common.c +++ b/drivers/common/mlx5/mlx5_common.c @@ -310,6 +310,56 @@ mlx5_dev_to_pci_str(const struct rte_device *dev, char *addr, size_t size) #endif } +/** + * Validate HCA attributes. + * + * @param attr + * Attributes device values. + * @param classes + * Chosen classes come from device arguments. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +static int +mlx5_hca_attr_validate(struct mlx5_hca_attr *attr, uint32_t classes) +{ + if (classes & MLX5_CLASS_VDPA) { + if (!attr->vdpa.valid || !attr->vdpa.max_num_virtio_queues) { + DRV_LOG(ERR, + "Not enough capabilities to support vDPA, maybe old FW/OFED version?"); + rte_errno = ENOTSUP; + return -rte_errno; + } + } + if (classes & MLX5_CLASS_REGEX) { + if (!attr->regex || attr->regexp_num_of_engines == 0) { + DRV_LOG(ERR, + "Not enough capabilities to support RegEx, maybe old FW/OFED version?"); + rte_errno = ENOTSUP; + return -rte_errno; + } + } + if (classes & MLX5_CLASS_COMPRESS) { + if (attr->mmo_compress_en == 0 || + attr->mmo_decompress_en == 0 || attr->mmo_dma_en == 0) { + DRV_LOG(ERR, + "Not enough capabilities to support compress operations, maybe old FW/OFED version?"); + rte_errno = ENOTSUP; + return -ENOTSUP; + } + } + if (classes & MLX5_CLASS_CRYPTO) { + if (attr->crypto == 0 || attr->aes_xts == 0) { + DRV_LOG(ERR, + "Not enough capabilities to support crypto operations, maybe old FW/OFED version?"); + rte_errno = ENOTSUP; + return -ENOTSUP; + } + } + return 0; +} + /** * Uninitialize context device and release all its resources. * @@ -379,6 +429,13 @@ mlx5_dev_ctx_prepare(struct mlx5_dev_ctx *dev_ctx, struct rte_device *dev, ret = mlx5_os_pd_create(dev_ctx); if (ret) goto error; + /* Query HCA attributes. */ + ret = mlx5_devx_cmd_query_hca_attr(dev_ctx->ctx, &dev_ctx->hca_attr); + if (ret) { + DRV_LOG(ERR, "Unable to read HCA capabilities."); + rte_errno = ENOTSUP; + goto error; + } return ret; error: mlx5_dev_ctx_release(dev_ctx); @@ -507,13 +564,19 @@ mlx5_common_dev_probe(struct rte_device *eal_dev) new_device = true; } else { /* Validate combination here. */ - ret = is_valid_class_combination(classes | - dev->classes_loaded); + ret = is_valid_class_combination(classes | dev->classes_loaded); if (ret != 0) { DRV_LOG(ERR, "Unsupported mlx5 classes combination."); return ret; } } + if (dev->ctx.ctx) { + /* Validate HCA attributes here. */ + ret = mlx5_hca_attr_validate(&dev->ctx.hca_attr, + classes | dev->classes_loaded); + if (ret) + goto class_err; + } ret = drivers_probe(dev, classes); if (ret) goto class_err; diff --git a/drivers/common/mlx5/mlx5_common.h b/drivers/common/mlx5/mlx5_common.h index 644dc58bc9..da03e160d2 100644 --- a/drivers/common/mlx5/mlx5_common.h +++ b/drivers/common/mlx5/mlx5_common.h @@ -329,10 +329,11 @@ void mlx5_common_init(void); * Contains HW device objects which belong to same device with multiple drivers. */ struct mlx5_dev_ctx { - void *ctx; /* Verbs/DV/DevX context. */ - void *pd; /* Protection Domain. 
*/ - uint32_t pdn; /* Protection Domain Number. */ - int numa_node; /* Numa node of device. */ + void *ctx; /* Verbs/DV/DevX context. */ + void *pd; /* Protection Domain. */ + uint32_t pdn; /* Protection Domain Number. */ + int numa_node; /* Numa node of device. */ + struct mlx5_hca_attr hca_attr; /* HCA attributes. */ }; struct mlx5_common_device { From patchwork Tue Aug 17 13:44:37 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michael Baum X-Patchwork-Id: 97009 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id B4E98A0548; Tue, 17 Aug 2021 15:47:50 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 734454128F; Tue, 17 Aug 2021 15:45:49 +0200 (CEST) Received: from NAM10-MW2-obe.outbound.protection.outlook.com (mail-mw2nam10on2041.outbound.protection.outlook.com [40.107.94.41]) by mails.dpdk.org (Postfix) with ESMTP id A2FF54127F for ; Tue, 17 Aug 2021 15:45:47 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=feBQ122Zkd9NRz8d+JkwflPH69odHla6wYW4QkpEEgfsdBxBMvPIQsOSpdWdKZqFOls57kvr8zllRdBkN0GFQOt4s65ILLjQRWA7DDvUHOKx7s37Y2/a8Ih7pvf7m6qqqsVR730SDXX1vCPQPWDkq5SWJSazVtvEM2s/eesrMt3J83OV6IA37U3Ls/FQOa07aMmJlAUNPEeUo3bjyEbnRxFTSYmpHydLzWEr4dWUlsGS89uU1ZjMNPRHpbtspw6lBqrDbEBJTLIedhXDcQ0s2cnIzuQZG6IqQz+ka9CU+p5IPrqExS6KcxZ7nHir4nOqzTMzJvVgWXN2lh2MJyXn/Q== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=sJN2hoRADqWSQdIxVABqPFrthujMNibU2GBbDDmXAsE=; b=aCsGPqbq89sIbikiihywO34sYITGlisYNCm/5WvAR9mNJaSC2oGmldVxMKGkE55zJfVOtIsa92Gj6lhs/X9iD0uMlKl4ZnqK+42Co+eiZYHnZuwh3CC4JNPHw0fVL0Nk0wCvAZgcgLfA9OJPu6hqvynST32rMhSxVsCDzUCE+NV1UJeiNPayT77YOtPotut2MLOdRIXILPF30RuuIyqvRfFXcyPUhF1un41fj3dtMNHFSkfLrMvS9a3tlZTGH1JOuYxABDzn7HiaO3qzTNvkTo0tH38t8nKHgyWDYrgb5uNoJ+IK/+FhcQwCGkIuNtlAHoRjlQSQajvpgy2SoAKN5A== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 216.228.112.34) smtp.rcpttodomain=dpdk.org smtp.mailfrom=nvidia.com; dmarc=pass (p=quarantine sp=none pct=100) action=none header.from=nvidia.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=sJN2hoRADqWSQdIxVABqPFrthujMNibU2GBbDDmXAsE=; b=TTkJ8e7RO0F9yuJrKd9yC3IVCsLHQcn68nBMOmm+4BgnX01IMV21TPBVNgPwCA1Cqk9dapGhPGN4dwekpGul4s5u3+eK2b18MPTwvTxLLN1MKfeHaU5JoCzZbAoz9Wk+lhqRXc8K0U4j8lq1V443+XYfyQfE6mjHtUw885l99k6GVDG4tW1U/vyIebUwI74x5CDtie1C5opOXpQ86LsBoy6VqlVjz9j/kbhT31VDZvSIl96o1/GgAVZKSsjR+vrGP0k20nt+SHRt14Af5VXO+6Pa04i6wmjpr4HRt67ut6BvRcVKH+VZXgk7d6c4+sgGDOelgpzheJpvO1vI/GHJLg== Received: from MWHPR19CA0012.namprd19.prod.outlook.com (2603:10b6:300:d4::22) by BN6PR12MB1268.namprd12.prod.outlook.com (2603:10b6:404:1a::13) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4415.21; Tue, 17 Aug 2021 13:45:46 +0000 Received: from CO1NAM11FT033.eop-nam11.prod.protection.outlook.com (2603:10b6:300:d4:cafe::e9) by MWHPR19CA0012.outlook.office365.com (2603:10b6:300:d4::22) with Microsoft SMTP Server (version=TLS1_2, 
cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4436.18 via Frontend Transport; Tue, 17 Aug 2021 13:45:46 +0000 X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 216.228.112.34) smtp.mailfrom=nvidia.com; dpdk.org; dkim=none (message not signed) header.d=none;dpdk.org; dmarc=pass action=none header.from=nvidia.com; Received-SPF: Pass (protection.outlook.com: domain of nvidia.com designates 216.228.112.34 as permitted sender) receiver=protection.outlook.com; client-ip=216.228.112.34; helo=mail.nvidia.com; Received: from mail.nvidia.com (216.228.112.34) by CO1NAM11FT033.mail.protection.outlook.com (10.13.174.247) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.20.4415.14 via Frontend Transport; Tue, 17 Aug 2021 13:45:45 +0000 Received: from DRHQMAIL107.nvidia.com (10.27.9.16) by HQMAIL107.nvidia.com (172.20.187.13) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Tue, 17 Aug 2021 13:45:42 +0000 Received: from nvidia.com (172.20.187.6) by DRHQMAIL107.nvidia.com (10.27.9.16) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Tue, 17 Aug 2021 13:45:40 +0000 From: Michael Baum To: CC: Matan Azrad , Raslan Darawsheh , Viacheslav Ovsiienko Date: Tue, 17 Aug 2021 16:44:37 +0300 Message-ID: <20210817134441.1966618-18-michaelba@nvidia.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210817134441.1966618-1-michaelba@nvidia.com> References: <20210817134441.1966618-1-michaelba@nvidia.com> MIME-Version: 1.0 X-Originating-IP: [172.20.187.6] X-ClientProxiedBy: HQMAIL111.nvidia.com (172.20.187.18) To DRHQMAIL107.nvidia.com (10.27.9.16) X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-Office365-Filtering-Correlation-Id: 3c4e1b45-7a22-4b68-d891-08d96185506c X-MS-TrafficTypeDiagnostic: BN6PR12MB1268: X-Microsoft-Antispam-PRVS: X-MS-Oob-TLC-OOBClassifiers: OLM:8273; X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0; X-Microsoft-Antispam-Message-Info: o6Y7y4QeYunhzU+5MxYHWX9xj27dnSyYO50jokSdNVHaStdJwHhn+3RyOFxD8l4YwBm4b59clD46vya2GNznqOmz8H/etyaG+2JwPAwu8XajdjmeqsKNhLf4zYIjAfAtN02Gv1xbTLn4duWalmKygRLulSDGy82WJGWL2mwz8f3JrVMtxJ/nv0K+aF0LDVkn6a5YxYX4dGXzJlKsMXJubddUlSKDTa2Mrhd3H7u+xLLB6Nx27ft0nXA2VdqB7VMUTC9vhAa2MrNDP9uScKdWuiltcLPkw98hzCB5o97fW/pGTWceWv83vJoU7DUUs9PUttTegyawHWPF8uNdMCg2UQK9J7HdcX7rQMasPAMupHMDDM7e8nNperIKt/cxLw0mrto0uBfTQ67CPR8Bg2IdgREenXWuxXERe0aCWXSCbcggTIQDIb/WZBRrqTy7s1H8I3lzBQ6Tr0KyQsMdIcephxJQpW7Yl/3WGtbk48fzw8szwsx1EleqINjO89vcxHWVVKw/mT8hfC6xOqDED2rQg/f0Sr76ro+yzHkwtjaWB9X8j1DT80zchosdcAq5JKBjzjmzGLKSHl5C5dy3hMFrbkNp7ce0ldkiJRyZdRuiKLpC8Hv3AY65F06XaW24+PdqQ3oIO07EflNGx8nW+D2yyjCGwK52oEmfl5uXNAnC6HHpP1Yv2rLARoCoD0ef4g4ANM1j5kN6yHY3h0zTt8sSfA== X-Forefront-Antispam-Report: CIP:216.228.112.34; CTRY:US; LANG:en; SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:mail.nvidia.com; PTR:schybrid03.nvidia.com; CAT:NONE; SFS:(4636009)(136003)(376002)(39860400002)(346002)(396003)(46966006)(36840700001)(1076003)(82740400003)(16526019)(36860700001)(26005)(478600001)(5660300002)(7696005)(55016002)(36756003)(47076005)(2906002)(70586007)(70206006)(186003)(83380400001)(54906003)(4326008)(316002)(8676002)(336012)(6286002)(86362001)(107886003)(6916009)(82310400003)(6666004)(356005)(7636003)(2616005)(426003)(8936002); DIR:OUT; SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Aug 2021 13:45:45.5872 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: 3c4e1b45-7a22-4b68-d891-08d96185506c X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a 
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a; Ip=[216.228.112.34]; Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: CO1NAM11FT033.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN6PR12MB1268 Subject: [dpdk-dev] [RFC 17/21] regex/mlx5: use HCA attributes from context device X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Use HCA attributes from context device structure, instead of query it for itself. Signed-off-by: Michael Baum --- drivers/regex/mlx5/mlx5_regex.c | 18 +++--------------- 1 file changed, 3 insertions(+), 15 deletions(-) diff --git a/drivers/regex/mlx5/mlx5_regex.c b/drivers/regex/mlx5/mlx5_regex.c index 3772007d24..ed3cd8972e 100644 --- a/drivers/regex/mlx5/mlx5_regex.c +++ b/drivers/regex/mlx5/mlx5_regex.c @@ -126,24 +126,12 @@ mlx5_regex_dev_probe(struct mlx5_common_device *mlx5_dev) { struct mlx5_regex_priv *priv = NULL; struct mlx5_dev_ctx *dev_ctx = &mlx5_dev->ctx; - struct mlx5_hca_attr attr; char name[RTE_REGEXDEV_NAME_MAX_LEN]; const char *ibdev_name = mlx5_os_get_ctx_device_name(dev_ctx->ctx); int ret; uint32_t val; DRV_LOG(INFO, "Probe device \"%s\".", ibdev_name); - ret = mlx5_devx_cmd_query_hca_attr(dev_ctx->ctx, &attr); - if (ret) { - DRV_LOG(ERR, "Unable to read HCA capabilities."); - rte_errno = ENOTSUP; - return -rte_errno; - } else if (!attr.regex || attr.regexp_num_of_engines == 0) { - DRV_LOG(ERR, "Not enough capabilities to support RegEx, maybe " - "old FW/OFED version?"); - rte_errno = ENOTSUP; - return -rte_errno; - } if (mlx5_regex_engines_status(dev_ctx->ctx, 2)) { DRV_LOG(ERR, "RegEx engine error."); rte_errno = ENOMEM; @@ -156,7 +144,7 @@ mlx5_regex_dev_probe(struct mlx5_common_device *mlx5_dev) rte_errno = ENOMEM; return -rte_errno; } - priv->sq_ts_format = attr.sq_ts_format; + priv->sq_ts_format = dev_ctx->hca_attr.sq_ts_format; priv->dev_ctx = dev_ctx; priv->nb_engines = 2; /* attr.regexp_num_of_engines */ ret = mlx5_devx_regex_register_read(priv->dev_ctx->ctx, 0, @@ -190,8 +178,8 @@ mlx5_regex_dev_probe(struct mlx5_common_device *mlx5_dev) priv->regexdev->dev_ops = &mlx5_regexdev_ops; priv->regexdev->enqueue = mlx5_regexdev_enqueue; #ifdef HAVE_MLX5_UMR_IMKEY - if (!attr.umr_indirect_mkey_disabled && - !attr.umr_modify_entity_size_disabled) + if (!dev_ctx->hca_attr.umr_indirect_mkey_disabled && + !dev_ctx->hca_attr.umr_modify_entity_size_disabled) priv->has_umr = 1; if (priv->has_umr) priv->regexdev->enqueue = mlx5_regexdev_enqueue_gga; From patchwork Tue Aug 17 13:44:38 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michael Baum X-Patchwork-Id: 97008 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id E8804A0548; Tue, 17 Aug 2021 15:47:44 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 5507941288; Tue, 17 Aug 2021 15:45:48 +0200 (CEST) Received: from NAM11-BN8-obe.outbound.protection.outlook.com 
(mail-bn8nam11on2043.outbound.protection.outlook.com [40.107.236.43]) by mails.dpdk.org (Postfix) with ESMTP id E12A041277 for ; Tue, 17 Aug 2021 15:45:45 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=BxKzlhTT09TWCb2n7c56F8mcYO8NaNQQhTZ+rfVC7rVJswLGg/vpqrIp88/O5fmZ/4bHAqAUtT0/sWEaPf+hMiCPi0TqRuiN2FOh1mqZ4JW8yx7tBn2SEIKjC+RfoLYc2jk7kgxZ8FdV5sCBNlumdk4W+/fAkGABFGx4jYHa1aC3iB9zSFz9Y1sTAgKIH8/bOXUSFKT/cZKFmPckY0HlcMAv9XrlkftuMwnAjW4gegb3cJo83EhbiFYpkHOZ8rRQd7GJWiZl9/VYBxx3skBa0YRZp/fPoOs9lYffN5977RGephRmaGik/kjm2fPkPgzrm0n+Jtb4/JmOJpV/U8+aaA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=JA/I5BI8i9nKfn05OwQURCZ8RDugY/ckn3lIF/l/r5k=; b=X/BbrPIREmrYwh2Zmn3B/3O3mfHZnNZqtuD836MWp3izTMEu5l7P5Jz+0SGBoRxuXm6a8F7JVGXwajZ+wq2+GeE+HzNVv0oZfH6+heaa6guaBsh5ZSltSclZfYSPFg1wd1AKWA/E8H9JLczJtpdTtE1QIB6zQQHEa/G7ltw8Y4JsThxWLvi2XNbbcx5f//A7FNg7PoMAiai3rE69ca29lo9cZctQHitOk8yoCf2ASx0KYR3UvUBBdRhhD295FYkpPsKcJHTIqIHIbXC2KE3q4jZpFs1ZmolPsR3r000F+ck9xGENUYGOfK4s/yrtkq4renEt3ZqsFGEVDMr4EE2L6Q== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 216.228.112.36) smtp.rcpttodomain=dpdk.org smtp.mailfrom=nvidia.com; dmarc=pass (p=quarantine sp=none pct=100) action=none header.from=nvidia.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=JA/I5BI8i9nKfn05OwQURCZ8RDugY/ckn3lIF/l/r5k=; b=cAVOsd0tsHM7NiBk/W0/tW2GuQaduTUSczBD0/+Vceki9ZXdjGgOX3NDSFsHcBTLW+GQzCS9IUP0Cg8FIdOGreG15wTxLF+4yHZuzKncDdNRYBL/juNJUfwDdM77oaIK03ci9RfZV73l7VZAfIA1C/gQvfjnW4uUijdqz1+mBY7ach4hUSi6XQ8z6xuk8sKBYrGBsdrV/omQGry0RPIReLfOt8MJb6Cvu4WqiRVPbaWLhOzULjB+hfmxnZ+aPF7yFy3CV+jEHokp4eMjX6MjwlesS75+bNQ0B3YxuFRdK/Z0eY0BR208jMf7HaqK0fHPxAWUEYMMAefE0K56Vonntw== Received: from BN9PR03CA0573.namprd03.prod.outlook.com (2603:10b6:408:10d::8) by DM6PR12MB4385.namprd12.prod.outlook.com (2603:10b6:5:2a6::14) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4415.14; Tue, 17 Aug 2021 13:45:44 +0000 Received: from BN8NAM11FT068.eop-nam11.prod.protection.outlook.com (2603:10b6:408:10d:cafe::37) by BN9PR03CA0573.outlook.office365.com (2603:10b6:408:10d::8) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4436.19 via Frontend Transport; Tue, 17 Aug 2021 13:45:44 +0000 X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 216.228.112.36) smtp.mailfrom=nvidia.com; dpdk.org; dkim=none (message not signed) header.d=none;dpdk.org; dmarc=pass action=none header.from=nvidia.com; Received-SPF: Pass (protection.outlook.com: domain of nvidia.com designates 216.228.112.36 as permitted sender) receiver=protection.outlook.com; client-ip=216.228.112.36; helo=mail.nvidia.com; Received: from mail.nvidia.com (216.228.112.36) by BN8NAM11FT068.mail.protection.outlook.com (10.13.177.69) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.20.4415.16 via Frontend Transport; Tue, 17 Aug 2021 13:45:44 +0000 Received: from DRHQMAIL107.nvidia.com (10.27.9.16) by HQMAIL101.nvidia.com (172.20.187.10) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Tue, 17 Aug 2021 13:45:43 +0000 Received: from nvidia.com (172.20.187.6) by DRHQMAIL107.nvidia.com (10.27.9.16) with Microsoft SMTP 
Server (TLS) id 15.0.1497.2; Tue, 17 Aug 2021 13:45:42 +0000 From: Michael Baum To: CC: Matan Azrad , Raslan Darawsheh , Viacheslav Ovsiienko Date: Tue, 17 Aug 2021 16:44:38 +0300 Message-ID: <20210817134441.1966618-19-michaelba@nvidia.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210817134441.1966618-1-michaelba@nvidia.com> References: <20210817134441.1966618-1-michaelba@nvidia.com> MIME-Version: 1.0 X-Originating-IP: [172.20.187.6] X-ClientProxiedBy: HQMAIL111.nvidia.com (172.20.187.18) To DRHQMAIL107.nvidia.com (10.27.9.16) X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-Office365-Filtering-Correlation-Id: aace5e15-6b91-4395-96dc-08d961854f97 X-MS-TrafficTypeDiagnostic: DM6PR12MB4385: X-Microsoft-Antispam-PRVS: X-MS-Oob-TLC-OOBClassifiers: OLM:4303; X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0; X-Microsoft-Antispam-Message-Info: Pr+HXocrtmJEkWq+Ro/eKgQyaONjyr0EA74eH6XEGzPWaPsYHa5ISwUVnWX5DDc9GP5p3EkaMFN/c1MkxEnysmVZ7Uez4VF/M7rgZL5G5v97gaOgKpzCms5hTKKYQArF4CBCuRsuyLkc0D6ugc2UbeV5D0aqVwRWP7NWYyHWdkun7orjTaXbRxc7JrXFYmLB56d75JeWWQyAT7Mv6TwSHmrC1nnrlVcJaTTDdTb9ezvyHXSEPS5dW3jVoRKz2dlKHxKbj8VigdvOK42twJBXhcoy3NIjOSNkh8cm6ciS7XSdHxRxxJG4lztKrm35DktllvvJBzLj3biQOL9JhJml+dcDQIpqpan5CMx/lzK2pj382/13CX9j+SiUMYT9O1S9S9HQwwBM5Ggh+v5l62xrqhBKiSmAj6aIjJ8Yehijm55lOSfDfd54NOUvIRn1NzHQNxkqbA7T0TWUUbf2GcjdJiekjbC20gIUwCgVMfZoQrWz2Y2z3pE0RWYrXSZJlptFpqz/JOYP/L/O1QECJWQYB1O+Krvc1ZwFsQzLyiqE/tbYH3kEVF7M7AcEn7zhgBS5wz92Pce1ogKTp4HK7pUgpCa8Njk6iYMVGLg1VnDnCRj2A+jVqyKFxQWd5Fi7Yw8RnebORfCOteSnApFVSIdZbI9lh0YyfoS9D16qfYfpX6S78h2nai+AE+e1stwIYKD64rkbDLbxZeFNQ9I3NRM92A== X-Forefront-Antispam-Report: CIP:216.228.112.36; CTRY:US; LANG:en; SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:mail.nvidia.com; PTR:schybrid05.nvidia.com; CAT:NONE; SFS:(4636009)(46966006)(36840700001)(83380400001)(7696005)(4326008)(70206006)(6286002)(5660300002)(55016002)(356005)(316002)(426003)(186003)(36860700001)(6666004)(8676002)(36756003)(6916009)(70586007)(8936002)(7636003)(107886003)(54906003)(47076005)(1076003)(336012)(26005)(508600001)(16526019)(86362001)(82310400003)(2906002)(2616005); DIR:OUT; SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Aug 2021 13:45:44.1955 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: aace5e15-6b91-4395-96dc-08d961854f97 X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a; Ip=[216.228.112.36]; Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: BN8NAM11FT068.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR12MB4385 Subject: [dpdk-dev] [RFC 18/21] vdpa/mlx5: use HCA attributes from context device X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Use HCA attributes from context device structure, instead of query it for itself. 
Signed-off-by: Michael Baum --- drivers/vdpa/mlx5/mlx5_vdpa.c | 28 ++++++++-------------------- 1 file changed, 8 insertions(+), 20 deletions(-) diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c index 2b1b521313..317d2e8ed4 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa.c @@ -506,36 +506,24 @@ static int mlx5_vdpa_dev_probe(struct mlx5_common_device *dev) { struct mlx5_vdpa_priv *priv = NULL; - struct mlx5_hca_attr attr; - int ret; + struct mlx5_hca_attr *attr = &dev->ctx.hca_attr; - ret = mlx5_devx_cmd_query_hca_attr(dev->ctx.ctx, &attr); - if (ret) { - DRV_LOG(ERR, "Unable to read HCA capabilities."); - rte_errno = ENOTSUP; - return -rte_errno; - } else if (!attr.vdpa.valid || !attr.vdpa.max_num_virtio_queues) { - DRV_LOG(ERR, "Not enough capabilities to support vdpa, maybe " - "old FW/OFED version?"); - rte_errno = ENOTSUP; - return -rte_errno; - } - if (!attr.vdpa.queue_counters_valid) + if (!attr->vdpa.queue_counters_valid) DRV_LOG(DEBUG, "No capability to support virtq statistics."); priv = rte_zmalloc("mlx5 vDPA device private", sizeof(*priv) + sizeof(struct mlx5_vdpa_virtq) * - attr.vdpa.max_num_virtio_queues * 2, + attr->vdpa.max_num_virtio_queues * 2, RTE_CACHE_LINE_SIZE); if (!priv) { DRV_LOG(ERR, "Failed to allocate private memory."); rte_errno = ENOMEM; return -rte_errno; } - priv->caps = attr.vdpa; - priv->log_max_rqt_size = attr.log_max_rqt_size; - priv->num_lag_ports = attr.num_lag_ports; - priv->qp_ts_format = attr.qp_ts_format; - if (attr.num_lag_ports == 0) + priv->caps = attr->vdpa; + priv->log_max_rqt_size = attr->log_max_rqt_size; + priv->num_lag_ports = attr->num_lag_ports; + priv->qp_ts_format = attr->qp_ts_format; + if (attr->num_lag_ports == 0) priv->num_lag_ports = 1; priv->dev_ctx = &dev->ctx; priv->var = mlx5_glue->dv_alloc_var(priv->dev_ctx->ctx, 0); From patchwork Tue Aug 17 13:44:39 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michael Baum X-Patchwork-Id: 97010 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 7F655A0548; Tue, 17 Aug 2021 15:47:56 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 8B80841294; Tue, 17 Aug 2021 15:45:51 +0200 (CEST) Received: from NAM11-DM6-obe.outbound.protection.outlook.com (mail-dm6nam11on2083.outbound.protection.outlook.com [40.107.223.83]) by mails.dpdk.org (Postfix) with ESMTP id 0922F4128E for ; Tue, 17 Aug 2021 15:45:49 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=N4G1i2EpfdyBDy16TBhlaHuA3gQO/p43Xy7LABbgeLvB6wICAxKE4LgjAqmV7YGUN5lkZq1aH1N8USjF0wHkQyZ6K7GTlLAbBRikDHFRU53MLMstCLa8E27MBtMilZCGSBAC/gJpVsz45id4pEI3CJKqEzRYpyKTOT+x8M6ELHjWZ5MWx8EVRJc4JRRvt+d62FxG9b5Ws/9byp2AIyn9JIBcFaGTAQ37OM0VFL7L28OxQ6iUywzXgw4P1JErAA0x7ya68mBDWRxIUx4kaDReRVHR19QuvvBUWr+YWHvFfc2SEVAbCXB5S7K4VHe7E1nJJ5zBZEagB+ztAWQxNH33ew== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=G6h8H4IyzxdGIrKOti1triigEOVqiYx6vtI0hLtTSaU=; 
b=Au+bqeJPQhByrsG+o9gwtckmZIQh8h5xQ0FMYVWQNwHfxHj3zN39vHGqdm+56ZnWGxGEifiicsYcGvtZaJYj5A6Ou4cE0IidIM5TCtby2iXOFY25dMjfBcoXgWGWcYR6e9xq5QRuOSkF6chAQOHD1Ne1BMwF+26MQ67KkwZoAMh09VC5o4tVCjx4icHlYajjKlvrW7jmptlTEmN8pmUFxbBoznQUbLVgKNVLEX4ctLJ1l3VkbROaruadvBPQmnqqPGR3ixOPjZLDuJ6gxlfH9X/w0BDso3/M+B2lJJsj3KO2sfyHDsMTtyrA/I3j/QyDL9hn18X0FmyKjESx/k8oUw== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 216.228.112.35) smtp.rcpttodomain=dpdk.org smtp.mailfrom=nvidia.com; dmarc=pass (p=quarantine sp=none pct=100) action=none header.from=nvidia.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=G6h8H4IyzxdGIrKOti1triigEOVqiYx6vtI0hLtTSaU=; b=RJeY/gJyr+yvf290vYsGaWwR/s1jKfniLQWJa0ae/eAzqEly1+YYyw3kX3B3FiwHafkyUns/Xj7mUH2IMfdQatYL9qbg7wdFm0bGqcDKfqkG361aZmwjgenxLjDyAva+nYlG/2eWF/30P8mbjucP+3zDmqJOPacZ8WOx2mrlijJILgqJbAmDT6uGl2QgBmO0PN9E5IeMKHcQ3e+tCawTLOJqvD0/iD7jA1Ib6MtJN2R/5wMFJvUqCkc3BQKhZGbfcTQ4H8ZrK38Ie5baHOhS/VvUFTCbdSFqJPOLGNqaZUYr1ug6rmXH/DbRRYVQPUNr0g1umU9eryvPY4aQCcXTlA== Received: from BN1PR12CA0021.namprd12.prod.outlook.com (2603:10b6:408:e1::26) by BY5PR12MB4804.namprd12.prod.outlook.com (2603:10b6:a03:1b6::21) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4415.13; Tue, 17 Aug 2021 13:45:47 +0000 Received: from BN8NAM11FT017.eop-nam11.prod.protection.outlook.com (2603:10b6:408:e1:cafe::7d) by BN1PR12CA0021.outlook.office365.com (2603:10b6:408:e1::26) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4436.19 via Frontend Transport; Tue, 17 Aug 2021 13:45:47 +0000 X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 216.228.112.35) smtp.mailfrom=nvidia.com; dpdk.org; dkim=none (message not signed) header.d=none;dpdk.org; dmarc=pass action=none header.from=nvidia.com; Received-SPF: Pass (protection.outlook.com: domain of nvidia.com designates 216.228.112.35 as permitted sender) receiver=protection.outlook.com; client-ip=216.228.112.35; helo=mail.nvidia.com; Received: from mail.nvidia.com (216.228.112.35) by BN8NAM11FT017.mail.protection.outlook.com (10.13.177.93) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.20.4415.14 via Frontend Transport; Tue, 17 Aug 2021 13:45:46 +0000 Received: from DRHQMAIL107.nvidia.com (10.27.9.16) by HQMAIL111.nvidia.com (172.20.187.18) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Tue, 17 Aug 2021 13:45:45 +0000 Received: from nvidia.com (172.20.187.6) by DRHQMAIL107.nvidia.com (10.27.9.16) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Tue, 17 Aug 2021 13:45:43 +0000 From: Michael Baum To: CC: Matan Azrad , Raslan Darawsheh , Viacheslav Ovsiienko Date: Tue, 17 Aug 2021 16:44:39 +0300 Message-ID: <20210817134441.1966618-20-michaelba@nvidia.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210817134441.1966618-1-michaelba@nvidia.com> References: <20210817134441.1966618-1-michaelba@nvidia.com> MIME-Version: 1.0 X-Originating-IP: [172.20.187.6] X-ClientProxiedBy: HQMAIL111.nvidia.com (172.20.187.18) To DRHQMAIL107.nvidia.com (10.27.9.16) X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-Office365-Filtering-Correlation-Id: 9e05c113-eebb-494a-d3a7-08d961855122 X-MS-TrafficTypeDiagnostic: BY5PR12MB4804: X-Microsoft-Antispam-PRVS: X-MS-Oob-TLC-OOBClassifiers: OLM:2887; X-MS-Exchange-SenderADCheck: 1 
X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0; X-Microsoft-Antispam-Message-Info: LRZgXaRoahXzb+nVhV68zAsSNrrIB5hnIdS8HN/i8teM1zXUthhQCqsk4lrLvpGz/7PDn0b6b3JDAYG5otTJ2isclKozqNThtEvej6/08FxaHGFhHtI3lycEafeIuhAGLVKTiRVdy+fb2q9w4vww7kURusxaoLO0gaHz0eYQ+yB+r9JZG+Y4uCilxqQoCZxU0tQvm3p6ZfV8WLMl0ADFlrnkFow3NXaHgCp8CaT/uh+Xblhze49whB/8D2Vl2IdFIjl4tR0pbSvoNOcBG3FPEgnViNS1jxusGNP1TOgikIrHtRlsKHHrJewZ+fozIpa3/t2rl8L5iG9ETAdAqTrD+6HJXcYWxrt8wBwPpKkIyBC/f887d7sohxCWyv0g30y/Dv2XfI9Hs73cDQBCyeUK3p637k8soI175FBi4QgTSWifJvljUCRCZ/EO53BnpnNly+PLkTiPgVZ48+S6iJr7m23wkGn6bqu91Oz/bai6JCyKgF+nkVtrRufWuhvqR4pEhFfHg0ooJWda1t0NmQxKXiF+/7uSfvlAoN0TlX3NRkCKll++Tng1IEU+VF+CGyRy26jjLpZzaDQMm/+epFzkjp1wYfnm2S56AzFp9PaPeen0KdFhS7/kEgkO0cTr8xIR4uJ5ziXVSufhtAXUOW1wp9AhZBQnP+CHvwLy3cBPBtZiMB8Tn+t83vSFT0l3cQUriE3ekeQCSn4k0AqOhwBfeA== X-Forefront-Antispam-Report: CIP:216.228.112.35; CTRY:US; LANG:en; SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:mail.nvidia.com; PTR:schybrid04.nvidia.com; CAT:NONE; SFS:(4636009)(376002)(346002)(396003)(136003)(39860400002)(36840700001)(46966006)(70586007)(107886003)(426003)(8936002)(86362001)(7696005)(36860700001)(70206006)(82740400003)(6916009)(336012)(2906002)(8676002)(5660300002)(55016002)(316002)(1076003)(7636003)(2616005)(36756003)(478600001)(6666004)(26005)(82310400003)(356005)(16526019)(83380400001)(6286002)(54906003)(186003)(47076005)(4326008); DIR:OUT; SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Aug 2021 13:45:46.7908 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: 9e05c113-eebb-494a-d3a7-08d961855122 X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a; Ip=[216.228.112.35]; Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: BN8NAM11FT017.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR12MB4804 Subject: [dpdk-dev] [RFC 19/21] compress/mlx5: use HCA attributes from context device X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Use HCA attributes from context device structure, instead of query it for itself. 
Signed-off-by: Michael Baum --- drivers/compress/mlx5/mlx5_compress.c | 13 ++----------- 1 file changed, 2 insertions(+), 11 deletions(-) diff --git a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c index 93b0cc8ea6..e5d900568d 100644 --- a/drivers/compress/mlx5/mlx5_compress.c +++ b/drivers/compress/mlx5/mlx5_compress.c @@ -753,7 +753,6 @@ mlx5_compress_dev_probe(struct mlx5_common_device *dev) struct rte_compressdev *cdev; struct mlx5_dev_ctx *dev_ctx = &dev->ctx; struct mlx5_compress_priv *priv; - struct mlx5_hca_attr att = { 0 }; struct rte_compressdev_pmd_init_params init_params = { .name = "", .socket_id = dev->dev->numa_node, @@ -765,14 +764,6 @@ mlx5_compress_dev_probe(struct mlx5_common_device *dev) rte_errno = ENOTSUP; return -rte_errno; } - if (mlx5_devx_cmd_query_hca_attr(dev_ctx->ctx, &att) != 0 || - att.mmo_compress_en == 0 || att.mmo_decompress_en == 0 || - att.mmo_dma_en == 0) { - DRV_LOG(ERR, "Not enough capabilities to support compress " - "operations, maybe old FW/OFED version?"); - rte_errno = ENOTSUP; - return -ENOTSUP; - } cdev = rte_compressdev_pmd_create(ibdev_name, dev->dev, sizeof(*priv), &init_params); if (cdev == NULL) { @@ -788,8 +779,8 @@ mlx5_compress_dev_probe(struct mlx5_common_device *dev) priv = cdev->data->dev_private; priv->dev_ctx = dev_ctx; priv->cdev = cdev; - priv->min_block_size = att.compress_min_block_size; - priv->sq_ts_format = att.sq_ts_format; + priv->min_block_size = dev_ctx->hca_attr.compress_min_block_size; + priv->sq_ts_format = dev_ctx->hca_attr.sq_ts_format; if (mlx5_compress_hw_global_prepare(priv) != 0) { rte_compressdev_pmd_destroy(priv->cdev); return -1; From patchwork Tue Aug 17 13:44:40 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michael Baum X-Patchwork-Id: 97012 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id B3A5DA0548; Tue, 17 Aug 2021 15:48:07 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id D3E614129F; Tue, 17 Aug 2021 15:45:53 +0200 (CEST) Received: from NAM11-BN8-obe.outbound.protection.outlook.com (mail-bn8nam11on2087.outbound.protection.outlook.com [40.107.236.87]) by mails.dpdk.org (Postfix) with ESMTP id CC12041294 for ; Tue, 17 Aug 2021 15:45:50 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=k8/UhzekY5dSNwd4YgZRBWzg1wh/Q7n95xkysCHvbGGC/ivwClRcGRrOPj+rdKWCziWD+00OmeWj/4pAAFIipETAUVeLzEUoS2LshINgqKgeZ3G9yD1dikopb59qa3+ICjVa6CjSmW2w67FHzZr7HS1G9vPshJB1XpEft1MjvXVCcpHkjj/u4qXrZmnFTpaQb66NDlQ77e4WpXIgnmRbhhhAT6AhBpAO4GmXGmT9PysGai3sKISmW0th16qKCdDn+1ocOfe4a+LG9eIRhiqp/pwe7FvXeabGVpUMrqjhdeSNXYDH8y0dntGCUOcoK8oYXhDi79x7MQWq+3QySEfSTQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=+mFdk9nXG1pXqvcXcigwBl2Am+JVuyhkcWJDZFCmEHo=; b=PFxmOLf1PqyOAyg6SJrc1R0Y2hvMfvi2sepxy3Rbgj4ghGrWIKKchJKTyhT2tOyJYizqRhh/+rkc/eYbpMGv4Jcop9q4l3KVosj9xGqAGr49AIQGk4ZB1PosjzF+cr9kJKVnIO0352s5NRQ078xNk4eoNw7/E7mUzEu3E82Xm8o1ncJvXWMHHkrppozclfjf/BD8P7///rEwnQiZSvJUbrW/1QeHu90ClgIyeoc2ctOEfvF9+aDTolcHeKVoEvtK8JLEJQXA4KcMKYd/AxL6/EMvbXwjtxKP+lPKZxv7UiFxfh9qE98wCqtmSdpOR03x2ldSWnj6n06gviqt9KZ29A== ARC-Authentication-Results: 
i=1; mx.microsoft.com 1; spf=pass (sender ip is 216.228.112.32) smtp.rcpttodomain=dpdk.org smtp.mailfrom=nvidia.com; dmarc=pass (p=quarantine sp=none pct=100) action=none header.from=nvidia.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=+mFdk9nXG1pXqvcXcigwBl2Am+JVuyhkcWJDZFCmEHo=; b=jen3XKl5o4wsiDdE8Y49p6dJwrbCU7waRepQruKN4c6Drx4Lv/PHx95GFbZb95ve6zsErY/7OdtPdclMrIDqkiYXYAzqGoSMVTjAeRALhjhbWqlWDim55E+vuIybL5ToFKlQeXgRPz0h4tuc9tcUfnCp61w+mqXGepJ9zp5F91A1GgQ/1btqI7XsSgAzAn1AG/+9ejXmEf+RI2dFS3oapDpVzhSEvuATio2dziUt9RbmDEoD9Wx+a+Ha98V0+e6kddOoiqyKCJzQgXCx1Fy01ba/tGIKzcy2U4xooOE57UXpKfS16/GE+EZlGkVo6F9aGCTf9pvfA9TkTeOdwbsPag== Received: from MW4PR04CA0382.namprd04.prod.outlook.com (2603:10b6:303:81::27) by DM6PR12MB2906.namprd12.prod.outlook.com (2603:10b6:5:15f::20) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4415.16; Tue, 17 Aug 2021 13:45:49 +0000 Received: from CO1NAM11FT042.eop-nam11.prod.protection.outlook.com (2603:10b6:303:81:cafe::59) by MW4PR04CA0382.outlook.office365.com (2603:10b6:303:81::27) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4436.19 via Frontend Transport; Tue, 17 Aug 2021 13:45:49 +0000 X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 216.228.112.32) smtp.mailfrom=nvidia.com; dpdk.org; dkim=none (message not signed) header.d=none;dpdk.org; dmarc=pass action=none header.from=nvidia.com; Received-SPF: Pass (protection.outlook.com: domain of nvidia.com designates 216.228.112.32 as permitted sender) receiver=protection.outlook.com; client-ip=216.228.112.32; helo=mail.nvidia.com; Received: from mail.nvidia.com (216.228.112.32) by CO1NAM11FT042.mail.protection.outlook.com (10.13.174.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.20.4415.14 via Frontend Transport; Tue, 17 Aug 2021 13:45:48 +0000 Received: from DRHQMAIL107.nvidia.com (10.27.9.16) by HQMAIL109.nvidia.com (172.20.187.15) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Tue, 17 Aug 2021 06:45:47 -0700 Received: from nvidia.com (172.20.187.6) by DRHQMAIL107.nvidia.com (10.27.9.16) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Tue, 17 Aug 2021 13:45:45 +0000 From: Michael Baum To: CC: Matan Azrad , Raslan Darawsheh , Viacheslav Ovsiienko Date: Tue, 17 Aug 2021 16:44:40 +0300 Message-ID: <20210817134441.1966618-21-michaelba@nvidia.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210817134441.1966618-1-michaelba@nvidia.com> References: <20210817134441.1966618-1-michaelba@nvidia.com> MIME-Version: 1.0 X-Originating-IP: [172.20.187.6] X-ClientProxiedBy: HQMAIL111.nvidia.com (172.20.187.18) To DRHQMAIL107.nvidia.com (10.27.9.16) X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-Office365-Filtering-Correlation-Id: 1de3326a-393b-4909-5614-08d961855255 X-MS-TrafficTypeDiagnostic: DM6PR12MB2906: X-Microsoft-Antispam-PRVS: X-MS-Oob-TLC-OOBClassifiers: OLM:4303; X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0; X-Microsoft-Antispam-Message-Info: 
gC/zYuCwgFDaDu6MdRDYx/6T3Fbtu3BnOSdto1mo/diJq/GBzF/sXwAmSrmXZXSvdLUXjoDyA2LDEAR2VvNNzZco7Kx+7HESzIU003BxAwWmIe2M7H4DxhMd4bhcQVsJ+VU+AQRL436ltPa1Tpgasw8jY6i5qUoxPw75s4pjXTPfBfqg005wBwxzS8gTSForM7vxv2XvXZLthV5R9rmaN8JsxzuqM6c/a4L/Wqm73lsEj80ff1PE4azZO76IS9CSLvoVjsfNNaxPi42Wcu3tbAoss9/IP4vbI+ObsQlI4gpRfAY+NZTgH3N5G7TDMIYK9XvSviAys2naYqdGebujw8WDUgf8EaHs8CVEXINsyDyN/r4fFXafKrmwvmkMIN9PguVeSGHDYgvKikLHMkAOlLgreEIeAsZuRyACCzHER3QY2cguI+tMp6iCSfLgB3Yl4a6+wHyhExU2Bg6VJiLTLvwYocqo+SyR1qzdMUh77UvxaAy5yX+d8GyzKb87ECeO85L6qjDzgxlWi5Q4B5KmaWAkyiCpXzXneniWf+GO6q0QbnueN2ECvNtkEuP1OyvFEpoB0Qr1EUtsLGx2St0/7w2DSlR3kGP7dJK8cJgxpIxjXcWE7EwY+9fceRqyGZ2/zjCEDvuNgoe8Qum5ZfAVVpk4LpC1ZSsmyUzLwxFGUpj3L86qeNgENEtl8RCmwZ1wqfCp+g1Enau2G9Ri13eb1A== X-Forefront-Antispam-Report: CIP:216.228.112.32; CTRY:US; LANG:en; SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:mail.nvidia.com; PTR:schybrid01.nvidia.com; CAT:NONE; SFS:(4636009)(36840700001)(46966006)(8676002)(6916009)(316002)(2616005)(8936002)(7696005)(508600001)(26005)(1076003)(54906003)(5660300002)(36756003)(426003)(336012)(36860700001)(83380400001)(107886003)(70586007)(6666004)(47076005)(55016002)(82310400003)(7636003)(2906002)(186003)(4326008)(356005)(6286002)(86362001)(70206006)(16526019); DIR:OUT; SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Aug 2021 13:45:48.7935 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: 1de3326a-393b-4909-5614-08d961855255 X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a; Ip=[216.228.112.32]; Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: CO1NAM11FT042.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR12MB2906 Subject: [dpdk-dev] [RFC 20/21] crypto/mlx5: use HCA attributes from context device X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Use HCA attributes from context device structure, instead of query it for itself. 
Signed-off-by: Michael Baum --- drivers/crypto/mlx5/mlx5_crypto.c | 8 -------- 1 file changed, 8 deletions(-) diff --git a/drivers/crypto/mlx5/mlx5_crypto.c b/drivers/crypto/mlx5/mlx5_crypto.c index 4f390c8bf4..734ca63d89 100644 --- a/drivers/crypto/mlx5/mlx5_crypto.c +++ b/drivers/crypto/mlx5/mlx5_crypto.c @@ -949,7 +949,6 @@ mlx5_crypto_dev_probe(struct mlx5_common_device *dev) struct mlx5_devx_obj *login; struct mlx5_crypto_priv *priv; struct mlx5_crypto_devarg_params devarg_prms = { 0 }; - struct mlx5_hca_attr attr = { 0 }; struct rte_cryptodev_pmd_init_params init_params = { .name = "", .private_data_size = sizeof(struct mlx5_crypto_priv), @@ -966,13 +965,6 @@ mlx5_crypto_dev_probe(struct mlx5_common_device *dev) rte_errno = ENOTSUP; return -rte_errno; } - if (mlx5_devx_cmd_query_hca_attr(dev_ctx->ctx, &attr) != 0 || - attr.crypto == 0 || attr.aes_xts == 0) { - DRV_LOG(ERR, "Not enough capabilities to support crypto " - "operations, maybe old FW/OFED version?"); - rte_errno = ENOTSUP; - return -ENOTSUP; - } ret = mlx5_crypto_parse_devargs(dev->dev->devargs, &devarg_prms); if (ret) { DRV_LOG(ERR, "Failed to parse devargs."); From patchwork Tue Aug 17 13:44:41 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michael Baum X-Patchwork-Id: 97011 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 12F9CA0548; Tue, 17 Aug 2021 15:48:02 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id BFD3E411D7; Tue, 17 Aug 2021 15:45:52 +0200 (CEST) Received: from NAM11-DM6-obe.outbound.protection.outlook.com (mail-dm6nam11on2083.outbound.protection.outlook.com [40.107.223.83]) by mails.dpdk.org (Postfix) with ESMTP id C420541224 for ; Tue, 17 Aug 2021 15:45:50 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=cJrrYXodGoPUCvNSpwzxE5Jdj4Cgd1l5BjPkcuOdeykvMUe5YxlxqoZRWIZF9+r1LKodiACGFlY1P+VJ+Ik+S174VlfVPIGUXJ+woJrz+VeiY4kPKQD7vPahaCuESePfk/f1RqbT1hfqca5R6VGu/HnShtZ7B4Fc6Z4l7uI7ZQpZVZo5IRD5zldUDPCCl3UdqaGxWC+yD1IeHCUxxT1wQhUg8ti5El6nWprW1zGXA9Hdd8da8kh7JN+HpFj4wRj7gFwJePYKBUQ3kz2cAIV5Ntqph8z3ierxgyU5dwhenz5rJWN5PSzImSfFQ3hnAYb8vdpggSQA6YZw7E9NUjnmkQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=dtCztlVn/gguwkkEGpnaCgLN0D5ZJfrw/3NavWUY1DY=; b=FRCDnuW/MuJ3TB/fHQP6ERMIrpjE1A6l7F1GEiRLINgT0a3CG/e8CtGoJhI1n568GnO0ZBJlhnrtoPBwuQTttMssuID3K4GiYh8reoTKQPn3gXfEFw43XrKm+RkL/E0ih/OIcGhxOZrxbDjCQdFZlo7o+pwOLV5+WHJOTkM2ox3Eolu/EcT/Jilud3OUED+z7SD/bJfQ9uz9PCAF3/U3fOWut3GTagVML++HOrqDarN+yaDXp052d4fp/VDRq8rk6PNB5YYWvljiA6GZs3TBZz068p/JPLqmypu/NgvBKsM7ijIv0jRBioiebQkuRtxYJaRgFuWsQGpw3usRAdZ2gQ== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 216.228.112.34) smtp.rcpttodomain=dpdk.org smtp.mailfrom=nvidia.com; dmarc=pass (p=quarantine sp=none pct=100) action=none header.from=nvidia.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=dtCztlVn/gguwkkEGpnaCgLN0D5ZJfrw/3NavWUY1DY=; 
b=Wf2zEqtTUaRy2fUH7gBnL83Nyk977Zur1X8RreqDXVvDkaKzYFlT8YzVQ4iCMXci892MMajhXexQkLZSj9XLzwNQAR0rGOsmvloyru0EBzlaRxbZNDRYEbxV7Nj7SDG+32f4TDoNWmpLmt/zd7Sziuqvnp3cEMhGDC5syVGqTHCbXs67MXE2BqS8Hwl0en0Goo8bbofFC4ZErmh3QmOR2LPwQSDiLwYlvLUjqwNWumt9PK8ShGG3tm33ObjnsJrwQuNHnuZPPpk9pRo+7VLrK1h3qSvcByc2WLUgFr921em3YEhctZOLhR31JCH1+/+5wYbV6Op024AbJcyq5h11qA== Received: from MWHPR19CA0014.namprd19.prod.outlook.com (2603:10b6:300:d4::24) by DM6PR12MB4960.namprd12.prod.outlook.com (2603:10b6:5:1bc::11) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4415.16; Tue, 17 Aug 2021 13:45:49 +0000 Received: from CO1NAM11FT033.eop-nam11.prod.protection.outlook.com (2603:10b6:300:d4:cafe::1a) by MWHPR19CA0014.outlook.office365.com (2603:10b6:300:d4::24) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4436.19 via Frontend Transport; Tue, 17 Aug 2021 13:45:49 +0000 X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 216.228.112.34) smtp.mailfrom=nvidia.com; dpdk.org; dkim=none (message not signed) header.d=none;dpdk.org; dmarc=pass action=none header.from=nvidia.com; Received-SPF: Pass (protection.outlook.com: domain of nvidia.com designates 216.228.112.34 as permitted sender) receiver=protection.outlook.com; client-ip=216.228.112.34; helo=mail.nvidia.com; Received: from mail.nvidia.com (216.228.112.34) by CO1NAM11FT033.mail.protection.outlook.com (10.13.174.247) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.20.4415.14 via Frontend Transport; Tue, 17 Aug 2021 13:45:49 +0000 Received: from DRHQMAIL107.nvidia.com (10.27.9.16) by HQMAIL107.nvidia.com (172.20.187.13) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Tue, 17 Aug 2021 13:45:48 +0000 Received: from nvidia.com (172.20.187.6) by DRHQMAIL107.nvidia.com (10.27.9.16) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Tue, 17 Aug 2021 13:45:47 +0000 From: Michael Baum To: CC: Matan Azrad , Raslan Darawsheh , Viacheslav Ovsiienko Date: Tue, 17 Aug 2021 16:44:41 +0300 Message-ID: <20210817134441.1966618-22-michaelba@nvidia.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210817134441.1966618-1-michaelba@nvidia.com> References: <20210817134441.1966618-1-michaelba@nvidia.com> MIME-Version: 1.0 X-Originating-IP: [172.20.187.6] X-ClientProxiedBy: HQMAIL111.nvidia.com (172.20.187.18) To DRHQMAIL107.nvidia.com (10.27.9.16) X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-Office365-Filtering-Correlation-Id: f8a8514e-7684-4c08-e877-08d961855282 X-MS-TrafficTypeDiagnostic: DM6PR12MB4960: X-Microsoft-Antispam-PRVS: X-MS-Oob-TLC-OOBClassifiers: OLM:3044; X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0; X-Microsoft-Antispam-Message-Info: cJId+o9nK5kDpXTKl0zrH1qKxORWQkyOxOs/wkK55WrIne+QK/D1uWxTfHgoc6WrteDn26n6v5sL0CLWbmgHRn76ZpUzGUeVJ7PNr7ZHCLVfIYvOytbBg4wlK5JyNPyG+N/epEzdm1Ec65/VY9vQY0HMiDWQOyY5wUxhU58HSeI6S/SuzBgrelp9bM+XlHY0ppwIa8F7RI0zGBFy+DeSOUewU8mtKBTZbvDh0dihlyjRuMiLxCiUJophP/Ma/ANFZ/vRqnJxnodzStkSZzDHyjVO9LsIP60J0Zw1Y+LwZSpMx3XkLlrwDGNeUFXSK26PLE8rAlew1fWd38Zb8+O6hpMsXkO/AUid5gfAcESXweFF/MwIHE+UQuHn0FTtZ3XEi5s80ni9ov/Nkwml/hqWG9uyXa0myBSbPQeoh8xdgN5yYQyXEZt8xLxS/FRsO8SkoyyivB5uzVDWE8aRrrBeMesUIV0EIJqG+bunJzcq0Qfa7h6NA6AemtwXJNQDo4KnTAAt2p4N9Il6tFyd1ecsYULtJSoTZ0gij5LmUOTld2VVSttHzFcziby6IdWjxPiUmCurF8qe1T4mYl6ytHUCykTaHF6tz2+/DV1I/F2olDiTnOQ1uaXx56CFOYPQOzD440SCNK6A9j5FRiPnzNVaw6LCaRH6Ld46H3cHVTsegovD0E+lOjnF1zyRkGHNi+hcpPaA3wfzQIuSWRH0qXZ18A== 
X-Forefront-Antispam-Report: CIP:216.228.112.34; CTRY:US; LANG:en; SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:mail.nvidia.com; PTR:schybrid03.nvidia.com; CAT:NONE; SFS:(4636009)(346002)(39860400002)(136003)(376002)(396003)(36840700001)(46966006)(7636003)(6916009)(7696005)(6286002)(5660300002)(2906002)(2616005)(83380400001)(47076005)(478600001)(54906003)(1076003)(356005)(107886003)(4326008)(316002)(8936002)(336012)(55016002)(82740400003)(16526019)(70586007)(426003)(26005)(70206006)(86362001)(36860700001)(8676002)(36756003)(82310400003)(186003)(6666004); DIR:OUT; SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Aug 2021 13:45:49.1351 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: f8a8514e-7684-4c08-e877-08d961855282 X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a; Ip=[216.228.112.34]; Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: CO1NAM11FT033.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR12MB4960 Subject: [dpdk-dev] [RFC 21/21] net/mlx5: use HCA attributes from context device X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Use HCA attributes from context device structure, instead of query it for itself. Signed-off-by: Michael Baum --- drivers/net/mlx5/linux/mlx5_os.c | 7 +------ drivers/net/mlx5/windows/mlx5_os.c | 7 +------ 2 files changed, 2 insertions(+), 12 deletions(-) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index c8134f064f..a8a1cbc729 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -1380,12 +1380,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, config->mps == MLX5_MPW ? "legacy " : "", config->mps != MLX5_MPW_DISABLED ? "enabled" : "disabled"); if (config->devx) { - err = mlx5_devx_cmd_query_hca_attr(sh->dev_ctx->ctx, - &config->hca_attr); - if (err) { - err = -err; - goto error; - } + config->hca_attr = dev_ctx->hca_attr; /* Check relax ordering support. */ if (!haswell_broadwell_cpu) { sh->cmng.relaxed_ordering_write = diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c index d269cf2f74..49b9c258fa 100644 --- a/drivers/net/mlx5/windows/mlx5_os.c +++ b/drivers/net/mlx5/windows/mlx5_os.c @@ -443,12 +443,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, config->cqe_comp = 0; } if (config->devx) { - err = mlx5_devx_cmd_query_hca_attr(sh->dev_ctx->ctx, - &config->hca_attr); - if (err) { - err = -err; - goto error; - } + config->hca_attr = dev_ctx->hca_attr; /* Check relax ordering support. */ sh->cmng.relaxed_ordering_read = 0; sh->cmng.relaxed_ordering_write = 0;