From patchwork Tue Oct 19 20:56:01 2021
X-Patchwork-Submitter: Michael Baum
X-Patchwork-Id: 102321
X-Patchwork-Delegate: thomas@monjalon.net
From:
To:
CC: Matan Azrad, Thomas Monjalon, Michael Baum
Date: Tue, 19 Oct 2021 23:56:01 +0300
Message-ID: <20211019205602.3188203-18-michaelba@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20211019205602.3188203-1-michaelba@nvidia.com>
References: <20211006220350.2357487-1-michaelba@nvidia.com>
 <20211019205602.3188203-1-michaelba@nvidia.com>
Subject: [dpdk-dev] [PATCH v3 17/18] common/mlx5: support device DMA map and unmap

From: Michael Baum

Since MR management has moved to the common area, there is no longer a
need for a separate DMA map and unmap function in each driver. This
patch moves those functions to the common code and shares them among
all drivers. For most drivers, this adds support for these operations
for the first time.

Signed-off-by: Michael Baum
Acked-by: Matan Azrad
---
 drivers/common/mlx5/mlx5_common.c    | 144 +++++++++++++++++----------
 drivers/common/mlx5/mlx5_common.h    |  41 --------
 drivers/common/mlx5/mlx5_common_mr.c |   2 +-
 drivers/common/mlx5/mlx5_common_mr.h |  25 ++---
 drivers/common/mlx5/version.map      |   9 --
 drivers/net/mlx5/mlx5.c              |   2 -
 drivers/net/mlx5/mlx5_mr.c           | 132 ------------------------
 7 files changed, 100 insertions(+), 255 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_common.c b/drivers/common/mlx5/mlx5_common.c
index d6acf87493..0ed1477eb8 100644
--- a/drivers/common/mlx5/mlx5_common.c
+++ b/drivers/common/mlx5/mlx5_common.c
@@ -258,12 +258,6 @@ is_valid_class_combination(uint32_t user_classes)
 	return 0;
 }
 
-static bool
-device_class_enabled(const struct mlx5_common_device *device, uint32_t class)
-{
-	return (device->classes_loaded & class) > 0;
-}
-
 static bool
 mlx5_bus_match(const struct mlx5_class_driver *drv,
 	       const struct rte_device *dev)
@@ -597,62 +591,106 @@ mlx5_common_dev_remove(struct rte_device *eal_dev)
 	return ret;
 }
 
+/**
+ * Callback to DMA map external memory to a device.
+ *
+ * @param rte_dev
+ *   Pointer to the generic device.
+ * @param addr
+ *   Starting virtual address of memory to be mapped.
+ * @param iova
+ *   Starting IOVA address of memory to be mapped.
+ * @param len
+ *   Length of memory segment being mapped.
+ *
+ * @return
+ *   0 on success, negative value on error.
+ */
 int
-mlx5_common_dev_dma_map(struct rte_device *dev, void *addr, uint64_t iova,
-			size_t len)
+mlx5_common_dev_dma_map(struct rte_device *rte_dev, void *addr,
+			uint64_t iova __rte_unused, size_t len)
 {
-	struct mlx5_class_driver *driver = NULL;
-	struct mlx5_class_driver *temp;
-	struct mlx5_common_device *mdev;
-	int ret = -EINVAL;
-
-	mdev = to_mlx5_device(dev);
-	if (!mdev)
-		return -ENODEV;
-	TAILQ_FOREACH(driver, &drivers_list, next) {
-		if (!device_class_enabled(mdev, driver->drv_class) ||
-		    driver->dma_map == NULL)
-			continue;
-		ret = driver->dma_map(dev, addr, iova, len);
-		if (ret)
-			goto map_err;
+	struct mlx5_common_device *dev;
+	struct mlx5_mr *mr;
+
+	dev = to_mlx5_device(rte_dev);
+	if (!dev) {
+		DRV_LOG(WARNING,
+			"Unable to find matching mlx5 device to device %s",
+			rte_dev->name);
+		rte_errno = ENODEV;
+		return -1;
 	}
-	return ret;
-map_err:
-	TAILQ_FOREACH(temp, &drivers_list, next) {
-		if (temp == driver)
-			break;
-		if (device_class_enabled(mdev, temp->drv_class) &&
-		    temp->dma_map && temp->dma_unmap)
-			temp->dma_unmap(dev, addr, iova, len);
+	mr = mlx5_create_mr_ext(dev->pd, (uintptr_t)addr, len,
+				SOCKET_ID_ANY, dev->mr_scache.reg_mr_cb);
+	if (!mr) {
+		DRV_LOG(WARNING, "Device %s unable to DMA map", rte_dev->name);
+		rte_errno = EINVAL;
+		return -1;
 	}
-	return ret;
+	rte_rwlock_write_lock(&dev->mr_scache.rwlock);
+	LIST_INSERT_HEAD(&dev->mr_scache.mr_list, mr, mr);
+	/* Insert to the global cache table. */
+	mlx5_mr_insert_cache(&dev->mr_scache, mr);
+	rte_rwlock_write_unlock(&dev->mr_scache.rwlock);
+	return 0;
 }
 
+/**
+ * Callback to DMA unmap external memory to a device.
+ *
+ * @param rte_dev
+ *   Pointer to the generic device.
+ * @param addr
+ *   Starting virtual address of memory to be unmapped.
+ * @param iova
+ *   Starting IOVA address of memory to be unmapped.
+ * @param len
+ *   Length of memory segment being unmapped.
+ *
+ * @return
+ *   0 on success, negative value on error.
+ */
 int
-mlx5_common_dev_dma_unmap(struct rte_device *dev, void *addr, uint64_t iova,
-			  size_t len)
+mlx5_common_dev_dma_unmap(struct rte_device *rte_dev, void *addr,
+			  uint64_t iova __rte_unused, size_t len __rte_unused)
 {
-	struct mlx5_class_driver *driver;
-	struct mlx5_common_device *mdev;
-	int local_ret = -EINVAL;
-	int ret = 0;
-
-	mdev = to_mlx5_device(dev);
-	if (!mdev)
-		return -ENODEV;
-	/* There is no unmap error recovery in current implementation. */
-	TAILQ_FOREACH_REVERSE(driver, &drivers_list, mlx5_drivers, next) {
-		if (!device_class_enabled(mdev, driver->drv_class) ||
-		    driver->dma_unmap == NULL)
-			continue;
-		local_ret = driver->dma_unmap(dev, addr, iova, len);
-		if (local_ret && (ret == 0))
-			ret = local_ret;
+	struct mlx5_common_device *dev;
+	struct mr_cache_entry entry;
+	struct mlx5_mr *mr;
+
+	dev = to_mlx5_device(rte_dev);
+	if (!dev) {
+		DRV_LOG(WARNING,
+			"Unable to find matching mlx5 device to device %s.",
+			rte_dev->name);
+		rte_errno = ENODEV;
+		return -1;
 	}
-	if (local_ret)
-		ret = local_ret;
-	return ret;
+	rte_rwlock_read_lock(&dev->mr_scache.rwlock);
+	mr = mlx5_mr_lookup_list(&dev->mr_scache, &entry, (uintptr_t)addr);
+	if (!mr) {
+		rte_rwlock_read_unlock(&dev->mr_scache.rwlock);
+		DRV_LOG(WARNING,
+			"Address 0x%" PRIxPTR " wasn't registered to device %s",
+			(uintptr_t)addr, rte_dev->name);
+		rte_errno = EINVAL;
+		return -1;
+	}
+	LIST_REMOVE(mr, mr);
+	DRV_LOG(DEBUG, "MR(%p) is removed from list.", (void *)mr);
+	mlx5_mr_free(mr, dev->mr_scache.dereg_mr_cb);
+	mlx5_mr_rebuild_cache(&dev->mr_scache);
+	/*
+	 * No explicit wmb is needed after updating dev_gen due to
+	 * store-release ordering in unlock that provides the
+	 * implicit barrier at the software visible level.
+	 */
+	++dev->mr_scache.dev_gen;
+	DRV_LOG(DEBUG, "Broadcasting local cache flush, gen=%d.",
+		dev->mr_scache.dev_gen);
+	rte_rwlock_read_unlock(&dev->mr_scache.rwlock);
+	return 0;
 }
 
 void
diff --git a/drivers/common/mlx5/mlx5_common.h b/drivers/common/mlx5/mlx5_common.h
index 1a6b8c0f52..72ff0ff809 100644
--- a/drivers/common/mlx5/mlx5_common.h
+++ b/drivers/common/mlx5/mlx5_common.h
@@ -364,44 +364,6 @@ typedef int (mlx5_class_driver_probe_t)(struct mlx5_common_device *dev);
  */
 typedef int (mlx5_class_driver_remove_t)(struct mlx5_common_device *dev);
 
-/**
- * Driver-specific DMA mapping. After a successful call the device
- * will be able to read/write from/to this segment.
- *
- * @param dev
- *   Pointer to the device.
- * @param addr
- *   Starting virtual address of memory to be mapped.
- * @param iova
- *   Starting IOVA address of memory to be mapped.
- * @param len
- *   Length of memory segment being mapped.
- * @return
- *   - 0 On success.
- *   - Negative value and rte_errno is set otherwise.
- */
-typedef int (mlx5_class_driver_dma_map_t)(struct rte_device *dev, void *addr,
-					   uint64_t iova, size_t len);
-
-/**
- * Driver-specific DMA un-mapping. After a successful call the device
- * will not be able to read/write from/to this segment.
- *
- * @param dev
- *   Pointer to the device.
- * @param addr
- *   Starting virtual address of memory to be unmapped.
- * @param iova
- *   Starting IOVA address of memory to be unmapped.
- * @param len
- *   Length of memory segment being unmapped.
- * @return
- *   - 0 On success.
- *   - Negative value and rte_errno is set otherwise.
- */
-typedef int (mlx5_class_driver_dma_unmap_t)(struct rte_device *dev, void *addr,
-					     uint64_t iova, size_t len);
-
 /** Device already probed can be probed again to check for new ports. */
 #define MLX5_DRV_PROBE_AGAIN 0x0004
@@ -414,9 +376,6 @@ struct mlx5_class_driver {
 	const char *name; /**< Driver name. */
 	mlx5_class_driver_probe_t *probe; /**< Device probe function. */
 	mlx5_class_driver_remove_t *remove; /**< Device remove function. */
-	mlx5_class_driver_dma_map_t *dma_map; /**< Device DMA map function. */
-	mlx5_class_driver_dma_unmap_t *dma_unmap;
-	/**< Device DMA unmap function. */
 	const struct rte_pci_id *id_table; /**< ID table, NULL terminated. */
 	uint32_t probe_again:1;
 	/**< Device already probed can be probed again to check new device. */
diff --git a/drivers/common/mlx5/mlx5_common_mr.c b/drivers/common/mlx5/mlx5_common_mr.c
index d63e973b60..5bfddac08e 100644
--- a/drivers/common/mlx5/mlx5_common_mr.c
+++ b/drivers/common/mlx5/mlx5_common_mr.c
@@ -455,7 +455,7 @@ mlx5_mr_lookup_list(struct mlx5_mr_share_cache *share_cache,
  * @return
  *   Searched LKey on success, UINT32_MAX on failure and rte_errno is set.
  */
-uint32_t
+static uint32_t
 mlx5_mr_lookup_cache(struct mlx5_mr_share_cache *share_cache,
 		     struct mr_cache_entry *entry, uintptr_t addr)
 {
diff --git a/drivers/common/mlx5/mlx5_common_mr.h b/drivers/common/mlx5/mlx5_common_mr.h
index 0bc3519fd9..8a7af05ca5 100644
--- a/drivers/common/mlx5/mlx5_common_mr.h
+++ b/drivers/common/mlx5/mlx5_common_mr.h
@@ -124,12 +124,13 @@ mlx5_mr_lookup_lkey(struct mr_cache_entry *lkp_tbl, uint16_t *cached_idx,
 	return UINT32_MAX;
 }
 
+/* mlx5_common_mr.c */
+
 __rte_internal
 int mlx5_mr_ctrl_init(struct mlx5_mr_ctrl *mr_ctrl, uint32_t *dev_gen_ptr,
 		      int socket);
 __rte_internal
 void mlx5_mr_btree_free(struct mlx5_mr_btree *bt);
-__rte_internal
 void mlx5_mr_btree_dump(struct mlx5_mr_btree *bt __rte_unused);
 __rte_internal
 uint32_t mlx5_mr_addr2mr_bh(void *pd, struct mlx5_mp_id *mp_id,
@@ -142,36 +143,30 @@ uint32_t mlx5_mr_mempool2mr_bh(struct mlx5_mr_share_cache *share_cache,
 			       struct rte_mempool *mp, uintptr_t addr);
 void mlx5_mr_release_cache(struct mlx5_mr_share_cache *mr_cache);
 int mlx5_mr_create_cache(struct mlx5_mr_share_cache *share_cache, int socket);
-__rte_internal
 void mlx5_mr_dump_cache(struct mlx5_mr_share_cache *share_cache __rte_unused);
-__rte_internal
 void mlx5_mr_rebuild_cache(struct mlx5_mr_share_cache *share_cache);
 __rte_internal
 void mlx5_mr_flush_local_cache(struct mlx5_mr_ctrl *mr_ctrl);
 void mlx5_free_mr_by_addr(struct mlx5_mr_share_cache *share_cache,
 			  const char *ibdev_name, const void *addr, size_t len);
-__rte_internal
-int
-mlx5_mr_insert_cache(struct mlx5_mr_share_cache *share_cache,
-		     struct mlx5_mr *mr);
-__rte_internal
-uint32_t
-mlx5_mr_lookup_cache(struct mlx5_mr_share_cache *share_cache,
-		     struct mr_cache_entry *entry, uintptr_t addr);
-__rte_internal
+int mlx5_mr_insert_cache(struct mlx5_mr_share_cache *share_cache,
+			 struct mlx5_mr *mr);
 struct mlx5_mr *
 mlx5_mr_lookup_list(struct mlx5_mr_share_cache *share_cache,
 		    struct mr_cache_entry *entry, uintptr_t addr);
-__rte_internal
 struct mlx5_mr *
 mlx5_create_mr_ext(void *pd, uintptr_t addr, size_t len, int socket_id,
 		   mlx5_reg_mr_t reg_mr_cb);
+void mlx5_mr_free(struct mlx5_mr *mr, mlx5_dereg_mr_t dereg_mr_cb);
 __rte_internal
 uint32_t
 mlx5_mr_create_primary(void *pd, struct mlx5_mr_share_cache *share_cache,
 		       struct mr_cache_entry *entry, uintptr_t addr,
 		       unsigned int mr_ext_memseg_en);
+
+/* mlx5_common_verbs.c */
+
 __rte_internal
 int
 mlx5_common_verbs_reg_mr(void *pd, void *addr, size_t length,
@@ -183,10 +178,6 @@ mlx5_common_verbs_dereg_mr(struct mlx5_pmd_mr *pmd_mr);
 void
 mlx5_os_set_reg_mr_cb(mlx5_reg_mr_t *reg_mr_cb, mlx5_dereg_mr_t *dereg_mr_cb);
 
-__rte_internal
-void
-mlx5_mr_free(struct mlx5_mr *mr, mlx5_dereg_mr_t dereg_mr_cb);
-
 __rte_internal
 int
 mlx5_mr_mempool_register(struct mlx5_mr_share_cache *share_cache, void *pd,
diff --git a/drivers/common/mlx5/version.map b/drivers/common/mlx5/version.map
index 12128e4738..28a0944a93 100644
--- a/drivers/common/mlx5/version.map
+++ b/drivers/common/mlx5/version.map
@@ -12,8 +12,6 @@ INTERNAL {
 	mlx5_common_verbs_reg_mr; # WINDOWS_NO_EXPORT
 	mlx5_common_verbs_dereg_mr; # WINDOWS_NO_EXPORT
 
-	mlx5_create_mr_ext;
-
 	mlx5_dev_is_pci;
 
 	mlx5_devx_alloc_uar; # WINDOWS_NO_EXPORT
@@ -107,18 +105,11 @@ INTERNAL {
 	mlx5_mp_uninit_secondary; # WINDOWS_NO_EXPORT
 
 	mlx5_mr_addr2mr_bh;
-	mlx5_mr_btree_dump;
 	mlx5_mr_btree_free;
 	mlx5_mr_create_primary;
 	mlx5_mr_ctrl_init;
-	mlx5_mr_dump_cache;
 	mlx5_mr_flush_local_cache;
-	mlx5_mr_free;
-	mlx5_mr_insert_cache;
-	mlx5_mr_lookup_cache;
-	mlx5_mr_lookup_list;
 	mlx5_mr_mb2mr;
-	mlx5_mr_rebuild_cache;
 
 	mlx5_nl_allmulti; # WINDOWS_NO_EXPORT
 	mlx5_nl_ifindex; # WINDOWS_NO_EXPORT
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 91aa5c0c75..17113be873 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -2589,8 +2589,6 @@ static struct mlx5_class_driver mlx5_net_driver = {
 	.id_table = mlx5_pci_id_map,
 	.probe = mlx5_os_net_probe,
 	.remove = mlx5_net_remove,
-	.dma_map = mlx5_net_dma_map,
-	.dma_unmap = mlx5_net_dma_unmap,
 	.probe_again = 1,
 	.intr_lsc = 1,
 	.intr_rmv = 1,
diff --git a/drivers/net/mlx5/mlx5_mr.c b/drivers/net/mlx5/mlx5_mr.c
index 38780202dc..ac3d8e2492 100644
--- a/drivers/net/mlx5/mlx5_mr.c
+++ b/drivers/net/mlx5/mlx5_mr.c
@@ -87,135 +87,3 @@ mlx5_tx_mb2mr_bh(struct mlx5_txq_data *txq, struct rte_mbuf *mb)
 	}
 	return mlx5_tx_addr2mr_bh(txq, addr);
 }
-
-/**
- * Finds the first ethdev that match the device.
- * The existence of multiple ethdev per pci device is only with representors.
- * On such case, it is enough to get only one of the ports as they all share
- * the same ibv context.
- *
- * @param dev
- *   Pointer to the device.
- *
- * @return
- *   Pointer to the ethdev if found, NULL otherwise.
- */
-static struct rte_eth_dev *
-dev_to_eth_dev(struct rte_device *dev)
-{
-	uint16_t port_id;
-
-	port_id = rte_eth_find_next_of(0, dev);
-	if (port_id == RTE_MAX_ETHPORTS)
-		return NULL;
-	return &rte_eth_devices[port_id];
-}
-
-/**
- * Callback to DMA map external memory to a device.
- *
- * @param rte_dev
- *   Pointer to the generic device.
- * @param addr
- *   Starting virtual address of memory to be mapped.
- * @param iova
- *   Starting IOVA address of memory to be mapped.
- * @param len
- *   Length of memory segment being mapped.
- *
- * @return
- *   0 on success, negative value on error.
- */
-int
-mlx5_net_dma_map(struct rte_device *rte_dev, void *addr,
-		 uint64_t iova __rte_unused, size_t len)
-{
-	struct rte_eth_dev *dev;
-	struct mlx5_mr *mr;
-	struct mlx5_priv *priv;
-	struct mlx5_common_device *cdev;
-
-	dev = dev_to_eth_dev(rte_dev);
-	if (!dev) {
-		DRV_LOG(WARNING, "unable to find matching ethdev "
-				 "to device %s", rte_dev->name);
-		rte_errno = ENODEV;
-		return -1;
-	}
-	priv = dev->data->dev_private;
-	cdev = priv->sh->cdev;
-	mr = mlx5_create_mr_ext(cdev->pd, (uintptr_t)addr, len,
-				SOCKET_ID_ANY, cdev->mr_scache.reg_mr_cb);
-	if (!mr) {
-		DRV_LOG(WARNING,
-			"port %u unable to dma map", dev->data->port_id);
-		rte_errno = EINVAL;
-		return -1;
-	}
-	rte_rwlock_write_lock(&cdev->mr_scache.rwlock);
-	LIST_INSERT_HEAD(&cdev->mr_scache.mr_list, mr, mr);
-	/* Insert to the global cache table. */
-	mlx5_mr_insert_cache(&cdev->mr_scache, mr);
-	rte_rwlock_write_unlock(&cdev->mr_scache.rwlock);
-	return 0;
-}
-
-/**
- * Callback to DMA unmap external memory to a device.
- *
- * @param rte_dev
- *   Pointer to the generic device.
- * @param addr
- *   Starting virtual address of memory to be unmapped.
- * @param iova
- *   Starting IOVA address of memory to be unmapped.
- * @param len
- *   Length of memory segment being unmapped.
- *
- * @return
- *   0 on success, negative value on error.
- */
-int
-mlx5_net_dma_unmap(struct rte_device *rte_dev, void *addr,
-		   uint64_t iova __rte_unused, size_t len __rte_unused)
-{
-	struct rte_eth_dev *dev;
-	struct mlx5_priv *priv;
-	struct mlx5_common_device *cdev;
-	struct mlx5_mr *mr;
-	struct mr_cache_entry entry;
-
-	dev = dev_to_eth_dev(rte_dev);
-	if (!dev) {
-		DRV_LOG(WARNING, "unable to find matching ethdev to device %s",
-			rte_dev->name);
-		rte_errno = ENODEV;
-		return -1;
-	}
-	priv = dev->data->dev_private;
-	cdev = priv->sh->cdev;
-	rte_rwlock_write_lock(&cdev->mr_scache.rwlock);
-	mr = mlx5_mr_lookup_list(&cdev->mr_scache, &entry, (uintptr_t)addr);
-	if (!mr) {
-		rte_rwlock_write_unlock(&cdev->mr_scache.rwlock);
-		DRV_LOG(WARNING, "address 0x%" PRIxPTR " wasn't registered to device %s",
-			(uintptr_t)addr, rte_dev->name);
-		rte_errno = EINVAL;
-		return -1;
-	}
-	LIST_REMOVE(mr, mr);
-	DRV_LOG(DEBUG, "port %u remove MR(%p) from list", dev->data->port_id,
-		(void *)mr);
-	mlx5_mr_free(mr, cdev->mr_scache.dereg_mr_cb);
-	mlx5_mr_rebuild_cache(&cdev->mr_scache);
-	/*
-	 * No explicit wmb is needed after updating dev_gen due to
-	 * store-release ordering in unlock that provides the
-	 * implicit barrier at the software visible level.
-	 */
-	++cdev->mr_scache.dev_gen;
-	DRV_LOG(DEBUG, "broadcasting local cache flush, gen=%d",
-		cdev->mr_scache.dev_gen);
-	rte_rwlock_write_unlock(&cdev->mr_scache.rwlock);
-	return 0;
-}
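
For reference, a minimal usage sketch of how an application exercises the
shared callbacks through the generic EAL API (the helper below is
hypothetical and not part of this patch; rte_extmem_register() and
rte_dev_dma_map() are standard EAL/ethdev calls that the bus layer
dispatches to mlx5_common_dev_dma_map() for mlx5 devices):

#include <rte_dev.h>
#include <rte_ethdev.h>
#include <rte_memory.h>

/* Hypothetical helper: register external memory and DMA map it to a port. */
static int
extmem_map_to_port(uint16_t port_id, void *addr, size_t len, size_t pg_sz)
{
	struct rte_eth_dev_info info;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &info);
	if (ret != 0)
		return ret;
	/* Make the externally allocated area known to EAL first. */
	ret = rte_extmem_register(addr, len, NULL, len / pg_sz, pg_sz);
	if (ret != 0)
		return ret;
	/* Dispatched by the bus layer to the common mlx5 DMA map callback. */
	return rte_dev_dma_map(info.device, addr, (uint64_t)(uintptr_t)addr, len);
}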