From patchwork Thu Sep 30 17:28:22 2021
X-Patchwork-Submitter: Michael Baum
X-Patchwork-Id: 100187
X-Patchwork-Delegate: thomas@monjalon.net
From: Michael Baum
CC: Matan Azrad, Thomas Monjalon, Michael Baum
Date: Thu, 30 Sep 2021 20:28:22 +0300
Message-ID: <20210930172822.1949969-19-michaelba@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210930172822.1949969-1-michaelba@nvidia.com>
References: <20210930172822.1949969-1-michaelba@nvidia.com>
Subject: [dpdk-dev] [PATCH 18/18] common/mlx5: share MR mempool registration
List-Id: DPDK patches and discussions

Expand the use of mempool registration to MR management for other drivers.
Signed-off-by: Michael Baum
Acked-by: Matan Azrad
---
 drivers/common/mlx5/mlx5_common.c     | 148 ++++++++++++++++++++++++++
 drivers/common/mlx5/mlx5_common.h     |   9 ++
 drivers/common/mlx5/mlx5_common_mp.h  |  11 ++
 drivers/common/mlx5/mlx5_common_mr.c  |  94 +++++++++++++---
 drivers/common/mlx5/mlx5_common_mr.h  |  41 ++++++-
 drivers/common/mlx5/version.map       |   6 +-
 drivers/compress/mlx5/mlx5_compress.c |   5 +-
 drivers/crypto/mlx5/mlx5_crypto.c     |   5 +-
 drivers/net/mlx5/linux/mlx5_mp_os.c   |   3 +-
 drivers/net/mlx5/meson.build          |   1 -
 drivers/net/mlx5/mlx5.c               | 106 ++----------------
 drivers/net/mlx5/mlx5.h               |  13 ---
 drivers/net/mlx5/mlx5_mr.c            |  89 ----------------
 drivers/net/mlx5/mlx5_rx.c            |  15 +--
 drivers/net/mlx5/mlx5_rx.h            |  14 ---
 drivers/net/mlx5/mlx5_rxq.c           |   1 +
 drivers/net/mlx5/mlx5_rxtx.h          |  26 -----
 drivers/net/mlx5/mlx5_tx.h            |  27 ++---
 drivers/regex/mlx5/mlx5_regex.c       |   6 +-
 19 files changed, 322 insertions(+), 298 deletions(-)
 delete mode 100644 drivers/net/mlx5/mlx5_mr.c

diff --git a/drivers/common/mlx5/mlx5_common.c b/drivers/common/mlx5/mlx5_common.c
index 0ed1477eb8..e6ff045c95 100644
--- a/drivers/common/mlx5/mlx5_common.c
+++ b/drivers/common/mlx5/mlx5_common.c
@@ -13,6 +13,7 @@
 #include "mlx5_common.h"
 #include "mlx5_common_os.h"
+#include "mlx5_common_mp.h"
 #include "mlx5_common_log.h"
 #include "mlx5_common_defs.h"
 #include "mlx5_common_private.h"
@@ -302,6 +303,152 @@ mlx5_dev_to_pci_str(const struct rte_device *dev, char *addr, size_t size)
 #endif
 }
 
+/**
+ * Register the mempool for the protection domain.
+ *
+ * @param cdev
+ *   Pointer to the mlx5 common device.
+ * @param mp
+ *   Mempool being registered.
+ *
+ * @return
+ *   0 on success, (-1) on failure and rte_errno is set.
+ */ +static int +mlx5_dev_mempool_register(struct mlx5_common_device *cdev, + struct rte_mempool *mp) +{ + struct mlx5_mp_id mp_id; + + mlx5_mp_id_init(&mp_id, 0); + return mlx5_mr_mempool_register(&cdev->mr_scache, cdev->pd, mp, &mp_id); +} + +/** + * Unregister the mempool from the protection domain. + * + * @param cdev + * Pointer to the mlx5 common device. + * @param mp + * Mempool being unregistered. + */ +void +mlx5_dev_mempool_unregister(struct mlx5_common_device *cdev, + struct rte_mempool *mp) +{ + struct mlx5_mp_id mp_id; + + mlx5_mp_id_init(&mp_id, 0); + if (mlx5_mr_mempool_unregister(&cdev->mr_scache, mp, &mp_id) < 0) + DRV_LOG(WARNING, "Failed to unregister mempool %s for PD %p: %s", + mp->name, cdev->pd, rte_strerror(rte_errno)); +} + +/** + * rte_mempool_walk() callback to register mempools for the protection domain. + * + * @param mp + * The mempool being walked. + * @param arg + * Pointer to the device shared context. + */ +static void +mlx5_dev_mempool_register_cb(struct rte_mempool *mp, void *arg) +{ + struct mlx5_common_device *cdev = arg; + int ret; + + ret = mlx5_dev_mempool_register(cdev, mp); + if (ret < 0 && rte_errno != EEXIST) + DRV_LOG(ERR, + "Failed to register existing mempool %s for PD %p: %s", + mp->name, cdev->pd, rte_strerror(rte_errno)); +} + +/** + * rte_mempool_walk() callback to unregister mempools + * from the protection domain. + * + * @param mp + * The mempool being walked. + * @param arg + * Pointer to the device shared context. + */ +static void +mlx5_dev_mempool_unregister_cb(struct rte_mempool *mp, void *arg) +{ + mlx5_dev_mempool_unregister((struct mlx5_common_device *)arg, mp); +} + +/** + * Mempool life cycle callback for mlx5 common devices. + * + * @param event + * Mempool life cycle event. + * @param mp + * Associated mempool. + * @param arg + * Pointer to a device shared context. 
+ */ +static void +mlx5_dev_mempool_event_cb(enum rte_mempool_event event, struct rte_mempool *mp, + void *arg) +{ + struct mlx5_common_device *cdev = arg; + + switch (event) { + case RTE_MEMPOOL_EVENT_READY: + if (mlx5_dev_mempool_register(cdev, mp) < 0) + DRV_LOG(ERR, + "Failed to register new mempool %s for PD %p: %s", + mp->name, cdev->pd, rte_strerror(rte_errno)); + break; + case RTE_MEMPOOL_EVENT_DESTROY: + mlx5_dev_mempool_unregister(cdev, mp); + break; + } +} + +int +mlx5_dev_mempool_subscribe(struct mlx5_common_device *cdev) +{ + int ret = 0; + + if (!cdev->config.mr_mempool_reg_en) + return 0; + rte_rwlock_write_lock(&cdev->mr_scache.mprwlock); + if (cdev->mr_scache.mp_cb_registered) + goto exit; + /* Callback for this device may be already registered. */ + ret = rte_mempool_event_callback_register(mlx5_dev_mempool_event_cb, + cdev); + if (ret != 0 && rte_errno != EEXIST) + goto exit; + /* Register mempools only once for this device. */ + if (ret == 0) + rte_mempool_walk(mlx5_dev_mempool_register_cb, cdev); + ret = 0; + cdev->mr_scache.mp_cb_registered = 1; +exit: + rte_rwlock_write_unlock(&cdev->mr_scache.mprwlock); + return ret; +} + +static void +mlx5_dev_mempool_unsubscribe(struct mlx5_common_device *cdev) +{ + int ret; + + if (!cdev->mr_scache.mp_cb_registered || + !cdev->config.mr_mempool_reg_en) + return; + /* Stop watching for mempool events and unregister all mempools. */ + ret = rte_mempool_event_callback_unregister(mlx5_dev_mempool_event_cb, + cdev); + if (ret == 0) + rte_mempool_walk(mlx5_dev_mempool_unregister_cb, cdev); +} + /** * Callback for memory event. 
* @@ -409,6 +556,7 @@ mlx5_common_dev_release(struct mlx5_common_device *cdev) if (TAILQ_EMPTY(&devices_list)) rte_mem_event_callback_unregister("MLX5_MEM_EVENT_CB", NULL); + mlx5_dev_mempool_unsubscribe(cdev); mlx5_mr_release_cache(&cdev->mr_scache); mlx5_dev_hw_global_release(cdev); } diff --git a/drivers/common/mlx5/mlx5_common.h b/drivers/common/mlx5/mlx5_common.h index 72ff0ff809..744c6a72b3 100644 --- a/drivers/common/mlx5/mlx5_common.h +++ b/drivers/common/mlx5/mlx5_common.h @@ -408,6 +408,15 @@ __rte_internal bool mlx5_dev_is_pci(const struct rte_device *dev); +__rte_internal +int +mlx5_dev_mempool_subscribe(struct mlx5_common_device *cdev); + +__rte_internal +void +mlx5_dev_mempool_unregister(struct mlx5_common_device *cdev, + struct rte_mempool *mp); + /* mlx5_common_mr.c */ __rte_internal diff --git a/drivers/common/mlx5/mlx5_common_mp.h b/drivers/common/mlx5/mlx5_common_mp.h index 527bf3cad8..2276dc921c 100644 --- a/drivers/common/mlx5/mlx5_common_mp.h +++ b/drivers/common/mlx5/mlx5_common_mp.h @@ -64,6 +64,17 @@ struct mlx5_mp_id { uint16_t port_id; }; +/** Key string for IPC. */ +#define MLX5_MP_NAME "common_mlx5_mp" + +/** Initialize a multi-process ID. */ +static inline void +mlx5_mp_id_init(struct mlx5_mp_id *mp_id, uint16_t port_id) +{ + mp_id->port_id = port_id; + strlcpy(mp_id->name, MLX5_MP_NAME, RTE_MP_MAX_NAME_LEN); +} + /** Request timeout for IPC. 
*/ #define MLX5_MP_REQ_TIMEOUT_SEC 5 diff --git a/drivers/common/mlx5/mlx5_common_mr.c b/drivers/common/mlx5/mlx5_common_mr.c index 5bfddac08e..b582e28d59 100644 --- a/drivers/common/mlx5/mlx5_common_mr.c +++ b/drivers/common/mlx5/mlx5_common_mr.c @@ -12,8 +12,10 @@ #include #include "mlx5_glue.h" +#include "mlx5_common.h" #include "mlx5_common_mp.h" #include "mlx5_common_mr.h" +#include "mlx5_common_os.h" #include "mlx5_common_log.h" #include "mlx5_malloc.h" @@ -47,6 +49,20 @@ struct mlx5_mempool_reg { unsigned int mrs_n; }; +void +mlx5_mprq_buf_free_cb(void *addr __rte_unused, void *opaque) +{ + struct mlx5_mprq_buf *buf = opaque; + + if (__atomic_load_n(&buf->refcnt, __ATOMIC_RELAXED) == 1) { + rte_mempool_put(buf->mp, buf); + } else if (unlikely(__atomic_sub_fetch(&buf->refcnt, 1, + __ATOMIC_RELAXED) == 0)) { + __atomic_store_n(&buf->refcnt, 1, __ATOMIC_RELAXED); + rte_mempool_put(buf->mp, buf); + } +} + /** * Expand B-tree table to a given size. Can't be called with holding * memory_hotplug_lock or share_cache.rwlock due to rte_realloc(). @@ -600,6 +616,10 @@ mlx5_mr_create_secondary(void *pd __rte_unused, { int ret; + if (mp_id == NULL) { + rte_errno = EINVAL; + return UINT32_MAX; + } DRV_LOG(DEBUG, "port %u requesting MR creation for address (%p)", mp_id->port_id, (void *)addr); ret = mlx5_mp_req_mr_create(mp_id, addr); @@ -995,10 +1015,11 @@ mr_lookup_caches(void *pd, struct mlx5_mp_id *mp_id, * @return * Searched LKey on success, UINT32_MAX on no match. 
*/ -uint32_t mlx5_mr_addr2mr_bh(void *pd, struct mlx5_mp_id *mp_id, - struct mlx5_mr_share_cache *share_cache, - struct mlx5_mr_ctrl *mr_ctrl, - uintptr_t addr, unsigned int mr_ext_memseg_en) +static uint32_t +mlx5_mr_addr2mr_bh(void *pd, struct mlx5_mp_id *mp_id, + struct mlx5_mr_share_cache *share_cache, + struct mlx5_mr_ctrl *mr_ctrl, uintptr_t addr, + unsigned int mr_ext_memseg_en) { uint32_t lkey; uint16_t bh_idx = 0; @@ -1029,7 +1050,7 @@ uint32_t mlx5_mr_addr2mr_bh(void *pd, struct mlx5_mp_id *mp_id, } /** - * Release all the created MRs and resources on global MR cache of a device. + * Release all the created MRs and resources on global MR cache of a device * list. * * @param share_cache @@ -1076,6 +1097,8 @@ mlx5_mr_create_cache(struct mlx5_mr_share_cache *share_cache, int socket) mlx5_os_set_reg_mr_cb(&share_cache->reg_mr_cb, &share_cache->dereg_mr_cb); rte_rwlock_init(&share_cache->rwlock); + rte_rwlock_init(&share_cache->mprwlock); + share_cache->mp_cb_registered = 0; /* Initialize B-tree and allocate memory for global MR cache table. */ return mlx5_mr_btree_init(&share_cache->cache, MLX5_MR_BTREE_CACHE_N * 2, socket); @@ -1245,8 +1268,8 @@ mlx5_free_mr_by_addr(struct mlx5_mr_share_cache *share_cache, /** * Dump all the created MRs and the global cache entries. * - * @param sh - * Pointer to Ethernet device shared context. + * @param share_cache + * Pointer to a global shared MR cache. 
*/ void mlx5_mr_dump_cache(struct mlx5_mr_share_cache *share_cache __rte_unused) @@ -1581,8 +1604,7 @@ mlx5_mr_mempool_register_primary(struct mlx5_mr_share_cache *share_cache, mpr = mlx5_mempool_reg_lookup(share_cache, mp); if (mpr == NULL) { mlx5_mempool_reg_attach(new_mpr); - LIST_INSERT_HEAD(&share_cache->mempool_reg_list, - new_mpr, next); + LIST_INSERT_HEAD(&share_cache->mempool_reg_list, new_mpr, next); ret = 0; } rte_rwlock_write_unlock(&share_cache->rwlock); @@ -1837,6 +1859,56 @@ mlx5_mr_mempool2mr_bh(struct mlx5_mr_share_cache *share_cache, return lkey; } +/** + * Bottom-half of LKey search on. If supported, lookup for the address from + * the mempool. Otherwise, search in old mechanism caches. + * + * @param cdev + * Pointer to mlx5 device. + * @param mp_id + * Multi-process identifier, may be NULL for the primary process. + * @param mr_ctrl + * Pointer to per-queue MR control structure. + * @param mb + * Pointer to mbuf. + * + * @return + * Searched LKey on success, UINT32_MAX on no match. + */ +static uint32_t +mlx5_mr_mb2mr_bh(struct mlx5_common_device *cdev, struct mlx5_mp_id *mp_id, + struct mlx5_mr_ctrl *mr_ctrl, struct rte_mbuf *mb) +{ + uint32_t lkey; + uintptr_t addr = (uintptr_t)mb->buf_addr; + + if (cdev->config.mr_mempool_reg_en) { + struct rte_mempool *mp = NULL; + struct mlx5_mprq_buf *buf; + + if (!RTE_MBUF_HAS_EXTBUF(mb)) { + mp = mlx5_mb2mp(mb); + } else if (mb->shinfo->free_cb == mlx5_mprq_buf_free_cb) { + /* Recover MPRQ mempool. */ + buf = mb->shinfo->fcb_opaque; + mp = buf->mp; + } + if (mp != NULL) { + lkey = mlx5_mr_mempool2mr_bh(&cdev->mr_scache, + mr_ctrl, mp, addr); + /* + * Lookup can only fail on invalid input, e.g. "addr" + * is not from "mp" or "mp" has MEMPOOL_F_NON_IO set. + */ + if (lkey != UINT32_MAX) + return lkey; + } + /* Fallback for generic mechanism in corner cases. 
*/ + } + return mlx5_mr_addr2mr_bh(cdev->pd, mp_id, &cdev->mr_scache, mr_ctrl, + addr, cdev->config.mr_ext_memseg_en); +} + /** * Query LKey from a packet buffer. * @@ -1857,7 +1929,6 @@ mlx5_mr_mb2mr(struct mlx5_common_device *cdev, struct mlx5_mp_id *mp_id, struct mlx5_mr_ctrl *mr_ctrl, struct rte_mbuf *mbuf) { uint32_t lkey; - uintptr_t addr = (uintptr_t)mbuf->buf_addr; /* Check generation bit to see if there's any change on existing MRs. */ if (unlikely(*mr_ctrl->dev_gen_ptr != mr_ctrl->cur_gen)) @@ -1868,6 +1939,5 @@ mlx5_mr_mb2mr(struct mlx5_common_device *cdev, struct mlx5_mp_id *mp_id, if (likely(lkey != UINT32_MAX)) return lkey; /* Take slower bottom-half on miss. */ - return mlx5_mr_addr2mr_bh(cdev->pd, mp_id, &cdev->mr_scache, mr_ctrl, - addr, cdev->config.mr_ext_memseg_en); + return mlx5_mr_mb2mr_bh(cdev, mp_id, mr_ctrl, mbuf); } diff --git a/drivers/common/mlx5/mlx5_common_mr.h b/drivers/common/mlx5/mlx5_common_mr.h index 8a7af05ca5..e74f81641c 100644 --- a/drivers/common/mlx5/mlx5_common_mr.h +++ b/drivers/common/mlx5/mlx5_common_mr.h @@ -79,6 +79,8 @@ LIST_HEAD(mlx5_mempool_reg_list, mlx5_mempool_reg); struct mlx5_mr_share_cache { uint32_t dev_gen; /* Generation number to flush local caches. */ rte_rwlock_t rwlock; /* MR cache Lock. */ + rte_rwlock_t mprwlock; /* Mempool Registration Lock. */ + uint8_t mp_cb_registered; /* Mempool are Registered. */ struct mlx5_mr_btree cache; /* Global MR cache table. */ struct mlx5_mr_list mr_list; /* Registered MR list. */ struct mlx5_mr_list mr_free_list; /* Freed MR list. */ @@ -87,6 +89,40 @@ struct mlx5_mr_share_cache { mlx5_dereg_mr_t dereg_mr_cb; /* Callback to dereg_mr func */ } __rte_packed; +/* Multi-Packet RQ buffer header. */ +struct mlx5_mprq_buf { + struct rte_mempool *mp; + uint16_t refcnt; /* Atomically accessed refcnt. */ + uint8_t pad[RTE_PKTMBUF_HEADROOM]; /* Headroom for the first packet. */ + struct rte_mbuf_ext_shared_info shinfos[]; + /* + * Shared information per stride. 
+ * More memory will be allocated for the first stride head-room and for + * the strides data. + */ +} __rte_cache_aligned; + +__rte_internal +void mlx5_mprq_buf_free_cb(void *addr, void *opaque); + +/** + * Get Memory Pool (MP) from mbuf. If mbuf is indirect, the pool from which the + * cloned mbuf is allocated is returned instead. + * + * @param buf + * Pointer to mbuf. + * + * @return + * Memory pool where data is located for given mbuf. + */ +static inline struct rte_mempool * +mlx5_mb2mp(struct rte_mbuf *buf) +{ + if (unlikely(RTE_MBUF_CLONED(buf))) + return rte_mbuf_from_indirect(buf)->pool; + return buf->pool; +} + /** * Look up LKey from given lookup table by linear search. Firstly look up the * last-hit entry. If miss, the entire array is searched. If found, update the @@ -133,11 +169,6 @@ __rte_internal void mlx5_mr_btree_free(struct mlx5_mr_btree *bt); void mlx5_mr_btree_dump(struct mlx5_mr_btree *bt __rte_unused); __rte_internal -uint32_t mlx5_mr_addr2mr_bh(void *pd, struct mlx5_mp_id *mp_id, - struct mlx5_mr_share_cache *share_cache, - struct mlx5_mr_ctrl *mr_ctrl, - uintptr_t addr, unsigned int mr_ext_memseg_en); -__rte_internal uint32_t mlx5_mr_mempool2mr_bh(struct mlx5_mr_share_cache *share_cache, struct mlx5_mr_ctrl *mr_ctrl, struct rte_mempool *mp, uintptr_t addr); diff --git a/drivers/common/mlx5/version.map b/drivers/common/mlx5/version.map index b41fdb883d..807043f22c 100644 --- a/drivers/common/mlx5/version.map +++ b/drivers/common/mlx5/version.map @@ -13,6 +13,8 @@ INTERNAL { mlx5_common_verbs_dereg_mr; # WINDOWS_NO_EXPORT mlx5_dev_is_pci; + mlx5_dev_mempool_unregister; + mlx5_dev_mempool_subscribe; mlx5_devx_alloc_uar; # WINDOWS_NO_EXPORT @@ -101,10 +103,10 @@ INTERNAL { mlx5_mp_uninit_primary; # WINDOWS_NO_EXPORT mlx5_mp_uninit_secondary; # WINDOWS_NO_EXPORT - mlx5_mr_addr2mr_bh; + mlx5_mprq_buf_free_cb; mlx5_mr_btree_free; mlx5_mr_create_primary; - mlx5_mr_ctrl_init; + mlx5_mr_ctrl_init; mlx5_mr_flush_local_cache; mlx5_mr_mb2mr; diff --git 
a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c index 83efc2cbc4..707716aaa2 100644 --- a/drivers/compress/mlx5/mlx5_compress.c +++ b/drivers/compress/mlx5/mlx5_compress.c @@ -382,8 +382,9 @@ mlx5_compress_dev_stop(struct rte_compressdev *dev) static int mlx5_compress_dev_start(struct rte_compressdev *dev) { - RTE_SET_USED(dev); - return 0; + struct mlx5_compress_priv *priv = dev->data->dev_private; + + return mlx5_dev_mempool_subscribe(priv->cdev); } static void diff --git a/drivers/crypto/mlx5/mlx5_crypto.c b/drivers/crypto/mlx5/mlx5_crypto.c index ad63cace10..2af5194c05 100644 --- a/drivers/crypto/mlx5/mlx5_crypto.c +++ b/drivers/crypto/mlx5/mlx5_crypto.c @@ -142,8 +142,9 @@ mlx5_crypto_dev_stop(struct rte_cryptodev *dev) static int mlx5_crypto_dev_start(struct rte_cryptodev *dev) { - RTE_SET_USED(dev); - return 0; + struct mlx5_crypto_priv *priv = dev->data->dev_private; + + return mlx5_dev_mempool_subscribe(priv->cdev); } static int diff --git a/drivers/net/mlx5/linux/mlx5_mp_os.c b/drivers/net/mlx5/linux/mlx5_mp_os.c index c3b6495d9e..017a731b3f 100644 --- a/drivers/net/mlx5/linux/mlx5_mp_os.c +++ b/drivers/net/mlx5/linux/mlx5_mp_os.c @@ -90,8 +90,7 @@ mlx5_mp_os_primary_handle(const struct rte_mp_msg *mp_msg, const void *peer) switch (param->type) { case MLX5_MP_REQ_CREATE_MR: mp_init_msg(&priv->mp_id, &mp_res, param->type); - lkey = mlx5_mr_create_primary(cdev->pd, - &priv->sh->cdev->mr_scache, + lkey = mlx5_mr_create_primary(cdev->pd, &cdev->mr_scache, &entry, param->args.addr, cdev->config.mr_ext_memseg_en); if (lkey == UINT32_MAX) diff --git a/drivers/net/mlx5/meson.build b/drivers/net/mlx5/meson.build index dac7f1fabf..636a1be890 100644 --- a/drivers/net/mlx5/meson.build +++ b/drivers/net/mlx5/meson.build @@ -18,7 +18,6 @@ sources = files( 'mlx5_flow_dv.c', 'mlx5_flow_aso.c', 'mlx5_mac.c', - 'mlx5_mr.c', 'mlx5_rss.c', 'mlx5_rx.c', 'mlx5_rxmode.c', diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 
17113be873..e9aa41432e 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1097,28 +1097,8 @@ mlx5_alloc_rxtx_uars(struct mlx5_dev_ctx_shared *sh, } /** - * Unregister the mempool from the protection domain. - * - * @param sh - * Pointer to the device shared context. - * @param mp - * Mempool being unregistered. - */ -static void -mlx5_dev_ctx_shared_mempool_unregister(struct mlx5_dev_ctx_shared *sh, - struct rte_mempool *mp) -{ - struct mlx5_mp_id mp_id; - - mlx5_mp_id_init(&mp_id, 0); - if (mlx5_mr_mempool_unregister(&sh->cdev->mr_scache, mp, &mp_id) < 0) - DRV_LOG(WARNING, "Failed to unregister mempool %s for PD %p: %s", - mp->name, sh->cdev->pd, rte_strerror(rte_errno)); -} - -/** - * rte_mempool_walk() callback to register mempools - * for the protection domain. + * rte_mempool_walk() callback to unregister Rx mempools. + * It used when implicit mempool registration is disabled. * * @param mp * The mempool being walked. @@ -1126,66 +1106,11 @@ mlx5_dev_ctx_shared_mempool_unregister(struct mlx5_dev_ctx_shared *sh, * Pointer to the device shared context. */ static void -mlx5_dev_ctx_shared_mempool_register_cb(struct rte_mempool *mp, void *arg) +mlx5_dev_ctx_shared_rx_mempool_unregister_cb(struct rte_mempool *mp, void *arg) { struct mlx5_dev_ctx_shared *sh = arg; - struct mlx5_mp_id mp_id; - int ret; - mlx5_mp_id_init(&mp_id, 0); - ret = mlx5_mr_mempool_register(&sh->cdev->mr_scache, sh->cdev->pd, mp, - &mp_id); - if (ret < 0 && rte_errno != EEXIST) - DRV_LOG(ERR, "Failed to register existing mempool %s for PD %p: %s", - mp->name, sh->cdev->pd, rte_strerror(rte_errno)); -} - -/** - * rte_mempool_walk() callback to unregister mempools - * from the protection domain. - * - * @param mp - * The mempool being walked. - * @param arg - * Pointer to the device shared context. 
- */ -static void -mlx5_dev_ctx_shared_mempool_unregister_cb(struct rte_mempool *mp, void *arg) -{ - mlx5_dev_ctx_shared_mempool_unregister - ((struct mlx5_dev_ctx_shared *)arg, mp); -} - -/** - * Mempool life cycle callback for Ethernet devices. - * - * @param event - * Mempool life cycle event. - * @param mp - * Associated mempool. - * @param arg - * Pointer to a device shared context. - */ -static void -mlx5_dev_ctx_shared_mempool_event_cb(enum rte_mempool_event event, - struct rte_mempool *mp, void *arg) -{ - struct mlx5_dev_ctx_shared *sh = arg; - struct mlx5_mp_id mp_id; - - switch (event) { - case RTE_MEMPOOL_EVENT_READY: - mlx5_mp_id_init(&mp_id, 0); - if (mlx5_mr_mempool_register(&sh->cdev->mr_scache, sh->cdev->pd, - mp, &mp_id) < 0) - DRV_LOG(ERR, "Failed to register new mempool %s for PD %p: %s", - mp->name, sh->cdev->pd, - rte_strerror(rte_errno)); - break; - case RTE_MEMPOOL_EVENT_DESTROY: - mlx5_dev_ctx_shared_mempool_unregister(sh, mp); - break; - } + mlx5_dev_mempool_unregister(sh->cdev, mp); } /** @@ -1206,7 +1131,7 @@ mlx5_dev_ctx_shared_rx_mempool_event_cb(enum rte_mempool_event event, struct mlx5_dev_ctx_shared *sh = arg; if (event == RTE_MEMPOOL_EVENT_DESTROY) - mlx5_dev_ctx_shared_mempool_unregister(sh, mp); + mlx5_dev_mempool_unregister(sh->cdev, mp); } int @@ -1222,15 +1147,7 @@ mlx5_dev_ctx_shared_mempool_subscribe(struct rte_eth_dev *dev) (mlx5_dev_ctx_shared_rx_mempool_event_cb, sh); return ret == 0 || rte_errno == EEXIST ? 0 : ret; } - /* Callback for this shared context may be already registered. */ - ret = rte_mempool_event_callback_register - (mlx5_dev_ctx_shared_mempool_event_cb, sh); - if (ret != 0 && rte_errno != EEXIST) - return ret; - /* Register mempools only once for this shared context. 
 */
-	if (ret == 0)
-		rte_mempool_walk(mlx5_dev_ctx_shared_mempool_register_cb, sh);
-	return 0;
+	return mlx5_dev_mempool_subscribe(sh->cdev);
 }

 /**
@@ -1414,14 +1331,13 @@ mlx5_free_shared_dev_ctx(struct mlx5_dev_ctx_shared *sh)
 	if (--sh->refcnt)
 		goto exit;
 	/* Stop watching for mempool events and unregister all mempools. */
-	ret = rte_mempool_event_callback_unregister
-				(mlx5_dev_ctx_shared_mempool_event_cb, sh);
-	if (ret < 0 && rte_errno == ENOENT)
+	if (!sh->cdev->config.mr_mempool_reg_en) {
 		ret = rte_mempool_event_callback_unregister
 				(mlx5_dev_ctx_shared_rx_mempool_event_cb, sh);
-	if (ret == 0)
-		rte_mempool_walk(mlx5_dev_ctx_shared_mempool_unregister_cb,
-				 sh);
+		if (ret == 0)
+			rte_mempool_walk
+				(mlx5_dev_ctx_shared_rx_mempool_unregister_cb, sh);
+	}
 	/* Remove context from the global device list. */
 	LIST_REMOVE(sh, next);
 	/* Release flow workspaces objects on the last device. */
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 4f823baa6d..059d400384 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -153,17 +153,6 @@ struct mlx5_flow_dump_ack {
 	int rc; /**< Return code. */
 };

-/** Key string for IPC. */
-#define MLX5_MP_NAME "net_mlx5_mp"
-
-/** Initialize a multi-process ID. */
-static inline void
-mlx5_mp_id_init(struct mlx5_mp_id *mp_id, uint16_t port_id)
-{
-	mp_id->port_id = port_id;
-	strlcpy(mp_id->name, MLX5_MP_NAME, RTE_MP_MAX_NAME_LEN);
-}
-
 LIST_HEAD(mlx5_dev_list, mlx5_dev_ctx_shared);

 /* Shared data between primary and secondary processes. */
@@ -172,8 +161,6 @@ struct mlx5_shared_data {
 	/* Global spinlock for primary and secondary processes. */
 	int init_done; /* Whether primary has done initialization. */
 	unsigned int secondary_cnt; /* Number of secondary processes init'd. */
-	struct mlx5_dev_list mem_event_cb_list;
-	rte_rwlock_t mem_event_rwlock;
 };

 /* Per-process data structure, not visible to other processes. */
diff --git a/drivers/net/mlx5/mlx5_mr.c b/drivers/net/mlx5/mlx5_mr.c
deleted file mode 100644
index ac3d8e2492..0000000000
--- a/drivers/net/mlx5/mlx5_mr.c
+++ /dev/null
@@ -1,89 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2016 6WIND S.A.
- * Copyright 2016 Mellanox Technologies, Ltd
- */
-
-#include
-#include
-#include
-#include
-
-#include
-#include
-
-#include "mlx5.h"
-#include "mlx5_rxtx.h"
-#include "mlx5_rx.h"
-#include "mlx5_tx.h"
-
-/**
- * Bottom-half of LKey search on Tx.
- *
- * @param txq
- *   Pointer to Tx queue structure.
- * @param addr
- *   Search key.
- *
- * @return
- *   Searched LKey on success, UINT32_MAX on no match.
- */
-static uint32_t
-mlx5_tx_addr2mr_bh(struct mlx5_txq_data *txq, uintptr_t addr)
-{
-	struct mlx5_txq_ctrl *txq_ctrl =
-		container_of(txq, struct mlx5_txq_ctrl, txq);
-	struct mlx5_mr_ctrl *mr_ctrl = &txq->mr_ctrl;
-	struct mlx5_priv *priv = txq_ctrl->priv;
-
-	return mlx5_mr_addr2mr_bh(priv->sh->cdev->pd, &priv->mp_id,
-				  &priv->sh->cdev->mr_scache, mr_ctrl, addr,
-				  priv->sh->cdev->config.mr_ext_memseg_en);
-}
-
-/**
- * Bottom-half of LKey search on Tx. If it can't be searched in the memseg
- * list, register the mempool of the mbuf as externally allocated memory.
- *
- * @param txq
- *   Pointer to Tx queue structure.
- * @param mb
- *   Pointer to mbuf.
- *
- * @return
- *   Searched LKey on success, UINT32_MAX on no match.
- */
-uint32_t
-mlx5_tx_mb2mr_bh(struct mlx5_txq_data *txq, struct rte_mbuf *mb)
-{
-	struct mlx5_txq_ctrl *txq_ctrl =
-		container_of(txq, struct mlx5_txq_ctrl, txq);
-	struct mlx5_mr_ctrl *mr_ctrl = &txq->mr_ctrl;
-	struct mlx5_priv *priv = txq_ctrl->priv;
-	uintptr_t addr = (uintptr_t)mb->buf_addr;
-	uint32_t lkey;
-
-	if (priv->sh->cdev->config.mr_mempool_reg_en) {
-		struct rte_mempool *mp = NULL;
-		struct mlx5_mprq_buf *buf;
-
-		if (!RTE_MBUF_HAS_EXTBUF(mb)) {
-			mp = mlx5_mb2mp(mb);
-		} else if (mb->shinfo->free_cb == mlx5_mprq_buf_free_cb) {
-			/* Recover MPRQ mempool. */
-			buf = mb->shinfo->fcb_opaque;
-			mp = buf->mp;
-		}
-		if (mp != NULL) {
-			lkey = mlx5_mr_mempool2mr_bh(&priv->sh->cdev->mr_scache,
-						     mr_ctrl, mp, addr);
-			/*
-			 * Lookup can only fail on invalid input, e.g. "addr"
-			 * is not from "mp" or "mp" has MEMPOOL_F_NON_IO set.
-			 */
-			if (lkey != UINT32_MAX)
-				return lkey;
-		}
-		/* Fallback for generic mechanism in corner cases. */
-	}
-	return mlx5_tx_addr2mr_bh(txq, addr);
-}
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index c83c7f4a39..8fa15e9820 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -18,6 +18,7 @@
 #include
 #include
+#include

 #include "mlx5_autoconf.h"
 #include "mlx5_defs.h"
@@ -1027,20 +1028,6 @@ mlx5_lro_update_hdr(uint8_t *__rte_restrict padd,
 	mlx5_lro_update_tcp_hdr(h.tcp, cqe, phcsum, l4_type);
 }

-void
-mlx5_mprq_buf_free_cb(void *addr __rte_unused, void *opaque)
-{
-	struct mlx5_mprq_buf *buf = opaque;
-
-	if (__atomic_load_n(&buf->refcnt, __ATOMIC_RELAXED) == 1) {
-		rte_mempool_put(buf->mp, buf);
-	} else if (unlikely(__atomic_sub_fetch(&buf->refcnt, 1,
-					       __ATOMIC_RELAXED) == 0)) {
-		__atomic_store_n(&buf->refcnt, 1, __ATOMIC_RELAXED);
-		rte_mempool_put(buf->mp, buf);
-	}
-}
-
 void
 mlx5_mprq_buf_free(struct mlx5_mprq_buf *buf)
 {
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 42a12151fc..84a21fbfb9 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -43,19 +43,6 @@ struct rxq_zip {
 	uint32_t cqe_cnt; /* Number of CQEs. */
 };

-/* Multi-Packet RQ buffer header. */
-struct mlx5_mprq_buf {
-	struct rte_mempool *mp;
-	uint16_t refcnt; /* Atomically accessed refcnt. */
-	uint8_t pad[RTE_PKTMBUF_HEADROOM]; /* Headroom for the first packet. */
-	struct rte_mbuf_ext_shared_info shinfos[];
-	/*
-	 * Shared information per stride.
-	 * More memory will be allocated for the first stride head-room and for
-	 * the strides data.
-	 */
-} __rte_cache_aligned;
-
 /* Get pointer to the first stride. */
 #define mlx5_mprq_buf_addr(ptr, strd_n) (RTE_PTR_ADD((ptr), \
 				sizeof(struct mlx5_mprq_buf) + \
@@ -255,7 +242,6 @@ int mlx5_hrxq_modify(struct rte_eth_dev *dev, uint32_t hxrq_idx,
 uint16_t mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n);
 void mlx5_rxq_initialize(struct mlx5_rxq_data *rxq);
 __rte_noinline int mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec);
-void mlx5_mprq_buf_free_cb(void *addr, void *opaque);
 void mlx5_mprq_buf_free(struct mlx5_mprq_buf *buf);
 uint16_t mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts,
 			    uint16_t pkts_n);
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 174899e661..e1a4ded688 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -21,6 +21,7 @@
 #include
 #include
+#include

 #include "mlx5_defs.h"
 #include "mlx5.h"
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index b400295e7d..876aa14ae6 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -43,30 +43,4 @@ int mlx5_queue_state_modify_primary(struct rte_eth_dev *dev,
 int mlx5_queue_state_modify(struct rte_eth_dev *dev,
 			    struct mlx5_mp_arg_queue_state_modify *sm);

-/* mlx5_mr.c */
-
-void mlx5_mr_flush_local_cache(struct mlx5_mr_ctrl *mr_ctrl);
-int mlx5_net_dma_map(struct rte_device *rte_dev, void *addr, uint64_t iova,
-		     size_t len);
-int mlx5_net_dma_unmap(struct rte_device *rte_dev, void *addr, uint64_t iova,
-		       size_t len);
-
-/**
- * Get Memory Pool (MP) from mbuf. If mbuf is indirect, the pool from which the
- * cloned mbuf is allocated is returned instead.
- *
- * @param buf
- *   Pointer to mbuf.
- *
- * @return
- *   Memory pool where data is located for given mbuf.
- */
-static inline struct rte_mempool *
-mlx5_mb2mp(struct rte_mbuf *buf)
-{
-	if (unlikely(RTE_MBUF_CLONED(buf)))
-		return rte_mbuf_from_indirect(buf)->pool;
-	return buf->pool;
-}
-
 #endif /* RTE_PMD_MLX5_RXTX_H_ */
diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index 1f124b92e6..de2e284929 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -235,10 +235,6 @@ void mlx5_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 int mlx5_tx_burst_mode_get(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 			   struct rte_eth_burst_mode *mode);

-/* mlx5_mr.c */
-
-uint32_t mlx5_tx_mb2mr_bh(struct mlx5_txq_data *txq, struct rte_mbuf *mb);
-
 /* mlx5_tx_empw.c */

 MLX5_TXOFF_PRE_DECL(full_empw);
@@ -356,12 +352,12 @@ __mlx5_uar_write64(uint64_t val, void *addr, rte_spinlock_t *lock)
 #endif

 /**
- * Query LKey from a packet buffer for Tx. If not found, add the mempool.
+ * Query LKey from a packet buffer for Tx.
  *
  * @param txq
  *   Pointer to Tx queue structure.
- * @param addr
- *   Address to search.
+ * @param mb
+ *   Pointer to mbuf.
  *
  * @return
  *   Searched LKey on success, UINT32_MAX on no match.
  */
@@ -370,19 +366,12 @@ static __rte_always_inline uint32_t
 mlx5_tx_mb2mr(struct mlx5_txq_data *txq, struct rte_mbuf *mb)
 {
 	struct mlx5_mr_ctrl *mr_ctrl = &txq->mr_ctrl;
-	uintptr_t addr = (uintptr_t)mb->buf_addr;
-	uint32_t lkey;
-
-	/* Check generation bit to see if there's any change on existing MRs. */
-	if (unlikely(*mr_ctrl->dev_gen_ptr != mr_ctrl->cur_gen))
-		mlx5_mr_flush_local_cache(mr_ctrl);
-	/* Linear search on MR cache array. */
-	lkey = mlx5_mr_lookup_lkey(mr_ctrl->cache, &mr_ctrl->mru,
-				   MLX5_MR_CACHE_N, addr);
-	if (likely(lkey != UINT32_MAX))
-		return lkey;
+	struct mlx5_txq_ctrl *txq_ctrl =
+		container_of(txq, struct mlx5_txq_ctrl, txq);
+	struct mlx5_priv *priv = txq_ctrl->priv;
+
 	/* Take slower bottom-half on miss. */
-	return mlx5_tx_mb2mr_bh(txq, mb);
+	return mlx5_mr_mb2mr(priv->sh->cdev, &priv->mp_id, mr_ctrl, mb);
 }

 /**
diff --git a/drivers/regex/mlx5/mlx5_regex.c b/drivers/regex/mlx5/mlx5_regex.c
index a79fb7e5be..cf46a0bd23 100644
--- a/drivers/regex/mlx5/mlx5_regex.c
+++ b/drivers/regex/mlx5/mlx5_regex.c
@@ -36,9 +36,11 @@ const struct rte_regexdev_ops mlx5_regexdev_ops = {
 };

 int
-mlx5_regex_start(struct rte_regexdev *dev __rte_unused)
+mlx5_regex_start(struct rte_regexdev *dev)
 {
-	return 0;
+	struct mlx5_regex_priv *priv = dev->data->dev_private;
+
+	return mlx5_dev_mempool_subscribe(priv->cdev);
 }

 int