From patchwork Wed Aug 18 09:07:55 2021
X-Patchwork-Submitter: Dmitry Kozlyuk
X-Patchwork-Id: 97039
X-Patchwork-Delegate: thomas@monjalon.net
From: Dmitry Kozlyuk
CC: Matan Azrad, Shahaf Shuler, Viacheslav Ovsiienko
Date: Wed, 18 Aug 2021 12:07:55 +0300
Message-ID: <20210818090755.2419483-5-dkozlyuk@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210818090755.2419483-1-dkozlyuk@nvidia.com>
References: <20210818090755.2419483-1-dkozlyuk@nvidia.com>
Subject: [dpdk-dev] [PATCH 4/4] net/mlx5: support mempool registration

When the first port in a given protection domain (PD) starts,
install a mempool event callback for this PD and register all existing
memory regions (MR) for it. When the last port in a PD closes,
remove the callback and unregister all mempools for this PD.

On TX slow path, i.e. when an MR key for the address of the buffer
to send is not in the local cache, first try to retrieve it from
the database of registered mempools. Supported are direct and indirect
mbufs, as well as externally-attached ones from MLX5 MPRQ feature.
Lookup in the database of non-mempool memory is used as the last resort.

Signed-off-by: Dmitry Kozlyuk
Acked-by: Matan Azrad
---
 doc/guides/nics/mlx5.rst               |  11 +++
 doc/guides/rel_notes/release_21_11.rst |   6 ++
 drivers/net/mlx5/linux/mlx5_mp_os.c    |  44 +++++++++
 drivers/net/mlx5/linux/mlx5_os.c       |   4 +-
 drivers/net/mlx5/linux/mlx5_os.h       |   2 +
 drivers/net/mlx5/mlx5.c                | 128 +++++++++++++++++++++++++
 drivers/net/mlx5/mlx5.h                |  13 +++
 drivers/net/mlx5/mlx5_mr.c             |  27 ++++++
 drivers/net/mlx5/mlx5_trigger.c        |  10 +-
 9 files changed, 241 insertions(+), 4 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index bae73f42d8..58d1c5b65c 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -1001,6 +1001,17 @@ Driver options

   Enabled by default.

+- ``mr_mempool_reg_en`` parameter [int]
+
+  A nonzero value enables implicit registration of DMA memory of all mempools
+  except those having ``MEMPOOL_F_NON_IO``. The effect is that when a packet
+  from a mempool is transmitted, its memory is already registered for DMA
+  in the PMD and no registration will happen on the data path. The tradeoff is
+  extra work on the creation of each mempool and increased HW resource use
+  if some mempools are not used with MLX5 devices.
+
+  Enabled by default.
+
 - ``representor`` parameter [list]

   This parameter can be used to instantiate DPDK Ethernet devices from
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index dc9b98b862..0a2f80aa1b 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -55,6 +55,12 @@ New Features
   Also, make sure to start the actual text at the margin.
   =======================================================

+* **Updated Mellanox mlx5 driver.**
+
+  Updated the Mellanox mlx5 driver with new features and improvements, including:
+
+  * Added implicit mempool registration to avoid data path hiccups (opt-out).
+

 Removed Items
 -------------
diff --git a/drivers/net/mlx5/linux/mlx5_mp_os.c b/drivers/net/mlx5/linux/mlx5_mp_os.c
index 3a4aa766f8..d2ac375a47 100644
--- a/drivers/net/mlx5/linux/mlx5_mp_os.c
+++ b/drivers/net/mlx5/linux/mlx5_mp_os.c
@@ -20,6 +20,45 @@
 #include "mlx5_tx.h"
 #include "mlx5_utils.h"

+/**
+ * Handle a port-agnostic message.
+ *
+ * @return
+ *   0 on success, 1 when message is not port-agnostic, (-1) on error.
+ */
+static int
+mlx5_mp_os_handle_port_agnostic(const struct rte_mp_msg *mp_msg,
+                                const void *peer)
+{
+        struct rte_mp_msg mp_res;
+        struct mlx5_mp_param *res = (struct mlx5_mp_param *)mp_res.param;
+        const struct mlx5_mp_param *param =
+                (const struct mlx5_mp_param *)mp_msg->param;
+        const struct mlx5_mp_arg_mempool_reg *mpr;
+        struct mlx5_mp_id mp_id;
+
+        switch (param->type) {
+        case MLX5_MP_REQ_MEMPOOL_REGISTER:
+                mlx5_mp_id_init(&mp_id, param->port_id);
+                mp_init_msg(&mp_id, &mp_res, param->type);
+                mpr = &param->args.mempool_reg;
+                res->result = mlx5_mr_mempool_register(mpr->share_cache,
+                                                       mpr->pd, mpr->mempool,
+                                                       NULL);
+                return rte_mp_reply(&mp_res, peer);
+        case MLX5_MP_REQ_MEMPOOL_UNREGISTER:
+                mlx5_mp_id_init(&mp_id, param->port_id);
+                mp_init_msg(&mp_id, &mp_res, param->type);
+                mpr = &param->args.mempool_reg;
+                res->result = mlx5_mr_mempool_unregister(mpr->share_cache,
+                                                         mpr->mempool, NULL);
+                return rte_mp_reply(&mp_res, peer);
+        default:
+                return 1;
+        }
+        return -1;
+}
+
 int
 mlx5_mp_os_primary_handle(const struct rte_mp_msg *mp_msg, const void *peer)
 {
@@ -34,6 +73,11 @@ mlx5_mp_os_primary_handle(const struct rte_mp_msg *mp_msg, const void *peer)
         int ret;

         MLX5_ASSERT(rte_eal_process_type() == RTE_PROC_PRIMARY);
+        /* Port-agnostic messages. */
+        ret = mlx5_mp_os_handle_port_agnostic(mp_msg, peer);
+        if (ret <= 0)
+                return ret;
+        /* Port-specific messages. */
         if (!rte_eth_dev_is_valid_port(param->port_id)) {
                 rte_errno = ENODEV;
                 DRV_LOG(ERR, "port %u invalid port ID", param->port_id);
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 5f8766aa48..7dceadb6cc 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1034,8 +1034,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
                 err = mlx5_proc_priv_init(eth_dev);
                 if (err)
                         return NULL;
-                mp_id.port_id = eth_dev->data->port_id;
-                strlcpy(mp_id.name, MLX5_MP_NAME, RTE_MP_MAX_NAME_LEN);
+                mlx5_mp_id_init(&mp_id, eth_dev->data->port_id);
                 /* Receive command fd from primary process */
                 err = mlx5_mp_req_verbs_cmd_fd(&mp_id);
                 if (err < 0)
@@ -2136,6 +2135,7 @@ mlx5_os_config_default(struct mlx5_dev_config *config)
         config->txqs_inline = MLX5_ARG_UNSET;
         config->vf_nl_en = 1;
         config->mr_ext_memseg_en = 1;
+        config->mr_mempool_reg_en = 1;
         config->mprq.max_memcpy_len = MLX5_MPRQ_MEMCPY_DEFAULT_LEN;
         config->mprq.min_rxqs_num = MLX5_MPRQ_MIN_RXQS;
         config->dv_esw_en = 1;
diff --git a/drivers/net/mlx5/linux/mlx5_os.h b/drivers/net/mlx5/linux/mlx5_os.h
index 2991d37df2..eb7e1dd3c6 100644
--- a/drivers/net/mlx5/linux/mlx5_os.h
+++ b/drivers/net/mlx5/linux/mlx5_os.h
@@ -20,5 +20,7 @@ enum {
 #define MLX5_NAMESIZE IF_NAMESIZE

 int mlx5_auxiliary_get_ifindex(const char *sf_name);
+void mlx5_mempool_event_cb(enum rte_mempool_event event,
+                           struct rte_mempool *mp, void *arg);

 #endif /* RTE_PMD_MLX5_OS_H_ */
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index f84e061fe7..d0bc7c7007 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -178,6 +178,9 @@
 /* Device parameter to configure allow or prevent duplicate rules pattern. */
 #define MLX5_ALLOW_DUPLICATE_PATTERN "allow_duplicate_pattern"

+/* Device parameter to configure implicit registration of mempool memory. */
+#define MLX5_MR_MEMPOOL_REG_EN "mr_mempool_reg_en"
+
 /* Shared memory between primary and secondary processes. */
 struct mlx5_shared_data *mlx5_shared_data;

@@ -1085,6 +1088,120 @@ mlx5_alloc_rxtx_uars(struct mlx5_dev_ctx_shared *sh,
         return err;
 }

+/**
+ * Register the mempool for the protection domain.
+ *
+ * @param sh
+ *   Pointer to the device shared context.
+ * @param mp
+ *   Mempool being registered.
+ */
+static void
+mlx5_dev_ctx_shared_mempool_register(struct mlx5_dev_ctx_shared *sh,
+                                     struct rte_mempool *mp)
+{
+        struct mlx5_mp_id mp_id;
+
+        mlx5_mp_id_init(&mp_id, 0);
+        if (mlx5_mr_mempool_register(&sh->share_cache, sh->pd, mp, &mp_id) < 0)
+                DRV_LOG(ERR, "Failed to register mempool %s for PD %p: %s",
+                        mp->name, sh->pd, rte_strerror(rte_errno));
+}
+
+/**
+ * Unregister the mempool from the protection domain.
+ *
+ * @param sh
+ *   Pointer to the device shared context.
+ * @param mp
+ *   Mempool being unregistered.
+ */
+static void
+mlx5_dev_ctx_shared_mempool_unregister(struct mlx5_dev_ctx_shared *sh,
+                                       struct rte_mempool *mp)
+{
+        struct mlx5_mp_id mp_id;
+
+        mlx5_mp_id_init(&mp_id, 0);
+        if (mlx5_mr_mempool_unregister(&sh->share_cache, mp, &mp_id) < 0)
+                DRV_LOG(WARNING, "Failed to unregister mempool %s for PD %p: %s",
+                        mp->name, sh->pd, rte_strerror(rte_errno));
+}
+
+/**
+ * rte_mempool_walk() callback to register mempools
+ * for the protection domain.
+ *
+ * @param mp
+ *   The mempool being walked.
+ * @param arg
+ *   Pointer to the device shared context.
+ */
+static void
+mlx5_dev_ctx_shared_mempool_register_cb(struct rte_mempool *mp, void *arg)
+{
+        mlx5_dev_ctx_shared_mempool_register
+                        ((struct mlx5_dev_ctx_shared *)arg, mp);
+}
+
+/**
+ * rte_mempool_walk() callback to unregister mempools
+ * from the protection domain.
+ *
+ * @param mp
+ *   The mempool being walked.
+ * @param arg
+ *   Pointer to the device shared context.
+ */
+static void
+mlx5_dev_ctx_shared_mempool_unregister_cb(struct rte_mempool *mp, void *arg)
+{
+        mlx5_dev_ctx_shared_mempool_unregister
+                        ((struct mlx5_dev_ctx_shared *)arg, mp);
+}
+
+/**
+ * Mempool life cycle callback for Ethernet devices.
+ *
+ * @param event
+ *   Mempool life cycle event.
+ * @param mp
+ *   Associated mempool.
+ * @param arg
+ *   Pointer to a device shared context.
+ */
+static void
+mlx5_dev_ctx_shared_mempool_event_cb(enum rte_mempool_event event,
+                                     struct rte_mempool *mp, void *arg)
+{
+        struct mlx5_dev_ctx_shared *sh = arg;
+
+        switch (event) {
+        case RTE_MEMPOOL_EVENT_CREATE:
+                mlx5_dev_ctx_shared_mempool_register(sh, mp);
+                break;
+        case RTE_MEMPOOL_EVENT_DESTROY:
+                mlx5_dev_ctx_shared_mempool_unregister(sh, mp);
+                break;
+        }
+}
+
+int
+mlx5_dev_ctx_shared_mempool_subscribe(struct mlx5_dev_ctx_shared *sh)
+{
+        int ret;
+
+        /* Callback for this shared context may be already registered. */
+        ret = rte_mempool_event_callback_register
+                        (mlx5_dev_ctx_shared_mempool_event_cb, sh);
+        if (ret != 0 && rte_errno != EEXIST)
+                return ret;
+        /* Register mempools only once for this shared context. */
+        if (ret == 0)
+                rte_mempool_walk(mlx5_dev_ctx_shared_mempool_register_cb, sh);
+        return 0;
+}
+
 /**
  * Allocate shared device context. If there is multiport device the
  * master and representors will share this context, if there is single
@@ -1282,6 +1399,8 @@ mlx5_alloc_shared_dev_ctx(const struct mlx5_dev_spawn_data *spawn,
 void
 mlx5_free_shared_dev_ctx(struct mlx5_dev_ctx_shared *sh)
 {
+        int ret;
+
         pthread_mutex_lock(&mlx5_dev_ctx_list_mutex);
 #ifdef RTE_LIBRTE_MLX5_DEBUG
         /* Check the object presence in the list. */
@@ -1302,6 +1421,12 @@ mlx5_free_shared_dev_ctx(struct mlx5_dev_ctx_shared *sh)
         MLX5_ASSERT(rte_eal_process_type() == RTE_PROC_PRIMARY);
         if (--sh->refcnt)
                 goto exit;
+        /* Stop watching for mempool events and unregister all mempools. */
+        ret = rte_mempool_event_callback_unregister
+                        (mlx5_dev_ctx_shared_mempool_event_cb, sh);
+        if (ret == 0 || rte_errno != ENOENT)
+                rte_mempool_walk(mlx5_dev_ctx_shared_mempool_unregister_cb,
+                                 sh);
         /* Remove from memory callback device list. */
         rte_rwlock_write_lock(&mlx5_shared_data->mem_event_rwlock);
         LIST_REMOVE(sh, mem_event_cb);
@@ -1991,6 +2116,8 @@ mlx5_args_check(const char *key, const char *val, void *opaque)
                 config->decap_en = !!tmp;
         } else if (strcmp(MLX5_ALLOW_DUPLICATE_PATTERN, key) == 0) {
                 config->allow_duplicate_pattern = !!tmp;
+        } else if (strcmp(MLX5_MR_MEMPOOL_REG_EN, key) == 0) {
+                config->mr_mempool_reg_en = !!tmp;
         } else {
                 DRV_LOG(WARNING, "%s: unknown parameter", key);
                 rte_errno = EINVAL;
@@ -2051,6 +2178,7 @@ mlx5_args(struct mlx5_dev_config *config, struct rte_devargs *devargs)
                 MLX5_SYS_MEM_EN,
                 MLX5_DECAP_EN,
                 MLX5_ALLOW_DUPLICATE_PATTERN,
+                MLX5_MR_MEMPOOL_REG_EN,
                 NULL,
         };
         struct rte_kvargs *kvlist;
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index e02714e231..1f6944ba9a 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -155,6 +155,13 @@ struct mlx5_flow_dump_ack {
 /** Key string for IPC. */
 #define MLX5_MP_NAME "net_mlx5_mp"

+/** Initialize a multi-process ID. */
+static inline void
+mlx5_mp_id_init(struct mlx5_mp_id *mp_id, uint16_t port_id)
+{
+        mp_id->port_id = port_id;
+        strlcpy(mp_id->name, MLX5_MP_NAME, RTE_MP_MAX_NAME_LEN);
+}

 LIST_HEAD(mlx5_dev_list, mlx5_dev_ctx_shared);

@@ -175,6 +182,9 @@ struct mlx5_local_data {

 extern struct mlx5_shared_data *mlx5_shared_data;

+/* Exposed to copy into the shared data in OS-specific module. */
+extern int mlx5_net_mempool_slot;
+
 /* Dev ops structs */
 extern const struct eth_dev_ops mlx5_dev_ops;
 extern const struct eth_dev_ops mlx5_dev_sec_ops;
@@ -270,6 +280,8 @@ struct mlx5_dev_config {
         unsigned int dv_miss_info:1; /* restore packet after partial hw miss */
         unsigned int allow_duplicate_pattern:1;
         /* Allow/Prevent the duplicate rules pattern. */
+        unsigned int mr_mempool_reg_en:1;
+        /* Allow/prevent implicit mempool memory registration. */
         struct {
                 unsigned int enabled:1; /* Whether MPRQ is enabled. */
                 unsigned int stride_num_n; /* Number of strides. */
@@ -1497,6 +1509,7 @@ struct mlx5_dev_ctx_shared *
 mlx5_alloc_shared_dev_ctx(const struct mlx5_dev_spawn_data *spawn,
                           const struct mlx5_dev_config *config);
 void mlx5_free_shared_dev_ctx(struct mlx5_dev_ctx_shared *sh);
+int mlx5_dev_ctx_shared_mempool_subscribe(struct mlx5_dev_ctx_shared *sh);
 void mlx5_free_table_hash_list(struct mlx5_priv *priv);
 int mlx5_alloc_table_hash_list(struct mlx5_priv *priv);
 void mlx5_set_min_inline(struct mlx5_dev_spawn_data *spawn,
diff --git a/drivers/net/mlx5/mlx5_mr.c b/drivers/net/mlx5/mlx5_mr.c
index 44afda731f..1cd7d4ced0 100644
--- a/drivers/net/mlx5/mlx5_mr.c
+++ b/drivers/net/mlx5/mlx5_mr.c
@@ -128,9 +128,36 @@ mlx5_tx_addr2mr_bh(struct mlx5_txq_data *txq, uintptr_t addr)
 uint32_t
 mlx5_tx_mb2mr_bh(struct mlx5_txq_data *txq, struct rte_mbuf *mb)
 {
+        struct mlx5_txq_ctrl *txq_ctrl =
+                container_of(txq, struct mlx5_txq_ctrl, txq);
+        struct mlx5_mr_ctrl *mr_ctrl = &txq->mr_ctrl;
+        struct mlx5_priv *priv = txq_ctrl->priv;
         uintptr_t addr = (uintptr_t)mb->buf_addr;
         uint32_t lkey;

+        if (priv->config.mr_mempool_reg_en) {
+                struct rte_mempool *mp = NULL;
+                struct mlx5_mprq_buf *buf;
+
+                if (!RTE_MBUF_HAS_EXTBUF(mb)) {
+                        mp = mlx5_mb2mp(mb);
+                } else if (mb->shinfo->free_cb == mlx5_mprq_buf_free_cb) {
+                        /* Recover MPRQ mempool. */
+                        buf = mb->shinfo->fcb_opaque;
+                        mp = buf->mp;
+                }
+                if (mp != NULL) {
+                        lkey = mlx5_mr_mempool2mr_bh(&priv->sh->share_cache,
+                                                     mr_ctrl, mp, addr);
+                        /*
+                         * Lookup can only fail on invalid input, e.g. "addr"
+                         * is not from "mp" or "mp" has MEMPOOL_F_NON_IO set.
+                         */
+                        if (lkey != UINT32_MAX)
+                                return lkey;
+                }
+                /* Fallback for generic mechanism in corner cases. */
+        }
         lkey = mlx5_tx_addr2mr_bh(txq, addr);
         if (lkey == UINT32_MAX && rte_errno == ENXIO) {
                 /* Mempool may have externally allocated memory. */
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 54173bfacb..6a027f87bf 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -1124,6 +1124,13 @@ mlx5_dev_start(struct rte_eth_dev *dev)
                         dev->data->port_id, strerror(rte_errno));
                 goto error;
         }
+        if (priv->config.mr_mempool_reg_en) {
+                if (mlx5_dev_ctx_shared_mempool_subscribe(priv->sh) != 0) {
+                        DRV_LOG(ERR, "port %u failed to subscribe for mempool life cycle: %s",
+                                dev->data->port_id, rte_strerror(rte_errno));
+                        goto error;
+                }
+        }
         rte_wmb();
         dev->tx_pkt_burst = mlx5_select_tx_function(dev);
         dev->rx_pkt_burst = mlx5_select_rx_function(dev);
@@ -1193,11 +1200,10 @@ mlx5_dev_stop(struct rte_eth_dev *dev)
         if (priv->obj_ops.lb_dummy_queue_release)
                 priv->obj_ops.lb_dummy_queue_release(dev);
         mlx5_txpp_stop(dev);
-
         return 0;
 }

-/**
+/*
  * Enable traffic flows configured by control plane
  *
  * @param dev
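
The subscription logic added to mlx5.c above relies only on the generic mempool
event API introduced earlier in this series: register a life cycle callback with
rte_mempool_event_callback_register(), then do one rte_mempool_walk() pass to
catch mempools that already existed. A minimal stand-alone sketch of that same
pattern follows; it is not part of the patch, the my_pd_* names are invented
stand-ins for the per-PD shared context, and the event names
(RTE_MEMPOOL_EVENT_CREATE/DESTROY) are as defined by this series.

#include <stdio.h>

#include <rte_errno.h>
#include <rte_mempool.h>

/* Hypothetical per-PD state; stands in for struct mlx5_dev_ctx_shared. */
struct my_pd_ctx {
        const char *name;
};

/* Stub DMA (un)registration, where a driver would create/destroy MRs. */
static void
my_pd_register_mempool(struct my_pd_ctx *ctx, struct rte_mempool *mp)
{
        printf("%s: register mempool %s\n", ctx->name, mp->name);
}

static void
my_pd_unregister_mempool(struct my_pd_ctx *ctx, struct rte_mempool *mp)
{
        printf("%s: unregister mempool %s\n", ctx->name, mp->name);
}

/* rte_mempool_walk() callback: register one pre-existing mempool. */
static void
my_pd_register_walk_cb(struct rte_mempool *mp, void *arg)
{
        my_pd_register_mempool(arg, mp);
}

/* Mempool life cycle callback, same shape as in the patch. */
static void
my_pd_mempool_event_cb(enum rte_mempool_event event,
                       struct rte_mempool *mp, void *arg)
{
        struct my_pd_ctx *ctx = arg;

        switch (event) {
        case RTE_MEMPOOL_EVENT_CREATE:
                my_pd_register_mempool(ctx, mp);
                break;
        case RTE_MEMPOOL_EVENT_DESTROY:
                my_pd_unregister_mempool(ctx, mp);
                break;
        }
}

/* Subscribe once per PD; EEXIST means the callback is already installed. */
static int
my_pd_subscribe(struct my_pd_ctx *ctx)
{
        int ret;

        ret = rte_mempool_event_callback_register(my_pd_mempool_event_cb, ctx);
        if (ret != 0 && rte_errno != EEXIST)
                return ret;
        /* Catch up on mempools created before the callback was installed. */
        if (ret == 0)
                rte_mempool_walk(my_pd_register_walk_cb, ctx);
        return 0;
}

On the application side nothing changes: implicit registration is on by default
and can be switched off per device with the new devarg, for example
"-a <PCI_BDF>,mr_mempool_reg_en=0" on the EAL command line (the device address
here is a placeholder).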