From patchwork Wed Nov 3 07:58:28 2021
X-Patchwork-Submitter: Xueming Li <xuemingl@nvidia.com>
X-Patchwork-Id: 103612
X-Patchwork-Delegate: rasland@nvidia.com
From: Xueming Li <xuemingl@nvidia.com>
To: dev@dpdk.org
CC: Lior Margalit, Matan Azrad, Viacheslav Ovsiienko
Date: Wed, 3 Nov 2021 15:58:28 +0800
Message-ID: <20211103075838.1486056-5-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20211103075838.1486056-1-xuemingl@nvidia.com>
References: <20210727034204.20649-1-xuemingl@nvidia.com>
 <20211103075838.1486056-1-xuemingl@nvidia.com>
MIME-Version: 1.0
Subject: [dpdk-dev] [PATCH v3 04/14] common/mlx5: support receive memory pool
List-Id: DPDK patches and discussions
Sender: "dev" <dev-bounces@dpdk.org>

The hardware Receive Memory Pool (RMP) object holds the destination for
incoming packets/messages that are routed to the RMP through RQs. An RMP
enables sharing of memory across multiple Receive Queues: multiple RQs
can be attached to the same RMP and consume memory from that shared pool.
When RMPs are used, completions are reported to the CQ pointed to by the
RQ, and the user index set at RQ creation time is carried in the
completion entry.

This patch enables RMP-based RQs: an RMP is created when
mlx5_devx_rq.rmp is set.
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
---
 drivers/common/mlx5/mlx5_common_devx.c | 295 +++++++++++++++++++++----
 drivers/common/mlx5/mlx5_common_devx.h |  19 +-
 drivers/net/mlx5/mlx5_devx.c           |   4 +-
 3 files changed, 271 insertions(+), 47 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_common_devx.c b/drivers/common/mlx5/mlx5_common_devx.c
index 825f84b1833..85b5282061a 100644
--- a/drivers/common/mlx5/mlx5_common_devx.c
+++ b/drivers/common/mlx5/mlx5_common_devx.c
@@ -271,6 +271,39 @@ mlx5_devx_sq_create(void *ctx, struct mlx5_devx_sq *sq_obj, uint16_t log_wqbb_n,
 	return -rte_errno;
 }
 
+/**
+ * Destroy DevX Receive Queue resources.
+ *
+ * @param[in] rq_res
+ *   DevX RQ resource to destroy.
+ */
+static void
+mlx5_devx_wq_res_destroy(struct mlx5_devx_wq_res *rq_res)
+{
+	if (rq_res->umem_obj)
+		claim_zero(mlx5_os_umem_dereg(rq_res->umem_obj));
+	if (rq_res->umem_buf)
+		mlx5_free((void *)(uintptr_t)rq_res->umem_buf);
+	memset(rq_res, 0, sizeof(*rq_res));
+}
+
+/**
+ * Destroy DevX Receive Memory Pool.
+ *
+ * @param[in] rmp
+ *   DevX RMP to destroy.
+ */
+static void
+mlx5_devx_rmp_destroy(struct mlx5_devx_rmp *rmp)
+{
+	MLX5_ASSERT(rmp->ref_cnt == 0);
+	if (rmp->rmp) {
+		claim_zero(mlx5_devx_cmd_destroy(rmp->rmp));
+		rmp->rmp = NULL;
+	}
+	mlx5_devx_wq_res_destroy(&rmp->wq);
+}
+
 /**
  * Destroy DevX Queue Pair.
  *
@@ -389,55 +422,48 @@ mlx5_devx_qp_create(void *ctx, struct mlx5_devx_qp *qp_obj, uint16_t log_wqbb_n,
 void
 mlx5_devx_rq_destroy(struct mlx5_devx_rq *rq)
 {
-	if (rq->rq)
+	if (rq->rq) {
 		claim_zero(mlx5_devx_cmd_destroy(rq->rq));
-	if (rq->umem_obj)
-		claim_zero(mlx5_os_umem_dereg(rq->umem_obj));
-	if (rq->umem_buf)
-		mlx5_free((void *)(uintptr_t)rq->umem_buf);
+		rq->rq = NULL;
+		if (rq->rmp)
+			rq->rmp->ref_cnt--;
+	}
+	if (rq->rmp == NULL) {
+		mlx5_devx_wq_res_destroy(&rq->wq);
+	} else {
+		if (rq->rmp->ref_cnt == 0)
+			mlx5_devx_rmp_destroy(rq->rmp);
+	}
 }
 
 /**
- * Create Receive Queue using DevX API.
- *
- * Get a pointer to partially initialized attributes structure, and updates the
- * following fields:
- *   wq_umem_valid
- *   wq_umem_id
- *   wq_umem_offset
- *   dbr_umem_valid
- *   dbr_umem_id
- *   dbr_addr
- *   log_wq_pg_sz
- * All other fields are updated by caller.
+ * Create WQ resources using DevX API.
  *
  * @param[in] ctx
  *   Context returned from mlx5 open_device() glue function.
- * @param[in/out] rq_obj
- *   Pointer to RQ to create.
 * @param[in] wqe_size
 *   Size of WQE structure.
 * @param[in] log_wqbb_n
 *   Log of number of WQBBs in queue.
- * @param[in] attr
- *   Pointer to RQ attributes structure.
 * @param[in] socket
 *   Socket to use for allocation.
+ * @param[out] wq_attr
+ *   Pointer to WQ attributes structure.
+ * @param[out] wq_res
+ *   Pointer to WQ resource to create.
 *
 * @return
 *   0 on success, a negative errno value otherwise and rte_errno is set.
 */
-int
-mlx5_devx_rq_create(void *ctx, struct mlx5_devx_rq *rq_obj, uint32_t wqe_size,
-		    uint16_t log_wqbb_n,
-		    struct mlx5_devx_create_rq_attr *attr, int socket)
+static int
+mlx5_devx_wq_init(void *ctx, uint32_t wqe_size, uint16_t log_wqbb_n, int socket,
+		  struct mlx5_devx_wq_attr *wq_attr,
+		  struct mlx5_devx_wq_res *wq_res)
 {
-	struct mlx5_devx_obj *rq = NULL;
 	struct mlx5dv_devx_umem *umem_obj = NULL;
 	void *umem_buf = NULL;
 	size_t alignment = MLX5_WQE_BUF_ALIGNMENT;
 	uint32_t umem_size, umem_dbrec;
-	uint16_t rq_size = 1 << log_wqbb_n;
 	int ret;
 
 	if (alignment == (size_t)-1) {
@@ -446,7 +472,7 @@ mlx5_devx_rq_create(void *ctx, struct mlx5_devx_rq *rq_obj, uint32_t wqe_size,
 		return -rte_errno;
 	}
 	/* Allocate memory buffer for WQEs and doorbell record. */
-	umem_size = wqe_size * rq_size;
+	umem_size = wqe_size * (1 << log_wqbb_n);
 	umem_dbrec = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
 	umem_size += MLX5_DBR_SIZE;
 	umem_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, umem_size,
@@ -464,14 +490,60 @@ mlx5_devx_rq_create(void *ctx, struct mlx5_devx_rq *rq_obj, uint32_t wqe_size,
 		rte_errno = errno;
 		goto error;
 	}
+	/* Fill WQ attributes for RQ/RMP object creation. */
+	wq_attr->wq_umem_valid = 1;
+	wq_attr->wq_umem_id = mlx5_os_get_umem_id(umem_obj);
+	wq_attr->wq_umem_offset = 0;
+	wq_attr->dbr_umem_valid = 1;
+	wq_attr->dbr_umem_id = wq_attr->wq_umem_id;
+	wq_attr->dbr_addr = umem_dbrec;
+	wq_attr->log_wq_pg_sz = MLX5_LOG_PAGE_SIZE;
 	/* Fill attributes for RQ object creation. */
-	attr->wq_attr.wq_umem_valid = 1;
-	attr->wq_attr.wq_umem_id = mlx5_os_get_umem_id(umem_obj);
-	attr->wq_attr.wq_umem_offset = 0;
-	attr->wq_attr.dbr_umem_valid = 1;
-	attr->wq_attr.dbr_umem_id = attr->wq_attr.wq_umem_id;
-	attr->wq_attr.dbr_addr = umem_dbrec;
-	attr->wq_attr.log_wq_pg_sz = MLX5_LOG_PAGE_SIZE;
+	wq_res->umem_buf = umem_buf;
+	wq_res->umem_obj = umem_obj;
+	wq_res->db_rec = RTE_PTR_ADD(umem_buf, umem_dbrec);
+	return 0;
+error:
+	ret = rte_errno;
+	if (umem_obj)
+		claim_zero(mlx5_os_umem_dereg(umem_obj));
+	if (umem_buf)
+		mlx5_free((void *)(uintptr_t)umem_buf);
+	rte_errno = ret;
+	return -rte_errno;
+}
+
+/**
+ * Create standalone Receive Queue using DevX API.
+ *
+ * @param[in] ctx
+ *   Context returned from mlx5 open_device() glue function.
+ * @param[in/out] rq_obj
+ *   Pointer to RQ to create.
+ * @param[in] wqe_size
+ *   Size of WQE structure.
+ * @param[in] log_wqbb_n
+ *   Log of number of WQBBs in queue.
+ * @param[in] attr
+ *   Pointer to RQ attributes structure.
+ * @param[in] socket
+ *   Socket to use for allocation.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_devx_rq_std_create(void *ctx, struct mlx5_devx_rq *rq_obj,
+			uint32_t wqe_size, uint16_t log_wqbb_n,
+			struct mlx5_devx_create_rq_attr *attr, int socket)
+{
+	struct mlx5_devx_obj *rq;
+	int ret;
+
+	ret = mlx5_devx_wq_init(ctx, wqe_size, log_wqbb_n, socket,
+				&attr->wq_attr, &rq_obj->wq);
+	if (ret != 0)
+		return ret;
 	/* Create receive queue object with DevX. */
 	rq = mlx5_devx_cmd_create_rq(ctx, attr, socket);
 	if (!rq) {
@@ -479,21 +551,160 @@ mlx5_devx_rq_create(void *ctx, struct mlx5_devx_rq *rq_obj, uint32_t wqe_size,
 		rte_errno = ENOMEM;
 		goto error;
 	}
-	rq_obj->umem_buf = umem_buf;
-	rq_obj->umem_obj = umem_obj;
 	rq_obj->rq = rq;
-	rq_obj->db_rec = RTE_PTR_ADD(rq_obj->umem_buf, umem_dbrec);
 	return 0;
 error:
 	ret = rte_errno;
-	if (umem_obj)
-		claim_zero(mlx5_os_umem_dereg(umem_obj));
-	if (umem_buf)
-		mlx5_free((void *)(uintptr_t)umem_buf);
+	mlx5_devx_wq_res_destroy(&rq_obj->wq);
+	rte_errno = ret;
+	return -rte_errno;
+}
+
+/**
+ * Create Receive Memory Pool using DevX API.
+ *
+ * @param[in] ctx
+ *   Context returned from mlx5 open_device() glue function.
+ * @param[in/out] rq_obj
+ *   Pointer to RQ to create.
+ * @param[in] wqe_size
+ *   Size of WQE structure.
+ * @param[in] log_wqbb_n
+ *   Log of number of WQBBs in queue.
+ * @param[in] attr
+ *   Pointer to RQ attributes structure.
+ * @param[in] socket
+ *   Socket to use for allocation.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_devx_rmp_create(void *ctx, struct mlx5_devx_rmp *rmp_obj,
+		     uint32_t wqe_size, uint16_t log_wqbb_n,
+		     struct mlx5_devx_wq_attr *wq_attr, int socket)
+{
+	struct mlx5_devx_create_rmp_attr rmp_attr = { 0 };
+	int ret;
+
+	if (rmp_obj->rmp != NULL)
+		return 0;
+	rmp_attr.wq_attr = *wq_attr;
+	ret = mlx5_devx_wq_init(ctx, wqe_size, log_wqbb_n, socket,
+				&rmp_attr.wq_attr, &rmp_obj->wq);
+	if (ret != 0)
+		return ret;
+	rmp_attr.state = MLX5_RMPC_STATE_RDY;
+	rmp_attr.basic_cyclic_rcv_wqe =
+		wq_attr->wq_type != MLX5_WQ_TYPE_CYCLIC_STRIDING_RQ;
+	/* Create receive memory pool object with DevX. */
+	rmp_obj->rmp = mlx5_devx_cmd_create_rmp(ctx, &rmp_attr, socket);
+	if (rmp_obj->rmp == NULL) {
+		DRV_LOG(ERR, "Can't create DevX RMP object.");
+		rte_errno = ENOMEM;
+		goto error;
+	}
+	return 0;
+error:
+	ret = rte_errno;
+	mlx5_devx_wq_res_destroy(&rmp_obj->wq);
+	rte_errno = ret;
+	return -rte_errno;
+}
+
+/**
+ * Create Shared Receive Queue based on RMP using DevX API.
+ *
+ * @param[in] ctx
+ *   Context returned from mlx5 open_device() glue function.
+ * @param[in/out] rq_obj
+ *   Pointer to RQ to create.
+ * @param[in] wqe_size
+ *   Size of WQE structure.
+ * @param[in] log_wqbb_n
+ *   Log of number of WQBBs in queue.
+ * @param[in] attr
+ *   Pointer to RQ attributes structure.
+ * @param[in] socket
+ *   Socket to use for allocation.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_devx_rq_shared_create(void *ctx, struct mlx5_devx_rq *rq_obj,
+			   uint32_t wqe_size, uint16_t log_wqbb_n,
+			   struct mlx5_devx_create_rq_attr *attr, int socket)
+{
+	struct mlx5_devx_obj *rq;
+	int ret;
+
+	ret = mlx5_devx_rmp_create(ctx, rq_obj->rmp, wqe_size, log_wqbb_n,
+				   &attr->wq_attr, socket);
+	if (ret != 0)
+		return ret;
+	attr->mem_rq_type = MLX5_RQC_MEM_RQ_TYPE_MEMORY_RQ_RMP;
+	attr->rmpn = rq_obj->rmp->rmp->id;
+	attr->flush_in_error_en = 0;
+	memset(&attr->wq_attr, 0, sizeof(attr->wq_attr));
+	/* Create receive queue object with DevX. */
+	rq = mlx5_devx_cmd_create_rq(ctx, attr, socket);
+	if (!rq) {
+		DRV_LOG(ERR, "Can't create DevX RMP RQ object.");
+		rte_errno = ENOMEM;
+		goto error;
+	}
+	rq_obj->rq = rq;
+	rq_obj->rmp->ref_cnt++;
+	return 0;
+error:
+	ret = rte_errno;
+	mlx5_devx_rq_destroy(rq_obj);
 	rte_errno = ret;
 	return -rte_errno;
 }
 
+/**
+ * Create Receive Queue using DevX API. Shared RQ is created only if rmp set.
+ *
+ * Get a pointer to partially initialized attributes structure, and updates the
+ * following fields:
+ *   wq_umem_valid
+ *   wq_umem_id
+ *   wq_umem_offset
+ *   dbr_umem_valid
+ *   dbr_umem_id
+ *   dbr_addr
+ *   log_wq_pg_sz
+ * All other fields are updated by caller.
+ *
+ * @param[in] ctx
+ *   Context returned from mlx5 open_device() glue function.
+ * @param[in/out] rq_obj
+ *   Pointer to RQ to create.
+ * @param[in] wqe_size
+ *   Size of WQE structure.
+ * @param[in] log_wqbb_n
+ *   Log of number of WQBBs in queue.
+ * @param[in] attr
+ *   Pointer to RQ attributes structure.
+ * @param[in] socket
+ *   Socket to use for allocation.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_devx_rq_create(void *ctx, struct mlx5_devx_rq *rq_obj,
+		    uint32_t wqe_size, uint16_t log_wqbb_n,
+		    struct mlx5_devx_create_rq_attr *attr, int socket)
+{
+	if (rq_obj->rmp == NULL)
+		return mlx5_devx_rq_std_create(ctx, rq_obj, wqe_size,
+					       log_wqbb_n, attr, socket);
+	return mlx5_devx_rq_shared_create(ctx, rq_obj, wqe_size,
+					  log_wqbb_n, attr, socket);
+}
 
 /**
  * Change QP state to RTS.
diff --git a/drivers/common/mlx5/mlx5_common_devx.h b/drivers/common/mlx5/mlx5_common_devx.h
index f699405f69b..7ceac040f8b 100644
--- a/drivers/common/mlx5/mlx5_common_devx.h
+++ b/drivers/common/mlx5/mlx5_common_devx.h
@@ -45,14 +45,27 @@ struct mlx5_devx_qp {
 	volatile uint32_t *db_rec; /* The QP doorbell record. */
 };
 
-/* DevX Receive Queue structure. */
-struct mlx5_devx_rq {
-	struct mlx5_devx_obj *rq; /* The RQ DevX object. */
+/* DevX Receive Queue resource structure. */
+struct mlx5_devx_wq_res {
 	void *umem_obj; /* The RQ umem object. */
 	volatile void *umem_buf;
 	volatile uint32_t *db_rec; /* The RQ doorbell record. */
 };
 
+/* DevX Receive Memory Pool structure. */
+struct mlx5_devx_rmp {
+	struct mlx5_devx_obj *rmp; /* The RMP DevX object. */
+	uint32_t ref_cnt; /* Reference count. */
+	struct mlx5_devx_wq_res wq;
+};
+
+/* DevX Receive Queue structure. */
+struct mlx5_devx_rq {
+	struct mlx5_devx_obj *rq; /* The RQ DevX object. */
+	struct mlx5_devx_rmp *rmp; /* Shared RQ RMP object. */
+	struct mlx5_devx_wq_res wq; /* WQ resource of standalone RQ. */
+};
+
 /* mlx5_common_devx.c */
 
 __rte_internal
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 424f77be790..443252df05d 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -515,8 +515,8 @@ mlx5_rxq_devx_obj_new(struct rte_eth_dev *dev, uint16_t idx)
 	ret = mlx5_devx_modify_rq(tmpl, MLX5_RXQ_MOD_RST2RDY);
 	if (ret)
 		goto error;
-	rxq_data->wqes = (void *)(uintptr_t)tmpl->rq_obj.umem_buf;
-	rxq_data->rq_db = (uint32_t *)(uintptr_t)tmpl->rq_obj.db_rec;
+	rxq_data->wqes = (void *)(uintptr_t)tmpl->rq_obj.wq.umem_buf;
+	rxq_data->rq_db = (uint32_t *)(uintptr_t)tmpl->rq_obj.wq.db_rec;
 	rxq_data->cq_arm_sn = 0;
 	rxq_data->cq_ci = 0;
 	mlx5_rxq_initialize(rxq_data);
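For reviewers, the sharing semantics introduced by mlx5_devx_rq_create()/mlx5_devx_rq_destroy() can be seen in isolation in the following stand-alone C sketch. It models only the lifecycle logic of this patch (lazy RMP creation by the first attached RQ, reference counting, teardown by the last detaching RQ); `test_rq`/`test_rmp` are hypothetical stand-ins, not driver types, and no DevX objects are involved:

```c
#include <stddef.h>

/* Hypothetical stand-in for struct mlx5_devx_rmp: only the fields
 * relevant to sharing are modeled. */
struct test_rmp {
	int live;             /* models the RMP DevX object existing */
	unsigned int ref_cnt; /* number of RQs attached, as in mlx5_devx_rmp */
};

/* Hypothetical stand-in for struct mlx5_devx_rq. */
struct test_rq {
	int live;             /* models the RQ DevX object existing */
	struct test_rmp *rmp; /* NULL => standalone RQ, as in mlx5_devx_rq */
};

/* Mirrors mlx5_devx_rq_create(): the pool is created lazily by the
 * first shared RQ; later RQs reuse it and only bump the refcount. */
void
test_rq_create(struct test_rq *rq)
{
	if (rq->rmp != NULL) {
		if (!rq->rmp->live)
			rq->rmp->live = 1; /* first user creates the pool */
		rq->rmp->ref_cnt++;
	}
	rq->live = 1;
}

/* Mirrors mlx5_devx_rq_destroy(): the pool is torn down only when the
 * last attached RQ goes away; standalone RQs free their own WQ. */
void
test_rq_destroy(struct test_rq *rq)
{
	if (!rq->live)
		return;
	rq->live = 0;
	if (rq->rmp != NULL && --rq->rmp->ref_cnt == 0)
		rq->rmp->live = 0; /* last user frees the pool */
}
```

With two RQs attached to one pool, destroying the first RQ leaves the pool alive with `ref_cnt == 1`; destroying the second drops `ref_cnt` to zero and tears the pool down, matching the `ref_cnt` handling in the patch.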