From patchwork Tue Aug 17 13:44:24 2021
X-Patchwork-Submitter: Michael Baum
X-Patchwork-Id: 96994
X-Patchwork-Delegate: rasland@nvidia.com
From: Michael Baum
CC: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
Date: Tue, 17 Aug 2021 16:44:24 +0300
Message-ID: <20210817134441.1966618-5-michaelba@nvidia.com>
In-Reply-To: <20210817134441.1966618-1-michaelba@nvidia.com>
References: <20210817134441.1966618-1-michaelba@nvidia.com>
Subject: [dpdk-dev] [RFC 04/21] compress/mlx5: use context device structure
List-Id: DPDK patches and discussions

Use the common context device structure as a priv field, instead of
keeping a separate ibv context, Protection Domain and PD number in the
compress driver private structure.
Signed-off-by: Michael Baum
---
 drivers/compress/mlx5/mlx5_compress.c | 110 ++++++++++----------------
 1 file changed, 42 insertions(+), 68 deletions(-)

diff --git a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c
index 883e720ec1..e906ddb066 100644
--- a/drivers/compress/mlx5/mlx5_compress.c
+++ b/drivers/compress/mlx5/mlx5_compress.c
@@ -35,14 +35,12 @@ struct mlx5_compress_xform {
 
 struct mlx5_compress_priv {
 	TAILQ_ENTRY(mlx5_compress_priv) next;
-	struct ibv_context *ctx; /* Device context. */
+	struct mlx5_dev_ctx *dev_ctx; /* Device context. */
 	struct rte_compressdev *cdev;
 	void *uar;
-	uint32_t pdn; /* Protection Domain number. */
 	uint8_t min_block_size;
 	uint8_t sq_ts_format; /* Whether SQ supports timestamp formats. */
 	/* Minimum huffman block size supported by the device. */
-	struct ibv_pd *pd;
 	struct rte_compressdev_config dev_config;
 	LIST_HEAD(xform_list, mlx5_compress_xform) xform_list;
 	rte_spinlock_t xform_sl;
@@ -185,7 +183,7 @@ mlx5_compress_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
 	struct mlx5_devx_create_sq_attr sq_attr = {
 		.user_index = qp_id,
 		.wq_attr = (struct mlx5_devx_wq_attr){
-			.pd = priv->pdn,
+			.pd = priv->dev_ctx->pdn,
 			.uar_page = mlx5_os_get_devx_uar_page_id(priv->uar),
 		},
 	};
@@ -228,24 +226,24 @@ mlx5_compress_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
 	qp->priv = priv;
 	qp->ops = (struct rte_comp_op **)RTE_ALIGN((uintptr_t)(qp + 1),
 						   RTE_CACHE_LINE_SIZE);
-	if (mlx5_common_verbs_reg_mr(priv->pd, opaq_buf, qp->entries_n *
-					sizeof(struct mlx5_gga_compress_opaque),
+	if (mlx5_common_verbs_reg_mr(priv->dev_ctx->pd, opaq_buf,
+		qp->entries_n * sizeof(struct mlx5_gga_compress_opaque),
 				     &qp->opaque_mr) != 0) {
 		rte_free(opaq_buf);
 		DRV_LOG(ERR, "Failed to register opaque MR.");
 		rte_errno = ENOMEM;
 		goto err;
 	}
-	ret = mlx5_devx_cq_create(priv->ctx, &qp->cq, log_ops_n, &cq_attr,
-				  socket_id);
+	ret = mlx5_devx_cq_create(priv->dev_ctx->ctx, &qp->cq, log_ops_n,
+				  &cq_attr, socket_id);
 	if (ret != 0) {
 		DRV_LOG(ERR, "Failed to create CQ.");
 		goto err;
 	}
 	sq_attr.cqn = qp->cq.cq->id;
 	sq_attr.ts_format = mlx5_ts_format_conv(priv->sq_ts_format);
-	ret = mlx5_devx_sq_create(priv->ctx, &qp->sq, log_ops_n, &sq_attr,
-				  socket_id);
+	ret = mlx5_devx_sq_create(priv->dev_ctx->ctx, &qp->sq, log_ops_n,
+				  &sq_attr, socket_id);
 	if (ret != 0) {
 		DRV_LOG(ERR, "Failed to create SQ.");
 		goto err;
@@ -465,7 +463,8 @@ mlx5_compress_addr2mr(struct mlx5_compress_priv *priv, uintptr_t addr,
 	if (likely(lkey != UINT32_MAX))
 		return lkey;
 	/* Take slower bottom-half on miss. */
-	return mlx5_mr_addr2mr_bh(priv->pd, 0, &priv->mr_scache, mr_ctrl, addr,
+	return mlx5_mr_addr2mr_bh(priv->dev_ctx->pd, 0, &priv->mr_scache,
+				  mr_ctrl, addr,
 				  !!(ol_flags & EXT_ATTACHED_MBUF));
 }
 
@@ -689,57 +688,19 @@ mlx5_compress_dequeue_burst(void *queue_pair, struct rte_comp_op **ops,
 static void
 mlx5_compress_hw_global_release(struct mlx5_compress_priv *priv)
 {
-	if (priv->pd != NULL) {
-		claim_zero(mlx5_glue->dealloc_pd(priv->pd));
-		priv->pd = NULL;
-	}
 	if (priv->uar != NULL) {
 		mlx5_glue->devx_free_uar(priv->uar);
 		priv->uar = NULL;
 	}
 }
 
-static int
-mlx5_compress_pd_create(struct mlx5_compress_priv *priv)
-{
-#ifdef HAVE_IBV_FLOW_DV_SUPPORT
-	struct mlx5dv_obj obj;
-	struct mlx5dv_pd pd_info;
-	int ret;
-
-	priv->pd = mlx5_glue->alloc_pd(priv->ctx);
-	if (priv->pd == NULL) {
-		DRV_LOG(ERR, "Failed to allocate PD.");
-		return errno ? -errno : -ENOMEM;
-	}
-	obj.pd.in = priv->pd;
-	obj.pd.out = &pd_info;
-	ret = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_PD);
-	if (ret != 0) {
-		DRV_LOG(ERR, "Fail to get PD object info.");
-		mlx5_glue->dealloc_pd(priv->pd);
-		priv->pd = NULL;
-		return -errno;
-	}
-	priv->pdn = pd_info.pdn;
-	return 0;
-#else
-	(void)priv;
-	DRV_LOG(ERR, "Cannot get pdn - no DV support.");
-	return -ENOTSUP;
-#endif /* HAVE_IBV_FLOW_DV_SUPPORT */
-}
-
 static int
 mlx5_compress_hw_global_prepare(struct mlx5_compress_priv *priv)
 {
-	if (mlx5_compress_pd_create(priv) != 0)
-		return -1;
-	priv->uar = mlx5_devx_alloc_uar(priv->ctx, -1);
+	priv->uar = mlx5_devx_alloc_uar(priv->dev_ctx->ctx, -1);
 	if (priv->uar == NULL || mlx5_os_get_devx_uar_reg_addr(priv->uar) ==
 	    NULL) {
 		rte_errno = errno;
-		claim_zero(mlx5_glue->dealloc_pd(priv->pd));
 		DRV_LOG(ERR, "Failed to allocate UAR.");
 		return -1;
 	}
@@ -775,7 +736,8 @@ mlx5_compress_mr_mem_event_cb(enum rte_mem_event event_type, const void *addr,
 		/* Iterate all the existing mlx5 devices. */
 		TAILQ_FOREACH(priv, &mlx5_compress_priv_list, next)
 			mlx5_free_mr_by_addr(&priv->mr_scache,
-					     priv->ctx->device->name,
+					     mlx5_os_get_ctx_device_name
+					     (priv->dev_ctx->ctx),
 					     addr, len);
 		pthread_mutex_unlock(&priv_list_lock);
 		break;
@@ -788,60 +750,70 @@ mlx5_compress_mr_mem_event_cb(enum rte_mem_event event_type, const void *addr,
 static int
 mlx5_compress_dev_probe(struct rte_device *dev)
 {
-	struct ibv_device *ibv;
 	struct rte_compressdev *cdev;
-	struct ibv_context *ctx;
+	struct mlx5_dev_ctx *dev_ctx;
 	struct mlx5_compress_priv *priv;
 	struct mlx5_hca_attr att = { 0 };
 	struct rte_compressdev_pmd_init_params init_params = {
 		.name = "",
 		.socket_id = dev->numa_node,
 	};
+	const char *ibdev_name;
+	int ret;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
 		DRV_LOG(ERR, "Non-primary process type is not supported.");
 		rte_errno = ENOTSUP;
 		return -rte_errno;
 	}
-	ibv = mlx5_os_get_ibv_dev(dev);
-	if (ibv == NULL)
+	dev_ctx = mlx5_malloc(MLX5_MEM_ZERO, sizeof(struct mlx5_dev_ctx),
+			      RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
+	if (dev_ctx == NULL) {
+		DRV_LOG(ERR, "Device context allocation failure.");
+		rte_errno = ENOMEM;
 		return -rte_errno;
-	ctx = mlx5_glue->dv_open_device(ibv);
-	if (ctx == NULL) {
-		DRV_LOG(ERR, "Failed to open IB device \"%s\".", ibv->name);
+	}
+	ret = mlx5_dev_ctx_prepare(dev_ctx, dev, MLX5_CLASS_COMPRESS);
+	if (ret < 0) {
+		DRV_LOG(ERR, "Failed to create device context.");
+		mlx5_free(dev_ctx);
 		rte_errno = ENODEV;
 		return -rte_errno;
 	}
-	if (mlx5_devx_cmd_query_hca_attr(ctx, &att) != 0 ||
+	ibdev_name = mlx5_os_get_ctx_device_name(dev_ctx->ctx);
+	if (mlx5_devx_cmd_query_hca_attr(dev_ctx->ctx, &att) != 0 ||
 	    att.mmo_compress_en == 0 || att.mmo_decompress_en == 0 ||
 	    att.mmo_dma_en == 0) {
 		DRV_LOG(ERR, "Not enough capabilities to support compress "
 				"operations, maybe old FW/OFED version?");
-		claim_zero(mlx5_glue->close_device(ctx));
+		mlx5_dev_ctx_release(dev_ctx);
+		mlx5_free(dev_ctx);
 		rte_errno = ENOTSUP;
 		return -ENOTSUP;
 	}
-	cdev = rte_compressdev_pmd_create(ibv->name, dev,
+	cdev = rte_compressdev_pmd_create(ibdev_name, dev,
 					  sizeof(*priv), &init_params);
 	if (cdev == NULL) {
-		DRV_LOG(ERR, "Failed to create device \"%s\".", ibv->name);
-		claim_zero(mlx5_glue->close_device(ctx));
+		DRV_LOG(ERR, "Failed to create device \"%s\".", ibdev_name);
+		mlx5_dev_ctx_release(dev_ctx);
+		mlx5_free(dev_ctx);
 		return -ENODEV;
 	}
 	DRV_LOG(INFO,
-		"Compress device %s was created successfully.", ibv->name);
+		"Compress device %s was created successfully.", ibdev_name);
 	cdev->dev_ops = &mlx5_compress_ops;
 	cdev->dequeue_burst = mlx5_compress_dequeue_burst;
 	cdev->enqueue_burst = mlx5_compress_enqueue_burst;
 	cdev->feature_flags = RTE_COMPDEV_FF_HW_ACCELERATED;
 	priv = cdev->data->dev_private;
-	priv->ctx = ctx;
+	priv->dev_ctx = dev_ctx;
 	priv->cdev = cdev;
 	priv->min_block_size = att.compress_min_block_size;
 	priv->sq_ts_format = att.sq_ts_format;
 	if (mlx5_compress_hw_global_prepare(priv) != 0) {
 		rte_compressdev_pmd_destroy(priv->cdev);
-		claim_zero(mlx5_glue->close_device(priv->ctx));
+		mlx5_dev_ctx_release(priv->dev_ctx);
+		mlx5_free(priv->dev_ctx);
 		return -1;
 	}
 	if (mlx5_mr_btree_init(&priv->mr_scache.cache,
@@ -849,7 +821,8 @@ mlx5_compress_dev_probe(struct rte_device *dev)
 		DRV_LOG(ERR, "Failed to allocate shared cache MR memory.");
 		mlx5_compress_hw_global_release(priv);
 		rte_compressdev_pmd_destroy(priv->cdev);
-		claim_zero(mlx5_glue->close_device(priv->ctx));
+		mlx5_dev_ctx_release(priv->dev_ctx);
+		mlx5_free(priv->dev_ctx);
 		rte_errno = ENOMEM;
 		return -rte_errno;
 	}
@@ -885,7 +858,8 @@ mlx5_compress_dev_remove(struct rte_device *dev)
 		mlx5_mr_release_cache(&priv->mr_scache);
 		mlx5_compress_hw_global_release(priv);
 		rte_compressdev_pmd_destroy(priv->cdev);
-		claim_zero(mlx5_glue->close_device(priv->ctx));
+		mlx5_dev_ctx_release(priv->dev_ctx);
+		mlx5_free(priv->dev_ctx);
 	}
 	return 0;
 }