From patchwork Fri Aug 27 17:20:40 2021
X-Patchwork-Submitter: Kevin Laatz <kevin.laatz@intel.com>
X-Patchwork-Id: 97475
X-Patchwork-Delegate: thomas@monjalon.net
From: Kevin Laatz <kevin.laatz@intel.com>
To: dev@dpdk.org
Cc: bruce.richardson@intel.com, fengchengwen@huawei.com, jerinj@marvell.com,
 conor.walsh@intel.com, Kevin Laatz <kevin.laatz@intel.com>
Date: Fri, 27 Aug 2021 17:20:40 +0000
Message-Id: <20210827172048.558704-6-kevin.laatz@intel.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210827172048.558704-1-kevin.laatz@intel.com>
References: <20210827172048.558704-1-kevin.laatz@intel.com>
Subject: [dpdk-dev] [PATCH 05/13] dma/idxd: create dmadev instances on bus probe

When a suitable device is found during the bus scan/probe, create a
dmadev instance for each HW queue. Internal structures required for
device creation are also added.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
---
 drivers/dma/idxd/idxd_bus.c      | 20 ++++++++-
 drivers/dma/idxd/idxd_common.c   | 75 ++++++++++++++++++++++++++++++++
 drivers/dma/idxd/idxd_internal.h | 40 +++++++++++++++++
 drivers/dma/idxd/meson.build     |  1 +
 4 files changed, 135 insertions(+), 1 deletion(-)
 create mode 100644 drivers/dma/idxd/idxd_common.c
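
Note for reviewers (illustration only, not part of the patch): idxd_dmadev_create()
below sizes the batch completion and index rings with max_batches + 1 slots so that
"read index == write index" can only ever mean empty, never full. The stand-alone
sketch that follows demonstrates just that convention; the names (demo_ring,
RING_SLOTS) are hypothetical and none of this is driver code.

    #include <stdbool.h>
    #include <stdio.h>

    #define RING_SLOTS (4 + 1) /* e.g. max_batches = 4, plus one spare slot */

    struct demo_ring {
            unsigned short read;
            unsigned short write;
    };

    static bool demo_ring_empty(const struct demo_ring *r)
    {
            /* unambiguous thanks to the spare slot */
            return r->read == r->write;
    }

    static bool demo_ring_full(const struct demo_ring *r)
    {
            /* full when advancing write would make it catch up with read */
            return (unsigned short)((r->write + 1) % RING_SLOTS) == r->read;
    }

    int main(void)
    {
            struct demo_ring r = {0, 0};
            int i;

            printf("empty=%d full=%d\n", demo_ring_empty(&r), demo_ring_full(&r));
            for (i = 0; i < RING_SLOTS - 1; i++) /* fill every usable slot */
                    r.write = (r.write + 1) % RING_SLOTS;
            printf("empty=%d full=%d\n", demo_ring_empty(&r), demo_ring_full(&r));
            return 0;
    }
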
diff --git a/drivers/dma/idxd/idxd_bus.c b/drivers/dma/idxd/idxd_bus.c
index c08f0f473b..0f33500dfc 100644
--- a/drivers/dma/idxd/idxd_bus.c
+++ b/drivers/dma/idxd/idxd_bus.c
@@ -84,6 +84,18 @@ dsa_get_sysfs_path(void)
 	return path ? path : DSA_SYSFS_PATH;
 }
 
+static int
+idxd_dev_close(struct rte_dmadev *dev)
+{
+	struct idxd_dmadev *idxd = dev->data->dev_private;
+	munmap(idxd->portal, 0x1000);
+	return 0;
+}
+
+static const struct rte_dmadev_ops idxd_vdev_ops = {
+		.dev_close = idxd_dev_close,
+};
+
 static void *
 idxd_vdev_mmap_wq(struct rte_dsa_device *dev)
 {
@@ -205,7 +217,7 @@ idxd_probe_dsa(struct rte_dsa_device *dev)
 		return -1;
 	idxd.max_batch_size = ret;
 	idxd.qid = dev->addr.wq_id;
-	idxd.u.vdev.dsa_id = dev->addr.device_id;
+	idxd.u.bus.dsa_id = dev->addr.device_id;
 	idxd.sva_support = 1;
 
 	idxd.portal = idxd_vdev_mmap_wq(dev);
@@ -214,6 +226,12 @@ idxd_probe_dsa(struct rte_dsa_device *dev)
 		return -ENOENT;
 	}
 
+	ret = idxd_dmadev_create(dev->wq_name, &dev->device, &idxd, &idxd_vdev_ops);
+	if (ret) {
+		IDXD_PMD_ERR("Failed to create rawdev %s", dev->wq_name);
+		return ret;
+	}
+
 	return 0;
 }
 
diff --git a/drivers/dma/idxd/idxd_common.c b/drivers/dma/idxd/idxd_common.c
new file mode 100644
index 0000000000..7770b2e264
--- /dev/null
+++ b/drivers/dma/idxd/idxd_common.c
@@ -0,0 +1,75 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 Intel Corporation
+ */
+
+#include <rte_malloc.h>
+#include <rte_common.h>
+#include <rte_log.h>
+
+#include "idxd_internal.h"
+
+#define IDXD_PMD_NAME_STR "dmadev_idxd"
+
+int
+idxd_dmadev_create(const char *name, struct rte_device *dev,
+		const struct idxd_dmadev *base_idxd,
+		const struct rte_dmadev_ops *ops)
+{
+	struct idxd_dmadev *idxd;
+	struct rte_dmadev *dmadev = NULL;
+	int ret = 0;
+
+	if (!name) {
+		IDXD_PMD_ERR("Invalid name of the device!");
+		ret = -EINVAL;
+		goto cleanup;
+	}
+
+	/* Allocate device structure */
+	dmadev = rte_dmadev_pmd_allocate(name);
+	if (dmadev == NULL) {
+		IDXD_PMD_ERR("Unable to allocate raw device");
+		ret = -ENOMEM;
+		goto cleanup;
+	}
+	dmadev->dev_ops = ops;
+	dmadev->device = dev;
+
+	idxd = rte_malloc_socket(NULL, sizeof(struct idxd_dmadev), 0, dev->numa_node);
+	if (idxd == NULL) {
+		IDXD_PMD_ERR("Unable to allocate memory for device");
+		ret = -ENOMEM;
+		goto cleanup;
+	}
+	dmadev->data->dev_private = idxd;
+	dmadev->dev_private = idxd;
+	*idxd = *base_idxd; /* copy over the main fields already passed in */
+	idxd->dmadev = dmadev;
+
+	/* allocate batch index ring and completion ring.
+	 * The +1 is because we can never fully use
+	 * the ring, otherwise read == write means both full and empty.
+	 */
+	idxd->batch_comp_ring = rte_zmalloc(NULL, (sizeof(idxd->batch_idx_ring[0]) +
+			sizeof(idxd->batch_comp_ring[0])) * (idxd->max_batches + 1),
+			sizeof(idxd->batch_comp_ring[0]));
+	if (idxd->batch_comp_ring == NULL) {
+		IDXD_PMD_ERR("Unable to reserve memory for batch data\n");
+		ret = -ENOMEM;
+		goto cleanup;
+	}
+	idxd->batch_idx_ring = (void *)&idxd->batch_comp_ring[idxd->max_batches+1];
+	idxd->batch_iova = rte_mem_virt2iova(idxd->batch_comp_ring);
+
+	return 0;
+
+cleanup:
+	if (dmadev)
+		rte_dmadev_pmd_release(dmadev);
+
+	return ret;
+}
+
+int idxd_pmd_logtype;
+
+RTE_LOG_REGISTER_DEFAULT(idxd_pmd_logtype, WARNING);
diff --git a/drivers/dma/idxd/idxd_internal.h b/drivers/dma/idxd/idxd_internal.h
index c6a7dcd72f..99ab2df925 100644
--- a/drivers/dma/idxd/idxd_internal.h
+++ b/drivers/dma/idxd/idxd_internal.h
@@ -24,4 +24,44 @@ extern int idxd_pmd_logtype;
 #define IDXD_PMD_ERR(fmt, args...)	IDXD_PMD_LOG(ERR, fmt, ## args)
 #define IDXD_PMD_WARN(fmt, args...)	IDXD_PMD_LOG(WARNING, fmt, ## args)
 
+struct idxd_dmadev {
+	/* counters to track the batches */
+	unsigned short max_batches;
+	unsigned short batch_idx_read;
+	unsigned short batch_idx_write;
+
+	/* track descriptors and handles */
+	unsigned short desc_ring_mask;
+	unsigned short ids_avail; /* handles for ops completed */
+	unsigned short ids_returned; /* the read pointer for hdls/desc rings */
+	unsigned short batch_start; /* start+size == write pointer for hdls/desc */
+	unsigned short batch_size;
+
+	void *portal; /* address to write the batch descriptor */
+
+	struct idxd_completion *batch_comp_ring;
+	unsigned short *batch_idx_ring; /* store where each batch ends */
+
+	struct rte_dmadev_stats stats;
+
+	rte_iova_t batch_iova; /* base address of the batch comp ring */
+	rte_iova_t desc_iova; /* base address of desc ring, needed for completions */
+
+	unsigned short max_batch_size;
+
+	struct rte_dmadev *dmadev;
+	struct rte_dmadev_vchan_conf qcfg;
+	uint8_t sva_support;
+	uint8_t qid;
+
+	union {
+		struct {
+			unsigned int dsa_id;
+		} bus;
+	} u;
+};
+
+int idxd_dmadev_create(const char *name, struct rte_device *dev,
+		const struct idxd_dmadev *base_idxd, const struct rte_dmadev_ops *ops);
+
 #endif /* _IDXD_INTERNAL_H_ */
diff --git a/drivers/dma/idxd/meson.build b/drivers/dma/idxd/meson.build
index f1fea000a7..81150e6f25 100644
--- a/drivers/dma/idxd/meson.build
+++ b/drivers/dma/idxd/meson.build
@@ -4,5 +4,6 @@
 deps += ['bus_pci']
 sources = files(
         'idxd_bus.c',
+        'idxd_common.c',
         'idxd_pci.c'
 )
\ No newline at end of file
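
Postscript (illustration only, not part of the patch): in idxd_dmadev_create() the
batch index ring is not a separate allocation; it is carved out of the same
rte_zmalloc() block and starts right after the last completion-ring entry. The
stand-alone sketch below shows that layout with plain libc calls and stand-in
types (demo_completion is hypothetical, not the driver's struct idxd_completion).

    #include <stdio.h>
    #include <stdlib.h>

    struct demo_completion { /* stand-in with a plausible size, not the real layout */
            unsigned long long status;
            unsigned long long bytes;
    };

    int main(void)
    {
            const unsigned short max_batches = 32;
            const size_t slots = max_batches + 1; /* one spare slot, as in the driver */

            /* one block sized for both rings, completion entries first */
            struct demo_completion *comp_ring =
                    calloc(slots, sizeof(*comp_ring) + sizeof(unsigned short));
            if (comp_ring == NULL)
                    return 1;

            /* the index ring starts immediately after the last completion entry */
            unsigned short *idx_ring = (unsigned short *)&comp_ring[slots];

            idx_ring[0] = 123; /* e.g. record where the first batch ends */
            printf("first batch end index: %u\n", idx_ring[0]);

            free(comp_ring);
            return 0;
    }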