From patchwork Mon Oct 18 12:38:24 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Conor Walsh X-Patchwork-Id: 102006 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 27052A0C43; Mon, 18 Oct 2021 14:38:51 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 2DEB7410F7; Mon, 18 Oct 2021 14:38:46 +0200 (CEST) Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by mails.dpdk.org (Postfix) with ESMTP id BDF0940141 for ; Mon, 18 Oct 2021 14:38:43 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10140"; a="228117623" X-IronPort-AV: E=Sophos;i="5.85,382,1624345200"; d="scan'208";a="228117623" Received: from orsmga005.jf.intel.com ([10.7.209.41]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Oct 2021 05:38:43 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.85,382,1624345200"; d="scan'208";a="661361048" Received: from silpixa00401160.ir.intel.com ([10.55.129.96]) by orsmga005.jf.intel.com with ESMTP; 18 Oct 2021 05:38:40 -0700 From: Conor Walsh To: bruce.richardson@intel.com, thomas@monjalon.net, fengchengwen@huawei.com, jerinj@marvell.com, kevin.laatz@intel.com Cc: dev@dpdk.org, Conor Walsh Date: Mon, 18 Oct 2021 12:38:24 +0000 Message-Id: <20211018123835.1080174-2-conor.walsh@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20211018123835.1080174-1-conor.walsh@intel.com> References: <20210827172550.1522362-1-conor.walsh@intel.com> <20211018123835.1080174-1-conor.walsh@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v8 01/12] dma/ioat: add device probe and removal functions X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add the basic device probe/remove skeleton code and initial documentation for new IOAT DMA driver. Maintainers update is also included in this patch. 
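The added documentation notes that, once probed, each device appears as a "dmadev" accessible through the rte_dmadev library. For illustration only (outside the scope of this patch), a minimal sketch of enumerating those devices from an application; the device-ID upper bound and the printed field are assumptions made for the example:

#include <stdio.h>
#include <rte_eal.h>
#include <rte_dmadev.h>

int
main(int argc, char **argv)
{
	struct rte_dma_info info;
	int16_t dev_id;

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* Each successfully probed IOAT channel is counted as one dmadev. */
	printf("%u dmadev(s) available\n", rte_dma_count_avail());

	/* 64 is an arbitrary upper bound chosen only for this sketch. */
	for (dev_id = 0; dev_id < 64; dev_id++) {
		if (!rte_dma_is_valid(dev_id))
			continue;
		if (rte_dma_info_get(dev_id, &info) == 0)
			printf("dmadev %d: max_vchans %u\n", dev_id, info.max_vchans);
	}
	return 0;
}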
Signed-off-by: Conor Walsh Reviewed-by: Kevin Laatz Reviewed-by: Chengwen Feng --- MAINTAINERS | 6 +++ doc/guides/dmadevs/index.rst | 2 + doc/guides/dmadevs/ioat.rst | 69 ++++++++++++++++++++++++++ doc/guides/rel_notes/release_21_11.rst | 6 +++ drivers/dma/ioat/ioat_dmadev.c | 69 ++++++++++++++++++++++++++ drivers/dma/ioat/ioat_hw_defs.h | 35 +++++++++++++ drivers/dma/ioat/ioat_internal.h | 20 ++++++++ drivers/dma/ioat/meson.build | 7 +++ drivers/dma/ioat/version.map | 3 ++ drivers/dma/meson.build | 1 + 10 files changed, 218 insertions(+) create mode 100644 doc/guides/dmadevs/ioat.rst create mode 100644 drivers/dma/ioat/ioat_dmadev.c create mode 100644 drivers/dma/ioat/ioat_hw_defs.h create mode 100644 drivers/dma/ioat/ioat_internal.h create mode 100644 drivers/dma/ioat/meson.build create mode 100644 drivers/dma/ioat/version.map diff --git a/MAINTAINERS b/MAINTAINERS index 423d8a73ce..283c70f7d7 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1209,6 +1209,12 @@ M: Kevin Laatz F: drivers/dma/idxd/ F: doc/guides/dmadevs/idxd.rst +Intel IOAT +M: Bruce Richardson +M: Conor Walsh +F: drivers/dma/ioat/ +F: doc/guides/dmadevs/ioat.rst + RegEx Drivers ------------- diff --git a/doc/guides/dmadevs/index.rst b/doc/guides/dmadevs/index.rst index 5d4abf880e..c59f4b5c92 100644 --- a/doc/guides/dmadevs/index.rst +++ b/doc/guides/dmadevs/index.rst @@ -12,3 +12,5 @@ an application through DMA API. :numbered: idxd + ioat + diff --git a/doc/guides/dmadevs/ioat.rst b/doc/guides/dmadevs/ioat.rst new file mode 100644 index 0000000000..9ae1d8a2ad --- /dev/null +++ b/doc/guides/dmadevs/ioat.rst @@ -0,0 +1,69 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright(c) 2021 Intel Corporation. + +.. include:: + +IOAT DMA Device Driver +======================= + +The ``ioat`` dmadev driver provides a poll-mode driver (PMD) for Intel\ +|reg| QuickData Technology which is part of part of Intel\ |reg| I/O +Acceleration Technology (`Intel I/OAT +`_). +This PMD, when used on supported hardware, allows data copies, for example, +cloning packet data, to be accelerated by IOAT hardware rather than having to +be done by software, freeing up CPU cycles for other tasks. + +Hardware Requirements +---------------------- + +The ``dpdk-devbind.py`` script, included with DPDK, can be used to show the +presence of supported hardware. Running ``dpdk-devbind.py --status-dev dma`` +will show all the DMA devices on the system, IOAT devices are included in this +list. For Intel\ |reg| IOAT devices, the hardware will often be listed as +"Crystal Beach DMA", or "CBDMA" or on some newer systems '0b00' due to the +absence of pci-id database entries for them at this point. + +.. note:: + Error handling is not supported by this driver on hardware prior to + Intel Ice Lake. Unsupported systems include Broadwell, Skylake and + Cascade Lake. + +Compilation +------------ + +For builds using ``meson`` and ``ninja``, the driver will be built when the +target platform is x86-based. No additional compilation steps are necessary. + +Device Setup +------------- + +Intel\ |reg| IOAT devices will need to be bound to a suitable DPDK-supported +user-space IO driver such as ``vfio-pci`` in order to be used by DPDK. + +The ``dpdk-devbind.py`` script can be used to view the state of the devices using:: + + $ dpdk-devbind.py --status-dev dma + +The ``dpdk-devbind.py`` script can also be used to bind devices to a suitable driver. 
+For example:: + + $ dpdk-devbind.py -b vfio-pci 00:01.0 00:01.1 + +Device Probing and Initialization +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +For devices bound to a suitable DPDK-supported driver (``vfio-pci``), the HW +devices will be found as part of the device scan done at application +initialization time without the need to pass parameters to the application. + +If the application does not require all the devices available an allowlist can +be used in the same way that other DPDK devices use them. + +For example:: + + $ dpdk-test -a + +Once probed successfully, the device will appear as a ``dmadev``, that is a +"DMA device type" inside DPDK, and can be accessed using APIs from the +``rte_dmadev`` library. diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst index f8678efa94..3a85f2b33e 100644 --- a/doc/guides/rel_notes/release_21_11.rst +++ b/doc/guides/rel_notes/release_21_11.rst @@ -80,6 +80,12 @@ New Features The IDXD dmadev driver provide device drivers for the Intel DSA devices. This device driver can be used through the generic dmadev API. +* **Added IOAT dmadev driver implementation.** + + The Intel I/O Acceleration Technology (IOAT) dmadev driver provides a device + driver for Intel IOAT devices such as Crystal Beach DMA (CBDMA) on Ice Lake, + Skylake and Broadwell. This device driver can be used through the generic dmadev API. + * **Added new RSS offload types for IPv4/L4 checksum in RSS flow.** Added macros ETH_RSS_IPV4_CHKSUM and ETH_RSS_L4_CHKSUM, now IPv4 and diff --git a/drivers/dma/ioat/ioat_dmadev.c b/drivers/dma/ioat/ioat_dmadev.c new file mode 100644 index 0000000000..f3491d45b1 --- /dev/null +++ b/drivers/dma/ioat/ioat_dmadev.c @@ -0,0 +1,69 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 Intel Corporation + */ + +#include +#include + +#include "ioat_internal.h" + +static struct rte_pci_driver ioat_pmd_drv; + +RTE_LOG_REGISTER_DEFAULT(ioat_pmd_logtype, INFO); + +#define IOAT_PMD_NAME dmadev_ioat +#define IOAT_PMD_NAME_STR RTE_STR(IOAT_PMD_NAME) + +/* Probe DMA device. */ +static int +ioat_dmadev_probe(struct rte_pci_driver *drv, struct rte_pci_device *dev) +{ + char name[32]; + + rte_pci_device_name(&dev->addr, name, sizeof(name)); + IOAT_PMD_INFO("Init %s on NUMA node %d", name, dev->device.numa_node); + + dev->device.driver = &drv->driver; + return 0; +} + +/* Remove DMA device. 
*/ +static int +ioat_dmadev_remove(struct rte_pci_device *dev) +{ + char name[32]; + + rte_pci_device_name(&dev->addr, name, sizeof(name)); + + IOAT_PMD_INFO("Closing %s on NUMA node %d", + name, dev->device.numa_node); + + return 0; +} + +static const struct rte_pci_id pci_id_ioat_map[] = { + { RTE_PCI_DEVICE(IOAT_VENDOR_ID, IOAT_DEVICE_ID_SKX) }, + { RTE_PCI_DEVICE(IOAT_VENDOR_ID, IOAT_DEVICE_ID_BDX0) }, + { RTE_PCI_DEVICE(IOAT_VENDOR_ID, IOAT_DEVICE_ID_BDX1) }, + { RTE_PCI_DEVICE(IOAT_VENDOR_ID, IOAT_DEVICE_ID_BDX2) }, + { RTE_PCI_DEVICE(IOAT_VENDOR_ID, IOAT_DEVICE_ID_BDX3) }, + { RTE_PCI_DEVICE(IOAT_VENDOR_ID, IOAT_DEVICE_ID_BDX4) }, + { RTE_PCI_DEVICE(IOAT_VENDOR_ID, IOAT_DEVICE_ID_BDX5) }, + { RTE_PCI_DEVICE(IOAT_VENDOR_ID, IOAT_DEVICE_ID_BDX6) }, + { RTE_PCI_DEVICE(IOAT_VENDOR_ID, IOAT_DEVICE_ID_BDX7) }, + { RTE_PCI_DEVICE(IOAT_VENDOR_ID, IOAT_DEVICE_ID_BDXE) }, + { RTE_PCI_DEVICE(IOAT_VENDOR_ID, IOAT_DEVICE_ID_BDXF) }, + { RTE_PCI_DEVICE(IOAT_VENDOR_ID, IOAT_DEVICE_ID_ICX) }, + { .vendor_id = 0, /* sentinel */ }, +}; + +static struct rte_pci_driver ioat_pmd_drv = { + .id_table = pci_id_ioat_map, + .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC, + .probe = ioat_dmadev_probe, + .remove = ioat_dmadev_remove, +}; + +RTE_PMD_REGISTER_PCI(IOAT_PMD_NAME, ioat_pmd_drv); +RTE_PMD_REGISTER_PCI_TABLE(IOAT_PMD_NAME, pci_id_ioat_map); +RTE_PMD_REGISTER_KMOD_DEP(IOAT_PMD_NAME, "* igb_uio | uio_pci_generic | vfio-pci"); diff --git a/drivers/dma/ioat/ioat_hw_defs.h b/drivers/dma/ioat/ioat_hw_defs.h new file mode 100644 index 0000000000..eeabba41ef --- /dev/null +++ b/drivers/dma/ioat/ioat_hw_defs.h @@ -0,0 +1,35 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 Intel Corporation + */ + +#ifndef IOAT_HW_DEFS_H +#define IOAT_HW_DEFS_H + +#ifdef __cplusplus +extern "C" { +#endif + +#include + +#define IOAT_VER_3_0 0x30 +#define IOAT_VER_3_3 0x33 + +#define IOAT_VENDOR_ID 0x8086 +#define IOAT_DEVICE_ID_SKX 0x2021 +#define IOAT_DEVICE_ID_BDX0 0x6f20 +#define IOAT_DEVICE_ID_BDX1 0x6f21 +#define IOAT_DEVICE_ID_BDX2 0x6f22 +#define IOAT_DEVICE_ID_BDX3 0x6f23 +#define IOAT_DEVICE_ID_BDX4 0x6f24 +#define IOAT_DEVICE_ID_BDX5 0x6f25 +#define IOAT_DEVICE_ID_BDX6 0x6f26 +#define IOAT_DEVICE_ID_BDX7 0x6f27 +#define IOAT_DEVICE_ID_BDXE 0x6f2E +#define IOAT_DEVICE_ID_BDXF 0x6f2F +#define IOAT_DEVICE_ID_ICX 0x0b00 + +#ifdef __cplusplus +} +#endif + +#endif /* IOAT_HW_DEFS_H */ diff --git a/drivers/dma/ioat/ioat_internal.h b/drivers/dma/ioat/ioat_internal.h new file mode 100644 index 0000000000..f1ec12a919 --- /dev/null +++ b/drivers/dma/ioat/ioat_internal.h @@ -0,0 +1,20 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2021 Intel Corporation + */ + +#ifndef _IOAT_INTERNAL_H_ +#define _IOAT_INTERNAL_H_ + +#include "ioat_hw_defs.h" + +extern int ioat_pmd_logtype; + +#define IOAT_PMD_LOG(level, fmt, args...) rte_log(RTE_LOG_ ## level, \ + ioat_pmd_logtype, "IOAT: %s(): " fmt "\n", __func__, ##args) + +#define IOAT_PMD_DEBUG(fmt, args...) IOAT_PMD_LOG(DEBUG, fmt, ## args) +#define IOAT_PMD_INFO(fmt, args...) IOAT_PMD_LOG(INFO, fmt, ## args) +#define IOAT_PMD_ERR(fmt, args...) IOAT_PMD_LOG(ERR, fmt, ## args) +#define IOAT_PMD_WARN(fmt, args...) 
IOAT_PMD_LOG(WARNING, fmt, ## args) + +#endif /* _IOAT_INTERNAL_H_ */ diff --git a/drivers/dma/ioat/meson.build b/drivers/dma/ioat/meson.build new file mode 100644 index 0000000000..d67fac96fb --- /dev/null +++ b/drivers/dma/ioat/meson.build @@ -0,0 +1,7 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright 2021 Intel Corporation + +build = dpdk_conf.has('RTE_ARCH_X86') +reason = 'only supported on x86' +sources = files('ioat_dmadev.c') +deps += ['bus_pci', 'dmadev'] diff --git a/drivers/dma/ioat/version.map b/drivers/dma/ioat/version.map new file mode 100644 index 0000000000..c2e0723b4c --- /dev/null +++ b/drivers/dma/ioat/version.map @@ -0,0 +1,3 @@ +DPDK_22 { + local: *; +}; diff --git a/drivers/dma/meson.build b/drivers/dma/meson.build index 411be7a240..a69418ce9b 100644 --- a/drivers/dma/meson.build +++ b/drivers/dma/meson.build @@ -3,6 +3,7 @@ drivers = [ 'idxd', + 'ioat', 'skeleton', ] std_deps = ['dmadev'] From patchwork Mon Oct 18 12:38:25 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Conor Walsh X-Patchwork-Id: 102007 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 790F8A0C43; Mon, 18 Oct 2021 14:38:57 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 2FB58410EF; Mon, 18 Oct 2021 14:38:48 +0200 (CEST) Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by mails.dpdk.org (Postfix) with ESMTP id 4F542410EB for ; Mon, 18 Oct 2021 14:38:46 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10140"; a="228117632" X-IronPort-AV: E=Sophos;i="5.85,382,1624345200"; d="scan'208";a="228117632" Received: from orsmga005.jf.intel.com ([10.7.209.41]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Oct 2021 05:38:45 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.85,382,1624345200"; d="scan'208";a="661361064" Received: from silpixa00401160.ir.intel.com ([10.55.129.96]) by orsmga005.jf.intel.com with ESMTP; 18 Oct 2021 05:38:43 -0700 From: Conor Walsh To: bruce.richardson@intel.com, thomas@monjalon.net, fengchengwen@huawei.com, jerinj@marvell.com, kevin.laatz@intel.com Cc: dev@dpdk.org, Conor Walsh Date: Mon, 18 Oct 2021 12:38:25 +0000 Message-Id: <20211018123835.1080174-3-conor.walsh@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20211018123835.1080174-1-conor.walsh@intel.com> References: <20210827172550.1522362-1-conor.walsh@intel.com> <20211018123835.1080174-1-conor.walsh@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v8 02/12] dma/ioat: create dmadev instances on PCI probe X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" When a suitable device is found during the PCI probe, create a dmadev instance for each channel. Internal structures and HW definitions required for device creation are also included. 
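The initialization code in this patch resets the channel and polls, with a bounded retry, until the hardware clears the reset bit. A simplified, self-contained sketch of that wait pattern is shown below; the register name, bit value and helper are placeholders for illustration and deliberately omit the chainaddr handling done by the real driver:

#include <stdint.h>
#include <stdio.h>
#include <rte_cycles.h>

#define CHANCMD_RESET 0x20 /* placeholder for the reset bit of the channel command register */

/* Issue a reset, then poll until the device clears the bit, giving up
 * after roughly 200 ms (200 retries x 1 ms), mirroring the driver's limit.
 */
static int
wait_for_reset(volatile uint8_t *chancmd)
{
	int retry = 0;

	*chancmd = CHANCMD_RESET;
	while (*chancmd & CHANCMD_RESET) {
		rte_delay_ms(1);
		if (++retry >= 200) {
			printf("channel reset timed out\n");
			return -1;
		}
	}
	return 0;
}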
Signed-off-by: Conor Walsh Reviewed-by: Kevin Laatz --- drivers/dma/ioat/ioat_dmadev.c | 105 ++++++++++++++++++++++++++++++- drivers/dma/ioat/ioat_hw_defs.h | 45 +++++++++++++ drivers/dma/ioat/ioat_internal.h | 27 ++++++++ 3 files changed, 175 insertions(+), 2 deletions(-) diff --git a/drivers/dma/ioat/ioat_dmadev.c b/drivers/dma/ioat/ioat_dmadev.c index f3491d45b1..90f54567a4 100644 --- a/drivers/dma/ioat/ioat_dmadev.c +++ b/drivers/dma/ioat/ioat_dmadev.c @@ -4,6 +4,7 @@ #include #include +#include #include "ioat_internal.h" @@ -14,6 +15,106 @@ RTE_LOG_REGISTER_DEFAULT(ioat_pmd_logtype, INFO); #define IOAT_PMD_NAME dmadev_ioat #define IOAT_PMD_NAME_STR RTE_STR(IOAT_PMD_NAME) +/* Create a DMA device. */ +static int +ioat_dmadev_create(const char *name, struct rte_pci_device *dev) +{ + static const struct rte_dma_dev_ops ioat_dmadev_ops = { }; + + struct rte_dma_dev *dmadev = NULL; + struct ioat_dmadev *ioat = NULL; + int retry = 0; + + if (!name) { + IOAT_PMD_ERR("Invalid name of the device!"); + return -EINVAL; + } + + /* Allocate device structure. */ + dmadev = rte_dma_pmd_allocate(name, dev->device.numa_node, sizeof(struct ioat_dmadev)); + if (dmadev == NULL) { + IOAT_PMD_ERR("Unable to allocate dma device"); + return -ENOMEM; + } + + dmadev->device = &dev->device; + + dmadev->fp_obj->dev_private = dmadev->data->dev_private; + + dmadev->dev_ops = &ioat_dmadev_ops; + + ioat = dmadev->data->dev_private; + ioat->dmadev = dmadev; + ioat->regs = dev->mem_resource[0].addr; + ioat->doorbell = &ioat->regs->dmacount; + ioat->qcfg.nb_desc = 0; + ioat->desc_ring = NULL; + ioat->version = ioat->regs->cbver; + + /* Do device initialization - reset and set error behaviour. */ + if (ioat->regs->chancnt != 1) + IOAT_PMD_WARN("%s: Channel count == %d\n", __func__, + ioat->regs->chancnt); + + /* Locked by someone else. */ + if (ioat->regs->chanctrl & IOAT_CHANCTRL_CHANNEL_IN_USE) { + IOAT_PMD_WARN("%s: Channel appears locked\n", __func__); + ioat->regs->chanctrl = 0; + } + + /* clear any previous errors */ + if (ioat->regs->chanerr != 0) { + uint32_t val = ioat->regs->chanerr; + ioat->regs->chanerr = val; + } + + ioat->regs->chancmd = IOAT_CHANCMD_SUSPEND; + rte_delay_ms(1); + ioat->regs->chancmd = IOAT_CHANCMD_RESET; + rte_delay_ms(1); + while (ioat->regs->chancmd & IOAT_CHANCMD_RESET) { + ioat->regs->chainaddr = 0; + rte_delay_ms(1); + if (++retry >= 200) { + IOAT_PMD_ERR("%s: cannot reset device. CHANCMD=%#"PRIx8 + ", CHANSTS=%#"PRIx64", CHANERR=%#"PRIx32"\n", + __func__, + ioat->regs->chancmd, + ioat->regs->chansts, + ioat->regs->chanerr); + rte_dma_pmd_release(name); + return -EIO; + } + } + ioat->regs->chanctrl = IOAT_CHANCTRL_ANY_ERR_ABORT_EN | + IOAT_CHANCTRL_ERR_COMPLETION_EN; + + dmadev->fp_obj->dev_private = ioat; + + dmadev->state = RTE_DMA_DEV_READY; + + return 0; + +} + +/* Destroy a DMA device. */ +static int +ioat_dmadev_destroy(const char *name) +{ + int ret; + + if (!name) { + IOAT_PMD_ERR("Invalid device name"); + return -EINVAL; + } + + ret = rte_dma_pmd_release(name); + if (ret) + IOAT_PMD_DEBUG("Device cleanup failed"); + + return 0; +} + /* Probe DMA device. */ static int ioat_dmadev_probe(struct rte_pci_driver *drv, struct rte_pci_device *dev) @@ -24,7 +125,7 @@ ioat_dmadev_probe(struct rte_pci_driver *drv, struct rte_pci_device *dev) IOAT_PMD_INFO("Init %s on NUMA node %d", name, dev->device.numa_node); dev->device.driver = &drv->driver; - return 0; + return ioat_dmadev_create(name, dev); } /* Remove DMA device. 
*/ @@ -38,7 +139,7 @@ ioat_dmadev_remove(struct rte_pci_device *dev) IOAT_PMD_INFO("Closing %s on NUMA node %d", name, dev->device.numa_node); - return 0; + return ioat_dmadev_destroy(name); } static const struct rte_pci_id pci_id_ioat_map[] = { diff --git a/drivers/dma/ioat/ioat_hw_defs.h b/drivers/dma/ioat/ioat_hw_defs.h index eeabba41ef..73bdf548b3 100644 --- a/drivers/dma/ioat/ioat_hw_defs.h +++ b/drivers/dma/ioat/ioat_hw_defs.h @@ -11,6 +11,8 @@ extern "C" { #include +#define IOAT_PCI_CHANERR_INT_OFFSET 0x180 + #define IOAT_VER_3_0 0x30 #define IOAT_VER_3_3 0x33 @@ -28,6 +30,49 @@ extern "C" { #define IOAT_DEVICE_ID_BDXF 0x6f2F #define IOAT_DEVICE_ID_ICX 0x0b00 +#define IOAT_COMP_UPDATE_SHIFT 3 +#define IOAT_CMD_OP_SHIFT 24 + +/* DMA Channel Registers */ +#define IOAT_CHANCTRL_CHANNEL_PRIORITY_MASK 0xF000 +#define IOAT_CHANCTRL_COMPL_DCA_EN 0x0200 +#define IOAT_CHANCTRL_CHANNEL_IN_USE 0x0100 +#define IOAT_CHANCTRL_DESCRIPTOR_ADDR_SNOOP_CONTROL 0x0020 +#define IOAT_CHANCTRL_ERR_INT_EN 0x0010 +#define IOAT_CHANCTRL_ANY_ERR_ABORT_EN 0x0008 +#define IOAT_CHANCTRL_ERR_COMPLETION_EN 0x0004 +#define IOAT_CHANCTRL_INT_REARM 0x0001 + +struct ioat_registers { + uint8_t chancnt; + uint8_t xfercap; + uint8_t genctrl; + uint8_t intrctrl; + uint32_t attnstatus; + uint8_t cbver; /* 0x08 */ + uint8_t reserved4[0x3]; /* 0x09 */ + uint16_t intrdelay; /* 0x0C */ + uint16_t cs_status; /* 0x0E */ + uint32_t dmacapability; /* 0x10 */ + uint8_t reserved5[0x6C]; /* 0x14 */ + uint16_t chanctrl; /* 0x80 */ + uint8_t reserved6[0x2]; /* 0x82 */ + uint8_t chancmd; /* 0x84 */ + uint8_t reserved3[1]; /* 0x85 */ + uint16_t dmacount; /* 0x86 */ + uint64_t chansts; /* 0x88 */ + uint64_t chainaddr; /* 0x90 */ + uint64_t chancmp; /* 0x98 */ + uint8_t reserved2[0x8]; /* 0xA0 */ + uint32_t chanerr; /* 0xA8 */ + uint32_t chanerrmask; /* 0xAC */ +} __rte_packed; + +#define IOAT_CHANCMD_RESET 0x20 +#define IOAT_CHANCMD_SUSPEND 0x04 + +#define IOAT_CHANCMP_ALIGN 8 /* CHANCMP address must be 64-bit aligned */ + #ifdef __cplusplus } #endif diff --git a/drivers/dma/ioat/ioat_internal.h b/drivers/dma/ioat/ioat_internal.h index f1ec12a919..4fa19eb811 100644 --- a/drivers/dma/ioat/ioat_internal.h +++ b/drivers/dma/ioat/ioat_internal.h @@ -7,6 +7,33 @@ #include "ioat_hw_defs.h" +struct ioat_dmadev { + struct rte_dma_dev *dmadev; + struct rte_dma_vchan_conf qcfg; + struct rte_dma_stats stats; + + volatile uint16_t *doorbell __rte_cache_aligned; + phys_addr_t status_addr; + phys_addr_t ring_addr; + + struct ioat_dma_hw_desc *desc_ring; + + unsigned short next_read; + unsigned short next_write; + unsigned short last_write; /* Used to compute submitted count. */ + unsigned short offset; /* Used after a device recovery when counts -> 0. */ + unsigned int failure; /* Used to store chanerr for error handling. */ + + /* To report completions, the device will write status back here. */ + volatile uint64_t status __rte_cache_aligned; + + /* Pointer to the register bar. */ + volatile struct ioat_registers *regs; + + /* Store the IOAT version. */ + uint8_t version; +}; + extern int ioat_pmd_logtype; #define IOAT_PMD_LOG(level, fmt, args...) 
rte_log(RTE_LOG_ ## level, \ From patchwork Mon Oct 18 12:38:26 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Conor Walsh X-Patchwork-Id: 102008 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 09446A0C43; Mon, 18 Oct 2021 14:39:03 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 77EDA410E4; Mon, 18 Oct 2021 14:38:51 +0200 (CEST) Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by mails.dpdk.org (Postfix) with ESMTP id 0EA0041109 for ; Mon, 18 Oct 2021 14:38:48 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10140"; a="228117639" X-IronPort-AV: E=Sophos;i="5.85,382,1624345200"; d="scan'208";a="228117639" Received: from orsmga005.jf.intel.com ([10.7.209.41]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Oct 2021 05:38:48 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.85,382,1624345200"; d="scan'208";a="661361075" Received: from silpixa00401160.ir.intel.com ([10.55.129.96]) by orsmga005.jf.intel.com with ESMTP; 18 Oct 2021 05:38:46 -0700 From: Conor Walsh To: bruce.richardson@intel.com, thomas@monjalon.net, fengchengwen@huawei.com, jerinj@marvell.com, kevin.laatz@intel.com Cc: dev@dpdk.org, Conor Walsh Date: Mon, 18 Oct 2021 12:38:26 +0000 Message-Id: <20211018123835.1080174-4-conor.walsh@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20211018123835.1080174-1-conor.walsh@intel.com> References: <20210827172550.1522362-1-conor.walsh@intel.com> <20211018123835.1080174-1-conor.walsh@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v8 03/12] dma/ioat: add datapath structures X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add data structures required for the data path of IOAT devices. Signed-off-by: Conor Walsh Signed-off-by: Bruce Richardson Reviewed-by: Kevin Laatz --- drivers/dma/ioat/ioat_dmadev.c | 70 ++++++++++- drivers/dma/ioat/ioat_hw_defs.h | 215 ++++++++++++++++++++++++++++++++ 2 files changed, 284 insertions(+), 1 deletion(-) diff --git a/drivers/dma/ioat/ioat_dmadev.c b/drivers/dma/ioat/ioat_dmadev.c index 90f54567a4..876e17f320 100644 --- a/drivers/dma/ioat/ioat_dmadev.c +++ b/drivers/dma/ioat/ioat_dmadev.c @@ -15,11 +15,79 @@ RTE_LOG_REGISTER_DEFAULT(ioat_pmd_logtype, INFO); #define IOAT_PMD_NAME dmadev_ioat #define IOAT_PMD_NAME_STR RTE_STR(IOAT_PMD_NAME) +/* Dump DMA device info. 
*/ +static int +__dev_dump(void *dev_private, FILE *f) +{ + struct ioat_dmadev *ioat = dev_private; + uint64_t chansts_masked = ioat->regs->chansts & IOAT_CHANSTS_STATUS; + uint32_t chanerr = ioat->regs->chanerr; + uint64_t mask = (ioat->qcfg.nb_desc - 1); + char ver = ioat->version; + fprintf(f, "========= IOAT =========\n"); + fprintf(f, " IOAT version: %d.%d\n", ver >> 4, ver & 0xF); + fprintf(f, " Channel status: %s [0x%"PRIx64"]\n", + chansts_readable[chansts_masked], chansts_masked); + fprintf(f, " ChainADDR: 0x%"PRIu64"\n", ioat->regs->chainaddr); + if (chanerr == 0) { + fprintf(f, " No Channel Errors\n"); + } else { + fprintf(f, " ChanERR: 0x%"PRIu32"\n", chanerr); + if (chanerr & IOAT_CHANERR_INVALID_SRC_ADDR_MASK) + fprintf(f, " Invalid Source Address\n"); + if (chanerr & IOAT_CHANERR_INVALID_DST_ADDR_MASK) + fprintf(f, " Invalid Destination Address\n"); + if (chanerr & IOAT_CHANERR_INVALID_LENGTH_MASK) + fprintf(f, " Invalid Descriptor Length\n"); + if (chanerr & IOAT_CHANERR_DESCRIPTOR_READ_ERROR_MASK) + fprintf(f, " Descriptor Read Error\n"); + if ((chanerr & ~(IOAT_CHANERR_INVALID_SRC_ADDR_MASK | + IOAT_CHANERR_INVALID_DST_ADDR_MASK | + IOAT_CHANERR_INVALID_LENGTH_MASK | + IOAT_CHANERR_DESCRIPTOR_READ_ERROR_MASK)) != 0) + fprintf(f, " Unknown Error(s)\n"); + } + fprintf(f, "== Private Data ==\n"); + fprintf(f, " Config: { ring_size: %u }\n", ioat->qcfg.nb_desc); + fprintf(f, " Status: 0x%"PRIx64"\n", ioat->status); + fprintf(f, " Status IOVA: 0x%"PRIx64"\n", ioat->status_addr); + fprintf(f, " Status ADDR: %p\n", &ioat->status); + fprintf(f, " Ring IOVA: 0x%"PRIx64"\n", ioat->ring_addr); + fprintf(f, " Ring ADDR: 0x%"PRIx64"\n", ioat->desc_ring[0].next-64); + fprintf(f, " Next write: %"PRIu16"\n", ioat->next_write); + fprintf(f, " Next read: %"PRIu16"\n", ioat->next_read); + struct ioat_dma_hw_desc *desc_ring = &ioat->desc_ring[(ioat->next_write - 1) & mask]; + fprintf(f, " Last Descriptor Written {\n"); + fprintf(f, " Size: %"PRIu32"\n", desc_ring->size); + fprintf(f, " Control: 0x%"PRIx32"\n", desc_ring->u.control_raw); + fprintf(f, " Src: 0x%"PRIx64"\n", desc_ring->src_addr); + fprintf(f, " Dest: 0x%"PRIx64"\n", desc_ring->dest_addr); + fprintf(f, " Next: 0x%"PRIx64"\n", desc_ring->next); + fprintf(f, " }\n"); + fprintf(f, " Next Descriptor {\n"); + fprintf(f, " Size: %"PRIu32"\n", ioat->desc_ring[ioat->next_read & mask].size); + fprintf(f, " Src: 0x%"PRIx64"\n", ioat->desc_ring[ioat->next_read & mask].src_addr); + fprintf(f, " Dest: 0x%"PRIx64"\n", ioat->desc_ring[ioat->next_read & mask].dest_addr); + fprintf(f, " Next: 0x%"PRIx64"\n", ioat->desc_ring[ioat->next_read & mask].next); + fprintf(f, " }\n"); + + return 0; +} + +/* Public wrapper for dump. */ +static int +ioat_dev_dump(const struct rte_dma_dev *dev, FILE *f) +{ + return __dev_dump(dev->fp_obj->dev_private, f); +} + /* Create a DMA device. 
*/ static int ioat_dmadev_create(const char *name, struct rte_pci_device *dev) { - static const struct rte_dma_dev_ops ioat_dmadev_ops = { }; + static const struct rte_dma_dev_ops ioat_dmadev_ops = { + .dev_dump = ioat_dev_dump, + }; struct rte_dma_dev *dmadev = NULL; struct ioat_dmadev *ioat = NULL; diff --git a/drivers/dma/ioat/ioat_hw_defs.h b/drivers/dma/ioat/ioat_hw_defs.h index 73bdf548b3..dc3493a78f 100644 --- a/drivers/dma/ioat/ioat_hw_defs.h +++ b/drivers/dma/ioat/ioat_hw_defs.h @@ -15,6 +15,7 @@ extern "C" { #define IOAT_VER_3_0 0x30 #define IOAT_VER_3_3 0x33 +#define IOAT_VER_3_4 0x34 #define IOAT_VENDOR_ID 0x8086 #define IOAT_DEVICE_ID_SKX 0x2021 @@ -43,6 +44,14 @@ extern "C" { #define IOAT_CHANCTRL_ERR_COMPLETION_EN 0x0004 #define IOAT_CHANCTRL_INT_REARM 0x0001 +/* DMA Channel Capabilities */ +#define IOAT_DMACAP_PB (1 << 0) +#define IOAT_DMACAP_DCA (1 << 4) +#define IOAT_DMACAP_BFILL (1 << 6) +#define IOAT_DMACAP_XOR (1 << 8) +#define IOAT_DMACAP_PQ (1 << 9) +#define IOAT_DMACAP_DMA_DIF (1 << 10) + struct ioat_registers { uint8_t chancnt; uint8_t xfercap; @@ -71,8 +80,214 @@ struct ioat_registers { #define IOAT_CHANCMD_RESET 0x20 #define IOAT_CHANCMD_SUSPEND 0x04 +#define IOAT_CHANSTS_STATUS 0x7ULL +#define IOAT_CHANSTS_ACTIVE 0x0 +#define IOAT_CHANSTS_IDLE 0x1 +#define IOAT_CHANSTS_SUSPENDED 0x2 +#define IOAT_CHANSTS_HALTED 0x3 +#define IOAT_CHANSTS_ARMED 0x4 + +#define IOAT_CHANERR_INVALID_SRC_ADDR_MASK (1 << 0) +#define IOAT_CHANERR_INVALID_DST_ADDR_MASK (1 << 1) +#define IOAT_CHANERR_DESCRIPTOR_READ_ERROR_MASK (1 << 8) +#define IOAT_CHANERR_INVALID_LENGTH_MASK (1 << 10) + +const char *chansts_readable[] = { + "ACTIVE", /* 0x0 */ + "IDLE", /* 0x1 */ + "SUSPENDED", /* 0x2 */ + "HALTED", /* 0x3 */ + "ARMED" /* 0x4 */ +}; + +#define IOAT_CHANSTS_UNAFFILIATED_ERROR 0x8ULL +#define IOAT_CHANSTS_SOFT_ERROR 0x10ULL + +#define IOAT_CHANSTS_COMPLETED_DESCRIPTOR_MASK (~0x3FULL) + #define IOAT_CHANCMP_ALIGN 8 /* CHANCMP address must be 64-bit aligned */ +struct ioat_dma_hw_desc { + uint32_t size; + union { + uint32_t control_raw; + struct { + uint32_t int_enable: 1; + uint32_t src_snoop_disable: 1; + uint32_t dest_snoop_disable: 1; + uint32_t completion_update: 1; + uint32_t fence: 1; + uint32_t null: 1; + uint32_t src_page_break: 1; + uint32_t dest_page_break: 1; + uint32_t bundle: 1; + uint32_t dest_dca: 1; + uint32_t hint: 1; + uint32_t reserved: 13; +#define IOAT_OP_COPY 0x00 + uint32_t op: 8; + } control; + } u; + uint64_t src_addr; + uint64_t dest_addr; + uint64_t next; + uint64_t reserved; + uint64_t reserved2; + uint64_t user1; + uint64_t user2; +}; + +struct ioat_fill_hw_desc { + uint32_t size; + union { + uint32_t control_raw; + struct { + uint32_t int_enable: 1; + uint32_t reserved: 1; + uint32_t dest_snoop_disable: 1; + uint32_t completion_update: 1; + uint32_t fence: 1; + uint32_t reserved2: 2; + uint32_t dest_page_break: 1; + uint32_t bundle: 1; + uint32_t reserved3: 15; +#define IOAT_OP_FILL 0x01 + uint32_t op: 8; + } control; + } u; + uint64_t src_data; + uint64_t dest_addr; + uint64_t next; + uint64_t reserved; + uint64_t next_dest_addr; + uint64_t user1; + uint64_t user2; +}; + +struct ioat_xor_hw_desc { + uint32_t size; + union { + uint32_t control_raw; + struct { + uint32_t int_enable: 1; + uint32_t src_snoop_disable: 1; + uint32_t dest_snoop_disable: 1; + uint32_t completion_update: 1; + uint32_t fence: 1; + uint32_t src_count: 3; + uint32_t bundle: 1; + uint32_t dest_dca: 1; + uint32_t hint: 1; + uint32_t reserved: 13; +#define IOAT_OP_XOR 0x87 +#define 
IOAT_OP_XOR_VAL 0x88 + uint32_t op: 8; + } control; + } u; + uint64_t src_addr; + uint64_t dest_addr; + uint64_t next; + uint64_t src_addr2; + uint64_t src_addr3; + uint64_t src_addr4; + uint64_t src_addr5; +}; + +struct ioat_xor_ext_hw_desc { + uint64_t src_addr6; + uint64_t src_addr7; + uint64_t src_addr8; + uint64_t next; + uint64_t reserved[4]; +}; + +struct ioat_pq_hw_desc { + uint32_t size; + union { + uint32_t control_raw; + struct { + uint32_t int_enable: 1; + uint32_t src_snoop_disable: 1; + uint32_t dest_snoop_disable: 1; + uint32_t completion_update: 1; + uint32_t fence: 1; + uint32_t src_count: 3; + uint32_t bundle: 1; + uint32_t dest_dca: 1; + uint32_t hint: 1; + uint32_t p_disable: 1; + uint32_t q_disable: 1; + uint32_t reserved: 11; +#define IOAT_OP_PQ 0x89 +#define IOAT_OP_PQ_VAL 0x8a + uint32_t op: 8; + } control; + } u; + uint64_t src_addr; + uint64_t p_addr; + uint64_t next; + uint64_t src_addr2; + uint64_t src_addr3; + uint8_t coef[8]; + uint64_t q_addr; +}; + +struct ioat_pq_ext_hw_desc { + uint64_t src_addr4; + uint64_t src_addr5; + uint64_t src_addr6; + uint64_t next; + uint64_t src_addr7; + uint64_t src_addr8; + uint64_t reserved[2]; +}; + +struct ioat_pq_update_hw_desc { + uint32_t size; + union { + uint32_t control_raw; + struct { + uint32_t int_enable: 1; + uint32_t src_snoop_disable: 1; + uint32_t dest_snoop_disable: 1; + uint32_t completion_update: 1; + uint32_t fence: 1; + uint32_t src_cnt: 3; + uint32_t bundle: 1; + uint32_t dest_dca: 1; + uint32_t hint: 1; + uint32_t p_disable: 1; + uint32_t q_disable: 1; + uint32_t reserved: 3; + uint32_t coef: 8; +#define IOAT_OP_PQ_UP 0x8b + uint32_t op: 8; + } control; + } u; + uint64_t src_addr; + uint64_t p_addr; + uint64_t next; + uint64_t src_addr2; + uint64_t p_src; + uint64_t q_src; + uint64_t q_addr; +}; + +union ioat_hw_desc { + struct ioat_dma_hw_desc dma; + struct ioat_fill_hw_desc fill; + struct ioat_xor_hw_desc xor_desc; + struct ioat_xor_ext_hw_desc xor_ext; + struct ioat_pq_hw_desc pq; + struct ioat_pq_ext_hw_desc pq_ext; + struct ioat_pq_update_hw_desc pq_update; +}; + +#define GENSTS_DEV_STATE_MASK 0x03 +#define CMDSTATUS_ACTIVE_SHIFT 31 +#define CMDSTATUS_ACTIVE_MASK (1 << 31) +#define CMDSTATUS_ERR_MASK 0xFF + #ifdef __cplusplus } #endif From patchwork Mon Oct 18 12:38:27 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Conor Walsh X-Patchwork-Id: 102009 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id AADCAA0C43; Mon, 18 Oct 2021 14:39:10 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id C958B4113C; Mon, 18 Oct 2021 14:38:52 +0200 (CEST) Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by mails.dpdk.org (Postfix) with ESMTP id BDA68410EE for ; Mon, 18 Oct 2021 14:38:51 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10140"; a="228117644" X-IronPort-AV: E=Sophos;i="5.85,382,1624345200"; d="scan'208";a="228117644" Received: from orsmga005.jf.intel.com ([10.7.209.41]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Oct 2021 05:38:51 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.85,382,1624345200"; d="scan'208";a="661361092" Received: from silpixa00401160.ir.intel.com ([10.55.129.96]) by orsmga005.jf.intel.com with ESMTP; 18 Oct 2021 05:38:48 
-0700 From: Conor Walsh To: bruce.richardson@intel.com, thomas@monjalon.net, fengchengwen@huawei.com, jerinj@marvell.com, kevin.laatz@intel.com Cc: dev@dpdk.org, Conor Walsh Date: Mon, 18 Oct 2021 12:38:27 +0000 Message-Id: <20211018123835.1080174-5-conor.walsh@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20211018123835.1080174-1-conor.walsh@intel.com> References: <20210827172550.1522362-1-conor.walsh@intel.com> <20211018123835.1080174-1-conor.walsh@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v8 04/12] dma/ioat: add configuration functions X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add functions for device configuration. The info_get and close functions are included here also. info_get can be useful for checking successful configuration and close is used by the dmadev api when releasing a configured device. Signed-off-by: Conor Walsh Reviewed-by: Kevin Laatz --- doc/guides/dmadevs/ioat.rst | 15 +++++ drivers/dma/ioat/ioat_dmadev.c | 107 +++++++++++++++++++++++++++++++++ 2 files changed, 122 insertions(+) diff --git a/doc/guides/dmadevs/ioat.rst b/doc/guides/dmadevs/ioat.rst index 9ae1d8a2ad..af69556241 100644 --- a/doc/guides/dmadevs/ioat.rst +++ b/doc/guides/dmadevs/ioat.rst @@ -67,3 +67,18 @@ For example:: Once probed successfully, the device will appear as a ``dmadev``, that is a "DMA device type" inside DPDK, and can be accessed using APIs from the ``rte_dmadev`` library. + +Using IOAT DMAdev Devices +-------------------------- + +To use IOAT devices from an application, the ``dmadev`` API can be used. + +Device Configuration +~~~~~~~~~~~~~~~~~~~~~ + +IOAT configuration requirements: + +* ``ring_size`` must be a power of two, between 64 and 4096. +* Only one ``vchan`` is supported per device. +* Silent mode is not supported. +* The transfer direction must be set to ``RTE_DMA_DIR_MEM_TO_MEM`` to copy from memory to memory. diff --git a/drivers/dma/ioat/ioat_dmadev.c b/drivers/dma/ioat/ioat_dmadev.c index 876e17f320..ada57c5814 100644 --- a/drivers/dma/ioat/ioat_dmadev.c +++ b/drivers/dma/ioat/ioat_dmadev.c @@ -12,9 +12,112 @@ static struct rte_pci_driver ioat_pmd_drv; RTE_LOG_REGISTER_DEFAULT(ioat_pmd_logtype, INFO); +#define DESC_SZ sizeof(struct ioat_dma_hw_desc) + #define IOAT_PMD_NAME dmadev_ioat #define IOAT_PMD_NAME_STR RTE_STR(IOAT_PMD_NAME) +/* Configure a device. */ +static int +ioat_dev_configure(struct rte_dma_dev *dev __rte_unused, const struct rte_dma_conf *dev_conf, + uint32_t conf_sz) +{ + if (sizeof(struct rte_dma_conf) != conf_sz) + return -EINVAL; + + if (dev_conf->nb_vchans != 1) + return -EINVAL; + + return 0; +} + +/* Setup a virtual channel for IOAT, only 1 vchan is supported. */ +static int +ioat_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan __rte_unused, + const struct rte_dma_vchan_conf *qconf, uint32_t qconf_sz) +{ + struct ioat_dmadev *ioat = dev->fp_obj->dev_private; + uint16_t max_desc = qconf->nb_desc; + int i; + + if (sizeof(struct rte_dma_vchan_conf) != qconf_sz) + return -EINVAL; + + ioat->qcfg = *qconf; + + if (!rte_is_power_of_2(max_desc)) { + max_desc = rte_align32pow2(max_desc); + IOAT_PMD_DEBUG("DMA dev %u using %u descriptors", dev->data->dev_id, max_desc); + ioat->qcfg.nb_desc = max_desc; + } + + /* In case we are reconfiguring a device, free any existing memory. 
*/ + rte_free(ioat->desc_ring); + + ioat->desc_ring = rte_zmalloc(NULL, sizeof(*ioat->desc_ring) * max_desc, 0); + if (ioat->desc_ring == NULL) + return -ENOMEM; + + ioat->ring_addr = rte_mem_virt2iova(ioat->desc_ring); + + ioat->status_addr = rte_mem_virt2iova(ioat) + offsetof(struct ioat_dmadev, status); + + /* Ensure all counters are reset, if reconfiguring/restarting device. */ + ioat->next_read = 0; + ioat->next_write = 0; + ioat->last_write = 0; + ioat->offset = 0; + ioat->failure = 0; + + /* Configure descriptor ring - each one points to next. */ + for (i = 0; i < ioat->qcfg.nb_desc; i++) { + ioat->desc_ring[i].next = ioat->ring_addr + + (((i + 1) % ioat->qcfg.nb_desc) * DESC_SZ); + } + + return 0; +} + +/* Get device information of a device. */ +static int +ioat_dev_info_get(const struct rte_dma_dev *dev, struct rte_dma_info *info, uint32_t size) +{ + struct ioat_dmadev *ioat = dev->fp_obj->dev_private; + if (size < sizeof(*info)) + return -EINVAL; + info->dev_capa = RTE_DMA_CAPA_MEM_TO_MEM | + RTE_DMA_CAPA_OPS_COPY | + RTE_DMA_CAPA_OPS_FILL; + if (ioat->version >= IOAT_VER_3_4) + info->dev_capa |= RTE_DMA_CAPA_HANDLES_ERRORS; + info->max_vchans = 1; + info->min_desc = 32; + info->max_desc = 4096; + return 0; +} + +/* Close a configured device. */ +static int +ioat_dev_close(struct rte_dma_dev *dev) +{ + struct ioat_dmadev *ioat; + + if (!dev) { + IOAT_PMD_ERR("Invalid device"); + return -EINVAL; + } + + ioat = dev->fp_obj->dev_private; + if (!ioat) { + IOAT_PMD_ERR("Error getting dev_private"); + return -EINVAL; + } + + rte_free(ioat->desc_ring); + + return 0; +} + /* Dump DMA device info. */ static int __dev_dump(void *dev_private, FILE *f) @@ -86,7 +189,11 @@ static int ioat_dmadev_create(const char *name, struct rte_pci_device *dev) { static const struct rte_dma_dev_ops ioat_dmadev_ops = { + .dev_close = ioat_dev_close, + .dev_configure = ioat_dev_configure, .dev_dump = ioat_dev_dump, + .dev_info_get = ioat_dev_info_get, + .vchan_setup = ioat_vchan_setup, }; struct rte_dma_dev *dmadev = NULL; From patchwork Mon Oct 18 12:38:28 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Conor Walsh X-Patchwork-Id: 102010 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 984FAA0C43; Mon, 18 Oct 2021 14:39:16 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id C238B41123; Mon, 18 Oct 2021 14:38:55 +0200 (CEST) Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by mails.dpdk.org (Postfix) with ESMTP id 1F8B141145 for ; Mon, 18 Oct 2021 14:38:53 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10140"; a="228117647" X-IronPort-AV: E=Sophos;i="5.85,382,1624345200"; d="scan'208";a="228117647" Received: from orsmga005.jf.intel.com ([10.7.209.41]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Oct 2021 05:38:53 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.85,382,1624345200"; d="scan'208";a="661361101" Received: from silpixa00401160.ir.intel.com ([10.55.129.96]) by orsmga005.jf.intel.com with ESMTP; 18 Oct 2021 05:38:51 -0700 From: Conor Walsh To: bruce.richardson@intel.com, thomas@monjalon.net, fengchengwen@huawei.com, jerinj@marvell.com, kevin.laatz@intel.com Cc: dev@dpdk.org, Conor Walsh Date: Mon, 18 Oct 2021 12:38:28 +0000 Message-Id: 
<20211018123835.1080174-6-conor.walsh@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20211018123835.1080174-1-conor.walsh@intel.com> References: <20210827172550.1522362-1-conor.walsh@intel.com> <20211018123835.1080174-1-conor.walsh@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v8 05/12] dma/ioat: add start and stop functions X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add start, stop and recover functions for IOAT devices. Signed-off-by: Conor Walsh Signed-off-by: Bruce Richardson Reviewed-by: Kevin Laatz --- doc/guides/dmadevs/ioat.rst | 3 ++ drivers/dma/ioat/ioat_dmadev.c | 92 ++++++++++++++++++++++++++++++++++ 2 files changed, 95 insertions(+) diff --git a/doc/guides/dmadevs/ioat.rst b/doc/guides/dmadevs/ioat.rst index af69556241..df159f9957 100644 --- a/doc/guides/dmadevs/ioat.rst +++ b/doc/guides/dmadevs/ioat.rst @@ -82,3 +82,6 @@ IOAT configuration requirements: * Only one ``vchan`` is supported per device. * Silent mode is not supported. * The transfer direction must be set to ``RTE_DMA_DIR_MEM_TO_MEM`` to copy from memory to memory. + +Once configured, the device can then be made ready for use by calling the +``rte_dma_start()`` API. diff --git a/drivers/dma/ioat/ioat_dmadev.c b/drivers/dma/ioat/ioat_dmadev.c index ada57c5814..cf28f4a7e6 100644 --- a/drivers/dma/ioat/ioat_dmadev.c +++ b/drivers/dma/ioat/ioat_dmadev.c @@ -78,6 +78,96 @@ ioat_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan __rte_unused, return 0; } +/* Recover IOAT device. */ +static inline int +__ioat_recover(struct ioat_dmadev *ioat) +{ + uint32_t chanerr, retry = 0; + uint16_t mask = ioat->qcfg.nb_desc - 1; + + /* Clear any channel errors. Reading and writing to chanerr does this. */ + chanerr = ioat->regs->chanerr; + ioat->regs->chanerr = chanerr; + + /* Reset Channel. */ + ioat->regs->chancmd = IOAT_CHANCMD_RESET; + + /* Write new chain address to trigger state change. */ + ioat->regs->chainaddr = ioat->desc_ring[(ioat->next_read - 1) & mask].next; + /* Ensure channel control and status addr are correct. */ + ioat->regs->chanctrl = IOAT_CHANCTRL_ANY_ERR_ABORT_EN | + IOAT_CHANCTRL_ERR_COMPLETION_EN; + ioat->regs->chancmp = ioat->status_addr; + + /* Allow HW time to move to the ARMED state. */ + do { + rte_pause(); + retry++; + } while (ioat->regs->chansts != IOAT_CHANSTS_ARMED && retry < 200); + + /* Exit as failure if device is still HALTED. */ + if (ioat->regs->chansts != IOAT_CHANSTS_ARMED) + return -1; + + /* Store next write as offset as recover will move HW and SW ring out of sync. */ + ioat->offset = ioat->next_read; + + /* Prime status register with previous address. */ + ioat->status = ioat->desc_ring[(ioat->next_read - 2) & mask].next; + + return 0; +} + +/* Start a configured device. */ +static int +ioat_dev_start(struct rte_dma_dev *dev) +{ + struct ioat_dmadev *ioat = dev->fp_obj->dev_private; + + if (ioat->qcfg.nb_desc == 0 || ioat->desc_ring == NULL) + return -EBUSY; + + /* Inform hardware of where the descriptor ring is. */ + ioat->regs->chainaddr = ioat->ring_addr; + /* Inform hardware of where to write the status/completions. */ + ioat->regs->chancmp = ioat->status_addr; + + /* Prime the status register to be set to the last element. 
*/ + ioat->status = ioat->ring_addr + ((ioat->qcfg.nb_desc - 1) * DESC_SZ); + + printf("IOAT.status: %s [0x%"PRIx64"]\n", + chansts_readable[ioat->status & IOAT_CHANSTS_STATUS], + ioat->status); + + if ((ioat->regs->chansts & IOAT_CHANSTS_STATUS) == IOAT_CHANSTS_HALTED) { + IOAT_PMD_WARN("Device HALTED on start, attempting to recover\n"); + if (__ioat_recover(ioat) != 0) { + IOAT_PMD_ERR("Device couldn't be recovered"); + return -1; + } + } + + return 0; +} + +/* Stop a configured device. */ +static int +ioat_dev_stop(struct rte_dma_dev *dev) +{ + struct ioat_dmadev *ioat = dev->fp_obj->dev_private; + uint32_t retry = 0; + + ioat->regs->chancmd = IOAT_CHANCMD_SUSPEND; + + do { + rte_pause(); + retry++; + } while ((ioat->regs->chansts & IOAT_CHANSTS_STATUS) != IOAT_CHANSTS_SUSPENDED + && retry < 200); + + return ((ioat->regs->chansts & IOAT_CHANSTS_STATUS) == IOAT_CHANSTS_SUSPENDED) ? 0 : -1; +} + /* Get device information of a device. */ static int ioat_dev_info_get(const struct rte_dma_dev *dev, struct rte_dma_info *info, uint32_t size) @@ -193,6 +283,8 @@ ioat_dmadev_create(const char *name, struct rte_pci_device *dev) .dev_configure = ioat_dev_configure, .dev_dump = ioat_dev_dump, .dev_info_get = ioat_dev_info_get, + .dev_start = ioat_dev_start, + .dev_stop = ioat_dev_stop, .vchan_setup = ioat_vchan_setup, }; From patchwork Mon Oct 18 12:38:29 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Conor Walsh X-Patchwork-Id: 102011 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 1602CA0C43; Mon, 18 Oct 2021 14:39:22 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id C8FED4114A; Mon, 18 Oct 2021 14:38:57 +0200 (CEST) Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by mails.dpdk.org (Postfix) with ESMTP id 92F6041130 for ; Mon, 18 Oct 2021 14:38:56 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10140"; a="228117652" X-IronPort-AV: E=Sophos;i="5.85,382,1624345200"; d="scan'208";a="228117652" Received: from orsmga005.jf.intel.com ([10.7.209.41]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Oct 2021 05:38:56 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.85,382,1624345200"; d="scan'208";a="661361110" Received: from silpixa00401160.ir.intel.com ([10.55.129.96]) by orsmga005.jf.intel.com with ESMTP; 18 Oct 2021 05:38:53 -0700 From: Conor Walsh To: bruce.richardson@intel.com, thomas@monjalon.net, fengchengwen@huawei.com, jerinj@marvell.com, kevin.laatz@intel.com Cc: dev@dpdk.org, Conor Walsh Date: Mon, 18 Oct 2021 12:38:29 +0000 Message-Id: <20211018123835.1080174-7-conor.walsh@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20211018123835.1080174-1-conor.walsh@intel.com> References: <20210827172550.1522362-1-conor.walsh@intel.com> <20211018123835.1080174-1-conor.walsh@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v8 06/12] dma/ioat: add data path job submission functions X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add data path functions for enqueuing and submitting operations to IOAT devices. 
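For illustration (the device ID, vchan and IOVA arrays are assumptions of the example, not part of this patch), a sketch of the intended usage pattern: enqueue a burst of copies without the per-operation submit flag, then ring the doorbell once for the whole burst:

#include <rte_dmadev.h>

static int
enqueue_copy_burst(int16_t dev_id, uint16_t vchan, const rte_iova_t *src,
		const rte_iova_t *dst, uint32_t len, uint16_t nb_ops)
{
	uint16_t i;

	for (i = 0; i < nb_ops; i++) {
		/* No RTE_DMA_OP_FLAG_SUBMIT here; submit once after the loop. */
		if (rte_dma_copy(dev_id, vchan, src[i], dst[i], len, 0) < 0)
			break;
	}

	/* Single doorbell write for everything enqueued above. */
	return rte_dma_submit(dev_id, vchan);
}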
Signed-off-by: Conor Walsh Reviewed-by: Kevin Laatz Reviewed-by: Chengwen Feng --- doc/guides/dmadevs/ioat.rst | 9 ++++ drivers/dma/ioat/ioat_dmadev.c | 92 ++++++++++++++++++++++++++++++++++ 2 files changed, 101 insertions(+) diff --git a/doc/guides/dmadevs/ioat.rst b/doc/guides/dmadevs/ioat.rst index df159f9957..9ee4e372a8 100644 --- a/doc/guides/dmadevs/ioat.rst +++ b/doc/guides/dmadevs/ioat.rst @@ -85,3 +85,12 @@ IOAT configuration requirements: Once configured, the device can then be made ready for use by calling the ``rte_dma_start()`` API. + +Performing Data Copies +~~~~~~~~~~~~~~~~~~~~~~~ + +Refer to the :ref:`Enqueue / Dequeue APIs ` section of the dmadev library +documentation for details on operation enqueue and submission API usage. + +It is expected that, for efficiency reasons, a burst of operations will be enqueued to the +device via multiple enqueue calls between calls to the ``rte_dma_submit()`` function. diff --git a/drivers/dma/ioat/ioat_dmadev.c b/drivers/dma/ioat/ioat_dmadev.c index cf28f4a7e6..4d00fec5c8 100644 --- a/drivers/dma/ioat/ioat_dmadev.c +++ b/drivers/dma/ioat/ioat_dmadev.c @@ -5,6 +5,7 @@ #include #include #include +#include #include "ioat_internal.h" @@ -17,6 +18,12 @@ RTE_LOG_REGISTER_DEFAULT(ioat_pmd_logtype, INFO); #define IOAT_PMD_NAME dmadev_ioat #define IOAT_PMD_NAME_STR RTE_STR(IOAT_PMD_NAME) +/* IOAT operations. */ +enum rte_ioat_ops { + ioat_op_copy = 0, /* Standard DMA Operation */ + ioat_op_fill /* Block Fill */ +}; + /* Configure a device. */ static int ioat_dev_configure(struct rte_dma_dev *dev __rte_unused, const struct rte_dma_conf *dev_conf, @@ -208,6 +215,87 @@ ioat_dev_close(struct rte_dma_dev *dev) return 0; } +/* Trigger hardware to begin performing enqueued operations. */ +static inline void +__submit(struct ioat_dmadev *ioat) +{ + *ioat->doorbell = ioat->next_write - ioat->offset; + + ioat->last_write = ioat->next_write; +} + +/* External submit function wrapper. */ +static int +ioat_submit(void *dev_private, uint16_t qid __rte_unused) +{ + struct ioat_dmadev *ioat = dev_private; + + __submit(ioat); + + return 0; +} + +/* Write descriptor for enqueue. */ +static inline int +__write_desc(void *dev_private, uint32_t op, uint64_t src, phys_addr_t dst, + unsigned int length, uint64_t flags) +{ + struct ioat_dmadev *ioat = dev_private; + uint16_t ret; + const unsigned short mask = ioat->qcfg.nb_desc - 1; + const unsigned short read = ioat->next_read; + unsigned short write = ioat->next_write; + const unsigned short space = mask + read - write; + struct ioat_dma_hw_desc *desc; + + if (space == 0) + return -ENOSPC; + + ioat->next_write = write + 1; + write &= mask; + + desc = &ioat->desc_ring[write]; + desc->size = length; + desc->u.control_raw = (uint32_t)((op << IOAT_CMD_OP_SHIFT) | + (1 << IOAT_COMP_UPDATE_SHIFT)); + + /* In IOAT the fence ensures that all operations including the current one + * are completed before moving on, DMAdev assumes that the fence ensures + * all operations before the current one are completed before starting + * the current one, so in IOAT we set the fence for the previous descriptor. + */ + if (flags & RTE_DMA_OP_FLAG_FENCE) + ioat->desc_ring[(write - 1) & mask].u.control.fence = 1; + + desc->src_addr = src; + desc->dest_addr = dst; + + rte_prefetch0(&ioat->desc_ring[ioat->next_write & mask]); + + ret = (uint16_t)(ioat->next_write - 1); + + if (flags & RTE_DMA_OP_FLAG_SUBMIT) + __submit(ioat); + + return ret; +} + +/* Enqueue a fill operation onto the ioat device. 
*/ +static int +ioat_enqueue_fill(void *dev_private, uint16_t qid __rte_unused, uint64_t pattern, + rte_iova_t dst, unsigned int length, uint64_t flags) +{ + return __write_desc(dev_private, ioat_op_fill, pattern, dst, length, flags); +} + +/* Enqueue a copy operation onto the ioat device. */ +static int +ioat_enqueue_copy(void *dev_private, uint16_t qid __rte_unused, rte_iova_t src, + rte_iova_t dst, unsigned int length, uint64_t flags) +{ + return __write_desc(dev_private, ioat_op_copy, src, dst, length, flags); +} + /* Dump DMA device info. */ static int __dev_dump(void *dev_private, FILE *f) @@ -310,6 +398,10 @@ ioat_dmadev_create(const char *name, struct rte_pci_device *dev) dmadev->dev_ops = &ioat_dmadev_ops; + dmadev->fp_obj->copy = ioat_enqueue_copy; + dmadev->fp_obj->fill = ioat_enqueue_fill; + dmadev->fp_obj->submit = ioat_submit; + ioat = dmadev->data->dev_private; ioat->dmadev = dmadev; ioat->regs = dev->mem_resource[0].addr; From patchwork Mon Oct 18 12:38:30 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Conor Walsh X-Patchwork-Id: 102012 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 3984AA0C43; Mon, 18 Oct 2021 14:39:27 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id D8ECB41151; Mon, 18 Oct 2021 14:39:00 +0200 (CEST) Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by mails.dpdk.org (Postfix) with ESMTP id 6F8C941151 for ; Mon, 18 Oct 2021 14:38:59 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10140"; a="228117662" X-IronPort-AV: E=Sophos;i="5.85,382,1624345200"; d="scan'208";a="228117662" Received: from orsmga005.jf.intel.com ([10.7.209.41]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Oct 2021 05:38:59 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.85,382,1624345200"; d="scan'208";a="661361125" Received: from silpixa00401160.ir.intel.com ([10.55.129.96]) by orsmga005.jf.intel.com with ESMTP; 18 Oct 2021 05:38:56 -0700 From: Conor Walsh To: bruce.richardson@intel.com, thomas@monjalon.net, fengchengwen@huawei.com, jerinj@marvell.com, kevin.laatz@intel.com Cc: dev@dpdk.org, Conor Walsh Date: Mon, 18 Oct 2021 12:38:30 +0000 Message-Id: <20211018123835.1080174-8-conor.walsh@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20211018123835.1080174-1-conor.walsh@intel.com> References: <20210827172550.1522362-1-conor.walsh@intel.com> <20211018123835.1080174-1-conor.walsh@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v8 07/12] dma/ioat: add data path completion functions X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add the data path functions for gathering completed operations from IOAT devices. 
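For illustration (the device ID, vchan and burst size are assumptions of the example), a sketch of a polling helper built on these functions: gather successful completions with rte_dma_completed() and, if an error is flagged, fall back to rte_dma_completed_status() so the driver can recover the channel and report per-operation status codes:

#include <stdbool.h>
#include <rte_dmadev.h>

#define COMP_BURST_SZ 32 /* burst size chosen only for this sketch */

static uint16_t
gather_completions(int16_t dev_id, uint16_t vchan)
{
	enum rte_dma_status_code status[COMP_BURST_SZ];
	uint16_t done, last_idx;
	bool error = false;

	done = rte_dma_completed(dev_id, vchan, COMP_BURST_SZ, &last_idx, &error);
	if (error)
		done += rte_dma_completed_status(dev_id, vchan, COMP_BURST_SZ,
				&last_idx, status);

	return done;
}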
Signed-off-by: Conor Walsh Signed-off-by: Kevin Laatz Acked-by: Bruce Richardson --- doc/guides/dmadevs/ioat.rst | 33 +++++++- drivers/dma/ioat/ioat_dmadev.c | 141 +++++++++++++++++++++++++++++++++ 2 files changed, 173 insertions(+), 1 deletion(-) diff --git a/doc/guides/dmadevs/ioat.rst b/doc/guides/dmadevs/ioat.rst index 9ee4e372a8..9ac90e3108 100644 --- a/doc/guides/dmadevs/ioat.rst +++ b/doc/guides/dmadevs/ioat.rst @@ -90,7 +90,38 @@ Performing Data Copies ~~~~~~~~~~~~~~~~~~~~~~~ Refer to the :ref:`Enqueue / Dequeue APIs ` section of the dmadev library -documentation for details on operation enqueue and submission API usage. +documentation for details on operation enqueue, submission and completion API usage. It is expected that, for efficiency reasons, a burst of operations will be enqueued to the device via multiple enqueue calls between calls to the ``rte_dma_submit()`` function. + +When gathering completions, ``rte_dma_completed()`` should be used, up until the point an error +occurs with an operation. If an error was encountered, ``rte_dma_completed_status()`` must be used +to reset the device and continue processing operations. This function will also gather the status +of each individual operation which is filled in to the ``status`` array provided as parameter +by the application. + +The status codes supported by IOAT are: + +* ``RTE_DMA_STATUS_SUCCESSFUL``: The operation was successful. +* ``RTE_DMA_STATUS_INVALID_SRC_ADDR``: The operation failed due to an invalid source address. +* ``RTE_DMA_STATUS_INVALID_DST_ADDR``: The operation failed due to an invalid destination address. +* ``RTE_DMA_STATUS_INVALID_LENGTH``: The operation failed due to an invalid descriptor length. +* ``RTE_DMA_STATUS_DESCRIPTOR_READ_ERROR``: The device could not read the descriptor. +* ``RTE_DMA_STATUS_ERROR_UNKNOWN``: The operation failed due to an unspecified error. + +The following code shows how to retrieve the number of successfully completed +copies within a burst and then uses ``rte_dma_completed_status()`` to check +which operation failed and reset the device to continue processing operations: + +.. code-block:: C + + enum rte_dma_status_code status[COMP_BURST_SZ]; + uint16_t count, idx, status_count; + bool error = 0; + + count = rte_dma_completed(dev_id, vchan, COMP_BURST_SZ, &idx, &error); + + if (error){ + status_count = rte_dma_completed_status(dev_id, vchan, COMP_BURST_SZ, &idx, status); + } diff --git a/drivers/dma/ioat/ioat_dmadev.c b/drivers/dma/ioat/ioat_dmadev.c index 4d00fec5c8..0318f67772 100644 --- a/drivers/dma/ioat/ioat_dmadev.c +++ b/drivers/dma/ioat/ioat_dmadev.c @@ -6,6 +6,7 @@ #include #include #include +#include #include "ioat_internal.h" @@ -362,6 +363,144 @@ ioat_dev_dump(const struct rte_dma_dev *dev, FILE *f) return __dev_dump(dev->fp_obj->dev_private, f); } +/* Returns the index of the last completed operation. */ +static inline uint16_t +__get_last_completed(const struct ioat_dmadev *ioat, int *state) +{ + /* Status register contains the address of the completed operation */ + uint64_t status = ioat->status; + + /* lower 3 bits indicate "transfer status" : active, idle, halted. + * We can ignore bit 0. + */ + *state = status & IOAT_CHANSTS_STATUS; + + /* If we are just after recovering from an error the address returned by + * status will be 0, in this case we return the offset - 1 as the last + * completed. If not return the status value minus the chainaddr which + * gives us an offset into the ring. 
Right shifting by 6 (divide by 64) + * gives the index of the completion from the HW point of view and adding + * the offset translates the ring index from HW to SW point of view. + */ + if ((status & ~IOAT_CHANSTS_STATUS) == 0) + return ioat->offset - 1; + + return (status - ioat->ring_addr) >> 6; +} + +/* Translates IOAT ChanERRs to DMA error codes. */ +static inline enum rte_dma_status_code +__translate_status_ioat_to_dma(uint32_t chanerr) +{ + if (chanerr & IOAT_CHANERR_INVALID_SRC_ADDR_MASK) + return RTE_DMA_STATUS_INVALID_SRC_ADDR; + else if (chanerr & IOAT_CHANERR_INVALID_DST_ADDR_MASK) + return RTE_DMA_STATUS_INVALID_DST_ADDR; + else if (chanerr & IOAT_CHANERR_INVALID_LENGTH_MASK) + return RTE_DMA_STATUS_INVALID_LENGTH; + else if (chanerr & IOAT_CHANERR_DESCRIPTOR_READ_ERROR_MASK) + return RTE_DMA_STATUS_DESCRIPTOR_READ_ERROR; + else + return RTE_DMA_STATUS_ERROR_UNKNOWN; +} + +/* Returns details of operations that have been completed. */ +static uint16_t +ioat_completed(void *dev_private, uint16_t qid __rte_unused, const uint16_t max_ops, + uint16_t *last_idx, bool *has_error) +{ + struct ioat_dmadev *ioat = dev_private; + + const unsigned short mask = (ioat->qcfg.nb_desc - 1); + const unsigned short read = ioat->next_read; + unsigned short last_completed, count; + int state, fails = 0; + + /* Do not do any work if there is an uncleared error. */ + if (ioat->failure != 0) { + *has_error = true; + *last_idx = ioat->next_read - 2; + return 0; + } + + last_completed = __get_last_completed(ioat, &state); + count = (last_completed + 1 - read) & mask; + + /* Cap count at max_ops or set as last run in batch. */ + if (count > max_ops) + count = max_ops; + + if (count == max_ops || state != IOAT_CHANSTS_HALTED) { + ioat->next_read = read + count; + *last_idx = ioat->next_read - 1; + } else { + *has_error = true; + rte_errno = EIO; + ioat->failure = ioat->regs->chanerr; + ioat->next_read = read + count + 1; + if (__ioat_recover(ioat) != 0) { + IOAT_PMD_ERR("Device HALTED and could not be recovered\n"); + __dev_dump(dev_private, stdout); + return 0; + } + __submit(ioat); + fails++; + *last_idx = ioat->next_read - 2; + } + + return count; +} + +/* Returns detailed status information about operations that have been completed. */ +static uint16_t +ioat_completed_status(void *dev_private, uint16_t qid __rte_unused, + uint16_t max_ops, uint16_t *last_idx, enum rte_dma_status_code *status) +{ + struct ioat_dmadev *ioat = dev_private; + + const unsigned short mask = (ioat->qcfg.nb_desc - 1); + const unsigned short read = ioat->next_read; + unsigned short count, last_completed; + uint64_t fails = 0; + int state, i; + + last_completed = __get_last_completed(ioat, &state); + count = (last_completed + 1 - read) & mask; + + for (i = 0; i < RTE_MIN(count + 1, max_ops); i++) + status[i] = RTE_DMA_STATUS_SUCCESSFUL; + + /* Cap count at max_ops or set as last run in batch. 
*/ + if (count > max_ops) + count = max_ops; + + if (count == max_ops || state != IOAT_CHANSTS_HALTED) + ioat->next_read = read + count; + else { + rte_errno = EIO; + status[count] = __translate_status_ioat_to_dma(ioat->regs->chanerr); + count++; + ioat->next_read = read + count; + if (__ioat_recover(ioat) != 0) { + IOAT_PMD_ERR("Device HALTED and could not be recovered\n"); + __dev_dump(dev_private, stdout); + return 0; + } + __submit(ioat); + fails++; + } + + if (ioat->failure > 0) { + status[0] = __translate_status_ioat_to_dma(ioat->failure); + count = RTE_MIN(count + 1, max_ops); + ioat->failure = 0; + } + + *last_idx = ioat->next_read - 1; + + return count; +} + /* Create a DMA device. */ static int ioat_dmadev_create(const char *name, struct rte_pci_device *dev) @@ -398,6 +537,8 @@ ioat_dmadev_create(const char *name, struct rte_pci_device *dev) dmadev->dev_ops = &ioat_dmadev_ops; + dmadev->fp_obj->completed = ioat_completed; + dmadev->fp_obj->completed_status = ioat_completed_status; dmadev->fp_obj->copy = ioat_enqueue_copy; dmadev->fp_obj->fill = ioat_enqueue_fill; dmadev->fp_obj->submit = ioat_submit; From patchwork Mon Oct 18 12:38:31 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Conor Walsh X-Patchwork-Id: 102013 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 8F34FA0C43; Mon, 18 Oct 2021 14:39:32 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id F0CFB41130; Mon, 18 Oct 2021 14:39:03 +0200 (CEST) Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by mails.dpdk.org (Postfix) with ESMTP id 88115410F6 for ; Mon, 18 Oct 2021 14:39:02 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10140"; a="228117670" X-IronPort-AV: E=Sophos;i="5.85,382,1624345200"; d="scan'208";a="228117670" Received: from orsmga005.jf.intel.com ([10.7.209.41]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Oct 2021 05:39:01 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.85,382,1624345200"; d="scan'208";a="661361139" Received: from silpixa00401160.ir.intel.com ([10.55.129.96]) by orsmga005.jf.intel.com with ESMTP; 18 Oct 2021 05:38:59 -0700 From: Conor Walsh To: bruce.richardson@intel.com, thomas@monjalon.net, fengchengwen@huawei.com, jerinj@marvell.com, kevin.laatz@intel.com Cc: dev@dpdk.org, Conor Walsh Date: Mon, 18 Oct 2021 12:38:31 +0000 Message-Id: <20211018123835.1080174-9-conor.walsh@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20211018123835.1080174-1-conor.walsh@intel.com> References: <20210827172550.1522362-1-conor.walsh@intel.com> <20211018123835.1080174-1-conor.walsh@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v8 08/12] dma/ioat: add statistics X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add statistic tracking for operations in IOAT. 
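The submitted/completed/errors counters added by this patch are exposed through the generic dmadev stats API via the ``stats_get`` and ``stats_reset`` callbacks below. A minimal sketch of reading and clearing them from an application, with ``dev_id`` and ``vchan`` assumed:

    #include <inttypes.h>
    #include <stdio.h>

    #include <rte_dmadev.h>

    /* Sketch only: dev_id and vchan are assumed application values. */
    static void
    dump_and_reset_dma_stats(int16_t dev_id, uint16_t vchan)
    {
        struct rte_dma_stats stats;

        /* Copy the driver-maintained counters into the caller's structure. */
        if (rte_dma_stats_get(dev_id, vchan, &stats) == 0)
            printf("submitted: %" PRIu64 ", completed: %" PRIu64
                    ", errors: %" PRIu64 "\n",
                    stats.submitted, stats.completed, stats.errors);

        /* Zero the counters again via the driver's stats_reset callback. */
        rte_dma_stats_reset(dev_id, vchan);
    }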
Signed-off-by: Conor Walsh Reviewed-by: Kevin Laatz Acked-by: Bruce Richardson --- drivers/dma/ioat/ioat_dmadev.c | 43 ++++++++++++++++++++++++++++++++++ 1 file changed, 43 insertions(+) diff --git a/drivers/dma/ioat/ioat_dmadev.c b/drivers/dma/ioat/ioat_dmadev.c index 0318f67772..48126a1bcb 100644 --- a/drivers/dma/ioat/ioat_dmadev.c +++ b/drivers/dma/ioat/ioat_dmadev.c @@ -77,6 +77,9 @@ ioat_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan __rte_unused, ioat->offset = 0; ioat->failure = 0; + /* Reset Stats. */ + ioat->stats = (struct rte_dma_stats){0}; + /* Configure descriptor ring - each one points to next. */ for (i = 0; i < ioat->qcfg.nb_desc; i++) { ioat->desc_ring[i].next = ioat->ring_addr + @@ -222,6 +225,8 @@ __submit(struct ioat_dmadev *ioat) { *ioat->doorbell = ioat->next_write - ioat->offset; + ioat->stats.submitted += (uint16_t)(ioat->next_write - ioat->last_write); + ioat->last_write = ioat->next_write; } @@ -352,6 +357,10 @@ __dev_dump(void *dev_private, FILE *f) fprintf(f, " Dest: 0x%"PRIx64"\n", ioat->desc_ring[ioat->next_read & mask].dest_addr); fprintf(f, " Next: 0x%"PRIx64"\n", ioat->desc_ring[ioat->next_read & mask].next); fprintf(f, " }\n"); + fprintf(f, " Key Stats { submitted: %"PRIu64", comp: %"PRIu64", failed: %"PRIu64" }\n", + ioat->stats.submitted, + ioat->stats.completed, + ioat->stats.errors); return 0; } @@ -448,6 +457,9 @@ ioat_completed(void *dev_private, uint16_t qid __rte_unused, const uint16_t max_ *last_idx = ioat->next_read - 2; } + ioat->stats.completed += count; + ioat->stats.errors += fails; + return count; } @@ -498,9 +510,38 @@ ioat_completed_status(void *dev_private, uint16_t qid __rte_unused, *last_idx = ioat->next_read - 1; + ioat->stats.completed += count; + ioat->stats.errors += fails; + return count; } +/* Retrieve the generic stats of a DMA device. */ +static int +ioat_stats_get(const struct rte_dma_dev *dev, uint16_t vchan __rte_unused, + struct rte_dma_stats *rte_stats, uint32_t size) +{ + struct rte_dma_stats *stats = (&((struct ioat_dmadev *)dev->fp_obj->dev_private)->stats); + + if (size < sizeof(rte_stats)) + return -EINVAL; + if (rte_stats == NULL) + return -EINVAL; + + *rte_stats = *stats; + return 0; +} + +/* Reset the generic stat counters for the DMA device. */ +static int +ioat_stats_reset(struct rte_dma_dev *dev, uint16_t vchan __rte_unused) +{ + struct ioat_dmadev *ioat = dev->fp_obj->dev_private; + + ioat->stats = (struct rte_dma_stats){0}; + return 0; +} + /* Create a DMA device. 
*/ static int ioat_dmadev_create(const char *name, struct rte_pci_device *dev) @@ -512,6 +553,8 @@ ioat_dmadev_create(const char *name, struct rte_pci_device *dev) .dev_info_get = ioat_dev_info_get, .dev_start = ioat_dev_start, .dev_stop = ioat_dev_stop, + .stats_get = ioat_stats_get, + .stats_reset = ioat_stats_reset, .vchan_setup = ioat_vchan_setup, }; From patchwork Mon Oct 18 12:38:32 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Conor Walsh X-Patchwork-Id: 102014 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 66A8CA0C43; Mon, 18 Oct 2021 14:39:39 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 5D07741154; Mon, 18 Oct 2021 14:39:06 +0200 (CEST) Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by mails.dpdk.org (Postfix) with ESMTP id 65BA0410F8 for ; Mon, 18 Oct 2021 14:39:04 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10140"; a="228117675" X-IronPort-AV: E=Sophos;i="5.85,382,1624345200"; d="scan'208";a="228117675" Received: from orsmga005.jf.intel.com ([10.7.209.41]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Oct 2021 05:39:04 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.85,382,1624345200"; d="scan'208";a="661361156" Received: from silpixa00401160.ir.intel.com ([10.55.129.96]) by orsmga005.jf.intel.com with ESMTP; 18 Oct 2021 05:39:01 -0700 From: Conor Walsh To: bruce.richardson@intel.com, thomas@monjalon.net, fengchengwen@huawei.com, jerinj@marvell.com, kevin.laatz@intel.com Cc: dev@dpdk.org, Conor Walsh Date: Mon, 18 Oct 2021 12:38:32 +0000 Message-Id: <20211018123835.1080174-10-conor.walsh@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20211018123835.1080174-1-conor.walsh@intel.com> References: <20210827172550.1522362-1-conor.walsh@intel.com> <20211018123835.1080174-1-conor.walsh@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v8 09/12] dma/ioat: add support for vchan status function X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add support for the rte_dmadev_vchan_status API call. Signed-off-by: Conor Walsh Reviewed-by: Kevin Laatz Acked-by: Bruce Richardson --- drivers/dma/ioat/ioat_dmadev.c | 21 +++++++++++++++++++++ 1 file changed, 21 insertions(+) diff --git a/drivers/dma/ioat/ioat_dmadev.c b/drivers/dma/ioat/ioat_dmadev.c index 48126a1bcb..3d4d7b66e2 100644 --- a/drivers/dma/ioat/ioat_dmadev.c +++ b/drivers/dma/ioat/ioat_dmadev.c @@ -542,6 +542,26 @@ ioat_stats_reset(struct rte_dma_dev *dev, uint16_t vchan __rte_unused) return 0; } +/* Check if the IOAT device is idle. 
*/ +static int +ioat_vchan_status(const struct rte_dma_dev *dev, uint16_t vchan __rte_unused, + enum rte_dma_vchan_status *status) +{ + int state = 0; + const struct ioat_dmadev *ioat = dev->fp_obj->dev_private; + const uint16_t mask = ioat->qcfg.nb_desc - 1; + const uint16_t last = __get_last_completed(ioat, &state); + + if (state == IOAT_CHANSTS_HALTED || state == IOAT_CHANSTS_SUSPENDED) + *status = RTE_DMA_VCHAN_HALTED_ERROR; + else if (last == ((ioat->next_write - 1) & mask)) + *status = RTE_DMA_VCHAN_IDLE; + else + *status = RTE_DMA_VCHAN_ACTIVE; + + return 0; +} + /* Create a DMA device. */ static int ioat_dmadev_create(const char *name, struct rte_pci_device *dev) @@ -555,6 +575,7 @@ ioat_dmadev_create(const char *name, struct rte_pci_device *dev) .dev_stop = ioat_dev_stop, .stats_get = ioat_stats_get, .stats_reset = ioat_stats_reset, + .vchan_status = ioat_vchan_status, .vchan_setup = ioat_vchan_setup, }; From patchwork Mon Oct 18 12:38:33 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Conor Walsh X-Patchwork-Id: 102015 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 0522EA0C43; Mon, 18 Oct 2021 14:39:45 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 6D76A4113A; Mon, 18 Oct 2021 14:39:08 +0200 (CEST) Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by mails.dpdk.org (Postfix) with ESMTP id BA1F34115F for ; Mon, 18 Oct 2021 14:39:06 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10140"; a="228117683" X-IronPort-AV: E=Sophos;i="5.85,382,1624345200"; d="scan'208";a="228117683" Received: from orsmga005.jf.intel.com ([10.7.209.41]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Oct 2021 05:39:06 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.85,382,1624345200"; d="scan'208";a="661361167" Received: from silpixa00401160.ir.intel.com ([10.55.129.96]) by orsmga005.jf.intel.com with ESMTP; 18 Oct 2021 05:39:04 -0700 From: Conor Walsh To: bruce.richardson@intel.com, thomas@monjalon.net, fengchengwen@huawei.com, jerinj@marvell.com, kevin.laatz@intel.com Cc: dev@dpdk.org, Conor Walsh Date: Mon, 18 Oct 2021 12:38:33 +0000 Message-Id: <20211018123835.1080174-11-conor.walsh@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20211018123835.1080174-1-conor.walsh@intel.com> References: <20210827172550.1522362-1-conor.walsh@intel.com> <20211018123835.1080174-1-conor.walsh@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v8 10/12] dma/ioat: add burst capacity function X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Adds the ability to find the remaining space in the IOAT ring. 
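The ``ioat_vchan_status()`` callback shown above backs the generic ``rte_dma_vchan_status()`` call. A minimal sketch of using it to wait for a channel to drain; ``dev_id`` and ``vchan`` are assumed application values, not taken from this series:

    #include <rte_dmadev.h>

    /* Sketch only: poll until the channel is no longer active. */
    static int
    wait_until_idle(int16_t dev_id, uint16_t vchan)
    {
        enum rte_dma_vchan_status st;

        do {
            if (rte_dma_vchan_status(dev_id, vchan, &st) != 0)
                return -1;
        } while (st == RTE_DMA_VCHAN_ACTIVE);

        /* RTE_DMA_VCHAN_HALTED_ERROR here indicates the channel needs recovery. */
        return (st == RTE_DMA_VCHAN_IDLE) ? 0 : -1;
    }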
Signed-off-by: Conor Walsh Signed-off-by: Kevin Laatz Acked-by: Bruce Richardson --- drivers/dma/ioat/ioat_dmadev.c | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/drivers/dma/ioat/ioat_dmadev.c b/drivers/dma/ioat/ioat_dmadev.c index 3d4d7b66e2..a230496b11 100644 --- a/drivers/dma/ioat/ioat_dmadev.c +++ b/drivers/dma/ioat/ioat_dmadev.c @@ -516,6 +516,19 @@ ioat_completed_status(void *dev_private, uint16_t qid __rte_unused, return count; } +/* Get the remaining capacity of the ring. */ +static uint16_t +ioat_burst_capacity(const void *dev_private, uint16_t vchan __rte_unused) +{ + const struct ioat_dmadev *ioat = dev_private; + unsigned short size = ioat->qcfg.nb_desc - 1; + unsigned short read = ioat->next_read; + unsigned short write = ioat->next_write; + unsigned short space = size - (write - read); + + return space; +} + /* Retrieve the generic stats of a DMA device. */ static int ioat_stats_get(const struct rte_dma_dev *dev, uint16_t vchan __rte_unused, @@ -601,6 +614,7 @@ ioat_dmadev_create(const char *name, struct rte_pci_device *dev) dmadev->dev_ops = &ioat_dmadev_ops; + dmadev->fp_obj->burst_capacity = ioat_burst_capacity; dmadev->fp_obj->completed = ioat_completed; dmadev->fp_obj->completed_status = ioat_completed_status; dmadev->fp_obj->copy = ioat_enqueue_copy; From patchwork Mon Oct 18 12:38:34 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Conor Walsh X-Patchwork-Id: 102016 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 61003A0C43; Mon, 18 Oct 2021 14:39:50 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 75D9241165; Mon, 18 Oct 2021 14:39:11 +0200 (CEST) Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by mails.dpdk.org (Postfix) with ESMTP id 0D8EA41165 for ; Mon, 18 Oct 2021 14:39:08 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10140"; a="228117688" X-IronPort-AV: E=Sophos;i="5.85,382,1624345200"; d="scan'208";a="228117688" Received: from orsmga005.jf.intel.com ([10.7.209.41]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Oct 2021 05:39:08 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.85,382,1624345200"; d="scan'208";a="661361181" Received: from silpixa00401160.ir.intel.com ([10.55.129.96]) by orsmga005.jf.intel.com with ESMTP; 18 Oct 2021 05:39:06 -0700 From: Conor Walsh To: bruce.richardson@intel.com, thomas@monjalon.net, fengchengwen@huawei.com, jerinj@marvell.com, kevin.laatz@intel.com Cc: dev@dpdk.org, Conor Walsh Date: Mon, 18 Oct 2021 12:38:34 +0000 Message-Id: <20211018123835.1080174-12-conor.walsh@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20211018123835.1080174-1-conor.walsh@intel.com> References: <20210827172550.1522362-1-conor.walsh@intel.com> <20211018123835.1080174-1-conor.walsh@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v8 11/12] devbind: move ioat device IDs to dmadev category X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Move Intel IOAT devices from Misc to DMA devices. 
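The ring-space query added in the previous patch is reached from applications as ``rte_dma_burst_capacity()``; for this driver the value is simply the number of free descriptors computed by ``ioat_burst_capacity()``. A minimal sketch, with ``dev_id``, ``vchan`` and ``nb_ops`` assumed:

    #include <stdbool.h>

    #include <rte_dmadev.h>

    /* Sketch only: gate a burst of nb_ops enqueues on the reported free space. */
    static bool
    burst_will_fit(int16_t dev_id, uint16_t vchan, uint16_t nb_ops)
    {
        return rte_dma_burst_capacity(dev_id, vchan) >= nb_ops;
    }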
Signed-off-by: Conor Walsh Reviewed-by: Kevin Laatz Reviewed-by: Bruce Richardson --- usertools/dpdk-devbind.py | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/usertools/dpdk-devbind.py b/usertools/dpdk-devbind.py index ba18e2a487..91f1b16bde 100755 --- a/usertools/dpdk-devbind.py +++ b/usertools/dpdk-devbind.py @@ -71,14 +71,13 @@ network_devices = [network_class, cavium_pkx, avp_vnic, ifpga_class] baseband_devices = [acceleration_class] crypto_devices = [encryption_class, intel_processor_class] -dma_devices = [intel_idxd_spr] +dma_devices = [intel_idxd_spr, intel_ioat_bdw, intel_ioat_icx, intel_ioat_skx] eventdev_devices = [cavium_sso, cavium_tim, intel_dlb, octeontx2_sso] mempool_devices = [cavium_fpa, octeontx2_npa] compress_devices = [cavium_zip] regex_devices = [octeontx2_ree] -misc_devices = [cnxk_bphy, cnxk_bphy_cgx, cnxk_inl_dev, intel_ioat_bdw, - intel_ioat_skx, intel_ioat_icx, intel_ntb_skx, - intel_ntb_icx, octeontx2_dma] +misc_devices = [cnxk_bphy, cnxk_bphy_cgx, cnxk_inl_dev, intel_ntb_skx, + intel_ntb_icx, octeontx2_dma] # global dict ethernet devices present. Dictionary indexed by PCI address. # Each device within this is itself a dictionary of device properties From patchwork Mon Oct 18 12:38:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Conor Walsh X-Patchwork-Id: 102017 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 85787A0C43; Mon, 18 Oct 2021 14:39:55 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 7DA0C41170; Mon, 18 Oct 2021 14:39:12 +0200 (CEST) Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by mails.dpdk.org (Postfix) with ESMTP id 5E26F410F4 for ; Mon, 18 Oct 2021 14:39:11 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10140"; a="228117692" X-IronPort-AV: E=Sophos;i="5.85,382,1624345200"; d="scan'208";a="228117692" Received: from orsmga005.jf.intel.com ([10.7.209.41]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Oct 2021 05:39:11 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.85,382,1624345200"; d="scan'208";a="661361192" Received: from silpixa00401160.ir.intel.com ([10.55.129.96]) by orsmga005.jf.intel.com with ESMTP; 18 Oct 2021 05:39:08 -0700 From: Conor Walsh To: bruce.richardson@intel.com, thomas@monjalon.net, fengchengwen@huawei.com, jerinj@marvell.com, kevin.laatz@intel.com Cc: dev@dpdk.org, Conor Walsh Date: Mon, 18 Oct 2021 12:38:35 +0000 Message-Id: <20211018123835.1080174-13-conor.walsh@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20211018123835.1080174-1-conor.walsh@intel.com> References: <20210827172550.1522362-1-conor.walsh@intel.com> <20211018123835.1080174-1-conor.walsh@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v8 12/12] raw/ioat: deprecate ioat rawdev driver X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Deprecate the rawdev IOAT driver as both IOAT and IDXD drivers have moved to dmadev. 
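For applications moving off the rawdev driver, the dmadev replacement is brought up through the generic configure/vchan-setup/start calls. The sketch below is illustrative only; the single vchan, the 512-descriptor ring and the mem-to-mem direction are assumptions rather than values taken from this series:

    #include <rte_dmadev.h>

    /* Sketch only: minimal bring-up of one mem-to-mem channel on dev_id. */
    static int
    setup_dma_dev(int16_t dev_id)
    {
        const struct rte_dma_conf dev_conf = { .nb_vchans = 1 };
        const struct rte_dma_vchan_conf qconf = {
            .direction = RTE_DMA_DIR_MEM_TO_MEM,
            .nb_desc = 512,
        };

        if (rte_dma_configure(dev_id, &dev_conf) != 0)
            return -1;
        if (rte_dma_vchan_setup(dev_id, 0, &qconf) != 0)
            return -1;
        return rte_dma_start(dev_id);
    }

After this, copies are enqueued and completed with the ``rte_dma_copy()``/``rte_dma_submit()``/``rte_dma_completed()`` calls documented earlier in the series.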
Signed-off-by: Conor Walsh Acked-by: Kevin Laatz Acked-by: Bruce Richardson Acked-by: Thomas Monjalon --- MAINTAINERS | 2 +- doc/guides/rawdevs/ioat.rst | 4 ++++ doc/guides/rel_notes/deprecation.rst | 7 +++++++ 3 files changed, 12 insertions(+), 1 deletion(-) diff --git a/MAINTAINERS b/MAINTAINERS index 283c70f7d7..b9f7746dc4 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1322,7 +1322,7 @@ T: git://dpdk.org/next/dpdk-next-net-intel F: drivers/raw/ifpga/ F: doc/guides/rawdevs/ifpga.rst -IOAT Rawdev +IOAT Rawdev - DEPRECATED M: Bruce Richardson F: drivers/raw/ioat/ F: doc/guides/rawdevs/ioat.rst diff --git a/doc/guides/rawdevs/ioat.rst b/doc/guides/rawdevs/ioat.rst index a65530bd30..98d15dd032 100644 --- a/doc/guides/rawdevs/ioat.rst +++ b/doc/guides/rawdevs/ioat.rst @@ -6,6 +6,10 @@ IOAT Rawdev Driver =================== +.. warning:: + As of DPDK 21.11, the rawdev implementation of the IOAT driver has been deprecated. + Please use the dmadev library instead. + The ``ioat`` rawdev driver provides a poll-mode driver (PMD) for Intel\ |reg| Data Streaming Accelerator `(Intel DSA) `_ and for Intel\ |reg| diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst index 45239ca56e..1014c45bc5 100644 --- a/doc/guides/rel_notes/deprecation.rst +++ b/doc/guides/rel_notes/deprecation.rst @@ -236,3 +236,10 @@ Deprecation Notices * cmdline: ``cmdline`` structure will be made opaque to hide platform-specific content. On Linux and FreeBSD, supported prior to DPDK 20.11, original structure will be kept until DPDK 21.11. + +* raw/ioat: The ``ioat`` rawdev driver has been deprecated, since its + functionality is provided through the new ``dmadev`` infrastructure. + To continue to use hardware previously supported by the ``ioat`` rawdev driver, + applications should be updated to use the ``dmadev`` library instead, + with the underlying HW functionality being provided by the ``ioat`` or + ``idxd`` dma drivers.