From patchwork Fri Sep 24 14:33:24 2021
X-Patchwork-Submitter: Conor Walsh
X-Patchwork-Id: 99610
From: Conor Walsh
To: bruce.richardson@intel.com, fengchengwen@huawei.com, jerinj@marvell.com, kevin.laatz@intel.com
Cc: dev@dpdk.org, Conor Walsh
Date: Fri, 24 Sep 2021 14:33:24 +0000
Message-Id: <20210924143335.1092300-2-conor.walsh@intel.com>
In-Reply-To: <20210924143335.1092300-1-conor.walsh@intel.com>
References: <20210827172550.1522362-1-conor.walsh@intel.com> <20210924143335.1092300-1-conor.walsh@intel.com>
Subject: [dpdk-dev] [PATCH v5 01/12] dma/ioat: add device probe and removal functions

Add the basic device probe/remove skeleton code and initial documentation
for new IOAT DMA driver. Maintainers update is also included in this patch.
Signed-off-by: Conor Walsh Reviewed-by: Kevin Laatz Reviewed-by: Chengwen Feng --- MAINTAINERS | 6 +++ doc/guides/dmadevs/index.rst | 2 + doc/guides/dmadevs/ioat.rst | 69 ++++++++++++++++++++++++++ doc/guides/rel_notes/release_21_11.rst | 6 +++ drivers/dma/ioat/ioat_dmadev.c | 69 ++++++++++++++++++++++++++ drivers/dma/ioat/ioat_hw_defs.h | 35 +++++++++++++ drivers/dma/ioat/ioat_internal.h | 20 ++++++++ drivers/dma/ioat/meson.build | 7 +++ drivers/dma/ioat/version.map | 3 ++ drivers/dma/meson.build | 1 + 10 files changed, 218 insertions(+) create mode 100644 doc/guides/dmadevs/ioat.rst create mode 100644 drivers/dma/ioat/ioat_dmadev.c create mode 100644 drivers/dma/ioat/ioat_hw_defs.h create mode 100644 drivers/dma/ioat/ioat_internal.h create mode 100644 drivers/dma/ioat/meson.build create mode 100644 drivers/dma/ioat/version.map diff --git a/MAINTAINERS b/MAINTAINERS index 497219e948..ccabba9169 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1209,6 +1209,12 @@ M: Kevin Laatz F: drivers/dma/idxd/ F: doc/guides/dmadevs/idxd.rst +Intel IOAT +M: Bruce Richardson +M: Conor Walsh +F: drivers/dma/ioat/ +F: doc/guides/dmadevs/ioat.rst + RegEx Drivers ------------- diff --git a/doc/guides/dmadevs/index.rst b/doc/guides/dmadevs/index.rst index 5d4abf880e..c59f4b5c92 100644 --- a/doc/guides/dmadevs/index.rst +++ b/doc/guides/dmadevs/index.rst @@ -12,3 +12,5 @@ an application through DMA API. :numbered: idxd + ioat + diff --git a/doc/guides/dmadevs/ioat.rst b/doc/guides/dmadevs/ioat.rst new file mode 100644 index 0000000000..9ae1d8a2ad --- /dev/null +++ b/doc/guides/dmadevs/ioat.rst @@ -0,0 +1,69 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright(c) 2021 Intel Corporation. + +.. include:: + +IOAT DMA Device Driver +======================= + +The ``ioat`` dmadev driver provides a poll-mode driver (PMD) for Intel\ +|reg| QuickData Technology which is part of part of Intel\ |reg| I/O +Acceleration Technology (`Intel I/OAT +`_). +This PMD, when used on supported hardware, allows data copies, for example, +cloning packet data, to be accelerated by IOAT hardware rather than having to +be done by software, freeing up CPU cycles for other tasks. + +Hardware Requirements +---------------------- + +The ``dpdk-devbind.py`` script, included with DPDK, can be used to show the +presence of supported hardware. Running ``dpdk-devbind.py --status-dev dma`` +will show all the DMA devices on the system, IOAT devices are included in this +list. For Intel\ |reg| IOAT devices, the hardware will often be listed as +"Crystal Beach DMA", or "CBDMA" or on some newer systems '0b00' due to the +absence of pci-id database entries for them at this point. + +.. note:: + Error handling is not supported by this driver on hardware prior to + Intel Ice Lake. Unsupported systems include Broadwell, Skylake and + Cascade Lake. + +Compilation +------------ + +For builds using ``meson`` and ``ninja``, the driver will be built when the +target platform is x86-based. No additional compilation steps are necessary. + +Device Setup +------------- + +Intel\ |reg| IOAT devices will need to be bound to a suitable DPDK-supported +user-space IO driver such as ``vfio-pci`` in order to be used by DPDK. + +The ``dpdk-devbind.py`` script can be used to view the state of the devices using:: + + $ dpdk-devbind.py --status-dev dma + +The ``dpdk-devbind.py`` script can also be used to bind devices to a suitable driver. 
+For example:: + + $ dpdk-devbind.py -b vfio-pci 00:01.0 00:01.1 + +Device Probing and Initialization +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +For devices bound to a suitable DPDK-supported driver (``vfio-pci``), the HW +devices will be found as part of the device scan done at application +initialization time without the need to pass parameters to the application. + +If the application does not require all the devices available an allowlist can +be used in the same way that other DPDK devices use them. + +For example:: + + $ dpdk-test -a + +Once probed successfully, the device will appear as a ``dmadev``, that is a +"DMA device type" inside DPDK, and can be accessed using APIs from the +``rte_dmadev`` library. diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst index c980e729f8..e34957069f 100644 --- a/doc/guides/rel_notes/release_21_11.rst +++ b/doc/guides/rel_notes/release_21_11.rst @@ -103,6 +103,12 @@ New Features The IDXD dmadev driver provide device drivers for the Intel DSA devices. This device driver can be used through the generic dmadev API. +* **Added IOAT dmadev driver implementation.** + + The Intel I/O Acceleration Technology (IOAT) dmadev driver provides a device + driver for Intel IOAT devices such as Crystal Beach DMA (CBDMA) on Ice Lake, + Skylake and Broadwell. This device driver can be used through the generic dmadev API. + Removed Items ------------- diff --git a/drivers/dma/ioat/ioat_dmadev.c b/drivers/dma/ioat/ioat_dmadev.c new file mode 100644 index 0000000000..f3491d45b1 --- /dev/null +++ b/drivers/dma/ioat/ioat_dmadev.c @@ -0,0 +1,69 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 Intel Corporation + */ + +#include +#include + +#include "ioat_internal.h" + +static struct rte_pci_driver ioat_pmd_drv; + +RTE_LOG_REGISTER_DEFAULT(ioat_pmd_logtype, INFO); + +#define IOAT_PMD_NAME dmadev_ioat +#define IOAT_PMD_NAME_STR RTE_STR(IOAT_PMD_NAME) + +/* Probe DMA device. */ +static int +ioat_dmadev_probe(struct rte_pci_driver *drv, struct rte_pci_device *dev) +{ + char name[32]; + + rte_pci_device_name(&dev->addr, name, sizeof(name)); + IOAT_PMD_INFO("Init %s on NUMA node %d", name, dev->device.numa_node); + + dev->device.driver = &drv->driver; + return 0; +} + +/* Remove DMA device. 
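+ * At this stage the driver is only a probe/remove skeleton: removal simply
+ * logs which device is being closed and returns success.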
*/ +static int +ioat_dmadev_remove(struct rte_pci_device *dev) +{ + char name[32]; + + rte_pci_device_name(&dev->addr, name, sizeof(name)); + + IOAT_PMD_INFO("Closing %s on NUMA node %d", + name, dev->device.numa_node); + + return 0; +} + +static const struct rte_pci_id pci_id_ioat_map[] = { + { RTE_PCI_DEVICE(IOAT_VENDOR_ID, IOAT_DEVICE_ID_SKX) }, + { RTE_PCI_DEVICE(IOAT_VENDOR_ID, IOAT_DEVICE_ID_BDX0) }, + { RTE_PCI_DEVICE(IOAT_VENDOR_ID, IOAT_DEVICE_ID_BDX1) }, + { RTE_PCI_DEVICE(IOAT_VENDOR_ID, IOAT_DEVICE_ID_BDX2) }, + { RTE_PCI_DEVICE(IOAT_VENDOR_ID, IOAT_DEVICE_ID_BDX3) }, + { RTE_PCI_DEVICE(IOAT_VENDOR_ID, IOAT_DEVICE_ID_BDX4) }, + { RTE_PCI_DEVICE(IOAT_VENDOR_ID, IOAT_DEVICE_ID_BDX5) }, + { RTE_PCI_DEVICE(IOAT_VENDOR_ID, IOAT_DEVICE_ID_BDX6) }, + { RTE_PCI_DEVICE(IOAT_VENDOR_ID, IOAT_DEVICE_ID_BDX7) }, + { RTE_PCI_DEVICE(IOAT_VENDOR_ID, IOAT_DEVICE_ID_BDXE) }, + { RTE_PCI_DEVICE(IOAT_VENDOR_ID, IOAT_DEVICE_ID_BDXF) }, + { RTE_PCI_DEVICE(IOAT_VENDOR_ID, IOAT_DEVICE_ID_ICX) }, + { .vendor_id = 0, /* sentinel */ }, +}; + +static struct rte_pci_driver ioat_pmd_drv = { + .id_table = pci_id_ioat_map, + .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC, + .probe = ioat_dmadev_probe, + .remove = ioat_dmadev_remove, +}; + +RTE_PMD_REGISTER_PCI(IOAT_PMD_NAME, ioat_pmd_drv); +RTE_PMD_REGISTER_PCI_TABLE(IOAT_PMD_NAME, pci_id_ioat_map); +RTE_PMD_REGISTER_KMOD_DEP(IOAT_PMD_NAME, "* igb_uio | uio_pci_generic | vfio-pci"); diff --git a/drivers/dma/ioat/ioat_hw_defs.h b/drivers/dma/ioat/ioat_hw_defs.h new file mode 100644 index 0000000000..eeabba41ef --- /dev/null +++ b/drivers/dma/ioat/ioat_hw_defs.h @@ -0,0 +1,35 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 Intel Corporation + */ + +#ifndef IOAT_HW_DEFS_H +#define IOAT_HW_DEFS_H + +#ifdef __cplusplus +extern "C" { +#endif + +#include + +#define IOAT_VER_3_0 0x30 +#define IOAT_VER_3_3 0x33 + +#define IOAT_VENDOR_ID 0x8086 +#define IOAT_DEVICE_ID_SKX 0x2021 +#define IOAT_DEVICE_ID_BDX0 0x6f20 +#define IOAT_DEVICE_ID_BDX1 0x6f21 +#define IOAT_DEVICE_ID_BDX2 0x6f22 +#define IOAT_DEVICE_ID_BDX3 0x6f23 +#define IOAT_DEVICE_ID_BDX4 0x6f24 +#define IOAT_DEVICE_ID_BDX5 0x6f25 +#define IOAT_DEVICE_ID_BDX6 0x6f26 +#define IOAT_DEVICE_ID_BDX7 0x6f27 +#define IOAT_DEVICE_ID_BDXE 0x6f2E +#define IOAT_DEVICE_ID_BDXF 0x6f2F +#define IOAT_DEVICE_ID_ICX 0x0b00 + +#ifdef __cplusplus +} +#endif + +#endif /* IOAT_HW_DEFS_H */ diff --git a/drivers/dma/ioat/ioat_internal.h b/drivers/dma/ioat/ioat_internal.h new file mode 100644 index 0000000000..f1ec12a919 --- /dev/null +++ b/drivers/dma/ioat/ioat_internal.h @@ -0,0 +1,20 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2021 Intel Corporation + */ + +#ifndef _IOAT_INTERNAL_H_ +#define _IOAT_INTERNAL_H_ + +#include "ioat_hw_defs.h" + +extern int ioat_pmd_logtype; + +#define IOAT_PMD_LOG(level, fmt, args...) rte_log(RTE_LOG_ ## level, \ + ioat_pmd_logtype, "IOAT: %s(): " fmt "\n", __func__, ##args) + +#define IOAT_PMD_DEBUG(fmt, args...) IOAT_PMD_LOG(DEBUG, fmt, ## args) +#define IOAT_PMD_INFO(fmt, args...) IOAT_PMD_LOG(INFO, fmt, ## args) +#define IOAT_PMD_ERR(fmt, args...) IOAT_PMD_LOG(ERR, fmt, ## args) +#define IOAT_PMD_WARN(fmt, args...) 
IOAT_PMD_LOG(WARNING, fmt, ## args) + +#endif /* _IOAT_INTERNAL_H_ */ diff --git a/drivers/dma/ioat/meson.build b/drivers/dma/ioat/meson.build new file mode 100644 index 0000000000..d67fac96fb --- /dev/null +++ b/drivers/dma/ioat/meson.build @@ -0,0 +1,7 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright 2021 Intel Corporation + +build = dpdk_conf.has('RTE_ARCH_X86') +reason = 'only supported on x86' +sources = files('ioat_dmadev.c') +deps += ['bus_pci', 'dmadev'] diff --git a/drivers/dma/ioat/version.map b/drivers/dma/ioat/version.map new file mode 100644 index 0000000000..c2e0723b4c --- /dev/null +++ b/drivers/dma/ioat/version.map @@ -0,0 +1,3 @@ +DPDK_22 { + local: *; +}; diff --git a/drivers/dma/meson.build b/drivers/dma/meson.build index 411be7a240..a69418ce9b 100644 --- a/drivers/dma/meson.build +++ b/drivers/dma/meson.build @@ -3,6 +3,7 @@ drivers = [ 'idxd', + 'ioat', 'skeleton', ] std_deps = ['dmadev'] From patchwork Fri Sep 24 14:33:25 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Conor Walsh X-Patchwork-Id: 99611 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id EB1EDA0548; Fri, 24 Sep 2021 16:33:57 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id C8F994132B; Fri, 24 Sep 2021 16:33:47 +0200 (CEST) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by mails.dpdk.org (Postfix) with ESMTP id 0710C41319 for ; Fri, 24 Sep 2021 16:33:43 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10116"; a="309640132" X-IronPort-AV: E=Sophos;i="5.85,320,1624345200"; d="scan'208";a="309640132" Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Sep 2021 07:33:43 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.85,320,1624345200"; d="scan'208";a="703871429" Received: from silpixa00401160.ir.intel.com ([10.55.129.96]) by fmsmga006.fm.intel.com with ESMTP; 24 Sep 2021 07:33:41 -0700 From: Conor Walsh To: bruce.richardson@intel.com, fengchengwen@huawei.com, jerinj@marvell.com, kevin.laatz@intel.com Cc: dev@dpdk.org, Conor Walsh Date: Fri, 24 Sep 2021 14:33:25 +0000 Message-Id: <20210924143335.1092300-3-conor.walsh@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210924143335.1092300-1-conor.walsh@intel.com> References: <20210827172550.1522362-1-conor.walsh@intel.com> <20210924143335.1092300-1-conor.walsh@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v5 02/12] dma/ioat: create dmadev instances on PCI probe X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" When a suitable device is found during the PCI probe, create a dmadev instance for each channel. Internal structures and HW definitions required for device creation are also included. 
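Once probed, the new instances become visible through the generic dmadev API.
As a minimal, illustrative sketch (the dmadev calls below come from the
existing rte_dmadev library; they are not added by this patch):

    rte_eal_init(argc, argv);  /* PCI probe creates the dmadev instances */
    printf("%u dmadev device(s) available\n", rte_dma_count_avail());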
Signed-off-by: Conor Walsh Reviewed-by: Kevin Laatz --- drivers/dma/ioat/ioat_dmadev.c | 102 ++++++++++++++++++++++++++++++- drivers/dma/ioat/ioat_hw_defs.h | 45 ++++++++++++++ drivers/dma/ioat/ioat_internal.h | 27 ++++++++ 3 files changed, 172 insertions(+), 2 deletions(-) diff --git a/drivers/dma/ioat/ioat_dmadev.c b/drivers/dma/ioat/ioat_dmadev.c index f3491d45b1..df3c72363a 100644 --- a/drivers/dma/ioat/ioat_dmadev.c +++ b/drivers/dma/ioat/ioat_dmadev.c @@ -4,6 +4,7 @@ #include #include +#include #include "ioat_internal.h" @@ -14,6 +15,103 @@ RTE_LOG_REGISTER_DEFAULT(ioat_pmd_logtype, INFO); #define IOAT_PMD_NAME dmadev_ioat #define IOAT_PMD_NAME_STR RTE_STR(IOAT_PMD_NAME) +/* Create a DMA device. */ +static int +ioat_dmadev_create(const char *name, struct rte_pci_device *dev) +{ + static const struct rte_dma_dev_ops ioat_dmadev_ops = { }; + + struct rte_dma_dev *dmadev = NULL; + struct ioat_dmadev *ioat = NULL; + int retry = 0; + + if (!name) { + IOAT_PMD_ERR("Invalid name of the device!"); + return -EINVAL; + } + + /* Allocate device structure. */ + dmadev = rte_dma_pmd_allocate(name, dev->device.numa_node, sizeof(*ioat)); + if (dmadev == NULL) { + IOAT_PMD_ERR("Unable to allocate dma device"); + return -ENOMEM; + } + + dmadev->device = &dev->device; + + dmadev->dev_private = dmadev->data->dev_private; + + dmadev->dev_ops = &ioat_dmadev_ops; + + ioat = dmadev->data->dev_private; + ioat->regs = dev->mem_resource[0].addr; + ioat->doorbell = &ioat->regs->dmacount; + ioat->qcfg.nb_desc = 0; + ioat->desc_ring = NULL; + ioat->version = ioat->regs->cbver; + + /* Do device initialization - reset and set error behaviour. */ + if (ioat->regs->chancnt != 1) + IOAT_PMD_WARN("%s: Channel count == %d\n", __func__, + ioat->regs->chancnt); + + /* Locked by someone else. */ + if (ioat->regs->chanctrl & IOAT_CHANCTRL_CHANNEL_IN_USE) { + IOAT_PMD_WARN("%s: Channel appears locked\n", __func__); + ioat->regs->chanctrl = 0; + } + + /* clear any previous errors */ + if (ioat->regs->chanerr != 0) { + uint32_t val = ioat->regs->chanerr; + ioat->regs->chanerr = val; + } + + ioat->regs->chancmd = IOAT_CHANCMD_SUSPEND; + rte_delay_ms(1); + ioat->regs->chancmd = IOAT_CHANCMD_RESET; + rte_delay_ms(1); + while (ioat->regs->chancmd & IOAT_CHANCMD_RESET) { + ioat->regs->chainaddr = 0; + rte_delay_ms(1); + if (++retry >= 200) { + IOAT_PMD_ERR("%s: cannot reset device. CHANCMD=%#"PRIx8 + ", CHANSTS=%#"PRIx64", CHANERR=%#"PRIx32"\n", + __func__, + ioat->regs->chancmd, + ioat->regs->chansts, + ioat->regs->chanerr); + rte_dma_pmd_release(name); + return -EIO; + } + } + ioat->regs->chanctrl = IOAT_CHANCTRL_ANY_ERR_ABORT_EN | + IOAT_CHANCTRL_ERR_COMPLETION_EN; + + dmadev->state = RTE_DMA_DEV_READY; + + return 0; + +} + +/* Destroy a DMA device. */ +static int +ioat_dmadev_destroy(const char *name) +{ + int ret; + + if (!name) { + IOAT_PMD_ERR("Invalid device name"); + return -EINVAL; + } + + ret = rte_dma_pmd_release(name); + if (ret) + IOAT_PMD_DEBUG("Device cleanup failed"); + + return 0; +} + /* Probe DMA device. */ static int ioat_dmadev_probe(struct rte_pci_driver *drv, struct rte_pci_device *dev) @@ -24,7 +122,7 @@ ioat_dmadev_probe(struct rte_pci_driver *drv, struct rte_pci_device *dev) IOAT_PMD_INFO("Init %s on NUMA node %d", name, dev->device.numa_node); dev->device.driver = &drv->driver; - return 0; + return ioat_dmadev_create(name, dev); } /* Remove DMA device. 
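 * From this patch onwards, removal also releases the dmadev instance that was
 * created at probe time, using ioat_dmadev_destroy() defined above.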
*/ @@ -38,7 +136,7 @@ ioat_dmadev_remove(struct rte_pci_device *dev) IOAT_PMD_INFO("Closing %s on NUMA node %d", name, dev->device.numa_node); - return 0; + return ioat_dmadev_destroy(name); } static const struct rte_pci_id pci_id_ioat_map[] = { diff --git a/drivers/dma/ioat/ioat_hw_defs.h b/drivers/dma/ioat/ioat_hw_defs.h index eeabba41ef..73bdf548b3 100644 --- a/drivers/dma/ioat/ioat_hw_defs.h +++ b/drivers/dma/ioat/ioat_hw_defs.h @@ -11,6 +11,8 @@ extern "C" { #include +#define IOAT_PCI_CHANERR_INT_OFFSET 0x180 + #define IOAT_VER_3_0 0x30 #define IOAT_VER_3_3 0x33 @@ -28,6 +30,49 @@ extern "C" { #define IOAT_DEVICE_ID_BDXF 0x6f2F #define IOAT_DEVICE_ID_ICX 0x0b00 +#define IOAT_COMP_UPDATE_SHIFT 3 +#define IOAT_CMD_OP_SHIFT 24 + +/* DMA Channel Registers */ +#define IOAT_CHANCTRL_CHANNEL_PRIORITY_MASK 0xF000 +#define IOAT_CHANCTRL_COMPL_DCA_EN 0x0200 +#define IOAT_CHANCTRL_CHANNEL_IN_USE 0x0100 +#define IOAT_CHANCTRL_DESCRIPTOR_ADDR_SNOOP_CONTROL 0x0020 +#define IOAT_CHANCTRL_ERR_INT_EN 0x0010 +#define IOAT_CHANCTRL_ANY_ERR_ABORT_EN 0x0008 +#define IOAT_CHANCTRL_ERR_COMPLETION_EN 0x0004 +#define IOAT_CHANCTRL_INT_REARM 0x0001 + +struct ioat_registers { + uint8_t chancnt; + uint8_t xfercap; + uint8_t genctrl; + uint8_t intrctrl; + uint32_t attnstatus; + uint8_t cbver; /* 0x08 */ + uint8_t reserved4[0x3]; /* 0x09 */ + uint16_t intrdelay; /* 0x0C */ + uint16_t cs_status; /* 0x0E */ + uint32_t dmacapability; /* 0x10 */ + uint8_t reserved5[0x6C]; /* 0x14 */ + uint16_t chanctrl; /* 0x80 */ + uint8_t reserved6[0x2]; /* 0x82 */ + uint8_t chancmd; /* 0x84 */ + uint8_t reserved3[1]; /* 0x85 */ + uint16_t dmacount; /* 0x86 */ + uint64_t chansts; /* 0x88 */ + uint64_t chainaddr; /* 0x90 */ + uint64_t chancmp; /* 0x98 */ + uint8_t reserved2[0x8]; /* 0xA0 */ + uint32_t chanerr; /* 0xA8 */ + uint32_t chanerrmask; /* 0xAC */ +} __rte_packed; + +#define IOAT_CHANCMD_RESET 0x20 +#define IOAT_CHANCMD_SUSPEND 0x04 + +#define IOAT_CHANCMP_ALIGN 8 /* CHANCMP address must be 64-bit aligned */ + #ifdef __cplusplus } #endif diff --git a/drivers/dma/ioat/ioat_internal.h b/drivers/dma/ioat/ioat_internal.h index f1ec12a919..83ef5973f5 100644 --- a/drivers/dma/ioat/ioat_internal.h +++ b/drivers/dma/ioat/ioat_internal.h @@ -7,6 +7,33 @@ #include "ioat_hw_defs.h" +struct ioat_dmadev { + struct rte_dma_dev_data *dmadev; + struct rte_dma_vchan_conf qcfg; + struct rte_dma_stats stats; + + volatile uint16_t *doorbell __rte_cache_aligned; + phys_addr_t status_addr; + phys_addr_t ring_addr; + + struct ioat_dma_hw_desc *desc_ring; + + unsigned short next_read; + unsigned short next_write; + unsigned short last_write; /* Used to compute submitted count. */ + unsigned short offset; /* Used after a device recovery when counts -> 0. */ + unsigned int failure; /* Used to store chanerr for error handling. */ + + /* To report completions, the device will write status back here. */ + volatile uint64_t status __rte_cache_aligned; + + /* Pointer to the register bar. */ + volatile struct ioat_registers *regs; + + /* Store the IOAT version. */ + uint8_t version; +}; + extern int ioat_pmd_logtype; #define IOAT_PMD_LOG(level, fmt, args...) 
rte_log(RTE_LOG_ ## level, \ From patchwork Fri Sep 24 14:33:26 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Conor Walsh X-Patchwork-Id: 99612 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 7131CA0548; Fri, 24 Sep 2021 16:34:04 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id DC6CB41331; Fri, 24 Sep 2021 16:33:48 +0200 (CEST) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by mails.dpdk.org (Postfix) with ESMTP id B498041323 for ; Fri, 24 Sep 2021 16:33:45 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10116"; a="309640138" X-IronPort-AV: E=Sophos;i="5.85,320,1624345200"; d="scan'208";a="309640138" Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Sep 2021 07:33:45 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.85,320,1624345200"; d="scan'208";a="703871445" Received: from silpixa00401160.ir.intel.com ([10.55.129.96]) by fmsmga006.fm.intel.com with ESMTP; 24 Sep 2021 07:33:43 -0700 From: Conor Walsh To: bruce.richardson@intel.com, fengchengwen@huawei.com, jerinj@marvell.com, kevin.laatz@intel.com Cc: dev@dpdk.org, Conor Walsh Date: Fri, 24 Sep 2021 14:33:26 +0000 Message-Id: <20210924143335.1092300-4-conor.walsh@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210924143335.1092300-1-conor.walsh@intel.com> References: <20210827172550.1522362-1-conor.walsh@intel.com> <20210924143335.1092300-1-conor.walsh@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v5 03/12] dma/ioat: add datapath structures X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add data structures required for the data path of IOAT devices. Signed-off-by: Conor Walsh Signed-off-by: Bruce Richardson Reviewed-by: Kevin Laatz --- drivers/dma/ioat/ioat_dmadev.c | 63 +++++++++- drivers/dma/ioat/ioat_hw_defs.h | 215 ++++++++++++++++++++++++++++++++ 2 files changed, 277 insertions(+), 1 deletion(-) diff --git a/drivers/dma/ioat/ioat_dmadev.c b/drivers/dma/ioat/ioat_dmadev.c index df3c72363a..b132283ba5 100644 --- a/drivers/dma/ioat/ioat_dmadev.c +++ b/drivers/dma/ioat/ioat_dmadev.c @@ -15,11 +15,72 @@ RTE_LOG_REGISTER_DEFAULT(ioat_pmd_logtype, INFO); #define IOAT_PMD_NAME dmadev_ioat #define IOAT_PMD_NAME_STR RTE_STR(IOAT_PMD_NAME) +/* Dump DMA device info. 
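+ * Prints the IOAT version, channel status and decoded channel errors reported
+ * by the hardware, followed by the driver's private ring state (configuration,
+ * completion status and the last/next descriptors), to the given file handle.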
*/ +static int +ioat_dev_dump(const struct rte_dma_dev *dev, FILE *f) +{ + struct ioat_dmadev *ioat = dev->dev_private; + uint64_t chansts_masked = ioat->regs->chansts & IOAT_CHANSTS_STATUS; + uint32_t chanerr = ioat->regs->chanerr; + uint64_t mask = (ioat->qcfg.nb_desc - 1); + char ver = ioat->version; + fprintf(f, "========= IOAT =========\n"); + fprintf(f, " IOAT version: %d.%d\n", ver >> 4, ver & 0xF); + fprintf(f, " Channel status: %s [0x%"PRIx64"]\n", + chansts_readable[chansts_masked], chansts_masked); + fprintf(f, " ChainADDR: 0x%"PRIu64"\n", ioat->regs->chainaddr); + if (chanerr == 0) { + fprintf(f, " No Channel Errors\n"); + } else { + fprintf(f, " ChanERR: 0x%"PRIu32"\n", chanerr); + if (chanerr & IOAT_CHANERR_INVALID_SRC_ADDR_MASK) + fprintf(f, " Invalid Source Address\n"); + if (chanerr & IOAT_CHANERR_INVALID_DST_ADDR_MASK) + fprintf(f, " Invalid Destination Address\n"); + if (chanerr & IOAT_CHANERR_INVALID_LENGTH_MASK) + fprintf(f, " Invalid Descriptor Length\n"); + if (chanerr & IOAT_CHANERR_DESCRIPTOR_READ_ERROR_MASK) + fprintf(f, " Descriptor Read Error\n"); + if ((chanerr & ~(IOAT_CHANERR_INVALID_SRC_ADDR_MASK | + IOAT_CHANERR_INVALID_DST_ADDR_MASK | + IOAT_CHANERR_INVALID_LENGTH_MASK | + IOAT_CHANERR_DESCRIPTOR_READ_ERROR_MASK)) != 0) + fprintf(f, " Unknown Error(s)\n"); + } + fprintf(f, "== Private Data ==\n"); + fprintf(f, " Config: { ring_size: %u }\n", ioat->qcfg.nb_desc); + fprintf(f, " Status: 0x%"PRIx64"\n", ioat->status); + fprintf(f, " Status IOVA: 0x%"PRIx64"\n", ioat->status_addr); + fprintf(f, " Status ADDR: %p\n", &ioat->status); + fprintf(f, " Ring IOVA: 0x%"PRIx64"\n", ioat->ring_addr); + fprintf(f, " Ring ADDR: 0x%"PRIx64"\n", ioat->desc_ring[0].next-64); + fprintf(f, " Next write: %"PRIu16"\n", ioat->next_write); + fprintf(f, " Next read: %"PRIu16"\n", ioat->next_read); + struct ioat_dma_hw_desc *desc_ring = &ioat->desc_ring[(ioat->next_write - 1) & mask]; + fprintf(f, " Last Descriptor Written {\n"); + fprintf(f, " Size: %"PRIu32"\n", desc_ring->size); + fprintf(f, " Control: 0x%"PRIx32"\n", desc_ring->u.control_raw); + fprintf(f, " Src: 0x%"PRIx64"\n", desc_ring->src_addr); + fprintf(f, " Dest: 0x%"PRIx64"\n", desc_ring->dest_addr); + fprintf(f, " Next: 0x%"PRIx64"\n", desc_ring->next); + fprintf(f, " }\n"); + fprintf(f, " Next Descriptor {\n"); + fprintf(f, " Size: %"PRIu32"\n", ioat->desc_ring[ioat->next_read & mask].size); + fprintf(f, " Src: 0x%"PRIx64"\n", ioat->desc_ring[ioat->next_read & mask].src_addr); + fprintf(f, " Dest: 0x%"PRIx64"\n", ioat->desc_ring[ioat->next_read & mask].dest_addr); + fprintf(f, " Next: 0x%"PRIx64"\n", ioat->desc_ring[ioat->next_read & mask].next); + fprintf(f, " }\n"); + + return 0; +} + /* Create a DMA device. 
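 * Creation allocates the rte_dma_dev instance, takes the channel register base
 * from PCI BAR 0, resets the channel, sets its error-handling behaviour and
 * marks the device READY.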
*/ static int ioat_dmadev_create(const char *name, struct rte_pci_device *dev) { - static const struct rte_dma_dev_ops ioat_dmadev_ops = { }; + static const struct rte_dma_dev_ops ioat_dmadev_ops = { + .dev_dump = ioat_dev_dump, + }; struct rte_dma_dev *dmadev = NULL; struct ioat_dmadev *ioat = NULL; diff --git a/drivers/dma/ioat/ioat_hw_defs.h b/drivers/dma/ioat/ioat_hw_defs.h index 73bdf548b3..dc3493a78f 100644 --- a/drivers/dma/ioat/ioat_hw_defs.h +++ b/drivers/dma/ioat/ioat_hw_defs.h @@ -15,6 +15,7 @@ extern "C" { #define IOAT_VER_3_0 0x30 #define IOAT_VER_3_3 0x33 +#define IOAT_VER_3_4 0x34 #define IOAT_VENDOR_ID 0x8086 #define IOAT_DEVICE_ID_SKX 0x2021 @@ -43,6 +44,14 @@ extern "C" { #define IOAT_CHANCTRL_ERR_COMPLETION_EN 0x0004 #define IOAT_CHANCTRL_INT_REARM 0x0001 +/* DMA Channel Capabilities */ +#define IOAT_DMACAP_PB (1 << 0) +#define IOAT_DMACAP_DCA (1 << 4) +#define IOAT_DMACAP_BFILL (1 << 6) +#define IOAT_DMACAP_XOR (1 << 8) +#define IOAT_DMACAP_PQ (1 << 9) +#define IOAT_DMACAP_DMA_DIF (1 << 10) + struct ioat_registers { uint8_t chancnt; uint8_t xfercap; @@ -71,8 +80,214 @@ struct ioat_registers { #define IOAT_CHANCMD_RESET 0x20 #define IOAT_CHANCMD_SUSPEND 0x04 +#define IOAT_CHANSTS_STATUS 0x7ULL +#define IOAT_CHANSTS_ACTIVE 0x0 +#define IOAT_CHANSTS_IDLE 0x1 +#define IOAT_CHANSTS_SUSPENDED 0x2 +#define IOAT_CHANSTS_HALTED 0x3 +#define IOAT_CHANSTS_ARMED 0x4 + +#define IOAT_CHANERR_INVALID_SRC_ADDR_MASK (1 << 0) +#define IOAT_CHANERR_INVALID_DST_ADDR_MASK (1 << 1) +#define IOAT_CHANERR_DESCRIPTOR_READ_ERROR_MASK (1 << 8) +#define IOAT_CHANERR_INVALID_LENGTH_MASK (1 << 10) + +const char *chansts_readable[] = { + "ACTIVE", /* 0x0 */ + "IDLE", /* 0x1 */ + "SUSPENDED", /* 0x2 */ + "HALTED", /* 0x3 */ + "ARMED" /* 0x4 */ +}; + +#define IOAT_CHANSTS_UNAFFILIATED_ERROR 0x8ULL +#define IOAT_CHANSTS_SOFT_ERROR 0x10ULL + +#define IOAT_CHANSTS_COMPLETED_DESCRIPTOR_MASK (~0x3FULL) + #define IOAT_CHANCMP_ALIGN 8 /* CHANCMP address must be 64-bit aligned */ +struct ioat_dma_hw_desc { + uint32_t size; + union { + uint32_t control_raw; + struct { + uint32_t int_enable: 1; + uint32_t src_snoop_disable: 1; + uint32_t dest_snoop_disable: 1; + uint32_t completion_update: 1; + uint32_t fence: 1; + uint32_t null: 1; + uint32_t src_page_break: 1; + uint32_t dest_page_break: 1; + uint32_t bundle: 1; + uint32_t dest_dca: 1; + uint32_t hint: 1; + uint32_t reserved: 13; +#define IOAT_OP_COPY 0x00 + uint32_t op: 8; + } control; + } u; + uint64_t src_addr; + uint64_t dest_addr; + uint64_t next; + uint64_t reserved; + uint64_t reserved2; + uint64_t user1; + uint64_t user2; +}; + +struct ioat_fill_hw_desc { + uint32_t size; + union { + uint32_t control_raw; + struct { + uint32_t int_enable: 1; + uint32_t reserved: 1; + uint32_t dest_snoop_disable: 1; + uint32_t completion_update: 1; + uint32_t fence: 1; + uint32_t reserved2: 2; + uint32_t dest_page_break: 1; + uint32_t bundle: 1; + uint32_t reserved3: 15; +#define IOAT_OP_FILL 0x01 + uint32_t op: 8; + } control; + } u; + uint64_t src_data; + uint64_t dest_addr; + uint64_t next; + uint64_t reserved; + uint64_t next_dest_addr; + uint64_t user1; + uint64_t user2; +}; + +struct ioat_xor_hw_desc { + uint32_t size; + union { + uint32_t control_raw; + struct { + uint32_t int_enable: 1; + uint32_t src_snoop_disable: 1; + uint32_t dest_snoop_disable: 1; + uint32_t completion_update: 1; + uint32_t fence: 1; + uint32_t src_count: 3; + uint32_t bundle: 1; + uint32_t dest_dca: 1; + uint32_t hint: 1; + uint32_t reserved: 13; +#define IOAT_OP_XOR 0x87 +#define 
IOAT_OP_XOR_VAL 0x88 + uint32_t op: 8; + } control; + } u; + uint64_t src_addr; + uint64_t dest_addr; + uint64_t next; + uint64_t src_addr2; + uint64_t src_addr3; + uint64_t src_addr4; + uint64_t src_addr5; +}; + +struct ioat_xor_ext_hw_desc { + uint64_t src_addr6; + uint64_t src_addr7; + uint64_t src_addr8; + uint64_t next; + uint64_t reserved[4]; +}; + +struct ioat_pq_hw_desc { + uint32_t size; + union { + uint32_t control_raw; + struct { + uint32_t int_enable: 1; + uint32_t src_snoop_disable: 1; + uint32_t dest_snoop_disable: 1; + uint32_t completion_update: 1; + uint32_t fence: 1; + uint32_t src_count: 3; + uint32_t bundle: 1; + uint32_t dest_dca: 1; + uint32_t hint: 1; + uint32_t p_disable: 1; + uint32_t q_disable: 1; + uint32_t reserved: 11; +#define IOAT_OP_PQ 0x89 +#define IOAT_OP_PQ_VAL 0x8a + uint32_t op: 8; + } control; + } u; + uint64_t src_addr; + uint64_t p_addr; + uint64_t next; + uint64_t src_addr2; + uint64_t src_addr3; + uint8_t coef[8]; + uint64_t q_addr; +}; + +struct ioat_pq_ext_hw_desc { + uint64_t src_addr4; + uint64_t src_addr5; + uint64_t src_addr6; + uint64_t next; + uint64_t src_addr7; + uint64_t src_addr8; + uint64_t reserved[2]; +}; + +struct ioat_pq_update_hw_desc { + uint32_t size; + union { + uint32_t control_raw; + struct { + uint32_t int_enable: 1; + uint32_t src_snoop_disable: 1; + uint32_t dest_snoop_disable: 1; + uint32_t completion_update: 1; + uint32_t fence: 1; + uint32_t src_cnt: 3; + uint32_t bundle: 1; + uint32_t dest_dca: 1; + uint32_t hint: 1; + uint32_t p_disable: 1; + uint32_t q_disable: 1; + uint32_t reserved: 3; + uint32_t coef: 8; +#define IOAT_OP_PQ_UP 0x8b + uint32_t op: 8; + } control; + } u; + uint64_t src_addr; + uint64_t p_addr; + uint64_t next; + uint64_t src_addr2; + uint64_t p_src; + uint64_t q_src; + uint64_t q_addr; +}; + +union ioat_hw_desc { + struct ioat_dma_hw_desc dma; + struct ioat_fill_hw_desc fill; + struct ioat_xor_hw_desc xor_desc; + struct ioat_xor_ext_hw_desc xor_ext; + struct ioat_pq_hw_desc pq; + struct ioat_pq_ext_hw_desc pq_ext; + struct ioat_pq_update_hw_desc pq_update; +}; + +#define GENSTS_DEV_STATE_MASK 0x03 +#define CMDSTATUS_ACTIVE_SHIFT 31 +#define CMDSTATUS_ACTIVE_MASK (1 << 31) +#define CMDSTATUS_ERR_MASK 0xFF + #ifdef __cplusplus } #endif From patchwork Fri Sep 24 14:33:27 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Conor Walsh X-Patchwork-Id: 99613 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id A259FA0548; Fri, 24 Sep 2021 16:34:12 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 51F4441338; Fri, 24 Sep 2021 16:33:51 +0200 (CEST) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by mails.dpdk.org (Postfix) with ESMTP id 6CE8E4131B for ; Fri, 24 Sep 2021 16:33:47 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10116"; a="309640146" X-IronPort-AV: E=Sophos;i="5.85,320,1624345200"; d="scan'208";a="309640146" Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Sep 2021 07:33:47 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.85,320,1624345200"; d="scan'208";a="703871457" Received: from silpixa00401160.ir.intel.com ([10.55.129.96]) by fmsmga006.fm.intel.com with ESMTP; 24 Sep 2021 07:33:45 -0700 From: Conor Walsh To: 
bruce.richardson@intel.com, fengchengwen@huawei.com, jerinj@marvell.com, kevin.laatz@intel.com Cc: dev@dpdk.org, Conor Walsh Date: Fri, 24 Sep 2021 14:33:27 +0000 Message-Id: <20210924143335.1092300-5-conor.walsh@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210924143335.1092300-1-conor.walsh@intel.com> References: <20210827172550.1522362-1-conor.walsh@intel.com> <20210924143335.1092300-1-conor.walsh@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v5 04/12] dma/ioat: add configuration functions X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add functions for device configuration. The info_get and close functions are included here also. info_get can be useful for checking successful configuration and close is used by the dmadev api when releasing a configured device. Signed-off-by: Conor Walsh Reviewed-by: Kevin Laatz --- doc/guides/dmadevs/ioat.rst | 19 ++++++ drivers/dma/ioat/ioat_dmadev.c | 107 +++++++++++++++++++++++++++++++++ 2 files changed, 126 insertions(+) diff --git a/doc/guides/dmadevs/ioat.rst b/doc/guides/dmadevs/ioat.rst index 9ae1d8a2ad..b1f847d273 100644 --- a/doc/guides/dmadevs/ioat.rst +++ b/doc/guides/dmadevs/ioat.rst @@ -67,3 +67,22 @@ For example:: Once probed successfully, the device will appear as a ``dmadev``, that is a "DMA device type" inside DPDK, and can be accessed using APIs from the ``rte_dmadev`` library. + +Using IOAT DMAdev Devices +-------------------------- + +To use IOAT devices from an application, the ``dmadev`` API can be used. + +Device Configuration +~~~~~~~~~~~~~~~~~~~~~ + +Refer to the :ref:`Device Configuration ` and +:ref:`Configuration of Virtual DMA Channels ` sections +of the dmadev library documentation for details on device configuration API usage. + +IOAT configuration requirements: + +* ``ring_size`` must be a power of two, between 64 and 4096. +* Only one ``vchan`` is supported per device. +* Silent mode is not supported. +* The transfer direction must be set to ``RTE_DMA_DIR_MEM_TO_MEM`` to copy from memory to memory. diff --git a/drivers/dma/ioat/ioat_dmadev.c b/drivers/dma/ioat/ioat_dmadev.c index b132283ba5..92c4e2b04f 100644 --- a/drivers/dma/ioat/ioat_dmadev.c +++ b/drivers/dma/ioat/ioat_dmadev.c @@ -12,9 +12,112 @@ static struct rte_pci_driver ioat_pmd_drv; RTE_LOG_REGISTER_DEFAULT(ioat_pmd_logtype, INFO); +#define DESC_SZ sizeof(struct ioat_dma_hw_desc) + #define IOAT_PMD_NAME dmadev_ioat #define IOAT_PMD_NAME_STR RTE_STR(IOAT_PMD_NAME) +/* Configure a device. */ +static int +ioat_dev_configure(struct rte_dma_dev *dev __rte_unused, const struct rte_dma_conf *dev_conf, + uint32_t conf_sz) +{ + if (sizeof(struct rte_dma_conf) != conf_sz) + return -EINVAL; + + if (dev_conf->nb_vchans != 1) + return -EINVAL; + + return 0; +} + +/* Setup a virtual channel for IOAT, only 1 vchan is supported. 
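+ * The requested ring size is rounded up to a power of two if needed, the
+ * descriptor ring is allocated with rte_zmalloc() and each descriptor's
+ * 'next' field is pre-linked so the ring forms a circular chain.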
*/ +static int +ioat_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan __rte_unused, + const struct rte_dma_vchan_conf *qconf, uint32_t qconf_sz) +{ + struct ioat_dmadev *ioat = dev->dev_private; + uint16_t max_desc = qconf->nb_desc; + int i; + + if (sizeof(struct rte_dma_vchan_conf) != qconf_sz) + return -EINVAL; + + ioat->qcfg = *qconf; + + if (!rte_is_power_of_2(max_desc)) { + max_desc = rte_align32pow2(max_desc); + IOAT_PMD_DEBUG("DMA dev %u using %u descriptors", dev->data->dev_id, max_desc); + ioat->qcfg.nb_desc = max_desc; + } + + /* In case we are reconfiguring a device, free any existing memory. */ + rte_free(ioat->desc_ring); + + ioat->desc_ring = rte_zmalloc(NULL, sizeof(*ioat->desc_ring) * max_desc, 0); + if (ioat->desc_ring == NULL) + return -ENOMEM; + + ioat->ring_addr = rte_mem_virt2iova(ioat->desc_ring); + + ioat->status_addr = rte_mem_virt2iova(ioat) + offsetof(struct ioat_dmadev, status); + + /* Ensure all counters are reset, if reconfiguring/restarting device. */ + ioat->next_read = 0; + ioat->next_write = 0; + ioat->last_write = 0; + ioat->offset = 0; + ioat->failure = 0; + + /* Configure descriptor ring - each one points to next. */ + for (i = 0; i < ioat->qcfg.nb_desc; i++) { + ioat->desc_ring[i].next = ioat->ring_addr + + (((i + 1) % ioat->qcfg.nb_desc) * DESC_SZ); + } + + return 0; +} + +/* Get device information of a device. */ +static int +ioat_dev_info_get(const struct rte_dma_dev *dev, struct rte_dma_info *info, uint32_t size) +{ + struct ioat_dmadev *ioat = dev->dev_private; + if (size < sizeof(*info)) + return -EINVAL; + info->dev_capa = RTE_DMA_CAPA_MEM_TO_MEM | + RTE_DMA_CAPA_OPS_COPY | + RTE_DMA_CAPA_OPS_FILL; + if (ioat->version >= IOAT_VER_3_4) + info->dev_capa |= RTE_DMA_CAPA_HANDLES_ERRORS; + info->max_vchans = 1; + info->min_desc = 32; + info->max_desc = 4096; + return 0; +} + +/* Close a configured device. */ +static int +ioat_dev_close(struct rte_dma_dev *dev) +{ + struct ioat_dmadev *ioat; + + if (!dev) { + IOAT_PMD_ERR("Invalid device"); + return -EINVAL; + } + + ioat = dev->dev_private; + if (!ioat) { + IOAT_PMD_ERR("Error getting dev_private"); + return -EINVAL; + } + + rte_free(ioat->desc_ring); + + return 0; +} + /* Dump DMA device info. 
*/ static int ioat_dev_dump(const struct rte_dma_dev *dev, FILE *f) @@ -79,7 +182,11 @@ static int ioat_dmadev_create(const char *name, struct rte_pci_device *dev) { static const struct rte_dma_dev_ops ioat_dmadev_ops = { + .dev_close = ioat_dev_close, + .dev_configure = ioat_dev_configure, .dev_dump = ioat_dev_dump, + .dev_info_get = ioat_dev_info_get, + .vchan_setup = ioat_vchan_setup, }; struct rte_dma_dev *dmadev = NULL; From patchwork Fri Sep 24 14:33:28 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Conor Walsh X-Patchwork-Id: 99614 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 893CCA0548; Fri, 24 Sep 2021 16:34:19 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 73FEB41342; Fri, 24 Sep 2021 16:33:52 +0200 (CEST) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by mails.dpdk.org (Postfix) with ESMTP id B454441335 for ; Fri, 24 Sep 2021 16:33:49 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10116"; a="309640158" X-IronPort-AV: E=Sophos;i="5.85,320,1624345200"; d="scan'208";a="309640158" Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Sep 2021 07:33:49 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.85,320,1624345200"; d="scan'208";a="703871472" Received: from silpixa00401160.ir.intel.com ([10.55.129.96]) by fmsmga006.fm.intel.com with ESMTP; 24 Sep 2021 07:33:47 -0700 From: Conor Walsh To: bruce.richardson@intel.com, fengchengwen@huawei.com, jerinj@marvell.com, kevin.laatz@intel.com Cc: dev@dpdk.org, Conor Walsh Date: Fri, 24 Sep 2021 14:33:28 +0000 Message-Id: <20210924143335.1092300-6-conor.walsh@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210924143335.1092300-1-conor.walsh@intel.com> References: <20210827172550.1522362-1-conor.walsh@intel.com> <20210924143335.1092300-1-conor.walsh@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v5 05/12] dma/ioat: add start and stop functions X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add start, stop and recover functions for IOAT devices. Signed-off-by: Conor Walsh Signed-off-by: Bruce Richardson Reviewed-by: Kevin Laatz --- doc/guides/dmadevs/ioat.rst | 3 ++ drivers/dma/ioat/ioat_dmadev.c | 92 ++++++++++++++++++++++++++++++++++ 2 files changed, 95 insertions(+) diff --git a/doc/guides/dmadevs/ioat.rst b/doc/guides/dmadevs/ioat.rst index b1f847d273..d93d28023f 100644 --- a/doc/guides/dmadevs/ioat.rst +++ b/doc/guides/dmadevs/ioat.rst @@ -86,3 +86,6 @@ IOAT configuration requirements: * Only one ``vchan`` is supported per device. * Silent mode is not supported. * The transfer direction must be set to ``RTE_DMA_DIR_MEM_TO_MEM`` to copy from memory to memory. + +Once configured, the device can then be made ready for use by calling the +``rte_dma_start()`` API. diff --git a/drivers/dma/ioat/ioat_dmadev.c b/drivers/dma/ioat/ioat_dmadev.c index 92c4e2b04f..96bf55135f 100644 --- a/drivers/dma/ioat/ioat_dmadev.c +++ b/drivers/dma/ioat/ioat_dmadev.c @@ -78,6 +78,96 @@ ioat_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan __rte_unused, return 0; } +/* Recover IOAT device. 
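+ * Recovery clears CHANERR, resets the channel, restores the chain and
+ * completion addresses and then waits (with a bounded retry) for the channel
+ * to reach the ARMED state; -1 is returned if the channel never becomes armed.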
*/ +static inline int +__ioat_recover(struct ioat_dmadev *ioat) +{ + uint32_t chanerr, retry = 0; + uint16_t mask = ioat->qcfg.nb_desc - 1; + + /* Clear any channel errors. Reading and writing to chanerr does this. */ + chanerr = ioat->regs->chanerr; + ioat->regs->chanerr = chanerr; + + /* Reset Channel. */ + ioat->regs->chancmd = IOAT_CHANCMD_RESET; + + /* Write new chain address to trigger state change. */ + ioat->regs->chainaddr = ioat->desc_ring[(ioat->next_read - 1) & mask].next; + /* Ensure channel control and status addr are correct. */ + ioat->regs->chanctrl = IOAT_CHANCTRL_ANY_ERR_ABORT_EN | + IOAT_CHANCTRL_ERR_COMPLETION_EN; + ioat->regs->chancmp = ioat->status_addr; + + /* Allow HW time to move to the ARMED state. */ + do { + rte_pause(); + retry++; + } while (ioat->regs->chansts != IOAT_CHANSTS_ARMED && retry < 200); + + /* Exit as failure if device is still HALTED. */ + if (ioat->regs->chansts != IOAT_CHANSTS_ARMED) + return -1; + + /* Store next write as offset as recover will move HW and SW ring out of sync. */ + ioat->offset = ioat->next_read; + + /* Prime status register with previous address. */ + ioat->status = ioat->desc_ring[(ioat->next_read - 2) & mask].next; + + return 0; +} + +/* Start a configured device. */ +static int +ioat_dev_start(struct rte_dma_dev *dev) +{ + struct ioat_dmadev *ioat = dev->dev_private; + + if (ioat->qcfg.nb_desc == 0 || ioat->desc_ring == NULL) + return -EBUSY; + + /* Inform hardware of where the descriptor ring is. */ + ioat->regs->chainaddr = ioat->ring_addr; + /* Inform hardware of where to write the status/completions. */ + ioat->regs->chancmp = ioat->status_addr; + + /* Prime the status register to be set to the last element. */ + ioat->status = ioat->ring_addr + ((ioat->qcfg.nb_desc - 1) * DESC_SZ); + + printf("IOAT.status: %s [0x%"PRIx64"]\n", + chansts_readable[ioat->status & IOAT_CHANSTS_STATUS], + ioat->status); + + if ((ioat->regs->chansts & IOAT_CHANSTS_STATUS) == IOAT_CHANSTS_HALTED) { + IOAT_PMD_WARN("Device HALTED on start, attempting to recover\n"); + if (__ioat_recover(ioat) != 0) { + IOAT_PMD_ERR("Device couldn't be recovered"); + return -1; + } + } + + return 0; +} + +/* Stop a configured device. */ +static int +ioat_dev_stop(struct rte_dma_dev *dev) +{ + struct ioat_dmadev *ioat = dev->dev_private; + uint32_t retry = 0; + + ioat->regs->chancmd = IOAT_CHANCMD_SUSPEND; + + do { + rte_pause(); + retry++; + } while ((ioat->regs->chansts & IOAT_CHANSTS_STATUS) != IOAT_CHANSTS_SUSPENDED + && retry < 200); + + return ((ioat->regs->chansts & IOAT_CHANSTS_STATUS) == IOAT_CHANSTS_SUSPENDED) ? 0 : -1; +} + /* Get device information of a device. 
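 * Reported capabilities are mem-to-mem copy and fill; error handling is only
 * advertised on IOAT version 3.4 or newer. A single vchan with between 32 and
 * 4096 descriptors is supported.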
*/ static int ioat_dev_info_get(const struct rte_dma_dev *dev, struct rte_dma_info *info, uint32_t size) @@ -186,6 +276,8 @@ ioat_dmadev_create(const char *name, struct rte_pci_device *dev) .dev_configure = ioat_dev_configure, .dev_dump = ioat_dev_dump, .dev_info_get = ioat_dev_info_get, + .dev_start = ioat_dev_start, + .dev_stop = ioat_dev_stop, .vchan_setup = ioat_vchan_setup, }; From patchwork Fri Sep 24 14:33:29 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Conor Walsh X-Patchwork-Id: 99615 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id A3802A0548; Fri, 24 Sep 2021 16:34:26 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 89C3E41349; Fri, 24 Sep 2021 16:33:54 +0200 (CEST) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by mails.dpdk.org (Postfix) with ESMTP id 70AC041340 for ; Fri, 24 Sep 2021 16:33:51 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10116"; a="309640165" X-IronPort-AV: E=Sophos;i="5.85,320,1624345200"; d="scan'208";a="309640165" Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Sep 2021 07:33:51 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.85,320,1624345200"; d="scan'208";a="703871485" Received: from silpixa00401160.ir.intel.com ([10.55.129.96]) by fmsmga006.fm.intel.com with ESMTP; 24 Sep 2021 07:33:49 -0700 From: Conor Walsh To: bruce.richardson@intel.com, fengchengwen@huawei.com, jerinj@marvell.com, kevin.laatz@intel.com Cc: dev@dpdk.org, Conor Walsh Date: Fri, 24 Sep 2021 14:33:29 +0000 Message-Id: <20210924143335.1092300-7-conor.walsh@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210924143335.1092300-1-conor.walsh@intel.com> References: <20210827172550.1522362-1-conor.walsh@intel.com> <20210924143335.1092300-1-conor.walsh@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v5 06/12] dma/ioat: add data path job submission functions X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add data path functions for enqueuing and submitting operations to IOAT devices. Signed-off-by: Conor Walsh Reviewed-by: Kevin Laatz Reviewed-by: Chengwen Feng --- doc/guides/dmadevs/ioat.rst | 9 ++++ drivers/dma/ioat/ioat_dmadev.c | 92 ++++++++++++++++++++++++++++++++++ 2 files changed, 101 insertions(+) diff --git a/doc/guides/dmadevs/ioat.rst b/doc/guides/dmadevs/ioat.rst index d93d28023f..ec8ce5a8e5 100644 --- a/doc/guides/dmadevs/ioat.rst +++ b/doc/guides/dmadevs/ioat.rst @@ -89,3 +89,12 @@ IOAT configuration requirements: Once configured, the device can then be made ready for use by calling the ``rte_dma_start()`` API. + +Performing Data Copies +~~~~~~~~~~~~~~~~~~~~~~~ + +Refer to the :ref:`Enqueue / Dequeue APIs ` section of the dmadev library +documentation for details on operation enqueue and submission API usage. + +It is expected that, for efficiency reasons, a burst of operations will be enqueued to the +device via multiple enqueue calls between calls to the ``rte_dma_submit()`` function. 
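+
+As a minimal, illustrative sketch of such a burst (``dev_id``, ``vchan``, the
+``srcs``/``dsts`` IOVA arrays and the ``BURST_SZ``/``COPY_LEN`` constants are
+application-defined placeholders, not provided by this driver):
+
+.. code-block:: C
+
+   uint16_t i, enqueued = 0;
+
+   for (i = 0; i < BURST_SZ; i++) {
+      /* Each call writes one descriptor but does not trigger the hardware. */
+      if (rte_dma_copy(dev_id, vchan, srcs[i], dsts[i], COPY_LEN, 0) < 0)
+         break;
+      enqueued++;
+   }
+
+   /* A single submit call then rings the doorbell for the whole burst. */
+   rte_dma_submit(dev_id, vchan);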
diff --git a/drivers/dma/ioat/ioat_dmadev.c b/drivers/dma/ioat/ioat_dmadev.c index 96bf55135f..0e92c80fb0 100644 --- a/drivers/dma/ioat/ioat_dmadev.c +++ b/drivers/dma/ioat/ioat_dmadev.c @@ -5,6 +5,7 @@ #include #include #include +#include #include "ioat_internal.h" @@ -17,6 +18,12 @@ RTE_LOG_REGISTER_DEFAULT(ioat_pmd_logtype, INFO); #define IOAT_PMD_NAME dmadev_ioat #define IOAT_PMD_NAME_STR RTE_STR(IOAT_PMD_NAME) +/* IOAT operations. */ +enum rte_ioat_ops { + ioat_op_copy = 0, /* Standard DMA Operation */ + ioat_op_fill /* Block Fill */ +}; + /* Configure a device. */ static int ioat_dev_configure(struct rte_dma_dev *dev __rte_unused, const struct rte_dma_conf *dev_conf, @@ -208,6 +215,87 @@ ioat_dev_close(struct rte_dma_dev *dev) return 0; } +/* Trigger hardware to begin performing enqueued operations. */ +static inline void +__submit(struct ioat_dmadev *ioat) +{ + *ioat->doorbell = ioat->next_write - ioat->offset; + + ioat->last_write = ioat->next_write; +} + +/* External submit function wrapper. */ +static int +ioat_submit(struct rte_dma_dev *dev, uint16_t qid __rte_unused) +{ + struct ioat_dmadev *ioat = (struct ioat_dmadev *)dev->dev_private; + + __submit(ioat); + + return 0; +} + +/* Write descriptor for enqueue. */ +static inline int +__write_desc(struct rte_dma_dev *dev, uint32_t op, uint64_t src, phys_addr_t dst, + unsigned int length, uint64_t flags) +{ + struct ioat_dmadev *ioat = dev->dev_private; + uint16_t ret; + const unsigned short mask = ioat->qcfg.nb_desc - 1; + const unsigned short read = ioat->next_read; + unsigned short write = ioat->next_write; + const unsigned short space = mask + read - write; + struct ioat_dma_hw_desc *desc; + + if (space == 0) + return -ENOSPC; + + ioat->next_write = write + 1; + write &= mask; + + desc = &ioat->desc_ring[write]; + desc->size = length; + desc->u.control_raw = (uint32_t)((op << IOAT_CMD_OP_SHIFT) | + (1 << IOAT_COMP_UPDATE_SHIFT)); + + /* In IOAT the fence ensures that all operations including the current one + * are completed before moving on, DMAdev assumes that the fence ensures + * all operations before the current one are completed before starting + * the current one, so in IOAT we set the fence for the previous descriptor. + */ + if (flags & RTE_DMA_OP_FLAG_FENCE) + ioat->desc_ring[(write - 1) & mask].u.control.fence = 1; + + desc->src_addr = src; + desc->dest_addr = dst; + + rte_prefetch0(&ioat->desc_ring[ioat->next_write & mask]); + + ret = (uint16_t)(ioat->next_write - 1); + + if (flags & RTE_DMA_OP_FLAG_SUBMIT) + __submit(ioat); + + return ret; +} + +/* Enqueue a fill operation onto the ioat device. */ +static int +ioat_enqueue_fill(struct rte_dma_dev *dev, uint16_t qid __rte_unused, uint64_t pattern, + rte_iova_t dst, unsigned int length, uint64_t flags) +{ + return __write_desc(dev, ioat_op_fill, pattern, dst, length, flags); +} + +/* Enqueue a copy operation onto the ioat device. */ +static int +ioat_enqueue_copy(struct rte_dma_dev *dev, uint16_t qid __rte_unused, rte_iova_t src, + rte_iova_t dst, unsigned int length, uint64_t flags) +{ + return __write_desc(dev, ioat_op_copy, src, dst, length, flags); +} + /* Dump DMA device info. 
*/ static int ioat_dev_dump(const struct rte_dma_dev *dev, FILE *f) @@ -303,6 +391,10 @@ ioat_dmadev_create(const char *name, struct rte_pci_device *dev) dmadev->dev_ops = &ioat_dmadev_ops; + dmadev->copy = ioat_enqueue_copy; + dmadev->fill = ioat_enqueue_fill; + dmadev->submit = ioat_submit; + ioat = dmadev->data->dev_private; ioat->regs = dev->mem_resource[0].addr; ioat->doorbell = &ioat->regs->dmacount; From patchwork Fri Sep 24 14:33:30 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Conor Walsh X-Patchwork-Id: 99616 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id AB1EAA0548; Fri, 24 Sep 2021 16:34:31 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id A06A44134C; Fri, 24 Sep 2021 16:33:56 +0200 (CEST) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by mails.dpdk.org (Postfix) with ESMTP id BA0E24134C for ; Fri, 24 Sep 2021 16:33:54 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10116"; a="309640177" X-IronPort-AV: E=Sophos;i="5.85,320,1624345200"; d="scan'208";a="309640177" Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Sep 2021 07:33:54 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.85,320,1624345200"; d="scan'208";a="703871493" Received: from silpixa00401160.ir.intel.com ([10.55.129.96]) by fmsmga006.fm.intel.com with ESMTP; 24 Sep 2021 07:33:51 -0700 From: Conor Walsh To: bruce.richardson@intel.com, fengchengwen@huawei.com, jerinj@marvell.com, kevin.laatz@intel.com Cc: dev@dpdk.org, Conor Walsh Date: Fri, 24 Sep 2021 14:33:30 +0000 Message-Id: <20210924143335.1092300-8-conor.walsh@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210924143335.1092300-1-conor.walsh@intel.com> References: <20210827172550.1522362-1-conor.walsh@intel.com> <20210924143335.1092300-1-conor.walsh@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v5 07/12] dma/ioat: add data path completion functions X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add the data path functions for gathering completed operations from IOAT devices. Signed-off-by: Conor Walsh Signed-off-by: Kevin Laatz Acked-by: Bruce Richardson --- doc/guides/dmadevs/ioat.rst | 33 +++++++- drivers/dma/ioat/ioat_dmadev.c | 141 +++++++++++++++++++++++++++++++++ 2 files changed, 173 insertions(+), 1 deletion(-) diff --git a/doc/guides/dmadevs/ioat.rst b/doc/guides/dmadevs/ioat.rst index ec8ce5a8e5..fc1b3131c7 100644 --- a/doc/guides/dmadevs/ioat.rst +++ b/doc/guides/dmadevs/ioat.rst @@ -94,7 +94,38 @@ Performing Data Copies ~~~~~~~~~~~~~~~~~~~~~~~ Refer to the :ref:`Enqueue / Dequeue APIs ` section of the dmadev library -documentation for details on operation enqueue and submission API usage. +documentation for details on operation enqueue, submission and completion API usage. It is expected that, for efficiency reasons, a burst of operations will be enqueued to the device via multiple enqueue calls between calls to the ``rte_dma_submit()`` function. + +When gathering completions, ``rte_dma_completed()`` should be used, up until the point an error +occurs with an operation. 
If an error was encountered, ``rte_dma_completed_status()`` must be used +to reset the device and continue processing operations. This function will also gather the status +of each individual operation which is filled in to the ``status`` array provided as parameter +by the application. + +The status codes supported by IOAT are: + +* ``RTE_DMA_STATUS_SUCCESSFUL``: The operation was successful. +* ``RTE_DMA_STATUS_INVALID_SRC_ADDR``: The operation failed due to an invalid source address. +* ``RTE_DMA_STATUS_INVALID_DST_ADDR``: The operation failed due to an invalid destination address. +* ``RTE_DMA_STATUS_INVALID_LENGTH``: The operation failed due to an invalid descriptor length. +* ``RTE_DMA_STATUS_DESCRIPTOR_READ_ERROR``: The device could not read the descriptor. +* ``RTE_DMA_STATUS_ERROR_UNKNOWN``: The operation failed due to an unspecified error. + +The following code shows how to retrieve the number of successfully completed +copies within a burst and then uses ``rte_dma_completed_status()`` to check +which operation failed and reset the device to continue processing operations: + +.. code-block:: C + + enum rte_dma_status_code status[COMP_BURST_SZ]; + uint16_t count, idx, status_count; + bool error = 0; + + count = rte_dma_completed(dev_id, vchan, COMP_BURST_SZ, &idx, &error); + + if (error){ + status_count = rte_dma_completed_status(dev_id, vchan, COMP_BURST_SZ, &idx, status); + } diff --git a/drivers/dma/ioat/ioat_dmadev.c b/drivers/dma/ioat/ioat_dmadev.c index 0e92c80fb0..b2b7ebb3db 100644 --- a/drivers/dma/ioat/ioat_dmadev.c +++ b/drivers/dma/ioat/ioat_dmadev.c @@ -6,6 +6,7 @@ #include #include #include +#include #include "ioat_internal.h" @@ -355,6 +356,144 @@ ioat_dev_dump(const struct rte_dma_dev *dev, FILE *f) return 0; } +/* Returns the index of the last completed operation. */ +static inline uint16_t +__get_last_completed(const struct ioat_dmadev *ioat, int *state) +{ + /* Status register contains the address of the completed operation */ + uint64_t status = ioat->status; + + /* lower 3 bits indicate "transfer status" : active, idle, halted. + * We can ignore bit 0. + */ + *state = status & IOAT_CHANSTS_STATUS; + + /* If we are just after recovering from an error the address returned by + * status will be 0, in this case we return the offset - 1 as the last + * completed. If not return the status value minus the chainaddr which + * gives us an offset into the ring. Right shifting by 6 (divide by 64) + * gives the index of the completion from the HW point of view and adding + * the offset translates the ring index from HW to SW point of view. + */ + if ((status & ~IOAT_CHANSTS_STATUS) == 0) + return ioat->offset - 1; + + return (status - ioat->ring_addr) >> 6; +} + +/* Translates IOAT ChanERRs to DMA error codes. */ +static inline enum rte_dma_status_code +__translate_status_ioat_to_dma(uint32_t chanerr) +{ + if (chanerr & IOAT_CHANERR_INVALID_SRC_ADDR_MASK) + return RTE_DMA_STATUS_INVALID_SRC_ADDR; + else if (chanerr & IOAT_CHANERR_INVALID_DST_ADDR_MASK) + return RTE_DMA_STATUS_INVALID_DST_ADDR; + else if (chanerr & IOAT_CHANERR_INVALID_LENGTH_MASK) + return RTE_DMA_STATUS_INVALID_LENGTH; + else if (chanerr & IOAT_CHANERR_DESCRIPTOR_READ_ERROR_MASK) + return RTE_DMA_STATUS_DESCRIPTOR_READ_ERROR; + else + return RTE_DMA_STATUS_ERROR_UNKNOWN; +} + +/* Returns details of operations that have been completed. 
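+ * If the hardware has halted on an error, the failing status is latched in
+ * 'failure', the channel is recovered via __ioat_recover() and only the
+ * operations completed before the error are reported to the caller.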
*/ +static uint16_t +ioat_completed(struct rte_dma_dev *dev, uint16_t qid __rte_unused, const uint16_t max_ops, + uint16_t *last_idx, bool *has_error) +{ + struct ioat_dmadev *ioat = dev->dev_private; + + const unsigned short mask = (ioat->qcfg.nb_desc - 1); + const unsigned short read = ioat->next_read; + unsigned short last_completed, count; + int state, fails = 0; + + /* Do not do any work if there is an uncleared error. */ + if (ioat->failure != 0) { + *has_error = true; + *last_idx = ioat->next_read - 2; + return 0; + } + + last_completed = __get_last_completed(ioat, &state); + count = (last_completed + 1 - read) & mask; + + /* Cap count at max_ops or set as last run in batch. */ + if (count > max_ops) + count = max_ops; + + if (count == max_ops || state != IOAT_CHANSTS_HALTED) { + ioat->next_read = read + count; + *last_idx = ioat->next_read - 1; + } else { + *has_error = true; + rte_errno = EIO; + ioat->failure = ioat->regs->chanerr; + ioat->next_read = read + count + 1; + if (__ioat_recover(ioat) != 0) { + IOAT_PMD_ERR("Device HALTED and could not be recovered\n"); + ioat_dev_dump(dev, stdout); + return 0; + } + __submit(ioat); + fails++; + *last_idx = ioat->next_read - 2; + } + + return count; +} + +/* Returns detailed status information about operations that have been completed. */ +static uint16_t +ioat_completed_status(struct rte_dma_dev *dev, uint16_t qid __rte_unused, + uint16_t max_ops, uint16_t *last_idx, enum rte_dma_status_code *status) +{ + struct ioat_dmadev *ioat = dev->dev_private; + + const unsigned short mask = (ioat->qcfg.nb_desc - 1); + const unsigned short read = ioat->next_read; + unsigned short count, last_completed; + uint64_t fails = 0; + int state, i; + + last_completed = __get_last_completed(ioat, &state); + count = (last_completed + 1 - read) & mask; + + for (i = 0; i < RTE_MIN(count + 1, max_ops); i++) + status[i] = RTE_DMA_STATUS_SUCCESSFUL; + + /* Cap count at max_ops or set as last run in batch. */ + if (count > max_ops) + count = max_ops; + + if (count == max_ops || state != IOAT_CHANSTS_HALTED) + ioat->next_read = read + count; + else { + rte_errno = EIO; + status[count] = __translate_status_ioat_to_dma(ioat->regs->chanerr); + count++; + ioat->next_read = read + count; + if (__ioat_recover(ioat) != 0) { + IOAT_PMD_ERR("Device HALTED and could not be recovered\n"); + ioat_dev_dump(dev, stdout); + return 0; + } + __submit(ioat); + fails++; + } + + if (ioat->failure > 0) { + status[0] = __translate_status_ioat_to_dma(ioat->failure); + count = RTE_MIN(count + 1, max_ops); + ioat->failure = 0; + } + + *last_idx = ioat->next_read - 1; + + return count; +} + /* Create a DMA device. 
*/ static int ioat_dmadev_create(const char *name, struct rte_pci_device *dev) @@ -391,6 +530,8 @@ ioat_dmadev_create(const char *name, struct rte_pci_device *dev) dmadev->dev_ops = &ioat_dmadev_ops; + dmadev->completed = ioat_completed; + dmadev->completed_status = ioat_completed_status; dmadev->copy = ioat_enqueue_copy; dmadev->fill = ioat_enqueue_fill; dmadev->submit = ioat_submit; From patchwork Fri Sep 24 14:33:31 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Conor Walsh X-Patchwork-Id: 99617 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id EEF0AA0548; Fri, 24 Sep 2021 16:34:37 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id AEFA441354; Fri, 24 Sep 2021 16:33:58 +0200 (CEST) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by mails.dpdk.org (Postfix) with ESMTP id B3B7C4133A for ; Fri, 24 Sep 2021 16:33:56 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10116"; a="309640190" X-IronPort-AV: E=Sophos;i="5.85,320,1624345200"; d="scan'208";a="309640190" Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Sep 2021 07:33:56 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.85,320,1624345200"; d="scan'208";a="703871503" Received: from silpixa00401160.ir.intel.com ([10.55.129.96]) by fmsmga006.fm.intel.com with ESMTP; 24 Sep 2021 07:33:54 -0700 From: Conor Walsh To: bruce.richardson@intel.com, fengchengwen@huawei.com, jerinj@marvell.com, kevin.laatz@intel.com Cc: dev@dpdk.org, Conor Walsh Date: Fri, 24 Sep 2021 14:33:31 +0000 Message-Id: <20210924143335.1092300-9-conor.walsh@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210924143335.1092300-1-conor.walsh@intel.com> References: <20210827172550.1522362-1-conor.walsh@intel.com> <20210924143335.1092300-1-conor.walsh@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v5 08/12] dma/ioat: add statistics X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add statistic tracking for operations in IOAT. Signed-off-by: Conor Walsh Reviewed-by: Kevin Laatz Acked-by: Bruce Richardson --- drivers/dma/ioat/ioat_dmadev.c | 43 ++++++++++++++++++++++++++++++++++ 1 file changed, 43 insertions(+) diff --git a/drivers/dma/ioat/ioat_dmadev.c b/drivers/dma/ioat/ioat_dmadev.c index b2b7ebb3db..20ae364318 100644 --- a/drivers/dma/ioat/ioat_dmadev.c +++ b/drivers/dma/ioat/ioat_dmadev.c @@ -77,6 +77,9 @@ ioat_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan __rte_unused, ioat->offset = 0; ioat->failure = 0; + /* Reset Stats. */ + ioat->stats = (struct rte_dma_stats){0}; + /* Configure descriptor ring - each one points to next. 
*/ for (i = 0; i < ioat->qcfg.nb_desc; i++) { ioat->desc_ring[i].next = ioat->ring_addr + @@ -222,6 +225,8 @@ __submit(struct ioat_dmadev *ioat) { *ioat->doorbell = ioat->next_write - ioat->offset; + ioat->stats.submitted += (uint16_t)(ioat->next_write - ioat->last_write); + ioat->last_write = ioat->next_write; } @@ -352,6 +357,10 @@ ioat_dev_dump(const struct rte_dma_dev *dev, FILE *f) fprintf(f, " Dest: 0x%"PRIx64"\n", ioat->desc_ring[ioat->next_read & mask].dest_addr); fprintf(f, " Next: 0x%"PRIx64"\n", ioat->desc_ring[ioat->next_read & mask].next); fprintf(f, " }\n"); + fprintf(f, " Key Stats { submitted: %"PRIu64", comp: %"PRIu64", failed: %"PRIu64" }\n", + ioat->stats.submitted, + ioat->stats.completed, + ioat->stats.errors); return 0; } @@ -441,6 +450,9 @@ ioat_completed(struct rte_dma_dev *dev, uint16_t qid __rte_unused, const uint16_ *last_idx = ioat->next_read - 2; } + ioat->stats.completed += count; + ioat->stats.errors += fails; + return count; } @@ -491,9 +503,38 @@ ioat_completed_status(struct rte_dma_dev *dev, uint16_t qid __rte_unused, *last_idx = ioat->next_read - 1; + ioat->stats.completed += count; + ioat->stats.errors += fails; + return count; } +/* Retrieve the generic stats of a DMA device. */ +static int +ioat_stats_get(const struct rte_dma_dev *dev, uint16_t vchan __rte_unused, + struct rte_dma_stats *rte_stats, uint32_t size) +{ + struct rte_dma_stats *stats = (&((struct ioat_dmadev *)dev->dev_private)->stats); + + if (size < sizeof(rte_stats)) + return -EINVAL; + if (rte_stats == NULL) + return -EINVAL; + + *rte_stats = *stats; + return 0; +} + +/* Reset the generic stat counters for the DMA device. */ +static int +ioat_stats_reset(struct rte_dma_dev *dev, uint16_t vchan __rte_unused) +{ + struct ioat_dmadev *ioat = dev->dev_private; + + ioat->stats = (struct rte_dma_stats){0}; + return 0; +} + /* Create a DMA device. 
*/ static int ioat_dmadev_create(const char *name, struct rte_pci_device *dev) @@ -505,6 +546,8 @@ ioat_dmadev_create(const char *name, struct rte_pci_device *dev) .dev_info_get = ioat_dev_info_get, .dev_start = ioat_dev_start, .dev_stop = ioat_dev_stop, + .stats_get = ioat_stats_get, + .stats_reset = ioat_stats_reset, .vchan_setup = ioat_vchan_setup, }; From patchwork Fri Sep 24 14:33:32 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Conor Walsh X-Patchwork-Id: 99618 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id E3CDDA0548; Fri, 24 Sep 2021 16:34:44 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 058584135A; Fri, 24 Sep 2021 16:34:03 +0200 (CEST) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by mails.dpdk.org (Postfix) with ESMTP id 5F33C41351 for ; Fri, 24 Sep 2021 16:33:58 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10116"; a="309640195" X-IronPort-AV: E=Sophos;i="5.85,320,1624345200"; d="scan'208";a="309640195" Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Sep 2021 07:33:58 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.85,320,1624345200"; d="scan'208";a="703871517" Received: from silpixa00401160.ir.intel.com ([10.55.129.96]) by fmsmga006.fm.intel.com with ESMTP; 24 Sep 2021 07:33:56 -0700 From: Conor Walsh To: bruce.richardson@intel.com, fengchengwen@huawei.com, jerinj@marvell.com, kevin.laatz@intel.com Cc: dev@dpdk.org, Conor Walsh Date: Fri, 24 Sep 2021 14:33:32 +0000 Message-Id: <20210924143335.1092300-10-conor.walsh@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210924143335.1092300-1-conor.walsh@intel.com> References: <20210827172550.1522362-1-conor.walsh@intel.com> <20210924143335.1092300-1-conor.walsh@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v5 09/12] dma/ioat: add support for vchan status function X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add support for the rte_dmadev_vchan_status API call. Signed-off-by: Conor Walsh Reviewed-by: Kevin Laatz Acked-by: Bruce Richardson --- drivers/dma/ioat/ioat_dmadev.c | 21 +++++++++++++++++++++ 1 file changed, 21 insertions(+) diff --git a/drivers/dma/ioat/ioat_dmadev.c b/drivers/dma/ioat/ioat_dmadev.c index 20ae364318..fe01a3b1db 100644 --- a/drivers/dma/ioat/ioat_dmadev.c +++ b/drivers/dma/ioat/ioat_dmadev.c @@ -535,6 +535,26 @@ ioat_stats_reset(struct rte_dma_dev *dev, uint16_t vchan __rte_unused) return 0; } +/* Check if the IOAT device is idle. */ +static int +ioat_vchan_status(const struct rte_dma_dev *dev, uint16_t vchan __rte_unused, + enum rte_dma_vchan_status *status) +{ + int state = 0; + const struct ioat_dmadev *ioat = dev->dev_private; + const uint16_t mask = ioat->qcfg.nb_desc - 1; + const uint16_t last = __get_last_completed(ioat, &state); + + if (state == IOAT_CHANSTS_HALTED || state == IOAT_CHANSTS_SUSPENDED) + *status = RTE_DMA_VCHAN_HALTED_ERROR; + else if (last == ((ioat->next_write - 1) & mask)) + *status = RTE_DMA_VCHAN_IDLE; + else + *status = RTE_DMA_VCHAN_ACTIVE; + + return 0; +} + /* Create a DMA device. 
*/ static int ioat_dmadev_create(const char *name, struct rte_pci_device *dev) @@ -548,6 +568,7 @@ ioat_dmadev_create(const char *name, struct rte_pci_device *dev) .dev_stop = ioat_dev_stop, .stats_get = ioat_stats_get, .stats_reset = ioat_stats_reset, + .vchan_status = ioat_vchan_status, .vchan_setup = ioat_vchan_setup, }; From patchwork Fri Sep 24 14:33:33 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Conor Walsh X-Patchwork-Id: 99619 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 40282A0548; Fri, 24 Sep 2021 16:34:50 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 2127B41363; Fri, 24 Sep 2021 16:34:04 +0200 (CEST) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by mails.dpdk.org (Postfix) with ESMTP id 0BBFD41361 for ; Fri, 24 Sep 2021 16:33:59 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10116"; a="309640204" X-IronPort-AV: E=Sophos;i="5.85,320,1624345200"; d="scan'208";a="309640204" Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Sep 2021 07:33:59 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.85,320,1624345200"; d="scan'208";a="703871539" Received: from silpixa00401160.ir.intel.com ([10.55.129.96]) by fmsmga006.fm.intel.com with ESMTP; 24 Sep 2021 07:33:58 -0700 From: Conor Walsh To: bruce.richardson@intel.com, fengchengwen@huawei.com, jerinj@marvell.com, kevin.laatz@intel.com Cc: dev@dpdk.org, Conor Walsh Date: Fri, 24 Sep 2021 14:33:33 +0000 Message-Id: <20210924143335.1092300-11-conor.walsh@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210924143335.1092300-1-conor.walsh@intel.com> References: <20210827172550.1522362-1-conor.walsh@intel.com> <20210924143335.1092300-1-conor.walsh@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v5 10/12] dma/ioat: add burst capacity function X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Adds the ability to find the remaining space in the IOAT ring. Signed-off-by: Conor Walsh Signed-off-by: Kevin Laatz Acked-by: Bruce Richardson --- drivers/dma/ioat/ioat_dmadev.c | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/drivers/dma/ioat/ioat_dmadev.c b/drivers/dma/ioat/ioat_dmadev.c index fe01a3b1db..6c783ec94f 100644 --- a/drivers/dma/ioat/ioat_dmadev.c +++ b/drivers/dma/ioat/ioat_dmadev.c @@ -509,6 +509,19 @@ ioat_completed_status(struct rte_dma_dev *dev, uint16_t qid __rte_unused, return count; } +/* Get the remaining capacity of the ring. */ +static uint16_t +ioat_burst_capacity(const struct rte_dma_dev *dev, uint16_t vchan __rte_unused) +{ + struct ioat_dmadev *ioat = dev->data->dev_private; + unsigned short size = ioat->qcfg.nb_desc - 1; + unsigned short read = ioat->next_read; + unsigned short write = ioat->next_write; + unsigned short space = size - (write - read); + + return space; +} + /* Retrieve the generic stats of a DMA device. 
*/ static int ioat_stats_get(const struct rte_dma_dev *dev, uint16_t vchan __rte_unused, @@ -594,6 +607,7 @@ ioat_dmadev_create(const char *name, struct rte_pci_device *dev) dmadev->dev_ops = &ioat_dmadev_ops; + dmadev->burst_capacity = ioat_burst_capacity; dmadev->completed = ioat_completed; dmadev->completed_status = ioat_completed_status; dmadev->copy = ioat_enqueue_copy; From patchwork Fri Sep 24 14:33:34 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Conor Walsh X-Patchwork-Id: 99620 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id BD4CFA0548; Fri, 24 Sep 2021 16:34:55 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 3BDD14136A; Fri, 24 Sep 2021 16:34:07 +0200 (CEST) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by mails.dpdk.org (Postfix) with ESMTP id A98D64133D for ; Fri, 24 Sep 2021 16:34:01 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10116"; a="309640211" X-IronPort-AV: E=Sophos;i="5.85,320,1624345200"; d="scan'208";a="309640211" Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Sep 2021 07:34:01 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.85,320,1624345200"; d="scan'208";a="703871549" Received: from silpixa00401160.ir.intel.com ([10.55.129.96]) by fmsmga006.fm.intel.com with ESMTP; 24 Sep 2021 07:33:59 -0700 From: Conor Walsh To: bruce.richardson@intel.com, fengchengwen@huawei.com, jerinj@marvell.com, kevin.laatz@intel.com Cc: dev@dpdk.org, Conor Walsh Date: Fri, 24 Sep 2021 14:33:34 +0000 Message-Id: <20210924143335.1092300-12-conor.walsh@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210924143335.1092300-1-conor.walsh@intel.com> References: <20210827172550.1522362-1-conor.walsh@intel.com> <20210924143335.1092300-1-conor.walsh@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v5 11/12] devbind: move ioat device IDs to dmadev category X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Move Intel IOAT devices from Misc to DMA devices. Signed-off-by: Conor Walsh Reviewed-by: Kevin Laatz Reviewed-by: Bruce Richardson --- usertools/dpdk-devbind.py | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/usertools/dpdk-devbind.py b/usertools/dpdk-devbind.py index 15d438715f..4a72229622 100755 --- a/usertools/dpdk-devbind.py +++ b/usertools/dpdk-devbind.py @@ -69,14 +69,12 @@ network_devices = [network_class, cavium_pkx, avp_vnic, ifpga_class] baseband_devices = [acceleration_class] crypto_devices = [encryption_class, intel_processor_class] -dma_devices = [intel_idxd_spr] +dma_devices = [intel_idxd_spr, intel_ioat_bdw, intel_ioat_icx, intel_ioat_skx] eventdev_devices = [cavium_sso, cavium_tim, intel_dlb, octeontx2_sso] mempool_devices = [cavium_fpa, octeontx2_npa] compress_devices = [cavium_zip] regex_devices = [octeontx2_ree] -misc_devices = [cnxk_bphy, cnxk_bphy_cgx, intel_ioat_bdw, intel_ioat_skx, intel_ioat_icx, - intel_ntb_skx, intel_ntb_icx, - octeontx2_dma] +misc_devices = [cnxk_bphy, cnxk_bphy_cgx, intel_ntb_skx, intel_ntb_icx, octeontx2_dma] # global dict ethernet devices present. 
Dictionary indexed by PCI address. # Each device within this is itself a dictionary of device properties From patchwork Fri Sep 24 14:33:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Conor Walsh X-Patchwork-Id: 99621 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id EA90AA0548; Fri, 24 Sep 2021 16:35:00 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 497C441370; Fri, 24 Sep 2021 16:34:08 +0200 (CEST) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by mails.dpdk.org (Postfix) with ESMTP id 6C29841362 for ; Fri, 24 Sep 2021 16:34:03 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10116"; a="309640222" X-IronPort-AV: E=Sophos;i="5.85,320,1624345200"; d="scan'208";a="309640222" Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Sep 2021 07:34:03 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.85,320,1624345200"; d="scan'208";a="703871567" Received: from silpixa00401160.ir.intel.com ([10.55.129.96]) by fmsmga006.fm.intel.com with ESMTP; 24 Sep 2021 07:34:01 -0700 From: Conor Walsh To: bruce.richardson@intel.com, fengchengwen@huawei.com, jerinj@marvell.com, kevin.laatz@intel.com Cc: dev@dpdk.org, Conor Walsh Date: Fri, 24 Sep 2021 14:33:35 +0000 Message-Id: <20210924143335.1092300-13-conor.walsh@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210924143335.1092300-1-conor.walsh@intel.com> References: <20210827172550.1522362-1-conor.walsh@intel.com> <20210924143335.1092300-1-conor.walsh@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v5 12/12] raw/ioat: deprecate ioat rawdev driver X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Deprecate the rawdev IOAT driver as both IOAT and IDXD drivers have moved to dmadev. Signed-off-by: Conor Walsh Acked-by: Kevin Laatz --- MAINTAINERS | 2 +- doc/guides/rawdevs/ioat.rst | 4 ++++ 2 files changed, 5 insertions(+), 1 deletion(-) diff --git a/MAINTAINERS b/MAINTAINERS index ccabba9169..a4bcd2d024 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1322,7 +1322,7 @@ T: git://dpdk.org/next/dpdk-next-net-intel F: drivers/raw/ifpga/ F: doc/guides/rawdevs/ifpga.rst -IOAT Rawdev +IOAT Rawdev - DEPRECATED M: Bruce Richardson F: drivers/raw/ioat/ F: doc/guides/rawdevs/ioat.rst diff --git a/doc/guides/rawdevs/ioat.rst b/doc/guides/rawdevs/ioat.rst index a65530bd30..98d15dd032 100644 --- a/doc/guides/rawdevs/ioat.rst +++ b/doc/guides/rawdevs/ioat.rst @@ -6,6 +6,10 @@ IOAT Rawdev Driver =================== +.. warning:: + As of DPDK 21.11 the rawdev implementation of the IOAT driver has been deprecated. + Please use the dmadev library instead. + The ``ioat`` rawdev driver provides a poll-mode driver (PMD) for Intel\ |reg| Data Streaming Accelerator `(Intel DSA) `_ and for Intel\ |reg|