From patchwork Fri Sep 24 14:33:28 2021
X-Patchwork-Submitter: Conor Walsh <conor.walsh@intel.com>
X-Patchwork-Id: 99614
From: Conor Walsh <conor.walsh@intel.com>
To: bruce.richardson@intel.com, fengchengwen@huawei.com, jerinj@marvell.com,
 kevin.laatz@intel.com
Cc: dev@dpdk.org, Conor Walsh <conor.walsh@intel.com>
Date: Fri, 24 Sep 2021 14:33:28 +0000
Message-Id: <20210924143335.1092300-6-conor.walsh@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210924143335.1092300-1-conor.walsh@intel.com>
References: <20210827172550.1522362-1-conor.walsh@intel.com>
 <20210924143335.1092300-1-conor.walsh@intel.com>
Subject: [dpdk-dev] [PATCH v5 05/12] dma/ioat: add start and stop functions

Add start, stop and recover functions for IOAT devices.

Signed-off-by: Conor Walsh <conor.walsh@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---
 doc/guides/dmadevs/ioat.rst    |  3 ++
 drivers/dma/ioat/ioat_dmadev.c | 92 ++++++++++++++++++++++++++++++++++
 2 files changed, 95 insertions(+)

diff --git a/doc/guides/dmadevs/ioat.rst b/doc/guides/dmadevs/ioat.rst
index b1f847d273..d93d28023f 100644
--- a/doc/guides/dmadevs/ioat.rst
+++ b/doc/guides/dmadevs/ioat.rst
@@ -86,3 +86,6 @@ IOAT configuration requirements:
 * Only one ``vchan`` is supported per device.
 * Silent mode is not supported.
 * The transfer direction must be set to ``RTE_DMA_DIR_MEM_TO_MEM`` to copy from memory to memory.
+
+Once configured, the device can then be made ready for use by calling the
+``rte_dma_start()`` API.
diff --git a/drivers/dma/ioat/ioat_dmadev.c b/drivers/dma/ioat/ioat_dmadev.c
index 92c4e2b04f..96bf55135f 100644
--- a/drivers/dma/ioat/ioat_dmadev.c
+++ b/drivers/dma/ioat/ioat_dmadev.c
@@ -78,6 +78,96 @@ ioat_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan __rte_unused,
 	return 0;
 }
 
+/* Recover IOAT device. */
+static inline int
+__ioat_recover(struct ioat_dmadev *ioat)
+{
+	uint32_t chanerr, retry = 0;
+	uint16_t mask = ioat->qcfg.nb_desc - 1;
+
+	/* Clear any channel errors. Reading and writing to chanerr does this. */
+	chanerr = ioat->regs->chanerr;
+	ioat->regs->chanerr = chanerr;
+
+	/* Reset Channel. */
+	ioat->regs->chancmd = IOAT_CHANCMD_RESET;
+
+	/* Write new chain address to trigger state change. */
+	ioat->regs->chainaddr = ioat->desc_ring[(ioat->next_read - 1) & mask].next;
+	/* Ensure channel control and status addr are correct. */
+	ioat->regs->chanctrl = IOAT_CHANCTRL_ANY_ERR_ABORT_EN |
+			IOAT_CHANCTRL_ERR_COMPLETION_EN;
+	ioat->regs->chancmp = ioat->status_addr;
+
+	/* Allow HW time to move to the ARMED state. */
+	do {
+		rte_pause();
+		retry++;
+	} while (ioat->regs->chansts != IOAT_CHANSTS_ARMED && retry < 200);
+
+	/* Exit as failure if device is still HALTED. */
+	if (ioat->regs->chansts != IOAT_CHANSTS_ARMED)
+		return -1;
+
+	/* Store next write as offset as recover will move HW and SW ring out of sync. */
+	ioat->offset = ioat->next_read;
+
+	/* Prime status register with previous address. */
+	ioat->status = ioat->desc_ring[(ioat->next_read - 2) & mask].next;
+
+	return 0;
+}
+
+/* Start a configured device. */
+static int
+ioat_dev_start(struct rte_dma_dev *dev)
+{
+	struct ioat_dmadev *ioat = dev->dev_private;
+
+	if (ioat->qcfg.nb_desc == 0 || ioat->desc_ring == NULL)
+		return -EBUSY;
+
+	/* Inform hardware of where the descriptor ring is. */
+	ioat->regs->chainaddr = ioat->ring_addr;
+	/* Inform hardware of where to write the status/completions. */
+	ioat->regs->chancmp = ioat->status_addr;
+
+	/* Prime the status register to be set to the last element. */
+	ioat->status = ioat->ring_addr + ((ioat->qcfg.nb_desc - 1) * DESC_SZ);
+
+	printf("IOAT.status: %s [0x%"PRIx64"]\n",
+			chansts_readable[ioat->status & IOAT_CHANSTS_STATUS],
+			ioat->status);
+
+	if ((ioat->regs->chansts & IOAT_CHANSTS_STATUS) == IOAT_CHANSTS_HALTED) {
+		IOAT_PMD_WARN("Device HALTED on start, attempting to recover\n");
+		if (__ioat_recover(ioat) != 0) {
+			IOAT_PMD_ERR("Device couldn't be recovered");
+			return -1;
+		}
+	}
+
+	return 0;
+}
+
+/* Stop a configured device. */
+static int
+ioat_dev_stop(struct rte_dma_dev *dev)
+{
+	struct ioat_dmadev *ioat = dev->dev_private;
+	uint32_t retry = 0;
+
+	ioat->regs->chancmd = IOAT_CHANCMD_SUSPEND;
+
+	do {
+		rte_pause();
+		retry++;
+	} while ((ioat->regs->chansts & IOAT_CHANSTS_STATUS) != IOAT_CHANSTS_SUSPENDED
+			&& retry < 200);
+
+	return ((ioat->regs->chansts & IOAT_CHANSTS_STATUS) == IOAT_CHANSTS_SUSPENDED) ? 0 : -1;
+}
+
 /* Get device information of a device. */
 static int
 ioat_dev_info_get(const struct rte_dma_dev *dev, struct rte_dma_info *info, uint32_t size)
@@ -186,6 +276,8 @@ ioat_dmadev_create(const char *name, struct rte_pci_device *dev)
 		.dev_configure = ioat_dev_configure,
 		.dev_dump = ioat_dev_dump,
 		.dev_info_get = ioat_dev_info_get,
+		.dev_start = ioat_dev_start,
+		.dev_stop = ioat_dev_stop,
 		.vchan_setup = ioat_vchan_setup,
 	};
 
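
Below is a minimal application-side sketch, not part of this patch, of how the
new callbacks are reached through the generic dmadev API. It assumes the
rte_dma_configure()/rte_dma_vchan_setup()/rte_dma_start()/rte_dma_stop() calls
and the rte_dma_conf/rte_dma_vchan_conf structures from the dmadev library this
driver plugs into; the dev_id handling and the nb_desc value are placeholders.

/*
 * Illustrative sketch only, not part of this patch: exercising
 * ioat_dev_start()/ioat_dev_stop() via the generic dmadev API.
 * dev_id and nb_desc are placeholder values.
 */
#include <rte_dmadev.h>

static int
ioat_start_stop_example(int16_t dev_id)
{
	const struct rte_dma_conf dev_conf = { .nb_vchans = 1 };
	const struct rte_dma_vchan_conf vchan_conf = {
		.direction = RTE_DMA_DIR_MEM_TO_MEM,
		.nb_desc = 512,
	};

	/* Configure the device and its single vchan (IOAT supports only one). */
	if (rte_dma_configure(dev_id, &dev_conf) != 0 ||
			rte_dma_vchan_setup(dev_id, 0, &vchan_conf) != 0)
		return -1;

	/* Dispatches to the driver's dev_start callback (ioat_dev_start). */
	if (rte_dma_start(dev_id) != 0)
		return -1;

	/* ... enqueue copies and poll for completions here ... */

	/* Dispatches to the driver's dev_stop callback (ioat_dev_stop). */
	return rte_dma_stop(dev_id);
}

The constraints from the doc update above still apply to such a sketch: a
single vchan per device and RTE_DMA_DIR_MEM_TO_MEM transfers only.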