From patchwork Mon Aug 12 07:31:24 2024
X-Patchwork-Submitter: Yong Zhang <zhang.yong25@zte.com.cn>
X-Patchwork-Id: 143065
X-Patchwork-Delegate: thomas@monjalon.net
From: Yong Zhang <zhang.yong25@zte.com.cn>
To: dev@dpdk.org, stephen@networkplumber.org, david.marchand@redhat.com
Cc: Yong Zhang <zhang.yong25@zte.com.cn>
Subject: [v2 1/5] raw/zxdh: introduce zxdh raw device driver
Date: Mon, 12 Aug 2024 15:31:24 +0800
Message-ID: <20240812073209.1924286-1-zhang.yong25@zte.com.cn>

Introduce rawdev driver support for ZXDH devices, which can be used to connect two separate hosts with each other.
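For reference, a minimal sketch of how an application locates the device once this PMD has probed it. The fixed rawdev name "zxdh_gdma" comes from dev_name below; the main() scaffolding is illustrative only and error handling is omitted:

#include <stdio.h>
#include <rte_eal.h>
#include <rte_rawdev.h>

int main(int argc, char **argv)
{
	/* EAL init scans the PCI bus and calls zxdh_gdma_rawdev_probe() */
	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* the PMD registers a single rawdev under the fixed name "zxdh_gdma" */
	uint16_t dev_id = rte_rawdev_get_dev_id("zxdh_gdma");

	printf("zxdh gdma rawdev id: %u\n", dev_id);
	return 0;
}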
Signed-off-by: Yong Zhang <zhang.yong25@zte.com.cn> --- MAINTAINERS | 5 + doc/guides/rawdevs/index.rst | 1 + doc/guides/rawdevs/zxdh.rst | 30 +++++ drivers/raw/meson.build | 1 + drivers/raw/zxdh/meson.build | 5 + drivers/raw/zxdh/zxdh_rawdev.c | 220 +++++++++++++++++++++++++++++++++ drivers/raw/zxdh/zxdh_rawdev.h | 118 ++++++++++++++++++ 7 files changed, 380 insertions(+) create mode 100644 doc/guides/rawdevs/zxdh.rst create mode 100644 drivers/raw/zxdh/meson.build create mode 100644 drivers/raw/zxdh/zxdh_rawdev.c create mode 100644 drivers/raw/zxdh/zxdh_rawdev.h -- 2.43.0 diff --git a/MAINTAINERS b/MAINTAINERS index c5a703b5c0..6dd4fbae6e 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1511,6 +1511,11 @@ M: Gagandeep Singh F: drivers/raw/dpaa2_cmdif/ F: doc/guides/rawdevs/dpaa2_cmdif.rst +ZXDH +M: Yong Zhang <zhang.yong25@zte.com.cn> +F: drivers/raw/zxdh/ +F: doc/guides/rawdevs/zxdh.rst + Packet processing ----------------- diff --git a/doc/guides/rawdevs/index.rst b/doc/guides/rawdevs/index.rst index f34315f051..d85a4b7148 100644 --- a/doc/guides/rawdevs/index.rst +++ b/doc/guides/rawdevs/index.rst @@ -16,3 +16,4 @@ application through rawdev API. dpaa2_cmdif ifpga ntb + zxdh diff --git a/doc/guides/rawdevs/zxdh.rst b/doc/guides/rawdevs/zxdh.rst new file mode 100644 index 0000000000..fa7ada1004 --- /dev/null +++ b/doc/guides/rawdevs/zxdh.rst @@ -0,0 +1,30 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright 2024 ZTE Corporation + +ZXDH Rawdev Driver +================== + +The ``zxdh`` rawdev driver is an implementation of the rawdev API +that provides communication between two separate hosts. +This is achieved by using the GDMA controller of the Dinghai SoC, +which can be configured through the exposed MPF devices. + +Device Setup +------------ + +Binding the MPF devices to the ZXDH MPF kernel driver is recommended, but not mandatory. +The kernel drivers can be downloaded at `ZTE Official Website +`_. + +Initialization +-------------- + +The ``zxdh`` rawdev driver must run in IOVA PA mode. +Pass ``--iova-mode=pa`` in the EAL options. + +Platform Requirement +~~~~~~~~~~~~~~~~~~~~ + +This PMD is only supported on ZTE Neo platforms: +- Neo X510/X512 + diff --git a/drivers/raw/meson.build b/drivers/raw/meson.build index 05cad143fe..237d1bdd80 100644 --- a/drivers/raw/meson.build +++ b/drivers/raw/meson.build @@ -12,5 +12,6 @@ drivers = [ 'ifpga', 'ntb', 'skeleton', + 'zxdh', ] std_deps = ['rawdev'] diff --git a/drivers/raw/zxdh/meson.build b/drivers/raw/zxdh/meson.build new file mode 100644 index 0000000000..266d3db6d8 --- /dev/null +++ b/drivers/raw/zxdh/meson.build @@ -0,0 +1,5 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright 2024 ZTE Corporation + +deps += ['rawdev', 'kvargs', 'mbuf', 'bus_pci'] +sources = files('zxdh_rawdev.c') diff --git a/drivers/raw/zxdh/zxdh_rawdev.c b/drivers/raw/zxdh/zxdh_rawdev.c new file mode 100644 index 0000000000..269c4f92e0 --- /dev/null +++ b/drivers/raw/zxdh/zxdh_rawdev.c @@ -0,0 +1,220 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2024 ZTE Corporation + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "zxdh_rawdev.h" + +/* Register offset */ +#define ZXDH_GDMA_BASE_OFFSET 0x100000 + +#define ZXDH_GDMA_CHAN_SHIFT 0x80 +char zxdh_gdma_driver_name[] = "rawdev_zxdh_gdma"; +char dev_name[] = "zxdh_gdma"; + +uint32_t +zxdh_gdma_read_reg(struct rte_rawdev *dev, uint16_t queue_id, uint32_t offset) +{ + struct zxdh_gdma_rawdev *gdmadev = zxdh_gdma_rawdev_get_priv(dev); + uint32_t addr = 0; + uint32_t val = 0; + + addr = offset + queue_id * ZXDH_GDMA_CHAN_SHIFT; + val = *(uint32_t *)(gdmadev->base_addr + addr); + + return val; } + +void +zxdh_gdma_write_reg(struct rte_rawdev *dev, uint16_t queue_id, uint32_t offset, uint32_t val) +{ + struct zxdh_gdma_rawdev *gdmadev = zxdh_gdma_rawdev_get_priv(dev); + uint32_t addr = 0; + + addr = offset + queue_id * ZXDH_GDMA_CHAN_SHIFT; + *(uint32_t *)(gdmadev->base_addr + addr) = val; +} + +static const struct rte_rawdev_ops zxdh_gdma_rawdev_ops = { +}; + +static int +zxdh_gdma_map_resource(struct rte_pci_device *dev) +{ + int fd = -1; + char devname[PATH_MAX]; + void *mapaddr = NULL; + struct rte_pci_addr *loc; + + loc = &dev->addr; + snprintf(devname, sizeof(devname), "%s/" PCI_PRI_FMT "/resource0", + rte_pci_get_sysfs_path(), + loc->domain, loc->bus, loc->devid, + loc->function); + + fd = open(devname, O_RDWR); + if (fd < 0) { + ZXDH_PMD_LOG(ERR, "Cannot open %s: %s", devname, strerror(errno)); + return -1; + } + + /* Map the PCI memory resource of device */ + mapaddr = rte_mem_map(NULL, (size_t)dev->mem_resource[0].len, + RTE_PROT_READ | RTE_PROT_WRITE, + RTE_MAP_SHARED, fd, 0); + if 
(mapaddr == NULL) { + ZXDH_PMD_LOG(ERR, "cannot map resource(%d, 0x%zx): %s (%p)", + fd, (size_t)dev->mem_resource[0].len, + rte_strerror(rte_errno), mapaddr); + close(fd); + return -1; + } + + close(fd); + dev->mem_resource[0].addr = mapaddr; + + return 0; +} + +static void +zxdh_gdma_unmap_resource(void *requested_addr, size_t size) +{ + if (requested_addr == NULL) + return; + + /* Unmap the PCI memory resource of device */ + if (rte_mem_unmap(requested_addr, size)) + ZXDH_PMD_LOG(ERR, "cannot mem unmap(%p, %#zx): %s", + requested_addr, size, rte_strerror(rte_errno)); + else + ZXDH_PMD_LOG(DEBUG, "PCI memory unmapped at %p", requested_addr); +} + +static int +zxdh_gdma_rawdev_probe(struct rte_pci_driver *pci_drv __rte_unused, + struct rte_pci_device *pci_dev) +{ + struct rte_rawdev *dev = NULL; + struct zxdh_gdma_rawdev *gdmadev = NULL; + struct zxdh_gdma_queue *queue = NULL; + uint8_t i = 0; + int ret; + + if (pci_dev->mem_resource[0].phys_addr == 0) { + ZXDH_PMD_LOG(ERR, "PCI bar0 resource is invalid"); + return -1; + } + + ret = zxdh_gdma_map_resource(pci_dev); + if (ret != 0) { + ZXDH_PMD_LOG(ERR, "Failed to mmap pci device(%s)", pci_dev->name); + return -1; + } + ZXDH_PMD_LOG(INFO, "%s bar0 0x%"PRIx64" mapped at %p", + pci_dev->name, pci_dev->mem_resource[0].phys_addr, + pci_dev->mem_resource[0].addr); + + dev = rte_rawdev_pmd_allocate(dev_name, sizeof(struct zxdh_gdma_rawdev), rte_socket_id()); + if (dev == NULL) { + ZXDH_PMD_LOG(ERR, "Unable to allocate gdma rawdev"); + goto err_out; + } + ZXDH_PMD_LOG(INFO, "Init %s on NUMA node %d, dev_id is %d", + dev_name, rte_socket_id(), dev->dev_id); + + dev->dev_ops = &zxdh_gdma_rawdev_ops; + dev->device = &pci_dev->device; + dev->driver_name = zxdh_gdma_driver_name; + gdmadev = zxdh_gdma_rawdev_get_priv(dev); + gdmadev->device_state = ZXDH_GDMA_DEV_STOPPED; + gdmadev->rawdev = dev; + gdmadev->queue_num = ZXDH_GDMA_TOTAL_CHAN_NUM; + gdmadev->used_num = 0; + gdmadev->base_addr = (uintptr_t)pci_dev->mem_resource[0].addr + ZXDH_GDMA_BASE_OFFSET; + + for (i = 0; i < ZXDH_GDMA_TOTAL_CHAN_NUM; i++) { + queue = &(gdmadev->vqs[i]); + queue->enable = 0; + queue->queue_size = ZXDH_GDMA_QUEUE_SIZE; + rte_spinlock_init(&(queue->enqueue_lock)); + } + + return 0; + +err_out: + zxdh_gdma_unmap_resource(pci_dev->mem_resource[0].addr, + (size_t)pci_dev->mem_resource[0].len); + return -1; +} + +static int +zxdh_gdma_rawdev_remove(struct rte_pci_device *pci_dev) +{ + struct rte_rawdev *dev = NULL; + int ret = 0; + + dev = rte_rawdev_pmd_get_named_dev(dev_name); + if (dev == NULL) + return -EINVAL; + + /* rte_rawdev_close is called by pmd_release */ + ret = rte_rawdev_pmd_release(dev); + if (ret != 0) { + ZXDH_PMD_LOG(ERR, "Device cleanup failed"); + return -1; + } + + zxdh_gdma_unmap_resource(pci_dev->mem_resource[0].addr, + (size_t)pci_dev->mem_resource[0].len); + + ZXDH_PMD_LOG(DEBUG, "rawdev %s remove done!", dev_name); + + return ret; +} + +static const struct rte_pci_id zxdh_gdma_rawdev_map[] = { + { RTE_PCI_DEVICE(ZXDH_GDMA_VENDORID, ZXDH_GDMA_DEVICEID) }, + { .vendor_id = 0, /* sentinel */ }, +}; + +static struct rte_pci_driver zxdh_gdma_rawdev_pmd = { + .id_table = zxdh_gdma_rawdev_map, + .drv_flags = 0, + .probe = zxdh_gdma_rawdev_probe, + .remove = zxdh_gdma_rawdev_remove, +}; + +RTE_PMD_REGISTER_PCI(zxdh_gdma_rawdev_pci_driver, zxdh_gdma_rawdev_pmd); +RTE_PMD_REGISTER_PCI_TABLE(zxdh_gdma_rawdev_pci_driver, zxdh_gdma_rawdev_map); +RTE_LOG_REGISTER_DEFAULT(zxdh_gdma_rawdev_logtype, NOTICE); diff --git a/drivers/raw/zxdh/zxdh_rawdev.h 
b/drivers/raw/zxdh/zxdh_rawdev.h new file mode 100644 index 0000000000..b4d977ce54 --- /dev/null +++ b/drivers/raw/zxdh/zxdh_rawdev.h @@ -0,0 +1,118 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2024 ZTE Corporation + */ + +#ifndef __ZXDH_RAWDEV_H__ +#define __ZXDH_RAWDEV_H__ + +#ifdef __cplusplus extern "C" { +#endif + +#include +#include + +extern int zxdh_gdma_rawdev_logtype; +#define RTE_LOGTYPE_ZXDH_GDMA zxdh_gdma_rawdev_logtype + +#define ZXDH_PMD_LOG(level, ...) \ + RTE_LOG_LINE_PREFIX(level, ZXDH_GDMA, \ + "%s() line %u: ", __func__ RTE_LOG_COMMA __LINE__, __VA_ARGS__) + +#define ZXDH_GDMA_VENDORID 0x1cf2 +#define ZXDH_GDMA_DEVICEID 0x8044 + +#define ZXDH_GDMA_TOTAL_CHAN_NUM 58 +#define ZXDH_GDMA_QUEUE_SIZE 16384 +#define ZXDH_GDMA_RING_SIZE 32768 + +enum zxdh_gdma_device_state { + ZXDH_GDMA_DEV_RUNNING, + ZXDH_GDMA_DEV_STOPPED +}; + +struct zxdh_gdma_buff_desc { + uint32_t SrcAddr_L; + uint32_t DstAddr_L; + uint32_t Xpara; + uint32_t ZY_para; + uint32_t ZY_SrcStep; + uint32_t ZY_DstStep; + uint32_t ExtAddr; + uint32_t LLI_Addr_L; + uint32_t LLI_Addr_H; + uint32_t ChCont; + uint32_t LLI_User; + uint32_t ErrAddr; + uint32_t Control; + uint32_t SrcAddr_H; + uint32_t DstAddr_H; + uint32_t Reserved; +}; + +struct zxdh_gdma_job { + uint64_t src; + uint64_t dest; + uint32_t len; + uint32_t flags; + uint64_t cnxt; + uint16_t status; + uint16_t vq_id; + void *usr_elem; + uint8_t ep_id; + uint8_t pf_id; + uint16_t vf_id; +}; + +struct zxdh_gdma_queue { + uint8_t enable; + uint8_t is_txq; + uint16_t vq_id; + uint16_t queue_size; + /* 0: GDMA needs to be configured through the APB interface */ + uint16_t flag; + uint32_t user; + uint16_t tc_cnt; + rte_spinlock_t enqueue_lock; + struct { + uint16_t avail_idx; + uint16_t last_avail_idx; + rte_iova_t ring_mem; + const struct rte_memzone *ring_mz; + struct zxdh_gdma_buff_desc *desc; + } ring; + struct { + uint16_t free_cnt; + uint16_t deq_cnt; + uint16_t pend_cnt; + uint16_t enq_idx; + uint16_t deq_idx; + uint16_t used_idx; + struct zxdh_gdma_job **job; + } sw_ring; +}; + +struct zxdh_gdma_rawdev { + struct rte_device *device; + struct rte_rawdev *rawdev; + uintptr_t base_addr; + uint8_t queue_num; /* total queue num */ + uint8_t used_num; /* used queue num */ + enum zxdh_gdma_device_state device_state; + struct zxdh_gdma_queue vqs[ZXDH_GDMA_TOTAL_CHAN_NUM]; +}; + +static inline struct zxdh_gdma_rawdev * +zxdh_gdma_rawdev_get_priv(const struct rte_rawdev *rawdev) +{ + return rawdev->dev_private; +} + +uint32_t zxdh_gdma_read_reg(struct rte_rawdev *dev, uint16_t qidx, uint32_t offset); +void zxdh_gdma_write_reg(struct rte_rawdev *dev, uint16_t qidx, uint32_t offset, uint32_t val); + +#ifdef __cplusplus } +#endif + +#endif /* __ZXDH_RAWDEV_H__ */

From patchwork Mon Aug 12 07:31:26 2024
X-Patchwork-Submitter: Yong Zhang <zhang.yong25@zte.com.cn>
X-Patchwork-Id: 143066
X-Patchwork-Delegate: thomas@monjalon.net
From: Yong Zhang <zhang.yong25@zte.com.cn>
To: dev@dpdk.org, stephen@networkplumber.org, david.marchand@redhat.com
Cc: Yong Zhang <zhang.yong25@zte.com.cn>
Subject: [v2 2/5] raw/zxdh: add support for queue setup operation
Date: Mon, 12 Aug 2024 15:31:26 +0800
Message-ID: <20240812073209.1924286-3-zhang.yong25@zte.com.cn>
In-Reply-To: <20240812073209.1924286-1-zhang.yong25@zte.com.cn>
References: <20240812073209.1924286-1-zhang.yong25@zte.com.cn>

Add queue initialization and release interfaces.
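For reviewers, a usage sketch of the new operation through the generic rawdev API. The helper name and the EP/PF/VF numbers are illustrative assumptions, not part of this patch; note that the driver scans for a free channel itself and returns its id, so the queue_id argument is effectively a placeholder:

#include <rte_rawdev.h>

#include "zxdh_rawdev.h"

/* request a TX queue (srbp == 0, drbp != 0); returns the allocated vq id
 * on success, negative on error */
static int setup_tx_queue(uint16_t dev_id)
{
	struct zxdh_gdma_rbp rbp = {
		.drbp = 1,	/* route-by-port on the destination side */
		.dportid = 0,	/* illustrative EP/PF/VF numbers */
		.dpfid = 0,
		.dvfid = 0,
	};
	struct zxdh_gdma_queue_config qconf = {
		.rbp = &rbp,
	};

	return rte_rawdev_queue_setup(dev_id, 0, &qconf, sizeof(qconf));
}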
Signed-off-by: Yong Zhang <zhang.yong25@zte.com.cn> --- drivers/raw/zxdh/zxdh_rawdev.c | 242 +++++++++++++++++++++++++++++++++ drivers/raw/zxdh/zxdh_rawdev.h | 19 +++ 2 files changed, 261 insertions(+) -- 2.43.0 diff --git a/drivers/raw/zxdh/zxdh_rawdev.c b/drivers/raw/zxdh/zxdh_rawdev.c index 269c4f92e0..a76e3cda39 100644 --- a/drivers/raw/zxdh/zxdh_rawdev.c +++ b/drivers/raw/zxdh/zxdh_rawdev.c @@ -36,13 +36,58 @@ #include "zxdh_rawdev.h" +/* + * User field layout: + * ep_id-bit[15:12] vfunc_num-bit[11:4] func_num-bit[3:1] vfunc_active-bit0 + * host ep_id:5~8 zf ep_id:9 + */ +#define ZXDH_GDMA_ZF_USER 0x9000 /* ep4 pf0 */ +#define ZXDH_GDMA_PF_NUM_SHIFT 1 +#define ZXDH_GDMA_VF_NUM_SHIFT 4 +#define ZXDH_GDMA_EP_ID_SHIFT 12 +#define ZXDH_GDMA_VF_EN 1 +#define ZXDH_GDMA_EPID_OFFSET 5 + /* Register offset */ #define ZXDH_GDMA_BASE_OFFSET 0x100000 +#define ZXDH_GDMA_EXT_ADDR_OFFSET 0x218 +#define ZXDH_GDMA_CONTROL_OFFSET 0x230 +#define ZXDH_GDMA_TC_CNT_OFFSET 0x23c +#define ZXDH_GDMA_LLI_USER_OFFSET 0x228 + +#define ZXDH_GDMA_CHAN_FORCE_CLOSE (1 << 31) + +/* TC count & Error interrupt status register */ +#define ZXDH_GDMA_SRC_LLI_ERR (1 << 16) +#define ZXDH_GDMA_SRC_DATA_ERR (1 << 17) +#define ZXDH_GDMA_DST_ADDR_ERR (1 << 18) +#define ZXDH_GDMA_ERR_STATUS (1 << 19) +#define ZXDH_GDMA_ERR_INTR_ENABLE (1 << 20) +#define ZXDH_GDMA_TC_CNT_CLEAN (1) #define ZXDH_GDMA_CHAN_SHIFT 0x80 +#define LOW32_MASK 0xffffffff +#define LOW16_MASK 0xffff + +static int zxdh_gdma_queue_init(struct rte_rawdev *dev, uint16_t queue_id); +static int zxdh_gdma_queue_free(struct rte_rawdev *dev, uint16_t queue_id); + char zxdh_gdma_driver_name[] = "rawdev_zxdh_gdma"; char dev_name[] = "zxdh_gdma"; +static inline struct zxdh_gdma_queue * +zxdh_gdma_get_queue(struct rte_rawdev *dev, uint16_t queue_id) +{ + struct zxdh_gdma_rawdev *gdmadev = zxdh_gdma_rawdev_get_priv(dev); + + if (queue_id >= ZXDH_GDMA_TOTAL_CHAN_NUM) { + ZXDH_PMD_LOG(ERR, "queue id %d is invalid", queue_id); + return NULL; + } + + return &(gdmadev->vqs[queue_id]); +} + uint32_t zxdh_gdma_read_reg(struct rte_rawdev *dev, uint16_t queue_id, uint32_t offset) { @@ -66,9 +111,206 @@ 
zxdh_gdma_write_reg(struct rte_rawdev *dev, uint16_t queue_id, uint32_t offset, *(uint32_t *)(gdmadev->base_addr + addr) = val; } +static int +zxdh_gdma_rawdev_queue_setup(struct rte_rawdev *dev, + uint16_t queue_id, + rte_rawdev_obj_t queue_conf, + size_t conf_size) +{ + struct zxdh_gdma_rawdev *gdmadev = NULL; + struct zxdh_gdma_queue *queue = NULL; + struct zxdh_gdma_queue_config *qconfig = NULL; + struct zxdh_gdma_rbp *rbp = NULL; + uint16_t i = 0; + uint8_t is_txq = 0; + uint32_t src_user = 0; + uint32_t dst_user = 0; + + if (dev == NULL) + return -EINVAL; + + if ((queue_conf == NULL) || (conf_size != sizeof(struct zxdh_gdma_queue_config))) + return -EINVAL; + + gdmadev = zxdh_gdma_rawdev_get_priv(dev); + qconfig = (struct zxdh_gdma_queue_config *)queue_conf; + + for (i = 0; i < ZXDH_GDMA_TOTAL_CHAN_NUM; i++) { + if (gdmadev->vqs[i].enable == 0) + break; + } + if (i >= ZXDH_GDMA_TOTAL_CHAN_NUM) { + ZXDH_PMD_LOG(ERR, "Failed to setup queue, no avail queues"); + return -1; + } + queue_id = i; + if (zxdh_gdma_queue_init(dev, queue_id) != 0) { + ZXDH_PMD_LOG(ERR, "Failed to init queue"); + return -1; + } + queue = &(gdmadev->vqs[queue_id]); + + rbp = qconfig->rbp; + if ((rbp->srbp != 0) && (rbp->drbp == 0)) { + is_txq = 0; + dst_user = ZXDH_GDMA_ZF_USER; + src_user = ((rbp->spfid << ZXDH_GDMA_PF_NUM_SHIFT) | + ((rbp->sportid + ZXDH_GDMA_EPID_OFFSET) << ZXDH_GDMA_EP_ID_SHIFT)); + + if (rbp->svfid != 0) + src_user |= (ZXDH_GDMA_VF_EN | + ((rbp->svfid - 1) << ZXDH_GDMA_VF_NUM_SHIFT)); + + ZXDH_PMD_LOG(DEBUG, "rxq->qidx:%d setup src_user(ep:%d pf:%d vf:%d) success", + queue_id, (uint8_t)rbp->sportid, (uint8_t)rbp->spfid, + (uint8_t)rbp->svfid); + } else if ((rbp->srbp == 0) && (rbp->drbp != 0)) { + is_txq = 1; + src_user = ZXDH_GDMA_ZF_USER; + dst_user = ((rbp->dpfid << ZXDH_GDMA_PF_NUM_SHIFT) | + ((rbp->dportid + ZXDH_GDMA_EPID_OFFSET) << ZXDH_GDMA_EP_ID_SHIFT)); + + if (rbp->dvfid != 0) + dst_user |= (ZXDH_GDMA_VF_EN | + ((rbp->dvfid - 1) << ZXDH_GDMA_VF_NUM_SHIFT)); + + ZXDH_PMD_LOG(DEBUG, "txq->qidx:%d setup dst_user(ep:%d pf:%d vf:%d) success", + queue_id, (uint8_t)rbp->dportid, (uint8_t)rbp->dpfid, + (uint8_t)rbp->dvfid); + } else { + ZXDH_PMD_LOG(ERR, "Failed to setup queue, srbp/drbp is invalid"); + return -EINVAL; + } + queue->is_txq = is_txq; + + /* setup queue user info */ + queue->user = (src_user & LOW16_MASK) | (dst_user << 16); + + zxdh_gdma_write_reg(dev, queue_id, ZXDH_GDMA_EXT_ADDR_OFFSET, queue->user); + gdmadev->used_num++; + + return queue_id; +} + static const struct rte_rawdev_ops zxdh_gdma_rawdev_ops = { + .queue_setup = zxdh_gdma_rawdev_queue_setup, }; +static int +zxdh_gdma_queue_init(struct rte_rawdev *dev, uint16_t queue_id) +{ + char name[RTE_RAWDEV_NAME_MAX_LEN]; + struct zxdh_gdma_queue *queue = NULL; + const struct rte_memzone *mz = NULL; + uint32_t size = 0; + int ret = 0; + + queue = zxdh_gdma_get_queue(dev, queue_id); + if (queue == NULL) + return -EINVAL; + + queue->enable = 1; + queue->vq_id = queue_id; + queue->flag = 0; + queue->tc_cnt = 0; + + /* Init sw_ring */ + memset(name, 0, sizeof(name)); + snprintf(name, RTE_RAWDEV_NAME_MAX_LEN, "gdma_vq%d_sw_ring", queue_id); + size = queue->queue_size * sizeof(struct zxdh_gdma_job *); + queue->sw_ring.job = rte_zmalloc(name, size, 0); + if (queue->sw_ring.job == NULL) { + ZXDH_PMD_LOG(ERR, "can not allocate sw_ring %s", name); + ret = -ENOMEM; + goto free_queue; + } + + /* Cache up to size-1 job in the ring to prevent overwriting hardware prefetching */ + queue->sw_ring.free_cnt = queue->queue_size - 1; + 
queue->sw_ring.deq_cnt = 0; + queue->sw_ring.pend_cnt = 0; + queue->sw_ring.enq_idx = 0; + queue->sw_ring.deq_idx = 0; + queue->sw_ring.used_idx = 0; + + /* Init ring */ + memset(name, 0, sizeof(name)); + snprintf(name, RTE_RAWDEV_NAME_MAX_LEN, "gdma_vq%d_ring", queue_id); + size = ZXDH_GDMA_RING_SIZE * sizeof(struct zxdh_gdma_buff_desc); + mz = rte_memzone_reserve_aligned(name, size, rte_socket_id(), + RTE_MEMZONE_IOVA_CONTIG, size); + if (mz == NULL) { + if (rte_errno == EEXIST) + mz = rte_memzone_lookup(name); + if (mz == NULL) { + ZXDH_PMD_LOG(ERR, "can not allocate ring %s", name); + ret = -ENOMEM; + goto free_queue; + } + } + memset(mz->addr, 0, size); + queue->ring.ring_mz = mz; + queue->ring.desc = (struct zxdh_gdma_buff_desc *)(mz->addr); + queue->ring.ring_mem = mz->iova; + queue->ring.avail_idx = 0; + ZXDH_PMD_LOG(INFO, "queue%u ring phy addr:0x%"PRIx64" virt addr:%p", + queue_id, mz->iova, mz->addr); + + /* Initialize the hardware channel */ + zxdh_gdma_write_reg(dev, queue_id, ZXDH_GDMA_CONTROL_OFFSET, + ZXDH_GDMA_CHAN_FORCE_CLOSE); + zxdh_gdma_write_reg(dev, queue_id, ZXDH_GDMA_TC_CNT_OFFSET, + ZXDH_GDMA_ERR_INTR_ENABLE | ZXDH_GDMA_ERR_STATUS | ZXDH_GDMA_TC_CNT_CLEAN); + zxdh_gdma_write_reg(dev, queue_id, ZXDH_GDMA_LLI_USER_OFFSET, + ZXDH_GDMA_ZF_USER); + + return 0; + +free_queue: + zxdh_gdma_queue_free(dev, queue_id); + return ret; +} + +static int +zxdh_gdma_queue_free(struct rte_rawdev *dev, uint16_t queue_id) +{ + struct zxdh_gdma_rawdev *gdmadev = NULL; + struct zxdh_gdma_queue *queue = NULL; + uint32_t val = 0; + + queue = zxdh_gdma_get_queue(dev, queue_id); + if (queue == NULL) + return -EINVAL; + + gdmadev = zxdh_gdma_rawdev_get_priv(dev); + gdmadev->used_num--; + + /* disable gdma channel */ + val = ZXDH_GDMA_CHAN_FORCE_CLOSE; + zxdh_gdma_write_reg(dev, queue_id, ZXDH_GDMA_CONTROL_OFFSET, val); + + queue->enable = 0; + queue->is_txq = 0; + queue->flag = 0; + queue->user = 0; + queue->tc_cnt = 0; + queue->ring.avail_idx = 0; + queue->sw_ring.free_cnt = 0; + queue->sw_ring.deq_cnt = 0; + queue->sw_ring.pend_cnt = 0; + queue->sw_ring.enq_idx = 0; + queue->sw_ring.deq_idx = 0; + queue->sw_ring.used_idx = 0; + + if (queue->sw_ring.job != NULL) + rte_free(queue->sw_ring.job); + + if (queue->ring.ring_mz != NULL) + rte_memzone_free(queue->ring.ring_mz); + + return 0; +} + static int zxdh_gdma_map_resource(struct rte_pci_device *dev) { diff --git a/drivers/raw/zxdh/zxdh_rawdev.h b/drivers/raw/zxdh/zxdh_rawdev.h index b4d977ce54..e9e7038560 100644 --- a/drivers/raw/zxdh/zxdh_rawdev.h +++ b/drivers/raw/zxdh/zxdh_rawdev.h @@ -102,6 +102,25 @@ struct zxdh_gdma_rawdev { struct zxdh_gdma_queue vqs[ZXDH_GDMA_TOTAL_CHAN_NUM]; }; +struct zxdh_gdma_rbp { + uint32_t use_ultrashort:1; + uint32_t enable:1; + uint32_t dportid:3; + uint32_t dpfid:3; + uint32_t dvfid:8; /* using route-by-port for the destination */ + uint32_t drbp:1; + uint32_t sportid:3; + uint32_t spfid:3; + uint32_t svfid:8; + uint32_t srbp:1; +}; + +struct zxdh_gdma_queue_config { + uint32_t lcore_id; + uint32_t flags; + struct zxdh_gdma_rbp *rbp; +}; + static inline struct zxdh_gdma_rawdev * zxdh_gdma_rawdev_get_priv(const struct rte_rawdev *rawdev) {

From patchwork Mon Aug 12 07:31:28 2024
X-Patchwork-Submitter: Yong Zhang <zhang.yong25@zte.com.cn>
X-Patchwork-Id: 143067
X-Patchwork-Delegate: thomas@monjalon.net
From: Yong Zhang <zhang.yong25@zte.com.cn>
To: dev@dpdk.org, stephen@networkplumber.org, david.marchand@redhat.com
Cc: Yong Zhang <zhang.yong25@zte.com.cn>
Subject: [v2 3/5] raw/zxdh: add support for standard rawdev operations
Date: Mon, 12 Aug 2024 15:31:28 +0800
Message-ID: <20240812073209.1924286-5-zhang.yong25@zte.com.cn>
In-Reply-To: <20240812073209.1924286-1-zhang.yong25@zte.com.cn>
References: <20240812073209.1924286-1-zhang.yong25@zte.com.cn>

Add support for rawdev operations such as dev_start and dev_stop.
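For reference, a minimal bring-up sketch from the application side, assuming dev_id was obtained as in patch 1; the helper name and the queue count are illustrative assumptions, and teardown error handling is omitted:

#include <rte_rawdev.h>

#include "zxdh_rawdev.h"

static int gdma_dev_bringup(uint16_t dev_id)
{
	struct zxdh_gdma_config cfg = {
		.max_vqs = 2,	/* must not exceed ZXDH_GDMA_TOTAL_CHAN_NUM */
	};
	struct rte_rawdev_info conf = {
		.dev_private = &cfg,	/* handed to zxdh_gdma_rawdev_configure() */
	};

	if (rte_rawdev_configure(dev_id, &conf, sizeof(cfg)) != 0)
		return -1;

	/* moves the device to ZXDH_GDMA_DEV_RUNNING; enqueue requires this */
	return rte_rawdev_start(dev_id);
}

/* teardown mirrors it: rte_rawdev_stop(dev_id); rte_rawdev_close(dev_id); */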
Signed-off-by: Yong Zhang --- drivers/raw/zxdh/zxdh_rawdev.c | 136 ++++++++++++++++++++++++++++++++- drivers/raw/zxdh/zxdh_rawdev.h | 10 +++ 2 files changed, 145 insertions(+), 1 deletion(-) -- 2.43.0 diff --git a/drivers/raw/zxdh/zxdh_rawdev.c b/drivers/raw/zxdh/zxdh_rawdev.c index a76e3cda39..363011dfcc 100644 --- a/drivers/raw/zxdh/zxdh_rawdev.c +++ b/drivers/raw/zxdh/zxdh_rawdev.c @@ -111,6 +111,96 @@ zxdh_gdma_write_reg(struct rte_rawdev *dev, uint16_t queue_id, uint32_t offset, *(uint32_t *)(gdmadev->base_addr + addr) = val; } +static int +zxdh_gdma_rawdev_info_get(struct rte_rawdev *dev, + __rte_unused rte_rawdev_obj_t dev_info, + __rte_unused size_t dev_info_size) +{ + if (dev == NULL) + return -EINVAL; + + return 0; +} + +static int +zxdh_gdma_rawdev_configure(const struct rte_rawdev *dev, + rte_rawdev_obj_t config, + size_t config_size) +{ + struct zxdh_gdma_config *gdma_config = NULL; + + if ((dev == NULL) || + (config == NULL) || + (config_size != sizeof(struct zxdh_gdma_config))) + return -EINVAL; + + gdma_config = (struct zxdh_gdma_config *)config; + if (gdma_config->max_vqs > ZXDH_GDMA_TOTAL_CHAN_NUM) { + ZXDH_PMD_LOG(ERR, "gdma supports up to %d queues", ZXDH_GDMA_TOTAL_CHAN_NUM); + return -EINVAL; + } + + return 0; +} + +static int +zxdh_gdma_rawdev_start(struct rte_rawdev *dev) +{ + struct zxdh_gdma_rawdev *gdmadev = NULL; + + if (dev == NULL) + return -EINVAL; + + gdmadev = zxdh_gdma_rawdev_get_priv(dev); + gdmadev->device_state = ZXDH_GDMA_DEV_RUNNING; + + return 0; +} + +static void +zxdh_gdma_rawdev_stop(struct rte_rawdev *dev) +{ + struct zxdh_gdma_rawdev *gdmadev = NULL; + + if (dev == NULL) + return; + + gdmadev = zxdh_gdma_rawdev_get_priv(dev); + gdmadev->device_state = ZXDH_GDMA_DEV_STOPPED; +} + +static int +zxdh_gdma_rawdev_reset(struct rte_rawdev *dev) +{ + if (dev == NULL) + return -EINVAL; + + return 0; +} + +static int +zxdh_gdma_rawdev_close(struct rte_rawdev *dev) +{ + struct zxdh_gdma_rawdev *gdmadev = NULL; + struct zxdh_gdma_queue *queue = NULL; + uint16_t queue_id = 0; + + if (dev == NULL) + return -EINVAL; + + for (queue_id = 0; queue_id < ZXDH_GDMA_TOTAL_CHAN_NUM; queue_id++) { + queue = zxdh_gdma_get_queue(dev, queue_id); + if ((queue == NULL) || (queue->enable == 0)) + continue; + + zxdh_gdma_queue_free(dev, queue_id); + } + gdmadev = zxdh_gdma_rawdev_get_priv(dev); + gdmadev->device_state = ZXDH_GDMA_DEV_STOPPED; + + return 0; +} + static int zxdh_gdma_rawdev_queue_setup(struct rte_rawdev *dev, uint16_t queue_id, @@ -192,8 +282,52 @@ zxdh_gdma_rawdev_queue_setup(struct rte_rawdev *dev, return queue_id; } +static int +zxdh_gdma_rawdev_queue_release(struct rte_rawdev *dev, uint16_t queue_id) +{ + struct zxdh_gdma_queue *queue = NULL; + + if (dev == NULL) + return -EINVAL; + + queue = zxdh_gdma_get_queue(dev, queue_id); + if ((queue == NULL) || (queue->enable == 0)) + return -EINVAL; + + zxdh_gdma_queue_free(dev, queue_id); + + return 0; +} + +static int +zxdh_gdma_rawdev_get_attr(struct rte_rawdev *dev, + __rte_unused const char *attr_name, + uint64_t *attr_value) +{ + struct zxdh_gdma_rawdev *gdmadev = NULL; + struct zxdh_gdma_attr *gdma_attr = NULL; + + if ((dev == NULL) || (attr_value == NULL)) + return -EINVAL; + + gdmadev = zxdh_gdma_rawdev_get_priv(dev); + gdma_attr = (struct zxdh_gdma_attr *)attr_value; + gdma_attr->num_hw_queues = gdmadev->used_num; + + return 0; +} static const struct rte_rawdev_ops zxdh_gdma_rawdev_ops = { + .dev_info_get = zxdh_gdma_rawdev_info_get, + .dev_configure = zxdh_gdma_rawdev_configure, + .dev_start = 
zxdh_gdma_rawdev_start, + .dev_stop = zxdh_gdma_rawdev_stop, + .dev_close = zxdh_gdma_rawdev_close, + .dev_reset = zxdh_gdma_rawdev_reset, + + .queue_setup = zxdh_gdma_rawdev_queue_setup, + .queue_release = zxdh_gdma_rawdev_queue_release, + + .attr_get = zxdh_gdma_rawdev_get_attr, }; static int @@ -256,7 +390,7 @@ zxdh_gdma_queue_init(struct rte_rawdev *dev, uint16_t queue_id) ZXDH_PMD_LOG(INFO, "queue%u ring phy addr:0x%"PRIx64" virt addr:%p", queue_id, mz->iova, mz->addr); - /* Initialize the hardware channel */ + /* Configure the hardware channel to the initial state */ zxdh_gdma_write_reg(dev, queue_id, ZXDH_GDMA_CONTROL_OFFSET, ZXDH_GDMA_CHAN_FORCE_CLOSE); zxdh_gdma_write_reg(dev, queue_id, ZXDH_GDMA_TC_CNT_OFFSET, diff --git a/drivers/raw/zxdh/zxdh_rawdev.h b/drivers/raw/zxdh/zxdh_rawdev.h index e9e7038560..70a5fae499 100644 --- a/drivers/raw/zxdh/zxdh_rawdev.h +++ b/drivers/raw/zxdh/zxdh_rawdev.h @@ -102,6 +102,12 @@ struct zxdh_gdma_rawdev { struct zxdh_gdma_queue vqs[ZXDH_GDMA_TOTAL_CHAN_NUM]; }; +struct zxdh_gdma_config { + uint16_t max_hw_queues_per_core; + uint16_t max_vqs; + int fle_queue_pool_cnt; +}; + struct zxdh_gdma_rbp { uint32_t use_ultrashort:1; uint32_t enable:1; @@ -121,6 +127,10 @@ struct zxdh_gdma_queue_config { struct zxdh_gdma_rbp *rbp; }; +struct zxdh_gdma_attr { + uint16_t num_hw_queues; +}; + static inline struct zxdh_gdma_rawdev * zxdh_gdma_rawdev_get_priv(const struct rte_rawdev *rawdev) {

From patchwork Mon Aug 12 07:31:30 2024
X-Patchwork-Submitter: Yong Zhang <zhang.yong25@zte.com.cn>
X-Patchwork-Id: 143068
X-Patchwork-Delegate: thomas@monjalon.net
From: Yong Zhang <zhang.yong25@zte.com.cn>
To: dev@dpdk.org, stephen@networkplumber.org, david.marchand@redhat.com
Cc: Yong Zhang <zhang.yong25@zte.com.cn>
Subject: [v2 4/5] raw/zxdh: add support for enqueue operation
Date: Mon, 12 Aug 2024 15:31:30 +0800
Message-ID: <20240812073209.1924286-7-zhang.yong25@zte.com.cn>
In-Reply-To: <20240812073209.1924286-1-zhang.yong25@zte.com.cn>
References: <20240812073209.1924286-1-zhang.yong25@zte.com.cn>

Add rawdev enqueue operation for zxdh devices.
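A usage sketch for the enqueue path, assuming a started device and a vq_id returned by queue setup; src and dst must be addresses the GDMA can reach (IOVAs, i.e. physical addresses in PA mode), and the helper name is illustrative. The job is heap-allocated because the driver keeps the pointer in its sw_ring until the job is handed back by dequeue:

#include <errno.h>

#include <rte_malloc.h>
#include <rte_rawdev.h>

#include "zxdh_rawdev.h"

static int submit_copy(uint16_t dev_id, uint16_t vq_id,
		       uint64_t src, uint64_t dst, uint32_t len)
{
	struct zxdh_gdma_job *job = rte_zmalloc(NULL, sizeof(*job), 0);
	struct zxdh_gdma_job *jobs[1] = { job };
	struct zxdh_gdma_enqdeq ctx = {
		.vq_id = vq_id,
		.job = jobs,
	};

	if (job == NULL)
		return -ENOMEM;

	job->src = src;
	job->dest = dst;
	job->len = len;
	job->flags = 0;	/* 0: take the user field programmed at queue setup */

	/* the buffers argument is unused by this PMD, jobs travel in the
	 * context; returns the number of jobs accepted (0 or 1 here) */
	return rte_rawdev_enqueue_buffers(dev_id, NULL, 1, &ctx);
}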
Signed-off-by: Yong Zhang <zhang.yong25@zte.com.cn> --- drivers/raw/zxdh/zxdh_rawdev.c | 220 +++++++++++++++++++++++++++++++++ drivers/raw/zxdh/zxdh_rawdev.h | 19 +++ 2 files changed, 239 insertions(+) -- 2.43.0 diff --git a/drivers/raw/zxdh/zxdh_rawdev.c b/drivers/raw/zxdh/zxdh_rawdev.c index 363011dfcc..a878d42c03 100644 --- a/drivers/raw/zxdh/zxdh_rawdev.c +++ b/drivers/raw/zxdh/zxdh_rawdev.c @@ -51,10 +51,34 @@ /* Register offset */ #define ZXDH_GDMA_BASE_OFFSET 0x100000 #define ZXDH_GDMA_EXT_ADDR_OFFSET 0x218 +#define ZXDH_GDMA_SAR_LOW_OFFSET 0x200 +#define ZXDH_GDMA_DAR_LOW_OFFSET 0x204 +#define ZXDH_GDMA_SAR_HIGH_OFFSET 0x234 +#define ZXDH_GDMA_DAR_HIGH_OFFSET 0x238 +#define ZXDH_GDMA_XFERSIZE_OFFSET 0x208 #define ZXDH_GDMA_CONTROL_OFFSET 0x230 +#define ZXDH_GDMA_TC_STATUS_OFFSET 0x0 +#define ZXDH_GDMA_STATUS_CLEAN_OFFSET 0x80 +#define ZXDH_GDMA_LLI_L_OFFSET 0x21c +#define ZXDH_GDMA_LLI_H_OFFSET 0x220 +#define ZXDH_GDMA_CHAN_CONTINUE_OFFSET 0x224 #define ZXDH_GDMA_TC_CNT_OFFSET 0x23c #define ZXDH_GDMA_LLI_USER_OFFSET 0x228 +/* Control register */ +#define ZXDH_GDMA_CHAN_ENABLE 0x1 +#define ZXDH_GDMA_CHAN_DISABLE 0 +#define ZXDH_GDMA_SOFT_CHAN 0x2 +#define ZXDH_GDMA_TC_INTR_ENABLE 0x10 +#define ZXDH_GDMA_ALL_INTR_ENABLE 0x30 +#define ZXDH_GDMA_SBS_SHIFT 6 /* src burst size */ +#define ZXDH_GDMA_SBL_SHIFT 9 /* src burst length */ +#define ZXDH_GDMA_DBS_SHIFT 13 /* dest burst size */ +#define ZXDH_GDMA_BURST_SIZE_MIN 0x1 /* 1 byte */ +#define ZXDH_GDMA_BURST_SIZE_MEDIUM 0x4 /* 4 words */ +#define ZXDH_GDMA_BURST_SIZE_MAX 0x6 /* 16 words */ +#define ZXDH_GDMA_DEFAULT_BURST_LEN 0xf /* 16 beats */ +#define ZXDH_GDMA_TC_CNT_ENABLE (1 << 27) #define ZXDH_GDMA_CHAN_FORCE_CLOSE (1 << 31) /* TC count & Error interrupt status register */ @@ -66,9 +90,15 @@ #define ZXDH_GDMA_TC_CNT_CLEAN (1) #define ZXDH_GDMA_CHAN_SHIFT 0x80 +#define ZXDH_GDMA_LINK_END_NODE (1 << 30) +#define ZXDH_GDMA_CHAN_CONTINUE (1) + #define LOW32_MASK 0xffffffff #define LOW16_MASK 0xffff +#define IDX_TO_ADDR(addr, idx, t) \ + ((t)((uintptr_t)(addr) + (idx) * sizeof(struct zxdh_gdma_buff_desc))) + static int zxdh_gdma_queue_init(struct rte_rawdev *dev, uint16_t queue_id); static int zxdh_gdma_queue_free(struct rte_rawdev *dev, uint16_t queue_id); @@ -316,6 +346,194 @@ zxdh_gdma_rawdev_get_attr(struct rte_rawdev *dev, return 0; } + +static inline void +zxdh_gdma_control_cal(uint32_t *val, uint8_t tc_enable) +{ + *val = (ZXDH_GDMA_CHAN_ENABLE | + ZXDH_GDMA_SOFT_CHAN | + (ZXDH_GDMA_DEFAULT_BURST_LEN << ZXDH_GDMA_SBL_SHIFT) | + (ZXDH_GDMA_BURST_SIZE_MAX << ZXDH_GDMA_SBS_SHIFT) | + (ZXDH_GDMA_BURST_SIZE_MAX << ZXDH_GDMA_DBS_SHIFT)); + + if (tc_enable != 0) + *val |= ZXDH_GDMA_TC_CNT_ENABLE; +} + +static inline uint32_t +zxdh_gdma_user_get(struct zxdh_gdma_queue *queue, struct zxdh_gdma_job *job) +{ + uint32_t src_user = 0; + uint32_t dst_user = 0; + + if ((job->flags & ZXDH_GDMA_JOB_DIR_MASK) == 0) { + ZXDH_PMD_LOG(DEBUG, "job flags:0x%x default user:0x%x", + job->flags, queue->user); + return queue->user; + } else if ((job->flags & ZXDH_GDMA_JOB_DIR_TX) != 0) { + src_user = ZXDH_GDMA_ZF_USER; + dst_user = ((job->pf_id << ZXDH_GDMA_PF_NUM_SHIFT) | + ((job->ep_id + ZXDH_GDMA_EPID_OFFSET) << ZXDH_GDMA_EP_ID_SHIFT)); + + if (job->vf_id != 0) + dst_user |= (ZXDH_GDMA_VF_EN | + ((job->vf_id - 1) << ZXDH_GDMA_VF_NUM_SHIFT)); + } else { + dst_user = ZXDH_GDMA_ZF_USER; + src_user = ((job->pf_id << ZXDH_GDMA_PF_NUM_SHIFT) | + ((job->ep_id + 
ZXDH_GDMA_EPID_OFFSET) << ZXDH_GDMA_EP_ID_SHIFT)); + + if (job->vf_id != 0) + src_user |= (ZXDH_GDMA_VF_EN | + ((job->vf_id - 1) << ZXDH_GDMA_VF_NUM_SHIFT)); + } + ZXDH_PMD_LOG(DEBUG, "job flags:0x%x ep_id:%u, pf_id:%u, vf_id:%u, user:0x%x", + job->flags, job->ep_id, job->pf_id, job->vf_id, + (src_user & LOW16_MASK) | (dst_user << 16)); + + return (src_user & LOW16_MASK) | (dst_user << 16); +} + +static inline void +zxdh_gdma_fill_bd(struct zxdh_gdma_queue *queue, struct zxdh_gdma_job *job) +{ + struct zxdh_gdma_buff_desc *bd = NULL; + uint32_t val = 0; + uint64_t next_bd_addr = 0; + uint16_t avail_idx = 0; + + avail_idx = queue->ring.avail_idx; + bd = &(queue->ring.desc[avail_idx]); + memset(bd, 0, sizeof(struct zxdh_gdma_buff_desc)); + + /* data bd */ + if (job != NULL) { + zxdh_gdma_control_cal(&val, 1); + next_bd_addr = IDX_TO_ADDR(queue->ring.ring_mem, + (avail_idx + 1) % ZXDH_GDMA_RING_SIZE, + uint64_t); + bd->SrcAddr_L = job->src & LOW32_MASK; + bd->DstAddr_L = job->dest & LOW32_MASK; + bd->SrcAddr_H = (job->src >> 32) & LOW32_MASK; + bd->DstAddr_H = (job->dest >> 32) & LOW32_MASK; + bd->Xpara = job->len; + bd->ExtAddr = zxdh_gdma_user_get(queue, job); + bd->LLI_Addr_L = (next_bd_addr >> 6) & LOW32_MASK; + bd->LLI_Addr_H = next_bd_addr >> 38; + bd->LLI_User = ZXDH_GDMA_ZF_USER; + bd->Control = val; + } else { + zxdh_gdma_control_cal(&val, 0); + next_bd_addr = IDX_TO_ADDR(queue->ring.ring_mem, avail_idx, uint64_t); + bd->ExtAddr = queue->user; + bd->LLI_User = ZXDH_GDMA_ZF_USER; + bd->Control = val; + bd->LLI_Addr_L = (next_bd_addr >> 6) & LOW32_MASK; + bd->LLI_Addr_H = (next_bd_addr >> 38) | ZXDH_GDMA_LINK_END_NODE; + if (queue->flag != 0) { + bd = IDX_TO_ADDR(queue->ring.desc, + queue->ring.last_avail_idx, + struct zxdh_gdma_buff_desc*); + next_bd_addr = IDX_TO_ADDR(queue->ring.ring_mem, + (queue->ring.last_avail_idx + 1) % ZXDH_GDMA_RING_SIZE, + uint64_t); + bd->LLI_Addr_L = (next_bd_addr >> 6) & LOW32_MASK; + bd->LLI_Addr_H = next_bd_addr >> 38; + rte_wmb(); + bd->LLI_Addr_H &= ~ZXDH_GDMA_LINK_END_NODE; + } + /* Record the index of empty bd for dynamic chaining */ + queue->ring.last_avail_idx = avail_idx; + } + + if (++avail_idx >= ZXDH_GDMA_RING_SIZE) + avail_idx -= ZXDH_GDMA_RING_SIZE; + + queue->ring.avail_idx = avail_idx; +} + +static int +zxdh_gdma_rawdev_enqueue_bufs(struct rte_rawdev *dev, + __rte_unused struct rte_rawdev_buf **buffers, + uint32_t count, + rte_rawdev_obj_t context) +{ + struct zxdh_gdma_rawdev *gdmadev = NULL; + struct zxdh_gdma_queue *queue = NULL; + struct zxdh_gdma_enqdeq *e_context = NULL; + struct zxdh_gdma_job *job = NULL; + uint16_t queue_id = 0; + uint32_t val = 0; + uint16_t i = 0; + uint16_t free_cnt = 0; + + if (dev == NULL) + return -EINVAL; + + if (unlikely((count < 1) || (context == NULL))) + return -EINVAL; + + gdmadev = zxdh_gdma_rawdev_get_priv(dev); + if (gdmadev->device_state == ZXDH_GDMA_DEV_STOPPED) { + ZXDH_PMD_LOG(ERR, "gdma dev is stop"); + return 0; + } + + e_context = (struct zxdh_gdma_enqdeq *)context; + queue_id = e_context->vq_id; + queue = zxdh_gdma_get_queue(dev, queue_id); + if ((queue == NULL) || (queue->enable == 0)) + return -EINVAL; + + free_cnt = queue->sw_ring.free_cnt; + if (free_cnt == 0) { + ZXDH_PMD_LOG(ERR, "queue %u is full, enq_idx:%u deq_idx:%u used_idx:%u", + queue_id, queue->sw_ring.enq_idx, + queue->sw_ring.deq_idx, queue->sw_ring.used_idx); + return 0; + } else if (free_cnt < count) { + ZXDH_PMD_LOG(DEBUG, "job num %u > free_cnt, change to %u", count, free_cnt); + count = free_cnt; + } + + 
rte_spinlock_lock(&queue->enqueue_lock); + + /* Build bd list, the last bd is empty bd */ + for (i = 0; i < count; i++) { + job = e_context->job[i]; + zxdh_gdma_fill_bd(queue, job); + } + zxdh_gdma_fill_bd(queue, NULL); + + if (unlikely(queue->flag == 0)) { + zxdh_gdma_write_reg(dev, queue_id, ZXDH_GDMA_LLI_L_OFFSET, + (queue->ring.ring_mem >> 6) & LOW32_MASK); + zxdh_gdma_write_reg(dev, queue_id, ZXDH_GDMA_LLI_H_OFFSET, + queue->ring.ring_mem >> 38); + /* Start hardware handling */ + zxdh_gdma_write_reg(dev, queue_id, ZXDH_GDMA_XFERSIZE_OFFSET, 0); + zxdh_gdma_control_cal(&val, 0); + zxdh_gdma_write_reg(dev, queue_id, ZXDH_GDMA_CONTROL_OFFSET, val); + queue->flag = 1; + } else { + val = ZXDH_GDMA_CHAN_CONTINUE; + zxdh_gdma_write_reg(dev, queue->vq_id, ZXDH_GDMA_CHAN_CONTINUE_OFFSET, val); + } + + /* job enqueue */ + for (i = 0; i < count; i++) { + queue->sw_ring.job[queue->sw_ring.enq_idx] = e_context->job[i]; + if (++queue->sw_ring.enq_idx >= queue->queue_size) + queue->sw_ring.enq_idx -= queue->queue_size; + + free_cnt--; + } + queue->sw_ring.free_cnt = free_cnt; + queue->sw_ring.pend_cnt += count; + rte_spinlock_unlock(&queue->enqueue_lock); + + return count; +} static const struct rte_rawdev_ops zxdh_gdma_rawdev_ops = { .dev_info_get = zxdh_gdma_rawdev_info_get, .dev_configure = zxdh_gdma_rawdev_configure, @@ -328,6 +546,8 @@ static const struct rte_rawdev_ops zxdh_gdma_rawdev_ops = { .queue_release = zxdh_gdma_rawdev_queue_release, .attr_get = zxdh_gdma_rawdev_get_attr, + + .enqueue_bufs = zxdh_gdma_rawdev_enqueue_bufs, }; static int diff --git a/drivers/raw/zxdh/zxdh_rawdev.h b/drivers/raw/zxdh/zxdh_rawdev.h index 70a5fae499..429ef90088 100644 --- a/drivers/raw/zxdh/zxdh_rawdev.h +++ b/drivers/raw/zxdh/zxdh_rawdev.h @@ -26,6 +26,20 @@ extern int zxdh_gdma_rawdev_logtype; #define ZXDH_GDMA_QUEUE_SIZE 16384 #define ZXDH_GDMA_RING_SIZE 32768 +/* States if the source addresses is physical. */ +#define ZXDH_GDMA_JOB_SRC_PHY (1UL) + +/* States if the destination addresses is physical. 
*/ +#define ZXDH_GDMA_JOB_DEST_PHY (1UL << 1) + +/* ZF->HOST */ +#define ZXDH_GDMA_JOB_DIR_TX (1UL << 2) + +/* HOST->ZF */ +#define ZXDH_GDMA_JOB_DIR_RX (1UL << 3) + +#define ZXDH_GDMA_JOB_DIR_MASK (ZXDH_GDMA_JOB_DIR_TX | ZXDH_GDMA_JOB_DIR_RX) + enum zxdh_gdma_device_state { ZXDH_GDMA_DEV_RUNNING, ZXDH_GDMA_DEV_STOPPED @@ -102,6 +116,11 @@ struct zxdh_gdma_rawdev { struct zxdh_gdma_queue vqs[ZXDH_GDMA_TOTAL_CHAN_NUM]; }; +struct zxdh_gdma_enqdeq { + uint16_t vq_id; + struct zxdh_gdma_job **job; +}; + struct zxdh_gdma_config { uint16_t max_hw_queues_per_core; uint16_t max_vqs;

From patchwork Mon Aug 12 07:31:32 2024
X-Patchwork-Submitter: Yong Zhang <zhang.yong25@zte.com.cn>
X-Patchwork-Id: 143069
X-Patchwork-Delegate: thomas@monjalon.net
From: Yong Zhang <zhang.yong25@zte.com.cn>
To: dev@dpdk.org, stephen@networkplumber.org, david.marchand@redhat.com
Cc: Yong Zhang <zhang.yong25@zte.com.cn>
Subject: [v2 5/5] raw/zxdh: add support for dequeue operation
Date: Mon, 12 Aug 2024 15:31:32 +0800
Message-ID: <20240812073209.1924286-9-zhang.yong25@zte.com.cn>
In-Reply-To: <20240812073209.1924286-1-zhang.yong25@zte.com.cn>
References: <20240812073209.1924286-1-zhang.yong25@zte.com.cn>

Add rawdev dequeue operation for zxdh devices.
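A matching completion-polling sketch, again with illustrative names: finished jobs come back through the context, and job->status is non-zero when the hardware flagged a descriptor error for that job (see zxdh_gdma_used_idx_update() below):

#include <stdio.h>

#include <rte_malloc.h>
#include <rte_rawdev.h>

#include "zxdh_rawdev.h"

#define DEQ_BURST 8

static int poll_completions(uint16_t dev_id, uint16_t vq_id)
{
	struct zxdh_gdma_job *done[DEQ_BURST];
	struct zxdh_gdma_enqdeq ctx = {
		.vq_id = vq_id,
		.job = done,
	};
	int nb, i;

	/* returns up to DEQ_BURST finished jobs; 0 means nothing new yet */
	nb = rte_rawdev_dequeue_buffers(dev_id, NULL, DEQ_BURST, &ctx);
	for (i = 0; i < nb; i++) {
		if (done[i]->status != 0)
			fprintf(stderr, "gdma job failed\n");
		rte_free(done[i]);	/* allocated at submit time */
	}

	return nb;
}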
Signed-off-by: Yong Zhang --- drivers/raw/zxdh/zxdh_rawdev.c | 113 +++++++++++++++++++++++++++++++++ 1 file changed, 113 insertions(+) -- 2.43.0 diff --git a/drivers/raw/zxdh/zxdh_rawdev.c b/drivers/raw/zxdh/zxdh_rawdev.c index a878d42c03..ccb1a241c4 100644 --- a/drivers/raw/zxdh/zxdh_rawdev.c +++ b/drivers/raw/zxdh/zxdh_rawdev.c @@ -96,6 +96,8 @@ #define LOW32_MASK 0xffffffff #define LOW16_MASK 0xffff +#define ZXDH_GDMA_TC_CNT_MAX 0x10000 + #define IDX_TO_ADDR(addr, idx, t) \ ((t)((uintptr_t)(addr) + (idx) * sizeof(struct zxdh_gdma_buff_desc))) @@ -534,6 +536,116 @@ zxdh_gdma_rawdev_enqueue_bufs(struct rte_rawdev *dev, return count; } + +static inline void +zxdh_gdma_used_idx_update(struct zxdh_gdma_queue *queue, uint16_t cnt, uint8_t data_bd_err) +{ + uint16_t idx = 0; + + if (queue->sw_ring.used_idx + cnt < queue->queue_size) + queue->sw_ring.used_idx += cnt; + else + queue->sw_ring.used_idx = queue->sw_ring.used_idx + cnt - queue->queue_size; + + if (data_bd_err == 1) { + /* Update job status, the last job status is error */ + if (queue->sw_ring.used_idx == 0) + idx = queue->queue_size - 1; + else + idx = queue->sw_ring.used_idx - 1; + + queue->sw_ring.job[idx]->status = 1; + } +} + +static int +zxdh_gdma_rawdev_dequeue_bufs(struct rte_rawdev *dev, + __rte_unused struct rte_rawdev_buf **buffers, + uint32_t count, + rte_rawdev_obj_t context) +{ + struct zxdh_gdma_queue *queue = NULL; + struct zxdh_gdma_enqdeq *e_context = NULL; + uint16_t queue_id = 0; + uint32_t val = 0; + uint16_t tc_cnt = 0; + uint16_t diff_cnt = 0; + uint16_t i = 0; + uint16_t bd_idx = 0; + uint64_t next_bd_addr = 0; + uint8_t data_bd_err = 0; + + if ((dev == NULL) || (context == NULL)) + return -EINVAL; + + e_context = (struct zxdh_gdma_enqdeq *)context; + queue_id = e_context->vq_id; + queue = zxdh_gdma_get_queue(dev, queue_id); + if ((queue == NULL) || (queue->enable == 0)) + return -EINVAL; + + if (queue->sw_ring.pend_cnt == 0) + goto deq_job; + + /* Get data transmit count */ + val = zxdh_gdma_read_reg(dev, queue_id, ZXDH_GDMA_TC_CNT_OFFSET); + tc_cnt = val & LOW16_MASK; + if (tc_cnt >= queue->tc_cnt) + diff_cnt = tc_cnt - queue->tc_cnt; + else + diff_cnt = tc_cnt + ZXDH_GDMA_TC_CNT_MAX - queue->tc_cnt; + + queue->tc_cnt = tc_cnt; + + /* Data transmit error, channel stopped */ + if ((val & ZXDH_GDMA_ERR_STATUS) != 0) { + next_bd_addr = zxdh_gdma_read_reg(dev, queue_id, ZXDH_GDMA_LLI_L_OFFSET); + next_bd_addr |= ((uint64_t)zxdh_gdma_read_reg(dev, queue_id, + ZXDH_GDMA_LLI_H_OFFSET) << 32); + next_bd_addr = next_bd_addr << 6; + bd_idx = (next_bd_addr - queue->ring.ring_mem) / sizeof(struct zxdh_gdma_buff_desc); + if ((val & ZXDH_GDMA_SRC_DATA_ERR) || (val & ZXDH_GDMA_DST_ADDR_ERR)) { + diff_cnt++; + data_bd_err = 1; + } + ZXDH_PMD_LOG(INFO, "queue%d is err(0x%x) next_bd_idx:%u ll_addr:0x%"PRIx64" def user:0x%x", + queue_id, val, bd_idx, next_bd_addr, queue->user); + + ZXDH_PMD_LOG(INFO, "Clean up error status"); + val = ZXDH_GDMA_ERR_STATUS | ZXDH_GDMA_ERR_INTR_ENABLE; + zxdh_gdma_write_reg(dev, queue_id, ZXDH_GDMA_TC_CNT_OFFSET, val); + + ZXDH_PMD_LOG(INFO, "Restart channel"); + zxdh_gdma_write_reg(dev, queue_id, ZXDH_GDMA_XFERSIZE_OFFSET, 0); + zxdh_gdma_control_cal(&val, 0); + zxdh_gdma_write_reg(dev, queue_id, ZXDH_GDMA_CONTROL_OFFSET, val); + } + + if (diff_cnt != 0) { + zxdh_gdma_used_idx_update(queue, diff_cnt, data_bd_err); + queue->sw_ring.deq_cnt += diff_cnt; + queue->sw_ring.pend_cnt -= diff_cnt; + } + +deq_job: + if (queue->sw_ring.deq_cnt == 0) + return 0; + else if (queue->sw_ring.deq_cnt < count) + 
count = queue->sw_ring.deq_cnt; + + queue->sw_ring.deq_cnt -= count; + + for (i = 0; i < count; i++) { + e_context->job[i] = queue->sw_ring.job[queue->sw_ring.deq_idx]; + queue->sw_ring.job[queue->sw_ring.deq_idx] = NULL; + if (++queue->sw_ring.deq_idx >= queue->queue_size) + queue->sw_ring.deq_idx -= queue->queue_size; + } + queue->sw_ring.free_cnt += count; + + return count; +} + static const struct rte_rawdev_ops zxdh_gdma_rawdev_ops = { .dev_info_get = zxdh_gdma_rawdev_info_get, .dev_configure = zxdh_gdma_rawdev_configure, @@ -548,6 +660,7 @@ static const struct rte_rawdev_ops zxdh_gdma_rawdev_ops = { .attr_get = zxdh_gdma_rawdev_get_attr, .enqueue_bufs = zxdh_gdma_rawdev_enqueue_bufs, + .dequeue_bufs = zxdh_gdma_rawdev_dequeue_bufs, }; static int