From patchwork Thu Oct 21 04:46:56 2021
X-Patchwork-Submitter: Apeksha Gupta
X-Patchwork-Id: 102503
From: Apeksha Gupta
To: david.marchand@redhat.com, andrew.rybchenko@oktetlabs.ru,
 ferruh.yigit@intel.com
Cc: dev@dpdk.org, sachin.saxena@nxp.com, hemant.agrawal@nxp.com,
 Apeksha Gupta
Date: Thu, 21 Oct 2021 10:16:56 +0530
Message-Id: <20211021044700.12370-2-apeksha.gupta@nxp.com>
In-Reply-To: <20211021044700.12370-1-apeksha.gupta@nxp.com>
References: <20211019184003.23128-2-apeksha.gupta@nxp.com>
 <20211021044700.12370-1-apeksha.gupta@nxp.com>
Subject: [dpdk-dev] [PATCH v6 1/5] net/enetfec: introduce NXP ENETFEC driver

ENETFEC (Fast Ethernet Controller) is a network poll mode driver for the
NXP i.MX 8M Mini SoC. This patch adds the skeleton of the enetfec driver,
including the probe function.
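As context for how this vdev is expected to be used once the series is
complete (the driver guide below mandates only the --vdev argument; the
remaining dpdk-testpmd options here are illustrative):

    # minimal sketch: instantiate the ENETFEC vdev under dpdk-testpmd
    ./dpdk-testpmd --vdev=net_enetfec -- -i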
Signed-off-by: Sachin Saxena
Signed-off-by: Apeksha Gupta
Acked-by: Hemant Agrawal
---
v6:
- Fix document build errors
---
---
 MAINTAINERS                            |   7 +
 doc/guides/nics/enetfec.rst            | 131 ++++++++++++++
 doc/guides/nics/features/enetfec.ini   |   9 ++
 doc/guides/nics/index.rst              |   1 +
 doc/guides/rel_notes/release_21_11.rst |   4 +
 drivers/net/enetfec/enet_ethdev.c      |  85 ++++++++++
 drivers/net/enetfec/enet_ethdev.h      | 179 +++++++++++++++++++
 drivers/net/enetfec/enet_pmd_logs.h    |  31 ++++
 drivers/net/enetfec/meson.build        |  11 ++
 drivers/net/enetfec/version.map        |   3 +
 drivers/net/meson.build                |   1 +
 11 files changed, 462 insertions(+)
 create mode 100644 doc/guides/nics/enetfec.rst
 create mode 100644 doc/guides/nics/features/enetfec.ini
 create mode 100644 drivers/net/enetfec/enet_ethdev.c
 create mode 100644 drivers/net/enetfec/enet_ethdev.h
 create mode 100644 drivers/net/enetfec/enet_pmd_logs.h
 create mode 100644 drivers/net/enetfec/meson.build
 create mode 100644 drivers/net/enetfec/version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index 8dceb6c0e0..db2df484d0 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -876,6 +876,13 @@ F: drivers/net/enetc/
 F: doc/guides/nics/enetc.rst
 F: doc/guides/nics/features/enetc.ini
 
+NXP enetfec
+M: Apeksha Gupta
+M: Sachin Saxena
+F: drivers/net/enetfec/
+F: doc/guides/nics/enetfec.rst
+F: doc/guides/nics/features/enetfec.ini
+
 NXP pfe
 M: Gagandeep Singh
 F: doc/guides/nics/pfe.rst
diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
new file mode 100644
index 0000000000..dfcd032098
--- /dev/null
+++ b/doc/guides/nics/enetfec.rst
@@ -0,0 +1,131 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright 2021 NXP
+
+ENETFEC Poll Mode Driver
+========================
+
+The ENETFEC NIC PMD (**librte_net_enetfec**) provides poll mode driver
+support for the inbuilt NIC found in the **NXP i.MX 8M Mini** SoC.
+
+More information can be found at the NXP official website.
+
+
+ENETFEC
+-------
+
+This section provides an overview of the NXP ENETFEC and how it is
+integrated into the DPDK.
+
+Contents summary
+
+- ENETFEC overview
+- ENETFEC features
+- Supported ENETFEC SoCs
+- Prerequisites
+- Driver compilation and testing
+- Limitations
+
+ENETFEC Overview
+~~~~~~~~~~~~~~~~
+The i.MX 8M Mini Media Applications Processor is built to achieve both
+high performance and low power consumption. The ENETFEC PMD drives a
+hardware-programmable packet forwarding engine that provides a high
+performance Ethernet interface. It has a single 1 Gbps Ethernet interface
+with RJ45 connector.
+
+The diagram below shows a system level overview of ENETFEC:
+
+  .. code-block:: console
+
+   =====================================================
+   Userspace
+          +-----------------------------------------+
+          |             ENETFEC Driver              |
+          |        +-------------------------+      |
+          |        | virtual ethernet device |      |
+          +-----------------------------------------+
+                          ^   |
+                          |   |
+                          |   |
+                    RXQ   |   |   TXQ
+                          |   |
+                          |   v
+   =====================================================
+   Kernel Space
+                       +---------+
+                       | fec-uio |
+   ====================+=========+======================
+   Hardware
+          +-----------------------------------------+
+          |             i.MX 8M MINI EVK            |
+          |               +-----+                   |
+          |               | MAC |                   |
+          +---------------+-----+-------------------+
+                          | PHY |
+                          +-----+
+
+The ENETFEC Ethernet driver is a traditional DPDK PMD running in
+userspace. 'fec-uio' is the kernel driver. The MAC and PHY are the
+hardware blocks. The ENETFEC PMD uses the standard UIO interface to
+access the kernel for PHY initialisation and for mapping the allocated
+memory of the registers and buffer descriptors into DPDK, which gives
+access to non-cacheable memory for the buffer descriptors. net_enetfec
+is the logical Ethernet interface created by the ENETFEC driver.
+
+- The ENETFEC driver registers the device with the virtual device driver.
+- The RTE framework scans and invokes the probe function of the ENETFEC
+  driver.
+- The probe function sets the basic device registers and also sets up the
+  BD rings.
+- On packet Rx, the respective BD ring status bit is set, which is then
+  used for packet processing.
+- Tx is then done first, followed by Rx, via the logical interfaces.
+
+ENETFEC Features
+~~~~~~~~~~~~~~~~~
+
+- Linux
+- ARMv8
+
+Supported ENETFEC SoCs
+~~~~~~~~~~~~~~~~~~~~~~
+
+- i.MX 8M Mini
+
+Prerequisites
+~~~~~~~~~~~~~
+
+There are four main prerequisites for executing the ENETFEC PMD on an
+i.MX 8M Mini compatible board:
+
+1. **ARM 64 Tool Chain**
+
+   For example, the *aarch64* Linaro Toolchain.
+
+2. **Linux Kernel**
+
+   It can be obtained from NXP's Github hosting.
+
+   .. note::
+
+      Branch is 'lf-5.10.y'
+
+3. **Root file system**
+
+   Any *aarch64* supporting filesystem can be used. For example,
+   Ubuntu 18.04 LTS (Bionic) or 20.04 LTS (Focal) userland.
+
+4. The Ethernet device will be registered as a virtual device, so ENETFEC
+   depends on the **rte_bus_vdev** library and it is mandatory to use
+   ``--vdev`` with value ``net_enetfec`` to run a DPDK application.
+
+Driver compilation and testing
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Follow instructions available in the document
+:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+to launch **dpdk-testpmd**.
+
+Limitations
+~~~~~~~~~~~
+
+- Multi queue is not supported.
diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini
new file mode 100644
index 0000000000..bdfbdbd9d4
--- /dev/null
+++ b/doc/guides/nics/features/enetfec.ini
@@ -0,0 +1,9 @@
+;
+; Supported features of the 'enetfec' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux                = Y
+ARMv8                = Y
+Usage doc            = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 784d5d39f6..777fdab4a0 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -26,6 +26,7 @@ Network Interface Controller Drivers
     e1000em
     ena
     enetc
+    enetfec
     enic
     fm10k
     hinic
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 3362c52a73..e964838967 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -20,6 +20,10 @@ DPDK Release 21.11
       ninja -C build doc
       xdg-open build/doc/guides/html/rel_notes/release_21_11.html
 
+* **Added NXP ENETFEC PMD.**
+
+  Added the new ENETFEC driver for the NXP IMX8MMEVK platform. See the
+  :doc:`../nics/enetfec` NIC driver guide for more details on this new driver.
 
 New Features
 ------------
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
new file mode 100644
index 0000000000..8a74fb5bf2
--- /dev/null
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -0,0 +1,85 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020-2021 NXP
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include "enet_ethdev.h"
+#include "enet_pmd_logs.h"
+
+#define ENETFEC_NAME_PMD	net_enetfec
+#define ENETFEC_CDEV_INVALID_FD	-1
+
+static int
+enetfec_eth_init(struct rte_eth_dev *dev)
+{
+	rte_eth_dev_probing_finish(dev);
+	return 0;
+}
+
+static int
+pmd_enetfec_probe(struct rte_vdev_device *vdev)
+{
+	struct rte_eth_dev *dev = NULL;
+	struct enetfec_private *fep;
+	const char *name;
+	int rc;
+
+	name = rte_vdev_device_name(vdev);
+	if (name == NULL)
+		return -EINVAL;
+	ENETFEC_PMD_LOG(INFO, "Initializing pmd_fec for %s", name);
+
+	dev = rte_eth_vdev_allocate(vdev, sizeof(*fep));
+	if (dev == NULL)
+		return -ENOMEM;
+
+	/* setup board info structure */
+	fep = dev->data->dev_private;
+	fep->dev = dev;
+	rc = enetfec_eth_init(dev);
+	if (rc)
+		goto failed_init;
+
+	return 0;
+
+failed_init:
+	ENETFEC_PMD_ERR("Failed to init");
+	return rc;
+}
+
+static int
+pmd_enetfec_remove(struct rte_vdev_device *vdev)
+{
+	struct rte_eth_dev *eth_dev = NULL;
+	int ret;
+
+	/* find the ethdev entry */
+	eth_dev = rte_eth_dev_allocated(rte_vdev_device_name(vdev));
+	if (eth_dev == NULL)
+		return -ENODEV;
+
+	ret = rte_eth_dev_release_port(eth_dev);
+	if (ret != 0)
+		return -EINVAL;
+
+	ENETFEC_PMD_INFO("Closing sw device");
+	return 0;
+}
+
+static struct rte_vdev_driver pmd_enetfec_drv = {
+	.probe = pmd_enetfec_probe,
+	.remove = pmd_enetfec_remove,
+};
+
+RTE_PMD_REGISTER_VDEV(ENETFEC_NAME_PMD, pmd_enetfec_drv);
+RTE_LOG_REGISTER_DEFAULT(enetfec_logtype_pmd, NOTICE);
diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
new file mode 100644
index 0000000000..c674dfc782
--- /dev/null
+++ b/drivers/net/enetfec/enet_ethdev.h
@@ -0,0 +1,179 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020-2021 NXP
+ */
+
+#ifndef __ENETFEC_ETHDEV_H__
+#define __ENETFEC_ETHDEV_H__
+
+#include
+
+/*
+ * ENETFEC with AVB IP can support maximum 3 rx and tx queues.
+ */
+#define ENETFEC_MAX_Q		3
+
+#define ETHER_ADDR_LEN		6
+#define BD_LEN			49152
+#define ENETFEC_TX_FR_SIZE	2048
+#define MAX_TX_BD_RING_SIZE	512	/* It should be a power of 2 */
+#define MAX_RX_BD_RING_SIZE	512
+
+/* full duplex or half duplex */
+#define HALF_DUPLEX		0x00
+#define FULL_DUPLEX		0x01
+#define UNKNOWN_DUPLEX		0xff
+
+#define PKT_MAX_BUF_SIZE	1984
+#define OPT_FRAME_SIZE		(PKT_MAX_BUF_SIZE << 16)
+#define ETH_ALEN		RTE_ETHER_ADDR_LEN
+#define ETH_HLEN		RTE_ETHER_HDR_LEN
+#define VLAN_HLEN		4
+
+#define __iomem
+#if defined(RTE_ARCH_ARM)
+#if defined(RTE_ARCH_64)
+#define dcbf(p) { asm volatile("dc cvac, %0" : : "r"(p) : "memory"); }
+#define dcbf_64(p) dcbf(p)
+
+#else /* RTE_ARCH_32 */
+#define dcbf(p) RTE_SET_USED(p)
+#define dcbf_64(p) dcbf(p)
+#endif
+
+#else
+#define dcbf(p) RTE_SET_USED(p)
+#define dcbf_64(p) dcbf(p)
+#endif
+
+/* Required types */
+typedef uint8_t		u8;
+typedef uint16_t	u16;
+typedef uint32_t	u32;
+typedef uint64_t	u64;
+
+struct bufdesc {
+	uint16_t	bd_datlen;	/* buffer data length */
+	uint16_t	bd_sc;		/* buffer control & status */
+	uint32_t	bd_bufaddr;	/* buffer address */
+};
+
+struct bufdesc_ex {
+	struct bufdesc	desc;
+	uint32_t	bd_esc;
+	uint32_t	bd_prot;
+	uint32_t	bd_bdu;
+	uint32_t	ts;
+	uint16_t	res0[4];
+};
+
+struct bufdesc_prop {
+	int		queue_id;
+	/* Addresses of Tx and Rx buffers */
+	struct bufdesc	*base;
+	struct bufdesc	*last;
+	struct bufdesc	*cur;
+	void __iomem	*active_reg_desc;
+	uint64_t	descr_baseaddr_p;
+	unsigned short	ring_size;
+	unsigned char	d_size;
+	unsigned char	d_size_log2;
+};
+
+struct enetfec_priv_tx_q {
+	struct bufdesc_prop	bd;
+	struct rte_mbuf		*tx_mbuf[MAX_TX_BD_RING_SIZE];
+	struct bufdesc		*dirty_tx;
+	struct rte_mempool	*pool;
+	struct enetfec_private	*fep;
+};
+
+struct enetfec_priv_rx_q {
+	struct bufdesc_prop	bd;
+	struct rte_mbuf		*rx_mbuf[MAX_RX_BD_RING_SIZE];
+	struct rte_mempool	*pool;
+	struct enetfec_private	*fep;
+};
+
+/* Buffer descriptors of the FEC are used to track the ring buffers. The
+ * descriptor ring base is x_bd_base and the currently available descriptor
+ * is x_cur, where x is rx or tx. dirty_tx tracks the descriptor currently
+ * being sent by the controller.
+ * tx_cur and dirty_tx are equal under both completely full and completely
+ * empty conditions; the actual state is determined by the empty & ready bits.
+ */
+struct enetfec_private {
+	struct rte_eth_dev	*dev;
+	struct rte_eth_stats	stats;
+	struct rte_mempool	*pool;
+	uint16_t	max_rx_queues;
+	uint16_t	max_tx_queues;
+	unsigned int	total_tx_ring_size;
+	unsigned int	total_rx_ring_size;
+	bool		bufdesc_ex;
+	unsigned int	tx_align;
+	unsigned int	rx_align;
+	int		full_duplex;
+	unsigned int	phy_speed;
+	uint32_t	quirks;
+	int		flag_csum;
+	int		flag_pause;
+	int		flag_wol;
+	bool		rgmii_txc_delay;
+	bool		rgmii_rxc_delay;
+	int		link;
+	void		*hw_baseaddr_v;
+	uint64_t	hw_baseaddr_p;
+	void		*bd_addr_v;
+	uint64_t	bd_addr_p;
+	uint64_t	bd_addr_p_r[ENETFEC_MAX_Q];
+	uint64_t	bd_addr_p_t[ENETFEC_MAX_Q];
+	void		*dma_baseaddr_r[ENETFEC_MAX_Q];
+	void		*dma_baseaddr_t[ENETFEC_MAX_Q];
+	uint64_t	cbus_size;
+	unsigned int	reg_size;
+	unsigned int	bd_size;
+	int		hw_ts_rx_en;
+	int		hw_ts_tx_en;
+	struct enetfec_priv_rx_q *rx_queues[ENETFEC_MAX_Q];
+	struct enetfec_priv_tx_q *tx_queues[ENETFEC_MAX_Q];
+};
+
+#define writel(v, p) ({*(volatile unsigned int *)(p) = (v); })
+#define readl(p) rte_read32(p)
+
+static inline struct
+bufdesc *enet_get_nextdesc(struct bufdesc *bdp, struct bufdesc_prop *bd)
+{
+	return (bdp >= bd->last) ? bd->base
+		: (struct bufdesc *)(((uintptr_t)bdp) + bd->d_size);
+}
+
+static inline struct
+bufdesc *enet_get_prevdesc(struct bufdesc *bdp, struct bufdesc_prop *bd)
+{
+	return (bdp <= bd->base) ? bd->last
+		: (struct bufdesc *)(((uintptr_t)bdp) - bd->d_size);
+}
+
+static inline int
+enet_get_bd_index(struct bufdesc *bdp, struct bufdesc_prop *bd)
+{
+	return ((const char *)bdp - (const char *)bd->base) >> bd->d_size_log2;
+}
+
+static inline int
+fls64(unsigned long word)
+{
+	return (64 - __builtin_clzl(word)) - 1;
+}
+
+uint16_t enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
+		uint16_t nb_pkts);
+uint16_t enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		uint16_t nb_pkts);
+struct bufdesc *enet_get_nextdesc(struct bufdesc *bdp,
+		struct bufdesc_prop *bd);
+int enet_new_rxbdp(struct enetfec_private *fep, struct bufdesc *bdp,
+		struct rte_mbuf *mbuf);
+
+#endif /*__ENETFEC_ETHDEV_H__*/
diff --git a/drivers/net/enetfec/enet_pmd_logs.h b/drivers/net/enetfec/enet_pmd_logs.h
new file mode 100644
index 0000000000..e7b3964a0e
--- /dev/null
+++ b/drivers/net/enetfec/enet_pmd_logs.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020-2021 NXP
+ */
+
+#ifndef _ENETFEC_LOGS_H_
+#define _ENETFEC_LOGS_H_
+
+extern int enetfec_logtype_pmd;
+
+/* PMD related logs */
+#define ENETFEC_PMD_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, enetfec_logtype_pmd, "\nfec_net: %s()" \
+		fmt "\n", __func__, ##args)
+
+#define PMD_INIT_FUNC_TRACE() ENETFEC_PMD_LOG(DEBUG, " >>")
+
+#define ENETFEC_PMD_DEBUG(fmt, args...) \
+	ENETFEC_PMD_LOG(DEBUG, fmt, ## args)
+#define ENETFEC_PMD_ERR(fmt, args...) \
+	ENETFEC_PMD_LOG(ERR, fmt, ## args)
+#define ENETFEC_PMD_INFO(fmt, args...) \
+	ENETFEC_PMD_LOG(INFO, fmt, ## args)
+
+#define ENETFEC_PMD_WARN(fmt, args...) \
+	ENETFEC_PMD_LOG(WARNING, fmt, ## args)
+
+/* DP Logs, toggled out at compile time if level lower than current level */
+#define ENETFEC_DP_LOG(level, fmt, args...) \
+	RTE_LOG_DP(level, PMD, fmt, ## args)
+
+#endif /* _ENETFEC_LOGS_H_ */
diff --git a/drivers/net/enetfec/meson.build b/drivers/net/enetfec/meson.build
new file mode 100644
index 0000000000..79dca58dea
--- /dev/null
+++ b/drivers/net/enetfec/meson.build
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright 2021 NXP
+
+if not is_linux
+    build = false
+    reason = 'only supported on linux'
+endif
+
+sources = files('enet_ethdev.c',
+        'enet_uio.c',
+        'enet_rxtx.c')
diff --git a/drivers/net/enetfec/version.map b/drivers/net/enetfec/version.map
new file mode 100644
index 0000000000..b66517b171
--- /dev/null
+++ b/drivers/net/enetfec/version.map
@@ -0,0 +1,3 @@
+DPDK_22 {
+	local: *;
+};
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index 24ad121fe4..ac294d8507 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -18,6 +18,7 @@ drivers = [
         'e1000',
         'ena',
         'enetc',
+        'enetfec',
         'enic',
         'failsafe',
         'fm10k',

From patchwork Thu Oct 21 04:46:57 2021
X-Patchwork-Submitter: Apeksha Gupta
X-Patchwork-Id: 102504
From: Apeksha Gupta
To: david.marchand@redhat.com, andrew.rybchenko@oktetlabs.ru,
 ferruh.yigit@intel.com
Cc: dev@dpdk.org, sachin.saxena@nxp.com, hemant.agrawal@nxp.com,
 Apeksha Gupta
Date: Thu, 21 Oct 2021 10:16:57 +0530
Message-Id: <20211021044700.12370-3-apeksha.gupta@nxp.com>
In-Reply-To: <20211021044700.12370-1-apeksha.gupta@nxp.com>
References: <20211019184003.23128-2-apeksha.gupta@nxp.com>
 <20211021044700.12370-1-apeksha.gupta@nxp.com>
Subject: [dpdk-dev] [PATCH v6 2/5] net/enetfec: add UIO support

The fec-uio driver is implemented in the kernel. The enetfec PMD uses the
UIO interface to interact with the fec-uio kernel driver for PHY
initialisation and for mapping the allocated memory of the registers and
buffer descriptors from the kernel into DPDK, which gives access to
non-cacheable memory for the buffer descriptors.
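The mapping flow described above boils down to the standard UIO idiom. The
following condensed sketch (function and variable names are illustrative,
error handling is trimmed, and it assumes the fec-uio module has created
/dev/uioN) shows the two steps enet_uio.c performs for each map: read the
map size from sysfs, then mmap() the map through the UIO device file, where
the mmap offset map_id * page_size selects the map rather than a byte offset:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static void *uio_map_sketch(int uio_num, int map_id,
            unsigned long long *size)
    {
        char path[128];
        unsigned long long sz;
        void *va;
        FILE *f;
        int fd, ok;

        /* /sys/class/uio/uioN/maps/mapM/size holds the region size in hex */
        snprintf(path, sizeof(path),
            "/sys/class/uio/uio%d/maps/map%d/size", uio_num, map_id);
        f = fopen(path, "r");
        if (f == NULL)
            return NULL;
        ok = (fscanf(f, "%llx", &sz) == 1);
        fclose(f);
        if (!ok)
            return NULL;

        snprintf(path, sizeof(path), "/dev/uio%d", uio_num);
        fd = open(path, O_RDWR);
        if (fd < 0)
            return NULL;

        /* offset selects map 0 (registers) or map 1 (BD memory) */
        va = mmap(NULL, sz, PROT_READ | PROT_WRITE, MAP_SHARED, fd,
            (off_t)map_id * getpagesize());
        close(fd); /* the mapping stays valid after close */
        if (va == MAP_FAILED)
            return NULL;
        *size = sz;
        return va;
    }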
Signed-off-by: Sachin Saxena
Signed-off-by: Apeksha Gupta
---
 drivers/net/enetfec/enet_ethdev.c | 236 +++++++++++++++++++++++++
 drivers/net/enetfec/enet_ethdev.h |   2 +
 drivers/net/enetfec/enet_regs.h   | 106 ++++++++++++
 drivers/net/enetfec/enet_uio.c    | 278 ++++++++++++++++++++++++++++++
 drivers/net/enetfec/enet_uio.h    |  64 +++++++
 5 files changed, 686 insertions(+)
 create mode 100644 drivers/net/enetfec/enet_regs.h
 create mode 100644 drivers/net/enetfec/enet_uio.c
 create mode 100644 drivers/net/enetfec/enet_uio.h

diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index 8a74fb5bf2..406a8db7f3 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -13,16 +13,221 @@
 #include
 #include
 #include
+#include
 #include "enet_ethdev.h"
 #include "enet_pmd_logs.h"
+#include "enet_regs.h"
+#include "enet_uio.h"
 
 #define ENETFEC_NAME_PMD	net_enetfec
 #define ENETFEC_CDEV_INVALID_FD	-1
+#define BIT(nr)			(1u << (nr))
+
+/* FEC receive acceleration */
+#define ENETFEC_RACC_IPDIS	BIT(1)
+#define ENETFEC_RACC_PRODIS	BIT(2)
+#define ENETFEC_RACC_SHIFT16	BIT(7)
+#define ENETFEC_RACC_OPTIONS	(ENETFEC_RACC_IPDIS | \
+					ENETFEC_RACC_PRODIS)
+
+#define ENETFEC_PAUSE_FLAG_AUTONEG	0x1
+#define ENETFEC_PAUSE_FLAG_ENABLE	0x2
+
+/* Pause frame field and FIFO threshold */
+#define ENETFEC_FCE	BIT(5)
+#define ENETFEC_RSEM_V	0x84
+#define ENETFEC_RSFL_V	16
+#define ENETFEC_RAEM_V	0x8
+#define ENETFEC_RAFL_V	0x8
+#define ENETFEC_OPD_V	0xFFF0
+
+#define NUM_OF_QUEUES	6
+
+uint32_t e_cntl;
+
+/*
+ * This function is called to start or restart the ENETFEC during a link
+ * change, transmit timeout, or to reconfigure the ENETFEC. The network
+ * packet processing for this device must be stopped before this call.
+ */
+static void
+enetfec_restart(struct rte_eth_dev *dev)
+{
+	struct enetfec_private *fep = dev->data->dev_private;
+	uint32_t temp_mac[2];
+	uint32_t rcntl = OPT_FRAME_SIZE | 0x04;
+	uint32_t ecntl = ENETFEC_ETHEREN;
+
+	/* default mac address */
+	struct rte_ether_addr addr = {
+		.addr_bytes = {0x1, 0x2, 0x3, 0x4, 0x5, 0x6} };
+	uint32_t val;
+
+	/*
+	 * enet-mac reset will reset mac address registers too,
+	 * so need to reconfigure it.
+	 */
+	memcpy(&temp_mac, addr.addr_bytes, ETH_ALEN);
+	rte_write32(rte_cpu_to_be_32(temp_mac[0]),
+		(uint8_t *)fep->hw_baseaddr_v + ENETFEC_PALR);
+	rte_write32(rte_cpu_to_be_32(temp_mac[1]),
+		(uint8_t *)fep->hw_baseaddr_v + ENETFEC_PAUR);
+
+	/* Clear any outstanding interrupt. */
+	writel(0xffffffff, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_EIR);
+
+	/* Enable MII mode */
+	if (fep->full_duplex == FULL_DUPLEX) {
+		/* FD enable */
+		rte_write32(rte_cpu_to_le_32(0x04),
+			(uint8_t *)fep->hw_baseaddr_v + ENETFEC_TCR);
+	} else {
+		/* No Rcv on Xmit */
+		rcntl |= 0x02;
+		rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TCR);
+	}
+
+	if (fep->quirks & QUIRK_RACC) {
+		val = rte_read32((uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
+		/* align IP header */
+		val |= ENETFEC_RACC_SHIFT16;
+		val &= ~ENETFEC_RACC_OPTIONS;
+		rte_write32(rte_cpu_to_le_32(val),
+			(uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
+		rte_write32(rte_cpu_to_le_32(PKT_MAX_BUF_SIZE),
+			(uint8_t *)fep->hw_baseaddr_v + ENETFEC_FRAME_TRL);
+	}
+
+	/*
+	 * The phy interface and speed need to get configured
+	 * differently on enet-mac.
+	 */
+	if (fep->quirks & QUIRK_HAS_ENETFEC_MAC) {
+		/* Enable flow control and length check */
+		rcntl |= 0x40000000 | 0x00000020;
+
+		/* RGMII, RMII or MII */
+		rcntl |= BIT(6);
+		ecntl |= BIT(5);
+	}
+
+	/* enable pause frame */
+	if ((fep->flag_pause & ENETFEC_PAUSE_FLAG_ENABLE) ||
+		((fep->flag_pause & ENETFEC_PAUSE_FLAG_AUTONEG)
+		/*&& ndev->phydev && ndev->phydev->pause*/)) {
+		rcntl |= ENETFEC_FCE;
+
+		/* set FIFO threshold parameter to reduce overrun */
+		rte_write32(rte_cpu_to_le_32(ENETFEC_RSEM_V),
+			(uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_SEM);
+		rte_write32(rte_cpu_to_le_32(ENETFEC_RSFL_V),
+			(uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_SFL);
+		rte_write32(rte_cpu_to_le_32(ENETFEC_RAEM_V),
+			(uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_AEM);
+		rte_write32(rte_cpu_to_le_32(ENETFEC_RAFL_V),
+			(uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_AFL);
+
+		/* OPD */
+		rte_write32(rte_cpu_to_le_32(ENETFEC_OPD_V),
+			(uint8_t *)fep->hw_baseaddr_v + ENETFEC_OPD);
+	} else {
+		rcntl &= ~ENETFEC_FCE;
+	}
+
+	rte_write32(rte_cpu_to_le_32(rcntl),
+		(uint8_t *)fep->hw_baseaddr_v + ENETFEC_RCR);
+
+	rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_IAUR);
+	rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_IALR);
+
+	if (fep->quirks & QUIRK_HAS_ENETFEC_MAC) {
+		/* enable ENETFEC endian swap */
+		ecntl |= (1 << 8);
+		/* enable ENETFEC store and forward mode */
+		rte_write32(rte_cpu_to_le_32(1 << 8),
+			(uint8_t *)fep->hw_baseaddr_v + ENETFEC_TFWR);
+	}
+	if (fep->bufdesc_ex)
+		ecntl |= (1 << 4);
+	if (fep->quirks & QUIRK_SUPPORT_DELAYED_CLKS &&
+		fep->rgmii_txc_delay)
+		ecntl |= ENETFEC_TXC_DLY;
+	if (fep->quirks & QUIRK_SUPPORT_DELAYED_CLKS &&
+		fep->rgmii_rxc_delay)
+		ecntl |= ENETFEC_RXC_DLY;
+	/* Enable the MIB statistic event counters */
+	rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_MIBC);
+
+	ecntl |= 0x70000000;
+	e_cntl = ecntl;
+	/* And last, enable the transmit and receive processing */
+	rte_write32(rte_cpu_to_le_32(ecntl),
+		(uint8_t *)fep->hw_baseaddr_v + ENETFEC_ECR);
+	rte_delay_us(10);
+}
+
+static int
+enetfec_eth_configure(struct rte_eth_dev *dev)
+{
+	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		ENETFEC_PMD_ERR("PMD does not support KEEP_CRC offload");
+
+	return 0;
+}
+
+static int
+enetfec_eth_start(struct rte_eth_dev *dev)
+{
+	enetfec_restart(dev);
+
+	return 0;
+}
+
+/* ENETFEC enable function.
+ * @param[in] base	ENETFEC base address
+ */
+void
+enetfec_enable(void *base)
+{
+	rte_write32(rte_read32((uint8_t *)base + ENETFEC_ECR) | e_cntl,
+		(uint8_t *)base + ENETFEC_ECR);
+}
+
+/* ENETFEC disable function.
+ * @param[in] base	ENETFEC base address
+ */
+void
+enetfec_disable(void *base)
+{
+	rte_write32(rte_read32((uint8_t *)base + ENETFEC_ECR) & ~e_cntl,
+		(uint8_t *)base + ENETFEC_ECR);
+}
+
+static int
+enetfec_eth_stop(struct rte_eth_dev *dev)
+{
+	struct enetfec_private *fep = dev->data->dev_private;
+
+	dev->data->dev_started = 0;
+	enetfec_disable(fep->hw_baseaddr_v);
+
+	return 0;
+}
+
+static const struct eth_dev_ops enetfec_ops = {
+	.dev_configure = enetfec_eth_configure,
+	.dev_start = enetfec_eth_start,
+	.dev_stop = enetfec_eth_stop
+};
 
 static int
 enetfec_eth_init(struct rte_eth_dev *dev)
 {
+	struct enetfec_private *fep = dev->data->dev_private;
+
+	fep->full_duplex = FULL_DUPLEX;
+	dev->dev_ops = &enetfec_ops;
 	rte_eth_dev_probing_finish(dev);
+
 	return 0;
 }
 
@@ -33,6 +238,8 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
 	struct enetfec_private *fep;
 	const char *name;
 	int rc;
+	int i;
+	unsigned int bdsize;
 
 	name = rte_vdev_device_name(vdev);
 	if (name == NULL)
@@ -46,6 +253,35 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
 	/* setup board info structure */
 	fep = dev->data->dev_private;
 	fep->dev = dev;
+
+	fep->max_rx_queues = ENETFEC_MAX_Q;
+	fep->max_tx_queues = ENETFEC_MAX_Q;
+	fep->quirks = QUIRK_HAS_ENETFEC_MAC | QUIRK_GBIT
+		| QUIRK_RACC;
+
+	rc = enetfec_configure();
+	if (rc != 0)
+		return -ENOMEM;
+	rc = config_enetfec_uio(fep);
+	if (rc != 0)
+		return -ENOMEM;
+
+	/* Get the BD size for distributing among six queues */
+	bdsize = (fep->bd_size) / NUM_OF_QUEUES;
+
+	for (i = 0; i < fep->max_tx_queues; i++) {
+		fep->dma_baseaddr_t[i] = fep->bd_addr_v;
+		fep->bd_addr_p_t[i] = fep->bd_addr_p;
+		fep->bd_addr_v = (uint8_t *)fep->bd_addr_v + bdsize;
+		fep->bd_addr_p = fep->bd_addr_p + bdsize;
+	}
+	for (i = 0; i < fep->max_rx_queues; i++) {
+		fep->dma_baseaddr_r[i] = fep->bd_addr_v;
+		fep->bd_addr_p_r[i] = fep->bd_addr_p;
+		fep->bd_addr_v = (uint8_t *)fep->bd_addr_v + bdsize;
+		fep->bd_addr_p = fep->bd_addr_p + bdsize;
+	}
+
 	rc = enetfec_eth_init(dev);
 	if (rc)
 		goto failed_init;
diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
index c674dfc782..312e0424e5 100644
--- a/drivers/net/enetfec/enet_ethdev.h
+++ b/drivers/net/enetfec/enet_ethdev.h
@@ -175,5 +175,7 @@ struct bufdesc *enet_get_nextdesc(struct bufdesc *bdp,
 		struct bufdesc_prop *bd);
 int enet_new_rxbdp(struct enetfec_private *fep, struct bufdesc *bdp,
 		struct rte_mbuf *mbuf);
+void enetfec_enable(void *base);
+void enetfec_disable(void *base);
 
 #endif /*__ENETFEC_ETHDEV_H__*/
diff --git a/drivers/net/enetfec/enet_regs.h b/drivers/net/enetfec/enet_regs.h
new file mode 100644
index 0000000000..5415ed77ea
--- /dev/null
+++ b/drivers/net/enetfec/enet_regs.h
@@ -0,0 +1,106 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020 NXP
+ */
+
+#ifndef __ENETFEC_REGS_H
+#define __ENETFEC_REGS_H
+
+/* Ethernet receive use control and status of buffer descriptor
+ */
+#define RX_BD_TR	((ushort)0x0001)	/* Truncated */
+#define RX_BD_OV	((ushort)0x0002)	/* Over-run */
+#define RX_BD_CR	((ushort)0x0004)	/* CRC or Frame error */
+#define RX_BD_SH	((ushort)0x0008)	/* Reserved */
+#define RX_BD_NO	((ushort)0x0010)	/* Rcvd non-octet aligned frame */
+#define RX_BD_LG	((ushort)0x0020)	/* Rcvd frame length violation */
+#define RX_BD_FIRST	((ushort)0x0400)	/* Reserved */
+#define RX_BD_LAST	((ushort)0x0800)	/* last buffer in the frame */
+#define RX_BD_INT	0x00800000
+#define RX_BD_ICE	0x00000020
+#define RX_BD_PCR	0x00000010
+
+/*
+ * 0 The next BD in consecutive location
+ * 1 The next BD in ENETFECn_RDSR.
+ */
+#define RX_BD_WRAP	((ushort)0x2000)
+#define RX_BD_EMPTY	((ushort)0x8000)	/* BD is empty */
+#define RX_BD_STATS	((ushort)0x013f)	/* All buffer descriptor status bits */
+
+/* Ethernet transmit use control and status of buffer descriptor */
+#define TX_BD_TC	((ushort)0x0400)	/* Transmit CRC */
+#define TX_BD_LAST	((ushort)0x0800)	/* Last in frame */
+#define TX_BD_READY	((ushort)0x8000)	/* Data is ready */
+#define TX_BD_STATS	((ushort)0x0fff)	/* All buffer descriptor status bits */
+#define TX_BD_WRAP	((ushort)0x2000)
+
+/* Ethernet transmit use control and status of enhanced buffer descriptor */
+#define TX_BD_IINS	0x08000000
+#define TX_BD_PINS	0x10000000
+
+#define ENETFEC_RD_START(X)	(((X) == 1) ? ENETFEC_RD_START_1 : \
+				(((X) == 2) ? \
+					ENETFEC_RD_START_2 : ENETFEC_RD_START_0))
+#define ENETFEC_TD_START(X)	(((X) == 1) ? ENETFEC_TD_START_1 : \
+				(((X) == 2) ? \
+					ENETFEC_TD_START_2 : ENETFEC_TD_START_0))
+#define ENETFEC_MRB_SIZE(X)	(((X) == 1) ? ENETFEC_MRB_SIZE_1 : \
+				(((X) == 2) ? \
+					ENETFEC_MRB_SIZE_2 : ENETFEC_MRB_SIZE_0))
+
+#define ENETFEC_ETHEREN	((uint)0x00000002)
+#define ENETFEC_TXC_DLY	((uint)0x00010000)
+#define ENETFEC_RXC_DLY	((uint)0x00020000)
+
+/* ENETFEC MAC is in controller */
+#define QUIRK_HAS_ENETFEC_MAC	(1 << 0)
+/* GBIT supported in controller */
+#define QUIRK_GBIT		(1 << 3)
+/* RACC register supported by controller */
+#define QUIRK_RACC		(1 << 12)
+/* i.MX8 ENETFEC IP version added the feature to generate the delayed TXC or
+ * RXC. For its implementation, ENETFEC uses synchronized clocks (250MHz) for
+ * generating delay of 2ns.
+ */
+#define QUIRK_SUPPORT_DELAYED_CLKS	(1 << 18)
+
+#define ENETFEC_EIR	0x004	/* Interrupt event register */
+#define ENETFEC_EIMR	0x008	/* Interrupt mask register */
+#define ENETFEC_RDAR_0	0x010	/* Receive descriptor active register ring0 */
+#define ENETFEC_TDAR_0	0x014	/* Transmit descriptor active register ring0 */
+#define ENETFEC_ECR	0x024	/* Ethernet control register */
+#define ENETFEC_MSCR	0x044	/* MII speed control register */
+#define ENETFEC_MIBC	0x064	/* MIB control and status register */
+#define ENETFEC_RCR	0x084	/* Receive control register */
+#define ENETFEC_TCR	0x0c4	/* Transmit Control register */
+#define ENETFEC_PALR	0x0e4	/* MAC address low 32 bits */
+#define ENETFEC_PAUR	0x0e8	/* MAC address high 16 bits */
+#define ENETFEC_OPD	0x0ec	/* Opcode/Pause duration register */
+#define ENETFEC_IAUR	0x118	/* hash table 32 bits high */
+#define ENETFEC_IALR	0x11c	/* hash table 32 bits low */
+#define ENETFEC_GAUR	0x120	/* grp hash table 32 bits high */
+#define ENETFEC_GALR	0x124	/* grp hash table 32 bits low */
+#define ENETFEC_TFWR	0x144	/* transmit FIFO watermark */
+#define ENETFEC_RACC	0x1c4	/* Receive Accelerator function configuration */
+#define ENETFEC_DMA1CFG	0x1d8	/* DMA class based configuration ring1 */
+#define ENETFEC_DMA2CFG	0x1dc	/* DMA class based Configuration ring2 */
+#define ENETFEC_RDAR_1	0x1e0	/* Rx descriptor active register ring1 */
+#define ENETFEC_TDAR_1	0x1e4	/* Tx descriptor active register ring1 */
+#define ENETFEC_RDAR_2	0x1e8	/* Rx descriptor active register ring2 */
+#define ENETFEC_TDAR_2	0x1ec	/* Tx descriptor active register ring2 */
+#define ENETFEC_RD_START_1	0x160	/* Receive descriptor ring1 start reg */
+#define ENETFEC_TD_START_1	0x164	/* Transmit descriptor ring1 start reg */
+#define ENETFEC_MRB_SIZE_1	0x168	/* Max receive buffer size reg ring1 */
+#define ENETFEC_RD_START_2	0x16c	/* Receive descriptor ring2 start reg */
+#define ENETFEC_TD_START_2	0x170	/* Transmit descriptor ring2 start reg */
+#define ENETFEC_MRB_SIZE_2	0x174	/* Max receive buffer size reg ring2 */
+#define ENETFEC_RD_START_0	0x180	/* Receive descriptor ring0 start reg */
+#define ENETFEC_TD_START_0	0x184	/* Transmit descriptor ring0 start reg */
+#define ENETFEC_MRB_SIZE_0	0x188	/* Max receive buffer size reg ring0 */
+#define ENETFEC_R_FIFO_SFL	0x190	/* Rx FIFO full threshold */
+#define ENETFEC_R_FIFO_SEM	0x194	/* Rx FIFO empty threshold */
+#define ENETFEC_R_FIFO_AEM	0x198	/* Rx FIFO almost empty threshold */
+#define ENETFEC_R_FIFO_AFL	0x19c	/* Rx FIFO almost full threshold */
+#define ENETFEC_FRAME_TRL	0x1b0	/* Frame truncation length */
+
+#endif /*__ENETFEC_REGS_H */
diff --git a/drivers/net/enetfec/enet_uio.c b/drivers/net/enetfec/enet_uio.c
new file mode 100644
index 0000000000..c939b4736b
--- /dev/null
+++ b/drivers/net/enetfec/enet_uio.c
@@ -0,0 +1,278 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include "enet_pmd_logs.h"
+#include "enet_uio.h"
+
+static struct uio_job enetfec_uio_job;
+static int enetfec_count;
+
+/** @brief Checks if a file name contains a certain substring.
+ * This function assumes a filename format of: [text][number].
+ * @param [in] filename	File name
+ * @param [in] match	String to match in file name
+ *
+ * @retval true if file name matches the criteria
+ * @retval false if file name does not match the criteria
+ */
+static bool
+file_name_match_extract(const char filename[], const char match[])
+{
+	char *substr = NULL;
+
+	substr = strstr(filename, match);
+	if (substr == NULL)
+		return false;
+
+	return true;
+}
+
+/*
+ * @brief Reads the first line from a file.
+ * Composes the file name as: root/subdir/filename
+ *
+ * @param [in] root	Root path
+ * @param [in] subdir	Subdirectory name
+ * @param [in] filename	File name
+ * @param [out] line	The first line read from file.
+ *
+ * @retval 0 for success
+ * @retval other value for error
+ */
+static int
+file_read_first_line(const char root[], const char subdir[],
+	const char filename[], char *line)
+{
+	char absolute_file_name[FEC_UIO_MAX_ATTR_FILE_NAME];
+	int fd = 0, ret = 0;
+
+	/* compose the file name: root/subdir/filename */
+	memset(absolute_file_name, 0, sizeof(absolute_file_name));
+	snprintf(absolute_file_name, FEC_UIO_MAX_ATTR_FILE_NAME,
+		"%s/%s/%s", root, subdir, filename);
+
+	fd = open(absolute_file_name, O_RDONLY);
+	if (fd < 0) {
+		ENETFEC_PMD_ERR("Error opening file %s", absolute_file_name);
+		return -1;
+	}
+
+	/* read UIO device name from first line in file */
+	ret = read(fd, line, FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH);
+	if (ret <= 0) {
+		ENETFEC_PMD_ERR("Error reading file %s", absolute_file_name);
+		close(fd);
+		return ret;
+	}
+	close(fd);
+
+	/* NULL-ify string */
+	line[ret] = '\0';
+
+	return 0;
+}
+
+/*
+ * @brief Maps the rx-tx bd range assigned for a bd ring.
+ *
+ * @param [in] uio_device_fd	UIO device file descriptor
+ * @param [in] uio_device_id	UIO device id
+ * @param [in] uio_map_id	UIO allows a maximum of 5 different mappings
+				for each device. Maps start with id 0.
+ * @param [out] map_size	Map size.
+ * @param [out] map_addr	Map physical address
+ *
+ * @retval NULL if failed to map registers
+ * @retval Virtual address for mapped register address range
+ */
+static void *
+uio_map_mem(int uio_device_fd, int uio_device_id,
+	int uio_map_id, int *map_size, uint64_t *map_addr)
+{
+	void *mapped_address = NULL;
+	unsigned int uio_map_size = 0;
+	uint64_t uio_map_p_addr = 0;
+	char uio_sys_root[FEC_UIO_MAX_ATTR_FILE_NAME];
+	char uio_sys_map_subdir[FEC_UIO_MAX_ATTR_FILE_NAME];
+	char uio_map_size_str[FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH + 1];
+	char uio_map_p_addr_str[32];
+	int ret = 0;
+
+	/* compose the file name: root/subdir/filename */
+	memset(uio_sys_root, 0, sizeof(uio_sys_root));
+	memset(uio_sys_map_subdir, 0, sizeof(uio_sys_map_subdir));
+	memset(uio_map_size_str, 0, sizeof(uio_map_size_str));
+	memset(uio_map_p_addr_str, 0, sizeof(uio_map_p_addr_str));
+
+	/* Compose string: /sys/class/uio/uioX */
+	snprintf(uio_sys_root, sizeof(uio_sys_root), "%s/%s%d",
+		FEC_UIO_DEVICE_SYS_ATTR_PATH, "uio", uio_device_id);
+	/* Compose string: maps/mapY */
+	snprintf(uio_sys_map_subdir, sizeof(uio_sys_map_subdir), "%s%d",
+		FEC_UIO_DEVICE_SYS_MAP_ATTR, uio_map_id);
+
+	/* Read first (and only) line from file
+	 * /sys/class/uio/uioX/maps/mapY/size
+	 */
+	ret = file_read_first_line(uio_sys_root, uio_sys_map_subdir,
+		"size", uio_map_size_str);
+	if (ret < 0) {
+		ENETFEC_PMD_ERR("file_read_first_line() failed");
+		return NULL;
+	}
+	ret = file_read_first_line(uio_sys_root, uio_sys_map_subdir,
+		"addr", uio_map_p_addr_str);
+	if (ret < 0) {
+		ENETFEC_PMD_ERR("file_read_first_line() failed");
+		return NULL;
+	}
+	/* Read mapping size and physical address expressed in hex (base 16) */
+	uio_map_size = strtol(uio_map_size_str, NULL, 16);
+	uio_map_p_addr = strtoull(uio_map_p_addr_str, NULL, 16);
+
+	if (uio_map_id == 0) {
+		/* Map the register address in user space when map_id is 0 */
+		mapped_address = mmap(0 /* dynamically choose virtual address */,
+			uio_map_size, PROT_READ | PROT_WRITE,
+			MAP_SHARED, uio_device_fd, 0);
+	} else {
+		/* Map the BD memory in user space */
+		mapped_address = mmap(NULL, uio_map_size,
+			PROT_READ | PROT_WRITE,
+			MAP_SHARED, uio_device_fd, (1 * MAP_PAGE_SIZE));
+	}
+
+	if (mapped_address == MAP_FAILED) {
+		ENETFEC_PMD_ERR("Failed to map! errno = %d uio job fd = %d,"
+			"uio device id = %d, uio map id = %d", errno,
+			uio_device_fd, uio_device_id, uio_map_id);
+		return NULL;
+	}
+
+	/* Save the map size to use it later on for munmap-ing */
+	*map_size = uio_map_size;
+	*map_addr = uio_map_p_addr;
+	ENETFEC_PMD_INFO("UIO dev[%d] mapped region [id =%d] size 0x%x at %p",
+		uio_device_id, uio_map_id, uio_map_size, mapped_address);
+
+	return mapped_address;
+}
+
+int
+config_enetfec_uio(struct enetfec_private *fep)
+{
+	char uio_device_file_name[32];
+	struct uio_job *uio_job = NULL;
+
+	/* Mapping is done only one time */
+	if (enetfec_count > 0) {
+		ENETFEC_PMD_INFO("Mapped!\n");
+		return 0;
+	}
+
+	uio_job = &enetfec_uio_job;
+
+	/* Find UIO device created by ENETFEC-UIO kernel driver */
+	memset(uio_device_file_name, 0, sizeof(uio_device_file_name));
+	snprintf(uio_device_file_name, sizeof(uio_device_file_name), "%s%d",
+		FEC_UIO_DEVICE_FILE_NAME, uio_job->uio_minor_number);
+
+	/* Open device file */
+	uio_job->uio_fd = open(uio_device_file_name, O_RDWR);
+	if (uio_job->uio_fd < 0) {
+		ENETFEC_PMD_WARN("Unable to open ENETFEC_UIO file\n");
+		return -1;
+	}
+
+	ENETFEC_PMD_INFO("US_UIO: Open device(%s) file with uio_fd = %d",
+		uio_device_file_name, uio_job->uio_fd);
+
+	fep->hw_baseaddr_v = uio_map_mem(uio_job->uio_fd,
+		uio_job->uio_minor_number, FEC_UIO_REG_MAP_ID,
+		&uio_job->map_size, &uio_job->map_addr);
+	if (fep->hw_baseaddr_v == NULL)
+		return -ENOMEM;
+	fep->hw_baseaddr_p = uio_job->map_addr;
+	fep->reg_size = uio_job->map_size;
+
+	fep->bd_addr_v = uio_map_mem(uio_job->uio_fd,
+		uio_job->uio_minor_number, FEC_UIO_BD_MAP_ID,
+		&uio_job->map_size, &uio_job->map_addr);
+	if (fep->bd_addr_v == NULL)
+		return -ENOMEM;
+	fep->bd_addr_p = uio_job->map_addr;
+	fep->bd_size = uio_job->map_size;
+
+	enetfec_count++;
+
+	return 0;
+}
+
+int
+enetfec_configure(void)
+{
+	char uio_name[32];
+	int uio_minor_number = -1;
+	int ret;
+	DIR *d = NULL;
+	struct dirent *dir;
+
+	d = opendir(FEC_UIO_DEVICE_SYS_ATTR_PATH);
+	if (d == NULL) {
+		ENETFEC_PMD_ERR("\nError opening directory '%s': %s\n",
+			FEC_UIO_DEVICE_SYS_ATTR_PATH, strerror(errno));
+		return -1;
+	}
+
+	/* Iterate through all subdirs */
+	while ((dir = readdir(d)) != NULL) {
+		if (!strncmp(dir->d_name, ".", 1) ||
+			!strncmp(dir->d_name, "..", 2))
+			continue;
+
+		if (file_name_match_extract(dir->d_name, "uio")) {
+			/*
+			 * As the substring "uio" was found in d_name,
+			 * read the number following it.
+			 */
+			ret = sscanf(dir->d_name + strlen("uio"), "%d",
+				&uio_minor_number);
+			if (ret < 0)
+				ENETFEC_PMD_ERR("Error: cannot find minor number\n");
+			/*
+			 * Open file uioX/name and read first line which
+			 * contains the name for the device. Based on the
+			 * name, check if this UIO device is for enetfec.
+			 */
+			memset(uio_name, 0, sizeof(uio_name));
+			ret = file_read_first_line(FEC_UIO_DEVICE_SYS_ATTR_PATH,
+				dir->d_name, "name", uio_name);
+			if (ret != 0) {
+				ENETFEC_PMD_INFO("file_read_first_line failed\n");
+				closedir(d);
+				return -1;
+			}
+
+			if (file_name_match_extract(uio_name,
+				FEC_UIO_DEVICE_NAME)) {
+				enetfec_uio_job.uio_minor_number =
+					uio_minor_number;
+				ENETFEC_PMD_INFO("enetfec device uio name: %s",
+					uio_name);
+			}
+		}
+	}
+	closedir(d);
+	return 0;
+}
diff --git a/drivers/net/enetfec/enet_uio.h b/drivers/net/enetfec/enet_uio.h
new file mode 100644
index 0000000000..4a031d3f46
--- /dev/null
+++ b/drivers/net/enetfec/enet_uio.h
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#include "enet_ethdev.h"
+
+/* Prefix path to the sysfs directory where UIO device attributes are
+ * exported. The path for UIO device X is /sys/class/uio/uioX
+ */
+#define FEC_UIO_DEVICE_SYS_ATTR_PATH	"/sys/class/uio"
+
+/* Subfolder in sysfs where mapping attributes are exported
+ * for each UIO device. The path for mapping Y of device X is:
+ * /sys/class/uio/uioX/maps/mapY
+ */
+#define FEC_UIO_DEVICE_SYS_MAP_ATTR	"maps/map"
+
+/* Name of the UIO device file prefix. Each UIO device will have a device
+ * file /dev/uioX, where X is the minor device number.
+ */
+#define FEC_UIO_DEVICE_FILE_NAME	"/dev/uio"
+/*
+ * Name of the UIO device. The user space FEC will have a corresponding
+ * UIO device.
+ * Maximum length is FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH.
+ *
+ * @note Must be kept in sync with the FEC kernel driver
+ * define FEC_UIO_DEVICE_NAME!
+ */
+#define FEC_UIO_DEVICE_NAME	"imx-fec-uio"
+
+/* Maximum length for the name of a UIO device file.
+ * The device file name format is: /dev/uioX.
+ */
+#define FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH	30
+
+/* Maximum length for the name of an attribute file for a UIO device.
+ * Attribute files are exported in sysfs and have the name formatted as:
+ * /sys/class/uio/uioX/
+ */
+#define FEC_UIO_MAX_ATTR_FILE_NAME	100
+
+/* The id for the mapping used to export ENETFEC registers and BD memory to
+ * user space through the UIO device.
+ */
+#define FEC_UIO_REG_MAP_ID	0
+#define FEC_UIO_BD_MAP_ID	1
+
+#define MAP_PAGE_SIZE	4096
+
+struct uio_job {
+	uint32_t fec_id;
+	int uio_fd;
+	void *bd_start_addr;
+	void *register_base_addr;
+	int map_size;
+	uint64_t map_addr;
+	int uio_minor_number;
+};
+
+int enetfec_configure(void);
+int config_enetfec_uio(struct enetfec_private *fep);
+void enetfec_uio_init(void);
+void enetfec_cleanup(void);

From patchwork Thu Oct 21 04:46:58 2021
X-Patchwork-Submitter: Apeksha Gupta
X-Patchwork-Id: 102505
From: Apeksha Gupta
To: david.marchand@redhat.com, andrew.rybchenko@oktetlabs.ru,
 ferruh.yigit@intel.com
Cc: dev@dpdk.org, sachin.saxena@nxp.com, hemant.agrawal@nxp.com,
 Apeksha Gupta
Date: Thu, 21 Oct 2021 10:16:58 +0530
Message-Id: <20211021044700.12370-4-apeksha.gupta@nxp.com>
In-Reply-To: <20211021044700.12370-1-apeksha.gupta@nxp.com>
References: <20211019184003.23128-2-apeksha.gupta@nxp.com>
 <20211021044700.12370-1-apeksha.gupta@nxp.com>
Subject: [dpdk-dev] [PATCH v6 3/5] net/enetfec: support queue configuration

This patch adds Rx/Tx queue configuration setup operations. On packet
reception, the respective BD ring status bit is set, which is then used
for packet processing.
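For reference, these new ops are reached through the standard ethdev calls;
a minimal application-side sketch (port number, ring sizes and the mempool
are illustrative, DPDK 21.11 API) looks like:

    #include <rte_ethdev.h>
    #include <rte_lcore.h>
    #include <rte_mempool.h>

    static int setup_enetfec_port(uint16_t port_id,
            struct rte_mempool *mb_pool)
    {
        struct rte_eth_conf port_conf = {0};
        int ret;

        /* one Rx and one Tx queue; the PMD clamps ring sizes to 512 */
        ret = rte_eth_dev_configure(port_id, 1, 1, &port_conf);
        if (ret < 0)
            return ret;

        /* lands in enetfec_rx_queue_setup(); NULL rxconf picks defaults */
        ret = rte_eth_rx_queue_setup(port_id, 0, 256, rte_socket_id(),
                NULL, mb_pool);
        if (ret < 0)
            return ret;

        /* lands in enetfec_tx_queue_setup() */
        ret = rte_eth_tx_queue_setup(port_id, 0, 256, rte_socket_id(),
                NULL);
        if (ret < 0)
            return ret;

        return rte_eth_dev_start(port_id);
    }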
Signed-off-by: Sachin Saxena
Signed-off-by: Apeksha Gupta
---
 drivers/net/enetfec/enet_ethdev.c | 230 +++++++++++++++++++++++++-
 1 file changed, 229 insertions(+), 1 deletion(-)

diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index 406a8db7f3..0ff93363c7 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -45,6 +45,19 @@
 
 uint32_t e_cntl;
 
+/* Supported Rx offloads */
+static uint64_t dev_rx_offloads_sup =
+		DEV_RX_OFFLOAD_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_UDP_CKSUM |
+		DEV_RX_OFFLOAD_TCP_CKSUM |
+		DEV_RX_OFFLOAD_VLAN_STRIP |
+		DEV_RX_OFFLOAD_CHECKSUM;
+
+static uint64_t dev_tx_offloads_sup =
+		DEV_TX_OFFLOAD_IPV4_CKSUM |
+		DEV_TX_OFFLOAD_UDP_CKSUM |
+		DEV_TX_OFFLOAD_TCP_CKSUM;
+
 /*
  * This function is called to start or restart the ENETFEC during a link
  * change, transmit timeout, or to reconfigure the ENETFEC. The network
@@ -213,10 +226,225 @@ enetfec_eth_stop(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+enetfec_eth_info(__rte_unused struct rte_eth_dev *dev,
+	struct rte_eth_dev_info *dev_info)
+{
+	dev_info->max_rx_queues = ENETFEC_MAX_Q;
+	dev_info->max_tx_queues = ENETFEC_MAX_Q;
+	dev_info->rx_offload_capa = dev_rx_offloads_sup;
+	dev_info->tx_offload_capa = dev_tx_offloads_sup;
+	return 0;
+}
+
+static const unsigned short offset_des_active_rxq[] = {
+	ENETFEC_RDAR_0, ENETFEC_RDAR_1, ENETFEC_RDAR_2
+};
+
+static const unsigned short offset_des_active_txq[] = {
+	ENETFEC_TDAR_0, ENETFEC_TDAR_1, ENETFEC_TDAR_2
+};
+
+static int
+enetfec_tx_queue_setup(struct rte_eth_dev *dev,
+	uint16_t queue_idx,
+	uint16_t nb_desc,
+	unsigned int socket_id __rte_unused,
+	const struct rte_eth_txconf *tx_conf)
+{
+	struct enetfec_private *fep = dev->data->dev_private;
+	unsigned int i;
+	struct bufdesc *bdp, *bd_base;
+	struct enetfec_priv_tx_q *txq;
+	unsigned int size;
+	unsigned int dsize = fep->bufdesc_ex ? sizeof(struct bufdesc_ex) :
+		sizeof(struct bufdesc);
+	unsigned int dsize_log2 = fls64(dsize);
+
+	/* Tx deferred start is not supported */
+	if (tx_conf->tx_deferred_start) {
+		ENETFEC_PMD_ERR("%p:Tx deferred start not supported",
+			(void *)dev);
+		return -EINVAL;
+	}
+
+	/* allocate transmit queue */
+	txq = rte_zmalloc(NULL, sizeof(*txq), RTE_CACHE_LINE_SIZE);
+	if (txq == NULL) {
+		ENETFEC_PMD_ERR("transmit queue allocation failed");
+		return -ENOMEM;
+	}
+
+	if (nb_desc > MAX_TX_BD_RING_SIZE) {
+		nb_desc = MAX_TX_BD_RING_SIZE;
+		ENETFEC_PMD_WARN("modified the nb_desc to MAX_TX_BD_RING_SIZE\n");
+	}
+	txq->bd.ring_size = nb_desc;
+	fep->total_tx_ring_size += txq->bd.ring_size;
+	fep->tx_queues[queue_idx] = txq;
+
+	rte_write32(rte_cpu_to_le_32(fep->bd_addr_p_t[queue_idx]),
+		(uint8_t *)fep->hw_baseaddr_v + ENETFEC_TD_START(queue_idx));
+
+	/* Set transmit descriptor base. */
+	txq = fep->tx_queues[queue_idx];
+	txq->fep = fep;
+	size = dsize * txq->bd.ring_size;
+	bd_base = (struct bufdesc *)fep->dma_baseaddr_t[queue_idx];
+	txq->bd.queue_id = queue_idx;
+	txq->bd.base = bd_base;
+	txq->bd.cur = bd_base;
+	txq->bd.d_size = dsize;
+	txq->bd.d_size_log2 = dsize_log2;
+	txq->bd.active_reg_desc = (uint8_t *)fep->hw_baseaddr_v +
+			offset_des_active_txq[queue_idx];
+	bd_base = (struct bufdesc *)(((uintptr_t)bd_base) + size);
+	txq->bd.last = (struct bufdesc *)(((uintptr_t)bd_base) - dsize);
+	bdp = txq->bd.base;
+
+	for (i = 0; i < txq->bd.ring_size; i++) {
+		/* Initialize the BD for every fragment in the page. */
+		rte_write16(rte_cpu_to_le_16(0), &bdp->bd_sc);
+		if (txq->tx_mbuf[i] != NULL) {
+			rte_pktmbuf_free(txq->tx_mbuf[i]);
+			txq->tx_mbuf[i] = NULL;
+		}
+		rte_write32(0, &bdp->bd_bufaddr);
+		bdp = enet_get_nextdesc(bdp, &txq->bd);
+	}
+
+	/* Set the last buffer to wrap */
+	bdp = enet_get_prevdesc(bdp, &txq->bd);
+	rte_write16((rte_cpu_to_le_16(TX_BD_WRAP) |
+		rte_read16(&bdp->bd_sc)), &bdp->bd_sc);
+	txq->dirty_tx = bdp;
+	dev->data->tx_queues[queue_idx] = fep->tx_queues[queue_idx];
+	return 0;
+}
+
+static int
+enetfec_rx_queue_setup(struct rte_eth_dev *dev,
+	uint16_t queue_idx,
+	uint16_t nb_rx_desc,
+	unsigned int socket_id __rte_unused,
+	const struct rte_eth_rxconf *rx_conf,
+	struct rte_mempool *mb_pool)
+{
+	struct enetfec_private *fep = dev->data->dev_private;
+	unsigned int i;
+	struct bufdesc *bd_base;
+	struct bufdesc *bdp;
+	struct enetfec_priv_rx_q *rxq;
+	unsigned int size;
+	unsigned int dsize = fep->bufdesc_ex ? sizeof(struct bufdesc_ex) :
+		sizeof(struct bufdesc);
+	unsigned int dsize_log2 = fls64(dsize);
+
+	/* Rx deferred start is not supported */
+	if (rx_conf->rx_deferred_start) {
+		ENETFEC_PMD_ERR("%p:Rx deferred start not supported",
+			(void *)dev);
+		return -EINVAL;
+	}
+
+	/* allocate receive queue */
+	rxq = rte_zmalloc(NULL, sizeof(*rxq), RTE_CACHE_LINE_SIZE);
+	if (rxq == NULL) {
+		ENETFEC_PMD_ERR("receive queue allocation failed");
+		return -ENOMEM;
+	}
+
+	if (nb_rx_desc > MAX_RX_BD_RING_SIZE) {
+		nb_rx_desc = MAX_RX_BD_RING_SIZE;
+		ENETFEC_PMD_WARN("modified the nb_desc to MAX_RX_BD_RING_SIZE\n");
+	}
+
+	rxq->bd.ring_size = nb_rx_desc;
+	fep->total_rx_ring_size += rxq->bd.ring_size;
+	fep->rx_queues[queue_idx] = rxq;
+
+	rte_write32(rte_cpu_to_le_32(fep->bd_addr_p_r[queue_idx]),
+		(uint8_t *)fep->hw_baseaddr_v + ENETFEC_RD_START(queue_idx));
+	rte_write32(rte_cpu_to_le_32(PKT_MAX_BUF_SIZE),
+		(uint8_t *)fep->hw_baseaddr_v + ENETFEC_MRB_SIZE(queue_idx));
+
+	/* Set receive descriptor base. */
+	rxq = fep->rx_queues[queue_idx];
+	rxq->pool = mb_pool;
+	size = dsize * rxq->bd.ring_size;
+	bd_base = (struct bufdesc *)fep->dma_baseaddr_r[queue_idx];
+	rxq->bd.queue_id = queue_idx;
+	rxq->bd.base = bd_base;
+	rxq->bd.cur = bd_base;
+	rxq->bd.d_size = dsize;
+	rxq->bd.d_size_log2 = dsize_log2;
+	rxq->bd.active_reg_desc = (uint8_t *)fep->hw_baseaddr_v +
+			offset_des_active_rxq[queue_idx];
+	bd_base = (struct bufdesc *)(((uintptr_t)bd_base) + size);
+	rxq->bd.last = (struct bufdesc *)(((uintptr_t)bd_base) - dsize);
+
+	rxq->fep = fep;
+	bdp = rxq->bd.base;
+	rxq->bd.cur = bdp;
+
+	for (i = 0; i < nb_rx_desc; i++) {
+		/* Initialize Rx buffers from pktmbuf pool */
+		struct rte_mbuf *mbuf = rte_pktmbuf_alloc(mb_pool);
+		if (mbuf == NULL) {
+			ENETFEC_PMD_ERR("mbuf allocation failed\n");
+			goto err_alloc;
+		}
+
+		/* Get the virtual address & physical address */
+		rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(mbuf)),
+			&bdp->bd_bufaddr);
+
+		rxq->rx_mbuf[i] = mbuf;
+		rte_write16(rte_cpu_to_le_16(RX_BD_EMPTY), &bdp->bd_sc);
+
+		bdp = enet_get_nextdesc(bdp, &rxq->bd);
+	}
+
+	/* Initialize the receive buffer descriptors. */
+	bdp = rxq->bd.cur;
+	for (i = 0; i < rxq->bd.ring_size; i++) {
+		/* Initialize the BD for every fragment in the page. */
+		if (rte_read32(&bdp->bd_bufaddr) > 0)
+			rte_write16(rte_cpu_to_le_16(RX_BD_EMPTY),
+				&bdp->bd_sc);
+		else
+			rte_write16(rte_cpu_to_le_16(0), &bdp->bd_sc);
+
+		bdp = enet_get_nextdesc(bdp, &rxq->bd);
+	}
+
+	/* Set the last buffer to wrap */
+	bdp = enet_get_prevdesc(bdp, &rxq->bd);
+	rte_write16((rte_cpu_to_le_16(RX_BD_WRAP) |
+		rte_read16(&bdp->bd_sc)), &bdp->bd_sc);
+	dev->data->rx_queues[queue_idx] = fep->rx_queues[queue_idx];
+	rte_write32(0, fep->rx_queues[queue_idx]->bd.active_reg_desc);
+	return 0;
+
+err_alloc:
+	for (i = 0; i < nb_rx_desc; i++) {
+		if (rxq->rx_mbuf[i] != NULL) {
+			rte_pktmbuf_free(rxq->rx_mbuf[i]);
+			rxq->rx_mbuf[i] = NULL;
+		}
+	}
+	rte_free(rxq);
+	return -ENOMEM;
+}
+
 static const struct eth_dev_ops enetfec_ops = {
 	.dev_configure = enetfec_eth_configure,
 	.dev_start = enetfec_eth_start,
-	.dev_stop = enetfec_eth_stop
+	.dev_stop = enetfec_eth_stop,
+	.dev_infos_get = enetfec_eth_info,
+	.rx_queue_setup = enetfec_rx_queue_setup,
+	.tx_queue_setup = enetfec_tx_queue_setup
 };
 
 static int

From patchwork Thu Oct 21 04:46:59 2021
X-Patchwork-Submitter: Apeksha Gupta
X-Patchwork-Id: 102506
From: Apeksha Gupta
To: david.marchand@redhat.com, andrew.rybchenko@oktetlabs.ru,
 ferruh.yigit@intel.com
Cc: dev@dpdk.org, sachin.saxena@nxp.com, hemant.agrawal@nxp.com,
 Apeksha Gupta
Date: Thu, 21 Oct 2021 10:16:59 +0530
Message-Id: <20211021044700.12370-5-apeksha.gupta@nxp.com>
In-Reply-To: <20211021044700.12370-1-apeksha.gupta@nxp.com>
References: <20211019184003.23128-2-apeksha.gupta@nxp.com>
 <20211021044700.12370-1-apeksha.gupta@nxp.com>
Subject: [dpdk-dev] [PATCH v6 4/5] net/enetfec: add enqueue and dequeue
 support

This patch adds burst enqueue and dequeue operations to the enetfec PMD.
Loopback mode is also added; the compile-time flag 'ENETFEC_LOOPBACK'
enables this feature, and loopback mode is disabled by default. Basic
features such as promiscuous mode enable and basic statistics are also
added.
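These burst hooks are consumed through the usual ethdev datapath; a minimal
polling sketch (burst size and queue 0 are illustrative, standard ethdev
API) would be:

    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32

    static void enetfec_poll_once(uint16_t port_id)
    {
        struct rte_mbuf *pkts[BURST_SIZE];
        uint16_t nb_rx, nb_tx, i;

        /* dequeue: serviced by enetfec_recv_pkts() */
        nb_rx = rte_eth_rx_burst(port_id, 0, pkts, BURST_SIZE);
        /* enqueue: serviced by enetfec_xmit_pkts() */
        nb_tx = rte_eth_tx_burst(port_id, 0, pkts, nb_rx);
        /* free anything the Tx ring did not accept */
        for (i = nb_tx; i < nb_rx; i++)
            rte_pktmbuf_free(pkts[i]);
    }

    static void enetfec_show_port(uint16_t port_id)
    {
        struct rte_eth_stats stats;

        rte_eth_promiscuous_enable(port_id); /* enetfec_promiscuous_enable */
        if (rte_eth_stats_get(port_id, &stats) == 0) /* enetfec_stats_get */
            printf("rx %" PRIu64 " tx %" PRIu64 "\n",
                stats.ipackets, stats.opackets);
    }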
Signed-off-by: Sachin Saxena Signed-off-by: Apeksha Gupta --- doc/guides/nics/enetfec.rst | 2 + doc/guides/nics/features/enetfec.ini | 2 + drivers/net/enetfec/enet_ethdev.c | 197 ++++++++++++ drivers/net/enetfec/enet_rxtx.c | 445 +++++++++++++++++++++++++++ 4 files changed, 646 insertions(+) create mode 100644 drivers/net/enetfec/enet_rxtx.c diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst index dfcd032098..47836630b6 100644 --- a/doc/guides/nics/enetfec.rst +++ b/doc/guides/nics/enetfec.rst @@ -82,6 +82,8 @@ driver. ENETFEC Features ~~~~~~~~~~~~~~~~~ +- Basic stats +- Promiscuous - Linux - ARMv8 diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini index bdfbdbd9d4..7e0fb148ac 100644 --- a/doc/guides/nics/features/enetfec.ini +++ b/doc/guides/nics/features/enetfec.ini @@ -4,6 +4,8 @@ ; Refer to default.ini for the full list of available PMD features. ; [Features] +Basic stats = Y +Promiscuous mode = Y Linux = Y ARMv8 = Y Usage doc = Y diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c index 0ff93363c7..4419952443 100644 --- a/drivers/net/enetfec/enet_ethdev.c +++ b/drivers/net/enetfec/enet_ethdev.c @@ -41,6 +41,8 @@ #define ENETFEC_RAFL_V 0x8 #define ENETFEC_OPD_V 0xFFF0 +/* Extended buffer descriptor */ +#define ENETFEC_EXTENDED_BD 0 #define NUM_OF_QUEUES 6 uint32_t e_cntl; @@ -179,6 +181,40 @@ enetfec_restart(struct rte_eth_dev *dev) rte_delay_us(10); } +static void +enet_free_buffers(struct rte_eth_dev *dev) +{ + struct enetfec_private *fep = dev->data->dev_private; + unsigned int i, q; + struct rte_mbuf *mbuf; + struct bufdesc *bdp; + struct enetfec_priv_rx_q *rxq; + struct enetfec_priv_tx_q *txq; + + for (q = 0; q < dev->data->nb_rx_queues; q++) { + rxq = fep->rx_queues[q]; + bdp = rxq->bd.base; + for (i = 0; i < rxq->bd.ring_size; i++) { + mbuf = rxq->rx_mbuf[i]; + rxq->rx_mbuf[i] = NULL; + if (mbuf) + rte_pktmbuf_free(mbuf); + bdp = enet_get_nextdesc(bdp, &rxq->bd); + } + } + + for (q = 0; q < dev->data->nb_tx_queues; q++) { + txq = fep->tx_queues[q]; + bdp = txq->bd.base; + for (i = 0; i < txq->bd.ring_size; i++) { + mbuf = txq->tx_mbuf[i]; + txq->tx_mbuf[i] = NULL; + if (mbuf) + rte_pktmbuf_free(mbuf); + } + } +} + static int enetfec_eth_configure(__rte_unused struct rte_eth_dev *dev) { @@ -192,6 +228,8 @@ static int enetfec_eth_start(struct rte_eth_dev *dev) { enetfec_restart(dev); + dev->rx_pkt_burst = &enetfec_recv_pkts; + dev->tx_pkt_burst = &enetfec_xmit_pkts; return 0; } @@ -226,6 +264,110 @@ enetfec_eth_stop(__rte_unused struct rte_eth_dev *dev) return 0; } +static int +enetfec_eth_close(__rte_unused struct rte_eth_dev *dev) +{ + enet_free_buffers(dev); + return 0; +} + +static int +enetfec_eth_link_update(struct rte_eth_dev *dev, + int wait_to_complete __rte_unused) +{ + struct rte_eth_link link; + unsigned int lstatus = 1; + + if (dev == NULL) { + ENETFEC_PMD_ERR("Invalid device in link_update.\n"); + return 0; + } + + memset(&link, 0, sizeof(struct rte_eth_link)); + + link.link_status = lstatus; + link.link_speed = ETH_SPEED_NUM_1G; + + ENETFEC_PMD_INFO("Port (%d) link is %s\n", dev->data->port_id, + "Up"); + + return rte_eth_linkstatus_set(dev, &link); +} + +static int +enetfec_promiscuous_enable(__rte_unused struct rte_eth_dev *dev) +{ + struct enetfec_private *fep = dev->data->dev_private; + uint32_t tmp; + + tmp = rte_read32((uint8_t *)fep->hw_baseaddr_v + ENETFEC_RCR); + tmp |= 0x8; + tmp &= ~0x2; + rte_write32(rte_cpu_to_le_32(tmp), + (uint8_t *)fep->hw_baseaddr_v + 
+static int
+enetfec_multicast_enable(struct rte_eth_dev *dev)
+{
+	struct enetfec_private *fep = dev->data->dev_private;
+
+	rte_write32(rte_cpu_to_le_32(0xffffffff),
+		    (uint8_t *)fep->hw_baseaddr_v + ENETFEC_GAUR);
+	rte_write32(rte_cpu_to_le_32(0xffffffff),
+		    (uint8_t *)fep->hw_baseaddr_v + ENETFEC_GALR);
+	dev->data->all_multicast = 1;
+
+	rte_write32(rte_cpu_to_le_32(0x04400002),
+		    (uint8_t *)fep->hw_baseaddr_v + ENETFEC_GAUR);
+	rte_write32(rte_cpu_to_le_32(0x10800049),
+		    (uint8_t *)fep->hw_baseaddr_v + ENETFEC_GALR);
+
+	return 0;
+}
+
+/* Set a MAC change in hardware. */
+static int
+enetfec_set_mac_address(struct rte_eth_dev *dev,
+			struct rte_ether_addr *addr)
+{
+	struct enetfec_private *fep = dev->data->dev_private;
+
+	writel(addr->addr_bytes[3] | (addr->addr_bytes[2] << 8) |
+	       (addr->addr_bytes[1] << 16) | (addr->addr_bytes[0] << 24),
+	       (uint8_t *)fep->hw_baseaddr_v + ENETFEC_PALR);
+	writel((addr->addr_bytes[5] << 16) | (addr->addr_bytes[4] << 24),
+	       (uint8_t *)fep->hw_baseaddr_v + ENETFEC_PAUR);
+
+	rte_ether_addr_copy(addr, &dev->data->mac_addrs[0]);
+
+	return 0;
+}
+
+static int
+enetfec_stats_get(struct rte_eth_dev *dev,
+		  struct rte_eth_stats *stats)
+{
+	struct enetfec_private *fep = dev->data->dev_private;
+	struct rte_eth_stats *eth_stats = &fep->stats;
+
+	if (stats == NULL)
+		return -EINVAL;
+
+	memset(stats, 0, sizeof(struct rte_eth_stats));
+
+	stats->ipackets = eth_stats->ipackets;
+	stats->ibytes = eth_stats->ibytes;
+	stats->ierrors = eth_stats->ierrors;
+	stats->opackets = eth_stats->opackets;
+	stats->obytes = eth_stats->obytes;
+	stats->oerrors = eth_stats->oerrors;
+
+	return 0;
+}
+
 static int
 enetfec_eth_info(__rte_unused struct rte_eth_dev *dev,
 	struct rte_eth_dev_info *dev_info)
@@ -237,6 +379,18 @@ enetfec_eth_info(__rte_unused struct rte_eth_dev *dev,
 	return 0;
 }
 
+static void
+enet_free_queue(struct rte_eth_dev *dev)
+{
+	struct enetfec_private *fep = dev->data->dev_private;
+	unsigned int i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++)
+		rte_free(fep->rx_queues[i]);
+	for (i = 0; i < dev->data->nb_tx_queues; i++)
+		rte_free(fep->tx_queues[i]);
+}
+
 static const unsigned short offset_des_active_rxq[] = {
 	ENETFEC_RDAR_0, ENETFEC_RDAR_1, ENETFEC_RDAR_2
 };
@@ -442,6 +596,12 @@ static const struct eth_dev_ops enetfec_ops = {
 	.dev_configure = enetfec_eth_configure,
 	.dev_start = enetfec_eth_start,
 	.dev_stop = enetfec_eth_stop,
+	.dev_close = enetfec_eth_close,
+	.link_update = enetfec_eth_link_update,
+	.promiscuous_enable = enetfec_promiscuous_enable,
+	.allmulticast_enable = enetfec_multicast_enable,
+	.mac_addr_set = enetfec_set_mac_address,
+	.stats_get = enetfec_stats_get,
 	.dev_infos_get = enetfec_eth_info,
 	.rx_queue_setup = enetfec_rx_queue_setup,
 	.tx_queue_setup = enetfec_tx_queue_setup
@@ -468,6 +628,7 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
 	int rc;
 	int i;
 	unsigned int bdsize;
+	struct rte_ether_addr macaddr;
 
 	name = rte_vdev_device_name(vdev);
 	if (name == NULL)
@@ -510,6 +671,27 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
 		fep->bd_addr_p = fep->bd_addr_p + bdsize;
 	}
 
+	/* Copy the station address into the dev structure. */
+	dev->data->mac_addrs = rte_zmalloc("mac_addr", ETHER_ADDR_LEN, 0);
+	if (dev->data->mac_addrs == NULL) {
+		ENETFEC_PMD_ERR("Failed to allocate mem %d to store MAC addresses",
+				ETHER_ADDR_LEN);
+		rc = -ENOMEM;
+		goto err;
+	}
+
+	/* Set a default MAC address */
+	macaddr.addr_bytes[0] = 1;
+	macaddr.addr_bytes[1] = 1;
+	macaddr.addr_bytes[2] = 1;
+	macaddr.addr_bytes[3] = 1;
+	macaddr.addr_bytes[4] = 1;
+	macaddr.addr_bytes[5] = 1;
+	enetfec_set_mac_address(dev, &macaddr);
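The PALR/PAUR byte packing used by enetfec_set_mac_address() above can be
checked in isolation. A self-contained sketch of the same layout (illustration
only; the MAC value is arbitrary):

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        const uint8_t mac[6] = { 0x02, 0x00, 0x00, 0x01, 0x02, 0x03 };
        /* PALR: bytes 0-3, byte 0 in the most significant octet. */
        uint32_t palr = (uint32_t)mac[3] | ((uint32_t)mac[2] << 8) |
                        ((uint32_t)mac[1] << 16) | ((uint32_t)mac[0] << 24);
        /* PAUR: bytes 4-5 in the upper 16 bits; the driver leaves the
         * low half zero.
         */
        uint32_t paur = ((uint32_t)mac[5] << 16) | ((uint32_t)mac[4] << 24);

        printf("PALR=0x%08x PAUR=0x%08x\n", palr, paur);
        return 0;
    }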
+
+	fep->bufdesc_ex = ENETFEC_EXTENDED_BD;
 	rc = enetfec_eth_init(dev);
 	if (rc)
 		goto failed_init;
@@ -518,6 +700,8 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
 
 failed_init:
 	ENETFEC_PMD_ERR("Failed to init");
+err:
+	rte_eth_dev_release_port(dev);
 	return rc;
 }
 
@@ -525,6 +709,8 @@ static int
 pmd_enetfec_remove(struct rte_vdev_device *vdev)
 {
 	struct rte_eth_dev *eth_dev = NULL;
+	struct enetfec_private *fep;
+	struct enetfec_priv_rx_q *rxq;
 	int ret;
 
 	/* find the ethdev entry */
@@ -532,11 +718,22 @@ pmd_enetfec_remove(struct rte_vdev_device *vdev)
 	if (eth_dev == NULL)
 		return -ENODEV;
 
+	fep = eth_dev->data->dev_private;
+	/* Free descriptor base of first RX queue as it was configured
+	 * first in enetfec_eth_init().
+	 */
+	rxq = fep->rx_queues[0];
+	rte_free(rxq->bd.base);
+	enet_free_queue(eth_dev);
+	enetfec_eth_stop(eth_dev);
+
+	/* Unmap before releasing the port: releasing frees dev_private,
+	 * which fep points into.
+	 */
+	munmap(fep->hw_baseaddr_v, fep->cbus_size);
+
 	ret = rte_eth_dev_release_port(eth_dev);
 	if (ret != 0)
 		return -EINVAL;
 
 	ENETFEC_PMD_INFO("Closing sw device");
 	return 0;
 }
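The probe/remove pair above is reached when the vdev is instantiated. A
minimal sketch of doing that programmatically rather than via the EAL --vdev
option; "net_enetfec" is the vdev name this series is assumed to register:

    #include <rte_bus_vdev.h>
    #include <rte_ethdev.h>

    static int
    create_enetfec_port(uint16_t *port_id)
    {
        /* Triggers pmd_enetfec_probe() on the vdev bus. */
        int ret = rte_vdev_init("net_enetfec", NULL);

        if (ret != 0)
            return ret;
        return rte_eth_dev_get_port_by_name("net_enetfec", port_id);
    }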
diff --git a/drivers/net/enetfec/enet_rxtx.c b/drivers/net/enetfec/enet_rxtx.c
new file mode 100644
index 0000000000..445fa97e77
--- /dev/null
+++ b/drivers/net/enetfec/enet_rxtx.c
@@ -0,0 +1,445 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#include <signal.h>
+#include <rte_mbuf.h>
+#include <rte_io.h>
+#include "enet_regs.h"
+#include "enet_ethdev.h"
+#include "enet_pmd_logs.h"
+
+#define ENETFEC_LOOPBACK 0
+#define ENETFEC_DUMP 0
+
+#if ENETFEC_DUMP
+static void
+enet_dump(struct enetfec_priv_tx_q *txq)
+{
+	struct bufdesc *bdp;
+	int index = 0;
+
+	ENETFEC_PMD_DEBUG("TX ring dump\n");
+	ENETFEC_PMD_DEBUG("Nr SC addr len MBUF\n");
+
+	bdp = txq->bd.base;
+	do {
+		ENETFEC_PMD_DEBUG("%3u %c%c 0x%04x 0x%08x %4u %p\n",
+			index,
+			bdp == txq->bd.cur ? 'S' : ' ',
+			bdp == txq->dirty_tx ? 'H' : ' ',
+			rte_le_to_cpu_16(rte_read16(&bdp->bd_sc)),
+			rte_le_to_cpu_32(rte_read32(&bdp->bd_bufaddr)),
+			rte_le_to_cpu_16(rte_read16(&bdp->bd_datlen)),
+			txq->tx_mbuf[index]);
+		bdp = enet_get_nextdesc(bdp, &txq->bd);
+		index++;
+	} while (bdp != txq->bd.base);
+}
+
+static void
+enet_dump_rx(struct enetfec_priv_rx_q *rxq)
+{
+	struct bufdesc *bdp;
+	int index = 0;
+
+	ENETFEC_PMD_DEBUG("RX ring dump\n");
+	ENETFEC_PMD_DEBUG("Nr SC addr len MBUF\n");
+
+	bdp = rxq->bd.base;
+	do {
+		ENETFEC_PMD_DEBUG("%3u %c 0x%04x 0x%08x %4u %p\n",
+			index,
+			bdp == rxq->bd.cur ? 'S' : ' ',
+			rte_le_to_cpu_16(rte_read16(&bdp->bd_sc)),
+			rte_le_to_cpu_32(rte_read32(&bdp->bd_bufaddr)),
+			rte_le_to_cpu_16(rte_read16(&bdp->bd_datlen)),
+			rxq->rx_mbuf[index]);
+		rte_pktmbuf_dump(stdout, rxq->rx_mbuf[index],
+				 rxq->rx_mbuf[index]->pkt_len);
+		bdp = enet_get_nextdesc(bdp, &rxq->bd);
+		index++;
+	} while (bdp != rxq->bd.base);
+}
+#endif
+
+#if ENETFEC_LOOPBACK
+static volatile bool lb_quit;
+
+static void fec_signal_handler(int signum)
+{
+	if (signum == SIGINT || signum == SIGTSTP || signum == SIGTERM) {
+		printf("\n\n %s: Signal %d received, preparing to exit...\n",
+		       __func__, signum);
+		lb_quit = true;
+	}
+}
+
+static void
+enetfec_lb_rxtx(void *rxq1)
+{
+	struct rte_mempool *pool;
+	struct bufdesc *rx_bdp = NULL, *tx_bdp = NULL;
+	struct rte_mbuf *mbuf = NULL, *new_mbuf = NULL;
+	unsigned short status;
+	unsigned short pkt_len = 0;
+	int index_r = 0, index_t = 0;
+	u8 *data;
+	struct enetfec_priv_rx_q *rxq = (struct enetfec_priv_rx_q *)rxq1;
+	struct rte_eth_stats *stats = &rxq->fep->stats;
+	unsigned int i;
+	struct enetfec_private *fep;
+	struct enetfec_priv_tx_q *txq;
+
+	fep = rxq->fep->dev->data->dev_private;
+	txq = fep->tx_queues[0];
+
+	pool = rxq->pool;
+	rx_bdp = rxq->bd.cur;
+	tx_bdp = txq->bd.cur;
+
+	signal(SIGTSTP, fec_signal_handler);
+	while (!lb_quit) {
+chk_again:
+		status = rte_le_to_cpu_16(rte_read16(&rx_bdp->bd_sc));
+		if (status & RX_BD_EMPTY) {
+			if (!lb_quit)
+				goto chk_again;
+			rxq->bd.cur = rx_bdp;
+			txq->bd.cur = tx_bdp;
+			return;
+		}
+
+		/* Check for errors. */
+		status ^= RX_BD_LAST;
+		if (status & (RX_BD_LG | RX_BD_SH | RX_BD_NO |
+			      RX_BD_CR | RX_BD_OV | RX_BD_LAST |
+			      RX_BD_TR)) {
+			stats->ierrors++;
+			if (status & RX_BD_OV) {
+				/* FIFO overrun */
+				ENETFEC_PMD_ERR("rx_fifo_error\n");
+				goto rx_processing_done;
+			}
+			if (status & (RX_BD_LG | RX_BD_SH
+				      | RX_BD_LAST)) {
+				/* Frame too long or too short. */
+				ENETFEC_PMD_ERR("rx_length_error\n");
+				if (status & RX_BD_LAST)
+					ENETFEC_PMD_ERR("rcv is not +last\n");
+			}
+			/* CRC Error */
+			if (status & RX_BD_CR)
+				ENETFEC_PMD_ERR("rx_crc_errors\n");
+
+			/* Report late collisions as a frame error. */
+			if (status & (RX_BD_NO | RX_BD_TR))
+				ENETFEC_PMD_ERR("rx_frame_error\n");
+			mbuf = NULL;
+			goto rx_processing_done;
+		}
+
+		new_mbuf = rte_pktmbuf_alloc(pool);
+		if (unlikely(!new_mbuf)) {
+			stats->ierrors++;
+			break;
+		}
+		/* Process the incoming frame. */
+		pkt_len = rte_le_to_cpu_16(rte_read16(&rx_bdp->bd_datlen));
+
+		/* shows data with respect to the data_off field. */
+		index_r = enet_get_bd_index(rx_bdp, &rxq->bd);
+		mbuf = rxq->rx_mbuf[index_r];
+
+		/* adjust pkt_len */
+		rte_pktmbuf_append((struct rte_mbuf *)mbuf, pkt_len - 4);
+		if (rxq->fep->quirks & QUIRK_RACC)
+			rte_pktmbuf_adj(mbuf, 2);
+
+		/* Replace Buffer in BD */
+		rxq->rx_mbuf[index_r] = new_mbuf;
+		rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(new_mbuf)),
+			    &rx_bdp->bd_bufaddr);
+
+rx_processing_done:
+		/* when rx_processing_done clear the status flags
+		 * for this buffer
+		 */
+		status &= ~RX_BD_STATS;
+
+		/* Mark the buffer empty */
+		status |= RX_BD_EMPTY;
+
+		/* Make sure the updates to rest of the descriptor are
+		 * performed before transferring ownership.
+		 */
+		rte_wmb();
+		rte_write16(rte_cpu_to_le_16(status), &rx_bdp->bd_sc);
+
+		/* Update BD pointer to next entry */
+		rx_bdp = enet_get_nextdesc(rx_bdp, &rxq->bd);
+
+		/* Doing this here will keep the FEC running while we process
+		 * incoming frames.
+		 */
+		rte_write32(0, rxq->bd.active_reg_desc);
+
+		/* TX begins: First clean the ring then process packet */
+		index_t = enet_get_bd_index(tx_bdp, &txq->bd);
+		status = rte_le_to_cpu_16(rte_read16(&tx_bdp->bd_sc));
+		if (status & TX_BD_READY) {
+			stats->oerrors++;
+			break;
+		}
+		if (txq->tx_mbuf[index_t]) {
+			rte_pktmbuf_free(txq->tx_mbuf[index_t]);
+			txq->tx_mbuf[index_t] = NULL;
+		}
+
+		if (mbuf == NULL)
+			continue;
+
+		/* Fill in a Tx ring entry */
+		status &= ~TX_BD_STATS;
+
+		/* Set buffer length and buffer pointer */
+		pkt_len = rte_pktmbuf_pkt_len(mbuf);
+		status |= (TX_BD_LAST);
+		data = rte_pktmbuf_mtod(mbuf, void *);
+
+		for (i = 0; i <= pkt_len; i += RTE_CACHE_LINE_SIZE)
+			dcbf(data + i);
+		rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(mbuf)),
+			    &tx_bdp->bd_bufaddr);
+		rte_write16(rte_cpu_to_le_16(pkt_len), &tx_bdp->bd_datlen);
+
+		/* Make sure the updates to rest of the descriptor are performed
+		 * before transferring ownership.
+		 */
+		status |= (TX_BD_READY | TX_BD_TC);
+		rte_wmb();
+		rte_write16(rte_cpu_to_le_16(status), &tx_bdp->bd_sc);
+
+		/* Trigger transmission start */
+		rte_write32(0, txq->bd.active_reg_desc);
+
+		/* Save mbuf pointer to clean later */
+		txq->tx_mbuf[index_t] = mbuf;
+
+		/* If this was the last BD in the ring, start at the
+		 * beginning again.
+		 */
+		tx_bdp = enet_get_nextdesc(tx_bdp, &txq->bd);
+	}
+}
+#endif
+
+/* This function does enetfec_rx_queue processing. It dequeues packets from
+ * the Rx queue and, while walking the ring, sets the empty indicator on each
+ * consumed descriptor.
+ */
+uint16_t
+enetfec_recv_pkts(void *rxq1, struct rte_mbuf **rx_pkts,
+		  uint16_t nb_pkts)
+{
+	struct rte_mempool *pool;
+	struct bufdesc *bdp;
+	struct rte_mbuf *mbuf, *new_mbuf = NULL;
+	unsigned short status;
+	unsigned short pkt_len;
+	int pkt_received = 0, index = 0;
+	void *data;
+	struct enetfec_priv_rx_q *rxq = (struct enetfec_priv_rx_q *)rxq1;
+	struct rte_eth_stats *stats = &rxq->fep->stats;
+
+	pool = rxq->pool;
+	bdp = rxq->bd.cur;
+#if ENETFEC_LOOPBACK
+	enetfec_lb_rxtx(rxq1);
+#endif
+	/* Process the incoming packet */
+	status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
+	while ((status & RX_BD_EMPTY) == 0) {
+		if (pkt_received >= nb_pkts)
+			break;
+
+		new_mbuf = rte_pktmbuf_alloc(pool);
+		if (unlikely(new_mbuf == NULL)) {
+			stats->ierrors++;
+			break;
+		}
+		/* Check for errors. */
+		status ^= RX_BD_LAST;
+		if (status & (RX_BD_LG | RX_BD_SH | RX_BD_NO |
+			      RX_BD_CR | RX_BD_OV | RX_BD_LAST |
+			      RX_BD_TR)) {
+			stats->ierrors++;
+			if (status & RX_BD_OV) {
+				/* FIFO overrun */
+				/* enet_dump_rx(rxq); */
+				ENETFEC_PMD_ERR("rx_fifo_error\n");
+				goto rx_processing_done;
+			}
+			if (status & (RX_BD_LG | RX_BD_SH
+				      | RX_BD_LAST)) {
+				/* Frame too long or too short. */
+				ENETFEC_PMD_ERR("rx_length_error\n");
+				if (status & RX_BD_LAST)
+					ENETFEC_PMD_ERR("rcv is not +last\n");
+			}
+			if (status & RX_BD_CR) {	/* CRC Error */
+				ENETFEC_PMD_ERR("rx_crc_errors\n");
+			}
+			/* Report late collisions as a frame error. */
+			if (status & (RX_BD_NO | RX_BD_TR))
+				ENETFEC_PMD_ERR("rx_frame_error\n");
+			goto rx_processing_done;
+		}
+
+		/* Process the incoming frame. */
+		stats->ipackets++;
+		pkt_len = rte_le_to_cpu_16(rte_read16(&bdp->bd_datlen));
+		stats->ibytes += pkt_len;
+
+		/* shows data with respect to the data_off field. */
+		index = enet_get_bd_index(bdp, &rxq->bd);
+		mbuf = rxq->rx_mbuf[index];
+
+		data = rte_pktmbuf_mtod(mbuf, uint8_t *);
+		rte_prefetch0(data);
+		rte_pktmbuf_append((struct rte_mbuf *)mbuf,
+				   pkt_len - 4);
+
+		if (rxq->fep->quirks & QUIRK_RACC)
+			data = rte_pktmbuf_adj(mbuf, 2);
+
+		rx_pkts[pkt_received] = mbuf;
+		pkt_received++;
+		rxq->rx_mbuf[index] = new_mbuf;
+		rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(new_mbuf)),
+			    &bdp->bd_bufaddr);
+rx_processing_done:
+		/* when rx_processing_done clear the status flags
+		 * for this buffer
+		 */
+		status &= ~RX_BD_STATS;
+
+		/* Mark the buffer empty */
+		status |= RX_BD_EMPTY;
+
+		if (rxq->fep->bufdesc_ex) {
+			struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
+
+			rte_write32(rte_cpu_to_le_32(RX_BD_INT),
+				    &ebdp->bd_esc);
+			rte_write32(0, &ebdp->bd_prot);
+			rte_write32(0, &ebdp->bd_bdu);
+		}
+
+		/* Make sure the updates to rest of the descriptor are
+		 * performed before transferring ownership.
+		 */
+		rte_wmb();
+		rte_write16(rte_cpu_to_le_16(status), &bdp->bd_sc);
+
+		/* Update BD pointer to next entry */
+		bdp = enet_get_nextdesc(bdp, &rxq->bd);
+
+		/* Doing this here will keep the FEC running while we process
+		 * incoming frames.
+		 */
+		rte_write32(0, rxq->bd.active_reg_desc);
+		status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
+	}
+	rxq->bd.cur = bdp;
+	return pkt_received;
+}
+
+uint16_t
+enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	struct enetfec_priv_tx_q *txq =
+			(struct enetfec_priv_tx_q *)tx_queue;
+	struct rte_eth_stats *stats = &txq->fep->stats;
+	struct bufdesc *bdp, *last_bdp;
+	struct rte_mbuf *mbuf;
+	unsigned short status;
+	unsigned short buflen;
+	unsigned int index, estatus = 0;
+	unsigned int i, pkt_transmitted = 0;
+	u8 *data;
+	int tx_st = 1;
+
+	while (tx_st) {
+		if (pkt_transmitted >= nb_pkts) {
+			tx_st = 0;
+			break;
+		}
+		bdp = txq->bd.cur;
+		/* First clean the ring */
+		index = enet_get_bd_index(bdp, &txq->bd);
+		status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
+
+		if (status & TX_BD_READY) {
+			stats->oerrors++;
+			break;
+		}
+		if (txq->tx_mbuf[index]) {
+			rte_pktmbuf_free(txq->tx_mbuf[index]);
+			txq->tx_mbuf[index] = NULL;
+		}
+
+		mbuf = *(tx_pkts);
+		tx_pkts++;
+
+		/* Fill in a Tx ring entry */
+		last_bdp = bdp;
+		status &= ~TX_BD_STATS;
+
+		/* Scattered mbufs are not supported; stop before touching
+		 * stats or the descriptor.
+		 */
+		if (mbuf->nb_segs > 1) {
+			ENETFEC_PMD_DEBUG("SG not supported");
+			return pkt_transmitted;
+		}
+
+		/* Set buffer length and buffer pointer */
+		buflen = rte_pktmbuf_pkt_len(mbuf);
+		stats->opackets++;
+		stats->obytes += buflen;
+
+		status |= (TX_BD_LAST);
+		data = rte_pktmbuf_mtod(mbuf, void *);
+		for (i = 0; i <= buflen; i += RTE_CACHE_LINE_SIZE)
+			dcbf(data + i);
+
+		rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(mbuf)),
+			    &bdp->bd_bufaddr);
+		rte_write16(rte_cpu_to_le_16(buflen), &bdp->bd_datlen);
+
+		if (txq->fep->bufdesc_ex) {
+			struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
+
+			rte_write32(0, &ebdp->bd_bdu);
+			rte_write32(rte_cpu_to_le_32(estatus),
+				    &ebdp->bd_esc);
+		}
+
+		index = enet_get_bd_index(last_bdp, &txq->bd);
+		/* Save mbuf pointer */
+		txq->tx_mbuf[index] = mbuf;
+
+		/* Make sure the updates to rest of the descriptor are performed
+		 * before transferring ownership.
+		 */
+		status |= (TX_BD_READY | TX_BD_TC);
+		rte_wmb();
+		rte_write16(rte_cpu_to_le_16(status), &bdp->bd_sc);
+
+		/* Trigger transmission start */
+		rte_write32(0, txq->bd.active_reg_desc);
+		pkt_transmitted++;
+
+		/* If this was the last BD in the ring, start at the
+		 * beginning again.
+		 */
+		bdp = enet_get_nextdesc(last_bdp, &txq->bd);
+
+		/* Make sure the updates to bdp and tx_mbuf are performed
+		 * before txq->bd.cur is updated.
+		 */
+		txq->bd.cur = bdp;
+	}
+	return pkt_transmitted;
+}

From patchwork Thu Oct 21 04:47:00 2021
From: Apeksha Gupta
To: david.marchand@redhat.com, andrew.rybchenko@oktetlabs.ru, ferruh.yigit@intel.com
Cc: dev@dpdk.org, sachin.saxena@nxp.com, hemant.agrawal@nxp.com, Apeksha Gupta
Date: Thu, 21 Oct 2021 10:17:00 +0530
Message-Id: <20211021044700.12370-6-apeksha.gupta@nxp.com>
Subject: [dpdk-dev] [PATCH v6 5/5] net/enetfec: add features

This patch adds checksum and VLAN offload support to the enetfec network
poll mode driver.
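With these offloads in place, an application inspects the per-mbuf result
flags on the RX path. A minimal sketch (illustrative only), using the same
flag names the patch uses:

    #include <rte_mbuf.h>

    /* Returns 0 if the frame's IP checksum was verified good by hardware
     * (or not checked), -1 if hardware flagged it bad.
     */
    static int
    check_rx_csum(const struct rte_mbuf *m)
    {
        if (m->ol_flags & PKT_RX_IP_CKSUM_BAD)
            return -1;
        return 0;
    }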
Signed-off-by: Sachin Saxena
Signed-off-by: Apeksha Gupta
---
 doc/guides/nics/enetfec.rst          |  2 ++
 doc/guides/nics/features/enetfec.ini |  3 ++
 drivers/net/enetfec/enet_ethdev.c    | 17 ++++++++-
 drivers/net/enetfec/enet_regs.h      | 10 ++++++
 drivers/net/enetfec/enet_rxtx.c      | 53 +++++++++++++++++++++++++++-
 5 files changed, 83 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
index 47836630b6..132f0209e5 100644
--- a/doc/guides/nics/enetfec.rst
+++ b/doc/guides/nics/enetfec.rst
@@ -84,6 +84,8 @@ ENETFEC Features
 
 - Basic stats
 - Promiscuous
+- VLAN offload
+- L3/L4 checksum offload
 - Linux
 - ARMv8

diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini
index 7e0fb148ac..3e9cc90b9f 100644
--- a/doc/guides/nics/features/enetfec.ini
+++ b/doc/guides/nics/features/enetfec.ini
@@ -6,6 +6,9 @@
 [Features]
 Basic stats = Y
 Promiscuous mode = Y
+VLAN offload = Y
+L3 checksum offload = Y
+L4 checksum offload = Y
 Linux = Y
 ARMv8 = Y
 Usage doc = Y

diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index 4419952443..c6957e16e5 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -106,7 +106,11 @@ enetfec_restart(struct rte_eth_dev *dev)
 	val = rte_read32((uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
 	/* align IP header */
 	val |= ENETFEC_RACC_SHIFT16;
-	val &= ~ENETFEC_RACC_OPTIONS;
+	if (fep->flag_csum & RX_FLAG_CSUM_EN)
+		/* set RX checksum */
+		val |= ENETFEC_RACC_OPTIONS;
+	else
+		val &= ~ENETFEC_RACC_OPTIONS;
 	rte_write32(rte_cpu_to_le_32(val),
 		    (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
 	rte_write32(rte_cpu_to_le_32(PKT_MAX_BUF_SIZE),
@@ -611,9 +615,20 @@ static int
 enetfec_eth_init(struct rte_eth_dev *dev)
 {
 	struct enetfec_private *fep = dev->data->dev_private;
+	struct rte_eth_conf *eth_conf = &fep->dev->data->dev_conf;
+	uint64_t rx_offloads = eth_conf->rxmode.offloads;
 
 	fep->full_duplex = FULL_DUPLEX;
 	dev->dev_ops = &enetfec_ops;
+	if (fep->quirks & QUIRK_VLAN)
+		/* enable hw VLAN support */
+		rx_offloads |= DEV_RX_OFFLOAD_VLAN;
+
+	if (fep->quirks & QUIRK_CSUM) {
+		/* enable hw accelerator */
+		rx_offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+		fep->flag_csum |= RX_FLAG_CSUM_EN;
+	}
 
 	rte_eth_dev_probing_finish(dev);
 	return 0;

diff --git a/drivers/net/enetfec/enet_regs.h b/drivers/net/enetfec/enet_regs.h
index 5415ed77ea..a300c6f8bc 100644
--- a/drivers/net/enetfec/enet_regs.h
+++ b/drivers/net/enetfec/enet_regs.h
@@ -27,6 +27,12 @@
 #define RX_BD_EMPTY ((ushort)0x8000) /* BD is empty */
 #define RX_BD_STATS ((ushort)0x013f) /* All buffer descriptor status bits */
 
+/* Ethernet receive use control and status of enhanced buffer descriptor */
+#define BD_ENETFEC_RX_VLAN 0x00000004
+
+#define RX_FLAG_CSUM_EN  (RX_BD_ICE | RX_BD_PCR)
+#define RX_FLAG_CSUM_ERR (RX_BD_ICE | RX_BD_PCR)
+
 /* Ethernet transmit use control and status of buffer descriptor */
 #define TX_BD_TC ((ushort)0x0400) /* Transmit CRC */
 #define TX_BD_LAST ((ushort)0x0800) /* Last in frame */
@@ -56,6 +62,10 @@
 #define QUIRK_HAS_ENETFEC_MAC (1 << 0)
 /* GBIT supported in controller */
 #define QUIRK_GBIT (1 << 3)
+/* Controller supports hardware checksum */
+#define QUIRK_CSUM (1 << 5)
+/* Controller supports hardware VLAN */
+#define QUIRK_VLAN (1 << 6)
 /* RACC register supported by controller */
 #define QUIRK_RACC (1 << 12)
 /* i.MX8 ENETFEC IP version added the feature to generate the delayed TXC or
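With VLAN stripping active, the tag shows up in the mbuf rather than in the
packet data. A sketch of the application-side check (illustration only),
again using the flag names from this patch:

    #include <stdio.h>
    #include <rte_mbuf.h>

    static void
    show_vlan(const struct rte_mbuf *m)
    {
        /* PKT_RX_VLAN_STRIPPED implies vlan_tci holds the stripped tag. */
        if (m->ol_flags & PKT_RX_VLAN_STRIPPED)
            printf("VLAN tag %u stripped by hardware\n", m->vlan_tci);
    }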
diff --git a/drivers/net/enetfec/enet_rxtx.c b/drivers/net/enetfec/enet_rxtx.c
index 445fa97e77..fdd3343589 100644
--- a/drivers/net/enetfec/enet_rxtx.c
+++ b/drivers/net/enetfec/enet_rxtx.c
@@ -245,9 +245,14 @@ enetfec_recv_pkts(void *rxq1, struct rte_mbuf **rx_pkts,
 	unsigned short status;
 	unsigned short pkt_len;
 	int pkt_received = 0, index = 0;
-	void *data;
+	void *data, *mbuf_data;
+	uint16_t vlan_tag;
+	struct bufdesc_ex *ebdp = NULL;
+	bool vlan_packet_rcvd = false;
 	struct enetfec_priv_rx_q *rxq = (struct enetfec_priv_rx_q *)rxq1;
 	struct rte_eth_stats *stats = &rxq->fep->stats;
+	struct rte_eth_conf *eth_conf = &rxq->fep->dev->data->dev_conf;
+	uint64_t rx_offloads = eth_conf->rxmode.offloads;
 
 	pool = rxq->pool;
 	bdp = rxq->bd.cur;
@@ -302,6 +307,7 @@ enetfec_recv_pkts(void *rxq1, struct rte_mbuf **rx_pkts,
 		mbuf = rxq->rx_mbuf[index];
 
 		data = rte_pktmbuf_mtod(mbuf, uint8_t *);
+		mbuf_data = data;
 		rte_prefetch0(data);
 		rte_pktmbuf_append((struct rte_mbuf *)mbuf,
 				   pkt_len - 4);
@@ -311,6 +317,47 @@ enetfec_recv_pkts(void *rxq1, struct rte_mbuf **rx_pkts,
 
 		rx_pkts[pkt_received] = mbuf;
 		pkt_received++;
+
+		/* Extract the enhanced buffer descriptor */
+		ebdp = NULL;
+		if (rxq->fep->bufdesc_ex)
+			ebdp = (struct bufdesc_ex *)bdp;
+
+		/* If this is a VLAN packet remove the VLAN Tag */
+		vlan_packet_rcvd = false;
+		if ((rx_offloads & DEV_RX_OFFLOAD_VLAN) &&
+		    rxq->fep->bufdesc_ex &&
+		    (rte_read32(&ebdp->bd_esc) &
+		     rte_cpu_to_le_32(BD_ENETFEC_RX_VLAN))) {
+			/* Push and remove the vlan tag */
+			struct rte_vlan_hdr *vlan_header =
+				(struct rte_vlan_hdr *)
+				((uint8_t *)data + ETH_HLEN);
+			vlan_tag = rte_be_to_cpu_16(vlan_header->vlan_tci);
+
+			vlan_packet_rcvd = true;
+			memmove((uint8_t *)mbuf_data + VLAN_HLEN,
+				data, ETH_ALEN * 2);
+			rte_pktmbuf_adj(mbuf, VLAN_HLEN);
+		}
+
+		if (rxq->fep->bufdesc_ex &&
+		    (rxq->fep->flag_csum & RX_FLAG_CSUM_EN)) {
+			if ((rte_read32(&ebdp->bd_esc) &
+			     rte_cpu_to_le_32(RX_FLAG_CSUM_ERR)) == 0) {
+				/* hardware verified the checksum;
+				 * no need to check it again
+				 */
+				mbuf->ol_flags = PKT_RX_IP_CKSUM_GOOD;
+			} else {
+				mbuf->ol_flags = PKT_RX_IP_CKSUM_BAD;
+			}
+		}
+
+		/* Handle received VLAN packets */
+		if (vlan_packet_rcvd) {
+			mbuf->vlan_tci = vlan_tag;
+			mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+		}
+
 		rxq->rx_mbuf[index] = new_mbuf;
 		rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(new_mbuf)),
 			    &bdp->bd_bufaddr);
@@ -411,6 +458,10 @@ enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 		if (txq->fep->bufdesc_ex) {
 			struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
+
+			if (mbuf->ol_flags & PKT_RX_IP_CKSUM_GOOD)
+				estatus |= TX_BD_PINS | TX_BD_IINS;
+
 			rte_write32(0, &ebdp->bd_bdu);
 			rte_write32(rte_cpu_to_le_32(estatus),
 				    &ebdp->bd_esc);