From patchwork Sun Oct 17 06:53:24 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nipun Gupta X-Patchwork-Id: 101894 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id DFD0CA0547; Sun, 17 Oct 2021 08:53:42 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id E14E240DF7; Sun, 17 Oct 2021 08:53:36 +0200 (CEST) Received: from inva021.nxp.com (inva021.nxp.com [92.121.34.21]) by mails.dpdk.org (Postfix) with ESMTP id 863D44003C for ; Sun, 17 Oct 2021 08:53:34 +0200 (CEST) Received: from inva021.nxp.com (localhost [127.0.0.1]) by inva021.eu-rdc02.nxp.com (Postfix) with ESMTP id 5B955201B8B; Sun, 17 Oct 2021 08:53:34 +0200 (CEST) Received: from aprdc01srsp001v.ap-rdc01.nxp.com (aprdc01srsp001v.ap-rdc01.nxp.com [165.114.16.16]) by inva021.eu-rdc02.nxp.com (Postfix) with ESMTP id EDB012005C1; Sun, 17 Oct 2021 08:53:33 +0200 (CEST) Received: from lsv03274.swis.in-blr01.nxp.com (lsv03274.swis.in-blr01.nxp.com [92.120.147.114]) by aprdc01srsp001v.ap-rdc01.nxp.com (Postfix) with ESMTP id 01E84183AD05; Sun, 17 Oct 2021 14:53:32 +0800 (+08) From: nipun.gupta@nxp.com To: dev@dpdk.org, gakhil@marvell.com, nicolas.chautru@intel.com Cc: david.marchand@redhat.com, hemant.agrawal@nxp.com, Nipun Gupta Date: Sun, 17 Oct 2021 12:23:24 +0530 Message-Id: <20211017065331.25488-2-nipun.gupta@nxp.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211017065331.25488-1-nipun.gupta@nxp.com> References: <20210318063421.14895-1-hemant.agrawal@nxp.com> <20211017065331.25488-1-nipun.gupta@nxp.com> X-Virus-Scanned: ClamAV using ClamSMTP Subject: [dpdk-dev] [PATCH v11 1/8] bbdev: add device info related to data endianness X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Nicolas Chautru Adding device information to capture explicitly the assumption of the input/output data byte endianness being processed. Signed-off-by: Nicolas Chautru Signed-off-by: Nipun Gupta --- doc/guides/rel_notes/release_21_11.rst | 1 + drivers/baseband/acc100/rte_acc100_pmd.c | 1 + drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 1 + drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 1 + drivers/baseband/null/bbdev_null.c | 6 ++++++ drivers/baseband/turbo_sw/bbdev_turbo_software.c | 1 + lib/bbdev/rte_bbdev.h | 4 ++++ 7 files changed, 15 insertions(+) diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst index 4c56cdfeaa..957bd78d61 100644 --- a/doc/guides/rel_notes/release_21_11.rst +++ b/doc/guides/rel_notes/release_21_11.rst @@ -229,6 +229,7 @@ API Changes the crypto/security operation. This field will be used to communicate events such as soft expiry with IPsec in lookaside mode. +* bbdev: Added device info related to data byte endianness processing. 
ABI Changes ----------- diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c b/drivers/baseband/acc100/rte_acc100_pmd.c index 4e2feefc3c..05fe6f8b6f 100644 --- a/drivers/baseband/acc100/rte_acc100_pmd.c +++ b/drivers/baseband/acc100/rte_acc100_pmd.c @@ -1089,6 +1089,7 @@ acc100_dev_info_get(struct rte_bbdev *dev, #else dev_info->harq_buffer_size = 0; #endif + dev_info->data_endianness = RTE_LITTLE_ENDIAN; acc100_check_ir(d); } diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c index 6485cc824a..ee457f3071 100644 --- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c +++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c @@ -372,6 +372,7 @@ fpga_dev_info_get(struct rte_bbdev *dev, dev_info->default_queue_conf = default_queue_conf; dev_info->capabilities = bbdev_capabilities; dev_info->cpu_flag_reqs = NULL; + dev_info->data_endianness = RTE_LITTLE_ENDIAN; /* Calculates number of queues assigned to device */ dev_info->max_num_queues = 0; diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c index 350c4248eb..703bb611a0 100644 --- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c +++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c @@ -644,6 +644,7 @@ fpga_dev_info_get(struct rte_bbdev *dev, dev_info->default_queue_conf = default_queue_conf; dev_info->capabilities = bbdev_capabilities; dev_info->cpu_flag_reqs = NULL; + dev_info->data_endianness = RTE_LITTLE_ENDIAN; /* Calculates number of queues assigned to device */ dev_info->max_num_queues = 0; diff --git a/drivers/baseband/null/bbdev_null.c b/drivers/baseband/null/bbdev_null.c index 53c538ba44..753d920e18 100644 --- a/drivers/baseband/null/bbdev_null.c +++ b/drivers/baseband/null/bbdev_null.c @@ -77,6 +77,12 @@ info_get(struct rte_bbdev *dev, struct rte_bbdev_driver_info *dev_info) dev_info->cpu_flag_reqs = NULL; dev_info->min_alignment = 0; + /* BBDEV null device does not process the data, so + * endianness setting is not relevant, but setting it + * here for code completeness. 
+ */ + dev_info->data_endianness = RTE_LITTLE_ENDIAN; + rte_bbdev_log_debug("got device info from %u", dev->data->dev_id); } diff --git a/drivers/baseband/turbo_sw/bbdev_turbo_software.c b/drivers/baseband/turbo_sw/bbdev_turbo_software.c index e1db2bf205..b234bb751a 100644 --- a/drivers/baseband/turbo_sw/bbdev_turbo_software.c +++ b/drivers/baseband/turbo_sw/bbdev_turbo_software.c @@ -253,6 +253,7 @@ info_get(struct rte_bbdev *dev, struct rte_bbdev_driver_info *dev_info) dev_info->capabilities = bbdev_capabilities; dev_info->min_alignment = 64; dev_info->harq_buffer_size = 0; + dev_info->data_endianness = RTE_LITTLE_ENDIAN; rte_bbdev_log_debug("got device info from %u\n", dev->data->dev_id); } diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h index 3ebf62e697..e863bd913f 100644 --- a/lib/bbdev/rte_bbdev.h +++ b/lib/bbdev/rte_bbdev.h @@ -309,6 +309,10 @@ struct rte_bbdev_driver_info { uint16_t min_alignment; /** HARQ memory available in kB */ uint32_t harq_buffer_size; + /** Byte endianness (RTE_BIG_ENDIAN/RTE_LITTLE_ENDIAN) supported + * for input/output data + */ + uint8_t data_endianness; /** Default queue configuration used if none is supplied */ struct rte_bbdev_queue_conf default_queue_conf; /** Device operation capabilities */ From patchwork Sun Oct 17 06:53:25 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nipun Gupta X-Patchwork-Id: 101895 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 4AEE6A0547; Sun, 17 Oct 2021 08:53:48 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id E9411410E1; Sun, 17 Oct 2021 08:53:37 +0200 (CEST) Received: from inva020.nxp.com (inva020.nxp.com [92.121.34.13]) by mails.dpdk.org (Postfix) with ESMTP id ECDFA4003C for ; Sun, 17 Oct 2021 08:53:34 +0200 (CEST) Received: from inva020.nxp.com (localhost [127.0.0.1]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id C5F7F1A1365; Sun, 17 Oct 2021 08:53:34 +0200 (CEST) Received: from aprdc01srsp001v.ap-rdc01.nxp.com (aprdc01srsp001v.ap-rdc01.nxp.com [165.114.16.16]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id 6201F1A132F; Sun, 17 Oct 2021 08:53:34 +0200 (CEST) Received: from lsv03274.swis.in-blr01.nxp.com (lsv03274.swis.in-blr01.nxp.com [92.120.147.114]) by aprdc01srsp001v.ap-rdc01.nxp.com (Postfix) with ESMTP id 6D5AA183AD0B; Sun, 17 Oct 2021 14:53:33 +0800 (+08) From: nipun.gupta@nxp.com To: dev@dpdk.org, gakhil@marvell.com, nicolas.chautru@intel.com Cc: david.marchand@redhat.com, hemant.agrawal@nxp.com, Nipun Gupta Date: Sun, 17 Oct 2021 12:23:25 +0530 Message-Id: <20211017065331.25488-3-nipun.gupta@nxp.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211017065331.25488-1-nipun.gupta@nxp.com> References: <20210318063421.14895-1-hemant.agrawal@nxp.com> <20211017065331.25488-1-nipun.gupta@nxp.com> X-Virus-Scanned: ClamAV using ClamSMTP Subject: [dpdk-dev] [PATCH v11 2/8] baseband: introduce NXP LA12xx driver X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Nipun Gupta This patch introduce the baseband device drivers for NXP's LA1200 series software defined baseband modem. 
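(Illustration only, not part of this patch: the probe/remove callbacks registered below via RTE_PMD_REGISTER_VDEV can also be exercised from an application at run time, equivalent to passing --vdev=baseband_la12xx on the EAL command line. A minimal sketch, assuming EAL is already initialised; the function name is hypothetical:)

    #include <stdio.h>
    #include <rte_bus_vdev.h>
    #include <rte_bbdev.h>

    static int
    la12xx_hotplug_example(void)
    {
            int ret;

            /* Triggers la12xx_bbdev_probe() -> la12xx_bbdev_create() */
            ret = rte_vdev_init("baseband_la12xx", NULL);
            if (ret != 0)
                    return ret;

            printf("bbdev devices now present: %u\n", rte_bbdev_count());

            /* Triggers la12xx_bbdev_remove(), freeing the private data */
            return rte_vdev_uninit("baseband_la12xx");
    }
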
Signed-off-by: Nipun Gupta Signed-off-by: Hemant Agrawal --- MAINTAINERS | 10 ++ doc/guides/bbdevs/index.rst | 1 + doc/guides/bbdevs/la12xx.rst | 71 ++++++++++++ drivers/baseband/la12xx/bbdev_la12xx.c | 108 ++++++++++++++++++ .../baseband/la12xx/bbdev_la12xx_pmd_logs.h | 28 +++++ drivers/baseband/la12xx/meson.build | 6 + drivers/baseband/la12xx/version.map | 3 + drivers/baseband/meson.build | 1 + 8 files changed, 228 insertions(+) create mode 100644 doc/guides/bbdevs/la12xx.rst create mode 100644 drivers/baseband/la12xx/bbdev_la12xx.c create mode 100644 drivers/baseband/la12xx/bbdev_la12xx_pmd_logs.h create mode 100644 drivers/baseband/la12xx/meson.build create mode 100644 drivers/baseband/la12xx/version.map diff --git a/MAINTAINERS b/MAINTAINERS index ed8becce85..ff632479c5 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1289,6 +1289,16 @@ F: drivers/event/opdl/ F: doc/guides/eventdevs/opdl.rst +Baseband Drivers +---------------- + +NXP LA12xx driver +M: Nipun Gupta +M: Hemant Agrawal +F: drivers/baseband/la12xx/ +F: doc/guides/bbdevs/la12xx.rst + + Rawdev Drivers -------------- diff --git a/doc/guides/bbdevs/index.rst b/doc/guides/bbdevs/index.rst index 4445cbd1b0..cedd706fa6 100644 --- a/doc/guides/bbdevs/index.rst +++ b/doc/guides/bbdevs/index.rst @@ -14,3 +14,4 @@ Baseband Device Drivers fpga_lte_fec fpga_5gnr_fec acc100 + la12xx diff --git a/doc/guides/bbdevs/la12xx.rst b/doc/guides/bbdevs/la12xx.rst new file mode 100644 index 0000000000..9ac6f0a0cd --- /dev/null +++ b/doc/guides/bbdevs/la12xx.rst @@ -0,0 +1,71 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright 2021 NXP + +NXP LA12xx Poll Mode Driver +======================================= + +The BBDEV LA12xx poll mode driver (PMD) supports an implementation for +offloading High Phy processing functions like LDPC Encode / Decode 5GNR wireless +acceleration function, using PCI based LA12xx Software defined radio. + +More information can be found at `NXP Official Website +`_. + +Features +-------- + +LA12xx PMD supports the following features: + +- Maximum of 8 LDPC decode (UL) queues +- Maximum of 8 LDPC encode (DL) queues +- PCIe Gen-3 x8 Interface + +Installation +------------ + +Section 3 of the DPDK manual provides instructions on installing and compiling DPDK. + +DPDK requires hugepages to be configured as detailed in section 2 of the DPDK manual. + +Initialization +-------------- + +The device can be listed on the host console with: + + +Use the following lspci command to get the multiple LA12xx processor ids. The +device ID of the LA12xx baseband processor is "1c30". + +.. code-block:: console + + sudo lspci -nn + +... +0001:01:00.0 Power PC [0b20]: Freescale Semiconductor Inc Device [1957:1c30] ( +rev 10) +... +0002:01:00.0 Power PC [0b20]: Freescale Semiconductor Inc Device [1957:1c30] ( +rev 10) + + +Prerequisites +------------- + +Currently supported by DPDK: + +- NXP LA1224 BSP **1.0+**. +- NXP LA1224 PCIe Modem card connected to ARM host. + +- Follow the DPDK :ref:`Getting Started Guide for Linux ` to setup the basic DPDK environment. + +Enabling logs +------------- + +For enabling logs, use the following EAL parameter: + +.. code-block:: console + + ./your_bbdev_application --log-level=la12xx: + +Using ``bb.la12xx`` as log matching criteria, all Baseband PMD logs can be +enabled which are lower than logging ``level``. 
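(Aside tying this driver documentation back to patch 1/8: the new data_endianness field exists so that applications can adapt their LDPC input/output buffers to devices such as LA12xx, which reports RTE_BIG_ENDIAN in patch 5/8 below, while the existing PMDs report RTE_LITTLE_ENDIAN. For illustration only — not part of this series, and assuming a 32-bit word swap granularity — an application-side check might look like:)

    #include <stddef.h>
    #include <stdint.h>
    #include <rte_bbdev.h>
    #include <rte_byteorder.h>

    /* Byte-swap the payload only when the device's expected data
     * endianness differs from the host byte order.
     */
    static int
    prepare_ldpc_input(uint16_t dev_id, uint32_t *words, size_t n_words)
    {
            struct rte_bbdev_info info;
            size_t i;

            if (rte_bbdev_info_get(dev_id, &info) != 0)
                    return -1;

            if (info.drv.data_endianness != RTE_BYTE_ORDER) {
                    for (i = 0; i < n_words; i++)
                            words[i] = rte_bswap32(words[i]);
            }
            return 0;
    }
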
diff --git a/drivers/baseband/la12xx/bbdev_la12xx.c b/drivers/baseband/la12xx/bbdev_la12xx.c new file mode 100644 index 0000000000..d3d7a4df37 --- /dev/null +++ b/drivers/baseband/la12xx/bbdev_la12xx.c @@ -0,0 +1,108 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2020-2021 NXP + */ + +#include + +#include +#include +#include +#include +#include + +#include +#include + +#include + +#define DRIVER_NAME baseband_la12xx + +/* private data structure */ +struct bbdev_la12xx_private { + unsigned int max_nb_queues; /**< Max number of queues */ +}; +/* Create device */ +static int +la12xx_bbdev_create(struct rte_vdev_device *vdev) +{ + struct rte_bbdev *bbdev; + const char *name = rte_vdev_device_name(vdev); + + PMD_INIT_FUNC_TRACE(); + + bbdev = rte_bbdev_allocate(name); + if (bbdev == NULL) + return -ENODEV; + + bbdev->data->dev_private = rte_zmalloc(name, + sizeof(struct bbdev_la12xx_private), + RTE_CACHE_LINE_SIZE); + if (bbdev->data->dev_private == NULL) { + rte_bbdev_release(bbdev); + return -ENOMEM; + } + + bbdev->dev_ops = NULL; + bbdev->device = &vdev->device; + bbdev->data->socket_id = 0; + bbdev->intr_handle = NULL; + + /* register rx/tx burst functions for data path */ + bbdev->dequeue_enc_ops = NULL; + bbdev->dequeue_dec_ops = NULL; + bbdev->enqueue_enc_ops = NULL; + bbdev->enqueue_dec_ops = NULL; + + return 0; +} + +/* Initialise device */ +static int +la12xx_bbdev_probe(struct rte_vdev_device *vdev) +{ + const char *name; + + PMD_INIT_FUNC_TRACE(); + + if (vdev == NULL) + return -EINVAL; + + name = rte_vdev_device_name(vdev); + if (name == NULL) + return -EINVAL; + + return la12xx_bbdev_create(vdev); +} + +/* Uninitialise device */ +static int +la12xx_bbdev_remove(struct rte_vdev_device *vdev) +{ + struct rte_bbdev *bbdev; + const char *name; + + PMD_INIT_FUNC_TRACE(); + + if (vdev == NULL) + return -EINVAL; + + name = rte_vdev_device_name(vdev); + if (name == NULL) + return -EINVAL; + + bbdev = rte_bbdev_get_named_dev(name); + if (bbdev == NULL) + return -EINVAL; + + rte_free(bbdev->data->dev_private); + + return rte_bbdev_release(bbdev); +} + +static struct rte_vdev_driver bbdev_la12xx_pmd_drv = { + .probe = la12xx_bbdev_probe, + .remove = la12xx_bbdev_remove +}; + +RTE_PMD_REGISTER_VDEV(DRIVER_NAME, bbdev_la12xx_pmd_drv); +RTE_LOG_REGISTER_DEFAULT(bbdev_la12xx_logtype, NOTICE); diff --git a/drivers/baseband/la12xx/bbdev_la12xx_pmd_logs.h b/drivers/baseband/la12xx/bbdev_la12xx_pmd_logs.h new file mode 100644 index 0000000000..452435ccb9 --- /dev/null +++ b/drivers/baseband/la12xx/bbdev_la12xx_pmd_logs.h @@ -0,0 +1,28 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2020 NXP + */ + +#ifndef _BBDEV_LA12XX_PMD_LOGS_H_ +#define _BBDEV_LA12XX_PMD_LOGS_H_ + +extern int bbdev_la12xx_logtype; + +#define rte_bbdev_log(level, fmt, ...) \ + rte_log(RTE_LOG_ ## level, bbdev_la12xx_logtype, fmt "\n", \ + ##__VA_ARGS__) + +#ifdef RTE_LIBRTE_BBDEV_DEBUG +#define rte_bbdev_log_debug(fmt, ...) \ + rte_bbdev_log(DEBUG, "la12xx_pmd: " fmt, \ + ##__VA_ARGS__) +#else +#define rte_bbdev_log_debug(fmt, ...) +#endif + +#define PMD_INIT_FUNC_TRACE() rte_bbdev_log_debug(">>") + +/* DP Logs, toggled out at compile time if level lower than current level */ +#define rte_bbdev_dp_log(level, fmt, args...) 
\ + RTE_LOG_DP(level, PMD, fmt, ## args) + +#endif /* _BBDEV_LA12XX_PMD_LOGS_H_ */ diff --git a/drivers/baseband/la12xx/meson.build b/drivers/baseband/la12xx/meson.build new file mode 100644 index 0000000000..7a017dcffa --- /dev/null +++ b/drivers/baseband/la12xx/meson.build @@ -0,0 +1,6 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright 2020-2021 NXP + +deps += ['bbdev', 'bus_vdev', 'ring'] + +sources = files('bbdev_la12xx.c') diff --git a/drivers/baseband/la12xx/version.map b/drivers/baseband/la12xx/version.map new file mode 100644 index 0000000000..4a76d1d52d --- /dev/null +++ b/drivers/baseband/la12xx/version.map @@ -0,0 +1,3 @@ +DPDK_21 { + local: *; +}; diff --git a/drivers/baseband/meson.build b/drivers/baseband/meson.build index 5ee61d5323..686e98b2ed 100644 --- a/drivers/baseband/meson.build +++ b/drivers/baseband/meson.build @@ -9,6 +9,7 @@ drivers = [ 'acc100', 'fpga_5gnr_fec', 'fpga_lte_fec', + 'la12xx', 'null', 'turbo_sw', ] From patchwork Sun Oct 17 06:53:26 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nipun Gupta X-Patchwork-Id: 101896 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 95A49A0547; Sun, 17 Oct 2021 08:53:54 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 14705410EA; Sun, 17 Oct 2021 08:53:39 +0200 (CEST) Received: from inva020.nxp.com (inva020.nxp.com [92.121.34.13]) by mails.dpdk.org (Postfix) with ESMTP id 5C31F4003C for ; Sun, 17 Oct 2021 08:53:35 +0200 (CEST) Received: from inva020.nxp.com (localhost [127.0.0.1]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id 3C4461A132F; Sun, 17 Oct 2021 08:53:35 +0200 (CEST) Received: from aprdc01srsp001v.ap-rdc01.nxp.com (aprdc01srsp001v.ap-rdc01.nxp.com [165.114.16.16]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id CE6B21A16F6; Sun, 17 Oct 2021 08:53:34 +0200 (CEST) Received: from lsv03274.swis.in-blr01.nxp.com (lsv03274.swis.in-blr01.nxp.com [92.120.147.114]) by aprdc01srsp001v.ap-rdc01.nxp.com (Postfix) with ESMTP id D675F183AC94; Sun, 17 Oct 2021 14:53:33 +0800 (+08) From: nipun.gupta@nxp.com To: dev@dpdk.org, gakhil@marvell.com, nicolas.chautru@intel.com Cc: david.marchand@redhat.com, hemant.agrawal@nxp.com, Nipun Gupta Date: Sun, 17 Oct 2021 12:23:26 +0530 Message-Id: <20211017065331.25488-4-nipun.gupta@nxp.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211017065331.25488-1-nipun.gupta@nxp.com> References: <20210318063421.14895-1-hemant.agrawal@nxp.com> <20211017065331.25488-1-nipun.gupta@nxp.com> X-Virus-Scanned: ClamAV using ClamSMTP Subject: [dpdk-dev] [PATCH v11 3/8] baseband/la12xx: add devargs for max queues X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Hemant Agrawal This patch adds dev args to take max queues as input Signed-off-by: Nipun Gupta Signed-off-by: Hemant Agrawal --- doc/guides/bbdevs/la12xx.rst | 4 ++ drivers/baseband/la12xx/bbdev_la12xx.c | 73 +++++++++++++++++++++++++- 2 files changed, 75 insertions(+), 2 deletions(-) diff --git a/doc/guides/bbdevs/la12xx.rst b/doc/guides/bbdevs/la12xx.rst index 9ac6f0a0cd..3725078567 100644 --- a/doc/guides/bbdevs/la12xx.rst +++ 
b/doc/guides/bbdevs/la12xx.rst @@ -58,6 +58,10 @@ Currently supported by DPDK: - Follow the DPDK :ref:`Getting Started Guide for Linux ` to setup the basic DPDK environment. +* Use dev arg option ``max_nb_queues=x`` to specify the maximum number of queues + to be used for communication with offload device i.e. modem. default is 16. + e.g. ``--vdev=baseband_la12xx,max_nb_queues=4`` + Enabling logs ------------- diff --git a/drivers/baseband/la12xx/bbdev_la12xx.c b/drivers/baseband/la12xx/bbdev_la12xx.c index d3d7a4df37..f5c835eeb8 100644 --- a/drivers/baseband/la12xx/bbdev_la12xx.c +++ b/drivers/baseband/la12xx/bbdev_la12xx.c @@ -17,13 +17,73 @@ #define DRIVER_NAME baseband_la12xx +/* Initialisation params structure that can be used by LA12xx BBDEV driver */ +struct bbdev_la12xx_params { + uint8_t queues_num; /*< LA12xx BBDEV queues number */ +}; + +#define LA12XX_MAX_NB_QUEUES_ARG "max_nb_queues" + +static const char * const bbdev_la12xx_valid_params[] = { + LA12XX_MAX_NB_QUEUES_ARG, +}; + /* private data structure */ struct bbdev_la12xx_private { unsigned int max_nb_queues; /**< Max number of queues */ }; +static inline int +parse_u16_arg(const char *key, const char *value, void *extra_args) +{ + uint16_t *u16 = extra_args; + + uint64_t result; + if ((value == NULL) || (extra_args == NULL)) + return -EINVAL; + errno = 0; + result = strtoul(value, NULL, 0); + if ((result >= (1 << 16)) || (errno != 0)) { + rte_bbdev_log(ERR, "Invalid value %" PRIu64 " for %s", + result, key); + return -ERANGE; + } + *u16 = (uint16_t)result; + return 0; +} + +/* Parse parameters used to create device */ +static int +parse_bbdev_la12xx_params(struct bbdev_la12xx_params *params, + const char *input_args) +{ + struct rte_kvargs *kvlist = NULL; + int ret = 0; + + if (params == NULL) + return -EINVAL; + if (input_args) { + kvlist = rte_kvargs_parse(input_args, + bbdev_la12xx_valid_params); + if (kvlist == NULL) + return -EFAULT; + + ret = rte_kvargs_process(kvlist, bbdev_la12xx_valid_params[0], + &parse_u16_arg, ¶ms->queues_num); + if (ret < 0) + goto exit; + + } + +exit: + if (kvlist) + rte_kvargs_free(kvlist); + return ret; +} + /* Create device */ static int -la12xx_bbdev_create(struct rte_vdev_device *vdev) +la12xx_bbdev_create(struct rte_vdev_device *vdev, + struct bbdev_la12xx_params *init_params __rte_unused) { struct rte_bbdev *bbdev; const char *name = rte_vdev_device_name(vdev); @@ -60,7 +120,11 @@ la12xx_bbdev_create(struct rte_vdev_device *vdev) static int la12xx_bbdev_probe(struct rte_vdev_device *vdev) { + struct bbdev_la12xx_params init_params = { + 8 + }; const char *name; + const char *input_args; PMD_INIT_FUNC_TRACE(); @@ -71,7 +135,10 @@ la12xx_bbdev_probe(struct rte_vdev_device *vdev) if (name == NULL) return -EINVAL; - return la12xx_bbdev_create(vdev); + input_args = rte_vdev_device_args(vdev); + parse_bbdev_la12xx_params(&init_params, input_args); + + return la12xx_bbdev_create(vdev, &init_params); } /* Uninitialise device */ @@ -105,4 +172,6 @@ static struct rte_vdev_driver bbdev_la12xx_pmd_drv = { }; RTE_PMD_REGISTER_VDEV(DRIVER_NAME, bbdev_la12xx_pmd_drv); +RTE_PMD_REGISTER_PARAM_STRING(DRIVER_NAME, + LA12XX_MAX_NB_QUEUES_ARG"="); RTE_LOG_REGISTER_DEFAULT(bbdev_la12xx_logtype, NOTICE); From patchwork Sun Oct 17 06:53:27 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nipun Gupta X-Patchwork-Id: 101897 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: 
patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 5B9FBA0547; Sun, 17 Oct 2021 08:54:00 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 23B06410F2; Sun, 17 Oct 2021 08:53:40 +0200 (CEST) Received: from inva021.nxp.com (inva021.nxp.com [92.121.34.21]) by mails.dpdk.org (Postfix) with ESMTP id C0CB84003C for ; Sun, 17 Oct 2021 08:53:35 +0200 (CEST) Received: from inva021.nxp.com (localhost [127.0.0.1]) by inva021.eu-rdc02.nxp.com (Postfix) with ESMTP id 9E6C3201B87; Sun, 17 Oct 2021 08:53:35 +0200 (CEST) Received: from aprdc01srsp001v.ap-rdc01.nxp.com (aprdc01srsp001v.ap-rdc01.nxp.com [165.114.16.16]) by inva021.eu-rdc02.nxp.com (Postfix) with ESMTP id 3BB602005C1; Sun, 17 Oct 2021 08:53:35 +0200 (CEST) Received: from lsv03274.swis.in-blr01.nxp.com (lsv03274.swis.in-blr01.nxp.com [92.120.147.114]) by aprdc01srsp001v.ap-rdc01.nxp.com (Postfix) with ESMTP id 4E8D5183AD6D; Sun, 17 Oct 2021 14:53:34 +0800 (+08) From: nipun.gupta@nxp.com To: dev@dpdk.org, gakhil@marvell.com, nicolas.chautru@intel.com Cc: david.marchand@redhat.com, hemant.agrawal@nxp.com Date: Sun, 17 Oct 2021 12:23:27 +0530 Message-Id: <20211017065331.25488-5-nipun.gupta@nxp.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211017065331.25488-1-nipun.gupta@nxp.com> References: <20210318063421.14895-1-hemant.agrawal@nxp.com> <20211017065331.25488-1-nipun.gupta@nxp.com> X-Virus-Scanned: ClamAV using ClamSMTP Subject: [dpdk-dev] [PATCH v11 4/8] baseband/la12xx: add support for multiple modems X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Hemant Agrawal This patch add support for multiple modems by assigning a modem id as dev args in vdev creation. Signed-off-by: Hemant Agrawal --- doc/guides/bbdevs/la12xx.rst | 5 ++ drivers/baseband/la12xx/bbdev_la12xx.c | 64 +++++++++++++++++++--- drivers/baseband/la12xx/bbdev_la12xx.h | 56 +++++++++++++++++++ drivers/baseband/la12xx/bbdev_la12xx_ipc.h | 20 +++++++ 4 files changed, 138 insertions(+), 7 deletions(-) create mode 100644 drivers/baseband/la12xx/bbdev_la12xx.h create mode 100644 drivers/baseband/la12xx/bbdev_la12xx_ipc.h diff --git a/doc/guides/bbdevs/la12xx.rst b/doc/guides/bbdevs/la12xx.rst index 3725078567..1a711ef5e3 100644 --- a/doc/guides/bbdevs/la12xx.rst +++ b/doc/guides/bbdevs/la12xx.rst @@ -58,6 +58,11 @@ Currently supported by DPDK: - Follow the DPDK :ref:`Getting Started Guide for Linux ` to setup the basic DPDK environment. +* Use dev arg option ``modem=0`` to identify the modem instance for a given + device. This is required only if more than 1 modem cards are attached to host. + this is optional and the default value is 0. + e.g. ``--vdev=baseband_la12xx,modem=0`` + * Use dev arg option ``max_nb_queues=x`` to specify the maximum number of queues to be used for communication with offload device i.e. modem. default is 16. e.g. 
``--vdev=baseband_la12xx,max_nb_queues=4`` diff --git a/drivers/baseband/la12xx/bbdev_la12xx.c b/drivers/baseband/la12xx/bbdev_la12xx.c index f5c835eeb8..58defa54f0 100644 --- a/drivers/baseband/la12xx/bbdev_la12xx.c +++ b/drivers/baseband/la12xx/bbdev_la12xx.c @@ -14,24 +14,26 @@ #include #include +#include +#include #define DRIVER_NAME baseband_la12xx /* Initialisation params structure that can be used by LA12xx BBDEV driver */ struct bbdev_la12xx_params { uint8_t queues_num; /*< LA12xx BBDEV queues number */ + int8_t modem_id; /*< LA12xx modem instance id */ }; #define LA12XX_MAX_NB_QUEUES_ARG "max_nb_queues" +#define LA12XX_VDEV_MODEM_ID_ARG "modem" +#define LA12XX_MAX_MODEM 4 static const char * const bbdev_la12xx_valid_params[] = { LA12XX_MAX_NB_QUEUES_ARG, + LA12XX_VDEV_MODEM_ID_ARG, }; -/* private data structure */ -struct bbdev_la12xx_private { - unsigned int max_nb_queues; /**< Max number of queues */ -}; static inline int parse_u16_arg(const char *key, const char *value, void *extra_args) { @@ -51,6 +53,28 @@ parse_u16_arg(const char *key, const char *value, void *extra_args) return 0; } +/* Parse integer from integer argument */ +static int +parse_integer_arg(const char *key __rte_unused, + const char *value, void *extra_args) +{ + int i; + char *end; + + errno = 0; + + i = strtol(value, &end, 10); + if (*end != 0 || errno != 0 || i < 0 || i > LA12XX_MAX_MODEM) { + rte_bbdev_log(ERR, "Supported Port IDS are 0 to %d", + LA12XX_MAX_MODEM - 1); + return -EINVAL; + } + + *((uint32_t *)extra_args) = i; + + return 0; +} + /* Parse parameters used to create device */ static int parse_bbdev_la12xx_params(struct bbdev_la12xx_params *params, @@ -72,6 +96,16 @@ parse_bbdev_la12xx_params(struct bbdev_la12xx_params *params, if (ret < 0) goto exit; + ret = rte_kvargs_process(kvlist, + bbdev_la12xx_valid_params[1], + &parse_integer_arg, + ¶ms->modem_id); + + if (params->modem_id >= LA12XX_MAX_MODEM) { + rte_bbdev_log(ERR, "Invalid modem id, must be < %u", + LA12XX_MAX_MODEM); + goto exit; + } } exit: @@ -83,10 +117,11 @@ parse_bbdev_la12xx_params(struct bbdev_la12xx_params *params, /* Create device */ static int la12xx_bbdev_create(struct rte_vdev_device *vdev, - struct bbdev_la12xx_params *init_params __rte_unused) + struct bbdev_la12xx_params *init_params) { struct rte_bbdev *bbdev; const char *name = rte_vdev_device_name(vdev); + struct bbdev_la12xx_private *priv; PMD_INIT_FUNC_TRACE(); @@ -102,6 +137,20 @@ la12xx_bbdev_create(struct rte_vdev_device *vdev, return -ENOMEM; } + priv = bbdev->data->dev_private; + priv->modem_id = init_params->modem_id; + /* if modem id is not configured */ + if (priv->modem_id == -1) + priv->modem_id = bbdev->data->dev_id; + + /* Reset Global variables */ + priv->num_ldpc_enc_queues = 0; + priv->num_ldpc_dec_queues = 0; + priv->num_valid_queues = 0; + priv->max_nb_queues = init_params->queues_num; + + rte_bbdev_log(INFO, "Setting Up %s: DevId=%d, ModemId=%d", + name, bbdev->data->dev_id, priv->modem_id); bbdev->dev_ops = NULL; bbdev->device = &vdev->device; bbdev->data->socket_id = 0; @@ -121,7 +170,7 @@ static int la12xx_bbdev_probe(struct rte_vdev_device *vdev) { struct bbdev_la12xx_params init_params = { - 8 + 8, -1, }; const char *name; const char *input_args; @@ -173,5 +222,6 @@ static struct rte_vdev_driver bbdev_la12xx_pmd_drv = { RTE_PMD_REGISTER_VDEV(DRIVER_NAME, bbdev_la12xx_pmd_drv); RTE_PMD_REGISTER_PARAM_STRING(DRIVER_NAME, - LA12XX_MAX_NB_QUEUES_ARG"="); + LA12XX_MAX_NB_QUEUES_ARG"=" + LA12XX_VDEV_MODEM_ID_ARG "= "); 
RTE_LOG_REGISTER_DEFAULT(bbdev_la12xx_logtype, NOTICE); diff --git a/drivers/baseband/la12xx/bbdev_la12xx.h b/drivers/baseband/la12xx/bbdev_la12xx.h new file mode 100644 index 0000000000..5228502331 --- /dev/null +++ b/drivers/baseband/la12xx/bbdev_la12xx.h @@ -0,0 +1,56 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2020-2021 NXP + */ + +#ifndef __BBDEV_LA12XX_H__ +#define __BBDEV_LA12XX_H__ + +#define BBDEV_IPC_ENC_OP_TYPE 1 +#define BBDEV_IPC_DEC_OP_TYPE 2 + +#define MAX_LDPC_ENC_FECA_QUEUES 4 +#define MAX_LDPC_DEC_FECA_QUEUES 4 + +#define MAX_CHANNEL_DEPTH 16 +/* private data structure */ +struct bbdev_la12xx_private { + void *ipc_priv; + uint8_t num_valid_queues; + uint8_t max_nb_queues; + uint8_t num_ldpc_enc_queues; + uint8_t num_ldpc_dec_queues; + int8_t modem_id; + struct bbdev_la12xx_q_priv *queues_priv[32]; +}; + +struct hugepage_info { + void *vaddr; + phys_addr_t paddr; + size_t len; +}; + +struct bbdev_la12xx_q_priv { + struct bbdev_la12xx_private *bbdev_priv; + uint32_t q_id; /**< Channel ID */ + uint32_t feca_blk_id; /** FECA block ID for processing */ + uint32_t feca_blk_id_be32; /**< FECA Block ID for this queue */ + uint8_t en_napi; /* 0: napi disabled, 1: napi enabled */ + uint16_t queue_size; /**< Queue depth */ + int32_t eventfd; /**< Event FD value */ + enum rte_bbdev_op_type op_type; /**< Operation type */ + uint32_t la12xx_core_id; + /* LA12xx core ID on which this will be scheduled */ + struct rte_mempool *mp; /**< Pool from where buffers would be cut */ + void *bbdev_op[MAX_CHANNEL_DEPTH]; + /**< Stores bbdev op for each index */ + void *msg_ch_vaddr[MAX_CHANNEL_DEPTH]; + /**< Stores msg channel addr for modem->host */ + uint32_t host_pi; /**< Producer_Index for HOST->MODEM */ + uint32_t host_ci; /**< Consumer Index for MODEM->HOST */ + host_ipc_params_t *host_params; /**< Host parameters */ +}; + +#define lower_32_bits(x) ((uint32_t)((uint64_t)x)) +#define upper_32_bits(x) ((uint32_t)(((uint64_t)(x) >> 16) >> 16)) + +#endif diff --git a/drivers/baseband/la12xx/bbdev_la12xx_ipc.h b/drivers/baseband/la12xx/bbdev_la12xx_ipc.h new file mode 100644 index 0000000000..9aa5562981 --- /dev/null +++ b/drivers/baseband/la12xx/bbdev_la12xx_ipc.h @@ -0,0 +1,20 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2020-2021 NXP + */ +#ifndef __BBDEV_LA12XX_IPC_H__ +#define __BBDEV_LA12XX_IPC_H__ + +/** No. of max channel per instance */ +#define IPC_MAX_DEPTH (16) + +/* This shared memory would be on the host side which have copy of some + * of the parameters which are also part of Shared BD ring. Read access + * of these parameters from the host side would not be over PCI. 
+ */ +typedef struct host_ipc_params { + volatile uint32_t pi; + volatile uint32_t ci; + volatile uint32_t modem_ptr[IPC_MAX_DEPTH]; +} __rte_packed host_ipc_params_t; + +#endif From patchwork Sun Oct 17 06:53:28 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nipun Gupta X-Patchwork-Id: 101898 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 8B828A0547; Sun, 17 Oct 2021 08:54:08 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id A22EE41102; Sun, 17 Oct 2021 08:53:43 +0200 (CEST) Received: from inva020.nxp.com (inva020.nxp.com [92.121.34.13]) by mails.dpdk.org (Postfix) with ESMTP id 54A9C40692 for ; Sun, 17 Oct 2021 08:53:36 +0200 (CEST) Received: from inva020.nxp.com (localhost [127.0.0.1]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id 219691A1B83; Sun, 17 Oct 2021 08:53:36 +0200 (CEST) Received: from aprdc01srsp001v.ap-rdc01.nxp.com (aprdc01srsp001v.ap-rdc01.nxp.com [165.114.16.16]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id AEAF51A16F6; Sun, 17 Oct 2021 08:53:35 +0200 (CEST) Received: from lsv03274.swis.in-blr01.nxp.com (lsv03274.swis.in-blr01.nxp.com [92.120.147.114]) by aprdc01srsp001v.ap-rdc01.nxp.com (Postfix) with ESMTP id ADC62183AC89; Sun, 17 Oct 2021 14:53:34 +0800 (+08) From: nipun.gupta@nxp.com To: dev@dpdk.org, gakhil@marvell.com, nicolas.chautru@intel.com Cc: david.marchand@redhat.com, hemant.agrawal@nxp.com, Nipun Gupta Date: Sun, 17 Oct 2021 12:23:28 +0530 Message-Id: <20211017065331.25488-6-nipun.gupta@nxp.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211017065331.25488-1-nipun.gupta@nxp.com> References: <20210318063421.14895-1-hemant.agrawal@nxp.com> <20211017065331.25488-1-nipun.gupta@nxp.com> X-Virus-Scanned: ClamAV using ClamSMTP Subject: [dpdk-dev] [PATCH v11 5/8] baseband/la12xx: add queue and modem config support X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Hemant Agrawal This patch add support for connecting with modem and creating the ipc channel as queues with modem for the exchange of data. 
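(Context note, illustration only: the queues created by this patch are shared-memory rings whose producer/consumer indices, per the ipc_br_md_t comment further down, carry a wrap flag in the most significant bit that flips on every ring wrap, so full and empty can be told apart without sacrificing a slot. One plausible sketch of that index arithmetic follows — the real logic lives in the enqueue/dequeue patch and in the modem firmware, and the bit-31 placement of the flag is an assumption:)

    #include <stdint.h>
    #include <stdbool.h>

    #define IPC_IDX_MASK  0x7fffffffu  /* low 31 bits: ring index */
    #define IPC_WRAP_FLAG 0x80000000u  /* MSB: flips on each wrap */

    /* Empty: producer and consumer match on both index and wrap flag */
    static bool ring_empty(uint32_t pi, uint32_t ci)
    {
            return pi == ci;
    }

    /* Full: indices match but the wrap flags differ */
    static bool ring_full(uint32_t pi, uint32_t ci)
    {
            return (pi & IPC_IDX_MASK) == (ci & IPC_IDX_MASK) &&
                   (pi & IPC_WRAP_FLAG) != (ci & IPC_WRAP_FLAG);
    }

    /* Advance an index by one slot, flipping the flag on roll-over */
    static uint32_t ring_advance(uint32_t idx, uint32_t depth)
    {
            uint32_t next = (idx & IPC_IDX_MASK) + 1;

            if (next == depth)
                    return (idx & IPC_WRAP_FLAG) ^ IPC_WRAP_FLAG;
            return (idx & IPC_WRAP_FLAG) | next;
    }
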
Signed-off-by: Nipun Gupta Signed-off-by: Hemant Agrawal --- drivers/baseband/la12xx/bbdev_la12xx.c | 559 ++++++++++++++++++++- drivers/baseband/la12xx/bbdev_la12xx.h | 17 +- drivers/baseband/la12xx/bbdev_la12xx_ipc.h | 189 ++++++- 3 files changed, 752 insertions(+), 13 deletions(-) diff --git a/drivers/baseband/la12xx/bbdev_la12xx.c b/drivers/baseband/la12xx/bbdev_la12xx.c index 58defa54f0..7e312cf2e7 100644 --- a/drivers/baseband/la12xx/bbdev_la12xx.c +++ b/drivers/baseband/la12xx/bbdev_la12xx.c @@ -3,6 +3,11 @@ */ #include +#include +#include +#include +#include +#include #include #include @@ -29,11 +34,556 @@ struct bbdev_la12xx_params { #define LA12XX_VDEV_MODEM_ID_ARG "modem" #define LA12XX_MAX_MODEM 4 +#define LA12XX_MAX_CORES 4 +#define LA12XX_LDPC_ENC_CORE 0 +#define LA12XX_LDPC_DEC_CORE 1 + +#define LA12XX_MAX_LDPC_ENC_QUEUES 4 +#define LA12XX_MAX_LDPC_DEC_QUEUES 4 + static const char * const bbdev_la12xx_valid_params[] = { LA12XX_MAX_NB_QUEUES_ARG, LA12XX_VDEV_MODEM_ID_ARG, }; +static const struct rte_bbdev_op_cap bbdev_capabilities[] = { + { + .type = RTE_BBDEV_OP_LDPC_ENC, + .cap.ldpc_enc = { + .capability_flags = + RTE_BBDEV_LDPC_RATE_MATCH | + RTE_BBDEV_LDPC_CRC_24A_ATTACH | + RTE_BBDEV_LDPC_CRC_24B_ATTACH, + .num_buffers_src = + RTE_BBDEV_LDPC_MAX_CODE_BLOCKS, + .num_buffers_dst = + RTE_BBDEV_LDPC_MAX_CODE_BLOCKS, + } + }, + { + .type = RTE_BBDEV_OP_LDPC_DEC, + .cap.ldpc_dec = { + .capability_flags = + RTE_BBDEV_LDPC_CRC_TYPE_24A_CHECK | + RTE_BBDEV_LDPC_CRC_TYPE_24B_CHECK | + RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP, + .num_buffers_src = + RTE_BBDEV_LDPC_MAX_CODE_BLOCKS, + .num_buffers_hard_out = + RTE_BBDEV_LDPC_MAX_CODE_BLOCKS, + .llr_size = 8, + .llr_decimals = 1, + } + }, + RTE_BBDEV_END_OF_CAPABILITIES_LIST() +}; + +static struct rte_bbdev_queue_conf default_queue_conf = { + .queue_size = MAX_CHANNEL_DEPTH, +}; + +/* Get device info */ +static void +la12xx_info_get(struct rte_bbdev *dev __rte_unused, + struct rte_bbdev_driver_info *dev_info) +{ + PMD_INIT_FUNC_TRACE(); + + dev_info->driver_name = RTE_STR(DRIVER_NAME); + dev_info->max_num_queues = LA12XX_MAX_QUEUES; + dev_info->queue_size_lim = MAX_CHANNEL_DEPTH; + dev_info->hardware_accelerated = true; + dev_info->max_dl_queue_priority = 0; + dev_info->max_ul_queue_priority = 0; + dev_info->data_endianness = RTE_BIG_ENDIAN; + dev_info->default_queue_conf = default_queue_conf; + dev_info->capabilities = bbdev_capabilities; + dev_info->cpu_flag_reqs = NULL; + dev_info->min_alignment = 64; + + rte_bbdev_log_debug("got device info from %u", dev->data->dev_id); +} + +/* Release queue */ +static int +la12xx_queue_release(struct rte_bbdev *dev, uint16_t q_id) +{ + RTE_SET_USED(dev); + RTE_SET_USED(q_id); + + PMD_INIT_FUNC_TRACE(); + + return 0; +} + +#define HUGEPG_OFFSET(A) \ + ((uint64_t) ((unsigned long) (A) \ + - ((uint64_t)ipc_priv->hugepg_start.host_vaddr))) + +static int +ipc_queue_configure(uint32_t channel_id, + ipc_t instance, + struct bbdev_la12xx_q_priv *q_priv) +{ + ipc_userspace_t *ipc_priv = (ipc_userspace_t *)instance; + ipc_instance_t *ipc_instance = ipc_priv->instance; + ipc_ch_t *ch; + void *vaddr; + uint32_t i = 0; + uint32_t msg_size = sizeof(struct bbdev_ipc_enqueue_op); + + PMD_INIT_FUNC_TRACE(); + + rte_bbdev_log_debug("%x %p", ipc_instance->initialized, + ipc_priv->instance); + ch = &(ipc_instance->ch_list[channel_id]); + + rte_bbdev_log_debug("channel: %u, depth: %u, msg size: %u", + channel_id, q_priv->queue_size, msg_size); + + /* Start init of channel */ + ch->md.ring_size = 
rte_cpu_to_be_32(q_priv->queue_size); + ch->md.pi = 0; + ch->md.ci = 0; + ch->md.msg_size = msg_size; + for (i = 0; i < q_priv->queue_size; i++) { + vaddr = rte_malloc(NULL, msg_size, RTE_CACHE_LINE_SIZE); + if (!vaddr) + return IPC_HOST_BUF_ALLOC_FAIL; + /* Only offset now */ + ch->bd_h[i].modem_ptr = + rte_cpu_to_be_32(HUGEPG_OFFSET(vaddr)); + ch->bd_h[i].host_virt_l = lower_32_bits(vaddr); + ch->bd_h[i].host_virt_h = upper_32_bits(vaddr); + q_priv->msg_ch_vaddr[i] = vaddr; + /* Not sure use of this len may be for CRC*/ + ch->bd_h[i].len = 0; + } + ch->host_ipc_params = + rte_cpu_to_be_32(HUGEPG_OFFSET(q_priv->host_params)); + + rte_bbdev_log_debug("Channel configured"); + return IPC_SUCCESS; +} + +static int +la12xx_e200_queue_setup(struct rte_bbdev *dev, + struct bbdev_la12xx_q_priv *q_priv) +{ + struct bbdev_la12xx_private *priv = dev->data->dev_private; + ipc_userspace_t *ipc_priv = priv->ipc_priv; + struct gul_hif *mhif; + ipc_metadata_t *ipc_md; + ipc_ch_t *ch; + int instance_id = 0, i; + int ret; + + PMD_INIT_FUNC_TRACE(); + + switch (q_priv->op_type) { + case RTE_BBDEV_OP_LDPC_ENC: + q_priv->la12xx_core_id = LA12XX_LDPC_ENC_CORE; + break; + case RTE_BBDEV_OP_LDPC_DEC: + q_priv->la12xx_core_id = LA12XX_LDPC_DEC_CORE; + break; + default: + rte_bbdev_log(ERR, "Unsupported op type\n"); + return -1; + } + + mhif = (struct gul_hif *)ipc_priv->mhif_start.host_vaddr; + /* offset is from start of PEB */ + ipc_md = (ipc_metadata_t *)((uintptr_t)ipc_priv->peb_start.host_vaddr + + mhif->ipc_regs.ipc_mdata_offset); + ch = &ipc_md->instance_list[instance_id].ch_list[q_priv->q_id]; + + if (q_priv->q_id < priv->num_valid_queues) { + ipc_br_md_t *md = &(ch->md); + + q_priv->feca_blk_id = rte_cpu_to_be_32(ch->feca_blk_id); + q_priv->feca_blk_id_be32 = ch->feca_blk_id; + q_priv->host_pi = rte_be_to_cpu_32(md->pi); + q_priv->host_ci = rte_be_to_cpu_32(md->ci); + q_priv->host_params = (host_ipc_params_t *)(uintptr_t) + (rte_be_to_cpu_32(ch->host_ipc_params) + + ((uint64_t)ipc_priv->hugepg_start.host_vaddr)); + + for (i = 0; i < q_priv->queue_size; i++) { + uint32_t h, l; + + h = ch->bd_h[i].host_virt_h; + l = ch->bd_h[i].host_virt_l; + q_priv->msg_ch_vaddr[i] = (void *)join_32_bits(h, l); + } + + rte_bbdev_log(WARNING, + "Queue [%d] already configured, not configuring again", + q_priv->q_id); + return 0; + } + + rte_bbdev_log_debug("setting up queue %d", q_priv->q_id); + + /* Call ipc_configure_channel */ + ret = ipc_queue_configure(q_priv->q_id, ipc_priv, q_priv); + if (ret) { + rte_bbdev_log(ERR, "Unable to setup queue (%d) (err=%d)", + q_priv->q_id, ret); + return ret; + } + + /* Set queue properties for LA12xx device */ + switch (q_priv->op_type) { + case RTE_BBDEV_OP_LDPC_ENC: + if (priv->num_ldpc_enc_queues >= LA12XX_MAX_LDPC_ENC_QUEUES) { + rte_bbdev_log(ERR, + "num_ldpc_enc_queues reached max value"); + return -1; + } + ch->la12xx_core_id = + rte_cpu_to_be_32(LA12XX_LDPC_ENC_CORE); + ch->feca_blk_id = rte_cpu_to_be_32(priv->num_ldpc_enc_queues++); + break; + case RTE_BBDEV_OP_LDPC_DEC: + if (priv->num_ldpc_dec_queues >= LA12XX_MAX_LDPC_DEC_QUEUES) { + rte_bbdev_log(ERR, + "num_ldpc_dec_queues reached max value"); + return -1; + } + ch->la12xx_core_id = + rte_cpu_to_be_32(LA12XX_LDPC_DEC_CORE); + ch->feca_blk_id = rte_cpu_to_be_32(priv->num_ldpc_dec_queues++); + break; + default: + rte_bbdev_log(ERR, "Not supported op type\n"); + return -1; + } + ch->op_type = rte_cpu_to_be_32(q_priv->op_type); + ch->depth = rte_cpu_to_be_32(q_priv->queue_size); + + /* Store queue config here */ + 
q_priv->feca_blk_id = rte_cpu_to_be_32(ch->feca_blk_id); + q_priv->feca_blk_id_be32 = ch->feca_blk_id; + + return 0; +} + +/* Setup a queue */ +static int +la12xx_queue_setup(struct rte_bbdev *dev, uint16_t q_id, + const struct rte_bbdev_queue_conf *queue_conf) +{ + struct bbdev_la12xx_private *priv = dev->data->dev_private; + struct rte_bbdev_queue_data *q_data; + struct bbdev_la12xx_q_priv *q_priv; + int ret; + + PMD_INIT_FUNC_TRACE(); + + /* Move to setup_queues callback */ + q_data = &dev->data->queues[q_id]; + q_data->queue_private = rte_zmalloc(NULL, + sizeof(struct bbdev_la12xx_q_priv), 0); + if (!q_data->queue_private) { + rte_bbdev_log(ERR, "Memory allocation failed for qpriv"); + return -ENOMEM; + } + q_priv = q_data->queue_private; + q_priv->q_id = q_id; + q_priv->bbdev_priv = dev->data->dev_private; + q_priv->queue_size = queue_conf->queue_size; + q_priv->op_type = queue_conf->op_type; + + ret = la12xx_e200_queue_setup(dev, q_priv); + if (ret) { + rte_bbdev_log(ERR, "e200_queue_setup failed for qid: %d", + q_id); + return ret; + } + + /* Store queue config here */ + priv->num_valid_queues++; + + return 0; +} + +static int +la12xx_start(struct rte_bbdev *dev) +{ + struct bbdev_la12xx_private *priv = dev->data->dev_private; + ipc_userspace_t *ipc_priv = priv->ipc_priv; + int ready = 0; + struct gul_hif *hif_start; + + PMD_INIT_FUNC_TRACE(); + + hif_start = (struct gul_hif *)ipc_priv->mhif_start.host_vaddr; + + /* Set Host Read bit */ + SET_HIF_HOST_RDY(hif_start, HIF_HOST_READY_IPC_APP); + + /* Now wait for modem ready bit */ + while (!ready) + ready = CHK_HIF_MOD_RDY(hif_start, HIF_MOD_READY_IPC_APP); + + return 0; +} + +static const struct rte_bbdev_ops pmd_ops = { + .info_get = la12xx_info_get, + .queue_setup = la12xx_queue_setup, + .queue_release = la12xx_queue_release, + .start = la12xx_start +}; +static struct hugepage_info * +get_hugepage_info(void) +{ + struct hugepage_info *hp_info; + struct rte_memseg *mseg; + + PMD_INIT_FUNC_TRACE(); + + /* TODO - find a better way */ + hp_info = rte_malloc(NULL, sizeof(struct hugepage_info), 0); + if (!hp_info) { + rte_bbdev_log(ERR, "Unable to allocate on local heap"); + return NULL; + } + + mseg = rte_mem_virt2memseg(hp_info, NULL); + hp_info->vaddr = mseg->addr; + hp_info->paddr = rte_mem_virt2phy(mseg->addr); + hp_info->len = mseg->len; + + return hp_info; +} + +static int +open_ipc_dev(int modem_id) +{ + char dev_initials[16], dev_path[PATH_MAX]; + struct dirent *entry; + int dev_ipc = 0; + DIR *dir; + + dir = opendir("/dev/"); + if (!dir) { + rte_bbdev_log(ERR, "Unable to open /dev/"); + return -1; + } + + sprintf(dev_initials, "gulipcgul%d", modem_id); + + while ((entry = readdir(dir)) != NULL) { + if (!strncmp(dev_initials, entry->d_name, + sizeof(dev_initials) - 1)) + break; + } + + if (!entry) { + rte_bbdev_log(ERR, "Error: No gulipcgul%d device", modem_id); + return -1; + } + + sprintf(dev_path, "/dev/%s", entry->d_name); + dev_ipc = open(dev_path, O_RDWR); + if (dev_ipc < 0) { + rte_bbdev_log(ERR, "Error: Cannot open %s", dev_path); + return -errno; + } + + return dev_ipc; +} + +static int +setup_la12xx_dev(struct rte_bbdev *dev) +{ + struct bbdev_la12xx_private *priv = dev->data->dev_private; + ipc_userspace_t *ipc_priv = priv->ipc_priv; + struct hugepage_info *hp = NULL; + ipc_channel_us_t *ipc_priv_ch = NULL; + int dev_ipc = 0, dev_mem = 0, i; + ipc_metadata_t *ipc_md; + struct gul_hif *mhif; + uint32_t phy_align = 0; + int ret; + + PMD_INIT_FUNC_TRACE(); + + if (!ipc_priv) { + /* TODO - get a better way */ + /* Get the 
hugepage info against it */ + hp = get_hugepage_info(); + if (!hp) { + rte_bbdev_log(ERR, "Unable to get hugepage info"); + ret = -ENOMEM; + goto err; + } + + rte_bbdev_log_debug("0x%" PRIx64 " %p 0x%" PRIx64, + hp->paddr, hp->vaddr, hp->len); + + ipc_priv = rte_zmalloc(0, sizeof(ipc_userspace_t), 0); + if (ipc_priv == NULL) { + rte_bbdev_log(ERR, + "Unable to allocate memory for ipc priv"); + ret = -ENOMEM; + goto err; + } + + for (i = 0; i < IPC_MAX_CHANNEL_COUNT; i++) { + ipc_priv_ch = rte_zmalloc(0, + sizeof(ipc_channel_us_t), 0); + if (ipc_priv_ch == NULL) { + rte_bbdev_log(ERR, + "Unable to allocate memory for channels"); + ret = -ENOMEM; + } + ipc_priv->channels[i] = ipc_priv_ch; + } + + dev_mem = open("/dev/mem", O_RDWR); + if (dev_mem < 0) { + rte_bbdev_log(ERR, "Error: Cannot open /dev/mem"); + ret = -errno; + goto err; + } + + ipc_priv->instance_id = 0; + ipc_priv->dev_mem = dev_mem; + + rte_bbdev_log_debug("hugepg input 0x%" PRIx64 "%p 0x%" PRIx64, + hp->paddr, hp->vaddr, hp->len); + + ipc_priv->sys_map.hugepg_start.host_phys = hp->paddr; + ipc_priv->sys_map.hugepg_start.size = hp->len; + + ipc_priv->hugepg_start.host_phys = hp->paddr; + ipc_priv->hugepg_start.host_vaddr = hp->vaddr; + ipc_priv->hugepg_start.size = hp->len; + + rte_free(hp); + } + + dev_ipc = open_ipc_dev(priv->modem_id); + if (dev_ipc < 0) { + rte_bbdev_log(ERR, "Error: open_ipc_dev failed"); + goto err; + } + ipc_priv->dev_ipc = dev_ipc; + + ret = ioctl(ipc_priv->dev_ipc, IOCTL_GUL_IPC_GET_SYS_MAP, + &ipc_priv->sys_map); + if (ret) { + rte_bbdev_log(ERR, + "IOCTL_GUL_IPC_GET_SYS_MAP ioctl failed"); + goto err; + } + + phy_align = (ipc_priv->sys_map.mhif_start.host_phys % 0x1000); + ipc_priv->mhif_start.host_vaddr = + mmap(0, ipc_priv->sys_map.mhif_start.size + phy_align, + (PROT_READ | PROT_WRITE), MAP_SHARED, ipc_priv->dev_mem, + (ipc_priv->sys_map.mhif_start.host_phys - phy_align)); + if (ipc_priv->mhif_start.host_vaddr == MAP_FAILED) { + rte_bbdev_log(ERR, "MAP failed:"); + ret = -errno; + goto err; + } + + ipc_priv->mhif_start.host_vaddr = (void *) ((uintptr_t) + (ipc_priv->mhif_start.host_vaddr) + phy_align); + + phy_align = (ipc_priv->sys_map.peb_start.host_phys % 0x1000); + ipc_priv->peb_start.host_vaddr = + mmap(0, ipc_priv->sys_map.peb_start.size + phy_align, + (PROT_READ | PROT_WRITE), MAP_SHARED, ipc_priv->dev_mem, + (ipc_priv->sys_map.peb_start.host_phys - phy_align)); + if (ipc_priv->peb_start.host_vaddr == MAP_FAILED) { + rte_bbdev_log(ERR, "MAP failed:"); + ret = -errno; + goto err; + } + + ipc_priv->peb_start.host_vaddr = (void *)((uintptr_t) + (ipc_priv->peb_start.host_vaddr) + phy_align); + + phy_align = (ipc_priv->sys_map.modem_ccsrbar.host_phys % 0x1000); + ipc_priv->modem_ccsrbar.host_vaddr = + mmap(0, ipc_priv->sys_map.modem_ccsrbar.size + phy_align, + (PROT_READ | PROT_WRITE), MAP_SHARED, ipc_priv->dev_mem, + (ipc_priv->sys_map.modem_ccsrbar.host_phys - phy_align)); + if (ipc_priv->modem_ccsrbar.host_vaddr == MAP_FAILED) { + rte_bbdev_log(ERR, "MAP failed:"); + ret = -errno; + goto err; + } + + ipc_priv->modem_ccsrbar.host_vaddr = (void *)((uintptr_t) + (ipc_priv->modem_ccsrbar.host_vaddr) + phy_align); + + ipc_priv->hugepg_start.modem_phys = + ipc_priv->sys_map.hugepg_start.modem_phys; + + ipc_priv->mhif_start.host_phys = + ipc_priv->sys_map.mhif_start.host_phys; + ipc_priv->mhif_start.size = ipc_priv->sys_map.mhif_start.size; + ipc_priv->peb_start.host_phys = ipc_priv->sys_map.peb_start.host_phys; + ipc_priv->peb_start.size = ipc_priv->sys_map.peb_start.size; + + rte_bbdev_log(INFO, 
"peb 0x%" PRIx64 "%p 0x%" PRIx32, + ipc_priv->peb_start.host_phys, + ipc_priv->peb_start.host_vaddr, + ipc_priv->peb_start.size); + rte_bbdev_log(INFO, "hugepg 0x%" PRIx64 "%p 0x%" PRIx32, + ipc_priv->hugepg_start.host_phys, + ipc_priv->hugepg_start.host_vaddr, + ipc_priv->hugepg_start.size); + rte_bbdev_log(INFO, "mhif 0x%" PRIx64 "%p 0x%" PRIx32, + ipc_priv->mhif_start.host_phys, + ipc_priv->mhif_start.host_vaddr, + ipc_priv->mhif_start.size); + mhif = (struct gul_hif *)ipc_priv->mhif_start.host_vaddr; + + /* offset is from start of PEB */ + ipc_md = (ipc_metadata_t *)((uintptr_t)ipc_priv->peb_start.host_vaddr + + mhif->ipc_regs.ipc_mdata_offset); + + if (sizeof(ipc_metadata_t) != mhif->ipc_regs.ipc_mdata_size) { + rte_bbdev_log(ERR, + "ipc_metadata_t =0x%" PRIx64 + ", mhif->ipc_regs.ipc_mdata_size=0x%" PRIx32, + (uint64_t)(sizeof(ipc_metadata_t)), + mhif->ipc_regs.ipc_mdata_size); + rte_bbdev_log(ERR, "--> mhif->ipc_regs.ipc_mdata_offset= 0x%" + PRIx32, mhif->ipc_regs.ipc_mdata_offset); + rte_bbdev_log(ERR, "gul_hif size=0x%" PRIx64, + (uint64_t)(sizeof(struct gul_hif))); + return IPC_MD_SZ_MISS_MATCH; + } + + ipc_priv->instance = (ipc_instance_t *) + (&ipc_md->instance_list[ipc_priv->instance_id]); + + rte_bbdev_log_debug("finish host init"); + + priv->ipc_priv = ipc_priv; + + return 0; + +err: + rte_free(hp); + rte_free(ipc_priv); + rte_free(ipc_priv_ch); + if (dev_mem) + close(dev_mem); + if (dev_ipc) + close(dev_ipc); + + return ret; +} + static inline int parse_u16_arg(const char *key, const char *value, void *extra_args) { @@ -122,6 +672,7 @@ la12xx_bbdev_create(struct rte_vdev_device *vdev, struct rte_bbdev *bbdev; const char *name = rte_vdev_device_name(vdev); struct bbdev_la12xx_private *priv; + int ret; PMD_INIT_FUNC_TRACE(); @@ -151,7 +702,13 @@ la12xx_bbdev_create(struct rte_vdev_device *vdev, rte_bbdev_log(INFO, "Setting Up %s: DevId=%d, ModemId=%d", name, bbdev->data->dev_id, priv->modem_id); - bbdev->dev_ops = NULL; + ret = setup_la12xx_dev(bbdev); + if (ret) { + rte_bbdev_log(ERR, "IPC Setup failed for %s", name); + rte_free(bbdev->data->dev_private); + return ret; + } + bbdev->dev_ops = &pmd_ops; bbdev->device = &vdev->device; bbdev->data->socket_id = 0; bbdev->intr_handle = NULL; diff --git a/drivers/baseband/la12xx/bbdev_la12xx.h b/drivers/baseband/la12xx/bbdev_la12xx.h index 5228502331..fe91e62bf6 100644 --- a/drivers/baseband/la12xx/bbdev_la12xx.h +++ b/drivers/baseband/la12xx/bbdev_la12xx.h @@ -5,16 +5,10 @@ #ifndef __BBDEV_LA12XX_H__ #define __BBDEV_LA12XX_H__ -#define BBDEV_IPC_ENC_OP_TYPE 1 -#define BBDEV_IPC_DEC_OP_TYPE 2 - -#define MAX_LDPC_ENC_FECA_QUEUES 4 -#define MAX_LDPC_DEC_FECA_QUEUES 4 - #define MAX_CHANNEL_DEPTH 16 /* private data structure */ struct bbdev_la12xx_private { - void *ipc_priv; + ipc_userspace_t *ipc_priv; uint8_t num_valid_queues; uint8_t max_nb_queues; uint8_t num_ldpc_enc_queues; @@ -32,14 +26,14 @@ struct hugepage_info { struct bbdev_la12xx_q_priv { struct bbdev_la12xx_private *bbdev_priv; uint32_t q_id; /**< Channel ID */ - uint32_t feca_blk_id; /** FECA block ID for processing */ + uint32_t feca_blk_id; /**< FECA block ID for processing */ uint32_t feca_blk_id_be32; /**< FECA Block ID for this queue */ - uint8_t en_napi; /* 0: napi disabled, 1: napi enabled */ + uint8_t en_napi; /**< 0: napi disabled, 1: napi enabled */ uint16_t queue_size; /**< Queue depth */ int32_t eventfd; /**< Event FD value */ enum rte_bbdev_op_type op_type; /**< Operation type */ uint32_t la12xx_core_id; - /* LA12xx core ID on which this will be scheduled */ 
+ /**< LA12xx core ID on which this will be scheduled */ struct rte_mempool *mp; /**< Pool from where buffers would be cut */ void *bbdev_op[MAX_CHANNEL_DEPTH]; /**< Stores bbdev op for each index */ @@ -52,5 +46,6 @@ struct bbdev_la12xx_q_priv { #define lower_32_bits(x) ((uint32_t)((uint64_t)x)) #define upper_32_bits(x) ((uint32_t)(((uint64_t)(x) >> 16) >> 16)) - +#define join_32_bits(upper, lower) \ + ((size_t)(((uint64_t)(upper) << 32) | (uint32_t)(lower))) #endif diff --git a/drivers/baseband/la12xx/bbdev_la12xx_ipc.h b/drivers/baseband/la12xx/bbdev_la12xx_ipc.h index 9aa5562981..5f613fb087 100644 --- a/drivers/baseband/la12xx/bbdev_la12xx_ipc.h +++ b/drivers/baseband/la12xx/bbdev_la12xx_ipc.h @@ -4,9 +4,182 @@ #ifndef __BBDEV_LA12XX_IPC_H__ #define __BBDEV_LA12XX_IPC_H__ +#define LA12XX_MAX_QUEUES 20 +#define HOST_RX_QUEUEID_OFFSET LA12XX_MAX_QUEUES + +/** No. of max channel per instance */ +#define IPC_MAX_CHANNEL_COUNT (64) + /** No. of max channel per instance */ #define IPC_MAX_DEPTH (16) +/** No. of max IPC instance per modem */ +#define IPC_MAX_INSTANCE_COUNT (1) + +/** Error codes */ +#define IPC_SUCCESS (0) /** IPC operation success */ +#define IPC_INPUT_INVALID (-1) /** Invalid input to API */ +#define IPC_CH_INVALID (-2) /** Channel no is invalid */ +#define IPC_INSTANCE_INVALID (-3) /** Instance no is invalid */ +#define IPC_MEM_INVALID (-4) /** Insufficient memory */ +#define IPC_CH_FULL (-5) /** Channel is full */ +#define IPC_CH_EMPTY (-6) /** Channel is empty */ +#define IPC_BL_EMPTY (-7) /** Free buffer list is empty */ +#define IPC_BL_FULL (-8) /** Free buffer list is full */ +#define IPC_HOST_BUF_ALLOC_FAIL (-9) /** DPDK malloc fail */ +#define IPC_MD_SZ_MISS_MATCH (-10) /** META DATA size in mhif miss matched*/ +#define IPC_MALLOC_FAIL (-11) /** system malloc fail */ +#define IPC_IOCTL_FAIL (-12) /** IOCTL call failed */ +#define IPC_MMAP_FAIL (-14) /** MMAP fail */ +#define IPC_OPEN_FAIL (-15) /** OPEN fail */ +#define IPC_EVENTFD_FAIL (-16) /** eventfd initialization failed */ +#define IPC_NOT_IMPLEMENTED (-17) /** IPC feature is not implemented yet*/ + +#define SET_HIF_HOST_RDY(hif, RDY_MASK) (hif->host_ready |= RDY_MASK) +#define CHK_HIF_MOD_RDY(hif, RDY_MASK) (hif->mod_ready & RDY_MASK) + +/* Host Ready bits */ +#define HIF_HOST_READY_HOST_REGIONS (1 << 0) +#define HIF_HOST_READY_IPC_LIB (1 << 12) +#define HIF_HOST_READY_IPC_APP (1 << 13) +#define HIF_HOST_READY_FECA (1 << 14) + +/* Modem Ready bits */ +#define HIF_MOD_READY_IPC_LIB (1 << 5) +#define HIF_MOD_READY_IPC_APP (1 << 6) +#define HIF_MOD_READY_FECA (1 << 7) + +typedef void *ipc_t; + +struct ipc_msg { + int chid; + void *addr; + uint32_t len; + uint8_t flags; +}; + +typedef struct { + uint64_t host_phys; + uint32_t modem_phys; + void *host_vaddr; + uint32_t size; +} mem_range_t; + +#define GUL_IPC_MAGIC 'R' + +#define IOCTL_GUL_IPC_GET_SYS_MAP _IOW(GUL_IPC_MAGIC, 1, struct ipc_msg *) +#define IOCTL_GUL_IPC_CHANNEL_REGISTER _IOWR(GUL_IPC_MAGIC, 4, struct ipc_msg *) +#define IOCTL_GUL_IPC_CHANNEL_DEREGISTER \ + _IOWR(GUL_IPC_MAGIC, 5, struct ipc_msg *) +#define IOCTL_GUL_IPC_CHANNEL_RAISE_INTERRUPT _IOW(GUL_IPC_MAGIC, 6, int *) + +/** buffer ring common metadata */ +typedef struct ipc_bd_ring_md { + volatile uint32_t pi; /**< Producer index and flag (MSB) + * which flip for each Ring wrapping + */ + volatile uint32_t ci; /**< Consumer index and flag (MSB) + * which flip for each Ring wrapping + */ + uint32_t ring_size; /**< depth (Used to roll-over pi/ci) */ + uint32_t msg_size; /**< Size of the each 
buffer */ +} __rte_packed ipc_br_md_t; + +/** IPC buffer descriptor */ +typedef struct ipc_buffer_desc { + union { + uint64_t host_virt; /**< msg's host virtual address */ + struct { + uint32_t host_virt_l; + uint32_t host_virt_h; + }; + }; + uint32_t modem_ptr; /**< msg's modem physical address */ + uint32_t len; /**< msg len */ +} __rte_packed ipc_bd_t; + +typedef struct ipc_channel { + uint32_t ch_id; /**< Channel id */ + ipc_br_md_t md; /**< Metadata for BD ring */ + ipc_bd_t bd_h[IPC_MAX_DEPTH]; /**< Buffer Descriptor on Host */ + ipc_bd_t bd_m[IPC_MAX_DEPTH]; /**< Buffer Descriptor on Modem */ + uint32_t op_type; /**< Type of the BBDEV operation + * supported on this channel + */ + uint32_t depth; /**< Channel depth */ + uint32_t feca_blk_id; /**< FECA Transport Block ID for processing */ + uint32_t la12xx_core_id;/**< LA12xx core ID on which this will be + * scheduled + */ + uint32_t feca_input_circ_size; /**< FECA transport block input + * circular buffer size + */ + uint32_t host_ipc_params; /**< Address for host IPC parameters */ +} __rte_packed ipc_ch_t; + +typedef struct ipc_instance { + uint32_t instance_id; /**< instance id, use to init this + * instance by ipc_init API + */ + uint32_t initialized; /**< Set in ipc_init */ + ipc_ch_t ch_list[IPC_MAX_CHANNEL_COUNT]; + /**< Channel descriptors in this instance */ +} __rte_packed ipc_instance_t; + +typedef struct ipc_metadata { + uint32_t ipc_host_signature; /**< IPC host signature, Set by host/L2 */ + uint32_t ipc_geul_signature; /**< IPC geul signature, Set by modem */ + ipc_instance_t instance_list[IPC_MAX_INSTANCE_COUNT]; +} __rte_packed ipc_metadata_t; + +typedef struct ipc_channel_us_priv { + int32_t eventfd; + uint32_t channel_id; + /* In flight packets status for buffer list. */ + uint8_t bufs_inflight[IPC_MAX_DEPTH]; +} ipc_channel_us_t; + +typedef struct { + uint64_t host_phys; + uint32_t modem_phys; + uint32_t size; +} mem_strt_addr_t; + +typedef struct { + mem_strt_addr_t modem_ccsrbar; + mem_strt_addr_t peb_start; /* PEB meta data */ + mem_strt_addr_t mhif_start; /* MHIF meta daat */ + mem_strt_addr_t hugepg_start; /* Modem to access hugepage */ +} sys_map_t; + +typedef struct ipc_priv_t { + int instance_id; + int dev_ipc; + int dev_mem; + sys_map_t sys_map; + mem_range_t modem_ccsrbar; + mem_range_t peb_start; + mem_range_t mhif_start; + mem_range_t hugepg_start; + ipc_channel_us_t *channels[IPC_MAX_CHANNEL_COUNT]; + ipc_instance_t *instance; + ipc_instance_t *instance_bk; +} ipc_userspace_t; + +/** Structure specifying enqueue operation (enqueue at LA1224) */ +struct bbdev_ipc_enqueue_op { + /** Status of operation that was performed */ + int32_t status; + /** CRC Status of SD operation that was performed */ + int32_t crc_stat_addr; + /** HARQ Output buffer memory length for Shared Decode. + * Filled by LA12xx. + */ + uint32_t out_len; + /** Reserved (for 8 byte alignment) */ + uint32_t rsvd; +}; + /* This shared memory would be on the host side which have copy of some * of the parameters which are also part of Shared BD ring. Read access * of these parameters from the host side would not be over PCI. 
@@ -14,7 +187,21 @@ typedef struct host_ipc_params { volatile uint32_t pi; volatile uint32_t ci; - volatile uint32_t modem_ptr[IPC_MAX_DEPTH]; + volatile uint32_t bd_m_modem_ptr[IPC_MAX_DEPTH]; } __rte_packed host_ipc_params_t; +struct hif_ipc_regs { + uint32_t ipc_mdata_offset; + uint32_t ipc_mdata_size; +} __rte_packed; + +struct gul_hif { + uint32_t ver; + uint32_t hif_ver; + uint32_t status; + volatile uint32_t host_ready; + volatile uint32_t mod_ready; + struct hif_ipc_regs ipc_regs; +} __rte_packed; + #endif From patchwork Sun Oct 17 06:53:29 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nipun Gupta X-Patchwork-Id: 101899 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 33339A0547; Sun, 17 Oct 2021 08:54:17 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id CEF66410FF; Sun, 17 Oct 2021 08:53:44 +0200 (CEST) Received: from inva020.nxp.com (inva020.nxp.com [92.121.34.13]) by mails.dpdk.org (Postfix) with ESMTP id 5830640DDE for ; Sun, 17 Oct 2021 08:53:36 +0200 (CEST) Received: from inva020.nxp.com (localhost [127.0.0.1]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id 324F21A1B84; Sun, 17 Oct 2021 08:53:36 +0200 (CEST) Received: from aprdc01srsp001v.ap-rdc01.nxp.com (aprdc01srsp001v.ap-rdc01.nxp.com [165.114.16.16]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id C21711A1B6B; Sun, 17 Oct 2021 08:53:35 +0200 (CEST) Received: from lsv03274.swis.in-blr01.nxp.com (lsv03274.swis.in-blr01.nxp.com [92.120.147.114]) by aprdc01srsp001v.ap-rdc01.nxp.com (Postfix) with ESMTP id 2E20B183AD05; Sun, 17 Oct 2021 14:53:35 +0800 (+08) From: nipun.gupta@nxp.com To: dev@dpdk.org, gakhil@marvell.com, nicolas.chautru@intel.com Cc: david.marchand@redhat.com, hemant.agrawal@nxp.com, Nipun Gupta Date: Sun, 17 Oct 2021 12:23:29 +0530 Message-Id: <20211017065331.25488-7-nipun.gupta@nxp.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211017065331.25488-1-nipun.gupta@nxp.com> References: <20210318063421.14895-1-hemant.agrawal@nxp.com> <20211017065331.25488-1-nipun.gupta@nxp.com> X-Virus-Scanned: ClamAV using ClamSMTP Subject: [dpdk-dev] [PATCH v11 6/8] baseband/la12xx: add enqueue and dequeue support X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Hemant Agrawal Add support for enqueue and dequeue the LDPC enc/dec from the modem device. Signed-off-by: Nipun Gupta Signed-off-by: Hemant Agrawal --- doc/guides/bbdevs/features/la12xx.ini | 13 + doc/guides/bbdevs/la12xx.rst | 44 +++ doc/guides/rel_notes/release_21_11.rst | 5 + drivers/baseband/la12xx/bbdev_la12xx.c | 320 +++++++++++++++++++++ drivers/baseband/la12xx/bbdev_la12xx_ipc.h | 37 +++ 5 files changed, 419 insertions(+) create mode 100644 doc/guides/bbdevs/features/la12xx.ini diff --git a/doc/guides/bbdevs/features/la12xx.ini b/doc/guides/bbdevs/features/la12xx.ini new file mode 100644 index 0000000000..0aec5eecb6 --- /dev/null +++ b/doc/guides/bbdevs/features/la12xx.ini @@ -0,0 +1,13 @@ +; +; Supported features of the 'la12xx' bbdev driver. +; +; Refer to default.ini for the full list of available PMD features. 
+; +[Features] +Turbo Decoder (4G) = N +Turbo Encoder (4G) = N +LDPC Decoder (5G) = Y +LDPC Encoder (5G) = Y +LLR/HARQ Compression = N +HW Accelerated = Y +BBDEV API = Y diff --git a/doc/guides/bbdevs/la12xx.rst b/doc/guides/bbdevs/la12xx.rst index 1a711ef5e3..fe1bca4c5c 100644 --- a/doc/guides/bbdevs/la12xx.rst +++ b/doc/guides/bbdevs/la12xx.rst @@ -78,3 +78,47 @@ For enabling logs, use the following EAL parameter: Using ``bb.la12xx`` as log matching criteria, all Baseband PMD logs can be enabled which are lower than logging ``level``. + +Test Application +---------------- + +BBDEV provides a test application, ``test-bbdev.py`` and range of test data for testing +the functionality of LA12xx for FEC encode and decode, depending on the device +capabilities. The test application is located under app->test-bbdev folder and has the +following options: + +.. code-block:: console + + "-p", "--testapp-path": specifies path to the bbdev test app. + "-e", "--eal-params" : EAL arguments which are passed to the test app. + "-t", "--timeout" : Timeout in seconds (default=300). + "-c", "--test-cases" : Defines test cases to run. Run all if not specified. + "-v", "--test-vector" : Test vector path (default=dpdk_path+/app/test-bbdev/test_vectors/bbdev_null.data). + "-n", "--num-ops" : Number of operations to process on device (default=32). + "-b", "--burst-size" : Operations enqueue/dequeue burst size (default=32). + "-s", "--snr" : SNR in dB used when generating LLRs for bler tests. + "-s", "--iter_max" : Number of iterations for LDPC decoder. + "-l", "--num-lcores" : Number of lcores to run (default=16). + "-i", "--init-device" : Initialise PF device with default values. + + +To execute the test application tool using simple decode or encode data, +type one of the following: + +.. code-block:: console + + ./test-bbdev.py -e="--vdev=baseband_la12xx,socket_id=0,max_nb_queues=8" -c validation -n 64 -b 1 -v ./ldpc_dec_default.data + ./test-bbdev.py -e="--vdev=baseband_la12xx,socket_id=0,max_nb_queues=8" -c validation -n 64 -b 1 -v ./ldpc_enc_default.data + +The test application ``test-bbdev.py``, supports the ability to configure the PF device with +a default set of values, if the "-i" or "- -init-device" option is included. The default values +are defined in test_bbdev_perf.c. + + +Test Vectors +~~~~~~~~~~~~ + +In addition to the simple LDPC decoder and LDPC encoder tests, bbdev also provides +a range of additional tests under the test_vectors folder, which may be useful. The results +of these tests will depend on the LA12xx FEC capabilities which may cause some +testcases to be skipped, but no failure should be reported. diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst index 957bd78d61..d7e0bdc09b 100644 --- a/doc/guides/rel_notes/release_21_11.rst +++ b/doc/guides/rel_notes/release_21_11.rst @@ -159,6 +159,11 @@ New Features * Added tests to verify tunnel header verification in IPsec inbound. * Added tests to verify inner checksum. +* **Added NXP LA12xx baseband PMD.** + + * Added a new baseband PMD driver for NXP LA12xx Software defined radio. + * See the :doc:`../bbdevs/la12xx` for more details. 
+ Removed Items ------------- diff --git a/drivers/baseband/la12xx/bbdev_la12xx.c b/drivers/baseband/la12xx/bbdev_la12xx.c index 7e312cf2e7..4b05b5d3f2 100644 --- a/drivers/baseband/la12xx/bbdev_la12xx.c +++ b/drivers/baseband/la12xx/bbdev_la12xx.c @@ -120,6 +120,10 @@ la12xx_queue_release(struct rte_bbdev *dev, uint16_t q_id) ((uint64_t) ((unsigned long) (A) \ - ((uint64_t)ipc_priv->hugepg_start.host_vaddr))) +#define MODEM_P2V(A) \ + ((uint64_t) ((unsigned long) (A) \ + + (unsigned long)(ipc_priv->peb_start.host_vaddr))) + static int ipc_queue_configure(uint32_t channel_id, ipc_t instance, @@ -336,6 +340,318 @@ static const struct rte_bbdev_ops pmd_ops = { .queue_release = la12xx_queue_release, .start = la12xx_start }; + +static inline int +is_bd_ring_full(uint32_t ci, uint32_t ci_flag, + uint32_t pi, uint32_t pi_flag) +{ + if (pi == ci) { + if (pi_flag != ci_flag) + return 1; /* Ring is Full */ + } + return 0; +} + +static inline int +prepare_ldpc_enc_op(struct rte_bbdev_enc_op *bbdev_enc_op, + struct bbdev_la12xx_q_priv *q_priv __rte_unused, + struct rte_bbdev_op_data *in_op_data __rte_unused, + struct rte_bbdev_op_data *out_op_data) +{ + struct rte_bbdev_op_ldpc_enc *ldpc_enc = &bbdev_enc_op->ldpc_enc; + uint32_t total_out_bits; + + total_out_bits = (ldpc_enc->tb_params.cab * + ldpc_enc->tb_params.ea) + (ldpc_enc->tb_params.c - + ldpc_enc->tb_params.cab) * ldpc_enc->tb_params.eb; + + ldpc_enc->output.length = (total_out_bits + 7)/8; + + rte_pktmbuf_append(out_op_data->data, ldpc_enc->output.length); + + return 0; +} + +static inline int +prepare_ldpc_dec_op(struct rte_bbdev_dec_op *bbdev_dec_op, + struct bbdev_ipc_dequeue_op *bbdev_ipc_op, + struct bbdev_la12xx_q_priv *q_priv __rte_unused, + struct rte_bbdev_op_data *out_op_data __rte_unused) +{ + struct rte_bbdev_op_ldpc_dec *ldpc_dec = &bbdev_dec_op->ldpc_dec; + uint32_t total_out_bits; + uint32_t num_code_blocks = 0; + uint16_t sys_cols; + + sys_cols = (ldpc_dec->basegraph == 1) ? 22 : 10; + if (ldpc_dec->tb_params.c == 1) { + total_out_bits = ((sys_cols * ldpc_dec->z_c) - + ldpc_dec->n_filler); + /* 5G-NR protocol uses 16 bit CRC when output packet + * size <= 3824 (bits). Otherwise 24 bit CRC is used. 
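+	 * For example, for a single code block (tb_params.c == 1) with
+	 * basegraph 1 (22 systematic columns), z_c = 384 and no filler
+	 * bits, total_out_bits = 22 * 384 = 8448; since 8448 - 16 > 3824
+	 * the 24-bit CRC is stripped, so hard_output.length = 8424 / 8
+	 * = 1053 bytes.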
+ * Adjust the output bits accordingly + */ + if (total_out_bits - 16 <= 3824) + total_out_bits -= 16; + else + total_out_bits -= 24; + ldpc_dec->hard_output.length = (total_out_bits / 8); + } else { + total_out_bits = (((sys_cols * ldpc_dec->z_c) - + ldpc_dec->n_filler - 24) * + ldpc_dec->tb_params.c); + ldpc_dec->hard_output.length = (total_out_bits / 8) - 3; + } + + num_code_blocks = ldpc_dec->tb_params.c; + + bbdev_ipc_op->num_code_blocks = rte_cpu_to_be_32(num_code_blocks); + + return 0; +} + +static int +enqueue_single_op(struct bbdev_la12xx_q_priv *q_priv, void *bbdev_op) +{ + struct bbdev_la12xx_private *priv = q_priv->bbdev_priv; + ipc_userspace_t *ipc_priv = priv->ipc_priv; + ipc_instance_t *ipc_instance = ipc_priv->instance; + struct bbdev_ipc_dequeue_op *bbdev_ipc_op; + struct rte_bbdev_op_ldpc_enc *ldpc_enc; + struct rte_bbdev_op_ldpc_dec *ldpc_dec; + uint32_t q_id = q_priv->q_id; + uint32_t ci, ci_flag, pi, pi_flag; + ipc_ch_t *ch = &(ipc_instance->ch_list[q_id]); + ipc_br_md_t *md = &(ch->md); + size_t virt; + char *huge_start_addr = + (char *)q_priv->bbdev_priv->ipc_priv->hugepg_start.host_vaddr; + struct rte_bbdev_op_data *in_op_data, *out_op_data; + char *data_ptr; + uint32_t l1_pcie_addr; + int ret; + + ci = IPC_GET_CI_INDEX(q_priv->host_ci); + ci_flag = IPC_GET_CI_FLAG(q_priv->host_ci); + + pi = IPC_GET_PI_INDEX(q_priv->host_pi); + pi_flag = IPC_GET_PI_FLAG(q_priv->host_pi); + + rte_bbdev_dp_log(DEBUG, "before bd_ring_full: pi: %u, ci: %u," + "pi_flag: %u, ci_flag: %u, ring size: %u", + pi, ci, pi_flag, ci_flag, q_priv->queue_size); + + if (is_bd_ring_full(ci, ci_flag, pi, pi_flag)) { + rte_bbdev_dp_log(DEBUG, "bd ring full for queue id: %d", q_id); + return IPC_CH_FULL; + } + + virt = MODEM_P2V(q_priv->host_params->bd_m_modem_ptr[pi]); + bbdev_ipc_op = (struct bbdev_ipc_dequeue_op *)virt; + q_priv->bbdev_op[pi] = bbdev_op; + + switch (q_priv->op_type) { + case RTE_BBDEV_OP_LDPC_ENC: + ldpc_enc = &(((struct rte_bbdev_enc_op *)bbdev_op)->ldpc_enc); + in_op_data = &ldpc_enc->input; + out_op_data = &ldpc_enc->output; + + ret = prepare_ldpc_enc_op(bbdev_op, q_priv, + in_op_data, out_op_data); + if (ret) { + rte_bbdev_log(ERR, "process_ldpc_enc_op fail, ret: %d", + ret); + return ret; + } + break; + + case RTE_BBDEV_OP_LDPC_DEC: + ldpc_dec = &(((struct rte_bbdev_dec_op *)bbdev_op)->ldpc_dec); + in_op_data = &ldpc_dec->input; + + out_op_data = &ldpc_dec->hard_output; + + ret = prepare_ldpc_dec_op(bbdev_op, bbdev_ipc_op, + q_priv, out_op_data); + if (ret) { + rte_bbdev_log(ERR, "process_ldpc_dec_op fail, ret: %d", + ret); + return ret; + } + break; + + default: + rte_bbdev_log(ERR, "unsupported bbdev_ipc op type"); + return -1; + } + + if (in_op_data->data) { + data_ptr = rte_pktmbuf_mtod(in_op_data->data, char *); + l1_pcie_addr = (uint32_t)GUL_USER_HUGE_PAGE_ADDR + + data_ptr - huge_start_addr; + bbdev_ipc_op->in_addr = l1_pcie_addr; + bbdev_ipc_op->in_len = in_op_data->length; + } + + if (out_op_data->data) { + data_ptr = rte_pktmbuf_mtod(out_op_data->data, char *); + l1_pcie_addr = (uint32_t)GUL_USER_HUGE_PAGE_ADDR + + data_ptr - huge_start_addr; + bbdev_ipc_op->out_addr = rte_cpu_to_be_32(l1_pcie_addr); + bbdev_ipc_op->out_len = rte_cpu_to_be_32(out_op_data->length); + } + + /* Move Producer Index forward */ + pi++; + /* Flip the PI flag, if wrapping */ + if (unlikely(q_priv->queue_size == pi)) { + pi = 0; + pi_flag = pi_flag ? 
0 : 1; + } + + if (pi_flag) + IPC_SET_PI_FLAG(pi); + else + IPC_RESET_PI_FLAG(pi); + q_priv->host_pi = pi; + + /* Wait for Data Copy & pi_flag update to complete before updating pi */ + rte_mb(); + /* now update pi */ + md->pi = rte_cpu_to_be_32(pi); + + rte_bbdev_dp_log(DEBUG, "enter: pi: %u, ci: %u," + "pi_flag: %u, ci_flag: %u, ring size: %u", + pi, ci, pi_flag, ci_flag, q_priv->queue_size); + + return 0; +} + +/* Enqueue decode burst */ +static uint16_t +enqueue_dec_ops(struct rte_bbdev_queue_data *q_data, + struct rte_bbdev_dec_op **ops, uint16_t nb_ops) +{ + struct bbdev_la12xx_q_priv *q_priv = q_data->queue_private; + int nb_enqueued, ret; + + for (nb_enqueued = 0; nb_enqueued < nb_ops; nb_enqueued++) { + ret = enqueue_single_op(q_priv, ops[nb_enqueued]); + if (ret) + break; + } + + q_data->queue_stats.enqueue_err_count += nb_ops - nb_enqueued; + q_data->queue_stats.enqueued_count += nb_enqueued; + + return nb_enqueued; +} + +/* Enqueue encode burst */ +static uint16_t +enqueue_enc_ops(struct rte_bbdev_queue_data *q_data, + struct rte_bbdev_enc_op **ops, uint16_t nb_ops) +{ + struct bbdev_la12xx_q_priv *q_priv = q_data->queue_private; + int nb_enqueued, ret; + + for (nb_enqueued = 0; nb_enqueued < nb_ops; nb_enqueued++) { + ret = enqueue_single_op(q_priv, ops[nb_enqueued]); + if (ret) + break; + } + + q_data->queue_stats.enqueue_err_count += nb_ops - nb_enqueued; + q_data->queue_stats.enqueued_count += nb_enqueued; + + return nb_enqueued; +} + +/* Dequeue encode burst */ +static void * +dequeue_single_op(struct bbdev_la12xx_q_priv *q_priv, void *dst) +{ + void *op; + uint32_t ci, ci_flag; + uint32_t temp_ci; + + temp_ci = q_priv->host_params->ci; + if (temp_ci == q_priv->host_ci) + return NULL; + + ci = IPC_GET_CI_INDEX(q_priv->host_ci); + ci_flag = IPC_GET_CI_FLAG(q_priv->host_ci); + + rte_bbdev_dp_log(DEBUG, + "ci: %u, ci_flag: %u, ring size: %u", + ci, ci_flag, q_priv->queue_size); + + op = q_priv->bbdev_op[ci]; + + rte_memcpy(dst, q_priv->msg_ch_vaddr[ci], + sizeof(struct bbdev_ipc_enqueue_op)); + + /* Move Consumer Index forward */ + ci++; + /* Flip the CI flag, if wrapping */ + if (q_priv->queue_size == ci) { + ci = 0; + ci_flag = ci_flag ? 
0 : 1; + } + if (ci_flag) + IPC_SET_CI_FLAG(ci); + else + IPC_RESET_CI_FLAG(ci); + + q_priv->host_ci = ci; + + rte_bbdev_dp_log(DEBUG, + "exit: ci: %u, ci_flag: %u, ring size: %u", + ci, ci_flag, q_priv->queue_size); + + return op; +} + +/* Dequeue decode burst */ +static uint16_t +dequeue_dec_ops(struct rte_bbdev_queue_data *q_data, + struct rte_bbdev_dec_op **ops, uint16_t nb_ops) +{ + struct bbdev_la12xx_q_priv *q_priv = q_data->queue_private; + struct bbdev_ipc_enqueue_op bbdev_ipc_op; + int nb_dequeued; + + for (nb_dequeued = 0; nb_dequeued < nb_ops; nb_dequeued++) { + ops[nb_dequeued] = dequeue_single_op(q_priv, &bbdev_ipc_op); + if (!ops[nb_dequeued]) + break; + ops[nb_dequeued]->status = bbdev_ipc_op.status; + } + q_data->queue_stats.dequeued_count += nb_dequeued; + + return nb_dequeued; +} + +/* Dequeue encode burst */ +static uint16_t +dequeue_enc_ops(struct rte_bbdev_queue_data *q_data, + struct rte_bbdev_enc_op **ops, uint16_t nb_ops) +{ + struct bbdev_la12xx_q_priv *q_priv = q_data->queue_private; + struct bbdev_ipc_enqueue_op bbdev_ipc_op; + int nb_enqueued; + + for (nb_enqueued = 0; nb_enqueued < nb_ops; nb_enqueued++) { + ops[nb_enqueued] = dequeue_single_op(q_priv, &bbdev_ipc_op); + if (!ops[nb_enqueued]) + break; + ops[nb_enqueued]->status = bbdev_ipc_op.status; + } + q_data->queue_stats.enqueued_count += nb_enqueued; + + return nb_enqueued; +} + static struct hugepage_info * get_hugepage_info(void) { @@ -718,6 +1034,10 @@ la12xx_bbdev_create(struct rte_vdev_device *vdev, bbdev->dequeue_dec_ops = NULL; bbdev->enqueue_enc_ops = NULL; bbdev->enqueue_dec_ops = NULL; + bbdev->dequeue_ldpc_enc_ops = dequeue_enc_ops; + bbdev->dequeue_ldpc_dec_ops = dequeue_dec_ops; + bbdev->enqueue_ldpc_enc_ops = enqueue_enc_ops; + bbdev->enqueue_ldpc_dec_ops = enqueue_dec_ops; return 0; } diff --git a/drivers/baseband/la12xx/bbdev_la12xx_ipc.h b/drivers/baseband/la12xx/bbdev_la12xx_ipc.h index 5f613fb087..b6a7f677d0 100644 --- a/drivers/baseband/la12xx/bbdev_la12xx_ipc.h +++ b/drivers/baseband/la12xx/bbdev_la12xx_ipc.h @@ -73,6 +73,25 @@ typedef struct { _IOWR(GUL_IPC_MAGIC, 5, struct ipc_msg *) #define IOCTL_GUL_IPC_CHANNEL_RAISE_INTERRUPT _IOW(GUL_IPC_MAGIC, 6, int *) +#define GUL_USER_HUGE_PAGE_OFFSET (0) +#define GUL_PCI1_ADDR_BASE (0x00000000ULL) + +#define GUL_USER_HUGE_PAGE_ADDR (GUL_PCI1_ADDR_BASE + GUL_USER_HUGE_PAGE_OFFSET) + +/* IPC PI/CI index & flag manipulation helpers */ +#define IPC_PI_CI_FLAG_MASK 0x80000000 /* (1<<31) */ +#define IPC_PI_CI_INDEX_MASK 0x7FFFFFFF /* ~(1<<31) */ + +#define IPC_SET_PI_FLAG(x) (x |= IPC_PI_CI_FLAG_MASK) +#define IPC_RESET_PI_FLAG(x) (x &= IPC_PI_CI_INDEX_MASK) +#define IPC_GET_PI_FLAG(x) (x >> 31) +#define IPC_GET_PI_INDEX(x) (x & IPC_PI_CI_INDEX_MASK) + +#define IPC_SET_CI_FLAG(x) (x |= IPC_PI_CI_FLAG_MASK) +#define IPC_RESET_CI_FLAG(x) (x &= IPC_PI_CI_INDEX_MASK) +#define IPC_GET_CI_FLAG(x) (x >> 31) +#define IPC_GET_CI_INDEX(x) (x & IPC_PI_CI_INDEX_MASK) + /** buffer ring common metadata */ typedef struct ipc_bd_ring_md { volatile uint32_t pi; /**< Producer index and flag (MSB) @@ -180,6 +199,24 @@ struct bbdev_ipc_enqueue_op { uint32_t rsvd; }; +/** Structure specifying dequeue operation (dequeue at LA1224) */ +struct bbdev_ipc_dequeue_op { + /** Input buffer memory address */ + uint32_t in_addr; + /** Input buffer memory length */ + uint32_t in_len; + /** Output buffer memory address */ + uint32_t out_addr; + /** Output buffer memory length */ + uint32_t out_len; + /* Number of code blocks. 
Only set when HARQ is used */ + uint32_t num_code_blocks; + /** Dequeue Operation flags */ + uint32_t op_flags; + /** Shared metadata between L1 and L2 */ + uint32_t shared_metadata; +}; + /* This shared memory would be on the host side which have copy of some * of the parameters which are also part of Shared BD ring. Read access * of these parameters from the host side would not be over PCI. From patchwork Sun Oct 17 06:53:30 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nipun Gupta X-Patchwork-Id: 101900 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 03773A0547; Sun, 17 Oct 2021 08:54:24 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id DEC194111C; Sun, 17 Oct 2021 08:53:45 +0200 (CEST) Received: from inva021.nxp.com (inva021.nxp.com [92.121.34.21]) by mails.dpdk.org (Postfix) with ESMTP id 8DCD340DF7 for ; Sun, 17 Oct 2021 08:53:36 +0200 (CEST) Received: from inva021.nxp.com (localhost [127.0.0.1]) by inva021.eu-rdc02.nxp.com (Postfix) with ESMTP id 7132F201B8D; Sun, 17 Oct 2021 08:53:36 +0200 (CEST) Received: from aprdc01srsp001v.ap-rdc01.nxp.com (aprdc01srsp001v.ap-rdc01.nxp.com [165.114.16.16]) by inva021.eu-rdc02.nxp.com (Postfix) with ESMTP id 3A06D2005C1; Sun, 17 Oct 2021 08:53:36 +0200 (CEST) Received: from lsv03274.swis.in-blr01.nxp.com (lsv03274.swis.in-blr01.nxp.com [92.120.147.114]) by aprdc01srsp001v.ap-rdc01.nxp.com (Postfix) with ESMTP id 9715A183AD0B; Sun, 17 Oct 2021 14:53:35 +0800 (+08) From: nipun.gupta@nxp.com To: dev@dpdk.org, gakhil@marvell.com, nicolas.chautru@intel.com Cc: david.marchand@redhat.com, hemant.agrawal@nxp.com Date: Sun, 17 Oct 2021 12:23:30 +0530 Message-Id: <20211017065331.25488-8-nipun.gupta@nxp.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211017065331.25488-1-nipun.gupta@nxp.com> References: <20210318063421.14895-1-hemant.agrawal@nxp.com> <20211017065331.25488-1-nipun.gupta@nxp.com> X-Virus-Scanned: ClamAV using ClamSMTP Subject: [dpdk-dev] [PATCH v11 7/8] app/bbdev: enable la12xx for bbdev X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Hemant Agrawal this patch adds la12xx driver in test bbdev Signed-off-by: Hemant Agrawal --- app/test-bbdev/meson.build | 3 +++ 1 file changed, 3 insertions(+) diff --git a/app/test-bbdev/meson.build b/app/test-bbdev/meson.build index edb9deef84..a726a5b3fa 100644 --- a/app/test-bbdev/meson.build +++ b/app/test-bbdev/meson.build @@ -23,3 +23,6 @@ endif if dpdk_conf.has('RTE_BASEBAND_ACC100') deps += ['baseband_acc100'] endif +if dpdk_conf.has('RTE_LIBRTE_PMD_BBDEV_LA12XX') + deps += ['baseband_la12xx'] +endif From patchwork Sun Oct 17 06:53:31 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nipun Gupta X-Patchwork-Id: 101901 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 94B77A0547; Sun, 17 Oct 2021 08:54:30 +0200 (CEST) Received: from 
[217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id E85314112E; Sun, 17 Oct 2021 08:53:46 +0200 (CEST) Received: from inva020.nxp.com (inva020.nxp.com [92.121.34.13]) by mails.dpdk.org (Postfix) with ESMTP id 3644240E25 for ; Sun, 17 Oct 2021 08:53:37 +0200 (CEST) Received: from inva020.nxp.com (localhost [127.0.0.1]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id 10DA31A16F6; Sun, 17 Oct 2021 08:53:37 +0200 (CEST) Received: from aprdc01srsp001v.ap-rdc01.nxp.com (aprdc01srsp001v.ap-rdc01.nxp.com [165.114.16.16]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id A2BC11A1B6B; Sun, 17 Oct 2021 08:53:36 +0200 (CEST) Received: from lsv03274.swis.in-blr01.nxp.com (lsv03274.swis.in-blr01.nxp.com [92.120.147.114]) by aprdc01srsp001v.ap-rdc01.nxp.com (Postfix) with ESMTP id 0E8F0183AC94; Sun, 17 Oct 2021 14:53:36 +0800 (+08) From: nipun.gupta@nxp.com To: dev@dpdk.org, gakhil@marvell.com, nicolas.chautru@intel.com Cc: david.marchand@redhat.com, hemant.agrawal@nxp.com, Nipun Gupta Date: Sun, 17 Oct 2021 12:23:31 +0530 Message-Id: <20211017065331.25488-9-nipun.gupta@nxp.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211017065331.25488-1-nipun.gupta@nxp.com> References: <20210318063421.14895-1-hemant.agrawal@nxp.com> <20211017065331.25488-1-nipun.gupta@nxp.com> X-Virus-Scanned: ClamAV using ClamSMTP Subject: [dpdk-dev] [PATCH v11 8/8] app/bbdev: handle endianness of test data X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Nipun Gupta With data input, output and harq also supported in big endian format, this patch updates the testbbdev application to handle the endianness conversion as directed by the the driver being used. The test vectors assumes the data in the little endian order, and thus if the driver supports big endian data processing, conversion from little endian to big is handled by the testbbdev application. Signed-off-by: Nipun Gupta --- app/test-bbdev/test_bbdev_perf.c | 43 ++++++++++++++++++++++++++++++++ doc/guides/tools/testbbdev.rst | 3 +++ 2 files changed, 46 insertions(+) diff --git a/app/test-bbdev/test_bbdev_perf.c b/app/test-bbdev/test_bbdev_perf.c index 469597b8b3..7b4529789b 100644 --- a/app/test-bbdev/test_bbdev_perf.c +++ b/app/test-bbdev/test_bbdev_perf.c @@ -227,6 +227,45 @@ clear_soft_out_cap(uint32_t *op_flags) *op_flags &= ~RTE_BBDEV_TURBO_NEG_LLR_1_BIT_SOFT_OUT; } +/* This API is to convert all the test vector op data entries + * to big endian format. It is used when the device supports + * the input in the big endian format. 
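+ * The test vectors are assumed to be in little endian order, so the
+ * conversion performed here is from little endian to big endian.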
+ */ +static inline void +convert_op_data_to_be(void) +{ + struct op_data_entries *op; + enum op_data_type type; + uint8_t nb_segs, *rem_data, temp; + uint32_t *data, len; + int complete, rem, i, j; + + for (type = DATA_INPUT; type < DATA_NUM_TYPES; ++type) { + nb_segs = test_vector.entries[type].nb_segments; + op = &test_vector.entries[type]; + + /* Invert byte endianness for all the segments */ + for (i = 0; i < nb_segs; ++i) { + len = op->segments[i].length; + data = op->segments[i].addr; + + /* Swap complete u32 bytes */ + complete = len / 4; + for (j = 0; j < complete; j++) + data[j] = rte_bswap32(data[j]); + + /* Swap any remaining bytes */ + rem = len % 4; + rem_data = (uint8_t *)&data[j]; + for (j = 0; j < rem/2; j++) { + temp = rem_data[j]; + rem_data[j] = rem_data[rem - j - 1]; + rem_data[rem - j - 1] = temp; + } + } + } +} + static int check_dev_cap(const struct rte_bbdev_info *dev_info) { @@ -234,6 +273,7 @@ check_dev_cap(const struct rte_bbdev_info *dev_info) unsigned int nb_inputs, nb_soft_outputs, nb_hard_outputs, nb_harq_inputs, nb_harq_outputs; const struct rte_bbdev_op_cap *op_cap = dev_info->drv.capabilities; + uint8_t dev_data_endianness = dev_info->drv.data_endianness; nb_inputs = test_vector.entries[DATA_INPUT].nb_segments; nb_soft_outputs = test_vector.entries[DATA_SOFT_OUTPUT].nb_segments; @@ -245,6 +285,9 @@ check_dev_cap(const struct rte_bbdev_info *dev_info) if (op_cap->type != test_vector.op_type) continue; + if (dev_data_endianness == RTE_BIG_ENDIAN) + convert_op_data_to_be(); + if (op_cap->type == RTE_BBDEV_OP_TURBO_DEC) { const struct rte_bbdev_op_cap_turbo_dec *cap = &op_cap->cap.turbo_dec; diff --git a/doc/guides/tools/testbbdev.rst b/doc/guides/tools/testbbdev.rst index d397d991ff..83a0312062 100644 --- a/doc/guides/tools/testbbdev.rst +++ b/doc/guides/tools/testbbdev.rst @@ -332,6 +332,9 @@ Variable op_type has to be defined as a first variable in file. It specifies what type of operations will be executed. For 4G decoder op_type has to be set to ``RTE_BBDEV_OP_TURBO_DEC`` and for 4G encoder to ``RTE_BBDEV_OP_TURBO_ENC``. +Bbdev-test adjusts the byte endianness based on the PMD capability (data_endianness) +and all the test vectors input/output data are assumed to be LE by default + Full details of the meaning and valid values for the below fields are documented in *rte_bbdev_op.h*
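The swap performed by bbdev-test can be pictured with a short standalone
sketch of the per-segment logic used by convert_op_data_to_be() above,
assuming a DPDK build environment for rte_bswap32(): complete 32-bit words
are byte-swapped, then any remaining tail bytes are reversed in place.

#include <stdint.h>
#include <rte_byteorder.h>

/* Convert one little-endian test-vector segment to big-endian layout. */
static void
segment_to_be(uint8_t *buf, uint32_t len)
{
	uint32_t *data = (uint32_t *)buf;
	uint32_t complete = len / 4;
	uint32_t rem = len % 4;
	uint32_t i;
	uint8_t tmp;

	/* Swap complete 32-bit words. */
	for (i = 0; i < complete; i++)
		data[i] = rte_bswap32(data[i]);

	/* Reverse any remaining tail bytes among themselves. */
	for (i = 0; i < rem / 2; i++) {
		tmp = buf[4 * complete + i];
		buf[4 * complete + i] = buf[len - 1 - i];
		buf[len - 1 - i] = tmp;
	}
}

For a 6-byte segment, for instance, bytes 0-3 are swapped as one word and
bytes 4 and 5 are exchanged, matching what the application does before
enqueuing data to a device that reports big-endian data processing.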