From patchwork Thu Oct 7 09:33:08 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nipun Gupta X-Patchwork-Id: 100680 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 005E4A0C47; Thu, 7 Oct 2021 11:33:24 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id BEC6C411AD; Thu, 7 Oct 2021 11:33:22 +0200 (CEST) Received: from inva021.nxp.com (inva021.nxp.com [92.121.34.21]) by mails.dpdk.org (Postfix) with ESMTP id C0C414067E for ; Thu, 7 Oct 2021 11:33:18 +0200 (CEST) Received: from inva021.nxp.com (localhost [127.0.0.1]) by inva021.eu-rdc02.nxp.com (Postfix) with ESMTP id 9A5152013B6; Thu, 7 Oct 2021 11:33:18 +0200 (CEST) Received: from aprdc01srsp001v.ap-rdc01.nxp.com (aprdc01srsp001v.ap-rdc01.nxp.com [165.114.16.16]) by inva021.eu-rdc02.nxp.com (Postfix) with ESMTP id 38CB6201075; Thu, 7 Oct 2021 11:33:18 +0200 (CEST) Received: from lsv03274.swis.in-blr01.nxp.com (lsv03274.swis.in-blr01.nxp.com [92.120.147.114]) by aprdc01srsp001v.ap-rdc01.nxp.com (Postfix) with ESMTP id 4BBFC183AD28; Thu, 7 Oct 2021 17:33:17 +0800 (+08) From: nipun.gupta@nxp.com To: dev@dpdk.org, gakhil@marvell.com, nicolas.chautru@intel.com Cc: david.marchand@redhat.com, hemant.agrawal@nxp.com Date: Thu, 7 Oct 2021 15:03:08 +0530 Message-Id: <20211007093315.17384-2-nipun.gupta@nxp.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211007093315.17384-1-nipun.gupta@nxp.com> References: <20210318063421.14895-1-hemant.agrawal@nxp.com> <20211007093315.17384-1-nipun.gupta@nxp.com> X-Virus-Scanned: ClamAV using ClamSMTP Subject: [dpdk-dev] [PATCH v9 1/8] bbdev: add device info related to data endianness assumption X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Nicolas Chautru Adding device information to capture explicitly the assumption of the input/output data byte endianness being processed. Signed-off-by: Nicolas Chautru --- doc/guides/rel_notes/release_21_11.rst | 1 + drivers/baseband/acc100/rte_acc100_pmd.c | 1 + drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 1 + drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 1 + drivers/baseband/turbo_sw/bbdev_turbo_software.c | 1 + lib/bbdev/rte_bbdev.h | 8 ++++++++ 6 files changed, 13 insertions(+) diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst index dfc2cbdeed..a991f01bf3 100644 --- a/doc/guides/rel_notes/release_21_11.rst +++ b/doc/guides/rel_notes/release_21_11.rst @@ -187,6 +187,7 @@ API Changes the crypto/security operation. This field will be used to communicate events such as soft expiry with IPsec in lookaside mode. +* bbdev: Added device info related to data byte endianness processing assumption. 
ABI Changes ----------- diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c b/drivers/baseband/acc100/rte_acc100_pmd.c index 68ba523ea9..5c3901c9ca 100644 --- a/drivers/baseband/acc100/rte_acc100_pmd.c +++ b/drivers/baseband/acc100/rte_acc100_pmd.c @@ -1088,6 +1088,7 @@ acc100_dev_info_get(struct rte_bbdev *dev, #else dev_info->harq_buffer_size = 0; #endif + dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN; acc100_check_ir(d); } diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c index 6485cc824a..c7f15c0bfc 100644 --- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c +++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c @@ -372,6 +372,7 @@ fpga_dev_info_get(struct rte_bbdev *dev, dev_info->default_queue_conf = default_queue_conf; dev_info->capabilities = bbdev_capabilities; dev_info->cpu_flag_reqs = NULL; + dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN; /* Calculates number of queues assigned to device */ dev_info->max_num_queues = 0; diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c index 350c4248eb..72e213ed9a 100644 --- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c +++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c @@ -644,6 +644,7 @@ fpga_dev_info_get(struct rte_bbdev *dev, dev_info->default_queue_conf = default_queue_conf; dev_info->capabilities = bbdev_capabilities; dev_info->cpu_flag_reqs = NULL; + dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN; /* Calculates number of queues assigned to device */ dev_info->max_num_queues = 0; diff --git a/drivers/baseband/turbo_sw/bbdev_turbo_software.c b/drivers/baseband/turbo_sw/bbdev_turbo_software.c index 77e9a2ecbc..193f701028 100644 --- a/drivers/baseband/turbo_sw/bbdev_turbo_software.c +++ b/drivers/baseband/turbo_sw/bbdev_turbo_software.c @@ -251,6 +251,7 @@ info_get(struct rte_bbdev *dev, struct rte_bbdev_driver_info *dev_info) dev_info->capabilities = bbdev_capabilities; dev_info->min_alignment = 64; dev_info->harq_buffer_size = 0; + dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN; rte_bbdev_log_debug("got device info from %u\n", dev->data->dev_id); } diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h index 3ebf62e697..b3f30002cd 100644 --- a/lib/bbdev/rte_bbdev.h +++ b/lib/bbdev/rte_bbdev.h @@ -49,6 +49,12 @@ enum rte_bbdev_state { RTE_BBDEV_INITIALIZED }; +/** Definitions of device data byte endianness types */ +enum rte_bbdev_endianness { + RTE_BBDEV_BIG_ENDIAN, /**< Data with byte-endianness BE */ + RTE_BBDEV_LITTLE_ENDIAN, /**< Data with byte-endianness LE */ +}; + /** * Get the total number of devices that have been successfully initialised. 
* @@ -309,6 +315,8 @@ struct rte_bbdev_driver_info { uint16_t min_alignment; /** HARQ memory available in kB */ uint32_t harq_buffer_size; + /** Byte endianness assumption for input/output data */ + enum rte_bbdev_endianness data_endianness; /** Default queue configuration used if none is supplied */ struct rte_bbdev_queue_conf default_queue_conf; /** Device operation capabilities */ From patchwork Thu Oct 7 09:33:09 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nipun Gupta X-Patchwork-Id: 100681 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 8D949A0C47; Thu, 7 Oct 2021 11:33:30 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id E702E411BC; Thu, 7 Oct 2021 11:33:23 +0200 (CEST) Received: from inva020.nxp.com (inva020.nxp.com [92.121.34.13]) by mails.dpdk.org (Postfix) with ESMTP id 3597C4067E for ; Thu, 7 Oct 2021 11:33:19 +0200 (CEST) Received: from inva020.nxp.com (localhost [127.0.0.1]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id 106DD1A1D3E; Thu, 7 Oct 2021 11:33:19 +0200 (CEST) Received: from aprdc01srsp001v.ap-rdc01.nxp.com (aprdc01srsp001v.ap-rdc01.nxp.com [165.114.16.16]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id A22101A1D59; Thu, 7 Oct 2021 11:33:18 +0200 (CEST) Received: from lsv03274.swis.in-blr01.nxp.com (lsv03274.swis.in-blr01.nxp.com [92.120.147.114]) by aprdc01srsp001v.ap-rdc01.nxp.com (Postfix) with ESMTP id AC586183AC94; Thu, 7 Oct 2021 17:33:17 +0800 (+08) From: nipun.gupta@nxp.com To: dev@dpdk.org, gakhil@marvell.com, nicolas.chautru@intel.com Cc: david.marchand@redhat.com, hemant.agrawal@nxp.com, Nipun Gupta Date: Thu, 7 Oct 2021 15:03:09 +0530 Message-Id: <20211007093315.17384-3-nipun.gupta@nxp.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211007093315.17384-1-nipun.gupta@nxp.com> References: <20210318063421.14895-1-hemant.agrawal@nxp.com> <20211007093315.17384-1-nipun.gupta@nxp.com> X-Virus-Scanned: ClamAV using ClamSMTP Subject: [dpdk-dev] [PATCH v9 2/8] baseband: introduce NXP LA12xx driver X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Hemant Agrawal This patch introduce the baseband device drivers for NXP's LA1200 series software defined baseband modem. 
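For reference, a minimal application-side sketch of how this vdev-backed skeleton is exercised is shown below. This is illustrative code, not part of the patch; it only uses existing rte_eal/rte_bus_vdev/rte_bbdev APIs, and the device name is the DRIVER_NAME registered by this driver:

    #include <stdio.h>

    #include <rte_eal.h>
    #include <rte_bus_vdev.h>
    #include <rte_bbdev.h>

    int main(int argc, char **argv)
    {
        int ret = rte_eal_init(argc, argv);
        if (ret < 0)
            return -1;

        /* Triggers la12xx_bbdev_probe() -> la12xx_bbdev_create() */
        if (rte_vdev_init("baseband_la12xx", NULL) != 0)
            return -1;

        printf("%u bbdev device(s) now available\n", rte_bbdev_count());

        /* Triggers la12xx_bbdev_remove() and releases the bbdev */
        rte_vdev_uninit("baseband_la12xx");
        return rte_eal_cleanup();
    }
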
Signed-off-by: Nipun Gupta Signed-off-by: Hemant Agrawal --- MAINTAINERS | 9 ++ drivers/baseband/la12xx/bbdev_la12xx.c | 108 ++++++++++++++++++ .../baseband/la12xx/bbdev_la12xx_pmd_logs.h | 28 +++++ drivers/baseband/la12xx/meson.build | 6 + drivers/baseband/la12xx/version.map | 3 + drivers/baseband/meson.build | 1 + 6 files changed, 155 insertions(+) create mode 100644 drivers/baseband/la12xx/bbdev_la12xx.c create mode 100644 drivers/baseband/la12xx/bbdev_la12xx_pmd_logs.h create mode 100644 drivers/baseband/la12xx/meson.build create mode 100644 drivers/baseband/la12xx/version.map diff --git a/MAINTAINERS b/MAINTAINERS index 278e5b3226..25eec751bb 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1289,6 +1289,15 @@ F: drivers/event/opdl/ F: doc/guides/eventdevs/opdl.rst +Baseband Drivers +---------------- + +NXP LA12xx driver +M: Nipun Gupta +M: Hemant Agrawal +F: drivers/baseband/la12xx/ + + Rawdev Drivers -------------- diff --git a/drivers/baseband/la12xx/bbdev_la12xx.c b/drivers/baseband/la12xx/bbdev_la12xx.c new file mode 100644 index 0000000000..d3d7a4df37 --- /dev/null +++ b/drivers/baseband/la12xx/bbdev_la12xx.c @@ -0,0 +1,108 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2020-2021 NXP + */ + +#include + +#include +#include +#include +#include +#include + +#include +#include + +#include + +#define DRIVER_NAME baseband_la12xx + +/* private data structure */ +struct bbdev_la12xx_private { + unsigned int max_nb_queues; /**< Max number of queues */ +}; +/* Create device */ +static int +la12xx_bbdev_create(struct rte_vdev_device *vdev) +{ + struct rte_bbdev *bbdev; + const char *name = rte_vdev_device_name(vdev); + + PMD_INIT_FUNC_TRACE(); + + bbdev = rte_bbdev_allocate(name); + if (bbdev == NULL) + return -ENODEV; + + bbdev->data->dev_private = rte_zmalloc(name, + sizeof(struct bbdev_la12xx_private), + RTE_CACHE_LINE_SIZE); + if (bbdev->data->dev_private == NULL) { + rte_bbdev_release(bbdev); + return -ENOMEM; + } + + bbdev->dev_ops = NULL; + bbdev->device = &vdev->device; + bbdev->data->socket_id = 0; + bbdev->intr_handle = NULL; + + /* register rx/tx burst functions for data path */ + bbdev->dequeue_enc_ops = NULL; + bbdev->dequeue_dec_ops = NULL; + bbdev->enqueue_enc_ops = NULL; + bbdev->enqueue_dec_ops = NULL; + + return 0; +} + +/* Initialise device */ +static int +la12xx_bbdev_probe(struct rte_vdev_device *vdev) +{ + const char *name; + + PMD_INIT_FUNC_TRACE(); + + if (vdev == NULL) + return -EINVAL; + + name = rte_vdev_device_name(vdev); + if (name == NULL) + return -EINVAL; + + return la12xx_bbdev_create(vdev); +} + +/* Uninitialise device */ +static int +la12xx_bbdev_remove(struct rte_vdev_device *vdev) +{ + struct rte_bbdev *bbdev; + const char *name; + + PMD_INIT_FUNC_TRACE(); + + if (vdev == NULL) + return -EINVAL; + + name = rte_vdev_device_name(vdev); + if (name == NULL) + return -EINVAL; + + bbdev = rte_bbdev_get_named_dev(name); + if (bbdev == NULL) + return -EINVAL; + + rte_free(bbdev->data->dev_private); + + return rte_bbdev_release(bbdev); +} + +static struct rte_vdev_driver bbdev_la12xx_pmd_drv = { + .probe = la12xx_bbdev_probe, + .remove = la12xx_bbdev_remove +}; + +RTE_PMD_REGISTER_VDEV(DRIVER_NAME, bbdev_la12xx_pmd_drv); +RTE_LOG_REGISTER_DEFAULT(bbdev_la12xx_logtype, NOTICE); diff --git a/drivers/baseband/la12xx/bbdev_la12xx_pmd_logs.h b/drivers/baseband/la12xx/bbdev_la12xx_pmd_logs.h new file mode 100644 index 0000000000..452435ccb9 --- /dev/null +++ b/drivers/baseband/la12xx/bbdev_la12xx_pmd_logs.h @@ -0,0 +1,28 @@ +/* SPDX-License-Identifier: 
BSD-3-Clause + * Copyright 2020 NXP + */ + +#ifndef _BBDEV_LA12XX_PMD_LOGS_H_ +#define _BBDEV_LA12XX_PMD_LOGS_H_ + +extern int bbdev_la12xx_logtype; + +#define rte_bbdev_log(level, fmt, ...) \ + rte_log(RTE_LOG_ ## level, bbdev_la12xx_logtype, fmt "\n", \ + ##__VA_ARGS__) + +#ifdef RTE_LIBRTE_BBDEV_DEBUG +#define rte_bbdev_log_debug(fmt, ...) \ + rte_bbdev_log(DEBUG, "la12xx_pmd: " fmt, \ + ##__VA_ARGS__) +#else +#define rte_bbdev_log_debug(fmt, ...) +#endif + +#define PMD_INIT_FUNC_TRACE() rte_bbdev_log_debug(">>") + +/* DP Logs, toggled out at compile time if level lower than current level */ +#define rte_bbdev_dp_log(level, fmt, args...) \ + RTE_LOG_DP(level, PMD, fmt, ## args) + +#endif /* _BBDEV_LA12XX_PMD_LOGS_H_ */ diff --git a/drivers/baseband/la12xx/meson.build b/drivers/baseband/la12xx/meson.build new file mode 100644 index 0000000000..7a017dcffa --- /dev/null +++ b/drivers/baseband/la12xx/meson.build @@ -0,0 +1,6 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright 2020-2021 NXP + +deps += ['bbdev', 'bus_vdev', 'ring'] + +sources = files('bbdev_la12xx.c') diff --git a/drivers/baseband/la12xx/version.map b/drivers/baseband/la12xx/version.map new file mode 100644 index 0000000000..4a76d1d52d --- /dev/null +++ b/drivers/baseband/la12xx/version.map @@ -0,0 +1,3 @@ +DPDK_21 { + local: *; +}; diff --git a/drivers/baseband/meson.build b/drivers/baseband/meson.build index 5ee61d5323..ccd1eebc3b 100644 --- a/drivers/baseband/meson.build +++ b/drivers/baseband/meson.build @@ -11,6 +11,7 @@ drivers = [ 'fpga_lte_fec', 'null', 'turbo_sw', + 'la12xx', ] log_prefix = 'pmd.bb' From patchwork Thu Oct 7 09:33:10 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nipun Gupta X-Patchwork-Id: 100682 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 119EEA0C47; Thu, 7 Oct 2021 11:33:36 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id F0B7E411C5; Thu, 7 Oct 2021 11:33:24 +0200 (CEST) Received: from inva020.nxp.com (inva020.nxp.com [92.121.34.13]) by mails.dpdk.org (Postfix) with ESMTP id ACAB241196 for ; Thu, 7 Oct 2021 11:33:19 +0200 (CEST) Received: from inva020.nxp.com (localhost [127.0.0.1]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id 7D2A41A1D5C; Thu, 7 Oct 2021 11:33:19 +0200 (CEST) Received: from aprdc01srsp001v.ap-rdc01.nxp.com (aprdc01srsp001v.ap-rdc01.nxp.com [165.114.16.16]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id 1B4EB1A1D5A; Thu, 7 Oct 2021 11:33:19 +0200 (CEST) Received: from lsv03274.swis.in-blr01.nxp.com (lsv03274.swis.in-blr01.nxp.com [92.120.147.114]) by aprdc01srsp001v.ap-rdc01.nxp.com (Postfix) with ESMTP id 23ABE183AD29; Thu, 7 Oct 2021 17:33:18 +0800 (+08) From: nipun.gupta@nxp.com To: dev@dpdk.org, gakhil@marvell.com, nicolas.chautru@intel.com Cc: david.marchand@redhat.com, hemant.agrawal@nxp.com, Nipun Gupta Date: Thu, 7 Oct 2021 15:03:10 +0530 Message-Id: <20211007093315.17384-4-nipun.gupta@nxp.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211007093315.17384-1-nipun.gupta@nxp.com> References: <20210318063421.14895-1-hemant.agrawal@nxp.com> <20211007093315.17384-1-nipun.gupta@nxp.com> X-Virus-Scanned: ClamAV using ClamSMTP Subject: [dpdk-dev] [PATCH v9 3/8] baseband/la12xx: add devargs for max queues X-BeenThere: dev@dpdk.org 
X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Hemant Agrawal This patch adds dev args to take max queues as input Signed-off-by: Nipun Gupta Signed-off-by: Hemant Agrawal --- drivers/baseband/la12xx/bbdev_la12xx.c | 73 +++++++++++++++++++++++++- 1 file changed, 71 insertions(+), 2 deletions(-) diff --git a/drivers/baseband/la12xx/bbdev_la12xx.c b/drivers/baseband/la12xx/bbdev_la12xx.c index d3d7a4df37..f5c835eeb8 100644 --- a/drivers/baseband/la12xx/bbdev_la12xx.c +++ b/drivers/baseband/la12xx/bbdev_la12xx.c @@ -17,13 +17,73 @@ #define DRIVER_NAME baseband_la12xx +/* Initialisation params structure that can be used by LA12xx BBDEV driver */ +struct bbdev_la12xx_params { + uint8_t queues_num; /*< LA12xx BBDEV queues number */ +}; + +#define LA12XX_MAX_NB_QUEUES_ARG "max_nb_queues" + +static const char * const bbdev_la12xx_valid_params[] = { + LA12XX_MAX_NB_QUEUES_ARG, +}; + /* private data structure */ struct bbdev_la12xx_private { unsigned int max_nb_queues; /**< Max number of queues */ }; +static inline int +parse_u16_arg(const char *key, const char *value, void *extra_args) +{ + uint16_t *u16 = extra_args; + + uint64_t result; + if ((value == NULL) || (extra_args == NULL)) + return -EINVAL; + errno = 0; + result = strtoul(value, NULL, 0); + if ((result >= (1 << 16)) || (errno != 0)) { + rte_bbdev_log(ERR, "Invalid value %" PRIu64 " for %s", + result, key); + return -ERANGE; + } + *u16 = (uint16_t)result; + return 0; +} + +/* Parse parameters used to create device */ +static int +parse_bbdev_la12xx_params(struct bbdev_la12xx_params *params, + const char *input_args) +{ + struct rte_kvargs *kvlist = NULL; + int ret = 0; + + if (params == NULL) + return -EINVAL; + if (input_args) { + kvlist = rte_kvargs_parse(input_args, + bbdev_la12xx_valid_params); + if (kvlist == NULL) + return -EFAULT; + + ret = rte_kvargs_process(kvlist, bbdev_la12xx_valid_params[0], + &parse_u16_arg, ¶ms->queues_num); + if (ret < 0) + goto exit; + + } + +exit: + if (kvlist) + rte_kvargs_free(kvlist); + return ret; +} + /* Create device */ static int -la12xx_bbdev_create(struct rte_vdev_device *vdev) +la12xx_bbdev_create(struct rte_vdev_device *vdev, + struct bbdev_la12xx_params *init_params __rte_unused) { struct rte_bbdev *bbdev; const char *name = rte_vdev_device_name(vdev); @@ -60,7 +120,11 @@ la12xx_bbdev_create(struct rte_vdev_device *vdev) static int la12xx_bbdev_probe(struct rte_vdev_device *vdev) { + struct bbdev_la12xx_params init_params = { + 8 + }; const char *name; + const char *input_args; PMD_INIT_FUNC_TRACE(); @@ -71,7 +135,10 @@ la12xx_bbdev_probe(struct rte_vdev_device *vdev) if (name == NULL) return -EINVAL; - return la12xx_bbdev_create(vdev); + input_args = rte_vdev_device_args(vdev); + parse_bbdev_la12xx_params(&init_params, input_args); + + return la12xx_bbdev_create(vdev, &init_params); } /* Uninitialise device */ @@ -105,4 +172,6 @@ static struct rte_vdev_driver bbdev_la12xx_pmd_drv = { }; RTE_PMD_REGISTER_VDEV(DRIVER_NAME, bbdev_la12xx_pmd_drv); +RTE_PMD_REGISTER_PARAM_STRING(DRIVER_NAME, + LA12XX_MAX_NB_QUEUES_ARG"="); RTE_LOG_REGISTER_DEFAULT(bbdev_la12xx_logtype, NOTICE); From patchwork Thu Oct 7 09:33:11 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nipun Gupta X-Patchwork-Id: 100683 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: 
patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 6A1D5A0C47; Thu, 7 Oct 2021 11:33:41 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 211AB411D0; Thu, 7 Oct 2021 11:33:26 +0200 (CEST) Received: from inva020.nxp.com (inva020.nxp.com [92.121.34.13]) by mails.dpdk.org (Postfix) with ESMTP id 0B3744119A for ; Thu, 7 Oct 2021 11:33:20 +0200 (CEST) Received: from inva020.nxp.com (localhost [127.0.0.1]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id DE0D71A1D5A; Thu, 7 Oct 2021 11:33:19 +0200 (CEST) Received: from aprdc01srsp001v.ap-rdc01.nxp.com (aprdc01srsp001v.ap-rdc01.nxp.com [165.114.16.16]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id 7BA541A1D5B; Thu, 7 Oct 2021 11:33:19 +0200 (CEST) Received: from lsv03274.swis.in-blr01.nxp.com (lsv03274.swis.in-blr01.nxp.com [92.120.147.114]) by aprdc01srsp001v.ap-rdc01.nxp.com (Postfix) with ESMTP id 90B50183AD2A; Thu, 7 Oct 2021 17:33:18 +0800 (+08) From: nipun.gupta@nxp.com To: dev@dpdk.org, gakhil@marvell.com, nicolas.chautru@intel.com Cc: david.marchand@redhat.com, hemant.agrawal@nxp.com Date: Thu, 7 Oct 2021 15:03:11 +0530 Message-Id: <20211007093315.17384-5-nipun.gupta@nxp.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211007093315.17384-1-nipun.gupta@nxp.com> References: <20210318063421.14895-1-hemant.agrawal@nxp.com> <20211007093315.17384-1-nipun.gupta@nxp.com> X-Virus-Scanned: ClamAV using ClamSMTP Subject: [dpdk-dev] [PATCH v9 4/8] baseband/la12xx: add support for multiple modems X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Hemant Agrawal This patch add support for multiple modems by assigning a modem id as dev args in vdev creation. 
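For illustration, the devargs introduced by this patch can be supplied on the EAL command line (e.g. --vdev=baseband_la12xx,modem=0,max_nb_queues=8) or programmatically. The sketch below is hypothetical application code using the standard vdev API; the instance names are chosen arbitrarily:

    #include <rte_bus_vdev.h>

    /* Attach two LA12xx modems as separate bbdev instances. "modem" selects
     * the modem id (0 .. LA12XX_MAX_MODEM - 1) and "max_nb_queues" caps the
     * queue count, matching the kvargs parsed by this driver. */
    static int attach_la12xx_modems(void)
    {
        if (rte_vdev_init("baseband_la12xx0", "modem=0,max_nb_queues=8") != 0)
            return -1;
        if (rte_vdev_init("baseband_la12xx1", "modem=1,max_nb_queues=8") != 0)
            return -1;
        return 0;
    }
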
Signed-off-by: Hemant Agrawal --- drivers/baseband/la12xx/bbdev_la12xx.c | 64 +++++++++++++++++++--- drivers/baseband/la12xx/bbdev_la12xx.h | 56 +++++++++++++++++++ drivers/baseband/la12xx/bbdev_la12xx_ipc.h | 20 +++++++ 3 files changed, 133 insertions(+), 7 deletions(-) create mode 100644 drivers/baseband/la12xx/bbdev_la12xx.h create mode 100644 drivers/baseband/la12xx/bbdev_la12xx_ipc.h diff --git a/drivers/baseband/la12xx/bbdev_la12xx.c b/drivers/baseband/la12xx/bbdev_la12xx.c index f5c835eeb8..58defa54f0 100644 --- a/drivers/baseband/la12xx/bbdev_la12xx.c +++ b/drivers/baseband/la12xx/bbdev_la12xx.c @@ -14,24 +14,26 @@ #include #include +#include +#include #define DRIVER_NAME baseband_la12xx /* Initialisation params structure that can be used by LA12xx BBDEV driver */ struct bbdev_la12xx_params { uint8_t queues_num; /*< LA12xx BBDEV queues number */ + int8_t modem_id; /*< LA12xx modem instance id */ }; #define LA12XX_MAX_NB_QUEUES_ARG "max_nb_queues" +#define LA12XX_VDEV_MODEM_ID_ARG "modem" +#define LA12XX_MAX_MODEM 4 static const char * const bbdev_la12xx_valid_params[] = { LA12XX_MAX_NB_QUEUES_ARG, + LA12XX_VDEV_MODEM_ID_ARG, }; -/* private data structure */ -struct bbdev_la12xx_private { - unsigned int max_nb_queues; /**< Max number of queues */ -}; static inline int parse_u16_arg(const char *key, const char *value, void *extra_args) { @@ -51,6 +53,28 @@ parse_u16_arg(const char *key, const char *value, void *extra_args) return 0; } +/* Parse integer from integer argument */ +static int +parse_integer_arg(const char *key __rte_unused, + const char *value, void *extra_args) +{ + int i; + char *end; + + errno = 0; + + i = strtol(value, &end, 10); + if (*end != 0 || errno != 0 || i < 0 || i > LA12XX_MAX_MODEM) { + rte_bbdev_log(ERR, "Supported Port IDS are 0 to %d", + LA12XX_MAX_MODEM - 1); + return -EINVAL; + } + + *((uint32_t *)extra_args) = i; + + return 0; +} + /* Parse parameters used to create device */ static int parse_bbdev_la12xx_params(struct bbdev_la12xx_params *params, @@ -72,6 +96,16 @@ parse_bbdev_la12xx_params(struct bbdev_la12xx_params *params, if (ret < 0) goto exit; + ret = rte_kvargs_process(kvlist, + bbdev_la12xx_valid_params[1], + &parse_integer_arg, + ¶ms->modem_id); + + if (params->modem_id >= LA12XX_MAX_MODEM) { + rte_bbdev_log(ERR, "Invalid modem id, must be < %u", + LA12XX_MAX_MODEM); + goto exit; + } } exit: @@ -83,10 +117,11 @@ parse_bbdev_la12xx_params(struct bbdev_la12xx_params *params, /* Create device */ static int la12xx_bbdev_create(struct rte_vdev_device *vdev, - struct bbdev_la12xx_params *init_params __rte_unused) + struct bbdev_la12xx_params *init_params) { struct rte_bbdev *bbdev; const char *name = rte_vdev_device_name(vdev); + struct bbdev_la12xx_private *priv; PMD_INIT_FUNC_TRACE(); @@ -102,6 +137,20 @@ la12xx_bbdev_create(struct rte_vdev_device *vdev, return -ENOMEM; } + priv = bbdev->data->dev_private; + priv->modem_id = init_params->modem_id; + /* if modem id is not configured */ + if (priv->modem_id == -1) + priv->modem_id = bbdev->data->dev_id; + + /* Reset Global variables */ + priv->num_ldpc_enc_queues = 0; + priv->num_ldpc_dec_queues = 0; + priv->num_valid_queues = 0; + priv->max_nb_queues = init_params->queues_num; + + rte_bbdev_log(INFO, "Setting Up %s: DevId=%d, ModemId=%d", + name, bbdev->data->dev_id, priv->modem_id); bbdev->dev_ops = NULL; bbdev->device = &vdev->device; bbdev->data->socket_id = 0; @@ -121,7 +170,7 @@ static int la12xx_bbdev_probe(struct rte_vdev_device *vdev) { struct bbdev_la12xx_params init_params = { - 8 + 8, 
-1, }; const char *name; const char *input_args; @@ -173,5 +222,6 @@ static struct rte_vdev_driver bbdev_la12xx_pmd_drv = { RTE_PMD_REGISTER_VDEV(DRIVER_NAME, bbdev_la12xx_pmd_drv); RTE_PMD_REGISTER_PARAM_STRING(DRIVER_NAME, - LA12XX_MAX_NB_QUEUES_ARG"="); + LA12XX_MAX_NB_QUEUES_ARG"=" + LA12XX_VDEV_MODEM_ID_ARG "= "); RTE_LOG_REGISTER_DEFAULT(bbdev_la12xx_logtype, NOTICE); diff --git a/drivers/baseband/la12xx/bbdev_la12xx.h b/drivers/baseband/la12xx/bbdev_la12xx.h new file mode 100644 index 0000000000..5228502331 --- /dev/null +++ b/drivers/baseband/la12xx/bbdev_la12xx.h @@ -0,0 +1,56 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2020-2021 NXP + */ + +#ifndef __BBDEV_LA12XX_H__ +#define __BBDEV_LA12XX_H__ + +#define BBDEV_IPC_ENC_OP_TYPE 1 +#define BBDEV_IPC_DEC_OP_TYPE 2 + +#define MAX_LDPC_ENC_FECA_QUEUES 4 +#define MAX_LDPC_DEC_FECA_QUEUES 4 + +#define MAX_CHANNEL_DEPTH 16 +/* private data structure */ +struct bbdev_la12xx_private { + void *ipc_priv; + uint8_t num_valid_queues; + uint8_t max_nb_queues; + uint8_t num_ldpc_enc_queues; + uint8_t num_ldpc_dec_queues; + int8_t modem_id; + struct bbdev_la12xx_q_priv *queues_priv[32]; +}; + +struct hugepage_info { + void *vaddr; + phys_addr_t paddr; + size_t len; +}; + +struct bbdev_la12xx_q_priv { + struct bbdev_la12xx_private *bbdev_priv; + uint32_t q_id; /**< Channel ID */ + uint32_t feca_blk_id; /** FECA block ID for processing */ + uint32_t feca_blk_id_be32; /**< FECA Block ID for this queue */ + uint8_t en_napi; /* 0: napi disabled, 1: napi enabled */ + uint16_t queue_size; /**< Queue depth */ + int32_t eventfd; /**< Event FD value */ + enum rte_bbdev_op_type op_type; /**< Operation type */ + uint32_t la12xx_core_id; + /* LA12xx core ID on which this will be scheduled */ + struct rte_mempool *mp; /**< Pool from where buffers would be cut */ + void *bbdev_op[MAX_CHANNEL_DEPTH]; + /**< Stores bbdev op for each index */ + void *msg_ch_vaddr[MAX_CHANNEL_DEPTH]; + /**< Stores msg channel addr for modem->host */ + uint32_t host_pi; /**< Producer_Index for HOST->MODEM */ + uint32_t host_ci; /**< Consumer Index for MODEM->HOST */ + host_ipc_params_t *host_params; /**< Host parameters */ +}; + +#define lower_32_bits(x) ((uint32_t)((uint64_t)x)) +#define upper_32_bits(x) ((uint32_t)(((uint64_t)(x) >> 16) >> 16)) + +#endif diff --git a/drivers/baseband/la12xx/bbdev_la12xx_ipc.h b/drivers/baseband/la12xx/bbdev_la12xx_ipc.h new file mode 100644 index 0000000000..9aa5562981 --- /dev/null +++ b/drivers/baseband/la12xx/bbdev_la12xx_ipc.h @@ -0,0 +1,20 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2020-2021 NXP + */ +#ifndef __BBDEV_LA12XX_IPC_H__ +#define __BBDEV_LA12XX_IPC_H__ + +/** No. of max channel per instance */ +#define IPC_MAX_DEPTH (16) + +/* This shared memory would be on the host side which have copy of some + * of the parameters which are also part of Shared BD ring. Read access + * of these parameters from the host side would not be over PCI. 
+ */ +typedef struct host_ipc_params { + volatile uint32_t pi; + volatile uint32_t ci; + volatile uint32_t modem_ptr[IPC_MAX_DEPTH]; +} __rte_packed host_ipc_params_t; + +#endif From patchwork Thu Oct 7 09:33:12 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nipun Gupta X-Patchwork-Id: 100684 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id C8E7EA0C47; Thu, 7 Oct 2021 11:33:46 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 35D3A411D7; Thu, 7 Oct 2021 11:33:27 +0200 (CEST) Received: from inva020.nxp.com (inva020.nxp.com [92.121.34.13]) by mails.dpdk.org (Postfix) with ESMTP id 8A52441196 for ; Thu, 7 Oct 2021 11:33:20 +0200 (CEST) Received: from inva020.nxp.com (localhost [127.0.0.1]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id 65A301A1D64; Thu, 7 Oct 2021 11:33:20 +0200 (CEST) Received: from aprdc01srsp001v.ap-rdc01.nxp.com (aprdc01srsp001v.ap-rdc01.nxp.com [165.114.16.16]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id F2B061A1D5B; Thu, 7 Oct 2021 11:33:19 +0200 (CEST) Received: from lsv03274.swis.in-blr01.nxp.com (lsv03274.swis.in-blr01.nxp.com [92.120.147.114]) by aprdc01srsp001v.ap-rdc01.nxp.com (Postfix) with ESMTP id F229A183AC89; Thu, 7 Oct 2021 17:33:18 +0800 (+08) From: nipun.gupta@nxp.com To: dev@dpdk.org, gakhil@marvell.com, nicolas.chautru@intel.com Cc: david.marchand@redhat.com, hemant.agrawal@nxp.com, Nipun Gupta Date: Thu, 7 Oct 2021 15:03:12 +0530 Message-Id: <20211007093315.17384-6-nipun.gupta@nxp.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211007093315.17384-1-nipun.gupta@nxp.com> References: <20210318063421.14895-1-hemant.agrawal@nxp.com> <20211007093315.17384-1-nipun.gupta@nxp.com> X-Virus-Scanned: ClamAV using ClamSMTP Subject: [dpdk-dev] [PATCH v9 5/8] baseband/la12xx: add queue and modem config support X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Hemant Agrawal This patch add support for connecting with modem and creating the ipc channel as queues with modem for the exchange of data. 
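To make the queue/IPC setup below easier to follow, here is a hedged sketch of the application-side sequence that ends up calling la12xx_queue_setup() and la12xx_start(); it relies only on the generic rte_bbdev API, and the helper name and single-queue choice are illustrative:

    #include <rte_bbdev.h>

    /* Configure one LDPC encode queue on an already-probed la12xx bbdev */
    static int la12xx_app_setup(uint16_t dev_id)
    {
        struct rte_bbdev_info info;
        struct rte_bbdev_queue_conf qconf;
        int ret;

        ret = rte_bbdev_info_get(dev_id, &info);
        if (ret != 0)
            return ret;

        ret = rte_bbdev_setup_queues(dev_id, 1, info.socket_id);
        if (ret != 0)
            return ret;

        /* Default queue_size is MAX_CHANNEL_DEPTH, as reported by info_get() */
        qconf = info.drv.default_queue_conf;
        qconf.socket = info.socket_id;
        qconf.op_type = RTE_BBDEV_OP_LDPC_ENC;  /* mapped to the LDPC enc core */
        ret = rte_bbdev_queue_configure(dev_id, 0, &qconf);
        if (ret != 0)
            return ret;

        /* la12xx_start() sets the host-ready bit and polls for modem ready */
        return rte_bbdev_start(dev_id);
    }
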
Signed-off-by: Nipun Gupta Signed-off-by: Hemant Agrawal --- MAINTAINERS | 1 + doc/guides/bbdevs/index.rst | 1 + doc/guides/bbdevs/la12xx.rst | 80 +++ doc/guides/rel_notes/release_21_11.rst | 5 + drivers/baseband/la12xx/bbdev_la12xx.c | 556 ++++++++++++++++++++- drivers/baseband/la12xx/bbdev_la12xx.h | 17 +- drivers/baseband/la12xx/bbdev_la12xx_ipc.h | 189 ++++++- 7 files changed, 836 insertions(+), 13 deletions(-) create mode 100644 doc/guides/bbdevs/la12xx.rst diff --git a/MAINTAINERS b/MAINTAINERS index 25eec751bb..7030147767 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1296,6 +1296,7 @@ NXP LA12xx driver M: Nipun Gupta M: Hemant Agrawal F: drivers/baseband/la12xx/ +F: doc/guides/bbdevs/la12xx.rst Rawdev Drivers diff --git a/doc/guides/bbdevs/index.rst b/doc/guides/bbdevs/index.rst index 4445cbd1b0..cedd706fa6 100644 --- a/doc/guides/bbdevs/index.rst +++ b/doc/guides/bbdevs/index.rst @@ -14,3 +14,4 @@ Baseband Device Drivers fpga_lte_fec fpga_5gnr_fec acc100 + la12xx diff --git a/doc/guides/bbdevs/la12xx.rst b/doc/guides/bbdevs/la12xx.rst new file mode 100644 index 0000000000..1a711ef5e3 --- /dev/null +++ b/doc/guides/bbdevs/la12xx.rst @@ -0,0 +1,80 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright 2021 NXP + +NXP LA12xx Poll Mode Driver +======================================= + +The BBDEV LA12xx poll mode driver (PMD) supports an implementation for +offloading High Phy processing functions like LDPC Encode / Decode 5GNR wireless +acceleration function, using PCI based LA12xx Software defined radio. + +More information can be found at `NXP Official Website +`_. + +Features +-------- + +LA12xx PMD supports the following features: + +- Maximum of 8 LDPC decode (UL) queues +- Maximum of 8 LDPC encode (DL) queues +- PCIe Gen-3 x8 Interface + +Installation +------------ + +Section 3 of the DPDK manual provides instructions on installing and compiling DPDK. + +DPDK requires hugepages to be configured as detailed in section 2 of the DPDK manual. + +Initialization +-------------- + +The device can be listed on the host console with: + + +Use the following lspci command to get the multiple LA12xx processor ids. The +device ID of the LA12xx baseband processor is "1c30". + +.. code-block:: console + + sudo lspci -nn + +... +0001:01:00.0 Power PC [0b20]: Freescale Semiconductor Inc Device [1957:1c30] ( +rev 10) +... +0002:01:00.0 Power PC [0b20]: Freescale Semiconductor Inc Device [1957:1c30] ( +rev 10) + + +Prerequisites +------------- + +Currently supported by DPDK: + +- NXP LA1224 BSP **1.0+**. +- NXP LA1224 PCIe Modem card connected to ARM host. + +- Follow the DPDK :ref:`Getting Started Guide for Linux ` to setup the basic DPDK environment. + +* Use dev arg option ``modem=0`` to identify the modem instance for a given + device. This is required only if more than 1 modem cards are attached to host. + this is optional and the default value is 0. + e.g. ``--vdev=baseband_la12xx,modem=0`` + +* Use dev arg option ``max_nb_queues=x`` to specify the maximum number of queues + to be used for communication with offload device i.e. modem. default is 16. + e.g. ``--vdev=baseband_la12xx,max_nb_queues=4`` + +Enabling logs +------------- + +For enabling logs, use the following EAL parameter: + +.. code-block:: console + + ./your_bbdev_application --log-level=la12xx: + +Using ``bb.la12xx`` as log matching criteria, all Baseband PMD logs can be +enabled which are lower than logging ``level``. 
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst index a991f01bf3..3b41239484 100644 --- a/doc/guides/rel_notes/release_21_11.rst +++ b/doc/guides/rel_notes/release_21_11.rst @@ -130,6 +130,11 @@ New Features * Added tests to validate packets hard expiry. * Added tests to verify tunnel header verification in IPsec inbound. +* **Added NXP LA12xx baseband PMD.** + + * Added a new baseband PMD driver for NXP LA12xx Software defined radio. + * See the :doc:`../bbdevs/la12xx` for more details. + Removed Items ------------- diff --git a/drivers/baseband/la12xx/bbdev_la12xx.c b/drivers/baseband/la12xx/bbdev_la12xx.c index 58defa54f0..efd5b5c42d 100644 --- a/drivers/baseband/la12xx/bbdev_la12xx.c +++ b/drivers/baseband/la12xx/bbdev_la12xx.c @@ -3,6 +3,11 @@ */ #include +#include +#include +#include +#include +#include #include #include @@ -29,11 +34,553 @@ struct bbdev_la12xx_params { #define LA12XX_VDEV_MODEM_ID_ARG "modem" #define LA12XX_MAX_MODEM 4 +#define LA12XX_MAX_CORES 4 +#define LA12XX_LDPC_ENC_CORE 0 +#define LA12XX_LDPC_DEC_CORE 1 + +#define LA12XX_MAX_LDPC_ENC_QUEUES 4 +#define LA12XX_MAX_LDPC_DEC_QUEUES 4 + static const char * const bbdev_la12xx_valid_params[] = { LA12XX_MAX_NB_QUEUES_ARG, LA12XX_VDEV_MODEM_ID_ARG, }; +static const struct rte_bbdev_op_cap bbdev_capabilities[] = { + { + .type = RTE_BBDEV_OP_LDPC_ENC, + .cap.ldpc_enc = { + .capability_flags = + RTE_BBDEV_LDPC_RATE_MATCH | + RTE_BBDEV_LDPC_CRC_24A_ATTACH | + RTE_BBDEV_LDPC_CRC_24B_ATTACH, + .num_buffers_src = + RTE_BBDEV_LDPC_MAX_CODE_BLOCKS, + .num_buffers_dst = + RTE_BBDEV_LDPC_MAX_CODE_BLOCKS, + } + }, + { + .type = RTE_BBDEV_OP_LDPC_DEC, + .cap.ldpc_dec = { + .capability_flags = + RTE_BBDEV_LDPC_CRC_TYPE_24A_CHECK | + RTE_BBDEV_LDPC_CRC_TYPE_24B_CHECK | + RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP, + .num_buffers_src = + RTE_BBDEV_LDPC_MAX_CODE_BLOCKS, + .num_buffers_hard_out = + RTE_BBDEV_LDPC_MAX_CODE_BLOCKS, + .llr_size = 8, + .llr_decimals = 1, + } + }, + RTE_BBDEV_END_OF_CAPABILITIES_LIST() +}; + +static struct rte_bbdev_queue_conf default_queue_conf = { + .queue_size = MAX_CHANNEL_DEPTH, +}; + +/* Get device info */ +static void +la12xx_info_get(struct rte_bbdev *dev __rte_unused, + struct rte_bbdev_driver_info *dev_info) +{ + PMD_INIT_FUNC_TRACE(); + + dev_info->driver_name = RTE_STR(DRIVER_NAME); + dev_info->max_num_queues = LA12XX_MAX_QUEUES; + dev_info->queue_size_lim = MAX_CHANNEL_DEPTH; + dev_info->hardware_accelerated = true; + dev_info->max_dl_queue_priority = 0; + dev_info->max_ul_queue_priority = 0; + dev_info->data_endianness = RTE_BBDEV_BIG_ENDIAN; + dev_info->default_queue_conf = default_queue_conf; + dev_info->capabilities = bbdev_capabilities; + dev_info->cpu_flag_reqs = NULL; + dev_info->min_alignment = 64; + + rte_bbdev_log_debug("got device info from %u", dev->data->dev_id); +} + +/* Release queue */ +static int +la12xx_queue_release(struct rte_bbdev *dev, uint16_t q_id) +{ + RTE_SET_USED(dev); + RTE_SET_USED(q_id); + + PMD_INIT_FUNC_TRACE(); + + return 0; +} + +#define HUGEPG_OFFSET(A) \ + ((uint64_t) ((unsigned long) (A) \ + - ((uint64_t)ipc_priv->hugepg_start.host_vaddr))) + +static int ipc_queue_configure(uint32_t channel_id, + ipc_t instance, struct bbdev_la12xx_q_priv *q_priv) +{ + ipc_userspace_t *ipc_priv = (ipc_userspace_t *)instance; + ipc_instance_t *ipc_instance = ipc_priv->instance; + ipc_ch_t *ch; + void *vaddr; + uint32_t i = 0; + uint32_t msg_size = sizeof(struct bbdev_ipc_enqueue_op); + + PMD_INIT_FUNC_TRACE(); + + rte_bbdev_log_debug("%x %p", 
ipc_instance->initialized, + ipc_priv->instance); + ch = &(ipc_instance->ch_list[channel_id]); + + rte_bbdev_log_debug("channel: %u, depth: %u, msg size: %u", + channel_id, q_priv->queue_size, msg_size); + + /* Start init of channel */ + ch->md.ring_size = rte_cpu_to_be_32(q_priv->queue_size); + ch->md.pi = 0; + ch->md.ci = 0; + ch->md.msg_size = msg_size; + for (i = 0; i < q_priv->queue_size; i++) { + vaddr = rte_malloc(NULL, msg_size, RTE_CACHE_LINE_SIZE); + if (!vaddr) + return IPC_HOST_BUF_ALLOC_FAIL; + /* Only offset now */ + ch->bd_h[i].modem_ptr = + rte_cpu_to_be_32(HUGEPG_OFFSET(vaddr)); + ch->bd_h[i].host_virt_l = lower_32_bits(vaddr); + ch->bd_h[i].host_virt_h = upper_32_bits(vaddr); + q_priv->msg_ch_vaddr[i] = vaddr; + /* Not sure use of this len may be for CRC*/ + ch->bd_h[i].len = 0; + } + ch->host_ipc_params = + rte_cpu_to_be_32(HUGEPG_OFFSET(q_priv->host_params)); + + rte_bbdev_log_debug("Channel configured"); + return IPC_SUCCESS; +} + +static int +la12xx_e200_queue_setup(struct rte_bbdev *dev, + struct bbdev_la12xx_q_priv *q_priv) +{ + struct bbdev_la12xx_private *priv = dev->data->dev_private; + ipc_userspace_t *ipc_priv = priv->ipc_priv; + struct gul_hif *mhif; + ipc_metadata_t *ipc_md; + ipc_ch_t *ch; + int instance_id = 0, i; + int ret; + + PMD_INIT_FUNC_TRACE(); + + switch (q_priv->op_type) { + case RTE_BBDEV_OP_LDPC_ENC: + q_priv->la12xx_core_id = LA12XX_LDPC_ENC_CORE; + break; + case RTE_BBDEV_OP_LDPC_DEC: + q_priv->la12xx_core_id = LA12XX_LDPC_DEC_CORE; + break; + default: + rte_bbdev_log(ERR, "Unsupported op type\n"); + return -1; + } + + mhif = (struct gul_hif *)ipc_priv->mhif_start.host_vaddr; + /* offset is from start of PEB */ + ipc_md = (ipc_metadata_t *)((uintptr_t)ipc_priv->peb_start.host_vaddr + + mhif->ipc_regs.ipc_mdata_offset); + ch = &ipc_md->instance_list[instance_id].ch_list[q_priv->q_id]; + + if (q_priv->q_id < priv->num_valid_queues) { + ipc_br_md_t *md = &(ch->md); + + q_priv->feca_blk_id = rte_cpu_to_be_32(ch->feca_blk_id); + q_priv->feca_blk_id_be32 = ch->feca_blk_id; + q_priv->host_pi = rte_be_to_cpu_32(md->pi); + q_priv->host_ci = rte_be_to_cpu_32(md->ci); + q_priv->host_params = (host_ipc_params_t *)(uintptr_t) + (rte_be_to_cpu_32(ch->host_ipc_params) + + ((uint64_t)ipc_priv->hugepg_start.host_vaddr)); + + for (i = 0; i < q_priv->queue_size; i++) { + uint32_t h, l; + + h = ch->bd_h[i].host_virt_h; + l = ch->bd_h[i].host_virt_l; + q_priv->msg_ch_vaddr[i] = (void *)join_32_bits(h, l); + } + + rte_bbdev_log(WARNING, + "Queue [%d] already configured, not configuring again", + q_priv->q_id); + return 0; + } + + rte_bbdev_log_debug("setting up queue %d", q_priv->q_id); + + /* Call ipc_configure_channel */ + ret = ipc_queue_configure(q_priv->q_id, ipc_priv, q_priv); + if (ret) { + rte_bbdev_log(ERR, "Unable to setup queue (%d) (err=%d)", + q_priv->q_id, ret); + return ret; + } + + /* Set queue properties for LA12xx device */ + switch (q_priv->op_type) { + case RTE_BBDEV_OP_LDPC_ENC: + if (priv->num_ldpc_enc_queues >= LA12XX_MAX_LDPC_ENC_QUEUES) { + rte_bbdev_log(ERR, + "num_ldpc_enc_queues reached max value"); + return -1; + } + ch->la12xx_core_id = + rte_cpu_to_be_32(LA12XX_LDPC_ENC_CORE); + ch->feca_blk_id = rte_cpu_to_be_32(priv->num_ldpc_enc_queues++); + break; + case RTE_BBDEV_OP_LDPC_DEC: + if (priv->num_ldpc_dec_queues >= LA12XX_MAX_LDPC_DEC_QUEUES) { + rte_bbdev_log(ERR, + "num_ldpc_dec_queues reached max value"); + return -1; + } + ch->la12xx_core_id = + rte_cpu_to_be_32(LA12XX_LDPC_DEC_CORE); + ch->feca_blk_id = 
rte_cpu_to_be_32(priv->num_ldpc_dec_queues++); + break; + default: + rte_bbdev_log(ERR, "Not supported op type\n"); + return -1; + } + ch->op_type = rte_cpu_to_be_32(q_priv->op_type); + ch->depth = rte_cpu_to_be_32(q_priv->queue_size); + + /* Store queue config here */ + q_priv->feca_blk_id = rte_cpu_to_be_32(ch->feca_blk_id); + q_priv->feca_blk_id_be32 = ch->feca_blk_id; + + return 0; +} + +/* Setup a queue */ +static int +la12xx_queue_setup(struct rte_bbdev *dev, uint16_t q_id, + const struct rte_bbdev_queue_conf *queue_conf) +{ + struct bbdev_la12xx_private *priv = dev->data->dev_private; + struct rte_bbdev_queue_data *q_data; + struct bbdev_la12xx_q_priv *q_priv; + int ret; + + PMD_INIT_FUNC_TRACE(); + + /* Move to setup_queues callback */ + q_data = &dev->data->queues[q_id]; + q_data->queue_private = rte_zmalloc(NULL, + sizeof(struct bbdev_la12xx_q_priv), 0); + if (!q_data->queue_private) { + rte_bbdev_log(ERR, "Memory allocation failed for qpriv"); + return -ENOMEM; + } + q_priv = q_data->queue_private; + q_priv->q_id = q_id; + q_priv->bbdev_priv = dev->data->dev_private; + q_priv->queue_size = queue_conf->queue_size; + q_priv->op_type = queue_conf->op_type; + + ret = la12xx_e200_queue_setup(dev, q_priv); + if (ret) { + rte_bbdev_log(ERR, "e200_queue_setup failed for qid: %d", + q_id); + return ret; + } + + /* Store queue config here */ + priv->num_valid_queues++; + + return 0; +} + +static int +la12xx_start(struct rte_bbdev *dev) +{ + struct bbdev_la12xx_private *priv = dev->data->dev_private; + ipc_userspace_t *ipc_priv = priv->ipc_priv; + int ready = 0; + struct gul_hif *hif_start; + + PMD_INIT_FUNC_TRACE(); + + hif_start = (struct gul_hif *)ipc_priv->mhif_start.host_vaddr; + + /* Set Host Read bit */ + SET_HIF_HOST_RDY(hif_start, HIF_HOST_READY_IPC_APP); + + /* Now wait for modem ready bit */ + while (!ready) + ready = CHK_HIF_MOD_RDY(hif_start, HIF_MOD_READY_IPC_APP); + + return 0; +} + +static const struct rte_bbdev_ops pmd_ops = { + .info_get = la12xx_info_get, + .queue_setup = la12xx_queue_setup, + .queue_release = la12xx_queue_release, + .start = la12xx_start +}; +static struct hugepage_info * +get_hugepage_info(void) +{ + struct hugepage_info *hp_info; + struct rte_memseg *mseg; + + PMD_INIT_FUNC_TRACE(); + + /* TODO - find a better way */ + hp_info = rte_malloc(NULL, sizeof(struct hugepage_info), 0); + if (!hp_info) { + rte_bbdev_log(ERR, "Unable to allocate on local heap"); + return NULL; + } + + mseg = rte_mem_virt2memseg(hp_info, NULL); + hp_info->vaddr = mseg->addr; + hp_info->paddr = rte_mem_virt2phy(mseg->addr); + hp_info->len = mseg->len; + + return hp_info; +} + +static int open_ipc_dev(int modem_id) +{ + char dev_initials[16], dev_path[PATH_MAX]; + struct dirent *entry; + int dev_ipc = 0; + DIR *dir; + + dir = opendir("/dev/"); + if (!dir) { + rte_bbdev_log(ERR, "Unable to open /dev/"); + return -1; + } + + sprintf(dev_initials, "gulipcgul%d", modem_id); + + while ((entry = readdir(dir)) != NULL) { + if (!strncmp(dev_initials, entry->d_name, + sizeof(dev_initials) - 1)) + break; + } + + if (!entry) { + rte_bbdev_log(ERR, "Error: No gulipcgul%d device", modem_id); + return -1; + } + + sprintf(dev_path, "/dev/%s", entry->d_name); + dev_ipc = open(dev_path, O_RDWR); + if (dev_ipc < 0) { + rte_bbdev_log(ERR, "Error: Cannot open %s", dev_path); + return -errno; + } + + return dev_ipc; +} + +static int +setup_la12xx_dev(struct rte_bbdev *dev) +{ + struct bbdev_la12xx_private *priv = dev->data->dev_private; + ipc_userspace_t *ipc_priv = priv->ipc_priv; + struct 
hugepage_info *hp = NULL; + ipc_channel_us_t *ipc_priv_ch = NULL; + int dev_ipc = 0, dev_mem = 0, i; + ipc_metadata_t *ipc_md; + struct gul_hif *mhif; + uint32_t phy_align = 0; + int ret; + + PMD_INIT_FUNC_TRACE(); + + if (!ipc_priv) { + /* TODO - get a better way */ + /* Get the hugepage info against it */ + hp = get_hugepage_info(); + if (!hp) { + rte_bbdev_log(ERR, "Unable to get hugepage info"); + ret = -ENOMEM; + goto err; + } + + rte_bbdev_log_debug("0x%" PRIx64 " %p 0x%" PRIx64, + hp->paddr, hp->vaddr, hp->len); + + ipc_priv = rte_zmalloc(0, sizeof(ipc_userspace_t), 0); + if (ipc_priv == NULL) { + rte_bbdev_log(ERR, + "Unable to allocate memory for ipc priv"); + ret = -ENOMEM; + goto err; + } + + for (i = 0; i < IPC_MAX_CHANNEL_COUNT; i++) { + ipc_priv_ch = rte_zmalloc(0, + sizeof(ipc_channel_us_t), 0); + if (ipc_priv_ch == NULL) { + rte_bbdev_log(ERR, + "Unable to allocate memory for channels"); + ret = -ENOMEM; + } + ipc_priv->channels[i] = ipc_priv_ch; + } + + dev_mem = open("/dev/mem", O_RDWR); + if (dev_mem < 0) { + rte_bbdev_log(ERR, "Error: Cannot open /dev/mem"); + ret = -errno; + goto err; + } + + ipc_priv->instance_id = 0; + ipc_priv->dev_mem = dev_mem; + + rte_bbdev_log_debug("hugepg input 0x%" PRIx64 "%p 0x%" PRIx64, + hp->paddr, hp->vaddr, hp->len); + + ipc_priv->sys_map.hugepg_start.host_phys = hp->paddr; + ipc_priv->sys_map.hugepg_start.size = hp->len; + + ipc_priv->hugepg_start.host_phys = hp->paddr; + ipc_priv->hugepg_start.host_vaddr = hp->vaddr; + ipc_priv->hugepg_start.size = hp->len; + + rte_free(hp); + } + + dev_ipc = open_ipc_dev(priv->modem_id); + if (dev_ipc < 0) { + rte_bbdev_log(ERR, "Error: open_ipc_dev failed"); + goto err; + } + ipc_priv->dev_ipc = dev_ipc; + + ret = ioctl(ipc_priv->dev_ipc, IOCTL_GUL_IPC_GET_SYS_MAP, + &ipc_priv->sys_map); + if (ret) { + rte_bbdev_log(ERR, + "IOCTL_GUL_IPC_GET_SYS_MAP ioctl failed"); + goto err; + } + + phy_align = (ipc_priv->sys_map.mhif_start.host_phys % 0x1000); + ipc_priv->mhif_start.host_vaddr = + mmap(0, ipc_priv->sys_map.mhif_start.size + phy_align, + (PROT_READ | PROT_WRITE), MAP_SHARED, ipc_priv->dev_mem, + (ipc_priv->sys_map.mhif_start.host_phys - phy_align)); + if (ipc_priv->mhif_start.host_vaddr == MAP_FAILED) { + rte_bbdev_log(ERR, "MAP failed:"); + ret = -errno; + goto err; + } + + ipc_priv->mhif_start.host_vaddr = (void *) ((uintptr_t) + (ipc_priv->mhif_start.host_vaddr) + phy_align); + + phy_align = (ipc_priv->sys_map.peb_start.host_phys % 0x1000); + ipc_priv->peb_start.host_vaddr = + mmap(0, ipc_priv->sys_map.peb_start.size + phy_align, + (PROT_READ | PROT_WRITE), MAP_SHARED, ipc_priv->dev_mem, + (ipc_priv->sys_map.peb_start.host_phys - phy_align)); + if (ipc_priv->peb_start.host_vaddr == MAP_FAILED) { + rte_bbdev_log(ERR, "MAP failed:"); + ret = -errno; + goto err; + } + + ipc_priv->peb_start.host_vaddr = (void *)((uintptr_t) + (ipc_priv->peb_start.host_vaddr) + phy_align); + + phy_align = (ipc_priv->sys_map.modem_ccsrbar.host_phys % 0x1000); + ipc_priv->modem_ccsrbar.host_vaddr = + mmap(0, ipc_priv->sys_map.modem_ccsrbar.size + phy_align, + (PROT_READ | PROT_WRITE), MAP_SHARED, ipc_priv->dev_mem, + (ipc_priv->sys_map.modem_ccsrbar.host_phys - phy_align)); + if (ipc_priv->modem_ccsrbar.host_vaddr == MAP_FAILED) { + rte_bbdev_log(ERR, "MAP failed:"); + ret = -errno; + goto err; + } + + ipc_priv->modem_ccsrbar.host_vaddr = (void *)((uintptr_t) + (ipc_priv->modem_ccsrbar.host_vaddr) + phy_align); + + ipc_priv->hugepg_start.modem_phys = + ipc_priv->sys_map.hugepg_start.modem_phys; + + 
ipc_priv->mhif_start.host_phys = + ipc_priv->sys_map.mhif_start.host_phys; + ipc_priv->mhif_start.size = ipc_priv->sys_map.mhif_start.size; + ipc_priv->peb_start.host_phys = ipc_priv->sys_map.peb_start.host_phys; + ipc_priv->peb_start.size = ipc_priv->sys_map.peb_start.size; + + rte_bbdev_log(INFO, "peb 0x%" PRIx64 "%p 0x%" PRIx32, + ipc_priv->peb_start.host_phys, + ipc_priv->peb_start.host_vaddr, + ipc_priv->peb_start.size); + rte_bbdev_log(INFO, "hugepg 0x%" PRIx64 "%p 0x%" PRIx32, + ipc_priv->hugepg_start.host_phys, + ipc_priv->hugepg_start.host_vaddr, + ipc_priv->hugepg_start.size); + rte_bbdev_log(INFO, "mhif 0x%" PRIx64 "%p 0x%" PRIx32, + ipc_priv->mhif_start.host_phys, + ipc_priv->mhif_start.host_vaddr, + ipc_priv->mhif_start.size); + mhif = (struct gul_hif *)ipc_priv->mhif_start.host_vaddr; + + /* offset is from start of PEB */ + ipc_md = (ipc_metadata_t *)((uintptr_t)ipc_priv->peb_start.host_vaddr + + mhif->ipc_regs.ipc_mdata_offset); + + if (sizeof(ipc_metadata_t) != mhif->ipc_regs.ipc_mdata_size) { + rte_bbdev_log(ERR, + "ipc_metadata_t =0x%" PRIx64 + ", mhif->ipc_regs.ipc_mdata_size=0x%" PRIx32, + (uint64_t)(sizeof(ipc_metadata_t)), + mhif->ipc_regs.ipc_mdata_size); + rte_bbdev_log(ERR, "--> mhif->ipc_regs.ipc_mdata_offset= 0x%" + PRIx32, mhif->ipc_regs.ipc_mdata_offset); + rte_bbdev_log(ERR, "gul_hif size=0x%" PRIx64, + (uint64_t)(sizeof(struct gul_hif))); + return IPC_MD_SZ_MISS_MATCH; + } + + ipc_priv->instance = (ipc_instance_t *) + (&ipc_md->instance_list[ipc_priv->instance_id]); + + rte_bbdev_log_debug("finish host init"); + + priv->ipc_priv = ipc_priv; + + return 0; + +err: + rte_free(hp); + rte_free(ipc_priv); + rte_free(ipc_priv_ch); + if (dev_mem) + close(dev_mem); + if (dev_ipc) + close(dev_ipc); + + return ret; +} + static inline int parse_u16_arg(const char *key, const char *value, void *extra_args) { @@ -122,6 +669,7 @@ la12xx_bbdev_create(struct rte_vdev_device *vdev, struct rte_bbdev *bbdev; const char *name = rte_vdev_device_name(vdev); struct bbdev_la12xx_private *priv; + int ret; PMD_INIT_FUNC_TRACE(); @@ -151,7 +699,13 @@ la12xx_bbdev_create(struct rte_vdev_device *vdev, rte_bbdev_log(INFO, "Setting Up %s: DevId=%d, ModemId=%d", name, bbdev->data->dev_id, priv->modem_id); - bbdev->dev_ops = NULL; + ret = setup_la12xx_dev(bbdev); + if (ret) { + rte_bbdev_log(ERR, "IPC Setup failed for %s", name); + rte_free(bbdev->data->dev_private); + return ret; + } + bbdev->dev_ops = &pmd_ops; bbdev->device = &vdev->device; bbdev->data->socket_id = 0; bbdev->intr_handle = NULL; diff --git a/drivers/baseband/la12xx/bbdev_la12xx.h b/drivers/baseband/la12xx/bbdev_la12xx.h index 5228502331..fe91e62bf6 100644 --- a/drivers/baseband/la12xx/bbdev_la12xx.h +++ b/drivers/baseband/la12xx/bbdev_la12xx.h @@ -5,16 +5,10 @@ #ifndef __BBDEV_LA12XX_H__ #define __BBDEV_LA12XX_H__ -#define BBDEV_IPC_ENC_OP_TYPE 1 -#define BBDEV_IPC_DEC_OP_TYPE 2 - -#define MAX_LDPC_ENC_FECA_QUEUES 4 -#define MAX_LDPC_DEC_FECA_QUEUES 4 - #define MAX_CHANNEL_DEPTH 16 /* private data structure */ struct bbdev_la12xx_private { - void *ipc_priv; + ipc_userspace_t *ipc_priv; uint8_t num_valid_queues; uint8_t max_nb_queues; uint8_t num_ldpc_enc_queues; @@ -32,14 +26,14 @@ struct hugepage_info { struct bbdev_la12xx_q_priv { struct bbdev_la12xx_private *bbdev_priv; uint32_t q_id; /**< Channel ID */ - uint32_t feca_blk_id; /** FECA block ID for processing */ + uint32_t feca_blk_id; /**< FECA block ID for processing */ uint32_t feca_blk_id_be32; /**< FECA Block ID for this queue */ - uint8_t en_napi; /* 0: napi 
disabled, 1: napi enabled */ + uint8_t en_napi; /**< 0: napi disabled, 1: napi enabled */ uint16_t queue_size; /**< Queue depth */ int32_t eventfd; /**< Event FD value */ enum rte_bbdev_op_type op_type; /**< Operation type */ uint32_t la12xx_core_id; - /* LA12xx core ID on which this will be scheduled */ + /**< LA12xx core ID on which this will be scheduled */ struct rte_mempool *mp; /**< Pool from where buffers would be cut */ void *bbdev_op[MAX_CHANNEL_DEPTH]; /**< Stores bbdev op for each index */ @@ -52,5 +46,6 @@ struct bbdev_la12xx_q_priv { #define lower_32_bits(x) ((uint32_t)((uint64_t)x)) #define upper_32_bits(x) ((uint32_t)(((uint64_t)(x) >> 16) >> 16)) - +#define join_32_bits(upper, lower) \ + ((size_t)(((uint64_t)(upper) << 32) | (uint32_t)(lower))) #endif diff --git a/drivers/baseband/la12xx/bbdev_la12xx_ipc.h b/drivers/baseband/la12xx/bbdev_la12xx_ipc.h index 9aa5562981..5f613fb087 100644 --- a/drivers/baseband/la12xx/bbdev_la12xx_ipc.h +++ b/drivers/baseband/la12xx/bbdev_la12xx_ipc.h @@ -4,9 +4,182 @@ #ifndef __BBDEV_LA12XX_IPC_H__ #define __BBDEV_LA12XX_IPC_H__ +#define LA12XX_MAX_QUEUES 20 +#define HOST_RX_QUEUEID_OFFSET LA12XX_MAX_QUEUES + +/** No. of max channel per instance */ +#define IPC_MAX_CHANNEL_COUNT (64) + /** No. of max channel per instance */ #define IPC_MAX_DEPTH (16) +/** No. of max IPC instance per modem */ +#define IPC_MAX_INSTANCE_COUNT (1) + +/** Error codes */ +#define IPC_SUCCESS (0) /** IPC operation success */ +#define IPC_INPUT_INVALID (-1) /** Invalid input to API */ +#define IPC_CH_INVALID (-2) /** Channel no is invalid */ +#define IPC_INSTANCE_INVALID (-3) /** Instance no is invalid */ +#define IPC_MEM_INVALID (-4) /** Insufficient memory */ +#define IPC_CH_FULL (-5) /** Channel is full */ +#define IPC_CH_EMPTY (-6) /** Channel is empty */ +#define IPC_BL_EMPTY (-7) /** Free buffer list is empty */ +#define IPC_BL_FULL (-8) /** Free buffer list is full */ +#define IPC_HOST_BUF_ALLOC_FAIL (-9) /** DPDK malloc fail */ +#define IPC_MD_SZ_MISS_MATCH (-10) /** META DATA size in mhif miss matched*/ +#define IPC_MALLOC_FAIL (-11) /** system malloc fail */ +#define IPC_IOCTL_FAIL (-12) /** IOCTL call failed */ +#define IPC_MMAP_FAIL (-14) /** MMAP fail */ +#define IPC_OPEN_FAIL (-15) /** OPEN fail */ +#define IPC_EVENTFD_FAIL (-16) /** eventfd initialization failed */ +#define IPC_NOT_IMPLEMENTED (-17) /** IPC feature is not implemented yet*/ + +#define SET_HIF_HOST_RDY(hif, RDY_MASK) (hif->host_ready |= RDY_MASK) +#define CHK_HIF_MOD_RDY(hif, RDY_MASK) (hif->mod_ready & RDY_MASK) + +/* Host Ready bits */ +#define HIF_HOST_READY_HOST_REGIONS (1 << 0) +#define HIF_HOST_READY_IPC_LIB (1 << 12) +#define HIF_HOST_READY_IPC_APP (1 << 13) +#define HIF_HOST_READY_FECA (1 << 14) + +/* Modem Ready bits */ +#define HIF_MOD_READY_IPC_LIB (1 << 5) +#define HIF_MOD_READY_IPC_APP (1 << 6) +#define HIF_MOD_READY_FECA (1 << 7) + +typedef void *ipc_t; + +struct ipc_msg { + int chid; + void *addr; + uint32_t len; + uint8_t flags; +}; + +typedef struct { + uint64_t host_phys; + uint32_t modem_phys; + void *host_vaddr; + uint32_t size; +} mem_range_t; + +#define GUL_IPC_MAGIC 'R' + +#define IOCTL_GUL_IPC_GET_SYS_MAP _IOW(GUL_IPC_MAGIC, 1, struct ipc_msg *) +#define IOCTL_GUL_IPC_CHANNEL_REGISTER _IOWR(GUL_IPC_MAGIC, 4, struct ipc_msg *) +#define IOCTL_GUL_IPC_CHANNEL_DEREGISTER \ + _IOWR(GUL_IPC_MAGIC, 5, struct ipc_msg *) +#define IOCTL_GUL_IPC_CHANNEL_RAISE_INTERRUPT _IOW(GUL_IPC_MAGIC, 6, int *) + +/** buffer ring common metadata */ +typedef struct ipc_bd_ring_md { + 
volatile uint32_t pi; /**< Producer index and flag (MSB) + * which flip for each Ring wrapping + */ + volatile uint32_t ci; /**< Consumer index and flag (MSB) + * which flip for each Ring wrapping + */ + uint32_t ring_size; /**< depth (Used to roll-over pi/ci) */ + uint32_t msg_size; /**< Size of the each buffer */ +} __rte_packed ipc_br_md_t; + +/** IPC buffer descriptor */ +typedef struct ipc_buffer_desc { + union { + uint64_t host_virt; /**< msg's host virtual address */ + struct { + uint32_t host_virt_l; + uint32_t host_virt_h; + }; + }; + uint32_t modem_ptr; /**< msg's modem physical address */ + uint32_t len; /**< msg len */ +} __rte_packed ipc_bd_t; + +typedef struct ipc_channel { + uint32_t ch_id; /**< Channel id */ + ipc_br_md_t md; /**< Metadata for BD ring */ + ipc_bd_t bd_h[IPC_MAX_DEPTH]; /**< Buffer Descriptor on Host */ + ipc_bd_t bd_m[IPC_MAX_DEPTH]; /**< Buffer Descriptor on Modem */ + uint32_t op_type; /**< Type of the BBDEV operation + * supported on this channel + */ + uint32_t depth; /**< Channel depth */ + uint32_t feca_blk_id; /**< FECA Transport Block ID for processing */ + uint32_t la12xx_core_id;/**< LA12xx core ID on which this will be + * scheduled + */ + uint32_t feca_input_circ_size; /**< FECA transport block input + * circular buffer size + */ + uint32_t host_ipc_params; /**< Address for host IPC parameters */ +} __rte_packed ipc_ch_t; + +typedef struct ipc_instance { + uint32_t instance_id; /**< instance id, use to init this + * instance by ipc_init API + */ + uint32_t initialized; /**< Set in ipc_init */ + ipc_ch_t ch_list[IPC_MAX_CHANNEL_COUNT]; + /**< Channel descriptors in this instance */ +} __rte_packed ipc_instance_t; + +typedef struct ipc_metadata { + uint32_t ipc_host_signature; /**< IPC host signature, Set by host/L2 */ + uint32_t ipc_geul_signature; /**< IPC geul signature, Set by modem */ + ipc_instance_t instance_list[IPC_MAX_INSTANCE_COUNT]; +} __rte_packed ipc_metadata_t; + +typedef struct ipc_channel_us_priv { + int32_t eventfd; + uint32_t channel_id; + /* In flight packets status for buffer list. */ + uint8_t bufs_inflight[IPC_MAX_DEPTH]; +} ipc_channel_us_t; + +typedef struct { + uint64_t host_phys; + uint32_t modem_phys; + uint32_t size; +} mem_strt_addr_t; + +typedef struct { + mem_strt_addr_t modem_ccsrbar; + mem_strt_addr_t peb_start; /* PEB meta data */ + mem_strt_addr_t mhif_start; /* MHIF meta daat */ + mem_strt_addr_t hugepg_start; /* Modem to access hugepage */ +} sys_map_t; + +typedef struct ipc_priv_t { + int instance_id; + int dev_ipc; + int dev_mem; + sys_map_t sys_map; + mem_range_t modem_ccsrbar; + mem_range_t peb_start; + mem_range_t mhif_start; + mem_range_t hugepg_start; + ipc_channel_us_t *channels[IPC_MAX_CHANNEL_COUNT]; + ipc_instance_t *instance; + ipc_instance_t *instance_bk; +} ipc_userspace_t; + +/** Structure specifying enqueue operation (enqueue at LA1224) */ +struct bbdev_ipc_enqueue_op { + /** Status of operation that was performed */ + int32_t status; + /** CRC Status of SD operation that was performed */ + int32_t crc_stat_addr; + /** HARQ Output buffer memory length for Shared Decode. + * Filled by LA12xx. + */ + uint32_t out_len; + /** Reserved (for 8 byte alignment) */ + uint32_t rsvd; +}; + /* This shared memory would be on the host side which have copy of some * of the parameters which are also part of Shared BD ring. Read access * of these parameters from the host side would not be over PCI. 
@@ -14,7 +187,21 @@ typedef struct host_ipc_params { volatile uint32_t pi; volatile uint32_t ci; - volatile uint32_t modem_ptr[IPC_MAX_DEPTH]; + volatile uint32_t bd_m_modem_ptr[IPC_MAX_DEPTH]; } __rte_packed host_ipc_params_t; +struct hif_ipc_regs { + uint32_t ipc_mdata_offset; + uint32_t ipc_mdata_size; +} __rte_packed; + +struct gul_hif { + uint32_t ver; + uint32_t hif_ver; + uint32_t status; + volatile uint32_t host_ready; + volatile uint32_t mod_ready; + struct hif_ipc_regs ipc_regs; +} __rte_packed; + #endif From patchwork Thu Oct 7 09:33:13 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nipun Gupta X-Patchwork-Id: 100685 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 48D87A0C47; Thu, 7 Oct 2021 11:33:53 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 4998D411E1; Thu, 7 Oct 2021 11:33:28 +0200 (CEST) Received: from inva020.nxp.com (inva020.nxp.com [92.121.34.13]) by mails.dpdk.org (Postfix) with ESMTP id 9EFC3411A6 for ; Thu, 7 Oct 2021 11:33:20 +0200 (CEST) Received: from inva020.nxp.com (localhost [127.0.0.1]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id 7B6881A1D5B; Thu, 7 Oct 2021 11:33:20 +0200 (CEST) Received: from aprdc01srsp001v.ap-rdc01.nxp.com (aprdc01srsp001v.ap-rdc01.nxp.com [165.114.16.16]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id 170081A1D59; Thu, 7 Oct 2021 11:33:20 +0200 (CEST) Received: from lsv03274.swis.in-blr01.nxp.com (lsv03274.swis.in-blr01.nxp.com [92.120.147.114]) by aprdc01srsp001v.ap-rdc01.nxp.com (Postfix) with ESMTP id 7466D183AD05; Thu, 7 Oct 2021 17:33:19 +0800 (+08) From: nipun.gupta@nxp.com To: dev@dpdk.org, gakhil@marvell.com, nicolas.chautru@intel.com Cc: david.marchand@redhat.com, hemant.agrawal@nxp.com, Nipun Gupta Date: Thu, 7 Oct 2021 15:03:13 +0530 Message-Id: <20211007093315.17384-7-nipun.gupta@nxp.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211007093315.17384-1-nipun.gupta@nxp.com> References: <20210318063421.14895-1-hemant.agrawal@nxp.com> <20211007093315.17384-1-nipun.gupta@nxp.com> X-Virus-Scanned: ClamAV using ClamSMTP Subject: [dpdk-dev] [PATCH v9 6/8] baseband/la12xx: add enqueue and dequeue support X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Hemant Agrawal Add support for enqueue and dequeue the LDPC enc/dec from the modem device. Signed-off-by: Nipun Gupta Signed-off-by: Hemant Agrawal --- doc/guides/bbdevs/features/la12xx.ini | 13 + doc/guides/bbdevs/la12xx.rst | 44 +++ drivers/baseband/la12xx/bbdev_la12xx.c | 320 +++++++++++++++++++++ drivers/baseband/la12xx/bbdev_la12xx_ipc.h | 37 +++ 4 files changed, 414 insertions(+) create mode 100644 doc/guides/bbdevs/features/la12xx.ini diff --git a/doc/guides/bbdevs/features/la12xx.ini b/doc/guides/bbdevs/features/la12xx.ini new file mode 100644 index 0000000000..0aec5eecb6 --- /dev/null +++ b/doc/guides/bbdevs/features/la12xx.ini @@ -0,0 +1,13 @@ +; +; Supported features of the 'la12xx' bbdev driver. +; +; Refer to default.ini for the full list of available PMD features. 
+;
+[Features]
+Turbo Decoder (4G)     = N
+Turbo Encoder (4G)     = N
+LDPC Decoder (5G)      = Y
+LDPC Encoder (5G)      = Y
+LLR/HARQ Compression   = N
+HW Accelerated         = Y
+BBDEV API              = Y
diff --git a/doc/guides/bbdevs/la12xx.rst b/doc/guides/bbdevs/la12xx.rst
index 1a711ef5e3..fe1bca4c5c 100644
--- a/doc/guides/bbdevs/la12xx.rst
+++ b/doc/guides/bbdevs/la12xx.rst
@@ -78,3 +78,47 @@ For enabling logs, use the following EAL parameter:
 Using ``bb.la12xx`` as log matching criteria, all Baseband PMD logs can be
 enabled which are lower than logging ``level``.
+
+Test Application
+----------------
+
+BBDEV provides a test application, ``test-bbdev.py``, and a range of test data for testing
+the functionality of LA12xx for FEC encode and decode, depending on the device
+capabilities. The test application is located in the app/test-bbdev directory and has the
+following options:
+
+.. code-block:: console
+
+  "-p", "--testapp-path": specifies path to the bbdev test app.
+  "-e", "--eal-params"  : EAL arguments which are passed to the test app.
+  "-t", "--timeout"     : Timeout in seconds (default=300).
+  "-c", "--test-cases"  : Defines test cases to run. Run all if not specified.
+  "-v", "--test-vector" : Test vector path (default=dpdk_path+/app/test-bbdev/test_vectors/bbdev_null.data).
+  "-n", "--num-ops"     : Number of operations to process on device (default=32).
+  "-b", "--burst-size"  : Operations enqueue/dequeue burst size (default=32).
+  "-s", "--snr"         : SNR in dB used when generating LLRs for bler tests.
+  "-s", "--iter_max"    : Number of iterations for LDPC decoder.
+  "-l", "--num-lcores"  : Number of lcores to run (default=16).
+  "-i", "--init-device" : Initialise PF device with default values.
+
+
+To execute the test application using simple decode or encode data,
+type one of the following:
+
+.. code-block:: console
+
+  ./test-bbdev.py -e="--vdev=baseband_la12xx,socket_id=0,max_nb_queues=8" -c validation -n 64 -b 1 -v ./ldpc_dec_default.data
+  ./test-bbdev.py -e="--vdev=baseband_la12xx,socket_id=0,max_nb_queues=8" -c validation -n 64 -b 1 -v ./ldpc_enc_default.data
+
+The test application ``test-bbdev.py`` supports configuring the PF device with
+a default set of values if the "-i" or "--init-device" option is included. The default values
+are defined in test_bbdev_perf.c.
+
+
+Test Vectors
+~~~~~~~~~~~~
+
+In addition to the simple LDPC decoder and LDPC encoder tests, bbdev also provides
+a range of additional tests under the test_vectors folder, which may be useful. The results
+of these tests will depend on the LA12xx FEC capabilities, which may cause some
+test cases to be skipped, but no failure should be reported.
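The driver changes that follow keep one buffer-descriptor (BD) ring per queue and track it with 32-bit producer/consumer words whose most significant bit is a wrap flag that flips each time the index rolls over, so a full ring can be told apart from an empty one without sacrificing a slot. As a minimal standalone sketch (illustrative only, not part of the patch; the names here are hypothetical), the occupancy checks reduce to:

#include <stdint.h>

/* Hypothetical masks mirroring the IPC_PI_CI_FLAG_MASK / IPC_PI_CI_INDEX_MASK
 * definitions added to bbdev_la12xx_ipc.h later in this patch. */
#define RING_FLAG_MASK   0x80000000u
#define RING_INDEX_MASK  0x7FFFFFFFu

/* Full: same slot index, but the wrap flags differ (the producer has
 * lapped the consumer by exactly one ring). */
static inline int ring_full(uint32_t pi, uint32_t ci)
{
	return (pi & RING_INDEX_MASK) == (ci & RING_INDEX_MASK) &&
	       (pi & RING_FLAG_MASK) != (ci & RING_FLAG_MASK);
}

/* Empty: index and wrap flag are both equal. */
static inline int ring_empty(uint32_t pi, uint32_t ci)
{
	return pi == ci;
}

The is_bd_ring_full() helper and the IPC_GET_PI_INDEX()/IPC_GET_PI_FLAG() macros in the hunks below implement the same check on the shared ring metadata, flipping the flag in enqueue_single_op() and dequeue_single_op() whenever an index wraps back to zero.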
diff --git a/drivers/baseband/la12xx/bbdev_la12xx.c b/drivers/baseband/la12xx/bbdev_la12xx.c index efd5b5c42d..eaadd4f6e9 100644 --- a/drivers/baseband/la12xx/bbdev_la12xx.c +++ b/drivers/baseband/la12xx/bbdev_la12xx.c @@ -120,6 +120,10 @@ la12xx_queue_release(struct rte_bbdev *dev, uint16_t q_id) ((uint64_t) ((unsigned long) (A) \ - ((uint64_t)ipc_priv->hugepg_start.host_vaddr))) +#define MODEM_P2V(A) \ + ((uint64_t) ((unsigned long) (A) \ + + (unsigned long)(ipc_priv->peb_start.host_vaddr))) + static int ipc_queue_configure(uint32_t channel_id, ipc_t instance, struct bbdev_la12xx_q_priv *q_priv) { @@ -334,6 +338,318 @@ static const struct rte_bbdev_ops pmd_ops = { .queue_release = la12xx_queue_release, .start = la12xx_start }; + +static inline int +is_bd_ring_full(uint32_t ci, uint32_t ci_flag, + uint32_t pi, uint32_t pi_flag) +{ + if (pi == ci) { + if (pi_flag != ci_flag) + return 1; /* Ring is Full */ + } + return 0; +} + +static inline int +prepare_ldpc_enc_op(struct rte_bbdev_enc_op *bbdev_enc_op, + struct bbdev_la12xx_q_priv *q_priv __rte_unused, + struct rte_bbdev_op_data *in_op_data __rte_unused, + struct rte_bbdev_op_data *out_op_data) +{ + struct rte_bbdev_op_ldpc_enc *ldpc_enc = &bbdev_enc_op->ldpc_enc; + uint32_t total_out_bits; + + total_out_bits = (ldpc_enc->tb_params.cab * + ldpc_enc->tb_params.ea) + (ldpc_enc->tb_params.c - + ldpc_enc->tb_params.cab) * ldpc_enc->tb_params.eb; + + ldpc_enc->output.length = (total_out_bits + 7)/8; + + rte_pktmbuf_append(out_op_data->data, ldpc_enc->output.length); + + return 0; +} + +static inline int +prepare_ldpc_dec_op(struct rte_bbdev_dec_op *bbdev_dec_op, + struct bbdev_ipc_dequeue_op *bbdev_ipc_op, + struct bbdev_la12xx_q_priv *q_priv __rte_unused, + struct rte_bbdev_op_data *out_op_data __rte_unused) +{ + struct rte_bbdev_op_ldpc_dec *ldpc_dec = &bbdev_dec_op->ldpc_dec; + uint32_t total_out_bits; + uint32_t num_code_blocks = 0; + uint16_t sys_cols; + + sys_cols = (ldpc_dec->basegraph == 1) ? 22 : 10; + if (ldpc_dec->tb_params.c == 1) { + total_out_bits = ((sys_cols * ldpc_dec->z_c) - + ldpc_dec->n_filler); + /* 5G-NR protocol uses 16 bit CRC when output packet + * size <= 3824 (bits). Otherwise 24 bit CRC is used. 
+ * Adjust the output bits accordingly + */ + if (total_out_bits - 16 <= 3824) + total_out_bits -= 16; + else + total_out_bits -= 24; + ldpc_dec->hard_output.length = (total_out_bits / 8); + } else { + total_out_bits = (((sys_cols * ldpc_dec->z_c) - + ldpc_dec->n_filler - 24) * + ldpc_dec->tb_params.c); + ldpc_dec->hard_output.length = (total_out_bits / 8) - 3; + } + + num_code_blocks = ldpc_dec->tb_params.c; + + bbdev_ipc_op->num_code_blocks = rte_cpu_to_be_32(num_code_blocks); + + return 0; +} + +static int +enqueue_single_op(struct bbdev_la12xx_q_priv *q_priv, void *bbdev_op) +{ + struct bbdev_la12xx_private *priv = q_priv->bbdev_priv; + ipc_userspace_t *ipc_priv = priv->ipc_priv; + ipc_instance_t *ipc_instance = ipc_priv->instance; + struct bbdev_ipc_dequeue_op *bbdev_ipc_op; + struct rte_bbdev_op_ldpc_enc *ldpc_enc; + struct rte_bbdev_op_ldpc_dec *ldpc_dec; + uint32_t q_id = q_priv->q_id; + uint32_t ci, ci_flag, pi, pi_flag; + ipc_ch_t *ch = &(ipc_instance->ch_list[q_id]); + ipc_br_md_t *md = &(ch->md); + size_t virt; + char *huge_start_addr = + (char *)q_priv->bbdev_priv->ipc_priv->hugepg_start.host_vaddr; + struct rte_bbdev_op_data *in_op_data, *out_op_data; + char *data_ptr; + uint32_t l1_pcie_addr; + int ret; + + ci = IPC_GET_CI_INDEX(q_priv->host_ci); + ci_flag = IPC_GET_CI_FLAG(q_priv->host_ci); + + pi = IPC_GET_PI_INDEX(q_priv->host_pi); + pi_flag = IPC_GET_PI_FLAG(q_priv->host_pi); + + rte_bbdev_dp_log(DEBUG, "before bd_ring_full: pi: %u, ci: %u," + "pi_flag: %u, ci_flag: %u, ring size: %u", + pi, ci, pi_flag, ci_flag, q_priv->queue_size); + + if (is_bd_ring_full(ci, ci_flag, pi, pi_flag)) { + rte_bbdev_dp_log(DEBUG, "bd ring full for queue id: %d", q_id); + return IPC_CH_FULL; + } + + virt = MODEM_P2V(q_priv->host_params->bd_m_modem_ptr[pi]); + bbdev_ipc_op = (struct bbdev_ipc_dequeue_op *)virt; + q_priv->bbdev_op[pi] = bbdev_op; + + switch (q_priv->op_type) { + case RTE_BBDEV_OP_LDPC_ENC: + ldpc_enc = &(((struct rte_bbdev_enc_op *)bbdev_op)->ldpc_enc); + in_op_data = &ldpc_enc->input; + out_op_data = &ldpc_enc->output; + + ret = prepare_ldpc_enc_op(bbdev_op, q_priv, + in_op_data, out_op_data); + if (ret) { + rte_bbdev_log(ERR, "process_ldpc_enc_op fail, ret: %d", + ret); + return ret; + } + break; + + case RTE_BBDEV_OP_LDPC_DEC: + ldpc_dec = &(((struct rte_bbdev_dec_op *)bbdev_op)->ldpc_dec); + in_op_data = &ldpc_dec->input; + + out_op_data = &ldpc_dec->hard_output; + + ret = prepare_ldpc_dec_op(bbdev_op, bbdev_ipc_op, + q_priv, out_op_data); + if (ret) { + rte_bbdev_log(ERR, "process_ldpc_dec_op fail, ret: %d", + ret); + return ret; + } + break; + + default: + rte_bbdev_log(ERR, "unsupported bbdev_ipc op type"); + return -1; + } + + if (in_op_data->data) { + data_ptr = rte_pktmbuf_mtod(in_op_data->data, char *); + l1_pcie_addr = (uint32_t)GUL_USER_HUGE_PAGE_ADDR + + data_ptr - huge_start_addr; + bbdev_ipc_op->in_addr = l1_pcie_addr; + bbdev_ipc_op->in_len = in_op_data->length; + } + + if (out_op_data->data) { + data_ptr = rte_pktmbuf_mtod(out_op_data->data, char *); + l1_pcie_addr = (uint32_t)GUL_USER_HUGE_PAGE_ADDR + + data_ptr - huge_start_addr; + bbdev_ipc_op->out_addr = rte_cpu_to_be_32(l1_pcie_addr); + bbdev_ipc_op->out_len = rte_cpu_to_be_32(out_op_data->length); + } + + /* Move Producer Index forward */ + pi++; + /* Flip the PI flag, if wrapping */ + if (unlikely(q_priv->queue_size == pi)) { + pi = 0; + pi_flag = pi_flag ? 
0 : 1; + } + + if (pi_flag) + IPC_SET_PI_FLAG(pi); + else + IPC_RESET_PI_FLAG(pi); + q_priv->host_pi = pi; + + /* Wait for Data Copy & pi_flag update to complete before updating pi */ + rte_mb(); + /* now update pi */ + md->pi = rte_cpu_to_be_32(pi); + + rte_bbdev_dp_log(DEBUG, "enter: pi: %u, ci: %u," + "pi_flag: %u, ci_flag: %u, ring size: %u", + pi, ci, pi_flag, ci_flag, q_priv->queue_size); + + return 0; +} + +/* Enqueue decode burst */ +static uint16_t +enqueue_dec_ops(struct rte_bbdev_queue_data *q_data, + struct rte_bbdev_dec_op **ops, uint16_t nb_ops) +{ + struct bbdev_la12xx_q_priv *q_priv = q_data->queue_private; + int nb_enqueued, ret; + + for (nb_enqueued = 0; nb_enqueued < nb_ops; nb_enqueued++) { + ret = enqueue_single_op(q_priv, ops[nb_enqueued]); + if (ret) + break; + } + + q_data->queue_stats.enqueue_err_count += nb_ops - nb_enqueued; + q_data->queue_stats.enqueued_count += nb_enqueued; + + return nb_enqueued; +} + +/* Enqueue encode burst */ +static uint16_t +enqueue_enc_ops(struct rte_bbdev_queue_data *q_data, + struct rte_bbdev_enc_op **ops, uint16_t nb_ops) +{ + struct bbdev_la12xx_q_priv *q_priv = q_data->queue_private; + int nb_enqueued, ret; + + for (nb_enqueued = 0; nb_enqueued < nb_ops; nb_enqueued++) { + ret = enqueue_single_op(q_priv, ops[nb_enqueued]); + if (ret) + break; + } + + q_data->queue_stats.enqueue_err_count += nb_ops - nb_enqueued; + q_data->queue_stats.enqueued_count += nb_enqueued; + + return nb_enqueued; +} + +/* Dequeue encode burst */ +static void * +dequeue_single_op(struct bbdev_la12xx_q_priv *q_priv, void *dst) +{ + void *op; + uint32_t ci, ci_flag; + uint32_t temp_ci; + + temp_ci = q_priv->host_params->ci; + if (temp_ci == q_priv->host_ci) + return NULL; + + ci = IPC_GET_CI_INDEX(q_priv->host_ci); + ci_flag = IPC_GET_CI_FLAG(q_priv->host_ci); + + rte_bbdev_dp_log(DEBUG, + "ci: %u, ci_flag: %u, ring size: %u", + ci, ci_flag, q_priv->queue_size); + + op = q_priv->bbdev_op[ci]; + + rte_memcpy(dst, q_priv->msg_ch_vaddr[ci], + sizeof(struct bbdev_ipc_enqueue_op)); + + /* Move Consumer Index forward */ + ci++; + /* Flip the CI flag, if wrapping */ + if (q_priv->queue_size == ci) { + ci = 0; + ci_flag = ci_flag ? 
0 : 1;
+	}
+	if (ci_flag)
+		IPC_SET_CI_FLAG(ci);
+	else
+		IPC_RESET_CI_FLAG(ci);
+
+	q_priv->host_ci = ci;
+
+	rte_bbdev_dp_log(DEBUG,
+			"exit: ci: %u, ci_flag: %u, ring size: %u",
+			ci, ci_flag, q_priv->queue_size);
+
+	return op;
+}
+
+/* Dequeue decode burst */
+static uint16_t
+dequeue_dec_ops(struct rte_bbdev_queue_data *q_data,
+		struct rte_bbdev_dec_op **ops, uint16_t nb_ops)
+{
+	struct bbdev_la12xx_q_priv *q_priv = q_data->queue_private;
+	struct bbdev_ipc_enqueue_op bbdev_ipc_op;
+	int nb_dequeued;
+
+	for (nb_dequeued = 0; nb_dequeued < nb_ops; nb_dequeued++) {
+		ops[nb_dequeued] = dequeue_single_op(q_priv, &bbdev_ipc_op);
+		if (!ops[nb_dequeued])
+			break;
+		ops[nb_dequeued]->status = bbdev_ipc_op.status;
+	}
+	q_data->queue_stats.dequeued_count += nb_dequeued;
+
+	return nb_dequeued;
+}
+
+/* Dequeue encode burst */
+static uint16_t
+dequeue_enc_ops(struct rte_bbdev_queue_data *q_data,
+		struct rte_bbdev_enc_op **ops, uint16_t nb_ops)
+{
+	struct bbdev_la12xx_q_priv *q_priv = q_data->queue_private;
+	struct bbdev_ipc_enqueue_op bbdev_ipc_op;
+	int nb_dequeued;
+
+	for (nb_dequeued = 0; nb_dequeued < nb_ops; nb_dequeued++) {
+		ops[nb_dequeued] = dequeue_single_op(q_priv, &bbdev_ipc_op);
+		if (!ops[nb_dequeued])
+			break;
+		ops[nb_dequeued]->status = bbdev_ipc_op.status;
+	}
+	q_data->queue_stats.dequeued_count += nb_dequeued;
+
+	return nb_dequeued;
+}
+
 static struct hugepage_info *
 get_hugepage_info(void)
 {
@@ -715,6 +1031,10 @@ la12xx_bbdev_create(struct rte_vdev_device *vdev,
 	bbdev->dequeue_dec_ops = NULL;
 	bbdev->enqueue_enc_ops = NULL;
 	bbdev->enqueue_dec_ops = NULL;
+	bbdev->dequeue_ldpc_enc_ops = dequeue_enc_ops;
+	bbdev->dequeue_ldpc_dec_ops = dequeue_dec_ops;
+	bbdev->enqueue_ldpc_enc_ops = enqueue_enc_ops;
+	bbdev->enqueue_ldpc_dec_ops = enqueue_dec_ops;
 
 	return 0;
 }
diff --git a/drivers/baseband/la12xx/bbdev_la12xx_ipc.h b/drivers/baseband/la12xx/bbdev_la12xx_ipc.h
index 5f613fb087..b6a7f677d0 100644
--- a/drivers/baseband/la12xx/bbdev_la12xx_ipc.h
+++ b/drivers/baseband/la12xx/bbdev_la12xx_ipc.h
@@ -73,6 +73,25 @@ typedef struct {
 	_IOWR(GUL_IPC_MAGIC, 5, struct ipc_msg *)
 #define IOCTL_GUL_IPC_CHANNEL_RAISE_INTERRUPT _IOW(GUL_IPC_MAGIC, 6, int *)
 
+#define GUL_USER_HUGE_PAGE_OFFSET (0)
+#define GUL_PCI1_ADDR_BASE (0x00000000ULL)
+
+#define GUL_USER_HUGE_PAGE_ADDR (GUL_PCI1_ADDR_BASE + GUL_USER_HUGE_PAGE_OFFSET)
+
+/* IPC PI/CI index & flag manipulation helpers */
+#define IPC_PI_CI_FLAG_MASK	0x80000000 /* (1<<31) */
+#define IPC_PI_CI_INDEX_MASK	0x7FFFFFFF /* ~(1<<31) */
+
+#define IPC_SET_PI_FLAG(x)	(x |= IPC_PI_CI_FLAG_MASK)
+#define IPC_RESET_PI_FLAG(x)	(x &= IPC_PI_CI_INDEX_MASK)
+#define IPC_GET_PI_FLAG(x)	(x >> 31)
+#define IPC_GET_PI_INDEX(x)	(x & IPC_PI_CI_INDEX_MASK)
+
+#define IPC_SET_CI_FLAG(x)	(x |= IPC_PI_CI_FLAG_MASK)
+#define IPC_RESET_CI_FLAG(x)	(x &= IPC_PI_CI_INDEX_MASK)
+#define IPC_GET_CI_FLAG(x)	(x >> 31)
+#define IPC_GET_CI_INDEX(x)	(x & IPC_PI_CI_INDEX_MASK)
+
 /** buffer ring common metadata */
 typedef struct ipc_bd_ring_md {
 	volatile uint32_t pi;	/**< Producer index and flag (MSB)
@@ -180,6 +199,24 @@ struct bbdev_ipc_enqueue_op {
 	uint32_t rsvd;
 };
 
+/** Structure specifying dequeue operation (dequeue at LA1224) */
+struct bbdev_ipc_dequeue_op {
+	/** Input buffer memory address */
+	uint32_t in_addr;
+	/** Input buffer memory length */
+	uint32_t in_len;
+	/** Output buffer memory address */
+	uint32_t out_addr;
+	/** Output buffer memory length */
+	uint32_t out_len;
+	/** Number of code blocks.
Only set when HARQ is used */ + uint32_t num_code_blocks; + /** Dequeue Operation flags */ + uint32_t op_flags; + /** Shared metadata between L1 and L2 */ + uint32_t shared_metadata; +}; + /* This shared memory would be on the host side which have copy of some * of the parameters which are also part of Shared BD ring. Read access * of these parameters from the host side would not be over PCI. From patchwork Thu Oct 7 09:33:14 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nipun Gupta X-Patchwork-Id: 100686 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 0BE07A0C47; Thu, 7 Oct 2021 11:33:59 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 5C47A411ED; Thu, 7 Oct 2021 11:33:29 +0200 (CEST) Received: from inva020.nxp.com (inva020.nxp.com [92.121.34.13]) by mails.dpdk.org (Postfix) with ESMTP id DB75141196 for ; Thu, 7 Oct 2021 11:33:20 +0200 (CEST) Received: from inva020.nxp.com (localhost [127.0.0.1]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id BA4601A1D59; Thu, 7 Oct 2021 11:33:20 +0200 (CEST) Received: from aprdc01srsp001v.ap-rdc01.nxp.com (aprdc01srsp001v.ap-rdc01.nxp.com [165.114.16.16]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id 8399E1A1D65; Thu, 7 Oct 2021 11:33:20 +0200 (CEST) Received: from lsv03274.swis.in-blr01.nxp.com (lsv03274.swis.in-blr01.nxp.com [92.120.147.114]) by aprdc01srsp001v.ap-rdc01.nxp.com (Postfix) with ESMTP id E182C183AC94; Thu, 7 Oct 2021 17:33:19 +0800 (+08) From: nipun.gupta@nxp.com To: dev@dpdk.org, gakhil@marvell.com, nicolas.chautru@intel.com Cc: david.marchand@redhat.com, hemant.agrawal@nxp.com Date: Thu, 7 Oct 2021 15:03:14 +0530 Message-Id: <20211007093315.17384-8-nipun.gupta@nxp.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211007093315.17384-1-nipun.gupta@nxp.com> References: <20210318063421.14895-1-hemant.agrawal@nxp.com> <20211007093315.17384-1-nipun.gupta@nxp.com> X-Virus-Scanned: ClamAV using ClamSMTP Subject: [dpdk-dev] [PATCH v9 7/8] app/bbdev: enable la12xx for bbdev X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Hemant Agrawal this patch adds la12xx driver in test bbdev Signed-off-by: Hemant Agrawal --- app/test-bbdev/meson.build | 3 +++ 1 file changed, 3 insertions(+) diff --git a/app/test-bbdev/meson.build b/app/test-bbdev/meson.build index edb9deef84..a726a5b3fa 100644 --- a/app/test-bbdev/meson.build +++ b/app/test-bbdev/meson.build @@ -23,3 +23,6 @@ endif if dpdk_conf.has('RTE_BASEBAND_ACC100') deps += ['baseband_acc100'] endif +if dpdk_conf.has('RTE_LIBRTE_PMD_BBDEV_LA12XX') + deps += ['baseband_la12xx'] +endif From patchwork Thu Oct 7 09:33:15 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nipun Gupta X-Patchwork-Id: 100687 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 0C46CA0C47; Thu, 7 Oct 2021 11:34:04 +0200 (CEST) Received: from [217.70.189.124] 
(localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 76EF3411F6; Thu, 7 Oct 2021 11:33:30 +0200 (CEST) Received: from inva020.nxp.com (inva020.nxp.com [92.121.34.13]) by mails.dpdk.org (Postfix) with ESMTP id 88893411A6 for ; Thu, 7 Oct 2021 11:33:21 +0200 (CEST) Received: from inva020.nxp.com (localhost [127.0.0.1]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id 69D811A0338; Thu, 7 Oct 2021 11:33:21 +0200 (CEST) Received: from aprdc01srsp001v.ap-rdc01.nxp.com (aprdc01srsp001v.ap-rdc01.nxp.com [165.114.16.16]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id F2F6A1A1D65; Thu, 7 Oct 2021 11:33:20 +0200 (CEST) Received: from lsv03274.swis.in-blr01.nxp.com (lsv03274.swis.in-blr01.nxp.com [92.120.147.114]) by aprdc01srsp001v.ap-rdc01.nxp.com (Postfix) with ESMTP id 5B710183AD28; Thu, 7 Oct 2021 17:33:20 +0800 (+08) From: nipun.gupta@nxp.com To: dev@dpdk.org, gakhil@marvell.com, nicolas.chautru@intel.com Cc: david.marchand@redhat.com, hemant.agrawal@nxp.com, Nipun Gupta Date: Thu, 7 Oct 2021 15:03:15 +0530 Message-Id: <20211007093315.17384-9-nipun.gupta@nxp.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211007093315.17384-1-nipun.gupta@nxp.com> References: <20210318063421.14895-1-hemant.agrawal@nxp.com> <20211007093315.17384-1-nipun.gupta@nxp.com> X-Virus-Scanned: ClamAV using ClamSMTP Subject: [dpdk-dev] [PATCH v9 8/8] app/bbdev: handle endianness of test data X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Nipun Gupta With data input, output and harq also supported in big endian format, this patch updates the testbbdev application to handle the endianness conversion as directed by the the driver being used. The test vectors assumes the data in the little endian order, and thus if the driver supports big endian data processing, conversion from little endian to big is handled by the testbbdev application. Signed-off-by: Nipun Gupta --- app/test-bbdev/test_bbdev_perf.c | 43 ++++++++++++++++++++++++++++++++ doc/guides/tools/testbbdev.rst | 3 +++ 2 files changed, 46 insertions(+) diff --git a/app/test-bbdev/test_bbdev_perf.c b/app/test-bbdev/test_bbdev_perf.c index 469597b8b3..05f12463a8 100644 --- a/app/test-bbdev/test_bbdev_perf.c +++ b/app/test-bbdev/test_bbdev_perf.c @@ -227,6 +227,45 @@ clear_soft_out_cap(uint32_t *op_flags) *op_flags &= ~RTE_BBDEV_TURBO_NEG_LLR_1_BIT_SOFT_OUT; } +/* This API is to convert all the test vector op data entries + * to big endian format. It is used when the device supports + * the input in the big endian format. 
+ */ +static inline void +convert_op_data_to_be(void) +{ + struct op_data_entries *op; + enum op_data_type type; + uint8_t nb_segs, *rem_data, temp; + uint32_t *data, len; + int complete, rem, i, j; + + for (type = DATA_INPUT; type < DATA_NUM_TYPES; ++type) { + nb_segs = test_vector.entries[type].nb_segments; + op = &test_vector.entries[type]; + + /* Invert byte endianness for all the segments */ + for (i = 0; i < nb_segs; ++i) { + len = op->segments[i].length; + data = op->segments[i].addr; + + /* Swap complete u32 bytes */ + complete = len / 4; + for (j = 0; j < complete; j++) + data[j] = rte_bswap32(data[j]); + + /* Swap any remaining bytes */ + rem = len % 4; + rem_data = (uint8_t *)&data[j]; + for (j = 0; j < rem/2; j++) { + temp = rem_data[j]; + rem_data[j] = rem_data[rem - j - 1]; + rem_data[rem - j - 1] = temp; + } + } + } +} + static int check_dev_cap(const struct rte_bbdev_info *dev_info) { @@ -234,6 +273,7 @@ check_dev_cap(const struct rte_bbdev_info *dev_info) unsigned int nb_inputs, nb_soft_outputs, nb_hard_outputs, nb_harq_inputs, nb_harq_outputs; const struct rte_bbdev_op_cap *op_cap = dev_info->drv.capabilities; + uint8_t dev_data_endianness = dev_info->drv.data_endianness; nb_inputs = test_vector.entries[DATA_INPUT].nb_segments; nb_soft_outputs = test_vector.entries[DATA_SOFT_OUTPUT].nb_segments; @@ -245,6 +285,9 @@ check_dev_cap(const struct rte_bbdev_info *dev_info) if (op_cap->type != test_vector.op_type) continue; + if (dev_data_endianness == RTE_BBDEV_BIG_ENDIAN) + convert_op_data_to_be(); + if (op_cap->type == RTE_BBDEV_OP_TURBO_DEC) { const struct rte_bbdev_op_cap_turbo_dec *cap = &op_cap->cap.turbo_dec; diff --git a/doc/guides/tools/testbbdev.rst b/doc/guides/tools/testbbdev.rst index d397d991ff..83a0312062 100644 --- a/doc/guides/tools/testbbdev.rst +++ b/doc/guides/tools/testbbdev.rst @@ -332,6 +332,9 @@ Variable op_type has to be defined as a first variable in file. It specifies what type of operations will be executed. For 4G decoder op_type has to be set to ``RTE_BBDEV_OP_TURBO_DEC`` and for 4G encoder to ``RTE_BBDEV_OP_TURBO_ENC``. +Bbdev-test adjusts the byte endianness based on the PMD capability (data_endianness) +and all the test vectors input/output data are assumed to be LE by default + Full details of the meaning and valid values for the below fields are documented in *rte_bbdev_op.h*
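As a companion to the endianness note above, here is a minimal sketch of how an application can honour the endianness advertised by a PMD before enqueueing little-endian data. It is illustrative only and not part of the patch set: the helper name is hypothetical, and trailing bytes that do not fill a 32-bit word are ignored here, whereas bbdev-test reverses them as well.

#include <rte_bbdev.h>
#include <rte_byteorder.h>

/* Byte-swap a buffer of 32-bit words in place when the selected bbdev
 * device reports big-endian data processing; little-endian devices need
 * no conversion. */
static void
swap_data_for_device(uint16_t dev_id, uint32_t *data, uint32_t nb_u32)
{
	struct rte_bbdev_info info;
	uint32_t i;

	if (rte_bbdev_info_get(dev_id, &info) != 0)
		return;
	if (info.drv.data_endianness != RTE_BBDEV_BIG_ENDIAN)
		return;
	for (i = 0; i < nb_u32; i++)
		data[i] = rte_bswap32(data[i]);
}

This mirrors what convert_op_data_to_be() does for the stored test vectors once check_dev_cap() sees RTE_BBDEV_BIG_ENDIAN in the driver info.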