From patchwork Mon Apr 4 21:13:42 2022
X-Patchwork-Submitter: "Chautru, Nicolas"
X-Patchwork-Id: 109133
X-Patchwork-Delegate: gakhil@marvell.com
From: Nicolas Chautru
To: dev@dpdk.org, gakhil@marvell.com
Cc: trix@redhat.com, thomas@monjalon.net, ray.kinsella@intel.com,
 bruce.richardson@intel.com, hemant.agrawal@nxp.com, mingshan.zhang@intel.com,
 david.marchand@redhat.com, Nicolas Chautru
Subject: [PATCH v1 3/9] baseband/acc101: add info get function
Date: Mon, 4 Apr 2022 14:13:42 -0700
Message-Id: <1649106828-116338-4-git-send-email-nicolas.chautru@intel.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1649106828-116338-1-git-send-email-nicolas.chautru@intel.com>
References: <1649106828-116338-1-git-send-email-nicolas.chautru@intel.com>

Add the info get function so that the device can be queried; only null
capabilities are exposed at this stage. The new PMD is also linked into
bbdev-test.
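Not part of the patch itself, but for reviewers' context: a minimal sketch of
how an application could consume this new info_get callback through the public
bbdev API once the device is probed. EAL arguments, error handling and the
printed fields are illustrative only.

#include <stdio.h>
#include <rte_eal.h>
#include <rte_bbdev.h>

int
main(int argc, char **argv)
{
	uint16_t dev_id;

	/* Initialize EAL so that the bbdev PCI devices get probed */
	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* Walk all probed bbdev devices and dump what info_get reports */
	RTE_BBDEV_FOREACH(dev_id) {
		struct rte_bbdev_info info;

		if (rte_bbdev_info_get(dev_id, &info) != 0)
			continue;
		printf("%s: max queues %u, queue depth limit %u, hw accel %d\n",
				info.dev_name, info.drv.max_num_queues,
				info.drv.queue_size_lim,
				info.drv.hardware_accelerated);
	}
	return 0;
}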
Signed-off-by: Nicolas Chautru
---
 app/test-bbdev/meson.build               |   3 +
 drivers/baseband/acc101/rte_acc101_cfg.h |  96 +++++++++++++
 drivers/baseband/acc101/rte_acc101_pmd.c | 226 +++++++++++++++++++++++++++++++
 drivers/baseband/acc101/rte_acc101_pmd.h |   2 +
 4 files changed, 327 insertions(+)
 create mode 100644 drivers/baseband/acc101/rte_acc101_cfg.h

diff --git a/app/test-bbdev/meson.build b/app/test-bbdev/meson.build
index 76d4c26..9cbee5a 100644
--- a/app/test-bbdev/meson.build
+++ b/app/test-bbdev/meson.build
@@ -23,6 +23,9 @@ endif
 if dpdk_conf.has('RTE_BASEBAND_ACC100')
     deps += ['baseband_acc100']
 endif
+if dpdk_conf.has('RTE_BASEBAND_ACC101')
+    deps += ['baseband_acc101']
+endif
 if dpdk_conf.has('RTE_LIBRTE_PMD_BBDEV_LA12XX')
     deps += ['baseband_la12xx']
 endif
diff --git a/drivers/baseband/acc101/rte_acc101_cfg.h b/drivers/baseband/acc101/rte_acc101_cfg.h
new file mode 100644
index 0000000..4881cd6
--- /dev/null
+++ b/drivers/baseband/acc101/rte_acc101_cfg.h
@@ -0,0 +1,96 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _RTE_ACC101_CFG_H_
+#define _RTE_ACC101_CFG_H_
+
+/**
+ * @file rte_acc101_cfg.h
+ *
+ * Functions for configuring ACC101 HW, exposed directly to applications.
+ * Configuration related to encoding/decoding is done through the
+ * librte_bbdev library.
+ *
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ */
+
+#include <stdint.h>
+#include <stdbool.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+/**< Number of Virtual Functions ACC101 supports */
+#define RTE_ACC101_NUM_VFS 16
+
+/**
+ * Definition of Queue Topology for ACC101 Configuration
+ * Some level of detail is abstracted out to expose a clean interface
+ * given that comprehensive flexibility is not required.
+ */
+struct rte_acc101_queue_topology {
+	/** Number of QGroups in incremental order of priority */
+	uint16_t num_qgroups;
+	/**
+	 * All QGroups have the same number of AQs here.
+	 * Note: could be made a 16-array if more flexibility is really
+	 * required.
+	 */
+	uint16_t num_aqs_per_groups;
+	/**
+	 * Depth of the AQs is the same for all QGroups here. Log2 Enum: 2^N
+	 * Note: could be made a 16-array if more flexibility is really
+	 * required.
+	 */
+	uint16_t aq_depth_log2;
+	/**
+	 * Index of the first Queue Group Index - assuming contiguity.
+	 * Initialized as -1.
+	 */
+	int8_t first_qgroup_index;
+};
+
+/**
+ * Definition of Arbitration related parameters for ACC101 Configuration
+ */
+struct rte_acc101_arbitration {
+	/** Default Weight for VF Fairness Arbitration */
+	uint16_t round_robin_weight;
+	uint32_t gbr_threshold1; /**< Guaranteed Bitrate Threshold 1 */
+	uint32_t gbr_threshold2; /**< Guaranteed Bitrate Threshold 2 */
+};
+
+/**
+ * Structure to pass ACC101 configuration.
+ * Note: all VF Bundles will have the same configuration.
+ */
+struct rte_acc101_conf {
+	bool pf_mode_en; /**< 1 if PF is used for dataplane, 0 for VFs */
+	/** 1 if input '1' bit is represented by a positive LLR value, 0 if '1'
+	 * bit is represented by a negative value.
+	 */
+	bool input_pos_llr_1_bit;
+	/** 1 if output '1' bit is represented by a positive value, 0 if '1'
+	 * bit is represented by a negative value.
+	 */
+	bool output_pos_llr_1_bit;
+	uint16_t num_vf_bundles; /**< Number of VF bundles to setup */
+	/** Queue topology for each operation type */
+	struct rte_acc101_queue_topology q_ul_4g;
+	struct rte_acc101_queue_topology q_dl_4g;
+	struct rte_acc101_queue_topology q_ul_5g;
+	struct rte_acc101_queue_topology q_dl_5g;
+	/** Arbitration configuration for each operation type */
+	struct rte_acc101_arbitration arb_ul_4g[RTE_ACC101_NUM_VFS];
+	struct rte_acc101_arbitration arb_dl_4g[RTE_ACC101_NUM_VFS];
+	struct rte_acc101_arbitration arb_ul_5g[RTE_ACC101_NUM_VFS];
+	struct rte_acc101_arbitration arb_dl_5g[RTE_ACC101_NUM_VFS];
+};
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_ACC101_CFG_H_ */
diff --git a/drivers/baseband/acc101/rte_acc101_pmd.c b/drivers/baseband/acc101/rte_acc101_pmd.c
index dff3834..9518a9e 100644
--- a/drivers/baseband/acc101/rte_acc101_pmd.c
+++ b/drivers/baseband/acc101/rte_acc101_pmd.c
@@ -29,6 +29,187 @@
 RTE_LOG_REGISTER_DEFAULT(acc101_logtype, NOTICE);
 #endif
 
+/* Read a register of an ACC101 device */
+static inline uint32_t
+acc101_reg_read(struct acc101_device *d, uint32_t offset)
+{
+	void *reg_addr = RTE_PTR_ADD(d->mmio_base, offset);
+	uint32_t ret = *((volatile uint32_t *)(reg_addr));
+	return rte_le_to_cpu_32(ret);
+}
+
+/* Calculate the offset of the enqueue register */
+static inline uint32_t
+queue_offset(bool pf_device, uint8_t vf_id, uint8_t qgrp_id, uint16_t aq_id)
+{
+	if (pf_device)
+		return ((vf_id << 12) + (qgrp_id << 7) + (aq_id << 3) +
+				HWPfQmgrIngressAq);
+	else
+		return ((qgrp_id << 7) + (aq_id << 3) +
+				HWVfQmgrIngressAq);
+}
+
+enum {UL_4G = 0, UL_5G, DL_4G, DL_5G, NUM_ACC};
+
+/* Return the queue topology for a Queue Group Index */
+static inline void
+qtopFromAcc(struct rte_acc101_queue_topology **qtop, int acc_enum,
+		struct rte_acc101_conf *acc101_conf)
+{
+	struct rte_acc101_queue_topology *p_qtop;
+	p_qtop = NULL;
+	switch (acc_enum) {
+	case UL_4G:
+		p_qtop = &(acc101_conf->q_ul_4g);
+		break;
+	case UL_5G:
+		p_qtop = &(acc101_conf->q_ul_5g);
+		break;
+	case DL_4G:
+		p_qtop = &(acc101_conf->q_dl_4g);
+		break;
+	case DL_5G:
+		p_qtop = &(acc101_conf->q_dl_5g);
+		break;
+	default:
+		/* NOTREACHED */
+		rte_bbdev_log(ERR, "Unexpected error evaluating qtopFromAcc");
+		break;
+	}
+	*qtop = p_qtop;
+}
+
+static void
+initQTop(struct rte_acc101_conf *acc101_conf)
+{
+	acc101_conf->q_ul_4g.num_aqs_per_groups = 0;
+	acc101_conf->q_ul_4g.num_qgroups = 0;
+	acc101_conf->q_ul_4g.first_qgroup_index = -1;
+	acc101_conf->q_ul_5g.num_aqs_per_groups = 0;
+	acc101_conf->q_ul_5g.num_qgroups = 0;
+	acc101_conf->q_ul_5g.first_qgroup_index = -1;
+	acc101_conf->q_dl_4g.num_aqs_per_groups = 0;
+	acc101_conf->q_dl_4g.num_qgroups = 0;
+	acc101_conf->q_dl_4g.first_qgroup_index = -1;
+	acc101_conf->q_dl_5g.num_aqs_per_groups = 0;
+	acc101_conf->q_dl_5g.num_qgroups = 0;
+	acc101_conf->q_dl_5g.first_qgroup_index = -1;
+}
+
+static inline void
+updateQtop(uint8_t acc, uint8_t qg, struct rte_acc101_conf *acc101_conf,
+		struct acc101_device *d) {
+	uint32_t reg;
+	struct rte_acc101_queue_topology *q_top = NULL;
+	qtopFromAcc(&q_top, acc, acc101_conf);
+	if (unlikely(q_top == NULL))
+		return;
+	uint16_t aq;
+	q_top->num_qgroups++;
+	if (q_top->first_qgroup_index == -1) {
+		q_top->first_qgroup_index = qg;
+		/* Can be optimized to assume all are enabled by default */
+		reg = acc101_reg_read(d, queue_offset(d->pf_device,
+				0, qg, ACC101_NUM_AQS - 1));
+		if (reg & ACC101_QUEUE_ENABLE) {
+			q_top->num_aqs_per_groups = ACC101_NUM_AQS;
+			return;
+		}
+		q_top->num_aqs_per_groups = 0;
+		for (aq = 0; aq < ACC101_NUM_AQS; aq++) {
+			reg = acc101_reg_read(d, queue_offset(d->pf_device,
+					0, qg, aq));
+			if (reg & ACC101_QUEUE_ENABLE)
+				q_top->num_aqs_per_groups++;
+		}
+	}
+}
+
+/* Fetch configuration enabled for the PF/VF using MMIO Read (slow) */
+static inline void
+fetch_acc101_config(struct rte_bbdev *dev)
+{
+	struct acc101_device *d = dev->data->dev_private;
+	struct rte_acc101_conf *acc101_conf = &d->acc101_conf;
+	const struct acc101_registry_addr *reg_addr;
+	uint8_t acc, qg;
+	uint32_t reg, reg_aq, reg_len0, reg_len1;
+	uint32_t reg_mode;
+
+	/* No need to retrieve the configuration if it is already done */
+	if (d->configured)
+		return;
+
+	/* Choose correct registry addresses for the device type */
+	if (d->pf_device)
+		reg_addr = &pf_reg_addr;
+	else
+		reg_addr = &vf_reg_addr;
+
+	d->ddr_size = (1 + acc101_reg_read(d, reg_addr->ddr_range)) << 10;
+
+	/* Single VF Bundle per VF */
+	acc101_conf->num_vf_bundles = 1;
+	initQTop(acc101_conf);
+
+	struct rte_acc101_queue_topology *q_top = NULL;
+	int qman_func_id[ACC101_NUM_ACCS] = {ACC101_ACCMAP_0, ACC101_ACCMAP_1,
+			ACC101_ACCMAP_2, ACC101_ACCMAP_3, ACC101_ACCMAP_4};
+	reg = acc101_reg_read(d, reg_addr->qman_group_func);
+	for (qg = 0; qg < ACC101_NUM_QGRPS_PER_WORD; qg++) {
+		reg_aq = acc101_reg_read(d,
+				queue_offset(d->pf_device, 0, qg, 0));
+		if (reg_aq & ACC101_QUEUE_ENABLE) {
+			uint32_t idx = (reg >> (qg * 4)) & 0x7;
+			if (idx < ACC101_NUM_ACCS) {
+				acc = qman_func_id[idx];
+				updateQtop(acc, qg, acc101_conf, d);
+			}
+		}
+	}
+
+	/* Check the depth of the AQs */
+	reg_len0 = acc101_reg_read(d, reg_addr->depth_log0_offset);
+	reg_len1 = acc101_reg_read(d, reg_addr->depth_log1_offset);
+	for (acc = 0; acc < NUM_ACC; acc++) {
+		qtopFromAcc(&q_top, acc, acc101_conf);
+		if (q_top->first_qgroup_index < ACC101_NUM_QGRPS_PER_WORD)
+			q_top->aq_depth_log2 = (reg_len0 >>
+					(q_top->first_qgroup_index * 4))
+					& 0xF;
+		else
+			q_top->aq_depth_log2 = (reg_len1 >>
+					((q_top->first_qgroup_index -
+					ACC101_NUM_QGRPS_PER_WORD) * 4))
+					& 0xF;
+	}
+
+	/* Read PF mode */
+	if (d->pf_device) {
+		reg_mode = acc101_reg_read(d, HWPfHiPfMode);
+		acc101_conf->pf_mode_en = (reg_mode == ACC101_PF_VAL) ? 1 : 0;
+	}
+
+	rte_bbdev_log_debug(
+			"%s Config LLR SIGN IN/OUT %s %s QG %u %u %u %u AQ %u %u %u %u Len %u %u %u %u\n",
+			(d->pf_device) ? "PF" : "VF",
+			(acc101_conf->input_pos_llr_1_bit) ? "POS" : "NEG",
+			(acc101_conf->output_pos_llr_1_bit) ? "POS" : "NEG",
"POS" : "NEG", + acc101_conf->q_ul_4g.num_qgroups, + acc101_conf->q_dl_4g.num_qgroups, + acc101_conf->q_ul_5g.num_qgroups, + acc101_conf->q_dl_5g.num_qgroups, + acc101_conf->q_ul_4g.num_aqs_per_groups, + acc101_conf->q_dl_4g.num_aqs_per_groups, + acc101_conf->q_ul_5g.num_aqs_per_groups, + acc101_conf->q_dl_5g.num_aqs_per_groups, + acc101_conf->q_ul_4g.aq_depth_log2, + acc101_conf->q_dl_4g.aq_depth_log2, + acc101_conf->q_ul_5g.aq_depth_log2, + acc101_conf->q_dl_5g.aq_depth_log2); +} + /* Free memory used for software rings */ static int acc101_dev_close(struct rte_bbdev *dev) @@ -37,8 +218,53 @@ return 0; } +/* Get ACC101 device info */ +static void +acc101_dev_info_get(struct rte_bbdev *dev, + struct rte_bbdev_driver_info *dev_info) +{ + struct acc101_device *d = dev->data->dev_private; + static const struct rte_bbdev_op_cap bbdev_capabilities[] = { + RTE_BBDEV_END_OF_CAPABILITIES_LIST() + }; + + static struct rte_bbdev_queue_conf default_queue_conf; + default_queue_conf.socket = dev->data->socket_id; + default_queue_conf.queue_size = ACC101_MAX_QUEUE_DEPTH; + + dev_info->driver_name = dev->device->driver->name; + + /* Read and save the populated config from ACC101 registers */ + fetch_acc101_config(dev); + /* This isn't ideal because it reports the maximum number of queues but + * does not provide info on how many can be uplink/downlink or different + * priorities + */ + dev_info->max_num_queues = + d->acc101_conf.q_dl_5g.num_aqs_per_groups * + d->acc101_conf.q_dl_5g.num_qgroups + + d->acc101_conf.q_ul_5g.num_aqs_per_groups * + d->acc101_conf.q_ul_5g.num_qgroups + + d->acc101_conf.q_dl_4g.num_aqs_per_groups * + d->acc101_conf.q_dl_4g.num_qgroups + + d->acc101_conf.q_ul_4g.num_aqs_per_groups * + d->acc101_conf.q_ul_4g.num_qgroups; + dev_info->queue_size_lim = ACC101_MAX_QUEUE_DEPTH; + dev_info->hardware_accelerated = true; + dev_info->max_dl_queue_priority = + d->acc101_conf.q_dl_4g.num_qgroups - 1; + dev_info->max_ul_queue_priority = + d->acc101_conf.q_ul_4g.num_qgroups - 1; + dev_info->default_queue_conf = default_queue_conf; + dev_info->cpu_flag_reqs = NULL; + dev_info->min_alignment = 64; + dev_info->capabilities = bbdev_capabilities; + dev_info->harq_buffer_size = 0; +} + static const struct rte_bbdev_ops acc101_bbdev_ops = { .close = acc101_dev_close, + .info_get = acc101_dev_info_get, }; /* ACC101 PCI PF address map */ diff --git a/drivers/baseband/acc101/rte_acc101_pmd.h b/drivers/baseband/acc101/rte_acc101_pmd.h index 66641f3..9c0e711 100644 --- a/drivers/baseband/acc101/rte_acc101_pmd.h +++ b/drivers/baseband/acc101/rte_acc101_pmd.h @@ -7,6 +7,7 @@ #include "acc101_pf_enum.h" #include "acc101_vf_enum.h" +#include "rte_acc101_cfg.h" /* Helper macro for logging */ #define rte_bbdev_log(level, fmt, ...) \ @@ -497,6 +498,7 @@ struct acc101_device { * on how many queues are enabled with configure() */ uint32_t sw_ring_max_depth; + struct rte_acc101_conf acc101_conf; /* ACC101 Initial configuration */ /* Bitmap capturing which Queues have already been assigned */ uint16_t q_assigned_bit_map[ACC101_NUM_QGRPS]; bool pf_device; /**< True if this is a PF ACC101 device */