From patchwork Wed Sep  1 14:47:26 2021
From: Arek Kusztal
To: dev@dpdk.org
Cc: gakhil@marvell.com, roy.fan.zhang@intel.com, Arek Kusztal
Date: Wed, 1 Sep 2021 15:47:26 +0100
Message-Id: <20210901144729.26784-2-arkadiuszx.kusztal@intel.com>
In-Reply-To: <20210901144729.26784-1-arkadiuszx.kusztal@intel.com>
References: <20210901144729.26784-1-arkadiuszx.kusztal@intel.com>
Subject: [dpdk-dev] [PATCH 1/4] common/qat: isolate implementations of qat generations

This commit isolates
the implementations of the common part of the QAT PMD on a per-generation basis. When changing or extending the code of one particular generation, the code of the other generations is left intact. Generation-specific code in drivers/common is invisible to the code of the other generations.

Signed-off-by: Arek Kusztal
---
 drivers/common/qat/dev/qat_dev_gen1.c | 245 ++++++++++
 drivers/common/qat/dev/qat_dev_gen1.h |  52 +++
 drivers/common/qat/dev/qat_dev_gen2.c |  38 ++
 drivers/common/qat/dev/qat_dev_gen3.c |  76 +++
 drivers/common/qat/dev/qat_dev_gen4.c | 258 +++++++++++
 drivers/common/qat/meson.build        |   4 +
 drivers/common/qat/qat_common.h       |   2 +
 drivers/common/qat/qat_device.c       | 117 ++---
 drivers/common/qat/qat_device.h       |  24 +-
 drivers/common/qat/qat_qp.c           | 641 +++++++++----------------
 drivers/common/qat/qat_qp.h           |  45 +-
 11 files changed, 983 insertions(+), 519 deletions(-)
 create mode 100644 drivers/common/qat/dev/qat_dev_gen1.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen1.h
 create mode 100644 drivers/common/qat/dev/qat_dev_gen2.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen3.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen4.c

diff --git a/drivers/common/qat/dev/qat_dev_gen1.c b/drivers/common/qat/dev/qat_dev_gen1.c
new file mode 100644
index 0000000000..4d60c2a051
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen1.c
@@ -0,0 +1,245 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_device.h"
+#include "qat_qp.h"
+#include "adf_transport_access_macros.h"
+#include "qat_dev_gen1.h"
+
+#include
+
+#define ADF_ARB_REG_SLOT 0x1000
+
+#define WRITE_CSR_ARB_RINGSRVARBEN(csr_addr, index, value) \
+	ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \
+	(ADF_ARB_REG_SLOT * index), value)
+
+__extension__
+const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
+					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
+	/* queue pairs which provide an asymmetric crypto service */
+	[QAT_SERVICE_ASYMMETRIC] = {
+		{
+			.service_type = QAT_SERVICE_ASYMMETRIC,
+			.hw_bundle_num = 0,
+
.tx_ring_num = 0, + .rx_ring_num = 8, + .tx_msg_size = 64, + .rx_msg_size = 32, + + }, { + .service_type = QAT_SERVICE_ASYMMETRIC, + .hw_bundle_num = 0, + .tx_ring_num = 1, + .rx_ring_num = 9, + .tx_msg_size = 64, + .rx_msg_size = 32, + } + }, + /* queue pairs which provide a symmetric crypto service */ + [QAT_SERVICE_SYMMETRIC] = { + { + .service_type = QAT_SERVICE_SYMMETRIC, + .hw_bundle_num = 0, + .tx_ring_num = 2, + .rx_ring_num = 10, + .tx_msg_size = 128, + .rx_msg_size = 32, + }, + { + .service_type = QAT_SERVICE_SYMMETRIC, + .hw_bundle_num = 0, + .tx_ring_num = 3, + .rx_ring_num = 11, + .tx_msg_size = 128, + .rx_msg_size = 32, + } + }, + /* queue pairs which provide a compression service */ + [QAT_SERVICE_COMPRESSION] = { + { + .service_type = QAT_SERVICE_COMPRESSION, + .hw_bundle_num = 0, + .tx_ring_num = 6, + .rx_ring_num = 14, + .tx_msg_size = 128, + .rx_msg_size = 32, + }, { + .service_type = QAT_SERVICE_COMPRESSION, + .hw_bundle_num = 0, + .tx_ring_num = 7, + .rx_ring_num = 15, + .tx_msg_size = 128, + .rx_msg_size = 32, + } + } +}; + +int +qat_qp_rings_per_service_gen1(struct qat_pci_device *qat_dev, + enum qat_service_type service) +{ + int i = 0, count = 0, max_ops_per_srv = 0; + const struct qat_qp_hw_data *sym_hw_qps = + qat_gen_config[qat_dev->qat_dev_gen] + .qp_hw_data[service]; + + max_ops_per_srv = ADF_MAX_QPS_ON_ANY_SERVICE; + for (i = 0, count = 0; i < max_ops_per_srv; i++) + if (sym_hw_qps[i].service_type == service) + count++; + return count; +} + +void +qat_qp_csr_build_ring_base_gen1(void *io_addr, + struct qat_queue *queue) +{ + uint64_t queue_base; + + queue_base = BUILD_RING_BASE_ADDR(queue->base_phys_addr, + queue->queue_size); + WRITE_CSR_RING_BASE(io_addr, queue->hw_bundle_number, + queue->hw_queue_number, queue_base); +} + +void +qat_qp_adf_arb_enable_gen1(const struct qat_queue *txq, + void *base_addr, rte_spinlock_t *lock) +{ + uint32_t arb_csr_offset = 0, value; + + rte_spinlock_lock(lock); + arb_csr_offset = 
ADF_ARB_RINGSRVARBEN_OFFSET + + (ADF_ARB_REG_SLOT * + txq->hw_bundle_number); + value = ADF_CSR_RD(base_addr, + arb_csr_offset); + value |= (0x01 << txq->hw_queue_number); + ADF_CSR_WR(base_addr, arb_csr_offset, value); + rte_spinlock_unlock(lock); +} + +void +qat_qp_adf_arb_disable_gen1(const struct qat_queue *txq, + void *base_addr, rte_spinlock_t *lock) +{ + uint32_t arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET + + (ADF_ARB_REG_SLOT * + txq->hw_bundle_number); + uint32_t value; + + rte_spinlock_lock(lock); + value = ADF_CSR_RD(base_addr, arb_csr_offset); + value &= ~(0x01 << txq->hw_queue_number); + ADF_CSR_WR(base_addr, arb_csr_offset, value); + rte_spinlock_unlock(lock); +} + +void +qat_qp_adf_configure_queues_gen1(struct qat_qp *qp) +{ + uint32_t q_tx_config, q_resp_config; + struct qat_queue *q_tx = &qp->tx_q, *q_rx = &qp->rx_q; + + q_tx_config = BUILD_RING_CONFIG(q_tx->queue_size); + q_resp_config = BUILD_RESP_RING_CONFIG(q_rx->queue_size, + ADF_RING_NEAR_WATERMARK_512, + ADF_RING_NEAR_WATERMARK_0); + WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr, + q_tx->hw_bundle_number, q_tx->hw_queue_number, + q_tx_config); + WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr, + q_rx->hw_bundle_number, q_rx->hw_queue_number, + q_resp_config); +} + +void +qat_qp_csr_write_tail_gen1(struct qat_qp *qp, struct qat_queue *q) +{ + WRITE_CSR_RING_TAIL(qp->mmap_bar_addr, q->hw_bundle_number, + q->hw_queue_number, q->tail); +} + +void +qat_qp_csr_write_head_gen1(struct qat_qp *qp, struct qat_queue *q, + uint32_t new_head) +{ + WRITE_CSR_RING_HEAD(qp->mmap_bar_addr, q->hw_bundle_number, + q->hw_queue_number, new_head); +} + +void +qat_qp_csr_setup_gen1(struct qat_pci_device *qat_dev, + void *io_addr, struct qat_qp *qp) +{ + qat_qp_csr_build_ring_base_gen1(io_addr, &qp->tx_q); + qat_qp_csr_build_ring_base_gen1(io_addr, &qp->rx_q); + qat_qp_adf_configure_queues_gen1(qp); + qat_qp_adf_arb_enable_gen1(&qp->tx_q, qp->mmap_bar_addr, + &qat_dev->arb_csr_lock); +} + +static struct qat_qp_hw_spec_funcs 
qat_qp_hw_spec_gen1 = {
+	.qat_qp_rings_per_service = qat_qp_rings_per_service_gen1,
+	.qat_qp_build_ring_base = qat_qp_csr_build_ring_base_gen1,
+	.qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen1,
+	.qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen1,
+	.qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen1,
+	.qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen1,
+	.qat_qp_csr_write_head = qat_qp_csr_write_head_gen1,
+	.qat_qp_csr_setup = qat_qp_csr_setup_gen1,
+};
+
+int qat_reset_ring_pairs_gen1(
+		struct qat_pci_device *qat_pci_dev __rte_unused)
+{
+	/*
+	 * Ring-pair reset is not supported on base generations, continue
+	 */
+	return 0;
+}
+
+const struct
+rte_mem_resource *qat_dev_get_transport_bar_gen1(
+		struct rte_pci_device *pci_dev)
+{
+	return &pci_dev->mem_resource[0];
+}
+
+int
+qat_dev_get_misc_bar_gen1(
+		struct rte_mem_resource **mem_resource __rte_unused,
+		struct rte_pci_device *pci_dev __rte_unused)
+{
+	return -1;
+}
+
+int
+qat_dev_read_config_gen1(struct qat_pci_device *qat_dev __rte_unused)
+{
+	/*
+	 * Base generations do not have a configuration to read,
+	 * but this function pointer is set anyway so that a higher
+	 * generation faultily left as NULL can be distinguished
+	 */
+	return 0;
+}
+
+static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen1 = {
+	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
+	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
+	.qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen1,
+	.qat_dev_read_config = qat_dev_read_config_gen1,
+};
+
+RTE_INIT(qat_dev_gen_gen1_init)
+{
+	qat_qp_hw_spec[QAT_GEN1] = &qat_qp_hw_spec_gen1;
+	qat_dev_hw_spec[QAT_GEN1] = &qat_dev_hw_spec_gen1;
+	qat_gen_config[QAT_GEN1].dev_gen = QAT_GEN1;
+	qat_gen_config[QAT_GEN1].qp_hw_data = qat_gen1_qps;
+	qat_gen_config[QAT_GEN1].comp_num_im_bufs_required =
+			QAT_NUM_INTERM_BUFS_GEN1;
+}
diff --git a/drivers/common/qat/dev/qat_dev_gen1.h b/drivers/common/qat/dev/qat_dev_gen1.h
new file mode 100644
index 0000000000..9bf4fcf01b
--- /dev/null
+++
b/drivers/common/qat/dev/qat_dev_gen1.h @@ -0,0 +1,52 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 Intel Corporation + */ + +#ifndef _QAT_DEV_GEN_H_ +#define _QAT_DEV_GEN_H_ + +#include "qat_device.h" +#include "qat_qp.h" + +#include + +extern const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES] + [ADF_MAX_QPS_ON_ANY_SERVICE]; + +int +qat_qp_rings_per_service_gen1(struct qat_pci_device *qat_dev, + enum qat_service_type service); +void +qat_qp_csr_build_ring_base_gen1(void *io_addr, + struct qat_queue *queue); +void +qat_qp_adf_arb_enable_gen1(const struct qat_queue *txq, + void *base_addr, rte_spinlock_t *lock); +void +qat_qp_adf_arb_disable_gen1(const struct qat_queue *txq, + void *base_addr, rte_spinlock_t *lock); +void +qat_qp_adf_configure_queues_gen1(struct qat_qp *qp); +void +qat_qp_csr_write_tail_gen1(struct qat_qp *qp, struct qat_queue *q); +void +qat_qp_csr_write_head_gen1(struct qat_qp *qp, struct qat_queue *q, + uint32_t new_head); +void +qat_qp_csr_setup_gen1(struct qat_pci_device *qat_dev, + void *io_addr, struct qat_qp *qp); + +int +qat_reset_ring_pairs_gen1( + struct qat_pci_device *qat_pci_dev __rte_unused); +const struct +rte_mem_resource *qat_dev_get_transport_bar_gen1( + struct rte_pci_device *pci_dev); +int +qat_dev_get_misc_bar_gen1( + struct rte_mem_resource **mem_resource __rte_unused, + struct rte_pci_device *pci_dev __rte_unused); +int +qat_dev_read_config_gen1(struct qat_pci_device *qat_dev __rte_unused); + +#endif diff --git a/drivers/common/qat/dev/qat_dev_gen2.c b/drivers/common/qat/dev/qat_dev_gen2.c new file mode 100644 index 0000000000..ad1b643e00 --- /dev/null +++ b/drivers/common/qat/dev/qat_dev_gen2.c @@ -0,0 +1,38 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 Intel Corporation + */ + +#include "qat_device.h" +#include "qat_qp.h" +#include "adf_transport_access_macros.h" +#include "qat_dev_gen1.h" + +#include + +static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen2 = { + 
.qat_qp_rings_per_service = qat_qp_rings_per_service_gen1, + .qat_qp_build_ring_base = qat_qp_csr_build_ring_base_gen1, + .qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen1, + .qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen1, + .qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen1, + .qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen1, + .qat_qp_csr_write_head = qat_qp_csr_write_head_gen1, + .qat_qp_csr_setup = qat_qp_csr_setup_gen1, +}; + +static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen2 = { + .qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1, + .qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1, + .qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen1, + .qat_dev_read_config = qat_dev_read_config_gen1, +}; + +RTE_INIT(qat_dev_gen_gen2_init) +{ + qat_qp_hw_spec[QAT_GEN2] = &qat_qp_hw_spec_gen2; + qat_dev_hw_spec[QAT_GEN2] = &qat_dev_hw_spec_gen2; + qat_gen_config[QAT_GEN2].dev_gen = QAT_GEN2; + qat_gen_config[QAT_GEN2].qp_hw_data = qat_gen1_qps; + qat_gen_config[QAT_GEN2].comp_num_im_bufs_required = + QAT_NUM_INTERM_BUFS_GEN2; +} diff --git a/drivers/common/qat/dev/qat_dev_gen3.c b/drivers/common/qat/dev/qat_dev_gen3.c new file mode 100644 index 0000000000..407d21576b --- /dev/null +++ b/drivers/common/qat/dev/qat_dev_gen3.c @@ -0,0 +1,76 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 Intel Corporation + */ + +#include "qat_device.h" +#include "qat_qp.h" +#include "adf_transport_access_macros.h" +#include "qat_dev_gen1.h" + +#include + +__extension__ +const struct qat_qp_hw_data qat_gen3_qps[QAT_MAX_SERVICES] + [ADF_MAX_QPS_ON_ANY_SERVICE] = { + /* queue pairs which provide an asymmetric crypto service */ + [QAT_SERVICE_ASYMMETRIC] = { + { + .service_type = QAT_SERVICE_ASYMMETRIC, + .hw_bundle_num = 0, + .tx_ring_num = 0, + .rx_ring_num = 4, + .tx_msg_size = 64, + .rx_msg_size = 32, + } + }, + /* queue pairs which provide a symmetric crypto service */ + [QAT_SERVICE_SYMMETRIC] = { + { + .service_type = 
QAT_SERVICE_SYMMETRIC, + .hw_bundle_num = 0, + .tx_ring_num = 1, + .rx_ring_num = 5, + .tx_msg_size = 128, + .rx_msg_size = 32, + } + }, + /* queue pairs which provide a compression service */ + [QAT_SERVICE_COMPRESSION] = { + { + .service_type = QAT_SERVICE_COMPRESSION, + .hw_bundle_num = 0, + .tx_ring_num = 3, + .rx_ring_num = 7, + .tx_msg_size = 128, + .rx_msg_size = 32, + } + } +}; + +static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen3 = { + .qat_qp_rings_per_service = qat_qp_rings_per_service_gen1, + .qat_qp_build_ring_base = qat_qp_csr_build_ring_base_gen1, + .qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen1, + .qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen1, + .qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen1, + .qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen1, + .qat_qp_csr_write_head = qat_qp_csr_write_head_gen1, + .qat_qp_csr_setup = qat_qp_csr_setup_gen1, +}; + +static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen3 = { + .qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1, + .qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1, + .qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen1, + .qat_dev_read_config = qat_dev_read_config_gen1, +}; + +RTE_INIT(qat_dev_gen_gen3_init) +{ + qat_qp_hw_spec[QAT_GEN3] = &qat_qp_hw_spec_gen3; + qat_dev_hw_spec[QAT_GEN3] = &qat_dev_hw_spec_gen3; + qat_gen_config[QAT_GEN3].dev_gen = QAT_GEN3; + qat_gen_config[QAT_GEN3].qp_hw_data = qat_gen3_qps; + qat_gen_config[QAT_GEN3].comp_num_im_bufs_required = + QAT_NUM_INTERM_BUFS_GEN3; +} diff --git a/drivers/common/qat/dev/qat_dev_gen4.c b/drivers/common/qat/dev/qat_dev_gen4.c new file mode 100644 index 0000000000..6394e17dde --- /dev/null +++ b/drivers/common/qat/dev/qat_dev_gen4.c @@ -0,0 +1,258 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 Intel Corporation + */ + +#include +#include + +#include "qat_device.h" +#include "qat_qp.h" +#include "adf_transport_access_macros_gen4vf.h" +#include "adf_pf2vf_msg.h" +#include 
"qat_pf2vf.h"
+
+#include
+
+static struct qat_pf2vf_dev qat_pf2vf_gen4 = {
+	.pf2vf_offset = ADF_4XXXIOV_PF2VM_OFFSET,
+	.vf2pf_offset = ADF_4XXXIOV_VM2PF_OFFSET,
+	.pf2vf_type_shift = ADF_PFVF_2X_MSGTYPE_SHIFT,
+	.pf2vf_type_mask = ADF_PFVF_2X_MSGTYPE_MASK,
+	.pf2vf_data_shift = ADF_PFVF_2X_MSGDATA_SHIFT,
+	.pf2vf_data_mask = ADF_PFVF_2X_MSGDATA_MASK,
+};
+
+static int
+qat_query_svc(struct qat_pci_device *qat_dev, uint8_t *val)
+{
+	struct qat_pf2vf_msg pf2vf_msg;
+
+	pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_GET_SMALL_BLOCK_REQ;
+	pf2vf_msg.block_hdr = ADF_VF2PF_BLOCK_MSG_GET_RING_TO_SVC_REQ;
+	pf2vf_msg.msg_data = 2;
+	return qat_pf2vf_exch_msg(qat_dev, pf2vf_msg, 2, val);
+}
+
+static int
+qat_qp_rings_per_service_gen4(struct qat_pci_device *qat_dev,
+		enum qat_service_type service)
+{
+	int i = 0, count = 0, max_ops_per_srv = 0;
+
+	max_ops_per_srv = QAT_GEN4_BUNDLE_NUM;
+	for (i = 0, count = 0; i < max_ops_per_srv; i++)
+		if (qat_dev->qp_gen4_data[i][0].service_type == service)
+			count++;
+	return count;
+}
+
+static int
+qat_dev_read_config_gen4(struct qat_pci_device *qat_dev)
+{
+	int i = 0;
+	uint16_t svc = 0;
+
+	if (qat_query_svc(qat_dev, (uint8_t *)&svc))
+		return -EFAULT;
+	for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
+		struct qat_qp_hw_data *hw_data =
+			&qat_dev->qp_gen4_data[i][0];
+		uint8_t svc1 = (svc >> (3 * i)) & 0x7;
+		enum qat_service_type service_type = QAT_SERVICE_INVALID;
+
+		if (svc1 == QAT_SVC_SYM) {
+			service_type = QAT_SERVICE_SYMMETRIC;
+			QAT_LOG(DEBUG,
+				"Discovered SYMMETRIC service on bundle %d",
+				i);
+		} else if (svc1 == QAT_SVC_COMPRESSION) {
+			service_type = QAT_SERVICE_COMPRESSION;
+			QAT_LOG(DEBUG,
+				"Discovered COMPRESSION service on bundle %d",
+				i);
+		} else if (svc1 == QAT_SVC_ASYM) {
+			service_type = QAT_SERVICE_ASYMMETRIC;
+			QAT_LOG(DEBUG,
+				"Discovered ASYMMETRIC service on bundle %d",
+				i);
+		} else {
+			QAT_LOG(ERR,
+				"Unrecognized service on bundle %d",
+				i);
+			return -EFAULT;
+		}
+
+		memset(hw_data, 0, sizeof(*hw_data));
+
hw_data->service_type = service_type; + if (service_type == QAT_SERVICE_ASYMMETRIC) { + hw_data->tx_msg_size = 64; + hw_data->rx_msg_size = 32; + } else if (service_type == QAT_SERVICE_SYMMETRIC || + service_type == + QAT_SERVICE_COMPRESSION) { + hw_data->tx_msg_size = 128; + hw_data->rx_msg_size = 32; + } + hw_data->tx_ring_num = 0; + hw_data->rx_ring_num = 1; + hw_data->hw_bundle_num = i; + } + return 0; +} + +static void +qat_qp_build_ring_base_gen4(void *io_addr, + struct qat_queue *queue) +{ + uint64_t queue_base; + + queue_base = BUILD_RING_BASE_ADDR_GEN4(queue->base_phys_addr, + queue->queue_size); + WRITE_CSR_RING_BASE_GEN4VF(io_addr, queue->hw_bundle_number, + queue->hw_queue_number, queue_base); +} + +static void +qat_qp_adf_arb_enable_gen4(const struct qat_queue *txq, + void *base_addr, rte_spinlock_t *lock) +{ + uint32_t arb_csr_offset = 0, value; + + rte_spinlock_lock(lock); + arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET + + (ADF_RING_BUNDLE_SIZE_GEN4 * + txq->hw_bundle_number); + value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF, + arb_csr_offset); + value |= (0x01 << txq->hw_queue_number); + ADF_CSR_WR(base_addr, arb_csr_offset, value); + rte_spinlock_unlock(lock); +} + +static void +qat_qp_adf_arb_disable_gen4(const struct qat_queue *txq, + void *base_addr, rte_spinlock_t *lock) +{ + uint32_t arb_csr_offset = 0, value; + + rte_spinlock_lock(lock); + arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET + + (ADF_RING_BUNDLE_SIZE_GEN4 * + txq->hw_bundle_number); + value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF, + arb_csr_offset); + value &= ~(0x01 << txq->hw_queue_number); + ADF_CSR_WR(base_addr, arb_csr_offset, value); + rte_spinlock_unlock(lock); +} + +static void +qat_qp_adf_configure_queues_gen4(struct qat_qp *qp) +{ + uint32_t q_tx_config, q_resp_config; + struct qat_queue *q_tx = &qp->tx_q, *q_rx = &qp->rx_q; + + q_tx_config = BUILD_RING_CONFIG(q_tx->queue_size); + q_resp_config = BUILD_RESP_RING_CONFIG(q_rx->queue_size, + 
ADF_RING_NEAR_WATERMARK_512, + ADF_RING_NEAR_WATERMARK_0); + + WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr, + q_tx->hw_bundle_number, q_tx->hw_queue_number, + q_tx_config); + WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr, + q_rx->hw_bundle_number, q_rx->hw_queue_number, + q_resp_config); +} + +static void +qat_qp_csr_write_tail_gen4(struct qat_qp *qp, struct qat_queue *q) +{ + WRITE_CSR_RING_TAIL_GEN4VF(qp->mmap_bar_addr, + q->hw_bundle_number, q->hw_queue_number, q->tail); +} + +static void +qat_qp_csr_write_head_gen4(struct qat_qp *qp, struct qat_queue *q, + uint32_t new_head) +{ + WRITE_CSR_RING_HEAD_GEN4VF(qp->mmap_bar_addr, + q->hw_bundle_number, q->hw_queue_number, new_head); +} + +static void +qat_qp_csr_setup_gen4(struct qat_pci_device *qat_dev, + void *io_addr, struct qat_qp *qp) +{ + qat_qp_build_ring_base_gen4(io_addr, &qp->tx_q); + qat_qp_build_ring_base_gen4(io_addr, &qp->rx_q); + qat_qp_adf_configure_queues_gen4(qp); + qat_qp_adf_arb_enable_gen4(&qp->tx_q, qp->mmap_bar_addr, + &qat_dev->arb_csr_lock); +} + +static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen4 = { + .qat_qp_rings_per_service = qat_qp_rings_per_service_gen4, + .qat_qp_build_ring_base = qat_qp_build_ring_base_gen4, + .qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen4, + .qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen4, + .qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen4, + .qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen4, + .qat_qp_csr_write_head = qat_qp_csr_write_head_gen4, + .qat_qp_csr_setup = qat_qp_csr_setup_gen4, +}; + +static int +qat_reset_ring_pairs_gen4(struct qat_pci_device *qat_pci_dev) +{ + int ret = 0, i; + uint8_t data[4]; + struct qat_pf2vf_msg pf2vf_msg; + + pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_RP_RESET; + pf2vf_msg.block_hdr = -1; + for (i = 0; i < QAT_GEN4_BUNDLE_NUM; i++) { + pf2vf_msg.msg_data = i; + ret = qat_pf2vf_exch_msg(qat_pci_dev, pf2vf_msg, 1, data); + if (ret) { + QAT_LOG(ERR, "QAT error when reset bundle no %d", + i); + 
return ret; + } + } + + return 0; +} + +static const struct +rte_mem_resource *qat_dev_get_transport_bar_gen4(struct rte_pci_device *pci_dev) +{ + return &pci_dev->mem_resource[0]; +} + +static int +qat_dev_get_misc_bar_gen4( + struct rte_mem_resource **mem_resource, + struct rte_pci_device *pci_dev) +{ + *mem_resource = &pci_dev->mem_resource[2]; + return 0; +} + +static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen4 = { + .qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen4, + .qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen4, + .qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen4, + .qat_dev_read_config = qat_dev_read_config_gen4, +}; + +RTE_INIT(qat_dev_gen_4_init) +{ + qat_qp_hw_spec[QAT_GEN4] = &qat_qp_hw_spec_gen4; + qat_dev_hw_spec[QAT_GEN4] = &qat_dev_hw_spec_gen4; + qat_gen_config[QAT_GEN4].dev_gen = QAT_GEN4; + qat_gen_config[QAT_GEN4].qp_hw_data = NULL; + qat_gen_config[QAT_GEN4].comp_num_im_bufs_required = + QAT_NUM_INTERM_BUFS_GEN3; + qat_gen_config[QAT_GEN4].pf2vf_dev = &qat_pf2vf_gen4; +} diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build index 053c219fed..532e0fabb3 100644 --- a/drivers/common/qat/meson.build +++ b/drivers/common/qat/meson.build @@ -50,6 +50,10 @@ sources += files( 'qat_device.c', 'qat_logs.c', 'qat_pf2vf.c', + 'dev/qat_dev_gen1.c', + 'dev/qat_dev_gen2.c', + 'dev/qat_dev_gen3.c', + 'dev/qat_dev_gen4.c' ) includes += include_directories( 'qat_adf', diff --git a/drivers/common/qat/qat_common.h b/drivers/common/qat/qat_common.h index 23715085f4..b15e980f0f 100644 --- a/drivers/common/qat/qat_common.h +++ b/drivers/common/qat/qat_common.h @@ -22,6 +22,8 @@ enum qat_device_gen { QAT_GEN4 }; +#define QAT_DEV_GEN_NO (QAT_GEN4 + 1) + enum qat_service_type { QAT_SERVICE_ASYMMETRIC = 0, QAT_SERVICE_SYMMETRIC, diff --git a/drivers/common/qat/qat_device.c b/drivers/common/qat/qat_device.c index 1b967cbcf7..030624b46d 100644 --- a/drivers/common/qat/qat_device.c +++ b/drivers/common/qat/qat_device.c @@ 
-13,42 +13,10 @@ #include "adf_pf2vf_msg.h" #include "qat_pf2vf.h" -/* pv2vf data Gen 4*/ -struct qat_pf2vf_dev qat_pf2vf_gen4 = { - .pf2vf_offset = ADF_4XXXIOV_PF2VM_OFFSET, - .vf2pf_offset = ADF_4XXXIOV_VM2PF_OFFSET, - .pf2vf_type_shift = ADF_PFVF_2X_MSGTYPE_SHIFT, - .pf2vf_type_mask = ADF_PFVF_2X_MSGTYPE_MASK, - .pf2vf_data_shift = ADF_PFVF_2X_MSGDATA_SHIFT, - .pf2vf_data_mask = ADF_PFVF_2X_MSGDATA_MASK, -}; - /* Hardware device information per generation */ -__extension__ -struct qat_gen_hw_data qat_gen_config[] = { - [QAT_GEN1] = { - .dev_gen = QAT_GEN1, - .qp_hw_data = qat_gen1_qps, - .comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN1 - }, - [QAT_GEN2] = { - .dev_gen = QAT_GEN2, - .qp_hw_data = qat_gen1_qps, - /* gen2 has same ring layout as gen1 */ - .comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN2 - }, - [QAT_GEN3] = { - .dev_gen = QAT_GEN3, - .qp_hw_data = qat_gen3_qps, - .comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN3 - }, - [QAT_GEN4] = { - .dev_gen = QAT_GEN4, - .qp_hw_data = NULL, - .comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN3, - .pf2vf_dev = &qat_pf2vf_gen4 - }, -}; + +struct qat_gen_hw_data qat_gen_config[QAT_DEV_GEN_NO]; +struct qat_dev_hw_spec_funcs *qat_dev_hw_spec[QAT_DEV_GEN_NO]; /* per-process array of device data */ struct qat_device_info qat_pci_devs[RTE_PMD_QAT_MAX_PCI_DEVICES]; @@ -126,44 +94,6 @@ qat_get_qat_dev_from_pci_dev(struct rte_pci_device *pci_dev) return qat_pci_get_named_dev(name); } -static int -qat_gen4_reset_ring_pair(struct qat_pci_device *qat_pci_dev) -{ - int ret = 0, i; - uint8_t data[4]; - struct qat_pf2vf_msg pf2vf_msg; - - pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_RP_RESET; - pf2vf_msg.block_hdr = -1; - for (i = 0; i < QAT_GEN4_BUNDLE_NUM; i++) { - pf2vf_msg.msg_data = i; - ret = qat_pf2vf_exch_msg(qat_pci_dev, pf2vf_msg, 1, data); - if (ret) { - QAT_LOG(ERR, "QAT error when reset bundle no %d", - i); - return ret; - } - } - - return 0; -} - -int qat_query_svc(struct qat_pci_device *qat_dev, uint8_t 
*val) -{ - int ret = -(EINVAL); - struct qat_pf2vf_msg pf2vf_msg; - - if (qat_dev->qat_dev_gen == QAT_GEN4) { - pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_GET_SMALL_BLOCK_REQ; - pf2vf_msg.block_hdr = ADF_VF2PF_BLOCK_MSG_GET_RING_TO_SVC_REQ; - pf2vf_msg.msg_data = 2; - ret = qat_pf2vf_exch_msg(qat_dev, pf2vf_msg, 2, val); - } - - return ret; -} - - static void qat_dev_parse_cmd(const char *str, struct qat_dev_cmd_param *qat_dev_cmd_param) { @@ -229,6 +159,8 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev, uint8_t qat_dev_id = 0; char name[QAT_DEV_NAME_MAX_LEN]; struct rte_devargs *devargs = pci_dev->device.devargs; + struct qat_dev_hw_spec_funcs *ops_hw = NULL; + struct rte_mem_resource *mem_resource; rte_pci_device_name(&pci_dev->addr, name, sizeof(name)); snprintf(name+strlen(name), QAT_DEV_NAME_MAX_LEN-strlen(name), "_qat"); @@ -300,24 +232,25 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev, return NULL; } - if (qat_dev->qat_dev_gen == QAT_GEN4) { - qat_dev->misc_bar_io_addr = pci_dev->mem_resource[2].addr; - if (qat_dev->misc_bar_io_addr == NULL) { + ops_hw = qat_dev_hw_spec[qat_dev->qat_dev_gen]; + RTE_FUNC_PTR_OR_ERR_RET(ops_hw->qat_dev_get_misc_bar, NULL); + if (ops_hw->qat_dev_get_misc_bar(&mem_resource, pci_dev) == 0) { + if (mem_resource->addr == NULL) { QAT_LOG(ERR, "QAT cannot get access to VF misc bar"); return NULL; } - } + qat_dev->misc_bar_io_addr = mem_resource->addr; + } else + qat_dev->misc_bar_io_addr = NULL; if (devargs && devargs->drv_str) qat_dev_parse_cmd(devargs->drv_str, qat_dev_cmd_param); - if (qat_dev->qat_dev_gen >= QAT_GEN4) { - if (qat_read_qp_config(qat_dev)) { - QAT_LOG(ERR, - "Cannot acquire ring configuration for QAT_%d", - qat_dev_id); - return NULL; - } + if (qat_read_qp_config(qat_dev)) { + QAT_LOG(ERR, + "Cannot acquire ring configuration for QAT_%d", + qat_dev_id); + return NULL; } rte_spinlock_init(&qat_dev->arb_csr_lock); @@ -392,6 +325,7 @@ static int qat_pci_probe(struct rte_pci_driver *pci_drv 
__rte_unused,
 	int sym_ret = 0, asym_ret = 0, comp_ret = 0;
 	int num_pmds_created = 0;
 	struct qat_pci_device *qat_pci_dev;
+	struct qat_dev_hw_spec_funcs *ops;
 	struct qat_dev_cmd_param qat_dev_cmd_param[] = {
 			{ SYM_ENQ_THRESHOLD_NAME, 0 },
 			{ ASYM_ENQ_THRESHOLD_NAME, 0 },
@@ -408,13 +342,14 @@ static int qat_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	if (qat_pci_dev == NULL)
 		return -ENODEV;
 
-	if (qat_pci_dev->qat_dev_gen == QAT_GEN4) {
-		if (qat_gen4_reset_ring_pair(qat_pci_dev)) {
-			QAT_LOG(ERR,
-				"Cannot reset ring pairs, does pf driver supports pf2vf comms?"
-				);
-			return -ENODEV;
-		}
+	ops = qat_dev_hw_spec[qat_pci_dev->qat_dev_gen];
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_dev_reset_ring_pairs,
+		-ENOTSUP);
+	if (ops->qat_dev_reset_ring_pairs(qat_pci_dev)) {
+		QAT_LOG(ERR,
+			"Cannot reset ring pairs, does the PF driver support pf2vf comms?"
+			);
+		return -ENODEV;
 	}
 
 	sym_ret = qat_sym_dev_create(qat_pci_dev, qat_dev_cmd_param);
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index 228c057d1e..531aa663ca 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -21,6 +21,24 @@
 #define COMP_ENQ_THRESHOLD_NAME "qat_comp_enq_threshold"
 #define MAX_QP_THRESHOLD_SIZE	32
 
+typedef int (*qat_dev_reset_ring_pairs_t)
+		(struct qat_pci_device *);
+typedef const struct rte_mem_resource* (*qat_dev_get_transport_bar_t)
+		(struct rte_pci_device *);
+typedef int (*qat_dev_get_misc_bar_t)
+		(struct rte_mem_resource **, struct rte_pci_device *);
+typedef int (*qat_dev_read_config_t)
+		(struct qat_pci_device *);
+
+struct qat_dev_hw_spec_funcs {
+	qat_dev_reset_ring_pairs_t	qat_dev_reset_ring_pairs;
+	qat_dev_get_transport_bar_t	qat_dev_get_transport_bar;
+	qat_dev_get_misc_bar_t		qat_dev_get_misc_bar;
+	qat_dev_read_config_t		qat_dev_read_config;
+};
+
+extern struct qat_dev_hw_spec_funcs *qat_dev_hw_spec[];
+
 struct qat_dev_cmd_param {
 	const char *name;
 	uint16_t val;
@@ -57,6 +75,9 @@ struct qat_device_info {
 	 */
 };
 
+extern
const struct qat_qp_hw_data qat_gen1_qps[][ADF_MAX_QPS_ON_ANY_SERVICE]; +extern const struct qat_qp_hw_data qat_gen3_qps[][ADF_MAX_QPS_ON_ANY_SERVICE]; + extern struct qat_device_info qat_pci_devs[]; struct qat_sym_dev_private; @@ -159,7 +180,4 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev __rte_unused, int qat_comp_dev_destroy(struct qat_pci_device *qat_pci_dev __rte_unused); -int -qat_query_svc(struct qat_pci_device *qat_pci_dev, uint8_t *ret); - #endif /* _QAT_DEVICE_H_ */ diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c index 026ea5ee01..ff4d7fa95c 100644 --- a/drivers/common/qat/qat_qp.c +++ b/drivers/common/qat/qat_qp.c @@ -18,119 +18,14 @@ #include "qat_sym.h" #include "qat_asym.h" #include "qat_comp.h" -#include "adf_transport_access_macros.h" -#include "adf_transport_access_macros_gen4vf.h" #define QAT_CQ_MAX_DEQ_RETRIES 10 #define ADF_MAX_DESC 4096 #define ADF_MIN_DESC 128 -#define ADF_ARB_REG_SLOT 0x1000 -#define ADF_ARB_RINGSRVARBEN_OFFSET 0x19C - -#define WRITE_CSR_ARB_RINGSRVARBEN(csr_addr, index, value) \ - ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \ - (ADF_ARB_REG_SLOT * index), value) - -__extension__ -const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES] - [ADF_MAX_QPS_ON_ANY_SERVICE] = { - /* queue pairs which provide an asymmetric crypto service */ - [QAT_SERVICE_ASYMMETRIC] = { - { - .service_type = QAT_SERVICE_ASYMMETRIC, - .hw_bundle_num = 0, - .tx_ring_num = 0, - .rx_ring_num = 8, - .tx_msg_size = 64, - .rx_msg_size = 32, - - }, { - .service_type = QAT_SERVICE_ASYMMETRIC, - .hw_bundle_num = 0, - .tx_ring_num = 1, - .rx_ring_num = 9, - .tx_msg_size = 64, - .rx_msg_size = 32, - } - }, - /* queue pairs which provide a symmetric crypto service */ - [QAT_SERVICE_SYMMETRIC] = { - { - .service_type = QAT_SERVICE_SYMMETRIC, - .hw_bundle_num = 0, - .tx_ring_num = 2, - .rx_ring_num = 10, - .tx_msg_size = 128, - .rx_msg_size = 32, - }, - { - .service_type = QAT_SERVICE_SYMMETRIC, - .hw_bundle_num = 0, - 
.tx_ring_num = 3, - .rx_ring_num = 11, - .tx_msg_size = 128, - .rx_msg_size = 32, - } - }, - /* queue pairs which provide a compression service */ - [QAT_SERVICE_COMPRESSION] = { - { - .service_type = QAT_SERVICE_COMPRESSION, - .hw_bundle_num = 0, - .tx_ring_num = 6, - .rx_ring_num = 14, - .tx_msg_size = 128, - .rx_msg_size = 32, - }, { - .service_type = QAT_SERVICE_COMPRESSION, - .hw_bundle_num = 0, - .tx_ring_num = 7, - .rx_ring_num = 15, - .tx_msg_size = 128, - .rx_msg_size = 32, - } - } -}; - -__extension__ -const struct qat_qp_hw_data qat_gen3_qps[QAT_MAX_SERVICES] - [ADF_MAX_QPS_ON_ANY_SERVICE] = { - /* queue pairs which provide an asymmetric crypto service */ - [QAT_SERVICE_ASYMMETRIC] = { - { - .service_type = QAT_SERVICE_ASYMMETRIC, - .hw_bundle_num = 0, - .tx_ring_num = 0, - .rx_ring_num = 4, - .tx_msg_size = 64, - .rx_msg_size = 32, - } - }, - /* queue pairs which provide a symmetric crypto service */ - [QAT_SERVICE_SYMMETRIC] = { - { - .service_type = QAT_SERVICE_SYMMETRIC, - .hw_bundle_num = 0, - .tx_ring_num = 1, - .rx_ring_num = 5, - .tx_msg_size = 128, - .rx_msg_size = 32, - } - }, - /* queue pairs which provide a compression service */ - [QAT_SERVICE_COMPRESSION] = { - { - .service_type = QAT_SERVICE_COMPRESSION, - .hw_bundle_num = 0, - .tx_ring_num = 3, - .rx_ring_num = 7, - .tx_msg_size = 128, - .rx_msg_size = 32, - } - } -}; +struct qat_qp_hw_spec_funcs* + qat_qp_hw_spec[QAT_DEV_GEN_NO]; static int qat_qp_check_queue_alignment(uint64_t phys_addr, uint32_t queue_size_bytes); @@ -139,66 +34,19 @@ static int qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue, struct qat_qp_config *, uint8_t dir); static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num, uint32_t *queue_size_for_csr); -static void adf_configure_queues(struct qat_qp *queue, +static int adf_configure_queues(struct qat_qp *queue, enum qat_device_gen qat_dev_gen); -static void adf_queue_arb_enable(enum qat_device_gen qat_dev_gen, +static int 
adf_queue_arb_enable(struct qat_pci_device *qat_dev, struct qat_queue *txq, void *base_addr, rte_spinlock_t *lock); -static void adf_queue_arb_disable(enum qat_device_gen qat_dev_gen, +static int adf_queue_arb_disable(enum qat_device_gen qat_dev_gen, struct qat_queue *txq, void *base_addr, rte_spinlock_t *lock); - -int qat_qps_per_service(struct qat_pci_device *qat_dev, - enum qat_service_type service) -{ - int i = 0, count = 0, max_ops_per_srv = 0; - - if (qat_dev->qat_dev_gen == QAT_GEN4) { - max_ops_per_srv = QAT_GEN4_BUNDLE_NUM; - for (i = 0, count = 0; i < max_ops_per_srv; i++) - if (qat_dev->qp_gen4_data[i][0].service_type == service) - count++; - } else { - const struct qat_qp_hw_data *sym_hw_qps = - qat_gen_config[qat_dev->qat_dev_gen] - .qp_hw_data[service]; - - max_ops_per_srv = ADF_MAX_QPS_ON_ANY_SERVICE; - for (i = 0, count = 0; i < max_ops_per_srv; i++) - if (sym_hw_qps[i].service_type == service) - count++; - } - - return count; -} - +static int qat_qp_build_ring_base(struct qat_pci_device *qat_dev, + void *io_addr, struct qat_queue *queue); static const struct rte_memzone * -queue_dma_zone_reserve(const char *queue_name, uint32_t queue_size, - int socket_id) -{ - const struct rte_memzone *mz; - - mz = rte_memzone_lookup(queue_name); - if (mz != 0) { - if (((size_t)queue_size <= mz->len) && - ((socket_id == SOCKET_ID_ANY) || - (socket_id == mz->socket_id))) { - QAT_LOG(DEBUG, "re-use memzone already " - "allocated for %s", queue_name); - return mz; - } - - QAT_LOG(ERR, "Incompatible memzone already " - "allocated %s, size %u, socket %d. 
" - "Requested size %u, socket %u", - queue_name, (uint32_t)mz->len, - mz->socket_id, queue_size, socket_id); - return NULL; - } - - QAT_LOG(DEBUG, "Allocate memzone for %s, size %u on socket %u", - queue_name, queue_size, socket_id); - return rte_memzone_reserve_aligned(queue_name, queue_size, - socket_id, RTE_MEMZONE_IOVA_CONTIG, queue_size); -} + queue_dma_zone_reserve(const char *queue_name, uint32_t queue_size, + int socket_id); +static int qat_qp_csr_setup(struct qat_pci_device *qat_dev, + void *io_addr, struct qat_qp *qp); int qat_qp_setup(struct qat_pci_device *qat_dev, struct qat_qp **qp_addr, @@ -209,8 +57,10 @@ int qat_qp_setup(struct qat_pci_device *qat_dev, struct rte_pci_device *pci_dev = qat_pci_devs[qat_dev->qat_dev_id].pci_dev; char op_cookie_pool_name[RTE_RING_NAMESIZE]; - enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen; uint32_t i; + struct qat_dev_hw_spec_funcs *ops_hw = + qat_dev_hw_spec[qat_dev->qat_dev_gen]; + void *io_addr; QAT_LOG(DEBUG, "Setup qp %u on qat pci device %d gen %d", queue_pair_id, qat_dev->qat_dev_id, qat_dev->qat_dev_gen); @@ -222,7 +72,10 @@ int qat_qp_setup(struct qat_pci_device *qat_dev, return -EINVAL; } - if (pci_dev->mem_resource[0].addr == NULL) { + RTE_FUNC_PTR_OR_ERR_RET(ops_hw->qat_dev_get_transport_bar, + -ENOTSUP); + io_addr = ops_hw->qat_dev_get_transport_bar(pci_dev)->addr; + if (io_addr == NULL) { QAT_LOG(ERR, "Could not find VF config space " "(UIO driver attached?)."); return -EINVAL; @@ -246,7 +99,7 @@ int qat_qp_setup(struct qat_pci_device *qat_dev, return -ENOMEM; } - qp->mmap_bar_addr = pci_dev->mem_resource[0].addr; + qp->mmap_bar_addr = io_addr; qp->enqueued = qp->dequeued = 0; if (qat_queue_create(qat_dev, &(qp->tx_q), qat_qp_conf, @@ -273,10 +126,6 @@ int qat_qp_setup(struct qat_pci_device *qat_dev, goto create_err; } - adf_configure_queues(qp, qat_dev_gen); - adf_queue_arb_enable(qat_dev_gen, &qp->tx_q, qp->mmap_bar_addr, - &qat_dev->arb_csr_lock); - snprintf(op_cookie_pool_name, 
RTE_RING_NAMESIZE, "%s%d_cookies_%s_qp%hu", pci_dev->driver->driver.name, qat_dev->qat_dev_id, @@ -312,6 +161,8 @@ int qat_qp_setup(struct qat_pci_device *qat_dev, QAT_LOG(DEBUG, "QP setup complete: id: %d, cookiepool: %s", queue_pair_id, op_cookie_pool_name); + qat_qp_csr_setup(qat_dev, io_addr, qp); + *qp_addr = qp; return 0; @@ -323,80 +174,13 @@ int qat_qp_setup(struct qat_pci_device *qat_dev, return -EFAULT; } - -int qat_qp_release(enum qat_device_gen qat_dev_gen, struct qat_qp **qp_addr) -{ - struct qat_qp *qp = *qp_addr; - uint32_t i; - - if (qp == NULL) { - QAT_LOG(DEBUG, "qp already freed"); - return 0; - } - - QAT_LOG(DEBUG, "Free qp on qat_pci device %d", - qp->qat_dev->qat_dev_id); - - /* Don't free memory if there are still responses to be processed */ - if ((qp->enqueued - qp->dequeued) == 0) { - qat_queue_delete(&(qp->tx_q)); - qat_queue_delete(&(qp->rx_q)); - } else { - return -EAGAIN; - } - - adf_queue_arb_disable(qat_dev_gen, &(qp->tx_q), qp->mmap_bar_addr, - &qp->qat_dev->arb_csr_lock); - - for (i = 0; i < qp->nb_descriptors; i++) - rte_mempool_put(qp->op_cookie_pool, qp->op_cookies[i]); - - if (qp->op_cookie_pool) - rte_mempool_free(qp->op_cookie_pool); - - rte_free(qp->op_cookies); - rte_free(qp); - *qp_addr = NULL; - return 0; -} - - -static void qat_queue_delete(struct qat_queue *queue) -{ - const struct rte_memzone *mz; - int status = 0; - - if (queue == NULL) { - QAT_LOG(DEBUG, "Invalid queue"); - return; - } - QAT_LOG(DEBUG, "Free ring %d, memzone: %s", - queue->hw_queue_number, queue->memz_name); - - mz = rte_memzone_lookup(queue->memz_name); - if (mz != NULL) { - /* Write an unused pattern to the queue memory. 
*/ - memset(queue->base_addr, 0x7F, queue->queue_size); - status = rte_memzone_free(mz); - if (status != 0) - QAT_LOG(ERR, "Error %d on freeing queue %s", - status, queue->memz_name); - } else { - QAT_LOG(DEBUG, "queue %s doesn't exist", - queue->memz_name); - } -} - static int qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue, struct qat_qp_config *qp_conf, uint8_t dir) { - uint64_t queue_base; - void *io_addr; const struct rte_memzone *qp_mz; struct rte_pci_device *pci_dev = qat_pci_devs[qat_dev->qat_dev_id].pci_dev; - enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen; int ret = 0; uint16_t desc_size = (dir == ADF_RING_DIR_TX ? qp_conf->hw->tx_msg_size : qp_conf->hw->rx_msg_size); @@ -456,19 +240,6 @@ qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue, * Write an unused pattern to the queue memory. */ memset(queue->base_addr, 0x7F, queue_size_bytes); - io_addr = pci_dev->mem_resource[0].addr; - - if (qat_dev_gen == QAT_GEN4) { - queue_base = BUILD_RING_BASE_ADDR_GEN4(queue->base_phys_addr, - queue->queue_size); - WRITE_CSR_RING_BASE_GEN4VF(io_addr, queue->hw_bundle_number, - queue->hw_queue_number, queue_base); - } else { - queue_base = BUILD_RING_BASE_ADDR(queue->base_phys_addr, - queue->queue_size); - WRITE_CSR_RING_BASE(io_addr, queue->hw_bundle_number, - queue->hw_queue_number, queue_base); - } QAT_LOG(DEBUG, "RING: Name:%s, size in CSR: %u, in bytes %u," " nb msgs %u, msg_size %u, modulo mask %u", @@ -484,202 +255,216 @@ qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue, return ret; } -int -qat_select_valid_queue(struct qat_pci_device *qat_dev, int qp_id, - enum qat_service_type service_type) +static const struct rte_memzone * +queue_dma_zone_reserve(const char *queue_name, uint32_t queue_size, + int socket_id) { - if (qat_dev->qat_dev_gen == QAT_GEN4) { - int i = 0, valid_qps = 0; - - for (; i < QAT_GEN4_BUNDLE_NUM; i++) { - if (qat_dev->qp_gen4_data[i][0].service_type == - 
service_type) { - if (valid_qps == qp_id) - return i; - ++valid_qps; - } + const struct rte_memzone *mz; + + mz = rte_memzone_lookup(queue_name); + if (mz != 0) { + if (((size_t)queue_size <= mz->len) && + ((socket_id == SOCKET_ID_ANY) || + (socket_id == mz->socket_id))) { + QAT_LOG(DEBUG, + "re-use memzone already allocated for %s", + queue_name); + return mz; } + + QAT_LOG(ERR, + "Incompatible memzone already allocated %s, size %u, socket %d. Requested size %u, socket %u", + queue_name, (uint32_t)mz->len, + mz->socket_id, queue_size, socket_id); + return NULL; } - return -1; + + QAT_LOG(DEBUG, "Allocate memzone for %s, size %u on socket %u", + queue_name, queue_size, socket_id); + return rte_memzone_reserve_aligned(queue_name, queue_size, + socket_id, RTE_MEMZONE_IOVA_CONTIG, queue_size); } -int -qat_read_qp_config(struct qat_pci_device *qat_dev) +int qat_qp_release(enum qat_device_gen qat_dev_gen, struct qat_qp **qp_addr) { - int i = 0; - enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen; - - if (qat_dev_gen == QAT_GEN4) { - uint16_t svc = 0; - - if (qat_query_svc(qat_dev, (uint8_t *)&svc)) - return -(EFAULT); - for (; i < QAT_GEN4_BUNDLE_NUM; i++) { - struct qat_qp_hw_data *hw_data = - &qat_dev->qp_gen4_data[i][0]; - uint8_t svc1 = (svc >> (3 * i)) & 0x7; - enum qat_service_type service_type = QAT_SERVICE_INVALID; - - if (svc1 == QAT_SVC_SYM) { - service_type = QAT_SERVICE_SYMMETRIC; - QAT_LOG(DEBUG, - "Discovered SYMMETRIC service on bundle %d", - i); - } else if (svc1 == QAT_SVC_COMPRESSION) { - service_type = QAT_SERVICE_COMPRESSION; - QAT_LOG(DEBUG, - "Discovered COPRESSION service on bundle %d", - i); - } else if (svc1 == QAT_SVC_ASYM) { - service_type = QAT_SERVICE_ASYMMETRIC; - QAT_LOG(DEBUG, - "Discovered ASYMMETRIC service on bundle %d", - i); - } else { - QAT_LOG(ERR, - "Unrecognized service on bundle %d", - i); - return -(EFAULT); - } + int ret; + struct qat_qp *qp = *qp_addr; + uint32_t i; - memset(hw_data, 0, sizeof(*hw_data)); - 
hw_data->service_type = service_type; - if (service_type == QAT_SERVICE_ASYMMETRIC) { - hw_data->tx_msg_size = 64; - hw_data->rx_msg_size = 32; - } else if (service_type == QAT_SERVICE_SYMMETRIC || - service_type == - QAT_SERVICE_COMPRESSION) { - hw_data->tx_msg_size = 128; - hw_data->rx_msg_size = 32; - } - hw_data->tx_ring_num = 0; - hw_data->rx_ring_num = 1; - hw_data->hw_bundle_num = i; - } + if (qp == NULL) { + QAT_LOG(DEBUG, "qp already freed"); return 0; } - return -(EINVAL); -} -static int qat_qp_check_queue_alignment(uint64_t phys_addr, - uint32_t queue_size_bytes) -{ - if (((queue_size_bytes - 1) & phys_addr) != 0) - return -EINVAL; + QAT_LOG(DEBUG, "Free qp on qat_pci device %d", + qp->qat_dev->qat_dev_id); + + /* Don't free memory if there are still responses to be processed */ + if ((qp->enqueued - qp->dequeued) == 0) { + qat_queue_delete(&(qp->tx_q)); + qat_queue_delete(&(qp->rx_q)); + } else { + return -EAGAIN; + } + + ret = adf_queue_arb_disable(qat_dev_gen, &(qp->tx_q), qp->mmap_bar_addr, + &qp->qat_dev->arb_csr_lock); + if (ret) + return ret; + + for (i = 0; i < qp->nb_descriptors; i++) + rte_mempool_put(qp->op_cookie_pool, qp->op_cookies[i]); + + if (qp->op_cookie_pool) + rte_mempool_free(qp->op_cookie_pool); + + rte_free(qp->op_cookies); + rte_free(qp); + *qp_addr = NULL; return 0; } -static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num, - uint32_t *p_queue_size_for_csr) + +static void qat_queue_delete(struct qat_queue *queue) { - uint8_t i = ADF_MIN_RING_SIZE; + const struct rte_memzone *mz; + int status = 0; - for (; i <= ADF_MAX_RING_SIZE; i++) - if ((msg_size * msg_num) == - (uint32_t)ADF_SIZE_TO_RING_SIZE_IN_BYTES(i)) { - *p_queue_size_for_csr = i; - return 0; - } - QAT_LOG(ERR, "Invalid ring size %d", msg_size * msg_num); - return -EINVAL; + if (queue == NULL) { + QAT_LOG(DEBUG, "Invalid queue"); + return; + } + QAT_LOG(DEBUG, "Free ring %d, memzone: %s", + queue->hw_queue_number, queue->memz_name); + + mz = 
rte_memzone_lookup(queue->memz_name); + if (mz != NULL) { + /* Write an unused pattern to the queue memory. */ + memset(queue->base_addr, 0x7F, queue->queue_size); + status = rte_memzone_free(mz); + if (status != 0) + QAT_LOG(ERR, "Error %d on freeing queue %s", + status, queue->memz_name); + } else { + QAT_LOG(DEBUG, "queue %s doesn't exist", + queue->memz_name); + } } -static void -adf_queue_arb_enable(enum qat_device_gen qat_dev_gen, struct qat_queue *txq, +static int __rte_unused +adf_queue_arb_enable(struct qat_pci_device *qat_dev, struct qat_queue *txq, void *base_addr, rte_spinlock_t *lock) { - uint32_t arb_csr_offset = 0, value; - - rte_spinlock_lock(lock); - if (qat_dev_gen == QAT_GEN4) { - arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET + - (ADF_RING_BUNDLE_SIZE_GEN4 * - txq->hw_bundle_number); - value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF, - arb_csr_offset); - } else { - arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET + - (ADF_ARB_REG_SLOT * - txq->hw_bundle_number); - value = ADF_CSR_RD(base_addr, - arb_csr_offset); - } - value |= (0x01 << txq->hw_queue_number); - ADF_CSR_WR(base_addr, arb_csr_offset, value); - rte_spinlock_unlock(lock); + struct qat_qp_hw_spec_funcs *ops = + qat_qp_hw_spec[qat_dev->qat_dev_gen]; + + RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_adf_arb_enable, + -ENOTSUP); + ops->qat_qp_adf_arb_enable(txq, base_addr, lock); + return 0; } -static void adf_queue_arb_disable(enum qat_device_gen qat_dev_gen, +static int adf_queue_arb_disable(enum qat_device_gen qat_dev_gen, struct qat_queue *txq, void *base_addr, rte_spinlock_t *lock) { - uint32_t arb_csr_offset = 0, value; - - rte_spinlock_lock(lock); - if (qat_dev_gen == QAT_GEN4) { - arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET + - (ADF_RING_BUNDLE_SIZE_GEN4 * - txq->hw_bundle_number); - value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF, - arb_csr_offset); - } else { - arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET + - (ADF_ARB_REG_SLOT * - txq->hw_bundle_number); - value 
= ADF_CSR_RD(base_addr, - arb_csr_offset); - } - value &= ~(0x01 << txq->hw_queue_number); - ADF_CSR_WR(base_addr, arb_csr_offset, value); - rte_spinlock_unlock(lock); + struct qat_qp_hw_spec_funcs *ops = + qat_qp_hw_spec[qat_dev_gen]; + + RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_adf_arb_disable, + -ENOTSUP); + ops->qat_qp_adf_arb_disable(txq, base_addr, lock); + return 0; } -static void adf_configure_queues(struct qat_qp *qp, - enum qat_device_gen qat_dev_gen) +static int __rte_unused +qat_qp_build_ring_base(struct qat_pci_device *qat_dev, void *io_addr, + struct qat_queue *queue) { - uint32_t q_tx_config, q_resp_config; - struct qat_queue *q_tx = &qp->tx_q, *q_rx = &qp->rx_q; - - q_tx_config = BUILD_RING_CONFIG(q_tx->queue_size); - q_resp_config = BUILD_RESP_RING_CONFIG(q_rx->queue_size, - ADF_RING_NEAR_WATERMARK_512, - ADF_RING_NEAR_WATERMARK_0); - - if (qat_dev_gen == QAT_GEN4) { - WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr, - q_tx->hw_bundle_number, q_tx->hw_queue_number, - q_tx_config); - WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr, - q_rx->hw_bundle_number, q_rx->hw_queue_number, - q_resp_config); - } else { - WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr, - q_tx->hw_bundle_number, q_tx->hw_queue_number, - q_tx_config); - WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr, - q_rx->hw_bundle_number, q_rx->hw_queue_number, - q_resp_config); - } + struct qat_qp_hw_spec_funcs *ops = + qat_qp_hw_spec[qat_dev->qat_dev_gen]; + + RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_build_ring_base, + -ENOTSUP); + ops->qat_qp_build_ring_base(io_addr, queue); + return 0; } -static inline uint32_t adf_modulo(uint32_t data, uint32_t modulo_mask) +int qat_qps_per_service(struct qat_pci_device *qat_dev, + enum qat_service_type service) { - return data & modulo_mask; + struct qat_qp_hw_spec_funcs *ops = + qat_qp_hw_spec[qat_dev->qat_dev_gen]; + + RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_rings_per_service, + -ENOTSUP); + return ops->qat_qp_rings_per_service(qat_dev, service); +} + +int 
+qat_read_qp_config(struct qat_pci_device *qat_dev) +{ + struct qat_dev_hw_spec_funcs *ops_hw = + qat_dev_hw_spec[qat_dev->qat_dev_gen]; + + RTE_FUNC_PTR_OR_ERR_RET(ops_hw->qat_dev_read_config, + -ENOTSUP); + return ops_hw->qat_dev_read_config(qat_dev); +} + +static int __rte_unused +adf_configure_queues(struct qat_qp *qp, enum qat_device_gen qat_dev_gen) +{ + struct qat_qp_hw_spec_funcs *ops = + qat_qp_hw_spec[qat_dev_gen]; + + RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_adf_configure_queues, + -ENOTSUP); + ops->qat_qp_adf_configure_queues(qp); + return 0; } static inline void txq_write_tail(enum qat_device_gen qat_dev_gen, - struct qat_qp *qp, struct qat_queue *q) { + struct qat_qp *qp, struct qat_queue *q) +{ + struct qat_qp_hw_spec_funcs *ops = + qat_qp_hw_spec[qat_dev_gen]; - if (qat_dev_gen == QAT_GEN4) { - WRITE_CSR_RING_TAIL_GEN4VF(qp->mmap_bar_addr, - q->hw_bundle_number, q->hw_queue_number, q->tail); - } else { - WRITE_CSR_RING_TAIL(qp->mmap_bar_addr, q->hw_bundle_number, - q->hw_queue_number, q->tail); - } + /* + * Pointer check should be done during + * initialization + */ + ops->qat_qp_csr_write_tail(qp, q); } +static inline void +qat_qp_csr_write_head(enum qat_device_gen qat_dev_gen, struct qat_qp *qp, + struct qat_queue *q, uint32_t new_head) +{ + struct qat_qp_hw_spec_funcs *ops = + qat_qp_hw_spec[qat_dev_gen]; + + /* + * Pointer check should be done during + * initialization + */ + ops->qat_qp_csr_write_head(qp, q, new_head); +} + +static int +qat_qp_csr_setup(struct qat_pci_device *qat_dev, + void *io_addr, struct qat_qp *qp) +{ + struct qat_qp_hw_spec_funcs *ops = + qat_qp_hw_spec[qat_dev->qat_dev_gen]; + + RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_csr_setup, + -ENOTSUP); + ops->qat_qp_csr_setup(qat_dev, io_addr, qp); + return 0; +} + + static inline void rxq_free_desc(enum qat_device_gen qat_dev_gen, struct qat_qp *qp, struct qat_queue *q) @@ -703,15 +488,35 @@ void rxq_free_desc(enum qat_device_gen qat_dev_gen, struct qat_qp *qp, q->nb_processed_responses = 
0; q->csr_head = new_head; - /* write current head to CSR */ - if (qat_dev_gen == QAT_GEN4) { - WRITE_CSR_RING_HEAD_GEN4VF(qp->mmap_bar_addr, - q->hw_bundle_number, q->hw_queue_number, new_head); - } else { - WRITE_CSR_RING_HEAD(qp->mmap_bar_addr, q->hw_bundle_number, - q->hw_queue_number, new_head); - } + qat_qp_csr_write_head(qat_dev_gen, qp, q, new_head); +} + +static int qat_qp_check_queue_alignment(uint64_t phys_addr, + uint32_t queue_size_bytes) +{ + if (((queue_size_bytes - 1) & phys_addr) != 0) + return -EINVAL; + return 0; +} + +static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num, + uint32_t *p_queue_size_for_csr) +{ + uint8_t i = ADF_MIN_RING_SIZE; + for (; i <= ADF_MAX_RING_SIZE; i++) + if ((msg_size * msg_num) == + (uint32_t)ADF_SIZE_TO_RING_SIZE_IN_BYTES(i)) { + *p_queue_size_for_csr = i; + return 0; + } + QAT_LOG(ERR, "Invalid ring size %d", msg_size * msg_num); + return -EINVAL; +} + +static inline uint32_t adf_modulo(uint32_t data, uint32_t modulo_mask) +{ + return data & modulo_mask; } uint16_t diff --git a/drivers/common/qat/qat_qp.h b/drivers/common/qat/qat_qp.h index e1627197fa..ffba3a3615 100644 --- a/drivers/common/qat/qat_qp.h +++ b/drivers/common/qat/qat_qp.h @@ -24,6 +24,8 @@ struct qat_pci_device; #define QAT_GEN4_BUNDLE_NUM 4 #define QAT_GEN4_QPS_PER_BUNDLE_NUM 1 +#define ADF_ARB_RINGSRVARBEN_OFFSET 0x19C + /** * Structure with data needed for creation of queue pair. 
*/ @@ -96,9 +98,6 @@ struct qat_qp { uint16_t min_enq_burst_threshold; } __rte_cache_aligned; -extern const struct qat_qp_hw_data qat_gen1_qps[][ADF_MAX_QPS_ON_ANY_SERVICE]; -extern const struct qat_qp_hw_data qat_gen3_qps[][ADF_MAX_QPS_ON_ANY_SERVICE]; - uint16_t qat_enqueue_op_burst(void *qp, void **ops, uint16_t nb_ops); @@ -129,11 +128,43 @@ qat_comp_process_response(void **op __rte_unused, uint8_t *resp __rte_unused, void *op_cookie __rte_unused, uint64_t *dequeue_err_count __rte_unused); -int -qat_select_valid_queue(struct qat_pci_device *qat_dev, int qp_id, - enum qat_service_type service_type); - int qat_read_qp_config(struct qat_pci_device *qat_dev); +typedef int (*qat_qp_rings_per_service_t) + (struct qat_pci_device *, enum qat_service_type); +typedef void (*qat_qp_build_ring_base_t) + (void *, struct qat_queue *); +typedef void (*qat_qp_adf_arb_enable_t) + (const struct qat_queue *, void *, + rte_spinlock_t *); +typedef void (*qat_qp_adf_arb_disable_t) + (const struct qat_queue *, void *, + rte_spinlock_t *); +typedef void (*qat_qp_adf_configure_queues_t)(struct qat_qp *); + +typedef void (*qat_qp_csr_write_tail_t)(struct qat_qp *qp, + struct qat_queue *q); + +typedef void (*qat_qp_csr_write_head_t)(struct qat_qp *qp, + struct qat_queue *q, + uint32_t new_head); + +typedef void (*qat_qp_csr_setup_t)(struct qat_pci_device*, + void *, struct qat_qp *); + +struct qat_qp_hw_spec_funcs { + qat_qp_rings_per_service_t qat_qp_rings_per_service; + qat_qp_build_ring_base_t qat_qp_build_ring_base; + qat_qp_adf_arb_enable_t qat_qp_adf_arb_enable; + qat_qp_adf_arb_disable_t qat_qp_adf_arb_disable; + qat_qp_adf_configure_queues_t qat_qp_adf_configure_queues; + qat_qp_csr_write_tail_t qat_qp_csr_write_tail; + qat_qp_csr_write_head_t qat_qp_csr_write_head; + qat_qp_csr_setup_t qat_qp_csr_setup; +}; + +extern struct +qat_qp_hw_spec_funcs *qat_qp_hw_spec[]; + #endif /* _QAT_QP_H_ */ From patchwork Wed Sep 1 14:47:27 2021 Content-Type: text/plain; charset="utf-8" 
MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Arkadiusz Kusztal X-Patchwork-Id: 97701 X-Patchwork-Delegate: gakhil@marvell.com From: Arek Kusztal To: dev@dpdk.org Cc: gakhil@marvell.com, roy.fan.zhang@intel.com, Arek Kusztal Date: Wed, 1 Sep 2021 15:47:27 +0100 Message-Id: <20210901144729.26784-3-arkadiuszx.kusztal@intel.com> In-Reply-To: <20210901144729.26784-1-arkadiuszx.kusztal@intel.com> References: <20210901144729.26784-1-arkadiuszx.kusztal@intel.com> Subject: [dpdk-dev] [PATCH 2/4] crypto/qat: isolate implementations of symmetric operations This commit isolates implementations of symmetric part in QAT PMD. 
When changing or expanding the code of a particular generation, the code of other generations should be left intact. Generation-specific code in drivers/crypto is invisible to the code of other generations. Signed-off-by: Arek Kusztal --- drivers/common/qat/meson.build | 6 +- drivers/crypto/qat/dev/qat_sym_pmd_gen1.c | 55 ++++++++++ drivers/crypto/qat/dev/qat_sym_pmd_gen1.h | 15 +++ drivers/crypto/qat/dev/qat_sym_pmd_gen2.c | 80 +++++++++++++++ drivers/crypto/qat/dev/qat_sym_pmd_gen3.c | 39 +++++++ drivers/crypto/qat/dev/qat_sym_pmd_gen4.c | 82 +++++++++++++++ drivers/crypto/qat/qat_sym_pmd.c | 120 +++++----------------- drivers/crypto/qat/qat_sym_pmd.h | 23 +++++ 8 files changed, 322 insertions(+), 98 deletions(-) create mode 100644 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c create mode 100644 drivers/crypto/qat/dev/qat_sym_pmd_gen1.h create mode 100644 drivers/crypto/qat/dev/qat_sym_pmd_gen2.c create mode 100644 drivers/crypto/qat/dev/qat_sym_pmd_gen3.c create mode 100644 drivers/crypto/qat/dev/qat_sym_pmd_gen4.c diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build index 532e0fabb3..de54004b4c 100644 --- a/drivers/common/qat/meson.build +++ b/drivers/common/qat/meson.build @@ -69,7 +69,11 @@ endif if qat_crypto foreach f: ['qat_sym_pmd.c', 'qat_sym.c', 'qat_sym_session.c', - 'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c'] + 'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c', + 'dev/qat_sym_pmd_gen1.c', + 'dev/qat_sym_pmd_gen2.c', + 'dev/qat_sym_pmd_gen3.c', + 'dev/qat_sym_pmd_gen4.c'] sources += files(join_paths(qat_crypto_relpath, f)) endforeach deps += ['security'] diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c new file mode 100644 index 0000000000..4a4dc9ab55 --- /dev/null +++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c @@ -0,0 +1,55 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 Intel Corporation + */ + +#include +#include +#include "qat_sym_pmd.h" +#include "qat_sym_session.h" +#include "qat_sym.h" 
+#include "qat_sym_pmd_gen1.h" + +int qat_sym_qp_setup_gen1(struct rte_cryptodev *dev, uint16_t qp_id, + const struct rte_cryptodev_qp_conf *qp_conf, + int socket_id) +{ + struct qat_qp_config qat_qp_conf = { }; + const struct qat_qp_hw_data *sym_hw_qps = NULL; + struct qat_sym_dev_private *qat_sym_private = dev->data->dev_private; + struct qat_pci_device *qat_dev = qat_sym_private->qat_dev; + + sym_hw_qps = qat_gen_config[qat_dev->qat_dev_gen] + .qp_hw_data[QAT_SERVICE_SYMMETRIC]; + qat_qp_conf.hw = sym_hw_qps + qp_id; + + return qat_sym_qp_setup(dev, qp_id, qp_conf, qat_qp_conf, socket_id); +} + +struct rte_cryptodev_ops crypto_qat_gen1_ops = { + + /* Device related operations */ + .dev_configure = qat_sym_dev_config, + .dev_start = qat_sym_dev_start, + .dev_stop = qat_sym_dev_stop, + .dev_close = qat_sym_dev_close, + .dev_infos_get = qat_sym_dev_info_get, + + .stats_get = qat_sym_stats_get, + .stats_reset = qat_sym_stats_reset, + .queue_pair_setup = qat_sym_qp_setup_gen1, + .queue_pair_release = qat_sym_qp_release, + + /* Crypto related operations */ + .sym_session_get_size = qat_sym_session_get_private_size, + .sym_session_configure = qat_sym_session_configure, + .sym_session_clear = qat_sym_session_clear, + + /* Raw data-path API related operations */ + .sym_get_raw_dp_ctx_size = qat_sym_get_dp_ctx_size, + .sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx, +}; + +RTE_INIT(qat_sym_pmd_gen1_init) +{ + QAT_CRYPTODEV_OPS[QAT_GEN1] = &crypto_qat_gen1_ops; +} diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.h b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.h new file mode 100644 index 0000000000..397faab0b0 --- /dev/null +++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.h @@ -0,0 +1,15 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 Intel Corporation + */ + +#include +#include + +#ifndef _QAT_DEV_GEN_H_ +#define _QAT_DEV_GEN_H_ + +int qat_sym_qp_setup_gen1(struct rte_cryptodev *dev, uint16_t qp_id, + const struct rte_cryptodev_qp_conf *qp_conf, + int 
socket_id); + +#endif diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen2.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen2.c new file mode 100644 index 0000000000..6344d7de13 --- /dev/null +++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen2.c @@ -0,0 +1,80 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 Intel Corporation + */ + +#include +#include +#include "qat_sym_pmd.h" +#include "qat_sym_session.h" +#include "qat_sym.h" + +#define MIXED_CRYPTO_MIN_FW_VER 0x04090000 + +static int qat_sym_qp_setup_gen2(struct rte_cryptodev *dev, uint16_t qp_id, + const struct rte_cryptodev_qp_conf *qp_conf, + int socket_id) +{ + int ret; + struct qat_qp_config qat_qp_conf = { }; + const struct qat_qp_hw_data *sym_hw_qps = NULL; + struct qat_sym_dev_private *qat_sym_private = dev->data->dev_private; + struct qat_pci_device *qat_dev = qat_sym_private->qat_dev; + struct qat_qp *qp; + + sym_hw_qps = qat_gen_config[qat_dev->qat_dev_gen] + .qp_hw_data[QAT_SERVICE_SYMMETRIC]; + qat_qp_conf.hw = sym_hw_qps + qp_id; + + if (qat_sym_qp_setup(dev, qp_id, qp_conf, qat_qp_conf, socket_id)) { + return -1; + } + qp = qat_sym_private->qat_dev->qps_in_use[QAT_SERVICE_SYMMETRIC][qp_id]; + ret = qat_cq_get_fw_version(qp); + if (ret < 0) { + qat_sym_qp_release(dev, qp_id); + return ret; + } + + if (ret != 0) + QAT_LOG(DEBUG, "QAT firmware version: %d.%d.%d", + (ret >> 24) & 0xff, + (ret >> 16) & 0xff, + (ret >> 8) & 0xff); + else + QAT_LOG(DEBUG, "unknown QAT firmware version"); + + /* set capabilities based on the fw version */ + qat_sym_private->internal_capabilities = QAT_SYM_CAP_VALID | + ((ret >= MIXED_CRYPTO_MIN_FW_VER) ? 
+ QAT_SYM_CAP_MIXED_CRYPTO : 0); + return 0; +} + +struct rte_cryptodev_ops crypto_qat_gen2_ops = { + + /* Device related operations */ + .dev_configure = qat_sym_dev_config, + .dev_start = qat_sym_dev_start, + .dev_stop = qat_sym_dev_stop, + .dev_close = qat_sym_dev_close, + .dev_infos_get = qat_sym_dev_info_get, + + .stats_get = qat_sym_stats_get, + .stats_reset = qat_sym_stats_reset, + .queue_pair_setup = qat_sym_qp_setup_gen2, + .queue_pair_release = qat_sym_qp_release, + + /* Crypto related operations */ + .sym_session_get_size = qat_sym_session_get_private_size, + .sym_session_configure = qat_sym_session_configure, + .sym_session_clear = qat_sym_session_clear, + + /* Raw data-path API related operations */ + .sym_get_raw_dp_ctx_size = qat_sym_get_dp_ctx_size, + .sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx, +}; + +RTE_INIT(qat_sym_pmd_gen2) +{ + QAT_CRYPTODEV_OPS[QAT_GEN2] = &crypto_qat_gen2_ops; +} diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen3.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen3.c new file mode 100644 index 0000000000..f8488cd122 --- /dev/null +++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen3.c @@ -0,0 +1,39 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 Intel Corporation + */ + +#include +#include +#include "qat_sym_pmd.h" +#include "qat_sym_session.h" +#include "qat_sym.h" +#include "qat_sym_pmd_gen1.h" + +struct rte_cryptodev_ops crypto_qat_gen3_ops = { + + /* Device related operations */ + .dev_configure = qat_sym_dev_config, + .dev_start = qat_sym_dev_start, + .dev_stop = qat_sym_dev_stop, + .dev_close = qat_sym_dev_close, + .dev_infos_get = qat_sym_dev_info_get, + + .stats_get = qat_sym_stats_get, + .stats_reset = qat_sym_stats_reset, + .queue_pair_setup = qat_sym_qp_setup_gen1, + .queue_pair_release = qat_sym_qp_release, + + /* Crypto related operations */ + .sym_session_get_size = qat_sym_session_get_private_size, + .sym_session_configure = qat_sym_session_configure, + .sym_session_clear = qat_sym_session_clear, 
+ + /* Raw data-path API related operations */ + .sym_get_raw_dp_ctx_size = qat_sym_get_dp_ctx_size, + .sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx, +}; + +RTE_INIT(qat_sym_pmd_gen3_init) +{ + QAT_CRYPTODEV_OPS[QAT_GEN3] = &crypto_qat_gen3_ops; +} diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen4.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen4.c new file mode 100644 index 0000000000..9470e78fb1 --- /dev/null +++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen4.c @@ -0,0 +1,82 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 Intel Corporation + */ + +#include +#include +#include "qat_sym_pmd.h" +#include "qat_sym_session.h" +#include "qat_sym.h" + +static int +qat_select_valid_queue(struct qat_pci_device *qat_dev, int qp_id, + enum qat_service_type service_type) +{ + int i = 0, valid_qps = 0; + + for (; i < QAT_GEN4_BUNDLE_NUM; i++) { + if (qat_dev->qp_gen4_data[i][0].service_type == + service_type) { + if (valid_qps == qp_id) + return i; + ++valid_qps; + } + } + return -1; +} + +static int qat_sym_qp_setup_gen4(struct rte_cryptodev *dev, uint16_t qp_id, + const struct rte_cryptodev_qp_conf *qp_conf, + int socket_id) +{ + int ret = 0; + int ring_pair; + struct qat_qp_config qat_qp_conf = { }; + struct qat_sym_dev_private *qat_sym_private = dev->data->dev_private; + struct qat_pci_device *qat_dev = qat_sym_private->qat_dev; + + ring_pair = + qat_select_valid_queue(qat_sym_private->qat_dev, qp_id, + QAT_SERVICE_SYMMETRIC); + if (ring_pair < 0) { + QAT_LOG(ERR, + "qp_id %u invalid for this device, not enough services allocated for GEN4 device", + qp_id); + return -EINVAL; + } + qat_qp_conf.hw = + &qat_dev->qp_gen4_data[ring_pair][0]; + + ret = qat_sym_qp_setup(dev, qp_id, qp_conf, qat_qp_conf, socket_id); + + return ret; +} + +struct rte_cryptodev_ops crypto_qat_gen4_ops = { + + /* Device related operations */ + .dev_configure = qat_sym_dev_config, + .dev_start = qat_sym_dev_start, + .dev_stop = qat_sym_dev_stop, + .dev_close = qat_sym_dev_close, + 
.dev_infos_get = qat_sym_dev_info_get, + + .stats_get = qat_sym_stats_get, + .stats_reset = qat_sym_stats_reset, + .queue_pair_setup = qat_sym_qp_setup_gen4, + .queue_pair_release = qat_sym_qp_release, + + /* Crypto related operations */ + .sym_session_get_size = qat_sym_session_get_private_size, + .sym_session_configure = qat_sym_session_configure, + .sym_session_clear = qat_sym_session_clear, + + /* Raw data-path API related operations */ + .sym_get_raw_dp_ctx_size = qat_sym_get_dp_ctx_size, + .sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx, +}; + +RTE_INIT(qat_sym_pmd_gen4_init) +{ + QAT_CRYPTODEV_OPS[QAT_GEN4] = &crypto_qat_gen4_ops; +} diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c index 6868e5f001..ee1a7e52bc 100644 --- a/drivers/crypto/qat/qat_sym_pmd.c +++ b/drivers/crypto/qat/qat_sym_pmd.c @@ -16,6 +16,7 @@ #include "qat_sym.h" #include "qat_sym_session.h" #include "qat_sym_pmd.h" +#include "qat_qp.h" #define MIXED_CRYPTO_MIN_FW_VER 0x04090000 @@ -59,26 +60,25 @@ static const struct rte_security_capability qat_security_capabilities[] = { }; #endif -static int qat_sym_qp_release(struct rte_cryptodev *dev, - uint16_t queue_pair_id); +struct rte_cryptodev_ops *QAT_CRYPTODEV_OPS[QAT_DEV_GEN_NO]; -static int qat_sym_dev_config(__rte_unused struct rte_cryptodev *dev, +int qat_sym_dev_config(__rte_unused struct rte_cryptodev *dev, __rte_unused struct rte_cryptodev_config *config) { return 0; } -static int qat_sym_dev_start(__rte_unused struct rte_cryptodev *dev) +int qat_sym_dev_start(__rte_unused struct rte_cryptodev *dev) { return 0; } -static void qat_sym_dev_stop(__rte_unused struct rte_cryptodev *dev) +void qat_sym_dev_stop(__rte_unused struct rte_cryptodev *dev) { return; } -static int qat_sym_dev_close(struct rte_cryptodev *dev) +int qat_sym_dev_close(struct rte_cryptodev *dev) { int i, ret; @@ -91,7 +91,7 @@ static int qat_sym_dev_close(struct rte_cryptodev *dev) return 0; } -static void qat_sym_dev_info_get(struct 
rte_cryptodev *dev, +void qat_sym_dev_info_get(struct rte_cryptodev *dev, struct rte_cryptodev_info *info) { struct qat_sym_dev_private *internals = dev->data->dev_private; @@ -108,7 +108,7 @@ static void qat_sym_dev_info_get(struct rte_cryptodev *dev, } } -static void qat_sym_stats_get(struct rte_cryptodev *dev, +void qat_sym_stats_get(struct rte_cryptodev *dev, struct rte_cryptodev_stats *stats) { struct qat_common_stats qat_stats = {0}; @@ -127,7 +127,7 @@ static void qat_sym_stats_get(struct rte_cryptodev *dev, stats->dequeue_err_count = qat_stats.dequeue_err_count; } -static void qat_sym_stats_reset(struct rte_cryptodev *dev) +void qat_sym_stats_reset(struct rte_cryptodev *dev) { struct qat_sym_dev_private *qat_priv; @@ -141,7 +141,7 @@ static void qat_sym_stats_reset(struct rte_cryptodev *dev) } -static int qat_sym_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id) +int qat_sym_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id) { struct qat_sym_dev_private *qat_private = dev->data->dev_private; enum qat_device_gen qat_dev_gen = qat_private->qat_dev->qat_dev_gen; @@ -156,70 +156,46 @@ static int qat_sym_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id) &(dev->data->queue_pairs[queue_pair_id])); } -static int qat_sym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id, - const struct rte_cryptodev_qp_conf *qp_conf, +int qat_sym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id, + const struct rte_cryptodev_qp_conf *qp_conf, struct qat_qp_config qat_qp_conf, int socket_id) { struct qat_qp *qp; int ret = 0; uint32_t i; - struct qat_qp_config qat_qp_conf; - const struct qat_qp_hw_data *sym_hw_qps = NULL; - const struct qat_qp_hw_data *qp_hw_data = NULL; struct qat_qp **qp_addr = (struct qat_qp **)&(dev->data->queue_pairs[qp_id]); - struct qat_sym_dev_private *qat_private = dev->data->dev_private; - struct qat_pci_device *qat_dev = qat_private->qat_dev; - - if (qat_dev->qat_dev_gen == QAT_GEN4) { - int ring_pair = - 
qat_select_valid_queue(qat_dev, qp_id, - QAT_SERVICE_SYMMETRIC); - - if (ring_pair < 0) { - QAT_LOG(ERR, - "qp_id %u invalid for this device, no enough services allocated for GEN4 device", - qp_id); - return -EINVAL; - } - sym_hw_qps = - &qat_dev->qp_gen4_data[0][0]; - qp_hw_data = - &qat_dev->qp_gen4_data[ring_pair][0]; - } else { - sym_hw_qps = qat_gen_config[qat_dev->qat_dev_gen] - .qp_hw_data[QAT_SERVICE_SYMMETRIC]; - qp_hw_data = sym_hw_qps + qp_id; - } + struct qat_sym_dev_private *qat_sym_private = dev->data->dev_private; + struct qat_pci_device *qat_dev = qat_sym_private->qat_dev; /* If qp is already in use free ring memory and qp metadata. */ if (*qp_addr != NULL) { ret = qat_sym_qp_release(dev, qp_id); if (ret < 0) - return ret; + return -EBUSY; } if (qp_id >= qat_qps_per_service(qat_dev, QAT_SERVICE_SYMMETRIC)) { QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id); return -EINVAL; } - qat_qp_conf.hw = qp_hw_data; - qat_qp_conf.cookie_size = sizeof(struct qat_sym_op_cookie); - qat_qp_conf.nb_descriptors = qp_conf->nb_descriptors; + if (qat_qp_conf.cookie_size == 0) + qat_qp_conf.cookie_size = sizeof(struct qat_sym_op_cookie); + if (qat_qp_conf.nb_descriptors == 0) + qat_qp_conf.nb_descriptors = qp_conf->nb_descriptors; qat_qp_conf.socket_id = socket_id; qat_qp_conf.service_str = "sym"; - ret = qat_qp_setup(qat_private->qat_dev, qp_addr, qp_id, &qat_qp_conf); + ret = qat_qp_setup(qat_dev, qp_addr, qp_id, &qat_qp_conf); if (ret != 0) return ret; /* store a link to the qp in the qat_pci_device */ - qat_private->qat_dev->qps_in_use[QAT_SERVICE_SYMMETRIC][qp_id] - = *qp_addr; + qat_dev->qps_in_use[QAT_SERVICE_SYMMETRIC][qp_id] = *qp_addr; qp = (struct qat_qp *)*qp_addr; - qp->min_enq_burst_threshold = qat_private->min_enq_burst_threshold; + qp->min_enq_burst_threshold = qat_sym_private->min_enq_burst_threshold; for (i = 0; i < qp->nb_descriptors; i++) { @@ -240,61 +216,11 @@ static int qat_sym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id, 
rte_mempool_virt2iova(cookie) + offsetof(struct qat_sym_op_cookie, opt.spc_gmac.cd_cipher); - - } - - /* Get fw version from QAT (GEN2), skip if we've got it already */ - if (qp->qat_dev_gen == QAT_GEN2 && !(qat_private->internal_capabilities - & QAT_SYM_CAP_VALID)) { - ret = qat_cq_get_fw_version(qp); - - if (ret < 0) { - qat_sym_qp_release(dev, qp_id); - return ret; - } - - if (ret != 0) - QAT_LOG(DEBUG, "QAT firmware version: %d.%d.%d", - (ret >> 24) & 0xff, - (ret >> 16) & 0xff, - (ret >> 8) & 0xff); - else - QAT_LOG(DEBUG, "unknown QAT firmware version"); - - /* set capabilities based on the fw version */ - qat_private->internal_capabilities = QAT_SYM_CAP_VALID | - ((ret >= MIXED_CRYPTO_MIN_FW_VER) ? - QAT_SYM_CAP_MIXED_CRYPTO : 0); - ret = 0; } return ret; } -static struct rte_cryptodev_ops crypto_qat_ops = { - - /* Device related operations */ - .dev_configure = qat_sym_dev_config, - .dev_start = qat_sym_dev_start, - .dev_stop = qat_sym_dev_stop, - .dev_close = qat_sym_dev_close, - .dev_infos_get = qat_sym_dev_info_get, - - .stats_get = qat_sym_stats_get, - .stats_reset = qat_sym_stats_reset, - .queue_pair_setup = qat_sym_qp_setup, - .queue_pair_release = qat_sym_qp_release, - - /* Crypto related operations */ - .sym_session_get_size = qat_sym_session_get_private_size, - .sym_session_configure = qat_sym_session_configure, - .sym_session_clear = qat_sym_session_clear, - - /* Raw data-path API related operations */ - .sym_get_raw_dp_ctx_size = qat_sym_get_dp_ctx_size, - .sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx, -}; - #ifdef RTE_LIB_SECURITY static const struct rte_security_capability * qat_security_cap_get(void *device __rte_unused) @@ -397,7 +323,7 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev, qat_dev_instance->sym_rte_dev.name = cryptodev->data->name; cryptodev->driver_id = qat_sym_driver_id; - cryptodev->dev_ops = &crypto_qat_ops; + cryptodev->dev_ops = QAT_CRYPTODEV_OPS[qat_pci_dev->qat_dev_gen]; cryptodev->enqueue_burst = 
qat_sym_pmd_enqueue_op_burst; cryptodev->dequeue_burst = qat_sym_pmd_dequeue_op_burst; diff --git a/drivers/crypto/qat/qat_sym_pmd.h b/drivers/crypto/qat/qat_sym_pmd.h index e0992cbe27..f676a296e4 100644 --- a/drivers/crypto/qat/qat_sym_pmd.h +++ b/drivers/crypto/qat/qat_sym_pmd.h @@ -15,6 +15,7 @@ #include "qat_sym_capabilities.h" #include "qat_device.h" +#include "qat_qp.h" /** Intel(R) QAT Symmetric Crypto PMD driver name */ #define CRYPTODEV_NAME_QAT_SYM_PMD crypto_qat @@ -25,6 +26,8 @@ extern uint8_t qat_sym_driver_id; +extern struct rte_cryptodev_ops *QAT_CRYPTODEV_OPS[]; + /** private data structure for a QAT device. * This QAT device is a device offering only symmetric crypto service, * there can be one of these on each qat_pci_device (VF). @@ -49,5 +52,25 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev, int qat_sym_dev_destroy(struct qat_pci_device *qat_pci_dev); +int qat_sym_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id); + +int qat_sym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id, + const struct rte_cryptodev_qp_conf *qp_conf, struct qat_qp_config qat_qp_conf, + int socket_id); + +void qat_sym_stats_reset(struct rte_cryptodev *dev); + +void qat_sym_stats_get(struct rte_cryptodev *dev, + struct rte_cryptodev_stats *stats); + +void qat_sym_dev_info_get(struct rte_cryptodev *dev, + struct rte_cryptodev_info *info); + +int qat_sym_dev_config(__rte_unused struct rte_cryptodev *dev, + __rte_unused struct rte_cryptodev_config *config); +int qat_sym_dev_close(struct rte_cryptodev *dev); +void qat_sym_dev_stop(__rte_unused struct rte_cryptodev *dev); +int qat_sym_dev_start(__rte_unused struct rte_cryptodev *dev); + #endif #endif /* _QAT_SYM_PMD_H_ */ From patchwork Wed Sep 1 14:47:28 2021 X-Patchwork-Submitter: Arkadiusz Kusztal X-Patchwork-Id: 97702 X-Patchwork-Delegate: gakhil@marvell.com
From: Arek Kusztal To: dev@dpdk.org Cc: gakhil@marvell.com, roy.fan.zhang@intel.com, Arek Kusztal Date: Wed, 1 Sep 2021 15:47:28 +0100 Message-Id: <20210901144729.26784-4-arkadiuszx.kusztal@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20210901144729.26784-1-arkadiuszx.kusztal@intel.com> References: <20210901144729.26784-1-arkadiuszx.kusztal@intel.com> Subject: [dpdk-dev] [PATCH 3/4] crypto/qat: move capabilities initialization to spec files Move the static capability structs of the particular generations into separate translation units so that they can be isolated from each other.
Signed-off-by: Arek Kusztal --- drivers/crypto/qat/dev/qat_sym_pmd_gen1.c | 27 ++++++++- drivers/crypto/qat/dev/qat_sym_pmd_gen2.c | 25 ++++++++- drivers/crypto/qat/dev/qat_sym_pmd_gen3.c | 26 ++++++++- drivers/crypto/qat/dev/qat_sym_pmd_gen4.c | 24 +++++++- drivers/crypto/qat/qat_sym_pmd.c | 68 +++++++---------------- drivers/crypto/qat/qat_sym_pmd.h | 19 ++++++- 6 files changed, 135 insertions(+), 54 deletions(-) diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c index 4a4dc9ab55..40ec77f846 100644 --- a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c +++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c @@ -8,6 +8,12 @@ #include "qat_sym_session.h" #include "qat_sym.h" #include "qat_sym_pmd_gen1.h" +#include "qat_sym_capabilities.h" + +static struct rte_cryptodev_capabilities qat_gen1_sym_capabilities[] = { + QAT_BASE_GEN1_SYM_CAPABILITIES, + RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST() +}; int qat_sym_qp_setup_gen1(struct rte_cryptodev *dev, uint16_t qp_id, const struct rte_cryptodev_qp_conf *qp_conf, @@ -49,7 +55,24 @@ struct rte_cryptodev_ops crypto_qat_gen1_ops = { .sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx, }; -RTE_INIT(qat_sym_pmd_gen1_init) +static struct +qat_capabilities_info get_capabilties_gen1( + struct qat_pci_device *qat_dev __rte_unused) { - QAT_CRYPTODEV_OPS[QAT_GEN1] = &crypto_qat_gen1_ops; + struct qat_capabilities_info capa_info; + + capa_info.data = qat_gen1_sym_capabilities; + capa_info.size = sizeof(qat_gen1_sym_capabilities); + return capa_info; } + +static struct +qat_sym_pmd_dev_ops qat_sym_pmd_ops_gen1 = { + .qat_sym_get_capabilities = get_capabilties_gen1, +}; + +RTE_INIT(qat_sym_pmd_gen1_init) +{ + QAT_CRYPTODEV_OPS[QAT_GEN1] = &crypto_qat_gen1_ops; + qat_sym_pmd_ops[QAT_GEN1] = &qat_sym_pmd_ops_gen1; +} \ No newline at end of file diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen2.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen2.c index 6344d7de13..18dfca3a84 100644 --- 
a/drivers/crypto/qat/dev/qat_sym_pmd_gen2.c +++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen2.c @@ -7,9 +7,16 @@ #include "qat_sym_pmd.h" #include "qat_sym_session.h" #include "qat_sym.h" +#include "qat_sym_capabilities.h" #define MIXED_CRYPTO_MIN_FW_VER 0x04090000 +static struct rte_cryptodev_capabilities qat_gen2_sym_capabilities[] = { + QAT_BASE_GEN1_SYM_CAPABILITIES, + QAT_EXTRA_GEN2_SYM_CAPABILITIES, + RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST() +}; + static int qat_sym_qp_setup_gen2(struct rte_cryptodev *dev, uint16_t qp_id, const struct rte_cryptodev_qp_conf *qp_conf, int socket_id) @@ -74,7 +81,23 @@ struct rte_cryptodev_ops crypto_qat_gen2_ops = { .sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx, }; +static struct +qat_capabilities_info get_capabilties_gen2( + struct qat_pci_device *qat_dev __rte_unused) +{ + struct qat_capabilities_info capa_info; + capa_info.data = qat_gen2_sym_capabilities; + capa_info.size = sizeof(qat_gen2_sym_capabilities); + return capa_info; +} + +static struct +qat_sym_pmd_dev_ops qat_sym_pmd_ops_gen2 = { + .qat_sym_get_capabilities = get_capabilties_gen2, +}; + RTE_INIT(qat_sym_pmd_gen2) { - QAT_CRYPTODEV_OPS[QAT_GEN2] = &crypto_qat_gen2_ops; + QAT_CRYPTODEV_OPS[QAT_GEN2] = &crypto_qat_gen2_ops; + qat_sym_pmd_ops[QAT_GEN2] = &qat_sym_pmd_ops_gen2; } diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen3.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen3.c index f8488cd122..e914a09362 100644 --- a/drivers/crypto/qat/dev/qat_sym_pmd_gen3.c +++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen3.c @@ -9,6 +9,13 @@ #include "qat_sym.h" #include "qat_sym_pmd_gen1.h" +static struct rte_cryptodev_capabilities qat_gen3_sym_capabilities[] = { + QAT_BASE_GEN1_SYM_CAPABILITIES, + QAT_EXTRA_GEN2_SYM_CAPABILITIES, + QAT_EXTRA_GEN3_SYM_CAPABILITIES, + RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST() +}; + struct rte_cryptodev_ops crypto_qat_gen3_ops = { /* Device related operations */ @@ -33,7 +40,24 @@ struct rte_cryptodev_ops crypto_qat_gen3_ops = { 
.sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx, }; +static struct +qat_capabilities_info get_capabilties_gen3( + struct qat_pci_device *qat_dev __rte_unused) +{ + struct qat_capabilities_info capa_info; + capa_info.data = qat_gen3_sym_capabilities; + capa_info.size = sizeof(qat_gen3_sym_capabilities); + return capa_info; +} + +static struct +qat_sym_pmd_dev_ops qat_sym_pmd_ops_gen3 = { + .qat_sym_get_capabilities = get_capabilties_gen3, +}; + + RTE_INIT(qat_sym_pmd_gen3_init) { - QAT_CRYPTODEV_OPS[QAT_GEN3] = &crypto_qat_gen3_ops; + QAT_CRYPTODEV_OPS[QAT_GEN3] = &crypto_qat_gen3_ops; + qat_sym_pmd_ops[QAT_GEN3] = &qat_sym_pmd_ops_gen3; } diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen4.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen4.c index 9470e78fb1..834ae88d38 100644 --- a/drivers/crypto/qat/dev/qat_sym_pmd_gen4.c +++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen4.c @@ -8,6 +8,11 @@ #include "qat_sym_session.h" #include "qat_sym.h" +static struct rte_cryptodev_capabilities qat_gen4_sym_capabilities[] = { + QAT_BASE_GEN4_SYM_CAPABILITIES, + RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST() +}; + static int qat_select_valid_queue(struct qat_pci_device *qat_dev, int qp_id, enum qat_service_type service_type) @@ -76,7 +81,24 @@ struct rte_cryptodev_ops crypto_qat_gen4_ops = { .sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx, }; +static struct +qat_capabilities_info get_capabilties_gen4( + struct qat_pci_device *qat_dev __rte_unused) +{ + struct qat_capabilities_info capa_info; + + capa_info.data = qat_gen4_sym_capabilities; + capa_info.size = sizeof(qat_gen4_sym_capabilities); + return capa_info; +} + +static struct +qat_sym_pmd_dev_ops qat_sym_pmd_ops_gen4 = { + .qat_sym_get_capabilities = get_capabilties_gen4, +}; + RTE_INIT(qat_sym_pmd_gen4_init) { - QAT_CRYPTODEV_OPS[QAT_GEN4] = &crypto_qat_gen4_ops; + QAT_CRYPTODEV_OPS[QAT_GEN4] = &crypto_qat_gen4_ops; + qat_sym_pmd_ops[QAT_GEN4] = &qat_sym_pmd_ops_gen4; } diff --git a/drivers/crypto/qat/qat_sym_pmd.c 
b/drivers/crypto/qat/qat_sym_pmd.c index ee1a7e52bc..dc1dcbe34f 100644 --- a/drivers/crypto/qat/qat_sym_pmd.c +++ b/drivers/crypto/qat/qat_sym_pmd.c @@ -22,28 +22,9 @@ uint8_t qat_sym_driver_id; -static const struct rte_cryptodev_capabilities qat_gen1_sym_capabilities[] = { - QAT_BASE_GEN1_SYM_CAPABILITIES, - RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST() -}; - -static const struct rte_cryptodev_capabilities qat_gen2_sym_capabilities[] = { - QAT_BASE_GEN1_SYM_CAPABILITIES, - QAT_EXTRA_GEN2_SYM_CAPABILITIES, - RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST() -}; - -static const struct rte_cryptodev_capabilities qat_gen3_sym_capabilities[] = { - QAT_BASE_GEN1_SYM_CAPABILITIES, - QAT_EXTRA_GEN2_SYM_CAPABILITIES, - QAT_EXTRA_GEN3_SYM_CAPABILITIES, - RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST() -}; - -static const struct rte_cryptodev_capabilities qat_gen4_sym_capabilities[] = { - QAT_BASE_GEN4_SYM_CAPABILITIES, - RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST() -}; +struct qat_capabilities_info qat_sym_capabilities[QAT_DEV_GEN_NO]; +struct rte_cryptodev_ops *QAT_CRYPTODEV_OPS[QAT_DEV_GEN_NO]; +struct qat_sym_pmd_dev_ops *qat_sym_pmd_ops[QAT_DEV_GEN_NO]; #ifdef RTE_LIB_SECURITY static const struct rte_cryptodev_capabilities @@ -62,6 +43,16 @@ static const struct rte_security_capability qat_security_capabilities[] = { struct rte_cryptodev_ops *QAT_CRYPTODEV_OPS[QAT_DEV_GEN_NO]; +static struct +qat_capabilities_info qat_sym_get_capa_info( + struct qat_pci_device *qat_dev) +{ + struct qat_sym_pmd_dev_ops *ops = + qat_sym_pmd_ops[qat_dev->qat_dev_gen]; + + return ops->qat_sym_get_capabilities(qat_dev); +} + int qat_sym_dev_config(__rte_unused struct rte_cryptodev *dev, __rte_unused struct rte_cryptodev_config *config) { @@ -83,7 +74,7 @@ int qat_sym_dev_close(struct rte_cryptodev *dev) int i, ret; for (i = 0; i < dev->data->nb_queue_pairs; i++) { - ret = qat_sym_qp_release(dev, i); + ret = dev->dev_ops->queue_pair_release(dev, i); if (ret < 0) return ret; } @@ -171,7 +162,7 @@ int 
qat_sym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id, /* If qp is already in use free ring memory and qp metadata. */ if (*qp_addr != NULL) { - ret = qat_sym_qp_release(dev, qp_id); + ret = dev->dev_ops->queue_pair_release(dev, qp_id); if (ret < 0) return -EBUSY; } @@ -283,6 +274,7 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev, char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN]; struct rte_cryptodev *cryptodev; struct qat_sym_dev_private *internals; + struct qat_capabilities_info capa_info; const struct rte_cryptodev_capabilities *capabilities; uint64_t capa_size; @@ -370,30 +362,10 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev, internals->qat_dev = qat_pci_dev; internals->sym_dev_id = cryptodev->data->dev_id; - switch (qat_pci_dev->qat_dev_gen) { - case QAT_GEN1: - capabilities = qat_gen1_sym_capabilities; - capa_size = sizeof(qat_gen1_sym_capabilities); - break; - case QAT_GEN2: - capabilities = qat_gen2_sym_capabilities; - capa_size = sizeof(qat_gen2_sym_capabilities); - break; - case QAT_GEN3: - capabilities = qat_gen3_sym_capabilities; - capa_size = sizeof(qat_gen3_sym_capabilities); - break; - case QAT_GEN4: - capabilities = qat_gen4_sym_capabilities; - capa_size = sizeof(qat_gen4_sym_capabilities); - break; - default: - QAT_LOG(DEBUG, - "QAT gen %d capabilities unknown", - qat_pci_dev->qat_dev_gen); - ret = -(EINVAL); - goto error; - } + + capa_info = qat_sym_get_capa_info(qat_pci_dev); + capabilities = capa_info.data; + capa_size = capa_info.size; internals->capa_mz = rte_memzone_lookup(capa_memz_name); if (internals->capa_mz == NULL) { diff --git a/drivers/crypto/qat/qat_sym_pmd.h b/drivers/crypto/qat/qat_sym_pmd.h index f676a296e4..a03d2a0f04 100644 --- a/drivers/crypto/qat/qat_sym_pmd.h +++ b/drivers/crypto/qat/qat_sym_pmd.h @@ -26,7 +26,24 @@ extern uint8_t qat_sym_driver_id; -extern struct rte_cryptodev_ops *QAT_CRYPTODEV_OPS[]; +struct qat_capabilities_info { + struct rte_cryptodev_capabilities *data; + uint64_t size; +}; + 
+extern struct +rte_cryptodev_ops *QAT_CRYPTODEV_OPS[]; +extern struct +qat_capabilities_info qat_sym_capabilities[]; + +typedef struct qat_capabilities_info (*get_capabilities_info_t) + (struct qat_pci_device *qat_dev); + +struct qat_sym_pmd_dev_ops { + get_capabilities_info_t qat_sym_get_capabilities; +}; + +extern struct qat_sym_pmd_dev_ops *qat_sym_pmd_ops[]; /** private data structure for a QAT device. * This QAT device is a device offering only symmetric crypto service, From patchwork Wed Sep 1 14:47:29 2021 X-Patchwork-Submitter: Arkadiusz Kusztal X-Patchwork-Id: 97703 X-Patchwork-Delegate: gakhil@marvell.com From: Arek Kusztal To: dev@dpdk.org Cc: gakhil@marvell.com, roy.fan.zhang@intel.com, Arek Kusztal Date: Wed, 1 Sep 2021 15:47:29 +0100 Message-Id: <20210901144729.26784-5-arkadiuszx.kusztal@intel.com> X-Mailer:
git-send-email 2.17.1 In-Reply-To: <20210901144729.26784-1-arkadiuszx.kusztal@intel.com> References: <20210901144729.26784-1-arkadiuszx.kusztal@intel.com> Subject: [dpdk-dev] [PATCH 4/4] common/qat: add extra data to qat pci dev Add private data to the qat_pci_device struct that will be visible only to the specific generation it belongs to. Signed-off-by: Arek Kusztal --- drivers/common/qat/dev/qat_dev_gen1.c | 7 +++ drivers/common/qat/dev/qat_dev_gen1.h | 3 ++ drivers/common/qat/dev/qat_dev_gen2.c | 1 + drivers/common/qat/dev/qat_dev_gen3.c | 1 + drivers/common/qat/dev/qat_dev_gen4.c | 31 ++++++++++- drivers/common/qat/dev/qat_dev_gen4.h | 18 +++++++ drivers/common/qat/meson.build | 2 + drivers/common/qat/qat_device.c | 66 +++++++++++++++-------- drivers/common/qat/qat_device.h | 10 ++-- drivers/common/qat/qat_qp.h | 9 ---- drivers/crypto/qat/dev/qat_sym_pmd_gen4.c | 7 ++- 11 files changed, 113 insertions(+), 42 deletions(-) create mode 100644 drivers/common/qat/dev/qat_dev_gen4.h diff --git a/drivers/common/qat/dev/qat_dev_gen1.c b/drivers/common/qat/dev/qat_dev_gen1.c index 4d60c2a051..3c7a558959 100644 --- a/drivers/common/qat/dev/qat_dev_gen1.c +++ b/drivers/common/qat/dev/qat_dev_gen1.c @@ -227,11 +227,18 @@ qat_dev_read_config_gen1(struct qat_pci_device *qat_dev __rte_unused) return 0; } +int +qat_dev_get_extra_size_gen1(void) +{ + return 0; +} + static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen1 = { .qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1, .qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1, .qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen1, .qat_dev_read_config = qat_dev_read_config_gen1, + .qat_dev_get_extra_size = qat_dev_get_extra_size_gen1, }; RTE_INIT(qat_dev_gen_gen1_init) diff --git
a/drivers/common/qat/dev/qat_dev_gen1.h b/drivers/common/qat/dev/qat_dev_gen1.h index 9bf4fcf01b..ec0af94655 100644 --- a/drivers/common/qat/dev/qat_dev_gen1.h +++ b/drivers/common/qat/dev/qat_dev_gen1.h @@ -13,6 +13,9 @@ extern const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES] [ADF_MAX_QPS_ON_ANY_SERVICE]; +int +qat_dev_get_extra_size_gen1(void); + int qat_qp_rings_per_service_gen1(struct qat_pci_device *qat_dev, enum qat_service_type service); diff --git a/drivers/common/qat/dev/qat_dev_gen2.c b/drivers/common/qat/dev/qat_dev_gen2.c index ad1b643e00..856463c06f 100644 --- a/drivers/common/qat/dev/qat_dev_gen2.c +++ b/drivers/common/qat/dev/qat_dev_gen2.c @@ -25,6 +25,7 @@ static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen2 = { .qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1, .qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen1, .qat_dev_read_config = qat_dev_read_config_gen1, + .qat_dev_get_extra_size = qat_dev_get_extra_size_gen1, }; RTE_INIT(qat_dev_gen_gen2_init) diff --git a/drivers/common/qat/dev/qat_dev_gen3.c b/drivers/common/qat/dev/qat_dev_gen3.c index 407d21576b..237712f1ef 100644 --- a/drivers/common/qat/dev/qat_dev_gen3.c +++ b/drivers/common/qat/dev/qat_dev_gen3.c @@ -63,6 +63,7 @@ static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen3 = { .qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1, .qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen1, .qat_dev_read_config = qat_dev_read_config_gen1, + .qat_dev_get_extra_size = qat_dev_get_extra_size_gen1, }; RTE_INIT(qat_dev_gen_gen3_init) diff --git a/drivers/common/qat/dev/qat_dev_gen4.c b/drivers/common/qat/dev/qat_dev_gen4.c index 6394e17dde..aecdedf375 100644 --- a/drivers/common/qat/dev/qat_dev_gen4.c +++ b/drivers/common/qat/dev/qat_dev_gen4.c @@ -10,9 +10,27 @@ #include "adf_transport_access_macros_gen4vf.h" #include "adf_pf2vf_msg.h" #include "qat_pf2vf.h" +#include "qat_dev_gen4.h" #include +struct qat_dev_gen4_extra { + struct qat_qp_hw_data 
qp_gen4_data[QAT_GEN4_BUNDLE_NUM] + [QAT_GEN4_QPS_PER_BUNDLE_NUM]; +}; + +enum qat_service_type qat_dev4_get_qp_serv( + struct qat_dev_gen4_extra *dev_extra, int ring_pair) +{ + return dev_extra->qp_gen4_data[ring_pair][0].service_type; +} + +const struct qat_qp_hw_data *qat_dev4_get_hw( + struct qat_dev_gen4_extra *dev_extra, int ring_pair) +{ + return &dev_extra->qp_gen4_data[ring_pair][0]; +} + static struct qat_pf2vf_dev qat_pf2vf_gen4 = { .pf2vf_offset = ADF_4XXXIOV_PF2VM_OFFSET, .vf2pf_offset = ADF_4XXXIOV_VM2PF_OFFSET, @@ -38,10 +56,11 @@ qat_qp_rings_per_service_gen4(struct qat_pci_device *qat_dev, enum qat_service_type service) { int i = 0, count = 0, max_ops_per_srv = 0; + struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private; max_ops_per_srv = QAT_GEN4_BUNDLE_NUM; for (i = 0, count = 0; i < max_ops_per_srv; i++) - if (qat_dev->qp_gen4_data[i][0].service_type == service) + if (dev_extra->qp_gen4_data[i][0].service_type == service) count++; return count; } @@ -51,12 +70,13 @@ qat_dev_read_config_gen4(struct qat_pci_device *qat_dev) { int i = 0; uint16_t svc = 0; + struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private; if (qat_query_svc(qat_dev, (uint8_t *)&svc)) return -EFAULT; for (; i < QAT_GEN4_BUNDLE_NUM; i++) { struct qat_qp_hw_data *hw_data = - &qat_dev->qp_gen4_data[i][0]; + &dev_extra->qp_gen4_data[i][0]; uint8_t svc1 = (svc >> (3 * i)) & 0x7; enum qat_service_type service_type = QAT_SERVICE_INVALID; @@ -239,11 +259,18 @@ qat_dev_get_misc_bar_gen4( return 0; } +static int +qat_dev_get_extra_size_gen4(void) +{ + return sizeof(struct qat_dev_gen4_extra); +} + static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen4 = { .qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen4, .qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen4, .qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen4, .qat_dev_read_config = qat_dev_read_config_gen4, + .qat_dev_get_extra_size = qat_dev_get_extra_size_gen4, }; RTE_INIT(qat_dev_gen_4_init) diff --git 
diff --git a/drivers/common/qat/dev/qat_dev_gen4.h b/drivers/common/qat/dev/qat_dev_gen4.h
new file mode 100644
index 0000000000..f588354603
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen4.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _QAT_DEV_GEN_H_
+#define _QAT_DEV_GEN_H_
+
+#include
+
+struct qat_dev_gen4_extra;
+
+enum qat_service_type qat_dev4_get_qp_serv(
+		struct qat_dev_gen4_extra *dev_extra, int ring_pair);
+
+const struct qat_qp_hw_data *qat_dev4_get_hw(
+		struct qat_dev_gen4_extra *dev_extra, int ring_pair);
+
+#endif
diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build
index de54004b4c..6c5db48944 100644
--- a/drivers/common/qat/meson.build
+++ b/drivers/common/qat/meson.build
@@ -9,6 +9,7 @@ endif
 
 qat_crypto = true
 qat_crypto_path = 'crypto/qat'
+qat_devs_path = 'dev'
 qat_crypto_relpath = '../../' + qat_crypto_path
 qat_compress = true
 qat_compress_path = 'compress/qat'
@@ -59,6 +60,7 @@ includes += include_directories(
         'qat_adf',
         qat_crypto_relpath,
         qat_compress_relpath,
+        qat_devs_path
 )
 
 if qat_compress
diff --git a/drivers/common/qat/qat_device.c b/drivers/common/qat/qat_device.c
index 030624b46d..4a33a62824 100644
--- a/drivers/common/qat/qat_device.c
+++ b/drivers/common/qat/qat_device.c
@@ -51,6 +51,16 @@ static const struct rte_pci_id pci_id_qat_map[] = {
 		{.device_id = 0},
 };
 
+static int
+qat_pci_get_extra_size(enum qat_device_gen qat_dev_gen)
+{
+	struct qat_dev_hw_spec_funcs *ops_hw =
+		qat_dev_hw_spec[qat_dev_gen];
+	RTE_FUNC_PTR_OR_ERR_RET(ops_hw->qat_dev_get_extra_size,
+		-ENOTSUP);
+	return ops_hw->qat_dev_get_extra_size();
+}
+
 static struct qat_pci_device *
 qat_pci_get_named_dev(const char *name)
 {
@@ -156,15 +166,38 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev,
 		struct qat_dev_cmd_param *qat_dev_cmd_param)
 {
 	struct qat_pci_device *qat_dev;
+	enum qat_device_gen qat_dev_gen;
 	uint8_t qat_dev_id = 0;
 	char name[QAT_DEV_NAME_MAX_LEN];
 	struct rte_devargs *devargs = pci_dev->device.devargs;
 	struct qat_dev_hw_spec_funcs *ops_hw = NULL;
 	struct rte_mem_resource *mem_resource;
+	int extra_size;
 
 	rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
 	snprintf(name+strlen(name), QAT_DEV_NAME_MAX_LEN-strlen(name),
			"_qat");
 
+	switch (pci_dev->id.device_id) {
+	case 0x0443:
+		qat_dev_gen = QAT_GEN1;
+		break;
+	case 0x37c9:
+	case 0x19e3:
+	case 0x6f55:
+	case 0x18ef:
+		qat_dev_gen = QAT_GEN2;
+		break;
+	case 0x18a1:
+		qat_dev_gen = QAT_GEN3;
+		break;
+	case 0x4941:
+		qat_dev_gen = QAT_GEN4;
+		break;
+	default:
+		QAT_LOG(ERR, "Invalid dev_id, can't determine generation");
+		return NULL;
+	}
+
 	if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
 		const struct rte_memzone *mz = rte_memzone_lookup(name);
 
@@ -194,9 +227,15 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev,
 		QAT_LOG(ERR, "Reached maximum number of QAT devices");
 		return NULL;
 	}
-
+	extra_size = qat_pci_get_extra_size(qat_dev_gen);
+	if (extra_size < 0) {
+		QAT_LOG(ERR, "Error when acquiring extra size len QAT_%d",
+			qat_dev_id);
+		return NULL;
+	}
 	qat_pci_devs[qat_dev_id].mz = rte_memzone_reserve(name,
-			sizeof(struct qat_pci_device),
+			sizeof(struct qat_pci_device) +
+			extra_size,
 			rte_socket_id(), 0);
 
 	if (qat_pci_devs[qat_dev_id].mz == NULL) {
@@ -207,30 +246,11 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev,
 
 	qat_dev = qat_pci_devs[qat_dev_id].mz->addr;
 	memset(qat_dev, 0, sizeof(*qat_dev));
+	qat_dev->dev_private = qat_dev + 1;
 	strlcpy(qat_dev->name, name, QAT_DEV_NAME_MAX_LEN);
 	qat_dev->qat_dev_id = qat_dev_id;
 	qat_pci_devs[qat_dev_id].pci_dev = pci_dev;
-	switch (pci_dev->id.device_id) {
-	case 0x0443:
-		qat_dev->qat_dev_gen = QAT_GEN1;
-		break;
-	case 0x37c9:
-	case 0x19e3:
-	case 0x6f55:
-	case 0x18ef:
-		qat_dev->qat_dev_gen = QAT_GEN2;
-		break;
-	case 0x18a1:
-		qat_dev->qat_dev_gen = QAT_GEN3;
-		break;
-	case 0x4941:
-		qat_dev->qat_dev_gen = QAT_GEN4;
-		break;
-	default:
-		QAT_LOG(ERR, "Invalid dev_id, can't determine generation");
-		rte_memzone_free(qat_pci_devs[qat_dev->qat_dev_id].mz);
-		return NULL;
-	}
+	qat_dev->qat_dev_gen = qat_dev_gen;
 
 	ops_hw = qat_dev_hw_spec[qat_dev->qat_dev_gen];
 	RTE_FUNC_PTR_OR_ERR_RET(ops_hw->qat_dev_get_misc_bar, NULL);
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index 531aa663ca..c9923cdc54 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -29,12 +29,14 @@ typedef int (*qat_dev_get_misc_bar_t)
 		(struct rte_mem_resource **, struct rte_pci_device *);
 typedef int (*qat_dev_read_config_t)
 		(struct qat_pci_device *);
+typedef int (*qat_dev_get_extra_size_t)(void);
 
 struct qat_dev_hw_spec_funcs {
 	qat_dev_reset_ring_pairs_t	qat_dev_reset_ring_pairs;
 	qat_dev_get_transport_bar_t	qat_dev_get_transport_bar;
 	qat_dev_get_misc_bar_t		qat_dev_get_misc_bar;
 	qat_dev_read_config_t		qat_dev_read_config;
+	qat_dev_get_extra_size_t	qat_dev_get_extra_size;
 };
 
 extern struct qat_dev_hw_spec_funcs *qat_dev_hw_spec[];
@@ -75,9 +77,6 @@ struct qat_device_info {
 	 */
 };
 
-extern const struct qat_qp_hw_data qat_gen1_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
-extern const struct qat_qp_hw_data qat_gen3_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
-
 extern struct qat_device_info qat_pci_devs[];
 
 struct qat_sym_dev_private;
@@ -126,11 +125,10 @@ struct qat_pci_device {
 	/* Data relating to compression service */
 	struct qat_comp_dev_private *comp_dev;
 	/**< link back to compressdev private data */
-	struct qat_qp_hw_data qp_gen4_data[QAT_GEN4_BUNDLE_NUM]
-			[QAT_GEN4_QPS_PER_BUNDLE_NUM];
-	/**< Data of ring configuration on gen4 */
 	void *misc_bar_io_addr;
 	/**< Address of misc bar */
+	void *dev_private;
+	/**< Address per generation */
 };
 
 struct qat_gen_hw_data {
diff --git a/drivers/common/qat/qat_qp.h b/drivers/common/qat/qat_qp.h
index ffba3a3615..4be54de2d9 100644
--- a/drivers/common/qat/qat_qp.h
+++ b/drivers/common/qat/qat_qp.h
@@ -38,15 +38,6 @@ struct qat_qp_hw_data {
 	uint16_t rx_msg_size;
 };
 
-/**
- * Structure with data needed for creation of queue pair on gen4.
- */
-struct qat_qp_gen4_data {
-	struct qat_qp_hw_data qat_qp_hw_data;
-	uint8_t reserved;
-	uint8_t valid;
-};
-
 /**
  * Structure with data needed for creation of queue pair.
  */
diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen4.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen4.c
index 834ae88d38..f8f795301c 100644
--- a/drivers/crypto/qat/dev/qat_sym_pmd_gen4.c
+++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen4.c
@@ -7,6 +7,7 @@
 #include "qat_sym_pmd.h"
 #include "qat_sym_session.h"
 #include "qat_sym.h"
+#include "qat_dev_gen4.h"
 
 static struct rte_cryptodev_capabilities qat_gen4_sym_capabilities[] = {
 	QAT_BASE_GEN4_SYM_CAPABILITIES,
@@ -18,9 +19,10 @@ qat_select_valid_queue(struct qat_pci_device *qat_dev, int qp_id,
 	enum qat_service_type service_type)
 {
 	int i = 0, valid_qps = 0;
+	struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private;
 
 	for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
-		if (qat_dev->qp_gen4_data[i][0].service_type ==
+		if (qat_dev4_get_qp_serv(dev_extra, i) ==
 			service_type) {
 			if (valid_qps == qp_id)
 				return i;
@@ -39,6 +41,7 @@ static int qat_sym_qp_setup_gen4(struct rte_cryptodev *dev, uint16_t qp_id,
 	struct qat_qp_config qat_qp_conf = { };
 	struct qat_sym_dev_private *qat_sym_private = dev->data->dev_private;
 	struct qat_pci_device *qat_dev = qat_sym_private->qat_dev;
+	struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private;
 
 	ring_pair = qat_select_valid_queue(qat_sym_private->qat_dev, qp_id,
@@ -50,7 +53,7 @@ static int qat_sym_qp_setup_gen4(struct rte_cryptodev *dev, uint16_t qp_id,
 		return -EINVAL;
 	}
 	qat_qp_conf.hw =
-		&qat_dev->qp_gen4_data[ring_pair][0];
+		qat_dev4_get_hw(dev_extra, ring_pair);
 
 	ret = qat_sym_qp_setup(dev, qp_id, qp_conf, qat_qp_conf,
 			socket_id);