From patchwork Fri Nov 5 00:19:22 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ji, Kai" X-Patchwork-Id: 103811 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 83462A0C61; Fri, 5 Nov 2021 01:19:44 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 3453142789; Fri, 5 Nov 2021 01:19:41 +0100 (CET) Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by mails.dpdk.org (Postfix) with ESMTP id 012C94014D for ; Fri, 5 Nov 2021 01:19:37 +0100 (CET) X-IronPort-AV: E=McAfee;i="6200,9189,10158"; a="212563716" X-IronPort-AV: E=Sophos;i="5.87,209,1631602800"; d="scan'208";a="212563716" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Nov 2021 17:19:37 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.87,209,1631602800"; d="scan'208";a="468671789" Received: from silpixa00400272.ir.intel.com (HELO silpixa00400272.ger.corp.intel.com) ([10.237.223.111]) by orsmga002.jf.intel.com with ESMTP; 04 Nov 2021 17:19:36 -0700 From: Kai Ji To: dev@dpdk.org Cc: gakhil@marvell.com, Kai Ji Date: Fri, 5 Nov 2021 00:19:22 +0000 Message-Id: <20211105001932.28784-2-kai.ji@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211105001932.28784-1-kai.ji@intel.com> References: <20211102134913.21148-1-kai.ji@intel.com> <20211105001932.28784-1-kai.ji@intel.com> Subject: [dpdk-dev] [dpdk-dev v4 01/11] common/qat: define build op request and dequeue op X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch introduces build request op and dequeue op function pointers to the QAT queue pair implementation. These two functions are assigned during QAT session generation, based on the type of the crypto operation.
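To make the mechanism concrete: the enqueue/dequeue burst paths stay generic, while the session carries whichever build/dequeue callback was selected at session-init time. The sketch below is a standalone, simplified illustration of that dispatch pattern only; the names used here (build_request_t, struct session, enqueue_burst, build_cipher_req, build_auth_req and the one-byte fake descriptors) are hypothetical stand-ins, not the driver's actual qat_op_build_request_t or qat_enqueue_op_burst code.

/* Standalone sketch of the per-session callback dispatch pattern.
 * Build with e.g. "cc sketch.c -o sketch" (illustrative only).
 */
#include <stdio.h>
#include <stdint.h>

/* Same shape of idea as qat_op_build_request_t: turn one op into one
 * descriptor slot, return 0 on success.
 */
typedef int (*build_request_t)(void *op, uint8_t *out_msg);

static int build_cipher_req(void *op, uint8_t *out_msg)
{
	(void)op;
	out_msg[0] = 0x01; /* pretend cipher descriptor */
	return 0;
}

static int build_auth_req(void *op, uint8_t *out_msg)
{
	(void)op;
	out_msg[0] = 0x02; /* pretend auth descriptor */
	return 0;
}

/* The "session" stores the builder chosen once at init time. */
struct session {
	build_request_t build_request;
};

/* Generic enqueue path: no per-op branching on the crypto type. */
static uint16_t enqueue_burst(struct session *s, void **ops,
		uint16_t nb_ops, uint8_t *ring)
{
	uint16_t i;

	for (i = 0; i < nb_ops; i++)
		if (s->build_request(ops[i], &ring[i]) != 0)
			break; /* stop at the first op that fails to build */
	return i;
}

int main(void)
{
	struct session cipher_sess = { .build_request = build_cipher_req };
	struct session auth_sess = { .build_request = build_auth_req };
	uint8_t ring[4] = { 0 };
	void *ops[2] = { NULL, NULL };

	printf("cipher: %u ops enqueued, desc 0x%02x\n",
			(unsigned)enqueue_burst(&cipher_sess, ops, 2, ring),
			(unsigned)ring[0]);
	printf("auth: %u ops enqueued, desc 0x%02x\n",
			(unsigned)enqueue_burst(&auth_sess, ops, 2, ring),
			(unsigned)ring[0]);
	return 0;
}

In this patch the real callbacks are only threaded through as parameters of qat_enqueue_op_burst()/qat_dequeue_op_burst() (marked __rte_unused for now, with the PMDs passing NULL), so the generic ring handling itself needs no per-algorithm branching.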
Signed-off-by: Kai Ji --- drivers/common/qat/qat_qp.c | 8 +++-- drivers/common/qat/qat_qp.h | 52 ++++++++++++++++++++++++++-- drivers/compress/qat/qat_comp_pmd.c | 2 +- drivers/crypto/qat/qat_asym_pmd.c | 4 +-- drivers/crypto/qat/qat_sym_pmd.c | 4 +-- drivers/crypto/qat/qat_sym_session.h | 6 ++++ 6 files changed, 67 insertions(+), 9 deletions(-) diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c index cde421eb77..a026b90ca7 100644 --- a/drivers/common/qat/qat_qp.c +++ b/drivers/common/qat/qat_qp.c @@ -550,7 +550,9 @@ adf_modulo(uint32_t data, uint32_t modulo_mask) } uint16_t -qat_enqueue_op_burst(void *qp, void **ops, uint16_t nb_ops) +qat_enqueue_op_burst(void *qp, + __rte_unused qat_op_build_request_t op_build_request, + void **ops, uint16_t nb_ops) { register struct qat_queue *queue; struct qat_qp *tmp_qp = (struct qat_qp *)qp; @@ -817,7 +819,9 @@ qat_enqueue_comp_op_burst(void *qp, void **ops, uint16_t nb_ops) } uint16_t -qat_dequeue_op_burst(void *qp, void **ops, uint16_t nb_ops) +qat_dequeue_op_burst(void *qp, void **ops, + __rte_unused qat_op_dequeue_t qat_dequeue_process_response, + uint16_t nb_ops) { struct qat_queue *rx_queue; struct qat_qp *tmp_qp = (struct qat_qp *)qp; diff --git a/drivers/common/qat/qat_qp.h b/drivers/common/qat/qat_qp.h index deafb407b3..f2ca0173de 100644 --- a/drivers/common/qat/qat_qp.h +++ b/drivers/common/qat/qat_qp.h @@ -36,6 +36,51 @@ struct qat_queue { /* number of responses processed since last CSR head write */ }; +/** + * Type definition of the qat_op_build_request_t function pointer, passed as + * an argument to the enqueue op burst; the build request function is + * assigned based on the type of the crypto op. + * + * @param in_op + * An input op pointer + * @param out_msg + * out_msg pointer + * @param op_cookie + * op cookie pointer + * @param opaque + * opaque data that may be used to store context shared between + * 2 enqueue operations. + * @param dev_gen + * qat device gen id + * @return + * - 0 if the crypto request is built successfully, + * - error code otherwise + **/ +typedef int (*qat_op_build_request_t)(void *in_op, uint8_t *out_msg, + void *op_cookie, uint64_t *opaque, enum qat_device_gen dev_gen); + +/** + * Type definition of the qat_op_dequeue_t function pointer, passed as an + * argument to the dequeue op burst; the dequeue function is assigned based + * on the type of the + * crypto op.
+ * + * @param op + * An input op pointer + * @param resp + * qat response msg pointer + * @param op_cookie + * op cookie pointer + * @param dequeue_err_count + * dequeue error counter + * @return + * - 0 if dequeue OP is successful + * - error code otherwise + **/ +typedef int (*qat_op_dequeue_t)(void **op, uint8_t *resp, void *op_cookie, + uint64_t *dequeue_err_count __rte_unused); + +#define QAT_BUILD_REQUEST_MAX_OPAQUE_SIZE 2 + struct qat_qp { void *mmap_bar_addr; struct qat_queue tx_q; @@ -44,6 +89,7 @@ struct qat_qp { struct rte_mempool *op_cookie_pool; void **op_cookies; uint32_t nb_descriptors; + uint64_t opaque[QAT_BUILD_REQUEST_MAX_OPAQUE_SIZE]; enum qat_device_gen qat_dev_gen; enum qat_service_type service_type; struct qat_pci_device *qat_dev; @@ -78,13 +124,15 @@ struct qat_qp_config { }; uint16_t -qat_enqueue_op_burst(void *qp, void **ops, uint16_t nb_ops); +qat_enqueue_op_burst(void *qp, qat_op_build_request_t op_build_request, + void **ops, uint16_t nb_ops); uint16_t qat_enqueue_comp_op_burst(void *qp, void **ops, uint16_t nb_ops); uint16_t -qat_dequeue_op_burst(void *qp, void **ops, uint16_t nb_ops); +qat_dequeue_op_burst(void *qp, void **ops, + qat_op_dequeue_t qat_dequeue_process_response, uint16_t nb_ops); int qat_qp_release(enum qat_device_gen qat_dev_gen, struct qat_qp **qp_addr); diff --git a/drivers/compress/qat/qat_comp_pmd.c b/drivers/compress/qat/qat_comp_pmd.c index 9b24d46e97..31de526056 100644 --- a/drivers/compress/qat/qat_comp_pmd.c +++ b/drivers/compress/qat/qat_comp_pmd.c @@ -620,7 +620,7 @@ static uint16_t qat_comp_pmd_dequeue_first_op_burst(void *qp, struct rte_comp_op **ops, uint16_t nb_ops) { - uint16_t ret = qat_dequeue_op_burst(qp, (void **)ops, nb_ops); + uint16_t ret = qat_dequeue_op_burst(qp, (void **)ops, NULL, nb_ops); struct qat_qp *tmp_qp = (struct qat_qp *)qp; if (ret) { diff --git a/drivers/crypto/qat/qat_asym_pmd.c b/drivers/crypto/qat/qat_asym_pmd.c index addee384e3..9a7596b227 100644 --- a/drivers/crypto/qat/qat_asym_pmd.c +++ b/drivers/crypto/qat/qat_asym_pmd.c @@ -62,13 +62,13 @@ static struct rte_cryptodev_ops crypto_qat_ops = { uint16_t qat_asym_pmd_enqueue_op_burst(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops) { - return qat_enqueue_op_burst(qp, (void **)ops, nb_ops); + return qat_enqueue_op_burst(qp, NULL, (void **)ops, nb_ops); } uint16_t qat_asym_pmd_dequeue_op_burst(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops) { - return qat_dequeue_op_burst(qp, (void **)ops, nb_ops); + return qat_dequeue_op_burst(qp, (void **)ops, NULL, nb_ops); } /* An rte_driver is needed in the registration of both the device and the driver diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c index b835245f17..28a26260fb 100644 --- a/drivers/crypto/qat/qat_sym_pmd.c +++ b/drivers/crypto/qat/qat_sym_pmd.c @@ -49,14 +49,14 @@ static uint16_t qat_sym_pmd_enqueue_op_burst(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops) { - return qat_enqueue_op_burst(qp, (void **)ops, nb_ops); + return qat_enqueue_op_burst(qp, NULL, (void **)ops, nb_ops); } static uint16_t qat_sym_pmd_dequeue_op_burst(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops) { - return qat_dequeue_op_burst(qp, (void **)ops, nb_ops); + return qat_dequeue_op_burst(qp, (void **)ops, NULL, nb_ops); } /* An rte_driver is needed in the registration of both the device and the driver diff --git a/drivers/crypto/qat/qat_sym_session.h b/drivers/crypto/qat/qat_sym_session.h index 6ebc176729..33f977a4e3 100644 --- a/drivers/crypto/qat/qat_sym_session.h +++ 
b/drivers/crypto/qat/qat_sym_session.h @@ -63,6 +63,11 @@ enum qat_sym_proto_flag { QAT_CRYPTO_PROTO_FLAG_ZUC = 4 }; +struct qat_sym_session; + +typedef int (*qat_sym_build_request_t)(void *in_op, struct qat_sym_session *ctx, + uint8_t *out_msg, void *op_cookie); + /* Common content descriptor */ struct qat_sym_cd { struct icp_qat_hw_cipher_algo_blk cipher; @@ -107,6 +112,7 @@ struct qat_sym_session { /* Some generations need different setup of counter */ uint32_t slice_types; enum qat_sym_proto_flag qat_proto_flag; + qat_sym_build_request_t build_request[2]; }; int From patchwork Fri Nov 5 00:19:23 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ji, Kai" X-Patchwork-Id: 103812 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 2D608A0C61; Fri, 5 Nov 2021 01:19:50 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id B2E1C4279C; Fri, 5 Nov 2021 01:19:44 +0100 (CET) Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by mails.dpdk.org (Postfix) with ESMTP id 01F5542710 for ; Fri, 5 Nov 2021 01:19:39 +0100 (CET) X-IronPort-AV: E=McAfee;i="6200,9189,10158"; a="212563718" X-IronPort-AV: E=Sophos;i="5.87,209,1631602800"; d="scan'208";a="212563718" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Nov 2021 17:19:38 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.87,209,1631602800"; d="scan'208";a="468671794" Received: from silpixa00400272.ir.intel.com (HELO silpixa00400272.ger.corp.intel.com) ([10.237.223.111]) by orsmga002.jf.intel.com with ESMTP; 04 Nov 2021 17:19:37 -0700 From: Kai Ji To: dev@dpdk.org Cc: gakhil@marvell.com, Kai Ji Date: Fri, 5 Nov 2021 00:19:23 +0000 Message-Id: <20211105001932.28784-3-kai.ji@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211105001932.28784-1-kai.ji@intel.com> References: <20211102134913.21148-1-kai.ji@intel.com> <20211105001932.28784-1-kai.ji@intel.com> Subject: [dpdk-dev] [dpdk-dev v4 02/11] crypto/qat: sym build op request specific implementation X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch adds the specific symmetric crypto build op request function pointer implementation for QAT generation 1. Signed-off-by: Kai Ji --- drivers/crypto/qat/dev/qat_crypto_pmd_gens.h | 830 +++++++++++++++++++ drivers/crypto/qat/dev/qat_sym_pmd_gen1.c | 186 +++++ 2 files changed, 1016 insertions(+) diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h index 67a4d2cb2c..783fdcaa34 100644 --- a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h +++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h @@ -8,14 +8,844 @@ #include #include "qat_crypto.h" #include "qat_sym_session.h" +#include "qat_sym.h" + +#define QAT_SYM_DP_GET_MAX_ENQ(q, c, n) \ + RTE_MIN((q->max_inflights - q->enqueued + q->dequeued - c), n) + +#define QAT_SYM_DP_IS_RESP_SUCCESS(resp) \ + (ICP_QAT_FW_COMN_STATUS_FLAG_OK == \ + ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(resp->comn_hdr.comn_status)) + +static __rte_always_inline int +op_bpi_cipher_decrypt(uint8_t *src, uint8_t
*dst, + uint8_t *iv, int ivlen, int srclen, + void *bpi_ctx) +{ + EVP_CIPHER_CTX *ctx = (EVP_CIPHER_CTX *)bpi_ctx; + int encrypted_ivlen; + uint8_t encrypted_iv[BPI_MAX_ENCR_IV_LEN]; + uint8_t *encr = encrypted_iv; + + /* ECB method: encrypt (not decrypt!) the IV, then XOR with plaintext */ + if (EVP_EncryptUpdate(ctx, encrypted_iv, &encrypted_ivlen, iv, ivlen) + <= 0) + goto cipher_decrypt_err; + + for (; srclen != 0; --srclen, ++dst, ++src, ++encr) + *dst = *src ^ *encr; + + return 0; + +cipher_decrypt_err: + QAT_DP_LOG(ERR, "libcrypto ECB cipher decrypt for BPI IV failed"); + return -EINVAL; +} + +static __rte_always_inline uint32_t +qat_bpicipher_preprocess(struct qat_sym_session *ctx, + struct rte_crypto_op *op) +{ + int block_len = qat_cipher_get_block_size(ctx->qat_cipher_alg); + struct rte_crypto_sym_op *sym_op = op->sym; + uint8_t last_block_len = block_len > 0 ? + sym_op->cipher.data.length % block_len : 0; + + if (last_block_len && ctx->qat_dir == ICP_QAT_HW_CIPHER_DECRYPT) { + /* Decrypt last block */ + uint8_t *last_block, *dst, *iv; + uint32_t last_block_offset = sym_op->cipher.data.offset + + sym_op->cipher.data.length - last_block_len; + last_block = (uint8_t *) rte_pktmbuf_mtod_offset(sym_op->m_src, + uint8_t *, last_block_offset); + + if (unlikely((sym_op->m_dst != NULL) + && (sym_op->m_dst != sym_op->m_src))) + /* out-of-place operation (OOP) */ + dst = (uint8_t *) rte_pktmbuf_mtod_offset(sym_op->m_dst, + uint8_t *, last_block_offset); + else + dst = last_block; + + if (last_block_len < sym_op->cipher.data.length) + /* use previous block ciphertext as IV */ + iv = last_block - block_len; + else + /* runt block, i.e. less than one full block */ + iv = rte_crypto_op_ctod_offset(op, uint8_t *, + ctx->cipher_iv.offset); + +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + QAT_DP_HEXDUMP_LOG(DEBUG, "BPI: src before pre-process:", + last_block, last_block_len); + if (sym_op->m_dst != NULL) + QAT_DP_HEXDUMP_LOG(DEBUG, "BPI: dst before pre-process:", + dst, last_block_len); +#endif + op_bpi_cipher_decrypt(last_block, dst, iv, block_len, + last_block_len, ctx->bpi_ctx); +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + QAT_DP_HEXDUMP_LOG(DEBUG, "BPI: src after pre-process:", + last_block, last_block_len); + if (sym_op->m_dst != NULL) + QAT_DP_HEXDUMP_LOG(DEBUG, "BPI: dst after pre-process:", + dst, last_block_len); +#endif + } + + return sym_op->cipher.data.length - last_block_len; +} + +static __rte_always_inline int +qat_chk_len_in_bits_auth(struct qat_sym_session *ctx, + struct rte_crypto_op *op) +{ + if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2 || + ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_KASUMI_F9 || + ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3) { + if (unlikely((op->sym->auth.data.offset % BYTE_LENGTH != 0) || + (op->sym->auth.data.length % BYTE_LENGTH != 0))) + return -EINVAL; + return 1; + } + return 0; +} + +static __rte_always_inline int +qat_chk_len_in_bits_cipher(struct qat_sym_session *ctx, + struct rte_crypto_op *op) +{ + if (ctx->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2 || + ctx->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_KASUMI || + ctx->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3) { + if (unlikely((op->sym->cipher.data.length % BYTE_LENGTH != 0) || + ((op->sym->cipher.data.offset % + BYTE_LENGTH) != 0))) + return -EINVAL; + return 1; + } + return 0; +} + +static __rte_always_inline int32_t +qat_sym_build_req_set_data(struct icp_qat_fw_la_bulk_req *req, + void *opaque, struct qat_sym_op_cookie *cookie, + struct rte_crypto_vec 
*src_vec, uint16_t n_src, + struct rte_crypto_vec *dst_vec, uint16_t n_dst) +{ + struct qat_sgl *list; + uint32_t i; + uint32_t tl_src = 0, total_len_src, total_len_dst; + uint64_t src_data_start = 0, dst_data_start = 0; + int is_sgl = n_src > 1 || n_dst > 1; + + if (unlikely(n_src < 1 || n_src > QAT_SYM_SGL_MAX_NUMBER || + n_dst > QAT_SYM_SGL_MAX_NUMBER)) + return -1; + + if (likely(!is_sgl)) { + src_data_start = src_vec[0].iova; + tl_src = total_len_src = + src_vec[0].len; + if (unlikely(n_dst)) { /* oop */ + total_len_dst = dst_vec[0].len; + + dst_data_start = dst_vec[0].iova; + if (unlikely(total_len_src != total_len_dst)) + return -EINVAL; + } else { + dst_data_start = src_data_start; + total_len_dst = tl_src; + } + } else { /* sgl */ + total_len_dst = total_len_src = 0; + + ICP_QAT_FW_COMN_PTR_TYPE_SET(req->comn_hdr.comn_req_flags, + QAT_COMN_PTR_TYPE_SGL); + + list = (struct qat_sgl *)&cookie->qat_sgl_src; + for (i = 0; i < n_src; i++) { + list->buffers[i].len = src_vec[i].len; + list->buffers[i].resrvd = 0; + list->buffers[i].addr = src_vec[i].iova; + if (tl_src + src_vec[i].len > UINT32_MAX) { + QAT_DP_LOG(ERR, "Message too long"); + return -1; + } + tl_src += src_vec[i].len; + } + + list->num_bufs = i; + src_data_start = cookie->qat_sgl_src_phys_addr; + + if (unlikely(n_dst > 0)) { /* oop sgl */ + uint32_t tl_dst = 0; + + list = (struct qat_sgl *)&cookie->qat_sgl_dst; + + for (i = 0; i < n_dst; i++) { + list->buffers[i].len = dst_vec[i].len; + list->buffers[i].resrvd = 0; + list->buffers[i].addr = dst_vec[i].iova; + if (tl_dst + dst_vec[i].len > UINT32_MAX) { + QAT_DP_LOG(ERR, "Message too long"); + return -ENOTSUP; + } + + tl_dst += dst_vec[i].len; + } + + if (tl_src != tl_dst) + return -EINVAL; + list->num_bufs = i; + dst_data_start = cookie->qat_sgl_dst_phys_addr; + } else + dst_data_start = src_data_start; + } + + req->comn_mid.src_data_addr = src_data_start; + req->comn_mid.dest_data_addr = dst_data_start; + req->comn_mid.src_length = total_len_src; + req->comn_mid.dst_length = total_len_dst; + req->comn_mid.opaque_data = (uintptr_t)opaque; + + return tl_src; +} + +static __rte_always_inline uint64_t +qat_sym_convert_op_to_vec_cipher(struct rte_crypto_op *op, + struct qat_sym_session *ctx, + struct rte_crypto_sgl *in_sgl, struct rte_crypto_sgl *out_sgl, + struct rte_crypto_va_iova_ptr *cipher_iv, + struct rte_crypto_va_iova_ptr *auth_iv_or_aad __rte_unused, + struct rte_crypto_va_iova_ptr *digest __rte_unused) +{ + uint32_t cipher_len = 0, cipher_ofs = 0; + int n_src = 0; + int ret; + + ret = qat_chk_len_in_bits_cipher(ctx, op); + switch (ret) { + case 1: + cipher_len = op->sym->cipher.data.length >> 3; + cipher_ofs = op->sym->cipher.data.offset >> 3; + break; + case 0: + if (ctx->bpi_ctx) { + /* DOCSIS - only send complete blocks to device. + * Process any partial block using CFB mode. 
+ * Even if 0 complete blocks, still send this to device + * to get into rx queue for post-process and dequeuing + */ + cipher_len = qat_bpicipher_preprocess(ctx, op); + cipher_ofs = op->sym->cipher.data.offset; + } else { + cipher_len = op->sym->cipher.data.length; + cipher_ofs = op->sym->cipher.data.offset; + } + break; + default: + QAT_DP_LOG(ERR, + "SNOW3G/KASUMI/ZUC in QAT PMD only supports byte aligned values"); + op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS; + return UINT64_MAX; + } + + cipher_iv->va = rte_crypto_op_ctod_offset(op, void *, + ctx->cipher_iv.offset); + cipher_iv->iova = rte_crypto_op_ctophys_offset(op, + ctx->cipher_iv.offset); + + n_src = rte_crypto_mbuf_to_vec(op->sym->m_src, cipher_ofs, + cipher_len, in_sgl->vec, QAT_SYM_SGL_MAX_NUMBER); + if (n_src < 0 || n_src > op->sym->m_src->nb_segs) { + op->status = RTE_CRYPTO_OP_STATUS_ERROR; + return UINT64_MAX; + } + + in_sgl->num = n_src; + + /* Out-Of-Place operation */ + if (unlikely((op->sym->m_dst != NULL) && + (op->sym->m_dst != op->sym->m_src))) { + int n_dst = rte_crypto_mbuf_to_vec(op->sym->m_dst, cipher_ofs, + cipher_len, out_sgl->vec, + QAT_SYM_SGL_MAX_NUMBER); + + if ((n_dst < 0) || (n_dst > op->sym->m_dst->nb_segs)) { + op->status = RTE_CRYPTO_OP_STATUS_ERROR; + return UINT64_MAX; + } + + out_sgl->num = n_dst; + } else + out_sgl->num = 0; + + return 0; +} + +static __rte_always_inline uint64_t +qat_sym_convert_op_to_vec_auth(struct rte_crypto_op *op, + struct qat_sym_session *ctx, + struct rte_crypto_sgl *in_sgl, struct rte_crypto_sgl *out_sgl, + struct rte_crypto_va_iova_ptr *cipher_iv __rte_unused, + struct rte_crypto_va_iova_ptr *auth_iv, + struct rte_crypto_va_iova_ptr *digest) +{ + uint32_t auth_ofs = 0, auth_len = 0; + int n_src, ret; + + ret = qat_chk_len_in_bits_auth(ctx, op); + switch (ret) { + case 1: + auth_ofs = op->sym->auth.data.offset >> 3; + auth_len = op->sym->auth.data.length >> 3; + auth_iv->va = rte_crypto_op_ctod_offset(op, void *, + ctx->auth_iv.offset); + auth_iv->iova = rte_crypto_op_ctophys_offset(op, + ctx->auth_iv.offset); + break; + case 0: + if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128 || + ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64) { + /* AES-GMAC */ + auth_ofs = op->sym->auth.data.offset; + auth_len = op->sym->auth.data.length; + auth_iv->va = rte_crypto_op_ctod_offset(op, void *, + ctx->auth_iv.offset); + auth_iv->iova = rte_crypto_op_ctophys_offset(op, + ctx->auth_iv.offset); + } else { + auth_ofs = op->sym->auth.data.offset; + auth_len = op->sym->auth.data.length; + auth_iv->va = NULL; + auth_iv->iova = 0; + } + break; + default: + QAT_DP_LOG(ERR, + "For SNOW3G/KASUMI/ZUC, QAT PMD only supports byte aligned values"); + op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS; + return UINT64_MAX; + } + + n_src = rte_crypto_mbuf_to_vec(op->sym->m_src, auth_ofs, + auth_ofs + auth_len, in_sgl->vec, + QAT_SYM_SGL_MAX_NUMBER); + if (n_src < 0 || n_src > op->sym->m_src->nb_segs) { + op->status = RTE_CRYPTO_OP_STATUS_ERROR; + return UINT64_MAX; + } + + in_sgl->num = n_src; + + /* Out-Of-Place operation */ + if (unlikely((op->sym->m_dst != NULL) && + (op->sym->m_dst != op->sym->m_src))) { + int n_dst = rte_crypto_mbuf_to_vec(op->sym->m_dst, auth_ofs, + auth_ofs + auth_len, out_sgl->vec, + QAT_SYM_SGL_MAX_NUMBER); + + if ((n_dst < 0) || (n_dst > op->sym->m_dst->nb_segs)) { + op->status = RTE_CRYPTO_OP_STATUS_ERROR; + return UINT64_MAX; + } + out_sgl->num = n_dst; + } else + out_sgl->num = 0; + + digest->va = (void *)op->sym->auth.digest.data; + digest->iova = 
op->sym->auth.digest.phys_addr; + + return 0; +} + +static __rte_always_inline uint64_t +qat_sym_convert_op_to_vec_chain(struct rte_crypto_op *op, + struct qat_sym_session *ctx, + struct rte_crypto_sgl *in_sgl, struct rte_crypto_sgl *out_sgl, + struct rte_crypto_va_iova_ptr *cipher_iv, + struct rte_crypto_va_iova_ptr *auth_iv_or_aad, + struct rte_crypto_va_iova_ptr *digest) +{ + union rte_crypto_sym_ofs ofs; + uint32_t min_ofs = 0, max_len = 0; + uint32_t cipher_len = 0, cipher_ofs = 0; + uint32_t auth_len = 0, auth_ofs = 0; + int is_oop = (op->sym->m_dst != NULL) && + (op->sym->m_dst != op->sym->m_src); + int is_sgl = op->sym->m_src->nb_segs > 1; + int n_src; + int ret; + + if (unlikely(is_oop)) + is_sgl |= op->sym->m_dst->nb_segs > 1; + + cipher_iv->va = rte_crypto_op_ctod_offset(op, void *, + ctx->cipher_iv.offset); + cipher_iv->iova = rte_crypto_op_ctophys_offset(op, + ctx->cipher_iv.offset); + auth_iv_or_aad->va = rte_crypto_op_ctod_offset(op, void *, + ctx->auth_iv.offset); + auth_iv_or_aad->iova = rte_crypto_op_ctophys_offset(op, + ctx->auth_iv.offset); + digest->va = (void *)op->sym->auth.digest.data; + digest->iova = op->sym->auth.digest.phys_addr; + + ret = qat_chk_len_in_bits_cipher(ctx, op); + switch (ret) { + case 1: + cipher_len = op->sym->aead.data.length >> 3; + cipher_ofs = op->sym->aead.data.offset >> 3; + break; + case 0: + cipher_len = op->sym->aead.data.length; + cipher_ofs = op->sym->aead.data.offset; + break; + default: + QAT_DP_LOG(ERR, + "For SNOW3G/KASUMI/ZUC, QAT PMD only supports byte aligned values"); + op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS; + return -EINVAL; + } + + ret = qat_chk_len_in_bits_auth(ctx, op); + switch (ret) { + case 1: + auth_len = op->sym->auth.data.length >> 3; + auth_ofs = op->sym->auth.data.offset >> 3; + break; + case 0: + auth_len = op->sym->auth.data.length; + auth_ofs = op->sym->auth.data.offset; + break; + default: + QAT_DP_LOG(ERR, + "For SNOW3G/KASUMI/ZUC, QAT PMD only supports byte aligned values"); + op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS; + return -EINVAL; + } + + min_ofs = cipher_ofs < auth_ofs ? cipher_ofs : auth_ofs; + max_len = RTE_MAX(cipher_ofs + cipher_len, auth_ofs + auth_len); + + /* digest in buffer check. Needed only for wireless algos */ + if (ret == 1) { + /* Handle digest-encrypted cases, i.e. + * auth-gen-then-cipher-encrypt and + * cipher-decrypt-then-auth-verify + */ + uint64_t auth_end_iova; + + if (unlikely(is_sgl)) { + uint32_t remaining_off = auth_ofs + auth_len; + struct rte_mbuf *sgl_buf = (is_oop ? op->sym->m_dst : + op->sym->m_src); + + while (remaining_off >= rte_pktmbuf_data_len(sgl_buf) + && sgl_buf->next != NULL) { + remaining_off -= rte_pktmbuf_data_len(sgl_buf); + sgl_buf = sgl_buf->next; + } + + auth_end_iova = (uint64_t)rte_pktmbuf_iova_offset( + sgl_buf, remaining_off); + } else + auth_end_iova = (is_oop ? 
+ rte_pktmbuf_iova(op->sym->m_dst) : + rte_pktmbuf_iova(op->sym->m_src)) + auth_ofs + + auth_len; + + /* Then check if digest-encrypted conditions are met */ + if ((auth_ofs + auth_len < cipher_ofs + cipher_len) && + (digest->iova == auth_end_iova)) + max_len = RTE_MAX(max_len, auth_ofs + auth_len + + ctx->digest_length); + } + + n_src = rte_crypto_mbuf_to_vec(op->sym->m_src, min_ofs, max_len, + in_sgl->vec, QAT_SYM_SGL_MAX_NUMBER); + if (unlikely(n_src < 0 || n_src > op->sym->m_src->nb_segs)) { + op->status = RTE_CRYPTO_OP_STATUS_ERROR; + return -1; + } + in_sgl->num = n_src; + + if (unlikely((op->sym->m_dst != NULL) && + (op->sym->m_dst != op->sym->m_src))) { + int n_dst = rte_crypto_mbuf_to_vec(op->sym->m_dst, min_ofs, + max_len, out_sgl->vec, QAT_SYM_SGL_MAX_NUMBER); + + if (n_dst < 0 || n_dst > op->sym->m_dst->nb_segs) { + op->status = RTE_CRYPTO_OP_STATUS_ERROR; + return -1; + } + out_sgl->num = n_dst; + } else + out_sgl->num = 0; + + ofs.ofs.cipher.head = cipher_ofs; + ofs.ofs.cipher.tail = max_len - cipher_ofs - cipher_len; + ofs.ofs.auth.head = auth_ofs; + ofs.ofs.auth.tail = max_len - auth_ofs - auth_len; + + return ofs.raw; +} + +static __rte_always_inline uint64_t +qat_sym_convert_op_to_vec_aead(struct rte_crypto_op *op, + struct qat_sym_session *ctx, + struct rte_crypto_sgl *in_sgl, struct rte_crypto_sgl *out_sgl, + struct rte_crypto_va_iova_ptr *cipher_iv, + struct rte_crypto_va_iova_ptr *auth_iv_or_aad, + struct rte_crypto_va_iova_ptr *digest) +{ + uint32_t cipher_len = 0, cipher_ofs = 0; + int32_t n_src = 0; + + cipher_iv->va = rte_crypto_op_ctod_offset(op, void *, + ctx->cipher_iv.offset); + cipher_iv->iova = rte_crypto_op_ctophys_offset(op, + ctx->cipher_iv.offset); + auth_iv_or_aad->va = (void *)op->sym->aead.aad.data; + auth_iv_or_aad->iova = op->sym->aead.aad.phys_addr; + digest->va = (void *)op->sym->aead.digest.data; + digest->iova = op->sym->aead.digest.phys_addr; + + cipher_len = op->sym->aead.data.length; + cipher_ofs = op->sym->aead.data.offset; + + n_src = rte_crypto_mbuf_to_vec(op->sym->m_src, cipher_ofs, cipher_len, + in_sgl->vec, QAT_SYM_SGL_MAX_NUMBER); + if (n_src < 0 || n_src > op->sym->m_src->nb_segs) { + op->status = RTE_CRYPTO_OP_STATUS_ERROR; + return UINT64_MAX; + } + in_sgl->num = n_src; + + /* Out-Of-Place operation */ + if (unlikely((op->sym->m_dst != NULL) && + (op->sym->m_dst != op->sym->m_src))) { + int n_dst = rte_crypto_mbuf_to_vec(op->sym->m_dst, cipher_ofs, + cipher_len, out_sgl->vec, + QAT_SYM_SGL_MAX_NUMBER); + if (n_dst < 0 || n_dst > op->sym->m_dst->nb_segs) { + op->status = RTE_CRYPTO_OP_STATUS_ERROR; + return UINT64_MAX; + } + + out_sgl->num = n_dst; + } else + out_sgl->num = 0; + + return 0; +} + +static __rte_always_inline void +qat_set_cipher_iv(struct icp_qat_fw_la_cipher_req_params *cipher_param, + struct rte_crypto_va_iova_ptr *iv_ptr, uint32_t iv_len, + struct icp_qat_fw_la_bulk_req *qat_req) +{ + /* copy IV into request if it fits */ + if (iv_len <= sizeof(cipher_param->u.cipher_IV_array)) + rte_memcpy(cipher_param->u.cipher_IV_array, iv_ptr->va, + iv_len); + else { + ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET( + qat_req->comn_hdr.serv_specif_flags, + ICP_QAT_FW_CIPH_IV_64BIT_PTR); + cipher_param->u.s.cipher_IV_ptr = iv_ptr->iova; + } +} + +static __rte_always_inline void +qat_sym_dp_fill_vec_status(int32_t *sta, int status, uint32_t n) +{ + uint32_t i; + + for (i = 0; i < n; i++) + sta[i] = status; +} + +static __rte_always_inline void +enqueue_one_cipher_job_gen1(struct qat_sym_session *ctx, + struct icp_qat_fw_la_bulk_req *req, + 
struct rte_crypto_va_iova_ptr *iv, + union rte_crypto_sym_ofs ofs, uint32_t data_len) +{ + struct icp_qat_fw_la_cipher_req_params *cipher_param; + + cipher_param = (void *)&req->serv_specif_rqpars; + + /* cipher IV */ + qat_set_cipher_iv(cipher_param, iv, ctx->cipher_iv.length, req); + cipher_param->cipher_offset = ofs.ofs.cipher.head; + cipher_param->cipher_length = data_len - ofs.ofs.cipher.head - + ofs.ofs.cipher.tail; +} + +static __rte_always_inline void +enqueue_one_auth_job_gen1(struct qat_sym_session *ctx, + struct icp_qat_fw_la_bulk_req *req, + struct rte_crypto_va_iova_ptr *digest, + struct rte_crypto_va_iova_ptr *auth_iv, + union rte_crypto_sym_ofs ofs, uint32_t data_len) +{ + struct icp_qat_fw_la_cipher_req_params *cipher_param; + struct icp_qat_fw_la_auth_req_params *auth_param; + + cipher_param = (void *)&req->serv_specif_rqpars; + auth_param = (void *)((uint8_t *)cipher_param + + ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET); + + auth_param->auth_off = ofs.ofs.auth.head; + auth_param->auth_len = data_len - ofs.ofs.auth.head - + ofs.ofs.auth.tail; + auth_param->auth_res_addr = digest->iova; + + switch (ctx->qat_hash_alg) { + case ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2: + case ICP_QAT_HW_AUTH_ALGO_KASUMI_F9: + case ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3: + auth_param->u1.aad_adr = auth_iv->iova; + break; + case ICP_QAT_HW_AUTH_ALGO_GALOIS_128: + case ICP_QAT_HW_AUTH_ALGO_GALOIS_64: + ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET( + req->comn_hdr.serv_specif_flags, + ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS); + rte_memcpy(cipher_param->u.cipher_IV_array, auth_iv->va, + ctx->auth_iv.length); + break; + default: + break; + } +} + +static __rte_always_inline int +enqueue_one_chain_job_gen1(struct qat_sym_session *ctx, + struct icp_qat_fw_la_bulk_req *req, + struct rte_crypto_vec *src_vec, + uint16_t n_src_vecs, + struct rte_crypto_vec *dst_vec, + uint16_t n_dst_vecs, + struct rte_crypto_va_iova_ptr *cipher_iv, + struct rte_crypto_va_iova_ptr *digest, + struct rte_crypto_va_iova_ptr *auth_iv, + union rte_crypto_sym_ofs ofs, uint32_t data_len) +{ + struct icp_qat_fw_la_cipher_req_params *cipher_param; + struct icp_qat_fw_la_auth_req_params *auth_param; + struct rte_crypto_vec *cvec = n_dst_vecs > 0 ? + dst_vec : src_vec; + rte_iova_t auth_iova_end; + int cipher_len, auth_len; + int is_sgl = n_src_vecs > 1 || n_dst_vecs > 1; + + cipher_param = (void *)&req->serv_specif_rqpars; + auth_param = (void *)((uint8_t *)cipher_param + + ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET); + + cipher_len = data_len - ofs.ofs.cipher.head - + ofs.ofs.cipher.tail; + auth_len = data_len - ofs.ofs.auth.head - ofs.ofs.auth.tail; + + if (unlikely(cipher_len < 0 || auth_len < 0)) + return -1; + + cipher_param->cipher_offset = ofs.ofs.cipher.head; + cipher_param->cipher_length = cipher_len; + qat_set_cipher_iv(cipher_param, cipher_iv, ctx->cipher_iv.length, req); + + auth_param->auth_off = ofs.ofs.auth.head; + auth_param->auth_len = auth_len; + auth_param->auth_res_addr = digest->iova; + + switch (ctx->qat_hash_alg) { + case ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2: + case ICP_QAT_HW_AUTH_ALGO_KASUMI_F9: + case ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3: + auth_param->u1.aad_adr = auth_iv->iova; + break; + case ICP_QAT_HW_AUTH_ALGO_GALOIS_128: + case ICP_QAT_HW_AUTH_ALGO_GALOIS_64: + break; + default: + break; + } + + if (unlikely(is_sgl)) { + /* sgl */ + int i = n_dst_vecs ? 
n_dst_vecs : n_src_vecs; + uint32_t remaining_off = data_len - ofs.ofs.auth.tail; + + while (remaining_off >= cvec->len && i >= 1) { + i--; + remaining_off -= cvec->len; + cvec++; + } + + auth_iova_end = cvec->iova + remaining_off; + } else + auth_iova_end = cvec[0].iova + auth_param->auth_off + + auth_param->auth_len; + + /* Then check if digest-encrypted conditions are met */ + if ((auth_param->auth_off + auth_param->auth_len < + cipher_param->cipher_offset + cipher_param->cipher_length) && + (digest->iova == auth_iova_end)) { + /* Handle partial digest encryption */ + if (cipher_param->cipher_offset + cipher_param->cipher_length < + auth_param->auth_off + auth_param->auth_len + + ctx->digest_length && !is_sgl) + req->comn_mid.dst_length = req->comn_mid.src_length = + auth_param->auth_off + auth_param->auth_len + + ctx->digest_length; + struct icp_qat_fw_comn_req_hdr *header = &req->comn_hdr; + ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(header->serv_specif_flags, + ICP_QAT_FW_LA_DIGEST_IN_BUFFER); + } + + return 0; +} + +static __rte_always_inline void +enqueue_one_aead_job_gen1(struct qat_sym_session *ctx, + struct icp_qat_fw_la_bulk_req *req, + struct rte_crypto_va_iova_ptr *iv, + struct rte_crypto_va_iova_ptr *digest, + struct rte_crypto_va_iova_ptr *aad, + union rte_crypto_sym_ofs ofs, uint32_t data_len) +{ + struct icp_qat_fw_la_cipher_req_params *cipher_param = + (void *)&req->serv_specif_rqpars; + struct icp_qat_fw_la_auth_req_params *auth_param = + (void *)((uint8_t *)&req->serv_specif_rqpars + + ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET); + uint8_t *aad_data; + uint8_t aad_ccm_real_len; + uint8_t aad_len_field_sz; + uint32_t msg_len_be; + rte_iova_t aad_iova = 0; + uint8_t q; + + switch (ctx->qat_hash_alg) { + case ICP_QAT_HW_AUTH_ALGO_GALOIS_128: + case ICP_QAT_HW_AUTH_ALGO_GALOIS_64: + ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET( + req->comn_hdr.serv_specif_flags, + ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS); + rte_memcpy(cipher_param->u.cipher_IV_array, iv->va, + ctx->cipher_iv.length); + aad_iova = aad->iova; + break; + case ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC: + aad_data = aad->va; + aad_iova = aad->iova; + aad_ccm_real_len = 0; + aad_len_field_sz = 0; + msg_len_be = rte_bswap32((uint32_t)data_len - + ofs.ofs.cipher.head); + + if (ctx->aad_len > ICP_QAT_HW_CCM_AAD_DATA_OFFSET) { + aad_len_field_sz = ICP_QAT_HW_CCM_AAD_LEN_INFO; + aad_ccm_real_len = ctx->aad_len - + ICP_QAT_HW_CCM_AAD_B0_LEN - + ICP_QAT_HW_CCM_AAD_LEN_INFO; + } else { + aad_data = iv->va; + aad_iova = iv->iova; + } + + q = ICP_QAT_HW_CCM_NQ_CONST - ctx->cipher_iv.length; + aad_data[0] = ICP_QAT_HW_CCM_BUILD_B0_FLAGS( + aad_len_field_sz, ctx->digest_length, q); + if (q > ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE) { + memcpy(aad_data + ctx->cipher_iv.length + + ICP_QAT_HW_CCM_NONCE_OFFSET + (q - + ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE), + (uint8_t *)&msg_len_be, + ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE); + } else { + memcpy(aad_data + ctx->cipher_iv.length + + ICP_QAT_HW_CCM_NONCE_OFFSET, + (uint8_t *)&msg_len_be + + (ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE + - q), q); + } + + if (aad_len_field_sz > 0) { + *(uint16_t *)&aad_data[ICP_QAT_HW_CCM_AAD_B0_LEN] = + rte_bswap16(aad_ccm_real_len); + + if ((aad_ccm_real_len + aad_len_field_sz) + % ICP_QAT_HW_CCM_AAD_B0_LEN) { + uint8_t pad_len = 0; + uint8_t pad_idx = 0; + + pad_len = ICP_QAT_HW_CCM_AAD_B0_LEN - + ((aad_ccm_real_len + + aad_len_field_sz) % + ICP_QAT_HW_CCM_AAD_B0_LEN); + pad_idx = ICP_QAT_HW_CCM_AAD_B0_LEN + + aad_ccm_real_len + + aad_len_field_sz; + memset(&aad_data[pad_idx], 0, 
pad_len); + } + } + + rte_memcpy(((uint8_t *)cipher_param->u.cipher_IV_array) + + ICP_QAT_HW_CCM_NONCE_OFFSET, + (uint8_t *)iv->va + + ICP_QAT_HW_CCM_NONCE_OFFSET, ctx->cipher_iv.length); + *(uint8_t *)&cipher_param->u.cipher_IV_array[0] = + q - ICP_QAT_HW_CCM_NONCE_OFFSET; + + rte_memcpy((uint8_t *)aad->va + + ICP_QAT_HW_CCM_NONCE_OFFSET, + (uint8_t *)iv->va + ICP_QAT_HW_CCM_NONCE_OFFSET, + ctx->cipher_iv.length); + break; + default: + break; + } + + cipher_param->cipher_offset = ofs.ofs.cipher.head; + cipher_param->cipher_length = data_len - ofs.ofs.cipher.head - + ofs.ofs.cipher.tail; + auth_param->auth_off = ofs.ofs.cipher.head; + auth_param->auth_len = cipher_param->cipher_length; + auth_param->auth_res_addr = digest->iova; + auth_param->u1.aad_adr = aad_iova; +} extern struct rte_cryptodev_ops qat_sym_crypto_ops_gen1; extern struct rte_cryptodev_ops qat_asym_crypto_ops_gen1; +/* -----------------GEN 1 sym crypto op data path APIs ---------------- */ +int +qat_sym_build_op_cipher_gen1(void *in_op, struct qat_sym_session *ctx, + uint8_t *out_msg, void *op_cookie); + +int +qat_sym_build_op_auth_gen1(void *in_op, struct qat_sym_session *ctx, + uint8_t *out_msg, void *op_cookie); + +int +qat_sym_build_op_aead_gen1(void *in_op, struct qat_sym_session *ctx, + uint8_t *out_msg, void *op_cookie); + +int +qat_sym_build_op_chain_gen1(void *in_op, struct qat_sym_session *ctx, + uint8_t *out_msg, void *op_cookie); + /* -----------------GENx control path APIs ---------------- */ uint64_t qat_sym_crypto_feature_flags_get_gen1(struct qat_pci_device *qat_dev); +int +qat_sym_crypto_set_session_gen1(void *cryptodev, void *session); + void qat_sym_session_set_ext_hash_flags_gen2(struct qat_sym_session *session, uint8_t hash_flag); diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c index e156f194e2..5b96e626f8 100644 --- a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c +++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c @@ -180,6 +180,192 @@ qat_sym_crypto_feature_flags_get_gen1( return feature_flags; } + +int +qat_sym_build_op_cipher_gen1(void *in_op, struct qat_sym_session *ctx, + uint8_t *out_msg, void *op_cookie) +{ + register struct icp_qat_fw_la_bulk_req *req; + struct rte_crypto_op *op = in_op; + struct qat_sym_op_cookie *cookie = op_cookie; + struct rte_crypto_sgl in_sgl, out_sgl; + struct rte_crypto_vec in_vec[QAT_SYM_SGL_MAX_NUMBER], + out_vec[QAT_SYM_SGL_MAX_NUMBER]; + struct rte_crypto_va_iova_ptr cipher_iv; + union rte_crypto_sym_ofs ofs; + int32_t total_len; + + in_sgl.vec = in_vec; + out_sgl.vec = out_vec; + + req = (struct icp_qat_fw_la_bulk_req *)out_msg; + rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); + + ofs.raw = qat_sym_convert_op_to_vec_cipher(op, ctx, &in_sgl, &out_sgl, + &cipher_iv, NULL, NULL); + if (unlikely(ofs.raw == UINT64_MAX)) { + op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS; + return -EINVAL; + } + + total_len = qat_sym_build_req_set_data(req, in_op, cookie, + in_sgl.vec, in_sgl.num, out_sgl.vec, out_sgl.num); + if (unlikely(total_len < 0)) { + op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS; + return -EINVAL; + } + + enqueue_one_cipher_job_gen1(ctx, req, &cipher_iv, ofs, total_len); + +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + qat_sym_debug_log_dump(req, ctx, in_sgl.vec, in_sgl.num, &cipher_iv, + NULL, NULL, NULL); +#endif + + return 0; +} + +int +qat_sym_build_op_auth_gen1(void *in_op, struct qat_sym_session *ctx, + uint8_t *out_msg, void *op_cookie) +{ + register struct icp_qat_fw_la_bulk_req *req; + struct 
rte_crypto_op *op = in_op; + struct qat_sym_op_cookie *cookie = op_cookie; + struct rte_crypto_sgl in_sgl, out_sgl; + struct rte_crypto_vec in_vec[QAT_SYM_SGL_MAX_NUMBER], + out_vec[QAT_SYM_SGL_MAX_NUMBER]; + struct rte_crypto_va_iova_ptr auth_iv; + struct rte_crypto_va_iova_ptr digest; + union rte_crypto_sym_ofs ofs; + int32_t total_len; + + in_sgl.vec = in_vec; + out_sgl.vec = out_vec; + + req = (struct icp_qat_fw_la_bulk_req *)out_msg; + rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); + + ofs.raw = qat_sym_convert_op_to_vec_auth(op, ctx, &in_sgl, &out_sgl, + NULL, &auth_iv, &digest); + if (unlikely(ofs.raw == UINT64_MAX)) { + op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS; + return -EINVAL; + } + + total_len = qat_sym_build_req_set_data(req, in_op, cookie, + in_sgl.vec, in_sgl.num, out_sgl.vec, out_sgl.num); + if (unlikely(total_len < 0)) { + op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS; + return -EINVAL; + } + + enqueue_one_auth_job_gen1(ctx, req, &digest, &auth_iv, ofs, + total_len); + +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + qat_sym_debug_log_dump(req, ctx, in_sgl.vec, in_sgl.num, NULL, + &auth_iv, NULL, &digest); +#endif + + return 0; +} + +int +qat_sym_build_op_aead_gen1(void *in_op, struct qat_sym_session *ctx, + uint8_t *out_msg, void *op_cookie) +{ + register struct icp_qat_fw_la_bulk_req *req; + struct rte_crypto_op *op = in_op; + struct qat_sym_op_cookie *cookie = op_cookie; + struct rte_crypto_sgl in_sgl, out_sgl; + struct rte_crypto_vec in_vec[QAT_SYM_SGL_MAX_NUMBER], + out_vec[QAT_SYM_SGL_MAX_NUMBER]; + struct rte_crypto_va_iova_ptr cipher_iv; + struct rte_crypto_va_iova_ptr aad; + struct rte_crypto_va_iova_ptr digest; + union rte_crypto_sym_ofs ofs; + int32_t total_len; + + in_sgl.vec = in_vec; + out_sgl.vec = out_vec; + + req = (struct icp_qat_fw_la_bulk_req *)out_msg; + rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); + + ofs.raw = qat_sym_convert_op_to_vec_aead(op, ctx, &in_sgl, &out_sgl, + &cipher_iv, &aad, &digest); + if (unlikely(ofs.raw == UINT64_MAX)) { + op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS; + return -EINVAL; + } + + total_len = qat_sym_build_req_set_data(req, in_op, cookie, + in_sgl.vec, in_sgl.num, out_sgl.vec, out_sgl.num); + if (unlikely(total_len < 0)) { + op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS; + return -EINVAL; + } + + enqueue_one_aead_job_gen1(ctx, req, &cipher_iv, &digest, &aad, ofs, + total_len); + +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + qat_sym_debug_log_dump(req, ctx, in_sgl.vec, in_sgl.num, &cipher_iv, + NULL, &aad, &digest); +#endif + + return 0; +} + +int +qat_sym_build_op_chain_gen1(void *in_op, struct qat_sym_session *ctx, + uint8_t *out_msg, void *op_cookie) +{ + register struct icp_qat_fw_la_bulk_req *req; + struct rte_crypto_op *op = in_op; + struct qat_sym_op_cookie *cookie = op_cookie; + struct rte_crypto_sgl in_sgl = {0}, out_sgl = {0}; + struct rte_crypto_vec in_vec[QAT_SYM_SGL_MAX_NUMBER], + out_vec[QAT_SYM_SGL_MAX_NUMBER]; + struct rte_crypto_va_iova_ptr cipher_iv; + struct rte_crypto_va_iova_ptr auth_iv; + struct rte_crypto_va_iova_ptr digest; + union rte_crypto_sym_ofs ofs; + int32_t total_len; + + in_sgl.vec = in_vec; + out_sgl.vec = out_vec; + + req = (struct icp_qat_fw_la_bulk_req *)out_msg; + rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); + + ofs.raw = qat_sym_convert_op_to_vec_chain(op, ctx, &in_sgl, &out_sgl, + &cipher_iv, &auth_iv, &digest); + if (unlikely(ofs.raw == UINT64_MAX)) { + op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS; + return -EINVAL; + } + + total_len = 
qat_sym_build_req_set_data(req, in_op, cookie, + in_sgl.vec, in_sgl.num, out_sgl.vec, out_sgl.num); + if (unlikely(total_len < 0)) { + op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS; + return -EINVAL; + } + + enqueue_one_chain_job_gen1(ctx, req, in_sgl.vec, in_sgl.num, + out_sgl.vec, out_sgl.num, &cipher_iv, &digest, &auth_iv, + ofs, total_len); + +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + qat_sym_debug_log_dump(req, ctx, in_sgl.vec, in_sgl.num, &cipher_iv, + &auth_iv, &digest); +#endif + + return 0; +} + #ifdef RTE_LIB_SECURITY #define QAT_SECURITY_SYM_CAPABILITIES \ From patchwork Fri Nov 5 00:19:24 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ji, Kai" X-Patchwork-Id: 103813 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 06370A0C61; Fri, 5 Nov 2021 01:19:57 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id EA1CD427A7; Fri, 5 Nov 2021 01:19:45 +0100 (CET) Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by mails.dpdk.org (Postfix) with ESMTP id 1E96F42710 for ; Fri, 5 Nov 2021 01:19:40 +0100 (CET) X-IronPort-AV: E=McAfee;i="6200,9189,10158"; a="212563720" X-IronPort-AV: E=Sophos;i="5.87,209,1631602800"; d="scan'208";a="212563720" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Nov 2021 17:19:40 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.87,209,1631602800"; d="scan'208";a="468671798" Received: from silpixa00400272.ir.intel.com (HELO silpixa00400272.ger.corp.intel.com) ([10.237.223.111]) by orsmga002.jf.intel.com with ESMTP; 04 Nov 2021 17:19:39 -0700 From: Kai Ji To: dev@dpdk.org Cc: gakhil@marvell.com, Kai Ji Date: Fri, 5 Nov 2021 00:19:24 +0000 Message-Id: <20211105001932.28784-4-kai.ji@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211105001932.28784-1-kai.ji@intel.com> References: <20211102134913.21148-1-kai.ji@intel.com> <20211105001932.28784-1-kai.ji@intel.com> Subject: [dpdk-dev] [dpdk-dev v4 03/11] crypto/qat: rework session APIs X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch introduces set_session methods to the QAT generation-specific dev ops. It replaces min_qat_dev_gen_id with dev_id; the session becomes invalid if the device generation id does not match during session init and crypto ops. Signed-off-by: Kai Ji --- drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c | 87 ++++++- drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c | 253 +++++++++++++++++++ drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c | 121 +++++++++ drivers/crypto/qat/dev/qat_sym_pmd_gen1.c | 62 +++++ drivers/crypto/qat/qat_crypto.c | 1 + drivers/crypto/qat/qat_crypto.h | 5 + drivers/crypto/qat/qat_sym.c | 8 +- drivers/crypto/qat/qat_sym_session.c | 106 +------- drivers/crypto/qat/qat_sym_session.h | 2 +- 9 files changed, 544 insertions(+), 101 deletions(-) diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c index b4ec440e05..621ff85638 100644 --- a/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c +++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c @@ -166,6 +166,90 @@
qat_sym_crypto_qp_setup_gen2(struct rte_cryptodev *dev, uint16_t qp_id, return 0; } +void +qat_sym_session_set_ext_hash_flags_gen2(struct qat_sym_session *session, + uint8_t hash_flag) +{ + struct icp_qat_fw_comn_req_hdr *header = &session->fw_req.comn_hdr; + struct icp_qat_fw_cipher_auth_cd_ctrl_hdr *cd_ctrl = + (struct icp_qat_fw_cipher_auth_cd_ctrl_hdr *) + session->fw_req.cd_ctrl.content_desc_ctrl_lw; + + /* Set the Use Extended Protocol Flags bit in LW 1 */ + QAT_FIELD_SET(header->comn_req_flags, + QAT_COMN_EXT_FLAGS_USED, + QAT_COMN_EXT_FLAGS_BITPOS, + QAT_COMN_EXT_FLAGS_MASK); + + /* Set Hash Flags in LW 28 */ + cd_ctrl->hash_flags |= hash_flag; + + /* Set proto flags in LW 1 */ + switch (session->qat_cipher_alg) { + case ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2: + ICP_QAT_FW_LA_PROTO_SET(header->serv_specif_flags, + ICP_QAT_FW_LA_SNOW_3G_PROTO); + ICP_QAT_FW_LA_ZUC_3G_PROTO_FLAG_SET( + header->serv_specif_flags, 0); + break; + case ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3: + ICP_QAT_FW_LA_PROTO_SET(header->serv_specif_flags, + ICP_QAT_FW_LA_NO_PROTO); + ICP_QAT_FW_LA_ZUC_3G_PROTO_FLAG_SET( + header->serv_specif_flags, + ICP_QAT_FW_LA_ZUC_3G_PROTO); + break; + default: + ICP_QAT_FW_LA_PROTO_SET(header->serv_specif_flags, + ICP_QAT_FW_LA_NO_PROTO); + ICP_QAT_FW_LA_ZUC_3G_PROTO_FLAG_SET( + header->serv_specif_flags, 0); + break; + } +} + +static int +qat_sym_crypto_set_session_gen2(void *cdev, void *session) +{ + struct rte_cryptodev *dev = cdev; + struct qat_sym_session *ctx = session; + const struct qat_cryptodev_private *qat_private = + dev->data->dev_private; + int ret; + + ret = qat_sym_crypto_set_session_gen1(cdev, session); + if (ret == 0 || ret != -ENOTSUP) + return ret; + + /* GEN1 returning -ENOTSUP as it cannot handle some mixed algo, + * but some are not supported by GEN2, so checking here + */ + if ((qat_private->internal_capabilities & + QAT_SYM_CAP_MIXED_CRYPTO) == 0) + return -ENOTSUP; + + if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3 && + ctx->qat_cipher_alg != + ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3) { + qat_sym_session_set_ext_hash_flags_gen2(ctx, + 1 << ICP_QAT_FW_AUTH_HDR_FLAG_ZUC_EIA3_BITPOS); + } else if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2 && + ctx->qat_cipher_alg != + ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2) { + qat_sym_session_set_ext_hash_flags_gen2(ctx, + 1 << ICP_QAT_FW_AUTH_HDR_FLAG_SNOW3G_UIA2_BITPOS); + } else if ((ctx->aes_cmac || + ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL) && + (ctx->qat_cipher_alg == + ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2 || + ctx->qat_cipher_alg == + ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3)) { + qat_sym_session_set_ext_hash_flags_gen2(ctx, 0); + } + + return 0; +} + struct rte_cryptodev_ops qat_sym_crypto_ops_gen2 = { /* Device related operations */ @@ -204,9 +288,10 @@ RTE_INIT(qat_sym_crypto_gen2_init) qat_sym_gen_dev_ops[QAT_GEN2].cryptodev_ops = &qat_sym_crypto_ops_gen2; qat_sym_gen_dev_ops[QAT_GEN2].get_capabilities = qat_sym_crypto_cap_get_gen2; + qat_sym_gen_dev_ops[QAT_GEN2].set_session = + qat_sym_crypto_set_session_gen2; qat_sym_gen_dev_ops[QAT_GEN2].get_feature_flags = qat_sym_crypto_feature_flags_get_gen1; - #ifdef RTE_LIB_SECURITY qat_sym_gen_dev_ops[QAT_GEN2].create_security_ctx = qat_sym_create_security_gen1; diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c index d3336cf4a1..6baf1810ed 100644 --- a/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c +++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c @@ -143,6 +143,256 @@ 
qat_sym_crypto_cap_get_gen3(struct qat_pci_device *qat_dev __rte_unused) return capa_info; } +static __rte_always_inline void +enqueue_one_aead_job_gen3(struct qat_sym_session *ctx, + struct icp_qat_fw_la_bulk_req *req, + struct rte_crypto_va_iova_ptr *iv, + struct rte_crypto_va_iova_ptr *digest, + struct rte_crypto_va_iova_ptr *aad, + union rte_crypto_sym_ofs ofs, uint32_t data_len) +{ + if (ctx->is_single_pass) { + struct icp_qat_fw_la_cipher_req_params *cipher_param = + (void *)&req->serv_specif_rqpars; + + /* QAT GEN3 uses single pass to treat AEAD as + * cipher operation + */ + cipher_param = (void *)&req->serv_specif_rqpars; + + qat_set_cipher_iv(cipher_param, iv, ctx->cipher_iv.length, req); + cipher_param->cipher_offset = ofs.ofs.cipher.head; + cipher_param->cipher_length = data_len - ofs.ofs.cipher.head - + ofs.ofs.cipher.tail; + + cipher_param->spc_aad_addr = aad->iova; + cipher_param->spc_auth_res_addr = digest->iova; + + return; + } + + enqueue_one_aead_job_gen1(ctx, req, iv, digest, aad, ofs, data_len); +} + +static __rte_always_inline void +enqueue_one_auth_job_gen3(struct qat_sym_session *ctx, + struct qat_sym_op_cookie *cookie, + struct icp_qat_fw_la_bulk_req *req, + struct rte_crypto_va_iova_ptr *digest, + struct rte_crypto_va_iova_ptr *auth_iv, + union rte_crypto_sym_ofs ofs, uint32_t data_len) +{ + struct icp_qat_fw_cipher_cd_ctrl_hdr *cipher_cd_ctrl; + struct icp_qat_fw_la_cipher_req_params *cipher_param; + uint32_t ver_key_offset; + uint32_t auth_data_len = data_len - ofs.ofs.auth.head - + ofs.ofs.auth.tail; + + if (!ctx->is_single_pass_gmac || + (auth_data_len > QAT_AES_GMAC_SPC_MAX_SIZE)) { + enqueue_one_auth_job_gen1(ctx, req, digest, auth_iv, ofs, + data_len); + return; + } + + cipher_cd_ctrl = (void *) &req->cd_ctrl; + cipher_param = (void *)&req->serv_specif_rqpars; + ver_key_offset = sizeof(struct icp_qat_hw_auth_setup) + + ICP_QAT_HW_GALOIS_128_STATE1_SZ + + ICP_QAT_HW_GALOIS_H_SZ + ICP_QAT_HW_GALOIS_LEN_A_SZ + + ICP_QAT_HW_GALOIS_E_CTR0_SZ + + sizeof(struct icp_qat_hw_cipher_config); + + if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128 || + ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64) { + /* AES-GMAC */ + qat_set_cipher_iv(cipher_param, auth_iv, ctx->auth_iv.length, + req); + } + + /* Fill separate Content Descriptor for this op */ + rte_memcpy(cookie->opt.spc_gmac.cd_cipher.key, + ctx->auth_op == ICP_QAT_HW_AUTH_GENERATE ? + ctx->cd.cipher.key : + RTE_PTR_ADD(&ctx->cd, ver_key_offset), + ctx->auth_key_length); + cookie->opt.spc_gmac.cd_cipher.cipher_config.val = + ICP_QAT_HW_CIPHER_CONFIG_BUILD( + ICP_QAT_HW_CIPHER_AEAD_MODE, + ctx->qat_cipher_alg, + ICP_QAT_HW_CIPHER_NO_CONVERT, + (ctx->auth_op == ICP_QAT_HW_AUTH_GENERATE ? 
+ ICP_QAT_HW_CIPHER_ENCRYPT : + ICP_QAT_HW_CIPHER_DECRYPT)); + QAT_FIELD_SET(cookie->opt.spc_gmac.cd_cipher.cipher_config.val, + ctx->digest_length, + QAT_CIPHER_AEAD_HASH_CMP_LEN_BITPOS, + QAT_CIPHER_AEAD_HASH_CMP_LEN_MASK); + cookie->opt.spc_gmac.cd_cipher.cipher_config.reserved = + ICP_QAT_HW_CIPHER_CONFIG_BUILD_UPPER(auth_data_len); + + /* Update the request */ + req->cd_pars.u.s.content_desc_addr = + cookie->opt.spc_gmac.cd_phys_addr; + req->cd_pars.u.s.content_desc_params_sz = RTE_ALIGN_CEIL( + sizeof(struct icp_qat_hw_cipher_config) + + ctx->auth_key_length, 8) >> 3; + req->comn_mid.src_length = data_len; + req->comn_mid.dst_length = 0; + + cipher_param->spc_aad_addr = 0; + cipher_param->spc_auth_res_addr = digest->iova; + cipher_param->spc_aad_sz = auth_data_len; + cipher_param->reserved = 0; + cipher_param->spc_auth_res_sz = ctx->digest_length; + + req->comn_hdr.service_cmd_id = ICP_QAT_FW_LA_CMD_CIPHER; + cipher_cd_ctrl->cipher_cfg_offset = 0; + ICP_QAT_FW_COMN_CURR_ID_SET(cipher_cd_ctrl, ICP_QAT_FW_SLICE_CIPHER); + ICP_QAT_FW_COMN_NEXT_ID_SET(cipher_cd_ctrl, ICP_QAT_FW_SLICE_DRAM_WR); + ICP_QAT_FW_LA_SINGLE_PASS_PROTO_FLAG_SET( + req->comn_hdr.serv_specif_flags, + ICP_QAT_FW_LA_SINGLE_PASS_PROTO); + ICP_QAT_FW_LA_PROTO_SET( + req->comn_hdr.serv_specif_flags, + ICP_QAT_FW_LA_NO_PROTO); +} + +static int +qat_sym_build_op_aead_gen3(void *in_op, struct qat_sym_session *ctx, + uint8_t *out_msg, void *op_cookie) +{ + register struct icp_qat_fw_la_bulk_req *req; + struct rte_crypto_op *op = in_op; + struct qat_sym_op_cookie *cookie = op_cookie; + struct rte_crypto_sgl in_sgl, out_sgl; + struct rte_crypto_vec in_vec[QAT_SYM_SGL_MAX_NUMBER], + out_vec[QAT_SYM_SGL_MAX_NUMBER]; + struct rte_crypto_va_iova_ptr cipher_iv; + struct rte_crypto_va_iova_ptr aad; + struct rte_crypto_va_iova_ptr digest; + union rte_crypto_sym_ofs ofs; + int32_t total_len; + + in_sgl.vec = in_vec; + out_sgl.vec = out_vec; + + req = (struct icp_qat_fw_la_bulk_req *)out_msg; + rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); + + ofs.raw = qat_sym_convert_op_to_vec_aead(op, ctx, &in_sgl, &out_sgl, + &cipher_iv, &aad, &digest); + if (unlikely(ofs.raw == UINT64_MAX)) { + op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS; + return -EINVAL; + } + + total_len = qat_sym_build_req_set_data(req, in_op, cookie, + in_sgl.vec, in_sgl.num, out_sgl.vec, out_sgl.num); + if (unlikely(total_len < 0)) { + op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS; + return -EINVAL; + } + + enqueue_one_aead_job_gen3(ctx, req, &cipher_iv, &digest, &aad, ofs, + total_len); + +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + qat_sym_debug_log_dump(req, ctx, in_sgl.vec, in_sgl.num, &cipher_iv, + NULL, &aad, &digest); +#endif + + return 0; +} + +static int +qat_sym_build_op_auth_gen3(void *in_op, struct qat_sym_session *ctx, + uint8_t *out_msg, void *op_cookie) +{ + register struct icp_qat_fw_la_bulk_req *req; + struct rte_crypto_op *op = in_op; + struct qat_sym_op_cookie *cookie = op_cookie; + struct rte_crypto_sgl in_sgl, out_sgl; + struct rte_crypto_vec in_vec[QAT_SYM_SGL_MAX_NUMBER], + out_vec[QAT_SYM_SGL_MAX_NUMBER]; + struct rte_crypto_va_iova_ptr auth_iv; + struct rte_crypto_va_iova_ptr digest; + union rte_crypto_sym_ofs ofs; + int32_t total_len; + + in_sgl.vec = in_vec; + out_sgl.vec = out_vec; + + req = (struct icp_qat_fw_la_bulk_req *)out_msg; + rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); + + ofs.raw = qat_sym_convert_op_to_vec_auth(op, ctx, &in_sgl, &out_sgl, + NULL, &auth_iv, &digest); + if (unlikely(ofs.raw == UINT64_MAX)) 
{ + op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS; + return -EINVAL; + } + + total_len = qat_sym_build_req_set_data(req, in_op, cookie, + in_sgl.vec, in_sgl.num, out_sgl.vec, out_sgl.num); + if (unlikely(total_len < 0)) { + op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS; + return -EINVAL; + } + + enqueue_one_auth_job_gen3(ctx, cookie, req, &digest, &auth_iv, + ofs, total_len); + +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + qat_sym_debug_log_dump(req, ctx, in_sgl.vec, in_sgl.num, NULL, + &auth_iv, NULL, &digest); +#endif + + return 0; +} + +static int +qat_sym_crypto_set_session_gen3(void *cdev __rte_unused, void *session) +{ + struct qat_sym_session *ctx = session; + enum rte_proc_type_t proc_type = rte_eal_process_type(); + int ret; + + ret = qat_sym_crypto_set_session_gen1(cdev, session); + /* special single pass build request for GEN3 */ + if (ctx->is_single_pass) + ctx->build_request[proc_type] = qat_sym_build_op_aead_gen3; + else if (ctx->is_single_pass_gmac) + ctx->build_request[proc_type] = qat_sym_build_op_auth_gen3; + + if (ret == 0) + return ret; + + /* GEN1 returning -ENOTSUP as it cannot handle some mixed algo, + * this is addressed by GEN3 + */ + if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3 && + ctx->qat_cipher_alg != + ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3) { + qat_sym_session_set_ext_hash_flags_gen2(ctx, + 1 << ICP_QAT_FW_AUTH_HDR_FLAG_ZUC_EIA3_BITPOS); + } else if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2 && + ctx->qat_cipher_alg != + ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2) { + qat_sym_session_set_ext_hash_flags_gen2(ctx, + 1 << ICP_QAT_FW_AUTH_HDR_FLAG_SNOW3G_UIA2_BITPOS); + } else if ((ctx->aes_cmac || + ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL) && + (ctx->qat_cipher_alg == + ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2 || + ctx->qat_cipher_alg == + ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3)) { + qat_sym_session_set_ext_hash_flags_gen2(ctx, 0); + } + + return 0; +} + RTE_INIT(qat_sym_crypto_gen3_init) { qat_sym_gen_dev_ops[QAT_GEN3].cryptodev_ops = &qat_sym_crypto_ops_gen1; @@ -150,6 +400,8 @@ RTE_INIT(qat_sym_crypto_gen3_init) qat_sym_crypto_cap_get_gen3; qat_sym_gen_dev_ops[QAT_GEN3].get_feature_flags = qat_sym_crypto_feature_flags_get_gen1; + qat_sym_gen_dev_ops[QAT_GEN3].set_session = + qat_sym_crypto_set_session_gen3; #ifdef RTE_LIB_SECURITY qat_sym_gen_dev_ops[QAT_GEN3].create_security_ctx = qat_sym_create_security_gen1; @@ -161,4 +413,5 @@ RTE_INIT(qat_asym_crypto_gen3_init) qat_asym_gen_dev_ops[QAT_GEN3].cryptodev_ops = NULL; qat_asym_gen_dev_ops[QAT_GEN3].get_capabilities = NULL; qat_asym_gen_dev_ops[QAT_GEN3].get_feature_flags = NULL; + qat_asym_gen_dev_ops[QAT_GEN3].set_session = NULL; } diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c index 37a58c026f..fa6a7ee2fd 100644 --- a/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c +++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c @@ -103,11 +103,131 @@ qat_sym_crypto_cap_get_gen4(struct qat_pci_device *qat_dev __rte_unused) return capa_info; } +static __rte_always_inline void +enqueue_one_aead_job_gen4(struct qat_sym_session *ctx, + struct icp_qat_fw_la_bulk_req *req, + struct rte_crypto_va_iova_ptr *iv, + struct rte_crypto_va_iova_ptr *digest, + struct rte_crypto_va_iova_ptr *aad, + union rte_crypto_sym_ofs ofs, uint32_t data_len) +{ + if (ctx->is_single_pass && ctx->is_ucs) { + struct icp_qat_fw_la_cipher_20_req_params *cipher_param_20 = + (void *)&req->serv_specif_rqpars; + struct icp_qat_fw_la_cipher_req_params *cipher_param = + (void 
*)&req->serv_specif_rqpars; + + /* QAT GEN4 uses single pass to treat AEAD as cipher + * operation + */ + qat_set_cipher_iv(cipher_param, iv, ctx->cipher_iv.length, + req); + cipher_param->cipher_offset = ofs.ofs.cipher.head; + cipher_param->cipher_length = data_len - + ofs.ofs.cipher.head - ofs.ofs.cipher.tail; + + cipher_param_20->spc_aad_addr = aad->iova; + cipher_param_20->spc_auth_res_addr = digest->iova; + + return; + } + + enqueue_one_aead_job_gen1(ctx, req, iv, digest, aad, ofs, data_len); +} + +static int +qat_sym_build_op_aead_gen4(void *in_op, struct qat_sym_session *ctx, + uint8_t *out_msg, void *op_cookie) +{ + register struct icp_qat_fw_la_bulk_req *qat_req; + struct rte_crypto_op *op = in_op; + struct qat_sym_op_cookie *cookie = op_cookie; + struct rte_crypto_sgl in_sgl, out_sgl; + struct rte_crypto_vec in_vec[QAT_SYM_SGL_MAX_NUMBER], + out_vec[QAT_SYM_SGL_MAX_NUMBER]; + struct rte_crypto_va_iova_ptr cipher_iv; + struct rte_crypto_va_iova_ptr aad; + struct rte_crypto_va_iova_ptr digest; + union rte_crypto_sym_ofs ofs; + int32_t total_len; + + in_sgl.vec = in_vec; + out_sgl.vec = out_vec; + + qat_req = (struct icp_qat_fw_la_bulk_req *)out_msg; + rte_mov128((uint8_t *)qat_req, (const uint8_t *)&(ctx->fw_req)); + + ofs.raw = qat_sym_convert_op_to_vec_aead(op, ctx, &in_sgl, &out_sgl, + &cipher_iv, &aad, &digest); + if (unlikely(ofs.raw == UINT64_MAX)) { + op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS; + return -EINVAL; + } + + total_len = qat_sym_build_req_set_data(qat_req, in_op, cookie, + in_sgl.vec, in_sgl.num, out_sgl.vec, out_sgl.num); + if (unlikely(total_len < 0)) { + op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS; + return -EINVAL; + } + + enqueue_one_aead_job_gen4(ctx, qat_req, &cipher_iv, &digest, &aad, ofs, + total_len); + +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + qat_sym_debug_log_dump(qat_req, ctx, in_sgl.vec, in_sgl.num, &cipher_iv, + NULL, &aad, &digest); +#endif + + return 0; +} + +static int +qat_sym_crypto_set_session_gen4(void *cdev, void *session) +{ + struct qat_sym_session *ctx = session; + enum rte_proc_type_t proc_type = rte_eal_process_type(); + int ret; + + ret = qat_sym_crypto_set_session_gen1(cdev, session); + /* special single pass build request for GEN4 */ + if (ctx->is_single_pass && ctx->is_ucs) + ctx->build_request[proc_type] = qat_sym_build_op_aead_gen4; + if (ret == 0) + return ret; + + /* GEN1 returning -ENOTSUP as it cannot handle some mixed algo, + * this is addressed by GEN4 + */ + if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3 && + ctx->qat_cipher_alg != + ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3) { + qat_sym_session_set_ext_hash_flags_gen2(ctx, + 1 << ICP_QAT_FW_AUTH_HDR_FLAG_ZUC_EIA3_BITPOS); + } else if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2 && + ctx->qat_cipher_alg != + ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2) { + qat_sym_session_set_ext_hash_flags_gen2(ctx, + 1 << ICP_QAT_FW_AUTH_HDR_FLAG_SNOW3G_UIA2_BITPOS); + } else if ((ctx->aes_cmac || + ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL) && + (ctx->qat_cipher_alg == + ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2 || + ctx->qat_cipher_alg == + ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3)) { + qat_sym_session_set_ext_hash_flags_gen2(ctx, 0); + } + + return 0; +} + RTE_INIT(qat_sym_crypto_gen4_init) { qat_sym_gen_dev_ops[QAT_GEN4].cryptodev_ops = &qat_sym_crypto_ops_gen1; qat_sym_gen_dev_ops[QAT_GEN4].get_capabilities = qat_sym_crypto_cap_get_gen4; + qat_sym_gen_dev_ops[QAT_GEN4].set_session = + qat_sym_crypto_set_session_gen4; qat_sym_gen_dev_ops[QAT_GEN4].get_feature_flags 
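/*
 * Editorial annotation, not in the original patch: set_session is the new
 * per-generation hook in qat_sym_gen_dev_ops[]. The GEN4 handler above
 * first calls the GEN1 handler to pick a default build_request callback,
 * then overrides ctx->build_request[] with the single-pass AEAD builder
 * for UCS sessions; indexing the array by rte_eal_process_type()
 * presumably lets primary and secondary processes each keep their own
 * callback pointer, since the function address can differ per process.
 */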
= qat_sym_crypto_feature_flags_get_gen1; #ifdef RTE_LIB_SECURITY @@ -121,4 +241,5 @@ RTE_INIT(qat_asym_crypto_gen4_init) qat_asym_gen_dev_ops[QAT_GEN4].cryptodev_ops = NULL; qat_asym_gen_dev_ops[QAT_GEN4].get_capabilities = NULL; qat_asym_gen_dev_ops[QAT_GEN4].get_feature_flags = NULL; + qat_asym_gen_dev_ops[QAT_GEN4].set_session = NULL; } diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c index 5b96e626f8..bcdf0172fe 100644 --- a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c +++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c @@ -6,6 +6,7 @@ #ifdef RTE_LIB_SECURITY #include #endif +#include #include "adf_transport_access_macros.h" #include "icp_qat_fw.h" @@ -454,12 +455,73 @@ qat_sym_create_security_gen1(void *cryptodev) } #endif +int +qat_sym_crypto_set_session_gen1(void *cryptodev __rte_unused, void *session) +{ + struct qat_sym_session *ctx = session; + qat_sym_build_request_t build_request = NULL; + enum rte_proc_type_t proc_type = rte_eal_process_type(); + int handle_mixed = 0; + + if ((ctx->qat_cmd == ICP_QAT_FW_LA_CMD_HASH_CIPHER || + ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_HASH) && + !ctx->is_gmac) { + /* AES-GCM or AES-CCM */ + if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128 || + ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64 || + (ctx->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_AES128 + && ctx->qat_mode == ICP_QAT_HW_CIPHER_CTR_MODE + && ctx->qat_hash_alg == + ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC)) { + /* do_aead = 1; */ + build_request = qat_sym_build_op_aead_gen1; + } else { + /* do_auth = 1; do_cipher = 1; */ + build_request = qat_sym_build_op_chain_gen1; + handle_mixed = 1; + } + } else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_AUTH || ctx->is_gmac) { + /* do_auth = 1; do_cipher = 0;*/ + build_request = qat_sym_build_op_auth_gen1; + } else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER) { + /* do_auth = 0; do_cipher = 1; */ + build_request = qat_sym_build_op_cipher_gen1; + } + + if (!build_request) + return 0; + ctx->build_request[proc_type] = build_request; + + if (!handle_mixed) + return 0; + + if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3 && + ctx->qat_cipher_alg != + ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3) { + return -ENOTSUP; + } else if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2 && + ctx->qat_cipher_alg != + ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2) { + return -ENOTSUP; + } else if ((ctx->aes_cmac || + ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL) && + (ctx->qat_cipher_alg == + ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2 || + ctx->qat_cipher_alg == + ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3)) { + return -ENOTSUP; + } + + return 0; +} RTE_INIT(qat_sym_crypto_gen1_init) { qat_sym_gen_dev_ops[QAT_GEN1].cryptodev_ops = &qat_sym_crypto_ops_gen1; qat_sym_gen_dev_ops[QAT_GEN1].get_capabilities = qat_sym_crypto_cap_get_gen1; + qat_sym_gen_dev_ops[QAT_GEN1].set_session = + qat_sym_crypto_set_session_gen1; qat_sym_gen_dev_ops[QAT_GEN1].get_feature_flags = qat_sym_crypto_feature_flags_get_gen1; #ifdef RTE_LIB_SECURITY diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c index 84c26a8062..ee878e47f1 100644 --- a/drivers/crypto/qat/qat_crypto.c +++ b/drivers/crypto/qat/qat_crypto.c @@ -2,6 +2,7 @@ * Copyright(c) 2021 Intel Corporation */ +#include #include "qat_device.h" #include "qat_qp.h" #include "qat_crypto.h" diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h index 6eaa15b975..a32c716d28 100644 --- a/drivers/crypto/qat/qat_crypto.h +++ 
b/drivers/crypto/qat/qat_crypto.h @@ -48,15 +48,20 @@ typedef uint64_t (*get_feature_flags_t)(struct qat_pci_device *qat_dev); typedef void * (*create_security_ctx_t)(void *cryptodev); +typedef int (*set_session_t)(void *cryptodev, void *session); + struct qat_crypto_gen_dev_ops { get_feature_flags_t get_feature_flags; get_capabilities_info_t get_capabilities; struct rte_cryptodev_ops *cryptodev_ops; + set_session_t set_session; #ifdef RTE_LIB_SECURITY create_security_ctx_t create_security_ctx; #endif }; +extern struct qat_crypto_gen_dev_ops qat_sym_gen_dev_ops[]; + int qat_cryptodev_config(struct rte_cryptodev *dev, struct rte_cryptodev_config *config); diff --git a/drivers/crypto/qat/qat_sym.c b/drivers/crypto/qat/qat_sym.c index 93b257522b..7481607ff8 100644 --- a/drivers/crypto/qat/qat_sym.c +++ b/drivers/crypto/qat/qat_sym.c @@ -212,7 +212,7 @@ handle_spc_gmac(struct qat_sym_session *ctx, struct rte_crypto_op *op, int qat_sym_build_request(void *in_op, uint8_t *out_msg, - void *op_cookie, enum qat_device_gen qat_dev_gen) + void *op_cookie, __rte_unused enum qat_device_gen qat_dev_gen) { int ret = 0; struct qat_sym_session *ctx = NULL; @@ -277,12 +277,6 @@ qat_sym_build_request(void *in_op, uint8_t *out_msg, return -EINVAL; } - if (unlikely(ctx->min_qat_dev_gen > qat_dev_gen)) { - QAT_DP_LOG(ERR, "Session alg not supported on this device gen"); - op->status = RTE_CRYPTO_OP_STATUS_INVALID_SESSION; - return -EINVAL; - } - qat_req = (struct icp_qat_fw_la_bulk_req *)out_msg; rte_mov128((uint8_t *)qat_req, (const uint8_t *)&(ctx->fw_req)); qat_req->comn_mid.opaque_data = (uint64_t)(uintptr_t)op; diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c index 8ca475ca8b..52837d7c9c 100644 --- a/drivers/crypto/qat/qat_sym_session.c +++ b/drivers/crypto/qat/qat_sym_session.c @@ -486,80 +486,6 @@ qat_sym_session_configure(struct rte_cryptodev *dev, return 0; } -static void -qat_sym_session_set_ext_hash_flags(struct qat_sym_session *session, - uint8_t hash_flag) -{ - struct icp_qat_fw_comn_req_hdr *header = &session->fw_req.comn_hdr; - struct icp_qat_fw_cipher_auth_cd_ctrl_hdr *cd_ctrl = - (struct icp_qat_fw_cipher_auth_cd_ctrl_hdr *) - session->fw_req.cd_ctrl.content_desc_ctrl_lw; - - /* Set the Use Extended Protocol Flags bit in LW 1 */ - QAT_FIELD_SET(header->comn_req_flags, - QAT_COMN_EXT_FLAGS_USED, - QAT_COMN_EXT_FLAGS_BITPOS, - QAT_COMN_EXT_FLAGS_MASK); - - /* Set Hash Flags in LW 28 */ - cd_ctrl->hash_flags |= hash_flag; - - /* Set proto flags in LW 1 */ - switch (session->qat_cipher_alg) { - case ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2: - ICP_QAT_FW_LA_PROTO_SET(header->serv_specif_flags, - ICP_QAT_FW_LA_SNOW_3G_PROTO); - ICP_QAT_FW_LA_ZUC_3G_PROTO_FLAG_SET( - header->serv_specif_flags, 0); - break; - case ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3: - ICP_QAT_FW_LA_PROTO_SET(header->serv_specif_flags, - ICP_QAT_FW_LA_NO_PROTO); - ICP_QAT_FW_LA_ZUC_3G_PROTO_FLAG_SET( - header->serv_specif_flags, - ICP_QAT_FW_LA_ZUC_3G_PROTO); - break; - default: - ICP_QAT_FW_LA_PROTO_SET(header->serv_specif_flags, - ICP_QAT_FW_LA_NO_PROTO); - ICP_QAT_FW_LA_ZUC_3G_PROTO_FLAG_SET( - header->serv_specif_flags, 0); - break; - } -} - -static void -qat_sym_session_handle_mixed(const struct rte_cryptodev *dev, - struct qat_sym_session *session) -{ - const struct qat_cryptodev_private *qat_private = - dev->data->dev_private; - enum qat_device_gen min_dev_gen = (qat_private->internal_capabilities & - QAT_SYM_CAP_MIXED_CRYPTO) ? 
QAT_GEN2 : QAT_GEN3; - - if (session->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3 && - session->qat_cipher_alg != - ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3) { - session->min_qat_dev_gen = min_dev_gen; - qat_sym_session_set_ext_hash_flags(session, - 1 << ICP_QAT_FW_AUTH_HDR_FLAG_ZUC_EIA3_BITPOS); - } else if (session->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2 && - session->qat_cipher_alg != - ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2) { - session->min_qat_dev_gen = min_dev_gen; - qat_sym_session_set_ext_hash_flags(session, - 1 << ICP_QAT_FW_AUTH_HDR_FLAG_SNOW3G_UIA2_BITPOS); - } else if ((session->aes_cmac || - session->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL) && - (session->qat_cipher_alg == - ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2 || - session->qat_cipher_alg == - ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3)) { - session->min_qat_dev_gen = min_dev_gen; - qat_sym_session_set_ext_hash_flags(session, 0); - } -} - int qat_sym_session_set_parameters(struct rte_cryptodev *dev, struct rte_crypto_sym_xform *xform, void *session_private) @@ -569,7 +495,6 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev, enum qat_device_gen qat_dev_gen = internals->qat_dev->qat_dev_gen; int ret; int qat_cmd_id; - int handle_mixed = 0; /* Verify the session physical address is known */ rte_iova_t session_paddr = rte_mempool_virt2iova(session); @@ -584,7 +509,7 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev, session->cd_paddr = session_paddr + offsetof(struct qat_sym_session, cd); - session->min_qat_dev_gen = QAT_GEN1; + session->dev_id = internals->dev_id; session->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_NONE; session->is_ucs = 0; @@ -625,7 +550,6 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev, xform, session); if (ret < 0) return ret; - handle_mixed = 1; } break; case ICP_QAT_FW_LA_CMD_HASH_CIPHER: @@ -643,7 +567,6 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev, xform, session); if (ret < 0) return ret; - handle_mixed = 1; } break; case ICP_QAT_FW_LA_CMD_TRNG_GET_RANDOM: @@ -664,12 +587,9 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev, return -ENOTSUP; } qat_sym_session_finalize(session); - if (handle_mixed) { - /* Special handling of mixed hash+cipher algorithms */ - qat_sym_session_handle_mixed(dev, session); - } - return 0; + return qat_sym_gen_dev_ops[qat_dev_gen].set_session((void *)dev, + (void *)session); } static int @@ -678,7 +598,6 @@ qat_sym_session_handle_single_pass(struct qat_sym_session *session, { session->is_single_pass = 1; session->is_auth = 1; - session->min_qat_dev_gen = QAT_GEN3; session->qat_cmd = ICP_QAT_FW_LA_CMD_CIPHER; /* Chacha-Poly is special case that use QAT CTR mode */ if (aead_xform->algo == RTE_CRYPTO_AEAD_AES_GCM) { @@ -1205,9 +1124,10 @@ static int partial_hash_md5(uint8_t *data_in, uint8_t *data_out) return 0; } -static int partial_hash_compute(enum icp_qat_hw_auth_algo hash_alg, - uint8_t *data_in, - uint8_t *data_out) +static int +partial_hash_compute(enum icp_qat_hw_auth_algo hash_alg, + uint8_t *data_in, + uint8_t *data_out) { int digest_size; uint8_t digest[qat_hash_get_digest_size( @@ -1654,7 +1574,6 @@ int qat_sym_cd_cipher_set(struct qat_sym_session *cdesc, cipher_cd_ctrl->cipher_state_sz = ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ >> 3; cdesc->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_ZUC; - cdesc->min_qat_dev_gen = QAT_GEN2; } else { total_key_size = cipherkeylen; cipher_cd_ctrl->cipher_state_sz = ICP_QAT_HW_AES_BLK_SZ >> 3; @@ -2002,7 +1921,6 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc, 
memcpy(cdesc->cd_cur_ptr + state1_size, authkey, authkeylen); cd_extra_size += ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ; auth_param->hash_state_sz = ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ >> 3; - cdesc->min_qat_dev_gen = QAT_GEN2; break; case ICP_QAT_HW_AUTH_ALGO_MD5: @@ -2263,8 +2181,6 @@ qat_sec_session_set_docsis_parameters(struct rte_cryptodev *dev, session->cd_paddr = session_paddr + offsetof(struct qat_sym_session, cd); - session->min_qat_dev_gen = QAT_GEN1; - /* Get requested QAT command id - should be cipher */ qat_cmd_id = qat_get_cmd_id(xform); if (qat_cmd_id != ICP_QAT_FW_LA_CMD_CIPHER) { @@ -2289,6 +2205,9 @@ qat_security_session_create(void *dev, { void *sess_private_data; struct rte_cryptodev *cdev = (struct rte_cryptodev *)dev; + struct qat_cryptodev_private *internals = cdev->data->dev_private; + enum qat_device_gen qat_dev_gen = internals->qat_dev->qat_dev_gen; + struct qat_sym_session *sym_session = NULL; int ret; if (conf->action_type != RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL || @@ -2312,8 +2231,11 @@ qat_security_session_create(void *dev, } set_sec_session_private_data(sess, sess_private_data); + sym_session = (struct qat_sym_session *)sess_private_data; + sym_session->dev_id = internals->dev_id; - return ret; + return qat_sym_gen_dev_ops[qat_dev_gen].set_session((void *)cdev, + sess_private_data); } int diff --git a/drivers/crypto/qat/qat_sym_session.h b/drivers/crypto/qat/qat_sym_session.h index 33f977a4e3..581bce3790 100644 --- a/drivers/crypto/qat/qat_sym_session.h +++ b/drivers/crypto/qat/qat_sym_session.h @@ -100,7 +100,7 @@ struct qat_sym_session { uint16_t auth_key_length; uint16_t digest_length; rte_spinlock_t lock; /* protects this struct */ - enum qat_device_gen min_qat_dev_gen; + uint16_t dev_id; uint8_t aes_cmac; uint8_t is_single_pass; uint8_t is_single_pass_gmac; From patchwork Fri Nov 5 00:19:25 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ji, Kai" X-Patchwork-Id: 103814 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 79412A0C61; Fri, 5 Nov 2021 01:20:03 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 2C4CA427AD; Fri, 5 Nov 2021 01:19:47 +0100 (CET) Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by mails.dpdk.org (Postfix) with ESMTP id 82B024279B for ; Fri, 5 Nov 2021 01:19:42 +0100 (CET) X-IronPort-AV: E=McAfee;i="6200,9189,10158"; a="212563722" X-IronPort-AV: E=Sophos;i="5.87,209,1631602800"; d="scan'208";a="212563722" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Nov 2021 17:19:42 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.87,209,1631602800"; d="scan'208";a="468671802" Received: from silpixa00400272.ir.intel.com (HELO silpixa00400272.ger.corp.intel.com) ([10.237.223.111]) by orsmga002.jf.intel.com with ESMTP; 04 Nov 2021 17:19:40 -0700 From: Kai Ji To: dev@dpdk.org Cc: gakhil@marvell.com, Kai Ji Date: Fri, 5 Nov 2021 00:19:25 +0000 Message-Id: <20211105001932.28784-5-kai.ji@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211105001932.28784-1-kai.ji@intel.com> References: <20211102134913.21148-1-kai.ji@intel.com> <20211105001932.28784-1-kai.ji@intel.com> Subject: [dpdk-dev] [dpdk-dev v4 04/11] crypto/qat: asym build op request 
specific implementation X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch add-in specific asymmetric crypto session and build op request implementation for QAT generations Signed-off-by: Kai Ji --- drivers/common/qat/qat_qp.c | 5 +- drivers/crypto/qat/dev/qat_asym_pmd_gen1.c | 7 + drivers/crypto/qat/dev/qat_crypto_pmd_gens.h | 3 + drivers/crypto/qat/qat_asym.c | 620 ++++++++++--------- drivers/crypto/qat/qat_asym.h | 65 +- 5 files changed, 389 insertions(+), 311 deletions(-) diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c index a026b90ca7..a3db053c82 100644 --- a/drivers/common/qat/qat_qp.c +++ b/drivers/common/qat/qat_qp.c @@ -622,7 +622,7 @@ qat_enqueue_op_burst(void *qp, #ifdef BUILD_QAT_ASYM ret = qat_asym_build_request(*ops, base_addr + tail, tmp_qp->op_cookies[tail >> queue->trailz], - tmp_qp->qat_dev_gen); + NULL, tmp_qp->qat_dev_gen); #endif } if (ret != 0) { @@ -850,7 +850,8 @@ qat_dequeue_op_burst(void *qp, void **ops, #ifdef BUILD_QAT_ASYM else if (tmp_qp->service_type == QAT_SERVICE_ASYMMETRIC) qat_asym_process_response(ops, resp_msg, - tmp_qp->op_cookies[head >> rx_queue->trailz]); + tmp_qp->op_cookies[head >> rx_queue->trailz], + NULL); #endif head = adf_modulo(head + rx_queue->msg_size, diff --git a/drivers/crypto/qat/dev/qat_asym_pmd_gen1.c b/drivers/crypto/qat/dev/qat_asym_pmd_gen1.c index 9ed1f21d9d..99d90fa56c 100644 --- a/drivers/crypto/qat/dev/qat_asym_pmd_gen1.c +++ b/drivers/crypto/qat/dev/qat_asym_pmd_gen1.c @@ -65,6 +65,13 @@ qat_asym_crypto_feature_flags_get_gen1( return feature_flags; } +int +qat_asym_crypto_set_session_gen1(void *cdev __rte_unused, + void *session __rte_unused) +{ + return 0; +} + RTE_INIT(qat_asym_crypto_gen1_init) { qat_asym_gen_dev_ops[QAT_GEN1].cryptodev_ops = diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h index 783fdcaa34..6277211b35 100644 --- a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h +++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h @@ -853,6 +853,9 @@ qat_sym_session_set_ext_hash_flags_gen2(struct qat_sym_session *session, struct qat_capabilities_info qat_asym_crypto_cap_get_gen1(struct qat_pci_device *qat_dev); +int +qat_asym_crypto_set_session_gen1(void *cryptodev, void *session); + uint64_t qat_asym_crypto_feature_flags_get_gen1(struct qat_pci_device *qat_dev); diff --git a/drivers/crypto/qat/qat_asym.c b/drivers/crypto/qat/qat_asym.c index f893508030..e2a5fb9260 100644 --- a/drivers/crypto/qat/qat_asym.c +++ b/drivers/crypto/qat/qat_asym.c @@ -1,69 +1,115 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2019 Intel Corporation + * Copyright(c) 2019 - 2021 Intel Corporation */ #include -#include "qat_asym.h" +#include + #include "icp_qat_fw_pke.h" #include "icp_qat_fw.h" #include "qat_pke_functionality_arrays.h" -#define qat_asym_sz_2param(arg) (arg, sizeof(arg)/sizeof(*arg)) +#include "qat_device.h" -static int qat_asym_get_sz_and_func_id(const uint32_t arr[][2], - size_t arr_sz, size_t *size, uint32_t *func_id) +#include "qat_logs.h" +#include "qat_asym.h" + +int +qat_asym_session_configure(struct rte_cryptodev *dev, + struct rte_crypto_asym_xform *xform, + struct rte_cryptodev_asym_session *sess, + struct rte_mempool *mempool) { - size_t i; + int err = 0; + void *sess_private_data; + struct qat_asym_session *session; - for (i = 0; i < arr_sz; i++) { - 
if (*size <= arr[i][0]) { - *size = arr[i][0]; - *func_id = arr[i][1]; - return 0; + if (rte_mempool_get(mempool, &sess_private_data)) { + QAT_LOG(ERR, + "Couldn't get object from session mempool"); + return -ENOMEM; + } + + session = sess_private_data; + if (xform->xform_type == RTE_CRYPTO_ASYM_XFORM_MODEX) { + if (xform->modex.exponent.length == 0 || + xform->modex.modulus.length == 0) { + QAT_LOG(ERR, "Invalid mod exp input parameter"); + err = -EINVAL; + goto error; + } + } else if (xform->xform_type == RTE_CRYPTO_ASYM_XFORM_MODINV) { + if (xform->modinv.modulus.length == 0) { + QAT_LOG(ERR, "Invalid mod inv input parameter"); + err = -EINVAL; + goto error; } + } else if (xform->xform_type == RTE_CRYPTO_ASYM_XFORM_RSA) { + if (xform->rsa.n.length == 0) { + QAT_LOG(ERR, "Invalid rsa input parameter"); + err = -EINVAL; + goto error; + } + } else if (xform->xform_type >= RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END + || xform->xform_type <= RTE_CRYPTO_ASYM_XFORM_NONE) { + QAT_LOG(ERR, "Invalid asymmetric crypto xform"); + err = -EINVAL; + goto error; + } else { + QAT_LOG(ERR, "Asymmetric crypto xform not implemented"); + err = -EINVAL; + goto error; } - return -1; -} -static inline void qat_fill_req_tmpl(struct icp_qat_fw_pke_request *qat_req) -{ - memset(qat_req, 0, sizeof(*qat_req)); - qat_req->pke_hdr.service_type = ICP_QAT_FW_COMN_REQ_CPM_FW_PKE; + session->xform = xform; + qat_asym_build_req_tmpl(sess_private_data); + set_asym_session_private_data(sess, dev->driver_id, + sess_private_data); - qat_req->pke_hdr.hdr_flags = - ICP_QAT_FW_COMN_HDR_FLAGS_BUILD - (ICP_QAT_FW_COMN_REQ_FLAG_SET); + return 0; +error: + rte_mempool_put(mempool, sess_private_data); + return err; } -static inline void qat_asym_build_req_tmpl(void *sess_private_data) +unsigned int +qat_asym_session_get_private_size( + struct rte_cryptodev *dev __rte_unused) { - struct icp_qat_fw_pke_request *qat_req; - struct qat_asym_session *session = sess_private_data; - - qat_req = &session->req_tmpl; - qat_fill_req_tmpl(qat_req); + return RTE_ALIGN_CEIL(sizeof(struct qat_asym_session), 8); } -static size_t max_of(int n, ...) +void +qat_asym_session_clear(struct rte_cryptodev *dev, + struct rte_cryptodev_asym_session *sess) { - va_list args; - size_t len = 0, num; - int i; + uint8_t index = dev->driver_id; + void *sess_priv = get_asym_session_private_data(sess, index); + struct qat_asym_session *s = (struct qat_asym_session *)sess_priv; - va_start(args, n); - len = va_arg(args, size_t); + if (sess_priv) { + memset(s, 0, qat_asym_session_get_private_size(dev)); + struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv); - for (i = 0; i < n - 1; i++) { - num = va_arg(args, size_t); - if (num > len) - len = num; + set_asym_session_private_data(sess, index, NULL); + rte_mempool_put(sess_mp, sess_priv); } - va_end(args); - - return len; } +/* An rte_driver is needed in the registration of both the device and the driver + * with cryptodev. + * The actual qat pci's rte_driver can't be used as its name represents + * the whole pci device with all services. Think of this as a holder for a name + * for the crypto part of the pci device. 
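 * (Editorial annotation, not in the original patch: this name-holder
 *  driver is what gets passed to RTE_PMD_REGISTER_CRYPTO_DRIVER() further
 *  down in this file, alongside qat_asym_driver_id.)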
+ */ +static const char qat_asym_drv_name[] = RTE_STR(CRYPTODEV_NAME_QAT_ASYM_PMD); +static const struct rte_driver cryptodev_qat_asym_driver = { + .name = qat_asym_drv_name, + .alias = qat_asym_drv_name +}; + + static void qat_clear_arrays(struct qat_asym_op_cookie *cookie, int in_count, int out_count, int alg_size) { @@ -106,7 +152,230 @@ static void qat_clear_arrays_by_alg(struct qat_asym_op_cookie *cookie, } } -static int qat_asym_check_nonzero(rte_crypto_param n) +static void qat_asym_collect_response(struct rte_crypto_op *rx_op, + struct qat_asym_op_cookie *cookie, + struct rte_crypto_asym_xform *xform) +{ + size_t alg_size, alg_size_in_bytes = 0; + struct rte_crypto_asym_op *asym_op = rx_op->asym; + + if (xform->xform_type == RTE_CRYPTO_ASYM_XFORM_MODEX) { + rte_crypto_param n = xform->modex.modulus; + + alg_size = cookie->alg_size; + alg_size_in_bytes = alg_size >> 3; + uint8_t *modexp_result = asym_op->modex.result.data; + + if (rx_op->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED) { + rte_memcpy(modexp_result + + (asym_op->modex.result.length - + n.length), + cookie->output_array[0] + alg_size_in_bytes + - n.length, n.length + ); + rx_op->status = RTE_CRYPTO_OP_STATUS_SUCCESS; +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + QAT_DP_HEXDUMP_LOG(DEBUG, "ModExp result", + cookie->output_array[0], + alg_size_in_bytes); + +#endif + } + } else if (xform->xform_type == RTE_CRYPTO_ASYM_XFORM_MODINV) { + rte_crypto_param n = xform->modinv.modulus; + + alg_size = cookie->alg_size; + alg_size_in_bytes = alg_size >> 3; + uint8_t *modinv_result = asym_op->modinv.result.data; + + if (rx_op->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED) { + rte_memcpy(modinv_result + + (asym_op->modinv.result.length + - n.length), + cookie->output_array[0] + alg_size_in_bytes + - n.length, n.length); + rx_op->status = RTE_CRYPTO_OP_STATUS_SUCCESS; +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + QAT_DP_HEXDUMP_LOG(DEBUG, "ModInv result", + cookie->output_array[0], + alg_size_in_bytes); +#endif + } + } else if (xform->xform_type == RTE_CRYPTO_ASYM_XFORM_RSA) { + + alg_size = cookie->alg_size; + alg_size_in_bytes = alg_size >> 3; + if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_ENCRYPT || + asym_op->rsa.op_type == + RTE_CRYPTO_ASYM_OP_VERIFY) { + if (asym_op->rsa.op_type == + RTE_CRYPTO_ASYM_OP_ENCRYPT) { + uint8_t *rsa_result = asym_op->rsa.cipher.data; + + rte_memcpy(rsa_result, + cookie->output_array[0], + alg_size_in_bytes); + rx_op->status = RTE_CRYPTO_OP_STATUS_SUCCESS; +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + QAT_DP_HEXDUMP_LOG(DEBUG, "RSA Encrypted data", + cookie->output_array[0], + alg_size_in_bytes); +#endif + } else if (asym_op->rsa.op_type == + RTE_CRYPTO_ASYM_OP_VERIFY) { + uint8_t *rsa_result = asym_op->rsa.cipher.data; + + switch (asym_op->rsa.pad) { + case RTE_CRYPTO_RSA_PADDING_NONE: + rte_memcpy(rsa_result, + cookie->output_array[0], + alg_size_in_bytes); + rx_op->status = + RTE_CRYPTO_OP_STATUS_SUCCESS; +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + QAT_DP_HEXDUMP_LOG(DEBUG, "RSA Signature", + cookie->output_array[0], + alg_size_in_bytes); +#endif + break; + default: + QAT_LOG(ERR, "Padding not supported"); + rx_op->status = + RTE_CRYPTO_OP_STATUS_ERROR; + break; + } + } + } else { + if (asym_op->rsa.op_type == + RTE_CRYPTO_ASYM_OP_DECRYPT) { + uint8_t *rsa_result = asym_op->rsa.message.data; + + switch (asym_op->rsa.pad) { + case RTE_CRYPTO_RSA_PADDING_NONE: + rte_memcpy(rsa_result, + cookie->output_array[0], + alg_size_in_bytes); + rx_op->status = + RTE_CRYPTO_OP_STATUS_SUCCESS; + break; + default: + QAT_LOG(ERR, 
"Padding not supported"); + rx_op->status = + RTE_CRYPTO_OP_STATUS_ERROR; + break; + } +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + QAT_DP_HEXDUMP_LOG(DEBUG, "RSA Decrypted Message", + rsa_result, alg_size_in_bytes); +#endif + } else if (asym_op->rsa.op_type == + RTE_CRYPTO_ASYM_OP_SIGN) { + uint8_t *rsa_result = asym_op->rsa.sign.data; + + rte_memcpy(rsa_result, + cookie->output_array[0], + alg_size_in_bytes); + rx_op->status = RTE_CRYPTO_OP_STATUS_SUCCESS; +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + QAT_DP_HEXDUMP_LOG(DEBUG, "RSA Signature", + cookie->output_array[0], + alg_size_in_bytes); +#endif + } + } + } + qat_clear_arrays_by_alg(cookie, xform, alg_size_in_bytes); +} + +int +qat_asym_process_response(void __rte_unused * *op, uint8_t *resp, + void *op_cookie, __rte_unused uint64_t *dequeue_err_count) +{ + struct qat_asym_session *ctx; + struct icp_qat_fw_pke_resp *resp_msg = + (struct icp_qat_fw_pke_resp *)resp; + struct rte_crypto_op *rx_op = (struct rte_crypto_op *)(uintptr_t) + (resp_msg->opaque); + struct qat_asym_op_cookie *cookie = op_cookie; + + if (cookie->error) { + cookie->error = 0; + if (rx_op->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED) + rx_op->status = RTE_CRYPTO_OP_STATUS_ERROR; + QAT_DP_LOG(ERR, "Cookie status returned error"); + } else { + if (ICP_QAT_FW_PKE_RESP_PKE_STAT_GET( + resp_msg->pke_resp_hdr.resp_status.pke_resp_flags)) { + if (rx_op->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED) + rx_op->status = RTE_CRYPTO_OP_STATUS_ERROR; + QAT_DP_LOG(ERR, "Asymmetric response status" + " returned error"); + } + if (resp_msg->pke_resp_hdr.resp_status.comn_err_code) { + if (rx_op->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED) + rx_op->status = RTE_CRYPTO_OP_STATUS_ERROR; + QAT_DP_LOG(ERR, "Asymmetric common status" + " returned error"); + } + } + + if (rx_op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) { + ctx = (struct qat_asym_session *)get_asym_session_private_data( + rx_op->asym->session, qat_asym_driver_id); + qat_asym_collect_response(rx_op, cookie, ctx->xform); + } else if (rx_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS) { + qat_asym_collect_response(rx_op, cookie, rx_op->asym->xform); + } + *op = rx_op; + +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + QAT_DP_HEXDUMP_LOG(DEBUG, "resp_msg:", resp_msg, + sizeof(struct icp_qat_fw_pke_resp)); +#endif + + return 1; +} + +#define qat_asym_sz_2param(arg) (arg, sizeof(arg)/sizeof(*arg)) + +static int +qat_asym_get_sz_and_func_id(const uint32_t arr[][2], + size_t arr_sz, size_t *size, uint32_t *func_id) +{ + size_t i; + + for (i = 0; i < arr_sz; i++) { + if (*size <= arr[i][0]) { + *size = arr[i][0]; + *func_id = arr[i][1]; + return 0; + } + } + return -1; +} + +static size_t +max_of(int n, ...) 
+{ + va_list args; + size_t len = 0, num; + int i; + + va_start(args, n); + len = va_arg(args, size_t); + + for (i = 0; i < n - 1; i++) { + num = va_arg(args, size_t); + if (num > len) + len = num; + } + va_end(args); + + return len; +} + +static int +qat_asym_check_nonzero(rte_crypto_param n) { if (n.length < 8) { /* Not a case for any cryptograpic function except for DH @@ -475,10 +744,9 @@ qat_asym_fill_arrays(struct rte_crypto_asym_op *asym_op, } int -qat_asym_build_request(void *in_op, - uint8_t *out_msg, - void *op_cookie, - __rte_unused enum qat_device_gen qat_dev_gen) +qat_asym_build_request(void *in_op, uint8_t *out_msg, void *op_cookie, + __rte_unused uint64_t *opaque, + __rte_unused enum qat_device_gen dev_gen) { struct qat_asym_session *ctx; struct rte_crypto_op *op = (struct rte_crypto_op *)in_op; @@ -545,263 +813,7 @@ qat_asym_build_request(void *in_op, return 0; } -static void qat_asym_collect_response(struct rte_crypto_op *rx_op, - struct qat_asym_op_cookie *cookie, - struct rte_crypto_asym_xform *xform) -{ - size_t alg_size, alg_size_in_bytes = 0; - struct rte_crypto_asym_op *asym_op = rx_op->asym; - - if (xform->xform_type == RTE_CRYPTO_ASYM_XFORM_MODEX) { - rte_crypto_param n = xform->modex.modulus; - - alg_size = cookie->alg_size; - alg_size_in_bytes = alg_size >> 3; - uint8_t *modexp_result = asym_op->modex.result.data; - - if (rx_op->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED) { - rte_memcpy(modexp_result + - (asym_op->modex.result.length - - n.length), - cookie->output_array[0] + alg_size_in_bytes - - n.length, n.length - ); - rx_op->status = RTE_CRYPTO_OP_STATUS_SUCCESS; -#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG - QAT_DP_HEXDUMP_LOG(DEBUG, "ModExp result", - cookie->output_array[0], - alg_size_in_bytes); - -#endif - } - } else if (xform->xform_type == RTE_CRYPTO_ASYM_XFORM_MODINV) { - rte_crypto_param n = xform->modinv.modulus; - - alg_size = cookie->alg_size; - alg_size_in_bytes = alg_size >> 3; - uint8_t *modinv_result = asym_op->modinv.result.data; - - if (rx_op->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED) { - rte_memcpy(modinv_result + (asym_op->modinv.result.length - - n.length), - cookie->output_array[0] + alg_size_in_bytes - - n.length, n.length); - rx_op->status = RTE_CRYPTO_OP_STATUS_SUCCESS; -#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG - QAT_DP_HEXDUMP_LOG(DEBUG, "ModInv result", - cookie->output_array[0], - alg_size_in_bytes); -#endif - } - } else if (xform->xform_type == RTE_CRYPTO_ASYM_XFORM_RSA) { - - alg_size = cookie->alg_size; - alg_size_in_bytes = alg_size >> 3; - if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_ENCRYPT || - asym_op->rsa.op_type == - RTE_CRYPTO_ASYM_OP_VERIFY) { - if (asym_op->rsa.op_type == - RTE_CRYPTO_ASYM_OP_ENCRYPT) { - uint8_t *rsa_result = asym_op->rsa.cipher.data; - - rte_memcpy(rsa_result, - cookie->output_array[0], - alg_size_in_bytes); - rx_op->status = RTE_CRYPTO_OP_STATUS_SUCCESS; -#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG - QAT_DP_HEXDUMP_LOG(DEBUG, "RSA Encrypted data", - cookie->output_array[0], - alg_size_in_bytes); -#endif - } else if (asym_op->rsa.op_type == - RTE_CRYPTO_ASYM_OP_VERIFY) { - uint8_t *rsa_result = asym_op->rsa.cipher.data; - - switch (asym_op->rsa.pad) { - case RTE_CRYPTO_RSA_PADDING_NONE: - rte_memcpy(rsa_result, - cookie->output_array[0], - alg_size_in_bytes); - rx_op->status = - RTE_CRYPTO_OP_STATUS_SUCCESS; -#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG - QAT_DP_HEXDUMP_LOG(DEBUG, "RSA Signature", - cookie->output_array[0], - alg_size_in_bytes); -#endif - break; - default: - QAT_LOG(ERR, "Padding not supported"); 
- rx_op->status = - RTE_CRYPTO_OP_STATUS_ERROR; - break; - } - } - } else { - if (asym_op->rsa.op_type == - RTE_CRYPTO_ASYM_OP_DECRYPT) { - uint8_t *rsa_result = asym_op->rsa.message.data; - - switch (asym_op->rsa.pad) { - case RTE_CRYPTO_RSA_PADDING_NONE: - rte_memcpy(rsa_result, - cookie->output_array[0], - alg_size_in_bytes); - rx_op->status = - RTE_CRYPTO_OP_STATUS_SUCCESS; - break; - default: - QAT_LOG(ERR, "Padding not supported"); - rx_op->status = - RTE_CRYPTO_OP_STATUS_ERROR; - break; - } -#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG - QAT_DP_HEXDUMP_LOG(DEBUG, "RSA Decrypted Message", - rsa_result, alg_size_in_bytes); -#endif - } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_SIGN) { - uint8_t *rsa_result = asym_op->rsa.sign.data; - - rte_memcpy(rsa_result, - cookie->output_array[0], - alg_size_in_bytes); - rx_op->status = RTE_CRYPTO_OP_STATUS_SUCCESS; -#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG - QAT_DP_HEXDUMP_LOG(DEBUG, "RSA Signature", - cookie->output_array[0], - alg_size_in_bytes); -#endif - } - } - } - qat_clear_arrays_by_alg(cookie, xform, alg_size_in_bytes); -} - -void -qat_asym_process_response(void **op, uint8_t *resp, - void *op_cookie) -{ - struct qat_asym_session *ctx; - struct icp_qat_fw_pke_resp *resp_msg = - (struct icp_qat_fw_pke_resp *)resp; - struct rte_crypto_op *rx_op = (struct rte_crypto_op *)(uintptr_t) - (resp_msg->opaque); - struct qat_asym_op_cookie *cookie = op_cookie; - - if (cookie->error) { - cookie->error = 0; - if (rx_op->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED) - rx_op->status = RTE_CRYPTO_OP_STATUS_ERROR; - QAT_DP_LOG(ERR, "Cookie status returned error"); - } else { - if (ICP_QAT_FW_PKE_RESP_PKE_STAT_GET( - resp_msg->pke_resp_hdr.resp_status.pke_resp_flags)) { - if (rx_op->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED) - rx_op->status = RTE_CRYPTO_OP_STATUS_ERROR; - QAT_DP_LOG(ERR, "Asymmetric response status" - " returned error"); - } - if (resp_msg->pke_resp_hdr.resp_status.comn_err_code) { - if (rx_op->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED) - rx_op->status = RTE_CRYPTO_OP_STATUS_ERROR; - QAT_DP_LOG(ERR, "Asymmetric common status" - " returned error"); - } - } - - if (rx_op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) { - ctx = (struct qat_asym_session *)get_asym_session_private_data( - rx_op->asym->session, qat_asym_driver_id); - qat_asym_collect_response(rx_op, cookie, ctx->xform); - } else if (rx_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS) { - qat_asym_collect_response(rx_op, cookie, rx_op->asym->xform); - } - *op = rx_op; - -#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG - QAT_DP_HEXDUMP_LOG(DEBUG, "resp_msg:", resp_msg, - sizeof(struct icp_qat_fw_pke_resp)); -#endif -} - -int -qat_asym_session_configure(struct rte_cryptodev *dev, - struct rte_crypto_asym_xform *xform, - struct rte_cryptodev_asym_session *sess, - struct rte_mempool *mempool) -{ - int err = 0; - void *sess_private_data; - struct qat_asym_session *session; - - if (rte_mempool_get(mempool, &sess_private_data)) { - QAT_LOG(ERR, - "Couldn't get object from session mempool"); - return -ENOMEM; - } - - session = sess_private_data; - if (xform->xform_type == RTE_CRYPTO_ASYM_XFORM_MODEX) { - if (xform->modex.exponent.length == 0 || - xform->modex.modulus.length == 0) { - QAT_LOG(ERR, "Invalid mod exp input parameter"); - err = -EINVAL; - goto error; - } - } else if (xform->xform_type == RTE_CRYPTO_ASYM_XFORM_MODINV) { - if (xform->modinv.modulus.length == 0) { - QAT_LOG(ERR, "Invalid mod inv input parameter"); - err = -EINVAL; - goto error; - } - } else if (xform->xform_type == 
RTE_CRYPTO_ASYM_XFORM_RSA) { - if (xform->rsa.n.length == 0) { - QAT_LOG(ERR, "Invalid rsa input parameter"); - err = -EINVAL; - goto error; - } - } else if (xform->xform_type >= RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END - || xform->xform_type <= RTE_CRYPTO_ASYM_XFORM_NONE) { - QAT_LOG(ERR, "Invalid asymmetric crypto xform"); - err = -EINVAL; - goto error; - } else { - QAT_LOG(ERR, "Asymmetric crypto xform not implemented"); - err = -EINVAL; - goto error; - } - - session->xform = xform; - qat_asym_build_req_tmpl(sess_private_data); - set_asym_session_private_data(sess, dev->driver_id, - sess_private_data); - - return 0; -error: - rte_mempool_put(mempool, sess_private_data); - return err; -} - -unsigned int qat_asym_session_get_private_size( - struct rte_cryptodev *dev __rte_unused) -{ - return RTE_ALIGN_CEIL(sizeof(struct qat_asym_session), 8); -} - -void -qat_asym_session_clear(struct rte_cryptodev *dev, - struct rte_cryptodev_asym_session *sess) -{ - uint8_t index = dev->driver_id; - void *sess_priv = get_asym_session_private_data(sess, index); - struct qat_asym_session *s = (struct qat_asym_session *)sess_priv; - - if (sess_priv) { - memset(s, 0, qat_asym_session_get_private_size(dev)); - struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv); - - set_asym_session_private_data(sess, index, NULL); - rte_mempool_put(sess_mp, sess_priv); - } -} +static struct cryptodev_driver qat_crypto_drv; +RTE_PMD_REGISTER_CRYPTO_DRIVER(qat_crypto_drv, + cryptodev_qat_asym_driver, + qat_asym_driver_id); diff --git a/drivers/crypto/qat/qat_asym.h b/drivers/crypto/qat/qat_asym.h index 308b6b2e0b..655a835c7f 100644 --- a/drivers/crypto/qat/qat_asym.h +++ b/drivers/crypto/qat/qat_asym.h @@ -8,10 +8,13 @@ #include #include #include "icp_qat_fw_pke.h" -#include "qat_common.h" -#include "qat_asym_pmd.h" +#include "qat_device.h" +#include "qat_crypto.h" #include "icp_qat_fw.h" +/** Intel(R) QAT Asymmetric Crypto PMD driver name */ +#define CRYPTODEV_NAME_QAT_ASYM_PMD crypto_qat_asym + typedef uint64_t large_int_ptr; #define MAX_PKE_PARAMS 8 #define QAT_PKE_MAX_LN_SIZE 512 @@ -26,6 +29,32 @@ typedef uint64_t large_int_ptr; #define QAT_ASYM_RSA_NUM_OUT_PARAMS 1 #define QAT_ASYM_RSA_QT_NUM_IN_PARAMS 6 +uint8_t qat_asym_driver_id; + +struct qat_crypto_gen_dev_ops qat_asym_gen_dev_ops[QAT_N_GENS]; + +/** + * helper function to add an asym capability + * + **/ +#define QAT_ASYM_CAP(n, o, l, r, i) \ + { \ + .op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC, \ + {.asym = { \ + .xform_capa = { \ + .xform_type = RTE_CRYPTO_ASYM_XFORM_##n,\ + .op_types = o, \ + { \ + .modlen = { \ + .min = l, \ + .max = r, \ + .increment = i \ + }, } \ + } \ + }, \ + } \ + } + struct qat_asym_op_cookie { size_t alg_size; uint64_t error; @@ -45,6 +74,27 @@ struct qat_asym_session { struct rte_crypto_asym_xform *xform; }; +static inline void +qat_fill_req_tmpl(struct icp_qat_fw_pke_request *qat_req) +{ + memset(qat_req, 0, sizeof(*qat_req)); + qat_req->pke_hdr.service_type = ICP_QAT_FW_COMN_REQ_CPM_FW_PKE; + + qat_req->pke_hdr.hdr_flags = + ICP_QAT_FW_COMN_HDR_FLAGS_BUILD + (ICP_QAT_FW_COMN_REQ_FLAG_SET); +} + +static inline void +qat_asym_build_req_tmpl(void *sess_private_data) +{ + struct icp_qat_fw_pke_request *qat_req; + struct qat_asym_session *session = sess_private_data; + + qat_req = &session->req_tmpl; + qat_fill_req_tmpl(qat_req); +} + int qat_asym_session_configure(struct rte_cryptodev *dev, struct rte_crypto_asym_xform *xform, @@ -76,7 +126,9 @@ qat_asym_session_clear(struct rte_cryptodev *dev, */ int qat_asym_build_request(void *in_op, 
uint8_t *out_msg, - void *op_cookie, enum qat_device_gen qat_dev_gen); + void *op_cookie, + __rte_unused uint64_t *opaque, + enum qat_device_gen qat_dev_gen); /* * Process PKE response received from outgoing queue of QAT @@ -88,8 +140,11 @@ qat_asym_build_request(void *in_op, uint8_t *out_msg, * @param op_cookie Cookie pointer that holds private metadata * */ +int +qat_asym_process_response(void __rte_unused * *op, uint8_t *resp, + void *op_cookie, __rte_unused uint64_t *dequeue_err_count); + void -qat_asym_process_response(void __rte_unused **op, uint8_t *resp, - void *op_cookie); +qat_asym_init_op_cookie(void *cookie); #endif /* _QAT_ASYM_H_ */ From patchwork Fri Nov 5 00:19:26 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ji, Kai" X-Patchwork-Id: 103815 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 44E54A0C61; Fri, 5 Nov 2021 01:20:13 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 30902427C1; Fri, 5 Nov 2021 01:19:49 +0100 (CET) Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by mails.dpdk.org (Postfix) with ESMTP id 6071F42710 for ; Fri, 5 Nov 2021 01:19:44 +0100 (CET) X-IronPort-AV: E=McAfee;i="6200,9189,10158"; a="212563725" X-IronPort-AV: E=Sophos;i="5.87,209,1631602800"; d="scan'208";a="212563725" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Nov 2021 17:19:43 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.87,209,1631602800"; d="scan'208";a="468671807" Received: from silpixa00400272.ir.intel.com (HELO silpixa00400272.ger.corp.intel.com) ([10.237.223.111]) by orsmga002.jf.intel.com with ESMTP; 04 Nov 2021 17:19:42 -0700 From: Kai Ji To: dev@dpdk.org Cc: gakhil@marvell.com, Kai Ji Date: Fri, 5 Nov 2021 00:19:26 +0000 Message-Id: <20211105001932.28784-6-kai.ji@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211105001932.28784-1-kai.ji@intel.com> References: <20211102134913.21148-1-kai.ji@intel.com> <20211105001932.28784-1-kai.ji@intel.com> Subject: [dpdk-dev] [dpdk-dev v4 05/11] crypto/qat: unify sym pmd apis X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch integrate all the functions from qat_sym_pmd.c into qat_sym.c, The integration unify the sym crypto pmd apis and should make it easier for furture maintains Signed-off-by: Kai Ji --- drivers/common/qat/meson.build | 2 +- drivers/common/qat/qat_device.c | 2 +- drivers/common/qat/qat_qp.c | 3 +- drivers/crypto/qat/qat_sym.c | 24 +++ drivers/crypto/qat/qat_sym.h | 141 +++++++++++++-- drivers/crypto/qat/qat_sym_hw_dp.c | 9 - drivers/crypto/qat/qat_sym_pmd.c | 251 --------------------------- drivers/crypto/qat/qat_sym_pmd.h | 95 ---------- drivers/crypto/qat/qat_sym_session.c | 2 +- 9 files changed, 158 insertions(+), 371 deletions(-) delete mode 100644 drivers/crypto/qat/qat_sym_pmd.c delete mode 100644 drivers/crypto/qat/qat_sym_pmd.h diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build index ce9959d103..7f0ff68946 100644 --- a/drivers/common/qat/meson.build +++ 
b/drivers/common/qat/meson.build @@ -70,7 +70,7 @@ if qat_compress endif if qat_crypto - foreach f: ['qat_sym_pmd.c', 'qat_sym.c', 'qat_sym_session.c', + foreach f: ['qat_sym.c', 'qat_sym_session.c', 'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c', 'qat_crypto.c', 'dev/qat_sym_pmd_gen1.c', 'dev/qat_asym_pmd_gen1.c', diff --git a/drivers/common/qat/qat_device.c b/drivers/common/qat/qat_device.c index 437996f2e8..e1fc1c37ec 100644 --- a/drivers/common/qat/qat_device.c +++ b/drivers/common/qat/qat_device.c @@ -8,7 +8,7 @@ #include "qat_device.h" #include "adf_transport_access_macros.h" -#include "qat_sym_pmd.h" +#include "qat_sym.h" #include "qat_comp_pmd.h" #include "adf_pf2vf_msg.h" #include "qat_pf2vf.h" diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c index a3db053c82..7faa5a48fd 100644 --- a/drivers/common/qat/qat_qp.c +++ b/drivers/common/qat/qat_qp.c @@ -841,7 +841,8 @@ qat_dequeue_op_burst(void *qp, void **ops, if (tmp_qp->service_type == QAT_SERVICE_SYMMETRIC) qat_sym_process_response(ops, resp_msg, - tmp_qp->op_cookies[head >> rx_queue->trailz]); + tmp_qp->op_cookies[head >> rx_queue->trailz], + NULL); else if (tmp_qp->service_type == QAT_SERVICE_COMPRESSION) nb_fw_responses = qat_comp_process_response( ops, resp_msg, diff --git a/drivers/crypto/qat/qat_sym.c b/drivers/crypto/qat/qat_sym.c index 7481607ff8..8b1a49c690 100644 --- a/drivers/crypto/qat/qat_sym.c +++ b/drivers/crypto/qat/qat_sym.c @@ -12,6 +12,30 @@ #include "qat_sym.h" +uint8_t qat_sym_driver_id; + +struct qat_crypto_gen_dev_ops qat_sym_gen_dev_ops[QAT_N_GENS]; + +void +qat_sym_init_op_cookie(void *op_cookie) +{ + struct qat_sym_op_cookie *cookie = op_cookie; + + cookie->qat_sgl_src_phys_addr = + rte_mempool_virt2iova(cookie) + + offsetof(struct qat_sym_op_cookie, + qat_sgl_src); + + cookie->qat_sgl_dst_phys_addr = + rte_mempool_virt2iova(cookie) + + offsetof(struct qat_sym_op_cookie, + qat_sgl_dst); + + cookie->opt.spc_gmac.cd_phys_addr = + rte_mempool_virt2iova(cookie) + + offsetof(struct qat_sym_op_cookie, + opt.spc_gmac.cd_cipher); +} /** Decrypt a single partial block * Depends on openssl libcrypto diff --git a/drivers/crypto/qat/qat_sym.h b/drivers/crypto/qat/qat_sym.h index e3ec7f0de4..87e23bb1d1 100644 --- a/drivers/crypto/qat/qat_sym.h +++ b/drivers/crypto/qat/qat_sym.h @@ -15,15 +15,75 @@ #include "qat_common.h" #include "qat_sym_session.h" -#include "qat_sym_pmd.h" +#include "qat_crypto.h" #include "qat_logs.h" +#define CRYPTODEV_NAME_QAT_SYM_PMD crypto_qat + #define BYTE_LENGTH 8 /* bpi is only used for partial blocks of DES and AES * so AES block len can be assumed as max len for iv, src and dst */ #define BPI_MAX_ENCR_IV_LEN ICP_QAT_HW_AES_BLK_SZ +/* Internal capabilities */ +#define QAT_SYM_CAP_MIXED_CRYPTO (1 << 0) +#define QAT_SYM_CAP_VALID (1 << 31) + +/** + * Macro to add a sym capability + * helper function to add an sym capability + * + * + **/ +#define QAT_SYM_PLAIN_AUTH_CAP(n, b, d) \ + { \ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ + {.sym = { \ + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ + {.auth = { \ + .algo = RTE_CRYPTO_AUTH_##n, \ + b, d \ + }, } \ + }, } \ + } + +#define QAT_SYM_AUTH_CAP(n, b, k, d, a, i) \ + { \ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ + {.sym = { \ + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ + {.auth = { \ + .algo = RTE_CRYPTO_AUTH_##n, \ + b, k, d, a, i \ + }, } \ + }, } \ + } + +#define QAT_SYM_AEAD_CAP(n, b, k, d, a, i) \ + { \ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ + {.sym = { \ + .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD, \ + {.aead = { \ + 
.algo = RTE_CRYPTO_AEAD_##n, \ + b, k, d, a, i \ + }, } \ + }, } \ + } + +#define QAT_SYM_CIPHER_CAP(n, b, k, i) \ + { \ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ + {.sym = { \ + .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, \ + {.cipher = { \ + .algo = RTE_CRYPTO_CIPHER_##n, \ + b, k, i \ + }, } \ + }, } \ + } + /* * Maximum number of SGL entries */ @@ -54,11 +114,26 @@ struct qat_sym_op_cookie { } opt; }; +struct qat_sym_dp_ctx { + struct qat_sym_session *session; + uint32_t tail; + uint32_t head; + uint16_t cached_enqueue; + uint16_t cached_dequeue; +}; + +uint16_t +qat_sym_enqueue_burst(void *qp, struct rte_crypto_op **ops, + uint16_t nb_ops); + +uint16_t +qat_sym_dequeue_burst(void *qp, struct rte_crypto_op **ops, + uint16_t nb_ops); + int qat_sym_build_request(void *in_op, uint8_t *out_msg, void *op_cookie, enum qat_device_gen qat_dev_gen); - /** Encrypt a single partial block * Depends on openssl libcrypto * Uses ECB+XOR to do CFB encryption, same result, more performant @@ -213,17 +288,11 @@ qat_sym_preprocess_requests(void **ops, uint16_t nb_ops) } } } -#else - -static inline void -qat_sym_preprocess_requests(void **ops __rte_unused, - uint16_t nb_ops __rte_unused) -{ -} #endif -static inline void -qat_sym_process_response(void **op, uint8_t *resp, void *op_cookie) +static __rte_always_inline int +qat_sym_process_response(void **op, uint8_t *resp, void *op_cookie, + uint64_t *dequeue_err_count __rte_unused) { struct icp_qat_fw_comn_resp *resp_msg = (struct icp_qat_fw_comn_resp *)resp; @@ -282,6 +351,8 @@ qat_sym_process_response(void **op, uint8_t *resp, void *op_cookie) } *op = (void *)rx_op; + + return 1; } int @@ -293,6 +364,52 @@ qat_sym_configure_dp_ctx(struct rte_cryptodev *dev, uint16_t qp_id, int qat_sym_get_dp_ctx_size(struct rte_cryptodev *dev); +void +qat_sym_init_op_cookie(void *cookie); + +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG +static __rte_always_inline void +qat_sym_debug_log_dump(struct icp_qat_fw_la_bulk_req *qat_req, + struct qat_sym_session *ctx, + struct rte_crypto_vec *vec, uint32_t vec_len, + struct rte_crypto_va_iova_ptr *cipher_iv, + struct rte_crypto_va_iova_ptr *auth_iv, + struct rte_crypto_va_iova_ptr *aad, + struct rte_crypto_va_iova_ptr *digest) +{ + uint32_t i; + + QAT_DP_HEXDUMP_LOG(DEBUG, "qat_req:", qat_req, + sizeof(struct icp_qat_fw_la_bulk_req)); + for (i = 0; i < vec_len; i++) + QAT_DP_HEXDUMP_LOG(DEBUG, "src_data:", vec[i].base, vec[i].len); + if (cipher_iv && ctx->cipher_iv.length > 0) + QAT_DP_HEXDUMP_LOG(DEBUG, "cipher iv:", cipher_iv->va, + ctx->cipher_iv.length); + if (auth_iv && ctx->auth_iv.length > 0) + QAT_DP_HEXDUMP_LOG(DEBUG, "auth iv:", auth_iv->va, + ctx->auth_iv.length); + if (aad && ctx->aad_len > 0) + QAT_DP_HEXDUMP_LOG(DEBUG, "aad:", aad->va, + ctx->aad_len); + if (digest && ctx->digest_length > 0) + QAT_DP_HEXDUMP_LOG(DEBUG, "digest:", digest->va, + ctx->digest_length); +} +#else +static __rte_always_inline void +qat_sym_debug_log_dump(struct icp_qat_fw_la_bulk_req *qat_req __rte_unused, + struct qat_sym_session *ctx __rte_unused, + struct rte_crypto_vec *vec __rte_unused, + uint32_t vec_len __rte_unused, + struct rte_crypto_va_iova_ptr *cipher_iv __rte_unused, + struct rte_crypto_va_iova_ptr *auth_iv __rte_unused, + struct rte_crypto_va_iova_ptr *aad __rte_unused, + struct rte_crypto_va_iova_ptr *digest __rte_unused) +{ +} +#endif + #else static inline void @@ -307,5 +424,5 @@ qat_sym_process_response(void **op __rte_unused, uint8_t *resp __rte_unused, { } -#endif +#endif /* BUILD_QAT_SYM */ #endif /* _QAT_SYM_H_ */ diff --git 
a/drivers/crypto/qat/qat_sym_hw_dp.c b/drivers/crypto/qat/qat_sym_hw_dp.c index 12825e448b..6bcc30d92d 100644 --- a/drivers/crypto/qat/qat_sym_hw_dp.c +++ b/drivers/crypto/qat/qat_sym_hw_dp.c @@ -9,18 +9,9 @@ #include "icp_qat_fw_la.h" #include "qat_sym.h" -#include "qat_sym_pmd.h" #include "qat_sym_session.h" #include "qat_qp.h" -struct qat_sym_dp_ctx { - struct qat_sym_session *session; - uint32_t tail; - uint32_t head; - uint16_t cached_enqueue; - uint16_t cached_dequeue; -}; - static __rte_always_inline int32_t qat_sym_dp_parse_data_vec(struct qat_qp *qp, struct icp_qat_fw_la_bulk_req *req, struct rte_crypto_vec *data, uint16_t n_data_vecs) diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c deleted file mode 100644 index 28a26260fb..0000000000 --- a/drivers/crypto/qat/qat_sym_pmd.c +++ /dev/null @@ -1,251 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2015-2018 Intel Corporation - */ - -#include -#include -#include -#include -#include -#include -#ifdef RTE_LIB_SECURITY -#include -#endif - -#include "qat_logs.h" -#include "qat_crypto.h" -#include "qat_sym.h" -#include "qat_sym_session.h" -#include "qat_sym_pmd.h" - -#define MIXED_CRYPTO_MIN_FW_VER 0x04090000 - -uint8_t qat_sym_driver_id; - -struct qat_crypto_gen_dev_ops qat_sym_gen_dev_ops[QAT_N_GENS]; - -void -qat_sym_init_op_cookie(void *op_cookie) -{ - struct qat_sym_op_cookie *cookie = op_cookie; - - cookie->qat_sgl_src_phys_addr = - rte_mempool_virt2iova(cookie) + - offsetof(struct qat_sym_op_cookie, - qat_sgl_src); - - cookie->qat_sgl_dst_phys_addr = - rte_mempool_virt2iova(cookie) + - offsetof(struct qat_sym_op_cookie, - qat_sgl_dst); - - cookie->opt.spc_gmac.cd_phys_addr = - rte_mempool_virt2iova(cookie) + - offsetof(struct qat_sym_op_cookie, - opt.spc_gmac.cd_cipher); -} - -static uint16_t -qat_sym_pmd_enqueue_op_burst(void *qp, struct rte_crypto_op **ops, - uint16_t nb_ops) -{ - return qat_enqueue_op_burst(qp, NULL, (void **)ops, nb_ops); -} - -static uint16_t -qat_sym_pmd_dequeue_op_burst(void *qp, struct rte_crypto_op **ops, - uint16_t nb_ops) -{ - return qat_dequeue_op_burst(qp, (void **)ops, NULL, nb_ops); -} - -/* An rte_driver is needed in the registration of both the device and the driver - * with cryptodev. - * The actual qat pci's rte_driver can't be used as its name represents - * the whole pci device with all services. Think of this as a holder for a name - * for the crypto part of the pci device. 
- */ -static const char qat_sym_drv_name[] = RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD); -static const struct rte_driver cryptodev_qat_sym_driver = { - .name = qat_sym_drv_name, - .alias = qat_sym_drv_name -}; - -int -qat_sym_dev_create(struct qat_pci_device *qat_pci_dev, - struct qat_dev_cmd_param *qat_dev_cmd_param __rte_unused) -{ - int i = 0, ret = 0; - struct qat_device_info *qat_dev_instance = - &qat_pci_devs[qat_pci_dev->qat_dev_id]; - struct rte_cryptodev_pmd_init_params init_params = { - .name = "", - .socket_id = qat_dev_instance->pci_dev->device.numa_node, - .private_data_size = sizeof(struct qat_cryptodev_private) - }; - char name[RTE_CRYPTODEV_NAME_MAX_LEN]; - char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN]; - struct rte_cryptodev *cryptodev; - struct qat_cryptodev_private *internals; - struct qat_capabilities_info capa_info; - const struct rte_cryptodev_capabilities *capabilities; - const struct qat_crypto_gen_dev_ops *gen_dev_ops = - &qat_sym_gen_dev_ops[qat_pci_dev->qat_dev_gen]; - uint64_t capa_size; - - snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s_%s", - qat_pci_dev->name, "sym"); - QAT_LOG(DEBUG, "Creating QAT SYM device %s", name); - - if (gen_dev_ops->cryptodev_ops == NULL) { - QAT_LOG(ERR, "Device %s does not support symmetric crypto", - name); - return -EFAULT; - } - - /* - * All processes must use same driver id so they can share sessions. - * Store driver_id so we can validate that all processes have the same - * value, typically they have, but could differ if binaries built - * separately. - */ - if (rte_eal_process_type() == RTE_PROC_PRIMARY) { - qat_pci_dev->qat_sym_driver_id = - qat_sym_driver_id; - } else if (rte_eal_process_type() == RTE_PROC_SECONDARY) { - if (qat_pci_dev->qat_sym_driver_id != - qat_sym_driver_id) { - QAT_LOG(ERR, - "Device %s have different driver id than corresponding device in primary process", - name); - return -(EFAULT); - } - } - - /* Populate subset device to use in cryptodev device creation */ - qat_dev_instance->sym_rte_dev.driver = &cryptodev_qat_sym_driver; - qat_dev_instance->sym_rte_dev.numa_node = - qat_dev_instance->pci_dev->device.numa_node; - qat_dev_instance->sym_rte_dev.devargs = NULL; - - cryptodev = rte_cryptodev_pmd_create(name, - &(qat_dev_instance->sym_rte_dev), &init_params); - - if (cryptodev == NULL) - return -ENODEV; - - qat_dev_instance->sym_rte_dev.name = cryptodev->data->name; - cryptodev->driver_id = qat_sym_driver_id; - cryptodev->dev_ops = gen_dev_ops->cryptodev_ops; - - cryptodev->enqueue_burst = qat_sym_pmd_enqueue_op_burst; - cryptodev->dequeue_burst = qat_sym_pmd_dequeue_op_burst; - - cryptodev->feature_flags = gen_dev_ops->get_feature_flags(qat_pci_dev); - - if (rte_eal_process_type() != RTE_PROC_PRIMARY) - return 0; - -#ifdef RTE_LIB_SECURITY - if (gen_dev_ops->create_security_ctx) { - cryptodev->security_ctx = - gen_dev_ops->create_security_ctx((void *)cryptodev); - if (cryptodev->security_ctx == NULL) { - QAT_LOG(ERR, "rte_security_ctx memory alloc failed"); - ret = -ENOMEM; - goto error; - } - - cryptodev->feature_flags |= RTE_CRYPTODEV_FF_SECURITY; - QAT_LOG(INFO, "Device %s rte_security support enabled", name); - } else - QAT_LOG(INFO, "Device %s rte_security support disabled", name); - -#endif - snprintf(capa_memz_name, RTE_CRYPTODEV_NAME_MAX_LEN, - "QAT_SYM_CAPA_GEN_%d", - qat_pci_dev->qat_dev_gen); - - internals = cryptodev->data->dev_private; - internals->qat_dev = qat_pci_dev; - internals->service_type = QAT_SERVICE_SYMMETRIC; - internals->dev_id = cryptodev->data->dev_id; - - capa_info = 
gen_dev_ops->get_capabilities(qat_pci_dev); - capabilities = capa_info.data; - capa_size = capa_info.size; - - internals->capa_mz = rte_memzone_lookup(capa_memz_name); - if (internals->capa_mz == NULL) { - internals->capa_mz = rte_memzone_reserve(capa_memz_name, - capa_size, rte_socket_id(), 0); - if (internals->capa_mz == NULL) { - QAT_LOG(DEBUG, - "Error allocating capability memzon for %s", - name); - ret = -EFAULT; - goto error; - } - } - - memcpy(internals->capa_mz->addr, capabilities, capa_size); - internals->qat_dev_capabilities = internals->capa_mz->addr; - - while (1) { - if (qat_dev_cmd_param[i].name == NULL) - break; - if (!strcmp(qat_dev_cmd_param[i].name, SYM_ENQ_THRESHOLD_NAME)) - internals->min_enq_burst_threshold = - qat_dev_cmd_param[i].val; - i++; - } - - qat_pci_dev->sym_dev = internals; - QAT_LOG(DEBUG, "Created QAT SYM device %s as cryptodev instance %d", - cryptodev->data->name, internals->dev_id); - - rte_cryptodev_pmd_probing_finish(cryptodev); - - return 0; - -error: -#ifdef RTE_LIB_SECURITY - rte_free(cryptodev->security_ctx); - cryptodev->security_ctx = NULL; -#endif - rte_cryptodev_pmd_destroy(cryptodev); - memset(&qat_dev_instance->sym_rte_dev, 0, - sizeof(qat_dev_instance->sym_rte_dev)); - - return ret; -} - -int -qat_sym_dev_destroy(struct qat_pci_device *qat_pci_dev) -{ - struct rte_cryptodev *cryptodev; - - if (qat_pci_dev == NULL) - return -ENODEV; - if (qat_pci_dev->sym_dev == NULL) - return 0; - if (rte_eal_process_type() == RTE_PROC_PRIMARY) - rte_memzone_free(qat_pci_dev->sym_dev->capa_mz); - - /* free crypto device */ - cryptodev = rte_cryptodev_pmd_get_dev(qat_pci_dev->sym_dev->dev_id); -#ifdef RTE_LIB_SECURITY - rte_free(cryptodev->security_ctx); - cryptodev->security_ctx = NULL; -#endif - rte_cryptodev_pmd_destroy(cryptodev); - qat_pci_devs[qat_pci_dev->qat_dev_id].sym_rte_dev.name = NULL; - qat_pci_dev->sym_dev = NULL; - - return 0; -} - -static struct cryptodev_driver qat_crypto_drv; -RTE_PMD_REGISTER_CRYPTO_DRIVER(qat_crypto_drv, - cryptodev_qat_sym_driver, - qat_sym_driver_id); diff --git a/drivers/crypto/qat/qat_sym_pmd.h b/drivers/crypto/qat/qat_sym_pmd.h deleted file mode 100644 index 0dc0c6f0d9..0000000000 --- a/drivers/crypto/qat/qat_sym_pmd.h +++ /dev/null @@ -1,95 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2015-2018 Intel Corporation - */ - -#ifndef _QAT_SYM_PMD_H_ -#define _QAT_SYM_PMD_H_ - -#ifdef BUILD_QAT_SYM - -#include -#include -#ifdef RTE_LIB_SECURITY -#include -#endif - -#include "qat_crypto.h" -#include "qat_device.h" - -/** Intel(R) QAT Symmetric Crypto PMD driver name */ -#define CRYPTODEV_NAME_QAT_SYM_PMD crypto_qat - -/* Internal capabilities */ -#define QAT_SYM_CAP_MIXED_CRYPTO (1 << 0) -#define QAT_SYM_CAP_VALID (1 << 31) - -/** - * Macro to add a sym capability - * helper function to add an sym capability - * - * - **/ -#define QAT_SYM_PLAIN_AUTH_CAP(n, b, d) \ - { \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ - {.auth = { \ - .algo = RTE_CRYPTO_AUTH_##n, \ - b, d \ - }, } \ - }, } \ - } - -#define QAT_SYM_AUTH_CAP(n, b, k, d, a, i) \ - { \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ - {.auth = { \ - .algo = RTE_CRYPTO_AUTH_##n, \ - b, k, d, a, i \ - }, } \ - }, } \ - } - -#define QAT_SYM_AEAD_CAP(n, b, k, d, a, i) \ - { \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD, \ - {.aead = { \ - .algo = RTE_CRYPTO_AEAD_##n, \ - b, k, d, a, i \ - }, 
} \ - }, } \ - } - -#define QAT_SYM_CIPHER_CAP(n, b, k, i) \ - { \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, \ - {.cipher = { \ - .algo = RTE_CRYPTO_CIPHER_##n, \ - b, k, i \ - }, } \ - }, } \ - } - -extern uint8_t qat_sym_driver_id; - -extern struct qat_crypto_gen_dev_ops qat_sym_gen_dev_ops[]; - -int -qat_sym_dev_create(struct qat_pci_device *qat_pci_dev, - struct qat_dev_cmd_param *qat_dev_cmd_param); - -int -qat_sym_dev_destroy(struct qat_pci_device *qat_pci_dev); - -void -qat_sym_init_op_cookie(void *op_cookie); - -#endif -#endif /* _QAT_SYM_PMD_H_ */ diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c index 52837d7c9c..933ac89569 100644 --- a/drivers/crypto/qat/qat_sym_session.c +++ b/drivers/crypto/qat/qat_sym_session.c @@ -20,7 +20,7 @@ #include "qat_logs.h" #include "qat_sym_session.h" -#include "qat_sym_pmd.h" +#include "qat_sym.h" /* SHA1 - 20 bytes - Initialiser state can be found in FIPS stds 180-2 */ static const uint8_t sha1InitialState[] = { From patchwork Fri Nov 5 00:19:27 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ji, Kai" X-Patchwork-Id: 103816 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 229C2A0C61; Fri, 5 Nov 2021 01:20:19 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 8A5C2427C6; Fri, 5 Nov 2021 01:19:50 +0100 (CET) Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by mails.dpdk.org (Postfix) with ESMTP id 5F965427A1 for ; Fri, 5 Nov 2021 01:19:45 +0100 (CET) X-IronPort-AV: E=McAfee;i="6200,9189,10158"; a="212563728" X-IronPort-AV: E=Sophos;i="5.87,209,1631602800"; d="scan'208";a="212563728" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Nov 2021 17:19:44 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.87,209,1631602800"; d="scan'208";a="468671811" Received: from silpixa00400272.ir.intel.com (HELO silpixa00400272.ger.corp.intel.com) ([10.237.223.111]) by orsmga002.jf.intel.com with ESMTP; 04 Nov 2021 17:19:43 -0700 From: Kai Ji To: dev@dpdk.org Cc: gakhil@marvell.com, Kai Ji Date: Fri, 5 Nov 2021 00:19:27 +0000 Message-Id: <20211105001932.28784-7-kai.ji@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211105001932.28784-1-kai.ji@intel.com> References: <20211102134913.21148-1-kai.ji@intel.com> <20211105001932.28784-1-kai.ji@intel.com> Subject: [dpdk-dev] [dpdk-dev v4 06/11] crypto/qat: unify qat asym pmd apis X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch integrates all the functions from qat_asym_pmd.c into qat_asym.c. The integration unifies the asym crypto PMD APIs and should make future maintenance easier. Signed-off-by: Kai Ji --- drivers/common/qat/meson.build | 2 +- drivers/crypto/qat/qat_asym.c | 180 +++++++++++++++++++++++ drivers/crypto/qat/qat_asym_pmd.c | 231 ------------------------------ drivers/crypto/qat/qat_asym_pmd.h | 54 ------- 4 files changed, 181 insertions(+), 286 deletions(-) delete mode 100644 drivers/crypto/qat/qat_asym_pmd.c
delete mode 100644 drivers/crypto/qat/qat_asym_pmd.h diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build index 7f0ff68946..25c15bebea 100644 --- a/drivers/common/qat/meson.build +++ b/drivers/common/qat/meson.build @@ -71,7 +71,7 @@ endif if qat_crypto foreach f: ['qat_sym.c', 'qat_sym_session.c', - 'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c', 'qat_crypto.c', + 'qat_sym_hw_dp.c', 'qat_asym.c', 'qat_crypto.c', 'dev/qat_sym_pmd_gen1.c', 'dev/qat_asym_pmd_gen1.c', 'dev/qat_crypto_pmd_gen2.c', diff --git a/drivers/crypto/qat/qat_asym.c b/drivers/crypto/qat/qat_asym.c index e2a5fb9260..83747118a5 100644 --- a/drivers/crypto/qat/qat_asym.c +++ b/drivers/crypto/qat/qat_asym.c @@ -15,6 +15,32 @@ #include "qat_logs.h" #include "qat_asym.h" +void +qat_asym_init_op_cookie(void *op_cookie) +{ + int j; + struct qat_asym_op_cookie *cookie = op_cookie; + + cookie->input_addr = rte_mempool_virt2iova(cookie) + + offsetof(struct qat_asym_op_cookie, + input_params_ptrs); + + cookie->output_addr = rte_mempool_virt2iova(cookie) + + offsetof(struct qat_asym_op_cookie, + output_params_ptrs); + + for (j = 0; j < 8; j++) { + cookie->input_params_ptrs[j] = + rte_mempool_virt2iova(cookie) + + offsetof(struct qat_asym_op_cookie, + input_array[j]); + cookie->output_params_ptrs[j] = + rte_mempool_virt2iova(cookie) + + offsetof(struct qat_asym_op_cookie, + output_array[j]); + } +} + int qat_asym_session_configure(struct rte_cryptodev *dev, struct rte_crypto_asym_xform *xform, @@ -813,6 +839,160 @@ qat_asym_build_request(void *in_op, uint8_t *out_msg, void *op_cookie, return 0; } +static uint16_t +qat_asym_crypto_enqueue_op_burst(void *qp, struct rte_crypto_op **ops, + uint16_t nb_ops) +{ + return qat_enqueue_op_burst(qp, qat_asym_build_request, (void **)ops, + nb_ops); +} + +static uint16_t +qat_asym_crypto_dequeue_op_burst(void *qp, struct rte_crypto_op **ops, + uint16_t nb_ops) +{ + return qat_dequeue_op_burst(qp, (void **)ops, qat_asym_process_response, + nb_ops); +} + +int +qat_asym_dev_create(struct qat_pci_device *qat_pci_dev, + struct qat_dev_cmd_param *qat_dev_cmd_param) +{ + struct qat_cryptodev_private *internals; + struct rte_cryptodev *cryptodev; + struct qat_device_info *qat_dev_instance = + &qat_pci_devs[qat_pci_dev->qat_dev_id]; + struct rte_cryptodev_pmd_init_params init_params = { + .name = "", + .socket_id = qat_dev_instance->pci_dev->device.numa_node, + .private_data_size = sizeof(struct qat_cryptodev_private) + }; + struct qat_capabilities_info capa_info; + const struct rte_cryptodev_capabilities *capabilities; + const struct qat_crypto_gen_dev_ops *gen_dev_ops = + &qat_asym_gen_dev_ops[qat_pci_dev->qat_dev_gen]; + char name[RTE_CRYPTODEV_NAME_MAX_LEN]; + char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN]; + uint64_t capa_size; + int i = 0; + + snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s_%s", + qat_pci_dev->name, "asym"); + QAT_LOG(DEBUG, "Creating QAT ASYM device %s\n", name); + + if (gen_dev_ops->cryptodev_ops == NULL) { + QAT_LOG(ERR, "Device %s does not support asymmetric crypto", + name); + return -(EFAULT); + } + + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + qat_pci_dev->qat_asym_driver_id = + qat_asym_driver_id; + } else if (rte_eal_process_type() == RTE_PROC_SECONDARY) { + if (qat_pci_dev->qat_asym_driver_id != + qat_asym_driver_id) { + QAT_LOG(ERR, + "Device %s have different driver id than corresponding device in primary process", + name); + return -(EFAULT); + } + } + + /* Populate subset device to use in cryptodev device creation */ + 
qat_dev_instance->asym_rte_dev.driver = &cryptodev_qat_asym_driver; + qat_dev_instance->asym_rte_dev.numa_node = + qat_dev_instance->pci_dev->device.numa_node; + qat_dev_instance->asym_rte_dev.devargs = NULL; + + cryptodev = rte_cryptodev_pmd_create(name, + &(qat_dev_instance->asym_rte_dev), &init_params); + + if (cryptodev == NULL) + return -ENODEV; + + qat_dev_instance->asym_rte_dev.name = cryptodev->data->name; + cryptodev->driver_id = qat_asym_driver_id; + cryptodev->dev_ops = gen_dev_ops->cryptodev_ops; + + cryptodev->enqueue_burst = qat_asym_crypto_enqueue_op_burst; + cryptodev->dequeue_burst = qat_asym_crypto_dequeue_op_burst; + + cryptodev->feature_flags = gen_dev_ops->get_feature_flags(qat_pci_dev); + + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return 0; + + snprintf(capa_memz_name, RTE_CRYPTODEV_NAME_MAX_LEN, + "QAT_ASYM_CAPA_GEN_%d", + qat_pci_dev->qat_dev_gen); + + internals = cryptodev->data->dev_private; + internals->qat_dev = qat_pci_dev; + internals->dev_id = cryptodev->data->dev_id; + + capa_info = gen_dev_ops->get_capabilities(qat_pci_dev); + capabilities = capa_info.data; + capa_size = capa_info.size; + + internals->capa_mz = rte_memzone_lookup(capa_memz_name); + if (internals->capa_mz == NULL) { + internals->capa_mz = rte_memzone_reserve(capa_memz_name, + capa_size, rte_socket_id(), 0); + if (internals->capa_mz == NULL) { + QAT_LOG(DEBUG, + "Error allocating memzone for capabilities, " + "destroying PMD for %s", + name); + rte_cryptodev_pmd_destroy(cryptodev); + memset(&qat_dev_instance->asym_rte_dev, 0, + sizeof(qat_dev_instance->asym_rte_dev)); + return -EFAULT; + } + } + + memcpy(internals->capa_mz->addr, capabilities, capa_size); + internals->qat_dev_capabilities = internals->capa_mz->addr; + + while (1) { + if (qat_dev_cmd_param[i].name == NULL) + break; + if (!strcmp(qat_dev_cmd_param[i].name, ASYM_ENQ_THRESHOLD_NAME)) + internals->min_enq_burst_threshold = + qat_dev_cmd_param[i].val; + i++; + } + + qat_pci_dev->asym_dev = internals; + internals->service_type = QAT_SERVICE_ASYMMETRIC; + QAT_LOG(DEBUG, "Created QAT ASYM device %s as cryptodev instance %d", + cryptodev->data->name, internals->dev_id); + return 0; +} + +int +qat_asym_dev_destroy(struct qat_pci_device *qat_pci_dev) +{ + struct rte_cryptodev *cryptodev; + + if (qat_pci_dev == NULL) + return -ENODEV; + if (qat_pci_dev->asym_dev == NULL) + return 0; + if (rte_eal_process_type() == RTE_PROC_PRIMARY) + rte_memzone_free(qat_pci_dev->asym_dev->capa_mz); + + /* free crypto device */ + cryptodev = rte_cryptodev_pmd_get_dev( + qat_pci_dev->asym_dev->dev_id); + rte_cryptodev_pmd_destroy(cryptodev); + qat_pci_devs[qat_pci_dev->qat_dev_id].asym_rte_dev.name = NULL; + qat_pci_dev->asym_dev = NULL; + + return 0; +} + static struct cryptodev_driver qat_crypto_drv; RTE_PMD_REGISTER_CRYPTO_DRIVER(qat_crypto_drv, cryptodev_qat_asym_driver, diff --git a/drivers/crypto/qat/qat_asym_pmd.c b/drivers/crypto/qat/qat_asym_pmd.c deleted file mode 100644 index 9a7596b227..0000000000 --- a/drivers/crypto/qat/qat_asym_pmd.c +++ /dev/null @@ -1,231 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2019 Intel Corporation - */ - -#include - -#include "qat_logs.h" - -#include "qat_crypto.h" -#include "qat_asym.h" -#include "qat_asym_pmd.h" - -uint8_t qat_asym_driver_id; -struct qat_crypto_gen_dev_ops qat_asym_gen_dev_ops[QAT_N_GENS]; - -void -qat_asym_init_op_cookie(void *op_cookie) -{ - int j; - struct qat_asym_op_cookie *cookie = op_cookie; - - cookie->input_addr = rte_mempool_virt2iova(cookie) + - 
offsetof(struct qat_asym_op_cookie, - input_params_ptrs); - - cookie->output_addr = rte_mempool_virt2iova(cookie) + - offsetof(struct qat_asym_op_cookie, - output_params_ptrs); - - for (j = 0; j < 8; j++) { - cookie->input_params_ptrs[j] = - rte_mempool_virt2iova(cookie) + - offsetof(struct qat_asym_op_cookie, - input_array[j]); - cookie->output_params_ptrs[j] = - rte_mempool_virt2iova(cookie) + - offsetof(struct qat_asym_op_cookie, - output_array[j]); - } -} - -static struct rte_cryptodev_ops crypto_qat_ops = { - - /* Device related operations */ - .dev_configure = qat_cryptodev_config, - .dev_start = qat_cryptodev_start, - .dev_stop = qat_cryptodev_stop, - .dev_close = qat_cryptodev_close, - .dev_infos_get = qat_cryptodev_info_get, - - .stats_get = qat_cryptodev_stats_get, - .stats_reset = qat_cryptodev_stats_reset, - .queue_pair_setup = qat_cryptodev_qp_setup, - .queue_pair_release = qat_cryptodev_qp_release, - - /* Crypto related operations */ - .asym_session_get_size = qat_asym_session_get_private_size, - .asym_session_configure = qat_asym_session_configure, - .asym_session_clear = qat_asym_session_clear -}; - -uint16_t qat_asym_pmd_enqueue_op_burst(void *qp, struct rte_crypto_op **ops, - uint16_t nb_ops) -{ - return qat_enqueue_op_burst(qp, NULL, (void **)ops, nb_ops); -} - -uint16_t qat_asym_pmd_dequeue_op_burst(void *qp, struct rte_crypto_op **ops, - uint16_t nb_ops) -{ - return qat_dequeue_op_burst(qp, (void **)ops, NULL, nb_ops); -} - -/* An rte_driver is needed in the registration of both the device and the driver - * with cryptodev. - * The actual qat pci's rte_driver can't be used as its name represents - * the whole pci device with all services. Think of this as a holder for a name - * for the crypto part of the pci device. - */ -static const char qat_asym_drv_name[] = RTE_STR(CRYPTODEV_NAME_QAT_ASYM_PMD); -static const struct rte_driver cryptodev_qat_asym_driver = { - .name = qat_asym_drv_name, - .alias = qat_asym_drv_name -}; - -int -qat_asym_dev_create(struct qat_pci_device *qat_pci_dev, - struct qat_dev_cmd_param *qat_dev_cmd_param) -{ - int i = 0; - struct qat_device_info *qat_dev_instance = - &qat_pci_devs[qat_pci_dev->qat_dev_id]; - struct rte_cryptodev_pmd_init_params init_params = { - .name = "", - .socket_id = qat_dev_instance->pci_dev->device.numa_node, - .private_data_size = sizeof(struct qat_cryptodev_private) - }; - struct qat_capabilities_info capa_info; - const struct rte_cryptodev_capabilities *capabilities; - const struct qat_crypto_gen_dev_ops *gen_dev_ops = - &qat_asym_gen_dev_ops[qat_pci_dev->qat_dev_gen]; - char name[RTE_CRYPTODEV_NAME_MAX_LEN]; - char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN]; - struct rte_cryptodev *cryptodev; - struct qat_cryptodev_private *internals; - uint64_t capa_size; - - snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s_%s", - qat_pci_dev->name, "asym"); - QAT_LOG(DEBUG, "Creating QAT ASYM device %s\n", name); - - if (gen_dev_ops->cryptodev_ops == NULL) { - QAT_LOG(ERR, "Device %s does not support asymmetric crypto", - name); - return -EFAULT; - } - - if (rte_eal_process_type() == RTE_PROC_PRIMARY) { - qat_pci_dev->qat_asym_driver_id = - qat_asym_driver_id; - } else if (rte_eal_process_type() == RTE_PROC_SECONDARY) { - if (qat_pci_dev->qat_asym_driver_id != - qat_asym_driver_id) { - QAT_LOG(ERR, - "Device %s have different driver id than corresponding device in primary process", - name); - return -(EFAULT); - } - } - - /* Populate subset device to use in cryptodev device creation */ - qat_dev_instance->asym_rte_dev.driver = 
&cryptodev_qat_asym_driver; - qat_dev_instance->asym_rte_dev.numa_node = - qat_dev_instance->pci_dev->device.numa_node; - qat_dev_instance->asym_rte_dev.devargs = NULL; - - cryptodev = rte_cryptodev_pmd_create(name, - &(qat_dev_instance->asym_rte_dev), &init_params); - - if (cryptodev == NULL) - return -ENODEV; - - qat_dev_instance->asym_rte_dev.name = cryptodev->data->name; - cryptodev->driver_id = qat_asym_driver_id; - cryptodev->dev_ops = &crypto_qat_ops; - - cryptodev->enqueue_burst = qat_asym_pmd_enqueue_op_burst; - cryptodev->dequeue_burst = qat_asym_pmd_dequeue_op_burst; - - - cryptodev->feature_flags = gen_dev_ops->get_feature_flags(qat_pci_dev); - - if (rte_eal_process_type() != RTE_PROC_PRIMARY) - return 0; - - snprintf(capa_memz_name, RTE_CRYPTODEV_NAME_MAX_LEN, - "QAT_ASYM_CAPA_GEN_%d", - qat_pci_dev->qat_dev_gen); - - internals = cryptodev->data->dev_private; - internals->qat_dev = qat_pci_dev; - internals->dev_id = cryptodev->data->dev_id; - internals->service_type = QAT_SERVICE_ASYMMETRIC; - - capa_info = gen_dev_ops->get_capabilities(qat_pci_dev); - capabilities = capa_info.data; - capa_size = capa_info.size; - - internals->capa_mz = rte_memzone_lookup(capa_memz_name); - if (internals->capa_mz == NULL) { - internals->capa_mz = rte_memzone_reserve(capa_memz_name, - capa_size, rte_socket_id(), 0); - if (internals->capa_mz == NULL) { - QAT_LOG(DEBUG, - "Error allocating memzone for capabilities, " - "destroying PMD for %s", - name); - rte_cryptodev_pmd_destroy(cryptodev); - memset(&qat_dev_instance->asym_rte_dev, 0, - sizeof(qat_dev_instance->asym_rte_dev)); - return -EFAULT; - } - } - - memcpy(internals->capa_mz->addr, capabilities, capa_size); - internals->qat_dev_capabilities = internals->capa_mz->addr; - - while (1) { - if (qat_dev_cmd_param[i].name == NULL) - break; - if (!strcmp(qat_dev_cmd_param[i].name, ASYM_ENQ_THRESHOLD_NAME)) - internals->min_enq_burst_threshold = - qat_dev_cmd_param[i].val; - i++; - } - - qat_pci_dev->asym_dev = internals; - - rte_cryptodev_pmd_probing_finish(cryptodev); - - QAT_LOG(DEBUG, "Created QAT ASYM device %s as cryptodev instance %d", - cryptodev->data->name, internals->dev_id); - return 0; -} - -int -qat_asym_dev_destroy(struct qat_pci_device *qat_pci_dev) -{ - struct rte_cryptodev *cryptodev; - - if (qat_pci_dev == NULL) - return -ENODEV; - if (qat_pci_dev->asym_dev == NULL) - return 0; - if (rte_eal_process_type() == RTE_PROC_PRIMARY) - rte_memzone_free(qat_pci_dev->asym_dev->capa_mz); - - /* free crypto device */ - cryptodev = rte_cryptodev_pmd_get_dev( - qat_pci_dev->asym_dev->dev_id); - rte_cryptodev_pmd_destroy(cryptodev); - qat_pci_devs[qat_pci_dev->qat_dev_id].asym_rte_dev.name = NULL; - qat_pci_dev->asym_dev = NULL; - - return 0; -} - -static struct cryptodev_driver qat_crypto_drv; -RTE_PMD_REGISTER_CRYPTO_DRIVER(qat_crypto_drv, - cryptodev_qat_asym_driver, - qat_asym_driver_id); diff --git a/drivers/crypto/qat/qat_asym_pmd.h b/drivers/crypto/qat/qat_asym_pmd.h deleted file mode 100644 index fd6b406248..0000000000 --- a/drivers/crypto/qat/qat_asym_pmd.h +++ /dev/null @@ -1,54 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2019 Intel Corporation - */ - - -#ifndef _QAT_ASYM_PMD_H_ -#define _QAT_ASYM_PMD_H_ - -#include -#include "qat_crypto.h" -#include "qat_device.h" - -/** Intel(R) QAT Asymmetric Crypto PMD driver name */ -#define CRYPTODEV_NAME_QAT_ASYM_PMD crypto_qat_asym - - -/** - * Helper function to add an asym capability - * - **/ -#define QAT_ASYM_CAP(n, o, l, r, i) \ - { \ - .op = 
RTE_CRYPTO_OP_TYPE_ASYMMETRIC, \ - {.asym = { \ - .xform_capa = { \ - .xform_type = RTE_CRYPTO_ASYM_XFORM_##n,\ - .op_types = o, \ - { \ - .modlen = { \ - .min = l, \ - .max = r, \ - .increment = i \ - }, } \ - } \ - }, \ - } \ - } - -extern uint8_t qat_asym_driver_id; - -extern struct qat_crypto_gen_dev_ops qat_asym_gen_dev_ops[]; - -void -qat_asym_init_op_cookie(void *op_cookie); - -uint16_t -qat_asym_pmd_enqueue_op_burst(void *qp, struct rte_crypto_op **ops, - uint16_t nb_ops); - -uint16_t -qat_asym_pmd_dequeue_op_burst(void *qp, struct rte_crypto_op **ops, - uint16_t nb_ops); - -#endif /* _QAT_ASYM_PMD_H_ */ From patchwork Fri Nov 5 00:19:28 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ji, Kai" X-Patchwork-Id: 103817 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id B5B7BA0C61; Fri, 5 Nov 2021 01:20:26 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 202AB427CE; Fri, 5 Nov 2021 01:19:52 +0100 (CET) Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by mails.dpdk.org (Postfix) with ESMTP id 1E727427AB for ; Fri, 5 Nov 2021 01:19:46 +0100 (CET) X-IronPort-AV: E=McAfee;i="6200,9189,10158"; a="212563732" X-IronPort-AV: E=Sophos;i="5.87,209,1631602800"; d="scan'208";a="212563732" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Nov 2021 17:19:46 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.87,209,1631602800"; d="scan'208";a="468671816" Received: from silpixa00400272.ir.intel.com (HELO silpixa00400272.ger.corp.intel.com) ([10.237.223.111]) by orsmga002.jf.intel.com with ESMTP; 04 Nov 2021 17:19:45 -0700 From: Kai Ji To: dev@dpdk.org Cc: gakhil@marvell.com, Kai Ji Date: Fri, 5 Nov 2021 00:19:28 +0000 Message-Id: <20211105001932.28784-8-kai.ji@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211105001932.28784-1-kai.ji@intel.com> References: <20211102134913.21148-1-kai.ji@intel.com> <20211105001932.28784-1-kai.ji@intel.com> Subject: [dpdk-dev] [dpdk-dev v4 07/11] crypto/qat: op burst data path rework X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch enables the op_build_request function in qat_enqueue_op_burst and qat_dequeue_process_response in qat_dequeue_op_burst. The op_build_request routine invoked when building a crypto request is selected based on the crypto operations set up during session init.
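For clarity, a minimal, self-contained sketch of the reworked dispatch is shown below. It is not the driver code itself: all demo_* names and types are illustrative assumptions. It models the two ideas in this patch, namely that the generic enqueue receives a build callback instead of switching on the service type, and that the queue pair's opaque[] slots cache the (session, builder) pair resolved for the previous op so the lookup is skipped for runs of ops sharing a session.

#include <stdint.h>
#include <string.h>

struct demo_session;

/* Per-session builder: turns one crypto op into a firmware descriptor. */
typedef int (*demo_builder_t)(void *op, struct demo_session *ctx,
		uint8_t *out_msg, void *cookie);

struct demo_session {
	demo_builder_t build_request;	/* chosen once at session init */
};

struct demo_op {
	struct demo_session *session;
};

struct demo_qp {
	uint64_t opaque[2];	/* [0] cached session, [1] cached builder */
	uint8_t ring[8][64];	/* stand-in for the hardware TX ring */
	void *cookies[8];
	uint32_t tail;
};

/* Build callback handed to the generic enqueue: re-resolves the builder
 * only when the session changes, mirroring the opaque[] handling added
 * to the symmetric build path in this patch. */
static int
demo_dispatch_build(void *in_op, uint8_t *out_msg, void *cookie,
		uint64_t *opaque)
{
	struct demo_op *op = in_op;
	struct demo_session *ctx = op->session;
	demo_builder_t build = (demo_builder_t)(uintptr_t)opaque[1];

	if ((struct demo_session *)(uintptr_t)opaque[0] != ctx) {
		/* First op of this session since the cache was reset. */
		build = ctx->build_request;
		opaque[0] = (uint64_t)(uintptr_t)ctx;
		opaque[1] = (uint64_t)(uintptr_t)build;
	}
	return build(op, ctx, out_msg, cookie);
}

/* Generic enqueue: no per-service branching, only the supplied callback. */
static uint16_t
demo_enqueue_burst(struct demo_qp *qp, void **ops, uint16_t nb_ops)
{
	uint16_t sent = 0;

	/* Invalidate the cache at the start of each burst, as the patch
	 * does with memset of the qp opaque data to 0xff. */
	memset(qp->opaque, 0xff, sizeof(qp->opaque));

	while (sent < nb_ops) {
		if (demo_dispatch_build(ops[sent], qp->ring[qp->tail],
				qp->cookies[qp->tail], qp->opaque) != 0)
			break;	/* stop at the first op that fails to build */
		qp->tail = (qp->tail + 1) % 8;
		sent++;
	}
	return sent;
}

The dequeue side follows the same shape: qat_dequeue_op_burst now calls the qat_op_dequeue_t handler it is given (qat_sym_process_response, qat_comp_process_response or qat_asym_process_response) rather than branching on the queue pair's service type.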
Signed-off-by: Kai Ji --- drivers/common/qat/qat_qp.c | 42 +- drivers/common/qat/qat_qp.h | 18 + drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c | 2 + drivers/crypto/qat/dev/qat_sym_pmd_gen1.c | 180 ++-- drivers/crypto/qat/qat_asym.c | 6 +- drivers/crypto/qat/qat_asym.h | 26 - drivers/crypto/qat/qat_crypto.h | 6 +- drivers/crypto/qat/qat_sym.c | 914 ++++++------------- drivers/crypto/qat/qat_sym.h | 4 - drivers/crypto/qat/qat_sym_session.c | 6 +- drivers/crypto/qat/qat_sym_session.h | 10 +- 11 files changed, 395 insertions(+), 819 deletions(-) diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c index 7faa5a48fd..56facf0fbe 100644 --- a/drivers/common/qat/qat_qp.c +++ b/drivers/common/qat/qat_qp.c @@ -550,8 +550,7 @@ adf_modulo(uint32_t data, uint32_t modulo_mask) } uint16_t -qat_enqueue_op_burst(void *qp, - __rte_unused qat_op_build_request_t op_build_request, +qat_enqueue_op_burst(void *qp, qat_op_build_request_t op_build_request, void **ops, uint16_t nb_ops) { register struct qat_queue *queue; @@ -602,29 +601,18 @@ qat_enqueue_op_burst(void *qp, } } -#ifdef BUILD_QAT_SYM +#ifdef RTE_LIB_SECURITY if (tmp_qp->service_type == QAT_SERVICE_SYMMETRIC) qat_sym_preprocess_requests(ops, nb_ops_possible); #endif + memset(tmp_qp->opaque, 0xff, sizeof(tmp_qp->opaque)); + while (nb_ops_sent != nb_ops_possible) { - if (tmp_qp->service_type == QAT_SERVICE_SYMMETRIC) { -#ifdef BUILD_QAT_SYM - ret = qat_sym_build_request(*ops, base_addr + tail, - tmp_qp->op_cookies[tail >> queue->trailz], - tmp_qp->qat_dev_gen); -#endif - } else if (tmp_qp->service_type == QAT_SERVICE_COMPRESSION) { - ret = qat_comp_build_request(*ops, base_addr + tail, + ret = op_build_request(*ops, base_addr + tail, tmp_qp->op_cookies[tail >> queue->trailz], - tmp_qp->qat_dev_gen); - } else if (tmp_qp->service_type == QAT_SERVICE_ASYMMETRIC) { -#ifdef BUILD_QAT_ASYM - ret = qat_asym_build_request(*ops, base_addr + tail, - tmp_qp->op_cookies[tail >> queue->trailz], - NULL, tmp_qp->qat_dev_gen); -#endif - } + tmp_qp->opaque, tmp_qp->qat_dev_gen); + if (ret != 0) { tmp_qp->stats.enqueue_err_count++; /* This message cannot be enqueued */ @@ -820,8 +808,7 @@ qat_enqueue_comp_op_burst(void *qp, void **ops, uint16_t nb_ops) uint16_t qat_dequeue_op_burst(void *qp, void **ops, - __rte_unused qat_op_dequeue_t qat_dequeue_process_response, - uint16_t nb_ops) + qat_op_dequeue_t qat_dequeue_process_response, uint16_t nb_ops) { struct qat_queue *rx_queue; struct qat_qp *tmp_qp = (struct qat_qp *)qp; @@ -839,21 +826,10 @@ qat_dequeue_op_burst(void *qp, void **ops, nb_fw_responses = 1; - if (tmp_qp->service_type == QAT_SERVICE_SYMMETRIC) - qat_sym_process_response(ops, resp_msg, - tmp_qp->op_cookies[head >> rx_queue->trailz], - NULL); - else if (tmp_qp->service_type == QAT_SERVICE_COMPRESSION) - nb_fw_responses = qat_comp_process_response( + nb_fw_responses = qat_dequeue_process_response( ops, resp_msg, tmp_qp->op_cookies[head >> rx_queue->trailz], &tmp_qp->stats.dequeue_err_count); -#ifdef BUILD_QAT_ASYM - else if (tmp_qp->service_type == QAT_SERVICE_ASYMMETRIC) - qat_asym_process_response(ops, resp_msg, - tmp_qp->op_cookies[head >> rx_queue->trailz], - NULL); -#endif head = adf_modulo(head + rx_queue->msg_size, rx_queue->modulo_mask); diff --git a/drivers/common/qat/qat_qp.h b/drivers/common/qat/qat_qp.h index f2ca0173de..3b58786bd8 100644 --- a/drivers/common/qat/qat_qp.h +++ b/drivers/common/qat/qat_qp.h @@ -14,6 +14,24 @@ struct qat_pci_device; +/* Default qp configuration for GEN4 devices */ +#define QAT_GEN4_QP_DEFCON 
(QAT_SERVICE_SYMMETRIC | \ + QAT_SERVICE_SYMMETRIC << 8 | \ + QAT_SERVICE_SYMMETRIC << 16 | \ + QAT_SERVICE_SYMMETRIC << 24) + +/* QAT GEN 4 specific macros */ +#define QAT_GEN4_BUNDLE_NUM 4 +#define QAT_GEN4_QPS_PER_BUNDLE_NUM 1 + +/* Queue pair setup error codes */ +#define QAT_NOMEM 1 +#define QAT_QP_INVALID_DESC_NO 2 +#define QAT_QP_BUSY 3 +#define QAT_PCI_NO_RESOURCE 4 + +#define ADF_ARB_RINGSRVARBEN_OFFSET 0x19C + /** * Structure associated with each queue. */ diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c index 621ff85638..9320eb5adf 100644 --- a/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c +++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c @@ -306,4 +306,6 @@ RTE_INIT(qat_asym_crypto_gen2_init) qat_asym_crypto_cap_get_gen1; qat_asym_gen_dev_ops[QAT_GEN2].get_feature_flags = qat_asym_crypto_feature_flags_get_gen1; + qat_asym_gen_dev_ops[QAT_GEN2].set_session = + qat_asym_crypto_set_session_gen1; } diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c index bcdf0172fe..14fc9d7f50 100644 --- a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c +++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c @@ -148,10 +148,6 @@ struct rte_cryptodev_ops qat_sym_crypto_ops_gen1 = { .sym_session_get_size = qat_sym_session_get_private_size, .sym_session_configure = qat_sym_session_configure, .sym_session_clear = qat_sym_session_clear, - - /* Raw data-path API related operations */ - .sym_get_raw_dp_ctx_size = qat_sym_get_dp_ctx_size, - .sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx, }; static struct qat_capabilities_info @@ -181,6 +177,94 @@ qat_sym_crypto_feature_flags_get_gen1( return feature_flags; } +#ifdef RTE_LIB_SECURITY + +#define QAT_SECURITY_SYM_CAPABILITIES \ + { /* AES DOCSIS BPI */ \ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ + {.sym = { \ + .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, \ + {.cipher = { \ + .algo = RTE_CRYPTO_CIPHER_AES_DOCSISBPI,\ + .block_size = 16, \ + .key_size = { \ + .min = 16, \ + .max = 32, \ + .increment = 16 \ + }, \ + .iv_size = { \ + .min = 16, \ + .max = 16, \ + .increment = 0 \ + } \ + }, } \ + }, } \ + } + +#define QAT_SECURITY_CAPABILITIES(sym) \ + [0] = { /* DOCSIS Uplink */ \ + .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL, \ + .protocol = RTE_SECURITY_PROTOCOL_DOCSIS, \ + .docsis = { \ + .direction = RTE_SECURITY_DOCSIS_UPLINK \ + }, \ + .crypto_capabilities = (sym) \ + }, \ + [1] = { /* DOCSIS Downlink */ \ + .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL, \ + .protocol = RTE_SECURITY_PROTOCOL_DOCSIS, \ + .docsis = { \ + .direction = RTE_SECURITY_DOCSIS_DOWNLINK \ + }, \ + .crypto_capabilities = (sym) \ + } + +static const struct rte_cryptodev_capabilities + qat_security_sym_capabilities[] = { + QAT_SECURITY_SYM_CAPABILITIES, + RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST() +}; + +static const struct rte_security_capability qat_security_capabilities_gen1[] = { + QAT_SECURITY_CAPABILITIES(qat_security_sym_capabilities), + { + .action = RTE_SECURITY_ACTION_TYPE_NONE + } +}; + +static const struct rte_security_capability * +qat_security_cap_get_gen1(void *dev __rte_unused) +{ + return qat_security_capabilities_gen1; +} + +struct rte_security_ops security_qat_ops_gen1 = { + .session_create = qat_security_session_create, + .session_update = NULL, + .session_stats_get = NULL, + .session_destroy = qat_security_session_destroy, + .set_pkt_metadata = NULL, + .capabilities_get = qat_security_cap_get_gen1 +}; + +void * +qat_sym_create_security_gen1(void *cryptodev) 
+{ + struct rte_security_ctx *security_instance; + + security_instance = rte_malloc(NULL, sizeof(struct rte_security_ctx), + RTE_CACHE_LINE_SIZE); + if (security_instance == NULL) + return NULL; + + security_instance->device = cryptodev; + security_instance->ops = &security_qat_ops_gen1; + security_instance->sess_cnt = 0; + + return (void *)security_instance; +} + +#endif int qat_sym_build_op_cipher_gen1(void *in_op, struct qat_sym_session *ctx, @@ -367,94 +451,6 @@ qat_sym_build_op_chain_gen1(void *in_op, struct qat_sym_session *ctx, return 0; } -#ifdef RTE_LIB_SECURITY - -#define QAT_SECURITY_SYM_CAPABILITIES \ - { /* AES DOCSIS BPI */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, \ - {.cipher = { \ - .algo = RTE_CRYPTO_CIPHER_AES_DOCSISBPI,\ - .block_size = 16, \ - .key_size = { \ - .min = 16, \ - .max = 32, \ - .increment = 16 \ - }, \ - .iv_size = { \ - .min = 16, \ - .max = 16, \ - .increment = 0 \ - } \ - }, } \ - }, } \ - } - -#define QAT_SECURITY_CAPABILITIES(sym) \ - [0] = { /* DOCSIS Uplink */ \ - .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL, \ - .protocol = RTE_SECURITY_PROTOCOL_DOCSIS, \ - .docsis = { \ - .direction = RTE_SECURITY_DOCSIS_UPLINK \ - }, \ - .crypto_capabilities = (sym) \ - }, \ - [1] = { /* DOCSIS Downlink */ \ - .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL, \ - .protocol = RTE_SECURITY_PROTOCOL_DOCSIS, \ - .docsis = { \ - .direction = RTE_SECURITY_DOCSIS_DOWNLINK \ - }, \ - .crypto_capabilities = (sym) \ - } - -static const struct rte_cryptodev_capabilities - qat_security_sym_capabilities[] = { - QAT_SECURITY_SYM_CAPABILITIES, - RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST() -}; - -static const struct rte_security_capability qat_security_capabilities_gen1[] = { - QAT_SECURITY_CAPABILITIES(qat_security_sym_capabilities), - { - .action = RTE_SECURITY_ACTION_TYPE_NONE - } -}; - -static const struct rte_security_capability * -qat_security_cap_get_gen1(void *dev __rte_unused) -{ - return qat_security_capabilities_gen1; -} - -struct rte_security_ops security_qat_ops_gen1 = { - .session_create = qat_security_session_create, - .session_update = NULL, - .session_stats_get = NULL, - .session_destroy = qat_security_session_destroy, - .set_pkt_metadata = NULL, - .capabilities_get = qat_security_cap_get_gen1 -}; - -void * -qat_sym_create_security_gen1(void *cryptodev) -{ - struct rte_security_ctx *security_instance; - - security_instance = rte_malloc(NULL, sizeof(struct rte_security_ctx), - RTE_CACHE_LINE_SIZE); - if (security_instance == NULL) - return NULL; - - security_instance->device = cryptodev; - security_instance->ops = &security_qat_ops_gen1; - security_instance->sess_cnt = 0; - - return (void *)security_instance; -} - -#endif int qat_sym_crypto_set_session_gen1(void *cryptodev __rte_unused, void *session) { diff --git a/drivers/crypto/qat/qat_asym.c b/drivers/crypto/qat/qat_asym.c index 83747118a5..a25131e4d9 100644 --- a/drivers/crypto/qat/qat_asym.c +++ b/drivers/crypto/qat/qat_asym.c @@ -15,6 +15,10 @@ #include "qat_logs.h" #include "qat_asym.h" +uint8_t qat_asym_driver_id; + +struct qat_crypto_gen_dev_ops qat_asym_gen_dev_ops[QAT_N_GENS]; + void qat_asym_init_op_cookie(void *op_cookie) { @@ -769,7 +773,7 @@ qat_asym_fill_arrays(struct rte_crypto_asym_op *asym_op, return 0; } -int +static __rte_always_inline int qat_asym_build_request(void *in_op, uint8_t *out_msg, void *op_cookie, __rte_unused uint64_t *opaque, __rte_unused enum qat_device_gen dev_gen) diff --git a/drivers/crypto/qat/qat_asym.h 
b/drivers/crypto/qat/qat_asym.h index 655a835c7f..55b1e1b2cc 100644 --- a/drivers/crypto/qat/qat_asym.h +++ b/drivers/crypto/qat/qat_asym.h @@ -29,10 +29,6 @@ typedef uint64_t large_int_ptr; #define QAT_ASYM_RSA_NUM_OUT_PARAMS 1 #define QAT_ASYM_RSA_QT_NUM_IN_PARAMS 6 -uint8_t qat_asym_driver_id; - -struct qat_crypto_gen_dev_ops qat_asym_gen_dev_ops[QAT_N_GENS]; - /** * helper function to add an asym capability * @@ -108,28 +104,6 @@ void qat_asym_session_clear(struct rte_cryptodev *dev, struct rte_cryptodev_asym_session *sess); -/* - * Build PKE request to be sent to the fw, partially uses template - * request generated during session creation. - * - * @param in_op Pointer to the crypto operation, for every - * service it points to service specific struct. - * @param out_msg Message to be returned to enqueue function - * @param op_cookie Cookie pointer that holds private metadata - * @param qat_dev_gen Generation of QAT hardware - * - * @return - * This function always returns zero, - * it is because of backward compatibility. - * - 0: Always returned - * - */ -int -qat_asym_build_request(void *in_op, uint8_t *out_msg, - void *op_cookie, - __rte_unused uint64_t *opaque, - enum qat_device_gen qat_dev_gen); - /* * Process PKE response received from outgoing queue of QAT * diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h index a32c716d28..3ac6c90e1c 100644 --- a/drivers/crypto/qat/qat_crypto.h +++ b/drivers/crypto/qat/qat_crypto.h @@ -12,7 +12,10 @@ extern uint8_t qat_sym_driver_id; extern uint8_t qat_asym_driver_id; -/** helper macro to set cryptodev capability range **/ +/** + * helper macro to set cryptodev capability range + * + **/ #define CAP_RNG(n, l, r, i) .n = {.min = l, .max = r, .increment = i} #define CAP_RNG_ZERO(n) .n = {.min = 0, .max = 0, .increment = 0} @@ -61,6 +64,7 @@ struct qat_crypto_gen_dev_ops { }; extern struct qat_crypto_gen_dev_ops qat_sym_gen_dev_ops[]; +extern struct qat_crypto_gen_dev_ops qat_asym_gen_dev_ops[]; int qat_cryptodev_config(struct rte_cryptodev *dev, diff --git a/drivers/crypto/qat/qat_sym.c b/drivers/crypto/qat/qat_sym.c index 8b1a49c690..2bbb6615f3 100644 --- a/drivers/crypto/qat/qat_sym.c +++ b/drivers/crypto/qat/qat_sym.c @@ -11,11 +11,25 @@ #include #include "qat_sym.h" +#include "qat_crypto.h" +#include "qat_qp.h" uint8_t qat_sym_driver_id; struct qat_crypto_gen_dev_ops qat_sym_gen_dev_ops[QAT_N_GENS]; +/* An rte_driver is needed in the registration of both the device and the driver + * with cryptodev. + * The actual qat pci's rte_driver can't be used as its name represents + * the whole pci device with all services. Think of this as a holder for a name + * for the crypto part of the pci device. + */ +static const char qat_sym_drv_name[] = RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD); +static const struct rte_driver cryptodev_qat_sym_driver = { + .name = qat_sym_drv_name, + .alias = qat_sym_drv_name +}; + void qat_sym_init_op_cookie(void *op_cookie) { @@ -37,246 +51,67 @@ qat_sym_init_op_cookie(void *op_cookie) opt.spc_gmac.cd_cipher); } -/** Decrypt a single partial block - * Depends on openssl libcrypto - * Uses ECB+XOR to do CFB encryption, same result, more performant - */ -static inline int -bpi_cipher_decrypt(uint8_t *src, uint8_t *dst, - uint8_t *iv, int ivlen, int srclen, - void *bpi_ctx) -{ - EVP_CIPHER_CTX *ctx = (EVP_CIPHER_CTX *)bpi_ctx; - int encrypted_ivlen; - uint8_t encrypted_iv[BPI_MAX_ENCR_IV_LEN]; - uint8_t *encr = encrypted_iv; - - /* ECB method: encrypt (not decrypt!) 
the IV, then XOR with plaintext */ - if (EVP_EncryptUpdate(ctx, encrypted_iv, &encrypted_ivlen, iv, ivlen) - <= 0) - goto cipher_decrypt_err; - - for (; srclen != 0; --srclen, ++dst, ++src, ++encr) - *dst = *src ^ *encr; - - return 0; - -cipher_decrypt_err: - QAT_DP_LOG(ERR, "libcrypto ECB cipher decrypt for BPI IV failed"); - return -EINVAL; -} - - -static inline uint32_t -qat_bpicipher_preprocess(struct qat_sym_session *ctx, - struct rte_crypto_op *op) -{ - int block_len = qat_cipher_get_block_size(ctx->qat_cipher_alg); - struct rte_crypto_sym_op *sym_op = op->sym; - uint8_t last_block_len = block_len > 0 ? - sym_op->cipher.data.length % block_len : 0; - - if (last_block_len && - ctx->qat_dir == ICP_QAT_HW_CIPHER_DECRYPT) { - - /* Decrypt last block */ - uint8_t *last_block, *dst, *iv; - uint32_t last_block_offset = sym_op->cipher.data.offset + - sym_op->cipher.data.length - last_block_len; - last_block = (uint8_t *) rte_pktmbuf_mtod_offset(sym_op->m_src, - uint8_t *, last_block_offset); - - if (unlikely((sym_op->m_dst != NULL) - && (sym_op->m_dst != sym_op->m_src))) - /* out-of-place operation (OOP) */ - dst = (uint8_t *) rte_pktmbuf_mtod_offset(sym_op->m_dst, - uint8_t *, last_block_offset); - else - dst = last_block; - - if (last_block_len < sym_op->cipher.data.length) - /* use previous block ciphertext as IV */ - iv = last_block - block_len; - else - /* runt block, i.e. less than one full block */ - iv = rte_crypto_op_ctod_offset(op, uint8_t *, - ctx->cipher_iv.offset); - -#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG - QAT_DP_HEXDUMP_LOG(DEBUG, "BPI: src before pre-process:", - last_block, last_block_len); - if (sym_op->m_dst != NULL) - QAT_DP_HEXDUMP_LOG(DEBUG, "BPI:dst before pre-process:", - dst, last_block_len); -#endif - bpi_cipher_decrypt(last_block, dst, iv, block_len, - last_block_len, ctx->bpi_ctx); -#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG - QAT_DP_HEXDUMP_LOG(DEBUG, "BPI: src after pre-process:", - last_block, last_block_len); - if (sym_op->m_dst != NULL) - QAT_DP_HEXDUMP_LOG(DEBUG, "BPI: dst after pre-process:", - dst, last_block_len); -#endif - } - - return sym_op->cipher.data.length - last_block_len; -} - -static inline void -set_cipher_iv(uint16_t iv_length, uint16_t iv_offset, - struct icp_qat_fw_la_cipher_req_params *cipher_param, - struct rte_crypto_op *op, - struct icp_qat_fw_la_bulk_req *qat_req) +static __rte_always_inline int +qat_sym_build_request(void *in_op, uint8_t *out_msg, + void *op_cookie, uint64_t *opaque, enum qat_device_gen dev_gen) { - /* copy IV into request if it fits */ - if (iv_length <= sizeof(cipher_param->u.cipher_IV_array)) { - rte_memcpy(cipher_param->u.cipher_IV_array, - rte_crypto_op_ctod_offset(op, uint8_t *, - iv_offset), - iv_length); - } else { - ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET( - qat_req->comn_hdr.serv_specif_flags, - ICP_QAT_FW_CIPH_IV_64BIT_PTR); - cipher_param->u.s.cipher_IV_ptr = - rte_crypto_op_ctophys_offset(op, - iv_offset); - } -} + struct rte_crypto_op *op = (struct rte_crypto_op *)in_op; + void *sess = (void *)opaque[0]; + qat_sym_build_request_t build_request = (void *)opaque[1]; + struct qat_sym_session *ctx = NULL; -/** Set IV for CCM is special case, 0th byte is set to q-1 - * where q is padding of nonce in 16 byte block - */ -static inline void -set_cipher_iv_ccm(uint16_t iv_length, uint16_t iv_offset, - struct icp_qat_fw_la_cipher_req_params *cipher_param, - struct rte_crypto_op *op, uint8_t q, uint8_t aad_len_field_sz) -{ - rte_memcpy(((uint8_t *)cipher_param->u.cipher_IV_array) + - ICP_QAT_HW_CCM_NONCE_OFFSET, - 
rte_crypto_op_ctod_offset(op, uint8_t *, - iv_offset) + ICP_QAT_HW_CCM_NONCE_OFFSET, - iv_length); - *(uint8_t *)&cipher_param->u.cipher_IV_array[0] = - q - ICP_QAT_HW_CCM_NONCE_OFFSET; - - if (aad_len_field_sz) - rte_memcpy(&op->sym->aead.aad.data[ICP_QAT_HW_CCM_NONCE_OFFSET], - rte_crypto_op_ctod_offset(op, uint8_t *, - iv_offset) + ICP_QAT_HW_CCM_NONCE_OFFSET, - iv_length); -} + if (likely(op->sess_type == RTE_CRYPTO_OP_WITH_SESSION)) { + ctx = get_sym_session_private_data(op->sym->session, + qat_sym_driver_id); + if (unlikely(!ctx)) { + QAT_DP_LOG(ERR, "No session for this device"); + return -EINVAL; + } + if (sess != ctx) { + struct rte_cryptodev *cdev; + struct qat_cryptodev_private *internals; + enum rte_proc_type_t proc_type; + + cdev = rte_cryptodev_pmd_get_dev(ctx->dev_id); + internals = cdev->data->dev_private; + proc_type = rte_eal_process_type(); + + if (internals->qat_dev->qat_dev_gen != dev_gen) { + op->status = + RTE_CRYPTO_OP_STATUS_INVALID_SESSION; + return -EINVAL; + } -/** Handle Single-Pass AES-GMAC on QAT GEN3 */ -static inline void -handle_spc_gmac(struct qat_sym_session *ctx, struct rte_crypto_op *op, - struct qat_sym_op_cookie *cookie, - struct icp_qat_fw_la_bulk_req *qat_req) -{ - static const uint32_t ver_key_offset = - sizeof(struct icp_qat_hw_auth_setup) + - ICP_QAT_HW_GALOIS_128_STATE1_SZ + - ICP_QAT_HW_GALOIS_H_SZ + ICP_QAT_HW_GALOIS_LEN_A_SZ + - ICP_QAT_HW_GALOIS_E_CTR0_SZ + - sizeof(struct icp_qat_hw_cipher_config); - struct icp_qat_fw_cipher_cd_ctrl_hdr *cipher_cd_ctrl = - (void *) &qat_req->cd_ctrl; - struct icp_qat_fw_la_cipher_req_params *cipher_param = - (void *) &qat_req->serv_specif_rqpars; - uint32_t data_length = op->sym->auth.data.length; - - /* Fill separate Content Descriptor for this op */ - rte_memcpy(cookie->opt.spc_gmac.cd_cipher.key, - ctx->auth_op == ICP_QAT_HW_AUTH_GENERATE ? - ctx->cd.cipher.key : - RTE_PTR_ADD(&ctx->cd, ver_key_offset), - ctx->auth_key_length); - cookie->opt.spc_gmac.cd_cipher.cipher_config.val = - ICP_QAT_HW_CIPHER_CONFIG_BUILD( - ICP_QAT_HW_CIPHER_AEAD_MODE, - ctx->qat_cipher_alg, - ICP_QAT_HW_CIPHER_NO_CONVERT, - (ctx->auth_op == ICP_QAT_HW_AUTH_GENERATE ? 
- ICP_QAT_HW_CIPHER_ENCRYPT : - ICP_QAT_HW_CIPHER_DECRYPT)); - QAT_FIELD_SET(cookie->opt.spc_gmac.cd_cipher.cipher_config.val, - ctx->digest_length, - QAT_CIPHER_AEAD_HASH_CMP_LEN_BITPOS, - QAT_CIPHER_AEAD_HASH_CMP_LEN_MASK); - cookie->opt.spc_gmac.cd_cipher.cipher_config.reserved = - ICP_QAT_HW_CIPHER_CONFIG_BUILD_UPPER(data_length); - - /* Update the request */ - qat_req->cd_pars.u.s.content_desc_addr = - cookie->opt.spc_gmac.cd_phys_addr; - qat_req->cd_pars.u.s.content_desc_params_sz = RTE_ALIGN_CEIL( - sizeof(struct icp_qat_hw_cipher_config) + - ctx->auth_key_length, 8) >> 3; - qat_req->comn_mid.src_length = data_length; - qat_req->comn_mid.dst_length = 0; - - cipher_param->spc_aad_addr = 0; - cipher_param->spc_auth_res_addr = op->sym->auth.digest.phys_addr; - cipher_param->spc_aad_sz = data_length; - cipher_param->reserved = 0; - cipher_param->spc_auth_res_sz = ctx->digest_length; - - qat_req->comn_hdr.service_cmd_id = ICP_QAT_FW_LA_CMD_CIPHER; - cipher_cd_ctrl->cipher_cfg_offset = 0; - ICP_QAT_FW_COMN_CURR_ID_SET(cipher_cd_ctrl, ICP_QAT_FW_SLICE_CIPHER); - ICP_QAT_FW_COMN_NEXT_ID_SET(cipher_cd_ctrl, ICP_QAT_FW_SLICE_DRAM_WR); - ICP_QAT_FW_LA_SINGLE_PASS_PROTO_FLAG_SET( - qat_req->comn_hdr.serv_specif_flags, - ICP_QAT_FW_LA_SINGLE_PASS_PROTO); - ICP_QAT_FW_LA_PROTO_SET( - qat_req->comn_hdr.serv_specif_flags, - ICP_QAT_FW_LA_NO_PROTO); -} + if (unlikely(ctx->build_request[proc_type] == NULL)) { + int ret = + qat_sym_gen_dev_ops[dev_gen].set_session( + (void *)cdev, sess); + if (ret < 0) { + op->status = + RTE_CRYPTO_OP_STATUS_INVALID_SESSION; + return -EINVAL; + } + } -int -qat_sym_build_request(void *in_op, uint8_t *out_msg, - void *op_cookie, __rte_unused enum qat_device_gen qat_dev_gen) -{ - int ret = 0; - struct qat_sym_session *ctx = NULL; - struct icp_qat_fw_la_cipher_req_params *cipher_param; - struct icp_qat_fw_la_cipher_20_req_params *cipher_param20; - struct icp_qat_fw_la_auth_req_params *auth_param; - register struct icp_qat_fw_la_bulk_req *qat_req; - uint8_t do_auth = 0, do_cipher = 0, do_aead = 0; - uint32_t cipher_len = 0, cipher_ofs = 0; - uint32_t auth_len = 0, auth_ofs = 0; - uint32_t min_ofs = 0; - uint64_t src_buf_start = 0, dst_buf_start = 0; - uint64_t auth_data_end = 0; - uint8_t do_sgl = 0; - uint8_t in_place = 1; - int alignment_adjustment = 0; - int oop_shift = 0; - struct rte_crypto_op *op = (struct rte_crypto_op *)in_op; - struct qat_sym_op_cookie *cookie = - (struct qat_sym_op_cookie *)op_cookie; - - if (unlikely(op->type != RTE_CRYPTO_OP_TYPE_SYMMETRIC)) { - QAT_DP_LOG(ERR, "QAT PMD only supports symmetric crypto " - "operation requests, op (%p) is not a " - "symmetric operation.", op); - return -EINVAL; + build_request = ctx->build_request[proc_type]; + opaque[0] = (uintptr_t)ctx; + opaque[1] = (uintptr_t)build_request; + } } - if (unlikely(op->sess_type == RTE_CRYPTO_OP_SESSIONLESS)) { - QAT_DP_LOG(ERR, "QAT PMD only supports session oriented" - " requests, op (%p) is sessionless.", op); - return -EINVAL; - } else if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) { - ctx = (struct qat_sym_session *)get_sym_session_private_data( - op->sym->session, qat_sym_driver_id); #ifdef RTE_LIB_SECURITY - } else { - ctx = (struct qat_sym_session *)get_sec_session_private_data( - op->sym->sec_session); - if (likely(ctx)) { + else if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) { + if (sess != (void *)op->sym->sec_session) { + struct rte_cryptodev *cdev; + struct qat_cryptodev_private *internals; + enum rte_proc_type_t proc_type; + + ctx = 
get_sec_session_private_data( + op->sym->sec_session); + if (unlikely(!ctx)) { + QAT_DP_LOG(ERR, "No session for this device"); + return -EINVAL; + } if (unlikely(ctx->bpi_ctx == NULL)) { QAT_DP_LOG(ERR, "QAT PMD only supports security" " operation requests for" @@ -292,463 +127,234 @@ qat_sym_build_request(void *in_op, uint8_t *out_msg, op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS; return -EINVAL; } - } -#endif - } + cdev = rte_cryptodev_pmd_get_dev(ctx->dev_id); + internals = cdev->data->dev_private; + proc_type = rte_eal_process_type(); - if (unlikely(ctx == NULL)) { - QAT_DP_LOG(ERR, "Session was not created for this device"); - return -EINVAL; - } + if (internals->qat_dev->qat_dev_gen != dev_gen) { + op->status = + RTE_CRYPTO_OP_STATUS_INVALID_SESSION; + return -EINVAL; + } + + if (unlikely(ctx->build_request[proc_type] == NULL)) { + int ret = + qat_sym_gen_dev_ops[dev_gen].set_session( + (void *)cdev, sess); + if (ret < 0) { + op->status = + RTE_CRYPTO_OP_STATUS_INVALID_SESSION; + return -EINVAL; + } + } - qat_req = (struct icp_qat_fw_la_bulk_req *)out_msg; - rte_mov128((uint8_t *)qat_req, (const uint8_t *)&(ctx->fw_req)); - qat_req->comn_mid.opaque_data = (uint64_t)(uintptr_t)op; - cipher_param = (void *)&qat_req->serv_specif_rqpars; - cipher_param20 = (void *)&qat_req->serv_specif_rqpars; - auth_param = (void *)((uint8_t *)cipher_param + - ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET); - - if ((ctx->qat_cmd == ICP_QAT_FW_LA_CMD_HASH_CIPHER || - ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_HASH) && - !ctx->is_gmac) { - /* AES-GCM or AES-CCM */ - if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128 || - ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64 || - (ctx->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_AES128 - && ctx->qat_mode == ICP_QAT_HW_CIPHER_CTR_MODE - && ctx->qat_hash_alg == - ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC)) { - do_aead = 1; - } else { - do_auth = 1; - do_cipher = 1; + sess = (void *)op->sym->sec_session; + build_request = ctx->build_request[proc_type]; + opaque[0] = (uintptr_t)sess; + opaque[1] = (uintptr_t)build_request; } - } else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_AUTH || ctx->is_gmac) { - do_auth = 1; - do_cipher = 0; - } else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER) { - do_auth = 0; - do_cipher = 1; + } +#endif + else { /* RTE_CRYPTO_OP_SESSIONLESS */ + op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS; + QAT_LOG(DEBUG, "QAT does not support sessionless operation"); + return -1; } - if (do_cipher) { + return build_request(op, (void *)ctx, out_msg, op_cookie); +} - if (ctx->qat_cipher_alg == - ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2 || - ctx->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_KASUMI || - ctx->qat_cipher_alg == - ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3) { +uint16_t +qat_sym_enqueue_burst(void *qp, struct rte_crypto_op **ops, + uint16_t nb_ops) +{ + return qat_enqueue_op_burst(qp, qat_sym_build_request, + (void **)ops, nb_ops); +} - if (unlikely( - (op->sym->cipher.data.length % BYTE_LENGTH != 0) || - (op->sym->cipher.data.offset % BYTE_LENGTH != 0))) { - QAT_DP_LOG(ERR, - "SNOW3G/KASUMI/ZUC in QAT PMD only supports byte aligned values"); - op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS; - return -EINVAL; - } - cipher_len = op->sym->cipher.data.length >> 3; - cipher_ofs = op->sym->cipher.data.offset >> 3; - - } else if (ctx->bpi_ctx) { - /* DOCSIS - only send complete blocks to device. - * Process any partial block using CFB mode. 
- * Even if 0 complete blocks, still send this to device - * to get into rx queue for post-process and dequeuing - */ - cipher_len = qat_bpicipher_preprocess(ctx, op); - cipher_ofs = op->sym->cipher.data.offset; - } else { - cipher_len = op->sym->cipher.data.length; - cipher_ofs = op->sym->cipher.data.offset; - } +uint16_t +qat_sym_dequeue_burst(void *qp, struct rte_crypto_op **ops, + uint16_t nb_ops) +{ + return qat_dequeue_op_burst(qp, (void **)ops, + qat_sym_process_response, nb_ops); +} - set_cipher_iv(ctx->cipher_iv.length, ctx->cipher_iv.offset, - cipher_param, op, qat_req); - min_ofs = cipher_ofs; +int +qat_sym_dev_create(struct qat_pci_device *qat_pci_dev, + struct qat_dev_cmd_param *qat_dev_cmd_param __rte_unused) +{ + int i = 0, ret = 0; + struct qat_device_info *qat_dev_instance = + &qat_pci_devs[qat_pci_dev->qat_dev_id]; + struct rte_cryptodev_pmd_init_params init_params = { + .name = "", + .socket_id = qat_dev_instance->pci_dev->device.numa_node, + .private_data_size = sizeof(struct qat_cryptodev_private) + }; + char name[RTE_CRYPTODEV_NAME_MAX_LEN]; + char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN]; + struct rte_cryptodev *cryptodev; + struct qat_cryptodev_private *internals; + struct qat_capabilities_info capa_info; + const struct rte_cryptodev_capabilities *capabilities; + const struct qat_crypto_gen_dev_ops *gen_dev_ops = + &qat_sym_gen_dev_ops[qat_pci_dev->qat_dev_gen]; + uint64_t capa_size; + + snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s_%s", + qat_pci_dev->name, "sym"); + QAT_LOG(DEBUG, "Creating QAT SYM device %s", name); + + if (gen_dev_ops->cryptodev_ops == NULL) { + QAT_LOG(ERR, "Device %s does not support symmetric crypto", + name); + return -(EFAULT); } - if (do_auth) { - - if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2 || - ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_KASUMI_F9 || - ctx->qat_hash_alg == - ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3) { - if (unlikely( - (op->sym->auth.data.offset % BYTE_LENGTH != 0) || - (op->sym->auth.data.length % BYTE_LENGTH != 0))) { - QAT_DP_LOG(ERR, - "For SNOW3G/KASUMI/ZUC, QAT PMD only supports byte aligned values"); - op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS; - return -EINVAL; - } - auth_ofs = op->sym->auth.data.offset >> 3; - auth_len = op->sym->auth.data.length >> 3; - - auth_param->u1.aad_adr = - rte_crypto_op_ctophys_offset(op, - ctx->auth_iv.offset); - - } else if (ctx->qat_hash_alg == - ICP_QAT_HW_AUTH_ALGO_GALOIS_128 || - ctx->qat_hash_alg == - ICP_QAT_HW_AUTH_ALGO_GALOIS_64) { - /* AES-GMAC */ - set_cipher_iv(ctx->auth_iv.length, - ctx->auth_iv.offset, - cipher_param, op, qat_req); - auth_ofs = op->sym->auth.data.offset; - auth_len = op->sym->auth.data.length; - - auth_param->u1.aad_adr = 0; - auth_param->u2.aad_sz = 0; - - } else { - auth_ofs = op->sym->auth.data.offset; - auth_len = op->sym->auth.data.length; - + /* + * All processes must use same driver id so they can share sessions. + * Store driver_id so we can validate that all processes have the same + * value, typically they have, but could differ if binaries built + * separately. 
+ */ + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + qat_pci_dev->qat_sym_driver_id = + qat_sym_driver_id; + } else if (rte_eal_process_type() == RTE_PROC_SECONDARY) { + if (qat_pci_dev->qat_sym_driver_id != + qat_sym_driver_id) { + QAT_LOG(ERR, + "Device %s have different driver id than corresponding device in primary process", + name); + return -(EFAULT); } - min_ofs = auth_ofs; - - if (ctx->qat_hash_alg != ICP_QAT_HW_AUTH_ALGO_NULL || - ctx->auth_op == ICP_QAT_HW_AUTH_VERIFY) - auth_param->auth_res_addr = - op->sym->auth.digest.phys_addr; - } - if (do_aead) { - /* - * This address may used for setting AAD physical pointer - * into IV offset from op - */ - rte_iova_t aad_phys_addr_aead = op->sym->aead.aad.phys_addr; - if (ctx->qat_hash_alg == - ICP_QAT_HW_AUTH_ALGO_GALOIS_128 || - ctx->qat_hash_alg == - ICP_QAT_HW_AUTH_ALGO_GALOIS_64) { - - set_cipher_iv(ctx->cipher_iv.length, - ctx->cipher_iv.offset, - cipher_param, op, qat_req); - - } else if (ctx->qat_hash_alg == - ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC) { - - /* In case of AES-CCM this may point to user selected - * memory or iv offset in cypto_op - */ - uint8_t *aad_data = op->sym->aead.aad.data; - /* This is true AAD length, it not includes 18 bytes of - * preceding data - */ - uint8_t aad_ccm_real_len = 0; - uint8_t aad_len_field_sz = 0; - uint32_t msg_len_be = - rte_bswap32(op->sym->aead.data.length); - - if (ctx->aad_len > ICP_QAT_HW_CCM_AAD_DATA_OFFSET) { - aad_len_field_sz = ICP_QAT_HW_CCM_AAD_LEN_INFO; - aad_ccm_real_len = ctx->aad_len - - ICP_QAT_HW_CCM_AAD_B0_LEN - - ICP_QAT_HW_CCM_AAD_LEN_INFO; - } else { - /* - * aad_len not greater than 18, so no actual aad - * data, then use IV after op for B0 block - */ - aad_data = rte_crypto_op_ctod_offset(op, - uint8_t *, - ctx->cipher_iv.offset); - aad_phys_addr_aead = - rte_crypto_op_ctophys_offset(op, - ctx->cipher_iv.offset); - } - - uint8_t q = ICP_QAT_HW_CCM_NQ_CONST - - ctx->cipher_iv.length; - - aad_data[0] = ICP_QAT_HW_CCM_BUILD_B0_FLAGS( - aad_len_field_sz, - ctx->digest_length, q); - - if (q > ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE) { - memcpy(aad_data + ctx->cipher_iv.length + - ICP_QAT_HW_CCM_NONCE_OFFSET + - (q - ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE), - (uint8_t *)&msg_len_be, - ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE); - } else { - memcpy(aad_data + ctx->cipher_iv.length + - ICP_QAT_HW_CCM_NONCE_OFFSET, - (uint8_t *)&msg_len_be - + (ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE - - q), q); - } - - if (aad_len_field_sz > 0) { - *(uint16_t *)&aad_data[ICP_QAT_HW_CCM_AAD_B0_LEN] - = rte_bswap16(aad_ccm_real_len); - - if ((aad_ccm_real_len + aad_len_field_sz) - % ICP_QAT_HW_CCM_AAD_B0_LEN) { - uint8_t pad_len = 0; - uint8_t pad_idx = 0; - - pad_len = ICP_QAT_HW_CCM_AAD_B0_LEN - - ((aad_ccm_real_len + aad_len_field_sz) % - ICP_QAT_HW_CCM_AAD_B0_LEN); - pad_idx = ICP_QAT_HW_CCM_AAD_B0_LEN + - aad_ccm_real_len + aad_len_field_sz; - memset(&aad_data[pad_idx], - 0, pad_len); - } + /* Populate subset device to use in cryptodev device creation */ + qat_dev_instance->sym_rte_dev.driver = &cryptodev_qat_sym_driver; + qat_dev_instance->sym_rte_dev.numa_node = + qat_dev_instance->pci_dev->device.numa_node; + qat_dev_instance->sym_rte_dev.devargs = NULL; - } + cryptodev = rte_cryptodev_pmd_create(name, + &(qat_dev_instance->sym_rte_dev), &init_params); - set_cipher_iv_ccm(ctx->cipher_iv.length, - ctx->cipher_iv.offset, - cipher_param, op, q, - aad_len_field_sz); + if (cryptodev == NULL) + return -ENODEV; - } + qat_dev_instance->sym_rte_dev.name = cryptodev->data->name; + cryptodev->driver_id = 
qat_sym_driver_id; + cryptodev->dev_ops = gen_dev_ops->cryptodev_ops; - cipher_len = op->sym->aead.data.length; - cipher_ofs = op->sym->aead.data.offset; - auth_len = op->sym->aead.data.length; - auth_ofs = op->sym->aead.data.offset; + cryptodev->enqueue_burst = qat_sym_enqueue_burst; + cryptodev->dequeue_burst = qat_sym_dequeue_burst; - auth_param->u1.aad_adr = aad_phys_addr_aead; - auth_param->auth_res_addr = op->sym->aead.digest.phys_addr; - min_ofs = op->sym->aead.data.offset; - } + cryptodev->feature_flags = gen_dev_ops->get_feature_flags(qat_pci_dev); - if (op->sym->m_src->nb_segs > 1 || - (op->sym->m_dst && op->sym->m_dst->nb_segs > 1)) - do_sgl = 1; - - /* adjust for chain case */ - if (do_cipher && do_auth) - min_ofs = cipher_ofs < auth_ofs ? cipher_ofs : auth_ofs; - - if (unlikely(min_ofs >= rte_pktmbuf_data_len(op->sym->m_src) && do_sgl)) - min_ofs = 0; - - if (unlikely((op->sym->m_dst != NULL) && - (op->sym->m_dst != op->sym->m_src))) { - /* Out-of-place operation (OOP) - * Don't align DMA start. DMA the minimum data-set - * so as not to overwrite data in dest buffer - */ - in_place = 0; - src_buf_start = - rte_pktmbuf_iova_offset(op->sym->m_src, min_ofs); - dst_buf_start = - rte_pktmbuf_iova_offset(op->sym->m_dst, min_ofs); - oop_shift = min_ofs; + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return 0; - } else { - /* In-place operation - * Start DMA at nearest aligned address below min_ofs - */ - src_buf_start = - rte_pktmbuf_iova_offset(op->sym->m_src, min_ofs) - & QAT_64_BTYE_ALIGN_MASK; - - if (unlikely((rte_pktmbuf_iova(op->sym->m_src) - - rte_pktmbuf_headroom(op->sym->m_src)) - > src_buf_start)) { - /* alignment has pushed addr ahead of start of mbuf - * so revert and take the performance hit - */ - src_buf_start = - rte_pktmbuf_iova_offset(op->sym->m_src, - min_ofs); +#ifdef RTE_LIB_SECURITY + if (gen_dev_ops->create_security_ctx) { + cryptodev->security_ctx = + gen_dev_ops->create_security_ctx((void *)cryptodev); + if (cryptodev->security_ctx == NULL) { + QAT_LOG(ERR, "rte_security_ctx memory alloc failed"); + ret = -ENOMEM; + goto error; } - dst_buf_start = src_buf_start; - - /* remember any adjustment for later, note, can be +/- */ - alignment_adjustment = src_buf_start - - rte_pktmbuf_iova_offset(op->sym->m_src, min_ofs); - } - if (do_cipher || do_aead) { - cipher_param->cipher_offset = - (uint32_t)rte_pktmbuf_iova_offset( - op->sym->m_src, cipher_ofs) - src_buf_start; - cipher_param->cipher_length = cipher_len; + cryptodev->feature_flags |= RTE_CRYPTODEV_FF_SECURITY; + QAT_LOG(INFO, "Device %s rte_security support ensabled", name); } else { - cipher_param->cipher_offset = 0; - cipher_param->cipher_length = 0; + QAT_LOG(INFO, "Device %s rte_security support disabled", name); } - - if (!ctx->is_single_pass) { - /* Do not let to overwrite spc_aad len */ - if (do_auth || do_aead) { - auth_param->auth_off = - (uint32_t)rte_pktmbuf_iova_offset( - op->sym->m_src, auth_ofs) - src_buf_start; - auth_param->auth_len = auth_len; - } else { - auth_param->auth_off = 0; - auth_param->auth_len = 0; +#endif + snprintf(capa_memz_name, RTE_CRYPTODEV_NAME_MAX_LEN, + "QAT_SYM_CAPA_GEN_%d", + qat_pci_dev->qat_dev_gen); + + internals = cryptodev->data->dev_private; + internals->qat_dev = qat_pci_dev; + + internals->dev_id = cryptodev->data->dev_id; + + capa_info = gen_dev_ops->get_capabilities(qat_pci_dev); + capabilities = capa_info.data; + capa_size = capa_info.size; + + internals->capa_mz = rte_memzone_lookup(capa_memz_name); + if (internals->capa_mz == NULL) { + 
internals->capa_mz = rte_memzone_reserve(capa_memz_name, + capa_size, rte_socket_id(), 0); + if (internals->capa_mz == NULL) { + QAT_LOG(DEBUG, + "Error allocating memzone for capabilities, " + "destroying PMD for %s", + name); + ret = -EFAULT; + goto error; } } - qat_req->comn_mid.dst_length = - qat_req->comn_mid.src_length = - (cipher_param->cipher_offset + cipher_param->cipher_length) - > (auth_param->auth_off + auth_param->auth_len) ? - (cipher_param->cipher_offset + cipher_param->cipher_length) - : (auth_param->auth_off + auth_param->auth_len); - - if (do_auth && do_cipher) { - /* Handle digest-encrypted cases, i.e. - * auth-gen-then-cipher-encrypt and - * cipher-decrypt-then-auth-verify - */ - /* First find the end of the data */ - if (do_sgl) { - uint32_t remaining_off = auth_param->auth_off + - auth_param->auth_len + alignment_adjustment + oop_shift; - struct rte_mbuf *sgl_buf = - (in_place ? - op->sym->m_src : op->sym->m_dst); - - while (remaining_off >= rte_pktmbuf_data_len(sgl_buf) - && sgl_buf->next != NULL) { - remaining_off -= rte_pktmbuf_data_len(sgl_buf); - sgl_buf = sgl_buf->next; - } + memcpy(internals->capa_mz->addr, capabilities, capa_size); + internals->qat_dev_capabilities = internals->capa_mz->addr; - auth_data_end = (uint64_t)rte_pktmbuf_iova_offset( - sgl_buf, remaining_off); - } else { - auth_data_end = (in_place ? - src_buf_start : dst_buf_start) + - auth_param->auth_off + auth_param->auth_len; - } - /* Then check if digest-encrypted conditions are met */ - if ((auth_param->auth_off + auth_param->auth_len < - cipher_param->cipher_offset + - cipher_param->cipher_length) && - (op->sym->auth.digest.phys_addr == - auth_data_end)) { - /* Handle partial digest encryption */ - if (cipher_param->cipher_offset + - cipher_param->cipher_length < - auth_param->auth_off + - auth_param->auth_len + - ctx->digest_length) - qat_req->comn_mid.dst_length = - qat_req->comn_mid.src_length = - auth_param->auth_off + - auth_param->auth_len + - ctx->digest_length; - struct icp_qat_fw_comn_req_hdr *header = - &qat_req->comn_hdr; - ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET( - header->serv_specif_flags, - ICP_QAT_FW_LA_DIGEST_IN_BUFFER); - } + while (1) { + if (qat_dev_cmd_param[i].name == NULL) + break; + if (!strcmp(qat_dev_cmd_param[i].name, SYM_ENQ_THRESHOLD_NAME)) + internals->min_enq_burst_threshold = + qat_dev_cmd_param[i].val; + i++; } - if (do_sgl) { - - ICP_QAT_FW_COMN_PTR_TYPE_SET(qat_req->comn_hdr.comn_req_flags, - QAT_COMN_PTR_TYPE_SGL); - ret = qat_sgl_fill_array(op->sym->m_src, - (int64_t)(src_buf_start - rte_pktmbuf_iova(op->sym->m_src)), - &cookie->qat_sgl_src, - qat_req->comn_mid.src_length, - QAT_SYM_SGL_MAX_NUMBER); - - if (unlikely(ret)) { - QAT_DP_LOG(ERR, "QAT PMD Cannot fill sgl array"); - return ret; - } + internals->service_type = QAT_SERVICE_SYMMETRIC; + qat_pci_dev->sym_dev = internals; + QAT_LOG(DEBUG, "Created QAT SYM device %s as cryptodev instance %d", + cryptodev->data->name, internals->dev_id); - if (in_place) - qat_req->comn_mid.dest_data_addr = - qat_req->comn_mid.src_data_addr = - cookie->qat_sgl_src_phys_addr; - else { - ret = qat_sgl_fill_array(op->sym->m_dst, - (int64_t)(dst_buf_start - - rte_pktmbuf_iova(op->sym->m_dst)), - &cookie->qat_sgl_dst, - qat_req->comn_mid.dst_length, - QAT_SYM_SGL_MAX_NUMBER); - - if (unlikely(ret)) { - QAT_DP_LOG(ERR, "QAT PMD can't fill sgl array"); - return ret; - } + return 0; - qat_req->comn_mid.src_data_addr = - cookie->qat_sgl_src_phys_addr; - qat_req->comn_mid.dest_data_addr = - cookie->qat_sgl_dst_phys_addr; - } - 
qat_req->comn_mid.src_length = 0; - qat_req->comn_mid.dst_length = 0; - } else { - qat_req->comn_mid.src_data_addr = src_buf_start; - qat_req->comn_mid.dest_data_addr = dst_buf_start; - } +error: +#ifdef RTE_LIB_SECURITY + rte_free(cryptodev->security_ctx); + cryptodev->security_ctx = NULL; +#endif + rte_cryptodev_pmd_destroy(cryptodev); + memset(&qat_dev_instance->sym_rte_dev, 0, + sizeof(qat_dev_instance->sym_rte_dev)); - if (ctx->is_single_pass) { - if (ctx->is_ucs) { - /* GEN 4 */ - cipher_param20->spc_aad_addr = - op->sym->aead.aad.phys_addr; - cipher_param20->spc_auth_res_addr = - op->sym->aead.digest.phys_addr; - } else { - cipher_param->spc_aad_addr = - op->sym->aead.aad.phys_addr; - cipher_param->spc_auth_res_addr = - op->sym->aead.digest.phys_addr; - } - } else if (ctx->is_single_pass_gmac && - op->sym->auth.data.length <= QAT_AES_GMAC_SPC_MAX_SIZE) { - /* Handle Single-Pass AES-GMAC */ - handle_spc_gmac(ctx, op, cookie, qat_req); - } + return ret; +} -#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG - QAT_DP_HEXDUMP_LOG(DEBUG, "qat_req:", qat_req, - sizeof(struct icp_qat_fw_la_bulk_req)); - QAT_DP_HEXDUMP_LOG(DEBUG, "src_data:", - rte_pktmbuf_mtod(op->sym->m_src, uint8_t*), - rte_pktmbuf_data_len(op->sym->m_src)); - if (do_cipher) { - uint8_t *cipher_iv_ptr = rte_crypto_op_ctod_offset(op, - uint8_t *, - ctx->cipher_iv.offset); - QAT_DP_HEXDUMP_LOG(DEBUG, "cipher iv:", cipher_iv_ptr, - ctx->cipher_iv.length); - } +int +qat_sym_dev_destroy(struct qat_pci_device *qat_pci_dev) +{ + struct rte_cryptodev *cryptodev; - if (do_auth) { - if (ctx->auth_iv.length) { - uint8_t *auth_iv_ptr = rte_crypto_op_ctod_offset(op, - uint8_t *, - ctx->auth_iv.offset); - QAT_DP_HEXDUMP_LOG(DEBUG, "auth iv:", auth_iv_ptr, - ctx->auth_iv.length); - } - QAT_DP_HEXDUMP_LOG(DEBUG, "digest:", op->sym->auth.digest.data, - ctx->digest_length); - } + if (qat_pci_dev == NULL) + return -ENODEV; + if (qat_pci_dev->sym_dev == NULL) + return 0; + if (rte_eal_process_type() == RTE_PROC_PRIMARY) + rte_memzone_free(qat_pci_dev->sym_dev->capa_mz); - if (do_aead) { - QAT_DP_HEXDUMP_LOG(DEBUG, "digest:", op->sym->aead.digest.data, - ctx->digest_length); - QAT_DP_HEXDUMP_LOG(DEBUG, "aad:", op->sym->aead.aad.data, - ctx->aad_len); - } + /* free crypto device */ + cryptodev = rte_cryptodev_pmd_get_dev(qat_pci_dev->sym_dev->dev_id); +#ifdef RTE_LIB_SECURITY + rte_free(cryptodev->security_ctx); + cryptodev->security_ctx = NULL; #endif + rte_cryptodev_pmd_destroy(cryptodev); + qat_pci_devs[qat_pci_dev->qat_dev_id].sym_rte_dev.name = NULL; + qat_pci_dev->sym_dev = NULL; + return 0; } + +static struct cryptodev_driver qat_crypto_drv; +RTE_PMD_REGISTER_CRYPTO_DRIVER(qat_crypto_drv, + cryptodev_qat_sym_driver, + qat_sym_driver_id); diff --git a/drivers/crypto/qat/qat_sym.h b/drivers/crypto/qat/qat_sym.h index 87e23bb1d1..278abbfd3a 100644 --- a/drivers/crypto/qat/qat_sym.h +++ b/drivers/crypto/qat/qat_sym.h @@ -130,10 +130,6 @@ uint16_t qat_sym_dequeue_burst(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops); -int -qat_sym_build_request(void *in_op, uint8_t *out_msg, - void *op_cookie, enum qat_device_gen qat_dev_gen); - /** Encrypt a single partial block * Depends on openssl libcrypto * Uses ECB+XOR to do CFB encryption, same result, more performant diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c index 933ac89569..27cdeb1de2 100644 --- a/drivers/crypto/qat/qat_sym_session.c +++ b/drivers/crypto/qat/qat_sym_session.c @@ -600,11 +600,11 @@ qat_sym_session_handle_single_pass(struct qat_sym_session 
*session, session->is_auth = 1; session->qat_cmd = ICP_QAT_FW_LA_CMD_CIPHER; /* Chacha-Poly is special case that use QAT CTR mode */ - if (aead_xform->algo == RTE_CRYPTO_AEAD_AES_GCM) { + if (aead_xform->algo == RTE_CRYPTO_AEAD_AES_GCM) session->qat_mode = ICP_QAT_HW_CIPHER_AEAD_MODE; - } else { + else session->qat_mode = ICP_QAT_HW_CIPHER_CTR_MODE; - } + session->cipher_iv.offset = aead_xform->iv.offset; session->cipher_iv.length = aead_xform->iv.length; session->aad_len = aead_xform->aad_length; diff --git a/drivers/crypto/qat/qat_sym_session.h b/drivers/crypto/qat/qat_sym_session.h index 581bce3790..e1b5edb2f9 100644 --- a/drivers/crypto/qat/qat_sym_session.h +++ b/drivers/crypto/qat/qat_sym_session.h @@ -55,6 +55,11 @@ #define QAT_SESSION_IS_SLICE_SET(flags, flag) \ (!!((flags) & (flag))) +struct qat_sym_session; + +typedef int (*qat_sym_build_request_t)(void *in_op, struct qat_sym_session *ctx, + uint8_t *out_msg, void *op_cookie); + enum qat_sym_proto_flag { QAT_CRYPTO_PROTO_FLAG_NONE = 0, QAT_CRYPTO_PROTO_FLAG_CCM = 1, @@ -63,11 +68,6 @@ enum qat_sym_proto_flag { QAT_CRYPTO_PROTO_FLAG_ZUC = 4 }; -struct qat_sym_session; - -typedef int (*qat_sym_build_request_t)(void *in_op, struct qat_sym_session *ctx, - uint8_t *out_msg, void *op_cookie); - /* Common content descriptor */ struct qat_sym_cd { struct icp_qat_hw_cipher_algo_blk cipher; From patchwork Fri Nov 5 00:19:29 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ji, Kai" X-Patchwork-Id: 103818 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 8EE54A0C61; Fri, 5 Nov 2021 01:20:33 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 51E9A427D4; Fri, 5 Nov 2021 01:19:53 +0100 (CET) Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by mails.dpdk.org (Postfix) with ESMTP id 7A0DB427B3 for ; Fri, 5 Nov 2021 01:19:48 +0100 (CET) X-IronPort-AV: E=McAfee;i="6200,9189,10158"; a="212563735" X-IronPort-AV: E=Sophos;i="5.87,209,1631602800"; d="scan'208";a="212563735" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Nov 2021 17:19:47 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.87,209,1631602800"; d="scan'208";a="468671820" Received: from silpixa00400272.ir.intel.com (HELO silpixa00400272.ger.corp.intel.com) ([10.237.223.111]) by orsmga002.jf.intel.com with ESMTP; 04 Nov 2021 17:19:46 -0700 From: Kai Ji To: dev@dpdk.org Cc: gakhil@marvell.com, Kai Ji Date: Fri, 5 Nov 2021 00:19:29 +0000 Message-Id: <20211105001932.28784-9-kai.ji@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211105001932.28784-1-kai.ji@intel.com> References: <20211102134913.21148-1-kai.ji@intel.com> <20211105001932.28784-1-kai.ji@intel.com> Subject: [dpdk-dev] [dpdk-dev v4 08/11] compress/qat: comp dequeue burst update X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch update the newly comp dequeue process response function pointer introduced in dequeue burst Signed-off-by: Kai Ji --- drivers/compress/qat/qat_comp_pmd.c | 12 +++++++++--- 1 file changed, 9 insertions(+), 3 
deletions(-) diff --git a/drivers/compress/qat/qat_comp_pmd.c b/drivers/compress/qat/qat_comp_pmd.c index 31de526056..2f17f8825e 100644 --- a/drivers/compress/qat/qat_comp_pmd.c +++ b/drivers/compress/qat/qat_comp_pmd.c @@ -616,11 +616,18 @@ static struct rte_compressdev_ops compress_qat_dummy_ops = { .private_xform_free = qat_comp_private_xform_free }; +static uint16_t +qat_comp_dequeue_burst(void *qp, struct rte_comp_op **ops, uint16_t nb_ops) +{ + return qat_dequeue_op_burst(qp, (void **)ops, qat_comp_process_response, + nb_ops); +} + static uint16_t qat_comp_pmd_dequeue_first_op_burst(void *qp, struct rte_comp_op **ops, uint16_t nb_ops) { - uint16_t ret = qat_dequeue_op_burst(qp, (void **)ops, NULL, nb_ops); + uint16_t ret = qat_comp_dequeue_burst(qp, ops, nb_ops); struct qat_qp *tmp_qp = (struct qat_qp *)qp; if (ret) { @@ -638,8 +645,7 @@ qat_comp_pmd_dequeue_first_op_burst(void *qp, struct rte_comp_op **ops, } else { tmp_qp->qat_dev->comp_dev->compressdev->dequeue_burst = - (compressdev_dequeue_pkt_burst_t) - qat_dequeue_op_burst; + qat_comp_dequeue_burst; } } return ret; From patchwork Fri Nov 5 00:19:30 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ji, Kai" X-Patchwork-Id: 103819 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 1BAAEA0C61; Fri, 5 Nov 2021 01:20:40 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id B8EEC427C9; Fri, 5 Nov 2021 01:19:54 +0100 (CET) Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by mails.dpdk.org (Postfix) with ESMTP id 8888E427B8 for ; Fri, 5 Nov 2021 01:19:50 +0100 (CET) X-IronPort-AV: E=McAfee;i="6200,9189,10158"; a="212563739" X-IronPort-AV: E=Sophos;i="5.87,209,1631602800"; d="scan'208";a="212563739" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Nov 2021 17:19:49 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.87,209,1631602800"; d="scan'208";a="468671825" Received: from silpixa00400272.ir.intel.com (HELO silpixa00400272.ger.corp.intel.com) ([10.237.223.111]) by orsmga002.jf.intel.com with ESMTP; 04 Nov 2021 17:19:47 -0700 From: Kai Ji To: dev@dpdk.org Cc: gakhil@marvell.com, Kai Ji Date: Fri, 5 Nov 2021 00:19:30 +0000 Message-Id: <20211105001932.28784-10-kai.ji@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211105001932.28784-1-kai.ji@intel.com> References: <20211102134913.21148-1-kai.ji@intel.com> <20211105001932.28784-1-kai.ji@intel.com> Subject: [dpdk-dev] [dpdk-dev v4 09/11] crypto/qat: raw dp api integration X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch unitfy all raw dp api into the qat pmd driver, some qat generation specific are implemented respectively. The qat_sym_hw_dp.c is removed as no longer required. 
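
As an illustrative sketch only (not part of this patch): an application drives the path these callbacks serve through the generic rte_cryptodev raw data-path API. The snippet below assumes dev_id and qp_id are already configured, that sess is an initialised QAT symmetric cipher session, that data and iv describe valid buffers, and that the experimental raw DP API is enabled at build time; error handling is trimmed.

#include <rte_cryptodev.h>
#include <rte_malloc.h>
#include <rte_pause.h>

/*
 * Sketch: enqueue one cipher job through the raw data-path API and
 * poll for its completion.
 */
static int
raw_dp_cipher_one(uint8_t dev_id, uint16_t qp_id,
		struct rte_cryptodev_sym_session *sess,
		struct rte_crypto_vec *data,
		struct rte_crypto_va_iova_ptr *iv)
{
	union rte_cryptodev_session_ctx sess_ctx = { .crypto_sess = sess };
	union rte_crypto_sym_ofs ofs = { .raw = 0 };
	enum rte_crypto_op_status op_status;
	struct rte_crypto_raw_dp_ctx *dp_ctx;
	int ctx_size, deq_status, ret = -1;

	/* The PMD reports how large its private raw DP context is. */
	ctx_size = rte_cryptodev_get_raw_dp_ctx_size(dev_id);
	if (ctx_size < 0)
		return -1;
	dp_ctx = rte_zmalloc(NULL, ctx_size, 0);
	if (dp_ctx == NULL)
		return -1;

	/* Bind the context to one queue pair and one session. */
	if (rte_cryptodev_configure_raw_dp_ctx(dev_id, qp_id, dp_ctx,
			RTE_CRYPTO_OP_WITH_SESSION, sess_ctx, 0) < 0)
		goto out;

	/* Enqueue a single job, then flush the cached tail to the device. */
	if (rte_cryptodev_raw_enqueue(dp_ctx, data, 1, ofs, iv,
			NULL, NULL, (void *)(uintptr_t)0x1) < 0)
		goto out;
	if (rte_cryptodev_raw_enqueue_done(dp_ctx, 1) < 0)
		goto out;

	/* Poll until the response arrives, then advance the ring head. */
	while (rte_cryptodev_raw_dequeue(dp_ctx, &deq_status,
			&op_status) == NULL)
		rte_pause();
	rte_cryptodev_raw_dequeue_done(dp_ctx, 1);

	ret = (op_status == RTE_CRYPTO_OP_STATUS_SUCCESS) ? 0 : -1;
out:
	rte_free(dp_ctx);
	return ret;
}

In the diff below, rte_cryptodev_configure_raw_dp_ctx() lands in qat_sym_configure_dp_ctx(), which selects the generation-specific enqueue/dequeue callbacks via set_raw_dp_ctx, while enqueue_done/dequeue_done map to the gen1 helpers that write the ring tail and head CSRs.
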
Signed-off-by: Kai Ji --- drivers/common/qat/meson.build | 2 +- drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c | 2 + drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c | 214 ++++ drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c | 122 +++ drivers/crypto/qat/dev/qat_crypto_pmd_gens.h | 78 ++ drivers/crypto/qat/dev/qat_sym_pmd_gen1.c | 656 +++++++++++++ drivers/crypto/qat/qat_crypto.h | 3 + drivers/crypto/qat/qat_sym.c | 50 + drivers/crypto/qat/qat_sym_hw_dp.c | 974 ------------------- 9 files changed, 1126 insertions(+), 975 deletions(-) delete mode 100644 drivers/crypto/qat/qat_sym_hw_dp.c diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build index 25c15bebea..e53b51dcac 100644 --- a/drivers/common/qat/meson.build +++ b/drivers/common/qat/meson.build @@ -71,7 +71,7 @@ endif if qat_crypto foreach f: ['qat_sym.c', 'qat_sym_session.c', - 'qat_sym_hw_dp.c', 'qat_asym.c', 'qat_crypto.c', + 'qat_asym.c', 'qat_crypto.c', 'dev/qat_sym_pmd_gen1.c', 'dev/qat_asym_pmd_gen1.c', 'dev/qat_crypto_pmd_gen2.c', diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c index 9320eb5adf..d88f17f0db 100644 --- a/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c +++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c @@ -290,6 +290,8 @@ RTE_INIT(qat_sym_crypto_gen2_init) qat_sym_crypto_cap_get_gen2; qat_sym_gen_dev_ops[QAT_GEN2].set_session = qat_sym_crypto_set_session_gen2; + qat_sym_gen_dev_ops[QAT_GEN2].set_raw_dp_ctx = + qat_sym_configure_raw_dp_ctx_gen1; qat_sym_gen_dev_ops[QAT_GEN2].get_feature_flags = qat_sym_crypto_feature_flags_get_gen1; #ifdef RTE_LIB_SECURITY diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c index 6baf1810ed..6494019050 100644 --- a/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c +++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c @@ -393,6 +393,218 @@ qat_sym_crypto_set_session_gen3(void *cdev __rte_unused, void *session) return 0; } +static int +qat_sym_dp_enqueue_single_aead_gen3(void *qp_data, uint8_t *drv_ctx, + struct rte_crypto_vec *data, uint16_t n_data_vecs, + union rte_crypto_sym_ofs ofs, + struct rte_crypto_va_iova_ptr *iv, + struct rte_crypto_va_iova_ptr *digest, + struct rte_crypto_va_iova_ptr *aad, + void *user_data) +{ + struct qat_qp *qp = qp_data; + struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx; + struct qat_queue *tx_queue = &qp->tx_q; + struct qat_sym_op_cookie *cookie; + struct qat_sym_session *ctx = dp_ctx->session; + struct icp_qat_fw_la_bulk_req *req; + + int32_t data_len; + uint32_t tail = dp_ctx->tail; + + req = (struct icp_qat_fw_la_bulk_req *)( + (uint8_t *)tx_queue->base_addr + tail); + cookie = qp->op_cookies[tail >> tx_queue->trailz]; + tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; + rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); + rte_prefetch0((uint8_t *)tx_queue->base_addr + tail); + data_len = qat_sym_build_req_set_data(req, user_data, cookie, + data, n_data_vecs, NULL, 0); + if (unlikely(data_len < 0)) + return -1; + + enqueue_one_aead_job_gen3(ctx, req, iv, digest, aad, ofs, + (uint32_t)data_len); + + dp_ctx->tail = tail; + dp_ctx->cached_enqueue++; + +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + qat_sym_debug_log_dump(req, ctx, data, n_data_vecs, iv, + NULL, aad, digest); +#endif + return 0; +} + +static uint32_t +qat_sym_dp_enqueue_aead_jobs_gen3(void *qp_data, uint8_t *drv_ctx, + struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs, + void *user_data[], int *status) +{ + struct qat_qp *qp = qp_data; + struct 
qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx; + struct qat_queue *tx_queue = &qp->tx_q; + struct qat_sym_session *ctx = dp_ctx->session; + uint32_t i, n; + uint32_t tail; + struct icp_qat_fw_la_bulk_req *req; + int32_t data_len; + + n = QAT_SYM_DP_GET_MAX_ENQ(qp, dp_ctx->cached_enqueue, vec->num); + if (unlikely(n == 0)) { + qat_sym_dp_fill_vec_status(vec->status, -1, vec->num); + *status = 0; + return 0; + } + + tail = dp_ctx->tail; + + for (i = 0; i < n; i++) { + struct qat_sym_op_cookie *cookie = + qp->op_cookies[tail >> tx_queue->trailz]; + + req = (struct icp_qat_fw_la_bulk_req *)( + (uint8_t *)tx_queue->base_addr + tail); + rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); + + data_len = qat_sym_build_req_set_data(req, user_data[i], cookie, + vec->src_sgl[i].vec, vec->src_sgl[i].num, NULL, 0); + if (unlikely(data_len < 0)) + break; + + enqueue_one_aead_job_gen3(ctx, req, &vec->iv[i], + &vec->digest[i], &vec->aad[i], ofs, + (uint32_t)data_len); + + tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; + +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + qat_sym_debug_log_dump(req, ctx, vec->src_sgl[i].vec, + vec->src_sgl[i].num, &vec->iv[i], NULL, + &vec->aad[i], &vec->digest[i]); +#endif + } + + if (unlikely(i < n)) + qat_sym_dp_fill_vec_status(vec->status + i, -1, n - i); + + dp_ctx->tail = tail; + dp_ctx->cached_enqueue += i; + *status = 0; + return i; +} + +static int +qat_sym_dp_enqueue_single_auth_gen3(void *qp_data, uint8_t *drv_ctx, + struct rte_crypto_vec *data, uint16_t n_data_vecs, + union rte_crypto_sym_ofs ofs, + struct rte_crypto_va_iova_ptr *iv __rte_unused, + struct rte_crypto_va_iova_ptr *digest, + struct rte_crypto_va_iova_ptr *auth_iv, + void *user_data) +{ + struct qat_qp *qp = qp_data; + struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx; + struct qat_queue *tx_queue = &qp->tx_q; + struct qat_sym_op_cookie *cookie; + struct qat_sym_session *ctx = dp_ctx->session; + struct icp_qat_fw_la_bulk_req *req; + int32_t data_len; + uint32_t tail = dp_ctx->tail; + + req = (struct icp_qat_fw_la_bulk_req *)( + (uint8_t *)tx_queue->base_addr + tail); + cookie = qp->op_cookies[tail >> tx_queue->trailz]; + tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; + + rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); + rte_prefetch0((uint8_t *)tx_queue->base_addr + tail); + data_len = qat_sym_build_req_set_data(req, user_data, cookie, + data, n_data_vecs, NULL, 0); + if (unlikely(data_len < 0)) + return -1; + + enqueue_one_auth_job_gen3(ctx, cookie, req, digest, auth_iv, ofs, + (uint32_t)data_len); + + dp_ctx->tail = tail; + dp_ctx->cached_enqueue++; + + return 0; +} + +static uint32_t +qat_sym_dp_enqueue_auth_jobs_gen3(void *qp_data, uint8_t *drv_ctx, + struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs, + void *user_data[], int *status) +{ + struct qat_qp *qp = qp_data; + struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx; + struct qat_queue *tx_queue = &qp->tx_q; + struct qat_sym_session *ctx = dp_ctx->session; + uint32_t i, n; + uint32_t tail; + struct icp_qat_fw_la_bulk_req *req; + int32_t data_len; + + n = QAT_SYM_DP_GET_MAX_ENQ(qp, dp_ctx->cached_enqueue, vec->num); + if (unlikely(n == 0)) { + qat_sym_dp_fill_vec_status(vec->status, -1, vec->num); + *status = 0; + return 0; + } + + tail = dp_ctx->tail; + + for (i = 0; i < n; i++) { + struct qat_sym_op_cookie *cookie = + qp->op_cookies[tail >> tx_queue->trailz]; + + req = (struct icp_qat_fw_la_bulk_req *)( + (uint8_t *)tx_queue->base_addr + tail); + rte_mov128((uint8_t *)req, (const uint8_t 
*)&(ctx->fw_req)); + + data_len = qat_sym_build_req_set_data(req, user_data[i], cookie, + vec->src_sgl[i].vec, vec->src_sgl[i].num, NULL, 0); + if (unlikely(data_len < 0)) + break; + enqueue_one_auth_job_gen3(ctx, cookie, req, &vec->digest[i], + &vec->auth_iv[i], ofs, (uint32_t)data_len); + tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; + } + + if (unlikely(i < n)) + qat_sym_dp_fill_vec_status(vec->status + i, -1, n - i); + + dp_ctx->tail = tail; + dp_ctx->cached_enqueue += i; + *status = 0; + return i; +} + +static int +qat_sym_configure_raw_dp_ctx_gen3(void *_raw_dp_ctx, void *_ctx) +{ + struct rte_crypto_raw_dp_ctx *raw_dp_ctx = _raw_dp_ctx; + struct qat_sym_session *ctx = _ctx; + int ret; + + ret = qat_sym_configure_raw_dp_ctx_gen1(_raw_dp_ctx, _ctx); + if (ret < 0) + return ret; + + if (ctx->is_single_pass) { + raw_dp_ctx->enqueue_burst = qat_sym_dp_enqueue_aead_jobs_gen3; + raw_dp_ctx->enqueue = qat_sym_dp_enqueue_single_aead_gen3; + } else if (ctx->is_single_pass_gmac) { + raw_dp_ctx->enqueue_burst = qat_sym_dp_enqueue_auth_jobs_gen3; + raw_dp_ctx->enqueue = qat_sym_dp_enqueue_single_auth_gen3; + } + + return 0; +} + + RTE_INIT(qat_sym_crypto_gen3_init) { qat_sym_gen_dev_ops[QAT_GEN3].cryptodev_ops = &qat_sym_crypto_ops_gen1; @@ -402,6 +614,8 @@ RTE_INIT(qat_sym_crypto_gen3_init) qat_sym_crypto_feature_flags_get_gen1; qat_sym_gen_dev_ops[QAT_GEN3].set_session = qat_sym_crypto_set_session_gen3; + qat_sym_gen_dev_ops[QAT_GEN3].set_raw_dp_ctx = + qat_sym_configure_raw_dp_ctx_gen3; #ifdef RTE_LIB_SECURITY qat_sym_gen_dev_ops[QAT_GEN3].create_security_ctx = qat_sym_create_security_gen1; diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c index fa6a7ee2fd..167c95abcf 100644 --- a/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c +++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c @@ -221,6 +221,126 @@ qat_sym_crypto_set_session_gen4(void *cdev, void *session) return 0; } +static int +qat_sym_dp_enqueue_single_aead_gen4(void *qp_data, uint8_t *drv_ctx, + struct rte_crypto_vec *data, uint16_t n_data_vecs, + union rte_crypto_sym_ofs ofs, + struct rte_crypto_va_iova_ptr *iv, + struct rte_crypto_va_iova_ptr *digest, + struct rte_crypto_va_iova_ptr *aad, + void *user_data) +{ + struct qat_qp *qp = qp_data; + struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx; + struct qat_queue *tx_queue = &qp->tx_q; + struct qat_sym_op_cookie *cookie; + struct qat_sym_session *ctx = dp_ctx->session; + struct icp_qat_fw_la_bulk_req *req; + + int32_t data_len; + uint32_t tail = dp_ctx->tail; + + req = (struct icp_qat_fw_la_bulk_req *)( + (uint8_t *)tx_queue->base_addr + tail); + cookie = qp->op_cookies[tail >> tx_queue->trailz]; + tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; + rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); + rte_prefetch0((uint8_t *)tx_queue->base_addr + tail); + data_len = qat_sym_build_req_set_data(req, user_data, cookie, + data, n_data_vecs, NULL, 0); + if (unlikely(data_len < 0)) + return -1; + + enqueue_one_aead_job_gen4(ctx, req, iv, digest, aad, ofs, + (uint32_t)data_len); + + dp_ctx->tail = tail; + dp_ctx->cached_enqueue++; + +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + qat_sym_debug_log_dump(req, ctx, data, n_data_vecs, iv, + NULL, aad, digest); +#endif + return 0; +} + +static uint32_t +qat_sym_dp_enqueue_aead_jobs_gen4(void *qp_data, uint8_t *drv_ctx, + struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs, + void *user_data[], int *status) +{ + struct qat_qp *qp = qp_data; + struct qat_sym_dp_ctx 
*dp_ctx = (void *)drv_ctx; + struct qat_queue *tx_queue = &qp->tx_q; + struct qat_sym_session *ctx = dp_ctx->session; + uint32_t i, n; + uint32_t tail; + struct icp_qat_fw_la_bulk_req *req; + int32_t data_len; + + n = QAT_SYM_DP_GET_MAX_ENQ(qp, dp_ctx->cached_enqueue, vec->num); + if (unlikely(n == 0)) { + qat_sym_dp_fill_vec_status(vec->status, -1, vec->num); + *status = 0; + return 0; + } + + tail = dp_ctx->tail; + + for (i = 0; i < n; i++) { + struct qat_sym_op_cookie *cookie = + qp->op_cookies[tail >> tx_queue->trailz]; + + req = (struct icp_qat_fw_la_bulk_req *)( + (uint8_t *)tx_queue->base_addr + tail); + rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); + + data_len = qat_sym_build_req_set_data(req, user_data[i], cookie, + vec->src_sgl[i].vec, vec->src_sgl[i].num, NULL, 0); + if (unlikely(data_len < 0)) + break; + + enqueue_one_aead_job_gen4(ctx, req, &vec->iv[i], + &vec->digest[i], &vec->aad[i], ofs, + (uint32_t)data_len); + + tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; + +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + qat_sym_debug_log_dump(req, ctx, vec->src_sgl[i].vec, + vec->src_sgl[i].num, &vec->iv[i], NULL, + &vec->aad[i], &vec->digest[i]); +#endif + } + + if (unlikely(i < n)) + qat_sym_dp_fill_vec_status(vec->status + i, -1, n - i); + + dp_ctx->tail = tail; + dp_ctx->cached_enqueue += i; + *status = 0; + return i; +} + +static int +qat_sym_configure_raw_dp_ctx_gen4(void *_raw_dp_ctx, void *_ctx) +{ + struct rte_crypto_raw_dp_ctx *raw_dp_ctx = _raw_dp_ctx; + struct qat_sym_session *ctx = _ctx; + int ret; + + ret = qat_sym_configure_raw_dp_ctx_gen1(_raw_dp_ctx, _ctx); + if (ret < 0) + return ret; + + if (ctx->is_single_pass && ctx->is_ucs) { + raw_dp_ctx->enqueue_burst = qat_sym_dp_enqueue_aead_jobs_gen4; + raw_dp_ctx->enqueue = qat_sym_dp_enqueue_single_aead_gen4; + } + + return 0; +} + RTE_INIT(qat_sym_crypto_gen4_init) { qat_sym_gen_dev_ops[QAT_GEN4].cryptodev_ops = &qat_sym_crypto_ops_gen1; @@ -228,6 +348,8 @@ RTE_INIT(qat_sym_crypto_gen4_init) qat_sym_crypto_cap_get_gen4; qat_sym_gen_dev_ops[QAT_GEN4].set_session = qat_sym_crypto_set_session_gen4; + qat_sym_gen_dev_ops[QAT_GEN4].set_raw_dp_ctx = + qat_sym_configure_raw_dp_ctx_gen4; qat_sym_gen_dev_ops[QAT_GEN4].get_feature_flags = qat_sym_crypto_feature_flags_get_gen1; #ifdef RTE_LIB_SECURITY diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h index 6277211b35..b69d4d9091 100644 --- a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h +++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h @@ -839,6 +839,84 @@ int qat_sym_build_op_chain_gen1(void *in_op, struct qat_sym_session *ctx, uint8_t *out_msg, void *op_cookie); +/* -----------------GEN 1 sym crypto raw data path APIs ---------------- */ +int +qat_sym_dp_enqueue_single_cipher_gen1(void *qp_data, uint8_t *drv_ctx, + struct rte_crypto_vec *data, uint16_t n_data_vecs, + union rte_crypto_sym_ofs ofs, + struct rte_crypto_va_iova_ptr *iv, + struct rte_crypto_va_iova_ptr *digest __rte_unused, + struct rte_crypto_va_iova_ptr *aad __rte_unused, + void *user_data); + +uint32_t +qat_sym_dp_enqueue_cipher_jobs_gen1(void *qp_data, uint8_t *drv_ctx, + struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs, + void *user_data[], int *status); + +int +qat_sym_dp_enqueue_single_auth_gen1(void *qp_data, uint8_t *drv_ctx, + struct rte_crypto_vec *data, uint16_t n_data_vecs, + union rte_crypto_sym_ofs ofs, + struct rte_crypto_va_iova_ptr *iv __rte_unused, + struct rte_crypto_va_iova_ptr *digest, + struct 
rte_crypto_va_iova_ptr *auth_iv, + void *user_data); + +uint32_t +qat_sym_dp_enqueue_auth_jobs_gen1(void *qp_data, uint8_t *drv_ctx, + struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs, + void *user_data[], int *status); + +int +qat_sym_dp_enqueue_single_chain_gen1(void *qp_data, uint8_t *drv_ctx, + struct rte_crypto_vec *data, uint16_t n_data_vecs, + union rte_crypto_sym_ofs ofs, + struct rte_crypto_va_iova_ptr *cipher_iv, + struct rte_crypto_va_iova_ptr *digest, + struct rte_crypto_va_iova_ptr *auth_iv, + void *user_data); + +uint32_t +qat_sym_dp_enqueue_chain_jobs_gen1(void *qp_data, uint8_t *drv_ctx, + struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs, + void *user_data[], int *status); + +int +qat_sym_dp_enqueue_single_aead_gen1(void *qp_data, uint8_t *drv_ctx, + struct rte_crypto_vec *data, uint16_t n_data_vecs, + union rte_crypto_sym_ofs ofs, + struct rte_crypto_va_iova_ptr *iv, + struct rte_crypto_va_iova_ptr *digest, + struct rte_crypto_va_iova_ptr *aad, + void *user_data); + +uint32_t +qat_sym_dp_enqueue_aead_jobs_gen1(void *qp_data, uint8_t *drv_ctx, + struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs, + void *user_data[], int *status); + +void * +qat_sym_dp_dequeue_single_gen1(void *qp_data, uint8_t *drv_ctx, + int *dequeue_status, enum rte_crypto_op_status *op_status); + +uint32_t +qat_sym_dp_dequeue_burst_gen1(void *qp_data, uint8_t *drv_ctx, + rte_cryptodev_raw_get_dequeue_count_t get_dequeue_count, + uint32_t max_nb_to_dequeue, + rte_cryptodev_raw_post_dequeue_t post_dequeue, + void **out_user_data, uint8_t is_user_data_array, + uint32_t *n_success_jobs, int *return_status); + +int +qat_sym_dp_enqueue_done_gen1(void *qp_data, uint8_t *drv_ctx, uint32_t n); + +int +qat_sym_dp_denqueue_done_gen1(void *qp_data, uint8_t *drv_ctx, uint32_t n); + +int +qat_sym_configure_raw_dp_ctx_gen1(void *_raw_dp_ctx, void *_ctx); + /* -----------------GENx control path APIs ---------------- */ uint64_t qat_sym_crypto_feature_flags_get_gen1(struct qat_pci_device *qat_dev); diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c index 14fc9d7f50..b044c7b2d3 100644 --- a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c +++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c @@ -148,6 +148,10 @@ struct rte_cryptodev_ops qat_sym_crypto_ops_gen1 = { .sym_session_get_size = qat_sym_session_get_private_size, .sym_session_configure = qat_sym_session_configure, .sym_session_clear = qat_sym_session_clear, + + /* Raw data-path API related operations */ + .sym_get_raw_dp_ctx_size = qat_sym_get_dp_ctx_size, + .sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx, }; static struct qat_capabilities_info @@ -451,6 +455,656 @@ qat_sym_build_op_chain_gen1(void *in_op, struct qat_sym_session *ctx, return 0; } +int +qat_sym_dp_enqueue_single_cipher_gen1(void *qp_data, uint8_t *drv_ctx, + struct rte_crypto_vec *data, uint16_t n_data_vecs, + union rte_crypto_sym_ofs ofs, + struct rte_crypto_va_iova_ptr *iv, + struct rte_crypto_va_iova_ptr *digest __rte_unused, + struct rte_crypto_va_iova_ptr *aad __rte_unused, + void *user_data) +{ + struct qat_qp *qp = qp_data; + struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx; + struct qat_queue *tx_queue = &qp->tx_q; + struct qat_sym_session *ctx = dp_ctx->session; + struct qat_sym_op_cookie *cookie; + struct icp_qat_fw_la_bulk_req *req; + int32_t data_len; + uint32_t tail = dp_ctx->tail; + + req = (struct icp_qat_fw_la_bulk_req *)( + (uint8_t *)tx_queue->base_addr + tail); + cookie = qp->op_cookies[tail >> 
tx_queue->trailz]; + tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; + rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); + rte_prefetch0((uint8_t *)tx_queue->base_addr + tail); + + data_len = qat_sym_build_req_set_data(req, user_data, cookie, + data, n_data_vecs, NULL, 0); + if (unlikely(data_len < 0)) + return -1; + + enqueue_one_cipher_job_gen1(ctx, req, iv, ofs, (uint32_t)data_len); + +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + qat_sym_debug_log_dump(req, ctx, data, n_data_vecs, &iv, + NULL, NULL, NULL); +#endif + + dp_ctx->tail = tail; + dp_ctx->cached_enqueue++; + + return 0; +} + +uint32_t +qat_sym_dp_enqueue_cipher_jobs_gen1(void *qp_data, uint8_t *drv_ctx, + struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs, + void *user_data[], int *status) +{ + struct qat_qp *qp = qp_data; + struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx; + struct qat_queue *tx_queue = &qp->tx_q; + struct qat_sym_session *ctx = dp_ctx->session; + uint32_t i, n; + uint32_t tail; + struct icp_qat_fw_la_bulk_req *req; + int32_t data_len; + + n = QAT_SYM_DP_GET_MAX_ENQ(qp, dp_ctx->cached_enqueue, vec->num); + if (unlikely(n == 0)) { + qat_sym_dp_fill_vec_status(vec->status, -1, vec->num); + *status = 0; + return 0; + } + + tail = dp_ctx->tail; + + for (i = 0; i < n; i++) { + struct qat_sym_op_cookie *cookie = + qp->op_cookies[tail >> tx_queue->trailz]; + + req = (struct icp_qat_fw_la_bulk_req *)( + (uint8_t *)tx_queue->base_addr + tail); + rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); + + data_len = qat_sym_build_req_set_data(req, user_data[i], + cookie, vec->src_sgl[i].vec, + vec->src_sgl[i].num, NULL, 0); + if (unlikely(data_len < 0)) + break; + enqueue_one_cipher_job_gen1(ctx, req, &vec->iv[i], ofs, + (uint32_t)data_len); + tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; + +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + qat_sym_debug_log_dump(req, ctx, vec->src_sgl[i].vec, + vec->src_sgl[i].num, &vec->iv[i], + NULL, NULL, NULL); +#endif + } + + if (unlikely(i < n)) + qat_sym_dp_fill_vec_status(vec->status + i, -1, n - i); + + dp_ctx->tail = tail; + dp_ctx->cached_enqueue += i; + *status = 0; + return i; +} + +int +qat_sym_dp_enqueue_single_auth_gen1(void *qp_data, uint8_t *drv_ctx, + struct rte_crypto_vec *data, uint16_t n_data_vecs, + union rte_crypto_sym_ofs ofs, + struct rte_crypto_va_iova_ptr *iv __rte_unused, + struct rte_crypto_va_iova_ptr *digest, + struct rte_crypto_va_iova_ptr *auth_iv, + void *user_data) +{ + struct qat_qp *qp = qp_data; + struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx; + struct qat_queue *tx_queue = &qp->tx_q; + struct qat_sym_op_cookie *cookie; + struct qat_sym_session *ctx = dp_ctx->session; + struct icp_qat_fw_la_bulk_req *req; + int32_t data_len; + uint32_t tail = dp_ctx->tail; + + req = (struct icp_qat_fw_la_bulk_req *)( + (uint8_t *)tx_queue->base_addr + tail); + cookie = qp->op_cookies[tail >> tx_queue->trailz]; + tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; + + rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); + rte_prefetch0((uint8_t *)tx_queue->base_addr + tail); + data_len = qat_sym_build_req_set_data(req, user_data, cookie, + data, n_data_vecs, NULL, 0); + if (unlikely(data_len < 0)) + return -1; + + enqueue_one_auth_job_gen1(ctx, req, digest, auth_iv, ofs, + (uint32_t)data_len); + + dp_ctx->tail = tail; + dp_ctx->cached_enqueue++; + +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + qat_sym_debug_log_dump(req, ctx, data, n_data_vecs, NULL, + auth_iv, NULL, digest); +#endif + return 0; +} + +uint32_t 
+qat_sym_dp_enqueue_auth_jobs_gen1(void *qp_data, uint8_t *drv_ctx, + struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs, + void *user_data[], int *status) +{ + struct qat_qp *qp = qp_data; + struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx; + struct qat_queue *tx_queue = &qp->tx_q; + struct qat_sym_session *ctx = dp_ctx->session; + uint32_t i, n; + uint32_t tail; + struct icp_qat_fw_la_bulk_req *req; + int32_t data_len; + + n = QAT_SYM_DP_GET_MAX_ENQ(qp, dp_ctx->cached_enqueue, vec->num); + if (unlikely(n == 0)) { + qat_sym_dp_fill_vec_status(vec->status, -1, vec->num); + *status = 0; + return 0; + } + + tail = dp_ctx->tail; + + for (i = 0; i < n; i++) { + struct qat_sym_op_cookie *cookie = + qp->op_cookies[tail >> tx_queue->trailz]; + + req = (struct icp_qat_fw_la_bulk_req *)( + (uint8_t *)tx_queue->base_addr + tail); + rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); + + data_len = qat_sym_build_req_set_data(req, user_data[i], cookie, + vec->src_sgl[i].vec, vec->src_sgl[i].num, NULL, 0); + if (unlikely(data_len < 0)) + break; + enqueue_one_auth_job_gen1(ctx, req, &vec->digest[i], + &vec->auth_iv[i], ofs, (uint32_t)data_len); + tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; + +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + qat_sym_debug_log_dump(req, ctx, vec->src_sgl[i].vec, + vec->src_sgl[i].num, NULL, &vec->auth_iv[i], + NULL, &vec->digest[i]); +#endif + } + + if (unlikely(i < n)) + qat_sym_dp_fill_vec_status(vec->status + i, -1, n - i); + + dp_ctx->tail = tail; + dp_ctx->cached_enqueue += i; + *status = 0; + return i; +} + +int +qat_sym_dp_enqueue_single_chain_gen1(void *qp_data, uint8_t *drv_ctx, + struct rte_crypto_vec *data, uint16_t n_data_vecs, + union rte_crypto_sym_ofs ofs, + struct rte_crypto_va_iova_ptr *cipher_iv, + struct rte_crypto_va_iova_ptr *digest, + struct rte_crypto_va_iova_ptr *auth_iv, + void *user_data) +{ + struct qat_qp *qp = qp_data; + struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx; + struct qat_queue *tx_queue = &qp->tx_q; + struct qat_sym_op_cookie *cookie; + struct qat_sym_session *ctx = dp_ctx->session; + struct icp_qat_fw_la_bulk_req *req; + int32_t data_len; + uint32_t tail = dp_ctx->tail; + + req = (struct icp_qat_fw_la_bulk_req *)( + (uint8_t *)tx_queue->base_addr + tail); + cookie = qp->op_cookies[tail >> tx_queue->trailz]; + tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; + rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); + rte_prefetch0((uint8_t *)tx_queue->base_addr + tail); + data_len = qat_sym_build_req_set_data(req, user_data, cookie, + data, n_data_vecs, NULL, 0); + if (unlikely(data_len < 0)) + return -1; + + if (unlikely(enqueue_one_chain_job_gen1(ctx, req, data, n_data_vecs, + NULL, 0, cipher_iv, digest, auth_iv, ofs, + (uint32_t)data_len))) + return -1; + + dp_ctx->tail = tail; + dp_ctx->cached_enqueue++; + + +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + qat_sym_debug_log_dump(req, ctx, data, n_data_vecs, cipher_iv, + auth_iv, NULL, digest); +#endif + return 0; +} + +uint32_t +qat_sym_dp_enqueue_chain_jobs_gen1(void *qp_data, uint8_t *drv_ctx, + struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs, + void *user_data[], int *status) +{ + struct qat_qp *qp = qp_data; + struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx; + struct qat_queue *tx_queue = &qp->tx_q; + struct qat_sym_session *ctx = dp_ctx->session; + uint32_t i, n; + uint32_t tail; + struct icp_qat_fw_la_bulk_req *req; + int32_t data_len; + + n = QAT_SYM_DP_GET_MAX_ENQ(qp, dp_ctx->cached_enqueue, vec->num); + if (unlikely(n == 0)) 
{ + qat_sym_dp_fill_vec_status(vec->status, -1, vec->num); + *status = 0; + return 0; + } + + tail = dp_ctx->tail; + + for (i = 0; i < n; i++) { + struct qat_sym_op_cookie *cookie = + qp->op_cookies[tail >> tx_queue->trailz]; + + req = (struct icp_qat_fw_la_bulk_req *)( + (uint8_t *)tx_queue->base_addr + tail); + rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); + + data_len = qat_sym_build_req_set_data(req, user_data[i], cookie, + vec->src_sgl[i].vec, vec->src_sgl[i].num, NULL, 0); + if (unlikely(data_len < 0)) + break; + + if (unlikely(enqueue_one_chain_job_gen1(ctx, req, + vec->src_sgl[i].vec, vec->src_sgl[i].num, + NULL, 0, + &vec->iv[i], &vec->digest[i], + &vec->auth_iv[i], ofs, (uint32_t)data_len))) + break; + + tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; + +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + qat_sym_debug_log_dump(req, ctx, vec->src_sgl[i].vec, + vec->src_sgl[i].num, &vec->iv[i], + &vec->auth_iv[i], + NULL, &vec->digest[i]); +#endif + } + + if (unlikely(i < n)) + qat_sym_dp_fill_vec_status(vec->status + i, -1, n - i); + + dp_ctx->tail = tail; + dp_ctx->cached_enqueue += i; + *status = 0; + return i; +} + +int +qat_sym_dp_enqueue_single_aead_gen1(void *qp_data, uint8_t *drv_ctx, + struct rte_crypto_vec *data, uint16_t n_data_vecs, + union rte_crypto_sym_ofs ofs, + struct rte_crypto_va_iova_ptr *iv, + struct rte_crypto_va_iova_ptr *digest, + struct rte_crypto_va_iova_ptr *aad, + void *user_data) +{ + struct qat_qp *qp = qp_data; + struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx; + struct qat_queue *tx_queue = &qp->tx_q; + struct qat_sym_op_cookie *cookie; + struct qat_sym_session *ctx = dp_ctx->session; + struct icp_qat_fw_la_bulk_req *req; + + int32_t data_len; + uint32_t tail = dp_ctx->tail; + + req = (struct icp_qat_fw_la_bulk_req *)( + (uint8_t *)tx_queue->base_addr + tail); + cookie = qp->op_cookies[tail >> tx_queue->trailz]; + tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; + rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); + rte_prefetch0((uint8_t *)tx_queue->base_addr + tail); + data_len = qat_sym_build_req_set_data(req, user_data, cookie, + data, n_data_vecs, NULL, 0); + if (unlikely(data_len < 0)) + return -1; + + enqueue_one_aead_job_gen1(ctx, req, iv, digest, aad, ofs, + (uint32_t)data_len); + + dp_ctx->tail = tail; + dp_ctx->cached_enqueue++; + +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + qat_sym_debug_log_dump(req, ctx, data, n_data_vecs, iv, + NULL, aad, digest); +#endif + return 0; +} + +uint32_t +qat_sym_dp_enqueue_aead_jobs_gen1(void *qp_data, uint8_t *drv_ctx, + struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs, + void *user_data[], int *status) +{ + struct qat_qp *qp = qp_data; + struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx; + struct qat_queue *tx_queue = &qp->tx_q; + struct qat_sym_session *ctx = dp_ctx->session; + uint32_t i, n; + uint32_t tail; + struct icp_qat_fw_la_bulk_req *req; + int32_t data_len; + + n = QAT_SYM_DP_GET_MAX_ENQ(qp, dp_ctx->cached_enqueue, vec->num); + if (unlikely(n == 0)) { + qat_sym_dp_fill_vec_status(vec->status, -1, vec->num); + *status = 0; + return 0; + } + + tail = dp_ctx->tail; + + for (i = 0; i < n; i++) { + struct qat_sym_op_cookie *cookie = + qp->op_cookies[tail >> tx_queue->trailz]; + + req = (struct icp_qat_fw_la_bulk_req *)( + (uint8_t *)tx_queue->base_addr + tail); + rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); + + data_len = qat_sym_build_req_set_data(req, user_data[i], cookie, + vec->src_sgl[i].vec, vec->src_sgl[i].num, NULL, 0); + if 
(unlikely(data_len < 0)) + break; + + enqueue_one_aead_job_gen1(ctx, req, &vec->iv[i], + &vec->digest[i], &vec->aad[i], ofs, + (uint32_t)data_len); + + tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; + +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + qat_sym_debug_log_dump(req, ctx, vec->src_sgl[i].vec, + vec->src_sgl[i].num, &vec->iv[i], NULL, + &vec->aad[i], &vec->digest[i]); +#endif + } + + if (unlikely(i < n)) + qat_sym_dp_fill_vec_status(vec->status + i, -1, n - i); + + dp_ctx->tail = tail; + dp_ctx->cached_enqueue += i; + *status = 0; + return i; +} + + +uint32_t +qat_sym_dp_dequeue_burst_gen1(void *qp_data, uint8_t *drv_ctx, + rte_cryptodev_raw_get_dequeue_count_t get_dequeue_count, + uint32_t max_nb_to_dequeue, + rte_cryptodev_raw_post_dequeue_t post_dequeue, + void **out_user_data, uint8_t is_user_data_array, + uint32_t *n_success_jobs, int *return_status) +{ + struct qat_qp *qp = qp_data; + struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx; + struct qat_queue *rx_queue = &qp->rx_q; + struct icp_qat_fw_comn_resp *resp; + void *resp_opaque; + uint32_t i, n, inflight; + uint32_t head; + uint8_t status; + + *n_success_jobs = 0; + *return_status = 0; + head = dp_ctx->head; + + inflight = qp->enqueued - qp->dequeued; + if (unlikely(inflight == 0)) + return 0; + + resp = (struct icp_qat_fw_comn_resp *)((uint8_t *)rx_queue->base_addr + + head); + /* no operation ready */ + if (unlikely(*(uint32_t *)resp == ADF_RING_EMPTY_SIG)) + return 0; + + resp_opaque = (void *)(uintptr_t)resp->opaque_data; + /* get the dequeue count */ + if (get_dequeue_count) { + n = get_dequeue_count(resp_opaque); + if (unlikely(n == 0)) + return 0; + } else { + if (unlikely(max_nb_to_dequeue == 0)) + return 0; + n = max_nb_to_dequeue; + } + + out_user_data[0] = resp_opaque; + status = QAT_SYM_DP_IS_RESP_SUCCESS(resp); + post_dequeue(resp_opaque, 0, status); + *n_success_jobs += status; + + head = (head + rx_queue->msg_size) & rx_queue->modulo_mask; + + /* we already finished dequeue when n == 1 */ + if (unlikely(n == 1)) { + i = 1; + goto end_deq; + } + + if (is_user_data_array) { + for (i = 1; i < n; i++) { + resp = (struct icp_qat_fw_comn_resp *)( + (uint8_t *)rx_queue->base_addr + head); + if (unlikely(*(uint32_t *)resp == + ADF_RING_EMPTY_SIG)) + goto end_deq; + out_user_data[i] = (void *)(uintptr_t)resp->opaque_data; + status = QAT_SYM_DP_IS_RESP_SUCCESS(resp); + *n_success_jobs += status; + post_dequeue(out_user_data[i], i, status); + head = (head + rx_queue->msg_size) & + rx_queue->modulo_mask; + } + + goto end_deq; + } + + /* opaque is not array */ + for (i = 1; i < n; i++) { + resp = (struct icp_qat_fw_comn_resp *)( + (uint8_t *)rx_queue->base_addr + head); + status = QAT_SYM_DP_IS_RESP_SUCCESS(resp); + if (unlikely(*(uint32_t *)resp == ADF_RING_EMPTY_SIG)) + goto end_deq; + head = (head + rx_queue->msg_size) & + rx_queue->modulo_mask; + post_dequeue(resp_opaque, i, status); + *n_success_jobs += status; + } + +end_deq: + dp_ctx->head = head; + dp_ctx->cached_dequeue += i; + return i; +} + +void * +qat_sym_dp_dequeue_single_gen1(void *qp_data, uint8_t *drv_ctx, + int *dequeue_status, enum rte_crypto_op_status *op_status) +{ + struct qat_qp *qp = qp_data; + struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx; + struct qat_queue *rx_queue = &qp->rx_q; + register struct icp_qat_fw_comn_resp *resp; + + resp = (struct icp_qat_fw_comn_resp *)((uint8_t *)rx_queue->base_addr + + dp_ctx->head); + + if (unlikely(*(uint32_t *)resp == ADF_RING_EMPTY_SIG)) + return NULL; + + dp_ctx->head = (dp_ctx->head + 
rx_queue->msg_size) & + rx_queue->modulo_mask; + dp_ctx->cached_dequeue++; + + *op_status = QAT_SYM_DP_IS_RESP_SUCCESS(resp) ? + RTE_CRYPTO_OP_STATUS_SUCCESS : + RTE_CRYPTO_OP_STATUS_AUTH_FAILED; + *dequeue_status = 0; + return (void *)(uintptr_t)resp->opaque_data; +} + +int +qat_sym_dp_enqueue_done_gen1(void *qp_data, uint8_t *drv_ctx, uint32_t n) +{ + struct qat_qp *qp = qp_data; + struct qat_queue *tx_queue = &qp->tx_q; + struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx; + + if (unlikely(dp_ctx->cached_enqueue != n)) + return -1; + + qp->enqueued += n; + qp->stats.enqueued_count += n; + + tx_queue->tail = dp_ctx->tail; + + WRITE_CSR_RING_TAIL(qp->mmap_bar_addr, + tx_queue->hw_bundle_number, + tx_queue->hw_queue_number, tx_queue->tail); + tx_queue->csr_tail = tx_queue->tail; + dp_ctx->cached_enqueue = 0; + + return 0; +} + +int +qat_sym_dp_denqueue_done_gen1(void *qp_data, uint8_t *drv_ctx, uint32_t n) +{ + struct qat_qp *qp = qp_data; + struct qat_queue *rx_queue = &qp->rx_q; + struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx; + + if (unlikely(dp_ctx->cached_dequeue != n)) + return -1; + + rx_queue->head = dp_ctx->head; + rx_queue->nb_processed_responses += n; + qp->dequeued += n; + qp->stats.dequeued_count += n; + if (rx_queue->nb_processed_responses > QAT_CSR_HEAD_WRITE_THRESH) { + uint32_t old_head, new_head; + uint32_t max_head; + + old_head = rx_queue->csr_head; + new_head = rx_queue->head; + max_head = qp->nb_descriptors * rx_queue->msg_size; + + /* write out free descriptors */ + void *cur_desc = (uint8_t *)rx_queue->base_addr + old_head; + + if (new_head < old_head) { + memset(cur_desc, ADF_RING_EMPTY_SIG_BYTE, + max_head - old_head); + memset(rx_queue->base_addr, ADF_RING_EMPTY_SIG_BYTE, + new_head); + } else { + memset(cur_desc, ADF_RING_EMPTY_SIG_BYTE, new_head - + old_head); + } + rx_queue->nb_processed_responses = 0; + rx_queue->csr_head = new_head; + + /* write current head to CSR */ + WRITE_CSR_RING_HEAD(qp->mmap_bar_addr, + rx_queue->hw_bundle_number, rx_queue->hw_queue_number, + new_head); + } + + dp_ctx->cached_dequeue = 0; + return 0; +} + +int +qat_sym_configure_raw_dp_ctx_gen1(void *_raw_dp_ctx, void *_ctx) +{ + struct rte_crypto_raw_dp_ctx *raw_dp_ctx = _raw_dp_ctx; + struct qat_sym_session *ctx = _ctx; + + raw_dp_ctx->enqueue_done = qat_sym_dp_enqueue_done_gen1; + raw_dp_ctx->dequeue_burst = qat_sym_dp_dequeue_burst_gen1; + raw_dp_ctx->dequeue = qat_sym_dp_dequeue_single_gen1; + raw_dp_ctx->dequeue_done = qat_sym_dp_denqueue_done_gen1; + + if ((ctx->qat_cmd == ICP_QAT_FW_LA_CMD_HASH_CIPHER || + ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_HASH) && + !ctx->is_gmac) { + /* AES-GCM or AES-CCM */ + if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128 || + ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64 || + (ctx->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_AES128 + && ctx->qat_mode == ICP_QAT_HW_CIPHER_CTR_MODE + && ctx->qat_hash_alg == + ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC)) { + raw_dp_ctx->enqueue_burst = + qat_sym_dp_enqueue_aead_jobs_gen1; + raw_dp_ctx->enqueue = + qat_sym_dp_enqueue_single_aead_gen1; + } else { + raw_dp_ctx->enqueue_burst = + qat_sym_dp_enqueue_chain_jobs_gen1; + raw_dp_ctx->enqueue = + qat_sym_dp_enqueue_single_chain_gen1; + } + } else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_AUTH || ctx->is_gmac) { + raw_dp_ctx->enqueue_burst = qat_sym_dp_enqueue_auth_jobs_gen1; + raw_dp_ctx->enqueue = qat_sym_dp_enqueue_single_auth_gen1; + } else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER) { + if (ctx->qat_mode == ICP_QAT_HW_CIPHER_AEAD_MODE || + 
ctx->qat_cipher_alg == + ICP_QAT_HW_CIPHER_ALGO_CHACHA20_POLY1305) { + raw_dp_ctx->enqueue_burst = + qat_sym_dp_enqueue_aead_jobs_gen1; + raw_dp_ctx->enqueue = + qat_sym_dp_enqueue_single_aead_gen1; + } else { + raw_dp_ctx->enqueue_burst = + qat_sym_dp_enqueue_cipher_jobs_gen1; + raw_dp_ctx->enqueue = + qat_sym_dp_enqueue_single_cipher_gen1; + } + } else + return -1; + + return 0; +} + int qat_sym_crypto_set_session_gen1(void *cryptodev __rte_unused, void *session) { @@ -518,6 +1172,8 @@ RTE_INIT(qat_sym_crypto_gen1_init) qat_sym_crypto_cap_get_gen1; qat_sym_gen_dev_ops[QAT_GEN1].set_session = qat_sym_crypto_set_session_gen1; + qat_sym_gen_dev_ops[QAT_GEN1].set_raw_dp_ctx = + qat_sym_configure_raw_dp_ctx_gen1; qat_sym_gen_dev_ops[QAT_GEN1].get_feature_flags = qat_sym_crypto_feature_flags_get_gen1; #ifdef RTE_LIB_SECURITY diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h index 3ac6c90e1c..203f74f46f 100644 --- a/drivers/crypto/qat/qat_crypto.h +++ b/drivers/crypto/qat/qat_crypto.h @@ -53,11 +53,14 @@ typedef void * (*create_security_ctx_t)(void *cryptodev); typedef int (*set_session_t)(void *cryptodev, void *session); +typedef int (*set_raw_dp_ctx_t)(void *raw_dp_ctx, void *ctx); + struct qat_crypto_gen_dev_ops { get_feature_flags_t get_feature_flags; get_capabilities_info_t get_capabilities; struct rte_cryptodev_ops *cryptodev_ops; set_session_t set_session; + set_raw_dp_ctx_t set_raw_dp_ctx; #ifdef RTE_LIB_SECURITY create_security_ctx_t create_security_ctx; #endif diff --git a/drivers/crypto/qat/qat_sym.c b/drivers/crypto/qat/qat_sym.c index 2bbb6615f3..b78e6a6ef3 100644 --- a/drivers/crypto/qat/qat_sym.c +++ b/drivers/crypto/qat/qat_sym.c @@ -354,6 +354,56 @@ qat_sym_dev_destroy(struct qat_pci_device *qat_pci_dev) return 0; } +int +qat_sym_configure_dp_ctx(struct rte_cryptodev *dev, uint16_t qp_id, + struct rte_crypto_raw_dp_ctx *raw_dp_ctx, + enum rte_crypto_op_sess_type sess_type, + union rte_cryptodev_session_ctx session_ctx, uint8_t is_update) +{ + struct qat_cryptodev_private *internals = dev->data->dev_private; + enum qat_device_gen qat_dev_gen = internals->qat_dev->qat_dev_gen; + struct qat_crypto_gen_dev_ops *gen_dev_ops = + &qat_sym_gen_dev_ops[qat_dev_gen]; + struct qat_qp *qp; + struct qat_sym_session *ctx; + struct qat_sym_dp_ctx *dp_ctx; + + if (!gen_dev_ops->set_raw_dp_ctx) { + QAT_LOG(ERR, "Device GEN %u does not support raw data path", + qat_dev_gen); + return -ENOTSUP; + } + + qp = dev->data->queue_pairs[qp_id]; + dp_ctx = (struct qat_sym_dp_ctx *)raw_dp_ctx->drv_ctx_data; + + if (!is_update) { + memset(raw_dp_ctx, 0, sizeof(*raw_dp_ctx) + + sizeof(struct qat_sym_dp_ctx)); + raw_dp_ctx->qp_data = dev->data->queue_pairs[qp_id]; + dp_ctx->tail = qp->tx_q.tail; + dp_ctx->head = qp->rx_q.head; + dp_ctx->cached_enqueue = dp_ctx->cached_dequeue = 0; + } + + if (sess_type != RTE_CRYPTO_OP_WITH_SESSION) + return -EINVAL; + + ctx = (struct qat_sym_session *)get_sym_session_private_data( + session_ctx.crypto_sess, qat_sym_driver_id); + + dp_ctx->session = ctx; + + return gen_dev_ops->set_raw_dp_ctx(raw_dp_ctx, ctx); +} + + +int +qat_sym_get_dp_ctx_size(struct rte_cryptodev *dev __rte_unused) +{ + return sizeof(struct qat_sym_dp_ctx); +} + static struct cryptodev_driver qat_crypto_drv; RTE_PMD_REGISTER_CRYPTO_DRIVER(qat_crypto_drv, cryptodev_qat_sym_driver, diff --git a/drivers/crypto/qat/qat_sym_hw_dp.c b/drivers/crypto/qat/qat_sym_hw_dp.c deleted file mode 100644 index 6bcc30d92d..0000000000 --- a/drivers/crypto/qat/qat_sym_hw_dp.c +++ /dev/null @@ 
-1,974 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2020 Intel Corporation - */ - -#include - -#include "adf_transport_access_macros.h" -#include "icp_qat_fw.h" -#include "icp_qat_fw_la.h" - -#include "qat_sym.h" -#include "qat_sym_session.h" -#include "qat_qp.h" - -static __rte_always_inline int32_t -qat_sym_dp_parse_data_vec(struct qat_qp *qp, struct icp_qat_fw_la_bulk_req *req, - struct rte_crypto_vec *data, uint16_t n_data_vecs) -{ - struct qat_queue *tx_queue; - struct qat_sym_op_cookie *cookie; - struct qat_sgl *list; - uint32_t i; - uint32_t total_len; - - if (likely(n_data_vecs == 1)) { - req->comn_mid.src_data_addr = req->comn_mid.dest_data_addr = - data[0].iova; - req->comn_mid.src_length = req->comn_mid.dst_length = - data[0].len; - return data[0].len; - } - - if (n_data_vecs == 0 || n_data_vecs > QAT_SYM_SGL_MAX_NUMBER) - return -1; - - total_len = 0; - tx_queue = &qp->tx_q; - - ICP_QAT_FW_COMN_PTR_TYPE_SET(req->comn_hdr.comn_req_flags, - QAT_COMN_PTR_TYPE_SGL); - cookie = qp->op_cookies[tx_queue->tail >> tx_queue->trailz]; - list = (struct qat_sgl *)&cookie->qat_sgl_src; - - for (i = 0; i < n_data_vecs; i++) { - list->buffers[i].len = data[i].len; - list->buffers[i].resrvd = 0; - list->buffers[i].addr = data[i].iova; - if (total_len + data[i].len > UINT32_MAX) { - QAT_DP_LOG(ERR, "Message too long"); - return -1; - } - total_len += data[i].len; - } - - list->num_bufs = i; - req->comn_mid.src_data_addr = req->comn_mid.dest_data_addr = - cookie->qat_sgl_src_phys_addr; - req->comn_mid.src_length = req->comn_mid.dst_length = 0; - return total_len; -} - -static __rte_always_inline void -set_cipher_iv(struct icp_qat_fw_la_cipher_req_params *cipher_param, - struct rte_crypto_va_iova_ptr *iv_ptr, uint32_t iv_len, - struct icp_qat_fw_la_bulk_req *qat_req) -{ - /* copy IV into request if it fits */ - if (iv_len <= sizeof(cipher_param->u.cipher_IV_array)) - rte_memcpy(cipher_param->u.cipher_IV_array, iv_ptr->va, - iv_len); - else { - ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET( - qat_req->comn_hdr.serv_specif_flags, - ICP_QAT_FW_CIPH_IV_64BIT_PTR); - cipher_param->u.s.cipher_IV_ptr = iv_ptr->iova; - } -} - -#define QAT_SYM_DP_IS_RESP_SUCCESS(resp) \ - (ICP_QAT_FW_COMN_STATUS_FLAG_OK == \ - ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(resp->comn_hdr.comn_status)) - -static __rte_always_inline void -qat_sym_dp_fill_vec_status(int32_t *sta, int status, uint32_t n) -{ - uint32_t i; - - for (i = 0; i < n; i++) - sta[i] = status; -} - -#define QAT_SYM_DP_GET_MAX_ENQ(q, c, n) \ - RTE_MIN((q->max_inflights - q->enqueued + q->dequeued - c), n) - -static __rte_always_inline void -enqueue_one_cipher_job(struct qat_sym_session *ctx, - struct icp_qat_fw_la_bulk_req *req, - struct rte_crypto_va_iova_ptr *iv, - union rte_crypto_sym_ofs ofs, uint32_t data_len) -{ - struct icp_qat_fw_la_cipher_req_params *cipher_param; - - cipher_param = (void *)&req->serv_specif_rqpars; - - /* cipher IV */ - set_cipher_iv(cipher_param, iv, ctx->cipher_iv.length, req); - cipher_param->cipher_offset = ofs.ofs.cipher.head; - cipher_param->cipher_length = data_len - ofs.ofs.cipher.head - - ofs.ofs.cipher.tail; -} - -static __rte_always_inline int -qat_sym_dp_enqueue_single_cipher(void *qp_data, uint8_t *drv_ctx, - struct rte_crypto_vec *data, uint16_t n_data_vecs, - union rte_crypto_sym_ofs ofs, - struct rte_crypto_va_iova_ptr *iv, - struct rte_crypto_va_iova_ptr *digest __rte_unused, - struct rte_crypto_va_iova_ptr *aad __rte_unused, - void *user_data) -{ - struct qat_qp *qp = qp_data; - struct qat_sym_dp_ctx *dp_ctx 
= (void *)drv_ctx; - struct qat_queue *tx_queue = &qp->tx_q; - struct qat_sym_session *ctx = dp_ctx->session; - struct icp_qat_fw_la_bulk_req *req; - int32_t data_len; - uint32_t tail = dp_ctx->tail; - - req = (struct icp_qat_fw_la_bulk_req *)( - (uint8_t *)tx_queue->base_addr + tail); - tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; - rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); - rte_prefetch0((uint8_t *)tx_queue->base_addr + tail); - data_len = qat_sym_dp_parse_data_vec(qp, req, data, n_data_vecs); - if (unlikely(data_len < 0)) - return -1; - req->comn_mid.opaque_data = (uint64_t)(uintptr_t)user_data; - - enqueue_one_cipher_job(ctx, req, iv, ofs, (uint32_t)data_len); - - dp_ctx->tail = tail; - dp_ctx->cached_enqueue++; - - return 0; -} - -static __rte_always_inline uint32_t -qat_sym_dp_enqueue_cipher_jobs(void *qp_data, uint8_t *drv_ctx, - struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs, - void *user_data[], int *status) -{ - struct qat_qp *qp = qp_data; - struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx; - struct qat_queue *tx_queue = &qp->tx_q; - struct qat_sym_session *ctx = dp_ctx->session; - uint32_t i, n; - uint32_t tail; - struct icp_qat_fw_la_bulk_req *req; - int32_t data_len; - - n = QAT_SYM_DP_GET_MAX_ENQ(qp, dp_ctx->cached_enqueue, vec->num); - if (unlikely(n == 0)) { - qat_sym_dp_fill_vec_status(vec->status, -1, vec->num); - *status = 0; - return 0; - } - - tail = dp_ctx->tail; - - for (i = 0; i < n; i++) { - req = (struct icp_qat_fw_la_bulk_req *)( - (uint8_t *)tx_queue->base_addr + tail); - rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); - - data_len = qat_sym_dp_parse_data_vec(qp, req, - vec->src_sgl[i].vec, - vec->src_sgl[i].num); - if (unlikely(data_len < 0)) - break; - req->comn_mid.opaque_data = (uint64_t)(uintptr_t)user_data[i]; - enqueue_one_cipher_job(ctx, req, &vec->iv[i], ofs, - (uint32_t)data_len); - tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; - } - - if (unlikely(i < n)) - qat_sym_dp_fill_vec_status(vec->status + i, -1, n - i); - - dp_ctx->tail = tail; - dp_ctx->cached_enqueue += i; - *status = 0; - return i; -} - -static __rte_always_inline void -enqueue_one_auth_job(struct qat_sym_session *ctx, - struct icp_qat_fw_la_bulk_req *req, - struct rte_crypto_va_iova_ptr *digest, - struct rte_crypto_va_iova_ptr *auth_iv, - union rte_crypto_sym_ofs ofs, uint32_t data_len) -{ - struct icp_qat_fw_la_cipher_req_params *cipher_param; - struct icp_qat_fw_la_auth_req_params *auth_param; - - cipher_param = (void *)&req->serv_specif_rqpars; - auth_param = (void *)((uint8_t *)cipher_param + - ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET); - - auth_param->auth_off = ofs.ofs.auth.head; - auth_param->auth_len = data_len - ofs.ofs.auth.head - - ofs.ofs.auth.tail; - auth_param->auth_res_addr = digest->iova; - - switch (ctx->qat_hash_alg) { - case ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2: - case ICP_QAT_HW_AUTH_ALGO_KASUMI_F9: - case ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3: - auth_param->u1.aad_adr = auth_iv->iova; - break; - case ICP_QAT_HW_AUTH_ALGO_GALOIS_128: - case ICP_QAT_HW_AUTH_ALGO_GALOIS_64: - ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET( - req->comn_hdr.serv_specif_flags, - ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS); - rte_memcpy(cipher_param->u.cipher_IV_array, auth_iv->va, - ctx->auth_iv.length); - break; - default: - break; - } -} - -static __rte_always_inline int -qat_sym_dp_enqueue_single_auth(void *qp_data, uint8_t *drv_ctx, - struct rte_crypto_vec *data, uint16_t n_data_vecs, - union rte_crypto_sym_ofs ofs, - struct 
rte_crypto_va_iova_ptr *iv __rte_unused, - struct rte_crypto_va_iova_ptr *digest, - struct rte_crypto_va_iova_ptr *auth_iv, - void *user_data) -{ - struct qat_qp *qp = qp_data; - struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx; - struct qat_queue *tx_queue = &qp->tx_q; - struct qat_sym_session *ctx = dp_ctx->session; - struct icp_qat_fw_la_bulk_req *req; - int32_t data_len; - uint32_t tail = dp_ctx->tail; - - req = (struct icp_qat_fw_la_bulk_req *)( - (uint8_t *)tx_queue->base_addr + tail); - tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; - rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); - rte_prefetch0((uint8_t *)tx_queue->base_addr + tail); - data_len = qat_sym_dp_parse_data_vec(qp, req, data, n_data_vecs); - if (unlikely(data_len < 0)) - return -1; - req->comn_mid.opaque_data = (uint64_t)(uintptr_t)user_data; - - enqueue_one_auth_job(ctx, req, digest, auth_iv, ofs, - (uint32_t)data_len); - - dp_ctx->tail = tail; - dp_ctx->cached_enqueue++; - - return 0; -} - -static __rte_always_inline uint32_t -qat_sym_dp_enqueue_auth_jobs(void *qp_data, uint8_t *drv_ctx, - struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs, - void *user_data[], int *status) -{ - struct qat_qp *qp = qp_data; - struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx; - struct qat_queue *tx_queue = &qp->tx_q; - struct qat_sym_session *ctx = dp_ctx->session; - uint32_t i, n; - uint32_t tail; - struct icp_qat_fw_la_bulk_req *req; - int32_t data_len; - - n = QAT_SYM_DP_GET_MAX_ENQ(qp, dp_ctx->cached_enqueue, vec->num); - if (unlikely(n == 0)) { - qat_sym_dp_fill_vec_status(vec->status, -1, vec->num); - *status = 0; - return 0; - } - - tail = dp_ctx->tail; - - for (i = 0; i < n; i++) { - req = (struct icp_qat_fw_la_bulk_req *)( - (uint8_t *)tx_queue->base_addr + tail); - rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); - - data_len = qat_sym_dp_parse_data_vec(qp, req, - vec->src_sgl[i].vec, - vec->src_sgl[i].num); - if (unlikely(data_len < 0)) - break; - req->comn_mid.opaque_data = (uint64_t)(uintptr_t)user_data[i]; - enqueue_one_auth_job(ctx, req, &vec->digest[i], - &vec->auth_iv[i], ofs, (uint32_t)data_len); - tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; - } - - if (unlikely(i < n)) - qat_sym_dp_fill_vec_status(vec->status + i, -1, n - i); - - dp_ctx->tail = tail; - dp_ctx->cached_enqueue += i; - *status = 0; - return i; -} - -static __rte_always_inline int -enqueue_one_chain_job(struct qat_sym_session *ctx, - struct icp_qat_fw_la_bulk_req *req, - struct rte_crypto_vec *data, - uint16_t n_data_vecs, - struct rte_crypto_va_iova_ptr *cipher_iv, - struct rte_crypto_va_iova_ptr *digest, - struct rte_crypto_va_iova_ptr *auth_iv, - union rte_crypto_sym_ofs ofs, uint32_t data_len) -{ - struct icp_qat_fw_la_cipher_req_params *cipher_param; - struct icp_qat_fw_la_auth_req_params *auth_param; - rte_iova_t auth_iova_end; - int32_t cipher_len, auth_len; - - cipher_param = (void *)&req->serv_specif_rqpars; - auth_param = (void *)((uint8_t *)cipher_param + - ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET); - - cipher_len = data_len - ofs.ofs.cipher.head - - ofs.ofs.cipher.tail; - auth_len = data_len - ofs.ofs.auth.head - ofs.ofs.auth.tail; - - if (unlikely(cipher_len < 0 || auth_len < 0)) - return -1; - - cipher_param->cipher_offset = ofs.ofs.cipher.head; - cipher_param->cipher_length = cipher_len; - set_cipher_iv(cipher_param, cipher_iv, ctx->cipher_iv.length, req); - - auth_param->auth_off = ofs.ofs.auth.head; - auth_param->auth_len = auth_len; - auth_param->auth_res_addr = 
digest->iova; - - switch (ctx->qat_hash_alg) { - case ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2: - case ICP_QAT_HW_AUTH_ALGO_KASUMI_F9: - case ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3: - auth_param->u1.aad_adr = auth_iv->iova; - break; - case ICP_QAT_HW_AUTH_ALGO_GALOIS_128: - case ICP_QAT_HW_AUTH_ALGO_GALOIS_64: - break; - default: - break; - } - - if (unlikely(n_data_vecs > 1)) { - int auth_end_get = 0, i = n_data_vecs - 1; - struct rte_crypto_vec *cvec = &data[0]; - uint32_t len; - - len = data_len - ofs.ofs.auth.tail; - - while (i >= 0 && len > 0) { - if (cvec->len >= len) { - auth_iova_end = cvec->iova + len; - len = 0; - auth_end_get = 1; - break; - } - len -= cvec->len; - i--; - cvec++; - } - - if (unlikely(auth_end_get == 0)) - return -1; - } else - auth_iova_end = data[0].iova + auth_param->auth_off + - auth_param->auth_len; - - /* Then check if digest-encrypted conditions are met */ - if ((auth_param->auth_off + auth_param->auth_len < - cipher_param->cipher_offset + - cipher_param->cipher_length) && - (digest->iova == auth_iova_end)) { - /* Handle partial digest encryption */ - if (cipher_param->cipher_offset + - cipher_param->cipher_length < - auth_param->auth_off + - auth_param->auth_len + - ctx->digest_length) - req->comn_mid.dst_length = - req->comn_mid.src_length = - auth_param->auth_off + - auth_param->auth_len + - ctx->digest_length; - struct icp_qat_fw_comn_req_hdr *header = - &req->comn_hdr; - ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET( - header->serv_specif_flags, - ICP_QAT_FW_LA_DIGEST_IN_BUFFER); - } - - return 0; -} - -static __rte_always_inline int -qat_sym_dp_enqueue_single_chain(void *qp_data, uint8_t *drv_ctx, - struct rte_crypto_vec *data, uint16_t n_data_vecs, - union rte_crypto_sym_ofs ofs, - struct rte_crypto_va_iova_ptr *cipher_iv, - struct rte_crypto_va_iova_ptr *digest, - struct rte_crypto_va_iova_ptr *auth_iv, - void *user_data) -{ - struct qat_qp *qp = qp_data; - struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx; - struct qat_queue *tx_queue = &qp->tx_q; - struct qat_sym_session *ctx = dp_ctx->session; - struct icp_qat_fw_la_bulk_req *req; - int32_t data_len; - uint32_t tail = dp_ctx->tail; - - req = (struct icp_qat_fw_la_bulk_req *)( - (uint8_t *)tx_queue->base_addr + tail); - tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; - rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); - rte_prefetch0((uint8_t *)tx_queue->base_addr + tail); - data_len = qat_sym_dp_parse_data_vec(qp, req, data, n_data_vecs); - if (unlikely(data_len < 0)) - return -1; - req->comn_mid.opaque_data = (uint64_t)(uintptr_t)user_data; - - if (unlikely(enqueue_one_chain_job(ctx, req, data, n_data_vecs, - cipher_iv, digest, auth_iv, ofs, (uint32_t)data_len))) - return -1; - - dp_ctx->tail = tail; - dp_ctx->cached_enqueue++; - - return 0; -} - -static __rte_always_inline uint32_t -qat_sym_dp_enqueue_chain_jobs(void *qp_data, uint8_t *drv_ctx, - struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs, - void *user_data[], int *status) -{ - struct qat_qp *qp = qp_data; - struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx; - struct qat_queue *tx_queue = &qp->tx_q; - struct qat_sym_session *ctx = dp_ctx->session; - uint32_t i, n; - uint32_t tail; - struct icp_qat_fw_la_bulk_req *req; - int32_t data_len; - - n = QAT_SYM_DP_GET_MAX_ENQ(qp, dp_ctx->cached_enqueue, vec->num); - if (unlikely(n == 0)) { - qat_sym_dp_fill_vec_status(vec->status, -1, vec->num); - *status = 0; - return 0; - } - - tail = dp_ctx->tail; - - for (i = 0; i < n; i++) { - req = (struct icp_qat_fw_la_bulk_req *)( - (uint8_t 
*)tx_queue->base_addr + tail); - rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); - - data_len = qat_sym_dp_parse_data_vec(qp, req, - vec->src_sgl[i].vec, - vec->src_sgl[i].num); - if (unlikely(data_len < 0)) - break; - req->comn_mid.opaque_data = (uint64_t)(uintptr_t)user_data[i]; - if (unlikely(enqueue_one_chain_job(ctx, req, - vec->src_sgl[i].vec, vec->src_sgl[i].num, - &vec->iv[i], &vec->digest[i], - &vec->auth_iv[i], ofs, (uint32_t)data_len))) - break; - - tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; - } - - if (unlikely(i < n)) - qat_sym_dp_fill_vec_status(vec->status + i, -1, n - i); - - dp_ctx->tail = tail; - dp_ctx->cached_enqueue += i; - *status = 0; - return i; -} - -static __rte_always_inline void -enqueue_one_aead_job(struct qat_sym_session *ctx, - struct icp_qat_fw_la_bulk_req *req, - struct rte_crypto_va_iova_ptr *iv, - struct rte_crypto_va_iova_ptr *digest, - struct rte_crypto_va_iova_ptr *aad, - union rte_crypto_sym_ofs ofs, uint32_t data_len) -{ - struct icp_qat_fw_la_cipher_req_params *cipher_param = - (void *)&req->serv_specif_rqpars; - struct icp_qat_fw_la_auth_req_params *auth_param = - (void *)((uint8_t *)&req->serv_specif_rqpars + - ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET); - uint8_t *aad_data; - uint8_t aad_ccm_real_len; - uint8_t aad_len_field_sz; - uint32_t msg_len_be; - rte_iova_t aad_iova = 0; - uint8_t q; - - /* CPM 1.7 uses single pass to treat AEAD as cipher operation */ - if (ctx->is_single_pass) { - enqueue_one_cipher_job(ctx, req, iv, ofs, data_len); - cipher_param->spc_aad_addr = aad->iova; - cipher_param->spc_auth_res_addr = digest->iova; - return; - } - - switch (ctx->qat_hash_alg) { - case ICP_QAT_HW_AUTH_ALGO_GALOIS_128: - case ICP_QAT_HW_AUTH_ALGO_GALOIS_64: - ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET( - req->comn_hdr.serv_specif_flags, - ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS); - rte_memcpy(cipher_param->u.cipher_IV_array, iv->va, - ctx->cipher_iv.length); - aad_iova = aad->iova; - break; - case ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC: - aad_data = aad->va; - aad_iova = aad->iova; - aad_ccm_real_len = 0; - aad_len_field_sz = 0; - msg_len_be = rte_bswap32((uint32_t)data_len - - ofs.ofs.cipher.head); - - if (ctx->aad_len > ICP_QAT_HW_CCM_AAD_DATA_OFFSET) { - aad_len_field_sz = ICP_QAT_HW_CCM_AAD_LEN_INFO; - aad_ccm_real_len = ctx->aad_len - - ICP_QAT_HW_CCM_AAD_B0_LEN - - ICP_QAT_HW_CCM_AAD_LEN_INFO; - } else { - aad_data = iv->va; - aad_iova = iv->iova; - } - - q = ICP_QAT_HW_CCM_NQ_CONST - ctx->cipher_iv.length; - aad_data[0] = ICP_QAT_HW_CCM_BUILD_B0_FLAGS( - aad_len_field_sz, ctx->digest_length, q); - if (q > ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE) { - memcpy(aad_data + ctx->cipher_iv.length + - ICP_QAT_HW_CCM_NONCE_OFFSET + (q - - ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE), - (uint8_t *)&msg_len_be, - ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE); - } else { - memcpy(aad_data + ctx->cipher_iv.length + - ICP_QAT_HW_CCM_NONCE_OFFSET, - (uint8_t *)&msg_len_be + - (ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE - - q), q); - } - - if (aad_len_field_sz > 0) { - *(uint16_t *)&aad_data[ICP_QAT_HW_CCM_AAD_B0_LEN] = - rte_bswap16(aad_ccm_real_len); - - if ((aad_ccm_real_len + aad_len_field_sz) - % ICP_QAT_HW_CCM_AAD_B0_LEN) { - uint8_t pad_len = 0; - uint8_t pad_idx = 0; - - pad_len = ICP_QAT_HW_CCM_AAD_B0_LEN - - ((aad_ccm_real_len + - aad_len_field_sz) % - ICP_QAT_HW_CCM_AAD_B0_LEN); - pad_idx = ICP_QAT_HW_CCM_AAD_B0_LEN + - aad_ccm_real_len + - aad_len_field_sz; - memset(&aad_data[pad_idx], 0, pad_len); - } - } - - rte_memcpy(((uint8_t 
*)cipher_param->u.cipher_IV_array) - + ICP_QAT_HW_CCM_NONCE_OFFSET, - (uint8_t *)iv->va + - ICP_QAT_HW_CCM_NONCE_OFFSET, ctx->cipher_iv.length); - *(uint8_t *)&cipher_param->u.cipher_IV_array[0] = - q - ICP_QAT_HW_CCM_NONCE_OFFSET; - - rte_memcpy((uint8_t *)aad->va + - ICP_QAT_HW_CCM_NONCE_OFFSET, - (uint8_t *)iv->va + ICP_QAT_HW_CCM_NONCE_OFFSET, - ctx->cipher_iv.length); - break; - default: - break; - } - - cipher_param->cipher_offset = ofs.ofs.cipher.head; - cipher_param->cipher_length = data_len - ofs.ofs.cipher.head - - ofs.ofs.cipher.tail; - auth_param->auth_off = ofs.ofs.cipher.head; - auth_param->auth_len = cipher_param->cipher_length; - auth_param->auth_res_addr = digest->iova; - auth_param->u1.aad_adr = aad_iova; -} - -static __rte_always_inline int -qat_sym_dp_enqueue_single_aead(void *qp_data, uint8_t *drv_ctx, - struct rte_crypto_vec *data, uint16_t n_data_vecs, - union rte_crypto_sym_ofs ofs, - struct rte_crypto_va_iova_ptr *iv, - struct rte_crypto_va_iova_ptr *digest, - struct rte_crypto_va_iova_ptr *aad, - void *user_data) -{ - struct qat_qp *qp = qp_data; - struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx; - struct qat_queue *tx_queue = &qp->tx_q; - struct qat_sym_session *ctx = dp_ctx->session; - struct icp_qat_fw_la_bulk_req *req; - int32_t data_len; - uint32_t tail = dp_ctx->tail; - - req = (struct icp_qat_fw_la_bulk_req *)( - (uint8_t *)tx_queue->base_addr + tail); - tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; - rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); - rte_prefetch0((uint8_t *)tx_queue->base_addr + tail); - data_len = qat_sym_dp_parse_data_vec(qp, req, data, n_data_vecs); - if (unlikely(data_len < 0)) - return -1; - req->comn_mid.opaque_data = (uint64_t)(uintptr_t)user_data; - - enqueue_one_aead_job(ctx, req, iv, digest, aad, ofs, - (uint32_t)data_len); - - dp_ctx->tail = tail; - dp_ctx->cached_enqueue++; - - return 0; -} - -static __rte_always_inline uint32_t -qat_sym_dp_enqueue_aead_jobs(void *qp_data, uint8_t *drv_ctx, - struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs, - void *user_data[], int *status) -{ - struct qat_qp *qp = qp_data; - struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx; - struct qat_queue *tx_queue = &qp->tx_q; - struct qat_sym_session *ctx = dp_ctx->session; - uint32_t i, n; - uint32_t tail; - struct icp_qat_fw_la_bulk_req *req; - int32_t data_len; - - n = QAT_SYM_DP_GET_MAX_ENQ(qp, dp_ctx->cached_enqueue, vec->num); - if (unlikely(n == 0)) { - qat_sym_dp_fill_vec_status(vec->status, -1, vec->num); - *status = 0; - return 0; - } - - tail = dp_ctx->tail; - - for (i = 0; i < n; i++) { - req = (struct icp_qat_fw_la_bulk_req *)( - (uint8_t *)tx_queue->base_addr + tail); - rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); - - data_len = qat_sym_dp_parse_data_vec(qp, req, - vec->src_sgl[i].vec, - vec->src_sgl[i].num); - if (unlikely(data_len < 0)) - break; - req->comn_mid.opaque_data = (uint64_t)(uintptr_t)user_data[i]; - enqueue_one_aead_job(ctx, req, &vec->iv[i], &vec->digest[i], - &vec->aad[i], ofs, (uint32_t)data_len); - tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; - } - - if (unlikely(i < n)) - qat_sym_dp_fill_vec_status(vec->status + i, -1, n - i); - - dp_ctx->tail = tail; - dp_ctx->cached_enqueue += i; - *status = 0; - return i; -} - -static __rte_always_inline uint32_t -qat_sym_dp_dequeue_burst(void *qp_data, uint8_t *drv_ctx, - rte_cryptodev_raw_get_dequeue_count_t get_dequeue_count, - uint32_t max_nb_to_dequeue, - rte_cryptodev_raw_post_dequeue_t post_dequeue, - void 
**out_user_data, uint8_t is_user_data_array, - uint32_t *n_success_jobs, int *return_status) -{ - struct qat_qp *qp = qp_data; - struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx; - struct qat_queue *rx_queue = &qp->rx_q; - struct icp_qat_fw_comn_resp *resp; - void *resp_opaque; - uint32_t i, n, inflight; - uint32_t head; - uint8_t status; - - *n_success_jobs = 0; - *return_status = 0; - head = dp_ctx->head; - - inflight = qp->enqueued - qp->dequeued; - if (unlikely(inflight == 0)) - return 0; - - resp = (struct icp_qat_fw_comn_resp *)((uint8_t *)rx_queue->base_addr + - head); - /* no operation ready */ - if (unlikely(*(uint32_t *)resp == ADF_RING_EMPTY_SIG)) - return 0; - - resp_opaque = (void *)(uintptr_t)resp->opaque_data; - /* get the dequeue count */ - if (get_dequeue_count) { - n = get_dequeue_count(resp_opaque); - if (unlikely(n == 0)) - return 0; - } else { - if (unlikely(max_nb_to_dequeue == 0)) - return 0; - n = max_nb_to_dequeue; - } - - out_user_data[0] = resp_opaque; - status = QAT_SYM_DP_IS_RESP_SUCCESS(resp); - post_dequeue(resp_opaque, 0, status); - *n_success_jobs += status; - - head = (head + rx_queue->msg_size) & rx_queue->modulo_mask; - - /* we already finished dequeue when n == 1 */ - if (unlikely(n == 1)) { - i = 1; - goto end_deq; - } - - if (is_user_data_array) { - for (i = 1; i < n; i++) { - resp = (struct icp_qat_fw_comn_resp *)( - (uint8_t *)rx_queue->base_addr + head); - if (unlikely(*(uint32_t *)resp == - ADF_RING_EMPTY_SIG)) - goto end_deq; - out_user_data[i] = (void *)(uintptr_t)resp->opaque_data; - status = QAT_SYM_DP_IS_RESP_SUCCESS(resp); - *n_success_jobs += status; - post_dequeue(out_user_data[i], i, status); - head = (head + rx_queue->msg_size) & - rx_queue->modulo_mask; - } - - goto end_deq; - } - - /* opaque is not array */ - for (i = 1; i < n; i++) { - resp = (struct icp_qat_fw_comn_resp *)( - (uint8_t *)rx_queue->base_addr + head); - status = QAT_SYM_DP_IS_RESP_SUCCESS(resp); - if (unlikely(*(uint32_t *)resp == ADF_RING_EMPTY_SIG)) - goto end_deq; - head = (head + rx_queue->msg_size) & - rx_queue->modulo_mask; - post_dequeue(resp_opaque, i, status); - *n_success_jobs += status; - } - -end_deq: - dp_ctx->head = head; - dp_ctx->cached_dequeue += i; - return i; -} - -static __rte_always_inline void * -qat_sym_dp_dequeue(void *qp_data, uint8_t *drv_ctx, int *dequeue_status, - enum rte_crypto_op_status *op_status) -{ - struct qat_qp *qp = qp_data; - struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx; - struct qat_queue *rx_queue = &qp->rx_q; - register struct icp_qat_fw_comn_resp *resp; - - resp = (struct icp_qat_fw_comn_resp *)((uint8_t *)rx_queue->base_addr + - dp_ctx->head); - - if (unlikely(*(uint32_t *)resp == ADF_RING_EMPTY_SIG)) - return NULL; - - dp_ctx->head = (dp_ctx->head + rx_queue->msg_size) & - rx_queue->modulo_mask; - dp_ctx->cached_dequeue++; - - *op_status = QAT_SYM_DP_IS_RESP_SUCCESS(resp) ? 
- RTE_CRYPTO_OP_STATUS_SUCCESS : - RTE_CRYPTO_OP_STATUS_AUTH_FAILED; - *dequeue_status = 0; - return (void *)(uintptr_t)resp->opaque_data; -} - -static __rte_always_inline int -qat_sym_dp_kick_tail(void *qp_data, uint8_t *drv_ctx, uint32_t n) -{ - struct qat_qp *qp = qp_data; - struct qat_queue *tx_queue = &qp->tx_q; - struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx; - - if (unlikely(dp_ctx->cached_enqueue != n)) - return -1; - - qp->enqueued += n; - qp->stats.enqueued_count += n; - - tx_queue->tail = dp_ctx->tail; - - WRITE_CSR_RING_TAIL(qp->mmap_bar_addr, - tx_queue->hw_bundle_number, - tx_queue->hw_queue_number, tx_queue->tail); - tx_queue->csr_tail = tx_queue->tail; - dp_ctx->cached_enqueue = 0; - - return 0; -} - -static __rte_always_inline int -qat_sym_dp_update_head(void *qp_data, uint8_t *drv_ctx, uint32_t n) -{ - struct qat_qp *qp = qp_data; - struct qat_queue *rx_queue = &qp->rx_q; - struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx; - - if (unlikely(dp_ctx->cached_dequeue != n)) - return -1; - - rx_queue->head = dp_ctx->head; - rx_queue->nb_processed_responses += n; - qp->dequeued += n; - qp->stats.dequeued_count += n; - if (rx_queue->nb_processed_responses > QAT_CSR_HEAD_WRITE_THRESH) { - uint32_t old_head, new_head; - uint32_t max_head; - - old_head = rx_queue->csr_head; - new_head = rx_queue->head; - max_head = qp->nb_descriptors * rx_queue->msg_size; - - /* write out free descriptors */ - void *cur_desc = (uint8_t *)rx_queue->base_addr + old_head; - - if (new_head < old_head) { - memset(cur_desc, ADF_RING_EMPTY_SIG_BYTE, - max_head - old_head); - memset(rx_queue->base_addr, ADF_RING_EMPTY_SIG_BYTE, - new_head); - } else { - memset(cur_desc, ADF_RING_EMPTY_SIG_BYTE, new_head - - old_head); - } - rx_queue->nb_processed_responses = 0; - rx_queue->csr_head = new_head; - - /* write current head to CSR */ - WRITE_CSR_RING_HEAD(qp->mmap_bar_addr, - rx_queue->hw_bundle_number, rx_queue->hw_queue_number, - new_head); - } - - dp_ctx->cached_dequeue = 0; - return 0; -} - -int -qat_sym_configure_dp_ctx(struct rte_cryptodev *dev, uint16_t qp_id, - struct rte_crypto_raw_dp_ctx *raw_dp_ctx, - enum rte_crypto_op_sess_type sess_type, - union rte_cryptodev_session_ctx session_ctx, uint8_t is_update) -{ - struct qat_qp *qp; - struct qat_sym_session *ctx; - struct qat_sym_dp_ctx *dp_ctx; - - qp = dev->data->queue_pairs[qp_id]; - dp_ctx = (struct qat_sym_dp_ctx *)raw_dp_ctx->drv_ctx_data; - - if (!is_update) { - memset(raw_dp_ctx, 0, sizeof(*raw_dp_ctx) + - sizeof(struct qat_sym_dp_ctx)); - raw_dp_ctx->qp_data = dev->data->queue_pairs[qp_id]; - dp_ctx->tail = qp->tx_q.tail; - dp_ctx->head = qp->rx_q.head; - dp_ctx->cached_enqueue = dp_ctx->cached_dequeue = 0; - } - - if (sess_type != RTE_CRYPTO_OP_WITH_SESSION) - return -EINVAL; - - ctx = (struct qat_sym_session *)get_sym_session_private_data( - session_ctx.crypto_sess, qat_sym_driver_id); - - dp_ctx->session = ctx; - - raw_dp_ctx->enqueue_done = qat_sym_dp_kick_tail; - raw_dp_ctx->dequeue_burst = qat_sym_dp_dequeue_burst; - raw_dp_ctx->dequeue = qat_sym_dp_dequeue; - raw_dp_ctx->dequeue_done = qat_sym_dp_update_head; - - if ((ctx->qat_cmd == ICP_QAT_FW_LA_CMD_HASH_CIPHER || - ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_HASH) && - !ctx->is_gmac) { - /* AES-GCM or AES-CCM */ - if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128 || - ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64 || - (ctx->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_AES128 - && ctx->qat_mode == ICP_QAT_HW_CIPHER_CTR_MODE - && ctx->qat_hash_alg == - 
ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC)) { - raw_dp_ctx->enqueue_burst = - qat_sym_dp_enqueue_aead_jobs; - raw_dp_ctx->enqueue = qat_sym_dp_enqueue_single_aead; - } else { - raw_dp_ctx->enqueue_burst = - qat_sym_dp_enqueue_chain_jobs; - raw_dp_ctx->enqueue = qat_sym_dp_enqueue_single_chain; - } - } else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_AUTH || ctx->is_gmac) { - raw_dp_ctx->enqueue_burst = qat_sym_dp_enqueue_auth_jobs; - raw_dp_ctx->enqueue = qat_sym_dp_enqueue_single_auth; - } else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER) { - if (ctx->qat_mode == ICP_QAT_HW_CIPHER_AEAD_MODE || - ctx->qat_cipher_alg == - ICP_QAT_HW_CIPHER_ALGO_CHACHA20_POLY1305) { - raw_dp_ctx->enqueue_burst = - qat_sym_dp_enqueue_aead_jobs; - raw_dp_ctx->enqueue = qat_sym_dp_enqueue_single_aead; - } else { - raw_dp_ctx->enqueue_burst = - qat_sym_dp_enqueue_cipher_jobs; - raw_dp_ctx->enqueue = qat_sym_dp_enqueue_single_cipher; - } - } else - return -1; - - return 0; -} - -int -qat_sym_get_dp_ctx_size(__rte_unused struct rte_cryptodev *dev) -{ - return sizeof(struct qat_sym_dp_ctx); -} From patchwork Fri Nov 5 00:19:31 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ji, Kai" X-Patchwork-Id: 103820 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 29D6CA0C61; Fri, 5 Nov 2021 01:20:46 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id E2FA9427E0; Fri, 5 Nov 2021 01:19:55 +0100 (CET) Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by mails.dpdk.org (Postfix) with ESMTP id 91528427BA for ; Fri, 5 Nov 2021 01:19:51 +0100 (CET) X-IronPort-AV: E=McAfee;i="6200,9189,10158"; a="212563746" X-IronPort-AV: E=Sophos;i="5.87,209,1631602800"; d="scan'208";a="212563746" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Nov 2021 17:19:50 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.87,209,1631602800"; d="scan'208";a="468671832" Received: from silpixa00400272.ir.intel.com (HELO silpixa00400272.ger.corp.intel.com) ([10.237.223.111]) by orsmga002.jf.intel.com with ESMTP; 04 Nov 2021 17:19:49 -0700 From: Kai Ji To: dev@dpdk.org Cc: gakhil@marvell.com, Kai Ji Date: Fri, 5 Nov 2021 00:19:31 +0000 Message-Id: <20211105001932.28784-11-kai.ji@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211105001932.28784-1-kai.ji@intel.com> References: <20211102134913.21148-1-kai.ji@intel.com> <20211105001932.28784-1-kai.ji@intel.com> Subject: [dpdk-dev] [dpdk-dev v4 10/11] crypto/qat: support out of place SG list X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch adds out-of-place (OOP) scatter-gather list support to the QAT symmetric crypto driver. Signed-off-by: Kai Ji --- drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c | 28 ++++++++-- drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c | 14 ++++- drivers/crypto/qat/dev/qat_sym_pmd_gen1.c | 55 +++++++++++++++++--- 3 files changed, 83 insertions(+), 14 deletions(-) diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c index 6494019050..c59c25fe8f 100644 --- a/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c +++
b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c @@ -467,8 +467,18 @@ qat_sym_dp_enqueue_aead_jobs_gen3(void *qp_data, uint8_t *drv_ctx, (uint8_t *)tx_queue->base_addr + tail); rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); - data_len = qat_sym_build_req_set_data(req, user_data[i], cookie, - vec->src_sgl[i].vec, vec->src_sgl[i].num, NULL, 0); + if (vec->dest_sgl) { + data_len = qat_sym_build_req_set_data(req, + user_data[i], cookie, + vec->src_sgl[i].vec, vec->src_sgl[i].num, + vec->dest_sgl[i].vec, vec->dest_sgl[i].num); + } else { + data_len = qat_sym_build_req_set_data(req, + user_data[i], cookie, + vec->src_sgl[i].vec, + vec->src_sgl[i].num, NULL, 0); + } + if (unlikely(data_len < 0)) break; @@ -564,8 +574,18 @@ qat_sym_dp_enqueue_auth_jobs_gen3(void *qp_data, uint8_t *drv_ctx, (uint8_t *)tx_queue->base_addr + tail); rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); - data_len = qat_sym_build_req_set_data(req, user_data[i], cookie, - vec->src_sgl[i].vec, vec->src_sgl[i].num, NULL, 0); + if (vec->dest_sgl) { + data_len = qat_sym_build_req_set_data(req, + user_data[i], cookie, + vec->src_sgl[i].vec, vec->src_sgl[i].num, + vec->dest_sgl[i].vec, vec->dest_sgl[i].num); + } else { + data_len = qat_sym_build_req_set_data(req, + user_data[i], cookie, + vec->src_sgl[i].vec, + vec->src_sgl[i].num, NULL, 0); + } + if (unlikely(data_len < 0)) break; enqueue_one_auth_job_gen3(ctx, cookie, req, &vec->digest[i], diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c index 167c95abcf..529878e3e7 100644 --- a/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c +++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c @@ -295,8 +295,18 @@ qat_sym_dp_enqueue_aead_jobs_gen4(void *qp_data, uint8_t *drv_ctx, (uint8_t *)tx_queue->base_addr + tail); rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); - data_len = qat_sym_build_req_set_data(req, user_data[i], cookie, - vec->src_sgl[i].vec, vec->src_sgl[i].num, NULL, 0); + if (vec->dest_sgl) { + data_len = qat_sym_build_req_set_data(req, + user_data[i], cookie, + vec->src_sgl[i].vec, vec->src_sgl[i].num, + vec->dest_sgl[i].vec, vec->dest_sgl[i].num); + } else { + data_len = qat_sym_build_req_set_data(req, + user_data[i], cookie, + vec->src_sgl[i].vec, + vec->src_sgl[i].num, NULL, 0); + } + if (unlikely(data_len < 0)) break; diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c index b044c7b2d3..4bbdfdf367 100644 --- a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c +++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c @@ -529,9 +529,18 @@ qat_sym_dp_enqueue_cipher_jobs_gen1(void *qp_data, uint8_t *drv_ctx, (uint8_t *)tx_queue->base_addr + tail); rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); - data_len = qat_sym_build_req_set_data(req, user_data[i], - cookie, vec->src_sgl[i].vec, + if (vec->dest_sgl) { + data_len = qat_sym_build_req_set_data(req, + user_data[i], cookie, + vec->src_sgl[i].vec, vec->src_sgl[i].num, + vec->dest_sgl[i].vec, vec->dest_sgl[i].num); + } else { + data_len = qat_sym_build_req_set_data(req, + user_data[i], cookie, + vec->src_sgl[i].vec, vec->src_sgl[i].num, NULL, 0); + } + if (unlikely(data_len < 0)) break; enqueue_one_cipher_job_gen1(ctx, req, &vec->iv[i], ofs, @@ -628,8 +637,18 @@ qat_sym_dp_enqueue_auth_jobs_gen1(void *qp_data, uint8_t *drv_ctx, (uint8_t *)tx_queue->base_addr + tail); rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); - data_len = qat_sym_build_req_set_data(req, user_data[i], cookie, - 
vec->src_sgl[i].vec, vec->src_sgl[i].num, NULL, 0); + if (vec->dest_sgl) { + data_len = qat_sym_build_req_set_data(req, + user_data[i], cookie, + vec->src_sgl[i].vec, vec->src_sgl[i].num, + vec->dest_sgl[i].vec, vec->dest_sgl[i].num); + } else { + data_len = qat_sym_build_req_set_data(req, + user_data[i], cookie, + vec->src_sgl[i].vec, + vec->src_sgl[i].num, NULL, 0); + } + if (unlikely(data_len < 0)) break; enqueue_one_auth_job_gen1(ctx, req, &vec->digest[i], @@ -728,8 +747,18 @@ qat_sym_dp_enqueue_chain_jobs_gen1(void *qp_data, uint8_t *drv_ctx, (uint8_t *)tx_queue->base_addr + tail); rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); - data_len = qat_sym_build_req_set_data(req, user_data[i], cookie, - vec->src_sgl[i].vec, vec->src_sgl[i].num, NULL, 0); + if (vec->dest_sgl) { + data_len = qat_sym_build_req_set_data(req, + user_data[i], cookie, + vec->src_sgl[i].vec, vec->src_sgl[i].num, + vec->dest_sgl[i].vec, vec->dest_sgl[i].num); + } else { + data_len = qat_sym_build_req_set_data(req, + user_data[i], cookie, + vec->src_sgl[i].vec, + vec->src_sgl[i].num, NULL, 0); + } + if (unlikely(data_len < 0)) break; @@ -833,8 +862,18 @@ qat_sym_dp_enqueue_aead_jobs_gen1(void *qp_data, uint8_t *drv_ctx, (uint8_t *)tx_queue->base_addr + tail); rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); - data_len = qat_sym_build_req_set_data(req, user_data[i], cookie, - vec->src_sgl[i].vec, vec->src_sgl[i].num, NULL, 0); + if (vec->dest_sgl) { + data_len = qat_sym_build_req_set_data(req, + user_data[i], cookie, + vec->src_sgl[i].vec, vec->src_sgl[i].num, + vec->dest_sgl[i].vec, vec->dest_sgl[i].num); + } else { + data_len = qat_sym_build_req_set_data(req, + user_data[i], cookie, + vec->src_sgl[i].vec, + vec->src_sgl[i].num, NULL, 0); + } + if (unlikely(data_len < 0)) break; From patchwork Fri Nov 5 00:19:32 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ji, Kai" X-Patchwork-Id: 103821 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id B1248A0C61; Fri, 5 Nov 2021 01:20:52 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 4B2A4427F3; Fri, 5 Nov 2021 01:19:58 +0100 (CET) Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by mails.dpdk.org (Postfix) with ESMTP id 6375F427D8 for ; Fri, 5 Nov 2021 01:19:53 +0100 (CET) X-IronPort-AV: E=McAfee;i="6200,9189,10158"; a="212563751" X-IronPort-AV: E=Sophos;i="5.87,209,1631602800"; d="scan'208";a="212563751" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Nov 2021 17:19:52 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.87,209,1631602800"; d="scan'208";a="468671845" Received: from silpixa00400272.ir.intel.com (HELO silpixa00400272.ger.corp.intel.com) ([10.237.223.111]) by orsmga002.jf.intel.com with ESMTP; 04 Nov 2021 17:19:50 -0700 From: Kai Ji To: dev@dpdk.org Cc: gakhil@marvell.com, Kai Ji , pablo.de.lara.guarch@intel.com, adamx.dybkowski@intel.com Date: Fri, 5 Nov 2021 00:19:32 +0000 Message-Id: <20211105001932.28784-12-kai.ji@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211105001932.28784-1-kai.ji@intel.com> References: <20211102134913.21148-1-kai.ji@intel.com> <20211105001932.28784-1-kai.ji@intel.com> Subject: [dpdk-dev] 
[dpdk-dev v4 11/11] test/cryptodev: fix incomplete data length X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch fixes incorrect data length computation in the cryptodev unit tests. Previously some data lengths were set incorrectly; this did not matter for the crypto op unit tests but is critical for the raw data path API unit tests. The patch addresses the issue by setting the correct data lengths in the affected tests. Fixes: 681f540da52b ("cryptodev: do not use AAD in wireless algorithms") Cc: pablo.de.lara.guarch@intel.com Fixes: e847fc512817 ("test/crypto: add encrypted digest case for AES-CTR-CMAC") Cc: adamx.dybkowski@intel.com Signed-off-by: Kai Ji --- app/test/test_cryptodev.c | 27 ++++++++++++++++++--------- 1 file changed, 18 insertions(+), 9 deletions(-) diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c index efd8bfd7a0..e1f7c6454d 100644 --- a/app/test/test_cryptodev.c +++ b/app/test/test_cryptodev.c @@ -209,6 +209,7 @@ process_sym_raw_dp_op(uint8_t dev_id, uint16_t qp_id, int enqueue_status, dequeue_status; struct crypto_unittest_params *ut_params = &unittest_params; int is_sgl = sop->m_src->nb_segs > 1; + int is_oop = 0; ctx_service_size = rte_cryptodev_get_raw_dp_ctx_size(dev_id); if (ctx_service_size < 0) { @@ -247,6 +248,9 @@ process_sym_raw_dp_op(uint8_t dev_id, uint16_t qp_id, ofs.raw = 0; + if ((sop->m_dst != NULL) && (sop->m_dst != sop->m_src)) + is_oop = 1; + if (is_cipher && is_auth) { cipher_offset = sop->cipher.data.offset; cipher_len = sop->cipher.data.length; @@ -277,6 +281,8 @@ process_sym_raw_dp_op(uint8_t dev_id, uint16_t qp_id, if (is_sgl) { uint32_t remaining_off = auth_offset + auth_len; struct rte_mbuf *sgl_buf = sop->m_src; + if (is_oop) + sgl_buf = sop->m_dst; while (remaining_off >= rte_pktmbuf_data_len(sgl_buf) && sgl_buf->next != NULL) { @@ -293,7 +299,8 @@ process_sym_raw_dp_op(uint8_t dev_id, uint16_t qp_id, /* Then check if digest-encrypted conditions are met */ if ((auth_offset + auth_len < cipher_offset + cipher_len) && (digest.iova == auth_end_iova) && is_sgl) - max_len = RTE_MAX(max_len, auth_offset + auth_len + + max_len = RTE_MAX(max_len, + auth_offset + auth_len + ut_params->auth_xform.auth.digest_length); } else if (is_cipher) { @@ -356,7 +363,7 @@ process_sym_raw_dp_op(uint8_t dev_id, uint16_t qp_id, sgl.num = n; /* Out of place */ - if (sop->m_dst != NULL) { + if (is_oop) { dest_sgl.vec = dest_data_vec; vec.dest_sgl = &dest_sgl; n = rte_crypto_mbuf_to_vec(sop->m_dst, 0, max_len, @@ -4102,9 +4109,9 @@ test_kasumi_decryption(const struct kasumi_test_data *tdata) /* Create KASUMI operation */ retval = create_wireless_algo_cipher_operation(tdata->cipher_iv.data, - tdata->cipher_iv.len, - tdata->ciphertext.len, - tdata->validCipherOffsetInBits.len); + tdata->cipher_iv.len, + RTE_ALIGN_CEIL(tdata->validCipherLenInBits.len, 8), + tdata->validCipherOffsetInBits.len); if (retval < 0) return retval; @@ -7335,6 +7342,7 @@ test_mixed_auth_cipher(const struct mixed_cipher_auth_test_data *tdata, unsigned int plaintext_len; unsigned int ciphertext_pad_len; unsigned int ciphertext_len; + unsigned int data_len; struct rte_cryptodev_info dev_info; struct rte_crypto_op *op; @@ -7395,21 +7403,22 @@ test_mixed_auth_cipher(const struct mixed_cipher_auth_test_data *tdata, plaintext_len = ceil_byte_length(tdata->plaintext.len_bits); ciphertext_pad_len =
RTE_ALIGN_CEIL(ciphertext_len, 16); plaintext_pad_len = RTE_ALIGN_CEIL(plaintext_len, 16); + data_len = RTE_MAX(ciphertext_pad_len, plaintext_pad_len); if (verify) { ciphertext = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf, - ciphertext_pad_len); + data_len); memcpy(ciphertext, tdata->ciphertext.data, ciphertext_len); if (op_mode == OUT_OF_PLACE) - rte_pktmbuf_append(ut_params->obuf, ciphertext_pad_len); + rte_pktmbuf_append(ut_params->obuf, data_len); debug_hexdump(stdout, "ciphertext:", ciphertext, ciphertext_len); } else { plaintext = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf, - plaintext_pad_len); + data_len); memcpy(plaintext, tdata->plaintext.data, plaintext_len); if (op_mode == OUT_OF_PLACE) - rte_pktmbuf_append(ut_params->obuf, plaintext_pad_len); + rte_pktmbuf_append(ut_params->obuf, data_len); debug_hexdump(stdout, "plaintext:", plaintext, plaintext_len); }
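
The data_len change in test_mixed_auth_cipher() above is the heart of the fix: when the padded ciphertext and padded plaintext lengths differ, the area appended to both m_src and m_dst has to cover the larger of the two, otherwise the raw data path API reads or writes past the end of the shorter buffer. Below is a minimal standalone sketch of that sizing rule (plain C, no DPDK dependencies; ALIGN_CEIL() and bits_to_bytes() are simplified stand-ins for RTE_ALIGN_CEIL() and the test's ceil_byte_length() helper, and the bit lengths are made up purely for illustration):

#include <stdio.h>

/* Simplified stand-in for RTE_ALIGN_CEIL(): round v up to a multiple of align. */
#define ALIGN_CEIL(v, align) ((((v) + (align) - 1) / (align)) * (align))

/* Simplified stand-in for the test's ceil_byte_length() helper. */
static unsigned int
bits_to_bytes(unsigned int len_bits)
{
	return (len_bits + 7) / 8;
}

int
main(void)
{
	/* Illustrative vector where the ciphertext is longer than the
	 * plaintext, so the two pad to different sizes.
	 */
	unsigned int plaintext_len = bits_to_bytes(120);  /* 15 bytes -> pads to 16 */
	unsigned int ciphertext_len = bits_to_bytes(160); /* 20 bytes -> pads to 32 */

	unsigned int plaintext_pad_len = ALIGN_CEIL(plaintext_len, 16);
	unsigned int ciphertext_pad_len = ALIGN_CEIL(ciphertext_len, 16);

	/* The raw data path walks both the source and destination buffers,
	 * so each mbuf must be sized for whichever direction needs more room.
	 */
	unsigned int data_len = plaintext_pad_len > ciphertext_pad_len ?
			plaintext_pad_len : ciphertext_pad_len;

	printf("append %u bytes to both m_src and m_dst\n", data_len);
	return 0;
}

In this example, appending only plaintext_pad_len (16 bytes) would leave no room for the 32-byte padded ciphertext, which is the kind of mismatch the raw data path tests are sensitive to even though the crypto op path tolerated it.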