From patchwork Tue Feb 27 11:33:44 2024
X-Patchwork-Submitter: "Nayak, Nishikanta"
X-Patchwork-Id: 137347
X-Patchwork-Delegate: gakhil@marvell.com
From: Nishikant Nayak
To: dev@dpdk.org
Cc: ciara.power@intel.com, kai.ji@intel.com, arkadiuszx.kusztal@intel.com,
 rakesh.s.joshi@intel.com, Nishikant Nayak
Subject: [PATCH v5 3/4] crypto/qat: update headers for GEN LCE support
Date: Tue, 27 Feb 2024 11:33:44 +0000
Message-Id: <20240227113345.863082-4-nishikanta.nayak@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240227113345.863082-1-nishikanta.nayak@intel.com>
References: <20231220132616.318983-1-nishikanta.nayak@intel.com>
 <20240227113345.863082-1-nishikanta.nayak@intel.com>

This patch handles the changes required for updating the common header
fields specific to GEN LCE. It also adds and updates response processing
APIs based on GEN LCE requirements.

Signed-off-by: Nishikant Nayak
Acked-by: Ciara Power
---
v2:
    - Renamed device from GEN 5 to GEN LCE.
    - Removed unused code.
    - Updated macro names.
    - Added GEN LCE specific API for dequeue burst.
    - Fixed code formatting.
---
 drivers/crypto/qat/qat_sym.c         | 16 ++++++-
 drivers/crypto/qat/qat_sym.h         | 60 ++++++++++++++++++++++++++-
 drivers/crypto/qat/qat_sym_session.c | 62 +++++++++++++++++++++++++++-
 drivers/crypto/qat/qat_sym_session.h | 10 ++++-
 4 files changed, 140 insertions(+), 8 deletions(-)

diff --git a/drivers/crypto/qat/qat_sym.c b/drivers/crypto/qat/qat_sym.c
index 6e03bde841..439a3fc00b 100644
--- a/drivers/crypto/qat/qat_sym.c
+++ b/drivers/crypto/qat/qat_sym.c
@@ -180,7 +180,15 @@ qat_sym_dequeue_burst(void *qp, struct rte_crypto_op **ops,
 		uint16_t nb_ops)
 {
 	return qat_dequeue_op_burst(qp, (void **)ops,
-			qat_sym_process_response, nb_ops);
+				qat_sym_process_response, nb_ops);
+}
+
+uint16_t
+qat_sym_dequeue_burst_gen_lce(void *qp, struct rte_crypto_op **ops,
+				uint16_t nb_ops)
+{
+	return qat_dequeue_op_burst(qp, (void **)ops,
+				qat_sym_process_response_gen_lce, nb_ops);
 }
 
 int
@@ -200,6 +208,7 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 	char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	struct rte_cryptodev *cryptodev;
 	struct qat_cryptodev_private *internals;
+	enum qat_device_gen qat_dev_gen = qat_pci_dev->qat_dev_gen;
 	const struct qat_crypto_gen_dev_ops *gen_dev_ops =
 		&qat_sym_gen_dev_ops[qat_pci_dev->qat_dev_gen];
 
@@ -249,7 +258,10 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 	cryptodev->dev_ops = gen_dev_ops->cryptodev_ops;
 
 	cryptodev->enqueue_burst = qat_sym_enqueue_burst;
-	cryptodev->dequeue_burst = qat_sym_dequeue_burst;
+	if (qat_dev_gen == QAT_GEN_LCE)
+		cryptodev->dequeue_burst = qat_sym_dequeue_burst_gen_lce;
+	else
+		cryptodev->dequeue_burst = qat_sym_dequeue_burst;
 
 	cryptodev->feature_flags = gen_dev_ops->get_feature_flags(qat_pci_dev);
 
diff --git a/drivers/crypto/qat/qat_sym.h b/drivers/crypto/qat/qat_sym.h
index f2f197d050..3461113c13 100644
--- a/drivers/crypto/qat/qat_sym.h
+++ b/drivers/crypto/qat/qat_sym.h
@@ -90,7 +90,7 @@
 /*
  * Maximum number of SGL entries
  */
-#define QAT_SYM_SGL_MAX_NUMBER	16
+#define QAT_SYM_SGL_MAX_NUMBER 16
 
 /* Maximum data length for single pass GMAC: 2^14-1 */
 #define QAT_AES_GMAC_SPC_MAX_SIZE 16383
@@ -142,6 +142,10 @@ uint16_t
 qat_sym_dequeue_burst(void *qp, struct rte_crypto_op **ops,
 		uint16_t nb_ops);
 
+uint16_t
+qat_sym_dequeue_burst_gen_lce(void *qp, struct rte_crypto_op **ops,
+		uint16_t nb_ops);
+
 #ifdef RTE_QAT_OPENSSL
 /** Encrypt a single partial block
  * Depends on openssl libcrypto
@@ -390,6 +394,52 @@ qat_sym_process_response(void **op, uint8_t *resp, void *op_cookie,
 	return 1;
 }
 
+static __rte_always_inline int
+qat_sym_process_response_gen_lce(void **op, uint8_t *resp,
+	void *op_cookie __rte_unused,
+	uint64_t *dequeue_err_count __rte_unused)
+{
+	struct icp_qat_fw_comn_resp *resp_msg =
+			(struct icp_qat_fw_comn_resp *)resp;
+	struct rte_crypto_op *rx_op = (struct rte_crypto_op *)(uintptr_t)
+			(resp_msg->opaque_data);
+	struct qat_sym_session *sess;
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+	QAT_DP_HEXDUMP_LOG(DEBUG, "qat_response:", (uint8_t *)resp_msg,
+			sizeof(struct icp_qat_fw_comn_resp));
+#endif
+
+	sess = CRYPTODEV_GET_SYM_SESS_PRIV(rx_op->sym->session);
+
+	rx_op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+
+	if (ICP_QAT_FW_COMN_STATUS_FLAG_OK !=
+		ICP_QAT_FW_COMN_RESP_UNSUPPORTED_REQUEST_STAT_GET(
+			resp_msg->comn_hdr.comn_status))
+		rx_op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+
+	else if (ICP_QAT_FW_COMN_STATUS_FLAG_OK !=
+		ICP_QAT_FW_COMN_RESP_INVALID_PARAM_STAT_GET(
+			resp_msg->comn_hdr.comn_status))
+		rx_op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
+
+	if (sess->qat_dir == ICP_QAT_HW_CIPHER_DECRYPT) {
+		if (ICP_QAT_FW_LA_VER_STATUS_FAIL ==
+			ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(
+				resp_msg->comn_hdr.comn_status))
+			rx_op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+	}
+
+	*op = (void *)rx_op;
+
+	/*
+	 * return 1 as dequeue op only move on to the next op
+	 * if one was ready to return to API
+	 */
+	return 1;
+}
+
 int
 qat_sym_configure_dp_ctx(struct rte_cryptodev *dev, uint16_t qp_id,
 	struct rte_crypto_raw_dp_ctx *raw_dp_ctx,
@@ -455,7 +505,13 @@ qat_sym_preprocess_requests(void **ops __rte_unused,
 
 static inline void
 qat_sym_process_response(void **op __rte_unused, uint8_t *resp __rte_unused,
-	void *op_cookie __rte_unused)
+	void *op_cookie __rte_unused, uint64_t *dequeue_err_count __rte_unused)
+{
+}
+
+static inline void
+qat_sym_process_response_gen_lce(void **op __rte_unused, uint8_t *resp __rte_unused,
+	void *op_cookie __rte_unused, uint64_t *dequeue_err_count __rte_unused)
 {
 }
 
diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c
index 9f4f6c3d93..8f50b61365 100644
--- a/drivers/crypto/qat/qat_sym_session.c
+++ b/drivers/crypto/qat/qat_sym_session.c
@@ -136,6 +136,9 @@ qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
 static void
 qat_sym_session_init_common_hdr(struct qat_sym_session *session);
 
+static void
+qat_sym_session_init_gen_lce_hdr(struct qat_sym_session *session);
+
 /* Req/cd init functions */
 
 static void
@@ -738,6 +741,12 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev,
 				session->qat_cmd);
 		return -ENOTSUP;
 	}
+
+	if (qat_dev_gen == QAT_GEN_LCE) {
+		qat_sym_session_init_gen_lce_hdr(session);
+		return 0;
+	}
+
 	qat_sym_session_finalize(session);
 
 	return qat_sym_gen_dev_ops[qat_dev_gen].set_session((void *)dev,
@@ -1016,6 +1025,12 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev,
 			dev->data->dev_private;
 	enum qat_device_gen qat_dev_gen =
 			internals->qat_dev->qat_dev_gen;
+	if (qat_dev_gen == QAT_GEN_LCE) {
+		struct icp_qat_fw_la_bulk_req *req_tmpl = &session->fw_req;
+		struct lce_key_buff_desc *key_buff = &req_tmpl->key_buff;
+
+		key_buff->keybuff = session->key_paddr;
+	}
 
 	/*
 	 * Store AEAD IV parameters as cipher IV,
@@ -1079,9 +1094,15 @@
 	}
 
 	if (session->is_single_pass) {
-		if (qat_sym_cd_cipher_set(session,
+		if (qat_dev_gen != QAT_GEN_LCE) {
+			if (qat_sym_cd_cipher_set(session,
 				aead_xform->key.data, aead_xform->key.length))
-			return -EINVAL;
+				return -EINVAL;
+		} else {
+			session->auth_key_length = aead_xform->key.length;
+			memcpy(session->key_array, aead_xform->key.data,
+				aead_xform->key.length);
+		}
 	} else if ((aead_xform->op == RTE_CRYPTO_AEAD_OP_ENCRYPT &&
 			aead_xform->algo == RTE_CRYPTO_AEAD_AES_GCM) ||
 			(aead_xform->op == RTE_CRYPTO_AEAD_OP_DECRYPT &&
@@ -1970,6 +1991,43 @@ qat_sym_session_init_common_hdr(struct qat_sym_session *session)
 					ICP_QAT_FW_LA_NO_DIGEST_IN_BUFFER);
 }
 
+static void
+qat_sym_session_init_gen_lce_hdr(struct qat_sym_session *session)
+{
+	struct icp_qat_fw_la_bulk_req *req_tmpl = &session->fw_req;
+	struct icp_qat_fw_comn_req_hdr *header = &req_tmpl->comn_hdr;
+
+	/*
+	 * GEN_LCE specifies separate command id for AEAD operations but Cryptodev
+	 * API processes AEAD operations as Single pass Crypto operations.
+	 * Hence even for GEN_LCE, Session Algo Command ID is CIPHER.
+	 * Note, however Session Algo Mode is AEAD.
+	 */
+	header->service_cmd_id = ICP_QAT_FW_LA_CMD_AEAD;
+	header->service_type = ICP_QAT_FW_COMN_REQ_CPM_FW_LA;
+	header->hdr_flags =
+		ICP_QAT_FW_COMN_HDR_FLAGS_BUILD_GEN_LCE(ICP_QAT_FW_COMN_REQ_FLAG_SET,
+			ICP_QAT_FW_COMN_GEN_LCE_DESC_LAYOUT);
+	header->comn_req_flags =
+		ICP_QAT_FW_COMN_FLAGS_BUILD_GEN_LCE(QAT_COMN_PTR_TYPE_SGL,
+			QAT_COMN_KEY_BUFFER_USED);
+
+	ICP_QAT_FW_SYM_AEAD_ALGO_SET(header->serv_specif_flags,
+		QAT_LA_CRYPTO_AEAD_AES_GCM_GEN_LCE);
+	ICP_QAT_FW_SYM_IV_SIZE_SET(header->serv_specif_flags,
+		ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS);
+	ICP_QAT_FW_SYM_IV_IN_DESC_FLAG_SET(header->serv_specif_flags,
+		ICP_QAT_FW_SYM_IV_IN_DESC_VALID);
+
+	if (session->qat_dir == ICP_QAT_HW_CIPHER_DECRYPT) {
+		ICP_QAT_FW_SYM_DIR_FLAG_SET(header->serv_specif_flags,
+			ICP_QAT_HW_CIPHER_DECRYPT);
+	} else {
+		ICP_QAT_FW_SYM_DIR_FLAG_SET(header->serv_specif_flags,
+			ICP_QAT_HW_CIPHER_ENCRYPT);
+	}
+}
+
 int qat_sym_cd_cipher_set(struct qat_sym_session *cdesc,
 						const uint8_t *cipherkey,
 						uint32_t cipherkeylen)
diff --git a/drivers/crypto/qat/qat_sym_session.h b/drivers/crypto/qat/qat_sym_session.h
index 9209e2e8df..958af03405 100644
--- a/drivers/crypto/qat/qat_sym_session.h
+++ b/drivers/crypto/qat/qat_sym_session.h
@@ -111,10 +111,16 @@ struct qat_sym_session {
 	enum icp_qat_hw_auth_op auth_op;
 	enum icp_qat_hw_auth_mode auth_mode;
 	void *bpi_ctx;
-	struct qat_sym_cd cd;
+	union {
+		struct qat_sym_cd cd;
+		uint8_t key_array[32];
+	};
 	uint8_t prefix_state[QAT_PREFIX_TBL_SIZE] __rte_cache_aligned;
 	uint8_t *cd_cur_ptr;
-	phys_addr_t cd_paddr;
+	union {
+		phys_addr_t cd_paddr;
+		phys_addr_t key_paddr;
+	};
 	phys_addr_t prefix_paddr;
 	struct icp_qat_fw_la_bulk_req fw_req;
 	uint8_t aad_len;
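
[Editor's note, not part of the patch] For context: because the GEN LCE handler
is selected inside qat_sym_dev_create() when cryptodev->dequeue_burst is
assigned, the application-side dequeue path is unchanged. The sketch below is
illustrative only; dev_id/qp_id are assumed to come from the usual cryptodev
configuration steps, and BURST_SZ is a hypothetical constant for this example.

#include <rte_cryptodev.h>

#define BURST_SZ 32 /* hypothetical burst size for this example */

static uint16_t
poll_crypto_completions(uint8_t dev_id, uint16_t qp_id)
{
	struct rte_crypto_op *ops[BURST_SZ];
	uint16_t i, nb;

	/* On a GEN LCE device this resolves to qat_sym_dequeue_burst_gen_lce(),
	 * otherwise to qat_sym_dequeue_burst(). */
	nb = rte_cryptodev_dequeue_burst(dev_id, qp_id, ops, BURST_SZ);

	for (i = 0; i < nb; i++) {
		if (ops[i]->status == RTE_CRYPTO_OP_STATUS_AUTH_FAILED) {
			/* GCM tag mismatch reported by
			 * qat_sym_process_response_gen_lce() in the
			 * decrypt direction. */
		}
		rte_crypto_op_free(ops[i]);
	}
	return nb;
}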