From patchwork Wed Dec 20 13:26:15 2023
X-Patchwork-Submitter: "Nayak, Nishikanta"
X-Patchwork-Id: 135396
X-Patchwork-Delegate: gakhil@marvell.com
From: Nishikant Nayak
To: dev@dpdk.org
Cc: kai.ji@intel.com, ciara.power@intel.com, arkadiuszx.kusztal@intel.com,
 Nishikant Nayak, Akhil Goyal, Fan Zhang
Subject: [PATCH 3/4] crypto/qat: update headers for GEN5 support
Date: Wed, 20 Dec 2023 13:26:15 +0000
Message-Id: <20231220132616.318983-3-nishikanta.nayak@intel.com>
In-Reply-To: <20231220132616.318983-1-nishikanta.nayak@intel.com>
References: <20231220132616.318983-1-nishikanta.nayak@intel.com>
X-Mailer: git-send-email 2.25.1
List-Id: DPDK patches and discussions

This patch handles the changes required to update the common header fields
specific to GEN5. It also adds and updates the response processing APIs
based on the GEN5 requirements.
Signed-off-by: Nishikant Nayak
---
 drivers/crypto/qat/qat_sym.c         | 10 ++++-
 drivers/crypto/qat/qat_sym.h         | 60 +++++++++++++++++++++++++++-
 drivers/crypto/qat/qat_sym_session.c | 52 ++++++++++++++++++++++++
 drivers/crypto/qat/qat_sym_session.h |  5 ++-
 lib/cryptodev/rte_crypto_sym.h       |  3 ++
 5 files changed, 126 insertions(+), 4 deletions(-)

diff --git a/drivers/crypto/qat/qat_sym.c b/drivers/crypto/qat/qat_sym.c
index 6e03bde841..8fbb8831ab 100644
--- a/drivers/crypto/qat/qat_sym.c
+++ b/drivers/crypto/qat/qat_sym.c
@@ -179,8 +179,14 @@ uint16_t
 qat_sym_dequeue_burst(void *qp, struct rte_crypto_op **ops,
 		uint16_t nb_ops)
 {
-	return qat_dequeue_op_burst(qp, (void **)ops,
-			qat_sym_process_response, nb_ops);
+	struct qat_qp *tmp_qp = (struct qat_qp *)qp;
+
+	if (tmp_qp->qat_dev_gen == QAT_GEN5)
+		return qat_dequeue_op_burst(qp, (void **)ops,
+				qat_sym_process_response_gen5, nb_ops);
+	else
+		return qat_dequeue_op_burst(qp, (void **)ops,
+				qat_sym_process_response, nb_ops);
 }
 
 int
diff --git a/drivers/crypto/qat/qat_sym.h b/drivers/crypto/qat/qat_sym.h
index 71e9d5f34b..7db21fc341 100644
--- a/drivers/crypto/qat/qat_sym.h
+++ b/drivers/crypto/qat/qat_sym.h
@@ -95,6 +95,12 @@
 /* Maximum data length for single pass GMAC: 2^14-1 */
 #define QAT_AES_GMAC_SPC_MAX_SIZE 16383
 
+/* Digest length for GCM Algo is 16 bytes */
+#define GCM_256_DIGEST_LEN 16
+
+/* IV length for GCM algo is 12 bytes */
+#define GCM_IV_LENGTH 12
+
 struct qat_sym_session;
 
 struct qat_sym_sgl {
@@ -383,6 +389,52 @@ qat_sym_process_response(void **op, uint8_t *resp, void *op_cookie,
 	return 1;
 }
 
+static __rte_always_inline int
+qat_sym_process_response_gen5(void **op, uint8_t *resp,
+	void *op_cookie __rte_unused,
+	uint64_t *dequeue_err_count __rte_unused)
+{
+	struct icp_qat_fw_comn_resp *resp_msg =
+			(struct icp_qat_fw_comn_resp *)resp;
+	struct rte_crypto_op *rx_op = (struct rte_crypto_op *)(uintptr_t)
+			(resp_msg->opaque_data);
+	struct qat_sym_session *sess;
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+	QAT_DP_HEXDUMP_LOG(DEBUG, "qat_response:", (uint8_t *)resp_msg,
+			sizeof(struct icp_qat_fw_comn_resp));
+#endif
+
+	sess = CRYPTODEV_GET_SYM_SESS_PRIV(rx_op->sym->session);
+
+	rx_op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+
+	if (ICP_QAT_FW_COMN_STATUS_FLAG_OK !=
+		ICP_QAT_FW_COMN_RESP_UNSUPPORTED_REQUEST_STAT_GET(
+		resp_msg->comn_hdr.comn_status))
+		rx_op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+
+	else if (ICP_QAT_FW_COMN_STATUS_FLAG_OK !=
+		ICP_QAT_FW_COMN_RESP_INVALID_PARAM_STAT_GET(
+		resp_msg->comn_hdr.comn_status))
+		rx_op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
+
+	if (sess->qat_dir == ICP_QAT_HW_CIPHER_DECRYPT) {
+		if (ICP_QAT_FW_LA_VER_STATUS_FAIL ==
+			ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(
+			resp_msg->comn_hdr.comn_status))
+			rx_op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+	}
+
+	*op = (void *)rx_op;
+
+	/*
+	 * return 1 as dequeue op only move on to the next op
+	 * if one was ready to return to API
+	 */
+	return 1;
+}
+
 int
 qat_sym_configure_dp_ctx(struct rte_cryptodev *dev, uint16_t qp_id,
 	struct rte_crypto_raw_dp_ctx *raw_dp_ctx,
@@ -448,7 +500,13 @@ qat_sym_preprocess_requests(void **ops __rte_unused,
 
 static inline void
 qat_sym_process_response(void **op __rte_unused, uint8_t *resp __rte_unused,
-	void *op_cookie __rte_unused)
+	void *op_cookie __rte_unused, uint64_t *dequeue_err_count __rte_unused)
+{
+}
+
+static inline void
+qat_sym_process_response_gen5(void **op __rte_unused, uint8_t *resp __rte_unused,
+	void *op_cookie __rte_unused, uint64_t *dequeue_err_count __rte_unused)
 {
 }
 
diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c
index 9f4f6c3d93..c97d6509b8 100644
--- a/drivers/crypto/qat/qat_sym_session.c
+++ b/drivers/crypto/qat/qat_sym_session.c
@@ -136,6 +136,9 @@ qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
 static void
 qat_sym_session_init_common_hdr(struct qat_sym_session *session);
 
+static void
+qat_sym_session_init_gen5_hdr(struct qat_sym_session *session);
+
 /* Req/cd init functions */
 
 static void
@@ -738,6 +741,12 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev,
 				session->qat_cmd);
 		return -ENOTSUP;
 	}
+
+	if (qat_dev_gen == QAT_GEN5) {
+		qat_sym_session_init_gen5_hdr(session);
+		return 0;
+	}
+
 	qat_sym_session_finalize(session);
 
 	return qat_sym_gen_dev_ops[qat_dev_gen].set_session((void *)dev,
@@ -1082,6 +1091,12 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev,
 		if (qat_sym_cd_cipher_set(session,
 			aead_xform->key.data, aead_xform->key.length))
 			return -EINVAL;
+
+		if (qat_dev_gen == QAT_GEN5) {
+			session->auth_key_length = aead_xform->key.length;
+			memcpy(session->key_array, aead_xform->key.data,
+					aead_xform->key.length);
+		}
 	} else if ((aead_xform->op == RTE_CRYPTO_AEAD_OP_ENCRYPT &&
 			aead_xform->algo == RTE_CRYPTO_AEAD_AES_GCM) ||
 			(aead_xform->op == RTE_CRYPTO_AEAD_OP_DECRYPT &&
@@ -1970,6 +1985,43 @@ qat_sym_session_init_common_hdr(struct qat_sym_session *session)
 			ICP_QAT_FW_LA_NO_DIGEST_IN_BUFFER);
 }
 
+static void
+qat_sym_session_init_gen5_hdr(struct qat_sym_session *session)
+{
+	struct icp_qat_fw_la_bulk_req *req_tmpl = &session->fw_req;
+	struct icp_qat_fw_comn_req_hdr *header = &req_tmpl->comn_hdr;
+
+	/*
+	 * GEN5 specifies separate command id for AEAD operations but Cryptodev
+	 * API processes AEAD operations as Single pass Crypto operations.
+	 * Hence even for GEN5, Session Algo Command ID is CIPHER.
+	 * Note, however Session Algo Mode is AEAD.
+	 */
+	header->service_cmd_id = ICP_QAT_FW_LA_CMD_AEAD;
+	header->service_type = ICP_QAT_FW_COMN_REQ_CPM_FW_LA;
+	header->hdr_flags =
+		ICP_QAT_FW_COMN_HDR_FLAGS_BUILD_GEN5(ICP_QAT_FW_COMN_REQ_FLAG_SET,
+			ICP_QAT_FW_COMN_GEN5_DESC_LAYOUT);
+	header->comn_req_flags =
+		ICP_QAT_FW_COMN_FLAGS_BUILD_GEN5(QAT_COMN_PTR_TYPE_SGL,
+			QAT_COMN_KEY_BUFFER_USED);
+
+	ICP_QAT_FW_SYM_AEAD_ALGO_SET(header->serv_specif_flags,
+		RTE_CRYPTO_AEAD_AES_GCM_GEN5);
+	ICP_QAT_FW_SYM_IV_SIZE_SET(header->serv_specif_flags,
+		ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS);
+	ICP_QAT_FW_SYM_IV_IN_DESC_FLAG_SET(header->serv_specif_flags,
+		ICP_QAT_FW_SYM_IV_IN_DESC_VALID);
+
+	if (session->qat_dir == ICP_QAT_HW_CIPHER_DECRYPT) {
+		ICP_QAT_FW_SYM_DIR_FLAG_SET(header->serv_specif_flags,
+			ICP_QAT_HW_CIPHER_DECRYPT);
+	} else {
+		ICP_QAT_FW_SYM_DIR_FLAG_SET(header->serv_specif_flags,
+			ICP_QAT_HW_CIPHER_ENCRYPT);
+	}
+}
+
 int
 qat_sym_cd_cipher_set(struct qat_sym_session *cdesc,
 		const uint8_t *cipherkey, uint32_t cipherkeylen)
diff --git a/drivers/crypto/qat/qat_sym_session.h b/drivers/crypto/qat/qat_sym_session.h
index 9209e2e8df..821c53dfbb 100644
--- a/drivers/crypto/qat/qat_sym_session.h
+++ b/drivers/crypto/qat/qat_sym_session.h
@@ -111,7 +111,10 @@ struct qat_sym_session {
 	enum icp_qat_hw_auth_op auth_op;
 	enum icp_qat_hw_auth_mode auth_mode;
 	void *bpi_ctx;
-	struct qat_sym_cd cd;
+	union {
+		struct qat_sym_cd cd;
+		uint8_t key_array[32];
+	};
 	uint8_t prefix_state[QAT_PREFIX_TBL_SIZE] __rte_cache_aligned;
 	uint8_t *cd_cur_ptr;
 	phys_addr_t cd_paddr;
diff --git a/lib/cryptodev/rte_crypto_sym.h b/lib/cryptodev/rte_crypto_sym.h
index 53b18b9412..e545b1ba76 100644
--- a/lib/cryptodev/rte_crypto_sym.h
+++ b/lib/cryptodev/rte_crypto_sym.h
@@ -492,6 +492,9 @@ enum rte_crypto_aead_operation {
 	/**< Verify digest and decrypt */
 };
 
+/* In GEN5 AEAD AES GCM Algorithm has ID 0 */
+#define RTE_CRYPTO_AEAD_AES_GCM_GEN5 0
+
 /** Authentication operation name strings */
 extern const char *
 rte_crypto_aead_operation_strings[];
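
Usage note (illustrative only, not part of the diff): the GEN5 changes above are internal
to the PMD; applications keep driving this path through the standard cryptodev AEAD API.
Below is a minimal, hypothetical sketch of an AEAD transform matching the constraints this
patch encodes: AES-GCM with a 32-byte key (fits the new key_array[32] union member), a
12-byte IV (GCM_IV_LENGTH) and a 16-byte digest (GCM_256_DIGEST_LEN). The helper name and
the assumption that the IV is stored immediately after the crypto op are placeholders.

    #include <string.h>
    #include <rte_crypto.h>
    #include <rte_crypto_sym.h>

    /* Hypothetical helper: build an AES-256-GCM AEAD transform that matches
     * the GEN5 session constraints used in this patch.
     */
    static struct rte_crypto_sym_xform
    make_gcm_xform(const uint8_t *key)      /* caller provides a 32-byte key */
    {
        struct rte_crypto_sym_xform xform;

        memset(&xform, 0, sizeof(xform));
        xform.type = RTE_CRYPTO_SYM_XFORM_AEAD;
        xform.next = NULL;
        xform.aead.op = RTE_CRYPTO_AEAD_OP_ENCRYPT;
        xform.aead.algo = RTE_CRYPTO_AEAD_AES_GCM;
        xform.aead.key.data = key;
        xform.aead.key.length = 32;          /* AES-256, fits key_array[32] */
        /* Assumes the IV is placed right after the crypto op in the mempool object. */
        xform.aead.iv.offset = sizeof(struct rte_crypto_op) +
                sizeof(struct rte_crypto_sym_op);
        xform.aead.iv.length = 12;           /* GCM_IV_LENGTH */
        xform.aead.digest_length = 16;       /* GCM_256_DIGEST_LEN */
        xform.aead.aad_length = 0;           /* no AAD in this sketch */

        return xform;
    }

With such an xform, session setup on a GEN5 device goes through
qat_sym_session_configure_aead() and the new qat_sym_session_init_gen5_hdr()/key_array
path, and completions are handled by qat_sym_process_response_gen5() via the dispatch
added in qat_sym_dequeue_burst().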