From patchwork Tue Jun 20 09:26:25 2023
X-Patchwork-Submitter: Arkadiusz Kusztal
X-Patchwork-Id: 128832
X-Patchwork-Delegate: gakhil@marvell.com
From: Arek Kusztal
To: dev@dpdk.org
Cc: gakhil@marvell.com, kai.ji@intel.com, ciara.power@intel.com, Arek Kusztal
Subject: [PATCH v4] crypto/qat: add SM3 HMAC to gen4 devices
Date: Tue, 20 Jun 2023 09:26:25 +0000
Message-Id: <20230620092625.473630-1-arkadiuszx.kusztal@intel.com>

This commit adds SM3 HMAC to Intel QuickAssist Technology PMD
generation 3 and 4 devices.

Signed-off-by: Arkadiusz Kusztal
Acked-by: Ciara Power
---
 doc/guides/cryptodevs/features/qat.ini       |   1 +
 doc/guides/cryptodevs/qat.rst                |   2 +
 doc/guides/rel_notes/release_23_07.rst       |   1 +
 drivers/common/qat/qat_adf/icp_qat_fw_la.h   |  10 ++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c |   4 +
 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c |   4 +
 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h |  12 +++
 drivers/crypto/qat/qat_sym_session.c         | 100 +++++++++++++++----
 drivers/crypto/qat/qat_sym_session.h         |   7 ++
 9 files changed, 120 insertions(+), 21 deletions(-)

diff --git a/doc/guides/cryptodevs/features/qat.ini b/doc/guides/cryptodevs/features/qat.ini
index 70511a3076..6358a43357 100644
--- a/doc/guides/cryptodevs/features/qat.ini
+++ b/doc/guides/cryptodevs/features/qat.ini
@@ -70,6 +70,7 @@ AES XCBC MAC = Y
 ZUC EIA3     = Y
 AES CMAC (128) = Y
 SM3          = Y
+SM3 HMAC     = Y
 
 ;
 ; Supported AEAD algorithms of the 'qat' crypto driver.
diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
index b454e1855d..7ff9d227be 100644
--- a/doc/guides/cryptodevs/qat.rst
+++ b/doc/guides/cryptodevs/qat.rst
@@ -76,6 +76,8 @@ Hash algorithms:
 * ``RTE_CRYPTO_AUTH_AES_GMAC``
 * ``RTE_CRYPTO_AUTH_ZUC_EIA3``
 * ``RTE_CRYPTO_AUTH_AES_CMAC``
+* ``RTE_CRYPTO_AUTH_SM3``
+* ``RTE_CRYPTO_AUTH_SM3_HMAC``
 
 Supported AEAD algorithms:
 
diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
index 027ae7bd2d..8895ef4912 100644
--- a/doc/guides/rel_notes/release_23_07.rst
+++ b/doc/guides/rel_notes/release_23_07.rst
@@ -148,6 +148,7 @@ New Features
 * **Updated Intel QuickAssist Technology (QAT) crypto driver.**
 
   * Added support for combined Cipher-CRC offload for DOCSIS for QAT GENs 2,3 and 4.
+  * Added support for SM3-HMAC algorithm for QAT GENs 3 and 4.
 
 * **Updated Marvell cnxk crypto driver.**
 
diff --git a/drivers/common/qat/qat_adf/icp_qat_fw_la.h b/drivers/common/qat/qat_adf/icp_qat_fw_la.h
index 227a6cebc8..70f0effa62 100644
--- a/drivers/common/qat/qat_adf/icp_qat_fw_la.h
+++ b/drivers/common/qat/qat_adf/icp_qat_fw_la.h
@@ -188,6 +188,16 @@ struct icp_qat_fw_la_bulk_req {
 	QAT_FIELD_SET(flags, val, QAT_LA_PARTIAL_BITPOS, \
 	QAT_LA_PARTIAL_MASK)
 
+#define QAT_FW_LA_MODE2 1
+#define QAT_FW_LA_NO_MODE2 0
+#define QAT_FW_LA_MODE2_MASK 0x1
+#define QAT_FW_LA_MODE2_BITPOS 5
+#define ICP_QAT_FW_HASH_FLAG_MODE2_SET(flags, val) \
+QAT_FIELD_SET(flags, \
+	val, \
+	QAT_FW_LA_MODE2_BITPOS, \
+	QAT_FW_LA_MODE2_MASK)
+
 struct icp_qat_fw_cipher_req_hdr_cd_pars {
 	union {
 		struct {
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
index e028a0980f..aeca1db4b8 100644
--- a/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
@@ -159,6 +159,10 @@ static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen3[] = {
 	QAT_SYM_PLAIN_AUTH_CAP(SM3,
 		CAP_SET(block_size, 64),
 		CAP_RNG(digest_size, 32, 32, 0)),
+	QAT_SYM_AUTH_CAP(SM3_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 16, 64, 4), CAP_RNG(digest_size, 32, 32, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
 	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
 };
 
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
index fc68925501..de72383d4b 100644
--- a/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
@@ -107,6 +107,10 @@ static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen4[] = {
 	QAT_SYM_PLAIN_AUTH_CAP(SM3,
 		CAP_SET(block_size, 64),
 		CAP_RNG(digest_size, 32, 32, 0)),
+	QAT_SYM_AUTH_CAP(SM3_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 16, 64, 4), CAP_RNG(digest_size, 32, 32, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
 	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
 };
 
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
index f2bf343793..aa1cb35952 100644
--- a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
@@ -625,6 +625,12 @@ enqueue_one_auth_job_gen1(struct qat_sym_session *ctx,
 		rte_memcpy(cipher_param->u.cipher_IV_array, auth_iv->va,
 				ctx->auth_iv.length);
 		break;
+	case ICP_QAT_HW_AUTH_ALGO_SM3:
+		if (ctx->auth_mode == ICP_QAT_HW_AUTH_MODE0)
+			auth_param->u1.aad_adr = 0;
+		else
+			auth_param->u1.aad_adr = ctx->prefix_paddr;
+		break;
 	default:
 		break;
 	}
@@ -678,6 +684,12 @@ enqueue_one_chain_job_gen1(struct qat_sym_session *ctx,
 	case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
 	case ICP_QAT_HW_AUTH_ALGO_GALOIS_64:
 		break;
+	case ICP_QAT_HW_AUTH_ALGO_SM3:
+		if (ctx->auth_mode == ICP_QAT_HW_AUTH_MODE0)
+			auth_param->u1.aad_adr = 0;
+		else
+			auth_param->u1.aad_adr = ctx->prefix_paddr;
+		break;
 	default:
 		break;
 	}
diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c
index 327f568a28..f59d04b6d2 100644
--- a/drivers/crypto/qat/qat_sym_session.c
+++ b/drivers/crypto/qat/qat_sym_session.c
@@ -127,11 +127,12 @@ qat_sym_cd_crc_set(struct qat_sym_session *cdesc,
 
 static int
 qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
-						const uint8_t *authkey,
-						uint32_t authkeylen,
-						uint32_t aad_length,
-						uint32_t digestsize,
-						unsigned int operation);
+			const uint8_t *authkey,
+			uint32_t authkeylen,
+			uint32_t aad_length,
+			uint32_t digestsize,
+			unsigned int operation,
+			enum qat_device_gen qat_dev_gen);
 
 static void
 qat_sym_session_init_common_hdr(struct qat_sym_session *session);
@@ -572,6 +573,8 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev,
 	/* Set context descriptor physical address */
 	session->cd_paddr = session_paddr +
 			offsetof(struct qat_sym_session, cd);
+	session->prefix_paddr = session_paddr +
+			offsetof(struct qat_sym_session, prefix_state);
 
 	session->dev_id = internals->dev_id;
 	session->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_NONE;
@@ -750,6 +753,10 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
 		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SM3;
 		session->auth_mode = ICP_QAT_HW_AUTH_MODE0;
 		break;
+	case RTE_CRYPTO_AUTH_SM3_HMAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SM3;
+		session->auth_mode = ICP_QAT_HW_AUTH_MODE2;
+		break;
 	case RTE_CRYPTO_AUTH_SHA1:
 		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA1;
 		session->auth_mode = ICP_QAT_HW_AUTH_MODE0;
@@ -875,7 +882,8 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
 				key_length,
 				0,
 				auth_xform->digest_length,
-				auth_xform->op))
+				auth_xform->op,
+				qat_dev_gen))
 			return -EINVAL;
 	} else {
 		session->qat_cmd = ICP_QAT_FW_LA_CMD_HASH_CIPHER;
@@ -890,7 +898,8 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
 				key_length,
 				0,
 				auth_xform->digest_length,
-				auth_xform->op))
+				auth_xform->op,
+				qat_dev_gen))
 			return -EINVAL;
 
 		if (qat_sym_cd_cipher_set(session,
@@ -904,7 +913,8 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
 				key_length,
 				0,
 				auth_xform->digest_length,
-				auth_xform->op))
+				auth_xform->op,
+				qat_dev_gen))
 			return -EINVAL;
 	}
 
@@ -1010,7 +1020,8 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev,
 				aead_xform->key.length,
 				aead_xform->aad_length,
 				aead_xform->digest_length,
-				crypto_operation))
+				crypto_operation,
+				qat_dev_gen))
 			return -EINVAL;
 	} else {
 		session->qat_dir = ICP_QAT_HW_CIPHER_DECRYPT;
@@ -1027,7 +1038,8 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev,
 				aead_xform->key.length,
 				aead_xform->aad_length,
 				aead_xform->digest_length,
-				crypto_operation))
+				crypto_operation,
+				qat_dev_gen))
 			return -EINVAL;
 
 		if (qat_sym_cd_cipher_set(session,
@@ -1196,6 +1208,8 @@ static int qat_hash_get_block_size(enum icp_qat_hw_auth_algo qat_hash_alg)
 	case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
 		/* return maximum block size in this case */
 		return SHA512_CBLOCK;
+	case ICP_QAT_HW_AUTH_ALGO_SM3:
+		return QAT_SM3_BLOCK_SIZE;
 	default:
 		QAT_LOG(ERR, "invalid hash alg %u", qat_hash_alg);
 		return -EFAULT;
@@ -2076,13 +2090,14 @@ int qat_sym_cd_cipher_set(struct qat_sym_session *cdesc,
 }
 
 int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
-						const uint8_t *authkey,
-						uint32_t authkeylen,
-						uint32_t aad_length,
-						uint32_t digestsize,
-						unsigned int operation)
+			const uint8_t *authkey,
+			uint32_t authkeylen,
+			uint32_t aad_length,
+			uint32_t digestsize,
+			unsigned int operation,
+			enum qat_device_gen qat_dev_gen)
 {
-	struct icp_qat_hw_auth_setup *hash;
+	struct icp_qat_hw_auth_setup *hash, *hash_2 = NULL;
 	struct icp_qat_hw_cipher_algo_blk *cipherconfig;
 	struct icp_qat_fw_la_bulk_req *req_tmpl = &cdesc->fw_req;
 	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req_tmpl->cd_pars;
@@ -2098,6 +2113,7 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
 	uint32_t *aad_len = NULL;
 	uint32_t wordIndex = 0;
 	uint32_t *pTempKey;
+	uint8_t *prefix = NULL;
 	int ret = 0;
 
 	if (cdesc->qat_cmd == ICP_QAT_FW_LA_CMD_AUTH) {
@@ -2148,6 +2164,7 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
 		|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC
 		|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC
 		|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL
+		|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_SM3
 		|| cdesc->is_cnt_zero
 			)
 		hash->auth_counter.counter = 0;
@@ -2159,6 +2176,7 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
 		hash->auth_counter.counter = rte_bswap32(block_size);
 	}
 
+	hash_cd_ctrl->hash_cfg_offset = hash_offset >> 3;
 	cdesc->cd_cur_ptr += sizeof(struct icp_qat_hw_auth_setup);
 	switch (cdesc->qat_hash_alg) {
 	case ICP_QAT_HW_AUTH_ALGO_SM3:
@@ -2167,6 +2185,48 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
 		state1_size = qat_hash_get_state1_size(
 				cdesc->qat_hash_alg);
 		state2_size = ICP_QAT_HW_SM3_STATE2_SZ;
+		if (cdesc->auth_mode == ICP_QAT_HW_AUTH_MODE0)
+			break;
+		hash_2 = (struct icp_qat_hw_auth_setup *)(cdesc->cd_cur_ptr +
+			state1_size + state2_size);
+		hash_2->auth_config.config =
+			ICP_QAT_HW_AUTH_CONFIG_BUILD(ICP_QAT_HW_AUTH_MODE2,
+				cdesc->qat_hash_alg, digestsize);
+		rte_memcpy(cdesc->cd_cur_ptr + state1_size + state2_size +
+			sizeof(*hash_2), sm3InitialState,
+			sizeof(sm3InitialState));
+		hash_cd_ctrl->inner_state1_sz = state1_size;
+		hash_cd_ctrl->inner_state2_sz = state2_size;
+		hash_cd_ctrl->inner_state2_offset =
+			hash_cd_ctrl->hash_cfg_offset +
+			((sizeof(struct icp_qat_hw_auth_setup) +
+			RTE_ALIGN_CEIL(hash_cd_ctrl->inner_state1_sz, 8)) >> 3);
+		hash_cd_ctrl->outer_config_offset =
+			hash_cd_ctrl->inner_state2_offset +
+			((hash_cd_ctrl->inner_state2_sz) >> 3);
+		hash_cd_ctrl->outer_state1_sz = state1_size;
+		hash_cd_ctrl->outer_res_sz = state2_size;
+		hash_cd_ctrl->outer_prefix_sz =
+			qat_hash_get_block_size(cdesc->qat_hash_alg);
+		hash_cd_ctrl->outer_prefix_offset =
+			qat_hash_get_block_size(cdesc->qat_hash_alg) >> 3;
+		auth_param->u2.inner_prefix_sz =
+			qat_hash_get_block_size(cdesc->qat_hash_alg);
+		auth_param->hash_state_sz = digestsize;
+		if (qat_dev_gen == QAT_GEN4) {
+			ICP_QAT_FW_HASH_FLAG_MODE2_SET(
+				hash_cd_ctrl->hash_flags,
+				QAT_FW_LA_MODE2);
+		} else {
+			hash_cd_ctrl->hash_flags |=
+				ICP_QAT_FW_AUTH_HDR_FLAG_DO_NESTED;
+		}
+		prefix = cdesc->prefix_state;
+		rte_memcpy(prefix, authkey, authkeylen);
+		rte_memcpy(prefix + QAT_PREFIX_SIZE, authkey,
+			authkeylen);
+		cd_extra_size += sizeof(struct icp_qat_hw_auth_setup) +
+			state1_size + state2_size;
 		break;
 	case ICP_QAT_HW_AUTH_ALGO_SHA1:
 		if (cdesc->auth_mode == ICP_QAT_HW_AUTH_MODE0) {
@@ -2527,8 +2587,7 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
 	}
 
 	/* Auth CD config setup */
-	hash_cd_ctrl->hash_cfg_offset = hash_offset >> 3;
-	hash_cd_ctrl->hash_flags = ICP_QAT_FW_AUTH_HDR_FLAG_NO_NESTED;
+	hash_cd_ctrl->hash_flags |= ICP_QAT_FW_AUTH_HDR_FLAG_NO_NESTED;
 	hash_cd_ctrl->inner_state1_sz = state1_size;
 	if (cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL) {
 		hash_cd_ctrl->inner_res_sz = 4;
@@ -2545,13 +2604,10 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
 			((sizeof(struct icp_qat_hw_auth_setup) +
 			RTE_ALIGN_CEIL(hash_cd_ctrl->inner_state1_sz, 8))
 					>> 3);
-
 	cdesc->cd_cur_ptr += state1_size + state2_size + cd_extra_size;
 	cd_size = cdesc->cd_cur_ptr-(uint8_t *)&cdesc->cd;
-
 	cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
 	cd_pars->u.s.content_desc_params_sz = RTE_ALIGN_CEIL(cd_size, 8) >> 3;
-
 	return 0;
 }
 
@@ -2857,6 +2913,8 @@ qat_sec_session_set_docsis_parameters(struct rte_cryptodev *dev,
 	/* Set context descriptor physical address */
 	session->cd_paddr = session_paddr +
 			offsetof(struct qat_sym_session, cd);
+	session->prefix_paddr = session_paddr +
+			offsetof(struct qat_sym_session, prefix_state);
 
 	/* Get requested QAT command id - should be cipher */
 	qat_cmd_id = qat_get_cmd_id(xform);
diff --git a/drivers/crypto/qat/qat_sym_session.h b/drivers/crypto/qat/qat_sym_session.h
index b7fbf5c491..0cc19b5cc0 100644
--- a/drivers/crypto/qat/qat_sym_session.h
+++ b/drivers/crypto/qat/qat_sym_session.h
@@ -56,9 +56,14 @@
 #define QAT_CRYPTO_SLICE_UCS	2
 #define QAT_CRYPTO_SLICE_WCP	4
 
+#define QAT_PREFIX_SIZE		64
+#define QAT_PREFIX_TBL_SIZE	((QAT_PREFIX_SIZE) * 2)
+
 #define QAT_SESSION_IS_SLICE_SET(flags, flag)	\
 	(!!((flags) & (flag)))
 
+#define QAT_SM3_BLOCK_SIZE	64
+
 enum qat_sym_proto_flag {
 	QAT_CRYPTO_PROTO_FLAG_NONE = 0,
 	QAT_CRYPTO_PROTO_FLAG_CCM = 1,
@@ -98,8 +103,10 @@ struct qat_sym_session {
 	enum icp_qat_hw_auth_mode auth_mode;
 	void *bpi_ctx;
 	struct qat_sym_cd cd;
+	uint8_t prefix_state[QAT_PREFIX_TBL_SIZE] __rte_cache_aligned;
 	uint8_t *cd_cur_ptr;
 	phys_addr_t cd_paddr;
+	phys_addr_t prefix_paddr;
 	struct icp_qat_fw_la_bulk_req fw_req;
 	uint8_t aad_len;
 	struct qat_crypto_instance *inst;
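
For reference, below is a minimal usage sketch (not part of the patch itself) showing how an application could build an SM3-HMAC auth-only session through the public cryptodev API once this capability is present. The dev_id and session_pool arguments, and the usual device/queue-pair initialisation, are assumed to exist; the call follows the post-22.11 rte_cryptodev_sym_session_create() signature. Per the capability entries added above, the HMAC key may be 16 to 64 bytes in 4-byte steps, the digest is 32 bytes, and no IV or AAD is used.

/*
 * Hypothetical helper: create an SM3-HMAC auth-only session against the
 * capability range added by this patch. dev_id/session_pool are assumed
 * to come from the application's normal cryptodev setup path.
 */
#include <rte_cryptodev.h>
#include <rte_crypto_sym.h>

static void *
create_sm3_hmac_session(uint8_t dev_id, struct rte_mempool *session_pool,
		const uint8_t *key, uint16_t key_len)
{
	struct rte_crypto_sym_xform auth_xform = {
		.type = RTE_CRYPTO_SYM_XFORM_AUTH,
		.next = NULL,
		.auth = {
			.op = RTE_CRYPTO_AUTH_OP_GENERATE,
			.algo = RTE_CRYPTO_AUTH_SM3_HMAC,
			.key = { .data = key, .length = key_len },
			.digest_length = 32, /* SM3 digest size */
		},
	};

	/* Returns an opaque session handle on success, NULL on failure. */
	return rte_cryptodev_sym_session_create(dev_id, &auth_xform,
			session_pool);
}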