From patchwork Mon Jun 20 12:26:52 2022
X-Patchwork-Submitter: Tejasree Kondoj
X-Patchwork-Id: 113116
X-Patchwork-Delegate: gakhil@marvell.com
From: Tejasree Kondoj
To: Akhil Goyal
CC: Anoob Joseph, Ankur Dwivedi
Subject: [PATCH v2 1/3] crypto/cnxk: fix CMAC IV
Date: Mon, 20 Jun 2022 17:56:52 +0530
Message-ID: <20220620122654.1014994-2-ktejasree@marvell.com>
In-Reply-To: <20220620122654.1014994-1-ktejasree@marvell.com>
References: <20220620122654.1014994-1-ktejasree@marvell.com>

Fixing CMAC IV length to 16 bytes. The microcode expects a 16-byte zeroized
IV for AES-CMAC (EIA2), but only the first 4 bytes were being cleared.
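For reference, a minimal standalone sketch of the corrected behaviour
(illustrative only; the real change is the one-line fix in pdcp_iv_copy()
shown in the diff below, and the helper name here is hypothetical):

    #include <stdint.h>
    #include <string.h>

    /*
     * Illustration of the fix: the AES-CMAC (EIA2) path must hand the
     * microcode a fully zeroized 16-byte IV block, not just four cleared
     * bytes.
     */
    static inline void
    cmac_iv_zeroize(uint8_t *iv_d)
    {
    	memset(iv_d, 0, 16);
    }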
Fixes: 759b5e653580 ("crypto/cnxk: support AES-CMAC")

Signed-off-by: Tejasree Kondoj
---
 drivers/crypto/cnxk/cnxk_se.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/crypto/cnxk/cnxk_se.h b/drivers/crypto/cnxk/cnxk_se.h
index 5c61e4dfa4..ff98d9b553 100644
--- a/drivers/crypto/cnxk/cnxk_se.h
+++ b/drivers/crypto/cnxk/cnxk_se.h
@@ -82,7 +82,7 @@ pdcp_iv_copy(uint8_t *iv_d, uint8_t *iv_s, const uint8_t pdcp_alg_type,
 			memcpy(iv_d, iv_s, 16);
 		} else {
 			/* AES-CMAC EIA2, microcode expects 16B zeroized IV */
-			for (j = 0; j < 4; j++)
+			for (j = 0; j < 16; j++)
 				iv_d[j] = 0;
 		}
 	}

From patchwork Mon Jun 20 12:26:53 2022
X-Patchwork-Submitter: Tejasree Kondoj
X-Patchwork-Id: 113117
X-Patchwork-Delegate: gakhil@marvell.com
From: Tejasree Kondoj
To: Akhil Goyal
CC: Anoob Joseph, Ankur Dwivedi
Subject: [PATCH v2 2/3] crypto/cnxk: support stream cipher chained operations
Date: Mon, 20 Jun 2022 17:56:53 +0530
Message-ID: <20220620122654.1014994-3-ktejasree@marvell.com>
In-Reply-To: <20220620122654.1014994-1-ktejasree@marvell.com>
References: <20220620122654.1014994-1-ktejasree@marvell.com>
engine=ICAP:2.0.205,Aquarius:18.0.883,Hydra:6.0.517,FMLib:17.11.64.514 definitions=2022-06-20_05,2022-06-17_01,2022-02-23_01 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Adding support for zuc, snow3g and aes-ctr-cmac chained operations on cn9k using key and IV scheme in microcode. Signed-off-by: Tejasree Kondoj --- drivers/common/cnxk/roc_se.c | 271 +++++++++++++++++------ drivers/common/cnxk/roc_se.h | 74 +++++-- drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 69 +++++- drivers/crypto/cnxk/cnxk_se.h | 235 +++++++++++++++++--- 4 files changed, 536 insertions(+), 113 deletions(-) diff --git a/drivers/common/cnxk/roc_se.c b/drivers/common/cnxk/roc_se.c index 3f0821e400..8d6446c3a0 100644 --- a/drivers/common/cnxk/roc_se.c +++ b/drivers/common/cnxk/roc_se.c @@ -88,21 +88,24 @@ cpt_ciph_type_set(roc_se_cipher_type type, struct roc_se_ctx *ctx, fc_type = ROC_SE_FC_GEN; break; case ROC_SE_ZUC_EEA3: - /* No support for chained operations */ - if (unlikely(ctx->hash_type)) - return -1; - fc_type = ROC_SE_PDCP; + if (ctx->hash_type) + fc_type = ROC_SE_PDCP_CHAIN; + else + fc_type = ROC_SE_PDCP; break; case ROC_SE_SNOW3G_UEA2: if (unlikely(key_len != 16)) return -1; - /* No support for AEAD yet */ - if (unlikely(ctx->hash_type)) - return -1; - fc_type = ROC_SE_PDCP; + if (ctx->hash_type) + fc_type = ROC_SE_PDCP_CHAIN; + else + fc_type = ROC_SE_PDCP; break; case ROC_SE_AES_CTR_EEA2: - fc_type = ROC_SE_PDCP; + if (ctx->hash_type) + fc_type = ROC_SE_PDCP_CHAIN; + else + fc_type = ROC_SE_PDCP; break; case ROC_SE_KASUMI_F8_CBC: case ROC_SE_KASUMI_F8_ECB: @@ -171,6 +174,29 @@ cpt_pdcp_key_type_set(struct roc_se_zuc_snow3g_ctx *zs_ctx, uint16_t key_len) return 0; } +static int +cpt_pdcp_chain_key_type_get(uint16_t key_len) +{ + roc_se_aes_type key_type; + + switch (key_len) { + case 16: + key_type = ROC_SE_AES_128_BIT; + break; + case 24: + key_type = ROC_SE_AES_192_BIT; + break; + case 32: + key_type = ROC_SE_AES_256_BIT; + break; + default: + plt_err("Invalid key len"); + return -ENOTSUP; + } + + return key_type; +} + static int cpt_pdcp_mac_len_set(struct roc_se_zuc_snow3g_ctx *zs_ctx, uint16_t mac_len) { @@ -202,7 +228,7 @@ cpt_pdcp_mac_len_set(struct roc_se_zuc_snow3g_ctx *zs_ctx, uint16_t mac_len) } static void -cpt_pdcp_update_zuc_const(uint8_t *zuc_const, int key_len, int mac_len) +cpt_zuc_const_update(uint8_t *zuc_const, int key_len, int mac_len) { if (key_len == 16) { memcpy(zuc_const, zuc_key128, 32); @@ -227,15 +253,19 @@ int roc_se_auth_key_set(struct roc_se_ctx *se_ctx, roc_se_auth_type type, const uint8_t *key, uint16_t key_len, uint16_t mac_len) { + struct roc_se_zuc_snow3g_chain_ctx *zs_ch_ctx; struct roc_se_zuc_snow3g_ctx *zs_ctx; struct roc_se_kasumi_ctx *k_ctx; struct roc_se_context *fctx; + uint8_t opcode_minor; + uint8_t pdcp_alg; int ret; if (se_ctx == NULL) return -1; zs_ctx = &se_ctx->se_ctx.zs_ctx; + zs_ch_ctx = &se_ctx->se_ctx.zs_ch_ctx; k_ctx = &se_ctx->se_ctx.k_ctx; fctx = &se_ctx->se_ctx.fctx; @@ -243,14 +273,12 @@ roc_se_auth_key_set(struct roc_se_ctx *se_ctx, roc_se_auth_type type, uint8_t *zuc_const; uint32_t keyx[4]; uint8_t *ci_key; + bool chained_op = + se_ctx->ciph_then_auth || se_ctx->auth_then_ciph; if (!key_len) return -1; - /* No support for chained operations yet */ - if (se_ctx->enc_cipher) - return -1; - if (roc_model_is_cn9k()) { ci_key = zs_ctx->zuc.onk_ctx.ci_key; zuc_const = 
zs_ctx->zuc.onk_ctx.zuc_const; @@ -262,41 +290,88 @@ roc_se_auth_key_set(struct roc_se_ctx *se_ctx, roc_se_auth_type type, /* For ZUC/SNOW3G/Kasumi */ switch (type) { case ROC_SE_SNOW3G_UIA2: - zs_ctx->zuc.otk_ctx.w0.s.alg_type = - ROC_SE_PDCP_ALG_TYPE_SNOW3G; - zs_ctx->zuc.otk_ctx.w0.s.mac_len = - ROC_SE_PDCP_MAC_LEN_32_BIT; - se_ctx->pdcp_alg_type = ROC_SE_PDCP_ALG_TYPE_SNOW3G; - cpt_snow3g_key_gen(key, keyx); - memcpy(ci_key, keyx, key_len); - se_ctx->fc_type = ROC_SE_PDCP; + if (chained_op) { + struct roc_se_onk_zuc_chain_ctx *ctx = + &zs_ch_ctx->zuc.onk_ctx; + zs_ch_ctx->zuc.onk_ctx.w0.s.state_conf = + ROC_SE_PDCP_CHAIN_CTX_KEY_IV; + ctx->w0.s.auth_type = + ROC_SE_PDCP_CHAIN_ALG_TYPE_SNOW3G; + ctx->w0.s.mac_len = mac_len; + ctx->w0.s.auth_key_len = key_len; + se_ctx->fc_type = ROC_SE_PDCP_CHAIN; + cpt_snow3g_key_gen(key, keyx); + memcpy(ctx->st.auth_key, keyx, key_len); + } else { + zs_ctx->zuc.otk_ctx.w0.s.alg_type = + ROC_SE_PDCP_ALG_TYPE_SNOW3G; + zs_ctx->zuc.otk_ctx.w0.s.mac_len = + ROC_SE_PDCP_MAC_LEN_32_BIT; + cpt_snow3g_key_gen(key, keyx); + memcpy(ci_key, keyx, key_len); + se_ctx->fc_type = ROC_SE_PDCP; + } + se_ctx->pdcp_auth_alg = ROC_SE_PDCP_ALG_TYPE_SNOW3G; se_ctx->zsk_flags = 0x1; break; case ROC_SE_ZUC_EIA3: - zs_ctx->zuc.otk_ctx.w0.s.alg_type = - ROC_SE_PDCP_ALG_TYPE_ZUC; - ret = cpt_pdcp_key_type_set(zs_ctx, key_len); - if (ret) - return ret; - ret = cpt_pdcp_mac_len_set(zs_ctx, mac_len); - if (ret) - return ret; - se_ctx->pdcp_alg_type = ROC_SE_PDCP_ALG_TYPE_ZUC; - memcpy(ci_key, key, key_len); - if (key_len == 32) - roc_se_zuc_bytes_swap(ci_key, key_len); - cpt_pdcp_update_zuc_const(zuc_const, key_len, mac_len); - se_ctx->fc_type = ROC_SE_PDCP; + if (chained_op) { + struct roc_se_onk_zuc_chain_ctx *ctx = + &zs_ch_ctx->zuc.onk_ctx; + ctx->w0.s.state_conf = + ROC_SE_PDCP_CHAIN_CTX_KEY_IV; + ctx->w0.s.auth_type = + ROC_SE_PDCP_CHAIN_ALG_TYPE_ZUC; + ctx->w0.s.mac_len = mac_len; + ctx->w0.s.auth_key_len = key_len; + memcpy(ctx->st.auth_key, key, key_len); + cpt_zuc_const_update(ctx->st.auth_zuc_const, + key_len, mac_len); + se_ctx->fc_type = ROC_SE_PDCP_CHAIN; + } else { + zs_ctx->zuc.otk_ctx.w0.s.alg_type = + ROC_SE_PDCP_ALG_TYPE_ZUC; + ret = cpt_pdcp_key_type_set(zs_ctx, key_len); + if (ret) + return ret; + ret = cpt_pdcp_mac_len_set(zs_ctx, mac_len); + if (ret) + return ret; + memcpy(ci_key, key, key_len); + if (key_len == 32) + roc_se_zuc_bytes_swap(ci_key, key_len); + cpt_zuc_const_update(zuc_const, key_len, + mac_len); + se_ctx->fc_type = ROC_SE_PDCP; + } + se_ctx->pdcp_auth_alg = ROC_SE_PDCP_ALG_TYPE_ZUC; se_ctx->zsk_flags = 0x1; break; case ROC_SE_AES_CMAC_EIA2: - zs_ctx->zuc.otk_ctx.w0.s.alg_type = - ROC_SE_PDCP_ALG_TYPE_AES_CTR; - zs_ctx->zuc.otk_ctx.w0.s.mac_len = - ROC_SE_PDCP_MAC_LEN_32_BIT; - se_ctx->pdcp_alg_type = ROC_SE_PDCP_ALG_TYPE_AES_CTR; - memcpy(ci_key, key, key_len); - se_ctx->fc_type = ROC_SE_PDCP; + if (chained_op) { + struct roc_se_onk_zuc_chain_ctx *ctx = + &zs_ch_ctx->zuc.onk_ctx; + int key_type; + key_type = cpt_pdcp_chain_key_type_get(key_len); + if (key_type < 0) + return key_type; + ctx->w0.s.auth_key_len = key_type; + ctx->w0.s.state_conf = + ROC_SE_PDCP_CHAIN_CTX_KEY_IV; + ctx->w0.s.auth_type = + ROC_SE_PDCP_ALG_TYPE_AES_CTR; + ctx->w0.s.mac_len = mac_len; + memcpy(ctx->st.auth_key, key, key_len); + se_ctx->fc_type = ROC_SE_PDCP_CHAIN; + } else { + zs_ctx->zuc.otk_ctx.w0.s.alg_type = + ROC_SE_PDCP_ALG_TYPE_AES_CTR; + zs_ctx->zuc.otk_ctx.w0.s.mac_len = + ROC_SE_PDCP_MAC_LEN_32_BIT; + memcpy(ci_key, key, key_len); + se_ctx->fc_type = 
ROC_SE_PDCP; + } + se_ctx->pdcp_auth_alg = ROC_SE_PDCP_ALG_TYPE_AES_CMAC; se_ctx->zsk_flags = 0x1; break; case ROC_SE_KASUMI_F9_ECB: @@ -316,11 +391,16 @@ roc_se_auth_key_set(struct roc_se_ctx *se_ctx, roc_se_auth_type type, } se_ctx->mac_len = mac_len; se_ctx->hash_type = type; + pdcp_alg = zs_ctx->zuc.otk_ctx.w0.s.alg_type; if (roc_model_is_cn9k()) - se_ctx->template_w4.s.opcode_minor = - ((1 << 7) | (se_ctx->pdcp_alg_type << 5) | 1); + if (chained_op == true) + opcode_minor = se_ctx->ciph_then_auth ? 2 : 3; + else + opcode_minor = ((1 << 7) | (pdcp_alg << 5) | 1); else - se_ctx->template_w4.s.opcode_minor = ((1 << 4) | 1); + opcode_minor = ((1 << 4) | 1); + + se_ctx->template_w4.s.opcode_minor = opcode_minor; return 0; } @@ -363,13 +443,18 @@ int roc_se_ciph_key_set(struct roc_se_ctx *se_ctx, roc_se_cipher_type type, const uint8_t *key, uint16_t key_len, uint8_t *salt) { + bool chained_op = se_ctx->ciph_then_auth || se_ctx->auth_then_ciph; struct roc_se_zuc_snow3g_ctx *zs_ctx = &se_ctx->se_ctx.zs_ctx; struct roc_se_context *fctx = &se_ctx->se_ctx.fctx; + struct roc_se_zuc_snow3g_chain_ctx *zs_ch_ctx; + uint8_t opcode_minor; uint8_t *zuc_const; uint32_t keyx[4]; uint8_t *ci_key; int ret; + zs_ch_ctx = &se_ctx->se_ctx.zs_ch_ctx; + if (roc_model_is_cn9k()) { ci_key = zs_ctx->zuc.onk_ctx.ci_key; zuc_const = zs_ctx->zuc.onk_ctx.zuc_const; @@ -447,34 +532,73 @@ roc_se_ciph_key_set(struct roc_se_ctx *se_ctx, roc_se_cipher_type type, memcpy(fctx->hmac.ipad, &key[key_len], key_len); break; case ROC_SE_SNOW3G_UEA2: - zs_ctx->zuc.otk_ctx.w0.s.key_len = ROC_SE_AES_128_BIT; - zs_ctx->zuc.otk_ctx.w0.s.alg_type = ROC_SE_PDCP_ALG_TYPE_SNOW3G; - se_ctx->pdcp_alg_type = ROC_SE_PDCP_ALG_TYPE_SNOW3G; - cpt_snow3g_key_gen(key, keyx); - memcpy(ci_key, keyx, key_len); + if (chained_op == true) { + struct roc_se_onk_zuc_chain_ctx *ctx = + &zs_ch_ctx->zuc.onk_ctx; + zs_ch_ctx->zuc.onk_ctx.w0.s.state_conf = + ROC_SE_PDCP_CHAIN_CTX_KEY_IV; + zs_ch_ctx->zuc.onk_ctx.w0.s.cipher_type = + ROC_SE_PDCP_CHAIN_ALG_TYPE_SNOW3G; + zs_ch_ctx->zuc.onk_ctx.w0.s.ci_key_len = key_len; + cpt_snow3g_key_gen(key, keyx); + memcpy(ctx->st.ci_key, keyx, key_len); + } else { + zs_ctx->zuc.otk_ctx.w0.s.key_len = ROC_SE_AES_128_BIT; + zs_ctx->zuc.otk_ctx.w0.s.alg_type = + ROC_SE_PDCP_ALG_TYPE_SNOW3G; + cpt_snow3g_key_gen(key, keyx); + memcpy(ci_key, keyx, key_len); + } + se_ctx->pdcp_ci_alg = ROC_SE_PDCP_ALG_TYPE_SNOW3G; se_ctx->zsk_flags = 0; goto success; case ROC_SE_ZUC_EEA3: - ret = cpt_pdcp_key_type_set(zs_ctx, key_len); - if (ret) - return ret; - zs_ctx->zuc.otk_ctx.w0.s.alg_type = ROC_SE_PDCP_ALG_TYPE_ZUC; - se_ctx->pdcp_alg_type = ROC_SE_PDCP_ALG_TYPE_ZUC; - memcpy(ci_key, key, key_len); - if (key_len == 32) { - roc_se_zuc_bytes_swap(ci_key, key_len); - memcpy(zuc_const, zuc_key256, 16); - } else - memcpy(zuc_const, zuc_key128, 32); + if (chained_op == true) { + struct roc_se_onk_zuc_chain_ctx *ctx = + &zs_ch_ctx->zuc.onk_ctx; + zs_ch_ctx->zuc.onk_ctx.w0.s.state_conf = + ROC_SE_PDCP_CHAIN_CTX_KEY_IV; + zs_ch_ctx->zuc.onk_ctx.w0.s.cipher_type = + ROC_SE_PDCP_CHAIN_ALG_TYPE_ZUC; + memcpy(ctx->st.ci_key, key, key_len); + memcpy(ctx->st.ci_zuc_const, zuc_key128, 32); + zs_ch_ctx->zuc.onk_ctx.w0.s.ci_key_len = key_len; + } else { + ret = cpt_pdcp_key_type_set(zs_ctx, key_len); + if (ret) + return ret; + zs_ctx->zuc.otk_ctx.w0.s.alg_type = + ROC_SE_PDCP_ALG_TYPE_ZUC; + memcpy(ci_key, key, key_len); + if (key_len == 32) { + roc_se_zuc_bytes_swap(ci_key, key_len); + memcpy(zuc_const, zuc_key256, 16); + } else + memcpy(zuc_const, 
zuc_key128, 32); + } + se_ctx->pdcp_ci_alg = ROC_SE_PDCP_ALG_TYPE_ZUC; se_ctx->zsk_flags = 0; goto success; case ROC_SE_AES_CTR_EEA2: - zs_ctx->zuc.otk_ctx.w0.s.key_len = ROC_SE_AES_128_BIT; - zs_ctx->zuc.otk_ctx.w0.s.alg_type = - ROC_SE_PDCP_ALG_TYPE_AES_CTR; - se_ctx->pdcp_alg_type = ROC_SE_PDCP_ALG_TYPE_AES_CTR; - memcpy(ci_key, key, key_len); + if (chained_op == true) { + struct roc_se_onk_zuc_chain_ctx *ctx = + &zs_ch_ctx->zuc.onk_ctx; + int key_type; + key_type = cpt_pdcp_chain_key_type_get(key_len); + if (key_type < 0) + return key_type; + ctx->w0.s.ci_key_len = key_type; + ctx->w0.s.state_conf = ROC_SE_PDCP_CHAIN_CTX_KEY_IV; + ctx->w0.s.cipher_type = ROC_SE_PDCP_ALG_TYPE_AES_CTR; + memcpy(ctx->st.ci_key, key, key_len); + } else { + zs_ctx->zuc.otk_ctx.w0.s.key_len = ROC_SE_AES_128_BIT; + zs_ctx->zuc.otk_ctx.w0.s.alg_type = + ROC_SE_PDCP_ALG_TYPE_AES_CTR; + memcpy(ci_key, key, key_len); + } + se_ctx->pdcp_ci_alg = ROC_SE_PDCP_ALG_TYPE_AES_CTR; se_ctx->zsk_flags = 0; goto success; case ROC_SE_KASUMI_F8_ECB: @@ -502,11 +626,16 @@ roc_se_ciph_key_set(struct roc_se_ctx *se_ctx, roc_se_cipher_type type, se_ctx->enc_cipher = type; if (se_ctx->fc_type == ROC_SE_PDCP) { if (roc_model_is_cn9k()) - se_ctx->template_w4.s.opcode_minor = - ((1 << 7) | (se_ctx->pdcp_alg_type << 5) | - (se_ctx->zsk_flags & 0x7)); + if (chained_op == true) + opcode_minor = se_ctx->ciph_then_auth ? 2 : 3; + else + opcode_minor = + ((1 << 7) | (se_ctx->pdcp_ci_alg << 5) | + (se_ctx->zsk_flags & 0x7)); else - se_ctx->template_w4.s.opcode_minor = ((1 << 4)); + opcode_minor = ((1 << 4)); + + se_ctx->template_w4.s.opcode_minor = opcode_minor; } return 0; } diff --git a/drivers/common/cnxk/roc_se.h b/drivers/common/cnxk/roc_se.h index c565ec1b74..86bb3aa79d 100644 --- a/drivers/common/cnxk/roc_se.h +++ b/drivers/common/cnxk/roc_se.h @@ -11,10 +11,11 @@ #define ROC_SE_FC_MINOR_OP_DECRYPT 0x1 #define ROC_SE_FC_MINOR_OP_HMAC_FIRST 0x10 -#define ROC_SE_MAJOR_OP_HASH 0x34 -#define ROC_SE_MAJOR_OP_HMAC 0x35 -#define ROC_SE_MAJOR_OP_PDCP 0x37 -#define ROC_SE_MAJOR_OP_KASUMI 0x38 +#define ROC_SE_MAJOR_OP_HASH 0x34 +#define ROC_SE_MAJOR_OP_HMAC 0x35 +#define ROC_SE_MAJOR_OP_PDCP 0x37 +#define ROC_SE_MAJOR_OP_KASUMI 0x38 +#define ROC_SE_MAJOR_OP_PDCP_CHAIN 0x3C #define ROC_SE_MAJOR_OP_MISC 0x01 #define ROC_SE_MISC_MINOR_OP_PASSTHROUGH 0x03 @@ -38,10 +39,11 @@ #define ROC_SE_K_F8 0x4 #define ROC_SE_K_F9 0x8 -#define ROC_SE_FC_GEN 0x1 -#define ROC_SE_PDCP 0x2 -#define ROC_SE_KASUMI 0x3 -#define ROC_SE_HASH_HMAC 0x4 +#define ROC_SE_FC_GEN 0x1 +#define ROC_SE_PDCP 0x2 +#define ROC_SE_KASUMI 0x3 +#define ROC_SE_HASH_HMAC 0x4 +#define ROC_SE_PDCP_CHAIN 0x5 #define ROC_SE_OP_CIPHER_ENCRYPT 0x1 #define ROC_SE_OP_CIPHER_DECRYPT 0x2 @@ -224,6 +226,42 @@ struct roc_se_onk_zuc_ctx { uint8_t zuc_const[32]; }; +struct roc_se_onk_zuc_chain_ctx { + union { + uint64_t u64; + struct { + uint64_t cipher_type : 2; + uint64_t rsvd58_59 : 2; + uint64_t auth_type : 2; + uint64_t rsvd62_63 : 2; + uint64_t mac_len : 4; + uint64_t ci_key_len : 2; + uint64_t auth_key_len : 2; + uint64_t rsvd42_47 : 6; + uint64_t state_conf : 2; + uint64_t rsvd0_39 : 40; + } s; + } w0; + union { + struct { + uint8_t encr_lfsr_state[64]; + uint8_t auth_lfsr_state[64]; + }; + struct { + uint8_t ci_key[32]; + uint8_t ci_zuc_const[32]; + uint8_t auth_key[32]; + uint8_t auth_zuc_const[32]; + }; + } st; +}; + +struct roc_se_zuc_snow3g_chain_ctx { + union { + struct roc_se_onk_zuc_chain_ctx onk_ctx; + } zuc; +}; + struct roc_se_zuc_snow3g_ctx { union { struct roc_se_onk_zuc_ctx 
onk_ctx; @@ -275,9 +313,15 @@ struct roc_se_fc_params { PLT_STATIC_ASSERT((offsetof(struct roc_se_fc_params, aad_buf) % 128) == 0); -#define ROC_SE_PDCP_ALG_TYPE_ZUC 0 -#define ROC_SE_PDCP_ALG_TYPE_SNOW3G 1 -#define ROC_SE_PDCP_ALG_TYPE_AES_CTR 2 +#define ROC_SE_PDCP_ALG_TYPE_ZUC 0 +#define ROC_SE_PDCP_ALG_TYPE_SNOW3G 1 +#define ROC_SE_PDCP_ALG_TYPE_AES_CTR 2 +#define ROC_SE_PDCP_ALG_TYPE_AES_CMAC 3 +#define ROC_SE_PDCP_CHAIN_ALG_TYPE_SNOW3G 1 +#define ROC_SE_PDCP_CHAIN_ALG_TYPE_ZUC 3 + +#define ROC_SE_PDCP_CHAIN_CTX_LFSR 0 +#define ROC_SE_PDCP_CHAIN_CTX_KEY_IV 1 struct roc_se_ctx { /* Below fields are accessed by sw */ @@ -289,13 +333,17 @@ struct roc_se_ctx { uint64_t hmac : 1; uint64_t zsk_flags : 3; uint64_t k_ecb : 1; - uint64_t pdcp_alg_type : 2; - uint64_t rsvd : 21; + uint64_t pdcp_ci_alg : 2; + uint64_t pdcp_auth_alg : 2; + uint16_t ciph_then_auth : 1; + uint16_t auth_then_ciph : 1; + uint64_t rsvd : 17; union cpt_inst_w4 template_w4; /* Below fields are accessed by hardware */ union { struct roc_se_context fctx; struct roc_se_zuc_snow3g_ctx zs_ctx; + struct roc_se_zuc_snow3g_chain_ctx zs_ch_ctx; struct roc_se_kasumi_ctx k_ctx; } se_ctx; uint8_t *auth_key; diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c index 7237dacb48..80071872f1 100644 --- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c +++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c @@ -421,14 +421,39 @@ cnxk_cpt_sym_session_get_size(struct rte_cryptodev *dev __rte_unused) return sizeof(struct cnxk_se_sess); } +static bool +is_valid_pdcp_cipher_alg(struct rte_crypto_sym_xform *c_xfrm, + struct cnxk_se_sess *sess) +{ + switch (c_xfrm->cipher.algo) { + case RTE_CRYPTO_CIPHER_SNOW3G_UEA2: + case RTE_CRYPTO_CIPHER_ZUC_EEA3: + break; + case RTE_CRYPTO_CIPHER_AES_CTR: + sess->aes_ctr_eea2 = 1; + break; + default: + return false; + } + + return true; +} + static int -cnxk_sess_fill(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess) +cnxk_sess_fill(struct roc_cpt *roc_cpt, struct rte_crypto_sym_xform *xform, + struct cnxk_se_sess *sess) { struct rte_crypto_sym_xform *aead_xfrm = NULL; struct rte_crypto_sym_xform *c_xfrm = NULL; struct rte_crypto_sym_xform *a_xfrm = NULL; + bool pdcp_chain_supported = false; bool ciph_then_auth = false; + if (roc_cpt->cpt_revision == ROC_CPT_REVISION_ID_96XX_B0 || + roc_cpt->cpt_revision == ROC_CPT_REVISION_ID_96XX_C0 || + roc_cpt->cpt_revision == ROC_CPT_REVISION_ID_98XX) + pdcp_chain_supported = true; + if (xform == NULL) return -EINVAL; @@ -506,6 +531,32 @@ cnxk_sess_fill(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess) /* Cipher then auth */ if (ciph_then_auth) { + if (c_xfrm->cipher.op == RTE_CRYPTO_CIPHER_OP_DECRYPT) { + if (a_xfrm->auth.op != RTE_CRYPTO_AUTH_OP_VERIFY) + return -EINVAL; + sess->auth_first = 1; + switch (a_xfrm->auth.algo) { + case RTE_CRYPTO_AUTH_SHA1_HMAC: + switch (c_xfrm->cipher.algo) { + case RTE_CRYPTO_CIPHER_AES_CBC: + break; + default: + return -ENOTSUP; + } + break; + case RTE_CRYPTO_AUTH_SNOW3G_UIA2: + case RTE_CRYPTO_AUTH_ZUC_EIA3: + case RTE_CRYPTO_AUTH_AES_CMAC: + if (!pdcp_chain_supported || + !is_valid_pdcp_cipher_alg(c_xfrm, sess)) + return -ENOTSUP; + break; + default: + return -ENOTSUP; + } + } + sess->roc_se_ctx.ciph_then_auth = 1; + sess->chained_op = 1; if (fill_sess_cipher(c_xfrm, sess)) return -ENOTSUP; if (fill_sess_auth(a_xfrm, sess)) @@ -517,6 +568,9 @@ cnxk_sess_fill(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess) /* else */ if (c_xfrm->cipher.op == 
RTE_CRYPTO_CIPHER_OP_ENCRYPT) { + if (a_xfrm->auth.op != RTE_CRYPTO_AUTH_OP_GENERATE) + return -EINVAL; + sess->auth_first = 1; switch (a_xfrm->auth.algo) { case RTE_CRYPTO_AUTH_SHA1_HMAC: switch (c_xfrm->cipher.algo) { @@ -526,11 +580,20 @@ cnxk_sess_fill(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess) return -ENOTSUP; } break; + case RTE_CRYPTO_AUTH_SNOW3G_UIA2: + case RTE_CRYPTO_AUTH_ZUC_EIA3: + case RTE_CRYPTO_AUTH_AES_CMAC: + if (!pdcp_chain_supported || + !is_valid_pdcp_cipher_alg(c_xfrm, sess)) + return -ENOTSUP; + break; default: return -ENOTSUP; } } + sess->roc_se_ctx.auth_then_ciph = 1; + sess->chained_op = 1; if (fill_sess_auth(a_xfrm, sess)) return -ENOTSUP; if (fill_sess_cipher(c_xfrm, sess)) @@ -547,7 +610,7 @@ cnxk_cpt_inst_w7_get(struct cnxk_se_sess *sess, struct roc_cpt *roc_cpt) inst_w7.s.cptr = (uint64_t)&sess->roc_se_ctx.se_ctx; /* Set the engine group */ - if (sess->zsk_flag || sess->chacha_poly) + if (sess->zsk_flag || sess->chacha_poly || sess->aes_ctr_eea2) inst_w7.s.egrp = roc_cpt->eng_grp[CPT_ENG_TYPE_SE]; else inst_w7.s.egrp = roc_cpt->eng_grp[CPT_ENG_TYPE_IE]; @@ -574,7 +637,7 @@ sym_session_configure(struct roc_cpt *roc_cpt, int driver_id, sess_priv = priv; - ret = cnxk_sess_fill(xform, sess_priv); + ret = cnxk_sess_fill(roc_cpt, xform, sess_priv); if (ret) goto priv_put; diff --git a/drivers/crypto/cnxk/cnxk_se.h b/drivers/crypto/cnxk/cnxk_se.h index ff98d9b553..f297adf89b 100644 --- a/drivers/crypto/cnxk/cnxk_se.h +++ b/drivers/crypto/cnxk/cnxk_se.h @@ -24,7 +24,12 @@ struct cnxk_se_sess { uint16_t chacha_poly : 1; uint16_t is_null : 1; uint16_t is_gmac : 1; - uint16_t rsvd1 : 3; + uint16_t chained_op : 1; + uint16_t auth_first : 1; + uint16_t aes_ctr_eea2 : 1; + uint16_t zs_cipher : 4; + uint16_t zs_auth : 4; + uint16_t rsvd2 : 8; uint16_t aad_length; uint8_t mac_len; uint8_t iv_length; @@ -63,6 +68,11 @@ pdcp_iv_copy(uint8_t *iv_d, uint8_t *iv_s, const uint8_t pdcp_alg_type, uint32_t *iv_s_temp, iv_temp[4]; int j; + if (unlikely(iv_s == NULL)) { + memset(iv_d, 0, 16); + return; + } + if (pdcp_alg_type == ROC_SE_PDCP_ALG_TYPE_SNOW3G) { /* * DPDK seems to provide it in form of IV3 IV2 IV1 IV0 @@ -74,7 +84,8 @@ pdcp_iv_copy(uint8_t *iv_d, uint8_t *iv_s, const uint8_t pdcp_alg_type, for (j = 0; j < 4; j++) iv_temp[j] = iv_s_temp[3 - j]; memcpy(iv_d, iv_temp, 16); - } else if (pdcp_alg_type == ROC_SE_PDCP_ALG_TYPE_ZUC) { + } else if ((pdcp_alg_type == ROC_SE_PDCP_ALG_TYPE_ZUC) || + pdcp_alg_type == ROC_SE_PDCP_ALG_TYPE_AES_CTR) { if (pack_iv) { cpt_pack_iv(iv_s, iv_d); memcpy(iv_d + 6, iv_s + 8, 17); @@ -997,6 +1008,110 @@ cpt_dec_hmac_prep(uint32_t flags, uint64_t d_offs, uint64_t d_lens, return 0; } +static __rte_always_inline int +cpt_pdcp_chain_alg_prep(uint32_t req_flags, uint64_t d_offs, uint64_t d_lens, + struct roc_se_fc_params *params, + struct cpt_inst_s *inst) +{ + uint32_t encr_offset, auth_offset, iv_offset = 0; + uint8_t *auth_iv = NULL, *cipher_iv = NULL; + uint32_t encr_data_len, auth_data_len; + uint8_t pdcp_ci_alg, pdcp_auth_alg; + union cpt_inst_w4 cpt_inst_w4; + struct roc_se_ctx *se_ctx; + const int iv_len = 32; + uint32_t mac_len = 0; + uint8_t pack_iv = 0; + void *offset_vaddr; + int32_t inputlen; + void *dm_vaddr; + uint8_t *iv_d; + + if (unlikely((!(req_flags & ROC_SE_SINGLE_BUF_INPLACE)) || + (!(req_flags & ROC_SE_SINGLE_BUF_HEADROOM)))) { + plt_dp_err("Scatter gather mode is not supported"); + return -1; + } + + encr_offset = ROC_SE_ENCR_OFFSET(d_offs); + auth_offset = ROC_SE_AUTH_OFFSET(d_offs); + + if (auth_offset != 
encr_offset) { + plt_dp_err("encr_offset and auth_offset are not same"); + plt_dp_err("enc_offset: %d", encr_offset); + plt_dp_err("auth_offset: %d", auth_offset); + return -1; + } + + if (unlikely(encr_offset >> 16)) { + plt_dp_err("Offset not supported"); + plt_dp_err("enc_offset: %d", encr_offset); + return -1; + } + + se_ctx = params->ctx_buf.vaddr; + mac_len = se_ctx->mac_len; + pdcp_ci_alg = se_ctx->pdcp_ci_alg; + pdcp_auth_alg = se_ctx->pdcp_auth_alg; + + encr_data_len = ROC_SE_ENCR_DLEN(d_lens); + auth_data_len = ROC_SE_AUTH_DLEN(d_lens); + + if ((auth_data_len + mac_len) != encr_data_len) { + plt_dp_err("(auth_data_len + mac_len) != encr_data_len"); + plt_dp_err("auth_data_len: %d", auth_data_len); + plt_dp_err("encr_data_len: %d", encr_data_len); + plt_dp_err("mac_len: %d", mac_len); + return -1; + } + + cpt_inst_w4.s.opcode_major = ROC_SE_MAJOR_OP_PDCP_CHAIN; + cpt_inst_w4.s.opcode_minor = se_ctx->template_w4.s.opcode_minor; + + cpt_inst_w4.s.param1 = auth_data_len; + cpt_inst_w4.s.param2 = 0; + + if (likely(params->auth_iv_len)) + auth_iv = params->auth_iv_buf; + + if (likely(params->cipher_iv_len)) + cipher_iv = params->iv_buf; + + encr_offset += iv_len; + + if (se_ctx->auth_then_ciph) + inputlen = encr_offset + auth_data_len; + else + inputlen = encr_offset + encr_data_len; + + dm_vaddr = params->bufs[0].vaddr; + + /* Use Direct mode */ + + offset_vaddr = (uint64_t *)((uint8_t *)dm_vaddr - ROC_SE_OFF_CTRL_LEN - + iv_len); + + /* DPTR */ + inst->dptr = (uint64_t)offset_vaddr; + /* RPTR should just exclude offset control word */ + inst->rptr = (uint64_t)dm_vaddr - iv_len; + + cpt_inst_w4.s.dlen = inputlen + ROC_SE_OFF_CTRL_LEN; + + *(uint64_t *)offset_vaddr = rte_cpu_to_be_64( + ((uint64_t)(iv_offset) << 16) | ((uint64_t)(encr_offset))); + + iv_d = ((uint8_t *)offset_vaddr + ROC_SE_OFF_CTRL_LEN); + pdcp_iv_copy(iv_d, cipher_iv, pdcp_ci_alg, pack_iv); + + iv_d = ((uint8_t *)offset_vaddr + ROC_SE_OFF_CTRL_LEN + 16); + pdcp_iv_copy(iv_d, auth_iv, pdcp_auth_alg, pack_iv); + + inst->w4.u64 = cpt_inst_w4.u64; + + return 0; +} + static __rte_always_inline int cpt_pdcp_alg_prep(uint32_t req_flags, uint64_t d_offs, uint64_t d_lens, struct roc_se_fc_params *params, struct cpt_inst_s *inst) @@ -1018,7 +1133,6 @@ cpt_pdcp_alg_prep(uint32_t req_flags, uint64_t d_offs, uint64_t d_lens, se_ctx = params->ctx_buf.vaddr; flags = se_ctx->zsk_flags; mac_len = se_ctx->mac_len; - pdcp_alg_type = se_ctx->pdcp_alg_type; cpt_inst_w4.s.opcode_major = ROC_SE_MAJOR_OP_PDCP; cpt_inst_w4.s.opcode_minor = se_ctx->template_w4.s.opcode_minor; @@ -1032,8 +1146,9 @@ cpt_pdcp_alg_prep(uint32_t req_flags, uint64_t d_offs, uint64_t d_lens, */ auth_data_len = ROC_SE_AUTH_DLEN(d_lens); auth_offset = ROC_SE_AUTH_OFFSET(d_offs); + pdcp_alg_type = se_ctx->pdcp_auth_alg; - if (se_ctx->pdcp_alg_type != ROC_SE_PDCP_ALG_TYPE_AES_CTR) { + if (pdcp_alg_type != ROC_SE_PDCP_ALG_TYPE_AES_CMAC) { iv_len = params->auth_iv_len; if (iv_len == 25) { @@ -1067,6 +1182,7 @@ cpt_pdcp_alg_prep(uint32_t req_flags, uint64_t d_offs, uint64_t d_lens, } else { iv_s = params->iv_buf; iv_len = params->cipher_iv_len; + pdcp_alg_type = se_ctx->pdcp_ci_alg; if (iv_len == 25) { roc_se_zuc_bytes_swap(iv_s, iv_len); @@ -1609,6 +1725,9 @@ cpt_fc_dec_hmac_prep(uint32_t flags, uint64_t d_offs, uint64_t d_lens, ret = cpt_pdcp_alg_prep(flags, d_offs, d_lens, fc_params, inst); } else if (fc_type == ROC_SE_KASUMI) { ret = cpt_kasumi_dec_prep(d_offs, d_lens, fc_params, inst); + } else if (fc_type == ROC_SE_PDCP_CHAIN) { + ret = cpt_pdcp_chain_alg_prep(flags, 
d_offs, d_lens, fc_params, + inst); } /* @@ -1640,6 +1759,9 @@ cpt_fc_enc_hmac_prep(uint32_t flags, uint64_t d_offs, uint64_t d_lens, inst); } else if (fc_type == ROC_SE_HASH_HMAC) { ret = cpt_digest_gen_prep(flags, d_lens, fc_params, inst); + } else if (fc_type == ROC_SE_PDCP_CHAIN) { + ret = cpt_pdcp_chain_alg_prep(flags, d_offs, d_lens, fc_params, + inst); } return ret; @@ -1713,10 +1835,10 @@ fill_sess_aead(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess) static __rte_always_inline int fill_sess_cipher(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess) { + uint8_t zsk_flag = 0, zs_cipher = 0, aes_ctr = 0, is_null = 0; struct rte_crypto_cipher_xform *c_form; roc_se_cipher_type enc_type = 0; /* NULL Cipher type */ uint32_t cipher_key_len = 0; - uint8_t zsk_flag = 0, aes_ctr = 0, is_null = 0; c_form = &xform->cipher; @@ -1750,28 +1872,37 @@ fill_sess_cipher(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess) cipher_key_len = 8; break; case RTE_CRYPTO_CIPHER_AES_CTR: - enc_type = ROC_SE_AES_CTR; + if (sess->aes_ctr_eea2) { + enc_type = ROC_SE_AES_CTR_EEA2; + } else { + enc_type = ROC_SE_AES_CTR; + aes_ctr = 1; + } cipher_key_len = 16; - aes_ctr = 1; break; case RTE_CRYPTO_CIPHER_NULL: enc_type = 0; is_null = 1; break; case RTE_CRYPTO_CIPHER_KASUMI_F8: + if (sess->chained_op) + return -ENOTSUP; enc_type = ROC_SE_KASUMI_F8_ECB; cipher_key_len = 16; zsk_flag = ROC_SE_K_F8; + zs_cipher = ROC_SE_K_F8; break; case RTE_CRYPTO_CIPHER_SNOW3G_UEA2: enc_type = ROC_SE_SNOW3G_UEA2; cipher_key_len = 16; zsk_flag = ROC_SE_ZS_EA; + zs_cipher = ROC_SE_ZS_EA; break; case RTE_CRYPTO_CIPHER_ZUC_EEA3: enc_type = ROC_SE_ZUC_EEA3; cipher_key_len = c_form->key.length; zsk_flag = ROC_SE_ZS_EA; + zs_cipher = ROC_SE_ZS_EA; break; case RTE_CRYPTO_CIPHER_AES_XTS: enc_type = ROC_SE_AES_XTS; @@ -1802,7 +1933,19 @@ fill_sess_cipher(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess) return -1; } + if (zsk_flag && sess->roc_se_ctx.ciph_then_auth) { + struct rte_crypto_auth_xform *a_form; + a_form = &xform->next->auth; + if (c_form->op != RTE_CRYPTO_CIPHER_OP_DECRYPT && + a_form->op != RTE_CRYPTO_AUTH_OP_VERIFY) { + plt_dp_err("Crypto: PDCP cipher then auth must use" + " options: decrypt and verify"); + return -EINVAL; + } + } + sess->zsk_flag = zsk_flag; + sess->zs_cipher = zs_cipher; sess->aes_gcm = 0; sess->aes_ctr = aes_ctr; sess->iv_offset = c_form->iv.offset; @@ -1822,9 +1965,9 @@ fill_sess_cipher(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess) static __rte_always_inline int fill_sess_auth(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess) { + uint8_t zsk_flag = 0, zs_auth = 0, aes_gcm = 0, is_null = 0; struct rte_crypto_auth_xform *a_form; roc_se_auth_type auth_type = 0; /* NULL Auth type */ - uint8_t zsk_flag = 0, aes_gcm = 0, is_null = 0; if (xform->auth.algo == RTE_CRYPTO_AUTH_AES_GMAC) return fill_sess_gmac(xform, sess); @@ -1879,20 +2022,25 @@ fill_sess_auth(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess) auth_type = ROC_SE_MD5_TYPE; break; case RTE_CRYPTO_AUTH_KASUMI_F9: + if (sess->chained_op) + return -ENOTSUP; auth_type = ROC_SE_KASUMI_F9_ECB; /* * Indicate that direction needs to be taken out * from end of src */ zsk_flag = ROC_SE_K_F9; + zs_auth = ROC_SE_K_F9; break; case RTE_CRYPTO_AUTH_SNOW3G_UIA2: auth_type = ROC_SE_SNOW3G_UIA2; zsk_flag = ROC_SE_ZS_IA; + zs_auth = ROC_SE_ZS_IA; break; case RTE_CRYPTO_AUTH_ZUC_EIA3: auth_type = ROC_SE_ZUC_EIA3; zsk_flag = ROC_SE_ZS_IA; + zs_auth = ROC_SE_ZS_IA; break; case 
RTE_CRYPTO_AUTH_NULL: auth_type = 0; @@ -1912,7 +2060,19 @@ fill_sess_auth(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess) return -1; } + if (zsk_flag && sess->roc_se_ctx.auth_then_ciph) { + struct rte_crypto_cipher_xform *c_form; + c_form = &xform->next->cipher; + if (c_form->op != RTE_CRYPTO_CIPHER_OP_ENCRYPT && + a_form->op != RTE_CRYPTO_AUTH_OP_GENERATE) { + plt_dp_err("Crypto: PDCP auth then cipher must use" + " options: encrypt and generate"); + return -EINVAL; + } + } + sess->zsk_flag = zsk_flag; + sess->zs_auth = zs_auth; sess->aes_gcm = aes_gcm; sess->mac_len = a_form->digest_length; sess->is_null = is_null; @@ -2121,11 +2281,15 @@ fill_fc_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess, uint8_t inplace = 1; #endif struct roc_se_fc_params fc_params; + bool chain = sess->chained_op; char src[SRC_IOV_SIZE]; char dst[SRC_IOV_SIZE]; uint32_t iv_buf[4]; + bool pdcp_chain; int ret; + pdcp_chain = chain && (sess->zs_auth || sess->zs_cipher); + fc_params.cipher_iv_len = sess->iv_length; fc_params.auth_iv_len = sess->auth_iv_length; @@ -2143,10 +2307,11 @@ fill_fc_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess, } } - if (sess->zsk_flag) { - fc_params.auth_iv_buf = rte_crypto_op_ctod_offset( - cop, uint8_t *, sess->auth_iv_offset); - if (sess->zsk_flag != ROC_SE_ZS_EA) + if (sess->zsk_flag || sess->zs_auth) { + if (sess->auth_iv_length) + fc_params.auth_iv_buf = rte_crypto_op_ctod_offset( + cop, uint8_t *, sess->auth_iv_offset); + if ((!chain) && (sess->zsk_flag != ROC_SE_ZS_EA)) inplace = 0; } m_src = sym_op->m_src; @@ -2203,17 +2368,35 @@ fill_fc_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess, } } } else { - d_offs = sym_op->cipher.data.offset; - d_lens = sym_op->cipher.data.length; - mc_hash_off = - sym_op->cipher.data.offset + sym_op->cipher.data.length; - d_offs = (d_offs << 16) | sym_op->auth.data.offset; - d_lens = (d_lens << 32) | sym_op->auth.data.length; - - if (mc_hash_off < - (sym_op->auth.data.offset + sym_op->auth.data.length)) { - mc_hash_off = (sym_op->auth.data.offset + - sym_op->auth.data.length); + uint32_t ci_data_length = sym_op->cipher.data.length; + uint32_t ci_data_offset = sym_op->cipher.data.offset; + uint32_t a_data_length = sym_op->auth.data.length; + uint32_t a_data_offset = sym_op->auth.data.offset; + + if (pdcp_chain) { + if (sess->zs_cipher) { + ci_data_length /= 8; + ci_data_offset /= 8; + } + if (sess->zs_auth) { + a_data_length /= 8; + a_data_offset /= 8; + } + } + + d_offs = ci_data_offset; + d_offs = (d_offs << 16) | a_data_offset; + + d_lens = ci_data_length; + d_lens = (d_lens << 32) | a_data_length; + + if (sess->auth_first) + mc_hash_off = a_data_offset + a_data_length; + else + mc_hash_off = ci_data_offset + ci_data_length; + + if (mc_hash_off < (a_data_offset + a_data_length)) { + mc_hash_off = (a_data_offset + a_data_length); } /* for gmac, salt should be updated like in gcm */ if (unlikely(sess->is_gmac)) { @@ -2247,7 +2430,7 @@ fill_fc_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess, } fc_params.ctx_buf.vaddr = &sess->roc_se_ctx; - if (!(op_minor & ROC_SE_FC_MINOR_OP_HMAC_FIRST) && + if (!(sess->auth_first) && (!pdcp_chain) && unlikely(sess->is_null || sess->cpt_op == ROC_SE_OP_DECODE)) inplace = 0; @@ -2304,8 +2487,8 @@ fill_fc_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess, if (unlikely(!((flags & ROC_SE_SINGLE_BUF_INPLACE) && (flags & ROC_SE_SINGLE_BUF_HEADROOM) && - ((ctx->fc_type == ROC_SE_FC_GEN) || - (ctx->fc_type == ROC_SE_PDCP))))) { + ((ctx->fc_type != 
ROC_SE_KASUMI) && + (ctx->fc_type != ROC_SE_HASH_HMAC))))) { mdata = alloc_op_meta(&fc_params.meta_buf, m_info->mlen, m_info->pool, infl_req); if (mdata == NULL) { From patchwork Mon Jun 20 12:26:54 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tejasree Kondoj X-Patchwork-Id: 113118 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id ECD18A0545; Mon, 20 Jun 2022 14:27:20 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 1885C42823; Mon, 20 Jun 2022 14:27:09 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id B5985427F8 for ; Mon, 20 Jun 2022 14:27:05 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.17.1.5/8.17.1.5) with ESMTP id 25K9nXr7011117 for ; Mon, 20 Jun 2022 05:27:04 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=aYG58oce8bz+5JUO5XWB3X5Tm/ZqX8lL+lOMlUpaehY=; b=VaJ/31uQ9u4Un0PYRVCY8QhvywgKRSVdGcAQZzkNFtkXfVGvHpaMyIc9j3P1IPfcP4A3 //maXu+Y5X4G9cHbZByCtzc1yTr0054GFB2M9hC4ox8vYunoi0vW07REddCSg/159UPZ lYA8xLjqmg47YMV2HgDH/P02zJeqdFNsb/6Yjh7TK3fYz6tPWjx//VyL/eOurYmu2yAN 7bzc1moPxX0W8LtiHB4cL4Ij3GAlU9mBWzHMhhT4ZgO0fTGz2CSxQWypWxjxJckj6HAo plKo6a+e2vAf9K/HDom5z+0+xeQ5nXRN8q+oqBzidn0EiLNXYzClYJcPcMXkgjHRx+Xc qg== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 3gsc2p6vpp-2 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Mon, 20 Jun 2022 05:27:04 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Mon, 20 Jun 2022 05:27:03 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Mon, 20 Jun 2022 05:27:03 -0700 Received: from hyd1554.marvell.com (unknown [10.29.57.11]) by maili.marvell.com (Postfix) with ESMTP id D7A865E6862; Mon, 20 Jun 2022 05:27:01 -0700 (PDT) From: Tejasree Kondoj To: Akhil Goyal CC: Anoob Joseph , Ankur Dwivedi , Subject: [PATCH v2 3/3] crypto/cnxk: support scatter gather mode Date: Mon, 20 Jun 2022 17:56:54 +0530 Message-ID: <20220620122654.1014994-4-ktejasree@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20220620122654.1014994-1-ktejasree@marvell.com> References: <20220620122654.1014994-1-ktejasree@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: vyHSZUtvLHkn1uKvcIjqY7ljcl1Y7W33 X-Proofpoint-GUID: vyHSZUtvLHkn1uKvcIjqY7ljcl1Y7W33 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.205,Aquarius:18.0.883,Hydra:6.0.517,FMLib:17.11.64.514 definitions=2022-06-20_05,2022-06-17_01,2022-02-23_01 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Adding scatter gather support for zuc, snow3g and aes-ctr-cmac chained operations on cn9k. 
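As an application-side illustration of the chained stream-cipher sessions
these patches target, here is a sketch against the standard rte_cryptodev
symmetric xform API; build_snow3g_chain() and the key lengths, digest length
and IV offsets are illustrative assumptions, not values mandated by the
driver:

    #include <string.h>
    #include <rte_crypto_sym.h>

    /*
     * Build an auth-then-cipher chain (SNOW3G UIA2 + SNOW3G UEA2) of the kind
     * the PDCP chain path on cn9k services. Returns the head of the chain.
     */
    static struct rte_crypto_sym_xform *
    build_snow3g_chain(struct rte_crypto_sym_xform *auth_xf,
    		   struct rte_crypto_sym_xform *cipher_xf,
    		   const uint8_t *auth_key, const uint8_t *cipher_key)
    {
    	memset(auth_xf, 0, sizeof(*auth_xf));
    	memset(cipher_xf, 0, sizeof(*cipher_xf));

    	/* Authentication first: digest generated over the plaintext. */
    	auth_xf->type = RTE_CRYPTO_SYM_XFORM_AUTH;
    	auth_xf->auth.algo = RTE_CRYPTO_AUTH_SNOW3G_UIA2;
    	auth_xf->auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
    	auth_xf->auth.key.data = auth_key;
    	auth_xf->auth.key.length = 16;
    	auth_xf->auth.digest_length = 4;
    	auth_xf->auth.iv.offset = 32;	/* per-op IV location (example) */
    	auth_xf->auth.iv.length = 16;
    	auth_xf->next = cipher_xf;

    	/* Then encryption of the plaintext plus the appended digest. */
    	cipher_xf->type = RTE_CRYPTO_SYM_XFORM_CIPHER;
    	cipher_xf->cipher.algo = RTE_CRYPTO_CIPHER_SNOW3G_UEA2;
    	cipher_xf->cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
    	cipher_xf->cipher.key.data = cipher_key;
    	cipher_xf->cipher.key.length = 16;
    	cipher_xf->cipher.iv.offset = 16;	/* per-op IV location (example) */
    	cipher_xf->cipher.iv.length = 16;
    	cipher_xf->next = NULL;

    	return auth_xf;
    }

For the encrypt direction the auth transform is chained first with GENERATE
and the cipher transform second with ENCRYPT, matching the validation added
in cnxk_sess_fill(); the enciphered region is expected to cover the generated
digest as well (auth_data_len + mac_len == encr_data_len).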
Signed-off-by: Tejasree Kondoj --- drivers/crypto/cnxk/cnxk_se.h | 149 +++++++++++++++++++++++++++++----- 1 file changed, 128 insertions(+), 21 deletions(-) diff --git a/drivers/crypto/cnxk/cnxk_se.h b/drivers/crypto/cnxk/cnxk_se.h index f297adf89b..a75003f2c6 100644 --- a/drivers/crypto/cnxk/cnxk_se.h +++ b/drivers/crypto/cnxk/cnxk_se.h @@ -1027,12 +1027,6 @@ cpt_pdcp_chain_alg_prep(uint32_t req_flags, uint64_t d_offs, uint64_t d_lens, void *dm_vaddr; uint8_t *iv_d; - if (unlikely((!(req_flags & ROC_SE_SINGLE_BUF_INPLACE)) || - (!(req_flags & ROC_SE_SINGLE_BUF_HEADROOM)))) { - plt_dp_err("Scatter gather mode is not supported"); - return -1; - } - encr_offset = ROC_SE_ENCR_OFFSET(d_offs); auth_offset = ROC_SE_AUTH_OFFSET(d_offs); @@ -1084,28 +1078,141 @@ cpt_pdcp_chain_alg_prep(uint32_t req_flags, uint64_t d_offs, uint64_t d_lens, else inputlen = encr_offset + encr_data_len; - dm_vaddr = params->bufs[0].vaddr; + if (likely(((req_flags & ROC_SE_SINGLE_BUF_INPLACE)) && + ((req_flags & ROC_SE_SINGLE_BUF_HEADROOM)))) { + + dm_vaddr = params->bufs[0].vaddr; - /* Use Direct mode */ + /* Use Direct mode */ + + offset_vaddr = (uint64_t *)((uint8_t *)dm_vaddr - + ROC_SE_OFF_CTRL_LEN - iv_len); - offset_vaddr = (uint64_t *)((uint8_t *)dm_vaddr - ROC_SE_OFF_CTRL_LEN - - iv_len); + /* DPTR */ + inst->dptr = (uint64_t)offset_vaddr; + /* RPTR should just exclude offset control word */ + inst->rptr = (uint64_t)dm_vaddr - iv_len; - /* DPTR */ - inst->dptr = (uint64_t)offset_vaddr; - /* RPTR should just exclude offset control word */ - inst->rptr = (uint64_t)dm_vaddr - iv_len; + cpt_inst_w4.s.dlen = inputlen + ROC_SE_OFF_CTRL_LEN; - cpt_inst_w4.s.dlen = inputlen + ROC_SE_OFF_CTRL_LEN; + *(uint64_t *)offset_vaddr = + rte_cpu_to_be_64(((uint64_t)(iv_offset) << 16) | + ((uint64_t)(encr_offset))); - *(uint64_t *)offset_vaddr = rte_cpu_to_be_64( - ((uint64_t)(iv_offset) << 16) | ((uint64_t)(encr_offset))); + iv_d = ((uint8_t *)offset_vaddr + ROC_SE_OFF_CTRL_LEN); + pdcp_iv_copy(iv_d, cipher_iv, pdcp_ci_alg, pack_iv); + + iv_d = ((uint8_t *)offset_vaddr + ROC_SE_OFF_CTRL_LEN + 16); + pdcp_iv_copy(iv_d, auth_iv, pdcp_auth_alg, pack_iv); + + } else { + + struct roc_se_sglist_comp *scatter_comp, *gather_comp; + void *m_vaddr = params->meta_buf.vaddr; + uint32_t i, g_size_bytes, s_size_bytes; + uint8_t *in_buffer; + uint32_t size; + + /* save space for IV */ + offset_vaddr = m_vaddr; + + m_vaddr = (uint8_t *)m_vaddr + ROC_SE_OFF_CTRL_LEN + + RTE_ALIGN_CEIL(iv_len, 8); + + cpt_inst_w4.s.opcode_major |= (uint64_t)ROC_SE_DMA_MODE; + + /* DPTR has SG list */ + in_buffer = m_vaddr; + + ((uint16_t *)in_buffer)[0] = 0; + ((uint16_t *)in_buffer)[1] = 0; + + gather_comp = + (struct roc_se_sglist_comp *)((uint8_t *)m_vaddr + 8); + + /* Input Gather List */ + i = 0; + + /* Offset control word followed by iv */ + + i = fill_sg_comp(gather_comp, i, (uint64_t)offset_vaddr, + ROC_SE_OFF_CTRL_LEN + iv_len); + + *(uint64_t *)offset_vaddr = + rte_cpu_to_be_64(((uint64_t)(iv_offset) << 16) | + ((uint64_t)(encr_offset))); + + iv_d = ((uint8_t *)offset_vaddr + ROC_SE_OFF_CTRL_LEN); + pdcp_iv_copy(iv_d, cipher_iv, pdcp_ci_alg, pack_iv); + + iv_d = ((uint8_t *)offset_vaddr + ROC_SE_OFF_CTRL_LEN + 16); + pdcp_iv_copy(iv_d, auth_iv, pdcp_auth_alg, pack_iv); + + /* input data */ + size = inputlen - iv_len; + if (size) { + i = fill_sg_comp_from_iov(gather_comp, i, + params->src_iov, 0, &size, + NULL, 0); + if (unlikely(size)) { + plt_dp_err("Insufficient buffer space," + " size %d needed", + size); + return -1; + } + } + ((uint16_t 
*)in_buffer)[2] = rte_cpu_to_be_16(i); + g_size_bytes = + ((i + 3) / 4) * sizeof(struct roc_se_sglist_comp); + + /* + * Output Scatter List + */ - iv_d = ((uint8_t *)offset_vaddr + ROC_SE_OFF_CTRL_LEN); - pdcp_iv_copy(iv_d, cipher_iv, pdcp_ci_alg, pack_iv); + i = 0; + scatter_comp = + (struct roc_se_sglist_comp *)((uint8_t *)gather_comp + + g_size_bytes); + + if (iv_len) { + i = fill_sg_comp(scatter_comp, i, + (uint64_t)offset_vaddr + + ROC_SE_OFF_CTRL_LEN, + iv_len); + } - iv_d = ((uint8_t *)offset_vaddr + ROC_SE_OFF_CTRL_LEN + 16); - pdcp_iv_copy(iv_d, auth_iv, pdcp_auth_alg, pack_iv); + /* Add output data */ + if (se_ctx->ciph_then_auth && + (req_flags & ROC_SE_VALID_MAC_BUF)) + size = inputlen - iv_len; + else + /* Output including mac */ + size = inputlen - iv_len + mac_len; + + if (size) { + i = fill_sg_comp_from_iov(scatter_comp, i, + params->dst_iov, 0, &size, + NULL, 0); + + if (unlikely(size)) { + plt_dp_err("Insufficient buffer space," + " size %d needed", + size); + return -1; + } + } + + ((uint16_t *)in_buffer)[3] = rte_cpu_to_be_16(i); + s_size_bytes = + ((i + 3) / 4) * sizeof(struct roc_se_sglist_comp); + + size = g_size_bytes + s_size_bytes + ROC_SE_SG_LIST_HDR_SIZE; + + /* This is DPTR len in case of SG mode */ + cpt_inst_w4.s.dlen = size; + + inst->dptr = (uint64_t)in_buffer; + } inst->w4.u64 = cpt_inst_w4.u64;