From patchwork Tue Jun 20 10:20:59 2023
X-Patchwork-Submitter: Tejasree Kondoj
X-Patchwork-Id: 128837
X-Patchwork-Delegate: gakhil@marvell.com
From: Tejasree Kondoj
To: Akhil Goyal
CC: Anoob Joseph, Aakash Sasidharan, Gowrishankar Muthukrishnan, Vidya Sagar Velumuri
Subject: [PATCH v3 1/8] crypto/cnxk: check for null pointer
Date: Tue, 20 Jun 2023 15:50:59 +0530
Message-ID: <20230620102106.3970544-2-ktejasree@marvell.com>
In-Reply-To: <20230620102106.3970544-1-ktejasree@marvell.com>
List-Id: DPDK patches and discussions

Checking for NULL pointer dereference.
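For reference, the check this patch adds can be restated as a standalone helper over the public xform types. The sketch below is illustrative only (it is not code from the patch); it encodes the same PDCP auth-then-cipher policy as the hunk that follows, dereferencing xform->next only after the new NULL check.

#include <stdbool.h>
#include <rte_crypto_sym.h>

/* Illustrative sketch: validate a PDCP auth-then-cipher chain the way the
 * fixed fill_sess_auth() does, touching xform->next only when it is non-NULL.
 */
static bool
pdcp_auth_then_cipher_ok(const struct rte_crypto_sym_xform *xform)
{
	const struct rte_crypto_auth_xform *a_form = &xform->auth;
	const struct rte_crypto_cipher_xform *c_form;

	if (xform->next == NULL)
		return true; /* no chained cipher transform, nothing to validate */

	c_form = &xform->next->cipher;

	/* Auth-then-cipher must pair AUTH_OP_GENERATE with CIPHER_OP_ENCRYPT. */
	if (c_form->op != RTE_CRYPTO_CIPHER_OP_ENCRYPT &&
	    a_form->op != RTE_CRYPTO_AUTH_OP_GENERATE)
		return false;

	return true;
}

Previously the chained cipher xform was dereferenced unconditionally, which could fault for sessions created with a single auth xform.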
Signed-off-by: Tejasree Kondoj --- drivers/crypto/cnxk/cnxk_se.h | 14 ++++++++------ 1 file changed, 8 insertions(+), 6 deletions(-) diff --git a/drivers/crypto/cnxk/cnxk_se.h b/drivers/crypto/cnxk/cnxk_se.h index c66ab80749..a85e4c5170 100644 --- a/drivers/crypto/cnxk/cnxk_se.h +++ b/drivers/crypto/cnxk/cnxk_se.h @@ -2185,12 +2185,14 @@ fill_sess_auth(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess) if (zsk_flag && sess->roc_se_ctx.auth_then_ciph) { struct rte_crypto_cipher_xform *c_form; - c_form = &xform->next->cipher; - if (c_form->op != RTE_CRYPTO_CIPHER_OP_ENCRYPT && - a_form->op != RTE_CRYPTO_AUTH_OP_GENERATE) { - plt_dp_err("Crypto: PDCP auth then cipher must use" - " options: encrypt and generate"); - return -EINVAL; + if (xform->next != NULL) { + c_form = &xform->next->cipher; + if ((c_form != NULL) && (c_form->op != RTE_CRYPTO_CIPHER_OP_ENCRYPT) && + a_form->op != RTE_CRYPTO_AUTH_OP_GENERATE) { + plt_dp_err("Crypto: PDCP auth then cipher must use" + " options: encrypt and generate"); + return -EINVAL; + } } } From patchwork Tue Jun 20 10:21:00 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tejasree Kondoj X-Patchwork-Id: 128838 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 2394342D07; Tue, 20 Jun 2023 12:21:24 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 87D5742C54; Tue, 20 Jun 2023 12:21:18 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id DEA034113F for ; Tue, 20 Jun 2023 12:21:16 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 35K9wt5N024228 for ; Tue, 20 Jun 2023 03:21:16 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=VqH42p6BgQsUFHix2v6rh1BZV4LfWPBhmzbFqKCpAjo=; b=Y4vINa+frPO8vkNhK0Jb/i87q52Aad+TYkL642vwrdobEldaP0gGjE81OkbovM1SFHZK q6T5z1/RaXNUHf3DoTQn77QW/ANy2NrhEzyA9U4nEQUoQ8PeXBl3fbJsxkc+rGyinsun Pg0nn7I+pqLaymskdX1qe7DSQFldrkhk38fBNRtiIXGg6E7HjBSjbXdZq+biltGBU7Vf ETgQKtCw98c2wvWJ8Vns2gY7fanFbzH5Mdp5414WuhUi5lUDA3tGAs1sKEEd2FIDxiI8 VLqcqgQ1YcvBsUXWKYPf0qT+6Evj8s4/1ysnZ4z/s2lLB59rl3ZKg3UMpguy0ha05vD2 Qw== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3r9cbkfd3b-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Tue, 20 Jun 2023 03:21:16 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Tue, 20 Jun 2023 03:21:14 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Tue, 20 Jun 2023 03:21:14 -0700 Received: from hyd1554.marvell.com (unknown [10.29.57.11]) by maili.marvell.com (Postfix) with ESMTP id 4E8AA3F707A; Tue, 20 Jun 2023 03:21:12 -0700 (PDT) From: Tejasree Kondoj To: Akhil Goyal CC: Anoob Joseph , Aakash Sasidharan , Gowrishankar Muthukrishnan , Vidya Sagar Velumuri , Subject: [PATCH 
v3 2/8] crypto/cnxk: remove packet length checks in crypto offload Date: Tue, 20 Jun 2023 15:51:00 +0530 Message-ID: <20230620102106.3970544-3-ktejasree@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230620102106.3970544-1-ktejasree@marvell.com> References: <20230620102106.3970544-1-ktejasree@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: NHhV1obLejg-ote-XlmiPOhtNBP9NEWa X-Proofpoint-ORIG-GUID: NHhV1obLejg-ote-XlmiPOhtNBP9NEWa X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.957,Hydra:6.0.591,FMLib:17.11.176.26 definitions=2023-06-20_06,2023-06-16_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Anoob Joseph When performing crypto offload, the packet length of the input/output buffer does not matter. The length that matters is the cipher/authentication range specified in crypto_op. Since application can request for ciphering of a small portion of the buffer, the extra comparison of buffer lengths may result in false failures during enqueue of OOP operations. Signed-off-by: Anoob Joseph --- drivers/crypto/cnxk/cnxk_se.h | 54 +++-------------------------------- 1 file changed, 4 insertions(+), 50 deletions(-) diff --git a/drivers/crypto/cnxk/cnxk_se.h b/drivers/crypto/cnxk/cnxk_se.h index a85e4c5170..87414eb131 100644 --- a/drivers/crypto/cnxk/cnxk_se.h +++ b/drivers/crypto/cnxk/cnxk_se.h @@ -2539,23 +2539,6 @@ fill_fc_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess, } if (unlikely(m_dst != NULL)) { - uint32_t pkt_len; - - /* Try to make room as much as src has */ - pkt_len = rte_pktmbuf_pkt_len(m_dst); - - if (unlikely(pkt_len < rte_pktmbuf_pkt_len(m_src))) { - pkt_len = rte_pktmbuf_pkt_len(m_src) - pkt_len; - if (!rte_pktmbuf_append(m_dst, pkt_len)) { - plt_dp_err("Not enough space in " - "m_dst %p, need %u" - " more", - m_dst, pkt_len); - ret = -EINVAL; - goto err_exit; - } - } - if (prepare_iov_from_pkt(m_dst, fc_params.dst_iov, 0)) { plt_dp_err("Prepare dst iov failed for " "m_dst %p", @@ -2650,32 +2633,18 @@ fill_pdcp_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess, fc_params.dst_iov = fc_params.src_iov = (void *)src; prepare_iov_from_pkt_inplace(m_src, &fc_params, &flags); } else { - uint32_t pkt_len; - /* Out of place processing */ + fc_params.src_iov = (void *)src; fc_params.dst_iov = (void *)dst; /* Store SG I/O in the api for reuse */ - if (prepare_iov_from_pkt(m_src, fc_params.src_iov, 0)) { + if (unlikely(prepare_iov_from_pkt(m_src, fc_params.src_iov, 0))) { plt_dp_err("Prepare src iov failed"); ret = -EINVAL; goto err_exit; } - /* Try to make room as much as src has */ - pkt_len = rte_pktmbuf_pkt_len(m_dst); - - if (unlikely(pkt_len < rte_pktmbuf_pkt_len(m_src))) { - pkt_len = rte_pktmbuf_pkt_len(m_src) - pkt_len; - if (unlikely(rte_pktmbuf_append(m_dst, pkt_len) == NULL)) { - plt_dp_err("Not enough space in m_dst %p, need %u more", m_dst, - pkt_len); - ret = -EINVAL; - goto err_exit; - } - } - if (unlikely(prepare_iov_from_pkt(m_dst, fc_params.dst_iov, 0))) { plt_dp_err("Prepare dst iov failed for m_dst %p", m_dst); ret = -EINVAL; @@ -2689,7 +2658,8 @@ fill_pdcp_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess, mdata = alloc_op_meta(&fc_params.meta_buf, m_info->mlen, m_info->pool, infl_req); if (mdata == NULL) { plt_dp_err("Could not allocate meta buffer"); - return -ENOMEM; + ret = -ENOMEM; + goto err_exit; } } 
@@ -2798,22 +2768,6 @@ fill_pdcp_chain_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess, } if (unlikely(m_dst != NULL)) { - uint32_t pkt_len; - - /* Try to make room as much as src has */ - pkt_len = rte_pktmbuf_pkt_len(m_dst); - - if (unlikely(pkt_len < rte_pktmbuf_pkt_len(m_src))) { - pkt_len = rte_pktmbuf_pkt_len(m_src) - pkt_len; - if (!rte_pktmbuf_append(m_dst, pkt_len)) { - plt_dp_err("Not enough space in m_dst " - "%p, need %u more", - m_dst, pkt_len); - ret = -EINVAL; - goto err_exit; - } - } - if (unlikely(prepare_iov_from_pkt(m_dst, fc_params.dst_iov, 0))) { plt_dp_err("Could not prepare m_dst iov %p", m_dst); ret = -EINVAL; From patchwork Tue Jun 20 10:21:01 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tejasree Kondoj X-Patchwork-Id: 128839 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 5C14542D07; Tue, 20 Jun 2023 12:21:32 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 1189542D2C; Tue, 20 Jun 2023 12:21:22 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id 5F42F42D20 for ; Tue, 20 Jun 2023 12:21:20 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 35K4hQdU027049 for ; Tue, 20 Jun 2023 03:21:19 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=2wE2FMQkR8MSrjSND/GFoeAA+dv7/d18L/fvWUggSrI=; b=bcDrIBSSq3FOvJTYXRsZVAHLcvM9DWqMLmhRf/yUnSoPDlILHFz5b5NgNa/pFEy0tjqx bFdHczGOm04TNEMU5Q9prlqNctUqmgpmoGeubu5d4pSmD7r0Rj+p10li/cNvSK2It78c MPF0IQeEQ/Ofk0JEEzahkSr9g4sNFVBeyiKs75QA8tMFLoHlA0z8xvqzyab8/ZulX59h bjhvVz8ZjkqScg1DAFVRFEd7LcRv+/I+c1LbDUGsso4Ofwp+wxLq+1+jStfP3oyHfE62 tVzY5ulcFV8qkCPNguLDMiZz48CO3imO9NuO+4NUt+UogUOrPD6PgSo7+UrqbqZeYcT4 6Q== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 3rb5b312s2-2 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Tue, 20 Jun 2023 03:21:19 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Tue, 20 Jun 2023 03:21:17 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Tue, 20 Jun 2023 03:21:17 -0700 Received: from hyd1554.marvell.com (unknown [10.29.57.11]) by maili.marvell.com (Postfix) with ESMTP id BEF7E3F7071; Tue, 20 Jun 2023 03:21:14 -0700 (PDT) From: Tejasree Kondoj To: Akhil Goyal CC: Aakash Sasidharan , Anoob Joseph , Gowrishankar Muthukrishnan , Vidya Sagar Velumuri , Subject: [PATCH v3 3/8] crypto/cnxk: use pt inst for null cipher with null auth Date: Tue, 20 Jun 2023 15:51:01 +0530 Message-ID: <20230620102106.3970544-4-ktejasree@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230620102106.3970544-1-ktejasree@marvell.com> References: <20230620102106.3970544-1-ktejasree@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: IAU3ebN5KAGx6ombqeoyKYK-oo5vIxd4 
X-Proofpoint-ORIG-GUID: IAU3ebN5KAGx6ombqeoyKYK-oo5vIxd4 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.957,Hydra:6.0.591,FMLib:17.11.176.26 definitions=2023-06-20_06,2023-06-16_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Aakash Sasidharan Use passthrough instruction for NULL cipher with NULL auth combination. Signed-off-by: Aakash Sasidharan --- drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 20 ++++---- drivers/crypto/cnxk/cnxk_se.h | 59 ++++++++++++++++-------- 2 files changed, 50 insertions(+), 29 deletions(-) diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c index d405786668..2018b0eba5 100644 --- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c +++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c @@ -526,16 +526,13 @@ cnxk_sess_fill(struct roc_cpt *roc_cpt, struct rte_crypto_sym_xform *xform, return -EINVAL; } - if ((c_xfrm == NULL || c_xfrm->cipher.algo == RTE_CRYPTO_CIPHER_NULL) && - a_xfrm != NULL && a_xfrm->auth.algo == RTE_CRYPTO_AUTH_NULL && - a_xfrm->auth.op == RTE_CRYPTO_AUTH_OP_VERIFY) { - plt_dp_err("Null cipher + null auth verify is not supported"); - return -ENOTSUP; - } + if ((aead_xfrm == NULL) && + (c_xfrm == NULL || c_xfrm->cipher.algo == RTE_CRYPTO_CIPHER_NULL) && + (a_xfrm == NULL || a_xfrm->auth.algo == RTE_CRYPTO_AUTH_NULL)) + sess->passthrough = 1; /* Cipher only */ - if (c_xfrm != NULL && - (a_xfrm == NULL || a_xfrm->auth.algo == RTE_CRYPTO_AUTH_NULL)) { + if (c_xfrm != NULL && (a_xfrm == NULL || a_xfrm->auth.algo == RTE_CRYPTO_AUTH_NULL)) { if (fill_sess_cipher(c_xfrm, sess)) return -ENOTSUP; else @@ -662,7 +659,8 @@ cnxk_cpt_inst_w7_get(struct cnxk_se_sess *sess, struct roc_cpt *roc_cpt) inst_w7.s.cptr += 8; /* Set the engine group */ - if (sess->zsk_flag || sess->aes_ctr_eea2 || sess->is_sha3 || sess->is_sm3) + if (sess->zsk_flag || sess->aes_ctr_eea2 || sess->is_sha3 || sess->is_sm3 || + sess->passthrough) inst_w7.s.egrp = roc_cpt->eng_grp[CPT_ENG_TYPE_SE]; else inst_w7.s.egrp = roc_cpt->eng_grp[CPT_ENG_TYPE_IE]; @@ -687,7 +685,9 @@ sym_session_configure(struct roc_cpt *roc_cpt, struct rte_crypto_sym_xform *xfor sess_priv->lf = roc_cpt->lf[0]; - if (sess_priv->cpt_op & ROC_SE_OP_CIPHER_MASK) { + if (sess_priv->passthrough) + thr_type = CPT_DP_THREAD_TYPE_PT; + else if (sess_priv->cpt_op & ROC_SE_OP_CIPHER_MASK) { switch (sess_priv->roc_se_ctx.fc_type) { case ROC_SE_FC_GEN: if (sess_priv->aes_gcm || sess_priv->aes_ccm || sess_priv->chacha_poly) diff --git a/drivers/crypto/cnxk/cnxk_se.h b/drivers/crypto/cnxk/cnxk_se.h index 87414eb131..ceb50fa3b6 100644 --- a/drivers/crypto/cnxk/cnxk_se.h +++ b/drivers/crypto/cnxk/cnxk_se.h @@ -24,6 +24,8 @@ enum cpt_dp_thread_type { CPT_DP_THREAD_TYPE_PDCP_CHAIN, CPT_DP_THREAD_TYPE_KASUMI, CPT_DP_THREAD_AUTH_ONLY, + CPT_DP_THREAD_GENERIC, + CPT_DP_THREAD_TYPE_PT, }; struct cnxk_se_sess { @@ -46,7 +48,8 @@ struct cnxk_se_sess { uint8_t is_sha3 : 1; uint8_t short_iv : 1; uint8_t is_sm3 : 1; - uint8_t rsvd : 5; + uint8_t passthrough : 1; + uint8_t rsvd : 4; uint8_t mac_len; uint8_t iv_length; uint8_t auth_iv_length; @@ -636,15 +639,6 @@ cpt_digest_gen_sg_ver1_prep(uint32_t flags, uint64_t d_lens, struct roc_se_fc_pa cpt_inst_w4.s.dlen = data_len; } - /* Null auth only case enters the if */ - if (unlikely(!hash_type && !ctx->enc_cipher)) { - cpt_inst_w4.s.opcode_major = 
ROC_SE_MAJOR_OP_MISC; - /* Minor op is passthrough */ - cpt_inst_w4.s.opcode_minor = 0x03; - /* Send out completion code only */ - cpt_inst_w4.s.param2 = 0x1; - } - /* DPTR has SG list */ in_buffer = m_vaddr; @@ -758,15 +752,6 @@ cpt_digest_gen_sg_ver2_prep(uint32_t flags, uint64_t d_lens, struct roc_se_fc_pa cpt_inst_w4.s.dlen = data_len; } - /* Null auth only case enters the if */ - if (unlikely(!hash_type && !ctx->enc_cipher)) { - cpt_inst_w4.s.opcode_major = ROC_SE_MAJOR_OP_MISC; - /* Minor op is passthrough */ - cpt_inst_w4.s.opcode_minor = 0x03; - /* Send out completion code only */ - cpt_inst_w4.s.param2 = 0x1; - } - /* DPTR has SG list */ /* TODO Add error check if space will be sufficient */ @@ -2376,6 +2361,7 @@ prepare_iov_from_pkt_inplace(struct rte_mbuf *pkt, iovec->buf_cnt = index; return; } + static __rte_always_inline int fill_fc_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess, struct cpt_qp_meta_info *m_info, struct cpt_inflight_req *infl_req, @@ -2592,6 +2578,38 @@ fill_fc_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess, return ret; } +static inline int +fill_passthrough_params(struct rte_crypto_op *cop, struct cpt_inst_s *inst) +{ + struct rte_crypto_sym_op *sym_op = cop->sym; + struct rte_mbuf *m_src, *m_dst; + + const union cpt_inst_w4 w4 = { + .s.opcode_major = ROC_SE_MAJOR_OP_MISC, + .s.opcode_minor = ROC_SE_MISC_MINOR_OP_PASSTHROUGH, + .s.param1 = 1, + .s.param2 = 1, + .s.dlen = 0, + }; + + m_src = sym_op->m_src; + m_dst = sym_op->m_dst; + + if (unlikely(m_dst != NULL && m_dst != m_src)) { + void *src = rte_pktmbuf_mtod_offset(m_src, void *, cop->sym->cipher.data.offset); + void *dst = rte_pktmbuf_mtod(m_dst, void *); + int data_len = cop->sym->cipher.data.length; + + rte_memcpy(dst, src, data_len); + } + + inst->w0.u64 = 0; + inst->w5.u64 = 0; + inst->w4.u64 = w4.u64; + + return 0; +} + static __rte_always_inline int fill_pdcp_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess, struct cpt_qp_meta_info *m_info, struct cpt_inflight_req *infl_req, @@ -3012,6 +3030,9 @@ cpt_sym_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op, struct cnxk_ int ret; switch (sess->dp_thr_type) { + case CPT_DP_THREAD_TYPE_PT: + ret = fill_passthrough_params(op, inst); + break; case CPT_DP_THREAD_TYPE_PDCP: ret = fill_pdcp_params(op, sess, &qp->meta_info, infl_req, inst, is_sg_ver2); break; From patchwork Tue Jun 20 10:21:02 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tejasree Kondoj X-Patchwork-Id: 128840 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id EC65E42D07; Tue, 20 Jun 2023 12:21:38 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 0433C42D30; Tue, 20 Jun 2023 12:21:24 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id A0C7C42D16 for ; Tue, 20 Jun 2023 12:21:22 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 35K9wleK024114 for ; Tue, 20 Jun 2023 03:21:22 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : 
content-transfer-encoding : content-type; s=pfpt0220; bh=D5yrDooSPeeEyhPu5UV7ZZrQjiWUeS5qWoBUuOQeE7I=; b=Qon+dYH8kmrgyIU9UqmCkyJ2pdmutFWihnw5uDZQeVy/m7SFPqeA0adn0yaiJsWGiygm 2EM9Ei6pwTx1AY96eNjSN+JMgt145FYh9jWC8idgx3WK011n2tFE+8XoLG3QNMFRvbre 49CXKNsBd2bQ/QIuK247WoLy8097XiinSge87eSg0D/o06/vBab6RkWdhjYVtwZ9/4cV q+imHQdUf4rcOxqx+/KRCN0NFRvUt2UffE4pb5U2M7Mt3kX3H/+nBvnUk4QXzT1MoVHv SB8dedDNXrceLZhUjU5ixIzaPCGgqUC7025LayQSRiEJxobwZWpLodo4V63/rAsJQf49 3w== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3r9cbkfd3s-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Tue, 20 Jun 2023 03:21:21 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Tue, 20 Jun 2023 03:21:20 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Tue, 20 Jun 2023 03:21:20 -0700 Received: from hyd1554.marvell.com (unknown [10.29.57.11]) by maili.marvell.com (Postfix) with ESMTP id 3A9B23F70D4; Tue, 20 Jun 2023 03:21:16 -0700 (PDT) From: Tejasree Kondoj To: Akhil Goyal CC: Anoob Joseph , Aakash Sasidharan , Gowrishankar Muthukrishnan , Vidya Sagar Velumuri , Subject: [PATCH v3 4/8] crypto/cnxk: enable context cache for 103XX Date: Tue, 20 Jun 2023 15:51:02 +0530 Message-ID: <20230620102106.3970544-5-ktejasree@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230620102106.3970544-1-ktejasree@marvell.com> References: <20230620102106.3970544-1-ktejasree@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: UT0ge2goLSg96CVMlpmoir6U8K_r5hJm X-Proofpoint-ORIG-GUID: UT0ge2goLSg96CVMlpmoir6U8K_r5hJm X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.957,Hydra:6.0.591,FMLib:17.11.176.26 definitions=2023-06-20_06,2023-06-16_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Enabling context cache for SE instructions on 106B0 and 103XX. 
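As a reading aid for the hunks that follow, the gate they introduce can be restated on its own. The mapping of the part names in the commit message (106B0, 103XX) to the roc_model checks is an inference from the cnxk model naming, not something stated in the patch.

#include <stdbool.h>
#include "roc_errata.h"
#include "roc_model.h"

/* Condensed restatement of the hw_ctx_cache_enable() helper added below:
 * keep CTX mode where the mixed-ctx errata already required it, and also
 * enable the SE context cache on cn10ka B0 (presumably the "106B0" above)
 * and cn10kb A0 (presumably the "103XX" above).
 */
static inline bool
se_ctx_cache_usable(void)
{
	return roc_errata_cpt_hang_on_mixed_ctx_val() ||
	       roc_model_is_cn10ka_b0() || roc_model_is_cn10kb_a0();
}

The same condition gates all three call sites in the patch: setting ctx_val in the instruction word, initialising the context at session create, and flushing/invalidating it at session clear.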
Signed-off-by: Tejasree Kondoj --- drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 6 +++--- drivers/crypto/cnxk/cnxk_cryptodev_ops.h | 8 ++++++++ 2 files changed, 11 insertions(+), 3 deletions(-) diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c index 2018b0eba5..d0c99d37e8 100644 --- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c +++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c @@ -653,7 +653,7 @@ cnxk_cpt_inst_w7_get(struct cnxk_se_sess *sess, struct roc_cpt *roc_cpt) inst_w7.s.cptr = (uint64_t)&sess->roc_se_ctx.se_ctx; - if (roc_errata_cpt_hang_on_mixed_ctx_val()) + if (hw_ctx_cache_enable()) inst_w7.s.ctx_val = 1; else inst_w7.s.cptr += 8; @@ -729,7 +729,7 @@ sym_session_configure(struct roc_cpt *roc_cpt, struct rte_crypto_sym_xform *xfor sess_priv->cpt_inst_w7 = cnxk_cpt_inst_w7_get(sess_priv, roc_cpt); - if (roc_errata_cpt_hang_on_mixed_ctx_val()) + if (hw_ctx_cache_enable()) roc_se_ctx_init(&sess_priv->roc_se_ctx); return 0; @@ -755,7 +755,7 @@ sym_session_clear(struct rte_cryptodev_sym_session *sess, bool is_session_less) struct cnxk_se_sess *sess_priv = (struct cnxk_se_sess *)sess; /* Trigger CTX flush + invalidate to remove from CTX_CACHE */ - if (roc_errata_cpt_hang_on_mixed_ctx_val()) + if (hw_ctx_cache_enable()) roc_cpt_lf_ctx_flush(sess_priv->lf, &sess_priv->roc_se_ctx.se_ctx, true); if (sess_priv->roc_se_ctx.auth_key != NULL) diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h index b1a40e8e25..6ee4cbda70 100644 --- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h +++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h @@ -13,6 +13,7 @@ #include "roc_constants.h" #include "roc_cpt.h" #include "roc_cpt_sg.h" +#include "roc_errata.h" #include "roc_se.h" #define CNXK_CPT_MIN_HEADROOM_REQ 32 @@ -180,4 +181,11 @@ alloc_op_meta(struct roc_se_buf_ptr *buf, int32_t len, struct rte_mempool *cpt_m return mdata; } + +static __rte_always_inline bool +hw_ctx_cache_enable(void) +{ + return roc_errata_cpt_hang_on_mixed_ctx_val() || roc_model_is_cn10ka_b0() || + roc_model_is_cn10kb_a0(); +} #endif /* _CNXK_CRYPTODEV_OPS_H_ */ From patchwork Tue Jun 20 10:21:03 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tejasree Kondoj X-Patchwork-Id: 128841 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 8517F42D07; Tue, 20 Jun 2023 12:21:44 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 0CAFF42D35; Tue, 20 Jun 2023 12:21:26 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 8C0F842C4D for ; Tue, 20 Jun 2023 12:21:24 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 35K9wpvD024150 for ; Tue, 20 Jun 2023 03:21:24 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=8yEUCXFzJQFxZA2T+rAcUPM1S6BTGNMeosJXW1Da1OA=; b=Qhag/uIeZdIvaNORZApsGvvxnKxeFeUDlUAQaFHSVaQT2PK936yZalEIIWhLUEe6cIcz JJ2oDNBYjDOkms4iXkeIo6AxF1d/MDlzlceDu90n+5ltyggBhKGUJGx0GYKFPdwwL4Wz 
0f6gJaEewg+9UINQVQ/i1ZgJiHXOwuIVuD/QRcxBxoFQEIOS1h/GCvDZ5YUWjeQvwqAl y7LCO+SUaPSJs2yUvAcscIy7AFGzVRS8jwFXzNv1AK0KSNoge2GrjZjIwr3GnJA0ism4 HcLmJYr5B/C8JjVyB2vjIBCk/ZZhCM4dg8svm20NaqcZfz1x7V+/VYJ+xpFSbojV3KHH xQ== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3r9cbkfd40-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Tue, 20 Jun 2023 03:21:23 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Tue, 20 Jun 2023 03:21:21 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Tue, 20 Jun 2023 03:21:21 -0700 Received: from hyd1554.marvell.com (unknown [10.29.57.11]) by maili.marvell.com (Postfix) with ESMTP id ABA2C3F7076; Tue, 20 Jun 2023 03:21:19 -0700 (PDT) From: Tejasree Kondoj To: Akhil Goyal CC: Anoob Joseph , Aakash Sasidharan , Gowrishankar Muthukrishnan , Vidya Sagar Velumuri , Subject: [PATCH v3 5/8] crypto/cnxk: add support for raw APIs Date: Tue, 20 Jun 2023 15:51:03 +0530 Message-ID: <20230620102106.3970544-6-ktejasree@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230620102106.3970544-1-ktejasree@marvell.com> References: <20230620102106.3970544-1-ktejasree@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: Ag8nY9ly4nsu6rdVi98kOff55gz-jMYu X-Proofpoint-ORIG-GUID: Ag8nY9ly4nsu6rdVi98kOff55gz-jMYu X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.957,Hydra:6.0.591,FMLib:17.11.176.26 definitions=2023-06-20_06,2023-06-16_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Anoob Joseph Add crypto RAW API support in cnxk PMD Enable the flag to allow execution of raw test suite. Signed-off-by: Vidya Sagar Velumuri Signed-off-by: Anoob Joseph --- doc/guides/cryptodevs/features/cn10k.ini | 1 + doc/guides/rel_notes/release_23_07.rst | 1 + drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 459 ++++++++++++++++++++++ drivers/crypto/cnxk/cnxk_cryptodev.c | 20 +- drivers/crypto/cnxk/cnxk_cryptodev_ops.h | 1 + drivers/crypto/cnxk/cnxk_se.h | 293 ++++++++++++++ 6 files changed, 762 insertions(+), 13 deletions(-) diff --git a/doc/guides/cryptodevs/features/cn10k.ini b/doc/guides/cryptodevs/features/cn10k.ini index d8844b5c83..68a9fddb80 100644 --- a/doc/guides/cryptodevs/features/cn10k.ini +++ b/doc/guides/cryptodevs/features/cn10k.ini @@ -17,6 +17,7 @@ Symmetric sessionless = Y RSA PRIV OP KEY EXP = Y RSA PRIV OP KEY QT = Y Digest encrypted = Y +Sym raw data path API = Y Inner checksum = Y ; diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst index 027ae7bd2d..bd41f49458 100644 --- a/doc/guides/rel_notes/release_23_07.rst +++ b/doc/guides/rel_notes/release_23_07.rst @@ -154,6 +154,7 @@ New Features * Added support for PDCP chain in cn10k crypto driver. * Added support for SM3 hash operations. * Added support for AES-CCM in cn9k and cn10k drivers. + * Added support for RAW cryptodev APIs in cn10k driver. 
* **Updated OpenSSL crypto driver.** diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c index e405a2ad9f..47b0e3a6f3 100644 --- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c +++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c @@ -1064,6 +1064,461 @@ cn10k_cpt_dev_info_get(struct rte_cryptodev *dev, } } +static inline int +cn10k_cpt_raw_fill_inst(struct cnxk_iov *iov, struct cnxk_cpt_qp *qp, + struct cnxk_sym_dp_ctx *dp_ctx, struct cpt_inst_s inst[], + struct cpt_inflight_req *infl_req, void *opaque, const bool is_sg_ver2) +{ + struct cnxk_se_sess *sess; + int ret; + + const union cpt_res_s res = { + .cn10k.compcode = CPT_COMP_NOT_DONE, + }; + + inst[0].w0.u64 = 0; + inst[0].w2.u64 = 0; + inst[0].w3.u64 = 0; + + sess = dp_ctx->sess; + + switch (sess->dp_thr_type) { + case CPT_DP_THREAD_TYPE_PT: + ret = fill_raw_passthrough_params(iov, inst); + break; + case CPT_DP_THREAD_TYPE_FC_CHAIN: + ret = fill_raw_fc_params(iov, sess, &qp->meta_info, infl_req, &inst[0], false, + false, is_sg_ver2); + break; + case CPT_DP_THREAD_TYPE_FC_AEAD: + ret = fill_raw_fc_params(iov, sess, &qp->meta_info, infl_req, &inst[0], false, true, + is_sg_ver2); + break; + case CPT_DP_THREAD_AUTH_ONLY: + ret = fill_raw_digest_params(iov, sess, &qp->meta_info, infl_req, &inst[0], + is_sg_ver2); + break; + default: + ret = -EINVAL; + } + + if (unlikely(ret)) + return 0; + + inst[0].res_addr = (uint64_t)&infl_req->res; + __atomic_store_n(&infl_req->res.u64[0], res.u64[0], __ATOMIC_RELAXED); + infl_req->opaque = opaque; + + inst[0].w7.u64 = sess->cpt_inst_w7; + + return 1; +} + +static uint32_t +cn10k_cpt_raw_enqueue_burst(void *qpair, uint8_t *drv_ctx, struct rte_crypto_sym_vec *vec, + union rte_crypto_sym_ofs ofs, void *user_data[], int *enqueue_status, + const bool is_sgv2) +{ + uint16_t lmt_id, nb_allowed, nb_ops = vec->num; + uint64_t lmt_base, lmt_arg, io_addr, head; + struct cpt_inflight_req *infl_req; + struct cnxk_cpt_qp *qp = qpair; + struct cnxk_sym_dp_ctx *dp_ctx; + struct pending_queue *pend_q; + uint32_t count = 0, index; + union cpt_fc_write_s fc; + struct cpt_inst_s *inst; + uint64_t *fc_addr; + int ret, i; + + pend_q = &qp->pend_q; + const uint64_t pq_mask = pend_q->pq_mask; + + head = pend_q->head; + nb_allowed = pending_queue_free_cnt(head, pend_q->tail, pq_mask); + nb_ops = RTE_MIN(nb_ops, nb_allowed); + + if (unlikely(nb_ops == 0)) + return 0; + + lmt_base = qp->lmtline.lmt_base; + io_addr = qp->lmtline.io_addr; + fc_addr = qp->lmtline.fc_addr; + + const uint32_t fc_thresh = qp->lmtline.fc_thresh; + + ROC_LMT_BASE_ID_GET(lmt_base, lmt_id); + inst = (struct cpt_inst_s *)lmt_base; + + dp_ctx = (struct cnxk_sym_dp_ctx *)drv_ctx; +again: + fc.u64[0] = __atomic_load_n(fc_addr, __ATOMIC_RELAXED); + if (unlikely(fc.s.qsize > fc_thresh)) { + i = 0; + goto pend_q_commit; + } + + for (i = 0; i < RTE_MIN(PKTS_PER_LOOP, nb_ops); i++) { + struct cnxk_iov iov; + + index = count + i; + infl_req = &pend_q->req_queue[head]; + infl_req->op_flags = 0; + + cnxk_raw_burst_to_iov(vec, &ofs, index, &iov); + ret = cn10k_cpt_raw_fill_inst(&iov, qp, dp_ctx, &inst[2 * i], infl_req, + user_data[index], is_sgv2); + if (unlikely(ret != 1)) { + plt_dp_err("Could not process vec: %d", index); + if (i == 0 && count == 0) + return -1; + else if (i == 0) + goto pend_q_commit; + else + break; + } + pending_queue_advance(&head, pq_mask); + } + + if (i > PKTS_PER_STEORL) { + lmt_arg = ROC_CN10K_CPT_LMT_ARG | (PKTS_PER_STEORL - 1) << 12 | (uint64_t)lmt_id; + roc_lmt_submit_steorl(lmt_arg, io_addr); + 
lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - PKTS_PER_STEORL - 1) << 12 | + (uint64_t)(lmt_id + PKTS_PER_STEORL); + roc_lmt_submit_steorl(lmt_arg, io_addr); + } else { + lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - 1) << 12 | (uint64_t)lmt_id; + roc_lmt_submit_steorl(lmt_arg, io_addr); + } + + rte_io_wmb(); + + if (nb_ops - i > 0 && i == PKTS_PER_LOOP) { + nb_ops -= i; + count += i; + goto again; + } + +pend_q_commit: + rte_atomic_thread_fence(__ATOMIC_RELEASE); + + pend_q->head = head; + pend_q->time_out = rte_get_timer_cycles() + DEFAULT_COMMAND_TIMEOUT * rte_get_timer_hz(); + + *enqueue_status = 1; + return count + i; +} + +static uint32_t +cn10k_cpt_raw_enqueue_burst_sgv2(void *qpair, uint8_t *drv_ctx, struct rte_crypto_sym_vec *vec, + union rte_crypto_sym_ofs ofs, void *user_data[], + int *enqueue_status) +{ + return cn10k_cpt_raw_enqueue_burst(qpair, drv_ctx, vec, ofs, user_data, enqueue_status, + true); +} + +static uint32_t +cn10k_cpt_raw_enqueue_burst_sgv1(void *qpair, uint8_t *drv_ctx, struct rte_crypto_sym_vec *vec, + union rte_crypto_sym_ofs ofs, void *user_data[], + int *enqueue_status) +{ + return cn10k_cpt_raw_enqueue_burst(qpair, drv_ctx, vec, ofs, user_data, enqueue_status, + false); +} + +static int +cn10k_cpt_raw_enqueue(void *qpair, uint8_t *drv_ctx, struct rte_crypto_vec *data_vec, + uint16_t n_data_vecs, union rte_crypto_sym_ofs ofs, + struct rte_crypto_va_iova_ptr *iv, struct rte_crypto_va_iova_ptr *digest, + struct rte_crypto_va_iova_ptr *aad_or_auth_iv, void *user_data, + const bool is_sgv2) +{ + uint64_t lmt_base, lmt_arg, io_addr, head; + struct cpt_inflight_req *infl_req; + struct cnxk_cpt_qp *qp = qpair; + struct cnxk_sym_dp_ctx *dp_ctx; + uint16_t lmt_id, nb_allowed; + struct cpt_inst_s *inst; + union cpt_fc_write_s fc; + struct cnxk_iov iov; + uint64_t *fc_addr; + int ret; + + struct pending_queue *pend_q = &qp->pend_q; + const uint64_t pq_mask = pend_q->pq_mask; + const uint32_t fc_thresh = qp->lmtline.fc_thresh; + + head = pend_q->head; + nb_allowed = pending_queue_free_cnt(head, pend_q->tail, pq_mask); + + if (unlikely(nb_allowed == 0)) + return -1; + + cnxk_raw_to_iov(data_vec, n_data_vecs, &ofs, iv, digest, aad_or_auth_iv, &iov); + + lmt_base = qp->lmtline.lmt_base; + io_addr = qp->lmtline.io_addr; + fc_addr = qp->lmtline.fc_addr; + + ROC_LMT_BASE_ID_GET(lmt_base, lmt_id); + inst = (struct cpt_inst_s *)lmt_base; + + fc.u64[0] = __atomic_load_n(fc_addr, __ATOMIC_RELAXED); + if (unlikely(fc.s.qsize > fc_thresh)) + return -1; + + dp_ctx = (struct cnxk_sym_dp_ctx *)drv_ctx; + infl_req = &pend_q->req_queue[head]; + infl_req->op_flags = 0; + + ret = cn10k_cpt_raw_fill_inst(&iov, qp, dp_ctx, &inst[0], infl_req, user_data, is_sgv2); + if (unlikely(ret != 1)) { + plt_dp_err("Could not process vec"); + return -1; + } + + pending_queue_advance(&head, pq_mask); + + lmt_arg = ROC_CN10K_CPT_LMT_ARG | (uint64_t)lmt_id; + roc_lmt_submit_steorl(lmt_arg, io_addr); + + rte_io_wmb(); + + pend_q->head = head; + pend_q->time_out = rte_get_timer_cycles() + DEFAULT_COMMAND_TIMEOUT * rte_get_timer_hz(); + + return 1; +} + +static int +cn10k_cpt_raw_enqueue_sgv2(void *qpair, uint8_t *drv_ctx, struct rte_crypto_vec *data_vec, + uint16_t n_data_vecs, union rte_crypto_sym_ofs ofs, + struct rte_crypto_va_iova_ptr *iv, struct rte_crypto_va_iova_ptr *digest, + struct rte_crypto_va_iova_ptr *aad_or_auth_iv, void *user_data) +{ + return cn10k_cpt_raw_enqueue(qpair, drv_ctx, data_vec, n_data_vecs, ofs, iv, digest, + aad_or_auth_iv, user_data, true); +} + +static int +cn10k_cpt_raw_enqueue_sgv1(void 
*qpair, uint8_t *drv_ctx, struct rte_crypto_vec *data_vec, + uint16_t n_data_vecs, union rte_crypto_sym_ofs ofs, + struct rte_crypto_va_iova_ptr *iv, struct rte_crypto_va_iova_ptr *digest, + struct rte_crypto_va_iova_ptr *aad_or_auth_iv, void *user_data) +{ + return cn10k_cpt_raw_enqueue(qpair, drv_ctx, data_vec, n_data_vecs, ofs, iv, digest, + aad_or_auth_iv, user_data, false); +} + +static inline int +cn10k_cpt_raw_dequeue_post_process(struct cpt_cn10k_res_s *res) +{ + const uint8_t uc_compcode = res->uc_compcode; + const uint8_t compcode = res->compcode; + int ret = 1; + + if (likely(compcode == CPT_COMP_GOOD)) { + if (unlikely(uc_compcode)) + plt_dp_info("Request failed with microcode error: 0x%x", res->uc_compcode); + else + ret = 0; + } + + return ret; +} + +static uint32_t +cn10k_cpt_sym_raw_dequeue_burst(void *qptr, uint8_t *drv_ctx, + rte_cryptodev_raw_get_dequeue_count_t get_dequeue_count, + uint32_t max_nb_to_dequeue, + rte_cryptodev_raw_post_dequeue_t post_dequeue, void **out_user_data, + uint8_t is_user_data_array, uint32_t *n_success, + int *dequeue_status) +{ + struct cpt_inflight_req *infl_req; + struct cnxk_cpt_qp *qp = qptr; + struct pending_queue *pend_q; + uint64_t infl_cnt, pq_tail; + union cpt_res_s res; + int is_op_success; + uint16_t nb_ops; + void *opaque; + int i = 0; + + pend_q = &qp->pend_q; + + const uint64_t pq_mask = pend_q->pq_mask; + + RTE_SET_USED(drv_ctx); + pq_tail = pend_q->tail; + infl_cnt = pending_queue_infl_cnt(pend_q->head, pq_tail, pq_mask); + + /* Ensure infl_cnt isn't read before data lands */ + rte_atomic_thread_fence(__ATOMIC_ACQUIRE); + + infl_req = &pend_q->req_queue[pq_tail]; + + opaque = infl_req->opaque; + if (get_dequeue_count) + nb_ops = get_dequeue_count(opaque); + else + nb_ops = max_nb_to_dequeue; + nb_ops = RTE_MIN(nb_ops, infl_cnt); + + for (i = 0; i < nb_ops; i++) { + is_op_success = 0; + infl_req = &pend_q->req_queue[pq_tail]; + + res.u64[0] = __atomic_load_n(&infl_req->res.u64[0], __ATOMIC_RELAXED); + + if (unlikely(res.cn10k.compcode == CPT_COMP_NOT_DONE)) { + if (unlikely(rte_get_timer_cycles() > pend_q->time_out)) { + plt_err("Request timed out"); + cnxk_cpt_dump_on_err(qp); + pend_q->time_out = rte_get_timer_cycles() + + DEFAULT_COMMAND_TIMEOUT * rte_get_timer_hz(); + } + break; + } + + pending_queue_advance(&pq_tail, pq_mask); + + if (!cn10k_cpt_raw_dequeue_post_process(&res.cn10k)) { + is_op_success = 1; + *n_success += 1; + } + + if (is_user_data_array) { + out_user_data[i] = infl_req->opaque; + post_dequeue(out_user_data[i], i, is_op_success); + } else { + if (i == 0) + out_user_data[0] = opaque; + post_dequeue(out_user_data[0], i, is_op_success); + } + + if (unlikely(infl_req->op_flags & CPT_OP_FLAGS_METABUF)) + rte_mempool_put(qp->meta_info.pool, infl_req->mdata); + } + + pend_q->tail = pq_tail; + *dequeue_status = 1; + + return i; +} + +static void * +cn10k_cpt_sym_raw_dequeue(void *qptr, uint8_t *drv_ctx, int *dequeue_status, + enum rte_crypto_op_status *op_status) +{ + struct cpt_inflight_req *infl_req; + struct cnxk_cpt_qp *qp = qptr; + struct pending_queue *pend_q; + uint64_t pq_tail; + union cpt_res_s res; + void *opaque = NULL; + + pend_q = &qp->pend_q; + + const uint64_t pq_mask = pend_q->pq_mask; + + RTE_SET_USED(drv_ctx); + + pq_tail = pend_q->tail; + + rte_atomic_thread_fence(__ATOMIC_ACQUIRE); + + infl_req = &pend_q->req_queue[pq_tail]; + + res.u64[0] = __atomic_load_n(&infl_req->res.u64[0], __ATOMIC_RELAXED); + + if (unlikely(res.cn10k.compcode == CPT_COMP_NOT_DONE)) { + if (unlikely(rte_get_timer_cycles() 
> pend_q->time_out)) { + plt_err("Request timed out"); + cnxk_cpt_dump_on_err(qp); + pend_q->time_out = rte_get_timer_cycles() + + DEFAULT_COMMAND_TIMEOUT * rte_get_timer_hz(); + } + goto exit; + } + + pending_queue_advance(&pq_tail, pq_mask); + + opaque = infl_req->opaque; + + if (!cn10k_cpt_raw_dequeue_post_process(&res.cn10k)) + *op_status = RTE_CRYPTO_OP_STATUS_SUCCESS; + else + *op_status = RTE_CRYPTO_OP_STATUS_ERROR; + + if (unlikely(infl_req->op_flags & CPT_OP_FLAGS_METABUF)) + rte_mempool_put(qp->meta_info.pool, infl_req->mdata); + + *dequeue_status = 1; +exit: + return opaque; +} + +static int +cn10k_sym_get_raw_dp_ctx_size(struct rte_cryptodev *dev __rte_unused) +{ + return sizeof(struct cnxk_sym_dp_ctx); +} + +static int +cn10k_sym_configure_raw_dp_ctx(struct rte_cryptodev *dev, uint16_t qp_id, + struct rte_crypto_raw_dp_ctx *raw_dp_ctx, + enum rte_crypto_op_sess_type sess_type, + union rte_cryptodev_session_ctx session_ctx, uint8_t is_update) +{ + struct cnxk_se_sess *sess = (struct cnxk_se_sess *)session_ctx.crypto_sess; + struct cnxk_sym_dp_ctx *dp_ctx; + + if (sess_type != RTE_CRYPTO_OP_WITH_SESSION) + return -ENOTSUP; + + if (sess == NULL) + return -EINVAL; + + if ((sess->dp_thr_type == CPT_DP_THREAD_TYPE_PDCP) || + (sess->dp_thr_type == CPT_DP_THREAD_TYPE_PDCP_CHAIN) || + (sess->dp_thr_type == CPT_DP_THREAD_TYPE_KASUMI)) + return -ENOTSUP; + + if ((sess->dp_thr_type == CPT_DP_THREAD_AUTH_ONLY) && + ((sess->roc_se_ctx.fc_type == ROC_SE_KASUMI) || + (sess->roc_se_ctx.fc_type == ROC_SE_PDCP))) + return -ENOTSUP; + + if ((sess->roc_se_ctx.hash_type == ROC_SE_GMAC_TYPE) || + (sess->roc_se_ctx.hash_type == ROC_SE_SHA1_TYPE)) + return -ENOTSUP; + + dp_ctx = (struct cnxk_sym_dp_ctx *)raw_dp_ctx->drv_ctx_data; + dp_ctx->sess = sess; + + if (!is_update) { + struct cnxk_cpt_vf *vf; + + raw_dp_ctx->qp_data = (struct cnxk_cpt_qp *)dev->data->queue_pairs[qp_id]; + raw_dp_ctx->dequeue = cn10k_cpt_sym_raw_dequeue; + raw_dp_ctx->dequeue_burst = cn10k_cpt_sym_raw_dequeue_burst; + + vf = dev->data->dev_private; + if (vf->cpt.hw_caps[CPT_ENG_TYPE_SE].sg_ver2 && + vf->cpt.hw_caps[CPT_ENG_TYPE_IE].sg_ver2) { + raw_dp_ctx->enqueue = cn10k_cpt_raw_enqueue_sgv2; + raw_dp_ctx->enqueue_burst = cn10k_cpt_raw_enqueue_burst_sgv2; + } else { + raw_dp_ctx->enqueue = cn10k_cpt_raw_enqueue_sgv1; + raw_dp_ctx->enqueue_burst = cn10k_cpt_raw_enqueue_burst_sgv1; + } + } + + return 0; +} + struct rte_cryptodev_ops cn10k_cpt_ops = { /* Device control ops */ .dev_configure = cnxk_cpt_dev_config, @@ -1090,4 +1545,8 @@ struct rte_cryptodev_ops cn10k_cpt_ops = { /* Event crypto ops */ .session_ev_mdata_set = cn10k_cpt_crypto_adapter_ev_mdata_set, .queue_pair_event_error_query = cnxk_cpt_queue_pair_event_error_query, + + /* Raw data-path API related operations */ + .sym_get_raw_dp_ctx_size = cn10k_sym_get_raw_dp_ctx_size, + .sym_configure_raw_dp_ctx = cn10k_sym_configure_raw_dp_ctx, }; diff --git a/drivers/crypto/cnxk/cnxk_cryptodev.c b/drivers/crypto/cnxk/cnxk_cryptodev.c index 4fa1907cea..4819a14184 100644 --- a/drivers/crypto/cnxk/cnxk_cryptodev.c +++ b/drivers/crypto/cnxk/cnxk_cryptodev.c @@ -13,22 +13,16 @@ uint64_t cnxk_cpt_default_ff_get(void) { - uint64_t ff = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO | - RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO | - RTE_CRYPTODEV_FF_HW_ACCELERATED | - RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_QT | + uint64_t ff = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO | RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO | + RTE_CRYPTODEV_FF_HW_ACCELERATED | RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_QT | RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_EXP | - 
RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING | - RTE_CRYPTODEV_FF_IN_PLACE_SGL | - RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT | - RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT | - RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT | - RTE_CRYPTODEV_FF_SYM_SESSIONLESS | - RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED | - RTE_CRYPTODEV_FF_SECURITY; + RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING | RTE_CRYPTODEV_FF_IN_PLACE_SGL | + RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT | RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT | + RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT | RTE_CRYPTODEV_FF_SYM_SESSIONLESS | + RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED | RTE_CRYPTODEV_FF_SECURITY; if (roc_model_is_cn10k()) - ff |= RTE_CRYPTODEV_FF_SECURITY_INNER_CSUM; + ff |= RTE_CRYPTODEV_FF_SECURITY_INNER_CSUM | RTE_CRYPTODEV_FF_SYM_RAW_DP; return ff; } diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h index 6ee4cbda70..4a8eb0890b 100644 --- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h +++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h @@ -44,6 +44,7 @@ struct cpt_qp_meta_info { struct cpt_inflight_req { union cpt_res_s res; union { + void *opaque; struct rte_crypto_op *cop; struct rte_event_vector *vec; }; diff --git a/drivers/crypto/cnxk/cnxk_se.h b/drivers/crypto/cnxk/cnxk_se.h index ceb50fa3b6..9f3bff3e68 100644 --- a/drivers/crypto/cnxk/cnxk_se.h +++ b/drivers/crypto/cnxk/cnxk_se.h @@ -63,6 +63,23 @@ struct cnxk_se_sess { struct roc_cpt_lf *lf; } __rte_aligned(ROC_ALIGN); +struct cnxk_sym_dp_ctx { + struct cnxk_se_sess *sess; +}; + +struct cnxk_iov { + char src[SRC_IOV_SIZE]; + char dst[SRC_IOV_SIZE]; + void *iv_buf; + void *aad_buf; + void *mac_buf; + uint16_t c_head; + uint16_t c_tail; + uint16_t a_head; + uint16_t a_tail; + int data_len; +}; + static __rte_always_inline int fill_sess_gmac(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess); @@ -3061,4 +3078,280 @@ cpt_sym_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op, struct cnxk_ return ret; } +static __rte_always_inline uint32_t +prepare_iov_from_raw_vec(struct rte_crypto_vec *vec, struct roc_se_iov_ptr *iovec, uint32_t num) +{ + uint32_t i, total_len = 0; + + for (i = 0; i < num; i++) { + iovec->bufs[i].vaddr = vec[i].base; + iovec->bufs[i].size = vec[i].len; + + total_len += vec[i].len; + } + + iovec->buf_cnt = i; + return total_len; +} + +static __rte_always_inline void +cnxk_raw_burst_to_iov(struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs *ofs, int index, + struct cnxk_iov *iov) +{ + iov->iv_buf = vec->iv[index].va; + iov->aad_buf = vec->aad[index].va; + iov->mac_buf = vec->digest[index].va; + + iov->data_len = + prepare_iov_from_raw_vec(vec->src_sgl[index].vec, (struct roc_se_iov_ptr *)iov->src, + vec->src_sgl[index].num); + + if (vec->dest_sgl == NULL) + prepare_iov_from_raw_vec(vec->src_sgl[index].vec, (struct roc_se_iov_ptr *)iov->dst, + vec->src_sgl[index].num); + else + prepare_iov_from_raw_vec(vec->dest_sgl[index].vec, + (struct roc_se_iov_ptr *)iov->dst, + vec->dest_sgl[index].num); + + iov->c_head = ofs->ofs.cipher.head; + iov->c_tail = ofs->ofs.cipher.tail; + + iov->a_head = ofs->ofs.auth.head; + iov->a_tail = ofs->ofs.auth.tail; +} + +static __rte_always_inline void +cnxk_raw_to_iov(struct rte_crypto_vec *data_vec, uint16_t n_vecs, union rte_crypto_sym_ofs *ofs, + struct rte_crypto_va_iova_ptr *iv, struct rte_crypto_va_iova_ptr *digest, + struct rte_crypto_va_iova_ptr *aad, struct cnxk_iov *iov) +{ + iov->iv_buf = iv->va; + iov->aad_buf = aad->va; + iov->mac_buf = digest->va; + + iov->data_len = + prepare_iov_from_raw_vec(data_vec, (struct roc_se_iov_ptr 
*)iov->src, n_vecs); + prepare_iov_from_raw_vec(data_vec, (struct roc_se_iov_ptr *)iov->dst, n_vecs); + + iov->c_head = ofs->ofs.cipher.head; + iov->c_tail = ofs->ofs.cipher.tail; + + iov->a_head = ofs->ofs.auth.head; + iov->a_tail = ofs->ofs.auth.tail; +} + +static inline void +raw_memcpy(struct cnxk_iov *iov) +{ + struct roc_se_iov_ptr *src = (struct roc_se_iov_ptr *)iov->src; + struct roc_se_iov_ptr *dst = (struct roc_se_iov_ptr *)iov->dst; + int num = src->buf_cnt; + int i; + + /* skip copy in case of inplace */ + if (dst->bufs[0].vaddr == src->bufs[0].vaddr) + return; + + for (i = 0; i < num; i++) { + rte_memcpy(dst->bufs[i].vaddr, src->bufs[i].vaddr, src->bufs[i].size); + dst->bufs[i].size = src->bufs[i].size; + } +} + +static inline int +fill_raw_passthrough_params(struct cnxk_iov *iov, struct cpt_inst_s *inst) +{ + const union cpt_inst_w4 w4 = { + .s.opcode_major = ROC_SE_MAJOR_OP_MISC, + .s.opcode_minor = ROC_SE_MISC_MINOR_OP_PASSTHROUGH, + .s.param1 = 1, + .s.param2 = 1, + .s.dlen = 0, + }; + + inst->w0.u64 = 0; + inst->w5.u64 = 0; + inst->w4.u64 = w4.u64; + + raw_memcpy(iov); + + return 0; +} + +static __rte_always_inline int +fill_raw_fc_params(struct cnxk_iov *iov, struct cnxk_se_sess *sess, struct cpt_qp_meta_info *m_info, + struct cpt_inflight_req *infl_req, struct cpt_inst_s *inst, const bool is_kasumi, + const bool is_aead, const bool is_sg_ver2) +{ + uint32_t cipher_len, auth_len = 0; + struct roc_se_fc_params fc_params; + uint8_t cpt_op = sess->cpt_op; + uint64_t d_offs, d_lens; + uint8_t ccm_iv_buf[16]; + uint32_t flags = 0; + void *mdata = NULL; + uint32_t iv_buf[4]; + int ret; + + fc_params.cipher_iv_len = sess->iv_length; + fc_params.ctx = &sess->roc_se_ctx; + fc_params.auth_iv_buf = NULL; + fc_params.auth_iv_len = 0; + fc_params.mac_buf.size = 0; + fc_params.mac_buf.vaddr = 0; + fc_params.iv_buf = NULL; + + if (likely(is_kasumi || sess->iv_length)) { + flags |= ROC_SE_VALID_IV_BUF; + fc_params.iv_buf = iov->iv_buf; + + if (sess->short_iv) { + memcpy((uint8_t *)iv_buf, iov->iv_buf, 12); + iv_buf[3] = rte_cpu_to_be_32(0x1); + fc_params.iv_buf = iv_buf; + } + + if (sess->aes_ccm) { + memcpy((uint8_t *)ccm_iv_buf, iov->iv_buf, sess->iv_length + 1); + ccm_iv_buf[0] = 14 - sess->iv_length; + fc_params.iv_buf = ccm_iv_buf; + } + } + + fc_params.src_iov = (void *)iov->src; + fc_params.dst_iov = (void *)iov->dst; + + cipher_len = iov->data_len - iov->c_head - iov->c_tail; + auth_len = iov->data_len - iov->a_head - iov->a_tail; + + d_offs = (iov->c_head << 16) | iov->a_head; + d_lens = ((uint64_t)cipher_len << 32) | auth_len; + + if (is_aead) { + uint16_t aad_len = sess->aad_length; + + if (likely(aad_len == 0)) { + d_offs = (iov->c_head << 16) | iov->c_head; + d_lens = ((uint64_t)cipher_len << 32) | cipher_len; + } else { + flags |= ROC_SE_VALID_AAD_BUF; + fc_params.aad_buf.size = sess->aad_length; + /* For AES CCM, AAD is written 18B after aad.data as per API */ + if (sess->aes_ccm) + fc_params.aad_buf.vaddr = PLT_PTR_ADD((uint8_t *)iov->aad_buf, 18); + else + fc_params.aad_buf.vaddr = iov->aad_buf; + + d_offs = (iov->c_head << 16); + d_lens = ((uint64_t)cipher_len << 32); + } + } + + if (likely(sess->mac_len)) { + flags |= ROC_SE_VALID_MAC_BUF; + fc_params.mac_buf.size = sess->mac_len; + fc_params.mac_buf.vaddr = iov->mac_buf; + } + + fc_params.meta_buf.vaddr = NULL; + mdata = alloc_op_meta(&fc_params.meta_buf, m_info->mlen, m_info->pool, infl_req); + if (mdata == NULL) { + plt_dp_err("Error allocating meta buffer for request"); + return -ENOMEM; + } + + if (is_kasumi) { + 
if (cpt_op & ROC_SE_OP_ENCODE) + ret = cpt_enc_hmac_prep(flags, d_offs, d_lens, &fc_params, inst, + is_sg_ver2); + else + ret = cpt_dec_hmac_prep(flags, d_offs, d_lens, &fc_params, inst, + is_sg_ver2); + } else { + if (cpt_op & ROC_SE_OP_ENCODE) + ret = cpt_enc_hmac_prep(flags, d_offs, d_lens, &fc_params, inst, + is_sg_ver2); + else + ret = cpt_dec_hmac_prep(flags, d_offs, d_lens, &fc_params, inst, + is_sg_ver2); + } + + if (unlikely(ret)) { + plt_dp_err("Preparing request failed due to bad input arg"); + goto free_mdata_and_exit; + } + + return 0; + +free_mdata_and_exit: + rte_mempool_put(m_info->pool, infl_req->mdata); + return ret; +} + +static __rte_always_inline int +fill_raw_digest_params(struct cnxk_iov *iov, struct cnxk_se_sess *sess, + struct cpt_qp_meta_info *m_info, struct cpt_inflight_req *infl_req, + struct cpt_inst_s *inst, const bool is_sg_ver2) +{ + uint16_t auth_op = sess->cpt_op & ROC_SE_OP_AUTH_MASK; + struct roc_se_fc_params fc_params; + uint16_t mac_len = sess->mac_len; + uint64_t d_offs, d_lens; + uint32_t auth_len = 0; + uint32_t flags = 0; + void *mdata = NULL; + uint32_t space = 0; + int ret; + + memset(&fc_params, 0, sizeof(struct roc_se_fc_params)); + fc_params.cipher_iv_len = sess->iv_length; + fc_params.ctx = &sess->roc_se_ctx; + + mdata = alloc_op_meta(&fc_params.meta_buf, m_info->mlen, m_info->pool, infl_req); + if (mdata == NULL) { + plt_dp_err("Error allocating meta buffer for request"); + ret = -ENOMEM; + goto err_exit; + } + + flags |= ROC_SE_VALID_MAC_BUF; + fc_params.src_iov = (void *)iov->src; + auth_len = iov->data_len - iov->a_head - iov->a_tail; + d_lens = auth_len; + d_offs = iov->a_head; + + if (auth_op == ROC_SE_OP_AUTH_GENERATE) { + fc_params.mac_buf.size = sess->mac_len; + fc_params.mac_buf.vaddr = iov->mac_buf; + } else { + uint64_t *op = mdata; + + /* Need space for storing generated mac */ + space += 2 * sizeof(uint64_t); + + fc_params.mac_buf.vaddr = (uint8_t *)mdata + space; + fc_params.mac_buf.size = mac_len; + space += RTE_ALIGN_CEIL(mac_len, 8); + op[0] = (uintptr_t)iov->mac_buf; + op[1] = mac_len; + infl_req->op_flags |= CPT_OP_FLAGS_AUTH_VERIFY; + } + + fc_params.meta_buf.vaddr = (uint8_t *)mdata + space; + fc_params.meta_buf.size -= space; + + ret = cpt_fc_enc_hmac_prep(flags, d_offs, d_lens, &fc_params, inst, is_sg_ver2); + if (ret) + goto free_mdata_and_exit; + + return 0; + +free_mdata_and_exit: + if (infl_req->op_flags & CPT_OP_FLAGS_METABUF) + rte_mempool_put(m_info->pool, infl_req->mdata); +err_exit: + return ret; +} + #endif /*_CNXK_SE_H_ */ From patchwork Tue Jun 20 10:21:04 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tejasree Kondoj X-Patchwork-Id: 128842 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id A1EF142D07; Tue, 20 Jun 2023 12:21:52 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 8938742D20; Tue, 20 Jun 2023 12:21:28 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id 26B0E42D3A for ; Tue, 20 Jun 2023 12:21:27 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 35K4hOhv027025; Tue, 20 Jun 2023 03:21:26 -0700 
From: Tejasree Kondoj
To: Akhil Goyal, Fan Zhang
CC: Anoob Joseph, Ciara Power, Aakash Sasidharan, Gowrishankar Muthukrishnan, Vidya Sagar Velumuri
Subject: [PATCH v3 6/8] test/crypto: enable raw crypto tests for crypto_cn10k
Date: Tue, 20 Jun 2023 15:51:04 +0530
Message-ID: <20230620102106.3970544-7-ktejasree@marvell.com>
In-Reply-To: <20230620102106.3970544-1-ktejasree@marvell.com>
List-Id: DPDK patches and discussions

From: Anoob Joseph

Enable raw crypto tests with crypto_cn10k.
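For context on what these tests exercise: with the raw API support added in patch 5/8, an application drives crypto_cn10k through the raw data path roughly as sketched below. This is a minimal sketch under assumptions — dev_id/qp_id setup and the symmetric session are assumed to exist already, and only session-based contexts are accepted by cn10k_sym_configure_raw_dp_ctx().

#include <rte_cryptodev.h>
#include <rte_malloc.h>

/* Minimal sketch: allocate and configure a raw data-path context for an
 * already-created symmetric session. Subsequent I/O goes through
 * rte_cryptodev_raw_enqueue_burst()/rte_cryptodev_raw_dequeue_burst()
 * using this context.
 */
static struct rte_crypto_raw_dp_ctx *
raw_dp_ctx_setup(uint8_t dev_id, uint16_t qp_id, void *sym_sess)
{
	union rte_cryptodev_session_ctx sess_ctx = { .crypto_sess = sym_sess };
	struct rte_crypto_raw_dp_ctx *ctx;
	int drv_size;

	/* Size of the driver-private area (cnxk_sym_dp_ctx for this PMD). */
	drv_size = rte_cryptodev_get_raw_dp_ctx_size(dev_id);
	if (drv_size < 0)
		return NULL;

	ctx = rte_zmalloc(NULL, sizeof(*ctx) + drv_size, 0);
	if (ctx == NULL)
		return NULL;

	if (rte_cryptodev_configure_raw_dp_ctx(dev_id, qp_id, ctx,
					       RTE_CRYPTO_OP_WITH_SESSION,
					       sess_ctx, 0) < 0) {
		rte_free(ctx);
		return NULL;
	}

	return ctx;
}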
Signed-off-by: Anoob Joseph
---
 app/test/test_cryptodev.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index fb2af40b99..2ba37ed4bd 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -17519,6 +17519,12 @@ test_cryptodev_cn10k(void)
 	return run_cryptodev_testsuite(RTE_STR(CRYPTODEV_NAME_CN10K_PMD));
 }
 
+static int
+test_cryptodev_cn10k_raw_api(void)
+{
+	return run_cryptodev_raw_testsuite(RTE_STR(CRYPTODEV_NAME_CN10K_PMD));
+}
+
 static int
 test_cryptodev_dpaa2_sec_raw_api(void)
 {
@@ -17531,6 +17537,8 @@ test_cryptodev_dpaa_sec_raw_api(void)
 	return run_cryptodev_raw_testsuite(RTE_STR(CRYPTODEV_NAME_DPAA_SEC_PMD));
 }
 
+REGISTER_TEST_COMMAND(cryptodev_cn10k_raw_api_autotest,
+	test_cryptodev_cn10k_raw_api);
 REGISTER_TEST_COMMAND(cryptodev_dpaa2_sec_raw_api_autotest,
 	test_cryptodev_dpaa2_sec_raw_api);
 REGISTER_TEST_COMMAND(cryptodev_dpaa_sec_raw_api_autotest,

From patchwork Tue Jun 20 10:21:05 2023
X-Patchwork-Submitter: Tejasree Kondoj
X-Patchwork-Id: 128843
X-Patchwork-Delegate: gakhil@marvell.com
From: Tejasree Kondoj
To: Akhil Goyal
CC: Vidya Sagar Velumuri , Anoob Joseph , Aakash Sasidharan , Gowrishankar Muthukrishnan ,
Subject: [PATCH v3 7/8] crypto/cnxk: add support for sm4
Date: Tue, 20 Jun 2023 15:51:05 +0530
Message-ID:
<20230620102106.3970544-8-ktejasree@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230620102106.3970544-1-ktejasree@marvell.com> References: <20230620102106.3970544-1-ktejasree@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: QHEi9W1DCeIQ3oCjN_ZTHTwHCylZ6xtg X-Proofpoint-ORIG-GUID: QHEi9W1DCeIQ3oCjN_ZTHTwHCylZ6xtg X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.957,Hydra:6.0.591,FMLib:17.11.176.26 definitions=2023-06-20_06,2023-06-16_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Vidya Sagar Velumuri Add support for SM4 cipher Support for modes: SM4_CBC, SM4_ECB, SM4_CTR, SM4_OFB, SM4_CFB Signed-off-by: Vidya Sagar Velumuri --- doc/guides/cryptodevs/cnxk.rst | 1 + doc/guides/cryptodevs/features/cn10k.ini | 5 + doc/guides/rel_notes/release_23_07.rst | 1 + drivers/common/cnxk/hw/cpt.h | 5 +- drivers/common/cnxk/roc_se.c | 3 + drivers/common/cnxk/roc_se.h | 20 ++ drivers/crypto/cnxk/cnxk_cryptodev.h | 2 +- .../crypto/cnxk/cnxk_cryptodev_capabilities.c | 113 ++++++- drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 5 +- drivers/crypto/cnxk/cnxk_se.h | 278 +++++++++++++++++- 10 files changed, 426 insertions(+), 7 deletions(-) diff --git a/doc/guides/cryptodevs/cnxk.rst b/doc/guides/cryptodevs/cnxk.rst index 777e8ffb0e..fbe67475be 100644 --- a/doc/guides/cryptodevs/cnxk.rst +++ b/doc/guides/cryptodevs/cnxk.rst @@ -41,6 +41,7 @@ Cipher algorithms: * ``RTE_CRYPTO_CIPHER_KASUMI_F8`` * ``RTE_CRYPTO_CIPHER_SNOW3G_UEA2`` * ``RTE_CRYPTO_CIPHER_ZUC_EEA3`` +* ``RTE_CRYPTO_CIPHER_SM4`` Hash algorithms: diff --git a/doc/guides/cryptodevs/features/cn10k.ini b/doc/guides/cryptodevs/features/cn10k.ini index 68a9fddb80..53ee2a720e 100644 --- a/doc/guides/cryptodevs/features/cn10k.ini +++ b/doc/guides/cryptodevs/features/cn10k.ini @@ -39,6 +39,11 @@ DES CBC = Y KASUMI F8 = Y SNOW3G UEA2 = Y ZUC EEA3 = Y +SM4 ECB = Y +SM4 CBC = Y +SM4 CTR = Y +SM4 CFB = Y +SM4 OFB = Y ; ; Supported authentication algorithms of 'cn10k' crypto driver. diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst index bd41f49458..7468eb2047 100644 --- a/doc/guides/rel_notes/release_23_07.rst +++ b/doc/guides/rel_notes/release_23_07.rst @@ -155,6 +155,7 @@ New Features * Added support for SM3 hash operations. * Added support for AES-CCM in cn9k and cn10k drivers. * Added support for RAW cryptodev APIs in cn10k driver. + * Added support for SM4 operations in cn10k driver. 
* **Updated OpenSSL crypto driver.** diff --git a/drivers/common/cnxk/hw/cpt.h b/drivers/common/cnxk/hw/cpt.h index 82ea076e4c..5e1519e202 100644 --- a/drivers/common/cnxk/hw/cpt.h +++ b/drivers/common/cnxk/hw/cpt.h @@ -73,7 +73,10 @@ union cpt_eng_caps { uint64_t __io des : 1; uint64_t __io crc : 1; uint64_t __io mmul : 1; - uint64_t __io reserved_15_33 : 19; + uint64_t __io reserved_15_20 : 6; + uint64_t __io sm3 : 1; + uint64_t __io sm4 : 1; + uint64_t __io reserved_23_33 : 11; uint64_t __io pdcp_chain : 1; uint64_t __io sg_ver2 : 1; uint64_t __io reserved_36_63 : 28; diff --git a/drivers/common/cnxk/roc_se.c b/drivers/common/cnxk/roc_se.c index f9b6936267..2662297315 100644 --- a/drivers/common/cnxk/roc_se.c +++ b/drivers/common/cnxk/roc_se.c @@ -757,6 +757,9 @@ roc_se_ctx_init(struct roc_se_ctx *roc_se_ctx) case ROC_SE_PDCP_CHAIN: ctx_len = sizeof(struct roc_se_zuc_snow3g_chain_ctx); break; + case ROC_SE_SM: + ctx_len = sizeof(struct roc_se_sm_context); + break; default: ctx_len = 0; } diff --git a/drivers/common/cnxk/roc_se.h b/drivers/common/cnxk/roc_se.h index 1e7abecf8f..008ab31912 100644 --- a/drivers/common/cnxk/roc_se.h +++ b/drivers/common/cnxk/roc_se.h @@ -17,6 +17,7 @@ #define ROC_SE_MAJOR_OP_PDCP 0x37 #define ROC_SE_MAJOR_OP_KASUMI 0x38 #define ROC_SE_MAJOR_OP_PDCP_CHAIN 0x3C +#define ROC_SE_MAJOR_OP_SM 0x3D #define ROC_SE_MAJOR_OP_MISC 0x01ULL #define ROC_SE_MISC_MINOR_OP_PASSTHROUGH 0x03ULL @@ -28,6 +29,8 @@ #define ROC_SE_OFF_CTRL_LEN 8 +#define ROC_SE_SM4_KEY_LEN 16 + #define ROC_SE_ZS_EA 0x1 #define ROC_SE_ZS_IA 0x2 #define ROC_SE_K_F8 0x4 @@ -38,6 +41,7 @@ #define ROC_SE_KASUMI 0x3 #define ROC_SE_HASH_HMAC 0x4 #define ROC_SE_PDCP_CHAIN 0x5 +#define ROC_SE_SM 0x6 #define ROC_SE_OP_CIPHER_ENCRYPT 0x1 #define ROC_SE_OP_CIPHER_DECRYPT 0x2 @@ -125,6 +129,14 @@ typedef enum { ROC_SE_DES_DOCSISBPI = 0x96, } roc_se_cipher_type; +typedef enum { + ROC_SM4_ECB = 0x0, + ROC_SM4_CBC = 0x1, + ROC_SM4_CTR = 0x2, + ROC_SM4_CFB = 0x3, + ROC_SM4_OFB = 0x4, +} roc_sm_cipher_type; + typedef enum { /* Microcode errors */ ROC_SE_NO_ERR = 0x00, @@ -192,6 +204,13 @@ struct roc_se_context { struct roc_se_hmac_context hmac; }; +struct roc_se_sm_context { + uint64_t rsvd_56_60 : 5; + uint64_t enc_cipher : 3; + uint64_t rsvd_0_55 : 56; + uint8_t encr_key[16]; +}; + struct roc_se_otk_zuc_ctx { union { uint64_t u64; @@ -325,6 +344,7 @@ struct roc_se_ctx { struct roc_se_zuc_snow3g_ctx zs_ctx; struct roc_se_zuc_snow3g_chain_ctx zs_ch_ctx; struct roc_se_kasumi_ctx k_ctx; + struct roc_se_sm_context sm_ctx; }; } se_ctx __plt_aligned(ROC_ALIGN); uint8_t *auth_key; diff --git a/drivers/crypto/cnxk/cnxk_cryptodev.h b/drivers/crypto/cnxk/cnxk_cryptodev.h index ce45f5d01b..09f5ba0650 100644 --- a/drivers/crypto/cnxk/cnxk_cryptodev.h +++ b/drivers/crypto/cnxk/cnxk_cryptodev.h @@ -10,7 +10,7 @@ #include "roc_cpt.h" -#define CNXK_CPT_MAX_CAPS 49 +#define CNXK_CPT_MAX_CAPS 54 #define CNXK_SEC_CRYPTO_MAX_CAPS 16 #define CNXK_SEC_MAX_CAPS 9 #define CNXK_AE_EC_ID_MAX 8 diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c b/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c index 8a3b0c48d0..4c6357353e 100644 --- a/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c +++ b/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c @@ -1049,6 +1049,109 @@ static const struct rte_cryptodev_capabilities caps_null[] = { }, }; +static const struct rte_cryptodev_capabilities caps_sm4[] = { + { /* SM4 CBC */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, + {.cipher = { + .algo 
= RTE_CRYPTO_CIPHER_SM4_CBC, + .block_size = 16, + .key_size = { + .min = 16, + .max = 16, + .increment = 0 + }, + .iv_size = { + .min = 16, + .max = 16, + .increment = 0 + } + }, } + }, } + }, + { /* SM4 ECB */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, + {.cipher = { + .algo = RTE_CRYPTO_CIPHER_SM4_ECB, + .block_size = 16, + .key_size = { + .min = 16, + .max = 16, + .increment = 0 + }, + .iv_size = { + .min = 0, + .max = 0, + .increment = 0 + } + }, } + }, } + }, + { /* SM4 CTR */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, + {.cipher = { + .algo = RTE_CRYPTO_CIPHER_SM4_CTR, + .block_size = 16, + .key_size = { + .min = 16, + .max = 16, + .increment = 0 + }, + .iv_size = { + .min = 16, + .max = 16, + .increment = 0 + } + }, } + }, } + }, + { /* SM4 OFB */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, + {.cipher = { + .algo = RTE_CRYPTO_CIPHER_SM4_OFB, + .block_size = 16, + .key_size = { + .min = 16, + .max = 16, + .increment = 0 + }, + .iv_size = { + .min = 16, + .max = 16, + .increment = 0 + } + }, } + }, } + }, + { /* SM4 CFB */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, + {.cipher = { + .algo = RTE_CRYPTO_CIPHER_SM4_CFB, + .block_size = 16, + .key_size = { + .min = 16, + .max = 16, + .increment = 0 + }, + .iv_size = { + .min = 16, + .max = 16, + .increment = 0 + } + }, } + }, } + }, +}; + static const struct rte_cryptodev_capabilities caps_end[] = { RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST() }; @@ -1513,9 +1616,13 @@ cn9k_crypto_caps_add(struct rte_cryptodev_capabilities cnxk_caps[], int *cur_pos } static void -cn10k_crypto_caps_add(struct rte_cryptodev_capabilities cnxk_caps[], int *cur_pos) +cn10k_crypto_caps_add(struct rte_cryptodev_capabilities cnxk_caps[], + union cpt_eng_caps *hw_caps, int *cur_pos) { - cpt_caps_add(cnxk_caps, cur_pos, caps_sm3, RTE_DIM(caps_sm3)); + if (hw_caps->sg_ver2) { + CPT_CAPS_ADD(cnxk_caps, cur_pos, hw_caps, sm3); + CPT_CAPS_ADD(cnxk_caps, cur_pos, hw_caps, sm4); + } } static void @@ -1537,7 +1644,7 @@ crypto_caps_populate(struct rte_cryptodev_capabilities cnxk_caps[], cn9k_crypto_caps_add(cnxk_caps, &cur_pos); if (roc_model_is_cn10k()) - cn10k_crypto_caps_add(cnxk_caps, &cur_pos); + cn10k_crypto_caps_add(cnxk_caps, hw_caps, &cur_pos); cpt_caps_add(cnxk_caps, &cur_pos, caps_null, RTE_DIM(caps_null)); cpt_caps_add(cnxk_caps, &cur_pos, caps_end, RTE_DIM(caps_end)); diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c index d0c99d37e8..50150d3f06 100644 --- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c +++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c @@ -660,7 +660,7 @@ cnxk_cpt_inst_w7_get(struct cnxk_se_sess *sess, struct roc_cpt *roc_cpt) /* Set the engine group */ if (sess->zsk_flag || sess->aes_ctr_eea2 || sess->is_sha3 || sess->is_sm3 || - sess->passthrough) + sess->passthrough || sess->is_sm4) inst_w7.s.egrp = roc_cpt->eng_grp[CPT_ENG_TYPE_SE]; else inst_w7.s.egrp = roc_cpt->eng_grp[CPT_ENG_TYPE_IE]; @@ -704,6 +704,9 @@ sym_session_configure(struct roc_cpt *roc_cpt, struct rte_crypto_sym_xform *xfor case ROC_SE_PDCP_CHAIN: thr_type = CPT_DP_THREAD_TYPE_PDCP_CHAIN; break; + case ROC_SE_SM: + thr_type = CPT_DP_THREAD_TYPE_SM; + break; default: plt_err("Invalid op type"); ret = -ENOTSUP; diff --git a/drivers/crypto/cnxk/cnxk_se.h b/drivers/crypto/cnxk/cnxk_se.h index 9f3bff3e68..3444f2d599 100644 --- 
a/drivers/crypto/cnxk/cnxk_se.h +++ b/drivers/crypto/cnxk/cnxk_se.h @@ -23,6 +23,7 @@ enum cpt_dp_thread_type { CPT_DP_THREAD_TYPE_PDCP, CPT_DP_THREAD_TYPE_PDCP_CHAIN, CPT_DP_THREAD_TYPE_KASUMI, + CPT_DP_THREAD_TYPE_SM, CPT_DP_THREAD_AUTH_ONLY, CPT_DP_THREAD_GENERIC, CPT_DP_THREAD_TYPE_PT, @@ -49,7 +50,8 @@ struct cnxk_se_sess { uint8_t short_iv : 1; uint8_t is_sm3 : 1; uint8_t passthrough : 1; - uint8_t rsvd : 4; + uint8_t is_sm4 : 1; + uint8_t rsvd : 3; uint8_t mac_len; uint8_t iv_length; uint8_t auth_iv_length; @@ -1059,6 +1061,100 @@ pdcp_chain_sg2_prep(struct roc_se_fc_params *params, struct roc_se_ctx *cpt_ctx, return ret; } +static __rte_always_inline int +cpt_sm_prep(uint32_t flags, uint64_t d_offs, uint64_t d_lens, struct roc_se_fc_params *fc_params, + struct cpt_inst_s *inst, const bool is_sg_ver2, int decrypt) +{ + int32_t inputlen, outputlen, enc_dlen; + union cpt_inst_w4 cpt_inst_w4; + uint32_t passthrough_len = 0; + struct roc_se_ctx *se_ctx; + uint32_t encr_data_len; + uint32_t encr_offset; + uint64_t offset_ctrl; + uint8_t iv_len = 16; + uint8_t *src = NULL; + void *offset_vaddr; + int ret; + + encr_offset = ROC_SE_ENCR_OFFSET(d_offs); + encr_data_len = ROC_SE_ENCR_DLEN(d_lens); + + se_ctx = fc_params->ctx; + cpt_inst_w4.u64 = se_ctx->template_w4.u64; + + if (unlikely(!(flags & ROC_SE_VALID_IV_BUF))) + iv_len = 0; + + encr_offset += iv_len; + enc_dlen = encr_data_len + encr_offset; + enc_dlen = RTE_ALIGN_CEIL(encr_data_len, 8) + encr_offset; + + inputlen = enc_dlen; + outputlen = enc_dlen; + + cpt_inst_w4.s.param1 = encr_data_len; + + if (unlikely(encr_offset >> 8)) { + plt_dp_err("Offset not supported"); + plt_dp_err("enc_offset: %d", encr_offset); + return -1; + } + + offset_ctrl = rte_cpu_to_be_64((uint64_t)encr_offset); + + /* + * In cn9k, cn10k since we have a limitation of + * IV & Offset control word not part of instruction + * and need to be part of Data Buffer, we check if + * head room is there and then only do the Direct mode processing + */ + if (likely((flags & ROC_SE_SINGLE_BUF_INPLACE) && (flags & ROC_SE_SINGLE_BUF_HEADROOM))) { + void *dm_vaddr = fc_params->bufs[0].vaddr; + + /* Use Direct mode */ + + offset_vaddr = PLT_PTR_SUB(dm_vaddr, ROC_SE_OFF_CTRL_LEN + iv_len); + *(uint64_t *)offset_vaddr = offset_ctrl; + + /* DPTR */ + inst->dptr = (uint64_t)offset_vaddr; + + /* RPTR should just exclude offset control word */ + inst->rptr = (uint64_t)dm_vaddr - iv_len; + + cpt_inst_w4.s.dlen = inputlen + ROC_SE_OFF_CTRL_LEN; + + if (likely(iv_len)) { + void *dst = PLT_PTR_ADD(offset_vaddr, ROC_SE_OFF_CTRL_LEN); + uint64_t *src = fc_params->iv_buf; + + rte_memcpy(dst, src, 16); + } + inst->w4.u64 = cpt_inst_w4.u64; + } else { + if (likely(iv_len)) + src = fc_params->iv_buf; + + inst->w4.u64 = cpt_inst_w4.u64; + + if (is_sg_ver2) + ret = sg2_inst_prep(fc_params, inst, offset_ctrl, src, iv_len, 0, 0, + inputlen, outputlen, passthrough_len, flags, 0, + decrypt); + else + ret = sg_inst_prep(fc_params, inst, offset_ctrl, src, iv_len, 0, 0, + inputlen, outputlen, passthrough_len, flags, 0, decrypt); + + if (unlikely(ret)) { + plt_dp_err("sg prep failed"); + return -1; + } + } + + return 0; +} + static __rte_always_inline int cpt_enc_hmac_prep(uint32_t flags, uint64_t d_offs, uint64_t d_lens, struct roc_se_fc_params *fc_params, struct cpt_inst_s *inst, @@ -1899,6 +1995,71 @@ fill_sess_aead(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess) return 0; } +static __rte_always_inline int +fill_sm_sess_cipher(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess 
*sess) +{ + struct roc_se_sm_context *sm_ctx = &sess->roc_se_ctx.se_ctx.sm_ctx; + struct rte_crypto_cipher_xform *c_form; + roc_sm_cipher_type enc_type = 0; + + c_form = &xform->cipher; + + if (c_form->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) { + sess->cpt_op |= ROC_SE_OP_CIPHER_ENCRYPT; + sess->roc_se_ctx.template_w4.s.opcode_minor = ROC_SE_FC_MINOR_OP_ENCRYPT; + } else if (c_form->op == RTE_CRYPTO_CIPHER_OP_DECRYPT) { + sess->cpt_op |= ROC_SE_OP_CIPHER_DECRYPT; + sess->roc_se_ctx.template_w4.s.opcode_minor = ROC_SE_FC_MINOR_OP_DECRYPT; + } else { + plt_dp_err("Unknown cipher operation\n"); + return -1; + } + + switch (c_form->algo) { + case RTE_CRYPTO_CIPHER_SM4_CBC: + enc_type = ROC_SM4_CBC; + break; + case RTE_CRYPTO_CIPHER_SM4_ECB: + enc_type = ROC_SM4_ECB; + break; + case RTE_CRYPTO_CIPHER_SM4_CTR: + enc_type = ROC_SM4_CTR; + break; + case RTE_CRYPTO_CIPHER_SM4_CFB: + enc_type = ROC_SM4_CFB; + break; + case RTE_CRYPTO_CIPHER_SM4_OFB: + enc_type = ROC_SM4_OFB; + break; + default: + plt_dp_err("Crypto: Undefined cipher algo %u specified", c_form->algo); + return -1; + } + + sess->iv_offset = c_form->iv.offset; + sess->iv_length = c_form->iv.length; + + if (c_form->key.length != ROC_SE_SM4_KEY_LEN) { + plt_dp_err("Invalid cipher params keylen %u", c_form->key.length); + return -1; + } + + sess->zsk_flag = 0; + sess->zs_cipher = 0; + sess->aes_gcm = 0; + sess->aes_ctr = 0; + sess->is_null = 0; + sess->is_sm4 = 1; + sess->roc_se_ctx.fc_type = ROC_SE_SM; + + sess->roc_se_ctx.template_w4.s.opcode_major = ROC_SE_MAJOR_OP_SM; + + memcpy(sm_ctx->encr_key, c_form->key.data, ROC_SE_SM4_KEY_LEN); + sm_ctx->enc_cipher = enc_type; + + return 0; +} + static __rte_always_inline int fill_sess_cipher(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess) { @@ -1909,6 +2070,13 @@ fill_sess_cipher(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess) c_form = &xform->cipher; + if ((c_form->algo == RTE_CRYPTO_CIPHER_SM4_CBC) || + (c_form->algo == RTE_CRYPTO_CIPHER_SM4_ECB) || + (c_form->algo == RTE_CRYPTO_CIPHER_SM4_CTR) || + (c_form->algo == RTE_CRYPTO_CIPHER_SM4_CFB) || + (c_form->algo == RTE_CRYPTO_CIPHER_SM4_OFB)) + return fill_sm_sess_cipher(xform, sess); + if (c_form->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) sess->cpt_op |= ROC_SE_OP_CIPHER_ENCRYPT; else if (c_form->op == RTE_CRYPTO_CIPHER_OP_DECRYPT) { @@ -2379,6 +2547,110 @@ prepare_iov_from_pkt_inplace(struct rte_mbuf *pkt, return; } +static __rte_always_inline int +fill_sm_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess, + struct cpt_qp_meta_info *m_info, struct cpt_inflight_req *infl_req, + struct cpt_inst_s *inst, const bool is_sg_ver2) +{ + struct rte_crypto_sym_op *sym_op = cop->sym; + struct roc_se_fc_params fc_params; + struct rte_mbuf *m_src, *m_dst; + uint8_t cpt_op = sess->cpt_op; + uint64_t d_offs, d_lens; + char src[SRC_IOV_SIZE]; + char dst[SRC_IOV_SIZE]; + void *mdata = NULL; +#ifdef CPT_ALWAYS_USE_SG_MODE + uint8_t inplace = 0; +#else + uint8_t inplace = 1; +#endif + uint32_t flags = 0; + int ret; + + uint32_t ci_data_length = sym_op->cipher.data.length; + uint32_t ci_data_offset = sym_op->cipher.data.offset; + + fc_params.cipher_iv_len = sess->iv_length; + fc_params.auth_iv_len = 0; + fc_params.auth_iv_buf = NULL; + fc_params.iv_buf = NULL; + fc_params.mac_buf.size = 0; + fc_params.mac_buf.vaddr = 0; + + if (likely(sess->iv_length)) { + flags |= ROC_SE_VALID_IV_BUF; + fc_params.iv_buf = rte_crypto_op_ctod_offset(cop, uint8_t *, sess->iv_offset); + } + + m_src = sym_op->m_src; + m_dst = sym_op->m_dst; + + d_offs 
= ci_data_offset; + d_offs = (d_offs << 16); + + d_lens = ci_data_length; + d_lens = (d_lens << 32); + + fc_params.ctx = &sess->roc_se_ctx; + + if (likely(!m_dst && inplace)) { + fc_params.dst_iov = fc_params.src_iov = (void *)src; + + prepare_iov_from_pkt_inplace(m_src, &fc_params, &flags); + + } else { + /* Out of place processing */ + fc_params.src_iov = (void *)src; + fc_params.dst_iov = (void *)dst; + + /* Store SG I/O in the api for reuse */ + if (prepare_iov_from_pkt(m_src, fc_params.src_iov, 0)) { + plt_dp_err("Prepare src iov failed"); + ret = -EINVAL; + goto err_exit; + } + + if (unlikely(m_dst != NULL)) { + if (prepare_iov_from_pkt(m_dst, fc_params.dst_iov, 0)) { + plt_dp_err("Prepare dst iov failed for m_dst %p", m_dst); + ret = -EINVAL; + goto err_exit; + } + } else { + fc_params.dst_iov = (void *)src; + } + } + + fc_params.meta_buf.vaddr = NULL; + + if (unlikely(!((flags & ROC_SE_SINGLE_BUF_INPLACE) && + (flags & ROC_SE_SINGLE_BUF_HEADROOM)))) { + mdata = alloc_op_meta(&fc_params.meta_buf, m_info->mlen, m_info->pool, infl_req); + if (mdata == NULL) { + plt_dp_err("Error allocating meta buffer for request"); + return -ENOMEM; + } + } + + /* Finally prepare the instruction */ + ret = cpt_sm_prep(flags, d_offs, d_lens, &fc_params, inst, is_sg_ver2, + !(cpt_op & ROC_SE_OP_ENCODE)); + + if (unlikely(ret)) { + plt_dp_err("Preparing request failed due to bad input arg"); + goto free_mdata_and_exit; + } + + return 0; + +free_mdata_and_exit: + if (infl_req->op_flags & CPT_OP_FLAGS_METABUF) + rte_mempool_put(m_info->pool, infl_req->mdata); +err_exit: + return ret; +} + static __rte_always_inline int fill_fc_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess, struct cpt_qp_meta_info *m_info, struct cpt_inflight_req *infl_req, @@ -3068,6 +3340,10 @@ cpt_sym_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op, struct cnxk_ ret = fill_fc_params(op, sess, &qp->meta_info, infl_req, inst, true, false, is_sg_ver2); break; + case CPT_DP_THREAD_TYPE_SM: + ret = fill_sm_params(op, sess, &qp->meta_info, infl_req, inst, is_sg_ver2); + break; + case CPT_DP_THREAD_AUTH_ONLY: ret = fill_digest_params(op, sess, &qp->meta_info, infl_req, inst, is_sg_ver2); break; From patchwork Tue Jun 20 10:21:06 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tejasree Kondoj X-Patchwork-Id: 128844 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 325C442D07; Tue, 20 Jun 2023 12:22:04 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id C0D0942D4A; Tue, 20 Jun 2023 12:21:33 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 6042742D48 for ; Tue, 20 Jun 2023 12:21:32 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 35K9wpvF024150 for ; Tue, 20 Jun 2023 03:21:31 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=OKEW7vt6Upo/KmVFkwKsLFo7+58lfSetq/lbwaL5xTI=; b=P/XDNpbzSFK1aFGKm3cyIuNrkugCHVyNproGw5CuhYGqJdjOE8+21nJnMwgqxma8F9Gi 
From: Tejasree Kondoj
To: Akhil Goyal
CC: Gowrishankar Muthukrishnan , Anoob Joseph , Aakash Sasidharan , "Vidya Sagar Velumuri" ,
Subject: [PATCH v3 8/8] crypto/cnxk: fix order of ECFPM parameters
Date: Tue, 20 Jun 2023 15:51:06 +0530
Message-ID: <20230620102106.3970544-9-ktejasree@marvell.com>
In-Reply-To: <20230620102106.3970544-1-ktejasree@marvell.com>
References: <20230620102106.3970544-1-ktejasree@marvell.com>

From: Gowrishankar Muthukrishnan

Fix order of ECFPM parameters.

Fixes: 76618fc4bef ("crypto/cnxk: fix order of ECFPM parameters")

Signed-off-by: Gowrishankar Muthukrishnan
---
 drivers/crypto/cnxk/cnxk_ae.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/cnxk/cnxk_ae.h b/drivers/crypto/cnxk/cnxk_ae.h
index 47f000dd5e..7ad259b7f4 100644
--- a/drivers/crypto/cnxk/cnxk_ae.h
+++ b/drivers/crypto/cnxk/cnxk_ae.h
@@ -723,7 +723,8 @@ cnxk_ae_ecfpm_prep(struct rte_crypto_ecpm_op_param *ecpm,
 	 * optionally ROUNDUP8(input point(x and y coordinates)).
 	 * Please note point length is equivalent to prime of the curve
 	 */
-	if (cpt_ver == ROC_CPT_REVISION_ID_96XX_C0) {
+	if (cpt_ver == ROC_CPT_REVISION_ID_96XX_B0 || cpt_ver == ROC_CPT_REVISION_ID_96XX_C0 ||
+	    cpt_ver == ROC_CPT_REVISION_ID_98XX) {
 		dlen = sizeof(fpm_table_iova) + 3 * p_align + scalar_align;
 		memset(dptr, 0, dlen);
 		*(uint64_t *)dptr = fpm_table_iova;
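The fix itself is the widened revision check above: the DPTR layout that places the FPM table IOVA first is now selected for 96XX B0, 96XX C0 and 98XX rather than 96XX C0 alone. A small illustrative helper that names that condition, assuming the ROC_CPT_REVISION_ID_* macros from the cnxk common code (roc_cpt.h) are in scope; this is a readability sketch, not part of the patch:

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch only: revisions for which cnxk_ae_ecfpm_prep() builds the DPTR with
 * the FPM table IOVA first, per the widened check in this patch. The macro
 * names mirror those used in the diff and are assumed to come from roc_cpt.h.
 */
static inline bool
cpt_ecfpm_fpm_table_first(uint8_t cpt_ver)
{
	return cpt_ver == ROC_CPT_REVISION_ID_96XX_B0 ||
	       cpt_ver == ROC_CPT_REVISION_ID_96XX_C0 ||
	       cpt_ver == ROC_CPT_REVISION_ID_98XX;
}
```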