From patchwork Tue Aug 9 10:53:44 2022
X-Patchwork-Submitter: Anoob Joseph <anoobj@marvell.com>
X-Patchwork-Id: 114750
X-Patchwork-Delegate: gakhil@marvell.com
From: Anoob Joseph <anoobj@marvell.com>
To: Akhil Goyal <gakhil@marvell.com>, Jerin Jacob
CC: Archana Muniganti, Tejasree Kondoj, dev@dpdk.org
Subject: [PATCH v2 06/18] crypto/cnxk: add separate datapath for pdcp cipher operation
Date: Tue, 9 Aug 2022 16:23:44 +0530
Message-ID: <20220809105356.561-7-anoobj@marvell.com>
In-Reply-To: <20220809105356.561-1-anoobj@marvell.com>
References: <20220808080606.220-1-anoobj@marvell.com> <20220809105356.561-1-anoobj@marvell.com>
List-Id: DPDK patches and discussions

Add separate datapath for PDCP opcode performing cipher operation.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
---
 drivers/crypto/cnxk/cn10k_cryptodev_ops.c |  19 ---
 drivers/crypto/cnxk/cn9k_cryptodev_ops.c  |  27 +---
 drivers/crypto/cnxk/cnxk_se.h             | 177 +++++++++++++++++++---
 3 files changed, 158 insertions(+), 65 deletions(-)

diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index bfa6374005..1b70d02e2a 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -77,25 +77,6 @@ cpt_sec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
 	return ret;
 }
 
-static __rte_always_inline int __rte_hot
-cpt_sym_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
-		  struct cnxk_se_sess *sess, struct cpt_inflight_req *infl_req,
-		  struct cpt_inst_s *inst)
-{
-	uint64_t cpt_op;
-	int ret;
-
-	cpt_op = sess->cpt_op;
-
-	if (cpt_op & ROC_SE_OP_CIPHER_MASK)
-		ret = fill_fc_params(op, sess, &qp->meta_info, infl_req, inst);
-	else
-		ret = fill_digest_params(op, sess, &qp->meta_info, infl_req,
-					 inst);
-
-	return ret;
-}
-
 static inline int
 cn10k_cpt_fill_inst(struct cnxk_cpt_qp *qp, struct rte_crypto_op *ops[],
 		    struct cpt_inst_s inst[], struct cpt_inflight_req *infl_req)
diff --git a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
index 2182c1bd2f..3d69723809 100644
--- a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
@@ -17,27 +17,6 @@
 #include "cnxk_cryptodev_ops.h"
 #include "cnxk_se.h"
 
-static __rte_always_inline int __rte_hot
-cn9k_cpt_sym_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
-		       struct cnxk_se_sess *sess,
-		       struct cpt_inflight_req *infl_req,
-		       struct cpt_inst_s *inst)
-{
-	uint64_t cpt_op;
-	int ret;
-
-	cpt_op = sess->cpt_op;
-
-	if (sess->roc_se_ctx.fc_type == ROC_SE_PDCP_CHAIN)
-		ret = fill_pdcp_chain_params(op, sess, &qp->meta_info, infl_req, inst);
-	else if (cpt_op & ROC_SE_OP_CIPHER_MASK)
-		ret = fill_fc_params(op, sess, &qp->meta_info, infl_req,
-				     inst);
-	else
-		ret = fill_digest_params(op, sess, &qp->meta_info, infl_req, inst);
-
-	return ret;
-}
-
 static __rte_always_inline int __rte_hot
 cn9k_cpt_sec_inst_fill(struct rte_crypto_op *op,
 		       struct cpt_inflight_req *infl_req,
@@ -118,8 +97,7 @@ cn9k_cpt_inst_prep(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
 			sym_op = op->sym;
 			sess = get_sym_session_private_data(
 				sym_op->session, cn9k_cryptodev_driver_id);
-			ret = cn9k_cpt_sym_inst_fill(qp, op, sess, infl_req,
-						     inst);
+			ret = cpt_sym_inst_fill(qp, op, sess, infl_req, inst);
 			inst->w7.u64 = sess->cpt_inst_w7;
 		} else if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION)
 			ret = cn9k_cpt_sec_inst_fill(op, infl_req, inst);
@@ -130,8 +108,7 @@ cn9k_cpt_inst_prep(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
 				return -1;
 			}
 
-			ret = cn9k_cpt_sym_inst_fill(qp, op, sess, infl_req,
-						     inst);
+			ret = cpt_sym_inst_fill(qp, op, sess, infl_req, inst);
 			if (unlikely(ret)) {
 				sym_session_clear(cn9k_cryptodev_driver_id,
 						  op->sym->session);
diff --git a/drivers/crypto/cnxk/cnxk_se.h b/drivers/crypto/cnxk/cnxk_se.h
index 2b477284c0..35d074ea34 100644
--- a/drivers/crypto/cnxk/cnxk_se.h
+++ b/drivers/crypto/cnxk/cnxk_se.h
@@ -1865,8 +1865,6 @@ cpt_fc_dec_hmac_prep(uint32_t flags, uint64_t d_offs, uint64_t d_lens,
 	if (likely(fc_type == ROC_SE_FC_GEN)) {
 		ret = cpt_dec_hmac_prep(flags, d_offs, d_lens, fc_params,
 					inst);
-	} else if (fc_type == ROC_SE_PDCP) {
-		ret = cpt_pdcp_alg_prep(flags, d_offs, d_lens, fc_params, inst);
 	} else if (fc_type == ROC_SE_KASUMI) {
 		ret = cpt_kasumi_dec_prep(d_offs, d_lens, fc_params, inst);
 	}
@@ -2400,8 +2398,8 @@ prepare_iov_from_pkt_inplace(struct rte_mbuf *pkt,
 
 static __rte_always_inline int
 fill_fc_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess,
-	       struct cpt_qp_meta_info *m_info,
-	       struct cpt_inflight_req *infl_req, struct cpt_inst_s *inst)
+	       struct cpt_qp_meta_info *m_info, struct cpt_inflight_req *infl_req,
+	       struct cpt_inst_s *inst, const bool is_kasumi)
 {
 	struct roc_se_ctx *ctx = &sess->roc_se_ctx;
 	uint8_t op_minor = ctx->template_w4.s.opcode_minor;
@@ -2424,7 +2422,9 @@ fill_fc_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess,
 	int ret;
 
 	fc_params.cipher_iv_len = sess->iv_length;
-	fc_params.auth_iv_len = sess->auth_iv_length;
+	fc_params.auth_iv_len = 0;
+	fc_params.auth_iv_buf = NULL;
+	fc_params.iv_buf = NULL;
 
 	if (likely(sess->iv_length)) {
 		flags |= ROC_SE_VALID_IV_BUF;
@@ -2440,13 +2440,15 @@ fill_fc_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess,
 		}
 	}
 
-	if (sess->zsk_flag) {
+	/* Kasumi would need auth IV */
+	if (is_kasumi && sess->zsk_flag) {
+		fc_params.auth_iv_len = sess->auth_iv_length;
 		if (sess->auth_iv_length)
 			fc_params.auth_iv_buf = rte_crypto_op_ctod_offset(
 				cop, uint8_t *, sess->auth_iv_offset);
-		if (sess->zsk_flag != ROC_SE_ZS_EA)
-			inplace = 0;
+		inplace = 0;
 	}
+
 	m_src = sym_op->m_src;
 	m_dst = sym_op->m_dst;
@@ -2508,14 +2510,6 @@ fill_fc_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess,
 		d_lens = ci_data_length;
 		d_lens = (d_lens << 32) | a_data_length;
 
-		if (sess->auth_first)
-			mc_hash_off = a_data_offset + a_data_length;
-		else
-			mc_hash_off = ci_data_offset + ci_data_length;
-
-		if (mc_hash_off < (a_data_offset + a_data_length)) {
-			mc_hash_off = (a_data_offset + a_data_length);
-		}
 		/* for gmac, salt should be updated like in gcm */
 		if (unlikely(sess->is_gmac)) {
 			uint8_t *salt;
@@ -2529,6 +2523,14 @@ fill_fc_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess,
 		if (likely(sess->mac_len)) {
 			struct rte_mbuf *m = cpt_m_dst_get(cpt_op, m_src, m_dst);
 
+			if (sess->auth_first)
+				mc_hash_off = a_data_offset + a_data_length;
+			else
+				mc_hash_off = ci_data_offset + ci_data_length;
+
+			if (mc_hash_off < (a_data_offset + a_data_length))
+				mc_hash_off = (a_data_offset + a_data_length);
+
 			/* hmac immediately following data is best case */
 			if (!(op_minor & ROC_SE_FC_MINOR_OP_HMAC_FIRST) &&
 			    (unlikely(rte_pktmbuf_mtod(m, uint8_t *) +
@@ -2599,11 +2601,8 @@ fill_fc_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess,
 	}
 
 	if (unlikely(!((flags & ROC_SE_SINGLE_BUF_INPLACE) &&
-		       (flags & ROC_SE_SINGLE_BUF_HEADROOM) &&
-		       ((ctx->fc_type != ROC_SE_KASUMI) &&
-			(ctx->fc_type != ROC_SE_HASH_HMAC))))) {
-		mdata = alloc_op_meta(&fc_params.meta_buf, m_info->mlen,
-				      m_info->pool, infl_req);
+		       (flags & ROC_SE_SINGLE_BUF_HEADROOM) && !is_kasumi))) {
+		mdata = alloc_op_meta(&fc_params.meta_buf, m_info->mlen, m_info->pool, infl_req);
 		if (mdata == NULL) {
 			plt_dp_err("Error allocating meta buffer for request");
 			return -ENOMEM;
@@ -2632,6 +2631,112 @@ fill_fc_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess,
 	return ret;
 }
 
+static __rte_always_inline int
+fill_pdcp_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess,
+		 struct cpt_qp_meta_info *m_info, struct cpt_inflight_req *infl_req,
+		 struct cpt_inst_s *inst)
+{
+	struct rte_crypto_sym_op *sym_op = cop->sym;
+	struct roc_se_fc_params fc_params;
+	uint32_t c_data_len, c_data_off;
+	struct rte_mbuf *m_src, *m_dst;
+	uint64_t d_offs, d_lens;
+	char src[SRC_IOV_SIZE];
+	char dst[SRC_IOV_SIZE];
+	void *mdata = NULL;
+	uint32_t flags = 0;
+	int ret;
+
+	/* Cipher only */
+
+	fc_params.cipher_iv_len = sess->iv_length;
+	fc_params.auth_iv_len = 0;
+	fc_params.iv_buf = NULL;
+	fc_params.auth_iv_buf = NULL;
+
+	if (likely(sess->iv_length))
+		fc_params.iv_buf = rte_crypto_op_ctod_offset(cop, uint8_t *, sess->iv_offset);
+
+	m_src = sym_op->m_src;
+	m_dst = sym_op->m_dst;
+
+	c_data_len = sym_op->cipher.data.length;
+	c_data_off = sym_op->cipher.data.offset;
+
+	d_offs = (uint64_t)c_data_off << 16;
+	d_lens = (uint64_t)c_data_len << 32;
+
+	fc_params.ctx_buf.vaddr = &sess->roc_se_ctx;
+
+	if (likely(m_dst == NULL || m_src == m_dst)) {
+		fc_params.dst_iov = fc_params.src_iov = (void *)src;
+		prepare_iov_from_pkt_inplace(m_src, &fc_params, &flags);
+	} else {
+		/* Out of place processing */
+		fc_params.src_iov = (void *)src;
+		fc_params.dst_iov = (void *)dst;
+
+		/* Store SG I/O in the api for reuse */
+		if (prepare_iov_from_pkt(m_src, fc_params.src_iov, 0)) {
+			plt_dp_err("Prepare src iov failed");
+			ret = -EINVAL;
+			goto err_exit;
+		}
+
+		if (unlikely(m_dst != NULL)) {
+			uint32_t pkt_len;
+
+			/* Try to make room as much as src has */
+			pkt_len = rte_pktmbuf_pkt_len(m_dst);
+
+			if (unlikely(pkt_len < rte_pktmbuf_pkt_len(m_src))) {
+				pkt_len = rte_pktmbuf_pkt_len(m_src) - pkt_len;
+				if (!rte_pktmbuf_append(m_dst, pkt_len)) {
+					plt_dp_err("Not enough space in "
+						   "m_dst %p, need %u"
+						   " more",
+						   m_dst, pkt_len);
+					ret = -EINVAL;
+					goto err_exit;
+				}
+			}
+
+			if (prepare_iov_from_pkt(m_dst, fc_params.dst_iov, 0)) {
+				plt_dp_err("Prepare dst iov failed for "
+					   "m_dst %p",
+					   m_dst);
+				ret = -EINVAL;
+				goto err_exit;
+			}
+		} else {
+			fc_params.dst_iov = (void *)src;
+		}
+	}
+
+	if (unlikely(!((flags & ROC_SE_SINGLE_BUF_INPLACE) &&
+		       (flags & ROC_SE_SINGLE_BUF_HEADROOM)))) {
+		mdata = alloc_op_meta(&fc_params.meta_buf, m_info->mlen, m_info->pool, infl_req);
+		if (mdata == NULL) {
+			plt_dp_err("Could not allocate meta buffer");
+			return -ENOMEM;
+		}
+	}
+
+	ret = cpt_pdcp_alg_prep(flags, d_offs, d_lens, &fc_params, inst);
+	if (unlikely(ret)) {
+		plt_dp_err("Could not prepare instruction");
+		goto free_mdata_and_exit;
+	}
+
+	return 0;
+
+free_mdata_and_exit:
+	if (infl_req->op_flags & CPT_OP_FLAGS_METABUF)
+		rte_mempool_put(m_info->pool, infl_req->mdata);
+err_exit:
+	return ret;
+}
+
 static __rte_always_inline int
 fill_pdcp_chain_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess,
 		       struct cpt_qp_meta_info *m_info, struct cpt_inflight_req *infl_req,
@@ -2974,4 +3079,34 @@ fill_digest_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess,
 err_exit:
 	return ret;
 }
+
+static __rte_always_inline int __rte_hot
+cpt_sym_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op, struct cnxk_se_sess *sess,
+		  struct cpt_inflight_req *infl_req, struct cpt_inst_s *inst)
+{
+	uint64_t cpt_op = sess->cpt_op;
+	int ret;
+
+	if (cpt_op & ROC_SE_OP_CIPHER_MASK) {
+		switch (sess->roc_se_ctx.fc_type) {
+		case ROC_SE_PDCP_CHAIN:
+			ret = fill_pdcp_chain_params(op, sess, &qp->meta_info, infl_req, inst);
+			break;
+		case ROC_SE_PDCP:
+			ret = fill_pdcp_params(op, sess, &qp->meta_info, infl_req, inst);
+			break;
+		case ROC_SE_KASUMI:
+			ret = fill_fc_params(op, sess, &qp->meta_info, infl_req, inst, true);
+			break;
+		default:
+			ret = fill_fc_params(op, sess, &qp->meta_info, infl_req, inst, false);
+			break;
+		}
+	} else {
+		ret = fill_digest_params(op, sess, &qp->meta_info, infl_req, inst);
+	}
+
+	return ret;
+}
+
 #endif /*_CNXK_SE_H_ */