From patchwork Fri Feb 24 09:40:12 2023
X-Patchwork-Submitter: Tejasree Kondoj
X-Patchwork-Id: 124515
X-Patchwork-Delegate: gakhil@marvell.com
From: Tejasree Kondoj
To: Akhil Goyal
CC: Anoob Joseph, Gowrishankar Muthukrishnan
Subject: [PATCH v2 09/11] crypto/cnxk: support cn10k IPsec SG mode
Date: Fri, 24 Feb 2023 15:10:12 +0530
Message-ID: <20230224094014.3246764-10-ktejasree@marvell.com>
In-Reply-To: <20230224094014.3246764-1-ktejasree@marvell.com>
References: <20230224094014.3246764-1-ktejasree@marvell.com>
List-Id: DPDK patches and discussions

Adding support for scatter-gather mode in 103XX and 106XX lookaside IPsec.
Signed-off-by: Tejasree Kondoj
---
 drivers/crypto/cnxk/cn10k_cryptodev_ops.c |  21 +-
 drivers/crypto/cnxk/cn10k_ipsec_la_ops.h  | 222 ++++++++++++++++++++--
 drivers/crypto/cnxk/cnxk_sg.h             |  23 +++
 3 files changed, 239 insertions(+), 27 deletions(-)

diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index 9f6fd4e411..e405a2ad9f 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -77,8 +77,8 @@ cn10k_cpt_sym_temp_sess_create(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op)
 }
 
 static __rte_always_inline int __rte_hot
-cpt_sec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
-		  struct cn10k_sec_session *sess, struct cpt_inst_s *inst)
+cpt_sec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op, struct cn10k_sec_session *sess,
+		  struct cpt_inst_s *inst, struct cpt_inflight_req *infl_req, const bool is_sg_ver2)
 {
 	struct rte_crypto_sym_op *sym_op = op->sym;
 	int ret;
@@ -88,15 +88,11 @@ cpt_sec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
 		return -ENOTSUP;
 	}
 
-	if (unlikely(!rte_pktmbuf_is_contiguous(sym_op->m_src))) {
-		plt_dp_err("Scatter Gather mode is not supported");
-		return -ENOTSUP;
-	}
-
 	if (sess->is_outbound)
-		ret = process_outb_sa(&qp->lf, op, sess, inst);
+		ret = process_outb_sa(&qp->lf, op, sess, &qp->meta_info, infl_req, inst,
+				      is_sg_ver2);
 	else
-		ret = process_inb_sa(op, sess, inst);
+		ret = process_inb_sa(op, sess, inst, &qp->meta_info, infl_req, is_sg_ver2);
 
 	return ret;
 }
@@ -129,7 +125,7 @@ cn10k_cpt_fill_inst(struct cnxk_cpt_qp *qp, struct rte_crypto_op *ops[], struct
 	if (op->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
 		if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
 			sec_sess = (struct cn10k_sec_session *)sym_op->session;
-			ret = cpt_sec_inst_fill(qp, op, sec_sess, &inst[0]);
+			ret = cpt_sec_inst_fill(qp, op, sec_sess, &inst[0], infl_req, is_sg_ver2);
 			if (unlikely(ret))
 				return 0;
 			w7 = sec_sess->inst.w7;
@@ -827,7 +823,10 @@ cn10k_cpt_sec_post_process(struct rte_crypto_op *cop, struct cpt_cn10k_res_s *re
 		cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
 		return;
 	}
-	mbuf->data_len = m_len;
+
+	if (mbuf->next == NULL)
+		mbuf->data_len = m_len;
+
 	mbuf->pkt_len = m_len;
 }
 
diff --git a/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h b/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
index f2761a55a5..8e208eb2ca 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
+++ b/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
@@ -8,9 +8,13 @@
 #include
 #include
 
+#include "roc_ie.h"
+
 #include "cn10k_cryptodev.h"
 #include "cn10k_ipsec.h"
 #include "cnxk_cryptodev.h"
+#include "cnxk_cryptodev_ops.h"
+#include "cnxk_sg.h"
 
 static inline void
 ipsec_po_sa_iv_set(struct cn10k_sec_session *sess, struct rte_crypto_op *cop)
@@ -44,18 +48,14 @@ ipsec_po_sa_aes_gcm_iv_set(struct cn10k_sec_session *sess, struct rte_crypto_op
 
 static __rte_always_inline int
 process_outb_sa(struct roc_cpt_lf *lf, struct rte_crypto_op *cop, struct cn10k_sec_session *sess,
-		struct cpt_inst_s *inst)
+		struct cpt_qp_meta_info *m_info, struct cpt_inflight_req *infl_req,
+		struct cpt_inst_s *inst, const bool is_sg_ver2)
 {
 	struct rte_crypto_sym_op *sym_op = cop->sym;
 	struct rte_mbuf *m_src = sym_op->m_src;
 	uint64_t inst_w4_u64 = sess->inst.w4;
 	uint64_t dptr;
 
-	if (unlikely(rte_pktmbuf_tailroom(m_src) < sess->max_extended_len)) {
-		plt_dp_err("Not enough tail room");
-		return -ENOMEM;
-	}
-
 	RTE_SET_USED(lf);
 
 #ifdef LA_IPSEC_DEBUG
@@ -79,27 +79,217 @@ process_outb_sa(struct roc_cpt_lf *lf, struct rte_crypto_op *cop, struct cn10k_s
 	if (m_src->ol_flags & RTE_MBUF_F_TX_L4_MASK)
 		inst_w4_u64 &= ~BIT_ULL(32);
 
-	/* Prepare CPT instruction */
-	inst->w4.u64 = inst_w4_u64 | rte_pktmbuf_pkt_len(m_src);
-	dptr = rte_pktmbuf_mtod(m_src, uint64_t);
-	inst->dptr = dptr;
+	if (likely(m_src->next == NULL)) {
+		if (unlikely(rte_pktmbuf_tailroom(m_src) < sess->max_extended_len)) {
+			plt_dp_err("Not enough tail room");
+			return -ENOMEM;
+		}
+
+		/* Prepare CPT instruction */
+		inst->w4.u64 = inst_w4_u64 | rte_pktmbuf_pkt_len(m_src);
+		dptr = rte_pktmbuf_mtod(m_src, uint64_t);
+		inst->dptr = dptr;
+	} else if (is_sg_ver2 == false) {
+		struct roc_sglist_comp *scatter_comp, *gather_comp;
+		uint32_t g_size_bytes, s_size_bytes;
+		struct rte_mbuf *last_seg;
+		uint8_t *in_buffer;
+		uint32_t dlen;
+		void *m_data;
+		int i;
+
+		last_seg = rte_pktmbuf_lastseg(m_src);
+
+		if (unlikely(rte_pktmbuf_tailroom(last_seg) < sess->max_extended_len)) {
+			plt_dp_err("Not enough tail room (required: %d, available: %d)",
+				   sess->max_extended_len, rte_pktmbuf_tailroom(last_seg));
+			return -ENOMEM;
+		}
+
+		m_data = alloc_op_meta(NULL, m_info->mlen, m_info->pool, infl_req);
+		if (unlikely(m_data == NULL)) {
+			plt_dp_err("Error allocating meta buffer for request");
+			return -ENOMEM;
+		}
+
+		in_buffer = m_data;
+
+		((uint16_t *)in_buffer)[0] = 0;
+		((uint16_t *)in_buffer)[1] = 0;
+
+		/* Input Gather List */
+		i = 0;
+		gather_comp = (struct roc_sglist_comp *)((uint8_t *)m_data + 8);
+
+		i = fill_ipsec_sg_comp_from_pkt(gather_comp, i, m_src);
+		((uint16_t *)in_buffer)[2] = rte_cpu_to_be_16(i);
+
+		g_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
+
+		/* Output Scatter List */
+		last_seg->data_len += sess->max_extended_len;
+
+		i = 0;
+		scatter_comp = (struct roc_sglist_comp *)((uint8_t *)gather_comp + g_size_bytes);
+
+		i = fill_ipsec_sg_comp_from_pkt(scatter_comp, i, m_src);
+		((uint16_t *)in_buffer)[3] = rte_cpu_to_be_16(i);
+
+		s_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
+
+		dlen = g_size_bytes + s_size_bytes + ROC_SG_LIST_HDR_SIZE;
+
+		inst->dptr = (uint64_t)in_buffer;
+
+		inst->w4.u64 = sess->inst.w4 | dlen;
+		inst->w4.s.opcode_major |= (uint64_t)ROC_DMA_MODE_SG;
+	} else {
+		struct roc_sg2list_comp *scatter_comp, *gather_comp;
+		union cpt_inst_w5 cpt_inst_w5;
+		union cpt_inst_w6 cpt_inst_w6;
+		struct rte_mbuf *last_seg;
+		uint32_t g_size_bytes;
+		void *m_data;
+		int i;
+
+		last_seg = rte_pktmbuf_lastseg(m_src);
+
+		if (unlikely(rte_pktmbuf_tailroom(last_seg) < sess->max_extended_len)) {
+			plt_dp_err("Not enough tail room (required: %d, available: %d)",
+				   sess->max_extended_len, rte_pktmbuf_tailroom(last_seg));
+			return -ENOMEM;
+		}
+
+		m_data = alloc_op_meta(NULL, m_info->mlen, m_info->pool, infl_req);
+		if (unlikely(m_data == NULL)) {
+			plt_dp_err("Error allocating meta buffer for request");
+			return -ENOMEM;
+		}
+
+		/* Input Gather List */
+		i = 0;
+		gather_comp = (struct roc_sg2list_comp *)((uint8_t *)m_data);
+
+		i = fill_ipsec_sg2_comp_from_pkt(gather_comp, i, m_src);
+
+		cpt_inst_w5.s.gather_sz = ((i + 2) / 3);
+		g_size_bytes = ((i + 2) / 3) * sizeof(struct roc_sg2list_comp);
+
+		/* Output Scatter List */
+		last_seg->data_len += sess->max_extended_len;
+
+		i = 0;
+		scatter_comp = (struct roc_sg2list_comp *)((uint8_t *)gather_comp + g_size_bytes);
+
+		i = fill_ipsec_sg2_comp_from_pkt(scatter_comp, i, m_src);
+
+		cpt_inst_w6.s.scatter_sz = ((i + 2) / 3);
+
+		cpt_inst_w5.s.dptr = (uint64_t)gather_comp;
+		cpt_inst_w6.s.rptr = (uint64_t)scatter_comp;
+
+		inst->w5.u64 = cpt_inst_w5.u64;
+		inst->w6.u64 = cpt_inst_w6.u64;
+		inst->w4.u64 = sess->inst.w4 | rte_pktmbuf_pkt_len(m_src);
+		inst->w4.s.opcode_major &= (~(ROC_IE_OT_INPLACE_BIT));
+	}
 
 	return 0;
 }
 
 static __rte_always_inline int
-process_inb_sa(struct rte_crypto_op *cop, struct cn10k_sec_session *sess, struct cpt_inst_s *inst)
+process_inb_sa(struct rte_crypto_op *cop, struct cn10k_sec_session *sess, struct cpt_inst_s *inst,
+	       struct cpt_qp_meta_info *m_info, struct cpt_inflight_req *infl_req,
+	       const bool is_sg_ver2)
 {
 	struct rte_crypto_sym_op *sym_op = cop->sym;
 	struct rte_mbuf *m_src = sym_op->m_src;
 	uint64_t dptr;
 
-	/* Prepare CPT instruction */
-	inst->w4.u64 = sess->inst.w4 | rte_pktmbuf_pkt_len(m_src);
-	dptr = rte_pktmbuf_mtod(m_src, uint64_t);
-	inst->dptr = dptr;
-	m_src->ol_flags |= (uint64_t)sess->ip_csum;
+	if (likely(m_src->next == NULL)) {
+		/* Prepare CPT instruction */
+		inst->w4.u64 = sess->inst.w4 | rte_pktmbuf_pkt_len(m_src);
+		dptr = rte_pktmbuf_mtod(m_src, uint64_t);
+		inst->dptr = dptr;
+		m_src->ol_flags |= (uint64_t)sess->ip_csum;
+	} else if (is_sg_ver2 == false) {
+		struct roc_sglist_comp *scatter_comp, *gather_comp;
+		uint32_t g_size_bytes, s_size_bytes;
+		uint8_t *in_buffer;
+		uint32_t dlen;
+		void *m_data;
+		int i;
+
+		m_data = alloc_op_meta(NULL, m_info->mlen, m_info->pool, infl_req);
+		if (unlikely(m_data == NULL)) {
+			plt_dp_err("Error allocating meta buffer for request");
+			return -ENOMEM;
+		}
+
+		in_buffer = m_data;
+
+		((uint16_t *)in_buffer)[0] = 0;
+		((uint16_t *)in_buffer)[1] = 0;
+
+		/* Input Gather List */
+		i = 0;
+		gather_comp = (struct roc_sglist_comp *)((uint8_t *)m_data + 8);
+		i = fill_ipsec_sg_comp_from_pkt(gather_comp, i, m_src);
+		((uint16_t *)in_buffer)[2] = rte_cpu_to_be_16(i);
+
+		g_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
+		/* Output Scatter List */
+		i = 0;
+		scatter_comp = (struct roc_sglist_comp *)((uint8_t *)gather_comp + g_size_bytes);
+		i = fill_ipsec_sg_comp_from_pkt(scatter_comp, i, m_src);
+		((uint16_t *)in_buffer)[3] = rte_cpu_to_be_16(i);
+
+		s_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
+
+		dlen = g_size_bytes + s_size_bytes + ROC_SG_LIST_HDR_SIZE;
+
+		inst->dptr = (uint64_t)in_buffer;
+		inst->w4.u64 = sess->inst.w4 | dlen;
+		inst->w4.s.opcode_major |= (uint64_t)ROC_DMA_MODE_SG;
+	} else {
+		struct roc_sg2list_comp *scatter_comp, *gather_comp;
+		union cpt_inst_w5 cpt_inst_w5;
+		union cpt_inst_w6 cpt_inst_w6;
+		uint32_t g_size_bytes;
+		void *m_data;
+		int i;
+
+		m_data = alloc_op_meta(NULL, m_info->mlen, m_info->pool, infl_req);
+		if (unlikely(m_data == NULL)) {
+			plt_dp_err("Error allocating meta buffer for request");
+			return -ENOMEM;
+		}
+
+		/* Input Gather List */
+		i = 0;
+		gather_comp = (struct roc_sg2list_comp *)((uint8_t *)m_data);
+
+		i = fill_ipsec_sg2_comp_from_pkt(gather_comp, i, m_src);
+
+		cpt_inst_w5.s.gather_sz = ((i + 2) / 3);
+		g_size_bytes = ((i + 2) / 3) * sizeof(struct roc_sg2list_comp);
+
+		/* Output Scatter List */
+		i = 0;
+		scatter_comp = (struct roc_sg2list_comp *)((uint8_t *)gather_comp + g_size_bytes);
+		i = fill_ipsec_sg2_comp_from_pkt(scatter_comp, i, m_src);
+
+		cpt_inst_w6.s.scatter_sz = ((i + 2) / 3);
+
+		cpt_inst_w5.s.dptr = (uint64_t)gather_comp;
+		cpt_inst_w6.s.rptr = (uint64_t)scatter_comp;
+
+		inst->w5.u64 = cpt_inst_w5.u64;
+		inst->w6.u64 = cpt_inst_w6.u64;
+		inst->w4.u64 = sess->inst.w4 | rte_pktmbuf_pkt_len(m_src);
+		inst->w4.s.opcode_major &= (~(ROC_IE_OT_INPLACE_BIT));
+	}
 
 	return 0;
 }
diff --git a/drivers/crypto/cnxk/cnxk_sg.h b/drivers/crypto/cnxk/cnxk_sg.h
index ead2886e99..65244199bd 100644
--- a/drivers/crypto/cnxk/cnxk_sg.h
+++ b/drivers/crypto/cnxk/cnxk_sg.h
@@ -6,6 +6,7 @@
 #define _CNXK_SG_H_
 
 #include "roc_cpt_sg.h"
+#include "roc_se.h"
 
 static __rte_always_inline uint32_t
 fill_sg_comp(struct roc_sglist_comp *list, uint32_t i, phys_addr_t dma_addr, uint32_t size)
@@ -148,6 +149,28 @@ fill_ipsec_sg_comp_from_pkt(struct roc_sglist_comp *list, uint32_t i, struct rte
 	return i;
 }
 
+static __rte_always_inline uint32_t
+fill_ipsec_sg2_comp_from_pkt(struct roc_sg2list_comp *list, uint32_t i, struct rte_mbuf *pkt)
+{
+	uint32_t buf_sz;
+	void *vaddr;
+
+	while (unlikely(pkt != NULL)) {
+		struct roc_sg2list_comp *to = &list[i / 3];
+		buf_sz = pkt->data_len;
+		vaddr = rte_pktmbuf_mtod(pkt, void *);
+
+		to->u.s.len[i % 3] = buf_sz;
+		to->ptr[i % 3] = (uint64_t)vaddr;
+		to->u.s.valid_segs = (i % 3) + 1;
+
+		pkt = pkt->next;
+		i++;
+	}
+
+	return i;
+}
+
 static __rte_always_inline uint32_t
 fill_sg2_comp(struct roc_sg2list_comp *list, uint32_t i, phys_addr_t dma_addr, uint32_t size)
 {