From patchwork Fri Aug 12 13:23:32 2022
X-Patchwork-Submitter: "Power, Ciara"
X-Patchwork-Id: 114921
X-Patchwork-Delegate: gakhil@marvell.com
From: Ciara Power
To: Akhil Goyal, Fan Zhang
Cc: dev@dpdk.org, kai.ji@intel.com, pablo.de.lara.guarch@intel.com, Ciara Power
Subject: [PATCH 1/3] test/crypto: fix wireless auth digest segment
Date: Fri, 12 Aug 2022 13:23:32 +0000
Message-Id: <20220812132334.75707-2-ciara.power@intel.com>
In-Reply-To: <20220812132334.75707-1-ciara.power@intel.com>
References: <20220812132334.75707-1-ciara.power@intel.com>
List-Id: DPDK patches and discussions

The segment size for some tests was too small to hold the auth digest.
This caused issues when using op->sym->auth.digest.data for comparisons
in the AESNI_MB PMD after a subsequent patch enables SGL.

For example, with a segment size of 2 and a digest size of 4, 4 bytes
are read from op->sym->auth.digest.data, which overflows into the
memory after the segment instead of using the second segment that
contains the remaining half of the digest.
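To make the failure mode concrete, here is a minimal standalone sketch of
the overflow and of the fix's pad-the-last-segment idea. The seg struct
below is a toy stand-in for rte_mbuf and seek() for the segment walk in
the test code; none of this is DPDK API, and the real fix (in the diff
below) uses rte_pktmbuf_free()/rte_pktmbuf_append() rather than the
memcpy shown here.

#include <stdio.h>
#include <string.h>

struct seg {
	char data[8];
	unsigned data_len;
	struct seg *next;
};

/* Walk the chain to the segment holding offset *off, as the test does
 * before taking a digest pointer into that segment. */
static struct seg *seek(struct seg *s, unsigned *off)
{
	while (*off >= s->data_len) {
		*off -= s->data_len;
		s = s->next;
	}
	return s;
}

int main(void)
{
	struct seg s2 = { "CD", 2, NULL };	/* second half of digest */
	struct seg s1 = { "AB", 2, &s2 };	/* first half of digest */
	unsigned off = 0;
	struct seg *s = seek(&s1, &off);

	/* Bug: reading a 4-byte digest from a 2-byte segment runs past
	 * its end; only "AB" is valid here, "CD" lives in s->next. */
	printf("segment holds %u of 4 digest bytes\n", s->data_len - off);

	/* Fix, in toy form: grow the last segment so the whole digest
	 * is contiguous, and drop the now-unneeded tail segment. */
	memcpy(s->data + s->data_len, s2.data, s2.data_len);
	s->data_len += s2.data_len;
	s->next = NULL;
	printf("digest now contiguous: %.4s\n", s->data);
	return 0;
}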
Fixes: 11c5485bb276 ("test/crypto: add scatter-gather tests for IP and OOP")

Signed-off-by: Ciara Power
---
 app/test/test_cryptodev.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 69a0301de0..e6925b6531 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -3040,6 +3040,14 @@ create_wireless_algo_auth_cipher_operation(
 			remaining_off -= rte_pktmbuf_data_len(sgl_buf);
 			sgl_buf = sgl_buf->next;
 		}
+
+		/* The last segment should be large enough to hold full digest */
+		if (sgl_buf->data_len < auth_tag_len) {
+			rte_pktmbuf_free(sgl_buf->next);
+			sgl_buf->next = NULL;
+			rte_pktmbuf_append(sgl_buf, auth_tag_len - sgl_buf->data_len);
+		}
+
 		sym_op->auth.digest.data = rte_pktmbuf_mtod_offset(sgl_buf,
 				uint8_t *, remaining_off);
 		sym_op->auth.digest.phys_addr = rte_pktmbuf_iova_offset(sgl_buf,

From patchwork Fri Aug 12 13:23:33 2022
X-Patchwork-Submitter: "Power, Ciara"
X-Patchwork-Id: 114922
X-Patchwork-Delegate: gakhil@marvell.com
From: Ciara Power
To: Fan Zhang, Pablo de Lara
Cc: dev@dpdk.org, kai.ji@intel.com, Ciara Power
Subject: [PATCH 2/3] crypto/ipsec_mb: add remaining SGL support
Date: Fri, 12 Aug 2022 13:23:33 +0000
Message-Id: <20220812132334.75707-3-ciara.power@intel.com>
In-Reply-To: <20220812132334.75707-1-ciara.power@intel.com>
References: <20220812132334.75707-1-ciara.power@intel.com>
List-Id: DPDK patches and discussions

The intel-ipsec-mb library supports SGL for GCM and
ChaChaPoly algorithms using the JOB API.
This support was previously added to the AESNI_MB PMD, but the SGL
feature flags could not be advertised because the remaining algorithms
had no SGL support.

This patch adds a workaround SGL approach for those other algorithms
using the JOB API. The segmented input buffers are copied into a linear
buffer, which is passed as a single job to intel-ipsec-mb. After the job
is processed, the linear buffer is split back into the original
destination segments.

Existing AESNI_MB test cases pass with these feature flags added.

Signed-off-by: Ciara Power
---
 drivers/crypto/ipsec_mb/pmd_aesni_mb.c | 190 ++++++++++++++++++++-----
 1 file changed, 157 insertions(+), 33 deletions(-)

diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
index 6d5d3ce8eb..53084f689a 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
@@ -937,7 +937,7 @@ static inline uint64_t
 auth_start_offset(struct rte_crypto_op *op, struct aesni_mb_session *session,
 		uint32_t oop, const uint32_t auth_offset,
 		const uint32_t cipher_offset, const uint32_t auth_length,
-		const uint32_t cipher_length)
+		const uint32_t cipher_length, uint8_t lb_sgl)
 {
 	struct rte_mbuf *m_src, *m_dst;
 	uint8_t *p_src, *p_dst;
@@ -945,7 +945,7 @@ auth_start_offset(struct rte_crypto_op *op, struct aesni_mb_session *session,
 	uint32_t cipher_end, auth_end;
 
 	/* Only cipher then hash needs special calculation. */
-	if (!oop || session->chain_order != IMB_ORDER_CIPHER_HASH)
+	if (!oop || session->chain_order != IMB_ORDER_CIPHER_HASH || lb_sgl)
 		return auth_offset;
 
 	m_src = op->sym->m_src;
@@ -1159,6 +1159,73 @@ handle_aead_sgl_job(IMB_JOB *job, IMB_MGR *mb_mgr,
 	return 0;
 }
 
+static int
+handle_sgl_linear(IMB_JOB *job, struct rte_crypto_op *op, uint32_t dst_offset,
+		struct aesni_mb_session *session)
+{
+	uint64_t cipher_len, auth_len;
+	uint8_t *src, *linear_buf = NULL;
+	int total_len;
+	int lb_offset = 0;
+	struct rte_mbuf *src_seg;
+	uint16_t src_len;
+
+	if (job->cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN ||
+			job->cipher_mode == IMB_CIPHER_KASUMI_UEA1_BITLEN)
+		cipher_len = (job->msg_len_to_cipher_in_bits >> 3) +
+				(job->cipher_start_src_offset_in_bits >> 3);
+	else
+		cipher_len = job->msg_len_to_cipher_in_bytes +
+				job->cipher_start_src_offset_in_bytes;
+
+	if (job->hash_alg == IMB_AUTH_SNOW3G_UIA2_BITLEN ||
+			job->hash_alg == IMB_AUTH_ZUC_EIA3_BITLEN)
+		auth_len = (job->msg_len_to_hash_in_bits >> 3) +
+				job->hash_start_src_offset_in_bytes;
+	else if (job->hash_alg == IMB_AUTH_AES_GMAC)
+		auth_len = job->u.GCM.aad_len_in_bytes;
+	else
+		auth_len = job->msg_len_to_hash_in_bytes +
+				job->hash_start_src_offset_in_bytes;
+
+	total_len = RTE_MAX(auth_len, cipher_len) + job->auth_tag_output_len_in_bytes;
+	linear_buf = rte_zmalloc(NULL, total_len, 0);
+	if (linear_buf == NULL) {
+		IPSEC_MB_LOG(ERR, "Error allocating memory for SGL Linear Buffer\n");
+		return -1;
+	}
+
+	for (src_seg = op->sym->m_src; (src_seg != NULL) &&
+			(total_len - lb_offset > 0);
+			src_seg = src_seg->next) {
+		src = rte_pktmbuf_mtod(src_seg, uint8_t *);
+		src_len = RTE_MIN(src_seg->data_len, total_len - lb_offset);
+		rte_memcpy(linear_buf + lb_offset, src, src_len);
+		lb_offset += src_len;
+	}
+
+	job->src = linear_buf;
+	job->dst = linear_buf + dst_offset;
+	job->user_data2 = linear_buf;
+
+	if (job->hash_alg == IMB_AUTH_AES_GMAC)
+		job->u.GCM.aad = linear_buf;
+
+	if (session->auth.operation == RTE_CRYPTO_AUTH_OP_VERIFY)
+		job->auth_tag_output = linear_buf + lb_offset;
+	else
+		job->auth_tag_output = linear_buf + auth_len;
+
+	return 0;
+}
+
+static inline int
+imb_lib_support_sgl_algo(IMB_CIPHER_MODE alg)
+{
+	if (alg == IMB_CIPHER_CHACHA20_POLY1305
+			|| alg == IMB_CIPHER_GCM)
+		return 1;
+	return 0;
+}
 
 /**
  * Process a crypto operation and complete a IMB_JOB job structure for
@@ -1171,7 +1238,8 @@ handle_aead_sgl_job(IMB_JOB *job, IMB_MGR *mb_mgr,
 *
 * @return
 * - 0 on success, the IMB_JOB will be filled
-* - -1 if invalid session, IMB_JOB will not be filled
+* - -1 if invalid session or error allocating SGL linear buffer,
+*   IMB_JOB will not be filled
 */
 static inline int
 set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
@@ -1191,6 +1259,7 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
 	uint32_t total_len;
 	IMB_JOB base_job;
 	uint8_t sgl = 0;
+	uint8_t lb_sgl = 0;
 	int ret;
 
 	session = ipsec_mb_get_session_private(qp, op);
@@ -1199,18 +1268,6 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
 		return -1;
 	}
 
-	if (op->sym->m_src->nb_segs > 1) {
-		if (session->cipher.mode != IMB_CIPHER_GCM
-				&& session->cipher.mode !=
-				IMB_CIPHER_CHACHA20_POLY1305) {
-			op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
-			IPSEC_MB_LOG(ERR, "Device only supports SGL for AES-GCM"
-					" or CHACHA20_POLY1305 algorithms.");
-			return -1;
-		}
-		sgl = 1;
-	}
-
 	/* Set crypto operation */
 	job->chain_order = session->chain_order;
 
@@ -1233,6 +1290,26 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
 		job->dec_keys = session->cipher.expanded_aes_keys.decode;
 	}
 
+	if (!op->sym->m_dst) {
+		/* in-place operation */
+		m_dst = m_src;
+		oop = 0;
+	} else if (op->sym->m_dst == op->sym->m_src) {
+		/* in-place operation */
+		m_dst = m_src;
+		oop = 0;
+	} else {
+		/* out-of-place operation */
+		m_dst = op->sym->m_dst;
+		oop = 1;
+	}
+
+	if (m_src->nb_segs > 1 || m_dst->nb_segs > 1) {
+		sgl = 1;
+		if (!imb_lib_support_sgl_algo(session->cipher.mode))
+			lb_sgl = 1;
+	}
+
 	switch (job->hash_alg) {
 	case IMB_AUTH_AES_XCBC:
 		job->u.XCBC._k1_expanded = session->auth.xcbc.k1_expanded;
@@ -1331,20 +1408,6 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
 		m_offset = 0;
 	}
 
-	if (!op->sym->m_dst) {
-		/* in-place operation */
-		m_dst = m_src;
-		oop = 0;
-	} else if (op->sym->m_dst == op->sym->m_src) {
-		/* in-place operation */
-		m_dst = m_src;
-		oop = 0;
-	} else {
-		/* out-of-place operation */
-		m_dst = op->sym->m_dst;
-		oop = 1;
-	}
-
 	/* Set digest output location */
 	if (job->hash_alg != IMB_AUTH_NULL &&
 			session->auth.operation == RTE_CRYPTO_AUTH_OP_VERIFY) {
@@ -1435,7 +1498,7 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
 		job->hash_start_src_offset_in_bytes = auth_start_offset(op,
 				session, oop, auth_off_in_bytes,
 				ciph_off_in_bytes, auth_len_in_bytes,
-				ciph_len_in_bytes);
+				ciph_len_in_bytes, lb_sgl);
 		job->msg_len_to_hash_in_bits = op->sym->auth.data.length;
 
 		job->iv = rte_crypto_op_ctod_offset(op, uint8_t *,
@@ -1452,7 +1515,7 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
 		job->hash_start_src_offset_in_bytes = auth_start_offset(op,
 				session, oop, auth_off_in_bytes,
 				ciph_off_in_bytes, auth_len_in_bytes,
-				ciph_len_in_bytes);
+				ciph_len_in_bytes, lb_sgl);
 		job->msg_len_to_hash_in_bytes = auth_len_in_bytes;
 
 		job->iv = rte_crypto_op_ctod_offset(op, uint8_t *,
@@ -1464,7 +1527,7 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
 				session, oop, op->sym->auth.data.offset,
 				op->sym->cipher.data.offset,
 				op->sym->auth.data.length,
-				op->sym->cipher.data.length);
+				op->sym->cipher.data.length, lb_sgl);
 		job->msg_len_to_hash_in_bytes = op->sym->auth.data.length;
 
 		job->iv = rte_crypto_op_ctod_offset(op, uint8_t *,
@@ -1525,6 +1588,10 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
 	job->user_data = op;
 
 	if (sgl) {
+
+		if (lb_sgl)
+			return handle_sgl_linear(job, op, m_offset, session);
+
 		base_job = *job;
 		job->sgl_state = IMB_SGL_INIT;
 		job = IMB_SUBMIT_JOB(mb_mgr);
@@ -1695,6 +1762,49 @@ generate_digest(IMB_JOB *job, struct rte_crypto_op *op,
 			sess->auth.req_digest_len);
 }
 
+static void
+post_process_sgl_linear(struct rte_crypto_op *op, IMB_JOB *job,
+		struct aesni_mb_session *sess, uint8_t *linear_buf)
+{
+
+	int lb_offset = 0;
+	struct rte_mbuf *m_dst = op->sym->m_dst == NULL ?
+			op->sym->m_src : op->sym->m_dst;
+	uint16_t total_len, dst_len;
+	uint64_t cipher_len, auth_len;
+	uint8_t *dst;
+
+	if (job->cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN ||
+			job->cipher_mode == IMB_CIPHER_KASUMI_UEA1_BITLEN)
+		cipher_len = (job->msg_len_to_cipher_in_bits >> 3) +
+				(job->cipher_start_src_offset_in_bits >> 3);
+	else
+		cipher_len = job->msg_len_to_cipher_in_bytes +
+				job->cipher_start_src_offset_in_bytes;
+
+	if (job->hash_alg == IMB_AUTH_SNOW3G_UIA2_BITLEN ||
+			job->hash_alg == IMB_AUTH_ZUC_EIA3_BITLEN)
+		auth_len = (job->msg_len_to_hash_in_bits >> 3) +
+				job->hash_start_src_offset_in_bytes;
+	else if (job->hash_alg == IMB_AUTH_AES_GMAC)
+		auth_len = job->u.GCM.aad_len_in_bytes;
+	else
+		auth_len = job->msg_len_to_hash_in_bytes +
+				job->hash_start_src_offset_in_bytes;
+
+	total_len = RTE_MAX(auth_len, cipher_len);
+
+	if (sess->auth.operation != RTE_CRYPTO_AUTH_OP_VERIFY)
+		total_len += job->auth_tag_output_len_in_bytes;
+
+	for (; (m_dst != NULL) && (total_len - lb_offset > 0);
+			m_dst = m_dst->next) {
+		dst = rte_pktmbuf_mtod(m_dst, uint8_t *);
+		dst_len = RTE_MIN(m_dst->data_len, total_len - lb_offset);
+		rte_memcpy(dst, linear_buf + lb_offset, dst_len);
+		lb_offset += dst_len;
+	}
+}
+
 /**
  * Process a completed job and return rte_mbuf which job processed
  *
@@ -1712,6 +1822,7 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
 	struct aesni_mb_session *sess = NULL;
 	uint32_t driver_id = ipsec_mb_get_driver_id(
 			IPSEC_MB_PMD_TYPE_AESNI_MB);
+	uint8_t *linear_buf = NULL;
 
 #ifdef AESNI_MB_DOCSIS_SEC_ENABLED
 	uint8_t is_docsis_sec = 0;
@@ -1740,6 +1851,14 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
 	case IMB_STATUS_COMPLETED:
 		op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
 
+		if ((op->sym->m_src->nb_segs > 1 ||
+				(op->sym->m_dst != NULL &&
+				op->sym->m_dst->nb_segs > 1)) &&
+				!imb_lib_support_sgl_algo(sess->cipher.mode)) {
+			linear_buf = (uint8_t *) job->user_data2;
+			post_process_sgl_linear(op, job, sess, linear_buf);
+		}
+
 		if (job->hash_alg == IMB_AUTH_NULL)
 			break;
 
@@ -1766,6 +1885,7 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
 	default:
 		op->status = RTE_CRYPTO_OP_STATUS_ERROR;
 	}
+	rte_free(linear_buf);
 }
 
 /* Free session if a session-less crypto op */
@@ -2252,7 +2372,11 @@ RTE_INIT(ipsec_mb_register_aesni_mb)
 			RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
 			RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO |
 			RTE_CRYPTODEV_FF_NON_BYTE_ALIGNED_DATA |
-			RTE_CRYPTODEV_FF_SYM_SESSIONLESS;
+			RTE_CRYPTODEV_FF_SYM_SESSIONLESS |
+			RTE_CRYPTODEV_FF_IN_PLACE_SGL |
+			RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
+			RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
+			RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT;
 
 	aesni_mb_data->internals_priv_size = 0;
 	aesni_mb_data->ops = &aesni_mb_pmd_ops;
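Since the gather/scatter logic in the patch above is spread across two
hunks, here is the same round trip in a compact standalone form. The seg
struct and the XOR "cipher" are toy stand-ins for the rte_mbuf chain and
the intel-ipsec-mb job; gather() mirrors handle_sgl_linear()'s copy-in
loop and scatter() mirrors post_process_sgl_linear()'s copy-out loop.
Nothing here is the PMD's actual API.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct seg {
	uint8_t *data;
	size_t len;
	struct seg *next;
};

/* Copy-in: flatten the segment chain into one linear buffer. */
static size_t gather(const struct seg *s, uint8_t *flat, size_t max)
{
	size_t off = 0;

	for (; s != NULL && off < max; s = s->next) {
		size_t n = s->len < max - off ? s->len : max - off;

		memcpy(flat + off, s->data, n);
		off += n;
	}
	return off;
}

/* Copy-out: split the processed linear buffer back over the segments. */
static void scatter(struct seg *s, const uint8_t *flat, size_t total)
{
	size_t off = 0;

	for (; s != NULL && off < total; s = s->next) {
		size_t n = s->len < total - off ? s->len : total - off;

		memcpy(s->data, flat + off, n);
		off += n;
	}
}

int main(void)
{
	uint8_t a[3] = "abc", b[2] = "de";
	struct seg s2 = { b, sizeof(b), NULL };
	struct seg s1 = { a, sizeof(a), &s2 };
	uint8_t flat[5];
	size_t i, total = gather(&s1, flat, sizeof(flat));

	for (i = 0; i < total; i++)	/* stand-in for the single IMB job */
		flat[i] ^= 0x20;

	scatter(&s1, flat, total);	/* in-place: dst chain == src chain */
	printf("%.3s%.2s\n", (char *)a, (char *)b);	/* prints ABCDE */
	return 0;
}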
From patchwork Fri Aug 12 13:23:34 2022
X-Patchwork-Submitter: "Power, Ciara"
X-Patchwork-Id: 114923
X-Patchwork-Delegate: gakhil@marvell.com
From: Ciara Power
To: Akhil Goyal, Fan Zhang
Cc: dev@dpdk.org, kai.ji@intel.com, pablo.de.lara.guarch@intel.com, Ciara Power
Subject: [PATCH 3/3] test/crypto: add OOP snow3g SGL tests
Date: Fri, 12 Aug 2022 13:23:34 +0000
Message-Id: <20220812132334.75707-4-ciara.power@intel.com>
In-Reply-To: <20220812132334.75707-1-ciara.power@intel.com>
References: <20220812132334.75707-1-ciara.power@intel.com>
List-Id: DPDK patches and discussions

More tests are added to cover variations of OOP SGL for snow3g,
namely LB_IN_SGL_OUT and SGL_IN_LB_OUT.
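The mapping from the new (sgl_in, sgl_out) test parameters to the
feature flag a device must advertise can be summarized in a small
decision function. A minimal sketch: the flag names below are shortened
from the RTE_CRYPTODEV_FF_* names, and the bit values are illustrative
placeholders, not the real definitions from rte_cryptodev.h.

#include <stdint.h>
#include <stdio.h>

/* Illustrative placeholder values; the real definitions live in
 * rte_cryptodev.h. */
#define FF_OOP_SGL_IN_SGL_OUT	(1ULL << 0)
#define FF_OOP_SGL_IN_LB_OUT	(1ULL << 1)
#define FF_OOP_LB_IN_SGL_OUT	(1ULL << 2)

/* Which capability the device must report for a given buffer layout. */
static uint64_t required_flag(int sgl_in, int sgl_out)
{
	if (sgl_in && sgl_out)
		return FF_OOP_SGL_IN_SGL_OUT;
	if (sgl_in)
		return FF_OOP_SGL_IN_LB_OUT;
	if (sgl_out)
		return FF_OOP_LB_IN_SGL_OUT;
	return 0;	/* plain LB in / LB out needs no SGL flag */
}

int main(void)
{
	/* Example device that only supports segmented input. */
	uint64_t feat_flags = FF_OOP_SGL_IN_LB_OUT;
	int sgl_in = 1, sgl_out = 0;
	uint64_t need = required_flag(sgl_in, sgl_out);

	if (need != 0 && !(feat_flags & need))
		printf("Test Skipped.\n");
	else
		printf("running SGL_IN_LB_OUT case\n");
	return 0;
}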
Signed-off-by: Ciara Power
---
 app/test/test_cryptodev.c | 48 +++++++++++++++++++++++++++++++--------
 1 file changed, 39 insertions(+), 9 deletions(-)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index e6925b6531..83860d1853 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -4347,7 +4347,8 @@ test_snow3g_encryption_oop(const struct snow3g_test_data *tdata)
 }
 
 static int
-test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata)
+test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata,
+		uint8_t sgl_in, uint8_t sgl_out)
 {
 	struct crypto_testsuite_params *ts_params = &testsuite_params;
 	struct crypto_unittest_params *ut_params = &unittest_params;
@@ -4378,9 +4379,12 @@ test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata)
 
 	uint64_t feat_flags = dev_info.feature_flags;
 
-	if (!(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT)) {
-		printf("Device doesn't support out-of-place scatter-gather "
-				"in both input and output mbufs. "
+	if (((sgl_in && sgl_out) && !(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT))
+			|| ((!sgl_in && sgl_out) &&
+			!(feat_flags & RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT))
+			|| ((sgl_in && !sgl_out) &&
+			!(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT))) {
+		printf("Device doesn't support out-of-place scatter gather type. "
 				"Test Skipped.\n");
 		return TEST_SKIPPED;
 	}
@@ -4405,10 +4409,21 @@ test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata)
 	/* the algorithms block size */
 	plaintext_pad_len = RTE_ALIGN_CEIL(plaintext_len, 16);
 
-	ut_params->ibuf = create_segmented_mbuf(ts_params->mbuf_pool,
-			plaintext_pad_len, 10, 0);
-	ut_params->obuf = create_segmented_mbuf(ts_params->mbuf_pool,
-			plaintext_pad_len, 3, 0);
+	if (sgl_in)
+		ut_params->ibuf = create_segmented_mbuf(ts_params->mbuf_pool,
+				plaintext_pad_len, 10, 0);
+	else {
+		ut_params->ibuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+		rte_pktmbuf_append(ut_params->ibuf, plaintext_pad_len);
+	}
+
+	if (sgl_out)
+		ut_params->obuf = create_segmented_mbuf(ts_params->mbuf_pool,
+				plaintext_pad_len, 3, 0);
+	else {
+		ut_params->obuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+		rte_pktmbuf_append(ut_params->obuf, plaintext_pad_len);
+	}
 
 	TEST_ASSERT_NOT_NULL(ut_params->ibuf,
 			"Failed to allocate input buffer in mempool");
@@ -6762,9 +6777,20 @@ test_snow3g_encryption_test_case_1_oop(void)
 static int
 test_snow3g_encryption_test_case_1_oop_sgl(void)
 {
-	return test_snow3g_encryption_oop_sgl(&snow3g_test_case_1);
+	return test_snow3g_encryption_oop_sgl(&snow3g_test_case_1, 1, 1);
+}
+
+static int
+test_snow3g_encryption_test_case_1_oop_lb_in_sgl_out(void)
+{
+	return test_snow3g_encryption_oop_sgl(&snow3g_test_case_1, 0, 1);
 }
 
+static int
+test_snow3g_encryption_test_case_1_oop_sgl_in_lb_out(void)
+{
+	return test_snow3g_encryption_oop_sgl(&snow3g_test_case_1, 1, 0);
+}
 
 static int
 test_snow3g_encryption_test_case_1_offset_oop(void)
@@ -15985,6 +16011,10 @@ static struct unit_test_suite cryptodev_snow3g_testsuite = {
 			test_snow3g_encryption_test_case_1_oop),
 		TEST_CASE_ST(ut_setup, ut_teardown,
 			test_snow3g_encryption_test_case_1_oop_sgl),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_snow3g_encryption_test_case_1_oop_lb_in_sgl_out),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_snow3g_encryption_test_case_1_oop_sgl_in_lb_out),
 		TEST_CASE_ST(ut_setup, ut_teardown,
 			test_snow3g_encryption_test_case_1_offset_oop),
 		TEST_CASE_ST(ut_setup, ut_teardown,