From patchwork Thu Aug 25 14:28:57 2022
X-Patchwork-Submitter: "Power, Ciara"
X-Patchwork-Id: 115411
X-Patchwork-Delegate: gakhil@marvell.com
From: Ciara Power
To: Akhil Goyal, Fan Zhang
Cc: dev@dpdk.org, kai.ji@intel.com, pablo.de.lara.guarch@intel.com, Ciara Power
Subject: [PATCH v2 1/5] test/crypto: fix wireless auth digest segment
Date: Thu, 25 Aug 2022 14:28:57 +0000
Message-Id: <20220825142901.898007-2-ciara.power@intel.com>
In-Reply-To: <20220825142901.898007-1-ciara.power@intel.com>
References: <20220812132334.75707-1-ciara.power@intel.com> <20220825142901.898007-1-ciara.power@intel.com>

The segment size for some tests was too small to hold the auth digest. This caused issues when using op->sym->auth.digest.data for comparisons in AESNI_MB PMD after a subsequent patch enables SGL. For example, if segment size is 2, and digest size is 4, then 4 bytes are read from op->sym->auth.digest.data, which overflows into the memory after the segment, rather than using the second segment that contains the remaining half of the digest.
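As a minimal illustration of the hazard described above (not part of this patch; the helper name and the 64-byte bound are assumptions), a digest that straddles two segments cannot be checked through a single rte_pktmbuf_mtod_offset() pointer, but rte_pktmbuf_read() can gather the bytes across segments:

    #include <string.h>
    #include <rte_mbuf.h>

    /* Hypothetical helper: compare a digest that may straddle mbuf segments. */
    static int
    digest_matches(const struct rte_mbuf *m, uint32_t digest_off,
		    const uint8_t *expected, uint16_t auth_tag_len)
    {
	    uint8_t tmp[64]; /* assumes auth_tag_len <= 64 */
	    const void *d;

	    /* Returns a direct pointer when the range is contiguous in one
	     * segment, otherwise copies the spanning bytes into tmp. */
	    d = rte_pktmbuf_read(m, digest_off, auth_tag_len, tmp);
	    return d != NULL && memcmp(d, expected, auth_tag_len) == 0;
    }

The patch below takes a different approach: it grows the last segment so that digest.data stays contiguous, because the PMD dereferences that pointer directly.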
Fixes: 11c5485bb276 ("test/crypto: add scatter-gather tests for IP and OOP") Signed-off-by: Ciara Power --- app/test/test_cryptodev.c | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c index 69a0301de0..e6925b6531 100644 --- a/app/test/test_cryptodev.c +++ b/app/test/test_cryptodev.c @@ -3040,6 +3040,14 @@ create_wireless_algo_auth_cipher_operation( remaining_off -= rte_pktmbuf_data_len(sgl_buf); sgl_buf = sgl_buf->next; } + + /* The last segment should be large enough to hold full digest */ + if (sgl_buf->data_len < auth_tag_len) { + rte_pktmbuf_free(sgl_buf->next); + sgl_buf->next = NULL; + rte_pktmbuf_append(sgl_buf, auth_tag_len - sgl_buf->data_len); + } + sym_op->auth.digest.data = rte_pktmbuf_mtod_offset(sgl_buf, uint8_t *, remaining_off); sym_op->auth.digest.phys_addr = rte_pktmbuf_iova_offset(sgl_buf, From patchwork Thu Aug 25 14:28:58 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Power, Ciara" X-Patchwork-Id: 115412 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 3AF34A0547; Thu, 25 Aug 2022 16:29:18 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 986C342825; Thu, 25 Aug 2022 16:29:13 +0200 (CEST) Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by mails.dpdk.org (Postfix) with ESMTP id 4B5B8415D7 for ; Thu, 25 Aug 2022 16:29:10 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1661437750; x=1692973750; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=gEE83kA/YeZoj2V3JyZ2CAwTOKnq0REn3pAB0+QdrYw=; b=dJKqMxZAZbqrwbUEpXHorfUJ/B/b70fXc/wPXsx3723FkuuiKX58RR/5 SH1pQtwyX4VHUzJE34WFwOaVVbCWU+sbOqmAg+roYaZ31xKiWlFuJjnBT OQf6RohoLizkQialO16cp6BZWwtHpfiNguEAPf0LkvC9C2mR9/6dUPLxL cCKwNM1U36hUKLGJcHE93veiUUv7GQSowPI/zPLWJ9dXDDrX475gQ+kD4 m91j14NVHuuOvmPyUAm0tVFUPhpF4rNiNZyX7zrQY5yx2sj2ukgYip1PQ 7b5cq9sN2UOL6zl+l0+SPffXrn9EoPSZxEbOjdQWcJsbsV85vYb1AW/8n g==; X-IronPort-AV: E=McAfee;i="6500,9779,10450"; a="358216617" X-IronPort-AV: E=Sophos;i="5.93,263,1654585200"; d="scan'208";a="358216617" Received: from orsmga003.jf.intel.com ([10.7.209.27]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Aug 2022 07:29:09 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.93,263,1654585200"; d="scan'208";a="561043262" Received: from silpixa00400355.ir.intel.com (HELO silpixa00400355.ger.corp.intel.com) ([10.237.222.49]) by orsmga003.jf.intel.com with ESMTP; 25 Aug 2022 07:29:08 -0700 From: Ciara Power To: Fan Zhang , Pablo de Lara Cc: dev@dpdk.org, kai.ji@intel.com, Ciara Power , slawomirx.mrozowicz@intel.com Subject: [PATCH v2 2/5] crypto/ipsec_mb: fix sessionless cleanup Date: Thu, 25 Aug 2022 14:28:58 +0000 Message-Id: <20220825142901.898007-3-ciara.power@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20220825142901.898007-1-ciara.power@intel.com> References: <20220812132334.75707-1-ciara.power@intel.com> <20220825142901.898007-1-ciara.power@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , 
Errors-To: dev-bounces@dpdk.org Currently, for a sessionless op, the session created is reset before being put back into the mempool. This causes issues as the object isn't correctly released into the mempool. Fixes: c68d7aa354f6 ("crypto/aesni_mb: use architecture independent macros") Fixes: b3bbd9e5f265 ("cryptodev: support device independent sessions") Fixes: f16662885472 ("crypto/ipsec_mb: add chacha_poly PMD") Cc: roy.fan.zhang@intel.com Cc: slawomirx.mrozowicz@intel.com Cc: kai.ji@intel.com Signed-off-by: Ciara Power --- drivers/crypto/ipsec_mb/pmd_aesni_mb.c | 4 ---- drivers/crypto/ipsec_mb/pmd_chacha_poly.c | 4 ---- drivers/crypto/ipsec_mb/pmd_kasumi.c | 5 ----- drivers/crypto/ipsec_mb/pmd_snow3g.c | 4 ---- drivers/crypto/ipsec_mb/pmd_zuc.c | 4 ---- 5 files changed, 21 deletions(-) diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c index 6d5d3ce8eb..944fce0261 100644 --- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c +++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c @@ -1770,10 +1770,6 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job) /* Free session if a session-less crypto op */ if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS) { - memset(sess, 0, sizeof(struct aesni_mb_session)); - memset(op->sym->session, 0, - rte_cryptodev_sym_get_existing_header_session_size( - op->sym->session)); rte_mempool_put(qp->sess_mp_priv, sess); rte_mempool_put(qp->sess_mp, op->sym->session); op->sym->session = NULL; diff --git a/drivers/crypto/ipsec_mb/pmd_chacha_poly.c b/drivers/crypto/ipsec_mb/pmd_chacha_poly.c index d953d6e5f5..31397b6395 100644 --- a/drivers/crypto/ipsec_mb/pmd_chacha_poly.c +++ b/drivers/crypto/ipsec_mb/pmd_chacha_poly.c @@ -289,10 +289,6 @@ handle_completed_chacha20_poly1305_crypto_op(struct ipsec_mb_qp *qp, /* Free session if a session-less crypto op */ if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS) { - memset(sess, 0, sizeof(struct chacha20_poly1305_session)); - memset(op->sym->session, 0, - rte_cryptodev_sym_get_existing_header_session_size( - op->sym->session)); rte_mempool_put(qp->sess_mp_priv, sess); rte_mempool_put(qp->sess_mp, op->sym->session); op->sym->session = NULL; diff --git a/drivers/crypto/ipsec_mb/pmd_kasumi.c b/drivers/crypto/ipsec_mb/pmd_kasumi.c index c9d4f9d0ae..de37e012bd 100644 --- a/drivers/crypto/ipsec_mb/pmd_kasumi.c +++ b/drivers/crypto/ipsec_mb/pmd_kasumi.c @@ -230,11 +230,6 @@ process_ops(struct rte_crypto_op **ops, struct kasumi_session *session, ops[i]->status = RTE_CRYPTO_OP_STATUS_SUCCESS; /* Free session if a session-less crypto op. */ if (ops[i]->sess_type == RTE_CRYPTO_OP_SESSIONLESS) { - memset(session, 0, sizeof(struct kasumi_session)); - memset( - ops[i]->sym->session, 0, - rte_cryptodev_sym_get_existing_header_session_size( - ops[i]->sym->session)); rte_mempool_put(qp->sess_mp_priv, session); rte_mempool_put(qp->sess_mp, ops[i]->sym->session); ops[i]->sym->session = NULL; diff --git a/drivers/crypto/ipsec_mb/pmd_snow3g.c b/drivers/crypto/ipsec_mb/pmd_snow3g.c index 9a85f46721..1634c54fb7 100644 --- a/drivers/crypto/ipsec_mb/pmd_snow3g.c +++ b/drivers/crypto/ipsec_mb/pmd_snow3g.c @@ -361,10 +361,6 @@ process_ops(struct rte_crypto_op **ops, struct snow3g_session *session, ops[i]->status = RTE_CRYPTO_OP_STATUS_SUCCESS; /* Free session if a session-less crypto op. 
*/ if (ops[i]->sess_type == RTE_CRYPTO_OP_SESSIONLESS) { - memset(session, 0, sizeof(struct snow3g_session)); - memset(ops[i]->sym->session, 0, - rte_cryptodev_sym_get_existing_header_session_size( - ops[i]->sym->session)); rte_mempool_put(qp->sess_mp_priv, session); rte_mempool_put(qp->sess_mp, ops[i]->sym->session); ops[i]->sym->session = NULL; diff --git a/drivers/crypto/ipsec_mb/pmd_zuc.c b/drivers/crypto/ipsec_mb/pmd_zuc.c index e36c7092d6..564ca3457c 100644 --- a/drivers/crypto/ipsec_mb/pmd_zuc.c +++ b/drivers/crypto/ipsec_mb/pmd_zuc.c @@ -238,10 +238,6 @@ process_ops(struct rte_crypto_op **ops, enum ipsec_mb_operation op_type, ops[i]->status = RTE_CRYPTO_OP_STATUS_SUCCESS; /* Free session if a session-less crypto op. */ if (ops[i]->sess_type == RTE_CRYPTO_OP_SESSIONLESS) { - memset(sessions[i], 0, sizeof(struct zuc_session)); - memset(ops[i]->sym->session, 0, - rte_cryptodev_sym_get_existing_header_session_size( - ops[i]->sym->session)); rte_mempool_put(qp->sess_mp_priv, sessions[i]); rte_mempool_put(qp->sess_mp, ops[i]->sym->session); ops[i]->sym->session = NULL; From patchwork Thu Aug 25 14:28:59 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Power, Ciara" X-Patchwork-Id: 115413 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 9DAB9A0547; Thu, 25 Aug 2022 16:29:24 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id E4F544282F; Thu, 25 Aug 2022 16:29:15 +0200 (CEST) Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by mails.dpdk.org (Postfix) with ESMTP id 7B98B4280C for ; Thu, 25 Aug 2022 16:29:12 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1661437752; x=1692973752; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=FL6msSPk3ESdo6Kmqj4UO3/lndY+uvLJpfpl0dM0Ibo=; b=S5IsvVz9Tq5+xkm2Er2Ny0BvNPuJqcrlyWcgDfAIOIcQDaSH0DJ6JMBw kqx+VRokMftWHoyr1XD17gL4/eOkgfSoP1nzp5SmzsQcjXzNFUrX9qikU tnu+Mq4uEXqI/xGlKwZguX5uWxDFSn4puAKvx5R/qL1jVh8fKMol97VoY kLXJd9JGXQ0nVTuSzF1sRgSoombcICkOshi5PmkCTigPOcWlfwmUAFucx +buoUiZH8l+oBxfrHlJ8hC1IFMqeIWapQtVESiANbD1Zx6TTAhiLSwwVJ i+hP+xj/ynSYodDQ0m0kUd4LXyBJYsu+d2Vbz2QH2lNQe7/adYDk3s9GG Q==; X-IronPort-AV: E=McAfee;i="6500,9779,10450"; a="358216625" X-IronPort-AV: E=Sophos;i="5.93,263,1654585200"; d="scan'208";a="358216625" Received: from orsmga003.jf.intel.com ([10.7.209.27]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Aug 2022 07:29:12 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.93,263,1654585200"; d="scan'208";a="561043274" Received: from silpixa00400355.ir.intel.com (HELO silpixa00400355.ger.corp.intel.com) ([10.237.222.49]) by orsmga003.jf.intel.com with ESMTP; 25 Aug 2022 07:29:10 -0700 From: Ciara Power To: Fan Zhang , Pablo de Lara Cc: dev@dpdk.org, kai.ji@intel.com, Ciara Power Subject: [PATCH v2 3/5] crypto/ipsec_mb: add remaining SGL support Date: Thu, 25 Aug 2022 14:28:59 +0000 Message-Id: <20220825142901.898007-4-ciara.power@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20220825142901.898007-1-ciara.power@intel.com> References: <20220812132334.75707-1-ciara.power@intel.com> <20220825142901.898007-1-ciara.power@intel.com> MIME-Version: 1.0 X-BeenThere: 
dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org The intel-ipsec-mb library supports SGL for GCM and ChaChaPoly algorithms using the JOB API. This support was added to AESNI_MB PMD previously, but the SGL feature flags could not be added due to no SGL support for other algorithms. This patch adds a workaround SGL approach for other algorithms using the JOB API. The segmented input buffers are copied into a linear buffer, which is passed as a single job to intel-ipsec-mb. The job is processed, and on return, the linear buffer is split into the original destination segments. Existing AESNI_MB testcases are passing with these feature flags added. Signed-off-by: Ciara Power --- v2: - Small improvements when copying segments to linear buffer. - Added documentation changes. --- doc/guides/cryptodevs/aesni_mb.rst | 1 - doc/guides/cryptodevs/features/aesni_mb.ini | 4 + doc/guides/rel_notes/release_22_11.rst | 4 + drivers/crypto/ipsec_mb/pmd_aesni_mb.c | 191 ++++++++++++++++---- 4 files changed, 166 insertions(+), 34 deletions(-) diff --git a/doc/guides/cryptodevs/aesni_mb.rst b/doc/guides/cryptodevs/aesni_mb.rst index 07222ee117..59c134556f 100644 --- a/doc/guides/cryptodevs/aesni_mb.rst +++ b/doc/guides/cryptodevs/aesni_mb.rst @@ -72,7 +72,6 @@ Protocol offloads: Limitations ----------- -* Chained mbufs are not supported. * Out-of-place is not supported for combined Crypto-CRC DOCSIS security protocol. * RTE_CRYPTO_CIPHER_DES_DOCSISBPI is not supported for combined Crypto-CRC diff --git a/doc/guides/cryptodevs/features/aesni_mb.ini b/doc/guides/cryptodevs/features/aesni_mb.ini index 3c648a391e..e4e965c35a 100644 --- a/doc/guides/cryptodevs/features/aesni_mb.ini +++ b/doc/guides/cryptodevs/features/aesni_mb.ini @@ -12,6 +12,10 @@ CPU AVX = Y CPU AVX2 = Y CPU AVX512 = Y CPU AESNI = Y +In Place SGL = Y +OOP SGL In SGL Out = Y +OOP SGL In LB Out = Y +OOP LB In SGL Out = Y OOP LB In LB Out = Y CPU crypto = Y Symmetric sessionless = Y diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst index 8c021cf050..6416f0a4e1 100644 --- a/doc/guides/rel_notes/release_22_11.rst +++ b/doc/guides/rel_notes/release_22_11.rst @@ -55,6 +55,10 @@ New Features Also, make sure to start the actual text at the margin. ======================================================= +* **Added SGL support to AESNI_MB PMD.** + + Added support for SGL to AESNI_MB PMD. Support for inplace, + OOP SGL in SGL out, OOP LB in SGL out, and OOP SGL in LB out added. Removed Items ------------- diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c index 944fce0261..800a9ae72c 100644 --- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c +++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c @@ -937,7 +937,7 @@ static inline uint64_t auth_start_offset(struct rte_crypto_op *op, struct aesni_mb_session *session, uint32_t oop, const uint32_t auth_offset, const uint32_t cipher_offset, const uint32_t auth_length, - const uint32_t cipher_length) + const uint32_t cipher_length, uint8_t lb_sgl) { struct rte_mbuf *m_src, *m_dst; uint8_t *p_src, *p_dst; @@ -945,7 +945,7 @@ auth_start_offset(struct rte_crypto_op *op, struct aesni_mb_session *session, uint32_t cipher_end, auth_end; /* Only cipher then hash needs special calculation. 
*/ - if (!oop || session->chain_order != IMB_ORDER_CIPHER_HASH) + if (!oop || session->chain_order != IMB_ORDER_CIPHER_HASH || lb_sgl) return auth_offset; m_src = op->sym->m_src; @@ -1159,6 +1159,74 @@ handle_aead_sgl_job(IMB_JOB *job, IMB_MGR *mb_mgr, return 0; } +static int +handle_sgl_linear(IMB_JOB *job, struct rte_crypto_op *op, uint32_t dst_offset, + struct aesni_mb_session *session) +{ + uint64_t cipher_len, auth_len; + uint8_t *src, *linear_buf = NULL; + int total_len; + int lb_offset = 0; + struct rte_mbuf *src_seg; + uint16_t src_len; + + if (job->cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN || + job->cipher_mode == IMB_CIPHER_KASUMI_UEA1_BITLEN) + cipher_len = (job->msg_len_to_cipher_in_bits >> 3) + + (job->cipher_start_src_offset_in_bits >> 3); + else + cipher_len = job->msg_len_to_cipher_in_bytes + + job->cipher_start_src_offset_in_bytes; + + if (job->hash_alg == IMB_AUTH_SNOW3G_UIA2_BITLEN || + job->hash_alg == IMB_AUTH_ZUC_EIA3_BITLEN) + auth_len = (job->msg_len_to_hash_in_bits >> 3) + + job->hash_start_src_offset_in_bytes; + else if (job->hash_alg == IMB_AUTH_AES_GMAC) + auth_len = job->u.GCM.aad_len_in_bytes; + else + auth_len = job->msg_len_to_hash_in_bytes + + job->hash_start_src_offset_in_bytes; + + total_len = RTE_MAX(auth_len, cipher_len); + linear_buf = rte_zmalloc(NULL, total_len + job->auth_tag_output_len_in_bytes, 0); + if (linear_buf == NULL) { + IPSEC_MB_LOG(ERR, "Error allocating memory for SGL Linear Buffer\n"); + return -1; + } + + for (src_seg = op->sym->m_src; (src_seg != NULL) && + (total_len - lb_offset > 0); + src_seg = src_seg->next) { + src = rte_pktmbuf_mtod(src_seg, uint8_t *); + src_len = RTE_MIN(src_seg->data_len, total_len - lb_offset); + rte_memcpy(linear_buf + lb_offset, src, src_len); + lb_offset += src_len; + } + + job->src = linear_buf; + job->dst = linear_buf + dst_offset; + job->user_data2 = linear_buf; + + if (job->hash_alg == IMB_AUTH_AES_GMAC) + job->u.GCM.aad = linear_buf; + + if (session->auth.operation == RTE_CRYPTO_AUTH_OP_VERIFY) + job->auth_tag_output = linear_buf + lb_offset; + else + job->auth_tag_output = linear_buf + auth_len; + + return 0; +} + +static inline int +imb_lib_support_sgl_algo(IMB_CIPHER_MODE alg) +{ + if (alg == IMB_CIPHER_CHACHA20_POLY1305 + || alg == IMB_CIPHER_GCM) + return 1; + return 0; +} /** * Process a crypto operation and complete a IMB_JOB job structure for @@ -1171,7 +1239,8 @@ handle_aead_sgl_job(IMB_JOB *job, IMB_MGR *mb_mgr, * * @return * - 0 on success, the IMB_JOB will be filled - * - -1 if invalid session, IMB_JOB will not be filled + * - -1 if invalid session or errors allocationg SGL linear buffer, + * IMB_JOB will not be filled */ static inline int set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp, @@ -1191,6 +1260,7 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp, uint32_t total_len; IMB_JOB base_job; uint8_t sgl = 0; + uint8_t lb_sgl = 0; int ret; session = ipsec_mb_get_session_private(qp, op); @@ -1199,18 +1269,6 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp, return -1; } - if (op->sym->m_src->nb_segs > 1) { - if (session->cipher.mode != IMB_CIPHER_GCM - && session->cipher.mode != - IMB_CIPHER_CHACHA20_POLY1305) { - op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS; - IPSEC_MB_LOG(ERR, "Device only supports SGL for AES-GCM" - " or CHACHA20_POLY1305 algorithms."); - return -1; - } - sgl = 1; - } - /* Set crypto operation */ job->chain_order = session->chain_order; @@ -1233,6 +1291,26 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp, job->dec_keys = 
session->cipher.expanded_aes_keys.decode; } + if (!op->sym->m_dst) { + /* in-place operation */ + m_dst = m_src; + oop = 0; + } else if (op->sym->m_dst == op->sym->m_src) { + /* in-place operation */ + m_dst = m_src; + oop = 0; + } else { + /* out-of-place operation */ + m_dst = op->sym->m_dst; + oop = 1; + } + + if (m_src->nb_segs > 1 || m_dst->nb_segs > 1) { + sgl = 1; + if (!imb_lib_support_sgl_algo(session->cipher.mode)) + lb_sgl = 1; + } + switch (job->hash_alg) { case IMB_AUTH_AES_XCBC: job->u.XCBC._k1_expanded = session->auth.xcbc.k1_expanded; @@ -1331,20 +1409,6 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp, m_offset = 0; } - if (!op->sym->m_dst) { - /* in-place operation */ - m_dst = m_src; - oop = 0; - } else if (op->sym->m_dst == op->sym->m_src) { - /* in-place operation */ - m_dst = m_src; - oop = 0; - } else { - /* out-of-place operation */ - m_dst = op->sym->m_dst; - oop = 1; - } - /* Set digest output location */ if (job->hash_alg != IMB_AUTH_NULL && session->auth.operation == RTE_CRYPTO_AUTH_OP_VERIFY) { @@ -1435,7 +1499,7 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp, job->hash_start_src_offset_in_bytes = auth_start_offset(op, session, oop, auth_off_in_bytes, ciph_off_in_bytes, auth_len_in_bytes, - ciph_len_in_bytes); + ciph_len_in_bytes, lb_sgl); job->msg_len_to_hash_in_bits = op->sym->auth.data.length; job->iv = rte_crypto_op_ctod_offset(op, uint8_t *, @@ -1452,7 +1516,7 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp, job->hash_start_src_offset_in_bytes = auth_start_offset(op, session, oop, auth_off_in_bytes, ciph_off_in_bytes, auth_len_in_bytes, - ciph_len_in_bytes); + ciph_len_in_bytes, lb_sgl); job->msg_len_to_hash_in_bytes = auth_len_in_bytes; job->iv = rte_crypto_op_ctod_offset(op, uint8_t *, @@ -1464,7 +1528,7 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp, session, oop, op->sym->auth.data.offset, op->sym->cipher.data.offset, op->sym->auth.data.length, - op->sym->cipher.data.length); + op->sym->cipher.data.length, lb_sgl); job->msg_len_to_hash_in_bytes = op->sym->auth.data.length; job->iv = rte_crypto_op_ctod_offset(op, uint8_t *, @@ -1525,6 +1589,10 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp, job->user_data = op; if (sgl) { + + if (lb_sgl) + return handle_sgl_linear(job, op, m_offset, session); + base_job = *job; job->sgl_state = IMB_SGL_INIT; job = IMB_SUBMIT_JOB(mb_mgr); @@ -1695,6 +1763,49 @@ generate_digest(IMB_JOB *job, struct rte_crypto_op *op, sess->auth.req_digest_len); } +static void +post_process_sgl_linear(struct rte_crypto_op *op, IMB_JOB *job, + struct aesni_mb_session *sess, uint8_t *linear_buf) +{ + + int lb_offset = 0; + struct rte_mbuf *m_dst = op->sym->m_dst == NULL ? 
+ op->sym->m_src : op->sym->m_dst; + uint16_t total_len, dst_len; + uint64_t cipher_len, auth_len; + uint8_t *dst; + + if (job->cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN || + job->cipher_mode == IMB_CIPHER_KASUMI_UEA1_BITLEN) + cipher_len = (job->msg_len_to_cipher_in_bits >> 3) + + (job->cipher_start_src_offset_in_bits >> 3); + else + cipher_len = job->msg_len_to_cipher_in_bytes + + job->cipher_start_src_offset_in_bytes; + + if (job->hash_alg == IMB_AUTH_SNOW3G_UIA2_BITLEN || + job->hash_alg == IMB_AUTH_ZUC_EIA3_BITLEN) + auth_len = (job->msg_len_to_hash_in_bits >> 3) + + job->hash_start_src_offset_in_bytes; + else if (job->hash_alg == IMB_AUTH_AES_GMAC) + auth_len = job->u.GCM.aad_len_in_bytes; + else + auth_len = job->msg_len_to_hash_in_bytes + + job->hash_start_src_offset_in_bytes; + + total_len = RTE_MAX(auth_len, cipher_len); + + if (sess->auth.operation != RTE_CRYPTO_AUTH_OP_VERIFY) + total_len += job->auth_tag_output_len_in_bytes; + + for (; (m_dst != NULL) && (total_len - lb_offset > 0); m_dst = m_dst->next) { + dst = rte_pktmbuf_mtod(m_dst, uint8_t *); + dst_len = RTE_MIN(m_dst->data_len, total_len - lb_offset); + rte_memcpy(dst, linear_buf + lb_offset, dst_len); + lb_offset += dst_len; + } +} + /** * Process a completed job and return rte_mbuf which job processed * @@ -1712,6 +1823,7 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job) struct aesni_mb_session *sess = NULL; uint32_t driver_id = ipsec_mb_get_driver_id( IPSEC_MB_PMD_TYPE_AESNI_MB); + uint8_t *linear_buf = NULL; #ifdef AESNI_MB_DOCSIS_SEC_ENABLED uint8_t is_docsis_sec = 0; @@ -1740,6 +1852,14 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job) case IMB_STATUS_COMPLETED: op->status = RTE_CRYPTO_OP_STATUS_SUCCESS; + if ((op->sym->m_src->nb_segs > 1 || + (op->sym->m_dst != NULL && + op->sym->m_dst->nb_segs > 1)) && + !imb_lib_support_sgl_algo(sess->cipher.mode)) { + linear_buf = (uint8_t *) job->user_data2; + post_process_sgl_linear(op, job, sess, linear_buf); + } + if (job->hash_alg == IMB_AUTH_NULL) break; @@ -1766,6 +1886,7 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job) default: op->status = RTE_CRYPTO_OP_STATUS_ERROR; } + rte_free(linear_buf); } /* Free session if a session-less crypto op */ @@ -2248,7 +2369,11 @@ RTE_INIT(ipsec_mb_register_aesni_mb) RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT | RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO | RTE_CRYPTODEV_FF_NON_BYTE_ALIGNED_DATA | - RTE_CRYPTODEV_FF_SYM_SESSIONLESS; + RTE_CRYPTODEV_FF_SYM_SESSIONLESS | + RTE_CRYPTODEV_FF_IN_PLACE_SGL | + RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT | + RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT | + RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT; aesni_mb_data->internals_priv_size = 0; aesni_mb_data->ops = &aesni_mb_pmd_ops; From patchwork Thu Aug 25 14:29:00 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Power, Ciara" X-Patchwork-Id: 115414 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 2FE15A0547; Thu, 25 Aug 2022 16:29:31 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 6B86A42B6C; Thu, 25 Aug 2022 16:29:17 +0200 (CEST) Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by mails.dpdk.org (Postfix) with ESMTP id 59F9042829 for ; Thu, 25 Aug 2022 16:29:14 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; 
c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1661437754; x=1692973754; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=J+b+YfiNqDFoySw62yamG2a6iJ5K6agLfCUHGNQF3c8=; b=LGDeriJlI8zOWO0SK7JmKJ5T8JhX8u9NcfjIBHBMfIHDQh29+DanUz4y 0nERlVbFT/MDw6FDKNHGSvf6B1lF9BXBsRujMkUvkWjTGtEmbUMrSsCxS G3GkfPXvAJYt2gTggTOTw7mFAMudROG29znoHmFV4VtJTsFKn6m/VuVhh Z/PZITH1TTnke24rNcZ66JoaXqvOgCP4kiqxa2NsSK4XiY3T47AFlMSQI ERLq7Z40mmMX//DgTrho4DsxVtaDxHIwoX5hv2cbTiZufDDzLeDCXzPml DbtyVCL4CbEN38SVbf4hYHQJlGV+ibLxzMfU4vriNuC862nvOfLmcQaqt Q==; X-IronPort-AV: E=McAfee;i="6500,9779,10450"; a="358216630" X-IronPort-AV: E=Sophos;i="5.93,263,1654585200"; d="scan'208";a="358216630" Received: from orsmga003.jf.intel.com ([10.7.209.27]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Aug 2022 07:29:14 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.93,263,1654585200"; d="scan'208";a="561043288" Received: from silpixa00400355.ir.intel.com (HELO silpixa00400355.ger.corp.intel.com) ([10.237.222.49]) by orsmga003.jf.intel.com with ESMTP; 25 Aug 2022 07:29:12 -0700 From: Ciara Power To: Akhil Goyal , Fan Zhang Cc: dev@dpdk.org, kai.ji@intel.com, pablo.de.lara.guarch@intel.com, Ciara Power Subject: [PATCH v2 4/5] test/crypto: add OOP snow3g SGL tests Date: Thu, 25 Aug 2022 14:29:00 +0000 Message-Id: <20220825142901.898007-5-ciara.power@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20220825142901.898007-1-ciara.power@intel.com> References: <20220812132334.75707-1-ciara.power@intel.com> <20220825142901.898007-1-ciara.power@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org More tests are added to test variations of OOP SGL for snow3g. This includes LB_IN_SGL_OUT and SGL_IN_LB_OUT. Signed-off-by: Ciara Power --- app/test/test_cryptodev.c | 48 +++++++++++++++++++++++++++++++-------- 1 file changed, 39 insertions(+), 9 deletions(-) diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c index e6925b6531..83860d1853 100644 --- a/app/test/test_cryptodev.c +++ b/app/test/test_cryptodev.c @@ -4347,7 +4347,8 @@ test_snow3g_encryption_oop(const struct snow3g_test_data *tdata) } static int -test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata) +test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata, + uint8_t sgl_in, uint8_t sgl_out) { struct crypto_testsuite_params *ts_params = &testsuite_params; struct crypto_unittest_params *ut_params = &unittest_params; @@ -4378,9 +4379,12 @@ test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata) uint64_t feat_flags = dev_info.feature_flags; - if (!(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT)) { - printf("Device doesn't support out-of-place scatter-gather " - "in both input and output mbufs. " + if (((sgl_in && sgl_out) && !(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT)) + || ((!sgl_in && sgl_out) && + !(feat_flags & RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT)) + || ((sgl_in && !sgl_out) && + !(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT))) { + printf("Device doesn't support out-of-place scatter gather type. 
" "Test Skipped.\n"); return TEST_SKIPPED; } @@ -4405,10 +4409,21 @@ test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata) /* the algorithms block size */ plaintext_pad_len = RTE_ALIGN_CEIL(plaintext_len, 16); - ut_params->ibuf = create_segmented_mbuf(ts_params->mbuf_pool, - plaintext_pad_len, 10, 0); - ut_params->obuf = create_segmented_mbuf(ts_params->mbuf_pool, - plaintext_pad_len, 3, 0); + if (sgl_in) + ut_params->ibuf = create_segmented_mbuf(ts_params->mbuf_pool, + plaintext_pad_len, 10, 0); + else { + ut_params->ibuf = rte_pktmbuf_alloc(ts_params->mbuf_pool); + rte_pktmbuf_append(ut_params->ibuf, plaintext_pad_len); + } + + if (sgl_out) + ut_params->obuf = create_segmented_mbuf(ts_params->mbuf_pool, + plaintext_pad_len, 3, 0); + else { + ut_params->obuf = rte_pktmbuf_alloc(ts_params->mbuf_pool); + rte_pktmbuf_append(ut_params->obuf, plaintext_pad_len); + } TEST_ASSERT_NOT_NULL(ut_params->ibuf, "Failed to allocate input buffer in mempool"); @@ -6762,9 +6777,20 @@ test_snow3g_encryption_test_case_1_oop(void) static int test_snow3g_encryption_test_case_1_oop_sgl(void) { - return test_snow3g_encryption_oop_sgl(&snow3g_test_case_1); + return test_snow3g_encryption_oop_sgl(&snow3g_test_case_1, 1, 1); +} + +static int +test_snow3g_encryption_test_case_1_oop_lb_in_sgl_out(void) +{ + return test_snow3g_encryption_oop_sgl(&snow3g_test_case_1, 0, 1); } +static int +test_snow3g_encryption_test_case_1_oop_sgl_in_lb_out(void) +{ + return test_snow3g_encryption_oop_sgl(&snow3g_test_case_1, 1, 0); +} static int test_snow3g_encryption_test_case_1_offset_oop(void) @@ -15985,6 +16011,10 @@ static struct unit_test_suite cryptodev_snow3g_testsuite = { test_snow3g_encryption_test_case_1_oop), TEST_CASE_ST(ut_setup, ut_teardown, test_snow3g_encryption_test_case_1_oop_sgl), + TEST_CASE_ST(ut_setup, ut_teardown, + test_snow3g_encryption_test_case_1_oop_lb_in_sgl_out), + TEST_CASE_ST(ut_setup, ut_teardown, + test_snow3g_encryption_test_case_1_oop_sgl_in_lb_out), TEST_CASE_ST(ut_setup, ut_teardown, test_snow3g_encryption_test_case_1_offset_oop), TEST_CASE_ST(ut_setup, ut_teardown, From patchwork Thu Aug 25 14:29:01 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Power, Ciara" X-Patchwork-Id: 115415 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 92720A0547; Thu, 25 Aug 2022 16:29:38 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id ECF8942B75; Thu, 25 Aug 2022 16:29:19 +0200 (CEST) Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by mails.dpdk.org (Postfix) with ESMTP id 5C6CA42905 for ; Thu, 25 Aug 2022 16:29:17 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1661437757; x=1692973757; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=2dKutrC5jZ7LyUcW7A3TxsoiQnAJ0dvcw6V6OMI7y3o=; b=hb+JUNp9rD9P7kuKTYNqWZWqjDTLUYcCHqCFuCX79iMMlEthBHAGVsjh WEwU18eNu5yu1YiJ7P5p6KO5KUGjthDtOvn7h1dpt6WvpCuoKt/f0alBD YxHPECFm8P2E532vIm+VIwx2hz3uJN6o3w4MatoOUUxvglKILxwSQcp0c 2gA1BgATQUDUNqVmXQrzPMeAT4BSB7UwZqi5V49aLL7q4G6ACAuqzmstV dBE1Bv20yVbGEDfXGIX3F9oWBe2wKb4DZkCkaHCElnMqvihUOh/boMU7W sdGxtiTw5lGHLqq5HGAcmovuC30SavTEAAMdS+2bMEXljDlPSJhtb3YSA A==; 
X-IronPort-AV: E=McAfee;i="6500,9779,10450"; a="358216644" X-IronPort-AV: E=Sophos;i="5.93,263,1654585200"; d="scan'208";a="358216644" Received: from orsmga003.jf.intel.com ([10.7.209.27]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Aug 2022 07:29:16 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.93,263,1654585200"; d="scan'208";a="561043304" Received: from silpixa00400355.ir.intel.com (HELO silpixa00400355.ger.corp.intel.com) ([10.237.222.49]) by orsmga003.jf.intel.com with ESMTP; 25 Aug 2022 07:29:14 -0700 From: Ciara Power To: Akhil Goyal , Fan Zhang , Yipeng Wang , Sameh Gobriel , Bruce Richardson , Vladimir Medvedkin Cc: dev@dpdk.org, kai.ji@intel.com, pablo.de.lara.guarch@intel.com, Ciara Power Subject: [PATCH v2 5/5] test/crypto: add remaining blockcipher SGL tests Date: Thu, 25 Aug 2022 14:29:01 +0000 Message-Id: <20220825142901.898007-6-ciara.power@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20220825142901.898007-1-ciara.power@intel.com> References: <20220812132334.75707-1-ciara.power@intel.com> <20220825142901.898007-1-ciara.power@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org The current blockcipher test function only has support for two types of SGL test, INPLACE or OOP_SGL_IN_LB_OUT. These types are hardcoded into the function, with the number of segments always set to 3. To ensure all SGL types are tested, blockcipher test vectors now have fields to specify SGL type, and the number of segments. If these fields are missing, the previous defaults are used, either INPLACE or OOP_SGL_IN_LB_OUT, with 3 segments. Some AES and Hash vectors are modified to use these new fields, and new AES tests are added to test the SGL types that were not previously being tested. 
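For illustration, a test case opts into a specific SGL variant through the new fields; the sketch below is a hypothetical standalone copy of one entry from aes_chain_test_cases[] in the diff that follows (the variable name is invented), not an extra vector:

    /* Hedged sketch: one entry using the new per-case SGL fields. */
    static const struct blockcipher_test_case sgl_oop_example = {
	    .test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest "
			    "Scatter Gather OOP (SGL in SGL out)",
	    .test_data = &aes_test_data_4,
	    .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
	    .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
			    BLOCKCIPHER_TEST_FEATURE_OOP,
	    /* New fields: required device feature flag and segment count.
	     * Leaving both unset keeps the previous defaults: 3 segments,
	     * INPLACE or OOP_SGL_IN_LB_OUT depending on the OOP flag. */
	    .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
	    .sgl_segs = 3
    };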
Signed-off-by: Ciara Power --- app/test/test_cryptodev_aes_test_vectors.h | 345 +++++++++++++++++--- app/test/test_cryptodev_blockcipher.c | 50 +-- app/test/test_cryptodev_blockcipher.h | 2 + app/test/test_cryptodev_hash_test_vectors.h | 8 +- 4 files changed, 335 insertions(+), 70 deletions(-) diff --git a/app/test/test_cryptodev_aes_test_vectors.h b/app/test/test_cryptodev_aes_test_vectors.h index a797af1b00..2c1875d3d9 100644 --- a/app/test/test_cryptodev_aes_test_vectors.h +++ b/app/test/test_cryptodev_aes_test_vectors.h @@ -4163,12 +4163,44 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = { }, { .test_descr = "AES-192-CTR XCBC Decryption Digest Verify " - "Scatter Gather", + "Scatter Gather (Inplace)", + .test_data = &aes_test_data_2, + .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG, + .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL, + .sgl_segs = 3 + }, + { + .test_descr = "AES-192-CTR XCBC Decryption Digest Verify " + "Scatter Gather OOP (SGL in SGL out)", + .test_data = &aes_test_data_2, + .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT, + .sgl_segs = 3 + }, + { + .test_descr = "AES-192-CTR XCBC Decryption Digest Verify " + "Scatter Gather OOP (LB in SGL out)", .test_data = &aes_test_data_2, .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC, .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT, + .sgl_segs = 3 }, + { + .test_descr = "AES-192-CTR XCBC Decryption Digest Verify " + "Scatter Gather OOP (SGL in LB out)", + .test_data = &aes_test_data_2, + .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT, + .sgl_segs = 3 + }, + { .test_descr = "AES-256-CTR HMAC-SHA1 Encryption Digest", .test_data = &aes_test_data_3, @@ -4193,11 +4225,52 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = { }, { .test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest " - "Scatter Gather", + "Scatter Gather (Inplace)", + .test_data = &aes_test_data_4, + .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG, + .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL, + .sgl_segs = 3 + }, + { + .test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest " + "Scatter Gather OOP (SGL in SGL out)", + .test_data = &aes_test_data_4, + .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT, + .sgl_segs = 3 + }, + { + .test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest " + "Scatter Gather OOP 16 segs (SGL in SGL out)", + .test_data = &aes_test_data_4, + .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT, + .sgl_segs = 16 + }, + { + .test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest " + "Scatter Gather OOP (LB in SGL out)", .test_data = &aes_test_data_4, .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN, .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT, + .sgl_segs = 3 + }, + { + .test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest " + "Scatter Gather OOP (SGL in LB out)", + .test_data 
= &aes_test_data_4, + .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT, + .sgl_segs = 3 }, { .test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest " @@ -4207,10 +4280,52 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = { }, { .test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest " - "Verify Scatter Gather", + "Verify Scatter Gather (Inplace)", .test_data = &aes_test_data_4, .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC, .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG, + .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL, + .sgl_segs = 3 + }, + { + .test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest " + "Verify Scatter Gather OOP (SGL in SGL out)", + .test_data = &aes_test_data_4, + .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT, + .sgl_segs = 3 + }, + { + .test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest " + "Verify Scatter Gather OOP 16 segs (SGL in SGL out)", + .test_data = &aes_test_data_4, + .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT, + .sgl_segs = 16 + }, + { + .test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest " + "Verify Scatter Gather OOP (LB in SGL out)", + .test_data = &aes_test_data_4, + .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT, + .sgl_segs = 3 + }, + { + .test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest " + "Verify Scatter Gather OOP (SGL in LB out)", + .test_data = &aes_test_data_4, + .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT, + .sgl_segs = 3 }, { .test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest " @@ -4255,12 +4370,46 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = { }, { .test_descr = "AES-128-CBC HMAC-SHA512 Encryption Digest " - "Scatter Gather Sessionless", + "Scatter Gather Sessionless (Inplace)", + .test_data = &aes_test_data_6, + .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SESSIONLESS | + BLOCKCIPHER_TEST_FEATURE_SG, + .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL, + .sgl_segs = 3 + }, + { + .test_descr = "AES-128-CBC HMAC-SHA512 Encryption Digest " + "Scatter Gather Sessionless OOP (SGL in SGL out)", + .test_data = &aes_test_data_6, + .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SESSIONLESS | + BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT, + .sgl_segs = 3 + }, + { + .test_descr = "AES-128-CBC HMAC-SHA512 Encryption Digest " + "Scatter Gather Sessionless OOP (LB in SGL out)", + .test_data = &aes_test_data_6, + .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SESSIONLESS | + BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT, + .sgl_segs = 3 + }, + { + .test_descr = "AES-128-CBC HMAC-SHA512 Encryption Digest " + "Scatter Gather Sessionless OOP (SGL in LB out)", .test_data = &aes_test_data_6, .op_mask = 
BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN, .feature_mask = BLOCKCIPHER_TEST_FEATURE_SESSIONLESS | BLOCKCIPHER_TEST_FEATURE_SG | BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT, + .sgl_segs = 3 }, { .test_descr = "AES-128-CBC HMAC-SHA512 Decryption Digest " @@ -4270,11 +4419,42 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = { }, { .test_descr = "AES-128-CBC HMAC-SHA512 Decryption Digest " - "Verify Scatter Gather", + "Verify Scatter Gather (Inplace)", + .test_data = &aes_test_data_6, + .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG, + .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL, + .sgl_segs = 2 + }, + { + .test_descr = "AES-128-CBC HMAC-SHA512 Decryption Digest " + "Verify Scatter Gather OOP (SGL in SGL out)", .test_data = &aes_test_data_6, .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC, .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT, + .sgl_segs = 3 + }, + { + .test_descr = "AES-128-CBC HMAC-SHA512 Decryption Digest " + "Verify Scatter Gather OOP (LB in SGL out)", + .test_data = &aes_test_data_6, + .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT, + .sgl_segs = 3 + }, + { + .test_descr = "AES-128-CBC HMAC-SHA512 Decryption Digest " + "Verify Scatter Gather OOP (SGL in LB out)", + .test_data = &aes_test_data_6, + .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT, + .sgl_segs = 3 }, { .test_descr = "AES-128-CBC XCBC Encryption Digest", @@ -4358,6 +4538,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = { .op_mask = BLOCKCIPHER_TEST_OP_AUTH_GEN_ENC, .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | BLOCKCIPHER_TEST_FEATURE_DIGEST_ENCRYPTED, + .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL, + .sgl_segs = 3 }, { .test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest " @@ -4382,6 +4564,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = { .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | BLOCKCIPHER_TEST_FEATURE_SESSIONLESS | BLOCKCIPHER_TEST_FEATURE_DIGEST_ENCRYPTED, + .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL, + .sgl_segs = 3 }, { .test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest " @@ -4397,6 +4581,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = { .op_mask = BLOCKCIPHER_TEST_OP_DEC_AUTH_VERIFY, .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | BLOCKCIPHER_TEST_FEATURE_DIGEST_ENCRYPTED, + .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL, + .sgl_segs = 3 }, { .test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest " @@ -4421,6 +4607,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = { .feature_mask = BLOCKCIPHER_TEST_FEATURE_SESSIONLESS | BLOCKCIPHER_TEST_FEATURE_SG | BLOCKCIPHER_TEST_FEATURE_DIGEST_ENCRYPTED, + .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL, + .sgl_segs = 3 }, { .test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest " @@ -4504,6 +4692,41 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = { .test_data = &aes_test_data_4, .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT, }, + { + .test_descr = "AES-128-CBC Encryption Scatter gather (Inplace)", + .test_data = &aes_test_data_4, + .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG, + .sgl_flag = 
RTE_CRYPTODEV_FF_IN_PLACE_SGL, + .sgl_segs = 3 + }, + { + .test_descr = "AES-128-CBC Encryption Scatter gather OOP (SGL in SGL out)", + .test_data = &aes_test_data_4, + .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT, + .sgl_segs = 3 + }, + { + .test_descr = "AES-128-CBC Encryption Scatter gather OOP (LB in SGL out)", + .test_data = &aes_test_data_4, + .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT, + .sgl_segs = 3 + }, + { + .test_descr = "AES-128-CBC Encryption Scatter gather OOP (SGL in LB out)", + .test_data = &aes_test_data_4, + .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT, + .sgl_segs = 3 + }, { .test_descr = "AES-128-CBC Decryption", .test_data = &aes_test_data_4, @@ -4515,11 +4738,39 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = { .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT, }, { - .test_descr = "AES-192-CBC Encryption Scatter gather", + .test_descr = "AES-192-CBC Encryption Scatter gather (Inplace)", + .test_data = &aes_test_data_10, + .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG, + .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL, + .sgl_segs = 3 + }, + { + .test_descr = "AES-192-CBC Encryption Scatter gather OOP (SGL in SGL out)", .test_data = &aes_test_data_10, .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT, .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT, + .sgl_segs = 3 + }, + { + .test_descr = "AES-192-CBC Encryption Scatter gather OOP (LB in SGL out)", + .test_data = &aes_test_data_10, + .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT, + .sgl_segs = 3 + }, + { + .test_descr = "AES-192-CBC Encryption Scatter gather OOP (SGL in LB out)", + .test_data = &aes_test_data_10, + .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT, + .sgl_segs = 3 }, { .test_descr = "AES-192-CBC Decryption", @@ -4527,10 +4778,39 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = { .op_mask = BLOCKCIPHER_TEST_OP_DECRYPT, }, { - .test_descr = "AES-192-CBC Decryption Scatter Gather", + .test_descr = "AES-192-CBC Decryption Scatter Gather (Inplace)", .test_data = &aes_test_data_10, .op_mask = BLOCKCIPHER_TEST_OP_DECRYPT, .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG, + .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL, + .sgl_segs = 3 + }, + { + .test_descr = "AES-192-CBC Decryption Scatter Gather OOP (SGL in SGL out)", + .test_data = &aes_test_data_10, + .op_mask = BLOCKCIPHER_TEST_OP_DECRYPT, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT, + .sgl_segs = 3 + }, + { + .test_descr = "AES-192-CBC Decryption Scatter Gather OOP (LB in SGL out)", + .test_data = &aes_test_data_10, + .op_mask = BLOCKCIPHER_TEST_OP_DECRYPT, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT, + .sgl_segs = 3 + }, + { + .test_descr = 
"AES-192-CBC Decryption Scatter Gather OOP (SGL in LB out)", + .test_data = &aes_test_data_10, + .op_mask = BLOCKCIPHER_TEST_OP_DECRYPT, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT, + .sgl_segs = 3 }, { .test_descr = "AES-256-CBC Encryption", @@ -4689,67 +4969,42 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = { }, { .test_descr = "AES-256-XTS Encryption (512-byte plaintext" - " Dataunit 512) Scater gather OOP", + " Dataunit 512) Scatter gather OOP (SGL in LB out)", .test_data = &aes_test_data_xts_wrapped_key_48_pt_512_du_512, .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT, - .feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP | - BLOCKCIPHER_TEST_FEATURE_SG, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT, + .sgl_segs = 3 }, { .test_descr = "AES-256-XTS Decryption (512-byte plaintext" - " Dataunit 512) Scater gather OOP", + " Dataunit 512) Scatter gather OOP (SGL in LB out)", .test_data = &aes_test_data_xts_wrapped_key_48_pt_512_du_512, .op_mask = BLOCKCIPHER_TEST_OP_DECRYPT, .feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP | BLOCKCIPHER_TEST_FEATURE_SG, - }, - { - .test_descr = "AES-256-XTS Encryption (512-byte plaintext" - " Dataunit 0) Scater gather OOP", - .test_data = &aes_test_data_xts_wrapped_key_48_pt_512_du_0, - .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT, - .feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP | - BLOCKCIPHER_TEST_FEATURE_SG, - }, - { - .test_descr = "AES-256-XTS Decryption (512-byte plaintext" - " Dataunit 0) Scater gather OOP", - .test_data = &aes_test_data_xts_wrapped_key_48_pt_512_du_0, - .op_mask = BLOCKCIPHER_TEST_OP_DECRYPT, - .feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP | - BLOCKCIPHER_TEST_FEATURE_SG, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT, + .sgl_segs = 3 }, { .test_descr = "AES-256-XTS Encryption (4096-byte plaintext" - " Dataunit 4096) Scater gather OOP", + " Dataunit 4096) Scatter gather OOP (SGL in LB out)", .test_data = &aes_test_data_xts_wrapped_key_48_pt_4096_du_4096, .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT, .feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP | BLOCKCIPHER_TEST_FEATURE_SG, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT, + .sgl_segs = 3 }, { .test_descr = "AES-256-XTS Decryption (4096-byte plaintext" - " Dataunit 4096) Scater gather OOP", + " Dataunit 4096) Scatter gather OOP (SGL in LB out)", .test_data = &aes_test_data_xts_wrapped_key_48_pt_4096_du_4096, .op_mask = BLOCKCIPHER_TEST_OP_DECRYPT, .feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP | BLOCKCIPHER_TEST_FEATURE_SG, - }, - { - .test_descr = "AES-256-XTS Encryption (4096-byte plaintext" - " Dataunit 0) Scater gather OOP", - .test_data = &aes_test_data_xts_wrapped_key_48_pt_4096_du_0, - .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT, - .feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP | - BLOCKCIPHER_TEST_FEATURE_SG, - }, - { - .test_descr = "AES-256-XTS Decryption (4096-byte plaintext" - " Dataunit 0) Scater gather OOP", - .test_data = &aes_test_data_xts_wrapped_key_48_pt_4096_du_0, - .op_mask = BLOCKCIPHER_TEST_OP_DECRYPT, - .feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP | - BLOCKCIPHER_TEST_FEATURE_SG, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT, + .sgl_segs = 3 }, { .test_descr = "cipher-only - NULL algo - x8 - encryption", diff --git a/app/test/test_cryptodev_blockcipher.c b/app/test/test_cryptodev_blockcipher.c index b5813b956f..f1ef0b606f 100644 --- a/app/test/test_cryptodev_blockcipher.c +++ b/app/test/test_cryptodev_blockcipher.c @@ -96,7 +96,9 @@ 
test_blockcipher_one_case(const struct blockcipher_test_case *t, uint8_t tmp_dst_buf[MBUF_SIZE]; uint32_t pad_len; - int nb_segs = 1; + int nb_segs_in = 1; + int nb_segs_out = 1; + uint64_t sgl_type = t->sgl_flag; uint32_t nb_iterates = 0; rte_cryptodev_info_get(dev_id, &dev_info); @@ -121,30 +123,31 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t, } } if (t->feature_mask & BLOCKCIPHER_TEST_FEATURE_SG) { - uint64_t oop_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT; + if (sgl_type == 0) { + if (t->feature_mask & BLOCKCIPHER_TEST_FEATURE_OOP) + sgl_type = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT; + else + sgl_type = RTE_CRYPTODEV_FF_IN_PLACE_SGL; + } - if (t->feature_mask & BLOCKCIPHER_TEST_FEATURE_OOP) { - if (!(feat_flags & oop_flag)) { - printf("Device doesn't support out-of-place " - "scatter-gather in input mbuf. " - "Test Skipped.\n"); - snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, - "SKIPPED"); - return TEST_SKIPPED; - } - } else { - if (!(feat_flags & RTE_CRYPTODEV_FF_IN_PLACE_SGL)) { - printf("Device doesn't support in-place " - "scatter-gather mbufs. " - "Test Skipped.\n"); - snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, - "SKIPPED"); - return TEST_SKIPPED; - } + if (!(feat_flags & sgl_type)) { + printf("Device doesn't support scatter-gather type." + " Test Skipped.\n"); + snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, + "SKIPPED"); + return TEST_SKIPPED; } - nb_segs = 3; + if (sgl_type == RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT || + sgl_type == RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT || + sgl_type == RTE_CRYPTODEV_FF_IN_PLACE_SGL) + nb_segs_in = t->sgl_segs == 0 ? 3 : t->sgl_segs; + + if (sgl_type == RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT || + sgl_type == RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT) + nb_segs_out = t->sgl_segs == 0 ? 3 : t->sgl_segs; } + if (!!(feat_flags & RTE_CRYPTODEV_FF_CIPHER_WRAPPED_KEY) ^ tdata->wrapped_key) { snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, @@ -207,7 +210,7 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t, /* for contiguous mbuf, nb_segs is 1 */ ibuf = create_segmented_mbuf(mbuf_pool, - tdata->ciphertext.len, nb_segs, src_pattern); + tdata->ciphertext.len, nb_segs_in, src_pattern); if (ibuf == NULL) { snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, "line %u FAILED: %s", @@ -256,7 +259,8 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t, } if (t->feature_mask & BLOCKCIPHER_TEST_FEATURE_OOP) { - obuf = rte_pktmbuf_alloc(mbuf_pool); + obuf = create_segmented_mbuf(mbuf_pool, + tdata->ciphertext.len, nb_segs_out, dst_pattern); if (!obuf) { snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, "line %u " "FAILED: %s", __LINE__, diff --git a/app/test/test_cryptodev_blockcipher.h b/app/test/test_cryptodev_blockcipher.h index 84f5d57787..bad93a5ec1 100644 --- a/app/test/test_cryptodev_blockcipher.h +++ b/app/test/test_cryptodev_blockcipher.h @@ -57,6 +57,8 @@ struct blockcipher_test_case { const struct blockcipher_test_data *test_data; uint8_t op_mask; /* operation mask */ uint8_t feature_mask; + uint64_t sgl_flag; + uint8_t sgl_segs; }; struct blockcipher_test_data { diff --git a/app/test/test_cryptodev_hash_test_vectors.h b/app/test/test_cryptodev_hash_test_vectors.h index f7a0981636..944a52721c 100644 --- a/app/test/test_cryptodev_hash_test_vectors.h +++ b/app/test/test_cryptodev_hash_test_vectors.h @@ -463,10 +463,12 @@ static const struct blockcipher_test_case hash_test_cases[] = { .op_mask = BLOCKCIPHER_TEST_OP_AUTH_GEN, }, { - .test_descr = "HMAC-SHA1 Digest Scatter Gather", + .test_descr = "HMAC-SHA1 Digest Scatter Gather (Inplace)", 
.test_data = &hmac_sha1_test_vector, .op_mask = BLOCKCIPHER_TEST_OP_AUTH_GEN, .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG, + .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL, + .sgl_segs = 3 }, { .test_descr = "HMAC-SHA1 Digest Verify", @@ -474,10 +476,12 @@ static const struct blockcipher_test_case hash_test_cases[] = { .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY, }, { - .test_descr = "HMAC-SHA1 Digest Verify Scatter Gather", + .test_descr = "HMAC-SHA1 Digest Verify Scatter Gather (Inplace)", .test_data = &hmac_sha1_test_vector, .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY, .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG, + .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL, + .sgl_segs = 3 }, { .test_descr = "SHA224 Digest",