From patchwork Fri Apr 22 09:59:52 2022
X-Patchwork-Submitter: Arkadiusz Kusztal
X-Patchwork-Id: 110132
X-Patchwork-Delegate: gakhil@marvell.com
From: Arek Kusztal
To: dev@dpdk.org
Cc: gakhil@marvell.com, roy.fan.zhang@intel.com, Arek Kusztal
Subject: [PATCH v2] crypto/qat: add diffie hellman algorithm
Date: Fri, 22 Apr 2022 10:59:52 +0100
Message-Id: <20220422095952.13860-1-arkadiuszx.kusztal@intel.com>

This commit adds the Diffie-Hellman key exchange algorithm to the Intel
QuickAssist Technology PMD.
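
For illustration, a minimal sketch of how an application could drive the new
DH path through the generic cryptodev asymmetric API (only the xform and op
fields used by this patch are shown; session setup is omitted, and dev_id,
qp_id and all buffers are placeholders, not part of this patch):

#include <rte_cryptodev.h>
#include <rte_crypto_asym.h>

/* Sketch only: request DH public key generation (pub = g^priv mod p). */
static int
dh_pub_key_gen_enqueue(uint8_t dev_id, uint16_t qp_id, struct rte_crypto_op *op,
		uint8_t *p, size_t p_len, uint8_t *g, size_t g_len,
		uint8_t *priv, size_t priv_len, uint8_t *pub)
{
	struct rte_crypto_asym_xform xform = {
		.xform_type = RTE_CRYPTO_ASYM_XFORM_DH,
		.dh = {
			.type = RTE_CRYPTO_ASYM_OP_PUBLIC_KEY_GENERATE,
			.p = { .data = p, .length = p_len },	/* prime modulus */
			.g = { .data = g, .length = g_len },	/* generator */
		},
	};
	struct rte_crypto_asym_op *asym_op = op->asym;

	/* 'xform' is normally used to create the session attached to 'op';
	 * that step is omitted here. */
	(void)xform;

	asym_op->dh.priv_key.data = priv;
	asym_op->dh.priv_key.length = priv_len;
	asym_op->dh.pub_key.data = pub;		/* written by the PMD on dequeue */
	asym_op->dh.pub_key.length = 0;

	return rte_cryptodev_enqueue_burst(dev_id, qp_id, &op, 1) == 1 ? 0 : -1;
}
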
Signed-off-by: Arek Kusztal
---
Depends-on: series-22621 ("crypto/qat: add secp384r1 curve support")

v2:
- updated release notes
- updated qat documentation

 doc/guides/cryptodevs/qat.rst          |   1 +
 doc/guides/rel_notes/release_22_07.rst |   1 +
 drivers/common/qat/qat_adf/qat_pke.h   |  36 +++++++
 drivers/crypto/qat/qat_asym.c          | 168 +++++++++++++++++++++++++++++++++
 4 files changed, 206 insertions(+)

diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
index 785e041324..37fd554ca1 100644
--- a/doc/guides/cryptodevs/qat.rst
+++ b/doc/guides/cryptodevs/qat.rst
@@ -177,6 +177,7 @@ The QAT ASYM PMD has support for:
 * ``RTE_CRYPTO_ASYM_XFORM_RSA``
 * ``RTE_CRYPTO_ASYM_XFORM_ECDSA``
 * ``RTE_CRYPTO_ASYM_XFORM_ECPM``
+* ``RTE_CRYPTO_ASYM_XFORM_DH``
 
 Limitations
 ~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index 7f44d363b5..ac701645b1 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -58,6 +58,7 @@ New Features
 * **Updated Intel QuickAssist Technology (QAT) crypto PMD.**
 
   * Added support for secp384r1 elliptic curve.
+  * Added support for Diffie-Hellman (FFDH) algorithm.
 
 
 Removed Items
diff --git a/drivers/common/qat/qat_adf/qat_pke.h b/drivers/common/qat/qat_adf/qat_pke.h
index 6c12bfd989..c727e4e1af 100644
--- a/drivers/common/qat/qat_adf/qat_pke.h
+++ b/drivers/common/qat/qat_adf/qat_pke.h
@@ -137,6 +137,42 @@ get_modinv_function(struct rte_crypto_asym_xform *xform)
 }
 
 static struct qat_asym_function
+get_dh_g2_function(uint32_t bytesize)
+{
+	struct qat_asym_function qat_function = { };
+
+	if (bytesize <= 256) {
+		qat_function.func_id = PKE_DH_G2_2048;
+		qat_function.bytesize = 256;
+	} else if (bytesize <= 384) {
+		qat_function.func_id = PKE_DH_G2_3072;
+		qat_function.bytesize = 384;
+	} else if (bytesize <= 512) {
+		qat_function.func_id = PKE_DH_G2_4096;
+		qat_function.bytesize = 512;
+	}
+	return qat_function;
+}
+
+static struct qat_asym_function
+get_dh_function(uint32_t bytesize)
+{
+	struct qat_asym_function qat_function = { };
+
+	if (bytesize <= 256) {
+		qat_function.func_id = PKE_DH_2048;
+		qat_function.bytesize = 256;
+	} else if (bytesize <= 384) {
+		qat_function.func_id = PKE_DH_3072;
+		qat_function.bytesize = 384;
+	} else if (bytesize <= 512) {
+		qat_function.func_id = PKE_DH_4096;
+		qat_function.bytesize = 512;
+	}
+	return qat_function;
+}
+
+static struct qat_asym_function
 get_rsa_enc_function(struct rte_crypto_asym_xform *xform)
 {
 	struct qat_asym_function qat_function = { };
diff --git a/drivers/crypto/qat/qat_asym.c b/drivers/crypto/qat/qat_asym.c
index d2041b2efa..c2a985b355 100644
--- a/drivers/crypto/qat/qat_asym.c
+++ b/drivers/crypto/qat/qat_asym.c
@@ -748,6 +748,125 @@ ecpm_collect(struct rte_crypto_asym_op *asym_op,
 }
 
 static int
+dh_mod_g2_input(struct rte_crypto_asym_op *asym_op,
+		struct icp_qat_fw_pke_request *qat_req,
+		struct qat_asym_op_cookie *cookie,
+		struct rte_crypto_asym_xform *xform)
+{
+	struct qat_asym_function qat_function;
+	uint32_t alg_bytesize, func_id;
+
+	qat_function = get_dh_g2_function(xform->dh.p.length);
+	func_id = qat_function.func_id;
+	if (qat_function.func_id == 0) {
+		QAT_LOG(ERR, "Cannot obtain functionality id");
+		return -EINVAL;
+	}
+	alg_bytesize = qat_function.bytesize;
+	SET_PKE_LN(asym_op->dh.priv_key, alg_bytesize, 0);
+	SET_PKE_LN(xform->dh.p, alg_bytesize, 1);
+	cookie->alg_bytesize = alg_bytesize;
+	cookie->qat_func_alignsize = alg_bytesize;
+
+	qat_req->pke_hdr.cd_pars.func_id = func_id;
+	qat_req->input_param_count = 2;
+	qat_req->output_param_count = 1;
+
+	HEXDUMP("DH Priv", cookie->input_array[0], alg_bytesize);
+	HEXDUMP("DH p", cookie->input_array[1], alg_bytesize);
+
+	return 0;
+}
+
+static int
+dh_mod_n_input(struct rte_crypto_asym_op *asym_op,
+		struct icp_qat_fw_pke_request *qat_req,
+		struct qat_asym_op_cookie *cookie,
+		struct rte_crypto_asym_xform *xform)
+{
+	struct qat_asym_function qat_function;
+	uint32_t alg_bytesize, func_id;
+
+	qat_function = get_dh_function(xform->dh.p.length);
+	func_id = qat_function.func_id;
+	if (qat_function.func_id == 0) {
+		QAT_LOG(ERR, "Cannot obtain functionality id");
+		return -EINVAL;
+	}
+	alg_bytesize = qat_function.bytesize;
+	if (xform->dh.type == RTE_CRYPTO_ASYM_OP_PUBLIC_KEY_GENERATE) {
+		SET_PKE_LN(xform->dh.g, alg_bytesize, 0);
+		SET_PKE_LN(asym_op->dh.priv_key, alg_bytesize, 1);
+		SET_PKE_LN(xform->dh.p, alg_bytesize, 2);
+	} else {
+		SET_PKE_LN(asym_op->dh.pub_key, alg_bytesize, 0);
+		SET_PKE_LN(asym_op->dh.priv_key, alg_bytesize, 1);
+		SET_PKE_LN(xform->dh.p, alg_bytesize, 2);
+	}
+	cookie->alg_bytesize = alg_bytesize;
+	cookie->qat_func_alignsize = alg_bytesize;
+
+	qat_req->pke_hdr.cd_pars.func_id = func_id;
+	qat_req->input_param_count = 3;
+	qat_req->output_param_count = 1;
+
+	HEXDUMP("ModExp g/priv key", cookie->input_array[0], alg_bytesize);
+	HEXDUMP("ModExp priv/pub", cookie->input_array[1], alg_bytesize);
+	HEXDUMP("ModExp p", cookie->input_array[2], alg_bytesize);
+
+	return 0;
+}
+
+static int
+dh_mod_set_input(struct rte_crypto_asym_op *asym_op,
+		struct icp_qat_fw_pke_request *qat_req,
+		struct qat_asym_op_cookie *cookie,
+		struct rte_crypto_asym_xform *xform)
+{
+	if (xform->dh.type == RTE_CRYPTO_ASYM_OP_PUBLIC_KEY_GENERATE &&
+			xform->dh.g.length == 1 && xform->dh.g.data[0] == 2)
+		return dh_mod_g2_input(asym_op, qat_req, cookie, xform);
+	else
+		return dh_mod_n_input(asym_op, qat_req, cookie, xform);
+}
+
+static int
+dh_set_input(struct rte_crypto_asym_op *asym_op,
+		struct icp_qat_fw_pke_request *qat_req,
+		struct qat_asym_op_cookie *cookie,
+		struct rte_crypto_asym_xform *xform)
+{
+	switch (xform->xform_type) {
+	case RTE_CRYPTO_ASYM_XFORM_DH:
+		return dh_mod_set_input(asym_op, qat_req, cookie, xform);
+	default:
+		QAT_LOG(ERR,
+			"Invalid/unsupported asymmetric crypto xform type");
+		return -1;
+	}
+}
+
+static uint8_t
+dh_collect(struct rte_crypto_asym_op *asym_op,
+		struct qat_asym_op_cookie *cookie,
+		struct rte_crypto_asym_xform *xform)
+{
+	uint8_t *DH;
+	uint32_t alg_bytesize = cookie->alg_bytesize;
+
+	if (xform->dh.type == RTE_CRYPTO_ASYM_OP_PUBLIC_KEY_GENERATE) {
+		DH = asym_op->dh.pub_key.data;
+		asym_op->dh.pub_key.length = alg_bytesize;
+	} else {
+		DH = asym_op->dh.shared_secret.data;
+		asym_op->dh.shared_secret.length = alg_bytesize;
+	}
+	rte_memcpy(DH, cookie->output_array[0], alg_bytesize);
+	HEXDUMP("DH", DH, alg_bytesize);
+	return RTE_CRYPTO_OP_STATUS_SUCCESS;
+}
+
+static int
 asym_set_input(struct rte_crypto_asym_op *asym_op,
 		struct icp_qat_fw_pke_request *qat_req,
 		struct qat_asym_op_cookie *cookie,
@@ -760,6 +879,9 @@ asym_set_input(struct rte_crypto_asym_op *asym_op,
 	case RTE_CRYPTO_ASYM_XFORM_MODINV:
 		return modinv_set_input(asym_op, qat_req,
 				cookie, xform);
+	case RTE_CRYPTO_ASYM_XFORM_DH:
+		return dh_set_input(asym_op, qat_req,
+				cookie, xform);
 	case RTE_CRYPTO_ASYM_XFORM_RSA:
 		return rsa_set_input(asym_op, qat_req,
 				cookie, xform);
@@ -849,6 +971,8 @@ qat_asym_collect_response(struct rte_crypto_op *op,
 		return modexp_collect(asym_op, cookie, xform);
 	case RTE_CRYPTO_ASYM_XFORM_MODINV:
 		return modinv_collect(asym_op, cookie, xform);
+	case RTE_CRYPTO_ASYM_XFORM_DH:
+		return dh_collect(asym_op, cookie, xform);
 	case RTE_CRYPTO_ASYM_XFORM_RSA:
 		return rsa_collect(asym_op, cookie);
 	case RTE_CRYPTO_ASYM_XFORM_ECDSA:
@@ -967,6 +1091,35 @@ session_set_modinv(struct qat_asym_session *qat_session,
 }
 
 static int
+session_set_dh(struct qat_asym_session *qat_session,
+		struct rte_crypto_asym_xform *xform)
+{
+	uint8_t *g = xform->dh.g.data;
+	uint8_t *p = xform->dh.p.data;
+
+	qat_session->xform.dh.type = xform->dh.type;
+	qat_session->xform.dh.g.data =
+		rte_malloc(NULL, xform->dh.g.length, 0);
+	if (qat_session->xform.dh.g.data == NULL)
+		return -ENOMEM;
+	qat_session->xform.dh.g.length = xform->dh.g.length;
+	qat_session->xform.dh.p.data = rte_malloc(NULL,
+		xform->dh.p.length, 0);
+	if (qat_session->xform.dh.p.data == NULL) {
+		rte_free(qat_session->xform.dh.g.data);
+		return -ENOMEM;
+	}
+	qat_session->xform.dh.p.length = xform->dh.p.length;
+
+	rte_memcpy(qat_session->xform.dh.g.data, g,
+		xform->dh.g.length);
+	rte_memcpy(qat_session->xform.dh.p.data, p,
+		xform->dh.p.length);
+
+	return 0;
+}
+
+static int
 session_set_rsa(struct qat_asym_session *qat_session,
 		struct rte_crypto_asym_xform *xform)
 {
@@ -1118,6 +1271,9 @@ qat_asym_session_configure(struct rte_cryptodev *dev __rte_unused,
 	case RTE_CRYPTO_ASYM_XFORM_RSA:
 		ret = session_set_rsa(qat_session, xform);
 		break;
+	case RTE_CRYPTO_ASYM_XFORM_DH:
+		ret = session_set_dh(qat_session, xform);
+		break;
 	case RTE_CRYPTO_ASYM_XFORM_ECDSA:
 	case RTE_CRYPTO_ASYM_XFORM_ECPM:
 		session_set_ecdsa(qat_session, xform);
@@ -1157,6 +1313,15 @@ session_clear_modinv(struct rte_crypto_modinv_xform *modinv)
 }
 
 static void
+session_clear_dh(struct rte_crypto_dh_xform *dh)
+{
+	memset(dh->g.data, 0, dh->g.length);
+	rte_free(dh->g.data);
+	memset(dh->p.data, 0, dh->p.length);
+	rte_free(dh->p.data);
+}
+
+static void
 session_clear_rsa(struct rte_crypto_rsa_xform *rsa)
 {
 	memset(rsa->n.data, 0, rsa->n.length);
@@ -1190,6 +1355,9 @@ session_clear_xform(struct qat_asym_session *qat_session)
 	case RTE_CRYPTO_ASYM_XFORM_MODINV:
 		session_clear_modinv(&qat_session->xform.modinv);
 		break;
+	case RTE_CRYPTO_ASYM_XFORM_DH:
+		session_clear_dh(&qat_session->xform.dh);
+		break;
 	case RTE_CRYPTO_ASYM_XFORM_RSA:
 		session_clear_rsa(&qat_session->xform.rsa);
 		break;
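
A usage note on the generator-2 fast path (editorial illustration, not part
of the patch): dh_mod_set_input() selects the dedicated PKE_DH_G2_* firmware
service only when the public-key-generate xform encodes the generator as a
single byte equal to 2; any other encoding of g falls back to the generic
three-operand DH service. A hypothetical helper showing the expected
encoding:

#include <rte_crypto_asym.h>

/* Illustrative only: a one-byte g = 2 makes the QAT PMD use the
 * dedicated g^priv mod p (G2) service; the prime modulus p must
 * still be filled in by the caller. */
static void
dh_xform_use_generator_two(struct rte_crypto_asym_xform *xform)
{
	static uint8_t generator_two = 2;

	xform->xform_type = RTE_CRYPTO_ASYM_XFORM_DH;
	xform->dh.type = RTE_CRYPTO_ASYM_OP_PUBLIC_KEY_GENERATE;
	xform->dh.g.data = &generator_two;
	xform->dh.g.length = 1;
}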