From patchwork Thu Sep 5 14:56:05 2024
X-Patchwork-Submitter: Gowrishankar Muthukrishnan
X-Patchwork-Id: 143668
From: Gowrishankar Muthukrishnan
Subject: [PATCH 1/6] cryptodev: move RSA padding information into xform
Date: Thu, 5 Sep 2024 20:26:05 +0530
Message-ID: <20240905145612.1732-2-gmuthukrishn@marvell.com>
In-Reply-To: <20240905145612.1732-1-gmuthukrishn@marvell.com>

RSA padding information fits better as an xform entity than as part of
the crypto op: it is tied to the hash algorithm used for the entire
crypto session, the same algorithm used in the message digest itself.
The virtio standard spec likewise associates this information with
asymmetric session creation. Hence, move it from the crypto op into the
xform structure.
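The shape of the change can be sketched with simplified stand-in structs. These mocks are hypothetical illustrations, not the real DPDK definitions: after this patch, padding is configured once on the session xform, and per-op parameters no longer carry it.

```c
#include <assert.h>

/*
 * Simplified mocks of the before/after API shape -- NOT the real DPDK
 * headers, just stand-ins to illustrate where padding info now lives.
 */
enum mock_rsa_padding_type {
	MOCK_RSA_PADDING_NONE,
	MOCK_RSA_PADDING_PKCS1_5,
};

struct mock_rsa_padding {
	enum mock_rsa_padding_type type;	/* padding scheme */
	int hash;				/* hash used by the scheme */
};

/* After the patch: padding is session-level state, set once per xform. */
struct mock_rsa_xform {
	struct mock_rsa_padding padding;
};

/* After the patch: the per-op params no longer carry padding. */
struct mock_rsa_op_param {
	int op_type;
};

/* A PMD-style helper now reads padding from the session xform, not the op. */
static enum mock_rsa_padding_type
mock_session_padding(const struct mock_rsa_xform *xform)
{
	return xform->padding.type;
}
```

This mirrors why the QAT and cnxk hunks below thread the xform (or cached session state) into their completion paths: the op no longer has a `padding` field to consult.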
Signed-off-by: Gowrishankar Muthukrishnan --- app/test/test_cryptodev_asym.c | 4 -- app/test/test_cryptodev_rsa_test_vectors.h | 2 + drivers/common/cpt/cpt_ucode_asym.h | 4 +- drivers/crypto/cnxk/cnxk_ae.h | 13 +++-- drivers/crypto/octeontx/otx_cryptodev_ops.c | 4 +- drivers/crypto/openssl/openssl_pmd_private.h | 1 + drivers/crypto/openssl/rte_openssl_pmd.c | 4 +- drivers/crypto/openssl/rte_openssl_pmd_ops.c | 1 + drivers/crypto/qat/qat_asym.c | 17 ++++--- examples/fips_validation/main.c | 52 +++++++++++--------- lib/cryptodev/rte_crypto_asym.h | 6 +-- 11 files changed, 58 insertions(+), 50 deletions(-) diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c index f0b5d38543..0928367cb0 100644 --- a/app/test/test_cryptodev_asym.c +++ b/app/test/test_cryptodev_asym.c @@ -80,7 +80,6 @@ queue_ops_rsa_sign_verify(void *sess) asym_op->rsa.message.length = rsaplaintext.len; asym_op->rsa.sign.length = RTE_DIM(rsa_n); asym_op->rsa.sign.data = output_buf; - asym_op->rsa.padding.type = RTE_CRYPTO_RSA_PADDING_PKCS1_5; debug_hexdump(stdout, "message", asym_op->rsa.message.data, asym_op->rsa.message.length); @@ -112,7 +111,6 @@ queue_ops_rsa_sign_verify(void *sess) /* Verify sign */ asym_op->rsa.op_type = RTE_CRYPTO_ASYM_OP_VERIFY; - asym_op->rsa.padding.type = RTE_CRYPTO_RSA_PADDING_PKCS1_5; /* Process crypto operation */ if (rte_cryptodev_enqueue_burst(dev_id, 0, &op, 1) != 1) { @@ -171,7 +169,6 @@ queue_ops_rsa_enc_dec(void *sess) asym_op->rsa.cipher.data = cipher_buf; asym_op->rsa.cipher.length = RTE_DIM(rsa_n); asym_op->rsa.message.length = rsaplaintext.len; - asym_op->rsa.padding.type = RTE_CRYPTO_RSA_PADDING_PKCS1_5; debug_hexdump(stdout, "message", asym_op->rsa.message.data, asym_op->rsa.message.length); @@ -203,7 +200,6 @@ queue_ops_rsa_enc_dec(void *sess) asym_op = result_op->asym; asym_op->rsa.message.length = RTE_DIM(rsa_n); asym_op->rsa.op_type = RTE_CRYPTO_ASYM_OP_DECRYPT; - asym_op->rsa.padding.type = RTE_CRYPTO_RSA_PADDING_PKCS1_5; /* 
Process crypto operation */ if (rte_cryptodev_enqueue_burst(dev_id, 0, &op, 1) != 1) { diff --git a/app/test/test_cryptodev_rsa_test_vectors.h b/app/test/test_cryptodev_rsa_test_vectors.h index 89981f13f0..1b7b451387 100644 --- a/app/test/test_cryptodev_rsa_test_vectors.h +++ b/app/test/test_cryptodev_rsa_test_vectors.h @@ -345,6 +345,7 @@ struct rte_crypto_asym_xform rsa_xform = { .next = NULL, .xform_type = RTE_CRYPTO_ASYM_XFORM_RSA, .rsa = { + .padding.type = RTE_CRYPTO_RSA_PADDING_PKCS1_5, .n = { .data = rsa_n, .length = sizeof(rsa_n) @@ -366,6 +367,7 @@ struct rte_crypto_asym_xform rsa_xform_crt = { .next = NULL, .xform_type = RTE_CRYPTO_ASYM_XFORM_RSA, .rsa = { + .padding.type = RTE_CRYPTO_RSA_PADDING_PKCS1_5, .n = { .data = rsa_n, .length = sizeof(rsa_n) diff --git a/drivers/common/cpt/cpt_ucode_asym.h b/drivers/common/cpt/cpt_ucode_asym.h index e1034bbeb4..5122378ec7 100644 --- a/drivers/common/cpt/cpt_ucode_asym.h +++ b/drivers/common/cpt/cpt_ucode_asym.h @@ -327,7 +327,7 @@ cpt_rsa_prep(struct asym_op_params *rsa_params, /* Result buffer */ rlen = mod_len; - if (rsa_op.padding.type == RTE_CRYPTO_RSA_PADDING_NONE) { + if (rsa->padding.type == RTE_CRYPTO_RSA_PADDING_NONE) { /* Use mod_exp operation for no_padding type */ vq_cmd_w0.s.opcode.minor = CPT_MINOR_OP_MODEX; vq_cmd_w0.s.param2 = exp_len; @@ -412,7 +412,7 @@ cpt_rsa_crt_prep(struct asym_op_params *rsa_params, /* Result buffer */ rlen = mod_len; - if (rsa_op.padding.type == RTE_CRYPTO_RSA_PADDING_NONE) { + if (rsa->padding.type == RTE_CRYPTO_RSA_PADDING_NONE) { /*Use mod_exp operation for no_padding type */ vq_cmd_w0.s.opcode.minor = CPT_MINOR_OP_MODEX_CRT; } else { diff --git a/drivers/crypto/cnxk/cnxk_ae.h b/drivers/crypto/cnxk/cnxk_ae.h index ef9cb5eb91..1bb5a450c5 100644 --- a/drivers/crypto/cnxk/cnxk_ae.h +++ b/drivers/crypto/cnxk/cnxk_ae.h @@ -177,6 +177,9 @@ cnxk_ae_fill_rsa_params(struct cnxk_ae_sess *sess, rsa->n.length = mod_len; rsa->e.length = exp_len; + /* Set padding info */ + 
rsa->padding.type = xform->rsa.padding.type; + return 0; } @@ -366,7 +369,7 @@ cnxk_ae_rsa_prep(struct rte_crypto_op *op, struct roc_ae_buf_ptr *meta_buf, dptr += in_size; dlen = total_key_len + in_size; - if (rsa_op.padding.type == RTE_CRYPTO_RSA_PADDING_NONE) { + if (rsa->padding.type == RTE_CRYPTO_RSA_PADDING_NONE) { /* Use mod_exp operation for no_padding type */ w4.s.opcode_minor = ROC_AE_MINOR_OP_MODEX; w4.s.param2 = exp_len; @@ -421,7 +424,7 @@ cnxk_ae_rsa_exp_prep(struct rte_crypto_op *op, struct roc_ae_buf_ptr *meta_buf, dptr += in_size; dlen = mod_len + privkey_len + in_size; - if (rsa_op.padding.type == RTE_CRYPTO_RSA_PADDING_NONE) { + if (rsa->padding.type == RTE_CRYPTO_RSA_PADDING_NONE) { /* Use mod_exp operation for no_padding type */ w4.s.opcode_minor = ROC_AE_MINOR_OP_MODEX; w4.s.param2 = privkey_len; @@ -479,7 +482,7 @@ cnxk_ae_rsa_crt_prep(struct rte_crypto_op *op, struct roc_ae_buf_ptr *meta_buf, dptr += in_size; dlen = total_key_len + in_size; - if (rsa_op.padding.type == RTE_CRYPTO_RSA_PADDING_NONE) { + if (rsa->padding.type == RTE_CRYPTO_RSA_PADDING_NONE) { /*Use mod_exp operation for no_padding type */ w4.s.opcode_minor = ROC_AE_MINOR_OP_MODEX_CRT; } else { @@ -1151,7 +1154,7 @@ cnxk_ae_dequeue_rsa_op(struct rte_crypto_op *cop, uint8_t *rptr, memcpy(rsa->cipher.data, rptr, rsa->cipher.length); break; case RTE_CRYPTO_ASYM_OP_DECRYPT: - if (rsa->padding.type == RTE_CRYPTO_RSA_PADDING_NONE) { + if (rsa_ctx->padding.type == RTE_CRYPTO_RSA_PADDING_NONE) { rsa->message.length = rsa_ctx->n.length; memcpy(rsa->message.data, rptr, rsa->message.length); } else { @@ -1171,7 +1174,7 @@ cnxk_ae_dequeue_rsa_op(struct rte_crypto_op *cop, uint8_t *rptr, memcpy(rsa->sign.data, rptr, rsa->sign.length); break; case RTE_CRYPTO_ASYM_OP_VERIFY: - if (rsa->padding.type == RTE_CRYPTO_RSA_PADDING_NONE) { + if (rsa_ctx->padding.type == RTE_CRYPTO_RSA_PADDING_NONE) { rsa->sign.length = rsa_ctx->n.length; memcpy(rsa->sign.data, rptr, rsa->sign.length); } else { diff 
--git a/drivers/crypto/octeontx/otx_cryptodev_ops.c b/drivers/crypto/octeontx/otx_cryptodev_ops.c index bafd0c1669..9a758cd297 100644 --- a/drivers/crypto/octeontx/otx_cryptodev_ops.c +++ b/drivers/crypto/octeontx/otx_cryptodev_ops.c @@ -708,7 +708,7 @@ otx_cpt_asym_rsa_op(struct rte_crypto_op *cop, struct cpt_request_info *req, memcpy(rsa->cipher.data, req->rptr, rsa->cipher.length); break; case RTE_CRYPTO_ASYM_OP_DECRYPT: - if (rsa->padding.type == RTE_CRYPTO_RSA_PADDING_NONE) + if (rsa_ctx->padding.type == RTE_CRYPTO_RSA_PADDING_NONE) rsa->message.length = rsa_ctx->n.length; else { /* Get length of decrypted output */ @@ -725,7 +725,7 @@ otx_cpt_asym_rsa_op(struct rte_crypto_op *cop, struct cpt_request_info *req, memcpy(rsa->sign.data, req->rptr, rsa->sign.length); break; case RTE_CRYPTO_ASYM_OP_VERIFY: - if (rsa->padding.type == RTE_CRYPTO_RSA_PADDING_NONE) + if (rsa_ctx->padding.type == RTE_CRYPTO_RSA_PADDING_NONE) rsa->sign.length = rsa_ctx->n.length; else { /* Get length of decrypted output */ diff --git a/drivers/crypto/openssl/openssl_pmd_private.h b/drivers/crypto/openssl/openssl_pmd_private.h index a50e4d4918..4caf1ebcb5 100644 --- a/drivers/crypto/openssl/openssl_pmd_private.h +++ b/drivers/crypto/openssl/openssl_pmd_private.h @@ -197,6 +197,7 @@ struct __rte_cache_aligned openssl_asym_session { union { struct rsa { RSA *rsa; + uint32_t pad; #if (OPENSSL_VERSION_NUMBER >= 0x30000000L) EVP_PKEY_CTX * ctx; #endif diff --git a/drivers/crypto/openssl/rte_openssl_pmd.c b/drivers/crypto/openssl/rte_openssl_pmd.c index 101111e85b..8877fd5d92 100644 --- a/drivers/crypto/openssl/rte_openssl_pmd.c +++ b/drivers/crypto/openssl/rte_openssl_pmd.c @@ -2699,7 +2699,7 @@ process_openssl_rsa_op_evp(struct rte_crypto_op *cop, struct openssl_asym_session *sess) { struct rte_crypto_asym_op *op = cop->asym; - uint32_t pad = (op->rsa.padding.type); + uint32_t pad = sess->u.r.pad; uint8_t *tmp; size_t outlen = 0; int ret = -1; @@ -3082,7 +3082,7 @@ 
process_openssl_rsa_op(struct rte_crypto_op *cop, int ret = 0; struct rte_crypto_asym_op *op = cop->asym; RSA *rsa = sess->u.r.rsa; - uint32_t pad = (op->rsa.padding.type); + uint32_t pad = sess->u.r.pad; uint8_t *tmp; cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS; diff --git a/drivers/crypto/openssl/rte_openssl_pmd_ops.c b/drivers/crypto/openssl/rte_openssl_pmd_ops.c index 1bbb855a59..4c47c3e7dd 100644 --- a/drivers/crypto/openssl/rte_openssl_pmd_ops.c +++ b/drivers/crypto/openssl/rte_openssl_pmd_ops.c @@ -889,6 +889,7 @@ static int openssl_set_asym_session_parameters( if (!n || !e) goto err_rsa; + asym_session->u.r.pad = xform->rsa.padding.type; #if (OPENSSL_VERSION_NUMBER >= 0x30000000L) OSSL_PARAM_BLD * param_bld = OSSL_PARAM_BLD_new(); if (!param_bld) { diff --git a/drivers/crypto/qat/qat_asym.c b/drivers/crypto/qat/qat_asym.c index 491f5ecd5b..020c8fac5f 100644 --- a/drivers/crypto/qat/qat_asym.c +++ b/drivers/crypto/qat/qat_asym.c @@ -362,7 +362,7 @@ rsa_set_pub_input(struct icp_qat_fw_pke_request *qat_req, alg_bytesize = qat_function.bytesize; if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_ENCRYPT) { - switch (asym_op->rsa.padding.type) { + switch (xform->rsa.padding.type) { case RTE_CRYPTO_RSA_PADDING_NONE: SET_PKE_LN(asym_op->rsa.message, alg_bytesize, 0); break; @@ -374,7 +374,7 @@ rsa_set_pub_input(struct icp_qat_fw_pke_request *qat_req, } HEXDUMP("RSA Message", cookie->input_array[0], alg_bytesize); } else { - switch (asym_op->rsa.padding.type) { + switch (xform->rsa.padding.type) { case RTE_CRYPTO_RSA_PADDING_NONE: SET_PKE_LN(asym_op->rsa.sign, alg_bytesize, 0); break; @@ -460,7 +460,7 @@ rsa_set_priv_input(struct icp_qat_fw_pke_request *qat_req, if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_DECRYPT) { - switch (asym_op->rsa.padding.type) { + switch (xform->rsa.padding.type) { case RTE_CRYPTO_RSA_PADDING_NONE: SET_PKE_LN(asym_op->rsa.cipher, alg_bytesize, 0); HEXDUMP("RSA ciphertext", cookie->input_array[0], @@ -474,7 +474,7 @@ rsa_set_priv_input(struct 
icp_qat_fw_pke_request *qat_req, } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_SIGN) { - switch (asym_op->rsa.padding.type) { + switch (xform->rsa.padding.type) { case RTE_CRYPTO_RSA_PADDING_NONE: SET_PKE_LN(asym_op->rsa.message, alg_bytesize, 0); HEXDUMP("RSA text to be signed", cookie->input_array[0], @@ -514,7 +514,8 @@ rsa_set_input(struct icp_qat_fw_pke_request *qat_req, static uint8_t rsa_collect(struct rte_crypto_asym_op *asym_op, - const struct qat_asym_op_cookie *cookie) + const struct qat_asym_op_cookie *cookie, + const struct rte_crypto_asym_xform *xform) { uint32_t alg_bytesize = cookie->alg_bytesize; @@ -530,7 +531,7 @@ rsa_collect(struct rte_crypto_asym_op *asym_op, HEXDUMP("RSA Encrypted data", cookie->output_array[0], alg_bytesize); } else { - switch (asym_op->rsa.padding.type) { + switch (xform->rsa.padding.type) { case RTE_CRYPTO_RSA_PADDING_NONE: rte_memcpy(asym_op->rsa.cipher.data, cookie->output_array[0], @@ -547,7 +548,7 @@ rsa_collect(struct rte_crypto_asym_op *asym_op, } } else { if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_DECRYPT) { - switch (asym_op->rsa.padding.type) { + switch (xform->rsa.padding.type) { case RTE_CRYPTO_RSA_PADDING_NONE: rte_memcpy(asym_op->rsa.message.data, cookie->output_array[0], @@ -1105,7 +1106,7 @@ qat_asym_collect_response(struct rte_crypto_op *op, case RTE_CRYPTO_ASYM_XFORM_MODINV: return modinv_collect(asym_op, cookie, xform); case RTE_CRYPTO_ASYM_XFORM_RSA: - return rsa_collect(asym_op, cookie); + return rsa_collect(asym_op, cookie, xform); case RTE_CRYPTO_ASYM_XFORM_ECDSA: return ecdsa_collect(asym_op, cookie); case RTE_CRYPTO_ASYM_XFORM_ECPM: diff --git a/examples/fips_validation/main.c b/examples/fips_validation/main.c index 7ae2c6c007..c7a78b41de 100644 --- a/examples/fips_validation/main.c +++ b/examples/fips_validation/main.c @@ -926,31 +926,7 @@ prepare_rsa_op(void) __rte_crypto_op_reset(env.op, RTE_CRYPTO_OP_TYPE_ASYMMETRIC); asym = env.op->asym; - asym->rsa.padding.type = 
info.interim_info.rsa_data.padding; - asym->rsa.padding.hash = info.interim_info.rsa_data.auth; - if (env.digest) { - if (asym->rsa.padding.type == RTE_CRYPTO_RSA_PADDING_PKCS1_5) { - int b_len = 0; - uint8_t b[32]; - - b_len = get_hash_oid(asym->rsa.padding.hash, b); - if (b_len < 0) { - RTE_LOG(ERR, USER1, "Failed to get digest info for hash %d\n", - asym->rsa.padding.hash); - return -EINVAL; - } - - if (b_len) { - msg.len = env.digest_len + b_len; - msg.val = rte_zmalloc(NULL, msg.len, 0); - rte_memcpy(msg.val, b, b_len); - rte_memcpy(msg.val + b_len, env.digest, env.digest_len); - rte_free(env.digest); - env.digest = msg.val; - env.digest_len = msg.len; - } - } msg.val = env.digest; msg.len = env.digest_len; } else { @@ -1536,6 +1512,34 @@ prepare_rsa_xform(struct rte_crypto_asym_xform *xform) xform->rsa.e.length = vec.rsa.e.len; xform->rsa.n.data = vec.rsa.n.val; xform->rsa.n.length = vec.rsa.n.len; + + xform->rsa.padding.type = info.interim_info.rsa_data.padding; + xform->rsa.padding.hash = info.interim_info.rsa_data.auth; + if (env.digest) { + if (xform->rsa.padding.type == RTE_CRYPTO_RSA_PADDING_PKCS1_5) { + struct fips_val msg; + int b_len = 0; + uint8_t b[32]; + + b_len = get_hash_oid(xform->rsa.padding.hash, b); + if (b_len < 0) { + RTE_LOG(ERR, USER1, "Failed to get digest info for hash %d\n", + xform->rsa.padding.hash); + return -EINVAL; + } + + if (b_len) { + msg.len = env.digest_len + b_len; + msg.val = rte_zmalloc(NULL, msg.len, 0); + rte_memcpy(msg.val, b, b_len); + rte_memcpy(msg.val + b_len, env.digest, env.digest_len); + rte_free(env.digest); + env.digest = msg.val; + env.digest_len = msg.len; + } + } + } + return 0; } diff --git a/lib/cryptodev/rte_crypto_asym.h b/lib/cryptodev/rte_crypto_asym.h index 39d3da3952..64d8a42f6a 100644 --- a/lib/cryptodev/rte_crypto_asym.h +++ b/lib/cryptodev/rte_crypto_asym.h @@ -312,6 +312,9 @@ struct rte_crypto_rsa_xform { struct rte_crypto_rsa_priv_key_qt qt; /**< qt - Private key in quintuple format */ }; + + 
struct rte_crypto_rsa_padding padding;
+	/**< RSA padding information */
 };
 
 /**
@@ -447,9 +450,6 @@ struct rte_crypto_rsa_op_param {
 	 * This could be validated and overwritten by the PMD
 	 * with the signature length.
 	 */
-
-	struct rte_crypto_rsa_padding padding;
-	/**< RSA padding information */
 };
 
 /**

From patchwork Thu Sep 5 14:56:06 2024
X-Patchwork-Submitter: Gowrishankar Muthukrishnan
X-Patchwork-Id: 143669
From: Gowrishankar Muthukrishnan
Subject: [PATCH 2/6] cryptodev: fix RSA xform for ASN.1 syntax
Date: Thu, 5 Sep 2024 20:26:06 +0530
Message-ID: <20240905145612.1732-3-gmuthukrishn@marvell.com>
In-Reply-To: <20240905145612.1732-1-gmuthukrishn@marvell.com>

As per the ASN.1 syntax (RFC 3447, Appendix A.1.2), an RSA private key
specifies the quintuple along with the private exponent. It is up to the
implementation to handle this internally; RTE itself should not make the
two mutually exclusive. Removing the union between them lets the
asymmetric implementation in virtio consume the xform as laid out by the
ASN.1 syntax.

Signed-off-by: Gowrishankar Muthukrishnan
---
 lib/cryptodev/rte_crypto_asym.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/cryptodev/rte_crypto_asym.h b/lib/cryptodev/rte_crypto_asym.h
index 64d8a42f6a..398b6514e3 100644
--- a/lib/cryptodev/rte_crypto_asym.h
+++ b/lib/cryptodev/rte_crypto_asym.h
@@ -306,7 +306,7 @@ struct rte_crypto_rsa_xform {
 
 	enum rte_crypto_rsa_priv_key_type key_type;
 
-	union {
+	struct {
 		rte_crypto_uint d;
 		/**< the RSA private exponent */
 		struct rte_crypto_rsa_priv_key_qt qt;

From patchwork Thu Sep 5 14:56:07 2024
X-Patchwork-Submitter: Gowrishankar Muthukrishnan
X-Patchwork-Id: 143670
From: Gowrishankar Muthukrishnan
Subject: [PATCH 3/6] vhost: add asymmetric RSA support
Date: Thu, 5 Sep 2024 20:26:07 +0530
Message-ID: <20240905145612.1732-4-gmuthukrishn@marvell.com>
In-Reply-To: <20240905145612.1732-1-gmuthukrishn@marvell.com>

Support asymmetric RSA crypto operations in vhost-user.

Signed-off-by: Gowrishankar Muthukrishnan
---
 lib/cryptodev/cryptodev_pmd.h |   6 +
 lib/vhost/rte_vhost_crypto.h  |  14 +-
 lib/vhost/vhost.c             |  11 +-
 lib/vhost/vhost.h             |   1 +
 lib/vhost/vhost_crypto.c      | 550 +++++++++++++++++++++++++++++++---
 lib/vhost/vhost_user.c        |   4 +
 lib/vhost/vhost_user.h        |  34 ++-
 lib/vhost/virtio_crypto.h     |  87 +++++-
 8 files changed, 654 insertions(+), 53 deletions(-)

diff --git a/lib/cryptodev/cryptodev_pmd.h b/lib/cryptodev/cryptodev_pmd.h
index 6c114f7181..ff3756723d 100644
--- a/lib/cryptodev/cryptodev_pmd.h
+++ b/lib/cryptodev/cryptodev_pmd.h
@@ -697,6 +697,12 @@ struct rte_cryptodev_asym_session {
 	uint8_t sess_private_data[];
 };
 
+/**
+ * Helper macro to get session private data
+ */
+#define CRYPTODEV_GET_ASYM_SESS_PRIV(s) \
+	((void *)(((struct rte_cryptodev_asym_session *)s)->sess_private_data))
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/vhost/rte_vhost_crypto.h b/lib/vhost/rte_vhost_crypto.h
index f962a53818..274f737990 100644
--- a/lib/vhost/rte_vhost_crypto.h
+++ b/lib/vhost/rte_vhost_crypto.h
@@ -49,8 +49,10 @@ rte_vhost_crypto_driver_start(const char *path);
  * @param cryptodev_id
  *  The identifier of DPDK Cryptodev, the same cryptodev_id can be assigned to
  *  multiple Vhost-crypto devices.
- * @param sess_pool
- *  The pointer to the created cryptodev session pool.
+ * @param sym_sess_pool
+ *  The pointer to the created cryptodev sym session pool.
+ * @param asym_sess_pool
+ *  The pointer to the created cryptodev asym session pool.
  * @param socket_id
  *  NUMA Socket ID to allocate resources on.
* * @return @@ -59,7 +61,7 @@ rte_vhost_crypto_driver_start(const char *path); */ int rte_vhost_crypto_create(int vid, uint8_t cryptodev_id, - struct rte_mempool *sess_pool, + struct rte_mempool *sym_sess_pool, struct rte_mempool *asym_sess_pool, int socket_id); /** @@ -113,6 +115,10 @@ rte_vhost_crypto_fetch_requests(int vid, uint32_t qid, * dequeued from the cryptodev, this function shall be called to write the * processed data back to the vring descriptor (if no-copy is turned off). * + * @param vid + * The identifier of the vhost device. + * @param qid + * Virtio queue index. * @param ops * The address of an array of *rte_crypto_op* structure that was dequeued * from cryptodev. @@ -127,7 +133,7 @@ rte_vhost_crypto_fetch_requests(int vid, uint32_t qid, * The number of ops processed. */ uint16_t -rte_vhost_crypto_finalize_requests(struct rte_crypto_op **ops, +rte_vhost_crypto_finalize_requests(int vid, int qid, struct rte_crypto_op **ops, uint16_t nb_ops, int *callfds, uint16_t *nb_callfds); #ifdef __cplusplus diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c index ac71d17784..c9048e4f24 100644 --- a/lib/vhost/vhost.c +++ b/lib/vhost/vhost.c @@ -636,8 +636,12 @@ alloc_vring_queue(struct virtio_net *dev, uint32_t vring_idx) /* Also allocate holes, if any, up to requested vring index. 
*/ for (i = 0; i <= vring_idx; i++) { - if (dev->virtqueue[i]) + rte_spinlock_lock(&dev->virtqueue_lock); + if (dev->virtqueue[i]) { + rte_spinlock_unlock(&dev->virtqueue_lock); continue; + } + rte_spinlock_unlock(&dev->virtqueue_lock); vq = rte_zmalloc(NULL, sizeof(struct vhost_virtqueue), 0); if (vq == NULL) { @@ -647,13 +651,15 @@ alloc_vring_queue(struct virtio_net *dev, uint32_t vring_idx) return -1; } - dev->virtqueue[i] = vq; init_vring_queue(dev, vq, i); rte_rwlock_init(&vq->access_lock); rte_rwlock_init(&vq->iotlb_lock); vq->avail_wrap_counter = 1; vq->used_wrap_counter = 1; vq->signalled_used_valid = false; + rte_spinlock_lock(&dev->virtqueue_lock); + dev->virtqueue[i] = vq; + rte_spinlock_unlock(&dev->virtqueue_lock); } dev->nr_vring = RTE_MAX(dev->nr_vring, vring_idx + 1); @@ -740,6 +746,7 @@ vhost_new_device(struct vhost_backend_ops *ops) dev->postcopy_ufd = -1; rte_spinlock_init(&dev->backend_req_lock); dev->backend_ops = ops; + rte_spinlock_init(&dev->virtqueue_lock); return i; } diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h index cd3fa55f1b..ad33948e8b 100644 --- a/lib/vhost/vhost.h +++ b/lib/vhost/vhost.h @@ -494,6 +494,7 @@ struct __rte_cache_aligned virtio_net { int extbuf; int linearbuf; + rte_spinlock_t virtqueue_lock; struct vhost_virtqueue *virtqueue[VHOST_MAX_QUEUE_PAIRS * 2]; rte_rwlock_t iotlb_pending_lock; diff --git a/lib/vhost/vhost_crypto.c b/lib/vhost/vhost_crypto.c index 7caf6d9afa..f98fa669ea 100644 --- a/lib/vhost/vhost_crypto.c +++ b/lib/vhost/vhost_crypto.c @@ -54,6 +54,14 @@ RTE_LOG_REGISTER_SUFFIX(vhost_crypto_logtype, crypto, INFO); */ #define vhost_crypto_desc vring_desc +struct vhost_crypto_session { + union { + struct rte_cryptodev_asym_session *asym; + struct rte_cryptodev_sym_session *sym; + }; + enum rte_crypto_op_type type; +}; + static int cipher_algo_transform(uint32_t virtio_cipher_algo, enum rte_crypto_cipher_algorithm *algo) @@ -197,7 +205,8 @@ struct __rte_cache_aligned vhost_crypto { */ struct rte_hash 
*session_map; struct rte_mempool *mbuf_pool; - struct rte_mempool *sess_pool; + struct rte_mempool *sym_sess_pool; + struct rte_mempool *asym_sess_pool; struct rte_mempool *wb_pool; /** DPDK cryptodev ID */ @@ -206,8 +215,10 @@ struct __rte_cache_aligned vhost_crypto { uint64_t last_session_id; - uint64_t cache_session_id; - struct rte_cryptodev_sym_session *cache_session; + uint64_t cache_sym_session_id; + void *cache_sym_session; + uint64_t cache_asym_session_id; + void *cache_asym_session; /** socket id for the device */ int socket_id; @@ -237,7 +248,7 @@ struct vhost_crypto_data_req { static int transform_cipher_param(struct rte_crypto_sym_xform *xform, - VhostUserCryptoSessionParam *param) + VhostUserCryptoSymSessionParam *param) { int ret; @@ -273,7 +284,7 @@ transform_cipher_param(struct rte_crypto_sym_xform *xform, static int transform_chain_param(struct rte_crypto_sym_xform *xforms, - VhostUserCryptoSessionParam *param) + VhostUserCryptoSymSessionParam *param) { struct rte_crypto_sym_xform *xform_cipher, *xform_auth; int ret; @@ -334,17 +345,17 @@ transform_chain_param(struct rte_crypto_sym_xform *xforms, } static void -vhost_crypto_create_sess(struct vhost_crypto *vcrypto, +vhost_crypto_create_sym_sess(struct vhost_crypto *vcrypto, VhostUserCryptoSessionParam *sess_param) { struct rte_crypto_sym_xform xform1 = {0}, xform2 = {0}; struct rte_cryptodev_sym_session *session; int ret; - switch (sess_param->op_type) { + switch (sess_param->u.sym_sess.op_type) { case VIRTIO_CRYPTO_SYM_OP_NONE: case VIRTIO_CRYPTO_SYM_OP_CIPHER: - ret = transform_cipher_param(&xform1, sess_param); + ret = transform_cipher_param(&xform1, &sess_param->u.sym_sess); if (unlikely(ret)) { VC_LOG_ERR("Error transform session msg (%i)", ret); sess_param->session_id = ret; @@ -352,7 +363,7 @@ vhost_crypto_create_sess(struct vhost_crypto *vcrypto, } break; case VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING: - if (unlikely(sess_param->hash_mode != + if (unlikely(sess_param->u.sym_sess.hash_mode != 
VIRTIO_CRYPTO_SYM_HASH_MODE_AUTH)) { sess_param->session_id = -VIRTIO_CRYPTO_NOTSUPP; VC_LOG_ERR("Error transform session message (%i)", @@ -362,7 +373,7 @@ vhost_crypto_create_sess(struct vhost_crypto *vcrypto, xform1.next = &xform2; - ret = transform_chain_param(&xform1, sess_param); + ret = transform_chain_param(&xform1, &sess_param->u.sym_sess); if (unlikely(ret)) { VC_LOG_ERR("Error transform session message (%i)", ret); sess_param->session_id = ret; @@ -377,7 +388,7 @@ vhost_crypto_create_sess(struct vhost_crypto *vcrypto, } session = rte_cryptodev_sym_session_create(vcrypto->cid, &xform1, - vcrypto->sess_pool); + vcrypto->sym_sess_pool); if (!session) { VC_LOG_ERR("Failed to create session"); sess_param->session_id = -VIRTIO_CRYPTO_ERR; @@ -402,22 +413,282 @@ vhost_crypto_create_sess(struct vhost_crypto *vcrypto, vcrypto->last_session_id++; } +static int +tlv_decode(uint8_t *tlv, uint8_t type, uint8_t **data, size_t *data_len) +{ + size_t tlen = -EINVAL, len; + + if (tlv[0] != type) + return -EINVAL; + + if (tlv[1] == 0x82) { + len = (tlv[2] << 8) | tlv[3]; + *data = rte_malloc(NULL, len, 0); + rte_memcpy(*data, &tlv[4], len); + tlen = len + 4; + } else if (tlv[1] == 0x81) { + len = tlv[2]; + *data = rte_malloc(NULL, len, 0); + rte_memcpy(*data, &tlv[3], len); + tlen = len + 3; + } else { + len = tlv[1]; + *data = rte_malloc(NULL, len, 0); + rte_memcpy(*data, &tlv[2], len); + tlen = len + 2; + } + + *data_len = len; + return tlen; +} + +static int +virtio_crypto_asym_rsa_der_to_xform(uint8_t *der, size_t der_len, + struct rte_crypto_asym_xform *xform) +{ + uint8_t *n = NULL, *e = NULL, *d = NULL, *p = NULL, *q = NULL, *dp = NULL, + *dq = NULL, *qinv = NULL, *v = NULL, *tlv; + size_t nlen, elen, dlen, plen, qlen, dplen, dqlen, qinvlen, vlen; + int len, i; + + RTE_SET_USED(der_len); + + for (i = 0; i < 8; i++) { + if (der[i] == 0x30) { + der = &der[i]; + break; + } + } + + if (der[0] != 0x30) + return -EINVAL; + + if (der[1] == 0x82) + tlv = &der[4]; + else if 
(der[1] == 0x81) + tlv = &der[3]; + else + return -EINVAL; + + len = tlv_decode(tlv, 0x02, &v, &vlen); + if (len < 0 || v[0] != 0x0 || vlen != 1) { + len = -EINVAL; + goto _error; + } + + tlv = tlv + len; + len = tlv_decode(tlv, 0x02, &n, &nlen); + if (len < 0) + goto _error; + + tlv = tlv + len; + len = tlv_decode(tlv, 0x02, &e, &elen); + if (len < 0) + goto _error; + + tlv = tlv + len; + len = tlv_decode(tlv, 0x02, &d, &dlen); + if (len < 0) + goto _error; + + tlv = tlv + len; + len = tlv_decode(tlv, 0x02, &p, &plen); + if (len < 0) + goto _error; + + tlv = tlv + len; + len = tlv_decode(tlv, 0x02, &q, &qlen); + if (len < 0) + goto _error; + + tlv = tlv + len; + len = tlv_decode(tlv, 0x02, &dp, &dplen); + if (len < 0) + goto _error; + + tlv = tlv + len; + len = tlv_decode(tlv, 0x02, &dq, &dqlen); + if (len < 0) + goto _error; + + tlv = tlv + len; + len = tlv_decode(tlv, 0x02, &qinv, &qinvlen); + if (len < 0) + goto _error; + + xform->rsa.n.data = n; + xform->rsa.n.length = nlen; + xform->rsa.e.data = e; + xform->rsa.e.length = elen; + xform->rsa.d.data = d; + xform->rsa.d.length = dlen; + xform->rsa.qt.p.data = p; + xform->rsa.qt.p.length = plen; + xform->rsa.qt.q.data = q; + xform->rsa.qt.q.length = qlen; + xform->rsa.qt.dP.data = dp; + xform->rsa.qt.dP.length = dplen; + xform->rsa.qt.dQ.data = dq; + xform->rsa.qt.dQ.length = dqlen; + xform->rsa.qt.qInv.data = qinv; + xform->rsa.qt.qInv.length = qinvlen; + + RTE_ASSERT((tlv + len - &der[0]) == der_len); + return 0; +_error: + rte_free(v); + rte_free(n); + rte_free(e); + rte_free(d); + rte_free(p); + rte_free(q); + rte_free(dp); + rte_free(dq); + rte_free(qinv); + return len; +} + +static int +transform_rsa_param(struct rte_crypto_asym_xform *xform, + VhostUserCryptoAsymSessionParam *param) +{ + int ret = -EINVAL; + + ret = virtio_crypto_asym_rsa_der_to_xform(param->key_buf, param->key_len, xform); + if (ret < 0) + goto _error; + + switch (param->u.rsa.padding_algo) { + case VIRTIO_CRYPTO_RSA_RAW_PADDING: + 
xform->rsa.padding.type = RTE_CRYPTO_RSA_PADDING_NONE; + break; + case VIRTIO_CRYPTO_RSA_PKCS1_PADDING: + xform->rsa.padding.type = RTE_CRYPTO_RSA_PADDING_PKCS1_5; + break; + default: + VC_LOG_ERR("Unknown padding type"); + goto _error; + } + + switch (param->u.rsa.private_key_type) { + case VIRTIO_CRYPTO_RSA_PRIVATE_KEY_EXP: + xform->rsa.key_type = RTE_RSA_KEY_TYPE_EXP; + break; + case VIRTIO_CRYPTO_RSA_PRIVATE_KEY_QT: + xform->rsa.key_type = RTE_RSA_KEY_TYPE_QT; + break; + default: + VC_LOG_ERR("Unknown private key type"); + goto _error; + } + + xform->xform_type = RTE_CRYPTO_ASYM_XFORM_RSA; +_error: + return ret; +} + +static void +vhost_crypto_create_asym_sess(struct vhost_crypto *vcrypto, + VhostUserCryptoSessionParam *sess_param) +{ + struct rte_cryptodev_asym_session *session = NULL; + struct vhost_crypto_session *vhost_session; + struct rte_crypto_asym_xform xform = {0}; + int ret; + + switch (sess_param->u.asym_sess.algo) { + case VIRTIO_CRYPTO_AKCIPHER_RSA: + ret = transform_rsa_param(&xform, &sess_param->u.asym_sess); + if (unlikely(ret)) { + VC_LOG_ERR("Error transform session msg (%i)", ret); + sess_param->session_id = ret; + return; + } + break; + default: + VC_LOG_ERR("Invalid op algo"); + sess_param->session_id = -VIRTIO_CRYPTO_ERR; + return; + } + + ret = rte_cryptodev_asym_session_create(vcrypto->cid, &xform, + vcrypto->asym_sess_pool, (void *)&session); + if (!session) { + VC_LOG_ERR("Failed to create session"); + sess_param->session_id = -VIRTIO_CRYPTO_ERR; + return; + } + + /* insert session to map */ + vhost_session = rte_malloc(NULL, sizeof(*vhost_session), 0); + if (vhost_session == NULL) { + VC_LOG_ERR("Failed to alloc session memory"); + sess_param->session_id = -VIRTIO_CRYPTO_ERR; + return; + } + + vhost_session->type = RTE_CRYPTO_OP_TYPE_ASYMMETRIC; + vhost_session->asym = session; + if ((rte_hash_add_key_data(vcrypto->session_map, + &vcrypto->last_session_id, vhost_session) < 0)) { + VC_LOG_ERR("Failed to insert session to hash table"); 
+ + if (rte_cryptodev_asym_session_free(vcrypto->cid, session) < 0) + VC_LOG_ERR("Failed to free session"); + sess_param->session_id = -VIRTIO_CRYPTO_ERR; + return; + } + + VC_LOG_INFO("Session %"PRIu64" created for vdev %i.", + vcrypto->last_session_id, vcrypto->dev->vid); + + sess_param->session_id = vcrypto->last_session_id; + vcrypto->last_session_id++; +} + +static void +vhost_crypto_create_sess(struct vhost_crypto *vcrypto, + VhostUserCryptoSessionParam *sess_param) +{ + if (sess_param->op_code == VIRTIO_CRYPTO_AKCIPHER_CREATE_SESSION) + vhost_crypto_create_asym_sess(vcrypto, sess_param); + else + vhost_crypto_create_sym_sess(vcrypto, sess_param); +} + static int vhost_crypto_close_sess(struct vhost_crypto *vcrypto, uint64_t session_id) { - struct rte_cryptodev_sym_session *session; + struct rte_cryptodev_asym_session *asym_session = NULL; + struct rte_cryptodev_sym_session *sym_session = NULL; + struct vhost_crypto_session *vhost_session = NULL; uint64_t sess_id = session_id; int ret; ret = rte_hash_lookup_data(vcrypto->session_map, &sess_id, - (void **)&session); - + (void **)&vhost_session); if (unlikely(ret < 0)) { - VC_LOG_ERR("Failed to delete session %"PRIu64".", session_id); + VC_LOG_ERR("Failed to find session for id %"PRIu64".", session_id); return -VIRTIO_CRYPTO_INVSESS; } - if (rte_cryptodev_sym_session_free(vcrypto->cid, session) < 0) { + if (vhost_session->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) { + sym_session = vhost_session->sym; + } else if (vhost_session->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC) { + asym_session = vhost_session->asym; + } else { + VC_LOG_ERR("Invalid session for id %"PRIu64".", session_id); + return -VIRTIO_CRYPTO_INVSESS; + } + + if (sym_session != NULL && + rte_cryptodev_sym_session_free(vcrypto->cid, sym_session) < 0) { + VC_LOG_DBG("Failed to free session"); + return -VIRTIO_CRYPTO_ERR; + } + + if (asym_session != NULL && + rte_cryptodev_asym_session_free(vcrypto->cid, asym_session) < 0) { VC_LOG_DBG("Failed to free 
session"); return -VIRTIO_CRYPTO_ERR; } @@ -430,6 +701,7 @@ vhost_crypto_close_sess(struct vhost_crypto *vcrypto, uint64_t session_id) VC_LOG_INFO("Session %"PRIu64" deleted for vdev %i.", sess_id, vcrypto->dev->vid); + rte_free(vhost_session); return 0; } @@ -1123,6 +1395,118 @@ prepare_sym_chain_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op, return ret; } +static __rte_always_inline uint8_t +vhost_crypto_check_akcipher_request(struct virtio_crypto_akcipher_data_req *req) +{ + RTE_SET_USED(req); + return VIRTIO_CRYPTO_OK; +} + +static __rte_always_inline uint8_t +prepare_asym_rsa_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op, + struct vhost_crypto_data_req *vc_req, + struct virtio_crypto_op_data_req *req, + struct vhost_crypto_desc *head, + uint32_t max_n_descs) +{ + uint8_t ret = vhost_crypto_check_akcipher_request(&req->u.akcipher_req); + struct rte_crypto_rsa_op_param *rsa = &op->asym->rsa; + struct vhost_crypto_desc *desc = head; + uint16_t wlen = 0; + + if (unlikely(ret != VIRTIO_CRYPTO_OK)) + goto error_exit; + + /* prepare */ + switch (vcrypto->option) { + case RTE_VHOST_CRYPTO_ZERO_COPY_DISABLE: + vc_req->wb_pool = vcrypto->wb_pool; + if (req->header.opcode == VIRTIO_CRYPTO_AKCIPHER_SIGN) { + rsa->op_type = RTE_CRYPTO_ASYM_OP_SIGN; + rsa->message.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO); + rsa->message.length = req->u.akcipher_req.para.src_data_len; + rsa->sign.length = req->u.akcipher_req.para.dst_data_len; + wlen = rsa->sign.length; + desc = find_write_desc(head, desc, max_n_descs); + if (unlikely(!desc)) { + VC_LOG_ERR("Cannot find write location"); + ret = VIRTIO_CRYPTO_BADMSG; + goto error_exit; + } + + rsa->sign.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RW); + if (unlikely(rsa->sign.data == NULL)) { + ret = VIRTIO_CRYPTO_ERR; + goto error_exit; + } + + desc += 1; + } else if (req->header.opcode == VIRTIO_CRYPTO_AKCIPHER_VERIFY) { + rsa->op_type = RTE_CRYPTO_ASYM_OP_VERIFY; + rsa->sign.data = get_data_ptr(vc_req, 
desc, VHOST_ACCESS_RO); + rsa->sign.length = req->u.akcipher_req.para.src_data_len; + desc += 1; + rsa->message.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO); + rsa->message.length = req->u.akcipher_req.para.dst_data_len; + desc += 1; + } else if (req->header.opcode == VIRTIO_CRYPTO_AKCIPHER_ENCRYPT) { + rsa->op_type = RTE_CRYPTO_ASYM_OP_ENCRYPT; + rsa->message.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO); + rsa->message.length = req->u.akcipher_req.para.src_data_len; + rsa->cipher.length = req->u.akcipher_req.para.dst_data_len; + wlen = rsa->cipher.length; + desc = find_write_desc(head, desc, max_n_descs); + if (unlikely(!desc)) { + VC_LOG_ERR("Cannot find write location"); + ret = VIRTIO_CRYPTO_BADMSG; + goto error_exit; + } + + rsa->cipher.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RW); + if (unlikely(rsa->cipher.data == NULL)) { + ret = VIRTIO_CRYPTO_ERR; + goto error_exit; + } + + desc += 1; + } else if (req->header.opcode == VIRTIO_CRYPTO_AKCIPHER_DECRYPT) { + rsa->op_type = RTE_CRYPTO_ASYM_OP_DECRYPT; + rsa->cipher.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO); + rsa->cipher.length = req->u.akcipher_req.para.src_data_len; + desc += 1; + rsa->message.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO); + rsa->message.length = req->u.akcipher_req.para.dst_data_len; + desc += 1; + } else { + goto error_exit; + } + break; + case RTE_VHOST_CRYPTO_ZERO_COPY_ENABLE: + default: + ret = VIRTIO_CRYPTO_BADMSG; + goto error_exit; + } + + op->type = RTE_CRYPTO_OP_TYPE_ASYMMETRIC; + op->sess_type = RTE_CRYPTO_OP_WITH_SESSION; + + vc_req->inhdr = get_data_ptr(vc_req, desc, VHOST_ACCESS_WO); + if (unlikely(vc_req->inhdr == NULL)) { + ret = VIRTIO_CRYPTO_BADMSG; + goto error_exit; + } + + vc_req->inhdr->status = VIRTIO_CRYPTO_OK; + vc_req->len = wlen + INHDR_LEN; + return 0; +error_exit: + if (vc_req->wb) + free_wb_data(vc_req->wb, vc_req->wb_pool); + + vc_req->len = INHDR_LEN; + return ret; +} + /** * Process on descriptor */ @@ -1133,17 +1517,21 @@ 
vhost_crypto_process_one_req(struct vhost_crypto *vcrypto, uint16_t desc_idx) __rte_no_thread_safety_analysis /* FIXME: requires iotlb_lock? */ { - struct vhost_crypto_data_req *vc_req = rte_mbuf_to_priv(op->sym->m_src); - struct rte_cryptodev_sym_session *session; + struct vhost_crypto_data_req *vc_req, *vc_req_out; + struct rte_cryptodev_asym_session *asym_session; + struct rte_cryptodev_sym_session *sym_session; + struct vhost_crypto_session *vhost_session; + struct vhost_crypto_desc *desc = descs; + uint32_t nb_descs = 0, max_n_descs, i; + struct vhost_crypto_data_req data_req; struct virtio_crypto_op_data_req req; struct virtio_crypto_inhdr *inhdr; - struct vhost_crypto_desc *desc = descs; struct vring_desc *src_desc; uint64_t session_id; uint64_t dlen; - uint32_t nb_descs = 0, max_n_descs, i; int err; + vc_req = &data_req; vc_req->desc_idx = desc_idx; vc_req->dev = vcrypto->dev; vc_req->vq = vq; @@ -1226,12 +1614,14 @@ vhost_crypto_process_one_req(struct vhost_crypto *vcrypto, switch (req.header.opcode) { case VIRTIO_CRYPTO_CIPHER_ENCRYPT: case VIRTIO_CRYPTO_CIPHER_DECRYPT: + vc_req_out = rte_mbuf_to_priv(op->sym->m_src); + rte_memcpy(vc_req_out, vc_req, sizeof(struct vhost_crypto_data_req)); session_id = req.header.session_id; /* one branch to avoid unnecessary table lookup */ - if (vcrypto->cache_session_id != session_id) { + if (vcrypto->cache_sym_session_id != session_id) { err = rte_hash_lookup_data(vcrypto->session_map, - &session_id, (void **)&session); + &session_id, (void **)&vhost_session); if (unlikely(err < 0)) { err = VIRTIO_CRYPTO_ERR; VC_LOG_ERR("Failed to find session %"PRIu64, @@ -1239,13 +1629,14 @@ vhost_crypto_process_one_req(struct vhost_crypto *vcrypto, goto error_exit; } - vcrypto->cache_session = session; - vcrypto->cache_session_id = session_id; + vcrypto->cache_sym_session = vhost_session->sym; + vcrypto->cache_sym_session_id = session_id; } - session = vcrypto->cache_session; + sym_session = vcrypto->cache_sym_session; + op->type = 
RTE_CRYPTO_OP_TYPE_SYMMETRIC; - err = rte_crypto_op_attach_sym_session(op, session); + err = rte_crypto_op_attach_sym_session(op, sym_session); if (unlikely(err < 0)) { err = VIRTIO_CRYPTO_ERR; VC_LOG_ERR("Failed to attach session to op"); @@ -1257,12 +1648,12 @@ vhost_crypto_process_one_req(struct vhost_crypto *vcrypto, err = VIRTIO_CRYPTO_NOTSUPP; break; case VIRTIO_CRYPTO_SYM_OP_CIPHER: - err = prepare_sym_cipher_op(vcrypto, op, vc_req, + err = prepare_sym_cipher_op(vcrypto, op, vc_req_out, &req.u.sym_req.u.cipher, desc, max_n_descs); break; case VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING: - err = prepare_sym_chain_op(vcrypto, op, vc_req, + err = prepare_sym_chain_op(vcrypto, op, vc_req_out, &req.u.sym_req.u.chain, desc, max_n_descs); break; @@ -1271,6 +1662,53 @@ vhost_crypto_process_one_req(struct vhost_crypto *vcrypto, VC_LOG_ERR("Failed to process sym request"); goto error_exit; } + break; + case VIRTIO_CRYPTO_AKCIPHER_SIGN: + case VIRTIO_CRYPTO_AKCIPHER_VERIFY: + case VIRTIO_CRYPTO_AKCIPHER_ENCRYPT: + case VIRTIO_CRYPTO_AKCIPHER_DECRYPT: + session_id = req.header.session_id; + + /* one branch to avoid unnecessary table lookup */ + if (vcrypto->cache_asym_session_id != session_id) { + err = rte_hash_lookup_data(vcrypto->session_map, + &session_id, (void **)&vhost_session); + if (unlikely(err < 0)) { + err = VIRTIO_CRYPTO_ERR; + VC_LOG_ERR("Failed to find asym session %"PRIu64, + session_id); + goto error_exit; + } + + vcrypto->cache_asym_session = vhost_session->asym; + vcrypto->cache_asym_session_id = session_id; + } + + asym_session = vcrypto->cache_asym_session; + op->type = RTE_CRYPTO_OP_TYPE_ASYMMETRIC; + + err = rte_crypto_op_attach_asym_session(op, asym_session); + if (unlikely(err < 0)) { + err = VIRTIO_CRYPTO_ERR; + VC_LOG_ERR("Failed to attach asym session to op"); + goto error_exit; + } + + vc_req_out = rte_cryptodev_asym_session_get_user_data(asym_session); + rte_memcpy(vc_req_out, vc_req, sizeof(struct vhost_crypto_data_req)); + vc_req_out->wb = 
NULL; + + switch (req.header.algo) { + case VIRTIO_CRYPTO_AKCIPHER_RSA: + err = prepare_asym_rsa_op(vcrypto, op, vc_req_out, + &req, desc, max_n_descs); + break; + } + if (unlikely(err != 0)) { + VC_LOG_ERR("Failed to process asym request"); + goto error_exit; + } + break; default: err = VIRTIO_CRYPTO_ERR; @@ -1294,12 +1732,22 @@ static __rte_always_inline struct vhost_virtqueue * vhost_crypto_finalize_one_request(struct rte_crypto_op *op, struct vhost_virtqueue *old_vq) { - struct rte_mbuf *m_src = op->sym->m_src; - struct rte_mbuf *m_dst = op->sym->m_dst; - struct vhost_crypto_data_req *vc_req = rte_mbuf_to_priv(m_src); + struct rte_mbuf *m_src = NULL, *m_dst = NULL; + struct vhost_crypto_data_req *vc_req; struct vhost_virtqueue *vq; uint16_t used_idx, desc_idx; + if (op->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) { + m_src = op->sym->m_src; + m_dst = op->sym->m_dst; + vc_req = rte_mbuf_to_priv(m_src); + } else if (op->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC) { + vc_req = rte_cryptodev_asym_session_get_user_data(op->asym->session); + } else { + VC_LOG_ERR("Invalid crypto op type"); + return NULL; + } + if (unlikely(!vc_req)) { VC_LOG_ERR("Failed to retrieve vc_req"); return NULL; @@ -1321,25 +1769,36 @@ vhost_crypto_finalize_one_request(struct rte_crypto_op *op, vq->used->ring[desc_idx].id = vq->avail->ring[desc_idx]; vq->used->ring[desc_idx].len = vc_req->len; - rte_mempool_put(m_src->pool, (void *)m_src); - - if (m_dst) - rte_mempool_put(m_dst->pool, (void *)m_dst); + if (op->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) { + rte_mempool_put(m_src->pool, (void *)m_src); + if (m_dst) + rte_mempool_put(m_dst->pool, (void *)m_dst); + } return vc_req->vq; } static __rte_always_inline uint16_t -vhost_crypto_complete_one_vm_requests(struct rte_crypto_op **ops, +vhost_crypto_complete_one_vm_requests(int vid, int qid, struct rte_crypto_op **ops, uint16_t nb_ops, int *callfd) { + struct virtio_net *dev = get_device(vid); uint16_t processed = 1; struct vhost_virtqueue *vq, *tmp_vq; + if 
(unlikely(dev == NULL)) { + VC_LOG_ERR("Invalid vid %i", vid); + return 0; + } + if (unlikely(nb_ops == 0)) return 0; - vq = vhost_crypto_finalize_one_request(ops[0], NULL); + rte_spinlock_lock(&dev->virtqueue_lock); + tmp_vq = dev->virtqueue[qid]; + rte_spinlock_unlock(&dev->virtqueue_lock); + + vq = vhost_crypto_finalize_one_request(ops[0], tmp_vq); if (unlikely(vq == NULL)) return 0; tmp_vq = vq; @@ -1384,7 +1843,7 @@ rte_vhost_crypto_driver_start(const char *path) int rte_vhost_crypto_create(int vid, uint8_t cryptodev_id, - struct rte_mempool *sess_pool, + struct rte_mempool *sym_sess_pool, struct rte_mempool *asym_sess_pool, int socket_id) { struct virtio_net *dev = get_device(vid); @@ -1405,9 +1864,11 @@ rte_vhost_crypto_create(int vid, uint8_t cryptodev_id, return -ENOMEM; } - vcrypto->sess_pool = sess_pool; + vcrypto->sym_sess_pool = sym_sess_pool; + vcrypto->asym_sess_pool = asym_sess_pool; vcrypto->cid = cryptodev_id; - vcrypto->cache_session_id = UINT64_MAX; + vcrypto->cache_sym_session_id = UINT64_MAX; + vcrypto->cache_asym_session_id = UINT64_MAX; vcrypto->last_session_id = 1; vcrypto->dev = dev; vcrypto->option = RTE_VHOST_CRYPTO_ZERO_COPY_DISABLE; @@ -1578,7 +2039,12 @@ rte_vhost_crypto_fetch_requests(int vid, uint32_t qid, return 0; } + rte_spinlock_lock(&dev->virtqueue_lock); vq = dev->virtqueue[qid]; + rte_spinlock_unlock(&dev->virtqueue_lock); + + if (!vq || !vq->avail) + return 0; avail_idx = *((volatile uint16_t *)&vq->avail->idx); start_idx = vq->last_used_idx; @@ -1660,7 +2126,7 @@ rte_vhost_crypto_fetch_requests(int vid, uint32_t qid, } uint16_t -rte_vhost_crypto_finalize_requests(struct rte_crypto_op **ops, +rte_vhost_crypto_finalize_requests(int vid, int qid, struct rte_crypto_op **ops, uint16_t nb_ops, int *callfds, uint16_t *nb_callfds) { struct rte_crypto_op **tmp_ops = ops; @@ -1669,7 +2135,7 @@ rte_vhost_crypto_finalize_requests(struct rte_crypto_op **ops, uint16_t idx = 0; while (left) { - count = 
vhost_crypto_complete_one_vm_requests(tmp_ops, left, + count = vhost_crypto_complete_one_vm_requests(vid, qid, tmp_ops, left, &callfd); if (unlikely(count == 0)) break; diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c index 5f470da38a..b139c64a71 100644 --- a/lib/vhost/vhost_user.c +++ b/lib/vhost/vhost_user.c @@ -456,7 +456,9 @@ vhost_user_set_features(struct virtio_net **pdev, if (!vq) continue; + rte_spinlock_lock(&dev->virtqueue_lock); dev->virtqueue[dev->nr_vring] = NULL; + rte_spinlock_unlock(&dev->virtqueue_lock); cleanup_vq(vq, 1); cleanup_vq_inflight(dev, vq); /* vhost_user_lock_all_queue_pairs locked all qps */ @@ -600,7 +602,9 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) if (vq != dev->virtqueue[vq->index]) { VHOST_CONFIG_LOG(dev->ifname, INFO, "reallocated virtqueue on node %d", node); + rte_spinlock_lock(&dev->virtqueue_lock); dev->virtqueue[vq->index] = vq; + rte_spinlock_unlock(&dev->virtqueue_lock); } if (vq_is_packed(dev)) { diff --git a/lib/vhost/vhost_user.h b/lib/vhost/vhost_user.h index edf7adb3c0..6174f32dcf 100644 --- a/lib/vhost/vhost_user.h +++ b/lib/vhost/vhost_user.h @@ -99,11 +99,10 @@ typedef struct VhostUserLog { /* Comply with Cryptodev-Linux */ #define VHOST_USER_CRYPTO_MAX_HMAC_KEY_LENGTH 512 #define VHOST_USER_CRYPTO_MAX_CIPHER_KEY_LENGTH 64 +#define VHOST_USER_CRYPTO_MAX_KEY_LENGTH 1024 /* Same structure as vhost-user backend session info */ -typedef struct VhostUserCryptoSessionParam { - int64_t session_id; - uint32_t op_code; +typedef struct VhostUserCryptoSymSessionParam { uint32_t cipher_algo; uint32_t cipher_key_len; uint32_t hash_algo; @@ -114,10 +113,37 @@ typedef struct VhostUserCryptoSessionParam { uint8_t dir; uint8_t hash_mode; uint8_t chaining_dir; - uint8_t *ciphe_key; + uint8_t *cipher_key; uint8_t *auth_key; uint8_t cipher_key_buf[VHOST_USER_CRYPTO_MAX_CIPHER_KEY_LENGTH]; uint8_t auth_key_buf[VHOST_USER_CRYPTO_MAX_HMAC_KEY_LENGTH]; +} VhostUserCryptoSymSessionParam; + + 
+typedef struct VhostUserCryptoAsymRsaParam { + uint32_t padding_algo; + uint32_t hash_algo; + uint8_t private_key_type; +} VhostUserCryptoAsymRsaParam; + +typedef struct VhostUserCryptoAsymSessionParam { + uint32_t algo; + uint32_t key_type; + uint32_t key_len; + uint8_t *key; + union { + VhostUserCryptoAsymRsaParam rsa; + } u; + uint8_t key_buf[VHOST_USER_CRYPTO_MAX_KEY_LENGTH]; +} VhostUserCryptoAsymSessionParam; + +typedef struct VhostUserCryptoSessionParam { + uint32_t op_code; + union { + VhostUserCryptoSymSessionParam sym_sess; + VhostUserCryptoAsymSessionParam asym_sess; + } u; + uint64_t session_id; } VhostUserCryptoSessionParam; typedef struct VhostUserVringArea { diff --git a/lib/vhost/virtio_crypto.h b/lib/vhost/virtio_crypto.h index e3b93573c8..703a059768 100644 --- a/lib/vhost/virtio_crypto.h +++ b/lib/vhost/virtio_crypto.h @@ -9,6 +9,7 @@ #define VIRTIO_CRYPTO_SERVICE_HASH 1 #define VIRTIO_CRYPTO_SERVICE_MAC 2 #define VIRTIO_CRYPTO_SERVICE_AEAD 3 +#define VIRTIO_CRYPTO_SERVICE_AKCIPHER 4 #define VIRTIO_CRYPTO_OPCODE(service, op) (((service) << 8) | (op)) @@ -29,6 +30,10 @@ struct virtio_crypto_ctrl_header { VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AEAD, 0x02) #define VIRTIO_CRYPTO_AEAD_DESTROY_SESSION \ VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AEAD, 0x03) +#define VIRTIO_CRYPTO_AKCIPHER_CREATE_SESSION \ + VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x04) +#define VIRTIO_CRYPTO_AKCIPHER_DESTROY_SESSION \ + VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x05) uint32_t opcode; uint32_t algo; uint32_t flag; @@ -152,6 +157,63 @@ struct virtio_crypto_aead_create_session_req { uint8_t padding[32]; }; +struct virtio_crypto_rsa_session_para { +#define VIRTIO_CRYPTO_RSA_RAW_PADDING 0 +#define VIRTIO_CRYPTO_RSA_PKCS1_PADDING 1 + uint32_t padding_algo; + +#define VIRTIO_CRYPTO_RSA_NO_HASH 0 +#define VIRTIO_CRYPTO_RSA_MD2 1 +#define VIRTIO_CRYPTO_RSA_MD3 2 +#define VIRTIO_CRYPTO_RSA_MD4 3 +#define VIRTIO_CRYPTO_RSA_MD5 4 +#define 
VIRTIO_CRYPTO_RSA_SHA1 5 +#define VIRTIO_CRYPTO_RSA_SHA256 6 +#define VIRTIO_CRYPTO_RSA_SHA384 7 +#define VIRTIO_CRYPTO_RSA_SHA512 8 +#define VIRTIO_CRYPTO_RSA_SHA224 9 + uint32_t hash_algo; + +#define VIRTIO_CRYPTO_RSA_PRIVATE_KEY_UNKNOWN 0 +#define VIRTIO_CRYPTO_RSA_PRIVATE_KEY_EXP 1 +#define VIRTIO_CRYPTO_RSA_PRIVATE_KEY_QT 2 + uint8_t private_key_type; +}; + +struct virtio_crypto_ecdsa_session_para { +#define VIRTIO_CRYPTO_CURVE_UNKNOWN 0 +#define VIRTIO_CRYPTO_CURVE_NIST_P192 1 +#define VIRTIO_CRYPTO_CURVE_NIST_P224 2 +#define VIRTIO_CRYPTO_CURVE_NIST_P256 3 +#define VIRTIO_CRYPTO_CURVE_NIST_P384 4 +#define VIRTIO_CRYPTO_CURVE_NIST_P521 5 + uint32_t curve_id; + uint32_t padding; +}; + +struct virtio_crypto_akcipher_session_para { +#define VIRTIO_CRYPTO_NO_AKCIPHER 0 +#define VIRTIO_CRYPTO_AKCIPHER_RSA 1 +#define VIRTIO_CRYPTO_AKCIPHER_DSA 2 +#define VIRTIO_CRYPTO_AKCIPHER_ECDSA 3 + uint32_t algo; + +#define VIRTIO_CRYPTO_AKCIPHER_KEY_TYPE_PUBLIC 1 +#define VIRTIO_CRYPTO_AKCIPHER_KEY_TYPE_PRIVATE 2 + uint32_t keytype; + uint32_t keylen; + + union { + struct virtio_crypto_rsa_session_para rsa; + struct virtio_crypto_ecdsa_session_para ecdsa; + } u; +}; + +struct virtio_crypto_akcipher_create_session_req { + struct virtio_crypto_akcipher_session_para para; + uint8_t padding[36]; +}; + struct virtio_crypto_alg_chain_session_para { #define VIRTIO_CRYPTO_SYM_ALG_CHAIN_ORDER_HASH_THEN_CIPHER 1 #define VIRTIO_CRYPTO_SYM_ALG_CHAIN_ORDER_CIPHER_THEN_HASH 2 @@ -219,6 +281,8 @@ struct virtio_crypto_op_ctrl_req { mac_create_session; struct virtio_crypto_aead_create_session_req aead_create_session; + struct virtio_crypto_akcipher_create_session_req + akcipher_create_session; struct virtio_crypto_destroy_session_req destroy_session; uint8_t padding[56]; @@ -238,6 +302,14 @@ struct virtio_crypto_op_header { VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AEAD, 0x00) #define VIRTIO_CRYPTO_AEAD_DECRYPT \ VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AEAD, 0x01) +#define 
VIRTIO_CRYPTO_AKCIPHER_ENCRYPT \ + VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x00) +#define VIRTIO_CRYPTO_AKCIPHER_DECRYPT \ + VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x01) +#define VIRTIO_CRYPTO_AKCIPHER_SIGN \ + VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x02) +#define VIRTIO_CRYPTO_AKCIPHER_VERIFY \ + VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x03) uint32_t opcode; /* algo should be service-specific algorithms */ uint32_t algo; @@ -362,6 +434,16 @@ struct virtio_crypto_aead_data_req { uint8_t padding[32]; }; +struct virtio_crypto_akcipher_para { + uint32_t src_data_len; + uint32_t dst_data_len; +}; + +struct virtio_crypto_akcipher_data_req { + struct virtio_crypto_akcipher_para para; + uint8_t padding[40]; +}; + /* The request of the data virtqueue's packet */ struct virtio_crypto_op_data_req { struct virtio_crypto_op_header header; @@ -371,6 +453,7 @@ struct virtio_crypto_op_data_req { struct virtio_crypto_hash_data_req hash_req; struct virtio_crypto_mac_data_req mac_req; struct virtio_crypto_aead_data_req aead_req; + struct virtio_crypto_akcipher_data_req akcipher_req; uint8_t padding[48]; } u; }; @@ -380,6 +463,8 @@ struct virtio_crypto_op_data_req { #define VIRTIO_CRYPTO_BADMSG 2 #define VIRTIO_CRYPTO_NOTSUPP 3 #define VIRTIO_CRYPTO_INVSESS 4 /* Invalid session id */ +#define VIRTIO_CRYPTO_NOSPC 5 /* no free session ID */ +#define VIRTIO_CRYPTO_KEY_REJECTED 6 /* Signature verification failed */ /* The accelerator hardware is ready */ #define VIRTIO_CRYPTO_S_HW_READY (1 << 0) @@ -410,7 +495,7 @@ struct virtio_crypto_config { uint32_t max_cipher_key_len; /* Maximum length of authenticated key */ uint32_t max_auth_key_len; - uint32_t reserve; + uint32_t akcipher_algo; /* Maximum size of each crypto request's content */ uint64_t max_size; }; From patchwork Thu Sep 5 14:56:08 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gowrishankar 
Muthukrishnan X-Patchwork-Id: 143671 X-Patchwork-Delegate: gakhil@marvell.com From: Gowrishankar Muthukrishnan To: , Jay Zhou CC: Anoob Joseph , Akhil Goyal , Brian Dooley , "Gowrishankar Muthukrishnan" Subject: [PATCH 4/6] crypto/virtio: add asymmetric RSA support Date: Thu, 5 Sep 2024 20:26:08 +0530 Message-ID: <20240905145612.1732-5-gmuthukrishn@marvell.com> In-Reply-To: <20240905145612.1732-1-gmuthukrishn@marvell.com> References: <20240905145612.1732-1-gmuthukrishn@marvell.com> List-Id: DPDK patches and discussions

Asymmetric RSA operations (SIGN, VERIFY, ENCRYPT and DECRYPT) are supported in the virtio PMD.
Signed-off-by: Gowrishankar Muthukrishnan --- .../virtio/virtio_crypto_capabilities.h | 19 + drivers/crypto/virtio/virtio_cryptodev.c | 388 +++++++++++++++--- drivers/crypto/virtio/virtio_rxtx.c | 233 ++++++++++- 3 files changed, 572 insertions(+), 68 deletions(-) diff --git a/drivers/crypto/virtio/virtio_crypto_capabilities.h b/drivers/crypto/virtio/virtio_crypto_capabilities.h index 03c30deefd..1b26ff6720 100644 --- a/drivers/crypto/virtio/virtio_crypto_capabilities.h +++ b/drivers/crypto/virtio/virtio_crypto_capabilities.h @@ -48,4 +48,23 @@ }, } \ } +#define VIRTIO_ASYM_CAPABILITIES \ + { /* RSA */ \ + .op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC, \ + {.asym = { \ + .xform_capa = { \ + .xform_type = RTE_CRYPTO_ASYM_XFORM_RSA, \ + .op_types = ((1 << RTE_CRYPTO_ASYM_OP_SIGN) | \ + (1 << RTE_CRYPTO_ASYM_OP_VERIFY) | \ + (1 << RTE_CRYPTO_ASYM_OP_ENCRYPT) | \ + (1 << RTE_CRYPTO_ASYM_OP_DECRYPT)), \ + {.modlen = { \ + .min = 1, \ + .max = 1024, \ + .increment = 1 \ + }, } \ + } \ + }, } \ + } + #endif /* _VIRTIO_CRYPTO_CAPABILITIES_H_ */ diff --git a/drivers/crypto/virtio/virtio_cryptodev.c b/drivers/crypto/virtio/virtio_cryptodev.c index 4854820ba6..b2a9995c13 100644 --- a/drivers/crypto/virtio/virtio_cryptodev.c +++ b/drivers/crypto/virtio/virtio_cryptodev.c @@ -41,6 +41,11 @@ static void virtio_crypto_sym_clear_session(struct rte_cryptodev *dev, static int virtio_crypto_sym_configure_session(struct rte_cryptodev *dev, struct rte_crypto_sym_xform *xform, struct rte_cryptodev_sym_session *session); +static void virtio_crypto_asym_clear_session(struct rte_cryptodev *dev, + struct rte_cryptodev_asym_session *sess); +static int virtio_crypto_asym_configure_session(struct rte_cryptodev *dev, + struct rte_crypto_asym_xform *xform, + struct rte_cryptodev_asym_session *session); /* * The set of PCI devices this driver supports @@ -53,6 +58,7 @@ static const struct rte_pci_id pci_id_virtio_crypto_map[] = { static const struct rte_cryptodev_capabilities virtio_capabilities[] = { 
VIRTIO_SYM_CAPABILITIES, + VIRTIO_ASYM_CAPABILITIES, RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST() }; @@ -88,7 +94,7 @@ virtio_crypto_send_command(struct virtqueue *vq, return -EINVAL; } /* cipher only is supported, it is available if auth_key is NULL */ - if (!cipher_key) { + if (session->ctrl.header.algo == VIRTIO_CRYPTO_SERVICE_CIPHER && !cipher_key) { VIRTIO_CRYPTO_SESSION_LOG_ERR("cipher key is NULL."); return -EINVAL; } @@ -104,19 +110,23 @@ virtio_crypto_send_command(struct virtqueue *vq, /* calculate the length of cipher key */ if (cipher_key) { - switch (ctrl->u.sym_create_session.op_type) { - case VIRTIO_CRYPTO_SYM_OP_CIPHER: - len_cipher_key - = ctrl->u.sym_create_session.u.cipher - .para.keylen; - break; - case VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING: - len_cipher_key - = ctrl->u.sym_create_session.u.chain - .para.cipher_param.keylen; - break; - default: - VIRTIO_CRYPTO_SESSION_LOG_ERR("invalid op type"); + if (session->ctrl.header.algo == VIRTIO_CRYPTO_SERVICE_CIPHER) { + switch (ctrl->u.sym_create_session.op_type) { + case VIRTIO_CRYPTO_SYM_OP_CIPHER: + len_cipher_key = ctrl->u.sym_create_session.u.cipher.para.keylen; + break; + case VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING: + len_cipher_key = + ctrl->u.sym_create_session.u.chain.para.cipher_param.keylen; + break; + default: + VIRTIO_CRYPTO_SESSION_LOG_ERR("invalid op type"); + return -EINVAL; + } + } else if (session->ctrl.header.algo == VIRTIO_CRYPTO_SERVICE_AKCIPHER) { + len_cipher_key = ctrl->u.akcipher_create_session.para.keylen; + } else { + VIRTIO_CRYPTO_SESSION_LOG_ERR("Invalid crypto service for cipher key"); return -EINVAL; } } @@ -511,7 +521,10 @@ static struct rte_cryptodev_ops virtio_crypto_dev_ops = { /* Crypto related operations */ .sym_session_get_size = virtio_crypto_sym_get_session_private_size, .sym_session_configure = virtio_crypto_sym_configure_session, - .sym_session_clear = virtio_crypto_sym_clear_session + .sym_session_clear = virtio_crypto_sym_clear_session, + .asym_session_get_size 
= virtio_crypto_sym_get_session_private_size, + .asym_session_configure = virtio_crypto_asym_configure_session, + .asym_session_clear = virtio_crypto_asym_clear_session }; static void @@ -734,6 +747,9 @@ crypto_virtio_create(const char *name, struct rte_pci_device *pci_dev, cryptodev->dequeue_burst = virtio_crypto_pkt_rx_burst; cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO | + RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO | + RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_EXP | + RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_QT | RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING | RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT; @@ -926,32 +942,24 @@ virtio_crypto_check_sym_clear_session_paras( #define NUM_ENTRY_SYM_CLEAR_SESSION 2 static void -virtio_crypto_sym_clear_session( +virtio_crypto_clear_session( struct rte_cryptodev *dev, - struct rte_cryptodev_sym_session *sess) + struct virtio_crypto_op_ctrl_req *ctrl) { struct virtio_crypto_hw *hw; struct virtqueue *vq; - struct virtio_crypto_session *session; - struct virtio_crypto_op_ctrl_req *ctrl; struct vring_desc *desc; uint8_t *status; uint8_t needed = 1; uint32_t head; - uint8_t *malloc_virt_addr; uint64_t malloc_phys_addr; uint8_t len_inhdr = sizeof(struct virtio_crypto_inhdr); uint32_t len_op_ctrl_req = sizeof(struct virtio_crypto_op_ctrl_req); uint32_t desc_offset = len_op_ctrl_req + len_inhdr; - - PMD_INIT_FUNC_TRACE(); - - if (virtio_crypto_check_sym_clear_session_paras(dev, sess) < 0) - return; + uint64_t session_id = ctrl->u.destroy_session.session_id; hw = dev->data->dev_private; vq = hw->cvq; - session = CRYPTODEV_GET_SYM_SESS_PRIV(sess); VIRTIO_CRYPTO_SESSION_LOG_INFO("vq->vq_desc_head_idx = %d, " "vq = %p", vq->vq_desc_head_idx, vq); @@ -963,34 +971,15 @@ virtio_crypto_sym_clear_session( return; } - /* - * malloc memory to store information of ctrl request op, - * returned status and desc vring - */ - malloc_virt_addr = rte_malloc(NULL, len_op_ctrl_req + len_inhdr - + NUM_ENTRY_SYM_CLEAR_SESSION - * sizeof(struct vring_desc), 
RTE_CACHE_LINE_SIZE); - if (malloc_virt_addr == NULL) { - VIRTIO_CRYPTO_SESSION_LOG_ERR("not enough heap room"); - return; - } - malloc_phys_addr = rte_malloc_virt2iova(malloc_virt_addr); - - /* assign ctrl request op part */ - ctrl = (struct virtio_crypto_op_ctrl_req *)malloc_virt_addr; - ctrl->header.opcode = VIRTIO_CRYPTO_CIPHER_DESTROY_SESSION; - /* default data virtqueue is 0 */ - ctrl->header.queue_id = 0; - ctrl->u.destroy_session.session_id = session->session_id; + malloc_phys_addr = rte_malloc_virt2iova(ctrl); /* status part */ status = &(((struct virtio_crypto_inhdr *) - ((uint8_t *)malloc_virt_addr + len_op_ctrl_req))->status); + ((uint8_t *)ctrl + len_op_ctrl_req))->status); *status = VIRTIO_CRYPTO_ERR; /* indirect desc vring part */ - desc = (struct vring_desc *)((uint8_t *)malloc_virt_addr - + desc_offset); + desc = (struct vring_desc *)((uint8_t *)ctrl + desc_offset); /* ctrl request part */ desc[0].addr = malloc_phys_addr; @@ -1052,8 +1041,8 @@ virtio_crypto_sym_clear_session( if (*status != VIRTIO_CRYPTO_OK) { VIRTIO_CRYPTO_SESSION_LOG_ERR("Close session failed " "status=%"PRIu32", session_id=%"PRIu64"", - *status, session->session_id); - rte_free(malloc_virt_addr); + *status, session_id); + rte_free(ctrl); return; } @@ -1062,9 +1051,86 @@ virtio_crypto_sym_clear_session( vq->vq_free_cnt, vq->vq_desc_head_idx); VIRTIO_CRYPTO_SESSION_LOG_INFO("Close session %"PRIu64" successfully ", - session->session_id); + session_id); - rte_free(malloc_virt_addr); + rte_free(ctrl); +} + +static void +virtio_crypto_sym_clear_session( + struct rte_cryptodev *dev, + struct rte_cryptodev_sym_session *sess) +{ + uint8_t len_inhdr = sizeof(struct virtio_crypto_inhdr); + uint32_t len_op_ctrl_req = sizeof(struct virtio_crypto_op_ctrl_req); + struct virtio_crypto_op_ctrl_req *ctrl; + struct virtio_crypto_session *session; + uint8_t *malloc_virt_addr; + + PMD_INIT_FUNC_TRACE(); + + if (virtio_crypto_check_sym_clear_session_paras(dev, sess) < 0) + return; + + session = 
CRYPTODEV_GET_SYM_SESS_PRIV(sess); + + /* + * malloc memory to store information of ctrl request op, + * returned status and desc vring + */ + malloc_virt_addr = rte_malloc(NULL, len_op_ctrl_req + len_inhdr + + NUM_ENTRY_SYM_CLEAR_SESSION + * sizeof(struct vring_desc), RTE_CACHE_LINE_SIZE); + if (malloc_virt_addr == NULL) { + VIRTIO_CRYPTO_SESSION_LOG_ERR("not enough heap room"); + return; + } + + /* assign ctrl request op part */ + ctrl = (struct virtio_crypto_op_ctrl_req *)malloc_virt_addr; + ctrl->header.opcode = VIRTIO_CRYPTO_CIPHER_DESTROY_SESSION; + /* default data virtqueue is 0 */ + ctrl->header.queue_id = 0; + ctrl->u.destroy_session.session_id = session->session_id; + + return virtio_crypto_clear_session(dev, ctrl); +} + +static void +virtio_crypto_asym_clear_session( + struct rte_cryptodev *dev, + struct rte_cryptodev_asym_session *sess) +{ + uint8_t len_inhdr = sizeof(struct virtio_crypto_inhdr); + uint32_t len_op_ctrl_req = sizeof(struct virtio_crypto_op_ctrl_req); + struct virtio_crypto_op_ctrl_req *ctrl; + struct virtio_crypto_session *session; + uint8_t *malloc_virt_addr; + + PMD_INIT_FUNC_TRACE(); + + session = CRYPTODEV_GET_ASYM_SESS_PRIV(sess); + + /* + * malloc memory to store information of ctrl request op, + * returned status and desc vring + */ + malloc_virt_addr = rte_malloc(NULL, len_op_ctrl_req + len_inhdr + + NUM_ENTRY_SYM_CLEAR_SESSION + * sizeof(struct vring_desc), RTE_CACHE_LINE_SIZE); + if (malloc_virt_addr == NULL) { + VIRTIO_CRYPTO_SESSION_LOG_ERR("not enough heap room"); + return; + } + + /* assign ctrl request op part */ + ctrl = (struct virtio_crypto_op_ctrl_req *)malloc_virt_addr; + ctrl->header.opcode = VIRTIO_CRYPTO_AKCIPHER_DESTROY_SESSION; + /* default data virtqueue is 0 */ + ctrl->header.queue_id = 0; + ctrl->u.destroy_session.session_id = session->session_id; + + return virtio_crypto_clear_session(dev, ctrl); } static struct rte_crypto_cipher_xform * @@ -1295,6 +1361,23 @@ virtio_crypto_check_sym_configure_session_paras( 
return 0; } +static int +virtio_crypto_check_asym_configure_session_paras( + struct rte_cryptodev *dev, + struct rte_crypto_asym_xform *xform, + struct rte_cryptodev_asym_session *asym_sess) +{ + if (unlikely(xform == NULL) || unlikely(asym_sess == NULL)) { + VIRTIO_CRYPTO_SESSION_LOG_ERR("NULL pointer"); + return -1; + } + + if (virtio_crypto_check_sym_session_paras(dev) < 0) + return -1; + + return 0; +} + static int virtio_crypto_sym_configure_session( struct rte_cryptodev *dev, @@ -1386,6 +1469,207 @@ virtio_crypto_sym_configure_session( return -1; } +static size_t +tlv_encode(uint8_t **tlv, uint8_t type, uint8_t *data, size_t len) +{ + uint8_t *lenval = NULL; + size_t lenval_n = 0; + + if (len > 65535) { + goto _exit; + } else if (len > 255) { + lenval_n = 4 + len; + lenval = rte_malloc(NULL, lenval_n, 0); + + lenval[0] = type; + lenval[1] = 0x82; + lenval[2] = (len & 0xFF00) >> 8; + lenval[3] = (len & 0xFF); + rte_memcpy(&lenval[4], data, len); + } else if (len > 127) { + lenval_n = 3 + len; + lenval = rte_malloc(NULL, lenval_n, 0); + + lenval[0] = type; + lenval[1] = 0x81; + lenval[2] = len; + rte_memcpy(&lenval[3], data, len); + } else { + lenval_n = 2 + len; + lenval = rte_malloc(NULL, lenval_n, 0); + + lenval[0] = type; + lenval[1] = len; + rte_memcpy(&lenval[2], data, len); + } + +_exit: + *tlv = lenval; + return lenval_n; +} + +static int +virtio_crypto_asym_rsa_xform_to_der( + struct rte_crypto_asym_xform *xform, + unsigned char **der) +{ + size_t nlen, elen, dlen, plen, qlen, dplen, dqlen, qinvlen, tlen; + uint8_t *n, *e, *d, *p, *q, *dp, *dq, *qinv, *t; + uint8_t ver[3] = {0x02, 0x01, 0x00}; + + if (xform->xform_type != RTE_CRYPTO_ASYM_XFORM_RSA) + return -EINVAL; + + /* Length of sequence in bytes */ + tlen = RTE_DIM(ver); + nlen = tlv_encode(&n, 0x02, xform->rsa.n.data, xform->rsa.n.length); + elen = tlv_encode(&e, 0x02, xform->rsa.e.data, xform->rsa.e.length); + tlen += (nlen + elen); + + dlen = tlv_encode(&d, 0x02, xform->rsa.d.data, 
xform->rsa.d.length); + tlen += dlen; + + plen = tlv_encode(&p, 0x02, xform->rsa.qt.p.data, xform->rsa.qt.p.length); + qlen = tlv_encode(&q, 0x02, xform->rsa.qt.q.data, xform->rsa.qt.q.length); + dplen = tlv_encode(&dp, 0x02, xform->rsa.qt.dP.data, xform->rsa.qt.dP.length); + dqlen = tlv_encode(&dq, 0x02, xform->rsa.qt.dQ.data, xform->rsa.qt.dQ.length); + qinvlen = tlv_encode(&qinv, 0x02, xform->rsa.qt.qInv.data, xform->rsa.qt.qInv.length); + tlen += (plen + qlen + dplen + dqlen + qinvlen); + + t = rte_malloc(NULL, tlen, 0); + *der = t; + rte_memcpy(t, ver, RTE_DIM(ver)); + t += RTE_DIM(ver); + rte_memcpy(t, n, nlen); + t += nlen; + rte_memcpy(t, e, elen); + t += elen; + rte_free(n); + rte_free(e); + + rte_memcpy(t, d, dlen); + t += dlen; + rte_free(d); + + rte_memcpy(t, p, plen); + t += plen; + rte_memcpy(t, q, qlen); + t += qlen; + rte_memcpy(t, dp, dplen); + t += dplen; + rte_memcpy(t, dq, dqlen); + t += dqlen; + rte_memcpy(t, qinv, qinvlen); + t += qinvlen; + rte_free(p); + rte_free(q); + rte_free(dp); + rte_free(dq); + rte_free(qinv); + + t = *der; + tlen = tlv_encode(der, 0x30, t, tlen); + return tlen; +} + +static int +virtio_crypto_asym_configure_session( + struct rte_cryptodev *dev, + struct rte_crypto_asym_xform *xform, + struct rte_cryptodev_asym_session *sess) +{ + struct virtio_crypto_akcipher_session_para *para; + struct virtio_crypto_op_ctrl_req *ctrl_req; + struct virtio_crypto_session *session; + struct virtio_crypto_hw *hw; + struct virtqueue *control_vq; + uint8_t *key = NULL; + int ret; + + PMD_INIT_FUNC_TRACE(); + + ret = virtio_crypto_check_asym_configure_session_paras(dev, xform, + sess); + if (ret < 0) { + VIRTIO_CRYPTO_SESSION_LOG_ERR("Invalid parameters"); + return ret; + } + + session = CRYPTODEV_GET_ASYM_SESS_PRIV(sess); + memset(session, 0, sizeof(struct virtio_crypto_session)); + ctrl_req = &session->ctrl; + ctrl_req->header.opcode = VIRTIO_CRYPTO_AKCIPHER_CREATE_SESSION; + /* FIXME: support multiqueue */ + ctrl_req->header.queue_id = 
0; + ctrl_req->header.algo = VIRTIO_CRYPTO_SERVICE_AKCIPHER; + para = &ctrl_req->u.akcipher_create_session.para; + + switch (xform->xform_type) { + case RTE_CRYPTO_ASYM_XFORM_RSA: + para->algo = VIRTIO_CRYPTO_AKCIPHER_RSA; + + if (xform->rsa.key_type == RTE_RSA_KEY_TYPE_EXP) { + para->keytype = VIRTIO_CRYPTO_AKCIPHER_KEY_TYPE_PUBLIC; + para->u.rsa.private_key_type = VIRTIO_CRYPTO_RSA_PRIVATE_KEY_EXP; + } else { + para->keytype = VIRTIO_CRYPTO_AKCIPHER_KEY_TYPE_PRIVATE; + para->u.rsa.private_key_type = VIRTIO_CRYPTO_RSA_PRIVATE_KEY_QT; + } + + if (xform->rsa.padding.type == RTE_CRYPTO_RSA_PADDING_NONE) { + para->u.rsa.padding_algo = VIRTIO_CRYPTO_RSA_RAW_PADDING; + } else if (xform->rsa.padding.type == RTE_CRYPTO_RSA_PADDING_PKCS1_5) { + para->u.rsa.padding_algo = VIRTIO_CRYPTO_RSA_PKCS1_PADDING; + switch (xform->rsa.padding.hash) { + case RTE_CRYPTO_AUTH_SHA1: + para->u.rsa.hash_algo = VIRTIO_CRYPTO_RSA_SHA1; + break; + case RTE_CRYPTO_AUTH_SHA224: + para->u.rsa.hash_algo = VIRTIO_CRYPTO_RSA_SHA224; + break; + case RTE_CRYPTO_AUTH_SHA256: + para->u.rsa.hash_algo = VIRTIO_CRYPTO_RSA_SHA256; + break; + case RTE_CRYPTO_AUTH_SHA512: + para->u.rsa.hash_algo = VIRTIO_CRYPTO_RSA_SHA512; + break; + case RTE_CRYPTO_AUTH_MD5: + para->u.rsa.hash_algo = VIRTIO_CRYPTO_RSA_MD5; + break; + default: + para->u.rsa.hash_algo = VIRTIO_CRYPTO_RSA_NO_HASH; + } + } else { + VIRTIO_CRYPTO_SESSION_LOG_ERR("Invalid padding type"); + return -EINVAL; + } + + ret = virtio_crypto_asym_rsa_xform_to_der(xform, &key); + if (ret <= 0) { + VIRTIO_CRYPTO_SESSION_LOG_ERR("Invalid RSA primitives"); + return ret; + } + + ctrl_req->u.akcipher_create_session.para.keylen = ret; + break; + default: + para->algo = VIRTIO_CRYPTO_NO_AKCIPHER; + } + + hw = dev->data->dev_private; + control_vq = hw->cvq; + ret = virtio_crypto_send_command(control_vq, ctrl_req, + key, NULL, session); + if (ret < 0) { + VIRTIO_CRYPTO_SESSION_LOG_ERR("create session failed: %d", ret); + goto error_out; + } + + return 0; 
+error_out: + return -1; +} + static void virtio_crypto_dev_info_get(struct rte_cryptodev *dev, struct rte_cryptodev_info *info) diff --git a/drivers/crypto/virtio/virtio_rxtx.c b/drivers/crypto/virtio/virtio_rxtx.c index 48b5f4ebbb..b85a322175 100644 --- a/drivers/crypto/virtio/virtio_rxtx.c +++ b/drivers/crypto/virtio/virtio_rxtx.c @@ -343,6 +343,203 @@ virtqueue_crypto_sym_enqueue_xmit( return 0; } +static int +virtqueue_crypto_asym_pkt_header_arrange( + struct rte_crypto_op *cop, + struct virtio_crypto_op_data_req *data, + struct virtio_crypto_session *session) +{ + struct rte_crypto_asym_op *asym_op = cop->asym; + struct virtio_crypto_op_data_req *req_data = data; + struct virtio_crypto_akcipher_session_para *para; + struct virtio_crypto_op_ctrl_req *ctrl = &session->ctrl; + + req_data->header.session_id = session->session_id; + para = &ctrl->u.akcipher_create_session.para; + + if (ctrl->header.algo != VIRTIO_CRYPTO_SERVICE_AKCIPHER) { + req_data->header.algo = VIRTIO_CRYPTO_NO_AKCIPHER; + return 0; + } + + switch (para->algo) { + case VIRTIO_CRYPTO_AKCIPHER_RSA: + req_data->header.algo = para->algo; + if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_SIGN) { + req_data->header.opcode = VIRTIO_CRYPTO_AKCIPHER_SIGN; + req_data->u.akcipher_req.para.src_data_len + = asym_op->rsa.message.length; + /* qemu does not accept zero size write buffer */ + req_data->u.akcipher_req.para.dst_data_len + = asym_op->rsa.sign.length; + } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_VERIFY) { + req_data->header.opcode = VIRTIO_CRYPTO_AKCIPHER_VERIFY; + req_data->u.akcipher_req.para.src_data_len + = asym_op->rsa.sign.length; + /* qemu does not accept zero size write buffer */ + req_data->u.akcipher_req.para.dst_data_len + = asym_op->rsa.message.length; + } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_ENCRYPT) { + req_data->header.opcode = VIRTIO_CRYPTO_AKCIPHER_ENCRYPT; + req_data->u.akcipher_req.para.src_data_len + = asym_op->rsa.message.length; + /* qemu does not 
accept zero size write buffer */ + req_data->u.akcipher_req.para.dst_data_len + = asym_op->rsa.cipher.length; + } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_DECRYPT) { + req_data->header.opcode = VIRTIO_CRYPTO_AKCIPHER_DECRYPT; + req_data->u.akcipher_req.para.src_data_len + = asym_op->rsa.cipher.length; + /* qemu does not accept zero size write buffer */ + req_data->u.akcipher_req.para.dst_data_len + = asym_op->rsa.message.length; + } else { + return -EINVAL; + } + + break; + default: + req_data->header.algo = VIRTIO_CRYPTO_NO_AKCIPHER; + } + + return 0; +} + +static int +virtqueue_crypto_asym_enqueue_xmit( + struct virtqueue *txvq, + struct rte_crypto_op *cop) +{ + uint16_t idx = 0; + uint16_t num_entry; + uint16_t needed = 1; + uint16_t head_idx; + struct vq_desc_extra *dxp; + struct vring_desc *start_dp; + struct vring_desc *desc; + uint64_t indirect_op_data_req_phys_addr; + uint16_t req_data_len = sizeof(struct virtio_crypto_op_data_req); + uint32_t indirect_vring_addr_offset = req_data_len + + sizeof(struct virtio_crypto_inhdr); + struct rte_crypto_asym_op *asym_op = cop->asym; + struct virtio_crypto_session *session = + CRYPTODEV_GET_ASYM_SESS_PRIV(cop->asym->session); + struct virtio_crypto_op_data_req *op_data_req; + struct virtio_crypto_op_cookie *crypto_op_cookie; + + if (unlikely(txvq->vq_free_cnt == 0)) + return -ENOSPC; + if (unlikely(txvq->vq_free_cnt < needed)) + return -EMSGSIZE; + head_idx = txvq->vq_desc_head_idx; + if (unlikely(head_idx >= txvq->vq_nentries)) + return -EFAULT; + + dxp = &txvq->vq_descx[head_idx]; + + if (rte_mempool_get(txvq->mpool, &dxp->cookie)) { + VIRTIO_CRYPTO_TX_LOG_ERR("can not get cookie"); + return -EFAULT; + } + crypto_op_cookie = dxp->cookie; + indirect_op_data_req_phys_addr = + rte_mempool_virt2iova(crypto_op_cookie); + op_data_req = (struct virtio_crypto_op_data_req *)crypto_op_cookie; + if (virtqueue_crypto_asym_pkt_header_arrange(cop, op_data_req, session)) + return -EFAULT; + + /* status is initialized to 
VIRTIO_CRYPTO_ERR */ + ((struct virtio_crypto_inhdr *) + ((uint8_t *)op_data_req + req_data_len))->status = + VIRTIO_CRYPTO_ERR; + + /* point to indirect vring entry */ + desc = (struct vring_desc *) + ((uint8_t *)op_data_req + indirect_vring_addr_offset); + for (idx = 0; idx < (NUM_ENTRY_VIRTIO_CRYPTO_OP - 1); idx++) + desc[idx].next = idx + 1; + desc[NUM_ENTRY_VIRTIO_CRYPTO_OP - 1].next = VQ_RING_DESC_CHAIN_END; + + idx = 0; + + /* indirect vring: first part, virtio_crypto_op_data_req */ + desc[idx].addr = indirect_op_data_req_phys_addr; + desc[idx].len = req_data_len; + desc[idx++].flags = VRING_DESC_F_NEXT; + + if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_SIGN) { + /* indirect vring: src data */ + desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.message.data); + desc[idx].len = asym_op->rsa.message.length; + desc[idx++].flags = VRING_DESC_F_NEXT; + + /* indirect vring: dst data */ + desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.sign.data); + desc[idx].len = asym_op->rsa.sign.length; + desc[idx++].flags = VRING_DESC_F_NEXT | VRING_DESC_F_WRITE; + } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_VERIFY) { + /* indirect vring: src data */ + desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.sign.data); + desc[idx].len = asym_op->rsa.sign.length; + desc[idx++].flags = VRING_DESC_F_NEXT; + + /* indirect vring: dst data */ + desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.message.data); + desc[idx].len = asym_op->rsa.message.length; + desc[idx++].flags = VRING_DESC_F_NEXT; + } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_ENCRYPT) { + /* indirect vring: src data */ + desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.message.data); + desc[idx].len = asym_op->rsa.message.length; + desc[idx++].flags = VRING_DESC_F_NEXT; + + /* indirect vring: dst data */ + desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.cipher.data); + desc[idx].len = asym_op->rsa.cipher.length; + desc[idx++].flags = VRING_DESC_F_NEXT | VRING_DESC_F_WRITE; + } else if (asym_op->rsa.op_type == 
RTE_CRYPTO_ASYM_OP_DECRYPT) { + /* indirect vring: src data */ + desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.cipher.data); + desc[idx].len = asym_op->rsa.cipher.length; + desc[idx++].flags = VRING_DESC_F_NEXT; + + /* indirect vring: dst data */ + desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.message.data); + desc[idx].len = asym_op->rsa.message.length; + desc[idx++].flags = VRING_DESC_F_NEXT | VRING_DESC_F_WRITE; + } else { + VIRTIO_CRYPTO_TX_LOG_ERR("Invalid asym op"); + return -EINVAL; + } + + /* indirect vring: last part, status returned */ + desc[idx].addr = indirect_op_data_req_phys_addr + req_data_len; + desc[idx].len = sizeof(struct virtio_crypto_inhdr); + desc[idx++].flags = VRING_DESC_F_WRITE; + + num_entry = idx; + + /* save the infos to use when receiving packets */ + dxp->crypto_op = (void *)cop; + dxp->ndescs = needed; + + /* use a single buffer */ + start_dp = txvq->vq_ring.desc; + start_dp[head_idx].addr = indirect_op_data_req_phys_addr + + indirect_vring_addr_offset; + start_dp[head_idx].len = num_entry * sizeof(struct vring_desc); + start_dp[head_idx].flags = VRING_DESC_F_INDIRECT; + + idx = start_dp[head_idx].next; + txvq->vq_desc_head_idx = idx; + if (txvq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END) + txvq->vq_desc_tail_idx = idx; + txvq->vq_free_cnt = (uint16_t)(txvq->vq_free_cnt - needed); + vq_update_avail_ring(txvq, head_idx); + + return 0; +} + static int virtqueue_crypto_enqueue_xmit(struct virtqueue *txvq, struct rte_crypto_op *cop) @@ -353,6 +550,9 @@ virtqueue_crypto_enqueue_xmit(struct virtqueue *txvq, case RTE_CRYPTO_OP_TYPE_SYMMETRIC: ret = virtqueue_crypto_sym_enqueue_xmit(txvq, cop); break; + case RTE_CRYPTO_OP_TYPE_ASYMMETRIC: + ret = virtqueue_crypto_asym_enqueue_xmit(txvq, cop); + break; default: VIRTIO_CRYPTO_TX_LOG_ERR("invalid crypto op type %u", cop->type); @@ -476,27 +676,28 @@ virtio_crypto_pkt_tx_burst(void *tx_queue, struct rte_crypto_op **tx_pkts, VIRTIO_CRYPTO_TX_LOG_DBG("%d packets to xmit", nb_pkts); for (nb_tx 
= 0; nb_tx < nb_pkts; nb_tx++) { - struct rte_mbuf *txm = tx_pkts[nb_tx]->sym->m_src; - /* nb_segs is always 1 at virtio crypto situation */ - int need = txm->nb_segs - txvq->vq_free_cnt; - - /* - * Positive value indicates it hasn't enough space in vring - * descriptors - */ - if (unlikely(need > 0)) { + if (tx_pkts[nb_tx]->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) { + struct rte_mbuf *txm = tx_pkts[nb_tx]->sym->m_src; + /* nb_segs is always 1 at virtio crypto situation */ + int need = txm->nb_segs - txvq->vq_free_cnt; + /* - * try it again because the receive process may be - * free some space + * Positive value indicates it hasn't enough space in vring + * descriptors */ - need = txm->nb_segs - txvq->vq_free_cnt; if (unlikely(need > 0)) { - VIRTIO_CRYPTO_TX_LOG_DBG("No free tx " - "descriptors to transmit"); - break; + /* + * try it again because the receive process may be + * free some space + */ + need = txm->nb_segs - txvq->vq_free_cnt; + if (unlikely(need > 0)) { + VIRTIO_CRYPTO_TX_LOG_DBG("No free tx " + "descriptors to transmit"); + break; + } } } - txvq->packets_sent_total++; /* Enqueue Packet buffers */ From patchwork Thu Sep 5 14:56:09 2024 X-Patchwork-Submitter: Gowrishankar Muthukrishnan X-Patchwork-Id: 143672 X-Patchwork-Delegate: gakhil@marvell.com From: Gowrishankar Muthukrishnan To: Maxime Coquelin, Chenbo Xia CC: Anoob Joseph, Akhil Goyal, Brian Dooley, Gowrishankar Muthukrishnan Subject: [PATCH 5/6] examples/vhost_crypto: add asymmetric support Date: Thu, 5 Sep 2024 20:26:09 +0530 Message-ID: <20240905145612.1732-6-gmuthukrishn@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: 
<20240905145612.1732-1-gmuthukrishn@marvell.com> References: <20240905145612.1732-1-gmuthukrishn@marvell.com> Add asymmetric support. Signed-off-by: Gowrishankar Muthukrishnan --- examples/vhost_crypto/main.c | 50 +++++++++++++++++++++--------------- 1 file changed, 29 insertions(+), 21 deletions(-) diff --git a/examples/vhost_crypto/main.c b/examples/vhost_crypto/main.c index 558c09a60f..bed7fc637d 100644 --- a/examples/vhost_crypto/main.c +++ b/examples/vhost_crypto/main.c @@ -45,7 +45,8 @@ struct lcore_option { struct __rte_cache_aligned vhost_crypto_info { int vids[MAX_NB_SOCKETS]; uint32_t nb_vids; - struct rte_mempool *sess_pool; + struct rte_mempool *sym_sess_pool; + struct rte_mempool *asym_sess_pool; struct rte_mempool *cop_pool; uint8_t cid; uint32_t qid; @@ -302,7 +303,8 @@ new_device(int vid) return -ENOENT; } - ret = rte_vhost_crypto_create(vid, info->cid, info->sess_pool, + ret = rte_vhost_crypto_create(vid, info->cid, info->sym_sess_pool, + info->asym_sess_pool, rte_lcore_to_socket_id(options.los[i].lcore_id)); if (ret) { RTE_LOG(ERR, USER1, "Cannot create vhost crypto\n"); @@ -362,8 +364,8 @@ destroy_device(int vid) } static const struct rte_vhost_device_ops virtio_crypto_device_ops = { - .new_device = new_device, - .destroy_device = destroy_device, + .new_connection = new_device, + .destroy_connection = destroy_device, }; static int @@ -385,7 +387,7 @@ vhost_crypto_worker(void *arg) for (i = 0; i < NB_VIRTIO_QUEUES; i++) { if 
(rte_crypto_op_bulk_alloc(info->cop_pool, - RTE_CRYPTO_OP_TYPE_SYMMETRIC, ops[i], + RTE_CRYPTO_OP_TYPE_UNDEFINED, ops[i], burst_size) < burst_size) { RTE_LOG(ERR, USER1, "Failed to alloc cops\n"); ret = -1; @@ -409,20 +411,12 @@ vhost_crypto_worker(void *arg) rte_cryptodev_enqueue_burst( info->cid, info->qid, ops[j], fetched); - if (unlikely(rte_crypto_op_bulk_alloc( - info->cop_pool, - RTE_CRYPTO_OP_TYPE_SYMMETRIC, - ops[j], fetched) < fetched)) { - RTE_LOG(ERR, USER1, "Failed realloc\n"); - return -1; - } - fetched = rte_cryptodev_dequeue_burst( info->cid, info->qid, ops_deq[j], RTE_MIN(burst_size, info->nb_inflight_ops)); fetched = rte_vhost_crypto_finalize_requests( - ops_deq[j], fetched, callfds, + info->vids[i], j, ops_deq[j], fetched, callfds, &nb_callfds); info->nb_inflight_ops -= fetched; @@ -455,7 +449,8 @@ free_resource(void) continue; rte_mempool_free(info->cop_pool); - rte_mempool_free(info->sess_pool); + rte_mempool_free(info->sym_sess_pool); + rte_mempool_free(info->asym_sess_pool); for (j = 0; j < lo->nb_sockets; j++) { rte_vhost_driver_unregister(lo->socket_files[i]); @@ -539,21 +534,34 @@ main(int argc, char *argv[]) goto error_exit; } - snprintf(name, 127, "SESS_POOL_%u", lo->lcore_id); - info->sess_pool = rte_cryptodev_sym_session_pool_create(name, + snprintf(name, 127, "SYM_SESS_POOL_%u", lo->lcore_id); + info->sym_sess_pool = rte_cryptodev_sym_session_pool_create(name, SESSION_MAP_ENTRIES, rte_cryptodev_sym_get_private_session_size( info->cid), 0, 0, rte_lcore_to_socket_id(lo->lcore_id)); - if (!info->sess_pool) { - RTE_LOG(ERR, USER1, "Failed to create mempool"); + if (!info->sym_sess_pool) { + RTE_LOG(ERR, USER1, "Failed to create sym session mempool"); + goto error_exit; + } + + /* TODO: storing vhost_crypto_data_req (56 bytes) in user_data, + * but it needs to be moved somewhere. 
+ */ + snprintf(name, 127, "ASYM_SESS_POOL_%u", lo->lcore_id); + info->asym_sess_pool = rte_cryptodev_asym_session_pool_create(name, + SESSION_MAP_ENTRIES, 0, 64, + rte_lcore_to_socket_id(lo->lcore_id)); + + if (!info->asym_sess_pool) { + RTE_LOG(ERR, USER1, "Failed to create asym session mempool"); goto error_exit; } snprintf(name, 127, "COPPOOL_%u", lo->lcore_id); info->cop_pool = rte_crypto_op_pool_create(name, - RTE_CRYPTO_OP_TYPE_SYMMETRIC, NB_MEMPOOL_OBJS, + RTE_CRYPTO_OP_TYPE_UNDEFINED, NB_MEMPOOL_OBJS, NB_CACHE_OBJS, VHOST_CRYPTO_MAX_IV_LEN, rte_lcore_to_socket_id(lo->lcore_id)); @@ -566,7 +574,7 @@ main(int argc, char *argv[]) options.infos[i] = info; qp_conf.nb_descriptors = NB_CRYPTO_DESCRIPTORS; - qp_conf.mp_session = info->sess_pool; + qp_conf.mp_session = info->sym_sess_pool; for (j = 0; j < dev_info.max_nb_queue_pairs; j++) { ret = rte_cryptodev_queue_pair_setup(info->cid, j, From patchwork Thu Sep 5 14:56:10 2024 X-Patchwork-Submitter: Gowrishankar Muthukrishnan X-Patchwork-Id: 143673 X-Patchwork-Delegate: gakhil@marvell.com From: Gowrishankar Muthukrishnan To: Akhil Goyal, Fan Zhang CC: Anoob Joseph, Brian Dooley, Gowrishankar Muthukrishnan Subject: [PATCH 6/6] app/test: add asymmetric tests for virtio pmd Date: Thu, 5 Sep 2024 20:26:10 +0530 Message-ID: <20240905145612.1732-7-gmuthukrishn@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20240905145612.1732-1-gmuthukrishn@marvell.com> References: <20240905145612.1732-1-gmuthukrishn@marvell.com> 
X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.680,FMLib:17.12.60.29 definitions=2024-09-05_10,2024-09-04_01,2024-09-02_01 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add asymmetric tests for Virtio PMD. Signed-off-by: Gowrishankar Muthukrishnan --- app/test/test_cryptodev_asym.c | 42 +++++++++++++++++++--- app/test/test_cryptodev_rsa_test_vectors.h | 26 ++++++++++++++ 2 files changed, 64 insertions(+), 4 deletions(-) diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c index 0928367cb0..b425643211 100644 --- a/app/test/test_cryptodev_asym.c +++ b/app/test/test_cryptodev_asym.c @@ -475,7 +475,7 @@ testsuite_setup(void) for (qp_id = 0; qp_id < info.max_nb_queue_pairs; qp_id++) { TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup( dev_id, qp_id, &ts_params->qp_conf, - rte_cryptodev_socket_id(dev_id)), + (int8_t)rte_cryptodev_socket_id(dev_id)), "Failed to setup queue pair %u on cryptodev %u ASYM", qp_id, dev_id); } @@ -538,7 +538,7 @@ ut_setup_asym(void) TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup( ts_params->valid_devs[0], qp_id, &ts_params->qp_conf, - rte_cryptodev_socket_id(ts_params->valid_devs[0])), + (int8_t)rte_cryptodev_socket_id(ts_params->valid_devs[0])), "Failed to setup queue pair %u on cryptodev %u", qp_id, ts_params->valid_devs[0]); } @@ -3319,7 +3319,6 @@ rsa_encrypt(const struct rsa_test_data_2 *vector, uint8_t *cipher_buf) self->op->asym->rsa.cipher.data = cipher_buf; self->op->asym->rsa.cipher.length = 0; SET_RSA_PARAM(self->op->asym->rsa, vector, message); - self->op->asym->rsa.padding.type = vector->padding; rte_crypto_op_attach_asym_session(self->op, self->sess); TEST_ASSERT_SUCCESS(send_one(), @@ -3343,7 +3342,6 @@ rsa_decrypt(const struct rsa_test_data_2 *vector, uint8_t *plaintext, 
 	self->op->asym->rsa.message.data = plaintext;
 	self->op->asym->rsa.message.length = 0;
 	self->op->asym->rsa.op_type = RTE_CRYPTO_ASYM_OP_DECRYPT;
-	self->op->asym->rsa.padding.type = vector->padding;
 	rte_crypto_op_attach_asym_session(self->op, self->sess);
 
 	TEST_ASSERT_SUCCESS(send_one(),
 		"Failed to process crypto op (Decryption)");
@@ -3385,6 +3383,7 @@ kat_rsa_encrypt(const void *data)
 	SET_RSA_PARAM(xform.rsa, vector, n);
 	SET_RSA_PARAM(xform.rsa, vector, e);
 	SET_RSA_PARAM(xform.rsa, vector, d);
+	xform.rsa.padding.type = vector->padding;
 	xform.rsa.key_type = RTE_RSA_KEY_TYPE_EXP;
 
 	int ret = rsa_init_session(&xform);
@@ -3415,6 +3414,7 @@ kat_rsa_encrypt_crt(const void *data)
 	SET_RSA_PARAM_QT(xform.rsa, vector, dP);
 	SET_RSA_PARAM_QT(xform.rsa, vector, dQ);
 	SET_RSA_PARAM_QT(xform.rsa, vector, qInv);
+	xform.rsa.padding.type = vector->padding;
 	xform.rsa.key_type = RTE_RSA_KEY_TYPE_QT;
 	int ret = rsa_init_session(&xform);
 	if (ret) {
@@ -3440,6 +3440,7 @@ kat_rsa_decrypt(const void *data)
 	SET_RSA_PARAM(xform.rsa, vector, n);
 	SET_RSA_PARAM(xform.rsa, vector, e);
 	SET_RSA_PARAM(xform.rsa, vector, d);
+	xform.rsa.padding.type = vector->padding;
 	xform.rsa.key_type = RTE_RSA_KEY_TYPE_EXP;
 
 	int ret = rsa_init_session(&xform);
@@ -3470,6 +3471,7 @@ kat_rsa_decrypt_crt(const void *data)
 	SET_RSA_PARAM_QT(xform.rsa, vector, dP);
 	SET_RSA_PARAM_QT(xform.rsa, vector, dQ);
 	SET_RSA_PARAM_QT(xform.rsa, vector, qInv);
+	xform.rsa.padding.type = vector->padding;
 	xform.rsa.key_type = RTE_RSA_KEY_TYPE_QT;
 	int ret = rsa_init_session(&xform);
 	if (ret) {
@@ -3634,6 +3636,22 @@ static struct unit_test_suite cryptodev_octeontx_asym_testsuite = {
 	}
 };
 
+static struct unit_test_suite cryptodev_virtio_asym_testsuite = {
+	.suite_name = "Crypto Device VIRTIO ASYM Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup_asym, ut_teardown_asym, test_capability),
+		TEST_CASE_ST(ut_setup_asym, ut_teardown_asym,
+				test_rsa_sign_verify),
+		TEST_CASE_ST(ut_setup_asym, ut_teardown_asym,
+				test_rsa_sign_verify_crt),
+		TEST_CASE_ST(ut_setup_asym, ut_teardown_asym, test_rsa_enc_dec),
+		TEST_CASE_ST(ut_setup_asym, ut_teardown_asym, test_rsa_enc_dec_crt),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
 static int
 test_cryptodev_openssl_asym(void)
 {
@@ -3702,6 +3720,22 @@ test_cryptodev_cn10k_asym(void)
 	return unit_test_suite_runner(&cryptodev_octeontx_asym_testsuite);
 }
 
+static int
+test_cryptodev_virtio_asym(void)
+{
+	gbl_driver_id = rte_cryptodev_driver_id_get(
+			RTE_STR(CRYPTODEV_NAME_VIRTIO_PMD));
+	if (gbl_driver_id == -1) {
+		RTE_LOG(ERR, USER1, "virtio PMD must be loaded.\n");
+		return TEST_FAILED;
+	}
+
+	/* Use test suite registered for crypto_virtio PMD */
+	return unit_test_suite_runner(&cryptodev_virtio_asym_testsuite);
+}
+
+REGISTER_DRIVER_TEST(cryptodev_virtio_asym_autotest, test_cryptodev_virtio_asym);
+
 REGISTER_DRIVER_TEST(cryptodev_openssl_asym_autotest, test_cryptodev_openssl_asym);
 REGISTER_DRIVER_TEST(cryptodev_qat_asym_autotest, test_cryptodev_qat_asym);
 REGISTER_DRIVER_TEST(cryptodev_octeontx_asym_autotest, test_cryptodev_octeontx_asym);

diff --git a/app/test/test_cryptodev_rsa_test_vectors.h b/app/test/test_cryptodev_rsa_test_vectors.h
index 1b7b451387..1c46a8117a 100644
--- a/app/test/test_cryptodev_rsa_test_vectors.h
+++ b/app/test/test_cryptodev_rsa_test_vectors.h
@@ -358,6 +358,28 @@ struct rte_crypto_asym_xform rsa_xform = {
 		.d = {
 			.data = rsa_d,
 			.length = sizeof(rsa_d)
+		},
+		.qt = {
+			.p = {
+				.data = rsa_p,
+				.length = sizeof(rsa_p)
+			},
+			.q = {
+				.data = rsa_q,
+				.length = sizeof(rsa_q)
+			},
+			.dP = {
+				.data = rsa_dP,
+				.length = sizeof(rsa_dP)
+			},
+			.dQ = {
+				.data = rsa_dQ,
+				.length = sizeof(rsa_dQ)
+			},
+			.qInv = {
+				.data = rsa_qInv,
+				.length = sizeof(rsa_qInv)
+			},
 		}
 	}
 };
@@ -377,6 +399,10 @@ struct rte_crypto_asym_xform rsa_xform_crt = {
 		.length = sizeof(rsa_e)
 	},
 	.key_type = RTE_RSA_KEY_TYPE_QT,
+	.d = {
+		.data = rsa_d,
+		.length = sizeof(rsa_d)
+	},
 	.qt = {
 		.p = {
 			.data = rsa_p,