From patchwork Fri Sep 4 15:25:36 2020
X-Patchwork-Submitter: Fan Zhang
X-Patchwork-Id: 76567
X-Patchwork-Delegate: gakhil@marvell.com
From: Fan Zhang
To: dev@dpdk.org
Cc: akhil.goyal@nxp.com, fiona.trahe@intel.com, arkadiuszx.kusztal@intel.com, adamx.dybkowski@intel.com, Fan Zhang , Piotr Bronowski
Date: Fri, 4 Sep 2020 16:25:36 +0100
Message-Id: <20200904152539.20608-2-roy.fan.zhang@intel.com>
In-Reply-To: <20200904152539.20608-1-roy.fan.zhang@intel.com>
References: <20200828125815.21614-1-roy.fan.zhang@intel.com> <20200904152539.20608-1-roy.fan.zhang@intel.com>
Subject: [dpdk-dev] [dpdk-dev v8 1/4] cryptodev: add crypto data-path service APIs

This patch adds data-path service APIs for enqueue and dequeue operations to cryptodev. The APIs support flexible, user-defined enqueue and dequeue behaviors and operation modes.
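For reference, below is a minimal usage sketch built only from the functions declared in this patch (cipher-only, session-based, one job at a time). The helper name, the rte_malloc-backed flat buffer and the busy-wait dequeue loop are illustrative assumptions rather than part of the patch, and error handling is trimmed:

#include <rte_cryptodev.h>
#include <rte_malloc.h>

/* Hypothetical helper: run one cipher-only buffer through the data-path
 * service. "dev_id", "qp_id" and "sess" are assumed to be fully set up
 * elsewhere; "buf" and "iv" must come from DMA-able (rte_malloc) memory.
 */
static int
dp_service_cipher_one(uint8_t dev_id, uint16_t qp_id,
	struct rte_cryptodev_sym_session *sess,
	void *buf, uint32_t len, void *iv)
{
	union rte_cryptodev_session_ctx sess_ctx = { .crypto_sess = sess };
	union rte_crypto_sym_additional_data a_data = { 0 };
	union rte_crypto_sym_ofs ofs = { .raw = 0 };
	struct rte_crypto_dp_service_ctx *ctx;
	struct rte_crypto_vec vec;
	void *deq_opaque = NULL;
	int size, ret;

	/* The service context size is driver specific; query it first. */
	size = rte_cryptodev_dp_get_service_ctx_data_size(dev_id);
	if (size < 0)
		return -1;
	ctx = rte_zmalloc(NULL, size, 0);
	if (ctx == NULL)
		return -1;

	/* Bind the context to one queue pair, service type and session. */
	ret = rte_cryptodev_dp_configure_service(dev_id, qp_id,
		RTE_CRYPTO_DP_SYM_CIPHER_ONLY, RTE_CRYPTO_OP_WITH_SESSION,
		sess_ctx, ctx, 0);
	if (ret < 0)
		goto exit;

	vec.base = buf;
	vec.iova = rte_malloc_virt2iova(buf);
	vec.len = len;
	/* IV length was fixed when the session was created. */
	a_data.cipher_auth.cipher_iv_ptr = iv;
	a_data.cipher_auth.cipher_iv_iova = rte_malloc_virt2iova(iv);

	/* Cache one job; the HW is only kicked by *_submit_done(). */
	ret = rte_cryptodev_dp_sym_submit_single_job(ctx, &vec, 1, ofs,
		&a_data, buf /* opaque returned at dequeue */);
	if (ret < 0)
		goto exit;
	rte_cryptodev_dp_sym_submit_done(ctx, 1);

	/* Busy-wait for the single response, then acknowledge it. */
	do {
		ret = rte_cryptodev_dp_sym_dequeue_single_job(ctx,
			&deq_opaque);
	} while (ret == -1);
	rte_cryptodev_dp_sym_dequeue_done(ctx, 1);
	ret = (ret == 1 && deq_opaque == buf) ? 0 : -1;
exit:
	rte_free(ctx);
	return ret;
}

The point the sketch illustrates is that enqueue and dequeue are each split into a caching phase (submit/dequeue calls) and an explicit kick/acknowledge phase (the *_done() calls), so an application can batch several jobs before touching the device doorbell.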
Signed-off-by: Fan Zhang Signed-off-by: Piotr Bronowski Acked-by: Adam Dybkowski --- lib/librte_cryptodev/rte_crypto.h | 9 + lib/librte_cryptodev/rte_crypto_sym.h | 49 ++- lib/librte_cryptodev/rte_cryptodev.c | 98 ++++++ lib/librte_cryptodev/rte_cryptodev.h | 332 +++++++++++++++++- lib/librte_cryptodev/rte_cryptodev_pmd.h | 48 ++- .../rte_cryptodev_version.map | 10 + 6 files changed, 537 insertions(+), 9 deletions(-) diff --git a/lib/librte_cryptodev/rte_crypto.h b/lib/librte_cryptodev/rte_crypto.h index fd5ef3a87..f009be9af 100644 --- a/lib/librte_cryptodev/rte_crypto.h +++ b/lib/librte_cryptodev/rte_crypto.h @@ -438,6 +438,15 @@ rte_crypto_op_attach_asym_session(struct rte_crypto_op *op, return 0; } +/** Crypto data-path service types */ +enum rte_crypto_dp_service { + RTE_CRYPTO_DP_SYM_CIPHER_ONLY = 0, + RTE_CRYPTO_DP_SYM_AUTH_ONLY, + RTE_CRYPTO_DP_SYM_CHAIN, + RTE_CRYPTO_DP_SYM_AEAD, + RTE_CRYPTO_DP_N_SERVICE +}; + #ifdef __cplusplus } #endif diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h index f29c98051..376412e94 100644 --- a/lib/librte_cryptodev/rte_crypto_sym.h +++ b/lib/librte_cryptodev/rte_crypto_sym.h @@ -50,6 +50,30 @@ struct rte_crypto_sgl { uint32_t num; }; +/** + * Symmetri Crypto Addtional Data other than src and destination data. + * Supposed to be used to pass IV/digest/aad data buffers with lengths + * defined when creating crypto session. + */ +union rte_crypto_sym_additional_data { + struct { + void *cipher_iv_ptr; + rte_iova_t cipher_iv_iova; + void *auth_iv_ptr; + rte_iova_t auth_iv_iova; + void *digest_ptr; + rte_iova_t digest_iova; + } cipher_auth; + struct { + void *iv_ptr; + rte_iova_t iv_iova; + void *digest_ptr; + rte_iova_t digest_iova; + void *aad_ptr; + rte_iova_t aad_iova; + } aead; +}; + /** * Synchronous operation descriptor. * Supposed to be used with CPU crypto API call. @@ -57,12 +81,25 @@ struct rte_crypto_sgl { struct rte_crypto_sym_vec { /** array of SGL vectors */ struct rte_crypto_sgl *sgl; - /** array of pointers to IV */ - void **iv; - /** array of pointers to AAD */ - void **aad; - /** array of pointers to digest */ - void **digest; + + union { + + /* Supposed to be used with CPU crypto API call. */ + struct { + /** array of pointers to IV */ + void **iv; + /** array of pointers to AAD */ + void **aad; + /** array of pointers to digest */ + void **digest; + }; + + /* Supposed to be used with rte_cryptodev_dp_sym_submit_vec() + * call. 
+ */ + union rte_crypto_sym_additional_data *additional_data; + }; + /** * array of statuses for each operation: * - 0 on success diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c index 1dd795bcb..5b670e83e 100644 --- a/lib/librte_cryptodev/rte_cryptodev.c +++ b/lib/librte_cryptodev/rte_cryptodev.c @@ -1914,6 +1914,104 @@ rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id, return dev->dev_ops->sym_cpu_process(dev, sess, ofs, vec); } +int +rte_cryptodev_dp_get_service_ctx_data_size(uint8_t dev_id) +{ + struct rte_cryptodev *dev; + int32_t size = sizeof(struct rte_crypto_dp_service_ctx); + int32_t priv_size; + + if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) + return -1; + + dev = rte_cryptodev_pmd_get_dev(dev_id); + + if (*dev->dev_ops->get_drv_ctx_size == NULL || + !(dev->feature_flags & RTE_CRYPTODEV_FF_DATA_PATH_SERVICE)) { + return -1; + } + + priv_size = (*dev->dev_ops->get_drv_ctx_size)(dev); + if (priv_size < 0) + return -1; + + return RTE_ALIGN_CEIL((size + priv_size), 8); +} + +int +rte_cryptodev_dp_configure_service(uint8_t dev_id, uint16_t qp_id, + enum rte_crypto_dp_service service_type, + enum rte_crypto_op_sess_type sess_type, + union rte_cryptodev_session_ctx session_ctx, + struct rte_crypto_dp_service_ctx *ctx, uint8_t is_update) +{ + struct rte_cryptodev *dev; + + if (!rte_cryptodev_get_qp_status(dev_id, qp_id)) + return -1; + + dev = rte_cryptodev_pmd_get_dev(dev_id); + if (!(dev->feature_flags & RTE_CRYPTODEV_FF_DATA_PATH_SERVICE) + || dev->dev_ops->configure_service == NULL) + return -1; + + return (*dev->dev_ops->configure_service)(dev, qp_id, service_type, + sess_type, session_ctx, ctx, is_update); +} + +int +rte_cryptodev_dp_sym_submit_single_job(struct rte_crypto_dp_service_ctx *ctx, + struct rte_crypto_vec *data, uint16_t n_data_vecs, + union rte_crypto_sym_ofs ofs, + union rte_crypto_sym_additional_data *additional_data, + void *opaque) +{ + return _cryptodev_dp_submit_single_job(ctx, data, n_data_vecs, ofs, + additional_data, opaque); +} + +uint32_t +rte_cryptodev_dp_sym_submit_vec(struct rte_crypto_dp_service_ctx *ctx, + struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs, + void **opaque) +{ + return (*ctx->submit_vec)(ctx->qp_data, ctx->drv_service_data, vec, + ofs, opaque); +} + +int +rte_cryptodev_dp_sym_dequeue_single_job(struct rte_crypto_dp_service_ctx *ctx, + void **out_opaque) +{ + return _cryptodev_dp_sym_dequeue_single_job(ctx, out_opaque); +} + +void +rte_cryptodev_dp_sym_submit_done(struct rte_crypto_dp_service_ctx *ctx, + uint32_t n) +{ + (*ctx->submit_done)(ctx->qp_data, ctx->drv_service_data, n); +} + +void +rte_cryptodev_dp_sym_dequeue_done(struct rte_crypto_dp_service_ctx *ctx, + uint32_t n) +{ + (*ctx->dequeue_done)(ctx->qp_data, ctx->drv_service_data, n); +} + +uint32_t +rte_cryptodev_dp_sym_dequeue(struct rte_crypto_dp_service_ctx *ctx, + rte_cryptodev_get_dequeue_count_t get_dequeue_count, + rte_cryptodev_post_dequeue_t post_dequeue, + void **out_opaque, uint8_t is_opaque_array, + uint32_t *n_success_jobs) +{ + return (*ctx->dequeue_opaque)(ctx->qp_data, ctx->drv_service_data, + get_dequeue_count, post_dequeue, out_opaque, is_opaque_array, + n_success_jobs); +} + /** Initialise rte_crypto_op mempool element */ static void rte_crypto_op_init(struct rte_mempool *mempool, diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h index 7b3ebc20f..5072b3a40 100644 --- a/lib/librte_cryptodev/rte_cryptodev.h +++ b/lib/librte_cryptodev/rte_cryptodev.h @@ -466,7 +466,8 @@ 
rte_cryptodev_asym_get_xform_enum(enum rte_crypto_asym_xform_type *xform_enum, /**< Support symmetric session-less operations */ #define RTE_CRYPTODEV_FF_NON_BYTE_ALIGNED_DATA (1ULL << 23) /**< Support operations on data which is not byte aligned */ - +#define RTE_CRYPTODEV_FF_DATA_PATH_SERVICE (1ULL << 24) +/**< Support accelerated specific raw data as input */ /** * Get the name of a crypto device feature flag @@ -1351,6 +1352,335 @@ rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id, struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs ofs, struct rte_crypto_sym_vec *vec); +/** + * Get the size of the data-path service context for all registered drivers. + * + * @param dev_id The device identifier. + * + * @return + * - If the device supports data-path service, return the context size. + * - If the device does not support the data-dath service, return -1. + */ +__rte_experimental +int +rte_cryptodev_dp_get_service_ctx_data_size(uint8_t dev_id); + +/** + * Union of different crypto session types, including session-less xform + * pointer. + */ +union rte_cryptodev_session_ctx { + struct rte_cryptodev_sym_session *crypto_sess; + struct rte_crypto_sym_xform *xform; + struct rte_security_session *sec_sess; +}; + +/** + * Submit a data vector into device queue but the driver will not start + * processing until rte_cryptodev_dp_sym_submit_vec() is called. + * + * @param qp Driver specific queue pair data. + * @param service_data Driver specific service data. + * @param vec The array of job vectors. + * @param ofs Start and stop offsets for auth and cipher + * operations. + * @param opaque The array of opaque data for dequeue. + * @return + * - The number of jobs successfully submitted. + */ +typedef uint32_t (*cryptodev_dp_sym_submit_vec_t)( + void *qp, uint8_t *service_data, struct rte_crypto_sym_vec *vec, + union rte_crypto_sym_ofs ofs, void **opaque); + +/** + * Submit single job into device queue but the driver will not start + * processing until rte_cryptodev_dp_sym_submit_vec() is called. + * + * @param qp Driver specific queue pair data. + * @param service_data Driver specific service data. + * @param data The buffer vector. + * @param n_data_vecs Number of buffer vectors. + * @param ofs Start and stop offsets for auth and cipher + * operations. + * @param additional_data IV, digest, and aad data. + * @param opaque The opaque data for dequeue. + * @return + * - On success return 0. + * - On failure return negative integer. + */ +typedef int (*cryptodev_dp_submit_single_job_t)( + void *qp, uint8_t *service_data, struct rte_crypto_vec *data, + uint16_t n_data_vecs, union rte_crypto_sym_ofs ofs, + union rte_crypto_sym_additional_data *additional_data, + void *opaque); + +/** + * Inform the queue pair to start processing or finish dequeuing all + * submitted/dequeued jobs. + * + * @param qp Driver specific queue pair data. + * @param service_data Driver specific service data. + * @param n The total number of submitted jobs. + */ +typedef void (*cryptodev_dp_sym_operation_done_t)(void *qp, + uint8_t *service_data, uint32_t n); + +/** + * Typedef that the user provided for the driver to get the dequeue count. + * The function may return a fixed number or the number parsed from the opaque + * data stored in the first processed job. + * + * @param opaque Dequeued opaque data. + **/ +typedef uint32_t (*rte_cryptodev_get_dequeue_count_t)(void *opaque); + +/** + * Typedef that the user provided to deal with post dequeue operation, such + * as filling status. 
+ * + * @param opaque Dequeued opaque data. In case + * RTE_CRYPTO_HW_DP_FF_GET_OPAQUE_ARRAY bit is + * set, this value will be the opaque data stored + * in the specific processed jobs referenced by + * index, otherwise it will be the opaque data + * stored in the first processed job in the burst. + * @param index Index number of the processed job. + * @param is_op_success Driver filled operation status. + **/ +typedef void (*rte_cryptodev_post_dequeue_t)(void *opaque, uint32_t index, + uint8_t is_op_success); + +/** + * Dequeue symmetric crypto processing of user provided data. + * + * @param qp Driver specific queue pair data. + * @param service_data Driver specific service data. + * @param get_dequeue_count User provided callback function to + * obtain dequeue count. + * @param post_dequeue User provided callback function to + * post-process a dequeued operation. + * @param out_opaque Opaque pointer array to be retrieve from + * device queue. In case of + * *is_opaque_array* is set there should + * be enough room to store all opaque data. + * @param is_opaque_array Set 1 if every dequeued job will be + * written the opaque data into + * *out_opaque* array. + * @param n_success_jobs Driver written value to specific the + * total successful operations count. + * + * @return + * - Returns number of dequeued packets. + */ +typedef uint32_t (*cryptodev_dp_sym_dequeue_t)(void *qp, uint8_t *service_data, + rte_cryptodev_get_dequeue_count_t get_dequeue_count, + rte_cryptodev_post_dequeue_t post_dequeue, + void **out_opaque, uint8_t is_opaque_array, + uint32_t *n_success_jobs); + +/** + * Dequeue symmetric crypto processing of user provided data. + * + * @param qp Driver specific queue pair data. + * @param service_data Driver specific service data. + * @param out_opaque Opaque pointer to be retrieve from + * device queue. + * + * @return + * - 1 if the job is dequeued and the operation is a success. + * - 0 if the job is dequeued but the operation is failed. + * - -1 if no job is dequeued. + */ +typedef int (*cryptodev_dp_sym_dequeue_single_job_t)( + void *qp, uint8_t *service_data, void **out_opaque); + +/** + * Context data for asynchronous crypto process. + */ +struct rte_crypto_dp_service_ctx { + void *qp_data; + + struct { + cryptodev_dp_submit_single_job_t submit_single_job; + cryptodev_dp_sym_submit_vec_t submit_vec; + cryptodev_dp_sym_operation_done_t submit_done; + cryptodev_dp_sym_dequeue_t dequeue_opaque; + cryptodev_dp_sym_dequeue_single_job_t dequeue_single; + cryptodev_dp_sym_operation_done_t dequeue_done; + }; + + /* Driver specific service data */ + __extension__ uint8_t drv_service_data[]; +}; + +/** + * Configure one DP service context data. Calling this function for the first + * time the user should unset the *is_update* parameter and the driver will + * fill necessary operation data into ctx buffer. Only when + * rte_cryptodev_dp_submit_done() is called the data stored in the ctx buffer + * will not be effective. + * + * @param dev_id The device identifier. + * @param qp_id The index of the queue pair from which to + * retrieve processed packets. The value must be + * in the range [0, nb_queue_pair - 1] previously + * supplied to rte_cryptodev_configure(). + * @param service_type Type of the service requested. + * @param sess_type session type. + * @param session_ctx Session context data. + * @param ctx The data-path service context data. 
+ * @param is_update Set 1 if ctx is pre-initialized but need + * update to different service type or session, + * but the rest driver data remains the same. + * Since service context data buffer is provided + * by user, the driver will not check the + * validity of the buffer nor its content. It is + * the user's obligation to initialize and + * uses the buffer properly by setting this field. + * @return + * - On success return 0. + * - On failure return negative integer. + */ +__rte_experimental +int +rte_cryptodev_dp_configure_service(uint8_t dev_id, uint16_t qp_id, + enum rte_crypto_dp_service service_type, + enum rte_crypto_op_sess_type sess_type, + union rte_cryptodev_session_ctx session_ctx, + struct rte_crypto_dp_service_ctx *ctx, uint8_t is_update); + +static __rte_always_inline int +_cryptodev_dp_submit_single_job(struct rte_crypto_dp_service_ctx *ctx, + struct rte_crypto_vec *data, uint16_t n_data_vecs, + union rte_crypto_sym_ofs ofs, + union rte_crypto_sym_additional_data *additional_data, + void *opaque) +{ + return (*ctx->submit_single_job)(ctx->qp_data, ctx->drv_service_data, + data, n_data_vecs, ofs, additional_data, opaque); +} + +static __rte_always_inline int +_cryptodev_dp_sym_dequeue_single_job(struct rte_crypto_dp_service_ctx *ctx, + void **out_opaque) +{ + return (*ctx->dequeue_single)(ctx->qp_data, ctx->drv_service_data, + out_opaque); +} + +/** + * Submit single job into device queue but the driver will not start + * processing until rte_cryptodev_dp_submit_done() is called. This is a + * simplified + * + * @param ctx The initialized data-path service context data. + * @param data The buffer vector. + * @param n_data_vecs Number of buffer vectors. + * @param ofs Start and stop offsets for auth and cipher + * operations. + * @param additional_data IV, digest, and aad + * @param opaque The array of opaque data for dequeue. + * @return + * - On success return 0. + * - On failure return negative integer. + */ +__rte_experimental +int +rte_cryptodev_dp_sym_submit_single_job(struct rte_crypto_dp_service_ctx *ctx, + struct rte_crypto_vec *data, uint16_t n_data_vecs, + union rte_crypto_sym_ofs ofs, + union rte_crypto_sym_additional_data *additional_data, + void *opaque); + +/** + * Submit a data vector into device queue but the driver will not start + * processing until rte_cryptodev_dp_submit_done() is called. + * + * @param ctx The initialized data-path service context data. + * @param vec The array of job vectors. + * @param ofs Start and stop offsets for auth and cipher operations. + * @param opaque The array of opaque data for dequeue. + * @return + * - The number of jobs successfully submitted. + */ +__rte_experimental +uint32_t +rte_cryptodev_dp_sym_submit_vec(struct rte_crypto_dp_service_ctx *ctx, + struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs, + void **opaque); + +/** + * Command the queue pair to start processing all submitted jobs from last + * rte_cryptodev_init_dp_service() call. + * + * @param ctx The initialized data-path service context data. + * @param n The total number of submitted jobs. + */ +__rte_experimental +void +rte_cryptodev_dp_sym_submit_done(struct rte_crypto_dp_service_ctx *ctx, + uint32_t n); + +/** + * Dequeue symmetric crypto processing of user provided data. + * + * @param ctx The initialized data-path service + * context data. + * @param get_dequeue_count User provided callback function to + * obtain dequeue count. + * @param post_dequeue User provided callback function to + * post-process a dequeued operation. 
+ * @param out_opaque Opaque pointer array to be retrieve from + * device queue. In case of + * *is_opaque_array* is set there should + * be enough room to store all opaque data. + * @param is_opaque_array Set 1 if every dequeued job will be + * written the opaque data into + * *out_opaque* array. + * @param n_success_jobs Driver written value to specific the + * total successful operations count. + * + * @return + * - Returns number of dequeued packets. + */ +__rte_experimental +uint32_t +rte_cryptodev_dp_sym_dequeue(struct rte_crypto_dp_service_ctx *ctx, + rte_cryptodev_get_dequeue_count_t get_dequeue_count, + rte_cryptodev_post_dequeue_t post_dequeue, + void **out_opaque, uint8_t is_opaque_array, + uint32_t *n_success_jobs); + +/** + * Dequeue Single symmetric crypto processing of user provided data. + * + * @param ctx The initialized data-path service + * context data. + * @param out_opaque Opaque pointer to be retrieve from + * device queue. The driver shall support + * NULL input of this parameter. + * + * @return + * - 1 if the job is dequeued and the operation is a success. + * - 0 if the job is dequeued but the operation is failed. + * - -1 if no job is dequeued. + */ +__rte_experimental +int +rte_cryptodev_dp_sym_dequeue_single_job(struct rte_crypto_dp_service_ctx *ctx, + void **out_opaque); + +/** + * Inform the queue pair dequeue jobs finished. + * + * @param ctx The initialized data-path service context data. + * @param n The total number of jobs already dequeued. + */ +__rte_experimental +void +rte_cryptodev_dp_sym_dequeue_done(struct rte_crypto_dp_service_ctx *ctx, + uint32_t n); + #ifdef __cplusplus } #endif diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h b/lib/librte_cryptodev/rte_cryptodev_pmd.h index 81975d72b..e19de458c 100644 --- a/lib/librte_cryptodev/rte_cryptodev_pmd.h +++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h @@ -316,6 +316,42 @@ typedef uint32_t (*cryptodev_sym_cpu_crypto_process_t) (struct rte_cryptodev *dev, struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs ofs, struct rte_crypto_sym_vec *vec); +/** + * Typedef that the driver provided to get service context private date size. + * + * @param dev Crypto device pointer. + * + * @return + * - On success return the size of the device's service context private data. + * - On failure return negative integer. + */ +typedef int (*cryptodev_dp_get_service_ctx_size_t)( + struct rte_cryptodev *dev); + +/** + * Typedef that the driver provided to configure data-path service. + * + * @param dev Crypto device pointer. + * @param qp_id Crypto device queue pair index. + * @param service_type Type of the service requested. + * @param sess_type session type. + * @param session_ctx Session context data. + * @param ctx The data-path service context data. + * @param is_update Set 1 if ctx is pre-initialized but need + * update to different service type or session, + * but the rest driver data remains the same. + * buffer will always be one. + * @return + * - On success return 0. + * - On failure return negative integer. + */ +typedef int (*cryptodev_dp_configure_service_t)( + struct rte_cryptodev *dev, uint16_t qp_id, + enum rte_crypto_dp_service service_type, + enum rte_crypto_op_sess_type sess_type, + union rte_cryptodev_session_ctx session_ctx, + struct rte_crypto_dp_service_ctx *ctx, + uint8_t is_update); /** Crypto device operations function pointer table */ struct rte_cryptodev_ops { @@ -348,8 +384,16 @@ struct rte_cryptodev_ops { /**< Clear a Crypto sessions private data. 
*/ cryptodev_asym_free_session_t asym_session_clear; /**< Clear a Crypto sessions private data. */ - cryptodev_sym_cpu_crypto_process_t sym_cpu_process; - /**< process input data synchronously (cpu-crypto). */ + union { + cryptodev_sym_cpu_crypto_process_t sym_cpu_process; + /**< process input data synchronously (cpu-crypto). */ + struct { + cryptodev_dp_get_service_ctx_size_t get_drv_ctx_size; + /**< Get data path service context data size. */ + cryptodev_dp_configure_service_t configure_service; + /**< Initialize crypto service ctx data. */ + }; + }; }; diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map index 02f6dcf72..10388ae90 100644 --- a/lib/librte_cryptodev/rte_cryptodev_version.map +++ b/lib/librte_cryptodev/rte_cryptodev_version.map @@ -105,4 +105,14 @@ EXPERIMENTAL { # added in 20.08 rte_cryptodev_get_qp_status; + + # added in 20.11 + rte_cryptodev_dp_configure_service; + rte_cryptodev_dp_get_service_ctx_data_size; + rte_cryptodev_dp_sym_dequeue; + rte_cryptodev_dp_sym_dequeue_done; + rte_cryptodev_dp_sym_dequeue_single_job; + rte_cryptodev_dp_sym_submit_done; + rte_cryptodev_dp_sym_submit_single_job; + rte_cryptodev_dp_sym_submit_vec; };
From patchwork Fri Sep 4 15:25:37 2020
X-Patchwork-Submitter: Fan Zhang
X-Patchwork-Id: 76568
X-Patchwork-Delegate: gakhil@marvell.com
From: Fan Zhang
To: dev@dpdk.org
Cc: akhil.goyal@nxp.com, fiona.trahe@intel.com, arkadiuszx.kusztal@intel.com, adamx.dybkowski@intel.com, Fan Zhang
Date: Fri, 4 Sep 2020 16:25:37 +0100
Message-Id: <20200904152539.20608-3-roy.fan.zhang@intel.com>
In-Reply-To: <20200904152539.20608-1-roy.fan.zhang@intel.com>
References: <20200828125815.21614-1-roy.fan.zhang@intel.com> <20200904152539.20608-1-roy.fan.zhang@intel.com>
Subject: [dpdk-dev] [dpdk-dev v8 2/4] crypto/qat: add crypto data-path service API support
Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch updates QAT PMD to add crypto service API support. Signed-off-by: Fan Zhang --- drivers/common/qat/Makefile | 1 + drivers/crypto/qat/meson.build | 1 + drivers/crypto/qat/qat_sym.h | 13 + drivers/crypto/qat/qat_sym_hw_dp.c | 941 +++++++++++++++++++++++++++++ drivers/crypto/qat/qat_sym_pmd.c | 9 +- 5 files changed, 963 insertions(+), 2 deletions(-) create mode 100644 drivers/crypto/qat/qat_sym_hw_dp.c diff --git a/drivers/common/qat/Makefile b/drivers/common/qat/Makefile index 85d420709..1b71bbbab 100644 --- a/drivers/common/qat/Makefile +++ b/drivers/common/qat/Makefile @@ -42,6 +42,7 @@ endif SRCS-y += qat_sym.c SRCS-y += qat_sym_session.c SRCS-y += qat_sym_pmd.c + SRCS-y += qat_sym_hw_dp.c build_qat = yes endif endif diff --git a/drivers/crypto/qat/meson.build b/drivers/crypto/qat/meson.build index a225f374a..bc90ec44c 100644 --- a/drivers/crypto/qat/meson.build +++ b/drivers/crypto/qat/meson.build @@ -15,6 +15,7 @@ if dep.found() qat_sources += files('qat_sym_pmd.c', 'qat_sym.c', 'qat_sym_session.c', + 'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c') qat_ext_deps += dep diff --git a/drivers/crypto/qat/qat_sym.h b/drivers/crypto/qat/qat_sym.h index 1a9748849..ea2db0ca0 100644 --- a/drivers/crypto/qat/qat_sym.h +++ b/drivers/crypto/qat/qat_sym.h @@ -264,6 +264,18 @@ qat_sym_process_response(void **op, uint8_t *resp) } *op = (void *)rx_op; } + +int +qat_sym_dp_configure_service_ctx(struct rte_cryptodev *dev, uint16_t qp_id, + enum rte_crypto_dp_service service_type, + enum rte_crypto_op_sess_type sess_type, + union rte_cryptodev_session_ctx session_ctx, + struct rte_crypto_dp_service_ctx *service_ctx, + uint8_t is_update); + +int +qat_sym_get_service_ctx_size(struct rte_cryptodev *dev); + #else static inline void @@ -276,5 +288,6 @@ static inline void qat_sym_process_response(void **op __rte_unused, uint8_t *resp __rte_unused) { } + #endif #endif /* _QAT_SYM_H_ */ diff --git a/drivers/crypto/qat/qat_sym_hw_dp.c b/drivers/crypto/qat/qat_sym_hw_dp.c new file mode 100644 index 000000000..81887bb96 --- /dev/null +++ b/drivers/crypto/qat/qat_sym_hw_dp.c @@ -0,0 +1,941 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2020 Intel Corporation + */ + +#include + +#include "adf_transport_access_macros.h" +#include "icp_qat_fw.h" +#include "icp_qat_fw_la.h" + +#include "qat_sym.h" +#include "qat_sym_pmd.h" +#include "qat_sym_session.h" +#include "qat_qp.h" + +struct qat_sym_dp_service_ctx { + struct qat_sym_session *session; + uint32_t tail; + uint32_t head; + uint16_t cached_enqueue; + uint16_t cached_dequeue; + enum rte_crypto_dp_service last_service_type; +}; + +static __rte_always_inline int32_t +qat_sym_dp_get_data(struct qat_qp *qp, struct icp_qat_fw_la_bulk_req *req, + struct rte_crypto_vec *data, uint16_t n_data_vecs) +{ + struct qat_queue *tx_queue; + struct qat_sym_op_cookie *cookie; + struct qat_sgl *list; + uint32_t i; + uint32_t total_len; + + if (likely(n_data_vecs == 1)) { + req->comn_mid.src_data_addr = req->comn_mid.dest_data_addr = + data[0].iova; + req->comn_mid.src_length = req->comn_mid.dst_length = + data[0].len; + return data[0].len; + } + + if (n_data_vecs == 0 || n_data_vecs > QAT_SYM_SGL_MAX_NUMBER) + return -1; + + total_len = 0; + tx_queue = &qp->tx_q; + + ICP_QAT_FW_COMN_PTR_TYPE_SET(req->comn_hdr.comn_req_flags, + QAT_COMN_PTR_TYPE_SGL); + cookie = qp->op_cookies[tx_queue->tail >> tx_queue->trailz]; + list = (struct qat_sgl *)&cookie->qat_sgl_src; + + for (i = 0; i < n_data_vecs; i++) { + 
list->buffers[i].len = data[i].len; + list->buffers[i].resrvd = 0; + list->buffers[i].addr = data[i].iova; + if (total_len + data[i].len > UINT32_MAX) { + QAT_DP_LOG(ERR, "Message too long"); + return -1; + } + total_len += data[i].len; + } + + list->num_bufs = i; + req->comn_mid.src_data_addr = req->comn_mid.dest_data_addr = + cookie->qat_sgl_src_phys_addr; + req->comn_mid.src_length = req->comn_mid.dst_length = 0; + return total_len; +} + +static __rte_always_inline void +set_cipher_iv(struct icp_qat_fw_la_cipher_req_params *cipher_param, + union rte_crypto_sym_additional_data *a_data, uint32_t iv_len, + struct icp_qat_fw_la_bulk_req *qat_req) +{ + /* copy IV into request if it fits */ + if (iv_len <= sizeof(cipher_param->u.cipher_IV_array)) + rte_memcpy(cipher_param->u.cipher_IV_array, + a_data->cipher_auth.cipher_iv_ptr, iv_len); + else { + ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET( + qat_req->comn_hdr.serv_specif_flags, + ICP_QAT_FW_CIPH_IV_64BIT_PTR); + cipher_param->u.s.cipher_IV_ptr = + a_data->cipher_auth.cipher_iv_iova; + } +} + +#define QAT_SYM_DP_IS_RESP_SUCCESS(resp) \ + (ICP_QAT_FW_COMN_STATUS_FLAG_OK == \ + ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(resp->comn_hdr.comn_status)) + +static __rte_always_inline void +qat_sym_dp_fill_vec_status(int32_t *sta, int status, uint32_t n) +{ + uint32_t i; + + for (i = 0; i < n; i++) + sta[i] = status; +} + +#define QAT_SYM_DP_CHECK_ENQ_POSSIBLE(q, c, n) \ + (q->enqueued - q->dequeued + c + n < q->max_inflights) + +static __rte_always_inline void +submit_one_aead_job(struct qat_sym_session *ctx, + struct icp_qat_fw_la_bulk_req *req, + union rte_crypto_sym_additional_data *a_data, + union rte_crypto_sym_ofs ofs, uint32_t data_len) +{ + struct icp_qat_fw_la_cipher_req_params *cipher_param = + (void *)&req->serv_specif_rqpars; + struct icp_qat_fw_la_auth_req_params *auth_param = + (void *)((uint8_t *)&req->serv_specif_rqpars + + ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET); + uint8_t *aad_data; + uint8_t aad_ccm_real_len; + uint8_t aad_len_field_sz; + uint32_t msg_len_be; + rte_iova_t aad_iova = 0; + uint8_t q; + + switch (ctx->qat_hash_alg) { + case ICP_QAT_HW_AUTH_ALGO_GALOIS_128: + case ICP_QAT_HW_AUTH_ALGO_GALOIS_64: + ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET( + req->comn_hdr.serv_specif_flags, + ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS); + rte_memcpy(cipher_param->u.cipher_IV_array, + a_data->aead.iv_ptr, ctx->cipher_iv.length); + aad_iova = a_data->aead.aad_iova; + break; + case ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC: + aad_data = a_data->aead.aad_ptr; + aad_iova = a_data->aead.aad_iova; + aad_ccm_real_len = 0; + aad_len_field_sz = 0; + msg_len_be = rte_bswap32((uint32_t)data_len - + ofs.ofs.cipher.head); + + if (ctx->aad_len > ICP_QAT_HW_CCM_AAD_DATA_OFFSET) { + aad_len_field_sz = ICP_QAT_HW_CCM_AAD_LEN_INFO; + aad_ccm_real_len = ctx->aad_len - + ICP_QAT_HW_CCM_AAD_B0_LEN - + ICP_QAT_HW_CCM_AAD_LEN_INFO; + } else { + aad_data = a_data->aead.iv_ptr; + aad_iova = a_data->aead.iv_iova; + } + + q = ICP_QAT_HW_CCM_NQ_CONST - ctx->cipher_iv.length; + aad_data[0] = ICP_QAT_HW_CCM_BUILD_B0_FLAGS( + aad_len_field_sz, ctx->digest_length, q); + if (q > ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE) { + memcpy(aad_data + ctx->cipher_iv.length + + ICP_QAT_HW_CCM_NONCE_OFFSET + (q - + ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE), + (uint8_t *)&msg_len_be, + ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE); + } else { + memcpy(aad_data + ctx->cipher_iv.length + + ICP_QAT_HW_CCM_NONCE_OFFSET, + (uint8_t *)&msg_len_be + + (ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE + - q), q); + } + + if (aad_len_field_sz > 0) { 
+ *(uint16_t *)&aad_data[ICP_QAT_HW_CCM_AAD_B0_LEN] = + rte_bswap16(aad_ccm_real_len); + + if ((aad_ccm_real_len + aad_len_field_sz) + % ICP_QAT_HW_CCM_AAD_B0_LEN) { + uint8_t pad_len = 0; + uint8_t pad_idx = 0; + + pad_len = ICP_QAT_HW_CCM_AAD_B0_LEN - + ((aad_ccm_real_len + + aad_len_field_sz) % + ICP_QAT_HW_CCM_AAD_B0_LEN); + pad_idx = ICP_QAT_HW_CCM_AAD_B0_LEN + + aad_ccm_real_len + + aad_len_field_sz; + memset(&aad_data[pad_idx], 0, pad_len); + } + } + + rte_memcpy(((uint8_t *)cipher_param->u.cipher_IV_array) + + ICP_QAT_HW_CCM_NONCE_OFFSET, + (uint8_t *)a_data->aead.iv_ptr + + ICP_QAT_HW_CCM_NONCE_OFFSET, ctx->cipher_iv.length); + *(uint8_t *)&cipher_param->u.cipher_IV_array[0] = + q - ICP_QAT_HW_CCM_NONCE_OFFSET; + + rte_memcpy((uint8_t *)a_data->aead.aad_ptr + + ICP_QAT_HW_CCM_NONCE_OFFSET, + (uint8_t *)a_data->aead.iv_ptr + + ICP_QAT_HW_CCM_NONCE_OFFSET, + ctx->cipher_iv.length); + break; + default: + break; + } + + cipher_param->cipher_offset = ofs.ofs.cipher.head; + cipher_param->cipher_length = data_len - ofs.ofs.cipher.head - + ofs.ofs.cipher.tail; + auth_param->auth_off = ofs.ofs.cipher.head; + auth_param->auth_len = cipher_param->cipher_length; + auth_param->auth_res_addr = a_data->aead.digest_iova; + auth_param->u1.aad_adr = aad_iova; + + if (ctx->is_single_pass) { + cipher_param->spc_aad_addr = aad_iova; + cipher_param->spc_auth_res_addr = a_data->aead.digest_iova; + } +} + +static __rte_always_inline int +qat_sym_dp_submit_single_aead(void *qp_data, uint8_t *service_data, + struct rte_crypto_vec *data, uint16_t n_data_vecs, + union rte_crypto_sym_ofs ofs, + union rte_crypto_sym_additional_data *a_data, + void *opaque) +{ + struct qat_qp *qp = qp_data; + struct qat_sym_dp_service_ctx *dp_ctx = (void *)service_data; + struct qat_queue *tx_queue = &qp->tx_q; + struct qat_sym_session *ctx = dp_ctx->session; + struct icp_qat_fw_la_bulk_req *req; + int32_t data_len; + uint32_t tail = dp_ctx->tail; + + req = (struct icp_qat_fw_la_bulk_req *)( + (uint8_t *)tx_queue->base_addr + tail); + tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; + rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); + rte_prefetch0((uint8_t *)tx_queue->base_addr + tail); + data_len = qat_sym_dp_get_data(qp, req, data, n_data_vecs); + if (unlikely(data_len < 0)) + return -1; + req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque; + + submit_one_aead_job(ctx, req, a_data, ofs, + (uint32_t)data_len); + + dp_ctx->tail = tail; + dp_ctx->cached_enqueue++; + + return 0; +} + +static __rte_always_inline uint32_t +qat_sym_dp_submit_aead_jobs(void *qp_data, uint8_t *service_data, + struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs, + void **opaque) +{ + struct qat_qp *qp = qp_data; + struct qat_sym_dp_service_ctx *dp_ctx = (void *)service_data; + struct qat_queue *tx_queue = &qp->tx_q; + struct qat_sym_session *ctx = dp_ctx->session; + uint32_t i; + uint32_t tail; + struct icp_qat_fw_la_bulk_req *req; + int32_t data_len; + + if (unlikely(QAT_SYM_DP_CHECK_ENQ_POSSIBLE(qp, + dp_ctx->cached_enqueue, vec->num) == 0)) { + qat_sym_dp_fill_vec_status(vec->status, -1, vec->num); + return 0; + } + + tail = dp_ctx->tail; + + for (i = 0; i < vec->num; i++) { + req = (struct icp_qat_fw_la_bulk_req *)( + (uint8_t *)tx_queue->base_addr + tail); + rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); + + data_len = qat_sym_dp_get_data(qp, req, vec->sgl[i].vec, + vec->sgl[i].num) - ofs.ofs.cipher.head - + ofs.ofs.cipher.tail; + if (unlikely(data_len < 0)) + break; + req->comn_mid.opaque_data = 
(uint64_t)(uintptr_t)opaque[i]; + submit_one_aead_job(ctx, req, vec->additional_data + i, ofs, + (uint32_t)data_len); + tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; + } + + if (unlikely(i < vec->num)) + qat_sym_dp_fill_vec_status(vec->status + i, -1, vec->num - i); + + dp_ctx->tail = tail; + dp_ctx->cached_enqueue += i; + return i; +} + +static __rte_always_inline void +submit_one_cipher_job(struct qat_sym_session *ctx, + struct icp_qat_fw_la_bulk_req *req, + union rte_crypto_sym_additional_data *a_data, + union rte_crypto_sym_ofs ofs, uint32_t data_len) +{ + struct icp_qat_fw_la_cipher_req_params *cipher_param; + + cipher_param = (void *)&req->serv_specif_rqpars; + + /* cipher IV */ + set_cipher_iv(cipher_param, a_data, ctx->cipher_iv.length, req); + cipher_param->cipher_offset = ofs.ofs.cipher.head; + cipher_param->cipher_length = data_len - ofs.ofs.cipher.head - + ofs.ofs.cipher.tail; +} + +static __rte_always_inline int +qat_sym_dp_submit_single_cipher(void *qp_data, uint8_t *service_data, + struct rte_crypto_vec *data, uint16_t n_data_vecs, + union rte_crypto_sym_ofs ofs, + union rte_crypto_sym_additional_data *a_data, + void *opaque) +{ + struct qat_qp *qp = qp_data; + struct qat_sym_dp_service_ctx *dp_ctx = (void *)service_data; + struct qat_queue *tx_queue = &qp->tx_q; + struct qat_sym_session *ctx = dp_ctx->session; + struct icp_qat_fw_la_bulk_req *req; + int32_t data_len; + uint32_t tail = dp_ctx->tail; + + req = (struct icp_qat_fw_la_bulk_req *)( + (uint8_t *)tx_queue->base_addr + tail); + tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; + rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); + rte_prefetch0((uint8_t *)tx_queue->base_addr + tail); + data_len = qat_sym_dp_get_data(qp, req, data, n_data_vecs); + if (unlikely(data_len < 0)) + return -1; + req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque; + + submit_one_cipher_job(ctx, req, a_data, ofs, (uint32_t)data_len); + + dp_ctx->tail = tail; + dp_ctx->cached_enqueue++; + + return 0; +} + +static __rte_always_inline uint32_t +qat_sym_dp_submit_cipher_jobs(void *qp_data, uint8_t *service_data, + struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs, + void **opaque) +{ + struct qat_qp *qp = qp_data; + struct qat_sym_dp_service_ctx *dp_ctx = (void *)service_data; + struct qat_queue *tx_queue = &qp->tx_q; + struct qat_sym_session *ctx = dp_ctx->session; + uint32_t i; + uint32_t tail; + struct icp_qat_fw_la_bulk_req *req; + int32_t data_len; + + if (unlikely(QAT_SYM_DP_CHECK_ENQ_POSSIBLE(qp, + dp_ctx->cached_enqueue, vec->num) == 0)) { + qat_sym_dp_fill_vec_status(vec->status, -1, vec->num); + return 0; + } + + tail = dp_ctx->tail; + + for (i = 0; i < vec->num; i++) { + req = (struct icp_qat_fw_la_bulk_req *)( + (uint8_t *)tx_queue->base_addr + tail); + rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); + + data_len = qat_sym_dp_get_data(qp, req, vec->sgl[i].vec, + vec->sgl[i].num) - ofs.ofs.cipher.head - + ofs.ofs.cipher.tail; + if (unlikely(data_len < 0)) + break; + req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque[i]; + submit_one_cipher_job(ctx, req, vec->additional_data + i, ofs, + (uint32_t)data_len); + tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; + } + + if (unlikely(i < vec->num)) + qat_sym_dp_fill_vec_status(vec->status + i, -1, vec->num - i); + + dp_ctx->tail = tail; + dp_ctx->cached_enqueue += i; + return i; +} + +static __rte_always_inline void +submit_one_auth_job(struct qat_sym_session *ctx, + struct icp_qat_fw_la_bulk_req *req, + union 
rte_crypto_sym_additional_data *a_data, + union rte_crypto_sym_ofs ofs, uint32_t data_len) +{ + struct icp_qat_fw_la_cipher_req_params *cipher_param; + struct icp_qat_fw_la_auth_req_params *auth_param; + + cipher_param = (void *)&req->serv_specif_rqpars; + auth_param = (void *)((uint8_t *)cipher_param + + ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET); + + auth_param->auth_off = ofs.ofs.auth.head; + auth_param->auth_len = data_len - ofs.ofs.auth.head - + ofs.ofs.auth.tail; + auth_param->auth_res_addr = a_data->cipher_auth.digest_iova; + + switch (ctx->qat_hash_alg) { + case ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2: + case ICP_QAT_HW_AUTH_ALGO_KASUMI_F9: + case ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3: + auth_param->u1.aad_adr = a_data->cipher_auth.auth_iv_iova; + break; + case ICP_QAT_HW_AUTH_ALGO_GALOIS_128: + case ICP_QAT_HW_AUTH_ALGO_GALOIS_64: + ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET( + req->comn_hdr.serv_specif_flags, + ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS); + rte_memcpy(cipher_param->u.cipher_IV_array, + a_data->cipher_auth.auth_iv_ptr, + ctx->auth_iv.length); + break; + default: + break; + } +} + +static __rte_always_inline int +qat_sym_dp_submit_single_auth(void *qp_data, uint8_t *service_data, + struct rte_crypto_vec *data, uint16_t n_data_vecs, + union rte_crypto_sym_ofs ofs, + union rte_crypto_sym_additional_data *a_data, void *opaque) +{ + struct qat_qp *qp = qp_data; + struct qat_sym_dp_service_ctx *dp_ctx = (void *)service_data; + struct qat_queue *tx_queue = &qp->tx_q; + struct qat_sym_session *ctx = dp_ctx->session; + struct icp_qat_fw_la_bulk_req *req; + int32_t data_len; + uint32_t tail = dp_ctx->tail; + + req = (struct icp_qat_fw_la_bulk_req *)( + (uint8_t *)tx_queue->base_addr + tail); + tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; + rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); + rte_prefetch0((uint8_t *)tx_queue->base_addr + tail); + data_len = qat_sym_dp_get_data(qp, req, data, n_data_vecs); + if (unlikely(data_len < 0)) + return -1; + req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque; + + submit_one_auth_job(ctx, req, a_data, ofs, (uint32_t)data_len); + + dp_ctx->tail = tail; + dp_ctx->cached_enqueue++; + + return 0; +} + +static __rte_always_inline uint32_t +qat_sym_dp_submit_auth_jobs(void *qp_data, uint8_t *service_data, + struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs, + void **opaque) +{ + struct qat_qp *qp = qp_data; + struct qat_sym_dp_service_ctx *dp_ctx = (void *)service_data; + struct qat_queue *tx_queue = &qp->tx_q; + struct qat_sym_session *ctx = dp_ctx->session; + uint32_t i; + uint32_t tail; + struct icp_qat_fw_la_bulk_req *req; + int32_t data_len; + + if (unlikely(QAT_SYM_DP_CHECK_ENQ_POSSIBLE(qp, + dp_ctx->cached_enqueue, vec->num) == 0)) { + qat_sym_dp_fill_vec_status(vec->status, -1, vec->num); + return 0; + } + + tail = dp_ctx->tail; + + for (i = 0; i < vec->num; i++) { + req = (struct icp_qat_fw_la_bulk_req *)( + (uint8_t *)tx_queue->base_addr + tail); + rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); + + data_len = qat_sym_dp_get_data(qp, req, vec->sgl[i].vec, + vec->sgl[i].num) - ofs.ofs.cipher.head - + ofs.ofs.cipher.tail; + if (unlikely(data_len < 0)) + break; + req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque[i]; + submit_one_auth_job(ctx, req, vec->additional_data + i, ofs, + (uint32_t)data_len); + tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; + } + + if (unlikely(i < vec->num)) + qat_sym_dp_fill_vec_status(vec->status + i, -1, vec->num - i); + + dp_ctx->tail = tail; + return i; 
+} + +static __rte_always_inline void +submit_one_chain_job(struct qat_sym_session *ctx, + struct icp_qat_fw_la_bulk_req *req, struct rte_crypto_vec *data, + uint16_t n_data_vecs, union rte_crypto_sym_additional_data *a_data, + union rte_crypto_sym_ofs ofs, uint32_t data_len) +{ + struct icp_qat_fw_la_cipher_req_params *cipher_param; + struct icp_qat_fw_la_auth_req_params *auth_param; + rte_iova_t auth_iova_end; + int32_t cipher_len, auth_len; + + cipher_param = (void *)&req->serv_specif_rqpars; + auth_param = (void *)((uint8_t *)cipher_param + + ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET); + + cipher_len = data_len - ofs.ofs.cipher.head - + ofs.ofs.cipher.tail; + auth_len = data_len - ofs.ofs.auth.head - ofs.ofs.auth.tail; + + assert(cipher_len > 0 && auth_len > 0); + + cipher_param->cipher_offset = ofs.ofs.cipher.head; + cipher_param->cipher_length = cipher_len; + set_cipher_iv(cipher_param, a_data, ctx->cipher_iv.length, req); + + auth_param->auth_off = ofs.ofs.auth.head; + auth_param->auth_len = auth_len; + auth_param->auth_res_addr = a_data->cipher_auth.digest_iova; + + switch (ctx->qat_hash_alg) { + case ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2: + case ICP_QAT_HW_AUTH_ALGO_KASUMI_F9: + case ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3: + auth_param->u1.aad_adr = a_data->cipher_auth.auth_iv_iova; + + if (unlikely(n_data_vecs > 1)) { + int auth_end_get = 0, i = n_data_vecs - 1; + struct rte_crypto_vec *cvec = &data[0]; + uint32_t len; + + len = data_len - ofs.ofs.auth.tail; + + while (i >= 0 && len > 0) { + if (cvec->len >= len) { + auth_iova_end = cvec->iova + + (cvec->len - len); + len = 0; + auth_end_get = 1; + break; + } + len -= cvec->len; + i--; + cvec++; + } + + assert(auth_end_get != 0); + } else + auth_iova_end = data[0].iova + auth_param->auth_off + + auth_param->auth_len; + + /* Then check if digest-encrypted conditions are met */ + if ((auth_param->auth_off + auth_param->auth_len < + cipher_param->cipher_offset + + cipher_param->cipher_length) && + (a_data->cipher_auth.digest_iova == auth_iova_end)) { + /* Handle partial digest encryption */ + if (cipher_param->cipher_offset + + cipher_param->cipher_length < + auth_param->auth_off + + auth_param->auth_len + + ctx->digest_length) + req->comn_mid.dst_length = + req->comn_mid.src_length = + auth_param->auth_off + + auth_param->auth_len + + ctx->digest_length; + struct icp_qat_fw_comn_req_hdr *header = + &req->comn_hdr; + ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET( + header->serv_specif_flags, + ICP_QAT_FW_LA_DIGEST_IN_BUFFER); + } + break; + case ICP_QAT_HW_AUTH_ALGO_GALOIS_128: + case ICP_QAT_HW_AUTH_ALGO_GALOIS_64: + break; + default: + break; + } +} + +static __rte_always_inline int +qat_sym_dp_submit_single_chain(void *qp_data, uint8_t *service_data, + struct rte_crypto_vec *data, uint16_t n_data_vecs, + union rte_crypto_sym_ofs ofs, + union rte_crypto_sym_additional_data *a_data, void *opaque) +{ + struct qat_qp *qp = qp_data; + struct qat_sym_dp_service_ctx *dp_ctx = (void *)service_data; + struct qat_queue *tx_queue = &qp->tx_q; + struct qat_sym_session *ctx = dp_ctx->session; + struct icp_qat_fw_la_bulk_req *req; + int32_t data_len; + uint32_t tail = dp_ctx->tail; + + req = (struct icp_qat_fw_la_bulk_req *)( + (uint8_t *)tx_queue->base_addr + tail); + tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; + rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); + rte_prefetch0((uint8_t *)tx_queue->base_addr + tail); + data_len = qat_sym_dp_get_data(qp, req, data, n_data_vecs); + if (unlikely(data_len < 0)) + return -1; + 
req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque; + + submit_one_chain_job(ctx, req, data, n_data_vecs, a_data, ofs, + (uint32_t)data_len); + + dp_ctx->tail = tail; + dp_ctx->cached_enqueue++; + + return 0; +} + +static __rte_always_inline uint32_t +qat_sym_dp_submit_chain_jobs(void *qp_data, uint8_t *service_data, + struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs, + void **opaque) +{ + struct qat_qp *qp = qp_data; + struct qat_sym_dp_service_ctx *dp_ctx = (void *)service_data; + struct qat_queue *tx_queue = &qp->tx_q; + struct qat_sym_session *ctx = dp_ctx->session; + uint32_t i; + uint32_t tail; + struct icp_qat_fw_la_bulk_req *req; + int32_t data_len; + + if (unlikely(QAT_SYM_DP_CHECK_ENQ_POSSIBLE(qp, + dp_ctx->cached_enqueue, vec->num) == 0)) { + qat_sym_dp_fill_vec_status(vec->status, -1, vec->num); + return 0; + } + + tail = dp_ctx->tail; + + for (i = 0; i < vec->num; i++) { + req = (struct icp_qat_fw_la_bulk_req *)( + (uint8_t *)tx_queue->base_addr + tail); + rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); + + data_len = qat_sym_dp_get_data(qp, req, vec->sgl[i].vec, + vec->sgl[i].num) - ofs.ofs.cipher.head - + ofs.ofs.cipher.tail; + if (unlikely(data_len < 0)) + break; + req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque[i]; + submit_one_chain_job(ctx, req, vec->sgl[i].vec, vec->sgl[i].num, + vec->additional_data + i, ofs, (uint32_t)data_len); + tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; + } + + if (unlikely(i < vec->num)) + qat_sym_dp_fill_vec_status(vec->status + i, -1, vec->num - i); + + dp_ctx->tail = tail; + dp_ctx->cached_enqueue += i; + return i; +} + +static __rte_always_inline uint32_t +qat_sym_dp_dequeue(void *qp_data, uint8_t *service_data, + rte_cryptodev_get_dequeue_count_t get_dequeue_count, + rte_cryptodev_post_dequeue_t post_dequeue, + void **out_opaque, uint8_t is_opaque_array, + uint32_t *n_success_jobs) +{ + struct qat_qp *qp = qp_data; + struct qat_sym_dp_service_ctx *dp_ctx = (void *)service_data; + struct qat_queue *rx_queue = &qp->rx_q; + struct icp_qat_fw_comn_resp *resp; + void *resp_opaque; + uint32_t i, n, inflight; + uint32_t head; + uint8_t status; + + *n_success_jobs = 0; + head = dp_ctx->head; + + inflight = qp->enqueued - qp->dequeued; + if (unlikely(inflight == 0)) + return 0; + + resp = (struct icp_qat_fw_comn_resp *)((uint8_t *)rx_queue->base_addr + + head); + /* no operation ready */ + if (unlikely(*(uint32_t *)resp == ADF_RING_EMPTY_SIG)) + return 0; + + resp_opaque = (void *)(uintptr_t)resp->opaque_data; + /* get the dequeue count */ + n = get_dequeue_count(resp_opaque); + if (unlikely(n == 0)) + return 0; + + out_opaque[0] = resp_opaque; + status = QAT_SYM_DP_IS_RESP_SUCCESS(resp); + post_dequeue(resp_opaque, 0, status); + *n_success_jobs += status; + + head = (head + rx_queue->msg_size) & rx_queue->modulo_mask; + + /* we already finished dequeue when n == 1 */ + if (unlikely(n == 1)) { + i = 1; + goto end_deq; + } + + if (is_opaque_array) { + for (i = 1; i < n; i++) { + resp = (struct icp_qat_fw_comn_resp *)( + (uint8_t *)rx_queue->base_addr + head); + if (unlikely(*(uint32_t *)resp == + ADF_RING_EMPTY_SIG)) + goto end_deq; + out_opaque[i] = (void *)(uintptr_t) + resp->opaque_data; + status = QAT_SYM_DP_IS_RESP_SUCCESS(resp); + *n_success_jobs += status; + post_dequeue(out_opaque[i], i, status); + head = (head + rx_queue->msg_size) & + rx_queue->modulo_mask; + } + + goto end_deq; + } + + /* opaque is not array */ + for (i = 1; i < n; i++) { + resp = (struct icp_qat_fw_comn_resp *)( + 
(uint8_t *)rx_queue->base_addr + head); + status = QAT_SYM_DP_IS_RESP_SUCCESS(resp); + if (unlikely(*(uint32_t *)resp == ADF_RING_EMPTY_SIG)) + goto end_deq; + head = (head + rx_queue->msg_size) & + rx_queue->modulo_mask; + post_dequeue(resp_opaque, i, status); + *n_success_jobs += status; + } + +end_deq: + dp_ctx->head = head; + dp_ctx->cached_dequeue += i; + return i; +} + +static __rte_always_inline int +qat_sym_dp_dequeue_single_job(void *qp_data, uint8_t *service_data, + void **out_opaque) +{ + struct qat_qp *qp = qp_data; + struct qat_sym_dp_service_ctx *dp_ctx = (void *)service_data; + struct qat_queue *rx_queue = &qp->rx_q; + + register struct icp_qat_fw_comn_resp *resp; + + resp = (struct icp_qat_fw_comn_resp *)((uint8_t *)rx_queue->base_addr + + dp_ctx->head); + + if (unlikely(*(uint32_t *)resp == ADF_RING_EMPTY_SIG)) + return -1; + + *out_opaque = (void *)(uintptr_t)resp->opaque_data; + + dp_ctx->head = (dp_ctx->head + rx_queue->msg_size) & + rx_queue->modulo_mask; + dp_ctx->cached_dequeue++; + + return QAT_SYM_DP_IS_RESP_SUCCESS(resp); +} + +static __rte_always_inline void +qat_sym_dp_kick_tail(void *qp_data, uint8_t *service_data, uint32_t n) +{ + struct qat_qp *qp = qp_data; + struct qat_queue *tx_queue = &qp->tx_q; + struct qat_sym_dp_service_ctx *dp_ctx = (void *)service_data; + + assert(dp_ctx->cached_enqueue == n); + + qp->enqueued += n; + qp->stats.enqueued_count += n; + + tx_queue->tail = dp_ctx->tail; + + WRITE_CSR_RING_TAIL(qp->mmap_bar_addr, + tx_queue->hw_bundle_number, + tx_queue->hw_queue_number, tx_queue->tail); + tx_queue->csr_tail = tx_queue->tail; + dp_ctx->cached_enqueue = 0; +} + +static __rte_always_inline void +qat_sym_dp_update_head(void *qp_data, uint8_t *service_data, uint32_t n) +{ + struct qat_qp *qp = qp_data; + struct qat_queue *rx_queue = &qp->rx_q; + struct qat_sym_dp_service_ctx *dp_ctx = (void *)service_data; + + assert(dp_ctx->cached_dequeue == n); + + rx_queue->head = dp_ctx->head; + rx_queue->nb_processed_responses += n; + qp->dequeued += n; + qp->stats.dequeued_count += n; + if (rx_queue->nb_processed_responses > QAT_CSR_HEAD_WRITE_THRESH) { + uint32_t old_head, new_head; + uint32_t max_head; + + old_head = rx_queue->csr_head; + new_head = rx_queue->head; + max_head = qp->nb_descriptors * rx_queue->msg_size; + + /* write out free descriptors */ + void *cur_desc = (uint8_t *)rx_queue->base_addr + old_head; + + if (new_head < old_head) { + memset(cur_desc, ADF_RING_EMPTY_SIG_BYTE, + max_head - old_head); + memset(rx_queue->base_addr, ADF_RING_EMPTY_SIG_BYTE, + new_head); + } else { + memset(cur_desc, ADF_RING_EMPTY_SIG_BYTE, new_head - + old_head); + } + rx_queue->nb_processed_responses = 0; + rx_queue->csr_head = new_head; + + /* write current head to CSR */ + WRITE_CSR_RING_HEAD(qp->mmap_bar_addr, + rx_queue->hw_bundle_number, rx_queue->hw_queue_number, + new_head); + } + dp_ctx->cached_dequeue = 0; +} + +int +qat_sym_dp_configure_service_ctx(struct rte_cryptodev *dev, uint16_t qp_id, + enum rte_crypto_dp_service service_type, + enum rte_crypto_op_sess_type sess_type, + union rte_cryptodev_session_ctx session_ctx, + struct rte_crypto_dp_service_ctx *service_ctx, + uint8_t is_update) +{ + struct qat_qp *qp; + struct qat_sym_session *ctx; + struct qat_sym_dp_service_ctx *dp_ctx; + + if (service_ctx == NULL || session_ctx.crypto_sess == NULL || + sess_type != RTE_CRYPTO_OP_WITH_SESSION) + return -EINVAL; + + qp = dev->data->queue_pairs[qp_id]; + ctx = (struct qat_sym_session *)get_sym_session_private_data( + session_ctx.crypto_sess, 
qat_sym_driver_id); + dp_ctx = (struct qat_sym_dp_service_ctx *) + service_ctx->drv_service_data; + + dp_ctx->session = ctx; + + if (!is_update) { + memset(service_ctx, 0, sizeof(*service_ctx) + + sizeof(struct qat_sym_dp_service_ctx)); + service_ctx->qp_data = dev->data->queue_pairs[qp_id]; + dp_ctx->tail = qp->tx_q.tail; + dp_ctx->head = qp->rx_q.head; + dp_ctx->cached_enqueue = dp_ctx->cached_dequeue = 0; + } else { + if (dp_ctx->last_service_type == service_type) + return 0; + } + + dp_ctx->last_service_type = service_type; + + service_ctx->submit_done = qat_sym_dp_kick_tail; + service_ctx->dequeue_opaque = qat_sym_dp_dequeue; + service_ctx->dequeue_single = qat_sym_dp_dequeue_single_job; + service_ctx->dequeue_done = qat_sym_dp_update_head; + + if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_HASH_CIPHER || + ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_HASH) { + /* AES-GCM or AES-CCM */ + if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128 || + ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64 || + (ctx->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_AES128 + && ctx->qat_mode == ICP_QAT_HW_CIPHER_CTR_MODE + && ctx->qat_hash_alg == + ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC)) { + if (service_type != RTE_CRYPTO_DP_SYM_AEAD) + return -1; + service_ctx->submit_vec = qat_sym_dp_submit_aead_jobs; + service_ctx->submit_single_job = + qat_sym_dp_submit_single_aead; + } else { + if (service_type != RTE_CRYPTO_DP_SYM_CHAIN) + return -1; + service_ctx->submit_vec = qat_sym_dp_submit_chain_jobs; + service_ctx->submit_single_job = + qat_sym_dp_submit_single_chain; + } + } else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_AUTH) { + if (service_type != RTE_CRYPTO_DP_SYM_AUTH_ONLY) + return -1; + service_ctx->submit_vec = qat_sym_dp_submit_auth_jobs; + service_ctx->submit_single_job = qat_sym_dp_submit_single_auth; + } else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER) { + if (service_type != RTE_CRYPTO_DP_SYM_CIPHER_ONLY) + return -1; + service_ctx->submit_vec = qat_sym_dp_submit_cipher_jobs; + service_ctx->submit_single_job = + qat_sym_dp_submit_single_cipher; + } + + return 0; +} + +int +qat_sym_get_service_ctx_size(__rte_unused struct rte_cryptodev *dev) +{ + return sizeof(struct qat_sym_dp_service_ctx); +} diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c index 314742f53..aaaf3e3f1 100644 --- a/drivers/crypto/qat/qat_sym_pmd.c +++ b/drivers/crypto/qat/qat_sym_pmd.c @@ -258,7 +258,11 @@ static struct rte_cryptodev_ops crypto_qat_ops = { /* Crypto related operations */ .sym_session_get_size = qat_sym_session_get_private_size, .sym_session_configure = qat_sym_session_configure, - .sym_session_clear = qat_sym_session_clear + .sym_session_clear = qat_sym_session_clear, + + /* Data plane service related operations */ + .get_drv_ctx_size = qat_sym_get_service_ctx_size, + .configure_service = qat_sym_dp_configure_service_ctx, }; #ifdef RTE_LIBRTE_SECURITY @@ -376,7 +380,8 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev, RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT | RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT | RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT | - RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED; + RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED | + RTE_CRYPTODEV_FF_DATA_PATH_SERVICE; if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; From patchwork Fri Sep 4 15:25:38 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Fan Zhang X-Patchwork-Id: 76569 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: 
patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id EC1FBA04C5; Fri, 4 Sep 2020 17:26:29 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id A29421C0DA; Fri, 4 Sep 2020 17:25:59 +0200 (CEST) Received: from mga12.intel.com (mga12.intel.com [192.55.52.136]) by dpdk.org (Postfix) with ESMTP id DD3EC1C113 for ; Fri, 4 Sep 2020 17:25:56 +0200 (CEST) IronPort-SDR: oxNtVtBpJuPvhsN5F73HrSkbhX544EJviiyPz4QJ3jEWky1QL/FSI387YXnDXtL95m18M3xIkn WjOMDFVgXLgQ== X-IronPort-AV: E=McAfee;i="6000,8403,9734"; a="137280805" X-IronPort-AV: E=Sophos;i="5.76,389,1592895600"; d="scan'208";a="137280805" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga005.jf.intel.com ([10.7.209.41]) by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Sep 2020 08:25:56 -0700 IronPort-SDR: btCk/Qr4MkC/e2o7TMXAOZYGiCYOzMn93/wVlrhwTYRRIJngLXnVaF1fAeMELYrs1Z8OQMKw5P pZYGEJpFQESw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.76,389,1592895600"; d="scan'208";a="478540576" Received: from silpixa00398673.ir.intel.com (HELO silpixa00398673.ger.corp.intel.com) ([10.237.223.136]) by orsmga005.jf.intel.com with ESMTP; 04 Sep 2020 08:25:54 -0700 From: Fan Zhang To: dev@dpdk.org Cc: akhil.goyal@nxp.com, fiona.trahe@intel.com, arkadiuszx.kusztal@intel.com, adamx.dybkowski@intel.com, Fan Zhang Date: Fri, 4 Sep 2020 16:25:38 +0100 Message-Id: <20200904152539.20608-4-roy.fan.zhang@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200904152539.20608-1-roy.fan.zhang@intel.com> References: <20200828125815.21614-1-roy.fan.zhang@intel.com> <20200904152539.20608-1-roy.fan.zhang@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [dpdk-dev v8 3/4] test/crypto: add unit-test for cryptodev direct APIs X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch adds the QAT test to use cryptodev symmetric crypto direct APIs. 
Signed-off-by: Fan Zhang --- app/test/test_cryptodev.c | 452 +++++++++++++++++++++++--- app/test/test_cryptodev.h | 7 + app/test/test_cryptodev_blockcipher.c | 51 ++- 3 files changed, 447 insertions(+), 63 deletions(-) diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c index 70bf6fe2c..387a3cf15 100644 --- a/app/test/test_cryptodev.c +++ b/app/test/test_cryptodev.c @@ -49,6 +49,8 @@ #define VDEV_ARGS_SIZE 100 #define MAX_NB_SESSIONS 4 +#define MAX_DRV_SERVICE_CTX_SIZE 256 + #define IN_PLACE 0 #define OUT_OF_PLACE 1 @@ -57,6 +59,8 @@ static int gbl_driver_id; static enum rte_security_session_action_type gbl_action_type = RTE_SECURITY_ACTION_TYPE_NONE; +int cryptodev_dp_test; + struct crypto_testsuite_params { struct rte_mempool *mbuf_pool; struct rte_mempool *large_mbuf_pool; @@ -147,6 +151,173 @@ ceil_byte_length(uint32_t num_bits) return (num_bits >> 3); } +void +process_sym_hw_api_op(uint8_t dev_id, uint16_t qp_id, struct rte_crypto_op *op, + uint8_t is_cipher, uint8_t is_auth, uint8_t len_in_bits, + uint8_t cipher_iv_len) +{ + int32_t n; + struct rte_crypto_sym_op *sop; + struct rte_crypto_op *ret_op = NULL; + struct rte_crypto_vec data_vec[UINT8_MAX]; + union rte_crypto_sym_additional_data a_data; + union rte_crypto_sym_ofs ofs; + int32_t status; + uint32_t max_len; + union rte_cryptodev_session_ctx sess; + enum rte_crypto_dp_service service_type; + uint32_t count = 0; + uint8_t service_data[MAX_DRV_SERVICE_CTX_SIZE] = {0}; + struct rte_crypto_dp_service_ctx *ctx = (void *)service_data; + uint32_t cipher_offset = 0, cipher_len = 0, auth_offset = 0, + auth_len = 0; + int ctx_service_size; + + sop = op->sym; + + sess.crypto_sess = sop->session; + + if (is_cipher && is_auth) { + service_type = RTE_CRYPTO_DP_SYM_CHAIN; + cipher_offset = sop->cipher.data.offset; + cipher_len = sop->cipher.data.length; + auth_offset = sop->auth.data.offset; + auth_len = sop->auth.data.length; + max_len = RTE_MAX(cipher_offset + cipher_len, + auth_offset + auth_len); + } else if (is_cipher) { + service_type = RTE_CRYPTO_DP_SYM_CIPHER_ONLY; + cipher_offset = sop->cipher.data.offset; + cipher_len = sop->cipher.data.length; + max_len = cipher_len + cipher_offset; + } else if (is_auth) { + service_type = RTE_CRYPTO_DP_SYM_AUTH_ONLY; + auth_offset = sop->auth.data.offset; + auth_len = sop->auth.data.length; + max_len = auth_len + auth_offset; + } else { /* aead */ + service_type = RTE_CRYPTO_DP_SYM_AEAD; + cipher_offset = sop->aead.data.offset; + cipher_len = sop->aead.data.length; + max_len = cipher_len + cipher_offset; + } + + if (len_in_bits) { + max_len = max_len >> 3; + cipher_offset = cipher_offset >> 3; + auth_offset = auth_offset >> 3; + cipher_len = cipher_len >> 3; + auth_len = auth_len >> 3; + } + + ctx_service_size = rte_cryptodev_dp_get_service_ctx_data_size(dev_id); + assert(ctx_service_size <= MAX_DRV_SERVICE_CTX_SIZE && + ctx_service_size > 0); + + if (rte_cryptodev_dp_configure_service(dev_id, qp_id, service_type, + RTE_CRYPTO_OP_WITH_SESSION, sess, ctx, 0) < 0) { + op->status = RTE_CRYPTO_OP_STATUS_ERROR; + return; + } + + /* test update service */ + if (rte_cryptodev_dp_configure_service(dev_id, qp_id, service_type, + RTE_CRYPTO_OP_WITH_SESSION, sess, ctx, 1) < 0) { + op->status = RTE_CRYPTO_OP_STATUS_ERROR; + return; + } + + n = rte_crypto_mbuf_to_vec(sop->m_src, 0, max_len, + data_vec, RTE_DIM(data_vec)); + if (n < 0 || n > sop->m_src->nb_segs) { + op->status = RTE_CRYPTO_OP_STATUS_ERROR; + return; + } + + ofs.raw = 0; + + switch (service_type) { + case RTE_CRYPTO_DP_SYM_AEAD: 
+ ofs.ofs.cipher.head = cipher_offset; + ofs.ofs.cipher.tail = max_len - cipher_offset - cipher_len; + a_data.aead.iv_ptr = rte_crypto_op_ctod_offset(op, void *, + IV_OFFSET); + a_data.aead.iv_iova = rte_crypto_op_ctophys_offset(op, + IV_OFFSET); + a_data.aead.aad_ptr = (void *)sop->aead.aad.data; + a_data.aead.aad_iova = sop->aead.aad.phys_addr; + a_data.aead.digest_ptr = (void *)sop->aead.digest.data; + a_data.aead.digest_iova = sop->aead.digest.phys_addr; + break; + case RTE_CRYPTO_DP_SYM_CIPHER_ONLY: + ofs.ofs.cipher.head = cipher_offset; + ofs.ofs.cipher.tail = max_len - cipher_offset - cipher_len; + a_data.cipher_auth.cipher_iv_ptr = rte_crypto_op_ctod_offset( + op, void *, IV_OFFSET); + a_data.cipher_auth.cipher_iv_iova = + rte_crypto_op_ctophys_offset(op, IV_OFFSET); + break; + case RTE_CRYPTO_DP_SYM_AUTH_ONLY: + ofs.ofs.auth.head = auth_offset; + ofs.ofs.auth.tail = max_len - auth_offset - auth_len; + a_data.cipher_auth.auth_iv_ptr = rte_crypto_op_ctod_offset( + op, void *, IV_OFFSET + cipher_iv_len); + a_data.cipher_auth.auth_iv_iova = + rte_crypto_op_ctophys_offset(op, IV_OFFSET + + cipher_iv_len); + a_data.cipher_auth.digest_ptr = (void *)sop->auth.digest.data; + a_data.cipher_auth.digest_iova = sop->auth.digest.phys_addr; + break; + case RTE_CRYPTO_DP_SYM_CHAIN: + ofs.ofs.cipher.head = cipher_offset; + ofs.ofs.cipher.tail = max_len - cipher_offset - cipher_len; + ofs.ofs.auth.head = auth_offset; + ofs.ofs.auth.tail = max_len - auth_offset - auth_len; + a_data.cipher_auth.cipher_iv_ptr = rte_crypto_op_ctod_offset( + op, void *, IV_OFFSET); + a_data.cipher_auth.cipher_iv_iova = + rte_crypto_op_ctophys_offset(op, IV_OFFSET); + a_data.cipher_auth.auth_iv_ptr = rte_crypto_op_ctod_offset( + op, void *, IV_OFFSET + cipher_iv_len); + a_data.cipher_auth.auth_iv_iova = + rte_crypto_op_ctophys_offset(op, IV_OFFSET + + cipher_iv_len); + a_data.cipher_auth.digest_ptr = (void *)sop->auth.digest.data; + a_data.cipher_auth.digest_iova = sop->auth.digest.phys_addr; + break; + default: + break; + } + + status = rte_cryptodev_dp_sym_submit_single_job(ctx, data_vec, n, ofs, + &a_data, (void *)op); + if (status < 0) { + op->status = RTE_CRYPTO_OP_STATUS_ERROR; + return; + } + + rte_cryptodev_dp_sym_submit_done(ctx, 1); + + status = -1; + while (count++ < 65535 && status == -1) { + status = rte_cryptodev_dp_sym_dequeue_single_job(ctx, + (void **)&ret_op); + if (status == -1) + rte_pause(); + } + + if (status != -1) + rte_cryptodev_dp_sym_dequeue_done(ctx, 1); + + if (count == 65536 || status != 1 || ret_op != op) { + op->status = RTE_CRYPTO_OP_STATUS_ERROR; + return; + } + + op->status = status == 1 ? 
RTE_CRYPTO_OP_STATUS_SUCCESS : + RTE_CRYPTO_OP_STATUS_ERROR; +} + static void process_cpu_aead_op(uint8_t dev_id, struct rte_crypto_op *op) { @@ -1656,6 +1827,9 @@ test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_sym_session *sess, if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) process_cpu_crypt_auth_op(ts_params->valid_devs[0], ut_params->op); + else if (cryptodev_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 1, 0, 0); else TEST_ASSERT_NOT_NULL( process_crypto_request(ts_params->valid_devs[0], @@ -1710,12 +1884,18 @@ test_AES_cipheronly_all(void) static int test_AES_docsis_all(void) { + /* Data-path service does not support DOCSIS yet */ + if (cryptodev_dp_test) + return -ENOTSUP; return test_blockcipher(BLKCIPHER_AES_DOCSIS_TYPE); } static int test_DES_docsis_all(void) { + /* Data-path service does not support DOCSIS yet */ + if (cryptodev_dp_test) + return -ENOTSUP; return test_blockcipher(BLKCIPHER_DES_DOCSIS_TYPE); } @@ -2470,7 +2650,11 @@ test_snow3g_authentication(const struct snow3g_hash_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (cryptodev_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 0, 1, 1, 0); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); ut_params->obuf = ut_params->op->sym->m_src; TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -2549,7 +2733,11 @@ test_snow3g_authentication_verify(const struct snow3g_hash_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (cryptodev_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 0, 1, 1, 0); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); ut_params->obuf = ut_params->op->sym->m_src; @@ -2619,6 +2807,9 @@ test_kasumi_authentication(const struct kasumi_hash_test_data *tdata) if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) process_cpu_crypt_auth_op(ts_params->valid_devs[0], ut_params->op); + else if (cryptodev_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 0, 1, 1, 0); else ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); @@ -2690,7 +2881,11 @@ test_kasumi_authentication_verify(const struct kasumi_hash_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (cryptodev_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 0, 1, 1, 0); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); ut_params->obuf = ut_params->op->sym->m_src; @@ -2897,8 +3092,12 @@ test_kasumi_encryption(const struct kasumi_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], - ut_params->op); + if (cryptodev_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 0, 1, tdata->cipher_iv.len); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], + ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); ut_params->obuf = ut_params->op->sym->m_dst; @@ -2983,7 +3182,11 @@ test_kasumi_encryption_sgl(const struct kasumi_test_data *tdata) if (retval < 
0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (cryptodev_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 0, 1, tdata->cipher_iv.len); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -3026,8 +3229,9 @@ test_kasumi_encryption_oop(const struct kasumi_test_data *tdata) struct rte_cryptodev_sym_capability_idx cap_idx; cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER; cap_idx.algo.cipher = RTE_CRYPTO_CIPHER_KASUMI_F8; + /* Data-path service does not support OOP */ if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0], - &cap_idx) == NULL) + &cap_idx) == NULL || cryptodev_dp_test) return -ENOTSUP; /* Create KASUMI session */ @@ -3107,8 +3311,9 @@ test_kasumi_encryption_oop_sgl(const struct kasumi_test_data *tdata) struct rte_cryptodev_sym_capability_idx cap_idx; cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER; cap_idx.algo.cipher = RTE_CRYPTO_CIPHER_KASUMI_F8; + /* Data-path service does not support OOP */ if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0], - &cap_idx) == NULL) + &cap_idx) == NULL || cryptodev_dp_test) return -ENOTSUP; rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info); @@ -3192,8 +3397,9 @@ test_kasumi_decryption_oop(const struct kasumi_test_data *tdata) struct rte_cryptodev_sym_capability_idx cap_idx; cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER; cap_idx.algo.cipher = RTE_CRYPTO_CIPHER_KASUMI_F8; + /* Data-path service does not support OOP */ if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0], - &cap_idx) == NULL) + &cap_idx) == NULL || cryptodev_dp_test) return -ENOTSUP; /* Create KASUMI session */ @@ -3306,7 +3512,11 @@ test_kasumi_decryption(const struct kasumi_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (cryptodev_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 0, 1, 0); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -3381,7 +3591,11 @@ test_snow3g_encryption(const struct snow3g_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (cryptodev_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 0, 1, tdata->cipher_iv.len); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -3419,7 +3633,7 @@ test_snow3g_encryption_oop(const struct snow3g_test_data *tdata) cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER; cap_idx.algo.cipher = RTE_CRYPTO_CIPHER_SNOW3G_UEA2; if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0], - &cap_idx) == NULL) + &cap_idx) == NULL || cryptodev_dp_test) return -ENOTSUP; /* Create SNOW 3G session */ @@ -3502,7 +3716,7 @@ test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata) cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER; cap_idx.algo.cipher = RTE_CRYPTO_CIPHER_SNOW3G_UEA2; if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0], - &cap_idx) == NULL) + &cap_idx) == NULL || cryptodev_dp_test) return -ENOTSUP; rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info); @@ -3621,7 +3835,7 @@ test_snow3g_encryption_offset_oop(const struct snow3g_test_data *tdata) cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER; 
cap_idx.algo.cipher = RTE_CRYPTO_CIPHER_SNOW3G_UEA2; if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0], - &cap_idx) == NULL) + &cap_idx) == NULL || cryptodev_dp_test) return -ENOTSUP; /* Create SNOW 3G session */ @@ -3756,7 +3970,11 @@ static int test_snow3g_decryption(const struct snow3g_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (cryptodev_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 0, 1, tdata->cipher_iv.len); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); ut_params->obuf = ut_params->op->sym->m_dst; @@ -3791,7 +4009,7 @@ static int test_snow3g_decryption_oop(const struct snow3g_test_data *tdata) cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER; cap_idx.algo.cipher = RTE_CRYPTO_CIPHER_SNOW3G_UEA2; if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0], - &cap_idx) == NULL) + &cap_idx) == NULL || cryptodev_dp_test) return -ENOTSUP; /* Create SNOW 3G session */ @@ -3924,7 +4142,11 @@ test_zuc_cipher_auth(const struct wireless_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (cryptodev_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 1, 1, tdata->cipher_iv.len); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); ut_params->obuf = ut_params->op->sym->m_src; @@ -4019,7 +4241,11 @@ test_snow3g_cipher_auth(const struct snow3g_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (cryptodev_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 1, 1, tdata->cipher_iv.len); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); ut_params->obuf = ut_params->op->sym->m_src; @@ -4087,6 +4313,8 @@ test_snow3g_auth_cipher(const struct snow3g_test_data *tdata, printf("Device doesn't support digest encrypted.\n"); return -ENOTSUP; } + if (cryptodev_dp_test) + return -ENOTSUP; } /* Create SNOW 3G session */ @@ -4155,7 +4383,11 @@ test_snow3g_auth_cipher(const struct snow3g_test_data *tdata, if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (cryptodev_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 1, 1, tdata->cipher_iv.len); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -4266,6 +4498,8 @@ test_snow3g_auth_cipher_sgl(const struct snow3g_test_data *tdata, return -ENOTSUP; } } else { + if (cryptodev_dp_test) + return -ENOTSUP; if (!(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT)) { printf("Device doesn't support out-of-place scatter-gather " "in both input and output mbufs.\n"); @@ -4344,7 +4578,11 @@ test_snow3g_auth_cipher_sgl(const struct snow3g_test_data *tdata, if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (cryptodev_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 1, 1, tdata->cipher_iv.len); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); 
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -4453,6 +4691,8 @@ test_kasumi_auth_cipher(const struct kasumi_test_data *tdata, uint64_t feat_flags = dev_info.feature_flags; if (op_mode == OUT_OF_PLACE) { + if (cryptodev_dp_test) + return -ENOTSUP; if (!(feat_flags & RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED)) { printf("Device doesn't support digest encrypted.\n"); return -ENOTSUP; @@ -4526,7 +4766,11 @@ test_kasumi_auth_cipher(const struct kasumi_test_data *tdata, if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (cryptodev_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 1, 1, tdata->cipher_iv.len); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -4638,6 +4882,8 @@ test_kasumi_auth_cipher_sgl(const struct kasumi_test_data *tdata, return -ENOTSUP; } } else { + if (cryptodev_dp_test) + return -ENOTSUP; if (!(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT)) { printf("Device doesn't support out-of-place scatter-gather " "in both input and output mbufs.\n"); @@ -4716,7 +4962,11 @@ test_kasumi_auth_cipher_sgl(const struct kasumi_test_data *tdata, if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (cryptodev_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 1, 1, tdata->cipher_iv.len); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -4857,7 +5107,11 @@ test_kasumi_cipher_auth(const struct kasumi_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (cryptodev_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 1, 1, tdata->cipher_iv.len); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -4944,7 +5198,11 @@ test_zuc_encryption(const struct wireless_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (cryptodev_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 0, 1, tdata->cipher_iv.len); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -5031,7 +5289,11 @@ test_zuc_encryption_sgl(const struct wireless_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (cryptodev_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 0, 1, tdata->cipher_iv.len); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -5119,7 +5381,11 @@ test_zuc_authentication(const struct wireless_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (cryptodev_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 0, 1, 1, 0); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); ut_params->obuf = ut_params->op->sym->m_src; TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -5177,6 +5443,8 @@ 
test_zuc_auth_cipher(const struct wireless_test_data *tdata, return -ENOTSUP; } } else { + if (cryptodev_dp_test) + return -ENOTSUP; if (!(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT)) { printf("Device doesn't support out-of-place scatter-gather " "in both input and output mbufs.\n"); @@ -5251,7 +5519,11 @@ test_zuc_auth_cipher(const struct wireless_test_data *tdata, if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (cryptodev_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 1, 1, tdata->cipher_iv.len); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -5359,6 +5631,8 @@ test_zuc_auth_cipher_sgl(const struct wireless_test_data *tdata, return -ENOTSUP; } } else { + if (cryptodev_dp_test) + return -ENOTSUP; if (!(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT)) { printf("Device doesn't support out-of-place scatter-gather " "in both input and output mbufs.\n"); @@ -5437,7 +5711,11 @@ test_zuc_auth_cipher_sgl(const struct wireless_test_data *tdata, if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (cryptodev_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 1, 1, tdata->cipher_iv.len); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -5580,6 +5858,9 @@ test_kasumi_decryption_test_case_2(void) static int test_kasumi_decryption_test_case_3(void) { + /* rte_crypto_mbuf_to_vec does not support incomplete mbuf build */ + if (cryptodev_dp_test) + return -ENOTSUP; return test_kasumi_decryption(&kasumi_test_case_3); } @@ -5779,6 +6060,9 @@ test_snow3g_auth_cipher_part_digest_enc_oop(void) static int test_snow3g_auth_cipher_test_case_3_sgl(void) { + /* rte_crypto_mbuf_to_vec does not support incomplete mbuf build */ + if (cryptodev_dp_test) + return -ENOTSUP; return test_snow3g_auth_cipher_sgl( &snow3g_auth_cipher_test_case_3, IN_PLACE, 0); } @@ -5793,6 +6077,9 @@ test_snow3g_auth_cipher_test_case_3_oop_sgl(void) static int test_snow3g_auth_cipher_part_digest_enc_sgl(void) { + /* rte_crypto_mbuf_to_vec does not support incomplete mbuf build */ + if (cryptodev_dp_test) + return -ENOTSUP; return test_snow3g_auth_cipher_sgl( &snow3g_auth_cipher_partial_digest_encryption, IN_PLACE, 0); @@ -6146,10 +6433,9 @@ test_mixed_auth_cipher(const struct mixed_cipher_auth_test_data *tdata, unsigned int ciphertext_len; struct rte_cryptodev_info dev_info; - struct rte_crypto_op *op; /* Check if device supports particular algorithms separately */ - if (test_mixed_check_if_unsupported(tdata)) + if (test_mixed_check_if_unsupported(tdata) || cryptodev_dp_test) return -ENOTSUP; rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info); @@ -6161,6 +6447,9 @@ test_mixed_auth_cipher(const struct mixed_cipher_auth_test_data *tdata, return -ENOTSUP; } + if (op_mode == OUT_OF_PLACE) + return -ENOTSUP; + /* Create the session */ if (verify) retval = create_wireless_algo_cipher_auth_session( @@ -6192,9 +6481,11 @@ test_mixed_auth_cipher(const struct mixed_cipher_auth_test_data *tdata, /* clear mbuf payload */ memset(rte_pktmbuf_mtod(ut_params->ibuf, uint8_t *), 0, rte_pktmbuf_tailroom(ut_params->ibuf)); - if (op_mode == OUT_OF_PLACE) + if (op_mode == OUT_OF_PLACE) { + memset(rte_pktmbuf_mtod(ut_params->obuf, uint8_t *), 0, 
rte_pktmbuf_tailroom(ut_params->obuf)); + } ciphertext_len = ceil_byte_length(tdata->ciphertext.len_bits); plaintext_len = ceil_byte_length(tdata->plaintext.len_bits); @@ -6235,18 +6526,17 @@ test_mixed_auth_cipher(const struct mixed_cipher_auth_test_data *tdata, if (retval < 0) return retval; - op = process_crypto_request(ts_params->valid_devs[0], + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); /* Check if the op failed because the device doesn't */ /* support this particular combination of algorithms */ - if (op == NULL && ut_params->op->status == + if (ut_params->op == NULL && ut_params->op->status == RTE_CRYPTO_OP_STATUS_INVALID_SESSION) { printf("Device doesn't support this mixed combination. " "Test Skipped.\n"); return -ENOTSUP; } - ut_params->op = op; TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -6337,10 +6627,9 @@ test_mixed_auth_cipher_sgl(const struct mixed_cipher_auth_test_data *tdata, uint8_t digest_buffer[10000]; struct rte_cryptodev_info dev_info; - struct rte_crypto_op *op; /* Check if device supports particular algorithms */ - if (test_mixed_check_if_unsupported(tdata)) + if (test_mixed_check_if_unsupported(tdata) || cryptodev_dp_test) return -ENOTSUP; rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info); @@ -6440,20 +6729,18 @@ test_mixed_auth_cipher_sgl(const struct mixed_cipher_auth_test_data *tdata, if (retval < 0) return retval; - op = process_crypto_request(ts_params->valid_devs[0], + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); /* Check if the op failed because the device doesn't */ /* support this particular combination of algorithms */ - if (op == NULL && ut_params->op->status == + if (ut_params->op == NULL && ut_params->op->status == RTE_CRYPTO_OP_STATUS_INVALID_SESSION) { printf("Device doesn't support this mixed combination. " "Test Skipped.\n"); return -ENOTSUP; } - ut_params->op = op; - TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); ut_params->obuf = (op_mode == IN_PLACE ? 
@@ -7043,6 +7330,9 @@ test_authenticated_encryption(const struct aead_test_data *tdata) /* Process crypto operation */ if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) process_cpu_aead_op(ts_params->valid_devs[0], ut_params->op); + else if (cryptodev_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 0, 0, 0, 0); else TEST_ASSERT_NOT_NULL( process_crypto_request(ts_params->valid_devs[0], @@ -8540,6 +8830,9 @@ test_authenticated_decryption(const struct aead_test_data *tdata) /* Process crypto operation */ if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) process_cpu_aead_op(ts_params->valid_devs[0], ut_params->op); + else if (cryptodev_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 0, 0, 0, 0); else TEST_ASSERT_NOT_NULL( process_crypto_request(ts_params->valid_devs[0], @@ -8833,6 +9126,9 @@ test_authenticated_encryption_oop(const struct aead_test_data *tdata) if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0], &cap_idx) == NULL) return -ENOTSUP; + /* Data-path service does not support OOP */ + if (cryptodev_dp_test) + return -ENOTSUP; /* not supported with CPU crypto */ if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) @@ -8923,8 +9219,9 @@ test_authenticated_decryption_oop(const struct aead_test_data *tdata) &cap_idx) == NULL) return -ENOTSUP; - /* not supported with CPU crypto */ - if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) + /* not supported with CPU crypto and data-path service*/ + if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO || + cryptodev_dp_test) return -ENOTSUP; /* Create AEAD session */ @@ -9151,8 +9448,13 @@ test_authenticated_decryption_sessionless( "crypto op session type not sessionless"); /* Process crypto operation */ - TEST_ASSERT_NOT_NULL(process_crypto_request(ts_params->valid_devs[0], - ut_params->op), "failed to process sym crypto op"); + if (cryptodev_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 0, 0, 0, 0); + else + TEST_ASSERT_NOT_NULL(process_crypto_request( + ts_params->valid_devs[0], ut_params->op), + "failed to process sym crypto op"); TEST_ASSERT_NOT_NULL(ut_params->op, "failed crypto process"); @@ -9472,6 +9774,9 @@ test_MD5_HMAC_generate(const struct HMAC_MD5_vector *test_case) if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) process_cpu_crypt_auth_op(ts_params->valid_devs[0], ut_params->op); + else if (cryptodev_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 0, 1, 0, 0); else TEST_ASSERT_NOT_NULL( process_crypto_request(ts_params->valid_devs[0], @@ -9530,6 +9835,9 @@ test_MD5_HMAC_verify(const struct HMAC_MD5_vector *test_case) if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) process_cpu_crypt_auth_op(ts_params->valid_devs[0], ut_params->op); + else if (cryptodev_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 0, 1, 0, 0); else TEST_ASSERT_NOT_NULL( process_crypto_request(ts_params->valid_devs[0], @@ -10098,6 +10406,9 @@ test_AES_GMAC_authentication(const struct gmac_test_data *tdata) if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) process_cpu_crypt_auth_op(ts_params->valid_devs[0], ut_params->op); + else if (cryptodev_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 0, 1, 0, 0); else TEST_ASSERT_NOT_NULL( process_crypto_request(ts_params->valid_devs[0], @@ -10215,6 +10526,9 @@ test_AES_GMAC_authentication_verify(const struct gmac_test_data *tdata) if (gbl_action_type == 
RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) process_cpu_crypt_auth_op(ts_params->valid_devs[0], ut_params->op); + else if (cryptodev_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 0, 1, 0, 0); else TEST_ASSERT_NOT_NULL( process_crypto_request(ts_params->valid_devs[0], @@ -10780,7 +11094,10 @@ test_authentication_verify_fail_when_data_corruption( TEST_ASSERT_NOT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS, "authentication not failed"); - } else { + } else if (cryptodev_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 0, 1, 0, 0); + else { ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NULL(ut_params->op, "authentication not failed"); @@ -10851,7 +11168,10 @@ test_authentication_verify_GMAC_fail_when_corruption( TEST_ASSERT_NOT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS, "authentication not failed"); - } else { + } else if (cryptodev_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 0, 1, 0, 0); + else { ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NULL(ut_params->op, "authentication not failed"); @@ -10926,7 +11246,10 @@ test_authenticated_decryption_fail_when_corruption( TEST_ASSERT_NOT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS, "authentication not failed"); - } else { + } else if (cryptodev_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 1, 0, 0); + else { ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NULL(ut_params->op, "authentication not failed"); @@ -11021,6 +11344,9 @@ test_authenticated_encryt_with_esn( if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) process_cpu_crypt_auth_op(ts_params->valid_devs[0], ut_params->op); + else if (cryptodev_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 1, 0, 0); else ut_params->op = process_crypto_request( ts_params->valid_devs[0], ut_params->op); @@ -11141,6 +11467,9 @@ test_authenticated_decrypt_with_esn( if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) process_cpu_crypt_auth_op(ts_params->valid_devs[0], ut_params->op); + else if (cryptodev_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 1, 0, 0); else ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); @@ -11285,6 +11614,9 @@ test_authenticated_encryption_SGL(const struct aead_test_data *tdata, unsigned int sgl_in = fragsz < tdata->plaintext.len; unsigned int sgl_out = (fragsz_oop ? 
fragsz_oop : fragsz) < tdata->plaintext.len; + /* Data path service does not support OOP */ + if (cryptodev_dp_test) + return -ENOTSUP; if (sgl_in && !sgl_out) { if (!(dev_info.feature_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT)) @@ -11480,6 +11812,9 @@ test_authenticated_encryption_SGL(const struct aead_test_data *tdata, if (oop == IN_PLACE && gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) process_cpu_aead_op(ts_params->valid_devs[0], ut_params->op); + else if (cryptodev_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 0, 0, 0, 0); else TEST_ASSERT_NOT_NULL( process_crypto_request(ts_params->valid_devs[0], @@ -13041,6 +13376,29 @@ test_cryptodev_nitrox(void) return unit_test_suite_runner(&cryptodev_nitrox_testsuite); } +static int +test_qat_sym_direct_api(void /*argv __rte_unused, int argc __rte_unused*/) +{ + int ret; + + gbl_driver_id = rte_cryptodev_driver_id_get( + RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD)); + + if (gbl_driver_id == -1) { + RTE_LOG(ERR, USER1, "QAT PMD must be loaded. Check that both " + "CONFIG_RTE_LIBRTE_PMD_QAT and CONFIG_RTE_LIBRTE_PMD_QAT_SYM " + "are enabled in config file to run this testsuite.\n"); + return TEST_SKIPPED; + } + + cryptodev_dp_test = 1; + ret = unit_test_suite_runner(&cryptodev_testsuite); + cryptodev_dp_test = 0; + + return ret; +} + +REGISTER_TEST_COMMAND(cryptodev_qat_sym_api_autotest, test_qat_sym_direct_api); REGISTER_TEST_COMMAND(cryptodev_qat_autotest, test_cryptodev_qat); REGISTER_TEST_COMMAND(cryptodev_aesni_mb_autotest, test_cryptodev_aesni_mb); REGISTER_TEST_COMMAND(cryptodev_cpu_aesni_mb_autotest, diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h index 41542e055..e4e4c7626 100644 --- a/app/test/test_cryptodev.h +++ b/app/test/test_cryptodev.h @@ -71,6 +71,8 @@ #define CRYPTODEV_NAME_CAAM_JR_PMD crypto_caam_jr #define CRYPTODEV_NAME_NITROX_PMD crypto_nitrox_sym +extern int cryptodev_dp_test; + /** * Write (spread) data from buffer to mbuf data * @@ -209,4 +211,9 @@ create_segmented_mbuf(struct rte_mempool *mbuf_pool, int pkt_len, return NULL; } +void +process_sym_hw_api_op(uint8_t dev_id, uint16_t qp_id, struct rte_crypto_op *op, + uint8_t is_cipher, uint8_t is_auth, uint8_t len_in_bits, + uint8_t cipher_iv_len); + #endif /* TEST_CRYPTODEV_H_ */ diff --git a/app/test/test_cryptodev_blockcipher.c b/app/test/test_cryptodev_blockcipher.c index 221262341..311b34c15 100644 --- a/app/test/test_cryptodev_blockcipher.c +++ b/app/test/test_cryptodev_blockcipher.c @@ -462,25 +462,44 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t, } /* Process crypto operation */ - if (rte_cryptodev_enqueue_burst(dev_id, 0, &op, 1) != 1) { - snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, - "line %u FAILED: %s", - __LINE__, "Error sending packet for encryption"); - status = TEST_FAILED; - goto error_exit; - } + if (cryptodev_dp_test) { + uint8_t is_cipher = 0, is_auth = 0; - op = NULL; + if (t->feature_mask & BLOCKCIPHER_TEST_FEATURE_OOP) { + RTE_LOG(DEBUG, USER1, + "QAT direct API does not support OOP, Test Skipped.\n"); + snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, "SKIPPED"); + status = TEST_SUCCESS; + goto error_exit; + } + if (t->op_mask & BLOCKCIPHER_TEST_OP_CIPHER) + is_cipher = 1; + if (t->op_mask & BLOCKCIPHER_TEST_OP_AUTH) + is_auth = 1; - while (rte_cryptodev_dequeue_burst(dev_id, 0, &op, 1) == 0) - rte_pause(); + process_sym_hw_api_op(dev_id, 0, op, is_cipher, is_auth, 0, + tdata->iv.len); + } else { + if (rte_cryptodev_enqueue_burst(dev_id, 0, &op, 1) != 1) { + snprintf(test_msg, 
BLOCKCIPHER_TEST_MSG_LEN, + "line %u FAILED: %s", + __LINE__, "Error sending packet for encryption"); + status = TEST_FAILED; + goto error_exit; + } - if (!op) { - snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, - "line %u FAILED: %s", - __LINE__, "Failed to process sym crypto op"); - status = TEST_FAILED; - goto error_exit; + op = NULL; + + while (rte_cryptodev_dequeue_burst(dev_id, 0, &op, 1) == 0) + rte_pause(); + + if (!op) { + snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, + "line %u FAILED: %s", + __LINE__, "Failed to process sym crypto op"); + status = TEST_FAILED; + goto error_exit; + } } debug_hexdump(stdout, "m_src(after):", From patchwork Fri Sep 4 15:25:39 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Fan Zhang X-Patchwork-Id: 76570 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 63176A04C5; Fri, 4 Sep 2020 17:26:41 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id D83CA1C120; Fri, 4 Sep 2020 17:26:00 +0200 (CEST) Received: from mga12.intel.com (mga12.intel.com [192.55.52.136]) by dpdk.org (Postfix) with ESMTP id 9E9E11C0CF for ; Fri, 4 Sep 2020 17:25:58 +0200 (CEST) IronPort-SDR: bYEUyVXgrwja2ORA/N3jQK6aW1+USRbxCE+JLx2rQGmLsP7+tI/oteqf9ChZGYCQY8gf9Ry1WI /Wugc36lVcgw== X-IronPort-AV: E=McAfee;i="6000,8403,9734"; a="137280812" X-IronPort-AV: E=Sophos;i="5.76,389,1592895600"; d="scan'208";a="137280812" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga005.jf.intel.com ([10.7.209.41]) by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Sep 2020 08:25:58 -0700 IronPort-SDR: /WeBtosk0jcWOJJ8+UdMMyF3xQIPIMD8jkJMy6irhFSuO8rGgqTVynrV9GZvvk2WnSJvhjx38h O90h4RQVCGlg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.76,389,1592895600"; d="scan'208";a="478540593" Received: from silpixa00398673.ir.intel.com (HELO silpixa00398673.ger.corp.intel.com) ([10.237.223.136]) by orsmga005.jf.intel.com with ESMTP; 04 Sep 2020 08:25:56 -0700 From: Fan Zhang To: dev@dpdk.org Cc: akhil.goyal@nxp.com, fiona.trahe@intel.com, arkadiuszx.kusztal@intel.com, adamx.dybkowski@intel.com, Fan Zhang Date: Fri, 4 Sep 2020 16:25:39 +0100 Message-Id: <20200904152539.20608-5-roy.fan.zhang@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200904152539.20608-1-roy.fan.zhang@intel.com> References: <20200828125815.21614-1-roy.fan.zhang@intel.com> <20200904152539.20608-1-roy.fan.zhang@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [dpdk-dev v8 4/4] doc: add cryptodev service APIs guide X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch updates programmer's guide to demonstrate the usage and limitations of cryptodev symmetric crypto data-path service APIs. Signed-off-by: Fan Zhang --- doc/guides/prog_guide/cryptodev_lib.rst | 90 +++++++++++++++++++++++++ 1 file changed, 90 insertions(+) diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst index c14f750fa..1321e4c5d 100644 --- a/doc/guides/prog_guide/cryptodev_lib.rst +++ b/doc/guides/prog_guide/cryptodev_lib.rst @@ -631,6 +631,96 @@ a call argument. 
Status different than zero must be treated as error. For more details, e.g. how to convert an mbuf to an SGL, please refer to an example usage in the IPsec library implementation. +Cryptodev Direct Data-path Service API +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The direct crypto data-path service is a set of APIs provided for external +libraries/applications that want to take advantage of the rich features +offered by cryptodev but do not necessarily want to depend on cryptodev +operations, mempools, or mbufs in their data-path implementations. + +The direct crypto data-path service has the following advantages: +- Supports raw data pointers and physical addresses as input. +- Does not require data structures allocated from the heap, such as the + cryptodev operation. +- Enqueue in a burst or one operation at a time. The service allows enqueuing + a burst similar to ``rte_cryptodev_enqueue_burst``, or enqueuing a single + job at a time while keeping the necessary context data locally for the next + single-job enqueue. The latter method is especially helpful when the user + application's crypto operations are clustered into a burst: enqueuing one + operation at a time removes one extra loop and reduces the cache misses + caused by the double "looping" situation. +- Customizable dequeue count. Instead of dequeuing the maximum possible number + of operations, as ``rte_cryptodev_dequeue_burst`` does, the service allows + the user to provide a callback function that decides how many operations are + dequeued. This is especially helpful when the expected dequeue count is + hidden inside the opaque data stored during enqueue; the callback can parse + the opaque data structure to recover it. +- Enqueue and dequeue can be abandoned at any time. One drawback of the + ``rte_cryptodev_enqueue_burst`` and ``rte_cryptodev_dequeue_burst`` + operations is that, once an operation is enqueued or dequeued, there is no + way to undo it. The service makes abandoning possible by keeping a local + copy of the queue operation data in the service context data; the data is + written back to the driver-maintained operation data only when the enqueue + or dequeue done function is called. + +Cryptodev PMDs that support this feature present the +``RTE_CRYPTODEV_FF_DATA_PATH_SERVICE`` feature flag. To use this feature the +function ``rte_cryptodev_dp_get_service_ctx_data_size`` should be called to +get the data-path service context data size. The user should create a local +buffer at least this size long and initialize it with the +``rte_cryptodev_dp_configure_service`` function call. + +The ``rte_cryptodev_dp_configure_service`` function call initializes or +updates the ``struct rte_crypto_dp_service_ctx`` buffer, which contains the +driver-specific queue pair data pointer, the driver service context buffer, +and a set of function pointers to enqueue and dequeue different algorithms' +operations. ``rte_cryptodev_dp_configure_service`` should be called: + +- Before enqueuing or dequeuing starts (with the ``is_update`` parameter set + to 0). +- When a different cryptodev session, security session, or session-less xform + is used (with the ``is_update`` parameter set to 1). + +Two different enqueue functions are provided; a minimal sketch of the full +enqueue/dequeue flow is shown after this list. + +- ``rte_cryptodev_dp_sym_submit_vec``: submit a burst of operations stored in + the ``rte_crypto_sym_vec`` structure. +- ``rte_cryptodev_dp_sym_submit_single_job``: submit a single operation.
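For illustration, the sketch below strings these calls together for one in-place AEAD job, following the same sequence the unit test in patch 3/4 uses (size query, configure with ``is_update`` set to 0, submit, submit-done, poll dequeue, dequeue-done). It is a minimal sketch rather than part of the patch: the device, queue pair and session handles, the data buffer, and the IV/AAD/digest pointers (and their IOVAs) are assumed to be prepared by the caller, and error handling is trimmed:

    #include <rte_cryptodev.h>

    /* Illustrative helper; all parameter names are placeholders. */
    static int
    dp_service_one_aead_job(uint8_t dev_id, uint16_t qp_id,
            struct rte_cryptodev_sym_session *session,
            void *buf, rte_iova_t buf_iova, uint32_t data_len,
            void *iv, rte_iova_t iv_iova,
            void *aad, rte_iova_t aad_iova,
            void *digest, rte_iova_t digest_iova)
    {
            uint8_t service_data[256];      /* must hold the reported size */
            struct rte_crypto_dp_service_ctx *ctx = (void *)service_data;
            union rte_cryptodev_session_ctx sess_ctx;
            union rte_crypto_sym_additional_data a_data;
            struct rte_crypto_vec data_vec[1];
            union rte_crypto_sym_ofs ofs;
            void *deq_opaque = NULL;
            int ret;

            /* Check the driver context size and do the initial configure
             * (is_update == 0) before any enqueue/dequeue starts. */
            if (rte_cryptodev_dp_get_service_ctx_data_size(dev_id) >
                            (int)sizeof(service_data))
                    return -1;
            sess_ctx.crypto_sess = session;
            if (rte_cryptodev_dp_configure_service(dev_id, qp_id,
                            RTE_CRYPTO_DP_SYM_AEAD, RTE_CRYPTO_OP_WITH_SESSION,
                            sess_ctx, ctx, 0) < 0)
                    return -1;

            /* Describe the in-place data and per-op IV/AAD/digest buffers. */
            data_vec[0].base = buf;
            data_vec[0].iova = buf_iova;
            data_vec[0].len = data_len;
            ofs.raw = 0;
            a_data.aead.iv_ptr = iv;
            a_data.aead.iv_iova = iv_iova;
            a_data.aead.aad_ptr = aad;
            a_data.aead.aad_iova = aad_iova;
            a_data.aead.digest_ptr = digest;
            a_data.aead.digest_iova = digest_iova;

            if (rte_cryptodev_dp_sym_submit_single_job(ctx, data_vec, 1, ofs,
                            &a_data, buf) < 0)
                    return -1;

            /* Nothing is processed until the submit-done call. */
            rte_cryptodev_dp_sym_submit_done(ctx, 1);

            /* Poll for the single result (a real application would bound
             * this loop), then acknowledge the dequeue. */
            do {
                    ret = rte_cryptodev_dp_sym_dequeue_single_job(ctx,
                                    &deq_opaque);
            } while (ret == -1);
            rte_cryptodev_dp_sym_dequeue_done(ctx, 1);

            /* ret == 1 and deq_opaque == buf indicate success. */
            return (ret == 1 && deq_opaque == buf) ? 0 : -1;
    }

Burst usage follows the same pattern, with ``rte_cryptodev_dp_sym_submit_vec`` replacing the single-job call and the count passed to the done functions matching the number of submitted or dequeued jobs.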
+ +Neither enqueue function triggers processing on the crypto device; the device +only starts processing once the ``rte_cryptodev_dp_sym_submit_done`` function +is called. Until then the driver only stores the necessary context data in the +``rte_crypto_dp_service_ctx`` buffer for the next enqueue operation. If the +user wants to abandon the submitted operations, it is enough to call the +``rte_cryptodev_dp_configure_service`` function again with the parameter +``is_update`` set to 0. The driver will then restore the service context data +to its previous state. + +To dequeue the operations the user also has two choices: + +- ``rte_cryptodev_dp_sym_dequeue``: fully customizable dequeue operation. The + user provides a callback function the driver uses to obtain the dequeue + count and to perform post-processing such as writing the status field. +- ``rte_cryptodev_dp_sym_dequeue_single_job``: dequeue a single job. + +As with enqueue, the function ``rte_cryptodev_dp_sym_dequeue_done`` is used to +merge the user's local service context data with the driver's queue operation +data. To abandon the dequeue instead (keeping the operations in the queue), +the user shall skip the ``rte_cryptodev_dp_sym_dequeue_done`` call and call +the ``rte_cryptodev_dp_configure_service`` function with the parameter +``is_update`` set to 0. + +There are a few limitations to the data-path service: + +* Only in-place operations are supported. +* The APIs are NOT thread-safe. +* The direct APIs' enqueue and dequeue CANNOT be mixed with + ``rte_cryptodev_enqueue_burst`` and ``rte_cryptodev_dequeue_burst``, or vice + versa. + +See *DPDK API Reference* for details of each API definition. + Sample code -----------