From patchwork Mon Jul 13 16:57:52 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Fan Zhang X-Patchwork-Id: 73970 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 8D128A0540; Mon, 13 Jul 2020 18:58:11 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id D0A411D6B9; Mon, 13 Jul 2020 18:58:06 +0200 (CEST) Received: from mga06.intel.com (mga06.intel.com [134.134.136.31]) by dpdk.org (Postfix) with ESMTP id 128131D69E for ; Mon, 13 Jul 2020 18:58:00 +0200 (CEST) IronPort-SDR: DtH1X0Mr0t4PnjUztxAFw8c3ed46sBYUuphe91aBvUjtyvQcw+Pk4YH1r0S9eOvZmp2mQEl9CJ t/oCJqbkyBHw== X-IronPort-AV: E=McAfee;i="6000,8403,9681"; a="210203432" X-IronPort-AV: E=Sophos;i="5.75,348,1589266800"; d="scan'208";a="210203432" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga003.jf.intel.com ([10.7.209.27]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Jul 2020 09:58:00 -0700 IronPort-SDR: Hq0mU4M66S/sgCosFFHOI+nz/nErhoLLqzz5vFQTSvv/fKaaTVwxW8jw8m+fiTnZvpn13jMK6j V9g9n5G7cs3Q== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.75,348,1589266800"; d="scan'208";a="281465646" Received: from silpixa00398673.ir.intel.com (HELO silpixa00398673.ger.corp.intel.com) ([10.237.223.136]) by orsmga003.jf.intel.com with ESMTP; 13 Jul 2020 09:57:58 -0700 From: Fan Zhang To: dev@dpdk.org Cc: fiona.trahe@intel.com, akhil.goyal@nxp.com, Fan Zhang , Piotr Bronowski Date: Mon, 13 Jul 2020 17:57:52 +0100 Message-Id: <20200713165755.61814-2-roy.fan.zhang@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200713165755.61814-1-roy.fan.zhang@intel.com> References: <20200703124942.29171-1-roy.fan.zhang@intel.com> <20200713165755.61814-1-roy.fan.zhang@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [dpdk-dev v5 1/4] cryptodev: add data-path APIs X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch adds data-path APIs for enqueue and dequeue operations to cryptodev. The APIs support flexible user-define enqueue and dequeue behaviors and operation modes. Signed-off-by: Fan Zhang Signed-off-by: Piotr Bronowski --- lib/librte_cryptodev/rte_crypto_sym.h | 27 +- lib/librte_cryptodev/rte_cryptodev.c | 118 ++++++++ lib/librte_cryptodev/rte_cryptodev.h | 256 +++++++++++++++++- lib/librte_cryptodev/rte_cryptodev_pmd.h | 90 +++++- .../rte_cryptodev_version.map | 5 + 5 files changed, 487 insertions(+), 9 deletions(-) diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h index f29c98051..8f3a93a3d 100644 --- a/lib/librte_cryptodev/rte_crypto_sym.h +++ b/lib/librte_cryptodev/rte_crypto_sym.h @@ -57,12 +57,27 @@ struct rte_crypto_sgl { struct rte_crypto_sym_vec { /** array of SGL vectors */ struct rte_crypto_sgl *sgl; - /** array of pointers to IV */ - void **iv; - /** array of pointers to AAD */ - void **aad; - /** array of pointers to digest */ - void **digest; + union { + /* Supposed to be used with CPU crypto API call. 
*/ + struct { + /** array of pointers to IV */ + void **iv; + /** array of pointers to AAD */ + void **aad; + /** array of pointers to digest */ + void **digest; + }; + + /* Supposed to be used with HW crypto API call. */ + struct { + /** array of vectors to IV */ + struct rte_crypto_vec *iv_vec; + /** array of vectors to AAD */ + struct rte_crypto_vec *aad_vec; + /** array of vectors to Digest */ + struct rte_crypto_vec *digest_vec; + }; + }; /** * array of statuses for each operation: * - 0 on success diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c index 1dd795bcb..1e93762a0 100644 --- a/lib/librte_cryptodev/rte_cryptodev.c +++ b/lib/librte_cryptodev/rte_cryptodev.c @@ -1914,6 +1914,124 @@ rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id, return dev->dev_ops->sym_cpu_process(dev, sess, ofs, vec); } +uint32_t +rte_cryptodev_sym_hw_crypto_enqueue_aead(uint8_t dev_id, uint16_t qp_id, + union rte_cryptodev_hw_session_ctx session, + union rte_crypto_sym_ofs ofs, struct rte_crypto_sym_vec *vec, + void **opaque, uint32_t flags) +{ + struct rte_cryptodev *dev; + + if (!rte_cryptodev_get_qp_status(dev_id, qp_id)) + return -EINVAL; + + dev = rte_cryptodev_pmd_get_dev(dev_id); + if (!(dev->feature_flags & RTE_CRYPTODEV_FF_SYM_HW_DIRECT_API) || + dev->dev_ops->sym_hw_enq_deq == NULL || + dev->dev_ops->sym_hw_enq_deq->enqueue_aead == NULL) + return -ENOTSUP; + if (vec == NULL || vec->num == 0 || session.crypto_sess == NULL) + return -EINVAL; + + return dev->dev_ops->sym_hw_enq_deq->enqueue_aead(dev, qp_id, session, + ofs, vec, opaque, flags); +} + +uint32_t +rte_cryptodev_sym_hw_crypto_enqueue_cipher(uint8_t dev_id, uint16_t qp_id, + union rte_cryptodev_hw_session_ctx session, + union rte_crypto_sym_ofs ofs, struct rte_crypto_sym_vec *vec, + void **opaque, uint32_t flags) +{ + struct rte_cryptodev *dev; + + if (!rte_cryptodev_get_qp_status(dev_id, qp_id)) + return -EINVAL; + + dev = rte_cryptodev_pmd_get_dev(dev_id); + if (!(dev->feature_flags & RTE_CRYPTODEV_FF_SYM_HW_DIRECT_API) || + dev->dev_ops->sym_hw_enq_deq == NULL || + dev->dev_ops->sym_hw_enq_deq->enqueue_cipher == NULL) + return -ENOTSUP; + if (vec == NULL || vec->num == 0 || session.crypto_sess == NULL) + return -EINVAL; + + return dev->dev_ops->sym_hw_enq_deq->enqueue_cipher(dev, qp_id, session, + ofs, vec, opaque, flags); +} + +uint32_t +rte_cryptodev_sym_hw_crypto_enqueue_auth(uint8_t dev_id, uint16_t qp_id, + union rte_cryptodev_hw_session_ctx session, + union rte_crypto_sym_ofs ofs, struct rte_crypto_sym_vec *vec, + void **opaque, uint32_t flags) +{ + struct rte_cryptodev *dev; + + if (!rte_cryptodev_get_qp_status(dev_id, qp_id)) + return -EINVAL; + + dev = rte_cryptodev_pmd_get_dev(dev_id); + if (!(dev->feature_flags & RTE_CRYPTODEV_FF_SYM_HW_DIRECT_API) || + dev->dev_ops->sym_hw_enq_deq == NULL || + dev->dev_ops->sym_hw_enq_deq->enqueue_auth == NULL) + return -ENOTSUP; + if (vec == NULL || vec->num == 0 || session.crypto_sess == NULL) + return -EINVAL; + + return dev->dev_ops->sym_hw_enq_deq->enqueue_auth(dev, qp_id, session, + ofs, vec, opaque, flags); +} + +uint32_t +rte_cryptodev_sym_hw_crypto_enqueue_chain(uint8_t dev_id, uint16_t qp_id, + union rte_cryptodev_hw_session_ctx session, + union rte_crypto_sym_ofs ofs, struct rte_crypto_sym_vec *vec, + void **opaque, uint32_t flags) +{ + struct rte_cryptodev *dev; + + if (!rte_cryptodev_get_qp_status(dev_id, qp_id)) + return -EINVAL; + + dev = rte_cryptodev_pmd_get_dev(dev_id); + if (!(dev->feature_flags & 
RTE_CRYPTODEV_FF_SYM_HW_DIRECT_API) || + dev->dev_ops->sym_hw_enq_deq == NULL || + dev->dev_ops->sym_hw_enq_deq->enqueue_chain == NULL) + return -ENOTSUP; + if (vec == NULL || vec->num == 0 || session.crypto_sess == NULL) + return -EINVAL; + + return dev->dev_ops->sym_hw_enq_deq->enqueue_chain(dev, qp_id, session, + ofs, vec, opaque, flags); +} + +uint32_t +rte_cryptodev_sym_hw_crypto_dequeue(uint8_t dev_id, uint16_t qp_id, + rte_cryptodev_get_dequeue_count_t get_dequeue_count, + rte_cryptodev_post_dequeue_t post_dequeue, + void **out_opaque, + uint32_t *n_success_jobs, uint32_t flags) +{ + struct rte_cryptodev *dev; + + if (!rte_cryptodev_get_qp_status(dev_id, qp_id)) + return -EINVAL; + + dev = rte_cryptodev_pmd_get_dev(dev_id); + if (!(dev->feature_flags & RTE_CRYPTODEV_FF_SYM_HW_DIRECT_API) || + dev->dev_ops->sym_hw_enq_deq == NULL || + dev->dev_ops->sym_hw_enq_deq->dequeue == NULL) + return -ENOTSUP; + + if (!get_dequeue_count || !post_dequeue || !n_success_jobs) + return -EINVAL; + + return dev->dev_ops->sym_hw_enq_deq->dequeue(dev, qp_id, + get_dequeue_count, post_dequeue, out_opaque, + n_success_jobs, flags); +} + /** Initialise rte_crypto_op mempool element */ static void rte_crypto_op_init(struct rte_mempool *mempool, diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h index 7b3ebc20f..83c9f072c 100644 --- a/lib/librte_cryptodev/rte_cryptodev.h +++ b/lib/librte_cryptodev/rte_cryptodev.h @@ -466,7 +466,8 @@ rte_cryptodev_asym_get_xform_enum(enum rte_crypto_asym_xform_type *xform_enum, /**< Support symmetric session-less operations */ #define RTE_CRYPTODEV_FF_NON_BYTE_ALIGNED_DATA (1ULL << 23) /**< Support operations on data which is not byte aligned */ - +#define RTE_CRYPTODEV_FF_SYM_HW_DIRECT_API (1ULL << 24) +/**< Support hardware accelerator specific raw data as input */ /** * Get the name of a crypto device feature flag @@ -1351,6 +1352,259 @@ rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id, struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs ofs, struct rte_crypto_sym_vec *vec); +/* HW direct symmetric crypto data-path APIs */ +#define RTE_CRYPTO_HW_DP_FF_ENQUEUE_EXHAUST (1ULL << 0) +/**< Bit-mask to indicate the last job in a burst. With this bit set the + * driver may read but not write the drv_data buffer, and kick the HW to + * start processing all jobs written. + */ +#define RTE_CRYPTO_HW_DP_FF_CRYPTO_SESSION (1ULL << 1) +/**< Bit-mask indicating sess is a cryptodev sym session */ +#define RTE_CRYPTO_HW_DP_FF_SESSIONLESS (1ULL << 2) +/**< Bit-mask indicating sess is a cryptodev sym xform and session-less + * operation is in-place + **/ +#define RTE_CRYPTO_HW_DP_FF_SECURITY_SESSION (1ULL << 3) +/**< Bit-mask indicating sess is a security session */ +#define RTE_CRYPTO_HW_DP_FF_SET_OPAQUE_ARRAY (1ULL << 4) +/**< Bit-mask to indicate opaque is an array, all elements in it will be + * stored as opaque data. + */ +#define RTE_CRYPTO_HW_DP_FF_KICK_QUEUE (1ULL << 5) +/**< Bit-mask to command the HW to start processing all stored ops in the + * queue immediately. + */ + +/**< Bit-masks used for dequeuing job */ +#define RTE_CRYPTO_HW_DP_FF_GET_OPAQUE_ARRAY (1ULL << 0) +/**< Bit-mask to indicate opaque is an array with enough room to fill all + * dequeued opaque data pointers. + */ +#define RTE_CRYPTO_HW_DP_FF_DEQUEUE_EXHAUST (1ULL << 1) +/**< Bit-mask to indicate dequeuing as many as n jobs in dequeue-many function. 
+ * Without this bit once the driver found out the ready-to-dequeue jobs are + * not as many as n, it shall stop immediate, leave all processed jobs in the + * queue, and return the ready jobs in negative. With this bit set the + * function shall continue dequeue all done jobs and return the dequeued + * job count in positive. + */ + +/** + * Typedef that the user provided to get the dequeue count. User may use it to + * return a fixed number or the number parsed from the opaque data stored in + * the first processed job. + * + * @param opaque Dequeued opaque data. + **/ +typedef uint32_t (*rte_cryptodev_get_dequeue_count_t) + (void *opaque); + +/** + * Typedef that the user provided to deal with post dequeue operation, such + * as filling status. + * + * @param opaque Dequeued opaque data. In case + * RTE_CRYPTO_HW_DP_FF_GET_OPAQUE_ARRAY bit is + * set, this value will be the opaque data stored + * in the specific processed jobs referenced by + * index, otherwise it will be the opaque data + * stored in the first processed job in the burst. + * @param index Index number of the processed job. + * @param is_op_success Driver filled operation status. + **/ +typedef void (*rte_cryptodev_post_dequeue_t)(void *opaque, uint32_t index, + uint8_t is_op_success); + +/** + * Union + */ +union rte_cryptodev_hw_session_ctx { + struct rte_cryptodev_sym_session *crypto_sess; + struct rte_crypto_sym_xform *xform; + struct rte_security_session *sec_sess; +}; + +/** + * Enqueue actual AEAD symmetric crypto processing on user provided data. + * + * @param dev_id The device identifier. + * @param qp_id The index of the queue pair from which to + * retrieve processed packets. The value must be + * in the range [0, nb_queue_pair - 1] previously + * supplied to rte_cryptodev_configure(). + * @param session Union of different session types, depends on + * RTE_CRYPTO_HW_DP_FF_* flag. + * @param ofs Start and stop offsets for auth and cipher + * operations. + * @param vec Vectorized operation descriptor. + * @param opaque Opaque data to be written to HW + * descriptor for enqueue. In case + * RTE_CRYPTO_HW_DP_FF_SET_OPAQUE_ARRAY flag is + * set this value should be an array of all + * 'vec->num' opaque data with the size stated in + * the vec. Otherwise only the first opaque + * data in the array will be stored in the first + * HW descriptor waiting for dequeue. + * @param flags Bit-mask of one or more RTE_CRYPTO_HW_DP_FF_* + * flags. + * + * @return + * - Returns number of successfully processed packets. In case the returned + * value is smaller than 'vec->num', the vec's status array will be written + * the error number accordingly. + */ +__rte_experimental +uint32_t +rte_cryptodev_sym_hw_crypto_enqueue_aead(uint8_t dev_id, uint16_t qp_id, + union rte_cryptodev_hw_session_ctx session, + union rte_crypto_sym_ofs ofs, struct rte_crypto_sym_vec *vec, + void **opaque, uint32_t flags); + +/** + * Enqueue actual cipher-only symmetric crypto processing on user provided data. + * + * @param dev_id The device identifier. + * @param qp_id The index of the queue pair from which to + * retrieve processed packets. The value must be + * in the range [0, nb_queue_pair - 1] previously + * supplied to rte_cryptodev_configure(). + * @param session Union of different session types, depends on + * RTE_CRYPTO_HW_DP_FF_* flag. + * @param ofs Start and stop offsets for auth and cipher + * operations. + * @param vec Vectorized operation descriptor. + * @param opaque Opaque data to be written to HW + * descriptor for enqueue. 
In case + * RTE_CRYPTO_HW_DP_FF_SET_OPAQUE_ARRAY flag is + * set this value should be an array of all + * 'vec->num' opaque data with the size stated in + * the vec. Otherwise only the first opaque + * data in the array will be stored in the first + * HW descriptor waiting for dequeue. + * @param flags Bit-mask of one or more RTE_CRYPTO_HW_DP_FF_* + * flags. + * + * @return + * - Returns number of successfully processed packets. In case the returned + * value is smaller than 'vec->num', the vec's status array will be written + * the error number accordingly. + */ +__rte_experimental +uint32_t +rte_cryptodev_sym_hw_crypto_enqueue_cipher(uint8_t dev_id, uint16_t qp_id, + union rte_cryptodev_hw_session_ctx session, + union rte_crypto_sym_ofs ofs, struct rte_crypto_sym_vec *vec, + void **opaque, uint32_t flags); + +/** + * Enqueue actual auth-only symmetric crypto processing on user provided data. + * + * @param dev_id The device identifier. + * @param qp_id The index of the queue pair from which to + * retrieve processed packets. The value must be + * in the range [0, nb_queue_pair - 1] previously + * supplied to rte_cryptodev_configure(). + * @param session Union of different session types, depends on + * RTE_CRYPTO_HW_DP_FF_* flag. + * @param ofs Start and stop offsets for auth and cipher + * operations. + * @param vec Vectorized operation descriptor. + * @param opaque Opaque data to be written to HW + * descriptor for enqueue. In case + * RTE_CRYPTO_HW_DP_FF_SET_OPAQUE_ARRAY flag is + * set this value should be an array of all + * 'vec->num' opaque data with the size stated in + * the vec. Otherwise only the first opaque + * data in the array will be stored in the first + * HW descriptor waiting for dequeue. + * @param flags Bit-mask of one or more RTE_CRYPTO_HW_DP_FF_* + * flags. + * + * @return + * - Returns number of successfully processed packets. In case the returned + * value is smaller than 'vec->num', the vec's status array will be written + * the error number accordingly. + */ +__rte_experimental +uint32_t +rte_cryptodev_sym_hw_crypto_enqueue_auth(uint8_t dev_id, uint16_t qp_id, + union rte_cryptodev_hw_session_ctx session, + union rte_crypto_sym_ofs ofs, struct rte_crypto_sym_vec *vec, + void **opaque, uint32_t flags); + +/** + * Enqueue actual chained symmetric crypto processing on user provided data. + * + * @param dev_id The device identifier. + * @param qp_id The index of the queue pair from which to + * retrieve processed packets. The value must be + * in the range [0, nb_queue_pair - 1] previously + * supplied to rte_cryptodev_configure(). + * @param session Union of different session types, depends on + * RTE_CRYPTO_HW_DP_FF_* flag. + * @param ofs Start and stop offsets for auth and cipher + * operations. + * @param vec Vectorized operation descriptor. + * @param opaque Opaque data to be written to HW + * descriptor for enqueue. In case + * RTE_CRYPTO_HW_DP_FF_SET_OPAQUE_ARRAY flag is + * set this value should be an array of all + * 'vec->num' opaque data with the size stated in + * the vec. Otherwise only the first opaque + * data in the array will be stored in the first + * HW descriptor waiting for dequeue. + * @param flags Bit-mask of one or more RTE_CRYPTO_HW_DP_FF_* + * flags. + * + * @return + * - Returns number of successfully processed packets. In case the returned + * value is smaller than 'vec->num', the vec's status array will be written + * the error number accordingly. 
+ */ +__rte_experimental +uint32_t +rte_cryptodev_sym_hw_crypto_enqueue_chain(uint8_t dev_id, uint16_t qp_id, + union rte_cryptodev_hw_session_ctx session, + union rte_crypto_sym_ofs ofs, struct rte_crypto_sym_vec *vec, + void **opaque, uint32_t flags); + +/** + * Dequeue symmetric crypto processing of user provided data. + * + * @param dev_id The device identifier. + * @param qp_id The index of the queue pair from which + * to retrieve processed packets. The + * value must be in the range [0, + * nb_queue_pair - 1] previously + * supplied to rte_cryptodev_configure(). + * @param get_dequeue_count User provided callback function to + * obtain dequeue count. + * @param post_dequeue User provided callback function to + * post-process a dequeued operation. + * @param out_opaque Opaque data to be retrieve from HW + * queue. In case of the flag + * RTE_CRYPTO_HW_DP_FF_GET_OPAQUE_ARRAY + * is set every dequeued operation + * will be written its stored opaque data + * into this array, otherwise only the + * first dequeued operation will be + * written the opaque data. + * @param n_success_jobs Driver written value to specific the + * total successful operations count. + * @param flags Bit-mask of one or more + * RTE_CRYPTO_HW_DP_FF_* flags. + * + * @return + * - Returns number of dequeued packets. + */ +__rte_experimental +uint32_t +rte_cryptodev_sym_hw_crypto_dequeue(uint8_t dev_id, uint16_t qp_id, + rte_cryptodev_get_dequeue_count_t get_dequeue_count, + rte_cryptodev_post_dequeue_t post_dequeue, + void **out_opaque, + uint32_t *n_success_jobs, uint32_t flags); + #ifdef __cplusplus } #endif diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h b/lib/librte_cryptodev/rte_cryptodev_pmd.h index 81975d72b..7ece9f8e9 100644 --- a/lib/librte_cryptodev/rte_cryptodev_pmd.h +++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h @@ -316,6 +316,88 @@ typedef uint32_t (*cryptodev_sym_cpu_crypto_process_t) (struct rte_cryptodev *dev, struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs ofs, struct rte_crypto_sym_vec *vec); +/** + * Enqueue actual symmetric crypto processing on user provided data. + * + * @param dev Crypto device pointer + * @param qp_id The index of the queue pair from which to + * retrieve processed packets. The value must be + * in the range [0, nb_queue_pair - 1] previously + * supplied to rte_cryptodev_configure(). + * @param session Union of different session types, depends on + * RTE_CRYPTO_HW_DP_FF_* flag. + * @param ofs Start and stop offsets for auth and cipher + * operations. + * @param vec Vectorized operation descriptor. + * @param opaque Opaque data to be written to HW + * descriptor for enqueue. In case + * RTE_CRYPTO_HW_DP_FF_SET_OPAQUE_ARRAY flag is + * set this value should be an array of all + * 'vec->num' opaque data with the size stated in + * the vec. Otherwise only the first opaque + * data in the array will be stored in the first + * HW descriptor waiting for dequeue. + * @param flags Bit-mask of one or more RTE_CRYPTO_HW_DP_FF_* + * flags. + * + * @return + * - Returns number of successfully processed packets. In case the returned + * value is smaller than 'vec->num', the vec's status array will be written + * the error number accordingly. + */ +typedef uint32_t (*cryptodev_sym_hw_crypto_enqueue_t) + (struct rte_cryptodev *dev, uint16_t qp_id, + union rte_cryptodev_hw_session_ctx session, + union rte_crypto_sym_ofs ofs, struct rte_crypto_sym_vec *vec, + void **opaque, uint32_t flags); + +/** + * Dequeue symmetric crypto processing of user provided data. 
+ * + * @param dev Crypto device pointer + * @param qp_id The index of the queue pair from which + * to retrieve processed packets. The + * value must be in the range [0, + * nb_queue_pair - 1] previously + * supplied to rte_cryptodev_configure(). + * @param get_dequeue_count User provided callback function to + * obtain dequeue count. + * @param post_dequeue User provided callback function to + * post-process a dequeued operation. + * @param out_opaque Opaque data to be retrieve from HW + * queue. In case of the flag + * RTE_CRYPTO_HW_DP_FF_GET_OPAQUE_ARRAY + * is set every dequeued operation + * will be written its stored opaque data + * into this array, otherwise only the + * first dequeued operation will be + * written the opaque data. + * @param n_success_jobs Driver written value to specific the + * total successful operations count. + * @param flags Bit-mask of one or more + * RTE_CRYPTO_HW_DP_FF_* flags. + * + * @return + * - Returns number of dequeued packets. + */ +typedef uint32_t (*cryptodev_sym_hw_crypto_dequeue_t) + (struct rte_cryptodev *dev, uint16_t qp_id, + rte_cryptodev_get_dequeue_count_t get_dequeue_count, + rte_cryptodev_post_dequeue_t post_dequeue, + void **out_opaque, + uint32_t *n_success_jobs, uint32_t flags); + +/** + * Structure of HW crypto Data-plane APIs. + */ +struct rte_crytodev_sym_hw_dp_ops { + cryptodev_sym_hw_crypto_enqueue_t enqueue_aead; + cryptodev_sym_hw_crypto_enqueue_t enqueue_cipher; + cryptodev_sym_hw_crypto_enqueue_t enqueue_auth; + cryptodev_sym_hw_crypto_enqueue_t enqueue_chain; + cryptodev_sym_hw_crypto_dequeue_t dequeue; + void *reserved[3]; +}; /** Crypto device operations function pointer table */ struct rte_cryptodev_ops { @@ -348,8 +430,12 @@ struct rte_cryptodev_ops { /**< Clear a Crypto sessions private data. */ cryptodev_asym_free_session_t asym_session_clear; /**< Clear a Crypto sessions private data. */ - cryptodev_sym_cpu_crypto_process_t sym_cpu_process; - /**< process input data synchronously (cpu-crypto). */ + union { + cryptodev_sym_cpu_crypto_process_t sym_cpu_process; + /**< process input data synchronously (cpu-crypto). 
*/ + struct rte_crytodev_sym_hw_dp_ops *sym_hw_enq_deq; + /**< Get HW crypto data-path call back functions and data */ + }; }; diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map index a7a78dc41..fb7ddb50c 100644 --- a/lib/librte_cryptodev/rte_cryptodev_version.map +++ b/lib/librte_cryptodev/rte_cryptodev_version.map @@ -106,4 +106,9 @@ EXPERIMENTAL { # added in 20.08 rte_cryptodev_get_qp_status; + rte_cryptodev_sym_hw_crypto_enqueue_aead; + rte_cryptodev_sym_hw_crypto_enqueue_cipher; + rte_cryptodev_sym_hw_crypto_enqueue_auth; + rte_cryptodev_sym_hw_crypto_enqueue_chain; + rte_cryptodev_sym_hw_crypto_dequeue; }; From patchwork Mon Jul 13 16:57:53 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Fan Zhang X-Patchwork-Id: 73971 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id C567DA0540; Mon, 13 Jul 2020 18:58:20 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 6AF101D6C1; Mon, 13 Jul 2020 18:58:08 +0200 (CEST) Received: from mga06.intel.com (mga06.intel.com [134.134.136.31]) by dpdk.org (Postfix) with ESMTP id ECE001D6AA for ; Mon, 13 Jul 2020 18:58:02 +0200 (CEST) IronPort-SDR: Lycq4P91OsL/RgooZlwmRJQ+rqi0jKjsgec9c/elGVcHP13RGGBHXyaOATXO+fhrnoB1okvPxH jPS2G3ZycJjw== X-IronPort-AV: E=McAfee;i="6000,8403,9681"; a="210203456" X-IronPort-AV: E=Sophos;i="5.75,348,1589266800"; d="scan'208";a="210203456" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga003.jf.intel.com ([10.7.209.27]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Jul 2020 09:58:02 -0700 IronPort-SDR: nUcLLqSDmWlYZRri+rMc0AiDv2WN3mFpXVyU8qLUEpThYqFKORuWkDb8p/RuZFxq74k28WM8Zo Nssgg4EkHFpQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.75,348,1589266800"; d="scan'208";a="281465660" Received: from silpixa00398673.ir.intel.com (HELO silpixa00398673.ger.corp.intel.com) ([10.237.223.136]) by orsmga003.jf.intel.com with ESMTP; 13 Jul 2020 09:58:00 -0700 From: Fan Zhang To: dev@dpdk.org Cc: fiona.trahe@intel.com, akhil.goyal@nxp.com, Fan Zhang Date: Mon, 13 Jul 2020 17:57:53 +0100 Message-Id: <20200713165755.61814-3-roy.fan.zhang@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200713165755.61814-1-roy.fan.zhang@intel.com> References: <20200703124942.29171-1-roy.fan.zhang@intel.com> <20200713165755.61814-1-roy.fan.zhang@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [dpdk-dev v5 2/4] crypto/qat: add support to direct data-path APIs X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch add symmetric crypto data-path APIs support to QAT-SYM PMD. 
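For reference, below is a minimal usage sketch of the new direct data-path APIs, modelled on the unit test added in patch 3/4 of this series. It is illustrative only: device and queue-pair configuration, session creation, and population of the rte_crypto_sym_vec (sgl, iv_vec, aad_vec, digest_vec, status, num) are assumed to have been done by the caller, a one-element vec (vec->num == 1) is assumed, and the helper names (hw_dp_aead_once, deq_count_one, mark_op_status) are placeholders rather than part of this patch.

#include <rte_cryptodev.h>

/* Placeholder callbacks: always report one ready job and write the
 * completion status back into the rte_crypto_op used as opaque cookie. */
static uint32_t
deq_count_one(void *opaque __rte_unused)
{
	return 1;
}

static void
mark_op_status(void *opaque, uint32_t idx __rte_unused, uint8_t ok)
{
	struct rte_crypto_op *op = opaque;

	op->status = ok ? RTE_CRYPTO_OP_STATUS_SUCCESS :
			RTE_CRYPTO_OP_STATUS_ERROR;
}

/* Enqueue one pre-built AEAD job described by 'vec' and busy-poll until
 * it completes. 'op' is only used as the per-job opaque data. */
static int
hw_dp_aead_once(uint8_t dev_id, uint16_t qp_id,
		struct rte_cryptodev_sym_session *sym_sess,
		union rte_crypto_sym_ofs ofs,
		struct rte_crypto_sym_vec *vec,
		struct rte_crypto_op *op)
{
	union rte_cryptodev_hw_session_ctx sess = { .crypto_sess = sym_sess };
	void *opaque[1] = { op };
	void *out_opaque[1] = { NULL };
	uint32_t n_success = 0;

	if (rte_cryptodev_sym_hw_crypto_enqueue_aead(dev_id, qp_id, sess,
			ofs, vec, opaque,
			RTE_CRYPTO_HW_DP_FF_CRYPTO_SESSION |
			RTE_CRYPTO_HW_DP_FF_SET_OPAQUE_ARRAY |
			RTE_CRYPTO_HW_DP_FF_KICK_QUEUE) != vec->num)
		return -1; /* vec->status[] holds the per-job error codes */

	while (rte_cryptodev_sym_hw_crypto_dequeue(dev_id, qp_id,
			deq_count_one, mark_op_status, out_opaque,
			&n_success,
			RTE_CRYPTO_HW_DP_FF_GET_OPAQUE_ARRAY |
			RTE_CRYPTO_HW_DP_FF_DEQUEUE_EXHAUST) == 0)
		;

	return n_success == vec->num ? 0 : -1;
}
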
Signed-off-by: Fan Zhang --- drivers/common/qat/Makefile | 1 + drivers/common/qat/qat_qp.h | 1 + drivers/crypto/qat/meson.build | 1 + drivers/crypto/qat/qat_sym.h | 3 + drivers/crypto/qat/qat_sym_hw_dp.c | 850 +++++++++++++++++++++++++++++ drivers/crypto/qat/qat_sym_pmd.c | 7 +- 6 files changed, 861 insertions(+), 2 deletions(-) create mode 100644 drivers/crypto/qat/qat_sym_hw_dp.c diff --git a/drivers/common/qat/Makefile b/drivers/common/qat/Makefile index 85d420709..1b71bbbab 100644 --- a/drivers/common/qat/Makefile +++ b/drivers/common/qat/Makefile @@ -42,6 +42,7 @@ endif SRCS-y += qat_sym.c SRCS-y += qat_sym_session.c SRCS-y += qat_sym_pmd.c + SRCS-y += qat_sym_hw_dp.c build_qat = yes endif endif diff --git a/drivers/common/qat/qat_qp.h b/drivers/common/qat/qat_qp.h index 575d69059..ea40f2050 100644 --- a/drivers/common/qat/qat_qp.h +++ b/drivers/common/qat/qat_qp.h @@ -79,6 +79,7 @@ struct qat_qp { /**< qat device this qp is on */ uint32_t enqueued; uint32_t dequeued __rte_aligned(4); + uint16_t cached; uint16_t max_inflights; uint16_t min_enq_burst_threshold; } __rte_cache_aligned; diff --git a/drivers/crypto/qat/meson.build b/drivers/crypto/qat/meson.build index a225f374a..bc90ec44c 100644 --- a/drivers/crypto/qat/meson.build +++ b/drivers/crypto/qat/meson.build @@ -15,6 +15,7 @@ if dep.found() qat_sources += files('qat_sym_pmd.c', 'qat_sym.c', 'qat_sym_session.c', + 'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c') qat_ext_deps += dep diff --git a/drivers/crypto/qat/qat_sym.h b/drivers/crypto/qat/qat_sym.h index dbca74efb..383e3c3f7 100644 --- a/drivers/crypto/qat/qat_sym.h +++ b/drivers/crypto/qat/qat_sym.h @@ -212,11 +212,14 @@ qat_sym_process_response(void **op, uint8_t *resp) } *op = (void *)rx_op; } + +extern struct rte_crytodev_sym_hw_dp_ops qat_hw_dp_ops; #else static inline void qat_sym_process_response(void **op __rte_unused, uint8_t *resp __rte_unused) { } + #endif #endif /* _QAT_SYM_H_ */ diff --git a/drivers/crypto/qat/qat_sym_hw_dp.c b/drivers/crypto/qat/qat_sym_hw_dp.c new file mode 100644 index 000000000..8a946c563 --- /dev/null +++ b/drivers/crypto/qat/qat_sym_hw_dp.c @@ -0,0 +1,850 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2020 Intel Corporation + */ + +#include + +#include "adf_transport_access_macros.h" +#include "icp_qat_fw.h" +#include "icp_qat_fw_la.h" + +#include "qat_sym.h" +#include "qat_sym_pmd.h" +#include "qat_sym_session.h" +#include "qat_qp.h" + +static __rte_always_inline int32_t +qat_sym_dp_fill_sgl(struct qat_qp *qp, struct icp_qat_fw_la_bulk_req *req, + struct rte_crypto_sgl *sgl) +{ + struct qat_queue *tx_queue = &qp->tx_q; + struct qat_sym_op_cookie *cookie; + struct qat_sgl *list; + uint32_t i; + uint32_t total_len = 0; + + if (!sgl) + return -EINVAL; + if (sgl->num < 2 || sgl->num > QAT_SYM_SGL_MAX_NUMBER || !sgl->vec) + return -EINVAL; + + ICP_QAT_FW_COMN_PTR_TYPE_SET(req->comn_hdr.comn_req_flags, + QAT_COMN_PTR_TYPE_SGL); + cookie = qp->op_cookies[tx_queue->tail >> tx_queue->trailz]; + list = (struct qat_sgl *)&cookie->qat_sgl_src; + + for (i = 0; i < sgl->num; i++) { + list->buffers[i].len = sgl->vec[i].len; + list->buffers[i].resrvd = 0; + list->buffers[i].addr = sgl->vec[i].iova; + if (total_len + sgl->vec[i].len > UINT32_MAX) { + QAT_DP_LOG(ERR, "Message too long"); + return -ENOMEM; + } + total_len += sgl->vec[i].len; + } + + list->num_bufs = i; + req->comn_mid.src_data_addr = req->comn_mid.dest_data_addr = + cookie->qat_sgl_src_phys_addr; + req->comn_mid.src_length = req->comn_mid.dst_length = 0; + return total_len; 
+} + +static __rte_always_inline void +set_cipher_iv(struct icp_qat_fw_la_cipher_req_params *cipher_param, + struct rte_crypto_vec *iv, uint32_t iv_len, + struct icp_qat_fw_la_bulk_req *qat_req) +{ + /* copy IV into request if it fits */ + if (iv_len <= sizeof(cipher_param->u.cipher_IV_array)) + rte_memcpy(cipher_param->u.cipher_IV_array, iv->base, iv_len); + else { + ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET( + qat_req->comn_hdr.serv_specif_flags, + ICP_QAT_FW_CIPH_IV_64BIT_PTR); + cipher_param->u.s.cipher_IV_ptr = iv->iova; + } +} + +#define QAT_SYM_DP_IS_RESP_SUCCESS(resp) \ + (ICP_QAT_FW_COMN_STATUS_FLAG_OK == \ + ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(resp->comn_hdr.comn_status)) + +#define QAT_SYM_DP_IS_VEC_VALID(qp, flag, n) \ + (((qp)->service_type == QAT_SERVICE_SYMMETRIC) && \ + (flags & RTE_CRYPTO_HW_DP_FF_SESSIONLESS) == 0 && \ + (flags & RTE_CRYPTO_HW_DP_FF_SECURITY_SESSION) == 0 && \ + ((qp)->enqueued + (qp)->cached + (n) < qp->nb_descriptors - 1)) + +static __rte_always_inline void +qat_sym_dp_update_tx_queue(struct qat_qp *qp, struct qat_queue *tx_queue, + uint32_t tail, uint32_t n, uint32_t flags) +{ + if (unlikely((flags & RTE_CRYPTO_HW_DP_FF_KICK_QUEUE) || + qp->cached + n > QAT_CSR_HEAD_WRITE_THRESH)) { + qp->enqueued += n; + qp->stats.enqueued_count += n; + + tx_queue->tail = tail; + + WRITE_CSR_RING_TAIL(qp->mmap_bar_addr, + tx_queue->hw_bundle_number, + tx_queue->hw_queue_number, tx_queue->tail); + tx_queue->csr_tail = tx_queue->tail; + qp->cached = 0; + + return; + } + + qp->cached += n; +} + +static __rte_always_inline void +qat_sym_dp_fill_vec_status(int32_t *sta, int status, uint32_t n) +{ + uint32_t i; + + for (i = 0; i < n; i++) + sta[i] = status; +} + +static __rte_always_inline uint32_t +qat_sym_dp_enqueue_aead(struct rte_cryptodev *dev, uint16_t qp_id, + union rte_cryptodev_hw_session_ctx session, + union rte_crypto_sym_ofs ofs, struct rte_crypto_sym_vec *vec, + void **opaque, uint32_t flags) +{ + struct qat_qp *qp = dev->data->queue_pairs[qp_id]; + struct rte_cryptodev_sym_session *sess; + struct qat_queue *tx_queue; + struct qat_sym_session *ctx; + uint32_t i; + register uint32_t tail; + + if (unlikely(QAT_SYM_DP_IS_VEC_VALID(qp, flags, vec->num) == 0)) { + QAT_DP_LOG(ERR, "Operation not supported"); + qat_sym_dp_fill_vec_status(vec->status, -1, vec->num); + return 0; + } + + sess = session.crypto_sess; + ctx = (struct qat_sym_session *)get_sym_session_private_data(sess, + dev->driver_id); + tx_queue = &qp->tx_q; + tail = (tx_queue->tail + qp->cached * tx_queue->msg_size) & + tx_queue->modulo_mask; + + for (i = 0; i < vec->num; i++) { + struct icp_qat_fw_la_bulk_req *req; + struct icp_qat_fw_la_cipher_req_params *cipher_param; + struct icp_qat_fw_la_auth_req_params *auth_param; + struct rte_crypto_sgl *sgl = &vec->sgl[i]; + struct rte_crypto_vec *iv_vec = &vec->iv_vec[i]; + struct rte_crypto_vec *aad_vec = &vec->aad_vec[i]; + struct rte_crypto_vec *digest_vec = &vec->digest_vec[i]; + uint8_t *aad_data; + uint8_t aad_ccm_real_len; + uint8_t aad_len_field_sz; + uint32_t aead_len, msg_len_be; + rte_iova_t aad_iova = 0; + uint8_t q; + + req = (struct icp_qat_fw_la_bulk_req *)( + (uint8_t *)tx_queue->base_addr + tail); + rte_mov128((uint8_t *)req, + (const uint8_t *)&(ctx->fw_req)); + + if (i == 0 || (flags & RTE_CRYPTO_HW_DP_FF_SET_OPAQUE_ARRAY)) + req->comn_mid.opaque_data = (uint64_t)opaque[i]; + + cipher_param = (void *)&req->serv_specif_rqpars; + auth_param = (void *)((uint8_t *)cipher_param + + ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET); + 
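/* Default to a flat single-buffer job using the first SGL element; the SGL path below (sgl->num > 1) replaces these fields with the SGL cookie. */ +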
req->comn_mid.src_data_addr = req->comn_mid.dest_data_addr = + sgl->vec[0].iova; + req->comn_mid.src_length = req->comn_mid.dst_length = + sgl->vec[0].len; + + aead_len = sgl->vec[0].len - ofs.ofs.cipher.head - + ofs.ofs.cipher.tail; + + switch (ctx->qat_hash_alg) { + case ICP_QAT_HW_AUTH_ALGO_GALOIS_128: + case ICP_QAT_HW_AUTH_ALGO_GALOIS_64: + ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET( + req->comn_hdr.serv_specif_flags, + ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS); + rte_memcpy_generic(cipher_param->u.cipher_IV_array, + iv_vec->base, ctx->cipher_iv.length); + aad_iova = aad_vec->iova; + break; + case ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC: + aad_data = aad_vec->base; + aad_iova = aad_vec->iova; + aad_ccm_real_len = 0; + aad_len_field_sz = 0; + msg_len_be = rte_bswap32(aead_len); + + if (ctx->aad_len > ICP_QAT_HW_CCM_AAD_DATA_OFFSET) { + aad_len_field_sz = ICP_QAT_HW_CCM_AAD_LEN_INFO; + aad_ccm_real_len = ctx->aad_len - + ICP_QAT_HW_CCM_AAD_B0_LEN - + ICP_QAT_HW_CCM_AAD_LEN_INFO; + } else { + aad_data = iv_vec->base; + aad_iova = iv_vec->iova; + } + + q = ICP_QAT_HW_CCM_NQ_CONST - ctx->cipher_iv.length; + aad_data[0] = ICP_QAT_HW_CCM_BUILD_B0_FLAGS( + aad_len_field_sz, ctx->digest_length, q); + if (q > ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE) { + memcpy(aad_data + ctx->cipher_iv.length + + ICP_QAT_HW_CCM_NONCE_OFFSET + (q - + ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE), + (uint8_t *)&msg_len_be, + ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE); + } else { + memcpy(aad_data + ctx->cipher_iv.length + + ICP_QAT_HW_CCM_NONCE_OFFSET, + (uint8_t *)&msg_len_be + + (ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE + - q), q); + } + + if (aad_len_field_sz > 0) { + *(uint16_t *) + &aad_data[ICP_QAT_HW_CCM_AAD_B0_LEN] = + rte_bswap16(aad_ccm_real_len); + + if ((aad_ccm_real_len + aad_len_field_sz) + % ICP_QAT_HW_CCM_AAD_B0_LEN) { + uint8_t pad_len = 0; + uint8_t pad_idx = 0; + + pad_len = ICP_QAT_HW_CCM_AAD_B0_LEN - + ((aad_ccm_real_len + + aad_len_field_sz) % + ICP_QAT_HW_CCM_AAD_B0_LEN); + pad_idx = ICP_QAT_HW_CCM_AAD_B0_LEN + + aad_ccm_real_len + + aad_len_field_sz; + memset(&aad_data[pad_idx], 0, pad_len); + } + + rte_memcpy(((uint8_t *)cipher_param-> + u.cipher_IV_array) + + ICP_QAT_HW_CCM_NONCE_OFFSET, + (uint8_t *)iv_vec->base + + ICP_QAT_HW_CCM_NONCE_OFFSET, + ctx->cipher_iv.length); + *(uint8_t *)&cipher_param-> + u.cipher_IV_array[0] = + q - ICP_QAT_HW_CCM_NONCE_OFFSET; + + rte_memcpy((uint8_t *)aad_vec->base + + ICP_QAT_HW_CCM_NONCE_OFFSET, + (uint8_t *)iv_vec->base + + ICP_QAT_HW_CCM_NONCE_OFFSET, + ctx->cipher_iv.length); + } + break; + default: + if (flags & RTE_CRYPTO_HW_DP_FF_ENQUEUE_EXHAUST) + break; + /* Give up enqueue if exhaust enqueue is not set */ + QAT_DP_LOG(ERR, "Operation not supported"); + qat_sym_dp_fill_vec_status(vec->status, -1, vec->num); + return 0; + } + + cipher_param->cipher_offset = ofs.ofs.cipher.head; + cipher_param->cipher_length = aead_len; + auth_param->auth_off = ofs.ofs.cipher.head; + auth_param->auth_len = aead_len; + auth_param->auth_res_addr = digest_vec->iova; + auth_param->u1.aad_adr = aad_iova; + + /* SGL processing */ + if (unlikely(sgl->num > 1)) { + int total_len = qat_sym_dp_fill_sgl(qp, req, sgl); + + if (total_len < 0) { + if (flags & RTE_CRYPTO_HW_DP_FF_ENQUEUE_EXHAUST) + break; + /* Give up enqueue if exhaust is not set */ + QAT_DP_LOG(ERR, "Operation not supported"); + qat_sym_dp_fill_vec_status(vec->status, -1, + vec->num); + return 0; + } + + cipher_param->cipher_length = auth_param->auth_len = + total_len - ofs.ofs.cipher.head - + ofs.ofs.cipher.tail; + } + + if (ctx->is_single_pass) 
{ + cipher_param->spc_aad_addr = aad_iova; + cipher_param->spc_auth_res_addr = digest_vec->iova; + } + + tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; + + + } + + if (unlikely(i < vec->num)) + qat_sym_dp_fill_vec_status(vec->status + i, -1, vec->num - i); + + qat_sym_dp_update_tx_queue(qp, tx_queue, tail, i, flags); + + return i; +} + +static __rte_always_inline uint32_t +qat_sym_dp_enqueue_cipher(struct rte_cryptodev *dev, uint16_t qp_id, + union rte_cryptodev_hw_session_ctx session, + union rte_crypto_sym_ofs ofs, struct rte_crypto_sym_vec *vec, + void **opaque, uint32_t flags) +{ + struct qat_qp *qp = dev->data->queue_pairs[qp_id]; + struct rte_cryptodev_sym_session *sess; + struct qat_queue *tx_queue; + struct qat_sym_session *ctx; + uint32_t i; + register uint32_t tail; + + if (unlikely(QAT_SYM_DP_IS_VEC_VALID(qp, flags, vec->num) == 0)) { + QAT_DP_LOG(ERR, "Operation not supported"); + qat_sym_dp_fill_vec_status(vec->status, -1, vec->num); + return 0; + } + + sess = session.crypto_sess; + + ctx = (struct qat_sym_session *)get_sym_session_private_data(sess, + dev->driver_id); + + tx_queue = &qp->tx_q; + tail = (tx_queue->tail + qp->cached * tx_queue->msg_size) & + tx_queue->modulo_mask; + + for (i = 0; i < vec->num; i++) { + struct icp_qat_fw_la_bulk_req *req; + struct icp_qat_fw_la_cipher_req_params *cipher_param; + struct rte_crypto_sgl *sgl = &vec->sgl[i]; + struct rte_crypto_vec *iv_vec = &vec->iv_vec[i]; + + req = (struct icp_qat_fw_la_bulk_req *)( + (uint8_t *)tx_queue->base_addr + tail); + rte_mov128((uint8_t *)req, + (const uint8_t *)&(ctx->fw_req)); + + if (i == 0 || (flags & RTE_CRYPTO_HW_DP_FF_SET_OPAQUE_ARRAY)) + req->comn_mid.opaque_data = (uint64_t)opaque[i]; + + cipher_param = (void *)&req->serv_specif_rqpars; + + req->comn_mid.src_data_addr = req->comn_mid.dest_data_addr = + sgl->vec[0].iova; + req->comn_mid.src_length = req->comn_mid.dst_length = + sgl->vec[0].len; + + /* cipher IV */ + set_cipher_iv(cipher_param, iv_vec, ctx->cipher_iv.length, req); + cipher_param->cipher_offset = ofs.ofs.cipher.head; + cipher_param->cipher_length = sgl->vec[0].len - + ofs.ofs.cipher.head - ofs.ofs.cipher.tail; + + /* SGL processing */ + if (unlikely(sgl->num > 1)) { + int total_len = qat_sym_dp_fill_sgl(qp, req, sgl); + + if (total_len < 0) { + if (flags & RTE_CRYPTO_HW_DP_FF_ENQUEUE_EXHAUST) + break; + /* Give up enqueue if exhaust is not set */ + QAT_DP_LOG(ERR, "Operation not supported"); + qat_sym_dp_fill_vec_status(vec->status, -1, + vec->num); + return 0; + } + + cipher_param->cipher_length = total_len - + ofs.ofs.cipher.head - ofs.ofs.cipher.tail; + } + + tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; + + } + + qat_sym_dp_update_tx_queue(qp, tx_queue, tail, i, flags); + + return i; +} + +static __rte_always_inline uint32_t +qat_sym_dp_enqueue_auth(struct rte_cryptodev *dev, uint16_t qp_id, + union rte_cryptodev_hw_session_ctx session, + union rte_crypto_sym_ofs ofs, struct rte_crypto_sym_vec *vec, + void **opaque, uint32_t flags) +{ + struct qat_qp *qp = dev->data->queue_pairs[qp_id]; + struct rte_cryptodev_sym_session *sess; + struct qat_queue *tx_queue; + struct qat_sym_session *ctx; + uint32_t i; + register uint32_t tail; + + if (unlikely(QAT_SYM_DP_IS_VEC_VALID(qp, flags, vec->num) == 0)) { + QAT_DP_LOG(ERR, "Operation not supported"); + qat_sym_dp_fill_vec_status(vec->status, -1, vec->num); + return 0; + } + + sess = session.crypto_sess; + + ctx = (struct qat_sym_session *)get_sym_session_private_data(sess, + dev->driver_id); + + tx_queue = 
&qp->tx_q; + tail = (tx_queue->tail + qp->cached * tx_queue->msg_size) & + tx_queue->modulo_mask; + + for (i = 0; i < vec->num; i++) { + struct icp_qat_fw_la_bulk_req *req; + struct icp_qat_fw_la_cipher_req_params *cipher_param; + struct icp_qat_fw_la_auth_req_params *auth_param; + struct rte_crypto_sgl *sgl = &vec->sgl[i]; + struct rte_crypto_vec *iv_vec = &vec->iv_vec[i]; + struct rte_crypto_vec *digest_vec = &vec->digest_vec[i]; + int total_len; + + req = (struct icp_qat_fw_la_bulk_req *)( + (uint8_t *)tx_queue->base_addr + tail); + rte_mov128((uint8_t *)req, + (const uint8_t *)&(ctx->fw_req)); + + if (i == 0 || (flags & RTE_CRYPTO_HW_DP_FF_SET_OPAQUE_ARRAY)) + req->comn_mid.opaque_data = (uint64_t)opaque[i]; + + cipher_param = (void *)&req->serv_specif_rqpars; + auth_param = (void *)((uint8_t *)cipher_param + + ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET); + req->comn_mid.src_data_addr = req->comn_mid.dest_data_addr = + sgl->vec[0].iova; + req->comn_mid.src_length = req->comn_mid.dst_length = + sgl->vec[0].len; + + auth_param->auth_off = ofs.ofs.auth.head; + auth_param->auth_len = sgl->vec[0].len - ofs.ofs.auth.head - + ofs.ofs.auth.tail; + auth_param->auth_res_addr = digest_vec->iova; + + switch (ctx->qat_hash_alg) { + case ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2: + case ICP_QAT_HW_AUTH_ALGO_KASUMI_F9: + case ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3: + auth_param->u1.aad_adr = iv_vec->iova; + break; + case ICP_QAT_HW_AUTH_ALGO_GALOIS_128: + case ICP_QAT_HW_AUTH_ALGO_GALOIS_64: + ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET( + req->comn_hdr.serv_specif_flags, + ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS); + rte_memcpy_generic(cipher_param->u.cipher_IV_array, + iv_vec->base, ctx->cipher_iv.length); + break; + default: + break; + } + + /* SGL processing */ + if (unlikely(sgl->num > 1)) { + total_len = qat_sym_dp_fill_sgl(qp, req, sgl); + + if (total_len < 0) { + if (flags & RTE_CRYPTO_HW_DP_FF_ENQUEUE_EXHAUST) + break; + /* Give up enqueue if exhaust is not set */ + QAT_DP_LOG(ERR, "Operation not supported"); + qat_sym_dp_fill_vec_status(vec->status, -1, + vec->num); + return 0; + } + + cipher_param->cipher_length = auth_param->auth_len = + total_len - ofs.ofs.cipher.head - + ofs.ofs.cipher.tail; + } + + tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; + } + + if (unlikely(i < vec->num)) + qat_sym_dp_fill_vec_status(vec->status + i, -1, vec->num - i); + + qat_sym_dp_update_tx_queue(qp, tx_queue, tail, i, flags); + + return i; +} + +static __rte_always_inline uint32_t +qat_sym_dp_enqueue_chain(struct rte_cryptodev *dev, uint16_t qp_id, + union rte_cryptodev_hw_session_ctx session, + union rte_crypto_sym_ofs ofs, struct rte_crypto_sym_vec *vec, + void **opaque, uint32_t flags) +{ + struct qat_qp *qp = dev->data->queue_pairs[qp_id]; + struct rte_cryptodev_sym_session *sess; + struct qat_queue *tx_queue; + struct qat_sym_session *ctx; + uint32_t i; + register uint32_t tail; + + if (unlikely(QAT_SYM_DP_IS_VEC_VALID(qp, flags, vec->num) == 0)) { + QAT_DP_LOG(ERR, "Operation not supported"); + qat_sym_dp_fill_vec_status(vec->status, -1, vec->num); + return 0; + } + + sess = session.crypto_sess; + + ctx = (struct qat_sym_session *)get_sym_session_private_data(sess, + dev->driver_id); + + tx_queue = &qp->tx_q; + tail = (tx_queue->tail + qp->cached * tx_queue->msg_size) & + tx_queue->modulo_mask; + + for (i = 0; i < vec->num; i++) { + struct icp_qat_fw_la_bulk_req *req; + struct icp_qat_fw_la_cipher_req_params *cipher_param; + struct icp_qat_fw_la_auth_req_params *auth_param; + struct rte_crypto_sgl *sgl = 
&vec->sgl[i]; + struct rte_crypto_vec *iv_vec = &vec->iv_vec[i]; + struct rte_crypto_vec *digest_vec = &vec->digest_vec[i]; + rte_iova_t auth_iova_end; + int total_len; + + req = (struct icp_qat_fw_la_bulk_req *)( + (uint8_t *)tx_queue->base_addr + tail); + rte_mov128((uint8_t *)req, + (const uint8_t *)&(ctx->fw_req)); + + if (i == 0 || (flags & RTE_CRYPTO_HW_DP_FF_SET_OPAQUE_ARRAY)) + req->comn_mid.opaque_data = (uint64_t)opaque[i]; + + cipher_param = (void *)&req->serv_specif_rqpars; + auth_param = (void *)((uint8_t *)cipher_param + + ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET); + req->comn_mid.src_data_addr = req->comn_mid.dest_data_addr = + sgl->vec[0].iova; + req->comn_mid.src_length = req->comn_mid.dst_length = + sgl->vec[0].len; + + cipher_param->cipher_offset = ofs.ofs.cipher.head; + cipher_param->cipher_length = sgl->vec[0].len - + ofs.ofs.cipher.head - ofs.ofs.cipher.tail; + set_cipher_iv(cipher_param, iv_vec, ctx->cipher_iv.length, req); + + auth_param->auth_off = ofs.ofs.cipher.head; + auth_param->auth_len = sgl->vec[0].len - + ofs.ofs.auth.head - ofs.ofs.auth.tail; + auth_param->auth_res_addr = digest_vec->iova; + + /* SGL processing */ + if (unlikely(sgl->num > 1)) { + total_len = qat_sym_dp_fill_sgl(qp, req, sgl); + + if (total_len < 0) { + if (flags & RTE_CRYPTO_HW_DP_FF_ENQUEUE_EXHAUST) + break; + /* Give up enqueue if exhaust is not set */ + QAT_DP_LOG(ERR, "Operation not supported"); + qat_sym_dp_fill_vec_status(vec->status, -1, + vec->num); + return 0; + } + + cipher_param->cipher_length = auth_param->auth_len = + total_len - ofs.ofs.cipher.head - + ofs.ofs.cipher.tail; + } + + switch (ctx->qat_hash_alg) { + case ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2: + case ICP_QAT_HW_AUTH_ALGO_KASUMI_F9: + case ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3: + auth_param->u1.aad_adr = iv_vec->iova; + + if (unlikely(sgl->num > 1)) { + int auth_end_get = 0, i = sgl->num - 1; + struct rte_crypto_vec *cvec = &sgl->vec[i]; + uint32_t len; + + if (total_len - ofs.ofs.auth.tail < 0) { + if (flags & + RTE_CRYPTO_HW_DP_FF_ENQUEUE_EXHAUST) + break; + /* Give up enqueue if exhaust not set */ + QAT_DP_LOG(ERR, "Incorrect length"); + qat_sym_dp_fill_vec_status(vec->status, + -1, vec->num); + return 0; + } + + len = total_len - ofs.ofs.auth.tail; + + while (i >= 0 && len > 0) { + if (cvec->len >= len) { + auth_iova_end = cvec->iova + + (cvec->len - len); + len = 0; + auth_end_get = 1; + break; + } + len -= cvec->len; + i--; + vec--; + } + + if (!auth_end_get) { + QAT_DP_LOG(ERR, "Failed to get end"); + if (flags & + RTE_CRYPTO_HW_DP_FF_ENQUEUE_EXHAUST) + break; + /* Give up enqueue if exhaust not set */ + QAT_DP_LOG(ERR, "Incorrect length"); + qat_sym_dp_fill_vec_status(vec->status, + -1, vec->num); + return 0; + } + } else + auth_iova_end = digest_vec->iova + + digest_vec->len; + + /* Then check if digest-encrypted conditions are met */ + if ((auth_param->auth_off + auth_param->auth_len < + cipher_param->cipher_offset + + cipher_param->cipher_length) && + (digest_vec->iova == auth_iova_end)) { + /* Handle partial digest encryption */ + if (cipher_param->cipher_offset + + cipher_param->cipher_length < + auth_param->auth_off + + auth_param->auth_len + + ctx->digest_length) + req->comn_mid.dst_length = + req->comn_mid.src_length = + auth_param->auth_off + + auth_param->auth_len + + ctx->digest_length; + struct icp_qat_fw_comn_req_hdr *header = + &req->comn_hdr; + ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET( + header->serv_specif_flags, + ICP_QAT_FW_LA_DIGEST_IN_BUFFER); + } + break; + case ICP_QAT_HW_AUTH_ALGO_GALOIS_128: + 
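/* fall through: a Galois hash in a chain means GMAC, rejected below */ +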
case ICP_QAT_HW_AUTH_ALGO_GALOIS_64: + QAT_DP_LOG(ERR, "GMAC as auth algo not supported"); + return -1; + default: + break; + } + + tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; + } + + if (unlikely(i < vec->num)) + qat_sym_dp_fill_vec_status(vec->status + i, -1, vec->num - i); + + qat_sym_dp_update_tx_queue(qp, tx_queue, tail, i, flags); + + return i; +} + +static __rte_always_inline uint32_t +qat_sym_dp_dequeue(struct rte_cryptodev *dev, uint16_t qp_id, + rte_cryptodev_get_dequeue_count_t get_dequeue_count, + rte_cryptodev_post_dequeue_t post_dequeue, + void **out_opaque, + uint32_t *n_success_jobs, uint32_t flags) +{ + struct qat_qp *qp = dev->data->queue_pairs[qp_id]; + register struct qat_queue *rx_queue; + struct icp_qat_fw_comn_resp *resp, *last_resp = 0; + void *resp_opaque; + uint32_t i, n; + uint32_t head; + uint8_t status; + + *n_success_jobs = 0; + rx_queue = &qp->rx_q; + head = rx_queue->head; + + resp = (struct icp_qat_fw_comn_resp *)((uint8_t *)rx_queue->base_addr + + head); + /* no operation ready */ + if (unlikely(*(uint32_t *)resp == ADF_RING_EMPTY_SIG)) + return 0; + + resp_opaque = (void *)(uintptr_t)resp->opaque_data; + /* get the dequeue count */ + n = get_dequeue_count(resp_opaque); + assert(n > 0); + + out_opaque[0] = resp_opaque; + head = (head + rx_queue->msg_size) & rx_queue->modulo_mask; + status = QAT_SYM_DP_IS_RESP_SUCCESS(resp); + post_dequeue(resp_opaque, 0, status); + *n_success_jobs += status; + + /* we already finished dequeue when n == 1 */ + if (unlikely(n == 1)) { + i = 1; + goto update_rx_queue; + } + + last_resp = (struct icp_qat_fw_comn_resp *)( + (uint8_t *)rx_queue->base_addr + ((head + rx_queue->msg_size * + (n - 2)) & rx_queue->modulo_mask)); + + /* if EXAUST is not set, check if we can dequeue that many jobs */ + if (flags & RTE_CRYPTO_HW_DP_FF_DEQUEUE_EXHAUST) { + if (flags & RTE_CRYPTO_HW_DP_FF_GET_OPAQUE_ARRAY) { + for (i = 1; i < n - 1; i++) { + resp = (struct icp_qat_fw_comn_resp *)( + (uint8_t *)rx_queue->base_addr + head); + if (unlikely(*(uint32_t *)resp == + ADF_RING_EMPTY_SIG)) + goto update_rx_queue; + out_opaque[i] = (void *)(uintptr_t) + resp->opaque_data; + status = QAT_SYM_DP_IS_RESP_SUCCESS(resp); + *n_success_jobs += status; + post_dequeue(out_opaque[i], i, status); + head = (head + rx_queue->msg_size) & + rx_queue->modulo_mask; + } + + status = QAT_SYM_DP_IS_RESP_SUCCESS(last_resp); + out_opaque[i] = (void *)(uintptr_t) + last_resp->opaque_data; + post_dequeue(out_opaque[i], i, status); + *n_success_jobs += status; + i++; + head = (head + rx_queue->msg_size) & + rx_queue->modulo_mask; + goto update_rx_queue; + } + + /* (flags & RTE_CRYPTO_HW_DP_FF_GET_OPAQUE_ARRAY) == 0 */ + for (i = 1; i < n - 1; i++) { + resp = (struct icp_qat_fw_comn_resp *)( + (uint8_t *)rx_queue->base_addr + head); + status = QAT_SYM_DP_IS_RESP_SUCCESS(resp); + if (unlikely(*(uint32_t *)resp == ADF_RING_EMPTY_SIG)) + goto update_rx_queue; + head = (head + rx_queue->msg_size) & + rx_queue->modulo_mask; + post_dequeue(resp_opaque, i, status); + *n_success_jobs += status; + } + status = QAT_SYM_DP_IS_RESP_SUCCESS(last_resp); + post_dequeue(resp_opaque, i, status); + *n_success_jobs += status; + i++; + head = (head + rx_queue->msg_size) & rx_queue->modulo_mask; + goto update_rx_queue; + } + + /* not all operation ready */ + if (unlikely(*(uint32_t *)last_resp == ADF_RING_EMPTY_SIG)) + return 0; + + if (flags & RTE_CRYPTO_HW_DP_FF_GET_OPAQUE_ARRAY) { + for (i = 1; i < n - 1; i++) { + uint8_t status; + + resp = (struct icp_qat_fw_comn_resp *)( + 
(uint8_t *)rx_queue->base_addr + head); + out_opaque[i] = (void *)(uintptr_t)resp->opaque_data; + status = QAT_SYM_DP_IS_RESP_SUCCESS(resp); + *n_success_jobs += status; + post_dequeue(out_opaque[i], i, status); + head = (head + rx_queue->msg_size) & + rx_queue->modulo_mask; + } + out_opaque[i] = (void *)(uintptr_t)last_resp->opaque_data; + post_dequeue(out_opaque[i], i, + QAT_SYM_DP_IS_RESP_SUCCESS(last_resp)); + i++; + head = (head + rx_queue->msg_size) & rx_queue->modulo_mask; + goto update_rx_queue; + } + + /* (flags & RTE_CRYPTO_HW_DP_FF_GET_OPAQUE_ARRAY) == 0 */ + for (i = 1; i < n - 1; i++) { + resp = (struct icp_qat_fw_comn_resp *)( + (uint8_t *)rx_queue->base_addr + head); + status = QAT_SYM_DP_IS_RESP_SUCCESS(resp); + *n_success_jobs += status; + post_dequeue(resp_opaque, i, status); + head = (head + rx_queue->msg_size) & rx_queue->modulo_mask; + } + + status = QAT_SYM_DP_IS_RESP_SUCCESS(resp); + head = (head + rx_queue->msg_size) & rx_queue->modulo_mask; + i++; + *n_success_jobs += status; + post_dequeue(resp_opaque, i, status); + +update_rx_queue: + rx_queue->head = head; + rx_queue->nb_processed_responses += i; + qp->dequeued += i; + qp->stats.dequeued_count += i; + if (rx_queue->nb_processed_responses > QAT_CSR_HEAD_WRITE_THRESH) { + uint32_t old_head, new_head; + uint32_t max_head; + + old_head = rx_queue->csr_head; + new_head = rx_queue->head; + max_head = qp->nb_descriptors * rx_queue->msg_size; + + /* write out free descriptors */ + void *cur_desc = (uint8_t *)rx_queue->base_addr + old_head; + + if (new_head < old_head) { + memset(cur_desc, ADF_RING_EMPTY_SIG_BYTE, + max_head - old_head); + memset(rx_queue->base_addr, ADF_RING_EMPTY_SIG_BYTE, + new_head); + } else { + memset(cur_desc, ADF_RING_EMPTY_SIG_BYTE, new_head - + old_head); + } + rx_queue->nb_processed_responses = 0; + rx_queue->csr_head = new_head; + + /* write current head to CSR */ + WRITE_CSR_RING_HEAD(qp->mmap_bar_addr, + rx_queue->hw_bundle_number, rx_queue->hw_queue_number, + new_head); + } + + return i; +} + +struct rte_crytodev_sym_hw_dp_ops qat_hw_dp_ops = { + .enqueue_aead = qat_sym_dp_enqueue_aead, + .enqueue_cipher = qat_sym_dp_enqueue_cipher, + .enqueue_auth = qat_sym_dp_enqueue_auth, + .enqueue_chain = qat_sym_dp_enqueue_chain, + .dequeue = qat_sym_dp_dequeue +}; diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c index c7e323cce..ba6c2130f 100644 --- a/drivers/crypto/qat/qat_sym_pmd.c +++ b/drivers/crypto/qat/qat_sym_pmd.c @@ -259,7 +259,9 @@ static struct rte_cryptodev_ops crypto_qat_ops = { /* Crypto related operations */ .sym_session_get_size = qat_sym_session_get_private_size, .sym_session_configure = qat_sym_session_configure, - .sym_session_clear = qat_sym_session_clear + .sym_session_clear = qat_sym_session_clear, + + .sym_hw_enq_deq = &qat_hw_dp_ops }; #ifdef RTE_LIBRTE_SECURITY @@ -382,7 +384,8 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev, RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT | RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT | RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED | - RTE_CRYPTODEV_FF_SECURITY; + RTE_CRYPTODEV_FF_SECURITY | + RTE_CRYPTODEV_FF_SYM_HW_DIRECT_API; if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; From patchwork Mon Jul 13 16:57:54 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Fan Zhang X-Patchwork-Id: 73972 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org 
[92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 980E1A0540; Mon, 13 Jul 2020 18:58:31 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 2F9741D6D0; Mon, 13 Jul 2020 18:58:10 +0200 (CEST) Received: from mga06.intel.com (mga06.intel.com [134.134.136.31]) by dpdk.org (Postfix) with ESMTP id A22291D6AA for ; Mon, 13 Jul 2020 18:58:04 +0200 (CEST) IronPort-SDR: nf3E3UkL1ZUft6uTI9+a5KSByD3e6NLzGMwC3O3Y4TcrvbR3E8fhwSZpVwkYYomfp3MNrCR29S u0w99XjtxOvA== X-IronPort-AV: E=McAfee;i="6000,8403,9681"; a="210203475" X-IronPort-AV: E=Sophos;i="5.75,348,1589266800"; d="scan'208";a="210203475" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga003.jf.intel.com ([10.7.209.27]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Jul 2020 09:58:04 -0700 IronPort-SDR: Q7HWlcpR6v/eMNPkcLIW17uf04tIAIb/+v7czsrc6MMrOynjNy8NmJwHIE2qyA83CooPH4ETZw L96BXuJrwIdg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.75,348,1589266800"; d="scan'208";a="281465672" Received: from silpixa00398673.ir.intel.com (HELO silpixa00398673.ger.corp.intel.com) ([10.237.223.136]) by orsmga003.jf.intel.com with ESMTP; 13 Jul 2020 09:58:02 -0700 From: Fan Zhang To: dev@dpdk.org Cc: fiona.trahe@intel.com, akhil.goyal@nxp.com, Fan Zhang Date: Mon, 13 Jul 2020 17:57:54 +0100 Message-Id: <20200713165755.61814-4-roy.fan.zhang@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200713165755.61814-1-roy.fan.zhang@intel.com> References: <20200703124942.29171-1-roy.fan.zhang@intel.com> <20200713165755.61814-1-roy.fan.zhang@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [dpdk-dev v5 3/4] test/crypto: add unit-test for cryptodev direct APIs X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch adds the QAT test to use cryptodev symmetric crypto direct APIs. Signed-off-by: Fan Zhang --- app/test/test_cryptodev.c | 367 ++++++++++++++++++++++++-- app/test/test_cryptodev.h | 6 + app/test/test_cryptodev_blockcipher.c | 50 ++-- 3 files changed, 386 insertions(+), 37 deletions(-) diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c index e71e73ae1..5e168c124 100644 --- a/app/test/test_cryptodev.c +++ b/app/test/test_cryptodev.c @@ -57,6 +57,8 @@ static int gbl_driver_id; static enum rte_security_session_action_type gbl_action_type = RTE_SECURITY_ACTION_TYPE_NONE; +int hw_dp_test; + struct crypto_testsuite_params { struct rte_mempool *mbuf_pool; struct rte_mempool *large_mbuf_pool; @@ -147,6 +149,168 @@ ceil_byte_length(uint32_t num_bits) return (num_bits >> 3); } +static uint32_t +get_dequeue_count(void *opaque __rte_unused) +{ + return 1; +} + +static void +write_status(void *opaque __rte_unused, uint32_t index __rte_unused, + uint8_t is_op_success) +{ + struct rte_crypto_op *op = opaque; + op->status = is_op_success ? 
RTE_CRYPTO_OP_STATUS_SUCCESS : + RTE_CRYPTO_OP_STATUS_ERROR; +} + +void +process_sym_hw_api_op(uint8_t dev_id, uint16_t qp_id, struct rte_crypto_op *op, + uint8_t is_cipher, uint8_t is_auth, uint8_t len_in_bits) +{ + int32_t n; + struct rte_crypto_sym_op *sop; + struct rte_crypto_sym_vec vec; + struct rte_crypto_sgl sgl; + struct rte_crypto_op *ret_op = NULL; + struct rte_crypto_vec data_vec[UINT8_MAX], iv_vec, aad_vec, digest_vec; + union rte_crypto_sym_ofs ofs; + int32_t status; + uint32_t min_ofs, max_len, nb_ops; + uint32_t n_success_ops; + union rte_cryptodev_hw_session_ctx sess; + enum { + cipher = 0, + auth, + chain, + aead + } hw_dp_test_type; + uint32_t count = 0; + uint32_t flags = RTE_CRYPTO_HW_DP_FF_CRYPTO_SESSION | + RTE_CRYPTO_HW_DP_FF_SET_OPAQUE_ARRAY | + RTE_CRYPTO_HW_DP_FF_KICK_QUEUE; + + memset(&vec, 0, sizeof(vec)); + + vec.sgl = &sgl; + vec.iv_vec = &iv_vec; + vec.aad_vec = &aad_vec; + vec.digest_vec = &digest_vec; + vec.status = &status; + vec.num = 1; + + sop = op->sym; + + sess.crypto_sess = sop->session; + + if (is_cipher && is_auth) { + hw_dp_test_type = chain; + min_ofs = RTE_MIN(sop->cipher.data.offset, + sop->auth.data.offset); + max_len = RTE_MAX(sop->cipher.data.length, + sop->auth.data.length); + } else if (is_cipher) { + hw_dp_test_type = cipher; + min_ofs = sop->cipher.data.offset; + max_len = sop->cipher.data.length; + } else if (is_auth) { + hw_dp_test_type = auth; + min_ofs = sop->auth.data.offset; + max_len = sop->auth.data.length; + } else { /* aead */ + hw_dp_test_type = aead; + min_ofs = sop->aead.data.offset; + max_len = sop->aead.data.length; + } + + if (len_in_bits) { + max_len = max_len >> 3; + min_ofs = min_ofs >> 3; + } + + n = rte_crypto_mbuf_to_vec(sop->m_src, 0, min_ofs + max_len, + data_vec, RTE_DIM(data_vec)); + if (n < 0 || n != sop->m_src->nb_segs) { + op->status = RTE_CRYPTO_OP_STATUS_ERROR; + return; + } + + sgl.vec = data_vec; + sgl.num = n; + + ofs.raw = 0; + + iv_vec.base = rte_crypto_op_ctod_offset(op, void *, IV_OFFSET); + iv_vec.iova = rte_crypto_op_ctophys_offset(op, IV_OFFSET); + + switch (hw_dp_test_type) { + case aead: + ofs.ofs.cipher.head = sop->cipher.data.offset; + aad_vec.base = (void *)sop->aead.aad.data; + aad_vec.iova = sop->aead.aad.phys_addr; + digest_vec.base = (void *)sop->aead.digest.data; + digest_vec.iova = sop->aead.digest.phys_addr; + if (len_in_bits) { + ofs.ofs.cipher.head >>= 3; + ofs.ofs.cipher.tail >>= 3; + } + nb_ops = rte_cryptodev_sym_hw_crypto_enqueue_aead(dev_id, qp_id, + sess, ofs, &vec, (void **)&op, flags); + break; + case cipher: + ofs.ofs.cipher.head = sop->cipher.data.offset; + if (len_in_bits) { + ofs.ofs.cipher.head >>= 3; + ofs.ofs.cipher.tail >>= 3; + } + nb_ops = rte_cryptodev_sym_hw_crypto_enqueue_cipher(dev_id, + qp_id, sess, ofs, &vec, (void **)&op, flags); + break; + case auth: + ofs.ofs.auth.head = sop->auth.data.offset; + digest_vec.base = (void *)sop->auth.digest.data; + digest_vec.iova = sop->auth.digest.phys_addr; + nb_ops = rte_cryptodev_sym_hw_crypto_enqueue_auth(dev_id, qp_id, + sess, ofs, &vec, (void **)&op, flags); + break; + case chain: + ofs.ofs.cipher.head = + sop->cipher.data.offset - sop->auth.data.offset; + ofs.ofs.cipher.tail = + (sop->auth.data.offset + sop->auth.data.length) - + (sop->cipher.data.offset + sop->cipher.data.length); + if (len_in_bits) { + ofs.ofs.cipher.head >>= 3; + ofs.ofs.cipher.tail >>= 3; + } + digest_vec.base = (void *)sop->auth.digest.data; + digest_vec.iova = sop->auth.digest.phys_addr; + nb_ops = 
rte_cryptodev_sym_hw_crypto_enqueue_chain(dev_id, + qp_id, sess, ofs, &vec, (void **)&op, flags); + break; + } + + if (nb_ops < vec.num) { + op->status = RTE_CRYPTO_OP_STATUS_ERROR; + return; + } + + nb_ops = 0; + flags = RTE_CRYPTO_HW_DP_FF_GET_OPAQUE_ARRAY | + RTE_CRYPTO_HW_DP_FF_DEQUEUE_EXHAUST; + while (count++ < 1024 && nb_ops < vec.num) { + nb_ops = rte_cryptodev_sym_hw_crypto_dequeue(dev_id, qp_id, + get_dequeue_count, write_status, (void **)&ret_op, + &n_success_ops, flags); + } + + if (count == 1024 || n_success_ops == 0 || nb_ops == 0 || + ret_op != op) { + op->status = RTE_CRYPTO_OP_STATUS_ERROR; + return; + } +} + static void process_cpu_aead_op(uint8_t dev_id, struct rte_crypto_op *op) { @@ -2456,7 +2620,11 @@ test_snow3g_authentication(const struct snow3g_hash_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 0, 1, 1); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); ut_params->obuf = ut_params->op->sym->m_src; TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -2535,7 +2703,11 @@ test_snow3g_authentication_verify(const struct snow3g_hash_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 0, 1, 1); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); ut_params->obuf = ut_params->op->sym->m_src; @@ -2605,6 +2777,9 @@ test_kasumi_authentication(const struct kasumi_hash_test_data *tdata) if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) process_cpu_crypt_auth_op(ts_params->valid_devs[0], ut_params->op); + else if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 0, 1, 1); else ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); @@ -2676,7 +2851,11 @@ test_kasumi_authentication_verify(const struct kasumi_hash_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 0, 1, 1); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); ut_params->obuf = ut_params->op->sym->m_src; @@ -2883,8 +3062,12 @@ test_kasumi_encryption(const struct kasumi_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], - ut_params->op); + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 0, 1); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], + ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); ut_params->obuf = ut_params->op->sym->m_dst; @@ -2969,7 +3152,11 @@ test_kasumi_encryption_sgl(const struct kasumi_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 0, 1); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -3292,7 +3479,11 
@@ test_kasumi_decryption(const struct kasumi_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 0, 1); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -3367,7 +3558,11 @@ test_snow3g_encryption(const struct snow3g_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 0, 1); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -3742,7 +3937,11 @@ static int test_snow3g_decryption(const struct snow3g_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 0, 1); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); ut_params->obuf = ut_params->op->sym->m_dst; @@ -3910,7 +4109,11 @@ test_zuc_cipher_auth(const struct wireless_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 1, 1); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); ut_params->obuf = ut_params->op->sym->m_src; @@ -4005,7 +4208,11 @@ test_snow3g_cipher_auth(const struct snow3g_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 1, 1); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); ut_params->obuf = ut_params->op->sym->m_src; @@ -4141,7 +4348,11 @@ test_snow3g_auth_cipher(const struct snow3g_test_data *tdata, if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 1, 1); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -4330,7 +4541,11 @@ test_snow3g_auth_cipher_sgl(const struct snow3g_test_data *tdata, if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 1, 1); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -4512,7 +4727,11 @@ test_kasumi_auth_cipher(const struct kasumi_test_data *tdata, if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 1, 1); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); 
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -4702,7 +4921,11 @@ test_kasumi_auth_cipher_sgl(const struct kasumi_test_data *tdata, if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 1, 1); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -4843,7 +5066,11 @@ test_kasumi_cipher_auth(const struct kasumi_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 1, 1); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -4930,7 +5157,11 @@ test_zuc_encryption(const struct wireless_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 0, 1); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -5017,7 +5248,11 @@ test_zuc_encryption_sgl(const struct wireless_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 0, 1); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -5105,7 +5340,11 @@ test_zuc_authentication(const struct wireless_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 0, 1, 1); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); ut_params->obuf = ut_params->op->sym->m_src; TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -5237,7 +5476,11 @@ test_zuc_auth_cipher(const struct wireless_test_data *tdata, if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 1, 1); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -5423,7 +5666,11 @@ test_zuc_auth_cipher_sgl(const struct wireless_test_data *tdata, if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 1, 1); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -7029,6 +7276,9 @@ test_authenticated_encryption(const struct aead_test_data *tdata) /* Process crypto operation */ if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) process_cpu_aead_op(ts_params->valid_devs[0], ut_params->op); + else if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 0, 0, 0); else TEST_ASSERT_NOT_NULL( 
process_crypto_request(ts_params->valid_devs[0], @@ -8521,6 +8771,9 @@ test_authenticated_decryption(const struct aead_test_data *tdata) /* Process crypto operation */ if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) process_cpu_aead_op(ts_params->valid_devs[0], ut_params->op); + else if (hw_dp_test == 1) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 0, 0, 0); else TEST_ASSERT_NOT_NULL( process_crypto_request(ts_params->valid_devs[0], @@ -11461,6 +11714,9 @@ test_authenticated_encryption_SGL(const struct aead_test_data *tdata, if (oop == IN_PLACE && gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) process_cpu_aead_op(ts_params->valid_devs[0], ut_params->op); + else if (oop == IN_PLACE && hw_dp_test == 1) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 0, 0, 0); else TEST_ASSERT_NOT_NULL( process_crypto_request(ts_params->valid_devs[0], @@ -13022,6 +13278,75 @@ test_cryptodev_nitrox(void) return unit_test_suite_runner(&cryptodev_nitrox_testsuite); } +static struct unit_test_suite cryptodev_sym_direct_api_testsuite = { + .suite_name = "Crypto Sym direct API Test Suite", + .setup = testsuite_setup, + .teardown = testsuite_teardown, + .unit_test_cases = { + TEST_CASE_ST(ut_setup, ut_teardown, + test_snow3g_encryption_test_case_1), + TEST_CASE_ST(ut_setup, ut_teardown, + test_snow3g_decryption_test_case_1), + TEST_CASE_ST(ut_setup, ut_teardown, + test_snow3g_auth_cipher_test_case_1), + TEST_CASE_ST(ut_setup, ut_teardown, + test_snow3g_auth_cipher_verify_test_case_1), + TEST_CASE_ST(ut_setup, ut_teardown, + test_kasumi_hash_generate_test_case_1), + TEST_CASE_ST(ut_setup, ut_teardown, + test_kasumi_hash_verify_test_case_1), + TEST_CASE_ST(ut_setup, ut_teardown, + test_kasumi_encryption_test_case_1), + TEST_CASE_ST(ut_setup, ut_teardown, + test_kasumi_decryption_test_case_1), + TEST_CASE_ST(ut_setup, ut_teardown, test_AES_cipheronly_all), + TEST_CASE_ST(ut_setup, ut_teardown, test_authonly_all), + TEST_CASE_ST(ut_setup, ut_teardown, test_AES_chain_all), + TEST_CASE_ST(ut_setup, ut_teardown, + test_AES_CCM_authenticated_encryption_test_case_128_1), + TEST_CASE_ST(ut_setup, ut_teardown, + test_AES_CCM_authenticated_decryption_test_case_128_1), + TEST_CASE_ST(ut_setup, ut_teardown, + test_AES_GCM_authenticated_encryption_test_case_1), + TEST_CASE_ST(ut_setup, ut_teardown, + test_AES_GCM_authenticated_decryption_test_case_1), + TEST_CASE_ST(ut_setup, ut_teardown, + test_AES_GCM_auth_encryption_test_case_192_1), + TEST_CASE_ST(ut_setup, ut_teardown, + test_AES_GCM_auth_decryption_test_case_192_1), + TEST_CASE_ST(ut_setup, ut_teardown, + test_AES_GCM_auth_encryption_test_case_256_1), + TEST_CASE_ST(ut_setup, ut_teardown, + test_AES_GCM_auth_decryption_test_case_256_1), + TEST_CASE_ST(ut_setup, ut_teardown, + test_AES_GCM_auth_encrypt_SGL_in_place_1500B), + TEST_CASES_END() /**< NULL terminate unit test array */ + } +}; + +static int +test_qat_sym_direct_api(void /*argv __rte_unused, int argc __rte_unused*/) +{ + int ret; + + gbl_driver_id = rte_cryptodev_driver_id_get( + RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD)); + + if (gbl_driver_id == -1) { + RTE_LOG(ERR, USER1, "QAT PMD must be loaded. 
Check that both " + "CONFIG_RTE_LIBRTE_PMD_QAT and CONFIG_RTE_LIBRTE_PMD_QAT_SYM " + "are enabled in config file to run this testsuite.\n"); + return TEST_SKIPPED; + } + + hw_dp_test = 1; + ret = unit_test_suite_runner(&cryptodev_sym_direct_api_testsuite); + hw_dp_test = 0; + + return ret; +} + +REGISTER_TEST_COMMAND(cryptodev_qat_sym_api_autotest, test_qat_sym_direct_api); REGISTER_TEST_COMMAND(cryptodev_qat_autotest, test_cryptodev_qat); REGISTER_TEST_COMMAND(cryptodev_aesni_mb_autotest, test_cryptodev_aesni_mb); REGISTER_TEST_COMMAND(cryptodev_cpu_aesni_mb_autotest, diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h index 41542e055..c382c12c4 100644 --- a/app/test/test_cryptodev.h +++ b/app/test/test_cryptodev.h @@ -71,6 +71,8 @@ #define CRYPTODEV_NAME_CAAM_JR_PMD crypto_caam_jr #define CRYPTODEV_NAME_NITROX_PMD crypto_nitrox_sym +extern int hw_dp_test; + /** * Write (spread) data from buffer to mbuf data * @@ -209,4 +211,8 @@ create_segmented_mbuf(struct rte_mempool *mbuf_pool, int pkt_len, return NULL; } +void +process_sym_hw_api_op(uint8_t dev_id, uint16_t qp_id, struct rte_crypto_op *op, + uint8_t is_cipher, uint8_t is_auth, uint8_t len_in_bits); + #endif /* TEST_CRYPTODEV_H_ */ diff --git a/app/test/test_cryptodev_blockcipher.c b/app/test/test_cryptodev_blockcipher.c index 642b54971..26f1c41c9 100644 --- a/app/test/test_cryptodev_blockcipher.c +++ b/app/test/test_cryptodev_blockcipher.c @@ -461,25 +461,43 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t, } /* Process crypto operation */ - if (rte_cryptodev_enqueue_burst(dev_id, 0, &op, 1) != 1) { - snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, - "line %u FAILED: %s", - __LINE__, "Error sending packet for encryption"); - status = TEST_FAILED; - goto error_exit; - } + if (hw_dp_test) { + uint8_t is_cipher = 0, is_auth = 0; + + if (t->feature_mask & BLOCKCIPHER_TEST_FEATURE_OOP) { + RTE_LOG(DEBUG, USER1, + "QAT direct API does not support OOP, Test Skipped.\n"); + snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, "SKIPPED"); + status = TEST_SUCCESS; + goto error_exit; + } + if (t->op_mask & BLOCKCIPHER_TEST_OP_CIPHER) + is_cipher = 1; + if (t->op_mask & BLOCKCIPHER_TEST_OP_AUTH) + is_auth = 1; + + process_sym_hw_api_op(dev_id, 0, op, is_cipher, is_auth, 0); + } else { + if (rte_cryptodev_enqueue_burst(dev_id, 0, &op, 1) != 1) { + snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, + "line %u FAILED: %s", + __LINE__, "Error sending packet for encryption"); + status = TEST_FAILED; + goto error_exit; + } - op = NULL; + op = NULL; - while (rte_cryptodev_dequeue_burst(dev_id, 0, &op, 1) == 0) - rte_pause(); + while (rte_cryptodev_dequeue_burst(dev_id, 0, &op, 1) == 0) + rte_pause(); - if (!op) { - snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, - "line %u FAILED: %s", - __LINE__, "Failed to process sym crypto op"); - status = TEST_FAILED; - goto error_exit; + if (!op) { + snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, + "line %u FAILED: %s", + __LINE__, "Failed to process sym crypto op"); + status = TEST_FAILED; + goto error_exit; + } } debug_hexdump(stdout, "m_src(after):", From patchwork Mon Jul 13 16:57:55 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Fan Zhang X-Patchwork-Id: 73973 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id F09B7A0540; Mon, 13 Jul 2020 18:58:41 +0200 
(CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id D22BD1D6FF; Mon, 13 Jul 2020 18:58:16 +0200 (CEST) Received: from mga06.intel.com (mga06.intel.com [134.134.136.31]) by dpdk.org (Postfix) with ESMTP id 0B1E81D6B7 for ; Mon, 13 Jul 2020 18:58:05 +0200 (CEST) IronPort-SDR: sT9Bi4SDQJdipjSGprFDhHuM3COjjImMY7cJFdMsaXYFA+7xMllU9myqJhAh4DLs4GE/hF6g0I w2G1zs0NHv0Q== X-IronPort-AV: E=McAfee;i="6000,8403,9681"; a="210203494" X-IronPort-AV: E=Sophos;i="5.75,348,1589266800"; d="scan'208";a="210203494" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga003.jf.intel.com ([10.7.209.27]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Jul 2020 09:58:05 -0700 IronPort-SDR: 6BZ+BB9mPCVkRqwwVj6goZf9mQJytGwkK3C+1gtHXYfmuU4ERE10u/IPKYIeI3ycFJpFHq/JR4 VN4YYWmKHZVg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.75,348,1589266800"; d="scan'208";a="281465685" Received: from silpixa00398673.ir.intel.com (HELO silpixa00398673.ger.corp.intel.com) ([10.237.223.136]) by orsmga003.jf.intel.com with ESMTP; 13 Jul 2020 09:58:04 -0700 From: Fan Zhang To: dev@dpdk.org Cc: fiona.trahe@intel.com, akhil.goyal@nxp.com, Fan Zhang Date: Mon, 13 Jul 2020 17:57:55 +0100 Message-Id: <20200713165755.61814-5-roy.fan.zhang@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200713165755.61814-1-roy.fan.zhang@intel.com> References: <20200703124942.29171-1-roy.fan.zhang@intel.com> <20200713165755.61814-1-roy.fan.zhang@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [dpdk-dev v5 4/4] doc: add cryptodev direct APIs guide X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch updates the programmer's guide to demonstrate the usage and limitations of the cryptodev symmetric crypto data-path APIs. Signed-off-by: Fan Zhang --- doc/guides/prog_guide/cryptodev_lib.rst | 53 +++++++++++++++++++++++++ doc/guides/rel_notes/release_20_08.rst | 8 ++++ 2 files changed, 61 insertions(+) diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst index c14f750fa..6316fd1a4 100644 --- a/doc/guides/prog_guide/cryptodev_lib.rst +++ b/doc/guides/prog_guide/cryptodev_lib.rst @@ -631,6 +631,59 @@ a call argument. Status different than zero must be treated as error. For more details, e.g. how to convert an mbuf to an SGL, please refer to an example usage in the IPsec library implementation. +Cryptodev Direct Symmetric Crypto Data-plane APIs +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The direct symmetric crypto data-path APIs are a set of APIs provided +specifically for symmetric HW crypto PMDs that support fast data-path +enqueue/dequeue operations. The direct data-path APIs take advantage of the +existing Cryptodev APIs for device, queue pair, and session management. In +addition, the user is required to obtain the queue pair pointer data and function +pointers. The APIs are provided as an advanced feature and as an alternative +to ``rte_cryptodev_enqueue_burst`` and ``rte_cryptodev_dequeue_burst``. The +APIs are designed to let the user develop close-to-native-performance symmetric +crypto data-path implementations in applications that do not necessarily +depend on cryptodev operations, cryptodev operation mempools, or mbufs.
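As an illustration of the enqueue path described above, the following is a minimal sketch modelled on the ``process_sym_hw_api_op()`` helper added by the unit-test patch (3/4) in this series. It assumes a single in-place AEAD job described by an ``rte_crypto_op`` with a valid symmetric session; the wrapper function name and the ``IV_OFFSET`` layout are illustrative assumptions borrowed from the unit tests rather than part of the API. A matching dequeue-side sketch appears after the last patch of the series.

.. code-block:: c

   #include <stdint.h>
   #include <string.h>
   #include <rte_crypto.h>
   #include <rte_cryptodev.h>

   /* IV stored in the op private area, as done by the cryptodev unit tests. */
   #define IV_OFFSET (sizeof(struct rte_crypto_op) + sizeof(struct rte_crypto_sym_op))

   static int
   enqueue_one_aead_job(uint8_t dev_id, uint16_t qp_id,
           struct rte_cryptodev_sym_session *session, struct rte_crypto_op *op)
   {
           struct rte_crypto_sym_op *sop = op->sym;
           struct rte_crypto_vec data_vec[UINT8_MAX];
           struct rte_crypto_vec iv_vec, aad_vec, digest_vec;
           struct rte_crypto_sgl sgl;
           struct rte_crypto_sym_vec vec;
           union rte_crypto_sym_ofs ofs;
           union rte_cryptodev_hw_session_ctx sess;
           int32_t status;
           int32_t n;
           uint32_t flags = RTE_CRYPTO_HW_DP_FF_CRYPTO_SESSION |
                   RTE_CRYPTO_HW_DP_FF_SET_OPAQUE_ARRAY |
                   RTE_CRYPTO_HW_DP_FF_KICK_QUEUE;

           /* Gather the mbuf segments that cover the AEAD data region. */
           n = rte_crypto_mbuf_to_vec(sop->m_src, 0,
                   sop->aead.data.offset + sop->aead.data.length,
                   data_vec, RTE_DIM(data_vec));
           if (n < 0)
                   return -1;

           sgl.vec = data_vec;
           sgl.num = n;

           /* IV, AAD and digest are passed as rte_crypto_vec, not raw pointers. */
           iv_vec.base = rte_crypto_op_ctod_offset(op, void *, IV_OFFSET);
           iv_vec.iova = rte_crypto_op_ctophys_offset(op, IV_OFFSET);
           aad_vec.base = (void *)sop->aead.aad.data;
           aad_vec.iova = sop->aead.aad.phys_addr;
           digest_vec.base = (void *)sop->aead.digest.data;
           digest_vec.iova = sop->aead.digest.phys_addr;

           memset(&vec, 0, sizeof(vec));
           vec.sgl = &sgl;
           vec.iv_vec = &iv_vec;
           vec.aad_vec = &aad_vec;
           vec.digest_vec = &digest_vec;
           vec.status = &status;
           vec.num = 1;

           sess.crypto_sess = session;
           ofs.raw = 0;
           ofs.ofs.cipher.head = sop->aead.data.offset;

           /* The op pointer is stored as the job's opaque data and is handed
            * back by the dequeue call once the hardware has processed the job.
            */
           if (rte_cryptodev_sym_hw_crypto_enqueue_aead(dev_id, qp_id, sess,
                           ofs, &vec, (void **)&op, flags) != vec.num)
                   return -1;

           return 0;
   }
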
+ +Cryptodev PMDs that support this feature will have the +``RTE_CRYPTODEV_FF_SYM_HW_DIRECT_API`` feature flag set. The user calls the +``rte_cryptodev_sym_get_hw_ops`` function to get all the function pointers +for the different enqueue and dequeue operations, plus the device-specific +queue pair data. After the ``rte_crypto_hw_ops`` structure is properly set by +the driver, the user can use the function pointers and the queue data pointers +in the structure to enqueue and dequeue crypto jobs. + +The direct data-plane APIs share the same ``struct rte_crypto_sym_vec`` structure +as the synchronous mode. However, to pass IOVA addresses the user is required to +pass ``struct rte_crypto_vec`` arrays for the IV, AAD, and digests, instead +of the void pointers used in synchronous mode. + +Unlike the cryptodev operation, the ``rte_crypto_sym_vec`` structure +focuses only on the data fields required for the crypto PMD to execute a single job, +and is not supposed to be stored as opaque data. The user can freely allocate the +structure on the stack and reuse it to fill all jobs. + +In addition, to maximize the flexibility of the enqueue/dequeue operations, the +data-plane APIs support special behaviors selected through the flag +parameters of both the enqueue and dequeue functions. For example, setting or +unsetting the ``RTE_CRYPTO_HW_DP_FF_ENQUEUE_EXHAUST`` flag makes the +PMD behave differently: when the flag is set the PMD attempts to enqueue +as many jobs from the ``struct rte_crypto_sym_vec`` as possible, whereas when it is +unset the PMD enqueues either ``num`` operations or none, depending on the +queue status. + +To use the direct symmetric crypto APIs safely, the user has to carefully +set the correct fields in the ``rte_crypto_sym_vec`` structure, otherwise the +application or the system may crash. There are also a few limitations to the +direct symmetric crypto APIs: + +* Only in-place operations are supported. +* The APIs are NOT thread-safe. +* The direct API's enqueue CANNOT be mixed with rte_cryptodev_enqueue_burst, or + vice versa. + +See the *DPDK API Reference* for details on each API definition. + Sample code ----------- diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst index f19b74872..a24797529 100644 --- a/doc/guides/rel_notes/release_20_08.rst +++ b/doc/guides/rel_notes/release_20_08.rst @@ -225,6 +225,14 @@ New Features See the :doc:`../sample_app_ug/l2_forward_real_virtual` for more details of this parameter usage. +* **Added Cryptodev data-path APIs for non mbuf-centric data-paths.** + + A set of data-path APIs that are not based on cryptodev operations has been + added to cryptodev. The APIs are designed for external applications + or libraries that want to use cryptodev but whose data-path + implementations are not mbuf-centric. The QAT symmetric PMD is also updated + to support these APIs. + Removed Items -------------
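To complement the patch series above, the following is a minimal dequeue-side sketch, modelled on the ``get_dequeue_count`` and ``write_status`` callbacks used by the unit tests in patch 3/4. The callback names, the retry bound, and the assumption that exactly one job is outstanding on the queue pair are illustrative, not part of the API.

.. code-block:: c

   #include <stdint.h>
   #include <rte_crypto.h>
   #include <rte_cryptodev.h>

   /* Tells the PMD how many jobs to dequeue; here exactly one is expected. */
   static uint32_t
   dequeue_count_cb(void *user_data __rte_unused)
   {
           return 1;
   }

   /* Called by the PMD for each dequeued job with its completion status. */
   static void
   post_dequeue_cb(void *user_data, uint32_t index __rte_unused,
           uint8_t is_op_success)
   {
           struct rte_crypto_op *op = user_data;

           op->status = is_op_success ? RTE_CRYPTO_OP_STATUS_SUCCESS :
                   RTE_CRYPTO_OP_STATUS_ERROR;
   }

   static int
   dequeue_one_job(uint8_t dev_id, uint16_t qp_id, struct rte_crypto_op *op)
   {
           struct rte_crypto_op *ret_op = NULL;
           uint32_t n_success = 0, nb_deq = 0, retries = 0;
           uint32_t flags = RTE_CRYPTO_HW_DP_FF_GET_OPAQUE_ARRAY |
                   RTE_CRYPTO_HW_DP_FF_DEQUEUE_EXHAUST;

           /* The device completes jobs asynchronously, so poll with a bound. */
           while (retries++ < 1024 && nb_deq == 0)
                   nb_deq = rte_cryptodev_sym_hw_crypto_dequeue(dev_id, qp_id,
                           dequeue_count_cb, post_dequeue_cb,
                           (void **)&ret_op, &n_success, flags);

           /* The opaque pointer set at enqueue time is returned here. */
           if (nb_deq == 0 || n_success == 0 || ret_op != op)
                   return -1;

           return 0;
   }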