From patchwork Wed Jan 15 18:28:27 2020
From: Marcin Smoczynski
To: akhil.goyal@nxp.com, konstantin.ananyev@intel.com, roy.fan.zhang@intel.com, declan.doherty@intel.com, radu.nicolau@intel.com
Cc: dev@dpdk.org, Marcin Smoczynski
Date: Wed, 15 Jan 2020 19:28:27 +0100
Message-Id: <20200115182832.17012-2-marcinx.smoczynski@intel.com>
In-Reply-To: <20200115182832.17012-1-marcinx.smoczynski@intel.com>
Subject: [dpdk-dev] [PATCH v3 1/6] cryptodev: introduce cpu crypto support API

Add a new API that allows processing crypto operations in a synchronous manner. Operations are performed on a set of SG arrays. Sync mode is selected by setting an appropriate flag in the xform type field. Cryptodevs that allow the CPU crypto operation mode have to advertise the RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO capability.

Signed-off-by: Konstantin Ananyev
Signed-off-by: Marcin Smoczynski
---
lib/librte_cryptodev/rte_crypto_sym.h | 62 ++++++++++++++++++- lib/librte_cryptodev/rte_cryptodev.c | 30 +++++++++ lib/librte_cryptodev/rte_cryptodev.h | 20 ++++++ lib/librte_cryptodev/rte_cryptodev_pmd.h | 19 ++++++ .../rte_cryptodev_version.map | 1 + 5 files changed, 131 insertions(+), 1 deletion(-)

diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h index ffa038dc4..f5dd05ab0 100644 --- a/lib/librte_cryptodev/rte_crypto_sym.h +++ b/lib/librte_cryptodev/rte_crypto_sym.h @@ -25,6 +25,59 @@ extern "C" { #include #include +/** + * Crypto IO Vector (in analogy with struct iovec) + * Supposed to be used to pass input/output data buffers for crypto data-path + * functions.
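+ * For a multi-segment buffer, one such vector is expected per contiguous
+ * segment (cf. mbuf_to_cryptovec() in the ipsec patch of this series).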
+ */ +struct rte_crypto_vec { + /** virtual address of the data buffer */ + void *base; + /** IOVA of the data buffer */ + rte_iova_t iova; + /** length of the data buffer */ + uint32_t len; +}; + +struct rte_crypto_sgl { + /** start of an array of vectors */ + struct rte_crypto_vec *vec; + /** size of an array of vectors */ + uint32_t num; +}; + +struct rte_crypto_sym_vec { + /** array of SGL vectors */ + struct rte_crypto_sgl *sgl; + /** array of pointers to IV */ + void **iv; + /** array of pointers to AAD */ + void **aad; + /** array of pointers to digest */ + void **digest; + /** + * array of statuses for each operation: + * - 0 on success + * - errno on error + */ + int32_t *status; + /** number of operations to perform */ + uint32_t num; +}; + +/** + * used for cpu_crypto_process_bulk() to specify head/tail offsets + * for auth/cipher processing. + */ +union rte_crypto_sym_ofs { + uint64_t raw; + struct { + struct { + uint16_t head; + uint16_t tail; + } auth, cipher; + } ofs; +}; /** Symmetric Cipher Algorithms */ enum rte_crypto_cipher_algorithm { @@ -425,7 +478,14 @@ enum rte_crypto_sym_xform_type { RTE_CRYPTO_SYM_XFORM_NOT_SPECIFIED = 0, /**< No xform specified */ RTE_CRYPTO_SYM_XFORM_AUTH, /**< Authentication xform */ RTE_CRYPTO_SYM_XFORM_CIPHER, /**< Cipher xform */ - RTE_CRYPTO_SYM_XFORM_AEAD /**< AEAD xform */ + RTE_CRYPTO_SYM_XFORM_AEAD, /**< AEAD xform */ + + RTE_CRYPTO_SYM_XFORM_TYPE_MASK = 0xFFFF, + /**< xform type mask value */ + RTE_CRYPTO_SYM_XFORM_FLAG_MASK = 0xFFFF0000, + /**< xform flag mask value */ + RTE_CRYPTO_SYM_CPU_CRYPTO = 0x80000000, + /**< xform flag for cpu-crypto */ }; /** diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c index 89aa2ed3e..157fda890 100644 --- a/lib/librte_cryptodev/rte_cryptodev.c +++ b/lib/librte_cryptodev/rte_cryptodev.c @@ -1616,6 +1616,36 @@ rte_cryptodev_sym_session_get_user_data( return (void *)(sess->sess_data + sess->nb_drivers); } +static inline void +sym_crypto_fill_status(struct rte_crypto_sym_vec *vec, int32_t errnum) +{ + uint32_t i; + for (i = 0; i < vec->num; i++) + vec->status[i] = errnum; +} + +uint32_t +rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id, + struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs ofs, + struct rte_crypto_sym_vec *vec) +{ + struct rte_cryptodev *dev; + + if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) { + sym_crypto_fill_status(vec, EINVAL); + return 0; + } + + dev = rte_cryptodev_pmd_get_dev(dev_id); + + if (*dev->dev_ops->sym_cpu_process == NULL) { + sym_crypto_fill_status(vec, ENOTSUP); + return 0; + } + + return dev->dev_ops->sym_cpu_process(dev, sess, ofs, vec); +} + /** Initialise rte_crypto_op mempool element */ static void rte_crypto_op_init(struct rte_mempool *mempool, diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h index c6ffa3b35..8786dfb90 100644 --- a/lib/librte_cryptodev/rte_cryptodev.h +++ b/lib/librte_cryptodev/rte_cryptodev.h @@ -450,6 +450,8 @@ rte_cryptodev_asym_get_xform_enum(enum rte_crypto_asym_xform_type *xform_enum, /**< Support encrypted-digest operations where digest is appended to data */ #define RTE_CRYPTODEV_FF_ASYM_SESSIONLESS (1ULL << 20) /**< Support asymmetric session-less operations */ +#define RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO (1ULL << 21) +/**< Support symmetric cpu-crypto processing */ /** @@ -1274,6 +1276,24 @@ void * rte_cryptodev_sym_session_get_user_data( struct rte_cryptodev_sym_session *sess); +/** + * Perform actual crypto processing (encrypt/digest or
auth/decrypt) + * on user provided data. + * + * @param dev_id The device identifier. + * @param sess Cryptodev session structure + * @param ofs Start and stop offsets for auth and cipher operations + * @param vec Vectorized operation descriptor + * + * @return + * - Returns number of successfully processed packets. + */ +__rte_experimental +uint32_t +rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id, + struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs ofs, + struct rte_crypto_sym_vec *vec); + #ifdef __cplusplus } #endif diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h b/lib/librte_cryptodev/rte_cryptodev_pmd.h index fba14f2fa..5d9ee5fef 100644 --- a/lib/librte_cryptodev/rte_cryptodev_pmd.h +++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h @@ -308,6 +308,23 @@ typedef void (*cryptodev_sym_free_session_t)(struct rte_cryptodev *dev, */ typedef void (*cryptodev_asym_free_session_t)(struct rte_cryptodev *dev, struct rte_cryptodev_asym_session *sess); +/** + * Perform actual crypto processing (encrypt/digest or auth/decrypt) + * on user provided data. + * + * @param dev Crypto device pointer + * @param sess Cryptodev session structure + * @param ofs Start and stop offsets for auth and cipher operations + * @param vec Vectorized operation descriptor + * + * @return + * - Returns number of successfully processed packets. + * + */ +typedef uint32_t (*cryptodev_sym_cpu_crypto_process_t) + (struct rte_cryptodev *dev, struct rte_cryptodev_sym_session *sess, + union rte_crypto_sym_ofs ofs, struct rte_crypto_sym_vec *vec); + /** Crypto device operations function pointer table */ struct rte_cryptodev_ops { @@ -342,6 +359,8 @@ struct rte_cryptodev_ops { /**< Clear a Crypto sessions private data. */ cryptodev_asym_free_session_t asym_session_clear; /**< Clear a Crypto sessions private data. */ + cryptodev_sym_cpu_crypto_process_t sym_cpu_process; + /**< process input data synchronously (cpu-crypto). 
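+ * Expected to be implemented by PMDs that advertise the
+ * RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO feature flag.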
*/ }; diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map index 1dd1e259a..6e41b4be5 100644 --- a/lib/librte_cryptodev/rte_cryptodev_version.map +++ b/lib/librte_cryptodev/rte_cryptodev_version.map @@ -71,6 +71,7 @@ EXPERIMENTAL { rte_cryptodev_asym_session_init; rte_cryptodev_asym_xform_capability_check_modlen; rte_cryptodev_asym_xform_capability_check_optype; + rte_cryptodev_sym_cpu_crypto_process; rte_cryptodev_sym_get_existing_header_session_size; rte_cryptodev_sym_session_get_user_data; rte_cryptodev_sym_session_pool_create;

From patchwork Wed Jan 15 18:28:28 2020
From: Marcin Smoczynski
To: akhil.goyal@nxp.com, konstantin.ananyev@intel.com, roy.fan.zhang@intel.com, declan.doherty@intel.com, radu.nicolau@intel.com
Cc: dev@dpdk.org, Marcin Smoczynski
Date: Wed, 15 Jan 2020 19:28:28 +0100
Message-Id: <20200115182832.17012-3-marcinx.smoczynski@intel.com>
In-Reply-To: <20200115182832.17012-1-marcinx.smoczynski@intel.com>
Subject: [dpdk-dev] [PATCH v3 2/6] crypto/aesni_gcm: cpu crypto support

Add support for CPU crypto mode by introducing the required handler. The crypto mode (sync/async) is chosen during sym session create, when the appropriate flag is set in the xform type field. Authenticated encryption and decryption are supported with tag generation/verification.
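A minimal usage sketch (illustrative only, not part of the patch; everything except the RTE_CRYPTO_SYM_CPU_CRYPTO flag is the usual xform setup) of how an application could request the synchronous path when building an AES-GCM session:

    struct rte_crypto_sym_xform xform = {
        /* keep the real xform type in the low bits, or-in the sync flag */
        .type = RTE_CRYPTO_SYM_XFORM_AEAD | RTE_CRYPTO_SYM_CPU_CRYPTO,
        .next = NULL,
    };
    xform.aead.op = RTE_CRYPTO_AEAD_OP_ENCRYPT;
    xform.aead.algo = RTE_CRYPTO_AEAD_AES_GCM;
    /* key, iv, aad and digest lengths are filled in as usual */

A session created from such an xform is put into AESNI_GCM_MODE_SYNC by the aesni_gcm_set_session_parameters() change below.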
Signed-off-by: Marcin Smoczynski
Acked-by: Konstantin Ananyev
Acked-by: Fan Zhang
---
drivers/crypto/aesni_gcm/aesni_gcm_ops.h | 9 ++ drivers/crypto/aesni_gcm/aesni_gcm_pmd.c | 149 +++++++++++++++++- drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c | 3 + .../crypto/aesni_gcm/aesni_gcm_pmd_private.h | 18 ++- 4 files changed, 169 insertions(+), 10 deletions(-)

diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_ops.h b/drivers/crypto/aesni_gcm/aesni_gcm_ops.h index e272f1067..404c0adff 100644 --- a/drivers/crypto/aesni_gcm/aesni_gcm_ops.h +++ b/drivers/crypto/aesni_gcm/aesni_gcm_ops.h @@ -65,4 +65,13 @@ struct aesni_gcm_ops { aesni_gcm_finalize_t finalize_dec; }; +/** GCM per-session operation handlers */ +struct aesni_gcm_session_ops { + aesni_gcm_t cipher; + aesni_gcm_pre_t pre; + aesni_gcm_init_t init; + aesni_gcm_update_t update; + aesni_gcm_finalize_t finalize; +}; + #endif /* _AESNI_GCM_OPS_H_ */ diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c index 1a03be31d..860e9b369 100644 --- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c +++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c @@ -25,9 +25,16 @@ aesni_gcm_set_session_parameters(const struct aesni_gcm_ops *gcm_ops, const struct rte_crypto_sym_xform *aead_xform; uint8_t key_length; const uint8_t *key; + uint32_t xform_type; + + /* check for CPU-crypto mode */ + xform_type = xform->type; + sess->mode = (xform_type & RTE_CRYPTO_SYM_CPU_CRYPTO) ? + AESNI_GCM_MODE_SYNC : AESNI_GCM_MODE_ASYNC; + xform_type &= RTE_CRYPTO_SYM_XFORM_TYPE_MASK; /* AES-GMAC */ - if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) { + if (xform_type == RTE_CRYPTO_SYM_XFORM_AUTH) { auth_xform = xform; if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_AES_GMAC) { AESNI_GCM_LOG(ERR, "Only AES GMAC is supported as an " @@ -49,7 +56,7 @@ aesni_gcm_set_session_parameters(const struct aesni_gcm_ops *gcm_ops, sess->req_digest_length = auth_xform->auth.digest_length; /* AES-GCM */ - } else if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) { + } else if (xform_type == RTE_CRYPTO_SYM_XFORM_AEAD) { aead_xform = xform; if (aead_xform->aead.algo != RTE_CRYPTO_AEAD_AES_GCM) { @@ -62,11 +69,24 @@ aesni_gcm_set_session_parameters(const struct aesni_gcm_ops *gcm_ops, sess->iv.offset = aead_xform->aead.iv.offset; sess->iv.length = aead_xform->aead.iv.length; + /* setup session handlers */ + sess->ops.pre = gcm_ops->pre; + sess->ops.init = gcm_ops->init; + /* Select Crypto operation */ - if (aead_xform->aead.op == RTE_CRYPTO_AEAD_OP_ENCRYPT) + if (aead_xform->aead.op == RTE_CRYPTO_AEAD_OP_ENCRYPT) { sess->op = AESNI_GCM_OP_AUTHENTICATED_ENCRYPTION; - else + sess->ops.cipher = gcm_ops->enc; + sess->ops.update = gcm_ops->update_enc; + sess->ops.finalize = gcm_ops->finalize_enc; + } + /* op == RTE_CRYPTO_AEAD_OP_DECRYPT */ + else { sess->op = AESNI_GCM_OP_AUTHENTICATED_DECRYPTION; + sess->ops.cipher = gcm_ops->dec; + sess->ops.update = gcm_ops->update_dec; + sess->ops.finalize = gcm_ops->finalize_dec; + } key_length = aead_xform->aead.key.length; key = aead_xform->aead.key.data; @@ -78,7 +98,6 @@ aesni_gcm_set_session_parameters(const struct aesni_gcm_ops *gcm_ops, return -ENOTSUP; } - /* IV check */ if (sess->iv.length != 16 && sess->iv.length != 12 && sess->iv.length != 0) { @@ -356,6 +375,122 @@ process_gcm_crypto_op(struct aesni_gcm_qp *qp, struct rte_crypto_op *op, return 0; } +static inline void +aesni_gcm_fill_error_code(struct rte_crypto_sym_vec *vec, int32_t errnum) +{ + uint32_t i; + + for (i = 0; i < vec->num; i++) + vec->status[i] = errnum; +} + + +static
inline int32_t +aesni_gcm_sgl_op_finalize_encryption(struct aesni_gcm_session *s, + struct gcm_context_data *gdata_ctx, uint8_t *digest) +{ + if (s->req_digest_length != s->gen_digest_length) { + uint8_t tmpdigest[s->gen_digest_length]; + + s->ops.finalize(&s->gdata_key, gdata_ctx, tmpdigest, + s->gen_digest_length); + memcpy(digest, tmpdigest, s->req_digest_length); + } else { + s->ops.finalize(&s->gdata_key, gdata_ctx, digest, + s->gen_digest_length); + } + + return 0; +} + +static inline int32_t +aesni_gcm_sgl_op_finalize_decryption(struct aesni_gcm_session *s, + struct gcm_context_data *gdata_ctx, uint8_t *digest) +{ + uint8_t tmpdigest[s->gen_digest_length]; + + s->ops.finalize(&s->gdata_key, gdata_ctx, tmpdigest, + s->gen_digest_length); + + return memcmp(digest, tmpdigest, s->req_digest_length) == 0 ? 0 : + EBADMSG; +} + +static inline void +aesni_gcm_process_gcm_sgl_op(struct aesni_gcm_session *s, + struct gcm_context_data *gdata_ctx, struct rte_crypto_sgl *sgl, + void *iv, void *aad) +{ + uint32_t i; + + /* init crypto operation */ + s->ops.init(&s->gdata_key, gdata_ctx, iv, aad, + (uint64_t)s->aad_length); + + /* update with sgl data */ + for (i = 0; i < sgl->num; i++) { + struct rte_crypto_vec *vec = &sgl->vec[i]; + + s->ops.update(&s->gdata_key, gdata_ctx, vec->base, vec->base, + vec->len); + } +} + +/** Process CPU crypto bulk operations */ +uint32_t +aesni_gcm_pmd_cpu_crypto_process(struct rte_cryptodev *dev, + struct rte_cryptodev_sym_session *sess, + __rte_unused union rte_crypto_sym_ofs ofs, + struct rte_crypto_sym_vec *vec) +{ + void *sess_priv; + struct aesni_gcm_session *s; + uint32_t processed; + uint32_t i; + + sess_priv = get_sym_session_private_data(sess, dev->driver_id); + if (unlikely(sess_priv == NULL)) { + aesni_gcm_fill_error_code(vec, EINVAL); + return 0; + } + + s = sess_priv; + if (unlikely(s->mode != AESNI_GCM_MODE_SYNC)) { + aesni_gcm_fill_error_code(vec, EINVAL); + return 0; + } + + processed = 0; + for (i = 0; i < vec->num; ++i) { + struct gcm_context_data gdata_ctx; + int32_t status; + + aesni_gcm_process_gcm_sgl_op(s, &gdata_ctx, &vec->sgl[i], + vec->iv[i], vec->aad[i]); + + switch (s->op) { + case AESNI_GCM_OP_AUTHENTICATED_ENCRYPTION: + status = aesni_gcm_sgl_op_finalize_encryption(s, + &gdata_ctx, vec->digest[i]); + break; + + case AESNI_GCM_OP_AUTHENTICATED_DECRYPTION: + status = aesni_gcm_sgl_op_finalize_decryption(s, + &gdata_ctx, vec->digest[i]); + break; + + default: + status = EINVAL; + } + + vec->status[i] = status; + if (status == 0) + processed++; + } + + return processed; +} + /** * Process a completed job and return rte_mbuf which job processed * @@ -527,7 +662,8 @@ aesni_gcm_create(const char *name, RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING | RTE_CRYPTODEV_FF_IN_PLACE_SGL | RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT | - RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT; + RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT | + RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO; /* Check CPU for support for AES instruction set */ if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AES)) @@ -672,7 +808,6 @@ RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_AESNI_GCM_PMD, RTE_PMD_REGISTER_CRYPTO_DRIVER(aesni_gcm_crypto_drv, aesni_gcm_pmd_drv.driver, cryptodev_driver_id); - RTE_INIT(aesni_gcm_init_log) { aesni_gcm_logtype_driver = rte_log_register("pmd.crypto.aesni_gcm"); diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c index 2f66c7c58..5228d98b1 100644 --- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c +++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c @@ -331,9 
+331,12 @@ struct rte_cryptodev_ops aesni_gcm_pmd_ops = { .queue_pair_release = aesni_gcm_pmd_qp_release, .queue_pair_count = aesni_gcm_pmd_qp_count, + .sym_cpu_process = aesni_gcm_pmd_cpu_crypto_process, + .sym_session_get_size = aesni_gcm_pmd_sym_session_get_size, .sym_session_configure = aesni_gcm_pmd_sym_session_configure, .sym_session_clear = aesni_gcm_pmd_sym_session_clear }; struct rte_cryptodev_ops *rte_aesni_gcm_pmd_ops = &aesni_gcm_pmd_ops; + diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h index 2039adb53..dc8d3c653 100644 --- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h +++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h @@ -73,6 +73,11 @@ enum aesni_gcm_operation { AESNI_GMAC_OP_VERIFY }; +enum aesni_gcm_mode { + AESNI_GCM_MODE_ASYNC, + AESNI_GCM_MODE_SYNC +}; + /** AESNI GCM private session structure */ struct aesni_gcm_session { struct { @@ -90,8 +95,12 @@ struct aesni_gcm_session { /**< GCM operation type */ enum aesni_gcm_key key; /**< GCM key type */ + enum aesni_gcm_mode mode; + /**< Sync/async mode */ struct gcm_key_data gdata_key; /**< GCM parameters */ + struct aesni_gcm_session_ops ops; + /**< Session handlers */ }; @@ -109,10 +118,13 @@ aesni_gcm_set_session_parameters(const struct aesni_gcm_ops *ops, struct aesni_gcm_session *sess, const struct rte_crypto_sym_xform *xform); - -/** - * Device specific operations function pointer structure */ +/* Device specific operations function pointer structure */ extern struct rte_cryptodev_ops *rte_aesni_gcm_pmd_ops; +/** CPU crypto bulk process handler */ +uint32_t +aesni_gcm_pmd_cpu_crypto_process(struct rte_cryptodev *dev, + struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs ofs, + struct rte_crypto_sym_vec *vec); #endif /* _AESNI_GCM_PMD_PRIVATE_H_ */

From patchwork Wed Jan 15 18:28:29 2020
From: Marcin Smoczynski
To: akhil.goyal@nxp.com, konstantin.ananyev@intel.com, roy.fan.zhang@intel.com, declan.doherty@intel.com, radu.nicolau@intel.com
Cc: dev@dpdk.org, Marcin Smoczynski
Date: Wed, 15 Jan 2020 19:28:29 +0100
Message-Id: <20200115182832.17012-4-marcinx.smoczynski@intel.com>
In-Reply-To: <20200115182832.17012-1-marcinx.smoczynski@intel.com>
Subject: [dpdk-dev]
[PATCH v3 3/6] security: add cpu crypto action type

Introduce a CPU crypto action type, allowing to differentiate between regular async 'none security' sessions and synchronous, CPU crypto accelerated sessions.

Signed-off-by: Marcin Smoczynski
Acked-by: Konstantin Ananyev
Acked-by: Fan Zhang
---
lib/librte_security/rte_security.h | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h index 546779df2..309f7311c 100644 --- a/lib/librte_security/rte_security.h +++ b/lib/librte_security/rte_security.h @@ -307,10 +307,14 @@ enum rte_security_session_action_type { /**< All security protocol processing is performed inline during * transmission */ - RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL + RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL, /**< All security protocol processing including crypto is performed * on a lookaside accelerator */ + RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO + /**< Crypto processing for security protocol is processed by CPU + * synchronously + */ }; /** Security session protocol definition */

From patchwork Wed Jan 15 18:28:30 2020
From: Marcin Smoczynski
To: akhil.goyal@nxp.com, konstantin.ananyev@intel.com, roy.fan.zhang@intel.com, declan.doherty@intel.com, radu.nicolau@intel.com
Cc: dev@dpdk.org, Marcin Smoczynski
Date: Wed, 15 Jan 2020 19:28:30 +0100
Message-Id: <20200115182832.17012-5-marcinx.smoczynski@intel.com>
In-Reply-To: <20200115182832.17012-1-marcinx.smoczynski@intel.com>
Subject: [dpdk-dev] [PATCH v3 4/6] ipsec: introduce support for cpu crypto mode

Update the library to handle the CPU crypto security mode, which utilizes cryptodev's synchronous, CPU accelerated crypto operations.
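A rough usage sketch (illustrative only; the identifiers follow the definitions added below, while sa, crypto_ses, dev_id, mb and n are application-provided placeholders):

    struct rte_ipsec_session ss = {
        .sa = sa,                               /* prepared rte_ipsec_sa */
        .type = RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO,
        .crypto = { .ses = crypto_ses, .dev_id = dev_id },
    };
    rte_ipsec_session_prepare(&ss);
    /* synchronous path: prepare runs crypto/auth on the CPU right away,
     * then the usual process step finalizes the packets */
    uint16_t k = rte_ipsec_pkt_cpu_prepare(&ss, mb, n);
    k = rte_ipsec_pkt_process(&ss, mb, k);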
Signed-off-by: Konstantin Ananyev
Signed-off-by: Marcin Smoczynski
Acked-by: Fan Zhang
---
lib/librte_ipsec/esp_inb.c | 154 ++++++++++++++++++++++++++++++----- lib/librte_ipsec/esp_outb.c | 134 +++++++++++++++++++++++++++--- lib/librte_ipsec/misc.h | 118 +++++++++++++++++++++++++++ lib/librte_ipsec/rte_ipsec.h | 18 +++- lib/librte_ipsec/sa.c | 126 ++++++++++++++++++++++------ lib/librte_ipsec/sa.h | 17 ++++ lib/librte_ipsec/ses.c | 3 +- 7 files changed, 515 insertions(+), 55 deletions(-)

diff --git a/lib/librte_ipsec/esp_inb.c b/lib/librte_ipsec/esp_inb.c index 5c653dd39..58b3dec1b 100644 --- a/lib/librte_ipsec/esp_inb.c +++ b/lib/librte_ipsec/esp_inb.c @@ -105,6 +105,39 @@ inb_cop_prepare(struct rte_crypto_op *cop, } } +static inline uint32_t +inb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb, + uint32_t *pofs, uint32_t plen, void *iv) +{ + struct aead_gcm_iv *gcm; + struct aesctr_cnt_blk *ctr; + uint64_t *ivp; + uint32_t clen; + + ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *, + *pofs + sizeof(struct rte_esp_hdr)); + clen = 0; + + switch (sa->algo_type) { + case ALGO_TYPE_AES_GCM: + gcm = (struct aead_gcm_iv *)iv; + aead_gcm_iv_fill(gcm, ivp[0], sa->salt); + break; + case ALGO_TYPE_AES_CBC: + case ALGO_TYPE_3DES_CBC: + copy_iv(iv, ivp, sa->iv_len); + break; + case ALGO_TYPE_AES_CTR: + ctr = (struct aesctr_cnt_blk *)iv; + aes_ctr_cnt_blk_fill(ctr, ivp[0], sa->salt); + break; + } + + *pofs += sa->ctp.auth.offset; + clen = plen - sa->ctp.auth.length; + return clen; +} + /* * Helper function for prepare() to deal with situation when * ICV is spread by two segments. Tries to move ICV completely into the @@ -157,17 +190,12 @@ inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc, } } -/* - * setup/update packet data and metadata for ESP inbound tunnel case. - */ -static inline int32_t -inb_pkt_prepare(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn, - struct rte_mbuf *mb, uint32_t hlen, union sym_op_data *icv) +static inline int +inb_get_sqn(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn, + struct rte_mbuf *mb, uint32_t hlen, rte_be64_t *sqc) { int32_t rc; uint64_t sqn; - uint32_t clen, icv_len, icv_ofs, plen; - struct rte_mbuf *ml; struct rte_esp_hdr *esph; esph = rte_pktmbuf_mtod_offset(mb, struct rte_esp_hdr *, hlen); @@ -179,12 +207,21 @@ inb_pkt_prepare(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn, sqn = rte_be_to_cpu_32(esph->seq); if (IS_ESN(sa)) sqn = reconstruct_esn(rsn->sqn, sqn, sa->replay.win_sz); + *sqc = rte_cpu_to_be_64(sqn); + /* check IPsec window */ rc = esn_inb_check_sqn(rsn, sa, sqn); - if (rc != 0) - return rc; - sqn = rte_cpu_to_be_64(sqn); + return rc; +} + +/* prepare packet for upcoming processing */ +static inline int32_t +inb_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb, + uint32_t hlen, union sym_op_data *icv) +{ + uint32_t clen, icv_len, icv_ofs, plen; + struct rte_mbuf *ml; /* start packet manipulation */ plen = mb->pkt_len; @@ -217,7 +254,8 @@ inb_pkt_prepare(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn, icv_ofs += sa->sqh_len; - /* we have to allocate space for AAD somewhere, + /* + * we have to allocate space for AAD somewhere, * right now - just use free trailing space at the last segment.
* Would probably be more convenient to reserve space for AAD * inside rte_crypto_op itself @@ -238,10 +276,28 @@ inb_pkt_prepare(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn, mb->pkt_len += sa->sqh_len; ml->data_len += sa->sqh_len; - inb_pkt_xprepare(sa, sqn, icv); return plen; } +static inline int32_t +inb_pkt_prepare(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn, + struct rte_mbuf *mb, uint32_t hlen, union sym_op_data *icv) +{ + int rc; + rte_be64_t sqn; + + rc = inb_get_sqn(sa, rsn, mb, hlen, &sqn); + if (rc != 0) + return rc; + + rc = inb_prepare(sa, mb, hlen, icv); + if (rc < 0) + return rc; + + inb_pkt_xprepare(sa, sqn, icv); + return rc; +} + /* * setup/update packets and crypto ops for ESP inbound case. */ @@ -270,17 +326,17 @@ esp_inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], lksd_none_cop_prepare(cop[k], cs, mb[i]); inb_cop_prepare(cop[k], sa, mb[i], &icv, hl, rc); k++; - } else + } else { dr[i - k] = i; + rte_errno = -rc; + } } rsn_release(sa, rsn); /* copy not prepared mbufs beyond good ones */ - if (k != num && k != 0) { + if (k != num && k != 0) move_bad_mbufs(mb, dr, num, num - k); - rte_errno = EBADMSG; - } return k; } @@ -512,7 +568,6 @@ tun_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[], return k; } - /* * *process* function for tunnel packets */ @@ -612,7 +667,7 @@ esp_inb_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[], if (k != num && k != 0) move_bad_mbufs(mb, dr, num, num - k); - /* update SQN and replay winow */ + /* update SQN and replay window */ n = esp_inb_rsn_update(sa, sqn, dr, k); /* handle mbufs with wrong SQN */ @@ -625,6 +680,67 @@ esp_inb_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[], return n; } +/* + * Prepare (plus actual crypto/auth) routine for inbound CPU-CRYPTO + * (synchronous mode). + */ +uint16_t +cpu_inb_pkt_prepare(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) +{ + int32_t rc; + uint32_t i, k; + struct rte_ipsec_sa *sa; + struct replay_sqn *rsn; + union sym_op_data icv; + void *iv[num]; + void *aad[num]; + void *dgst[num]; + uint32_t dr[num]; + uint32_t l4ofs[num]; + uint32_t clen[num]; + uint64_t ivbuf[num][IPSEC_MAX_IV_QWORD]; + + sa = ss->sa; + + /* grab rsn lock */ + rsn = rsn_acquire(sa); + + /* do preparation for all packets */ + for (i = 0, k = 0; i != num; i++) { + + /* calculate ESP header offset */ + l4ofs[k] = mb[i]->l2_len + mb[i]->l3_len; + + /* prepare ESP packet for processing */ + rc = inb_pkt_prepare(sa, rsn, mb[i], l4ofs[k], &icv); + if (rc >= 0) { + /* get encrypted data offset and length */ + clen[k] = inb_cpu_crypto_prepare(sa, mb[i], + l4ofs + k, rc, ivbuf[k]); + + /* fill iv, digest and aad */ + iv[k] = ivbuf[k]; + aad[k] = icv.va + sa->icv_len; + dgst[k++] = icv.va; + } else { + dr[i - k] = i; + rte_errno = -rc; + } + } + + /* release rsn lock */ + rsn_release(sa, rsn); + + /* copy not prepared mbufs beyond good ones */ + if (k != num && k != 0) + move_bad_mbufs(mb, dr, num, num - k); + + /* convert mbufs to iovecs and do actual crypto/auth processing */ + cpu_crypto_bulk(ss, sa->cofs, mb, iv, aad, dgst, l4ofs, clen, k); + return k; +} + /* * process group of ESP inbound tunnel packets. 
*/ diff --git a/lib/librte_ipsec/esp_outb.c b/lib/librte_ipsec/esp_outb.c index e983b25a3..faac831d2 100644 --- a/lib/librte_ipsec/esp_outb.c +++ b/lib/librte_ipsec/esp_outb.c @@ -15,6 +15,9 @@ #include "misc.h" #include "pad.h" +typedef int32_t (*esp_outb_prepare_t)(struct rte_ipsec_sa *sa, rte_be64_t sqc, + const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb, + union sym_op_data *icv, uint8_t sqh_len); /* * helper function to fill crypto_sym op for cipher+auth algorithms. @@ -177,6 +180,7 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc, espt->pad_len = pdlen; espt->next_proto = sa->proto; + /* set icv va/pa value(s) */ icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs); icv->pa = rte_pktmbuf_iova_offset(ml, pdofs); @@ -270,8 +274,7 @@ esp_outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], static inline int32_t outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc, const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb, - uint32_t l2len, uint32_t l3len, union sym_op_data *icv, - uint8_t sqh_len) + union sym_op_data *icv, uint8_t sqh_len) { uint8_t np; uint32_t clen, hlen, pdlen, pdofs, plen, tlen, uhlen; @@ -280,6 +283,10 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc, struct rte_esp_tail *espt; char *ph, *pt; uint64_t *iv; + uint32_t l2len, l3len; + + l2len = mb->l2_len; + l3len = mb->l3_len; uhlen = l2len + l3len; plen = mb->pkt_len - uhlen; @@ -340,6 +347,7 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc, espt->pad_len = pdlen; espt->next_proto = np; + /* set icv va/pa value(s) */ icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs); icv->pa = rte_pktmbuf_iova_offset(ml, pdofs); @@ -381,8 +389,8 @@ esp_outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], gen_iv(iv, sqc); /* try to update the packet itself */ - rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], l2, l3, &icv, - sa->sqh_len); + rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv, + sa->sqh_len); /* success, setup crypto op */ if (rc >= 0) { outb_pkt_xprepare(sa, sqc, &icv); @@ -403,6 +411,116 @@ esp_outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], return k; } + +static inline uint32_t +outb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, uint32_t *pofs, + uint32_t plen, void *iv) +{ + uint64_t *ivp = iv; + struct aead_gcm_iv *gcm; + struct aesctr_cnt_blk *ctr; + uint32_t clen; + + switch (sa->algo_type) { + case ALGO_TYPE_AES_GCM: + gcm = iv; + aead_gcm_iv_fill(gcm, ivp[0], sa->salt); + break; + case ALGO_TYPE_AES_CTR: + ctr = iv; + aes_ctr_cnt_blk_fill(ctr, ivp[0], sa->salt); + break; + } + + *pofs += sa->ctp.auth.offset; + clen = plen + sa->ctp.auth.length; + return clen; +} + +static uint16_t +cpu_outb_pkt_prepare(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num, + esp_outb_prepare_t prepare, uint32_t cofs_mask) +{ + int32_t rc; + uint64_t sqn; + rte_be64_t sqc; + struct rte_ipsec_sa *sa; + uint32_t i, k, n; + uint32_t l2, l3; + union sym_op_data icv; + void *iv[num]; + void *aad[num]; + void *dgst[num]; + uint32_t dr[num]; + uint32_t l4ofs[num]; + uint32_t clen[num]; + uint64_t ivbuf[num][IPSEC_MAX_IV_QWORD]; + + sa = ss->sa; + + n = num; + sqn = esn_outb_update_sqn(sa, &n); + if (n != num) + rte_errno = EOVERFLOW; + + for (i = 0, k = 0; i != n; i++) { + + l2 = mb[i]->l2_len; + l3 = mb[i]->l3_len; + + /* calculate ESP header offset */ + l4ofs[k] = (l2 + l3) & cofs_mask; + + sqc = rte_cpu_to_be_64(sqn + i); + gen_iv(ivbuf[k], sqc); + + /* try to update 
the packet itself */ + rc = prepare(sa, sqc, ivbuf[k], mb[i], &icv, sa->sqh_len); + + /* success, proceed with preparations */ + if (rc >= 0) { + + outb_pkt_xprepare(sa, sqc, &icv); + + /* get encrypted data offset and length */ + clen[k] = outb_cpu_crypto_prepare(sa, l4ofs + k, rc, + ivbuf[k]); + + /* fill iv, digest and aad */ + iv[k] = ivbuf[k]; + aad[k] = icv.va + sa->icv_len; + dgst[k++] = icv.va; + } else { + dr[i - k] = i; + rte_errno = -rc; + } + } + + /* copy not prepared mbufs beyond good ones */ + if (k != n && k != 0) + move_bad_mbufs(mb, dr, n, n - k); + + /* convert mbufs to iovecs and do actual crypto/auth processing */ + cpu_crypto_bulk(ss, sa->cofs, mb, iv, aad, dgst, l4ofs, clen, k); + return k; +} + +uint16_t +cpu_outb_tun_pkt_prepare(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) +{ + return cpu_outb_pkt_prepare(ss, mb, num, outb_tun_pkt_prepare, 0); +} + +uint16_t +cpu_outb_trs_pkt_prepare(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) +{ + return cpu_outb_pkt_prepare(ss, mb, num, outb_trs_pkt_prepare, + UINT32_MAX); +} + /* * process outbound packets for SA with ESN support, * for algorithms that require SQN.hibits to be implictly included @@ -526,7 +644,7 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], uint16_t num) { int32_t rc; - uint32_t i, k, n, l2, l3; + uint32_t i, k, n; uint64_t sqn; rte_be64_t sqc; struct rte_ipsec_sa *sa; @@ -544,15 +662,11 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss, k = 0; for (i = 0; i != n; i++) { - l2 = mb[i]->l2_len; - l3 = mb[i]->l3_len; - sqc = rte_cpu_to_be_64(sqn + i); gen_iv(iv, sqc); /* try to update the packet itself */ - rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], - l2, l3, &icv, 0); + rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0); k += (rc >= 0); diff --git a/lib/librte_ipsec/misc.h b/lib/librte_ipsec/misc.h index fe4641bfc..6443e0d23 100644 --- a/lib/librte_ipsec/misc.h +++ b/lib/librte_ipsec/misc.h @@ -105,4 +105,122 @@ mbuf_cut_seg_ofs(struct rte_mbuf *mb, struct rte_mbuf *ms, uint32_t ofs, mb->pkt_len -= len; } +static inline int +mbuf_to_cryptovec(const struct rte_mbuf *mb, uint32_t ofs, uint32_t data_len, + struct rte_crypto_vec vec[], uint32_t num) +{ + uint32_t i; + struct rte_mbuf *nseg; + uint32_t left; + uint32_t seglen; + + /* assuming that requested data starts in the first segment */ + RTE_ASSERT(mb->data_len > ofs); + + if (mb->nb_segs > num) + return -mb->nb_segs; + + vec[0].base = rte_pktmbuf_mtod_offset(mb, void *, ofs); + + /* whole data lies in the first segment */ + seglen = mb->data_len - ofs; + if (data_len <= seglen) { + vec[0].len = data_len; + return 1; + } + + /* data spread across segments */ + vec[0].len = seglen; + left = data_len - seglen; + for (i = 1, nseg = mb->next; nseg != NULL; nseg = nseg->next, i++) { + vec[i].base = rte_pktmbuf_mtod(nseg, void *); + + seglen = nseg->data_len; + if (left <= seglen) { + /* whole requested data is completed */ + vec[i].len = left; + left = 0; + break; + } + + /* use whole segment */ + vec[i].len = seglen; + left -= seglen; + } + + RTE_ASSERT(left == 0); + return i + 1; +} + +/* + * process packets using sync crypto engine + */ +static inline void +cpu_crypto_bulk(const struct rte_ipsec_session *ss, + union rte_crypto_sym_ofs ofs, struct rte_mbuf *mb[], + void *iv[], void *aad[], void *dgst[], uint32_t l4ofs[], + uint32_t clen[], uint32_t num) +{ + uint32_t i, j, n; + int32_t vcnt, vofs; + int32_t st[num]; + struct rte_crypto_sgl 
vecpkt[num]; + struct rte_crypto_vec vec[UINT8_MAX]; + struct rte_crypto_sym_vec symvec; + + const uint32_t vnum = RTE_DIM(vec); + + j = 0, n = 0; + vofs = 0; + for (i = 0; i != num; i++) { + + vcnt = mbuf_to_cryptovec(mb[i], l4ofs[i], clen[i], &vec[vofs], + vnum - vofs); + + /* not enough space in vec[] to hold all segments */ + if (vcnt < 0) { + /* fill the request structure */ + symvec.sgl = &vecpkt[j]; + symvec.iv = &iv[j]; + symvec.aad = &aad[j]; + symvec.digest = &dgst[j]; + symvec.status = &st[j]; + symvec.num = i - j; + + /* flush vec array and try again */ + n += rte_cryptodev_sym_cpu_crypto_process( + ss->crypto.dev_id, ss->crypto.ses, ofs, + &symvec); + vofs = 0; + vcnt = mbuf_to_cryptovec(mb[i], l4ofs[i], clen[i], vec, + vnum); + RTE_ASSERT(vcnt > 0); + j = i; + } + + vecpkt[i].vec = &vec[vofs]; + vecpkt[i].num = vcnt; + vofs += vcnt; + } + + /* fill the request structure */ + symvec.sgl = &vecpkt[j]; + symvec.iv = &iv[j]; + symvec.aad = &aad[j]; + symvec.digest = &dgst[j]; + symvec.status = &st[j]; + symvec.num = i - j; + + n += rte_cryptodev_sym_cpu_crypto_process(ss->crypto.dev_id, + ss->crypto.ses, ofs, &symvec); + + j = num - n; + for (i = 0; j != 0 && i != num; i++) { + if (st[i] != 0) { + mb[i]->ol_flags |= PKT_RX_SEC_OFFLOAD_FAILED; + j--; + } + } +} + #endif /* _MISC_H_ */ diff --git a/lib/librte_ipsec/rte_ipsec.h b/lib/librte_ipsec/rte_ipsec.h index f3b1f936b..fd685887c 100644 --- a/lib/librte_ipsec/rte_ipsec.h +++ b/lib/librte_ipsec/rte_ipsec.h @@ -33,10 +33,15 @@ struct rte_ipsec_session; * (see rte_ipsec_pkt_process for more details). */ struct rte_ipsec_sa_pkt_func { - uint16_t (*prepare)(const struct rte_ipsec_session *ss, + union { + uint16_t (*async)(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num); + uint16_t (*sync)(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], + uint16_t num); + } prepare; uint16_t (*process)(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], uint16_t num); @@ -62,6 +67,7 @@ struct rte_ipsec_session { union { struct { struct rte_cryptodev_sym_session *ses; + uint8_t dev_id; } crypto; struct { struct rte_security_session *ses; @@ -114,7 +120,15 @@ static inline uint16_t rte_ipsec_pkt_crypto_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num) { - return ss->pkt_func.prepare(ss, mb, cop, num); + return ss->pkt_func.prepare.async(ss, mb, cop, num); +} + +__rte_experimental +static inline uint16_t +rte_ipsec_pkt_cpu_prepare(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) +{ + return ss->pkt_func.prepare.sync(ss, mb, num); } /** diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c index 6f1d92c3c..de6ab46dd 100644 --- a/lib/librte_ipsec/sa.c +++ b/lib/librte_ipsec/sa.c @@ -33,17 +33,21 @@ fill_crypto_xform(struct crypto_xform *xform, uint64_t type, const struct rte_ipsec_sa_prm *prm) { struct rte_crypto_sym_xform *xf, *xfn; + uint32_t xftype, xfntype; memset(xform, 0, sizeof(*xform)); xf = prm->crypto_xform; if (xf == NULL) return -EINVAL; + xftype = xf->type & RTE_CRYPTO_SYM_XFORM_TYPE_MASK; xfn = xf->next; + xfntype = xfn != NULL ?
xfn->type & RTE_CRYPTO_SYM_XFORM_TYPE_MASK : + RTE_CRYPTO_SYM_XFORM_NOT_SPECIFIED; /* for AEAD just one xform required */ - if (xf->type == RTE_CRYPTO_SYM_XFORM_AEAD) { + if (xftype == RTE_CRYPTO_SYM_XFORM_AEAD) { if (xfn != NULL) return -EINVAL; xform->aead = &xf->aead; @@ -56,8 +60,8 @@ fill_crypto_xform(struct crypto_xform *xform, uint64_t type, } else if ((type & RTE_IPSEC_SATP_DIR_MASK) == RTE_IPSEC_SATP_DIR_IB) { /* wrong order or no cipher */ - if (xfn == NULL || xf->type != RTE_CRYPTO_SYM_XFORM_AUTH || - xfn->type != RTE_CRYPTO_SYM_XFORM_CIPHER) + if (xfn == NULL || xftype != RTE_CRYPTO_SYM_XFORM_AUTH || + xfntype != RTE_CRYPTO_SYM_XFORM_CIPHER) return -EINVAL; xform->auth = &xf->auth; @@ -66,8 +70,8 @@ fill_crypto_xform(struct crypto_xform *xform, uint64_t type, } else { /* wrong order or no auth */ - if (xfn == NULL || xf->type != RTE_CRYPTO_SYM_XFORM_CIPHER || - xfn->type != RTE_CRYPTO_SYM_XFORM_AUTH) + if (xfn == NULL || xftype != RTE_CRYPTO_SYM_XFORM_CIPHER || + xfntype != RTE_CRYPTO_SYM_XFORM_AUTH) return -EINVAL; xform->cipher = &xf->cipher; @@ -243,10 +247,26 @@ static void esp_inb_init(struct rte_ipsec_sa *sa) { /* these params may differ with new algorithms support */ - sa->ctp.auth.offset = 0; - sa->ctp.auth.length = sa->icv_len - sa->sqh_len; sa->ctp.cipher.offset = sizeof(struct rte_esp_hdr) + sa->iv_len; sa->ctp.cipher.length = sa->icv_len + sa->ctp.cipher.offset; + + /* + * for AEAD and NULL algorithms we can assume that + * auth and cipher offsets would be equal. + */ + switch (sa->algo_type) { + case ALGO_TYPE_AES_GCM: + case ALGO_TYPE_NULL: + sa->ctp.auth.raw = sa->ctp.cipher.raw; + break; + default: + sa->ctp.auth.offset = 0; + sa->ctp.auth.length = sa->icv_len - sa->sqh_len; + sa->cofs.ofs.cipher.tail = sa->sqh_len; + break; + } + + sa->cofs.ofs.cipher.head = sa->ctp.cipher.offset - sa->ctp.auth.offset; } /* @@ -269,13 +289,13 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen) sa->sqn.outb.raw = 1; - /* these params may differ with new algorithms support */ - sa->ctp.auth.offset = hlen; - sa->ctp.auth.length = sizeof(struct rte_esp_hdr) + - sa->iv_len + sa->sqh_len; - algo_type = sa->algo_type; + /* + * Setup auth and cipher length and offset. + * these params may differ with new algorithms support + */ + switch (algo_type) { case ALGO_TYPE_AES_GCM: case ALGO_TYPE_AES_CTR: @@ -286,11 +306,30 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen) break; case ALGO_TYPE_AES_CBC: case ALGO_TYPE_3DES_CBC: - sa->ctp.cipher.offset = sa->hdr_len + - sizeof(struct rte_esp_hdr); + sa->ctp.cipher.offset = hlen + sizeof(struct rte_esp_hdr); sa->ctp.cipher.length = sa->iv_len; break; } + + /* + * for AEAD and NULL algorithms we can assume that + * auth and cipher offsets would be equal. 
+ */ + switch (algo_type) { + case ALGO_TYPE_AES_GCM: + case ALGO_TYPE_NULL: + sa->ctp.auth.raw = sa->ctp.cipher.raw; + break; + default: + sa->ctp.auth.offset = hlen; + sa->ctp.auth.length = sizeof(struct rte_esp_hdr) + + sa->iv_len + sa->sqh_len; + break; + } + + sa->cofs.ofs.cipher.head = sa->ctp.cipher.offset - sa->ctp.auth.offset; + sa->cofs.ofs.cipher.tail = (sa->ctp.auth.offset + sa->ctp.auth.length) - + (sa->ctp.cipher.offset + sa->ctp.cipher.length); } /* @@ -544,9 +583,9 @@ lksd_proto_prepare(const struct rte_ipsec_session *ss, * - inbound/outbound for RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL * - outbound for RTE_SECURITY_ACTION_TYPE_NONE when ESN is disabled */ -static uint16_t -pkt_flag_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], - uint16_t num) +uint16_t +pkt_flag_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) { uint32_t i, k; uint32_t dr[num]; @@ -588,21 +627,59 @@ lksd_none_pkt_func_select(const struct rte_ipsec_sa *sa, switch (sa->type & msk) { case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4): case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6): - pf->prepare = esp_inb_pkt_prepare; + pf->prepare.async = esp_inb_pkt_prepare; pf->process = esp_inb_tun_pkt_process; break; case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS): - pf->prepare = esp_inb_pkt_prepare; + pf->prepare.async = esp_inb_pkt_prepare; pf->process = esp_inb_trs_pkt_process; break; case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4): case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6): - pf->prepare = esp_outb_tun_prepare; + pf->prepare.async = esp_outb_tun_prepare; pf->process = (sa->sqh_len != 0) ? esp_outb_sqh_process : pkt_flag_process; break; case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS): - pf->prepare = esp_outb_trs_prepare; + pf->prepare.async = esp_outb_trs_prepare; + pf->process = (sa->sqh_len != 0) ? + esp_outb_sqh_process : pkt_flag_process; + break; + default: + rc = -ENOTSUP; + } + + return rc; +} + +static int +cpu_crypto_pkt_func_select(const struct rte_ipsec_sa *sa, + struct rte_ipsec_sa_pkt_func *pf) +{ + int32_t rc; + + static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK | + RTE_IPSEC_SATP_MODE_MASK; + + rc = 0; + switch (sa->type & msk) { + case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4): + case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6): + pf->prepare.sync = cpu_inb_pkt_prepare; + pf->process = esp_inb_tun_pkt_process; + break; + case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS): + pf->prepare.sync = cpu_inb_pkt_prepare; + pf->process = esp_inb_trs_pkt_process; + break; + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4): + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6): + pf->prepare.sync = cpu_outb_tun_pkt_prepare; + pf->process = (sa->sqh_len != 0) ? + esp_outb_sqh_process : pkt_flag_process; + break; + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS): + pf->prepare.sync = cpu_outb_trs_pkt_prepare; pf->process = (sa->sqh_len != 0) ? 
esp_outb_sqh_process : pkt_flag_process; break; default: rc = -ENOTSUP; } + + return rc; +} + @@ -660,7 +737,7 @@ ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss, int32_t rc; rc = 0; - pf[0] = (struct rte_ipsec_sa_pkt_func) { 0 }; + pf[0] = (struct rte_ipsec_sa_pkt_func) { {0} }; switch (ss->type) { case RTE_SECURITY_ACTION_TYPE_NONE: @@ -677,9 +754,12 @@ ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss, pf->process = inline_proto_outb_pkt_process; break; case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL: - pf->prepare = lksd_proto_prepare; + pf->prepare.async = lksd_proto_prepare; pf->process = pkt_flag_process; break; + case RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO: + rc = cpu_crypto_pkt_func_select(sa, pf); + break; default: rc = -ENOTSUP; } diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h index 51e69ad05..a16238301 100644 --- a/lib/librte_ipsec/sa.h +++ b/lib/librte_ipsec/sa.h @@ -88,6 +88,8 @@ struct rte_ipsec_sa { union sym_op_ofslen cipher; union sym_op_ofslen auth; } ctp; + /* cpu-crypto offsets */ + union rte_crypto_sym_ofs cofs; /* tx_offload template for tunnel mbuf */ struct { uint64_t msk; @@ -156,6 +158,10 @@ uint16_t inline_inb_trs_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], uint16_t num); +uint16_t +cpu_inb_pkt_prepare(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num); + /* outbound processing */ uint16_t @@ -170,6 +176,10 @@ uint16_t esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], uint16_t num); +uint16_t +pkt_flag_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num); + uint16_t inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], uint16_t num); @@ -182,4 +192,11 @@ uint16_t inline_proto_outb_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], uint16_t num); +uint16_t +cpu_outb_tun_pkt_prepare(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num); +uint16_t +cpu_outb_trs_pkt_prepare(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num); + #endif /* _SA_H_ */ diff --git a/lib/librte_ipsec/ses.c b/lib/librte_ipsec/ses.c index 82c765a33..7a123e2d9 100644 --- a/lib/librte_ipsec/ses.c +++ b/lib/librte_ipsec/ses.c @@ -11,7 +11,8 @@ session_check(struct rte_ipsec_session *ss) if (ss == NULL || ss->sa == NULL) return -EINVAL; - if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE) { + if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE || + ss->type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) { if (ss->crypto.ses == NULL) return -EINVAL; } else {

From patchwork Wed Jan 15 18:28:31 2020
From: Marcin Smoczynski
To: akhil.goyal@nxp.com, konstantin.ananyev@intel.com, roy.fan.zhang@intel.com, declan.doherty@intel.com, radu.nicolau@intel.com
Cc: dev@dpdk.org, Marcin Smoczynski
Date: Wed, 15 Jan 2020 19:28:31 +0100
Message-Id: <20200115182832.17012-6-marcinx.smoczynski@intel.com>
In-Reply-To: <20200115182832.17012-1-marcinx.smoczynski@intel.com>
Subject: [dpdk-dev] [PATCH v3 5/6] examples/ipsec-secgw: cpu crypto support

Add support for CPU accelerated crypto. A 'cpu-crypto' SA type has been introduced in the configuration, allowing use of the abovementioned acceleration. Legacy mode is not currently supported.

Signed-off-by: Konstantin Ananyev
Signed-off-by: Marcin Smoczynski
Acked-by: Fan Zhang
---
examples/ipsec-secgw/ipsec.c | 12 ++- examples/ipsec-secgw/ipsec_process.c | 134 +++++++++++++++++---------- examples/ipsec-secgw/sa.c | 33 +++++-- 3 files changed, 123 insertions(+), 56 deletions(-)

diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c index d4b57121a..55b83bb8d 100644 --- a/examples/ipsec-secgw/ipsec.c +++ b/examples/ipsec-secgw/ipsec.c @@ -10,6 +10,7 @@ #include #include #include +#include #include #include #include @@ -86,7 +87,8 @@ create_lookaside_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa, ipsec_ctx->tbl[cdev_id_qp].id, ipsec_ctx->tbl[cdev_id_qp].qp); - if (ips->type != RTE_SECURITY_ACTION_TYPE_NONE) { + if (ips->type != RTE_SECURITY_ACTION_TYPE_NONE && + ips->type != RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) { struct rte_security_session_conf sess_conf = { .action_type = ips->type, .protocol = RTE_SECURITY_PROTOCOL_IPSEC, @@ -126,6 +128,7 @@ create_lookaside_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa, return -1; } } else { + ips->crypto.dev_id = ipsec_ctx->tbl[cdev_id_qp].id; ips->crypto.ses = rte_cryptodev_sym_session_create( ipsec_ctx->session_pool); rte_cryptodev_sym_session_init(ipsec_ctx->tbl[cdev_id_qp].id, @@ -476,6 +479,13 @@ ipsec_enqueue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx, rte_security_attach_session(&priv->cop, ips->security.ses); break; + + case RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO: + RTE_LOG(ERR, IPSEC, "CPU crypto is not supported by the" + " legacy mode.\n"); + rte_pktmbuf_free(pkts[i]); + continue; + case RTE_SECURITY_ACTION_TYPE_NONE: priv->cop.type = RTE_CRYPTO_OP_TYPE_SYMMETRIC; diff --git a/examples/ipsec-secgw/ipsec_process.c b/examples/ipsec-secgw/ipsec_process.c index 2eb5c8b34..576a9fa8a 100644 --- a/examples/ipsec-secgw/ipsec_process.c +++ b/examples/ipsec-secgw/ipsec_process.c @@ -92,7 +92,8 @@ fill_ipsec_session(struct rte_ipsec_session *ss, struct ipsec_ctx *ctx, int32_t rc; /* setup crypto section */ - if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE) { + if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE || + ss->type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) { RTE_ASSERT(ss->crypto.ses == NULL); rc = create_lookaside_session(ctx, sa, ss); if (rc != 0) @@ -215,6 +216,62 @@ ipsec_prepare_crypto_group(struct
ipsec_ctx *ctx, struct ipsec_sa *sa, return k; } +/* + * helper routine for inline and cpu(synchronous) processing + * this is just to satisfy inbound_sa_check() and get_hop_for_offload_pkt(). + * Should be removed in future. + */ +static inline void +prep_process_group(void *sa, struct rte_mbuf *mb[], uint32_t cnt) +{ + uint32_t j; + struct ipsec_mbuf_metadata *priv; + + for (j = 0; j != cnt; j++) { + priv = get_priv(mb[j]); + priv->sa = sa; + } +} + +/* + * finish processing of packets successfully decrypted by an inline processor + */ +static uint32_t +ipsec_process_inline_group(struct rte_ipsec_session *ips, void *sa, + struct ipsec_traffic *trf, struct rte_mbuf *mb[], uint32_t cnt) +{ + uint64_t satp; + uint32_t k; + + /* get SA type */ + satp = rte_ipsec_sa_type(ips->sa); + prep_process_group(sa, mb, cnt); + + k = rte_ipsec_pkt_process(ips, mb, cnt); + copy_to_trf(trf, satp, mb, k); + return k; +} + +/* + * process packets synchronously + */ +static uint32_t +ipsec_process_cpu_group(struct rte_ipsec_session *ips, void *sa, + struct ipsec_traffic *trf, struct rte_mbuf *mb[], uint32_t cnt) +{ + uint64_t satp; + uint32_t k; + + /* get SA type */ + satp = rte_ipsec_sa_type(ips->sa); + prep_process_group(sa, mb, cnt); + + k = rte_ipsec_pkt_cpu_prepare(ips, mb, cnt); + k = rte_ipsec_pkt_process(ips, mb, k); + copy_to_trf(trf, satp, mb, k); + return k; +} + /* * Process ipsec packets. * If packet belong to SA that is subject of inline-crypto, @@ -225,10 +282,8 @@ ipsec_prepare_crypto_group(struct ipsec_ctx *ctx, struct ipsec_sa *sa, void ipsec_process(struct ipsec_ctx *ctx, struct ipsec_traffic *trf) { - uint64_t satp; - uint32_t i, j, k, n; + uint32_t i, k, n; struct ipsec_sa *sa; - struct ipsec_mbuf_metadata *priv; struct rte_ipsec_group *pg; struct rte_ipsec_session *ips; struct rte_ipsec_group grp[RTE_DIM(trf->ipsec.pkts)]; @@ -236,10 +291,17 @@ ipsec_process(struct ipsec_ctx *ctx, struct ipsec_traffic *trf) n = sa_group(trf->ipsec.saptr, trf->ipsec.pkts, grp, trf->ipsec.num); for (i = 0; i != n; i++) { + pg = grp + i; sa = ipsec_mask_saptr(pg->id.ptr); - ips = ipsec_get_primary_session(sa); + /* fallback to cryptodev with RX packets which inline + * processor was unable to process + */ + if (sa != NULL) + ips = (pg->id.val & IPSEC_SA_OFFLOAD_FALLBACK_FLAG) ? + ipsec_get_fallback_session(sa) : + ipsec_get_primary_session(sa); /* no valid HW session for that SA, try to create one */ if (sa == NULL || (ips->crypto.ses == NULL && @@ -247,50 +309,26 @@ ipsec_process(struct ipsec_ctx *ctx, struct ipsec_traffic *trf) k = 0; /* process packets inline */ - else if (ips->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO || - ips->type == - RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) { - - /* get SA type */ - satp = rte_ipsec_sa_type(ips->sa); - - /* - * This is just to satisfy inbound_sa_check() - * and get_hop_for_offload_pkt(). - * Should be removed in future. 
- */ - for (j = 0; j != pg->cnt; j++) { - priv = get_priv(pg->m[j]); - priv->sa = sa; + else { + switch (ips->type) { + /* enqueue packets to crypto dev */ + case RTE_SECURITY_ACTION_TYPE_NONE: + case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL: + k = ipsec_prepare_crypto_group(ctx, sa, ips, + pg->m, pg->cnt); + break; + case RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO: + case RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL: + k = ipsec_process_inline_group(ips, sa, + trf, pg->m, pg->cnt); + break; + case RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO: + k = ipsec_process_cpu_group(ips, sa, + trf, pg->m, pg->cnt); + break; + default: + k = 0; } - - /* fallback to cryptodev with RX packets which inline - * processor was unable to process - */ - if (pg->id.val & IPSEC_SA_OFFLOAD_FALLBACK_FLAG) { - /* offload packets to cryptodev */ - struct rte_ipsec_session *fallback; - - fallback = ipsec_get_fallback_session(sa); - if (fallback->crypto.ses == NULL && - fill_ipsec_session(fallback, ctx, sa) - != 0) - k = 0; - else - k = ipsec_prepare_crypto_group(ctx, sa, - fallback, pg->m, pg->cnt); - } else { - /* finish processing of packets successfully - * decrypted by an inline processor - */ - k = rte_ipsec_pkt_process(ips, pg->m, pg->cnt); - copy_to_trf(trf, satp, pg->m, k); - - } - /* enqueue packets to crypto dev */ - } else { - k = ipsec_prepare_crypto_group(ctx, sa, ips, pg->m, - pg->cnt); } /* drop packets that cannot be enqueued/processed */ diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c index 7f046e3ed..cfca4fe8f 100644 --- a/examples/ipsec-secgw/sa.c +++ b/examples/ipsec-secgw/sa.c @@ -577,6 +577,8 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens, RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL; else if (strcmp(tokens[ti], "no-offload") == 0) ips->type = RTE_SECURITY_ACTION_TYPE_NONE; + else if (strcmp(tokens[ti], "cpu-crypto") == 0) + ips->type = RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO; else { APP_CHECK(0, status, "Invalid input \"%s\"", tokens[ti]); @@ -670,10 +672,12 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens, if (status->status < 0) return; - if ((ips->type != RTE_SECURITY_ACTION_TYPE_NONE) && (portid_p == 0)) + if ((ips->type != RTE_SECURITY_ACTION_TYPE_NONE && ips->type != + RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) && (portid_p == 0)) printf("Missing portid option, falling back to non-offload\n"); - if (!type_p || !portid_p) { + if (!type_p || (!portid_p && ips->type != + RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)) { ips->type = RTE_SECURITY_ACTION_TYPE_NONE; rule->portid = -1; } @@ -759,15 +763,25 @@ print_one_sa_rule(const struct ipsec_sa *sa, int inbound) case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL: printf("lookaside-protocol-offload "); break; + case RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO: + printf("cpu-crypto-accelerated"); + break; } fallback_ips = &sa->sessions[IPSEC_SESSION_FALLBACK]; if (fallback_ips != NULL && sa->fallback_sessions > 0) { printf("inline fallback: "); - if (fallback_ips->type == RTE_SECURITY_ACTION_TYPE_NONE) + switch (fallback_ips->type) { + case RTE_SECURITY_ACTION_TYPE_NONE: printf("lookaside-none"); - else + break; + case RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO: + printf("cpu-crypto-accelerated"); + break; + default: printf("invalid"); + break; + } } printf("\n"); } @@ -966,7 +980,6 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[], return -EINVAL; } - switch (WITHOUT_TRANSPORT_VERSION(sa->flags)) { case IP4_TUNNEL: sa->src.ip.ip4 = rte_cpu_to_be_32(sa->src.ip.ip4); @@ -1017,7 +1030,6 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa 
entries[], return -EINVAL; } } - print_one_sa_rule(sa, inbound); } else { switch (sa->cipher_algo) { case RTE_CRYPTO_CIPHER_NULL: @@ -1082,9 +1094,16 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[], sa_ctx->xf[idx].a.next = &sa_ctx->xf[idx].b; sa_ctx->xf[idx].b.next = NULL; sa->xforms = &sa_ctx->xf[idx].a; + } - print_one_sa_rule(sa, inbound); + if (ips->type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) { + sa_ctx->xf[idx].a.type |= RTE_CRYPTO_SYM_CPU_CRYPTO; + if (sa_ctx->xf[idx].b.type != 0) + sa_ctx->xf[idx].b.type |= + RTE_CRYPTO_SYM_CPU_CRYPTO; } + + print_one_sa_rule(sa, inbound); } return 0; From patchwork Wed Jan 15 18:28:32 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marcin Smoczynski X-Patchwork-Id: 64737 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id CE497A0514; Wed, 15 Jan 2020 19:31:35 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id C0EFD1C2BB; Wed, 15 Jan 2020 19:30:40 +0100 (CET) Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by dpdk.org (Postfix) with ESMTP id 148981C2BF for ; Wed, 15 Jan 2020 19:30:36 +0100 (CET) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga103.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 15 Jan 2020 10:30:36 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.70,323,1574150400"; d="scan'208";a="254502851" Received: from msmoczyx-mobl.ger.corp.intel.com ([10.104.125.3]) by fmsmga001.fm.intel.com with ESMTP; 15 Jan 2020 10:30:34 -0800 From: Marcin Smoczynski To: akhil.goyal@nxp.com, konstantin.ananyev@intel.com, roy.fan.zhang@intel.com, declan.doherty@intel.com, radu.nicolau@intel.com Cc: dev@dpdk.org, Marcin Smoczynski Date: Wed, 15 Jan 2020 19:28:32 +0100 Message-Id: <20200115182832.17012-7-marcinx.smoczynski@intel.com> X-Mailer: git-send-email 2.21.0.windows.1 In-Reply-To: <20200115182832.17012-1-marcinx.smoczynski@intel.com> References: <20200115182832.17012-1-marcinx.smoczynski@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v3 6/6] examples/ipsec-secgw: cpu crypto testing X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Enable cpu-crypto mode testing by adding a dedicated environment variable, CRYPTO_PRIM_TYPE. Setting it to 'type cpu-crypto' allows running the test scenarios with CPU crypto acceleration.
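As an illustration only (a hypothetical invocation; the test mode name is a
placeholder, not part of this patch), a cpu-crypto test run could look like:

  CRYPTO_PRIM_TYPE='type cpu-crypto' \
  ./examples/ipsec-secgw/test/linux_test4.sh trs_aescbc_sha1

Note that select_mode() introduced below appends CRYPTO_PRIM_TYPE to
SGW_CFG_XPRM only when SGW_CMD_XPRM is also non-empty.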
Signed-off-by: Konstantin Ananyev Signed-off-by: Marcin Smoczynski Acked-by: Fan Zhang --- examples/ipsec-secgw/test/common_defs.sh | 21 +++++++++++++++++++ examples/ipsec-secgw/test/linux_test4.sh | 11 +--------- examples/ipsec-secgw/test/linux_test6.sh | 11 +--------- .../test/trs_3descbc_sha1_common_defs.sh | 8 +++---- .../test/trs_aescbc_sha1_common_defs.sh | 8 +++---- .../test/trs_aesctr_sha1_common_defs.sh | 8 +++---- .../test/tun_3descbc_sha1_common_defs.sh | 8 +++---- .../test/tun_aescbc_sha1_common_defs.sh | 8 +++---- .../test/tun_aesctr_sha1_common_defs.sh | 8 +++---- 9 files changed, 47 insertions(+), 44 deletions(-) diff --git a/examples/ipsec-secgw/test/common_defs.sh b/examples/ipsec-secgw/test/common_defs.sh index 4aac4981a..6b6ae06f3 100644 --- a/examples/ipsec-secgw/test/common_defs.sh +++ b/examples/ipsec-secgw/test/common_defs.sh @@ -42,6 +42,27 @@ DPDK_BUILD=${RTE_TARGET:-x86_64-native-linux-gcc} DEF_MTU_LEN=1400 DEF_PING_LEN=1200 +#update operation mode based on env var values +select_mode() +{ + # select sync/async mode + if [[ -n "${CRYPTO_PRIM_TYPE}" && -n "${SGW_CMD_XPRM}" ]]; then + echo "${CRYPTO_PRIM_TYPE} is enabled" + SGW_CFG_XPRM="${SGW_CFG_XPRM} ${CRYPTO_PRIM_TYPE}" + fi + + #make linux generate fragmented packets + if [[ -n "${MULTI_SEG_TEST}" && -n "${SGW_CMD_XPRM}" ]]; then + echo "multi-segment test is enabled" + SGW_CMD_XPRM="${SGW_CMD_XPRM} ${MULTI_SEG_TEST}" + PING_LEN=5000 + MTU_LEN=1500 + else + PING_LEN=${DEF_PING_LEN} + MTU_LEN=${DEF_MTU_LEN} + fi +} + #setup mtu on local iface set_local_mtu() { diff --git a/examples/ipsec-secgw/test/linux_test4.sh b/examples/ipsec-secgw/test/linux_test4.sh index 760451000..fb8ae1023 100644 --- a/examples/ipsec-secgw/test/linux_test4.sh +++ b/examples/ipsec-secgw/test/linux_test4.sh @@ -45,16 +45,7 @@ MODE=$1 . ${DIR}/common_defs.sh . ${DIR}/${MODE}_defs.sh -#make linux to generate fragmented packets -if [[ -n "${MULTI_SEG_TEST}" && -n "${SGW_CMD_XPRM}" ]]; then - echo "multi-segment test is enabled" - SGW_CMD_XPRM="${SGW_CMD_XPRM} ${MULTI_SEG_TEST}" - PING_LEN=5000 - MTU_LEN=1500 -else - PING_LEN=${DEF_PING_LEN} - MTU_LEN=${DEF_MTU_LEN} -fi +select_mode config_secgw diff --git a/examples/ipsec-secgw/test/linux_test6.sh b/examples/ipsec-secgw/test/linux_test6.sh index 479f29be3..dbcca7936 100644 --- a/examples/ipsec-secgw/test/linux_test6.sh +++ b/examples/ipsec-secgw/test/linux_test6.sh @@ -46,16 +46,7 @@ MODE=$1 . ${DIR}/common_defs.sh . 
${DIR}/${MODE}_defs.sh -#make linux to generate fragmented packets -if [[ -n "${MULTI_SEG_TEST}" && -n "${SGW_CMD_XPRM}" ]]; then - echo "multi-segment test is enabled" - SGW_CMD_XPRM="${SGW_CMD_XPRM} ${MULTI_SEG_TEST}" - PING_LEN=5000 - MTU_LEN=1500 -else - PING_LEN=${DEF_PING_LEN} - MTU_LEN=${DEF_MTU_LEN} -fi +select_mode config_secgw diff --git a/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh b/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh index 3c5c18afd..62118bb3f 100644 --- a/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh +++ b/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh @@ -33,14 +33,14 @@ cipher_key \ de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode transport +mode transport ${SGW_CFG_XPRM} sa in 9 cipher_algo 3des-cbc \ cipher_key \ de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode transport +mode transport ${SGW_CFG_XPRM} #SA out rules sa out 7 cipher_algo 3des-cbc \ @@ -48,7 +48,7 @@ cipher_key \ de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode transport +mode transport ${SGW_CFG_XPRM} #SA out rules sa out 9 cipher_algo 3des-cbc \ @@ -56,7 +56,7 @@ cipher_key \ de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode transport +mode transport ${SGW_CFG_XPRM} #Routing rules rt ipv4 dst ${REMOTE_IPV4}/32 port 0 diff --git a/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh b/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh index 9dbdd1765..7ddeb2b5a 100644 --- a/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh +++ b/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh @@ -32,27 +32,27 @@ sa in 7 cipher_algo aes-128-cbc \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode transport +mode transport ${SGW_CFG_XPRM} sa in 9 cipher_algo aes-128-cbc \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode transport +mode transport ${SGW_CFG_XPRM} #SA out rules sa out 7 cipher_algo aes-128-cbc \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode transport +mode transport ${SGW_CFG_XPRM} #SA out rules sa out 9 cipher_algo aes-128-cbc \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode transport +mode transport ${SGW_CFG_XPRM} #Routing rules rt ipv4 dst ${REMOTE_IPV4}/32 port 0 diff --git a/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh b/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh index 6aba680f9..f0178355a 100644 --- a/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh +++ b/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh @@ -32,27 +32,27 @@ sa in 7 cipher_algo aes-128-ctr \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ 
auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode transport +mode transport ${SGW_CFG_XPRM} sa in 9 cipher_algo aes-128-ctr \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode transport +mode transport ${SGW_CFG_XPRM} #SA out rules sa out 7 cipher_algo aes-128-ctr \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode transport +mode transport ${SGW_CFG_XPRM} #SA out rules sa out 9 cipher_algo aes-128-ctr \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode transport +mode transport ${SGW_CFG_XPRM} #Routing rules rt ipv4 dst ${REMOTE_IPV4}/32 port 0 diff --git a/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh b/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh index 7c3226f84..d8869fad0 100644 --- a/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh +++ b/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh @@ -33,14 +33,14 @@ cipher_key \ de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4} +mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4} ${SGW_CFG_XPRM} sa in 9 cipher_algo 3des-cbc \ cipher_key \ de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6} +mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6} ${SGW_CFG_XPRM} #SA out rules sa out 7 cipher_algo 3des-cbc \ @@ -48,14 +48,14 @@ cipher_key \ de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4} +mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4} ${SGW_CFG_XPRM} sa out 9 cipher_algo 3des-cbc \ cipher_key \ de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6} +mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6} ${SGW_CFG_XPRM} #Routing rules rt ipv4 dst ${REMOTE_IPV4}/32 port 0 diff --git a/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh b/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh index bdf5938a0..2616926b2 100644 --- a/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh +++ b/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh @@ -32,26 +32,26 @@ sa in 7 cipher_algo aes-128-cbc \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4} +mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4} ${SGW_CFG_XPRM} sa in 9 cipher_algo aes-128-cbc \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6} +mode ipv6-tunnel src 
${REMOTE_IPV6} dst ${LOCAL_IPV6} ${SGW_CFG_XPRM} #SA out rules sa out 7 cipher_algo aes-128-cbc \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4} +mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4} ${SGW_CFG_XPRM} sa out 9 cipher_algo aes-128-cbc \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6} +mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6} ${SGW_CFG_XPRM} #Routing rules rt ipv4 dst ${REMOTE_IPV4}/32 port 0 diff --git a/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh b/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh index 06f2ef0c6..06b561fd7 100644 --- a/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh +++ b/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh @@ -32,26 +32,26 @@ sa in 7 cipher_algo aes-128-ctr \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4} +mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4} ${SGW_CFG_XPRM} sa in 9 cipher_algo aes-128-ctr \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6} +mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6} ${SGW_CFG_XPRM} #SA out rules sa out 7 cipher_algo aes-128-ctr \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4} +mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4} ${SGW_CFG_XPRM} sa out 9 cipher_algo aes-128-ctr \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6} +mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6} ${SGW_CFG_XPRM} #Routing rules rt ipv4 dst ${REMOTE_IPV4}/32 port 0
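For reference, a sketch of what one of the SA rules above resolves to once
CRYPTO_PRIM_TYPE is set as described (an illustrative expansion of the last
tunnel rule, where ${SGW_CFG_XPRM} becomes 'type cpu-crypto'):

  sa out 9 cipher_algo aes-128-ctr \
  cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
  auth_algo sha1-hmac \
  auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
  mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6} type cpu-crypto

The trailing 'type cpu-crypto' tokens are what parse_sa_tokens() in patch 5/6
maps to RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO.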