From patchwork Wed May 29 14:40:24 2024
X-Patchwork-Submitter: "Kundapura, Ganapati"
X-Patchwork-Id: 140392
X-Patchwork-Delegate: gakhil@marvell.com
From: Ganapati Kundapura
To: dev@dpdk.org, gakhil@marvell.com, abhinandan.gujjar@intel.com, ferruh.yigit@amd.com, thomas@monjalon.net, bruce.richardson@intel.com, fanzhang.oss@gmail.com, ciara.power@intel.com
Subject: [PATCH v2 1/2] crypto: fix build issues on unsetting crypto callbacks macro
Date: Wed, 29 May 2024 09:40:24 -0500
Message-Id: <20240529144025.4089318-1-ganapati.kundapura@intel.com>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20240416081222.3002268-1-ganapati.kundapura@intel.com>
References: <20240416081222.3002268-1-ganapati.kundapura@intel.com>
List-Id: DPDK patches and discussions

The crypto callbacks macro is defined with value 1 and tested with #ifdef,
so when the config value is changed to 0 to disable the feature, the
crypto callback code is still compiled in.

Used #if instead of #ifdef, and also wrapped the crypto callback code
under the RTE_CRYPTO_CALLBACKS macro, to fix build issues when the macro
is unset.
Fixes: 1c3ffb95595e ("cryptodev: add enqueue and dequeue callbacks")
Fixes: 5523a75af539 ("test/crypto: add case for enqueue/dequeue callbacks")

Signed-off-by: Ganapati Kundapura
---
v2:
* Used #if instead of #ifdef and restored macro definition in config
* Split callback registration check into a separate patch

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 1703ebc..72cf77f 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -14547,6 +14547,7 @@ test_null_burst_operation(void)
 	return TEST_SUCCESS;
 }
 
+#if RTE_CRYPTO_CALLBACKS
 static uint16_t
 test_enq_callback(uint16_t dev_id, uint16_t qp_id, struct rte_crypto_op **ops,
 		  uint16_t nb_ops, void *user_param)
@@ -14784,6 +14785,7 @@ test_deq_callback_setup(void)
 
 	return TEST_SUCCESS;
 }
+#endif /* RTE_CRYPTO_CALLBACKS */
 
 static void
 generate_gmac_large_plaintext(uint8_t *data)
@@ -18069,8 +18071,10 @@ static struct unit_test_suite cryptodev_gen_testsuite = {
 		TEST_CASE_ST(ut_setup, ut_teardown,
 			test_device_configure_invalid_queue_pair_ids),
 		TEST_CASE_ST(ut_setup, ut_teardown, test_stats),
+#if RTE_CRYPTO_CALLBACKS
 		TEST_CASE_ST(ut_setup, ut_teardown, test_enq_callback_setup),
 		TEST_CASE_ST(ut_setup, ut_teardown, test_deq_callback_setup),
+#endif
 		TEST_CASES_END() /**< NULL terminate unit test array */
 	}
 };

diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index 886eb7a..2e0890f 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -628,6 +628,7 @@ rte_cryptodev_asym_xform_capability_check_hash(
 	return ret;
 }
 
+#if RTE_CRYPTO_CALLBACKS
 /* spinlock for crypto device enq callbacks */
 static rte_spinlock_t rte_cryptodev_callback_lock = RTE_SPINLOCK_INITIALIZER;
 
@@ -744,6 +745,7 @@ cryptodev_cb_init(struct rte_cryptodev *dev)
 	cryptodev_cb_cleanup(dev);
 	return -ENOMEM;
 }
+#endif /* RTE_CRYPTO_CALLBACKS */
 
 const char *
 rte_cryptodev_get_feature_name(uint64_t flag)
@@ -1244,9 +1246,11 @@ rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config)
 	if (*dev->dev_ops->dev_configure == NULL)
 		return -ENOTSUP;
 
+#if RTE_CRYPTO_CALLBACKS
 	rte_spinlock_lock(&rte_cryptodev_callback_lock);
 	cryptodev_cb_cleanup(dev);
 	rte_spinlock_unlock(&rte_cryptodev_callback_lock);
+#endif
 
 	/* Setup new number of queue pairs and reconfigure device. */
 	diag = rte_cryptodev_queue_pairs_config(dev, config->nb_queue_pairs,
@@ -1257,6 +1261,7 @@ rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config)
 		return diag;
 	}
 
+#if RTE_CRYPTO_CALLBACKS
 	rte_spinlock_lock(&rte_cryptodev_callback_lock);
 	diag = cryptodev_cb_init(dev);
 	rte_spinlock_unlock(&rte_cryptodev_callback_lock);
@@ -1264,6 +1269,7 @@ rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config)
 		CDEV_LOG_ERR("Callback init failed for dev_id=%d", dev_id);
 		return diag;
 	}
+#endif
 
 	rte_cryptodev_trace_configure(dev_id, config);
 	return (*dev->dev_ops->dev_configure)(dev, config);
@@ -1485,6 +1491,7 @@ rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
 			socket_id);
 }
 
+#if RTE_CRYPTO_CALLBACKS
 struct rte_cryptodev_cb *
 rte_cryptodev_add_enq_callback(uint8_t dev_id,
 			       uint16_t qp_id,
@@ -1763,6 +1770,7 @@ rte_cryptodev_remove_deq_callback(uint8_t dev_id,
 	rte_spinlock_unlock(&rte_cryptodev_callback_lock);
 	return ret;
 }
+#endif /* RTE_CRYPTO_CALLBACKS */
 
 int
 rte_cryptodev_stats_get(uint8_t dev_id, struct rte_cryptodev_stats *stats)

diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index 00ba6a2..357d4bc 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -1909,7 +1909,7 @@ rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
 
 	nb_ops = fp_ops->dequeue_burst(qp, ops, nb_ops);
 
-#ifdef RTE_CRYPTO_CALLBACKS
+#if RTE_CRYPTO_CALLBACKS
 	if (unlikely(fp_ops->qp.deq_cb != NULL)) {
 		struct rte_cryptodev_cb_rcu *list;
 		struct rte_cryptodev_cb *cb;
@@ -1976,7 +1976,7 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
 	fp_ops = &rte_crypto_fp_ops[dev_id];
 	qp = fp_ops->qp.data[qp_id];
-#ifdef RTE_CRYPTO_CALLBACKS
+#if RTE_CRYPTO_CALLBACKS
 	if (unlikely(fp_ops->qp.enq_cb != NULL)) {
 		struct rte_cryptodev_cb_rcu *list;
 		struct rte_cryptodev_cb *cb;

From patchwork Wed May 29 14:40:25 2024
X-Patchwork-Submitter: "Kundapura, Ganapati"
X-Patchwork-Id: 140393
X-Patchwork-Delegate: gakhil@marvell.com
From: Ganapati Kundapura
To: dev@dpdk.org, gakhil@marvell.com, abhinandan.gujjar@intel.com, ferruh.yigit@amd.com, thomas@monjalon.net, bruce.richardson@intel.com, fanzhang.oss@gmail.com, ciara.power@intel.com
Subject: [PATCH v2 2/2] crypto: validate crypto callbacks from next node
Date: Wed, 29 May 2024 09:40:25 -0500
Message-Id: <20240529144025.4089318-2-ganapati.kundapura@intel.com>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20240529144025.4089318-1-ganapati.kundapura@intel.com>
References: <20240416081222.3002268-1-ganapati.kundapura@intel.com> <20240529144025.4089318-1-ganapati.kundapura@intel.com>
List-Id: DPDK patches and discussions

Crypto callbacks are invoked after checking only the head node of the
callback list, which is always a valid pointer. This patch checks the
next node after the head node, to verify that callbacks are actually
registered, before invoking them.
Fixes: 1c3ffb95595e ("cryptodev: add enqueue and dequeue callbacks")

Signed-off-by: Ganapati Kundapura
Acked-by: Akhil Goyal
---
v2:
* Separated this patch from the combined patch

diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index 357d4bc..ce3ea36 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -1910,7 +1910,7 @@ rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
 	nb_ops = fp_ops->dequeue_burst(qp, ops, nb_ops);
 
 #if RTE_CRYPTO_CALLBACKS
-	if (unlikely(fp_ops->qp.deq_cb != NULL)) {
+	if (unlikely(fp_ops->qp.deq_cb[qp_id].next != NULL)) {
 		struct rte_cryptodev_cb_rcu *list;
 		struct rte_cryptodev_cb *cb;
@@ -1977,7 +1977,7 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
 	fp_ops = &rte_crypto_fp_ops[dev_id];
 	qp = fp_ops->qp.data[qp_id];
 #if RTE_CRYPTO_CALLBACKS
-	if (unlikely(fp_ops->qp.enq_cb != NULL)) {
+	if (unlikely(fp_ops->qp.enq_cb[qp_id].next != NULL)) {
 		struct rte_cryptodev_cb_rcu *list;
 		struct rte_cryptodev_cb *cb;