From patchwork Fri Jul 24 13:45:05 2020
X-Patchwork-Submitter: "Van Haaren, Harry"
X-Patchwork-Id: 74762
X-Patchwork-Delegate: david.marchand@redhat.com
From: Harry van Haaren
To: dev@dpdk.org
Cc: david.marchand@redhat.com, igor.romanov@oktetlabs.ru,
 honnappa.nagarahalli@arm.com, ferruh.yigit@intel.com, nd@arm.com,
 aconole@redhat.com, l.wojciechow@partner.samsung.com, phil.yang@arm.com,
 Harry van Haaren
Date: Fri, 24 Jul 2020 14:45:05 +0100
Message-Id: <20200724134506.11959-1-harry.van.haaren@intel.com>
In-Reply-To: <20200724124503.96282-1-harry.van.haaren@intel.com>
References: <20200724124503.96282-1-harry.van.haaren@intel.com>
Subject: [dpdk-dev] [PATCH v5 1/2] service: add API to retrieve service core active

This commit adds a new experimental API which allows the user to retrieve
the active state of an lcore. Knowing when a service lcore has completed
its polling loop can be useful to applications to avoid race conditions
when e.g. finalizing statistics.

The service thread itself now has a variable to indicate if its thread is
active. When zero, the service thread has completed its service and has
returned from the service_runner_func() function.

Suggested-by: Lukasz Wojciechowski
Signed-off-by: Harry van Haaren
Reviewed-by: Phil Yang
Reviewed-by: Honnappa Nagarahalli

---

v5:
- Fix typos (robot)

v4:
- Use _may_be_ style API for lcore_active (Honnappa)
- Fix missing tab indent (Honnappa)
- Add 'lcore' to doxygen retval description (Honnappa)

@Honnappa: Please note I did not update the doxygen title of the
lcore_may_be_active() function, as the current description is more accurate
than making it more consistent with other functions.

v3:
- Change service lcore stores to SEQ_CST (Honnappa, David)
- Change control thread load to ACQ (Honnappa, David)
- Comment reasons for SEQ_CST/ACQ (Honnappa, David)
- Add comments to Doxygen for _stop() and _lcore_active() (Honnappa, David)
- Add Phil's review tag from ML

---
 lib/librte_eal/common/rte_service.c  | 21 +++++++++++++++++++++
 lib/librte_eal/include/rte_service.h | 22 +++++++++++++++++++++-
 lib/librte_eal/rte_eal_version.map   |  1 +
 3 files changed, 43 insertions(+), 1 deletion(-)

diff --git a/lib/librte_eal/common/rte_service.c b/lib/librte_eal/common/rte_service.c
index 6a0e0ff65..98565bbef 100644
--- a/lib/librte_eal/common/rte_service.c
+++ b/lib/librte_eal/common/rte_service.c
@@ -65,6 +65,7 @@ struct core_state {
 	/* map of services IDs are run on this core */
 	uint64_t service_mask;
 	uint8_t runstate; /* running or stopped */
+	uint8_t thread_active; /* indicates when thread is in service_run() */
 	uint8_t is_service_core; /* set if core is currently a service core */
 	uint8_t service_active_on_lcore[RTE_SERVICE_NUM_MAX];
 	uint64_t loops;
@@ -457,6 +458,8 @@ service_runner_func(void *arg)
 	const int lcore = rte_lcore_id();
 	struct core_state *cs = &lcore_states[lcore];
 
+	__atomic_store_n(&cs->thread_active, 1, __ATOMIC_SEQ_CST);
+
 	/* runstate act as the guard variable. Use load-acquire
 	 * memory order here to synchronize with store-release
 	 * in runstate update functions.
@@ -475,9 +478,27 @@ service_runner_func(void *arg)
 		cs->loops++;
 	}
 
+	/* Use SEQ CST memory ordering to avoid any re-ordering around
+	 * this store, ensuring that once this store is visible, the service
+	 * lcore thread really is done in service cores code.
+	 */
+	__atomic_store_n(&cs->thread_active, 0, __ATOMIC_SEQ_CST);
 	return 0;
 }
 
+int32_t
+rte_service_lcore_may_be_active(uint32_t lcore)
+{
+	if (lcore >= RTE_MAX_LCORE || !lcore_states[lcore].is_service_core)
+		return -EINVAL;
+
+	/* Load thread_active using ACQUIRE to avoid instructions dependent on
+	 * the result being re-ordered before this load completes.
+	 */
+	return __atomic_load_n(&lcore_states[lcore].thread_active,
+			__ATOMIC_ACQUIRE);
+}
+
 int32_t
 rte_service_lcore_count(void)
 {
diff --git a/lib/librte_eal/include/rte_service.h b/lib/librte_eal/include/rte_service.h
index e2d0a6dd3..ca9950d09 100644
--- a/lib/librte_eal/include/rte_service.h
+++ b/lib/librte_eal/include/rte_service.h
@@ -249,7 +249,11 @@ int32_t rte_service_lcore_start(uint32_t lcore_id);
  * Stop a service core.
  *
  * Stopping a core makes the core become idle, but remains assigned as a
- * service core.
+ * service core. Note that the service lcore thread may not have returned from
+ * the service it is running when this API returns.
+ *
+ * The *rte_service_lcore_may_be_active* API can be used to check if the
+ * service lcore is still active.
  *
  * @retval 0 Success
  * @retval -EINVAL Invalid *lcore_id* provided
@@ -261,6 +265,22 @@ int32_t rte_service_lcore_start(uint32_t lcore_id);
  */
 int32_t rte_service_lcore_stop(uint32_t lcore_id);
 
+/**
+ * Reports if a service lcore is currently running.
+ *
+ * This function returns if the core has finished service cores code, and has
+ * returned to EAL control. If *rte_service_lcore_stop* has been called but
+ * the lcore has not returned to EAL yet, it might be required to wait and call
+ * this function again. The amount of time to wait before the core returns
+ * depends on the duration of the services being run.
+ *
+ * @retval 0 Service thread is not active, and lcore has been returned to EAL.
+ * @retval 1 Service thread is in the service core polling loop.
+ * @retval -EINVAL Invalid *lcore_id* provided.
+ */
+__rte_experimental
+int32_t rte_service_lcore_may_be_active(uint32_t lcore_id);
+
 /**
  * Adds lcore to the list of service cores.
  *
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index bf0c17c23..39826ef91 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -401,6 +401,7 @@ EXPERIMENTAL {
 	rte_lcore_dump;
 	rte_lcore_iterate;
 	rte_mp_disable;
+	rte_service_lcore_may_be_active;
 	rte_thread_register;
 	rte_thread_unregister;
 };
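
Usage note (illustrative only): a minimal sketch of how an application might
combine rte_service_lcore_stop() with the new API to avoid racing the service
thread when finalizing statistics. The collect_app_stats() hook is a
hypothetical application function, not part of this patch.

#include <rte_service.h>
#include <rte_pause.h>

static void collect_app_stats(void); /* hypothetical application hook */

static void
stop_service_lcore_and_collect(uint32_t service_lcore_id)
{
	/* Request the service lcore to stop. The lcore thread may still be
	 * executing its current service iteration when this call returns.
	 */
	rte_service_lcore_stop(service_lcore_id);

	/* Poll until the lcore thread has left the service polling loop:
	 * 1 means still active, 0 means it has returned to EAL control.
	 */
	while (rte_service_lcore_may_be_active(service_lcore_id) == 1)
		rte_pause();

	/* The service thread is done; statistics can be finalized safely. */
	collect_app_stats();
}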