From patchwork Tue Jan 24 20:45:55 2023
X-Patchwork-Submitter: "Carrillo, Erik G" <erik.g.carrillo@intel.com>
X-Patchwork-Id: 122506
X-Patchwork-Delegate: jerinj@marvell.com
From: Erik Gabriel Carrillo <erik.g.carrillo@intel.com>
To: jerinj@marvell.com
Cc: dev@dpdk.org, stable@dpdk.org
Subject: [PATCH v2] eventdev/timer: fix overflow issue
Date: Tue, 24 Jan 2023 14:45:55 -0600
Message-Id: <20230124204555.3022361-1-erik.g.carrillo@intel.com>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20230124160945.3003303-1-erik.g.carrillo@intel.com>
References: <20230124160945.3003303-1-erik.g.carrillo@intel.com>

The software timer adapter converts an event timer's timeout ticks to
the number of CPU cycles at which an rte_timer should expire. The
computation uses 64-bit integer operations whose intermediate product
can overflow. Perform the computation with floating point operations
instead, and convert the final result back to an integer type when
returning. Also move the logic that checks the timeout range into the
function that performs the above computation.
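To make the overflow concrete, below is a minimal standalone sketch
(not part of the patch: the 3 GHz clock rate, the variable names, and
the main() harness are illustrative stand-ins for rte_get_timer_hz()
and the adapter's tick parameters; build with -lm). It shows the old
integer expression wrapping while the floating-point form stays in
range:

#include <inttypes.h>
#include <math.h>
#include <stdint.h>
#include <stdio.h>

#define NSECPERSEC 1000000000ULL

int main(void)
{
	/* Illustrative values: a 3 GHz timer clock (stand-in for
	 * rte_get_timer_hz()) and a 10 second timeout.
	 */
	uint64_t timer_hz = 3000000000ULL;
	uint64_t timeout_ns = 10ULL * NSECPERSEC;

	/* Old computation: the intermediate product timeout_ns * timer_hz
	 * needs ~65 bits here, so the 64-bit multiplication wraps before
	 * the division can bring the value back into range.
	 */
	uint64_t wrapped = timeout_ns * timer_hz / NSECPERSEC;

	/* Patched approach: derive cycles-per-nanosecond in double
	 * precision first, so no oversized integer intermediate is formed.
	 */
	double cycles_per_nsec = (double)timer_hz / NSECPERSEC;
	uint64_t correct = (uint64_t)ceil(timeout_ns * cycles_per_nsec);

	/* Prints wrapped=11553255926 vs. correct=30000000000. */
	printf("wrapped=%" PRIu64 " correct=%" PRIu64 "\n", wrapped, correct);
	return 0;
}

Note that the patch also rounds the cycle count up with ceil(),
presumably so a timer never expires earlier than requested.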
Fixes: 6750b21bd6af ("eventdev: add default software timer adapter")
Cc: stable@dpdk.org

Signed-off-by: Erik Gabriel Carrillo <erik.g.carrillo@intel.com>
---
v2:
* Fix implicit int to float conversion build warning on Clang

 lib/eventdev/rte_event_timer_adapter.c | 71 ++++++++++++--------------
 1 file changed, 34 insertions(+), 37 deletions(-)

diff --git a/lib/eventdev/rte_event_timer_adapter.c b/lib/eventdev/rte_event_timer_adapter.c
index d357f7403a..2efeb73bce 100644
--- a/lib/eventdev/rte_event_timer_adapter.c
+++ b/lib/eventdev/rte_event_timer_adapter.c
@@ -719,13 +719,28 @@ swtim_callback(struct rte_timer *tim)
 	}
 }
 
-static __rte_always_inline uint64_t
+static __rte_always_inline int
 get_timeout_cycles(struct rte_event_timer *evtim,
-		   const struct rte_event_timer_adapter *adapter)
+		   const struct rte_event_timer_adapter *adapter,
+		   uint64_t *timeout_cycles)
 {
 	struct swtim *sw = swtim_pmd_priv(adapter);
-	uint64_t timeout_ns = evtim->timeout_ticks * sw->timer_tick_ns;
-	return timeout_ns * rte_get_timer_hz() / NSECPERSEC;
+	uint64_t timeout_nsecs;
+	double cycles_per_nsec;
+
+	cycles_per_nsec = (double)rte_get_timer_hz() / NSECPERSEC;
+
+	timeout_nsecs = evtim->timeout_ticks * sw->timer_tick_ns;
+	if (timeout_nsecs > sw->max_tmo_ns)
+		return -1;
+	if (timeout_nsecs < sw->timer_tick_ns)
+		return -2;
+
+	if (timeout_cycles)
+		*timeout_cycles = (uint64_t)ceil(timeout_nsecs *
+						 cycles_per_nsec);
+
+	return 0;
 }
 
 /* This function returns true if one or more (adapter) ticks have occurred since
@@ -759,23 +774,6 @@ swtim_did_tick(struct swtim *sw)
 	return false;
 }
 
-/* Check that event timer timeout value is in range */
-static __rte_always_inline int
-check_timeout(struct rte_event_timer *evtim,
-	      const struct rte_event_timer_adapter *adapter)
-{
-	uint64_t tmo_nsec;
-	struct swtim *sw = swtim_pmd_priv(adapter);
-
-	tmo_nsec = evtim->timeout_ticks * sw->timer_tick_ns;
-	if (tmo_nsec > sw->max_tmo_ns)
-		return -1;
-	if (tmo_nsec < sw->timer_tick_ns)
-		return -2;
-
-	return 0;
-}
-
 /* Check that event timer event queue sched type matches destination event queue
  * sched type
  */
@@ -1195,21 +1193,6 @@ __swtim_arm_burst(const struct rte_event_timer_adapter *adapter,
 			break;
 		}
 
-		ret = check_timeout(evtims[i], adapter);
-		if (unlikely(ret == -1)) {
-			__atomic_store_n(&evtims[i]->state,
-					RTE_EVENT_TIMER_ERROR_TOOLATE,
-					__ATOMIC_RELAXED);
-			rte_errno = EINVAL;
-			break;
-		} else if (unlikely(ret == -2)) {
-			__atomic_store_n(&evtims[i]->state,
-					RTE_EVENT_TIMER_ERROR_TOOEARLY,
-					__ATOMIC_RELAXED);
-			rte_errno = EINVAL;
-			break;
-		}
-
 		if (unlikely(check_destination_event_queue(evtims[i],
 							   adapter) < 0)) {
 			__atomic_store_n(&evtims[i]->state,
@@ -1225,7 +1208,21 @@ __swtim_arm_burst(const struct rte_event_timer_adapter *adapter,
 		evtims[i]->impl_opaque[0] = (uintptr_t)tim;
 		evtims[i]->impl_opaque[1] = (uintptr_t)adapter;
 
-		cycles = get_timeout_cycles(evtims[i], adapter);
+		ret = get_timeout_cycles(evtims[i], adapter, &cycles);
+		if (unlikely(ret == -1)) {
+			__atomic_store_n(&evtims[i]->state,
+					RTE_EVENT_TIMER_ERROR_TOOLATE,
+					__ATOMIC_RELAXED);
+			rte_errno = EINVAL;
+			break;
+		} else if (unlikely(ret == -2)) {
+			__atomic_store_n(&evtims[i]->state,
+					RTE_EVENT_TIMER_ERROR_TOOEARLY,
+					__ATOMIC_RELAXED);
+			rte_errno = EINVAL;
+			break;
+		}
+
 		ret = rte_timer_alt_reset(sw->timer_data_id, tim, cycles,
 					  type, lcore_id, NULL, evtims[i]);
 		if (ret < 0) {
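As a back-of-the-envelope check on when the removed integer path
wrapped (assuming, for illustration only, a 3 GHz timer clock; the
real rate comes from rte_get_timer_hz()):

    timeout_ns * 3000000000 > 2^64
    =>  timeout_ns > 2^64 / 3e9 ~= 6.15e9 ns ~= 6.15 s

so on such a system any event timer timeout beyond roughly six seconds
overflowed the uint64_t intermediate, comfortably within what
max_tmo_ns can be configured to allow. The patched code never forms
the oversized product, since it scales by the precomputed double
cycles_per_nsec instead.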