eventdev/timer: fix overflow issue

Message ID 20230124160945.3003303-1-erik.g.carrillo@intel.com (mailing list archive)
State Superseded, archived
Delegated to: Jerin Jacob
Series eventdev/timer: fix overflow issue

Checks

Context | Check | Description
ci/checkpatch | success | coding style OK
ci/Intel-compilation | fail | Compilation issues
ci/iol-broadcom-Performance | success | Performance Testing PASS
ci/intel-Testing | success | Testing PASS
ci/github-robot: build | fail | github build: failed
ci/iol-mellanox-Performance | success | Performance Testing PASS
ci/iol-intel-Functional | success | Functional Testing PASS
ci/iol-intel-Performance | fail | Performance Testing issues
ci/iol-x86_64-compile-testing | fail | Testing issues
ci/iol-testing | fail | Testing issues
ci/iol-x86_64-unit-testing | fail | Testing issues
ci/iol-aarch64-unit-testing | fail | Testing issues
ci/loongarch-compilation | success | Compilation OK
ci/loongarch-unit-testing | success | Unit Testing PASS
ci/iol-abi-testing | success | Testing PASS
ci/iol-aarch64-compile-testing | fail | Testing issues

Commit Message

Carrillo, Erik G Jan. 24, 2023, 4:09 p.m. UTC
  The software timer adapter converts event timer timeout ticks to a
number of CPU cycles at which an rte_timer should expire. The
computation uses integer operations that can result in overflow.

Use floating point operations instead to perform the computation, and
convert the final result back to an integer type when returning. Also
move the logic that checks the timeout range into the function that
performs the above computation.

Fixes: 6750b21bd6af ("eventdev: add default software timer adapter")
Cc: stable@dpdk.org

Signed-off-by: Erik Gabriel Carrillo <erik.g.carrillo@intel.com>
---
 lib/eventdev/rte_event_timer_adapter.c | 72 +++++++++++++-------------
 1 file changed, 35 insertions(+), 37 deletions(-)
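
For context on the overflow described in the commit message, here is a minimal, self-contained sketch (not adapter code; the 3 GHz timer clock and 10 second timeout are illustrative assumptions) showing how the pre-fix integer arithmetic wraps a 64-bit intermediate product:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NSECPERSEC 1000000000ULL

    int main(void)
    {
        /* Illustrative values: a 10 s event timer timeout and a 3 GHz
         * timer clock (what rte_get_timer_hz() might report). */
        uint64_t timeout_ns = 10ULL * NSECPERSEC;
        uint64_t hz = 3000000000ULL;

        /* Pre-fix arithmetic: timeout_ns * hz = 3.0e19, which wraps
         * modulo 2^64 (~1.8e19) before the division is applied. */
        uint64_t wrapped = timeout_ns * hz / NSECPERSEC;

        /* Overflow-free reference for comparison: divide first. */
        uint64_t expected = timeout_ns / NSECPERSEC * hz;

        printf("wrapped  = %" PRIu64 " cycles\n", wrapped);
        printf("expected = %" PRIu64 " cycles\n", expected);
        return 0;
    }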
  

Comments

Stephen Hemminger Jan. 25, 2023, 2:47 a.m. UTC | #1
On Tue, 24 Jan 2023 10:09:45 -0600
Erik Gabriel Carrillo <erik.g.carrillo@intel.com> wrote:

> The software timer adapter converts event timer timeout ticks to a
> number of CPU cycles at which an rte_timer should expire. The
> computation uses integer operations that can result in overflow.
> 
> Use floating point operations instead to perform the computation, and
> convert the final result back to an integer type when returning. Also
> move the logic that checks the timeout range into the function that
> performs the above computation.
> 
> Fixes: 6750b21bd6af ("eventdev: add default software timer adapter")
> Cc: stable@dpdk.org

Don't like this solution.
Floating point is slow and inaccurate.
You can do it with fixed point math if you are careful.
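
A minimal sketch of the fixed-point approach suggested here, splitting the timeout into whole seconds and a sub-second remainder so that no intermediate product exceeds 64 bits (ns_to_cycles is a hypothetical helper, not part of the patch):

    #include <stdint.h>

    #define NSECPERSEC 1000000000ULL

    /* Hypothetical helper: convert a nanosecond timeout to timer cycles
     * without floating point by splitting the division. */
    static inline uint64_t
    ns_to_cycles(uint64_t timeout_ns, uint64_t hz)
    {
        uint64_t secs = timeout_ns / NSECPERSEC;
        uint64_t rem_ns = timeout_ns % NSECPERSEC;

        /* rem_ns < 1e9 and hz is a few billion, so rem_ns * hz fits in
         * 64 bits; secs * hz stays in range for any timeout that the
         * adapter's max_tmo_ns check would accept anyway. */
        return secs * hz + rem_ns * hz / NSECPERSEC;
    }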
  
Jerin Jacob Jan. 25, 2023, 5:07 a.m. UTC | #2
On Wed, Jan 25, 2023 at 8:17 AM Stephen Hemminger
<stephen@networkplumber.org> wrote:
>
> On Tue, 24 Jan 2023 10:09:45 -0600
> Erik Gabriel Carrillo <erik.g.carrillo@intel.com> wrote:
>
> > The software timer adapter converts event timer timeout ticks to a
> > number of CPU cycles at which an rte_timer should expire. The
> > computation uses integer operations that can result in overflow.
> >
> > Use floating point operations instead to perform the computation, and
> > convert the final result back to an integer type when returning. Also
> > move the logic that checks the timeout range into the function that
> > performs the above computation.
> >
> > Fixes: 6750b21bd6af ("eventdev: add default software timer adapter")
> > Cc: stable@dpdk.org
>
> Don't like this solution.
> Floating point is slow and inaccurate.
> You can do it with fixed point math if you are careful.

Looks like Stephen replied to v1, hence the comment is not showing up here:
https://patches.dpdk.org/project/dpdk/patch/20230124204555.3022361-1-erik.g.carrillo@intel.com/
@Erik Gabriel Carrillo Can the following be moved to the slow path?
+ cycles_per_nsec = (double)rte_get_timer_hz() / NSECPERSEC;
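
A sketch of what hoisting that division out of the arm fast path could look like (the file-scope cycles_per_nsec below stands in for a hypothetical cached field in struct swtim; the function names are illustrative):

    #include <math.h>
    #include <stdint.h>

    #define NSECPERSEC 1000000000ULL

    /* Hypothetical cached conversion factor, filled once at adapter init
     * instead of being recomputed for every armed timer. */
    static double cycles_per_nsec;

    /* Slow path: called once, e.g. from the adapter init handler. */
    static void
    conv_init(uint64_t timer_hz)
    {
        cycles_per_nsec = (double)timer_hz / NSECPERSEC;
    }

    /* Fast path: per armed timer, only a multiply and a ceil() remain. */
    static uint64_t
    ns_to_cycles(uint64_t timeout_ns)
    {
        return (uint64_t)ceil(timeout_ns * cycles_per_nsec);
    }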
  
Stephen Hemminger Jan. 25, 2023, 2:53 p.m. UTC | #3
On Wed, 25 Jan 2023 10:37:16 +0530
Jerin Jacob <jerinjacobk@gmail.com> wrote:

> Looks like Stephen replied to v1, hence the comment is not showing up here:
> https://patches.dpdk.org/project/dpdk/patch/20230124204555.3022361-1-erik.g.carrillo@intel.com/
> @Erik Gabriel Carrillo Can the following be moved to the slow path?
> + cycles_per_nsec = (double)rte_get_timer_hz() / NSECPERSEC;

Best solution is to use rte_reciprocal
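
A rough sketch of how rte_reciprocal could fit (assuming the rte_reciprocal_u64 helpers from <rte_reciprocal.h>; the function names are hypothetical, and this reuses the seconds/remainder split sketched earlier, with the divisions by the constant NSECPERSEC done via a precomputed reciprocal):

    #include <rte_cycles.h>
    #include <rte_reciprocal.h>

    #define NSECPERSEC 1000000000ULL

    /* Reciprocal of the constant divisor, computed once in the slow path. */
    static struct rte_reciprocal_u64 nsec_recip;

    static void
    conv_init(void)
    {
        nsec_recip = rte_reciprocal_value_u64(NSECPERSEC);
    }

    /* Fast path: each division by NSECPERSEC becomes a multiply/shift
     * through the precomputed reciprocal. */
    static uint64_t
    ns_to_cycles(uint64_t timeout_ns)
    {
        uint64_t hz = rte_get_timer_hz();
        uint64_t secs = rte_reciprocal_divide_u64(timeout_ns, &nsec_recip);
        uint64_t rem_ns = timeout_ns - secs * NSECPERSEC;

        return secs * hz + rte_reciprocal_divide_u64(rem_ns * hz, &nsec_recip);
    }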
  

Patch

diff --git a/lib/eventdev/rte_event_timer_adapter.c b/lib/eventdev/rte_event_timer_adapter.c
index d357f7403a..dbfb91ec1c 100644
--- a/lib/eventdev/rte_event_timer_adapter.c
+++ b/lib/eventdev/rte_event_timer_adapter.c
@@ -719,13 +719,29 @@  swtim_callback(struct rte_timer *tim)
 	}
 }
 
-static __rte_always_inline uint64_t
+static __rte_always_inline int
 get_timeout_cycles(struct rte_event_timer *evtim,
-		   const struct rte_event_timer_adapter *adapter)
+		   const struct rte_event_timer_adapter *adapter,
+		   uint64_t *timeout_cycles)
 {
 	struct swtim *sw = swtim_pmd_priv(adapter);
-	uint64_t timeout_ns = evtim->timeout_ticks * sw->timer_tick_ns;
-	return timeout_ns * rte_get_timer_hz() / NSECPERSEC;
+	uint64_t timeout_nsecs;
+	double cycles_per_nsec;
+
+	cycles_per_nsec = (double)rte_get_timer_hz() / NSECPERSEC;
+
+	timeout_nsecs = evtim->timeout_ticks * sw->timer_tick_ns;
+	if (timeout_nsecs > sw->max_tmo_ns ||
+	    timeout_nsecs > UINT64_MAX / cycles_per_nsec)
+		return -1;
+	if (timeout_nsecs < sw->timer_tick_ns)
+		return -2;
+
+	if (timeout_cycles)
+		*timeout_cycles = (uint64_t)ceil(timeout_nsecs *
+						 cycles_per_nsec);
+
+	return 0;
 }
 
 /* This function returns true if one or more (adapter) ticks have occurred since
@@ -759,23 +775,6 @@  swtim_did_tick(struct swtim *sw)
 	return false;
 }
 
-/* Check that event timer timeout value is in range */
-static __rte_always_inline int
-check_timeout(struct rte_event_timer *evtim,
-	      const struct rte_event_timer_adapter *adapter)
-{
-	uint64_t tmo_nsec;
-	struct swtim *sw = swtim_pmd_priv(adapter);
-
-	tmo_nsec = evtim->timeout_ticks * sw->timer_tick_ns;
-	if (tmo_nsec > sw->max_tmo_ns)
-		return -1;
-	if (tmo_nsec < sw->timer_tick_ns)
-		return -2;
-
-	return 0;
-}
-
 /* Check that event timer event queue sched type matches destination event queue
  * sched type
  */
@@ -1195,21 +1194,6 @@  __swtim_arm_burst(const struct rte_event_timer_adapter *adapter,
 			break;
 		}
 
-		ret = check_timeout(evtims[i], adapter);
-		if (unlikely(ret == -1)) {
-			__atomic_store_n(&evtims[i]->state,
-					RTE_EVENT_TIMER_ERROR_TOOLATE,
-					__ATOMIC_RELAXED);
-			rte_errno = EINVAL;
-			break;
-		} else if (unlikely(ret == -2)) {
-			__atomic_store_n(&evtims[i]->state,
-					RTE_EVENT_TIMER_ERROR_TOOEARLY,
-					__ATOMIC_RELAXED);
-			rte_errno = EINVAL;
-			break;
-		}
-
 		if (unlikely(check_destination_event_queue(evtims[i],
 							   adapter) < 0)) {
 			__atomic_store_n(&evtims[i]->state,
@@ -1225,7 +1209,21 @@  __swtim_arm_burst(const struct rte_event_timer_adapter *adapter,
 		evtims[i]->impl_opaque[0] = (uintptr_t)tim;
 		evtims[i]->impl_opaque[1] = (uintptr_t)adapter;
 
-		cycles = get_timeout_cycles(evtims[i], adapter);
+		ret = get_timeout_cycles(evtims[i], adapter, &cycles);
+		if (unlikely(ret == -1)) {
+			__atomic_store_n(&evtims[i]->state,
+					RTE_EVENT_TIMER_ERROR_TOOLATE,
+					__ATOMIC_RELAXED);
+			rte_errno = EINVAL;
+			break;
+		} else if (unlikely(ret == -2)) {
+			__atomic_store_n(&evtims[i]->state,
+					RTE_EVENT_TIMER_ERROR_TOOEARLY,
+					__ATOMIC_RELAXED);
+			rte_errno = EINVAL;
+			break;
+		}
+
 		ret = rte_timer_alt_reset(sw->timer_data_id, tim, cycles,
 					  type, lcore_id, NULL, evtims[i]);
 		if (ret < 0) {