[v2,5/6] event/sw: report idle when no work is performed

Message ID 20221005091615.94652-6-mattias.ronnblom@ericsson.com (mailing list archive)
State Accepted, archived
Delegated to: David Marchand
Series Service cores performance and statistics improvements

Checks

Context        Check     Description
ci/checkpatch  success   coding style OK

Commit Message

Mattias Rönnblom Oct. 5, 2022, 9:16 a.m. UTC
  Have the SW event device conform to the service core convention, where
-EAGAIN is returned in case no work was performed.

Prior to this patch, for an idle SW event device, a service lcore load
estimate based on RTE_SERVICE_ATTR_CYCLES would suggest 48% core
load.

At 7% of its maximum capacity, the SW event device needs about 15% of
the available CPU cycles* to perform its duties, but
RTE_SERVICE_ATTR_CYCLES would suggest the SW service used 48% of the
service core.

After this change, load deduced from RTE_SERVICE_ATTR_CYCLES will only
be a minor overestimation of the actual cycles used.

* The SW scheduler becomes more efficient at higher loads.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
 drivers/event/sw/sw_evdev.c           | 3 +--
 drivers/event/sw/sw_evdev.h           | 2 +-
 drivers/event/sw/sw_evdev_scheduler.c | 6 ++++--
 3 files changed, 6 insertions(+), 5 deletions(-)
  

Patch

diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
index bfa9469e29..3531821dd4 100644
--- a/drivers/event/sw/sw_evdev.c
+++ b/drivers/event/sw/sw_evdev.c
@@ -934,8 +934,7 @@  set_refill_once(const char *key __rte_unused, const char *value, void *opaque)
 static int32_t sw_sched_service_func(void *args)
 {
 	struct rte_eventdev *dev = args;
-	sw_event_schedule(dev);
-	return 0;
+	return sw_event_schedule(dev);
 }
 
 static int
diff --git a/drivers/event/sw/sw_evdev.h b/drivers/event/sw/sw_evdev.h
index 4fd1054470..8542b7d34d 100644
--- a/drivers/event/sw/sw_evdev.h
+++ b/drivers/event/sw/sw_evdev.h
@@ -295,7 +295,7 @@  uint16_t sw_event_enqueue_burst(void *port, const struct rte_event ev[],
 uint16_t sw_event_dequeue(void *port, struct rte_event *ev, uint64_t wait);
 uint16_t sw_event_dequeue_burst(void *port, struct rte_event *ev, uint16_t num,
 			uint64_t wait);
-void sw_event_schedule(struct rte_eventdev *dev);
+int32_t sw_event_schedule(struct rte_eventdev *dev);
 int sw_xstats_init(struct sw_evdev *dev);
 int sw_xstats_uninit(struct sw_evdev *dev);
 int sw_xstats_get_names(const struct rte_eventdev *dev,
diff --git a/drivers/event/sw/sw_evdev_scheduler.c b/drivers/event/sw/sw_evdev_scheduler.c
index 809a54d731..8bc21944f5 100644
--- a/drivers/event/sw/sw_evdev_scheduler.c
+++ b/drivers/event/sw/sw_evdev_scheduler.c
@@ -506,7 +506,7 @@  sw_schedule_pull_port_dir(struct sw_evdev *sw, uint32_t port_id)
 	return pkts_iter;
 }
 
-void
+int32_t
 sw_event_schedule(struct rte_eventdev *dev)
 {
 	struct sw_evdev *sw = sw_pmd_priv(dev);
@@ -517,7 +517,7 @@  sw_event_schedule(struct rte_eventdev *dev)
 
 	sw->sched_called++;
 	if (unlikely(!sw->started))
-		return;
+		return -EAGAIN;
 
 	do {
 		uint32_t in_pkts_this_iteration = 0;
@@ -610,4 +610,6 @@  sw_event_schedule(struct rte_eventdev *dev)
 	sw->sched_last_iter_bitmask = cqs_scheds_last_iter;
 	if (unlikely(sw->port_count >= 64))
 		sw->sched_last_iter_bitmask = UINT64_MAX;
+
+	return work_done ? 0 : -EAGAIN;
 }