[v2] test/lcores: reduce cpu consumption

Message ID 20240307141608.1450695-1-david.marchand@redhat.com (mailing list archive)
State Accepted, archived
Delegated to: David Marchand
Series [v2] test/lcores: reduce cpu consumption

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/loongarch-compilation success Compilation OK
ci/loongarch-unit-testing success Unit Testing PASS
ci/Intel-compilation success Compilation OK
ci/intel-Testing success Testing PASS
ci/intel-Functional success Functional PASS
ci/github-robot: build success github build: passed
ci/iol-intel-Performance success Performance Testing PASS
ci/iol-mellanox-Performance success Performance Testing PASS
ci/iol-abi-testing success Testing PASS
ci/iol-compile-amd64-testing success Testing PASS
ci/iol-unit-arm64-testing success Testing PASS
ci/iol-unit-amd64-testing success Testing PASS
ci/iol-compile-arm64-testing success Testing PASS
ci/iol-sample-apps-testing success Testing PASS
ci/iol-intel-Functional success Functional Testing PASS
ci/iol-broadcom-Performance success Performance Testing PASS
ci/iol-broadcom-Functional success Functional Testing PASS

Commit Message

David Marchand March 7, 2024, 2:16 p.m. UTC
  Busy looping on RTE_MAX_LCORES threads is too heavy in some CI or build
systems running the fast-test testsuite.
Ask for a reschedule at the threads' synchronisation points.

Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Luca Boccassi <bluca@debian.org>
---
Changes since v1:
- fix build with mingw,

---
 app/test/test_lcores.c | 20 +++++++++++++++-----
 1 file changed, 15 insertions(+), 5 deletions(-)
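
In essence, the patch replaces the empty bodies of the test's busy-wait loops with sched_yield() calls, so waiting threads give up the CPU instead of spinning. A minimal standalone sketch of the pattern (simplified with C11 atomics and hypothetical names; the real test uses __atomic_load_n and DPDK's lcore APIs, as the full diff at the bottom of this page shows):

  #include <sched.h>
  #include <stdatomic.h>

  static atomic_uint registered_count;

  /* Spin until all workers have registered, yielding the CPU on each
   * iteration instead of burning a core for the whole wait.
   */
  static void
  wait_for_workers(unsigned int expected)
  {
          while (atomic_load_explicit(&registered_count,
                          memory_order_acquire) != expected)
                  sched_yield();
  }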
  

Comments

Stephen Hemminger March 7, 2024, 5:06 p.m. UTC | #1
On Thu,  7 Mar 2024 15:16:06 +0100
David Marchand <david.marchand@redhat.com> wrote:

> Busy looping on RTE_MAX_LCORES threads is too heavy in some CI or build
> systems running the fast-test testsuite.
> Ask for a reschedule at the threads synchronisation points.
> 
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> Acked-by: Luca Boccassi <bluca@debian.org>
> ---

That test was always failing on my little desktop machine; now it works.

Tested-by: Stephen Hemminger <stephen@networkplumber.org>
  
Stephen Hemminger March 7, 2024, 5:08 p.m. UTC | #2
On Thu,  7 Mar 2024 15:16:06 +0100
David Marchand <david.marchand@redhat.com> wrote:

> +#ifndef _POSIX_PRIORITY_SCHEDULING
> +/* sched_yield(2):
> + * POSIX  systems on which sched_yield() is available define _POSIX_PRIOR‐
> + * ITY_SCHEDULING in <unistd.h>.
> + */
> +#define sched_yield()
> +#endif

Could you fix the awkward line break in that comment before merging :-)
  
David Marchand March 7, 2024, 6:06 p.m. UTC | #3
On Thu, Mar 7, 2024 at 3:16 PM David Marchand <david.marchand@redhat.com> wrote:
>
> Busy looping on RTE_MAX_LCORES threads is too heavy in some CI or build
> systems running the fast-test testsuite.
> Ask for a reschedule at the threads synchronisation points.
>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> Acked-by: Luca Boccassi <bluca@debian.org>

Ideally, this test should be rewritten with some kind of OS-agnostic
synchronisation/scheduling API (mutex?).
But I think it will be enough for now.

I updated the code comment as requested by Stephen.

Applied, thanks.
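
One possible shape for such a rewrite, sketched with plain pthreads for illustration only (the names are hypothetical, this is not what was merged, and a real DPDK version would need whatever portable mutex/condvar wrapper the tree provides):

  #include <pthread.h>

  static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
  static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
  static unsigned int count;

  /* Worker side: announce registration, wake the control thread. */
  static void
  mark_registered(void)
  {
          pthread_mutex_lock(&lock);
          count++;
          pthread_cond_broadcast(&cond);
          pthread_mutex_unlock(&lock);
  }

  /* Control side: sleep until the expected count is reached,
   * instead of spinning and yielding.
   */
  static void
  wait_registered(unsigned int expected)
  {
          pthread_mutex_lock(&lock);
          while (count != expected)
                  pthread_cond_wait(&cond, &lock);
          pthread_mutex_unlock(&lock);
  }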
  
Tyler Retzlaff March 7, 2024, 6:36 p.m. UTC | #4
On Thu, Mar 07, 2024 at 07:06:26PM +0100, David Marchand wrote:
> On Thu, Mar 7, 2024 at 3:16 PM David Marchand <david.marchand@redhat.com> wrote:
> >
> > Busy looping on RTE_MAX_LCORES threads is too heavy in some CI or build
> > systems running the fast-test testsuite.
> > Ask for a reschedule at the threads synchronisation points.
> >
> > Signed-off-by: David Marchand <david.marchand@redhat.com>
> > Acked-by: Luca Boccassi <bluca@debian.org>
> 
> Ideally, this test should be rewritten with some kind of OS-agnostic
> synchronisation/scheduling API (mutex?).
> But I think it will be enough for now.

It's okay, I'll eventually get to this :)

> 
> I updated the code comment as requested by Stephen.
> 
> Applied, thanks.
> 
> -- 
> David Marchand
  
Morten Brørup March 7, 2024, 7:25 p.m. UTC | #5
> From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> Sent: Thursday, 7 March 2024 19.37
> 
> On Thu, Mar 07, 2024 at 07:06:26PM +0100, David Marchand wrote:
> > On Thu, Mar 7, 2024 at 3:16 PM David Marchand
> <david.marchand@redhat.com> wrote:
> > >
> > > Busy looping on RTE_MAX_LCORES threads is too heavy in some CI or
> build
> > > systems running the fast-test testsuite.
> > > Ask for a reschedule at the threads synchronisation points.
> > >
> > > Signed-off-by: David Marchand <david.marchand@redhat.com>
> > > Acked-by: Luca Boccassi <bluca@debian.org>
> >
> > Ideally, this test should be rewritten with some kind of OS-agnostic
> > synchronisation/scheduling API (mutex?).
> > But I think it will be enough for now.
> 
> It's okay, I'll eventually get to this :)
> 

For future reference, it seems SwitchToThread() [1] resembles sched_yield() [2].

[1]: https://learn.microsoft.com/en-us/windows/win32/api/processthreadsapi/nf-processthreadsapi-switchtothread
[2]: https://linux.die.net/man/2/sched_yield
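
A hypothetical portable wrapper combining the two (illustrative sketch only; thread_yield() and the _WIN32 split are not existing DPDK API):

  #ifdef _WIN32
  #include <windows.h>
  /* SwitchToThread() yields only to ready threads on the current
   * processor and returns nonzero if a switch actually occurred.
   */
  #define thread_yield() ((void)SwitchToThread())
  #else
  #include <sched.h>
  #define thread_yield() ((void)sched_yield())
  #endif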

> >
> > I updated the code comment as requested by Stephen.
> >
> > Applied, thanks.
> >
> > --
> > David Marchand
  

Patch

diff --git a/app/test/test_lcores.c b/app/test/test_lcores.c
index 22225a9fd3..08c4e8dfba 100644
--- a/app/test/test_lcores.c
+++ b/app/test/test_lcores.c
@@ -2,7 +2,9 @@ 
  * Copyright (c) 2020 Red Hat, Inc.
  */
 
+#include <sched.h>
 #include <string.h>
+#include <unistd.h>
 
 #include <rte_common.h>
 #include <rte_errno.h>
@@ -11,6 +13,14 @@ 
 
 #include "test.h"
 
+#ifndef _POSIX_PRIORITY_SCHEDULING
+/* sched_yield(2):
+ * POSIX systems on which sched_yield() is available define
+ * _POSIX_PRIORITY_SCHEDULING in <unistd.h>.
+ */
+#define sched_yield()
+#endif
+
 struct thread_context {
 	enum { Thread_INIT, Thread_ERROR, Thread_DONE } state;
 	bool lcore_id_any;
@@ -43,7 +53,7 @@  static uint32_t thread_loop(void *arg)
 
 	/* Wait for release from the control thread. */
 	while (__atomic_load_n(t->registered_count, __ATOMIC_ACQUIRE) != 0)
-		;
+		sched_yield();
 	rte_thread_unregister();
 	lcore_id = rte_lcore_id();
 	if (lcore_id != LCORE_ID_ANY) {
@@ -85,7 +95,7 @@  test_non_eal_lcores(unsigned int eal_threads_count)
 	/* Wait all non-EAL threads to register. */
 	while (__atomic_load_n(&registered_count, __ATOMIC_ACQUIRE) !=
 			non_eal_threads_count)
-		;
+		sched_yield();
 
 	/* We managed to create the max number of threads, let's try to create
 	 * one more. This will allow one more check.
@@ -101,7 +111,7 @@  test_non_eal_lcores(unsigned int eal_threads_count)
 		printf("non-EAL threads count: %u\n", non_eal_threads_count);
 		while (__atomic_load_n(&registered_count, __ATOMIC_ACQUIRE) !=
 				non_eal_threads_count)
-			;
+			sched_yield();
 	}
 
 skip_lcore_any:
@@ -267,7 +277,7 @@  test_non_eal_lcores_callback(unsigned int eal_threads_count)
 	non_eal_threads_count++;
 	while (__atomic_load_n(&registered_count, __ATOMIC_ACQUIRE) !=
 			non_eal_threads_count)
-		;
+		sched_yield();
 	if (l[0].init != eal_threads_count + 1 ||
 			l[1].init != eal_threads_count + 1) {
 		printf("Error: incorrect init calls, expected %u, %u, got %u, %u\n",
@@ -290,7 +300,7 @@  test_non_eal_lcores_callback(unsigned int eal_threads_count)
 	non_eal_threads_count++;
 	while (__atomic_load_n(&registered_count, __ATOMIC_ACQUIRE) !=
 			non_eal_threads_count)
-		;
+		sched_yield();
 	if (l[0].init != eal_threads_count + 2 ||
 			l[1].init != eal_threads_count + 2) {
 		printf("Error: incorrect init calls, expected %u, %u, got %u, %u\n",