From patchwork Fri Apr 1 13:29:51 2022
X-Patchwork-Submitter: Tyler Retzlaff
X-Patchwork-Id: 109079
X-Patchwork-Delegate: david.marchand@redhat.com
From: Tyler Retzlaff
To: dev@dpdk.org
Cc: thomas@monjalon.net, dmitry.kozliuk@gmail.com, anatoly.burakov@intel.com, Tyler Retzlaff, Narcisa Vasile
Subject: [PATCH 1/3] eal/windows: translate Windows errors to errno-style errors
Date: Fri, 1 Apr 2022 06:29:51 -0700
Message-Id: <1648819793-18948-2-git-send-email-roretzla@linux.microsoft.com>
In-Reply-To: <1648819793-18948-1-git-send-email-roretzla@linux.microsoft.com>
References: <1648819793-18948-1-git-send-email-roretzla@linux.microsoft.com>

Add a function to translate Windows error codes to errno-style error
codes. The possible return values are chosen so that we have as much
semantic compatibility between platforms as possible.
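For reference, a minimal sketch of how the translation is meant to be
consumed: a failing Win32 call is logged once and converted into a
positive errno-style value that the cross-platform API can return
unchanged. The caller and the SuspendThread() call below are
hypothetical and not part of this patch; only thread_log_last_error()
comes from the diff that follows.

/*
 * Hypothetical caller, assumed to live in lib/eal/windows/rte_thread.c
 * next to the helpers added below.
 */
#include <rte_windows.h>	/* SuspendThread(), GetLastError() */

static int
example_suspend(HANDLE thread_handle)
{
	if (SuspendThread(thread_handle) == (DWORD)-1) {
		/* Log the Win32 error, return a positive errno value. */
		return thread_log_last_error("SuspendThread()");
	}

	return 0;	/* success is 0, matching the POSIX code path */
}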
Signed-off-by: Narcisa Vasile
Signed-off-by: Tyler Retzlaff
---
 lib/eal/windows/rte_thread.c | 48 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)

diff --git a/lib/eal/windows/rte_thread.c b/lib/eal/windows/rte_thread.c
index 667287c..b3b5362 100644
--- a/lib/eal/windows/rte_thread.c
+++ b/lib/eal/windows/rte_thread.c
@@ -11,6 +11,54 @@ struct eal_tls_key {
 	DWORD thread_index;
 };
 
+/* Translates the most common error codes related to threads */
+static int
+thread_translate_win32_error(DWORD error)
+{
+	switch (error) {
+	case ERROR_SUCCESS:
+		return 0;
+
+	case ERROR_INVALID_PARAMETER:
+		return EINVAL;
+
+	case ERROR_INVALID_HANDLE:
+		return EFAULT;
+
+	case ERROR_NOT_ENOUGH_MEMORY:
+		/* FALLTHROUGH */
+	case ERROR_NO_SYSTEM_RESOURCES:
+		return ENOMEM;
+
+	case ERROR_PRIVILEGE_NOT_HELD:
+		/* FALLTHROUGH */
+	case ERROR_ACCESS_DENIED:
+		return EACCES;
+
+	case ERROR_ALREADY_EXISTS:
+		return EEXIST;
+
+	case ERROR_POSSIBLE_DEADLOCK:
+		return EDEADLK;
+
+	case ERROR_INVALID_FUNCTION:
+		/* FALLTHROUGH */
+	case ERROR_CALL_NOT_IMPLEMENTED:
+		return ENOSYS;
+	}
+
+	return EINVAL;
+}
+
+static int
+thread_log_last_error(const char *message)
+{
+	DWORD error = GetLastError();
+	RTE_LOG(DEBUG, EAL, "GetLastError()=%lu: %s\n", error, message);
+
+	return thread_translate_win32_error(error);
+}
+
 int
 rte_thread_key_create(rte_thread_key *key,
 		__rte_unused void (*destructor)(void *))

From patchwork Fri Apr 1 13:29:52 2022
X-Patchwork-Submitter: Tyler Retzlaff
X-Patchwork-Id: 109080
X-Patchwork-Delegate: david.marchand@redhat.com
From: Tyler Retzlaff
To: dev@dpdk.org
Cc: thomas@monjalon.net, dmitry.kozliuk@gmail.com, anatoly.burakov@intel.com, Tyler Retzlaff, Narcisa Vasile
Subject: [PATCH 2/3] eal: implement functions for get/set thread affinity
Date: Fri, 1 Apr 2022 06:29:52 -0700
Message-Id: <1648819793-18948-3-git-send-email-roretzla@linux.microsoft.com>
In-Reply-To: <1648819793-18948-1-git-send-email-roretzla@linux.microsoft.com>
References: <1648819793-18948-1-git-send-email-roretzla@linux.microsoft.com>

Implement functions for getting/setting thread affinity.
Threads can be pinned to specific cores by setting their
affinity attribute.
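As a usage illustration (hypothetical, not part of this series),
pinning an existing thread to a single CPU with the new API could look
like the sketch below. It assumes a build with ALLOW_EXPERIMENTAL_API,
since the symbols are exported as experimental, and on Linux it also
assumes _GNU_SOURCE for the CPU_* macros.

#include <stdint.h>
#include <string.h>
#include <pthread.h>

#include <rte_thread.h>

/* Hypothetical helper: pin thread 'id' to CPU 'cpu'. */
static int
pin_thread_to_cpu(pthread_t id, unsigned int cpu)
{
	rte_thread_t thread_id;
	rte_cpuset_t cpuset;

	memset(&thread_id, 0, sizeof(thread_id));
	thread_id.opaque_id = (uintptr_t)id;

	CPU_ZERO(&cpuset);
	CPU_SET(cpu, &cpuset);

	/* 0 on success, positive errno-style error number on failure. */
	return rte_thread_set_affinity_by_id(thread_id, &cpuset);
}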
Signed-off-by: Narcisa Vasile
Signed-off-by: Tyler Retzlaff
Acked-by: Dmitry Kozlyuk
---
 lib/eal/include/rte_thread.h     |  45 ++++++++++
 lib/eal/unix/rte_thread.c        |  16 ++++
 lib/eal/version.map              |   4 +
 lib/eal/windows/eal_lcore.c      | 173 +++++++++++++++++++++++++++++----------
 lib/eal/windows/eal_windows.h    |  10 +++
 lib/eal/windows/include/rte_os.h |   2 +
 lib/eal/windows/rte_thread.c     | 131 ++++++++++++++++++++++++++++-
 7 files changed, 336 insertions(+), 45 deletions(-)

diff --git a/lib/eal/include/rte_thread.h b/lib/eal/include/rte_thread.h
index 8be8ed8..4eb113f 100644
--- a/lib/eal/include/rte_thread.h
+++ b/lib/eal/include/rte_thread.h
@@ -2,6 +2,8 @@
  * Copyright(c) 2021 Mellanox Technologies, Ltd
  */
 
+#include
+
 #include
 #include
 
@@ -21,6 +23,13 @@
 #endif
 
 /**
+ * Thread id descriptor.
+ */
+typedef struct rte_thread_tag {
+	uintptr_t opaque_id; /**< thread identifier */
+} rte_thread_t;
+
+/**
  * TLS key type, an opaque pointer.
  */
 typedef struct eal_tls_key *rte_thread_key;
@@ -28,6 +37,42 @@
 #ifdef RTE_HAS_CPUSET
 
 /**
+ * Set the affinity of thread 'thread_id' to the cpu set
+ * specified by 'cpuset'.
+ *
+ * @param thread_id
+ *    Id of the thread for which to set the affinity.
+ *
+ * @param cpuset
+ *   Pointer to CPU affinity to set.
+ *
+ * @return
+ *   On success, return 0.
+ *   On failure, return a positive errno-style error number.
+ */
+__rte_experimental
+int rte_thread_set_affinity_by_id(rte_thread_t thread_id,
+		const rte_cpuset_t *cpuset);
+
+/**
+ * Get the affinity of thread 'thread_id' and store it
+ * in 'cpuset'.
+ *
+ * @param thread_id
+ *    Id of the thread for which to get the affinity.
+ *
+ * @param cpuset
+ *   Pointer for storing the affinity value.
+ *
+ * @return
+ *   On success, return 0.
+ *   On failure, return a positive errno-style error number.
+ */
+__rte_experimental
+int rte_thread_get_affinity_by_id(rte_thread_t thread_id,
+		rte_cpuset_t *cpuset);
+
+/**
  * Set core affinity of the current thread.
  * Support both EAL and non-EAL thread and update TLS.
 *
diff --git a/lib/eal/unix/rte_thread.c b/lib/eal/unix/rte_thread.c
index c34ede9..3bb721d 100644
--- a/lib/eal/unix/rte_thread.c
+++ b/lib/eal/unix/rte_thread.c
@@ -89,3 +89,19 @@ struct eal_tls_key {
 	}
 	return pthread_getspecific(key->thread_index);
 }
+
+int
+rte_thread_set_affinity_by_id(rte_thread_t thread_id,
+		const rte_cpuset_t *cpuset)
+{
+	return pthread_setaffinity_np((pthread_t)thread_id.opaque_id,
+		sizeof(*cpuset), cpuset);
+}
+
+int
+rte_thread_get_affinity_by_id(rte_thread_t thread_id,
+		rte_cpuset_t *cpuset)
+{
+	return pthread_getaffinity_np((pthread_t)thread_id.opaque_id,
+		sizeof(*cpuset), cpuset);
+}
diff --git a/lib/eal/version.map b/lib/eal/version.map
index b53eeb3..c8d5177 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -420,6 +420,10 @@ EXPERIMENTAL {
 	rte_intr_instance_free;
 	rte_intr_type_get;
 	rte_intr_type_set;
+
+	# added in 22.07
+	rte_thread_get_affinity_by_id;
+	rte_thread_set_affinity_by_id;
 };
 
 INTERNAL {
diff --git a/lib/eal/windows/eal_lcore.c b/lib/eal/windows/eal_lcore.c
index 476c2d2..4ef5920 100644
--- a/lib/eal/windows/eal_lcore.c
+++ b/lib/eal/windows/eal_lcore.c
@@ -2,7 +2,6 @@
  * Copyright(c) 2019 Intel Corporation
  */
 
-#include
 #include
 #include
 
@@ -27,13 +26,15 @@ struct socket_map {
 };
 
 struct cpu_map {
-	unsigned int socket_count;
 	unsigned int lcore_count;
+	unsigned int socket_count;
+	unsigned int cpu_count;
 	struct lcore_map lcores[RTE_MAX_LCORE];
 	struct socket_map sockets[RTE_MAX_NUMA_NODES];
+	GROUP_AFFINITY cpus[CPU_SETSIZE];
 };
 
-static struct cpu_map cpu_map = { 0 };
+static struct cpu_map cpu_map;
 
 /* eal_create_cpu_map() is called before logging is initialized */
 static void
@@ -47,13 +48,116 @@ struct cpu_map {
 	va_end(va);
 }
 
+static int
+eal_query_group_affinity(void)
+{
+	SYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX *infos = NULL;
+	unsigned int *cpu_count = &cpu_map.cpu_count;
+	DWORD infos_size = 0;
+	int ret = 0;
+	USHORT group_count;
+	KAFFINITY affinity;
+	USHORT group_no;
+	unsigned int i;
+
+	if (!GetLogicalProcessorInformationEx(RelationGroup, NULL,
+			&infos_size)) {
+		DWORD error = GetLastError();
+		if (error != ERROR_INSUFFICIENT_BUFFER) {
+			log_early("Cannot get group information size, "
+				"error %lu\n", error);
+			rte_errno = EINVAL;
+			ret = -1;
+			goto cleanup;
+		}
+	}
+
+	infos = malloc(infos_size);
+	if (infos == NULL) {
+		log_early("Cannot allocate memory for NUMA node information\n");
+		rte_errno = ENOMEM;
+		ret = -1;
+		goto cleanup;
+	}
+
+	if (!GetLogicalProcessorInformationEx(RelationGroup, infos,
+			&infos_size)) {
+		log_early("Cannot get group information, error %lu\n",
+			GetLastError());
+		rte_errno = EINVAL;
+		ret = -1;
+		goto cleanup;
+	}
+
+	*cpu_count = 0;
+	group_count = infos->Group.ActiveGroupCount;
+	for (group_no = 0; group_no < group_count; group_no++) {
+		affinity = infos->Group.GroupInfo[group_no].ActiveProcessorMask;
+		for (i = 0; i < EAL_PROCESSOR_GROUP_SIZE; i++) {
+			if ((affinity & ((KAFFINITY)1 << i)) == 0)
+				continue;
+			cpu_map.cpus[*cpu_count].Group = group_no;
+			cpu_map.cpus[*cpu_count].Mask = (KAFFINITY)1 << i;
+			(*cpu_count)++;
+		}
+	}
+
+cleanup:
+	free(infos);
+	return ret;
+}
+
+static bool
+eal_create_lcore_map(const SYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX *info)
+{
+	const unsigned int node_id = info->NumaNode.NodeNumber;
+	const GROUP_AFFINITY *cores = &info->NumaNode.GroupMask;
+	struct lcore_map *lcore;
+	unsigned int socket_id;
+	unsigned int i;
+
+	/* NUMA node may be reported multiple times if it includes
+	 * cores from different processor groups, e. g. 80 cores
+	 * of a physical processor comprise one NUMA node, but two
+	 * processor groups, because group size is limited by 32/64.
+	 */
+	for (socket_id = 0; socket_id < cpu_map.socket_count; socket_id++) {
+		if (cpu_map.sockets[socket_id].node_id == node_id)
+			break;
+	}
+
+	if (socket_id == cpu_map.socket_count) {
+		if (socket_id == RTE_DIM(cpu_map.sockets))
+			return true;
+
+		cpu_map.sockets[socket_id].node_id = node_id;
+		cpu_map.socket_count++;
+	}
+
+	for (i = 0; i < EAL_PROCESSOR_GROUP_SIZE; i++) {
+		if ((cores->Mask & ((KAFFINITY)1 << i)) == 0)
+			continue;
+
+		if (cpu_map.lcore_count == RTE_DIM(cpu_map.lcores))
+			return true;
+
+		lcore = &cpu_map.lcores[cpu_map.lcore_count];
+		lcore->socket_id = socket_id;
+		lcore->core_id = cores->Group * EAL_PROCESSOR_GROUP_SIZE + i;
+		cpu_map.lcore_count++;
+	}
+	return false;
+}
+
 int
 eal_create_cpu_map(void)
 {
 	SYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX *infos, *info;
 	DWORD infos_size;
 	bool full = false;
+	int ret = 0;
+
 	infos = NULL;
 	infos_size = 0;
 
 	if (!GetLogicalProcessorInformationEx(
 			RelationNumaNode, NULL, &infos_size)) {
@@ -62,7 +166,8 @@ struct cpu_map {
 			log_early("Cannot get NUMA node info size, error %lu\n",
 				GetLastError());
 			rte_errno = ENOMEM;
-			return -1;
+			ret = -1;
+			goto exit;
 		}
 	}
 
@@ -83,52 +188,24 @@ struct cpu_map {
 
 	info = infos;
 	while ((uint8_t *)info - (uint8_t *)infos < infos_size) {
-		unsigned int node_id = info->NumaNode.NodeNumber;
-		GROUP_AFFINITY *cores = &info->NumaNode.GroupMask;
-		struct lcore_map *lcore;
-		unsigned int i, socket_id;
-
-		/* NUMA node may be reported multiple times if it includes
-		 * cores from different processor groups, e. g. 80 cores
-		 * of a physical processor comprise one NUMA node, but two
-		 * processor groups, because group size is limited by 32/64.
-		 */
-		for (socket_id = 0; socket_id < cpu_map.socket_count;
-		    socket_id++) {
-			if (cpu_map.sockets[socket_id].node_id == node_id)
-				break;
-		}
-
-		if (socket_id == cpu_map.socket_count) {
-			if (socket_id == RTE_DIM(cpu_map.sockets)) {
-				full = true;
-				goto exit;
-			}
-
-			cpu_map.sockets[socket_id].node_id = node_id;
-			cpu_map.socket_count++;
-		}
-
-		for (i = 0; i < EAL_PROCESSOR_GROUP_SIZE; i++) {
-			if ((cores->Mask & ((KAFFINITY)1 << i)) == 0)
-				continue;
-
-			if (cpu_map.lcore_count == RTE_DIM(cpu_map.lcores)) {
-				full = true;
-				goto exit;
-			}
-
-			lcore = &cpu_map.lcores[cpu_map.lcore_count];
-			lcore->socket_id = socket_id;
-			lcore->core_id =
-				cores->Group * EAL_PROCESSOR_GROUP_SIZE + i;
-			cpu_map.lcore_count++;
+		if (eal_create_lcore_map(info)) {
+			full = true;
+			break;
 		}
 
 		info = (SYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX *)(
 			(uint8_t *)info + info->Size);
 	}
 
+	if (eal_query_group_affinity()) {
+		/*
+		 * No need to set rte_errno here.
+		 * It is set by eal_query_group_affinity().
+		 */
+		ret = -1;
+		goto exit;
+	}
+
 exit:
 	if (full) {
 		/* Not a fatal error, but important for troubleshooting. */
@@ -164,3 +241,11 @@ struct cpu_map {
 {
 	return cpu_map.sockets[socket_id].node_id;
 }
+
+PGROUP_AFFINITY
+eal_get_cpu_affinity(size_t cpu_index)
+{
+	RTE_VERIFY(cpu_index < CPU_SETSIZE);
+
+	return &cpu_map.cpus[cpu_index];
+}
diff --git a/lib/eal/windows/eal_windows.h b/lib/eal/windows/eal_windows.h
index 245aa60..c96eca5 100644
--- a/lib/eal/windows/eal_windows.h
+++ b/lib/eal/windows/eal_windows.h
@@ -56,6 +56,16 @@ unsigned int eal_socket_numa_node(unsigned int socket_id);
 
 /**
+ * Get pointer to the group affinity for the cpu.
+ *
+ * @param cpu_index
+ *  Index of the cpu, as it comes from rte_cpuset_t.
+ * @return
+ *  Pointer to the group affinity for the cpu.
+ */
+PGROUP_AFFINITY eal_get_cpu_affinity(size_t cpu_index);
+
+/**
  * Schedule code for execution in the interrupt thread.
  *
  * @param func
diff --git a/lib/eal/windows/include/rte_os.h b/lib/eal/windows/include/rte_os.h
index a0a3114..1c33058 100644
--- a/lib/eal/windows/include/rte_os.h
+++ b/lib/eal/windows/include/rte_os.h
@@ -14,6 +14,8 @@
 #include
 #include
 
+#include
+
 #ifdef __cplusplus
 extern "C" {
 #endif
diff --git a/lib/eal/windows/rte_thread.c b/lib/eal/windows/rte_thread.c
index b3b5362..b4fbcc8 100644
--- a/lib/eal/windows/rte_thread.c
+++ b/lib/eal/windows/rte_thread.c
@@ -5,7 +5,8 @@
 #include
 #include
 #include
-#include
+
+#include "eal_windows.h"
 
 struct eal_tls_key {
 	DWORD thread_index;
 };
@@ -135,3 +136,131 @@ struct eal_tls_key {
 	}
 	return output;
 }
+
+static int
+rte_convert_cpuset_to_affinity(const rte_cpuset_t *cpuset,
+		PGROUP_AFFINITY affinity)
+{
+	int ret = 0;
+	PGROUP_AFFINITY cpu_affinity = NULL;
+	unsigned int cpu_idx;
+
+	memset(affinity, 0, sizeof(GROUP_AFFINITY));
+	affinity->Group = (USHORT)-1;
+
+	/* Check that all cpus of the set belong to the same processor group
+	 * and accumulate thread affinity to be applied.
+	 */
+	for (cpu_idx = 0; cpu_idx < CPU_SETSIZE; cpu_idx++) {
+		if (!CPU_ISSET(cpu_idx, cpuset))
+			continue;
+
+		cpu_affinity = eal_get_cpu_affinity(cpu_idx);
+
+		if (affinity->Group == (USHORT)-1) {
+			affinity->Group = cpu_affinity->Group;
+		} else if (affinity->Group != cpu_affinity->Group) {
+			ret = EINVAL;
+			goto cleanup;
+		}
+
+		affinity->Mask |= cpu_affinity->Mask;
+	}
+
+	if (affinity->Mask == 0) {
+		ret = EINVAL;
+		goto cleanup;
+	}
+
+cleanup:
+	return ret;
+}
+
+int
+rte_thread_set_affinity_by_id(rte_thread_t thread_id,
+		const rte_cpuset_t *cpuset)
+{
+	int ret = 0;
+	GROUP_AFFINITY thread_affinity;
+	HANDLE thread_handle = NULL;
+
+	if (cpuset == NULL) {
+		ret = EINVAL;
+		goto cleanup;
+	}
+
+	ret = rte_convert_cpuset_to_affinity(cpuset, &thread_affinity);
+	if (ret != 0) {
+		RTE_LOG(DEBUG, EAL, "Unable to convert cpuset to thread affinity\n");
+		goto cleanup;
+	}
+
+	thread_handle = OpenThread(THREAD_ALL_ACCESS, FALSE,
+		thread_id.opaque_id);
+	if (thread_handle == NULL) {
+		ret = thread_log_last_error("OpenThread()");
+		goto cleanup;
+	}
+
+	if (!SetThreadGroupAffinity(thread_handle, &thread_affinity, NULL)) {
+		ret = thread_log_last_error("SetThreadGroupAffinity()");
+		goto cleanup;
+	}
+
+cleanup:
+	if (thread_handle != NULL) {
+		CloseHandle(thread_handle);
+		thread_handle = NULL;
+	}
+
+	return ret;
+}
+
+int
+rte_thread_get_affinity_by_id(rte_thread_t thread_id,
+		rte_cpuset_t *cpuset)
+{
+	HANDLE thread_handle = NULL;
+	PGROUP_AFFINITY cpu_affinity;
+	GROUP_AFFINITY thread_affinity;
+	unsigned int cpu_idx;
+	int ret = 0;
+
+	if (cpuset == NULL) {
+		ret = EINVAL;
+		goto cleanup;
+	}
+
+	thread_handle = OpenThread(THREAD_ALL_ACCESS, FALSE,
+		thread_id.opaque_id);
+	if (thread_handle == NULL) {
+		ret = thread_log_last_error("OpenThread()");
+		goto cleanup;
+	}
+
+	/* obtain previous thread affinity */
+	if (!GetThreadGroupAffinity(thread_handle, &thread_affinity)) {
+		ret = thread_log_last_error("GetThreadGroupAffinity()");
+		goto cleanup;
+	}
+
+	CPU_ZERO(cpuset);
+
+	/* Convert affinity to DPDK cpu set */
+	for (cpu_idx = 0; cpu_idx < CPU_SETSIZE; cpu_idx++) {
+
+		cpu_affinity = eal_get_cpu_affinity(cpu_idx);
+
+		if ((cpu_affinity->Group == thread_affinity.Group) &&
+		    ((cpu_affinity->Mask & thread_affinity.Mask) != 0)) {
+			CPU_SET(cpu_idx, cpuset);
+		}
+	}
+
+cleanup:
+	if (thread_handle != NULL) {
+		CloseHandle(thread_handle);
+		thread_handle = NULL;
+	}
+	return ret;
+}
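To make the processor-group handling easier to follow, here is a small
worked example (hypothetical machine, not part of this patch) of the
cpu index to GROUP_AFFINITY mapping that eal_query_group_affinity()
builds and rte_convert_cpuset_to_affinity() consumes. It assumes every
group has all 64 CPUs active; with sparsely populated groups the
indices simply follow the order of active bits and the division below
would not hold.

/* Stand-alone illustration: two fully populated 64-CPU groups. */
#include <stdio.h>

int
main(void)
{
	unsigned int cpu_index = 70;           /* index as seen in rte_cpuset_t */
	unsigned int group = cpu_index / 64;   /* -> processor group 1 */
	unsigned long long mask = 1ULL << (cpu_index % 64); /* -> bit 6 */

	printf("cpu %u -> Group %u, Mask 0x%llx\n", cpu_index, group, mask);
	return 0;
}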
From patchwork Fri Apr 1 13:29:53 2022
X-Patchwork-Submitter: Tyler Retzlaff
X-Patchwork-Id: 109081
X-Patchwork-Delegate: david.marchand@redhat.com
From: Tyler Retzlaff
To: dev@dpdk.org
Cc: thomas@monjalon.net, dmitry.kozliuk@gmail.com, anatoly.burakov@intel.com, Tyler Retzlaff, Narcisa Vasile
Subject: [PATCH 3/3] test/threads: add unit test for thread API
Date: Fri, 1 Apr 2022 06:29:53 -0700
Message-Id: <1648819793-18948-4-git-send-email-roretzla@linux.microsoft.com>
In-Reply-To: <1648819793-18948-1-git-send-email-roretzla@linux.microsoft.com>
References: <1648819793-18948-1-git-send-email-roretzla@linux.microsoft.com>

Establish a unit test for the thread API, with initial test cases for
rte_thread_{get,set}_affinity_by_id().

Signed-off-by: Narcisa Vasile
Signed-off-by: Tyler Retzlaff
---
 app/test/meson.build    |  2 ++
 app/test/test_threads.c | 83 +++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 85 insertions(+)
 create mode 100644 app/test/test_threads.c

diff --git a/app/test/meson.build b/app/test/meson.build
index 5fc1dd1..5a9d69b 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -133,6 +133,7 @@ test_sources = files(
         'test_tailq.c',
         'test_thash.c',
         'test_thash_perf.c',
+        'test_threads.c',
         'test_timer.c',
         'test_timer_perf.c',
         'test_timer_racecond.c',
@@ -238,6 +239,7 @@ fast_tests = [
         ['reorder_autotest', true],
         ['service_autotest', true],
         ['thash_autotest', true],
+        ['threads_autotest', true],
         ['trace_autotest', true],
 ]
 
diff --git a/app/test/test_threads.c b/app/test/test_threads.c
new file mode 100644
index 0000000..61b97af
--- /dev/null
+++ b/app/test/test_threads.c
@@ -0,0 +1,83 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2021-2022 Microsoft.
+ */
+
+#include
+#include
+
+#include
+#include
+
+#include "test.h"
+
+RTE_LOG_REGISTER(threads_logtype_test, test.threads, INFO);
+
+static void *
+thread_main(void *arg)
+{
+	(void)arg;
+	return NULL;
+}
+
+static int
+test_thread_affinity(void)
+{
+	pthread_t id;
+	rte_thread_t thread_id;
+
+	RTE_TEST_ASSERT(pthread_create(&id, NULL, thread_main, NULL) == 0,
+		"Failed to create thread");
+	thread_id.opaque_id = id;
+
+	rte_cpuset_t cpuset0;
+	RTE_TEST_ASSERT(rte_thread_get_affinity_by_id(thread_id, &cpuset0) == 0,
+		"Failed to get thread affinity");
+
+	rte_cpuset_t cpuset1;
+	RTE_TEST_ASSERT(rte_thread_get_affinity_by_id(thread_id, &cpuset1) == 0,
+		"Failed to get thread affinity");
+	RTE_TEST_ASSERT(0 == memcmp(&cpuset0, &cpuset1, sizeof(rte_cpuset_t)),
+		"Affinity should be stable");
+
+	RTE_TEST_ASSERT(rte_thread_set_affinity_by_id(thread_id, &cpuset1) == 0,
+		"Failed to set thread affinity");
+	RTE_TEST_ASSERT(rte_thread_get_affinity_by_id(thread_id, &cpuset0) == 0,
+		"Failed to get thread affinity");
+	RTE_TEST_ASSERT(0 == memcmp(&cpuset0, &cpuset1, sizeof(rte_cpuset_t)),
+		"Affinity should be stable");
+
+	size_t i;
+	for (i = 1; i < CPU_SETSIZE; i++)
+		if (CPU_ISSET(i, &cpuset0)) {
+			CPU_ZERO(&cpuset0);
+			CPU_SET(i, &cpuset0);
+
+			break;
+		}
+	RTE_TEST_ASSERT(rte_thread_set_affinity_by_id(thread_id, &cpuset0) == 0,
+		"Failed to set thread affinity");
+	RTE_TEST_ASSERT(rte_thread_get_affinity_by_id(thread_id, &cpuset1) == 0,
+		"Failed to get thread affinity");
+	RTE_TEST_ASSERT(0 == memcmp(&cpuset0, &cpuset1, sizeof(rte_cpuset_t)),
+		"Affinity should be stable");
+
+	return 0;
+}
+
+static struct unit_test_suite threads_test_suite = {
+	.suite_name = "threads autotest",
+	.setup = NULL,
+	.teardown = NULL,
+	.unit_test_cases = {
+		TEST_CASE(test_thread_affinity),
+		TEST_CASES_END()
+	}
+};
+
+static int
+test_threads(void)
+{
+	return unit_test_suite_runner(&threads_test_suite);
+}
+
+REGISTER_TEST_COMMAND(threads_autotest, test_threads);