From patchwork Fri Mar 17 18:52:29 2023
X-Patchwork-Submitter: David Marchand
X-Patchwork-Id: 125231
X-Patchwork-Delegate: david.marchand@redhat.com
From: David Marchand
To: dev@dpdk.org
Cc: thomas@monjalon.net, stephen@networkplumber.org, Tyler Retzlaff,
 stable@dpdk.org, Narcisa Vasile, Dmitry Kozlyuk
Subject: [PATCH v6] eal/unix: fix thread creation
Date: Fri, 17 Mar 2023 19:52:29 +0100
Message-Id: <20230317185229.449011-1-david.marchand@redhat.com>
In-Reply-To: <1677782682-27200-1-git-send-email-roretzla@linux.microsoft.com>
References: <1677782682-27200-1-git-send-email-roretzla@linux.microsoft.com>

From: Tyler Retzlaff

In rte_thread_create, setting the affinity after pthread_create may
fail. Such a failure should cause the entire rte_thread_create call to
fail, but it doesn't. Additionally, if setting the affinity fails, a
race exists where the creating thread frees ctx and, depending on
scheduling, the newly created thread may also free ctx (double free).

Resolve the above by setting the affinity from the newly created
thread, using a condition variable to signal that the thread start
wrapper has completed.
Since we now wait for the thread start wrapper to complete, the wrapper
context can be allocated on the stack. While here, clean up the variable
naming in the context to better highlight which fields require
synchronization between the creating and created threads.

Fixes: ce6e911d20f6 ("eal: add thread lifetime API")
Cc: stable@dpdk.org

Signed-off-by: Tyler Retzlaff
Signed-off-by: David Marchand
Reviewed-by: Tyler Retzlaff
---
Changes since v5:
- dropped volatile and switched to boolean for wrapper_done,
- reverted to rte_thread_set_affinity_by_id() call,
---
 lib/eal/unix/rte_thread.c | 73 ++++++++++++++++++++++++---------------
 1 file changed, 45 insertions(+), 28 deletions(-)

diff --git a/lib/eal/unix/rte_thread.c b/lib/eal/unix/rte_thread.c
index 37ebfcfca1..f4076122a4 100644
--- a/lib/eal/unix/rte_thread.c
+++ b/lib/eal/unix/rte_thread.c
@@ -5,6 +5,7 @@
 
 #include <errno.h>
 #include <pthread.h>
+#include <stdbool.h>
 #include <stdlib.h>
 #include <string.h>
 
@@ -16,9 +17,14 @@ struct eal_tls_key {
 	pthread_key_t thread_index;
 };
 
-struct thread_routine_ctx {
+struct thread_start_context {
 	rte_thread_func thread_func;
-	void *routine_args;
+	void *thread_args;
+	const rte_thread_attr_t *thread_attr;
+	pthread_mutex_t wrapper_mutex;
+	pthread_cond_t wrapper_cond;
+	int wrapper_ret;
+	bool wrapper_done;
 };
 
 static int
@@ -81,13 +87,29 @@ thread_map_os_priority_to_eal_priority(int policy, int os_pri,
 }
 
 static void *
-thread_func_wrapper(void *arg)
+thread_start_wrapper(void *arg)
 {
-	struct thread_routine_ctx ctx = *(struct thread_routine_ctx *)arg;
+	struct thread_start_context *ctx = (struct thread_start_context *)arg;
+	rte_thread_func thread_func = ctx->thread_func;
+	void *thread_args = ctx->thread_args;
+	int ret = 0;
+
+	if (ctx->thread_attr != NULL && CPU_COUNT(&ctx->thread_attr->cpuset) > 0) {
+		ret = rte_thread_set_affinity_by_id(rte_thread_self(), &ctx->thread_attr->cpuset);
+		if (ret != 0)
+			RTE_LOG(DEBUG, EAL, "rte_thread_set_affinity_by_id failed\n");
+	}
 
-	free(arg);
+	pthread_mutex_lock(&ctx->wrapper_mutex);
+	ctx->wrapper_ret = ret;
+	ctx->wrapper_done = true;
+	pthread_cond_signal(&ctx->wrapper_cond);
+	pthread_mutex_unlock(&ctx->wrapper_mutex);
 
-	return (void *)(uintptr_t)ctx.thread_func(ctx.routine_args);
+	if (ret != 0)
+		return NULL;
+
+	return (void *)(uintptr_t)thread_func(thread_args);
 }
 
 int
@@ -98,20 +120,18 @@ rte_thread_create(rte_thread_t *thread_id,
 	int ret = 0;
 	pthread_attr_t attr;
 	pthread_attr_t *attrp = NULL;
-	struct thread_routine_ctx *ctx;
 	struct sched_param param = {
 		.sched_priority = 0,
 	};
 	int policy = SCHED_OTHER;
-
-	ctx = calloc(1, sizeof(*ctx));
-	if (ctx == NULL) {
-		RTE_LOG(DEBUG, EAL, "Insufficient memory for thread context allocations\n");
-		ret = ENOMEM;
-		goto cleanup;
-	}
-	ctx->routine_args = args;
-	ctx->thread_func = thread_func;
+	struct thread_start_context ctx = {
+		.thread_func = thread_func,
+		.thread_args = args,
+		.thread_attr = thread_attr,
+		.wrapper_done = false,
+		.wrapper_mutex = PTHREAD_MUTEX_INITIALIZER,
+		.wrapper_cond = PTHREAD_COND_INITIALIZER,
+	};
 
 	if (thread_attr != NULL) {
 		ret = pthread_attr_init(&attr);
@@ -133,7 +153,6 @@ rte_thread_create(rte_thread_t *thread_id,
 			goto cleanup;
 		}
 
-
 		if (thread_attr->priority ==
 				RTE_THREAD_PRIORITY_REALTIME_CRITICAL) {
 			ret = ENOTSUP;
@@ -158,24 +177,22 @@ rte_thread_create(rte_thread_t *thread_id,
 	}
 
 	ret = pthread_create((pthread_t *)&thread_id->opaque_id, attrp,
-		thread_func_wrapper, ctx);
+		thread_start_wrapper, &ctx);
 	if (ret != 0) {
 		RTE_LOG(DEBUG, EAL, "pthread_create failed\n");
 		goto cleanup;
 	}
 
-	if (thread_attr != NULL && CPU_COUNT(&thread_attr->cpuset) > 0) {
-		ret = rte_thread_set_affinity_by_id(*thread_id,
-			&thread_attr->cpuset);
-		if (ret != 0) {
-			RTE_LOG(DEBUG, EAL, "rte_thread_set_affinity_by_id failed\n");
-			goto cleanup;
-		}
-	}
+	pthread_mutex_lock(&ctx.wrapper_mutex);
+	while (!ctx.wrapper_done)
+		pthread_cond_wait(&ctx.wrapper_cond, &ctx.wrapper_mutex);
+	ret = ctx.wrapper_ret;
+	pthread_mutex_unlock(&ctx.wrapper_mutex);
+
+	if (ret != 0)
+		pthread_join((pthread_t)thread_id->opaque_id, NULL);
 
-	ctx = NULL;
 cleanup:
-	free(ctx);
 	if (attrp != NULL)
 		pthread_attr_destroy(&attr);
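
[Editor's note] For readers unfamiliar with the handshake this patch
introduces, the following is a minimal, self-contained sketch of the same
pattern outside of DPDK: the creating thread keeps the start context on its
own stack, the new thread performs its setup, publishes the result under a
mutex, and signals a condition variable; the creator waits for that signal
before returning, and joins the thread if setup failed. This is not the
patch's code or the DPDK API; all names (start_ctx, create_thread_checked,
do_setup_in_new_thread, worker) are hypothetical. Build with -pthread.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical analogue of the patch's thread_start_context: it lives on
 * the creator's stack, so the new thread must not touch it after signalling. */
struct start_ctx {
	void *(*user_func)(void *);
	void *user_arg;
	pthread_mutex_t mutex;
	pthread_cond_t cond;
	int setup_ret;  /* result of the setup done inside the new thread */
	bool done;      /* true once setup_ret is valid */
};

/* Stand-in for per-thread setup such as setting CPU affinity. */
static int
do_setup_in_new_thread(void)
{
	return 0; /* return non-zero to simulate a setup failure */
}

static void *
start_wrapper(void *arg)
{
	struct start_ctx *ctx = arg;
	/* Copy what is needed later: ctx becomes invalid after the handshake. */
	void *(*user_func)(void *) = ctx->user_func;
	void *user_arg = ctx->user_arg;
	int ret = do_setup_in_new_thread();

	pthread_mutex_lock(&ctx->mutex);
	ctx->setup_ret = ret;
	ctx->done = true;
	pthread_cond_signal(&ctx->cond);
	pthread_mutex_unlock(&ctx->mutex);

	return ret != 0 ? NULL : user_func(user_arg);
}

static int
create_thread_checked(pthread_t *tid, void *(*func)(void *), void *arg)
{
	struct start_ctx ctx = {
		.user_func = func,
		.user_arg = arg,
		.mutex = PTHREAD_MUTEX_INITIALIZER,
		.cond = PTHREAD_COND_INITIALIZER,
		.done = false,
	};
	int ret = pthread_create(tid, NULL, start_wrapper, &ctx);

	if (ret != 0)
		return ret;

	/* Wait until the new thread has published its setup result; only then
	 * may ctx safely go out of scope. */
	pthread_mutex_lock(&ctx.mutex);
	while (!ctx.done)
		pthread_cond_wait(&ctx.cond, &ctx.mutex);
	ret = ctx.setup_ret;
	pthread_mutex_unlock(&ctx.mutex);

	if (ret != 0)
		pthread_join(*tid, NULL); /* reap the thread that bailed out */
	return ret;
}

static void *
worker(void *arg)
{
	printf("worker running, arg=%p\n", arg);
	return NULL;
}

int
main(void)
{
	pthread_t tid;

	if (create_thread_checked(&tid, worker, NULL) != 0) {
		fprintf(stderr, "thread creation failed\n");
		return EXIT_FAILURE;
	}
	pthread_join(tid, NULL);
	return EXIT_SUCCESS;
}

The property this sketch mirrors from the patch is that the created thread
never touches the start context after unlocking the mutex, so a stack
allocation in the creator is safe and the double free described in the
commit message cannot occur.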