From patchwork Wed Feb 5 10:24:15 2020
From: Jerin Jacob <jerinj@marvell.com>
Date: Wed, 5 Feb 2020 15:54:15 +0530
Subject: [dpdk-dev] [PATCH] eal: store control thread CPU affinity in TLS
Message-ID: <20200205102416.698395-1-jerinj@marvell.com>
X-Mailer: git-send-email 2.24.1
List-Id: DPDK patches and discussions

From: Jerin Jacob

The _cpuset TLS variable stores the CPU affinity of an EAL thread.
Populate the _cpuset TLS variable for control threads as well, to:

1) make rte_thread_get_affinity() and eal_thread_dump_affinity() work
   for control threads;
2) give control threads quick access to their CPU affinity.
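
For reference (not part of the patch), here is a minimal sketch of the TLS
mechanism the message above refers to. It is a simplified, assumption-based
mock: the real declarations live in rte_per_lcore.h / rte_lcore.h and
eal_common_thread.c, and thread_get_affinity_sketch() is a stand-in name,
not the actual rte_thread_get_affinity() implementation.

#define _GNU_SOURCE
#include <sched.h>
#include <string.h>

/* Assumption for this sketch: on Linux, rte_cpuset_t maps to cpu_set_t. */
typedef cpu_set_t rte_cpuset_t;

/* RTE_DEFINE_PER_LCORE(rte_cpuset_t, _cpuset) essentially expands to a
 * __thread variable, so every EAL or control thread has its own copy. */
static __thread rte_cpuset_t per_lcore__cpuset;
#define RTE_PER_LCORE(name) (per_lcore_##name)

/* Stand-in for rte_thread_get_affinity(): it only copies the TLS value
 * out, which is why populating it for control threads is enough to make
 * the public API work for them too. */
static void thread_get_affinity_sketch(rte_cpuset_t *cpusetp)
{
	memmove(cpusetp, &RTE_PER_LCORE(_cpuset), sizeof(rte_cpuset_t));
}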
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
---
 lib/librte_eal/common/eal_common_thread.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/lib/librte_eal/common/eal_common_thread.c b/lib/librte_eal/common/eal_common_thread.c
index 78581753c..99fe1aa4e 100644
--- a/lib/librte_eal/common/eal_common_thread.c
+++ b/lib/librte_eal/common/eal_common_thread.c
@@ -152,10 +152,14 @@ struct rte_thread_ctrl_params {
 static void *rte_thread_init(void *arg)
 {
 	int ret;
+	rte_cpuset_t *cpuset = &internal_config.ctrl_cpuset;
 	struct rte_thread_ctrl_params *params = arg;
 	void *(*start_routine)(void *) = params->start_routine;
 	void *routine_arg = params->arg;
 
+	/* Store cpuset in TLS for quick access */
+	memmove(&RTE_PER_LCORE(_cpuset), cpuset, sizeof(rte_cpuset_t));
+
 	ret = pthread_barrier_wait(&params->configured);
 	if (ret == PTHREAD_BARRIER_SERIAL_THREAD) {
 		pthread_barrier_destroy(&params->configured);
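
As a usage note beyond the diff: with the TLS cpuset populated, a control
thread created via rte_ctrl_thread_create() can read its own affinity
through the public API. A hedged sketch follows, assuming a Linux build
where rte_cpuset_t is cpu_set_t (so CPU_COUNT() applies); the routine name
ctrl_main and the thread name "demo-ctrl" are made up for illustration.

#include <stdio.h>
#include <pthread.h>

#include <rte_eal.h>
#include <rte_lcore.h>

/* Hypothetical control-thread body: with this patch, the per-thread
 * _cpuset TLS is populated before the start routine runs, so the query
 * below reflects the control thread cpuset rather than an empty set. */
static void *ctrl_main(void *arg)
{
	rte_cpuset_t set;

	(void)arg;
	rte_thread_get_affinity(&set);
	printf("control thread affinity covers %d CPU(s)\n", CPU_COUNT(&set));
	return NULL;
}

int main(int argc, char **argv)
{
	pthread_t ctrl;

	if (rte_eal_init(argc, argv) < 0)
		return -1;
	/* "demo-ctrl" is an arbitrary thread name for this example. */
	if (rte_ctrl_thread_create(&ctrl, "demo-ctrl", NULL, ctrl_main, NULL) == 0)
		pthread_join(ctrl, NULL);
	return rte_eal_cleanup();
}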