From patchwork Sat Jun 18 09:02:52 2022
X-Patchwork-Submitter: Li Zhang
X-Patchwork-Id: 113066
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Li Zhang
Subject: [PATCH v4 09/15] vdpa/mlx5: add task ring for MT management
Date: Sat, 18 Jun 2022 12:02:52 +0300
Message-ID: <20220618090258.91157-10-lizh@nvidia.com>
In-Reply-To: <20220618090258.91157-1-lizh@nvidia.com>
References: <20220408075606.33056-1-lizh@nvidia.com>
 <20220618090258.91157-1-lizh@nvidia.com>
X-Mailer: git-send-email 2.31.1
List-Id: DPDK patches and discussions <dev.dpdk.org>

The configuration threads' tasks need a container that supports
multiple tasks assigned to a thread in parallel.
Use an rte_ring container per thread to manage the thread tasks
without locks.
The caller thread from the user context opens a task to a thread and
enqueues it to the thread's ring.
The thread polls its ring and dequeues tasks; that is why the ring
should be in multi-producer and single-consumer mode.
An atomic counter manages the task completion notification.
The threads report errors to the caller through a dedicated error
counter per task.

Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 drivers/vdpa/mlx5/mlx5_vdpa.h         |  17 ++++
 drivers/vdpa/mlx5/mlx5_vdpa_cthread.c | 115 +++++++++++++++++++++++++-
 2 files changed, 130 insertions(+), 2 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index 4e7c2557b7..2bbb868ec6 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -74,10 +74,22 @@ enum {
 };
 
 #define MLX5_VDPA_MAX_C_THRD 256
+#define MLX5_VDPA_MAX_TASKS_PER_THRD 4096
+#define MLX5_VDPA_TASKS_PER_DEV 64
+
+/* Generic task information and size must be multiple of 4B. */
+struct mlx5_vdpa_task {
+	struct mlx5_vdpa_priv *priv;
+	uint32_t *remaining_cnt;
+	uint32_t *err_cnt;
+	uint32_t idx;
+} __rte_packed __rte_aligned(4);
 
 /* Generic mlx5_vdpa_c_thread information. */
 struct mlx5_vdpa_c_thread {
 	pthread_t tid;
+	struct rte_ring *rng;
+	pthread_cond_t c_cond;
 };
 
 struct mlx5_vdpa_conf_thread_mng {
@@ -532,4 +544,9 @@ mlx5_vdpa_mult_threads_create(int cpu_core);
  */
 void
 mlx5_vdpa_mult_threads_destroy(bool need_unlock);
+
+bool
+mlx5_vdpa_task_add(struct mlx5_vdpa_priv *priv,
+		uint32_t thrd_idx,
+		uint32_t num);
 #endif /* RTE_PMD_MLX5_VDPA_H_ */
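
For context on the two counters in struct mlx5_vdpa_task: this patch
increments *remaining_cnt when a task is opened and decrements it when
a worker thread dequeues the task, so a caller can wait for completion
by polling the counter and then inspecting the error counter. A minimal
caller-side sketch of that pattern (the helper name example_wait_tasks
and the polling loop are illustrative only, not part of this patch):

/* Illustrative completion wait on the per-task counters. */
#include <stdbool.h>
#include <stdint.h>
#include <unistd.h>

static bool
example_wait_tasks(uint32_t *remaining_cnt, uint32_t *err_cnt)
{
	/* Each opened task incremented *remaining_cnt by one. */
	while (__atomic_load_n(remaining_cnt, __ATOMIC_RELAXED) != 0)
		usleep(100); /* Worker threads decrement it per task. */
	/* Workers are expected to bump *err_cnt per failed task. */
	return __atomic_load_n(err_cnt, __ATOMIC_RELAXED) == 0;
}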
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c b/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
index ba7d8b63b3..1fdc92d3ad 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
@@ -11,17 +11,103 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 
 #include "mlx5_vdpa_utils.h"
 #include "mlx5_vdpa.h"
 
+static inline uint32_t
+mlx5_vdpa_c_thrd_ring_dequeue_bulk(struct rte_ring *r,
+		void **obj, uint32_t n, uint32_t *avail)
+{
+	uint32_t m;
+
+	m = rte_ring_dequeue_bulk_elem_start(r, obj,
+		sizeof(struct mlx5_vdpa_task), n, avail);
+	n = (m == n) ? n : 0;
+	rte_ring_dequeue_elem_finish(r, n);
+	return n;
+}
+
+static inline uint32_t
+mlx5_vdpa_c_thrd_ring_enqueue_bulk(struct rte_ring *r,
+		void * const *obj, uint32_t n, uint32_t *free)
+{
+	uint32_t m;
+
+	m = rte_ring_enqueue_bulk_elem_start(r, n, free);
+	n = (m == n) ? n : 0;
+	rte_ring_enqueue_elem_finish(r, obj,
+		sizeof(struct mlx5_vdpa_task), n);
+	return n;
+}
+
+bool
+mlx5_vdpa_task_add(struct mlx5_vdpa_priv *priv,
+		uint32_t thrd_idx,
+		uint32_t num)
+{
+	struct rte_ring *rng = conf_thread_mng.cthrd[thrd_idx].rng;
+	struct mlx5_vdpa_task task[MLX5_VDPA_TASKS_PER_DEV];
+	uint32_t i;
+
+	MLX5_ASSERT(num <= MLX5_VDPA_TASKS_PER_DEV);
+	for (i = 0 ; i < num; i++) {
+		task[i].priv = priv;
+		/* To be added later. */
+	}
+	if (!mlx5_vdpa_c_thrd_ring_enqueue_bulk(rng, (void **)&task, num, NULL))
+		return -1;
+	for (i = 0 ; i < num; i++)
+		if (task[i].remaining_cnt)
+			__atomic_fetch_add(task[i].remaining_cnt, 1,
+				__ATOMIC_RELAXED);
+	/* wake up conf thread. */
+	pthread_mutex_lock(&conf_thread_mng.cthrd_lock);
+	pthread_cond_signal(&conf_thread_mng.cthrd[thrd_idx].c_cond);
+	pthread_mutex_unlock(&conf_thread_mng.cthrd_lock);
+	return 0;
+}
+
 static void *
 mlx5_vdpa_c_thread_handle(void *arg)
 {
-	/* To be added later. */
-	return arg;
+	struct mlx5_vdpa_conf_thread_mng *multhrd = arg;
+	pthread_t thread_id = pthread_self();
+	struct mlx5_vdpa_priv *priv;
+	struct mlx5_vdpa_task task;
+	struct rte_ring *rng;
+	uint32_t thrd_idx;
+	uint32_t task_num;
+
+	for (thrd_idx = 0; thrd_idx < multhrd->max_thrds;
+		thrd_idx++)
+		if (multhrd->cthrd[thrd_idx].tid == thread_id)
+			break;
+	if (thrd_idx >= multhrd->max_thrds)
+		return NULL;
+	rng = multhrd->cthrd[thrd_idx].rng;
+	while (1) {
+		task_num = mlx5_vdpa_c_thrd_ring_dequeue_bulk(rng,
+			(void **)&task, 1, NULL);
+		if (!task_num) {
+			/* No task and condition wait. */
+			pthread_mutex_lock(&multhrd->cthrd_lock);
+			pthread_cond_wait(
+				&multhrd->cthrd[thrd_idx].c_cond,
+				&multhrd->cthrd_lock);
+			pthread_mutex_unlock(&multhrd->cthrd_lock);
+		}
+		priv = task.priv;
+		if (priv == NULL)
+			continue;
+		__atomic_fetch_sub(task.remaining_cnt,
+			1, __ATOMIC_RELAXED);
+		/* To be added later. */
+	}
+	return NULL;
 }
 
 static void
@@ -34,6 +120,10 @@ mlx5_vdpa_c_thread_destroy(uint32_t thrd_idx, bool need_unlock)
 		if (need_unlock)
 			pthread_mutex_init(&conf_thread_mng.cthrd_lock, NULL);
 	}
+	if (conf_thread_mng.cthrd[thrd_idx].rng) {
+		rte_ring_free(conf_thread_mng.cthrd[thrd_idx].rng);
+		conf_thread_mng.cthrd[thrd_idx].rng = NULL;
+	}
 }
 
 static int
@@ -45,6 +135,7 @@ mlx5_vdpa_c_thread_create(int cpu_core)
 	rte_cpuset_t cpuset;
 	pthread_attr_t attr;
 	uint32_t thrd_idx;
+	uint32_t ring_num;
 	char name[32];
 	int ret;
 
@@ -60,8 +151,26 @@ mlx5_vdpa_c_thread_create(int cpu_core)
 		DRV_LOG(ERR, "Failed to set thread priority.");
 		goto c_thread_err;
 	}
+	ring_num = MLX5_VDPA_MAX_TASKS_PER_THRD / conf_thread_mng.max_thrds;
+	if (!ring_num) {
+		DRV_LOG(ERR, "Invalid ring number for thread.");
+		goto c_thread_err;
+	}
 	for (thrd_idx = 0; thrd_idx < conf_thread_mng.max_thrds;
 		thrd_idx++) {
+		snprintf(name, sizeof(name), "vDPA-mthread-ring-%d",
+			thrd_idx);
+		conf_thread_mng.cthrd[thrd_idx].rng = rte_ring_create_elem(name,
+			sizeof(struct mlx5_vdpa_task), ring_num,
+			rte_socket_id(),
+			RING_F_MP_HTS_ENQ | RING_F_MC_HTS_DEQ |
+			RING_F_EXACT_SZ);
+		if (!conf_thread_mng.cthrd[thrd_idx].rng) {
+			DRV_LOG(ERR,
+				"Failed to create vdpa multi-threads %d ring.",
+				thrd_idx);
+			goto c_thread_err;
+		}
 		ret = pthread_create(&conf_thread_mng.cthrd[thrd_idx].tid,
 				&attr, mlx5_vdpa_c_thread_handle,
 				(void *)&conf_thread_mng);
@@ -91,6 +200,8 @@ mlx5_vdpa_c_thread_create(int cpu_core)
 				name);
 		else
 			DRV_LOG(DEBUG, "Thread name: %s.", name);
+		pthread_cond_init(&conf_thread_mng.cthrd[thrd_idx].c_cond,
+			NULL);
 	}
 	pthread_mutex_unlock(&conf_thread_mng.cthrd_lock);
 	return 0;
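
A note on the ring configuration above: the *_elem_start()/*_elem_finish()
pairs used by the two helpers are DPDK's ring "peek" API, declared in
<rte_ring_peek.h>, and that API only works with the HTS synchronization
modes requested here via RING_F_MP_HTS_ENQ/RING_F_MC_HTS_DEQ (or with
single-producer/single-consumer rings). A minimal standalone sketch of
the same create/enqueue pattern; the element type and ring name are
hypothetical, not taken from the patch:

/* Standalone sketch of the HTS ring + peek API pattern. */
#include <rte_ring.h>
#include <rte_ring_elem.h>
#include <rte_ring_peek.h>
#include <rte_lcore.h>

/* Element size must be a multiple of 4B for rte_ring_create_elem(). */
struct example_task {
	uint32_t idx;
};

static struct rte_ring *
example_ring_create(void)
{
	return rte_ring_create_elem("example-task-ring",
		sizeof(struct example_task), 1024, rte_socket_id(),
		RING_F_MP_HTS_ENQ | RING_F_MC_HTS_DEQ | RING_F_EXACT_SZ);
}

static uint32_t
example_enqueue(struct rte_ring *r, struct example_task *tasks, uint32_t n)
{
	/* Reserve n slots; bulk semantics are all-or-nothing. */
	uint32_t m = rte_ring_enqueue_bulk_elem_start(r, n, NULL);

	if (m != n)
		n = 0;
	/* Copy the elements in and commit; n == 0 commits nothing. */
	rte_ring_enqueue_elem_finish(r, tasks,
		sizeof(struct example_task), n);
	return n;
}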