From patchwork Tue Nov 29 15:33:27 2022
X-Patchwork-Submitter: Robin Jarry
X-Patchwork-Id: 120290
X-Patchwork-Delegate: thomas@monjalon.net
From: Robin Jarry <rjarry@redhat.com>
To: dev@dpdk.org
Cc: Robin Jarry, Morten Brørup
Subject: [PATCH v3 2/4] eal: allow applications to report their cpu cycles usage
Date: Tue, 29 Nov 2022 16:33:27 +0100
Message-Id: <20221129153329.181652-3-rjarry@redhat.com>
In-Reply-To: <20221129153329.181652-1-rjarry@redhat.com>
References: <20221123102612.1688865-1-rjarry@redhat.com>
 <20221129153329.181652-1-rjarry@redhat.com>

Allow applications to register a callback that will be invoked in
rte_lcore_dump() and when requesting lcore info in the telemetry API.
The callback is expected to return the number of CPU cycles that have
passed since application start and the number of these cycles that were
spent doing busy work.

Signed-off-by: Robin Jarry
Acked-by: Morten Brørup
---
v2 -> v3:

- Copied callback to a local variable to guard against (unlikely) races.
- Used the != NULL convention to test whether the callback is defined.
- Fixed typo in doc string.
- Did not add a % value in rte_lcore_dump() as its use would be very limited.
v1 -> v2:

Changed the approach based on Morten's review: the callback is now
expected to report the total number of cycles since application start
and the number of those cycles that were spent doing busy work. This
gives external monitoring tools more flexibility to choose the sample
period over which a busyness ratio is computed.

 lib/eal/common/eal_common_lcore.c | 35 ++++++++++++++++++++++++++++---
 lib/eal/include/rte_lcore.h       | 29 +++++++++++++++++++++++++
 lib/eal/version.map               |  1 +
 3 files changed, 62 insertions(+), 3 deletions(-)

diff --git a/lib/eal/common/eal_common_lcore.c b/lib/eal/common/eal_common_lcore.c
index 16548977dce8..23717abf6530 100644
--- a/lib/eal/common/eal_common_lcore.c
+++ b/lib/eal/common/eal_common_lcore.c
@@ -2,6 +2,7 @@
  * Copyright(c) 2010-2014 Intel Corporation
  */
 
+#include <inttypes.h>
 #include 
 #include 
 
@@ -422,11 +423,21 @@ rte_lcore_iterate(rte_lcore_iterate_cb cb, void *arg)
 	return ret;
 }
 
+static rte_lcore_usage_cb lcore_usage_cb;
+
+void
+rte_lcore_register_usage_cb(rte_lcore_usage_cb cb)
+{
+	lcore_usage_cb = cb;
+}
+
 static int
 lcore_dump_cb(unsigned int lcore_id, void *arg)
 {
 	struct rte_config *cfg = rte_eal_get_configuration();
-	char cpuset[RTE_CPU_AFFINITY_STR_LEN];
+	char cpuset[RTE_CPU_AFFINITY_STR_LEN], usage_str[256];
+	uint64_t busy_cycles, total_cycles;
+	rte_lcore_usage_cb usage_cb;
 	const char *role;
 	FILE *f = arg;
 	int ret;
@@ -446,11 +457,20 @@ lcore_dump_cb(unsigned int lcore_id, void *arg)
 		break;
 	}
 
+	busy_cycles = 0;
+	total_cycles = 0;
+	usage_str[0] = '\0';
+	usage_cb = lcore_usage_cb;
+	if (usage_cb != NULL && usage_cb(lcore_id, &busy_cycles, &total_cycles) == 0) {
+		snprintf(usage_str, sizeof(usage_str), ", busy cycles %"PRIu64"/%"PRIu64,
+			busy_cycles, total_cycles);
+	}
 	ret = eal_thread_dump_affinity(&lcore_config[lcore_id].cpuset, cpuset,
 		sizeof(cpuset));
-	fprintf(f, "lcore %u, socket %u, role %s, cpuset %s%s\n", lcore_id,
+	fprintf(f, "lcore %u, socket %u, role %s, cpuset %s%s%s\n", lcore_id,
 		rte_lcore_to_socket_id(lcore_id), role, cpuset,
-		ret == 0 ? "" : "...");
+		ret == 0 ? "" : "...", usage_str);
+
 	return 0;
 }
 
@@ -489,7 +509,9 @@ lcore_telemetry_info_cb(unsigned int lcore_id, void *arg)
 {
 	struct lcore_telemetry_info *info = arg;
 	struct rte_config *cfg = rte_eal_get_configuration();
+	uint64_t busy_cycles, total_cycles;
 	struct rte_tel_data *cpuset;
+	rte_lcore_usage_cb usage_cb;
 	const char *role;
 	unsigned int cpu;
 
@@ -522,6 +544,13 @@ lcore_telemetry_info_cb(unsigned int lcore_id, void *arg)
 		if (CPU_ISSET(cpu, &lcore_config[lcore_id].cpuset))
 			rte_tel_data_add_array_int(cpuset, cpu);
 	rte_tel_data_add_dict_container(info->d, "cpuset", cpuset, 0);
+	busy_cycles = 0;
+	total_cycles = 0;
+	usage_cb = lcore_usage_cb;
+	if (usage_cb != NULL && usage_cb(lcore_id, &busy_cycles, &total_cycles) == 0) {
+		rte_tel_data_add_dict_u64(info->d, "busy_cycles", busy_cycles);
+		rte_tel_data_add_dict_u64(info->d, "total_cycles", total_cycles);
+	}
 
 	return 0;
 }
 
diff --git a/lib/eal/include/rte_lcore.h b/lib/eal/include/rte_lcore.h
index 6938c3fd7b81..0552e6f44142 100644
--- a/lib/eal/include/rte_lcore.h
+++ b/lib/eal/include/rte_lcore.h
@@ -328,6 +328,35 @@ typedef int (*rte_lcore_iterate_cb)(unsigned int lcore_id, void *arg);
 int
 rte_lcore_iterate(rte_lcore_iterate_cb cb, void *arg);
 
+/**
+ * Callback to allow applications to report CPU usage.
+ *
+ * @param [in] lcore_id
+ *   The lcore to consider.
+ * @param [out] busy_cycles
+ *   The number of busy CPU cycles since the application start.
+ * @param [out] total_cycles
+ *   The total number of CPU cycles since the application start.
+ * @return
+ *   - 0 if both busy and total were set correctly.
+ *   - a negative value if the information is not available or if any error occurred.
+ */
+typedef int (*rte_lcore_usage_cb)(
+	unsigned int lcore_id, uint64_t *busy_cycles, uint64_t *total_cycles);
+
+/**
+ * Register a callback from an application to be called in rte_lcore_dump()
+ * and the /eal/lcore/info telemetry endpoint handler.
+ *
+ * Applications are expected to report the amount of busy and total CPU cycles
+ * since their startup.
+ *
+ * @param cb
+ *   The callback function.
+ */
+__rte_experimental
+void rte_lcore_register_usage_cb(rte_lcore_usage_cb cb);
+
 /**
  * List all lcores.
  *
diff --git a/lib/eal/version.map b/lib/eal/version.map
index 7ad12a7dc985..30fd216a12ea 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -440,6 +440,7 @@ EXPERIMENTAL {
 	rte_thread_detach;
 	rte_thread_equal;
 	rte_thread_join;
+	rte_lcore_register_usage_cb;
 };
 
 INTERNAL {
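
For illustration only (not part of this patch): a minimal sketch of how an
application might implement and register the usage callback. The per-lcore
accounting (struct app_lcore_stats, app_stats, app_init_usage_reporting()) is
hypothetical application state; only rte_lcore_register_usage_cb() and the
rte_lcore_usage_cb signature come from this series.

    #include <stdint.h>

    #include <rte_cycles.h>
    #include <rte_lcore.h>

    /* Hypothetical per-lcore accounting maintained by the application. */
    struct app_lcore_stats {
            uint64_t start_tsc;   /* TSC value sampled when the lcore started polling */
            uint64_t busy_cycles; /* cycles spent doing actual work, updated by the lcore */
    };

    static struct app_lcore_stats app_stats[RTE_MAX_LCORE];

    /* Matches the rte_lcore_usage_cb signature introduced by this patch. */
    static int
    app_lcore_usage(unsigned int lcore_id, uint64_t *busy_cycles, uint64_t *total_cycles)
    {
            const struct app_lcore_stats *s = &app_stats[lcore_id];

            if (s->start_tsc == 0)
                    return -1; /* no usage information for this lcore */

            *busy_cycles = s->busy_cycles;
            *total_cycles = rte_rdtsc() - s->start_tsc;
            return 0;
    }

    /* Called once after rte_eal_init(), before the main loop is launched. */
    void
    app_init_usage_reporting(void)
    {
            rte_lcore_register_usage_cb(app_lcore_usage);
    }

With such a callback registered, rte_lcore_dump() appends
", busy cycles <busy>/<total>" to each lcore line, and the telemetry handler
adds the busy_cycles and total_cycles entries to its output.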
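
As the v1 -> v2 note explains, the callback exposes monotonically increasing
counters rather than a precomputed percentage, so an external monitoring tool
chooses its own sample period: it reads busy_cycles/total_cycles twice (for
example via the /eal/lcore/info telemetry endpoint) and divides the deltas.
A sketch of that arithmetic, with illustrative names (lcore_sample,
lcore_busyness_percent) that are not part of DPDK:

    #include <stdint.h>

    /* Two successive readings of the counters reported for one lcore. */
    struct lcore_sample {
            uint64_t busy_cycles;
            uint64_t total_cycles;
    };

    /* Busyness over the interval between the two samples, in percent.
     * The length of that interval (the sample period) is entirely up to
     * the monitoring tool. */
    double
    lcore_busyness_percent(const struct lcore_sample *prev,
                    const struct lcore_sample *cur)
    {
            uint64_t busy = cur->busy_cycles - prev->busy_cycles;
            uint64_t total = cur->total_cycles - prev->total_cycles;

            if (total == 0)
                    return 0.0;
            return 100.0 * (double)busy / (double)total;
    }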