From patchwork Tue Nov 29 15:33:26 2022
X-Patchwork-Submitter: Robin Jarry
X-Patchwork-Id: 120288
X-Patchwork-Delegate: thomas@monjalon.net
From: Robin Jarry
To: dev@dpdk.org
Cc: Robin Jarry, Morten Brørup
Subject: [PATCH v3 1/4] eal: add lcore info in telemetry
Date: Tue, 29 Nov 2022 16:33:26 +0100
Message-Id: <20221129153329.181652-2-rjarry@redhat.com>
In-Reply-To: <20221129153329.181652-1-rjarry@redhat.com>
References: <20221123102612.1688865-1-rjarry@redhat.com>
 <20221129153329.181652-1-rjarry@redhat.com>

Report the same information as rte_lcore_dump() through the telemetry
API, in /eal/lcore/list and /eal/lcore/info,ID.

Example:

  --> /eal/lcore/info,3
  {
    "/eal/lcore/info": {
      "lcore_id": 3,
      "socket": 0,
      "role": "RTE",
      "cpuset": [
        3
      ]
    }
  }

Signed-off-by: Robin Jarry
Acked-by: Morten Brørup

---
v2 -> v3: Added #ifndef WINDOWS guards; telemetry is not available on
          Windows.
v1 -> v2: Changed "cpuset" to an array of CPU ids instead of a string.
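As a usage sketch (lcore ids and output illustrative, assuming a run with
four lcores), the new endpoints can be queried interactively with the
telemetry client shipped in DPDK's usertools:

  $ ./usertools/dpdk-telemetry.py
  --> /eal/lcore/list
  {"/eal/lcore/list": [0, 1, 2, 3]}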
 lib/eal/common/eal_common_lcore.c | 96 +++++++++++++++++++++++++++++++
 1 file changed, 96 insertions(+)

diff --git a/lib/eal/common/eal_common_lcore.c b/lib/eal/common/eal_common_lcore.c
index 06c594b0224f..16548977dce8 100644
--- a/lib/eal/common/eal_common_lcore.c
+++ b/lib/eal/common/eal_common_lcore.c
@@ -10,6 +10,9 @@
 #include
 #include
 #include
+#ifndef RTE_EXEC_ENV_WINDOWS
+#include <rte_telemetry.h>
+#endif
 
 #include "eal_private.h"
 #include "eal_thread.h"
@@ -456,3 +459,96 @@ rte_lcore_dump(FILE *f)
 {
 	rte_lcore_iterate(lcore_dump_cb, f);
 }
+
+#ifndef RTE_EXEC_ENV_WINDOWS
+static int
+lcore_telemetry_id_cb(unsigned int lcore_id, void *arg)
+{
+	struct rte_tel_data *d = arg;
+	return rte_tel_data_add_array_int(d, lcore_id);
+}
+
+static int
+handle_lcore_list(const char *cmd __rte_unused,
+		const char *params __rte_unused,
+		struct rte_tel_data *d)
+{
+	int ret = rte_tel_data_start_array(d, RTE_TEL_INT_VAL);
+	if (ret)
+		return ret;
+	return rte_lcore_iterate(lcore_telemetry_id_cb, d);
+}
+
+struct lcore_telemetry_info {
+	unsigned int lcore_id;
+	struct rte_tel_data *d;
+};
+
+static int
+lcore_telemetry_info_cb(unsigned int lcore_id, void *arg)
+{
+	struct lcore_telemetry_info *info = arg;
+	struct rte_config *cfg = rte_eal_get_configuration();
+	struct rte_tel_data *cpuset;
+	const char *role;
+	unsigned int cpu;
+
+	if (info->lcore_id != lcore_id)
+		return 0;
+
+	switch (cfg->lcore_role[lcore_id]) {
+	case ROLE_RTE:
+		role = "RTE";
+		break;
+	case ROLE_SERVICE:
+		role = "SERVICE";
+		break;
+	case ROLE_NON_EAL:
+		role = "NON_EAL";
+		break;
+	default:
+		role = "UNKNOWN";
+		break;
+	}
+	rte_tel_data_start_dict(info->d);
+	rte_tel_data_add_dict_int(info->d, "lcore_id", lcore_id);
+	rte_tel_data_add_dict_int(info->d, "socket", rte_lcore_to_socket_id(lcore_id));
+	rte_tel_data_add_dict_string(info->d, "role", role);
+	cpuset = rte_tel_data_alloc();
+	if (!cpuset)
+		return -ENOMEM;
+	rte_tel_data_start_array(cpuset, RTE_TEL_INT_VAL);
+	for (cpu = 0; cpu < CPU_SETSIZE; cpu++)
+		if (CPU_ISSET(cpu, &lcore_config[lcore_id].cpuset))
+			rte_tel_data_add_array_int(cpuset, cpu);
+	rte_tel_data_add_dict_container(info->d, "cpuset", cpuset, 0);
+
+	return 0;
+}
+
+static int
+handle_lcore_info(const char *cmd __rte_unused, const char *params, struct rte_tel_data *d)
+{
+	struct lcore_telemetry_info info = { .d = d };
+	char *endptr = NULL;
+	if (params == NULL || strlen(params) == 0)
+		return -EINVAL;
+	errno = 0;
+	info.lcore_id = strtoul(params, &endptr, 10);
+	if (errno)
+		return -errno;
+	if (endptr == params)
+		return -EINVAL;
+	return rte_lcore_iterate(lcore_telemetry_info_cb, &info);
+}
+
+RTE_INIT(lcore_telemetry)
+{
+	rte_telemetry_register_cmd(
+		"/eal/lcore/list", handle_lcore_list,
+		"List of lcore ids. Takes no parameters");
+	rte_telemetry_register_cmd(
+		"/eal/lcore/info", handle_lcore_info,
+		"Returns lcore info. Parameters: int lcore_id");
+}
+#endif /* !RTE_EXEC_ENV_WINDOWS */

From patchwork Tue Nov 29 15:33:27 2022
X-Patchwork-Submitter: Robin Jarry
X-Patchwork-Id: 120290
X-Patchwork-Delegate: thomas@monjalon.net
From: Robin Jarry
To: dev@dpdk.org
Cc: Robin Jarry, Morten Brørup
Subject: [PATCH v3 2/4] eal: allow applications to report their cpu cycles usage
Date: Tue, 29 Nov 2022 16:33:27 +0100
Message-Id: <20221129153329.181652-3-rjarry@redhat.com>
In-Reply-To: <20221129153329.181652-1-rjarry@redhat.com>
References: <20221123102612.1688865-1-rjarry@redhat.com>
 <20221129153329.181652-1-rjarry@redhat.com>

Allow applications to register a callback that will be invoked in
rte_lcore_dump() and when requesting lcore info in the telemetry API.

The callback is expected to return the number of CPU cycles that have
passed since application start and the number of these cycles that were
spent doing busy work.

Signed-off-by: Robin Jarry
Acked-by: Morten Brørup

---
v2 -> v3:

- Copied callback to local variable to guard against (unlikely) races.
- Used != NULL convention to test if callback is defined.
- Fixed typo in doc string.
- Did not add a % value in rte_lcore_dump() as its use would be very
  limited.

v1 -> v2:

Changed the approach based on Morten's review: the callback is now
expected to report the total number of cycles since application start
and the amount of these cycles that were spent doing busy work. This
gives external monitoring tools more flexibility in choosing the sample
period over which a busyness ratio is computed.
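To make the new API concrete, here is a minimal sketch of an
application-side callback. The app_lcore_stats bookkeeping is
hypothetical and not part of this patch; only the rte_lcore_usage_cb
signature and rte_lcore_register_usage_cb() come from the series.

  #include <stdint.h>

  #include <rte_cycles.h>
  #include <rte_lcore.h>

  /* Hypothetical per-lcore counters maintained by the application. */
  struct app_lcore_stats {
  	uint64_t busy_cycles; /* accumulated by the app's poll loop */
  	uint64_t start_tsc;   /* rte_rdtsc() sampled at startup */
  };

  static struct app_lcore_stats app_stats[RTE_MAX_LCORE];

  static int
  app_lcore_usage(unsigned int lcore_id, uint64_t *busy_cycles,
  		uint64_t *total_cycles)
  {
  	/* Negative return means no information for this lcore. */
  	if (lcore_id >= RTE_MAX_LCORE || app_stats[lcore_id].start_tsc == 0)
  		return -1;
  	*busy_cycles = app_stats[lcore_id].busy_cycles;
  	*total_cycles = rte_rdtsc() - app_stats[lcore_id].start_tsc;
  	return 0;
  }

The callback would be registered once, e.g. right after rte_eal_init(),
with rte_lcore_register_usage_cb(app_lcore_usage). Since both counters
are cumulative, a monitor sampling /eal/lcore/info twice can compute
busyness over any interval as (busy2 - busy1) / (total2 - total1).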
 lib/eal/common/eal_common_lcore.c | 35 ++++++++++++++++++++++++++++---
 lib/eal/include/rte_lcore.h       | 29 +++++++++++++++++++++++++
 lib/eal/version.map               |  1 +
 3 files changed, 62 insertions(+), 3 deletions(-)

diff --git a/lib/eal/common/eal_common_lcore.c b/lib/eal/common/eal_common_lcore.c
index 16548977dce8..23717abf6530 100644
--- a/lib/eal/common/eal_common_lcore.c
+++ b/lib/eal/common/eal_common_lcore.c
@@ -2,6 +2,7 @@
  * Copyright(c) 2010-2014 Intel Corporation
  */
 
+#include <inttypes.h>
 #include
 #include
 
@@ -422,11 +423,21 @@ rte_lcore_iterate(rte_lcore_iterate_cb cb, void *arg)
 	return ret;
 }
 
+static rte_lcore_usage_cb lcore_usage_cb;
+
+void
+rte_lcore_register_usage_cb(rte_lcore_usage_cb cb)
+{
+	lcore_usage_cb = cb;
+}
+
 static int
 lcore_dump_cb(unsigned int lcore_id, void *arg)
 {
 	struct rte_config *cfg = rte_eal_get_configuration();
-	char cpuset[RTE_CPU_AFFINITY_STR_LEN];
+	char cpuset[RTE_CPU_AFFINITY_STR_LEN], usage_str[256];
+	uint64_t busy_cycles, total_cycles;
+	rte_lcore_usage_cb usage_cb;
 	const char *role;
 	FILE *f = arg;
 	int ret;
@@ -446,11 +457,20 @@ lcore_dump_cb(unsigned int lcore_id, void *arg)
 		break;
 	}
 
+	busy_cycles = 0;
+	total_cycles = 0;
+	usage_str[0] = '\0';
+	usage_cb = lcore_usage_cb;
+	if (usage_cb != NULL && usage_cb(lcore_id, &busy_cycles, &total_cycles) == 0) {
+		snprintf(usage_str, sizeof(usage_str), ", busy cycles %"PRIu64"/%"PRIu64,
+			busy_cycles, total_cycles);
+	}
 	ret = eal_thread_dump_affinity(&lcore_config[lcore_id].cpuset, cpuset,
 		sizeof(cpuset));
-	fprintf(f, "lcore %u, socket %u, role %s, cpuset %s%s\n", lcore_id,
+	fprintf(f, "lcore %u, socket %u, role %s, cpuset %s%s%s\n", lcore_id,
 		rte_lcore_to_socket_id(lcore_id), role, cpuset,
-		ret == 0 ? "" : "...");
+		ret == 0 ? "" : "...", usage_str);
+
 	return 0;
 }
 
@@ -489,7 +509,9 @@ lcore_telemetry_info_cb(unsigned int lcore_id, void *arg)
 {
 	struct lcore_telemetry_info *info = arg;
 	struct rte_config *cfg = rte_eal_get_configuration();
+	uint64_t busy_cycles, total_cycles;
 	struct rte_tel_data *cpuset;
+	rte_lcore_usage_cb usage_cb;
 	const char *role;
 	unsigned int cpu;
 
@@ -522,6 +544,13 @@ lcore_telemetry_info_cb(unsigned int lcore_id, void *arg)
 		if (CPU_ISSET(cpu, &lcore_config[lcore_id].cpuset))
 			rte_tel_data_add_array_int(cpuset, cpu);
 	rte_tel_data_add_dict_container(info->d, "cpuset", cpuset, 0);
+	busy_cycles = 0;
+	total_cycles = 0;
+	usage_cb = lcore_usage_cb;
+	if (usage_cb != NULL && usage_cb(lcore_id, &busy_cycles, &total_cycles) == 0) {
+		rte_tel_data_add_dict_u64(info->d, "busy_cycles", busy_cycles);
+		rte_tel_data_add_dict_u64(info->d, "total_cycles", total_cycles);
+	}
 
 	return 0;
 }
diff --git a/lib/eal/include/rte_lcore.h b/lib/eal/include/rte_lcore.h
index 6938c3fd7b81..0552e6f44142 100644
--- a/lib/eal/include/rte_lcore.h
+++ b/lib/eal/include/rte_lcore.h
@@ -328,6 +328,35 @@ typedef int (*rte_lcore_iterate_cb)(unsigned int lcore_id, void *arg);
 int
 rte_lcore_iterate(rte_lcore_iterate_cb cb, void *arg);
 
+/**
+ * Callback to allow applications to report CPU usage.
+ *
+ * @param [in] lcore_id
+ *   The lcore to consider.
+ * @param [out] busy_cycles
+ *   The number of busy CPU cycles since the application start.
+ * @param [out] total_cycles
+ *   The total number of CPU cycles since the application start.
+ * @return
+ *   - 0 if both busy and total were set correctly.
+ *   - a negative value if the information is not available or if any
+ *     error occurred.
+ */
+typedef int (*rte_lcore_usage_cb)(
+	unsigned int lcore_id, uint64_t *busy_cycles, uint64_t *total_cycles);
+
+/**
+ * Register a callback from an application to be called in rte_lcore_dump()
+ * and the /eal/lcore/info telemetry endpoint handler.
+ *
+ * Applications are expected to report the amount of busy and total CPU
+ * cycles since their startup.
+ *
+ * @param cb
+ *   The callback function.
+ */
+__rte_experimental
+void rte_lcore_register_usage_cb(rte_lcore_usage_cb cb);
+
 /**
  * List all lcores.
  *
diff --git a/lib/eal/version.map b/lib/eal/version.map
index 7ad12a7dc985..30fd216a12ea 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -440,6 +440,7 @@ EXPERIMENTAL {
 	rte_thread_detach;
 	rte_thread_equal;
 	rte_thread_join;
+	rte_lcore_register_usage_cb;
 };
 
 INTERNAL {

From patchwork Tue Nov 29 15:33:28 2022
X-Patchwork-Submitter: Robin Jarry
X-Patchwork-Id: 120289
X-Patchwork-Delegate: thomas@monjalon.net
From: Robin Jarry
To: dev@dpdk.org
Cc: Robin Jarry, Morten Brørup
Subject: [PATCH v3 3/4] testpmd: add dump_lcores command
Date: Tue, 29 Nov 2022 16:33:28 +0100
Message-Id: <20221129153329.181652-4-rjarry@redhat.com>
In-Reply-To: <20221129153329.181652-1-rjarry@redhat.com>
References: <20221123102612.1688865-1-rjarry@redhat.com>
 <20221129153329.181652-1-rjarry@redhat.com>

Add a simple command that calls rte_lcore_dump().

Signed-off-by: Robin Jarry
Acked-by: Morten Brørup

---
v2 -> v3: no change
v1 -> v2: renamed show lcores -> dump_lcores
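For context, a session might look like this (values illustrative; at
this point in the series the output carries no cycle counters yet, those
are added in patch 4/4):

  testpmd> dump_lcores
  lcore 0, socket 0, role RTE, cpuset 0
  lcore 1, socket 0, role RTE, cpuset 1
  lcore 2, socket 0, role RTE, cpuset 2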
 app/test-pmd/cmdline.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index b32dc8bfd445..96474d2ae458 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -8345,6 +8345,8 @@ static void cmd_dump_parsed(void *parsed_result,
 		rte_mempool_list_dump(stdout);
 	else if (!strcmp(res->dump, "dump_devargs"))
 		rte_devargs_dump(stdout);
+	else if (!strcmp(res->dump, "dump_lcores"))
+		rte_lcore_dump(stdout);
 	else if (!strcmp(res->dump, "dump_log_types"))
 		rte_log_dump(stdout);
 }
@@ -8358,6 +8360,7 @@ static cmdline_parse_token_string_t cmd_dump_dump =
 			"dump_ring#"
 			"dump_mempool#"
 			"dump_devargs#"
+			"dump_lcores#"
 			"dump_log_types");
 
 static cmdline_parse_inst_t cmd_dump = {
From patchwork Tue Nov 29 15:33:29 2022
X-Patchwork-Submitter: Robin Jarry
X-Patchwork-Id: 120291
X-Patchwork-Delegate: thomas@monjalon.net
From: Robin Jarry
To: dev@dpdk.org
Cc: Robin Jarry, Morten Brørup
Subject: [PATCH v3 4/4] testpmd: report lcore usage
Date: Tue, 29 Nov 2022 16:33:29 +0100
Message-Id: <20221129153329.181652-5-rjarry@redhat.com>
In-Reply-To: <20221129153329.181652-1-rjarry@redhat.com>
References: <20221123102612.1688865-1-rjarry@redhat.com>
 <20221129153329.181652-1-rjarry@redhat.com>

Reuse the --record-core-cycles option to account for busy cycles. One
turn of packet_fwd_t is considered "busy" if there was at least one
received or transmitted packet.

Add a new busy_cycles field in struct fwd_stream. Update get_end_cycles
to accept an additional argument for the number of processed packets.
Update fwd_stream.busy_cycles when the number of packets is greater
than zero.

When --record-core-cycles is specified, register a callback with
rte_lcore_register_usage_cb(). In the callback, use the new lcore_id
field in struct fwd_lcore to identify the correct index in fwd_lcores
and return the sum of busy/total cycles of all fwd_streams.

This makes the cycles counters available in rte_lcore_dump() and the
lcore telemetry API:

  testpmd> dump_lcores
  lcore 3, socket 0, role RTE, cpuset 3
  lcore 4, socket 0, role RTE, cpuset 4, busy cycles 1228584096/9239923140
  lcore 5, socket 0, role RTE, cpuset 5, busy cycles 1255661768/9218141538

  --> /eal/lcore/info,4
  {
    "/eal/lcore/info": {
      "lcore_id": 4,
      "socket": 0,
      "role": "RTE",
      "cpuset": [
        4
      ],
      "busy_cycles": 10623340318,
      "total_cycles": 55331167354
    }
  }

Signed-off-by: Robin Jarry
Acked-by: Morten Brørup

---
v2 -> v3: no change
v1 -> v2: adjusted to new lcore_usage api

 app/test-pmd/5tswap.c         |  5 +++--
 app/test-pmd/csumonly.c       |  6 +++---
 app/test-pmd/flowgen.c        |  2 +-
 app/test-pmd/icmpecho.c       |  6 +++---
 app/test-pmd/iofwd.c          |  5 +++--
 app/test-pmd/macfwd.c         |  5 +++--
 app/test-pmd/macswap.c        |  5 +++--
 app/test-pmd/noisy_vnf.c      |  4 ++++
 app/test-pmd/rxonly.c         |  5 +++--
 app/test-pmd/shared_rxq_fwd.c |  5 +++--
 app/test-pmd/testpmd.c        | 39 ++++++++++++++++++++++++++++++++++-
 app/test-pmd/testpmd.h        | 14 +++++++++----
 app/test-pmd/txonly.c         |  7 ++++---
 13 files changed, 81 insertions(+), 27 deletions(-)

diff --git a/app/test-pmd/5tswap.c b/app/test-pmd/5tswap.c
index f041a5e1d530..03225075716c 100644
--- a/app/test-pmd/5tswap.c
+++ b/app/test-pmd/5tswap.c
@@ -116,7 +116,7 @@ pkt_burst_5tuple_swap(struct fwd_stream *fs)
 			 nb_pkt_per_burst);
 	inc_rx_burst_stats(fs, nb_rx);
 	if (unlikely(nb_rx == 0))
-		return;
+		goto end;
 
 	fs->rx_packets += nb_rx;
 	txp = &ports[fs->tx_port];
@@ -182,7 +182,8 @@ pkt_burst_5tuple_swap(struct fwd_stream *fs)
 			rte_pktmbuf_free(pkts_burst[nb_tx]);
 		} while (++nb_tx < nb_rx);
 	}
-	get_end_cycles(fs, start_tsc);
+end:
+	get_end_cycles(fs, start_tsc, nb_rx);
 }
 
 static void
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 1c2459851522..03e141221a56 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -868,7 +868,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 			 nb_pkt_per_burst);
 	inc_rx_burst_stats(fs, nb_rx);
 	if (unlikely(nb_rx == 0))
-		return;
+		goto end;
 
 	fs->rx_packets += nb_rx;
 	rx_bad_ip_csum = 0;
@@ -1200,8 +1200,8 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 			rte_pktmbuf_free(tx_pkts_burst[nb_tx]);
 		} while (++nb_tx < nb_rx);
 	}
-
-	get_end_cycles(fs, start_tsc);
+end:
+	get_end_cycles(fs, start_tsc, nb_rx);
 }
 
 static void
diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index fd6abc0f4124..7b2f0ffdf0f5 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -196,7 +196,7 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
 
 	RTE_PER_LCORE(_next_flow) = next_flow;
 
-	get_end_cycles(fs, start_tsc);
+	get_end_cycles(fs, start_tsc, nb_tx);
 }
 
 static int
diff --git a/app/test-pmd/icmpecho.c b/app/test-pmd/icmpecho.c
index 066f2a3ab79b..2fc9f96dc95f 100644
--- a/app/test-pmd/icmpecho.c
+++ b/app/test-pmd/icmpecho.c
@@ -303,7 +303,7 @@ reply_to_icmp_echo_rqsts(struct fwd_stream *fs)
 			 nb_pkt_per_burst);
 	inc_rx_burst_stats(fs, nb_rx);
 	if (unlikely(nb_rx == 0))
-		return;
+		goto end;
 
 	fs->rx_packets += nb_rx;
 	nb_replies = 0;
@@ -508,8 +508,8 @@ reply_to_icmp_echo_rqsts(struct fwd_stream *fs)
 			} while (++nb_tx < nb_replies);
 		}
 	}
-
-	get_end_cycles(fs, start_tsc);
+end:
+	get_end_cycles(fs, start_tsc, nb_rx);
 }
 
 static void
diff --git a/app/test-pmd/iofwd.c b/app/test-pmd/iofwd.c
index 8fafdec548ad..e5a2dbe20c69 100644
--- a/app/test-pmd/iofwd.c
+++ b/app/test-pmd/iofwd.c
@@ -59,7 +59,7 @@ pkt_burst_io_forward(struct fwd_stream *fs)
 			pkts_burst, nb_pkt_per_burst);
 	inc_rx_burst_stats(fs, nb_rx);
 	if (unlikely(nb_rx == 0))
-		return;
+		goto end;
 	fs->rx_packets += nb_rx;
 
 	nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue,
@@ -84,7 +84,8 @@ pkt_burst_io_forward(struct fwd_stream *fs)
 		} while (++nb_tx < nb_rx);
 	}
 
-	get_end_cycles(fs, start_tsc);
+end:
+	get_end_cycles(fs, start_tsc, nb_rx);
 }
 
 static void
diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c
index beb220fbb462..9db623999970 100644
--- a/app/test-pmd/macfwd.c
+++ b/app/test-pmd/macfwd.c
@@ -65,7 +65,7 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
 			 nb_pkt_per_burst);
 	inc_rx_burst_stats(fs, nb_rx);
 	if (unlikely(nb_rx == 0))
-		return;
+		goto end;
 
 	fs->rx_packets += nb_rx;
 	txp = &ports[fs->tx_port];
@@ -115,7 +115,8 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
 		} while (++nb_tx < nb_rx);
 	}
 
-	get_end_cycles(fs, start_tsc);
+end:
+	get_end_cycles(fs, start_tsc, nb_rx);
 }
 
 static void
diff --git a/app/test-pmd/macswap.c b/app/test-pmd/macswap.c
index 4f8deb338296..4db134ac1d91 100644
--- a/app/test-pmd/macswap.c
+++ b/app/test-pmd/macswap.c
@@ -66,7 +66,7 @@ pkt_burst_mac_swap(struct fwd_stream *fs)
 			 nb_pkt_per_burst);
 	inc_rx_burst_stats(fs, nb_rx);
 	if (unlikely(nb_rx == 0))
-		return;
+		goto end;
 
 	fs->rx_packets += nb_rx;
 	txp = &ports[fs->tx_port];
@@ -93,7 +93,8 @@ pkt_burst_mac_swap(struct fwd_stream *fs)
 			rte_pktmbuf_free(pkts_burst[nb_tx]);
 		} while (++nb_tx < nb_rx);
 	}
-	get_end_cycles(fs, start_tsc);
+end:
+	get_end_cycles(fs, start_tsc, nb_rx);
 }
 
 static void
diff --git a/app/test-pmd/noisy_vnf.c b/app/test-pmd/noisy_vnf.c
index c65ec6f06a5c..290bdcda45f0 100644
--- a/app/test-pmd/noisy_vnf.c
+++ b/app/test-pmd/noisy_vnf.c
@@ -152,6 +152,9 @@ pkt_burst_noisy_vnf(struct fwd_stream *fs)
 	uint64_t delta_ms;
 	bool needs_flush = false;
 	uint64_t now;
+	uint64_t start_tsc = 0;
+
+	get_start_cycles(&start_tsc);
 
 	nb_rx = rte_eth_rx_burst(fs->rx_port, fs->rx_queue,
 			pkts_burst, nb_pkt_per_burst);
@@ -219,6 +222,7 @@ pkt_burst_noisy_vnf(struct fwd_stream *fs)
 		fs->fwd_dropped += drop_pkts(tmp_pkts, nb_deqd, sent);
 		ncf->prev_time = rte_get_timer_cycles();
 	}
+	get_end_cycles(fs, start_tsc, nb_rx + nb_tx);
 }
 
 #define NOISY_STRSIZE 256
diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c
index d528d4f34e60..519202339e16 100644
--- a/app/test-pmd/rxonly.c
+++ b/app/test-pmd/rxonly.c
@@ -58,13 +58,14 @@ pkt_burst_receive(struct fwd_stream *fs)
 			 nb_pkt_per_burst);
 	inc_rx_burst_stats(fs, nb_rx);
 	if (unlikely(nb_rx == 0))
-		return;
+		goto end;
 
 	fs->rx_packets += nb_rx;
 	for (i = 0; i < nb_rx; i++)
 		rte_pktmbuf_free(pkts_burst[i]);
 
-	get_end_cycles(fs, start_tsc);
+end:
+	get_end_cycles(fs, start_tsc, nb_rx);
 }
 
 static void
diff --git a/app/test-pmd/shared_rxq_fwd.c b/app/test-pmd/shared_rxq_fwd.c
index 2e9047804b5b..395b73bfe52e 100644
--- a/app/test-pmd/shared_rxq_fwd.c
+++ b/app/test-pmd/shared_rxq_fwd.c
@@ -102,9 +102,10 @@ shared_rxq_fwd(struct fwd_stream *fs)
 			nb_pkt_per_burst);
 	inc_rx_burst_stats(fs, nb_rx);
 	if (unlikely(nb_rx == 0))
-		return;
+		goto end;
 	forward_shared_rxq(fs, nb_rx, pkts_burst);
-	get_end_cycles(fs, start_tsc);
+end:
+	get_end_cycles(fs, start_tsc, nb_rx);
 }
 
 static void
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 134d79a55547..6ad91334d352 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2053,7 +2053,7 @@ fwd_stats_display(void)
 				fs->rx_bad_outer_ip_csum;
 
 		if (record_core_cycles)
-			fwd_cycles += fs->core_cycles;
+			fwd_cycles += fs->busy_cycles;
 	}
 	for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) {
 		pt_id = fwd_ports_ids[i];
@@ -2184,6 +2184,7 @@ fwd_stats_reset(void)
 
 		memset(&fs->rx_burst_stats, 0, sizeof(fs->rx_burst_stats));
 		memset(&fs->tx_burst_stats, 0, sizeof(fs->tx_burst_stats));
+		fs->busy_cycles = 0;
 		fs->core_cycles = 0;
 	}
 }
@@ -2260,6 +2261,7 @@ run_pkt_fwd_on_lcore(struct fwd_lcore *fc, packet_fwd_t pkt_fwd)
 	tics_datum = rte_rdtsc();
 	tics_per_1sec = rte_get_timer_hz();
 #endif
+	fc->lcore_id = rte_lcore_id();
 	fsm = &fwd_streams[fc->stream_idx];
 	nb_fs = fc->stream_nb;
 	do {
@@ -2288,6 +2290,37 @@ run_pkt_fwd_on_lcore(struct fwd_lcore *fc, packet_fwd_t pkt_fwd)
 	} while (! fc->stopped);
 }
 
+static int
+lcore_usage_callback(unsigned int lcore_id, uint64_t *busy_cycles, uint64_t *total_cycles)
+{
+	struct fwd_stream **fsm;
+	struct fwd_lcore *fc;
+	streamid_t nb_fs;
+	streamid_t sm_id;
+	int c;
+
+	for (c = 0; c < nb_lcores; c++) {
+		fc = fwd_lcores[c];
+		if (fc->lcore_id != lcore_id)
+			continue;
+
+		fsm = &fwd_streams[fc->stream_idx];
+		nb_fs = fc->stream_nb;
+		*busy_cycles = 0;
+		*total_cycles = 0;
+
+		for (sm_id = 0; sm_id < nb_fs; sm_id++)
+			if (!fsm[sm_id]->disabled) {
+				*busy_cycles += fsm[sm_id]->busy_cycles;
+				*total_cycles += fsm[sm_id]->core_cycles;
+			}
+
+		return 0;
+	}
+
+	return -1;
+}
+
 static int
 start_pkt_forward_on_core(void *fwd_arg)
 {
@@ -4522,6 +4555,10 @@ main(int argc, char** argv)
 		rte_stats_bitrate_reg(bitrate_data);
 	}
 #endif
+
+	if (record_core_cycles)
+		rte_lcore_register_usage_cb(lcore_usage_callback);
+
 #ifdef RTE_LIB_CMDLINE
 	if (init_cmdline() != 0)
 		rte_exit(EXIT_FAILURE,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 7d24d25970d2..5dbf5d1c465c 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -174,7 +174,8 @@ struct fwd_stream {
 #ifdef RTE_LIB_GRO
 	unsigned int gro_times;	/**< GRO operation times */
 #endif
-	uint64_t core_cycles;	/**< used for RX and TX processing */
+	uint64_t busy_cycles;	/**< used with --record-core-cycles */
+	uint64_t core_cycles;	/**< used with --record-core-cycles */
 	struct pkt_burst_stats rx_burst_stats;
 	struct pkt_burst_stats tx_burst_stats;
 	struct fwd_lcore *lcore;	/**< Lcore being scheduled. */
@@ -360,6 +361,7 @@ struct fwd_lcore {
 	streamid_t stream_nb;	/**< number of streams in "fwd_streams" */
 	lcoreid_t  cpuid_idx;	/**< index of logical core in CPU id table */
 	volatile char stopped;	/**< stop forwarding when set */
+	unsigned int lcore_id;	/**< return value of rte_lcore_id() */
 };
 
 /*
@@ -836,10 +838,14 @@ get_start_cycles(uint64_t *start_tsc)
 }
 
 static inline void
-get_end_cycles(struct fwd_stream *fs, uint64_t start_tsc)
+get_end_cycles(struct fwd_stream *fs, uint64_t start_tsc, uint64_t nb_packets)
 {
-	if (record_core_cycles)
-		fs->core_cycles += rte_rdtsc() - start_tsc;
+	if (record_core_cycles) {
+		uint64_t cycles = rte_rdtsc() - start_tsc;
+		fs->core_cycles += cycles;
+		if (nb_packets > 0)
+			fs->busy_cycles += cycles;
+	}
 }
 
 static inline void
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index 021624952daa..ad37626ff63c 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -331,7 +331,7 @@ pkt_burst_transmit(struct fwd_stream *fs)
 	struct rte_mbuf *pkt;
 	struct rte_mempool *mbp;
 	struct rte_ether_hdr eth_hdr;
-	uint16_t nb_tx;
+	uint16_t nb_tx = 0;
 	uint16_t nb_pkt;
 	uint16_t vlan_tci, vlan_tci_outer;
 	uint32_t retry;
@@ -392,7 +392,7 @@ pkt_burst_transmit(struct fwd_stream *fs)
 	}
 
 	if (nb_pkt == 0)
-		return;
+		goto end;
 
 	nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue,
 			pkts_burst, nb_pkt);
@@ -426,7 +426,8 @@ pkt_burst_transmit(struct fwd_stream *fs)
 		} while (++nb_tx < nb_pkt);
 	}
 
-	get_end_cycles(fs, start_tsc);
+end:
+	get_end_cycles(fs, start_tsc, nb_tx);
 }
 
 static int