Patch Detail

get:
Show a patch.

patch:
Update a patch (partial update).

put:
Update a patch (full update).

GET /api/patches/123448/?format=api
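This endpoint is part of the standard Patchwork REST API. A minimal Python sketch of fetching and summarizing a patch object; the field names follow the response shown on this page, and the stubbed dictionary stands in for a live HTTP fetch so the example runs offline:

```python
import json
from urllib.request import urlopen

API_BASE = "http://patchwork.dpdk.org/api"

def patch_url(patch_id: int) -> str:
    """Build the REST endpoint URL for a single patch."""
    return f"{API_BASE}/patches/{patch_id}/"

def summarize(patch: dict) -> dict:
    """Pull out the fields most automation cares about."""
    return {
        "id": patch["id"],
        "name": patch["name"],
        "state": patch["state"],
        "submitter": patch["submitter"]["name"],
        "mbox": patch["mbox"],
    }

if __name__ == "__main__":
    # Live fetch (requires network access):
    #   with urlopen(patch_url(123448)) as resp:
    #       patch = json.load(resp)
    # Offline stub shaped like the response below:
    patch = {
        "id": 123448,
        "name": "[RESEND,v9,4/5] app/testpmd: report lcore usage",
        "state": "superseded",
        "submitter": {"name": "Robin Jarry"},
        "mbox": "http://patchwork.dpdk.org/project/dpdk/patch/"
                "20230208084507.1328625-5-rjarry@redhat.com/mbox/",
    }
    print(json.dumps(summarize(patch), indent=2))
```

The `mbox` URL in the response is the easiest way to feed the patch to `git am` without parsing JSON at all.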
{
    "id": 123448,
    "url": "http://patchwork.dpdk.org/api/patches/123448/?format=api",
    "web_url": "http://patchwork.dpdk.org/project/dpdk/patch/20230208084507.1328625-5-rjarry@redhat.com/",
    "project": {
        "id": 1,
        "url": "http://patchwork.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<20230208084507.1328625-5-rjarry@redhat.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/20230208084507.1328625-5-rjarry@redhat.com",
    "date": "2023-02-08T08:45:06",
    "name": "[RESEND,v9,4/5] app/testpmd: report lcore usage",
    "commit_ref": null,
    "pull_url": null,
    "state": "superseded",
    "archived": true,
    "hash": "46a75f0257ed63a9f849d0ca0661a2786c9c6826",
    "submitter": {
        "id": 2850,
        "url": "http://patchwork.dpdk.org/api/people/2850/?format=api",
        "name": "Robin Jarry",
        "email": "rjarry@redhat.com"
    },
    "delegate": {
        "id": 24651,
        "url": "http://patchwork.dpdk.org/api/users/24651/?format=api",
        "username": "dmarchand",
        "first_name": "David",
        "last_name": "Marchand",
        "email": "david.marchand@redhat.com"
    },
    "mbox": "http://patchwork.dpdk.org/project/dpdk/patch/20230208084507.1328625-5-rjarry@redhat.com/mbox/",
    "series": [
        {
            "id": 26885,
            "url": "http://patchwork.dpdk.org/api/series/26885/?format=api",
            "web_url": "http://patchwork.dpdk.org/project/dpdk/list/?series=26885",
            "date": "2023-02-08T08:45:02",
            "name": "lcore telemetry improvements",
            "version": 9,
            "mbox": "http://patchwork.dpdk.org/series/26885/mbox/"
        }
    ],
    "comments": "http://patchwork.dpdk.org/api/patches/123448/comments/",
    "check": "pending",
    "checks": "http://patchwork.dpdk.org/api/patches/123448/checks/",
    "tags": {},
    "related": [],
    "prefixes": ["RESEND", "v9", "4/5"]
}

Headers:

Return-Path: <dev-bounces@dpdk.org>
X-Original-To: patchwork@inbox.dpdk.org
Delivered-To: patchwork@inbox.dpdk.org
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
	by inbox.dpdk.org (Postfix) with ESMTP id 7858C41C3C;
	Wed, 8 Feb 2023 09:49:04 +0100 (CET)
Received: from mails.dpdk.org (localhost [127.0.0.1])
	by mails.dpdk.org (Postfix) with ESMTP id 6133A40141;
	Wed, 8 Feb 2023 09:49:04 +0100 (CET)
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by mails.dpdk.org (Postfix) with ESMTP id 0F829400D6
 for <dev@dpdk.org>; Wed, 8 Feb 2023 09:49:02 +0100 (CET)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-7-iHCudSBSNLewFzfsxv7mnw-1; Wed, 08 Feb 2023 03:45:48 -0500
Received: from smtp.corp.redhat.com (int-mx09.intmail.prod.int.rdu2.redhat.com
 [10.11.54.9])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 2D0A4857D07;
 Wed, 8 Feb 2023 08:45:48 +0000 (UTC)
Received: from ringo.redhat.com (unknown [10.39.208.30])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 83535492C3C;
 Wed, 8 Feb 2023 08:45:46 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1675846142;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=2YKjJYqCIhopAU/3ZXeketZLTEOrs3PSm9c1aNIIK0A=;
 b=icZTAQ6nzkzlsFetKUWubH/RUclszGHDbv0VdukVl81laMSKTQ6R5v/MDR40h+ynwH0aGY
 A7heXZ31cmuPrR7SsPc7hSP/HV2Y8HgnwkWGB7/3rdXEegUu86lYLPXDGkHEsjokgj+xSq
 LguUP3bKMr5Lke+aulwzj1W6FzQe2gY=
X-MC-Unique: iHCudSBSNLewFzfsxv7mnw-1
From: Robin Jarry <rjarry@redhat.com>
To: dev@dpdk.org
Cc: Robin Jarry <rjarry@redhat.com>, Morten Brørup <mb@smartsharesystems.com>,
 Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>,
 Kevin Laatz <kevin.laatz@intel.com>, Aman Singh <aman.deep.singh@intel.com>,
 Yuying Zhang <yuying.zhang@intel.com>
Subject: [RESEND PATCH v9 4/5] app/testpmd: report lcore usage
Date: Wed, 8 Feb 2023 09:45:06 +0100
Message-Id: <20230208084507.1328625-5-rjarry@redhat.com>
In-Reply-To: <20230208084507.1328625-1-rjarry@redhat.com>
References: <20221123102612.1688865-1-rjarry@redhat.com>
 <20230208084507.1328625-1-rjarry@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.9
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org

Commit message:

The --record-core-cycles option already accounts for busy cycles. One
turn of packet_fwd_t is considered "busy" if there was at least one
received or transmitted packet.

Rename core_cycles to busy_cycles in struct fwd_stream to make it more
explicit. Add total_cycles to struct fwd_lcore. Add cycles accounting in
noisy_vnf where it was missing.

When --record-core-cycles is specified, register a callback with
rte_lcore_register_usage_cb() and update total_cycles every turn of
lcore loop based on a starting tsc value.

In the callback, resolve the proper struct fwd_lcore based on lcore_id
and return the lcore total_cycles and the sum of busy_cycles of all its
fwd_streams.

This makes the cycles counters available in rte_lcore_dump() and the
lcore telemetry API:

 testpmd> dump_lcores
 lcore 3, socket 0, role RTE, cpuset 3
 lcore 4, socket 0, role RTE, cpuset 4, busy cycles 1228584096/9239923140
 lcore 5, socket 0, role RTE, cpuset 5, busy cycles 1255661768/9218141538

 --> /eal/lcore/info,4
 {
   "/eal/lcore/info": {
     "lcore_id": 4,
     "socket": 0,
     "role": "RTE",
     "cpuset": [
       4
     ],
     "busy_cycles": 10623340318,
     "total_cycles": 55331167354
   }
 }

Signed-off-by: Robin Jarry <rjarry@redhat.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
---

Notes:
    v8 -> v9: Fixed accounting of total cycles

 app/test-pmd/noisy_vnf.c |  8 +++++++-
 app/test-pmd/testpmd.c   | 42 ++++++++++++++++++++++++++++++++++++----
 app/test-pmd/testpmd.h   | 25 +++++++++++++++---------
 3 files changed, 61 insertions(+), 14 deletions(-)

Diff:

diff --git a/app/test-pmd/noisy_vnf.c b/app/test-pmd/noisy_vnf.c
index c65ec6f06a5c..ce5a3e5e6987 100644
--- a/app/test-pmd/noisy_vnf.c
+++ b/app/test-pmd/noisy_vnf.c
@@ -144,6 +144,7 @@ pkt_burst_noisy_vnf(struct fwd_stream *fs)
 	struct noisy_config *ncf = noisy_cfg[fs->rx_port];
 	struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
 	struct rte_mbuf *tmp_pkts[MAX_PKT_BURST];
+	uint64_t start_tsc = 0;
 	uint16_t nb_deqd = 0;
 	uint16_t nb_rx = 0;
 	uint16_t nb_tx = 0;
@@ -153,6 +154,8 @@ pkt_burst_noisy_vnf(struct fwd_stream *fs)
 	bool needs_flush = false;
 	uint64_t now;
 
+	get_start_cycles(&start_tsc);
+
 	nb_rx = rte_eth_rx_burst(fs->rx_port, fs->rx_queue,
 			pkts_burst, nb_pkt_per_burst);
 	inc_rx_burst_stats(fs, nb_rx);
@@ -169,7 +172,7 @@ pkt_burst_noisy_vnf(struct fwd_stream *fs)
 		inc_tx_burst_stats(fs, nb_tx);
 		fs->tx_packets += nb_tx;
 		fs->fwd_dropped += drop_pkts(pkts_burst, nb_rx, nb_tx);
-		return;
+		goto end;
 	}
 
 	fifo_free = rte_ring_free_count(ncf->f);
@@ -219,6 +222,9 @@ pkt_burst_noisy_vnf(struct fwd_stream *fs)
 		fs->fwd_dropped += drop_pkts(tmp_pkts, nb_deqd, sent);
 		ncf->prev_time = rte_get_timer_cycles();
 	}
+end:
+	if (nb_rx > 0 || nb_tx > 0)
+		get_end_cycles(fs, start_tsc);
 }
 
 #define NOISY_STRSIZE 256
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index e366f81a0f46..eeb96aefa80b 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2053,7 +2053,7 @@ fwd_stats_display(void)
 				fs->rx_bad_outer_ip_csum;
 
 		if (record_core_cycles)
-			fwd_cycles += fs->core_cycles;
+			fwd_cycles += fs->busy_cycles;
 	}
 	for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) {
 		pt_id = fwd_ports_ids[i];
@@ -2145,7 +2145,7 @@ fwd_stats_display(void)
 			else
 				total_pkts = total_recv;
 
-			printf("\n CPU cycles/packet=%.2F (total cycles="
+			printf("\n CPU cycles/packet=%.2F (busy cycles="
 			       "%"PRIu64" / total %s packets=%"PRIu64") at %"PRIu64
 			       " MHz Clock\n",
 			       (double) fwd_cycles / total_pkts,
@@ -2184,8 +2184,10 @@ fwd_stats_reset(void)
 
 		memset(&fs->rx_burst_stats, 0, sizeof(fs->rx_burst_stats));
 		memset(&fs->tx_burst_stats, 0, sizeof(fs->tx_burst_stats));
-		fs->core_cycles = 0;
+		fs->busy_cycles = 0;
 	}
+	for (i = 0; i < cur_fwd_config.nb_fwd_lcores; i++)
+		fwd_lcores[i]->total_cycles = 0;
 }
 
 static void
@@ -2248,6 +2250,7 @@ static void
 run_pkt_fwd_on_lcore(struct fwd_lcore *fc, packet_fwd_t pkt_fwd)
 {
 	struct fwd_stream **fsm;
+	uint64_t start_tsc;
 	streamid_t nb_fs;
 	streamid_t sm_id;
 #ifdef RTE_LIB_BITRATESTATS
@@ -2262,6 +2265,7 @@ run_pkt_fwd_on_lcore(struct fwd_lcore *fc, packet_fwd_t pkt_fwd)
 #endif
 	fsm = &fwd_streams[fc->stream_idx];
 	nb_fs = fc->stream_nb;
+	start_tsc = rte_rdtsc();
 	do {
 		for (sm_id = 0; sm_id < nb_fs; sm_id++)
 			if (!fsm[sm_id]->disabled)
@@ -2284,10 +2288,36 @@ run_pkt_fwd_on_lcore(struct fwd_lcore *fc, packet_fwd_t pkt_fwd)
 				latencystats_lcore_id == rte_lcore_id())
 			rte_latencystats_update();
 #endif
-
+		if (record_core_cycles)
+			fc->total_cycles = rte_rdtsc() - start_tsc;
 	} while (! fc->stopped);
 }
 
+static int
+lcore_usage_callback(unsigned int lcore_id, struct rte_lcore_usage *usage)
+{
+	struct fwd_stream **fsm;
+	struct fwd_lcore *fc;
+	streamid_t nb_fs;
+	streamid_t sm_id;
+
+	fc = lcore_to_fwd_lcore(lcore_id);
+	if (fc == NULL)
+		return -1;
+
+	fsm = &fwd_streams[fc->stream_idx];
+	nb_fs = fc->stream_nb;
+	usage->busy_cycles = 0;
+	usage->total_cycles = fc->total_cycles;
+
+	for (sm_id = 0; sm_id < nb_fs; sm_id++) {
+		if (!fsm[sm_id]->disabled)
+			usage->busy_cycles += fsm[sm_id]->busy_cycles;
+	}
+
+	return 0;
+}
+
 static int
 start_pkt_forward_on_core(void *fwd_arg)
 {
@@ -4527,6 +4557,10 @@ main(int argc, char** argv)
 		rte_stats_bitrate_reg(bitrate_data);
 	}
 #endif
+
+	if (record_core_cycles)
+		rte_lcore_register_usage_cb(lcore_usage_callback);
+
 #ifdef RTE_LIB_CMDLINE
 	if (init_cmdline() != 0)
 		rte_exit(EXIT_FAILURE,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 7d24d25970d2..6ec2f6879b47 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -174,7 +174,7 @@ struct fwd_stream {
 #ifdef RTE_LIB_GRO
 	unsigned int gro_times;	/**< GRO operation times */
 #endif
-	uint64_t core_cycles; /**< used for RX and TX processing */
+	uint64_t busy_cycles; /**< used with --record-core-cycles */
 	struct pkt_burst_stats rx_burst_stats;
 	struct pkt_burst_stats tx_burst_stats;
 	struct fwd_lcore *lcore; /**< Lcore being scheduled. */
@@ -360,6 +360,7 @@ struct fwd_lcore {
 	streamid_t stream_nb; /**< number of streams in "fwd_streams" */
 	lcoreid_t cpuid_idx; /**< index of logical core in CPU id table */
 	volatile char stopped; /**< stop forwarding when set */
+	uint64_t total_cycles; /**< used with --record-core-cycles */
 };
 
 /*
@@ -785,16 +786,17 @@ is_proc_primary(void)
 	return rte_eal_process_type() == RTE_PROC_PRIMARY;
 }
 
-static inline unsigned int
-lcore_num(void)
+static inline struct fwd_lcore *
+lcore_to_fwd_lcore(uint16_t lcore_id)
 {
 	unsigned int i;
 
-	for (i = 0; i < RTE_MAX_LCORE; ++i)
-		if (fwd_lcores_cpuids[i] == rte_lcore_id())
-			return i;
+	for (i = 0; i < cur_fwd_config.nb_fwd_lcores; ++i) {
+		if (fwd_lcores_cpuids[i] == lcore_id)
+			return fwd_lcores[i];
+	}
 
-	rte_panic("lcore_id of current thread not found in fwd_lcores_cpuids\n");
+	return NULL;
 }
 
 void
@@ -803,7 +805,12 @@ parse_fwd_portlist(const char *port);
 static inline struct fwd_lcore *
 current_fwd_lcore(void)
 {
-	return fwd_lcores[lcore_num()];
+	struct fwd_lcore *fc = lcore_to_fwd_lcore(rte_lcore_id());
+
+	if (fc == NULL)
+		rte_panic("lcore_id of current thread not found in fwd_lcores_cpuids\n");
+
+	return fc;
 }
 
 /* Mbuf Pools */
@@ -839,7 +846,7 @@ static inline void
 get_end_cycles(struct fwd_stream *fs, uint64_t start_tsc)
 {
 	if (record_core_cycles)
-		fs->core_cycles += rte_rdtsc() - start_tsc;
+		fs->busy_cycles += rte_rdtsc() - start_tsc;
 }
 
 static inline void