From patchwork Wed Oct 25 11:59:09 2017
X-Patchwork-Submitter: Pavan Nikhilesh
X-Patchwork-Id: 30855
X-Patchwork-Delegate: jerinj@marvell.com
From: Pavan Nikhilesh
To: jerin.jacob@caviumnetworks.com, hemant.agrawal@nxp.com,
 harry.van.haaren@intel.com
Cc: dev@dpdk.org, Pavan Nikhilesh
Date: Wed, 25 Oct 2017 17:29:09 +0530
Message-Id: <1508932752-22964-4-git-send-email-pbhagavatula@caviumnetworks.com>
In-Reply-To: <1508932752-22964-1-git-send-email-pbhagavatula@caviumnetworks.com>
References: <1507712990-13064-1-git-send-email-pbhagavatula@caviumnetworks.com>
 <1508932752-22964-1-git-send-email-pbhagavatula@caviumnetworks.com>
Subject: [dpdk-dev] [PATCH v4 4/7] test/eventdev: update test to use service iter
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

Use service run iter for event scheduling instead of calling the event
schedule api directly.
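For reference, the pattern the test now follows everywhere is roughly the
sketch below (illustrative only, not part of the patch; the helper name is
made up, and the API calls are the ones used in the diff):

  #include <rte_eventdev.h>
  #include <rte_service.h>

  /* Illustrative helper (not in the patch): drive the sw eventdev's
   * scheduler through the service API instead of rte_event_schedule().
   */
  static int
  sched_one_iteration(uint8_t dev_id)
  {
          uint32_t service_id;

          /* The sw PMD registers its scheduling loop as a service. */
          if (rte_event_dev_service_id_get(dev_id, &service_id) < 0)
                  return -1;

          /* Mark the service runnable and allow calling it even though no
           * service lcore is mapped to it, so the test can run it inline.
           */
          rte_service_runstate_set(service_id, 1);
          rte_service_set_runstate_mapped_check(service_id, 0);

          /* One scheduling pass, standing in for rte_event_schedule(). */
          rte_service_run_iter_on_app_lcore(service_id);
          return 0;
  }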
Signed-off-by: Pavan Nikhilesh
Acked-by: Harry van Haaren
---
v4 changes:
 - rebase patchset on top of http://dpdk.org/dev/patchwork/patch/30732/
   for controlled event scheduling in case of event_sw

 test/test/test_eventdev_sw.c | 68 ++++++++++++++++++++++++++------------------
 1 file changed, 40 insertions(+), 28 deletions(-)

--
2.7.4

diff --git a/test/test/test_eventdev_sw.c b/test/test/test_eventdev_sw.c
index dea302f..5c7751b 100644
--- a/test/test/test_eventdev_sw.c
+++ b/test/test/test_eventdev_sw.c
@@ -49,6 +49,8 @@
 #include
 #include
 #include
+#include
+#include
 
 #include "test.h"
 
@@ -63,6 +65,7 @@ struct test {
 	uint8_t port[MAX_PORTS];
 	uint8_t qid[MAX_QIDS];
 	int nb_qids;
+	uint32_t service_id;
 };
 
 static struct rte_event release_ev;
@@ -415,7 +418,7 @@ run_prio_packet_test(struct test *t)
 		}
 	}
 
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);
 
 	struct test_event_dev_stats stats;
 	err = test_event_dev_stats_get(evdev, &stats);
@@ -507,7 +510,7 @@ test_single_directed_packet(struct test *t)
 	}
 
 	/* Run schedule() as dir packets may need to be re-ordered */
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);
 
 	struct test_event_dev_stats stats;
 	err = test_event_dev_stats_get(evdev, &stats);
@@ -574,7 +577,7 @@ test_directed_forward_credits(struct test *t)
 			printf("%d: error failed to enqueue\n", __LINE__);
 			return -1;
 		}
-		rte_event_schedule(evdev);
+		rte_service_run_iter_on_app_lcore(t->service_id);
 
 		uint32_t deq_pkts;
 		deq_pkts = rte_event_dequeue_burst(evdev, 0, &ev, 1, 0);
@@ -736,7 +739,7 @@ burst_packets(struct test *t)
 			return -1;
 		}
 	}
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);
 
 	/* Check stats for all NUM_PKTS arrived to sched core */
 	struct test_event_dev_stats stats;
@@ -825,7 +828,7 @@ abuse_inflights(struct test *t)
 	}
 
 	/* schedule */
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);
 
 	struct test_event_dev_stats stats;
 
@@ -963,7 +966,7 @@ xstats_tests(struct test *t)
 		}
 	}
 
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);
 
 	/* Device names / values */
 	int num_stats = rte_event_dev_xstats_names_get(evdev,
@@ -1290,7 +1293,7 @@ port_reconfig_credits(struct test *t)
 		}
 	}
 
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);
 
 	struct rte_event ev[NPKTS];
 	int deq = rte_event_dequeue_burst(evdev, t->port[0], ev,
@@ -1516,7 +1519,7 @@ xstats_id_reset_tests(struct test *t)
 		}
 	}
 
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);
 
 	static const char * const dev_names[] = {
 		"dev_rx", "dev_tx", "dev_drop", "dev_sched_calls",
@@ -1907,7 +1910,7 @@ qid_priorities(struct test *t)
 		}
 	}
 
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);
 
 	/* dequeue packets, verify priority was upheld */
 	struct rte_event ev[32];
@@ -1988,7 +1991,7 @@ load_balancing(struct test *t)
 		}
 	}
 
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);
 
 	struct test_event_dev_stats stats;
 	err = test_event_dev_stats_get(evdev, &stats);
@@ -2088,7 +2091,7 @@ load_balancing_history(struct test *t)
 	}
 
 	/* call the scheduler */
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);
 
 	/* Dequeue the flow 0 packet from port 1, so that we can then drop */
 	struct rte_event ev;
@@ -2105,7 +2108,7 @@ load_balancing_history(struct test *t)
 	rte_event_enqueue_burst(evdev, t->port[1], &release_ev, 1);
 
 	/* call the scheduler */
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);
 
 	/*
 	 * Set up the next set of flows, first a new flow to fill up
@@ -2138,7 +2141,7 @@ load_balancing_history(struct test *t)
 	}
 
 	/* schedule */
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);
 
 	err = test_event_dev_stats_get(evdev, &stats);
 	if (err) {
@@ -2182,7 +2185,7 @@ load_balancing_history(struct test *t)
 		while (rte_event_dequeue_burst(evdev, i, &ev, 1, 0))
 			rte_event_enqueue_burst(evdev, i, &release_ev, 1);
 	}
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);
 
 	cleanup(t);
 	return 0;
@@ -2248,7 +2251,7 @@ invalid_qid(struct test *t)
 	}
 
 	/* call the scheduler */
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);
 
 	err = test_event_dev_stats_get(evdev, &stats);
 	if (err) {
@@ -2333,7 +2336,7 @@ single_packet(struct test *t)
 		return -1;
 	}
 
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);
 
 	err = test_event_dev_stats_get(evdev, &stats);
 	if (err) {
@@ -2376,7 +2379,7 @@ single_packet(struct test *t)
 		printf("%d: Failed to enqueue\n", __LINE__);
 		return -1;
 	}
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);
 
 	err = test_event_dev_stats_get(evdev, &stats);
 	if (stats.port_inflight[wrk_enq] != 0) {
@@ -2464,7 +2467,7 @@ inflight_counts(struct test *t)
 	}
 
 	/* schedule */
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);
 
 	err = test_event_dev_stats_get(evdev, &stats);
 	if (err) {
@@ -2520,7 +2523,7 @@ inflight_counts(struct test *t)
 	 * As the scheduler core decrements inflights, it needs to run to
 	 * process packets to act on the drop messages
 	 */
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);
 
 	err = test_event_dev_stats_get(evdev, &stats);
 	if (stats.port_inflight[p1] != 0) {
@@ -2555,7 +2558,7 @@ inflight_counts(struct test *t)
 	 * As the scheduler core decrements inflights, it needs to run to
 	 * process packets to act on the drop messages
 	 */
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);
 
 	err = test_event_dev_stats_get(evdev, &stats);
 	if (stats.port_inflight[p2] != 0) {
@@ -2649,7 +2652,7 @@ parallel_basic(struct test *t, int check_order)
 		}
 	}
 
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);
 
 	/* use extra slot to make logic in loops easier */
 	struct rte_event deq_ev[w3_port + 1];
@@ -2676,7 +2679,7 @@ parallel_basic(struct test *t, int check_order)
 			return -1;
 		}
 	}
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);
 
 	/* dequeue from the tx ports, we should get 3 packets */
 	deq_pkts = rte_event_dequeue_burst(evdev, t->port[tx_port], deq_ev,
@@ -2754,7 +2757,7 @@ holb(struct test *t) /* test to check we avoid basic head-of-line blocking */
 		printf("%d: Error doing first enqueue\n", __LINE__);
 		goto err;
 	}
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);
 
 	if (rte_event_dev_xstats_by_name_get(evdev, "port_0_cq_ring_used",
 				NULL) != 1)
@@ -2779,7 +2782,7 @@ holb(struct test *t) /* test to check we avoid basic head-of-line blocking */
 			printf("%d: Error with enqueue\n", __LINE__);
 			goto err;
 		}
-		rte_event_schedule(evdev);
+		rte_service_run_iter_on_app_lcore(t->service_id);
 	} while (rte_event_dev_xstats_by_name_get(evdev,
 				rx_port_free_stat, NULL) != 0);
 
@@ -2789,7 +2792,7 @@ holb(struct test *t) /* test to check we avoid basic head-of-line blocking */
 		printf("%d: Error with enqueue\n", __LINE__);
 		goto err;
 	}
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);
 
 	/* check that the other port still has an empty CQ */
 	if (rte_event_dev_xstats_by_name_get(evdev, other_port_used_stat, NULL)
@@ -2812,7 +2815,7 @@ holb(struct test *t) /* test to check we avoid basic head-of-line blocking */
 		printf("%d: Error with enqueue\n", __LINE__);
 		goto err;
 	}
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);
 
 	if (rte_event_dev_xstats_by_name_get(evdev, other_port_used_stat, NULL)
 			!= 1) {
@@ -3002,7 +3005,7 @@ worker_loopback(struct test *t)
 	while (rte_eal_get_lcore_state(p_lcore) != FINISHED ||
 			rte_eal_get_lcore_state(w_lcore) != FINISHED) {
 
-		rte_event_schedule(evdev);
+		rte_service_run_iter_on_app_lcore(t->service_id);
 
 		uint64_t new_cycles = rte_get_timer_cycles();
 
@@ -3029,7 +3032,8 @@ worker_loopback(struct test *t)
 			cycles = new_cycles;
 		}
 	}
-	rte_event_schedule(evdev); /* ensure all completions are flushed */
+	rte_service_run_iter_on_app_lcore(t->service_id);
+	/* ensure all completions are flushed */
 
 	rte_eal_mp_wait_lcore();
 
@@ -3066,6 +3070,14 @@ test_sw_eventdev(void)
 		}
 	}
 
+	if (rte_event_dev_service_id_get(evdev, &t->service_id) < 0) {
+		printf("Failed to get service ID for software event dev\n");
+		return -1;
+	}
+
+	rte_service_runstate_set(t->service_id, 1);
+	rte_service_set_runstate_mapped_check(t->service_id, 0);
+
 	/* Only create mbuf pool once, reuse for each test run */
 	if (!eventdev_func_mempool) {
 		eventdev_func_mempool = rte_pktmbuf_pool_create(