From patchwork Fri Oct 13 16:36:47 2017
X-Patchwork-Submitter: Pavan Nikhilesh
X-Patchwork-Id: 30378
X-Patchwork-Delegate: jerinj@marvell.com
From: Pavan Nikhilesh
To: jerin.jacob@caviumnetworks.com, hemant.agrawal@nxp.com,
 harry.van.haaren@intel.com
Cc: dev@dpdk.org, Pavan Bhagavatula
Date: Fri, 13 Oct 2017 22:06:47 +0530
Message-Id: <1507912610-14409-4-git-send-email-pbhagavatula@caviumnetworks.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1507912610-14409-1-git-send-email-pbhagavatula@caviumnetworks.com>
References: <1507712990-13064-1-git-send-email-pbhagavatula@caviumnetworks.com>
 <1507912610-14409-1-git-send-email-pbhagavatula@caviumnetworks.com>
Subject: [dpdk-dev] [PATCH v2 4/7] test/eventdev: update test to use service core
List-Id: DPDK patches and discussions

From: Pavan Bhagavatula

Use service core for event scheduling instead of calling the event
schedule api directly.
Signed-off-by: Pavan Nikhilesh
---
 test/test/test_eventdev_sw.c | 120 ++++++++++++++++++++++++-------------------
 1 file changed, 67 insertions(+), 53 deletions(-)

diff --git a/test/test/test_eventdev_sw.c b/test/test/test_eventdev_sw.c
index 7219886..81954dc 100644
--- a/test/test/test_eventdev_sw.c
+++ b/test/test/test_eventdev_sw.c
@@ -49,6 +49,7 @@
 #include
 #include
 #include
+#include

 #include "test.h"

@@ -320,6 +321,19 @@ struct test_event_dev_stats {
 	uint64_t qid_tx_pkts[MAX_QIDS];
 };

+static inline void
+wait_schedule(int evdev)
+{
+	static const char * const dev_names[] = {"dev_sched_calls"};
+	uint64_t val;
+
+	val = rte_event_dev_xstats_by_name_get(evdev, dev_names[0],
+			0);
+	while ((rte_event_dev_xstats_by_name_get(evdev, dev_names[0], 0) - val)
+			< 2)
+		;
+}
+
 static inline int
 test_event_dev_stats_get(int dev_id, struct test_event_dev_stats *stats)
 {
@@ -392,9 +406,9 @@ run_prio_packet_test(struct test *t)
 		RTE_EVENT_DEV_PRIORITY_HIGHEST
 	};
 	unsigned int i;
+	struct rte_event ev_arr[2];
 	for (i = 0; i < RTE_DIM(MAGIC_SEQN); i++) {
 		/* generate pkt and enqueue */
-		struct rte_event ev;
 		struct rte_mbuf *arp = rte_gen_arp(0, t->mbuf_pool);
 		if (!arp) {
 			printf("%d: gen of pkt failed\n", __LINE__);
@@ -402,20 +416,20 @@ run_prio_packet_test(struct test *t)
 		}
 		arp->seqn = MAGIC_SEQN[i];

-		ev = (struct rte_event){
+		ev_arr[i] = (struct rte_event){
 			.priority = PRIORITY[i],
 			.op = RTE_EVENT_OP_NEW,
 			.queue_id = t->qid[0],
 			.mbuf = arp
 		};
-		err = rte_event_enqueue_burst(evdev, t->port[0], &ev, 1);
-		if (err < 0) {
-			printf("%d: error failed to enqueue\n", __LINE__);
-			return -1;
-		}
+	}
+	err = rte_event_enqueue_burst(evdev, t->port[0], ev_arr, 2);
+	if (err < 0) {
+		printf("%d: error failed to enqueue\n", __LINE__);
+		return -1;
 	}

-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	struct test_event_dev_stats stats;
 	err = test_event_dev_stats_get(evdev, &stats);
@@ -425,8 +439,8 @@ run_prio_packet_test(struct test *t)
 	}

 	if (stats.port_rx_pkts[t->port[0]] != 2) {
-		printf("%d: error stats incorrect for directed port\n",
-				__LINE__);
+		printf("%d: error stats incorrect for directed port %"PRIu64"\n",
+				__LINE__, stats.port_rx_pkts[t->port[0]]);
 		rte_event_dev_dump(evdev, stdout);
 		return -1;
 	}
@@ -439,6 +453,7 @@ run_prio_packet_test(struct test *t)
 		rte_event_dev_dump(evdev, stdout);
 		return -1;
 	}
+
 	if (ev.mbuf->seqn != MAGIC_SEQN[1]) {
 		printf("%d: first packet out not highest priority\n",
 				__LINE__);
@@ -507,7 +522,7 @@ test_single_directed_packet(struct test *t)
 	}

 	/* Run schedule() as dir packets may need to be re-ordered */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	struct test_event_dev_stats stats;
 	err = test_event_dev_stats_get(evdev, &stats);
@@ -574,7 +589,7 @@ test_directed_forward_credits(struct test *t)
 			printf("%d: error failed to enqueue\n", __LINE__);
 			return -1;
 		}
-		rte_event_schedule(evdev);
+		wait_schedule(evdev);

 		uint32_t deq_pkts;
 		deq_pkts = rte_event_dequeue_burst(evdev, 0, &ev, 1, 0);
@@ -736,7 +751,7 @@ burst_packets(struct test *t)
 			return -1;
 		}
 	}
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	/* Check stats for all NUM_PKTS arrived to sched core */
 	struct test_event_dev_stats stats;
@@ -825,7 +840,7 @@ abuse_inflights(struct test *t)
 	}

 	/* schedule */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	struct test_event_dev_stats stats;

@@ -963,7 +978,7 @@ xstats_tests(struct test *t)
 		}
 	}

-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	/* Device names / values */
 	int num_stats = rte_event_dev_xstats_names_get(evdev,
@@ -974,8 +989,8 @@ xstats_tests(struct test *t)
 	ret = rte_event_dev_xstats_get(evdev,
 					RTE_EVENT_DEV_XSTATS_DEVICE,
 					0, ids, values, num_stats);
-	static const uint64_t expected[] = {3, 3, 0, 1, 0, 0};
-	for (i = 0; (signed int)i < ret; i++) {
+	static const uint64_t expected[] = {3, 3, 0};
+	for (i = 0; (signed int)i < 3; i++) {
 		if (expected[i] != values[i]) {
 			printf(
 				"%d Error xstat %d (id %d) %s : %"PRIu64
@@ -994,7 +1009,7 @@ xstats_tests(struct test *t)
 	ret = rte_event_dev_xstats_get(evdev,
 					RTE_EVENT_DEV_XSTATS_DEVICE,
 					0, ids, values, num_stats);
-	for (i = 0; (signed int)i < ret; i++) {
+	for (i = 0; (signed int)i < 3; i++) {
 		if (expected_zero[i] != values[i]) {
 			printf(
 				"%d Error, xstat %d (id %d) %s : %"PRIu64
@@ -1290,7 +1305,7 @@ port_reconfig_credits(struct test *t)
 		}
 	}

-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	struct rte_event ev[NPKTS];
 	int deq = rte_event_dequeue_burst(evdev, t->port[0], ev,
@@ -1516,14 +1531,12 @@ xstats_id_reset_tests(struct test *t)
 		}
 	}

-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	static const char * const dev_names[] = {
-		"dev_rx", "dev_tx", "dev_drop", "dev_sched_calls",
-		"dev_sched_no_iq_enq", "dev_sched_no_cq_enq",
-	};
+		"dev_rx", "dev_tx", "dev_drop"};
 	uint64_t dev_expected[] = {NPKTS, NPKTS, 0, 1, 0, 0};
-	for (i = 0; (int)i < ret; i++) {
+	for (i = 0; (int)i < 3; i++) {
 		unsigned int id;
 		uint64_t val = rte_event_dev_xstats_by_name_get(evdev,
 								dev_names[i],
@@ -1888,26 +1901,26 @@ qid_priorities(struct test *t)
 	}

 	/* enqueue 3 packets, setting seqn and QID to check priority */
+	struct rte_event ev_arr[3];
 	for (i = 0; i < 3; i++) {
-		struct rte_event ev;
 		struct rte_mbuf *arp = rte_gen_arp(0, t->mbuf_pool);
 		if (!arp) {
 			printf("%d: gen of pkt failed\n", __LINE__);
 			return -1;
 		}
-		ev.queue_id = t->qid[i];
-		ev.op = RTE_EVENT_OP_NEW;
-		ev.mbuf = arp;
+		ev_arr[i].queue_id = t->qid[i];
+		ev_arr[i].op = RTE_EVENT_OP_NEW;
+		ev_arr[i].mbuf = arp;
 		arp->seqn = i;
-
-		int err = rte_event_enqueue_burst(evdev, t->port[0], &ev, 1);
-		if (err != 1) {
-			printf("%d: Failed to enqueue\n", __LINE__);
-			return -1;
-		}
+	}
+	int err = rte_event_enqueue_burst(evdev, t->port[0], ev_arr, 3);
+	if (err != 3) {
+		printf("%d: Failed to enqueue\n", __LINE__);
+		return -1;
 	}

-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	/* dequeue packets, verify priority was upheld */
 	struct rte_event ev[32];
@@ -1988,7 +2001,7 @@ load_balancing(struct test *t)
 		}
 	}

-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	struct test_event_dev_stats stats;
 	err = test_event_dev_stats_get(evdev, &stats);
@@ -2088,7 +2101,7 @@ load_balancing_history(struct test *t)
 	}

 	/* call the scheduler */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	/* Dequeue the flow 0 packet from port 1, so that we can then drop */
 	struct rte_event ev;
@@ -2105,7 +2118,7 @@ load_balancing_history(struct test *t)
 	rte_event_enqueue_burst(evdev, t->port[1], &release_ev, 1);

 	/* call the scheduler */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	/*
 	 * Set up the next set of flows, first a new flow to fill up
@@ -2138,7 +2151,7 @@ load_balancing_history(struct test *t)
 	}

 	/* schedule */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	err = test_event_dev_stats_get(evdev, &stats);
 	if (err) {
@@ -2182,7 +2195,7 @@ load_balancing_history(struct test *t)
 		while (rte_event_dequeue_burst(evdev, i, &ev, 1, 0))
 			rte_event_enqueue_burst(evdev, i, &release_ev, 1);
 	}
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	cleanup(t);
 	return 0;
@@ -2248,7 +2261,7 @@ invalid_qid(struct test *t)
 	}

 	/* call the scheduler */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	err = test_event_dev_stats_get(evdev, &stats);
 	if (err) {
@@ -2333,7 +2346,7 @@ single_packet(struct test *t)
 		return -1;
 	}

-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	err = test_event_dev_stats_get(evdev, &stats);
 	if (err) {
@@ -2376,7 +2389,7 @@ single_packet(struct test *t)
 		printf("%d: Failed to enqueue\n", __LINE__);
 		return -1;
 	}
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	err = test_event_dev_stats_get(evdev, &stats);
 	if (stats.port_inflight[wrk_enq] != 0) {
@@ -2464,7 +2477,7 @@ inflight_counts(struct test *t)
 	}

 	/* schedule */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	err = test_event_dev_stats_get(evdev, &stats);
 	if (err) {
@@ -2520,7 +2533,7 @@ inflight_counts(struct test *t)
 	 * As the scheduler core decrements inflights, it needs to run to
 	 * process packets to act on the drop messages
 	 */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	err = test_event_dev_stats_get(evdev, &stats);
 	if (stats.port_inflight[p1] != 0) {
@@ -2555,7 +2568,7 @@ inflight_counts(struct test *t)
 	 * As the scheduler core decrements inflights, it needs to run to
 	 * process packets to act on the drop messages
 	 */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	err = test_event_dev_stats_get(evdev, &stats);
 	if (stats.port_inflight[p2] != 0) {
@@ -2649,7 +2662,7 @@ parallel_basic(struct test *t, int check_order)
 		}
 	}

-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	/* use extra slot to make logic in loops easier */
 	struct rte_event deq_ev[w3_port + 1];
@@ -2676,7 +2689,7 @@ parallel_basic(struct test *t, int check_order)
 			return -1;
 		}
 	}
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	/* dequeue from the tx ports, we should get 3 packets */
 	deq_pkts = rte_event_dequeue_burst(evdev, t->port[tx_port], deq_ev,
@@ -2754,7 +2767,7 @@ holb(struct test *t) /* test to check we avoid basic head-of-line blocking */
 		printf("%d: Error doing first enqueue\n", __LINE__);
 		goto err;
 	}
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	if (rte_event_dev_xstats_by_name_get(evdev, "port_0_cq_ring_used", NULL)
 			!= 1)
@@ -2779,7 +2792,7 @@ holb(struct test *t) /* test to check we avoid basic head-of-line blocking */
 			printf("%d: Error with enqueue\n", __LINE__);
 			goto err;
 		}
-		rte_event_schedule(evdev);
+		wait_schedule(evdev);
 	} while (rte_event_dev_xstats_by_name_get(evdev,
 			rx_port_free_stat, NULL) != 0);
@@ -2789,7 +2802,7 @@ holb(struct test *t) /* test to check we avoid basic head-of-line blocking */
 		printf("%d: Error with enqueue\n", __LINE__);
 		goto err;
 	}
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	/* check that the other port still has an empty CQ */
 	if (rte_event_dev_xstats_by_name_get(evdev, other_port_used_stat, NULL)
@@ -2812,7 +2825,7 @@ holb(struct test *t) /* test to check we avoid basic head-of-line blocking */
 		printf("%d: Error with enqueue\n", __LINE__);
 		goto err;
 	}
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	if (rte_event_dev_xstats_by_name_get(evdev, other_port_used_stat, NULL)
 			!= 1) {
@@ -3002,7 +3015,7 @@ worker_loopback(struct test *t)
 	while (rte_eal_get_lcore_state(p_lcore) != FINISHED ||
 			rte_eal_get_lcore_state(w_lcore) != FINISHED) {

-		rte_event_schedule(evdev);
+		wait_schedule(evdev);

 		uint64_t new_cycles = rte_get_timer_cycles();

@@ -3029,7 +3042,7 @@ worker_loopback(struct test *t)
 			cycles = new_cycles;
 		}
 	}
-	rte_event_schedule(evdev); /* ensure all completions are flushed */
+	wait_schedule(evdev); /* ensure all completions are flushed */

 	rte_eal_mp_wait_lcore();

@@ -3064,6 +3077,7 @@ test_sw_eventdev(void)
 			printf("Error finding newly created eventdev\n");
 			return -1;
 		}
+		rte_service_start_with_defaults();
 	}

 	/* Only create mbuf pool once, reuse for each test run */