From patchwork Sun Oct 22 09:16:22 2017
X-Patchwork-Submitter: Pavan Nikhilesh
X-Patchwork-Id: 30666
X-Patchwork-Delegate: jerinj@marvell.com
From: Pavan Nikhilesh
To: jerin.jacob@caviumnetworks.com, hemant.agrawal@nxp.com,
 harry.van.haaren@intel.com
Cc: dev@dpdk.org, Pavan Bhagavatula
Date: Sun, 22 Oct 2017 14:46:22 +0530
Message-Id: <1508663785-15288-4-git-send-email-pbhagavatula@caviumnetworks.com>
In-Reply-To: <1508663785-15288-1-git-send-email-pbhagavatula@caviumnetworks.com>
References: <1507712990-13064-1-git-send-email-pbhagavatula@caviumnetworks.com>
 <1508663785-15288-1-git-send-email-pbhagavatula@caviumnetworks.com>
Subject: [dpdk-dev] [PATCH v3 4/7] test/eventdev: update test to use service core
List-Id: DPDK patches and discussions
From: Pavan Bhagavatula

Use a service core for event scheduling instead of calling the event
schedule API directly.
Signed-off-by: Pavan Nikhilesh
---
 test/test/test_eventdev_sw.c | 120 ++++++++++++++++++++++++-------------------
 1 file changed, 67 insertions(+), 53 deletions(-)

diff --git a/test/test/test_eventdev_sw.c b/test/test/test_eventdev_sw.c
index 7219886..81954dc 100644
--- a/test/test/test_eventdev_sw.c
+++ b/test/test/test_eventdev_sw.c
@@ -49,6 +49,7 @@
 #include
 #include
 #include
+#include <rte_service.h>

 #include "test.h"

@@ -320,6 +321,19 @@ struct test_event_dev_stats {
 	uint64_t qid_tx_pkts[MAX_QIDS];
 };

+static inline void
+wait_schedule(int evdev)
+{
+	static const char * const dev_names[] = {"dev_sched_calls"};
+	uint64_t val;
+
+	val = rte_event_dev_xstats_by_name_get(evdev, dev_names[0],
+			0);
+	while ((rte_event_dev_xstats_by_name_get(evdev, dev_names[0], 0) - val)
+			< 2)
+		;
+}
+
 static inline int
 test_event_dev_stats_get(int dev_id, struct test_event_dev_stats *stats)
 {
@@ -392,9 +406,9 @@ run_prio_packet_test(struct test *t)
 		RTE_EVENT_DEV_PRIORITY_HIGHEST
 	};
 	unsigned int i;
+	struct rte_event ev_arr[2];
 	for (i = 0; i < RTE_DIM(MAGIC_SEQN); i++) {
 		/* generate pkt and enqueue */
-		struct rte_event ev;
 		struct rte_mbuf *arp = rte_gen_arp(0, t->mbuf_pool);
 		if (!arp) {
 			printf("%d: gen of pkt failed\n", __LINE__);
@@ -402,20 +416,20 @@ run_prio_packet_test(struct test *t)
 		}
 		arp->seqn = MAGIC_SEQN[i];

-		ev = (struct rte_event){
+		ev_arr[i] = (struct rte_event){
 			.priority = PRIORITY[i],
 			.op = RTE_EVENT_OP_NEW,
 			.queue_id = t->qid[0],
 			.mbuf = arp
 		};
-		err = rte_event_enqueue_burst(evdev, t->port[0], &ev, 1);
-		if (err < 0) {
-			printf("%d: error failed to enqueue\n", __LINE__);
-			return -1;
-		}
+	}
+	err = rte_event_enqueue_burst(evdev, t->port[0], ev_arr, 2);
+	if (err < 0) {
+		printf("%d: error failed to enqueue\n", __LINE__);
+		return -1;
 	}

-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	struct test_event_dev_stats stats;
 	err = test_event_dev_stats_get(evdev, &stats);
@@ -425,8 +439,8 @@ run_prio_packet_test(struct test *t)
 	}

 	if (stats.port_rx_pkts[t->port[0]] != 2) {
-		printf("%d: error stats incorrect for directed port\n",
-				__LINE__);
+		printf("%d: error stats incorrect for directed port %"PRIu64"\n",
+				__LINE__, stats.port_rx_pkts[t->port[0]]);
 		rte_event_dev_dump(evdev, stdout);
 		return -1;
 	}
@@ -439,6 +453,7 @@ run_prio_packet_test(struct test *t)
 		rte_event_dev_dump(evdev, stdout);
 		return -1;
 	}
+
 	if (ev.mbuf->seqn != MAGIC_SEQN[1]) {
 		printf("%d: first packet out not highest priority\n",
 				__LINE__);
@@ -507,7 +522,7 @@ test_single_directed_packet(struct test *t)
 	}

 	/* Run schedule() as dir packets may need to be re-ordered */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	struct test_event_dev_stats stats;
 	err = test_event_dev_stats_get(evdev, &stats);
@@ -574,7 +589,7 @@ test_directed_forward_credits(struct test *t)
 		printf("%d: error failed to enqueue\n", __LINE__);
 		return -1;
 	}
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	uint32_t deq_pkts;
 	deq_pkts = rte_event_dequeue_burst(evdev, 0, &ev, 1, 0);
@@ -736,7 +751,7 @@ burst_packets(struct test *t)
 			return -1;
 		}
 	}
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	/* Check stats for all NUM_PKTS arrived to sched core */
 	struct test_event_dev_stats stats;
@@ -825,7 +840,7 @@ abuse_inflights(struct test *t)
 	}

 	/* schedule */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	struct test_event_dev_stats stats;
@@ -963,7 +978,7 @@ xstats_tests(struct test *t)
 		}
 	}

-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	/* Device names / values */
 	int num_stats = rte_event_dev_xstats_names_get(evdev,
@@ -974,8 +989,8 @@ xstats_tests(struct test *t)
 	ret = rte_event_dev_xstats_get(evdev, RTE_EVENT_DEV_XSTATS_DEVICE,
 			0, ids, values, num_stats);
-	static const uint64_t expected[] = {3, 3, 0, 1, 0, 0};
-	for (i = 0; (signed int)i < ret; i++) {
+	static const uint64_t expected[] = {3, 3, 0};
+	for (i = 0; (signed int)i < 3; i++) {
 		if (expected[i] != values[i]) {
 			printf(
 				"%d Error xstat %d (id %d) %s : %"PRIu64
@@ -994,7 +1009,7 @@ xstats_tests(struct test *t)
 	ret = rte_event_dev_xstats_get(evdev, RTE_EVENT_DEV_XSTATS_DEVICE,
 			0, ids, values, num_stats);
-	for (i = 0; (signed int)i < ret; i++) {
+	for (i = 0; (signed int)i < 3; i++) {
 		if (expected_zero[i] != values[i]) {
 			printf(
 				"%d Error, xstat %d (id %d) %s : %"PRIu64
@@ -1290,7 +1305,7 @@ port_reconfig_credits(struct test *t)
 		}
 	}

-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	struct rte_event ev[NPKTS];
 	int deq = rte_event_dequeue_burst(evdev, t->port[0], ev,
@@ -1516,14 +1531,12 @@ xstats_id_reset_tests(struct test *t)
 		}
 	}

-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	static const char * const dev_names[] = {
-		"dev_rx", "dev_tx", "dev_drop", "dev_sched_calls",
-		"dev_sched_no_iq_enq", "dev_sched_no_cq_enq",
-	};
+		"dev_rx", "dev_tx", "dev_drop"};
 	uint64_t dev_expected[] = {NPKTS, NPKTS, 0, 1, 0, 0};
-	for (i = 0; (int)i < ret; i++) {
+	for (i = 0; (int)i < 3; i++) {
 		unsigned int id;
 		uint64_t val = rte_event_dev_xstats_by_name_get(evdev,
 				dev_names[i],
@@ -1888,26 +1901,26 @@ qid_priorities(struct test *t)
 	}

 	/* enqueue 3 packets, setting seqn and QID to check priority */
+	struct rte_event ev_arr[3];
 	for (i = 0; i < 3; i++) {
-		struct rte_event ev;
 		struct rte_mbuf *arp = rte_gen_arp(0, t->mbuf_pool);
 		if (!arp) {
 			printf("%d: gen of pkt failed\n", __LINE__);
 			return -1;
 		}

-		ev.queue_id = t->qid[i];
-		ev.op = RTE_EVENT_OP_NEW;
-		ev.mbuf = arp;
+		ev_arr[i].queue_id = t->qid[i];
+		ev_arr[i].op = RTE_EVENT_OP_NEW;
+		ev_arr[i].mbuf = arp;
 		arp->seqn = i;
-		int err = rte_event_enqueue_burst(evdev, t->port[0], &ev, 1);
-		if (err != 1) {
-			printf("%d: Failed to enqueue\n", __LINE__);
-			return -1;
-		}
+	}
+	int err = rte_event_enqueue_burst(evdev, t->port[0], ev_arr, 3);
+	if (err != 3) {
+		printf("%d: Failed to enqueue\n", __LINE__);
+		return -1;
 	}

-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	/* dequeue packets, verify priority was upheld */
 	struct rte_event ev[32];
@@ -1988,7 +2001,7 @@ load_balancing(struct test *t)
 		}
 	}

-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	struct test_event_dev_stats stats;
 	err = test_event_dev_stats_get(evdev, &stats);
@@ -2088,7 +2101,7 @@ load_balancing_history(struct test *t)
 	}

 	/* call the scheduler */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	/* Dequeue the flow 0 packet from port 1, so that we can then drop */
 	struct rte_event ev;
@@ -2105,7 +2118,7 @@ load_balancing_history(struct test *t)
 	rte_event_enqueue_burst(evdev, t->port[1], &release_ev, 1);

 	/* call the scheduler */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	/*
 	 * Set up the next set of flows, first a new flow to fill up
@@ -2138,7 +2151,7 @@ load_balancing_history(struct test *t)
 	}

 	/* schedule */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	err = test_event_dev_stats_get(evdev, &stats);
 	if (err) {
@@ -2182,7 +2195,7 @@ load_balancing_history(struct test *t)
 		while (rte_event_dequeue_burst(evdev, i, &ev, 1, 0))
 			rte_event_enqueue_burst(evdev, i, &release_ev, 1);
 	}
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	cleanup(t);
 	return 0;
@@ -2248,7 +2261,7 @@ invalid_qid(struct test *t)
 	}

 	/* call the scheduler */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	err = test_event_dev_stats_get(evdev, &stats);
 	if (err) {
@@ -2333,7 +2346,7 @@ single_packet(struct test *t)
 		return -1;
 	}

-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	err = test_event_dev_stats_get(evdev, &stats);
 	if (err) {
@@ -2376,7 +2389,7 @@ single_packet(struct test *t)
 		printf("%d: Failed to enqueue\n", __LINE__);
 		return -1;
 	}
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	err = test_event_dev_stats_get(evdev, &stats);
 	if (stats.port_inflight[wrk_enq] != 0) {
@@ -2464,7 +2477,7 @@ inflight_counts(struct test *t)
 	}

 	/* schedule */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	err = test_event_dev_stats_get(evdev, &stats);
 	if (err) {
@@ -2520,7 +2533,7 @@ inflight_counts(struct test *t)
 	 * As the scheduler core decrements inflights, it needs to run to
 	 * process packets to act on the drop messages
 	 */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	err = test_event_dev_stats_get(evdev, &stats);
 	if (stats.port_inflight[p1] != 0) {
@@ -2555,7 +2568,7 @@ inflight_counts(struct test *t)
 	 * As the scheduler core decrements inflights, it needs to run to
 	 * process packets to act on the drop messages
 	 */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	err = test_event_dev_stats_get(evdev, &stats);
 	if (stats.port_inflight[p2] != 0) {
@@ -2649,7 +2662,7 @@ parallel_basic(struct test *t, int check_order)
 		}
 	}

-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	/* use extra slot to make logic in loops easier */
 	struct rte_event deq_ev[w3_port + 1];
@@ -2676,7 +2689,7 @@ parallel_basic(struct test *t, int check_order)
 			return -1;
 		}
 	}
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	/* dequeue from the tx ports, we should get 3 packets */
 	deq_pkts = rte_event_dequeue_burst(evdev, t->port[tx_port], deq_ev,
@@ -2754,7 +2767,7 @@ holb(struct test *t) /* test to check we avoid basic head-of-line blocking */
 		printf("%d: Error doing first enqueue\n", __LINE__);
 		goto err;
 	}
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	if (rte_event_dev_xstats_by_name_get(evdev, "port_0_cq_ring_used", NULL)
 			!= 1)
@@ -2779,7 +2792,7 @@ holb(struct test *t) /* test to check we avoid basic head-of-line blocking */
 			printf("%d: Error with enqueue\n", __LINE__);
 			goto err;
 		}
-		rte_event_schedule(evdev);
+		wait_schedule(evdev);
 	} while (rte_event_dev_xstats_by_name_get(evdev,
 				rx_port_free_stat, NULL) != 0);
@@ -2789,7 +2802,7 @@ holb(struct test *t) /* test to check we avoid basic head-of-line blocking */
 		printf("%d: Error with enqueue\n", __LINE__);
 		goto err;
 	}
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	/* check that the other port still has an empty CQ */
 	if (rte_event_dev_xstats_by_name_get(evdev, other_port_used_stat, NULL)
@@ -2812,7 +2825,7 @@ holb(struct test *t) /* test to check we avoid basic head-of-line blocking */
 		printf("%d: Error with enqueue\n", __LINE__);
 		goto err;
 	}
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);

 	if (rte_event_dev_xstats_by_name_get(evdev, other_port_used_stat, NULL)
 			!= 1) {
@@ -3002,7 +3015,7 @@ worker_loopback(struct test *t)

 	while (rte_eal_get_lcore_state(p_lcore) != FINISHED ||
 			rte_eal_get_lcore_state(w_lcore) != FINISHED) {
-		rte_event_schedule(evdev);
+		wait_schedule(evdev);

 		uint64_t new_cycles = rte_get_timer_cycles();
@@ -3029,7 +3042,7 @@ worker_loopback(struct test *t)
 			cycles = new_cycles;
 		}
 	}
-	rte_event_schedule(evdev); /* ensure all completions are flushed */
+	wait_schedule(evdev); /* ensure all completions are flushed */

 	rte_eal_mp_wait_lcore();
@@ -3064,6 +3077,7 @@ test_sw_eventdev(void)
 			printf("Error finding newly created eventdev\n");
 			return -1;
 		}
+		rte_service_start_with_defaults();
 	}

 	/* Only create mbuf pool once, reuse for each test run */