From patchwork Mon May 22 09:16:26 2023
X-Patchwork-Submitter: Mattias Rönnblom
X-Patchwork-Id: 127151
X-Patchwork-Delegate: jerinj@marvell.com
From: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
CC: Jerin Jacob, Harry van Haaren
Subject: [RFC v3 1/3] eventdev: introduce event dispatcher
Date: Mon, 22 May 2023 11:16:26 +0200
Message-ID: <20230522091628.96236-2-mattias.ronnblom@ericsson.com>
In-Reply-To: <20230522091628.96236-1-mattias.ronnblom@ericsson.com>
References: <20210409113223.65260-1-mattias.ronnblom@ericsson.com> <20230522091628.96236-1-mattias.ronnblom@ericsson.com>
MIME-Version: 1.0
List-Id: DPDK patches and discussions
X-BeenThere: dev@dpdk.org

The purpose of the event dispatcher is to help reduce coupling in an
Eventdev-based DPDK application. In addition, the event dispatcher also
provides a convenient and flexible way for the application to use
service cores for application-level processing.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
 lib/eventdev/meson.build            |   2 +
 lib/eventdev/rte_event_dispatcher.c | 670 ++++++++++++++++++++++++++++
 lib/eventdev/rte_event_dispatcher.h | 440 ++++++++++++++++++
 lib/eventdev/version.map            |  12 +
 4 files changed, 1124 insertions(+)
 create mode 100644 lib/eventdev/rte_event_dispatcher.c
 create mode 100644 lib/eventdev/rte_event_dispatcher.h

diff --git a/lib/eventdev/meson.build b/lib/eventdev/meson.build
index 6edf98dfa5..c0edc744fe 100644
--- a/lib/eventdev/meson.build
+++ b/lib/eventdev/meson.build
@@ -19,6 +19,7 @@ sources = files(
         'rte_event_crypto_adapter.c',
         'rte_event_eth_rx_adapter.c',
         'rte_event_eth_tx_adapter.c',
+        'rte_event_dispatcher.c',
         'rte_event_ring.c',
         'rte_event_timer_adapter.c',
         'rte_eventdev.c',
@@ -27,6 +28,7 @@ headers = files(
         'rte_event_crypto_adapter.h',
         'rte_event_eth_rx_adapter.h',
         'rte_event_eth_tx_adapter.h',
+        'rte_event_dispatcher.h',
         'rte_event_ring.h',
         'rte_event_timer_adapter.h',
         'rte_eventdev.h',
diff --git a/lib/eventdev/rte_event_dispatcher.c b/lib/eventdev/rte_event_dispatcher.c
new file mode 100644
index 0000000000..591efeef80
--- /dev/null
+++ b/lib/eventdev/rte_event_dispatcher.c
@@ -0,0 +1,670 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Ericsson AB
+ */
+
+#include <stdbool.h>
+#include <stdint.h>
+
+#include <rte_lcore.h>
+#include <rte_malloc.h>
+#include <rte_service_component.h>
+
+#include "eventdev_pmd.h"
+
+#include <rte_event_dispatcher.h>
+
+#define
RED_MAX_PORTS_PER_LCORE (4) +#define RED_MAX_HANDLERS (32) +#define RED_MAX_FINALIZERS (16) + +struct rte_event_dispatcher_lcore_port { + uint8_t port_id; + uint16_t batch_size; + uint64_t timeout; +}; + +struct rte_event_dispatcher_lcore { + uint8_t num_ports; + struct rte_event_dispatcher_lcore_port ports[RED_MAX_PORTS_PER_LCORE]; + struct rte_event_dispatcher_stats stats; +} __rte_cache_aligned; + +struct rte_event_dispatcher_handler { + int id; + rte_event_dispatcher_match_t match_fun; + void *match_data; + rte_event_dispatcher_process_t process_fun; + void *process_data; +}; + +struct rte_event_dispatcher_finalizer { + int id; + rte_event_dispatcher_finalize_t finalize_fun; + void *finalize_data; +}; + +struct rte_event_dispatcher { + uint8_t id; + uint8_t event_dev_id; + int socket_id; + uint32_t service_id; + struct rte_event_dispatcher_lcore lcores[RTE_MAX_LCORE]; + uint16_t num_handlers; + uint16_t num_finalizers; + struct rte_event_dispatcher_handler handlers[RED_MAX_HANDLERS]; + struct rte_event_dispatcher_finalizer finalizers[RED_MAX_FINALIZERS]; +}; + +static struct rte_event_dispatcher *dispatchers[UINT8_MAX]; + +static bool +red_has_dispatcher(uint8_t id) +{ + return dispatchers[id] != NULL; +} + +static struct rte_event_dispatcher * +red_get_dispatcher(uint8_t id) +{ + return dispatchers[id]; +} + +static void +red_set_dispatcher(uint8_t id, struct rte_event_dispatcher *dispatcher) +{ + dispatchers[id] = dispatcher; +} + +#define RED_VALID_ID_OR_RET_EINVAL(id) \ + do { \ + if (unlikely(!red_has_dispatcher(id))) { \ + RTE_EDEV_LOG_ERR("Invalid dispatcher id %d\n", id); \ + return -EINVAL; \ + } \ + } while (0) + +static int +red_lookup_handler_idx(struct rte_event_dispatcher *dispatcher, + const struct rte_event *event) +{ + uint16_t i; + + for (i = 0; i < dispatcher->num_handlers; i++) { + struct rte_event_dispatcher_handler *handler = + &dispatcher->handlers[i]; + + if (handler->match_fun(event, handler->match_data)) + return i; + } + + return -1; 
+} + +static inline void +red_dispatch_events(struct rte_event_dispatcher *dispatcher, + struct rte_event_dispatcher_lcore *lcore, + struct rte_event_dispatcher_lcore_port *port, + struct rte_event *events, uint16_t num_events) +{ + int i; + struct rte_event bursts[RED_MAX_HANDLERS][num_events]; + uint16_t burst_lens[RED_MAX_HANDLERS] = { 0 }; + uint16_t drop_count = 0; + uint16_t dispatch_count; + + for (i = 0; i < num_events; i++) { + struct rte_event *event = &events[i]; + int handler_idx; + + handler_idx = red_lookup_handler_idx(dispatcher, event); + + if (unlikely(handler_idx < 0)) { + drop_count++; + continue; + } + + bursts[handler_idx][burst_lens[handler_idx]] = *event; + burst_lens[handler_idx]++; + } + + for (i = 0; i < dispatcher->num_handlers; i++) { + struct rte_event_dispatcher_handler *handler = + &dispatcher->handlers[i]; + uint16_t len = burst_lens[i]; + + if (len == 0) + continue; + + handler->process_fun(dispatcher->event_dev_id, port->port_id, + bursts[i], len, handler->process_data); + } + + dispatch_count = num_events - drop_count; + + lcore->stats.ev_dispatch_count += dispatch_count; + lcore->stats.ev_drop_count += drop_count; + + for (i = 0; i < dispatcher->num_finalizers; i++) { + struct rte_event_dispatcher_finalizer *finalizer = + &dispatcher->finalizers[i]; + + finalizer->finalize_fun(dispatcher->event_dev_id, + port->port_id, + finalizer->finalize_data); + } +} + +static __rte_always_inline void +red_port_dequeue(struct rte_event_dispatcher *dispatcher, + struct rte_event_dispatcher_lcore *lcore, + struct rte_event_dispatcher_lcore_port *port) +{ + uint16_t batch_size = port->batch_size; + struct rte_event events[batch_size]; + uint16_t n; + + n = rte_event_dequeue_burst(dispatcher->event_dev_id, port->port_id, + events, batch_size, port->timeout); + + if (likely(n > 0)) + red_dispatch_events(dispatcher, lcore, port, events, n); + + lcore->stats.poll_count++; +} + +static __rte_always_inline void +red_lcore_process(struct 
rte_event_dispatcher *dispatcher, + struct rte_event_dispatcher_lcore *lcore) +{ + uint16_t i; + + for (i = 0; i < lcore->num_ports; i++) { + struct rte_event_dispatcher_lcore_port *port = + &lcore->ports[i]; + + red_port_dequeue(dispatcher, lcore, port); + } +} + +static int32_t +red_process(void *userdata) +{ + struct rte_event_dispatcher *dispatcher = userdata; + unsigned int lcore_id = rte_lcore_id(); + struct rte_event_dispatcher_lcore *lcore = + &dispatcher->lcores[lcore_id]; + + red_lcore_process(dispatcher, lcore); + + return 0; +} + +static int +red_service_register(struct rte_event_dispatcher *dispatcher) +{ + struct rte_service_spec service = { + .callback = red_process, + .callback_userdata = dispatcher, + .capabilities = RTE_SERVICE_CAP_MT_SAFE, + .socket_id = dispatcher->socket_id + }; + int rc; + + snprintf(service.name, RTE_SERVICE_NAME_MAX - 1, "red_%d", + dispatcher->id); + + rc = rte_service_component_register(&service, &dispatcher->service_id); + + if (rc) + RTE_EDEV_LOG_ERR("Registration of event dispatcher service " + "%s failed with error code %d\n", + service.name, rc); + + return rc; +} + +static int +red_service_unregister(struct rte_event_dispatcher *dispatcher) +{ + int rc; + + rc = rte_service_component_unregister(dispatcher->service_id); + + if (rc) + RTE_EDEV_LOG_ERR("Unregistration of event dispatcher service " + "failed with error code %d\n", rc); + + return rc; +} + +int +rte_event_dispatcher_create(uint8_t id, uint8_t event_dev_id) +{ + int socket_id; + struct rte_event_dispatcher *dispatcher; + int rc; + + if (red_has_dispatcher(id)) { + RTE_EDEV_LOG_ERR("Dispatcher with id %d already exists\n", + id); + return -EEXIST; + } + + socket_id = rte_event_dev_socket_id(event_dev_id); + + dispatcher = + rte_malloc_socket("event dispatcher", + sizeof(struct rte_event_dispatcher), + RTE_CACHE_LINE_SIZE, socket_id); + + if (dispatcher == NULL) { + RTE_EDEV_LOG_ERR("Unable to allocate memory for event " 
"dispatcher\n"); + return -ENOMEM; + } + + *dispatcher = (struct rte_event_dispatcher) { + .id = id, + .event_dev_id = event_dev_id, + .socket_id = socket_id + }; + + rc = red_service_register(dispatcher); + + if (rc < 0) { + rte_free(dispatcher); + return rc; + } + + red_set_dispatcher(id, dispatcher); + + return 0; +} + +int +rte_event_dispatcher_free(uint8_t id) +{ + struct rte_event_dispatcher *dispatcher; + int rc; + + RED_VALID_ID_OR_RET_EINVAL(id); + dispatcher = red_get_dispatcher(id); + + rc = red_service_unregister(dispatcher); + + if (rc) + return rc; + + red_set_dispatcher(id, NULL); + + rte_free(dispatcher); + + return 0; +} + +int +rte_event_dispatcher_service_id_get(uint8_t id, uint32_t *service_id) +{ + struct rte_event_dispatcher *dispatcher; + + RED_VALID_ID_OR_RET_EINVAL(id); + dispatcher = red_get_dispatcher(id); + + *service_id = dispatcher->service_id; + + return 0; +} + +static int +lcore_port_index(struct rte_event_dispatcher_lcore *lcore, + uint8_t event_port_id) +{ + uint16_t i; + + for (i = 0; i < lcore->num_ports; i++) { + struct rte_event_dispatcher_lcore_port *port = + &lcore->ports[i]; + + if (port->port_id == event_port_id) + return i; + } + + return -1; +} + +int +rte_event_dispatcher_bind_port_to_lcore(uint8_t id, uint8_t event_port_id, + uint16_t batch_size, uint64_t timeout, + unsigned int lcore_id) +{ + struct rte_event_dispatcher *dispatcher; + struct rte_event_dispatcher_lcore *lcore; + struct rte_event_dispatcher_lcore_port *port; + + RED_VALID_ID_OR_RET_EINVAL(id); + dispatcher = red_get_dispatcher(id); + + lcore = &dispatcher->lcores[lcore_id]; + + if (lcore->num_ports == RED_MAX_PORTS_PER_LCORE) + return -ENOMEM; + + if (lcore_port_index(lcore, event_port_id) >= 0) + return -EEXIST; + + port = &lcore->ports[lcore->num_ports]; + + *port = (struct rte_event_dispatcher_lcore_port) { + .port_id = event_port_id, + .batch_size = batch_size, + .timeout = timeout + }; + + lcore->num_ports++; + + return 0; +} + +int 
+rte_event_dispatcher_unbind_port_from_lcore(uint8_t id, uint8_t event_port_id, + unsigned int lcore_id) +{ + struct rte_event_dispatcher *dispatcher; + struct rte_event_dispatcher_lcore *lcore; + int port_idx; + struct rte_event_dispatcher_lcore_port *port; + struct rte_event_dispatcher_lcore_port *last; + + RED_VALID_ID_OR_RET_EINVAL(id); + dispatcher = red_get_dispatcher(id); + + lcore = &dispatcher->lcores[lcore_id]; + + port_idx = lcore_port_index(lcore, event_port_id); + + if (port_idx < 0) + return -ENOENT; + + port = &lcore->ports[port_idx]; + last = &lcore->ports[lcore->num_ports - 1]; + + if (port != last) + *port = *last; + + lcore->num_ports--; + + return 0; +} + +static struct rte_event_dispatcher_handler* +red_get_handler_by_id(struct rte_event_dispatcher *dispatcher, + int handler_id) +{ + int i; + + for (i = 0; i < dispatcher->num_handlers; i++) { + struct rte_event_dispatcher_handler *handler = + &dispatcher->handlers[i]; + + if (handler->id == handler_id) + return handler; + } + + return NULL; +} + +static int +red_alloc_handler_id(struct rte_event_dispatcher *dispatcher) +{ + int handler_id = 0; + + while (red_get_handler_by_id(dispatcher, handler_id) != NULL) + handler_id++; + + return handler_id; +} + +static struct rte_event_dispatcher_handler * +red_alloc_handler(struct rte_event_dispatcher *dispatcher) +{ + int handler_idx; + struct rte_event_dispatcher_handler *handler; + + if (dispatcher->num_handlers == RED_MAX_HANDLERS) + return NULL; + + handler_idx = dispatcher->num_handlers; + handler = &dispatcher->handlers[handler_idx]; + + handler->id = red_alloc_handler_id(dispatcher); + + dispatcher->num_handlers++; + + return handler; +} + +int +rte_event_dispatcher_register(uint8_t id, + rte_event_dispatcher_match_t match_fun, + void *match_data, + rte_event_dispatcher_process_t process_fun, + void *process_data) +{ + struct rte_event_dispatcher *dispatcher; + struct rte_event_dispatcher_handler *handler; + + RED_VALID_ID_OR_RET_EINVAL(id); + 
dispatcher = red_get_dispatcher(id); + + handler = red_alloc_handler(dispatcher); + + if (handler == NULL) + return -ENOMEM; + + handler->match_fun = match_fun; + handler->match_data = match_data; + handler->process_fun = process_fun; + handler->process_data = process_data; + + return handler->id; +} + +int +rte_event_dispatcher_unregister(uint8_t id, int handler_id) +{ + struct rte_event_dispatcher *dispatcher; + struct rte_event_dispatcher_handler *unreg_handler; + int handler_idx; + uint16_t last_idx; + + RED_VALID_ID_OR_RET_EINVAL(id); + dispatcher = red_get_dispatcher(id); + + unreg_handler = red_get_handler_by_id(dispatcher, handler_id); + + if (unreg_handler == NULL) + return -EINVAL; + + handler_idx = unreg_handler - &dispatcher->handlers[0]; + + last_idx = dispatcher->num_handlers - 1; + + if (handler_idx != last_idx) { + /* move all handlers to maintain handler order */ + int n = last_idx - handler_idx; + memmove(unreg_handler, unreg_handler + 1, + sizeof(struct rte_event_dispatcher_handler) * n); + } + + dispatcher->num_handlers--; + + return 0; +} + +static struct rte_event_dispatcher_finalizer* +red_get_finalizer_by_id(struct rte_event_dispatcher *dispatcher, + int handler_id) +{ + int i; + + for (i = 0; i < dispatcher->num_finalizers; i++) { + struct rte_event_dispatcher_finalizer *finalizer = + &dispatcher->finalizers[i]; + + if (finalizer->id == handler_id) + return finalizer; + } + + return NULL; +} + +static int +red_alloc_finalizer_id(struct rte_event_dispatcher *dispatcher) +{ + int finalizer_id = 0; + + while (red_get_finalizer_by_id(dispatcher, finalizer_id) != NULL) + finalizer_id++; + + return finalizer_id; +} + +static struct rte_event_dispatcher_finalizer * +red_alloc_finalizer(struct rte_event_dispatcher *dispatcher) +{ + int finalizer_idx; + struct rte_event_dispatcher_finalizer *finalizer; + + if (dispatcher->num_finalizers == RED_MAX_FINALIZERS) + return NULL; + + finalizer_idx = dispatcher->num_finalizers; + finalizer = 
&dispatcher->finalizers[finalizer_idx]; + + finalizer->id = red_alloc_finalizer_id(dispatcher); + + dispatcher->num_finalizers++; + + return finalizer; +} + +int +rte_event_dispatcher_finalize_register(uint8_t id, + rte_event_dispatcher_finalize_t finalize_fun, + void *finalize_data) +{ + struct rte_event_dispatcher *dispatcher; + struct rte_event_dispatcher_finalizer *finalizer; + + RED_VALID_ID_OR_RET_EINVAL(id); + dispatcher = red_get_dispatcher(id); + + finalizer = red_alloc_finalizer(dispatcher); + + if (finalizer == NULL) + return -ENOMEM; + + finalizer->finalize_fun = finalize_fun; + finalizer->finalize_data = finalize_data; + + return finalizer->id; +} + +int +rte_event_dispatcher_finalize_unregister(uint8_t id, int handler_id) +{ + struct rte_event_dispatcher *dispatcher; + struct rte_event_dispatcher_finalizer *unreg_finalizer; + int handler_idx; + uint16_t last_idx; + + RED_VALID_ID_OR_RET_EINVAL(id); + dispatcher = red_get_dispatcher(id); + + unreg_finalizer = red_get_finalizer_by_id(dispatcher, handler_id); + + if (unreg_finalizer == NULL) + return -EINVAL; + + handler_idx = unreg_finalizer - &dispatcher->finalizers[0]; + + last_idx = dispatcher->num_finalizers - 1; + + if (handler_idx != last_idx) { + /* move all finalizers to maintain finalizer order */ + int n = last_idx - handler_idx; + memmove(unreg_finalizer, unreg_finalizer + 1, + sizeof(struct rte_event_dispatcher_finalizer) * n); + } + + dispatcher->num_finalizers--; + + return 0; +} + +static void +red_aggregate_stats(struct rte_event_dispatcher_stats *result, + const struct rte_event_dispatcher_stats *part) +{ + result->poll_count += part->poll_count; + result->ev_dispatch_count += part->ev_dispatch_count; + result->ev_drop_count += part->ev_drop_count; +} + +int +rte_event_dispatcher_stats_get(uint8_t id, + struct rte_event_dispatcher_stats *stats) +{ + struct rte_event_dispatcher *dispatcher; + unsigned int lcore_id; + + RED_VALID_ID_OR_RET_EINVAL(id); + dispatcher = 
red_get_dispatcher(id); + + *stats = (struct rte_event_dispatcher_stats) {}; + + for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { + struct rte_event_dispatcher_lcore *lcore = + &dispatcher->lcores[lcore_id]; + + red_aggregate_stats(stats, &lcore->stats); + } + + return 0; +} + +static int +red_set_service_runstate(uint8_t id, int state) +{ + struct rte_event_dispatcher *dispatcher; + int rc; + + RED_VALID_ID_OR_RET_EINVAL(id); + dispatcher = red_get_dispatcher(id); + + rc = rte_service_component_runstate_set(dispatcher->service_id, + state); + + if (rc != 0) { + RTE_EDEV_LOG_ERR("Unexpected error %d occurred while setting " + "service component run state to %d\n", rc, + state); + RTE_ASSERT(0); + } + + return 0; +} + +int +rte_event_dispatcher_start(uint8_t id) +{ + return red_set_service_runstate(id, 1); +} + +int +rte_event_dispatcher_stop(uint8_t id) +{ + return red_set_service_runstate(id, 0); +} diff --git a/lib/eventdev/rte_event_dispatcher.h b/lib/eventdev/rte_event_dispatcher.h new file mode 100644 index 0000000000..5563660f31 --- /dev/null +++ b/lib/eventdev/rte_event_dispatcher.h @@ -0,0 +1,440 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2023 Ericsson AB + */ + +#ifndef __RTE_EVENT_DISPATCHER_H__ +#define __RTE_EVENT_DISPATCHER_H__ + +/** + * @file + * + * RTE Event Dispatcher + * + * The purpose of the event dispatcher is to decouple different parts + * of an application (e.g., modules), sharing the same underlying + * event device. + */ + +#ifdef __cplusplus extern "C" { +#endif + +#include <rte_eventdev.h> + +/** + * Function prototype for match callbacks. + * + * Match callbacks are used by an application to decide how the + * event dispatcher distributes events to different parts of the + * application. + * + * The application is not expected to process the event at the point + * of the match call. Such matters should be deferred to the process + * callback invocation. 
+ * + * The match callback may be used as an opportunity to prefetch data. + * + * @param event + * Pointer to event + * + * @param cb_data + * The pointer supplied by the application in + * rte_event_dispatcher_register(). + * + * @return + * Returns true in case this event should be delivered (via + * the process callback), and false otherwise. + */ +typedef bool +(*rte_event_dispatcher_match_t)(const struct rte_event *event, void *cb_data); + +/** + * Function prototype for process callbacks. + * + * The process callbacks are used by the event dispatcher to deliver + * events for processing. + * + * @param event_dev_id + * The originating event device id. + * + * @param event_port_id + * The originating event port. + * + * @param events + * Pointer to an array of events. + * + * @param num + * The number of events in the @p events array. + * + * @param cb_data + * The pointer supplied by the application in + * rte_event_dispatcher_register(). + */ + +typedef void +(*rte_event_dispatcher_process_t)(uint8_t event_dev_id, uint8_t event_port_id, + const struct rte_event *events, + uint16_t num, void *cb_data); + +/** + * Function prototype for finalize callbacks. + * + * Using a finalize callback, the application may ask to be notified when a + * complete batch of events has been delivered to the various process + * callbacks. + * + * @param event_dev_id + * The originating event device id. + * + * @param event_port_id + * The originating event port. + * + * @param cb_data + * The pointer supplied by the application in + * rte_event_dispatcher_finalize_register(). + */ + +typedef void +(*rte_event_dispatcher_finalize_t)(uint8_t event_dev_id, uint8_t event_port_id, + void *cb_data); + +/** + * Event dispatcher statistics + */ +struct rte_event_dispatcher_stats { + uint64_t poll_count; + /**< Number of event dequeue calls made toward the event device. 
*/ + uint64_t ev_dispatch_count; + /**< Number of events dispatched to a handler.*/ + uint64_t ev_drop_count; + /**< Number of events dropped because no handler was found. */ +}; + +/** + * Create an event dispatcher with the specified id. + * + * @param id + * An application-specified, unique (across all event dispatcher + * instances) identifier. + * + * @param event_dev_id + * The identifier of the event device from which this event dispatcher + * will dequeue events. + * + * @return + * - 0: Success + * - <0: Error code on failure + */ +__rte_experimental +int +rte_event_dispatcher_create(uint8_t id, uint8_t event_dev_id); + +/** + * Free an event dispatcher. + * + * @param id + * The event dispatcher identifier. + * + * @return + * - 0: Success + * - <0: Error code on failure + */ +__rte_experimental +int +rte_event_dispatcher_free(uint8_t id); + +/** + * Retrieve the service identifier of an event dispatcher. + * + * @param id + * The event dispatcher identifier. + * + * @param [out] service_id + * A pointer to a caller-supplied buffer where the event dispatcher's + * service id will be stored. + * + * @return + * - 0: Success + * - <0: Error code on failure. + */ +__rte_experimental +int +rte_event_dispatcher_service_id_get(uint8_t id, uint32_t *service_id); + +/** + * Binds an event device port to a specific lcore on the specified + * event dispatcher. + * + * This function configures the event port id to be used by the event + * dispatcher service, if run on the specified lcore. + * + * Multiple event device ports may be bound to the same lcore. A + * particular port must not be bound to more than one lcore. + * + * If the event dispatcher service is mapped (with + * rte_service_map_lcore_set()) to a lcore for which no ports are + * bound, the service function will be a no-operation. + * + * This function is not MT safe. + * + * @param id + * The event dispatcher identifier. + * + * @param event_port_id + * The event device port identifier. 
+ * + * @param batch_size + * The batch size to use in rte_event_dequeue_burst(), for the + * configured event device port and lcore. + * + * @param timeout + * The timeout parameter to use in rte_event_dequeue_burst(), for the + * configured event device port and lcore. + * + * @param lcore_id + * The lcore by which this event port will be used. + * + * @return + * - 0: Success + * - -ENOMEM: Unable to allocate sufficient resources. + * - -EEXIST: Event port is already configured. + * - -EINVAL: Invalid arguments. + */ +__rte_experimental +int +rte_event_dispatcher_bind_port_to_lcore(uint8_t id, uint8_t event_port_id, + uint16_t batch_size, uint64_t timeout, + unsigned int lcore_id); + +/** + * Unbind an event device port from a specific lcore. + * + * This function is not MT safe. + * + * @param id + * The event dispatcher identifier. + * + * @param event_port_id + * The event device port identifier. + * + * @param lcore_id + * The lcore which was using this event port. + * + * @return + * - 0: Success + * - -EINVAL: Invalid @c id. + * - -ENOENT: Event port id not bound to this @c lcore_id. + */ +__rte_experimental +int +rte_event_dispatcher_unbind_port_from_lcore(uint8_t id, uint8_t event_port_id, + unsigned int lcore_id); + +/** + * Register an event handler. + * + * The match callback function is used to select if a particular event + * should be delivered, using the corresponding process callback + * function. + * + * The reason for having two distinct steps is to allow the dispatcher + * to deliver all events as a batch. This in turn will cause + * processing of a particular kind of events to happen in a + * back-to-back manner, improving cache locality. + * + * The list of handler callback functions is shared among all lcores, + * but will only be executed on lcores which have an eventdev port + * bound to them, and which are running the dispatcher service. 
+ * + * Ordering of events is not guaranteed to be maintained between + * different deliver callbacks. For example, suppose there are two + * callbacks registered, matching different subsets of events on an + * atomic queue. A batch of events [ev0, ev1, ev2] is dequeued on a + * particular port, all pertaining to the same flow. The match + * callback for registration A returns true for ev0 and ev2, and the + * matching function for registration B for ev1. In that scenario, the + * event dispatcher may choose to deliver first [ev0, ev2] using A's + * deliver function, and then [ev1] to B - or vice versa. + * + * rte_event_dispatcher_register() is not MT safe. + * + * @param id + * The event dispatcher identifier. + * + * @param match_fun + * The match callback function. + * + * @param match_cb_data + * A pointer to some application-specific opaque data (or NULL), + * which is supplied back to the application when match_fun is + * called. + * + * @param process_fun + * The process callback function. + * + * @param process_cb_data + * A pointer to some application-specific opaque data (or NULL), + * which is supplied back to the application when process_fun is + * called. + * + * @return + * - >= 0: The identifier for this registration. + * - -ENOMEM: Unable to allocate sufficient resources. + */ +__rte_experimental +int +rte_event_dispatcher_register(uint8_t id, + rte_event_dispatcher_match_t match_fun, + void *match_cb_data, + rte_event_dispatcher_process_t process_fun, + void *process_cb_data); + +/** + * Unregister an event handler. + * + * This function is not MT safe. + * + * @param id + * The event dispatcher identifier. + * + * @param handler_id + * The handler registration id returned by the original + * rte_event_dispatcher_register() call. + * + * @return + * - 0: Success + * - -EINVAL: The @c id and/or the @c handler_id parameter was invalid. 
+ */
+__rte_experimental
+int
+rte_event_dispatcher_unregister(uint8_t id, int handler_id);
+
+/**
+ * Register a finalize callback function.
+ *
+ * An application may optionally install a finalize callback.
+ *
+ * The finalize callback is called when all events of a particular
+ * batch of events (retrieved using rte_event_dequeue_burst()) have
+ * been delivered (or dropped).
+ *
+ * The finalize callback is not tied to any particular handler.
+ *
+ * The finalize callback provides an opportunity for the application
+ * to do per-batch processing. One case where this may be useful is if
+ * an event output buffer is used, and is shared among several
+ * handlers. In such a case, proper output buffer flushing may be
+ * assured using a finalize callback.
+ *
+ * rte_event_dispatcher_finalize_register() is not MT safe.
+ *
+ * @param id
+ *  The event dispatcher identifier.
+ *
+ * @param finalize_fun
+ *  The function called after completing the processing of a
+ *  dequeue batch.
+ *
+ * @param finalize_data
+ *  A pointer to some application-specific opaque data (or NULL),
+ *  which is supplied back to the application when @c finalize_fun is
+ *  called.
+ *
+ * @return
+ *  - >= 0: The identifier for this registration.
+ *  - -ENOMEM: Unable to allocate sufficient resources.
+ */
+__rte_experimental
+int
+rte_event_dispatcher_finalize_register(uint8_t id,
+			rte_event_dispatcher_finalize_t finalize_fun,
+			void *finalize_data);
+
+/**
+ * Unregister a finalize callback.
+ *
+ * This function is not MT safe.
+ *
+ * @param id
+ *  The event dispatcher identifier.
+ *
+ * @param reg_id
+ *  The finalize registration id returned by the original
+ *  rte_event_dispatcher_finalize_register() call.
+ *
+ * @return
+ *  - 0: Success
+ *  - -EINVAL: The @c id and/or the @c reg_id parameter was invalid.
+ */
+__rte_experimental
+int
+rte_event_dispatcher_finalize_unregister(uint8_t id, int reg_id);
+
+/**
+ * Start an event dispatcher instance.
+ *
+ * Enables the event dispatcher service.
+ *
+ * The underlying event device must have been started prior to calling
+ * rte_event_dispatcher_start().
+ *
+ * For the event dispatcher to actually perform work (i.e., dispatch
+ * events), its service must have been mapped to one or more service
+ * lcores, and its service run state set to '1'. An event dispatcher's
+ * service is retrieved using rte_event_dispatcher_service_id_get().
+ *
+ * Each service lcore to which the event dispatcher is mapped should
+ * have at least one event port configured. Such configuration is
+ * performed by calling rte_event_dispatcher_bind_port_to_lcore(),
+ * prior to starting the event dispatcher.
+ *
+ * @param id
+ *  The event dispatcher identifier.
+ *
+ * @return
+ *  - 0: Success
+ *  - -EINVAL: Invalid @c id.
+ */
+__rte_experimental
+int
+rte_event_dispatcher_start(uint8_t id);
+
+/**
+ * Stop a running event dispatcher instance.
+ *
+ * Disables the event dispatcher service.
+ *
+ * @param id
+ *  The event dispatcher identifier.
+ *
+ * @return
+ *  - 0: Success
+ *  - -EINVAL: Invalid @c id.
+ */
+__rte_experimental
+int
+rte_event_dispatcher_stop(uint8_t id);
+
+/**
+ * Retrieve statistics for an event dispatcher instance.
+ *
+ * This function is MT safe and may be called from any thread
+ * (including unregistered non-EAL threads).
+ *
+ * @param id
+ *  The event dispatcher identifier.
+ * @param[out] stats
+ *  A pointer to a structure to fill with statistics.
+ * @return
+ *  - 0: Success
+ *  - -EINVAL: The @c id parameter was invalid.
+ */
+__rte_experimental
+int
+rte_event_dispatcher_stats_get(uint8_t id,
+			       struct rte_event_dispatcher_stats *stats);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __RTE_EVENT_DISPATCHER__ */
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index 89068a5713..36466e9f24 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -131,6 +131,19 @@ EXPERIMENTAL {
 	rte_event_eth_tx_adapter_runtime_params_init;
 	rte_event_eth_tx_adapter_runtime_params_set;
 	rte_event_timer_remaining_ticks_get;
+
+	rte_event_dispatcher_create;
+	rte_event_dispatcher_free;
+	rte_event_dispatcher_service_id_get;
+	rte_event_dispatcher_bind_port_to_lcore;
+	rte_event_dispatcher_unbind_port_from_lcore;
+	rte_event_dispatcher_register;
+	rte_event_dispatcher_unregister;
+	rte_event_dispatcher_finalize_register;
+	rte_event_dispatcher_finalize_unregister;
+	rte_event_dispatcher_start;
+	rte_event_dispatcher_stop;
+	rte_event_dispatcher_stats_get;
 };

 INTERNAL {

From patchwork Mon May 22 09:16:27 2023
X-Patchwork-Submitter: Mattias Rönnblom
X-Patchwork-Id: 127152
X-Patchwork-Delegate: jerinj@marvell.com
From: Mattias Rönnblom
Subject: [RFC v3 2/3] test: add event dispatcher test suite
Date: Mon, 22 May 2023 11:16:27 +0200
Message-ID: <20230522091628.96236-3-mattias.ronnblom@ericsson.com>
In-Reply-To: <20230522091628.96236-1-mattias.ronnblom@ericsson.com>
References: <20210409113223.65260-1-mattias.ronnblom@ericsson.com> <20230522091628.96236-1-mattias.ronnblom@ericsson.com>
Add unit tests for the event dispatcher.
Signed-off-by: Mattias Rönnblom
---
 app/test/meson.build             |   1 +
 app/test/test_event_dispatcher.c | 814 +++++++++++++++++++++++++++++++
 2 files changed, 815 insertions(+)
 create mode 100644 app/test/test_event_dispatcher.c

diff --git a/app/test/meson.build b/app/test/meson.build
index b9b5432496..fac3b6b88b 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -50,6 +50,7 @@ test_sources = files(
         'test_errno.c',
         'test_ethdev_link.c',
         'test_event_crypto_adapter.c',
+        'test_event_dispatcher.c',
         'test_event_eth_rx_adapter.c',
         'test_event_ring.c',
         'test_event_timer_adapter.c',
diff --git a/app/test/test_event_dispatcher.c b/app/test/test_event_dispatcher.c
new file mode 100644
index 0000000000..93f6c53e33
--- /dev/null
+++ b/app/test/test_event_dispatcher.c
@@ -0,0 +1,814 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Ericsson AB
+ */
+
+#include "test.h"
+
+#include <stdatomic.h>
+
+#include <rte_bus_vdev.h>
+#include <rte_event_dispatcher.h>
+#include <rte_eventdev.h>
+#include <rte_random.h>
+#include <rte_service.h>
+
+#define NUM_WORKERS 3
+
+#define NUM_PORTS (NUM_WORKERS + 1)
+#define WORKER_PORT_ID(worker_idx) (worker_idx)
+#define DRIVER_PORT_ID (NUM_PORTS - 1)
+
+#define NUM_SERVICE_CORES NUM_WORKERS
+
+/* Eventdev */
+#define NUM_QUEUES 8
+#define LAST_QUEUE_ID (NUM_QUEUES - 1)
+#define MAX_EVENTS 4096
+#define NEW_EVENT_THRESHOLD (MAX_EVENTS / 2)
+#define DEQUEUE_BURST_SIZE 32
+#define ENQUEUE_BURST_SIZE 32
+
+#define NUM_EVENTS 10000000
+#define NUM_FLOWS 16
+
+#define DSW_VDEV "event_dsw0"
+
+struct app_queue {
+	uint8_t queue_id;
+	uint64_t sn[NUM_FLOWS];
+	int dispatcher_reg_id;
+};
+
+struct test_app {
+	uint8_t event_dev_id;
+	uint8_t dispatcher_id;
+	uint32_t dispatcher_service_id;
+
+	unsigned int service_lcores[NUM_SERVICE_CORES];
+
+	struct app_queue queues[NUM_QUEUES];
+
+	bool running;
+
+	atomic_int completed_events;
+	atomic_int errors;
+};
+
+#define RETURN_ON_ERROR(rc)			\
+	do {					\
+		if (rc != TEST_SUCCESS)		\
+			return rc;		\
+	} while (0)
+
+static struct test_app *
+test_app_create(void)
+{
+	int i;
+	struct test_app
*app; + + app = calloc(1, sizeof(struct test_app)); + + if (app == NULL) + return NULL; + + for (i = 0; i < NUM_QUEUES; i++) + app->queues[i].queue_id = i; + + return app; +} + +static void +test_app_free(struct test_app *app) +{ + free(app); +} + +static int +test_app_create_vdev(struct test_app *app) +{ + int rc; + + rc = rte_vdev_init(DSW_VDEV, NULL); + if (rc < 0) + return TEST_SKIPPED; + + rc = rte_event_dev_get_dev_id(DSW_VDEV); + + app->event_dev_id = (uint8_t)rc; + + return TEST_SUCCESS; +} + +static int +test_app_destroy_vdev(struct test_app *app) +{ + int rc; + + rc = rte_event_dev_close(app->event_dev_id); + TEST_ASSERT_SUCCESS(rc, "Error while closing event device"); + + rc = rte_vdev_uninit(DSW_VDEV); + TEST_ASSERT_SUCCESS(rc, "Error while uninitializing virtual device"); + + return TEST_SUCCESS; +} + +static int +test_app_setup_event_dev(struct test_app *app) +{ + int rc; + int i; + + rc = test_app_create_vdev(app); + if (rc < 0) + return rc; + + struct rte_event_dev_config config = { + .nb_event_queues = NUM_QUEUES, + .nb_event_ports = NUM_PORTS, + .nb_events_limit = MAX_EVENTS, + .nb_event_queue_flows = 64, + .nb_event_port_dequeue_depth = DEQUEUE_BURST_SIZE, + .nb_event_port_enqueue_depth = ENQUEUE_BURST_SIZE + }; + + rc = rte_event_dev_configure(app->event_dev_id, &config); + + TEST_ASSERT_SUCCESS(rc, "Unable to configure event device"); + + struct rte_event_queue_conf queue_config = { + .priority = RTE_EVENT_DEV_PRIORITY_NORMAL, + .schedule_type = RTE_SCHED_TYPE_ATOMIC, + .nb_atomic_flows = 64 + }; + + for (i = 0; i < NUM_QUEUES; i++) { + uint8_t queue_id = i; + + rc = rte_event_queue_setup(app->event_dev_id, queue_id, + &queue_config); + + TEST_ASSERT_SUCCESS(rc, "Unable to setup queue %d", queue_id); + } + + struct rte_event_port_conf port_config = { + .new_event_threshold = NEW_EVENT_THRESHOLD, + .dequeue_depth = DEQUEUE_BURST_SIZE, + .enqueue_depth = ENQUEUE_BURST_SIZE + }; + + for (i = 0; i < NUM_PORTS; i++) { + uint8_t event_port_id = i; + 
+ rc = rte_event_port_setup(app->event_dev_id, event_port_id, + &port_config); + TEST_ASSERT_SUCCESS(rc, "Failed to create event port %d", + event_port_id); + + if (event_port_id == DRIVER_PORT_ID) + continue; + + rc = rte_event_port_link(app->event_dev_id, event_port_id, + NULL, NULL, 0); + + TEST_ASSERT_EQUAL(rc, NUM_QUEUES, "Failed to link port %d", + event_port_id); + } + + return TEST_SUCCESS; +} + +static int +test_app_teardown_event_dev(struct test_app *app) +{ + return test_app_destroy_vdev(app); +} + +static int +test_app_start_event_dev(struct test_app *app) +{ + int rc; + + rc = rte_event_dev_start(app->event_dev_id); + TEST_ASSERT_SUCCESS(rc, "Unable to start event device"); + + return TEST_SUCCESS; +} + +static void +test_app_stop_event_dev(struct test_app *app) +{ + rte_event_dev_stop(app->event_dev_id); +} + +static int +test_app_create_dispatcher(struct test_app *app) +{ + int rc; + + app->dispatcher_id = rte_rand_max(256); + + rc = rte_event_dispatcher_create(app->dispatcher_id, + app->event_dev_id); + + TEST_ASSERT_SUCCESS(rc, "Unable to create event dispatcher"); + + rc = rte_event_dispatcher_service_id_get(app->dispatcher_id, + &app->dispatcher_service_id); + TEST_ASSERT_SUCCESS(rc, "Unable to get event dispatcher service ID"); + + rc = rte_service_set_stats_enable(app->dispatcher_service_id, 1); + + TEST_ASSERT_SUCCESS(rc, "Unable to enable event dispatcher service " + "stats"); + + rc = rte_service_runstate_set(app->dispatcher_service_id, 1); + TEST_ASSERT_SUCCESS(rc, "Error disabling dispatcher service"); + + return TEST_SUCCESS; +} + +static int +test_app_free_dispatcher(struct test_app *app) +{ + int rc; + + rc = rte_service_runstate_set(app->dispatcher_service_id, 0); + TEST_ASSERT_SUCCESS(rc, "Error disabling dispatcher service"); + + rte_event_dispatcher_free(app->dispatcher_id); + + return TEST_SUCCESS; +} + +static int +test_app_bind_ports(struct test_app *app) +{ + int i; + + for (i = 0; i < NUM_WORKERS; i++) { + unsigned int lcore_id 
= app->service_lcores[i]; + + int rc = rte_event_dispatcher_bind_port_to_lcore( + app->dispatcher_id, WORKER_PORT_ID(i), + DEQUEUE_BURST_SIZE, 0, lcore_id + ); + + TEST_ASSERT_SUCCESS(rc, "Unable to bind event device port %d " + "to lcore %d", WORKER_PORT_ID(i), + lcore_id); + } + + return TEST_SUCCESS; +} + +static int +test_app_unbind_ports(struct test_app *app) +{ + int i; + + for (i = 0; i < NUM_WORKERS; i++) { + unsigned int lcore_id = app->service_lcores[i]; + + int rc = rte_event_dispatcher_unbind_port_from_lcore( + app->dispatcher_id, + WORKER_PORT_ID(i), + lcore_id + ); + + TEST_ASSERT_SUCCESS(rc, "Unable to unbind event device port %d " + "from lcore %d", WORKER_PORT_ID(i), + lcore_id); + } + + return TEST_SUCCESS; +} + +static bool +match_queue(const struct rte_event *event, void *cb_data) +{ + uintptr_t queue_id = (uintptr_t)cb_data; + + return event->queue_id == queue_id; +} + +static int +test_app_get_worker_index(struct test_app *app, unsigned int lcore_id) +{ + int i; + + for (i = 0; i < NUM_SERVICE_CORES; i++) + if (app->service_lcores[i] == lcore_id) + return i; + + return -1; +} + +static int +test_app_get_worker_port(struct test_app *app, unsigned int lcore_id) +{ + int worker; + + worker = test_app_get_worker_index(app, lcore_id); + + if (worker < 0) + return -1; + + return WORKER_PORT_ID(worker); +} + +static void +test_app_queue_note_error(struct test_app *app) +{ + atomic_fetch_add_explicit(&app->errors, 1, memory_order_relaxed); +} + +static void +test_app_process_queue(uint8_t p_event_dev_id, uint8_t p_event_port_id, + const struct rte_event *in_events, uint16_t num, + void *cb_data) +{ + struct app_queue *app_queue = cb_data; + struct test_app *app = container_of(app_queue, struct test_app, + queues[app_queue->queue_id]); + unsigned int lcore_id = rte_lcore_id(); + bool intermediate_queue = app_queue->queue_id != LAST_QUEUE_ID; + int event_port_id; + uint16_t i; + struct rte_event out_events[num]; + + event_port_id = 
test_app_get_worker_port(app, lcore_id); + + if (event_port_id < 0 || p_event_dev_id != app->event_dev_id || + p_event_port_id != event_port_id) { + test_app_queue_note_error(app); + return; + } + + for (i = 0; i < num; i++) { + const struct rte_event *in_event = &in_events[i]; + struct rte_event *out_event = &out_events[i]; + uint64_t sn = in_event->u64; + uint64_t expected_sn; + + if (in_event->queue_id != app_queue->queue_id) { + test_app_queue_note_error(app); + return; + } + + expected_sn = app_queue->sn[in_event->flow_id]++; + + if (expected_sn != sn) { + test_app_queue_note_error(app); + return; + } + + if (intermediate_queue) + *out_event = (struct rte_event) { + .queue_id = in_event->queue_id + 1, + .flow_id = in_event->flow_id, + .sched_type = RTE_SCHED_TYPE_ATOMIC, + .op = RTE_EVENT_OP_FORWARD, + .priority = RTE_EVENT_DEV_PRIORITY_NORMAL, + .u64 = sn + }; + } + + if (intermediate_queue) { + uint16_t n = 0; + + do { + n += rte_event_enqueue_forward_burst(p_event_dev_id, + p_event_port_id, + out_events + n, + num - n); + } while (n != num); + } else + atomic_fetch_add_explicit(&app->completed_events, num, + memory_order_relaxed); +} + +static int +test_app_register_callbacks(struct test_app *app) +{ + int i; + + for (i = 0; i < NUM_QUEUES; i++) { + struct app_queue *app_queue = &app->queues[i]; + uintptr_t queue_id = app_queue->queue_id; + int reg_id; + + reg_id = rte_event_dispatcher_register(app->dispatcher_id, + match_queue, + (void *)queue_id, + test_app_process_queue, + app_queue); + + TEST_ASSERT(reg_id >= 0, "Unable to register consumer " + "callback for queue %d", i); + + app_queue->dispatcher_reg_id = reg_id; + } + + return TEST_SUCCESS; +} + +static int +test_app_unregister_callback(struct test_app *app, uint8_t queue_id) +{ + int reg_id = app->queues[queue_id].dispatcher_reg_id; + + if (reg_id < 0) /* unregistered already */ + return 0; + + int rc = rte_event_dispatcher_unregister(app->dispatcher_id, reg_id); + + TEST_ASSERT_SUCCESS(rc, "Unable 
to unregister consumer " + "callback for queue %d", queue_id); + + app->queues[queue_id].dispatcher_reg_id = -1; + + return TEST_SUCCESS; +} + +static int +test_app_unregister_callbacks(struct test_app *app) +{ + int i; + + for (i = 0; i < NUM_QUEUES; i++) { + int rc; + + rc = test_app_unregister_callback(app, i); + RETURN_ON_ERROR(rc); + } + + return TEST_SUCCESS; +} + +static int +test_app_start_dispatcher(struct test_app *app) +{ + int rc; + + rc = rte_event_dispatcher_start(app->dispatcher_id); + + TEST_ASSERT_SUCCESS(rc, "Unable to start the event dispatcher"); + + return TEST_SUCCESS; +} + +static int +test_app_stop_dispatcher(struct test_app *app) +{ + int rc; + + rc = rte_event_dispatcher_stop(app->dispatcher_id); + + TEST_ASSERT_SUCCESS(rc, "Unable to stop the event dispatcher"); + + return TEST_SUCCESS; +} + +static int +test_app_setup_service_core(struct test_app *app, unsigned int lcore_id) +{ + int rc; + + rc = rte_service_lcore_add(lcore_id); + TEST_ASSERT_SUCCESS(rc, "Unable to make lcore %d an event dispatcher " + "service core", lcore_id); + + rc = rte_service_map_lcore_set(app->dispatcher_service_id, lcore_id, 1); + TEST_ASSERT_SUCCESS(rc, "Unable to map event dispatcher service"); + + return TEST_SUCCESS; +} + +static int +test_app_setup_service_cores(struct test_app *app) +{ + int i; + int lcore_id = -1; + + for (i = 0; i < NUM_SERVICE_CORES; i++) { + lcore_id = rte_get_next_lcore(lcore_id, 1, 0); + + TEST_ASSERT(lcore_id != RTE_MAX_LCORE, + "Too few lcores. 
Needs at least %d worker lcores", + NUM_SERVICE_CORES); + + app->service_lcores[i] = lcore_id; + } + + for (i = 0; i < NUM_SERVICE_CORES; i++) { + int rc; + + rc = test_app_setup_service_core(app, app->service_lcores[i]); + + RETURN_ON_ERROR(rc); + } + + return TEST_SUCCESS; +} + +static int +test_app_teardown_service_core(struct test_app *app, unsigned int lcore_id) +{ + int rc; + + rc = rte_service_map_lcore_set(app->dispatcher_service_id, lcore_id, 0); + TEST_ASSERT_SUCCESS(rc, "Unable to unmap event dispatcher service"); + + rc = rte_service_lcore_del(lcore_id); + TEST_ASSERT_SUCCESS(rc, "Unable change role of service lcore %d", + lcore_id); + + return TEST_SUCCESS; +} + +static int +test_app_teardown_service_cores(struct test_app *app) +{ + int i; + + for (i = 0; i < NUM_SERVICE_CORES; i++) { + unsigned int lcore_id = app->service_lcores[i]; + int rc; + + rc = test_app_teardown_service_core(app, lcore_id); + + RETURN_ON_ERROR(rc); + } + + return TEST_SUCCESS; +} + +static int +test_app_start_service_cores(struct test_app *app) +{ + int i; + + for (i = 0; i < NUM_SERVICE_CORES; i++) { + unsigned int lcore_id = app->service_lcores[i]; + int rc; + + rc = rte_service_lcore_start(lcore_id); + TEST_ASSERT_SUCCESS(rc, "Unable to start service lcore %d", + lcore_id); + + RETURN_ON_ERROR(rc); + } + + return TEST_SUCCESS; +} + +static int +test_app_stop_service_cores(struct test_app *app) +{ + int i; + + for (i = 0; i < NUM_SERVICE_CORES; i++) { + unsigned int lcore_id = app->service_lcores[i]; + int rc; + + rc = rte_service_lcore_stop(lcore_id); + TEST_ASSERT_SUCCESS(rc, "Unable to stop service lcore %d", + lcore_id); + + RETURN_ON_ERROR(rc); + } + + return TEST_SUCCESS; +} + +static int +test_app_start(struct test_app *app) +{ + int rc; + + rc = test_app_start_event_dev(app); + RETURN_ON_ERROR(rc); + + rc = test_app_start_service_cores(app); + RETURN_ON_ERROR(rc); + + rc = test_app_start_dispatcher(app); + + app->running = true; + + return rc; +} + +static int 
+test_app_stop(struct test_app *app)
+{
+	int rc;
+
+	rc = test_app_stop_dispatcher(app);
+	RETURN_ON_ERROR(rc);
+
+	rc = test_app_stop_service_cores(app);
+	RETURN_ON_ERROR(rc);
+
+	test_app_stop_event_dev(app);
+
+	app->running = false;
+
+	return TEST_SUCCESS;
+}
+
+struct test_app *test_app;
+
+static int
+test_setup(void)
+{
+	int rc;
+
+	test_app = test_app_create();
+	TEST_ASSERT(test_app != NULL, "Unable to allocate memory");
+
+	rc = test_app_setup_event_dev(test_app);
+	RETURN_ON_ERROR(rc);
+
+	rc = test_app_create_dispatcher(test_app);
+	RETURN_ON_ERROR(rc);
+
+	rc = test_app_setup_service_cores(test_app);
+	RETURN_ON_ERROR(rc);
+
+	rc = test_app_register_callbacks(test_app);
+	RETURN_ON_ERROR(rc);
+
+	rc = test_app_bind_ports(test_app);
+
+	return rc;
+}
+
+static void test_teardown(void)
+{
+	if (test_app->running)
+		test_app_stop(test_app);
+
+	test_app_teardown_service_cores(test_app);
+
+	test_app_unregister_callbacks(test_app);
+
+	test_app_unbind_ports(test_app);
+
+	test_app_free_dispatcher(test_app);
+
+	test_app_teardown_event_dev(test_app);
+
+	test_app_free(test_app);
+
+	test_app = NULL;
+}
+
+static int
+test_app_get_completed_events(struct test_app *app)
+{
+	return atomic_load_explicit(&app->completed_events,
+				    memory_order_relaxed);
+}
+
+static int
+test_app_get_errors(struct test_app *app)
+{
+	return atomic_load_explicit(&app->errors, memory_order_relaxed);
+}
+
+static int
+test_basic(void)
+{
+	int rc;
+	int i;
+
+	rc = test_app_start(test_app);
+	RETURN_ON_ERROR(rc);
+
+	uint64_t sns[NUM_FLOWS] = { 0 };
+
+	for (i = 0; i < NUM_EVENTS;) {
+		struct rte_event events[ENQUEUE_BURST_SIZE];
+		int left;
+		int batch_size;
+		int j;
+		uint16_t n = 0;
+
+		batch_size = 1 + rte_rand_max(ENQUEUE_BURST_SIZE);
+		left = NUM_EVENTS - i;
+
+		batch_size = RTE_MIN(left, batch_size);
+
+		for (j = 0; j < batch_size; j++) {
+			struct rte_event *event = &events[j];
+			uint64_t sn;
+			uint32_t flow_id;
+
+			flow_id = rte_rand_max(NUM_FLOWS);
+
+			sn = sns[flow_id]++;
+ + *event = (struct rte_event) { + .queue_id = 0, + .flow_id = flow_id, + .sched_type = RTE_SCHED_TYPE_ATOMIC, + .op = RTE_EVENT_OP_NEW, + .priority = RTE_EVENT_DEV_PRIORITY_NORMAL, + .u64 = sn + }; + } + + while (n < batch_size) + n += rte_event_enqueue_new_burst(test_app->event_dev_id, + DRIVER_PORT_ID, + events + n, + batch_size - n); + + i += batch_size; + } + + while (test_app_get_completed_events(test_app) != NUM_EVENTS) + rte_event_maintain(test_app->event_dev_id, DRIVER_PORT_ID, 0); + + rc = test_app_get_errors(test_app); + TEST_ASSERT(rc == 0, "%d errors occurred", rc); + + rc = test_app_stop(test_app); + RETURN_ON_ERROR(rc); + + struct rte_event_dispatcher_stats stats; + rc = rte_event_dispatcher_stats_get(test_app->dispatcher_id, + &stats); + + TEST_ASSERT_EQUAL(stats.ev_drop_count, 0, "Drop count is not zero"); + TEST_ASSERT_EQUAL(stats.ev_dispatch_count, NUM_EVENTS * NUM_QUEUES, + "Invalid dispatch count"); + TEST_ASSERT(stats.poll_count > 0, "Poll count is zero"); + + return TEST_SUCCESS; +} + +static int +test_drop(void) +{ + int rc; + uint8_t unhandled_queue = 1; + struct rte_event_dispatcher_stats stats; + + rc = test_app_start(test_app); + RETURN_ON_ERROR(rc); + + rc = test_app_unregister_callback(test_app, unhandled_queue); + RETURN_ON_ERROR(rc); + + struct rte_event event = { + .queue_id = unhandled_queue, + .flow_id = 0, + .sched_type = RTE_SCHED_TYPE_ATOMIC, + .op = RTE_EVENT_OP_NEW, + .priority = RTE_EVENT_DEV_PRIORITY_NORMAL, + .u64 = 0 + }; + + do { + rc = rte_event_enqueue_burst(test_app->event_dev_id, + DRIVER_PORT_ID, &event, 1); + } while (rc == 0); + + do { + rc = rte_event_dispatcher_stats_get(test_app->dispatcher_id, + &stats); + RETURN_ON_ERROR(rc); + + rte_event_maintain(test_app->event_dev_id, DRIVER_PORT_ID, 0); + } while (stats.ev_drop_count == 0 && stats.ev_dispatch_count == 0); + + rc = test_app_stop(test_app); + RETURN_ON_ERROR(rc); + + TEST_ASSERT_EQUAL(stats.ev_drop_count, 1, "Drop count is not one"); + 
TEST_ASSERT_EQUAL(stats.ev_dispatch_count, 0,
+			  "Dispatch count is not zero");
+	TEST_ASSERT(stats.poll_count > 0, "Poll count is zero");
+
+	return TEST_SUCCESS;
+}
+
+static struct unit_test_suite test_suite = {
+	.suite_name = "Event dispatcher test suite",
+	.unit_test_cases = {
+		TEST_CASE_ST(test_setup, test_teardown, test_basic),
+		TEST_CASE_ST(test_setup, test_teardown, test_drop),
+		TEST_CASES_END()
+	}
+};
+
+static int
+test_event_dispatcher(void)
+{
+	return unit_test_suite_runner(&test_suite);
+}
+
+REGISTER_TEST_COMMAND(event_dispatcher_autotest, test_event_dispatcher);

From patchwork Mon May 22 09:16:28 2023
X-Patchwork-Submitter: Mattias Rönnblom
X-Patchwork-Id: 127153
X-Patchwork-Delegate: jerinj@marvell.com
From: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
To: <dev@dpdk.org>
CC: Jerin Jacob, Harry van Haaren, Mattias Rönnblom
Subject: [RFC v3 3/3] doc: add event dispatcher programming guide
Date: Mon, 22 May 2023 11:16:28 +0200
Message-ID: <20230522091628.96236-4-mattias.ronnblom@ericsson.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230522091628.96236-1-mattias.ronnblom@ericsson.com>
References: <20210409113223.65260-1-mattias.ronnblom@ericsson.com>
 <20230522091628.96236-1-mattias.ronnblom@ericsson.com>
MIME-Version: 1.0
Provide a programming guide for the event dispatcher.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
 doc/api/doxy-api-index.md                  |   1 +
 doc/guides/prog_guide/event_dispatcher.rst | 423 +++++++++++++++++++++
 doc/guides/prog_guide/index.rst            |   1 +
 3 files changed, 425 insertions(+)
 create mode 100644 doc/guides/prog_guide/event_dispatcher.rst

diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index c709fd48ad..05b22057f9 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -29,6 +29,7 @@ The public API headers are grouped by topics:
   [event_eth_tx_adapter](@ref rte_event_eth_tx_adapter.h),
   [event_timer_adapter](@ref rte_event_timer_adapter.h),
   [event_crypto_adapter](@ref rte_event_crypto_adapter.h),
+  [event_dispatcher](@ref rte_event_dispatcher.h),
   [rawdev](@ref rte_rawdev.h),
   [metrics](@ref rte_metrics.h),
   [bitrate](@ref rte_bitrate.h),
diff --git a/doc/guides/prog_guide/event_dispatcher.rst b/doc/guides/prog_guide/event_dispatcher.rst
new file mode 100644
index 0000000000..6fabadf560
--- /dev/null
+++ b/doc/guides/prog_guide/event_dispatcher.rst
@@ -0,0 +1,423 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(c) 2023 Ericsson AB.
+
+Event Dispatcher
+================
+
+Overview
+--------
+
+The purpose of the event dispatcher is to help reduce coupling in an
+:doc:`Eventdev <eventdev>`-based DPDK application.
+
+In particular, the event dispatcher addresses a scenario where an
+application's modules share the same event device and event device
+ports, and perform work on the same lcore threads.
+
+The event dispatcher replaces the conditional logic that follows an
+event device dequeue operation, where events are dispatched to
+different parts of the application, typically based on fields in the
+``rte_event``, such as the ``queue_id``, ``sub_event_type``, or
+``sched_type``.
+
+Below is an excerpt from a fictitious application consisting of two
+modules: A and B. In this example, event-to-module routing is based
+purely on queue id, where module A expects all events on a certain
+queue id, and module B on two other queue ids. [#Mapping]_
+
+.. code-block:: c
+
+    for (;;) {
+            struct rte_event events[MAX_BURST];
+            uint16_t n;
+            uint16_t i;
+
+            n = rte_event_dequeue_burst(dev_id, port_id, events,
+                                        MAX_BURST, 0);
+
+            for (i = 0; i < n; i++) {
+                    const struct rte_event *event = &events[i];
+
+                    switch (event->queue_id) {
+                    case MODULE_A_QUEUE_ID:
+                            module_a_process(event);
+                            break;
+                    case MODULE_B_STAGE_0_QUEUE_ID:
+                            module_b_process_stage_0(event);
+                            break;
+                    case MODULE_B_STAGE_1_QUEUE_ID:
+                            module_b_process_stage_1(event);
+                            break;
+                    }
+            }
+    }
+
+The issue this example attempts to illustrate is that the centralized
+conditional logic has knowledge of things that should be private to
+the modules involved. In other words, this pattern leads to a
+violation of module encapsulation.
+
+The shared conditional logic contains explicit knowledge about what
+events should go where. If, for example, ``module_a_process()`` is
+broken into two processing stages (a purely module-internal change),
+the shared conditional code must be updated to reflect this change.
+
+The centralized event routing code becomes an issue in larger
+applications, where modules are developed by different organizations.
+This pattern also makes module reuse across different applications
+more difficult. The part of the conditional logic relevant for a
+particular application may need to be duplicated across many module
+instantiations (e.g., applications and test setups).
+
+The event dispatcher separates the mechanism (routing events to their
+receiver) from the policy (which events should go where).
+
+The basic operation of the event dispatcher is as follows:
+
+* Dequeue a batch of events from the event device.
+* For each event, determine which handler should receive the event,
+  using a set of application-provided, per-handler event matching
+  callback functions.
+* Provide the events matching a particular handler to that handler,
+  using its process callback.
+
+Had the above application made use of the event dispatcher, the code
+relevant for its module A may have looked something like this:
+
+.. code-block:: c
+
+    static bool
+    module_a_match(const struct rte_event *event, void *cb_data)
+    {
+            return event->queue_id == MODULE_A_QUEUE_ID;
+    }
+
+    static void
+    module_a_process_events(uint8_t event_dev_id, uint8_t event_port_id,
+                            const struct rte_event *events,
+                            uint16_t num, void *cb_data)
+    {
+            uint16_t i;
+
+            for (i = 0; i < num; i++)
+                    module_a_process_event(&events[i]);
+    }
+
+    /* In the module's initialization code */
+    rte_event_dispatcher_register(EVENT_DISPATCHER_ID, module_a_match,
+                                  NULL, module_a_process_events,
+                                  module_a_data);
+
+(Error handling is left out of this and future example code in this
+chapter.)
+
+When the shared conditional logic is removed, a new question arises:
+which part of the system actually runs the dispatching mechanism? Or,
+phrased differently, what is replacing the function hosting the shared
+conditional logic (typically launched on all lcores using
+``rte_eal_remote_launch()``)? To solve this issue, the event
+dispatcher is run as a DPDK :doc:`Service <service_cores>`.
+
+The event dispatcher is a layer between the application and the event
+device, in the receive direction. In the transmit (i.e., item of work
+submission) direction, the application directly accesses the Eventdev
+core API (e.g., ``rte_event_enqueue_burst()``) to submit new or
+forwarded events to the event device.
+
+Event Dispatcher Creation
+-------------------------
+
+Before any ``rte_event_dispatcher_*()`` calls are made, the
+application must create an event dispatcher instance by means of a
+``rte_event_dispatcher_create()`` call.
+
+The supplied dispatcher id is allocated by the application, and must
+be unique.
+
+The event device must be configured before the event dispatcher is
+created.
+
+Usually, only one event dispatcher is needed per event device. An
+event dispatcher can handle only a single event device.
+
+An event dispatcher is freed using the ``rte_event_dispatcher_free()``
+function. The event dispatcher's service functions must not be running
+on any lcore at the point of this call.
+
+Event Port Binding
+------------------
+
+To be able to dequeue events, the event dispatcher must know which
+event ports are to be used, for every lcore to which the event
+dispatcher service is mapped.
+
+The application configures a particular event device port to be
+managed by a particular lcore by calling
+``rte_event_dispatcher_bind_port_to_lcore()``.
+
+This call is typically made from the part of the application that
+deals with deployment issues (e.g., iterating lcores and determining
+which lcore does what), at the time of application initialization.
+
+The ``rte_event_dispatcher_unbind_port_from_lcore()`` function is used
+to undo this operation.
+
+Multiple lcore threads may not safely use the same event
+port. [#Port-MT-Safety]_
+
+Event ports cannot safely be bound or unbound while the event
+dispatcher's service function is running on any lcore.
+
+Event Handlers
+--------------
+
+The event dispatcher handler is an interface between the event
+dispatcher and an application module, used to route events to the
+appropriate part of the application.
+
+Handler Registration
+^^^^^^^^^^^^^^^^^^^^
+
+The event handler interface consists of two function pointers:
+
+* The ``rte_event_dispatcher_match_t`` callback, whose job is to
+  decide if this event is to be the property of this handler.
+* The ``rte_event_dispatcher_process_t``, which is used by the
+  event dispatcher to deliver matched events.
+
+An event handler registration is valid on all lcores.
+
+The functions pointed to by the match and process callbacks reside in
+the application's domain logic, with one or more handlers per
+application module.
+
+A module may use more than one event handler, for convenience or to
+further decouple sub-modules. However, the event dispatcher may impose
+an upper limit on the number of handlers. In addition, installing many
+handlers increases event dispatcher overhead, although this does not
+necessarily translate to an application-level performance degradation.
+See the section on :ref:`Event Clustering` for more information.
+
+Handler registration and unregistration cannot safely be done while
+the event dispatcher's service function is running on any lcore.
+
+Event Delivery
+^^^^^^^^^^^^^^
+
+The handler callbacks are invoked by the event dispatcher's service
+function, upon the arrival of events on the event ports bound to the
+running service lcore.
+
+The application must not depend on all match callback invocations for
+a particular event batch being made prior to any process calls being
+made. For example, if the event dispatcher dequeues two events from
+the event device, it may choose to find out the destination for the
+first event, and deliver it, and then continue to find out the
+destination for the second, and then deliver that event as well. The
+event dispatcher may also choose a strategy where no event is
+delivered until the destination handlers for both events have been
+determined.
+
+The event dispatcher guarantees that all events provided in a process
+batch have been seen (and matched) by the handler's match callback. It
+also guarantees that all events provided in a single process call
+belong to the same event port dequeue burst.
+
+.. _Event Clustering:
+
+Event Clustering
+^^^^^^^^^^^^^^^^
+
+The event dispatcher maintains the order of events destined for the
+same handler.
+
+The event dispatcher *does not* guarantee to maintain the order of
+events delivered to *different* handlers.
+
+For example, assume that ``MODULE_A_QUEUE_ID`` takes on the value 0,
+and ``MODULE_B_STAGE_0_QUEUE_ID`` takes on the value 1. Then consider
+a scenario where the following events are dequeued from the event
+device (qid is short for event queue id).
+
+.. code-block:: none
+
+    [e0: qid=1], [e1: qid=1], [e2: qid=0], [e3: qid=1]
+
+The event dispatcher may deliver the events in the following manner:
+
+.. code-block:: none
+
+    module_b_stage_0_process([e0: qid=1], [e1: qid=1])
+    module_a_process([e2: qid=0])
+    module_b_stage_0_process([e3: qid=1])
+
+The event dispatcher may also choose to cluster (group) all events
+destined for ``module_b_stage_0_process()`` into one array:
+
+.. code-block:: none
+
+    module_b_stage_0_process([e0: qid=1], [e1: qid=1], [e3: qid=1])
+    module_a_process([e2: qid=0])
+
+Here, the event ``e2`` is reordered and placed behind ``e3``, from a
+delivery order point of view. This kind of reshuffling is allowed,
+since the two events belong to different handlers.
+
+An example of what the event dispatcher may not do is to reorder event
+``e1`` so that it precedes ``e0`` in the array passed to module B's
+stage 0 process callback.
+
+Clustering events destined for the same callback may help improve
+application-level performance, since processing events destined for
+the same handler likely increases temporal locality of memory
+accesses, which in turn may lead to fewer cache misses and improved
+performance.
+
+Finalize
+--------
+
+The event dispatcher may be configured to notify one or more parts of
+the application when the matching and processing of a batch of events
+has completed.
+
+The ``rte_event_dispatcher_finalize_register()`` call is used to
+register a finalize callback. The function
+``rte_event_dispatcher_finalize_unregister()`` is used to remove a
+callback.
+
+The finalize hook may be used by a set of event handlers (in the same
+module, or in a set of cooperating modules) sharing an event output
+buffer, since it allows for flushing of the buffers at the last
+possible moment. In particular, it allows for buffering of
+``RTE_EVENT_OP_FORWARD`` events, which must be flushed before the next
+``rte_event_dequeue_burst()`` call is made (assuming implicit release
+is employed).
+
+The following is an example with an application-defined event output
+buffer (the ``event_buffer``):
+
+.. code-block:: c
+
+    static void
+    finalize_batch(uint8_t event_dev_id, uint8_t event_port_id,
+                   void *cb_data)
+    {
+            struct event_buffer *buffer = cb_data;
+            unsigned int lcore_id = rte_lcore_id();
+            struct event_buffer_lcore *lcore_buffer =
+                    &buffer->lcore_buffer[lcore_id];
+
+            event_buffer_lcore_flush(lcore_buffer);
+    }
+
+    /* In the module's initialization code */
+    rte_event_dispatcher_finalize_register(EVENT_DISPATCHER_ID,
+                                           finalize_batch,
+                                           shared_event_buffer);
+
+The event dispatcher does not track any relationship between a handler
+and a finalize callback. All finalize callbacks will be called, if
+(and only if) at least one event was dequeued from the event device.
+
+Finalize callback registration and unregistration cannot safely be
+done while the event dispatcher's service function is running on any
+lcore.
+
+Service
+-------
+
+The event dispatcher is a DPDK service, and is managed in a manner
+similar to other DPDK services (e.g., an Event Timer Adapter).
+
+Below is an example of how to configure a particular lcore to serve as
+a service lcore, and to map an already-configured event dispatcher
+(identified by ``EVENT_DISPATCHER_ID``) to that lcore.
+
+.. code-block:: c
+
+    static void
+    launch_event_dispatcher_core(unsigned int lcore_id)
+    {
+            uint32_t service_id;
+
+            rte_service_lcore_add(lcore_id);
+
+            rte_event_dispatcher_service_id_get(EVENT_DISPATCHER_ID,
+                                                &service_id);
+
+            rte_service_map_lcore_set(service_id, lcore_id, 1);
+
+            rte_service_lcore_start(lcore_id);
+
+            rte_service_runstate_set(service_id, 1);
+    }
+
+As the final step, the event dispatcher must be started.
+
+.. code-block:: c
+
+    rte_event_dispatcher_start(EVENT_DISPATCHER_ID);
+
+Multi Service Dispatcher Lcores
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+In an Eventdev application, most (or all) compute-intensive and
+performance-sensitive processing is done in an event-driven manner,
+where the CPU cycles spent on application domain logic are the direct
+result of items of work (i.e., ``rte_event`` events) dequeued from an
+event device.
+
+In light of this, it makes sense to have the event dispatcher service
+be the only DPDK service on all lcores used for packet processing, at
+least in principle.
+
+However, there is nothing preventing colocating other services with
+the event dispatcher service on the same lcore.
+
+Tasks that, prior to the introduction of the event dispatcher, were
+performed on the lcore even when no events were received, are prime
+targets for being converted into such auxiliary services, running on
+the dispatcher core set.
+
+An example of such a task would be the management of a per-lcore timer
+wheel (i.e., calling ``rte_timer_manage()``).
+
+Applications employing :doc:`Read-Copy-Update (RCU) <rcu_lib>` (or a
+similar technique) may opt for having the quiescent state signaling
+(e.g., the calls to ``rte_rcu_qsbr_quiescent()``) factored out into a
+separate service, to assure that resource reclamation occurs even
+though some lcores currently do not process any events.
+
+If more services than the event dispatcher service are mapped to a
+service lcore, it's important that the other services are well-behaved
+and don't interfere with event processing to the extent that the
+system's throughput and/or latency requirements are at risk of not
+being met.
+
+In particular, to avoid jitter, they should have a small upper bound
+for the maximum amount of time spent in a single service function
+call.
+
+An example of a scenario with a more CPU-heavy colocated service is a
+low-lcore count deployment, where the event device lacks the
+``RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT`` capability (and thus
+requires software to feed incoming packets into the event device). In
+this case, the best performance may be achieved if the Event Ethernet
+RX and/or TX Adapters are mapped to lcores also used for event
+dispatching, since otherwise the adapter lcores would have a lot of
+idle CPU cycles.
+
+.. rubric:: Footnotes
+
+.. [#Mapping]
+   Event-to-module mapping may reasonably be done based on other
+   ``rte_event`` fields (or even event data), but queue id-based
+   routing serves well in a simple example. Indeed, that's the very
+   reason to have callback match functions, instead of a simple
+   queue id-to-handler scheme.
+
+.. [#Port-MT-Safety]
+   This property (which is a feature, not a bug) is inherited from the
+   core Eventdev APIs.
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 87333ee84a..74fcbcee6b 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -59,6 +59,7 @@ Programmer's Guide
     event_ethernet_tx_adapter
     event_timer_adapter
     event_crypto_adapter
+    event_dispatcher
     qos_framework
     power_man
     packet_classif_access_ctrl