From patchwork Fri Jan 19 17:43:36 2024
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 135993
X-Patchwork-Delegate: jerinj@marvell.com
From: Bruce Richardson
To: dev@dpdk.org
Cc: jerinj@marvell.com, mattias.ronnblom@ericsson.com, abdullah.sevincer@intel.com, sachin.saxena@oss.nxp.com, hemant.agrawal@nxp.com, pbhagavatula@marvell.com, pravin.pathak@intel.com, Bruce Richardson
Subject: [PATCH v2 01/11] eventdev: improve doxygen introduction text
Date: Fri, 19 Jan 2024 17:43:36 +0000
Message-Id: <20240119174346.108905-2-bruce.richardson@intel.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20240119174346.108905-1-bruce.richardson@intel.com>
References: <20240118134557.73172-1-bruce.richardson@intel.com> <20240119174346.108905-1-bruce.richardson@intel.com>

Make some textual improvements to the introduction to eventdev and event
devices in the eventdev header file. This text appears in the doxygen
output for the header file, and introduces the key concepts, for
example: events, event devices, queues, ports and scheduling.

This patch makes the following improvements:
* small textual fixups, e.g. correcting use of singular/plural
* rewrites of some sentences to improve clarity
* using doxygen markdown to split the whole large block up into
  sections, thereby making it easier to read.

No large-scale changes are made, and blocks are not reordered.

Signed-off-by: Bruce Richardson
---
 lib/eventdev/rte_eventdev.h | 112 +++++++++++++++++++++---------------
 1 file changed, 66 insertions(+), 46 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index ec9b02455d..a36c89c7a4 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -12,12 +12,13 @@
  * @file
  *
  * RTE Event Device API
+ * ====================
  *
  * In a polling model, lcores poll ethdev ports and associated rx queues
- * directly to look for packet.
In an event driven model, by contrast, lcores
- * call the scheduler that selects packets for them based on programmer
- * specified criteria. Eventdev library adds support for event driven
- * programming model, which offer applications automatic multicore scaling,
+ * directly to look for packets. In an event driven model, in contrast, lcores
+ * call a scheduler that selects packets for them based on programmer
+ * specified criteria. The eventdev library adds support for the event driven
+ * programming model, which offers applications automatic multicore scaling,
  * dynamic load balancing, pipelining, packet ingress order maintenance and
  * synchronization services to simplify application packet processing.
  *
@@ -25,12 +26,15 @@
  *
  * - The application-oriented Event API that includes functions to setup
  *   an event device (configure it, setup its queues, ports and start it), to
- *   establish the link between queues to port and to receive events, and so on.
+ *   establish the links between queues and ports to receive events, and so on.
  *
  * - The driver-oriented Event API that exports a function allowing
- *   an event poll Mode Driver (PMD) to simultaneously register itself as
+ *   an event poll Mode Driver (PMD) to register itself as
  *   an event device driver.
  *
+ * Application-oriented Event API
+ * ------------------------------
+ *
  * Event device components:
  *
  * +-----------------+
  *
@@ -75,27 +79,33 @@
  * |                                                           |
  * +-----------------------------------------------------------+
  *
- * Event device: A hardware or software-based event scheduler.
+ * **Event device**: A hardware or software-based event scheduler.
  *
- * Event: A unit of scheduling that encapsulates a packet or other datatype
- * like SW generated event from the CPU, Crypto work completion notification,
- * Timer expiry event notification etc as well as metadata.
- * The metadata includes flow ID, scheduling type, event priority, event_type,
+ * **Event**: A unit of scheduling that encapsulates a packet or other datatype,
+ * such as: SW generated event from the CPU, crypto work completion notification,
+ * timer expiry event notification etc., as well as metadata about the packet or data.
+ * The metadata includes a flow ID (if any), scheduling type, event priority, event_type,
  * sub_event_type etc.
  *
- * Event queue: A queue containing events that are scheduled by the event dev.
+ * **Event queue**: A queue containing events that are scheduled by the event device.
  * An event queue contains events of different flows associated with scheduling
  * types, such as atomic, ordered, or parallel.
+ * Each event given to an eventdev must have a valid event queue id field in the metadata,
+ * to specify on which event queue in the device the event must be placed,
+ * for later scheduling to a core.
  *
- * Event port: An application's interface into the event dev for enqueue and
+ * **Event port**: An application's interface into the event dev for enqueue and
  * dequeue operations. Each event port can be linked with one or more
  * event queues for dequeue operations.
- *
- * By default, all the functions of the Event Device API exported by a PMD
- * are lock-free functions which assume to not be invoked in parallel on
- * different logical cores to work on the same target object. For instance,
- * the dequeue function of a PMD cannot be invoked in parallel on two logical
- * cores to operates on same event port. Of course, this function
+ * Each port should be associated with a single core (enqueue and dequeue is not thread-safe).
+ * To schedule events to a core, the event device will schedule them to the event port(s)
+ * being polled by that core.
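[Editor's illustration, not part of the patch: the per-core pattern described in the hunk above — each core polling its own event port and sending processed events on to a downstream queue — can be sketched with self-contained stand-ins. The mock_* names below are invented for illustration only; the real fast-path calls are rte_event_dequeue_burst() and rte_event_enqueue_burst().]

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Invented stand-ins mirroring the shape of the eventdev fast path;
 * these are NOT the DPDK implementations. */
struct mock_event {
	uint32_t flow_id;  /* flow ID metadata carried with the event */
	uint8_t queue_id;  /* event queue on which the event is placed */
};

#define MOCK_BURST 8

static struct mock_event g_queue[MOCK_BURST]; /* a single mock event queue */
static uint16_t g_count;

/* Mock of the scheduler handing queued events to the port polled by this
 * core (the role rte_event_dequeue_burst() plays for a real device). */
static uint16_t mock_dequeue_burst(struct mock_event ev[], uint16_t nb)
{
	uint16_t n = g_count < nb ? g_count : nb;
	memcpy(ev, g_queue, (size_t)n * sizeof(ev[0]));
	g_count -= n;
	return n;
}

/* Mock of enqueueing processed events to a downstream queue
 * (the role rte_event_enqueue_burst() plays for a real device). */
static uint16_t mock_enqueue_burst(const struct mock_event ev[], uint16_t nb)
{
	memcpy(g_queue, ev, (size_t)nb * sizeof(ev[0]));
	g_count = nb;
	return nb;
}

/* One intermediate pipeline stage: dequeue a burst on this core's port,
 * retarget each event at the next-stage queue, and enqueue it again.
 * Returns the number of events processed. */
static uint16_t worker_stage(uint8_t next_queue_id)
{
	struct mock_event ev[MOCK_BURST];
	uint16_t i, n = mock_dequeue_burst(ev, MOCK_BURST);

	for (i = 0; i < n; i++)
		ev[i].queue_id = next_queue_id;
	return mock_enqueue_burst(ev, n);
}

/* Hypothetical helper to seed the mock queue for the demo. */
static void seed_events(uint16_t n)
{
	for (uint16_t i = 0; i < n; i++)
		g_queue[i] = (struct mock_event){ .flow_id = i, .queue_id = 0 };
	g_count = n;
}
```

Flow IDs pass through the stage untouched here; whether a real device preserves them depends on its capability flags.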
+ *
+ * *NOTE*: By default, all the functions of the Event Device API exported by a PMD
+ * are lock-free functions, which must not be invoked on the same object in parallel on
+ * different logical cores.
+ * For instance, the dequeue function of a PMD cannot be invoked in parallel on two logical
+ * cores to operate on same event port. Of course, this function
  * can be invoked in parallel by different logical cores on different ports.
  * It is the responsibility of the upper level application to enforce this rule.
  *
@@ -107,22 +117,19 @@
  *
  * Event devices are dynamically registered during the PCI/SoC device probing
  * phase performed at EAL initialization time.
- * When an Event device is being probed, a *rte_event_dev* structure and
- * a new device identifier are allocated for that device. Then, the
- * event_dev_init() function supplied by the Event driver matching the probed
- * device is invoked to properly initialize the device.
+ * When an Event device is being probed, an *rte_event_dev* structure is allocated
+ * for it and the event_dev_init() function supplied by the Event driver
+ * is invoked to properly initialize the device.
  *
- * The role of the device init function consists of resetting the hardware or
- * software event driver implementations.
+ * The role of the device init function is to reset the device hardware or
+ * to initialize the software event driver implementation.
  *
- * If the device init operation is successful, the correspondence between
- * the device identifier assigned to the new device and its associated
- * *rte_event_dev* structure is effectively registered.
- * Otherwise, both the *rte_event_dev* structure and the device identifier are
- * freed.
+ * If the device init operation is successful, the device is assigned a device
+ * id (dev_id) for application use.
+ * Otherwise, the *rte_event_dev* structure is freed.
  *
  * The functions exported by the application Event API to setup a device
- * designated by its device identifier must be invoked in the following order:
+ * must be invoked in the following order:
  * - rte_event_dev_configure()
  * - rte_event_queue_setup()
  * - rte_event_port_setup()
@@ -130,10 +137,15 @@
  * - rte_event_dev_start()
  *
  * Then, the application can invoke, in any order, the functions
- * exported by the Event API to schedule events, dequeue events, enqueue events,
- * change event queue(s) to event port [un]link establishment and so on.
- *
- * Application may use rte_event_[queue/port]_default_conf_get() to get the
+ * exported by the Event API to dequeue events, enqueue events,
+ * and link and unlink event queue(s) to event ports.
+ *
+ * Before configuring a device, an application should call rte_event_dev_info_get()
+ * to determine the capabilities of the event device, and any queue or port
+ * limits of that device. The parameters set in the various device configuration
+ * structures may need to be adjusted based on the max values provided in the
+ * device information structure returned from the info_get API.
+ * An application may use rte_event_[queue/port]_default_conf_get() to get the
  * default configuration to set up an event queue or event port by
  * overriding few default values.
  *
@@ -145,7 +157,11 @@
  * when the device is stopped.
  *
  * Finally, an application can close an Event device by invoking the
- * rte_event_dev_close() function.
+ * rte_event_dev_close() function. Once closed, a device cannot be
+ * reconfigured or restarted.
+ *
+ * Driver-Oriented Event API
+ * -------------------------
  *
  * Each function of the application Event API invokes a specific function
  * of the PMD that controls the target device designated by its device
@@ -164,10 +180,13 @@
  * supplied in the *event_dev_ops* structure of the *rte_event_dev* structure.
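[Editor's illustration, not part of the patch: the ordering rules described in the hunks above — configure first, then queue/port setup, then start; no reconfiguration while running; no restart after close — can be modelled as a small state machine. All names below are invented for this sketch; this is not DPDK code.]

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative device lifecycle states, mirroring the required eventdev
 * call order: configure -> queue/port setup -> start -> stop -> close. */
enum dev_state { DEV_UNCONFIGURED, DEV_CONFIGURED, DEV_STARTED, DEV_STOPPED, DEV_CLOSED };

struct mock_dev {
	enum dev_state state;
	int nb_queues;
	int nb_ports;
};

/* Configuration is only legal before the first start or after a stop. */
static bool dev_configure(struct mock_dev *d, int nq, int np)
{
	if (d->state != DEV_UNCONFIGURED && d->state != DEV_STOPPED)
		return false; /* cannot reconfigure a running or closed device */
	d->nb_queues = nq;
	d->nb_ports = np;
	d->state = DEV_CONFIGURED;
	return true;
}

/* Queue and port setup must follow configure and respect the configured limits. */
static bool dev_queue_setup(struct mock_dev *d, int q)
{
	return d->state == DEV_CONFIGURED && q < d->nb_queues;
}

static bool dev_port_setup(struct mock_dev *d, int p)
{
	return d->state == DEV_CONFIGURED && p < d->nb_ports;
}

static bool dev_start(struct mock_dev *d)
{
	if (d->state != DEV_CONFIGURED)
		return false; /* start requires a configured device */
	d->state = DEV_STARTED;
	return true;
}

static bool dev_stop(struct mock_dev *d)
{
	if (d->state != DEV_STARTED)
		return false;
	d->state = DEV_STOPPED;
	return true;
}

static bool dev_close(struct mock_dev *d)
{
	if (d->state == DEV_STARTED)
		return false; /* must stop before closing */
	d->state = DEV_CLOSED; /* once closed, no reconfigure/restart */
	return true;
}
```

Encoding the order as states makes the "once closed, a device cannot be reconfigured or restarted" rule from the patch text mechanically checkable.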
  *
  * For performance reasons, the address of the fast-path functions of the
- * Event driver is not contained in the *event_dev_ops* structure.
+ * Event driver are not contained in the *event_dev_ops* structure.
  * Instead, they are directly stored at the beginning of the *rte_event_dev*
  * structure to avoid an extra indirect memory access during their invocation.
  *
+ * Event Enqueue, Dequeue and Scheduling
+ * -------------------------------------
+ *
  * RTE event device drivers do not use interrupts for enqueue or dequeue
  * operation. Instead, Event drivers export Poll-Mode enqueue and dequeue
  * functions to applications.
@@ -179,21 +198,22 @@
  * crypto work completion notification etc
  *
  * The *dequeue* operation gets one or more events from the event ports.
- * The application process the events and send to downstream event queue through
- * rte_event_enqueue_burst() if it is an intermediate stage of event processing,
- * on the final stage, the application may use Tx adapter API for maintaining
- * the ingress order and then send the packet/event on the wire.
+ * The application processes the events and sends them to a downstream event queue through
+ * rte_event_enqueue_burst(), if it is an intermediate stage of event processing.
+ * On the final stage of processing, the application may use the Tx adapter API for maintaining
+ * the event ingress order while sending the packet/event on the wire via NIC Tx.
  *
  * The point at which events are scheduled to ports depends on the device.
  * For hardware devices, scheduling occurs asynchronously without any software
  * intervention. Software schedulers can either be distributed
  * (each worker thread schedules events to its own port) or centralized
  * (a dedicated thread schedules to all ports). Distributed software schedulers
- * perform the scheduling in rte_event_dequeue_burst(), whereas centralized
- * scheduler logic need a dedicated service core for scheduling.
- * The RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag is not set
- * indicates the device is centralized and thus needs a dedicated scheduling
- * thread that repeatedly calls software specific scheduling function.
+ * perform the scheduling inside the enqueue or dequeue functions, whereas centralized
+ * software schedulers need a dedicated service core for scheduling.
+ * The absence of the RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag
+ * indicates that the device is centralized and thus needs a dedicated scheduling
+ * thread, generally a service core,
+ * that repeatedly calls the software specific scheduling function.
  *
  * An event driven worker thread has following typical workflow on fastpath:
  * \code{.c}

From patchwork Fri Jan 19 17:43:37 2024
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 135994
X-Patchwork-Delegate: jerinj@marvell.com
From: Bruce Richardson
To: dev@dpdk.org
Cc: jerinj@marvell.com, mattias.ronnblom@ericsson.com, abdullah.sevincer@intel.com, sachin.saxena@oss.nxp.com, hemant.agrawal@nxp.com, pbhagavatula@marvell.com, pravin.pathak@intel.com, Bruce Richardson
Subject: [PATCH v2 02/11] eventdev: move text on driver internals to proper section
Date: Fri, 19 Jan 2024 17:43:37 +0000
Message-Id: <20240119174346.108905-3-bruce.richardson@intel.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20240119174346.108905-1-bruce.richardson@intel.com>
References: <20240118134557.73172-1-bruce.richardson@intel.com> <20240119174346.108905-1-bruce.richardson@intel.com>

Inside the doxygen introduction text, some internal details of how
eventdev works were mixed in with application-relevant details. Move
these details on probing etc. to the driver-relevant section.
Signed-off-by: Bruce Richardson
---
 lib/eventdev/rte_eventdev.h | 32 ++++++++++++++++----------------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index a36c89c7a4..949e957f1b 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -112,22 +112,6 @@
  * In all functions of the Event API, the Event device is
  * designated by an integer >= 0 named the device identifier *dev_id*
  *
- * At the Event driver level, Event devices are represented by a generic
- * data structure of type *rte_event_dev*.
- *
- * Event devices are dynamically registered during the PCI/SoC device probing
- * phase performed at EAL initialization time.
- * When an Event device is being probed, an *rte_event_dev* structure is allocated
- * for it and the event_dev_init() function supplied by the Event driver
- * is invoked to properly initialize the device.
- *
- * The role of the device init function is to reset the device hardware or
- * to initialize the software event driver implementation.
- *
- * If the device init operation is successful, the device is assigned a device
- * id (dev_id) for application use.
- * Otherwise, the *rte_event_dev* structure is freed.
- *
  * The functions exported by the application Event API to setup a device
  * must be invoked in the following order:
  * - rte_event_dev_configure()
@@ -163,6 +147,22 @@
  * Driver-Oriented Event API
  * -------------------------
  *
+ * At the Event driver level, Event devices are represented by a generic
+ * data structure of type *rte_event_dev*.
+ *
+ * Event devices are dynamically registered during the PCI/SoC device probing
+ * phase performed at EAL initialization time.
+ * When an Event device is being probed, an *rte_event_dev* structure is allocated
+ * for it and the event_dev_init() function supplied by the Event driver
+ * is invoked to properly initialize the device.
+ *
+ * The role of the device init function is to reset the device hardware or
+ * to initialize the software event driver implementation.
+ *
+ * If the device init operation is successful, the device is assigned a device
+ * id (dev_id) for application use.
+ * Otherwise, the *rte_event_dev* structure is freed.
+ *
  * Each function of the application Event API invokes a specific function
  * of the PMD that controls the target device designated by its device
  * identifier.

From patchwork Fri Jan 19 17:43:38 2024
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 135995
X-Patchwork-Delegate: jerinj@marvell.com
From: Bruce Richardson
To: dev@dpdk.org
Cc: jerinj@marvell.com, mattias.ronnblom@ericsson.com, abdullah.sevincer@intel.com, sachin.saxena@oss.nxp.com, hemant.agrawal@nxp.com, pbhagavatula@marvell.com, pravin.pathak@intel.com, Bruce Richardson
Subject: [PATCH v2 03/11] eventdev: update documentation on device capability flags
Date: Fri, 19 Jan 2024 17:43:38 +0000
Message-Id: <20240119174346.108905-4-bruce.richardson@intel.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20240119174346.108905-1-bruce.richardson@intel.com>
References: <20240118134557.73172-1-bruce.richardson@intel.com> <20240119174346.108905-1-bruce.richardson@intel.com>

Update the device capability docs, to:
* include more cross-references
* split longer text into paragraphs, in most cases with each flag having
  a single-line summary at the start of the doc block
* general comment rewording and clarification as appropriate

Signed-off-by: Bruce Richardson
---
 lib/eventdev/rte_eventdev.h | 130 ++++++++++++++++++++++++++----------
 1 file changed, 93 insertions(+), 37 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 949e957f1b..57a2791946 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -243,143
+243,199 @@ struct rte_event;

 /* Event device capability bitmap flags */
 #define RTE_EVENT_DEV_CAP_QUEUE_QOS (1ULL << 0)
 /**< Event scheduling prioritization is based on the priority and weight
- * associated with each event queue. Events from a queue with highest priority
- * is scheduled first. If the queues are of same priority, weight of the queues
+ * associated with each event queue.
+ *
+ * Events from a queue with highest priority
+ * are scheduled first. If the queues are of same priority, weight of the queues
  * are considered to select a queue in a weighted round robin fashion.
  * Subsequent dequeue calls from an event port could see events from the same
  * event queue, if the queue is configured with an affinity count. Affinity
  * count is the number of subsequent dequeue calls, in which an event port
  * should use the same event queue if the queue is non-empty
  *
+ * NOTE: A device may use both queue prioritization and event prioritization
+ * (@ref RTE_EVENT_DEV_CAP_EVENT_QOS capability) when making packet scheduling decisions.
+ *
  * @see rte_event_queue_setup(), rte_event_queue_attr_set()
  */
 #define RTE_EVENT_DEV_CAP_EVENT_QOS (1ULL << 1)
 /**< Event scheduling prioritization is based on the priority associated with
- * each event. Priority of each event is supplied in *rte_event* structure
+ * each event.
+ *
+ * Priority of each event is supplied in *rte_event* structure
  * on each enqueue operation.
+ * If this capability is not set, the priority field of the event structure
+ * is ignored for each event.
  *
+ * NOTE: A device may use both queue prioritization (@ref RTE_EVENT_DEV_CAP_QUEUE_QOS capability)
+ * and event prioritization when making packet scheduling decisions.
+ *
  * @see rte_event_enqueue_burst()
  */
 #define RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED (1ULL << 2)
 /**< Event device operates in distributed scheduling mode.
+ *
  * In distributed scheduling mode, event scheduling happens in HW or
- * rte_event_dequeue_burst() or the combination of these two.
+ * rte_event_dequeue_burst() / rte_event_enqueue_burst() or the combination of these two.
  * If the flag is not set then eventdev is centralized and thus needs a
  * dedicated service core that acts as a scheduling thread .
  *
- * @see rte_event_dequeue_burst()
+ * @see rte_event_dev_service_id_get
  */
 #define RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES (1ULL << 3)
 /**< Event device is capable of enqueuing events of any type to any queue.
+ *
  * If this capability is not set, the queue only supports events of the
- * *RTE_SCHED_TYPE_* type that it was created with.
+ * *RTE_SCHED_TYPE_* type that it was created with.
+ * Any events of other types scheduled to the queue will handled in an
+ * implementation-dependent manner. They may be dropped by the
+ * event device, or enqueued with the scheduling type adjusted to the
+ * correct/supported value.
  *
- * @see RTE_SCHED_TYPE_* values
+ * @see rte_event_enqueue_burst
+ * @see RTE_SCHED_TYPE_ATOMIC RTE_SCHED_TYPE_ORDERED RTE_SCHED_TYPE_PARALLEL
  */
 #define RTE_EVENT_DEV_CAP_BURST_MODE (1ULL << 4)
 /**< Event device is capable of operating in burst mode for enqueue(forward,
- * release) and dequeue operation. If this capability is not set, application
- * still uses the rte_event_dequeue_burst() and rte_event_enqueue_burst() but
- * PMD accepts only one event at a time.
+ * release) and dequeue operation.
+ *
+ * If this capability is not set, application
+ * can still use the rte_event_dequeue_burst() and rte_event_enqueue_burst() but
+ * PMD accepts or returns only one event at a time.
  *
  * @see rte_event_dequeue_burst() rte_event_enqueue_burst()
  */
 #define RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE (1ULL << 5)
 /**< Event device ports support disabling the implicit release feature, in
  * which the port will release all unreleased events in its dequeue operation.
+ *
  * If this capability is set and the port is configured with implicit release
  * disabled, the application is responsible for explicitly releasing events
- * using either the RTE_EVENT_OP_FORWARD or the RTE_EVENT_OP_RELEASE event
+ * using either the @ref RTE_EVENT_OP_FORWARD or the @ref RTE_EVENT_OP_RELEASE event
  * enqueue operations.
  *
  * @see rte_event_dequeue_burst() rte_event_enqueue_burst()
  */
 #define RTE_EVENT_DEV_CAP_NONSEQ_MODE (1ULL << 6)
-/**< Event device is capable of operating in none sequential mode. The path
- * of the event is not necessary to be sequential. Application can change
- * the path of event at runtime. If the flag is not set, then event each event
- * will follow a path from queue 0 to queue 1 to queue 2 etc. If the flag is
- * set, events may be sent to queues in any order. If the flag is not set, the
- * eventdev will return an error when the application enqueues an event for a
+/**< Event device is capable of operating in non-sequential mode.
+ *
+ * The path of the event is not necessary to be sequential. Application can change
+ * the path of event at runtime and events may be sent to queues in any order.
+ *
+ * If the flag is not set, then event each event will follow a path from queue 0
+ * to queue 1 to queue 2 etc.
+ * The eventdev will return an error when the application enqueues an event for a
  * qid which is not the next in the sequence.
  */
 #define RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK (1ULL << 7)
-/**< Event device is capable of configuring the queue/port link at runtime.
+/**< Event device is capable of reconfiguring the queue/port link at runtime.
+ *
  * If the flag is not set, the eventdev queue/port link is only can be
- * configured during initialization.
+ * configured during initialization, or by stopping the device and
+ * then later restarting it after reconfiguration.
+ *
+ * @see rte_event_port_link rte_event_port_unlink
  */
 #define RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT (1ULL << 8)
-/**< Event device is capable of setting up the link between multiple queue
- * with single port. If the flag is not set, the eventdev can only map a
- * single queue to each port or map a single queue to many port.
+/**< Event device is capable of setting up links between multiple queues and a single port.
+ *
+ * If the flag is not set, each port may only be linked to a single queue, and
+ * so can only receive events from that queue.
+ * However, each queue may be linked to multiple ports.
+ *
+ * @see rte_event_port_link
  */
 #define RTE_EVENT_DEV_CAP_CARRY_FLOW_ID (1ULL << 9)
-/**< Event device preserves the flow ID from the enqueued
- * event to the dequeued event if the flag is set. Otherwise,
- * the content of this field is implementation dependent.
+/**< Event device preserves the flow ID from the enqueued event to the dequeued event.
+ *
+ * If this flag is not set,
+ * the content of the flow-id field in dequeued events is implementation dependent.
+ *
+ * @see rte_event_dequeue_burst
  */
 #define RTE_EVENT_DEV_CAP_MAINTENANCE_FREE (1ULL << 10)
 /**< Event device *does not* require calls to rte_event_maintain().
+ *
  * An event device that does not set this flag requires calls to
  * rte_event_maintain() during periods when neither
  * rte_event_dequeue_burst() nor rte_event_enqueue_burst() are called
  * on a port. This will allow the event device to perform internal
  * processing, such as flushing buffered events, return credits to a
  * global pool, or process signaling related to load balancing.
+ *
+ * @see rte_event_maintain
  */
 #define RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR (1ULL << 11)
 /**< Event device is capable of changing the queue attributes at runtime i.e
- * after rte_event_queue_setup() or rte_event_start() call sequence. If this
- * flag is not set, eventdev queue attributes can only be configured during
+ * after rte_event_queue_setup() or rte_event_dev_start() call sequence.
+ *
+ * If this flag is not set, eventdev queue attributes can only be configured during
  * rte_event_queue_setup().
+ *
+ * @see rte_event_queue_setup
  */
 #define RTE_EVENT_DEV_CAP_PROFILE_LINK (1ULL << 12)
-/**< Event device is capable of supporting multiple link profiles per event port
- * i.e., the value of `rte_event_dev_info::max_profiles_per_port` is greater
- * than one.
+/**< Event device is capable of supporting multiple link profiles per event port.
+ *
+ *
+ * When set, the value of `rte_event_dev_info::max_profiles_per_port` is greater
+ * than one, and multiple profiles may be configured and then switched at runtime.
+ * If not set, only a single profile may be configured, which may itself be
+ * runtime adjustable (if @ref RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK is set).
+ *
+ * @see rte_event_port_profile_links_set rte_event_port_profile_links_get
+ * @see rte_event_port_profile_switch
+ * @see RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK
  */
 /* Event device priority levels */
 #define RTE_EVENT_DEV_PRIORITY_HIGHEST 0
-/**< Highest priority expressed across eventdev subsystem
+/**< Highest priority expressed across eventdev subsystem.
+ *
  * @see rte_event_queue_setup(), rte_event_enqueue_burst()
  * @see rte_event_port_link()
  */
 #define RTE_EVENT_DEV_PRIORITY_NORMAL 128
-/**< Normal priority expressed across eventdev subsystem
+/**< Normal priority expressed across eventdev subsystem.
+ *
  * @see rte_event_queue_setup(), rte_event_enqueue_burst()
  * @see rte_event_port_link()
  */
 #define RTE_EVENT_DEV_PRIORITY_LOWEST 255
-/**< Lowest priority expressed across eventdev subsystem
+/**< Lowest priority expressed across eventdev subsystem.
+ * * @see rte_event_queue_setup(), rte_event_enqueue_burst() * @see rte_event_port_link() */ /* Event queue scheduling weights */ #define RTE_EVENT_QUEUE_WEIGHT_HIGHEST 255 -/**< Highest weight of an event queue +/**< Highest weight of an event queue. + * * @see rte_event_queue_attr_get(), rte_event_queue_attr_set() */ #define RTE_EVENT_QUEUE_WEIGHT_LOWEST 0 -/**< Lowest weight of an event queue +/**< Lowest weight of an event queue. + * * @see rte_event_queue_attr_get(), rte_event_queue_attr_set() */ /* Event queue scheduling affinity */ #define RTE_EVENT_QUEUE_AFFINITY_HIGHEST 255 -/**< Highest scheduling affinity of an event queue +/**< Highest scheduling affinity of an event queue. + * * @see rte_event_queue_attr_get(), rte_event_queue_attr_set() */ #define RTE_EVENT_QUEUE_AFFINITY_LOWEST 0 -/**< Lowest scheduling affinity of an event queue +/**< Lowest scheduling affinity of an event queue. + * * @see rte_event_queue_attr_get(), rte_event_queue_attr_set() */
From patchwork Fri Jan 19 17:43:39 2024 X-Patchwork-Submitter: Bruce Richardson X-Patchwork-Id: 135996 X-Patchwork-Delegate: jerinj@marvell.com From: Bruce Richardson To: dev@dpdk.org Cc: jerinj@marvell.com, mattias.ronnblom@ericsson.com, abdullah.sevincer@intel.com, sachin.saxena@oss.nxp.com, hemant.agrawal@nxp.com, pbhagavatula@marvell.com, pravin.pathak@intel.com, Bruce Richardson Subject: [PATCH v2 04/11] eventdev: cleanup doxygen comments on info structure Date: Fri, 19 Jan 2024 17:43:39 +0000 Message-Id: <20240119174346.108905-5-bruce.richardson@intel.com> In-Reply-To: <20240119174346.108905-1-bruce.richardson@intel.com> Some small rewording changes to the doxygen comments on struct
rte_event_dev_info. Signed-off-by: Bruce Richardson --- lib/eventdev/rte_eventdev.h | 46 ++++++++++++++++++++----------------- 1 file changed, 25 insertions(+), 21 deletions(-) diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h index 57a2791946..872f241df2 100644 --- a/lib/eventdev/rte_eventdev.h +++ b/lib/eventdev/rte_eventdev.h @@ -482,54 +482,58 @@ struct rte_event_dev_info { const char *driver_name; /**< Event driver name */ struct rte_device *dev; /**< Device information */ uint32_t min_dequeue_timeout_ns; - /**< Minimum supported global dequeue timeout(ns) by this device */ + /**< Minimum global dequeue timeout(ns) supported by this device */ uint32_t max_dequeue_timeout_ns; - /**< Maximum supported global dequeue timeout(ns) by this device */ + /**< Maximum global dequeue timeout(ns) supported by this device */ uint32_t dequeue_timeout_ns; /**< Configured global dequeue timeout(ns) for this device */ uint8_t max_event_queues; - /**< Maximum event_queues supported by this device */ + /**< Maximum event queues supported by this device */ uint32_t max_event_queue_flows; - /**< Maximum supported flows in an event queue by this device*/ + /**< Maximum number of flows within an event queue supported by this device*/ uint8_t max_event_queue_priority_levels; /**< Maximum number of event queue priority levels by this device. - * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability + * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability. + * The priority levels are evenly distributed between + * @ref RTE_EVENT_DEV_PRIORITY_HIGHEST and @ref RTE_EVENT_DEV_PRIORITY_LOWEST. */ uint8_t max_event_priority_levels; /**< Maximum number of event priority levels by this device. * Valid when the device has RTE_EVENT_DEV_CAP_EVENT_QOS capability + * The priority levels are evenly distributed between + * @ref RTE_EVENT_DEV_PRIORITY_HIGHEST and @ref RTE_EVENT_DEV_PRIORITY_LOWEST. 
*/ uint8_t max_event_ports; /**< Maximum number of event ports supported by this device */ uint8_t max_event_port_dequeue_depth; - /**< Maximum number of events can be dequeued at a time from an - * event port by this device. - * A device that does not support bulk dequeue will set this as 1. + /**< Maximum number of events that can be dequeued at a time from an event port + * on this device. + * A device that does not support bulk dequeue will set this to 1. */ uint32_t max_event_port_enqueue_depth; - /**< Maximum number of events can be enqueued at a time from an - * event port by this device. - * A device that does not support bulk enqueue will set this as 1. + /**< Maximum number of events that can be enqueued at a time to an event port + * on this device. + * A device that does not support bulk enqueue will set this to 1. */ uint8_t max_event_port_links; - /**< Maximum number of queues that can be linked to a single event - * port by this device. + /**< Maximum number of queues that can be linked to a single event port on this device. */ int32_t max_num_events; /**< A *closed system* event dev has a limit on the number of events it - * can manage at a time. An *open system* event dev does not have a - * limit and will specify this as -1. + * can manage at a time. + * Once the number of events tracked by an eventdev exceeds this number, + * any enqueues of NEW events will fail. + * An *open system* event dev does not have a limit and will specify this as -1. */ uint32_t event_dev_cap; - /**< Event device capabilities(RTE_EVENT_DEV_CAP_)*/ + /**< Event device capabilities flags (RTE_EVENT_DEV_CAP_*) */ uint8_t max_single_link_event_port_queue_pairs; - /**< Maximum number of event ports and queues that are optimized for - * (and only capable of) single-link configurations supported by this - * device. These ports and queues are not accounted for in - * max_event_ports or max_event_queues. 
+ /**< Maximum number of event ports and queues, supported by this device, + * that are optimized for (and only capable of) single-link configurations. + * These ports and queues are not accounted for in max_event_ports or max_event_queues. */ uint8_t max_profiles_per_port; - /**< Maximum number of event queue profiles per event port. + /**< Maximum number of event queue link profiles per event port. * A device that doesn't support multiple profiles will set this as 1. */ };
From patchwork Fri Jan 19 17:43:40 2024 X-Patchwork-Submitter: Bruce Richardson X-Patchwork-Id: 135997 X-Patchwork-Delegate: jerinj@marvell.com From: Bruce Richardson To: dev@dpdk.org Cc: jerinj@marvell.com, mattias.ronnblom@ericsson.com, abdullah.sevincer@intel.com, sachin.saxena@oss.nxp.com, hemant.agrawal@nxp.com, pbhagavatula@marvell.com, pravin.pathak@intel.com, Bruce Richardson Subject: [PATCH v2 05/11] eventdev: improve function documentation for query fns Date: Fri, 19 Jan 2024 17:43:40 +0000 Message-Id: <20240119174346.108905-6-bruce.richardson@intel.com> In-Reply-To: <20240119174346.108905-1-bruce.richardson@intel.com> General improvements to the doxygen docs for eventdev functions for querying basic information: * number of devices * id for a particular device * socket id of device * capability information for a device Signed-off-by: Bruce Richardson --- lib/eventdev/rte_eventdev.h | 22 +++++++++++++--------- 1 file changed, 13 insertions(+), 9 deletions(-) diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h index 872f241df2..c57c93a22e 100644 --- a/lib/eventdev/rte_eventdev.h +++ b/lib/eventdev/rte_eventdev.h @@ -440,8 +440,7 @@ struct rte_event; */ /** - * Get the total number of event
devices that have been successfully - * initialised. + * Get the total number of event devices available for application use. * * @return * The total number of usable event devices. @@ -456,8 +455,10 @@ rte_event_dev_count(void); * Event device name to select the event device identifier. * * @return - * Returns event device identifier on success. - * - <0: Failure to find named event device. + * Event device identifier (dev_id >= 0) on success. + * Negative error code on failure: + * - -EINVAL - input name parameter is invalid + * - -ENODEV - no event device found with that name */ int rte_event_dev_get_dev_id(const char *name); @@ -470,7 +471,8 @@ rte_event_dev_get_dev_id(const char *name); * @return * The NUMA socket id to which the device is connected or * a default of zero if the socket could not be determined. - * -(-EINVAL) dev_id value is out of range. + * -EINVAL on error, where the given dev_id value does not + * correspond to any event device. */ int rte_event_dev_socket_id(uint8_t dev_id); @@ -539,18 +541,20 @@ struct rte_event_dev_info { }; /** - * Retrieve the contextual information of an event device. + * Retrieve details of an event device's capabilities and configuration limits. * * @param dev_id * The identifier of the device. * * @param[out] dev_info * A pointer to a structure of type *rte_event_dev_info* to be filled with the - * contextual information of the device. + * information about the device's capabilities. * * @return - * - 0: Success, driver updates the contextual information of the event device - * - <0: Error code returned by the driver info get function. + * - 0: Success, information about the event device is present in dev_info. + * - <0: Failure, error code returned by the function. + * - -EINVAL - invalid input parameters, e.g. 
incorrect device id + * - -ENOTSUP - device does not support returning capabilities information */ int rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info);
From patchwork Fri Jan 19 17:43:41 2024 X-Patchwork-Submitter: Bruce Richardson X-Patchwork-Id: 135998 X-Patchwork-Delegate: jerinj@marvell.com From: Bruce Richardson To: dev@dpdk.org Cc: jerinj@marvell.com, mattias.ronnblom@ericsson.com, abdullah.sevincer@intel.com, sachin.saxena@oss.nxp.com, hemant.agrawal@nxp.com, pbhagavatula@marvell.com, pravin.pathak@intel.com, Bruce Richardson Subject: [PATCH v2 06/11] eventdev: improve doxygen comments on configure struct Date: Fri, 19 Jan 2024 17:43:41 +0000 Message-Id: <20240119174346.108905-7-bruce.richardson@intel.com> In-Reply-To: <20240119174346.108905-1-bruce.richardson@intel.com> General rewording and cleanup on the rte_event_dev_config structure. Improved the wording of some sentences and created linked cross-references out of the existing references to the dev_info structure. Signed-off-by: Bruce Richardson --- lib/eventdev/rte_eventdev.h | 47 +++++++++++++++++++------------------ 1 file changed, 24 insertions(+), 23 deletions(-) diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h index c57c93a22e..4139ccb982 100644 --- a/lib/eventdev/rte_eventdev.h +++ b/lib/eventdev/rte_eventdev.h @@ -599,9 +599,9 @@ rte_event_dev_attr_get(uint8_t dev_id, uint32_t attr_id, struct rte_event_dev_config { uint32_t dequeue_timeout_ns; /**< rte_event_dequeue_burst() timeout on this device.
- * This value should be in the range of *min_dequeue_timeout_ns* and - * *max_dequeue_timeout_ns* which previously provided in - * rte_event_dev_info_get() + * This value should be in the range of @ref rte_event_dev_info.min_dequeue_timeout_ns and + * @ref rte_event_dev_info.max_dequeue_timeout_ns returned by + * @ref rte_event_dev_info_get() * The value 0 is allowed, in which case, default dequeue timeout used. * @see RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT */ @@ -609,40 +609,41 @@ struct rte_event_dev_config { /**< In a *closed system* this field is the limit on maximum number of * events that can be inflight in the eventdev at a given time. The * limit is required to ensure that the finite space in a closed system - * is not overwhelmed. The value cannot exceed the *max_num_events* - * as provided by rte_event_dev_info_get(). + * is not overwhelmed. + * Once the limit has been reached, any enqueues of NEW events to the + * system will fail. + * The value cannot exceed @ref rte_event_dev_info.max_num_events + * returned by rte_event_dev_info_get(). * This value should be set to -1 for *open system*. */ uint8_t nb_event_queues; /**< Number of event queues to configure on this device. - * This value cannot exceed the *max_event_queues* which previously - * provided in rte_event_dev_info_get() + * This value cannot exceed @ref rte_event_dev_info.max_event_queues + * returned by rte_event_dev_info_get() */ uint8_t nb_event_ports; /**< Number of event ports to configure on this device. - * This value cannot exceed the *max_event_ports* which previously - * provided in rte_event_dev_info_get() + * This value cannot exceed @ref rte_event_dev_info.max_event_ports + * returned by rte_event_dev_info_get() */ uint32_t nb_event_queue_flows; - /**< Number of flows for any event queue on this device. 
- * This value cannot exceed the *max_event_queue_flows* which previously - * provided in rte_event_dev_info_get() + /**< Max number of flows needed for a single event queue on this device. + * This value cannot exceed @ref rte_event_dev_info.max_event_queue_flows + * returned by rte_event_dev_info_get() */ uint32_t nb_event_port_dequeue_depth; - /**< Maximum number of events can be dequeued at a time from an - * event port by this device. - * This value cannot exceed the *max_event_port_dequeue_depth* - * which previously provided in rte_event_dev_info_get(). + /**< Max number of events that can be dequeued at a time from an event port on this device. + * This value cannot exceed @ref rte_event_dev_info.max_event_port_dequeue_depth + * returned by rte_event_dev_info_get(). * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable. - * @see rte_event_port_setup() + * @see rte_event_port_setup() rte_event_dequeue_burst() */ uint32_t nb_event_port_enqueue_depth; - /**< Maximum number of events can be enqueued at a time from an - * event port by this device. - * This value cannot exceed the *max_event_port_enqueue_depth* - * which previously provided in rte_event_dev_info_get(). + /**< Maximum number of events can be enqueued at a time to an event port on this device. + * This value cannot exceed @ref rte_event_dev_info.max_event_port_enqueue_depth + * returned by rte_event_dev_info_get(). * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable. - * @see rte_event_port_setup() + * @see rte_event_port_setup() rte_event_enqueue_burst() */ uint32_t event_dev_cfg; /**< Event device config flags(RTE_EVENT_DEV_CFG_)*/ @@ -652,7 +653,7 @@ struct rte_event_dev_config { * queues; this value cannot exceed *nb_event_ports* or * *nb_event_queues*. If the device has ports and queues that are * optimized for single-link usage, this field is a hint for how many - * to allocate; otherwise, regular event ports and queues can be used. 
+ * to allocate; otherwise, regular event ports and queues will be used. + */ };
From patchwork Fri Jan 19 17:43:42 2024 X-Patchwork-Submitter: Bruce Richardson X-Patchwork-Id: 135999 X-Patchwork-Delegate: jerinj@marvell.com From: Bruce Richardson To: dev@dpdk.org Cc: jerinj@marvell.com, mattias.ronnblom@ericsson.com, abdullah.sevincer@intel.com, sachin.saxena@oss.nxp.com, hemant.agrawal@nxp.com, pbhagavatula@marvell.com, pravin.pathak@intel.com, Bruce Richardson , stable@dpdk.org Subject: [PATCH v2 07/11] eventdev: fix documentation for counting single-link ports Date: Fri, 19 Jan 2024 17:43:42 +0000 Message-Id: <20240119174346.108905-8-bruce.richardson@intel.com> In-Reply-To: <20240119174346.108905-1-bruce.richardson@intel.com> The documentation of how single-link port-queue pairs were counted in the rte_event_dev_config structure did not match the actual implementation and, if following the documentation, certain valid port/queue configurations would have been impossible to configure. Fix this by changing the documentation to match the implementation - however confusing that implementation ends up being.
Bugzilla ID: 1368 Fixes: 75d113136f38 ("eventdev: express DLB/DLB2 PMD constraints") Cc: stable@dpdk.org Signed-off-by: Bruce Richardson --- lib/eventdev/rte_eventdev.h | 28 ++++++++++++++++++++++------ 1 file changed, 22 insertions(+), 6 deletions(-) diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h index 4139ccb982..3b8f5b8101 100644 --- a/lib/eventdev/rte_eventdev.h +++ b/lib/eventdev/rte_eventdev.h @@ -490,7 +490,10 @@ struct rte_event_dev_info { uint32_t dequeue_timeout_ns; /**< Configured global dequeue timeout(ns) for this device */ uint8_t max_event_queues; - /**< Maximum event queues supported by this device */ + /**< Maximum event queues supported by this device. + * This excludes any queue-port pairs covered by the + * *max_single_link_event_port_queue_pairs* value in this structure. + */ uint32_t max_event_queue_flows; /**< Maximum number of flows within an event queue supported by this device*/ uint8_t max_event_queue_priority_levels; @@ -506,7 +509,10 @@ struct rte_event_dev_info { * @ref RTE_EVENT_DEV_PRIORITY_HIGHEST and @ref RTE_EVENT_DEV_PRIORITY_LOWEST. */ uint8_t max_event_ports; - /**< Maximum number of event ports supported by this device */ + /**< Maximum number of event ports supported by this device + * This excludes any queue-port pairs covered by the + * *max_single_link_event_port_queue_pairs* value in this structure. + */ uint8_t max_event_port_dequeue_depth; /**< Maximum number of events that can be dequeued at a time from an event port * on this device. @@ -618,13 +624,23 @@ struct rte_event_dev_config { */ uint8_t nb_event_queues; /**< Number of event queues to configure on this device. - * This value cannot exceed @ref rte_event_dev_info.max_event_queues - * returned by rte_event_dev_info_get() + * This value *includes* any single-link queue-port pairs to be used. 
+ * This value cannot exceed @ref rte_event_dev_info.max_event_queues + + * @ref rte_event_dev_info.max_single_link_event_port_queue_pairs + * returned by rte_event_dev_info_get(). + * The number of non-single-link queues i.e. this value less + * *nb_single_link_event_port_queues* in this struct, cannot exceed + * @ref rte_event_dev_info.max_event_queues */ uint8_t nb_event_ports; /**< Number of event ports to configure on this device. - * This value cannot exceed @ref rte_event_dev_info.max_event_ports - * returned by rte_event_dev_info_get() + * This value *includes* any single-link queue-port pairs to be used. + * This value cannot exceed @ref rte_event_dev_info.max_event_ports + + * @ref rte_event_dev_info.max_single_link_event_port_queue_pairs + * returned by rte_event_dev_info_get(). + * The number of non-single-link ports i.e. this value less + * *nb_single_link_event_port_queues* in this struct, cannot exceed + * @ref rte_event_dev_info.max_event_ports */ uint32_t nb_event_queue_flows; /**< Max number of flows needed for a single event queue on this device. 
From patchwork Fri Jan 19 17:43:43 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Bruce Richardson X-Patchwork-Id: 136000 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id E994943901; Fri, 19 Jan 2024 18:45:03 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id D7C7E42DE2; Fri, 19 Jan 2024 18:44:21 +0100 (CET) Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.12]) by mails.dpdk.org (Postfix) with ESMTP id 696CE42DE3 for ; Fri, 19 Jan 2024 18:44:19 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1705686260; x=1737222260; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=ITNmE3s1ZIJ6+Y/lDuQ/2LE50DWCBOqahMlQp9TCN0Q=; b=OTGweEL2LJcAR/y/sWuwxwEU2xjWsJzCHLFk5iJSt5EsB2mxIid87Qjr ms9LQ989IPVaIpFVkFuJ4iiePgPurYK/KWLHuE1qkzZ4bcC/XclABjNT8 oxqnJLraT/WG7gax41OZ+vc9JI4tL6aPjnBXnzlLhs9wT2WRn9DeF+9IL sVnpQqqwcE4jBaW4CwJjMO2ENi/wAg+Y7KBwWCUd9UfoTEOanRFxhbuS8 31prSWucengqS4SwGHxnqu1dSdWRjSFnT3dwNgQZUc4YVDhR6Xd1ZK90u M1e2hoJbvu8tMwlm58FbLezUYSAAr5sawdYFT5OEGQmnf1AbhqgvAu9zt Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10957"; a="683741" X-IronPort-AV: E=Sophos;i="6.05,204,1701158400"; d="scan'208";a="683741" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmvoesa106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 19 Jan 2024 09:44:19 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10957"; a="761177798" X-IronPort-AV: E=Sophos;i="6.05,204,1701158400"; d="scan'208";a="761177798" Received: from silpixa00400957.ir.intel.com (HELO silpixa00401385.ir.intel.com) ([10.237.214.26]) by orsmga006.jf.intel.com with ESMTP; 19 
Jan 2024 09:44:16 -0800 From: Bruce Richardson To: dev@dpdk.org Cc: jerinj@marvell.com, mattias.ronnblom@ericsson.com, abdullah.sevincer@intel.com, sachin.saxena@oss.nxp.com, hemant.agrawal@nxp.com, pbhagavatula@marvell.com, pravin.pathak@intel.com, Bruce Richardson Subject: [PATCH v2 08/11] eventdev: improve doxygen comments on config fns Date: Fri, 19 Jan 2024 17:43:43 +0000 Message-Id: <20240119174346.108905-9-bruce.richardson@intel.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20240119174346.108905-1-bruce.richardson@intel.com> References: <20240118134557.73172-1-bruce.richardson@intel.com> <20240119174346.108905-1-bruce.richardson@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Improve the documentation text for the configuration functions and structures for configuring an eventdev, as well as ports and queues. Clarify text where possible, and ensure references come through as links in the html output. Signed-off-by: Bruce Richardson --- lib/eventdev/rte_eventdev.h | 196 ++++++++++++++++++++++++------------ 1 file changed, 130 insertions(+), 66 deletions(-) diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h index 3b8f5b8101..1fda8a5a13 100644 --- a/lib/eventdev/rte_eventdev.h +++ b/lib/eventdev/rte_eventdev.h @@ -676,12 +676,14 @@ struct rte_event_dev_config { /** * Configure an event device. * - * This function must be invoked first before any other function in the - * API. This function can also be re-invoked when a device is in the - * stopped state. + * This function must be invoked before any other configuration function in the + * API, when preparing an event device for application use. + * This function can also be re-invoked when a device is in the stopped state. 
* - * The caller may use rte_event_dev_info_get() to get the capability of each - * resources available for this event device. + * The caller should use rte_event_dev_info_get() to get the capabilities and + * resource limits for this event device before calling this API. + * Many values in the dev_conf input parameter are subject to limits given + * in the device information returned from rte_event_dev_info_get(). * * @param dev_id * The identifier of the device to configure. @@ -691,6 +693,9 @@ struct rte_event_dev_config { * @return * - 0: Success, device configured. * - <0: Error code returned by the driver configuration function. + * - -ENOTSUP - device does not support configuration + * - -EINVAL - invalid input parameter + * - -EBUSY - device has already been started */ int rte_event_dev_configure(uint8_t dev_id, @@ -700,14 +705,33 @@ rte_event_dev_configure(uint8_t dev_id, /* Event queue configuration bitmap flags */ #define RTE_EVENT_QUEUE_CFG_ALL_TYPES (1ULL << 0) -/**< Allow ATOMIC,ORDERED,PARALLEL schedule type enqueue +/**< Allow events with schedule types ATOMIC, ORDERED, and PARALLEL to be enqueued to this queue. + * The scheduling type to be used is that specified in each individual event. + * This flag can only be set when configuring queues on devices reporting the + * @ref RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES capability. * + * Without this flag, only events with the specific scheduling type configured at queue setup + * can be sent to the queue. + * + * @see RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES * @see RTE_SCHED_TYPE_ORDERED, RTE_SCHED_TYPE_ATOMIC, RTE_SCHED_TYPE_PARALLEL * @see rte_event_enqueue_burst() */ #define RTE_EVENT_QUEUE_CFG_SINGLE_LINK (1ULL << 1) /**< This event queue links only to a single event port. - * + * No load-balancing of events is performed, as all events + * sent to this queue end up at the same event port. 
+ * The number of queues on which this flag is to be set must be + * configured at device configuration time, by setting + * @ref rte_event_dev_config.nb_single_link_event_port_queues + * parameter appropriately. + * + * This flag serves as a hint only; any devices without specific + * support for single-link queues can fall back automatically to + * using regular queues with a single destination port. + * + * @see rte_event_dev_info.max_single_link_event_port_queue_pairs + * @see rte_event_dev_config.nb_single_link_event_port_queues * @see rte_event_port_setup(), rte_event_port_link() */ @@ -715,56 +739,75 @@ rte_event_dev_configure(uint8_t dev_id, struct rte_event_queue_conf { uint32_t nb_atomic_flows; /**< The maximum number of active flows this queue can track at any - * given time. If the queue is configured for atomic scheduling (by - * applying the RTE_EVENT_QUEUE_CFG_ALL_TYPES flag to event_queue_cfg - * or RTE_SCHED_TYPE_ATOMIC flag to schedule_type), then the - * value must be in the range of [1, nb_event_queue_flows], which was - * previously provided in rte_event_dev_configure(). + * given time. + * + * If the queue is configured for atomic scheduling (by + * applying the @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES flag to + * @ref rte_event_queue_conf.event_queue_cfg + * or @ref RTE_SCHED_TYPE_ATOMIC flag to @ref rte_event_queue_conf.schedule_type), then the + * value must be in the range of [1, @ref rte_event_dev_config.nb_event_queue_flows], + * which was previously provided in rte_event_dev_configure(). + * + * If the queue is not configured for atomic scheduling, this value is ignored. */ uint32_t nb_atomic_order_sequences; /**< The maximum number of outstanding events waiting to be * reordered by this queue.
In other words, the number of entries in * this queue’s reorder buffer. When the number of events in the * reorder buffer reaches *nb_atomic_order_sequences*, then the - * scheduler cannot schedule the events from this queue and no - * events will be returned from dequeue until one or more entries are + * scheduler cannot schedule the events from this queue and no + * events will be returned from dequeue until one or more entries are * freed up/released. + * * If the queue is configured for ordered scheduling (by applying the - * RTE_EVENT_QUEUE_CFG_ALL_TYPES flag to event_queue_cfg or - * RTE_SCHED_TYPE_ORDERED flag to schedule_type), then the value must - * be in the range of [1, nb_event_queue_flows], which was + * @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES flag to @ref rte_event_queue_conf.event_queue_cfg or + * @ref RTE_SCHED_TYPE_ORDERED flag to @ref rte_event_queue_conf.schedule_type), + * then the value must be in the range of + * [1, @ref rte_event_dev_config.nb_event_queue_flows], which was * previously supplied to rte_event_dev_configure(). + * + * If the queue is not configured for ordered scheduling, then this value is ignored. */ uint32_t event_queue_cfg; /**< Queue cfg flags(EVENT_QUEUE_CFG_) */ uint8_t schedule_type; /**< Queue schedule type(RTE_SCHED_TYPE_*). - * Valid when RTE_EVENT_QUEUE_CFG_ALL_TYPES bit is not set in - * event_queue_cfg. + * Valid when @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES flag is not set in + * @ref rte_event_queue_conf.event_queue_cfg. + * + * If the @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES flag is set, then this field is ignored. + * + * @see RTE_SCHED_TYPE_ORDERED, RTE_SCHED_TYPE_ATOMIC, RTE_SCHED_TYPE_PARALLEL */ uint8_t priority; /**< Priority for this event queue relative to other event queues. * The requested priority should be in the range of - * [RTE_EVENT_DEV_PRIORITY_HIGHEST, RTE_EVENT_DEV_PRIORITY_LOWEST]. + * [@ref RTE_EVENT_DEV_PRIORITY_HIGHEST, @ref RTE_EVENT_DEV_PRIORITY_LOWEST].
* The implementation shall normalize the requested priority to * event device supported priority value. - * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability + * + * Valid when the device has @ref RTE_EVENT_DEV_CAP_QUEUE_QOS capability, + * ignored otherwise */ uint8_t weight; /**< Weight of the event queue relative to other event queues. * The requested weight should be in the range of - * [RTE_EVENT_DEV_WEIGHT_HIGHEST, RTE_EVENT_DEV_WEIGHT_LOWEST]. + * [@ref RTE_EVENT_QUEUE_WEIGHT_HIGHEST, @ref RTE_EVENT_QUEUE_WEIGHT_LOWEST]. * The implementation shall normalize the requested weight to event * device supported weight value. - * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability. + * + * Valid when the device has @ref RTE_EVENT_DEV_CAP_QUEUE_QOS capability, + * ignored otherwise. */ uint8_t affinity; /**< Affinity of the event queue relative to other event queues. * The requested affinity should be in the range of - * [RTE_EVENT_DEV_AFFINITY_HIGHEST, RTE_EVENT_DEV_AFFINITY_LOWEST]. + * [@ref RTE_EVENT_QUEUE_AFFINITY_HIGHEST, @ref RTE_EVENT_QUEUE_AFFINITY_LOWEST]. * The implementation shall normalize the requested affinity to event * device supported affinity value. - * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability. + * + * Valid when the device has @ref RTE_EVENT_DEV_CAP_QUEUE_QOS capability, + * ignored otherwise. */ }; @@ -779,7 +822,7 @@ struct rte_event_queue_conf { * The identifier of the device. * @param queue_id * The index of the event queue to get the configuration information. - * The value must be in the range [0, nb_event_queues - 1] + * The value must be in the range [0, @ref rte_event_dev_config.nb_event_queues - 1] * previously supplied to rte_event_dev_configure(). * @param[out] queue_conf * The pointer to the default event queue configuration data. @@ -800,7 +843,8 @@ rte_event_queue_default_conf_get(uint8_t dev_id, uint8_t queue_id, * The identifier of the device. 
* @param queue_id * The index of the event queue to setup. The value must be in the range - * [0, nb_event_queues - 1] previously supplied to rte_event_dev_configure(). + * [0, @ref rte_event_dev_config.nb_event_queues - 1] previously supplied to + * rte_event_dev_configure(). * @param queue_conf * The pointer to the configuration data to be used for the event queue. * NULL value is allowed, in which case the default configuration is used. * @@ -816,43 +860,44 @@ rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id, const struct rte_event_queue_conf *queue_conf); /** - * The priority of the queue. + * Queue attribute id for the priority of the queue. */ #define RTE_EVENT_QUEUE_ATTR_PRIORITY 0 /** - * The number of atomic flows configured for the queue. + * Queue attribute id for the number of atomic flows configured for the queue. */ #define RTE_EVENT_QUEUE_ATTR_NB_ATOMIC_FLOWS 1 /** - * The number of atomic order sequences configured for the queue. + * Queue attribute id for the number of atomic order sequences configured for the queue. */ #define RTE_EVENT_QUEUE_ATTR_NB_ATOMIC_ORDER_SEQUENCES 2 /** - * The cfg flags for the queue. + * Queue attribute id for the cfg flags for the queue. */ #define RTE_EVENT_QUEUE_ATTR_EVENT_QUEUE_CFG 3 /** - * The schedule type of the queue. + * Queue attribute id for the schedule type of the queue. */ #define RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE 4 /** - * The weight of the queue. + * Queue attribute id for the weight of the queue. */ #define RTE_EVENT_QUEUE_ATTR_WEIGHT 5 /** - * Affinity of the queue. + * Queue attribute id for the affinity of the queue. */ #define RTE_EVENT_QUEUE_ATTR_AFFINITY 6 /** - * Get an attribute from a queue. + * Get an attribute or property of an event queue. * * @param dev_id - * Eventdev id + * The identifier of the device. * @param queue_id - * Eventdev queue id + * The index of the event queue to query.
The value must be in the range + * [0, nb_event_queues - 1] previously supplied to rte_event_dev_configure(). * @param attr_id - * The attribute ID to retrieve + * The attribute ID to retrieve (RTE_EVENT_QUEUE_ATTR_*) * @param[out] attr_value * A pointer that will be filled in with the attribute value if successful * @@ -861,8 +906,8 @@ rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id, * - -EINVAL: invalid device, queue or attr_id provided, or attr_value was * NULL * - -EOVERFLOW: returned when attr_id is set to - * RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE and event_queue_cfg is set to - * RTE_EVENT_QUEUE_CFG_ALL_TYPES + * @ref RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE and @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES is + * set in the queue configuration flags. */ int rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id, @@ -872,11 +917,13 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id, * Set an event queue attribute. * * @param dev_id - * Eventdev id + * The identifier of the device. * @param queue_id - * Eventdev queue id + * The index of the event queue to configure. The value must be in the range + * [0, @ref rte_event_dev_config.nb_event_queues - 1] previously + * supplied to rte_event_dev_configure(). * @param attr_id - * The attribute ID to set + * The attribute ID to set (RTE_EVENT_QUEUE_ATTR_*) * @param attr_value * The attribute value to set * @@ -902,7 +949,10 @@ rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id, */ #define RTE_EVENT_PORT_CFG_SINGLE_LINK (1ULL << 1) /**< This event port links only to a single event queue. + * The queue it links with should be similarly configured with the + * @ref RTE_EVENT_QUEUE_CFG_SINGLE_LINK flag. 
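As an aside, the queue setup flow documented above can be sketched in C. This is a hypothetical helper (not part of the patch): it starts from the driver's default queue configuration and overrides only the scheduling type, flow count and priority; the specific values chosen are illustrative assumptions.

```c
#include <rte_eventdev.h>

/* Hypothetical helper: set up one atomic queue, starting from the
 * driver defaults and overriding only the fields we care about. */
int
setup_atomic_queue(uint8_t dev_id, uint8_t queue_id)
{
	struct rte_event_queue_conf qc;
	int ret;

	ret = rte_event_queue_default_conf_get(dev_id, queue_id, &qc);
	if (ret < 0)
		return ret;

	qc.schedule_type = RTE_SCHED_TYPE_ATOMIC;
	qc.nb_atomic_flows = 1024; /* must be within nb_event_queue_flows */
	qc.priority = RTE_EVENT_DEV_PRIORITY_NORMAL;

	return rte_event_queue_setup(dev_id, queue_id, &qc);
}
```

This is a configuration fragment only; a NULL queue_conf to rte_event_queue_setup() would also work if the defaults are acceptable.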
* + * @see RTE_EVENT_QUEUE_CFG_SINGLE_LINK * @see rte_event_port_setup(), rte_event_port_link() */ #define RTE_EVENT_PORT_CFG_HINT_PRODUCER (1ULL << 2) @@ -918,7 +968,7 @@ rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id, #define RTE_EVENT_PORT_CFG_HINT_CONSUMER (1ULL << 3) /**< Hint that this event port will primarily dequeue events from the system. * A PMD can optimize its internal workings by assuming that this port is - * primarily going to consume events, and not enqueue FORWARD or RELEASE + * primarily going to consume events, and not enqueue NEW or FORWARD * events. * * Note that this flag is only a hint, so PMDs must operate under the @@ -944,48 +994,55 @@ struct rte_event_port_conf { /**< A backpressure threshold for new event enqueues on this port. * Use for *closed system* event dev where event capacity is limited, * and cannot exceed the capacity of the event dev. + * * Configuring ports with different thresholds can make higher priority * traffic less likely to be backpressured. * For example, a port used to inject NIC Rx packets into the event dev * can have a lower threshold so as not to overwhelm the device, * while ports used for worker pools can have a higher threshold. - * This value cannot exceed the *nb_events_limit* + * This value cannot exceed the @ref rte_event_dev_config.nb_events_limit value * which was previously supplied to rte_event_dev_configure(). - * This should be set to '-1' for *open system*. + * + * This should be set to '-1' for *open system*, i.e. when + * @ref rte_event_dev_info.max_num_events == -1. */ uint16_t dequeue_depth; - /**< Configure number of bulk dequeues for this event port. - * This value cannot exceed the *nb_event_port_dequeue_depth* - * which previously supplied to rte_event_dev_configure(). - * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable. + /**< Configure the maximum size of burst dequeues for this event port.
+ * This value cannot exceed the @ref rte_event_dev_config.nb_event_port_dequeue_depth value + * which was previously supplied to rte_event_dev_configure(). + * + * Ignored when device does not support the @ref RTE_EVENT_DEV_CAP_BURST_MODE capability. */ uint16_t enqueue_depth; - /**< Configure number of bulk enqueues for this event port. - * This value cannot exceed the *nb_event_port_enqueue_depth* - * which previously supplied to rte_event_dev_configure(). - * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable. + /**< Configure the maximum size of burst enqueues to this event port. + * This value cannot exceed the @ref rte_event_dev_config.nb_event_port_enqueue_depth value + * which was previously supplied to rte_event_dev_configure(). + * + * Ignored when device does not support the @ref RTE_EVENT_DEV_CAP_BURST_MODE capability. */ - uint32_t event_port_cfg; /**< Port cfg flags(EVENT_PORT_CFG_) */ + uint32_t event_port_cfg; /**< Port configuration flags (EVENT_PORT_CFG_) */ }; /** * Retrieve the default configuration information of an event port designated * by its *port_id* from the event driver for an event device. * - * This function intended to be used in conjunction with rte_event_port_setup() - * where caller needs to set up the port by overriding few default values. + * This function is intended to be used in conjunction with rte_event_port_setup() + * where the caller can set up the port by overriding just a few default values. * * @param dev_id * The identifier of the device. * @param port_id * The index of the event port to get the configuration information. - * The value must be in the range [0, nb_event_ports - 1] + * The value must be in the range [0, @ref rte_event_dev_config.nb_event_ports - 1] * previously supplied to rte_event_dev_configure(). * @param[out] port_conf - * The pointer to the default event port configuration data + * The pointer to a structure to store the default event port configuration data.
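The default-conf-then-override pattern described above can be sketched as a short C fragment. This is a hypothetical helper (not part of the patch); the burst depths chosen are illustrative assumptions and must respect the device-wide limits given to rte_event_dev_configure().

```c
#include <rte_eventdev.h>

/* Hypothetical helper: set up one event port, starting from the
 * driver's default configuration and overriding only the burst sizes. */
int
setup_port(uint8_t dev_id, uint8_t port_id)
{
	struct rte_event_port_conf pc;
	int ret;

	ret = rte_event_port_default_conf_get(dev_id, port_id, &pc);
	if (ret < 0)
		return ret;

	pc.dequeue_depth = 16; /* must not exceed nb_event_port_dequeue_depth */
	pc.enqueue_depth = 16; /* must not exceed nb_event_port_enqueue_depth */

	return rte_event_port_setup(dev_id, port_id, &pc);
}
```

This is a configuration fragment only; passing NULL as port_conf uses the defaults directly where the driver supports that.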
* @return * - 0: Success, driver updates the default event port configuration data. * - <0: Error code returned by the driver info get function. + * - -EINVAL - invalid input parameter + * - -ENOTSUP - function is not supported for this device * * @see rte_event_port_setup() */ @@ -1000,18 +1057,24 @@ rte_event_port_default_conf_get(uint8_t dev_id, uint8_t port_id, * The identifier of the device. * @param port_id * The index of the event port to setup. The value must be in the range - * [0, nb_event_ports - 1] previously supplied to rte_event_dev_configure(). + * [0, @ref rte_event_dev_config.nb_event_ports - 1] previously supplied to + * rte_event_dev_configure(). * @param port_conf - * The pointer to the configuration data to be used for the queue. - * NULL value is allowed, in which case default configuration used. + * The pointer to the configuration data to be used for the port. + * NULL value is allowed, in which case the default configuration is used. * * @see rte_event_port_default_conf_get() * * @return * - 0: Success, event port correctly set up. * - <0: Port configuration failed - * - (-EDQUOT) Quota exceeded(Application tried to link the queue configured - * with RTE_EVENT_QUEUE_CFG_SINGLE_LINK to more than one event ports) + * - -EINVAL - Invalid input parameter + * - -EBUSY - Port already started + * - -ENOTSUP - Function not supported on this device, or a NULL pointer passed + * as the port_conf parameter, and no default configuration function available + * for this device. + * - -EDQUOT - Application tried to link a queue configured + * with @ref RTE_EVENT_QUEUE_CFG_SINGLE_LINK to more than one event port. */ int rte_event_port_setup(uint8_t dev_id, uint8_t port_id, @@ -1041,8 +1104,9 @@ typedef void (*rte_eventdev_port_flush_t)(uint8_t dev_id, * @param dev_id * The identifier of the device. * @param port_id - * The index of the event port to setup. 
The value must be in the range - * [0, nb_event_ports - 1] previously supplied to rte_event_dev_configure(). + * The index of the event port to quiesce. The value must be in the range + * [0, @ref rte_event_dev_config.nb_event_ports - 1] + * previously supplied to rte_event_dev_configure(). * @param release_cb * Callback function invoked once per flushed event. * @param args
From patchwork Fri Jan 19 17:43:44 2024 X-Patchwork-Submitter: Bruce Richardson X-Patchwork-Id: 136001 X-Patchwork-Delegate: jerinj@marvell.com From: Bruce Richardson To: dev@dpdk.org Cc: jerinj@marvell.com, mattias.ronnblom@ericsson.com, abdullah.sevincer@intel.com, sachin.saxena@oss.nxp.com, hemant.agrawal@nxp.com, pbhagavatula@marvell.com, pravin.pathak@intel.com, Bruce Richardson Subject: [PATCH v2 09/11] eventdev: improve doxygen comments for control APIs Date: Fri, 19 Jan 2024 17:43:44 +0000 Message-Id: <20240119174346.108905-10-bruce.richardson@intel.com> In-Reply-To: <20240119174346.108905-1-bruce.richardson@intel.com> References: <20240118134557.73172-1-bruce.richardson@intel.com> <20240119174346.108905-1-bruce.richardson@intel.com> List-Id: DPDK patches and discussions
The doxygen comments for the port attributes, start and stop (and related functions) are improved.
Signed-off-by: Bruce Richardson --- lib/eventdev/rte_eventdev.h | 34 +++++++++++++++++++++++----------- 1 file changed, 23 insertions(+), 11 deletions(-) diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h index 1fda8a5a13..2c6576e921 100644 --- a/lib/eventdev/rte_eventdev.h +++ b/lib/eventdev/rte_eventdev.h @@ -1117,19 +1117,21 @@ rte_event_port_quiesce(uint8_t dev_id, uint8_t port_id, rte_eventdev_port_flush_t release_cb, void *args); /** - * The queue depth of the port on the enqueue side + * Port attribute id for the maximum size of a burst enqueue operation supported on a port */ #define RTE_EVENT_PORT_ATTR_ENQ_DEPTH 0 /** - * The queue depth of the port on the dequeue side + * Port attribute id for the maximum size of a dequeue burst which can be returned from a port */ #define RTE_EVENT_PORT_ATTR_DEQ_DEPTH 1 /** - * The new event threshold of the port + * Port attribute id for the new event threshold of the port. + * Once the number of events in the system exceeds this threshold, the enqueue of NEW-type + * events will fail. */ #define RTE_EVENT_PORT_ATTR_NEW_EVENT_THRESHOLD 2 /** - * The implicit release disable attribute of the port + * Port attribute id for the implicit release disable attribute of the port */ #define RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE 3 @@ -1137,11 +1139,13 @@ rte_event_port_quiesce(uint8_t dev_id, uint8_t port_id, * Get an attribute from a port. * * @param dev_id - * Eventdev id + * The identifier of the device. * @param port_id - * Eventdev port id + * The index of the event port to query. The value must be in the range + * [0, @ref rte_event_dev_config.nb_event_ports - 1] + * previously supplied to rte_event_dev_configure(). 
* @param attr_id - * The attribute ID to retrieve + * The attribute ID to retrieve (RTE_EVENT_PORT_ATTR_*) * @param[out] attr_value * A pointer that will be filled in with the attribute value if successful * @@ -1156,8 +1160,8 @@ rte_event_port_attr_get(uint8_t dev_id, uint8_t port_id, uint32_t attr_id, /** * Start an event device. * - * The device start step is the last one and consists of setting the event - * queues to start accepting the events and schedules to event ports. + * The device start step is the last one in device setup, and enables the event + * ports and queues to start accepting events and scheduling them to event ports. * * On success, all basic functions exported by the API (event enqueue, * event dequeue and so on) can be invoked. @@ -1166,6 +1170,8 @@ rte_event_port_attr_get(uint8_t dev_id, uint8_t port_id, uint32_t attr_id, * Event device identifier * @return * - 0: Success, device started. + * - -EINVAL: Invalid device id provided + * - -ENOTSUP: Device does not support this operation. * - -ESTALE : Not all ports of the device are configured * - -ENOLINK: Not all queues are linked, which could lead to deadlock. */ @@ -1208,12 +1214,16 @@ typedef void (*rte_eventdev_stop_flush_t)(uint8_t dev_id, * callback function must be registered in every process that can call * rte_event_dev_stop(). * + * Only one callback function may be registered. Each new call replaces + * the existing registered callback function with the new function passed in. + * * To unregister a callback, call this function with a NULL callback pointer. * * @param dev_id * The identifier of the device. * @param callback - * Callback function invoked once per flushed event. + * Callback function to be invoked once per flushed event. + * Pass NULL to unset any previously-registered callback function. * @param userdata * Argument supplied to callback. 
* @@ -1235,7 +1245,9 @@ int rte_event_dev_stop_flush_callback_register(uint8_t dev_id, * @return * - 0 on successfully closing device * - <0 on failure to close device - * - (-EAGAIN) if device is busy + * - -EINVAL - invalid device id + * - -ENOTSUP - operation not supported for this device + * - -EAGAIN - device is busy */ int rte_event_dev_close(uint8_t dev_id);
From patchwork Fri Jan 19 17:43:45 2024 X-Patchwork-Submitter: Bruce Richardson X-Patchwork-Id: 136002 X-Patchwork-Delegate: jerinj@marvell.com From: Bruce Richardson To: dev@dpdk.org Cc: jerinj@marvell.com, mattias.ronnblom@ericsson.com, abdullah.sevincer@intel.com, sachin.saxena@oss.nxp.com, hemant.agrawal@nxp.com, pbhagavatula@marvell.com, pravin.pathak@intel.com, Bruce Richardson Subject: [PATCH v2 10/11] eventdev: RFC clarify comments on scheduling types Date: Fri, 19 Jan 2024 17:43:45 +0000 Message-Id: <20240119174346.108905-11-bruce.richardson@intel.com> In-Reply-To: <20240119174346.108905-1-bruce.richardson@intel.com> References: <20240118134557.73172-1-bruce.richardson@intel.com> <20240119174346.108905-1-bruce.richardson@intel.com> List-Id: DPDK patches and discussions
The description of ordered and atomic scheduling given in the eventdev doxygen documentation was not always clear. Try to simplify this so that it is clearer for the end user of the application. Signed-off-by: Bruce Richardson --- NOTE TO REVIEWERS: I've updated this based on my understanding of what these scheduling types are meant to do. It matches my understanding of the support offered by our Intel DLB2 driver, as well as the SW eventdev, and I believe the DSW eventdev too. If it does not match the behaviour of other eventdevs, let's have a discussion to see if we can reach a good definition of the behaviour that is common.
--- lib/eventdev/rte_eventdev.h | 47 ++++++++++++++++++++----------------- 1 file changed, 25 insertions(+), 22 deletions(-) diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h index 2c6576e921..cb13602ffb 100644 --- a/lib/eventdev/rte_eventdev.h +++ b/lib/eventdev/rte_eventdev.h @@ -1313,26 +1313,24 @@ struct rte_event_vector { #define RTE_SCHED_TYPE_ORDERED 0 /**< Ordered scheduling * - * Events from an ordered flow of an event queue can be scheduled to multiple + * Events from an ordered event queue can be scheduled to multiple * ports for concurrent processing while maintaining the original event order. * This scheme enables the user to achieve high single flow throughput by - * avoiding SW synchronization for ordering between ports which bound to cores. - * - * The source flow ordering from an event queue is maintained when events are - * enqueued to their destination queue within the same ordered flow context. - * An event port holds the context until application call - * rte_event_dequeue_burst() from the same port, which implicitly releases - * the context. - * User may allow the scheduler to release the context earlier than that - * by invoking rte_event_enqueue_burst() with RTE_EVENT_OP_RELEASE operation. - * - * Events from the source queue appear in their original order when dequeued - * from a destination queue. - * Event ordering is based on the received event(s), but also other - * (newly allocated or stored) events are ordered when enqueued within the same - * ordered context. Events not enqueued (e.g. released or stored) within the - * context are considered missing from reordering and are skipped at this time - * (but can be ordered again within another context). + * avoiding SW synchronization for ordering between ports which are polled by + * different cores. + * + * As events are scheduled to ports/cores, the original event order from the + * source event queue is recorded internally in the scheduler. 
As events are + * returned (via FORWARD type enqueue) to the scheduler, the original event + * order is restored before the events are enqueued into their new destination + * queue. + * + * Any events not forwarded, i.e. dropped explicitly via RELEASE or implicitly + * released by the next dequeue from a port, are skipped by the reordering + * stage and do not affect the reordering of returned events. + * + * The ordering behaviour of NEW events with respect to FORWARD events is + * undefined and implementation dependent. * * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE */ @@ -1340,18 +1338,23 @@ struct rte_event_vector { #define RTE_SCHED_TYPE_ATOMIC 1 /**< Atomic scheduling * - * Events from an atomic flow of an event queue can be scheduled only to a + * Events from an atomic flow, identified by @ref rte_event.flow_id, + * of an event queue can be scheduled only to a * single port at a time. The port is guaranteed to have exclusive (atomic) * access to the associated flow context, which enables the user to avoid SW * synchronization. Atomic flows also help to maintain event ordering - * since only one port at a time can process events from a flow of an + * since only one port at a time can process events from each flow of an * event queue. * - * The atomic queue synchronization context is dedicated to the port until + * The atomic queue synchronization context for a flow is dedicated to the port until * the application calls rte_event_dequeue_burst() from the same port, * which implicitly releases the context. User may allow the scheduler to * release the context earlier than that by invoking rte_event_enqueue_burst() - * with RTE_EVENT_OP_RELEASE operation. + * with RTE_EVENT_OP_RELEASE operation for each event from that flow. The context + * is only released once the last event from the flow, outstanding on the port, + * is released.
So long as there is one event from an atomic flow scheduled to + * a port/core (including any events in the port's dequeue queue, not yet read + * by the application), that port will hold the synchronization context. * * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE */
From patchwork Fri Jan 19 17:43:46 2024 X-Patchwork-Submitter: Bruce Richardson X-Patchwork-Id: 136003 X-Patchwork-Delegate: jerinj@marvell.com From: Bruce Richardson To: dev@dpdk.org Cc: jerinj@marvell.com, mattias.ronnblom@ericsson.com, abdullah.sevincer@intel.com, sachin.saxena@oss.nxp.com, hemant.agrawal@nxp.com, pbhagavatula@marvell.com, pravin.pathak@intel.com, Bruce Richardson Subject: [PATCH v2 11/11] eventdev: RFC clarify docs on event object fields Date: Fri, 19 Jan 2024 17:43:46 +0000 Message-Id: <20240119174346.108905-12-bruce.richardson@intel.com> In-Reply-To: <20240119174346.108905-1-bruce.richardson@intel.com> References: <20240118134557.73172-1-bruce.richardson@intel.com> <20240119174346.108905-1-bruce.richardson@intel.com> List-Id: DPDK patches and discussions
Clarify the meaning of the NEW, FORWARD and RELEASE event types. For the fields in the "rte_event" struct, enhance the comments on each to clarify the field's use, whether it is preserved between enqueue and dequeue, and its role, if any, in scheduling. Signed-off-by: Bruce Richardson --- As with the previous patch, please review this patch to ensure that the expected semantics of the various event types and event fields have not changed in an unexpected way.
---
 lib/eventdev/rte_eventdev.h | 105 ++++++++++++++++++++++++++----------
 1 file changed, 77 insertions(+), 28 deletions(-)

--
2.40.1

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index cb13602ffb..4eff1c4958 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -1416,21 +1416,25 @@ struct rte_event_vector {
 /* Event enqueue operations */
 #define RTE_EVENT_OP_NEW                0
-/**< The event producers use this operation to inject a new event to the
+/**< The @ref rte_event.op field should be set to this type to inject a new event to the
  * event device.
  */
 #define RTE_EVENT_OP_FORWARD            1
-/**< The CPU use this operation to forward the event to different event queue or
- * change to new application specific flow or schedule type to enable
- * pipelining.
+/**< SW should set the @ref rte_event.op field to this type to return a
+ * previously dequeued event to the event device for further processing.
  *
- * This operation must only be enqueued to the same port that the
+ * This event *must* be enqueued to the same port that the
  * event to be forwarded was dequeued from.
+ *
+ * The event's fields, including (but not limited to) flow_id, scheduling type,
+ * destination queue, and event payload e.g. mbuf pointer, may all be updated as
+ * desired by software, but the @ref rte_event.impl_opaque field must
+ * be kept to the same value as was present when the event was dequeued.
  */
 #define RTE_EVENT_OP_RELEASE            2
 /**< Release the flow context associated with the schedule type.
  *
- * If current flow's scheduler type method is *RTE_SCHED_TYPE_ATOMIC*
+ * If current flow's scheduler type method is @ref RTE_SCHED_TYPE_ATOMIC
  * then this function hints the scheduler that the user has completed critical
  * section processing in the current atomic context.
  * The scheduler is now allowed to schedule events from the same flow from
@@ -1442,21 +1446,19 @@ struct rte_event_vector {
  * performance, but the user needs to design carefully the split into critical
  * vs non-critical sections.
  *
- * If current flow's scheduler type method is *RTE_SCHED_TYPE_ORDERED*
- * then this function hints the scheduler that the user has done all that need
- * to maintain event order in the current ordered context.
- * The scheduler is allowed to release the ordered context of this port and
- * avoid reordering any following enqueues.
- *
- * Early ordered context release may increase parallelism and thus system
- * performance.
+ * If current flow's scheduler type method is @ref RTE_SCHED_TYPE_ORDERED
+ * then this function informs the scheduler that the current event has
+ * completed processing and will not be returned to the scheduler, i.e.
+ * it has been dropped, and so the reordering context for that event
+ * should be considered filled.
  *
- * If current flow's scheduler type method is *RTE_SCHED_TYPE_PARALLEL*
+ * If current flow's scheduler type method is @ref RTE_SCHED_TYPE_PARALLEL
  * or no scheduling context is held then this function may be an NOOP,
  * depending on the implementation.
  *
  * This operation must only be enqueued to the same port that the
- * event to be released was dequeued from.
+ * event to be released was dequeued from. The @ref rte_event.impl_opaque
+ * field in the release event must match that in the original dequeued event.
  */

 /**
@@ -1473,53 +1475,100 @@ struct rte_event {
 	/**< Targeted flow identifier for the enqueue and
 	 * dequeue operation.
 	 * The value must be in the range of
-	 * [0, nb_event_queue_flows - 1] which
+	 * [0, @ref rte_event_dev_config.nb_event_queue_flows - 1] which
 	 * previously supplied to rte_event_dev_configure().
+	 *
+	 * For @ref RTE_SCHED_TYPE_ATOMIC, this field is used to identify a
+	 * flow context for atomicity, such that events from each individual flow
+	 * will only be scheduled to one port at a time.
+	 *
+	 * This field is preserved between enqueue and dequeue when
+	 * a device reports the @ref RTE_EVENT_DEV_CAP_CARRY_FLOW_ID
+	 * capability. Otherwise the value is implementation dependent
+	 * on dequeue.
 	 */
 	uint32_t sub_event_type:8;
 	/**< Sub-event types based on the event source.
+	 *
+	 * This field is preserved between enqueue and dequeue.
+	 * This field is for SW or event adapter use,
+	 * and is unused in scheduling decisions.
+	 *
 	 * @see RTE_EVENT_TYPE_CPU
 	 */
 	uint32_t event_type:4;
-	/**< Event type to classify the event source.
-	 * @see RTE_EVENT_TYPE_ETHDEV, (RTE_EVENT_TYPE_*)
+	/**< Event type to classify the event source. (RTE_EVENT_TYPE_*)
+	 *
+	 * This field is preserved between enqueue and dequeue.
+	 * This field is for SW or event adapter use,
+	 * and is unused in scheduling decisions.
 	 */
 	uint8_t op:2;
-	/**< The type of event enqueue operation - new/forward/
-	 * etc.This field is not preserved across an instance
+	/**< The type of event enqueue operation - new/forward/ etc.
+	 *
+	 * This field is *not* preserved across an instance
 	 * and is undefined on dequeue.
-	 * @see RTE_EVENT_OP_NEW, (RTE_EVENT_OP_*)
+	 *
+	 * @see RTE_EVENT_OP_NEW
+	 * @see RTE_EVENT_OP_FORWARD
+	 * @see RTE_EVENT_OP_RELEASE
 	 */
 	uint8_t rsvd:4;
-	/**< Reserved for future use */
+	/**< Reserved for future use.
+	 *
+	 * Should be set to zero on enqueue. Zero on dequeue.
+	 */
 	uint8_t sched_type:2;
 	/**< Scheduler synchronization type (RTE_SCHED_TYPE_*)
 	 * associated with flow id on a given event queue
 	 * for the enqueue and dequeue operation.
+	 *
+	 * This field is used to determine the scheduling type
+	 * for events sent to queues where @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES
+	 * is supported.
+	 * For queues where only a single scheduling type is available,
+	 * this field must be set to match the configured scheduling type.
+	 *
+	 * This field is preserved between enqueue and dequeue.
+	 *
+	 * @see RTE_SCHED_TYPE_ORDERED
+	 * @see RTE_SCHED_TYPE_ATOMIC
+	 * @see RTE_SCHED_TYPE_PARALLEL
 	 */
 	uint8_t queue_id;
 	/**< Targeted event queue identifier for the enqueue or
 	 * dequeue operation.
 	 * The value must be in the range of
-	 * [0, nb_event_queues - 1] which previously supplied to
-	 * rte_event_dev_configure().
+	 * [0, @ref rte_event_dev_config.nb_event_queues - 1] which was
+	 * previously supplied to rte_event_dev_configure().
+	 *
+	 * This field is preserved between enqueue and dequeue.
 	 */
 	uint8_t priority;
 	/**< Event priority relative to other events in the
 	 * event queue. The requested priority should in the
-	 * range of [RTE_EVENT_DEV_PRIORITY_HIGHEST,
-	 * RTE_EVENT_DEV_PRIORITY_LOWEST].
+	 * range of [@ref RTE_EVENT_DEV_PRIORITY_HIGHEST,
+	 * @ref RTE_EVENT_DEV_PRIORITY_LOWEST].
 	 * The implementation shall normalize the requested
 	 * priority to supported priority value.
+	 *
 	 * Valid when the device has
-	 * RTE_EVENT_DEV_CAP_EVENT_QOS capability.
+	 * @ref RTE_EVENT_DEV_CAP_EVENT_QOS capability.
+	 * Ignored otherwise.
+	 *
+	 * This field is preserved between enqueue and dequeue.
 	 */
 	uint8_t impl_opaque;
 	/**< Implementation specific opaque value.
+	 *
 	 * An implementation may use this field to hold
 	 * implementation specific value to share between
 	 * dequeue and enqueue operation.
+	 *
 	 * The application should not modify this field.
+	 * Its value is implementation dependent on dequeue,
+	 * and must be returned unmodified on enqueue when
+	 * op type is @ref RTE_EVENT_OP_FORWARD or @ref RTE_EVENT_OP_RELEASE
 	 */
 	};
 };