[v3,01/11] eventdev: improve doxygen introduction text

Message ID 20240202123953.77166-2-bruce.richardson@intel.com (mailing list archive)
State Changes Requested, archived
Delegated to: Jerin Jacob
Series: improve eventdev API specification/documentation

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/iol-testing warning apply patch failure

Commit Message

Bruce Richardson Feb. 2, 2024, 12:39 p.m. UTC
  Make some textual improvements to the introduction to eventdev and event
devices in the eventdev header file. This text appears in the doxygen
output for the header file, and introduces the key concepts, for
example: events, event devices, queues, ports and scheduling.

This patch makes the following improvements:
* small textual fixups, e.g. correcting use of singular/plural
* rewrites of some sentences to improve clarity
* using doxygen markdown to split the whole large block up into
  sections, thereby making it easier to read.

No large-scale changes are made, and blocks are not reordered

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>

---
V3: reworked following feedback from Mattias
---
 lib/eventdev/rte_eventdev.h | 132 ++++++++++++++++++++++--------------
 1 file changed, 81 insertions(+), 51 deletions(-)
  

Comments

Jerin Jacob Feb. 7, 2024, 10:14 a.m. UTC | #1
On Fri, Feb 2, 2024 at 7:29 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> Make some textual improvements to the introduction to eventdev and event
> devices in the eventdev header file. This text appears in the doxygen
> output for the header file, and introduces the key concepts, for
> example: events, event devices, queues, ports and scheduling.
>
> This patch makes the following improvements:
> * small textual fixups, e.g. correcting use of singular/plural
> * rewrites of some sentences to improve clarity
> * using doxygen markdown to split the whole large block up into
>   sections, thereby making it easier to read.
>
> No large-scale changes are made, and blocks are not reordered
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>

Thanks Bruce. While you are cleaning up, please add the following or a
similar change to fix the improper parsing of struct rte_event_vector,
i.e. its members currently show up as global variables in the HTML files.

l[dpdk.org] $ git diff
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index e31c927905..ce4a195a8f 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -1309,9 +1309,9 @@ struct rte_event_vector {
                 */
                struct {
                        uint16_t port;
-                       /* Ethernet device port id. */
+                       /**< Ethernet device port id. */
                        uint16_t queue;
-                       /* Ethernet device queue id. */
+                       /**< Ethernet device queue id. */
                };
        };
        /**< Union to hold common attributes of the vector array. */
@@ -1340,7 +1340,11 @@ struct rte_event_vector {
         * vector array can be an array of mbufs or pointers or opaque u64
         * values.
         */
+#ifndef __DOXYGEN__
 } __rte_aligned(16);
+#else
+};
+#endif

 /* Scheduler type definitions */
 #define RTE_SCHED_TYPE_ORDERED          0

>
> ---
> V3: reworked following feedback from Mattias
> ---
>  lib/eventdev/rte_eventdev.h | 132 ++++++++++++++++++++++--------------
>  1 file changed, 81 insertions(+), 51 deletions(-)
>
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index ec9b02455d..a741832e8e 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -12,25 +12,33 @@
>   * @file
>   *
>   * RTE Event Device API
> + * ====================
>   *
> - * In a polling model, lcores poll ethdev ports and associated rx queues
> - * directly to look for packet. In an event driven model, by contrast, lcores
> - * call the scheduler that selects packets for them based on programmer
> - * specified criteria. Eventdev library adds support for event driven
> - * programming model, which offer applications automatic multicore scaling,
> - * dynamic load balancing, pipelining, packet ingress order maintenance and
> - * synchronization services to simplify application packet processing.
> + * In a traditional run-to-completion application model, lcores pick up packets

Can we keep it as poll mode instead of run-to-completion, as event mode also
supports run-to-completion by having dequeue() and then Tx.

> + * from Ethdev ports and associated RX queues, run the packet processing to completion,
> + * and enqueue the completed packets to a TX queue. NIC-level receive-side scaling (RSS)
> + * may be used to balance the load across multiple CPU cores.
> + *
> + * In contrast, in an event-driver model, as supported by this "eventdev" library,
> + * incoming packets are fed into an event device, which schedules those packets across

packets -> events. We may need to bring in the Rx adapter if the event is a packet.

> + * the available lcores, in accordance with its configuration.
> + * This event-driven programming model offers applications automatic multicore scaling,
> + * dynamic load balancing, pipelining, packet order maintenance, synchronization,
> + * and prioritization/quality of service.
>   *
>   * The Event Device API is composed of two parts:
>   *
>   * - The application-oriented Event API that includes functions to setup
>   *   an event device (configure it, setup its queues, ports and start it), to
> - *   establish the link between queues to port and to receive events, and so on.
> + *   establish the links between queues and ports to receive events, and so on.
>   *
>   * - The driver-oriented Event API that exports a function allowing
> - *   an event poll Mode Driver (PMD) to simultaneously register itself as
> + *   an event poll Mode Driver (PMD) to register itself as
>   *   an event device driver.
>   *
> + * Application-oriented Event API
> + * ------------------------------
> + *
>   * Event device components:
>   *
>   *                     +-----------------+
> @@ -75,27 +83,39 @@
>   *            |                                                           |
>   *            +-----------------------------------------------------------+
>   *
> - * Event device: A hardware or software-based event scheduler.
> + * **Event device**: A hardware or software-based event scheduler.
>   *
> - * Event: A unit of scheduling that encapsulates a packet or other datatype
> - * like SW generated event from the CPU, Crypto work completion notification,
> - * Timer expiry event notification etc as well as metadata.
> - * The metadata includes flow ID, scheduling type, event priority, event_type,
> - * sub_event_type etc.
> + * **Event**: Represents an item of work and is the smallest unit of scheduling.
> + * An event carries metadata, such as queue ID, scheduling type, and event priority,
> + * and data such as one or more packets or other kinds of buffers.
> + * Some examples of events are:
> + * - a software-generated item of work originating from a lcore,

lcore.

> + *   perhaps carrying a packet to be processed,

processed.

> + * - a crypto work completion notification

notification.

> + * - a timer expiry notification.
>   *
> - * Event queue: A queue containing events that are scheduled by the event dev.
> + * **Event queue**: A queue containing events that are scheduled by the event device.

Shouldn't we add "to be" or so?
i.e.
A queue containing events that are to be scheduled by the event device.

>   * An event queue contains events of different flows associated with scheduling
>   * types, such as atomic, ordered, or parallel.
> + * Each event given to an event device must have a valid event queue id field in the metadata,
> + * to specify on which event queue in the device the event must be placed,
> + * for later scheduling.
>   *
> - * Event port: An application's interface into the event dev for enqueue and
> + * **Event port**: An application's interface into the event dev for enqueue and
>   * dequeue operations. Each event port can be linked with one or more
>   * event queues for dequeue operations.
> - *
> - * By default, all the functions of the Event Device API exported by a PMD
> - * are lock-free functions which assume to not be invoked in parallel on
> - * different logical cores to work on the same target object. For instance,
> - * the dequeue function of a PMD cannot be invoked in parallel on two logical
> - * cores to operates on same  event port. Of course, this function
> + * Enqueue and dequeue from a port is not thread-safe, and the expected use-case is
> + * that each port is polled by only a single lcore. [If this is not the case,
> + * a suitable synchronization mechanism should be used to prevent simultaneous
> + * access from multiple lcores.]
> + * To schedule events to an lcore, the event device will schedule them to the event port(s)
> + * being polled by that lcore.
> + *
> + * *NOTE*: By default, all the functions of the Event Device API exported by a PMD
> + * are non-thread-safe functions, which must not be invoked on the same object in parallel on
> + * different logical cores.
> + * For instance, the dequeue function of a PMD cannot be invoked in parallel on two logical
> + * cores to operate on same  event port. Of course, this function
>   * can be invoked in parallel by different logical cores on different ports.
>   * It is the responsibility of the upper level application to enforce this rule.
>   *
> @@ -107,22 +127,19 @@
>   *
>   * Event devices are dynamically registered during the PCI/SoC device probing
>   * phase performed at EAL initialization time.
> - * When an Event device is being probed, a *rte_event_dev* structure and
> - * a new device identifier are allocated for that device. Then, the
> - * event_dev_init() function supplied by the Event driver matching the probed
> - * device is invoked to properly initialize the device.
> + * When an Event device is being probed, an *rte_event_dev* structure is allocated
> + * for it and the event_dev_init() function supplied by the Event driver
> + * is invoked to properly initialize the device.
>   *
> - * The role of the device init function consists of resetting the hardware or
> - * software event driver implementations.
> + * The role of the device init function is to reset the device hardware or
> + * to initialize the software event driver implementation.
>   *
> - * If the device init operation is successful, the correspondence between
> - * the device identifier assigned to the new device and its associated
> - * *rte_event_dev* structure is effectively registered.
> - * Otherwise, both the *rte_event_dev* structure and the device identifier are
> - * freed.
> + * If the device init operation is successful, the device is assigned a device
> + * id (dev_id) for application use.
> + * Otherwise, the *rte_event_dev* structure is freed.
>   *
>   * The functions exported by the application Event API to setup a device
> - * designated by its device identifier must be invoked in the following order:
> + * must be invoked in the following order:
>   *     - rte_event_dev_configure()
>   *     - rte_event_queue_setup()
>   *     - rte_event_port_setup()
> @@ -130,10 +147,15 @@
>   *     - rte_event_dev_start()
>   *
>   * Then, the application can invoke, in any order, the functions
> - * exported by the Event API to schedule events, dequeue events, enqueue events,
> - * change event queue(s) to event port [un]link establishment and so on.
> - *
> - * Application may use rte_event_[queue/port]_default_conf_get() to get the
> + * exported by the Event API to dequeue events, enqueue events,
> + * and link and unlink event queue(s) to event ports.
> + *
> + * Before configuring a device, an application should call rte_event_dev_info_get()
> + * to determine the capabilities of the event device, and any queue or port
> + * limits of that device. The parameters set in the various device configuration
> + * structures may need to be adjusted based on the max values provided in the
> + * device information structure returned from the info_get API.

Can we add full name of info_get()?
  
Mattias Rönnblom Feb. 8, 2024, 9:50 a.m. UTC | #2
On 2024-02-07 11:14, Jerin Jacob wrote:
> On Fri, Feb 2, 2024 at 7:29 PM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
>>
>> Make some textual improvements to the introduction to eventdev and event
>> devices in the eventdev header file. This text appears in the doxygen
>> output for the header file, and introduces the key concepts, for
>> example: events, event devices, queues, ports and scheduling.
>>
>> This patch makes the following improvements:
>> * small textual fixups, e.g. correcting use of singular/plural
>> * rewrites of some sentences to improve clarity
>> * using doxygen markdown to split the whole large block up into
>>    sections, thereby making it easier to read.
>>
>> No large-scale changes are made, and blocks are not reordered
>>
>> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> 
> Thanks Bruce, While you are cleaning up, Please add following or
> similar change to fix for not properly
> parsing the struct rte_event_vector. i.e it is coming as global
> variables in html files.
> 
> l[dpdk.org] $ git diff
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index e31c927905..ce4a195a8f 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -1309,9 +1309,9 @@ struct rte_event_vector {
>                   */
>                  struct {
>                          uint16_t port;
> -                       /* Ethernet device port id. */
> +                       /**< Ethernet device port id. */
>                          uint16_t queue;
> -                       /* Ethernet device queue id. */
> +                       /**< Ethernet device queue id. */
>                  };
>          };
>          /**< Union to hold common attributes of the vector array. */
> @@ -1340,7 +1340,11 @@ struct rte_event_vector {
>           * vector array can be an array of mbufs or pointers or opaque u64
>           * values.
>           */
> +#ifndef __DOXYGEN__
>   } __rte_aligned(16);
> +#else
> +};
> +#endif
> 
>   /* Scheduler type definitions */
>   #define RTE_SCHED_TYPE_ORDERED          0
> 
>>
>> ---
>> V3: reworked following feedback from Mattias
>> ---
>>   lib/eventdev/rte_eventdev.h | 132 ++++++++++++++++++++++--------------
>>   1 file changed, 81 insertions(+), 51 deletions(-)
>>
>> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
>> index ec9b02455d..a741832e8e 100644
>> --- a/lib/eventdev/rte_eventdev.h
>> +++ b/lib/eventdev/rte_eventdev.h
>> @@ -12,25 +12,33 @@
>>    * @file
>>    *
>>    * RTE Event Device API
>> + * ====================
>>    *
>> - * In a polling model, lcores poll ethdev ports and associated rx queues
>> - * directly to look for packet. In an event driven model, by contrast, lcores
>> - * call the scheduler that selects packets for them based on programmer
>> - * specified criteria. Eventdev library adds support for event driven
>> - * programming model, which offer applications automatic multicore scaling,
>> - * dynamic load balancing, pipelining, packet ingress order maintenance and
>> - * synchronization services to simplify application packet processing.
>> + * In a traditional run-to-completion application model, lcores pick up packets
> 
> Can we keep it is as poll mode instead of run-to-completion as event mode also
> supports run to completion by having dequuee() and then Tx.
> 

A "traditional" DPDK app is both polling and run-to-completion. You 
could always add "polling" somewhere, but "run-to-completion" in that 
context serves a purpose, imo.

A single-stage eventdev-based pipeline will also process packets in a 
run-to-completion fashion. In such a scenario, the difference between 
eventdev and the "tradition" lies in the (ingress-only) load balancing 
mechanism used (which the below note on the "traditional" use of RSS 
indicates).

>> + * from Ethdev ports and associated RX queues, run the packet processing to completion,
>> + * and enqueue the completed packets to a TX queue. NIC-level receive-side scaling (RSS)
>> + * may be used to balance the load across multiple CPU cores.
>> + *
>> + * In contrast, in an event-driver model, as supported by this "eventdev" library,
>> + * incoming packets are fed into an event device, which schedules those packets across
> 
> packets -> events. We may need to bring in Rx adapter if the event is packet.
> 
>> + * the available lcores, in accordance with its configuration.
>> + * This event-driven programming model offers applications automatic multicore scaling,
>> + * dynamic load balancing, pipelining, packet order maintenance, synchronization,
>> + * and prioritization/quality of service.
>>    *
>>    * The Event Device API is composed of two parts:
>>    *
>>    * - The application-oriented Event API that includes functions to setup
>>    *   an event device (configure it, setup its queues, ports and start it), to
>> - *   establish the link between queues to port and to receive events, and so on.
>> + *   establish the links between queues and ports to receive events, and so on.
>>    *
>>    * - The driver-oriented Event API that exports a function allowing
>> - *   an event poll Mode Driver (PMD) to simultaneously register itself as
>> + *   an event poll Mode Driver (PMD) to register itself as
>>    *   an event device driver.
>>    *
>> + * Application-oriented Event API
>> + * ------------------------------
>> + *
>>    * Event device components:
>>    *
>>    *                     +-----------------+
>> @@ -75,27 +83,39 @@
>>    *            |                                                           |
>>    *            +-----------------------------------------------------------+
>>    *
>> - * Event device: A hardware or software-based event scheduler.
>> + * **Event device**: A hardware or software-based event scheduler.
>>    *
>> - * Event: A unit of scheduling that encapsulates a packet or other datatype
>> - * like SW generated event from the CPU, Crypto work completion notification,
>> - * Timer expiry event notification etc as well as metadata.
>> - * The metadata includes flow ID, scheduling type, event priority, event_type,
>> - * sub_event_type etc.
>> + * **Event**: Represents an item of work and is the smallest unit of scheduling.
>> + * An event carries metadata, such as queue ID, scheduling type, and event priority,
>> + * and data such as one or more packets or other kinds of buffers.
>> + * Some examples of events are:
>> + * - a software-generated item of work originating from a lcore,
> 
> lcore.
> 
>> + *   perhaps carrying a packet to be processed,
> 
> processed.
> 
>> + * - a crypto work completion notification
> 
> notification.
> 
>> + * - a timer expiry notification.
>>    *
>> - * Event queue: A queue containing events that are scheduled by the event dev.
>> + * **Event queue**: A queue containing events that are scheduled by the event device.
> 
> Shouldn't we add "to be" or so?
> i.e
> A queue containing events that are to be scheduled by the event device.
> 
>>    * An event queue contains events of different flows associated with scheduling
>>    * types, such as atomic, ordered, or parallel.
>> + * Each event given to an event device must have a valid event queue id field in the metadata,
>> + * to specify on which event queue in the device the event must be placed,
>> + * for later scheduling.
>>    *
>> - * Event port: An application's interface into the event dev for enqueue and
>> + * **Event port**: An application's interface into the event dev for enqueue and
>>    * dequeue operations. Each event port can be linked with one or more
>>    * event queues for dequeue operations.
>> - *
>> - * By default, all the functions of the Event Device API exported by a PMD
>> - * are lock-free functions which assume to not be invoked in parallel on
>> - * different logical cores to work on the same target object. For instance,
>> - * the dequeue function of a PMD cannot be invoked in parallel on two logical
>> - * cores to operates on same  event port. Of course, this function
>> + * Enqueue and dequeue from a port is not thread-safe, and the expected use-case is
>> + * that each port is polled by only a single lcore. [If this is not the case,
>> + * a suitable synchronization mechanism should be used to prevent simultaneous
>> + * access from multiple lcores.]
>> + * To schedule events to an lcore, the event device will schedule them to the event port(s)
>> + * being polled by that lcore.
>> + *
>> + * *NOTE*: By default, all the functions of the Event Device API exported by a PMD
>> + * are non-thread-safe functions, which must not be invoked on the same object in parallel on
>> + * different logical cores.
>> + * For instance, the dequeue function of a PMD cannot be invoked in parallel on two logical
>> + * cores to operate on same  event port. Of course, this function
>>    * can be invoked in parallel by different logical cores on different ports.
>>    * It is the responsibility of the upper level application to enforce this rule.
>>    *
>> @@ -107,22 +127,19 @@
>>    *
>>    * Event devices are dynamically registered during the PCI/SoC device probing
>>    * phase performed at EAL initialization time.
>> - * When an Event device is being probed, a *rte_event_dev* structure and
>> - * a new device identifier are allocated for that device. Then, the
>> - * event_dev_init() function supplied by the Event driver matching the probed
>> - * device is invoked to properly initialize the device.
>> + * When an Event device is being probed, an *rte_event_dev* structure is allocated
>> + * for it and the event_dev_init() function supplied by the Event driver
>> + * is invoked to properly initialize the device.
>>    *
>> - * The role of the device init function consists of resetting the hardware or
>> - * software event driver implementations.
>> + * The role of the device init function is to reset the device hardware or
>> + * to initialize the software event driver implementation.
>>    *
>> - * If the device init operation is successful, the correspondence between
>> - * the device identifier assigned to the new device and its associated
>> - * *rte_event_dev* structure is effectively registered.
>> - * Otherwise, both the *rte_event_dev* structure and the device identifier are
>> - * freed.
>> + * If the device init operation is successful, the device is assigned a device
>> + * id (dev_id) for application use.
>> + * Otherwise, the *rte_event_dev* structure is freed.
>>    *
>>    * The functions exported by the application Event API to setup a device
>> - * designated by its device identifier must be invoked in the following order:
>> + * must be invoked in the following order:
>>    *     - rte_event_dev_configure()
>>    *     - rte_event_queue_setup()
>>    *     - rte_event_port_setup()
>> @@ -130,10 +147,15 @@
>>    *     - rte_event_dev_start()
>>    *
>>    * Then, the application can invoke, in any order, the functions
>> - * exported by the Event API to schedule events, dequeue events, enqueue events,
>> - * change event queue(s) to event port [un]link establishment and so on.
>> - *
>> - * Application may use rte_event_[queue/port]_default_conf_get() to get the
>> + * exported by the Event API to dequeue events, enqueue events,
>> + * and link and unlink event queue(s) to event ports.
>> + *
>> + * Before configuring a device, an application should call rte_event_dev_info_get()
>> + * to determine the capabilities of the event device, and any queue or port
>> + * limits of that device. The parameters set in the various device configuration
>> + * structures may need to be adjusted based on the max values provided in the
>> + * device information structure returned from the info_get API.
> 
> Can we add full name of info_get()?
  
Jerin Jacob Feb. 9, 2024, 8:43 a.m. UTC | #3
On Thu, Feb 8, 2024 at 3:20 PM Mattias Rönnblom <hofors@lysator.liu.se> wrote:
>
> On 2024-02-07 11:14, Jerin Jacob wrote:
> > On Fri, Feb 2, 2024 at 7:29 PM Bruce Richardson
> > <bruce.richardson@intel.com> wrote:
> >>
> >> Make some textual improvements to the introduction to eventdev and event
> >> devices in the eventdev header file. This text appears in the doxygen
> >> output for the header file, and introduces the key concepts, for
> >> example: events, event devices, queues, ports and scheduling.
> >>
> >> This patch makes the following improvements:
> >> * small textual fixups, e.g. correcting use of singular/plural
> >> * rewrites of some sentences to improve clarity
> >> * using doxygen markdown to split the whole large block up into
> >>    sections, thereby making it easier to read.
> >>
> >> No large-scale changes are made, and blocks are not reordered
> >>
> >> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> >
> > Thanks Bruce, While you are cleaning up, Please add following or
> > similar change to fix for not properly
> > parsing the struct rte_event_vector. i.e it is coming as global
> > variables in html files.
> >
> > l[dpdk.org] $ git diff
> > diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> > index e31c927905..ce4a195a8f 100644
> > --- a/lib/eventdev/rte_eventdev.h
> > +++ b/lib/eventdev/rte_eventdev.h
> > @@ -1309,9 +1309,9 @@ struct rte_event_vector {
> >                   */
> >                  struct {
> >                          uint16_t port;
> > -                       /* Ethernet device port id. */
> > +                       /**< Ethernet device port id. */
> >                          uint16_t queue;
> > -                       /* Ethernet device queue id. */
> > +                       /**< Ethernet device queue id. */
> >                  };
> >          };
> >          /**< Union to hold common attributes of the vector array. */
> > @@ -1340,7 +1340,11 @@ struct rte_event_vector {
> >           * vector array can be an array of mbufs or pointers or opaque u64
> >           * values.
> >           */
> > +#ifndef __DOXYGEN__
> >   } __rte_aligned(16);
> > +#else
> > +};
> > +#endif
> >
> >   /* Scheduler type definitions */
> >   #define RTE_SCHED_TYPE_ORDERED          0
> >
> >>
> >> ---
> >> V3: reworked following feedback from Mattias
> >> ---
> >>   lib/eventdev/rte_eventdev.h | 132 ++++++++++++++++++++++--------------
> >>   1 file changed, 81 insertions(+), 51 deletions(-)
> >>
> >> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> >> index ec9b02455d..a741832e8e 100644
> >> --- a/lib/eventdev/rte_eventdev.h
> >> +++ b/lib/eventdev/rte_eventdev.h
> >> @@ -12,25 +12,33 @@
> >>    * @file
> >>    *
> >>    * RTE Event Device API
> >> + * ====================
> >>    *
> >> - * In a polling model, lcores poll ethdev ports and associated rx queues
> >> - * directly to look for packet. In an event driven model, by contrast, lcores
> >> - * call the scheduler that selects packets for them based on programmer
> >> - * specified criteria. Eventdev library adds support for event driven
> >> - * programming model, which offer applications automatic multicore scaling,
> >> - * dynamic load balancing, pipelining, packet ingress order maintenance and
> >> - * synchronization services to simplify application packet processing.
> >> + * In a traditional run-to-completion application model, lcores pick up packets
> >
> > Can we keep it is as poll mode instead of run-to-completion as event mode also
> > supports run to completion by having dequuee() and then Tx.
> >
>
> A "traditional" DPDK app is both polling and run-to-completion. You
> could always add "polling" somewhere, but "run-to-completion" in that
> context serves a purpose, imo.

Yeah. Some event devices can actually sleep to save power if a packet is
not present (using WFE in the arm64 world).

I think we can be more specific then, like:

In a traditional run-to-completion application model, where packets are
dequeued from NIC RX queues, .......


>
> A single-stage eventdev-based pipeline will also process packets in a
> run-to-completion fashion. In such a scenario, the difference between
> eventdev and the "tradition" lies in the (ingress-only) load balancing
> mechanism used (which the below note on the "traditional" use of RSS
> indicates).
  
Mattias Rönnblom Feb. 10, 2024, 7:24 a.m. UTC | #4
On 2024-02-09 09:43, Jerin Jacob wrote:
> On Thu, Feb 8, 2024 at 3:20 PM Mattias Rönnblom <hofors@lysator.liu.se> wrote:
>>
>> On 2024-02-07 11:14, Jerin Jacob wrote:
>>> On Fri, Feb 2, 2024 at 7:29 PM Bruce Richardson
>>> <bruce.richardson@intel.com> wrote:
>>>>
>>>> Make some textual improvements to the introduction to eventdev and event
>>>> devices in the eventdev header file. This text appears in the doxygen
>>>> output for the header file, and introduces the key concepts, for
>>>> example: events, event devices, queues, ports and scheduling.
>>>>
>>>> This patch makes the following improvements:
>>>> * small textual fixups, e.g. correcting use of singular/plural
>>>> * rewrites of some sentences to improve clarity
>>>> * using doxygen markdown to split the whole large block up into
>>>>     sections, thereby making it easier to read.
>>>>
>>>> No large-scale changes are made, and blocks are not reordered
>>>>
>>>> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
>>>
>>> Thanks Bruce, While you are cleaning up, Please add following or
>>> similar change to fix for not properly
>>> parsing the struct rte_event_vector. i.e it is coming as global
>>> variables in html files.
>>>
>>> l[dpdk.org] $ git diff
>>> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
>>> index e31c927905..ce4a195a8f 100644
>>> --- a/lib/eventdev/rte_eventdev.h
>>> +++ b/lib/eventdev/rte_eventdev.h
>>> @@ -1309,9 +1309,9 @@ struct rte_event_vector {
>>>                    */
>>>                   struct {
>>>                           uint16_t port;
>>> -                       /* Ethernet device port id. */
>>> +                       /**< Ethernet device port id. */
>>>                           uint16_t queue;
>>> -                       /* Ethernet device queue id. */
>>> +                       /**< Ethernet device queue id. */
>>>                   };
>>>           };
>>>           /**< Union to hold common attributes of the vector array. */
>>> @@ -1340,7 +1340,11 @@ struct rte_event_vector {
>>>            * vector array can be an array of mbufs or pointers or opaque u64
>>>            * values.
>>>            */
>>> +#ifndef __DOXYGEN__
>>>    } __rte_aligned(16);
>>> +#else
>>> +};
>>> +#endif
>>>
>>>    /* Scheduler type definitions */
>>>    #define RTE_SCHED_TYPE_ORDERED          0
>>>
>>>>
>>>> ---
>>>> V3: reworked following feedback from Mattias
>>>> ---
>>>>    lib/eventdev/rte_eventdev.h | 132 ++++++++++++++++++++++--------------
>>>>    1 file changed, 81 insertions(+), 51 deletions(-)
>>>>
>>>> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
>>>> index ec9b02455d..a741832e8e 100644
>>>> --- a/lib/eventdev/rte_eventdev.h
>>>> +++ b/lib/eventdev/rte_eventdev.h
>>>> @@ -12,25 +12,33 @@
>>>>     * @file
>>>>     *
>>>>     * RTE Event Device API
>>>> + * ====================
>>>>     *
>>>> - * In a polling model, lcores poll ethdev ports and associated rx queues
>>>> - * directly to look for packet. In an event driven model, by contrast, lcores
>>>> - * call the scheduler that selects packets for them based on programmer
>>>> - * specified criteria. Eventdev library adds support for event driven
>>>> - * programming model, which offer applications automatic multicore scaling,
>>>> - * dynamic load balancing, pipelining, packet ingress order maintenance and
>>>> - * synchronization services to simplify application packet processing.
>>>> + * In a traditional run-to-completion application model, lcores pick up packets
>>>
>>> Can we keep it is as poll mode instead of run-to-completion as event mode also
>>> supports run to completion by having dequuee() and then Tx.
>>>
>>
>> A "traditional" DPDK app is both polling and run-to-completion. You
>> could always add "polling" somewhere, but "run-to-completion" in that
>> context serves a purpose, imo.
> 
> Yeah. Some event devices can actually sleep to save power if packet is
> not present(using WFE in arm64 world).
> 

Sure, and I believe you can do that with certain Ethdevs as well. You can
also use interrupts. So polling/energy-efficient polling (wfe/umwait)/interrupts
aren't really a differentiator between Eventdev and "raw" Ethdev.

> I think, We can be more specific then, like
> 
> In a traditional run-to-completion application model where packet are
> dequeued from NIC RX queues, .......
> 

"In a traditional DPDK application model, the application polls Ethdev 
port RX queues to look for work, and processing is done in a 
run-to-completion manner, after which the packets are transmitted on a 
Ethdev TX queue. Load is distributed by statically assigning ports and 
queues to lcores, and NIC receive-side scaling (RSS, or similar) is 
employed to distribute network flows (and thus work) on the same port 
across multiple RX queues."

I don̈́'t know if that's too much.

> 
>>
>> A single-stage eventdev-based pipeline will also process packets in a
>> run-to-completion fashion. In such a scenario, the difference between
>> eventdev and the "tradition" lies in the (ingress-only) load balancing
>> mechanism used (which the below note on the "traditional" use of RSS
>> indicates).
  
Bruce Richardson Feb. 20, 2024, 4:28 p.m. UTC | #5
On Sat, Feb 10, 2024 at 08:24:29AM +0100, Mattias Rönnblom wrote:
> On 2024-02-09 09:43, Jerin Jacob wrote:
> > On Thu, Feb 8, 2024 at 3:20 PM Mattias Rönnblom <hofors@lysator.liu.se> wrote:
> > > 
> > > On 2024-02-07 11:14, Jerin Jacob wrote:
> > > > On Fri, Feb 2, 2024 at 7:29 PM Bruce Richardson
> > > > <bruce.richardson@intel.com> wrote:
> > > > > 
> > > > > Make some textual improvements to the introduction to eventdev and event
> > > > > devices in the eventdev header file. This text appears in the doxygen
> > > > > output for the header file, and introduces the key concepts, for
> > > > > example: events, event devices, queues, ports and scheduling.
> > > > > 
> > > > > This patch makes the following improvements:
> > > > > * small textual fixups, e.g. correcting use of singular/plural
> > > > > * rewrites of some sentences to improve clarity
> > > > > * using doxygen markdown to split the whole large block up into
> > > > >     sections, thereby making it easier to read.
> > > > > 
> > > > > No large-scale changes are made, and blocks are not reordered
> > > > > 
> > > > > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > > > 
> > > > Thanks Bruce, While you are cleaning up, Please add following or
> > > > similar change to fix for not properly
> > > > parsing the struct rte_event_vector. i.e it is coming as global
> > > > variables in html files.
> > > > 
> > > > l[dpdk.org] $ git diff
> > > > diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> > > > index e31c927905..ce4a195a8f 100644
> > > > --- a/lib/eventdev/rte_eventdev.h
> > > > +++ b/lib/eventdev/rte_eventdev.h
> > > > @@ -1309,9 +1309,9 @@ struct rte_event_vector {
> > > >                    */
> > > >                   struct {
> > > >                           uint16_t port;
> > > > -                       /* Ethernet device port id. */
> > > > +                       /**< Ethernet device port id. */
> > > >                           uint16_t queue;
> > > > -                       /* Ethernet device queue id. */
> > > > +                       /**< Ethernet device queue id. */
> > > >                   };
> > > >           };
> > > >           /**< Union to hold common attributes of the vector array. */
> > > > @@ -1340,7 +1340,11 @@ struct rte_event_vector {
> > > >            * vector array can be an array of mbufs or pointers or opaque u64
> > > >            * values.
> > > >            */
> > > > +#ifndef __DOXYGEN__
> > > >    } __rte_aligned(16);
> > > > +#else
> > > > +};
> > > > +#endif
> > > > 
> > > >    /* Scheduler type definitions */
> > > >    #define RTE_SCHED_TYPE_ORDERED          0
> > > > 
> > > > > 
> > > > > ---
> > > > > V3: reworked following feedback from Mattias
> > > > > ---
> > > > >    lib/eventdev/rte_eventdev.h | 132 ++++++++++++++++++++++--------------
> > > > >    1 file changed, 81 insertions(+), 51 deletions(-)
> > > > > 
> > > > > diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> > > > > index ec9b02455d..a741832e8e 100644
> > > > > --- a/lib/eventdev/rte_eventdev.h
> > > > > +++ b/lib/eventdev/rte_eventdev.h
> > > > > @@ -12,25 +12,33 @@
> > > > >     * @file
> > > > >     *
> > > > >     * RTE Event Device API
> > > > > + * ====================
> > > > >     *
> > > > > - * In a polling model, lcores poll ethdev ports and associated rx queues
> > > > > - * directly to look for packet. In an event driven model, by contrast, lcores
> > > > > - * call the scheduler that selects packets for them based on programmer
> > > > > - * specified criteria. Eventdev library adds support for event driven
> > > > > - * programming model, which offer applications automatic multicore scaling,
> > > > > - * dynamic load balancing, pipelining, packet ingress order maintenance and
> > > > > - * synchronization services to simplify application packet processing.
> > > > > + * In a traditional run-to-completion application model, lcores pick up packets
> > > > 
> > > > Can we keep it is as poll mode instead of run-to-completion as event mode also
> > > > supports run to completion by having dequuee() and then Tx.
> > > > 
> > > 
> > > A "traditional" DPDK app is both polling and run-to-completion. You
> > > could always add "polling" somewhere, but "run-to-completion" in that
> > > context serves a purpose, imo.
> > 
> > Yeah. Some event devices can actually sleep to save power if packet is
> > not present(using WFE in arm64 world).
> > 
> 
> Sure, and I believe you can do that with certain Ethdevs as well. Also, you
> can also use interrupts. So polling/energy-efficient polling
> (wfe/umwait)/interrupts aren't really a differentiator between Eventdev and
> "raw" Ethdev.
> 
> > I think, We can be more specific then, like
> > 
> > In a traditional run-to-completion application model where packet are
> > dequeued from NIC RX queues, .......
> > 
> 
> "In a traditional DPDK application model, the application polls Ethdev port
> RX queues to look for work, and processing is done in a run-to-completion
> manner, after which the packets are transmitted on a Ethdev TX queue. Load
> is distributed by statically assigning ports and queues to lcores, and NIC
> receive-side scaling (RSS, or similar) is employed to distribute network
> flows (and thus work) on the same port across multiple RX queues."
> 
> I don't know if that's too much.
> 
Looks fine to me, I'll just use that text in V4.
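
As a concrete illustration of the "traditional" model described in the proposed
text above, here is a minimal sketch of such a poll-mode, run-to-completion
worker (illustrative only; the process_packet() helper is hypothetical and
error/overflow handling is omitted):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32

    /* Application-specific per-packet work (hypothetical stub). */
    static void process_packet(struct rte_mbuf *m) { (void)m; }

    /* One such loop typically runs per lcore, each statically assigned its
     * own (port, queue) pair; RSS spreads flows across the RX queues. */
    static void
    traditional_worker(uint16_t port_id, uint16_t queue_id)
    {
            struct rte_mbuf *pkts[BURST_SIZE];

            for (;;) {
                    /* Poll the NIC RX queue directly for work. */
                    uint16_t nb_rx = rte_eth_rx_burst(port_id, queue_id,
                                                      pkts, BURST_SIZE);

                    /* Run each packet to completion... */
                    for (uint16_t i = 0; i < nb_rx; i++)
                            process_packet(pkts[i]);

                    /* ...then transmit on the paired TX queue
                     * (handling of unsent packets omitted). */
                    if (nb_rx > 0)
                            rte_eth_tx_burst(port_id, queue_id, pkts, nb_rx);
            }
    }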
  
Bruce Richardson Feb. 20, 2024, 4:33 p.m. UTC | #6
On Wed, Feb 07, 2024 at 03:44:37PM +0530, Jerin Jacob wrote:
> On Fri, Feb 2, 2024 at 7:29 PM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> >
> > Make some textual improvements to the introduction to eventdev and event
> > devices in the eventdev header file. This text appears in the doxygen
> > output for the header file, and introduces the key concepts, for
> > example: events, event devices, queues, ports and scheduling.
> >
> > This patch makes the following improvements:
> > * small textual fixups, e.g. correcting use of singular/plural
> > * rewrites of some sentences to improve clarity
> > * using doxygen markdown to split the whole large block up into
> >   sections, thereby making it easier to read.
> >
> > No large-scale changes are made, and blocks are not reordered
> >
> > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> 
> Thanks Bruce, While you are cleaning up, Please add following or
> similar change to fix for not properly
> parsing the struct rte_event_vector. i.e it is coming as global
> variables in html files.
> 
> l[dpdk.org] $ git diff
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index e31c927905..ce4a195a8f 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -1309,9 +1309,9 @@ struct rte_event_vector {
>                  */
>                 struct {
>                         uint16_t port;
> -                       /* Ethernet device port id. */
> +                       /**< Ethernet device port id. */
>                         uint16_t queue;
> -                       /* Ethernet device queue id. */
> +                       /**< Ethernet device queue id. */
>                 };
>         };
>         /**< Union to hold common attributes of the vector array. */
> @@ -1340,7 +1340,11 @@ struct rte_event_vector {
>          * vector array can be an array of mbufs or pointers or opaque u64
>          * values.
>          */
> +#ifndef __DOXYGEN__
>  } __rte_aligned(16);
> +#else
> +};
> +#endif
> 

Yep, that's an easy enough extra patch to add to v4.

>  /* Scheduler type definitions */
>  #define RTE_SCHED_TYPE_ORDERED          0
> 
> >
> > ---
> > V3: reworked following feedback from Mattias
> > ---
> >  lib/eventdev/rte_eventdev.h | 132 ++++++++++++++++++++++--------------
> >  1 file changed, 81 insertions(+), 51 deletions(-)
> >
> > diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> > index ec9b02455d..a741832e8e 100644
> > --- a/lib/eventdev/rte_eventdev.h
> > +++ b/lib/eventdev/rte_eventdev.h
> > @@ -12,25 +12,33 @@
> >   * @file
> >   *
> >   * RTE Event Device API
> > + * ====================
> >   *
> > - * In a polling model, lcores poll ethdev ports and associated rx queues
> > - * directly to look for packet. In an event driven model, by contrast, lcores
> > - * call the scheduler that selects packets for them based on programmer
> > - * specified criteria. Eventdev library adds support for event driven
> > - * programming model, which offer applications automatic multicore scaling,
> > - * dynamic load balancing, pipelining, packet ingress order maintenance and
> > - * synchronization services to simplify application packet processing.
> > + * In a traditional run-to-completion application model, lcores pick up packets
> 
> Can we keep it is as poll mode instead of run-to-completion as event mode also
> supports run to completion by having dequuee() and then Tx.
> 
> > + * from Ethdev ports and associated RX queues, run the packet processing to completion,
> > + * and enqueue the completed packets to a TX queue. NIC-level receive-side scaling (RSS)
> > + * may be used to balance the load across multiple CPU cores.
> > + *
> > + * In contrast, in an event-driver model, as supported by this "eventdev" library,
> > + * incoming packets are fed into an event device, which schedules those packets across
> 
> packets -> events. We may need to bring in Rx adapter if the event is packet.
> 

I think keeping it as packets is correct, rather than confusing things too
much. However, I will put "incoming packets (or other input events) ..." to
acknowledge other sources. We don't need to bring in input adapters at this
point since we want to keep it high-level.

> > + * the available lcores, in accordance with its configuration.
> > + * This event-driven programming model offers applications automatic multicore scaling,
> > + * dynamic load balancing, pipelining, packet order maintenance, synchronization,
> > + * and prioritization/quality of service.
> >   *
> >   * The Event Device API is composed of two parts:
> >   *
> >   * - The application-oriented Event API that includes functions to setup
> >   *   an event device (configure it, setup its queues, ports and start it), to
> > - *   establish the link between queues to port and to receive events, and so on.
> > + *   establish the links between queues and ports to receive events, and so on.
> >   *
> >   * - The driver-oriented Event API that exports a function allowing
> > - *   an event poll Mode Driver (PMD) to simultaneously register itself as
> > + *   an event poll Mode Driver (PMD) to register itself as
> >   *   an event device driver.
> >   *
> > + * Application-oriented Event API
> > + * ------------------------------
> > + *
> >   * Event device components:
> >   *
> >   *                     +-----------------+
> > @@ -75,27 +83,39 @@
> >   *            |                                                           |
> >   *            +-----------------------------------------------------------+
> >   *
> > - * Event device: A hardware or software-based event scheduler.
> > + * **Event device**: A hardware or software-based event scheduler.
> >   *
> > - * Event: A unit of scheduling that encapsulates a packet or other datatype
> > - * like SW generated event from the CPU, Crypto work completion notification,
> > - * Timer expiry event notification etc as well as metadata.
> > - * The metadata includes flow ID, scheduling type, event priority, event_type,
> > - * sub_event_type etc.
> > + * **Event**: Represents an item of work and is the smallest unit of scheduling.
> > + * An event carries metadata, such as queue ID, scheduling type, and event priority,
> > + * and data such as one or more packets or other kinds of buffers.
> > + * Some examples of events are:
> > + * - a software-generated item of work originating from a lcore,
> 
> lcore.
> 
Nak for this, since it's not the end of a sentence, but ack for the other
two below.

> > + *   perhaps carrying a packet to be processed,
> 
> processed.
> 
> > + * - a crypto work completion notification
> 
> notification.
> 
> > + * - a timer expiry notification.
> >   *
> > - * Event queue: A queue containing events that are scheduled by the event dev.
> > + * **Event queue**: A queue containing events that are scheduled by the event device.
> 
> Shouldn't we add "to be" or so?
> i.e
> A queue containing events that are to be scheduled by the event device.
> 

Sure, ack.

> >   * An event queue contains events of different flows associated with scheduling
> >   * types, such as atomic, ordered, or parallel.
> > + * Each event given to an event device must have a valid event queue id field in the metadata,
> > + * to specify on which event queue in the device the event must be placed,
> > + * for later scheduling.
> >   *
> > - * Event port: An application's interface into the event dev for enqueue and
> > + * **Event port**: An application's interface into the event dev for enqueue and
> >   * dequeue operations. Each event port can be linked with one or more
> >   * event queues for dequeue operations.
> > - *
> > - * By default, all the functions of the Event Device API exported by a PMD
> > - * are lock-free functions which assume to not be invoked in parallel on
> > - * different logical cores to work on the same target object. For instance,
> > - * the dequeue function of a PMD cannot be invoked in parallel on two logical
> > - * cores to operates on same  event port. Of course, this function
> > + * Enqueue and dequeue from a port is not thread-safe, and the expected use-case is
> > + * that each port is polled by only a single lcore. [If this is not the case,
> > + * a suitable synchronization mechanism should be used to prevent simultaneous
> > + * access from multiple lcores.]
> > + * To schedule events to an lcore, the event device will schedule them to the event port(s)
> > + * being polled by that lcore.
> > + *
> > + * *NOTE*: By default, all the functions of the Event Device API exported by a PMD
> > + * are non-thread-safe functions, which must not be invoked on the same object in parallel on
> > + * different logical cores.
> > + * For instance, the dequeue function of a PMD cannot be invoked in parallel on two logical
> > + * cores to operate on same  event port. Of course, this function
> >   * can be invoked in parallel by different logical cores on different ports.
> >   * It is the responsibility of the upper level application to enforce this rule.
> >   *
> > @@ -107,22 +127,19 @@
> >   *
> >   * Event devices are dynamically registered during the PCI/SoC device probing
> >   * phase performed at EAL initialization time.
> > - * When an Event device is being probed, a *rte_event_dev* structure and
> > - * a new device identifier are allocated for that device. Then, the
> > - * event_dev_init() function supplied by the Event driver matching the probed
> > - * device is invoked to properly initialize the device.
> > + * When an Event device is being probed, an *rte_event_dev* structure is allocated
> > + * for it and the event_dev_init() function supplied by the Event driver
> > + * is invoked to properly initialize the device.
> >   *
> > - * The role of the device init function consists of resetting the hardware or
> > - * software event driver implementations.
> > + * The role of the device init function is to reset the device hardware or
> > + * to initialize the software event driver implementation.
> >   *
> > - * If the device init operation is successful, the correspondence between
> > - * the device identifier assigned to the new device and its associated
> > - * *rte_event_dev* structure is effectively registered.
> > - * Otherwise, both the *rte_event_dev* structure and the device identifier are
> > - * freed.
> > + * If the device init operation is successful, the device is assigned a device
> > + * id (dev_id) for application use.
> > + * Otherwise, the *rte_event_dev* structure is freed.
> >   *
> >   * The functions exported by the application Event API to setup a device
> > - * designated by its device identifier must be invoked in the following order:
> > + * must be invoked in the following order:
> >   *     - rte_event_dev_configure()
> >   *     - rte_event_queue_setup()
> >   *     - rte_event_port_setup()
> > @@ -130,10 +147,15 @@
> >   *     - rte_event_dev_start()
> >   *
> >   * Then, the application can invoke, in any order, the functions
> > - * exported by the Event API to schedule events, dequeue events, enqueue events,
> > - * change event queue(s) to event port [un]link establishment and so on.
> > - *
> > - * Application may use rte_event_[queue/port]_default_conf_get() to get the
> > + * exported by the Event API to dequeue events, enqueue events,
> > + * and link and unlink event queue(s) to event ports.
> > + *
> > + * Before configuring a device, an application should call rte_event_dev_info_get()
> > + * to determine the capabilities of the event device, and any queue or port
> > + * limits of that device. The parameters set in the various device configuration
> > + * structures may need to be adjusted based on the max values provided in the
> > + * device information structure returned from the info_get API.
> 
> Can we add full name of info_get()?

Yep, that will turn it into a hyperlink, so will update in v4
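
For reference, the configure-before-use sequence being documented here looks
roughly like the following minimal sketch (error handling omitted; the
single-queue, single-port configuration values are illustrative assumptions,
not recommendations):

    #include <rte_eventdev.h>

    static int
    setup_eventdev(uint8_t dev_id)
    {
            struct rte_event_dev_info info;
            struct rte_event_dev_config config = {0};
            struct rte_event_queue_conf queue_conf;
            struct rte_event_port_conf port_conf;
            uint8_t queue_id = 0, port_id = 0;

            /* Query device capabilities and limits first... */
            rte_event_dev_info_get(dev_id, &info);

            /* ...and stay within those limits when configuring the device. */
            config.nb_event_queues = 1;
            config.nb_event_ports = 1;
            config.nb_events_limit = info.max_num_events;
            config.nb_event_queue_flows = info.max_event_queue_flows;
            config.nb_event_port_dequeue_depth = info.max_event_port_dequeue_depth;
            config.nb_event_port_enqueue_depth = info.max_event_port_enqueue_depth;
            config.dequeue_timeout_ns = info.min_dequeue_timeout_ns;
            rte_event_dev_configure(dev_id, &config);

            /* Set up queues and ports, starting from the defaults. */
            rte_event_queue_default_conf_get(dev_id, queue_id, &queue_conf);
            rte_event_queue_setup(dev_id, queue_id, &queue_conf);
            rte_event_port_default_conf_get(dev_id, port_id, &port_conf);
            rte_event_port_setup(dev_id, port_id, &port_conf);

            /* Link the queue to the port, then start the device. */
            rte_event_port_link(dev_id, port_id, &queue_id, NULL, 1);
            return rte_event_dev_start(dev_id);
    }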
  

Patch

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index ec9b02455d..a741832e8e 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -12,25 +12,33 @@ 
  * @file
  *
  * RTE Event Device API
+ * ====================
  *
- * In a polling model, lcores poll ethdev ports and associated rx queues
- * directly to look for packet. In an event driven model, by contrast, lcores
- * call the scheduler that selects packets for them based on programmer
- * specified criteria. Eventdev library adds support for event driven
- * programming model, which offer applications automatic multicore scaling,
- * dynamic load balancing, pipelining, packet ingress order maintenance and
- * synchronization services to simplify application packet processing.
+ * In a traditional run-to-completion application model, lcores pick up packets
+ * from Ethdev ports and associated RX queues, run the packet processing to completion,
+ * and enqueue the completed packets to a TX queue. NIC-level receive-side scaling (RSS)
+ * may be used to balance the load across multiple CPU cores.
+ *
+ * In contrast, in an event-driven model, as supported by this "eventdev" library,
+ * incoming packets are fed into an event device, which schedules those packets across
+ * the available lcores, in accordance with its configuration.
+ * This event-driven programming model offers applications automatic multicore scaling,
+ * dynamic load balancing, pipelining, packet order maintenance, synchronization,
+ * and prioritization/quality of service.
  *
  * The Event Device API is composed of two parts:
  *
  * - The application-oriented Event API that includes functions to setup
  *   an event device (configure it, setup its queues, ports and start it), to
- *   establish the link between queues to port and to receive events, and so on.
+ *   establish the links between queues and ports to receive events, and so on.
  *
  * - The driver-oriented Event API that exports a function allowing
- *   an event poll Mode Driver (PMD) to simultaneously register itself as
+ *   an event poll Mode Driver (PMD) to register itself as
  *   an event device driver.
  *
+ * Application-oriented Event API
+ * ------------------------------
+ *
  * Event device components:
  *
  *                     +-----------------+
@@ -75,27 +83,39 @@ 
  *            |                                                           |
  *            +-----------------------------------------------------------+
  *
- * Event device: A hardware or software-based event scheduler.
+ * **Event device**: A hardware or software-based event scheduler.
  *
- * Event: A unit of scheduling that encapsulates a packet or other datatype
- * like SW generated event from the CPU, Crypto work completion notification,
- * Timer expiry event notification etc as well as metadata.
- * The metadata includes flow ID, scheduling type, event priority, event_type,
- * sub_event_type etc.
+ * **Event**: Represents an item of work and is the smallest unit of scheduling.
+ * An event carries metadata, such as queue ID, scheduling type, and event priority,
+ * and data such as one or more packets or other kinds of buffers.
+ * Some examples of events are:
+ * - a software-generated item of work originating from an lcore,
+ *   perhaps carrying a packet to be processed,
+ * - a crypto work completion notification
+ * - a timer expiry notification.
  *
- * Event queue: A queue containing events that are scheduled by the event dev.
+ * **Event queue**: A queue containing events that are scheduled by the event device.
  * An event queue contains events of different flows associated with scheduling
  * types, such as atomic, ordered, or parallel.
+ * Each event given to an event device must have a valid event queue id field in the metadata,
+ * to specify on which event queue in the device the event must be placed,
+ * for later scheduling.
  *
- * Event port: An application's interface into the event dev for enqueue and
+ * **Event port**: An application's interface into the event dev for enqueue and
  * dequeue operations. Each event port can be linked with one or more
  * event queues for dequeue operations.
- *
- * By default, all the functions of the Event Device API exported by a PMD
- * are lock-free functions which assume to not be invoked in parallel on
- * different logical cores to work on the same target object. For instance,
- * the dequeue function of a PMD cannot be invoked in parallel on two logical
- * cores to operates on same  event port. Of course, this function
+ * Enqueue and dequeue from a port are not thread-safe, and the expected use-case is
+ * that each port is polled by only a single lcore. [If this is not the case,
+ * a suitable synchronization mechanism should be used to prevent simultaneous
+ * access from multiple lcores.]
+ * To schedule events to an lcore, the event device will schedule them to the event port(s)
+ * being polled by that lcore.
+ *
+ * *NOTE*: By default, all the functions of the Event Device API exported by a PMD
+ * are non-thread-safe functions, which must not be invoked on the same object in parallel on
+ * different logical cores.
+ * For instance, the dequeue function of a PMD cannot be invoked in parallel on two logical
+ * cores to operate on the same event port. Of course, this function
  * can be invoked in parallel by different logical cores on different ports.
  * It is the responsibility of the upper level application to enforce this rule.
  *
@@ -107,22 +127,19 @@ 
  *
  * Event devices are dynamically registered during the PCI/SoC device probing
  * phase performed at EAL initialization time.
- * When an Event device is being probed, a *rte_event_dev* structure and
- * a new device identifier are allocated for that device. Then, the
- * event_dev_init() function supplied by the Event driver matching the probed
- * device is invoked to properly initialize the device.
+ * When an Event device is being probed, an *rte_event_dev* structure is allocated
+ * for it and the event_dev_init() function supplied by the Event driver
+ * is invoked to properly initialize the device.
  *
- * The role of the device init function consists of resetting the hardware or
- * software event driver implementations.
+ * The role of the device init function is to reset the device hardware or
+ * to initialize the software event driver implementation.
  *
- * If the device init operation is successful, the correspondence between
- * the device identifier assigned to the new device and its associated
- * *rte_event_dev* structure is effectively registered.
- * Otherwise, both the *rte_event_dev* structure and the device identifier are
- * freed.
+ * If the device init operation is successful, the device is assigned a device
+ * id (dev_id) for application use.
+ * Otherwise, the *rte_event_dev* structure is freed.
  *
  * The functions exported by the application Event API to setup a device
- * designated by its device identifier must be invoked in the following order:
+ * must be invoked in the following order:
  *     - rte_event_dev_configure()
  *     - rte_event_queue_setup()
  *     - rte_event_port_setup()
@@ -130,10 +147,15 @@ 
  *     - rte_event_dev_start()
  *
  * Then, the application can invoke, in any order, the functions
- * exported by the Event API to schedule events, dequeue events, enqueue events,
- * change event queue(s) to event port [un]link establishment and so on.
- *
- * Application may use rte_event_[queue/port]_default_conf_get() to get the
+ * exported by the Event API to dequeue events, enqueue events,
+ * and link and unlink event queue(s) to event ports.
+ *
+ * Before configuring a device, an application should call rte_event_dev_info_get()
+ * to determine the capabilities of the event device, and any queue or port
+ * limits of that device. The parameters set in the various device configuration
+ * structures may need to be adjusted based on the max values provided in the
+ * device information structure returned from the info_get API.
+ * An application may use rte_event_[queue/port]_default_conf_get() to get the
  * default configuration to set up an event queue or event port by
  * overriding few default values.
  *
@@ -145,7 +167,11 @@ 
  * when the device is stopped.
  *
  * Finally, an application can close an Event device by invoking the
- * rte_event_dev_close() function.
+ * rte_event_dev_close() function. Once closed, a device cannot be
+ * reconfigured or restarted.
+ *
+ * Driver-Oriented Event API
+ * -------------------------
  *
  * Each function of the application Event API invokes a specific function
  * of the PMD that controls the target device designated by its device
@@ -163,11 +189,14 @@ 
  * performs an indirect invocation of the corresponding driver function
  * supplied in the *event_dev_ops* structure of the *rte_event_dev* structure.
  *
- * For performance reasons, the address of the fast-path functions of the
- * Event driver is not contained in the *event_dev_ops* structure.
+ * For performance reasons, the addresses of the fast-path functions of the
+ * event driver are not contained in the *event_dev_ops* structure.
  * Instead, they are directly stored at the beginning of the *rte_event_dev*
  * structure to avoid an extra indirect memory access during their invocation.
  *
+ * Event Enqueue, Dequeue and Scheduling
+ * -------------------------------------
+ *
  * RTE event device drivers do not use interrupts for enqueue or dequeue
  * operation. Instead, Event drivers export Poll-Mode enqueue and dequeue
  * functions to applications.
@@ -179,21 +208,22 @@ 
  * crypto work completion notification etc
  *
  * The *dequeue* operation gets one or more events from the event ports.
- * The application process the events and send to downstream event queue through
- * rte_event_enqueue_burst() if it is an intermediate stage of event processing,
- * on the final stage, the application may use Tx adapter API for maintaining
- * the ingress order and then send the packet/event on the wire.
+ * The application processes the events and sends them to a downstream event queue through
+ * rte_event_enqueue_burst(), if it is an intermediate stage of event processing.
+ * On the final stage of processing, the application may use the Tx adapter API for maintaining
+ * the event ingress order while sending the packet/event on the wire via NIC Tx.
  *
  * The point at which events are scheduled to ports depends on the device.
  * For hardware devices, scheduling occurs asynchronously without any software
  * intervention. Software schedulers can either be distributed
  * (each worker thread schedules events to its own port) or centralized
  * (a dedicated thread schedules to all ports). Distributed software schedulers
- * perform the scheduling in rte_event_dequeue_burst(), whereas centralized
- * scheduler logic need a dedicated service core for scheduling.
- * The RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag is not set
- * indicates the device is centralized and thus needs a dedicated scheduling
- * thread that repeatedly calls software specific scheduling function.
+ * perform the scheduling inside the enqueue or dequeue functions, whereas centralized
+ * software schedulers need a dedicated service core for scheduling.
+ * The absence of the RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag
+ * indicates that the device is centralized and thus needs a dedicated scheduling
+ * thread (generally an RTE service that should be mapped to one or more service cores)
+ * that repeatedly calls the software specific scheduling function.
  *
  * An event driven worker thread has following typical workflow on fastpath:
  * \code{.c}
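
(The header's inline \code{.c} example is truncated in the diff above. Purely
for orientation, a minimal sketch of such a fastpath worker loop follows; it is
not the header's own example, and it assumes a single-stage pipeline, one event
port per lcore, and a hypothetical process_event() helper.)

    #include <rte_eventdev.h>

    #define BURST_SIZE 32

    /* Application-specific per-event work (hypothetical stub). A real
     * intermediate stage would also set ev->queue_id and
     * ev->op = RTE_EVENT_OP_FORWARD before re-enqueuing. */
    static void process_event(struct rte_event *ev) { (void)ev; }

    static void
    event_worker(uint8_t dev_id, uint8_t port_id)
    {
            struct rte_event ev[BURST_SIZE];

            for (;;) {
                    /* The event device schedules work to this port. */
                    uint16_t nb = rte_event_dequeue_burst(dev_id, port_id,
                                    ev, BURST_SIZE, 0 /* timeout ticks */);

                    for (uint16_t i = 0; i < nb; i++)
                            process_event(&ev[i]);

                    /* Intermediate stage: send the events on to the next
                     * queue. A final stage would instead hand packets to
                     * NIC TX, e.g. via the Tx adapter. */
                    if (nb > 0)
                            rte_event_enqueue_burst(dev_id, port_id, ev, nb);
            }
    }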