From patchwork Wed Feb 21 10:32:18 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Bruce Richardson <bruce.richardson@intel.com>
X-Patchwork-Id: 136965
X-Patchwork-Delegate: jerinj@marvell.com
From: Bruce Richardson <bruce.richardson@intel.com>
To: dev@dpdk.org, jerinj@marvell.com, mattias.ronnblom@ericsson.com
Cc: Bruce Richardson <bruce.richardson@intel.com>
Subject: [PATCH v4 09/12] eventdev: improve comments on scheduling types
Date: Wed, 21 Feb 2024 10:32:18 +0000
Message-Id: <20240221103221.933238-10-bruce.richardson@intel.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20240221103221.933238-1-bruce.richardson@intel.com>
References: <20240119174346.108905-1-bruce.richardson@intel.com>
 <20240221103221.933238-1-bruce.richardson@intel.com>

The description of ordered and atomic scheduling given in the eventdev
doxygen documentation was not always clear. Try to simplify it so that
it is clearer for the end user of the application.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Pavan Nikhilesh

---
V4: reworked following review by Jerin
V3: extensive rework following feedback. Please re-review!
---
 lib/eventdev/rte_eventdev.h | 77 +++++++++++++++++++++++--------------
 1 file changed, 48 insertions(+), 29 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 72814719b2..6d881bd665 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -1397,25 +1397,36 @@ struct rte_event_vector {
 /**< Ordered scheduling
  *
  * Events from an ordered flow of an event queue can be scheduled to multiple
- * ports for concurrent processing while maintaining the original event order.
- * This scheme enables the user to achieve high single flow throughput by
- * avoiding SW synchronization for ordering between ports which bound to cores.
- *
- * The source flow ordering from an event queue is maintained when events are
- * enqueued to their destination queue within the same ordered flow context.
- * An event port holds the context until application call
- * rte_event_dequeue_burst() from the same port, which implicitly releases
- * the context.
- * User may allow the scheduler to release the context earlier than that
- * by invoking rte_event_enqueue_burst() with RTE_EVENT_OP_RELEASE operation.
- *
- * Events from the source queue appear in their original order when dequeued
- * from a destination queue.
- * Event ordering is based on the received event(s), but also other
- * (newly allocated or stored) events are ordered when enqueued within the same
- * ordered context. Events not enqueued (e.g. released or stored) within the
- * context are considered missing from reordering and are skipped at this time
- * (but can be ordered again within another context).
+ * ports for concurrent processing while maintaining the original event order,
+ * i.e. the order in which they were first enqueued to that queue.
+ * This scheme allows events pertaining to the same, potentially large, flow to
+ * be processed in parallel on multiple cores without incurring any
+ * application-level order restoration logic overhead.
+ *
+ * After events are dequeued from a set of ports, as those events are re-enqueued
+ * to another queue (with the op field set to @ref RTE_EVENT_OP_FORWARD), the event
+ * device restores the original event order - including events returned from all
+ * ports in the set - before the events are placed on the destination queue,
+ * for subsequent scheduling to ports.
+ *
+ * Any events not forwarded, i.e. dropped explicitly via RELEASE or implicitly
+ * released by the next dequeue operation on a port, are skipped by the reordering
+ * stage and do not affect the reordering of other returned events.
+ *
+ * Any NEW events sent on a port are not ordered with respect to FORWARD events sent
+ * on the same port, since they have no original event order. They also are not
+ * ordered with respect to NEW events enqueued on other ports.
+ * However, NEW events to the same destination queue from the same port are guaranteed
+ * to be enqueued in the order they were submitted via rte_event_enqueue_burst().
+ *
+ * NOTE:
+ * In restoring event order of forwarded events, the eventdev API guarantees that
+ * all events from the same flow (i.e. same @ref rte_event.flow_id,
+ * @ref rte_event.priority and @ref rte_event.queue_id) will be put in the original
+ * order before being forwarded to the destination queue.
+ * Some eventdevs may implement stricter ordering to achieve this aim,
+ * for example, restoring the order across *all* flows dequeued from the same ORDERED
+ * queue.
  *
  * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE
  */
@@ -1423,18 +1434,26 @@ struct rte_event_vector {
 #define RTE_SCHED_TYPE_ATOMIC 1
 /**< Atomic scheduling
  *
- * Events from an atomic flow of an event queue can be scheduled only to a
+ * Events from an atomic flow, identified by a combination of @ref rte_event.flow_id,
+ * @ref rte_event.queue_id and @ref rte_event.priority, can be scheduled only to a
  * single port at a time. The port is guaranteed to have exclusive (atomic)
  * access to the associated flow context, which enables the user to avoid SW
- * synchronization. Atomic flows also help to maintain event ordering
- * since only one port at a time can process events from a flow of an
- * event queue.
- *
- * The atomic queue synchronization context is dedicated to the port until
- * application call rte_event_dequeue_burst() from the same port,
- * which implicitly releases the context. User may allow the scheduler to
- * release the context earlier than that by invoking rte_event_enqueue_burst()
- * with RTE_EVENT_OP_RELEASE operation.
+ * synchronization. Atomic flows also maintain event ordering
+ * since only one port at a time can process events from each flow of an
+ * event queue, and events within a flow are not reordered within the scheduler.
+ *
+ * An atomic flow is locked to a port when events from that flow are first
+ * scheduled to that port. That lock remains in place until the
+ * application calls rte_event_dequeue_burst() from the same port,
+ * which implicitly releases the lock (if the @ref RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL flag is not set).
+ * User may allow the scheduler to release the lock earlier than that by invoking
+ * rte_event_enqueue_burst() with RTE_EVENT_OP_RELEASE operation for each event from that flow.
+ *
+ * NOTE: Where multiple events from the same queue and atomic flow are scheduled to a port,
+ * the lock for that flow is only released once the last event from the flow is released,
+ * or forwarded to another queue. So long as there is at least one event from an atomic
+ * flow scheduled to a port/core (including any events in the port's dequeue queue, not yet read
+ * by the application), that port will hold the synchronization lock for that flow.
  *
  * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE
  */
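
As an illustration of the ordered-scheduling behaviour described in the
comments above, here is a minimal worker-loop sketch in C (not part of the
patch itself). The process_event() helper, the burst size, and the
dev_id/port_id/next_queue_id parameters are hypothetical placeholders;
setup and error handling are omitted.

#include <stdint.h>
#include <rte_eventdev.h>

#define BURST 32

/* Placeholder for the application's per-event work. */
static void
process_event(struct rte_event *ev)
{
	(void)ev;
}

/* One worker among many, each polling its own port on the same ORDERED
 * queue. Events may be processed out of order across the workers; the
 * scheduler restores the original enqueue order before the forwarded
 * events appear on next_queue_id. */
static void
ordered_worker(uint8_t dev_id, uint8_t port_id, uint8_t next_queue_id)
{
	struct rte_event ev[BURST];
	uint16_t i, n, sent;

	for (;;) {
		n = rte_event_dequeue_burst(dev_id, port_id, ev, BURST, 0);

		for (i = 0; i < n; i++) {
			process_event(&ev[i]);
			/* FORWARD keeps the event inside the ordering
			 * context, so it takes part in order restoration. */
			ev[i].op = RTE_EVENT_OP_FORWARD;
			ev[i].queue_id = next_queue_id;
		}

		/* The enqueue may accept fewer than n events under
		 * back-pressure; retry the remainder. */
		for (sent = 0; sent < n;)
			sent += rte_event_enqueue_burst(dev_id, port_id,
							ev + sent, n - sent);
	}
}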
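
Similarly, a sketch of the atomic flow-lock behaviour, under the same
assumptions: handle_flow_event() is a hypothetical handler that can update
per-flow state without application-level locking, since the scheduler
guarantees that only one port holds events from a given atomic flow at a
time.

#include <stdbool.h>
#include <stdint.h>
#include <rte_eventdev.h>

#define BURST 32

/* Hypothetical handler: safe to touch per-flow state without a lock,
 * because no other port can hold events from this atomic flow while
 * this port does. Returns false when the event should be dropped. */
static bool
handle_flow_event(struct rte_event *ev)
{
	(void)ev;
	return true;
}

static void
atomic_worker(uint8_t dev_id, uint8_t port_id, uint8_t next_queue_id)
{
	struct rte_event ev[BURST];
	uint16_t i, n, sent;

	for (;;) {
		/* This dequeue also implicitly releases any flow locks still
		 * held for the previous batch (unless the port was configured
		 * with RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL). */
		n = rte_event_dequeue_burst(dev_id, port_id, ev, BURST, 0);

		for (i = 0; i < n; i++) {
			if (handle_flow_event(&ev[i])) {
				/* Forward; the flow lock is handed on once the
				 * last event of the flow leaves this port. */
				ev[i].op = RTE_EVENT_OP_FORWARD;
				ev[i].queue_id = next_queue_id;
			} else {
				/* Drop, giving up this event's hold on the
				 * flow lock without forwarding anything. */
				ev[i].op = RTE_EVENT_OP_RELEASE;
			}
		}

		for (sent = 0; sent < n;)
			sent += rte_event_enqueue_burst(dev_id, port_id,
							ev + sent, n - sent);
	}
}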