From patchwork Fri Jan 19 17:43:45 2024
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 136002
X-Patchwork-Delegate: jerinj@marvell.com
From: Bruce Richardson
To: dev@dpdk.org
Cc: jerinj@marvell.com, mattias.ronnblom@ericsson.com,
 abdullah.sevincer@intel.com, sachin.saxena@oss.nxp.com,
 hemant.agrawal@nxp.com, pbhagavatula@marvell.com,
 pravin.pathak@intel.com, Bruce Richardson
Subject: [PATCH v2 10/11] eventdev: RFC clarify comments on scheduling types
Date: Fri, 19 Jan 2024 17:43:45 +0000
Message-Id: <20240119174346.108905-11-bruce.richardson@intel.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20240119174346.108905-1-bruce.richardson@intel.com>
References: <20240118134557.73172-1-bruce.richardson@intel.com>
 <20240119174346.108905-1-bruce.richardson@intel.com>
List-Id: DPDK patches and discussions

The description of ordered and atomic scheduling given in the eventdev
doxygen documentation was not always clear. Try to simplify this so that
it is clearer for the end-user of the application.

Signed-off-by: Bruce Richardson

---
NOTE TO REVIEWERS:
I've updated this based on my understanding of what these scheduling
types are meant to do. It matches my understanding of the support
offered by our Intel DLB2 driver, as well as the SW eventdev, and I
believe the DSW eventdev too. If it does not match the behaviour of
other eventdevs, let's have a discussion to see if we can reach a good
definition of the behaviour that is common.
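To make the discussion concrete, here is a minimal worker-loop sketch of
how an application might use an ORDERED queue under the semantics
described in this patch. It is illustrative only, not code from the
patch: the dev/port/queue ids, the burst size and the process_event()
helper are hypothetical placeholders, and partial-enqueue retry handling
is omitted for brevity.

#include <stdint.h>
#include <stddef.h>
#include <rte_eventdev.h>

/* hypothetical application handler; returns < 0 for events to drop */
static int
process_event(struct rte_event *ev)
{
	return ev->event_ptr != NULL ? 0 : -1;
}

static void
ordered_worker(uint8_t dev_id, uint8_t port_id, uint8_t next_queue_id)
{
	struct rte_event ev[32];
	uint16_t i, n;

	for (;;) {
		n = rte_event_dequeue_burst(dev_id, port_id, ev, 32, 0);
		for (i = 0; i < n; i++) {
			if (process_event(&ev[i]) < 0) {
				/* drop the event; RELEASEd events are
				 * skipped by the reordering stage */
				ev[i].op = RTE_EVENT_OP_RELEASE;
				continue;
			}
			/* return the event; the original dequeue order
			 * is restored before it reaches the next queue */
			ev[i].op = RTE_EVENT_OP_FORWARD;
			ev[i].queue_id = next_queue_id;
		}
		if (n > 0)
			rte_event_enqueue_burst(dev_id, port_id, ev, n);
	}
}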
---
 lib/eventdev/rte_eventdev.h | 47 ++++++++++++++++++++-----------------
 1 file changed, 25 insertions(+), 22 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 2c6576e921..cb13602ffb 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -1313,26 +1313,24 @@ struct rte_event_vector {
 #define RTE_SCHED_TYPE_ORDERED 0
 /**< Ordered scheduling
  *
- * Events from an ordered flow of an event queue can be scheduled to multiple
+ * Events from an ordered event queue can be scheduled to multiple
  * ports for concurrent processing while maintaining the original event order.
  * This scheme enables the user to achieve high single flow throughput by
- * avoiding SW synchronization for ordering between ports which bound to cores.
- *
- * The source flow ordering from an event queue is maintained when events are
- * enqueued to their destination queue within the same ordered flow context.
- * An event port holds the context until application call
- * rte_event_dequeue_burst() from the same port, which implicitly releases
- * the context.
- * User may allow the scheduler to release the context earlier than that
- * by invoking rte_event_enqueue_burst() with RTE_EVENT_OP_RELEASE operation.
- *
- * Events from the source queue appear in their original order when dequeued
- * from a destination queue.
- * Event ordering is based on the received event(s), but also other
- * (newly allocated or stored) events are ordered when enqueued within the same
- * ordered context. Events not enqueued (e.g. released or stored) within the
- * context are considered missing from reordering and are skipped at this time
- * (but can be ordered again within another context).
+ * avoiding SW synchronization for ordering between ports which are polled by
+ * different cores.
+ *
+ * As events are scheduled to ports/cores, the original event order from the
+ * source event queue is recorded internally in the scheduler. As events are
+ * returned (via FORWARD type enqueue) to the scheduler, the original event
+ * order is restored before the events are enqueued into their new destination
+ * queue.
+ *
+ * Any events not forwarded, i.e. dropped explicitly via RELEASE or implicitly
+ * released by the next dequeue from a port, are skipped by the reordering
+ * stage and do not affect the reordering of returned events.
+ *
+ * The ordering behaviour of NEW events with respect to FORWARD events is
+ * undefined and implementation dependent.
  *
  * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE
  */
@@ -1340,18 +1338,23 @@ struct rte_event_vector {
 #define RTE_SCHED_TYPE_ATOMIC 1
 /**< Atomic scheduling
  *
- * Events from an atomic flow of an event queue can be scheduled only to a
+ * Events from an atomic flow, identified by @ref rte_event.flow_id,
+ * of an event queue can be scheduled only to a
  * single port at a time. The port is guaranteed to have exclusive (atomic)
  * access to the associated flow context, which enables the user to avoid SW
  * synchronization. Atomic flows also help to maintain event ordering
- * since only one port at a time can process events from a flow of an
+ * since only one port at a time can process events from each flow of an
  * event queue.
  *
- * The atomic queue synchronization context is dedicated to the port until
+ * The atomic queue synchronization context for a flow is dedicated to the port until
  * application call rte_event_dequeue_burst() from the same port,
  * which implicitly releases the context. User may allow the scheduler to
  * release the context earlier than that by invoking rte_event_enqueue_burst()
- * with RTE_EVENT_OP_RELEASE operation.
+ * with RTE_EVENT_OP_RELEASE operation for each event from that flow. The context
+ * is only released once the last event from the flow, outstanding on the port,
+ * is released. So long as there is one event from an atomic flow scheduled to
+ * a port/core (including any events in the port's dequeue queue, not yet read
+ * by the application), that port will hold the synchronization context.
  *
  * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE
  */
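As a companion illustration to the atomic text above, a similar hedged
sketch of an atomic-flow worker, again not part of the patch: the ids
and the per-flow counter table are hypothetical. Because only one port
at a time can hold a flow's context, the worker can update per-flow
state without locks; the context is released implicitly by a later
dequeue once every event of the flow has been forwarded or released.

#include <stdint.h>
#include <rte_eventdev.h>

#define MAX_FLOWS 1024

/* hypothetical per-flow state; safe to update without locks because
 * this port holds the atomic context for every flow it has dequeued */
static uint64_t flow_counters[MAX_FLOWS];

static void
atomic_worker(uint8_t dev_id, uint8_t port_id, uint8_t next_queue_id)
{
	struct rte_event ev[32];
	uint16_t i, n;

	for (;;) {
		n = rte_event_dequeue_burst(dev_id, port_id, ev, 32, 0);
		for (i = 0; i < n; i++) {
			/* exclusive access: no other port can be
			 * processing this flow_id at this moment */
			flow_counters[ev[i].flow_id % MAX_FLOWS]++;
			ev[i].op = RTE_EVENT_OP_FORWARD;
			ev[i].queue_id = next_queue_id;
		}
		/* forwarding the last outstanding event of a flow lets
		 * the scheduler migrate that flow to another port;
		 * partial-enqueue retry handling omitted for brevity */
		if (n > 0)
			rte_event_enqueue_burst(dev_id, port_id, ev, n);
	}
}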