From patchwork Fri Jan 19 17:43:46 2024
X-Patchwork-Submitter: Bruce Richardson <bruce.richardson@intel.com>
X-Patchwork-Id: 136003
X-Patchwork-Delegate: jerinj@marvell.com
From: Bruce Richardson <bruce.richardson@intel.com>
To: dev@dpdk.org
Cc: jerinj@marvell.com, mattias.ronnblom@ericsson.com,
 abdullah.sevincer@intel.com, sachin.saxena@oss.nxp.com,
 hemant.agrawal@nxp.com, pbhagavatula@marvell.com,
 pravin.pathak@intel.com, Bruce Richardson <bruce.richardson@intel.com>
Subject: [PATCH v2 11/11] eventdev: RFC clarify docs on event object fields
Date: Fri, 19 Jan 2024 17:43:46 +0000
Message-Id: <20240119174346.108905-12-bruce.richardson@intel.com>
In-Reply-To: <20240119174346.108905-1-bruce.richardson@intel.com>
References: <20240118134557.73172-1-bruce.richardson@intel.com>
 <20240119174346.108905-1-bruce.richardson@intel.com>
List-Id: DPDK patches and discussions <dev.dpdk.org>

Clarify the meaning of the NEW, FORWARD and RELEASE event types. For the
fields in the "rte_event" struct, enhance the comments on each to clarify
the field's use, whether it is preserved between enqueue and dequeue, and
its role, if any, in scheduling.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>

---
As with the previous patch, please review this patch to ensure that the
expected semantics of the various event types and event fields have not
changed in an unexpected way.
---
 lib/eventdev/rte_eventdev.h | 105 ++++++++++++++++++++++++++----------
 1 file changed, 77 insertions(+), 28 deletions(-)

--
2.40.1

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index cb13602ffb..4eff1c4958 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -1416,21 +1416,25 @@ struct rte_event_vector {
 /* Event enqueue operations */

 #define RTE_EVENT_OP_NEW 0
-/**< The event producers use this operation to inject a new event to the
+/**< The @ref rte_event.op field should be set to this type to inject a new event to the
  * event device.
 */
 #define RTE_EVENT_OP_FORWARD 1
-/**< The CPU use this operation to forward the event to different event queue or
- * change to new application specific flow or schedule type to enable
- * pipelining.
+/**< SW should set the @ref rte_event.op field to this type to return a
+ * previously dequeued event to the event device for further processing.
  *
- * This operation must only be enqueued to the same port that the
+ * This event *must* be enqueued to the same port that the
  * event to be forwarded was dequeued from.
+ *
+ * The event's fields, including (but not limited to) flow_id, scheduling type,
+ * destination queue, and event payload e.g. mbuf pointer, may all be updated as
+ * desired by software, but the @ref rte_event.impl_opaque field must
+ * be kept to the same value as was present when the event was dequeued.
  */
 #define RTE_EVENT_OP_RELEASE 2
 /**< Release the flow context associated with the schedule type.
  *
- * If current flow's scheduler type method is *RTE_SCHED_TYPE_ATOMIC*
+ * If current flow's scheduler type method is @ref RTE_SCHED_TYPE_ATOMIC
  * then this function hints the scheduler that the user has completed critical
  * section processing in the current atomic context.
  * The scheduler is now allowed to schedule events from the same flow from
@@ -1442,21 +1446,19 @@ struct rte_event_vector {
  * performance, but the user needs to design carefully the split into critical
  * vs non-critical sections.
  *
- * If current flow's scheduler type method is *RTE_SCHED_TYPE_ORDERED*
- * then this function hints the scheduler that the user has done all that need
- * to maintain event order in the current ordered context.
- * The scheduler is allowed to release the ordered context of this port and
- * avoid reordering any following enqueues.
- *
- * Early ordered context release may increase parallelism and thus system
- * performance.
+ * If current flow's scheduler type method is @ref RTE_SCHED_TYPE_ORDERED
+ * then this function informs the scheduler that the current event has
+ * completed processing and will not be returned to the scheduler, i.e.
+ * it has been dropped, and so the reordering context for that event
+ * should be considered filled.
  *
- * If current flow's scheduler type method is *RTE_SCHED_TYPE_PARALLEL*
+ * If current flow's scheduler type method is @ref RTE_SCHED_TYPE_PARALLEL
  * or no scheduling context is held then this function may be an NOOP,
  * depending on the implementation.
  *
  * This operation must only be enqueued to the same port that the
- * event to be released was dequeued from.
+ * event to be released was dequeued from. The @ref rte_event.impl_opaque
+ * field in the release event must match that in the original dequeued event.
  */

 /**
@@ -1473,53 +1475,100 @@ struct rte_event {
 	/**< Targeted flow identifier for the enqueue and
 	 * dequeue operation.
 	 * The value must be in the range of
-	 * [0, nb_event_queue_flows - 1] which
+	 * [0, @ref rte_event_dev_config.nb_event_queue_flows - 1] which
 	 * previously supplied to rte_event_dev_configure().
+	 *
+	 * For @ref RTE_SCHED_TYPE_ATOMIC, this field is used to identify a
+	 * flow context for atomicity, such that events from each individual flow
+	 * will only be scheduled to one port at a time.
+	 *
+	 * This field is preserved between enqueue and dequeue when
+	 * a device reports the @ref RTE_EVENT_DEV_CAP_CARRY_FLOW_ID
+	 * capability. Otherwise the value is implementation dependent
+	 * on dequeue.
 	 */
 	uint32_t sub_event_type:8;
 	/**< Sub-event types based on the event source.
+	 *
+	 * This field is preserved between enqueue and dequeue.
+	 * This field is for SW or event adapter use,
+	 * and is unused in scheduling decisions.
+	 *
 	 * @see RTE_EVENT_TYPE_CPU
 	 */
 	uint32_t event_type:4;
-	/**< Event type to classify the event source.
-	 * @see RTE_EVENT_TYPE_ETHDEV, (RTE_EVENT_TYPE_*)
+	/**< Event type to classify the event source. (RTE_EVENT_TYPE_*)
+	 *
+	 * This field is preserved between enqueue and dequeue.
+	 * This field is for SW or event adapter use,
+	 * and is unused in scheduling decisions.
 	 */
 	uint8_t op:2;
-	/**< The type of event enqueue operation - new/forward/
-	 * etc.This field is not preserved across an instance
+	/**< The type of event enqueue operation - new/forward/ etc.
+	 *
+	 * This field is *not* preserved across an instance
 	 * and is undefined on dequeue.
-	 * @see RTE_EVENT_OP_NEW, (RTE_EVENT_OP_*)
+	 *
+	 * @see RTE_EVENT_OP_NEW
+	 * @see RTE_EVENT_OP_FORWARD
+	 * @see RTE_EVENT_OP_RELEASE
 	 */
 	uint8_t rsvd:4;
-	/**< Reserved for future use */
+	/**< Reserved for future use.
+	 *
+	 * Should be set to zero on enqueue. Zero on dequeue.
+	 */
 	uint8_t sched_type:2;
 	/**< Scheduler synchronization type (RTE_SCHED_TYPE_*)
 	 * associated with flow id on a given event queue
 	 * for the enqueue and dequeue operation.
+	 *
+	 * This field is used to determine the scheduling type
+	 * for events sent to queues where @ref RTE_EVENT_QUEUE_CFG_ALL_TYPES
+	 * is supported.
+	 * For queues where only a single scheduling type is available,
+	 * this field must be set to match the configured scheduling type.
+	 *
+	 * This field is preserved between enqueue and dequeue.
+	 *
+	 * @see RTE_SCHED_TYPE_ORDERED
+	 * @see RTE_SCHED_TYPE_ATOMIC
+	 * @see RTE_SCHED_TYPE_PARALLEL
 	 */
 	uint8_t queue_id;
 	/**< Targeted event queue identifier for the enqueue or
 	 * dequeue operation.
 	 * The value must be in the range of
-	 * [0, nb_event_queues - 1] which previously supplied to
-	 * rte_event_dev_configure().
+	 * [0, @ref rte_event_dev_config.nb_event_queues - 1] which was
+	 * previously supplied to rte_event_dev_configure().
+	 *
+	 * This field is preserved between enqueue and dequeue.
 	 */
 	uint8_t priority;
 	/**< Event priority relative to other events in the
 	 * event queue. The requested priority should in the
-	 * range of [RTE_EVENT_DEV_PRIORITY_HIGHEST,
-	 * RTE_EVENT_DEV_PRIORITY_LOWEST].
+	 * range of [@ref RTE_EVENT_DEV_PRIORITY_HIGHEST,
+	 * @ref RTE_EVENT_DEV_PRIORITY_LOWEST].
 	 * The implementation shall normalize the requested
 	 * priority to supported priority value.
+	 *
 	 * Valid when the device has
-	 * RTE_EVENT_DEV_CAP_EVENT_QOS capability.
+	 * @ref RTE_EVENT_DEV_CAP_EVENT_QOS capability.
+	 * Ignored otherwise.
+	 *
+	 * This field is preserved between enqueue and dequeue.
 	 */
 	uint8_t impl_opaque;
 	/**< Implementation specific opaque value.
+	 *
 	 * An implementation may use this field to hold
 	 * implementation specific value to share between
 	 * dequeue and enqueue operation.
+	 *
 	 * The application should not modify this field.
+	 * Its value is implementation dependent on dequeue,
+	 * and must be returned unmodified on enqueue when
+	 * op type is @ref RTE_EVENT_OP_FORWARD or @ref RTE_EVENT_OP_RELEASE.
 	 */
 	};
 };
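
As an illustration of the FORWARD semantics the patch clarifies (the application may retarget queue, flow and payload, but impl_opaque must be carried back unchanged, and reserved bits should be zero on enqueue), here is a small standalone sketch. The struct, constants and helper (mock_event, MOCK_OP_*, mock_make_forward) are hypothetical stand-ins mirroring the documented bit layout, not the real DPDK API:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical mock of the rte_event bit layout described above --
 * NOT the real DPDK struct; field names and widths follow the
 * documentation for illustration only. */
struct mock_event {
	uint32_t flow_id:20;       /* flow key; atomic-flow identity        */
	uint32_t sub_event_type:8; /* SW/adapter use, unused in scheduling  */
	uint32_t event_type:4;     /* event source classification           */
	uint8_t  op:2;             /* NEW/FORWARD/RELEASE; undef on dequeue */
	uint8_t  rsvd:4;           /* reserved; zero on enqueue             */
	uint8_t  sched_type:2;     /* ORDERED/ATOMIC/PARALLEL               */
	uint8_t  queue_id;         /* destination queue                     */
	uint8_t  priority;         /* used with the EVENT_QOS capability    */
	uint8_t  impl_opaque;      /* driver-owned; must round-trip intact  */
};

enum { MOCK_OP_NEW = 0, MOCK_OP_FORWARD = 1, MOCK_OP_RELEASE = 2 };

/* Build a FORWARD enqueue from a previously dequeued event: queue and
 * flow may be rewritten, reserved bits are cleared, and impl_opaque is
 * deliberately left exactly as dequeued, per the clarified docs. */
struct mock_event
mock_make_forward(struct mock_event deq, uint8_t dst_queue, uint32_t dst_flow)
{
	struct mock_event ev = deq;   /* start from the dequeued event */

	ev.op = MOCK_OP_FORWARD;
	ev.queue_id = dst_queue;
	ev.flow_id = dst_flow;
	ev.rsvd = 0;                  /* reserved bits zero on enqueue */
	/* ev.impl_opaque intentionally untouched */
	return ev;
}
```

The same pattern applies to RELEASE: whatever impl_opaque value arrived on dequeue must be present in the event handed back to the device.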