From patchwork Mon Jun 27 09:57:01 2022
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 113468
X-Patchwork-Delegate: jerinj@marvell.com
Subject: [PATCH 1/2] doc: add enqueue depth for new event type
Date: Mon, 27 Jun 2022 15:27:01 +0530
Message-ID: <20220627095702.8047-1-pbhagavatula@marvell.com>

From: Pavan Nikhilesh

A new field ``max_event_port_enqueue_new_burst`` will be added to the
structure ``rte_event_dev_info``. The field defines the max enqueue burst
size of new events (OP_NEW) supported by the underlying event device.
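As a rough illustration only (not part of this patch), the sketch below shows
how an application might consume the proposed field once it is added; the -1
"no limit" convention follows the documentation introduced in patch 2/2:

#include <rte_common.h>
#include <rte_eventdev.h>

/* Illustrative sketch, assuming the proposed field exists: clamp the
 * application's OP_NEW burst size to the device limit reported in
 * rte_event_dev_info, where -1 means "no limit".
 */
static uint16_t
pick_new_event_burst_size(uint8_t dev_id, uint16_t app_burst)
{
	struct rte_event_dev_info info;

	if (rte_event_dev_info_get(dev_id, &info) < 0)
		return 0;

	if (info.max_event_port_enqueue_new_burst < 0)
		return app_burst;

	return RTE_MIN(app_burst,
		       (uint16_t)info.max_event_port_enqueue_new_burst);
}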
Signed-off-by: Pavan Nikhilesh
Acked-by: Jerin Jacob
Acked-by: Hemant Agrawal
Acked-by: Harry van Haaren
---
 doc/guides/rel_notes/deprecation.rst | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 4e5b23c53d..071317e8e3 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -125,3 +125,8 @@ Deprecation Notices
   applications should be updated to use the ``dmadev`` library instead,
   with the underlying HW-functionality being provided by the ``ioat`` or
   ``idxd`` dma drivers
+
+* eventdev: The structure ``rte_event_dev_info`` will be extended to include the
+  max enqueue burst size of new events supported by the underlying event device.
+  A new field ``max_event_port_enqueue_new_burst`` will be added to the structure
+  ``rte_event_dev_info`` in DPDK 22.11.

From patchwork Mon Jun 27 09:57:02 2022
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 113467
X-Patchwork-Delegate: jerinj@marvell.com
Subject: [PATCH 2/2] eventdev: add function to enq new events to the same queue
Date: Mon, 27 Jun 2022 15:27:02 +0530
Message-ID: <20220627095702.8047-2-pbhagavatula@marvell.com>
In-Reply-To: <20220627095702.8047-1-pbhagavatula@marvell.com>
References: <20220627095702.8047-1-pbhagavatula@marvell.com>

From: Pavan Nikhilesh

Introduce a new fastpath function to enqueue events with operation type
*OP_NEW* to the same destination event queue. This function can be used
as a hint to the PMD to use an optimized enqueue sequence.

Signed-off-by: Pavan Nikhilesh
---
 lib/eventdev/eventdev_pmd.h      |  5 +-
 lib/eventdev/eventdev_private.c  | 13 ++++++
 lib/eventdev/rte_eventdev.h      | 80 +++++++++++++++++++++++++++++++-
 lib/eventdev/rte_eventdev_core.h | 11 ++++-
 4 files changed, 105 insertions(+), 4 deletions(-)

diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index 69402668d8..f0bb97fb89 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -178,7 +178,10 @@ struct rte_eventdev {
 	/**< Pointer to PMD eth Tx adapter enqueue function. */
 	event_crypto_adapter_enqueue_t ca_enqueue;
 
-	uint64_t reserved_64s[4]; /**< Reserved for future fields */
+	event_enqueue_queue_burst_t enqueue_new_same_dest;
+	/**< PMD enqueue burst of new events to the same destination queue. */
+
+	uint64_t reserved_64s[3]; /**< Reserved for future fields */
 	void *reserved_ptrs[3];	  /**< Reserved for future fields */
 } __rte_cache_aligned;
 
diff --git a/lib/eventdev/eventdev_private.c b/lib/eventdev/eventdev_private.c
index 1d3d9d357e..53d1db281b 100644
--- a/lib/eventdev/eventdev_private.c
+++ b/lib/eventdev/eventdev_private.c
@@ -24,6 +24,17 @@ dummy_event_enqueue_burst(__rte_unused void *port,
 	return 0;
 }
 
+static uint16_t
+dummy_event_enqueue_queue_burst(__rte_unused void *port,
+				__rte_unused uint8_t queue,
+				__rte_unused const struct rte_event ev[],
+				__rte_unused uint16_t nb_events)
+{
+	RTE_EDEV_LOG_ERR(
+		"event enqueue burst requested for unconfigured event device");
+	return 0;
+}
+
 static uint16_t
 dummy_event_dequeue(__rte_unused void *port, __rte_unused struct rte_event *ev,
 		    __rte_unused uint64_t timeout_ticks)
@@ -90,6 +101,7 @@ event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op)
 		.enqueue_burst = dummy_event_enqueue_burst,
 		.enqueue_new_burst = dummy_event_enqueue_burst,
 		.enqueue_forward_burst = dummy_event_enqueue_burst,
+		.enqueue_new_same_dest = dummy_event_enqueue_queue_burst,
 		.dequeue = dummy_event_dequeue,
 		.dequeue_burst = dummy_event_dequeue_burst,
 		.maintain = dummy_event_maintain,
@@ -111,6 +123,7 @@ event_dev_fp_ops_set(struct rte_event_fp_ops *fp_op,
 	fp_op->enqueue_burst = dev->enqueue_burst;
 	fp_op->enqueue_new_burst = dev->enqueue_new_burst;
 	fp_op->enqueue_forward_burst = dev->enqueue_forward_burst;
+	fp_op->enqueue_new_same_dest = dev->enqueue_new_same_dest;
 	fp_op->dequeue = dev->dequeue;
 	fp_op->dequeue_burst = dev->dequeue_burst;
 	fp_op->maintain = dev->maintain;
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 6a6f6ea4c1..2aa563740b 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -425,8 +425,9 @@ struct rte_event_dev_info {
 	 * A device that does not support bulk dequeue will set this as 1.
 	 */
 	uint32_t max_event_port_enqueue_depth;
-	/**< Maximum number of events can be enqueued at a time from an
-	 * event port by this device.
+	/**< Maximum number of events that can be enqueued at a time to an
+	 * event port by this device, applicable when rte_event::op is either
+	 * *RTE_EVENT_OP_FORWARD* or *RTE_EVENT_OP_RELEASE*.
 	 * A device that does not support bulk enqueue will set this as 1.
 	 */
 	uint8_t max_event_port_links;
@@ -446,6 +447,12 @@ struct rte_event_dev_info {
 	 * device. These ports and queues are not accounted for in
 	 * max_event_ports or max_event_queues.
 	 */
+	int16_t max_event_port_enqueue_new_burst;
+	/**< Maximum number of events that can be enqueued at a time to an
+	 * event port by this device, applicable when rte_event::op is set to
+	 * *RTE_EVENT_OP_NEW*.
+	 * A device with no limit will set this value to -1.
+	 */
 };
 
 /**
@@ -2082,6 +2089,75 @@ rte_event_enqueue_forward_burst(uint8_t dev_id, uint8_t port_id,
 					 fp_ops->enqueue_forward_burst);
 }
 
+/**
+ * Enqueue a burst of event objects of operation type *RTE_EVENT_OP_NEW* on
+ * an event device designated by its *dev_id* through the event port specified
+ * by *port_id* to the same queue specified by *queue_id*.
+ *
+ * Provides the same functionality as rte_event_enqueue_burst(), except that
+ * the application can use this API when all objects in the burst contain
+ * the enqueue operation type *RTE_EVENT_OP_NEW* and are destined to the
+ * same queue. This specialized function provides an additional hint to the
+ * PMD, which can optimize the enqueue sequence if possible.
+ *
+ * The rte_event_enqueue_new_queue_burst() result is undefined if the enqueue
+ * burst has any event object of operation type != RTE_EVENT_OP_NEW.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param port_id
+ *   The identifier of the event port.
+ * @param queue_id
+ *   The identifier of the destination event queue.
+ * @param ev
+ *   Points to an array of *nb_events* objects of type *rte_event* structure
+ *   which contain the event object enqueue operations to be processed.
+ * @param nb_events
+ *   The number of event objects to enqueue, typically number of
+ *   rte_event_port_attr_get(...RTE_EVENT_PORT_ATTR_ENQ_DEPTH...)
+ *   available for this port.
+ *
+ * @return
+ *   The number of event objects actually enqueued on the event device. The
+ *   return value can be less than the value of the *nb_events* parameter when
+ *   the event device's queue is full or if invalid parameters are specified in
+ *   a *rte_event*. If the return value is less than *nb_events*, the remaining
+ *   events at the end of ev[] are not consumed and the caller has to take care
+ *   of them, and rte_errno is set accordingly. Possible errno values include:
+ *   - EINVAL  The port ID is invalid, device ID is invalid, an event's queue
+ *             ID is invalid, or an event's sched type doesn't match the
+ *             capabilities of the destination queue.
+ *   - ENOSPC  The event port was backpressured and unable to enqueue
+ *             one or more events. This error code is only applicable to
+ *             closed systems.
+ * @see rte_event_port_attr_get(), RTE_EVENT_PORT_ATTR_ENQ_DEPTH
+ * @see rte_event_enqueue_burst()
+ */
+static inline uint16_t
+rte_event_enqueue_new_queue_burst(uint8_t dev_id, uint8_t port_id,
+				  uint8_t queue_id, const struct rte_event ev[],
+				  uint16_t nb_events)
+{
+	const struct rte_event_fp_ops *fp_ops;
+	void *port;
+
+	fp_ops = &rte_event_fp_ops[dev_id];
+	port = fp_ops->data[port_id];
+#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
+	if (dev_id >= RTE_EVENT_MAX_DEVS ||
+	    port_id >= RTE_EVENT_MAX_PORTS_PER_DEV) {
+		rte_errno = EINVAL;
+		return 0;
+	}
+
+	if (port == NULL) {
+		rte_errno = EINVAL;
+		return 0;
+	}
+#endif
+	return fp_ops->enqueue_new_same_dest(port, queue_id, ev, nb_events);
+}
+
 /**
  * Dequeue a burst of events objects or an event object from the event port
  * designated by its *event_port_id*, on an event device designated
diff --git a/lib/eventdev/rte_eventdev_core.h b/lib/eventdev/rte_eventdev_core.h
index c328bdbc82..4d7d27e82d 100644
--- a/lib/eventdev/rte_eventdev_core.h
+++ b/lib/eventdev/rte_eventdev_core.h
@@ -20,6 +20,13 @@ typedef uint16_t (*event_enqueue_burst_t)(void *port,
 					  uint16_t nb_events);
 /**< @internal Enqueue burst of events on port of a device */
 
+typedef uint16_t (*event_enqueue_queue_burst_t)(void *port, uint8_t queue_id,
+						const struct rte_event ev[],
+						uint16_t nb_events);
+/**< @internal Enqueue burst of events on port of a device to a specific
+ * event queue.
+ */
+
 typedef uint16_t (*event_dequeue_t)(void *port, struct rte_event *ev,
 				    uint64_t timeout_ticks);
 /**< @internal Dequeue event from port of a device */
@@ -65,7 +72,9 @@ struct rte_event_fp_ops {
 	/**< PMD Tx adapter enqueue same destination function. */
 	event_crypto_adapter_enqueue_t ca_enqueue;
 	/**< PMD Crypto adapter enqueue function. */
-	uintptr_t reserved[6];
+	event_enqueue_queue_burst_t enqueue_new_same_dest;
+	/**< PMD enqueue burst of new events to the same destination queue. */
+	uintptr_t reserved[5];
 } __rte_cache_aligned;
 
 extern struct rte_event_fp_ops rte_event_fp_ops[RTE_EVENT_MAX_DEVS];
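
For reference, a minimal usage sketch of the proposed API, assuming this
series is applied (rte_event_enqueue_new_queue_burst() is not part of the
released eventdev API). The caller is assumed to have filled ev[] with
*RTE_EVENT_OP_NEW* events that are all destined to the same queue:

#include <errno.h>

#include <rte_errno.h>
#include <rte_eventdev.h>

/* Illustrative producer loop: every event in ev[] has op == RTE_EVENT_OP_NEW
 * and the same destination queue, so the single-queue enqueue hint can be
 * used. Retries while the port is backpressured (ENOSPC) and gives up on
 * other errors such as EINVAL.
 */
static uint16_t
produce_new_events(uint8_t dev_id, uint8_t port_id, uint8_t queue_id,
		   const struct rte_event ev[], uint16_t nb_events)
{
	uint16_t sent = 0;

	while (sent < nb_events) {
		uint16_t n = rte_event_enqueue_new_queue_burst(dev_id, port_id,
							       queue_id,
							       &ev[sent],
							       nb_events - sent);

		sent += n;
		if (n == 0 && rte_errno != ENOSPC)
			break;
	}

	return sent;
}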