From patchwork Thu Oct 3 20:50:00 2024
X-Patchwork-Submitter: Abdullah Sevincer
X-Patchwork-Id: 145002
X-Patchwork-Delegate: jerinj@marvell.com
From: Abdullah Sevincer
To: dev@dpdk.org
Cc: jerinj@marvell.com, bruce.richardson@intel.com, pravin.pathak@intel.com,
 mattias.ronnblom@ericsson.com, manish.aggarwal@intel.com, Abdullah Sevincer
Subject: [PATCH v14 1/3] eventdev: add support for independent enqueue
Date: Thu, 3 Oct 2024 15:50:00 -0500
Message-Id: <20241003205002.4090954-2-abdullah.sevincer@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20241003205002.4090954-1-abdullah.sevincer@intel.com>
References: <20240909160506.2655354-3-abdullah.sevincer@intel.com>
 <20241003205002.4090954-1-abdullah.sevincer@intel.com>
List-Id: DPDK patches and discussions

This commit adds support for the independent enqueue feature and updates
the Event Device and PMD feature lists.

A new capability, RTE_EVENT_DEV_CAP_INDEPENDENT_ENQ, is introduced. It
allows out-of-order enqueuing of RTE_EVENT_OP_FORWARD or RELEASE type
events on an event port where this capability is enabled.

To use this capability, applications need to set the flag
RTE_EVENT_PORT_CFG_INDEPENDENT_ENQ during port setup, but only if the
capability RTE_EVENT_DEV_CAP_INDEPENDENT_ENQ exists.
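For reference, a minimal usage sketch of the new capability and flag, assuming a
single already-probed event device and using only the standard eventdev setup
calls; the helper name and the device/port ids are illustrative, and most error
handling is omitted:

    #include <rte_eventdev.h>

    static int
    setup_independent_enq_port(uint8_t dev_id, uint8_t port_id)
    {
            struct rte_event_dev_info info;
            struct rte_event_port_conf port_conf;

            rte_event_dev_info_get(dev_id, &info);
            rte_event_port_default_conf_get(dev_id, port_id, &port_conf);

            /* Request independent enqueue only when the device reports the
             * capability; the flag must not be set on devices that lack it.
             */
            if (info.event_dev_cap & RTE_EVENT_DEV_CAP_INDEPENDENT_ENQ)
                    port_conf.event_port_cfg |= RTE_EVENT_PORT_CFG_INDEPENDENT_ENQ;

            return rte_event_port_setup(dev_id, port_id, &port_conf);
    }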
Signed-off-by: Abdullah Sevincer Acked-by: Mattias Rönnblom --- doc/guides/eventdevs/features/default.ini | 1 + doc/guides/eventdevs/features/dlb2.ini | 1 + doc/guides/rel_notes/release_24_11.rst | 5 +++ lib/eventdev/rte_eventdev.h | 37 +++++++++++++++++++++++ 4 files changed, 44 insertions(+) diff --git a/doc/guides/eventdevs/features/default.ini b/doc/guides/eventdevs/features/default.ini index 1cc4303fe5..7c4ee99238 100644 --- a/doc/guides/eventdevs/features/default.ini +++ b/doc/guides/eventdevs/features/default.ini @@ -22,6 +22,7 @@ carry_flow_id = maintenance_free = runtime_queue_attr = profile_links = +independent_enq = ; ; Features of a default Ethernet Rx adapter. diff --git a/doc/guides/eventdevs/features/dlb2.ini b/doc/guides/eventdevs/features/dlb2.ini index 7b80286927..c7193b47c1 100644 --- a/doc/guides/eventdevs/features/dlb2.ini +++ b/doc/guides/eventdevs/features/dlb2.ini @@ -15,6 +15,7 @@ implicit_release_disable = Y runtime_port_link = Y multiple_queue_port = Y maintenance_free = Y +independent_enq = Y [Eth Rx adapter Features] diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst index e0a9aa55a1..dee6723b70 100644 --- a/doc/guides/rel_notes/release_24_11.rst +++ b/doc/guides/rel_notes/release_24_11.rst @@ -67,6 +67,11 @@ New Features The new statistics are useful for debugging and profiling. +* **Updated Event Device Library for independent enqueue feature** + + * Added support for independent enqueue feature. Updated Event Device and + PMD feature list. + Removed Items ------------- diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h index 08e5f9320b..3e3142d4a6 100644 --- a/lib/eventdev/rte_eventdev.h +++ b/lib/eventdev/rte_eventdev.h @@ -446,6 +446,31 @@ struct rte_event; * @see RTE_SCHED_TYPE_PARALLEL */ +#define RTE_EVENT_DEV_CAP_INDEPENDENT_ENQ (1ULL << 16) +/**< Event device is capable of independent enqueue. + * A new capability, RTE_EVENT_DEV_CAP_INDEPENDENT_ENQ, will indicate that Eventdev + * supports the enqueue in any order or specifically in a different order than the + * dequeue. Eventdev PMD can either transmit events in the changed order in which + * they are enqueued or restore the original order before sending them to the + * underlying hardware device. A flag is provided during the port configuration to + * inform Eventdev PMD that the application intends to use an independent enqueue + * order on a particular port. Note that this capability only matters for Eventdevs + * supporting burst mode. + * + * To Inform PMD that the application plans to use independent enqueue order on a port + * this code example can be used: + * + * if (capability & RTE_EVENT_DEV_CAP_INDEPENDENT_ENQ) + * port_config = port_config | RTE_EVENT_PORT_CFG_INDEPENDENT_ENQ; + * + * When an implicit release is enabled on a port, Eventdev PMD will also handle + * the insertion of RELEASE events in place of dropped events. The independent enqueue + * feature only applies to FORWARD and RELEASE events. New events (op=RTE_EVENT_OP_NEW) + * will be transmitted in the order the application enqueues them and do not maintain + * any order relative to FORWARD/RELEASE events. FORWARD vs NEW relaxed ordering + * only applies to ports that have enabled independent enqueue feature. + */ + /* Event device priority levels */ #define RTE_EVENT_DEV_PRIORITY_HIGHEST 0 /**< Highest priority level for events and queues. 
@@ -1072,6 +1097,18 @@ rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id, * * @see rte_event_port_setup() */ +#define RTE_EVENT_PORT_CFG_INDEPENDENT_ENQ (1ULL << 5) +/**< Flag to enable independent enqueue. Must not be set if the device + * is not RTE_EVENT_DEV_CAP_INDEPENDENT_ENQ capable. This feature + * allows an application to enqueue RTE_EVENT_OP_FORWARD or + * RTE_EVENT_OP_RELEASE in an order different than the order the + * events were dequeued from the event device, while maintaining + * RTE_SCHED_TYPE_ATOMIC or RTE_SCHED_TYPE_ORDERED semantics. + * + * Note that this flag only matters for Eventdevs supporting burst mode. + * + * @see rte_event_port_setup() + */ /** Event port configuration structure */ struct rte_event_port_conf { From patchwork Thu Oct 3 20:50:01 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Abdullah Sevincer X-Patchwork-Id: 145003 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 6037845AA2; Thu, 3 Oct 2024 22:50:18 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id C98944065A; Thu, 3 Oct 2024 22:50:12 +0200 (CEST) Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.17]) by mails.dpdk.org (Postfix) with ESMTP id D104840613 for ; Thu, 3 Oct 2024 22:50:08 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1727988609; x=1759524609; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=fXf/f1lPfFRQl74HupgoEZ7T2m4lGC3zbPQ6jl8IxC8=; b=G6RfPDWJFZinSNR2WbSWvqKxddUeI/oeAJO12OpFOClz0bn1dmAHV9PU ut81JfhKmoa1tDIQS9gx82vIshpu/NpYTWLpcD7x9ZG4PCt5/mxKYO6ek uQNY54aCNl3vZhSD8pjJ+p68V5TioR3p8rhUlCZXV3S1M6ryicX3RO7l6 W7HZ6l7nG7sQYZ71pyR1fKSFNe1pTKUsaTx0KOwVogG7idxBuSyEV9xqX dpqfdJTaF/rmiYqbg5Jeq4rCcaiDYQeAzWmGhgXdftQP03KC6ZIUvDeNB wiUODGkYGBK/bvFq7/eR3UR0VXCK3bwYBSMSvW8V3OGIuot/SLppnfEF6 w==; X-CSE-ConnectionGUID: A0QKRAtpSKGwF+s1t8WaAQ== X-CSE-MsgGUID: LLi7AGEsQMydSEO7l7HNpw== X-IronPort-AV: E=McAfee;i="6700,10204,11214"; a="27334258" X-IronPort-AV: E=Sophos;i="6.11,175,1725346800"; d="scan'208";a="27334258" Received: from fmviesa005.fm.intel.com ([10.60.135.145]) by orvoesa109.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 03 Oct 2024 13:50:08 -0700 X-CSE-ConnectionGUID: CJD/QbrUQ5yPrgjDm1dRYQ== X-CSE-MsgGUID: unI3g8YPTkaQ6GRARPBNzQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.11,175,1725346800"; d="scan'208";a="78895976" Received: from txanpdk02.an.intel.com ([10.123.117.76]) by fmviesa005.fm.intel.com with ESMTP; 03 Oct 2024 13:50:07 -0700 From: Abdullah Sevincer To: dev@dpdk.org Cc: jerinj@marvell.com, bruce.richardson@intel.com, pravin.pathak@intel.com, mattias.ronnblom@ericsson.com, manish.aggarwal@intel.com, Abdullah Sevincer Subject: [PATCH v14 2/3] event/dlb2: add support for independent enqueue Date: Thu, 3 Oct 2024 15:50:01 -0500 Message-Id: <20241003205002.4090954-3-abdullah.sevincer@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20241003205002.4090954-1-abdullah.sevincer@intel.com> References: <20240909160506.2655354-3-abdullah.sevincer@intel.com> <20241003205002.4090954-1-abdullah.sevincer@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: 
list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org

DLB devices need events to be enqueued in the same order they are
dequeued. Applications are not supposed to change the event order between
dequeue and enqueue. Since the Eventdev standard does not impose such a
restriction, independent enqueue support is needed in the DLB PMD so that
it restores the dequeue order on enqueue if applications happen to change
it. It also adds the missing releases in places where events are dropped
by the application and implicit release is expected to handle them.

By default the feature is off on all DLB ports, and they behave the same
as in older releases. To enable the reordering feature, applications need
to add the flag RTE_EVENT_PORT_CFG_INDEPENDENT_ENQ to the port
configuration, but only if the device advertises the capability
RTE_EVENT_DEV_CAP_INDEPENDENT_ENQ.

Signed-off-by: Abdullah Sevincer Acked-by: Mattias Rönnblom --- doc/guides/prog_guide/eventdev/eventdev.rst | 33 ++ doc/guides/rel_notes/release_24_11.rst | 5 + drivers/event/dlb2/dlb2.c | 490 +++++++++++++------- drivers/event/dlb2/dlb2_avx512.c | 27 +- drivers/event/dlb2/dlb2_inline_fns.h | 8 + drivers/event/dlb2/dlb2_priv.h | 25 +- drivers/event/dlb2/rte_pmd_dlb2.h | 24 + 7 files changed, 422 insertions(+), 190 deletions(-) diff --git a/doc/guides/prog_guide/eventdev/eventdev.rst b/doc/guides/prog_guide/eventdev/eventdev.rst index fb6dfce102..801e970021 100644 --- a/doc/guides/prog_guide/eventdev/eventdev.rst +++ b/doc/guides/prog_guide/eventdev/eventdev.rst @@ -472,6 +472,39 @@ A flush callback can be passed to the function to handle any outstanding events. Invocation of this API does not affect the existing port configuration. +Independent Enqueue Capability +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Some eventdev hardware devices such as DLB2 expect all forwarded events to be +enqueued in the same order as they are dequeued. For dropped events, their +releases should come at the same location where the original events were expected. +Hardware has this restriction as it uses the order to retrieve information about +the original event that was sent to the CPU. This includes information such as the atomic +flow ID to release the flow lock and the ordered event's sequence number to restore the +original order. + +Some applications, like those based on the DPDK dispatcher library, want +enqueue order independence. To support this, DLB2 PMD supports the +``RTE_EVENT_DEV_CAP_INDEPENDENT_ENQ`` capability. + +This capability applies to Eventdevs supporting burst mode. On ports where +the application is going to change enqueue order, +``RTE_EVENT_PORT_CFG_INDEPENDENT_ENQ`` support should be enabled. + +Example code to inform PMD that the application plans to use independent enqueue +order on a port: + + .. code-block:: c + + if (capability & RTE_EVENT_DEV_CAP_INDEPENDENT_ENQ) + port_config = port_config | RTE_EVENT_PORT_CFG_INDEPENDENT_ENQ; + +This code example enables enqueue event reordering inside the DLB2 PMD before the events +are sent to the DLB2 hardware. If the application is not going to change the enqueue +order, this flag should not be enabled, to get better performance. DLB2 PMD saves +ordering information inside the impl_opaque field of the event, and this field should +be preserved for all FORWARD or RELEASE events. 
+ Stopping the EventDev ~~~~~~~~~~~~~~~~~~~~~ diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst index dee6723b70..98e9732100 100644 --- a/doc/guides/rel_notes/release_24_11.rst +++ b/doc/guides/rel_notes/release_24_11.rst @@ -72,6 +72,11 @@ New Features * Added support for independent enqueue feature. Updated Event Device and PMD feature list. + * Updated DLB2 driver for independent enqueue feature. Applications should + use ``RTE_EVENT_PORT_CFG_INDEPENDENT_ENQ`` to enable the feature if the + capability ``RTE_EVENT_DEV_CAP_INDEPENDENT_ENQ`` exists. + + Removed Items ------------- diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c index c43ab864ca..09e4107824 100644 --- a/drivers/event/dlb2/dlb2.c +++ b/drivers/event/dlb2/dlb2.c @@ -82,6 +82,7 @@ static struct rte_event_dev_info evdev_dlb2_default_info = { RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE | RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK | RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT | + RTE_EVENT_DEV_CAP_INDEPENDENT_ENQ | RTE_EVENT_DEV_CAP_MAINTENANCE_FREE), .max_profiles_per_port = 1, }; @@ -98,6 +99,11 @@ dlb2_free_qe_mem(struct dlb2_port *qm_port) rte_free(qm_port->qe4); qm_port->qe4 = NULL; + if (qm_port->order) { + rte_free(qm_port->order); + qm_port->order = NULL; + } + rte_free(qm_port->int_arm_qe); qm_port->int_arm_qe = NULL; @@ -304,7 +310,7 @@ set_max_cq_depth(const char *key __rte_unused, if (*max_cq_depth < DLB2_MIN_CQ_DEPTH_OVERRIDE || *max_cq_depth > DLB2_MAX_CQ_DEPTH_OVERRIDE || !rte_is_power_of_2(*max_cq_depth)) { - DLB2_LOG_ERR("dlb2: max_cq_depth %d and %d and a power of 2", + DLB2_LOG_ERR("dlb2: Allowed max_cq_depth range %d - %d and should be power of 2", DLB2_MIN_CQ_DEPTH_OVERRIDE, DLB2_MAX_CQ_DEPTH_OVERRIDE); return -EINVAL; @@ -1445,6 +1451,17 @@ dlb2_init_qe_mem(struct dlb2_port *qm_port, char *mz_name) goto error_exit; } + if (qm_port->reorder_en) { + sz = sizeof(struct dlb2_reorder); + qm_port->order = rte_zmalloc(mz_name, sz, RTE_CACHE_LINE_SIZE); + + if (qm_port->order == NULL) { + DLB2_LOG_ERR("dlb2: no reorder memory"); + ret = -ENOMEM; + goto error_exit; + } + } + ret = dlb2_init_int_arm_qe(qm_port, mz_name); if (ret < 0) { DLB2_LOG_ERR("dlb2: dlb2_init_int_arm_qe ret=%d", ret); @@ -1541,13 +1558,6 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2, return -EINVAL; } - if (dlb2->version == DLB2_HW_V2 && ev_port->cq_weight != 0 && - ev_port->cq_weight > dequeue_depth) { - DLB2_LOG_ERR("dlb2: invalid cq dequeue depth %d, must be >= cq weight %d", - dequeue_depth, ev_port->cq_weight); - return -EINVAL; - } - rte_spinlock_lock(&handle->resource_lock); /* We round up to the next power of 2 if necessary */ @@ -1620,9 +1630,6 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2, dlb2_error_strings[cfg.response. status]); goto error_exit; } - qm_port->cq_weight = dequeue_depth; - } else { - qm_port->cq_weight = 0; } /* CQs with depth < 8 use an 8-entry queue, but withhold credits so @@ -1947,6 +1954,13 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev, evdev_dlb2_default_info.max_event_port_enqueue_depth) return -EINVAL; + if ((port_conf->event_port_cfg & RTE_EVENT_PORT_CFG_INDEPENDENT_ENQ) && + port_conf->dequeue_depth > DLB2_MAX_CQ_DEPTH_REORDER) { + DLB2_LOG_ERR("evport %d: Max dequeue depth supported with reorder is %d", + ev_port_id, DLB2_MAX_CQ_DEPTH_REORDER); + return -EINVAL; + } + ev_port = &dlb2->ev_ports[ev_port_id]; /* configured? 
*/ if (ev_port->setup_done) { @@ -1988,7 +2002,11 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev, hw_credit_quanta); return -EINVAL; } - ev_port->enq_retries = port_conf->enqueue_depth / sw_credit_quanta; + ev_port->enq_retries = port_conf->enqueue_depth; + + ev_port->qm_port.reorder_id = 0; + ev_port->qm_port.reorder_en = port_conf->event_port_cfg & + RTE_EVENT_PORT_CFG_INDEPENDENT_ENQ; /* Save off port config for reconfig */ ev_port->conf = *port_conf; @@ -2792,10 +2810,34 @@ dlb2_check_enqueue_hw_credits(struct dlb2_port *qm_port) } static __rte_always_inline void -dlb2_pp_write(struct dlb2_enqueue_qe *qe4, - struct process_local_port_data *port_data) +dlb2_pp_write(struct process_local_port_data *port_data, struct dlb2_enqueue_qe *qe4) +{ + dlb2_movdir64b(port_data->pp_addr, qe4); +} + +static __rte_always_inline void +dlb2_pp_write_reorder(struct process_local_port_data *port_data, + struct dlb2_enqueue_qe *qe4) +{ + for (uint8_t i = 0; i < 4; i++) { + if (qe4[i].cmd_byte != DLB2_NOOP_CMD_BYTE) { + dlb2_movdir64b(port_data->pp_addr, qe4); + return; + } + } +} + +static __rte_always_inline int +dlb2_pp_check4_write(struct process_local_port_data *port_data, + struct dlb2_enqueue_qe *qe4) { + for (uint8_t i = 0; i < DLB2_NUM_QES_PER_CACHE_LINE; i++) + if (((uint64_t *)&qe4[i])[1] == 0) + return 0; + dlb2_movdir64b(port_data->pp_addr, qe4); + memset(qe4, 0, DLB2_NUM_QES_PER_CACHE_LINE * sizeof(struct dlb2_enqueue_qe)); + return DLB2_NUM_QES_PER_CACHE_LINE; } static inline int @@ -2815,7 +2857,7 @@ dlb2_consume_qe_immediate(struct dlb2_port *qm_port, int num) */ port_data = &dlb2_port[qm_port->id][PORT_TYPE(qm_port)]; - dlb2_movntdq_single(port_data->pp_addr, qe); + dlb2_movdir64b_single(port_data->pp_addr, qe); DLB2_LOG_LINE_DBG("dlb2: consume immediate - %d QEs", num); @@ -2835,7 +2877,7 @@ dlb2_hw_do_enqueue(struct dlb2_port *qm_port, if (do_sfence) rte_wmb(); - dlb2_pp_write(qm_port->qe4, port_data); + dlb2_pp_write(port_data, qm_port->qe4); } static inline void @@ -2986,6 +3028,166 @@ dlb2_event_enqueue_prep(struct dlb2_eventdev_port *ev_port, return 0; } +static inline __m128i +dlb2_event_to_qe(const struct rte_event *ev, uint8_t cmd, uint8_t sched_type, uint8_t qid) +{ + __m128i dlb2_to_qe_shuffle = _mm_set_epi8( + 0xFF, 0xFF, /* zero out cmd word */ + 1, 0, /* low 16-bits of flow id */ + 0xFF, 0xFF, /* zero QID, sched_type etc fields to be filled later */ + 3, 2, /* top of flow id, event type and subtype */ + 15, 14, 13, 12, 11, 10, 9, 8 /* data from end of event goes at start */ + ); + + /* event may not be 16 byte aligned. 
Use 16 byte unaligned load */ + __m128i tmp = _mm_lddqu_si128((const __m128i *)ev); + __m128i qe = _mm_shuffle_epi8(tmp, dlb2_to_qe_shuffle); + struct dlb2_enqueue_qe *dq = (struct dlb2_enqueue_qe *)&qe; + /* set the cmd field */ + qe = _mm_insert_epi8(qe, cmd, 15); + /* insert missing 16-bits with qid, sched_type and priority */ + uint16_t qid_stype_prio = + qid | (uint16_t)sched_type << 8 | ((uint16_t)ev->priority & 0xE0) << 5; + qe = _mm_insert_epi16(qe, qid_stype_prio, 5); + dq->weight = RTE_PMD_DLB2_GET_QE_WEIGHT(ev); + return qe; +} + +static inline uint16_t +__dlb2_event_enqueue_burst_reorder(void *event_port, + const struct rte_event events[], + uint16_t num, + bool use_delayed) +{ + struct dlb2_eventdev_port *ev_port = event_port; + struct dlb2_port *qm_port = &ev_port->qm_port; + struct dlb2_reorder *order = qm_port->order; + struct process_local_port_data *port_data; + bool is_directed = qm_port->is_directed; + uint8_t n = order->next_to_enqueue; + uint8_t p_cnt = 0; + int retries = ev_port->enq_retries; + __m128i new_qes[4], *from = NULL; + int num_new = 0; + int num_tx; + int i; + + RTE_ASSERT(ev_port->enq_configured); + RTE_ASSERT(events != NULL); + + port_data = &dlb2_port[qm_port->id][PORT_TYPE(qm_port)]; + + num_tx = RTE_MIN(num, ev_port->conf.enqueue_depth); +#if DLB2_BYPASS_FENCE_ON_PP == 1 + if (!qm_port->is_producer) /* Call memory fense once at the start */ + rte_wmb(); /* calls _mm_sfence() */ +#else + rte_wmb(); /* calls _mm_sfence() */ +#endif + for (i = 0; i < num_tx; i++) { + uint8_t sched_type = 0; + uint8_t reorder_idx = events[i].impl_opaque; + int16_t thresh = qm_port->token_pop_thresh; + uint8_t qid = 0; + int ret; + + while ((ret = dlb2_event_enqueue_prep(ev_port, qm_port, &events[i], + &sched_type, &qid)) != 0 && + rte_errno == -ENOSPC && --retries > 0) + rte_pause(); + + if (ret != 0) /* Either there is error or retires exceeded */ + break; + + switch (events[i].op) { + case RTE_EVENT_OP_NEW: + new_qes[num_new++] = dlb2_event_to_qe( + &events[i], DLB2_NEW_CMD_BYTE, sched_type, qid); + if (num_new == RTE_DIM(new_qes)) { + dlb2_pp_write(port_data, (struct dlb2_enqueue_qe *)&new_qes); + num_new = 0; + } + break; + case RTE_EVENT_OP_FORWARD: { + order->enq_reorder[reorder_idx].m128 = dlb2_event_to_qe( + &events[i], is_directed ? DLB2_NEW_CMD_BYTE : DLB2_FWD_CMD_BYTE, + sched_type, qid); + n += dlb2_pp_check4_write(port_data, &order->enq_reorder[n].qe); + break; + } + case RTE_EVENT_OP_RELEASE: { + order->enq_reorder[reorder_idx].m128 = dlb2_event_to_qe( + &events[i], is_directed ? DLB2_NOOP_CMD_BYTE : DLB2_COMP_CMD_BYTE, + sched_type, 0xFF); + break; + } + } + + if (use_delayed && qm_port->token_pop_mode == DELAYED_POP && + (events[i].op == RTE_EVENT_OP_FORWARD || + events[i].op == RTE_EVENT_OP_RELEASE) && + qm_port->issued_releases >= thresh - 1) { + + dlb2_consume_qe_immediate(qm_port, qm_port->owed_tokens); + + /* Reset the releases for the next QE batch */ + qm_port->issued_releases -= thresh; + + /* When using delayed token pop mode, the + * initial token threshold is the full CQ + * depth. After the first token pop, we need to + * reset it to the dequeue_depth. 
+ */ + qm_port->token_pop_thresh = + qm_port->dequeue_depth; + } + } + while (order->enq_reorder[n].u64[1] != 0) { + __m128i tmp[4] = {0}, *send = NULL; + bool enq; + + if (!p_cnt) + from = &order->enq_reorder[n].m128; + + p_cnt++; + n++; + + enq = !n || p_cnt == 4 || !order->enq_reorder[n].u64[1]; + if (!enq) + continue; + + if (p_cnt < 4) { + memcpy(tmp, from, p_cnt * sizeof(struct dlb2_enqueue_qe)); + send = tmp; + } else { + send = from; + } + + if (is_directed) + dlb2_pp_write_reorder(port_data, (struct dlb2_enqueue_qe *)send); + else + dlb2_pp_write(port_data, (struct dlb2_enqueue_qe *)send); + memset(from, 0, p_cnt * sizeof(struct dlb2_enqueue_qe)); + p_cnt = 0; + } + order->next_to_enqueue = n; + + if (num_new > 0) { + switch (num_new) { + case 1: + new_qes[1] = _mm_setzero_si128(); /* fall-through */ + case 2: + new_qes[2] = _mm_setzero_si128(); /* fall-through */ + case 3: + new_qes[3] = _mm_setzero_si128(); + } + dlb2_pp_write(port_data, (struct dlb2_enqueue_qe *)&new_qes); + num_new = 0; + } + + return i; +} + static inline uint16_t __dlb2_event_enqueue_burst(void *event_port, const struct rte_event events[], @@ -3002,6 +3204,9 @@ __dlb2_event_enqueue_burst(void *event_port, RTE_ASSERT(ev_port->enq_configured); RTE_ASSERT(events != NULL); + if (qm_port->reorder_en) + return __dlb2_event_enqueue_burst_reorder(event_port, events, num, use_delayed); + i = 0; port_data = &dlb2_port[qm_port->id][PORT_TYPE(qm_port)]; @@ -3379,7 +3584,8 @@ dlb2_process_dequeue_qes(struct dlb2_eventdev_port *ev_port, events[num].event_type = qe->u.event_type.major; events[num].sub_event_type = qe->u.event_type.sub; events[num].sched_type = sched_type_map[qe->sched_type]; - events[num].impl_opaque = qe->qid_depth; + events[num].impl_opaque = qm_port->reorder_id++; + RTE_PMD_DLB2_SET_QID_DEPTH(&events[num], qe->qid_depth); /* qid not preserved for directed queues */ if (qm_port->is_directed) @@ -3414,7 +3620,6 @@ dlb2_process_dequeue_four_qes(struct dlb2_eventdev_port *ev_port, }; const int num_events = DLB2_NUM_QES_PER_CACHE_LINE; uint8_t *qid_mappings = qm_port->qid_mappings; - __m128i sse_evt[2]; /* In the unlikely case that any of the QE error bits are set, process * them one at a time. @@ -3423,153 +3628,33 @@ dlb2_process_dequeue_four_qes(struct dlb2_eventdev_port *ev_port, qes[2].error || qes[3].error)) return dlb2_process_dequeue_qes(ev_port, qm_port, events, qes, num_events); + const __m128i qe_to_ev_shuffle = + _mm_set_epi8(7, 6, 5, 4, 3, 2, 1, 0, /* last 8-bytes = data from first 8 */ + 0xFF, 0xFF, 0xFF, 0xFF, /* fill in later as 32-bit value*/ + 9, 8, /* event type and sub-event, + 4 zero bits */ + 13, 12 /* flow id, 16 bits */); + for (int i = 0; i < 4; i++) { + const __m128i hw_qe = _mm_load_si128((void *)&qes[i]); + const __m128i event = _mm_shuffle_epi8(hw_qe, qe_to_ev_shuffle); + /* prepare missing 32-bits for op, sched_type, QID, Priority and + * sequence number in impl_opaque + */ + const uint16_t qid_sched_prio = _mm_extract_epi16(hw_qe, 5); + /* Extract qid_depth and format it as per event header */ + const uint8_t qid_depth = (_mm_extract_epi8(hw_qe, 15) & 0x6) << 1; + const uint32_t qid = (qm_port->is_directed) ? 
ev_port->link[0].queue_id : + qid_mappings[(uint8_t)qid_sched_prio]; + const uint32_t sched_type = sched_type_map[(qid_sched_prio >> 8) & 0x3]; + const uint32_t priority = (qid_sched_prio >> 5) & 0xE0; - events[0].u64 = qes[0].data; - events[1].u64 = qes[1].data; - events[2].u64 = qes[2].data; - events[3].u64 = qes[3].data; - - /* Construct the metadata portion of two struct rte_events - * in one 128b SSE register. Event metadata is constructed in the SSE - * registers like so: - * sse_evt[0][63:0]: event[0]'s metadata - * sse_evt[0][127:64]: event[1]'s metadata - * sse_evt[1][63:0]: event[2]'s metadata - * sse_evt[1][127:64]: event[3]'s metadata - */ - sse_evt[0] = _mm_setzero_si128(); - sse_evt[1] = _mm_setzero_si128(); - - /* Convert the hardware queue ID to an event queue ID and store it in - * the metadata: - * sse_evt[0][47:40] = qid_mappings[qes[0].qid] - * sse_evt[0][111:104] = qid_mappings[qes[1].qid] - * sse_evt[1][47:40] = qid_mappings[qes[2].qid] - * sse_evt[1][111:104] = qid_mappings[qes[3].qid] - */ -#define DLB_EVENT_QUEUE_ID_BYTE 5 - sse_evt[0] = _mm_insert_epi8(sse_evt[0], - qid_mappings[qes[0].qid], - DLB_EVENT_QUEUE_ID_BYTE); - sse_evt[0] = _mm_insert_epi8(sse_evt[0], - qid_mappings[qes[1].qid], - DLB_EVENT_QUEUE_ID_BYTE + 8); - sse_evt[1] = _mm_insert_epi8(sse_evt[1], - qid_mappings[qes[2].qid], - DLB_EVENT_QUEUE_ID_BYTE); - sse_evt[1] = _mm_insert_epi8(sse_evt[1], - qid_mappings[qes[3].qid], - DLB_EVENT_QUEUE_ID_BYTE + 8); - - /* Convert the hardware priority to an event priority and store it in - * the metadata, while also returning the queue depth status - * value captured by the hardware, storing it in impl_opaque, which can - * be read by the application but not modified - * sse_evt[0][55:48] = DLB2_TO_EV_PRIO(qes[0].priority) - * sse_evt[0][63:56] = qes[0].qid_depth - * sse_evt[0][119:112] = DLB2_TO_EV_PRIO(qes[1].priority) - * sse_evt[0][127:120] = qes[1].qid_depth - * sse_evt[1][55:48] = DLB2_TO_EV_PRIO(qes[2].priority) - * sse_evt[1][63:56] = qes[2].qid_depth - * sse_evt[1][119:112] = DLB2_TO_EV_PRIO(qes[3].priority) - * sse_evt[1][127:120] = qes[3].qid_depth - */ -#define DLB_EVENT_PRIO_IMPL_OPAQUE_WORD 3 -#define DLB_BYTE_SHIFT 8 - sse_evt[0] = - _mm_insert_epi16(sse_evt[0], - DLB2_TO_EV_PRIO((uint8_t)qes[0].priority) | - (qes[0].qid_depth << DLB_BYTE_SHIFT), - DLB_EVENT_PRIO_IMPL_OPAQUE_WORD); - sse_evt[0] = - _mm_insert_epi16(sse_evt[0], - DLB2_TO_EV_PRIO((uint8_t)qes[1].priority) | - (qes[1].qid_depth << DLB_BYTE_SHIFT), - DLB_EVENT_PRIO_IMPL_OPAQUE_WORD + 4); - sse_evt[1] = - _mm_insert_epi16(sse_evt[1], - DLB2_TO_EV_PRIO((uint8_t)qes[2].priority) | - (qes[2].qid_depth << DLB_BYTE_SHIFT), - DLB_EVENT_PRIO_IMPL_OPAQUE_WORD); - sse_evt[1] = - _mm_insert_epi16(sse_evt[1], - DLB2_TO_EV_PRIO((uint8_t)qes[3].priority) | - (qes[3].qid_depth << DLB_BYTE_SHIFT), - DLB_EVENT_PRIO_IMPL_OPAQUE_WORD + 4); - - /* Write the event type, sub event type, and flow_id to the event - * metadata. 
- * sse_evt[0][31:0] = qes[0].flow_id | - * qes[0].u.event_type.major << 28 | - * qes[0].u.event_type.sub << 20; - * sse_evt[0][95:64] = qes[1].flow_id | - * qes[1].u.event_type.major << 28 | - * qes[1].u.event_type.sub << 20; - * sse_evt[1][31:0] = qes[2].flow_id | - * qes[2].u.event_type.major << 28 | - * qes[2].u.event_type.sub << 20; - * sse_evt[1][95:64] = qes[3].flow_id | - * qes[3].u.event_type.major << 28 | - * qes[3].u.event_type.sub << 20; - */ -#define DLB_EVENT_EV_TYPE_DW 0 -#define DLB_EVENT_EV_TYPE_SHIFT 28 -#define DLB_EVENT_SUB_EV_TYPE_SHIFT 20 - sse_evt[0] = _mm_insert_epi32(sse_evt[0], - qes[0].flow_id | - qes[0].u.event_type.major << DLB_EVENT_EV_TYPE_SHIFT | - qes[0].u.event_type.sub << DLB_EVENT_SUB_EV_TYPE_SHIFT, - DLB_EVENT_EV_TYPE_DW); - sse_evt[0] = _mm_insert_epi32(sse_evt[0], - qes[1].flow_id | - qes[1].u.event_type.major << DLB_EVENT_EV_TYPE_SHIFT | - qes[1].u.event_type.sub << DLB_EVENT_SUB_EV_TYPE_SHIFT, - DLB_EVENT_EV_TYPE_DW + 2); - sse_evt[1] = _mm_insert_epi32(sse_evt[1], - qes[2].flow_id | - qes[2].u.event_type.major << DLB_EVENT_EV_TYPE_SHIFT | - qes[2].u.event_type.sub << DLB_EVENT_SUB_EV_TYPE_SHIFT, - DLB_EVENT_EV_TYPE_DW); - sse_evt[1] = _mm_insert_epi32(sse_evt[1], - qes[3].flow_id | - qes[3].u.event_type.major << DLB_EVENT_EV_TYPE_SHIFT | - qes[3].u.event_type.sub << DLB_EVENT_SUB_EV_TYPE_SHIFT, - DLB_EVENT_EV_TYPE_DW + 2); - - /* Write the sched type to the event metadata. 'op' and 'rsvd' are not - * set: - * sse_evt[0][39:32] = sched_type_map[qes[0].sched_type] << 6 - * sse_evt[0][103:96] = sched_type_map[qes[1].sched_type] << 6 - * sse_evt[1][39:32] = sched_type_map[qes[2].sched_type] << 6 - * sse_evt[1][103:96] = sched_type_map[qes[3].sched_type] << 6 - */ -#define DLB_EVENT_SCHED_TYPE_BYTE 4 -#define DLB_EVENT_SCHED_TYPE_SHIFT 6 - sse_evt[0] = _mm_insert_epi8(sse_evt[0], - sched_type_map[qes[0].sched_type] << DLB_EVENT_SCHED_TYPE_SHIFT, - DLB_EVENT_SCHED_TYPE_BYTE); - sse_evt[0] = _mm_insert_epi8(sse_evt[0], - sched_type_map[qes[1].sched_type] << DLB_EVENT_SCHED_TYPE_SHIFT, - DLB_EVENT_SCHED_TYPE_BYTE + 8); - sse_evt[1] = _mm_insert_epi8(sse_evt[1], - sched_type_map[qes[2].sched_type] << DLB_EVENT_SCHED_TYPE_SHIFT, - DLB_EVENT_SCHED_TYPE_BYTE); - sse_evt[1] = _mm_insert_epi8(sse_evt[1], - sched_type_map[qes[3].sched_type] << DLB_EVENT_SCHED_TYPE_SHIFT, - DLB_EVENT_SCHED_TYPE_BYTE + 8); - - /* Store the metadata to the event (use the double-precision - * _mm_storeh_pd because there is no integer function for storing the - * upper 64b): - * events[0].event = sse_evt[0][63:0] - * events[1].event = sse_evt[0][127:64] - * events[2].event = sse_evt[1][63:0] - * events[3].event = sse_evt[1][127:64] - */ - _mm_storel_epi64((__m128i *)&events[0].event, sse_evt[0]); - _mm_storeh_pd((double *)&events[1].event, (__m128d) sse_evt[0]); - _mm_storel_epi64((__m128i *)&events[2].event, sse_evt[1]); - _mm_storeh_pd((double *)&events[3].event, (__m128d) sse_evt[1]); + const uint32_t dword1 = qid_depth | + sched_type << 6 | qid << 8 | priority << 16 | (qm_port->reorder_id + i) << 24; + + /* events[] may not be 16 byte aligned. 
So use separate load and store */ + const __m128i tmpEv = _mm_insert_epi32(event, dword1, 1); + _mm_storeu_si128((__m128i *) &events[i], tmpEv); + } + qm_port->reorder_id += 4; DLB2_INC_STAT(ev_port->stats.rx_sched_cnt[qes[0].sched_type], 1); DLB2_INC_STAT(ev_port->stats.rx_sched_cnt[qes[1].sched_type], 1); @@ -3722,6 +3807,15 @@ _process_deq_qes_vec_impl(struct dlb2_port *qm_port, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x03, }; + + static const uint8_t qid_depth_mask[16] = { + 0x00, 0x00, 0x00, 0x06, + 0x00, 0x00, 0x00, 0x06, + 0x00, 0x00, 0x00, 0x06, + 0x00, 0x00, 0x00, 0x06, + }; + const __m128i v_qid_depth_mask = _mm_loadu_si128( + (const __m128i *)qid_depth_mask); const __m128i v_sched_map = _mm_loadu_si128( (const __m128i *)sched_type_map); __m128i v_sched_mask = _mm_loadu_si128( @@ -3732,6 +3826,9 @@ _process_deq_qes_vec_impl(struct dlb2_port *qm_port, __m128i v_preshift = _mm_and_si128(v_sched_remapped, v_sched_mask); v_sched_done = _mm_srli_epi32(v_preshift, 10); + __m128i v_qid_depth = _mm_and_si128(v_qe_status, v_qid_depth_mask); + v_qid_depth = _mm_srli_epi32(v_qid_depth, 15); + v_sched_done = _mm_or_si128(v_sched_done, v_qid_depth); } /* Priority handling @@ -3784,9 +3881,10 @@ _process_deq_qes_vec_impl(struct dlb2_port *qm_port, (const __m128i *)sub_event_mask); __m128i v_flow_mask = _mm_loadu_si128( (const __m128i *)flow_mask); - __m128i v_sub = _mm_srli_epi32(v_qe_meta, 8); + __m128i v_sub = _mm_srli_epi32(v_qe_meta, 4); v_sub = _mm_and_si128(v_sub, v_sub_event_mask); - __m128i v_type = _mm_and_si128(v_qe_meta, v_event_mask); + __m128i v_type = _mm_srli_epi32(v_qe_meta, 12); + v_type = _mm_and_si128(v_type, v_event_mask); v_type = _mm_slli_epi32(v_type, 8); v_types_done = _mm_or_si128(v_type, v_sub); v_types_done = _mm_slli_epi32(v_types_done, 20); @@ -3814,12 +3912,14 @@ _process_deq_qes_vec_impl(struct dlb2_port *qm_port, case 4: v_ev_3 = _mm_blend_epi16(v_unpk_ev_23, v_qe_3, 0x0F); v_ev_3 = _mm_alignr_epi8(v_ev_3, v_ev_3, 8); + v_ev_3 = _mm_insert_epi8(v_ev_3, qm_port->reorder_id + 3, 7); _mm_storeu_si128((__m128i *)&events[3], v_ev_3); DLB2_INC_STAT(qm_port->ev_port->stats.rx_sched_cnt[hw_sched3], 1); /* fallthrough */ case 3: v_ev_2 = _mm_unpacklo_epi64(v_unpk_ev_23, v_qe_2); + v_ev_2 = _mm_insert_epi8(v_ev_2, qm_port->reorder_id + 2, 7); _mm_storeu_si128((__m128i *)&events[2], v_ev_2); DLB2_INC_STAT(qm_port->ev_port->stats.rx_sched_cnt[hw_sched2], 1); @@ -3827,16 +3927,19 @@ _process_deq_qes_vec_impl(struct dlb2_port *qm_port, case 2: v_ev_1 = _mm_blend_epi16(v_unpk_ev_01, v_qe_1, 0x0F); v_ev_1 = _mm_alignr_epi8(v_ev_1, v_ev_1, 8); + v_ev_1 = _mm_insert_epi8(v_ev_1, qm_port->reorder_id + 1, 7); _mm_storeu_si128((__m128i *)&events[1], v_ev_1); DLB2_INC_STAT(qm_port->ev_port->stats.rx_sched_cnt[hw_sched1], 1); /* fallthrough */ case 1: v_ev_0 = _mm_unpacklo_epi64(v_unpk_ev_01, v_qe_0); + v_ev_0 = _mm_insert_epi8(v_ev_0, qm_port->reorder_id, 7); _mm_storeu_si128((__m128i *)&events[0], v_ev_0); DLB2_INC_STAT(qm_port->ev_port->stats.rx_sched_cnt[hw_sched0], 1); } + qm_port->reorder_id += valid_events; } static __rte_always_inline int @@ -4171,6 +4274,7 @@ dlb2_event_dequeue_burst(void *event_port, struct rte_event *ev, uint16_t num, struct dlb2_eventdev_port *ev_port = event_port; struct dlb2_port *qm_port = &ev_port->qm_port; struct dlb2_eventdev *dlb2 = ev_port->dlb2; + struct dlb2_reorder *order = qm_port->order; uint16_t cnt; RTE_ASSERT(ev_port->setup_done); @@ -4178,8 +4282,21 @@ dlb2_event_dequeue_burst(void *event_port, struct rte_event *ev, uint16_t num, if 
(ev_port->implicit_release && ev_port->outstanding_releases > 0) { uint16_t out_rels = ev_port->outstanding_releases; - - dlb2_event_release(dlb2, ev_port->id, out_rels); + if (qm_port->reorder_en) { + /* for directed, no-op command-byte = 0, but set dsi field */ + /* for load-balanced, set COMP */ + uint64_t release_u64 = + qm_port->is_directed ? 0xFF : (uint64_t)DLB2_COMP_CMD_BYTE << 56; + + for (uint8_t i = order->next_to_enqueue; i != qm_port->reorder_id; i++) + if (order->enq_reorder[i].u64[1] == 0) + order->enq_reorder[i].u64[1] = release_u64; + + __dlb2_event_enqueue_burst_reorder(event_port, NULL, 0, + qm_port->token_pop_mode == DELAYED_POP); + } else { + dlb2_event_release(dlb2, ev_port->id, out_rels); + } DLB2_INC_STAT(ev_port->stats.tx_implicit_rel, out_rels); } @@ -4208,6 +4325,7 @@ dlb2_event_dequeue_burst_sparse(void *event_port, struct rte_event *ev, struct dlb2_eventdev_port *ev_port = event_port; struct dlb2_port *qm_port = &ev_port->qm_port; struct dlb2_eventdev *dlb2 = ev_port->dlb2; + struct dlb2_reorder *order = qm_port->order; uint16_t cnt; RTE_ASSERT(ev_port->setup_done); @@ -4215,9 +4333,35 @@ dlb2_event_dequeue_burst_sparse(void *event_port, struct rte_event *ev, if (ev_port->implicit_release && ev_port->outstanding_releases > 0) { uint16_t out_rels = ev_port->outstanding_releases; + if (qm_port->reorder_en) { + struct rte_event release_burst[8]; + int num_releases = 0; + + /* go through reorder buffer looking for missing releases. */ + for (uint8_t i = order->next_to_enqueue; i != qm_port->reorder_id; i++) { + if (order->enq_reorder[i].u64[1] == 0) { + release_burst[num_releases++] = (struct rte_event){ + .op = RTE_EVENT_OP_RELEASE, + .impl_opaque = i, + }; + + if (num_releases == RTE_DIM(release_burst)) { + __dlb2_event_enqueue_burst_reorder(event_port, + release_burst, RTE_DIM(release_burst), + qm_port->token_pop_mode == DELAYED_POP); + num_releases = 0; + } + } + } - dlb2_event_release(dlb2, ev_port->id, out_rels); + if (num_releases) + __dlb2_event_enqueue_burst_reorder(event_port, release_burst + , num_releases, qm_port->token_pop_mode == DELAYED_POP); + } else { + dlb2_event_release(dlb2, ev_port->id, out_rels); + } + RTE_ASSERT(ev_port->outstanding_releases == 0); DLB2_INC_STAT(ev_port->stats.tx_implicit_rel, out_rels); } @@ -4242,6 +4386,8 @@ static void dlb2_flush_port(struct rte_eventdev *dev, int port_id) { struct dlb2_eventdev *dlb2 = dlb2_pmd_priv(dev); + struct dlb2_eventdev_port *ev_port = &dlb2->ev_ports[port_id]; + struct dlb2_reorder *order = ev_port->qm_port.order; eventdev_stop_flush_t flush; struct rte_event ev; uint8_t dev_id; @@ -4267,8 +4413,10 @@ dlb2_flush_port(struct rte_eventdev *dev, int port_id) /* Enqueue any additional outstanding releases */ ev.op = RTE_EVENT_OP_RELEASE; - for (i = dlb2->ev_ports[port_id].outstanding_releases; i > 0; i--) + for (i = dlb2->ev_ports[port_id].outstanding_releases; i > 0; i--) { + ev.impl_opaque = order ? order->next_to_enqueue : 0; rte_event_enqueue_burst(dev_id, port_id, &ev, 1); + } } static uint32_t @@ -4939,6 +5087,8 @@ dlb2_parse_params(const char *params, rte_kvargs_free(kvlist); return ret; } + if (version == DLB2_HW_V2 && dlb2_args->enable_cq_weight) + DLB2_LOG_INFO("Ignoring 'enable_cq_weight=y'. 
Only supported for 2.5 HW onwards"); rte_kvargs_free(kvlist); } diff --git a/drivers/event/dlb2/dlb2_avx512.c b/drivers/event/dlb2/dlb2_avx512.c index 3c8906af9d..4f8c490f8c 100644 --- a/drivers/event/dlb2/dlb2_avx512.c +++ b/drivers/event/dlb2/dlb2_avx512.c @@ -151,20 +151,20 @@ dlb2_event_build_hcws(struct dlb2_port *qm_port, */ #define DLB2_QE_EV_TYPE_WORD 0 sse_qe[0] = _mm_insert_epi16(sse_qe[0], - ev[0].sub_event_type << 8 | - ev[0].event_type, + ev[0].sub_event_type << 4 | + ev[0].event_type << 12, DLB2_QE_EV_TYPE_WORD); sse_qe[0] = _mm_insert_epi16(sse_qe[0], - ev[1].sub_event_type << 8 | - ev[1].event_type, + ev[1].sub_event_type << 4 | + ev[1].event_type << 12, DLB2_QE_EV_TYPE_WORD + 4); sse_qe[1] = _mm_insert_epi16(sse_qe[1], - ev[2].sub_event_type << 8 | - ev[2].event_type, + ev[2].sub_event_type << 4 | + ev[2].event_type << 12, DLB2_QE_EV_TYPE_WORD); sse_qe[1] = _mm_insert_epi16(sse_qe[1], - ev[3].sub_event_type << 8 | - ev[3].event_type, + ev[3].sub_event_type << 4 | + ev[3].event_type << 12, DLB2_QE_EV_TYPE_WORD + 4); if (qm_port->use_avx512) { @@ -238,11 +238,11 @@ dlb2_event_build_hcws(struct dlb2_port *qm_port, } /* will only be set for DLB 2.5 + */ - if (qm_port->cq_weight) { - qe[0].weight = ev[0].impl_opaque & 3; - qe[1].weight = ev[1].impl_opaque & 3; - qe[2].weight = ev[2].impl_opaque & 3; - qe[3].weight = ev[3].impl_opaque & 3; + if (qm_port->dlb2->enable_cq_weight) { + qe[0].weight = RTE_PMD_DLB2_GET_QE_WEIGHT(&ev[0]); + qe[1].weight = RTE_PMD_DLB2_GET_QE_WEIGHT(&ev[1]); + qe[2].weight = RTE_PMD_DLB2_GET_QE_WEIGHT(&ev[2]); + qe[3].weight = RTE_PMD_DLB2_GET_QE_WEIGHT(&ev[3]); } break; @@ -267,6 +267,7 @@ dlb2_event_build_hcws(struct dlb2_port *qm_port, } qe[i].u.event_type.major = ev[i].event_type; qe[i].u.event_type.sub = ev[i].sub_event_type; + qe[i].weight = RTE_PMD_DLB2_GET_QE_WEIGHT(&ev[i]); } break; case 0: diff --git a/drivers/event/dlb2/dlb2_inline_fns.h b/drivers/event/dlb2/dlb2_inline_fns.h index 1429281cfd..61a507d159 100644 --- a/drivers/event/dlb2/dlb2_inline_fns.h +++ b/drivers/event/dlb2/dlb2_inline_fns.h @@ -32,4 +32,12 @@ dlb2_movdir64b(void *dest, void *src) : "a" (dest), "d" (src)); } +static inline void +dlb2_movdir64b_single(void *pp_addr, void *qe4) +{ + asm volatile(".byte 0x66, 0x0f, 0x38, 0xf8, 0x02" + : + : "a" (pp_addr), "d" (qe4)); +} + #endif /* _DLB2_INLINE_FNS_H_ */ diff --git a/drivers/event/dlb2/dlb2_priv.h b/drivers/event/dlb2/dlb2_priv.h index 2470ae0271..52da31ed31 100644 --- a/drivers/event/dlb2/dlb2_priv.h +++ b/drivers/event/dlb2/dlb2_priv.h @@ -29,7 +29,8 @@ #define DLB2_SW_CREDIT_C_QUANTA_DEFAULT 256 /* Consumer */ #define DLB2_DEPTH_THRESH_DEFAULT 256 #define DLB2_MIN_CQ_DEPTH_OVERRIDE 32 -#define DLB2_MAX_CQ_DEPTH_OVERRIDE 128 +#define DLB2_MAX_CQ_DEPTH_OVERRIDE 1024 +#define DLB2_MAX_CQ_DEPTH_REORDER 128 #define DLB2_MIN_ENQ_DEPTH_OVERRIDE 32 #define DLB2_MAX_ENQ_DEPTH_OVERRIDE 1024 @@ -387,8 +388,23 @@ struct dlb2_port { bool use_scalar; /* force usage of scalar code */ uint16_t hw_credit_quanta; bool use_avx512; - uint32_t cq_weight; bool is_producer; /* True if port is of type producer */ + uint8_t reorder_id; /* id used for reordering events coming back into the scheduler */ + bool reorder_en; + struct dlb2_reorder *order; /* For ordering enqueues */ +}; + +struct dlb2_reorder { + /* a reorder buffer for events coming back in different order from dequeue + * We use UINT8_MAX + 1 elements, but add on three no-ops to make movdirs easier at the end + */ + union { + __m128i m128; + struct dlb2_enqueue_qe qe; + uint64_t 
u64[2]; + } enq_reorder[UINT8_MAX + 4]; + /* id of the next entry in the reorder enqueue ring to send in */ + uint8_t next_to_enqueue; }; /* Per-process per-port mmio and memory pointers */ @@ -642,10 +658,6 @@ struct dlb2_qid_depth_thresholds { int val[DLB2_MAX_NUM_QUEUES_ALL]; }; -struct dlb2_cq_weight { - int limit[DLB2_MAX_NUM_PORTS_ALL]; -}; - struct dlb2_port_cos { int cos_id[DLB2_MAX_NUM_PORTS_ALL]; }; @@ -667,7 +679,6 @@ struct dlb2_devargs { bool vector_opts_enabled; int max_cq_depth; int max_enq_depth; - struct dlb2_cq_weight cq_weight; struct dlb2_port_cos port_cos; struct dlb2_cos_bw cos_bw; const char *producer_coremask; diff --git a/drivers/event/dlb2/rte_pmd_dlb2.h b/drivers/event/dlb2/rte_pmd_dlb2.h index 334c6c356d..564b4f18c6 100644 --- a/drivers/event/dlb2/rte_pmd_dlb2.h +++ b/drivers/event/dlb2/rte_pmd_dlb2.h @@ -19,6 +19,30 @@ extern "C" { #include +/** + * Macro function to get QID depth of rte_event metadata. + * Currently lower 2 bits of 'rsvd' field are used to store QID depth. + */ +#define RTE_PMD_DLB2_GET_QID_DEPTH(x) ((x)->rsvd & 0x3) + +/** + * Macro function to set QID depth of rte_event metadata. + * Currently lower 2 bits of 'rsvd' field are used to store QID depth. + */ +#define RTE_PMD_DLB2_SET_QID_DEPTH(x, v) ((x)->rsvd = ((x)->rsvd & ~0x3) | (v & 0x3)) + +/** + * Macro function to get QE weight from rte_event metadata. + * Currently upper 2 bits of 'rsvd' field are used to store QE weight. + */ +#define RTE_PMD_DLB2_GET_QE_WEIGHT(x) (((x)->rsvd >> 2) & 0x3) + +/** + * Macro function to set QE weight from rte_event metadata. + * Currently upper 2 bits of 'rsvd' field are used to store QE weight. + */ +#define RTE_PMD_DLB2_SET_QE_WEIGHT(x, v) ((x)->rsvd = ((x)->rsvd & 0x3) | ((v & 0x3) << 2)) + /** * @warning * @b EXPERIMENTAL: this API may change, or be removed, without prior notice From patchwork Thu Oct 3 20:50:02 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Abdullah Sevincer X-Patchwork-Id: 145004 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 85E2D45AA2; Thu, 3 Oct 2024 22:50:27 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 8A81E40673; Thu, 3 Oct 2024 22:50:14 +0200 (CEST) Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.17]) by mails.dpdk.org (Postfix) with ESMTP id BB5EE40613 for ; Thu, 3 Oct 2024 22:50:09 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1727988610; x=1759524610; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=sGU7PZ3eTatnSDkIzHSVESZzpKRz3fw+vS6zKshPqcw=; b=ExWtB8WTLGTBUac+I0NxIHaFRgzkmgUAHCRCG80dZCC6Z1LlbnS4mb7F TY26DfeiM+mqIK+B8oGI/opRaLjNbS+oUVxhetOc1ItUCwRX3lJiMlomP 6XW0QRnU0QQvjwuk46cTyLXiMEAsfIdavDalmxmRiouIw/kiNEjv6wnet GoWG3RrqB3HukZSjo3ppnhC8oexZrcG+IaKwz1tir49WqQE7EnYq3uT6j JgjmcAvspfjheWfmlyhm67P+MHUl6t3bsRi0g8kFqum2bP8ztxxbrKU4v lEvhIaYn2brblDiL856vDqraHH7Fe/SXhIQgGCk0HH1aTZANpJhYdRG9Q A==; X-CSE-ConnectionGUID: a8Tjco45Rd6xhqLsrfHZKA== X-CSE-MsgGUID: 3iFBa6MnRwKUG6NmZXGERw== X-IronPort-AV: E=McAfee;i="6700,10204,11214"; a="27334262" X-IronPort-AV: E=Sophos;i="6.11,175,1725346800"; d="scan'208";a="27334262" Received: from fmviesa005.fm.intel.com 
([10.60.135.145]) by orvoesa109.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 03 Oct 2024 13:50:09 -0700 X-CSE-ConnectionGUID: kL0f8+p5TJ+Ncy2a3nnlew== X-CSE-MsgGUID: bcYTqfzfRbC9z5M1yDHUzg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.11,175,1725346800"; d="scan'208";a="78895985" Received: from txanpdk02.an.intel.com ([10.123.117.76]) by fmviesa005.fm.intel.com with ESMTP; 03 Oct 2024 13:50:08 -0700 From: Abdullah Sevincer To: dev@dpdk.org Cc: jerinj@marvell.com, bruce.richardson@intel.com, pravin.pathak@intel.com, mattias.ronnblom@ericsson.com, manish.aggarwal@intel.com, Abdullah Sevincer Subject: [PATCH v14 3/3] event/dsw: add capability for independent enqueue Date: Thu, 3 Oct 2024 15:50:02 -0500 Message-Id: <20241003205002.4090954-4-abdullah.sevincer@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20241003205002.4090954-1-abdullah.sevincer@intel.com> References: <20240909160506.2655354-3-abdullah.sevincer@intel.com> <20241003205002.4090954-1-abdullah.sevincer@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org

To use the independent enqueue capability, applications need to set the flag
RTE_EVENT_PORT_CFG_INDEPENDENT_ENQ during port setup, but only if the
capability RTE_EVENT_DEV_CAP_INDEPENDENT_ENQ exists. Hence, this commit adds
the independent enqueue capability to the DSW driver.

Signed-off-by: Abdullah Sevincer Acked-by: Mattias Rönnblom --- doc/guides/rel_notes/release_24_11.rst | 1 + drivers/event/dsw/dsw_evdev.c | 3 ++- 2 files changed, 3 insertions(+), 1 deletion(-) diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst index 98e9732100..4e4ca4fc23 100644 --- a/doc/guides/rel_notes/release_24_11.rst +++ b/doc/guides/rel_notes/release_24_11.rst @@ -76,6 +76,7 @@ New Features use ``RTE_EVENT_PORT_CFG_INDEPENDENT_ENQ`` to enable the feature if the capability ``RTE_EVENT_DEV_CAP_INDEPENDENT_ENQ`` exists. + * Updated DSW driver for independent enqueue feature. Removed Items diff --git a/drivers/event/dsw/dsw_evdev.c b/drivers/event/dsw/dsw_evdev.c index 8a1a2db8ac..9fb187bc74 100644 --- a/drivers/event/dsw/dsw_evdev.c +++ b/drivers/event/dsw/dsw_evdev.c @@ -230,7 +230,8 @@ dsw_info_get(struct rte_eventdev *dev __rte_unused, RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE| RTE_EVENT_DEV_CAP_NONSEQ_MODE| RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT| - RTE_EVENT_DEV_CAP_CARRY_FLOW_ID + RTE_EVENT_DEV_CAP_CARRY_FLOW_ID | + RTE_EVENT_DEV_CAP_INDEPENDENT_ENQ }; }
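For completeness, a worker-loop sketch of how an application might exercise
independent enqueue on a port configured with RTE_EVENT_PORT_CFG_INDEPENDENT_ENQ
(for example on the DLB2 or DSW PMDs). The handler process_event(), the
destination queue id, and the reversed submit order are illustrative assumptions;
the key point is that forwarded events keep their dequeued contents, including
impl_opaque, even when enqueued in a different order:

    #include <rte_common.h>
    #include <rte_eventdev.h>
    #include <rte_pause.h>

    void process_event(struct rte_event *ev); /* hypothetical application handler */

    static void
    independent_enq_worker(uint8_t dev_id, uint8_t port_id, uint8_t next_qid)
    {
            struct rte_event evs[32];
            uint16_t n, i;

            while (1) {
                    n = rte_event_dequeue_burst(dev_id, port_id, evs,
                                                RTE_DIM(evs), 0);

                    /* Submit in reverse order: permitted on ports configured
                     * with RTE_EVENT_PORT_CFG_INDEPENDENT_ENQ. The impl_opaque
                     * value written by the PMD at dequeue is left untouched.
                     */
                    for (i = 0; i < n; i++) {
                            struct rte_event *ev = &evs[n - 1 - i];

                            process_event(ev);
                            ev->op = RTE_EVENT_OP_FORWARD;
                            ev->queue_id = next_qid;
                            while (rte_event_enqueue_burst(dev_id, port_id,
                                                           ev, 1) != 1)
                                    rte_pause();
                    }
            }
    }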