From patchwork Mon May 16 17:35:47 2022
X-Patchwork-Submitter: Shijith Thotton
X-Patchwork-Id: 111189
X-Patchwork-Delegate: jerinj@marvell.com
From: Shijith Thotton
Subject: [PATCH v4 1/5] eventdev: support to set queue attributes at runtime
Date: Mon, 16 May 2022 23:05:47 +0530
Message-ID: <33fe903b38d712388230cfa58ee3a64987771918.1652722314.git.sthotton@marvell.com>

Added a new eventdev API rte_event_queue_attr_set() to set event queue
attributes at runtime, overriding the values set during initialization
with rte_event_queue_setup(). PMDs supporting this feature should expose
the capability RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR.
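
For context, a minimal usage sketch of the new API (dev_id and queue_id
are placeholder values; error handling is reduced to the essentials):

	#include <errno.h>
	#include <rte_eventdev.h>

	static int
	raise_queue_priority(uint8_t dev_id, uint8_t queue_id)
	{
		struct rte_event_dev_info info;
		int ret;

		ret = rte_event_dev_info_get(dev_id, &info);
		if (ret < 0)
			return ret;

		/* Runtime changes are only legal when the PMD exposes the
		 * capability added by this patch.
		 */
		if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR))
			return -ENOTSUP;

		/* Unlike rte_event_queue_setup(), this may be called after
		 * rte_event_dev_start().
		 */
		return rte_event_queue_attr_set(dev_id, queue_id,
						RTE_EVENT_QUEUE_ATTR_PRIORITY,
						RTE_EVENT_DEV_PRIORITY_HIGHEST);
	}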
Signed-off-by: Shijith Thotton Acked-by: Jerin Jacob Acked-by: Ray Kinsella --- doc/guides/eventdevs/features/default.ini | 1 + doc/guides/rel_notes/release_22_07.rst | 5 ++++ lib/eventdev/eventdev_pmd.h | 22 ++++++++++++++++ lib/eventdev/rte_eventdev.c | 26 ++++++++++++++++++ lib/eventdev/rte_eventdev.h | 32 ++++++++++++++++++++++- lib/eventdev/version.map | 3 +++ 6 files changed, 88 insertions(+), 1 deletion(-) diff --git a/doc/guides/eventdevs/features/default.ini b/doc/guides/eventdevs/features/default.ini index 2ea233463a..00360f60c6 100644 --- a/doc/guides/eventdevs/features/default.ini +++ b/doc/guides/eventdevs/features/default.ini @@ -17,6 +17,7 @@ runtime_port_link = multiple_queue_port = carry_flow_id = maintenance_free = +runtime_queue_attr = ; ; Features of a default Ethernet Rx adapter. diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst index 88d6e96cc1..a7a912d665 100644 --- a/doc/guides/rel_notes/release_22_07.rst +++ b/doc/guides/rel_notes/release_22_07.rst @@ -65,6 +65,11 @@ New Features * Added support for promiscuous mode on Windows. * Added support for MTU on Windows. +* **Added support for setting queue attributes at runtime in eventdev.** + + Added new API ``rte_event_queue_attr_set()``, to set event queue attributes + at runtime. + Removed Items ------------- diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h index ce469d47a6..3b85d9f7a5 100644 --- a/lib/eventdev/eventdev_pmd.h +++ b/lib/eventdev/eventdev_pmd.h @@ -341,6 +341,26 @@ typedef int (*eventdev_queue_setup_t)(struct rte_eventdev *dev, typedef void (*eventdev_queue_release_t)(struct rte_eventdev *dev, uint8_t queue_id); +/** + * Set an event queue attribute at runtime. + * + * @param dev + * Event device pointer + * @param queue_id + * Event queue index + * @param attr_id + * Event queue attribute id + * @param attr_value + * Event queue attribute value + * + * @return + * - 0: Success. + * - <0: Error code on failure. + */ +typedef int (*eventdev_queue_attr_set_t)(struct rte_eventdev *dev, + uint8_t queue_id, uint32_t attr_id, + uint64_t attr_value); + /** * Retrieve the default event port configuration. * @@ -1211,6 +1231,8 @@ struct eventdev_ops { /**< Set up an event queue. */ eventdev_queue_release_t queue_release; /**< Release an event queue. */ + eventdev_queue_attr_set_t queue_attr_set; + /**< Set an event queue attribute. */ eventdev_port_default_conf_get_t port_def_conf; /**< Get default port configuration. 
*/ diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c index 532a253553..a31e99be02 100644 --- a/lib/eventdev/rte_eventdev.c +++ b/lib/eventdev/rte_eventdev.c @@ -844,6 +844,32 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id, return 0; } +int +rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id, + uint64_t attr_value) +{ + struct rte_eventdev *dev; + + RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); + dev = &rte_eventdevs[dev_id]; + if (!is_valid_queue(dev, queue_id)) { + RTE_EDEV_LOG_ERR("Invalid queue_id=%" PRIu8, queue_id); + return -EINVAL; + } + + if (!(dev->data->event_dev_cap & + RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR)) { + RTE_EDEV_LOG_ERR( + "Device %" PRIu8 "does not support changing queue attributes at runtime", + dev_id); + return -ENOTSUP; + } + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_attr_set, -ENOTSUP); + return (*dev->dev_ops->queue_attr_set)(dev, queue_id, attr_id, + attr_value); +} + int rte_event_port_link(uint8_t dev_id, uint8_t port_id, const uint8_t queues[], const uint8_t priorities[], diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h index 42a5660169..a79b1397eb 100644 --- a/lib/eventdev/rte_eventdev.h +++ b/lib/eventdev/rte_eventdev.h @@ -225,7 +225,7 @@ struct rte_event; /**< Event scheduling prioritization is based on the priority associated with * each event queue. * - * @see rte_event_queue_setup() + * @see rte_event_queue_setup(), rte_event_queue_attr_set() */ #define RTE_EVENT_DEV_CAP_EVENT_QOS (1ULL << 1) /**< Event scheduling prioritization is based on the priority associated with @@ -307,6 +307,13 @@ struct rte_event; * global pool, or process signaling related to load balancing. */ +#define RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR (1ULL << 11) +/**< Event device is capable of changing the queue attributes at runtime i.e + * after rte_event_queue_setup() or rte_event_start() call sequence. If this + * flag is not set, eventdev queue attributes can only be configured during + * rte_event_queue_setup(). + */ + /* Event device priority levels */ #define RTE_EVENT_DEV_PRIORITY_HIGHEST 0 /**< Highest priority expressed across eventdev subsystem @@ -702,6 +709,29 @@ int rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id, uint32_t *attr_value); +/** + * Set an event queue attribute. + * + * @param dev_id + * Eventdev id + * @param queue_id + * Eventdev queue id + * @param attr_id + * The attribute ID to set + * @param attr_value + * The attribute value to set + * + * @return + * - 0: Successfully set attribute. + * - -EINVAL: invalid device, queue or attr_id. + * - -ENOTSUP: device does not support setting the event attribute. 
+ * - <0: failed to set event queue attribute
+ */
+__rte_experimental
+int
+rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
+			 uint64_t attr_value);
+
 /* Event port specific APIs */
 
 /* Event port configuration bitmap flags */
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index cd5dada07f..c581b75c18 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -108,6 +108,9 @@ EXPERIMENTAL {
 
 	# added in 22.03
 	rte_event_eth_rx_adapter_event_port_get;
+
+	# added in 22.07
+	rte_event_queue_attr_set;
 };
 
 INTERNAL {

From patchwork Mon May 16 17:35:48 2022
X-Patchwork-Submitter: Shijith Thotton
X-Patchwork-Id: 111190
X-Patchwork-Delegate: jerinj@marvell.com
From: Shijith Thotton
Subject: [PATCH v4 2/5] eventdev: add weight and affinity to queue attributes
Date: Mon, 16 May 2022 23:05:48 +0530
Message-ID: <043ba4c16cac74e28e001ae46619a2e4bcfc0f00.1652722314.git.sthotton@marvell.com>

Extended eventdev queue QoS attributes to support weight and affinity.
If queues have the same priority, events from the queue with the highest
weight are scheduled first. Affinity indicates the number of subsequent
schedule calls from an event port that will use the same event queue;
the schedule call selects another queue once the current queue goes
empty or the schedule count reaches the affinity count.

To avoid an ABI break, the weight and affinity attributes are not yet
added to the queue config structure; managing them is left to the PMD.
The new eventdev op queue_attr_get can be used to fetch them from the
PMD. (A usage sketch for the new attributes follows this patch's diff.)

Signed-off-by: Shijith Thotton
Acked-by: Jerin Jacob
---
 doc/guides/rel_notes/release_22_07.rst |  7 +++++
 lib/eventdev/eventdev_pmd.h            | 22 +++++++++++++++
 lib/eventdev/rte_eventdev.c            | 12 ++++++++
 lib/eventdev/rte_eventdev.h            | 38 ++++++++++++++++++++++++--
 4 files changed, 77 insertions(+), 2 deletions(-)

diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index a7a912d665..f35a31bbdf 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -70,6 +70,13 @@ New Features
   Added new API ``rte_event_queue_attr_set()``, to set event queue attributes
   at runtime.
 
+* **Added new queues attributes weight and affinity in eventdev.**
+
+  Defined new event queue attributes weight and affinity as below:
+
+  * ``RTE_EVENT_QUEUE_ATTR_WEIGHT``
+  * ``RTE_EVENT_QUEUE_ATTR_AFFINITY``
+
 Removed Items
 -------------

diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index 3b85d9f7a5..5495aee4f6 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -341,6 +341,26 @@ typedef int (*eventdev_queue_setup_t)(struct rte_eventdev *dev,
 typedef void (*eventdev_queue_release_t)(struct rte_eventdev *dev,
 		uint8_t queue_id);
 
+/**
+ * Get an event queue attribute at runtime.
+ *
+ * @param dev
+ *   Event device pointer
+ * @param queue_id
+ *   Event queue index
+ * @param attr_id
+ *   Event queue attribute id
+ * @param[out] attr_value
+ *   Event queue attribute value
+ *
+ * @return
+ * - 0: Success.
+ * - <0: Error code on failure.
+ */
+typedef int (*eventdev_queue_attr_get_t)(struct rte_eventdev *dev,
+					 uint8_t queue_id, uint32_t attr_id,
+					 uint32_t *attr_value);
+
 /**
  * Set an event queue attribute at runtime.
  *
@@ -1231,6 +1251,8 @@ struct eventdev_ops {
 	/**< Set up an event queue. */
 	eventdev_queue_release_t queue_release;
 	/**< Release an event queue. */
+	eventdev_queue_attr_get_t queue_attr_get;
+	/**< Get an event queue attribute. */
 	eventdev_queue_attr_set_t queue_attr_set;
 	/**< Set an event queue attribute.
*/ diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c index a31e99be02..12b261f923 100644 --- a/lib/eventdev/rte_eventdev.c +++ b/lib/eventdev/rte_eventdev.c @@ -838,6 +838,18 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id, *attr_value = conf->schedule_type; break; + case RTE_EVENT_QUEUE_ATTR_WEIGHT: + *attr_value = RTE_EVENT_QUEUE_WEIGHT_LOWEST; + if (dev->dev_ops->queue_attr_get) + return (*dev->dev_ops->queue_attr_get)( + dev, queue_id, attr_id, attr_value); + break; + case RTE_EVENT_QUEUE_ATTR_AFFINITY: + *attr_value = RTE_EVENT_QUEUE_AFFINITY_LOWEST; + if (dev->dev_ops->queue_attr_get) + return (*dev->dev_ops->queue_attr_get)( + dev, queue_id, attr_id, attr_value); + break; default: return -EINVAL; }; diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h index a79b1397eb..9a7c0bcf25 100644 --- a/lib/eventdev/rte_eventdev.h +++ b/lib/eventdev/rte_eventdev.h @@ -222,8 +222,14 @@ struct rte_event; /* Event device capability bitmap flags */ #define RTE_EVENT_DEV_CAP_QUEUE_QOS (1ULL << 0) -/**< Event scheduling prioritization is based on the priority associated with - * each event queue. +/**< Event scheduling prioritization is based on the priority and weight + * associated with each event queue. Events from a queue with highest priority + * is scheduled first. If the queues are of same priority, weight of the queues + * are considered to select a queue in a weighted round robin fashion. + * Subsequent dequeue calls from an event port could see events from the same + * event queue, if the queue is configured with an affinity count. Affinity + * count is the number of subsequent dequeue calls, in which an event port + * should use the same event queue if the queue is non-empty * * @see rte_event_queue_setup(), rte_event_queue_attr_set() */ @@ -331,6 +337,26 @@ struct rte_event; * @see rte_event_port_link() */ +/* Event queue scheduling weights */ +#define RTE_EVENT_QUEUE_WEIGHT_HIGHEST 255 +/**< Highest weight of an event queue + * @see rte_event_queue_attr_get(), rte_event_queue_attr_set() + */ +#define RTE_EVENT_QUEUE_WEIGHT_LOWEST 0 +/**< Lowest weight of an event queue + * @see rte_event_queue_attr_get(), rte_event_queue_attr_set() + */ + +/* Event queue scheduling affinity */ +#define RTE_EVENT_QUEUE_AFFINITY_HIGHEST 255 +/**< Highest scheduling affinity of an event queue + * @see rte_event_queue_attr_get(), rte_event_queue_attr_set() + */ +#define RTE_EVENT_QUEUE_AFFINITY_LOWEST 0 +/**< Lowest scheduling affinity of an event queue + * @see rte_event_queue_attr_get(), rte_event_queue_attr_set() + */ + /** * Get the total number of event devices that have been successfully * initialised. @@ -684,6 +710,14 @@ rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id, * The schedule type of the queue. */ #define RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE 4 +/** + * The weight of the queue. + */ +#define RTE_EVENT_QUEUE_ATTR_WEIGHT 5 +/** + * Affinity of the queue. + */ +#define RTE_EVENT_QUEUE_ATTR_AFFINITY 6 /** * Get an attribute from a queue. 
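
For illustration, a hedged sketch of driving the two new attributes
through the public API (the queue id, weight 200 and affinity 4 are
arbitrary values; assumes the device advertises
RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR from the previous patch):

	#include <rte_eventdev.h>

	static int
	tune_queue_qos(uint8_t dev_id, uint8_t queue_id)
	{
		uint32_t weight;
		int ret;

		/* Weight only matters among queues of equal priority. */
		ret = rte_event_queue_attr_set(dev_id, queue_id,
					       RTE_EVENT_QUEUE_ATTR_WEIGHT, 200);
		if (ret < 0)
			return ret;

		/* Keep roughly four consecutive schedule calls on this
		 * queue before the scheduler picks another one.
		 */
		ret = rte_event_queue_attr_set(dev_id, queue_id,
					       RTE_EVENT_QUEUE_ATTR_AFFINITY, 4);
		if (ret < 0)
			return ret;

		/* Read back; this goes through the new queue_attr_get op. */
		return rte_event_queue_attr_get(dev_id, queue_id,
						RTE_EVENT_QUEUE_ATTR_WEIGHT,
						&weight);
	}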
From patchwork Mon May 16 17:35:49 2022
X-Patchwork-Submitter: Shijith Thotton
X-Patchwork-Id: 111191
X-Patchwork-Delegate: jerinj@marvell.com
From: Shijith Thotton
Subject: [PATCH v4 3/5] test/event: test cases to test runtime queue attribute
Date: Mon, 16 May 2022 23:05:49 +0530
Message-ID: <7f2bd0734a71e6b46496933379df1bac51cc8b0a.1652722314.git.sthotton@marvell.com>

Added test cases for changing the queue QoS attributes priority, weight
and affinity at runtime.
Signed-off-by: Shijith Thotton --- app/test/test_eventdev.c | 201 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 201 insertions(+) diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c index 4f51042bda..336529038e 100644 --- a/app/test/test_eventdev.c +++ b/app/test/test_eventdev.c @@ -385,6 +385,201 @@ test_eventdev_queue_attr_priority(void) return TEST_SUCCESS; } +static int +test_eventdev_queue_attr_priority_runtime(void) +{ + uint32_t queue_count, queue_req, prio, deq_cnt; + struct rte_event_queue_conf qconf; + struct rte_event_port_conf pconf; + struct rte_event_dev_info info; + struct rte_event event = { + .op = RTE_EVENT_OP_NEW, + .event_type = RTE_EVENT_TYPE_CPU, + .sched_type = RTE_SCHED_TYPE_ATOMIC, + .u64 = 0xbadbadba, + }; + int i, ret; + + ret = rte_event_dev_info_get(TEST_DEV_ID, &info); + TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info"); + + if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR)) + return TEST_SKIPPED; + + TEST_ASSERT_SUCCESS(rte_event_dev_attr_get( + TEST_DEV_ID, RTE_EVENT_DEV_ATTR_QUEUE_COUNT, + &queue_count), + "Queue count get failed"); + + /* Need at least 2 queues to test LOW and HIGH priority. */ + TEST_ASSERT(queue_count > 1, "Not enough event queues, needed 2"); + queue_req = 2; + + for (i = 0; i < (int)queue_count; i++) { + ret = rte_event_queue_default_conf_get(TEST_DEV_ID, i, &qconf); + TEST_ASSERT_SUCCESS(ret, "Failed to get queue%d def conf", i); + ret = rte_event_queue_setup(TEST_DEV_ID, i, &qconf); + TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i); + } + + ret = rte_event_queue_attr_set(TEST_DEV_ID, 0, + RTE_EVENT_QUEUE_ATTR_PRIORITY, + RTE_EVENT_DEV_PRIORITY_LOWEST); + if (ret == -ENOTSUP) + return TEST_SKIPPED; + TEST_ASSERT_SUCCESS(ret, "Queue0 priority set failed"); + + ret = rte_event_queue_attr_set(TEST_DEV_ID, 1, + RTE_EVENT_QUEUE_ATTR_PRIORITY, + RTE_EVENT_DEV_PRIORITY_HIGHEST); + if (ret == -ENOTSUP) + return TEST_SKIPPED; + TEST_ASSERT_SUCCESS(ret, "Queue1 priority set failed"); + + /* Setup event port 0 */ + ret = rte_event_port_default_conf_get(TEST_DEV_ID, 0, &pconf); + TEST_ASSERT_SUCCESS(ret, "Failed to get port0 info"); + ret = rte_event_port_setup(TEST_DEV_ID, 0, &pconf); + TEST_ASSERT_SUCCESS(ret, "Failed to setup port0"); + ret = rte_event_port_link(TEST_DEV_ID, 0, NULL, NULL, 0); + TEST_ASSERT(ret == (int)queue_count, "Failed to link port, device %d", + TEST_DEV_ID); + + ret = rte_event_dev_start(TEST_DEV_ID); + TEST_ASSERT_SUCCESS(ret, "Failed to start device%d", TEST_DEV_ID); + + for (i = 0; i < (int)queue_req; i++) { + event.queue_id = i; + while (rte_event_enqueue_burst(TEST_DEV_ID, 0, &event, 1) != 1) + rte_pause(); + } + + prio = RTE_EVENT_DEV_PRIORITY_HIGHEST; + deq_cnt = 0; + while (deq_cnt < queue_req) { + uint32_t queue_prio; + + if (rte_event_dequeue_burst(TEST_DEV_ID, 0, &event, 1, 0) == 0) + continue; + + ret = rte_event_queue_attr_get(TEST_DEV_ID, event.queue_id, + RTE_EVENT_QUEUE_ATTR_PRIORITY, + &queue_prio); + if (ret == -ENOTSUP) + return TEST_SKIPPED; + + TEST_ASSERT_SUCCESS(ret, "Queue priority get failed"); + TEST_ASSERT(queue_prio >= prio, + "Received event from a lower priority queue first"); + prio = queue_prio; + deq_cnt++; + } + + return TEST_SUCCESS; +} + +static int +test_eventdev_queue_attr_weight_runtime(void) +{ + struct rte_event_queue_conf qconf; + struct rte_event_dev_info info; + uint32_t queue_count; + int i, ret; + + ret = rte_event_dev_info_get(TEST_DEV_ID, &info); + TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info"); + + if 
(!(info.event_dev_cap & RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR)) + return TEST_SKIPPED; + + TEST_ASSERT_SUCCESS(rte_event_dev_attr_get( + TEST_DEV_ID, RTE_EVENT_DEV_ATTR_QUEUE_COUNT, + &queue_count), + "Queue count get failed"); + + for (i = 0; i < (int)queue_count; i++) { + ret = rte_event_queue_default_conf_get(TEST_DEV_ID, i, &qconf); + TEST_ASSERT_SUCCESS(ret, "Failed to get queue%d def conf", i); + ret = rte_event_queue_setup(TEST_DEV_ID, i, &qconf); + TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i); + } + + for (i = 0; i < (int)queue_count; i++) { + uint32_t get_val; + uint64_t set_val; + + set_val = i % RTE_EVENT_QUEUE_WEIGHT_HIGHEST; + ret = rte_event_queue_attr_set( + TEST_DEV_ID, i, RTE_EVENT_QUEUE_ATTR_WEIGHT, set_val); + if (ret == -ENOTSUP) + return TEST_SKIPPED; + + TEST_ASSERT_SUCCESS(ret, "Queue weight set failed"); + + ret = rte_event_queue_attr_get( + TEST_DEV_ID, i, RTE_EVENT_QUEUE_ATTR_WEIGHT, &get_val); + if (ret == -ENOTSUP) + return TEST_SKIPPED; + + TEST_ASSERT_SUCCESS(ret, "Queue weight get failed"); + TEST_ASSERT_EQUAL(get_val, set_val, + "Wrong weight value for queue%d", i); + } + + return TEST_SUCCESS; +} + +static int +test_eventdev_queue_attr_affinity_runtime(void) +{ + struct rte_event_queue_conf qconf; + struct rte_event_dev_info info; + uint32_t queue_count; + int i, ret; + + ret = rte_event_dev_info_get(TEST_DEV_ID, &info); + TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info"); + + if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR)) + return TEST_SKIPPED; + + TEST_ASSERT_SUCCESS(rte_event_dev_attr_get( + TEST_DEV_ID, RTE_EVENT_DEV_ATTR_QUEUE_COUNT, + &queue_count), + "Queue count get failed"); + + for (i = 0; i < (int)queue_count; i++) { + ret = rte_event_queue_default_conf_get(TEST_DEV_ID, i, &qconf); + TEST_ASSERT_SUCCESS(ret, "Failed to get queue%d def conf", i); + ret = rte_event_queue_setup(TEST_DEV_ID, i, &qconf); + TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i); + } + + for (i = 0; i < (int)queue_count; i++) { + uint32_t get_val; + uint64_t set_val; + + set_val = i % RTE_EVENT_QUEUE_AFFINITY_HIGHEST; + ret = rte_event_queue_attr_set( + TEST_DEV_ID, i, RTE_EVENT_QUEUE_ATTR_AFFINITY, set_val); + if (ret == -ENOTSUP) + return TEST_SKIPPED; + + TEST_ASSERT_SUCCESS(ret, "Queue affinity set failed"); + + ret = rte_event_queue_attr_get( + TEST_DEV_ID, i, RTE_EVENT_QUEUE_ATTR_AFFINITY, &get_val); + if (ret == -ENOTSUP) + return TEST_SKIPPED; + + TEST_ASSERT_SUCCESS(ret, "Queue affinity get failed"); + TEST_ASSERT_EQUAL(get_val, set_val, + "Wrong affinity value for queue%d", i); + } + + return TEST_SUCCESS; +} + static int test_eventdev_queue_attr_nb_atomic_flows(void) { @@ -964,6 +1159,12 @@ static struct unit_test_suite eventdev_common_testsuite = { test_eventdev_queue_count), TEST_CASE_ST(eventdev_configure_setup, NULL, test_eventdev_queue_attr_priority), + TEST_CASE_ST(eventdev_configure_setup, eventdev_stop_device, + test_eventdev_queue_attr_priority_runtime), + TEST_CASE_ST(eventdev_configure_setup, NULL, + test_eventdev_queue_attr_weight_runtime), + TEST_CASE_ST(eventdev_configure_setup, NULL, + test_eventdev_queue_attr_affinity_runtime), TEST_CASE_ST(eventdev_configure_setup, NULL, test_eventdev_queue_attr_nb_atomic_flows), TEST_CASE_ST(eventdev_configure_setup, NULL, From patchwork Mon May 16 17:35:50 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shijith Thotton X-Patchwork-Id: 111192 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: 
From: Shijith Thotton
Subject: [PATCH v4 4/5] common/cnxk: use lock when accessing mbox of SSO
Date: Mon, 16 May 2022 23:05:50 +0530

From: Pavan Nikhilesh

Since the mbox is now accessed from multiple threads, use a lock to
synchronize access.
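
A condensed sketch of the pattern this patch applies throughout
roc_sso.c and roc_tim.c (mbox_alloc_msg_sso_grp_set_priority() stands in
for any request allocator; the types and helpers are the driver's own,
as seen in the diff below):

	static int
	sso_mbox_op(struct sso *sso)
	{
		struct dev *dev = &sso->dev;
		struct sso_grp_priority *req;
		int rc = -ENOSPC;

		/* One lock spans allocation and processing so concurrent
		 * threads cannot interleave their mbox requests.
		 */
		plt_spinlock_lock(&sso->mbox_lock);
		req = mbox_alloc_msg_sso_grp_set_priority(dev->mbox);
		if (req == NULL)
			goto fail;

		/* ... fill in the request fields here ... */

		rc = mbox_process(dev->mbox);
		if (rc)
			rc = -EIO;	/* mbox failures map to -EIO */
	fail:
		plt_spinlock_unlock(&sso->mbox_lock);
		return rc;
	}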
Signed-off-by: Pavan Nikhilesh Signed-off-by: Shijith Thotton --- drivers/common/cnxk/roc_sso.c | 174 +++++++++++++++++++++-------- drivers/common/cnxk/roc_sso_priv.h | 1 + drivers/common/cnxk/roc_tim.c | 134 ++++++++++++++-------- 3 files changed, 215 insertions(+), 94 deletions(-) diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c index f8a0a96533..358d37a9f2 100644 --- a/drivers/common/cnxk/roc_sso.c +++ b/drivers/common/cnxk/roc_sso.c @@ -36,8 +36,8 @@ sso_lf_alloc(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf, } rc = mbox_process_msg(dev->mbox, rsp); - if (rc < 0) - return rc; + if (rc) + return -EIO; return 0; } @@ -69,8 +69,8 @@ sso_lf_free(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf) } rc = mbox_process(dev->mbox); - if (rc < 0) - return rc; + if (rc) + return -EIO; return 0; } @@ -98,7 +98,7 @@ sso_rsrc_attach(struct roc_sso *roc_sso, enum sso_lf_type lf_type, } req->modify = true; - if (mbox_process(dev->mbox) < 0) + if (mbox_process(dev->mbox)) return -EIO; return 0; @@ -126,7 +126,7 @@ sso_rsrc_detach(struct roc_sso *roc_sso, enum sso_lf_type lf_type) } req->partial = true; - if (mbox_process(dev->mbox) < 0) + if (mbox_process(dev->mbox)) return -EIO; return 0; @@ -141,9 +141,9 @@ sso_rsrc_get(struct roc_sso *roc_sso) mbox_alloc_msg_free_rsrc_cnt(dev->mbox); rc = mbox_process_msg(dev->mbox, (void **)&rsrc_cnt); - if (rc < 0) { + if (rc) { plt_err("Failed to get free resource count\n"); - return rc; + return -EIO; } roc_sso->max_hwgrp = rsrc_cnt->sso; @@ -197,8 +197,8 @@ sso_msix_fill(struct roc_sso *roc_sso, uint16_t nb_hws, uint16_t nb_hwgrp) mbox_alloc_msg_msix_offset(dev->mbox); rc = mbox_process_msg(dev->mbox, (void **)&rsp); - if (rc < 0) - return rc; + if (rc) + return -EIO; for (i = 0; i < nb_hws; i++) sso->hws_msix_offset[i] = rsp->ssow_msixoff[i]; @@ -285,53 +285,71 @@ int roc_sso_hws_stats_get(struct roc_sso *roc_sso, uint8_t hws, struct roc_sso_hws_stats *stats) { - struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_sso); struct sso_hws_stats *req_rsp; + struct dev *dev = &sso->dev; int rc; + plt_spinlock_lock(&sso->mbox_lock); req_rsp = (struct sso_hws_stats *)mbox_alloc_msg_sso_hws_get_stats( dev->mbox); if (req_rsp == NULL) { rc = mbox_process(dev->mbox); - if (rc < 0) - return rc; + if (rc) { + rc = -EIO; + goto fail; + } req_rsp = (struct sso_hws_stats *) mbox_alloc_msg_sso_hws_get_stats(dev->mbox); - if (req_rsp == NULL) - return -ENOSPC; + if (req_rsp == NULL) { + rc = -ENOSPC; + goto fail; + } } req_rsp->hws = hws; rc = mbox_process_msg(dev->mbox, (void **)&req_rsp); - if (rc) - return rc; + if (rc) { + rc = -EIO; + goto fail; + } stats->arbitration = req_rsp->arbitration; - return 0; +fail: + plt_spinlock_unlock(&sso->mbox_lock); + return rc; } int roc_sso_hwgrp_stats_get(struct roc_sso *roc_sso, uint8_t hwgrp, struct roc_sso_hwgrp_stats *stats) { - struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_sso); struct sso_grp_stats *req_rsp; + struct dev *dev = &sso->dev; int rc; + plt_spinlock_lock(&sso->mbox_lock); req_rsp = (struct sso_grp_stats *)mbox_alloc_msg_sso_grp_get_stats( dev->mbox); if (req_rsp == NULL) { rc = mbox_process(dev->mbox); - if (rc < 0) - return rc; + if (rc) { + rc = -EIO; + goto fail; + } req_rsp = (struct sso_grp_stats *) mbox_alloc_msg_sso_grp_get_stats(dev->mbox); - if (req_rsp == NULL) - return -ENOSPC; + if (req_rsp == NULL) { + rc = -ENOSPC; + goto fail; + } } req_rsp->grp = hwgrp; rc = 
mbox_process_msg(dev->mbox, (void **)&req_rsp); - if (rc) - return rc; + if (rc) { + rc = -EIO; + goto fail; + } stats->aw_status = req_rsp->aw_status; stats->dq_pc = req_rsp->dq_pc; @@ -341,7 +359,10 @@ roc_sso_hwgrp_stats_get(struct roc_sso *roc_sso, uint8_t hwgrp, stats->ts_pc = req_rsp->ts_pc; stats->wa_pc = req_rsp->wa_pc; stats->ws_pc = req_rsp->ws_pc; - return 0; + +fail: + plt_spinlock_unlock(&sso->mbox_lock); + return rc; } int @@ -358,10 +379,12 @@ int roc_sso_hwgrp_qos_config(struct roc_sso *roc_sso, struct roc_sso_hwgrp_qos *qos, uint8_t nb_qos, uint32_t nb_xaq) { - struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_sso); + struct dev *dev = &sso->dev; struct sso_grp_qos_cfg *req; int i, rc; + plt_spinlock_lock(&sso->mbox_lock); for (i = 0; i < nb_qos; i++) { uint8_t xaq_prcnt = qos[i].xaq_prcnt; uint8_t iaq_prcnt = qos[i].iaq_prcnt; @@ -370,11 +393,16 @@ roc_sso_hwgrp_qos_config(struct roc_sso *roc_sso, struct roc_sso_hwgrp_qos *qos, req = mbox_alloc_msg_sso_grp_qos_config(dev->mbox); if (req == NULL) { rc = mbox_process(dev->mbox); - if (rc < 0) - return rc; + if (rc) { + rc = -EIO; + goto fail; + } + req = mbox_alloc_msg_sso_grp_qos_config(dev->mbox); - if (req == NULL) - return -ENOSPC; + if (req == NULL) { + rc = -ENOSPC; + goto fail; + } } req->grp = qos[i].hwgrp; req->xaq_limit = (nb_xaq * (xaq_prcnt ? xaq_prcnt : 100)) / 100; @@ -386,7 +414,12 @@ roc_sso_hwgrp_qos_config(struct roc_sso *roc_sso, struct roc_sso_hwgrp_qos *qos, 100; } - return mbox_process(dev->mbox); + rc = mbox_process(dev->mbox); + if (rc) + rc = -EIO; +fail: + plt_spinlock_unlock(&sso->mbox_lock); + return rc; } int @@ -482,11 +515,16 @@ sso_hwgrp_init_xaq_aura(struct dev *dev, struct roc_sso_xaq_data *xaq, int roc_sso_hwgrp_init_xaq_aura(struct roc_sso *roc_sso, uint32_t nb_xae) { - struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_sso); + struct dev *dev = &sso->dev; + int rc; - return sso_hwgrp_init_xaq_aura(dev, &roc_sso->xaq, nb_xae, - roc_sso->xae_waes, roc_sso->xaq_buf_size, - roc_sso->nb_hwgrp); + plt_spinlock_lock(&sso->mbox_lock); + rc = sso_hwgrp_init_xaq_aura(dev, &roc_sso->xaq, nb_xae, + roc_sso->xae_waes, roc_sso->xaq_buf_size, + roc_sso->nb_hwgrp); + plt_spinlock_unlock(&sso->mbox_lock); + return rc; } int @@ -515,9 +553,14 @@ sso_hwgrp_free_xaq_aura(struct dev *dev, struct roc_sso_xaq_data *xaq, int roc_sso_hwgrp_free_xaq_aura(struct roc_sso *roc_sso, uint16_t nb_hwgrp) { - struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_sso); + struct dev *dev = &sso->dev; + int rc; - return sso_hwgrp_free_xaq_aura(dev, &roc_sso->xaq, nb_hwgrp); + plt_spinlock_lock(&sso->mbox_lock); + rc = sso_hwgrp_free_xaq_aura(dev, &roc_sso->xaq, nb_hwgrp); + plt_spinlock_unlock(&sso->mbox_lock); + return rc; } int @@ -533,16 +576,24 @@ sso_hwgrp_alloc_xaq(struct dev *dev, uint32_t npa_aura_id, uint16_t hwgrps) req->npa_aura_id = npa_aura_id; req->hwgrps = hwgrps; - return mbox_process(dev->mbox); + if (mbox_process(dev->mbox)) + return -EIO; + + return 0; } int roc_sso_hwgrp_alloc_xaq(struct roc_sso *roc_sso, uint32_t npa_aura_id, uint16_t hwgrps) { - struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_sso); + struct dev *dev = &sso->dev; + int rc; - return sso_hwgrp_alloc_xaq(dev, npa_aura_id, hwgrps); + plt_spinlock_lock(&sso->mbox_lock); + rc = sso_hwgrp_alloc_xaq(dev, npa_aura_id, hwgrps); + 
plt_spinlock_unlock(&sso->mbox_lock); + return rc; } int @@ -555,40 +606,56 @@ sso_hwgrp_release_xaq(struct dev *dev, uint16_t hwgrps) return -EINVAL; req->hwgrps = hwgrps; - return mbox_process(dev->mbox); + if (mbox_process(dev->mbox)) + return -EIO; + + return 0; } int roc_sso_hwgrp_release_xaq(struct roc_sso *roc_sso, uint16_t hwgrps) { - struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_sso); + struct dev *dev = &sso->dev; + int rc; - return sso_hwgrp_release_xaq(dev, hwgrps); + plt_spinlock_lock(&sso->mbox_lock); + rc = sso_hwgrp_release_xaq(dev, hwgrps); + plt_spinlock_unlock(&sso->mbox_lock); + return rc; } int roc_sso_hwgrp_set_priority(struct roc_sso *roc_sso, uint16_t hwgrp, uint8_t weight, uint8_t affinity, uint8_t priority) { - struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_sso); + struct dev *dev = &sso->dev; struct sso_grp_priority *req; int rc = -ENOSPC; + plt_spinlock_lock(&sso->mbox_lock); req = mbox_alloc_msg_sso_grp_set_priority(dev->mbox); if (req == NULL) - return rc; + goto fail; req->grp = hwgrp; req->weight = weight; req->affinity = affinity; req->priority = priority; rc = mbox_process(dev->mbox); - if (rc < 0) - return rc; + if (rc) { + rc = -EIO; + goto fail; + } + plt_spinlock_unlock(&sso->mbox_lock); plt_sso_dbg("HWGRP %d weight %d affinity %d priority %d", hwgrp, weight, affinity, priority); return 0; +fail: + plt_spinlock_unlock(&sso->mbox_lock); + return rc; } int @@ -603,10 +670,11 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp) if (roc_sso->max_hws < nb_hws) return -ENOENT; + plt_spinlock_lock(&sso->mbox_lock); rc = sso_rsrc_attach(roc_sso, SSO_LF_TYPE_HWS, nb_hws); if (rc < 0) { plt_err("Unable to attach SSO HWS LFs"); - return rc; + goto fail; } rc = sso_rsrc_attach(roc_sso, SSO_LF_TYPE_HWGRP, nb_hwgrp); @@ -645,6 +713,7 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp) goto sso_msix_fail; } + plt_spinlock_unlock(&sso->mbox_lock); roc_sso->nb_hwgrp = nb_hwgrp; roc_sso->nb_hws = nb_hws; @@ -657,6 +726,8 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp) sso_rsrc_detach(roc_sso, SSO_LF_TYPE_HWGRP); hwgrp_atch_fail: sso_rsrc_detach(roc_sso, SSO_LF_TYPE_HWS); +fail: + plt_spinlock_unlock(&sso->mbox_lock); return rc; } @@ -678,6 +749,7 @@ roc_sso_rsrc_fini(struct roc_sso *roc_sso) roc_sso->nb_hwgrp = 0; roc_sso->nb_hws = 0; + plt_spinlock_unlock(&sso->mbox_lock); } int @@ -696,6 +768,7 @@ roc_sso_dev_init(struct roc_sso *roc_sso) sso = roc_sso_to_sso_priv(roc_sso); memset(sso, 0, sizeof(*sso)); pci_dev = roc_sso->pci_dev; + plt_spinlock_init(&sso->mbox_lock); rc = dev_init(&sso->dev, pci_dev); if (rc < 0) { @@ -703,6 +776,7 @@ roc_sso_dev_init(struct roc_sso *roc_sso) goto fail; } + plt_spinlock_lock(&sso->mbox_lock); rc = sso_rsrc_get(roc_sso); if (rc < 0) { plt_err("Failed to get SSO resources"); @@ -739,6 +813,7 @@ roc_sso_dev_init(struct roc_sso *roc_sso) sso->pci_dev = pci_dev; sso->dev.drv_inited = true; roc_sso->lmt_base = sso->dev.lmt_base; + plt_spinlock_unlock(&sso->mbox_lock); return 0; link_mem_free: @@ -746,6 +821,7 @@ roc_sso_dev_init(struct roc_sso *roc_sso) rsrc_fail: rc |= dev_fini(&sso->dev, pci_dev); fail: + plt_spinlock_unlock(&sso->mbox_lock); return rc; } diff --git a/drivers/common/cnxk/roc_sso_priv.h b/drivers/common/cnxk/roc_sso_priv.h index 09729d4f62..674e4e0a39 100644 --- a/drivers/common/cnxk/roc_sso_priv.h +++ 
b/drivers/common/cnxk/roc_sso_priv.h @@ -22,6 +22,7 @@ struct sso { /* SSO link mapping. */ struct plt_bitmap **link_map; void *link_map_mem; + plt_spinlock_t mbox_lock; } __plt_cache_aligned; enum sso_err_status { diff --git a/drivers/common/cnxk/roc_tim.c b/drivers/common/cnxk/roc_tim.c index cefd9bc89d..0f9209937b 100644 --- a/drivers/common/cnxk/roc_tim.c +++ b/drivers/common/cnxk/roc_tim.c @@ -8,15 +8,16 @@ static int tim_fill_msix(struct roc_tim *roc_tim, uint16_t nb_ring) { - struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso); struct tim *tim = roc_tim_to_tim_priv(roc_tim); + struct dev *dev = &sso->dev; struct msix_offset_rsp *rsp; int i, rc; mbox_alloc_msg_msix_offset(dev->mbox); rc = mbox_process_msg(dev->mbox, (void **)&rsp); - if (rc < 0) - return rc; + if (rc) + return -EIO; for (i = 0; i < nb_ring; i++) tim->tim_msix_offsets[i] = rsp->timlf_msixoff[i]; @@ -88,20 +89,23 @@ int roc_tim_lf_enable(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *start_tsc, uint32_t *cur_bkt) { - struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso); + struct dev *dev = &sso->dev; struct tim_enable_rsp *rsp; struct tim_ring_req *req; int rc = -ENOSPC; + plt_spinlock_lock(&sso->mbox_lock); req = mbox_alloc_msg_tim_enable_ring(dev->mbox); if (req == NULL) - return rc; + goto fail; req->ring = ring_id; rc = mbox_process_msg(dev->mbox, (void **)&rsp); - if (rc < 0) { + if (rc) { tim_err_desc(rc); - return rc; + rc = -EIO; + goto fail; } if (cur_bkt) @@ -109,28 +113,34 @@ roc_tim_lf_enable(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *start_tsc, if (start_tsc) *start_tsc = rsp->timestarted; - return 0; +fail: + plt_spinlock_unlock(&sso->mbox_lock); + return rc; } int roc_tim_lf_disable(struct roc_tim *roc_tim, uint8_t ring_id) { - struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso); + struct dev *dev = &sso->dev; struct tim_ring_req *req; int rc = -ENOSPC; + plt_spinlock_lock(&sso->mbox_lock); req = mbox_alloc_msg_tim_disable_ring(dev->mbox); if (req == NULL) - return rc; + goto fail; req->ring = ring_id; rc = mbox_process(dev->mbox); - if (rc < 0) { + if (rc) { tim_err_desc(rc); - return rc; + rc = -EIO; } - return 0; +fail: + plt_spinlock_unlock(&sso->mbox_lock); + return rc; } uintptr_t @@ -147,13 +157,15 @@ roc_tim_lf_config(struct roc_tim *roc_tim, uint8_t ring_id, uint8_t ena_dfb, uint32_t bucket_sz, uint32_t chunk_sz, uint32_t interval, uint64_t intervalns, uint64_t clockfreq) { - struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso); + struct dev *dev = &sso->dev; struct tim_config_req *req; int rc = -ENOSPC; + plt_spinlock_lock(&sso->mbox_lock); req = mbox_alloc_msg_tim_config_ring(dev->mbox); if (req == NULL) - return rc; + goto fail; req->ring = ring_id; req->bigendian = false; req->bucketsize = bucket_sz; @@ -167,12 +179,14 @@ roc_tim_lf_config(struct roc_tim *roc_tim, uint8_t ring_id, req->gpioedge = TIM_GPIO_LTOH_TRANS; rc = mbox_process(dev->mbox); - if (rc < 0) { + if (rc) { tim_err_desc(rc); - return rc; + rc = -EIO; } - return 0; +fail: + plt_spinlock_unlock(&sso->mbox_lock); + return rc; } int @@ -180,27 +194,32 @@ roc_tim_lf_interval(struct roc_tim *roc_tim, enum roc_tim_clk_src clk_src, uint64_t clockfreq, uint64_t *intervalns, uint64_t *interval) { - struct dev *dev = 
&roc_sso_to_sso_priv(roc_tim->roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso); + struct dev *dev = &sso->dev; struct tim_intvl_req *req; struct tim_intvl_rsp *rsp; int rc = -ENOSPC; + plt_spinlock_lock(&sso->mbox_lock); req = mbox_alloc_msg_tim_get_min_intvl(dev->mbox); if (req == NULL) - return rc; + goto fail; req->clockfreq = clockfreq; req->clocksource = clk_src; rc = mbox_process_msg(dev->mbox, (void **)&rsp); - if (rc < 0) { + if (rc) { tim_err_desc(rc); - return rc; + rc = -EIO; + goto fail; } *intervalns = rsp->intvl_ns; *interval = rsp->intvl_cyc; - return 0; +fail: + plt_spinlock_unlock(&sso->mbox_lock); + return rc; } int @@ -214,17 +233,19 @@ roc_tim_lf_alloc(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *clk) struct dev *dev = &sso->dev; int rc = -ENOSPC; + plt_spinlock_lock(&sso->mbox_lock); req = mbox_alloc_msg_tim_lf_alloc(dev->mbox); if (req == NULL) - return rc; + goto fail; req->npa_pf_func = idev_npa_pffunc_get(); req->sso_pf_func = idev_sso_pffunc_get(); req->ring = ring_id; rc = mbox_process_msg(dev->mbox, (void **)&rsp); - if (rc < 0) { + if (rc) { tim_err_desc(rc); - return rc; + rc = -EIO; + goto fail; } if (clk) @@ -235,12 +256,18 @@ roc_tim_lf_alloc(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *clk) if (rc < 0) { plt_tim_dbg("Failed to register Ring[%d] IRQ", ring_id); free_req = mbox_alloc_msg_tim_lf_free(dev->mbox); - if (free_req == NULL) - return -ENOSPC; + if (free_req == NULL) { + rc = -ENOSPC; + goto fail; + } free_req->ring = ring_id; - mbox_process(dev->mbox); + rc = mbox_process(dev->mbox); + if (rc) + rc = -EIO; } +fail: + plt_spinlock_unlock(&sso->mbox_lock); return rc; } @@ -256,17 +283,20 @@ roc_tim_lf_free(struct roc_tim *roc_tim, uint8_t ring_id) tim_unregister_irq_priv(roc_tim, sso->pci_dev->intr_handle, ring_id, tim->tim_msix_offsets[ring_id]); + plt_spinlock_lock(&sso->mbox_lock); req = mbox_alloc_msg_tim_lf_free(dev->mbox); if (req == NULL) - return rc; + goto fail; req->ring = ring_id; rc = mbox_process(dev->mbox); if (rc < 0) { tim_err_desc(rc); - return rc; + rc = -EIO; } +fail: + plt_spinlock_unlock(&sso->mbox_lock); return 0; } @@ -276,40 +306,48 @@ roc_tim_init(struct roc_tim *roc_tim) struct rsrc_attach_req *attach_req; struct rsrc_detach_req *detach_req; struct free_rsrcs_rsp *free_rsrc; - struct dev *dev; + struct sso *sso; uint16_t nb_lfs; + struct dev *dev; int rc; if (roc_tim == NULL || roc_tim->roc_sso == NULL) return TIM_ERR_PARAM; + sso = roc_sso_to_sso_priv(roc_tim->roc_sso); + dev = &sso->dev; PLT_STATIC_ASSERT(sizeof(struct tim) <= TIM_MEM_SZ); - dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev; nb_lfs = roc_tim->nb_lfs; + plt_spinlock_lock(&sso->mbox_lock); mbox_alloc_msg_free_rsrc_cnt(dev->mbox); rc = mbox_process_msg(dev->mbox, (void *)&free_rsrc); - if (rc < 0) { + if (rc) { plt_err("Unable to get free rsrc count."); - return 0; + nb_lfs = 0; + goto fail; } if (nb_lfs && (free_rsrc->tim < nb_lfs)) { plt_tim_dbg("Requested LFs : %d Available LFs : %d", nb_lfs, free_rsrc->tim); - return 0; + nb_lfs = 0; + goto fail; } attach_req = mbox_alloc_msg_attach_resources(dev->mbox); - if (attach_req == NULL) - return -ENOSPC; + if (attach_req == NULL) { + nb_lfs = 0; + goto fail; + } attach_req->modify = true; attach_req->timlfs = nb_lfs ? 
nb_lfs : free_rsrc->tim;
 	nb_lfs = attach_req->timlfs;
 
 	rc = mbox_process(dev->mbox);
-	if (rc < 0) {
+	if (rc) {
 		plt_err("Unable to attach TIM LFs.");
-		return 0;
+		nb_lfs = 0;
+		goto fail;
 	}
 
 	rc = tim_fill_msix(roc_tim, nb_lfs);
@@ -317,28 +355,34 @@
 		plt_err("Unable to get TIM MSIX vectors");
 
 		detach_req = mbox_alloc_msg_detach_resources(dev->mbox);
-		if (detach_req == NULL)
-			return -ENOSPC;
+		if (detach_req == NULL) {
+			nb_lfs = 0;
+			goto fail;
+		}
 		detach_req->partial = true;
 		detach_req->timlfs = true;
 		mbox_process(dev->mbox);
-
-		return 0;
+		nb_lfs = 0;
 	}
 
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
 	return nb_lfs;
 }
 
 void
 roc_tim_fini(struct roc_tim *roc_tim)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso);
 	struct rsrc_detach_req *detach_req;
+	struct dev *dev = &sso->dev;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	detach_req = mbox_alloc_msg_detach_resources(dev->mbox);
 	PLT_ASSERT(detach_req);
 	detach_req->partial = true;
 	detach_req->timlfs = true;
 	mbox_process(dev->mbox);
+	plt_spinlock_unlock(&sso->mbox_lock);
 }

From patchwork Mon May 16 17:35:51 2022
X-Patchwork-Submitter: Shijith Thotton
X-Patchwork-Id: 111193
X-Patchwork-Delegate: jerinj@marvell.com
From: Shijith Thotton
Subject: [PATCH v4 5/5] event/cnxk: support to set runtime queue attributes
Date: Mon, 16 May 2022 23:05:51 +0530

Added an API to set queue attributes at runtime and an API to get the
weight and affinity attributes.

Signed-off-by: Shijith Thotton
---
 doc/guides/eventdevs/features/cnxk.ini |  1 +
 drivers/event/cnxk/cn10k_eventdev.c    |  4 ++
 drivers/event/cnxk/cn9k_eventdev.c     |  4 ++
 drivers/event/cnxk/cnxk_eventdev.c     | 91 ++++++++++++++++++++++++--
 drivers/event/cnxk/cnxk_eventdev.h     | 19 ++++++
 5 files changed, 113 insertions(+), 6 deletions(-)

diff --git a/doc/guides/eventdevs/features/cnxk.ini b/doc/guides/eventdevs/features/cnxk.ini
index 7633c6e3a2..bee69bf8f4 100644
--- a/doc/guides/eventdevs/features/cnxk.ini
+++ b/doc/guides/eventdevs/features/cnxk.ini
@@ -12,6 +12,7 @@ runtime_port_link          = Y
 multiple_queue_port        = Y
 carry_flow_id              = Y
 maintenance_free           = Y
+runtime_queue_attr         = y
 
 [Eth Rx adapter Features]
 internal_port              = Y

diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 94829e789c..450e1bf50c 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -846,9 +846,13 @@ cn10k_crypto_adapter_qp_del(const struct rte_eventdev *event_dev,
 static struct eventdev_ops cn10k_sso_dev_ops = {
 	.dev_infos_get = cn10k_sso_info_get,
 	.dev_configure = cn10k_sso_dev_configure,
+
 	.queue_def_conf = cnxk_sso_queue_def_conf,
 	.queue_setup = cnxk_sso_queue_setup,
 	.queue_release = cnxk_sso_queue_release,
+	.queue_attr_get = cnxk_sso_queue_attribute_get,
+	.queue_attr_set = cnxk_sso_queue_attribute_set,
+
 	.port_def_conf = cnxk_sso_port_def_conf,
 	.port_setup = cn10k_sso_port_setup,
 	.port_release = cn10k_sso_port_release,

diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 987888d3db..3de22d7f4e 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -1084,9 +1084,13 @@ cn9k_crypto_adapter_qp_del(const struct rte_eventdev *event_dev,
 static struct eventdev_ops cn9k_sso_dev_ops = {
 	.dev_infos_get = cn9k_sso_info_get,
 	.dev_configure = cn9k_sso_dev_configure,
+
 	.queue_def_conf = cnxk_sso_queue_def_conf,
 	.queue_setup = cnxk_sso_queue_setup,
 	.queue_release = cnxk_sso_queue_release,
+	.queue_attr_get = cnxk_sso_queue_attribute_get,
+	.queue_attr_set = cnxk_sso_queue_attribute_set,
+
 	.port_def_conf = cnxk_sso_port_def_conf,
 	.port_setup = cn9k_sso_port_setup,
 	.port_release = cn9k_sso_port_release,

diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index be021d86c9..a2829b817e 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -120,7 +120,8 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
 		RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
 		RTE_EVENT_DEV_CAP_NONSEQ_MODE |
 		RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
-		RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
+		RTE_EVENT_DEV_CAP_MAINTENANCE_FREE |
+		RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR;
 }
 
 int
@@ -300,11 +301,27 @@
cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id, const struct rte_event_queue_conf *queue_conf) { struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev); - - plt_sso_dbg("Queue=%d prio=%d", queue_id, queue_conf->priority); - /* Normalize <0-255> to <0-7> */ - return roc_sso_hwgrp_set_priority(&dev->sso, queue_id, 0xFF, 0xFF, - queue_conf->priority / 32); + uint8_t priority, weight, affinity; + + /* Default weight and affinity */ + dev->mlt_prio[queue_id].weight = RTE_EVENT_QUEUE_WEIGHT_LOWEST; + dev->mlt_prio[queue_id].affinity = RTE_EVENT_QUEUE_AFFINITY_HIGHEST; + + priority = CNXK_QOS_NORMALIZE(queue_conf->priority, 0, + RTE_EVENT_DEV_PRIORITY_LOWEST, + CNXK_SSO_PRIORITY_CNT); + weight = CNXK_QOS_NORMALIZE( + dev->mlt_prio[queue_id].weight, CNXK_SSO_WEIGHT_MIN, + RTE_EVENT_QUEUE_WEIGHT_HIGHEST, CNXK_SSO_WEIGHT_CNT); + affinity = CNXK_QOS_NORMALIZE(dev->mlt_prio[queue_id].affinity, 0, + RTE_EVENT_QUEUE_AFFINITY_HIGHEST, + CNXK_SSO_AFFINITY_CNT); + + plt_sso_dbg("Queue=%u prio=%u weight=%u affinity=%u", queue_id, + priority, weight, affinity); + + return roc_sso_hwgrp_set_priority(&dev->sso, queue_id, weight, affinity, + priority); } void @@ -314,6 +331,68 @@ cnxk_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id) RTE_SET_USED(queue_id); } +int +cnxk_sso_queue_attribute_get(struct rte_eventdev *event_dev, uint8_t queue_id, + uint32_t attr_id, uint32_t *attr_value) +{ + struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev); + + if (attr_id == RTE_EVENT_QUEUE_ATTR_WEIGHT) + *attr_value = dev->mlt_prio[queue_id].weight; + else if (attr_id == RTE_EVENT_QUEUE_ATTR_AFFINITY) + *attr_value = dev->mlt_prio[queue_id].affinity; + else + return -EINVAL; + + return 0; +} + +int +cnxk_sso_queue_attribute_set(struct rte_eventdev *event_dev, uint8_t queue_id, + uint32_t attr_id, uint64_t attr_value) +{ + struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev); + uint8_t priority, weight, affinity; + struct rte_event_queue_conf *conf; + + conf = &event_dev->data->queues_cfg[queue_id]; + + switch (attr_id) { + case RTE_EVENT_QUEUE_ATTR_PRIORITY: + conf->priority = attr_value; + break; + case RTE_EVENT_QUEUE_ATTR_WEIGHT: + dev->mlt_prio[queue_id].weight = attr_value; + break; + case RTE_EVENT_QUEUE_ATTR_AFFINITY: + dev->mlt_prio[queue_id].affinity = attr_value; + break; + case RTE_EVENT_QUEUE_ATTR_NB_ATOMIC_FLOWS: + case RTE_EVENT_QUEUE_ATTR_NB_ATOMIC_ORDER_SEQUENCES: + case RTE_EVENT_QUEUE_ATTR_EVENT_QUEUE_CFG: + case RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE: + /* FALLTHROUGH */ + plt_sso_dbg("Unsupported attribute id %u", attr_id); + return -ENOTSUP; + default: + plt_err("Invalid attribute id %u", attr_id); + return -EINVAL; + } + + priority = CNXK_QOS_NORMALIZE(conf->priority, 0, + RTE_EVENT_DEV_PRIORITY_LOWEST, + CNXK_SSO_PRIORITY_CNT); + weight = CNXK_QOS_NORMALIZE( + dev->mlt_prio[queue_id].weight, CNXK_SSO_WEIGHT_MIN, + RTE_EVENT_QUEUE_WEIGHT_HIGHEST, CNXK_SSO_WEIGHT_CNT); + affinity = CNXK_QOS_NORMALIZE(dev->mlt_prio[queue_id].affinity, 0, + RTE_EVENT_QUEUE_AFFINITY_HIGHEST, + CNXK_SSO_AFFINITY_CNT); + + return roc_sso_hwgrp_set_priority(&dev->sso, queue_id, weight, affinity, + priority); +} + void cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id, struct rte_event_port_conf *port_conf) diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h index 5564746e6d..531f6d1a84 100644 --- a/drivers/event/cnxk/cnxk_eventdev.h +++ b/drivers/event/cnxk/cnxk_eventdev.h @@ -38,6 +38,11 @@ #define CNXK_SSO_XAQ_CACHE_CNT 
(0x7) #define CNXK_SSO_XAQ_SLACK (8) #define CNXK_SSO_WQE_SG_PTR (9) +#define CNXK_SSO_PRIORITY_CNT (0x8) +#define CNXK_SSO_WEIGHT_MAX (0x3f) +#define CNXK_SSO_WEIGHT_MIN (0x3) +#define CNXK_SSO_WEIGHT_CNT (CNXK_SSO_WEIGHT_MAX - CNXK_SSO_WEIGHT_MIN + 1) +#define CNXK_SSO_AFFINITY_CNT (0x10) #define CNXK_TT_FROM_TAG(x) (((x) >> 32) & SSO_TT_EMPTY) #define CNXK_TT_FROM_EVENT(x) (((x) >> 38) & SSO_TT_EMPTY) @@ -54,6 +59,8 @@ #define CN10K_GW_MODE_PREF 1 #define CN10K_GW_MODE_PREF_WFE 2 +#define CNXK_QOS_NORMALIZE(val, min, max, cnt) \ + (min + val / ((max + cnt - 1) / cnt)) #define CNXK_VALID_DEV_OR_ERR_RET(dev, drv_name) \ do { \ if (strncmp(dev->driver->name, drv_name, strlen(drv_name))) \ @@ -79,6 +86,11 @@ struct cnxk_sso_qos { uint16_t iaq_prcnt; }; +struct cnxk_sso_mlt_prio { + uint8_t weight; + uint8_t affinity; +}; + struct cnxk_sso_evdev { struct roc_sso sso; uint8_t max_event_queues; @@ -108,6 +120,7 @@ struct cnxk_sso_evdev { uint64_t *timer_adptr_sz; uint16_t vec_pool_cnt; uint64_t *vec_pools; + struct cnxk_sso_mlt_prio mlt_prio[RTE_EVENT_MAX_QUEUES_PER_DEV]; /* Dev args */ uint32_t xae_cnt; uint8_t qos_queue_cnt; @@ -234,6 +247,12 @@ void cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id, int cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id, const struct rte_event_queue_conf *queue_conf); void cnxk_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id); +int cnxk_sso_queue_attribute_get(struct rte_eventdev *event_dev, + uint8_t queue_id, uint32_t attr_id, + uint32_t *attr_value); +int cnxk_sso_queue_attribute_set(struct rte_eventdev *event_dev, + uint8_t queue_id, uint32_t attr_id, + uint64_t attr_value); void cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id, struct rte_event_port_conf *port_conf); int cnxk_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,