From patchwork Sun May 15 09:53:09 2022
X-Patchwork-Submitter: Shijith Thotton
X-Patchwork-Id: 111147
Subject: [PATCH v3 1/5] eventdev: support to set queue attributes at runtime
Date: Sun, 15 May 2022 15:23:09 +0530

Added a new eventdev API rte_event_queue_attr_set() to change event queue
attributes at runtime from the values set during initialization using
rte_event_queue_setup(). PMDs supporting this feature should expose the
capability RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR.
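
As a usage sketch (not part of the patch; dev_id and queue_id are arbitrary
placeholders), an application is expected to gate the new call on the
capability before changing an attribute on a started device:

#include <rte_eventdev.h>

/* Minimal sketch: raise the priority of one queue at runtime. Assumes the
 * device has been configured, the queue set up and the device started,
 * mirroring the flow exercised by the unit tests in patch 3/5.
 */
static int
set_queue_priority_runtime(uint8_t dev_id, uint8_t queue_id)
{
	struct rte_event_dev_info info;
	int ret;

	ret = rte_event_dev_info_get(dev_id, &info);
	if (ret < 0)
		return ret;

	/* Runtime changes are only legal when the PMD advertises the cap. */
	if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR))
		return -ENOTSUP;

	return rte_event_queue_attr_set(dev_id, queue_id,
					RTE_EVENT_QUEUE_ATTR_PRIORITY,
					RTE_EVENT_DEV_PRIORITY_HIGHEST);
}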
Signed-off-by: Shijith Thotton Acked-by: Jerin Jacob --- doc/guides/eventdevs/features/default.ini | 1 + doc/guides/rel_notes/release_22_07.rst | 5 ++++ lib/eventdev/eventdev_pmd.h | 22 +++++++++++++++ lib/eventdev/rte_eventdev.c | 26 ++++++++++++++++++ lib/eventdev/rte_eventdev.h | 33 ++++++++++++++++++++++- lib/eventdev/version.map | 3 +++ 6 files changed, 89 insertions(+), 1 deletion(-) diff --git a/doc/guides/eventdevs/features/default.ini b/doc/guides/eventdevs/features/default.ini index 2ea233463a..00360f60c6 100644 --- a/doc/guides/eventdevs/features/default.ini +++ b/doc/guides/eventdevs/features/default.ini @@ -17,6 +17,7 @@ runtime_port_link = multiple_queue_port = carry_flow_id = maintenance_free = +runtime_queue_attr = ; ; Features of a default Ethernet Rx adapter. diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst index 88d6e96cc1..a7a912d665 100644 --- a/doc/guides/rel_notes/release_22_07.rst +++ b/doc/guides/rel_notes/release_22_07.rst @@ -65,6 +65,11 @@ New Features * Added support for promiscuous mode on Windows. * Added support for MTU on Windows. +* **Added support for setting queue attributes at runtime in eventdev.** + + Added new API ``rte_event_queue_attr_set()``, to set event queue attributes + at runtime. + Removed Items ------------- diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h index ce469d47a6..3b85d9f7a5 100644 --- a/lib/eventdev/eventdev_pmd.h +++ b/lib/eventdev/eventdev_pmd.h @@ -341,6 +341,26 @@ typedef int (*eventdev_queue_setup_t)(struct rte_eventdev *dev, typedef void (*eventdev_queue_release_t)(struct rte_eventdev *dev, uint8_t queue_id); +/** + * Set an event queue attribute at runtime. + * + * @param dev + * Event device pointer + * @param queue_id + * Event queue index + * @param attr_id + * Event queue attribute id + * @param attr_value + * Event queue attribute value + * + * @return + * - 0: Success. + * - <0: Error code on failure. + */ +typedef int (*eventdev_queue_attr_set_t)(struct rte_eventdev *dev, + uint8_t queue_id, uint32_t attr_id, + uint64_t attr_value); + /** * Retrieve the default event port configuration. * @@ -1211,6 +1231,8 @@ struct eventdev_ops { /**< Set up an event queue. */ eventdev_queue_release_t queue_release; /**< Release an event queue. */ + eventdev_queue_attr_set_t queue_attr_set; + /**< Set an event queue attribute. */ eventdev_port_default_conf_get_t port_def_conf; /**< Get default port configuration. 
 */
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 532a253553..a31e99be02 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -844,6 +844,32 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
 	return 0;
 }
 
+int
+rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
+			 uint64_t attr_value)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+	if (!is_valid_queue(dev, queue_id)) {
+		RTE_EDEV_LOG_ERR("Invalid queue_id=%" PRIu8, queue_id);
+		return -EINVAL;
+	}
+
+	if (!(dev->data->event_dev_cap &
+	      RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR)) {
+		RTE_EDEV_LOG_ERR(
+			"Device %" PRIu8 " does not support changing queue attributes at runtime",
+			dev_id);
+		return -ENOTSUP;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_attr_set, -ENOTSUP);
+	return (*dev->dev_ops->queue_attr_set)(dev, queue_id, attr_id,
+					       attr_value);
+}
+
 int
 rte_event_port_link(uint8_t dev_id, uint8_t port_id, const uint8_t queues[],
 		    const uint8_t priorities[],
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 42a5660169..c1163ee8ec 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -225,7 +225,7 @@ struct rte_event;
 /**< Event scheduling prioritization is based on the priority associated with
  * each event queue.
  *
- * @see rte_event_queue_setup()
+ * @see rte_event_queue_setup(), rte_event_queue_attr_set()
  */
 #define RTE_EVENT_DEV_CAP_EVENT_QOS (1ULL << 1)
 /**< Event scheduling prioritization is based on the priority associated with
@@ -307,6 +307,13 @@ struct rte_event;
  * global pool, or process signaling related to load balancing.
  */
 
+#define RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR (1ULL << 11)
+/**< Event device is capable of changing the queue attributes at runtime,
+ * i.e. after the rte_event_queue_setup() or rte_event_dev_start() call
+ * sequence. If this flag is not set, eventdev queue attributes can only be
+ * configured during rte_event_queue_setup().
+ */
+
 /* Event device priority levels */
 #define RTE_EVENT_DEV_PRIORITY_HIGHEST 0
 /**< Highest priority expressed across eventdev subsystem
@@ -702,6 +709,30 @@ int
 rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
 			uint32_t *attr_value);
 
+/**
+ * Set an event queue attribute.
+ *
+ * @param dev_id
+ *   Eventdev id
+ * @param queue_id
+ *   Eventdev queue id
+ * @param attr_id
+ *   The attribute ID to set
+ * @param attr_value
+ *   The attribute value to set
+ *
+ * @return
+ *   - 0: Successfully set attribute.
+ *   - -EINVAL: invalid device, queue or attr_id.
+ *   - -ENOTSUP: device does not support setting the event queue attribute.
+ *   - -EBUSY: device is in running state.
+ *   - <0: failed to set the event queue attribute.
+ */
+__rte_experimental
+int
+rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
+			 uint64_t attr_value);
+
 /* Event port specific APIs */
 
 /* Event port configuration bitmap flags */
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index cd5dada07f..c581b75c18 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -108,6 +108,9 @@ EXPERIMENTAL {
 
 	# added in 22.03
 	rte_event_eth_rx_adapter_event_port_get;
+
+	# added in 22.07
+	rte_event_queue_attr_set;
 };
 
 INTERNAL {

From patchwork Sun May 15 09:53:10 2022
X-Patchwork-Submitter: Shijith Thotton
X-Patchwork-Id: 111146
Subject: [PATCH v3 2/5] eventdev: add weight and affinity to queue attributes
Date: Sun, 15 May 2022 15:23:10 +0530
Extended eventdev queue QoS attributes to support weight and affinity.
If queues are of the same priority, events from the queue with the
highest weight are scheduled first. Affinity indicates the number of
subsequent schedule calls from an event port that will use the same
event queue. The scheduler selects another queue if the current queue
goes empty or the schedule count reaches the affinity count.

To avoid an ABI break, the weight and affinity attributes are not yet
added to the queue config structure and are managed by the PMD instead.
The new eventdev op queue_attr_get can be used to get them from the PMD.

Signed-off-by: Shijith Thotton
Acked-by: Jerin Jacob
---
 doc/guides/rel_notes/release_22_07.rst |  7 +++++
 lib/eventdev/eventdev_pmd.h            | 22 +++++++++++++++
 lib/eventdev/rte_eventdev.c            | 12 ++++++++
 lib/eventdev/rte_eventdev.h            | 38 ++++++++++++++++++++++++--
 4 files changed, 77 insertions(+), 2 deletions(-)

diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index a7a912d665..f35a31bbdf 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -70,6 +70,13 @@ New Features
   Added new API ``rte_event_queue_attr_set()``, to set event queue attributes
   at runtime.
 
+* **Added new queue attributes weight and affinity in eventdev.**
+
+  Defined new event queue attributes weight and affinity as below:
+
+  * ``RTE_EVENT_QUEUE_ATTR_WEIGHT``
+  * ``RTE_EVENT_QUEUE_ATTR_AFFINITY``
+
 Removed Items
 -------------
 
diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index 3b85d9f7a5..5495aee4f6 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -341,6 +341,26 @@ typedef int (*eventdev_queue_setup_t)(struct rte_eventdev *dev,
 typedef void (*eventdev_queue_release_t)(struct rte_eventdev *dev,
 		uint8_t queue_id);
 
+/**
+ * Get an event queue attribute at runtime.
+ *
+ * @param dev
+ *   Event device pointer
+ * @param queue_id
+ *   Event queue index
+ * @param attr_id
+ *   Event queue attribute id
+ * @param[out] attr_value
+ *   Event queue attribute value
+ *
+ * @return
+ *   - 0: Success.
+ *   - <0: Error code on failure.
+ */
+typedef int (*eventdev_queue_attr_get_t)(struct rte_eventdev *dev,
+					 uint8_t queue_id, uint32_t attr_id,
+					 uint32_t *attr_value);
+
 /**
  * Set an event queue attribute at runtime.
  *
@@ -1231,6 +1251,8 @@ struct eventdev_ops {
 	/**< Set up an event queue. */
 	eventdev_queue_release_t queue_release;
 	/**< Release an event queue. */
+	eventdev_queue_attr_get_t queue_attr_get;
+	/**< Get an event queue attribute. */
 	eventdev_queue_attr_set_t queue_attr_set;
 	/**< Set an event queue attribute.
 */
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index a31e99be02..12b261f923 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -838,6 +838,18 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
 		*attr_value = conf->schedule_type;
 		break;
+	case RTE_EVENT_QUEUE_ATTR_WEIGHT:
+		*attr_value = RTE_EVENT_QUEUE_WEIGHT_LOWEST;
+		if (dev->dev_ops->queue_attr_get)
+			return (*dev->dev_ops->queue_attr_get)(
+				dev, queue_id, attr_id, attr_value);
+		break;
+	case RTE_EVENT_QUEUE_ATTR_AFFINITY:
+		*attr_value = RTE_EVENT_QUEUE_AFFINITY_LOWEST;
+		if (dev->dev_ops->queue_attr_get)
+			return (*dev->dev_ops->queue_attr_get)(
+				dev, queue_id, attr_id, attr_value);
+		break;
 	default:
 		return -EINVAL;
 	};
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index c1163ee8ec..5d38996f6b 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -222,8 +222,14 @@ struct rte_event;
 
 /* Event device capability bitmap flags */
 #define RTE_EVENT_DEV_CAP_QUEUE_QOS (1ULL << 0)
-/**< Event scheduling prioritization is based on the priority associated with
- * each event queue.
+/**< Event scheduling prioritization is based on the priority and weight
+ * associated with each event queue. Events from the queue with the highest
+ * priority are scheduled first. If the queues are of the same priority, their
+ * weights are considered to select a queue in a weighted round-robin fashion.
+ * Subsequent dequeue calls from an event port may see events from the same
+ * event queue if the queue is configured with an affinity count. The affinity
+ * count is the number of subsequent dequeue calls in which an event port
+ * should use the same event queue if the queue is non-empty.
  *
  * @see rte_event_queue_setup(), rte_event_queue_attr_set()
  */
 #define RTE_EVENT_DEV_CAP_EVENT_QOS (1ULL << 1)
 /**< Event scheduling prioritization is based on the priority associated with
@@ -331,6 +337,26 @@ struct rte_event;
  * @see rte_event_port_link()
  */
 
+/* Event queue scheduling weights */
+#define RTE_EVENT_QUEUE_WEIGHT_HIGHEST 255
+/**< Highest weight of an event queue
+ * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
+ */
+#define RTE_EVENT_QUEUE_WEIGHT_LOWEST 0
+/**< Lowest weight of an event queue
+ * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
+ */
+
+/* Event queue scheduling affinity */
+#define RTE_EVENT_QUEUE_AFFINITY_HIGHEST 255
+/**< Highest scheduling affinity of an event queue
+ * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
+ */
+#define RTE_EVENT_QUEUE_AFFINITY_LOWEST 0
+/**< Lowest scheduling affinity of an event queue
+ * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
+ */
+
 /**
  * Get the total number of event devices that have been successfully
  * initialised.
@@ -684,6 +710,14 @@ rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
  * The schedule type of the queue.
  */
 #define RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE 4
+/**
+ * The weight of the queue.
+ */
+#define RTE_EVENT_QUEUE_ATTR_WEIGHT 5
+/**
+ * Affinity of the queue.
+ */
+#define RTE_EVENT_QUEUE_ATTR_AFFINITY 6
 
 /**
  * Get an attribute from a queue.
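
To make the weight and affinity semantics concrete, a small sketch (not part
of the patch; ids are placeholders, and each call can legitimately fail with
-ENOTSUP on PMDs without RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR):

#include <rte_eventdev.h>

/* Hypothetical tuning helper: give one queue the maximum weight so it wins
 * the weighted round-robin among equal-priority queues, and let a port stay
 * on it for up to 4 consecutive schedule calls.
 */
static int
tune_queue_qos(uint8_t dev_id, uint8_t queue_id)
{
	uint32_t val;
	int ret;

	ret = rte_event_queue_attr_set(dev_id, queue_id,
				       RTE_EVENT_QUEUE_ATTR_WEIGHT,
				       RTE_EVENT_QUEUE_WEIGHT_HIGHEST);
	if (ret < 0)
		return ret;

	ret = rte_event_queue_attr_set(dev_id, queue_id,
				       RTE_EVENT_QUEUE_ATTR_AFFINITY, 4);
	if (ret < 0)
		return ret;

	/* Read back: PMDs implementing the queue_attr_get op return the
	 * real value; otherwise the library reports the default (LOWEST).
	 */
	return rte_event_queue_attr_get(dev_id, queue_id,
					RTE_EVENT_QUEUE_ATTR_WEIGHT, &val);
}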
From patchwork Sun May 15 09:53:11 2022
X-Patchwork-Submitter: Shijith Thotton
X-Patchwork-Id: 111148
Subject: [PATCH v3 3/5] test/event: test cases to test runtime queue attribute
Date: Sun, 15 May 2022 15:23:11 +0530
Message-ID: <48346ab9039e257b8dc35fc3d59c3eee2be885ef.1652607951.git.sthotton@marvell.com>

Added test cases to verify changing the queue QoS attributes priority,
weight and affinity at runtime.
Signed-off-by: Shijith Thotton --- app/test/test_eventdev.c | 201 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 201 insertions(+) diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c index 4f51042bda..336529038e 100644 --- a/app/test/test_eventdev.c +++ b/app/test/test_eventdev.c @@ -385,6 +385,201 @@ test_eventdev_queue_attr_priority(void) return TEST_SUCCESS; } +static int +test_eventdev_queue_attr_priority_runtime(void) +{ + uint32_t queue_count, queue_req, prio, deq_cnt; + struct rte_event_queue_conf qconf; + struct rte_event_port_conf pconf; + struct rte_event_dev_info info; + struct rte_event event = { + .op = RTE_EVENT_OP_NEW, + .event_type = RTE_EVENT_TYPE_CPU, + .sched_type = RTE_SCHED_TYPE_ATOMIC, + .u64 = 0xbadbadba, + }; + int i, ret; + + ret = rte_event_dev_info_get(TEST_DEV_ID, &info); + TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info"); + + if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR)) + return TEST_SKIPPED; + + TEST_ASSERT_SUCCESS(rte_event_dev_attr_get( + TEST_DEV_ID, RTE_EVENT_DEV_ATTR_QUEUE_COUNT, + &queue_count), + "Queue count get failed"); + + /* Need at least 2 queues to test LOW and HIGH priority. */ + TEST_ASSERT(queue_count > 1, "Not enough event queues, needed 2"); + queue_req = 2; + + for (i = 0; i < (int)queue_count; i++) { + ret = rte_event_queue_default_conf_get(TEST_DEV_ID, i, &qconf); + TEST_ASSERT_SUCCESS(ret, "Failed to get queue%d def conf", i); + ret = rte_event_queue_setup(TEST_DEV_ID, i, &qconf); + TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i); + } + + ret = rte_event_queue_attr_set(TEST_DEV_ID, 0, + RTE_EVENT_QUEUE_ATTR_PRIORITY, + RTE_EVENT_DEV_PRIORITY_LOWEST); + if (ret == -ENOTSUP) + return TEST_SKIPPED; + TEST_ASSERT_SUCCESS(ret, "Queue0 priority set failed"); + + ret = rte_event_queue_attr_set(TEST_DEV_ID, 1, + RTE_EVENT_QUEUE_ATTR_PRIORITY, + RTE_EVENT_DEV_PRIORITY_HIGHEST); + if (ret == -ENOTSUP) + return TEST_SKIPPED; + TEST_ASSERT_SUCCESS(ret, "Queue1 priority set failed"); + + /* Setup event port 0 */ + ret = rte_event_port_default_conf_get(TEST_DEV_ID, 0, &pconf); + TEST_ASSERT_SUCCESS(ret, "Failed to get port0 info"); + ret = rte_event_port_setup(TEST_DEV_ID, 0, &pconf); + TEST_ASSERT_SUCCESS(ret, "Failed to setup port0"); + ret = rte_event_port_link(TEST_DEV_ID, 0, NULL, NULL, 0); + TEST_ASSERT(ret == (int)queue_count, "Failed to link port, device %d", + TEST_DEV_ID); + + ret = rte_event_dev_start(TEST_DEV_ID); + TEST_ASSERT_SUCCESS(ret, "Failed to start device%d", TEST_DEV_ID); + + for (i = 0; i < (int)queue_req; i++) { + event.queue_id = i; + while (rte_event_enqueue_burst(TEST_DEV_ID, 0, &event, 1) != 1) + rte_pause(); + } + + prio = RTE_EVENT_DEV_PRIORITY_HIGHEST; + deq_cnt = 0; + while (deq_cnt < queue_req) { + uint32_t queue_prio; + + if (rte_event_dequeue_burst(TEST_DEV_ID, 0, &event, 1, 0) == 0) + continue; + + ret = rte_event_queue_attr_get(TEST_DEV_ID, event.queue_id, + RTE_EVENT_QUEUE_ATTR_PRIORITY, + &queue_prio); + if (ret == -ENOTSUP) + return TEST_SKIPPED; + + TEST_ASSERT_SUCCESS(ret, "Queue priority get failed"); + TEST_ASSERT(queue_prio >= prio, + "Received event from a lower priority queue first"); + prio = queue_prio; + deq_cnt++; + } + + return TEST_SUCCESS; +} + +static int +test_eventdev_queue_attr_weight_runtime(void) +{ + struct rte_event_queue_conf qconf; + struct rte_event_dev_info info; + uint32_t queue_count; + int i, ret; + + ret = rte_event_dev_info_get(TEST_DEV_ID, &info); + TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info"); + + if 
(!(info.event_dev_cap & RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR))
+		return TEST_SKIPPED;
+
+	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(
+				    TEST_DEV_ID, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+				    &queue_count),
+			    "Queue count get failed");
+
+	for (i = 0; i < (int)queue_count; i++) {
+		ret = rte_event_queue_default_conf_get(TEST_DEV_ID, i, &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to get queue%d def conf", i);
+		ret = rte_event_queue_setup(TEST_DEV_ID, i, &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i);
+	}
+
+	for (i = 0; i < (int)queue_count; i++) {
+		uint32_t get_val;
+		uint64_t set_val;
+
+		set_val = i % RTE_EVENT_QUEUE_WEIGHT_HIGHEST;
+		ret = rte_event_queue_attr_set(
+			TEST_DEV_ID, i, RTE_EVENT_QUEUE_ATTR_WEIGHT, set_val);
+		if (ret == -ENOTSUP)
+			return TEST_SKIPPED;
+
+		TEST_ASSERT_SUCCESS(ret, "Queue weight set failed");
+
+		ret = rte_event_queue_attr_get(
+			TEST_DEV_ID, i, RTE_EVENT_QUEUE_ATTR_WEIGHT, &get_val);
+		if (ret == -ENOTSUP)
+			return TEST_SKIPPED;
+
+		TEST_ASSERT_SUCCESS(ret, "Queue weight get failed");
+		TEST_ASSERT_EQUAL(get_val, set_val,
+				  "Wrong weight value for queue%d", i);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_queue_attr_affinity_runtime(void)
+{
+	struct rte_event_queue_conf qconf;
+	struct rte_event_dev_info info;
+	uint32_t queue_count;
+	int i, ret;
+
+	ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR))
+		return TEST_SKIPPED;
+
+	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(
+				    TEST_DEV_ID, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+				    &queue_count),
+			    "Queue count get failed");
+
+	for (i = 0; i < (int)queue_count; i++) {
+		ret = rte_event_queue_default_conf_get(TEST_DEV_ID, i, &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to get queue%d def conf", i);
+		ret = rte_event_queue_setup(TEST_DEV_ID, i, &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i);
+	}
+
+	for (i = 0; i < (int)queue_count; i++) {
+		uint32_t get_val;
+		uint64_t set_val;
+
+		set_val = i % RTE_EVENT_QUEUE_AFFINITY_HIGHEST;
+		ret = rte_event_queue_attr_set(
+			TEST_DEV_ID, i, RTE_EVENT_QUEUE_ATTR_AFFINITY, set_val);
+		if (ret == -ENOTSUP)
+			return TEST_SKIPPED;
+
+		TEST_ASSERT_SUCCESS(ret, "Queue affinity set failed");
+
+		ret = rte_event_queue_attr_get(
+			TEST_DEV_ID, i, RTE_EVENT_QUEUE_ATTR_AFFINITY, &get_val);
+		if (ret == -ENOTSUP)
+			return TEST_SKIPPED;
+
+		TEST_ASSERT_SUCCESS(ret, "Queue affinity get failed");
+		TEST_ASSERT_EQUAL(get_val, set_val,
+				  "Wrong affinity value for queue%d", i);
+	}
+
+	return TEST_SUCCESS;
+}
+
 static int
 test_eventdev_queue_attr_nb_atomic_flows(void)
 {
@@ -964,6 +1159,12 @@ static struct unit_test_suite eventdev_common_testsuite = {
 				test_eventdev_queue_count),
 		TEST_CASE_ST(eventdev_configure_setup, NULL,
 				test_eventdev_queue_attr_priority),
+		TEST_CASE_ST(eventdev_configure_setup, eventdev_stop_device,
+				test_eventdev_queue_attr_priority_runtime),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+				test_eventdev_queue_attr_weight_runtime),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+				test_eventdev_queue_attr_affinity_runtime),
 		TEST_CASE_ST(eventdev_configure_setup, NULL,
 				test_eventdev_queue_attr_nb_atomic_flows),
 		TEST_CASE_ST(eventdev_configure_setup, NULL,

From patchwork Sun May 15 09:53:12 2022
X-Patchwork-Submitter: Shijith Thotton
X-Patchwork-Id: 111149
Subject: [PATCH v3 4/5] common/cnxk: use lock when accessing mbox of SSO
Date: Sun, 15 May 2022 15:23:12 +0530
Message-ID: <126044ddcd758575c0a07f496ed19f45a677930f.1652607951.git.sthotton@marvell.com>

From: Pavan Nikhilesh

Since the mbox is now accessed from multiple threads, use a lock to
synchronize access.
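
Distilled, the locking pattern this patch applies to every mailbox user
looks as follows ("xyz" is a stand-in for any mailbox request type; the
real functions in roc_sso.c and roc_tim.c follow this shape):

/* Sketch only: serialize one mailbox transaction with the new lock. */
static int
sso_mbox_op_example(struct sso *sso)
{
	struct dev *dev = &sso->dev;
	struct xyz_req *req;
	int rc = -ENOSPC;

	plt_spinlock_lock(&sso->mbox_lock);	/* serialize mbox users */
	req = mbox_alloc_msg_xyz(dev->mbox);	/* hypothetical request */
	if (req == NULL)
		goto fail;			/* rc stays -ENOSPC */

	rc = mbox_process(dev->mbox);
	if (rc)
		rc = -EIO;			/* normalize mbox errors */
fail:
	plt_spinlock_unlock(&sso->mbox_lock);	/* unlock on every path */
	return rc;
}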
Signed-off-by: Pavan Nikhilesh Signed-off-by: Shijith Thotton --- drivers/common/cnxk/roc_sso.c | 174 +++++++++++++++++++++-------- drivers/common/cnxk/roc_sso_priv.h | 1 + drivers/common/cnxk/roc_tim.c | 134 ++++++++++++++-------- 3 files changed, 215 insertions(+), 94 deletions(-) diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c index f8a0a96533..358d37a9f2 100644 --- a/drivers/common/cnxk/roc_sso.c +++ b/drivers/common/cnxk/roc_sso.c @@ -36,8 +36,8 @@ sso_lf_alloc(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf, } rc = mbox_process_msg(dev->mbox, rsp); - if (rc < 0) - return rc; + if (rc) + return -EIO; return 0; } @@ -69,8 +69,8 @@ sso_lf_free(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf) } rc = mbox_process(dev->mbox); - if (rc < 0) - return rc; + if (rc) + return -EIO; return 0; } @@ -98,7 +98,7 @@ sso_rsrc_attach(struct roc_sso *roc_sso, enum sso_lf_type lf_type, } req->modify = true; - if (mbox_process(dev->mbox) < 0) + if (mbox_process(dev->mbox)) return -EIO; return 0; @@ -126,7 +126,7 @@ sso_rsrc_detach(struct roc_sso *roc_sso, enum sso_lf_type lf_type) } req->partial = true; - if (mbox_process(dev->mbox) < 0) + if (mbox_process(dev->mbox)) return -EIO; return 0; @@ -141,9 +141,9 @@ sso_rsrc_get(struct roc_sso *roc_sso) mbox_alloc_msg_free_rsrc_cnt(dev->mbox); rc = mbox_process_msg(dev->mbox, (void **)&rsrc_cnt); - if (rc < 0) { + if (rc) { plt_err("Failed to get free resource count\n"); - return rc; + return -EIO; } roc_sso->max_hwgrp = rsrc_cnt->sso; @@ -197,8 +197,8 @@ sso_msix_fill(struct roc_sso *roc_sso, uint16_t nb_hws, uint16_t nb_hwgrp) mbox_alloc_msg_msix_offset(dev->mbox); rc = mbox_process_msg(dev->mbox, (void **)&rsp); - if (rc < 0) - return rc; + if (rc) + return -EIO; for (i = 0; i < nb_hws; i++) sso->hws_msix_offset[i] = rsp->ssow_msixoff[i]; @@ -285,53 +285,71 @@ int roc_sso_hws_stats_get(struct roc_sso *roc_sso, uint8_t hws, struct roc_sso_hws_stats *stats) { - struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_sso); struct sso_hws_stats *req_rsp; + struct dev *dev = &sso->dev; int rc; + plt_spinlock_lock(&sso->mbox_lock); req_rsp = (struct sso_hws_stats *)mbox_alloc_msg_sso_hws_get_stats( dev->mbox); if (req_rsp == NULL) { rc = mbox_process(dev->mbox); - if (rc < 0) - return rc; + if (rc) { + rc = -EIO; + goto fail; + } req_rsp = (struct sso_hws_stats *) mbox_alloc_msg_sso_hws_get_stats(dev->mbox); - if (req_rsp == NULL) - return -ENOSPC; + if (req_rsp == NULL) { + rc = -ENOSPC; + goto fail; + } } req_rsp->hws = hws; rc = mbox_process_msg(dev->mbox, (void **)&req_rsp); - if (rc) - return rc; + if (rc) { + rc = -EIO; + goto fail; + } stats->arbitration = req_rsp->arbitration; - return 0; +fail: + plt_spinlock_unlock(&sso->mbox_lock); + return rc; } int roc_sso_hwgrp_stats_get(struct roc_sso *roc_sso, uint8_t hwgrp, struct roc_sso_hwgrp_stats *stats) { - struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_sso); struct sso_grp_stats *req_rsp; + struct dev *dev = &sso->dev; int rc; + plt_spinlock_lock(&sso->mbox_lock); req_rsp = (struct sso_grp_stats *)mbox_alloc_msg_sso_grp_get_stats( dev->mbox); if (req_rsp == NULL) { rc = mbox_process(dev->mbox); - if (rc < 0) - return rc; + if (rc) { + rc = -EIO; + goto fail; + } req_rsp = (struct sso_grp_stats *) mbox_alloc_msg_sso_grp_get_stats(dev->mbox); - if (req_rsp == NULL) - return -ENOSPC; + if (req_rsp == NULL) { + rc = -ENOSPC; + goto fail; + } } req_rsp->grp = hwgrp; rc = 
mbox_process_msg(dev->mbox, (void **)&req_rsp); - if (rc) - return rc; + if (rc) { + rc = -EIO; + goto fail; + } stats->aw_status = req_rsp->aw_status; stats->dq_pc = req_rsp->dq_pc; @@ -341,7 +359,10 @@ roc_sso_hwgrp_stats_get(struct roc_sso *roc_sso, uint8_t hwgrp, stats->ts_pc = req_rsp->ts_pc; stats->wa_pc = req_rsp->wa_pc; stats->ws_pc = req_rsp->ws_pc; - return 0; + +fail: + plt_spinlock_unlock(&sso->mbox_lock); + return rc; } int @@ -358,10 +379,12 @@ int roc_sso_hwgrp_qos_config(struct roc_sso *roc_sso, struct roc_sso_hwgrp_qos *qos, uint8_t nb_qos, uint32_t nb_xaq) { - struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_sso); + struct dev *dev = &sso->dev; struct sso_grp_qos_cfg *req; int i, rc; + plt_spinlock_lock(&sso->mbox_lock); for (i = 0; i < nb_qos; i++) { uint8_t xaq_prcnt = qos[i].xaq_prcnt; uint8_t iaq_prcnt = qos[i].iaq_prcnt; @@ -370,11 +393,16 @@ roc_sso_hwgrp_qos_config(struct roc_sso *roc_sso, struct roc_sso_hwgrp_qos *qos, req = mbox_alloc_msg_sso_grp_qos_config(dev->mbox); if (req == NULL) { rc = mbox_process(dev->mbox); - if (rc < 0) - return rc; + if (rc) { + rc = -EIO; + goto fail; + } + req = mbox_alloc_msg_sso_grp_qos_config(dev->mbox); - if (req == NULL) - return -ENOSPC; + if (req == NULL) { + rc = -ENOSPC; + goto fail; + } } req->grp = qos[i].hwgrp; req->xaq_limit = (nb_xaq * (xaq_prcnt ? xaq_prcnt : 100)) / 100; @@ -386,7 +414,12 @@ roc_sso_hwgrp_qos_config(struct roc_sso *roc_sso, struct roc_sso_hwgrp_qos *qos, 100; } - return mbox_process(dev->mbox); + rc = mbox_process(dev->mbox); + if (rc) + rc = -EIO; +fail: + plt_spinlock_unlock(&sso->mbox_lock); + return rc; } int @@ -482,11 +515,16 @@ sso_hwgrp_init_xaq_aura(struct dev *dev, struct roc_sso_xaq_data *xaq, int roc_sso_hwgrp_init_xaq_aura(struct roc_sso *roc_sso, uint32_t nb_xae) { - struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_sso); + struct dev *dev = &sso->dev; + int rc; - return sso_hwgrp_init_xaq_aura(dev, &roc_sso->xaq, nb_xae, - roc_sso->xae_waes, roc_sso->xaq_buf_size, - roc_sso->nb_hwgrp); + plt_spinlock_lock(&sso->mbox_lock); + rc = sso_hwgrp_init_xaq_aura(dev, &roc_sso->xaq, nb_xae, + roc_sso->xae_waes, roc_sso->xaq_buf_size, + roc_sso->nb_hwgrp); + plt_spinlock_unlock(&sso->mbox_lock); + return rc; } int @@ -515,9 +553,14 @@ sso_hwgrp_free_xaq_aura(struct dev *dev, struct roc_sso_xaq_data *xaq, int roc_sso_hwgrp_free_xaq_aura(struct roc_sso *roc_sso, uint16_t nb_hwgrp) { - struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_sso); + struct dev *dev = &sso->dev; + int rc; - return sso_hwgrp_free_xaq_aura(dev, &roc_sso->xaq, nb_hwgrp); + plt_spinlock_lock(&sso->mbox_lock); + rc = sso_hwgrp_free_xaq_aura(dev, &roc_sso->xaq, nb_hwgrp); + plt_spinlock_unlock(&sso->mbox_lock); + return rc; } int @@ -533,16 +576,24 @@ sso_hwgrp_alloc_xaq(struct dev *dev, uint32_t npa_aura_id, uint16_t hwgrps) req->npa_aura_id = npa_aura_id; req->hwgrps = hwgrps; - return mbox_process(dev->mbox); + if (mbox_process(dev->mbox)) + return -EIO; + + return 0; } int roc_sso_hwgrp_alloc_xaq(struct roc_sso *roc_sso, uint32_t npa_aura_id, uint16_t hwgrps) { - struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_sso); + struct dev *dev = &sso->dev; + int rc; - return sso_hwgrp_alloc_xaq(dev, npa_aura_id, hwgrps); + plt_spinlock_lock(&sso->mbox_lock); + rc = sso_hwgrp_alloc_xaq(dev, npa_aura_id, hwgrps); + 
plt_spinlock_unlock(&sso->mbox_lock); + return rc; } int @@ -555,40 +606,56 @@ sso_hwgrp_release_xaq(struct dev *dev, uint16_t hwgrps) return -EINVAL; req->hwgrps = hwgrps; - return mbox_process(dev->mbox); + if (mbox_process(dev->mbox)) + return -EIO; + + return 0; } int roc_sso_hwgrp_release_xaq(struct roc_sso *roc_sso, uint16_t hwgrps) { - struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_sso); + struct dev *dev = &sso->dev; + int rc; - return sso_hwgrp_release_xaq(dev, hwgrps); + plt_spinlock_lock(&sso->mbox_lock); + rc = sso_hwgrp_release_xaq(dev, hwgrps); + plt_spinlock_unlock(&sso->mbox_lock); + return rc; } int roc_sso_hwgrp_set_priority(struct roc_sso *roc_sso, uint16_t hwgrp, uint8_t weight, uint8_t affinity, uint8_t priority) { - struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_sso); + struct dev *dev = &sso->dev; struct sso_grp_priority *req; int rc = -ENOSPC; + plt_spinlock_lock(&sso->mbox_lock); req = mbox_alloc_msg_sso_grp_set_priority(dev->mbox); if (req == NULL) - return rc; + goto fail; req->grp = hwgrp; req->weight = weight; req->affinity = affinity; req->priority = priority; rc = mbox_process(dev->mbox); - if (rc < 0) - return rc; + if (rc) { + rc = -EIO; + goto fail; + } + plt_spinlock_unlock(&sso->mbox_lock); plt_sso_dbg("HWGRP %d weight %d affinity %d priority %d", hwgrp, weight, affinity, priority); return 0; +fail: + plt_spinlock_unlock(&sso->mbox_lock); + return rc; } int @@ -603,10 +670,11 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp) if (roc_sso->max_hws < nb_hws) return -ENOENT; + plt_spinlock_lock(&sso->mbox_lock); rc = sso_rsrc_attach(roc_sso, SSO_LF_TYPE_HWS, nb_hws); if (rc < 0) { plt_err("Unable to attach SSO HWS LFs"); - return rc; + goto fail; } rc = sso_rsrc_attach(roc_sso, SSO_LF_TYPE_HWGRP, nb_hwgrp); @@ -645,6 +713,7 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp) goto sso_msix_fail; } + plt_spinlock_unlock(&sso->mbox_lock); roc_sso->nb_hwgrp = nb_hwgrp; roc_sso->nb_hws = nb_hws; @@ -657,6 +726,8 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp) sso_rsrc_detach(roc_sso, SSO_LF_TYPE_HWGRP); hwgrp_atch_fail: sso_rsrc_detach(roc_sso, SSO_LF_TYPE_HWS); +fail: + plt_spinlock_unlock(&sso->mbox_lock); return rc; } @@ -678,6 +749,7 @@ roc_sso_rsrc_fini(struct roc_sso *roc_sso) roc_sso->nb_hwgrp = 0; roc_sso->nb_hws = 0; + plt_spinlock_unlock(&sso->mbox_lock); } int @@ -696,6 +768,7 @@ roc_sso_dev_init(struct roc_sso *roc_sso) sso = roc_sso_to_sso_priv(roc_sso); memset(sso, 0, sizeof(*sso)); pci_dev = roc_sso->pci_dev; + plt_spinlock_init(&sso->mbox_lock); rc = dev_init(&sso->dev, pci_dev); if (rc < 0) { @@ -703,6 +776,7 @@ roc_sso_dev_init(struct roc_sso *roc_sso) goto fail; } + plt_spinlock_lock(&sso->mbox_lock); rc = sso_rsrc_get(roc_sso); if (rc < 0) { plt_err("Failed to get SSO resources"); @@ -739,6 +813,7 @@ roc_sso_dev_init(struct roc_sso *roc_sso) sso->pci_dev = pci_dev; sso->dev.drv_inited = true; roc_sso->lmt_base = sso->dev.lmt_base; + plt_spinlock_unlock(&sso->mbox_lock); return 0; link_mem_free: @@ -746,6 +821,7 @@ roc_sso_dev_init(struct roc_sso *roc_sso) rsrc_fail: rc |= dev_fini(&sso->dev, pci_dev); fail: + plt_spinlock_unlock(&sso->mbox_lock); return rc; } diff --git a/drivers/common/cnxk/roc_sso_priv.h b/drivers/common/cnxk/roc_sso_priv.h index 09729d4f62..674e4e0a39 100644 --- a/drivers/common/cnxk/roc_sso_priv.h +++ 
b/drivers/common/cnxk/roc_sso_priv.h @@ -22,6 +22,7 @@ struct sso { /* SSO link mapping. */ struct plt_bitmap **link_map; void *link_map_mem; + plt_spinlock_t mbox_lock; } __plt_cache_aligned; enum sso_err_status { diff --git a/drivers/common/cnxk/roc_tim.c b/drivers/common/cnxk/roc_tim.c index cefd9bc89d..0f9209937b 100644 --- a/drivers/common/cnxk/roc_tim.c +++ b/drivers/common/cnxk/roc_tim.c @@ -8,15 +8,16 @@ static int tim_fill_msix(struct roc_tim *roc_tim, uint16_t nb_ring) { - struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso); struct tim *tim = roc_tim_to_tim_priv(roc_tim); + struct dev *dev = &sso->dev; struct msix_offset_rsp *rsp; int i, rc; mbox_alloc_msg_msix_offset(dev->mbox); rc = mbox_process_msg(dev->mbox, (void **)&rsp); - if (rc < 0) - return rc; + if (rc) + return -EIO; for (i = 0; i < nb_ring; i++) tim->tim_msix_offsets[i] = rsp->timlf_msixoff[i]; @@ -88,20 +89,23 @@ int roc_tim_lf_enable(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *start_tsc, uint32_t *cur_bkt) { - struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso); + struct dev *dev = &sso->dev; struct tim_enable_rsp *rsp; struct tim_ring_req *req; int rc = -ENOSPC; + plt_spinlock_lock(&sso->mbox_lock); req = mbox_alloc_msg_tim_enable_ring(dev->mbox); if (req == NULL) - return rc; + goto fail; req->ring = ring_id; rc = mbox_process_msg(dev->mbox, (void **)&rsp); - if (rc < 0) { + if (rc) { tim_err_desc(rc); - return rc; + rc = -EIO; + goto fail; } if (cur_bkt) @@ -109,28 +113,34 @@ roc_tim_lf_enable(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *start_tsc, if (start_tsc) *start_tsc = rsp->timestarted; - return 0; +fail: + plt_spinlock_unlock(&sso->mbox_lock); + return rc; } int roc_tim_lf_disable(struct roc_tim *roc_tim, uint8_t ring_id) { - struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso); + struct dev *dev = &sso->dev; struct tim_ring_req *req; int rc = -ENOSPC; + plt_spinlock_lock(&sso->mbox_lock); req = mbox_alloc_msg_tim_disable_ring(dev->mbox); if (req == NULL) - return rc; + goto fail; req->ring = ring_id; rc = mbox_process(dev->mbox); - if (rc < 0) { + if (rc) { tim_err_desc(rc); - return rc; + rc = -EIO; } - return 0; +fail: + plt_spinlock_unlock(&sso->mbox_lock); + return rc; } uintptr_t @@ -147,13 +157,15 @@ roc_tim_lf_config(struct roc_tim *roc_tim, uint8_t ring_id, uint8_t ena_dfb, uint32_t bucket_sz, uint32_t chunk_sz, uint32_t interval, uint64_t intervalns, uint64_t clockfreq) { - struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso); + struct dev *dev = &sso->dev; struct tim_config_req *req; int rc = -ENOSPC; + plt_spinlock_lock(&sso->mbox_lock); req = mbox_alloc_msg_tim_config_ring(dev->mbox); if (req == NULL) - return rc; + goto fail; req->ring = ring_id; req->bigendian = false; req->bucketsize = bucket_sz; @@ -167,12 +179,14 @@ roc_tim_lf_config(struct roc_tim *roc_tim, uint8_t ring_id, req->gpioedge = TIM_GPIO_LTOH_TRANS; rc = mbox_process(dev->mbox); - if (rc < 0) { + if (rc) { tim_err_desc(rc); - return rc; + rc = -EIO; } - return 0; +fail: + plt_spinlock_unlock(&sso->mbox_lock); + return rc; } int @@ -180,27 +194,32 @@ roc_tim_lf_interval(struct roc_tim *roc_tim, enum roc_tim_clk_src clk_src, uint64_t clockfreq, uint64_t *intervalns, uint64_t *interval) { - struct dev *dev = 
&roc_sso_to_sso_priv(roc_tim->roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso); + struct dev *dev = &sso->dev; struct tim_intvl_req *req; struct tim_intvl_rsp *rsp; int rc = -ENOSPC; + plt_spinlock_lock(&sso->mbox_lock); req = mbox_alloc_msg_tim_get_min_intvl(dev->mbox); if (req == NULL) - return rc; + goto fail; req->clockfreq = clockfreq; req->clocksource = clk_src; rc = mbox_process_msg(dev->mbox, (void **)&rsp); - if (rc < 0) { + if (rc) { tim_err_desc(rc); - return rc; + rc = -EIO; + goto fail; } *intervalns = rsp->intvl_ns; *interval = rsp->intvl_cyc; - return 0; +fail: + plt_spinlock_unlock(&sso->mbox_lock); + return rc; } int @@ -214,17 +233,19 @@ roc_tim_lf_alloc(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *clk) struct dev *dev = &sso->dev; int rc = -ENOSPC; + plt_spinlock_lock(&sso->mbox_lock); req = mbox_alloc_msg_tim_lf_alloc(dev->mbox); if (req == NULL) - return rc; + goto fail; req->npa_pf_func = idev_npa_pffunc_get(); req->sso_pf_func = idev_sso_pffunc_get(); req->ring = ring_id; rc = mbox_process_msg(dev->mbox, (void **)&rsp); - if (rc < 0) { + if (rc) { tim_err_desc(rc); - return rc; + rc = -EIO; + goto fail; } if (clk) @@ -235,12 +256,18 @@ roc_tim_lf_alloc(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *clk) if (rc < 0) { plt_tim_dbg("Failed to register Ring[%d] IRQ", ring_id); free_req = mbox_alloc_msg_tim_lf_free(dev->mbox); - if (free_req == NULL) - return -ENOSPC; + if (free_req == NULL) { + rc = -ENOSPC; + goto fail; + } free_req->ring = ring_id; - mbox_process(dev->mbox); + rc = mbox_process(dev->mbox); + if (rc) + rc = -EIO; } +fail: + plt_spinlock_unlock(&sso->mbox_lock); return rc; } @@ -256,17 +283,20 @@ roc_tim_lf_free(struct roc_tim *roc_tim, uint8_t ring_id) tim_unregister_irq_priv(roc_tim, sso->pci_dev->intr_handle, ring_id, tim->tim_msix_offsets[ring_id]); + plt_spinlock_lock(&sso->mbox_lock); req = mbox_alloc_msg_tim_lf_free(dev->mbox); if (req == NULL) - return rc; + goto fail; req->ring = ring_id; rc = mbox_process(dev->mbox); if (rc < 0) { tim_err_desc(rc); - return rc; + rc = -EIO; } +fail: + plt_spinlock_unlock(&sso->mbox_lock); return 0; } @@ -276,40 +306,48 @@ roc_tim_init(struct roc_tim *roc_tim) struct rsrc_attach_req *attach_req; struct rsrc_detach_req *detach_req; struct free_rsrcs_rsp *free_rsrc; - struct dev *dev; + struct sso *sso; uint16_t nb_lfs; + struct dev *dev; int rc; if (roc_tim == NULL || roc_tim->roc_sso == NULL) return TIM_ERR_PARAM; + sso = roc_sso_to_sso_priv(roc_tim->roc_sso); + dev = &sso->dev; PLT_STATIC_ASSERT(sizeof(struct tim) <= TIM_MEM_SZ); - dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev; nb_lfs = roc_tim->nb_lfs; + plt_spinlock_lock(&sso->mbox_lock); mbox_alloc_msg_free_rsrc_cnt(dev->mbox); rc = mbox_process_msg(dev->mbox, (void *)&free_rsrc); - if (rc < 0) { + if (rc) { plt_err("Unable to get free rsrc count."); - return 0; + nb_lfs = 0; + goto fail; } if (nb_lfs && (free_rsrc->tim < nb_lfs)) { plt_tim_dbg("Requested LFs : %d Available LFs : %d", nb_lfs, free_rsrc->tim); - return 0; + nb_lfs = 0; + goto fail; } attach_req = mbox_alloc_msg_attach_resources(dev->mbox); - if (attach_req == NULL) - return -ENOSPC; + if (attach_req == NULL) { + nb_lfs = 0; + goto fail; + } attach_req->modify = true; attach_req->timlfs = nb_lfs ? 
nb_lfs : free_rsrc->tim;
 	nb_lfs = attach_req->timlfs;
 
 	rc = mbox_process(dev->mbox);
-	if (rc < 0) {
+	if (rc) {
 		plt_err("Unable to attach TIM LFs.");
-		return 0;
+		nb_lfs = 0;
+		goto fail;
 	}
 
 	rc = tim_fill_msix(roc_tim, nb_lfs);
@@ -317,28 +355,34 @@
 		plt_err("Unable to get TIM MSIX vectors");
 
 		detach_req = mbox_alloc_msg_detach_resources(dev->mbox);
-		if (detach_req == NULL)
-			return -ENOSPC;
+		if (detach_req == NULL) {
+			nb_lfs = 0;
+			goto fail;
+		}
 		detach_req->partial = true;
 		detach_req->timlfs = true;
 		mbox_process(dev->mbox);
-
-		return 0;
+		nb_lfs = 0;
 	}
 
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
 	return nb_lfs;
 }
 
 void
 roc_tim_fini(struct roc_tim *roc_tim)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso);
 	struct rsrc_detach_req *detach_req;
+	struct dev *dev = &sso->dev;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	detach_req = mbox_alloc_msg_detach_resources(dev->mbox);
 	PLT_ASSERT(detach_req);
 	detach_req->partial = true;
 	detach_req->timlfs = true;
 	mbox_process(dev->mbox);
+	plt_spinlock_unlock(&sso->mbox_lock);
 }

From patchwork Sun May 15 09:53:13 2022
X-Patchwork-Submitter: Shijith Thotton
X-Patchwork-Id: 111150
Subject: [PATCH v3 5/5] event/cnxk: support to set runtime queue attributes
Date: Sun, 15 May 2022 15:23:13 +0530
Message-ID: <2818c8c220f5c8ae37f738bc864e60e2ee6a7d06.1652607951.git.sthotton@marvell.com>

Added an API to set queue attributes at runtime and an API to get the
weight and affinity attributes.

Signed-off-by: Shijith Thotton
---
 doc/guides/eventdevs/features/cnxk.ini |  1 +
 drivers/event/cnxk/cn10k_eventdev.c    |  4 ++
 drivers/event/cnxk/cn9k_eventdev.c     |  4 ++
 drivers/event/cnxk/cnxk_eventdev.c     | 91 ++++++++++++++++++++++++--
 drivers/event/cnxk/cnxk_eventdev.h     | 19 ++++++
 5 files changed, 113 insertions(+), 6 deletions(-)

diff --git a/doc/guides/eventdevs/features/cnxk.ini b/doc/guides/eventdevs/features/cnxk.ini
index 7633c6e3a2..bee69bf8f4 100644
--- a/doc/guides/eventdevs/features/cnxk.ini
+++ b/doc/guides/eventdevs/features/cnxk.ini
@@ -12,6 +12,7 @@ runtime_port_link = Y
 multiple_queue_port = Y
 carry_flow_id = Y
 maintenance_free = Y
+runtime_queue_attr = Y
 
 [Eth Rx adapter Features]
 internal_port = Y
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 9b4d2895ec..f6973bb691 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -845,9 +845,13 @@ cn10k_crypto_adapter_qp_del(const struct rte_eventdev *event_dev,
 static struct eventdev_ops cn10k_sso_dev_ops = {
 	.dev_infos_get = cn10k_sso_info_get,
 	.dev_configure = cn10k_sso_dev_configure,
+
 	.queue_def_conf = cnxk_sso_queue_def_conf,
 	.queue_setup = cnxk_sso_queue_setup,
 	.queue_release = cnxk_sso_queue_release,
+	.queue_attr_get = cnxk_sso_queue_attribute_get,
+	.queue_attr_set = cnxk_sso_queue_attribute_set,
+
 	.port_def_conf = cnxk_sso_port_def_conf,
 	.port_setup = cn10k_sso_port_setup,
 	.port_release = cn10k_sso_port_release,
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 4bba477dd1..7cb59bbbfa 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -1079,9 +1079,13 @@ cn9k_crypto_adapter_qp_del(const struct rte_eventdev *event_dev,
 static struct eventdev_ops cn9k_sso_dev_ops = {
 	.dev_infos_get = cn9k_sso_info_get,
 	.dev_configure = cn9k_sso_dev_configure,
+
 	.queue_def_conf = cnxk_sso_queue_def_conf,
 	.queue_setup = cnxk_sso_queue_setup,
 	.queue_release = cnxk_sso_queue_release,
+	.queue_attr_get = cnxk_sso_queue_attribute_get,
+	.queue_attr_set = cnxk_sso_queue_attribute_set,
+
 	.port_def_conf = cnxk_sso_port_def_conf,
 	.port_setup = cn9k_sso_port_setup,
 	.port_release = cn9k_sso_port_release,
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index be021d86c9..a2829b817e 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -120,7 +120,8 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
 			  RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
 			  RTE_EVENT_DEV_CAP_NONSEQ_MODE |
 			  RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
-			  RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
+			  RTE_EVENT_DEV_CAP_MAINTENANCE_FREE |
RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR; } int @@ -300,11 +301,27 @@ cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id, const struct rte_event_queue_conf *queue_conf) { struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev); - - plt_sso_dbg("Queue=%d prio=%d", queue_id, queue_conf->priority); - /* Normalize <0-255> to <0-7> */ - return roc_sso_hwgrp_set_priority(&dev->sso, queue_id, 0xFF, 0xFF, - queue_conf->priority / 32); + uint8_t priority, weight, affinity; + + /* Default weight and affinity */ + dev->mlt_prio[queue_id].weight = RTE_EVENT_QUEUE_WEIGHT_LOWEST; + dev->mlt_prio[queue_id].affinity = RTE_EVENT_QUEUE_AFFINITY_HIGHEST; + + priority = CNXK_QOS_NORMALIZE(queue_conf->priority, 0, + RTE_EVENT_DEV_PRIORITY_LOWEST, + CNXK_SSO_PRIORITY_CNT); + weight = CNXK_QOS_NORMALIZE( + dev->mlt_prio[queue_id].weight, CNXK_SSO_WEIGHT_MIN, + RTE_EVENT_QUEUE_WEIGHT_HIGHEST, CNXK_SSO_WEIGHT_CNT); + affinity = CNXK_QOS_NORMALIZE(dev->mlt_prio[queue_id].affinity, 0, + RTE_EVENT_QUEUE_AFFINITY_HIGHEST, + CNXK_SSO_AFFINITY_CNT); + + plt_sso_dbg("Queue=%u prio=%u weight=%u affinity=%u", queue_id, + priority, weight, affinity); + + return roc_sso_hwgrp_set_priority(&dev->sso, queue_id, weight, affinity, + priority); } void @@ -314,6 +331,68 @@ cnxk_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id) RTE_SET_USED(queue_id); } +int +cnxk_sso_queue_attribute_get(struct rte_eventdev *event_dev, uint8_t queue_id, + uint32_t attr_id, uint32_t *attr_value) +{ + struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev); + + if (attr_id == RTE_EVENT_QUEUE_ATTR_WEIGHT) + *attr_value = dev->mlt_prio[queue_id].weight; + else if (attr_id == RTE_EVENT_QUEUE_ATTR_AFFINITY) + *attr_value = dev->mlt_prio[queue_id].affinity; + else + return -EINVAL; + + return 0; +} + +int +cnxk_sso_queue_attribute_set(struct rte_eventdev *event_dev, uint8_t queue_id, + uint32_t attr_id, uint64_t attr_value) +{ + struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev); + uint8_t priority, weight, affinity; + struct rte_event_queue_conf *conf; + + conf = &event_dev->data->queues_cfg[queue_id]; + + switch (attr_id) { + case RTE_EVENT_QUEUE_ATTR_PRIORITY: + conf->priority = attr_value; + break; + case RTE_EVENT_QUEUE_ATTR_WEIGHT: + dev->mlt_prio[queue_id].weight = attr_value; + break; + case RTE_EVENT_QUEUE_ATTR_AFFINITY: + dev->mlt_prio[queue_id].affinity = attr_value; + break; + case RTE_EVENT_QUEUE_ATTR_NB_ATOMIC_FLOWS: + case RTE_EVENT_QUEUE_ATTR_NB_ATOMIC_ORDER_SEQUENCES: + case RTE_EVENT_QUEUE_ATTR_EVENT_QUEUE_CFG: + case RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE: + /* FALLTHROUGH */ + plt_sso_dbg("Unsupported attribute id %u", attr_id); + return -ENOTSUP; + default: + plt_err("Invalid attribute id %u", attr_id); + return -EINVAL; + } + + priority = CNXK_QOS_NORMALIZE(conf->priority, 0, + RTE_EVENT_DEV_PRIORITY_LOWEST, + CNXK_SSO_PRIORITY_CNT); + weight = CNXK_QOS_NORMALIZE( + dev->mlt_prio[queue_id].weight, CNXK_SSO_WEIGHT_MIN, + RTE_EVENT_QUEUE_WEIGHT_HIGHEST, CNXK_SSO_WEIGHT_CNT); + affinity = CNXK_QOS_NORMALIZE(dev->mlt_prio[queue_id].affinity, 0, + RTE_EVENT_QUEUE_AFFINITY_HIGHEST, + CNXK_SSO_AFFINITY_CNT); + + return roc_sso_hwgrp_set_priority(&dev->sso, queue_id, weight, affinity, + priority); +} + void cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id, struct rte_event_port_conf *port_conf) diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h index 5564746e6d..531f6d1a84 100644 --- a/drivers/event/cnxk/cnxk_eventdev.h +++ 
b/drivers/event/cnxk/cnxk_eventdev.h @@ -38,6 +38,11 @@ #define CNXK_SSO_XAQ_CACHE_CNT (0x7) #define CNXK_SSO_XAQ_SLACK (8) #define CNXK_SSO_WQE_SG_PTR (9) +#define CNXK_SSO_PRIORITY_CNT (0x8) +#define CNXK_SSO_WEIGHT_MAX (0x3f) +#define CNXK_SSO_WEIGHT_MIN (0x3) +#define CNXK_SSO_WEIGHT_CNT (CNXK_SSO_WEIGHT_MAX - CNXK_SSO_WEIGHT_MIN + 1) +#define CNXK_SSO_AFFINITY_CNT (0x10) #define CNXK_TT_FROM_TAG(x) (((x) >> 32) & SSO_TT_EMPTY) #define CNXK_TT_FROM_EVENT(x) (((x) >> 38) & SSO_TT_EMPTY) @@ -54,6 +59,8 @@ #define CN10K_GW_MODE_PREF 1 #define CN10K_GW_MODE_PREF_WFE 2 +#define CNXK_QOS_NORMALIZE(val, min, max, cnt) \ + (min + val / ((max + cnt - 1) / cnt)) #define CNXK_VALID_DEV_OR_ERR_RET(dev, drv_name) \ do { \ if (strncmp(dev->driver->name, drv_name, strlen(drv_name))) \ @@ -79,6 +86,11 @@ struct cnxk_sso_qos { uint16_t iaq_prcnt; }; +struct cnxk_sso_mlt_prio { + uint8_t weight; + uint8_t affinity; +}; + struct cnxk_sso_evdev { struct roc_sso sso; uint8_t max_event_queues; @@ -108,6 +120,7 @@ struct cnxk_sso_evdev { uint64_t *timer_adptr_sz; uint16_t vec_pool_cnt; uint64_t *vec_pools; + struct cnxk_sso_mlt_prio mlt_prio[RTE_EVENT_MAX_QUEUES_PER_DEV]; /* Dev args */ uint32_t xae_cnt; uint8_t qos_queue_cnt; @@ -234,6 +247,12 @@ void cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id, int cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id, const struct rte_event_queue_conf *queue_conf); void cnxk_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id); +int cnxk_sso_queue_attribute_get(struct rte_eventdev *event_dev, + uint8_t queue_id, uint32_t attr_id, + uint32_t *attr_value); +int cnxk_sso_queue_attribute_set(struct rte_eventdev *event_dev, + uint8_t queue_id, uint32_t attr_id, + uint64_t attr_value); void cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id, struct rte_event_port_conf *port_conf); int cnxk_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
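
For reference, the mapping done by CNXK_QOS_NORMALIZE above can be checked
in isolation; the arithmetic below follows directly from the macro and the
constants in cnxk_eventdev.h and is only an illustration, not part of the
patch:

#include <stdio.h>

/* Same definitions as cnxk_eventdev.h above. */
#define CNXK_QOS_NORMALIZE(val, min, max, cnt)                                 \
	(min + val / ((max + cnt - 1) / cnt))
#define CNXK_SSO_PRIORITY_CNT (0x8)
#define CNXK_SSO_WEIGHT_MAX   (0x3f)
#define CNXK_SSO_WEIGHT_MIN   (0x3)
#define CNXK_SSO_WEIGHT_CNT   (CNXK_SSO_WEIGHT_MAX - CNXK_SSO_WEIGHT_MIN + 1)
#define CNXK_SSO_AFFINITY_CNT (0x10)

int
main(void)
{
	/* Priority: generic 0..255 collapses to 8 HW levels (step 32). */
	printf("prio: %d..%d\n",
	       CNXK_QOS_NORMALIZE(0, 0, 255, CNXK_SSO_PRIORITY_CNT),
	       CNXK_QOS_NORMALIZE(255, 0, 255, CNXK_SSO_PRIORITY_CNT));
	/* Weight: 0..255 maps into the HW range starting at 3 (step 5). */
	printf("weight: %d..%d\n",
	       CNXK_QOS_NORMALIZE(0, CNXK_SSO_WEIGHT_MIN, 255,
				  CNXK_SSO_WEIGHT_CNT),
	       CNXK_QOS_NORMALIZE(255, CNXK_SSO_WEIGHT_MIN, 255,
				  CNXK_SSO_WEIGHT_CNT));
	/* Affinity: 0..255 collapses to 16 HW levels (step 16). */
	printf("affinity: %d..%d\n",
	       CNXK_QOS_NORMALIZE(0, 0, 255, CNXK_SSO_AFFINITY_CNT),
	       CNXK_QOS_NORMALIZE(255, 0, 255, CNXK_SSO_AFFINITY_CNT));
	return 0;	/* prints 0..7, 3..54 and 0..15 */
}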