From patchwork Tue Apr 5 05:40:58 2022
X-Patchwork-Submitter: Shijith Thotton
X-Patchwork-Id: 109142
X-Patchwork-Delegate: jerinj@marvell.com
Subject: [PATCH v2 1/6] eventdev: support to set queue attributes at runtime
From: Shijith Thotton
Date: Tue, 5 Apr 2022 11:10:58 +0530

Added a new eventdev API rte_event_queue_attr_set() to change event queue attributes at runtime from the values set during initialization with rte_event_queue_setup(). PMDs supporting this feature should expose the capability RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR.
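As a usage sketch (not part of the patch), an application would first confirm the capability and then adjust an attribute on a running device; error handling is omitted for brevity:

	struct rte_event_dev_info info;

	rte_event_dev_info_get(dev_id, &info);
	if (info.event_dev_cap & RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR) {
		/* Raise queue 0 to the highest priority without
		 * reconfiguring or restarting the device.
		 */
		rte_event_queue_attr_set(dev_id, 0,
					 RTE_EVENT_QUEUE_ATTR_PRIORITY,
					 RTE_EVENT_DEV_PRIORITY_HIGHEST);
	}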
Signed-off-by: Shijith Thotton
Acked-by: Jerin Jacob
---
doc/guides/eventdevs/features/default.ini | 1 + lib/eventdev/eventdev_pmd.h | 22 +++++++++++++++ lib/eventdev/rte_eventdev.c | 26 ++++++++++++++++++ lib/eventdev/rte_eventdev.h | 33 ++++++++++++++++++++++- lib/eventdev/version.map | 3 +++ 5 files changed, 84 insertions(+), 1 deletion(-)
diff --git a/doc/guides/eventdevs/features/default.ini b/doc/guides/eventdevs/features/default.ini index 2ea233463a..00360f60c6 100644 --- a/doc/guides/eventdevs/features/default.ini +++ b/doc/guides/eventdevs/features/default.ini @@ -17,6 +17,7 @@ runtime_port_link = multiple_queue_port = carry_flow_id = maintenance_free = +runtime_queue_attr = ; ; Features of a default Ethernet Rx adapter.
diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h index ce469d47a6..3b85d9f7a5 100644 --- a/lib/eventdev/eventdev_pmd.h +++ b/lib/eventdev/eventdev_pmd.h @@ -341,6 +341,26 @@ typedef int (*eventdev_queue_setup_t)(struct rte_eventdev *dev, typedef void (*eventdev_queue_release_t)(struct rte_eventdev *dev, uint8_t queue_id); +/** + * Set an event queue attribute at runtime. + * + * @param dev + * Event device pointer + * @param queue_id + * Event queue index + * @param attr_id + * Event queue attribute id + * @param attr_value + * Event queue attribute value + * + * @return + * - 0: Success. + * - <0: Error code on failure. + */ +typedef int (*eventdev_queue_attr_set_t)(struct rte_eventdev *dev, + uint8_t queue_id, uint32_t attr_id, + uint64_t attr_value); + /** * Retrieve the default event port configuration. * @@ -1211,6 +1231,8 @@ struct eventdev_ops { /**< Set up an event queue. */ eventdev_queue_release_t queue_release; /**< Release an event queue. */ + eventdev_queue_attr_set_t queue_attr_set; + /**< Set an event queue attribute. */ eventdev_port_default_conf_get_t port_def_conf; /**< Get default port configuration. */
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c index 532a253553..a31e99be02 100644 --- a/lib/eventdev/rte_eventdev.c +++ b/lib/eventdev/rte_eventdev.c @@ -844,6 +844,32 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id, return 0; } +int +rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id, + uint64_t attr_value) +{ + struct rte_eventdev *dev; + + RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); + dev = &rte_eventdevs[dev_id]; + if (!is_valid_queue(dev, queue_id)) { + RTE_EDEV_LOG_ERR("Invalid queue_id=%" PRIu8, queue_id); + return -EINVAL; + } + + if (!(dev->data->event_dev_cap & + RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR)) { + RTE_EDEV_LOG_ERR( + "Device %" PRIu8 " does not support changing queue attributes at runtime", + dev_id); + return -ENOTSUP; + } + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_attr_set, -ENOTSUP); + return (*dev->dev_ops->queue_attr_set)(dev, queue_id, attr_id, + attr_value); +} + int rte_event_port_link(uint8_t dev_id, uint8_t port_id, const uint8_t queues[], const uint8_t priorities[],
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h index 42a5660169..16e9d5fb5b 100644 --- a/lib/eventdev/rte_eventdev.h +++ b/lib/eventdev/rte_eventdev.h @@ -225,7 +225,7 @@ struct rte_event; /**< Event scheduling prioritization is based on the priority associated with * each event queue.
* - * @see rte_event_queue_setup() + * @see rte_event_queue_setup(), rte_event_queue_attr_set() */ #define RTE_EVENT_DEV_CAP_EVENT_QOS (1ULL << 1) /**< Event scheduling prioritization is based on the priority associated with @@ -307,6 +307,13 @@ struct rte_event; * global pool, or process signaling related to load balancing. */ +#define RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR (1ULL << 11) +/**< Event device is capable of changing the queue attributes at runtime, i.e. after + * the rte_event_queue_setup() or rte_event_dev_start() call sequence. If this flag is + * not set, eventdev queue attributes can only be configured during + * rte_event_queue_setup(). + */ + /* Event device priority levels */ #define RTE_EVENT_DEV_PRIORITY_HIGHEST 0 /**< Highest priority expressed across eventdev subsystem @@ -702,6 +709,30 @@ int rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id, uint32_t *attr_value); +/** + * Set an event queue attribute. + * + * @param dev_id + * Eventdev id + * @param queue_id + * Eventdev queue id + * @param attr_id + * The attribute ID to set + * @param attr_value + * The attribute value to set + * + * @return + * - 0: Successfully set attribute. + * - -EINVAL: invalid device, queue or attr_id. + * - -ENOTSUP: device does not support setting the event queue attribute. + * - -EBUSY: device is in the running state. + * - <0: failed to set the event queue attribute. + */ +__rte_experimental +int +rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id, + uint64_t attr_value); + /* Event port specific APIs */ /* Event port configuration bitmap flags */
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map index cd5dada07f..c581b75c18 100644 --- a/lib/eventdev/version.map +++ b/lib/eventdev/version.map @@ -108,6 +108,9 @@ EXPERIMENTAL { # added in 22.03 rte_event_eth_rx_adapter_event_port_get; + + # added in 22.07 + rte_event_queue_attr_set; }; INTERNAL {
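For PMD authors, the new queue_attr_set op slots into struct eventdev_ops as shown above; a skeleton implementation might look like the following (a sketch only; the dummy_* names are placeholders, not a real driver):

	static int
	dummy_queue_attr_set(struct rte_eventdev *dev, uint8_t queue_id,
			     uint32_t attr_id, uint64_t attr_value)
	{
		struct dummy_evdev *priv = dev->data->dev_private; /* hypothetical private data */

		switch (attr_id) {
		case RTE_EVENT_QUEUE_ATTR_PRIORITY:
			/* Program the new priority into device state. */
			return dummy_hw_set_priority(priv, queue_id, attr_value);
		default:
			return -ENOTSUP; /* attribute not settable at runtime */
		}
	}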
From patchwork Tue Apr 5 05:40:59 2022
X-Patchwork-Submitter: Shijith Thotton
X-Patchwork-Id: 109141
X-Patchwork-Delegate: jerinj@marvell.com
Subject: [PATCH v2 2/6] eventdev: add weight and affinity to queue attributes
From: Shijith Thotton
Date: Tue, 5 Apr 2022 11:10:59 +0530
Message-ID: <3ca1a9399508dd7812cb9f36f8a2989a07e113e2.1649136534.git.sthotton@marvell.com>

Extended eventdev queue QoS attributes to support weight and affinity. If queues are of the same priority, events from the queue with the highest weight are scheduled first. Affinity indicates the number of subsequent schedule calls from an event port that will use the same event queue. The schedule call selects another queue if the current queue goes empty or the schedule count reaches the affinity count. To avoid an ABI break, the weight and affinity attributes are not yet added to the queue config structure; managing them is left to the PMD. The new eventdev op queue_attr_get can be used to read them back from the PMD.

Signed-off-by: Shijith Thotton
Acked-by: Jerin Jacob
---
lib/eventdev/eventdev_pmd.h | 22 +++++++++++++++++++ lib/eventdev/rte_eventdev.c | 12 ++++++++++++ lib/eventdev/rte_eventdev.h | 38 +++++++++++++++++++++++++++++++++++-- 3 files changed, 70 insertions(+), 2 deletions(-)
diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h index 3b85d9f7a5..5495aee4f6 100644 --- a/lib/eventdev/eventdev_pmd.h +++ b/lib/eventdev/eventdev_pmd.h @@ -341,6 +341,26 @@ typedef int (*eventdev_queue_setup_t)(struct rte_eventdev *dev, typedef void (*eventdev_queue_release_t)(struct rte_eventdev *dev, uint8_t queue_id); +/** + * Get an event queue attribute at runtime. + * + * @param dev + * Event device pointer + * @param queue_id + * Event queue index + * @param attr_id + * Event queue attribute id + * @param[out] attr_value + * Event queue attribute value + * + * @return + * - 0: Success. + * - <0: Error code on failure. + */ +typedef int (*eventdev_queue_attr_get_t)(struct rte_eventdev *dev, + uint8_t queue_id, uint32_t attr_id, + uint32_t *attr_value); + /** * Set an event queue attribute at runtime. * @@ -1231,6 +1251,8 @@ struct eventdev_ops { /**< Set up an event queue. */ eventdev_queue_release_t queue_release; /**< Release an event queue. */ + eventdev_queue_attr_get_t queue_attr_get; + /**< Get an event queue attribute.
*/ eventdev_queue_attr_set_t queue_attr_set; /**< Set an event queue attribute. */
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c index a31e99be02..12b261f923 100644 --- a/lib/eventdev/rte_eventdev.c +++ b/lib/eventdev/rte_eventdev.c @@ -838,6 +838,18 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id, *attr_value = conf->schedule_type; break; + case RTE_EVENT_QUEUE_ATTR_WEIGHT: + *attr_value = RTE_EVENT_QUEUE_WEIGHT_LOWEST; + if (dev->dev_ops->queue_attr_get) + return (*dev->dev_ops->queue_attr_get)( + dev, queue_id, attr_id, attr_value); + break; + case RTE_EVENT_QUEUE_ATTR_AFFINITY: + *attr_value = RTE_EVENT_QUEUE_AFFINITY_LOWEST; + if (dev->dev_ops->queue_attr_get) + return (*dev->dev_ops->queue_attr_get)( + dev, queue_id, attr_id, attr_value); + break; default: return -EINVAL; };
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h index 16e9d5fb5b..a6fbaf1c11 100644 --- a/lib/eventdev/rte_eventdev.h +++ b/lib/eventdev/rte_eventdev.h @@ -222,8 +222,14 @@ struct rte_event; /* Event device capability bitmap flags */ #define RTE_EVENT_DEV_CAP_QUEUE_QOS (1ULL << 0) -/**< Event scheduling prioritization is based on the priority associated with - * each event queue. +/**< Event scheduling prioritization is based on the priority and weight + * associated with each event queue. Events from the queue with the highest + * priority are scheduled first. If queues are of the same priority, the weights + * of the queues are considered to select a queue in a weighted round-robin + * fashion. Subsequent dequeue calls from an event port could see events from + * the same event queue, if the queue is configured with an affinity count. + * Affinity count is the number of subsequent dequeue calls in which an event + * port should use the same event queue if the queue is non-empty. * * @see rte_event_queue_setup(), rte_event_queue_attr_set() */ @@ -331,6 +337,26 @@ struct rte_event; * @see rte_event_port_link() */ +/* Event queue scheduling weights */ +#define RTE_EVENT_QUEUE_WEIGHT_HIGHEST 255 +/**< Highest weight of an event queue + * @see rte_event_queue_attr_get(), rte_event_queue_attr_set() + */ +#define RTE_EVENT_QUEUE_WEIGHT_LOWEST 0 +/**< Lowest weight of an event queue + * @see rte_event_queue_attr_get(), rte_event_queue_attr_set() + */ + +/* Event queue scheduling affinity */ +#define RTE_EVENT_QUEUE_AFFINITY_HIGHEST 255 +/**< Highest scheduling affinity of an event queue + * @see rte_event_queue_attr_get(), rte_event_queue_attr_set() + */ +#define RTE_EVENT_QUEUE_AFFINITY_LOWEST 0 +/**< Lowest scheduling affinity of an event queue + * @see rte_event_queue_attr_get(), rte_event_queue_attr_set() + */ + /** * Get the total number of event devices that have been successfully * initialised. @@ -684,6 +710,14 @@ rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id, * The schedule type of the queue. */ #define RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE 4 +/** + * The weight of the queue. + */ +#define RTE_EVENT_QUEUE_ATTR_WEIGHT 5 +/** + * Affinity of the queue. + */ +#define RTE_EVENT_QUEUE_ATTR_AFFINITY 6 /** * Get an attribute from a queue.
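As a usage sketch (not part of the patch), reading the new attributes falls through to the PMD's queue_attr_get op when it is implemented, and otherwise reports the documented defaults:

	uint32_t weight, affinity;

	/* Without a PMD queue_attr_get op these report
	 * RTE_EVENT_QUEUE_WEIGHT_LOWEST / RTE_EVENT_QUEUE_AFFINITY_LOWEST.
	 */
	rte_event_queue_attr_get(dev_id, queue_id,
				 RTE_EVENT_QUEUE_ATTR_WEIGHT, &weight);
	rte_event_queue_attr_get(dev_id, queue_id,
				 RTE_EVENT_QUEUE_ATTR_AFFINITY, &affinity);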
From patchwork Tue Apr 5 05:41:00 2022
X-Patchwork-Submitter: Shijith Thotton
X-Patchwork-Id: 109143
X-Patchwork-Delegate: jerinj@marvell.com
Subject: [PATCH v2 3/6] doc: announce change in event queue conf structure
From: Shijith Thotton
Date: Tue, 5 Apr 2022 11:11:00 +0530

Structure rte_event_queue_conf will be extended to include fields to support the weight and affinity attributes. Once they are added in DPDK 22.11, the eventdev internal op queue_attr_get can be removed.
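For context, the extended structure could look roughly like the sketch below; the existing fields are taken from the current rte_event_queue_conf, while the two new field names are assumptions until the 22.11 change is merged:

	struct rte_event_queue_conf {
		uint32_t nb_atomic_flows;
		uint32_t nb_atomic_order_sequences;
		uint32_t event_queue_cfg;
		uint8_t schedule_type;
		uint8_t priority;
		uint8_t weight;   /* assumed new field */
		uint8_t affinity; /* assumed new field */
	};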
Signed-off-by: Shijith Thotton
---
doc/guides/rel_notes/deprecation.rst | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst index 4e5b23c53d..04125db681 100644 --- a/doc/guides/rel_notes/deprecation.rst +++ b/doc/guides/rel_notes/deprecation.rst @@ -125,3 +125,6 @@ Deprecation Notices applications should be updated to use the ``dmadev`` library instead, with the underlying HW-functionality being provided by the ``ioat`` or ``idxd`` dma drivers + +* eventdev: New fields to represent event queue weight and affinity will be + added to the ``rte_event_queue_conf`` structure in DPDK 22.11.
From patchwork Tue Apr 5 05:41:01 2022
X-Patchwork-Submitter: Shijith Thotton
X-Patchwork-Id: 109144
X-Patchwork-Delegate: jerinj@marvell.com
Subject: [PATCH v2 4/6] test/event: test cases to test runtime queue attribute
From: Shijith Thotton
Date: Tue, 5 Apr 2022 11:11:01 +0530
Added test cases to verify changing the queue QoS attributes priority, weight and affinity at runtime.

Signed-off-by: Shijith Thotton
---
app/test/test_eventdev.c | 149 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 149 insertions(+)
diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c index 4f51042bda..1af93d3b77 100644 --- a/app/test/test_eventdev.c +++ b/app/test/test_eventdev.c @@ -385,6 +385,149 @@ test_eventdev_queue_attr_priority(void) return TEST_SUCCESS; } +static int +test_eventdev_queue_attr_priority_runtime(void) +{ + struct rte_event_queue_conf qconf; + struct rte_event_dev_info info; + uint32_t queue_count; + int i, ret; + + ret = rte_event_dev_info_get(TEST_DEV_ID, &info); + TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info"); + + if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR)) + return TEST_SKIPPED; + + TEST_ASSERT_SUCCESS(rte_event_dev_attr_get( + TEST_DEV_ID, RTE_EVENT_DEV_ATTR_QUEUE_COUNT, + &queue_count), + "Queue count get failed"); + + for (i = 0; i < (int)queue_count; i++) { + ret = rte_event_queue_default_conf_get(TEST_DEV_ID, i, &qconf); + TEST_ASSERT_SUCCESS(ret, "Failed to get queue%d def conf", i); + ret = rte_event_queue_setup(TEST_DEV_ID, i, &qconf); + TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i); + } + + for (i = 0; i < (int)queue_count; i++) { + uint32_t get_val; + uint64_t set_val; + + set_val = i % RTE_EVENT_DEV_PRIORITY_LOWEST; + TEST_ASSERT_SUCCESS( + rte_event_queue_attr_set(TEST_DEV_ID, i, + RTE_EVENT_QUEUE_ATTR_PRIORITY, + set_val), + "Queue priority set failed"); + TEST_ASSERT_SUCCESS( + rte_event_queue_attr_get(TEST_DEV_ID, i, + RTE_EVENT_QUEUE_ATTR_PRIORITY, + &get_val), + "Queue priority get failed"); + TEST_ASSERT_EQUAL(get_val, set_val, + "Wrong priority value for queue%d", i); + } + + return TEST_SUCCESS; +} + +static int +test_eventdev_queue_attr_weight_runtime(void) +{ + struct rte_event_queue_conf qconf; + struct rte_event_dev_info info; + uint32_t queue_count; + int i, ret; + + ret = rte_event_dev_info_get(TEST_DEV_ID, &info); + TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info"); + + if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR)) + return TEST_SKIPPED; + + TEST_ASSERT_SUCCESS(rte_event_dev_attr_get( + TEST_DEV_ID, RTE_EVENT_DEV_ATTR_QUEUE_COUNT, + &queue_count), + "Queue count get failed"); + + for (i = 0; i < (int)queue_count; i++) { + ret = rte_event_queue_default_conf_get(TEST_DEV_ID, i, &qconf); + TEST_ASSERT_SUCCESS(ret, "Failed to get queue%d def conf", i); + ret = rte_event_queue_setup(TEST_DEV_ID, i, &qconf); + TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i); + } + + for (i = 0; i < (int)queue_count; i++) { + uint32_t get_val; + uint64_t set_val; + + set_val = i % RTE_EVENT_QUEUE_WEIGHT_HIGHEST; + TEST_ASSERT_SUCCESS( + rte_event_queue_attr_set(TEST_DEV_ID, i, + RTE_EVENT_QUEUE_ATTR_WEIGHT, + set_val), + "Queue weight set failed"); + TEST_ASSERT_SUCCESS(rte_event_queue_attr_get( + TEST_DEV_ID, i, + RTE_EVENT_QUEUE_ATTR_WEIGHT, &get_val), + "Queue weight get failed"); + TEST_ASSERT_EQUAL(get_val, set_val, + "Wrong weight value for queue%d", i); + } + + return TEST_SUCCESS; +} + +static int +test_eventdev_queue_attr_affinity_runtime(void) +{ + struct rte_event_queue_conf qconf; + struct rte_event_dev_info info; + uint32_t queue_count; + int i,
ret; + + ret = rte_event_dev_info_get(TEST_DEV_ID, &info); + TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info"); + + if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR)) + return TEST_SKIPPED; + + TEST_ASSERT_SUCCESS(rte_event_dev_attr_get( + TEST_DEV_ID, RTE_EVENT_DEV_ATTR_QUEUE_COUNT, + &queue_count), + "Queue count get failed"); + + for (i = 0; i < (int)queue_count; i++) { + ret = rte_event_queue_default_conf_get(TEST_DEV_ID, i, &qconf); + TEST_ASSERT_SUCCESS(ret, "Failed to get queue%d def conf", i); + ret = rte_event_queue_setup(TEST_DEV_ID, i, &qconf); + TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i); + } + + for (i = 0; i < (int)queue_count; i++) { + uint32_t get_val; + uint64_t set_val; + + set_val = i % RTE_EVENT_QUEUE_AFFINITY_HIGHEST; + TEST_ASSERT_SUCCESS( + rte_event_queue_attr_set(TEST_DEV_ID, i, + RTE_EVENT_QUEUE_ATTR_AFFINITY, + set_val), + "Queue affinity set failed"); + TEST_ASSERT_SUCCESS( + rte_event_queue_attr_get(TEST_DEV_ID, i, + RTE_EVENT_QUEUE_ATTR_AFFINITY, + &get_val), + "Queue affinity get failed"); + TEST_ASSERT_EQUAL(get_val, set_val, + "Wrong affinity value for queue%d", i); + } + + return TEST_SUCCESS; +} + static int test_eventdev_queue_attr_nb_atomic_flows(void) { @@ -964,6 +1107,12 @@ static struct unit_test_suite eventdev_common_testsuite = { test_eventdev_queue_count), TEST_CASE_ST(eventdev_configure_setup, NULL, test_eventdev_queue_attr_priority), + TEST_CASE_ST(eventdev_configure_setup, NULL, + test_eventdev_queue_attr_priority_runtime), + TEST_CASE_ST(eventdev_configure_setup, NULL, + test_eventdev_queue_attr_weight_runtime), + TEST_CASE_ST(eventdev_configure_setup, NULL, + test_eventdev_queue_attr_affinity_runtime), TEST_CASE_ST(eventdev_configure_setup, NULL, test_eventdev_queue_attr_nb_atomic_flows), TEST_CASE_ST(eventdev_configure_setup, NULL,
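The three runtime cases are structurally identical apart from the attribute id and the value range; if the duplication becomes a maintenance concern, a parameterized helper along these lines could fold them together (a sketch, not part of the patch):

	static int
	test_queue_attr_set_get(uint8_t queue_id, uint32_t attr_id,
				uint64_t set_val)
	{
		uint32_t get_val;

		/* Round-trip one attribute and verify the value sticks. */
		TEST_ASSERT_SUCCESS(rte_event_queue_attr_set(TEST_DEV_ID, queue_id,
							     attr_id, set_val),
				    "Attr %u set failed", attr_id);
		TEST_ASSERT_SUCCESS(rte_event_queue_attr_get(TEST_DEV_ID, queue_id,
							     attr_id, &get_val),
				    "Attr %u get failed", attr_id);
		TEST_ASSERT_EQUAL(get_val, set_val, "Attr %u mismatch on queue%u",
				  attr_id, queue_id);
		return TEST_SUCCESS;
	}

Note that the new cases return TEST_SKIPPED rather than fail on devices without RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR, so they should be safe in the common autotest (e.g. DPDK_TEST=eventdev_common_autotest ./dpdk-test).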
From patchwork Tue Apr 5 05:41:02 2022
X-Patchwork-Submitter: Shijith Thotton
X-Patchwork-Id: 109145
X-Patchwork-Delegate: jerinj@marvell.com
Subject: [PATCH v2 5/6] event/cnxk: support to set runtime queue attributes
From: Shijith Thotton
Date: Tue, 5 Apr 2022 11:11:02 +0530
Message-ID: <67b6f4fefe4d7d00b3f4806acb4aecd8dd727744.1649136534.git.sthotton@marvell.com>

Added driver ops to set queue attributes at runtime and to get the weight and affinity attributes.

Signed-off-by: Shijith Thotton
---
doc/guides/eventdevs/features/cnxk.ini | 1 + drivers/event/cnxk/cn10k_eventdev.c | 4 ++ drivers/event/cnxk/cn9k_eventdev.c | 4 ++ drivers/event/cnxk/cnxk_eventdev.c | 91 ++++++++++++++++++++++++-- drivers/event/cnxk/cnxk_eventdev.h | 16 +++++ 5 files changed, 110 insertions(+), 6 deletions(-)
diff --git a/doc/guides/eventdevs/features/cnxk.ini b/doc/guides/eventdevs/features/cnxk.ini index 7633c6e3a2..bee69bf8f4 100644 --- a/doc/guides/eventdevs/features/cnxk.ini +++ b/doc/guides/eventdevs/features/cnxk.ini @@ -12,6 +12,7 @@ runtime_port_link = Y multiple_queue_port = Y carry_flow_id = Y maintenance_free = Y +runtime_queue_attr = Y [Eth Rx adapter Features] internal_port = Y
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c index 9b4d2895ec..f6973bb691 100644 --- a/drivers/event/cnxk/cn10k_eventdev.c +++ b/drivers/event/cnxk/cn10k_eventdev.c @@ -845,9 +845,13 @@ cn10k_crypto_adapter_qp_del(const struct rte_eventdev *event_dev, static struct eventdev_ops cn10k_sso_dev_ops = { .dev_infos_get = cn10k_sso_info_get, .dev_configure = cn10k_sso_dev_configure, + .queue_def_conf = cnxk_sso_queue_def_conf, .queue_setup = cnxk_sso_queue_setup, .queue_release = cnxk_sso_queue_release, + .queue_attr_get = cnxk_sso_queue_attribute_get, + .queue_attr_set = cnxk_sso_queue_attribute_set, + .port_def_conf = cnxk_sso_port_def_conf, .port_setup = cn10k_sso_port_setup, .port_release = cn10k_sso_port_release,
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c index 4bba477dd1..7cb59bbbfa 100644 --- a/drivers/event/cnxk/cn9k_eventdev.c +++ b/drivers/event/cnxk/cn9k_eventdev.c @@ -1079,9 +1079,13 @@ cn9k_crypto_adapter_qp_del(const struct rte_eventdev *event_dev, static struct eventdev_ops cn9k_sso_dev_ops = { .dev_infos_get = cn9k_sso_info_get, .dev_configure = cn9k_sso_dev_configure, + .queue_def_conf = cnxk_sso_queue_def_conf, .queue_setup = cnxk_sso_queue_setup, .queue_release =
cnxk_sso_queue_release, + .queue_attr_get = cnxk_sso_queue_attribute_get, + .queue_attr_set = cnxk_sso_queue_attribute_set, + .port_def_conf = cnxk_sso_port_def_conf, .port_setup = cn9k_sso_port_setup, .port_release = cn9k_sso_port_release, diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c index be021d86c9..e07cb589f2 100644 --- a/drivers/event/cnxk/cnxk_eventdev.c +++ b/drivers/event/cnxk/cnxk_eventdev.c @@ -120,7 +120,8 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev, RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT | RTE_EVENT_DEV_CAP_NONSEQ_MODE | RTE_EVENT_DEV_CAP_CARRY_FLOW_ID | - RTE_EVENT_DEV_CAP_MAINTENANCE_FREE; + RTE_EVENT_DEV_CAP_MAINTENANCE_FREE | + RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR; } int @@ -300,11 +301,27 @@ cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id, const struct rte_event_queue_conf *queue_conf) { struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev); - - plt_sso_dbg("Queue=%d prio=%d", queue_id, queue_conf->priority); - /* Normalize <0-255> to <0-7> */ - return roc_sso_hwgrp_set_priority(&dev->sso, queue_id, 0xFF, 0xFF, - queue_conf->priority / 32); + uint8_t priority, weight, affinity; + + /* Default weight and affinity */ + dev->mlt_prio[queue_id].weight = RTE_EVENT_QUEUE_WEIGHT_HIGHEST; + dev->mlt_prio[queue_id].affinity = RTE_EVENT_QUEUE_AFFINITY_HIGHEST; + + priority = CNXK_QOS_NORMALIZE(queue_conf->priority, + RTE_EVENT_DEV_PRIORITY_LOWEST, + CNXK_SSO_PRIORITY_CNT); + weight = CNXK_QOS_NORMALIZE(dev->mlt_prio[queue_id].weight, + RTE_EVENT_QUEUE_WEIGHT_HIGHEST, + CNXK_SSO_WEIGHT_CNT); + affinity = CNXK_QOS_NORMALIZE(dev->mlt_prio[queue_id].affinity, + RTE_EVENT_QUEUE_AFFINITY_HIGHEST, + CNXK_SSO_AFFINITY_CNT); + + plt_sso_dbg("Queue=%u prio=%u weight=%u affinity=%u", queue_id, + priority, weight, affinity); + + return roc_sso_hwgrp_set_priority(&dev->sso, queue_id, weight, affinity, + priority); } void @@ -314,6 +331,68 @@ cnxk_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id) RTE_SET_USED(queue_id); } +int +cnxk_sso_queue_attribute_get(struct rte_eventdev *event_dev, uint8_t queue_id, + uint32_t attr_id, uint32_t *attr_value) +{ + struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev); + + if (attr_id == RTE_EVENT_QUEUE_ATTR_WEIGHT) + *attr_value = dev->mlt_prio[queue_id].weight; + else if (attr_id == RTE_EVENT_QUEUE_ATTR_AFFINITY) + *attr_value = dev->mlt_prio[queue_id].affinity; + else + return -EINVAL; + + return 0; +} + +int +cnxk_sso_queue_attribute_set(struct rte_eventdev *event_dev, uint8_t queue_id, + uint32_t attr_id, uint64_t attr_value) +{ + struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev); + uint8_t priority, weight, affinity; + struct rte_event_queue_conf *conf; + + conf = &event_dev->data->queues_cfg[queue_id]; + + switch (attr_id) { + case RTE_EVENT_QUEUE_ATTR_PRIORITY: + conf->priority = attr_value; + break; + case RTE_EVENT_QUEUE_ATTR_WEIGHT: + dev->mlt_prio[queue_id].weight = attr_value; + break; + case RTE_EVENT_QUEUE_ATTR_AFFINITY: + dev->mlt_prio[queue_id].affinity = attr_value; + break; + case RTE_EVENT_QUEUE_ATTR_NB_ATOMIC_FLOWS: + case RTE_EVENT_QUEUE_ATTR_NB_ATOMIC_ORDER_SEQUENCES: + case RTE_EVENT_QUEUE_ATTR_EVENT_QUEUE_CFG: + case RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE: + /* FALLTHROUGH */ + plt_sso_dbg("Unsupported attribute id %u", attr_id); + return -ENOTSUP; + default: + plt_err("Invalid attribute id %u", attr_id); + return -EINVAL; + } + + priority = CNXK_QOS_NORMALIZE(conf->priority, + RTE_EVENT_DEV_PRIORITY_LOWEST, + CNXK_SSO_PRIORITY_CNT); + weight 
= CNXK_QOS_NORMALIZE(dev->mlt_prio[queue_id].weight, + RTE_EVENT_QUEUE_WEIGHT_HIGHEST, + CNXK_SSO_WEIGHT_CNT); + affinity = CNXK_QOS_NORMALIZE(dev->mlt_prio[queue_id].affinity, + RTE_EVENT_QUEUE_AFFINITY_HIGHEST, + CNXK_SSO_AFFINITY_CNT); + + return roc_sso_hwgrp_set_priority(&dev->sso, queue_id, weight, affinity, + priority); +} + void cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id, struct rte_event_port_conf *port_conf)
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h index 5564746e6d..cde8fc0c67 100644 --- a/drivers/event/cnxk/cnxk_eventdev.h +++ b/drivers/event/cnxk/cnxk_eventdev.h @@ -38,6 +38,9 @@ #define CNXK_SSO_XAQ_CACHE_CNT (0x7) #define CNXK_SSO_XAQ_SLACK (8) #define CNXK_SSO_WQE_SG_PTR (9) +#define CNXK_SSO_PRIORITY_CNT (8) +#define CNXK_SSO_WEIGHT_CNT (64) +#define CNXK_SSO_AFFINITY_CNT (16) #define CNXK_TT_FROM_TAG(x) (((x) >> 32) & SSO_TT_EMPTY) #define CNXK_TT_FROM_EVENT(x) (((x) >> 38) & SSO_TT_EMPTY) @@ -54,6 +57,7 @@ #define CN10K_GW_MODE_PREF 1 #define CN10K_GW_MODE_PREF_WFE 2 +#define CNXK_QOS_NORMALIZE(val, max, cnt) ((val) / (((max) + (cnt) - 1) / (cnt))) #define CNXK_VALID_DEV_OR_ERR_RET(dev, drv_name) \ do { \ if (strncmp(dev->driver->name, drv_name, strlen(drv_name))) \ @@ -79,6 +83,11 @@ struct cnxk_sso_qos { uint16_t iaq_prcnt; }; +struct cnxk_sso_mlt_prio { + uint8_t weight; + uint8_t affinity; +}; + struct cnxk_sso_evdev { struct roc_sso sso; uint8_t max_event_queues; @@ -108,6 +117,7 @@ struct cnxk_sso_evdev { uint64_t *timer_adptr_sz; uint16_t vec_pool_cnt; uint64_t *vec_pools; + struct cnxk_sso_mlt_prio mlt_prio[RTE_EVENT_MAX_QUEUES_PER_DEV]; /* Dev args */ uint32_t xae_cnt; uint8_t qos_queue_cnt; @@ -234,6 +244,12 @@ void cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id, int cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id, const struct rte_event_queue_conf *queue_conf); void cnxk_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id); +int cnxk_sso_queue_attribute_get(struct rte_eventdev *event_dev, + uint8_t queue_id, uint32_t attr_id, + uint32_t *attr_value); +int cnxk_sso_queue_attribute_set(struct rte_eventdev *event_dev, + uint8_t queue_id, uint32_t attr_id, + uint64_t attr_value); void cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id, struct rte_event_port_conf *port_conf); int cnxk_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
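To make the normalization concrete: CNXK_QOS_NORMALIZE(val, max, cnt) divides val by ceil(max / cnt), mapping the 0-255 attribute range onto cnt hardware levels. For the default priority RTE_EVENT_DEV_PRIORITY_LOWEST (255) with CNXK_SSO_PRIORITY_CNT (8), that is 255 / ((255 + 8 - 1) / 8) = 255 / 32 = 7, the last of the eight hardware priority levels; likewise weight 255 with CNXK_SSO_WEIGHT_CNT (64) maps to 63, and affinity 255 with CNXK_SSO_AFFINITY_CNT (16) maps to 15.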
From patchwork Tue Apr 5 05:41:03 2022
X-Patchwork-Submitter: Shijith Thotton
X-Patchwork-Id: 109146
X-Patchwork-Delegate: jerinj@marvell.com
Subject: [PATCH v2 6/6] common/cnxk: use lock when accessing mbox of SSO
From: Shijith Thotton
Date: Tue, 5 Apr 2022 11:11:03 +0530
Message-ID: <9c22418754c23d37e29ea63ad476d8743bcb8743.1649136534.git.sthotton@marvell.com>

From: Pavan Nikhilesh

Since the mbox is now accessed from multiple threads, use a lock to synchronize access.
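The change applies one pattern throughout: take the per-SSO spinlock around each mbox transaction and normalize mbox failures to -EIO. Schematically (a condensed sketch of the shape used below; mbox_alloc_msg_xxx stands in for whichever alloc helper a given function uses):

	rc = -ENOSPC;
	plt_spinlock_lock(&sso->mbox_lock);
	req = mbox_alloc_msg_xxx(dev->mbox); /* placeholder for the real helper */
	if (req == NULL)
		goto fail;
	rc = mbox_process(dev->mbox);
	if (rc)
		rc = -EIO; /* normalize mbox errors */
	fail:
		plt_spinlock_unlock(&sso->mbox_lock);
		return rc;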
Signed-off-by: Pavan Nikhilesh Signed-off-by: Shijith Thotton --- drivers/common/cnxk/roc_sso.c | 174 +++++++++++++++++++++-------- drivers/common/cnxk/roc_sso_priv.h | 1 + drivers/common/cnxk/roc_tim.c | 134 ++++++++++++++-------- 3 files changed, 215 insertions(+), 94 deletions(-) diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c index f8a0a96533..358d37a9f2 100644 --- a/drivers/common/cnxk/roc_sso.c +++ b/drivers/common/cnxk/roc_sso.c @@ -36,8 +36,8 @@ sso_lf_alloc(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf, } rc = mbox_process_msg(dev->mbox, rsp); - if (rc < 0) - return rc; + if (rc) + return -EIO; return 0; } @@ -69,8 +69,8 @@ sso_lf_free(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf) } rc = mbox_process(dev->mbox); - if (rc < 0) - return rc; + if (rc) + return -EIO; return 0; } @@ -98,7 +98,7 @@ sso_rsrc_attach(struct roc_sso *roc_sso, enum sso_lf_type lf_type, } req->modify = true; - if (mbox_process(dev->mbox) < 0) + if (mbox_process(dev->mbox)) return -EIO; return 0; @@ -126,7 +126,7 @@ sso_rsrc_detach(struct roc_sso *roc_sso, enum sso_lf_type lf_type) } req->partial = true; - if (mbox_process(dev->mbox) < 0) + if (mbox_process(dev->mbox)) return -EIO; return 0; @@ -141,9 +141,9 @@ sso_rsrc_get(struct roc_sso *roc_sso) mbox_alloc_msg_free_rsrc_cnt(dev->mbox); rc = mbox_process_msg(dev->mbox, (void **)&rsrc_cnt); - if (rc < 0) { + if (rc) { plt_err("Failed to get free resource count\n"); - return rc; + return -EIO; } roc_sso->max_hwgrp = rsrc_cnt->sso; @@ -197,8 +197,8 @@ sso_msix_fill(struct roc_sso *roc_sso, uint16_t nb_hws, uint16_t nb_hwgrp) mbox_alloc_msg_msix_offset(dev->mbox); rc = mbox_process_msg(dev->mbox, (void **)&rsp); - if (rc < 0) - return rc; + if (rc) + return -EIO; for (i = 0; i < nb_hws; i++) sso->hws_msix_offset[i] = rsp->ssow_msixoff[i]; @@ -285,53 +285,71 @@ int roc_sso_hws_stats_get(struct roc_sso *roc_sso, uint8_t hws, struct roc_sso_hws_stats *stats) { - struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_sso); struct sso_hws_stats *req_rsp; + struct dev *dev = &sso->dev; int rc; + plt_spinlock_lock(&sso->mbox_lock); req_rsp = (struct sso_hws_stats *)mbox_alloc_msg_sso_hws_get_stats( dev->mbox); if (req_rsp == NULL) { rc = mbox_process(dev->mbox); - if (rc < 0) - return rc; + if (rc) { + rc = -EIO; + goto fail; + } req_rsp = (struct sso_hws_stats *) mbox_alloc_msg_sso_hws_get_stats(dev->mbox); - if (req_rsp == NULL) - return -ENOSPC; + if (req_rsp == NULL) { + rc = -ENOSPC; + goto fail; + } } req_rsp->hws = hws; rc = mbox_process_msg(dev->mbox, (void **)&req_rsp); - if (rc) - return rc; + if (rc) { + rc = -EIO; + goto fail; + } stats->arbitration = req_rsp->arbitration; - return 0; +fail: + plt_spinlock_unlock(&sso->mbox_lock); + return rc; } int roc_sso_hwgrp_stats_get(struct roc_sso *roc_sso, uint8_t hwgrp, struct roc_sso_hwgrp_stats *stats) { - struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_sso); struct sso_grp_stats *req_rsp; + struct dev *dev = &sso->dev; int rc; + plt_spinlock_lock(&sso->mbox_lock); req_rsp = (struct sso_grp_stats *)mbox_alloc_msg_sso_grp_get_stats( dev->mbox); if (req_rsp == NULL) { rc = mbox_process(dev->mbox); - if (rc < 0) - return rc; + if (rc) { + rc = -EIO; + goto fail; + } req_rsp = (struct sso_grp_stats *) mbox_alloc_msg_sso_grp_get_stats(dev->mbox); - if (req_rsp == NULL) - return -ENOSPC; + if (req_rsp == NULL) { + rc = -ENOSPC; + goto fail; + } } req_rsp->grp = hwgrp; rc = 
mbox_process_msg(dev->mbox, (void **)&req_rsp); - if (rc) - return rc; + if (rc) { + rc = -EIO; + goto fail; + } stats->aw_status = req_rsp->aw_status; stats->dq_pc = req_rsp->dq_pc; @@ -341,7 +359,10 @@ roc_sso_hwgrp_stats_get(struct roc_sso *roc_sso, uint8_t hwgrp, stats->ts_pc = req_rsp->ts_pc; stats->wa_pc = req_rsp->wa_pc; stats->ws_pc = req_rsp->ws_pc; - return 0; + +fail: + plt_spinlock_unlock(&sso->mbox_lock); + return rc; } int @@ -358,10 +379,12 @@ int roc_sso_hwgrp_qos_config(struct roc_sso *roc_sso, struct roc_sso_hwgrp_qos *qos, uint8_t nb_qos, uint32_t nb_xaq) { - struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_sso); + struct dev *dev = &sso->dev; struct sso_grp_qos_cfg *req; int i, rc; + plt_spinlock_lock(&sso->mbox_lock); for (i = 0; i < nb_qos; i++) { uint8_t xaq_prcnt = qos[i].xaq_prcnt; uint8_t iaq_prcnt = qos[i].iaq_prcnt; @@ -370,11 +393,16 @@ roc_sso_hwgrp_qos_config(struct roc_sso *roc_sso, struct roc_sso_hwgrp_qos *qos, req = mbox_alloc_msg_sso_grp_qos_config(dev->mbox); if (req == NULL) { rc = mbox_process(dev->mbox); - if (rc < 0) - return rc; + if (rc) { + rc = -EIO; + goto fail; + } + req = mbox_alloc_msg_sso_grp_qos_config(dev->mbox); - if (req == NULL) - return -ENOSPC; + if (req == NULL) { + rc = -ENOSPC; + goto fail; + } } req->grp = qos[i].hwgrp; req->xaq_limit = (nb_xaq * (xaq_prcnt ? xaq_prcnt : 100)) / 100; @@ -386,7 +414,12 @@ roc_sso_hwgrp_qos_config(struct roc_sso *roc_sso, struct roc_sso_hwgrp_qos *qos, 100; } - return mbox_process(dev->mbox); + rc = mbox_process(dev->mbox); + if (rc) + rc = -EIO; +fail: + plt_spinlock_unlock(&sso->mbox_lock); + return rc; } int @@ -482,11 +515,16 @@ sso_hwgrp_init_xaq_aura(struct dev *dev, struct roc_sso_xaq_data *xaq, int roc_sso_hwgrp_init_xaq_aura(struct roc_sso *roc_sso, uint32_t nb_xae) { - struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_sso); + struct dev *dev = &sso->dev; + int rc; - return sso_hwgrp_init_xaq_aura(dev, &roc_sso->xaq, nb_xae, - roc_sso->xae_waes, roc_sso->xaq_buf_size, - roc_sso->nb_hwgrp); + plt_spinlock_lock(&sso->mbox_lock); + rc = sso_hwgrp_init_xaq_aura(dev, &roc_sso->xaq, nb_xae, + roc_sso->xae_waes, roc_sso->xaq_buf_size, + roc_sso->nb_hwgrp); + plt_spinlock_unlock(&sso->mbox_lock); + return rc; } int @@ -515,9 +553,14 @@ sso_hwgrp_free_xaq_aura(struct dev *dev, struct roc_sso_xaq_data *xaq, int roc_sso_hwgrp_free_xaq_aura(struct roc_sso *roc_sso, uint16_t nb_hwgrp) { - struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_sso); + struct dev *dev = &sso->dev; + int rc; - return sso_hwgrp_free_xaq_aura(dev, &roc_sso->xaq, nb_hwgrp); + plt_spinlock_lock(&sso->mbox_lock); + rc = sso_hwgrp_free_xaq_aura(dev, &roc_sso->xaq, nb_hwgrp); + plt_spinlock_unlock(&sso->mbox_lock); + return rc; } int @@ -533,16 +576,24 @@ sso_hwgrp_alloc_xaq(struct dev *dev, uint32_t npa_aura_id, uint16_t hwgrps) req->npa_aura_id = npa_aura_id; req->hwgrps = hwgrps; - return mbox_process(dev->mbox); + if (mbox_process(dev->mbox)) + return -EIO; + + return 0; } int roc_sso_hwgrp_alloc_xaq(struct roc_sso *roc_sso, uint32_t npa_aura_id, uint16_t hwgrps) { - struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_sso); + struct dev *dev = &sso->dev; + int rc; - return sso_hwgrp_alloc_xaq(dev, npa_aura_id, hwgrps); + plt_spinlock_lock(&sso->mbox_lock); + rc = sso_hwgrp_alloc_xaq(dev, npa_aura_id, hwgrps); + 
plt_spinlock_unlock(&sso->mbox_lock); + return rc; } int @@ -555,40 +606,56 @@ sso_hwgrp_release_xaq(struct dev *dev, uint16_t hwgrps) return -EINVAL; req->hwgrps = hwgrps; - return mbox_process(dev->mbox); + if (mbox_process(dev->mbox)) + return -EIO; + + return 0; } int roc_sso_hwgrp_release_xaq(struct roc_sso *roc_sso, uint16_t hwgrps) { - struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_sso); + struct dev *dev = &sso->dev; + int rc; - return sso_hwgrp_release_xaq(dev, hwgrps); + plt_spinlock_lock(&sso->mbox_lock); + rc = sso_hwgrp_release_xaq(dev, hwgrps); + plt_spinlock_unlock(&sso->mbox_lock); + return rc; } int roc_sso_hwgrp_set_priority(struct roc_sso *roc_sso, uint16_t hwgrp, uint8_t weight, uint8_t affinity, uint8_t priority) { - struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_sso); + struct dev *dev = &sso->dev; struct sso_grp_priority *req; int rc = -ENOSPC; + plt_spinlock_lock(&sso->mbox_lock); req = mbox_alloc_msg_sso_grp_set_priority(dev->mbox); if (req == NULL) - return rc; + goto fail; req->grp = hwgrp; req->weight = weight; req->affinity = affinity; req->priority = priority; rc = mbox_process(dev->mbox); - if (rc < 0) - return rc; + if (rc) { + rc = -EIO; + goto fail; + } + plt_spinlock_unlock(&sso->mbox_lock); plt_sso_dbg("HWGRP %d weight %d affinity %d priority %d", hwgrp, weight, affinity, priority); return 0; +fail: + plt_spinlock_unlock(&sso->mbox_lock); + return rc; } int @@ -603,10 +670,11 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp) if (roc_sso->max_hws < nb_hws) return -ENOENT; + plt_spinlock_lock(&sso->mbox_lock); rc = sso_rsrc_attach(roc_sso, SSO_LF_TYPE_HWS, nb_hws); if (rc < 0) { plt_err("Unable to attach SSO HWS LFs"); - return rc; + goto fail; } rc = sso_rsrc_attach(roc_sso, SSO_LF_TYPE_HWGRP, nb_hwgrp); @@ -645,6 +713,7 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp) goto sso_msix_fail; } + plt_spinlock_unlock(&sso->mbox_lock); roc_sso->nb_hwgrp = nb_hwgrp; roc_sso->nb_hws = nb_hws; @@ -657,6 +726,8 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp) sso_rsrc_detach(roc_sso, SSO_LF_TYPE_HWGRP); hwgrp_atch_fail: sso_rsrc_detach(roc_sso, SSO_LF_TYPE_HWS); +fail: + plt_spinlock_unlock(&sso->mbox_lock); return rc; } @@ -678,6 +749,7 @@ roc_sso_rsrc_fini(struct roc_sso *roc_sso) roc_sso->nb_hwgrp = 0; roc_sso->nb_hws = 0; + plt_spinlock_unlock(&sso->mbox_lock); } int @@ -696,6 +768,7 @@ roc_sso_dev_init(struct roc_sso *roc_sso) sso = roc_sso_to_sso_priv(roc_sso); memset(sso, 0, sizeof(*sso)); pci_dev = roc_sso->pci_dev; + plt_spinlock_init(&sso->mbox_lock); rc = dev_init(&sso->dev, pci_dev); if (rc < 0) { @@ -703,6 +776,7 @@ roc_sso_dev_init(struct roc_sso *roc_sso) goto fail; } + plt_spinlock_lock(&sso->mbox_lock); rc = sso_rsrc_get(roc_sso); if (rc < 0) { plt_err("Failed to get SSO resources"); @@ -739,6 +813,7 @@ roc_sso_dev_init(struct roc_sso *roc_sso) sso->pci_dev = pci_dev; sso->dev.drv_inited = true; roc_sso->lmt_base = sso->dev.lmt_base; + plt_spinlock_unlock(&sso->mbox_lock); return 0; link_mem_free: @@ -746,6 +821,7 @@ roc_sso_dev_init(struct roc_sso *roc_sso) rsrc_fail: rc |= dev_fini(&sso->dev, pci_dev); fail: + plt_spinlock_unlock(&sso->mbox_lock); return rc; } diff --git a/drivers/common/cnxk/roc_sso_priv.h b/drivers/common/cnxk/roc_sso_priv.h index 09729d4f62..674e4e0a39 100644 --- a/drivers/common/cnxk/roc_sso_priv.h +++ 
b/drivers/common/cnxk/roc_sso_priv.h @@ -22,6 +22,7 @@ struct sso { /* SSO link mapping. */ struct plt_bitmap **link_map; void *link_map_mem; + plt_spinlock_t mbox_lock; } __plt_cache_aligned; enum sso_err_status { diff --git a/drivers/common/cnxk/roc_tim.c b/drivers/common/cnxk/roc_tim.c index cefd9bc89d..0f9209937b 100644 --- a/drivers/common/cnxk/roc_tim.c +++ b/drivers/common/cnxk/roc_tim.c @@ -8,15 +8,16 @@ static int tim_fill_msix(struct roc_tim *roc_tim, uint16_t nb_ring) { - struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso); struct tim *tim = roc_tim_to_tim_priv(roc_tim); + struct dev *dev = &sso->dev; struct msix_offset_rsp *rsp; int i, rc; mbox_alloc_msg_msix_offset(dev->mbox); rc = mbox_process_msg(dev->mbox, (void **)&rsp); - if (rc < 0) - return rc; + if (rc) + return -EIO; for (i = 0; i < nb_ring; i++) tim->tim_msix_offsets[i] = rsp->timlf_msixoff[i]; @@ -88,20 +89,23 @@ int roc_tim_lf_enable(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *start_tsc, uint32_t *cur_bkt) { - struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso); + struct dev *dev = &sso->dev; struct tim_enable_rsp *rsp; struct tim_ring_req *req; int rc = -ENOSPC; + plt_spinlock_lock(&sso->mbox_lock); req = mbox_alloc_msg_tim_enable_ring(dev->mbox); if (req == NULL) - return rc; + goto fail; req->ring = ring_id; rc = mbox_process_msg(dev->mbox, (void **)&rsp); - if (rc < 0) { + if (rc) { tim_err_desc(rc); - return rc; + rc = -EIO; + goto fail; } if (cur_bkt) @@ -109,28 +113,34 @@ roc_tim_lf_enable(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *start_tsc, if (start_tsc) *start_tsc = rsp->timestarted; - return 0; +fail: + plt_spinlock_unlock(&sso->mbox_lock); + return rc; } int roc_tim_lf_disable(struct roc_tim *roc_tim, uint8_t ring_id) { - struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso); + struct dev *dev = &sso->dev; struct tim_ring_req *req; int rc = -ENOSPC; + plt_spinlock_lock(&sso->mbox_lock); req = mbox_alloc_msg_tim_disable_ring(dev->mbox); if (req == NULL) - return rc; + goto fail; req->ring = ring_id; rc = mbox_process(dev->mbox); - if (rc < 0) { + if (rc) { tim_err_desc(rc); - return rc; + rc = -EIO; } - return 0; +fail: + plt_spinlock_unlock(&sso->mbox_lock); + return rc; } uintptr_t @@ -147,13 +157,15 @@ roc_tim_lf_config(struct roc_tim *roc_tim, uint8_t ring_id, uint8_t ena_dfb, uint32_t bucket_sz, uint32_t chunk_sz, uint32_t interval, uint64_t intervalns, uint64_t clockfreq) { - struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso); + struct dev *dev = &sso->dev; struct tim_config_req *req; int rc = -ENOSPC; + plt_spinlock_lock(&sso->mbox_lock); req = mbox_alloc_msg_tim_config_ring(dev->mbox); if (req == NULL) - return rc; + goto fail; req->ring = ring_id; req->bigendian = false; req->bucketsize = bucket_sz; @@ -167,12 +179,14 @@ roc_tim_lf_config(struct roc_tim *roc_tim, uint8_t ring_id, req->gpioedge = TIM_GPIO_LTOH_TRANS; rc = mbox_process(dev->mbox); - if (rc < 0) { + if (rc) { tim_err_desc(rc); - return rc; + rc = -EIO; } - return 0; +fail: + plt_spinlock_unlock(&sso->mbox_lock); + return rc; } int @@ -180,27 +194,32 @@ roc_tim_lf_interval(struct roc_tim *roc_tim, enum roc_tim_clk_src clk_src, uint64_t clockfreq, uint64_t *intervalns, uint64_t *interval) { - struct dev *dev = 
&roc_sso_to_sso_priv(roc_tim->roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso); + struct dev *dev = &sso->dev; struct tim_intvl_req *req; struct tim_intvl_rsp *rsp; int rc = -ENOSPC; + plt_spinlock_lock(&sso->mbox_lock); req = mbox_alloc_msg_tim_get_min_intvl(dev->mbox); if (req == NULL) - return rc; + goto fail; req->clockfreq = clockfreq; req->clocksource = clk_src; rc = mbox_process_msg(dev->mbox, (void **)&rsp); - if (rc < 0) { + if (rc) { tim_err_desc(rc); - return rc; + rc = -EIO; + goto fail; } *intervalns = rsp->intvl_ns; *interval = rsp->intvl_cyc; - return 0; +fail: + plt_spinlock_unlock(&sso->mbox_lock); + return rc; } int @@ -214,17 +233,19 @@ roc_tim_lf_alloc(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *clk) struct dev *dev = &sso->dev; int rc = -ENOSPC; + plt_spinlock_lock(&sso->mbox_lock); req = mbox_alloc_msg_tim_lf_alloc(dev->mbox); if (req == NULL) - return rc; + goto fail; req->npa_pf_func = idev_npa_pffunc_get(); req->sso_pf_func = idev_sso_pffunc_get(); req->ring = ring_id; rc = mbox_process_msg(dev->mbox, (void **)&rsp); - if (rc < 0) { + if (rc) { tim_err_desc(rc); - return rc; + rc = -EIO; + goto fail; } if (clk) @@ -235,12 +256,18 @@ roc_tim_lf_alloc(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *clk) if (rc < 0) { plt_tim_dbg("Failed to register Ring[%d] IRQ", ring_id); free_req = mbox_alloc_msg_tim_lf_free(dev->mbox); - if (free_req == NULL) - return -ENOSPC; + if (free_req == NULL) { + rc = -ENOSPC; + goto fail; + } free_req->ring = ring_id; - mbox_process(dev->mbox); + rc = mbox_process(dev->mbox); + if (rc) + rc = -EIO; } +fail: + plt_spinlock_unlock(&sso->mbox_lock); return rc; } @@ -256,17 +283,20 @@ roc_tim_lf_free(struct roc_tim *roc_tim, uint8_t ring_id) tim_unregister_irq_priv(roc_tim, sso->pci_dev->intr_handle, ring_id, tim->tim_msix_offsets[ring_id]); + plt_spinlock_lock(&sso->mbox_lock); req = mbox_alloc_msg_tim_lf_free(dev->mbox); if (req == NULL) - return rc; + goto fail; req->ring = ring_id; rc = mbox_process(dev->mbox); if (rc < 0) { tim_err_desc(rc); - return rc; + rc = -EIO; } +fail: + plt_spinlock_unlock(&sso->mbox_lock); return 0; } @@ -276,40 +306,48 @@ roc_tim_init(struct roc_tim *roc_tim) struct rsrc_attach_req *attach_req; struct rsrc_detach_req *detach_req; struct free_rsrcs_rsp *free_rsrc; - struct dev *dev; + struct sso *sso; uint16_t nb_lfs; + struct dev *dev; int rc; if (roc_tim == NULL || roc_tim->roc_sso == NULL) return TIM_ERR_PARAM; + sso = roc_sso_to_sso_priv(roc_tim->roc_sso); + dev = &sso->dev; PLT_STATIC_ASSERT(sizeof(struct tim) <= TIM_MEM_SZ); - dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev; nb_lfs = roc_tim->nb_lfs; + plt_spinlock_lock(&sso->mbox_lock); mbox_alloc_msg_free_rsrc_cnt(dev->mbox); rc = mbox_process_msg(dev->mbox, (void *)&free_rsrc); - if (rc < 0) { + if (rc) { plt_err("Unable to get free rsrc count."); - return 0; + nb_lfs = 0; + goto fail; } if (nb_lfs && (free_rsrc->tim < nb_lfs)) { plt_tim_dbg("Requested LFs : %d Available LFs : %d", nb_lfs, free_rsrc->tim); - return 0; + nb_lfs = 0; + goto fail; } attach_req = mbox_alloc_msg_attach_resources(dev->mbox); - if (attach_req == NULL) - return -ENOSPC; + if (attach_req == NULL) { + nb_lfs = 0; + goto fail; + } attach_req->modify = true; attach_req->timlfs = nb_lfs ? 
nb_lfs : free_rsrc->tim; nb_lfs = attach_req->timlfs; rc = mbox_process(dev->mbox); - if (rc < 0) { + if (rc) { plt_err("Unable to attach TIM LFs."); - return 0; + nb_lfs = 0; + goto fail; } rc = tim_fill_msix(roc_tim, nb_lfs); @@ -317,28 +355,34 @@ roc_tim_init(struct roc_tim *roc_tim) plt_err("Unable to get TIM MSIX vectors"); detach_req = mbox_alloc_msg_detach_resources(dev->mbox); - if (detach_req == NULL) - return -ENOSPC; + if (detach_req == NULL) { + nb_lfs = 0; + goto fail; + } detach_req->partial = true; detach_req->timlfs = true; mbox_process(dev->mbox); - - return 0; + nb_lfs = 0; } +fail: + plt_spinlock_unlock(&sso->mbox_lock); return nb_lfs; } void roc_tim_fini(struct roc_tim *roc_tim) { - struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev; + struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso); struct rsrc_detach_req *detach_req; + struct dev *dev = &sso->dev; + plt_spinlock_lock(&sso->mbox_lock); detach_req = mbox_alloc_msg_detach_resources(dev->mbox); PLT_ASSERT(detach_req); detach_req->partial = true; detach_req->timlfs = true; mbox_process(dev->mbox); + plt_spinlock_unlock(&sso->mbox_lock); }