From patchwork Tue Mar 29 13:11:00 2022
X-Patchwork-Submitter: Shijith Thotton
X-Patchwork-Id: 109012
X-Patchwork-Delegate: jerinj@marvell.com
From: Shijith Thotton
Cc: Ray Kinsella
Subject: [PATCH 1/6] eventdev: support to set queue attributes at runtime
Date: Tue, 29 Mar 2022 18:41:00 +0530
Message-ID: <159a14ece2480a3704ee34ee0d81dda331c16957.1648549553.git.sthotton@marvell.com>

Added a new eventdev API rte_event_queue_attr_set() to set event queue
attributes at runtime, overriding the values set during initialization with
rte_event_queue_setup(). PMDs supporting this feature should expose the
capability RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR.
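A minimal application-side sketch of the new call, assuming dev_id names a
configured and started event device; the helper name is illustrative only and
not part of the patch:

#include <errno.h>
#include <rte_eventdev.h>

/* Hypothetical helper: raise queue 0 to the highest priority at runtime. */
static int
queue_prio_boost(uint8_t dev_id)
{
	struct rte_event_dev_info info;
	int ret;

	ret = rte_event_dev_info_get(dev_id, &info);
	if (ret < 0)
		return ret;

	/* Without this capability, attributes are fixed at queue setup time. */
	if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR))
		return -ENOTSUP;

	return rte_event_queue_attr_set(dev_id, 0, RTE_EVENT_QUEUE_ATTR_PRIORITY,
					RTE_EVENT_DEV_PRIORITY_HIGHEST);
}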
Signed-off-by: Shijith Thotton
---
 doc/guides/eventdevs/features/default.ini |  1 +
 lib/eventdev/eventdev_pmd.h               | 22 +++++++++++++
 lib/eventdev/rte_eventdev.c               | 31 ++++++++++++++++++
 lib/eventdev/rte_eventdev.h               | 38 ++++++++++++++++++++++-
 lib/eventdev/version.map                  |  3 ++
 5 files changed, 94 insertions(+), 1 deletion(-)

diff --git a/doc/guides/eventdevs/features/default.ini b/doc/guides/eventdevs/features/default.ini
index 2ea233463a..00360f60c6 100644
--- a/doc/guides/eventdevs/features/default.ini
+++ b/doc/guides/eventdevs/features/default.ini
@@ -17,6 +17,7 @@ runtime_port_link          =
 multiple_queue_port        =
 carry_flow_id              =
 maintenance_free           =
+runtime_queue_attr         =
 
 ;
 ; Features of a default Ethernet Rx adapter.
diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index ce469d47a6..6182749503 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -341,6 +341,26 @@ typedef int (*eventdev_queue_setup_t)(struct rte_eventdev *dev,
 typedef void (*eventdev_queue_release_t)(struct rte_eventdev *dev,
 		uint8_t queue_id);
 
+/**
+ * Set an event queue attribute at runtime.
+ *
+ * @param dev
+ *   Event device pointer
+ * @param queue_id
+ *   Event queue index
+ * @param attr_id
+ *   Event queue attribute id
+ * @param attr_value
+ *   Event queue attribute value
+ *
+ * @return
+ *   - 0: Success.
+ *   - <0: Error code on failure.
+ */
+typedef int (*eventdev_queue_attr_set_t)(struct rte_eventdev *dev,
+					 uint8_t queue_id, uint32_t attr_id,
+					 uint32_t attr_value);
+
 /**
  * Retrieve the default event port configuration.
  *
@@ -1211,6 +1231,8 @@ struct eventdev_ops {
 	/**< Set up an event queue. */
 	eventdev_queue_release_t queue_release;
 	/**< Release an event queue. */
+	eventdev_queue_attr_set_t queue_attr_set;
+	/**< Set an event queue attribute. */
 	eventdev_port_default_conf_get_t port_def_conf;
 	/**< Get default port configuration. */
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 532a253553..13c8af877e 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -844,6 +844,37 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
 	return 0;
 }
 
+int
+rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
+			 uint32_t attr_value)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+	if (!is_valid_queue(dev, queue_id)) {
+		RTE_EDEV_LOG_ERR("Invalid queue_id=%" PRIu8, queue_id);
+		return -EINVAL;
+	}
+
+	if (attr_id > RTE_EVENT_QUEUE_ATTR_MAX) {
+		RTE_EDEV_LOG_ERR("Invalid attribute ID %" PRIu32, attr_id);
+		return -EINVAL;
+	}
+
+	if (!(dev->data->event_dev_cap &
+	      RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR)) {
+		RTE_EDEV_LOG_ERR(
+			"Device %" PRIu8 " does not support changing queue attributes at runtime",
+			dev_id);
+		return -ENOTSUP;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_attr_set, -ENOTSUP);
+	return (*dev->dev_ops->queue_attr_set)(dev, queue_id, attr_id,
+					       attr_value);
+}
+
 int
 rte_event_port_link(uint8_t dev_id, uint8_t port_id, const uint8_t queues[],
 		    const uint8_t priorities[],
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 42a5660169..19710cd0c5 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -225,7 +225,7 @@ struct rte_event;
 /**< Event scheduling prioritization is based on the priority associated with
  *  each event queue.
  *
- * @see rte_event_queue_setup()
+ * @see rte_event_queue_setup(), rte_event_queue_attr_set()
  */
 #define RTE_EVENT_DEV_CAP_EVENT_QOS (1ULL << 1)
 /**< Event scheduling prioritization is based on the priority associated with
  *  each event queue.
@@ -307,6 +307,13 @@ struct rte_event;
  * global pool, or process signaling related to load balancing.
  */
 
+#define RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR (1ULL << 11)
+/**< Event device is capable of changing the queue attributes at runtime,
+ * i.e. after the rte_event_queue_setup() or rte_event_start() call sequence.
+ * If this flag is not set, eventdev queue attributes can only be configured
+ * during rte_event_queue_setup().
+ */
+
 /* Event device priority levels */
 #define RTE_EVENT_DEV_PRIORITY_HIGHEST 0
 /**< Highest priority expressed across eventdev subsystem
@@ -678,6 +685,11 @@ rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
  */
 #define RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE 4
 
+/**
+ * Maximum supported attribute ID.
+ */
+#define RTE_EVENT_QUEUE_ATTR_MAX RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE
+
 /**
  * Get an attribute from a queue.
  *
@@ -702,6 +714,30 @@ int
 rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
 			 uint32_t *attr_value);
 
+/**
+ * Set an event queue attribute.
+ *
+ * @param dev_id
+ *   Eventdev id
+ * @param queue_id
+ *   Eventdev queue id
+ * @param attr_id
+ *   The attribute ID to set
+ * @param attr_value
+ *   The attribute value to set
+ *
+ * @return
+ *   - 0: Successfully set attribute.
+ *   - -EINVAL: invalid device, queue or attr_id.
+ *   - -ENOTSUP: device does not support setting the event attribute.
+ *   - -EBUSY: device is in the running state.
+ *   - <0: failed to set the event queue attribute.
+ */
+__rte_experimental
+int
+rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
+			 uint32_t attr_value);
+
 /* Event port specific APIs */
 
 /* Event port configuration bitmap flags */
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index cd5dada07f..c581b75c18 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -108,6 +108,9 @@ EXPERIMENTAL {
 
 	# added in 22.03
 	rte_event_eth_rx_adapter_event_port_get;
+
+	# added in 22.07
+	rte_event_queue_attr_set;
 };
 
 INTERNAL {

From patchwork Tue Mar 29 13:11:01 2022
X-Patchwork-Submitter: Shijith Thotton
X-Patchwork-Id: 109011
X-Patchwork-Delegate: jerinj@marvell.com
From: Shijith Thotton
Subject: [PATCH 2/6] eventdev: add weight and affinity to queue attributes
Date: Tue, 29 Mar 2022 18:41:01 +0530
Extended the eventdev queue QoS attributes to support weight and affinity.
If queues are of the same priority, events from the queue with the highest
weight will be scheduled first. Affinity indicates the number of times the
subsequent schedule calls from an event port will use the same event queue.
The schedule call selects another queue if the current queue goes empty or
the schedule count reaches the affinity count.

To avoid an ABI break, the weight and affinity attributes are not yet added
to the queue config structure; managing them is left to the PMD. The new
eventdev op queue_attr_get can be used to fetch them from the PMD. A usage
sketch follows this patch.

Signed-off-by: Shijith Thotton
---
 lib/eventdev/eventdev_pmd.h | 22 ++++++++++++++
 lib/eventdev/rte_eventdev.c | 12 +++++++++
 lib/eventdev/rte_eventdev.h | 41 +++++++++++++++++++++++++++----
 3 files changed, 71 insertions(+), 4 deletions(-)

diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index 6182749503..f19df98a7a 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -341,6 +341,26 @@ typedef int (*eventdev_queue_setup_t)(struct rte_eventdev *dev,
 typedef void (*eventdev_queue_release_t)(struct rte_eventdev *dev,
 		uint8_t queue_id);
 
+/**
+ * Get an event queue attribute at runtime.
+ *
+ * @param dev
+ *   Event device pointer
+ * @param queue_id
+ *   Event queue index
+ * @param attr_id
+ *   Event queue attribute id
+ * @param[out] attr_value
+ *   Event queue attribute value
+ *
+ * @return
+ *   - 0: Success.
+ *   - <0: Error code on failure.
+ */
+typedef int (*eventdev_queue_attr_get_t)(struct rte_eventdev *dev,
+					 uint8_t queue_id, uint32_t attr_id,
+					 uint32_t *attr_value);
+
 /**
  * Set an event queue attribute at runtime.
  *
@@ -1231,6 +1251,8 @@ struct eventdev_ops {
 	/**< Set up an event queue. */
 	eventdev_queue_release_t queue_release;
 	/**< Release an event queue. */
+	eventdev_queue_attr_get_t queue_attr_get;
+	/**< Get an event queue attribute. */
 	eventdev_queue_attr_set_t queue_attr_set;
 	/**< Set an event queue attribute. */
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 13c8af877e..37f0e54bf3 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -838,6 +838,18 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
 		*attr_value = conf->schedule_type;
 		break;
+	case RTE_EVENT_QUEUE_ATTR_WEIGHT:
+		*attr_value = RTE_EVENT_QUEUE_WEIGHT_LOWEST;
+		if (dev->dev_ops->queue_attr_get)
+			return (*dev->dev_ops->queue_attr_get)(
+				dev, queue_id, attr_id, attr_value);
+		break;
+	case RTE_EVENT_QUEUE_ATTR_AFFINITY:
+		*attr_value = RTE_EVENT_QUEUE_AFFINITY_LOWEST;
+		if (dev->dev_ops->queue_attr_get)
+			return (*dev->dev_ops->queue_attr_get)(
+				dev, queue_id, attr_id, attr_value);
+		break;
 	default:
 		return -EINVAL;
 	};
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 19710cd0c5..fa16fc5dcb 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -222,8 +222,14 @@ struct rte_event;
 /* Event device capability bitmap flags */
 #define RTE_EVENT_DEV_CAP_QUEUE_QOS (1ULL << 0)
-/**< Event scheduling prioritization is based on the priority associated with
- *  each event queue.
+/**< Event scheduling prioritization is based on the priority and weight
+ * associated with each event queue. Events from a queue with the highest
+ * priority are scheduled first. If the queues are of the same priority, the
+ * queue with the highest weight is selected. Subsequent schedules from an
+ * event port could see events from the same event queue if the queue is
+ * configured with an affinity count. The affinity count of a queue indicates
+ * the number of times subsequent schedule calls from an event port should
+ * use the same queue if the queue is non-empty.
  *
  * @see rte_event_queue_setup(), rte_event_queue_attr_set()
  */
@@ -331,6 +337,26 @@ struct rte_event;
  * @see rte_event_port_link()
  */
 
+/* Event queue scheduling weights */
+#define RTE_EVENT_QUEUE_WEIGHT_HIGHEST 255
+/**< Highest weight of an event queue
+ * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
+ */
+#define RTE_EVENT_QUEUE_WEIGHT_LOWEST 0
+/**< Lowest weight of an event queue
+ * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
+ */
+
+/* Event queue scheduling affinity */
+#define RTE_EVENT_QUEUE_AFFINITY_HIGHEST 255
+/**< Highest scheduling affinity of an event queue
+ * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
+ */
+#define RTE_EVENT_QUEUE_AFFINITY_LOWEST 0
+/**< Lowest scheduling affinity of an event queue
+ * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
+ */
+
 /**
  * Get the total number of event devices that have been successfully
  * initialised.
@@ -684,11 +710,18 @@ rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
  * The schedule type of the queue.
  */
 #define RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE 4
-
+/**
+ * The weight of the queue.
+ */
+#define RTE_EVENT_QUEUE_ATTR_WEIGHT 5
+/**
+ * Affinity of the queue.
+ */
+#define RTE_EVENT_QUEUE_ATTR_AFFINITY 6
 /**
  * Maximum supported attribute ID.
  */
-#define RTE_EVENT_QUEUE_ATTR_MAX RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE
+#define RTE_EVENT_QUEUE_ATTR_MAX RTE_EVENT_QUEUE_ATTR_AFFINITY
 
 /**
  * Get an attribute from a queue.
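A short sketch of how an application might combine the two new attributes on
a device with RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR; dev_id is assumed as in
the earlier sketch, and the queue id and affinity value 4 are illustrative:

/* Favour queue 1 among equal-priority queues and let a port drain up to
 * four consecutive schedules from it before another queue is considered. */
int ret;

ret = rte_event_queue_attr_set(dev_id, 1, RTE_EVENT_QUEUE_ATTR_WEIGHT,
			       RTE_EVENT_QUEUE_WEIGHT_HIGHEST);
if (ret == 0)
	ret = rte_event_queue_attr_set(dev_id, 1,
				       RTE_EVENT_QUEUE_ATTR_AFFINITY, 4);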
From patchwork Tue Mar 29 13:11:02 2022
X-Patchwork-Submitter: Shijith Thotton
X-Patchwork-Id: 109014
X-Patchwork-Delegate: jerinj@marvell.com
From: Shijith Thotton
Cc: Ray Kinsella
Subject: [PATCH 3/6] doc: announce change in event queue conf structure
Date: Tue, 29 Mar 2022 18:41:02 +0530

Structure rte_event_queue_conf will be extended to include fields to support
the weight and affinity attributes. Once they are added in DPDK 22.11, the
eventdev internal op queue_attr_get can be removed.
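For context, the announced change amounts to something like the following
shape of rte_event_queue_conf; the placement and types of the new fields are
this editor's guess from the notice, and only the final 22.11 header is
authoritative:

struct rte_event_queue_conf {
	uint32_t nb_atomic_flows;
	uint32_t nb_atomic_order_sequences;
	uint32_t event_queue_cfg;
	uint8_t schedule_type;
	uint8_t priority;
	uint8_t weight;   /* announced for DPDK 22.11 */
	uint8_t affinity; /* announced for DPDK 22.11 */
};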
Signed-off-by: Shijith Thotton
---
 doc/guides/rel_notes/deprecation.rst | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 4e5b23c53d..04125db681 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -125,3 +125,6 @@ Deprecation Notices
   applications should be updated to use the ``dmadev`` library instead,
   with the underlying HW-functionality being provided by the ``ioat`` or
   ``idxd`` dma drivers
+
+* eventdev: New fields to represent event queue weight and affinity will be
+  added to ``rte_event_queue_conf`` structure in DPDK 22.11.

From patchwork Tue Mar 29 13:11:03 2022
X-Patchwork-Submitter: Shijith Thotton
X-Patchwork-Id: 109013
X-Patchwork-Delegate: jerinj@marvell.com
From: Shijith Thotton
Subject: [PATCH 4/6] test/event: test cases to test runtime queue attribute
Date: Tue, 29 Mar 2022 18:41:03 +0530
Message-ID: <19889493d6ef46c33f00e6e7a3f3ceff5a13405c.1648549553.git.sthotton@marvell.com>
Added test cases for changing the queue QoS attributes priority, weight and
affinity at runtime.

Signed-off-by: Shijith Thotton
---
 app/test/test_eventdev.c | 146 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 146 insertions(+)

diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index 4f51042bda..b9ec319ad9 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -385,6 +385,146 @@ test_eventdev_queue_attr_priority(void)
 	return TEST_SUCCESS;
 }
 
+static int
+test_eventdev_queue_attr_priority_runtime(void)
+{
+	struct rte_event_queue_conf qconf;
+	struct rte_event_dev_info info;
+	uint32_t queue_count;
+	int i, ret;
+
+	ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR))
+		return TEST_SKIPPED;
+
+	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(
+				    TEST_DEV_ID, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+				    &queue_count),
+			    "Queue count get failed");
+
+	for (i = 0; i < (int)queue_count; i++) {
+		ret = rte_event_queue_default_conf_get(TEST_DEV_ID, i, &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to get queue%d def conf", i);
+		ret = rte_event_queue_setup(TEST_DEV_ID, i, &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i);
+	}
+
+	for (i = 0; i < (int)queue_count; i++) {
+		uint32_t attr_val, tmp;
+
+		attr_val = i % RTE_EVENT_DEV_PRIORITY_LOWEST;
+		TEST_ASSERT_SUCCESS(
+			rte_event_queue_attr_set(TEST_DEV_ID, i,
+						 RTE_EVENT_QUEUE_ATTR_PRIORITY,
+						 attr_val),
+			"Queue priority set failed");
+		TEST_ASSERT_SUCCESS(
+			rte_event_queue_attr_get(TEST_DEV_ID, i,
+						 RTE_EVENT_QUEUE_ATTR_PRIORITY,
+						 &tmp),
+			"Queue priority get failed");
+		TEST_ASSERT_EQUAL(tmp, attr_val,
+				  "Wrong priority value for queue%d", i);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_queue_attr_weight_runtime(void)
+{
+	struct rte_event_queue_conf qconf;
+	struct rte_event_dev_info info;
+	uint32_t queue_count;
+	int i, ret;
+
+	ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR))
+		return TEST_SKIPPED;
+
+	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(
+				    TEST_DEV_ID, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+				    &queue_count),
+			    "Queue count get failed");
+
+	for (i = 0; i < (int)queue_count; i++) {
+		ret = rte_event_queue_default_conf_get(TEST_DEV_ID, i, &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to get queue%d def conf", i);
+		ret = rte_event_queue_setup(TEST_DEV_ID, i, &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i);
+	}
+
+	for (i = 0; i < (int)queue_count; i++) {
+		uint32_t attr_val, tmp;
+
+		attr_val = i % RTE_EVENT_QUEUE_WEIGHT_HIGHEST;
+		TEST_ASSERT_SUCCESS(
+			rte_event_queue_attr_set(TEST_DEV_ID, i,
+						 RTE_EVENT_QUEUE_ATTR_WEIGHT,
+						 attr_val),
+			"Queue weight set failed");
+		TEST_ASSERT_SUCCESS(rte_event_queue_attr_get(
+					    TEST_DEV_ID, i,
+					    RTE_EVENT_QUEUE_ATTR_WEIGHT, &tmp),
+				    "Queue weight get failed");
+		TEST_ASSERT_EQUAL(tmp, attr_val,
+				  "Wrong weight value for queue%d", i);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_queue_attr_affinity_runtime(void)
+{
+	struct rte_event_queue_conf qconf;
+	struct rte_event_dev_info info;
+	uint32_t queue_count;
+	int i, ret;
+
+	ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR))
+		return TEST_SKIPPED;
+
+	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(
+				    TEST_DEV_ID, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+				    &queue_count),
+			    "Queue count get failed");
+
+	for (i = 0; i < (int)queue_count; i++) {
+		ret = rte_event_queue_default_conf_get(TEST_DEV_ID, i, &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to get queue%d def conf", i);
+		ret = rte_event_queue_setup(TEST_DEV_ID, i, &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i);
+	}
+
+	for (i = 0; i < (int)queue_count; i++) {
+		uint32_t attr_val, tmp;
+
+		attr_val = i % RTE_EVENT_QUEUE_AFFINITY_HIGHEST;
+		TEST_ASSERT_SUCCESS(
+			rte_event_queue_attr_set(TEST_DEV_ID, i,
+						 RTE_EVENT_QUEUE_ATTR_AFFINITY,
+						 attr_val),
+			"Queue affinity set failed");
+		TEST_ASSERT_SUCCESS(
+			rte_event_queue_attr_get(TEST_DEV_ID, i,
+						 RTE_EVENT_QUEUE_ATTR_AFFINITY,
+						 &tmp),
+			"Queue affinity get failed");
+		TEST_ASSERT_EQUAL(tmp, attr_val,
+				  "Wrong affinity value for queue%d", i);
+	}
+
+	return TEST_SUCCESS;
+}
+
 static int
 test_eventdev_queue_attr_nb_atomic_flows(void)
 {
@@ -964,6 +1104,12 @@ static struct unit_test_suite eventdev_common_testsuite = {
 			     test_eventdev_queue_count),
 		TEST_CASE_ST(eventdev_configure_setup, NULL,
 			     test_eventdev_queue_attr_priority),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			     test_eventdev_queue_attr_priority_runtime),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			     test_eventdev_queue_attr_weight_runtime),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			     test_eventdev_queue_attr_affinity_runtime),
 		TEST_CASE_ST(eventdev_configure_setup, NULL,
 			     test_eventdev_queue_attr_nb_atomic_flows),
 		TEST_CASE_ST(eventdev_configure_setup, NULL,
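These cases slot into the existing eventdev_common_testsuite, so they should
run through the usual dpdk-test entry point (the eventdev_common_autotest
command), assuming the device under test advertises the runtime capability;
otherwise each case reports TEST_SKIPPED.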
From patchwork Tue Mar 29 13:11:04 2022
X-Patchwork-Submitter: Shijith Thotton
X-Patchwork-Id: 109015
X-Patchwork-Delegate: jerinj@marvell.com
From: Shijith Thotton
Subject: [PATCH 5/6] event/cnxk: support to set runtime queue attributes
Date: Tue, 29 Mar 2022 18:41:04 +0530

Added an API to set queue attributes at runtime and an API to get the weight
and affinity.

Signed-off-by: Shijith Thotton
---
 doc/guides/eventdevs/features/cnxk.ini |  1 +
 drivers/event/cnxk/cn10k_eventdev.c    |  4 ++
 drivers/event/cnxk/cn9k_eventdev.c     |  4 ++
 drivers/event/cnxk/cnxk_eventdev.c     | 81 ++++++++++++++++++++++++--
 drivers/event/cnxk/cnxk_eventdev.h     | 16 +++++
 5 files changed, 100 insertions(+), 6 deletions(-)

diff --git a/doc/guides/eventdevs/features/cnxk.ini b/doc/guides/eventdevs/features/cnxk.ini
index 7633c6e3a2..bee69bf8f4 100644
--- a/doc/guides/eventdevs/features/cnxk.ini
+++ b/doc/guides/eventdevs/features/cnxk.ini
@@ -12,6 +12,7 @@ runtime_port_link          = Y
 multiple_queue_port        = Y
 carry_flow_id              = Y
 maintenance_free           = Y
+runtime_queue_attr         = Y
 
 [Eth Rx adapter Features]
 internal_port = Y
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 9b4d2895ec..f6973bb691 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -845,9 +845,13 @@ cn10k_crypto_adapter_qp_del(const struct rte_eventdev *event_dev,
 static struct eventdev_ops cn10k_sso_dev_ops = {
 	.dev_infos_get = cn10k_sso_info_get,
 	.dev_configure = cn10k_sso_dev_configure,
+
 	.queue_def_conf = cnxk_sso_queue_def_conf,
 	.queue_setup = cnxk_sso_queue_setup,
 	.queue_release = cnxk_sso_queue_release,
+	.queue_attr_get = cnxk_sso_queue_attribute_get,
+	.queue_attr_set = cnxk_sso_queue_attribute_set,
+
 	.port_def_conf = cnxk_sso_port_def_conf,
 	.port_setup = cn10k_sso_port_setup,
 	.port_release = cn10k_sso_port_release,
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 4bba477dd1..7cb59bbbfa 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -1079,9 +1079,13 @@ cn9k_crypto_adapter_qp_del(const struct rte_eventdev *event_dev,
 static struct eventdev_ops cn9k_sso_dev_ops = {
 	.dev_infos_get = cn9k_sso_info_get,
 	.dev_configure = cn9k_sso_dev_configure,
+
 	.queue_def_conf = cnxk_sso_queue_def_conf,
 	.queue_setup = cnxk_sso_queue_setup,
 	.queue_release = cnxk_sso_queue_release,
+	.queue_attr_get = cnxk_sso_queue_attribute_get,
+	.queue_attr_set = cnxk_sso_queue_attribute_set,
+
 	.port_def_conf = cnxk_sso_port_def_conf,
 	.port_setup = cn9k_sso_port_setup,
 	.port_release = cn9k_sso_port_release,
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index be021d86c9..73f1029779 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -120,7 +120,8 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
 			  RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
 			  RTE_EVENT_DEV_CAP_NONSEQ_MODE |
 			  RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
-			  RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
+			  RTE_EVENT_DEV_CAP_MAINTENANCE_FREE |
+			  RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR;
 }
 
 int
@@ -300,11 +301,27 @@ cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
 		     const struct rte_event_queue_conf *queue_conf)
 {
 	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
-
-	plt_sso_dbg("Queue=%d prio=%d", queue_id, queue_conf->priority);
-	/* Normalize <0-255> to <0-7> */
-	return roc_sso_hwgrp_set_priority(&dev->sso, queue_id, 0xFF, 0xFF,
-					  queue_conf->priority / 32);
+	uint8_t priority, weight, affinity;
+
+	/* Default weight and affinity */
+	dev->mlt_prio[queue_id].weight = RTE_EVENT_QUEUE_WEIGHT_HIGHEST;
+	dev->mlt_prio[queue_id].affinity = RTE_EVENT_QUEUE_AFFINITY_HIGHEST;
+
+	priority = CNXK_QOS_NORMALIZE(queue_conf->priority,
+				      RTE_EVENT_DEV_PRIORITY_LOWEST,
+				      CNXK_SSO_PRIORITY_CNT);
+	weight = CNXK_QOS_NORMALIZE(dev->mlt_prio[queue_id].weight,
+				    RTE_EVENT_QUEUE_WEIGHT_HIGHEST,
+				    CNXK_SSO_WEIGHT_CNT);
+	affinity = CNXK_QOS_NORMALIZE(dev->mlt_prio[queue_id].affinity,
+				      RTE_EVENT_QUEUE_AFFINITY_HIGHEST,
+				      CNXK_SSO_AFFINITY_CNT);
+
+	plt_sso_dbg("Queue=%u prio=%u weight=%u affinity=%u", queue_id,
+		    priority, weight, affinity);
+
+	return roc_sso_hwgrp_set_priority(&dev->sso, queue_id, weight, affinity,
+					  priority);
 }
 
 void
@@ -314,6 +331,58 @@ cnxk_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id)
 	RTE_SET_USED(queue_id);
 }
 
+int
+cnxk_sso_queue_attribute_get(struct rte_eventdev *event_dev, uint8_t queue_id,
+			     uint32_t attr_id, uint32_t *attr_value)
+{
+	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+	*attr_value = attr_id == RTE_EVENT_QUEUE_ATTR_WEIGHT ?
+			      dev->mlt_prio[queue_id].weight :
+			      dev->mlt_prio[queue_id].affinity;
+
+	return 0;
+}
+
+int
+cnxk_sso_queue_attribute_set(struct rte_eventdev *event_dev, uint8_t queue_id,
+			     uint32_t attr_id, uint32_t attr_value)
+{
+	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+	uint8_t priority, weight, affinity;
+	struct rte_event_queue_conf *conf;
+
+	conf = &event_dev->data->queues_cfg[queue_id];
+
+	switch (attr_id) {
+	case RTE_EVENT_QUEUE_ATTR_PRIORITY:
+		conf->priority = attr_value;
+		break;
+	case RTE_EVENT_QUEUE_ATTR_WEIGHT:
+		dev->mlt_prio[queue_id].weight = attr_value;
+		break;
+	case RTE_EVENT_QUEUE_ATTR_AFFINITY:
+		dev->mlt_prio[queue_id].affinity = attr_value;
+		break;
+	default:
+		plt_sso_dbg("Ignored setting attribute id %u", attr_id);
+		return 0;
+	}
+
+	priority = CNXK_QOS_NORMALIZE(conf->priority,
+				      RTE_EVENT_DEV_PRIORITY_LOWEST,
+				      CNXK_SSO_PRIORITY_CNT);
+	weight = CNXK_QOS_NORMALIZE(dev->mlt_prio[queue_id].weight,
+				    RTE_EVENT_QUEUE_WEIGHT_HIGHEST,
+				    CNXK_SSO_WEIGHT_CNT);
+	affinity = CNXK_QOS_NORMALIZE(dev->mlt_prio[queue_id].affinity,
+				      RTE_EVENT_QUEUE_AFFINITY_HIGHEST,
+				      CNXK_SSO_AFFINITY_CNT);
+
+	return roc_sso_hwgrp_set_priority(&dev->sso, queue_id, weight, affinity,
+					  priority);
+}
+
 void
 cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
 		       struct rte_event_port_conf *port_conf)
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 5564746e6d..8037cbbb3b 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -38,6 +38,9 @@
 #define CNXK_SSO_XAQ_CACHE_CNT (0x7)
 #define CNXK_SSO_XAQ_SLACK     (8)
 #define CNXK_SSO_WQE_SG_PTR    (9)
+#define CNXK_SSO_PRIORITY_CNT  (8)
+#define CNXK_SSO_WEIGHT_CNT    (64)
+#define CNXK_SSO_AFFINITY_CNT  (16)
 
 #define CNXK_TT_FROM_TAG(x)   (((x) >> 32) & SSO_TT_EMPTY)
 #define CNXK_TT_FROM_EVENT(x) (((x) >> 38) & SSO_TT_EMPTY)
@@ -54,6 +57,7 @@
 #define CN10K_GW_MODE_PREF     1
 #define CN10K_GW_MODE_PREF_WFE 2
 
+#define CNXK_QOS_NORMALIZE(val, max, cnt) (val / ((max + cnt - 1) / cnt))
 #define CNXK_VALID_DEV_OR_ERR_RET(dev, drv_name)                               \
 	do {                                                                   \
 		if (strncmp(dev->driver->name, drv_name, strlen(drv_name)))    \
@@ -79,6 +83,11 @@ struct cnxk_sso_qos {
 	uint16_t iaq_prcnt;
 };
 
+struct cnxk_sso_mlt_prio {
+	uint8_t weight;
+	uint8_t affinity;
+};
+
 struct cnxk_sso_evdev {
 	struct roc_sso sso;
 	uint8_t max_event_queues;
@@ -108,6 +117,7 @@ struct cnxk_sso_evdev {
 	uint64_t *timer_adptr_sz;
 	uint16_t vec_pool_cnt;
 	uint64_t *vec_pools;
+	struct cnxk_sso_mlt_prio mlt_prio[RTE_EVENT_MAX_QUEUES_PER_DEV];
 	/* Dev args */
 	uint32_t xae_cnt;
 	uint8_t qos_queue_cnt;
@@ -234,6 +244,12 @@ void cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
 int cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
 			 const struct rte_event_queue_conf *queue_conf);
 void cnxk_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id);
+int cnxk_sso_queue_attribute_get(struct rte_eventdev *event_dev,
+				 uint8_t queue_id, uint32_t attr_id,
+				 uint32_t *attr_value);
+int cnxk_sso_queue_attribute_set(struct rte_eventdev *event_dev,
+				 uint8_t queue_id, uint32_t attr_id,
+				 uint32_t attr_value);
void cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
			    struct rte_event_port_conf *port_conf);
int cnxk_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
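The CNXK_QOS_NORMALIZE() arithmetic above maps the generic 0-255 attribute
ranges onto the SSO hardware level counts by integer division. A few
compile-time spot checks of that arithmetic (the macro is restated here with
extra parentheses around its arguments, which the patch's version omits):

#define QOS_NORMALIZE(val, max, cnt) ((val) / (((max) + (cnt) - 1) / (cnt)))

/* priority: 0..255 -> 8 levels, divisor (255 + 8 - 1) / 8 = 32 */
_Static_assert(QOS_NORMALIZE(255, 255, 8) == 7, "priority 255 -> level 7");
/* weight: 0..255 -> 64 levels, divisor (255 + 64 - 1) / 64 = 4 */
_Static_assert(QOS_NORMALIZE(255, 255, 64) == 63, "weight 255 -> 63");
/* affinity: 0..255 -> 16 levels, divisor (255 + 16 - 1) / 16 = 16 */
_Static_assert(QOS_NORMALIZE(255, 255, 16) == 15, "affinity 255 -> 15");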
From patchwork Tue Mar 29 13:11:05 2022
X-Patchwork-Submitter: Shijith Thotton
X-Patchwork-Id: 109016
X-Patchwork-Delegate: jerinj@marvell.com
From: Shijith Thotton
Cc: Pavan Nikhilesh, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Subject: [PATCH 6/6] common/cnxk: use lock when accessing mbox of SSO
Date: Tue, 29 Mar 2022 18:41:05 +0530
Message-ID: <96caf9dfb6d53089a61fd45a8fb628f0c5e98b4f.1648549553.git.sthotton@marvell.com>

From: Pavan Nikhilesh

Since the mbox is now accessed from multiple threads, use a lock to
synchronize access.
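The shape of the change is the same in every converted function; a condensed
sketch of the idiom, using the types and calls that appear in the diff below
(the function name is hypothetical):

static int
sso_mbox_op_locked(struct sso *sso)
{
	struct dev *dev = &sso->dev;
	struct sso_grp_priority *req;
	int rc = -ENOSPC;

	plt_spinlock_lock(&sso->mbox_lock);	 /* one mbox user at a time */
	req = mbox_alloc_msg_sso_grp_set_priority(dev->mbox);
	if (req == NULL)
		goto fail;			 /* lock still held here */
	rc = mbox_process(dev->mbox) ? -EIO : 0; /* mbox errors map to -EIO */
fail:
	plt_spinlock_unlock(&sso->mbox_lock);	 /* unlock on every path */
	return rc;
}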
Signed-off-by: Pavan Nikhilesh
Signed-off-by: Shijith Thotton
---
 drivers/common/cnxk/roc_sso.c      | 174 +++++++++++++++++++++--------
 drivers/common/cnxk/roc_sso_priv.h |   1 +
 drivers/common/cnxk/roc_tim.c      | 134 ++++++++++++++--------
 3 files changed, 215 insertions(+), 94 deletions(-)

diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c
index f8a0a96533..358d37a9f2 100644
--- a/drivers/common/cnxk/roc_sso.c
+++ b/drivers/common/cnxk/roc_sso.c
@@ -36,8 +36,8 @@ sso_lf_alloc(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf,
 	}
 
 	rc = mbox_process_msg(dev->mbox, rsp);
-	if (rc < 0)
-		return rc;
+	if (rc)
+		return -EIO;
 
 	return 0;
 }
@@ -69,8 +69,8 @@ sso_lf_free(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf)
 	}
 
 	rc = mbox_process(dev->mbox);
-	if (rc < 0)
-		return rc;
+	if (rc)
+		return -EIO;
 
 	return 0;
 }
@@ -98,7 +98,7 @@ sso_rsrc_attach(struct roc_sso *roc_sso, enum sso_lf_type lf_type,
 	}
 
 	req->modify = true;
-	if (mbox_process(dev->mbox) < 0)
+	if (mbox_process(dev->mbox))
 		return -EIO;
 
 	return 0;
@@ -126,7 +126,7 @@ sso_rsrc_detach(struct roc_sso *roc_sso, enum sso_lf_type lf_type)
 	}
 
 	req->partial = true;
-	if (mbox_process(dev->mbox) < 0)
+	if (mbox_process(dev->mbox))
 		return -EIO;
 
 	return 0;
@@ -141,9 +141,9 @@ sso_rsrc_get(struct roc_sso *roc_sso)
 
 	mbox_alloc_msg_free_rsrc_cnt(dev->mbox);
 	rc = mbox_process_msg(dev->mbox, (void **)&rsrc_cnt);
-	if (rc < 0) {
+	if (rc) {
 		plt_err("Failed to get free resource count\n");
-		return rc;
+		return -EIO;
 	}
 
 	roc_sso->max_hwgrp = rsrc_cnt->sso;
@@ -197,8 +197,8 @@ sso_msix_fill(struct roc_sso *roc_sso, uint16_t nb_hws, uint16_t nb_hwgrp)
 
 	mbox_alloc_msg_msix_offset(dev->mbox);
 	rc = mbox_process_msg(dev->mbox, (void **)&rsp);
-	if (rc < 0)
-		return rc;
+	if (rc)
+		return -EIO;
 
 	for (i = 0; i < nb_hws; i++)
 		sso->hws_msix_offset[i] = rsp->ssow_msixoff[i];
@@ -285,53 +285,71 @@ int
 roc_sso_hws_stats_get(struct roc_sso *roc_sso, uint8_t hws,
 		      struct roc_sso_hws_stats *stats)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
 	struct sso_hws_stats *req_rsp;
+	struct dev *dev = &sso->dev;
 	int rc;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req_rsp = (struct sso_hws_stats *)mbox_alloc_msg_sso_hws_get_stats(
 		dev->mbox);
 	if (req_rsp == NULL) {
 		rc = mbox_process(dev->mbox);
-		if (rc < 0)
-			return rc;
+		if (rc) {
+			rc = -EIO;
+			goto fail;
+		}
 		req_rsp = (struct sso_hws_stats *)
			mbox_alloc_msg_sso_hws_get_stats(dev->mbox);
-		if (req_rsp == NULL)
-			return -ENOSPC;
+		if (req_rsp == NULL) {
+			rc = -ENOSPC;
+			goto fail;
+		}
 	}
 
 	req_rsp->hws = hws;
 	rc = mbox_process_msg(dev->mbox, (void **)&req_rsp);
-	if (rc)
-		return rc;
+	if (rc) {
+		rc = -EIO;
+		goto fail;
+	}
 
 	stats->arbitration = req_rsp->arbitration;
-	return 0;
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
 roc_sso_hwgrp_stats_get(struct roc_sso *roc_sso, uint8_t hwgrp,
 			struct roc_sso_hwgrp_stats *stats)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
 	struct sso_grp_stats *req_rsp;
+	struct dev *dev = &sso->dev;
 	int rc;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req_rsp = (struct sso_grp_stats *)mbox_alloc_msg_sso_grp_get_stats(
 		dev->mbox);
 	if (req_rsp == NULL) {
 		rc = mbox_process(dev->mbox);
-		if (rc < 0)
-			return rc;
+		if (rc) {
+			rc = -EIO;
+			goto fail;
+		}
 		req_rsp = (struct sso_grp_stats *)
			mbox_alloc_msg_sso_grp_get_stats(dev->mbox);
-		if (req_rsp == NULL)
-			return -ENOSPC;
+		if (req_rsp == NULL) {
+			rc = -ENOSPC;
+			goto fail;
+		}
 	}
 
 	req_rsp->grp = hwgrp;
 	rc = mbox_process_msg(dev->mbox, (void **)&req_rsp);
-	if (rc)
-		return rc;
+	if (rc) {
+		rc = -EIO;
+		goto fail;
+	}
 
 	stats->aw_status = req_rsp->aw_status;
 	stats->dq_pc = req_rsp->dq_pc;
@@ -341,7 +359,10 @@ roc_sso_hwgrp_stats_get(struct roc_sso *roc_sso, uint8_t hwgrp,
 	stats->ts_pc = req_rsp->ts_pc;
 	stats->wa_pc = req_rsp->wa_pc;
 	stats->ws_pc = req_rsp->ws_pc;
-	return 0;
+
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -358,10 +379,12 @@ int
 roc_sso_hwgrp_qos_config(struct roc_sso *roc_sso, struct roc_sso_hwgrp_qos *qos,
 			 uint8_t nb_qos, uint32_t nb_xaq)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
+	struct dev *dev = &sso->dev;
 	struct sso_grp_qos_cfg *req;
 	int i, rc;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	for (i = 0; i < nb_qos; i++) {
 		uint8_t xaq_prcnt = qos[i].xaq_prcnt;
 		uint8_t iaq_prcnt = qos[i].iaq_prcnt;
@@ -370,11 +393,16 @@ roc_sso_hwgrp_qos_config(struct roc_sso *roc_sso, struct roc_sso_hwgrp_qos *qos,
 		req = mbox_alloc_msg_sso_grp_qos_config(dev->mbox);
 		if (req == NULL) {
 			rc = mbox_process(dev->mbox);
-			if (rc < 0)
-				return rc;
+			if (rc) {
+				rc = -EIO;
+				goto fail;
+			}
+
 			req = mbox_alloc_msg_sso_grp_qos_config(dev->mbox);
-			if (req == NULL)
-				return -ENOSPC;
+			if (req == NULL) {
+				rc = -ENOSPC;
+				goto fail;
+			}
 		}
 		req->grp = qos[i].hwgrp;
 		req->xaq_limit = (nb_xaq * (xaq_prcnt ? xaq_prcnt : 100)) / 100;
@@ -386,7 +414,12 @@ roc_sso_hwgrp_qos_config(struct roc_sso *roc_sso, struct roc_sso_hwgrp_qos *qos,
 			    100;
 	}
 
-	return mbox_process(dev->mbox);
+	rc = mbox_process(dev->mbox);
+	if (rc)
+		rc = -EIO;
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -482,11 +515,16 @@ sso_hwgrp_init_xaq_aura(struct dev *dev, struct roc_sso_xaq_data *xaq,
 int
 roc_sso_hwgrp_init_xaq_aura(struct roc_sso *roc_sso, uint32_t nb_xae)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
+	struct dev *dev = &sso->dev;
+	int rc;
 
-	return sso_hwgrp_init_xaq_aura(dev, &roc_sso->xaq, nb_xae,
-				       roc_sso->xae_waes, roc_sso->xaq_buf_size,
-				       roc_sso->nb_hwgrp);
+	plt_spinlock_lock(&sso->mbox_lock);
+	rc = sso_hwgrp_init_xaq_aura(dev, &roc_sso->xaq, nb_xae,
+				     roc_sso->xae_waes, roc_sso->xaq_buf_size,
+				     roc_sso->nb_hwgrp);
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -515,9 +553,14 @@ sso_hwgrp_free_xaq_aura(struct dev *dev, struct roc_sso_xaq_data *xaq,
 int
 roc_sso_hwgrp_free_xaq_aura(struct roc_sso *roc_sso, uint16_t nb_hwgrp)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
+	struct dev *dev = &sso->dev;
+	int rc;
 
-	return sso_hwgrp_free_xaq_aura(dev, &roc_sso->xaq, nb_hwgrp);
+	plt_spinlock_lock(&sso->mbox_lock);
+	rc = sso_hwgrp_free_xaq_aura(dev, &roc_sso->xaq, nb_hwgrp);
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -533,16 +576,24 @@ sso_hwgrp_alloc_xaq(struct dev *dev, uint32_t npa_aura_id, uint16_t hwgrps)
 	req->npa_aura_id = npa_aura_id;
 	req->hwgrps = hwgrps;
 
-	return mbox_process(dev->mbox);
+	if (mbox_process(dev->mbox))
+		return -EIO;
+
+	return 0;
 }
 
 int
 roc_sso_hwgrp_alloc_xaq(struct roc_sso *roc_sso, uint32_t npa_aura_id,
 			uint16_t hwgrps)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
+	struct dev *dev = &sso->dev;
+	int rc;
 
-	return sso_hwgrp_alloc_xaq(dev, npa_aura_id, hwgrps);
+	plt_spinlock_lock(&sso->mbox_lock);
+	rc = sso_hwgrp_alloc_xaq(dev, npa_aura_id, hwgrps);
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -555,40 +606,56 @@ sso_hwgrp_release_xaq(struct dev *dev, uint16_t hwgrps)
 		return -EINVAL;
 
 	req->hwgrps = hwgrps;
-	return mbox_process(dev->mbox);
+	if (mbox_process(dev->mbox))
+		return -EIO;
+
+	return 0;
 }
 
 int
 roc_sso_hwgrp_release_xaq(struct roc_sso *roc_sso, uint16_t hwgrps)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
+	struct dev *dev = &sso->dev;
+	int rc;
 
-	return sso_hwgrp_release_xaq(dev, hwgrps);
+	plt_spinlock_lock(&sso->mbox_lock);
+	rc = sso_hwgrp_release_xaq(dev, hwgrps);
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
 roc_sso_hwgrp_set_priority(struct roc_sso *roc_sso, uint16_t hwgrp,
 			   uint8_t weight, uint8_t affinity, uint8_t priority)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
+	struct dev *dev = &sso->dev;
 	struct sso_grp_priority *req;
 	int rc = -ENOSPC;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req = mbox_alloc_msg_sso_grp_set_priority(dev->mbox);
 	if (req == NULL)
-		return rc;
+		goto fail;
 	req->grp = hwgrp;
 	req->weight = weight;
 	req->affinity = affinity;
 	req->priority = priority;
 
 	rc = mbox_process(dev->mbox);
-	if (rc < 0)
-		return rc;
+	if (rc) {
+		rc = -EIO;
+		goto fail;
+	}
+	plt_spinlock_unlock(&sso->mbox_lock);
 
 	plt_sso_dbg("HWGRP %d weight %d affinity %d priority %d", hwgrp, weight,
 		    affinity, priority);
 
 	return 0;
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -603,10 +670,11 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp)
 	if (roc_sso->max_hws < nb_hws)
 		return -ENOENT;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	rc = sso_rsrc_attach(roc_sso, SSO_LF_TYPE_HWS, nb_hws);
 	if (rc < 0) {
 		plt_err("Unable to attach SSO HWS LFs");
-		return rc;
+		goto fail;
 	}
 
 	rc = sso_rsrc_attach(roc_sso, SSO_LF_TYPE_HWGRP, nb_hwgrp);
@@ -645,6 +713,7 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp)
 		goto sso_msix_fail;
 	}
 
+	plt_spinlock_unlock(&sso->mbox_lock);
 	roc_sso->nb_hwgrp = nb_hwgrp;
 	roc_sso->nb_hws = nb_hws;
 
@@ -657,6 +726,8 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp)
 	sso_rsrc_detach(roc_sso, SSO_LF_TYPE_HWGRP);
 hwgrp_atch_fail:
 	sso_rsrc_detach(roc_sso, SSO_LF_TYPE_HWS);
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
 	return rc;
 }
 
@@ -678,6 +749,7 @@ roc_sso_rsrc_fini(struct roc_sso *roc_sso)
 
 	roc_sso->nb_hwgrp = 0;
 	roc_sso->nb_hws = 0;
+	plt_spinlock_unlock(&sso->mbox_lock);
 }
 
 int
@@ -696,6 +768,7 @@ roc_sso_dev_init(struct roc_sso *roc_sso)
 	sso = roc_sso_to_sso_priv(roc_sso);
 	memset(sso, 0, sizeof(*sso));
 	pci_dev = roc_sso->pci_dev;
+	plt_spinlock_init(&sso->mbox_lock);
 
 	rc = dev_init(&sso->dev, pci_dev);
 	if (rc < 0) {
@@ -703,6 +776,7 @@ roc_sso_dev_init(struct roc_sso *roc_sso)
 		goto fail;
 	}
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	rc = sso_rsrc_get(roc_sso);
 	if (rc < 0) {
 		plt_err("Failed to get SSO resources");
@@ -739,6 +813,7 @@ roc_sso_dev_init(struct roc_sso *roc_sso)
 	sso->pci_dev = pci_dev;
 	sso->dev.drv_inited = true;
 	roc_sso->lmt_base = sso->dev.lmt_base;
+	plt_spinlock_unlock(&sso->mbox_lock);
 
 	return 0;
 link_mem_free:
@@ -746,6 +821,7 @@ roc_sso_dev_init(struct roc_sso *roc_sso)
 rsrc_fail:
 	rc |= dev_fini(&sso->dev, pci_dev);
 fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
 	return rc;
 }
 
diff --git a/drivers/common/cnxk/roc_sso_priv.h b/drivers/common/cnxk/roc_sso_priv.h
index 09729d4f62..674e4e0a39 100644
--- a/drivers/common/cnxk/roc_sso_priv.h
+++ b/drivers/common/cnxk/roc_sso_priv.h
@@ -22,6 +22,7 @@ struct sso {
 	/* SSO link mapping. */
 	struct plt_bitmap **link_map;
 	void *link_map_mem;
+	plt_spinlock_t mbox_lock;
 } __plt_cache_aligned;
 
 enum sso_err_status {
diff --git a/drivers/common/cnxk/roc_tim.c b/drivers/common/cnxk/roc_tim.c
index cefd9bc89d..0f9209937b 100644
--- a/drivers/common/cnxk/roc_tim.c
+++ b/drivers/common/cnxk/roc_tim.c
@@ -8,15 +8,16 @@
 static int
 tim_fill_msix(struct roc_tim *roc_tim, uint16_t nb_ring)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso);
 	struct tim *tim = roc_tim_to_tim_priv(roc_tim);
+	struct dev *dev = &sso->dev;
 	struct msix_offset_rsp *rsp;
 	int i, rc;
 
 	mbox_alloc_msg_msix_offset(dev->mbox);
 	rc = mbox_process_msg(dev->mbox, (void **)&rsp);
-	if (rc < 0)
-		return rc;
+	if (rc)
+		return -EIO;
 
 	for (i = 0; i < nb_ring; i++)
 		tim->tim_msix_offsets[i] = rsp->timlf_msixoff[i];
@@ -88,20 +89,23 @@ int
 roc_tim_lf_enable(struct roc_tim *roc_tim, uint8_t ring_id,
 		  uint64_t *start_tsc, uint32_t *cur_bkt)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso);
+	struct dev *dev = &sso->dev;
 	struct tim_enable_rsp *rsp;
 	struct tim_ring_req *req;
 	int rc = -ENOSPC;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req = mbox_alloc_msg_tim_enable_ring(dev->mbox);
 	if (req == NULL)
-		return rc;
+		goto fail;
 	req->ring = ring_id;
 
 	rc = mbox_process_msg(dev->mbox, (void **)&rsp);
-	if (rc < 0) {
+	if (rc) {
 		tim_err_desc(rc);
-		return rc;
+		rc = -EIO;
+		goto fail;
 	}
 
 	if (cur_bkt)
@@ -109,28 +113,34 @@ roc_tim_lf_enable(struct roc_tim *roc_tim, uint8_t ring_id,
 	if (start_tsc)
 		*start_tsc = rsp->timestarted;
 
-	return 0;
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
 roc_tim_lf_disable(struct roc_tim *roc_tim, uint8_t ring_id)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso);
+	struct dev *dev = &sso->dev;
 	struct tim_ring_req *req;
 	int rc = -ENOSPC;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req = mbox_alloc_msg_tim_disable_ring(dev->mbox);
 	if (req == NULL)
-		return rc;
+		goto fail;
 	req->ring = ring_id;
 
 	rc = mbox_process(dev->mbox);
-	if (rc < 0) {
+	if (rc) {
 		tim_err_desc(rc);
-		return rc;
+		rc = -EIO;
 	}
 
-	return 0;
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 uintptr_t
@@ -147,13 +157,15 @@ roc_tim_lf_config(struct roc_tim *roc_tim, uint8_t ring_id, uint8_t ena_dfb,
 		  uint32_t bucket_sz, uint32_t chunk_sz, uint32_t interval,
 		  uint64_t intervalns, uint64_t clockfreq)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso);
+	struct dev *dev = &sso->dev;
 	struct tim_config_req *req;
 	int rc = -ENOSPC;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req = mbox_alloc_msg_tim_config_ring(dev->mbox);
 	if (req == NULL)
-		return rc;
+		goto fail;
 	req->ring = ring_id;
 	req->bigendian = false;
 	req->bucketsize = bucket_sz;
@@ -167,12 +179,14 @@ roc_tim_lf_config(struct roc_tim *roc_tim, uint8_t ring_id, uint8_t ena_dfb,
 	req->gpioedge = TIM_GPIO_LTOH_TRANS;
 
 	rc = mbox_process(dev->mbox);
-	if (rc < 0) {
+	if (rc) {
 		tim_err_desc(rc);
-		return rc;
+		rc = -EIO;
 	}
 
-	return 0;
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -180,27 +194,32 @@ roc_tim_lf_interval(struct roc_tim *roc_tim, enum roc_tim_clk_src clk_src,
 		    uint64_t clockfreq, uint64_t *intervalns,
 		    uint64_t *interval)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso);
+	struct dev *dev = &sso->dev;
 	struct tim_intvl_req *req;
 	struct tim_intvl_rsp *rsp;
 	int rc = -ENOSPC;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req = mbox_alloc_msg_tim_get_min_intvl(dev->mbox);
 	if (req == NULL)
-		return rc;
+		goto fail;
 
 	req->clockfreq = clockfreq;
 	req->clocksource = clk_src;
 	rc = mbox_process_msg(dev->mbox, (void **)&rsp);
-	if (rc < 0) {
+	if (rc) {
 		tim_err_desc(rc);
-		return rc;
+		rc = -EIO;
+		goto fail;
 	}
 
 	*intervalns = rsp->intvl_ns;
 	*interval = rsp->intvl_cyc;
 
-	return 0;
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -214,17 +233,19 @@ roc_tim_lf_alloc(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *clk)
 	struct dev *dev = &sso->dev;
 	int rc = -ENOSPC;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req = mbox_alloc_msg_tim_lf_alloc(dev->mbox);
 	if (req == NULL)
-		return rc;
+		goto fail;
 	req->npa_pf_func = idev_npa_pffunc_get();
 	req->sso_pf_func = idev_sso_pffunc_get();
 	req->ring = ring_id;
 
 	rc = mbox_process_msg(dev->mbox, (void **)&rsp);
-	if (rc < 0) {
+	if (rc) {
 		tim_err_desc(rc);
-		return rc;
+		rc = -EIO;
+		goto fail;
 	}
 
 	if (clk)
@@ -235,12 +256,18 @@ roc_tim_lf_alloc(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *clk)
 	if (rc < 0) {
 		plt_tim_dbg("Failed to register Ring[%d] IRQ", ring_id);
 		free_req = mbox_alloc_msg_tim_lf_free(dev->mbox);
-		if (free_req == NULL)
-			return -ENOSPC;
+		if (free_req == NULL) {
+			rc = -ENOSPC;
+			goto fail;
+		}
 		free_req->ring = ring_id;
-		mbox_process(dev->mbox);
+		rc = mbox_process(dev->mbox);
+		if (rc)
+			rc = -EIO;
 	}
 
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
 	return rc;
 }
 
@@ -256,17 +283,20 @@ roc_tim_lf_free(struct roc_tim *roc_tim, uint8_t ring_id)
 	tim_unregister_irq_priv(roc_tim, sso->pci_dev->intr_handle, ring_id,
 				tim->tim_msix_offsets[ring_id]);
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req = mbox_alloc_msg_tim_lf_free(dev->mbox);
 	if (req == NULL)
-		return rc;
+		goto fail;
 	req->ring = ring_id;
 
 	rc = mbox_process(dev->mbox);
 	if (rc < 0) {
 		tim_err_desc(rc);
-		return rc;
+		rc = -EIO;
 	}
 
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
 	return 0;
 }
 
@@ -276,40 +306,48 @@ roc_tim_init(struct roc_tim *roc_tim)
 	struct rsrc_attach_req *attach_req;
 	struct rsrc_detach_req *detach_req;
 	struct free_rsrcs_rsp *free_rsrc;
-	struct dev *dev;
+	struct sso *sso;
 	uint16_t nb_lfs;
+	struct dev *dev;
 	int rc;
 
 	if (roc_tim == NULL || roc_tim->roc_sso == NULL)
 		return TIM_ERR_PARAM;
 
+	sso = roc_sso_to_sso_priv(roc_tim->roc_sso);
+	dev = &sso->dev;
 	PLT_STATIC_ASSERT(sizeof(struct tim) <= TIM_MEM_SZ);
-	dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev;
 	nb_lfs = roc_tim->nb_lfs;
+	plt_spinlock_lock(&sso->mbox_lock);
 	mbox_alloc_msg_free_rsrc_cnt(dev->mbox);
 	rc = mbox_process_msg(dev->mbox, (void *)&free_rsrc);
-	if (rc < 0) {
+	if (rc) {
 		plt_err("Unable to get free rsrc count.");
-		return 0;
+		nb_lfs = 0;
+		goto fail;
 	}
 
 	if (nb_lfs && (free_rsrc->tim < nb_lfs)) {
 		plt_tim_dbg("Requested LFs : %d Available LFs : %d", nb_lfs,
 			    free_rsrc->tim);
-		return 0;
+		nb_lfs = 0;
+		goto fail;
 	}
 
 	attach_req = mbox_alloc_msg_attach_resources(dev->mbox);
-	if (attach_req == NULL)
-		return -ENOSPC;
+	if (attach_req == NULL) {
+		nb_lfs = 0;
+		goto fail;
+	}
 	attach_req->modify = true;
 	attach_req->timlfs = nb_lfs ? nb_lfs : free_rsrc->tim;
 	nb_lfs = attach_req->timlfs;
 
 	rc = mbox_process(dev->mbox);
-	if (rc < 0) {
+	if (rc) {
 		plt_err("Unable to attach TIM LFs.");
-		return 0;
+		nb_lfs = 0;
+		goto fail;
 	}
 
 	rc = tim_fill_msix(roc_tim, nb_lfs);
@@ -317,28 +355,34 @@ roc_tim_init(struct roc_tim *roc_tim)
 		plt_err("Unable to get TIM MSIX vectors");
 
 		detach_req = mbox_alloc_msg_detach_resources(dev->mbox);
-		if (detach_req == NULL)
-			return -ENOSPC;
+		if (detach_req == NULL) {
+			nb_lfs = 0;
+			goto fail;
+		}
 		detach_req->partial = true;
 		detach_req->timlfs = true;
 		mbox_process(dev->mbox);
-
-		return 0;
+		nb_lfs = 0;
 	}
 
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
 	return nb_lfs;
 }
 
 void
 roc_tim_fini(struct roc_tim *roc_tim)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso);
 	struct rsrc_detach_req *detach_req;
+	struct dev *dev = &sso->dev;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	detach_req = mbox_alloc_msg_detach_resources(dev->mbox);
 	PLT_ASSERT(detach_req);
 	detach_req->partial = true;
 	detach_req->timlfs = true;
 	mbox_process(dev->mbox);
+	plt_spinlock_unlock(&sso->mbox_lock);
 }