From patchwork Wed Aug 10 07:36:52 2022
X-Patchwork-Submitter: Shijith Thotton
X-Patchwork-Id: 114801
X-Patchwork-Delegate: jerinj@marvell.com
From: Shijith Thotton
To:
CC: Shijith Thotton, Ray Kinsella, Pavan Nikhilesh
Subject: [PATCH] eventdev: add weight and affinity attributes to queue conf
Date: Wed, 10 Aug 2022 13:06:52 +0530
Message-ID:
X-Mailer: git-send-email 2.25.1
List-Id: DPDK patches and discussions

Added new fields to represent event queue weight and affinity in the
rte_event_queue_conf structure. The internal op to get a queue attribute is
removed as it is no longer needed. Updated the driver to use the new fields.
Signed-off-by: Shijith Thotton
Acked-by: Jerin Jacob
---
 doc/guides/rel_notes/deprecation.rst   |  3 --
 doc/guides/rel_notes/release_22_11.rst |  3 ++
 drivers/event/cnxk/cn10k_eventdev.c    |  1 -
 drivers/event/cnxk/cn9k_eventdev.c     |  1 -
 drivers/event/cnxk/cnxk_eventdev.c     | 42 ++++++--------------
 drivers/event/cnxk/cnxk_eventdev.h     |  9 ------
 lib/eventdev/eventdev_pmd.h            | 22 --------------
 lib/eventdev/rte_eventdev.c            | 10 +++---
 lib/eventdev/rte_eventdev.h            | 16 ++++++++++
 9 files changed, 33 insertions(+), 74 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index e7583cae4c..13e7c6370e 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -206,9 +206,6 @@ Deprecation Notices
   ``rte_event_vector::elem_offset`` gives the number of valid elements left
   to process from the ``rte_event_vector::elem_offset``.
 
-* eventdev: New fields to represent event queue weight and affinity
-  will be added to ``rte_event_queue_conf`` structure in DPDK 22.11.
-
 * metrics: The function ``rte_metrics_init`` will have a non-void return
   in order to notify errors instead of calling ``rte_exit``.
 
diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
index 8c021cf050..8ffd71e650 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b/doc/guides/rel_notes/release_22_11.rst
@@ -88,6 +88,9 @@ API Changes
 ABI Changes
 -----------
 
+* eventdev: Added ``weight`` and ``affinity`` fields to ``rte_event_queue_conf``
+  structure.
+
 .. This section should contain ABI changes. Sample format:
 
    * sample: Add a short 1-2 sentence description of the ABI change
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 5a0cab40a9..aa8ae394bc 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -922,7 +922,6 @@ static struct eventdev_ops cn10k_sso_dev_ops = {
 	.queue_def_conf = cnxk_sso_queue_def_conf,
 	.queue_setup = cnxk_sso_queue_setup,
 	.queue_release = cnxk_sso_queue_release,
-	.queue_attr_get = cnxk_sso_queue_attribute_get,
 	.queue_attr_set = cnxk_sso_queue_attribute_set,
 
 	.port_def_conf = cnxk_sso_port_def_conf,
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 2e27030049..58c72a580a 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -1153,7 +1153,6 @@ static struct eventdev_ops cn9k_sso_dev_ops = {
 	.queue_def_conf = cnxk_sso_queue_def_conf,
 	.queue_setup = cnxk_sso_queue_setup,
 	.queue_release = cnxk_sso_queue_release,
-	.queue_attr_get = cnxk_sso_queue_attribute_get,
 	.queue_attr_set = cnxk_sso_queue_attribute_set,
 
 	.port_def_conf = cnxk_sso_port_def_conf,
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 97dcf7b66e..45c53ffb4e 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -321,6 +321,8 @@ cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
 	queue_conf->nb_atomic_order_sequences = (1ULL << 20);
 	queue_conf->event_queue_cfg = RTE_EVENT_QUEUE_CFG_ALL_TYPES;
 	queue_conf->priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+	queue_conf->weight = RTE_EVENT_QUEUE_WEIGHT_LOWEST;
+	queue_conf->affinity = RTE_EVENT_QUEUE_AFFINITY_HIGHEST;
 }
 
 int
@@ -330,18 +332,12 @@ cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
 	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
 	uint8_t priority, weight, affinity;
 
-	/* Default weight and affinity */
-	dev->mlt_prio[queue_id].weight = RTE_EVENT_QUEUE_WEIGHT_LOWEST;
-	dev->mlt_prio[queue_id].affinity = RTE_EVENT_QUEUE_AFFINITY_HIGHEST;
-
 	priority = CNXK_QOS_NORMALIZE(queue_conf->priority, 0,
 				      RTE_EVENT_DEV_PRIORITY_LOWEST,
 				      CNXK_SSO_PRIORITY_CNT);
-	weight = CNXK_QOS_NORMALIZE(
-		dev->mlt_prio[queue_id].weight, CNXK_SSO_WEIGHT_MIN,
-		RTE_EVENT_QUEUE_WEIGHT_HIGHEST, CNXK_SSO_WEIGHT_CNT);
-	affinity = CNXK_QOS_NORMALIZE(dev->mlt_prio[queue_id].affinity, 0,
-				      RTE_EVENT_QUEUE_AFFINITY_HIGHEST,
+	weight = CNXK_QOS_NORMALIZE(queue_conf->weight, CNXK_SSO_WEIGHT_MIN,
+				    RTE_EVENT_QUEUE_WEIGHT_HIGHEST, CNXK_SSO_WEIGHT_CNT);
+	affinity = CNXK_QOS_NORMALIZE(queue_conf->affinity, 0, RTE_EVENT_QUEUE_AFFINITY_HIGHEST,
 				      CNXK_SSO_AFFINITY_CNT);
 
 	plt_sso_dbg("Queue=%u prio=%u weight=%u affinity=%u", queue_id,
@@ -358,22 +354,6 @@ cnxk_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id)
 	RTE_SET_USED(queue_id);
 }
 
-int
-cnxk_sso_queue_attribute_get(struct rte_eventdev *event_dev, uint8_t queue_id,
-			     uint32_t attr_id, uint32_t *attr_value)
-{
-	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
-
-	if (attr_id == RTE_EVENT_QUEUE_ATTR_WEIGHT)
-		*attr_value = dev->mlt_prio[queue_id].weight;
-	else if (attr_id == RTE_EVENT_QUEUE_ATTR_AFFINITY)
-		*attr_value = dev->mlt_prio[queue_id].affinity;
-	else
-		return -EINVAL;
-
-	return 0;
-}
-
 int
 cnxk_sso_queue_attribute_set(struct rte_eventdev *event_dev, uint8_t queue_id,
 			     uint32_t attr_id, uint64_t attr_value)
@@ -389,10 +369,10 @@ cnxk_sso_queue_attribute_set(struct rte_eventdev *event_dev, uint8_t queue_id,
 		conf->priority = attr_value;
 		break;
 	case RTE_EVENT_QUEUE_ATTR_WEIGHT:
-		dev->mlt_prio[queue_id].weight = attr_value;
+		conf->weight = attr_value;
 		break;
 	case RTE_EVENT_QUEUE_ATTR_AFFINITY:
-		dev->mlt_prio[queue_id].affinity = attr_value;
+		conf->affinity = attr_value;
 		break;
 	case RTE_EVENT_QUEUE_ATTR_NB_ATOMIC_FLOWS:
 	case RTE_EVENT_QUEUE_ATTR_NB_ATOMIC_ORDER_SEQUENCES:
@@ -409,11 +389,9 @@ cnxk_sso_queue_attribute_set(struct rte_eventdev *event_dev, uint8_t queue_id,
 	priority = CNXK_QOS_NORMALIZE(conf->priority, 0,
 				      RTE_EVENT_DEV_PRIORITY_LOWEST,
 				      CNXK_SSO_PRIORITY_CNT);
-	weight = CNXK_QOS_NORMALIZE(
-		dev->mlt_prio[queue_id].weight, CNXK_SSO_WEIGHT_MIN,
-		RTE_EVENT_QUEUE_WEIGHT_HIGHEST, CNXK_SSO_WEIGHT_CNT);
-	affinity = CNXK_QOS_NORMALIZE(dev->mlt_prio[queue_id].affinity, 0,
-				      RTE_EVENT_QUEUE_AFFINITY_HIGHEST,
+	weight = CNXK_QOS_NORMALIZE(conf->weight, CNXK_SSO_WEIGHT_MIN,
+				    RTE_EVENT_QUEUE_WEIGHT_HIGHEST, CNXK_SSO_WEIGHT_CNT);
+	affinity = CNXK_QOS_NORMALIZE(conf->affinity, 0, RTE_EVENT_QUEUE_AFFINITY_HIGHEST,
 				      CNXK_SSO_AFFINITY_CNT);
 
 	return roc_sso_hwgrp_set_priority(&dev->sso, queue_id, weight, affinity,
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index bfd0c5627e..d78fb4ea2f 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -88,11 +88,6 @@ struct cnxk_sso_qos {
 	uint16_t iaq_prcnt;
 };
 
-struct cnxk_sso_mlt_prio {
-	uint8_t weight;
-	uint8_t affinity;
-};
-
 struct cnxk_sso_evdev {
 	struct roc_sso sso;
 	uint8_t max_event_queues;
@@ -123,7 +118,6 @@ struct cnxk_sso_evdev {
 	uint64_t *timer_adptr_sz;
 	uint16_t vec_pool_cnt;
 	uint64_t *vec_pools;
-	struct cnxk_sso_mlt_prio mlt_prio[RTE_EVENT_MAX_QUEUES_PER_DEV];
 	/* Dev args */
 	uint32_t xae_cnt;
 	uint8_t qos_queue_cnt;
@@ -250,9 +244,6 @@ void cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
 int cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
			 const struct rte_event_queue_conf *queue_conf);
 void cnxk_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id);
-int cnxk_sso_queue_attribute_get(struct rte_eventdev *event_dev,
-				 uint8_t queue_id, uint32_t attr_id,
-				 uint32_t *attr_value);
 int cnxk_sso_queue_attribute_set(struct rte_eventdev *event_dev,
				 uint8_t queue_id, uint32_t attr_id,
				 uint64_t attr_value);
diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index 69402668d8..8879e43feb 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -341,26 +341,6 @@ typedef int (*eventdev_queue_setup_t)(struct rte_eventdev *dev,
 typedef void (*eventdev_queue_release_t)(struct rte_eventdev *dev,
		uint8_t queue_id);
 
-/**
- * Get an event queue attribute at runtime.
- *
- * @param dev
- *   Event device pointer
- * @param queue_id
- *   Event queue index
- * @param attr_id
- *   Event queue attribute id
- * @param[out] attr_value
- *   Event queue attribute value
- *
- * @return
- *   - 0: Success.
- *   - <0: Error code on failure.
- */
-typedef int (*eventdev_queue_attr_get_t)(struct rte_eventdev *dev,
-					 uint8_t queue_id, uint32_t attr_id,
-					 uint32_t *attr_value);
-
 /**
  * Set an event queue attribute at runtime.
  *
@@ -1268,8 +1248,6 @@ struct eventdev_ops {
 	/**< Set up an event queue. */
 	eventdev_queue_release_t queue_release;
 	/**< Release an event queue. */
-	eventdev_queue_attr_get_t queue_attr_get;
-	/**< Get an event queue attribute. */
 	eventdev_queue_attr_set_t queue_attr_set;
 	/**< Set an event queue attribute. */
 
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 1dc4f966be..b96185b25d 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -859,15 +859,13 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
 		break;
 	case RTE_EVENT_QUEUE_ATTR_WEIGHT:
 		*attr_value = RTE_EVENT_QUEUE_WEIGHT_LOWEST;
-		if (dev->dev_ops->queue_attr_get)
-			return (*dev->dev_ops->queue_attr_get)(
-				dev, queue_id, attr_id, attr_value);
+		if (dev->data->event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_QOS)
+			*attr_value = conf->weight;
 		break;
 	case RTE_EVENT_QUEUE_ATTR_AFFINITY:
 		*attr_value = RTE_EVENT_QUEUE_AFFINITY_LOWEST;
-		if (dev->dev_ops->queue_attr_get)
-			return (*dev->dev_ops->queue_attr_get)(
-				dev, queue_id, attr_id, attr_value);
+		if (dev->data->event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_QOS)
+			*attr_value = conf->affinity;
 		break;
 	default:
 		return -EINVAL;
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 6a6f6ea4c1..f1908b82b2 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -640,6 +640,22 @@ struct rte_event_queue_conf {
 	 * event device supported priority value.
 	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability
 	 */
+	uint8_t weight;
+	/**< Weight of the event queue relative to other event queues.
+	 * The requested weight should be in the range of
+	 * [RTE_EVENT_QUEUE_WEIGHT_HIGHEST, RTE_EVENT_QUEUE_WEIGHT_LOWEST].
+	 * The implementation shall normalize the requested weight to event
+	 * device supported weight value.
+	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability.
+	 */
+	uint8_t affinity;
+	/**< Affinity of the event queue relative to other event queues.
+	 * The requested affinity should be in the range of
+	 * [RTE_EVENT_QUEUE_AFFINITY_HIGHEST, RTE_EVENT_QUEUE_AFFINITY_LOWEST].
+	 * The implementation shall normalize the requested affinity to event
+	 * device supported affinity value.
+	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability.
+	 */
 };
 
 /**
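
For context, below is a minimal application-side sketch of how the new fields
could be used. It is not part of this patch: the helper name
setup_weighted_queue() and the chosen weight/affinity values are illustrative
assumptions, and it presumes the event device was already configured with
rte_event_dev_configure() and that queue_id is valid for the device.

#include <rte_eventdev.h>

/* Configure one event queue with an explicit weight and affinity when the
 * device advertises RTE_EVENT_DEV_CAP_QUEUE_QOS, then read the weight back
 * through rte_event_queue_attr_get().
 */
static int
setup_weighted_queue(uint8_t dev_id, uint8_t queue_id)
{
	struct rte_event_dev_info info;
	struct rte_event_queue_conf conf;
	uint32_t weight;
	int ret;

	ret = rte_event_dev_info_get(dev_id, &info);
	if (ret < 0)
		return ret;

	ret = rte_event_queue_default_conf_get(dev_id, queue_id, &conf);
	if (ret < 0)
		return ret;

	if (info.event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_QOS) {
		/* Illustrative values; the driver normalizes them to the
		 * range supported by the hardware.
		 */
		conf.weight = RTE_EVENT_QUEUE_WEIGHT_HIGHEST;
		conf.affinity = RTE_EVENT_QUEUE_AFFINITY_LOWEST;
	}

	ret = rte_event_queue_setup(dev_id, queue_id, &conf);
	if (ret < 0)
		return ret;

	/* Without QUEUE_QOS support the default weight is reported. */
	return rte_event_queue_attr_get(dev_id, queue_id,
					RTE_EVENT_QUEUE_ATTR_WEIGHT, &weight);
}

With the fields stored in rte_event_queue_conf, rte_event_queue_attr_get() can
serve RTE_EVENT_QUEUE_ATTR_WEIGHT and RTE_EVENT_QUEUE_ATTR_AFFINITY straight
from the saved queue configuration, which is why the PMD-level queue_attr_get
callback is no longer needed.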