From patchwork Mon May 16 17:35:51 2022
X-Patchwork-Submitter: Shijith Thotton
X-Patchwork-Id: 111193
X-Patchwork-Delegate: jerinj@marvell.com
From: Shijith Thotton
Subject: [PATCH v4 5/5] event/cnxk: support to set runtime queue attributes
Date: Mon, 16 May 2022 23:05:51 +0530
List-Id: DPDK patches and discussions

Added API to set queue attributes at runtime and API to get weight and
affinity.
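For reference, a minimal usage sketch (not part of this patch) of how an
application could exercise these attributes through the generic eventdev
calls introduced earlier in this series; the device id, queue id, and the
attribute values below are illustrative only:

#include <rte_eventdev.h>

/* Raise the weight and affinity of one event queue on an already
 * configured event device, then read the weight back.
 */
static int
tune_queue(uint8_t dev_id, uint8_t queue_id)
{
	uint32_t weight;
	int ret;

	ret = rte_event_queue_attr_set(dev_id, queue_id,
				       RTE_EVENT_QUEUE_ATTR_WEIGHT,
				       RTE_EVENT_QUEUE_WEIGHT_HIGHEST);
	if (ret)
		return ret;

	ret = rte_event_queue_attr_set(dev_id, queue_id,
				       RTE_EVENT_QUEUE_ATTR_AFFINITY,
				       RTE_EVENT_QUEUE_AFFINITY_HIGHEST);
	if (ret)
		return ret;

	return rte_event_queue_attr_get(dev_id, queue_id,
					RTE_EVENT_QUEUE_ATTR_WEIGHT, &weight);
}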
Signed-off-by: Shijith Thotton
---
 doc/guides/eventdevs/features/cnxk.ini |  1 +
 drivers/event/cnxk/cn10k_eventdev.c    |  4 ++
 drivers/event/cnxk/cn9k_eventdev.c     |  4 ++
 drivers/event/cnxk/cnxk_eventdev.c     | 91 ++++++++++++++++++++++++--
 drivers/event/cnxk/cnxk_eventdev.h     | 19 ++++++
 5 files changed, 113 insertions(+), 6 deletions(-)

diff --git a/doc/guides/eventdevs/features/cnxk.ini b/doc/guides/eventdevs/features/cnxk.ini
index 7633c6e3a2..bee69bf8f4 100644
--- a/doc/guides/eventdevs/features/cnxk.ini
+++ b/doc/guides/eventdevs/features/cnxk.ini
@@ -12,6 +12,7 @@ runtime_port_link          = Y
 multiple_queue_port        = Y
 carry_flow_id              = Y
 maintenance_free           = Y
+runtime_queue_attr         = y

 [Eth Rx adapter Features]
 internal_port              = Y
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 94829e789c..450e1bf50c 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -846,9 +846,13 @@ cn10k_crypto_adapter_qp_del(const struct rte_eventdev *event_dev,
 static struct eventdev_ops cn10k_sso_dev_ops = {
 	.dev_infos_get = cn10k_sso_info_get,
 	.dev_configure = cn10k_sso_dev_configure,
+
 	.queue_def_conf = cnxk_sso_queue_def_conf,
 	.queue_setup = cnxk_sso_queue_setup,
 	.queue_release = cnxk_sso_queue_release,
+	.queue_attr_get = cnxk_sso_queue_attribute_get,
+	.queue_attr_set = cnxk_sso_queue_attribute_set,
+
 	.port_def_conf = cnxk_sso_port_def_conf,
 	.port_setup = cn10k_sso_port_setup,
 	.port_release = cn10k_sso_port_release,
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 987888d3db..3de22d7f4e 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -1084,9 +1084,13 @@ cn9k_crypto_adapter_qp_del(const struct rte_eventdev *event_dev,
 static struct eventdev_ops cn9k_sso_dev_ops = {
 	.dev_infos_get = cn9k_sso_info_get,
 	.dev_configure = cn9k_sso_dev_configure,
+
 	.queue_def_conf = cnxk_sso_queue_def_conf,
 	.queue_setup = cnxk_sso_queue_setup,
 	.queue_release = cnxk_sso_queue_release,
+	.queue_attr_get = cnxk_sso_queue_attribute_get,
+	.queue_attr_set = cnxk_sso_queue_attribute_set,
+
 	.port_def_conf = cnxk_sso_port_def_conf,
 	.port_setup = cn9k_sso_port_setup,
 	.port_release = cn9k_sso_port_release,
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index be021d86c9..a2829b817e 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -120,7 +120,8 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
 				  RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
 				  RTE_EVENT_DEV_CAP_NONSEQ_MODE |
 				  RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
-				  RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
+				  RTE_EVENT_DEV_CAP_MAINTENANCE_FREE |
+				  RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR;
 }

 int
@@ -300,11 +301,27 @@ cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
 		     const struct rte_event_queue_conf *queue_conf)
 {
 	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
-
-	plt_sso_dbg("Queue=%d prio=%d", queue_id, queue_conf->priority);
-	/* Normalize <0-255> to <0-7> */
-	return roc_sso_hwgrp_set_priority(&dev->sso, queue_id, 0xFF, 0xFF,
-					  queue_conf->priority / 32);
+	uint8_t priority, weight, affinity;
+
+	/* Default weight and affinity */
+	dev->mlt_prio[queue_id].weight = RTE_EVENT_QUEUE_WEIGHT_LOWEST;
+	dev->mlt_prio[queue_id].affinity = RTE_EVENT_QUEUE_AFFINITY_HIGHEST;
+
+	priority = CNXK_QOS_NORMALIZE(queue_conf->priority, 0,
+				      RTE_EVENT_DEV_PRIORITY_LOWEST,
+				      CNXK_SSO_PRIORITY_CNT);
+	weight = CNXK_QOS_NORMALIZE(
+		dev->mlt_prio[queue_id].weight, CNXK_SSO_WEIGHT_MIN,
+		RTE_EVENT_QUEUE_WEIGHT_HIGHEST, CNXK_SSO_WEIGHT_CNT);
+	affinity = CNXK_QOS_NORMALIZE(dev->mlt_prio[queue_id].affinity, 0,
+				      RTE_EVENT_QUEUE_AFFINITY_HIGHEST,
+				      CNXK_SSO_AFFINITY_CNT);
+
+	plt_sso_dbg("Queue=%u prio=%u weight=%u affinity=%u", queue_id,
+		    priority, weight, affinity);
+
+	return roc_sso_hwgrp_set_priority(&dev->sso, queue_id, weight, affinity,
+					  priority);
 }

 void
@@ -314,6 +331,68 @@ cnxk_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id)
 	RTE_SET_USED(queue_id);
 }

+int
+cnxk_sso_queue_attribute_get(struct rte_eventdev *event_dev, uint8_t queue_id,
+			     uint32_t attr_id, uint32_t *attr_value)
+{
+	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+	if (attr_id == RTE_EVENT_QUEUE_ATTR_WEIGHT)
+		*attr_value = dev->mlt_prio[queue_id].weight;
+	else if (attr_id == RTE_EVENT_QUEUE_ATTR_AFFINITY)
+		*attr_value = dev->mlt_prio[queue_id].affinity;
+	else
+		return -EINVAL;
+
+	return 0;
+}
+
+int
+cnxk_sso_queue_attribute_set(struct rte_eventdev *event_dev, uint8_t queue_id,
+			     uint32_t attr_id, uint64_t attr_value)
+{
+	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+	uint8_t priority, weight, affinity;
+	struct rte_event_queue_conf *conf;
+
+	conf = &event_dev->data->queues_cfg[queue_id];
+
+	switch (attr_id) {
+	case RTE_EVENT_QUEUE_ATTR_PRIORITY:
+		conf->priority = attr_value;
+		break;
+	case RTE_EVENT_QUEUE_ATTR_WEIGHT:
+		dev->mlt_prio[queue_id].weight = attr_value;
+		break;
+	case RTE_EVENT_QUEUE_ATTR_AFFINITY:
+		dev->mlt_prio[queue_id].affinity = attr_value;
+		break;
+	case RTE_EVENT_QUEUE_ATTR_NB_ATOMIC_FLOWS:
+	case RTE_EVENT_QUEUE_ATTR_NB_ATOMIC_ORDER_SEQUENCES:
+	case RTE_EVENT_QUEUE_ATTR_EVENT_QUEUE_CFG:
+	case RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE:
+		/* FALLTHROUGH */
+		plt_sso_dbg("Unsupported attribute id %u", attr_id);
+		return -ENOTSUP;
+	default:
+		plt_err("Invalid attribute id %u", attr_id);
+		return -EINVAL;
+	}
+
+	priority = CNXK_QOS_NORMALIZE(conf->priority, 0,
+				      RTE_EVENT_DEV_PRIORITY_LOWEST,
+				      CNXK_SSO_PRIORITY_CNT);
+	weight = CNXK_QOS_NORMALIZE(
+		dev->mlt_prio[queue_id].weight, CNXK_SSO_WEIGHT_MIN,
+		RTE_EVENT_QUEUE_WEIGHT_HIGHEST, CNXK_SSO_WEIGHT_CNT);
+	affinity = CNXK_QOS_NORMALIZE(dev->mlt_prio[queue_id].affinity, 0,
+				      RTE_EVENT_QUEUE_AFFINITY_HIGHEST,
+				      CNXK_SSO_AFFINITY_CNT);
+
+	return roc_sso_hwgrp_set_priority(&dev->sso, queue_id, weight, affinity,
+					  priority);
+}
+
 void
 cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
 		       struct rte_event_port_conf *port_conf)
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 5564746e6d..531f6d1a84 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -38,6 +38,11 @@
 #define CNXK_SSO_XAQ_CACHE_CNT (0x7)
 #define CNXK_SSO_XAQ_SLACK     (8)
 #define CNXK_SSO_WQE_SG_PTR    (9)
+#define CNXK_SSO_PRIORITY_CNT  (0x8)
+#define CNXK_SSO_WEIGHT_MAX    (0x3f)
+#define CNXK_SSO_WEIGHT_MIN    (0x3)
+#define CNXK_SSO_WEIGHT_CNT    (CNXK_SSO_WEIGHT_MAX - CNXK_SSO_WEIGHT_MIN + 1)
+#define CNXK_SSO_AFFINITY_CNT  (0x10)

 #define CNXK_TT_FROM_TAG(x)   (((x) >> 32) & SSO_TT_EMPTY)
 #define CNXK_TT_FROM_EVENT(x) (((x) >> 38) & SSO_TT_EMPTY)
@@ -54,6 +59,8 @@
 #define CN10K_GW_MODE_PREF     1
 #define CN10K_GW_MODE_PREF_WFE 2

+#define CNXK_QOS_NORMALIZE(val, min, max, cnt)                                 \
+	(min + val / ((max + cnt - 1) / cnt))
 #define CNXK_VALID_DEV_OR_ERR_RET(dev, drv_name)                               \
 	do {                                                                   \
 		if (strncmp(dev->driver->name, drv_name, strlen(drv_name)))    \
@@ -79,6 +86,11 @@ struct cnxk_sso_qos {
 	uint16_t iaq_prcnt;
 };

+struct cnxk_sso_mlt_prio {
+	uint8_t weight;
+	uint8_t affinity;
+};
+
 struct cnxk_sso_evdev {
 	struct roc_sso sso;
 	uint8_t max_event_queues;
@@ -108,6 +120,7 @@ struct cnxk_sso_evdev {
 	uint64_t *timer_adptr_sz;
 	uint16_t vec_pool_cnt;
 	uint64_t *vec_pools;
+	struct cnxk_sso_mlt_prio mlt_prio[RTE_EVENT_MAX_QUEUES_PER_DEV];
 	/* Dev args */
 	uint32_t xae_cnt;
 	uint8_t qos_queue_cnt;
@@ -234,6 +247,12 @@ void cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
 int cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
			 const struct rte_event_queue_conf *queue_conf);
 void cnxk_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id);
+int cnxk_sso_queue_attribute_get(struct rte_eventdev *event_dev,
+				 uint8_t queue_id, uint32_t attr_id,
+				 uint32_t *attr_value);
+int cnxk_sso_queue_attribute_set(struct rte_eventdev *event_dev,
+				 uint8_t queue_id, uint32_t attr_id,
+				 uint64_t attr_value);
 void cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
			    struct rte_event_port_conf *port_conf);
 int cnxk_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
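For reference, a standalone illustration (not part of the patch) of how the
CNXK_QOS_NORMALIZE() formula above maps the generic 0-255 eventdev ranges onto
the SSO hardware ranges; the macro is copied here with extra parentheses for
standalone use, 255 standing in for RTE_EVENT_DEV_PRIORITY_LOWEST and
RTE_EVENT_QUEUE_AFFINITY_HIGHEST, and the expected output noted in comments:

#include <stdio.h>

/* Same formula as CNXK_QOS_NORMALIZE() in cnxk_eventdev.h. */
#define QOS_NORMALIZE(val, min, max, cnt) \
	((min) + (val) / (((max) + (cnt) - 1) / (cnt)))

int main(void)
{
	/* Priority: 0-255 collapses to 8 SSO priority levels (CNXK_SSO_PRIORITY_CNT). */
	printf("prio 255 -> %d\n", QOS_NORMALIZE(255, 0, 255, 8));  /* prints 7 */
	printf("prio 128 -> %d\n", QOS_NORMALIZE(128, 0, 255, 8));  /* prints 4 */
	/* Affinity: 0-255 collapses to 16 SSO affinity levels (CNXK_SSO_AFFINITY_CNT). */
	printf("aff  255 -> %d\n", QOS_NORMALIZE(255, 0, 255, 16)); /* prints 15 */
	return 0;
}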