From patchwork Fri Feb  3 10:05:30 2023
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 123011
X-Patchwork-Delegate: thomas@monjalon.net
From: Bruce Richardson
To: dev@dpdk.org
Cc: jasvinder.singh@intel.com, Bruce Richardson, stable@dpdk.org,
 Cristian Dumitrescu
Subject: [PATCH 1/4] examples/qos_sched: fix errors when TX port not up
Date: Fri, 3 Feb 2023 10:05:30 +0000
Message-Id: <20230203100533.10377-2-bruce.richardson@intel.com>
In-Reply-To: <20230203100533.10377-1-bruce.richardson@intel.com>
References: <20230203100533.10377-1-bruce.richardson@intel.com>
X-Mailer: git-send-email 2.37.2
List-Id: DPDK patches and discussions

Configuring the TX port will fail if the port is not up, so wait up to
10 seconds at startup for the link to come up.

Fixes: de3cfa2c9823 ("sched: initial import")
Cc: stable@dpdk.org

Signed-off-by: Bruce Richardson
Acked-by: Cristian Dumitrescu
---
 examples/qos_sched/init.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/examples/qos_sched/init.c b/examples/qos_sched/init.c
index 0709aec10c..6020367705 100644
--- a/examples/qos_sched/init.c
+++ b/examples/qos_sched/init.c
@@ -326,6 +326,8 @@ int app_init(void)
 	for(i = 0; i < nb_pfc; i++) {
 		uint32_t socket = rte_lcore_to_socket_id(qos_conf[i].rx_core);
 		struct rte_ring *ring;
+		struct rte_eth_link link = {0};
+		int retry_count = 100, retry_delay = 100; /* try every 100ms for 10 sec */
 
 		snprintf(ring_name, MAX_NAME_LEN, "ring-%u-%u", i, qos_conf[i].rx_core);
 		ring = rte_ring_lookup(ring_name);
@@ -356,6 +358,14 @@ int app_init(void)
 		app_init_port(qos_conf[i].rx_port, qos_conf[i].mbuf_pool);
 		app_init_port(qos_conf[i].tx_port, qos_conf[i].mbuf_pool);
 
+		rte_eth_link_get(qos_conf[i].tx_port, &link);
+		if (link.link_status == 0)
+			printf("Waiting for link on port %u\n", qos_conf[i].tx_port);
+		while (link.link_status == 0 && retry_count--) {
+			rte_delay_ms(retry_delay);
+			rte_eth_link_get(qos_conf[i].tx_port, &link);
+		}
+
 		qos_conf[i].sched_port = app_init_sched_port(qos_conf[i].tx_port, socket);
 	}


From patchwork Fri Feb  3 10:05:31 2023
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 123012
X-Patchwork-Delegate: thomas@monjalon.net
From: Bruce Richardson
To: dev@dpdk.org
Cc: jasvinder.singh@intel.com, Bruce Richardson, Cristian Dumitrescu
Subject: [PATCH 2/4] examples/qos_sched: remove TX buffering
Date: Fri, 3 Feb 2023 10:05:31 +0000
Message-Id: <20230203100533.10377-3-bruce.richardson@intel.com>
In-Reply-To: <20230203100533.10377-1-bruce.richardson@intel.com>
References: <20230203100533.10377-1-bruce.richardson@intel.com>
X-Mailer: git-send-email 2.37.2
List-Id: DPDK patches and discussions

Since the qos_sched app does batch dequeues from the QoS block, there is
little point in trying to batch further in the app - just send out the
full burst of packets that were received from the QoS block. With modern
CPUs and write-combining doorbells, the cost of doing smaller TX bursts
is reduced anyway in the worst case.
Signed-off-by: Bruce Richardson
Acked-by: Cristian Dumitrescu
---
 examples/qos_sched/app_thread.c | 94 ++++-----------------------------
 examples/qos_sched/main.c       | 12 -----
 examples/qos_sched/main.h       |  6 ---
 3 files changed, 9 insertions(+), 103 deletions(-)

diff --git a/examples/qos_sched/app_thread.c b/examples/qos_sched/app_thread.c
index dbc878b553..1ea732aa91 100644
--- a/examples/qos_sched/app_thread.c
+++ b/examples/qos_sched/app_thread.c
@@ -104,82 +104,21 @@ app_rx_thread(struct thread_conf **confs)
 	}
 }
 
-
-
-/* Send the packet to an output interface
- * For performance reason function returns number of packets dropped, not sent,
- * so 0 means that all packets were sent successfully
- */
-
-static inline void
-app_send_burst(struct thread_conf *qconf)
-{
-	struct rte_mbuf **mbufs;
-	uint32_t n, ret;
-
-	mbufs = (struct rte_mbuf **)qconf->m_table;
-	n = qconf->n_mbufs;
-
-	do {
-		ret = rte_eth_tx_burst(qconf->tx_port, qconf->tx_queue, mbufs, (uint16_t)n);
-		/* we cannot drop the packets, so re-send */
-		/* update number of packets to be sent */
-		n -= ret;
-		mbufs = (struct rte_mbuf **)&mbufs[ret];
-	} while (n);
-}
-
-
-/* Send the packet to an output interface */
-static void
-app_send_packets(struct thread_conf *qconf, struct rte_mbuf **mbufs, uint32_t nb_pkt)
-{
-	uint32_t i, len;
-
-	len = qconf->n_mbufs;
-	for(i = 0; i < nb_pkt; i++) {
-		qconf->m_table[len] = mbufs[i];
-		len++;
-		/* enough pkts to be sent */
-		if (unlikely(len == burst_conf.tx_burst)) {
-			qconf->n_mbufs = len;
-			app_send_burst(qconf);
-			len = 0;
-		}
-	}
-
-	qconf->n_mbufs = len;
-}
-
 void
 app_tx_thread(struct thread_conf **confs)
 {
 	struct rte_mbuf *mbufs[burst_conf.qos_dequeue];
 	struct thread_conf *conf;
 	int conf_idx = 0;
-	int retval;
-	const uint64_t drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1) / US_PER_S * BURST_TX_DRAIN_US;
+	int nb_pkts;
 
 	while ((conf = confs[conf_idx])) {
-		retval = rte_ring_sc_dequeue_bulk(conf->tx_ring, (void **)mbufs,
+		nb_pkts = rte_ring_sc_dequeue_burst(conf->tx_ring, (void **)mbufs,
 					burst_conf.qos_dequeue, NULL);
-		if (likely(retval != 0)) {
-			app_send_packets(conf, mbufs, burst_conf.qos_dequeue);
-
-			conf->counter = 0; /* reset empty read loop counter */
-		}
-
-		conf->counter++;
-
-		/* drain ring and TX queues */
-		if (unlikely(conf->counter > drain_tsc)) {
-			/* now check is there any packets left to be transmitted */
-			if (conf->n_mbufs != 0) {
-				app_send_burst(conf);
-
-				conf->n_mbufs = 0;
-			}
-			conf->counter = 0;
+		if (likely(nb_pkts != 0)) {
+			uint16_t nb_tx = rte_eth_tx_burst(conf->tx_port, 0, mbufs, nb_pkts);
+			if (nb_pkts != nb_tx)
+				rte_pktmbuf_free_bulk(&mbufs[nb_tx], nb_pkts - nb_tx);
 		}
 
 		conf_idx++;
@@ -230,7 +169,6 @@ app_mixed_thread(struct thread_conf **confs)
 	struct rte_mbuf *mbufs[burst_conf.ring_burst];
 	struct thread_conf *conf;
 	int conf_idx = 0;
-	const uint64_t drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1) / US_PER_S * BURST_TX_DRAIN_US;
 
 	while ((conf = confs[conf_idx])) {
 		uint32_t nb_pkt;
@@ -250,23 +188,9 @@ app_mixed_thread(struct thread_conf **confs)
 		nb_pkt = rte_sched_port_dequeue(conf->sched_port, mbufs,
 					burst_conf.qos_dequeue);
 		if (likely(nb_pkt > 0)) {
-			app_send_packets(conf, mbufs, nb_pkt);
-
-			conf->counter = 0; /* reset empty read loop counter */
-		}
-
-		conf->counter++;
-
-		/* drain ring and TX queues */
-		if (unlikely(conf->counter > drain_tsc)) {
-
-			/* now check is there any packets left to be transmitted */
-			if (conf->n_mbufs != 0) {
-				app_send_burst(conf);
-
-				conf->n_mbufs = 0;
-			}
-			conf->counter = 0;
+			uint16_t nb_tx = rte_eth_tx_burst(conf->tx_port, 0, mbufs, nb_pkt);
+			if (nb_tx != nb_pkt)
+				rte_pktmbuf_free_bulk(&mbufs[nb_tx], nb_pkt - nb_tx);
 		}
 
 		conf_idx++;
diff --git a/examples/qos_sched/main.c b/examples/qos_sched/main.c
index dc6a17a646..b3c2c9ef23 100644
--- a/examples/qos_sched/main.c
+++ b/examples/qos_sched/main.c
@@ -105,12 +105,6 @@ app_main_loop(__rte_unused void *dummy)
 	} else if (mode == (APP_TX_MODE | APP_WT_MODE)) {
 		for (i = 0; i < wt_idx; i++) {
-			wt_confs[i]->m_table = rte_malloc("table_wt", sizeof(struct rte_mbuf *)
-				* burst_conf.tx_burst, RTE_CACHE_LINE_SIZE);
-
-			if (wt_confs[i]->m_table == NULL)
-				rte_panic("flow %u unable to allocate memory buffer\n", i);
-
 			RTE_LOG(INFO, APP, "flow %u lcoreid %u sched+write port %u\n",
 					i, lcore_id, wt_confs[i]->tx_port);
@@ -120,12 +114,6 @@ app_main_loop(__rte_unused void *dummy)
 	} else if (mode == APP_TX_MODE) {
 		for (i = 0; i < tx_idx; i++) {
-			tx_confs[i]->m_table = rte_malloc("table_tx", sizeof(struct rte_mbuf *)
-				* burst_conf.tx_burst, RTE_CACHE_LINE_SIZE);
-
-			if (tx_confs[i]->m_table == NULL)
-				rte_panic("flow %u unable to allocate memory buffer\n", i);
-
 			RTE_LOG(INFO, APP, "flow%u lcoreid%u write port%u\n",
 					i, lcore_id, tx_confs[i]->tx_port);
 		}
diff --git a/examples/qos_sched/main.h b/examples/qos_sched/main.h
index 76a68f585f..b9c301483a 100644
--- a/examples/qos_sched/main.h
+++ b/examples/qos_sched/main.h
@@ -37,8 +37,6 @@ extern "C" {
 #define TX_HTHRESH 0 /**< Default values of TX host threshold reg. */
 #define TX_WTHRESH 0 /**< Default values of TX write-back threshold reg. */
 
-#define BURST_TX_DRAIN_US 100
-
 #ifndef APP_MAX_LCORE
 #if (RTE_MAX_LCORE > 64)
 #define APP_MAX_LCORE 64
@@ -75,10 +73,6 @@ struct thread_stat
 
 struct thread_conf
 {
-	uint32_t counter;
-	uint32_t n_mbufs;
-	struct rte_mbuf **m_table;
-
 	uint16_t rx_port;
 	uint16_t tx_port;
 	uint16_t rx_queue;


From patchwork Fri Feb  3 10:05:32 2023
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 123013
X-Patchwork-Delegate: thomas@monjalon.net
From: Bruce Richardson
To: dev@dpdk.org
Cc: jasvinder.singh@intel.com, Bruce Richardson, Cristian Dumitrescu
Subject: [PATCH 3/4] examples/qos_sched: use bigger bursts on dequeue
Date: Fri, 3 Feb 2023 10:05:32 +0000
Message-Id: <20230203100533.10377-4-bruce.richardson@intel.com>
In-Reply-To: <20230203100533.10377-1-bruce.richardson@intel.com>
References: <20230203100533.10377-1-bruce.richardson@intel.com>
X-Mailer: git-send-email 2.37.2
List-Id: DPDK patches and discussions

While performance of the QoS block drops sharply if the dequeue size is
greater than or equal to the enqueue size, increasing the dequeue size
to just under the enqueue one gives improved performance when the
scheduler is not keeping up with the line rate.
Signed-off-by: Bruce Richardson
Acked-by: Cristian Dumitrescu
---
 doc/guides/sample_app_ug/qos_scheduler.rst | 2 +-
 examples/qos_sched/main.h                  | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/doc/guides/sample_app_ug/qos_scheduler.rst b/doc/guides/sample_app_ug/qos_scheduler.rst
index f376554dd9..9936b99172 100644
--- a/doc/guides/sample_app_ug/qos_scheduler.rst
+++ b/doc/guides/sample_app_ug/qos_scheduler.rst
@@ -91,7 +91,7 @@ Optional application parameters include:
 * B = I/O RX lcore write burst size to the output software rings,
   worker lcore read burst size from input software rings,
   QoS enqueue size (the default value is 64)
 
-* C = QoS dequeue size (the default value is 32)
+* C = QoS dequeue size (the default value is 63)
 
 * D = Worker lcore write burst size to the NIC TX (the default value is 64)
diff --git a/examples/qos_sched/main.h b/examples/qos_sched/main.h
index b9c301483a..d8f3e32c83 100644
--- a/examples/qos_sched/main.h
+++ b/examples/qos_sched/main.h
@@ -26,7 +26,7 @@ extern "C" {
 #define MAX_PKT_RX_BURST 64
 #define PKT_ENQUEUE 64
-#define PKT_DEQUEUE 32
+#define PKT_DEQUEUE 63
 #define MAX_PKT_TX_BURST 64
 
 #define RX_PTHRESH 8 /**< Default values of RX prefetch threshold reg. */
-- 
2.37.2


From patchwork Fri Feb  3 10:05:33 2023
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 123014
X-Patchwork-Delegate: thomas@monjalon.net
From: Bruce Richardson
To: dev@dpdk.org
Cc: jasvinder.singh@intel.com, Bruce Richardson, Cristian Dumitrescu
Subject: [PATCH 4/4] examples/qos_sched: remove limit on core ids
Date: Fri, 3 Feb 2023 10:05:33 +0000
Message-Id: <20230203100533.10377-5-bruce.richardson@intel.com>
In-Reply-To: <20230203100533.10377-1-bruce.richardson@intel.com>
References: <20230203100533.10377-1-bruce.richardson@intel.com>
X-Mailer: git-send-email 2.37.2
List-Id: DPDK patches and discussions

The qos_sched app was limited to using lcores between 0 and 64 only,
even if RTE_MAX_LCORE was set to a higher value (as it is by default).
Remove some of the checks on the lcore ids in order to support running
with core ids > 64.

Signed-off-by: Bruce Richardson
Acked-by: Cristian Dumitrescu
---
 examples/qos_sched/args.c | 72 ++-------------------------------------
 examples/qos_sched/main.h | 10 +-----
 2 files changed, 4 insertions(+), 78 deletions(-)

diff --git a/examples/qos_sched/args.c b/examples/qos_sched/args.c
index b2959499ae..e97273152a 100644
--- a/examples/qos_sched/args.c
+++ b/examples/qos_sched/args.c
@@ -24,7 +24,6 @@
 static uint32_t app_main_core = 1;
 static uint32_t app_numa_mask;
-static uint64_t app_used_core_mask = 0;
 static uint64_t app_used_port_mask = 0;
 static uint64_t app_used_rx_port_mask = 0;
 static uint64_t app_used_tx_port_mask = 0;
@@ -82,43 +81,6 @@ app_usage(const char *prgname)
 }
 
-/* returns core mask used by DPDK */
-static uint64_t
-app_eal_core_mask(void)
-{
-	uint64_t cm = 0;
-	uint32_t i;
-
-	for (i = 0; i < APP_MAX_LCORE; i++) {
-		if (rte_lcore_has_role(i, ROLE_RTE))
-			cm |= (1ULL << i);
-	}
-
-	cm |= (1ULL << rte_get_main_lcore());
-
-	return cm;
-}
-
-
-/* returns total number of cores presented in a system */
-static uint32_t
-app_cpu_core_count(void)
-{
-	int i, len;
-	char path[PATH_MAX];
-	uint32_t ncores = 0;
-
-	for (i = 0; i < APP_MAX_LCORE; i++) {
-		len = snprintf(path, sizeof(path), SYS_CPU_DIR, i);
-		if (len <= 0 || (unsigned)len >= sizeof(path))
-			continue;
-
-		if (access(path, F_OK) == 0)
-			ncores++;
-	}
-
-	return ncores;
-}
 
 /* returns: number of values parsed
@@ -261,15 +223,6 @@ app_parse_flow_conf(const char *conf_str)
 	app_used_tx_port_mask |= mask;
 	app_used_port_mask |= mask;
 
-	mask = 1lu << pconf->rx_core;
-	app_used_core_mask |= mask;
-
-	mask = 1lu << pconf->wt_core;
-	app_used_core_mask |= mask;
-
-	mask = 1lu << pconf->tx_core;
-	app_used_core_mask |= mask;
-
 	nb_pfc++;
 
 	return 0;
@@ -322,7 +275,7 @@ app_parse_args(int argc, char **argv)
 	int opt, ret;
 	int option_index;
 	char *prgname = argv[0];
-	uint32_t i, nb_lcores;
+	uint32_t i;
 
 	static struct option lgopts[] = {
 		{OPT_PFC, 1, NULL, OPT_PFC_NUM},
@@ -425,23 +378,6 @@ app_parse_args(int argc, char **argv)
 		}
 	}
 
-	/* check main core index validity */
-	for (i = 0; i <= app_main_core; i++) {
-		if (app_used_core_mask & RTE_BIT64(app_main_core)) {
-			RTE_LOG(ERR, APP, "Main core index is not configured properly\n");
-			app_usage(prgname);
-			return -1;
-		}
-	}
-	app_used_core_mask |= RTE_BIT64(app_main_core);
-
-	if ((app_used_core_mask != app_eal_core_mask()) ||
-	    (app_main_core != rte_get_main_lcore())) {
-		RTE_LOG(ERR, APP, "EAL core mask not configured properly, must be %" PRIx64
-			" instead of %" PRIx64 "\n", app_used_core_mask, app_eal_core_mask());
-		return -1;
-	}
-
 	if (nb_pfc == 0) {
 		RTE_LOG(ERR, APP, "Packet flow not configured!\n");
 		app_usage(prgname);
@@ -449,15 +385,13 @@ app_parse_args(int argc, char **argv)
 	}
 
 	/* sanity check for cores assignment */
-	nb_lcores = app_cpu_core_count();
-
 	for(i = 0; i < nb_pfc; i++) {
-		if (qos_conf[i].rx_core >= nb_lcores) {
+		if (qos_conf[i].rx_core >= RTE_MAX_LCORE) {
 			RTE_LOG(ERR, APP, "pfc %u: invalid RX lcore index %u\n",
 					i + 1, qos_conf[i].rx_core);
 			return -1;
 		}
-		if (qos_conf[i].wt_core >= nb_lcores) {
+		if (qos_conf[i].wt_core >= RTE_MAX_LCORE) {
 			RTE_LOG(ERR, APP, "pfc %u: invalid WT lcore index %u\n",
 					i + 1, qos_conf[i].wt_core);
 			return -1;
diff --git a/examples/qos_sched/main.h b/examples/qos_sched/main.h
index d8f3e32c83..bc647ec595 100644
--- a/examples/qos_sched/main.h
+++ b/examples/qos_sched/main.h
@@ -37,15 +37,7 @@ extern "C" {
 #define TX_HTHRESH 0 /**< Default values of TX host threshold reg. */
 #define TX_WTHRESH 0 /**< Default values of TX write-back threshold reg. */
 
-#ifndef APP_MAX_LCORE
-#if (RTE_MAX_LCORE > 64)
-#define APP_MAX_LCORE 64
-#else
-#define APP_MAX_LCORE RTE_MAX_LCORE
-#endif
-#endif
-
-#define MAX_DATA_STREAMS (APP_MAX_LCORE/2)
+#define MAX_DATA_STREAMS RTE_MAX_LCORE/2
 #define MAX_SCHED_SUBPORTS 8
 #define MAX_SCHED_PIPES 4096
 #define MAX_SCHED_PIPE_PROFILES 256