From patchwork Fri Feb 3 10:05:31 2023
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 123012
X-Patchwork-Delegate: thomas@monjalon.net
From: Bruce Richardson
To: dev@dpdk.org
Cc: jasvinder.singh@intel.com, Bruce Richardson, Cristian Dumitrescu
Subject: [PATCH 2/4] examples/qos_sched: remove TX buffering
Date: Fri, 3 Feb 2023 10:05:31 +0000
Message-Id: <20230203100533.10377-3-bruce.richardson@intel.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20230203100533.10377-1-bruce.richardson@intel.com>
References: <20230203100533.10377-1-bruce.richardson@intel.com>

Since the qos_sched app does batch dequeues from the QoS block, there
is little point in trying to batch further in the app - just send out
the full burst of packets that were received from the QoS block. With
modern CPUs and write-combining doorbells, the cost of doing smaller
TX bursts is reduced anyway, even in the worst case.
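In concrete terms, the buffering, drain timer and retry loop are
replaced by a single dequeue-transmit-free step per port. As a rough
sketch of the resulting TX path (using the same names as the diff
below; illustrative only, error handling elided):

	nb_pkts = rte_ring_sc_dequeue_burst(conf->tx_ring,
			(void **)mbufs, burst_conf.qos_dequeue, NULL);
	if (likely(nb_pkts != 0)) {
		/* transmit everything dequeued in a single call... */
		uint16_t nb_tx = rte_eth_tx_burst(conf->tx_port, 0,
				mbufs, nb_pkts);
		/* ...and free, rather than buffer, any unsent tail */
		if (nb_tx != nb_pkts)
			rte_pktmbuf_free_bulk(&mbufs[nb_tx],
					nb_pkts - nb_tx);
	}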
Signed-off-by: Bruce Richardson
Acked-by: Cristian Dumitrescu
---
 examples/qos_sched/app_thread.c | 94 ++++-----------------------------
 examples/qos_sched/main.c       | 12 -----
 examples/qos_sched/main.h       |  6 ---
 3 files changed, 9 insertions(+), 103 deletions(-)

diff --git a/examples/qos_sched/app_thread.c b/examples/qos_sched/app_thread.c
index dbc878b553..1ea732aa91 100644
--- a/examples/qos_sched/app_thread.c
+++ b/examples/qos_sched/app_thread.c
@@ -104,82 +104,21 @@ app_rx_thread(struct thread_conf **confs)
 	}
 }
 
-
-
-/* Send the packet to an output interface
- * For performance reason function returns number of packets dropped, not sent,
- * so 0 means that all packets were sent successfully
- */
-
-static inline void
-app_send_burst(struct thread_conf *qconf)
-{
-	struct rte_mbuf **mbufs;
-	uint32_t n, ret;
-
-	mbufs = (struct rte_mbuf **)qconf->m_table;
-	n = qconf->n_mbufs;
-
-	do {
-		ret = rte_eth_tx_burst(qconf->tx_port, qconf->tx_queue, mbufs, (uint16_t)n);
-		/* we cannot drop the packets, so re-send */
-		/* update number of packets to be sent */
-		n -= ret;
-		mbufs = (struct rte_mbuf **)&mbufs[ret];
-	} while (n);
-}
-
-
-/* Send the packet to an output interface */
-static void
-app_send_packets(struct thread_conf *qconf, struct rte_mbuf **mbufs, uint32_t nb_pkt)
-{
-	uint32_t i, len;
-
-	len = qconf->n_mbufs;
-	for(i = 0; i < nb_pkt; i++) {
-		qconf->m_table[len] = mbufs[i];
-		len++;
-		/* enough pkts to be sent */
-		if (unlikely(len == burst_conf.tx_burst)) {
-			qconf->n_mbufs = len;
-			app_send_burst(qconf);
-			len = 0;
-		}
-	}
-
-	qconf->n_mbufs = len;
-}
-
 void
 app_tx_thread(struct thread_conf **confs)
 {
 	struct rte_mbuf *mbufs[burst_conf.qos_dequeue];
 	struct thread_conf *conf;
 	int conf_idx = 0;
-	int retval;
-	const uint64_t drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1) / US_PER_S * BURST_TX_DRAIN_US;
+	int nb_pkts;
 
 	while ((conf = confs[conf_idx])) {
-		retval = rte_ring_sc_dequeue_bulk(conf->tx_ring, (void **)mbufs,
+		nb_pkts = rte_ring_sc_dequeue_burst(conf->tx_ring, (void **)mbufs,
 					burst_conf.qos_dequeue, NULL);
-		if (likely(retval != 0)) {
-			app_send_packets(conf, mbufs, burst_conf.qos_dequeue);
-
-			conf->counter = 0; /* reset empty read loop counter */
-		}
-
-		conf->counter++;
-
-		/* drain ring and TX queues */
-		if (unlikely(conf->counter > drain_tsc)) {
-			/* now check is there any packets left to be transmitted */
-			if (conf->n_mbufs != 0) {
-				app_send_burst(conf);
-
-				conf->n_mbufs = 0;
-			}
-			conf->counter = 0;
+		if (likely(nb_pkts != 0)) {
+			uint16_t nb_tx = rte_eth_tx_burst(conf->tx_port, 0, mbufs, nb_pkts);
+			if (nb_pkts != nb_tx)
+				rte_pktmbuf_free_bulk(&mbufs[nb_tx], nb_pkts - nb_tx);
 		}
 
 		conf_idx++;
@@ -230,7 +169,6 @@ app_mixed_thread(struct thread_conf **confs)
 	struct rte_mbuf *mbufs[burst_conf.ring_burst];
 	struct thread_conf *conf;
 	int conf_idx = 0;
-	const uint64_t drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1) / US_PER_S * BURST_TX_DRAIN_US;
 
 	while ((conf = confs[conf_idx])) {
 		uint32_t nb_pkt;
@@ -250,23 +188,9 @@ app_mixed_thread(struct thread_conf **confs)
 		nb_pkt = rte_sched_port_dequeue(conf->sched_port, mbufs,
 					burst_conf.qos_dequeue);
 		if (likely(nb_pkt > 0)) {
-			app_send_packets(conf, mbufs, nb_pkt);
-
-			conf->counter = 0; /* reset empty read loop counter */
-		}
-
-		conf->counter++;
-
-		/* drain ring and TX queues */
-		if (unlikely(conf->counter > drain_tsc)) {
-
-			/* now check is there any packets left to be transmitted */
-			if (conf->n_mbufs != 0) {
-				app_send_burst(conf);
-
-				conf->n_mbufs = 0;
-			}
-			conf->counter = 0;
+			uint16_t nb_tx = rte_eth_tx_burst(conf->tx_port, 0, mbufs, nb_pkt);
+			if (nb_tx != nb_pkt)
+				rte_pktmbuf_free_bulk(&mbufs[nb_tx], nb_pkt - nb_tx);
 		}
 
 		conf_idx++;
diff --git a/examples/qos_sched/main.c b/examples/qos_sched/main.c
index dc6a17a646..b3c2c9ef23 100644
--- a/examples/qos_sched/main.c
+++ b/examples/qos_sched/main.c
@@ -105,12 +105,6 @@ app_main_loop(__rte_unused void *dummy)
 	}
 	else if (mode == (APP_TX_MODE | APP_WT_MODE)) {
 		for (i = 0; i < wt_idx; i++) {
-			wt_confs[i]->m_table = rte_malloc("table_wt", sizeof(struct rte_mbuf *)
-				* burst_conf.tx_burst, RTE_CACHE_LINE_SIZE);
-
-			if (wt_confs[i]->m_table == NULL)
-				rte_panic("flow %u unable to allocate memory buffer\n", i);
-
 			RTE_LOG(INFO, APP, "flow %u lcoreid %u sched+write port %u\n",
 					i, lcore_id, wt_confs[i]->tx_port);
 
@@ -120,12 +114,6 @@ app_main_loop(__rte_unused void *dummy)
 	}
 	else if (mode == APP_TX_MODE) {
 		for (i = 0; i < tx_idx; i++) {
-			tx_confs[i]->m_table = rte_malloc("table_tx", sizeof(struct rte_mbuf *)
-				* burst_conf.tx_burst, RTE_CACHE_LINE_SIZE);
-
-			if (tx_confs[i]->m_table == NULL)
-				rte_panic("flow %u unable to allocate memory buffer\n", i);
-
 			RTE_LOG(INFO, APP, "flow%u lcoreid%u write port%u\n",
 					i, lcore_id, tx_confs[i]->tx_port);
 		}
diff --git a/examples/qos_sched/main.h b/examples/qos_sched/main.h
index 76a68f585f..b9c301483a 100644
--- a/examples/qos_sched/main.h
+++ b/examples/qos_sched/main.h
@@ -37,8 +37,6 @@ extern "C" {
 #define TX_HTHRESH 0 /**< Default values of TX host threshold reg. */
 #define TX_WTHRESH 0 /**< Default values of TX write-back threshold reg. */
 
-#define BURST_TX_DRAIN_US 100
-
 #ifndef APP_MAX_LCORE
 #if (RTE_MAX_LCORE > 64)
 #define APP_MAX_LCORE 64
@@ -75,10 +73,6 @@ struct thread_stat
 
 struct thread_conf
 {
-	uint32_t counter;
-	uint32_t n_mbufs;
-	struct rte_mbuf **m_table;
-
 	uint16_t rx_port;
 	uint16_t tx_port;
 	uint16_t rx_queue;
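
A note on the design choice above: the removed app_send_burst()
retried rte_eth_tx_burst() until the NIC accepted every packet ("we
cannot drop the packets, so re-send"), whereas the new code frees any
unsent tail. Applications derived from this example that cannot
tolerate drops could wrap the new burst path in a retry loop along
these lines (a sketch only, not part of this patch; the helper name
is hypothetical):

	#include <rte_ethdev.h>
	#include <rte_mbuf.h>

	/* Hypothetical helper: keep calling rte_eth_tx_burst() until
	 * the whole burst has been queued, as app_send_burst() did,
	 * instead of freeing the unsent tail.
	 */
	static inline void
	tx_burst_retry(uint16_t port, uint16_t queue,
			struct rte_mbuf **mbufs, uint16_t n)
	{
		uint16_t sent = 0;

		while (sent < n)
			sent += rte_eth_tx_burst(port, queue,
					&mbufs[sent], n - sent);
	}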