From patchwork Sun Oct 17 23:19:52 2021
X-Patchwork-Submitter: Gaoxiang Liu
X-Patchwork-Id: 101928
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Gaoxiang Liu
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, liugaoxiang@huawei.com, Gaoxiang Liu
Date: Mon, 18 Oct 2021 07:19:52 +0800
Message-Id: <20211017231952.162-1-gaoxiangliu0@163.com>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20210928014348.1747-1-gaoxiangliu0@163.com>
References: <20210928014348.1747-1-gaoxiangliu0@163.com>
MIME-Version: 1.0
Subject: [dpdk-dev] [PATCH v3] net/vhost: merge vhost stats loop in vhost Tx/Rx

To improve performance in vhost Tx/Rx, merge the vhost stats loops.
eth_vhost_tx iterates twice over the same burst of sent packets;
the two loops can be merged into one. eth_vhost_rx has the same
issue as Tx.

Signed-off-by: Gaoxiang Liu
Reviewed-by: Maxime Coquelin

---
v2:
 * Fix coding style issues.

v3:
 * Add __rte_always_inline to vhost_update_single_packet_xstats.
---
 drivers/net/vhost/rte_eth_vhost.c | 64 ++++++++++++++-----------------
 1 file changed, 29 insertions(+), 35 deletions(-)

diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index a202931e9a..021195ae57 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -335,39 +335,30 @@ vhost_count_xcast_packets(struct vhost_queue *vq,
 	}
 }
 
-static void
-vhost_update_packet_xstats(struct vhost_queue *vq, struct rte_mbuf **bufs,
-			   uint16_t count, uint64_t nb_bytes,
-			   uint64_t nb_missed)
+static __rte_always_inline void
+vhost_update_single_packet_xstats(struct vhost_queue *vq, struct rte_mbuf *buf)
 {
 	uint32_t pkt_len = 0;
-	uint64_t i = 0;
 	uint64_t index;
 	struct vhost_stats *pstats = &vq->stats;
 
-	pstats->xstats[VHOST_BYTE] += nb_bytes;
-	pstats->xstats[VHOST_MISSED_PKT] += nb_missed;
-	pstats->xstats[VHOST_UNICAST_PKT] += nb_missed;
-
-	for (i = 0; i < count ; i++) {
-		pstats->xstats[VHOST_PKT]++;
-		pkt_len = bufs[i]->pkt_len;
-		if (pkt_len == 64) {
-			pstats->xstats[VHOST_64_PKT]++;
-		} else if (pkt_len > 64 && pkt_len < 1024) {
-			index = (sizeof(pkt_len) * 8)
-				- __builtin_clz(pkt_len) - 5;
-			pstats->xstats[index]++;
-		} else {
-			if (pkt_len < 64)
-				pstats->xstats[VHOST_UNDERSIZE_PKT]++;
-			else if (pkt_len <= 1522)
-				pstats->xstats[VHOST_1024_TO_1522_PKT]++;
-			else if (pkt_len > 1522)
-				pstats->xstats[VHOST_1523_TO_MAX_PKT]++;
-		}
-		vhost_count_xcast_packets(vq, bufs[i]);
+	pstats->xstats[VHOST_PKT]++;
+	pkt_len = buf->pkt_len;
+	if (pkt_len == 64) {
+		pstats->xstats[VHOST_64_PKT]++;
+	} else if (pkt_len > 64 && pkt_len < 1024) {
+		index = (sizeof(pkt_len) * 8)
+			- __builtin_clz(pkt_len) - 5;
+		pstats->xstats[index]++;
+	} else {
+		if (pkt_len < 64)
+			pstats->xstats[VHOST_UNDERSIZE_PKT]++;
+		else if (pkt_len <= 1522)
+			pstats->xstats[VHOST_1024_TO_1522_PKT]++;
+		else if (pkt_len > 1522)
+			pstats->xstats[VHOST_1523_TO_MAX_PKT]++;
 	}
+	vhost_count_xcast_packets(vq, buf);
 }
 
 static uint16_t
@@ -376,7 +367,6 @@ eth_vhost_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 	struct vhost_queue *r = q;
 	uint16_t i, nb_rx = 0;
 	uint16_t nb_receive = nb_bufs;
-	uint64_t nb_bytes = 0;
 
 	if (unlikely(rte_atomic32_read(&r->allow_queuing) == 0))
 		return 0;
@@ -411,11 +401,11 @@ eth_vhost_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 		if (r->internal->vlan_strip)
 			rte_vlan_strip(bufs[i]);
 
-		nb_bytes += bufs[i]->pkt_len;
-	}
+		r->stats.bytes += bufs[i]->pkt_len;
+		r->stats.xstats[VHOST_BYTE] += bufs[i]->pkt_len;
 
-	r->stats.bytes += nb_bytes;
-	vhost_update_packet_xstats(r, bufs, nb_rx, nb_bytes, 0);
+		vhost_update_single_packet_xstats(r, bufs[i]);
+	}
 
 out:
 	rte_atomic32_set(&r->while_queuing, 0);
@@ -471,16 +461,20 @@ eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 			break;
 	}
 
-	for (i = 0; likely(i < nb_tx); i++)
+	for (i = 0; likely(i < nb_tx); i++) {
 		nb_bytes += bufs[i]->pkt_len;
+		vhost_update_single_packet_xstats(r, bufs[i]);
+	}
 
 	nb_missed = nb_bufs - nb_tx;
 
 	r->stats.pkts += nb_tx;
 	r->stats.bytes += nb_bytes;
-	r->stats.missed_pkts += nb_bufs - nb_tx;
+	r->stats.missed_pkts += nb_missed;
 
-	vhost_update_packet_xstats(r, bufs, nb_tx, nb_bytes, nb_missed);
+	r->stats.xstats[VHOST_BYTE] += nb_bytes;
+	r->stats.xstats[VHOST_MISSED_PKT] += nb_missed;
+	r->stats.xstats[VHOST_UNICAST_PKT] += nb_missed;
 
 	/* According to RFC2863, ifHCOutUcastPkts, ifHCOutMulticastPkts and
 	 * ifHCOutBroadcastPkts counters are increased when packets are not