From patchwork Tue May 10 20:17:16 2022
X-Patchwork-Submitter: Maxime Coquelin
X-Patchwork-Id: 110999
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: dev@dpdk.org, chenbo.xia@intel.com, david.marchand@redhat.com, i.maximets@ovn.org
Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
Subject: [PATCH 1/5] vhost: add per-virtqueue statistics support
Date: Tue, 10 May 2022 22:17:16 +0200
Message-Id: <20220510201720.1262368-2-maxime.coquelin@redhat.com>
In-Reply-To: <20220510201720.1262368-1-maxime.coquelin@redhat.com>
References: <20220510201720.1262368-1-maxime.coquelin@redhat.com>

This patch introduces new APIs for the application to query and reset
per-virtqueue statistics. The patch also introduces generic counters.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
---
 doc/guides/prog_guide/vhost_lib.rst |  24 ++++++
 lib/vhost/rte_vhost.h               |  99 ++++++++++++++++++++++++
 lib/vhost/socket.c                  |   4 +-
 lib/vhost/version.map               |   4 +-
 lib/vhost/vhost.c                   | 112 +++++++++++++++++++++++++++-
 lib/vhost/vhost.h                   |  18 ++++-
 lib/vhost/virtio_net.c              |  55 ++++++++++++++
 7 files changed, 311 insertions(+), 5 deletions(-)

diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
index f287b76ebf..0f91215a90 100644
--- a/doc/guides/prog_guide/vhost_lib.rst
+++ b/doc/guides/prog_guide/vhost_lib.rst
@@ -130,6 +130,15 @@ The following is an overview of some key Vhost API functions:
   It is disabled by default.
+  - ``RTE_VHOST_USER_NET_STATS_ENABLE``
+
+    Per-virtqueue statistics collection will be enabled when this flag is set.
+    When enabled, the application may use rte_vhost_vring_stats_get_names() and
+    rte_vhost_vring_stats_get() to collect statistics, and
+    rte_vhost_vring_stats_reset() to reset them.
+
+    It is disabled by default.
+
 * ``rte_vhost_driver_set_features(path, features)``

   This function sets the feature bits the vhost-user driver supports. The
@@ -282,6 +291,21 @@ The following is an overview of some key Vhost API functions:
   Clear inflight packets which are submitted to DMA engine in vhost async data
   path. Completed packets are returned to applications through ``pkts``.

+* ``rte_vhost_vring_stats_get_names(int vid, uint16_t queue_id, struct rte_vhost_stat_name *names, unsigned int size)``
+
+  This function returns the names of the queue statistics. It requires
+  statistics collection to be enabled at registration time.
+
+* ``rte_vhost_vring_stats_get(int vid, uint16_t queue_id, struct rte_vhost_stat *stats, unsigned int n)``
+
+  This function returns the queue statistics. It requires statistics
+  collection to be enabled at registration time.
+
+* ``rte_vhost_vring_stats_reset(int vid, uint16_t queue_id)``
+
+  This function resets the queue statistics. It requires statistics
+  collection to be enabled at registration time.
+
 Vhost-user Implementations
 --------------------------

diff --git a/lib/vhost/rte_vhost.h b/lib/vhost/rte_vhost.h
index c733f857c6..266048f3e5 100644
--- a/lib/vhost/rte_vhost.h
+++ b/lib/vhost/rte_vhost.h
@@ -39,6 +39,7 @@ extern "C" {
 #define RTE_VHOST_USER_LINEARBUF_SUPPORT (1ULL << 6)
 #define RTE_VHOST_USER_ASYNC_COPY (1ULL << 7)
 #define RTE_VHOST_USER_NET_COMPLIANT_OL_FLAGS (1ULL << 8)
+#define RTE_VHOST_USER_NET_STATS_ENABLE (1ULL << 9)

 /* Features.
  */
 #ifndef VIRTIO_NET_F_GUEST_ANNOUNCE
@@ -321,6 +322,32 @@ struct rte_vhost_power_monitor_cond {
 	uint8_t match;
 };

+/** Maximum name length for the statistics counters */
+#define RTE_VHOST_STATS_NAME_SIZE 64
+
+/**
+ * Vhost virtqueue statistics structure
+ *
+ * This structure is used by rte_vhost_vring_stats_get() to provide
+ * virtqueue statistics to the calling application.
+ * It maps a name ID, corresponding to an index in the array returned
+ * by rte_vhost_vring_stats_get_names(), to a statistic value.
+ */
+struct rte_vhost_stat {
+	uint64_t id;    /**< The index in the stats name array. */
+	uint64_t value; /**< The statistic counter value. */
+};
+
+/**
+ * Vhost virtqueue statistic name element
+ *
+ * This structure is used by rte_vhost_vring_stats_get_names() to
+ * provide virtqueue statistics names to the calling application.
+ */
+struct rte_vhost_stat_name {
+	char name[RTE_VHOST_STATS_NAME_SIZE]; /**< The statistic name. */
+};
+
 /**
  * Convert guest physical address to host virtual address
  *
@@ -1063,6 +1090,78 @@ __rte_experimental
 int
 rte_vhost_slave_config_change(int vid, bool need_reply);

+/**
+ * Retrieve names of statistics of a Vhost virtqueue.
+ *
+ * There is an assumption that the 'name' and 'stats' arrays are matched
+ * by array index: name[i].name => stats[i].value
+ *
+ * @param vid
+ *   vhost device ID
+ * @param queue_id
+ *   vhost queue index
+ * @param name
+ *   Array of at least *size* elements to be filled.
+ *   If set to NULL, the function returns the required number of elements.
+ * @param size
+ *   The number of elements in the *name* array.
+ * @return
+ *   - Success if greater than 0 and lower or equal to *size*. The return value
+ *     indicates the number of elements filled in the *name* array.
+ *   - Failure if greater than *size*. The return value indicates the number of
+ *     elements in the *name* array that should be given to succeed.
+ *   - Failure if lower than 0.
 The device ID or queue ID is invalid, or
+ *     statistics collection is not enabled.
+ */
+__rte_experimental
+int
+rte_vhost_vring_stats_get_names(int vid, uint16_t queue_id,
+		struct rte_vhost_stat_name *name, unsigned int size);
+
+/**
+ * Retrieve statistics of a Vhost virtqueue.
+ *
+ * There is an assumption that the 'name' and 'stats' arrays are matched
+ * by array index: name[i].name => stats[i].value
+ *
+ * @param vid
+ *   vhost device ID
+ * @param queue_id
+ *   vhost queue index
+ * @param stats
+ *   A pointer to a table of structures of type rte_vhost_stat to be filled
+ *   with virtqueue statistics ids and values.
+ * @param n
+ *   The number of elements in the *stats* array.
+ * @return
+ *   - Success if greater than 0 and lower or equal to *n*. The return value
+ *     indicates the number of elements filled in the *stats* array.
+ *   - Failure if greater than *n*. The return value indicates the number of
+ *     elements in the *stats* array that should be given to succeed.
+ *   - Failure if lower than 0. The device ID or queue ID is invalid, or
+ *     statistics collection is not enabled.
+ */
+__rte_experimental
+int
+rte_vhost_vring_stats_get(int vid, uint16_t queue_id,
+		struct rte_vhost_stat *stats, unsigned int n);
+
+/**
+ * Reset statistics of a Vhost virtqueue.
+ *
+ * @param vid
+ *   vhost device ID
+ * @param queue_id
+ *   vhost queue index
+ * @return
+ *   - Success if 0. Statistics have been reset.
+ *   - Failure if lower than 0. The device ID or queue ID is invalid, or
+ *     statistics collection is not enabled.
+ */ +__rte_experimental +int +rte_vhost_vring_stats_reset(int vid, uint16_t queue_id); + #ifdef __cplusplus } #endif diff --git a/lib/vhost/socket.c b/lib/vhost/socket.c index b304339de9..baf6c338d3 100644 --- a/lib/vhost/socket.c +++ b/lib/vhost/socket.c @@ -42,6 +42,7 @@ struct vhost_user_socket { bool linearbuf; bool async_copy; bool net_compliant_ol_flags; + bool stats_enabled; /* * The "supported_features" indicates the feature bits the @@ -227,7 +228,7 @@ vhost_user_add_connection(int fd, struct vhost_user_socket *vsocket) vhost_set_ifname(vid, vsocket->path, size); vhost_setup_virtio_net(vid, vsocket->use_builtin_virtio_net, - vsocket->net_compliant_ol_flags); + vsocket->net_compliant_ol_flags, vsocket->stats_enabled); vhost_attach_vdpa_device(vid, vsocket->vdpa_dev); @@ -863,6 +864,7 @@ rte_vhost_driver_register(const char *path, uint64_t flags) vsocket->linearbuf = flags & RTE_VHOST_USER_LINEARBUF_SUPPORT; vsocket->async_copy = flags & RTE_VHOST_USER_ASYNC_COPY; vsocket->net_compliant_ol_flags = flags & RTE_VHOST_USER_NET_COMPLIANT_OL_FLAGS; + vsocket->stats_enabled = flags & RTE_VHOST_USER_NET_STATS_ENABLE; if (vsocket->async_copy && (flags & (RTE_VHOST_USER_IOMMU_SUPPORT | diff --git a/lib/vhost/version.map b/lib/vhost/version.map index 5841315386..34df41556d 100644 --- a/lib/vhost/version.map +++ b/lib/vhost/version.map @@ -90,7 +90,9 @@ EXPERIMENTAL { # added in 22.07 rte_vhost_async_get_inflight_thread_unsafe; - + rte_vhost_vring_stats_get_names; + rte_vhost_vring_stats_get; + rte_vhost_vring_stats_reset; }; INTERNAL { diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c index df0bb9d043..b22f10de72 100644 --- a/lib/vhost/vhost.c +++ b/lib/vhost/vhost.c @@ -24,6 +24,28 @@ struct virtio_net *vhost_devices[RTE_MAX_VHOST_DEVICE]; pthread_mutex_t vhost_dev_lock = PTHREAD_MUTEX_INITIALIZER; +struct vhost_vq_stats_name_off { + char name[RTE_VHOST_STATS_NAME_SIZE]; + unsigned int offset; +}; + +static const struct vhost_vq_stats_name_off 
vhost_vq_stat_strings[] = { + {"good_packets", offsetof(struct vhost_virtqueue, stats.packets)}, + {"good_bytes", offsetof(struct vhost_virtqueue, stats.bytes)}, + {"multicast_packets", offsetof(struct vhost_virtqueue, stats.multicast)}, + {"broadcast_packets", offsetof(struct vhost_virtqueue, stats.broadcast)}, + {"undersize_packets", offsetof(struct vhost_virtqueue, stats.size_bins[0])}, + {"size_64_packets", offsetof(struct vhost_virtqueue, stats.size_bins[1])}, + {"size_65_127_packets", offsetof(struct vhost_virtqueue, stats.size_bins[2])}, + {"size_128_255_packets", offsetof(struct vhost_virtqueue, stats.size_bins[3])}, + {"size_256_511_packets", offsetof(struct vhost_virtqueue, stats.size_bins[4])}, + {"size_512_1023_packets", offsetof(struct vhost_virtqueue, stats.size_bins[5])}, + {"size_1024_1518_packets", offsetof(struct vhost_virtqueue, stats.size_bins[6])}, + {"size_1519_max_packets", offsetof(struct vhost_virtqueue, stats.size_bins[7])}, +}; + +#define VHOST_NB_VQ_STATS RTE_DIM(vhost_vq_stat_strings) + /* Called with iotlb_lock read-locked */ uint64_t __vhost_iova_to_vva(struct virtio_net *dev, struct vhost_virtqueue *vq, @@ -755,7 +777,7 @@ vhost_set_ifname(int vid, const char *if_name, unsigned int if_len) } void -vhost_setup_virtio_net(int vid, bool enable, bool compliant_ol_flags) +vhost_setup_virtio_net(int vid, bool enable, bool compliant_ol_flags, bool stats_enabled) { struct virtio_net *dev = get_device(vid); @@ -770,6 +792,10 @@ vhost_setup_virtio_net(int vid, bool enable, bool compliant_ol_flags) dev->flags |= VIRTIO_DEV_LEGACY_OL_FLAGS; else dev->flags &= ~VIRTIO_DEV_LEGACY_OL_FLAGS; + if (stats_enabled) + dev->flags |= VIRTIO_DEV_STATS_ENABLED; + else + dev->flags &= ~VIRTIO_DEV_STATS_ENABLED; } void @@ -1971,5 +1997,89 @@ rte_vhost_get_monitor_addr(int vid, uint16_t queue_id, return 0; } + +int +rte_vhost_vring_stats_get_names(int vid, uint16_t queue_id, + struct rte_vhost_stat_name *name, unsigned int size) +{ + struct virtio_net *dev = 
get_device(vid); + unsigned int i; + + if (dev == NULL) + return -1; + + if (queue_id >= dev->nr_vring) + return -1; + + if (!(dev->flags & VIRTIO_DEV_STATS_ENABLED)) + return -1; + + if (name == NULL || size < VHOST_NB_VQ_STATS) + return VHOST_NB_VQ_STATS; + + for (i = 0; i < VHOST_NB_VQ_STATS; i++) + snprintf(name[i].name, sizeof(name[i].name), "%s_q%u_%s", + (queue_id & 1) ? "rx" : "tx", + queue_id / 2, vhost_vq_stat_strings[i].name); + + return VHOST_NB_VQ_STATS; +} + +int +rte_vhost_vring_stats_get(int vid, uint16_t queue_id, + struct rte_vhost_stat *stats, unsigned int n) +{ + struct virtio_net *dev = get_device(vid); + struct vhost_virtqueue *vq; + unsigned int i; + + if (dev == NULL) + return -1; + + if (queue_id >= dev->nr_vring) + return -1; + + if (!(dev->flags & VIRTIO_DEV_STATS_ENABLED)) + return -1; + + if (stats == NULL || n < VHOST_NB_VQ_STATS) + return VHOST_NB_VQ_STATS; + + vq = dev->virtqueue[queue_id]; + + rte_spinlock_lock(&vq->access_lock); + for (i = 0; i < VHOST_NB_VQ_STATS; i++) { + stats[i].value = + *(uint64_t *)(((char *)vq) + vhost_vq_stat_strings[i].offset); + stats[i].id = i; + } + rte_spinlock_unlock(&vq->access_lock); + + return VHOST_NB_VQ_STATS; +} + +int rte_vhost_vring_stats_reset(int vid, uint16_t queue_id) +{ + struct virtio_net *dev = get_device(vid); + struct vhost_virtqueue *vq; + + if (dev == NULL) + return -1; + + if (queue_id >= dev->nr_vring) + return -1; + + if (!(dev->flags & VIRTIO_DEV_STATS_ENABLED)) + return -1; + + vq = dev->virtqueue[queue_id]; + + rte_spinlock_lock(&vq->access_lock); + memset(&vq->stats, 0, sizeof(vq->stats)); + rte_spinlock_unlock(&vq->access_lock); + + return 0; +} + RTE_LOG_REGISTER_SUFFIX(vhost_config_log_level, config, INFO); RTE_LOG_REGISTER_SUFFIX(vhost_data_log_level, data, WARNING); diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h index a9edc271aa..01b97011aa 100644 --- a/lib/vhost/vhost.h +++ b/lib/vhost/vhost.h @@ -37,6 +37,8 @@ #define VIRTIO_DEV_FEATURES_FAILED ((uint32_t)1 << 4) 
/* Used to indicate that the virtio_net tx code should fill TX ol_flags */ #define VIRTIO_DEV_LEGACY_OL_FLAGS ((uint32_t)1 << 5) +/* Used to indicate the application has requested statistics collection */ +#define VIRTIO_DEV_STATS_ENABLED ((uint32_t)1 << 6) /* Backend value set by guest. */ #define VIRTIO_DEV_STOPPED -1 @@ -121,6 +123,18 @@ struct vring_used_elem_packed { uint32_t count; }; +/** + * Virtqueue statistics + */ +struct virtqueue_stats { + uint64_t packets; + uint64_t bytes; + uint64_t multicast; + uint64_t broadcast; + /* Size bins in array as RFC 2819, undersized [0], 64 [1], etc */ + uint64_t size_bins[8]; +}; + /** * iovec */ @@ -305,6 +319,7 @@ struct vhost_virtqueue { #define VIRTIO_UNINITIALIZED_NOTIF (-1) struct vhost_vring_addr ring_addrs; + struct virtqueue_stats stats; } __rte_cache_aligned; /* Virtio device status as per Virtio specification */ @@ -780,7 +795,7 @@ int alloc_vring_queue(struct virtio_net *dev, uint32_t vring_idx); void vhost_attach_vdpa_device(int vid, struct rte_vdpa_device *dev); void vhost_set_ifname(int, const char *if_name, unsigned int if_len); -void vhost_setup_virtio_net(int vid, bool enable, bool legacy_ol_flags); +void vhost_setup_virtio_net(int vid, bool enable, bool legacy_ol_flags, bool stats_enabled); void vhost_enable_extbuf(int vid); void vhost_enable_linearbuf(int vid); int vhost_enable_guest_notification(struct virtio_net *dev, @@ -957,5 +972,4 @@ mbuf_is_consumed(struct rte_mbuf *m) return true; } - #endif /* _VHOST_NET_CDEV_H_ */ diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c index 5f432b0d77..b1ea9fa4a5 100644 --- a/lib/vhost/virtio_net.c +++ b/lib/vhost/virtio_net.c @@ -47,6 +47,54 @@ is_valid_virt_queue_idx(uint32_t idx, int is_tx, uint32_t nr_vring) return (is_tx ^ (idx & 1)) == 0 && idx < nr_vring; } +/* + * This function must be called with virtqueue's access_lock taken. 
+ */ +static inline void +vhost_queue_stats_update(struct virtio_net *dev, struct vhost_virtqueue *vq, + struct rte_mbuf **pkts, uint16_t count) +{ + struct virtqueue_stats *stats = &vq->stats; + int i; + + if (!(dev->flags & VIRTIO_DEV_STATS_ENABLED)) + return; + + for (i = 0; i < count; i++) { + struct rte_ether_addr *ea; + struct rte_mbuf *pkt = pkts[i]; + uint32_t pkt_len = rte_pktmbuf_pkt_len(pkt); + + stats->packets++; + stats->bytes += pkt_len; + + if (pkt_len == 64) { + stats->size_bins[1]++; + } else if (pkt_len > 64 && pkt_len < 1024) { + uint32_t bin; + + /* count zeros, and offset into correct bin */ + bin = (sizeof(pkt_len) * 8) - __builtin_clz(pkt_len) - 5; + stats->size_bins[bin]++; + } else { + if (pkt_len < 64) + stats->size_bins[0]++; + else if (pkt_len < 1519) + stats->size_bins[6]++; + else + stats->size_bins[7]++; + } + + ea = rte_pktmbuf_mtod(pkt, struct rte_ether_addr *); + if (rte_is_multicast_ether_addr(ea)) { + if (rte_is_broadcast_ether_addr(ea)) + stats->broadcast++; + else + stats->multicast++; + } + } +} + static __rte_always_inline int64_t vhost_async_dma_transfer_one(struct virtio_net *dev, struct vhost_virtqueue *vq, int16_t dma_id, uint16_t vchan_id, uint16_t flag_idx, @@ -1509,6 +1557,8 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id, else nb_tx = virtio_dev_rx_split(dev, vq, pkts, count); + vhost_queue_stats_update(dev, vq, pkts, nb_tx); + out: if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)) vhost_user_iotlb_rd_unlock(vq); @@ -2064,6 +2114,8 @@ rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id, n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count, dma_id, vchan_id); + vhost_queue_stats_update(dev, vq, pkts, n_pkts_cpl); + out: rte_spinlock_unlock(&vq->access_lock); @@ -3113,6 +3165,7 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id, * learning table will get updated first. 
 	 */
 	pkts[0] = rarp_mbuf;
+	vhost_queue_stats_update(dev, vq, pkts, 1);
 	pkts++;
 	count -= 1;
 }
@@ -3129,6 +3182,8 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 		count = virtio_dev_tx_split_compliant(dev, vq, mbuf_pool, pkts, count);
 	}

+	vhost_queue_stats_update(dev, vq, pkts, count);
+
 out:
 	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
 		vhost_user_iotlb_rd_unlock(vq);

From patchwork Tue May 10 20:17:17 2022
X-Patchwork-Submitter: Maxime Coquelin
X-Patchwork-Id: 111001
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: dev@dpdk.org, chenbo.xia@intel.com, david.marchand@redhat.com, i.maximets@ovn.org
Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
Subject: [PATCH 2/5] net/vhost: move to Vhost library stats API
Date: Tue, 10 May 2022 22:17:17 +0200
Message-Id: <20220510201720.1262368-3-maxime.coquelin@redhat.com>
In-Reply-To: <20220510201720.1262368-1-maxime.coquelin@redhat.com>
References: <20220510201720.1262368-1-maxime.coquelin@redhat.com>

Now that the Vhost library exposes statistics APIs, this patch replaces
the Vhost PMD extended statistics implementation with calls to the new
API. It also makes it possible to expose counters that cannot be
implemented at the PMD level.
Signed-off-by: Maxime Coquelin Reviewed-by: Chenbo Xia --- drivers/net/vhost/rte_eth_vhost.c | 348 +++++++++++------------------- 1 file changed, 122 insertions(+), 226 deletions(-) diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c index a248a65df4..8dee629fb0 100644 --- a/drivers/net/vhost/rte_eth_vhost.c +++ b/drivers/net/vhost/rte_eth_vhost.c @@ -59,33 +59,10 @@ static struct rte_ether_addr base_eth_addr = { } }; -enum vhost_xstats_pkts { - VHOST_UNDERSIZE_PKT = 0, - VHOST_64_PKT, - VHOST_65_TO_127_PKT, - VHOST_128_TO_255_PKT, - VHOST_256_TO_511_PKT, - VHOST_512_TO_1023_PKT, - VHOST_1024_TO_1522_PKT, - VHOST_1523_TO_MAX_PKT, - VHOST_BROADCAST_PKT, - VHOST_MULTICAST_PKT, - VHOST_UNICAST_PKT, - VHOST_PKT, - VHOST_BYTE, - VHOST_MISSED_PKT, - VHOST_ERRORS_PKT, - VHOST_ERRORS_FRAGMENTED, - VHOST_ERRORS_JABBER, - VHOST_UNKNOWN_PROTOCOL, - VHOST_XSTATS_MAX, -}; - struct vhost_stats { uint64_t pkts; uint64_t bytes; uint64_t missed_pkts; - uint64_t xstats[VHOST_XSTATS_MAX]; }; struct vhost_queue { @@ -140,138 +117,92 @@ struct rte_vhost_vring_state { static struct rte_vhost_vring_state *vring_states[RTE_MAX_ETHPORTS]; -#define VHOST_XSTATS_NAME_SIZE 64 - -struct vhost_xstats_name_off { - char name[VHOST_XSTATS_NAME_SIZE]; - uint64_t offset; -}; - -/* [rx]_is prepended to the name string here */ -static const struct vhost_xstats_name_off vhost_rxport_stat_strings[] = { - {"good_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_PKT])}, - {"total_bytes", - offsetof(struct vhost_queue, stats.xstats[VHOST_BYTE])}, - {"missed_pkts", - offsetof(struct vhost_queue, stats.xstats[VHOST_MISSED_PKT])}, - {"broadcast_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_BROADCAST_PKT])}, - {"multicast_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_MULTICAST_PKT])}, - {"unicast_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_UNICAST_PKT])}, - {"undersize_packets", - offsetof(struct vhost_queue, 
stats.xstats[VHOST_UNDERSIZE_PKT])}, - {"size_64_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_64_PKT])}, - {"size_65_to_127_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_65_TO_127_PKT])}, - {"size_128_to_255_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_128_TO_255_PKT])}, - {"size_256_to_511_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_256_TO_511_PKT])}, - {"size_512_to_1023_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_512_TO_1023_PKT])}, - {"size_1024_to_1522_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_1024_TO_1522_PKT])}, - {"size_1523_to_max_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_1523_TO_MAX_PKT])}, - {"errors_with_bad_CRC", - offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_PKT])}, - {"fragmented_errors", - offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_FRAGMENTED])}, - {"jabber_errors", - offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_JABBER])}, - {"unknown_protos_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_UNKNOWN_PROTOCOL])}, -}; - -/* [tx]_ is prepended to the name string here */ -static const struct vhost_xstats_name_off vhost_txport_stat_strings[] = { - {"good_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_PKT])}, - {"total_bytes", - offsetof(struct vhost_queue, stats.xstats[VHOST_BYTE])}, - {"missed_pkts", - offsetof(struct vhost_queue, stats.xstats[VHOST_MISSED_PKT])}, - {"broadcast_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_BROADCAST_PKT])}, - {"multicast_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_MULTICAST_PKT])}, - {"unicast_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_UNICAST_PKT])}, - {"undersize_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_UNDERSIZE_PKT])}, - {"size_64_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_64_PKT])}, - {"size_65_to_127_packets", - offsetof(struct vhost_queue, 
stats.xstats[VHOST_65_TO_127_PKT])}, - {"size_128_to_255_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_128_TO_255_PKT])}, - {"size_256_to_511_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_256_TO_511_PKT])}, - {"size_512_to_1023_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_512_TO_1023_PKT])}, - {"size_1024_to_1522_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_1024_TO_1522_PKT])}, - {"size_1523_to_max_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_1523_TO_MAX_PKT])}, - {"errors_with_bad_CRC", - offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_PKT])}, -}; - -#define VHOST_NB_XSTATS_RXPORT (sizeof(vhost_rxport_stat_strings) / \ - sizeof(vhost_rxport_stat_strings[0])) - -#define VHOST_NB_XSTATS_TXPORT (sizeof(vhost_txport_stat_strings) / \ - sizeof(vhost_txport_stat_strings[0])) - static int vhost_dev_xstats_reset(struct rte_eth_dev *dev) { - struct vhost_queue *vq = NULL; - unsigned int i = 0; + struct vhost_queue *vq; + int ret, i; for (i = 0; i < dev->data->nb_rx_queues; i++) { vq = dev->data->rx_queues[i]; - if (!vq) - continue; - memset(&vq->stats, 0, sizeof(vq->stats)); + ret = rte_vhost_vring_stats_reset(vq->vid, vq->virtqueue_id); + if (ret < 0) + return ret; } + for (i = 0; i < dev->data->nb_tx_queues; i++) { vq = dev->data->tx_queues[i]; - if (!vq) - continue; - memset(&vq->stats, 0, sizeof(vq->stats)); + ret = rte_vhost_vring_stats_reset(vq->vid, vq->virtqueue_id); + if (ret < 0) + return ret; } return 0; } static int -vhost_dev_xstats_get_names(struct rte_eth_dev *dev __rte_unused, +vhost_dev_xstats_get_names(struct rte_eth_dev *dev, struct rte_eth_xstat_name *xstats_names, - unsigned int limit __rte_unused) + unsigned int limit) { - unsigned int t = 0; - int count = 0; - int nstats = VHOST_NB_XSTATS_RXPORT + VHOST_NB_XSTATS_TXPORT; + struct rte_vhost_stat_name *name; + struct vhost_queue *vq; + int ret, i, count = 0, nstats = 0; + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + 
+		vq = dev->data->rx_queues[i];
+		ret = rte_vhost_vring_stats_get_names(vq->vid, vq->virtqueue_id, NULL, 0);
+		if (ret < 0)
+			return ret;
-	if (!xstats_names)
+		nstats += ret;
+	}
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		vq = dev->data->tx_queues[i];
+		ret = rte_vhost_vring_stats_get_names(vq->vid, vq->virtqueue_id, NULL, 0);
+		if (ret < 0)
+			return ret;
+
+		nstats += ret;
+	}
+
+	if (!xstats_names || limit < (unsigned int)nstats)
 		return nstats;
-	for (t = 0; t < VHOST_NB_XSTATS_RXPORT; t++) {
-		snprintf(xstats_names[count].name,
-			 sizeof(xstats_names[count].name),
-			 "rx_%s", vhost_rxport_stat_strings[t].name);
-		count++;
-	}
-	for (t = 0; t < VHOST_NB_XSTATS_TXPORT; t++) {
-		snprintf(xstats_names[count].name,
-			 sizeof(xstats_names[count].name),
-			 "tx_%s", vhost_txport_stat_strings[t].name);
-		count++;
+
+	name = calloc(nstats, sizeof(*name));
+	if (!name)
+		return -1;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		vq = dev->data->rx_queues[i];
+		ret = rte_vhost_vring_stats_get_names(vq->vid, vq->virtqueue_id,
+				name + count, nstats - count);
+		if (ret < 0) {
+			free(name);
+			return ret;
+		}
+
+		count += ret;
 	}
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		vq = dev->data->tx_queues[i];
+		ret = rte_vhost_vring_stats_get_names(vq->vid, vq->virtqueue_id,
+				name + count, nstats - count);
+		if (ret < 0) {
+			free(name);
+			return ret;
+		}
+
+		count += ret;
+	}
+
+	for (i = 0; i < count; i++)
+		strncpy(xstats_names[i].name, name[i].name, RTE_ETH_XSTATS_NAME_SIZE);
+
+	free(name);
+
 	return count;
 }
 
@@ -279,86 +210,67 @@ static int
 vhost_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
 		     unsigned int n)
 {
-	unsigned int i;
-	unsigned int t;
-	unsigned int count = 0;
-	struct vhost_queue *vq = NULL;
-	unsigned int nxstats = VHOST_NB_XSTATS_RXPORT + VHOST_NB_XSTATS_TXPORT;
-
-	if (n < nxstats)
-		return nxstats;
-
-	for (t = 0; t < VHOST_NB_XSTATS_RXPORT; t++) {
-		xstats[count].value = 0;
-		for (i = 0; i < dev->data->nb_rx_queues; i++) {
-			vq = dev->data->rx_queues[i];
-			if (!vq)
-				continue;
-			xstats[count].value +=
-				*(uint64_t *)(((char *)vq)
-				+ vhost_rxport_stat_strings[t].offset);
-		}
-		xstats[count].id = count;
-		count++;
+	struct rte_vhost_stat *stats;
+	struct vhost_queue *vq;
+	int ret, i, count = 0, nstats = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		vq = dev->data->rx_queues[i];
+		ret = rte_vhost_vring_stats_get(vq->vid, vq->virtqueue_id, NULL, 0);
+		if (ret < 0)
+			return ret;
+
+		nstats += ret;
 	}
-	for (t = 0; t < VHOST_NB_XSTATS_TXPORT; t++) {
-		xstats[count].value = 0;
-		for (i = 0; i < dev->data->nb_tx_queues; i++) {
-			vq = dev->data->tx_queues[i];
-			if (!vq)
-				continue;
-			xstats[count].value +=
-				*(uint64_t *)(((char *)vq)
-				+ vhost_txport_stat_strings[t].offset);
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		vq = dev->data->tx_queues[i];
+		ret = rte_vhost_vring_stats_get(vq->vid, vq->virtqueue_id, NULL, 0);
+		if (ret < 0)
+			return ret;
+
+		nstats += ret;
+	}
+
+	if (!xstats || n < (unsigned int)nstats)
+		return nstats;
+
+	stats = calloc(nstats, sizeof(*stats));
+	if (!stats)
+		return -1;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		vq = dev->data->rx_queues[i];
+		ret = rte_vhost_vring_stats_get(vq->vid, vq->virtqueue_id,
+				stats + count, nstats - count);
+		if (ret < 0) {
+			free(stats);
+			return ret;
 		}
-		xstats[count].id = count;
-		count++;
+
+		count += ret;
 	}
-	return count;
-}
-static inline void
-vhost_count_xcast_packets(struct vhost_queue *vq,
-		struct rte_mbuf *mbuf)
-{
-	struct rte_ether_addr *ea = NULL;
-	struct vhost_stats *pstats = &vq->stats;
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		vq = dev->data->tx_queues[i];
+		ret = rte_vhost_vring_stats_get(vq->vid, vq->virtqueue_id,
+				stats + count, nstats - count);
+		if (ret < 0) {
+			free(stats);
+			return ret;
+		}
-	ea = rte_pktmbuf_mtod(mbuf, struct rte_ether_addr *);
-	if (rte_is_multicast_ether_addr(ea)) {
-		if (rte_is_broadcast_ether_addr(ea))
-			pstats->xstats[VHOST_BROADCAST_PKT]++;
-		else
-			pstats->xstats[VHOST_MULTICAST_PKT]++;
-	} else {
-		pstats->xstats[VHOST_UNICAST_PKT]++;
+		count += ret;
 	}
-}
-static __rte_always_inline void
-vhost_update_single_packet_xstats(struct vhost_queue *vq, struct rte_mbuf *buf)
-{
-	uint32_t pkt_len = 0;
-	uint64_t index;
-	struct vhost_stats *pstats = &vq->stats;
-
-	pstats->xstats[VHOST_PKT]++;
-	pkt_len = buf->pkt_len;
-	if (pkt_len == 64) {
-		pstats->xstats[VHOST_64_PKT]++;
-	} else if (pkt_len > 64 && pkt_len < 1024) {
-		index = (sizeof(pkt_len) * 8)
-			- __builtin_clz(pkt_len) - 5;
-		pstats->xstats[index]++;
-	} else {
-		if (pkt_len < 64)
-			pstats->xstats[VHOST_UNDERSIZE_PKT]++;
-		else if (pkt_len <= 1522)
-			pstats->xstats[VHOST_1024_TO_1522_PKT]++;
-		else if (pkt_len > 1522)
-			pstats->xstats[VHOST_1523_TO_MAX_PKT]++;
-	}
-	vhost_count_xcast_packets(vq, buf);
+	for (i = 0; i < count; i++) {
+		xstats[i].id = stats[i].id;
+		xstats[i].value = stats[i].value;
+	}
+
+	free(stats);
+
+	return nstats;
 }
 
 static uint16_t
@@ -402,9 +314,6 @@ eth_vhost_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 			rte_vlan_strip(bufs[i]);
 
 		r->stats.bytes += bufs[i]->pkt_len;
-		r->stats.xstats[VHOST_BYTE] += bufs[i]->pkt_len;
-
-		vhost_update_single_packet_xstats(r, bufs[i]);
 	}
 
 out:
@@ -461,10 +370,8 @@ eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 			break;
 	}
 
-	for (i = 0; likely(i < nb_tx); i++) {
+	for (i = 0; likely(i < nb_tx); i++)
 		nb_bytes += bufs[i]->pkt_len;
-		vhost_update_single_packet_xstats(r, bufs[i]);
-	}
 
 	nb_missed = nb_bufs - nb_tx;
 
@@ -472,17 +379,6 @@ eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 	r->stats.bytes += nb_bytes;
 	r->stats.missed_pkts += nb_missed;
 
-	r->stats.xstats[VHOST_BYTE] += nb_bytes;
-	r->stats.xstats[VHOST_MISSED_PKT] += nb_missed;
-	r->stats.xstats[VHOST_UNICAST_PKT] += nb_missed;
-
-	/* According to RFC2863, ifHCOutUcastPkts, ifHCOutMulticastPkts and
-	 * ifHCOutBroadcastPkts counters are increased when packets are not
-	 * transmitted successfully.
-	 */
-	for (i = nb_tx; i < nb_bufs; i++)
-		vhost_count_xcast_packets(r, bufs[i]);
-
 	for (i = 0; likely(i < nb_tx); i++)
 		rte_pktmbuf_free(bufs[i]);
 
 out:
@@ -1566,7 +1462,7 @@ rte_pmd_vhost_probe(struct rte_vdev_device *dev)
 	int ret = 0;
 	char *iface_name;
 	uint16_t queues;
-	uint64_t flags = 0;
+	uint64_t flags = RTE_VHOST_USER_NET_STATS_ENABLE;
 	uint64_t disable_flags = 0;
 	int client_mode = 0;
 	int iommu_support = 0;

From patchwork Tue May 10 20:17:18 2022
X-Patchwork-Id: 111002
From: Maxime Coquelin
To: dev@dpdk.org, chenbo.xia@intel.com, david.marchand@redhat.com, i.maximets@ovn.org
Cc: Maxime Coquelin
Subject: [PATCH 3/5] vhost: add statistics for guest notifications
Date: Tue, 10 May 2022 22:17:18 +0200
Message-Id: <20220510201720.1262368-4-maxime.coquelin@redhat.com>
In-Reply-To: <20220510201720.1262368-1-maxime.coquelin@redhat.com>

This patch adds a new virtqueue statistic for guest notifications. It makes it possible to deduce, from the hypervisor side, whether the corresponding guest Virtio device uses the kernel Virtio-net driver or the DPDK Virtio PMD.
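The notification counter lends itself to a simple heuristic on the monitoring side. Below is a minimal, self-contained sketch (not part of this series; the helper name and the 1% threshold are illustrative assumptions) of how two snapshots of the `guest_notifications` counter could be turned into a driver-type guess:

```c
#include <stdint.h>

/* Hypothetical helper: given two snapshots of the "guest_notifications"
 * counter taken over one polling interval, plus the number of packets
 * handled in that interval, guess whether the guest runs a polling
 * driver (DPDK Virtio PMD, near-zero notifications) or an interrupt
 * driven one (kernel Virtio-net, roughly one notification per batch).
 * Returns 1 for polling, 0 for interrupt driven, -1 if the queue was
 * idle and no conclusion is possible. The 1% threshold is arbitrary. */
static inline int
guest_uses_polling_driver(uint64_t notif_before, uint64_t notif_after,
			  uint64_t pkts_in_interval)
{
	uint64_t notifs = notif_after - notif_before;

	if (pkts_in_interval == 0)
		return -1; /* idle queue, cannot tell */

	/* fewer than 1% notifications per packet -> polling driver */
	return notifs * 100 < pkts_in_interval;
}
```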
Signed-off-by: Maxime Coquelin
Reviewed-by: Chenbo Xia
---
 lib/vhost/vhost.c | 1 +
 lib/vhost/vhost.h | 5 +++++
 2 files changed, 6 insertions(+)

diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index b22f10de72..fa708c1f9c 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -42,6 +42,7 @@ static const struct vhost_vq_stats_name_off vhost_vq_stat_strings[] = {
 	{"size_512_1023_packets", offsetof(struct vhost_virtqueue, stats.size_bins[5])},
 	{"size_1024_1518_packets", offsetof(struct vhost_virtqueue, stats.size_bins[6])},
 	{"size_1519_max_packets", offsetof(struct vhost_virtqueue, stats.size_bins[7])},
+	{"guest_notifications", offsetof(struct vhost_virtqueue, stats.guest_notifications)},
 };
 #define VHOST_NB_VQ_STATS RTE_DIM(vhost_vq_stat_strings)

diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index 01b97011aa..13c5c2266d 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -133,6 +133,7 @@ struct virtqueue_stats {
 	uint64_t broadcast;
 	/* Size bins in array as RFC 2819, undersized [0], 64 [1], etc */
 	uint64_t size_bins[8];
+	uint64_t guest_notifications;
 };
 
 /**
@@ -871,6 +872,8 @@ vhost_vring_call_split(struct virtio_net *dev, struct vhost_virtqueue *vq)
 				(vq->callfd >= 0)) ||
 				unlikely(!signalled_used_valid)) {
 			eventfd_write(vq->callfd, (eventfd_t) 1);
+			if (dev->flags & VIRTIO_DEV_STATS_ENABLED)
+				vq->stats.guest_notifications++;
 			if (dev->notify_ops->guest_notified)
 				dev->notify_ops->guest_notified(dev->vid);
 		}
@@ -879,6 +882,8 @@ vhost_vring_call_split(struct virtio_net *dev, struct vhost_virtqueue *vq)
 	if (!(vq->avail->flags & VRING_AVAIL_F_NO_INTERRUPT) &&
 			(vq->callfd >= 0)) {
 		eventfd_write(vq->callfd, (eventfd_t)1);
+		if (dev->flags & VIRTIO_DEV_STATS_ENABLED)
+			vq->stats.guest_notifications++;
 		if (dev->notify_ops->guest_notified)
 			dev->notify_ops->guest_notified(dev->vid);
 	}

From patchwork Tue May 10 20:17:19 2022
X-Patchwork-Id: 111004
From: Maxime Coquelin
To: dev@dpdk.org, chenbo.xia@intel.com, david.marchand@redhat.com, i.maximets@ovn.org
Cc: Maxime Coquelin
Subject: [PATCH 4/5] vhost: add statistics for IOTLB
Date: Tue, 10 May 2022 22:17:19 +0200
Message-Id: <20220510201720.1262368-5-maxime.coquelin@redhat.com>
In-Reply-To: <20220510201720.1262368-1-maxime.coquelin@redhat.com>

This patch adds statistics for IOTLB hits and misses.

Signed-off-by: Maxime Coquelin
Reviewed-by: Chenbo Xia
---
 lib/vhost/vhost.c | 10 +++++++++-
 lib/vhost/vhost.h |  2 ++
 2 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index fa708c1f9c..721b3a3247 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -43,6 +43,8 @@ static const struct vhost_vq_stats_name_off vhost_vq_stat_strings[] = {
 	{"size_1024_1518_packets", offsetof(struct vhost_virtqueue, stats.size_bins[6])},
 	{"size_1519_max_packets", offsetof(struct vhost_virtqueue, stats.size_bins[7])},
 	{"guest_notifications", offsetof(struct vhost_virtqueue, stats.guest_notifications)},
+	{"iotlb_hits", offsetof(struct vhost_virtqueue, stats.iotlb_hits)},
+	{"iotlb_misses", offsetof(struct vhost_virtqueue, stats.iotlb_misses)},
 };
 #define VHOST_NB_VQ_STATS RTE_DIM(vhost_vq_stat_strings)
 
@@ -60,8 +62,14 @@ __vhost_iova_to_vva(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	tmp_size = *size;
 
 	vva = vhost_user_iotlb_cache_find(vq, iova, &tmp_size, perm);
-	if (tmp_size == *size)
+	if (tmp_size == *size) {
+		if (dev->flags & VIRTIO_DEV_STATS_ENABLED)
+			vq->stats.iotlb_hits++;
 		return vva;
+	}
+
+	if (dev->flags & VIRTIO_DEV_STATS_ENABLED)
+		vq->stats.iotlb_misses++;
 
 	iova += tmp_size;

diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index 13c5c2266d..872675207e 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -134,6 +134,8 @@ struct virtqueue_stats {
 	/* Size bins in array as RFC 2819, undersized [0], 64 [1], etc */
 	uint64_t size_bins[8];
 	uint64_t guest_notifications;
+	uint64_t iotlb_hits;
+	uint64_t iotlb_misses;
 };
 
 /**

From patchwork Tue May 10 20:17:20 2022
X-Patchwork-Id: 111003
From: Maxime Coquelin
To: dev@dpdk.org, chenbo.xia@intel.com, david.marchand@redhat.com, i.maximets@ovn.org
Cc: Maxime Coquelin
Subject: [PATCH 5/5] vhost: add statistics for in-flight packets
Date: Tue, 10 May 2022 22:17:20 +0200
Message-Id: <20220510201720.1262368-6-maxime.coquelin@redhat.com>
In-Reply-To: <20220510201720.1262368-1-maxime.coquelin@redhat.com>

This patch adds statistics for in-flight packet submission and completion when Vhost async mode is used.
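Since both counters are cumulative, the number of packets currently in flight can be derived from a single stats snapshot. A minimal sketch follows; the types and helper are local stand-ins for illustration, not the DPDK definitions:

```c
#include <stdint.h>
#include <string.h>

/* Local stand-in for a (name, value) stat pair, for this sketch only. */
struct vq_stat {
	const char *name;
	uint64_t value;
};

/* Derive the current number of in-flight async packets from the two
 * cumulative counters exposed by this patch. Returns 0 if either
 * counter name is absent from the snapshot. */
static uint64_t
async_inflight(const struct vq_stat *stats, unsigned int n)
{
	uint64_t submitted = 0, completed = 0;
	unsigned int i;

	for (i = 0; i < n; i++) {
		if (strcmp(stats[i].name, "inflight_submitted") == 0)
			submitted = stats[i].value;
		else if (strcmp(stats[i].name, "inflight_completed") == 0)
			completed = stats[i].value;
	}

	/* submitted always grows at least as fast as completed */
	return submitted - completed;
}
```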
Signed-off-by: Maxime Coquelin
Reviewed-by: Chenbo Xia
---
 lib/vhost/vhost.c      | 2 ++
 lib/vhost/vhost.h      | 2 ++
 lib/vhost/virtio_net.c | 6 ++++++
 3 files changed, 10 insertions(+)

diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index 721b3a3247..d9d31b2d03 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -45,6 +45,8 @@ static const struct vhost_vq_stats_name_off vhost_vq_stat_strings[] = {
 	{"guest_notifications", offsetof(struct vhost_virtqueue, stats.guest_notifications)},
 	{"iotlb_hits", offsetof(struct vhost_virtqueue, stats.iotlb_hits)},
 	{"iotlb_misses", offsetof(struct vhost_virtqueue, stats.iotlb_misses)},
+	{"inflight_submitted", offsetof(struct vhost_virtqueue, stats.inflight_submitted)},
+	{"inflight_completed", offsetof(struct vhost_virtqueue, stats.inflight_completed)},
 };
 #define VHOST_NB_VQ_STATS RTE_DIM(vhost_vq_stat_strings)

diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index 872675207e..1573d0afe9 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -136,6 +136,8 @@ struct virtqueue_stats {
 	uint64_t guest_notifications;
 	uint64_t iotlb_hits;
 	uint64_t iotlb_misses;
+	uint64_t inflight_submitted;
+	uint64_t inflight_completed;
 };
 
 /**

diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index b1ea9fa4a5..c8905c770a 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -2115,6 +2115,7 @@ rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
 	n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count,
 			dma_id, vchan_id);
 	vhost_queue_stats_update(dev, vq, pkts, n_pkts_cpl);
+	vq->stats.inflight_completed += n_pkts_cpl;
 
 out:
 	rte_spinlock_unlock(&vq->access_lock);
@@ -2158,6 +2159,9 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
 	n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count,
 			dma_id, vchan_id);
+	vhost_queue_stats_update(dev, vq, pkts, n_pkts_cpl);
+	vq->stats.inflight_completed += n_pkts_cpl;
+
 	return n_pkts_cpl;
 }
 
@@ -2207,6 +2211,8 @@ virtio_dev_rx_async_submit(struct virtio_net *dev, uint16_t queue_id,
 	nb_tx = virtio_dev_rx_async_submit_split(dev, vq, queue_id,
 			pkts, count, dma_id, vchan_id);
 
+	vq->stats.inflight_submitted += nb_tx;
+
 out:
 	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
 		vhost_user_iotlb_rd_unlock(vq);
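Applications consume all of these counters through the two-call sizing convention visible in the PMD conversion in patch 1: probe with NULL to learn the stat count, allocate, then fetch. The sketch below is self-contained so it can run anywhere; the stub function stands in for rte_vhost_vring_stats_get() (same return convention), the structures loosely mirror the ones added in rte_vhost.h, and all counter values are fake:

```c
#include <stdint.h>
#include <stdlib.h>

/* Local stand-in for struct rte_vhost_stat (illustrative). */
struct vring_stat {
	uint64_t id;
	uint64_t value;
};

/* Stub mimicking rte_vhost_vring_stats_get(): returns the number of
 * stats for (vid, queue_id); fills 'stats' only when it is non-NULL
 * and large enough. The real library call follows the same convention. */
static int
vring_stats_get(int vid, uint16_t queue_id, struct vring_stat *stats,
		unsigned int n)
{
	const unsigned int nstats = 2;
	unsigned int i;

	(void)vid;
	(void)queue_id;
	if (stats == NULL || n < nstats)
		return (int)nstats;
	for (i = 0; i < nstats; i++) {
		stats[i].id = i;
		stats[i].value = 100 * (i + 1); /* fake counter values */
	}
	return (int)nstats;
}

/* The two-call pattern: probe with NULL to size the array, allocate,
 * then fetch. On success, '*out' owns a malloc'd array the caller must
 * free; returns the number of stats read, or a negative error. */
static int
fetch_vring_stats(int vid, uint16_t queue_id, struct vring_stat **out)
{
	int n = vring_stats_get(vid, queue_id, NULL, 0);
	struct vring_stat *stats;

	if (n <= 0)
		return n;

	stats = calloc((unsigned int)n, sizeof(*stats));
	if (stats == NULL)
		return -1;

	n = vring_stats_get(vid, queue_id, stats, (unsigned int)n);
	if (n < 0) {
		free(stats);
		return n;
	}

	*out = stats;
	return n;
}
```

The same pattern applies to rte_vhost_vring_stats_get_names(); a real application would run both once at setup to build its name-to-id mapping, then call only the value fetch in its polling loop.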