From patchwork Thu Mar 24 12:46:34 2022
X-Patchwork-Submitter: Maxime Coquelin
X-Patchwork-Id: 108840
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Maxime Coquelin
To: dev@dpdk.org, chenbo.xia@intel.com, david.marchand@redhat.com, i.maximets@ovn.org
Cc: Maxime Coquelin, stable@dpdk.org
Subject: [PATCH v2 1/5] vhost: fix missing virtqueue lock protection
Date: Thu, 24 Mar 2022 13:46:34 +0100
Message-Id: <20220324124638.32672-2-maxime.coquelin@redhat.com>
In-Reply-To: <20220324124638.32672-1-maxime.coquelin@redhat.com>
References: <20220324124638.32672-1-maxime.coquelin@redhat.com>

This patch ensures that the virtqueue metadata are not modified while rte_vhost_vring_call() is executed.
Fixes: 6c299bb7322f ("vhost: introduce vring call API") Cc: stable@dpdk.org Signed-off-by: Maxime Coquelin Reviewed-by: David Marchand --- lib/vhost/vhost.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c index bc88148347..2f96a28dac 100644 --- a/lib/vhost/vhost.c +++ b/lib/vhost/vhost.c @@ -1291,11 +1291,15 @@ rte_vhost_vring_call(int vid, uint16_t vring_idx) if (!vq) return -1; + rte_spinlock_lock(&vq->access_lock); + if (vq_is_packed(dev)) vhost_vring_call_packed(dev, vq); else vhost_vring_call_split(dev, vq); + rte_spinlock_unlock(&vq->access_lock); + return 0; }
From patchwork Thu Mar 24 12:46:35 2022
X-Patchwork-Submitter: Maxime Coquelin
X-Patchwork-Id: 108839
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Maxime Coquelin
To: dev@dpdk.org, chenbo.xia@intel.com, david.marchand@redhat.com, i.maximets@ovn.org
Cc: Maxime Coquelin
Subject: [PATCH v2 2/5] vhost: add per-virtqueue statistics support
Date: Thu, 24 Mar 2022 13:46:35 +0100
Message-Id: <20220324124638.32672-3-maxime.coquelin@redhat.com>
In-Reply-To: <20220324124638.32672-1-maxime.coquelin@redhat.com>
References: <20220324124638.32672-1-maxime.coquelin@redhat.com>

This patch introduces new APIs for the application to query and reset per-virtqueue statistics. The patch also introduces generic counters.
Signed-off-by: Maxime Coquelin --- doc/guides/prog_guide/vhost_lib.rst | 24 ++++++ lib/vhost/rte_vhost.h | 99 +++++++++++++++++++++++++ lib/vhost/socket.c | 4 +- lib/vhost/version.map | 5 ++ lib/vhost/vhost.c | 109 +++++++++++++++++++++++++++- lib/vhost/vhost.h | 18 ++++- lib/vhost/virtio_net.c | 53 ++++++++++++++ 7 files changed, 308 insertions(+), 4 deletions(-) diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst index 886f8f5e72..8cd1c3ddc7 100644 --- a/doc/guides/prog_guide/vhost_lib.rst +++ b/doc/guides/prog_guide/vhost_lib.rst @@ -130,6 +130,15 @@ The following is an overview of some key Vhost API functions: It is disabled by default. + - ``RTE_VHOST_USER_NET_STATS_ENABLE`` + + Per-virtqueue statistics collection will be enabled when this flag is set. + When enabled, the application may use rte_vhost_vring_stats_get_names() and + rte_vhost_vring_stats_get() to collect statistics, and + rte_vhost_vring_stats_reset() to reset them. + + It is disabled by default. + * ``rte_vhost_driver_set_features(path, features)`` This function sets the feature bits the vhost-user driver supports. The @@ -276,6 +285,21 @@ The following is an overview of some key Vhost API functions: Clear inflight packets which are submitted to DMA engine in vhost async data path. Completed packets are returned to applications through ``pkts``. +* ``rte_vhost_vring_stats_get_names(int vid, uint16_t queue_id, struct rte_vhost_stat_name *names, unsigned int size)`` + + This function returns the names of the queue statistics. It requires + statistics collection to be enabled at registration time. + +* ``rte_vhost_vring_stats_get(int vid, uint16_t queue_id, struct rte_vhost_stat *stats, unsigned int n)`` + + This function returns the queue statistics. It requires statistics + collection to be enabled at registration time. + +* ``rte_vhost_vring_stats_reset(int vid, uint16_t queue_id)`` + + This function resets the queue statistics.
It requires statistics + collection to be enabled at registration time. + Vhost-user Implementations -------------------------- diff --git a/lib/vhost/rte_vhost.h b/lib/vhost/rte_vhost.h index c733f857c6..65cfd90bae 100644 --- a/lib/vhost/rte_vhost.h +++ b/lib/vhost/rte_vhost.h @@ -39,6 +39,7 @@ extern "C" { #define RTE_VHOST_USER_LINEARBUF_SUPPORT (1ULL << 6) #define RTE_VHOST_USER_ASYNC_COPY (1ULL << 7) #define RTE_VHOST_USER_NET_COMPLIANT_OL_FLAGS (1ULL << 8) +#define RTE_VHOST_USER_NET_STATS_ENABLE (1ULL << 9) /* Features. */ #ifndef VIRTIO_NET_F_GUEST_ANNOUNCE @@ -321,6 +322,32 @@ struct rte_vhost_power_monitor_cond { uint8_t match; }; +/** Maximum name length for the statistics counters */ +#define RTE_VHOST_STATS_NAME_SIZE 64 + +/** + * Vhost virtqueue statistics structure + * + * This structure is used by rte_vhost_vring_stats_get() to provide + * virtqueue statistics to the calling application. + * It maps a name ID, corresponding to an index in the array returned + * by rte_vhost_vring_stats_get_names(), to a statistic value. + */ +struct rte_vhost_stat { + uint64_t id; /**< The index in the xstats name array. */ + uint64_t value; /**< The statistic counter value. */ +}; + +/** + * Vhost virtqueue statistic name element + * + * This structure is used by rte_vhost_vring_stats_get_names() to + * provide virtqueue statistics names to the calling application. + */ +struct rte_vhost_stat_name { + char name[RTE_VHOST_STATS_NAME_SIZE]; /**< The statistic name. */ +}; + /** * Convert guest physical address to host virtual address * @@ -1063,6 +1090,78 @@ __rte_experimental int rte_vhost_slave_config_change(int vid, bool need_reply); +/** + * Retrieve names of statistics of a Vhost virtqueue.
+ * + * There is an assumption that 'stat_names' and 'stats' arrays are matched + * by array index: stats_names[i].name => stats[i].value + * + * @param vid + * vhost device ID + * @param queue_id + * vhost queue index + * @param stats_names + * An array of at least *size* elements to be filled. + * If set to NULL, the function returns the required number of elements. + * @param size + * The number of elements in the stats_names array. + * @return + * - Success if greater than 0 and lower or equal to *size*. The return value + * indicates the number of elements filled in the *names* array. + * - Failure if greater than *size*. The return value indicates the number of + * elements of the *names* array that should be given to succeed. + * - Failure if lower than 0. The device ID or queue ID is invalid, or + * statistics collection is not enabled. + */ +__rte_experimental +int +rte_vhost_vring_stats_get_names(int vid, uint16_t queue_id, + struct rte_vhost_stat_name *name, unsigned int size); + +/** + * Retrieve statistics of a Vhost virtqueue. + * + * There is an assumption that 'stat_names' and 'stats' arrays are matched + * by array index: stats_names[i].name => stats[i].value + * + * @param vid + * vhost device ID + * @param queue_id + * vhost queue index + * @param stats + * A pointer to a table of structures of type rte_vhost_stat to be filled with + * virtqueue statistics ids and values. + * @param n + * The number of elements in the stats array. + * @return + * - Success if greater than 0 and lower or equal to *n*. The return value + * indicates the number of elements filled in the *stats* array. + * - Failure if greater than *n*. The return value indicates the number of + * elements of the *stats* array that should be given to succeed. + * - Failure if lower than 0. The device ID or queue ID is invalid, or + * statistics collection is not enabled.
+ */ +__rte_experimental +int +rte_vhost_vring_stats_get(int vid, uint16_t queue_id, + struct rte_vhost_stat *stats, unsigned int n); + +/** + * Reset statistics of a Vhost virtqueue. + * + * @param vid + * vhost device ID + * @param queue_id + * vhost queue index + * @return + * - Success if 0. Statistics have been reset. + * - Failure if lower than 0. The device ID or queue ID is invalid, or + * statistics collection is not enabled. + */ +__rte_experimental +int +rte_vhost_vring_stats_reset(int vid, uint16_t queue_id); + #ifdef __cplusplus } #endif diff --git a/lib/vhost/socket.c b/lib/vhost/socket.c index b304339de9..baf6c338d3 100644 --- a/lib/vhost/socket.c +++ b/lib/vhost/socket.c @@ -42,6 +42,7 @@ struct vhost_user_socket { bool linearbuf; bool async_copy; bool net_compliant_ol_flags; + bool stats_enabled; /* * The "supported_features" indicates the feature bits the @@ -227,7 +228,7 @@ vhost_user_add_connection(int fd, struct vhost_user_socket *vsocket) vhost_set_ifname(vid, vsocket->path, size); vhost_setup_virtio_net(vid, vsocket->use_builtin_virtio_net, - vsocket->net_compliant_ol_flags); + vsocket->net_compliant_ol_flags, vsocket->stats_enabled); vhost_attach_vdpa_device(vid, vsocket->vdpa_dev); @@ -863,6 +864,7 @@ rte_vhost_driver_register(const char *path, uint64_t flags) vsocket->linearbuf = flags & RTE_VHOST_USER_LINEARBUF_SUPPORT; vsocket->async_copy = flags & RTE_VHOST_USER_ASYNC_COPY; vsocket->net_compliant_ol_flags = flags & RTE_VHOST_USER_NET_COMPLIANT_OL_FLAGS; + vsocket->stats_enabled = flags & RTE_VHOST_USER_NET_STATS_ENABLE; if (vsocket->async_copy && (flags & (RTE_VHOST_USER_IOMMU_SUPPORT | diff --git a/lib/vhost/version.map b/lib/vhost/version.map index 0a66c5840c..e33a215a58 100644 --- a/lib/vhost/version.map +++ b/lib/vhost/version.map @@ -87,6 +87,11 @@ EXPERIMENTAL { # added in 22.03 rte_vhost_async_dma_configure; + + # added in 22.07 + rte_vhost_vring_stats_get_names; + rte_vhost_vring_stats_get; + rte_vhost_vring_stats_reset; }; 
INTERNAL { diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c index 2f96a28dac..80ed024019 100644 --- a/lib/vhost/vhost.c +++ b/lib/vhost/vhost.c @@ -24,6 +24,28 @@ struct virtio_net *vhost_devices[RTE_MAX_VHOST_DEVICE]; pthread_mutex_t vhost_dev_lock = PTHREAD_MUTEX_INITIALIZER; +struct vhost_vq_stats_name_off { + char name[RTE_VHOST_STATS_NAME_SIZE]; + unsigned int offset; +}; + +static const struct vhost_vq_stats_name_off vhost_vq_stat_strings[] = { + {"good_packets", offsetof(struct vhost_virtqueue, stats.packets)}, + {"good_bytes", offsetof(struct vhost_virtqueue, stats.bytes)}, + {"multicast_packets", offsetof(struct vhost_virtqueue, stats.multicast)}, + {"broadcast_packets", offsetof(struct vhost_virtqueue, stats.broadcast)}, + {"undersize_packets", offsetof(struct vhost_virtqueue, stats.size_bins[0])}, + {"size_64_packets", offsetof(struct vhost_virtqueue, stats.size_bins[1])}, + {"size_65_127_packets", offsetof(struct vhost_virtqueue, stats.size_bins[2])}, + {"size_128_255_packets", offsetof(struct vhost_virtqueue, stats.size_bins[3])}, + {"size_256_511_packets", offsetof(struct vhost_virtqueue, stats.size_bins[4])}, + {"size_512_1023_packets", offsetof(struct vhost_virtqueue, stats.size_bins[5])}, + {"size_1024_1518_packets", offsetof(struct vhost_virtqueue, stats.size_bins[6])}, + {"size_1519_max_packets", offsetof(struct vhost_virtqueue, stats.size_bins[7])}, +}; + +#define VHOST_NB_VQ_STATS RTE_DIM(vhost_vq_stat_strings) + /* Called with iotlb_lock read-locked */ uint64_t __vhost_iova_to_vva(struct virtio_net *dev, struct vhost_virtqueue *vq, @@ -755,7 +777,7 @@ vhost_set_ifname(int vid, const char *if_name, unsigned int if_len) } void -vhost_setup_virtio_net(int vid, bool enable, bool compliant_ol_flags) +vhost_setup_virtio_net(int vid, bool enable, bool compliant_ol_flags, bool stats_enabled) { struct virtio_net *dev = get_device(vid); @@ -770,6 +792,10 @@ vhost_setup_virtio_net(int vid, bool enable, bool compliant_ol_flags) dev->flags |= 
VIRTIO_DEV_LEGACY_OL_FLAGS; else dev->flags &= ~VIRTIO_DEV_LEGACY_OL_FLAGS; + if (stats_enabled) + dev->flags |= VIRTIO_DEV_STATS_ENABLED; + else + dev->flags &= ~VIRTIO_DEV_STATS_ENABLED; } void @@ -1945,5 +1971,86 @@ rte_vhost_get_monitor_addr(int vid, uint16_t queue_id, return 0; } + +int +rte_vhost_vring_stats_get_names(int vid, uint16_t queue_id, + struct rte_vhost_stat_name *name, unsigned int size) +{ + struct virtio_net *dev = get_device(vid); + unsigned int i; + + if (dev == NULL) + return -1; + + if (queue_id >= dev->nr_vring) + return -1; + + if (!(dev->flags & VIRTIO_DEV_STATS_ENABLED)) + return -1; + + if (name == NULL || size < VHOST_NB_VQ_STATS) + return VHOST_NB_VQ_STATS; + + for (i = 0; i < VHOST_NB_VQ_STATS; i++) + snprintf(name[i].name, sizeof(name[i].name), "%s_q%u_%s", + (queue_id & 1) ? "rx" : "tx", + queue_id / 2, vhost_vq_stat_strings[i].name); + + return VHOST_NB_VQ_STATS; +} + +int +rte_vhost_vring_stats_get(int vid, uint16_t queue_id, + struct rte_vhost_stat *stats, unsigned int n) +{ + struct virtio_net *dev = get_device(vid); + struct vhost_virtqueue *vq; + unsigned int i; + + if (dev == NULL) + return -1; + + if (queue_id >= dev->nr_vring) + return -1; + + if (!(dev->flags & VIRTIO_DEV_STATS_ENABLED)) + return -1; + + if (stats == NULL || n < VHOST_NB_VQ_STATS) + return VHOST_NB_VQ_STATS; + + vq = dev->virtqueue[queue_id]; + + rte_spinlock_lock(&vq->access_lock); + for (i = 0; i < VHOST_NB_VQ_STATS; i++) { + stats[i].value = + *(uint64_t *)(((char *)vq) + vhost_vq_stat_strings[i].offset); + stats[i].id = i; + } + rte_spinlock_unlock(&vq->access_lock); + + return VHOST_NB_VQ_STATS; +} + +int rte_vhost_vring_stats_reset(int vid, uint16_t queue_id) +{ + struct virtio_net *dev = get_device(vid); + struct vhost_virtqueue *vq; + + if (dev == NULL) + return -1; + + if (queue_id >= dev->nr_vring) + return -1; + + vq = dev->virtqueue[queue_id]; + + rte_spinlock_lock(&vq->access_lock); + memset(&vq->stats, 0, sizeof(vq->stats)); + 
rte_spinlock_unlock(&vq->access_lock); + + return 0; +} + RTE_LOG_REGISTER_SUFFIX(vhost_config_log_level, config, INFO); RTE_LOG_REGISTER_SUFFIX(vhost_data_log_level, data, WARNING); diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h index a9edc271aa..01b97011aa 100644 --- a/lib/vhost/vhost.h +++ b/lib/vhost/vhost.h @@ -37,6 +37,8 @@ #define VIRTIO_DEV_FEATURES_FAILED ((uint32_t)1 << 4) /* Used to indicate that the virtio_net tx code should fill TX ol_flags */ #define VIRTIO_DEV_LEGACY_OL_FLAGS ((uint32_t)1 << 5) +/* Used to indicate the application has requested statistics collection */ +#define VIRTIO_DEV_STATS_ENABLED ((uint32_t)1 << 6) /* Backend value set by guest. */ #define VIRTIO_DEV_STOPPED -1 @@ -121,6 +123,18 @@ struct vring_used_elem_packed { uint32_t count; }; +/** + * Virtqueue statistics + */ +struct virtqueue_stats { + uint64_t packets; + uint64_t bytes; + uint64_t multicast; + uint64_t broadcast; + /* Size bins in array as RFC 2819, undersized [0], 64 [1], etc */ + uint64_t size_bins[8]; +}; + /** * iovec */ @@ -305,6 +319,7 @@ struct vhost_virtqueue { #define VIRTIO_UNINITIALIZED_NOTIF (-1) struct vhost_vring_addr ring_addrs; + struct virtqueue_stats stats; } __rte_cache_aligned; /* Virtio device status as per Virtio specification */ @@ -780,7 +795,7 @@ int alloc_vring_queue(struct virtio_net *dev, uint32_t vring_idx); void vhost_attach_vdpa_device(int vid, struct rte_vdpa_device *dev); void vhost_set_ifname(int, const char *if_name, unsigned int if_len); -void vhost_setup_virtio_net(int vid, bool enable, bool legacy_ol_flags); +void vhost_setup_virtio_net(int vid, bool enable, bool legacy_ol_flags, bool stats_enabled); void vhost_enable_extbuf(int vid); void vhost_enable_linearbuf(int vid); int vhost_enable_guest_notification(struct virtio_net *dev, @@ -957,5 +972,4 @@ mbuf_is_consumed(struct rte_mbuf *m) return true; } - #endif /* _VHOST_NET_CDEV_H_ */ diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c index 5f432b0d77..96b894dffe 
100644 --- a/lib/vhost/virtio_net.c +++ b/lib/vhost/virtio_net.c @@ -47,6 +47,54 @@ is_valid_virt_queue_idx(uint32_t idx, int is_tx, uint32_t nr_vring) return (is_tx ^ (idx & 1)) == 0 && idx < nr_vring; } +/* + * This function must be called with virtqueue's access_lock taken. + */ +static inline void +vhost_queue_stats_update(struct virtio_net *dev, struct vhost_virtqueue *vq, + struct rte_mbuf **pkts, uint16_t count) +{ + struct virtqueue_stats *stats = &vq->stats; + int i; + + if (!(dev->flags & VIRTIO_DEV_STATS_ENABLED)) + return; + + for (i = 0; i < count; i++) { + struct rte_ether_addr *ea; + struct rte_mbuf *pkt = pkts[i]; + uint32_t pkt_len = rte_pktmbuf_pkt_len(pkt); + + stats->packets++; + stats->bytes += pkt_len; + + if (pkt_len == 64) { + stats->size_bins[1]++; + } else if (pkt_len > 64 && pkt_len < 1024) { + uint32_t bin; + + /* count zeros, and offset into correct bin */ + bin = (sizeof(pkt_len) * 8) - __builtin_clz(pkt_len) - 5; + stats->size_bins[bin]++; + } else { + if (pkt_len < 64) + stats->size_bins[0]++; + else if (pkt_len < 1519) + stats->size_bins[6]++; + else + stats->size_bins[7]++; + } + + ea = rte_pktmbuf_mtod(pkt, struct rte_ether_addr *); + if (rte_is_multicast_ether_addr(ea)) { + if (rte_is_broadcast_ether_addr(ea)) + stats->broadcast++; + else + stats->multicast++; + } + } +} + static __rte_always_inline int64_t vhost_async_dma_transfer_one(struct virtio_net *dev, struct vhost_virtqueue *vq, int16_t dma_id, uint16_t vchan_id, uint16_t flag_idx, @@ -1509,6 +1557,8 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id, else nb_tx = virtio_dev_rx_split(dev, vq, pkts, count); + vhost_queue_stats_update(dev, vq, pkts, nb_tx); + out: if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)) vhost_user_iotlb_rd_unlock(vq); @@ -3113,6 +3163,7 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id, * learning table will get updated first. 
 */ pkts[0] = rarp_mbuf; + vhost_queue_stats_update(dev, vq, pkts, 1); pkts++; count -= 1; } @@ -3129,6 +3180,8 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id, count = virtio_dev_tx_split_compliant(dev, vq, mbuf_pool, pkts, count); } + vhost_queue_stats_update(dev, vq, pkts, count); + out: if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)) vhost_user_iotlb_rd_unlock(vq);
From patchwork Thu Mar 24 12:46:36 2022
X-Patchwork-Submitter: Maxime Coquelin
X-Patchwork-Id: 108841
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Maxime Coquelin
To: dev@dpdk.org, chenbo.xia@intel.com, david.marchand@redhat.com, i.maximets@ovn.org
Cc: Maxime Coquelin
Subject: [PATCH v2 3/5] net/vhost: move to Vhost library stats API
Date: Thu, 24 Mar 2022 13:46:36 +0100
Message-Id: <20220324124638.32672-4-maxime.coquelin@redhat.com>
In-Reply-To: <20220324124638.32672-1-maxime.coquelin@redhat.com>
References: <20220324124638.32672-1-maxime.coquelin@redhat.com>

Now that we have Vhost statistics APIs, this patch replaces the Vhost PMD extended statistics implementation with calls to the new API. It enables getting more statistics, including counters that cannot be implemented at the PMD level.
Signed-off-by: Maxime Coquelin Reviewed-by: Chenbo Xia --- drivers/net/vhost/rte_eth_vhost.c | 348 +++++++++++------------------- 1 file changed, 120 insertions(+), 228 deletions(-) diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c index 070f0e6dfd..bac1c0acba 100644 --- a/drivers/net/vhost/rte_eth_vhost.c +++ b/drivers/net/vhost/rte_eth_vhost.c @@ -59,33 +59,10 @@ static struct rte_ether_addr base_eth_addr = { } }; -enum vhost_xstats_pkts { - VHOST_UNDERSIZE_PKT = 0, - VHOST_64_PKT, - VHOST_65_TO_127_PKT, - VHOST_128_TO_255_PKT, - VHOST_256_TO_511_PKT, - VHOST_512_TO_1023_PKT, - VHOST_1024_TO_1522_PKT, - VHOST_1523_TO_MAX_PKT, - VHOST_BROADCAST_PKT, - VHOST_MULTICAST_PKT, - VHOST_UNICAST_PKT, - VHOST_PKT, - VHOST_BYTE, - VHOST_MISSED_PKT, - VHOST_ERRORS_PKT, - VHOST_ERRORS_FRAGMENTED, - VHOST_ERRORS_JABBER, - VHOST_UNKNOWN_PROTOCOL, - VHOST_XSTATS_MAX, -}; - struct vhost_stats { uint64_t pkts; uint64_t bytes; uint64_t missed_pkts; - uint64_t xstats[VHOST_XSTATS_MAX]; }; struct vhost_queue { @@ -140,138 +117,92 @@ struct rte_vhost_vring_state { static struct rte_vhost_vring_state *vring_states[RTE_MAX_ETHPORTS]; -#define VHOST_XSTATS_NAME_SIZE 64 - -struct vhost_xstats_name_off { - char name[VHOST_XSTATS_NAME_SIZE]; - uint64_t offset; -}; - -/* [rx]_is prepended to the name string here */ -static const struct vhost_xstats_name_off vhost_rxport_stat_strings[] = { - {"good_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_PKT])}, - {"total_bytes", - offsetof(struct vhost_queue, stats.xstats[VHOST_BYTE])}, - {"missed_pkts", - offsetof(struct vhost_queue, stats.xstats[VHOST_MISSED_PKT])}, - {"broadcast_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_BROADCAST_PKT])}, - {"multicast_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_MULTICAST_PKT])}, - {"unicast_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_UNICAST_PKT])}, - {"undersize_packets", - offsetof(struct vhost_queue, 
stats.xstats[VHOST_UNDERSIZE_PKT])}, - {"size_64_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_64_PKT])}, - {"size_65_to_127_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_65_TO_127_PKT])}, - {"size_128_to_255_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_128_TO_255_PKT])}, - {"size_256_to_511_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_256_TO_511_PKT])}, - {"size_512_to_1023_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_512_TO_1023_PKT])}, - {"size_1024_to_1522_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_1024_TO_1522_PKT])}, - {"size_1523_to_max_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_1523_TO_MAX_PKT])}, - {"errors_with_bad_CRC", - offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_PKT])}, - {"fragmented_errors", - offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_FRAGMENTED])}, - {"jabber_errors", - offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_JABBER])}, - {"unknown_protos_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_UNKNOWN_PROTOCOL])}, -}; - -/* [tx]_ is prepended to the name string here */ -static const struct vhost_xstats_name_off vhost_txport_stat_strings[] = { - {"good_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_PKT])}, - {"total_bytes", - offsetof(struct vhost_queue, stats.xstats[VHOST_BYTE])}, - {"missed_pkts", - offsetof(struct vhost_queue, stats.xstats[VHOST_MISSED_PKT])}, - {"broadcast_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_BROADCAST_PKT])}, - {"multicast_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_MULTICAST_PKT])}, - {"unicast_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_UNICAST_PKT])}, - {"undersize_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_UNDERSIZE_PKT])}, - {"size_64_packets", - offsetof(struct vhost_queue, stats.xstats[VHOST_64_PKT])}, - {"size_65_to_127_packets", - offsetof(struct vhost_queue, 
stats.xstats[VHOST_65_TO_127_PKT])},
-	{"size_128_to_255_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_128_TO_255_PKT])},
-	{"size_256_to_511_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_256_TO_511_PKT])},
-	{"size_512_to_1023_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_512_TO_1023_PKT])},
-	{"size_1024_to_1522_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_1024_TO_1522_PKT])},
-	{"size_1523_to_max_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_1523_TO_MAX_PKT])},
-	{"errors_with_bad_CRC",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_PKT])},
-};
-
-#define VHOST_NB_XSTATS_RXPORT (sizeof(vhost_rxport_stat_strings) / \
-				sizeof(vhost_rxport_stat_strings[0]))
-
-#define VHOST_NB_XSTATS_TXPORT (sizeof(vhost_txport_stat_strings) / \
-				sizeof(vhost_txport_stat_strings[0]))
-
 static int
 vhost_dev_xstats_reset(struct rte_eth_dev *dev)
 {
-	struct vhost_queue *vq = NULL;
-	unsigned int i = 0;
+	struct vhost_queue *vq;
+	int ret, i;
 
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		vq = dev->data->rx_queues[i];
-		if (!vq)
-			continue;
-		memset(&vq->stats, 0, sizeof(vq->stats));
+		ret = rte_vhost_vring_stats_reset(vq->vid, vq->virtqueue_id);
+		if (ret < 0)
+			return ret;
 	}
+
 	for (i = 0; i < dev->data->nb_tx_queues; i++) {
 		vq = dev->data->tx_queues[i];
-		if (!vq)
-			continue;
-		memset(&vq->stats, 0, sizeof(vq->stats));
+		ret = rte_vhost_vring_stats_reset(vq->vid, vq->virtqueue_id);
+		if (ret < 0)
+			return ret;
 	}
 
 	return 0;
 }
 
 static int
-vhost_dev_xstats_get_names(struct rte_eth_dev *dev __rte_unused,
+vhost_dev_xstats_get_names(struct rte_eth_dev *dev,
 			   struct rte_eth_xstat_name *xstats_names,
-			   unsigned int limit __rte_unused)
+			   unsigned int limit)
 {
-	unsigned int t = 0;
-	int count = 0;
-	int nstats = VHOST_NB_XSTATS_RXPORT + VHOST_NB_XSTATS_TXPORT;
+	struct rte_vhost_stat_name *name;
+	struct vhost_queue *vq;
+	int ret, i, count = 0, nstats = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		vq = dev->data->rx_queues[i];
+		ret = rte_vhost_vring_stats_get_names(vq->vid, vq->virtqueue_id, NULL, 0);
+		if (ret < 0)
+			return ret;
 
-	if (!xstats_names)
+		nstats += ret;
+	}
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		vq = dev->data->tx_queues[i];
+		ret = rte_vhost_vring_stats_get_names(vq->vid, vq->virtqueue_id, NULL, 0);
+		if (ret < 0)
+			return ret;
+
+		nstats += ret;
+	}
+
+	if (!xstats_names || limit < (unsigned int)nstats)
 		return nstats;
-	for (t = 0; t < VHOST_NB_XSTATS_RXPORT; t++) {
-		snprintf(xstats_names[count].name,
-			 sizeof(xstats_names[count].name),
-			 "rx_%s", vhost_rxport_stat_strings[t].name);
-		count++;
-	}
-	for (t = 0; t < VHOST_NB_XSTATS_TXPORT; t++) {
-		snprintf(xstats_names[count].name,
-			 sizeof(xstats_names[count].name),
-			 "tx_%s", vhost_txport_stat_strings[t].name);
-		count++;
+
+	name = calloc(nstats, sizeof(*name));
+	if (!name)
+		return -1;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		vq = dev->data->rx_queues[i];
+		ret = rte_vhost_vring_stats_get_names(vq->vid, vq->virtqueue_id,
+				name + count, nstats - count);
+		if (ret < 0) {
+			free(name);
+			return ret;
+		}
+
+		count += ret;
 	}
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		vq = dev->data->tx_queues[i];
+		ret = rte_vhost_vring_stats_get_names(vq->vid, vq->virtqueue_id,
+				name + count, nstats - count);
+		if (ret < 0) {
+			free(name);
+			return ret;
+		}
+
+		count += ret;
+	}
+
+	for (i = 0; i < count; i++)
+		strncpy(xstats_names[i].name, name[i].name, RTE_ETH_XSTATS_NAME_SIZE);
+
+	free(name);
+
 	return count;
 }
 
@@ -279,86 +210,63 @@ static int
 vhost_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
 		     unsigned int n)
 {
-	unsigned int i;
-	unsigned int t;
-	unsigned int count = 0;
-	struct vhost_queue *vq = NULL;
-	unsigned int nxstats = VHOST_NB_XSTATS_RXPORT + VHOST_NB_XSTATS_TXPORT;
-
-	if (n < nxstats)
-		return nxstats;
-
-	for (t = 0; t < VHOST_NB_XSTATS_RXPORT; t++) {
-		xstats[count].value = 0;
-		for (i = 0; i < dev->data->nb_rx_queues; i++) {
-			vq = dev->data->rx_queues[i];
-			if (!vq)
-				continue;
-			xstats[count].value +=
-				*(uint64_t *)(((char *)vq)
-				+ vhost_rxport_stat_strings[t].offset);
-		}
-		xstats[count].id = count;
-		count++;
+	struct rte_vhost_stat *stats;
+	struct vhost_queue *vq;
+	int ret, i, count = 0, nstats = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		vq = dev->data->rx_queues[i];
+		ret = rte_vhost_vring_stats_get(vq->vid, vq->virtqueue_id, NULL, 0);
+		if (ret < 0)
+			return ret;
+
+		nstats += ret;
 	}
-	for (t = 0; t < VHOST_NB_XSTATS_TXPORT; t++) {
-		xstats[count].value = 0;
-		for (i = 0; i < dev->data->nb_tx_queues; i++) {
-			vq = dev->data->tx_queues[i];
-			if (!vq)
-				continue;
-			xstats[count].value +=
-				*(uint64_t *)(((char *)vq)
-				+ vhost_txport_stat_strings[t].offset);
-		}
-		xstats[count].id = count;
-		count++;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		vq = dev->data->tx_queues[i];
+		ret = rte_vhost_vring_stats_get(vq->vid, vq->virtqueue_id, NULL, 0);
+		if (ret < 0)
+			return ret;
+
+		nstats += ret;
 	}
-	return count;
-}
 
-static inline void
-vhost_count_xcast_packets(struct vhost_queue *vq,
-			  struct rte_mbuf *mbuf)
-{
-	struct rte_ether_addr *ea = NULL;
-	struct vhost_stats *pstats = &vq->stats;
-
-	ea = rte_pktmbuf_mtod(mbuf, struct rte_ether_addr *);
-	if (rte_is_multicast_ether_addr(ea)) {
-		if (rte_is_broadcast_ether_addr(ea))
-			pstats->xstats[VHOST_BROADCAST_PKT]++;
-		else
-			pstats->xstats[VHOST_MULTICAST_PKT]++;
-	} else {
-		pstats->xstats[VHOST_UNICAST_PKT]++;
+	if (!xstats || n < (unsigned int)nstats)
+		return nstats;
+
+	stats = calloc(nstats, sizeof(*stats));
+	if (!stats)
+		return -1;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		vq = dev->data->rx_queues[i];
+		ret = rte_vhost_vring_stats_get(vq->vid, vq->virtqueue_id,
+				stats + count, nstats - count);
+		if (ret < 0)
+			return ret;
+
+		count += ret;
 	}
-}
 
-static __rte_always_inline void
-vhost_update_single_packet_xstats(struct vhost_queue *vq, struct rte_mbuf *buf)
-{
-	uint32_t pkt_len = 0;
-	uint64_t index;
-	struct vhost_stats *pstats = &vq->stats;
-
-	pstats->xstats[VHOST_PKT]++;
-	pkt_len = buf->pkt_len;
-	if (pkt_len == 64) {
-		pstats->xstats[VHOST_64_PKT]++;
-	} else if (pkt_len > 64 && pkt_len < 1024) {
-		index = (sizeof(pkt_len) * 8)
-			- __builtin_clz(pkt_len) - 5;
-		pstats->xstats[index]++;
-	} else {
-		if (pkt_len < 64)
-			pstats->xstats[VHOST_UNDERSIZE_PKT]++;
-		else if (pkt_len <= 1522)
-			pstats->xstats[VHOST_1024_TO_1522_PKT]++;
-		else if (pkt_len > 1522)
-			pstats->xstats[VHOST_1523_TO_MAX_PKT]++;
-	}
-	vhost_count_xcast_packets(vq, buf);
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		vq = dev->data->tx_queues[i];
+		ret = rte_vhost_vring_stats_get(vq->vid, vq->virtqueue_id,
+				stats + count, nstats - count);
+		if (ret < 0)
+			return ret;
+
+		count += ret;
+	}
+
+	for (i = 0; i < count; i++) {
+		xstats[i].id = stats[i].id;
+		xstats[i].value = stats[i].value;
+	}
+
+	free(stats);
+
+	return nstats;
 }
 
 static uint16_t
@@ -402,9 +310,6 @@ eth_vhost_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 			rte_vlan_strip(bufs[i]);
 
 		r->stats.bytes += bufs[i]->pkt_len;
-		r->stats.xstats[VHOST_BYTE] += bufs[i]->pkt_len;
-
-		vhost_update_single_packet_xstats(r, bufs[i]);
 	}
 
 out:
@@ -461,10 +366,8 @@ eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 			break;
 	}
 
-	for (i = 0; likely(i < nb_tx); i++) {
+	for (i = 0; likely(i < nb_tx); i++)
 		nb_bytes += bufs[i]->pkt_len;
-		vhost_update_single_packet_xstats(r, bufs[i]);
-	}
 
 	nb_missed = nb_bufs - nb_tx;
 
@@ -472,17 +375,6 @@ eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 	r->stats.bytes += nb_bytes;
 	r->stats.missed_pkts += nb_missed;
 
-	r->stats.xstats[VHOST_BYTE] += nb_bytes;
-	r->stats.xstats[VHOST_MISSED_PKT] += nb_missed;
-	r->stats.xstats[VHOST_UNICAST_PKT] += nb_missed;
-
-	/* According to RFC2863, ifHCOutUcastPkts, ifHCOutMulticastPkts and
-	 * ifHCOutBroadcastPkts counters are increased when packets are not
-	 * transmitted successfully.
-	 */
-	for (i = nb_tx; i < nb_bufs; i++)
-		vhost_count_xcast_packets(r, bufs[i]);
-
 	for (i = 0; likely(i < nb_tx); i++)
 		rte_pktmbuf_free(bufs[i]);
 
 out:
@@ -1555,7 +1447,7 @@ rte_pmd_vhost_probe(struct rte_vdev_device *dev)
 	int ret = 0;
 	char *iface_name;
 	uint16_t queues;
-	uint64_t flags = 0;
+	uint64_t flags = RTE_VHOST_USER_NET_STATS_ENABLE;
 	uint64_t disable_flags = 0;
 	int client_mode = 0;
 	int iommu_support = 0;

From patchwork Thu Mar 24 12:46:37 2022
X-Patchwork-Submitter: Maxime Coquelin
X-Patchwork-Id: 108842
From: Maxime Coquelin
To: dev@dpdk.org, chenbo.xia@intel.com, david.marchand@redhat.com, i.maximets@ovn.org
Cc: Maxime Coquelin
Subject: [PATCH v2 4/5] vhost: add statistics for guest notifications
Date: Thu, 24 Mar 2022 13:46:37 +0100
Message-Id: <20220324124638.32672-5-maxime.coquelin@redhat.com>
In-Reply-To: <20220324124638.32672-1-maxime.coquelin@redhat.com>
References: <20220324124638.32672-1-maxime.coquelin@redhat.com>

This patch adds a new virtqueue statistic counting guest notifications.
It is useful to deduce from the hypervisor side whether the corresponding
guest Virtio device is using the Kernel Virtio-net driver or the DPDK
Virtio PMD.
Signed-off-by: Maxime Coquelin
Reviewed-by: Chenbo Xia
---
 lib/vhost/vhost.c | 1 +
 lib/vhost/vhost.h | 5 +++++
 2 files changed, 6 insertions(+)

diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index 80ed024019..58b58fc40e 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -42,6 +42,7 @@ static const struct vhost_vq_stats_name_off vhost_vq_stat_strings[] = {
 	{"size_512_1023_packets", offsetof(struct vhost_virtqueue, stats.size_bins[5])},
 	{"size_1024_1518_packets", offsetof(struct vhost_virtqueue, stats.size_bins[6])},
 	{"size_1519_max_packets", offsetof(struct vhost_virtqueue, stats.size_bins[7])},
+	{"guest_notifications", offsetof(struct vhost_virtqueue, stats.guest_notifications)},
 };
 
 #define VHOST_NB_VQ_STATS RTE_DIM(vhost_vq_stat_strings)
diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index 01b97011aa..13c5c2266d 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -133,6 +133,7 @@ struct virtqueue_stats {
 	uint64_t broadcast;
 	/* Size bins in array as RFC 2819, undersized [0], 64 [1], etc */
 	uint64_t size_bins[8];
+	uint64_t guest_notifications;
 };
 
 /**
@@ -871,6 +872,8 @@ vhost_vring_call_split(struct virtio_net *dev, struct vhost_virtqueue *vq)
 				(vq->callfd >= 0)) ||
 				unlikely(!signalled_used_valid)) {
 			eventfd_write(vq->callfd, (eventfd_t) 1);
+			if (dev->flags & VIRTIO_DEV_STATS_ENABLED)
+				vq->stats.guest_notifications++;
 			if (dev->notify_ops->guest_notified)
 				dev->notify_ops->guest_notified(dev->vid);
 		}
@@ -879,6 +882,8 @@
 		if (!(vq->avail->flags & VRING_AVAIL_F_NO_INTERRUPT) &&
 				(vq->callfd >= 0)) {
 			eventfd_write(vq->callfd, (eventfd_t)1);
+			if (dev->flags & VIRTIO_DEV_STATS_ENABLED)
+				vq->stats.guest_notifications++;
 			if (dev->notify_ops->guest_notified)
 				dev->notify_ops->guest_notified(dev->vid);
 		}

From patchwork Thu Mar 24 12:46:38 2022
X-Patchwork-Submitter: Maxime Coquelin
X-Patchwork-Id: 108843
From: Maxime Coquelin
To: dev@dpdk.org, chenbo.xia@intel.com, david.marchand@redhat.com, i.maximets@ovn.org
Cc: Maxime Coquelin
Subject: [PATCH v2 5/5] vhost: add statistics for IOTLB
Date: Thu, 24 Mar 2022 13:46:38 +0100
Message-Id: <20220324124638.32672-6-maxime.coquelin@redhat.com>
In-Reply-To: <20220324124638.32672-1-maxime.coquelin@redhat.com>
References: <20220324124638.32672-1-maxime.coquelin@redhat.com>

This patch adds statistics for IOTLB hits and misses.

Signed-off-by: Maxime Coquelin
---
 lib/vhost/vhost.c | 10 +++++++++-
 lib/vhost/vhost.h |  3 +++
 2 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index 58b58fc40e..7f4fafdcb0 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -43,6 +43,8 @@ static const struct vhost_vq_stats_name_off vhost_vq_stat_strings[] = {
 	{"size_1024_1518_packets", offsetof(struct vhost_virtqueue, stats.size_bins[6])},
 	{"size_1519_max_packets", offsetof(struct vhost_virtqueue, stats.size_bins[7])},
 	{"guest_notifications", offsetof(struct vhost_virtqueue, stats.guest_notifications)},
+	{"iotlb_hits", offsetof(struct vhost_virtqueue, stats.iotlb_hits)},
+	{"iotlb_misses", offsetof(struct vhost_virtqueue, stats.iotlb_misses)},
 };
 
 #define VHOST_NB_VQ_STATS RTE_DIM(vhost_vq_stat_strings)
@@ -60,8 +62,14 @@ __vhost_iova_to_vva(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	tmp_size = *size;
 
 	vva = vhost_user_iotlb_cache_find(vq, iova, &tmp_size, perm);
-	if (tmp_size == *size)
+	if (tmp_size == *size) {
+		if (dev->flags & VIRTIO_DEV_STATS_ENABLED)
+			vq->stats.iotlb_hits++;
 		return vva;
+	}
+
+	if (dev->flags & VIRTIO_DEV_STATS_ENABLED)
+		vq->stats.iotlb_misses++;
 
 	iova += tmp_size;
 
diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index 13c5c2266d..e876fc157b 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -134,6 +134,9 @@ struct virtqueue_stats {
 	/* Size bins in array as RFC 2819, undersized [0], 64 [1], etc */
 	uint64_t size_bins[8];
 	uint64_t guest_notifications;
+	uint64_t iotlb_hits;
+	uint64_t iotlb_misses;
+	uint64_t iotlb_errors;
 };
 
 /**