From patchwork Tue Oct 17 14:24:02 2023
X-Patchwork-Submitter: Cindy Lu
X-Patchwork-Id: 132758
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Cindy Lu
To: lulu@redhat.com, jasowang@redhat.com, xieyongji@bytedance.com, dev@dpdk.org, maxime.coquelin@redhat.com
Subject: [RFC v2 1/2] vduse: add mapping process in vduse create and destroy
Date: Tue, 17 Oct 2023 22:24:02 +0800
Message-Id: <20231017142403.2995341-2-lulu@redhat.com>
In-Reply-To: <20231017142403.2995341-1-lulu@redhat.com>
References: <20231017142403.2995341-1-lulu@redhat.com>
List-Id: DPDK patches and discussions

The changes in the creation process are:
1. Check whether we need to reconnect.
2. Use an ioctl to get the reconnect info (size and max page number)
   from the kernel.
3. Map 1 page for the reconnect status, plus one page per virtqueue.

The change in the destroy process is to add the related unmap operations.

Signed-off-by: Cindy Lu
---
 lib/vhost/vduse.c | 148 ++++++++++++++++++++++++++++++++++++----------
 lib/vhost/vhost.h |  10 ++++
 2 files changed, 127 insertions(+), 31 deletions(-)

diff --git a/lib/vhost/vduse.c b/lib/vhost/vduse.c
index 4f36277d3b..9b7f829a7a 100644
--- a/lib/vhost/vduse.c
+++ b/lib/vhost/vduse.c
@@ -376,6 +376,19 @@ vduse_device_create(const char *path)
 	uint64_t features = VDUSE_NET_SUPPORTED_FEATURES;
 	struct vduse_dev_config *dev_config = NULL;
 	const char *name = path + strlen("/dev/vduse/");
+	char reconnect_dev[PATH_MAX];
+	struct vhost_reconnect_data *log = NULL;
+	struct vduse_reconnect_mmap_info mmap_info;
+	bool reconnect = false;
+
+	ret = snprintf(reconnect_dev, sizeof(reconnect_dev), "%s/%s", "/dev/vduse", name);
+	if (access(reconnect_dev, F_OK) == 0) {
+		reconnect = true;
+		VHOST_LOG_CONFIG(name, INFO, "Device already exists, reconnecting...\n");
+	} else {
+		reconnect = false;
+		VHOST_LOG_CONFIG(name, ERR, "Device %s does not exist, creating...\n", reconnect_dev);
+	}
 
 	/* If first device, create events dispatcher thread */
 	if (vduse_events_thread == false) {
@@ -407,16 +420,8 @@ vduse_device_create(const char *path)
 	}
 
 	if (ioctl(control_fd, VDUSE_SET_API_VERSION, &ver)) {
-		VHOST_LOG_CONFIG(name, ERR, "Failed to set API version: %" PRIu64 ": %s\n",
-				ver, strerror(errno));
-		ret = -1;
-		goto out_ctrl_close;
-	}
-
-	dev_config = malloc(offsetof(struct vduse_dev_config, config) +
-			sizeof(vnet_config));
-	if (!dev_config) {
-		VHOST_LOG_CONFIG(name, ERR, "Failed to allocate VDUSE config\n");
+		VHOST_LOG_CONFIG(name, ERR, "Failed to set API version: %" PRIu64 ": %s\n", ver,
+				 strerror(errno));
 		ret = -1;
 		goto out_ctrl_close;
 	}
@@ -424,7 +429,7 @@ vduse_device_create(const char *path)
 	ret = rte_vhost_driver_get_queue_num(path, &max_queue_pairs);
 	if (ret < 0) {
 		VHOST_LOG_CONFIG(name, ERR, "Failed to get max queue pairs\n");
-		goto out_free;
+		goto out_ctrl_close;
 	}
 
 	VHOST_LOG_CONFIG(path, INFO, "VDUSE max queue pairs: %u\n", max_queue_pairs);
@@ -435,23 +440,34 @@ vduse_device_create(const char *path)
 	else
 		total_queues += 1; /* Includes ctrl queue */
 
-	vnet_config.max_virtqueue_pairs = max_queue_pairs;
-	memset(dev_config, 0, sizeof(struct vduse_dev_config));
-
-	strncpy(dev_config->name, name, VDUSE_NAME_MAX - 1);
-	dev_config->device_id = VIRTIO_ID_NET;
-	dev_config->vendor_id = 0;
-	dev_config->features = features;
-	dev_config->vq_num = total_queues;
-	dev_config->vq_align = sysconf(_SC_PAGE_SIZE);
-	dev_config->config_size = sizeof(struct virtio_net_config);
-	memcpy(dev_config->config, &vnet_config, sizeof(vnet_config));
+	if (reconnect != true) {
+		dev_config =
+			malloc(offsetof(struct vduse_dev_config, config) + sizeof(vnet_config));
+		if (!dev_config) {
+			VHOST_LOG_CONFIG(name, ERR, "Failed to allocate VDUSE config\n");
+			ret = -1;
+			goto out_ctrl_close;
+		}
 
-	ret = ioctl(control_fd, VDUSE_CREATE_DEV, dev_config);
-	if (ret < 0) {
-		VHOST_LOG_CONFIG(name, ERR, "Failed to create VDUSE device: %s\n",
-				strerror(errno));
-		goto out_free;
+		vnet_config.max_virtqueue_pairs = max_queue_pairs;
+		memset(dev_config, 0, sizeof(struct vduse_dev_config));
+
+		strncpy(dev_config->name, name, VDUSE_NAME_MAX - 1);
+		dev_config->device_id = VIRTIO_ID_NET;
+		dev_config->vendor_id = 0;
+		dev_config->features = features;
+		dev_config->vq_num = total_queues;
+		dev_config->vq_align = sysconf(_SC_PAGE_SIZE);
+		dev_config->config_size = sizeof(struct virtio_net_config);
+		memcpy(dev_config->config, &vnet_config, sizeof(vnet_config));
+
+		ret = ioctl(control_fd, VDUSE_CREATE_DEV, dev_config);
+		free(dev_config);
+		if (ret < 0) {
+			VHOST_LOG_CONFIG(name, ERR, "Failed to create VDUSE device: %s\n",
+					 strerror(errno));
+			goto out_ctrl_close;
+		}
 	}
 
 	dev_fd = open(path, O_RDWR);
@@ -485,10 +501,43 @@ vduse_device_create(const char *path)
 	strncpy(dev->ifname, path, IF_NAME_SZ - 1);
 	dev->vduse_ctrl_fd = control_fd;
 	dev->vduse_dev_fd = dev_fd;
+
+	ret = ioctl(dev_fd, VDUSE_GET_RECONNECT_INFO, &mmap_info);
+	if (ret < 0) {
+		VHOST_LOG_CONFIG(name, ERR, "Failed to get reconnect info from VDUSE device: %s\n",
+				 strerror(errno));
+		goto out_dev_close;
+	}
+	dev->mmap_info.size = mmap_info.size;
+	dev->mmap_info.max_index = mmap_info.max_index;
+	log = (struct vhost_reconnect_data *)mmap(NULL, mmap_info.size, PROT_READ | PROT_WRITE,
+						  MAP_SHARED, dev->vduse_dev_fd, 0);
+	if (log == MAP_FAILED) {
+		VHOST_LOG_CONFIG(name, ERR, "Failed to map VDUSE reconnect data\n");
+		goto out_dev_close;
+	}
+
+	dev->log = log;
+
+	if (reconnect == true) {
+		dev->status = dev->log->status;
+		log->version = VHOST_VDUSE_API_VERSION;
+		log->reconnect_time += 1;
+		log->nr_vrings = total_queues;
+	}
 	vhost_setup_virtio_net(dev->vid, true, true, true, true);
+
+	if (total_queues > mmap_info.max_index - 1) {
+		VHOST_LOG_CONFIG(name, ERR, "The max vring number %d is larger than %d\n",
+				 total_queues, mmap_info.max_index - 1);
+		goto out_dev_close;
+	}
+
 	for (i = 0; i < total_queues; i++) {
 		struct vduse_vq_config vq_cfg = { 0 };
+		struct vhost_reconnect_vring *log_vq;
+		struct vhost_virtqueue *vq;
 
 		ret = alloc_vring_queue(dev, i);
 		if (ret) {
@@ -496,6 +545,22 @@ vduse_device_create(const char *path)
 			goto out_dev_destroy;
 		}
 
+		log_vq = (struct vhost_reconnect_vring *)mmap(NULL, mmap_info.size,
+							      PROT_READ | PROT_WRITE, MAP_SHARED,
+							      dev->vduse_dev_fd,
+							      (i + 1) * mmap_info.size);
+		if (log_vq == MAP_FAILED) {
+			VHOST_LOG_CONFIG(name, ERR, "Failed to map vring %d reconnect data\n",
+					 i);
+
+			goto out_dev_destroy;
+		}
+
+		vq = dev->virtqueue[i];
+		vq->log = log_vq;
+		if (reconnect)
+			continue;
+
 		vq_cfg.index = i;
 		vq_cfg.max_size = 1024;
@@ -516,7 +581,8 @@ vduse_device_create(const char *path)
 	}
 
 	fdset_pipe_notify(&vduse.fdset);
-	free(dev_config);
+	if (reconnect && dev->status & VIRTIO_DEVICE_STATUS_DRIVER_OK)
+		vduse_device_start(dev, true);
 
 	return 0;
@@ -526,8 +592,6 @@ vduse_device_create(const char *path)
 	if (dev_fd >= 0)
 		close(dev_fd);
 	ioctl(control_fd, VDUSE_DESTROY_DEV, name);
-out_free:
-	free(dev_config);
 out_ctrl_close:
 	close(control_fd);
@@ -553,7 +617,29 @@ vduse_device_destroy(const char *path)
 	if (vid == RTE_MAX_VHOST_DEVICE)
 		return -1;
-
+	if (dev->log) {
+		for (uint32_t i = 0; i < dev->log->nr_vrings; i++) {
+			struct vhost_virtqueue *vq;
+
+			vq = dev->virtqueue[i];
+			if (vq->log) {
+				ret = munmap(vq->log, dev->mmap_info.size);
+				if (ret) {
+					VHOST_LOG_CONFIG(name, ERR,
+							 "Failed to unmap device %s vq %d: %s\n",
+							 path, i, strerror(errno));
+					ret = -1;
+				}
+			}
+		}
+		ret = munmap(dev->log, dev->mmap_info.size);
+		if (ret) {
+			VHOST_LOG_CONFIG(name, ERR, "Failed to unmap device %s dev status: %s\n",
+					 path, strerror(errno));
+			ret = -1;
+		}
+	}
+	dev->log = NULL;
 	if (dev->cvq && dev->cvq->kickfd >= 0) {
 		fdset_del(&vduse.fdset, dev->cvq->kickfd);
 		fdset_pipe_notify(&vduse.fdset);
diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index c8f2a0d43a..1879c6875b 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -19,6 +19,7 @@
 #include
 #include
 #include
+#include
 
 #include "rte_vhost.h"
 #include "vdpa_driver.h"
@@ -344,6 +345,8 @@ struct vhost_virtqueue {
 	struct vhost_vring_addr ring_addrs;
 
 	struct virtqueue_stats stats;
+
+	struct vhost_reconnect_vring *log;
 } __rte_cache_aligned;
 
 /* Virtio device status as per Virtio specification */
@@ -537,6 +540,9 @@ struct virtio_net {
 	struct rte_vhost_user_extern_ops extern_ops;
 
 	struct vhost_backend_ops *backend_ops;
+
+	struct vhost_reconnect_data *log;
+	struct vduse_reconnect_mmap_info mmap_info;
 } __rte_cache_aligned;
 
 static inline void
@@ -582,6 +588,10 @@ vq_inc_last_avail_packed(struct vhost_virtqueue *vq, uint16_t num)
 		vq->avail_wrap_counter ^= 1;
 		vq->last_avail_idx -= vq->size;
 	}
+	if (vq->log) {
+		vq->log->last_avail_idx = vq->last_avail_idx;
+		vq->log->avail_wrap_counter = vq->avail_wrap_counter;
+	}
 }
 
 void
 __vhost_log_cache_write(struct virtio_net *dev,