From patchwork Tue Jun 6 08:18:41 2023
X-Patchwork-Submitter: Maxime Coquelin <maxime.coquelin@redhat.com>
X-Patchwork-Id: 128180
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: dev@dpdk.org, chenbo.xia@intel.com, david.marchand@redhat.com,
    mkp@redhat.com, fbl@redhat.com, jasowang@redhat.com,
    cunming.liang@intel.com, xieyongji@bytedance.com, echaudro@redhat.com,
    eperezma@redhat.com, amorenoz@redhat.com, lulu@redhat.com
Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
Subject: [PATCH v5 15/26] vhost: add control virtqueue support
Date: Tue, 6 Jun 2023 10:18:41 +0200
Message-Id: <20230606081852.71003-16-maxime.coquelin@redhat.com>
In-Reply-To: <20230606081852.71003-1-maxime.coquelin@redhat.com>
References: <20230606081852.71003-1-maxime.coquelin@redhat.com>

In order to support multi-queue with VDUSE, control queue support is
required. This patch adds the control queue implementation; it will be
used later when adding VDUSE support.

Only the split ring layout is supported for now; packed ring support
will be added later.
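As a minimal sketch of how this entry point could be consumed once VDUSE
support is added (the kick handler below is hypothetical and not part of
this series), a backend would simply drain the control queue whenever the
guest kicks it:

    /* Hypothetical sketch: handle a guest kick of the control queue,
     * assuming dev->cvq has already been set up by the backend. */
    static void
    cvq_kick_handler(struct virtio_net *dev)
    {
            /* Pops each pending request, writes its ack byte and updates
             * the used ring; returns a negative value on malformed chains. */
            if (virtio_net_ctrl_handle(dev) < 0)
                    VHOST_LOG_CONFIG(dev->ifname, ERR,
                            "failed to handle control queue requests\n");
    }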
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
---
 lib/vhost/meson.build       |   1 +
 lib/vhost/vhost.h           |   2 +
 lib/vhost/virtio_net_ctrl.c | 286 ++++++++++++++++++++++++++++++++++++
 lib/vhost/virtio_net_ctrl.h |  10 ++
 4 files changed, 299 insertions(+)
 create mode 100644 lib/vhost/virtio_net_ctrl.c
 create mode 100644 lib/vhost/virtio_net_ctrl.h

diff --git a/lib/vhost/meson.build b/lib/vhost/meson.build
index 05679447db..d211a0bd37 100644
--- a/lib/vhost/meson.build
+++ b/lib/vhost/meson.build
@@ -27,6 +27,7 @@ sources = files(
         'vhost_crypto.c',
         'vhost_user.c',
         'virtio_net.c',
+        'virtio_net_ctrl.c',
 )
 headers = files(
         'rte_vdpa.h',
diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index ea2798b0bf..04267a3ac5 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -527,6 +527,8 @@ struct virtio_net {
 	int			postcopy_ufd;
 	int			postcopy_listening;
 
+	struct vhost_virtqueue	*cvq;
+
 	struct rte_vdpa_device *vdpa_dev;
 
 	/* context data for the external message handlers */
diff --git a/lib/vhost/virtio_net_ctrl.c b/lib/vhost/virtio_net_ctrl.c
new file mode 100644
index 0000000000..6b583a0ce6
--- /dev/null
+++ b/lib/vhost/virtio_net_ctrl.c
@@ -0,0 +1,286 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2023 Red Hat, Inc.
+ */
+
+#include <stdint.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include "iotlb.h"
+#include "vhost.h"
+#include "virtio_net_ctrl.h"
+
+struct virtio_net_ctrl {
+	uint8_t class;
+	uint8_t command;
+	uint8_t command_data[];
+};
+
+struct virtio_net_ctrl_elem {
+	struct virtio_net_ctrl *ctrl_req;
+	uint16_t head_idx;
+	uint16_t n_descs;
+	uint8_t *desc_ack;
+};
+
+static int
+virtio_net_ctrl_pop(struct virtio_net *dev, struct vhost_virtqueue *cvq,
+		struct virtio_net_ctrl_elem *ctrl_elem)
+	__rte_shared_locks_required(&cvq->iotlb_lock)
+{
+	uint16_t avail_idx, desc_idx, n_descs = 0;
+	uint64_t desc_len, desc_addr, desc_iova, data_len = 0;
+	uint8_t *ctrl_req;
+	struct vring_desc *descs;
+
+	avail_idx = __atomic_load_n(&cvq->avail->idx, __ATOMIC_ACQUIRE);
+	if (avail_idx == cvq->last_avail_idx) {
+		VHOST_LOG_CONFIG(dev->ifname, DEBUG, "Control queue empty\n");
+		return 0;
+	}
+
+	desc_idx = cvq->avail->ring[cvq->last_avail_idx];
+	if (desc_idx >= cvq->size) {
+		VHOST_LOG_CONFIG(dev->ifname, ERR, "Out of range desc index, dropping\n");
+		goto err;
+	}
+
+	ctrl_elem->head_idx = desc_idx;
+
+	if (cvq->desc[desc_idx].flags & VRING_DESC_F_INDIRECT) {
+		desc_len = cvq->desc[desc_idx].len;
+		desc_iova = cvq->desc[desc_idx].addr;
+
+		descs = (struct vring_desc *)(uintptr_t)vhost_iova_to_vva(dev, cvq,
+					desc_iova, &desc_len, VHOST_ACCESS_RO);
+		if (!descs || desc_len != cvq->desc[desc_idx].len) {
+			VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to map ctrl indirect descs\n");
+			goto err;
+		}
+
+		desc_idx = 0;
+	} else {
+		descs = cvq->desc;
+	}
+
+	while (1) {
+		desc_len = descs[desc_idx].len;
+		desc_iova = descs[desc_idx].addr;
+
+		n_descs++;
+
+		if (descs[desc_idx].flags & VRING_DESC_F_WRITE) {
+			if (ctrl_elem->desc_ack) {
+				VHOST_LOG_CONFIG(dev->ifname, ERR,
+						"Unexpected ctrl chain layout\n");
+				goto err;
+			}
+
+			if (desc_len != sizeof(uint8_t)) {
+				VHOST_LOG_CONFIG(dev->ifname, ERR,
+						"Invalid ack size for ctrl req, dropping\n");
+				goto err;
+			}
+
+			ctrl_elem->desc_ack = (uint8_t *)(uintptr_t)vhost_iova_to_vva(dev, cvq,
+					desc_iova, &desc_len, VHOST_ACCESS_WO);
+			if (!ctrl_elem->desc_ack || desc_len != sizeof(uint8_t)) {
+				VHOST_LOG_CONFIG(dev->ifname, ERR,
+						"Failed to map ctrl ack descriptor\n");
+				goto err;
+			}
+		} else {
+			if (ctrl_elem->desc_ack) {
+				VHOST_LOG_CONFIG(dev->ifname, ERR,
+						"Unexpected ctrl chain layout\n");
+				goto err;
+			}
+
+			data_len += desc_len;
+		}
+
+		if (!(descs[desc_idx].flags & VRING_DESC_F_NEXT))
+			break;
+
+		desc_idx = descs[desc_idx].next;
+	}
+
+	desc_idx = ctrl_elem->head_idx;
+
+	if (cvq->desc[desc_idx].flags & VRING_DESC_F_INDIRECT)
+		ctrl_elem->n_descs = 1;
+	else
+		ctrl_elem->n_descs = n_descs;
+
+	if (!ctrl_elem->desc_ack) {
+		VHOST_LOG_CONFIG(dev->ifname, ERR, "Missing ctrl ack descriptor\n");
+		goto err;
+	}
+
+	if (data_len < sizeof(ctrl_elem->ctrl_req->class) + sizeof(ctrl_elem->ctrl_req->command)) {
+		VHOST_LOG_CONFIG(dev->ifname, ERR, "Invalid control header size\n");
+		goto err;
+	}
+
+	ctrl_elem->ctrl_req = malloc(data_len);
+	if (!ctrl_elem->ctrl_req) {
+		VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to alloc ctrl request\n");
+		goto err;
+	}
+
+	ctrl_req = (uint8_t *)ctrl_elem->ctrl_req;
+
+	if (cvq->desc[desc_idx].flags & VRING_DESC_F_INDIRECT) {
+		desc_len = cvq->desc[desc_idx].len;
+		desc_iova = cvq->desc[desc_idx].addr;
+
+		descs = (struct vring_desc *)(uintptr_t)vhost_iova_to_vva(dev, cvq,
+					desc_iova, &desc_len, VHOST_ACCESS_RO);
+		if (!descs || desc_len != cvq->desc[desc_idx].len) {
+			VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to map ctrl indirect descs\n");
+			goto free_err;
+		}
+
+		desc_idx = 0;
+	} else {
+		descs = cvq->desc;
+	}
+
+	while (!(descs[desc_idx].flags & VRING_DESC_F_WRITE)) {
+		desc_len = descs[desc_idx].len;
+		desc_iova = descs[desc_idx].addr;
+
+		desc_addr = vhost_iova_to_vva(dev, cvq, desc_iova, &desc_len, VHOST_ACCESS_RO);
+		if (!desc_addr || desc_len < descs[desc_idx].len) {
+			VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to map ctrl descriptor\n");
+			goto free_err;
+		}
+
+		memcpy(ctrl_req, (void *)(uintptr_t)desc_addr, desc_len);
+		ctrl_req += desc_len;
+
+		if (!(descs[desc_idx].flags & VRING_DESC_F_NEXT))
+			break;
+
+		desc_idx = descs[desc_idx].next;
+	}
+
+	cvq->last_avail_idx++;
+	if (cvq->last_avail_idx >= cvq->size)
+		cvq->last_avail_idx -= cvq->size;
+
+	if (dev->features & (1ULL << VIRTIO_RING_F_EVENT_IDX))
+		vhost_avail_event(cvq) = cvq->last_avail_idx;
+
+	return 1;
+
+free_err:
+	free(ctrl_elem->ctrl_req);
+err:
+	cvq->last_avail_idx++;
+	if (cvq->last_avail_idx >= cvq->size)
+		cvq->last_avail_idx -= cvq->size;
+
+	if (dev->features & (1ULL << VIRTIO_RING_F_EVENT_IDX))
+		vhost_avail_event(cvq) = cvq->last_avail_idx;
+
+	return -1;
+}
+
+static uint8_t
+virtio_net_ctrl_handle_req(struct virtio_net *dev, struct virtio_net_ctrl *ctrl_req)
+{
+	uint8_t ret = VIRTIO_NET_ERR;
+
+	if (ctrl_req->class == VIRTIO_NET_CTRL_MQ &&
+			ctrl_req->command == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
+		uint16_t queue_pairs;
+		uint32_t i;
+
+		queue_pairs = *(uint16_t *)(uintptr_t)ctrl_req->command_data;
+		VHOST_LOG_CONFIG(dev->ifname, INFO, "Ctrl req: MQ %u queue pairs\n", queue_pairs);
+		ret = VIRTIO_NET_OK;
+
+		for (i = 0; i < dev->nr_vring; i++) {
+			struct vhost_virtqueue *vq = dev->virtqueue[i];
+			bool enable;
+
+			if (vq == dev->cvq)
+				continue;
+
+			if (i < queue_pairs * 2)
+				enable = true;
+			else
+				enable = false;
+
+			vq->enabled = enable;
+			if (dev->notify_ops->vring_state_changed)
+				dev->notify_ops->vring_state_changed(dev->vid, i, enable);
+		}
+	}
+
+	return ret;
+}
+
+static int
+virtio_net_ctrl_push(struct virtio_net *dev, struct virtio_net_ctrl_elem *ctrl_elem)
+{
+	struct vhost_virtqueue *cvq = dev->cvq;
+	struct vring_used_elem *used_elem;
+
+	used_elem = &cvq->used->ring[cvq->last_used_idx];
+	used_elem->id = ctrl_elem->head_idx;
+	used_elem->len = ctrl_elem->n_descs;
+
+	cvq->last_used_idx++;
+	if (cvq->last_used_idx >= cvq->size)
+		cvq->last_used_idx -= cvq->size;
+
+	__atomic_store_n(&cvq->used->idx, cvq->last_used_idx, __ATOMIC_RELEASE);
+
+	vhost_vring_call_split(dev, dev->cvq);
+
+	free(ctrl_elem->ctrl_req);
+
+	return 0;
+}
+
+int
+virtio_net_ctrl_handle(struct virtio_net *dev)
+{
+	int ret = 0;
+
+	if (dev->features & (1ULL << VIRTIO_F_RING_PACKED)) {
+		VHOST_LOG_CONFIG(dev->ifname, ERR, "Packed ring not supported yet\n");
+		return -1;
+	}
+
+	if (!dev->cvq) {
+		VHOST_LOG_CONFIG(dev->ifname, ERR, "missing control queue\n");
+		return -1;
+	}
+
+	rte_rwlock_read_lock(&dev->cvq->access_lock);
+	vhost_user_iotlb_rd_lock(dev->cvq);
+
+	while (1) {
+		struct virtio_net_ctrl_elem ctrl_elem;
+
+		memset(&ctrl_elem, 0, sizeof(struct virtio_net_ctrl_elem));
+
+		ret = virtio_net_ctrl_pop(dev, dev->cvq, &ctrl_elem);
+		if (ret <= 0)
+			break;
+
+		*ctrl_elem.desc_ack = virtio_net_ctrl_handle_req(dev, ctrl_elem.ctrl_req);
+
+		ret = virtio_net_ctrl_push(dev, &ctrl_elem);
+		if (ret < 0)
+			break;
+	}
+
+	vhost_user_iotlb_rd_unlock(dev->cvq);
+	rte_rwlock_read_unlock(&dev->cvq->access_lock);
+
+	return ret;
+}
diff --git a/lib/vhost/virtio_net_ctrl.h b/lib/vhost/virtio_net_ctrl.h
new file mode 100644
index 0000000000..9a90f4b9da
--- /dev/null
+++ b/lib/vhost/virtio_net_ctrl.h
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2023 Red Hat, Inc.
+ */
+
+#ifndef _VIRTIO_NET_CTRL_H
+#define _VIRTIO_NET_CTRL_H
+
+int virtio_net_ctrl_handle(struct virtio_net *dev);
+
+#endif