From patchwork Tue Oct 26 16:28:52 2021
X-Patchwork-Submitter: Maxime Coquelin
X-Patchwork-Id: 102952
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: dev@dpdk.org, chenbo.xia@intel.com, jiayu.hu@intel.com,
 yuanx.wang@intel.com, wenwux.ma@intel.com, bruce.richardson@intel.com,
 john.mcnamara@intel.com
Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
Date: Tue, 26 Oct 2021 18:28:52 +0200
Message-Id: <20211026162904.482987-4-maxime.coquelin@redhat.com>
In-Reply-To: <20211026162904.482987-1-maxime.coquelin@redhat.com>
References: <20211026162904.482987-1-maxime.coquelin@redhat.com>
Subject: [dpdk-dev] [PATCH v2 03/15] vhost: simplify async IO vectors
List-Id: DPDK patches and discussions

The IO vectors implementation is unnecessarily complex, mixing source
and destination vectors in the same array. This patch declares two
arrays, one for the source and one for the destination.

It also gets rid of the segs_await variable in both the packed and
split implementations, as it is always equal to iovec_idx.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
---
 lib/vhost/vhost.h      |  5 +++--
 lib/vhost/virtio_net.c | 28 +++++++++++-----------------
 2 files changed, 14 insertions(+), 19 deletions(-)

diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index 8c1c33c852..686f468eff 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -49,7 +49,7 @@
 #define MAX_PKT_BURST 32

 #define VHOST_MAX_ASYNC_IT	(MAX_PKT_BURST * 2)
-#define VHOST_MAX_ASYNC_VEC	(BUF_VECTOR_MAX * 4)
+#define VHOST_MAX_ASYNC_VEC	(BUF_VECTOR_MAX * 2)

 #define PACKED_DESC_ENQUEUE_USED_FLAG(w)	\
	((w) ?
	 (VRING_DESC_F_AVAIL | VRING_DESC_F_USED | VRING_DESC_F_WRITE) : \
@@ -133,7 +133,8 @@ struct vhost_async {
	struct rte_vhost_async_channel_ops ops;

	struct rte_vhost_iov_iter it_pool[VHOST_MAX_ASYNC_IT];
-	struct iovec vec_pool[VHOST_MAX_ASYNC_VEC];
+	struct iovec src_iovec[VHOST_MAX_ASYNC_VEC];
+	struct iovec dst_iovec[VHOST_MAX_ASYNC_VEC];

	/* data transfer status */
	struct async_inflight_info *pkts_info;
diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index d9ac85f829..2a243701c0 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -1512,14 +1512,12 @@ virtio_dev_rx_async_submit_split(struct virtio_net *dev,
	struct vhost_async *async = vq->async;
	struct rte_vhost_iov_iter *it_pool = async->it_pool;
-	struct iovec *vec_pool = async->vec_pool;
	struct rte_vhost_async_desc tdes[MAX_PKT_BURST];
-	struct iovec *src_iovec = vec_pool;
-	struct iovec *dst_iovec = vec_pool + (VHOST_MAX_ASYNC_VEC >> 1);
+	struct iovec *src_iovec = async->src_iovec;
+	struct iovec *dst_iovec = async->dst_iovec;
	struct async_inflight_info *pkts_info = async->pkts_info;
	uint32_t n_pkts = 0, pkt_err = 0;
	int32_t n_xfer;
-	uint16_t segs_await = 0;
	uint16_t iovec_idx = 0, it_idx = 0, slot_idx = 0;

	/*
@@ -1562,7 +1560,6 @@ virtio_dev_rx_async_submit_split(struct virtio_net *dev,
		pkts_info[slot_idx].mbuf = pkts[pkt_idx];

		iovec_idx += it_pool[it_idx].nr_segs;
-		segs_await += it_pool[it_idx].nr_segs;
		it_idx += 2;

		vq->last_avail_idx += num_buffers;
@@ -1573,8 +1570,7 @@ virtio_dev_rx_async_submit_split(struct virtio_net *dev,
		 * - unused async iov number is less than max vhost vector
		 */
		if (unlikely(pkt_burst_idx >= VHOST_ASYNC_BATCH_THRESHOLD ||
-			((VHOST_MAX_ASYNC_VEC >> 1) - segs_await <
-			BUF_VECTOR_MAX))) {
+			(VHOST_MAX_ASYNC_VEC - iovec_idx < BUF_VECTOR_MAX))) {
			n_xfer = async->ops.transfer_data(dev->vid,
					queue_id, tdes, 0, pkt_burst_idx);
			if (likely(n_xfer >= 0)) {
@@ -1588,7 +1584,6 @@ virtio_dev_rx_async_submit_split(struct virtio_net *dev,
			iovec_idx = 0;
			it_idx = 0;
-			segs_await = 0;

			if (unlikely(n_pkts < pkt_burst_idx)) {
				/*
@@ -1745,8 +1740,11 @@ vhost_enqueue_async_packed(struct virtio_net *dev,
		if (unlikely(++tries > max_tries))
			return -1;

-		if (unlikely(fill_vec_buf_packed(dev, vq, avail_idx, &desc_count, buf_vec, &nr_vec,
-						&buf_id, &len, VHOST_ACCESS_RW) < 0))
+		if (unlikely(fill_vec_buf_packed(dev, vq,
+						avail_idx, &desc_count,
+						buf_vec, &nr_vec,
+						&buf_id, &len,
+						VHOST_ACCESS_RW) < 0))
			return -1;

		len = RTE_MIN(len, size);
@@ -1832,14 +1830,12 @@ virtio_dev_rx_async_submit_packed(struct virtio_net *dev,
	struct vhost_async *async = vq->async;
	struct rte_vhost_iov_iter *it_pool = async->it_pool;
-	struct iovec *vec_pool = async->vec_pool;
	struct rte_vhost_async_desc tdes[MAX_PKT_BURST];
-	struct iovec *src_iovec = vec_pool;
-	struct iovec *dst_iovec = vec_pool + (VHOST_MAX_ASYNC_VEC >> 1);
+	struct iovec *src_iovec = async->src_iovec;
+	struct iovec *dst_iovec = async->dst_iovec;
	struct async_inflight_info *pkts_info = async->pkts_info;
	uint32_t n_pkts = 0, pkt_err = 0;
	uint16_t slot_idx = 0;
-	uint16_t segs_await = 0;
	uint16_t iovec_idx = 0, it_idx = 0;

	do {
@@ -1861,7 +1857,6 @@ virtio_dev_rx_async_submit_packed(struct virtio_net *dev,
		pkts_info[slot_idx].nr_buffers = num_buffers;
		pkts_info[slot_idx].mbuf = pkts[pkt_idx];
		iovec_idx += it_pool[it_idx].nr_segs;
-		segs_await += it_pool[it_idx].nr_segs;
		it_idx += 2;

		pkt_idx++;
@@ -1874,7 +1869,7 @@ virtio_dev_rx_async_submit_packed(struct virtio_net *dev,
		 * - unused async iov number is less than max vhost vector
		 */
		if (unlikely(pkt_burst_idx >= VHOST_ASYNC_BATCH_THRESHOLD ||
-			((VHOST_MAX_ASYNC_VEC >> 1) - segs_await < BUF_VECTOR_MAX))) {
+			(VHOST_MAX_ASYNC_VEC - iovec_idx < BUF_VECTOR_MAX))) {
			n_xfer = async->ops.transfer_data(dev->vid,
					queue_id, tdes, 0, pkt_burst_idx);
			if (likely(n_xfer >= 0)) {
@@ -1888,7 +1883,6 @@ virtio_dev_rx_async_submit_packed(struct virtio_net *dev,
			iovec_idx = 0;
			it_idx = 0;
-			segs_await = 0;

			if (unlikely(n_pkts < pkt_burst_idx)) {
				/*