From patchwork Thu Oct 4 08:13:58 2018
X-Patchwork-Submitter: Maxime Coquelin
X-Patchwork-Id: 46035
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Maxime Coquelin
To: dev@dpdk.org, tiwei.bie@intel.com, zhihong.wang@intel.com,
	jfreimann@redhat.com, nicknickolaev@gmail.com, i.maximets@samsung.com,
	bruce.richardson@intel.com, alejandro.lucero@netronome.com
Cc: dgilbert@redhat.com, stable@dpdk.org, Maxime Coquelin
Date: Thu, 4 Oct 2018 10:13:58 +0200
Message-Id: <20181004081403.8039-15-maxime.coquelin@redhat.com>
In-Reply-To: <20181004081403.8039-1-maxime.coquelin@redhat.com>
References: <20181004081403.8039-1-maxime.coquelin@redhat.com>
Subject: [dpdk-dev] [PATCH v3 14/19] vhost: avoid useless VhostUserMemory copy

The VHOST_USER_SET_MEM_TABLE payload is copied when the request is
handled, whereas it could be referenced directly. This is not very
important in itself, but we will next need to update the payload and
send it back to QEMU.
Signed-off-by: Dr. David Alan Gilbert
Signed-off-by: Maxime Coquelin
Acked-by: Ilya Maximets
---
 lib/librte_vhost/vhost_user.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/lib/librte_vhost/vhost_user.c b/lib/librte_vhost/vhost_user.c
index d61be2da6..1c6d8d2de 100644
--- a/lib/librte_vhost/vhost_user.c
+++ b/lib/librte_vhost/vhost_user.c
@@ -829,7 +829,7 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
 			int main_fd __rte_unused)
 {
 	struct virtio_net *dev = *pdev;
-	struct VhostUserMemory memory = msg->payload.memory;
+	struct VhostUserMemory *memory = &msg->payload.memory;
 	struct rte_vhost_mem_region *reg;
 	void *mmap_addr;
 	uint64_t mmap_size;
@@ -839,17 +839,17 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
 	int populate;
 	int fd;
 
-	if (memory.nregions > VHOST_MEMORY_MAX_NREGIONS) {
+	if (memory->nregions > VHOST_MEMORY_MAX_NREGIONS) {
 		RTE_LOG(ERR, VHOST_CONFIG,
-			"too many memory regions (%u)\n", memory.nregions);
+			"too many memory regions (%u)\n", memory->nregions);
 		return VH_RESULT_ERR;
 	}
 
-	if (dev->mem && !vhost_memory_changed(&memory, dev->mem)) {
+	if (dev->mem && !vhost_memory_changed(memory, dev->mem)) {
 		RTE_LOG(INFO, VHOST_CONFIG,
 			"(%d) memory regions not changed\n", dev->vid);
 
-		for (i = 0; i < memory.nregions; i++)
+		for (i = 0; i < memory->nregions; i++)
 			close(msg->fds[i]);
 
 		return VH_RESULT_OK;
@@ -881,25 +881,25 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
 	}
 
 	dev->mem = rte_zmalloc("vhost-mem-table", sizeof(struct rte_vhost_memory) +
-		sizeof(struct rte_vhost_mem_region) * memory.nregions, 0);
+		sizeof(struct rte_vhost_mem_region) * memory->nregions, 0);
 	if (dev->mem == NULL) {
 		RTE_LOG(ERR, VHOST_CONFIG,
 			"(%d) failed to allocate memory for dev->mem\n",
 			dev->vid);
 		return VH_RESULT_ERR;
 	}
-	dev->mem->nregions = memory.nregions;
+	dev->mem->nregions = memory->nregions;
 
-	for (i = 0; i < memory.nregions; i++) {
+	for (i = 0; i < memory->nregions; i++) {
 		fd = msg->fds[i];
 		reg = &dev->mem->regions[i];
 
-		reg->guest_phys_addr = memory.regions[i].guest_phys_addr;
-		reg->guest_user_addr = memory.regions[i].userspace_addr;
-		reg->size = memory.regions[i].memory_size;
+		reg->guest_phys_addr = memory->regions[i].guest_phys_addr;
+		reg->guest_user_addr = memory->regions[i].userspace_addr;
+		reg->size = memory->regions[i].memory_size;
 		reg->fd = fd;
 
-		mmap_offset = memory.regions[i].mmap_offset;
+		mmap_offset = memory->regions[i].mmap_offset;
 
 		/* Check for memory_size + mmap_offset overflow */
 		if (mmap_offset >= -reg->size) {
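
For reference, here is a minimal standalone sketch of the pattern the patch
applies: referencing the message payload in place instead of copying it, so
that updates made while handling the request end up in the message that is
later sent back. The names (mem_table, mem_region, handle_set_mem_table) are
simplified stand-ins, not the actual vhost-user definitions.

#include <inttypes.h>
#include <stdio.h>

struct mem_region {
	uint64_t guest_phys_addr;
	uint64_t size;
};

struct mem_table {
	uint32_t nregions;
	struct mem_region regions[8];
};

struct msg {
	struct mem_table memory; /* payload carried by the message */
};

/* Reference the payload in place rather than taking a local copy. */
static void
handle_set_mem_table(struct msg *m)
{
	struct mem_table *memory = &m->memory;
	uint32_t i;

	for (i = 0; i < memory->nregions; i++) {
		/*
		 * Writing through the pointer updates the message itself,
		 * so a subsequent reply carries the adjusted value.
		 */
		memory->regions[i].guest_phys_addr += 0x1000;
	}
}

int
main(void)
{
	struct msg m = {
		.memory = {
			.nregions = 1,
			.regions = { { .guest_phys_addr = 0x10000,
				       .size = 0x2000 } },
		},
	};

	handle_set_mem_table(&m);
	printf("gpa after handling: 0x%" PRIx64 "\n",
	       m.memory.regions[0].guest_phys_addr);
	return 0;
}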