From patchwork Wed Jan 19 15:10:16 2022
X-Patchwork-Submitter: "Ding, Xuan" <xuan.ding@intel.com>
X-Patchwork-Id: 106056
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: xuan.ding@intel.com
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, jiayu.hu@intel.com, yuanx.wang@intel.com, Xuan Ding <xuan.ding@intel.com>
Subject: [PATCH v2 2/2] vhost: fix physical address mapping
Date: Wed, 19 Jan 2022 15:10:16 +0000
Message-Id: <20220119151016.9970-3-xuan.ding@intel.com>
In-Reply-To: <20220119151016.9970-1-xuan.ding@intel.com>
References: <20220119151016.9970-1-xuan.ding@intel.com>
X-Mailer: git-send-email 2.17.1

From: Xuan Ding <xuan.ding@intel.com>

When IOVA as PA mode is chosen, the IOVA space is likely to be
discontinuous, which requires page-by-page mapping for DMA devices.
For consistency, this patch implements page-by-page mapping instead
of mapping at region granularity for both IOVA as VA and IOVA as PA
modes.
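To make the granularity change concrete, the loop below is a standalone
sketch, not part of the patch: the helper name map_guest_pages and the
simplified error handling are illustrative only. It shows one VFIO DMA
mapping per tracked guest page, using the same
rte_vfio_container_dma_map() API the patch calls:

    #include <stdint.h>
    #include <rte_vfio.h>

    /* Mirrors the patched struct guest_page in lib/vhost/vhost.h. */
    struct guest_page {
            uint64_t guest_phys_addr;
            uint64_t host_iova;
            uint64_t host_user_addr;
            uint64_t size;
    };

    /*
     * Map each tracked page individually: with IOVA as PA, the
     * host_iova of adjacent pages need not be contiguous, so a single
     * region-sized mapping cannot describe them.
     */
    static int
    map_guest_pages(struct guest_page *pages, uint32_t nr_pages)
    {
            uint32_t i;

            for (i = 0; i < nr_pages; i++) {
                    int ret = rte_vfio_container_dma_map(
                                    RTE_VFIO_DEFAULT_CONTAINER_FD,
                                    pages[i].host_user_addr,
                                    pages[i].host_iova,
                                    pages[i].size);
                    if (ret)
                            return ret; /* the patch tolerates ENODEV here */
            }
            return 0;
    }

Under IOVA as VA a whole region can be covered by a single call, since
the IOVA equals the host virtual address and is contiguous; under IOVA
as PA only per-page granularity is safe.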
Fixes: 7c61fa08b716 ("vhost: enable IOMMU for async vhost")

Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/vhost/vhost.h      |   1 +
 lib/vhost/vhost_user.c | 116 ++++++++++++++++++++---------------------
 2 files changed, 57 insertions(+), 60 deletions(-)

diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index ca7f58039d..9521ae56da 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -355,6 +355,7 @@ struct vring_packed_desc_event {
 struct guest_page {
     uint64_t guest_phys_addr;
     uint64_t host_iova;
+    uint64_t host_user_addr;
     uint64_t size;
 };
 
diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
index 95c9df697e..48c08716ba 100644
--- a/lib/vhost/vhost_user.c
+++ b/lib/vhost/vhost_user.c
@@ -143,57 +143,56 @@ get_blk_size(int fd)
     return ret == -1 ? (uint64_t)-1 : (uint64_t)stat.st_blksize;
 }
 
-static int
-async_dma_map(struct rte_vhost_mem_region *region, bool do_map)
+static void
+async_dma_map(struct virtio_net *dev, bool do_map)
 {
-    uint64_t host_iova;
     int ret = 0;
-
-    host_iova = rte_mem_virt2iova((void *)(uintptr_t)region->host_user_addr);
+    uint32_t i;
+    struct guest_page *page;
 
     if (do_map) {
-        /* Add mapped region into the default container of DPDK. */
-        ret = rte_vfio_container_dma_map(RTE_VFIO_DEFAULT_CONTAINER_FD,
-                         region->host_user_addr,
-                         host_iova,
-                         region->size);
-        if (ret) {
-            /*
-             * DMA device may bind with kernel driver, in this case,
-             * we don't need to program IOMMU manually. However, if no
-             * device is bound with vfio/uio in DPDK, and vfio kernel
-             * module is loaded, the API will still be called and return
-             * with ENODEV/ENOSUP.
-             *
-             * DPDK vfio only returns ENODEV/ENOSUP in very similar
-             * situations(vfio either unsupported, or supported
-             * but no devices found). Either way, no mappings could be
-             * performed. We treat it as normal case in async path.
-             */
-            if (rte_errno == ENODEV || rte_errno == ENOTSUP)
-                return 0;
-
-            VHOST_LOG_CONFIG(ERR, "DMA engine map failed\n");
-            /* DMA mapping errors won't stop VHST_USER_SET_MEM_TABLE. */
-            return 0;
+        for (i = 0; i < dev->nr_guest_pages; i++) {
+            page = &dev->guest_pages[i];
+            ret = rte_vfio_container_dma_map(RTE_VFIO_DEFAULT_CONTAINER_FD,
+                             page->host_user_addr,
+                             page->host_iova,
+                             page->size);
+            if (ret) {
+                /*
+                 * DMA device may bind with kernel driver, in this case,
+                 * we don't need to program IOMMU manually. However, if no
+                 * device is bound with vfio/uio in DPDK, and vfio kernel
+                 * module is loaded, the API will still be called and return
+                 * with ENODEV.
+                 *
+                 * DPDK vfio only returns ENODEV in very similar situations
+                 * (vfio either unsupported, or supported but no devices found).
+                 * Either way, no mappings could be performed. We treat it as
+                 * normal case in async path. This is a workaround.
+                 */
+                if (rte_errno == ENODEV)
+                    return;
+
+                /* DMA mapping errors won't stop VHOST_USER_SET_MEM_TABLE. */
+                VHOST_LOG_CONFIG(ERR, "DMA engine map failed\n");
+            }
         }
     } else {
-        /* Remove mapped region from the default container of DPDK. */
-        ret = rte_vfio_container_dma_unmap(RTE_VFIO_DEFAULT_CONTAINER_FD,
-                           region->host_user_addr,
-                           host_iova,
-                           region->size);
-        if (ret) {
-            /* like DMA map, ignore the kernel driver case when unmap. */
-            if (rte_errno == EINVAL)
-                return 0;
-
-            VHOST_LOG_CONFIG(ERR, "DMA engine unmap failed\n");
-            return ret;
+        for (i = 0; i < dev->nr_guest_pages; i++) {
+            page = &dev->guest_pages[i];
+            ret = rte_vfio_container_dma_unmap(RTE_VFIO_DEFAULT_CONTAINER_FD,
+                               page->host_user_addr,
+                               page->host_iova,
+                               page->size);
+            if (ret) {
+                /* like DMA map, ignore the kernel driver case when unmap. */
+                if (rte_errno == EINVAL)
+                    return;
+
+                VHOST_LOG_CONFIG(ERR, "DMA engine unmap failed\n");
+            }
         }
     }
-
-    return ret;
 }
 
 static void
@@ -205,12 +204,12 @@ free_mem_region(struct virtio_net *dev)
     if (!dev || !dev->mem)
         return;
 
+    if (dev->async_copy && rte_vfio_is_enabled("vfio"))
+        async_dma_map(dev, false);
+
     for (i = 0; i < dev->mem->nregions; i++) {
         reg = &dev->mem->regions[i];
         if (reg->host_user_addr) {
-            if (dev->async_copy && rte_vfio_is_enabled("vfio"))
-                async_dma_map(reg, false);
-
             munmap(reg->mmap_addr, reg->mmap_size);
             close(reg->fd);
         }
@@ -978,7 +977,7 @@ vhost_user_set_vring_base(struct virtio_net **pdev,
 
 static int
 add_one_guest_page(struct virtio_net *dev, uint64_t guest_phys_addr,
-           uint64_t host_iova, uint64_t size)
+           uint64_t host_iova, uint64_t host_user_addr, uint64_t size)
 {
     struct guest_page *page, *last_page;
     struct guest_page *old_pages;
@@ -999,8 +998,9 @@ add_one_guest_page(struct virtio_net *dev, uint64_t guest_phys_addr,
     if (dev->nr_guest_pages > 0) {
         last_page = &dev->guest_pages[dev->nr_guest_pages - 1];
         /* merge if the two pages are continuous */
-        if (host_iova == last_page->host_iova +
-                 last_page->size) {
+        if (host_iova == last_page->host_iova + last_page->size
+            && guest_phys_addr == last_page->guest_phys_addr + last_page->size
+            && host_user_addr == last_page->host_user_addr + last_page->size) {
             last_page->size += size;
             return 0;
         }
@@ -1009,6 +1009,7 @@ add_one_guest_page(struct virtio_net *dev, uint64_t guest_phys_addr,
     page = &dev->guest_pages[dev->nr_guest_pages++];
     page->guest_phys_addr = guest_phys_addr;
     page->host_iova = host_iova;
+    page->host_user_addr = host_user_addr;
     page->size = size;
 
     return 0;
@@ -1028,7 +1029,8 @@ add_guest_pages(struct virtio_net *dev, struct rte_vhost_mem_region *reg,
     size = page_size - (guest_phys_addr & (page_size - 1));
     size = RTE_MIN(size, reg_size);
 
-    if (add_one_guest_page(dev, guest_phys_addr, host_iova, size) < 0)
+    if (add_one_guest_page(dev, guest_phys_addr, host_iova,
+                   host_user_addr, size) < 0)
         return -1;
 
     host_user_addr += size;
@@ -1040,7 +1042,7 @@ add_guest_pages(struct virtio_net *dev, struct rte_vhost_mem_region *reg,
         host_iova = rte_mem_virt2iova((void *)(uintptr_t)
                           host_user_addr);
         if (add_one_guest_page(dev, guest_phys_addr, host_iova,
-                       size) < 0)
+                       host_user_addr, size) < 0)
             return -1;
 
         host_user_addr += size;
@@ -1215,7 +1217,6 @@ vhost_user_mmap_region(struct virtio_net *dev,
     uint64_t mmap_size;
     uint64_t alignment;
     int populate;
-    int ret;
 
     /* Check for memory_size + mmap_offset overflow */
     if (mmap_offset >= -region->size) {
@@ -1274,14 +1275,6 @@ vhost_user_mmap_region(struct virtio_net *dev,
             VHOST_LOG_CONFIG(ERR, "adding guest pages to region failed.\n");
             return -1;
         }
-
-        if (rte_vfio_is_enabled("vfio")) {
-            ret = async_dma_map(region, true);
-            if (ret) {
-                VHOST_LOG_CONFIG(ERR, "Configure IOMMU for DMA engine failed\n");
-                return -1;
-            }
-        }
     }
 
     VHOST_LOG_CONFIG(INFO,
@@ -1420,6 +1413,9 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
         dev->mem->nregions++;
     }
 
+    if (dev->async_copy && rte_vfio_is_enabled("vfio"))
+        async_dma_map(dev, true);
+
     if (vhost_user_postcopy_register(dev, main_fd, msg) < 0)
         goto free_mem_table;
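
One consequence of the diff above deserves a note: add_one_guest_page()
now merges a new page into the previous entry only when the two are
contiguous in guest-physical, IOVA and host-virtual space at once,
because async_dma_map() passes each entry's host_user_addr, host_iova
and size straight to VFIO. Restated as a standalone predicate
(hypothetical helper name, same fields as the patched struct
guest_page):

    #include <stdbool.h>
    #include <stdint.h>

    struct guest_page {
            uint64_t guest_phys_addr;
            uint64_t host_iova;
            uint64_t host_user_addr;
            uint64_t size;
    };

    /*
     * A new page may extend the previous entry only if the
     * guest-physical, IOVA and host-virtual ranges all continue
     * without a gap; a hole in any one space would make the merged
     * entry claim a mapping that does not exist.
     */
    static bool
    pages_mergeable(const struct guest_page *prev, uint64_t guest_phys_addr,
                    uint64_t host_iova, uint64_t host_user_addr)
    {
            return guest_phys_addr == prev->guest_phys_addr + prev->size &&
                   host_iova == prev->host_iova + prev->size &&
                   host_user_addr == prev->host_user_addr + prev->size;
    }

Checking host_iova contiguity alone, as before the patch, could merge
entries whose host virtual addresses are not adjacent; a single VFIO
mapping built from such a merged entry would then cover memory the
process never mapped.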