From patchwork Wed Nov 10 06:06:41 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Ding, Xuan" <xuan.ding@intel.com>
X-Patchwork-Id: 104094
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Xuan Ding <xuan.ding@intel.com>
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, jiayu.hu@intel.com, yuanx.wang@intel.com,
 xingguang.he@intel.com, Xuan Ding <xuan.ding@intel.com>
Date: Wed, 10 Nov 2021 06:06:41 +0000
Message-Id: <20211110060641.7666-1-xuan.ding@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20211110054630.61524-1-xuan.ding@intel.com>
References: <20211110054630.61524-1-xuan.ding@intel.com>
Subject: [dpdk-dev] [PATCH v3] vhost: fix physical address mapping

When IOVA as PA mode is chosen, the IOVAs backing a virtually
contiguous memory region are likely to be discontinuous, which
requires page-by-page mapping for DMA devices. To be consistent,
this patch implements page-by-page mapping instead of mapping at
region granularity for both IOVA as VA and IOVA as PA modes.

Fixes: 7c61fa08b716 ("vhost: enable IOMMU for async vhost")

Signed-off-by: Xuan Ding <xuan.ding@intel.com>
---
v3:
* Fix commit title.

v2:
* Fix a format issue.
---
 lib/vhost/vhost.h      |   1 +
 lib/vhost/vhost_user.c | 105 ++++++++++++++++++++---------------------
 2 files changed, 53 insertions(+), 53 deletions(-)

diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index 7085e0885c..d246538ca5 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -355,6 +355,7 @@ struct vring_packed_desc_event {
 struct guest_page {
 	uint64_t guest_phys_addr;
 	uint64_t host_phys_addr;
+	uint64_t host_user_addr;
 	uint64_t size;
 };
 
diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
index a781346c4d..37cdedda3c 100644
--- a/lib/vhost/vhost_user.c
+++ b/lib/vhost/vhost_user.c
@@ -144,52 +144,55 @@ get_blk_size(int fd)
 }
 
 static int
-async_dma_map(struct rte_vhost_mem_region *region, bool do_map)
+async_dma_map(struct virtio_net *dev, bool do_map)
 {
-	uint64_t host_iova;
 	int ret = 0;
-
-	host_iova = rte_mem_virt2iova((void *)(uintptr_t)region->host_user_addr);
+	uint32_t i;
+	struct guest_page *page;
 
 	if (do_map) {
-		/* Add mapped region into the default container of DPDK. */
-		ret = rte_vfio_container_dma_map(RTE_VFIO_DEFAULT_CONTAINER_FD,
-						 region->host_user_addr,
-						 host_iova,
-						 region->size);
-		if (ret) {
-			/*
-			 * DMA device may bind with kernel driver, in this case,
-			 * we don't need to program IOMMU manually. However, if no
-			 * device is bound with vfio/uio in DPDK, and vfio kernel
-			 * module is loaded, the API will still be called and return
-			 * with ENODEV/ENOSUP.
-			 *
-			 * DPDK vfio only returns ENODEV/ENOSUP in very similar
-			 * situations(vfio either unsupported, or supported
-			 * but no devices found). Either way, no mappings could be
-			 * performed. We treat it as normal case in async path.
-			 */
-			if (rte_errno == ENODEV || rte_errno == ENOTSUP)
+		for (i = 0; i < dev->nr_guest_pages; i++) {
+			page = &dev->guest_pages[i];
+			ret = rte_vfio_container_dma_map(RTE_VFIO_DEFAULT_CONTAINER_FD,
+							 page->host_user_addr,
+							 page->host_phys_addr,
+							 page->size);
+			if (ret) {
+				/*
+				 * DMA device may bind with kernel driver, in this case,
+				 * we don't need to program IOMMU manually. However, if no
+				 * device is bound with vfio/uio in DPDK, and vfio kernel
+				 * module is loaded, the API will still be called and return
+				 * with ENODEV/ENOTSUP.
+				 *
+				 * DPDK vfio only returns ENODEV/ENOTSUP in very similar
+				 * situations (vfio either unsupported, or supported
+				 * but no devices found). Either way, no mappings could be
+				 * performed. We treat it as a normal case in the async path.
+				 */
+				if (rte_errno == ENODEV || rte_errno == ENOTSUP)
+					return 0;
+
+				VHOST_LOG_CONFIG(ERR, "DMA engine map failed\n");
+				/* DMA mapping errors won't stop VHOST_USER_SET_MEM_TABLE. */
 				return 0;
-
-			VHOST_LOG_CONFIG(ERR, "DMA engine map failed\n");
-			/* DMA mapping errors won't stop VHST_USER_SET_MEM_TABLE. */
-			return 0;
+			}
 		}
 	} else {
-		/* Remove mapped region from the default container of DPDK. */
-		ret = rte_vfio_container_dma_unmap(RTE_VFIO_DEFAULT_CONTAINER_FD,
-						   region->host_user_addr,
-						   host_iova,
-						   region->size);
-		if (ret) {
-			/* like DMA map, ignore the kernel driver case when unmap. */
-			if (rte_errno == EINVAL)
-				return 0;
+		for (i = 0; i < dev->nr_guest_pages; i++) {
+			page = &dev->guest_pages[i];
+			ret = rte_vfio_container_dma_unmap(RTE_VFIO_DEFAULT_CONTAINER_FD,
+							   page->host_user_addr,
+							   page->host_phys_addr,
+							   page->size);
+			if (ret) {
+				/* Like DMA map, ignore the kernel driver case when unmapping. */
+				if (rte_errno == EINVAL)
+					return 0;
 
-			VHOST_LOG_CONFIG(ERR, "DMA engine unmap failed\n");
-			return ret;
+				VHOST_LOG_CONFIG(ERR, "DMA engine unmap failed\n");
+				return ret;
+			}
 		}
 	}
 
@@ -205,12 +208,12 @@ free_mem_region(struct virtio_net *dev)
 	if (!dev || !dev->mem)
 		return;
 
+	if (dev->async_copy && rte_vfio_is_enabled("vfio"))
+		async_dma_map(dev, false);
+
 	for (i = 0; i < dev->mem->nregions; i++) {
 		reg = &dev->mem->regions[i];
 		if (reg->host_user_addr) {
-			if (dev->async_copy && rte_vfio_is_enabled("vfio"))
-				async_dma_map(reg, false);
-
 			munmap(reg->mmap_addr, reg->mmap_size);
 			close(reg->fd);
 		}
@@ -978,7 +981,7 @@ vhost_user_set_vring_base(struct virtio_net **pdev,
 
 static int
 add_one_guest_page(struct virtio_net *dev, uint64_t guest_phys_addr,
-		   uint64_t host_phys_addr, uint64_t size)
+		   uint64_t host_phys_addr, uint64_t host_user_addr, uint64_t size)
 {
 	struct guest_page *page, *last_page;
 	struct guest_page *old_pages;
@@ -1009,6 +1012,7 @@ add_one_guest_page(struct virtio_net *dev, uint64_t guest_phys_addr,
 	page = &dev->guest_pages[dev->nr_guest_pages++];
 	page->guest_phys_addr = guest_phys_addr;
 	page->host_phys_addr = host_phys_addr;
+	page->host_user_addr = host_user_addr;
 	page->size = size;
 
 	return 0;
@@ -1028,7 +1032,8 @@ add_guest_pages(struct virtio_net *dev, struct rte_vhost_mem_region *reg,
 	size = page_size - (guest_phys_addr & (page_size - 1));
 	size = RTE_MIN(size, reg_size);
 
-	if (add_one_guest_page(dev, guest_phys_addr, host_phys_addr, size) < 0)
+	if (add_one_guest_page(dev, guest_phys_addr, host_phys_addr,
+			       host_user_addr, size) < 0)
 		return -1;
 
 	host_user_addr += size;
@@ -1040,7 +1045,7 @@ add_guest_pages(struct virtio_net *dev, struct rte_vhost_mem_region *reg,
 		host_phys_addr = rte_mem_virt2iova((void *)(uintptr_t)
 						   host_user_addr);
 		if (add_one_guest_page(dev, guest_phys_addr, host_phys_addr,
-				size) < 0)
+				host_user_addr, size) < 0)
 			return -1;
 
 		host_user_addr += size;
@@ -1215,7 +1220,6 @@ vhost_user_mmap_region(struct virtio_net *dev,
 	uint64_t mmap_size;
 	uint64_t alignment;
 	int populate;
-	int ret;
 
 	/* Check for memory_size + mmap_offset overflow */
 	if (mmap_offset >= -region->size) {
@@ -1274,14 +1278,6 @@ vhost_user_mmap_region(struct virtio_net *dev,
 			VHOST_LOG_CONFIG(ERR, "adding guest pages to region failed.\n");
 			return -1;
 		}
-
-		if (rte_vfio_is_enabled("vfio")) {
-			ret = async_dma_map(region, true);
-			if (ret) {
-				VHOST_LOG_CONFIG(ERR, "Configure IOMMU for DMA engine failed\n");
-				return -1;
-			}
-		}
 	}
 
 	VHOST_LOG_CONFIG(INFO,
@@ -1420,6 +1416,9 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
 		dev->mem->nregions++;
 	}
 
+	if (dev->async_copy && rte_vfio_is_enabled("vfio"))
+		async_dma_map(dev, true);
+
 	if (vhost_user_postcopy_register(dev, main_fd, msg) < 0)
 		goto free_mem_table;
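
For context on why per-page granularity matters here, below is a minimal,
self-contained sketch of the page-by-page mapping scheme this patch adopts.
It is illustrative only, not code from the patch: map_buffer_per_page() and
its parameters (buf, len, pg_sz) are hypothetical names, and it assumes an
initialized DPDK EAL with the default VFIO container available.

#include <stdint.h>
#include <rte_memory.h>
#include <rte_vfio.h>

/*
 * Sketch: map a virtually contiguous, page-aligned buffer for DMA one
 * page at a time. Under IOVA as PA mode, rte_mem_virt2iova() can return
 * non-contiguous IOVAs for adjacent pages of the same VA range, so a
 * single region-sized map call could cover the wrong physical pages;
 * one map call per page is safe in both IOVA as VA and IOVA as PA modes.
 */
static int
map_buffer_per_page(void *buf, uint64_t len, uint64_t pg_sz)
{
	uint64_t va = (uint64_t)(uintptr_t)buf;
	uint64_t off;

	for (off = 0; off < len; off += pg_sz) {
		/* Each page may live at an unrelated physical address. */
		rte_iova_t iova = rte_mem_virt2iova((void *)(uintptr_t)(va + off));

		if (iova == RTE_BAD_IOVA)
			return -1;

		if (rte_vfio_container_dma_map(RTE_VFIO_DEFAULT_CONTAINER_FD,
				va + off, iova, pg_sz) < 0)
			return -1;
	}

	return 0;
}

This is also why the patch adds host_user_addr to struct guest_page: with
one VFIO mapping per guest page, the teardown path (async_dma_map(dev,
false)) must replay the same per-page (VA, IOVA, size) triples to unmap
them, and the region's base address alone no longer identifies them.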