From patchwork Wed Sep  1 05:30:44 2021
X-Patchwork-Submitter: "Ding, Xuan"
X-Patchwork-Id: 97660
X-Patchwork-Delegate: thomas@monjalon.net
From: Xuan Ding
To: dev@dpdk.org, anatoly.burakov@intel.com, maxime.coquelin@redhat.com,
	chenbo.xia@intel.com
Cc: jiayu.hu@intel.com, bruce.richardson@intel.com, sunil.pai.g@intel.com,
	Xuan Ding
Date: Wed,  1 Sep 2021 05:30:44 +0000
Message-Id: <20210901053044.109901-3-xuan.ding@intel.com>
In-Reply-To: <20210901053044.109901-1-xuan.ding@intel.com>
References: <20210901053044.109901-1-xuan.ding@intel.com>
Subject: [dpdk-dev] [PATCH 2/2] vhost: enable IOMMU for async vhost

The use of an IOMMU brings advantages such as isolation and address
translation. This patch extends the capability of the DMA engine to use
the IOMMU when the DMA device is bound to vfio. When the memory table is
set, the guest memory is mapped into the default container of DPDK.

Signed-off-by: Xuan Ding
---
 lib/vhost/vhost_user.c | 46 +++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 45 insertions(+), 1 deletion(-)

diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
index 031c578e54..48617fc708 100644
--- a/lib/vhost/vhost_user.c
+++ b/lib/vhost/vhost_user.c
@@ -45,6 +45,7 @@
 #include <rte_common.h>
 #include <rte_malloc.h>
 #include <rte_log.h>
+#include <rte_vfio.h>
 
 #include "iotlb.h"
 #include "vhost.h"
@@ -141,6 +142,36 @@ get_blk_size(int fd)
 	return ret == -1 ? (uint64_t)-1 : (uint64_t)stat.st_blksize;
 }
 
+static int
+async_dma_map(struct rte_vhost_mem_region *region, bool do_map)
+{
+	int ret = 0;
+	uint64_t host_iova;
+	host_iova = rte_mem_virt2iova((void *)(uintptr_t)region->host_user_addr);
+	if (do_map) {
+		/* Add mapped region into the default container of DPDK. */
+		ret = rte_vfio_container_dma_map(RTE_VFIO_DEFAULT_CONTAINER_FD,
+						 region->host_user_addr,
+						 host_iova,
+						 region->size);
+		if (ret) {
+			VHOST_LOG_CONFIG(ERR, "DMA engine map failed\n");
+			return ret;
+		}
+	} else {
+		/* Remove mapped region from the default container of DPDK. */
+		ret = rte_vfio_container_dma_unmap(RTE_VFIO_DEFAULT_CONTAINER_FD,
+						   region->host_user_addr,
+						   host_iova,
+						   region->size);
+		if (ret) {
+			VHOST_LOG_CONFIG(ERR, "DMA engine unmap failed\n");
+			return ret;
+		}
+	}
+	return ret;
+}
+
 static void
 free_mem_region(struct virtio_net *dev)
 {
@@ -153,6 +184,9 @@ free_mem_region(struct virtio_net *dev)
 	for (i = 0; i < dev->mem->nregions; i++) {
 		reg = &dev->mem->regions[i];
 		if (reg->host_user_addr) {
+			if (dev->async_copy && rte_vfio_is_enabled("vfio"))
+				async_dma_map(reg, false);
+
 			munmap(reg->mmap_addr, reg->mmap_size);
 			close(reg->fd);
 		}
@@ -1157,6 +1191,7 @@ vhost_user_mmap_region(struct virtio_net *dev,
 	uint64_t mmap_size;
 	uint64_t alignment;
 	int populate;
+	int ret;
 
 	/* Check for memory_size + mmap_offset overflow */
 	if (mmap_offset >= -region->size) {
@@ -1210,13 +1245,22 @@ vhost_user_mmap_region(struct virtio_net *dev,
 	region->mmap_size = mmap_size;
 	region->host_user_addr = (uint64_t)(uintptr_t)mmap_addr + mmap_offset;
 
-	if (dev->async_copy)
+	if (dev->async_copy) {
 		if (add_guest_pages(dev, region, alignment) < 0) {
 			VHOST_LOG_CONFIG(ERR,
 					"adding guest pages to region failed.\n");
 			return -1;
 		}
 
+		if (rte_vfio_is_enabled("vfio")) {
+			ret = async_dma_map(region, true);
+			if (ret) {
+				VHOST_LOG_CONFIG(ERR, "Configure IOMMU for DMA engine failed\n");
+				return -1;
+			}
+		}
+	}
+
 	VHOST_LOG_CONFIG(INFO,
 			"guest memory region size: 0x%" PRIx64 "\n"
 			"\t guest physical addr: 0x%" PRIx64 "\n"
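
For reference (not part of the diff): the new async_dma_map() helper boils
down to the rte_vfio_container_dma_map()/rte_vfio_container_dma_unmap() pair
on DPDK's default VFIO container. Below is a minimal, self-contained sketch
of that sequence for an arbitrary host-virtual region; the function names are
made up for illustration only, and it assumes EAL is already initialized and
the DMA device is bound to vfio-pci.

/* Illustrative sketch only -- not part of the patch. */
#include <rte_memory.h>
#include <rte_vfio.h>

/* Expose a host-virtual region to the devices attached to DPDK's
 * default VFIO container, so a vfio-bound DMA engine can reach it. */
static int
example_dma_map_region(void *addr, uint64_t len)
{
	uint64_t vaddr = (uint64_t)(uintptr_t)addr;
	uint64_t iova;

	if (!rte_vfio_is_enabled("vfio"))
		return 0;	/* no VFIO in use, nothing to program */

	/* Resolve the IOVA the device should use for this address. */
	iova = rte_mem_virt2iova(addr);
	if (iova == RTE_BAD_IOVA)
		return -1;

	return rte_vfio_container_dma_map(RTE_VFIO_DEFAULT_CONTAINER_FD,
			vaddr, iova, len);
}

/* Undo the mapping before the region is unmapped or freed. */
static int
example_dma_unmap_region(void *addr, uint64_t len)
{
	uint64_t vaddr = (uint64_t)(uintptr_t)addr;
	uint64_t iova = rte_mem_virt2iova(addr);

	if (!rte_vfio_is_enabled("vfio"))
		return 0;

	return rte_vfio_container_dma_unmap(RTE_VFIO_DEFAULT_CONTAINER_FD,
			vaddr, iova, len);
}

A mapping placed in the default container (RTE_VFIO_DEFAULT_CONTAINER_FD) is
visible to every device EAL has attached to that container, which is why the
patch does not need to know which particular DMA device will later access the
guest memory.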