From patchwork Tue Sep 10 03:28:54 2024
X-Patchwork-Submitter: Yunjian Wang
X-Patchwork-Id: 143833
X-Patchwork-Delegate: thomas@monjalon.net
From: Yunjian Wang
Cc: Lipei Liang
Subject: [PATCH] vfio: check if IOVA is already mapped before DMA map
Date: Tue, 10 Sep 2024 11:28:54 +0800
Message-ID: <1725938934-48952-1-git-send-email-wangyunjian@huawei.com>
List-Id: DPDK patches and discussions

From: Lipei Liang

If two contiguous memory areas A and B are mapped, the current
implementation merges the two segments into a single entry, area C. If
A and B are then mapped again, the sorted mem maps contain A, C, B;
because A and B are separated by C, these segments cannot be merged. In
other words, when two adjacent segments A and B are mapped twice, the
mem maps end up holding two entries covering A or B. When the adjacent
areas A and B are then partially unmapped, entry C is left behind as a
residual entry in the mem maps. If another memory area D, whose size
differs from A but which lies within area C, is mapped afterwards,
find_user_mem_maps() mistakenly picks area C when unmapping area D.
Since D and C have different chunk sizes, unmapping area D fails.

Fix this by checking whether the IOVA range is already mapped before
doing the DMA map: if the range lies entirely within an existing entry,
return without performing the VFIO map; if it overlaps an existing
entry, fail the map and set rte_errno to ENOTSUP.
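An illustrative, self-contained sketch of the interval check described
above (not part of the patch): the names mem_map and classify_iova are
hypothetical stand-ins for the internal user_mem_map and
check_iova_in_map, and the addresses are made up. It shows the three
outcomes: remapping A is treated as already mapped, a differently sized
area D inside the merged entry C is reported as an overlap, and a
disjoint area is not found at all.

#include <stdint.h>
#include <stdio.h>

struct mem_map {
	uint64_t iova;
	uint64_t len;
	uint64_t chunk;	/* chunk size the entry was mapped with */
};

/* 0: no intersection, 1: already mapped, -1: overlaps but is not an exact chunk */
static int
classify_iova(const struct mem_map *maps, int n_maps, uint64_t iova, uint64_t len)
{
	uint64_t iova_end = iova + len;
	int i;

	for (i = 0; i < n_maps; i++) {
		uint64_t map_end = maps[i].iova + maps[i].len;
		uint64_t off = iova - maps[i].iova;

		/* disjoint: keep looking */
		if (maps[i].iova >= iova_end || iova >= map_end)
			continue;

		/* fully contained, chunk-sized and chunk-aligned: already mapped */
		if (maps[i].iova <= iova && iova_end <= map_end &&
				len == maps[i].chunk && (off % maps[i].chunk) == 0)
			return 1;

		/* anything else that intersects is an overlap */
		return -1;
	}
	return 0;
}

int
main(void)
{
	/* entry C: areas A [0x0, 0x10000) and B [0x10000, 0x20000) merged, chunk 0x10000 */
	struct mem_map maps[] = { { 0x0, 0x20000, 0x10000 } };

	printf("map A again: %d\n", classify_iova(maps, 1, 0x0, 0x10000));              /*  1 */
	printf("map D (smaller, inside C): %d\n", classify_iova(maps, 1, 0x8000, 0x4000)); /* -1 */
	printf("map E (disjoint): %d\n", classify_iova(maps, 1, 0x40000, 0x10000));     /*  0 */
	return 0;
}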
Fixes: 56259f7fc010 ("vfio: allow partially unmapping adjacent memory")
Cc: stable@dpdk.org

Signed-off-by: Lipei Liang
---
 lib/eal/linux/eal_vfio.c | 52 ++++++++++++++++++++++++++++++++++++++--
 1 file changed, 50 insertions(+), 2 deletions(-)

diff --git a/lib/eal/linux/eal_vfio.c b/lib/eal/linux/eal_vfio.c
index 4e69e72e3b..cd32284fc6 100644
--- a/lib/eal/linux/eal_vfio.c
+++ b/lib/eal/linux/eal_vfio.c
@@ -216,6 +216,39 @@ copy_maps(struct user_mem_maps *user_mem_maps, struct user_mem_map *add_maps,
 	}
 }
 
+/**
+ * Check if an iova area is already mapped or overlaps an existing mapping.
+ * @return
+ *   0 if the iova area does not intersect any existing mapping
+ *   1 if the iova area is already mapped
+ *   -1 if the iova area overlaps an existing mapping
+ */
+static int
+check_iova_in_map(struct user_mem_maps *user_mem_maps, uint64_t iova, uint64_t len)
+{
+	int i;
+	uint64_t iova_end = iova + len;
+	uint64_t map_iova_end;
+	uint64_t map_iova_off;
+	uint64_t map_chunk;
+
+	for (i = 0; i < user_mem_maps->n_maps; i++) {
+		map_iova_off = iova - user_mem_maps->maps[i].iova;
+		map_iova_end = user_mem_maps->maps[i].iova + user_mem_maps->maps[i].len;
+		map_chunk = user_mem_maps->maps[i].chunk;
+
+		if ((user_mem_maps->maps[i].iova >= iova_end) || (iova >= map_iova_end))
+			continue;
+
+		if ((user_mem_maps->maps[i].iova <= iova) && (iova_end <= map_iova_end) &&
+				(len == map_chunk) && ((map_iova_off % map_chunk) == 0))
+			return 1;
+
+		return -1;
+	}
+	return 0;
+}
+
 /* try merging two maps into one, return 1 if succeeded */
 static int
 merge_map(struct user_mem_map *left, struct user_mem_map *right)
@@ -1873,6 +1906,7 @@ container_dma_map(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
 	struct user_mem_maps *user_mem_maps;
 	bool has_partial_unmap;
 	int ret = 0;
+	int iova_check = 0;
 
 	user_mem_maps = &vfio_cfg->mem_maps;
 	rte_spinlock_recursive_lock(&user_mem_maps->lock);
@@ -1882,6 +1916,22 @@ container_dma_map(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
 		ret = -1;
 		goto out;
 	}
+
+	/* do we have partial unmap support? */
+	has_partial_unmap = vfio_cfg->vfio_iommu_type->partial_unmap;
+	/* check if we can map this region */
+	if (!has_partial_unmap) {
+		iova_check = check_iova_in_map(user_mem_maps, iova, len);
+		if (iova_check == 1) {
+			goto out;
+		} else if (iova_check < 0) {
+			EAL_LOG(ERR, "Overlapping DMA regions not allowed");
+			rte_errno = ENOTSUP;
+			ret = -1;
+			goto out;
+		}
+	}
+
 	/* map the entry */
 	if (vfio_dma_mem_map(vfio_cfg, vaddr, iova, len, 1)) {
 		/* technically, this will fail if there are currently no devices
@@ -1895,8 +1945,6 @@ container_dma_map(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
 		ret = -1;
 		goto out;
 	}
-	/* do we have partial unmap support? */
-	has_partial_unmap = vfio_cfg->vfio_iommu_type->partial_unmap;
 
 	/* create new user mem map entry */
 	new_map = &user_mem_maps->maps[user_mem_maps->n_maps++];
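For reference, a hedged caller-side sketch of the expected behaviour
through the public API, assuming a DPDK build that contains this patch
and an IOMMU type without partial-unmap support (e.g. VFIO Type1). It
uses rte_vfio_container_dma_map() and RTE_VFIO_DEFAULT_CONTAINER_FD from
rte_vfio.h; the buffer and the 2M chunk size are arbitrary choices for
illustration.

#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>

#include <rte_eal.h>
#include <rte_errno.h>
#include <rte_vfio.h>

#define CHUNK_LEN (2ULL * 1024 * 1024)	/* arbitrary 2M chunk */

int
main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0)
		return 1;

	/* anonymous buffer standing in for externally allocated memory */
	void *buf = mmap(NULL, 2 * CHUNK_LEN, PROT_READ | PROT_WRITE,
			MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return 1;

	uint64_t va = (uint64_t)(uintptr_t)buf;
	int ret;

	/* map two adjacent chunks A and B; eal_vfio merges them into entry C */
	rte_vfio_container_dma_map(RTE_VFIO_DEFAULT_CONTAINER_FD, va, va, CHUNK_LEN);
	rte_vfio_container_dma_map(RTE_VFIO_DEFAULT_CONTAINER_FD, va + CHUNK_LEN,
			va + CHUNK_LEN, CHUNK_LEN);

	/* remapping A: with the fix this is detected as already mapped (ret 0) */
	ret = rte_vfio_container_dma_map(RTE_VFIO_DEFAULT_CONTAINER_FD, va, va, CHUNK_LEN);
	printf("remap A: ret=%d\n", ret);

	/* area D inside C with a different size: rejected, rte_errno == ENOTSUP */
	ret = rte_vfio_container_dma_map(RTE_VFIO_DEFAULT_CONTAINER_FD, va, va,
			CHUNK_LEN / 2);
	printf("map D: ret=%d rte_errno=%d\n", ret, rte_errno);

	munmap(buf, 2 * CHUNK_LEN);
	rte_eal_cleanup();
	return 0;
}

With the check in place, remapping an identical chunk is expected to be
a no-op success, while a map request that only partially overlaps an
existing entry is rejected up front instead of leaving behind an entry
that can no longer be unmapped.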