From patchwork Thu Dec 17 19:06:01 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nithin Dabilpuram X-Patchwork-Id: 85375 X-Patchwork-Delegate: david.marchand@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 05E8BA09F6; Thu, 17 Dec 2020 20:06:31 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 0BA79CA51; Thu, 17 Dec 2020 20:06:28 +0100 (CET) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id B6E7FC9D2; Thu, 17 Dec 2020 20:06:24 +0100 (CET) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id 0BHJ6Fli011276; Thu, 17 Dec 2020 11:06:22 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=pfpt0220; bh=ZDljhYSTmb8u017DksyaXrKC7pXIqAR6/8O3nF4LRSY=; b=eG065AbpcJA9JolXPRC/NVRFGMn1DoN6/E/o2y/XThswgHNEc20fs/Y4QeQ7mUklsuC1 ywJTNjpgWaMx1WVeje7iLFSIvW6cYshcgaGzFiOYbkXjqw2Ki2LEGS4q5ZopsUYwmU/H gaMXMg9iRQs9IPBcRj7qzULRHRJq2HvlaTWliq3KkvgMEGdYQl+sF34KoCrJfrsNEvFK aoPYkeUIAL/XDBI6u23LaPUDZ3gJZhwmsTBzn//Thp0Ime57KLGM18REYQhjFkh+bhVe G7F2h27aCvY/TYn5Nu1aDwaSe53+NU+ztMcGmlz6F87qfExuyJFY1vfOMo935IWIFNKf pA== Received: from sc-exch02.marvell.com ([199.233.58.182]) by mx0a-0016f401.pphosted.com with ESMTP id 35g4rp1pgy-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Thu, 17 Dec 2020 11:06:22 -0800 Received: from SC-EXCH04.marvell.com (10.93.176.84) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Thu, 17 Dec 2020 11:06:21 -0800 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Thu, 17 Dec 2020 11:06:21 -0800 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Thu, 17 Dec 2020 11:06:21 -0800 Received: from hyd1588t430.marvell.com (unknown [10.29.52.204]) by maili.marvell.com (Postfix) with ESMTP id 46B493F703F; Thu, 17 Dec 2020 11:06:19 -0800 (PST) From: Nithin Dabilpuram To: , David Christensen , CC: , , Nithin Dabilpuram , Date: Fri, 18 Dec 2020 00:36:01 +0530 Message-ID: <20201217190604.29803-2-ndabilpuram@marvell.com> X-Mailer: git-send-email 2.8.4 In-Reply-To: <20201217190604.29803-1-ndabilpuram@marvell.com> References: <20201012081106.10610-1-ndabilpuram@marvell.com> <20201217190604.29803-1-ndabilpuram@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.343, 18.0.737 definitions=2020-12-17_13:2020-12-17, 2020-12-17 signatures=0 Subject: [dpdk-dev] [PATCH v6 1/4] vfio: revert changes for map contiguous areas in one go X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" In order to save DMA entries limited by kernel both for externel memory and hugepage memory, an attempt was made to map physically contiguous memory in one go. 
This cannot be done as VFIO IOMMU type1 does not support partially unmapping a previously mapped memory region while Heap can request for multi page mapping and partial unmapping. Hence for going back to old method of mapping/unmapping at memseg granularity, this commit reverts commit d1c7c0cdf7ba ("vfio: map contiguous areas in one go") Also add documentation on what module parameter needs to be used to increase the per-container dma map limit for VFIO. Fixes: d1c7c0cdf7ba ("vfio: map contiguous areas in one go") Cc: anatoly.burakov@intel.com Cc: stable@dpdk.org Signed-off-by: Nithin Dabilpuram Acked-by: Anatoly Burakov Acked-by: David Christensen --- doc/guides/linux_gsg/linux_drivers.rst | 10 ++++++ lib/librte_eal/linux/eal_vfio.c | 59 +++++----------------------------- 2 files changed, 18 insertions(+), 51 deletions(-) diff --git a/doc/guides/linux_gsg/linux_drivers.rst b/doc/guides/linux_gsg/linux_drivers.rst index 90635a4..9a662a7 100644 --- a/doc/guides/linux_gsg/linux_drivers.rst +++ b/doc/guides/linux_gsg/linux_drivers.rst @@ -25,6 +25,16 @@ To make use of VFIO, the ``vfio-pci`` module must be loaded: VFIO kernel is usually present by default in all distributions, however please consult your distributions documentation to make sure that is the case. +For DMA mapping of either external memory or hugepages, VFIO interface is used. +VFIO does not support partial unmap of once mapped memory. Hence DPDK's memory is +mapped in hugepage granularity or system page granularity. Number of DMA +mappings is limited by kernel with user locked memory limit of a process(rlimit) +for system/hugepage memory. Another per-container overall limit applicable both +for external memory and system memory was added in kernel 5.1 defined by +VFIO module parameter ``dma_entry_limit`` with a default value of 64K. +When application is out of DMA entries, these limits need to be adjusted to +increase the allowed limit. + Since Linux version 5.7, the ``vfio-pci`` module supports the creation of virtual functions. After the PF is bound to ``vfio-pci`` module, diff --git a/lib/librte_eal/linux/eal_vfio.c b/lib/librte_eal/linux/eal_vfio.c index 0500824..64b134d 100644 --- a/lib/librte_eal/linux/eal_vfio.c +++ b/lib/librte_eal/linux/eal_vfio.c @@ -517,11 +517,9 @@ static void vfio_mem_event_callback(enum rte_mem_event type, const void *addr, size_t len, void *arg __rte_unused) { - rte_iova_t iova_start, iova_expected; struct rte_memseg_list *msl; struct rte_memseg *ms; size_t cur_len = 0; - uint64_t va_start; msl = rte_mem_virt2memseg_list(addr); @@ -539,63 +537,22 @@ vfio_mem_event_callback(enum rte_mem_event type, const void *addr, size_t len, /* memsegs are contiguous in memory */ ms = rte_mem_virt2memseg(addr, msl); - - /* - * This memory is not guaranteed to be contiguous, but it still could - * be, or it could have some small contiguous chunks. Since the number - * of VFIO mappings is limited, and VFIO appears to not concatenate - * adjacent mappings, we have to do this ourselves. - * - * So, find contiguous chunks, then map them. 
- */ - va_start = ms->addr_64; - iova_start = iova_expected = ms->iova; while (cur_len < len) { - bool new_contig_area = ms->iova != iova_expected; - bool last_seg = (len - cur_len) == ms->len; - bool skip_last = false; - - /* only do mappings when current contiguous area ends */ - if (new_contig_area) { - if (type == RTE_MEM_EVENT_ALLOC) - vfio_dma_mem_map(default_vfio_cfg, va_start, - iova_start, - iova_expected - iova_start, 1); - else - vfio_dma_mem_map(default_vfio_cfg, va_start, - iova_start, - iova_expected - iova_start, 0); - va_start = ms->addr_64; - iova_start = ms->iova; - } /* some memory segments may have invalid IOVA */ if (ms->iova == RTE_BAD_IOVA) { RTE_LOG(DEBUG, EAL, "Memory segment at %p has bad IOVA, skipping\n", ms->addr); - skip_last = true; + goto next; } - iova_expected = ms->iova + ms->len; + if (type == RTE_MEM_EVENT_ALLOC) + vfio_dma_mem_map(default_vfio_cfg, ms->addr_64, + ms->iova, ms->len, 1); + else + vfio_dma_mem_map(default_vfio_cfg, ms->addr_64, + ms->iova, ms->len, 0); +next: cur_len += ms->len; ++ms; - - /* - * don't count previous segment, and don't attempt to - * dereference a potentially invalid pointer. - */ - if (skip_last && !last_seg) { - iova_expected = iova_start = ms->iova; - va_start = ms->addr_64; - } else if (!skip_last && last_seg) { - /* this is the last segment and we're not skipping */ - if (type == RTE_MEM_EVENT_ALLOC) - vfio_dma_mem_map(default_vfio_cfg, va_start, - iova_start, - iova_expected - iova_start, 1); - else - vfio_dma_mem_map(default_vfio_cfg, va_start, - iova_start, - iova_expected - iova_start, 0); - } } } From patchwork Thu Dec 17 19:06:02 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nithin Dabilpuram X-Patchwork-Id: 85377 X-Patchwork-Delegate: david.marchand@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id C01C2A09F6; Thu, 17 Dec 2020 20:07:06 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 3C1BCCA62; Thu, 17 Dec 2020 20:06:32 +0100 (CET) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 34EEDCA54; Thu, 17 Dec 2020 20:06:28 +0100 (CET) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id 0BHJ5ZI9006168; Thu, 17 Dec 2020 11:06:26 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=pfpt0220; bh=e1SBslWV6Vr61UokJ+Zw6yDhEbTDTcEkljf3GhMyB4c=; b=eHiifUnrlYGM/ykSPRZjhVKcHAq0plKj1qmrwzdwobszcFs0bJZytLPuhHs/Eyg8cqdT Razsvcgv7Q3iK6JJq2r5OKZVkSfGFqaVnwOiCyvy89ReGbsD6hITGm9q6hPm5SNIhGMC u0+FOy0kb97LZMXZPmady2G8kEjyICqkKJmNplSu/qMIcXe+5+EgDP17X6vOOpCEEcTB LURsREAXzqikn2F37ofmYecH3WKzvaecQmWz0UV2DI46SW7Im76J9ySkHJdW2lUvLC0M jkrdV0Xa4c9vR1PSncs+4sFOjel7/5P4GeyWzDe3FZy0Fj1XglUuV22Hb3KkJiel3Qgy 2Q== Received: from sc-exch03.marvell.com ([199.233.58.183]) by mx0b-0016f401.pphosted.com with ESMTP id 35cx8tgrtg-3 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Thu, 17 Dec 2020 11:06:26 -0800 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Thu, 17 Dec 2020 11:06:24 -0800 Received: from 
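For reference, a minimal sketch of the per-memseg granularity this revert restores, expressed with the public EAL API rather than the internal vfio_dma_mem_map() helper. It assumes a dedicated container created with rte_vfio_container_create(), for which the application must mirror DPDK memory itself; the helper names map_one_memseg()/map_all_memsegs() are illustrative and not part of the patch:

#include <rte_common.h>
#include <rte_memory.h>
#include <rte_vfio.h>

/* Illustrative only: map each memseg as its own VFIO DMA entry,
 * skipping segments without a valid IOVA, mirroring the granularity
 * restored in vfio_mem_event_callback().
 */
static int
map_one_memseg(const struct rte_memseg_list *msl __rte_unused,
               const struct rte_memseg *ms, void *arg)
{
        int container_fd = *(int *)arg;

        if (ms->iova == RTE_BAD_IOVA)
                return 0; /* skip, as the callback does */

        return rte_vfio_container_dma_map(container_fd, ms->addr_64,
                                          ms->iova, ms->len);
}

static int
map_all_memsegs(int container_fd)
{
        /* rte_memseg_walk() stops on the first non-zero callback return */
        return rte_memseg_walk(map_one_memseg, &container_fd);
}

Since every segment now consumes one IOMMU DMA entry, large memory footprints may run into the vfio_iommu_type1 dma_entry_limit module parameter described in the documentation hunk above.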
maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Thu, 17 Dec 2020 11:06:24 -0800 Received: from hyd1588t430.marvell.com (unknown [10.29.52.204]) by maili.marvell.com (Postfix) with ESMTP id 38AFE3F703F; Thu, 17 Dec 2020 11:06:21 -0800 (PST) From: Nithin Dabilpuram To: , David Christensen , CC: , , Nithin Dabilpuram , Date: Fri, 18 Dec 2020 00:36:02 +0530 Message-ID: <20201217190604.29803-3-ndabilpuram@marvell.com> X-Mailer: git-send-email 2.8.4 In-Reply-To: <20201217190604.29803-1-ndabilpuram@marvell.com> References: <20201012081106.10610-1-ndabilpuram@marvell.com> <20201217190604.29803-1-ndabilpuram@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.343, 18.0.737 definitions=2020-12-17_13:2020-12-17, 2020-12-17 signatures=0 Subject: [dpdk-dev] [PATCH v6 2/4] vfio: fix DMA mapping granularity for type1 IOVA as VA X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Partial unmapping is not supported for VFIO IOMMU type1 by kernel. Though kernel gives return as zero, the unmapped size returned will not be same as expected. So check for returned unmap size and return error. For IOVA as PA, DMA mapping is already at memseg size granularity. Do the same even for IOVA as VA mode as DMA map/unmap triggered by heap allocations, maintain granularity of memseg page size so that heap expansion and contraction does not have this issue. For user requested DMA map/unmap disallow partial unmapping for VFIO type1. Fixes: 73a639085938 ("vfio: allow to map other memory regions") Cc: anatoly.burakov@intel.com Cc: stable@dpdk.org Signed-off-by: Nithin Dabilpuram Acked-by: Anatoly Burakov Acked-by: David Christensen --- lib/librte_eal/linux/eal_vfio.c | 34 ++++++++++++++++++++++++++++------ lib/librte_eal/linux/eal_vfio.h | 1 + 2 files changed, 29 insertions(+), 6 deletions(-) diff --git a/lib/librte_eal/linux/eal_vfio.c b/lib/librte_eal/linux/eal_vfio.c index 64b134d..b15b758 100644 --- a/lib/librte_eal/linux/eal_vfio.c +++ b/lib/librte_eal/linux/eal_vfio.c @@ -70,6 +70,7 @@ static const struct vfio_iommu_type iommu_types[] = { { .type_id = RTE_VFIO_TYPE1, .name = "Type 1", + .partial_unmap = false, .dma_map_func = &vfio_type1_dma_map, .dma_user_map_func = &vfio_type1_dma_mem_map }, @@ -77,6 +78,7 @@ static const struct vfio_iommu_type iommu_types[] = { { .type_id = RTE_VFIO_SPAPR, .name = "sPAPR", + .partial_unmap = true, .dma_map_func = &vfio_spapr_dma_map, .dma_user_map_func = &vfio_spapr_dma_mem_map }, @@ -84,6 +86,7 @@ static const struct vfio_iommu_type iommu_types[] = { { .type_id = RTE_VFIO_NOIOMMU, .name = "No-IOMMU", + .partial_unmap = true, .dma_map_func = &vfio_noiommu_dma_map, .dma_user_map_func = &vfio_noiommu_dma_mem_map }, @@ -526,12 +529,19 @@ vfio_mem_event_callback(enum rte_mem_event type, const void *addr, size_t len, /* for IOVA as VA mode, no need to care for IOVA addresses */ if (rte_eal_iova_mode() == RTE_IOVA_VA && msl->external == 0) { uint64_t vfio_va = (uint64_t)(uintptr_t)addr; - if (type == RTE_MEM_EVENT_ALLOC) - vfio_dma_mem_map(default_vfio_cfg, vfio_va, vfio_va, - len, 1); - else - vfio_dma_mem_map(default_vfio_cfg, vfio_va, vfio_va, - len, 0); + uint64_t page_sz = msl->page_sz; + + /* Maintain granularity of DMA map/unmap to memseg size */ + for (; cur_len < len; cur_len 
+= page_sz) { + if (type == RTE_MEM_EVENT_ALLOC) + vfio_dma_mem_map(default_vfio_cfg, vfio_va, + vfio_va, page_sz, 1); + else + vfio_dma_mem_map(default_vfio_cfg, vfio_va, + vfio_va, page_sz, 0); + vfio_va += page_sz; + } + return; } @@ -1348,6 +1358,12 @@ vfio_type1_dma_mem_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova, RTE_LOG(ERR, EAL, " cannot clear DMA remapping, error %i (%s)\n", errno, strerror(errno)); return -1; + } else if (dma_unmap.size != len) { + RTE_LOG(ERR, EAL, " unexpected size %"PRIu64" of DMA " + "remapping cleared instead of %"PRIu64"\n", + (uint64_t)dma_unmap.size, len); + rte_errno = EIO; + return -1; } } @@ -1823,6 +1839,12 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova, /* we're partially unmapping a previously mapped region, so we * need to split entry into two. */ + if (!vfio_cfg->vfio_iommu_type->partial_unmap) { + RTE_LOG(DEBUG, EAL, "DMA partial unmap unsupported\n"); + rte_errno = ENOTSUP; + ret = -1; + goto out; + } if (user_mem_maps->n_maps == VFIO_MAX_USER_MEM_MAPS) { RTE_LOG(ERR, EAL, "Not enough space to store partial mapping\n"); rte_errno = ENOMEM; diff --git a/lib/librte_eal/linux/eal_vfio.h b/lib/librte_eal/linux/eal_vfio.h index cb2d35f..6ebaca6 100644 --- a/lib/librte_eal/linux/eal_vfio.h +++ b/lib/librte_eal/linux/eal_vfio.h @@ -113,6 +113,7 @@ typedef int (*vfio_dma_user_func_t)(int fd, uint64_t vaddr, uint64_t iova, struct vfio_iommu_type { int type_id; const char *name; + bool partial_unmap; vfio_dma_user_func_t dma_user_map_func; vfio_dma_func_t dma_map_func; }; From patchwork Thu Dec 17 19:06:03 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nithin Dabilpuram X-Patchwork-Id: 85378 X-Patchwork-Delegate: david.marchand@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 801B7A09F6; Thu, 17 Dec 2020 20:07:25 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id DD3A3CA8B; Thu, 17 Dec 2020 20:06:33 +0100 (CET) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id 7E1ECCA5C for ; Thu, 17 Dec 2020 20:06:30 +0100 (CET) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id 0BHJ62JA010809; Thu, 17 Dec 2020 11:06:28 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=pfpt0220; bh=7KPgaB/G5FP37Vo8YpNNA3PthimotlphMQy8Z33a6So=; b=Rh4I3KbpwpEO3tJmtFXf0UpnbTcdDtQh5mTSKPEJz7qSZ6XAQ3bYa2BJAJaj8ATCfgyH QtmwnnvNKOBUwlTsnHYiG/wbiQVx3ywyHk1nU8IE2+sPWf1zKr9dvE+XJuc4s3iScp7I LUUFbRBNkflAOft+HFgUpPZljpIQtu5fmscdSVqh3kCrU18k54BZtEVkSUqJsYCnVQ7p 2NjXjSsl24OQXy6+cRX6zkEoeUF1amwO2vcfI8FqnTjYl4APaRAxI8eUKm0SoENXb4Wf LlgaF9waXSYW2H3Utv1xmT70M9l/g7IYvfg/nSe05J0g+//FOWsMOVWHbT29v68NiR2d RA== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0a-0016f401.pphosted.com with ESMTP id 35g4rp1ph9-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Thu, 17 Dec 2020 11:06:28 -0800 Received: from SC-EXCH04.marvell.com (10.93.176.84) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Thu, 17 Dec 2020 11:06:27 -0800 Received: from 
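The unmap size check added above can also be exercised directly against the VFIO ioctl interface. A standalone sketch using only the kernel UAPI, assuming a type1 container fd obtained elsewhere; the helper name type1_dma_unmap_checked() is illustrative and this is not the patch itself:

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Illustrative only: VFIO type1 can return 0 even when it did not unmap
 * the full requested range (e.g. a partial unmap of a larger mapping),
 * so the caller must compare the size reported back by the kernel.
 */
static int
type1_dma_unmap_checked(int container_fd, uint64_t iova, uint64_t len)
{
        struct vfio_iommu_type1_dma_unmap dma_unmap;

        memset(&dma_unmap, 0, sizeof(dma_unmap));
        dma_unmap.argsz = sizeof(dma_unmap);
        dma_unmap.iova = iova;
        dma_unmap.size = len;

        if (ioctl(container_fd, VFIO_IOMMU_UNMAP_DMA, &dma_unmap) != 0) {
                fprintf(stderr, "unmap failed: %s\n", strerror(errno));
                return -1;
        }
        if (dma_unmap.size != len) {
                /* kernel unmapped a different amount than requested */
                fprintf(stderr, "unmapped %llu of %llu bytes\n",
                        (unsigned long long)dma_unmap.size,
                        (unsigned long long)len);
                return -1;
        }
        return 0;
}

At the EAL level the patch additionally refuses partial unmaps of user mappings up front (rte_errno set to ENOTSUP) for IOMMU types where partial_unmap is false, instead of splitting the stored mapping entry.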
DC5-EXCH02.marvell.com (10.69.176.39) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Thu, 17 Dec 2020 11:06:27 -0800 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Thu, 17 Dec 2020 11:06:27 -0800 Received: from hyd1588t430.marvell.com (unknown [10.29.52.204]) by maili.marvell.com (Postfix) with ESMTP id 557033F703F; Thu, 17 Dec 2020 11:06:25 -0800 (PST) From: Nithin Dabilpuram To: , David Christensen , CC: , , Nithin Dabilpuram Date: Fri, 18 Dec 2020 00:36:03 +0530 Message-ID: <20201217190604.29803-4-ndabilpuram@marvell.com> X-Mailer: git-send-email 2.8.4 In-Reply-To: <20201217190604.29803-1-ndabilpuram@marvell.com> References: <20201012081106.10610-1-ndabilpuram@marvell.com> <20201217190604.29803-1-ndabilpuram@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.343, 18.0.737 definitions=2020-12-17_13:2020-12-17, 2020-12-17 signatures=0 Subject: [dpdk-dev] [PATCH v6 3/4] test: add test case to validate VFIO DMA map/unmap X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Test case alloc's system pages and tries to performs a user DMA map and unmap both partially and fully. Signed-off-by: Nithin Dabilpuram Acked-by: Anatoly Burakov --- app/test/meson.build | 1 + app/test/test_vfio.c | 107 +++++++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 108 insertions(+) create mode 100644 app/test/test_vfio.c diff --git a/app/test/meson.build b/app/test/meson.build index 94fd39f..d9eedb6 100644 --- a/app/test/meson.build +++ b/app/test/meson.build @@ -139,6 +139,7 @@ test_sources = files('commands.c', 'test_trace_register.c', 'test_trace_perf.c', 'test_version.c', + 'test_vfio.c', 'virtual_pmd.c' ) diff --git a/app/test/test_vfio.c b/app/test/test_vfio.c new file mode 100644 index 0000000..c35efed --- /dev/null +++ b/app/test/test_vfio.c @@ -0,0 +1,107 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2020 Marvell. 
+ */ + +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include + +#include "test.h" + +static int +test_memory_vfio_dma_map(void) +{ + uint64_t sz1, sz2, sz = 2 * rte_mem_page_size(); + uint64_t unmap1, unmap2; + uint8_t *alloc_mem; + uint8_t *mem; + int ret; + + /* Allocate twice size of requirement from heap to align later */ + alloc_mem = malloc(sz * 2); + if (!alloc_mem) { + printf("Skipping test as unable to alloc %"PRIx64"B from heap\n", + sz * 2); + return 1; + } + + /* Force page allocation */ + memset(alloc_mem, 0, sz * 2); + + mem = RTE_PTR_ALIGN(alloc_mem, rte_mem_page_size()); + + /* map the whole region */ + ret = rte_vfio_container_dma_map(RTE_VFIO_DEFAULT_CONTAINER_FD, + (uintptr_t)mem, (rte_iova_t)mem, sz); + if (ret) { + /* Check if VFIO is not available or no device is probed */ + if (rte_errno == ENOTSUP || rte_errno == ENODEV) { + ret = 1; + goto fail; + } + printf("Failed to dma map whole region, ret=%d(%s)\n", + ret, rte_strerror(rte_errno)); + goto fail; + } + + unmap1 = (uint64_t)mem + (sz / 2); + sz1 = sz / 2; + unmap2 = (uint64_t)mem; + sz2 = sz / 2; + /* unmap the partial region */ + ret = rte_vfio_container_dma_unmap(RTE_VFIO_DEFAULT_CONTAINER_FD, + unmap1, (rte_iova_t)unmap1, sz1); + if (ret) { + if (rte_errno == ENOTSUP) { + printf("Partial dma unmap not supported\n"); + unmap2 = (uint64_t)mem; + sz2 = sz; + } else { + printf("Failed to unmap second half region, ret=%d(%s)\n", + ret, rte_strerror(rte_errno)); + goto fail; + } + } + + /* unmap the remaining region */ + ret = rte_vfio_container_dma_unmap(RTE_VFIO_DEFAULT_CONTAINER_FD, + unmap2, (rte_iova_t)unmap2, sz2); + if (ret) { + printf("Failed to unmap remaining region, ret=%d(%s)\n", ret, + rte_strerror(rte_errno)); + goto fail; + } + +fail: + free(alloc_mem); + return ret; +} + +static int +test_vfio(void) +{ + int ret; + + /* test for vfio dma map/unmap */ + ret = test_memory_vfio_dma_map(); + if (ret == 1) { + printf("VFIO dma map/unmap unsupported\n"); + } else if (ret < 0) { + printf("Error vfio dma map/unmap, ret=%d\n", ret); + return -1; + } + + return 0; +} + +REGISTER_TEST_COMMAND(vfio_autotest, test_vfio); From patchwork Thu Dec 17 19:06:04 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nithin Dabilpuram X-Patchwork-Id: 85379 X-Patchwork-Delegate: david.marchand@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 84C05A09F6; Thu, 17 Dec 2020 20:07:47 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id CF41ECAAB; Thu, 17 Dec 2020 20:06:36 +0100 (CET) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 424F3CA98 for ; Thu, 17 Dec 2020 20:06:34 +0100 (CET) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id 0BHJ5kwZ006426; Thu, 17 Dec 2020 11:06:32 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=pfpt0220; bh=/npYs+VmBB2UAY5hOPxp3mdJxEzOqdsIz0Cu/lLizUs=; b=ffalnfy4Dp4KPU/0LbwBpCnDTo2+P7lqIKsxTxbxzCF1maknbJxEHWhWRCqI2hSYBFdv KpUeI+JyD02Xf//jZ80mlqMnBEfux6pXx68M+t35nDrbt56XF8BnME0nWbixqmS03BQa 
3EqA/vZo4acB3n52/c/bRQQSzgKtN8V6TuFQPQTAEtrbh+U14D4N3z0RsZx6ZwIEeans OsVawtIChX7uY2Wr9qsGO33R5YwGJL4RayV6VsikWPUlXY7QWvrGZmR2tXnae+a9Boi+ QrBQkqYlg165APnwipXF0+tME/juWb3lo7guNqwpnPFJHN7+6NvnvkITOrcnoLQA+NSa bA== Received: from sc-exch03.marvell.com ([199.233.58.183]) by mx0b-0016f401.pphosted.com with ESMTP id 35cx8tgruh-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Thu, 17 Dec 2020 11:06:32 -0800 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Thu, 17 Dec 2020 11:06:31 -0800 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Thu, 17 Dec 2020 11:06:30 -0800 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Thu, 17 Dec 2020 11:06:30 -0800 Received: from hyd1588t430.marvell.com (unknown [10.29.52.204]) by maili.marvell.com (Postfix) with ESMTP id 093F33F703F; Thu, 17 Dec 2020 11:06:27 -0800 (PST) From: Nithin Dabilpuram To: , David Christensen , CC: , , Nithin Dabilpuram Date: Fri, 18 Dec 2020 00:36:04 +0530 Message-ID: <20201217190604.29803-5-ndabilpuram@marvell.com> X-Mailer: git-send-email 2.8.4 In-Reply-To: <20201217190604.29803-1-ndabilpuram@marvell.com> References: <20201012081106.10610-1-ndabilpuram@marvell.com> <20201217190604.29803-1-ndabilpuram@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.343, 18.0.737 definitions=2020-12-17_13:2020-12-17, 2020-12-17 signatures=0 Subject: [dpdk-dev] [PATCH v6 4/4] test: change external memory test to use system page sz X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Currently external memory test uses 4K page size. VFIO DMA mapping works only with system page granularity. Earlier it was working because all the contiguous mappings were coalesced and mapped in one-go which ended up becoming a lot bigger page. Now that VFIO DMA mappings both in IOVA as VA and IOVA as PA mode, are being done at memseg list granularity, we need to use system page size. Signed-off-by: Nithin Dabilpuram --- app/test/test_external_mem.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/app/test/test_external_mem.c b/app/test/test_external_mem.c index 7eb81f6..5edf88b 100644 --- a/app/test/test_external_mem.c +++ b/app/test/test_external_mem.c @@ -13,6 +13,7 @@ #include #include #include +#include #include #include #include @@ -532,8 +533,8 @@ test_extmem_basic(void *addr, size_t len, size_t pgsz, rte_iova_t *iova, static int test_external_mem(void) { + size_t pgsz = rte_mem_page_size(); size_t len = EXTERNAL_MEM_SZ; - size_t pgsz = RTE_PGSIZE_4K; rte_iova_t iova[len / pgsz]; void *addr; int ret, n_pages;
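Tying the series together, a sketch of how externally allocated memory can be registered and DMA-mapped once mappings are no longer coalesced. The helper name extmem_map() and the error handling are illustrative, len is assumed to be a multiple of the system page size, and IOVA-as-VA mode is assumed so each page's IOVA can simply be its virtual address:

#include <stdint.h>
#include <stdlib.h>
#include <sys/mman.h>

#include <rte_eal_paging.h>
#include <rte_memory.h>
#include <rte_vfio.h>

/* Illustrative only: register anonymous memory with the real system
 * page size (not a hard-coded 4K) and DMA-map the whole region in a
 * single call.
 */
static void *
extmem_map(size_t len)
{
        size_t pgsz = rte_mem_page_size();
        size_t n_pages = len / pgsz;
        rte_iova_t *iova;
        void *addr;
        size_t i;

        addr = mmap(NULL, len, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (addr == MAP_FAILED)
                return NULL;

        iova = malloc(n_pages * sizeof(*iova));
        if (iova == NULL)
                goto fail;
        for (i = 0; i < n_pages; i++)
                iova[i] = (uintptr_t)addr + i * pgsz;

        /* page_sz must be at least the system page size: VFIO cannot
         * map regions smaller than a system page */
        if (rte_extmem_register(addr, len, iova, n_pages, pgsz) != 0)
                goto fail;

        if (rte_vfio_container_dma_map(RTE_VFIO_DEFAULT_CONTAINER_FD,
                        (uintptr_t)addr, (rte_iova_t)(uintptr_t)addr,
                        len) != 0) {
                rte_extmem_unregister(addr, len);
                goto fail;
        }

        free(iova);
        return addr;
fail:
        free(iova);
        munmap(addr, len);
        return NULL;
}

Because VFIO type1 cannot split a mapping, the region mapped in one call here must later be unmapped in a single call with the same address and length, which is exactly what the new vfio_autotest case in patch 3/4 validates.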