From patchwork Mon Nov 6 01:41:29 2017
X-Patchwork-Submitter: Thomas Monjalon
X-Patchwork-Id: 31183
From: Thomas Monjalon
To: Santosh Shukla
Cc: olivier.matz@6wind.com, sergio.gonzalez.monroy@intel.com, anatoly.burakov@intel.com, dev@dpdk.org
Date: Mon, 6 Nov 2017 02:41:29 +0100
Message-Id: <20171106014141.13266-4-thomas@monjalon.net>
X-Mailer: git-send-email 2.14.2
In-Reply-To: <20171106014141.13266-1-thomas@monjalon.net>
References: <20170814151537.29454-1-santosh.shukla@caviumnetworks.com> <20171106014141.13266-1-thomas@monjalon.net>
Subject: [dpdk-dev] [PATCH v4 03/15] mem: rename segment address from physical to IOVA

From: Santosh Shukla

Rename the rte_memseg field phys_addr to iova.
Keep the deprecated name in an anonymous union to avoid breaking the API.
Use rte_iova_t and RTE_BAD_IOVA where appropriate in memory segment handling.
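The key point of the rename is that the old and new field names alias the same storage, so existing readers of ms->phys_addr keep working while new code uses ms->iova. Below is a minimal standalone sketch (not part of the patch, and not the real struct rte_memseg; memseg_like is a hypothetical stand-in with mirrored types) showing why the anonymous union preserves source compatibility:

```c
/* Standalone sketch, assuming a C11 compiler (anonymous unions). */
#include <inttypes.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t phys_addr_t;   /* legacy type name, as in rte_memory.h */
typedef uint64_t rte_iova_t;    /* new IO address type, as in rte_memory.h */

struct memseg_like {            /* hypothetical stand-in for struct rte_memseg */
	union {
		phys_addr_t phys_addr;  /* deprecated name */
		rte_iova_t iova;        /* new name, same storage */
	};
	void *addr;                     /* start virtual address */
	size_t len;                     /* length of the segment */
};

int main(void)
{
	struct memseg_like ms = { .iova = 0x100000 };

	/* Legacy-style and new-style accesses read the same bytes. */
	printf("phys_addr=0x%" PRIx64 " iova=0x%" PRIx64 "\n",
	       (uint64_t)ms.phys_addr, (uint64_t)ms.iova);
	return 0;
}
```

Under this layout, a recompile is enough for applications still using phys_addr; only the documentation marks the old name as deprecated.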
Signed-off-by: Santosh Shukla
Reviewed-by: Anatoly Burakov
Signed-off-by: Thomas Monjalon
---
 lib/librte_eal/bsdapp/eal/eal_memory.c     |  8 ++++----
 lib/librte_eal/common/eal_common_memory.c  |  4 ++--
 lib/librte_eal/common/include/rte_memory.h |  8 ++++++--
 lib/librte_eal/common/rte_malloc.c         |  6 +++---
 lib/librte_eal/linuxapp/eal/eal_memory.c   | 20 ++++++++++----------
 lib/librte_eal/linuxapp/eal/eal_vfio.c     |  6 +++---
 6 files changed, 28 insertions(+), 24 deletions(-)

diff --git a/lib/librte_eal/bsdapp/eal/eal_memory.c b/lib/librte_eal/bsdapp/eal/eal_memory.c
index 65c96b05e..66fab768f 100644
--- a/lib/librte_eal/bsdapp/eal/eal_memory.c
+++ b/lib/librte_eal/bsdapp/eal/eal_memory.c
@@ -56,7 +56,7 @@ rte_mem_virt2phy(const void *virtaddr)
         /* XXX not implemented. This function is only used by
          * rte_mempool_virt2phy() when hugepages are disabled. */
         (void)virtaddr;
-        return RTE_BAD_PHYS_ADDR;
+        return RTE_BAD_IOVA;
 }
 
 int
@@ -73,7 +73,7 @@ rte_eal_hugepage_init(void)
         /* for debug purposes, hugetlbfs can be disabled */
         if (internal_config.no_hugetlbfs) {
                 addr = malloc(internal_config.memory);
-                mcfg->memseg[0].phys_addr = (phys_addr_t)(uintptr_t)addr;
+                mcfg->memseg[0].iova = (rte_iova_t)(uintptr_t)addr;
                 mcfg->memseg[0].addr = addr;
                 mcfg->memseg[0].hugepage_sz = RTE_PGSIZE_4K;
                 mcfg->memseg[0].len = internal_config.memory;
@@ -88,7 +88,7 @@ rte_eal_hugepage_init(void)
                 hpi = &internal_config.hugepage_info[i];
                 for (j = 0; j < hpi->num_pages[0]; j++) {
                         struct rte_memseg *seg;
-                        uint64_t physaddr;
+                        rte_iova_t physaddr;
                         int error;
                         size_t sysctl_size = sizeof(physaddr);
                         char physaddr_str[64];
@@ -114,7 +114,7 @@ rte_eal_hugepage_init(void)
 
                         seg = &mcfg->memseg[seg_idx++];
                         seg->addr = addr;
-                        seg->phys_addr = physaddr;
+                        seg->iova = physaddr;
                         seg->hugepage_sz = hpi->hugepage_sz;
                         seg->len = hpi->hugepage_sz;
                         seg->nchannel = mcfg->nchannel;
diff --git a/lib/librte_eal/common/eal_common_memory.c b/lib/librte_eal/common/eal_common_memory.c
index 8f9d06f86..fc6c44da1 100644
--- a/lib/librte_eal/common/eal_common_memory.c
+++ b/lib/librte_eal/common/eal_common_memory.c
@@ -95,11 +95,11 @@ rte_dump_physmem_layout(FILE *f)
                 if (mcfg->memseg[i].addr == NULL)
                         break;
 
-                fprintf(f, "Segment %u: phys:0x%"PRIx64", len:%zu, "
+                fprintf(f, "Segment %u: IOVA:0x%"PRIx64", len:%zu, "
                        "virt:%p, socket_id:%"PRId32", "
                        "hugepage_sz:%"PRIu64", nchannel:%"PRIx32", "
                        "nrank:%"PRIx32"\n", i,
-                       mcfg->memseg[i].phys_addr,
+                       mcfg->memseg[i].iova,
                        mcfg->memseg[i].len,
                        mcfg->memseg[i].addr,
                        mcfg->memseg[i].socket_id,
diff --git a/lib/librte_eal/common/include/rte_memory.h b/lib/librte_eal/common/include/rte_memory.h
index f7eed9ab6..78f48503e 100644
--- a/lib/librte_eal/common/include/rte_memory.h
+++ b/lib/librte_eal/common/include/rte_memory.h
@@ -107,7 +107,11 @@ typedef uint64_t rte_iova_t;
  * Physical memory segment descriptor.
  */
 struct rte_memseg {
-        phys_addr_t phys_addr;      /**< Start physical address. */
+        RTE_STD_C11
+        union {
+                phys_addr_t phys_addr;  /**< deprecated - Start physical address. */
+                rte_iova_t iova;        /**< Start IO address. */
+        };
         RTE_STD_C11
         union {
                 void *addr;         /**< Start virtual address. */
@@ -138,7 +142,7 @@ int rte_mem_lock_page(const void *virt);
  * @param virt
  *   The virtual address.
  * @return
- *   The physical address or RTE_BAD_PHYS_ADDR on error.
+ *   The physical address or RTE_BAD_IOVA on error.
  */
 phys_addr_t rte_mem_virt2phy(const void *virt);
diff --git a/lib/librte_eal/common/rte_malloc.c b/lib/librte_eal/common/rte_malloc.c
index d65c05a4d..0028128a5 100644
--- a/lib/librte_eal/common/rte_malloc.c
+++ b/lib/librte_eal/common/rte_malloc.c
@@ -255,13 +255,13 @@ rte_malloc_virt2phy(const void *addr)
         const struct malloc_elem *elem = malloc_elem_from_data(addr);
         if (elem == NULL)
                 return RTE_BAD_PHYS_ADDR;
-        if (elem->ms->phys_addr == RTE_BAD_PHYS_ADDR)
-                return RTE_BAD_PHYS_ADDR;
+        if (elem->ms->iova == RTE_BAD_IOVA)
+                return RTE_BAD_IOVA;
         if (rte_eal_iova_mode() == RTE_IOVA_VA)
                 paddr = (uintptr_t)addr;
         else
-                paddr = elem->ms->phys_addr +
+                paddr = elem->ms->iova +
                         ((uintptr_t)addr - (uintptr_t)elem->ms->addr);
         return paddr;
 }
diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
index 10b42d2fe..284758ac4 100644
--- a/lib/librte_eal/linuxapp/eal/eal_memory.c
+++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
@@ -133,7 +133,7 @@ rte_mem_virt2phy(const void *virtaddr)
 
         /* Cannot parse /proc/self/pagemap, no need to log errors everywhere */
         if (!phys_addrs_available)
-                return RTE_BAD_PHYS_ADDR;
+                return RTE_BAD_IOVA;
 
         /* standard page size */
         page_size = getpagesize();
@@ -142,7 +142,7 @@ rte_mem_virt2phy(const void *virtaddr)
         if (fd < 0) {
                 RTE_LOG(ERR, EAL, "%s(): cannot open /proc/self/pagemap: %s\n",
                         __func__, strerror(errno));
-                return RTE_BAD_PHYS_ADDR;
+                return RTE_BAD_IOVA;
         }
 
         virt_pfn = (unsigned long)virtaddr / page_size;
@@ -151,7 +151,7 @@ rte_mem_virt2phy(const void *virtaddr)
                 RTE_LOG(ERR, EAL, "%s(): seek error in /proc/self/pagemap: %s\n",
                         __func__, strerror(errno));
                 close(fd);
-                return RTE_BAD_PHYS_ADDR;
+                return RTE_BAD_IOVA;
         }
 
         retval = read(fd, &page, PFN_MASK_SIZE);
         close(fd);
         if (retval < 0) {
                 RTE_LOG(ERR, EAL, "%s(): cannot read /proc/self/pagemap: %s\n",
                         __func__, strerror(errno));
-                return RTE_BAD_PHYS_ADDR;
+                return RTE_BAD_IOVA;
         } else if (retval != PFN_MASK_SIZE) {
                 RTE_LOG(ERR, EAL, "%s(): read %d bytes from /proc/self/pagemap "
                         "but expected %d:\n",
                         __func__, retval, PFN_MASK_SIZE);
-                return RTE_BAD_PHYS_ADDR;
+                return RTE_BAD_IOVA;
         }
 
         /*
          * the pfn (page frame number) are bits 0-54 (see
          * pagemap.txt in linux Documentation)
          */
         if ((page & 0x7fffffffffffffULL) == 0)
-                return RTE_BAD_PHYS_ADDR;
+                return RTE_BAD_IOVA;
 
         physaddr = ((page & 0x7fffffffffffffULL) * page_size)
                 + ((unsigned long)virtaddr % page_size);
@@ -1031,9 +1031,9 @@ rte_eal_hugepage_init(void)
                         return -1;
                 }
                 if (rte_eal_iova_mode() == RTE_IOVA_VA)
-                        mcfg->memseg[0].phys_addr = (uintptr_t)addr;
+                        mcfg->memseg[0].iova = (uintptr_t)addr;
                 else
-                        mcfg->memseg[0].phys_addr = RTE_BAD_PHYS_ADDR;
+                        mcfg->memseg[0].iova = RTE_BAD_IOVA;
                 mcfg->memseg[0].addr = addr;
                 mcfg->memseg[0].hugepage_sz = RTE_PGSIZE_4K;
                 mcfg->memseg[0].len = internal_config.memory;
@@ -1282,7 +1282,7 @@ rte_eal_hugepage_init(void)
                 if (j == RTE_MAX_MEMSEG)
                         break;
 
-                mcfg->memseg[j].phys_addr = hugepage[i].physaddr;
+                mcfg->memseg[j].iova = hugepage[i].physaddr;
                 mcfg->memseg[j].addr = hugepage[i].final_va;
                 mcfg->memseg[j].len = hugepage[i].size;
                 mcfg->memseg[j].socket_id = hugepage[i].socket_id;
@@ -1293,7 +1293,7 @@ rte_eal_hugepage_init(void)
 #ifdef RTE_ARCH_PPC_64
                 /* Use the phy and virt address of the last page as segment
                  * address for IBM Power architecture */
-                        mcfg->memseg[j].phys_addr = hugepage[i].physaddr;
+                        mcfg->memseg[j].iova = hugepage[i].physaddr;
                         mcfg->memseg[j].addr = hugepage[i].final_va;
 #endif
                         mcfg->memseg[j].len += mcfg->memseg[j].hugepage_sz;
diff --git a/lib/librte_eal/linuxapp/eal/eal_vfio.c b/lib/librte_eal/linuxapp/eal/eal_vfio.c
index 5bbcdf9b9..b60921487 100644
--- a/lib/librte_eal/linuxapp/eal/eal_vfio.c
+++ b/lib/librte_eal/linuxapp/eal/eal_vfio.c
@@ -713,7 +713,7 @@ vfio_type1_dma_map(int vfio_container_fd)
                 if (rte_eal_iova_mode() == RTE_IOVA_VA)
                         dma_map.iova = dma_map.vaddr;
                 else
-                        dma_map.iova = ms[i].phys_addr;
+                        dma_map.iova = ms[i].iova;
                 dma_map.flags = VFIO_DMA_MAP_FLAG_READ |
                                 VFIO_DMA_MAP_FLAG_WRITE;
 
                 ret = ioctl(vfio_container_fd, VFIO_IOMMU_MAP_DMA, &dma_map);
@@ -772,7 +772,7 @@ vfio_spapr_dma_map(int vfio_container_fd)
                         break;
 
                 create.window_size = RTE_MAX(create.window_size,
-                                ms[i].phys_addr + ms[i].len);
+                                ms[i].iova + ms[i].len);
         }
 
         /* sPAPR requires window size to be a power of 2 */
@@ -816,7 +816,7 @@ vfio_spapr_dma_map(int vfio_container_fd)
                 if (rte_eal_iova_mode() == RTE_IOVA_VA)
                         dma_map.iova = dma_map.vaddr;
                 else
-                        dma_map.iova = ms[i].phys_addr;
+                        dma_map.iova = ms[i].iova;
                 dma_map.flags = VFIO_DMA_MAP_FLAG_READ |
                                 VFIO_DMA_MAP_FLAG_WRITE;