Message ID | 1427974230-8572-1-git-send-email-jerry.lilijun@huawei.com (mailing list archive) |
---|---|
State | Changes Requested, archived |
Headers |
From: <jerry.lilijun@huawei.com> To: <dev@dpdk.org> Date: Thu, 2 Apr 2015 19:30:30 +0800 Message-ID: <1427974230-8572-1-git-send-email-jerry.lilijun@huawei.com> Subject: [dpdk-dev] [PATCH] eal: decrease the memory init time with many hugepages setup List-Id: patches and discussions about DPDK <dev.dpdk.org> |
Commit Message
Lilijun (Jerry)
April 2, 2015, 11:30 a.m. UTC
From: Lilijun <jerry.lilijun@huawei.com>

In the function map_all_hugepages(), hugepage memory is actually allocated (faulted in) by memset(virtaddr, 0, hugepage_sz). This makes DPDK memory initialization take about 40 seconds when 40000 2 MB hugepages are set up in the host OS. In fact, writing a single byte per hugepage is enough to complete the allocation.

Signed-off-by: Lilijun <jerry.lilijun@huawei.com>
---
 lib/librte_eal/linuxapp/eal/eal_memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
Comments
2015-04-02 19:30, jerry.lilijun@huawei.com:
> From: Lilijun <jerry.lilijun@huawei.com>
>
> In the function map_all_hugepages(), hugepage memory is truly allocated by
> memset(virtaddr, 0, hugepage_sz). Then it costs about 40s to finish the
> dpdk memory initialization when 40000 2M hugepages are setup in host os.

Yes it's something we should try to reduce.

> In fact we can only write one byte to finish the allocation.

Isn't it a security hole?

This article speaks about "prezeroing optimizations" in the Linux kernel:
http://landley.net/writing/memory-faq.txt
On Thu, Apr 2, 2015 at 7:55 AM, Thomas Monjalon <thomas.monjalon@6wind.com> wrote:
> 2015-04-02 19:30, jerry.lilijun@huawei.com:
>> From: Lilijun <jerry.lilijun@huawei.com>
>>
>> In the function map_all_hugepages(), hugepage memory is truly allocated by
>> memset(virtaddr, 0, hugepage_sz). Then it costs about 40s to finish the
>> dpdk memory initialization when 40000 2M hugepages are setup in host os.
>
> Yes it's something we should try to reduce.

I have a patch in my tree that does the same opto, but it is commented out
right now. In our case, 2/3's of the startup time for our entire app was
due to that particular call - memset(virtaddr, 0, hugepage_sz). Just
zeroing 1 byte per huge page reduces that by 30% in my tests.

The only reason I have it commented out is that I didn't have time to make
sure there weren't side-effects for DPDK or my app. For normal shared
memory on Linux, pages are initialized to zero automatically once they are
touched, so the memset isn't required, but I wasn't sure whether that
applied to huge pages. Also wasn't sure how hugetlbfs factored into the
equation.

Hopefully someone can chime in on that. Would love to uncomment the opto :)

>> In fact we can only write one byte to finish the allocation.
>
> Isn't it a security hole?

Not necessarily. If the kernel pre-zeros the huge pages via CoW like normal
pages, then definitely not.

Even if the kernel doesn't pre-zero the pages, if DPDK takes care of
properly initializing memory structures on startup as they are carved out
of the huge pages, then it isn't a security hole. However, that approach is
susceptible to bit rot... You can audit the code and make sure everything
is kosher at first, but you have to worry about new code making assumptions
about how memory is initialized.

> This article speaks about "prezeroing optimizations" in Linux kernel:
> http://landley.net/writing/memory-faq.txt

I read through that when I was trying to figure out whether huge pages
were pre-zeroed or not. It doesn't talk about huge pages much beyond why
they are useful for reducing TLB swaps.

Jay
On 02/04/2015 14:41, Jay Rolette wrote:
> On Thu, Apr 2, 2015 at 7:55 AM, Thomas Monjalon <thomas.monjalon@6wind.com>
> wrote:
>
>> 2015-04-02 19:30, jerry.lilijun@huawei.com:
>>> From: Lilijun <jerry.lilijun@huawei.com>
>>>
>>> In the function map_all_hugepages(), hugepage memory is truly allocated by
>>> memset(virtaddr, 0, hugepage_sz). Then it costs about 40s to finish the
>>> dpdk memory initialization when 40000 2M hugepages are setup in host os.
>> Yes it's something we should try to reduce.
>>
> I have a patch in my tree that does the same opto, but it is commented out
> right now. In our case, 2/3's of the startup time for our entire app was
> due to that particular call - memset(virtaddr, 0, hugepage_sz). Just
> zeroing 1 byte per huge page reduces that by 30% in my tests.
>
> The only reason I have it commented out is that I didn't have time to make
> sure there weren't side-effects for DPDK or my app. For normal shared
> memory on Linux, pages are initialized to zero automatically once they are
> touched, so the memset isn't required but I wasn't sure whether that
> applied to huge pages. Also wasn't sure how hugetlbfs factored into the
> equation.
>
> Hopefully someone can chime in on that. Would love to uncomment the opto :)
>
I think the opto/patch is good ;)

I had a look at the Linux kernel sources (mm/hugetlb.c) and, at least since
2.6.32 (the minimum Linux kernel version supported by DPDK), the kernel
clears the hugepage (clear_huge_page) when it faults (hugetlb_no_page).

Primary DPDK apps do clear_hugedir, clearing previously allocated
hugepages, thus triggering hugepage faults (hugetlb_no_page) during
map_all_hugepages.

Note that even when we exit a primary DPDK app, hugepages remain allocated,
which is why apps such as dump_cfg are able to retrieve config/memory
information.

Sergio

>> In fact we can only write one byte to finish the allocation.
>>
>> Isn't it a security hole?
>>
> Not necessarily. If the kernel pre-zeros the huge pages via CoW like normal
> pages, then definitely not.
>
> Even if the kernel doesn't pre-zero the pages, if DPDK takes care of
> properly initializing memory structures on startup as they are carved out
> of the huge pages, then it isn't a security hole. However, that approach is
> susceptible to bit rot... You can audit the code and make sure everything
> is kosher at first, but you have to worry about new code making assumptions
> about how memory is initialized.
>
>> This article speaks about "prezeroing optimizations" in Linux kernel:
>> http://landley.net/writing/memory-faq.txt
>
> I read through that when I was trying to figure out whether huge pages
> were pre-zeroed or not. It doesn't talk about huge pages much beyond why
> they are useful for reducing TLB swaps.
>
> Jay
2015-04-03 10:04, Gonzalez Monroy, Sergio:
> On 02/04/2015 14:41, Jay Rolette wrote:
>> On Thu, Apr 2, 2015 at 7:55 AM, Thomas Monjalon <thomas.monjalon@6wind.com>
>> wrote:
>>
>>> 2015-04-02 19:30, jerry.lilijun@huawei.com:
>>>> From: Lilijun <jerry.lilijun@huawei.com>
>>>>
>>>> In the function map_all_hugepages(), hugepage memory is truly allocated by
>>>> memset(virtaddr, 0, hugepage_sz). Then it costs about 40s to finish the
>>>> dpdk memory initialization when 40000 2M hugepages are setup in host os.
>>> Yes it's something we should try to reduce.
>>>
>> I have a patch in my tree that does the same opto, but it is commented out
>> right now. In our case, 2/3's of the startup time for our entire app was
>> due to that particular call - memset(virtaddr, 0, hugepage_sz). Just
>> zeroing 1 byte per huge page reduces that by 30% in my tests.
>>
>> The only reason I have it commented out is that I didn't have time to make
>> sure there weren't side-effects for DPDK or my app. For normal shared
>> memory on Linux, pages are initialized to zero automatically once they are
>> touched, so the memset isn't required but I wasn't sure whether that
>> applied to huge pages. Also wasn't sure how hugetlbfs factored into the
>> equation.
>>
>> Hopefully someone can chime in on that. Would love to uncomment the opto :)
>>
> I think the opto/patch is good ;)
>
> I had a look at the Linux kernel sources (mm/hugetlb.c) and at least
> since 2.6.32 (minimum Linux kernel version supported by DPDK) the kernel
> clears the hugepage (clear_huge_page) when it faults (hugetlb_no_page).
>
> Primary DPDK apps do clear_hugedir, clearing previously allocated
> hugepages, thus triggering hugepage faults (hugetlb_no_page) during
> map_all_hugepages.
>
> Note that even when we exit a primary DPDK app, hugepages remain
> allocated, reason why apps such as dump_cfg are able to retrieve
> config/memory information.

OK, thanks Sergio.

So the patch should add a comment to explain page fault reason of memset and
why 1 byte is enough.
I think we should also consider remap_all_hugepages() function.

>>> Isn't it a security hole?
>>>
>> Not necessarily. If the kernel pre-zeros the huge pages via CoW like normal
>> pages, then definitely not.
>>
>> Even if the kernel doesn't pre-zero the pages, if DPDK takes care of
>> properly initializing memory structures on startup as they are carved out
>> of the huge pages, then it isn't a security hole. However, that approach is
>> susceptible to bit rot... You can audit the code and make sure everything
>> is kosher at first, but you have to worry about new code making assumptions
>> about how memory is initialized.
>>
>>> This article speaks about "prezeroing optimizations" in Linux kernel:
>>> http://landley.net/writing/memory-faq.txt
>>
>> I read through that when I was trying to figure out whether huge pages
>> were pre-zeroed or not. It doesn't talk about huge pages much beyond why
>> they are useful for reducing TLB swaps.
>>
>> Jay
On 2015/4/3 17:14, Thomas Monjalon wrote:
> 2015-04-03 10:04, Gonzalez Monroy, Sergio:
>> On 02/04/2015 14:41, Jay Rolette wrote:
>>> On Thu, Apr 2, 2015 at 7:55 AM, Thomas Monjalon <thomas.monjalon@6wind.com>
>>> wrote:
>>>
>>>> 2015-04-02 19:30, jerry.lilijun@huawei.com:
>>>>> From: Lilijun <jerry.lilijun@huawei.com>
>>>>>
>>>>> In the function map_all_hugepages(), hugepage memory is truly allocated by
>>>>> memset(virtaddr, 0, hugepage_sz). Then it costs about 40s to finish the
>>>>> dpdk memory initialization when 40000 2M hugepages are setup in host os.
>>>> Yes it's something we should try to reduce.
>>>>
>>> I have a patch in my tree that does the same opto, but it is commented out
>>> right now. In our case, 2/3's of the startup time for our entire app was
>>> due to that particular call - memset(virtaddr, 0, hugepage_sz). Just
>>> zeroing 1 byte per huge page reduces that by 30% in my tests.
>>>
>>> The only reason I have it commented out is that I didn't have time to make
>>> sure there weren't side-effects for DPDK or my app. For normal shared
>>> memory on Linux, pages are initialized to zero automatically once they are
>>> touched, so the memset isn't required but I wasn't sure whether that
>>> applied to huge pages. Also wasn't sure how hugetlbfs factored into the
>>> equation.
>>>
>>> Hopefully someone can chime in on that. Would love to uncomment the opto :)
>>>
>> I think the opto/patch is good ;)
>>
>> I had a look at the Linux kernel sources (mm/hugetlb.c) and at least
>> since 2.6.32 (minimum Linux kernel version supported by DPDK) the kernel
>> clears the hugepage (clear_huge_page) when it faults (hugetlb_no_page).
>>
>> Primary DPDK apps do clear_hugedir, clearing previously allocated
>> hugepages, thus triggering hugepage faults (hugetlb_no_page) during
>> map_all_hugepages.
>>
>> Note that even when we exit a primary DPDK app, hugepages remain
>> allocated, reason why apps such as dump_cfg are able to retrieve
>> config/memory information.
>
> OK, thanks Sergio.
>
> So the patch should add a comment to explain page fault reason of memset and
> why 1 byte is enough.
> I think we should also consider remap_all_hugepages() function.

Thanks very much. I will update the comments and send it again.

>>>> Isn't it a security hole?
>>>>
>>> Not necessarily. If the kernel pre-zeros the huge pages via CoW like normal
>>> pages, then definitely not.
>>>
>>> Even if the kernel doesn't pre-zero the pages, if DPDK takes care of
>>> properly initializing memory structures on startup as they are carved out
>>> of the huge pages, then it isn't a security hole. However, that approach is
>>> susceptible to bit rot... You can audit the code and make sure everything
>>> is kosher at first, but you have to worry about new code making assumptions
>>> about how memory is initialized.
>>>
>>>> This article speaks about "prezeroing optimizations" in Linux kernel:
>>>> http://landley.net/writing/memory-faq.txt
>>>
>>> I read through that when I was trying to figure out whether huge pages
>>> were pre-zeroed or not. It doesn't talk about huge pages much beyond why
>>> they are useful for reducing TLB swaps.
>>>
>>> Jay
On 03/04/2015 10:14, Thomas Monjalon wrote:
> 2015-04-03 10:04, Gonzalez Monroy, Sergio:
>> On 02/04/2015 14:41, Jay Rolette wrote:
>>> On Thu, Apr 2, 2015 at 7:55 AM, Thomas Monjalon <thomas.monjalon@6wind.com>
>>> wrote:
>>>
>>>> 2015-04-02 19:30, jerry.lilijun@huawei.com:
>>>>> From: Lilijun <jerry.lilijun@huawei.com>
>>>>>
>>>>> In the function map_all_hugepages(), hugepage memory is truly allocated by
>>>>> memset(virtaddr, 0, hugepage_sz). Then it costs about 40s to finish the
>>>>> dpdk memory initialization when 40000 2M hugepages are setup in host os.
>>>> Yes it's something we should try to reduce.
>>>>
>>> I have a patch in my tree that does the same opto, but it is commented out
>>> right now. In our case, 2/3's of the startup time for our entire app was
>>> due to that particular call - memset(virtaddr, 0, hugepage_sz). Just
>>> zeroing 1 byte per huge page reduces that by 30% in my tests.
>>>
>>> The only reason I have it commented out is that I didn't have time to make
>>> sure there weren't side-effects for DPDK or my app. For normal shared
>>> memory on Linux, pages are initialized to zero automatically once they are
>>> touched, so the memset isn't required but I wasn't sure whether that
>>> applied to huge pages. Also wasn't sure how hugetlbfs factored into the
>>> equation.
>>>
>>> Hopefully someone can chime in on that. Would love to uncomment the opto :)
>>>
>> I think the opto/patch is good ;)
>>
>> I had a look at the Linux kernel sources (mm/hugetlb.c) and at least
>> since 2.6.32 (minimum Linux kernel version supported by DPDK) the kernel
>> clears the hugepage (clear_huge_page) when it faults (hugetlb_no_page).
>>
>> Primary DPDK apps do clear_hugedir, clearing previously allocated
>> hugepages, thus triggering hugepage faults (hugetlb_no_page) during
>> map_all_hugepages.
>>
>> Note that even when we exit a primary DPDK app, hugepages remain
>> allocated, reason why apps such as dump_cfg are able to retrieve
>> config/memory information.
>
> OK, thanks Sergio.
>
> So the patch should add a comment to explain page fault reason of memset and
> why 1 byte is enough.
> I think we should also consider remap_all_hugepages() function.

Good point! You are right, I don't think we would even need to do memset at
all in remap_all_hugepages, as we already have touched/allocated those pages.

Sergio

>>>> Isn't it a security hole?
>>>>
>>> Not necessarily. If the kernel pre-zeros the huge pages via CoW like normal
>>> pages, then definitely not.
>>>
>>> Even if the kernel doesn't pre-zero the pages, if DPDK takes care of
>>> properly initializing memory structures on startup as they are carved out
>>> of the huge pages, then it isn't a security hole. However, that approach is
>>> susceptible to bit rot... You can audit the code and make sure everything
>>> is kosher at first, but you have to worry about new code making assumptions
>>> about how memory is initialized.
>>>
>>>> This article speaks about "prezeroing optimizations" in Linux kernel:
>>>> http://landley.net/writing/memory-faq.txt
>>>
>>> I read through that when I was trying to figure out whether huge pages
>>> were pre-zeroed or not. It doesn't talk about huge pages much beyond why
>>> they are useful for reducing TLB swaps.
>>>
>>> Jay
diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
index 5f9f92e..8bbee9c 100644
--- a/lib/librte_eal/linuxapp/eal/eal_memory.c
+++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
@@ -378,7 +378,7 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl,
 
 		if (orig) {
 			hugepg_tbl[i].orig_va = virtaddr;
-			memset(virtaddr, 0, hugepage_sz);
+			memset(virtaddr, 0, 1);
 		} else {
 			hugepg_tbl[i].final_va = virtaddr;
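For the v2 that Thomas requested, the hunk might gain an explanatory comment along these lines. This is only a sketch of what such a follow-up could look like, not the actual v2 patch:

```diff
 		if (orig) {
 			hugepg_tbl[i].orig_va = virtaddr;
-			memset(virtaddr, 0, hugepage_sz);
+			/*
+			 * Writing a single byte is enough: it triggers the
+			 * page fault that makes the kernel actually allocate
+			 * the hugepage, and the kernel already zeroes the
+			 * page (clear_huge_page) when faulting it in, so a
+			 * full memset() would only duplicate that work and
+			 * dominates EAL init time with many hugepages.
+			 */
+			memset(virtaddr, 0, 1);
 		} else {
```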