Show a cover letter.

GET /api/covers/106096/?format=api
HTTP 200 OK
Allow: GET, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 106096,
    "url": "http://patchwork.dpdk.org/api/covers/106096/?format=api",
    "web_url": "http://patchwork.dpdk.org/project/dpdk/cover/20220119210917.765505-1-dkozlyuk@nvidia.com/",
    "project": {
        "id": 1,
        "url": "http://patchwork.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<20220119210917.765505-1-dkozlyuk@nvidia.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/20220119210917.765505-1-dkozlyuk@nvidia.com",
    "date": "2022-01-19T21:09:11",
    "name": "[v2,0/6] Fast restart with many hugepages",
    "submitter": {
        "id": 2248,
        "url": "http://patchwork.dpdk.org/api/people/2248/?format=api",
        "name": "Dmitry Kozlyuk",
        "email": "dkozlyuk@nvidia.com"
    },
    "mbox": "http://patchwork.dpdk.org/project/dpdk/cover/20220119210917.765505-1-dkozlyuk@nvidia.com/mbox/",
    "series": [
        {
            "id": 21266,
            "url": "http://patchwork.dpdk.org/api/series/21266/?format=api",
            "web_url": "http://patchwork.dpdk.org/project/dpdk/list/?series=21266",
            "date": "2022-01-19T21:09:11",
            "name": "Fast restart with many hugepages",
            "version": 2,
            "mbox": "http://patchwork.dpdk.org/series/21266/mbox/"
        }
    ],
    "comments": "http://patchwork.dpdk.org/api/covers/106096/comments/",
    "headers": {
        "Return-Path": "<dev-bounces@dpdk.org>",
        "X-Original-To": "patchwork@inbox.dpdk.org",
        "Delivered-To": "patchwork@inbox.dpdk.org",
        "Received": [
            "from mails.dpdk.org (mails.dpdk.org [217.70.189.124])\n\tby inbox.dpdk.org (Postfix) with ESMTP id 6153BA0351;\n\tWed, 19 Jan 2022 22:09:38 +0100 (CET)",
            "from [217.70.189.124] (localhost [127.0.0.1])\n\tby mails.dpdk.org (Postfix) with ESMTP id 215BA41147;\n\tWed, 19 Jan 2022 22:09:38 +0100 (CET)",
            "from NAM04-DM6-obe.outbound.protection.outlook.com\n (mail-dm6nam08on2060.outbound.protection.outlook.com [40.107.102.60])\n by mails.dpdk.org (Postfix) with ESMTP id D4A284013F\n for <dev@dpdk.org>; Wed, 19 Jan 2022 22:09:36 +0100 (CET)",
            "from MWHPR17CA0049.namprd17.prod.outlook.com (2603:10b6:300:93::11)\n by BY5PR12MB5527.namprd12.prod.outlook.com (2603:10b6:a03:1d5::18)\n with Microsoft SMTP Server (version=TLS1_2,\n cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4888.11; Wed, 19 Jan\n 2022 21:09:34 +0000",
            "from CO1NAM11FT043.eop-nam11.prod.protection.outlook.com\n (2603:10b6:300:93:cafe::fd) by MWHPR17CA0049.outlook.office365.com\n (2603:10b6:300:93::11) with Microsoft SMTP Server (version=TLS1_2,\n cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4909.8 via Frontend\n Transport; Wed, 19 Jan 2022 21:09:34 +0000",
            "from mail.nvidia.com (12.22.5.235) by\n CO1NAM11FT043.mail.protection.outlook.com (10.13.174.193) with Microsoft SMTP\n Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id\n 15.20.4909.7 via Frontend Transport; Wed, 19 Jan 2022 21:09:33 +0000",
            "from rnnvmail201.nvidia.com (10.129.68.8) by DRHQMAIL107.nvidia.com\n (10.27.9.16) with Microsoft SMTP Server (TLS) id 15.0.1497.18;\n Wed, 19 Jan 2022 21:09:33 +0000",
            "from nvidia.com (10.126.230.35) by rnnvmail201.nvidia.com\n (10.129.68.8) with Microsoft SMTP Server (version=TLS1_2,\n cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.2.986.9; Wed, 19 Jan 2022\n 13:09:31 -0800"
        ],
        "ARC-Seal": "i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;\n b=SYUlI+lzWvlPf6Z5dAtJqPp5Y3xLmZWfwfow5uNNgOrFv9eV3+8nDj/5bWItaxugrOW7ViN0FVOXFY+MpLDA/7p/x+r/R3E2jSCdXT93s+lZ9XqYY5uxjX2gaq+J/UcfPDNFvQytWLQrKmdr1soufjsRDe2SluEVIQU7N5rHKoeZ6WBYym7l5YVd1rC7IVTVhZrqU7tzzpBoTgoIbMXtfTJtgvt/R/yS6JtEnG9/JkUJfxLnVj5XeLfzxuSGYVRajAo9dDP0fsbJvNM4D1n6TNjZUjw65dr2NiLSA22DRNHOFL/7wysiP+6D+lZI2D9jISffR4ERD/zRiWTPpE5WfQ==",
        "ARC-Message-Signature": "i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;\n s=arcselector9901;\n h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;\n bh=LJbd385sqE8FnFTIIB+CyeTkZs+dITXloEeCZhFtKrw=;\n b=nakDJHBVu3NZg3/ybzWNc+Im2YXwhpshuSVo0oSjN8tos2dGPEvaPaYQ89v2CGQFQ/lDbjrR/GobxGzh9Rs6qyJd4tywg+jm3Nszt4Zc57b8p+qot8lR+Ao2z45KhB2Ua2D6bHX3CwWxH1doNP1/BlC6mTtu+5Okl+2K6vI1fnbyfNSA5bULPlyiWVshS+PEOXFEzUVsSgq9ySEQJCN921FJkd7JDsQGElx+wCneJ8Wm1fcj3e5z90bGOhjYHNUSCDlCqEZHw1yxOX7gUzmHjUmv3ah4qk3VIVAAVLIsBXYUxE+C9Typ4EuVcIkhoejs7dAvt3ApvhgY/GNpWikUcw==",
        "ARC-Authentication-Results": "i=1; mx.microsoft.com 1; spf=pass (sender ip is\n 12.22.5.235) smtp.rcpttodomain=redhat.com smtp.mailfrom=nvidia.com;\n dmarc=pass (p=reject sp=reject pct=100) action=none header.from=nvidia.com;\n dkim=none (message not signed); arc=none",
        "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com;\n s=selector2;\n h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;\n bh=LJbd385sqE8FnFTIIB+CyeTkZs+dITXloEeCZhFtKrw=;\n b=gzE9L1CK4BjtMS49zF1NyfHCK5LCx06uVacLPKSRji/BIYlGNwHMMLrMoVxU3+zV79xrOeXML048G6q+VPoJmUZJNjA6lJlhO15fvmB9ucyHOosKcQmuCTqXH48mFzkLr/5X54ENWCQNnI80r+EmDiP4LnnW7eHM++li6G6aOyl9VEuY30cAqQXDw8N69XxUo5Jd7zM0gMPJ09xkNrcuCwe/lUdRsGMfD9xBSvqZCgsYOtHTeQDzcN7S98LPoQPoorf/BAX32j/9HgllTPr/HtTfnPHu+kqNJQeBrwOrpUxYZMBt9K0VlWFHoLi5elSgpdZikPCCJnbxWUYHDFK0tg==",
        "X-MS-Exchange-Authentication-Results": "spf=pass (sender IP is 12.22.5.235)\n smtp.mailfrom=nvidia.com; dkim=none (message not signed)\n header.d=none;dmarc=pass action=none header.from=nvidia.com;",
        "Received-SPF": "Pass (protection.outlook.com: domain of nvidia.com designates\n 12.22.5.235 as permitted sender) receiver=protection.outlook.com;\n client-ip=12.22.5.235; helo=mail.nvidia.com;",
        "From": "Dmitry Kozlyuk <dkozlyuk@nvidia.com>",
        "To": "<dev@dpdk.org>",
        "CC": "Bruce Richardson <bruce.richardson@intel.com>, Anatoly Burakov\n <anatoly.burakov@intel.com>, Viacheslav Ovsiienko <viacheslavo@nvidia.com>,\n David Marchand <david.marchand@redhat.com>, Thomas Monjalon\n <thomas@monjalon.net>, Lior Margalit <lmargalit@nvidia.com>",
        "Subject": "[PATCH v2 0/6] Fast restart with many hugepages",
        "Date": "Wed, 19 Jan 2022 23:09:11 +0200",
        "Message-ID": "<20220119210917.765505-1-dkozlyuk@nvidia.com>",
        "X-Mailer": "git-send-email 2.25.1",
        "In-Reply-To": "<20220117080801.481568-1-dkozlyuk@nvidia.com>",
        "References": "<20220117080801.481568-1-dkozlyuk@nvidia.com>",
        "MIME-Version": "1.0",
        "Content-Transfer-Encoding": "8bit",
        "Content-Type": "text/plain",
        "X-Originating-IP": "[10.126.230.35]",
        "X-ClientProxiedBy": "HQMAIL105.nvidia.com (172.20.187.12) To\n rnnvmail201.nvidia.com (10.129.68.8)",
        "X-EOPAttributedMessage": "0",
        "X-MS-PublicTrafficType": "Email",
        "X-MS-Office365-Filtering-Correlation-Id": "baf3a5e7-cb56-44ba-58da-08d9db8ffe32",
        "X-MS-TrafficTypeDiagnostic": "BY5PR12MB5527:EE_",
        "X-LD-Processed": "43083d15-7273-40c1-b7db-39efd9ccc17a,ExtAddr",
        "X-Microsoft-Antispam-PRVS": "\n <BY5PR12MB5527699A9D98C7407CEE9FD1B9599@BY5PR12MB5527.namprd12.prod.outlook.com>",
        "X-MS-Oob-TLC-OOBClassifiers": "OLM:8882;",
        "X-MS-Exchange-SenderADCheck": "1",
        "X-MS-Exchange-AntiSpam-Relay": "0",
        "X-Microsoft-Antispam": "BCL:0;",
        "X-Microsoft-Antispam-Message-Info": "\n /FrkcMXYiwXztMVlrLQwoWPPsmwjtFsZ9kISk9nNZ6rAxmLz19AApu0vUu6vzC4x61am0cmy6XdKGNA3z21l+/xB3LVxwG6IvqhR0qOcb7R15RbUcuLvmfYhlLWnhL+9Y4Csd/DBkTGvfrmgnLG2L67TsPEXVgKv1esm5PFu2eDnJBdKsHTiRuTlNf/0973ZMZIyf4s2GYDZqT3DMMV+CCxdfK3x5GSOeQg17FEGpKJNlt6whfFHJNP9LmdpfnxXsr3R22E3PtaZsWzP5YbhJebtWtAqb3WGhNYLZNwODD+LO2FxqvSWZUhu0KbZoVuO1fNHhYT4USEjCaD9SqKc47mwLQDVHx/rA5+/zrteNFFgOZ75byuR6PM8Po0mrfD68CltuYtveyEbFeWKPVg76qEc7CAf87w6WTKpozQtFTz0Wwx8qX2YcBQnle+ysbUQZFAaglx2h/vhidntNLtkLmBcFIkiALJRFlLBeR44m1ymxaq1qy/OG6Z0AuWs9Yq0Ayxy5rwh0RHV8Js50OecUeaY/wluSW5hU478jJ4oJUYZeG7ObMyuWkqDwh1t/fZs2Wu+OBcp8RczYBUTUS91IdO/envGJGfDGl9jDixP8GwFO0sfVNphWbNeP32CLkmSI+YvNIRIJVfGDjnBrEm6E/asE/MXKcFPKktiJShkoyidAjYCiTCAAa9BMmPQ8aAtkAH+CDx5U+T+ju9hNGvLoleVSL/a9fcKyuZ8H3SmLw6yAV7FOWNJgDwW5aZlUq0xmJLV/9VVyU4uw5pww6/NUF8dN0hmQ3VZSv1yulu2WbCHg/Y97ouaSez9sLeHVjyQ79QfAUtwnA0/GWWmKPjfo1kItoUx1Y8xenSFTc9HeMWuzh88Kusa+Cip7nAhOG2Z",
        "X-Forefront-Antispam-Report": "CIP:12.22.5.235; CTRY:US; LANG:en; SCL:1; SRV:;\n IPV:CAL; SFV:NSPM; H:mail.nvidia.com; PTR:InfoNoRecords; CAT:NONE;\n SFS:(4636009)(46966006)(40470700002)(36840700001)(7696005)(508600001)(83380400001)(36860700001)(6916009)(5660300002)(186003)(2906002)(40460700001)(47076005)(81166007)(336012)(107886003)(55016003)(966005)(8936002)(86362001)(82310400004)(356005)(4326008)(6666004)(2616005)(70586007)(26005)(426003)(36756003)(1076003)(70206006)(54906003)(316002)(16526019)(6286002)(8676002)(36900700001);\n DIR:OUT; SFP:1101;",
        "X-OriginatorOrg": "Nvidia.com",
        "X-MS-Exchange-CrossTenant-OriginalArrivalTime": "19 Jan 2022 21:09:33.9904 (UTC)",
        "X-MS-Exchange-CrossTenant-Network-Message-Id": "\n baf3a5e7-cb56-44ba-58da-08d9db8ffe32",
        "X-MS-Exchange-CrossTenant-Id": "43083d15-7273-40c1-b7db-39efd9ccc17a",
        "X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp": "\n TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a; Ip=[12.22.5.235];\n Helo=[mail.nvidia.com]",
        "X-MS-Exchange-CrossTenant-AuthSource": "\n CO1NAM11FT043.eop-nam11.prod.protection.outlook.com",
        "X-MS-Exchange-CrossTenant-AuthAs": "Anonymous",
        "X-MS-Exchange-CrossTenant-FromEntityHeader": "HybridOnPrem",
        "X-MS-Exchange-Transport-CrossTenantHeadersStamped": "BY5PR12MB5527",
        "X-BeenThere": "dev@dpdk.org",
        "X-Mailman-Version": "2.1.29",
        "Precedence": "list",
        "List-Id": "DPDK patches and discussions <dev.dpdk.org>",
        "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n <mailto:dev-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://mails.dpdk.org/archives/dev/>",
        "List-Post": "<mailto:dev@dpdk.org>",
        "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
        "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n <mailto:dev-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dev-bounces@dpdk.org"
    },
    "content": "This patchset is a new design and implementation of [1].\n\nv2:\n  * Fix hugepage file removal when they are no longer used.\n    Disable removal with --huge-unlink=never as intended.\n    Document this behavior difference. (Bruce)\n  * Improve documentation, commit messages, and naming. (Thomas)\n\n# Problem Statement\n\nLarge allocations that involve mapping new hugepages are slow.\nThis is problematic, for example, in the following use case.\nA single-process application allocates ~1TB of mempools at startup.\nSometimes the app needs to restart as quick as possible.\nAllocating the hugepages anew takes as long as 15 seconds,\nwhile the new process could just pick up all the memory\nleft by the old one (reinitializing the contents as needed).\n\nAlmost all of mmap(2) time spent in the kernel\nis clearing the memory, i.e. filling it with zeros.\nThis is done if a file in hugetlbfs is mapped\nfor the first time system-wide, i.e. a hugepage is committed\nto prevent data leaks from the previous users of the same hugepage.\nFor example, mapping 32 GB from a new file may take 2.16 seconds,\nwhile mapping the same pages again takes only 0.3 ms.\nSecurity put aside, e.g. when the environment is controlled,\nthis effort is wasted for the memory intended for DMA,\nbecause its content will be overwritten anyway.\n\nLinux EAL explicitly removes hugetlbfs files at initialization\nand before mapping to force the kernel clear the memory.\nThis allows the memory allocator to clean memory on only on freeing.\n\n# Solution\n\nAdd a new mode allowing EAL to remap existing hugepage files.\nWhile it is intended to make restarts faster in the first place,\nit makes any startup faster except the cold one\n(with no existing files).\n\nIt is the administrator who accepts security risks\nimplied by reusing hugepages.\nThe new mode is an opt-in and a warning is logged.\n\nThe feature is Linux-only as it is related\nto mapping hugepages from files which only Linux does.\nIt is inherently incompatible with --in-memory,\nfor --huge-unlink see below.\n\nThere is formally no breakage of API contract,\nbut there is a behavior change in the new mode:\nrte_malloc*() and rte_memzone_reserve*() may return dirty memory\n(previously they were returning clean memory from free heap elements).\nTheir contract has always explicitly allowed this,\nbut still there may be users relying on the traditional behavior.\nSuch users will need to fix their code to use the new mode.\n\n# Implementation\n\n## User Interface\n\nThere is --huge-unlink switch in the same area to remove hugepage files\nbefore mapping them. 
It is infeasible to use with the new mode,\nbecause the point is to keep hugepage files for fast future restarts.\nExtend --huge-unlink option to represent only valid combinations:\n\n* --huge-unlink=existing OR no option (for compatibility):\n  unlink files at initialization\n  and before opening them as a precaution.\n\n* --huge-unlink=always OR just --huge-unlink (for compatibility):\n  same as above + unlink created files before mapping.\n\n* --huge-unlink=never:\n  the new mode, do not unlink hugepages files, reuse them.\n\nThis option was always Linux-only, but it is kept as common\nin case there are users who expect it to be a no-op on other systems.\n(Adding a separate --huge-reuse option was also considered,\nbut there is no obvious benefit and more combinations to test.)\n\n## EAL\n\nIf a memseg is mapped dirty, it is marked with RTE_MEMSEG_FLAG_DIRTY\nso that the memory allocator may clear the memory if need be.\nSee patch 5/6 description for details how this is done\nin different memory mapping modes.\n\nThe memory manager tracks whether an element is clean or dirty.\nIf rte_zmalloc*() allocates from a dirty element,\nthe memory is cleared before handling it to the user.\nOn freeing, the allocator joins adjacent free elements,\nbut in the new mode it may not be feasible to clear the free memory\nif the joint element is dirty (contains dirty parts).\nIn any case, memory will be cleared only once,\neither on freeing or on allocation.\nSee patch 3/6 for details.\nPatch 2/6 adds a benchmark to see how time is distributed\nbetween allocation and freeing in different modes.\n\nBesides clearing memory, each mmap() call takes some time.\nFor example, 1024 calls for 1 TB may take ~300 ms.\nThe time of one call mapping N hugepages is O(N),\nbecause inside the kernel hugepages are allocated ony by one.\nSyscall overhead is negligeable even for one page.\nHence, it does not make sense to reduce the number of mmap() calls,\nwhich would essentially move the loop over pages into the kernel.\n\n[1]: http://inbox.dpdk.org/dev/20211011085644.2716490-3-dkozlyuk@nvidia.com/\n\nDmitry Kozlyuk (6):\n  doc: add hugepage mapping details\n  app/test: add allocator performance benchmark\n  mem: add dirty malloc element support\n  eal: refactor --huge-unlink storage\n  eal/linux: allow hugepage file reuse\n  eal: extend --huge-unlink for hugepage file reuse\n\n app/test/meson.build                          |   2 +\n app/test/test_eal_flags.c                     |  25 +++\n app/test/test_malloc_perf.c                   | 174 ++++++++++++++++++\n doc/guides/linux_gsg/linux_eal_parameters.rst |  24 ++-\n .../prog_guide/env_abstraction_layer.rst      | 107 ++++++++++-\n doc/guides/rel_notes/release_22_03.rst        |   7 +\n lib/eal/common/eal_common_options.c           |  48 ++++-\n lib/eal/common/eal_internal_cfg.h             |  10 +-\n lib/eal/common/malloc_elem.c                  |  22 ++-\n lib/eal/common/malloc_elem.h                  |  11 +-\n lib/eal/common/malloc_heap.c                  |  18 +-\n lib/eal/common/rte_malloc.c                   |  21 ++-\n lib/eal/include/rte_memory.h                  |   8 +-\n lib/eal/linux/eal.c                           |   3 +-\n lib/eal/linux/eal_hugepage_info.c             | 118 +++++++++---\n lib/eal/linux/eal_memalloc.c                  | 173 ++++++++++-------\n lib/eal/linux/eal_memory.c                    |   2 +-\n 17 files changed, 644 insertions(+), 129 deletions(-)\n create mode 100644 app/test/test_malloc_perf.c"
}
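
The same cover letter can also be fetched programmatically. Below is a minimal sketch in Python using the third-party requests library; the library choice and the printed fields are illustrative assumptions, while the URL and key names are taken from the response shown above.

import requests

# Fetch the cover letter resource from the Patchwork REST API.
# Without ?format=api the server typically responds with plain JSON.
url = "http://patchwork.dpdk.org/api/covers/106096/"
resp = requests.get(url)
resp.raise_for_status()
cover = resp.json()

# Keys correspond to the fields in the response above.
print(cover["name"])                  # "[v2,0/6] Fast restart with many hugepages"
print(cover["submitter"]["email"])    # "dkozlyuk@nvidia.com"
print(cover["series"][0]["mbox"])     # mbox URL for the whole series
print(cover["content"][:200])         # start of the cover letter body

The "mbox" URLs in the response can be downloaded directly and applied to a git tree, which is the usual way to consume a series from Patchwork.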