From patchwork Thu Nov 19 10:52:44 2020
X-Patchwork-Submitter: "Burakov, Anatoly"
X-Patchwork-Id: 84363
X-Patchwork-Delegate: thomas@monjalon.net
From: Anatoly Burakov
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, bruce.richardson@intel.com,
 padraig.j.connolly@intel.com, stable@dpdk.org
Date: Thu, 19 Nov 2020 10:52:44 +0000
Message-Id: <863ac4ae1f8a1002b06e092cafba0ddaf6c7b1bd.1605783141.git.anatoly.burakov@intel.com>
In-Reply-To: <242515b3b0d4ac57ee86cada96af90fb78e14997.1598363848.git.anatoly.burakov@intel.com>
References: <242515b3b0d4ac57ee86cada96af90fb78e14997.1598363848.git.anatoly.burakov@intel.com>
Subject: [dpdk-dev] [PATCH v4 1/2] doc: clarify instructions on running as
 non-root

The current instructions on setting up the system to run DPDK as
non-root are slightly out of date, so update them.

Cc: stable@dpdk.org

Signed-off-by: Anatoly Burakov
Reviewed-by: Ferruh Yigit
Acked-by: Bruce Richardson
---
Notes:
    v2:
    - Moved VFIO description to be first

 doc/guides/linux_gsg/enable_func.rst | 58 ++++++++++++++++++++--------
 1 file changed, 41 insertions(+), 17 deletions(-)

diff --git a/doc/guides/linux_gsg/enable_func.rst b/doc/guides/linux_gsg/enable_func.rst
index aab32252ea..29e1b90217 100644
--- a/doc/guides/linux_gsg/enable_func.rst
+++ b/doc/guides/linux_gsg/enable_func.rst
@@ -60,22 +60,51 @@ The application can then determine what action to take, if any, if the HPET is n
 if any, and on what is available on the system at runtime.
 
 Running DPDK Applications Without Root Privileges
---------------------------------------------------------
+-------------------------------------------------
+
+In order to run DPDK as non-root, the following Linux filesystem objects'
+permissions should be adjusted to ensure that the Linux account being used to
+run the DPDK application has access to them:
+
+* All directories which serve as hugepage mount points, for example, ``/dev/hugepages``
+
+* If the HPET is to be used, ``/dev/hpet``
+
+When running as a non-root user, there may be some additional resource limits
+imposed by the system. Specifically, the following resource limits may need
+to be adjusted in order to ensure normal DPDK operation:
+
+* RLIMIT_LOCKS (number of file locks that can be held by a process)
+
+* RLIMIT_NOFILE (number of file descriptors that can be held open by a process)
+
+* RLIMIT_MEMLOCK (amount of pinned pages the process is allowed to have)
+
+The above limits can usually be adjusted by editing the
+``/etc/security/limits.conf`` file and rebooting.
+
+Additionally, depending on which kernel driver is in use, the relevant
+resources should also be accessible to the user running the DPDK application.
+
+For the ``vfio-pci`` kernel driver, the following Linux file system objects'
+permissions should be adjusted:
+
+* The VFIO device file, ``/dev/vfio/vfio``
+
+* The directories under ``/dev/vfio`` that correspond to IOMMU group numbers of
+  devices intended to be used by DPDK, for example, ``/dev/vfio/50``
 
 .. note::
 
-    The instructions below will allow running DPDK as non-root with older
-    Linux kernel versions. However, since version 4.0, the kernel does not allow
-    unprivileged processes to read the physical address information from
-    the pagemaps file, making it impossible for those processes to use HW
-    devices which require physical addresses
+    The instructions below will allow running DPDK with the ``igb_uio`` or
+    ``uio_pci_generic`` drivers as non-root with older Linux kernel versions.
+    However, since version 4.0, the kernel does not allow unprivileged processes
+    to read the physical address information from the pagemaps file, making it
+    impossible for those drivers to be used by non-privileged users. In such
+    cases, using the VFIO driver is recommended.
 
-Although applications using the DPDK use network ports and other hardware resources directly,
-with a number of small permission adjustments it is possible to run these applications as a user other than "root".
-To do so, the ownership, or permissions, on the following Linux file system objects should be adjusted to ensure that
-the Linux user account being used to run the DPDK application has access to them:
-
-* All directories which serve as hugepage mount points, for example, ``/mnt/huge``
+For the ``igb_uio`` or ``uio_pci_generic`` kernel drivers, the following Linux
+file system objects' permissions should be adjusted:
 
 * The userspace-io device files in ``/dev``, for example, ``/dev/uio0``, ``/dev/uio1``, and so on
@@ -84,11 +113,6 @@ the Linux user account being used to run the DPDK application has access to them
 
     /sys/class/uio/uio0/device/config
     /sys/class/uio/uio0/device/resource*
 
-* If the HPET is to be used, ``/dev/hpet``
-
-.. note::
-
-    On some Linux installations, ``/dev/hugepages`` is also a hugepage mount point created by default.
 
 Power Management and Power Saving Functionality
 -----------------------------------------------
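
[Editor's note: the following is an illustrative sketch of the permission and
resource-limit adjustments described in this patch; it is not part of the
submission. The account name ``dpdkuser``, the IOMMU group number 50, and the
``memlock`` value are hypothetical placeholders::

    # grant dpdkuser access to the default hugepage mount point
    chown dpdkuser: /dev/hugepages
    # grant dpdkuser access to the VFIO container device and to the
    # device file of the IOMMU group used by the DPDK-bound device
    chown dpdkuser: /dev/vfio/vfio /dev/vfio/50
    # raise the locked-memory limit for dpdkuser (takes effect on next login)
    cat <<EOF >> /etc/security/limits.conf
    dpdkuser    soft    memlock    unlimited
    dpdkuser    hard    memlock    unlimited
    EOF

The right values depend on the system at hand and on the application's memory
requirements.]
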
From patchwork Thu Nov 19 10:52:45 2020
X-Patchwork-Submitter: "Burakov, Anatoly"
X-Patchwork-Id: 84364
X-Patchwork-Delegate: thomas@monjalon.net
From: Anatoly Burakov
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, bruce.richardson@intel.com,
 padraig.j.connolly@intel.com, stable@dpdk.org
Date: Thu, 19 Nov 2020 10:52:45 +0000
Message-Id: <232d6d0413903434d0dcef516570d424d0dfab53.1605783141.git.anatoly.burakov@intel.com>
In-Reply-To: <863ac4ae1f8a1002b06e092cafba0ddaf6c7b1bd.1605783141.git.anatoly.burakov@intel.com>
References: <863ac4ae1f8a1002b06e092cafba0ddaf6c7b1bd.1605783141.git.anatoly.burakov@intel.com>
In-Reply-To: <242515b3b0d4ac57ee86cada96af90fb78e14997.1598363848.git.anatoly.burakov@intel.com>
References: <242515b3b0d4ac57ee86cada96af90fb78e14997.1598363848.git.anatoly.burakov@intel.com>
Subject: [dpdk-dev] [PATCH v4 2/2] doc/linux_gsg: update information on using
 hugepages

Current information regarding hugepage usage is a little out of date.
Update it to include information on in-memory mode, as well as on the
default mount points provided by systemd.

Cc: stable@dpdk.org

Signed-off-by: Anatoly Burakov
Acked-by: Bruce Richardson
---
Notes:
    v3:
    - Clarified wording around non-default hugepage sizes

    v2:
    - Reworked the description
    - Put runtime reservation first, and boot time as an alternative
    - Clarified wording and fixed typos
    - Mentioned that some kernel versions do not support reserving 1G pages
      at run time

 doc/guides/linux_gsg/sys_reqs.rst | 70 +++++++++++++++++++------------
 1 file changed, 44 insertions(+), 26 deletions(-)

diff --git a/doc/guides/linux_gsg/sys_reqs.rst b/doc/guides/linux_gsg/sys_reqs.rst
index 6ecdc04aa9..3966c30456 100644
--- a/doc/guides/linux_gsg/sys_reqs.rst
+++ b/doc/guides/linux_gsg/sys_reqs.rst
@@ -147,8 +147,35 @@ Without hugepages, high TLB miss rates would occur with the standard 4k page siz
 Reserving Hugepages for DPDK Use
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-The allocation of hugepages should be done at boot time or as soon as possible after system boot
-to prevent memory from being fragmented in physical memory.
+The reservation of hugepages can be performed at run time. This is done by
+echoing the number of hugepages required to a ``nr_hugepages`` file in the
+``/sys/kernel/`` directory corresponding to a specific page size (in
+kilobytes). For a single-node system, the command to use is as follows
+(assuming that 1024 pages of 2 MB are required)::
+
+    echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+
+On a NUMA machine, the above command will usually divide the number of hugepages
+equally across all NUMA nodes (assuming there is enough memory on all NUMA
+nodes). However, pages can also be reserved explicitly on individual NUMA
+nodes using a ``nr_hugepages`` file in the ``/sys/devices/`` directory::
+
+    echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
+    echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
+
+.. note::
+
+    Some kernel versions may not allow reserving 1 GB hugepages at run time, so
+    reserving them at boot time may be the only option. Please see below for
+    instructions.
+
+**Alternative:**
+
+In the general case, reserving hugepages at run time is perfectly fine, but in
+use cases where having lots of physically contiguous memory is required, it is
+preferable to reserve hugepages at boot time, as that will help in preventing
+physical memory from becoming heavily fragmented.
 
 To reserve hugepages at boot time, a parameter is passed to the Linux kernel on the kernel command line.
 
 For 2 MB pages, just pass the hugepages option to the kernel. For example, to reserve 1024 pages of 2 MB, use::
 
@@ -177,35 +204,26 @@ the number of hugepages reserved at boot time is generally divided equally betwe
 See the Documentation/admin-guide/kernel-parameters.txt file in your Linux source tree
 for further details of these and other kernel options.
 
-**Alternative:**
-
-For 2 MB pages, there is also the option of allocating hugepages after the system has booted.
-This is done by echoing the number of hugepages required to a nr_hugepages file in the ``/sys/devices/`` directory.
-For a single-node system, the command to use is as follows (assuming that 1024 pages are required)::
-
-    echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
-
-On a NUMA machine, pages should be allocated explicitly on separate nodes::
-
-    echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
-    echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
-
-.. note::
-
-    For 1G pages, it is not possible to reserve the hugepage memory after the system has booted.
-
 Using Hugepages with the DPDK
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-Once the hugepage memory is reserved, to make the memory available for DPDK use, perform the following steps::
+If secondary process support is not required, DPDK is able to use hugepages
+without any configuration by using "in-memory" mode. Please see
+:ref:`linux_eal_parameters` for more details.
+
+If secondary process support is required, mount points for hugepages need to
+be created. On modern Linux distributions, a default mount point for hugepages
+is provided by the system and is located at ``/dev/hugepages``. This mount
+point will use the default hugepage size set by the kernel parameters as
+described above.
+
+However, in order to use hugepage sizes other than the default, it is
+necessary to manually create mount points for those hugepage sizes
+(e.g. 1 GB pages).
+
+To make hugepages of size 1 GB available for DPDK use, perform the following
+steps::
 
     mkdir /mnt/huge
-    mount -t hugetlbfs nodev /mnt/huge
+    mount -t hugetlbfs -o pagesize=1GB nodev /mnt/huge
 
 The mount point can be made permanent across reboots, by adding the following line to the ``/etc/fstab`` file::
 
-    nodev /mnt/huge hugetlbfs defaults 0 0
-
-For 1GB pages, the page size must be specified as a mount option::
-
-    nodev /mnt/huge_1GB hugetlbfs pagesize=1GB 0 0
+    nodev /mnt/huge hugetlbfs pagesize=1GB 0 0
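
[Editor's note: as a worked illustration of the hugepage workflow described in
this patch (a sketch, not part of the submission), assuming 1 GB pages were
reserved at boot time via kernel command-line parameters such as
``default_hugepagesz=1G hugepagesz=1G hugepages=4``::

    # reserve 1024 x 2 MB hugepages at run time
    echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
    # create and mount a hugetlbfs instance for the boot-time-reserved 1 GB pages
    mkdir -p /mnt/huge
    mount -t hugetlbfs -o pagesize=1GB nodev /mnt/huge
    # verify the reservations and the mount
    grep -i huge /proc/meminfo
    mount | grep hugetlbfs

The page counts and the mount point are examples only; the number of pages to
reserve depends on the memory requirements of the application.]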