[5/5] doc: use ordered lists

Message ID 20231123114405.2611371-6-david.marchand@redhat.com (mailing list archive)
State Accepted, archived
Delegated to: David Marchand
Series: Some documentation fixes

Checks

Context Check Description
ci/checkpatch warning coding style issues
ci/loongarch-compilation success Compilation OK
ci/loongarch-unit-testing success Unit Testing PASS
ci/Intel-compilation success Compilation OK
ci/github-robot: build success github build: passed
ci/intel-Testing success Testing PASS
ci/intel-Functional success Functional PASS
ci/iol-intel-Performance success Performance Testing PASS
ci/iol-mellanox-Performance success Performance Testing PASS
ci/iol-intel-Functional success Functional Testing PASS
ci/iol-compile-amd64-testing success Testing PASS
ci/iol-unit-arm64-testing success Testing PASS
ci/iol-broadcom-Functional success Functional Testing PASS
ci/iol-broadcom-Performance success Performance Testing PASS
ci/iol-unit-amd64-testing success Testing PASS
ci/iol-compile-arm64-testing success Testing PASS
ci/iol-sample-apps-testing success Testing PASS

Commit Message

David Marchand Nov. 23, 2023, 11:44 a.m. UTC
  Prefer automatically ordered lists by using #.

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
 doc/guides/eventdevs/dlb2.rst                 | 29 ++++++-----
 doc/guides/eventdevs/dpaa.rst                 |  2 +-
 .../linux_gsg/nic_perf_intel_platform.rst     | 10 ++--
 doc/guides/nics/cnxk.rst                      |  4 +-
 doc/guides/nics/dpaa2.rst                     | 19 +++----
 doc/guides/nics/enetc.rst                     |  6 +--
 doc/guides/nics/enetfec.rst                   | 12 ++---
 doc/guides/nics/i40e.rst                      | 16 +++---
 doc/guides/nics/mlx4.rst                      | 32 ++++++------
 doc/guides/nics/mlx5.rst                      | 18 +++----
 doc/guides/nics/mvpp2.rst                     | 49 ++++++++++---------
 doc/guides/nics/pfe.rst                       |  8 +--
 doc/guides/nics/tap.rst                       | 14 +++---
 doc/guides/platform/bluefield.rst             |  4 +-
 doc/guides/platform/cnxk.rst                  | 26 +++++-----
 doc/guides/platform/dpaa.rst                  | 14 +++---
 doc/guides/platform/dpaa2.rst                 | 20 ++++----
 doc/guides/platform/mlx5.rst                  | 14 +++---
 doc/guides/platform/octeontx.rst              | 22 ++++-----
 .../prog_guide/env_abstraction_layer.rst      | 10 ++--
 doc/guides/prog_guide/graph_lib.rst           | 39 ++++++++-------
 doc/guides/prog_guide/rawdev.rst              | 28 ++++++-----
 doc/guides/prog_guide/rte_flow.rst            | 12 ++---
 doc/guides/prog_guide/stack_lib.rst           |  8 +--
 doc/guides/prog_guide/trace_lib.rst           | 12 ++---
 doc/guides/rawdevs/ifpga.rst                  |  5 +-
 doc/guides/sample_app_ug/ip_pipeline.rst      |  4 +-
 doc/guides/sample_app_ug/pipeline.rst         |  4 +-
 doc/guides/sample_app_ug/vdpa.rst             | 26 +++++-----
 doc/guides/windows_gsg/run_apps.rst           |  8 +--
 30 files changed, 250 insertions(+), 225 deletions(-)
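
For readers less familiar with the reStructuredText syntax involved, below is a
minimal before/after sketch (illustrative only, not a hunk from this patch).
Docutils accepts ``#.`` as an automatic enumerator, so the rendered numbering
stays correct when a step is later inserted or removed, whereas explicit
numbers must be renumbered by hand:

    Explicitly numbered (before):

        1. Configure and start the device
        2. Stop the device
        3. Restart the device

    Automatically numbered (after):

        #. Configure and start the device

        #. Stop the device

        #. Restart the device

Both forms render as "1.", "2.", "3."; with ``#.`` the numbers are generated at
build time, and blank lines between items keep them in a single enumerated list,
as the converted guides rely on. Several hunks also re-indent ``.. note::``
directives and parameter descriptions under the preceding list item so that
they remain attached to it.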
  

Comments

Bruce Richardson Nov. 23, 2023, 11:53 a.m. UTC | #1
On Thu, Nov 23, 2023 at 12:44:05PM +0100, David Marchand wrote:
> Prefer automatically ordered lists by using #.
> 
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---
Haven't checked all instances, but definitely agree with the idea.
If not merged for 23.11, please merge early for 24.03 in case of churn
during development.

Acked-by: Bruce Richardson <bruce.richardson@intel.com>
  
Dariusz Sosnowski Nov. 23, 2023, 5:23 p.m. UTC | #2
Hi,

> -----Original Message-----
> From: David Marchand <david.marchand@redhat.com>
> Sent: Thursday, November 23, 2023 12:44
> Subject: [PATCH 5/5] doc: use ordered lists
> 
> Prefer automatically ordered lists by using #.
> 
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---
Looks good to me. Thank you.

Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>

Best regards,
Dariusz Sosnowski
  

Patch

diff --git a/doc/guides/eventdevs/dlb2.rst b/doc/guides/eventdevs/dlb2.rst
index 6a273d6f45..2532d92888 100644
--- a/doc/guides/eventdevs/dlb2.rst
+++ b/doc/guides/eventdevs/dlb2.rst
@@ -271,24 +271,29 @@  certain reconfiguration sequences that are valid in the eventdev API but not
 supported by the PMD.
 
 Specifically, the PMD supports the following configuration sequence:
-1. Configure and start the device
-2. Stop the device
-3. (Optional) Reconfigure the device
-4. (Optional) If step 3 is run:
 
-   a. Setup queue(s). The reconfigured queue(s) lose their previous port links.
-   b. The reconfigured port(s) lose their previous queue links.
+#. Configure and start the device
 
-5. (Optional, only if steps 4a and 4b are run) Link port(s) to queue(s)
-6. Restart the device. If the device is reconfigured in step 3 but one or more
+#. Stop the device
+
+#. (Optional) Reconfigure the device
+   Setup queue(s). The reconfigured queue(s) lose their previous port links.
+   The reconfigured port(s) lose their previous queue links.
+   Link port(s) to queue(s)
+
+#. Restart the device. If the device is reconfigured in step 3 but one or more
    of its ports or queues are not, the PMD will apply their previous
    configuration (including port->queue links) at this time.
 
 The PMD does not support the following configuration sequences:
-1. Configure and start the device
-2. Stop the device
-3. Setup queue or setup port
-4. Start the device
+
+#. Configure and start the device
+
+#. Stop the device
+
+#. Setup queue or setup port
+
+#. Start the device
 
 This sequence is not supported because the event device must be reconfigured
 before its ports or queues can be.
diff --git a/doc/guides/eventdevs/dpaa.rst b/doc/guides/eventdevs/dpaa.rst
index 266f92d159..33d41fc7c4 100644
--- a/doc/guides/eventdevs/dpaa.rst
+++ b/doc/guides/eventdevs/dpaa.rst
@@ -64,7 +64,7 @@  Example:
 Limitations
 -----------
 
-1. DPAA eventdev can not work with DPAA PUSH mode queues configured for ethdev.
+#. DPAA eventdev can not work with DPAA PUSH mode queues configured for ethdev.
    Please configure export DPAA_NUM_PUSH_QUEUES=0
 
 Platform Requirement
diff --git a/doc/guides/linux_gsg/nic_perf_intel_platform.rst b/doc/guides/linux_gsg/nic_perf_intel_platform.rst
index dbfaf4e350..4a5815dfb9 100644
--- a/doc/guides/linux_gsg/nic_perf_intel_platform.rst
+++ b/doc/guides/linux_gsg/nic_perf_intel_platform.rst
@@ -127,7 +127,7 @@  The following are some recommendations on GRUB boot settings:
 Configurations before running DPDK
 ----------------------------------
 
-1. Reserve huge pages.
+#. Reserve huge pages.
    See the earlier section on :ref:`linux_gsg_hugepages` for more details.
 
    .. code-block:: console
@@ -147,7 +147,7 @@  Configurations before running DPDK
       # Mount to the specific folder.
       mount -t hugetlbfs nodev /mnt/huge
 
-2. Check the CPU layout using the DPDK ``cpu_layout`` utility:
+#. Check the CPU layout using the DPDK ``cpu_layout`` utility:
 
    .. code-block:: console
 
@@ -157,7 +157,7 @@  Configurations before running DPDK
 
    Or run ``lscpu`` to check the cores on each socket.
 
-3. Check your NIC id and related socket id:
+#. Check your NIC id and related socket id:
 
    .. code-block:: console
 
@@ -181,5 +181,5 @@  Configurations before running DPDK
    **Note**: To get the best performance, ensure that the core and NICs are in the same socket.
    In the example above ``85:00.0`` is on socket 1 and should be used by cores on socket 1 for the best performance.
 
-4. Check which kernel drivers needs to be loaded and whether there is a need to unbind the network ports from their kernel drivers.
-More details about DPDK setup and Linux kernel requirements see :ref:`linux_gsg_compiling_dpdk` and :ref:`linux_gsg_linux_drivers`.
+#. Check which kernel drivers needs to be loaded and whether there is a need to unbind the network ports from their kernel drivers.
+   More details about DPDK setup and Linux kernel requirements see :ref:`linux_gsg_compiling_dpdk` and :ref:`linux_gsg_linux_drivers`.
diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
index 21063a80ff..9ec52e380f 100644
--- a/doc/guides/nics/cnxk.rst
+++ b/doc/guides/nics/cnxk.rst
@@ -311,10 +311,10 @@  Runtime Config Options
 
    In CN10K, in event mode, driver can work in two modes,
 
-   1. Inbound encrypted traffic received by probed ipsec inline device while
+   #. Inbound encrypted traffic received by probed ipsec inline device while
       plain traffic post decryption is received by ethdev.
 
-   2. Both Inbound encrypted traffic and plain traffic post decryption are
+   #. Both Inbound encrypted traffic and plain traffic post decryption are
       received by ethdev.
 
    By default event mode works using inline device i.e mode ``1``.
diff --git a/doc/guides/nics/dpaa2.rst b/doc/guides/nics/dpaa2.rst
index 2d113f53df..c0d3e7a178 100644
--- a/doc/guides/nics/dpaa2.rst
+++ b/doc/guides/nics/dpaa2.rst
@@ -563,8 +563,9 @@  Traffic Management API
 DPAA2 PMD supports generic DPDK Traffic Management API which allows to
 configure the following features:
 
-1. Hierarchical scheduling
-2. Traffic shaping
+#. Hierarchical scheduling
+
+#. Traffic shaping
 
 Internally TM is represented by a hierarchy (tree) of nodes.
 Node which has a parent is called a leaf whereas node without
@@ -602,19 +603,19 @@  Usage example
 
 For a detailed usage description please refer to "Traffic Management" section in DPDK :doc:`Testpmd Runtime Functions <../testpmd_app_ug/testpmd_funcs>`.
 
-1. Run testpmd as follows:
+#. Run testpmd as follows:
 
    .. code-block:: console
 
 	./dpdk-testpmd  -c 0xf -n 1 -- -i --portmask 0x3 --nb-cores=1 --txq=4 --rxq=4
 
-2. Stop all ports:
+#. Stop all ports:
 
    .. code-block:: console
 
 	testpmd> port stop all
 
-3. Add shaper profile:
+#. Add shaper profile:
 
    One port level shaper and strict priority on all 4 queues of port 0:
 
@@ -642,7 +643,7 @@  For a detailed usage description please refer to "Traffic Management" section in
 	add port tm leaf node 0 3 8 0 500 1 -1 0 0 0 0
 	port tm hierarchy commit 0 no
 
-4. Create flows as per the source IP addresses:
+#. Create flows as per the source IP addresses:
 
    .. code-block:: console
 
@@ -655,7 +656,7 @@  For a detailed usage description please refer to "Traffic Management" section in
 	flow create 1 group 0 priority 4 ingress pattern ipv4 src is \
 	10.10.10.4 / end actions queue index 3 / end
 
-5. Start all ports
+#. Start all ports
 
    .. code-block:: console
 
@@ -663,10 +664,10 @@  For a detailed usage description please refer to "Traffic Management" section in
 
 
 
-6. Enable forwarding
+#. Enable forwarding
 
    .. code-block:: console
 
 		testpmd> start
 
-7. Inject the traffic on port1 as per the configured flows, you will see shaped and scheduled forwarded traffic on port0
+#. Inject the traffic on port1 as per the configured flows, you will see shaped and scheduled forwarded traffic on port0
diff --git a/doc/guides/nics/enetc.rst b/doc/guides/nics/enetc.rst
index 855bacfd0f..e96260f96a 100644
--- a/doc/guides/nics/enetc.rst
+++ b/doc/guides/nics/enetc.rst
@@ -76,15 +76,15 @@  Prerequisites
 There are three main pre-requisites for executing ENETC PMD on a ENETC
 compatible board:
 
-1. **ARM 64 Tool Chain**
+#. **ARM 64 Tool Chain**
 
    For example, the `*aarch64* Linaro Toolchain <https://releases.linaro.org/components/toolchain/binaries/7.3-2018.05/aarch64-linux-gnu/gcc-linaro-7.3.1-2018.05-i686_aarch64-linux-gnu.tar.xz>`_.
 
-2. **Linux Kernel**
+#. **Linux Kernel**
 
    It can be obtained from `NXP's Github hosting <https://source.codeaurora.org/external/qoriq/qoriq-components/linux>`_.
 
-3. **Rootfile system**
+#. **Rootfile system**
 
    Any *aarch64* supporting filesystem can be used. For example,
    Ubuntu 16.04 LTS (Xenial) or 18.04 (Bionic) userland which can be obtained
diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
index ad28c8f8fb..c1adb64369 100644
--- a/doc/guides/nics/enetfec.rst
+++ b/doc/guides/nics/enetfec.rst
@@ -102,28 +102,28 @@  Prerequisites
 There are three main pre-requisites for executing ENETFEC PMD on a i.MX 8M Mini
 compatible board:
 
-1. **ARM 64 Tool Chain**
+#. **ARM 64 Tool Chain**
 
    For example, the `*aarch64* Linaro Toolchain
    <https://releases.linaro.org/components/toolchain/binaries/7.4-2019.02/aarch64-linux-gnu/gcc-linaro-7.4.1-2019.02-x86_64_aarch64-linux-gnu.tar.xz>`_.
 
-2. **Linux Kernel**
+#. **Linux Kernel**
 
    It can be obtained from `NXP's Github hosting
    <https://source.codeaurora.org/external/qoriq/qoriq-components/linux>`_.
 
-.. note::
+   .. note::
 
-   Branch is 'lf-5.10.y'
+      Branch is 'lf-5.10.y'
 
-3. **Rootfile system**
+#. **Rootfile system**
 
    Any *aarch64* supporting filesystem can be used.
    For example, Ubuntu 18.04 LTS (Bionic) or 20.04 LTS(Focal) userland
    which can be obtained from `here
    <http://cdimage.ubuntu.com/ubuntu-base/releases/18.04/release/ubuntu-base-18.04.1-base-arm64.tar.gz>`_.
 
-4. The Ethernet device will be registered as virtual device,
+#. The Ethernet device will be registered as virtual device,
    so ENETFEC has dependency on **rte_bus_vdev** library
    and it is mandatory to use `--vdev` with value `net_enetfec`
    to run DPDK application.
diff --git a/doc/guides/nics/i40e.rst b/doc/guides/nics/i40e.rst
index 6cd1165521..3432eabb36 100644
--- a/doc/guides/nics/i40e.rst
+++ b/doc/guides/nics/i40e.rst
@@ -964,7 +964,7 @@  See :numref:`figure_intel_perf_test_setup` for the performance test setup.
    Performance Test Setup
 
 
-1. Add two Intel Ethernet CNA XL710 to the platform, and use one port per card to get best performance.
+#. Add two Intel Ethernet CNA XL710 to the platform, and use one port per card to get best performance.
    The reason for using two NICs is to overcome a PCIe v3.0 limitation since it cannot provide 80GbE bandwidth
    for two 40GbE ports, but two different PCIe v3.0 x8 slot can.
    Refer to the sample NICs output above, then we can select ``82:00.0`` and ``85:00.0`` as test ports::
@@ -972,23 +972,23 @@  See :numref:`figure_intel_perf_test_setup` for the performance test setup.
       82:00.0 Ethernet [0200]: Intel XL710 for 40GbE QSFP+ [8086:1583]
       85:00.0 Ethernet [0200]: Intel XL710 for 40GbE QSFP+ [8086:1583]
 
-2. Connect the ports to the traffic generator. For high speed testing, it's best to use a hardware traffic generator.
+#. Connect the ports to the traffic generator. For high speed testing, it's best to use a hardware traffic generator.
 
-3. Check the PCI devices numa node (socket id) and get the cores number on the exact socket id.
+#. Check the PCI devices numa node (socket id) and get the cores number on the exact socket id.
    In this case, ``82:00.0`` and ``85:00.0`` are both in socket 1, and the cores on socket 1 in the referenced platform
    are 18-35 and 54-71.
    Note: Don't use 2 logical cores on the same core (e.g core18 has 2 logical cores, core18 and core54), instead, use 2 logical
    cores from different cores (e.g core18 and core19).
 
-4. Bind these two ports to igb_uio.
+#. Bind these two ports to igb_uio.
 
-5. As to Intel Ethernet CNA XL710 40GbE port, we need at least two queue pairs to achieve best performance, then two queues per port
+#. As to Intel Ethernet CNA XL710 40GbE port, we need at least two queue pairs to achieve best performance, then two queues per port
    will be required, and each queue pair will need a dedicated CPU core for receiving/transmitting packets.
 
-6. The DPDK sample application ``l3fwd`` will be used for performance testing, with using two ports for bi-directional forwarding.
+#. The DPDK sample application ``l3fwd`` will be used for performance testing, with using two ports for bi-directional forwarding.
    Compile the ``l3fwd sample`` with the default lpm mode.
 
-7. The command line of running l3fwd would be something like the following::
+#. The command line of running l3fwd would be something like the following::
 
       ./dpdk-l3fwd -l 18-21 -n 4 -a 82:00.0 -a 85:00.0 \
               -- -p 0x3 --config '(0,0,18),(0,1,19),(1,0,20),(1,1,21)'
@@ -996,7 +996,7 @@  See :numref:`figure_intel_perf_test_setup` for the performance test setup.
    This means that the application uses core 18 for port 0, queue pair 0 forwarding, core 19 for port 0, queue pair 1 forwarding,
    core 20 for port 1, queue pair 0 forwarding, and core 21 for port 1, queue pair 1 forwarding.
 
-8. Configure the traffic at a traffic generator.
+#. Configure the traffic at a traffic generator.
 
    * Start creating a stream on packet generator.
 
diff --git a/doc/guides/nics/mlx4.rst b/doc/guides/nics/mlx4.rst
index c6279f51d0..50962caeda 100644
--- a/doc/guides/nics/mlx4.rst
+++ b/doc/guides/nics/mlx4.rst
@@ -234,9 +234,9 @@  NVIDIA MLNX_OFED as a fallback
 Installing NVIDIA MLNX_OFED
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-1. Download latest NVIDIA MLNX_OFED.
+#. Download latest NVIDIA MLNX_OFED.
 
-2. Install the required libraries and kernel modules either by installing
+#. Install the required libraries and kernel modules either by installing
    only the required set, or by installing the entire NVIDIA MLNX_OFED:
 
    For bare metal use::
@@ -251,22 +251,22 @@  Installing NVIDIA MLNX_OFED
 
         ./mlnxofedinstall --dpdk --upstream-libs --guest
 
-3. Verify the firmware is the correct one::
+#. Verify the firmware is the correct one::
 
         ibv_devinfo
 
-4. Set all ports links to Ethernet, follow instructions on the screen::
+#. Set all ports links to Ethernet, follow instructions on the screen::
 
         connectx_port_config
 
-5. Continue with :ref:`section 2 of the Quick Start Guide <QSG_2>`.
+#. Continue with :ref:`section 2 of the Quick Start Guide <QSG_2>`.
 
 .. _qsg:
 
 Quick Start Guide
 -----------------
 
-1. Set all ports links to Ethernet::
+#. Set all ports links to Ethernet::
 
         PCI=<NIC PCI address>
         echo eth > "/sys/bus/pci/devices/$PCI/mlx4_port0"
@@ -280,7 +280,7 @@  Quick Start Guide
 
 .. _QSG_2:
 
-2. In case of bare metal or hypervisor, configure optimized steering mode
+#. In case of bare metal or hypervisor, configure optimized steering mode
    by adding the following line to ``/etc/modprobe.d/mlx4_core.conf``::
 
         options mlx4_core log_num_mgm_entry_size=-7
@@ -290,7 +290,7 @@  Quick Start Guide
         If VLAN filtering is used, set log_num_mgm_entry_size=-1.
         Performance degradation can occur on this case.
 
-3. Restart the driver::
+#. Restart the driver::
 
         /etc/init.d/openibd restart
 
@@ -298,17 +298,17 @@  Quick Start Guide
 
         service openibd restart
 
-4. Install DPDK and you are ready to go.
+#. Install DPDK and you are ready to go.
    See :doc:`compilation instructions <../linux_gsg/build_dpdk>`.
 
 Performance tuning
 ------------------
 
-1. Verify the optimized steering mode is configured::
+#. Verify the optimized steering mode is configured::
 
         cat /sys/module/mlx4_core/parameters/log_num_mgm_entry_size
 
-2. Use the CPU near local NUMA node to which the PCIe adapter is connected,
+#. Use the CPU near local NUMA node to which the PCIe adapter is connected,
    for better performance. For VMs, verify that the right CPU
    and NUMA node are pinned according to the above. Run::
 
@@ -316,21 +316,21 @@  Performance tuning
 
    to identify the NUMA node to which the PCIe adapter is connected.
 
-3. If more than one adapter is used, and root complex capabilities allow
+#. If more than one adapter is used, and root complex capabilities allow
    to put both adapters on the same NUMA node without PCI bandwidth degradation,
    it is recommended to locate both adapters on the same NUMA node.
    This in order to forward packets from one to the other without
    NUMA performance penalty.
 
-4. Disable pause frames::
+#. Disable pause frames::
 
         ethtool -A <netdev> rx off tx off
 
-5. Verify IO non-posted prefetch is disabled by default. This can be checked
+#. Verify IO non-posted prefetch is disabled by default. This can be checked
    via the BIOS configuration. Please contact you server provider for more
    information about the settings.
 
-.. note::
+   .. note::
 
         On some machines, depends on the machine integrator, it is beneficial
         to set the PCI max read request parameter to 1K. This can be
@@ -347,7 +347,7 @@  Performance tuning
         The XXX can be different on different systems. Make sure to configure
         according to the setpci output.
 
-6. To minimize overhead of searching Memory Regions:
+#. To minimize overhead of searching Memory Regions:
 
    - '--socket-mem' is recommended to pin memory by predictable amount.
    - Configure per-lcore cache when creating Mempools for packet buffer.
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 39a8c5d7b4..d002106765 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -1535,15 +1535,15 @@  Use <sfnum> to probe SF representor::
 Performance tuning
 ------------------
 
-1. Configure aggressive CQE Zipping for maximum performance::
+#. Configure aggressive CQE Zipping for maximum performance::
 
         mlxconfig -d <mst device> s CQE_COMPRESSION=1
 
-  To set it back to the default CQE Zipping mode use::
+   To set it back to the default CQE Zipping mode use::
 
         mlxconfig -d <mst device> s CQE_COMPRESSION=0
 
-2. In case of virtualization:
+#. In case of virtualization:
 
    - Make sure that hypervisor kernel is 3.16 or newer.
    - Configure boot with ``iommu=pt``.
@@ -1551,7 +1551,7 @@  Performance tuning
    - Make sure to allocate a VM on huge pages.
    - Make sure to set CPU pinning.
 
-3. Use the CPU near local NUMA node to which the PCIe adapter is connected,
+#. Use the CPU near local NUMA node to which the PCIe adapter is connected,
    for better performance. For VMs, verify that the right CPU
    and NUMA node are pinned according to the above. Run::
 
@@ -1559,21 +1559,21 @@  Performance tuning
 
    to identify the NUMA node to which the PCIe adapter is connected.
 
-4. If more than one adapter is used, and root complex capabilities allow
+#. If more than one adapter is used, and root complex capabilities allow
    to put both adapters on the same NUMA node without PCI bandwidth degradation,
    it is recommended to locate both adapters on the same NUMA node.
    This in order to forward packets from one to the other without
    NUMA performance penalty.
 
-5. Disable pause frames::
+#. Disable pause frames::
 
         ethtool -A <netdev> rx off tx off
 
-6. Verify IO non-posted prefetch is disabled by default. This can be checked
+#. Verify IO non-posted prefetch is disabled by default. This can be checked
    via the BIOS configuration. Please contact you server provider for more
    information about the settings.
 
-.. note::
+   .. note::
 
         On some machines, depends on the machine integrator, it is beneficial
         to set the PCI max read request parameter to 1K. This can be
@@ -1590,7 +1590,7 @@  Performance tuning
         The XXX can be different on different systems. Make sure to configure
         according to the setpci output.
 
-7. To minimize overhead of searching Memory Regions:
+#. To minimize overhead of searching Memory Regions:
 
    - '--socket-mem' is recommended to pin memory by predictable amount.
    - Configure per-lcore cache when creating Mempools for packet buffer.
diff --git a/doc/guides/nics/mvpp2.rst b/doc/guides/nics/mvpp2.rst
index cbfa47afd8..e3d4b6a479 100644
--- a/doc/guides/nics/mvpp2.rst
+++ b/doc/guides/nics/mvpp2.rst
@@ -572,9 +572,11 @@  Traffic metering and policing
 
 MVPP2 PMD supports DPDK traffic metering and policing that allows the following:
 
-1. Meter ingress traffic.
-2. Do policing.
-3. Gather statistics.
+#. Meter ingress traffic.
+
+#. Do policing.
+
+#. Gather statistics.
 
 For an additional description please refer to DPDK :doc:`Traffic Metering and Policing API <../prog_guide/traffic_metering_and_policing>`.
 
@@ -592,25 +594,25 @@  The following capabilities are not supported:
 Usage example
 ~~~~~~~~~~~~~
 
-1. Run testpmd user app:
+#. Run testpmd user app:
 
    .. code-block:: console
 
 		./dpdk-testpmd --vdev=eth_mvpp2,iface=eth0,iface=eth2 -c 6 -- -i -p 3 -a --txd 1024 --rxd 1024
 
-2. Create meter profile:
+#. Create meter profile:
 
    .. code-block:: console
 
 		testpmd> add port meter profile 0 0 srtcm_rfc2697 2000 256 256
 
-3. Create meter:
+#. Create meter:
 
    .. code-block:: console
 
 		testpmd> create port meter 0 0 0 yes d d d 0 1 0
 
-4. Create flow rule witch meter attached:
+#. Create flow rule witch meter attached:
 
    .. code-block:: console
 
@@ -628,10 +630,13 @@  Traffic Management API
 MVPP2 PMD supports generic DPDK Traffic Management API which allows to
 configure the following features:
 
-1. Hierarchical scheduling
-2. Traffic shaping
-3. Congestion management
-4. Packet marking
+#. Hierarchical scheduling
+
+#. Traffic shaping
+
+#. Congestion management
+
+#. Packet marking
 
 Internally TM is represented by a hierarchy (tree) of nodes.
 Node which has a parent is called a leaf whereas node without
@@ -671,20 +676,20 @@  Usage example
 
 For a detailed usage description please refer to "Traffic Management" section in DPDK :doc:`Testpmd Runtime Functions <../testpmd_app_ug/testpmd_funcs>`.
 
-1. Run testpmd as follows:
+#. Run testpmd as follows:
 
    .. code-block:: console
 
 		./dpdk-testpmd --vdev=net_mrvl,iface=eth0,iface=eth2,cfg=./qos_config -c 7 -- \
 		-i -p 3 --disable-hw-vlan-strip --rxq 3 --txq 3 --txd 1024 --rxd 1024
 
-2. Stop all ports:
+#. Stop all ports:
 
    .. code-block:: console
 
 		testpmd> port stop all
 
-3. Add shaper profile:
+#. Add shaper profile:
 
    .. code-block:: console
 
@@ -698,7 +703,7 @@  For a detailed usage description please refer to "Traffic Management" section in
 		70000   - Bucket size in bytes.
 		0       - Packet length adjustment - ignored.
 
-4. Add non-leaf node for port 0:
+#. Add non-leaf node for port 0:
 
    .. code-block:: console
 
@@ -717,7 +722,7 @@  For a detailed usage description please refer to "Traffic Management" section in
 		 3  - Enable statistics for both number of transmitted packets and bytes.
 		 0  - Number of shared shapers.
 
-5. Add leaf node for tx queue 0:
+#. Add leaf node for tx queue 0:
 
    .. code-block:: console
 
@@ -737,7 +742,7 @@  For a detailed usage description please refer to "Traffic Management" section in
 		 1  - Enable statistics counter for number of transmitted packets.
 		 0  - Number of shared shapers.
 
-6. Add leaf node for tx queue 1:
+#. Add leaf node for tx queue 1:
 
    .. code-block:: console
 
@@ -757,7 +762,7 @@  For a detailed usage description please refer to "Traffic Management" section in
 		 1  - Enable statistics counter for number of transmitted packets.
 		 0  - Number of shared shapers.
 
-7. Add leaf node for tx queue 2:
+#. Add leaf node for tx queue 2:
 
    .. code-block:: console
 
@@ -777,18 +782,18 @@  For a detailed usage description please refer to "Traffic Management" section in
 		 1  - Enable statistics counter for number of transmitted packets.
 		 0  - Number of shared shapers.
 
-8. Commit hierarchy:
+#. Commit hierarchy:
 
    .. code-block:: console
 
 		testpmd> port tm hierarchy commit 0 no
 
-  Parameters have following meaning::
+   Parameters have following meaning::
 
 		0  - Id of a port.
 		no - Do not flush TM hierarchy if commit fails.
 
-9. Start all ports
+#. Start all ports
 
    .. code-block:: console
 
@@ -796,7 +801,7 @@  For a detailed usage description please refer to "Traffic Management" section in
 
 
 
-10. Enable forwarding
+#. Enable forwarding
 
    .. code-block:: console
 
diff --git a/doc/guides/nics/pfe.rst b/doc/guides/nics/pfe.rst
index 748c382573..172ae80984 100644
--- a/doc/guides/nics/pfe.rst
+++ b/doc/guides/nics/pfe.rst
@@ -110,21 +110,21 @@  Prerequisites
 Below are some pre-requisites for executing PFE PMD on a PFE
 compatible board:
 
-1. **ARM 64 Tool Chain**
+#. **ARM 64 Tool Chain**
 
    For example, the `*aarch64* Linaro Toolchain <https://releases.linaro.org/components/toolchain/binaries/7.3-2018.05/aarch64-linux-gnu/gcc-linaro-7.3.1-2018.05-i686_aarch64-linux-gnu.tar.xz>`_.
 
-2. **Linux Kernel**
+#. **Linux Kernel**
 
    It can be obtained from `NXP's Github hosting <https://source.codeaurora.org/external/qoriq/qoriq-components/linux>`_.
 
-3. **Rootfile system**
+#. **Rootfile system**
 
    Any *aarch64* supporting filesystem can be used. For example,
    Ubuntu 16.04 LTS (Xenial) or 18.04 (Bionic) userland which can be obtained
    from `here <http://cdimage.ubuntu.com/ubuntu-base/releases/18.04/release/ubuntu-base-18.04.1-base-arm64.tar.gz>`_.
 
-4. The ethernet device will be registered as virtual device, so pfe has dependency on
+#. The ethernet device will be registered as virtual device, so pfe has dependency on
    **rte_bus_vdev** library and it is mandatory to use `--vdev` with value `net_pfe` to
    run DPDK application.
 
diff --git a/doc/guides/nics/tap.rst b/doc/guides/nics/tap.rst
index 449e747994..d4f45c02a1 100644
--- a/doc/guides/nics/tap.rst
+++ b/doc/guides/nics/tap.rst
@@ -249,19 +249,19 @@  It is possible to support different RSS hash algorithms by updating file
 ``tap_bpf_program.c``  In order to add a new RSS hash algorithm follow these
 steps:
 
-1. Write the new RSS implementation in file ``tap_bpf_program.c``
+#. Write the new RSS implementation in file ``tap_bpf_program.c``
 
-BPF programs which are uploaded to the kernel correspond to
-C functions under different ELF sections.
+   BPF programs which are uploaded to the kernel correspond to
+   C functions under different ELF sections.
 
-2. Install ``LLVM`` library and ``clang`` compiler versions 3.7 and above
+#. Install ``LLVM`` library and ``clang`` compiler versions 3.7 and above
 
-3. Use make to compile  `tap_bpf_program.c`` via ``LLVM`` into an object file
-   and extract the resulting instructions into ``tap_bpf_insn.h``.
+#. Use make to compile  `tap_bpf_program.c`` via ``LLVM`` into an object file
+   and extract the resulting instructions into ``tap_bpf_insn.h``::
 
     cd bpf; make
 
-4. Recompile the TAP PMD.
+#. Recompile the TAP PMD.
 
 The C arrays are uploaded to the kernel using BPF system calls.
 
diff --git a/doc/guides/platform/bluefield.rst b/doc/guides/platform/bluefield.rst
index 322b08a217..954686affc 100644
--- a/doc/guides/platform/bluefield.rst
+++ b/doc/guides/platform/bluefield.rst
@@ -25,11 +25,11 @@  Supported BlueField Platforms
 Common Offload HW Drivers
 -------------------------
 
-1. **NIC Driver**
+#. **NIC Driver**
 
    See :doc:`../nics/mlx5` for NVIDIA mlx5 NIC driver information.
 
-2. **Cryptodev Driver**
+#. **Cryptodev Driver**
 
    This is based on the crypto extension support of armv8. See
    :doc:`../cryptodevs/armv8` for armv8 crypto driver information.
diff --git a/doc/guides/platform/cnxk.rst b/doc/guides/platform/cnxk.rst
index b901062c93..70065e3d96 100644
--- a/doc/guides/platform/cnxk.rst
+++ b/doc/guides/platform/cnxk.rst
@@ -95,9 +95,11 @@  It is only a configuration driver used in control path.
 The :numref:`figure_cnxk_resource_virtualization` diagram also shows a
 resource provisioning example where,
 
-1. PFx and PFx-VF0 bound to Linux netdev driver.
-2. PFx-VF1 ethdev driver bound to the first DPDK application.
-3. PFy ethdev driver, PFy-VF0 ethdev driver, PFz eventdev driver, PFm-VF0 cryptodev driver bound to the second DPDK application.
+#. PFx and PFx-VF0 bound to Linux netdev driver.
+
+#. PFx-VF1 ethdev driver bound to the first DPDK application.
+
+#. PFy ethdev driver, PFy-VF0 ethdev driver, PFz eventdev driver, PFm-VF0 cryptodev driver bound to the second DPDK application.
 
 LBK HW Access
 -------------
@@ -179,7 +181,7 @@  Procedure to Setup Platform
 There are three main prerequisites for setting up DPDK on cnxk
 compatible board:
 
-1. **RVU AF Linux kernel driver**
+#. **RVU AF Linux kernel driver**
 
    The dependent kernel drivers can be obtained from the
    `kernel.org <https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/net/ethernet/marvell/octeontx2>`_.
@@ -188,7 +190,7 @@  compatible board:
 
    Linux kernel should be configured with the following features enabled:
 
-.. code-block:: console
+   .. code-block:: console
 
         # 64K pages enabled for better performance
         CONFIG_ARM64_64K_PAGES=y
@@ -218,7 +220,7 @@  compatible board:
         # Enable if OCTEONTX2 DMA PF driver required
         CONFIG_OCTEONTX2_DPI_PF=n
 
-2. **ARM64 Linux Tool Chain**
+#. **ARM64 Linux Tool Chain**
 
    For example, the *aarch64* Linaro Toolchain, which can be obtained from
    `here <https://releases.linaro.org/components/toolchain/binaries/7.4-2019.02/aarch64-linux-gnu/>`_.
@@ -226,7 +228,7 @@  compatible board:
    Alternatively, the Marvell SDK also provides GNU GCC toolchain, which is
    optimized for cnxk CPU.
 
-3. **Rootfile system**
+#. **Rootfile system**
 
    Any *aarch64* supporting filesystem may be used. For example,
    Ubuntu 15.10 (Wily) or 16.04 LTS (Xenial) userland which can be obtained
@@ -261,11 +263,13 @@  context or stats using debugfs.
 
 Enable ``debugfs`` by:
 
-1. Compile kernel with debugfs enabled, i.e ``CONFIG_DEBUG_FS=y``.
-2. Boot OCTEON CN9K/CN10K with debugfs supported kernel.
-3. Verify ``debugfs`` mounted by default "mount | grep -i debugfs" or mount it manually by using.
+#. Compile kernel with debugfs enabled, i.e ``CONFIG_DEBUG_FS=y``.
 
-.. code-block:: console
+#. Boot OCTEON CN9K/CN10K with debugfs supported kernel.
+
+#. Verify ``debugfs`` mounted by default "mount | grep -i debugfs" or mount it manually by using.
+
+   .. code-block:: console
 
        # mount -t debugfs none /sys/kernel/debug
 
diff --git a/doc/guides/platform/dpaa.rst b/doc/guides/platform/dpaa.rst
index 389692907d..282a2f45ee 100644
--- a/doc/guides/platform/dpaa.rst
+++ b/doc/guides/platform/dpaa.rst
@@ -22,15 +22,15 @@  processors-and-mcus/qoriq-layerscape-arm-processors:QORIQ-ARM>`_.
 Common Offload HW Block Drivers
 -------------------------------
 
-1. **Nics Driver**
+#. **Nics Driver**
 
    See :doc:`../nics/dpaa` for NXP dpaa nic driver information.
 
-2. **Cryptodev Driver**
+#. **Cryptodev Driver**
 
    See :doc:`../cryptodevs/dpaa_sec` for NXP dpaa cryptodev driver information.
 
-3. **Eventdev Driver**
+#. **Eventdev Driver**
 
    See :doc:`../eventdevs/dpaa` for NXP dpaa eventdev driver information.
 
@@ -41,22 +41,22 @@  Steps To Setup Platform
 There are four main pre-requisites for executing DPAA PMD on a DPAA
 compatible board:
 
-1. **ARM 64 Tool Chain**
+#. **ARM 64 Tool Chain**
 
    For example, the `*aarch64* Linaro Toolchain <https://releases.linaro.org/components/toolchain/binaries/7.3-2018.05/aarch64-linux-gnu/gcc-linaro-7.3.1-2018.05-i686_aarch64-linux-gnu.tar.xz>`_.
 
-2. **Linux Kernel**
+#. **Linux Kernel**
 
    It can be obtained from `NXP's Github hosting <https://source.codeaurora.org/external/qoriq/qoriq-components/linux>`_.
 
-3. **Rootfile system**
+#. **Rootfile system**
 
    Any *aarch64* supporting filesystem can be used. For example,
    Ubuntu 16.04 LTS (Xenial) or 18.04 (Bionic) userland which can be obtained
    from `here
    <http://cdimage.ubuntu.com/ubuntu-base/releases/18.04/release/ubuntu-base-18.04.1-base-arm64.tar.gz>`_.
 
-4. **FMC Tool**
+#. **FMC Tool**
 
    If one is planning to use more than 1 Recv queue and hardware capability to
    parse, classify and distribute the packets, the Frame Manager Configuration
diff --git a/doc/guides/platform/dpaa2.rst b/doc/guides/platform/dpaa2.rst
index a9fcad6ca2..2b0d93a976 100644
--- a/doc/guides/platform/dpaa2.rst
+++ b/doc/guides/platform/dpaa2.rst
@@ -24,23 +24,23 @@  processors-and-mcus/qoriq-layerscape-arm-processors:QORIQ-ARM>`_.
 Common Offload HW Block Drivers
 -------------------------------
 
-1. **Nics Driver**
+#. **Nics Driver**
 
    See :doc:`../nics/dpaa2` for NXP dpaa2 nic driver information.
 
-2. **Cryptodev Driver**
+#. **Cryptodev Driver**
 
    See :doc:`../cryptodevs/dpaa2_sec` for NXP dpaa2 cryptodev driver information.
 
-3. **Eventdev Driver**
+#. **Eventdev Driver**
 
    See :doc:`../eventdevs/dpaa2` for NXP dpaa2 eventdev driver information.
 
-4. **Rawdev AIOP CMDIF Driver**
+#. **Rawdev AIOP CMDIF Driver**
 
    See :doc:`../rawdevs/dpaa2_cmdif` for NXP dpaa2 AIOP command interface driver information.
 
-5. **DMA Driver**
+#. **DMA Driver**
 
    See :doc:`../dmadevs/dpaa2` for NXP dpaa2 QDMA driver information.
 
@@ -51,27 +51,27 @@  Steps To Setup Platform
 There are four main pre-requisites for executing DPAA2 PMD on a DPAA2
 compatible board:
 
-1. **ARM 64 Tool Chain**
+#. **ARM 64 Tool Chain**
 
    For example, the `*aarch64* Linaro Toolchain <https://releases.linaro.org/components/toolchain/binaries/7.3-2018.05/aarch64-linux-gnu/gcc-linaro-7.3.1-2018.05-i686_aarch64-linux-gnu.tar.xz>`_.
 
-2. **Linux Kernel**
+#. **Linux Kernel**
 
    It can be obtained from `NXP's Github hosting <https://source.codeaurora.org/external/qoriq/qoriq-components/linux>`_.
 
-3. **Rootfile system**
+#. **Rootfile system**
 
    Any *aarch64* supporting filesystem can be used. For example,
    Ubuntu 16.04 LTS (Xenial) or 18.04 (Bionic) userland which can be obtained
    from `here
    <http://cdimage.ubuntu.com/ubuntu-base/releases/18.04/release/ubuntu-base-18.04.1-base-arm64.tar.gz>`_.
 
-4. **Resource Scripts**
+#. **Resource Scripts**
 
    DPAA2 based resources can be configured easily with the help of ready scripts
    as provided in the DPDK Extra repository.
 
-5. **Build Config**
+#. **Build Config**
 
    Use dpaa build configs, they work for both DPAA2 and DPAA platforms.
 
diff --git a/doc/guides/platform/mlx5.rst b/doc/guides/platform/mlx5.rst
index 73dadd4064..400000e284 100644
--- a/doc/guides/platform/mlx5.rst
+++ b/doc/guides/platform/mlx5.rst
@@ -361,34 +361,34 @@  Sub-Function is a portion of the PCI device,
 it has its own dedicated queues.
 An SF shares PCI-level resources with other SFs and/or with its parent PCI function.
 
-0. Requirement::
+#. Requirement::
 
       MLNX_OFED version >= 5.4-0.3.3.0
 
-1. Configure SF feature::
+#. Configure SF feature::
 
       # Run mlxconfig on both PFs on host and ECPFs on BlueField.
       mlxconfig -d <mst device> set PER_PF_NUM_SF=1 PF_TOTAL_SF=252 PF_SF_BAR_SIZE=12
 
-2. Enable switchdev mode::
+#. Enable switchdev mode::
 
       mlxdevm dev eswitch set pci/<DBDF> mode switchdev
 
-3. Add SF port::
+#. Add SF port::
 
       mlxdevm port add pci/<DBDF> flavour pcisf pfnum 0 sfnum <sfnum>
 
       Get SFID from output: pci/<DBDF>/<SFID>
 
-4. Modify MAC address::
+#. Modify MAC address::
 
       mlxdevm port function set pci/<DBDF>/<SFID> hw_addr <MAC>
 
-5. Activate SF port::
+#. Activate SF port::
 
       mlxdevm port function set pci/<DBDF>/<ID> state active
 
-6. Devargs to probe SF device::
+#. Devargs to probe SF device::
 
       auxiliary:mlx5_core.sf.<num>,class=eth:regex
 
diff --git a/doc/guides/platform/octeontx.rst b/doc/guides/platform/octeontx.rst
index 1459dc7109..b01f51ba4d 100644
--- a/doc/guides/platform/octeontx.rst
+++ b/doc/guides/platform/octeontx.rst
@@ -15,15 +15,15 @@  More information about SoC can be found at `Cavium, Inc Official Website
 Common Offload HW Block Drivers
 -------------------------------
 
-1. **Crypto Driver**
+#. **Crypto Driver**
    See :doc:`../cryptodevs/octeontx` for octeontx crypto driver
    information.
 
-2. **Eventdev Driver**
+#. **Eventdev Driver**
    See :doc:`../eventdevs/octeontx` for octeontx ssovf eventdev driver
    information.
 
-3. **Mempool Driver**
+#. **Mempool Driver**
    See :doc:`../mempool/octeontx` for octeontx fpavf mempool driver
    information.
 
@@ -33,24 +33,24 @@  Steps To Setup Platform
 There are three main pre-prerequisites for setting up Platform drivers on
 OCTEON TX compatible board:
 
-1. **OCTEON TX Linux kernel PF driver for Network acceleration HW blocks**
+#. **OCTEON TX Linux kernel PF driver for Network acceleration HW blocks**
 
    The OCTEON TX Linux kernel drivers (includes the required PF driver for the
    Platform drivers) are available on Github at `octeontx-kmod <https://github.com/caviumnetworks/octeontx-kmod>`_
    along with build, install and dpdk usage instructions.
 
-.. note::
+   .. note::
 
-   The PF driver and the required microcode for the crypto offload block will be
-   available with OCTEON TX SDK only. So for using crypto offload, follow the steps
-   mentioned in :ref:`setup_platform_using_OCTEON_TX_SDK`.
+      The PF driver and the required microcode for the crypto offload block will be
+      available with OCTEON TX SDK only. So for using crypto offload, follow the steps
+      mentioned in :ref:`setup_platform_using_OCTEON_TX_SDK`.
 
-2. **ARM64 Tool Chain**
+#. **ARM64 Tool Chain**
 
    For example, the *aarch64* Linaro Toolchain, which can be obtained from
    `here <https://releases.linaro.org/components/toolchain/binaries/4.9-2017.01/aarch64-linux-gnu>`_.
 
-3. **Rootfile system**
+#. **Rootfile system**
 
    Any *aarch64* supporting filesystem can be used. For example,
    Ubuntu 15.10 (Wily) or 16.04 LTS (Xenial) userland which can be obtained
@@ -60,7 +60,7 @@  OCTEON TX compatible board:
    as part of SDK from Cavium. The SDK includes all the above prerequisites necessary
    to bring up a OCTEON TX board. Please refer :ref:`setup_platform_using_OCTEON_TX_SDK`.
 
-- Follow the DPDK :doc:`../linux_gsg/index` to setup the basic DPDK environment.
+#. Follow the DPDK :doc:`../linux_gsg/index` to setup the basic DPDK environment.
 
 .. _setup_platform_using_OCTEON_TX_SDK:
 
diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index 6debf54efb..9559c12a98 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -807,15 +807,15 @@  Known Issues
 
   This means, use cases involving preemptible pthreads should consider using rte_ring carefully.
 
-  1. It CAN be used for preemptible single-producer and single-consumer use case.
+  #. It CAN be used for preemptible single-producer and single-consumer use case.
 
-  2. It CAN be used for non-preemptible multi-producer and preemptible single-consumer use case.
+  #. It CAN be used for non-preemptible multi-producer and preemptible single-consumer use case.
 
-  3. It CAN be used for preemptible single-producer and non-preemptible multi-consumer use case.
+  #. It CAN be used for preemptible single-producer and non-preemptible multi-consumer use case.
 
-  4. It MAY be used by preemptible multi-producer and/or preemptible multi-consumer pthreads whose scheduling policy are all SCHED_OTHER(cfs), SCHED_IDLE or SCHED_BATCH. User SHOULD be aware of the performance penalty before using it.
+  #. It MAY be used by preemptible multi-producer and/or preemptible multi-consumer pthreads whose scheduling policy are all SCHED_OTHER(cfs), SCHED_IDLE or SCHED_BATCH. User SHOULD be aware of the performance penalty before using it.
 
-  5. It MUST not be used by multi-producer/consumer pthreads, whose scheduling policies are SCHED_FIFO or SCHED_RR.
+  #. It MUST not be used by multi-producer/consumer pthreads, whose scheduling policies are SCHED_FIFO or SCHED_RR.
 
   Alternatively, applications can use the lock-free stack mempool handler. When
   considering this handler, note that:
diff --git a/doc/guides/prog_guide/graph_lib.rst b/doc/guides/prog_guide/graph_lib.rst
index 96cff9ccc7..ad09bdfe26 100644
--- a/doc/guides/prog_guide/graph_lib.rst
+++ b/doc/guides/prog_guide/graph_lib.rst
@@ -346,31 +346,32 @@  handling where every packet could be going to different next node.
 
 Example of intermediate node implementation with home run:
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-1. Start with speculation that next_node = node->ctx.
-This could be the next_node application used in the previous function call of this node.
 
-2. Get the next_node stream array with required space using
-``rte_node_next_stream_get(next_node, space)``.
+#. Start with speculation that next_node = node->ctx.
+   This could be the next_node application used in the previous function call of this node.
 
-3. while n_left_from > 0 (i.e packets left to be sent) prefetch next pkt_set
-and process current pkt_set to find their next node
+#. Get the next_node stream array with required space using
+   ``rte_node_next_stream_get(next_node, space)``.
 
-4. if all the next nodes of the current pkt_set match speculated next node,
-just count them as successfully speculated(``last_spec``) till now and
-continue the loop without actually moving them to the next node. else if there is
-a mismatch, copy all the pkt_set pointers that were ``last_spec`` and move the
-current pkt_set to their respective next's nodes using ``rte_enqueue_next_x1()``.
-Also, one of the next_node can be updated as speculated next_node if it is more
-probable. Finally, reset ``last_spec`` to zero.
+#. while n_left_from > 0 (i.e packets left to be sent) prefetch next pkt_set
+   and process current pkt_set to find their next node
 
-5. if n_left_from != 0 then goto 3) to process remaining packets.
+#. if all the next nodes of the current pkt_set match speculated next node,
+   just count them as successfully speculated(``last_spec``) till now and
+   continue the loop without actually moving them to the next node. else if there is
+   a mismatch, copy all the pkt_set pointers that were ``last_spec`` and move the
+   current pkt_set to their respective next's nodes using ``rte_enqueue_next_x1()``.
+   Also, one of the next_node can be updated as speculated next_node if it is more
+   probable. Finally, reset ``last_spec`` to zero.
 
-6. if last_spec == nb_objs, All the objects passed were successfully speculated
-to single next node. So, the current stream can be moved to next node using
-``rte_node_next_stream_move(node, next_node)``.
-This is the ``home run`` where memcpy of buffer pointers to next node is avoided.
+#. if n_left_from != 0 then goto 3) to process remaining packets.
 
-7. Update the ``node->ctx`` with more probable next node.
+#. if last_spec == nb_objs, All the objects passed were successfully speculated
+   to single next node. So, the current stream can be moved to next node using
+   ``rte_node_next_stream_move(node, next_node)``.
+   This is the ``home run`` where memcpy of buffer pointers to next node is avoided.
+
+#. Update the ``node->ctx`` with more probable next node.
 
 Graph object memory layout
 --------------------------
diff --git a/doc/guides/prog_guide/rawdev.rst b/doc/guides/prog_guide/rawdev.rst
index 488e0a7ef6..07a2c4e73c 100644
--- a/doc/guides/prog_guide/rawdev.rst
+++ b/doc/guides/prog_guide/rawdev.rst
@@ -13,11 +13,13 @@  In terms of device flavor (type) support, DPDK currently has ethernet
 
 For a new type of device, for example an accelerator, there are not many
 options except:
-1. create another lib/MySpecialDev, driver/MySpecialDrv and use it
-through Bus/PMD model.
-2. Or, create a vdev and implement necessary custom APIs which are directly
-exposed from driver layer. However this may still require changes in bus code
-in DPDK.
+
+#. create another lib/MySpecialDev, driver/MySpecialDrv and use it
+   through Bus/PMD model.
+
+#. Or, create a vdev and implement necessary custom APIs which are directly
+   exposed from driver layer. However this may still require changes in bus code
+   in DPDK.
 
 The DPDK Rawdev library is an abstraction that provides the DPDK framework a
 way to manage such devices in a generic manner without expecting changes to
@@ -30,19 +32,19 @@  Design
 
 Key factors guiding design of the Rawdevice library:
 
-1. Following are some generic operations which can be treated as applicable
+#. Following are some generic operations which can be treated as applicable
    to a large subset of device types. None of the operations are mandatory to
    be implemented by a driver. Application should also be designed for proper
    handling for unsupported APIs.
 
-  * Device Start/Stop - In some cases, 'reset' might also be required which
-    has different semantics than a start-stop-start cycle.
-  * Configuration - Device, Queue or any other sub-system configuration
-  * I/O - Sending a series of buffers which can enclose any arbitrary data
-  * Statistics - Fetch arbitrary device statistics
-  * Firmware Management - Firmware load/unload/status
+   * Device Start/Stop - In some cases, 'reset' might also be required which
+     has different semantics than a start-stop-start cycle.
+   * Configuration - Device, Queue or any other sub-system configuration
+   * I/O - Sending a series of buffers which can enclose any arbitrary data
+   * Statistics - Fetch arbitrary device statistics
+   * Firmware Management - Firmware load/unload/status
 
-2. Application API should be able to pass along arbitrary state information
+#. Application API should be able to pass along arbitrary state information
    to/from device driver. This can be achieved by maintaining context
    information through opaque data or pointers.
 
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index b5d4b0e929..627b845bfb 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3987,17 +3987,17 @@  Flow rules management can be done via special lockless flow management queues.
 
 The asynchronous flow rule insertion logic can be broken into two phases.
 
-1. Initialization stage as shown here:
+#. Initialization stage as shown here:
 
-.. _figure_rte_flow_async_init:
+   .. _figure_rte_flow_async_init:
 
-.. figure:: img/rte_flow_async_init.*
+   .. figure:: img/rte_flow_async_init.*
 
-2. Main loop as presented on a datapath application example:
+#. Main loop as presented on a datapath application example:
 
-.. _figure_rte_flow_async_usage:
+   .. _figure_rte_flow_async_usage:
 
-.. figure:: img/rte_flow_async_usage.*
+   .. figure:: img/rte_flow_async_usage.*
 
 Enqueue creation operation
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/prog_guide/stack_lib.rst b/doc/guides/prog_guide/stack_lib.rst
index 3097cab0c2..975d3ad796 100644
--- a/doc/guides/prog_guide/stack_lib.rst
+++ b/doc/guides/prog_guide/stack_lib.rst
@@ -75,10 +75,12 @@  compare-and-swap instruction to atomically update both the stack top pointer
 and a modification counter. The ABA problem can occur without a modification
 counter if, for example:
 
-1. Thread A reads head pointer X and stores the pointed-to list element.
-2. Other threads modify the list such that the head pointer is once again X,
+#. Thread A reads head pointer X and stores the pointed-to list element.
+
+#. Other threads modify the list such that the head pointer is once again X,
    but its pointed-to data is different than what thread A read.
-3. Thread A changes the head pointer with a compare-and-swap and succeeds.
+
+#. Thread A changes the head pointer with a compare-and-swap and succeeds.
 
 In this case thread A would not detect that the list had changed, and would
 both pop stale data and incorrect change the head pointer. By adding a
diff --git a/doc/guides/prog_guide/trace_lib.rst b/doc/guides/prog_guide/trace_lib.rst
index e5718feddc..d9b17abe90 100644
--- a/doc/guides/prog_guide/trace_lib.rst
+++ b/doc/guides/prog_guide/trace_lib.rst
@@ -175,13 +175,13 @@  events.
 
 There are many tools you can use to read DPDK traces:
 
-1. ``babeltrace`` is a command-line utility that converts trace formats; it
-supports the format that DPDK trace library produces, CTF, as well as a
-basic text output that can be grep'ed.
-The babeltrace command is part of the Open Source Babeltrace project.
+#. ``babeltrace`` is a command-line utility that converts trace formats; it
+   supports the format that DPDK trace library produces, CTF, as well as a
+   basic text output that can be grep'ed.
+   The babeltrace command is part of the Open Source Babeltrace project.
 
-2. ``Trace Compass`` is a graphical user interface for viewing and analyzing
-any type of logs or traces, including DPDK traces.
+#. ``Trace Compass`` is a graphical user interface for viewing and analyzing
+   any type of logs or traces, including DPDK traces.
 
 Use the babeltrace command-line tool
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/rawdevs/ifpga.rst b/doc/guides/rawdevs/ifpga.rst
index 41877aeddf..c0901ddeae 100644
--- a/doc/guides/rawdevs/ifpga.rst
+++ b/doc/guides/rawdevs/ifpga.rst
@@ -244,13 +244,14 @@  through partitioning of individual dedicated resources, or virtualization of
 shared resources. OFS provides several models to share the AFU resources via
 PR mechanism and hardware-based virtualization schemes.
 
-1. Legacy model.
+#. Legacy model.
    With legacy model FPGA cards like Intel PAC N3000 or N5000, there is
    a notion that the boundary between the AFU and the shell is also the unit of
    PR for those FPGA platforms. This model is only able to handle a
    single context, because it only has one PR engine, and one PR region which
    has an associated Port device.
-2. Multiple VFs per PR slot.
+
+#. Multiple VFs per PR slot.
    In this model, available AFU resources may allow instantiation of many VFs
    which have a dedicated PCIe function with their own dedicated MMIO space, or
    partition a region of MMIO space on a single PCIe function. Intel PAC N6000
diff --git a/doc/guides/sample_app_ug/ip_pipeline.rst b/doc/guides/sample_app_ug/ip_pipeline.rst
index f30ac5e19d..ff5ee67ec2 100644
--- a/doc/guides/sample_app_ug/ip_pipeline.rst
+++ b/doc/guides/sample_app_ug/ip_pipeline.rst
@@ -127,9 +127,9 @@  The main thread is creating and managing all the application objects based on CL
 Each data plane thread runs one or several pipelines previously assigned to it in round-robin order. Each data plane thread
 executes two tasks in time-sharing mode:
 
-1. *Packet processing task*: Process bursts of input packets read from the pipeline input ports.
+#. *Packet processing task*: Process bursts of input packets read from the pipeline input ports.
 
-2. *Message handling task*: Periodically, the data plane thread pauses the packet processing task and polls for request
+#. *Message handling task*: Periodically, the data plane thread pauses the packet processing task and polls for request
    messages send by the main thread. Examples: add/remove pipeline to/from current data plane thread, add/delete rules
    to/from given table of a specific pipeline owned by the current data plane thread, read statistics, etc.
 
diff --git a/doc/guides/sample_app_ug/pipeline.rst b/doc/guides/sample_app_ug/pipeline.rst
index 7c86bf484a..58ed0d296a 100644
--- a/doc/guides/sample_app_ug/pipeline.rst
+++ b/doc/guides/sample_app_ug/pipeline.rst
@@ -111,8 +111,8 @@  The main thread is creating and managing all the application objects based on CL
 Each data plane thread runs one or several pipelines previously assigned to it in round-robin order. Each data plane thread
 executes two tasks in time-sharing mode:
 
-1. *Packet processing task*: Process bursts of input packets read from the pipeline input ports.
+#. *Packet processing task*: Process bursts of input packets read from the pipeline input ports.
 
-2. *Message handling task*: Periodically, the data plane thread pauses the packet processing task and polls for request
+#. *Message handling task*: Periodically, the data plane thread pauses the packet processing task and polls for request
    messages send by the main thread. Examples: add/remove pipeline to/from current data plane thread, add/delete rules
    to/from given table of a specific pipeline owned by the current data plane thread, read statistics, etc.
diff --git a/doc/guides/sample_app_ug/vdpa.rst b/doc/guides/sample_app_ug/vdpa.rst
index 6b6de53e48..5a71a70e37 100644
--- a/doc/guides/sample_app_ug/vdpa.rst
+++ b/doc/guides/sample_app_ug/vdpa.rst
@@ -40,11 +40,15 @@  where
   (n starts from 0).
 * --interactive means run the vdpa sample in interactive mode:
 
-  1. help: show help message
-  2. list: list all available vdpa devices
-  3. create: create a new vdpa port with socket file and vdpa device address
-  4. stats: show statistics of virtio queues
-  5. quit: unregister vhost driver and exit the application
+  #. help: show help message
+
+  #. list: list all available vdpa devices
+
+  #. create: create a new vdpa port with socket file and vdpa device address
+
+  #. stats: show statistics of virtio queues
+
+  #. quit: unregister vhost driver and exit the application
 
 Take IFCVF driver for example:
 
@@ -100,21 +104,21 @@  vDPA supports cross-backend live migration, user can migrate SW vhost backend
 VM to vDPA backend VM and vice versa. Here are the detailed steps. Assume A is
 the source host with SW vhost VM and B is the destination host with vDPA.
 
-1. Start vdpa sample and launch a VM with exact same parameters as the VM on A,
+#. Start vdpa sample and launch a VM with exact same parameters as the VM on A,
    in migration-listen mode:
 
-.. code-block:: console
+   .. code-block:: console
 
         B: <qemu-command-line> -incoming tcp:0:4444 (or other PORT))
 
-2. Start the migration (on source host):
+#. Start the migration (on source host):
 
-.. code-block:: console
+   .. code-block:: console
 
         A: (qemu) migrate -d tcp:<B ip>:4444 (or other PORT)
 
-3. Check the status (on source host):
+#. Check the status (on source host):
 
-.. code-block:: console
+   .. code-block:: console
 
         A: (qemu) info migrate
diff --git a/doc/guides/windows_gsg/run_apps.rst b/doc/guides/windows_gsg/run_apps.rst
index 08f110d0b5..2584144c4c 100644
--- a/doc/guides/windows_gsg/run_apps.rst
+++ b/doc/guides/windows_gsg/run_apps.rst
@@ -10,16 +10,16 @@  Grant *Lock pages in memory* Privilege
 Use of hugepages ("large pages" in Windows terminology) requires
 ``SeLockMemoryPrivilege`` for the user running an application.
 
-1. Open *Local Security Policy* snap-in, either:
+#. Open *Local Security Policy* snap-in, either:
 
    * Control Panel / Computer Management / Local Security Policy;
    * or Win+R, type ``secpol``, press Enter.
 
-2. Open *Local Policies / User Rights Assignment / Lock pages in memory.*
+#. Open *Local Policies / User Rights Assignment / Lock pages in memory.*
 
-3. Add desired users or groups to the list of grantees.
+#. Add desired users or groups to the list of grantees.
 
-4. Privilege is applied upon next logon. In particular, if privilege has been
+#. Privilege is applied upon next logon. In particular, if privilege has been
    granted to current user, a logoff is required before it is available.
 
 See `Large-Page Support`_ in MSDN for details.