[RFC] ethdev: introduce shared Rx queue

Message ID 20210727034204.20649-1-xuemingl@nvidia.com (mailing list archive)
State Superseded, archived
Delegated to: Ferruh Yigit
Series: [RFC] ethdev: introduce shared Rx queue

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation success Compilation OK
ci/intel-Testing success Testing PASS

Commit Message

Xueming Li July 27, 2021, 3:42 a.m. UTC
  In the eth PMD driver model, each Rx queue is pre-loaded with mbufs
for storing incoming packets. When the number of SFs or VFs scales out
in a switch domain, the memory consumption becomes significant. Most
importantly, polling all ports leads to high cache miss rates, high
latency and low throughput.

To save memory and speed up polling, this patch introduces a shared Rx
queue. Ports with the same configuration in a switch domain can share
an Rx queue set by specifying the offload flag
RTE_ETH_RX_OFFLOAD_SHARED_RXQ. Polling a member port of a shared Rx
queue receives packets for all member ports. The source port is
identified by mbuf->port.

The number of queues of all ports in a shared group should be
identical. Queue indexes are 1:1 mapped within a shared group.

A shared Rx queue is supposed to be polled on the same thread.

Multiple groups are supported via group ID.
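
A minimal usage sketch against this v1 API (the
RTE_ETH_RX_OFFLOAD_SHARED_RXQ flag plus the shared_group field added by
the patch below); the port, queue and descriptor numbers are
illustrative and error handling is trimmed:

#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Configure queue 0 of one member port as part of shared group 1.
 * Assumes the port was already configured with rte_eth_dev_configure()
 * and that the PMD accepts RTE_ETH_RX_OFFLOAD_SHARED_RXQ.
 */
static int
setup_shared_rxq(uint16_t port_id, struct rte_mempool *mp)
{
	struct rte_eth_dev_info info;
	struct rte_eth_rxconf rxconf;

	if (rte_eth_dev_info_get(port_id, &info) != 0)
		return -1;
	rxconf = info.default_rxconf;
	rxconf.offloads |= RTE_ETH_RX_OFFLOAD_SHARED_RXQ;
	rxconf.shared_group = 1; /* all member ports join group 1 */
	/* queue index 0 maps 1:1 to shared queue 0 of the group */
	return rte_eth_rx_queue_setup(port_id, 0, 512, rte_socket_id(),
				      &rxconf, mp);
}

/* Poll one member port: packets of all member ports show up here and
 * the real ingress port is recovered from mbuf->port.
 */
static void
poll_one_member(uint16_t member_port)
{
	struct rte_mbuf *pkts[32];
	uint16_t nb, i;

	nb = rte_eth_rx_burst(member_port, 0, pkts, RTE_DIM(pkts));
	for (i = 0; i < nb; i++) {
		uint16_t src_port = pkts[i]->port; /* original source port */

		RTE_SET_USED(src_port); /* demultiplex/forward here */
		rte_pktmbuf_free(pkts[i]);
	}
}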

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
---
 lib/ethdev/rte_ethdev.c | 1 +
 lib/ethdev/rte_ethdev.h | 7 +++++++
 2 files changed, 8 insertions(+)
  

Comments

Andrew Rybchenko July 28, 2021, 7:56 a.m. UTC | #1
On 7/27/21 6:42 AM, Xueming Li wrote:
> In the eth PMD driver model, each Rx queue is pre-loaded with mbufs
> for storing incoming packets. When the number of SFs or VFs scales out
> in a switch domain, the memory consumption becomes significant. Most
> importantly, polling all ports leads to high cache miss rates, high
> latency and low throughput.
> 
> To save memory and speed up polling, this patch introduces a shared Rx
> queue. Ports with the same configuration in a switch domain can share
> an Rx queue set by specifying the offload flag
> RTE_ETH_RX_OFFLOAD_SHARED_RXQ. Polling a member port of a shared Rx
> queue receives packets for all member ports. The source port is
> identified by mbuf->port.
> 
> The number of queues of all ports in a shared group should be
> identical. Queue indexes are 1:1 mapped within a shared group.
> 
> A shared Rx queue is supposed to be polled on the same thread.
> 
> Multiple groups are supported via group ID.
> 
> Signed-off-by: Xueming Li <xuemingl@nvidia.com>

It looks like it could be useful for artificial benchmarks, but
absolutely useless in real life. SFs and VFs are used by VMs
(or containers?) to have their own part of the HW. If so, SF or VF
Rx and Tx queues live in a VM and cannot be shared.

Sharing makes sense for representors, but it is not mentioned in
the description.
  
Xueming Li July 28, 2021, 8:20 a.m. UTC | #2
Hi Andrew,

> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Wednesday, July 28, 2021 3:57 PM
> To: Xueming(Steven) Li <xuemingl@nvidia.com>
> Cc: dev@dpdk.org; Slava Ovsiienko <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Ferruh
> Yigit <ferruh.yigit@intel.com>
> Subject: Re: [RFC] ethdev: introduce shared Rx queue
> 
> On 7/27/21 6:42 AM, Xueming Li wrote:
> > In the eth PMD driver model, each Rx queue is pre-loaded with mbufs
> > for storing incoming packets. When the number of SFs or VFs scales out
> > in a switch domain, the memory consumption becomes significant. Most
> > importantly, polling all ports leads to high cache miss rates, high
> > latency and low throughput.
> >
> > To save memory and speed up polling, this patch introduces a shared Rx
> > queue. Ports with the same configuration in a switch domain can share
> > an Rx queue set by specifying the offload flag RTE_ETH_RX_OFFLOAD_SHARED_RXQ.
> > Polling a member port of a shared Rx queue receives packets for all member ports.
> > The source port is identified by mbuf->port.
> >
> > The number of queues of all ports in a shared group should be
> > identical. Queue indexes are 1:1 mapped within a shared group.
> >
> > A shared Rx queue is supposed to be polled on the same thread.
> >
> > Multiple groups are supported via group ID.
> >
> > Signed-off-by: Xueming Li <xuemingl@nvidia.com>
> 
> It looks like it could be useful for artificial benchmarks, but absolutely useless in real life. SFs and VFs are used by VMs (or containers?)
> to have their own part of the HW. If so, SF or VF Rx and Tx queues live in a VM and cannot be shared.

Thanks for looking at this! Agree, SF and VF can't be shared.

> 
> Sharing makes sense for representors, but it is not mentioned in the description.

Yes, the major target is representors, i.e. ports in the same switch domain. I'll emphasize this in the next version.
  
Xueming Li Oct. 18, 2021, 12:59 p.m. UTC | #3
In the current DPDK framework, each Rx queue is pre-loaded with mbufs
for incoming packets. When the number of representors scales out in a
switch domain, the memory consumption becomes significant. Furthermore,
polling all ports leads to high cache miss rates, high latency and low
throughput.

This patch introduces the shared Rx queue. A PF and representors in the
same Rx domain and switch domain can share an Rx queue set by
specifying a non-zero share group value in the Rx queue configuration.

All ports that share an Rx queue actually share the hardware descriptor
queue and feed all Rx queues from a single descriptor supply, so memory
is saved.

Polling any queue that uses the same shared Rx queue receives packets
from all member ports. The source port is identified by mbuf->port.

Multiple groups are supported via group ID. The number of queues per
port in a shared group should be identical. Queue indexes are 1:1
mapped within a shared group. An example of two share groups (a
configuration sketch follows the example):
 Group1, 4 shared Rx queues per member port: PF, repr0, repr1
 Group2, 2 shared Rx queues per member port: repr2, repr3, ... repr127
 Poll first port for each group:
  core	port	queue
  0	0	0
  1	0	1
  2	0	2
  3	0	3
  4	2	0
  5	2	1
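
A minimal configuration sketch for the layout above, reusing the
headers from the earlier sketch. The share_group and share_qid field
names follow the later revisions listed in the changelog (the v1 patch
below still uses a single shared_group field); port ids, descriptor
count and error handling are simplified:

/* Put queues 0..nb_q-1 of one member port into shared group 'group'.
 * A zero share_group keeps the queue private as usual.
 */
static int
join_share_group(uint16_t port_id, uint16_t nb_q, uint16_t group,
		 struct rte_mempool *mp)
{
	struct rte_eth_rxconf rxconf = {0};
	uint16_t q;

	rxconf.share_group = group;
	for (q = 0; q < nb_q; q++) {
		rxconf.share_qid = q; /* 1:1 queue index mapping */
		if (rte_eth_rx_queue_setup(port_id, q, 512,
					   rte_socket_id(), &rxconf, mp) != 0)
			return -1;
	}
	return 0;
}

/* Group 1: PF (port 0), repr0 (port 1), repr1 (port 2), 4 queues each:
 *   join_share_group(0, 4, 1, mp); join_share_group(1, 4, 1, mp); ...
 * Group 2: repr2 .. repr127, 2 queues each:
 *   join_share_group(3, 2, 2, mp); ...
 */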

A shared Rx queue must be polled on a single thread or core. If both
PF0 and representor0 have joined the same share group, pf0rxq0 cannot
be polled on core1 while rep0rxq0 is polled on core2. In practice,
polling one port within a share group is sufficient, since polling any
port in the group returns packets for every port in the group; a
polling sketch follows.
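
A sketch of a per-lcore polling loop following the core/port/queue
table above (headers as in the earlier sketch; the lcore-to-row mapping
and the port id of repr2 are assumptions made for illustration):

struct poll_slot { uint16_t port; uint16_t queue; };

/* One row per polling core: group 1 is drained through PF port 0,
 * group 2 through repr2 (port id assumed to be 2). Each shared Rx
 * queue is owned by exactly one core.
 */
static const struct poll_slot slots[] = {
	{0, 0}, {0, 1}, {0, 2}, {0, 3},	/* cores 0-3, group 1 */
	{2, 0}, {2, 1},			/* cores 4-5, group 2 */
};

/* Launched on each worker lcore with rte_eal_remote_launch(). */
static int
shared_rxq_poll_loop(void *arg __rte_unused)
{
	unsigned int lcore = rte_lcore_id();
	struct rte_mbuf *pkts[32];
	const struct poll_slot *s;
	uint16_t nb, i;

	if (lcore >= RTE_DIM(slots))
		return 0;
	s = &slots[lcore];
	for (;;) {
		nb = rte_eth_rx_burst(s->port, s->queue, pkts, RTE_DIM(pkts));
		for (i = 0; i < nb; i++) {
			/* pkts[i]->port identifies the member port that
			 * actually received the packet; demux/forward here.
			 */
			rte_pktmbuf_free(pkts[i]);
		}
	}
	return 0;
}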

There was some discussion about aggregating the member ports of a group
into a dummy port, and there are several ways to achieve it. Since it
is optional, more feedback and requirements need to be collected from
users before making a decision.

v1:
  - initial version
v2:
  - add testpmd patches
v3:
  - change the common forwarding API to a macro for performance, thanks Jerin.
  - save global variables accessed in forwarding into flowstream to
    minimize cache misses
  - combined patches for each forwarding engine
  - support multiple groups in testpmd "--share-rxq" parameter
  - new api to aggregate shared rxq group
v4:
  - spelling fixes
  - remove shared-rxq support for all forwarding engines
  - add dedicated shared-rxq forwarding engine
v5:
 - fix grammar
 - remove aggregate api and leave it for later discussion
 - add release notes
 - add deployment example
v6:
 - replace RxQ offload flag with device offload capability flag
 - add Rx domain
 - RxQ is shared when share group > 0
 - update testpmd accordingly
v7:
 - fix testpmd share group id allocation
 - change rx_domain to 16 bits
v8:
 - add new patch for testpmd to show device Rx domain ID and capability
 - new share_qid in RxQ configuration
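
For the v6+ variant described above, a hedged sketch of how an
application might probe whether a port can participate, assuming the
capability is exposed as RTE_ETH_DEV_CAPA_RXQ_SHARE in
dev_info.dev_capa and the Rx domain as switch_info.rx_domain; both
names are inferred from the changelog and may differ in the final API:

/* Return 1 if the port advertises shared-Rxq support and report which
 * switch domain and Rx domain it belongs to; ports may only share a
 * queue when both domains match.
 */
static int
port_can_share_rxq(uint16_t port_id, uint16_t *domain_id,
		   uint16_t *rx_domain)
{
	struct rte_eth_dev_info info;

	if (rte_eth_dev_info_get(port_id, &info) != 0)
		return 0;
	if ((info.dev_capa & RTE_ETH_DEV_CAPA_RXQ_SHARE) == 0)
		return 0;
	*domain_id = info.switch_info.domain_id;
	*rx_domain = info.switch_info.rx_domain;
	return 1;
}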

Xueming Li (6):
  ethdev: introduce shared Rx queue
  app/testpmd: dump device capability and Rx domain info
  app/testpmd: new parameter to enable shared Rx queue
  app/testpmd: dump port info for shared Rx queue
  app/testpmd: force shared Rx queue polled on same core
  app/testpmd: add forwarding engine for shared Rx queue

 app/test-pmd/config.c                         | 114 +++++++++++++-
 app/test-pmd/meson.build                      |   1 +
 app/test-pmd/parameters.c                     |  13 ++
 app/test-pmd/shared_rxq_fwd.c                 | 148 ++++++++++++++++++
 app/test-pmd/testpmd.c                        |  25 ++-
 app/test-pmd/testpmd.h                        |   5 +
 app/test-pmd/util.c                           |   3 +
 doc/guides/nics/features.rst                  |  13 ++
 doc/guides/nics/features/default.ini          |   1 +
 .../prog_guide/switch_representation.rst      |  11 ++
 doc/guides/rel_notes/release_21_11.rst        |   6 +
 doc/guides/testpmd_app_ug/run_app.rst         |   8 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst   |   5 +-
 lib/ethdev/rte_ethdev.c                       |   8 +
 lib/ethdev/rte_ethdev.h                       |  24 +++
 15 files changed, 379 insertions(+), 6 deletions(-)
 create mode 100644 app/test-pmd/shared_rxq_fwd.c
  

Patch

diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index a1106f5896..632a0e890b 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -127,6 +127,7 @@  static const struct {
 	RTE_RX_OFFLOAD_BIT2STR(OUTER_UDP_CKSUM),
 	RTE_RX_OFFLOAD_BIT2STR(RSS_HASH),
 	RTE_ETH_RX_OFFLOAD_BIT2STR(BUFFER_SPLIT),
+	RTE_ETH_RX_OFFLOAD_BIT2STR(SHARED_RXQ),
 };
 
 #undef RTE_RX_OFFLOAD_BIT2STR
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index d2b27c351f..5c63751be0 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1047,6 +1047,7 @@  struct rte_eth_rxconf {
 	uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */
 	uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
 	uint16_t rx_nseg; /**< Number of descriptions in rx_seg array. */
+	uint32_t shared_group; /**< Shared port group index in switch domain. */
 	/**
 	 * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
 	 * Only offloads set on rx_queue_offload_capa or rx_offload_capa
@@ -1373,6 +1374,12 @@  struct rte_eth_conf {
 #define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM  0x00040000
 #define DEV_RX_OFFLOAD_RSS_HASH		0x00080000
 #define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT 0x00100000
+/**
+ * RXQ is shared within ports in switch domain to save memory and avoid
+ * polling every port. Any port in group could be used to receive packets.
+ * Real source port number saved in mbuf->port field.
+ */
+#define RTE_ETH_RX_OFFLOAD_SHARED_RXQ   0x00200000
 
 #define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
 				 DEV_RX_OFFLOAD_UDP_CKSUM | \