[dpdk-dev,v4] net/bonding: reduce slave starvation on rx poll

Message ID 20170321151212.53854-1-keith.wiles@intel.com (mailing list archive)
State Accepted, archived
Delegated to: Ferruh Yigit

Checks

Context Check Description
ci/Intel-compilation success Compilation OK
ci/checkpatch success coding style OK

Commit Message

Wiles, Keith March 21, 2017, 3:12 p.m. UTC
  When polling the bonded ports for RX packets the old driver would
always start with the first slave in the list. If the requested
number of packets is filled on the first port in a two-port config,
then the second port could be starved or see a larger number of
missed-packet errors.

The code attempts to start with a different slave on each RX poll
to help eliminate starvation of slave ports. The effect of the
previous code was much lower performance with two slaves in the
bond than with just one slave.

The performance drop was detected when the application cannot poll
the RX rings fast enough and the packets per second for two or more
ports is at the threshold throughput of the application. At this
threshold a single slave would see very few or no drops. Enabling
the second slave would then show a large drop rate on the two-slave
bond and a reduction in throughput.

Signed-off-by: Keith Wiles <keith.wiles@intel.com>
---
v4 - fix saving of next slave to reduce skipping a slave.
v3 - remove more checkpatch errors
v2 - remove checkpatch errors

 drivers/net/bonding/rte_eth_bond_pmd.c     | 21 +++++++++++++++------
 drivers/net/bonding/rte_eth_bond_private.h |  3 ++-
 2 files changed, 17 insertions(+), 7 deletions(-)
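
The rotation described in the commit message can be sketched outside the
driver. Below is a minimal, self-contained example (plain C, not DPDK
code) of the same technique: a persistent start index is advanced on
every poll so the first slave can no longer monopolize a burst. The names
bond_state, fake_rx_burst and bond_rx_poll are invented for the
illustration and do not exist in the bonding PMD; fake_rx_burst stands in
for rte_eth_rx_burst() and always reports a full burst.

#include <stdint.h>
#include <stdio.h>

#define NUM_SLAVES 2
#define BURST_SIZE 4

/* Hypothetical per-bond state; mirrors the idea behind the
 * internals->active_slave field added by the patch. */
struct bond_state {
	uint8_t active_slave;	/* slave to start polling from next time */
	uint8_t slave_count;
};

/* Stand-in for rte_eth_rx_burst(): pretend the slave always has a
 * full burst of packets ready. */
static uint16_t
fake_rx_burst(uint8_t slave_id, uint16_t nb_pkts)
{
	printf("  polled slave %u for up to %u packets\n", slave_id, nb_pkts);
	return nb_pkts;	/* always "fills" the request */
}

/* Poll the slaves round-robin, starting from a different slave on each
 * call, so the first slave cannot permanently starve the others. */
static uint16_t
bond_rx_poll(struct bond_state *s, uint16_t nb_pkts)
{
	uint16_t num_rx_total = 0;
	uint8_t idx = s->active_slave;
	uint8_t i;

	if (idx >= s->slave_count)
		idx = 0;

	for (i = 0; i < s->slave_count && num_rx_total < nb_pkts; i++) {
		num_rx_total += fake_rx_burst(idx, nb_pkts - num_rx_total);
		if (++idx == s->slave_count)
			idx = 0;
	}

	/* Remember where to start on the next poll. */
	s->active_slave = idx;
	return num_rx_total;
}

int
main(void)
{
	struct bond_state s = { .active_slave = 0, .slave_count = NUM_SLAVES };
	int poll;

	for (poll = 0; poll < 4; poll++) {
		printf("poll %d:\n", poll);
		bond_rx_poll(&s, BURST_SIZE);
	}
	return 0;
}

With the pre-patch behaviour of always starting at slave 0, slave 1 in
this toy setup would never be polled because slave 0 fills every burst;
with the rotation the two slaves are polled on alternating calls, which
is exactly the starvation scenario the commit message describes.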
  

Comments

Ferruh Yigit March 21, 2017, 4:03 p.m. UTC | #1
On 3/21/2017 3:12 PM, Keith Wiles wrote:
> When polling the bonded ports for RX packets the old driver would
> always start with the first slave in the list. If the requested
> number of packets is filled on the first port in a two-port config,
> then the second port could be starved or see a larger number of
> missed-packet errors.
> 
> The code attempts to start with a different slave on each RX poll
> to help eliminate starvation of slave ports. The effect of the
> previous code was much lower performance with two slaves in the
> bond than with just one slave.
> 
> The performance drop was detected when the application cannot poll
> the RX rings fast enough and the packets per second for two or more
> ports is at the threshold throughput of the application. At this
> threshold a single slave would see very few or no drops. Enabling
> the second slave would then show a large drop rate on the two-slave
> bond and a reduction in throughput.
> 
> Signed-off-by: Keith Wiles <keith.wiles@intel.com>

Acked-by: Declan Doherty <declan.doherty@intel.com>

Applied to dpdk-next-net/master, thanks.
  

Patch

diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index f3ac9e273..bc6e277ef 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1,7 +1,7 @@ 
 /*-
  *   BSD LICENSE
  *
- *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
  *   All rights reserved.
  *
  *   Redistribution and use in source and binary forms, with or without
@@ -145,7 +145,7 @@  bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 	const uint16_t ether_type_slow_be = rte_be_to_cpu_16(ETHER_TYPE_SLOW);
 	uint16_t num_rx_total = 0;	/* Total number of received packets */
 	uint8_t slaves[RTE_MAX_ETHPORTS];
-	uint8_t slave_count;
+	uint8_t slave_count, idx;
 
 	uint8_t collecting;  /* current slave collecting status */
 	const uint8_t promisc = internals->promiscuous_en;
@@ -159,12 +159,18 @@  bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 	memcpy(slaves, internals->active_slaves,
 			sizeof(internals->active_slaves[0]) * slave_count);
 
+	idx = internals->active_slave;
+	if (idx >= slave_count) {
+		internals->active_slave = 0;
+		idx = 0;
+	}
 	for (i = 0; i < slave_count && num_rx_total < nb_pkts; i++) {
 		j = num_rx_total;
-		collecting = ACTOR_STATE(&mode_8023ad_ports[slaves[i]], COLLECTING);
+		collecting = ACTOR_STATE(&mode_8023ad_ports[slaves[idx]],
+					 COLLECTING);
 
 		/* Read packets from this slave */
-		num_rx_total += rte_eth_rx_burst(slaves[i], bd_rx_q->queue_id,
+		num_rx_total += rte_eth_rx_burst(slaves[idx], bd_rx_q->queue_id,
 				&bufs[num_rx_total], nb_pkts - num_rx_total);
 
 		for (k = j; k < 2 && k < num_rx_total; k++)
@@ -187,8 +193,8 @@  bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 					!is_same_ether_addr(&bond_mac, &hdr->d_addr)))) {
 
 				if (hdr->ether_type == ether_type_slow_be) {
-					bond_mode_8023ad_handle_slow_pkt(internals, slaves[i],
-						bufs[j]);
+					bond_mode_8023ad_handle_slow_pkt(
+					    internals, slaves[idx], bufs[j]);
 				} else
 					rte_pktmbuf_free(bufs[j]);
 
@@ -201,8 +207,11 @@  bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 			} else
 				j++;
 		}
+		if (unlikely(++idx == slave_count))
+			idx = 0;
 	}
 
+	internals->active_slave = idx;
 	return num_rx_total;
 }
 
diff --git a/drivers/net/bonding/rte_eth_bond_private.h b/drivers/net/bonding/rte_eth_bond_private.h
index 5a411e22b..c8db09005 100644
--- a/drivers/net/bonding/rte_eth_bond_private.h
+++ b/drivers/net/bonding/rte_eth_bond_private.h
@@ -1,7 +1,7 @@ 
 /*-
  *   BSD LICENSE
  *
- *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
  *   All rights reserved.
  *
  *   Redistribution and use in source and binary forms, with or without
@@ -144,6 +144,7 @@  struct bond_dev_private {
 	uint16_t nb_rx_queues;			/**< Total number of rx queues */
 	uint16_t nb_tx_queues;			/**< Total number of tx queues*/
 
+	uint8_t active_slave;		/**< Next active_slave to poll */
 	uint8_t active_slave_count;		/**< Number of active slaves */
 	uint8_t active_slaves[RTE_MAX_ETHPORTS];	/**< Active slave list */