From: Keith Wiles <keith.wiles@intel.com>
To: dev@dpdk.org
Date: Tue, 7 Mar 2017 14:22:55 -0600
Message-Id: <20170307202255.31812-1-keith.wiles@intel.com>
Subject: [dpdk-dev] [PATCH] net/bonding: reduce slave starvation on rx poll

When polling the bonded ports for RX packets, the old driver would always
start with the first slave in the list. If the requested number of packets
is filled on the first port in a two-port configuration, the second port
could be starved or see a larger number of missed-packet errors.

The code now attempts to start with a different slave each time an RX poll
is done, to help eliminate starvation of slave ports.

The effect of the previous code was much lower performance for two slaves
in the bond than for just one slave. The performance drop showed up when
the application could not poll the RX rings fast enough and the packets
per second for two or more ports was at the threshold throughput of the
application. At this threshold a single slave would see very few or no
drops, but enabling the second slave produced a large drop rate on the
two-slave bond and a reduction in throughput.

Signed-off-by: Keith Wiles <keith.wiles@intel.com>
---
 drivers/net/bonding/rte_eth_bond_pmd.c     | 16 +++++++++++-----
 drivers/net/bonding/rte_eth_bond_private.h |  3 ++-
 2 files changed, 13 insertions(+), 6 deletions(-)

diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index f3ac9e273..aa84514ed 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1,7 +1,7 @@
 /*-
  *   BSD LICENSE
  *
- *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
  *   All rights reserved.
  *
  *   Redistribution and use in source and binary forms, with or without
@@ -145,7 +145,7 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 	const uint16_t ether_type_slow_be = rte_be_to_cpu_16(ETHER_TYPE_SLOW);
 	uint16_t num_rx_total = 0;	/* Total number of received packets */
 	uint8_t slaves[RTE_MAX_ETHPORTS];
-	uint8_t slave_count;
+	uint8_t slave_count, idx;

 	uint8_t collecting;  /* current slave collecting status */
 	const uint8_t promisc = internals->promiscuous_en;
@@ -159,12 +159,15 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 	memcpy(slaves, internals->active_slaves,
 			sizeof(internals->active_slaves[0]) * slave_count);

+	idx = internals->active_slave;
+	if(idx >= slave_count)
+		internals->active_slave = idx = 0;
 	for (i = 0; i < slave_count && num_rx_total < nb_pkts; i++) {
 		j = num_rx_total;
-		collecting = ACTOR_STATE(&mode_8023ad_ports[slaves[i]], COLLECTING);
+		collecting = ACTOR_STATE(&mode_8023ad_ports[slaves[idx]], COLLECTING);

 		/* Read packets from this slave */
-		num_rx_total += rte_eth_rx_burst(slaves[i], bd_rx_q->queue_id,
+		num_rx_total += rte_eth_rx_burst(slaves[idx], bd_rx_q->queue_id,
 				&bufs[num_rx_total], nb_pkts - num_rx_total);

 		for (k = j; k < 2 && k < num_rx_total; k++)
@@ -187,7 +190,7 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 					!is_same_ether_addr(&bond_mac, &hdr->d_addr)))) {

 				if (hdr->ether_type == ether_type_slow_be) {
-					bond_mode_8023ad_handle_slow_pkt(internals, slaves[i],
+					bond_mode_8023ad_handle_slow_pkt(internals, slaves[idx],
 						bufs[j]);
 				} else
 					rte_pktmbuf_free(bufs[j]);
@@ -201,8 +204,11 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 			} else
 				j++;
 		}
+		if(unlikely(++idx == slave_count))
+			idx = 0;
 	}

+	internals->active_slave = idx + 1;
 	return num_rx_total;
 }

diff --git a/drivers/net/bonding/rte_eth_bond_private.h b/drivers/net/bonding/rte_eth_bond_private.h
index 5a411e22b..6151d5631 100644
--- a/drivers/net/bonding/rte_eth_bond_private.h
+++ b/drivers/net/bonding/rte_eth_bond_private.h
@@ -1,7 +1,7 @@
 /*-
  *   BSD LICENSE
  *
- *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
  *   All rights reserved.
  *
  *   Redistribution and use in source and binary forms, with or without
@@ -144,6 +144,7 @@ struct bond_dev_private {
 	uint16_t nb_rx_queues;			/**< Total number of rx queues */
 	uint16_t nb_tx_queues;			/**< Total number of tx queues*/

+	uint8_t active_slave;		/**< Next active_slave to poll */
 	uint8_t active_slave_count;		/**< Number of active slaves */
 	uint8_t active_slaves[RTE_MAX_ETHPORTS];	/**< Active slave list */
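
For reference, below is a small standalone sketch of the rotating start
index (illustrative only, not the patch code itself; the slave count,
burst size and helper names are made up). It shows the behaviour the
patch is after: when one slave can fill the whole requested burst,
successive polls still begin on a different slave instead of always
starting with slave 0.

#include <stdint.h>
#include <stdio.h>

#define SLAVE_COUNT 2
#define BURST_SIZE  32

static uint8_t active_slave;	/* analogous to internals->active_slave */

/* Pretend every slave always has a full burst of packets waiting. */
static uint16_t
fake_rx_burst(uint8_t slave, uint16_t room)
{
	(void)slave;
	return room < BURST_SIZE ? room : BURST_SIZE;
}

static uint16_t
poll_round_robin(uint16_t nb_pkts)
{
	uint16_t num_rx_total = 0;
	uint8_t idx = active_slave;
	uint8_t i;

	if (idx >= SLAVE_COUNT)
		active_slave = idx = 0;

	for (i = 0; i < SLAVE_COUNT && num_rx_total < nb_pkts; i++) {
		printf("  polling slave %u\n", (unsigned)idx);
		num_rx_total += fake_rx_burst(idx, nb_pkts - num_rx_total);
		if (++idx == SLAVE_COUNT)
			idx = 0;
	}

	active_slave = idx;	/* next poll resumes on the following slave */
	return num_rx_total;
}

int
main(void)
{
	int poll;

	/* One slave fills the whole budget, yet the starting slave alternates. */
	for (poll = 0; poll < 4; poll++) {
		printf("poll %d:\n", poll);
		poll_round_robin(BURST_SIZE);
	}
	return 0;
}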