From patchwork Wed Sep 19 15:48:25 2018
X-Patchwork-Submitter: Chas Williams <3chas3@gmail.com>
X-Patchwork-Id: 44956
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Chas Williams <3chas3@gmail.com>
To: dev@dpdk.org
Cc: declan.doherty@intel.com, matan@mellanox.com, ehkinzie@gmail.com,
 Chas Williams, stable@dpdk.org
Date: Wed, 19 Sep 2018 11:48:25 -0400
Message-Id: <20180919154825.5183-1-3chas3@gmail.com>
Subject: [dpdk-dev] [PATCH] net/bonding: ensure fairness among slaves

From: Chas Williams

Some PMDs, especially ones with vector receives, require a minimum
number of receive buffers in order to receive any packets. If the first
slave read leaves fewer than this number available, a read from the
next slave may return 0, implying that the slave has no packets, which
results in that slave being skipped over as the next active slave.

To fix this, implement a round-robin order for the slaves during
receive that is advanced to the next slave only at the end of each
receive burst.
This should also provide some additional fairness in processing in
bond_ethdev_rx_burst.

Fixes: 2efb58cbab6e ("bond: new link bonding library")
Cc: stable@dpdk.org

Signed-off-by: Chas Williams
Acked-by: Luca Boccassi
Acked-by: Matan Azrad
---
 drivers/net/bonding/rte_eth_bond_pmd.c | 50 ++++++++++++++++++++++------------
 1 file changed, 32 insertions(+), 18 deletions(-)

diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index b84f32263..f25faa75c 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -58,28 +58,33 @@ bond_ethdev_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 {
         struct bond_dev_private *internals;
 
-        uint16_t num_rx_slave = 0;
         uint16_t num_rx_total = 0;
-
+        uint16_t slave_count;
+        uint16_t active_slave;
         int i;
 
         /* Cast to structure, containing bonded device's port id and queue id */
         struct bond_rx_queue *bd_rx_q = (struct bond_rx_queue *)queue;
-
         internals = bd_rx_q->dev_private;
+        slave_count = internals->active_slave_count;
+        active_slave = internals->active_slave;
+
+        for (i = 0; i < slave_count && nb_pkts; i++) {
+                uint16_t num_rx_slave;
 
-        for (i = 0; i < internals->active_slave_count && nb_pkts; i++) {
                 /* Offset of pointer to *bufs increases as packets are received
                  * from other slaves */
-                num_rx_slave = rte_eth_rx_burst(internals->active_slaves[i],
+                num_rx_slave = rte_eth_rx_burst(
+                                internals->active_slaves[active_slave],
                                 bd_rx_q->queue_id,
                                 bufs + num_rx_total, nb_pkts);
-                if (num_rx_slave) {
-                        num_rx_total += num_rx_slave;
-                        nb_pkts -= num_rx_slave;
-                }
+                num_rx_total += num_rx_slave;
+                nb_pkts -= num_rx_slave;
+                if (++active_slave == slave_count)
+                        active_slave = 0;
         }
 
+        if (++internals->active_slave == slave_count)
+                internals->active_slave = 0;
         return num_rx_total;
 }
 
@@ -258,25 +263,32 @@ bond_ethdev_rx_burst_8023ad_fast_queue(void *queue, struct rte_mbuf **bufs,
         uint16_t num_rx_total = 0;      /* Total number of received packets */
         uint16_t slaves[RTE_MAX_ETHPORTS];
         uint16_t slave_count;
-
-        uint16_t i, idx;
+        uint16_t active_slave;
+        uint16_t i;
 
         /* Copy slave list to protect against slave up/down changes during tx
          * bursting */
         slave_count = internals->active_slave_count;
+        active_slave = internals->active_slave;
         memcpy(slaves, internals->active_slaves,
                         sizeof(internals->active_slaves[0]) * slave_count);
 
-        for (i = 0, idx = internals->active_slave;
-                        i < slave_count && num_rx_total < nb_pkts; i++, idx++) {
-                idx = idx % slave_count;
+        for (i = 0; i < slave_count && nb_pkts; i++) {
+                uint16_t num_rx_slave;
 
                 /* Read packets from this slave */
-                num_rx_total += rte_eth_rx_burst(slaves[idx], bd_rx_q->queue_id,
-                                &bufs[num_rx_total], nb_pkts - num_rx_total);
+                num_rx_slave = rte_eth_rx_burst(slaves[active_slave],
+                                bd_rx_q->queue_id,
+                                bufs + num_rx_total, nb_pkts);
+                num_rx_total += num_rx_slave;
+                nb_pkts -= num_rx_slave;
+
+                if (++active_slave == slave_count)
+                        active_slave = 0;
         }
 
-        internals->active_slave = idx;
+        if (++internals->active_slave == slave_count)
+                internals->active_slave = 0;
 
         return num_rx_total;
 }
 
@@ -459,7 +471,9 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
                 idx = 0;
         }
 
-        internals->active_slave = idx;
+        if (++internals->active_slave == slave_count)
+                internals->active_slave = 0;
+
         return num_rx_total;
 }
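
[Editor's note] For readers skimming the patch, the core of the change is that the
starting slave is rotated exactly once per burst, so a slave that happens to return 0
packets (for example, a vector PMD handed too few buffers) is not skipped as the next
active slave on every subsequent call. Below is a standalone, simplified sketch of that
rotation, not the bonding driver code itself; rr_state, NB_PORTS and fake_rx_burst()
are made-up placeholders standing in for the bonded device's slave list and
rte_eth_rx_burst().

/* Sketch of per-burst round-robin over slave ports (placeholder names). */
#include <stdint.h>
#include <stdio.h>

#define NB_PORTS 3              /* pretend there are three active slaves */

struct rr_state {
        uint16_t next_port;     /* slave to poll first on the next burst */
};

/* Stand-in for rte_eth_rx_burst(): pretend each port has 2 packets ready. */
static uint16_t
fake_rx_burst(uint16_t port, uint16_t nb_pkts)
{
        uint16_t avail = 2;
        (void)port;
        return nb_pkts < avail ? nb_pkts : avail;
}

static uint16_t
rr_rx_burst(struct rr_state *st, uint16_t nb_pkts)
{
        uint16_t total = 0;
        uint16_t port = st->next_port;
        uint16_t i;

        for (i = 0; i < NB_PORTS && nb_pkts; i++) {
                uint16_t n = fake_rx_burst(port, nb_pkts);

                /* A port returning 0 is simply skipped for this burst;
                 * it does not change which port leads the next burst. */
                total += n;
                nb_pkts -= n;
                if (++port == NB_PORTS)
                        port = 0;
        }

        /* Advance the leading slave exactly once per burst. */
        if (++st->next_port == NB_PORTS)
                st->next_port = 0;

        return total;
}

int
main(void)
{
        struct rr_state st = { .next_port = 0 };
        int burst;

        for (burst = 0; burst < 4; burst++) {
                uint16_t lead = st.next_port;
                uint16_t got = rr_rx_burst(&st, 4);

                printf("burst %d: leading port %u, got %u packets\n",
                       burst, lead, got);
        }
        return 0;
}

The property the patch relies on is that internals->active_slave (next_port in this
sketch) changes once per call rather than being left wherever the inner loop stopped,
so every slave eventually leads a burst and no slave is starved.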