[dpdk-dev,v2] net/null: support bulk allocation

Message ID 1520552441-20833-1-git-send-email-malleshx.koujalagi@intel.com (mailing list archive)
State Accepted, archived
Delegated to: Ferruh Yigit
Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation success Compilation OK

Commit Message

Mallesh Koujalagi March 8, 2018, 11:40 p.m. UTC
  Bulk allocation of multiple mbufs increases throughput on a single core
(1.8 GHz) by roughly 2% to 8%, depending on the use case:
1. Testpmd case: two null devices with copy, ~8% improvement.
    testpmd -c 0x3 -n 4 --socket-mem 1024,1024
	--vdev 'eth_null0,size=64,copy=1' --vdev 'eth_null1,size=64,copy=1'
	-- -i -a --coremask=0x2 --txrst=64 --txfreet=64 --txd=256
	--rxd=256 --rxfreet=64 --burst=64 --txpt=64 --txq=1 --rxq=1 --numa
2. OVS switch case: ~2% improvement.
$VSCTL add-port ovs-br dpdk1 -- set Interface dpdk1 type=dpdk \
        options:dpdk-devargs=eth_null0,size=64,copy=1
$VSCTL add-port ovs-br dpdk2 -- set Interface dpdk2 type=dpdk \
        options:dpdk-devargs=eth_null1,size=64,copy=1

Signed-off-by: Mallesh Koujalagi <malleshx.koujalagi@intel.com>
---
 drivers/net/null/rte_eth_null.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)
  

Comments

Ferruh Yigit March 9, 2018, 11:09 a.m. UTC | #1
On 3/8/2018 11:40 PM, Mallesh Koujalagi wrote:
> Bulk allocation of multiple mbufs increases throughput on a single core
> (1.8 GHz) by roughly 2% to 8%, depending on the use case:
> 1. Testpmd case: two null devices with copy, ~8% improvement.
>     testpmd -c 0x3 -n 4 --socket-mem 1024,1024
> 	--vdev 'eth_null0,size=64,copy=1' --vdev 'eth_null1,size=64,copy=1'
> 	-- -i -a --coremask=0x2 --txrst=64 --txfreet=64 --txd=256
> 	--rxd=256 --rxfreet=64 --burst=64 --txpt=64 --txq=1 --rxq=1 --numa
> 2. OVS switch case: ~2% improvement.
> $VSCTL add-port ovs-br dpdk1 -- set Interface dpdk1 type=dpdk \
>         options:dpdk-devargs=eth_null0,size=64,copy=1
> $VSCTL add-port ovs-br dpdk2 -- set Interface dpdk2 type=dpdk \
>         options:dpdk-devargs=eth_null1,size=64,copy=1
> 
> Signed-off-by: Mallesh Koujalagi <malleshx.koujalagi@intel.com>

Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
  
Ferruh Yigit March 16, 2018, 2:08 p.m. UTC | #2
On 3/9/2018 11:09 AM, Ferruh Yigit wrote:
> On 3/8/2018 11:40 PM, Mallesh Koujalagi wrote:
>> Bulk allocation of multiple mbufs increases throughput on a single core
>> (1.8 GHz) by roughly 2% to 8%, depending on the use case:
>> 1. Testpmd case: two null devices with copy, ~8% improvement.
>>     testpmd -c 0x3 -n 4 --socket-mem 1024,1024
>> 	--vdev 'eth_null0,size=64,copy=1' --vdev 'eth_null1,size=64,copy=1'
>> 	-- -i -a --coremask=0x2 --txrst=64 --txfreet=64 --txd=256
>> 	--rxd=256 --rxfreet=64 --burst=64 --txpt=64 --txq=1 --rxq=1 --numa
>> 2. OVS switch case: ~2% improvement.
>> $VSCTL add-port ovs-br dpdk1 -- set Interface dpdk1 type=dpdk \
>>         options:dpdk-devargs=eth_null0,size=64,copy=1
>> $VSCTL add-port ovs-br dpdk2 -- set Interface dpdk2 type=dpdk \
>>         options:dpdk-devargs=eth_null1,size=64,copy=1
>>
>> Signed-off-by: Mallesh Koujalagi <malleshx.koujalagi@intel.com>
> 
> Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>

Applied to dpdk-next-net/master, thanks.
  

Patch

diff --git a/drivers/net/null/rte_eth_null.c b/drivers/net/null/rte_eth_null.c
index 9385ffd..c019d2d 100644
--- a/drivers/net/null/rte_eth_null.c
+++ b/drivers/net/null/rte_eth_null.c
@@ -105,10 +105,10 @@  eth_null_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 		return 0;
 
 	packet_size = h->internals->packet_size;
+	if (rte_pktmbuf_alloc_bulk(h->mb_pool, bufs, nb_bufs) != 0)
+		return 0;
+
 	for (i = 0; i < nb_bufs; i++) {
-		bufs[i] = rte_pktmbuf_alloc(h->mb_pool);
-		if (!bufs[i])
-			break;
 		bufs[i]->data_len = (uint16_t)packet_size;
 		bufs[i]->pkt_len = packet_size;
 		bufs[i]->port = h->internals->port_id;
@@ -130,10 +130,10 @@  eth_null_copy_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 		return 0;
 
 	packet_size = h->internals->packet_size;
+	if (rte_pktmbuf_alloc_bulk(h->mb_pool, bufs, nb_bufs) != 0)
+		return 0;
+
 	for (i = 0; i < nb_bufs; i++) {
-		bufs[i] = rte_pktmbuf_alloc(h->mb_pool);
-		if (!bufs[i])
-			break;
 		rte_memcpy(rte_pktmbuf_mtod(bufs[i], void *), h->dummy_packet,
 					packet_size);
 		bufs[i]->data_len = (uint16_t)packet_size;
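
The pattern the patch introduces, shown outside the driver for reference: a single
rte_pktmbuf_alloc_bulk() call either fills the whole mbuf array or fails as a unit
(non-zero return), replacing one rte_pktmbuf_alloc() call per packet. The standalone
sketch below is illustrative only; the pool name, pool/burst sizes and the EAL
boilerplate are assumptions, not part of the patch.

/* Minimal sketch of the bulk-allocation pattern used by the patch.
 * Pool name, pool size, cache size and burst size are illustrative. */
#include <stdio.h>
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

#define BURST 64

int
main(int argc, char **argv)
{
	struct rte_mbuf *bufs[BURST];
	struct rte_mempool *mp;
	uint16_t packet_size = 64;
	int i;

	if (rte_eal_init(argc, argv) < 0) {
		fprintf(stderr, "EAL init failed\n");
		return EXIT_FAILURE;
	}

	mp = rte_pktmbuf_pool_create("bulk_demo_pool", 8191, 256, 0,
			RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
	if (mp == NULL) {
		fprintf(stderr, "mempool creation failed\n");
		return EXIT_FAILURE;
	}

	/* One call fills the whole array, or fails without allocating. */
	if (rte_pktmbuf_alloc_bulk(mp, bufs, BURST) != 0) {
		fprintf(stderr, "bulk allocation failed\n");
		return EXIT_FAILURE;
	}

	for (i = 0; i < BURST; i++) {
		bufs[i]->data_len = packet_size;
		bufs[i]->pkt_len = packet_size;
	}

	printf("initialised %d mbufs from one bulk call\n", BURST);

	for (i = 0; i < BURST; i++)
		rte_pktmbuf_free(bufs[i]);

	return 0;
}

The gain comes from amortising the mempool access over the burst instead of paying
it per mbuf, which is consistent with the 2-8% numbers reported in the commit message.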