[dpdk-dev] mbuf: reduce pktmbuf init cycles

Message ID 20170605163807.31941-1-jerin.jacob@caviumnetworks.com (mailing list archive)
State Superseded, archived
Delegated to: Thomas Monjalon

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation success Compilation OK

Commit Message

Jerin Jacob June 5, 2017, 4:38 p.m. UTC
There is no need to initialize the complete
packet buffer with zero, as the packet data area will be
overwritten by the NIC Rx HW anyway.

testpmd configures the packet mempool
with around 180k buffers of
2176B each. In the existing scheme, the init routine
needs to memset around ~370MB, whereas the proposed scheme
requires only around ~44MB on a 128B cache-aligned system.
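
As a back-of-the-envelope check of those figures (an illustrative sketch, not part
of the patch; it assumes priv_size is 0, a 128B struct rte_mbuf on this 128B
cache-line platform, the default 128B RTE_PKTMBUF_HEADROOM, and the buffer count
and size quoted above):

#include <stdio.h>

int main(void)
{
        /* Figures from the commit message: ~180k buffers of 2176B each,
         * a 128B mbuf header + priv area, and a 128B headroom. */
        const double nb_mbufs  = 180 * 1000;
        const double elt_size  = 2176;  /* whole buffer, zeroed by the old code */
        const double mbuf_size = 128;   /* struct rte_mbuf + priv, zeroed by the patch */
        const double headroom  = 128;   /* RTE_PKTMBUF_HEADROOM, also zeroed by the patch */
        const double mib       = 1024 * 1024;

        printf("old: %.1f MiB\n", nb_mbufs * elt_size / mib);               /* ~373.5 */
        printf("new: %.1f MiB\n", nb_mbufs * (mbuf_size + headroom) / mib); /* ~43.9  */
        return 0;
}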

This is useful when running DPDK in HW simulators/emulators,
where millions of cycles have an impact on boot time.

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
 lib/librte_mbuf/rte_mbuf.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
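
The cost above is paid once per mbuf while the pool is being populated. Below is a
minimal sketch of a testpmd-style pool setup, assuming the standard
rte_pktmbuf_pool_create() helper, which runs rte_pktmbuf_init() for every element;
the pool name, count and cache size are illustrative, not taken from testpmd:

#include <rte_mbuf.h>
#include <rte_mempool.h>

#define NB_MBUFS     180000                     /* "around 180k" */
#define MBUF_DATA_SZ RTE_MBUF_DEFAULT_BUF_SIZE  /* 2048B data room + 128B headroom */

static struct rte_mempool *
create_pktmbuf_pool(int socket_id)
{
        /*
         * rte_pktmbuf_pool_create() allocates the mempool and then calls
         * rte_pktmbuf_init() once per element; that is where the per-mbuf
         * memset() discussed in this patch executes NB_MBUFS times.
         */
        return rte_pktmbuf_pool_create("mbuf_pool", NB_MBUFS,
                                       256 /* per-lcore cache */,
                                       0 /* priv size */,
                                       MBUF_DATA_SZ, socket_id);
}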
  

Comments

Olivier Matz June 23, 2017, 9:42 a.m. UTC | #1
On Mon,  5 Jun 2017 22:08:07 +0530, Jerin Jacob <jerin.jacob@caviumnetworks.com> wrote:
> There is no need to initialize the complete
> packet buffer with zero, as the packet data area will be
> overwritten by the NIC Rx HW anyway.
> 
> testpmd configures the packet mempool
> with around 180k buffers of
> 2176B each. In the existing scheme, the init routine
> needs to memset around ~370MB, whereas the proposed scheme
> requires only around ~44MB on a 128B cache-aligned system.
> 
> This is useful when running DPDK in HW simulators/emulators,
> where millions of cycles have an impact on boot time.
> 
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> ---
>  lib/librte_mbuf/rte_mbuf.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
> index 0e3e36a58..1d5ce7816 100644
> --- a/lib/librte_mbuf/rte_mbuf.c
> +++ b/lib/librte_mbuf/rte_mbuf.c
> @@ -131,8 +131,7 @@ rte_pktmbuf_init(struct rte_mempool *mp,
>  	RTE_ASSERT(mp->elt_size >= mbuf_size);
>  	RTE_ASSERT(buf_len <= UINT16_MAX);
>  
> -	memset(m, 0, mp->elt_size);
> -
> +	memset(m, 0, mbuf_size + RTE_PKTMBUF_HEADROOM);
>  	/* start of buffer is after mbuf structure and priv data */
>  	m->priv_size = priv_size;
>  	m->buf_addr = (char *)m + mbuf_size;

Yes, I don't foresee any risk in doing that.

I'm just wondering why RTE_PKTMBUF_HEADROOM should be zeroed.
For example, rte_pktmbuf_free() does not touch the data either, so
after some packet processing, we also have garbage data in the
headroom.

Olivier
  
Jerin Jacob June 23, 2017, 10:06 a.m. UTC | #2
-----Original Message-----
> Date: Fri, 23 Jun 2017 11:42:30 +0200
> From: Olivier Matz <olivier.matz@6wind.com>
> To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] mbuf: reduce pktmbuf init cycles
> X-Mailer: Claws Mail 3.14.1 (GTK+ 2.24.31; x86_64-pc-linux-gnu)
> 
> On Mon,  5 Jun 2017 22:08:07 +0530, Jerin Jacob <jerin.jacob@caviumnetworks.com> wrote:
> > There is no need to initialize the complete
> > packet buffer with zero, as the packet data area will be
> > overwritten by the NIC Rx HW anyway.
> > 
> > testpmd configures the packet mempool
> > with around 180k buffers of
> > 2176B each. In the existing scheme, the init routine
> > needs to memset around ~370MB, whereas the proposed scheme
> > requires only around ~44MB on a 128B cache-aligned system.
> > 
> > This is useful when running DPDK in HW simulators/emulators,
> > where millions of cycles have an impact on boot time.
> > 
> > Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > ---
> >  lib/librte_mbuf/rte_mbuf.c | 3 +--
> >  1 file changed, 1 insertion(+), 2 deletions(-)
> > 
> > diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
> > index 0e3e36a58..1d5ce7816 100644
> > --- a/lib/librte_mbuf/rte_mbuf.c
> > +++ b/lib/librte_mbuf/rte_mbuf.c
> > @@ -131,8 +131,7 @@ rte_pktmbuf_init(struct rte_mempool *mp,
> >  	RTE_ASSERT(mp->elt_size >= mbuf_size);
> >  	RTE_ASSERT(buf_len <= UINT16_MAX);
> >  
> > -	memset(m, 0, mp->elt_size);
> > -
> > +	memset(m, 0, mbuf_size + RTE_PKTMBUF_HEADROOM);
> >  	/* start of buffer is after mbuf structure and priv data */
> >  	m->priv_size = priv_size;
> >  	m->buf_addr = (char *)m + mbuf_size;
> 
> Yes, I don't foresee any risk in doing that.
> 
> I'm just wondering why RTE_PKTMBUF_HEADROOM should be zeroed.
> For example, rte_pktmbuf_free() does not touch the data either, so
> after some packet processing, we also have garbage data in the
> headroom.

Yes. The headroom can contain garbage, since the application pulls the packet
offset up and writes a new header in encapsulation use cases.
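
To make that concrete (a sketch, not code from this thread; the helper name and
header layout are hypothetical): whatever rte_pktmbuf_init() puts in the headroom
is simply overwritten the first time the application prepends an encapsulation
header.

#include <string.h>
#include <rte_mbuf.h>

/* Hypothetical encapsulation step: push a tunnel header into the headroom. */
static int
encap_packet(struct rte_mbuf *m, const void *tun_hdr, uint16_t hdr_len)
{
        /* rte_pktmbuf_prepend() moves data_off back into the headroom; the
         * bytes it exposes are whatever was there before (init-time zeros,
         * or leftovers from a previous user of this recycled mbuf). */
        char *p = rte_pktmbuf_prepend(m, hdr_len);

        if (p == NULL)
                return -1;              /* not enough headroom left */
        memcpy(p, tun_hdr, hdr_len);    /* always overwritten by the caller */
        return 0;
}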

I will send a v2 that clears only the mbuf area.
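
A minimal sketch of what such a v2 could look like (an assumption about the
follow-up, not the actual v2; it takes "mbuf area" to mean the struct rte_mbuf
header plus the application private area, i.e. mbuf_size):

        /* Zero only the mbuf header and private area; headroom and data room
         * are left untouched since the application/NIC overwrites them. */
        memset(m, 0, mbuf_size);

        /* start of buffer is after mbuf structure and priv data */
        m->priv_size = priv_size;
        m->buf_addr = (char *)m + mbuf_size;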

> 
> Olivier
  

Patch

diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index 0e3e36a58..1d5ce7816 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -131,8 +131,7 @@  rte_pktmbuf_init(struct rte_mempool *mp,
 	RTE_ASSERT(mp->elt_size >= mbuf_size);
 	RTE_ASSERT(buf_len <= UINT16_MAX);
 
-	memset(m, 0, mp->elt_size);
-
+	memset(m, 0, mbuf_size + RTE_PKTMBUF_HEADROOM);
 	/* start of buffer is after mbuf structure and priv data */
 	m->priv_size = priv_size;
 	m->buf_addr = (char *)m + mbuf_size;