[v1] net/vhost: clear data of packet mbuf after sending pkts

Message ID 20220301072802.1349736-1-yuying.zhang@intel.com (mailing list archive)
State Superseded, archived
Delegated to: Maxime Coquelin
Series: [v1] net/vhost: clear data of packet mbuf after sending pkts

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/iol-broadcom-Functional success Functional Testing PASS
ci/iol-mellanox-Performance success Performance Testing PASS
ci/iol-broadcom-Performance success Performance Testing PASS
ci/github-robot: build success github build: passed
ci/iol-intel-Performance success Performance Testing PASS
ci/iol-intel-Functional success Functional Testing PASS
ci/Intel-compilation success Compilation OK
ci/intel-Testing success Testing PASS
ci/iol-aarch64-unit-testing success Testing PASS
ci/iol-x86_64-compile-testing success Testing PASS
ci/iol-x86_64-unit-testing success Testing PASS
ci/iol-aarch64-compile-testing success Testing PASS
ci/iol-abi-testing success Testing PASS

Commit Message

Zhang, Yuying March 1, 2022, 7:28 a.m. UTC
  The PMD frees a packet mbuf back into its original mempool
after sending a packet. However, the old data is not cleaned up,
which causes errors in the payload of new packets. This patch
clears the data of the packet mbuf before freeing the mbuf.

Fixes: ee584e9710b9 ("vhost: add driver on top of the library")
Cc: stable@dpdk.org

Signed-off-by: Yuying Zhang <yuying.zhang@intel.com>
---
 drivers/net/vhost/rte_eth_vhost.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)
  

Comments

Ling, WeiX March 1, 2022, 8:38 a.m. UTC | #1
> -----Original Message-----
> From: Yuying Zhang <yuying.zhang@intel.com>
> Sent: Tuesday, March 1, 2022 3:28 PM
> To: dev@dpdk.org; maxime.coquelin@redhat.com; Xia, Chenbo
> <chenbo.xia@intel.com>
> Cc: Zhang, Yuying <yuying.zhang@intel.com>; stable@dpdk.org
> Subject: [PATCH v1] net/vhost: clear data of packet mbuf after sending pkts
> 
> The PMD frees a packet mbuf back into its original mempool after sending a
> packet. However, the old data is not cleaned up, which causes errors in the
> payload of new packets. This patch clears the data of the packet mbuf before
> freeing the mbuf.
> 
> Fixes: ee584e9710b9 ("vhost: add driver on top of the library")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Yuying Zhang <yuying.zhang@intel.com>
> ---
Tested-by: Wei Ling <weix.ling@intel.com>
  
David Marchand March 1, 2022, 8:43 a.m. UTC | #2
On Tue, Mar 1, 2022 at 8:29 AM Yuying Zhang <yuying.zhang@intel.com> wrote:
>
> The PMD frees a packet mbuf back into its original mempool
> after sending a packet. However, the old data is not cleaned up,
> which causes errors in the payload of new packets. This patch
> clears the data of the packet mbuf before freeing the mbuf.

This patch looks wrong to me.
What is the actual issue you want to fix?
  
Zhang, Yuying March 1, 2022, 9:02 a.m. UTC | #3
Hi Marchand,

> -----Original Message-----
> From: David Marchand <david.marchand@redhat.com>
> Sent: Tuesday, March 1, 2022 4:44 PM
> To: Zhang, Yuying <yuying.zhang@intel.com>
> Cc: dev <dev@dpdk.org>; Maxime Coquelin <maxime.coquelin@redhat.com>;
> Xia, Chenbo <chenbo.xia@intel.com>; dpdk stable <stable@dpdk.org>
> Subject: Re: [PATCH v1] net/vhost: clear data of packet mbuf after sending pkts
> 
> On Tue, Mar 1, 2022 at 8:29 AM Yuying Zhang <yuying.zhang@intel.com> wrote:
> >
> > The PMD frees a packet mbuf back into its original mempool after
> > sending a packet. However, the old data is not cleaned up, which causes
> > errors in the payload of new packets. This patch clears the data of the
> > packet mbuf before freeing the mbuf.
> 
> This patch looks wrong to me.
> What is the actual issue you want to fix?

eth_vhost_tx() frees the packet mbuf back into its original mempool every time a packet is sent, without clearing the data field.
The packet transmit function then gets the mbufs back in bulk without any reset, so a newly generated packet contains the old data of the previous packet. This is wrong.
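
For reference, the free loop in question before this patch (taken from the diff at the bottom of this page) hands each transmitted mbuf straight back to its pool without touching the payload:

	for (i = 0; likely(i < nb_tx); i++)
		rte_pktmbuf_free(bufs[i]);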

> 
> 
> --
> David Marchand
  
David Marchand March 1, 2022, 9:47 a.m. UTC | #4
On Tue, Mar 1, 2022 at 10:02 AM Zhang, Yuying <yuying.zhang@intel.com> wrote:
> > -----Original Message-----
> > From: David Marchand <david.marchand@redhat.com>
> > Sent: Tuesday, March 1, 2022 4:44 PM
> > To: Zhang, Yuying <yuying.zhang@intel.com>
> > Cc: dev <dev@dpdk.org>; Maxime Coquelin <maxime.coquelin@redhat.com>;
> > Xia, Chenbo <chenbo.xia@intel.com>; dpdk stable <stable@dpdk.org>
> > Subject: Re: [PATCH v1] net/vhost: clear data of packet mbuf after sending pkts
> >
> > On Tue, Mar 1, 2022 at 8:29 AM Yuying Zhang <yuying.zhang@intel.com> wrote:
> > >
> > > The PMD frees a packet mbuf back into its original mempool after
> > > sending a packet. However, the old data is not cleaned up, which causes
> > > errors in the payload of new packets. This patch clears the data of the
> > > packet mbuf before freeing the mbuf.
> >
> > This patch looks wrong to me.
> > What is the actual issue you want to fix?
>
> eth_vhost_tx() frees the packet mbuf back into its original mempool every time a packet is sent, without clearing the data field.
> The packet transmit function then gets the mbufs back in bulk without any reset, so a newly generated packet contains the old data of the previous packet. This is wrong.

With the proposed patch, if the mbuf refcnt is != 1, you are shooting
the data while some other part of the application might still need it.

Plus, there should be no expectation about a mbuf data content when
retrieving one from a mempool.
The only bytes that are guaranteed to be initialised by the mbuf API
are its metadata.


If there is an issue somewhere in dpdk where the mbuf data content is
expected to be 0 on allocation, please point at it.
Or share the full test that failed.
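
To illustrate the point (a minimal sketch, not part of the patch under review): an application that needs zeroed payload has to clear it itself after allocation, because the mbuf API initialises only metadata. The helper name alloc_zeroed_pkt and its parameters are hypothetical:

	#include <string.h>
	#include <rte_mbuf.h>

	/* Allocate an mbuf whose payload is guaranteed to be zero.
	 * The mbuf API makes no promise about the data area, so the
	 * caller clears it. Returns NULL on failure. */
	static struct rte_mbuf *
	alloc_zeroed_pkt(struct rte_mempool *mp, uint16_t len)
	{
		struct rte_mbuf *m = rte_pktmbuf_alloc(mp);
		char *payload;

		if (m == NULL)
			return NULL;
		payload = rte_pktmbuf_append(m, len);
		if (payload == NULL) {
			rte_pktmbuf_free(m);
			return NULL;
		}
		memset(payload, 0, len);	/* caller's responsibility */
		return m;
	}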
  
Stephen Hemminger March 1, 2022, 5:05 p.m. UTC | #5
On Tue, 1 Mar 2022 10:47:32 +0100
David Marchand <david.marchand@redhat.com> wrote:

> On Tue, Mar 1, 2022 at 10:02 AM Zhang, Yuying <yuying.zhang@intel.com> wrote:
> > > -----Original Message-----
> > > From: David Marchand <david.marchand@redhat.com>
> > > Sent: Tuesday, March 1, 2022 4:44 PM
> > > To: Zhang, Yuying <yuying.zhang@intel.com>
> > > Cc: dev <dev@dpdk.org>; Maxime Coquelin <maxime.coquelin@redhat.com>;
> > > Xia, Chenbo <chenbo.xia@intel.com>; dpdk stable <stable@dpdk.org>
> > > Subject: Re: [PATCH v1] net/vhost: clear data of packet mbuf after sending pkts
> > >
> > > On Tue, Mar 1, 2022 at 8:29 AM Yuying Zhang <yuying.zhang@intel.com> wrote:  
> > > >
> > > > The PMD frees a packet mbuf back into its original mempool after
> > > > sending a packet. However, the old data is not cleaned up, which causes
> > > > errors in the payload of new packets. This patch clears the data of
> > > > the packet mbuf before freeing the mbuf.
> > >
> > > This patch looks wrong to me.
> > > What is the actual issue you want to fix?  
> >
> > eth_vhost_tx() frees the packet mbuf back into its original mempool every time a packet is sent, without clearing the data field.
> > The packet transmit function then gets the mbufs back in bulk without any reset, so a newly generated packet contains the old data of the previous packet. This is wrong.
> 
> With the proposed patch, if the mbuf refcnt is != 1, you are shooting
> the data while some other part of the application might still need it.
> 
> Plus, there should be no expectation about a mbuf data content when
> retrieving one from a mempool.
> The only bytes that are guaranteed to be initialised by the mbuf API
> are its metadata.
> 
> 
> If there is an issue somewhere in dpdk where the mbuf data content is
> expected to be 0 on allocation, please point at it.
> Or share the full test that failed.
> 
> 

Agree. There is no guarantee that the mbuf you get was not just used by
some other driver or library. Only the fields set by rte_pktmbuf_reset()
are guaranteed to be initialised.
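
For context, rte_pktmbuf_reset() touches roughly the following fields (a paraphrase, not the exact DPDK source); note that none of them is the data buffer itself:

	/* Rough paraphrase of rte_pktmbuf_reset() from rte_mbuf.h. */
	m->next = NULL;
	m->pkt_len = 0;
	m->nb_segs = 1;
	m->ol_flags = 0;
	m->packet_type = 0;
	m->data_len = 0;
	m->data_off = RTE_MIN(RTE_PKTMBUF_HEADROOM, (uint16_t)m->buf_len);
	/* The bytes the payload pointer refers to are left untouched. */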
  
Zhang, Yuying March 2, 2022, 8:58 a.m. UTC | #6
Hi Marchand,

> -----Original Message-----
> From: David Marchand <david.marchand@redhat.com>
> Sent: Tuesday, March 1, 2022 5:48 PM
> To: Zhang, Yuying <yuying.zhang@intel.com>
> Cc: dev <dev@dpdk.org>; Maxime Coquelin <maxime.coquelin@redhat.com>;
> Xia, Chenbo <chenbo.xia@intel.com>; dpdk stable <stable@dpdk.org>
> Subject: Re: [PATCH v1] net/vhost: clear data of packet mbuf after sending pkts
> 
> On Tue, Mar 1, 2022 at 10:02 AM Zhang, Yuying <yuying.zhang@intel.com>
> wrote:

...

> >
> > eth_vhost_tx() frees the packet mbuf back into its original mempool every
> > time a packet is sent, without clearing the data field.
> > The packet transmit function then gets the mbufs back in bulk without any
> > reset, so a newly generated packet contains the old data of the previous
> > packet. This is wrong.
> 
> With the proposed patch, if the mbuf refcnt is != 1, you are shooting the
> data while some other part of the application might still need it.
> 
> Plus, there should be no expectation about a mbuf data content when retrieving
> one from a mempool.
> The only bytes that are guaranteed to be initialised by the mbuf API are its
> metadata.
> 
> 
> If there is an issue somewhere in dpdk where the mbuf data content is expected
> to be 0 on allocation, please point at it.
> Or share the full test that failed.

According to the DPDK test plan guide (https://doc.dpdk.org/dts/test_plans/loopback_virtio_user_server_mode_test_plan.html),
Test Case 13 (loopback packed ring all path payload check test using server mode and multi-queues) requires the payload of each packet to be the same.
The packet of the first stream is initialized to 0. This packet is then put back into the mempool (actually, the local cache of the core).
The packets of the remaining streams are taken from the local cache directly and contain the first packet's header data in their payload.
Therefore, the payloads of the packets differ.
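
A sketch of the scenario being described, with hypothetical variables ('mp' is the test's mempool); the recycling behaviour is typical of a mempool with a per-lcore cache:

	/* Stream 1: the test builds a packet whose payload it zeroed. */
	struct rte_mbuf *first = rte_pktmbuf_alloc(mp);
	/* ... fill and transmit 'first'; eth_vhost_tx() frees it back
	 * into the per-core cache after the send ... */

	/* Stream 2: the next allocation often returns the very same
	 * buffer, with stream 1's bytes still in its data area. */
	struct rte_mbuf *second = rte_pktmbuf_alloc(mp);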

> 
> 
> --
> David Marchand
  
Chenbo Xia March 3, 2022, 6:49 a.m. UTC | #7
> -----Original Message-----
> From: Zhang, Yuying <yuying.zhang@intel.com>
> Sent: Wednesday, March 2, 2022 4:59 PM
> To: David Marchand <david.marchand@redhat.com>
> Cc: dev <dev@dpdk.org>; Maxime Coquelin <maxime.coquelin@redhat.com>; Xia,
> Chenbo <chenbo.xia@intel.com>; dpdk stable <stable@dpdk.org>;
> stephen@networkplumber.org
> Subject: RE: [PATCH v1] net/vhost: clear data of packet mbuf after sending
> pkts
> 
> Hi Marchand,
> 
> > -----Original Message-----
> > From: David Marchand <david.marchand@redhat.com>
> > Sent: Tuesday, March 1, 2022 5:48 PM
> > To: Zhang, Yuying <yuying.zhang@intel.com>
> > Cc: dev <dev@dpdk.org>; Maxime Coquelin <maxime.coquelin@redhat.com>;
> > Xia, Chenbo <chenbo.xia@intel.com>; dpdk stable <stable@dpdk.org>
> > Subject: Re: [PATCH v1] net/vhost: clear data of packet mbuf after sending
> pkts
> >
> > On Tue, Mar 1, 2022 at 10:02 AM Zhang, Yuying <yuying.zhang@intel.com>
> > wrote:
> 
> ...
> 
> > >
> > > eth_vhost_tx() frees the packet mbuf back into its original mempool every
> > > time a packet is sent, without clearing the data field.
> > > The packet transmit function then gets the mbufs back in bulk without any
> > > reset, so a newly generated packet contains the old data of the previous
> > > packet. This is wrong.
> >
> > With the proposed patch, if the mbuf refcnt is != 1, you are shooting the
> > data while some other part of the application might still need it.
> >
> > Plus, there should be no expectation about a mbuf data content when
> retrieving
> > one from a mempool.
> > The only bytes that are guaranteed to be initialised by the mbuf API are its
> > metadata.
> >
> >
> > If there is an issue somewhere in dpdk where the mbuf data content is
> expected
> > to be 0 on allocation, please point at it.
> > Or share the full test that failed.
> 
> According to the DPDK test plan guide
> (https://doc.dpdk.org/dts/test_plans/loopback_virtio_user_server_mode_test_plan.html),
> Test Case 13 (loopback packed ring all path payload check test using server
> mode and multi-queues) requires the payload of each packet to be the same.
> The packet of the first stream is initialized to 0. This packet is then put
> back into the mempool (actually, the local cache of the core).
> The packets of the remaining streams are taken from the local cache directly
> and contain the first packet's header data in their payload. Therefore, the
> payloads of the packets differ.

Could you explain more about the problem?

But anyway I think this fix is wrong. After we're clear about the problem,
there should be another solution.

Thanks,
Chenbo

> 
> >
> >
> > --
> > David Marchand
  

Patch

diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index 070f0e6dfd..92ed07a334 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -417,10 +417,11 @@  static uint16_t
 eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 {
 	struct vhost_queue *r = q;
-	uint16_t i, nb_tx = 0;
+	uint16_t i, j, nb_tx = 0;
 	uint16_t nb_send = 0;
 	uint64_t nb_bytes = 0;
 	uint64_t nb_missed = 0;
+	void *data = NULL;
 
 	if (unlikely(rte_atomic32_read(&r->allow_queuing) == 0))
 		return 0;
@@ -483,8 +484,13 @@  eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 	for (i = nb_tx; i < nb_bufs; i++)
 		vhost_count_xcast_packets(r, bufs[i]);
 
-	for (i = 0; likely(i < nb_tx); i++)
+	for (i = 0; likely(i < nb_tx); i++) {
+		for (j = 0; j < bufs[i]->nb_segs; j++) {
+			data = rte_pktmbuf_mtod(bufs[i], void *);
+			memset(data, 0, bufs[i]->data_len);
+		}
 		rte_pktmbuf_free(bufs[i]);
+	}
 out:
 	rte_atomic32_set(&r->while_queuing, 0);
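
An aside on the hunk above: the inner loop runs nb_segs times but always reads the head mbuf, so for a chained mbuf only the first segment would be cleared (repeatedly). A segment-aware walk would look more like the sketch below; the reviewers' objection to clearing on free still applies either way.

	/* Hypothetical segment-aware variant of the memset loop above. */
	struct rte_mbuf *seg;

	for (seg = bufs[i]; seg != NULL; seg = seg->next)
		memset(rte_pktmbuf_mtod(seg, void *), 0, seg->data_len);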