[v5] mbuf: fix reset on mbuf free
Commit Message
m->nb_segs must be reset on mbuf free whatever the value of m->next,
because it can happen that m->nb_segs != 1. For instance in this
case:
m1 = rte_pktmbuf_alloc(mp);
rte_pktmbuf_append(m1, 500);
m2 = rte_pktmbuf_alloc(mp);
rte_pktmbuf_append(m2, 500);
rte_pktmbuf_chain(m1, m2);
m0 = rte_pktmbuf_alloc(mp);
rte_pktmbuf_append(m0, 500);
rte_pktmbuf_chain(m0, m1);
As rte_pktmbuf_chain() does not reset nb_segs in the initial m1
segment (this is not required), after this code the mbuf chain
has 3 segments:
- m0: next=m1, nb_segs=3
- m1: next=m2, nb_segs=2
- m2: next=NULL, nb_segs=1
Splitting this chain between m1 and m2 would then result in 2 packets:
- first packet
  - m0: next=m1, nb_segs=2
  - m1: next=NULL, nb_segs=2
- second packet
  - m2: next=NULL, nb_segs=1
Freeing the first packet will not restore nb_segs=1 in the second
segment. This is an issue because it is expected that mbufs stored
in pool have their nb_segs field set to 1.
Fixes: 8f094a9ac5d7 ("mbuf: set mbuf fields while in pool")
Cc: stable@dpdk.org
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
app/test/test_mbuf.c | 69 ++++++++++++++++++++++++++++++++++++++++
lib/mbuf/rte_mbuf.c | 4 +--
lib/mbuf/rte_mbuf.h | 8 ++---
lib/mbuf/rte_mbuf_core.h | 13 ++++++--
4 files changed, 86 insertions(+), 8 deletions(-)
Comments
Tested-by: Ali Alnubani <alialnu@nvidia.com>
Applied, thanks.
I know it's too late for this patch [1], but I am afraid it was performance-tested using test-pmd with RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE [2], so the test did not exercise the function fixed by this patch, rte_pktmbuf_prefree_seg().
If so, the performance test results were irrelevant to this change. The same goes for other patches touching code that RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE bypasses.
RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE is a performance optimization with many application limitations, and should not be enabled for performance testing in the CI environment. Alternatively, performance testing can be performed both with and without this optimization.
[1] https://patchwork.dpdk.org/project/dpdk/patch/20210113132734.1636-1-olivier.matz@6wind.com/
[2] http://git.dpdk.org/dpdk/commit/app/test-pmd/testpmd.c?id=07e5f7bd65718461e4c63757248b9c7bab08341f
@@ -2684,6 +2684,70 @@ test_mbuf_dyn(struct rte_mempool *pktmbuf_pool)
return -1;
}
+/* check that m->nb_segs and m->next are reset on mbuf free */
+static int
+test_nb_segs_and_next_reset(void)
+{
+ struct rte_mbuf *m0 = NULL, *m1 = NULL, *m2 = NULL;
+ struct rte_mempool *pool = NULL;
+
+ pool = rte_pktmbuf_pool_create("test_mbuf_reset",
+ 3, 0, 0, MBUF_DATA_SIZE, SOCKET_ID_ANY);
+ if (pool == NULL)
+ GOTO_FAIL("Failed to create mbuf pool");
+
+ /* alloc mbufs */
+ m0 = rte_pktmbuf_alloc(pool);
+ m1 = rte_pktmbuf_alloc(pool);
+ m2 = rte_pktmbuf_alloc(pool);
+ if (m0 == NULL || m1 == NULL || m2 == NULL)
+ GOTO_FAIL("Failed to allocate mbuf");
+
+ /* append data in all of them */
+ if (rte_pktmbuf_append(m0, 500) == NULL ||
+ rte_pktmbuf_append(m1, 500) == NULL ||
+ rte_pktmbuf_append(m2, 500) == NULL)
+ GOTO_FAIL("Failed to append data in mbuf");
+
+ /* chain them in one mbuf m0 */
+ rte_pktmbuf_chain(m1, m2);
+ rte_pktmbuf_chain(m0, m1);
+ if (m0->nb_segs != 3 || m0->next != m1 || m1->next != m2 ||
+ m2->next != NULL) {
+ m1 = m2 = NULL;
+ GOTO_FAIL("Failed to chain mbufs");
+ }
+
+ /* split m0 chain in two, between m1 and m2 */
+ m0->nb_segs = 2;
+ m1->next = NULL;
+ m2->nb_segs = 1;
+
+ /* free the 2 mbuf chains m0 and m2 */
+ rte_pktmbuf_free(m0);
+ rte_pktmbuf_free(m2);
+
+ /* realloc the 3 mbufs */
+ m0 = rte_mbuf_raw_alloc(pool);
+ m1 = rte_mbuf_raw_alloc(pool);
+ m2 = rte_mbuf_raw_alloc(pool);
+ if (m0 == NULL || m1 == NULL || m2 == NULL)
+ GOTO_FAIL("Failed to reallocate mbuf");
+
+ /* ensure that m->next and m->nb_segs are reset on allocated mbufs */
+ if (m0->nb_segs != 1 || m0->next != NULL ||
+ m1->nb_segs != 1 || m1->next != NULL ||
+ m2->nb_segs != 1 || m2->next != NULL)
+ GOTO_FAIL("nb_segs or next was not reset properly");
+
+ return 0;
+
+fail:
+ if (pool != NULL)
+ rte_mempool_free(pool);
+ return -1;
+}
+
static int
test_mbuf(void)
{
@@ -2874,6 +2938,11 @@ test_mbuf(void)
goto err;
}
+ /* test reset of m->nb_segs and m->next on mbuf free */
+ if (test_nb_segs_and_next_reset() < 0) {
+ printf("test_nb_segs_and_next_reset() failed\n");
+ goto err;
+ }
ret = 0;
err:
@@ -134,10 +134,10 @@ rte_pktmbuf_free_pinned_extmem(void *addr, void *opaque)
rte_mbuf_ext_refcnt_set(m->shinfo, 1);
m->ol_flags = EXT_ATTACHED_MBUF;
- if (m->next != NULL) {
+ if (m->next != NULL)
m->next = NULL;
+ if (m->nb_segs != 1)
m->nb_segs = 1;
- }
rte_mbuf_raw_free(m);
}
@@ -1346,10 +1346,10 @@ rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
return NULL;
}
- if (m->next != NULL) {
+ if (m->next != NULL)
m->next = NULL;
+ if (m->nb_segs != 1)
m->nb_segs = 1;
- }
return m;
@@ -1363,10 +1363,10 @@ rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
return NULL;
}
- if (m->next != NULL) {
+ if (m->next != NULL)
m->next = NULL;
+ if (m->nb_segs != 1)
m->nb_segs = 1;
- }
rte_mbuf_refcnt_set(m, 1);
return m;
@@ -508,7 +508,12 @@ struct rte_mbuf {
* or non-atomic) is controlled by the RTE_MBUF_REFCNT_ATOMIC flag.
*/
uint16_t refcnt;
- uint16_t nb_segs; /**< Number of segments. */
+
+ /**
+ * Number of segments. Only valid for the first segment of an mbuf
+ * chain.
+ */
+ uint16_t nb_segs;
/** Input port (16 bits to support more than 256 virtual ports).
* The event eth Tx adapter uses this field to specify the output port.
@@ -604,7 +609,11 @@ struct rte_mbuf {
/* second cache line - fields only used in slow path or on TX */
RTE_MARKER cacheline1 __rte_cache_min_aligned;
- struct rte_mbuf *next; /**< Next segment of scattered packet. */
+ /**
+ * Next segment of scattered packet. Must be NULL in the last segment or
+ * in case of non-segmented packet.
+ */
+ struct rte_mbuf *next;
/* fields to support TX offloads */
RTE_STD_C11