net/mlx5: fix partial inline of fine grain packets
Commit Message
When a user tried to send multi-segment packets with the
RTE_PMD_MLX5_FINE_GRANULARITY_INLINE flag set, using a device with
minimum inlining requirements (such as ConnectX-4 Lx, or when the user
specified them explicitly), sending such packets caused a segfault.
The segfault was caused by failed invariants in the
mlx5_tx_packet_multi_inline function.

This patch introduces logic for multi-segment packets with the
RTE_PMD_MLX5_FINE_GRANULARITY_INLINE flag set: the mbuf scan that
fills the inline buffer is omitted, and only the minimal amount of
data required is inlined.
Fixes: ec837ad0fc7c ("net/mlx5: fix multi-segment inline for the first segments")
Cc: viacheslavo@nvidia.com
Cc: stable@dpdk.org
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
drivers/net/mlx5/mlx5_tx.h | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
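
For context, a minimal sketch of how an application might opt in to the
fine-granularity inline feature that triggers this code path.
setup_fine_granularity_inline() and mark_no_inline() are hypothetical
helper names; rte_mbuf_dynflag_register() and
RTE_PMD_MLX5_FINE_GRANULARITY_INLINE (from rte_pmd_mlx5.h) are the DPDK
interfaces involved. This assumes the usual dynflag workflow, where the
flag is registered before rte_eth_dev_start() so the PMD can detect it:

#include <stdint.h>
#include <string.h>
#include <rte_errno.h>
#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>
#include <rte_string_fns.h>
#include <rte_pmd_mlx5.h> /* defines RTE_PMD_MLX5_FINE_GRANULARITY_INLINE */

static uint64_t fg_noinline_mask; /* per-mbuf ol_flags bit, 0 if unset */

/*
 * Register the fine granularity inline dynamic flag. Assumed to run
 * before rte_eth_dev_start() so the mlx5 PMD can look the flag up.
 */
static int
setup_fine_granularity_inline(void)
{
	struct rte_mbuf_dynflag desc;
	int bit;

	memset(&desc, 0, sizeof(desc));
	rte_strlcpy(desc.name, RTE_PMD_MLX5_FINE_GRANULARITY_INLINE,
		    sizeof(desc.name));
	bit = rte_mbuf_dynflag_register(&desc);
	if (bit < 0)
		return -rte_errno;
	fg_noinline_mask = UINT64_C(1) << bit;
	return 0;
}

/* Per packet: ask the PMD not to inline this mbuf's data. */
static inline void
mark_no_inline(struct rte_mbuf *m)
{
	m->ol_flags |= fg_noinline_mask;
}

Packets carrying this per-mbuf flag on a device that also enforces a
minimum inline length are exactly the case that previously hit the
failed invariants described above.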
Comments
Hi,
> -----Original Message-----
> From: Dariusz Sosnowski <dsosnowski@nvidia.com>
> Sent: Wednesday, November 17, 2021 11:51 AM
> To: Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>
> Cc: dev@dpdk.org; stable@dpdk.org
> Subject: [PATCH] net/mlx5: fix partial inline of fine grain packets
Patch applied to next-net-mlx,
Kindest regards,
Raslan Darawsheh
@@ -1933,7 +1933,7 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
MLX5_ASSERT(txq->inlen_mode >=
MLX5_ESEG_MIN_INLINE_SIZE);
MLX5_ASSERT(txq->inlen_mode <= txq->inlen_send);
- inlen = txq->inlen_mode;
+ inlen = RTE_MIN(txq->inlen_mode, inlen);
} else if (vlan && !txq->vlan_en) {
/*
* VLAN insertion is requested and hardware does not
@@ -1946,6 +1946,8 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
} else {
goto do_first;
}
+ if (mbuf->ol_flags & RTE_MBUF_F_TX_DYNF_NOINLINE)
+ goto do_build;
/*
* Now we know the minimal amount of data is requested
* to inline. Check whether we should inline the buffers
@@ -1978,6 +1980,8 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
mbuf = NEXT(mbuf);
/* There should be not end of packet. */
MLX5_ASSERT(mbuf);
+ if (mbuf->ol_flags & RTE_MBUF_F_TX_DYNF_NOINLINE)
+ break;
nxlen = inlen + rte_pktmbuf_data_len(mbuf);
} while (unlikely(nxlen < txq->inlen_send));
}
@@ -2005,6 +2009,7 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
* Estimate the number of Data Segments conservatively,
* supposing no any mbufs is being freed during inlining.
*/
+do_build:
MLX5_ASSERT(inlen <= txq->inlen_send);
ds = NB_SEGS(loc->mbuf) + 2 + (inlen -
MLX5_ESEG_MIN_INLINE_SIZE +
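
Taken together, the hunks clamp the mandatory inline length to the data
actually prepared (the RTE_MIN change) and bypass the mbuf scan as soon
as a no-inline segment is seen (the two dynflag checks and the do_build
label). A hedged distillation of the resulting control flow follows;
compute_inline_len() is a hypothetical simplification, not the driver's
literal code:

#include <rte_common.h> /* RTE_MIN */
#include <rte_mbuf.h>

/*
 * Hypothetical distillation of the patched flow: 'inlen' is the data
 * already prepared from the first segments, 'inlen_mode' the device's
 * mandatory minimum, 'inlen_send' the inline threshold, and
 * 'noinline_mask' the no-inline dynamic flag bit.
 */
static unsigned int
compute_inline_len(struct rte_mbuf *mbuf, unsigned int inlen,
		   unsigned int inlen_mode, unsigned int inlen_send,
		   uint64_t noinline_mask)
{
	unsigned int nxlen;

	/* Clamp the mandatory minimum to the data actually available,
	 * so short fine-granularity packets keep the invariants intact. */
	inlen = RTE_MIN(inlen_mode, inlen);
	/* A no-inline mbuf skips the scan: inline only the minimum. */
	if (mbuf->ol_flags & noinline_mask)
		return inlen;
	/* Otherwise scan the chain, stopping at a no-inline segment. */
	nxlen = inlen + rte_pktmbuf_data_len(mbuf);
	while (nxlen < inlen_send) {
		inlen = nxlen;
		mbuf = mbuf->next;
		if (mbuf == NULL || (mbuf->ol_flags & noinline_mask))
			break;
		nxlen = inlen + rte_pktmbuf_data_len(mbuf);
	}
	return inlen;
}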