net/mlx5: fix partial inline of fine grain packets

Message ID 20211117095050.74177-1-dsosnowski@nvidia.com (mailing list archive)
State Accepted, archived
Delegated to: Raslan Darawsheh
Series: net/mlx5: fix partial inline of fine grain packets

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation success Compilation OK
ci/intel-Testing success Testing PASS
ci/iol-broadcom-Performance success Performance Testing PASS
ci/iol-mellanox-Performance success Performance Testing PASS
ci/iol-broadcom-Functional success Functional Testing PASS
ci/github-robot: build success github build: passed
ci/iol-aarch64-unit-testing success Testing PASS
ci/iol-aarch64-compile-testing success Testing PASS
ci/iol-x86_64-unit-testing success Testing PASS
ci/iol-x86_64-compile-testing success Testing PASS
ci/iol-intel-Functional success Functional Testing PASS
ci/iol-intel-Performance success Performance Testing PASS

Commit Message

Dariusz Sosnowski Nov. 17, 2021, 9:50 a.m. UTC
When a user tried to send multi-segment packets with the
RTE_PMD_MLX5_FINE_GRANULARITY_INLINE flag set, using a device with
minimum inlining requirements (such as ConnectX-4 Lx, or when the user
specified them explicitly), sending such packets caused a segfault.
The segfault was caused by failed invariants in the
mlx5_tx_packet_multi_inline function.

This patch introduces logic for multi-segment packets with the
RTE_PMD_MLX5_FINE_GRANULARITY_INLINE flag set that omits the mbuf scan
for filling the inline buffer and inlines only the minimal amount of
data required.

Fixes: ec837ad0fc7c ("net/mlx5: fix multi-segment inline for the first segments")
Cc: viacheslavo@nvidia.com
Cc: stable@dpdk.org

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/mlx5_tx.h | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)
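
For context, RTE_PMD_MLX5_FINE_GRANULARITY_INLINE is the name of an mbuf
dynamic flag (defined in rte_pmd_mlx5.h) that an application registers
before starting the device; once registered, setting the corresponding bit
in a packet's ol_flags asks the PMD not to inline that packet's data. The
sketch below shows the application side under those assumptions; the helper
names (register_noinline_flag, mark_no_inline) are illustrative, not part
of the DPDK API, and the exact sequencing requirements should be checked
against the mlx5 guide for your DPDK release.

#include <rte_errno.h>
#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>
#include <rte_bitops.h>
#include <rte_pmd_mlx5.h>

static uint64_t noinline_mask; /* ol_flags bit mask, filled at init */

/* Register the fine-granularity inline dynflag; call before dev start
 * so the PMD can pick it up during Tx queue setup. */
static int
register_noinline_flag(void)
{
	const struct rte_mbuf_dynflag desc = {
		.name = RTE_PMD_MLX5_FINE_GRANULARITY_INLINE,
	};
	int bit = rte_mbuf_dynflag_register(&desc);

	if (bit < 0)
		return -rte_errno;
	noinline_mask = RTE_BIT64(bit);
	return 0;
}

/* Ask the PMD not to inline this mbuf's data into the Tx WQE. */
static void
mark_no_inline(struct rte_mbuf *m)
{
	m->ol_flags |= noinline_mask;
}

Packets marked this way are the ones that take the new do_build early-exit
path added by this patch instead of the segment scan.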
  

Comments

Raslan Darawsheh Nov. 17, 2021, 11:53 a.m. UTC | #1
Hi,

> -----Original Message-----
> From: Dariusz Sosnowski <dsosnowski@nvidia.com>
> Sent: Wednesday, November 17, 2021 11:51 AM
> To: Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>
> Cc: dev@dpdk.org; stable@dpdk.org
> Subject: [PATCH] net/mlx5: fix partial inline of fine grain packets
> [...]

Patch applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh
  

Patch

diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index ad13b5e608..bc629983fa 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -1933,7 +1933,7 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
 			MLX5_ASSERT(txq->inlen_mode >=
 				    MLX5_ESEG_MIN_INLINE_SIZE);
 			MLX5_ASSERT(txq->inlen_mode <= txq->inlen_send);
-			inlen = txq->inlen_mode;
+			inlen = RTE_MIN(txq->inlen_mode, inlen);
 		} else if (vlan && !txq->vlan_en) {
 			/*
 			 * VLAN insertion is requested and hardware does not
@@ -1946,6 +1946,8 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
 		} else {
 			goto do_first;
 		}
+		if (mbuf->ol_flags & RTE_MBUF_F_TX_DYNF_NOINLINE)
+			goto do_build;
 		/*
 		 * Now we know the minimal amount of data is requested
 		 * to inline. Check whether we should inline the buffers
@@ -1978,6 +1980,8 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
 				mbuf = NEXT(mbuf);
 				/* There should be not end of packet. */
 				MLX5_ASSERT(mbuf);
+				if (mbuf->ol_flags & RTE_MBUF_F_TX_DYNF_NOINLINE)
+					break;
 				nxlen = inlen + rte_pktmbuf_data_len(mbuf);
 			} while (unlikely(nxlen < txq->inlen_send));
 		}
@@ -2005,6 +2009,7 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
 	 * Estimate the number of Data Segments conservatively,
 	 * supposing no any mbufs is being freed during inlining.
 	 */
+do_build:
 	MLX5_ASSERT(inlen <= txq->inlen_send);
 	ds = NB_SEGS(loc->mbuf) + 2 + (inlen -
 				       MLX5_ESEG_MIN_INLINE_SIZE +
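
Read together, the four hunks reshape the decision flow in
mlx5_tx_packet_multi_inline(). The condensed sketch below is a paraphrase
assembled from the hunks above for readability, not compilable driver
source; elided code is marked with comments.

	if (txq->inlen_mode) {
		/* Minimal inlining is required by the device or config.
		 * The fix clamps the required length to the inline data
		 * the packet actually provides; since inlen_mode <=
		 * inlen_send is asserted, inlen <= txq->inlen_send now
		 * holds on the early-exit path below. */
		inlen = RTE_MIN(txq->inlen_mode, inlen);
	} else if (vlan && !txq->vlan_en) {
		/* Software VLAN insertion path (body elided). */
	} else {
		goto do_first;
	}
	/* New: a packet carrying the "no inline" hint skips the segment
	 * scan and is built with only the minimal inline length computed
	 * above. */
	if (mbuf->ol_flags & RTE_MBUF_F_TX_DYNF_NOINLINE)
		goto do_build;
	/* ... */
	/* New: the scan that grows the inline part across segments now
	 * stops at the first segment carrying the hint. */
	do {
		inlen = nxlen;
		mbuf = NEXT(mbuf);
		MLX5_ASSERT(mbuf);
		if (mbuf->ol_flags & RTE_MBUF_F_TX_DYNF_NOINLINE)
			break;
		nxlen = inlen + rte_pktmbuf_data_len(mbuf);
	} while (unlikely(nxlen < txq->inlen_send));
	/* ... */
do_build:
	/* New label: target of the early exit, placed just before the
	 * Data Segment estimation and WQE construction. */
	MLX5_ASSERT(inlen <= txq->inlen_send);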