[v2,1/2] examples/ip_fragmentation: fix fail to send un-fragmented packets

Message ID 20190718101113.13909-2-konstantin.ananyev@intel.com (mailing list archive)
State Accepted, archived
Delegated to: Thomas Monjalon
Series Few fixes for ip_fragmentation

Checks

Context                          Check    Description
ci/checkpatch                    success  coding style OK
ci/Intel-compilation             success  Compilation OK
ci/mellanox-Performance-Testing  success  Performance Testing PASS
ci/intel-Performance-Testing     success  Performance Testing PASS

Commit Message

Ananyev, Konstantin July 18, 2019, 10:11 a.m. UTC
  With the latest changes l3fwd_simple_forward() blindly sets
(PKT_TX_IPV4 | PKT_TX_IP_CKSUM) in ol_flags for all IPv4 packets,
even though for un-fragmented packets we would also have to set
l3_len for the HW IP cksum offload to work properly.
That causes the HW/PMD to drop or generate invalid packets.
For un-fragmented packets we don't need to regenerate the IPv4
cksum anyway, as the L3 header is not modified.
Fix by setting ol_flags only when required.

Fixes: 16863bbb4a41 ("examples/ip_fragmentation: enable IP checksum offload")

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 examples/ip_fragmentation/main.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)
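
Note: the sketch below is not part of the patch. It illustrates, for a plain
untagged Ethernet/IPv4 frame, what the mbuf Tx offload API expects when HW
IPv4 checksum recomputation is requested: PKT_TX_IP_CKSUM is only honoured if
l2_len/l3_len are valid and the IPv4 header checksum field is zeroed, which is
what the un-fragmented path was missing. The helper name
request_ipv4_cksum_offload is purely illustrative.

#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_mbuf.h>

/* Request HW IPv4 header checksum recomputation for an outgoing mbuf. */
static void
request_ipv4_cksum_offload(struct rte_mbuf *m)
{
	struct rte_ipv4_hdr *ip_hdr;

	/* The PMD locates the IPv4 header via l2_len/l3_len. */
	m->l2_len = sizeof(struct rte_ether_hdr);
	m->l3_len = sizeof(struct rte_ipv4_hdr);
	m->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM;

	/* The checksum field must be zeroed before handing the packet to HW. */
	ip_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv4_hdr *, m->l2_len);
	ip_hdr->hdr_checksum = 0;
}

On the fragmentation path the L3 header is rewritten for every fragment, so
requesting the offload there is needed; forwarded un-fragmented packets keep a
valid header checksum, so no flag is required. Accumulating the flags in a
local ol_flags variable, as the patch does, keeps that distinction in one place.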
  

Patch

diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index ccaf23ff0..c30dd5b5a 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -245,8 +245,10 @@  l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
 	uint8_t ipv6;
 	uint16_t port_out;
 	int32_t len2;
+	uint64_t ol_flags;
 
 	ipv6 = 0;
+	ol_flags = 0;
 	rxq = &qconf->rx_queue_list[queueid];
 
 	/* by default, send everything back to the source port */
@@ -289,6 +291,9 @@  l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
 			/* Free input packet */
 			rte_pktmbuf_free(m);
 
+			/* request HW to regenerate IPv4 cksum */
+			ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+
 			/* If we fail to fragment the packet */
 			if (unlikely (len2 < 0))
 				return;
@@ -348,11 +353,13 @@  l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
 			rte_panic("No headroom in mbuf.\n");
 		}
 
+		m->ol_flags |= ol_flags;
 		m->l2_len = sizeof(struct rte_ether_hdr);
 
 		/* 02:00:00:00:00:xx */
 		d_addr_bytes = &eth_hdr->d_addr.addr_bytes[0];
-		*((uint64_t *)d_addr_bytes) = 0x000000000002 + ((uint64_t)port_out << 40);
+		*((uint64_t *)d_addr_bytes) = 0x000000000002 +
+			((uint64_t)port_out << 40);
 
 		/* src addr */
 		rte_ether_addr_copy(&ports_eth_addr[port_out],
@@ -363,7 +370,6 @@  l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
 		} else {
 			eth_hdr->ether_type =
 				rte_be_to_cpu_16(RTE_ETHER_TYPE_IPV4);
-			m->ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
 		}
 	}