net/ena: update Rx checksum errors on Rx path

Message ID 20190528082834.28697-1-mk@semihalf.com (mailing list archive)
State Accepted, archived
Delegated to: Ferruh Yigit
Series net/ena: update Rx checksum errors on Rx path

Checks

Context                          Check    Description
ci/checkpatch                    success  coding style OK
ci/mellanox-Performance-Testing  success  Performance Testing PASS
ci/intel-Performance-Testing     success  Performance Testing PASS
ci/Intel-compilation             fail     Compilation issues

Commit Message

Michal Krawczyk May 28, 2019, 8:28 a.m. UTC
Rx checksum flags and input errors shouldn't be updated on the Tx path,
as that would only work for packet forwarding.

The ierrors statistic should be updated on the Rx path instead, right
after the Rx checksum flags are checked, if Rx checksum offload is
enabled.

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
---
 drivers/net/ena/ena_ethdev.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)
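
For reference, a minimal sketch (not part of the patch; the helper name
print_rx_csum_errors is hypothetical, and it assumes the standard
rte_eth_stats_get() ethdev API) of where the relocated counter surfaces:
with this change, bad Rx checksums detected by the ENA PMD are reflected
in the per-port ierrors statistic on the Rx side rather than on Tx.

#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>

/* Hypothetical helper: after this patch, Rx checksum failures detected by
 * the ENA PMD are counted in the generic per-port ierrors statistic, so an
 * application can observe them through rte_eth_stats_get(). */
static void
print_rx_csum_errors(uint16_t port_id)
{
	struct rte_eth_stats stats;

	if (rte_eth_stats_get(port_id, &stats) == 0)
		printf("port %u: ierrors=%" PRIu64 "\n",
		       port_id, stats.ierrors);
}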
  

Comments

Ferruh Yigit June 6, 2019, 2:52 p.m. UTC | #1
On 5/28/2019 9:28 AM, Michal Krawczyk wrote:
> Rx checksum flags and input errors shouldn't be updated on the Tx path,
> as that would only work for packet forwarding.
> 
> The ierrors statistic should be updated on the Rx path instead, right
> after the Rx checksum flags are checked, if Rx checksum offload is
> enabled.
> 
> Signed-off-by: Michal Krawczyk <mk@semihalf.com>

Applied to dpdk-next-net/master, thanks.
  

Patch

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index b6651fc0f..2cc4a169f 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -2105,8 +2105,10 @@  static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		ena_rx_mbuf_prepare(mbuf_head, &ena_rx_ctx);
 
 		if (unlikely(mbuf_head->ol_flags &
-			(PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD)))
+			(PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD))) {
+			rte_atomic64_inc(&rx_ring->adapter->drv_stats->ierrors);
 			++rx_ring->rx_stats.bad_csum;
+		}
 
 		mbuf_head->hash.rss = ena_rx_ctx.hash;
 
@@ -2334,10 +2336,6 @@  static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		/* Set TX offloads flags, if applicable */
 		ena_tx_mbuf_prepare(mbuf, &ena_tx_ctx, tx_ring->offloads);
 
-		if (unlikely(mbuf->ol_flags &
-			     (PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD)))
-			rte_atomic64_inc(&tx_ring->adapter->drv_stats->ierrors);
-
 		rte_prefetch0(tx_pkts[(sent_idx + 4) & ring_mask]);
 
 		/* Process first segment taking into