From patchwork Thu Apr 13 06:16:48 2023
X-Patchwork-Submitter: Junfeng Guo
X-Patchwork-Id: 125988
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Junfeng Guo
To: qi.z.zhang@intel.com, jingjing.wu@intel.com, ferruh.yigit@amd.com,
 beilei.xing@intel.com
Cc: dev@dpdk.org, Junfeng Guo, Rushil Gupta, Joshua Washington,
 Jeroen de Borst
Subject: [PATCH 08/10] net/gve: enable Tx checksum offload for DQO
Date: Thu, 13 Apr 2023 14:16:48 +0800
Message-Id: <20230413061650.796940-9-junfeng.guo@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230413061650.796940-1-junfeng.guo@intel.com>
References: <20230413061650.796940-1-junfeng.guo@intel.com>
List-Id: DPDK patches and discussions

Enable Tx checksum offload once any L4 checksum flag is set.

Signed-off-by: Junfeng Guo
Signed-off-by: Rushil Gupta
Signed-off-by: Joshua Washington
Signed-off-by: Jeroen de Borst
---
 drivers/net/gve/gve_ethdev.h | 4 ++++
 drivers/net/gve/gve_tx_dqo.c | 5 +++++
 2 files changed, 9 insertions(+)

diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
index 617bb55a85..4a0e860afa 100644
--- a/drivers/net/gve/gve_ethdev.h
+++ b/drivers/net/gve/gve_ethdev.h
@@ -38,6 +38,10 @@
 #define GVE_MAX_MTU RTE_ETHER_MTU
 #define GVE_MIN_MTU RTE_ETHER_MIN_MTU
 
+#define GVE_TX_CKSUM_OFFLOAD_MASK ( \
+		RTE_MBUF_F_TX_L4_MASK | \
+		RTE_MBUF_F_TX_TCP_SEG)
+
 /* A list of pages registered with the device during setup and used by a queue
  * as buffers
  */
diff --git a/drivers/net/gve/gve_tx_dqo.c b/drivers/net/gve/gve_tx_dqo.c
index 578a409616..b38eeaea4b 100644
--- a/drivers/net/gve/gve_tx_dqo.c
+++ b/drivers/net/gve/gve_tx_dqo.c
@@ -78,6 +78,7 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	uint16_t mask, sw_mask;
 	uint16_t nb_to_clean;
 	uint16_t nb_tx = 0;
+	uint64_t ol_flags;
 	uint16_t nb_used;
 	uint16_t tx_id;
 	uint16_t sw_id;
@@ -104,6 +105,7 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		if (txq->nb_free < tx_pkt->nb_segs)
 			break;
 
+		ol_flags = tx_pkt->ol_flags;
 		nb_used = tx_pkt->nb_segs;
 
 		do {
@@ -128,6 +130,9 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		/* fill the last descriptor with End of Packet (EOP) bit */
 		txd->pkt.end_of_packet = 1;
 
+		if (ol_flags & GVE_TX_CKSUM_OFFLOAD_MASK)
+			txd->pkt.checksum_offload_enable = 1;
+
 		txq->nb_free -= nb_used;
 		txq->nb_used += nb_used;
 	}