From patchwork Fri Feb 17 07:32:26 2023
X-Patchwork-Submitter: Junfeng Guo
X-Patchwork-Id: 124115
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Junfeng Guo
To: qi.z.zhang@intel.com, jingjing.wu@intel.com, ferruh.yigit@amd.com,
 beilei.xing@intel.com
Cc: dev@dpdk.org, xiaoyun.li@intel.com, helin.zhang@intel.com,
 Junfeng Guo, Rushil Gupta, Jordan Kimbrough, Jeroen de Borst
Subject: [RFC v3 08/10] net/gve: enable Tx checksum offload for DQO
Date: Fri, 17 Feb 2023 15:32:26 +0800
Message-Id: <20230217073228.340815-9-junfeng.guo@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230217073228.340815-1-junfeng.guo@intel.com>
References: <20230130062642.3337239-1-junfeng.guo@intel.com>
 <20230217073228.340815-1-junfeng.guo@intel.com>

Enable Tx checksum offload when any L4 checksum flag is set in the mbuf.
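For context (an illustrative sketch, not part of this patch): the new
GVE_TX_CKSUM_OFFLOAD_MASK matches mbufs that request an L4 checksum
(RTE_MBUF_F_TX_L4_MASK covers TCP/UDP/SCTP) or TSO (RTE_MBUF_F_TX_TCP_SEG).
An application would reach this path roughly as below; the helper name
request_tx_tcp_cksum is hypothetical:

#include <rte_mbuf.h>

/* Hypothetical helper: mark an mbuf for TCP checksum offload so that
 * (ol_flags & GVE_TX_CKSUM_OFFLOAD_MASK) is non-zero in the Tx path.
 */
static void
request_tx_tcp_cksum(struct rte_mbuf *m, uint16_t l2_len, uint16_t l3_len)
{
	m->l2_len = l2_len;   /* e.g. sizeof(struct rte_ether_hdr) */
	m->l3_len = l3_len;   /* e.g. sizeof(struct rte_ipv4_hdr) */
	m->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_TCP_CKSUM;
}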
Signed-off-by: Junfeng Guo
Signed-off-by: Rushil Gupta
Signed-off-by: Jordan Kimbrough
Signed-off-by: Jeroen de Borst
---
 drivers/net/gve/gve_ethdev.h | 4 ++++
 drivers/net/gve/gve_tx_dqo.c | 5 +++++
 2 files changed, 9 insertions(+)

diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
index a8e0dd5f3d..bca6e86ef0 100644
--- a/drivers/net/gve/gve_ethdev.h
+++ b/drivers/net/gve/gve_ethdev.h
@@ -38,6 +38,10 @@
 #define GVE_MAX_MTU RTE_ETHER_MTU
 #define GVE_MIN_MTU RTE_ETHER_MIN_MTU
 
+#define GVE_TX_CKSUM_OFFLOAD_MASK ( \
+		RTE_MBUF_F_TX_L4_MASK | \
+		RTE_MBUF_F_TX_TCP_SEG)
+
 /* A list of pages registered with the device during setup and used by a queue
  * as buffers
  */
diff --git a/drivers/net/gve/gve_tx_dqo.c b/drivers/net/gve/gve_tx_dqo.c
index 450cf71a6b..e925d6c3d0 100644
--- a/drivers/net/gve/gve_tx_dqo.c
+++ b/drivers/net/gve/gve_tx_dqo.c
@@ -77,6 +77,7 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	uint16_t mask, sw_mask;
 	uint16_t nb_to_clean;
 	uint16_t nb_tx = 0;
+	uint64_t ol_flags;
 	uint16_t nb_used;
 	uint16_t tx_id;
 	uint16_t sw_id;
@@ -103,6 +104,7 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		if (txq->nb_free < tx_pkt->nb_segs)
 			break;
 
+		ol_flags = tx_pkt->ol_flags;
 		nb_used = tx_pkt->nb_segs;
 
 		do {
@@ -127,6 +129,9 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		/* fill the last descriptor with End of Packet (EOP) bit */
 		txd->pkt.end_of_packet = 1;
 
+		if (ol_flags & GVE_TX_CKSUM_OFFLOAD_MASK)
+			txd->pkt.checksum_offload_enable = 1;
+
 		txq->nb_free -= nb_used;
 		txq->nb_used += nb_used;
 	}
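Usage note (illustrative, not part of this patch): the new
checksum_offload_enable bit is only set for mbufs carrying the flags in
GVE_TX_CKSUM_OFFLOAD_MASK, and an application normally requests the matching
port-level Tx offloads first. A minimal sketch using the generic ethdev API
(configure_tx_cksum is a hypothetical name; the offload set gve actually
advertises comes from its dev_info and is unchanged by this patch):

#include <rte_ethdev.h>

static int
configure_tx_cksum(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf conf = { 0 };

	/* Request L4 checksum and TSO offloads at configure time. */
	conf.txmode.offloads = RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
			       RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
			       RTE_ETH_TX_OFFLOAD_TCP_TSO;

	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}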