From patchwork Fri Sep 22 15:29:35 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Xing, Beilei" <beilei.xing@intel.com>
X-Patchwork-Id: 131816
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: beilei.xing@intel.com
To: jingjing.wu@intel.com
Cc: dev@dpdk.org, Beilei Xing <beilei.xing@intel.com>, stable@dpdk.org
Subject: [PATCH] common/idpf: fix Tx checksum offload
Date: Fri, 22 Sep 2023 15:29:35 +0000
Message-Id: <20230922152935.146302-1-beilei.xing@intel.com>
X-Mailer: git-send-email 2.34.1

From: Beilei Xing <beilei.xing@intel.com>

For multi-segment packets, Tx checksum offload takes effect only on the
last segment, because the data descriptors of the preceding segments
never enable the HW checksum offload command. Fix this by computing the
checksum command once per packet and setting it in every data
descriptor.
Fixes: ef47d95e9031 ("net/idpf: fix TSO")
Fixes: 8c6098afa075 ("common/idpf: add Rx/Tx data path")
Cc: stable@dpdk.org

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Tested-by: Zhimin Huang
---
 drivers/common/idpf/idpf_common_rxtx.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
index e6d2486272..83b131ef28 100644
--- a/drivers/common/idpf/idpf_common_rxtx.c
+++ b/drivers/common/idpf/idpf_common_rxtx.c
@@ -871,6 +871,7 @@ idpf_dp_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint16_t nb_to_clean;
 	uint16_t nb_tx = 0;
 	uint64_t ol_flags;
+	uint8_t cmd_dtype;
 	uint16_t nb_ctx;
 
 	if (unlikely(txq == NULL) || unlikely(!txq->q_started))
@@ -902,6 +903,7 @@ idpf_dp_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		if (txq->nb_free < tx_pkt->nb_segs)
 			break;
 
+		cmd_dtype = 0;
 		ol_flags = tx_pkt->ol_flags;
 		tx_offload.l2_len = tx_pkt->l2_len;
 		tx_offload.l3_len = tx_pkt->l3_len;
@@ -911,6 +913,9 @@ idpf_dp_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		nb_ctx = idpf_calc_context_desc(ol_flags);
 		nb_used = tx_pkt->nb_segs + nb_ctx;
 
+		if (ol_flags & IDPF_TX_CKSUM_OFFLOAD_MASK)
+			cmd_dtype = IDPF_TXD_FLEX_FLOW_CMD_CS_EN;
+
 		/* context descriptor */
 		if (nb_ctx != 0) {
 			volatile union idpf_flex_tx_ctx_desc *ctx_desc =
@@ -933,8 +938,8 @@ idpf_dp_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 			/* Setup TX descriptor */
 			txd->buf_addr = rte_cpu_to_le_64(rte_mbuf_data_iova(tx_pkt));
-			txd->qw1.cmd_dtype =
-				rte_cpu_to_le_16(IDPF_TX_DESC_DTYPE_FLEX_FLOW_SCHE);
+			cmd_dtype |= IDPF_TX_DESC_DTYPE_FLEX_FLOW_SCHE;
+			txd->qw1.cmd_dtype = cmd_dtype;
 			txd->qw1.rxr_bufsize = tx_pkt->data_len;
 			txd->qw1.compl_tag = sw_id;
 			tx_id++;
@@ -948,8 +953,6 @@ idpf_dp_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		/* fill the last descriptor with End of Packet (EOP) bit */
 		txd->qw1.cmd_dtype |= IDPF_TXD_FLEX_FLOW_CMD_EOP;
-		if (ol_flags & IDPF_TX_CKSUM_OFFLOAD_MASK)
-			txd->qw1.cmd_dtype |= IDPF_TXD_FLEX_FLOW_CMD_CS_EN;
 
 		txq->nb_free = (uint16_t)(txq->nb_free - nb_used);
 		txq->nb_used = (uint16_t)(txq->nb_used + nb_used);
@@ -1424,6 +1427,9 @@ idpf_dp_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			}
 		}
 
+		if (ol_flags & IDPF_TX_CKSUM_OFFLOAD_MASK)
+			td_cmd |= IDPF_TX_FLEX_DESC_CMD_CS_EN;
+
 		if (nb_ctx != 0) {
 			/* Setup TX context descriptor if required */
 			volatile union idpf_flex_tx_ctx_desc *ctx_txd =
@@ -1487,9 +1493,6 @@ idpf_dp_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			txq->nb_used = 0;
 		}
 
-		if (ol_flags & IDPF_TX_CKSUM_OFFLOAD_MASK)
-			td_cmd |= IDPF_TX_FLEX_DESC_CMD_CS_EN;
-
 		txd->qw1 |= rte_cpu_to_le_16(td_cmd << IDPF_TXD_QW1_CMD_S);
 	}
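
For readers outside the driver, the shape of the bug is easy to sketch.
The stand-alone snippet below is only an illustration of the pattern the
patch changes, not the idpf code itself: the struct, the flag names
(CMD_CS_EN, CMD_EOP, DTYPE_FLOW_SCHED), and the segment walk are all
simplified, hypothetical stand-ins.

    #include <stdint.h>
    #include <stddef.h>

    /* Simplified stand-ins for the driver's descriptor fields and
     * command bits; names and values are illustrative only. */
    #define DTYPE_FLOW_SCHED 0x02u /* descriptor type */
    #define CMD_CS_EN        0x40u /* enable HW checksum on a segment */
    #define CMD_EOP          0x10u /* end of packet */

    struct seg {
            struct seg *next;
            uint16_t cmd_dtype; /* plays the role of txd->qw1.cmd_dtype */
    };

    /* Buggy pattern: CS_EN is OR'ed in after the loop, so it lands only
     * in the last segment's descriptor. */
    static void fill_buggy(struct seg *pkt, int cksum)
    {
            struct seg *txd = NULL;

            for (struct seg *s = pkt; s != NULL; s = s->next) {
                    txd = s;
                    txd->cmd_dtype = DTYPE_FLOW_SCHED;
            }
            txd->cmd_dtype |= CMD_EOP;
            if (cksum)
                    txd->cmd_dtype |= CMD_CS_EN; /* last segment only */
    }

    /* Fixed pattern: compute the per-packet command once, then apply it
     * to every segment's descriptor inside the loop. */
    static void fill_fixed(struct seg *pkt, int cksum)
    {
            uint16_t cmd_dtype = cksum ? CMD_CS_EN : 0;
            struct seg *txd = NULL;

            for (struct seg *s = pkt; s != NULL; s = s->next) {
                    txd = s;
                    txd->cmd_dtype = cmd_dtype | DTYPE_FLOW_SCHED;
            }
            txd->cmd_dtype |= CMD_EOP; /* EOP still marks the last segment */
    }

    int main(void)
    {
            struct seg b = { NULL, 0 }, a = { &b, 0 };

            fill_buggy(&a, 1);  /* a.cmd_dtype lacks CMD_CS_EN here */
            fill_fixed(&a, 1);  /* now both a and b carry CMD_CS_EN */
            return !(a.cmd_dtype & CMD_CS_EN) || !(b.cmd_dtype & CMD_CS_EN);
    }

The point of the fix is the same in both Tx paths touched by the diff:
decide the per-packet command bits before walking the segments and write
them into every data descriptor, instead of OR-ing them into whichever
descriptor happens to be last.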