From patchwork Mon Oct 31 08:33:43 2022
From: beilei.xing@intel.com
To: jingjing.wu@intel.com, beilei.xing@intel.com
Cc: dev@dpdk.org, Junfeng Guo, Xiaoyun Li
Subject: [PATCH v18 15/18] net/idpf: add support for Rx offloading
Date: Mon, 31 Oct 2022 08:33:43 +0000
Message-Id: <20221031083346.16558-16-beilei.xing@intel.com>
In-Reply-To: <20221031083346.16558-1-beilei.xing@intel.com>
References: <20221031051556.98549-1-beilei.xing@intel.com>
 <20221031083346.16558-1-beilei.xing@intel.com>

From: Junfeng Guo

Add Rx offloading support:
 - support CHKSUM and RSS offload for split queue model
 - support CHKSUM offload for single queue model

Signed-off-by: Beilei Xing
Signed-off-by: Xiaoyun Li
Signed-off-by: Junfeng Guo
---
 doc/guides/nics/features/idpf.ini |   5 ++
 drivers/net/idpf/idpf_ethdev.c    |   6 ++
 drivers/net/idpf/idpf_rxtx.c      | 123 ++++++++++++++++++++++++++++++
 drivers/net/idpf/idpf_vchnl.c     |  18 +++++
 4 files changed, 152 insertions(+)
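
For anyone trying the patch, here is a minimal sketch of how an application
would request these Rx checksum offloads once the PMD reports them in
rx_offload_capa (kept below the cut line, so it is not part of the commit).
The enable_rx_csum() helper and its port/queue parameters are illustrative,
not part of this series; only standard ethdev calls are used.

    #include <rte_ethdev.h>

    /* Illustrative helper: request the Rx checksum offloads added by this
     * patch, clamped to what the port actually advertises. */
    static int
    enable_rx_csum(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
    {
            struct rte_eth_dev_info dev_info;
            struct rte_eth_conf conf = { 0 };
            uint64_t wanted = RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
                              RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
                              RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
                              RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
            int ret;

            ret = rte_eth_dev_info_get(port_id, &dev_info);
            if (ret != 0)
                    return ret;

            /* Request only the subset the PMD reports as supported. */
            conf.rxmode.offloads = wanted & dev_info.rx_offload_capa;

            return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
    }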
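
And a matching sketch of consuming what the Rx path now sets per packet:
checksum verdicts in ol_flags, plus hash.rss on the split queue model. The
poll_port() helper, the burst size and the bad-checksum counter are
illustrative.

    #include <stdio.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Illustrative helper: count checksum failures flagged by the NIC and
     * read the RSS hash when the descriptor marked it valid. */
    static void
    poll_port(uint16_t port_id, uint64_t *bad_csum)
    {
            struct rte_mbuf *pkts[32];
            uint16_t i, nb;

            nb = rte_eth_rx_burst(port_id, 0, pkts, RTE_DIM(pkts));
            for (i = 0; i < nb; i++) {
                    struct rte_mbuf *m = pkts[i];

                    /* Checksum verdicts filled in by the Rx path. */
                    if ((m->ol_flags & RTE_MBUF_F_RX_IP_CKSUM_MASK) ==
                                RTE_MBUF_F_RX_IP_CKSUM_BAD ||
                        (m->ol_flags & RTE_MBUF_F_RX_L4_CKSUM_MASK) ==
                                RTE_MBUF_F_RX_L4_CKSUM_BAD)
                            (*bad_csum)++;

                    /* hash.rss is only meaningful when RSS_HASH is set
                     * (split queue model in this patch). */
                    if ((m->ol_flags & RTE_MBUF_F_RX_RSS_HASH) != 0)
                            printf("rss=0x%08x\n", (unsigned int)m->hash.rss);

                    rte_pktmbuf_free(m);
            }
    }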
diff --git a/doc/guides/nics/features/idpf.ini b/doc/guides/nics/features/idpf.ini
index d722c49fde..868571654f 100644
--- a/doc/guides/nics/features/idpf.ini
+++ b/doc/guides/nics/features/idpf.ini
@@ -3,8 +3,13 @@
 ;
 ; Refer to default.ini for the full list of available PMD features.
 ;
+; A feature with "P" indicates it is only supported when the non-vector
+; path is selected.
+;
 [Features]
 MTU update           = Y
+L3 checksum offload  = P
+L4 checksum offload  = P
 Linux                = Y
 x86-32               = Y
 x86-64               = Y
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 58560ea404..a09f104425 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -61,6 +61,12 @@ idpf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 	dev_info->flow_type_rss_offloads = IDPF_RSS_OFFLOAD_ALL;
 
+	dev_info->rx_offload_capa =
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+
 	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index a980714060..f15e61a785 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -1209,6 +1209,73 @@ idpf_stop_queues(struct rte_eth_dev *dev)
 	}
 }
 
+#define IDPF_RX_FLEX_DESC_ADV_STATUS0_XSUM_S \
+	(RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S) | \
+	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S) | \
+	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S) | \
+	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S))
+
+static inline uint64_t
+idpf_splitq_rx_csum_offload(uint8_t err)
+{
+	uint64_t flags = 0;
+
+	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L3L4P_S)) == 0))
+		return flags;
+
+	if (likely((err & IDPF_RX_FLEX_DESC_ADV_STATUS0_XSUM_S) == 0)) {
+		flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			  RTE_MBUF_F_RX_L4_CKSUM_GOOD);
+		return flags;
+	}
+
+	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S)) != 0))
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+	else
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+
+	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S)) != 0))
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
+	else
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
+
+	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S)) != 0))
+		flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
+
+	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S)) != 0))
+		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
+	else
+		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
+
+	return flags;
+}
+
+#define IDPF_RX_FLEX_DESC_ADV_HASH1_S  0
+#define IDPF_RX_FLEX_DESC_ADV_HASH2_S  16
+#define IDPF_RX_FLEX_DESC_ADV_HASH3_S  24
+
+static inline uint64_t
+idpf_splitq_rx_rss_offload(struct rte_mbuf *mb,
+			   volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc)
+{
+	uint8_t status_err0_qw0;
+	uint64_t flags = 0;
+
+	status_err0_qw0 = rx_desc->status_err0_qw0;
+
+	if ((status_err0_qw0 & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RSS_VALID_S)) != 0) {
+		flags |= RTE_MBUF_F_RX_RSS_HASH;
+		mb->hash.rss = (rte_le_to_cpu_16(rx_desc->hash1) <<
+				IDPF_RX_FLEX_DESC_ADV_HASH1_S) |
+			((uint32_t)(rx_desc->ff2_mirrid_hash2.hash2) <<
+			 IDPF_RX_FLEX_DESC_ADV_HASH2_S) |
+			((uint32_t)(rx_desc->hash3) <<
+			 IDPF_RX_FLEX_DESC_ADV_HASH3_S);
+	}
+
+	return flags;
+}
+
 static void
 idpf_split_rx_bufq_refill(struct idpf_rx_queue *rx_bufq)
 {
@@ -1282,9 +1349,11 @@ idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	uint16_t pktlen_gen_bufq_id;
 	struct idpf_rx_queue *rxq;
 	const uint32_t *ptype_tbl;
+	uint8_t status_err0_qw1;
 	struct rte_mbuf *rxm;
 	uint16_t rx_id_bufq1;
 	uint16_t rx_id_bufq2;
+	uint64_t pkt_flags;
 	uint16_t pkt_len;
 	uint16_t bufq_id;
 	uint16_t gen_id;
@@ -1349,11 +1418,18 @@ idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		rxm->next = NULL;
 		rxm->nb_segs = 1;
 		rxm->port = rxq->port_id;
+		rxm->ol_flags = 0;
 		rxm->packet_type =
 			ptype_tbl[(rte_le_to_cpu_16(rx_desc->ptype_err_fflags0) &
 				   VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M) >>
 				  VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_S];
 
+		status_err0_qw1 = rx_desc->status_err0_qw1;
+		pkt_flags = idpf_splitq_rx_csum_offload(status_err0_qw1);
+		pkt_flags |= idpf_splitq_rx_rss_offload(rxm, rx_desc);
+
+		rxm->ol_flags |= pkt_flags;
+
 		rx_pkts[nb_rx++] = rxm;
 	}
 
@@ -1513,6 +1589,48 @@ idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	return nb_tx;
 }
 
+#define IDPF_RX_FLEX_DESC_STATUS0_XSUM_S \
+	(RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) | \
+	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_L4E_S) | \
+	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S) | \
+	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S))
+
+/* Translate the rx descriptor status and error fields to pkt flags */
+static inline uint64_t
+idpf_rxd_to_pkt_flags(uint16_t status_error)
+{
+	uint64_t flags = 0;
+
+	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_L3L4P_S)) == 0))
+		return flags;
+
+	if (likely((status_error & IDPF_RX_FLEX_DESC_STATUS0_XSUM_S) == 0)) {
+		flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			  RTE_MBUF_F_RX_L4_CKSUM_GOOD);
+		return flags;
+	}
+
+	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_IPE_S)) != 0))
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+	else
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+
+	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_L4E_S)) != 0))
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
+	else
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
+
+	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S)) != 0))
+		flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
+
+	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S)) != 0))
+		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
+	else
+		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
+
+	return flags;
+}
+
 static inline void
 idpf_update_rx_tail(struct idpf_rx_queue *rxq, uint16_t nb_hold,
 		    uint16_t rx_id)
@@ -1546,6 +1664,7 @@ idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	struct rte_mbuf *rxm;
 	struct rte_mbuf *nmb;
 	uint16_t rx_status0;
+	uint64_t pkt_flags;
 	uint64_t dma_addr;
 	uint16_t nb_rx;
 
@@ -1611,10 +1730,14 @@ idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		rxm->pkt_len = rx_packet_len;
 		rxm->data_len = rx_packet_len;
 		rxm->port = rxq->port_id;
+		rxm->ol_flags = 0;
+		pkt_flags = idpf_rxd_to_pkt_flags(rx_status0);
 		rxm->packet_type =
 			ptype_tbl[(uint8_t)(rte_cpu_to_le_16(rxd.flex_nic_wb.ptype_flex_flags0) &
 					    VIRTCHNL2_RX_FLEX_DESC_PTYPE_M)];
 
+		rxm->ol_flags |= pkt_flags;
+
 		rx_pkts[nb_rx++] = rxm;
 	}
 	rxq->rx_tail = rx_id;
diff --git a/drivers/net/idpf/idpf_vchnl.c b/drivers/net/idpf/idpf_vchnl.c
index 4ef354df89..00ac5b2a6b 100644
--- a/drivers/net/idpf/idpf_vchnl.c
+++ b/drivers/net/idpf/idpf_vchnl.c
@@ -528,6 +528,24 @@ idpf_vc_get_caps(struct idpf_adapter *adapter)
 
 	memset(&caps_msg, 0, sizeof(struct virtchnl2_get_capabilities));
 
+	caps_msg.csum_caps =
+		VIRTCHNL2_CAP_TX_CSUM_L3_IPV4 |
+		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP |
+		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP |
+		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP |
+		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP |
+		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP |
+		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP |
+		VIRTCHNL2_CAP_TX_CSUM_GENERIC |
+		VIRTCHNL2_CAP_RX_CSUM_L3_IPV4 |
+		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP |
+		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP |
+		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP |
+		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP |
+		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP |
+		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP |
+		VIRTCHNL2_CAP_RX_CSUM_GENERIC;
+
 	caps_msg.rss_caps =
 		VIRTCHNL2_CAP_RSS_IPV4_TCP |
 		VIRTCHNL2_CAP_RSS_IPV4_UDP |