From patchwork Tue Nov 2 05:39:15 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Junfeng Guo X-Patchwork-Id: 103421 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 5274CA0C4E; Tue, 2 Nov 2021 06:40:32 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 8657E410FF; Tue, 2 Nov 2021 06:40:30 +0100 (CET) Received: from mga02.intel.com (mga02.intel.com [134.134.136.20]) by mails.dpdk.org (Postfix) with ESMTP id F0F15410FE for ; Tue, 2 Nov 2021 06:40:27 +0100 (CET) X-IronPort-AV: E=McAfee;i="6200,9189,10155"; a="218382100" X-IronPort-AV: E=Sophos;i="5.87,201,1631602800"; d="scan'208";a="218382100" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 01 Nov 2021 22:40:27 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.87,201,1631602800"; d="scan'208";a="488945138" Received: from dpdk-junfengguo-v1.sh.intel.com ([10.67.119.216]) by orsmga007.jf.intel.com with ESMTP; 01 Nov 2021 22:40:24 -0700 From: Junfeng Guo To: qi.z.zhang@intel.com, jingjing.wu@intel.com, beilei.xing@intel.com Cc: dev@dpdk.org, ferruh.yigit@intel.com, ting.xu@intel.com, junfeng.guo@intel.com Date: Tue, 2 Nov 2021 13:39:15 +0800 Message-Id: <20211102053918.4063391-2-junfeng.guo@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20211102053918.4063391-1-junfeng.guo@intel.com> References: <20211101083612.2380503-5-junfeng.guo@intel.com> <20211102053918.4063391-1-junfeng.guo@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v9 1/4] net/ice/base: add method to disable FDIR SWAP option X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" In this patch, we introduced a new parameter to enable/disable the FDIR SWAP option by setting the swap and inset register set with certain values. Signed-off-by: Junfeng Guo --- drivers/net/ice/base/ice_flex_pipe.c | 44 ++++++++++++++++++++++++++-- drivers/net/ice/base/ice_flex_pipe.h | 3 +- drivers/net/ice/base/ice_flow.c | 2 +- 3 files changed, 45 insertions(+), 4 deletions(-) diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c index f35d59f4f5..06a233990f 100644 --- a/drivers/net/ice/base/ice_flex_pipe.c +++ b/drivers/net/ice/base/ice_flex_pipe.c @@ -4952,6 +4952,43 @@ ice_add_prof_attrib(struct ice_prof_map *prof, u8 ptg, u16 ptype, return ICE_SUCCESS; } +/** + * ice_disable_fd_swap - set register appropriately to disable FD swap + * @hw: pointer to the HW struct + * @prof_id: profile ID + */ +void ice_disable_fd_swap(struct ice_hw *hw, u16 prof_id) +{ + u8 swap_val = ICE_SWAP_VALID; + u8 i; + /* Since the SWAP Flag in the Programming Desc doesn't work, + * here add method to disable the SWAP Option via setting + * certain SWAP and INSET register set. 
+ */ + for (i = 0; i < hw->blk[ICE_BLK_FD].es.fvw / 4; i++) { + u32 raw_swap = 0; + u32 raw_in = 0; + u8 j; + + for (j = 0; j < 4; j++) { + raw_swap |= (swap_val++) << (j * BITS_PER_BYTE); + raw_in |= ICE_INSET_DFLT << (j * BITS_PER_BYTE); + } + + /* write the FDIR swap register set */ + wr32(hw, GLQF_FDSWAP(prof_id, i), raw_swap); + + ice_debug(hw, ICE_DBG_INIT, "swap wr(%d, %d): %x = %08x\n", + prof_id, i, GLQF_FDSWAP(prof_id, i), raw_swap); + + /* write the FDIR inset register set */ + wr32(hw, GLQF_FDINSET(prof_id, i), raw_in); + + ice_debug(hw, ICE_DBG_INIT, "inset wr(%d, %d): %x = %08x\n", + prof_id, i, GLQF_FDINSET(prof_id, i), raw_in); + } +} + /** * ice_add_prof - add profile * @hw: pointer to the HW struct @@ -4962,6 +4999,7 @@ ice_add_prof_attrib(struct ice_prof_map *prof, u8 ptg, u16 ptype, * @attr_cnt: number of elements in attrib array * @es: extraction sequence (length of array is determined by the block) * @masks: mask for extraction sequence + * @fd_swap: enable/disable FDIR paired src/dst fields swap option * * This function registers a profile, which matches a set of PTYPES with a * particular extraction sequence. While the hardware profile is allocated @@ -4971,7 +5009,7 @@ ice_add_prof_attrib(struct ice_prof_map *prof, u8 ptg, u16 ptype, enum ice_status ice_add_prof(struct ice_hw *hw, enum ice_block blk, u64 id, u8 ptypes[], const struct ice_ptype_attributes *attr, u16 attr_cnt, - struct ice_fv_word *es, u16 *masks) + struct ice_fv_word *es, u16 *masks, bool fd_swap) { u32 bytes = DIVIDE_AND_ROUND_UP(ICE_FLOW_PTYPE_MAX, BITS_PER_BYTE); ice_declare_bitmap(ptgs_used, ICE_XLT1_CNT); @@ -4991,7 +5029,7 @@ ice_add_prof(struct ice_hw *hw, enum ice_block blk, u64 id, u8 ptypes[], status = ice_alloc_prof_id(hw, blk, &prof_id); if (status) goto err_ice_add_prof; - if (blk == ICE_BLK_FD) { + if (blk == ICE_BLK_FD && fd_swap) { /* For Flow Director block, the extraction sequence may * need to be altered in the case where there are paired * fields that have no match. 
This is necessary because @@ -5002,6 +5040,8 @@ ice_add_prof(struct ice_hw *hw, enum ice_block blk, u64 id, u8 ptypes[], status = ice_update_fd_swap(hw, prof_id, es); if (status) goto err_ice_add_prof; + } else if (blk == ICE_BLK_FD) { + ice_disable_fd_swap(hw, prof_id); } status = ice_update_prof_masking(hw, blk, prof_id, masks); if (status) diff --git a/drivers/net/ice/base/ice_flex_pipe.h b/drivers/net/ice/base/ice_flex_pipe.h index 9733c4b214..dd332312dd 100644 --- a/drivers/net/ice/base/ice_flex_pipe.h +++ b/drivers/net/ice/base/ice_flex_pipe.h @@ -61,10 +61,11 @@ bool ice_hw_ptype_ena(struct ice_hw *hw, u16 ptype); /* XLT2/VSI group functions */ enum ice_status ice_vsig_find_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 *vsig); +void ice_disable_fd_swap(struct ice_hw *hw, u16 prof_id); enum ice_status ice_add_prof(struct ice_hw *hw, enum ice_block blk, u64 id, u8 ptypes[], const struct ice_ptype_attributes *attr, u16 attr_cnt, - struct ice_fv_word *es, u16 *masks); + struct ice_fv_word *es, u16 *masks, bool fd_swap); void ice_init_all_prof_masks(struct ice_hw *hw); void ice_shutdown_all_prof_masks(struct ice_hw *hw); struct ice_prof_map * diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c index 96d54b494d..77b6b130c1 100644 --- a/drivers/net/ice/base/ice_flow.c +++ b/drivers/net/ice/base/ice_flow.c @@ -2244,7 +2244,7 @@ ice_flow_add_prof_sync(struct ice_hw *hw, enum ice_block blk, /* Add a HW profile for this flow profile */ status = ice_add_prof(hw, blk, prof_id, (u8 *)params->ptypes, params->attr, params->attr_cnt, params->es, - params->mask); + params->mask, true); if (status) { ice_debug(hw, ICE_DBG_FLOW, "Error adding a HW flow profile\n"); goto out; From patchwork Tue Nov 2 05:39:16 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Junfeng Guo X-Patchwork-Id: 103422 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 293D8A0C4E; Tue, 2 Nov 2021 06:40:38 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 9D2DB41109; Tue, 2 Nov 2021 06:40:34 +0100 (CET) Received: from mga02.intel.com (mga02.intel.com [134.134.136.20]) by mails.dpdk.org (Postfix) with ESMTP id 3E94241104 for ; Tue, 2 Nov 2021 06:40:32 +0100 (CET) X-IronPort-AV: E=McAfee;i="6200,9189,10155"; a="218382108" X-IronPort-AV: E=Sophos;i="5.87,201,1631602800"; d="scan'208";a="218382108" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 01 Nov 2021 22:40:31 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.87,201,1631602800"; d="scan'208";a="488945157" Received: from dpdk-junfengguo-v1.sh.intel.com ([10.67.119.216]) by orsmga007.jf.intel.com with ESMTP; 01 Nov 2021 22:40:27 -0700 From: Junfeng Guo To: qi.z.zhang@intel.com, jingjing.wu@intel.com, beilei.xing@intel.com Cc: dev@dpdk.org, ferruh.yigit@intel.com, ting.xu@intel.com, junfeng.guo@intel.com Date: Tue, 2 Nov 2021 13:39:16 +0800 Message-Id: <20211102053918.4063391-3-junfeng.guo@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20211102053918.4063391-1-junfeng.guo@intel.com> References: <20211101083612.2380503-5-junfeng.guo@intel.com> <20211102053918.4063391-1-junfeng.guo@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v9 2/4] 
net/ice/base: add function to set HW profile for raw flow X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Based on the parser library, we can directly set HW profile and associate the main/ctrl vsi. This patch set also updated the release date in README. Signed-off-by: Junfeng Guo --- drivers/net/ice/base/README | 2 +- drivers/net/ice/base/ice_flex_pipe.c | 49 ++++++++++++++++ drivers/net/ice/base/ice_flex_pipe.h | 3 + drivers/net/ice/base/ice_flow.c | 84 ++++++++++++++++++++++++++++ drivers/net/ice/base/ice_flow.h | 4 ++ 5 files changed, 141 insertions(+), 1 deletion(-) diff --git a/drivers/net/ice/base/README b/drivers/net/ice/base/README index 87a1cebfac..0b23fc4795 100644 --- a/drivers/net/ice/base/README +++ b/drivers/net/ice/base/README @@ -6,7 +6,7 @@ IntelĀ® ICE driver ================== This directory contains source code of FreeBSD ice driver of version -2021.04.29 released by the team which develops +2021.10.28 released by the team which develops basic drivers for any ice NIC. The directory of base/ contains the original source package. This driver is valid for the product(s) listed below diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c index 06a233990f..395787806b 100644 --- a/drivers/net/ice/base/ice_flex_pipe.c +++ b/drivers/net/ice/base/ice_flex_pipe.c @@ -6365,3 +6365,52 @@ ice_rem_prof_id_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u64 hdl) return status; } + +/** + * ice_flow_assoc_hw_prof - add profile id flow for main/ctrl VSI flow entry + * @hw: pointer to the HW struct + * @blk: HW block + * @dest_vsi_handle: dest VSI handle + * @fdir_vsi_handle: fdir programming VSI handle + * @id: profile id (handle) + * + * Calling this function will update the hardware tables to enable the + * profile indicated by the ID parameter for the VSIs specified in the VSI + * array. Once successfully called, the flow will be enabled. 
+ */ +enum ice_status +ice_flow_assoc_hw_prof(struct ice_hw *hw, enum ice_block blk, + u16 dest_vsi_handle, u16 fdir_vsi_handle, int id) +{ + enum ice_status status = ICE_SUCCESS; + u16 vsi_num; + + vsi_num = ice_get_hw_vsi_num(hw, dest_vsi_handle); + status = ice_add_prof_id_flow(hw, blk, vsi_num, id); + if (status) { + ice_debug(hw, ICE_DBG_FLOW, "HW profile add failed for main VSI flow entry, %d\n", + status); + goto err_add_prof; + } + + if (blk != ICE_BLK_FD) + return status; + + vsi_num = ice_get_hw_vsi_num(hw, fdir_vsi_handle); + status = ice_add_prof_id_flow(hw, blk, vsi_num, id); + if (status) { + ice_debug(hw, ICE_DBG_FLOW, "HW profile add failed for ctrl VSI flow entry, %d\n", + status); + goto err_add_entry; + } + + return status; + +err_add_entry: + vsi_num = ice_get_hw_vsi_num(hw, dest_vsi_handle); + ice_rem_prof_id_flow(hw, blk, vsi_num, id); +err_add_prof: + ice_flow_rem_prof(hw, blk, id); + + return status; +} diff --git a/drivers/net/ice/base/ice_flex_pipe.h b/drivers/net/ice/base/ice_flex_pipe.h index dd332312dd..23ba45564a 100644 --- a/drivers/net/ice/base/ice_flex_pipe.h +++ b/drivers/net/ice/base/ice_flex_pipe.h @@ -76,6 +76,9 @@ enum ice_status ice_add_prof_id_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u64 hdl); enum ice_status ice_rem_prof_id_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u64 hdl); +enum ice_status +ice_flow_assoc_hw_prof(struct ice_hw *hw, enum ice_block blk, + u16 dest_vsi_handle, u16 fdir_vsi_handle, int id); enum ice_status ice_init_pkg(struct ice_hw *hw, u8 *buff, u32 len); enum ice_status ice_copy_and_init_pkg(struct ice_hw *hw, const u8 *buf, u32 len); diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c index 77b6b130c1..f699dbbc74 100644 --- a/drivers/net/ice/base/ice_flow.c +++ b/drivers/net/ice/base/ice_flow.c @@ -2524,6 +2524,90 @@ ice_flow_disassoc_prof(struct ice_hw *hw, enum ice_block blk, return status; } +#define FLAG_GTP_EH_PDU_LINK BIT_ULL(13) +#define FLAG_GTP_EH_PDU BIT_ULL(14) + +#define FLAG_GTPU_MSK \ + (FLAG_GTP_EH_PDU | FLAG_GTP_EH_PDU_LINK) +#define FLAG_GTPU_DW \ + (FLAG_GTP_EH_PDU | FLAG_GTP_EH_PDU_LINK) +#define FLAG_GTPU_UP \ + (FLAG_GTP_EH_PDU) +/** + * ice_flow_set_hw_prof - Set HW flow profile based on the parsed profile info + * @hw: pointer to the HW struct + * @dest_vsi_handle: dest VSI handle + * @fdir_vsi_handle: fdir programming VSI handle + * @prof: stores parsed profile info from raw flow + * @blk: classification stage + */ +enum ice_status +ice_flow_set_hw_prof(struct ice_hw *hw, u16 dest_vsi_handle, + u16 fdir_vsi_handle, struct ice_parser_profile *prof, + enum ice_block blk) +{ + int id = ice_find_first_bit(prof->ptypes, UINT16_MAX); + struct ice_flow_prof_params *params; + u8 fv_words = hw->blk[blk].es.fvw; + enum ice_status status; + u16 vsi_num; + int i, idx; + + params = (struct ice_flow_prof_params *)ice_malloc(hw, sizeof(*params)); + if (!params) + return ICE_ERR_NO_MEMORY; + + for (i = 0; i < ICE_MAX_FV_WORDS; i++) { + params->es[i].prot_id = ICE_PROT_INVALID; + params->es[i].off = ICE_FV_OFFSET_INVAL; + } + + for (i = 0; i < prof->fv_num; i++) { + if (hw->blk[blk].es.reverse) + idx = fv_words - i - 1; + else + idx = i; + params->es[idx].prot_id = prof->fv[i].proto_id; + params->es[idx].off = prof->fv[i].offset; + params->mask[idx] = CPU_TO_BE16(prof->fv[i].msk); + } + + switch (prof->flags) { + case FLAG_GTPU_DW: + params->attr = ice_attr_gtpu_down; + params->attr_cnt = ARRAY_SIZE(ice_attr_gtpu_down); + break; + case FLAG_GTPU_UP: + params->attr = 
ice_attr_gtpu_up; + params->attr_cnt = ARRAY_SIZE(ice_attr_gtpu_up); + break; + default: + if (prof->flags_msk & FLAG_GTPU_MSK) { + params->attr = ice_attr_gtpu_session; + params->attr_cnt = ARRAY_SIZE(ice_attr_gtpu_session); + } + break; + } + + status = ice_add_prof(hw, blk, id, (u8 *)prof->ptypes, + params->attr, params->attr_cnt, + params->es, params->mask, false); + if (status) + goto free_params; + + status = ice_flow_assoc_hw_prof(hw, blk, dest_vsi_handle, + fdir_vsi_handle, id); + if (status) + goto free_params; + + return ICE_SUCCESS; + +free_params: + ice_free(hw, params); + + return status; +} + /** * ice_flow_add_prof - Add a flow profile for packet segments and matched fields * @hw: pointer to the HW struct diff --git a/drivers/net/ice/base/ice_flow.h b/drivers/net/ice/base/ice_flow.h index 371d960066..dea7b3c0e8 100644 --- a/drivers/net/ice/base/ice_flow.h +++ b/drivers/net/ice/base/ice_flow.h @@ -548,6 +548,10 @@ enum ice_status ice_flow_assoc_vsig_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi_handle, u16 vsig); enum ice_status +ice_flow_set_hw_prof(struct ice_hw *hw, u16 dest_vsi_handle, + u16 fdir_vsi_handle, struct ice_parser_profile *prof, + enum ice_block blk); +enum ice_status ice_flow_get_hw_prof(struct ice_hw *hw, enum ice_block blk, u64 prof_id, u8 *hw_prof); From patchwork Tue Nov 2 05:39:17 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Junfeng Guo X-Patchwork-Id: 103423 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 6309FA0C4E; Tue, 2 Nov 2021 06:40:43 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id B075441104; Tue, 2 Nov 2021 06:40:37 +0100 (CET) Received: from mga02.intel.com (mga02.intel.com [134.134.136.20]) by mails.dpdk.org (Postfix) with ESMTP id 636684111E for ; Tue, 2 Nov 2021 06:40:36 +0100 (CET) X-IronPort-AV: E=McAfee;i="6200,9189,10155"; a="218382114" X-IronPort-AV: E=Sophos;i="5.87,201,1631602800"; d="scan'208";a="218382114" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 01 Nov 2021 22:40:36 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.87,201,1631602800"; d="scan'208";a="488945163" Received: from dpdk-junfengguo-v1.sh.intel.com ([10.67.119.216]) by orsmga007.jf.intel.com with ESMTP; 01 Nov 2021 22:40:32 -0700 From: Junfeng Guo To: qi.z.zhang@intel.com, jingjing.wu@intel.com, beilei.xing@intel.com Cc: dev@dpdk.org, ferruh.yigit@intel.com, ting.xu@intel.com, junfeng.guo@intel.com Date: Tue, 2 Nov 2021 13:39:17 +0800 Message-Id: <20211102053918.4063391-4-junfeng.guo@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20211102053918.4063391-1-junfeng.guo@intel.com> References: <20211101083612.2380503-5-junfeng.guo@intel.com> <20211102053918.4063391-1-junfeng.guo@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v9 3/4] app/testpmd: update Max RAW pattern size to 512 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Update max size for pattern in struct rte_flow_item_raw to enable protocol agnostic flow offloading. 
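The raw spec and mask are given to testpmd as hex strings, two characters per packet byte, so the previous 40-character limit covered only the first 20 bytes of a packet. A minimal sketch of the sizing assumption (the derived macro below is illustrative only, not part of this patch):

  /* Two hex characters describe one packet byte, so a pattern buffer of
   * ITEM_RAW_PATTERN_SIZE characters can express a packet prefix of up
   * to 256 bytes -- enough for the headers matched by protocol agnostic
   * FDIR rules.
   */
  #define ITEM_RAW_PATTERN_SIZE 512
  #define ITEM_RAW_MAX_PKT_BYTES (ITEM_RAW_PATTERN_SIZE / 2) /* illustrative */
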
Signed-off-by: Junfeng Guo --- app/test-pmd/cmdline_flow.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 24b224e632..0a2205804e 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -495,7 +495,7 @@ enum index { }; /** Maximum size for pattern in struct rte_flow_item_raw. */ -#define ITEM_RAW_PATTERN_SIZE 40 +#define ITEM_RAW_PATTERN_SIZE 512 /** Maximum size for GENEVE option data pattern in bytes. */ #define ITEM_GENEVE_OPT_DATA_SIZE 124 From patchwork Tue Nov 2 05:39:18 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Junfeng Guo X-Patchwork-Id: 103424 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 2EA83A0C4E; Tue, 2 Nov 2021 06:40:49 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 110B7410FD; Tue, 2 Nov 2021 06:40:44 +0100 (CET) Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by mails.dpdk.org (Postfix) with ESMTP id A181B4068F for ; Tue, 2 Nov 2021 06:40:42 +0100 (CET) X-IronPort-AV: E=McAfee;i="6200,9189,10155"; a="231446390" X-IronPort-AV: E=Sophos;i="5.87,201,1631602800"; d="scan'208";a="231446390" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 01 Nov 2021 22:40:41 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.87,201,1631602800"; d="scan'208";a="488945172" Received: from dpdk-junfengguo-v1.sh.intel.com ([10.67.119.216]) by orsmga007.jf.intel.com with ESMTP; 01 Nov 2021 22:40:36 -0700 From: Junfeng Guo To: qi.z.zhang@intel.com, jingjing.wu@intel.com, beilei.xing@intel.com Cc: dev@dpdk.org, ferruh.yigit@intel.com, ting.xu@intel.com, junfeng.guo@intel.com Date: Tue, 2 Nov 2021 13:39:18 +0800 Message-Id: <20211102053918.4063391-5-junfeng.guo@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20211102053918.4063391-1-junfeng.guo@intel.com> References: <20211101083612.2380503-5-junfeng.guo@intel.com> <20211102053918.4063391-1-junfeng.guo@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v9 4/4] net/ice: enable protocol agnostic flow offloading in FDIR X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Protocol agnostic flow offloading in Flow Director is enabled by this patch based on the Parser Library, using existing rte_flow raw API. Note that the raw flow requires: 1. byte string of raw target packet bits. 2. byte string of mask of target packet. Here is an example: FDIR matching ipv4 dst addr with 1.2.3.4 and redirect to queue 3: flow create 0 ingress pattern raw \ pattern spec \ 00000000000000000000000008004500001400004000401000000000000001020304 \ pattern mask \ 000000000000000000000000000000000000000000000000000000000000ffffffff \ / end actions queue index 3 / mark id 3 / end Note that mask of some key bits (e.g., 0x0800 to indicate ipv4 proto) is optional in our cases. To avoid redundancy, we just omit the mask of 0x0800 (with 0xFFFF) in the mask byte string example. The prefix '0x' for the spec and mask byte (hex) strings are also omitted here. 
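For completeness, the same rule can also be built through the rte_flow API directly. The sketch below is illustrative only (the helper name and port id are assumptions, not part of this patch); it reuses the spec/mask hex strings from the testpmd example above, since the driver parses the raw pattern as an ASCII hex string:

  #include <stdint.h>
  #include <rte_flow.h>

  static const char spec_hex[] =
      "00000000000000000000000008004500001400004000401000000000000001020304";
  static const char mask_hex[] =
      "000000000000000000000000000000000000000000000000000000000000ffffffff";

  /* Hypothetical helper: redirect packets matching the raw pattern to
   * queue 3 and mark them with id 3 on the given port.
   */
  static int add_raw_fdir_rule(uint16_t port_id)
  {
      struct rte_flow_attr attr = { .ingress = 1 };
      struct rte_flow_item_raw raw_spec = {
          .pattern = (const uint8_t *)spec_hex,
          .length = sizeof(spec_hex) - 1,
      };
      struct rte_flow_item_raw raw_mask = {
          .pattern = (const uint8_t *)mask_hex,
          .length = sizeof(mask_hex) - 1,
      };
      struct rte_flow_item pattern[] = {
          { .type = RTE_FLOW_ITEM_TYPE_RAW, .spec = &raw_spec, .mask = &raw_mask },
          { .type = RTE_FLOW_ITEM_TYPE_END },
      };
      struct rte_flow_action_queue queue = { .index = 3 };
      struct rte_flow_action_mark mark = { .id = 3 };
      struct rte_flow_action actions[] = {
          { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
          { .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
          { .type = RTE_FLOW_ACTION_TYPE_END },
      };
      struct rte_flow_error err;
      struct rte_flow *flow;

      flow = rte_flow_create(port_id, &attr, pattern, actions, &err);
      return flow == NULL ? -1 : 0;
  }
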
Signed-off-by: Junfeng Guo --- doc/guides/rel_notes/release_21_11.rst | 1 + drivers/net/ice/ice_ethdev.h | 14 ++ drivers/net/ice/ice_fdir_filter.c | 231 +++++++++++++++++++++++++ drivers/net/ice/ice_generic_flow.c | 7 + drivers/net/ice/ice_generic_flow.h | 3 + 5 files changed, 256 insertions(+) diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst index 98d50a160b..36fdee0a98 100644 --- a/doc/guides/rel_notes/release_21_11.rst +++ b/doc/guides/rel_notes/release_21_11.rst @@ -167,6 +167,7 @@ New Features * **Updated Intel ice driver.** + * Added protocol agnostic flow offloading support in Flow Director. * Added 1PPS out support by a devargs. * Added IPv4 and L4 (TCP/UDP/SCTP) checksum hash support in RSS flow. * Added DEV_RX_OFFLOAD_TIMESTAMP support. diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h index 0e42c4c063..d021e7fd0b 100644 --- a/drivers/net/ice/ice_ethdev.h +++ b/drivers/net/ice/ice_ethdev.h @@ -318,6 +318,11 @@ struct ice_fdir_filter_conf { uint64_t input_set_o; /* used for non-tunnel or tunnel outer fields */ uint64_t input_set_i; /* only for tunnel inner fields */ uint32_t mark_flag; + + struct ice_parser_profile *prof; + bool parser_ena; + u8 *pkt_buf; + u8 pkt_len; }; #define ICE_MAX_FDIR_FILTER_NUM (1024 * 16) @@ -487,6 +492,14 @@ struct ice_devargs { uint8_t pps_out_ena; }; +/** + * Structure to store fdir fv entry. + */ +struct ice_fdir_prof_info { + struct ice_parser_profile prof; + u64 fdir_actived_cnt; +}; + /** * Structure to store private data for each PF/VF instance. */ @@ -510,6 +523,7 @@ struct ice_adapter { struct rte_timecounter tx_tstamp_tc; bool ptp_ena; uint64_t time_hw; + struct ice_fdir_prof_info fdir_prof_info[ICE_MAX_PTGS]; #ifdef RTE_ARCH_X86 bool rx_use_avx2; bool rx_use_avx512; diff --git a/drivers/net/ice/ice_fdir_filter.c b/drivers/net/ice/ice_fdir_filter.c index bd627e3aa8..b5cd6d4ae6 100644 --- a/drivers/net/ice/ice_fdir_filter.c +++ b/drivers/net/ice/ice_fdir_filter.c @@ -107,6 +107,7 @@ ICE_INSET_NAT_T_ESP_SPI) static struct ice_pattern_match_item ice_fdir_pattern_list[] = { + {pattern_raw, ICE_INSET_NONE, ICE_INSET_NONE, ICE_INSET_NONE}, {pattern_ethertype, ICE_FDIR_INSET_ETH, ICE_INSET_NONE, ICE_INSET_NONE}, {pattern_eth_ipv4, ICE_FDIR_INSET_ETH_IPV4, ICE_INSET_NONE, ICE_INSET_NONE}, {pattern_eth_ipv4_udp, ICE_FDIR_INSET_ETH_IPV4_UDP, ICE_INSET_NONE, ICE_INSET_NONE}, @@ -1188,6 +1189,24 @@ ice_fdir_is_tunnel_profile(enum ice_fdir_tunnel_type tunnel_type) return 0; } +static int +ice_fdir_add_del_raw(struct ice_pf *pf, + struct ice_fdir_filter_conf *filter, + bool add) +{ + struct ice_hw *hw = ICE_PF_TO_HW(pf); + + unsigned char *pkt = (unsigned char *)pf->fdir.prg_pkt; + rte_memcpy(pkt, filter->pkt_buf, filter->pkt_len); + + struct ice_fltr_desc desc; + memset(&desc, 0, sizeof(desc)); + filter->input.comp_report = ICE_FXD_FLTR_QW0_COMP_REPORT_SW; + ice_fdir_get_prgm_desc(hw, &filter->input, &desc, add); + + return ice_fdir_programming(pf, &desc); +} + static int ice_fdir_add_del_filter(struct ice_pf *pf, struct ice_fdir_filter_conf *filter, @@ -1303,6 +1322,68 @@ ice_fdir_create_filter(struct ice_adapter *ad, struct ice_fdir_fltr_pattern key; bool is_tun; int ret; + int i; + + if (filter->parser_ena) { + struct ice_hw *hw = ICE_PF_TO_HW(pf); + + int id = ice_find_first_bit(filter->prof->ptypes, UINT16_MAX); + int ptg = hw->blk[ICE_BLK_FD].xlt1.t[id]; + u16 ctrl_vsi = pf->fdir.fdir_vsi->idx; + u16 main_vsi = pf->main_vsi->idx; + bool fv_found = false; + + struct ice_fdir_prof_info 
*pi = &ad->fdir_prof_info[ptg]; + if (pi->fdir_actived_cnt != 0) { + for (i = 0; i < ICE_MAX_FV_WORDS; i++) + if (pi->prof.fv[i].proto_id != + filter->prof->fv[i].proto_id || + pi->prof.fv[i].offset != + filter->prof->fv[i].offset || + pi->prof.fv[i].msk != + filter->prof->fv[i].msk) + break; + if (i == ICE_MAX_FV_WORDS) { + fv_found = true; + pi->fdir_actived_cnt++; + } + } + + if (!fv_found) { + ret = ice_flow_set_hw_prof(hw, main_vsi, ctrl_vsi, + filter->prof, ICE_BLK_FD); + if (ret) + goto error; + } + + ret = ice_fdir_add_del_raw(pf, filter, true); + if (ret) + goto error; + + if (!fv_found) { + for (i = 0; i < filter->prof->fv_num; i++) { + pi->prof.fv[i].proto_id = + filter->prof->fv[i].proto_id; + pi->prof.fv[i].offset = + filter->prof->fv[i].offset; + pi->prof.fv[i].msk = filter->prof->fv[i].msk; + } + pi->fdir_actived_cnt = 1; + } + + if (filter->mark_flag == 1) + ice_fdir_rx_parsing_enable(ad, 1); + + entry = rte_zmalloc("fdir_entry", sizeof(*entry), 0); + if (!entry) + goto error; + + rte_memcpy(entry, filter, sizeof(*filter)); + + flow->rule = entry; + + return 0; + } ice_fdir_extract_fltr_key(&key, filter); node = ice_fdir_entry_lookup(fdir_info, &key); @@ -1381,6 +1462,11 @@ ice_fdir_create_filter(struct ice_adapter *ad, free_entry: rte_free(entry); return -rte_errno; + +error: + rte_free(filter->prof); + rte_free(filter->pkt_buf); + return -rte_errno; } static int @@ -1397,6 +1483,44 @@ ice_fdir_destroy_filter(struct ice_adapter *ad, filter = (struct ice_fdir_filter_conf *)flow->rule; + if (filter->parser_ena) { + struct ice_hw *hw = ICE_PF_TO_HW(pf); + + int id = ice_find_first_bit(filter->prof->ptypes, UINT16_MAX); + int ptg = hw->blk[ICE_BLK_FD].xlt1.t[id]; + u16 ctrl_vsi = pf->fdir.fdir_vsi->idx; + u16 main_vsi = pf->main_vsi->idx; + enum ice_block blk = ICE_BLK_FD; + u16 vsi_num; + + ret = ice_fdir_add_del_raw(pf, filter, false); + if (ret) + return -rte_errno; + + struct ice_fdir_prof_info *pi = &ad->fdir_prof_info[ptg]; + if (pi->fdir_actived_cnt != 0) { + pi->fdir_actived_cnt--; + if (!pi->fdir_actived_cnt) { + vsi_num = ice_get_hw_vsi_num(hw, ctrl_vsi); + ice_rem_prof_id_flow(hw, blk, vsi_num, id); + + vsi_num = ice_get_hw_vsi_num(hw, main_vsi); + ice_rem_prof_id_flow(hw, blk, vsi_num, id); + } + } + + if (filter->mark_flag == 1) + ice_fdir_rx_parsing_enable(ad, 0); + + flow->rule = NULL; + + rte_free(filter->prof); + rte_free(filter->pkt_buf); + rte_free(filter); + + return 0; + } + is_tun = ice_fdir_is_tunnel_profile(filter->tunnel_type); if (filter->counter) { @@ -1675,6 +1799,7 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad, enum rte_flow_item_type l3 = RTE_FLOW_ITEM_TYPE_END; enum rte_flow_item_type l4 = RTE_FLOW_ITEM_TYPE_END; enum ice_fdir_tunnel_type tunnel_type = ICE_FDIR_TUNNEL_TYPE_NONE; + const struct rte_flow_item_raw *raw_spec, *raw_mask; const struct rte_flow_item_eth *eth_spec, *eth_mask; const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_last, *ipv4_mask; const struct rte_flow_item_ipv6 *ipv6_spec, *ipv6_mask; @@ -1702,6 +1827,9 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad, struct ice_fdir_extra *p_ext_data; struct ice_fdir_v4 *p_v4 = NULL; struct ice_fdir_v6 *p_v6 = NULL; + struct ice_parser_result rslt; + struct ice_parser *psr; + uint8_t item_num = 0; for (item = pattern; item->type != RTE_FLOW_ITEM_TYPE_END; item++) { if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN) @@ -1713,6 +1841,7 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad, item->type == RTE_FLOW_ITEM_TYPE_GTP_PSC) { is_outer = false; } + 
item_num++; } /* This loop parse flow pattern and distinguish Non-tunnel and tunnel @@ -1733,6 +1862,101 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad, &input_set_i : &input_set_o; switch (item_type) { + case RTE_FLOW_ITEM_TYPE_RAW: + raw_spec = item->spec; + raw_mask = item->mask; + + if (item_num != 1) + break; + + /* convert raw spec & mask from byte string to int */ + unsigned char *tmp_spec = + (uint8_t *)(uintptr_t)raw_spec->pattern; + unsigned char *tmp_mask = + (uint8_t *)(uintptr_t)raw_mask->pattern; + uint16_t udp_port = 0; + uint16_t tmp_val = 0; + uint8_t pkt_len = 0; + uint8_t tmp = 0; + int i, j; + + pkt_len = strlen((char *)(uintptr_t)raw_spec->pattern); + if (strlen((char *)(uintptr_t)raw_mask->pattern) != + pkt_len) + return -rte_errno; + + for (i = 0, j = 0; i < pkt_len; i += 2, j++) { + tmp = tmp_spec[i]; + if (tmp >= 'a' && tmp <= 'f') + tmp_val = tmp - 'a' + 10; + if (tmp >= 'A' && tmp <= 'F') + tmp_val = tmp - 'A' + 10; + if (tmp >= '0' && tmp <= '9') + tmp_val = tmp - '0'; + + tmp_val *= 16; + tmp = tmp_spec[i + 1]; + if (tmp >= 'a' && tmp <= 'f') + tmp_spec[j] = tmp_val + tmp - 'a' + 10; + if (tmp >= 'A' && tmp <= 'F') + tmp_spec[j] = tmp_val + tmp - 'A' + 10; + if (tmp >= '0' && tmp <= '9') + tmp_spec[j] = tmp_val + tmp - '0'; + + tmp = tmp_mask[i]; + if (tmp >= 'a' && tmp <= 'f') + tmp_val = tmp - 'a' + 10; + if (tmp >= 'A' && tmp <= 'F') + tmp_val = tmp - 'A' + 10; + if (tmp >= '0' && tmp <= '9') + tmp_val = tmp - '0'; + + tmp_val *= 16; + tmp = tmp_mask[i + 1]; + if (tmp >= 'a' && tmp <= 'f') + tmp_mask[j] = tmp_val + tmp - 'a' + 10; + if (tmp >= 'A' && tmp <= 'F') + tmp_mask[j] = tmp_val + tmp - 'A' + 10; + if (tmp >= '0' && tmp <= '9') + tmp_mask[j] = tmp_val + tmp - '0'; + } + + pkt_len /= 2; + + if (ice_parser_create(&ad->hw, &psr)) + return -rte_errno; + if (ice_get_open_tunnel_port(&ad->hw, TNL_VXLAN, + &udp_port)) + ice_parser_vxlan_tunnel_set(psr, udp_port, + true); + if (ice_parser_run(psr, tmp_spec, pkt_len, &rslt)) + return -rte_errno; + ice_parser_destroy(psr); + + if (!tmp_mask) + return -rte_errno; + + filter->prof = (struct ice_parser_profile *) + ice_malloc(&ad->hw, sizeof(*filter->prof)); + if (!filter->prof) + return -ENOMEM; + + if (ice_parser_profile_init(&rslt, tmp_spec, tmp_mask, + pkt_len, ICE_BLK_FD, true, filter->prof)) + return -rte_errno; + + u8 *pkt_buf = (u8 *)ice_malloc(&ad->hw, pkt_len + 1); + if (!pkt_buf) + return -ENOMEM; + rte_memcpy(pkt_buf, tmp_spec, pkt_len); + filter->pkt_buf = pkt_buf; + + filter->pkt_len = pkt_len; + + filter->parser_ena = true; + + break; + case RTE_FLOW_ITEM_TYPE_ETH: flow_type = ICE_FLTR_PTYPE_NON_IP_L2; eth_spec = item->spec; @@ -2198,6 +2422,7 @@ ice_fdir_parse(struct ice_adapter *ad, struct ice_fdir_filter_conf *filter = &pf->fdir.conf; struct ice_pattern_match_item *item = NULL; uint64_t input_set; + bool raw = false; int ret; memset(filter, 0, sizeof(*filter)); @@ -2213,7 +2438,13 @@ ice_fdir_parse(struct ice_adapter *ad, ret = ice_fdir_parse_pattern(ad, pattern, error, filter); if (ret) goto error; + + if (item->pattern_list[0] == RTE_FLOW_ITEM_TYPE_RAW) + raw = true; + input_set = filter->input_set_o | filter->input_set_i; + input_set = raw ? 
~input_set : input_set; + if (!input_set || filter->input_set_o & ~(item->input_set_mask_o | ICE_INSET_ETHERTYPE) || filter->input_set_i & ~item->input_set_mask_i) { diff --git a/drivers/net/ice/ice_generic_flow.c b/drivers/net/ice/ice_generic_flow.c index 02f854666a..d3391c86c0 100644 --- a/drivers/net/ice/ice_generic_flow.c +++ b/drivers/net/ice/ice_generic_flow.c @@ -65,6 +65,12 @@ enum rte_flow_item_type pattern_empty[] = { RTE_FLOW_ITEM_TYPE_END, }; +/* raw */ +enum rte_flow_item_type pattern_raw[] = { + RTE_FLOW_ITEM_TYPE_RAW, + RTE_FLOW_ITEM_TYPE_END, +}; + /* L2 */ enum rte_flow_item_type pattern_ethertype[] = { RTE_FLOW_ITEM_TYPE_ETH, @@ -2081,6 +2087,7 @@ struct ice_ptype_match { }; static struct ice_ptype_match ice_ptype_map[] = { + {pattern_raw, ICE_PTYPE_IPV4_PAY}, {pattern_eth_ipv4, ICE_PTYPE_IPV4_PAY}, {pattern_eth_ipv4_udp, ICE_PTYPE_IPV4_UDP_PAY}, {pattern_eth_ipv4_tcp, ICE_PTYPE_IPV4_TCP_PAY}, diff --git a/drivers/net/ice/ice_generic_flow.h b/drivers/net/ice/ice_generic_flow.h index 8845a3e156..1b030c0466 100644 --- a/drivers/net/ice/ice_generic_flow.h +++ b/drivers/net/ice/ice_generic_flow.h @@ -124,6 +124,9 @@ /* empty pattern */ extern enum rte_flow_item_type pattern_empty[]; +/* raw pattern */ +extern enum rte_flow_item_type pattern_raw[]; + /* L2 */ extern enum rte_flow_item_type pattern_ethertype[]; extern enum rte_flow_item_type pattern_ethertype_vlan[];
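
For reference, the nibble-by-nibble conversion that ice_fdir_parse_pattern applies in place to the raw spec and mask strings (see the RTE_FLOW_ITEM_TYPE_RAW hunk above, before ice_parser_run is called) is summarized by the standalone sketch below; the helper names are illustrative and this is not driver code:

  #include <stddef.h>
  #include <stdint.h>

  static int hex_nibble(char c)
  {
      if (c >= '0' && c <= '9')
          return c - '0';
      if (c >= 'a' && c <= 'f')
          return c - 'a' + 10;
      if (c >= 'A' && c <= 'F')
          return c - 'A' + 10;
      return -1;
  }

  /* Convert a hex string of str_len characters into str_len / 2 packet
   * bytes, in place, as the driver does for both the spec and the mask.
   * Returns the packet byte length, or -1 on a malformed string.
   */
  static int hex_str_to_bytes(uint8_t *buf, size_t str_len)
  {
      size_t i, j;

      if (str_len % 2)
          return -1;
      for (i = 0, j = 0; i < str_len; i += 2, j++) {
          int hi = hex_nibble((char)buf[i]);
          int lo = hex_nibble((char)buf[i + 1]);

          if (hi < 0 || lo < 0)
              return -1;
          buf[j] = (uint8_t)(hi * 16 + lo);
      }
      return (int)(str_len / 2);
  }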