From patchwork Wed Dec 20 02:33:13 2017
X-Patchwork-Submitter: "Zhao1, Wei" <wei.zhao1@intel.com>
X-Patchwork-Id: 32515
X-Patchwork-Delegate: helin.zhang@intel.com
From: Wei Zhao <wei.zhao1@intel.com>
To: dev@dpdk.org
Cc: wenzhuo.lu@intel.com, Wei Zhao <wei.zhao1@intel.com>
Date: Wed, 20 Dec 2017 10:33:13 +0800
Message-Id: <20171220023313.143102-1-wei.zhao1@intel.com>
X-Mailer: git-send-email 2.9.3
In-Reply-To: <20171220021008.142687-1-wei.zhao1@intel.com>
References: <20171220021008.142687-1-wei.zhao1@intel.com>
Subject: [dpdk-dev] [PATCH v2] net/ixgbe: add flow parser ntuple support

The ixgbe ntuple filter in rte_flow needs to support diverting packets
using fewer than the full 5-tuple parameters. So add this support in
the parser code.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>

---

v2:
- fix coding style issue.
---
 drivers/net/ixgbe/ixgbe_flow.c | 84 ++++++++++++++++++++++--------------------
 1 file changed, 45 insertions(+), 39 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 8f964cf..f69f0c4 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -310,48 +310,49 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
 		}
 	}
 
-	/* get the IPv4 info */
-	if (!item->spec || !item->mask) {
-		rte_flow_error_set(error, EINVAL,
-			RTE_FLOW_ERROR_TYPE_ITEM,
-			item, "Invalid ntuple mask");
-		return -rte_errno;
-	}
-	/*Not supported last point for range*/
-	if (item->last) {
-		rte_flow_error_set(error, EINVAL,
-			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-			item, "Not supported last point for range");
-		return -rte_errno;
-
-	}
+	if (item->mask) {
+		/* get the IPv4 info */
+		if (!item->spec || !item->mask) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Invalid ntuple mask");
+			return -rte_errno;
+		}
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
 
-	ipv4_mask = (const struct rte_flow_item_ipv4 *)item->mask;
-	/**
-	 * Only support src & dst addresses, protocol,
-	 * others should be masked.
-	 */
-	if (ipv4_mask->hdr.version_ihl ||
-	    ipv4_mask->hdr.type_of_service ||
-	    ipv4_mask->hdr.total_length ||
-	    ipv4_mask->hdr.packet_id ||
-	    ipv4_mask->hdr.fragment_offset ||
-	    ipv4_mask->hdr.time_to_live ||
-	    ipv4_mask->hdr.hdr_checksum) {
+		ipv4_mask = (const struct rte_flow_item_ipv4 *)item->mask;
+		/**
+		 * Only support src & dst addresses, protocol,
+		 * others should be masked.
+		 */
+		if (ipv4_mask->hdr.version_ihl ||
+		    ipv4_mask->hdr.type_of_service ||
+		    ipv4_mask->hdr.total_length ||
+		    ipv4_mask->hdr.packet_id ||
+		    ipv4_mask->hdr.fragment_offset ||
+		    ipv4_mask->hdr.time_to_live ||
+		    ipv4_mask->hdr.hdr_checksum) {
 			rte_flow_error_set(error,
-			EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
-			item, "Not supported by ntuple filter");
-		return -rte_errno;
-	}
+				EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by ntuple filter");
+			return -rte_errno;
+		}
 
-	filter->dst_ip_mask = ipv4_mask->hdr.dst_addr;
-	filter->src_ip_mask = ipv4_mask->hdr.src_addr;
-	filter->proto_mask = ipv4_mask->hdr.next_proto_id;
+		filter->dst_ip_mask = ipv4_mask->hdr.dst_addr;
+		filter->src_ip_mask = ipv4_mask->hdr.src_addr;
+		filter->proto_mask = ipv4_mask->hdr.next_proto_id;
 
-	ipv4_spec = (const struct rte_flow_item_ipv4 *)item->spec;
-	filter->dst_ip = ipv4_spec->hdr.dst_addr;
-	filter->src_ip = ipv4_spec->hdr.src_addr;
-	filter->proto = ipv4_spec->hdr.next_proto_id;
+		ipv4_spec = (const struct rte_flow_item_ipv4 *)item->spec;
+		filter->dst_ip = ipv4_spec->hdr.dst_addr;
+		filter->src_ip = ipv4_spec->hdr.src_addr;
+		filter->proto = ipv4_spec->hdr.next_proto_id;
+	}
 
 	/* check if the next not void item is TCP or UDP */
 	item = next_no_void_pattern(pattern, item);
@@ -366,8 +367,13 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
 		return -rte_errno;
 	}
 
-	/* get the TCP/UDP info */
 	if ((item->type != RTE_FLOW_ITEM_TYPE_END) &&
+		(!item->spec && !item->mask)) {
+		goto action;
+	}
+
+	/* get the TCP/UDP/SCTP info */
+	if (item->type != RTE_FLOW_ITEM_TYPE_END &&
 		(!item->spec || !item->mask)) {
 		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
 		rte_flow_error_set(error, EINVAL,
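---

A note for reviewers: the rule shape this patch enables can be exercised
from an application roughly as below. This is a minimal sketch against the
DPDK 17.11-era rte_flow API, not part of the patch itself; the port id,
queue index, priority and addresses are placeholder values.

#include <stdint.h>
#include <string.h>

#include <rte_byteorder.h>
#include <rte_ip.h>
#include <rte_flow.h>

/*
 * Create an ntuple rule that matches only the IPv4 source/destination
 * addresses: ETH / IPV4 (spec + mask) / TCP (no spec, no mask) / END.
 * Before this patch the parser required a full spec/mask on the L4 item;
 * with it, an empty L4 item makes the parser skip straight to the actions.
 */
static struct rte_flow *
create_addr_only_rule(uint16_t port_id, uint16_t rx_queue,
		      struct rte_flow_error *error)
{
	struct rte_flow_attr attr = { .priority = 1, .ingress = 1 };
	struct rte_flow_item_ipv4 ipv4_spec, ipv4_mask;
	struct rte_flow_action_queue queue = { .index = rx_queue };
	struct rte_flow_item pattern[4];
	struct rte_flow_action actions[2];

	memset(&ipv4_spec, 0, sizeof(ipv4_spec));
	memset(&ipv4_mask, 0, sizeof(ipv4_mask));
	/* Placeholder addresses; per the parser, only src/dst addresses
	 * and the next protocol id may be masked.
	 */
	ipv4_spec.hdr.src_addr = rte_cpu_to_be_32(IPv4(192, 168, 0, 1));
	ipv4_spec.hdr.dst_addr = rte_cpu_to_be_32(IPv4(192, 168, 0, 2));
	ipv4_mask.hdr.src_addr = rte_cpu_to_be_32(UINT32_MAX);
	ipv4_mask.hdr.dst_addr = rte_cpu_to_be_32(UINT32_MAX);

	memset(pattern, 0, sizeof(pattern));
	pattern[0].type = RTE_FLOW_ITEM_TYPE_ETH;
	pattern[1].type = RTE_FLOW_ITEM_TYPE_IPV4;
	pattern[1].spec = &ipv4_spec;
	pattern[1].mask = &ipv4_mask;
	/* L4 item left without spec/mask: the new "goto action" path. */
	pattern[2].type = RTE_FLOW_ITEM_TYPE_TCP;
	pattern[3].type = RTE_FLOW_ITEM_TYPE_END;

	memset(actions, 0, sizeof(actions));
	actions[0].type = RTE_FLOW_ACTION_TYPE_QUEUE;
	actions[0].conf = &queue;
	actions[1].type = RTE_FLOW_ACTION_TYPE_END;

	return rte_flow_create(port_id, &attr, pattern, actions, error);
}

With the TCP item left empty, the rule matches on the IPv4 addresses
alone, which is the fewer-than-5-tuple case described in the commit log.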