From patchwork Mon Oct 23 21:07:02 2023
X-Patchwork-Submitter: Alexander Kozyrev
X-Patchwork-Id: 133198
X-Patchwork-Delegate: thomas@monjalon.net
From: Alexander Kozyrev
To:
CC:
Subject: [PATCH v2 2/7] net/mlx5: add support for ptype match in hardware steering
Date: Tue, 24 Oct 2023 00:07:02 +0300
Message-ID: <20231023210707.1344241-3-akozyrev@nvidia.com>
X-Mailer: git-send-email 2.18.2
In-Reply-To: <20231023210707.1344241-1-akozyrev@nvidia.com>
References: <20231009163617.3999365-1-akozyrev@nvidia.com>
 <20231023210707.1344241-1-akozyrev@nvidia.com>

Packet type matching provides a quick way of finding out the L2/L3/L4
protocols in a given packet. That helps with optimized flow rule
matching, eliminating the need to stack all the packet headers in the
matching criteria.

Signed-off-by: Alexander Kozyrev
---
 drivers/net/mlx5/hws/mlx5dr_definer.c | 161 ++++++++++++++++++++++++++
 drivers/net/mlx5/hws/mlx5dr_definer.h |   7 ++
 drivers/net/mlx5/mlx5_flow.h          |   3 +
 drivers/net/mlx5/mlx5_flow_hw.c       |   1 +
 4 files changed, 172 insertions(+)

diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 95b5d4b70e..8d846984e7 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -16,11 +16,15 @@
 #define STE_NO_VLAN	0x0
 #define STE_SVLAN	0x1
 #define STE_CVLAN	0x2
+#define STE_NO_L3	0x0
 #define STE_IPV4	0x1
 #define STE_IPV6	0x2
+#define STE_NO_L4	0x0
 #define STE_TCP		0x1
 #define STE_UDP		0x2
 #define STE_ICMP	0x3
+#define STE_NO_TUN	0x0
+#define STE_ESP		0x3
 
 #define MLX5DR_DEFINER_QUOTA_BLOCK 0
 #define MLX5DR_DEFINER_QUOTA_PASS  2
@@ -277,6 +281,82 @@ mlx5dr_definer_conntrack_tag(struct mlx5dr_definer_fc *fc,
 	DR_SET(tag, reg_value, fc->byte_off, fc->bit_off, fc->bit_mask);
 }
 
+static void
+mlx5dr_definer_ptype_l2_set(struct mlx5dr_definer_fc *fc,
+			    const void *item_spec,
+			    uint8_t *tag)
+{
+	bool inner = (fc->fname == MLX5DR_DEFINER_FNAME_PTYPE_L2_I);
+	const struct rte_flow_item_ptype *v = item_spec;
+	uint32_t packet_type = v->packet_type &
+		(inner ? RTE_PTYPE_INNER_L2_MASK : RTE_PTYPE_L2_MASK);
+	uint8_t l2_type = STE_NO_VLAN;
+
+	if (packet_type == (inner ? RTE_PTYPE_INNER_L2_ETHER : RTE_PTYPE_L2_ETHER))
+		l2_type = STE_NO_VLAN;
+	else if (packet_type == (inner ? RTE_PTYPE_INNER_L2_ETHER_VLAN : RTE_PTYPE_L2_ETHER_VLAN))
+		l2_type = STE_CVLAN;
+	else if (packet_type == (inner ? RTE_PTYPE_INNER_L2_ETHER_QINQ : RTE_PTYPE_L2_ETHER_QINQ))
+		l2_type = STE_SVLAN;
+
+	DR_SET(tag, l2_type, fc->byte_off, fc->bit_off, fc->bit_mask);
+}
+
+static void
+mlx5dr_definer_ptype_l3_set(struct mlx5dr_definer_fc *fc,
+			    const void *item_spec,
+			    uint8_t *tag)
+{
+	bool inner = (fc->fname == MLX5DR_DEFINER_FNAME_PTYPE_L3_I);
+	const struct rte_flow_item_ptype *v = item_spec;
+	uint32_t packet_type = v->packet_type &
+		(inner ? RTE_PTYPE_INNER_L3_MASK : RTE_PTYPE_L3_MASK);
+	uint8_t l3_type = STE_NO_L3;
+
+	if (packet_type == (inner ? RTE_PTYPE_INNER_L3_IPV4 : RTE_PTYPE_L3_IPV4))
+		l3_type = STE_IPV4;
+	else if (packet_type == (inner ? RTE_PTYPE_INNER_L3_IPV6 : RTE_PTYPE_L3_IPV6))
+		l3_type = STE_IPV6;
+
+	DR_SET(tag, l3_type, fc->byte_off, fc->bit_off, fc->bit_mask);
+}
+
+static void
+mlx5dr_definer_ptype_l4_set(struct mlx5dr_definer_fc *fc,
+			    const void *item_spec,
+			    uint8_t *tag)
+{
+	bool inner = (fc->fname == MLX5DR_DEFINER_FNAME_PTYPE_L4_I);
+	const struct rte_flow_item_ptype *v = item_spec;
+	uint32_t packet_type = v->packet_type &
+		(inner ? RTE_PTYPE_INNER_L4_MASK : RTE_PTYPE_L4_MASK);
+	uint8_t l4_type = STE_NO_L4;
+
+	if (packet_type == (inner ? RTE_PTYPE_INNER_L4_TCP : RTE_PTYPE_L4_TCP))
+		l4_type = STE_TCP;
+	else if (packet_type == (inner ? RTE_PTYPE_INNER_L4_UDP : RTE_PTYPE_L4_UDP))
+		l4_type = STE_UDP;
+	else if (packet_type == (inner ? RTE_PTYPE_INNER_L4_ICMP : RTE_PTYPE_L4_ICMP))
+		l4_type = STE_ICMP;
+
+	DR_SET(tag, l4_type, fc->byte_off, fc->bit_off, fc->bit_mask);
+}
+
+static void
+mlx5dr_definer_ptype_tunnel_set(struct mlx5dr_definer_fc *fc,
+				const void *item_spec,
+				uint8_t *tag)
+{
+	const struct rte_flow_item_ptype *v = item_spec;
+	uint32_t packet_type = v->packet_type & RTE_PTYPE_TUNNEL_MASK;
+	uint8_t tun_type = STE_NO_TUN;
+
+	if (packet_type == RTE_PTYPE_TUNNEL_ESP)
+		tun_type = STE_ESP;
+
+	DR_SET(tag, tun_type, fc->byte_off, fc->bit_off, fc->bit_mask);
+}
+
 static void
 mlx5dr_definer_integrity_set(struct mlx5dr_definer_fc *fc,
 			     const void *item_spec,
@@ -1709,6 +1789,83 @@ mlx5dr_definer_conv_item_gre_key(struct mlx5dr_definer_conv_data *cd,
 	return 0;
 }
 
+static int
+mlx5dr_definer_conv_item_ptype(struct mlx5dr_definer_conv_data *cd,
+			       struct rte_flow_item *item,
+			       int item_idx)
+{
+	const struct rte_flow_item_ptype *m = item->mask;
+	struct mlx5dr_definer_fc *fc;
+
+	if (!m)
+		return 0;
+
+	if (!(m->packet_type &
+	      (RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK | RTE_PTYPE_L4_MASK | RTE_PTYPE_TUNNEL_MASK |
+	       RTE_PTYPE_INNER_L2_MASK | RTE_PTYPE_INNER_L3_MASK | RTE_PTYPE_INNER_L4_MASK))) {
+		rte_errno = ENOTSUP;
+		return rte_errno;
+	}
+
+	if (m->packet_type & RTE_PTYPE_L2_MASK) {
+		fc = &cd->fc[DR_CALC_FNAME(PTYPE_L2, false)];
+		fc->item_idx = item_idx;
+		fc->tag_set = &mlx5dr_definer_ptype_l2_set;
+		fc->tag_mask_set = &mlx5dr_definer_ones_set;
+		DR_CALC_SET(fc, eth_l2, first_vlan_qualifier, false);
+	}
+
+	if (m->packet_type & RTE_PTYPE_INNER_L2_MASK) {
+		fc = &cd->fc[DR_CALC_FNAME(PTYPE_L2, true)];
+		fc->item_idx = item_idx;
+		fc->tag_set = &mlx5dr_definer_ptype_l2_set;
+		fc->tag_mask_set = &mlx5dr_definer_ones_set;
+		DR_CALC_SET(fc, eth_l2, first_vlan_qualifier, true);
+	}
+
+	if (m->packet_type & RTE_PTYPE_L3_MASK) {
+		fc = &cd->fc[DR_CALC_FNAME(PTYPE_L3, false)];
+		fc->item_idx = item_idx;
+		fc->tag_set = &mlx5dr_definer_ptype_l3_set;
+		fc->tag_mask_set = &mlx5dr_definer_ones_set;
+		DR_CALC_SET(fc, eth_l2, l3_type, false);
+	}
+
+	if (m->packet_type & RTE_PTYPE_INNER_L3_MASK) {
+		fc = &cd->fc[DR_CALC_FNAME(PTYPE_L3, true)];
+		fc->item_idx = item_idx;
+		fc->tag_set = &mlx5dr_definer_ptype_l3_set;
+		fc->tag_mask_set = &mlx5dr_definer_ones_set;
+		DR_CALC_SET(fc, eth_l2, l3_type, true);
+	}
+
+	if (m->packet_type & RTE_PTYPE_L4_MASK) {
+		fc = &cd->fc[DR_CALC_FNAME(PTYPE_L4, false)];
+		fc->item_idx = item_idx;
+		fc->tag_set = &mlx5dr_definer_ptype_l4_set;
+		fc->tag_mask_set = &mlx5dr_definer_ones_set;
+		DR_CALC_SET(fc, eth_l2, l4_type, false);
+	}
+
+	if (m->packet_type & RTE_PTYPE_INNER_L4_MASK) {
+		fc = &cd->fc[DR_CALC_FNAME(PTYPE_L4, true)];
+		fc->item_idx = item_idx;
+		fc->tag_set = &mlx5dr_definer_ptype_l4_set;
+		fc->tag_mask_set = &mlx5dr_definer_ones_set;
+		DR_CALC_SET(fc, eth_l2, l4_type, true);
+	}
+
+	if (m->packet_type & RTE_PTYPE_TUNNEL_MASK) {
+		fc = &cd->fc[MLX5DR_DEFINER_FNAME_PTYPE_TUNNEL];
+		fc->item_idx = item_idx;
+		fc->tag_set = &mlx5dr_definer_ptype_tunnel_set;
+		fc->tag_mask_set = &mlx5dr_definer_ones_set;
+		DR_CALC_SET(fc, eth_l2, l4_type_bwc, false);
+	}
+
+	return 0;
+}
+
 static int
 mlx5dr_definer_conv_item_integrity(struct mlx5dr_definer_conv_data *cd,
 				   struct rte_flow_item *item,
@@ -2332,6 +2489,10 @@ mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx,
 			ret = mlx5dr_definer_conv_item_ib_l4(&cd, items, i);
 			item_flags |= MLX5_FLOW_ITEM_IB_BTH;
 			break;
+		case RTE_FLOW_ITEM_TYPE_PTYPE:
+			ret = mlx5dr_definer_conv_item_ptype(&cd, items, i);
+			item_flags |= MLX5_FLOW_ITEM_PTYPE;
+			break;
 		default:
 			DR_LOG(ERR, "Unsupported item type %d", items->type);
 			rte_errno = ENOTSUP;
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.h b/drivers/net/mlx5/hws/mlx5dr_definer.h
index f5a541bc17..ea07f55d52 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.h
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.h
@@ -141,6 +141,13 @@ enum mlx5dr_definer_fname {
 	MLX5DR_DEFINER_FNAME_IB_L4_OPCODE,
 	MLX5DR_DEFINER_FNAME_IB_L4_QPN,
 	MLX5DR_DEFINER_FNAME_IB_L4_A,
+	MLX5DR_DEFINER_FNAME_PTYPE_L2_O,
+	MLX5DR_DEFINER_FNAME_PTYPE_L2_I,
+	MLX5DR_DEFINER_FNAME_PTYPE_L3_O,
+	MLX5DR_DEFINER_FNAME_PTYPE_L3_I,
+	MLX5DR_DEFINER_FNAME_PTYPE_L4_O,
+	MLX5DR_DEFINER_FNAME_PTYPE_L4_I,
+	MLX5DR_DEFINER_FNAME_PTYPE_TUNNEL,
 	MLX5DR_DEFINER_FNAME_MAX,
 };
 
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 903ff66d72..98b267245c 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -233,6 +233,9 @@ enum mlx5_feature_name {
 /* IB BTH ITEM. */
 #define MLX5_FLOW_ITEM_IB_BTH (1ull << 51)
 
+/* PTYPE ITEM */
+#define MLX5_FLOW_ITEM_PTYPE (1ull << 52)
+
 /* NSH ITEM */
 #define MLX5_FLOW_ITEM_NSH (1ull << 53)
 
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 6fcf654e4a..34b3c9e6ad 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -5382,6 +5382,7 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
 		case RTE_FLOW_ITEM_TYPE_ESP:
 		case RTE_FLOW_ITEM_TYPE_FLEX:
 		case RTE_FLOW_ITEM_TYPE_IB_BTH:
+		case RTE_FLOW_ITEM_TYPE_PTYPE:
 			break;
 		case RTE_FLOW_ITEM_TYPE_INTEGRITY:
 			/*
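
As an aside for reviewers (illustrative only, not part of the patch): the new
item is consumed through the generic struct rte_flow_item_ptype, where the
mask selects which ptype layers (outer/inner L2/L3/L4, tunnel) take part in
the match and the spec gives the expected value. A minimal sketch of a
pattern matching outer-UDP packets could look like the snippet below; the
surrounding port and template setup is assumed to be in place, and the
variable names are made up for the example.

	#include <rte_flow.h>
	#include <rte_mbuf_ptype.h>

	/* Match packets whose outer L4 layer is UDP. */
	static const struct rte_flow_item_ptype ptype_spec = {
		.packet_type = RTE_PTYPE_L4_UDP,
	};
	/* Only the L4 bits are masked in, so L2/L3/tunnel types are ignored. */
	static const struct rte_flow_item_ptype ptype_mask = {
		.packet_type = RTE_PTYPE_L4_MASK,
	};

	static const struct rte_flow_item pattern[] = {
		{
			.type = RTE_FLOW_ITEM_TYPE_PTYPE,
			.spec = &ptype_spec,
			.mask = &ptype_mask,
		},
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};

With the template (hardware steering) flow engine that this series targets,
such a pattern would typically be split between the mask side passed to
rte_flow_pattern_template_create() and the per-rule spec values passed at
rte_flow_async_create() time.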