From patchwork Fri Oct 1 19:34:02 2021
X-Patchwork-Submitter: Slava Ovsiienko
X-Patchwork-Id: 100344
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Viacheslav Ovsiienko
Date: Fri, 1 Oct 2021 22:34:02 +0300
Message-ID: <20211001193415.23288-2-viacheslavo@nvidia.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20211001193415.23288-1-viacheslavo@nvidia.com>
References: <20210922180418.20663-1-viacheslavo@nvidia.com>
 <20211001193415.23288-1-viacheslavo@nvidia.com>
Subject: [dpdk-dev] [PATCH v2 01/14] ethdev: introduce configurable flexible item

1. Introduction and Retrospective

Networks are evolving fast, network structures are becoming more and more
complicated, and new application areas keep emerging. To address these
challenges, new network protocols are continuously being developed,
considered by technical communities, adopted by the industry and,
eventually, implemented in hardware and software. The DPDK framework
follows this trend: a glance at the RTE Flow API header shows that
multiple new items have been introduced over the years since the initial
release.

The adoption and implementation of a new protocol is not straightforward
and takes time: the protocol passes development, consideration, adoption,
and implementation phases. The industry tries to prepare for forthcoming
network protocols in advance; for example, many hardware vendors implement
flexible and configurable network protocol parsers. As DPDK developers,
can we anticipate the near future in the same fashion and introduce
similar flexibility into the RTE Flow API?

Looking at what is already merged in the project, there is the raw item
(rte_flow_item_raw). At first glance it looks suitable: we could try to
implement flow matching on the header of some relatively new tunnel
protocol, say the GENEVE header with variable-length options. On further
consideration, we run into the raw item limitations:

- only a fixed-size network header can be represented
- the entire network header pattern of fixed format (header field offsets
  are fixed) must be provided
- the search for patterns is not robust (wrong matches might be triggered)
  and is actually not supported by existing PMDs
- there are no explicitly specified relations with preceding and following
  items
- there is no tunnel hint support

As a result, implementing support for tunnel protocols like the
aforementioned GENEVE with variable extra protocol options using the raw
item becomes very complicated and would require multiple flows and
multiple raw items chained in the same flow (and no support for chained
raw items was found in the implemented drivers). This patch introduces a
dedicated flex item (rte_flow_item_flex) to handle matches on existing and
new network protocol headers in a unified fashion.

2. Flex Item Life Cycle

Let's assume there is a requirement to support a new network protocol with
RTE Flows. The protocol specification provides:

- the header format
- the header length (which can be variable, depending on options)
- the potential presence of extra options following or included in the
  header
- the relations with preceding protocols; for example, GENEVE follows UDP,
  eCPRI can follow either UDP or an L2 header
- the relations with following protocols;
  for example, the next layer after a tunnel header can be L2 or L3
- whether the new protocol is a tunnel and its header is a splitting point
  between the outer and inner layers

The supposed way to operate with the flex item:

- the application defines the header structures according to the protocol
  specification
- the application calls rte_flow_flex_item_create() with the desired
  configuration according to the protocol specification; this creates the
  flex item object over the specified Ethernet device and prepares the PMD
  and the underlying hardware to handle the flex item. On the creation
  call the PMD backing the specified Ethernet device returns an opaque
  handle identifying the object that has been created
- the application uses rte_flow_item_flex with the obtained handle in the
  flows; the values/masks to match against the header fields are specified
  in the flex item per flow, as for regular items (except that the pattern
  buffer combines all fields)
- flows with flex items match packets in a regular fashion; the values and
  masks for the new protocol header are taken from the flex items in the
  flows
- the application destroys the flows with flex items
- the application calls rte_flow_flex_item_release() as part of the
  Ethernet device API, destroying the flex item object in the PMD and
  releasing the engaged hardware resources

3. Flex Item Structure

The flex item structure is intended to be used as part of the flow
pattern, like regular RTE flow items, and provides the mask and value to
match against the fields of the protocol the item was configured for.

  struct rte_flow_item_flex {
      void *handle;
      uint32_t length;
      const uint8_t *pattern;
  };

The handle is an opaque object maintained on a per-device basis by the
underlying driver. The protocol header fields are treated as bit fields;
all offsets and widths are expressed in bits. The pattern is the buffer
containing the bit concatenation of all the fields presented at item
configuration time, in the same order and the same amount. If byte
boundary alignment is needed, the application can use a dummy-type field
as a gap filler. The length field specifies the pattern buffer length in
bytes and is needed to allow rte_flow_copy() operations. An approach with
multiple pattern pointers and lengths (one per field) was considered and
found clumsy - it is much more convenient for the application to maintain
a single structure with a single pattern buffer.

4. Flex Item Configuration

The flex item configuration consists of the following parts:

- header field descriptors:
  - next header
  - next protocol
  - sample to match
- input link descriptors
- output link descriptors

The field descriptors tell the driver and hardware what data should be
extracted from the packet and then presented for matching in the flows.
Each field is a bit pattern: it has a width, an offset from the header
beginning, a mode of offset calculation, and offset-related parameters.

The next header field is special: no data are actually taken from the
packet, but its offset is used as a pointer to the next header in the
packet; in other words, the next header offset specifies the size of the
header being parsed by the flex item. There is one more special field -
next protocol. It specifies where the next protocol identifier is located,
and the packet data sampled from this field are used to determine the next
protocol header type to continue packet parsing. The next protocol field
is similar to the ether_type field in the Ethernet (MAC) header, or the
proto field in the IPv4/IPv6 headers. The sample fields represent the data
to be sampled from the packet and then matched against the established
flows.
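
For illustration only - the structure and the FIELD_MODE_FIXED mode are
the ones introduced by this patch (the offset calculation modes are
detailed below), while the IPv4-like offsets are hypothetical - a next
protocol descriptor could be sketched as:

  /* Hypothetical sketch: next protocol identifier of an IPv4-like header,
   * an 8-bit field located 72 bits from the header beginning.
   */
  struct rte_flow_item_flex_field next_proto = {
      .field_mode = FIELD_MODE_FIXED, /* fixed offset, see the modes below */
      .field_size = 8,                /* field width in bits */
      .field_base = 72,               /* bit offset from the header start */
  };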
There are several methods to calculate the field offset at runtime,
depending on configuration and packet content:

- FIELD_MODE_FIXED - fixed offset. The bit offset from the header
  beginning is permanent and defined by the field_base configuration
  parameter.

- FIELD_MODE_OFFSET - the field bit offset is extracted from another
  header field (the indirect offset field). The resulting field offset to
  match is calculated as:

    field_base + (*field_offset & offset_mask) << field_shift

  This mode is useful for sampling extra options that follow a main header
  containing a length field. It can also be used to calculate the offset
  to the next protocol header; for example, the IPv4 header contains a
  4-bit field with the IPv4 header length expressed in dwords. Another
  example - this mode allows skipping the GENEVE header variable-length
  options.

- FIELD_MODE_BITMASK - the field bit offset is extracted from another
  header field (the indirect offset field), which is treated as a bitmask
  containing some number of one-bits. The resulting field offset to match
  is calculated as:

    field_base + bitcount(*field_offset & offset_mask) << field_shift

  This mode is useful for skipping the GTP header and its extra options
  selected by specified flags.

- FIELD_MODE_DUMMY - dummy field, optionally used for byte boundary
  alignment in the pattern. Pattern mask and data are ignored in the
  match. All configuration parameters besides field size and offset are
  ignored.

The offset mode list can be extended by vendors according to the options
supported by their hardware.

The input link configuration section tells the driver which protocols the
flex item can follow and under what conditions. An input link specifies
the preceding header pattern; for example, for GENEVE it can be a UDP item
matching on destination port 6081. The flex item can follow multiple
header types, in which case multiple input links should be specified. At
flow creation time an item of one of the input link types should precede
the flex item, and the driver selects the correct flex item settings
depending on the actual flow pattern.

The output link configuration section tells the driver how to continue
packet parsing after the flex item protocol. If multiple protocols can
follow the flex item header, the flex item should contain a field with the
next protocol identifier, and parsing continues depending on the data
contained in this field in the actual packet.

The flex item fields can participate in RSS hash calculation; a dedicated
flag is present in the field description to specify which fields should be
provided for hashing.

5. Flex Item Chaining

If multiple protocols are supposed to be supported with flex items in a
chained fashion - two or more flex items within the same flow, possibly
neighbors in the pattern - the flex items reference each other. In this
case, the item that occurs first should be created with an empty output
link list or with a list referencing only existing items, and the second
flex item should then be created referencing the first flex item as an
input arc.

Also, the hardware resources used by flex items to handle the packet can
be limited. If multiple flex items are supposed to be used within the same
flow, it helps to give the driver a hint that these two or more flex items
are intended for simultaneous usage. The fields of the items should be
assigned hint indices, and these indices from two or more flex items
should not overlap (they must be unique per field), as the sketch below
illustrates.
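
A minimal illustration (the offsets, sizes, and hint values are
hypothetical; the structure and its members are the ones introduced by
this patch):

  /* Fields of two chained flex items used in one flow; distinct non-zero
   * field_id values hint the driver to assign independent hardware
   * resources.
   */
  struct rte_flow_item_flex_field tunnel_a_field = {
      .field_mode = FIELD_MODE_FIXED,
      .field_size = 32,
      .field_base = 64,
      .field_id = 1, /* unique hint index */
  };
  struct rte_flow_item_flex_field tunnel_b_field = {
      .field_mode = FIELD_MODE_FIXED,
      .field_size = 16,
      .field_base = 32,
      .field_id = 2, /* does not overlap with the first item's fields */
  };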
For this case, the driver will try to engage non-overlapping hardware
resources and provide independent handling of the fields with unique
indices. If the hint index is zero, the driver assigns resources on its
own.

6. Example of New Protocol Handling

Let's suppose we have a requirement to handle a new tunnel protocol that
follows a UDP header with destination port 0xFADE and is followed by a MAC
header. Let the new protocol header format be like this:

  struct new_protocol_header {
      rte_be32_t header_length; /* length in dwords, including options */
      rte_be32_t specific0;     /* some protocol data, no intention */
      rte_be32_t specific1;     /* to match in flows on these fields */
      rte_be32_t crucial;       /* data of interest, match is needed */
      rte_be32_t options[0];    /* optional protocol data, variable length */
  };

The supposed flex item configuration:

  struct rte_flow_item_flex_field field0 = {
      .field_mode = FIELD_MODE_DUMMY, /* Affects match pattern only */
      .field_size = 96,               /* three dwords from the beginning */
  };
  struct rte_flow_item_flex_field field1 = {
      .field_mode = FIELD_MODE_FIXED,
      .field_size = 32, /* Field size is one dword */
      .field_base = 96, /* Skip three dwords from the beginning */
  };
  struct rte_flow_item_flex_field sample_fields[] = { field0, field1 };

  struct rte_flow_item_udp spec0 = {
      .hdr = {
          .dst_port = RTE_BE16(0xFADE),
      }
  };
  struct rte_flow_item_udp mask0 = {
      .hdr = {
          .dst_port = RTE_BE16(0xFFFF),
      }
  };
  struct rte_flow_item_flex_link link0 = {
      .item = {
          .type = RTE_FLOW_ITEM_TYPE_UDP,
          .spec = &spec0,
          .mask = &mask0,
      },
  };

  struct rte_flow_item_flex_conf conf = {
      .next_header = {
          .field_mode = FIELD_MODE_OFFSET,
          .field_base = 0,
          .offset_base = 0,
          .offset_mask = 0xFFFFFFFF,
          .offset_shift = 2 /* Expressed in dwords, shift left by 2 */
      },
      .sample_data = sample_fields,
      .sample_num = 2,
      .input_link = &link0,
      .input_num = 1
  };

Let's suppose we have created the flex item successfully and the PMD
returned the handle 0x123456789A. We can use the following item pattern
to match the crucial field in the packet against the value 0x00112233:

  struct new_protocol_header spec_pattern = {
      .crucial = RTE_BE32(0x00112233),
  };
  struct new_protocol_header mask_pattern = {
      .crucial = RTE_BE32(0xFFFFFFFF),
  };
  struct rte_flow_item_flex spec_flex = {
      .handle = 0x123456789A, /* handle returned at item creation */
      .length = sizeof(struct new_protocol_header),
      .pattern = (const uint8_t *)&spec_pattern,
  };
  struct rte_flow_item_flex mask_flex = {
      .length = sizeof(struct new_protocol_header),
      .pattern = (const uint8_t *)&mask_pattern,
  };
  struct rte_flow_item item_to_match = {
      .type = RTE_FLOW_ITEM_TYPE_FLEX,
      .spec = &spec_flex,
      .mask = &mask_flex,
  };

Signed-off-by: Viacheslav Ovsiienko
---
 doc/guides/prog_guide/rte_flow.rst     |  24 +++
 doc/guides/rel_notes/release_21_11.rst |   7 +
 lib/ethdev/rte_ethdev.h                |   1 +
 lib/ethdev/rte_flow.h                  | 228 +++++++++++++++++++++++++
 4 files changed, 260 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 2b42d5ec8c..628f30cea7 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1425,6 +1425,30 @@ Matches a conntrack state after conntrack action.
 - ``flags``: conntrack packet state flags.
 - Default ``mask`` matches all state bits.
 
+Item: ``FLEX``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Matches a network protocol header of preliminarily configured format.
+The application describes the desired header structure, defines the header
+field attributes and the header relations with preceding and following
+protocols, and configures the Ethernet device accordingly via the
+rte_flow_flex_item_create() routine.
+
+- ``handle``: the flex item handle returned by the PMD on successful
+  rte_flow_flex_item_create() call. The item handle is unique within
+  the device port; the mask for this field is ignored.
+- ``length``: match pattern length in bytes. If the length does not cover
+  all fields defined in the item configuration, the pattern spec and mask
+  are supposed to be appended with zeroes up to the full configured item
+  length.
+- ``pattern``: pattern to match. The protocol header fields are considered
+  as bit fields, all offsets and widths are expressed in bits. The pattern
+  is the buffer containing the bit concatenation of all the fields presented
+  at item configuration time, in the same order and same amount. The most
+  regular way is to define all the header fields in the flex item
+  configuration and directly use the header structure as the pattern
+  template, i.e. the application can just fill the header structures with
+  the desired match values and masks and specify these structures as the
+  flex item pattern directly.
+
 Actions
 ~~~~~~~
 
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 73e377a007..170797f9e9 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -55,6 +55,13 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* **Introduced RTE Flow Flex Item.**
+
+  * The configurable RTE Flow Flex Item provides the capability to introduce
+    an arbitrary user-specified network protocol header, configure the device
+    hardware accordingly, and perform match on this header with desired
+    patterns and masks.
+
 * **Enabled new devargs parser.**
 
   * Enabled devargs syntax
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index afdc53b674..e9ad7673e9 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -558,6 +558,7 @@ struct rte_eth_rss_conf {
  * it takes the reserved value 0 as input for the hash function.
  */
 #define ETH_RSS_L4_CHKSUM (1ULL << 35)
+#define ETH_RSS_FLEX      (1ULL << 36)
 
 /*
  * We use the following macros to combine with above ETH_RSS_* for
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 7b1ed7f110..eccb1e1791 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -574,6 +574,15 @@ enum rte_flow_item_type {
	 * @see struct rte_flow_item_conntrack.
	 */
	RTE_FLOW_ITEM_TYPE_CONNTRACK,
+
+	/**
+	 * Matches a configured set of fields at runtime-calculated offsets
+	 * over the generic network header with variable length and
+	 * flexible pattern.
+	 *
+	 * @see struct rte_flow_item_flex.
+	 */
+	RTE_FLOW_ITEM_TYPE_FLEX,
 };
 
 /**
@@ -1839,6 +1848,160 @@ struct rte_flow_item {
	const void *mask; /**< Bit-mask applied to spec and last. */
 };
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ITEM_TYPE_FLEX
+ *
+ * Matches a specified set of fields within the network protocol
+ * header. Each field is presented as a set of bits with specified width and
+ * bit offset (the offset is dynamic - it can be calculated by several methods
+ * at runtime) from the header beginning.
+ *
+ * The pattern is a concatenation of all bit fields configured at item creation
+ * by rte_flow_flex_item_create(), exactly in the same order and amount; no
+ * fields can be omitted or swapped. The dummy mode field can be used for
+ * pattern byte boundary alignment, least significant bit in byte goes first.
+ * Only the fields specified in the sample_data configuration parameter
+ * participate in pattern construction.
+ *
+ * If the pattern length is smaller than the configured fields' overall
+ * length, it is extended with trailing zeroes, both for value and mask.
+ *
+ * This type does not support ranges (struct rte_flow_item.last).
+ */
+struct rte_flow_item_flex {
+	struct rte_flow_item_flex_handle *handle; /**< Opaque item handle. */
+	uint32_t length; /**< Pattern length in bytes. */
+	const uint8_t *pattern; /**< Combined bitfields pattern to match. */
+};
+/**
+ * Field bit offset calculation mode.
+ */
+enum rte_flow_item_flex_field_mode {
+	/**
+	 * Dummy field, used for byte boundary alignment in pattern.
+	 * Pattern mask and data are ignored in the match. All configuration
+	 * parameters besides field size are ignored.
+	 */
+	FIELD_MODE_DUMMY = 0,
+	/**
+	 * Fixed offset field. The bit offset from the header beginning
+	 * is permanent and defined by the field_base parameter.
+	 */
+	FIELD_MODE_FIXED,
+	/**
+	 * The field bit offset is extracted from other header field (indirect
+	 * offset field). The resulting field offset to match is calculated as:
+	 *
+	 *    field_base + (*field_offset & offset_mask) << field_shift
+	 */
+	FIELD_MODE_OFFSET,
+	/**
+	 * The field bit offset is extracted from other header field (indirect
+	 * offset field), the latter is considered as bitmask containing some
+	 * number of one bits, the resulting field offset to match is
+	 * calculated as:
+	 *
+	 *    field_base + bitcount(*field_offset & offset_mask) << field_shift
+	 */
+	FIELD_MODE_BITMASK,
+};
+
+/**
+ * Flex item field tunnel mode
+ */
+enum rte_flow_item_flex_tunnel_mode {
+	FLEX_TUNNEL_MODE_FIRST = 0, /**< First item occurrence. */
+	FLEX_TUNNEL_MODE_OUTER = 1, /**< Outer item. */
+	FLEX_TUNNEL_MODE_INNER = 2  /**< Inner item. */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ */
+__extension__
+struct rte_flow_item_flex_field {
+	/** Defines how match field offset is calculated over the packet. */
+	enum rte_flow_item_flex_field_mode field_mode;
+	uint32_t field_size; /**< Match field size in bits. */
+	int32_t field_base; /**< Match field offset in bits. */
+	uint32_t offset_base; /**< Indirect offset field offset in bits. */
+	uint32_t offset_mask; /**< Indirect offset field bit mask. */
+	int32_t offset_shift; /**< Indirect offset multiply factor. */
+	uint16_t tunnel_count:2; /**< 0-first occurrence, 1-outer, 2-inner. */
+	uint16_t rss_hash:1; /**< Field participates in RSS hash calculation. */
+	uint16_t field_id; /**< Device hint, for flows with multiple items. */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ */
+struct rte_flow_item_flex_link {
+	/**
+	 * Preceding/following header. The item type must always be provided.
+	 * For the preceding one the item must specify the header value/mask
+	 * to match for the link to be taken and the flex item header parsing
+	 * to start.
+	 */
+	struct rte_flow_item item;
+	/**
+	 * Next field value to match to continue with one of the configured
+	 * next protocols.
+	 */
+	uint32_t next;
+	/**
+	 * Specifies whether the flex item represents a tunnel protocol.
+	 */
+	bool tunnel;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ */
+struct rte_flow_item_flex_conf {
+	/**
+	 * The next header offset, it presents the network header size covered
+	 * by the flex item and can be obtained with all supported offset
+	 * calculating methods (fixed, dedicated field, bitmask, etc).
+	 */
+	struct rte_flow_item_flex_field next_header;
+	/**
+	 * Specifies the next protocol field to match with link next protocol
+	 * values and continue packet parsing with matching link.
+	 */
+	struct rte_flow_item_flex_field next_protocol;
+	/**
+	 * The fields will be sampled and presented for explicit match
+	 * with pattern in the rte_flow_flex_item. There can be multiple
+	 * field descriptors, the number should be specified by sample_num.
+	 */
+	struct rte_flow_item_flex_field *sample_data;
+	/** Number of field descriptors in the sample_data array. */
+	uint32_t sample_num;
+	/**
+	 * Input link defines the flex item relation with the preceding
+	 * header. It specifies the preceding item type and provides the
+	 * pattern to match. The flex item will continue parsing and will
+	 * provide the data to flow match in case there is a match with one
+	 * of the input links.
+	 */
+	struct rte_flow_item_flex_link *input_link;
+	/** Number of link descriptors in the input link array. */
+	uint32_t input_num;
+	/**
+	 * Output link defines the next protocol field value to match and
+	 * the following protocol header to continue packet parsing. Also
+	 * defines the tunnel-related behaviour.
+	 */
+	struct rte_flow_item_flex_link *output_link;
+	/** Number of link descriptors in the output link array. */
+	uint32_t output_num;
+};
+
 /**
  * Action types.
  *
@@ -4288,6 +4451,71 @@ rte_flow_tunnel_item_release(uint16_t port_id,
			     struct rte_flow_item *items,
			     uint32_t num_of_items,
			     struct rte_flow_error *error);
+
+/**
+ * Create the flex item with specified configuration over
+ * the Ethernet device.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] conf
+ *   Item configuration.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. PMDs initialize this
+ *   structure in case of error only.
+ *
+ * @return
+ *   Non-NULL opaque pointer on success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_item_flex_handle *
+rte_flow_flex_item_create(uint16_t port_id,
+			  const struct rte_flow_item_flex_conf *conf,
+			  struct rte_flow_error *error);
+
+/**
+ * Release the flex item on the specified Ethernet device.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] handle
+ *   Handle of the item existing on the specified device.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. PMDs initialize this
+ *   structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_flex_item_release(uint16_t port_id,
+			   const struct rte_flow_item_flex_handle *handle,
+			   struct rte_flow_error *error);
+
+/**
+ * Modify the flex item on the specified Ethernet device.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] handle
+ *   Handle of the item existing on the specified device.
+ * @param[in] conf
+ *   Item new configuration.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. PMDs initialize this
+ *   structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_flex_item_update(uint16_t port_id,
+			  const struct rte_flow_item_flex_handle *handle,
+			  const struct rte_flow_item_flex_conf *conf,
+			  struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
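
For completeness, a minimal usage sketch of the new API from the
application side; the helper name and parameters are illustrative, error
handling is reduced, and the flex item configuration and the pattern
buffers are assumed to be prepared as in section 6 above:

  #include <rte_errno.h>
  #include <rte_flow.h>

  /* Hypothetical helper: create a flex item, use it in one flow,
   * then release it.
   */
  static int
  flex_item_demo(uint16_t port_id,
                 const struct rte_flow_item_flex_conf *conf,
                 const struct rte_flow_attr *attr,
                 const struct rte_flow_action *actions,
                 struct rte_flow_item_flex *spec_flex,
                 const struct rte_flow_item_flex *mask_flex)
  {
          struct rte_flow_item_flex_handle *handle;
          struct rte_flow_item pattern[2];
          struct rte_flow_error error;
          struct rte_flow *flow;

          /* The PMD returns an opaque handle identifying the flex item. */
          handle = rte_flow_flex_item_create(port_id, conf, &error);
          if (handle == NULL)
                  return -rte_errno;
          /* The handle is referenced from the flow pattern spec. */
          spec_flex->handle = handle;
          pattern[0] = (struct rte_flow_item){
                  .type = RTE_FLOW_ITEM_TYPE_FLEX,
                  .spec = spec_flex,
                  .mask = mask_flex,
          };
          pattern[1] = (struct rte_flow_item){
                  .type = RTE_FLOW_ITEM_TYPE_END,
          };
          flow = rte_flow_create(port_id, attr, pattern, actions, &error);
          /* Flows using the flex item are destroyed before its release. */
          if (flow != NULL)
                  rte_flow_destroy(port_id, flow, &error);
          /* Release the flex item and the engaged hardware resources. */
          return rte_flow_flex_item_release(port_id, handle, &error);
  }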