From patchwork Fri Oct 1 19:34:02 2021
X-Patchwork-Submitter: Slava Ovsiienko
X-Patchwork-Id: 100344
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Viacheslav Ovsiienko
Date: Fri, 1 Oct 2021 22:34:02 +0300
Message-ID: <20211001193415.23288-2-viacheslavo@nvidia.com>
In-Reply-To: <20211001193415.23288-1-viacheslavo@nvidia.com>
References: <20210922180418.20663-1-viacheslavo@nvidia.com>
 <20211001193415.23288-1-viacheslavo@nvidia.com>
Subject: [dpdk-dev] [PATCH v2 01/14] ethdev: introduce configurable flexible item
List-Id: DPDK patches and discussions

1. Introduction and Retrospective

Networks are evolving fast, network structures are getting more and
more complicated, and new application areas keep emerging. To address
these challenges, new network protocols are continuously being
developed, considered by technical communities, adopted by industry
and, eventually, implemented in hardware and software. The DPDK
framework follows this trend: a glance at the RTE Flow API header
shows that multiple new items have been introduced since the initial
release.
New protocol adoption and implementation is not a straightforward
process and takes time; a new protocol passes through development,
consideration, adoption, and implementation phases. The industry tries
to anticipate forthcoming network protocols; for example, many
hardware vendors implement flexible and configurable network protocol
parsers. As DPDK developers, could we anticipate the near future in
the same fashion and introduce similar flexibility in the RTE Flow
API?

Let's check what is already merged in the project: there is the raw
item (rte_flow_item_raw). At first glance it looks suitable, and we
can try to implement flow matching on the header of some relatively
new tunnel protocol, say on the GENEVE header with variable-length
options. Under further consideration, however, we run into the raw
item limitations:

- only a fixed-size network header can be represented
- the entire network header pattern of fixed format (header field
  offsets are fixed) must be provided
- the search for patterns is not robust (wrong matches might be
  triggered), and is actually not supported by existing PMDs
- no explicitly specified relations with preceding and following items
- no tunnel hint support

As a result, implementing support for tunnel protocols like the
aforementioned GENEVE with variable extra protocol options using the
raw item becomes very complicated and would require multiple flows and
multiple raw items chained in the same flow (by the way, no support
for chained raw items was found in the implemented drivers).

This RFC introduces the dedicated flex item (rte_flow_item_flex) to
handle matches with existing and new network protocol headers in a
unified fashion.

2. Flex Item Life Cycle

Let's assume there is a requirement to support a new network protocol
with RTE Flows.
What is given within the protocol specification:

- header format
- header length (can be variable, depending on options)
- potential presence of extra options following or included in the
  header
- the relations with preceding protocols; for example, GENEVE follows
  UDP, while eCPRI can follow either UDP or an L2 header
- the relations with following protocols; for example, the next layer
  after a tunnel header can be L2 or L3
- whether the new protocol is a tunnel and its header is a splitting
  point between outer and inner layers

The supposed way to operate with the flex item:

- the application defines the header structures according to the
  protocol specification
- the application calls rte_flow_flex_item_create() with the desired
  configuration according to the protocol specification; this creates
  the flex item object over the specified Ethernet device and prepares
  the PMD and underlying hardware to handle the flex item. On the item
  creation call, the PMD backing the specified Ethernet device returns
  an opaque handle identifying the object that has been created
- the application uses rte_flow_item_flex with the obtained handle in
  flows; the values/masks to match with fields in the header are
  specified in the flex item per flow as for regular items (except
  that the pattern buffer combines all fields)
- flows with flex items match packets in a regular fashion; the values
  and masks for the new protocol header match are taken from the flex
  items in the flows
- the application destroys flows with flex items
- the application calls rte_flow_flex_item_release() as part of the
  Ethernet device API, which destroys the flex item object in the PMD
  and releases the engaged hardware resources

3. Flex Item Structure

The flex item structure is intended to be used as part of the flow
pattern, like regular RTE flow items, and provides the mask and value
to match with fields of the protocol the item was configured for.
struct rte_flow_item_flex {
	void *handle;
	uint32_t length;
	const uint8_t *pattern;
};

The handle is an opaque object maintained on a per-device basis by the
underlying driver.

The protocol header fields are considered as bit fields; all offsets
and widths are expressed in bits. The pattern is the buffer containing
the bit concatenation of all the fields presented at item
configuration time, in the same order and the same amount. If
byte-boundary alignment is needed, an application can use a dummy type
field, which is just a kind of gap filler.

The length field specifies the pattern buffer length in bytes and is
needed to allow rte_flow_copy() operations. The approach of multiple
pattern pointers and lengths (per field) was considered and found
clumsy: it is much more suitable for the application to maintain a
single structure with a single pattern buffer.

4. Flex Item Configuration

The flex item configuration consists of the following parts:

- header field descriptors:
  - next header
  - next protocol
  - sample to match
- input link descriptors
- output link descriptors

The field descriptors tell the driver and hardware what data should be
extracted from the packet and then presented to match in the flows.
Each field is a bit pattern: it has a width, an offset from the header
beginning, a mode of offset calculation, and offset-related
parameters.

The next header field is special: no data is actually taken from the
packet, but its offset is used as a pointer to the next header in the
packet. In other words, the next header offset specifies the size of
the header being parsed by the flex item.

There is one more special field, next protocol: it specifies where the
next protocol identifier is contained, and packet data sampled from
this field is used to determine the next protocol header type to
continue packet parsing. The next protocol field is like the
ether_type field in an L2 (MAC) header, or the proto field in IPv4/v6
headers.
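As a rough illustration of the bit-concatenation rule described above,
the sampled fields can be packed into a single pattern buffer, least
significant bit of each byte first. This is a hypothetical helper of
ours, not part of the patch; the name pack_fields and the fixed 32-bit
field values are our assumptions:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical helper (not part of the patch): concatenates sampled
 * field values bit by bit into one pattern buffer, in configuration
 * order, filling each byte starting from its least significant bit. */
static void
pack_fields(uint8_t *buf, size_t buf_size,
	    const uint32_t *values, const uint32_t *widths, size_t n)
{
	size_t bit = 0;
	size_t i;
	uint32_t b;

	memset(buf, 0, buf_size);
	for (i = 0; i < n; i++)
		for (b = 0; b < widths[i]; b++, bit++)
			if (values[i] & (1u << b))
				buf[bit / 8] |= (uint8_t)(1u << (bit % 8));
}
```

With this layout, two adjacent 4-bit fields holding 0x5 and 0x3 end up
as the single pattern byte 0x35, matching the "same order and same
amount" rule.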
The sample fields represent the data to be sampled from the packet and
then matched with established flows.

There are several methods to calculate a field offset at runtime,
depending on configuration and packet content:

- FIELD_MODE_FIXED - fixed offset. The bit offset from the header
  beginning is permanent and defined by the field_base configuration
  parameter.

- FIELD_MODE_OFFSET - the field bit offset is extracted from another
  header field (the indirect offset field). The resulting field offset
  to match is calculated as:

    field_base + (*field_offset & offset_mask) << field_shift

  This mode is useful to sample extra options following the main
  header when some field contains the main header length. This mode
  can also be used to calculate the offset to the next protocol
  header; for example, the IPv4 header contains a 4-bit field with the
  IPv4 header length expressed in dwords. As one more example, this
  mode would allow us to skip the GENEVE header's variable-length
  options.

- FIELD_MODE_BITMASK - the field bit offset is extracted from another
  header field (the indirect offset field); the latter is considered
  as a bitmask containing some number of one bits. The resulting field
  offset to match is calculated as:

    field_base + bitcount(*field_offset & offset_mask) << field_shift

  This mode is useful to skip the GTP header and its extra options
  with specified flags.

- FIELD_MODE_DUMMY - dummy field, optionally used for byte-boundary
  alignment in the pattern. Pattern mask and data are ignored in the
  match. All configuration parameters besides field size and offset
  are ignored.

The offset mode list can be extended by vendors according to
hardware-supported options.

The input link configuration section tells the driver after which
protocols and under what conditions the flex item can follow. An input
link specifies the preceding header pattern; for example, for GENEVE
it can be a UDP item specifying a match on destination port with value
6081.
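The two indirect-offset formulas above can be sketched in C. This is a
sketch under our assumptions: the grouping field_base +
((value & mask) << shift) is assumed for both formulas, the function
names are ours, and the shift values used below are illustrative, not
taken from any driver:

```c
#include <assert.h>
#include <stdint.h>

/* FIELD_MODE_OFFSET: the bit offset is read from an indirect offset
 * field, masked, scaled by a left shift, and added to field_base. */
static uint32_t
flex_mode_offset(int32_t field_base, uint32_t offset_field_value,
		 uint32_t offset_mask, uint32_t field_shift)
{
	return (uint32_t)(field_base +
			  ((offset_field_value & offset_mask) << field_shift));
}

/* FIELD_MODE_BITMASK: the indirect field is treated as a bitmask; the
 * count of one bits (GCC/Clang __builtin_popcount) drives the offset. */
static uint32_t
flex_mode_bitmask(int32_t field_base, uint32_t offset_field_value,
		  uint32_t offset_mask, uint32_t field_shift)
{
	uint32_t ones = (uint32_t)__builtin_popcount(offset_field_value &
						     offset_mask);

	return (uint32_t)(field_base + (ones << field_shift));
}
```

For instance, with an IPv4-style 4-bit length field holding 5 (dwords)
and a shift of 5 (dwords to bits), flex_mode_offset(0, 5, 0xF, 5)
gives a 160-bit (20-byte) offset; flex_mode_bitmask(0, 0x7, 0x7, 5)
counts three flag bits and gives 96.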
The flex item can follow multiple header types, so multiple input
links should be specified. At flow creation time, an item with one of
the input link types should precede the flex item, and the driver will
select the correct flex item settings depending on the actual flow
pattern.

The output link configuration section tells the driver how to continue
packet parsing after the flex item protocol. If multiple protocols can
follow the flex item header, the flex item should contain the field
with the next protocol identifier, and parsing will continue depending
on the data contained in this field in the actual packet.

The flex item fields can participate in RSS hash calculation; a
dedicated flag is present in the field description to specify which
fields should be provided for hashing.

5. Flex Item Chaining

If multiple protocols are supposed to be supported with flex items in
a chained fashion (two or more flex items within the same flow,
possibly neighbors in the pattern), the flex items reference each
other. In this case, the item that occurs first should be created with
an empty output link list, or with a list including only existing
items, and then the second flex item should be created referencing the
first flex item as an input arc.

Also, the hardware resources used by flex items to handle packets can
be limited. If multiple flex items are supposed to be used within the
same flow, it is helpful to give the driver a hint that these two or
more flex items are intended for simultaneous usage. The fields of the
items should be assigned hint indices, and these indices from two or
more flex items should not overlap (they must be unique per field). In
this case, the driver will try to engage non-overlapping hardware
resources and provide independent handling of the fields with unique
indices. If the hint index is zero, the driver assigns resources on
its own.

6.
Example of New Protocol Handling

Let's suppose we have a requirement to handle a new tunnel protocol
that follows a UDP header with destination port 0xFADE and is followed
by a MAC header. Let the new protocol header format be:

struct new_protocol_header {
	rte_be32 header_length; /* length in dwords, including options */
	rte_be32 specific0;     /* some protocol data, no intention    */
	rte_be32 specific1;     /* to match in flows on these fields   */
	rte_be32 crucial;       /* data of interest, match is needed   */
	rte_be32 options[0];    /* optional protocol data, variable length */
};

The supposed flex item configuration:

struct rte_flow_item_flex_field field0 = {
	.field_mode = FIELD_MODE_DUMMY, /* Affects match pattern only */
	.field_size = 96,               /* Three dwords from the beginning */
};
struct rte_flow_item_flex_field field1 = {
	.field_mode = FIELD_MODE_FIXED,
	.field_size = 32, /* Field size is one dword */
	.field_base = 96, /* Skip three dwords from the beginning */
};
struct rte_flow_item_udp spec0 = {
	.hdr = {
		.dst_port = RTE_BE16(0xFADE),
	}
};
struct rte_flow_item_udp mask0 = {
	.hdr = {
		.dst_port = RTE_BE16(0xFFFF),
	}
};
struct rte_flow_item_flex_link link0 = {
	.item = {
		.type = RTE_FLOW_ITEM_TYPE_UDP,
		.spec = &spec0,
		.mask = &mask0,
	},
};
struct rte_flow_item_flex_conf conf = {
	.next_header = {
		.field_mode = FIELD_MODE_OFFSET,
		.field_base = 0,
		.offset_base = 0,
		.offset_mask = 0xFFFFFFFF,
		.offset_shift = 2 /* Expressed in dwords, shift left by 2 */
	},
	.sample = {
		&field0,
		&field1,
	},
	.sample_num = 2,
	.input_link[0] = &link0,
	.input_num = 1
};

Let's suppose we have created the flex item successfully, and the PMD
returned the handle 0x123456789A.
We can use the following item pattern to match the crucial field in
the packet with value 0x00112233:

struct new_protocol_header spec_pattern = {
	.crucial = RTE_BE32(0x00112233),
};
struct new_protocol_header mask_pattern = {
	.crucial = RTE_BE32(0xFFFFFFFF),
};
struct rte_flow_item_flex spec_flex = {
	.handle = 0x123456789A,
	.length = sizeof(struct new_protocol_header),
	.pattern = &spec_pattern,
};
struct rte_flow_item_flex mask_flex = {
	.length = sizeof(struct new_protocol_header),
	.pattern = &mask_pattern,
};
struct rte_flow_item item_to_match = {
	.type = RTE_FLOW_ITEM_TYPE_FLEX,
	.spec = &spec_flex,
	.mask = &mask_flex,
};

Signed-off-by: Viacheslav Ovsiienko
---
 doc/guides/prog_guide/rte_flow.rst     |  24 +++
 doc/guides/rel_notes/release_21_11.rst |   7 +
 lib/ethdev/rte_ethdev.h                |   1 +
 lib/ethdev/rte_flow.h                  | 228 +++++++++++++++++++++++++
 4 files changed, 260 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 2b42d5ec8c..628f30cea7 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1425,6 +1425,30 @@ Matches a conntrack state after conntrack action.
 - ``flags``: conntrack packet state flags.
 - Default ``mask`` matches all state bits.
 
+Item: ``FLEX``
+^^^^^^^^^^^^^^
+
+Matches with a network protocol header of preliminarily configured format.
+The application describes the desired header structure, defines the header
+field attributes and the header relations with preceding and following
+protocols, and configures the Ethernet devices accordingly via the
+rte_flow_flex_item_create() routine.
+
+- ``handle``: the flex item handle returned by the PMD on successful
+  rte_flow_flex_item_create() call. The item handle is unique within
+  the device port; the mask for this field is ignored.
+- ``length``: match pattern length in bytes.
+  If the length does not cover
+  all fields defined in the item configuration, the pattern spec and
+  mask are supposed to be appended with zeroes up to the full
+  configured item length.
+- ``pattern``: pattern to match. The protocol header fields are considered
+  as bit fields; all offsets and widths are expressed in bits. The pattern
+  is the buffer containing the bit concatenation of all the fields
+  presented at item configuration time, in the same order and the same
+  amount. The most regular way is to define all the header fields in the
+  flex item configuration and directly use the header structure as the
+  pattern template, i.e. the application can just fill the header
+  structures with the desired match values and masks and specify these
+  structures as the flex item pattern directly.
+
 Actions
 ~~~~~~~
 
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 73e377a007..170797f9e9 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -55,6 +55,13 @@ New Features
   Also, make sure to start the actual text at the margin.
   =======================================================
 
+* **Introduced RTE Flow Flex Item.**
+
+  * The configurable RTE Flow Flex Item provides the capability to
+    introduce an arbitrary user-specified network protocol header,
+    configure the device hardware accordingly, and perform match on
+    this header with desired patterns and masks.
+
 * **Enabled new devargs parser.**
 
   * Enabled devargs syntax

diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index afdc53b674..e9ad7673e9 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -558,6 +558,7 @@ struct rte_eth_rss_conf {
  * it takes the reserved value 0 as input for the hash function.
  */
 #define ETH_RSS_L4_CHKSUM (1ULL << 35)
+#define ETH_RSS_FLEX (1ULL << 36)
 
 /*
  * We use the following macros to combine with above ETH_RSS_* for

diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 7b1ed7f110..eccb1e1791 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -574,6 +574,15 @@ enum rte_flow_item_type {
 	 * @see struct rte_flow_item_conntrack.
 	 */
 	RTE_FLOW_ITEM_TYPE_CONNTRACK,
+
+	/**
+	 * Matches a configured set of fields at runtime-calculated offsets
+	 * over the generic network header with variable length and
+	 * flexible pattern.
+	 *
+	 * @see struct rte_flow_item_flex.
+	 */
+	RTE_FLOW_ITEM_TYPE_FLEX,
 };
 
 /**
@@ -1839,6 +1848,160 @@ struct rte_flow_item {
 	const void *mask; /**< Bit-mask applied to spec and last. */
 };
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ITEM_TYPE_FLEX
+ *
+ * Matches a specified set of fields within the network protocol
+ * header. Each field is presented as a set of bits with specified
+ * width and bit offset (the offset is dynamic - it can be calculated
+ * by several methods at runtime) from the header beginning.
+ *
+ * The pattern is the concatenation of all bit fields configured at
+ * item creation by rte_flow_flex_item_create(), exactly in the same
+ * order and amount; no fields can be omitted or swapped. The dummy
+ * mode field can be used for pattern byte-boundary alignment; the
+ * least significant bit in a byte goes first. Only the fields
+ * specified in the sample_data configuration parameter participate
+ * in pattern construction.
+ *
+ * If the pattern length is smaller than the configured fields'
+ * overall length, it is extended with trailing zeroes, both for value
+ * and mask.
+ *
+ * This type does not support ranges (struct rte_flow_item.last).
+ */
+struct rte_flow_item_flex {
+	struct rte_flow_item_flex_handle *handle; /**< Opaque item handle. */
+	uint32_t length; /**< Pattern length in bytes.
+	 */
+	const uint8_t *pattern; /**< Combined bitfields pattern to match. */
+};
+
+/**
+ * Field bit offset calculation mode.
+ */
+enum rte_flow_item_flex_field_mode {
+	/**
+	 * Dummy field, used for byte boundary alignment in pattern.
+	 * Pattern mask and data are ignored in the match. All configuration
+	 * parameters besides field size are ignored.
+	 */
+	FIELD_MODE_DUMMY = 0,
+	/**
+	 * Fixed offset field. The bit offset from header beginning is
+	 * permanent and defined by the field_base parameter.
+	 */
+	FIELD_MODE_FIXED,
+	/**
+	 * The field bit offset is extracted from other header field (indirect
+	 * offset field). The resulting field offset to match is calculated as:
+	 *
+	 *    field_base + (*field_offset & offset_mask) << field_shift
+	 */
+	FIELD_MODE_OFFSET,
+	/**
+	 * The field bit offset is extracted from other header field (indirect
+	 * offset field), the latter is considered as bitmask containing some
+	 * number of one bits, the resulting field offset to match is
+	 * calculated as:
+	 *
+	 *    field_base + bitcount(*field_offset & offset_mask) << field_shift
+	 */
+	FIELD_MODE_BITMASK,
+};
+
+/**
+ * Flex item field tunnel mode
+ */
+enum rte_flow_item_flex_tunnel_mode {
+	FLEX_TUNNEL_MODE_FIRST = 0, /**< First item occurrence. */
+	FLEX_TUNNEL_MODE_OUTER = 1, /**< Outer item. */
+	FLEX_TUNNEL_MODE_INNER = 2  /**< Inner item. */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ */
+__extension__
+struct rte_flow_item_flex_field {
+	/** Defines how match field offset is calculated over the packet. */
+	enum rte_flow_item_flex_field_mode field_mode;
+	uint32_t field_size; /**< Match field size in bits. */
+	int32_t field_base; /**< Match field offset in bits. */
+	uint32_t offset_base; /**< Indirect offset field offset in bits. */
+	uint32_t offset_mask; /**< Indirect offset field bit mask. */
+	int32_t offset_shift; /**< Indirect offset multiply factor.
+	 */
+	uint16_t tunnel_count:2; /**< 0-first occurrence, 1-outer, 2-inner. */
+	uint16_t rss_hash:1; /**< Field participates in RSS hash calculation. */
+	uint16_t field_id; /**< Device hint, for flows with multiple items. */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ */
+struct rte_flow_item_flex_link {
+	/**
+	 * Preceding/following header. The item type must always be provided.
+	 * A preceding item must specify the header value/mask to match
+	 * for the link to be taken and the flex item header parsing to
+	 * start.
+	 */
+	struct rte_flow_item item;
+	/**
+	 * Next field value to match to continue with one of the configured
+	 * next protocols.
+	 */
+	uint32_t next;
+	/**
+	 * Specifies whether the flex item represents a tunnel protocol.
+	 */
+	bool tunnel;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ */
+struct rte_flow_item_flex_conf {
+	/**
+	 * The next header offset, it presents the network header size covered
+	 * by the flex item and can be obtained with all supported offset
+	 * calculating methods (fixed, dedicated field, bitmask, etc).
+	 */
+	struct rte_flow_item_flex_field next_header;
+	/**
+	 * Specifies the next protocol field to match with link next protocol
+	 * values and continue packet parsing with the matching link.
+	 */
+	struct rte_flow_item_flex_field next_protocol;
+	/**
+	 * The fields will be sampled and presented for explicit match
+	 * with pattern in the rte_flow_flex_item. There can be multiple
+	 * field descriptors; the number should be specified by sample_num.
+	 */
+	struct rte_flow_item_flex_field *sample_data;
+	/** Number of field descriptors in the sample_data array. */
+	uint32_t sample_num;
+	/**
+	 * Input link defines the flex item relation with the preceding
+	 * header. It specifies the preceding item type and provides the
+	 * pattern to match.
+	 * The flex item will continue parsing and will provide the
+	 * data to flow match in case there is a match with one of the
+	 * input links.
+	 */
+	struct rte_flow_item_flex_link *input_link;
+	/** Number of link descriptors in the input link array. */
+	uint32_t input_num;
+	/**
+	 * Output link defines the next protocol field value to match and
+	 * the following protocol header to continue packet parsing. Also
+	 * defines the tunnel-related behaviour.
+	 */
+	struct rte_flow_item_flex_link *output_link;
+	/** Number of link descriptors in the output link array. */
+	uint32_t output_num;
+};
+
 /**
  * Action types.
  *
@@ -4288,6 +4451,71 @@ rte_flow_tunnel_item_release(uint16_t port_id,
 			     struct rte_flow_item *items,
 			     uint32_t num_of_items,
 			     struct rte_flow_error *error);
+
+/**
+ * Create the flex item with specified configuration over
+ * the Ethernet device.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] conf
+ *   Item configuration.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. PMDs initialize this
+ *   structure in case of error only.
+ *
+ * @return
+ *   Non-NULL opaque pointer on success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_item_flex_handle *
+rte_flow_flex_item_create(uint16_t port_id,
+			  const struct rte_flow_item_flex_conf *conf,
+			  struct rte_flow_error *error);
+
+/**
+ * Release the flex item on the specified Ethernet device.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] handle
+ *   Handle of the item existing on the specified device.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. PMDs initialize this
+ *   structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_flex_item_release(uint16_t port_id,
+			   const struct rte_flow_item_flex_handle *handle,
+			   struct rte_flow_error *error);
+
+/**
+ * Modify the flex item on the specified Ethernet device.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] handle
+ *   Handle of the item existing on the specified device.
+ * @param[in] conf
+ *   Item new configuration.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. PMDs initialize this
+ *   structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_flex_item_update(uint16_t port_id,
+			  const struct rte_flow_item_flex_handle *handle,
+			  const struct rte_flow_item_flex_conf *conf,
+			  struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif

From patchwork Fri Oct 1 19:34:03 2021
X-Patchwork-Submitter: Slava Ovsiienko
X-Patchwork-Id: 100345
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Viacheslav Ovsiienko
Date: Fri, 1 Oct 2021 22:34:03 +0300
Message-ID: <20211001193415.23288-3-viacheslavo@nvidia.com>
In-Reply-To: <20211001193415.23288-1-viacheslavo@nvidia.com>
References: <20210922180418.20663-1-viacheslavo@nvidia.com> <20211001193415.23288-1-viacheslavo@nvidia.com>
Subject: [dpdk-dev] [PATCH v2 02/14] ethdev: support flow elements with variable length
List-Id: DPDK patches and discussions

From: Gregory Etelson

The RTE flow API provides the RAW item type for packet patterns of
variable length. The RAW item structure has fixed-size members that
describe the variable pattern length and the methods to process it.
A new flow item type with a variable-length pattern that does not fit
the RAW item meta description cannot reuse the RAW item. For example,
a new flow item that references a 64-bit PMD handle cannot be described
by the RAW item. This patch allows the rte_flow_conv() helper functions
to process custom flow items with a variable-length pattern.

Signed-off-by: Gregory Etelson
---
 lib/ethdev/rte_flow.c | 68 ++++++++++++++++++++++++++++++++++---------
 1 file changed, 55 insertions(+), 13 deletions(-)

diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 8cb7a069c8..fe199eaeb3 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -30,13 +30,54 @@ uint64_t rte_flow_dynf_metadata_mask;
 struct rte_flow_desc_data {
 	const char *name;
 	size_t size;
+	size_t (*desc_fn)(void *dst, const void *src);
 };
 
+/**
+ *
+ * @param buf
+ *   Destination memory.
+ * @param data
+ *   Source memory.
+ * @param size
+ *   Requested copy size.
+ * @param desc
+ *   rte_flow_desc_item - for flow item conversion.
+ *   rte_flow_desc_action - for flow action conversion.
+ * @param type
+ *   Offset into the desc param or negative value for private flow elements.
+ */
+static inline size_t
+rte_flow_conv_copy(void *buf, const void *data, const size_t size,
+		   const struct rte_flow_desc_data *desc, int type)
+{
+	/**
+	 * Allow PMD private flow item.
+	 * See commit 5d1bff8fe2
+	 * ("ethdev: allow negative values in flow rule types").
+	 */
+	size_t sz = type >= 0 ? desc[type].size : sizeof(void *);
+
+	if (buf == NULL || data == NULL)
+		return 0;
+	rte_memcpy(buf, data, (size > sz ? sz : size));
+	if (type >= 0 && desc[type].desc_fn)
+		sz += desc[type].desc_fn(size > 0 ? buf : NULL, data);
+	return sz;
+}
+
 /** Generate flow_item[] entry. */
 #define MK_FLOW_ITEM(t, s) \
 	[RTE_FLOW_ITEM_TYPE_ ## t] = { \
 		.name = # t, \
-		.size = s, \
+		.size = s, \
+		.desc_fn = NULL,\
+	}
+
+#define MK_FLOW_ITEM_FN(t, s, fn) \
+	[RTE_FLOW_ITEM_TYPE_ ## t] = {\
+		.name = # t, \
+		.size = s, \
+		.desc_fn = fn, \
 	}
 
 /** Information about known flow pattern items. */
@@ -107,8 +148,17 @@ static const struct rte_flow_desc_data rte_flow_desc_item[] = {
 	[RTE_FLOW_ACTION_TYPE_ ## t] = { \
 		.name = # t, \
 		.size = s, \
+		.desc_fn = NULL,\
+	}
+
+#define MK_FLOW_ACTION_FN(t, fn) \
+	[RTE_FLOW_ACTION_TYPE_ ## t] = { \
+		.name = # t, \
+		.size = 0, \
+		.desc_fn = fn,\
 	}
 
+
 /** Information about known flow actions. */
 static const struct rte_flow_desc_data rte_flow_desc_action[] = {
 	MK_FLOW_ACTION(END, 0),
@@ -527,12 +577,8 @@ rte_flow_conv_item_spec(void *buf, const size_t size,
 		}
 		break;
 	default:
-		/**
-		 * allow PMD private flow item
-		 */
-		off = (int)item->type >= 0 ?
-			rte_flow_desc_item[item->type].size : sizeof(void *);
-		rte_memcpy(buf, data, (size > off ? off : size));
+		off = rte_flow_conv_copy(buf, data, size,
+					 rte_flow_desc_item, item->type);
 		break;
 	}
 	return off;
@@ -634,12 +680,8 @@ rte_flow_conv_action_conf(void *buf, const size_t size,
 		}
 		break;
 	default:
-		/**
-		 * allow PMD private flow action
-		 */
-		off = (int)action->type >= 0 ?
-			rte_flow_desc_action[action->type].size : sizeof(void *);
-		rte_memcpy(buf, action->conf, (size > off ? off : size));
+		off = rte_flow_conv_copy(buf, action->conf, size,
+					 rte_flow_desc_action, action->type);
 		break;
 	}
 	return off;

From patchwork Fri Oct 1 19:34:04 2021
X-Patchwork-Submitter: Slava Ovsiienko
X-Patchwork-Id: 100346
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Viacheslav Ovsiienko
Date: Fri, 1 Oct 2021 22:34:04 +0300
Message-ID: <20211001193415.23288-4-viacheslavo@nvidia.com>
In-Reply-To: <20211001193415.23288-1-viacheslavo@nvidia.com>
References: <20210922180418.20663-1-viacheslavo@nvidia.com> <20211001193415.23288-1-viacheslavo@nvidia.com>
Subject: [dpdk-dev] [PATCH v2 03/14] ethdev: implement RTE flex item API
List-Id: DPDK patches and discussions

From: Gregory Etelson

The RTE flex item API was introduced in the "ethdev: introduce
configurable flexible item" patch. The API allows a DPDK application
to define a parser for a custom network header in port hardware and to
offload flows that match elements of the custom header.

Signed-off-by: Gregory Etelson
---
 lib/ethdev/rte_flow.c        | 73 ++++++++++++++++++++++++++++++++++++
 lib/ethdev/rte_flow_driver.h | 13 +++++++
 lib/ethdev/version.map       |  5 +++
 3 files changed, 91 insertions(+)

diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index fe199eaeb3..74f74d6009 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -80,6 +80,19 @@ rte_flow_conv_copy(void *buf, const void *data, const size_t size,
 		.desc_fn = fn, \
 	}
 
+static size_t
+rte_flow_item_flex_conv(void *buf, const void *data)
+{
+	struct rte_flow_item_flex *dst = buf;
+	const struct rte_flow_item_flex *src = data;
+	if (buf) {
+		dst->pattern = rte_memcpy
+			((void *)((uintptr_t)(dst + 1)), src->pattern,
+			 src->length);
+	}
+	return src->length;
+}
+
 /** Information about known flow pattern items. */
 static const struct rte_flow_desc_data rte_flow_desc_item[] = {
 	MK_FLOW_ITEM(END, 0),
@@ -141,6 +154,8 @@ static const struct rte_flow_desc_data rte_flow_desc_item[] = {
 	MK_FLOW_ITEM(GENEVE_OPT, sizeof(struct rte_flow_item_geneve_opt)),
 	MK_FLOW_ITEM(INTEGRITY, sizeof(struct rte_flow_item_integrity)),
 	MK_FLOW_ITEM(CONNTRACK, sizeof(uint32_t)),
+	MK_FLOW_ITEM_FN(FLEX, sizeof(struct rte_flow_item_flex),
+			rte_flow_item_flex_conv),
 };
 
 /** Generate flow_action[] entry. */
@@ -1308,3 +1323,61 @@ rte_flow_tunnel_item_release(uint16_t port_id,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOTSUP));
 }
+
+struct rte_flow_item_flex_handle *
+rte_flow_flex_item_create(uint16_t port_id,
+			  const struct rte_flow_item_flex_conf *conf,
+			  struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_item_flex_handle *handle;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (unlikely(!ops->flex_item_create)) {
+		rte_flow_error_set(error, ENOTSUP,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL, rte_strerror(ENOTSUP));
+		return NULL;
+	}
+	handle = ops->flex_item_create(dev, conf, error);
+	if (handle == NULL)
+		flow_err(port_id, -rte_errno, error);
+	return handle;
+}
+
+int
+rte_flow_flex_item_release(uint16_t port_id,
+			   const struct rte_flow_item_flex_handle *handle,
+			   struct rte_flow_error *error)
+{
+	int ret;
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops || !ops->flex_item_release))
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  NULL, rte_strerror(ENOTSUP));
+	ret = ops->flex_item_release(dev, handle, error);
+	return flow_err(port_id, ret, error);
+}
+
+int
+rte_flow_flex_item_update(uint16_t port_id,
+			  const struct rte_flow_item_flex_handle *handle,
+			  const struct rte_flow_item_flex_conf *conf,
+			  struct rte_flow_error *error)
+{
+	int ret;
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops || !ops->flex_item_update))
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  NULL, rte_strerror(ENOTSUP));
+	ret = ops->flex_item_update(dev, handle, conf, error);
+	return flow_err(port_id, ret, error);
+}
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index 46f62c2ec2..aed2ac03ad 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -139,6 +139,19 @@ struct rte_flow_ops {
 		 struct rte_flow_item *pmd_items,
 		 uint32_t num_of_items,
 		 struct rte_flow_error *err);
+	struct rte_flow_item_flex_handle *(*flex_item_create)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_item_flex_conf *conf,
+		 struct rte_flow_error *error);
+	int (*flex_item_release)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_item_flex_handle *handle,
+		 struct rte_flow_error *error);
+	int (*flex_item_update)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_item_flex_handle *handle,
+		 const struct rte_flow_item_flex_conf *conf,
+		 struct rte_flow_error *error);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 904bce6ea1..994c57f4b2 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -247,6 +247,11 @@ EXPERIMENTAL {
 	rte_mtr_meter_policy_delete;
 	rte_mtr_meter_policy_update;
 	rte_mtr_meter_policy_validate;
+
+	# added in 21.11
+	rte_flow_flex_item_create;
+	rte_flow_flex_item_release;
+	rte_flow_flex_item_update;
 };
 
 INTERNAL {

From patchwork Fri Oct 1 19:34:05 2021
X-Patchwork-Submitter: Slava Ovsiienko
X-Patchwork-Id: 100347
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Viacheslav Ovsiienko
Date: Fri, 1 Oct 2021 22:34:05 +0300
Message-ID: <20211001193415.23288-5-viacheslavo@nvidia.com>
In-Reply-To: <20211001193415.23288-1-viacheslavo@nvidia.com>
References: <20210922180418.20663-1-viacheslavo@nvidia.com> <20211001193415.23288-1-viacheslavo@nvidia.com>
Subject: [dpdk-dev] [PATCH v2 04/14] app/testpmd: add jansson library
List-Id: DPDK patches and discussions

From: Gregory Etelson

Testpmd interactive mode provides a CLI to configure the application.
Testpmd reads CLI commands and parameters from STDIN and converts the
input into C objects with its internal parser.
This patch adds an optional jansson dependency to testpmd. With
jansson, testpmd can read input in JSON format from STDIN or from an
input file and convert it into C objects using jansson library calls.

Signed-off-by: Gregory Etelson
---
 app/test-pmd/meson.build | 5 +++++
 app/test-pmd/testpmd.h   | 3 +++
 2 files changed, 8 insertions(+)

diff --git a/app/test-pmd/meson.build b/app/test-pmd/meson.build
index 98f3289bdf..3a8babd604 100644
--- a/app/test-pmd/meson.build
+++ b/app/test-pmd/meson.build
@@ -61,3 +61,8 @@ if dpdk_conf.has('RTE_LIB_BPF')
     sources += files('bpf_cmd.c')
     deps += 'bpf'
 endif
+jansson_dep = dependency('jansson', required: false, method: 'pkg-config')
+if jansson_dep.found()
+    dpdk_conf.set('RTE_HAS_JANSSON', 1)
+    ext_deps += jansson_dep
+endif
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 5863b2f43f..876a341cf0 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -14,6 +14,9 @@
 #include
 #include
 #include
+#ifdef RTE_HAS_JANSSON
+#include <jansson.h>
+#endif
 
 #define RTE_PORT_ALL (~(portid_t)0x0)

From patchwork Fri Oct 1 19:34:06 2021
X-Patchwork-Submitter: Slava Ovsiienko
X-Patchwork-Id: 100348
X-Patchwork-Delegate: ferruh.yigit@amd.com
a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=fP97M3yb+3QEguUblAcJX44MnV1gp4v8Cy3/s3l/CP8=; b=sDkXfCYXwOdbsdMzqe89rDBW5NrRpR3revJeR7ZS7CzjumyBVu/+q2UNTT9po8S4sE0daVbcGE7OScFjFOKLUr5hidZj88zDt9+wKs+8A6TvvZ0Fr22HvYByBj07a0/vfJi23Oamb7b7fnsRrfC/FfU/joW2PClDtE45CXusS+qEqyoUOxWXYhKuOLkrqGbqEQTQxvMfFHTRDWWY4LaH+xrJEDqO7jkmlJeDFA1mg1ERyN+gUP1iUTPJxNO5AJaS774PneHj7BTTUv8CnLiM9CmJ+eUazFlYUk/Qq3SliXOQdSwU7993EBN1R1/IrffWQcqREU9Fk7BVr/IIhlh4/g== Received: from BN8PR16CA0024.namprd16.prod.outlook.com (2603:10b6:408:4c::37) by BN6PR12MB1345.namprd12.prod.outlook.com (2603:10b6:404:18::19) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4544.18; Fri, 1 Oct 2021 19:34:53 +0000 Received: from BN8NAM11FT016.eop-nam11.prod.protection.outlook.com (2603:10b6:408:4c:cafe::e) by BN8PR16CA0024.outlook.office365.com (2603:10b6:408:4c::37) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4566.16 via Frontend Transport; Fri, 1 Oct 2021 19:34:53 +0000 X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 216.228.112.34) smtp.mailfrom=nvidia.com; monjalon.net; dkim=none (message not signed) header.d=none;monjalon.net; dmarc=pass action=none header.from=nvidia.com; Received-SPF: Pass (protection.outlook.com: domain of nvidia.com designates 216.228.112.34 as permitted sender) receiver=protection.outlook.com; client-ip=216.228.112.34; helo=mail.nvidia.com; Received: from mail.nvidia.com (216.228.112.34) by BN8NAM11FT016.mail.protection.outlook.com (10.13.176.97) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.20.4566.14 via Frontend Transport; Fri, 1 Oct 2021 19:34:53 +0000 Received: from nvidia.com (172.20.187.6) by HQMAIL107.nvidia.com (172.20.187.13) with Microsoft SMTP Server (TLS) id 15.0.1497.18; Fri, 1 Oct 2021 19:34:50 +0000 From: 
Viacheslav Ovsiienko To: CC: , , , , , Date: Fri, 1 Oct 2021 22:34:06 +0300 Message-ID: <20211001193415.23288-6-viacheslavo@nvidia.com> X-Mailer: git-send-email 2.18.1 In-Reply-To: <20211001193415.23288-1-viacheslavo@nvidia.com> References: <20210922180418.20663-1-viacheslavo@nvidia.com> <20211001193415.23288-1-viacheslavo@nvidia.com> MIME-Version: 1.0 X-Originating-IP: [172.20.187.6] X-ClientProxiedBy: HQMAIL107.nvidia.com (172.20.187.13) To HQMAIL107.nvidia.com (172.20.187.13) X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-Office365-Filtering-Correlation-Id: f8971b18-f5dd-487c-7558-08d985128aec X-MS-TrafficTypeDiagnostic: BN6PR12MB1345: X-LD-Processed: 43083d15-7273-40c1-b7db-39efd9ccc17a,ExtAddr X-Microsoft-Antispam-PRVS: X-MS-Oob-TLC-OOBClassifiers: OLM:240; X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0; X-Microsoft-Antispam-Message-Info: IV4/BoPtJqaRPNbt9cWUD6VhzE4uCAZ2/phsMYxq7dARPflRqjCPWyIA0pBW3guL2MVbohWl0CMO6/DcYzbwNob3RCoXMOjcdWGYjveolHvNcy+37iT8mNBruAr4MxEExKuOamsN++8sLfr01Ur/qlyYv6Jh4l8JLUZuRsxVy0rVrIUEgobkJNNBH1JQ8aFPH0GyJRTKoSdI2F9GWlD2RFQEbgn/rX/lqFB5MqAR74+fFqWPWTduMe4Ab6V8qaQpBihNOFpfU1GLupGxTvQTFxDUzielV7vZzvhIqwIcPFyn44YSe6vqsYK71Hko0LjgA9Tns3zLLt1qJIUY9f9YjE6MRakEGp0VX0KniwpaxcER7YZ6gVMHtcCYJXBBcfCN52Hw112X4orJ5iD1AQxF62fVsbDCT+Cl+SQZjW2xhM39HI2prvUtxU8TNUeI9dZ0EmdWGsWhuoA/ooyz/o/9wBPFvg7MKsyxe9DyY+t0aFHwTxhTonjQsiCM5qs0+loxfsZx3EgF83HjW/0COizAse6ArVnbQ5wmfjHlXOKIBsKd2xrfFPnY1tbIQuc7HZ4Qx7lDznDO5ZJi+DvBhx0dRxNIMlsSRzjwkqjQ7K1/qIc0BB+4u0dggtiy0unEIrIQXMlvdkyZB0PmLzIGQQ6EvBufZZxO2xQG6yZHfe8YlVqBPhhBWBrvNke36AehcwDJ3DP0kBTzjsf8RCOXC4rX9A== X-Forefront-Antispam-Report: CIP:216.228.112.34; CTRY:US; LANG:en; SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:mail.nvidia.com; PTR:schybrid03.nvidia.com; CAT:NONE; 
Subject: [dpdk-dev] [PATCH v2 05/14] app/testpmd: add flex item CLI commands
From: Gregory Etelson

Network port hardware is shipped with a fixed number of supported network
protocols. If an application must work with a protocol that is not included
in the port hardware by default, it can try to add the new protocol to the
port hardware.

A flex item, or flex parser, is port infrastructure that allows an
application to add support for a custom network header and to offload
flows that match the header elements.

An application must complete the following tasks to create a flow rule
that matches a custom header:

1. Create a flow item object in port hardware.
   The application must provide the custom header configuration to the PMD.
   The PMD will use that configuration to create a flex item object in
   port hardware.

2. Create flex patterns to match. A flex pattern has spec and mask
   components, like a regular flow item. Combined together, the spec and
   mask can target a unique data sequence or a number of data sequences
   in the custom header. Flex patterns of the same flex item can have
   different lengths. A flex pattern is identified by a unique handler
   value.

3. Create a flow rule with a flex flow item that references a flex
   pattern.

Testpmd flex CLI commands are:

testpmd> flow flex_item create <port> <flex_id> <filename>
testpmd> set flex_pattern <pattern_id> \
         spec <spec data> mask <mask data>
testpmd> set flex_pattern <pattern_id> is <spec data>
testpmd> flow create <port> ... \
         / flex item is <flex_id> pattern is <pattern_id> / ...

The patch works with the jansson library API. The jansson development
files must be present: jansson.pc, jansson.h, libjansson.[a,so].

Signed-off-by: Gregory Etelson
---
 app/test-pmd/cmdline.c                      |   2 +
 app/test-pmd/cmdline_flow.c                 | 801 +++++++++++++++++++-
 app/test-pmd/testpmd.c                      |   1 -
 app/test-pmd/testpmd.h                      |  15 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 119 +++
 5 files changed, 936 insertions(+), 2 deletions(-)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index a9efd027c3..a673e6ef08 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -17822,6 +17822,8 @@ cmdline_parse_ctx_t main_ctx[] = {
 	(cmdline_parse_inst_t *)&cmd_show_fec_mode,
 	(cmdline_parse_inst_t *)&cmd_set_fec_mode,
 	(cmdline_parse_inst_t *)&cmd_show_capability,
+	(cmdline_parse_inst_t *)&cmd_set_flex_is_pattern,
+	(cmdline_parse_inst_t *)&cmd_set_flex_spec_pattern,
 	NULL,
 };

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index bb22294dd3..8817b4e210 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -54,6 +54,8 @@ enum index {
 	COMMON_PRIORITY_LEVEL,
 	COMMON_INDIRECT_ACTION_ID,
 	COMMON_POLICY_ID,
+	COMMON_FLEX_HANDLE,
+	COMMON_FLEX_TOKEN,
 	/* TOP-level command.
*/ ADD, @@ -81,6 +83,13 @@ enum index { AGED, ISOLATE, TUNNEL, + FLEX, + + /* Flex arguments */ + FLEX_ITEM_INIT, + FLEX_ITEM_CREATE, + FLEX_ITEM_MODIFY, + FLEX_ITEM_DESTROY, /* Tunnel arguments. */ TUNNEL_CREATE, @@ -306,6 +315,9 @@ enum index { ITEM_POL_PORT, ITEM_POL_METER, ITEM_POL_POLICY, + ITEM_FLEX, + ITEM_FLEX_ITEM_HANDLE, + ITEM_FLEX_PATTERN_HANDLE, /* Validate/create actions. */ ACTIONS, @@ -844,6 +856,11 @@ struct buffer { struct { uint32_t policy_id; } policy;/**< Policy arguments. */ + struct { + uint16_t token; + uintptr_t uintptr; + char filename[128]; + } flex; /**< Flex arguments*/ } args; /**< Command arguments. */ }; @@ -871,6 +888,14 @@ struct parse_action_priv { .size = s, \ }) +static const enum index next_flex_item[] = { + FLEX_ITEM_INIT, + FLEX_ITEM_CREATE, + FLEX_ITEM_MODIFY, + FLEX_ITEM_DESTROY, + ZERO, +}; + static const enum index next_ia_create_attr[] = { INDIRECT_ACTION_CREATE_ID, INDIRECT_ACTION_INGRESS, @@ -1000,6 +1025,7 @@ static const enum index next_item[] = { ITEM_GENEVE_OPT, ITEM_INTEGRITY, ITEM_CONNTRACK, + ITEM_FLEX, END_SET, ZERO, }; @@ -1368,6 +1394,13 @@ static const enum index item_integrity_lv[] = { ZERO, }; +static const enum index item_flex[] = { + ITEM_FLEX_PATTERN_HANDLE, + ITEM_FLEX_ITEM_HANDLE, + ITEM_NEXT, + ZERO, +}; + static const enum index next_action[] = { ACTION_END, ACTION_VOID, @@ -1724,6 +1757,9 @@ static int parse_set_sample_action(struct context *, const struct token *, static int parse_set_init(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); +static int +parse_flex_handle(struct context *, const struct token *, + const char *, unsigned int, void *, unsigned int); static int parse_init(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -1840,6 +1876,8 @@ static int parse_isolate(struct context *, const struct token *, static int parse_tunnel(struct context *, const struct token *, const char *, unsigned int, void *, 
unsigned int); +static int parse_flex(struct context *, const struct token *, + const char *, unsigned int, void *, unsigned int); static int parse_int(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -1904,6 +1942,19 @@ static int comp_set_modify_field_op(struct context *, const struct token *, unsigned int, char *, unsigned int); static int comp_set_modify_field_id(struct context *, const struct token *, unsigned int, char *, unsigned int); +static void flex_item_create(portid_t port_id, uint16_t flex_id, + const char *filename); +static void flex_item_modify(portid_t port_id, uint16_t flex_id, + const char *filename); +static void flex_item_destroy(portid_t port_id, uint16_t flex_id); +struct flex_pattern { + struct rte_flow_item_flex spec, mask; + uint8_t spec_pattern[FLEX_MAX_FLOW_PATTERN_LENGTH]; + uint8_t mask_pattern[FLEX_MAX_FLOW_PATTERN_LENGTH]; +}; + +static struct flex_item *flex_items[RTE_MAX_ETHPORTS][FLEX_MAX_PARSERS_NUM]; +static struct flex_pattern flex_patterns[FLEX_MAX_PATTERNS_NUM]; /** Token definitions. */ static const struct token token_list[] = { @@ -2040,6 +2091,20 @@ static const struct token token_list[] = { .call = parse_int, .comp = comp_none, }, + [COMMON_FLEX_TOKEN] = { + .name = "{flex token}", + .type = "flex token", + .help = "flex token", + .call = parse_int, + .comp = comp_none, + }, + [COMMON_FLEX_HANDLE] = { + .name = "{flex handle}", + .type = "FLEX HANDLE", + .help = "fill flex item data", + .call = parse_flex_handle, + .comp = comp_none, + }, /* Top-level command. */ [FLOW] = { .name = "flow", @@ -2056,7 +2121,8 @@ static const struct token token_list[] = { AGED, QUERY, ISOLATE, - TUNNEL)), + TUNNEL, + FLEX)), .call = parse_init, }, /* Top-level command. 
*/ @@ -2168,6 +2234,52 @@ static const struct token token_list[] = { ARGS_ENTRY(struct buffer, port)), .call = parse_isolate, }, + [FLEX] = { + .name = "flex_item", + .help = "flex item API", + .next = NEXT(next_flex_item), + .call = parse_flex, + }, + [FLEX_ITEM_INIT] = { + .name = "init", + .help = "flex item init", + .args = ARGS(ARGS_ENTRY(struct buffer, args.flex.token), + ARGS_ENTRY(struct buffer, port)), + .next = NEXT(NEXT_ENTRY(COMMON_FLEX_TOKEN), + NEXT_ENTRY(COMMON_PORT_ID)), + .call = parse_flex + }, + [FLEX_ITEM_CREATE] = { + .name = "create", + .help = "flex item create", + .args = ARGS(ARGS_ENTRY(struct buffer, args.flex.filename), + ARGS_ENTRY(struct buffer, args.flex.token), + ARGS_ENTRY(struct buffer, port)), + .next = NEXT(NEXT_ENTRY(COMMON_FILE_PATH), + NEXT_ENTRY(COMMON_FLEX_TOKEN), + NEXT_ENTRY(COMMON_PORT_ID)), + .call = parse_flex + }, + [FLEX_ITEM_MODIFY] = { + .name = "modify", + .help = "flex item modify", + .args = ARGS(ARGS_ENTRY(struct buffer, args.flex.filename), + ARGS_ENTRY(struct buffer, args.flex.token), + ARGS_ENTRY(struct buffer, port)), + .next = NEXT(NEXT_ENTRY(COMMON_FILE_PATH), + NEXT_ENTRY(COMMON_FLEX_TOKEN), + NEXT_ENTRY(COMMON_PORT_ID)), + .call = parse_flex + }, + [FLEX_ITEM_DESTROY] = { + .name = "destroy", + .help = "flex item destroy", + .args = ARGS(ARGS_ENTRY(struct buffer, args.flex.token), + ARGS_ENTRY(struct buffer, port)), + .next = NEXT(NEXT_ENTRY(COMMON_FLEX_TOKEN), + NEXT_ENTRY(COMMON_PORT_ID)), + .call = parse_flex + }, [TUNNEL] = { .name = "tunnel", .help = "new tunnel API", @@ -3608,6 +3720,27 @@ static const struct token token_list[] = { item_param), .args = ARGS(ARGS_ENTRY(struct rte_flow_item_conntrack, flags)), }, + [ITEM_FLEX] = { + .name = "flex", + .help = "match flex header", + .priv = PRIV_ITEM(FLEX, sizeof(struct rte_flow_item_flex)), + .next = NEXT(item_flex), + .call = parse_vc, + }, + [ITEM_FLEX_ITEM_HANDLE] = { + .name = "item", + .help = "flex item handle", + .next = NEXT(item_flex, 
NEXT_ENTRY(COMMON_FLEX_HANDLE), + NEXT_ENTRY(ITEM_PARAM_IS)), + .args = ARGS(ARGS_ENTRY(struct rte_flow_item_flex, handle)), + }, + [ITEM_FLEX_PATTERN_HANDLE] = { + .name = "pattern", + .help = "flex pattern handle", + .next = NEXT(item_flex, NEXT_ENTRY(COMMON_FLEX_HANDLE), + NEXT_ENTRY(ITEM_PARAM_IS)), + .args = ARGS(ARGS_ENTRY(struct rte_flow_item_flex, pattern)), + }, /* Validate/create actions. */ [ACTIONS] = { .name = "actions", @@ -6999,6 +7132,44 @@ parse_isolate(struct context *ctx, const struct token *token, return len; } +static int +parse_flex(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. */ + if (!out) + return len; + if (out->command == ZERO) { + if (ctx->curr != FLEX) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + } else { + switch (ctx->curr) { + default: + break; + case FLEX_ITEM_INIT: + case FLEX_ITEM_CREATE: + case FLEX_ITEM_MODIFY: + case FLEX_ITEM_DESTROY: + out->command = ctx->curr; + break; + } + } + + return len; +} + static int parse_tunnel(struct context *ctx, const struct token *token, const char *str, unsigned int len, @@ -7661,6 +7832,71 @@ parse_set_init(struct context *ctx, const struct token *token, return len; } +/* + * Replace testpmd handles in a flex flow item with real values. 
+ */
+static int
+parse_flex_handle(struct context *ctx, const struct token *token,
+		  const char *str, unsigned int len,
+		  void *buf, unsigned int size)
+{
+	struct rte_flow_item_flex *spec, *mask;
+	const struct rte_flow_item_flex *src_spec, *src_mask;
+	const struct arg *arg = pop_args(ctx);
+	uint32_t offset;
+	uint16_t handle;
+	int ret;
+
+	if (!arg) {
+		printf("Bad environment\n");
+		return -1;
+	}
+	offset = arg->offset;
+	push_args(ctx, arg);
+	ret = parse_int(ctx, token, str, len, buf, size);
+	if (ret <= 0 || !ctx->object)
+		return ret;
+	if (ctx->port >= RTE_MAX_ETHPORTS) {
+		printf("Bad port\n");
+		return -1;
+	}
+	if (offset == offsetof(struct rte_flow_item_flex, handle)) {
+		const struct flex_item *fp;
+		struct rte_flow_item_flex *item_flex = ctx->object;
+		handle = (uint16_t)(uintptr_t)item_flex->handle;
+		if (handle >= FLEX_MAX_PARSERS_NUM) {
+			printf("Bad flex item handle\n");
+			return -1;
+		}
+		fp = flex_items[ctx->port][handle];
+		if (!fp) {
+			printf("Bad flex item handle\n");
+			return -1;
+		}
+		item_flex->handle = fp->flex_handle;
+	} else if (offset == offsetof(struct rte_flow_item_flex, pattern)) {
+		handle = (uint16_t)(uintptr_t)
+			((struct rte_flow_item_flex *)ctx->object)->pattern;
+		if (handle >= FLEX_MAX_PATTERNS_NUM) {
+			printf("Bad pattern handle\n");
+			return -1;
+		}
+		src_spec = &flex_patterns[handle].spec;
+		src_mask = &flex_patterns[handle].mask;
+		spec = ctx->object;
+		mask = spec + 2; /* spec, last, mask */
+		/* fill flow rule spec and mask parameters */
+		spec->length = src_spec->length;
+		spec->pattern = src_spec->pattern;
+		mask->length = src_mask->length;
+		mask->pattern = src_mask->pattern;
+	} else {
+		printf("Bad arguments - unknown flex item offset\n");
+		return -1;
+	}
+	return ret;
+}
+
 /** No completion.
*/ static int comp_none(struct context *ctx, const struct token *token, @@ -8167,6 +8403,17 @@ cmd_flow_parsed(const struct buffer *in) port_meter_policy_add(in->port, in->args.policy.policy_id, in->args.vc.actions); break; + case FLEX_ITEM_CREATE: + flex_item_create(in->port, in->args.flex.token, + in->args.flex.filename); + break; + case FLEX_ITEM_MODIFY: + flex_item_modify(in->port, in->args.flex.token, + in->args.flex.filename); + break; + case FLEX_ITEM_DESTROY: + flex_item_destroy(in->port, in->args.flex.token); + break; default: break; } @@ -8618,6 +8865,11 @@ cmd_set_raw_parsed(const struct buffer *in) case RTE_FLOW_ITEM_TYPE_PFCP: size = sizeof(struct rte_flow_item_pfcp); break; + case RTE_FLOW_ITEM_TYPE_FLEX: + size = item->spec ? + ((const struct rte_flow_item_flex *) + item->spec)->length : 0; + break; default: fprintf(stderr, "Error - Not supported item\n"); goto error; @@ -8800,3 +9052,550 @@ cmdline_parse_inst_t cmd_show_set_raw_all = { NULL, }, }; + +#ifdef RTE_HAS_JANSSON +static __rte_always_inline bool +match_strkey(const char *key, const char *pattern) +{ + return strncmp(key, pattern, strlen(key)) == 0; +} + +static struct flex_item * +flex_parser_fetch(uint16_t port_id, uint16_t flex_id) +{ + if (port_id >= RTE_MAX_ETHPORTS) { + printf("Invalid port_id: %u\n", port_id); + return FLEX_PARSER_ERR; + } + if (flex_id >= FLEX_MAX_PARSERS_NUM) { + printf("Invalid flex item flex_id: %u\n", flex_id); + return FLEX_PARSER_ERR; + } + return flex_items[port_id][flex_id]; +} + +static void +flex_item_destroy(portid_t port_id, uint16_t flex_id) +{ + int ret; + struct rte_flow_error error; + struct flex_item *fp = flex_parser_fetch(port_id, flex_id); + if (fp == FLEX_PARSER_ERR) { + printf("Bad parameters: port_id=%u flex_id=%u\n", + port_id, flex_id); + return; + } + if (!fp) + return; + ret = rte_flow_flex_item_release(port_id, fp->flex_handle, &error); + if (!ret) { + free(fp); + flex_items[port_id][flex_id] = NULL; + printf("port-%u: released flex item 
#%u\n", + port_id, flex_id); + + } else { + printf("port-%u: cannot release flex item #%u: %s\n", + port_id, flex_id, error.message); + } +} + +static int +flex_field_parse(json_t *jfld, struct rte_flow_item_flex_field *fld) +{ + const char *key; + json_t *je; + +#define FLEX_FIELD_GET(fm, t) \ +do { \ + if (!strncmp(key, # fm, strlen(# fm))) { \ + if (json_is_real(je)) \ + fld->fm = (t) json_real_value(je); \ + else if (json_is_integer(je)) \ + fld->fm = (t) json_integer_value(je); \ + else \ + return -EINVAL; \ + } \ +} while (0) + + json_object_foreach(jfld, key, je) { + FLEX_FIELD_GET(field_size, uint32_t); + FLEX_FIELD_GET(field_base, int32_t); + FLEX_FIELD_GET(offset_base, uint32_t); + FLEX_FIELD_GET(offset_mask, uint32_t); + FLEX_FIELD_GET(offset_shift, int32_t); + FLEX_FIELD_GET(tunnel_count, uint16_t); + FLEX_FIELD_GET(field_id, uint16_t); + FLEX_FIELD_GET(rss_hash, uint16_t); + if (match_strkey(key, "field_mode")) { + const char *mode; + if (!json_is_string(je)) + return -EINVAL; + mode = json_string_value(je); + if (match_strkey(mode, "FIELD_MODE_DUMMY")) + fld->field_mode = FIELD_MODE_DUMMY; + else if (match_strkey(mode, "FIELD_MODE_FIXED")) + fld->field_mode = FIELD_MODE_FIXED; + else if (match_strkey(mode, "FIELD_MODE_OFFSET")) + fld->field_mode = FIELD_MODE_OFFSET; + else if (match_strkey(mode, "FIELD_MODE_BITMASK")) + fld->field_mode = FIELD_MODE_BITMASK; + else + return -EINVAL; + } + } + return 0; +} + +enum flex_link_type { + FLEX_LINK_IN = 0, + FLEX_LINK_OUT = 1 +}; + +static int +flex_link_item_parse(const char *pattern, struct rte_flow_item *item) +{ +#define FLEX_PARSE_DATA_SIZE 1024 + + int ret; + uint8_t *ptr, data[FLEX_PARSE_DATA_SIZE] = {0,}; + char flow_rule[256]; + struct context saved_flow_ctx = cmd_flow_context; + + sprintf(flow_rule, "flow create 0 pattern %s / end", pattern); + pattern = flow_rule; + cmd_flow_context_init(&cmd_flow_context); + do { + ret = cmd_flow_parse(NULL, pattern, (void *)data, sizeof(data)); + if (ret > 0) { + 
pattern += ret; + while (isspace(*pattern)) + pattern++; + } + } while (ret > 0 && strlen(pattern)); + if (ret >= 0 && !strlen(pattern)) { + struct rte_flow_item *src = + ((struct buffer *)data)->args.vc.pattern; + item->type = src->type; + if (src->spec) { + ptr = (void *)(uintptr_t)item->spec; + memcpy(ptr, src->spec, FLEX_MAX_FLOW_PATTERN_LENGTH); + } else { + item->spec = NULL; + } + if (src->mask) { + ptr = (void *)(uintptr_t)item->mask; + memcpy(ptr, src->mask, FLEX_MAX_FLOW_PATTERN_LENGTH); + } else { + item->mask = NULL; + } + if (src->last) { + ptr = (void *)(uintptr_t)item->last; + memcpy(ptr, src->last, FLEX_MAX_FLOW_PATTERN_LENGTH); + } else { + item->last = NULL; + } + ret = 0; + } + cmd_flow_context = saved_flow_ctx; + return ret; +} + +static int +flex_link_parse(json_t *jobj, struct rte_flow_item_flex_link *link, + enum flex_link_type link_type) +{ + const char *key; + json_t *je; + int ret; + json_object_foreach(jobj, key, je) { + if (match_strkey(key, "item")) { + if (!json_is_string(je)) + return -EINVAL; + ret = flex_link_item_parse(json_string_value(je), + &link->item); + if (ret) + return -EINVAL; + if (link_type == FLEX_LINK_IN) { + if (!link->item.spec || !link->item.mask) + return -EINVAL; + if (link->item.last) + return -EINVAL; + } + } + if (match_strkey(key, "next")) { + if (json_is_integer(je)) + link->next = (typeof(link->next)) + json_integer_value(je); + else if (json_is_real(je)) + link->next = (typeof(link->next)) + json_real_value(je); + else + return -EINVAL; + } + if (match_strkey(key, "tunnel")) { + if (!json_is_true(je) && !json_is_false(je)) + return -EINVAL; + link->tunnel = json_boolean_value(je); + } + } + return 0; +} + +static int flex_item_config(json_t *jroot, + struct rte_flow_item_flex_conf *flex_conf) +{ + const char *key; + json_t *jobj = NULL; + int ret; + + json_object_foreach(jroot, key, jobj) { + if (match_strkey(key, "next_header")) { + ret = flex_field_parse(jobj, &flex_conf->next_header); + if (ret) { + 
printf("Can't parse next_header field\n"); + goto out; + } + } else if (match_strkey(key, "next_protocol")) { + ret = flex_field_parse(jobj, + &flex_conf->next_protocol); + if (ret) { + printf("Can't parse next_protocol field\n"); + goto out; + } + } else if (match_strkey(key, "sample_data")) { + json_t *ji; + uint32_t i, size = json_array_size(jobj); + for (i = 0; i < size; i++) { + ji = json_array_get(jobj, i); + ret = flex_field_parse + (ji, flex_conf->sample_data + i); + if (ret) { + printf("Can't parse sample_data field(s)\n"); + goto out; + } + } + flex_conf->sample_num = size; + } else if (match_strkey(key, "input_link")) { + json_t *ji; + uint32_t i, size = json_array_size(jobj); + for (i = 0; i < size; i++) { + ji = json_array_get(jobj, i); + ret = flex_link_parse(ji, + flex_conf->input_link + i, + FLEX_LINK_IN); + if (ret) { + printf("Can't parse input_link(s)\n"); + goto out; + } + } + flex_conf->input_num = size; + } else if (match_strkey(key, "output_link")) { + json_t *ji; + uint32_t i, size = json_array_size(jobj); + for (i = 0; i < size; i++) { + ji = json_array_get(jobj, i); + ret = flex_link_parse + (ji, flex_conf->output_link + i, + FLEX_LINK_OUT); + if (ret) { + printf("Can't parse output_link(s)\n"); + goto out; + } + } + flex_conf->output_num = size; + } + } +out: + return ret; +} + +static struct flex_item * +flex_item_init(void) +{ +#define ALIGN(x) (((x) + sizeof(uintptr_t) - 1) & ~(sizeof(uintptr_t) - 1)) + + size_t base_size, samples_size, links_size, spec_size; + struct rte_flow_item_flex_conf *conf; + struct flex_item *fp; + uint8_t (*pattern)[FLEX_MAX_FLOW_PATTERN_LENGTH]; + int i; + base_size = ALIGN(sizeof(*conf)); + samples_size = ALIGN(FLEX_ITEM_MAX_SAMPLES_NUM * + sizeof(conf->sample_data[0])); + links_size = ALIGN(FLEX_ITEM_MAX_LINKS_NUM * + sizeof(conf->input_link[0])); + /* spec & mask for all input links */ + spec_size = 2 * FLEX_MAX_FLOW_PATTERN_LENGTH * FLEX_ITEM_MAX_LINKS_NUM; + fp = calloc(1, base_size + samples_size + 2 * 
links_size + spec_size); + if (fp == NULL) { + printf("Can't allocate memory for flex item\n"); + return NULL; + } + conf = &fp->flex_conf; + conf->sample_data = (typeof(conf->sample_data)) + ((uint8_t *)fp + base_size); + conf->input_link = (typeof(conf->input_link)) + ((uint8_t *)conf->sample_data + samples_size); + conf->output_link = (typeof(conf->output_link)) + ((uint8_t *)conf->input_link + links_size); + pattern = (typeof(pattern))((uint8_t *)conf->output_link + links_size); + for (i = 0; i < FLEX_ITEM_MAX_LINKS_NUM; i++) { + struct rte_flow_item_flex_link *in = conf->input_link + i; + in->item.spec = pattern++; + in->item.mask = pattern++; + } + return fp; +} + +static void +flex_item_modify(portid_t port_id, uint16_t flex_id, const char *filename) +{ + struct rte_flow_error flow_error; + json_error_t json_error; + json_t *jroot = NULL; + struct flex_item *fp = flex_parser_fetch(port_id, flex_id); + struct flex_item *modified_fp; + int ret; + + if (fp == FLEX_PARSER_ERR) { + printf("Bad parameters: port_id=%u flex_id=%u\n", + port_id, flex_id); + return; + } + if (!fp) { + printf("port-%u: flex item #%u not available\n", + port_id, flex_id); + return; + } + jroot = json_load_file(filename, 0, &json_error); + if (!jroot) { + printf("Bad JSON file \"%s\"\n", filename); + return; + } + modified_fp = flex_item_init(); + if (!modified_fp) { + printf("Could not allocate flex item\n"); + goto out; + } + ret = flex_item_config(jroot, &modified_fp->flex_conf); + if (ret) + goto out; + ret = rte_flow_flex_item_update(port_id, fp->flex_handle, + &modified_fp->flex_conf, + &flow_error); + if (!ret) { + modified_fp->flex_handle = fp->flex_handle; + flex_items[port_id][flex_id] = modified_fp; + printf("port-%u: modified flex item #%u\n", port_id, flex_id); + modified_fp = NULL; + free(fp); + } else { + free(modified_fp); + } +out: + if (modified_fp) + free(modified_fp); + if (jroot) + json_decref(jroot); +} + +static void +flex_item_create(portid_t port_id, uint16_t 
flex_id, const char *filename) +{ + struct rte_flow_error flow_error; + json_error_t json_error; + json_t *jroot = NULL; + struct flex_item *fp = flex_parser_fetch(port_id, flex_id); + int ret; + + if (fp == FLEX_PARSER_ERR) { + printf("Bad parameters: port_id=%u flex_id=%u\n", + port_id, flex_id); + return; + } + if (fp) { + printf("port-%u: flex item #%u is already in use\n", + port_id, flex_id); + return; + } + jroot = json_load_file(filename, 0, &json_error); + if (!jroot) { + printf("Bad JSON file \"%s\": %s\n", filename, json_error.text); + return; + } + fp = flex_item_init(); + if (!fp) { + printf("Could not allocate flex item\n"); + goto out; + } + ret = flex_item_config(jroot, &fp->flex_conf); + if (ret) + goto out; + fp->flex_handle = rte_flow_flex_item_create(port_id, + &fp->flex_conf, + &flow_error); + if (fp->flex_handle) { + flex_items[port_id][flex_id] = fp; + printf("port-%u: created flex item #%u\n", port_id, flex_id); + fp = NULL; + } else { + printf("port-%u: flex item #%u creation failed: %s\n", + port_id, flex_id, + flow_error.message ? 
flow_error.message : ""); + } +out: + if (fp) + free(fp); + if (jroot) + json_decref(jroot); +} + +#else /* RTE_HAS_JANSSON */ +static void flex_item_create(__rte_unused portid_t port_id, + __rte_unused uint16_t flex_id, + __rte_unused const char *filename) +{ + printf("no JSON library\n"); +} + +static void flex_item_modify(__rte_unused portid_t port_id, + __rte_unused uint16_t flex_id, + __rte_unused const char *filename) +{ + printf("no JSON library\n"); +} + +static void flex_item_destroy(__rte_unused portid_t port_id, + __rte_unused uint16_t flex_id) +{ + printf("no JSON library\n"); +} +#endif /* RTE_HAS_JANSSON */ + +struct flex_pattern_set { + cmdline_fixed_string_t set, flex_pattern; + cmdline_fixed_string_t is_spec, mask; + cmdline_fixed_string_t spec_data, mask_data; + uint16_t id; +}; + +static cmdline_parse_token_string_t flex_pattern_set_token = + TOKEN_STRING_INITIALIZER(struct flex_pattern_set, set, "set"); +static cmdline_parse_token_string_t flex_pattern_token = + TOKEN_STRING_INITIALIZER(struct flex_pattern_set, + flex_pattern, "flex_pattern"); +static cmdline_parse_token_string_t flex_pattern_is_token = + TOKEN_STRING_INITIALIZER(struct flex_pattern_set, + is_spec, "is"); +static cmdline_parse_token_string_t flex_pattern_spec_token = + TOKEN_STRING_INITIALIZER(struct flex_pattern_set, + is_spec, "spec"); +static cmdline_parse_token_string_t flex_pattern_mask_token = + TOKEN_STRING_INITIALIZER(struct flex_pattern_set, mask, "mask"); +static cmdline_parse_token_string_t flex_pattern_spec_data_token = + TOKEN_STRING_INITIALIZER(struct flex_pattern_set, spec_data, NULL); +static cmdline_parse_token_string_t flex_pattern_mask_data_token = + TOKEN_STRING_INITIALIZER(struct flex_pattern_set, mask_data, NULL); +static cmdline_parse_token_num_t flex_pattern_id_token = + TOKEN_NUM_INITIALIZER(struct flex_pattern_set, id, RTE_UINT16); + +/* + * flex pattern data - spec or mask is a string representation of byte array + * in hexadecimal format. 
Each byte in data string must have 2 characters: + * 0x15 - "15" + * 0x1 - "01" + * Bytes in data array are in network order. + */ +static uint32_t +flex_pattern_data(const char *str, uint8_t *data) +{ + uint32_t i, len = strlen(str); + char b[3], *endptr; + + if (len & 01) + return 0; + len /= 2; + if (len >= FLEX_MAX_FLOW_PATTERN_LENGTH) + return 0; + for (i = 0, b[2] = '\0'; i < len; i++) { + b[0] = str[2 * i]; + b[1] = str[2 * i + 1]; + data[i] = strtoul(b, &endptr, 16); + if (endptr != &b[2]) + return 0; + } + return len; +} + +static void +flex_pattern_parsed_fn(void *parsed_result, + __rte_unused struct cmdline *cl, + __rte_unused void *data) +{ + struct flex_pattern_set *res = parsed_result; + struct flex_pattern *fp; + bool full_spec; + + if (res->id >= FLEX_MAX_PATTERNS_NUM) { + printf("Bad flex pattern id\n"); + return; + } + fp = flex_patterns + res->id; + memset(fp->spec_pattern, 0, sizeof(fp->spec_pattern)); + memset(fp->mask_pattern, 0, sizeof(fp->mask_pattern)); + fp->spec.length = flex_pattern_data(res->spec_data, fp->spec_pattern); + if (!fp->spec.length) { + printf("Bad flex pattern spec\n"); + return; + } + full_spec = strncmp(res->is_spec, "spec", strlen("spec")) == 0; + if (full_spec) { + fp->mask.length = flex_pattern_data(res->mask_data, + fp->mask_pattern); + if (!fp->mask.length) { + printf("Bad flex pattern mask\n"); + return; + } + } else { + memset(fp->mask_pattern, 0xFF, fp->spec.length); + fp->mask.length = fp->spec.length; + } + if (fp->mask.length != fp->spec.length) { + printf("Spec length do not match mask length\n"); + return; + } + fp->spec.pattern = fp->spec_pattern; + fp->mask.pattern = fp->mask_pattern; + printf("created pattern #%u\n", res->id); +} + +cmdline_parse_inst_t cmd_set_flex_is_pattern = { + .f = flex_pattern_parsed_fn, + .data = NULL, + .help_str = "set flex_pattern is ", + .tokens = { + (void *)&flex_pattern_set_token, + (void *)&flex_pattern_token, + (void *)&flex_pattern_id_token, + (void 
*)&flex_pattern_is_token, + (void *)&flex_pattern_spec_data_token, + NULL, + } +}; + +cmdline_parse_inst_t cmd_set_flex_spec_pattern = { + .f = flex_pattern_parsed_fn, + .data = NULL, + .help_str = "set flex_pattern spec mask ", + .tokens = { + (void *)&flex_pattern_set_token, + (void *)&flex_pattern_token, + (void *)&flex_pattern_id_token, + (void *)&flex_pattern_spec_token, + (void *)&flex_pattern_spec_data_token, + (void *)&flex_pattern_mask_token, + (void *)&flex_pattern_mask_data_token, + NULL, + } +}; diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index 97ae52e17e..0f76d4c551 100644 --- a/app/test-pmd/testpmd.c +++ b/app/test-pmd/testpmd.c @@ -4017,7 +4017,6 @@ main(int argc, char** argv) rte_stats_bitrate_reg(bitrate_data); } #endif - #ifdef RTE_LIB_CMDLINE if (strlen(cmdline_filename) != 0) cmdline_read_from_file(cmdline_filename); diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index 876a341cf0..36d4e29b83 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -282,6 +282,19 @@ struct fwd_engine { packet_fwd_t packet_fwd; /**< Mandatory. */ }; +struct flex_item { + struct rte_flow_item_flex_conf flex_conf; + struct rte_flow_item_flex_handle *flex_handle; + uint32_t flex_id; +}; + +#define FLEX_ITEM_MAX_SAMPLES_NUM 16 +#define FLEX_ITEM_MAX_LINKS_NUM 16 +#define FLEX_MAX_FLOW_PATTERN_LENGTH 64 +#define FLEX_MAX_PARSERS_NUM 8 +#define FLEX_MAX_PATTERNS_NUM 64 +#define FLEX_PARSER_ERR ((struct flex_item *)-1) + #define BURST_TX_WAIT_US 1 #define BURST_TX_RETRIES 64 @@ -306,6 +319,8 @@ extern struct fwd_engine * fwd_engines[]; /**< NULL terminated array. 
*/ extern cmdline_parse_inst_t cmd_set_raw; extern cmdline_parse_inst_t cmd_show_set_raw; extern cmdline_parse_inst_t cmd_show_set_raw_all; +extern cmdline_parse_inst_t cmd_set_flex_is_pattern; +extern cmdline_parse_inst_t cmd_set_flex_spec_pattern; extern uint16_t mempool_flags; diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index bbef706374..5efc626260 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -5091,3 +5091,122 @@ For example to unload BPF filter from TX queue 0, port 0: .. code-block:: console testpmd> bpf-unload tx 0 0 + +Flex Item Functions +------------------- + +The following sections show functions that configure and create a flex item object, +create a flex pattern, and use it in a flow rule. +The commands below use a 20-byte IPv4 header for the examples: + +:: + + 0 1 2 3 + 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | ver | IHL | TOS | length | DW0 + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | identification | flg | frag. offset | DW1 + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | TTL | protocol | checksum | DW2 + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | source IP address | DW3 + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | destination IP address | DW4 + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + + +Create flex item +~~~~~~~~~~~~~~~~ + +The flex item object is created by the PMD according to a new header configuration. The +header configuration is compiled by testpmd and stored in a +``rte_flow_item_flex_conf`` variable. + +:: + + # flow flex_item create + testpmd> flow flex_item init 0 3 ipv4_flex_config.json + port-0: created flex item #3 + +The flex item configuration is kept in an external JSON file.
+It describes the following header elements: + +**New header length.** + +Specifies whether the new header has a fixed or variable length, and the basic/minimal +header length value. + +If the header length is not fixed, a header location with a value that completes the header +length calculation, together with a scale/offset function, must be added. + +The scale function depends on the port hardware. + +**Next protocol.** + +Describes the location in the new header that specifies the following network header type. + +**Flow match samples.** + +Describes the locations in the new header that will be used in flow rules. + +The number of flow samples and the maximal sample length depend on the port hardware. + +**Input trigger.** + +Describes the preceding network header configuration. + +**Output trigger.** + +Describes the conditions that trigger the transfer to the following network header. + +.. code-block:: json + + { + "next_header": { "field_mode": "FIELD_MODE_FIXED", "field_size": 20}, + "next_protocol": {"field_size": 8, "field_base": 72}, + "sample_data": [ + { "field_mode": "FIELD_MODE_FIXED", "field_size": 32, "field_base": 0}, + { "field_mode": "FIELD_MODE_FIXED", "field_size": 32, "field_base": 32}, + { "field_mode": "FIELD_MODE_FIXED", "field_size": 32, "field_base": 64}, + { "field_mode": "FIELD_MODE_FIXED", "field_size": 32, "field_base": 96} + ], + "input_link": [ + {"item": "eth type is 0x0800"}, + {"item": "vlan inner_type is 0x0800"} + ], + "output_link": [ + {"item": "udp", "next": 17}, + {"item": "tcp", "next": 6}, + {"item": "icmp", "next": 1} + ] + } + + +Flex pattern and flow rules +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +A flex pattern describes the parts of the network header that will trigger a flex flow item hit in a flow rule. +The flex pattern is directly related to the flex item samples configuration. +Flex patterns can be shared between ports.
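The pattern data passed to the ``flow flex_item pattern`` commands below is a hexadecimal string that is decoded two characters per byte, in network order. As a rough illustration of that decoding, here is a simplified, standalone sketch (hypothetical names, not the actual testpmd helper):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/*
 * Decode a hex string such as "45FF" into bytes {0x45, 0xFF}.
 * Returns the number of decoded bytes, or 0 when the input is
 * malformed (odd length, non-hex digit, or too many bytes).
 * Hypothetical sketch; not the actual testpmd code.
 */
static uint32_t
hex_pattern_decode(const char *str, uint8_t *data, uint32_t max_len)
{
	uint32_t i, len = strlen(str);
	char b[3] = { 0 };
	char *end;

	if (len & 1)            /* every byte needs two characters */
		return 0;
	len /= 2;
	if (len > max_len)
		return 0;
	for (i = 0; i < len; i++) {
		b[0] = str[2 * i];      /* bytes stay in network order */
		b[1] = str[2 * i + 1];
		data[i] = (uint8_t)strtoul(b, &end, 16);
		if (end != &b[2])       /* reject non-hex characters */
			return 0;
	}
	return len;
}
```

Under this convention the pattern string ``45FF`` in the first example below decodes to the two bytes 0x45 and 0xFF, and a malformed string (odd length or a non-hex digit) is rejected as a whole.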
+ +**Flex pattern and flow rule to match IPv4 version and 20-byte length** + +:: + + # set flex_pattern is + testpmd> flow flex_item pattern 5 is 45FF + created pattern #5 + + testpmd> flow create 0 ingress pattern eth / ipv4 / udp / flex item is 3 pattern is 5 / end actions mark id 1 / queue index 0 / end + Flow rule #0 created + +**Flex pattern and flow rule to match packets with source address 1.2.3.4** + +:: + + testpmd> flow flex_item pattern 2 spec 45000000000000000000000001020304 mask FF0000000000000000000000FFFFFFFF + created pattern #2 + + testpmd> flow create 0 ingress pattern eth / ipv4 / udp / flex item is 3 pattern is 2 / end actions mark id 1 / queue index 0 / end + Flow rule #0 created

From patchwork Fri Oct 1 19:34:07 2021
X-Patchwork-Submitter: Slava Ovsiienko
X-Patchwork-Id: 100349
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Viacheslav Ovsiienko
Date: Fri, 1 Oct 2021 22:34:07 +0300
Message-ID: <20211001193415.23288-7-viacheslavo@nvidia.com>
In-Reply-To: <20211001193415.23288-1-viacheslavo@nvidia.com>
References: <20210922180418.20663-1-viacheslavo@nvidia.com> <20211001193415.23288-1-viacheslavo@nvidia.com>
Subject: [dpdk-dev] [PATCH v2 06/14] common/mlx5: refactor HCA attributes query
There is a common part of the code that queries the HCA attributes from the device; this part can be factored out into a dedicated routine. Signed-off-by: Viacheslav Ovsiienko --- drivers/common/mlx5/mlx5_devx_cmds.c | 173 +++++++++++---------------- 1 file changed, 73 insertions(+), 100 deletions(-) diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c index 56407cc332..8273e98146 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.c +++ b/drivers/common/mlx5/mlx5_devx_cmds.c @@ -13,6 +13,42 @@ #include "mlx5_common_log.h" #include "mlx5_malloc.h" +static void * +mlx5_devx_get_hca_cap(void *ctx, uint32_t *in, uint32_t *out, + int *err, uint32_t flags) +{ + const size_t size_in = MLX5_ST_SZ_DW(query_hca_cap_in) * sizeof(int); + const size_t size_out = MLX5_ST_SZ_DW(query_hca_cap_out) * sizeof(int); + int status, syndrome, rc; + + if (err) + *err = 0; + memset(in, 0, size_in); + memset(out, 0, size_out); + MLX5_SET(query_hca_cap_in, in, opcode, MLX5_CMD_OP_QUERY_HCA_CAP); + MLX5_SET(query_hca_cap_in, in, op_mod, flags); + rc = mlx5_glue->devx_general_cmd(ctx, in, size_in, out, size_out); + if (rc) { + DRV_LOG(ERR, + "Failed to query devx HCA capabilities func %#02x", + flags >> 1); + if (err) + *err = rc > 0 ? -rc : rc; + return NULL; + } + status = MLX5_GET(query_hca_cap_out, out, status); + syndrome = MLX5_GET(query_hca_cap_out, out, syndrome); + if (status) { + DRV_LOG(ERR, + "Failed to query devx HCA capabilities func %#02x status %x, syndrome = %x", + flags >> 1, status, syndrome); + if (err) + *err = -1; + return NULL; + } + return MLX5_ADDR_OF(query_hca_cap_out, out, capability); +} + /** * Perform read access to the registers. Reads data from a register * and writes it to the specified buffer.
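The helper introduced in this patch returns a pointer into the output buffer on success and NULL on failure, passing the error code back through an optional out-parameter. That calling convention can be sketched in isolation as follows (hypothetical names and data, not the mlx5 API):

```c
#include <stddef.h>

/*
 * Sketch of the "pointer on success, NULL on failure, error code
 * through an optional out-parameter" convention used by the
 * refactored query helper. Hypothetical names; not mlx5 code.
 */
static const int *
query_cap(const int *table, size_t num, size_t idx, int *err)
{
	if (err)
		*err = 0;          /* caller may pass NULL to ignore errors */
	if (table == NULL || idx >= num) {
		if (err)
			*err = -1; /* failure reason travels out-of-band */
		return NULL;       /* caller only needs to test the pointer */
	}
	return &table[idx];
}
```

With this shape, each call site collapses to a single pointer check instead of repeating the command, status, and syndrome handling, which is what the refactor below does for every HCA capability query.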
@@ -472,21 +508,15 @@ static void mlx5_devx_cmd_query_hca_vdpa_attr(void *ctx, struct mlx5_hca_vdpa_attr *vdpa_attr) { - uint32_t in[MLX5_ST_SZ_DW(query_hca_cap_in)] = {0}; - uint32_t out[MLX5_ST_SZ_DW(query_hca_cap_out)] = {0}; - void *hcattr = MLX5_ADDR_OF(query_hca_cap_out, out, capability); - int status, syndrome, rc; + uint32_t in[MLX5_ST_SZ_DW(query_hca_cap_in)]; + uint32_t out[MLX5_ST_SZ_DW(query_hca_cap_out)]; + void *hcattr; - MLX5_SET(query_hca_cap_in, in, opcode, MLX5_CMD_OP_QUERY_HCA_CAP); - MLX5_SET(query_hca_cap_in, in, op_mod, - MLX5_GET_HCA_CAP_OP_MOD_VDPA_EMULATION | - MLX5_HCA_CAP_OPMOD_GET_CUR); - rc = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); - status = MLX5_GET(query_hca_cap_out, out, status); - syndrome = MLX5_GET(query_hca_cap_out, out, syndrome); - if (rc || status) { - RTE_LOG(DEBUG, PMD, "Failed to query devx VDPA capabilities," - " status %x, syndrome = %x", status, syndrome); + hcattr = mlx5_devx_get_hca_cap(ctx, in, out, NULL, + MLX5_GET_HCA_CAP_OP_MOD_VDPA_EMULATION | + MLX5_HCA_CAP_OPMOD_GET_CUR); + if (!hcattr) { + RTE_LOG(DEBUG, PMD, "Failed to query devx VDPA capabilities"); vdpa_attr->valid = 0; } else { vdpa_attr->valid = 1; @@ -741,27 +771,15 @@ mlx5_devx_cmd_query_hca_attr(void *ctx, { uint32_t in[MLX5_ST_SZ_DW(query_hca_cap_in)] = {0}; uint32_t out[MLX5_ST_SZ_DW(query_hca_cap_out)] = {0}; - void *hcattr; - int status, syndrome, rc, i; uint64_t general_obj_types_supported = 0; + void *hcattr; + int rc, i; - MLX5_SET(query_hca_cap_in, in, opcode, MLX5_CMD_OP_QUERY_HCA_CAP); - MLX5_SET(query_hca_cap_in, in, op_mod, - MLX5_GET_HCA_CAP_OP_MOD_GENERAL_DEVICE | - MLX5_HCA_CAP_OPMOD_GET_CUR); - - rc = mlx5_glue->devx_general_cmd(ctx, - in, sizeof(in), out, sizeof(out)); - if (rc) - goto error; - status = MLX5_GET(query_hca_cap_out, out, status); - syndrome = MLX5_GET(query_hca_cap_out, out, syndrome); - if (status) { - DRV_LOG(DEBUG, "Failed to query devx HCA capabilities, " - "status %x, syndrome = %x", 
status, syndrome); - return -1; - } - hcattr = MLX5_ADDR_OF(query_hca_cap_out, out, capability); + hcattr = mlx5_devx_get_hca_cap(ctx, in, out, &rc, + MLX5_GET_HCA_CAP_OP_MOD_GENERAL_DEVICE | + MLX5_HCA_CAP_OPMOD_GET_CUR); + if (!hcattr) + return rc; attr->flow_counter_bulk_alloc_bitmap = MLX5_GET(cmd_hca_cap, hcattr, flow_counter_bulk_alloc); attr->flow_counters_dump = MLX5_GET(cmd_hca_cap, hcattr, @@ -884,19 +902,13 @@ mlx5_devx_cmd_query_hca_attr(void *ctx, general_obj_types) & MLX5_GENERAL_OBJ_TYPES_CAP_CONN_TRACK_OFFLOAD); if (attr->qos.sup) { - MLX5_SET(query_hca_cap_in, in, op_mod, - MLX5_GET_HCA_CAP_OP_MOD_QOS_CAP | - MLX5_HCA_CAP_OPMOD_GET_CUR); - rc = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), - out, sizeof(out)); - if (rc) - goto error; - if (status) { - DRV_LOG(DEBUG, "Failed to query devx QOS capabilities," - " status %x, syndrome = %x", status, syndrome); - return -1; + hcattr = mlx5_devx_get_hca_cap(ctx, in, out, &rc, + MLX5_GET_HCA_CAP_OP_MOD_QOS_CAP | + MLX5_HCA_CAP_OPMOD_GET_CUR); + if (!hcattr) { + DRV_LOG(DEBUG, "Failed to query devx QOS capabilities"); + return rc; } - hcattr = MLX5_ADDR_OF(query_hca_cap_out, out, capability); attr->qos.flow_meter_old = MLX5_GET(qos_cap, hcattr, flow_meter_old); attr->qos.log_max_flow_meter = @@ -925,27 +937,14 @@ mlx5_devx_cmd_query_hca_attr(void *ctx, mlx5_devx_cmd_query_hca_vdpa_attr(ctx, &attr->vdpa); if (!attr->eth_net_offloads) return 0; - /* Query Flow Sampler Capability From FLow Table Properties Layout. 
*/ - memset(in, 0, sizeof(in)); - memset(out, 0, sizeof(out)); - MLX5_SET(query_hca_cap_in, in, opcode, MLX5_CMD_OP_QUERY_HCA_CAP); - MLX5_SET(query_hca_cap_in, in, op_mod, - MLX5_GET_HCA_CAP_OP_MOD_NIC_FLOW_TABLE | - MLX5_HCA_CAP_OPMOD_GET_CUR); - - rc = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); - if (rc) - goto error; - status = MLX5_GET(query_hca_cap_out, out, status); - syndrome = MLX5_GET(query_hca_cap_out, out, syndrome); - if (status) { - DRV_LOG(DEBUG, "Failed to query devx HCA capabilities, " - "status %x, syndrome = %x", status, syndrome); + hcattr = mlx5_devx_get_hca_cap(ctx, in, out, &rc, + MLX5_GET_HCA_CAP_OP_MOD_NIC_FLOW_TABLE | + MLX5_HCA_CAP_OPMOD_GET_CUR); + if (!hcattr) { attr->log_max_ft_sampler_num = 0; - return -1; + return rc; } - hcattr = MLX5_ADDR_OF(query_hca_cap_out, out, capability); attr->log_max_ft_sampler_num = MLX5_GET (flow_table_nic_cap, hcattr, flow_table_properties_nic_receive.log_max_ft_sampler_num); @@ -960,27 +959,13 @@ mlx5_devx_cmd_query_hca_attr(void *ctx, (flow_table_nic_cap, hcattr, ft_field_support_2_nic_receive.outer_ipv4_ihl); /* Query HCA offloads for Ethernet protocol. 
*/ - memset(in, 0, sizeof(in)); - memset(out, 0, sizeof(out)); - MLX5_SET(query_hca_cap_in, in, opcode, MLX5_CMD_OP_QUERY_HCA_CAP); - MLX5_SET(query_hca_cap_in, in, op_mod, - MLX5_GET_HCA_CAP_OP_MOD_ETHERNET_OFFLOAD_CAPS | - MLX5_HCA_CAP_OPMOD_GET_CUR); - - rc = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); - if (rc) { + hcattr = mlx5_devx_get_hca_cap(ctx, in, out, &rc, + MLX5_GET_HCA_CAP_OP_MOD_ETHERNET_OFFLOAD_CAPS | + MLX5_HCA_CAP_OPMOD_GET_CUR); + if (!hcattr) { attr->eth_net_offloads = 0; - goto error; + return rc; } - status = MLX5_GET(query_hca_cap_out, out, status); - syndrome = MLX5_GET(query_hca_cap_out, out, syndrome); - if (status) { - DRV_LOG(DEBUG, "Failed to query devx HCA capabilities, " - "status %x, syndrome = %x", status, syndrome); - attr->eth_net_offloads = 0; - return -1; - } - hcattr = MLX5_ADDR_OF(query_hca_cap_out, out, capability); attr->wqe_vlan_insert = MLX5_GET(per_protocol_networking_offload_caps, hcattr, wqe_vlan_insert); attr->csum_cap = MLX5_GET(per_protocol_networking_offload_caps, @@ -1017,26 +1002,14 @@ mlx5_devx_cmd_query_hca_attr(void *ctx, hcattr, rss_ind_tbl_cap); /* Query HCA attribute for ROCE.
*/ if (attr->roce) { - memset(in, 0, sizeof(in)); - memset(out, 0, sizeof(out)); - MLX5_SET(query_hca_cap_in, in, opcode, - MLX5_CMD_OP_QUERY_HCA_CAP); - MLX5_SET(query_hca_cap_in, in, op_mod, - MLX5_GET_HCA_CAP_OP_MOD_ROCE | - MLX5_HCA_CAP_OPMOD_GET_CUR); - rc = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), - out, sizeof(out)); - if (rc) - goto error; - status = MLX5_GET(query_hca_cap_out, out, status); - syndrome = MLX5_GET(query_hca_cap_out, out, syndrome); - if (status) { + hcattr = mlx5_devx_get_hca_cap(ctx, in, out, &rc, + MLX5_GET_HCA_CAP_OP_MOD_ROCE | + MLX5_HCA_CAP_OPMOD_GET_CUR); + if (!hcattr) { DRV_LOG(DEBUG, - "Failed to query devx HCA ROCE capabilities, " - "status %x, syndrome = %x", status, syndrome); - return -1; + "Failed to query devx HCA ROCE capabilities"); + return rc; } - hcattr = MLX5_ADDR_OF(query_hca_cap_out, out, capability); attr->qp_ts_format = MLX5_GET(roce_caps, hcattr, qp_ts_format); } if (attr->eth_virt &&

From patchwork Fri Oct 1 19:34:08 2021
X-Patchwork-Submitter: Slava Ovsiienko
X-Patchwork-Id: 100350
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Viacheslav Ovsiienko
Date: Fri, 1 Oct 2021 22:34:08 +0300
Message-ID: <20211001193415.23288-8-viacheslavo@nvidia.com>
In-Reply-To: <20211001193415.23288-1-viacheslavo@nvidia.com>
References: <20210922180418.20663-1-viacheslavo@nvidia.com> <20211001193415.23288-1-viacheslavo@nvidia.com>
Subject: [dpdk-dev] [PATCH v2 07/14] common/mlx5: extend flex parser capabilities

From: Gregory Etelson

MLX5 PARSE_GRAPH_NODE is the main data structure used by the Flex Parser when a new parsing protocol is defined. While software creates a PARSE_GRAPH_NODE object for a new protocol, it must verify that the configuration parameters it uses comply with hardware limits. The patch queries the hardware PARSE_GRAPH_NODE capabilities and stores them in the PMD internal configuration structure: - query capabilities from the parse_graph_node attribute page - query the max_num_prog_sample_field capability from HCA page 2 Signed-off-by: Gregory Etelson --- drivers/common/mlx5/mlx5_devx_cmds.c | 57 ++++++++++++++++++++++++ drivers/common/mlx5/mlx5_devx_cmds.h | 65 +++++++++++++++++++++++++++- drivers/common/mlx5/mlx5_prm.h | 50 ++++++++++++++++++++- 3 files changed, 168 insertions(+), 4 deletions(-) diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c index 8273e98146..294ac480dc 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.c +++ b/drivers/common/mlx5/mlx5_devx_cmds.c @@ -729,6 +729,53 @@ mlx5_devx_cmd_create_flex_parser(void *ctx, return parse_flex_obj; } +static int +mlx5_devx_cmd_query_hca_parse_graph_node_cap + (void *ctx, struct mlx5_hca_flex_attr *attr) +{ + uint32_t in[MLX5_ST_SZ_DW(query_hca_cap_in)]; + uint32_t out[MLX5_ST_SZ_DW(query_hca_cap_out)]; + void *hcattr; + int rc; + + hcattr = mlx5_devx_get_hca_cap(ctx, in, out, &rc, + MLX5_GET_HCA_CAP_OP_MOD_PARSE_GRAPH_NODE_CAP | + MLX5_HCA_CAP_OPMOD_GET_CUR); + if (!hcattr) + return rc; + attr->node_in =
		MLX5_GET(parse_graph_node_cap, hcattr, node_in);
+	attr->node_out = MLX5_GET(parse_graph_node_cap, hcattr, node_out);
+	attr->header_length_mode = MLX5_GET(parse_graph_node_cap, hcattr,
+					    header_length_mode);
+	attr->sample_offset_mode = MLX5_GET(parse_graph_node_cap, hcattr,
+					    sample_offset_mode);
+	attr->max_num_arc_in = MLX5_GET(parse_graph_node_cap, hcattr,
+					max_num_arc_in);
+	attr->max_num_arc_out = MLX5_GET(parse_graph_node_cap, hcattr,
+					 max_num_arc_out);
+	attr->max_num_sample = MLX5_GET(parse_graph_node_cap, hcattr,
+					max_num_sample);
+	attr->sample_id_in_out = MLX5_GET(parse_graph_node_cap, hcattr,
+					  sample_id_in_out);
+	attr->max_base_header_length = MLX5_GET(parse_graph_node_cap, hcattr,
+						max_base_header_length);
+	attr->max_sample_base_offset = MLX5_GET(parse_graph_node_cap, hcattr,
+						max_sample_base_offset);
+	attr->max_next_header_offset = MLX5_GET(parse_graph_node_cap, hcattr,
+						max_next_header_offset);
+	attr->header_length_mask_width = MLX5_GET(parse_graph_node_cap, hcattr,
+						   header_length_mask_width);
+	/* Get the max supported samples from HCA CAP 2 */
+	hcattr = mlx5_devx_get_hca_cap(ctx, in, out, &rc,
+				       MLX5_GET_HCA_CAP_OP_MOD_GENERAL_DEVICE_2 |
+				       MLX5_HCA_CAP_OPMOD_GET_CUR);
+	if (!hcattr)
+		return rc;
+	attr->max_num_prog_sample =
+		MLX5_GET(cmd_hca_cap_2, hcattr, max_num_prog_sample_field);
+	return 0;
+}
+
 static int
 mlx5_devx_query_pkt_integrity_match(void *hcattr)
 {
@@ -933,6 +980,16 @@ mlx5_devx_cmd_query_hca_attr(void *ctx,
 					log_max_num_meter_aso);
 		}
 	}
+	/*
+	 * Flex item support needs max_num_prog_sample_field
+	 * from the Capabilities 2 table for PARSE_GRAPH_NODE
+	 */
+	if (attr->parse_graph_flex_node) {
+		rc = mlx5_devx_cmd_query_hca_parse_graph_node_cap
+			(ctx, &attr->flex);
+		if (rc)
+			return -1;
+	}
 	if (attr->vdpa.valid)
 		mlx5_devx_cmd_query_hca_vdpa_attr(ctx, &attr->vdpa);
 	if (!attr->eth_net_offloads)
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index e576e30f24..fcd0b12e22 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -8,6 +8,7 @@
 #include "mlx5_glue.h"
 #include "mlx5_prm.h"
 #include <rte_compat.h>
+#include <rte_bitops.h>

 /*
  * Defines the amount of retries to allocate the first UAR in the page.
@@ -94,6 +95,64 @@ struct mlx5_hca_flow_attr {
 	uint32_t tunnel_header_2_3;
 };

+/**
+ * Accumulate port PARSE_GRAPH_NODE capabilities from
+ * PARSE_GRAPH_NODE Capabilities and HCA Capabilities 2 tables
+ */
+__extension__
+struct mlx5_hca_flex_attr {
+	uint32_t node_in;
+	uint32_t node_out;
+	uint16_t header_length_mode;
+	uint16_t sample_offset_mode;
+	uint8_t  max_num_arc_in;
+	uint8_t  max_num_arc_out;
+	uint8_t  max_num_sample;
+	uint8_t  max_num_prog_sample:5;	/* From HCA CAP 2 */
+	uint8_t  sample_id_in_out:1;
+	uint16_t max_base_header_length;
+	uint8_t  max_sample_base_offset;
+	uint16_t max_next_header_offset;
+	uint8_t  header_length_mask_width;
+};
+
+/* ISO C restricts enumerator values to range of 'int' */
+__extension__
+enum {
+	PARSE_GRAPH_NODE_CAP_SUPPORTED_PROTOCOL_HEAD = RTE_BIT32(1),
+	PARSE_GRAPH_NODE_CAP_SUPPORTED_PROTOCOL_MAC = RTE_BIT32(2),
+	PARSE_GRAPH_NODE_CAP_SUPPORTED_PROTOCOL_IP = RTE_BIT32(3),
+	PARSE_GRAPH_NODE_CAP_SUPPORTED_PROTOCOL_GRE = RTE_BIT32(4),
+	PARSE_GRAPH_NODE_CAP_SUPPORTED_PROTOCOL_UDP = RTE_BIT32(5),
+	PARSE_GRAPH_NODE_CAP_SUPPORTED_PROTOCOL_MPLS = RTE_BIT32(6),
+	PARSE_GRAPH_NODE_CAP_SUPPORTED_PROTOCOL_TCP = RTE_BIT32(7),
+	PARSE_GRAPH_NODE_CAP_SUPPORTED_PROTOCOL_VXLAN_GRE = RTE_BIT32(8),
+	PARSE_GRAPH_NODE_CAP_SUPPORTED_PROTOCOL_GENEVE = RTE_BIT32(9),
+	PARSE_GRAPH_NODE_CAP_SUPPORTED_PROTOCOL_IPSEC_ESP = RTE_BIT32(10),
+	PARSE_GRAPH_NODE_CAP_SUPPORTED_PROTOCOL_IPV4 = RTE_BIT32(11),
+	PARSE_GRAPH_NODE_CAP_SUPPORTED_PROTOCOL_IPV6 = RTE_BIT32(12),
+	PARSE_GRAPH_NODE_CAP_SUPPORTED_PROTOCOL_PROGRAMMABLE = RTE_BIT32(31)
+};
+
+enum {
+	PARSE_GRAPH_NODE_CAP_LENGTH_MODE_FIXED = RTE_BIT32(0),
+	PARSE_GRAPH_NODE_CAP_LENGTH_MODE_EXPLISIT_FIELD = RTE_BIT32(1),
+	PARSE_GRAPH_NODE_CAP_LENGTH_MODE_BITMASK_FIELD = RTE_BIT32(2)
+};
+
+/*
+ * DWORD shift is the base for calculating header_length_field_mask
+ * value in the MLX5_GRAPH_NODE_LEN_FIELD mode.
+ */
+#define MLX5_PARSE_GRAPH_NODE_HDR_LEN_SHIFT_DWORD 0x02
+
+static inline uint32_t
+mlx5_hca_parse_graph_node_base_hdr_len_mask
+	(const struct mlx5_hca_flex_attr *attr)
+{
+	return (1 << attr->header_length_mask_width) - 1;
+}
+
 /* HCA supports this number of time periods for LRO. */
 #define MLX5_LRO_NUM_SUPP_PERIODS 4

@@ -164,6 +223,7 @@ struct mlx5_hca_attr {
 	struct mlx5_hca_qos_attr qos;
 	struct mlx5_hca_vdpa_attr vdpa;
 	struct mlx5_hca_flow_attr flow;
+	struct mlx5_hca_flex_attr flex;
 	int log_max_qp_sz;
 	int log_max_cq_sz;
 	int log_max_qp;
@@ -570,8 +630,9 @@ int mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj,
 				      uint32_t ids[], uint32_t num);

 __rte_internal
-struct mlx5_devx_obj *mlx5_devx_cmd_create_flex_parser(void *ctx,
-					struct mlx5_devx_graph_node_attr *data);
+struct mlx5_devx_obj *
+mlx5_devx_cmd_create_flex_parser(void *ctx,
+				 struct mlx5_devx_graph_node_attr *data);

 __rte_internal
 int mlx5_devx_cmd_register_read(void *ctx, uint16_t reg_id,
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index d361bcf90e..3ff14b4a5a 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -975,7 +975,14 @@ struct mlx5_ifc_fte_match_set_misc4_bits {
 	u8 prog_sample_field_id_2[0x20];
 	u8 prog_sample_field_value_3[0x20];
 	u8 prog_sample_field_id_3[0x20];
-	u8 reserved_at_100[0x100];
+	u8 prog_sample_field_value_4[0x20];
+	u8 prog_sample_field_id_4[0x20];
+	u8 prog_sample_field_value_5[0x20];
+	u8 prog_sample_field_id_5[0x20];
+	u8 prog_sample_field_value_6[0x20];
+	u8 prog_sample_field_id_6[0x20];
+	u8 prog_sample_field_value_7[0x20];
+	u8 prog_sample_field_id_7[0x20];
 };

 struct mlx5_ifc_fte_match_set_misc5_bits {
@@ -1244,6 +1251,7 @@ enum {
 	MLX5_GET_HCA_CAP_OP_MOD_ROCE = 0x4 << 1,
 	MLX5_GET_HCA_CAP_OP_MOD_NIC_FLOW_TABLE = 0x7 << 1,
 	MLX5_GET_HCA_CAP_OP_MOD_VDPA_EMULATION = 0x13 << 1,
+	MLX5_GET_HCA_CAP_OP_MOD_PARSE_GRAPH_NODE_CAP = 0x1C << 1,
 	MLX5_GET_HCA_CAP_OP_MOD_GENERAL_DEVICE_2 = 0x20 << 1,
 };

@@ -1750,6 +1758,27 @@ struct mlx5_ifc_virtio_emulation_cap_bits {
 	u8 reserved_at_1c0[0x620];
 };

+/**
+ * PARSE_GRAPH_NODE Capabilities Field Descriptions
+ */
+struct mlx5_ifc_parse_graph_node_cap_bits {
+	u8 node_in[0x20];
+	u8 node_out[0x20];
+	u8 header_length_mode[0x10];
+	u8 sample_offset_mode[0x10];
+	u8 max_num_arc_in[0x08];
+	u8 max_num_arc_out[0x08];
+	u8 max_num_sample[0x08];
+	u8 reserved_at_78[0x07];
+	u8 sample_id_in_out[0x1];
+	u8 max_base_header_length[0x10];
+	u8 reserved_at_90[0x08];
+	u8 max_sample_base_offset[0x08];
+	u8 max_next_header_offset[0x10];
+	u8 reserved_at_b0[0x08];
+	u8 header_length_mask_width[0x08];
+};
+
 struct mlx5_ifc_flow_table_prop_layout_bits {
 	u8 ft_support[0x1];
 	u8 flow_tag[0x1];
@@ -1844,9 +1873,14 @@ struct mlx5_ifc_flow_table_nic_cap_bits {
 		ft_field_support_2_nic_receive;
 };

+/*
+ * HCA Capabilities 2
+ */
 struct mlx5_ifc_cmd_hca_cap_2_bits {
 	u8 reserved_at_0[0x80]; /* End of DW4. */
-	u8 reserved_at_80[0xb];
+	u8 reserved_at_80[0x3];
+	u8 max_num_prog_sample_field[0x5];
+	u8 reserved_at_88[0x3];
 	u8 log_max_num_reserved_qpn[0x5];
 	u8 reserved_at_90[0x3];
 	u8 log_reserved_qpn_granularity[0x5];
@@ -3877,6 +3911,12 @@ enum mlx5_parse_graph_flow_match_sample_offset_mode {
 	MLX5_GRAPH_SAMPLE_OFFSET_BITMASK = 0x2,
 };

+enum mlx5_parse_graph_flow_match_sample_tunnel_mode {
+	MLX5_GRAPH_SAMPLE_TUNNEL_OUTER = 0x0,
+	MLX5_GRAPH_SAMPLE_TUNNEL_INNER = 0x1,
+	MLX5_GRAPH_SAMPLE_TUNNEL_FIRST = 0x2
+};
+
 /* Node index for an input / output arc of the flex parser graph. */
 enum mlx5_parse_graph_arc_node_index {
 	MLX5_GRAPH_ARC_NODE_NULL = 0x0,
@@ -3890,9 +3930,15 @@ enum mlx5_parse_graph_arc_node_index {
 	MLX5_GRAPH_ARC_NODE_VXLAN_GPE = 0x8,
 	MLX5_GRAPH_ARC_NODE_GENEVE = 0x9,
 	MLX5_GRAPH_ARC_NODE_IPSEC_ESP = 0xa,
+	MLX5_GRAPH_ARC_NODE_IPV4 = 0xb,
+	MLX5_GRAPH_ARC_NODE_IPV6 = 0xc,
 	MLX5_GRAPH_ARC_NODE_PROGRAMMABLE = 0x1f,
 };

+#define MLX5_PARSE_GRAPH_FLOW_SAMPLE_MAX 8
+#define MLX5_PARSE_GRAPH_IN_ARC_MAX 8
+#define MLX5_PARSE_GRAPH_OUT_ARC_MAX 8
+
 /**
  * Convert a user mark to flow mark.
  *

From patchwork Fri Oct 1 19:34:09 2021
X-Patchwork-Submitter: Slava Ovsiienko
X-Patchwork-Id: 100351
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Viacheslav Ovsiienko
Date: Fri, 1 Oct 2021 22:34:09 +0300
Message-ID: <20211001193415.23288-9-viacheslavo@nvidia.com>
In-Reply-To: <20211001193415.23288-1-viacheslavo@nvidia.com>
Subject: [dpdk-dev] [PATCH v2 08/14] common/mlx5: fix flex parser DevX creation routine

From: Gregory Etelson

Add the missing settings of the modify_field_select and
next_header_field_size field values.

Fixes: 38119ebe01d6 ("common/mlx5: add DevX command for flex parsers")
Cc: stable@dpdk.org

Signed-off-by: Gregory Etelson
---
 drivers/common/mlx5/mlx5_devx_cmds.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index 294ac480dc..43e51e3f95 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -620,10 +620,9 @@ mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj,
 	return ret;
 }

-
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_flex_parser(void *ctx,
-			struct mlx5_devx_graph_node_attr *data)
+				 struct mlx5_devx_graph_node_attr *data)
 {
 	uint32_t in[MLX5_ST_SZ_DW(create_flex_parser_in)] = {0};
 	uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0};
@@ -647,12 +646,18 @@ mlx5_devx_cmd_create_flex_parser(void *ctx,
 		 MLX5_GENERAL_OBJ_TYPE_FLEX_PARSE_GRAPH);
 	MLX5_SET(parse_graph_flex, flex, header_length_mode,
 		 data->header_length_mode);
+	MLX5_SET64(parse_graph_flex, flex, modify_field_select,
+		   data->modify_field_select);
 	MLX5_SET(parse_graph_flex, flex, header_length_base_value,
 		 data->header_length_base_value);
 	MLX5_SET(parse_graph_flex, flex, header_length_field_offset,
 		 data->header_length_field_offset);
 	MLX5_SET(parse_graph_flex, flex, header_length_field_shift,
 		 data->header_length_field_shift);
+	MLX5_SET(parse_graph_flex, flex, next_header_field_offset,
+		 data->next_header_field_offset);
+	MLX5_SET(parse_graph_flex, flex, next_header_field_size,
+		 data->next_header_field_size);
 	MLX5_SET(parse_graph_flex, flex, header_length_field_mask,
 		 data->header_length_field_mask);
 	for (i = 0; i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) {

From patchwork Fri Oct 1 19:34:10 2021
X-Patchwork-Submitter: Slava Ovsiienko
X-Patchwork-Id: 100352
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Viacheslav Ovsiienko
Date: Fri, 1 Oct 2021 22:34:10 +0300
Message-ID: <20211001193415.23288-10-viacheslavo@nvidia.com>
In-Reply-To: <20211001193415.23288-1-viacheslavo@nvidia.com>
Subject: [dpdk-dev] [PATCH v2 09/14] net/mlx5: update eCPRI flex parser structures

To handle the eCPRI protocol in flows, the mlx5 PMD engages the flex
parser hardware feature. When eCPRI support was implemented, wider flex
parser usage was anticipated, and all related variables were named
accordingly, containing the "flex" syllable. Now we are preparing to
introduce the more generic flex item approach. To avoid naming conflicts
and to improve code readability, the eCPRI-related infrastructure
variables are renamed as a preparation step. Later, once the new flex
item is implemented, the eCPRI protocol support could be refactored to
be based on the common flex item.

Signed-off-by: Viacheslav Ovsiienko
---
 drivers/net/mlx5/mlx5.c         |  9 +++------
 drivers/net/mlx5/mlx5.h         | 12 +++---------
 drivers/net/mlx5/mlx5_flow_dv.c |  2 +-
 3 files changed, 7 insertions(+), 16 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 45ccfe2784..aa49542b9d 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -858,8 +858,7 @@ bool
 mlx5_flex_parser_ecpri_exist(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_flex_parser_profiles *prf =
-		&priv->sh->fp[MLX5_FLEX_PARSER_ECPRI_0];
+	struct mlx5_ecpri_parser_profile *prf = &priv->sh->ecpri_parser;

 	return !!prf->obj;
 }
@@ -878,8 +877,7 @@ int
 mlx5_flex_parser_ecpri_alloc(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_flex_parser_profiles *prf =
-		&priv->sh->fp[MLX5_FLEX_PARSER_ECPRI_0];
+	struct mlx5_ecpri_parser_profile *prf = &priv->sh->ecpri_parser;
 	struct mlx5_devx_graph_node_attr node = {
 		.modify_field_select = 0,
 	};
@@ -942,8 +940,7 @@ static void
 mlx5_flex_parser_ecpri_release(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_flex_parser_profiles *prf =
-		&priv->sh->fp[MLX5_FLEX_PARSER_ECPRI_0];
+	struct mlx5_ecpri_parser_profile *prf = &priv->sh->ecpri_parser;

 	if (prf->obj)
 		mlx5_devx_cmd_destroy(prf->obj);
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 3581414b78..5000d2d4c5 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1035,14 +1035,8 @@ struct mlx5_dev_txpp {
 	uint64_t err_ts_future; /* Timestamp in the distant future. */
 };

-/* Supported flex parser profile ID. */
-enum mlx5_flex_parser_profile_id {
-	MLX5_FLEX_PARSER_ECPRI_0 = 0,
-	MLX5_FLEX_PARSER_MAX = 8,
-};
-
-/* Sample ID information of flex parser structure. */
-struct mlx5_flex_parser_profiles {
+/* Sample ID information of eCPRI flex parser structure. */
+struct mlx5_ecpri_parser_profile {
 	uint32_t num; /* Actual number of samples. */
 	uint32_t ids[8]; /* Sample IDs for this profile. */
 	uint8_t offset[8]; /* Bytes offset of each parser. */
@@ -1190,7 +1184,7 @@ struct mlx5_dev_ctx_shared {
 	struct mlx5_devx_obj *tis; /* TIS object. */
 	struct mlx5_devx_obj *td; /* Transport domain. */
 	void *tx_uar; /* Tx/packet pacing shared UAR. */
-	struct mlx5_flex_parser_profiles fp[MLX5_FLEX_PARSER_MAX];
+	struct mlx5_ecpri_parser_profile ecpri_parser;
 	/* Flex parser profiles information. */
 	void *devx_rx_uar; /* DevX UAR for Rx. */
 	struct mlx5_aso_age_mng *aso_age_mng;
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index b610ad3ef4..fc676d3ee4 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -10020,7 +10020,7 @@ flow_dv_translate_item_ecpri(struct rte_eth_dev *dev, void *matcher,
 	 */
 	if (!ecpri_m->hdr.common.u32)
 		return;
-	samples = priv->sh->fp[MLX5_FLEX_PARSER_ECPRI_0].ids;
+	samples = priv->sh->ecpri_parser.ids;
 	/* Need to take the whole DW as the mask to fill the entry. */
 	dw_m = MLX5_ADDR_OF(fte_match_set_misc4, misc4_m,
 			    prog_sample_field_value_0);

From patchwork Fri Oct 1 19:34:11 2021
X-Patchwork-Submitter: Slava Ovsiienko
X-Patchwork-Id: 100353
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Viacheslav Ovsiienko
Date: Fri, 1 Oct 2021 22:34:11 +0300
Message-ID: <20211001193415.23288-11-viacheslavo@nvidia.com>
In-Reply-To: <20211001193415.23288-1-viacheslavo@nvidia.com>
Subject: [dpdk-dev] [PATCH v2 10/14] net/mlx5: add flex item API

This patch is a preparation step for implementing the flex item feature
in the driver. It provides:

- external entry point routines for flex item creation/deletion
- flex item object management over the ports

The flex item object keeps information about the item created over the
port: a reference counter to track whether the item is in use by some
active flows, and a pointer to the underlying shared DevX object,
providing all the data needed to translate the flow flex pattern into
matcher fields according to the hardware configuration.

Not too many flex items are expected to be created on a port, so the
design is optimized for flow insertion rate rather than for memory
savings.

Signed-off-by: Viacheslav Ovsiienko
---
 drivers/net/mlx5/linux/mlx5_os.c  |   5 +-
 drivers/net/mlx5/meson.build      |   1 +
 drivers/net/mlx5/mlx5.c           |   2 +-
 drivers/net/mlx5/mlx5.h           |  24 ++++
 drivers/net/mlx5/mlx5_flow.c      |  49 ++++++++
 drivers/net/mlx5/mlx5_flow.h      |  18 ++-
 drivers/net/mlx5/mlx5_flow_dv.c   |   3 +-
 drivers/net/mlx5/mlx5_flow_flex.c | 189 ++++++++++++++++++++++++++++++
 8 files changed, 286 insertions(+), 5 deletions(-)
 create mode 100644 drivers/net/mlx5/mlx5_flow_flex.c

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 3746057673..cbbc152782 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -928,7 +928,6 @@ mlx5_representor_match(struct mlx5_dev_spawn_data *spawn,
 	return false;
 }

-
 /**
  * Spawn an Ethernet device from Verbs information.
  *
@@ -1787,6 +1786,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 		err = mlx5_alloc_shared_dr(priv);
 		if (err)
 			goto error;
+		if (mlx5_flex_item_port_init(eth_dev) < 0)
+			goto error;
 	}
 	if (config->devx && config->dv_flow_en && config->dest_tir) {
 		priv->obj_ops = devx_obj_ops;
@@ -1922,6 +1923,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 		claim_zero(rte_eth_switch_domain_free(priv->domain_id));
 	if (priv->hrxqs)
 		mlx5_list_destroy(priv->hrxqs);
+	if (eth_dev && priv->flex_item_map)
+		mlx5_flex_item_port_cleanup(eth_dev);
 	mlx5_free(priv);
 	if (eth_dev != NULL)
 		eth_dev->data->dev_private = NULL;
diff --git a/drivers/net/mlx5/meson.build b/drivers/net/mlx5/meson.build
index dac7f1fabf..f9b21c35d9 100644
--- a/drivers/net/mlx5/meson.build
+++ b/drivers/net/mlx5/meson.build
@@ -17,6 +17,7 @@ sources = files(
         'mlx5_flow_meter.c',
         'mlx5_flow_dv.c',
         'mlx5_flow_aso.c',
+        'mlx5_flow_flex.c',
         'mlx5_mac.c',
         'mlx5_mr.c',
         'mlx5_rss.c',
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index aa49542b9d..d902e00ea3 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -376,7 +376,6 @@ static const struct mlx5_indexed_pool_config mlx5_ipool_cfg[] = {
 	},
 };

-
 #define MLX5_FLOW_MIN_ID_POOL_SIZE 512
 #define MLX5_ID_GENERATION_ARRAY_FACTOR 16

@@ -1575,6 +1574,7 @@ mlx5_dev_close(struct rte_eth_dev *dev)
 	mlx5_mp_os_req_stop_rxtx(dev);
 	/* Free the eCPRI flex parser resource. */
 	mlx5_flex_parser_ecpri_release(dev);
+	mlx5_flex_item_port_cleanup(dev);
 	if (priv->rxqs != NULL) {
 		/* XXX race condition if mlx5_rx_burst() is still running. */
 		rte_delay_us_sleep(1000);
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 5000d2d4c5..89b4d66374 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -49,6 +49,9 @@
 #define MLX5_MAX_MODIFY_NUM 32
 #define MLX5_ROOT_TBL_MODIFY_NUM 16

+/* Maximal number of flex items created on the port.*/
+#define MLX5_PORT_FLEX_ITEM_NUM 4
+
 enum mlx5_ipool_index {
 #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H)
 	MLX5_IPOOL_DECAP_ENCAP = 0, /* Pool for encap/decap resource. */
@@ -1112,6 +1115,12 @@ struct mlx5_aso_ct_pools_mng {
 	struct mlx5_aso_sq aso_sq; /* ASO queue objects. */
 };

+/* Port flex item context. */
+struct mlx5_flex_item {
+	struct mlx5_flex_parser_devx *devx_fp; /* DevX flex parser object. */
+	uint32_t refcnt; /**< Atomically accessed refcnt by flows. */
+};
+
 /*
  * Shared Infiniband device context for Master/Representors
  * which belong to same IB device with multiple IB ports.
@@ -1448,6 +1457,10 @@ struct mlx5_priv {
 	uint32_t rss_shared_actions; /* RSS shared actions. */
 	struct mlx5_devx_obj *q_counters; /* DevX queue counter object. */
 	uint32_t counter_set_id; /* Queue counter ID to set in DevX objects. */
+	rte_spinlock_t flex_item_sl; /* Flex item list spinlock. */
+	struct mlx5_flex_item flex_item[MLX5_PORT_FLEX_ITEM_NUM];
+	/* Flex items have been created on the port. */
+	uint32_t flex_item_map; /* Map of allocated flex item elements.
*/ }; #define PORT_ID(priv) ((priv)->dev_data->port_id) @@ -1823,4 +1836,15 @@ int mlx5_aso_ct_query_by_wqe(struct mlx5_dev_ctx_shared *sh, int mlx5_aso_ct_available(struct mlx5_dev_ctx_shared *sh, struct mlx5_aso_ct_action *ct); +/* mlx5_flow_flex.c */ + +struct rte_flow_item_flex_handle * +flow_dv_item_create(struct rte_eth_dev *dev, + const struct rte_flow_item_flex_conf *conf, + struct rte_flow_error *error); +int flow_dv_item_release(struct rte_eth_dev *dev, + const struct rte_flow_item_flex_handle *flex_handle, + struct rte_flow_error *error); +int mlx5_flex_item_port_init(struct rte_eth_dev *dev); +void mlx5_flex_item_port_cleanup(struct rte_eth_dev *dev); #endif /* RTE_PMD_MLX5_H_ */ diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index c914a7120c..5224daed6c 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -718,6 +718,14 @@ mlx5_flow_tunnel_get_restore_info(struct rte_eth_dev *dev, struct rte_mbuf *m, struct rte_flow_restore_info *info, struct rte_flow_error *err); +static struct rte_flow_item_flex_handle * +mlx5_flow_flex_item_create(struct rte_eth_dev *dev, + const struct rte_flow_item_flex_conf *conf, + struct rte_flow_error *error); +static int +mlx5_flow_flex_item_release(struct rte_eth_dev *dev, + const struct rte_flow_item_flex_handle *handle, + struct rte_flow_error *error); static const struct rte_flow_ops mlx5_flow_ops = { .validate = mlx5_flow_validate, @@ -737,6 +745,8 @@ static const struct rte_flow_ops mlx5_flow_ops = { .tunnel_action_decap_release = mlx5_flow_tunnel_action_release, .tunnel_item_release = mlx5_flow_tunnel_item_release, .get_restore_info = mlx5_flow_tunnel_get_restore_info, + .flex_item_create = mlx5_flow_flex_item_create, + .flex_item_release = mlx5_flow_flex_item_release, }; /* Tunnel information. 
*/ @@ -9398,6 +9408,45 @@ mlx5_release_tunnel_hub(__rte_unused struct mlx5_dev_ctx_shared *sh, } #endif /* HAVE_IBV_FLOW_DV_SUPPORT */ +/* Flex flow item API */ +static struct rte_flow_item_flex_handle * +mlx5_flow_flex_item_create(struct rte_eth_dev *dev, + const struct rte_flow_item_flex_conf *conf, + struct rte_flow_error *error) +{ + static const char err_msg[] = "flex item creation unsupported"; + struct rte_flow_attr attr = { .transfer = 0 }; + const struct mlx5_flow_driver_ops *fops = + flow_get_drv_ops(flow_get_drv_type(dev, &attr)); + + if (!fops->item_create) { + DRV_LOG(ERR, "port %u %s.", dev->data->port_id, err_msg); + rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, + NULL, err_msg); + return NULL; + } + return fops->item_create(dev, conf, error); +} + +static int +mlx5_flow_flex_item_release(struct rte_eth_dev *dev, + const struct rte_flow_item_flex_handle *handle, + struct rte_flow_error *error) +{ + static const char err_msg[] = "flex item release unsupported"; + struct rte_flow_attr attr = { .transfer = 0 }; + const struct mlx5_flow_driver_ops *fops = + flow_get_drv_ops(flow_get_drv_type(dev, &attr)); + + if (!fops->item_release) { + DRV_LOG(ERR, "port %u %s.", dev->data->port_id, err_msg); + rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, + NULL, err_msg); + return -rte_errno; + } + return fops->item_release(dev, handle, error); +} + static void mlx5_dbg__print_pattern(const struct rte_flow_item *item) { diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 5c68d4f7d7..a8f8c49dd2 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1226,6 +1226,19 @@ typedef int (*mlx5_flow_create_def_policy_t) (struct rte_eth_dev *dev); typedef void (*mlx5_flow_destroy_def_policy_t) (struct rte_eth_dev *dev); +typedef struct rte_flow_item_flex_handle *(*mlx5_flow_item_create_t) + (struct rte_eth_dev *dev, + const struct rte_flow_item_flex_conf *conf, + struct rte_flow_error 
*error); +typedef int (*mlx5_flow_item_release_t) + (struct rte_eth_dev *dev, + const struct rte_flow_item_flex_handle *handle, + struct rte_flow_error *error); +typedef int (*mlx5_flow_item_update_t) + (struct rte_eth_dev *dev, + const struct rte_flow_item_flex_handle *handle, + const struct rte_flow_item_flex_conf *conf, + struct rte_flow_error *error); struct mlx5_flow_driver_ops { mlx5_flow_validate_t validate; @@ -1260,6 +1273,9 @@ struct mlx5_flow_driver_ops { mlx5_flow_action_update_t action_update; mlx5_flow_action_query_t action_query; mlx5_flow_sync_domain_t sync_domain; + mlx5_flow_item_create_t item_create; + mlx5_flow_item_release_t item_release; + mlx5_flow_item_update_t item_update; }; /* mlx5_flow.c */ @@ -1709,6 +1725,4 @@ const struct mlx5_flow_tunnel * mlx5_get_tof(const struct rte_flow_item *items, const struct rte_flow_action *actions, enum mlx5_tof_rule_type *rule_type); - - #endif /* RTE_PMD_MLX5_FLOW_H_ */ diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index fc676d3ee4..a3c35a5edf 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -18011,7 +18011,8 @@ const struct mlx5_flow_driver_ops mlx5_flow_dv_drv_ops = { .action_update = flow_dv_action_update, .action_query = flow_dv_action_query, .sync_domain = flow_dv_sync_domain, + .item_create = flow_dv_item_create, + .item_release = flow_dv_item_release, }; - #endif /* HAVE_IBV_FLOW_DV_SUPPORT */ diff --git a/drivers/net/mlx5/mlx5_flow_flex.c b/drivers/net/mlx5/mlx5_flow_flex.c new file mode 100644 index 0000000000..b7bc4af6fb --- /dev/null +++ b/drivers/net/mlx5/mlx5_flow_flex.c @@ -0,0 +1,189 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2021 NVIDIA Corporation & Affiliates + */ +#include +#include +#include +#include "mlx5.h" +#include "mlx5_flow.h" + +static_assert(sizeof(uint32_t) * CHAR_BIT >= MLX5_PORT_FLEX_ITEM_NUM, + "Flex item maximal number exceeds uint32_t bit width"); + +/** + * Routine called once on 
port initialization to init flex item + * related infrastructure initialization + * + * @param dev + * Ethernet device to perform flex item initialization + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +int +mlx5_flex_item_port_init(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + + rte_spinlock_init(&priv->flex_item_sl); + MLX5_ASSERT(!priv->flex_item_map); + return 0; +} + +/** + * Routine called once on port close to perform flex item + * related infrastructure cleanup. + * + * @param dev + * Ethernet device to perform cleanup + */ +void +mlx5_flex_item_port_cleanup(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + uint32_t i; + + for (i = 0; i < MLX5_PORT_FLEX_ITEM_NUM && priv->flex_item_map ; i++) { + if (priv->flex_item_map & (1 << i)) { + /* DevX object dereferencing should be provided here. */ + priv->flex_item_map &= ~(1 << i); + } + } +} + +static int +mlx5_flex_index(struct mlx5_priv *priv, struct mlx5_flex_item *item) +{ + uintptr_t start = (uintptr_t)&priv->flex_item[0]; + uintptr_t entry = (uintptr_t)item; + uintptr_t idx = (entry - start) / sizeof(struct mlx5_flex_item); + + if (entry < start || + idx >= MLX5_PORT_FLEX_ITEM_NUM || + (entry - start) % sizeof(struct mlx5_flex_item) || + !(priv->flex_item_map & (1u << idx))) + return -1; + return (int)idx; +} + +static struct mlx5_flex_item * +mlx5_flex_alloc(struct mlx5_priv *priv) +{ + struct mlx5_flex_item *item = NULL; + + rte_spinlock_lock(&priv->flex_item_sl); + if (~priv->flex_item_map) { + uint32_t idx = rte_bsf32(~priv->flex_item_map); + + if (idx < MLX5_PORT_FLEX_ITEM_NUM) { + item = &priv->flex_item[idx]; + MLX5_ASSERT(!item->refcnt); + MLX5_ASSERT(!item->devx_fp); + item->devx_fp = NULL; + __atomic_store_n(&item->refcnt, 0, __ATOMIC_RELEASE); + priv->flex_item_map |= 1u << idx; + } + } + rte_spinlock_unlock(&priv->flex_item_sl); + return item; +} + +static void +mlx5_flex_free(struct 
mlx5_priv *priv, struct mlx5_flex_item *item) +{ + int idx = mlx5_flex_index(priv, item); + + MLX5_ASSERT(idx >= 0 && + idx < MLX5_PORT_FLEX_ITEM_NUM && + (priv->flex_item_map & (1u << idx))); + if (idx >= 0) { + rte_spinlock_lock(&priv->flex_item_sl); + MLX5_ASSERT(!item->refcnt); + MLX5_ASSERT(!item->devx_fp); + item->devx_fp = NULL; + __atomic_store_n(&item->refcnt, 0, __ATOMIC_RELEASE); + priv->flex_item_map &= ~(1u << idx); + rte_spinlock_unlock(&priv->flex_item_sl); + } +} + +/** + * Create the flex item with specified configuration over the Ethernet device. + * + * @param dev + * Ethernet device to create flex item on. + * @param[in] conf + * Flex item configuration. + * @param[out] error + * Perform verbose error reporting if not NULL. PMDs initialize this + * structure in case of error only. + * + * @return + * Non-NULL opaque pointer on success, NULL otherwise and rte_errno is set. + */ +struct rte_flow_item_flex_handle * +flow_dv_item_create(struct rte_eth_dev *dev, + const struct rte_flow_item_flex_conf *conf, + struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_flex_item *flex; + + MLX5_ASSERT(rte_eal_process_type() == RTE_PROC_PRIMARY); + flex = mlx5_flex_alloc(priv); + if (!flex) { + rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "too many flex items created on the port"); + return NULL; + } + RTE_SET_USED(conf); + /* Mark initialized flex item valid. */ + __atomic_add_fetch(&flex->refcnt, 1, __ATOMIC_RELEASE); + return (struct rte_flow_item_flex_handle *)flex; +} + +/** + * Release the flex item on the specified Ethernet device. + * + * @param dev + * Ethernet device to destroy flex item on. + * @param[in] handle + * Handle of the item existing on the specified device. + * @param[out] error + * Perform verbose error reporting if not NULL. PMDs initialize this + * structure in case of error only. 
+ * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +int +flow_dv_item_release(struct rte_eth_dev *dev, + const struct rte_flow_item_flex_handle *handle, + struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_flex_item *flex = + (struct mlx5_flex_item *)(uintptr_t)handle; + uint32_t old_refcnt = 1; + + MLX5_ASSERT(rte_eal_process_type() == RTE_PROC_PRIMARY); + rte_spinlock_lock(&priv->flex_item_sl); + if (mlx5_flex_index(priv, flex) < 0) { + rte_spinlock_unlock(&priv->flex_item_sl); + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, NULL, + "invalid flex item handle value"); + } + if (!__atomic_compare_exchange_n(&flex->refcnt, &old_refcnt, 0, 0, + __ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) { + rte_spinlock_unlock(&priv->flex_item_sl); + return rte_flow_error_set(error, EBUSY, + RTE_FLOW_ERROR_TYPE_ITEM, NULL, + "flex item has flow references"); + } + /* Flex item is marked as invalid, we can leave locked section. 
*/ + rte_spinlock_unlock(&priv->flex_item_sl); + mlx5_flex_free(priv, flex); + return 0; +}

From patchwork Fri Oct 1 19:34:12 2021
X-Patchwork-Submitter: Slava Ovsiienko
X-Patchwork-Id: 100354
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Viacheslav Ovsiienko
Date: Fri, 1 Oct 2021 22:34:12 +0300
Message-ID: <20211001193415.23288-12-viacheslavo@nvidia.com>
In-Reply-To: <20211001193415.23288-1-viacheslavo@nvidia.com>
References: <20210922180418.20663-1-viacheslavo@nvidia.com> <20211001193415.23288-1-viacheslavo@nvidia.com>
Subject: [dpdk-dev] [PATCH v2 11/14] net/mlx5: add flex parser DevX
object management
X-BeenThere: dev@dpdk.org
List-Id: DPDK patches and discussions
Sender: "dev"

From: Gregory Etelson

The DevX flex parsers can be shared between representors within the
same IB context. We should put the flex parser objects into the shared
list and engage the standard mlx5_list_xxx API to manage them.

Signed-off-by: Gregory Etelson
---
 drivers/net/mlx5/linux/mlx5_os.c  |  10 +++
 drivers/net/mlx5/mlx5.c           |   4 +
 drivers/net/mlx5/mlx5.h           |  20 +++++
 drivers/net/mlx5/mlx5_flow_flex.c | 120 +++++++++++++++++++++++++++++-
 4 files changed, 153 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index cbbc152782..e4066d134b 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -384,6 +384,16 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv) flow_dv_dest_array_clone_free_cb); if (!sh->dest_array_list) goto error; + /* Init shared flex parsers list, no need lcore_share */ + snprintf(s, sizeof(s), "%s_flex_parsers_list", sh->ibdev_name); + sh->flex_parsers_dv = mlx5_list_create(s, sh, false, + mlx5_flex_parser_create_cb, + mlx5_flex_parser_match_cb, + mlx5_flex_parser_remove_cb, + mlx5_flex_parser_clone_cb, + mlx5_flex_parser_clone_free_cb); + if (!sh->flex_parsers_dv) + goto error; #endif #ifdef HAVE_MLX5DV_DR void *domain; diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index d902e00ea3..77fe073f5c 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1315,6 +1315,10 @@ mlx5_free_shared_dev_ctx(struct mlx5_dev_ctx_shared *sh) if (LIST_EMPTY(&mlx5_dev_ctx_list)) mlx5_flow_os_release_workspace(); pthread_mutex_unlock(&mlx5_dev_ctx_list_mutex); + if (sh->flex_parsers_dv) { + mlx5_list_destroy(sh->flex_parsers_dv); + sh->flex_parsers_dv = NULL; + } /* * Ensure there is no async event handler
installed. * Only primary process handles async device events. diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 89b4d66374..629ff6ebfe 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -1115,6 +1115,15 @@ struct mlx5_aso_ct_pools_mng { struct mlx5_aso_sq aso_sq; /* ASO queue objects. */ }; +/* DevX flex parser context. */ +struct mlx5_flex_parser_devx { + struct mlx5_list_entry entry; /* List element at the beginning. */ + uint32_t num_samples; + void *devx_obj; + struct mlx5_devx_graph_node_attr devx_conf; + uint32_t sample_ids[MLX5_GRAPH_NODE_SAMPLE_NUM]; +}; + /* Port flex item context. */ struct mlx5_flex_item { struct mlx5_flex_parser_devx *devx_fp; /* DevX flex parser object. */ @@ -1179,6 +1188,7 @@ struct mlx5_dev_ctx_shared { struct mlx5_list *push_vlan_action_list; /* Push VLAN actions. */ struct mlx5_list *sample_action_list; /* List of sample actions. */ struct mlx5_list *dest_array_list; + struct mlx5_list *flex_parsers_dv; /* Flex Item parsers. */ /* List of destination array actions. */ struct mlx5_flow_counter_mng cmng; /* Counters management structure. */ void *default_miss_action; /* Default miss action. */ @@ -1847,4 +1857,14 @@ int flow_dv_item_release(struct rte_eth_dev *dev, struct rte_flow_error *error); int mlx5_flex_item_port_init(struct rte_eth_dev *dev); void mlx5_flex_item_port_cleanup(struct rte_eth_dev *dev); +/* Flex parser list callbacks. 
*/ +struct mlx5_list_entry *mlx5_flex_parser_create_cb(void *list_ctx, void *ctx); +int mlx5_flex_parser_match_cb(void *list_ctx, + struct mlx5_list_entry *iter, void *ctx); +void mlx5_flex_parser_remove_cb(void *list_ctx, struct mlx5_list_entry *entry); +struct mlx5_list_entry *mlx5_flex_parser_clone_cb(void *list_ctx, + struct mlx5_list_entry *entry, + void *ctx); +void mlx5_flex_parser_clone_free_cb(void *tool_ctx, + struct mlx5_list_entry *entry); #endif /* RTE_PMD_MLX5_H_ */ diff --git a/drivers/net/mlx5/mlx5_flow_flex.c b/drivers/net/mlx5/mlx5_flow_flex.c index b7bc4af6fb..b8a091e259 100644 --- a/drivers/net/mlx5/mlx5_flow_flex.c +++ b/drivers/net/mlx5/mlx5_flow_flex.c @@ -45,7 +45,13 @@ mlx5_flex_item_port_cleanup(struct rte_eth_dev *dev) for (i = 0; i < MLX5_PORT_FLEX_ITEM_NUM && priv->flex_item_map ; i++) { if (priv->flex_item_map & (1 << i)) { - /* DevX object dereferencing should be provided here. */ + struct mlx5_flex_item *flex = &priv->flex_item[i]; + + claim_zero(mlx5_list_unregister + (priv->sh->flex_parsers_dv, + &flex->devx_fp->entry)); + flex->devx_fp = NULL; + flex->refcnt = 0; priv->flex_item_map &= ~(1 << i); } } @@ -127,7 +133,9 @@ flow_dv_item_create(struct rte_eth_dev *dev, struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_flex_parser_devx devx_config = { .devx_obj = NULL }; struct mlx5_flex_item *flex; + struct mlx5_list_entry *ent; MLX5_ASSERT(rte_eal_process_type() == RTE_PROC_PRIMARY); flex = mlx5_flex_alloc(priv); @@ -137,10 +145,22 @@ flow_dv_item_create(struct rte_eth_dev *dev, "too many flex items created on the port"); return NULL; } + ent = mlx5_list_register(priv->sh->flex_parsers_dv, &devx_config); + if (!ent) { + rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "flex item creation failure"); + goto error; + } + flex->devx_fp = container_of(ent, struct mlx5_flex_parser_devx, entry); RTE_SET_USED(conf); /* Mark initialized flex item valid. 
*/ __atomic_add_fetch(&flex->refcnt, 1, __ATOMIC_RELEASE); return (struct rte_flow_item_flex_handle *)flex; + +error: + mlx5_flex_free(priv, flex); + return NULL; } /** @@ -166,6 +186,7 @@ flow_dv_item_release(struct rte_eth_dev *dev, struct mlx5_flex_item *flex = (struct mlx5_flex_item *)(uintptr_t)handle; uint32_t old_refcnt = 1; + int rc; MLX5_ASSERT(rte_eal_process_type() == RTE_PROC_PRIMARY); rte_spinlock_lock(&priv->flex_item_sl); @@ -184,6 +205,103 @@ flow_dv_item_release(struct rte_eth_dev *dev, } /* Flex item is marked as invalid, we can leave locked section. */ rte_spinlock_unlock(&priv->flex_item_sl); + MLX5_ASSERT(flex->devx_fp); + rc = mlx5_list_unregister(priv->sh->flex_parsers_dv, + &flex->devx_fp->entry); + flex->devx_fp = NULL; mlx5_flex_free(priv, flex); + if (rc) + return rte_flow_error_set(error, rc, + RTE_FLOW_ERROR_TYPE_ITEM, NULL, + "flex item release failure"); return 0; } + +/* DevX flex parser list callbacks. */ +struct mlx5_list_entry * +mlx5_flex_parser_create_cb(void *list_ctx, void *ctx) +{ + struct mlx5_dev_ctx_shared *sh = list_ctx; + struct mlx5_flex_parser_devx *fp, *conf = ctx; + int ret; + + fp = mlx5_malloc(MLX5_MEM_ZERO, sizeof(struct mlx5_flex_parser_devx), + 0, SOCKET_ID_ANY); + if (!fp) + return NULL; + /* Copy the requested configurations. */ + fp->num_samples = conf->num_samples; + memcpy(&fp->devx_conf, &conf->devx_conf, sizeof(fp->devx_conf)); + /* Create DevX flex parser. */ + fp->devx_obj = mlx5_devx_cmd_create_flex_parser(sh->ctx, + &fp->devx_conf); + if (!fp->devx_obj) + goto error; + /* Query the firmware assigned sample ids.
*/ + ret = mlx5_devx_cmd_query_parse_samples(fp->devx_obj, + fp->sample_ids, + fp->num_samples); + if (ret) + goto error; + DRV_LOG(DEBUG, "DEVx flex parser %p created, samples num: %u\n", + (const void *)fp, fp->num_samples); + return &fp->entry; +error: + if (fp->devx_obj) + mlx5_devx_cmd_destroy((void *)(uintptr_t)fp->devx_obj); + if (fp) + mlx5_free(fp); + return NULL; +} + +int +mlx5_flex_parser_match_cb(void *list_ctx, + struct mlx5_list_entry *iter, void *ctx) +{ + struct mlx5_flex_parser_devx *fp = + container_of(iter, struct mlx5_flex_parser_devx, entry); + struct mlx5_flex_parser_devx *org = + container_of(ctx, struct mlx5_flex_parser_devx, entry); + + RTE_SET_USED(list_ctx); + return !iter || !ctx || memcmp(&fp->devx_conf, + &org->devx_conf, + sizeof(fp->devx_conf)); +} + +void +mlx5_flex_parser_remove_cb(void *list_ctx, struct mlx5_list_entry *entry) +{ + struct mlx5_flex_parser_devx *fp = + container_of(entry, struct mlx5_flex_parser_devx, entry); + + RTE_SET_USED(list_ctx); + MLX5_ASSERT(fp->devx_obj); + claim_zero(mlx5_devx_cmd_destroy(fp->devx_obj)); + mlx5_free(entry); +} + +struct mlx5_list_entry * +mlx5_flex_parser_clone_cb(void *list_ctx, + struct mlx5_list_entry *entry, void *ctx) +{ + struct mlx5_flex_parser_devx *fp = + container_of(entry, struct mlx5_flex_parser_devx, entry); + + RTE_SET_USED(list_ctx); + fp = mlx5_malloc(0, sizeof(struct mlx5_flex_parser_devx), + 0, SOCKET_ID_ANY); + if (!fp) + return NULL; + memcpy(fp, ctx, sizeof(struct mlx5_flex_parser_devx)); + return &fp->entry; +} + +void +mlx5_flex_parser_clone_free_cb(void *list_ctx, struct mlx5_list_entry *entry) +{ + struct mlx5_flex_parser_devx *fp = + container_of(entry, struct mlx5_flex_parser_devx, entry); + RTE_SET_USED(list_ctx); + mlx5_free(fp); +} From patchwork Fri Oct 1 19:34:13 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Slava Ovsiienko X-Patchwork-Id: 100355 X-Patchwork-Delegate: 
ferruh.yigit@amd.com
From: Viacheslav Ovsiienko
Date: Fri, 1 Oct 2021 22:34:13 +0300
Message-ID: <20211001193415.23288-13-viacheslavo@nvidia.com>
In-Reply-To: <20211001193415.23288-1-viacheslavo@nvidia.com>
References: <20210922180418.20663-1-viacheslavo@nvidia.com> <20211001193415.23288-1-viacheslavo@nvidia.com>
Subject: [dpdk-dev] [PATCH v2 12/14] net/mlx5: translate flex item configuration
List-Id: DPDK patches and discussions

The RTE flow flex item configuration should be translated into actual
hardware settings:

  - translate header length and next protocol field samplings
  - translate data field sampling; similar fields with the same mode and
    matching related parameters are relocated and grouped to be covered
    with the minimal number of hardware sampling registers (each register
    can cover an arbitrary neighbouring 32 bits, aligned to a byte
    boundary, in the packet, so fields with smaller widths or segments of
    bigger fields can be combined)
  - translate input and output links
  - prepare data for parsing the flex item pattern on flow creation

Signed-off-by: Viacheslav Ovsiienko
---
 drivers/net/mlx5/mlx5.h           |  16 +-
 drivers/net/mlx5/mlx5_flow_flex.c | 748 +++++++++++++++++++++++++++++-
 2 files changed, 762 insertions(+), 2 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 629ff6ebfe..d4fa946485 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -52,6 +52,9 @@
 /* Maximal number of flex items created on the port.*/
 #define MLX5_PORT_FLEX_ITEM_NUM		4
 
+/* Maximal number of fields/field parts to map into sample registers. */
+#define MLX5_FLEX_ITEM_MAPPING_NUM	32
+
 enum mlx5_ipool_index {
 #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H)
 	MLX5_IPOOL_DECAP_ENCAP = 0, /* Pool for encap/decap resource. */
@@ -1124,10 +1127,21 @@ struct mlx5_flex_parser_devx {
 	uint32_t sample_ids[MLX5_GRAPH_NODE_SAMPLE_NUM];
 };
 
+/* Pattern field descriptor - how to translate flex pattern into samples. */
+__extension__
+struct mlx5_flex_pattern_field {
+	uint16_t width:6;
+	uint16_t shift:5;
+	uint16_t reg_id:5;
+};
+
 /* Port flex item context. */
 struct mlx5_flex_item {
 	struct mlx5_flex_parser_devx *devx_fp; /* DevX flex parser object. */
-	uint32_t refcnt; /**< Atomically accessed refcnt by flows. */
+	uint32_t refcnt; /* Atomically accessed refcnt by flows. */
+	uint32_t tunnel:1; /* Flex item presents tunnel protocol. */
+	uint32_t mapnum; /* Number of pattern translation entries. */
+	struct mlx5_flex_pattern_field map[MLX5_FLEX_ITEM_MAPPING_NUM];
 };
 
 /*
diff --git a/drivers/net/mlx5/mlx5_flow_flex.c b/drivers/net/mlx5/mlx5_flow_flex.c
index b8a091e259..56b91da839 100644
--- a/drivers/net/mlx5/mlx5_flow_flex.c
+++ b/drivers/net/mlx5/mlx5_flow_flex.c
@@ -113,6 +113,750 @@ mlx5_flex_free(struct mlx5_priv *priv, struct mlx5_flex_item *item)
 	}
 }
 
+static int
+mlx5_flex_translate_length(struct mlx5_hca_flex_attr *attr,
+			   const struct rte_flow_item_flex_conf *conf,
+			   struct mlx5_flex_parser_devx *devx,
+			   struct rte_flow_error *error)
+{
+	const struct rte_flow_item_flex_field *field = &conf->next_header;
+	struct mlx5_devx_graph_node_attr *node = &devx->devx_conf;
+	uint32_t len_width;
+
+	if (field->field_base % CHAR_BIT)
+		return rte_flow_error_set
+			(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+			 "not byte aligned header length field");
+	switch (field->field_mode) {
+	case FIELD_MODE_DUMMY:
+		return rte_flow_error_set
+			(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+			 "invalid header length field mode (DUMMY)");
+	case FIELD_MODE_FIXED:
+		if (!(attr->header_length_mode &
+		    RTE_BIT32(MLX5_GRAPH_NODE_LEN_FIXED)))
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "unsupported header length field mode (FIXED)");
+		if (attr->header_length_mask_width < field->field_size)
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "header length field width exceeds limit");
+		if (field->offset_shift < 0 ||
+		    field->offset_shift > attr->header_length_mask_width)
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "invalid header length field shift (FIXED)");
+		if (field->field_base < 0)
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "negative header length field base (FIXED)");
+		node->header_length_mode = MLX5_GRAPH_NODE_LEN_FIXED;
+		break;
+	case FIELD_MODE_OFFSET:
+		if (!(attr->header_length_mode &
+		    RTE_BIT32(MLX5_GRAPH_NODE_LEN_FIELD)))
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "unsupported header length field mode (OFFSET)");
+		node->header_length_mode = MLX5_GRAPH_NODE_LEN_FIELD;
+		if (field->offset_mask == 0 ||
+		    !rte_is_power_of_2(field->offset_mask + 1))
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "invalid length field offset mask (OFFSET)");
+		len_width = rte_fls_u32(field->offset_mask);
+		if (len_width > attr->header_length_mask_width)
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "length field offset mask too wide (OFFSET)");
+		node->header_length_field_mask = field->offset_mask;
+		break;
+	case FIELD_MODE_BITMASK:
+		if (!(attr->header_length_mode &
+		    RTE_BIT32(MLX5_GRAPH_NODE_LEN_BITMASK)))
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "unsupported header length field mode (BITMASK)");
+		if (attr->header_length_mask_width < field->field_size)
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "header length field width exceeds limit");
+		node->header_length_mode = MLX5_GRAPH_NODE_LEN_BITMASK;
+		node->header_length_field_mask = field->offset_mask;
+		break;
+	default:
+		return rte_flow_error_set
+			(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+			 "unknown header length field mode");
+	}
+	if (field->field_base / CHAR_BIT >= 0 &&
+	    field->field_base / CHAR_BIT > attr->max_base_header_length)
+		return rte_flow_error_set
+			(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+			 "header length field base exceeds limit");
+	node->header_length_base_value = field->field_base / CHAR_BIT;
+	if (field->field_mode == FIELD_MODE_OFFSET ||
+	    field->field_mode == FIELD_MODE_BITMASK) {
+		if (field->offset_shift > 15 || field->offset_shift < 0)
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "header length field shift exceeds limit");
+		node->header_length_field_shift = field->offset_shift;
+		node->header_length_field_offset = field->offset_base;
+	}
+	return 0;
+}
+
+static int
+mlx5_flex_translate_next(struct mlx5_hca_flex_attr *attr,
+			 const struct rte_flow_item_flex_conf *conf,
+			 struct mlx5_flex_parser_devx *devx,
+			 struct rte_flow_error *error)
+{
+	const struct rte_flow_item_flex_field *field = &conf->next_protocol;
+	struct mlx5_devx_graph_node_attr *node = &devx->devx_conf;
+
+	switch (field->field_mode) {
+	case FIELD_MODE_DUMMY:
+		if (conf->output_num)
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "next protocol field is required (DUMMY)");
+		return 0;
+	case FIELD_MODE_FIXED:
+		break;
+	case FIELD_MODE_OFFSET:
+		return rte_flow_error_set
+			(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+			 "unsupported next protocol field mode (OFFSET)");
+	case FIELD_MODE_BITMASK:
+		return rte_flow_error_set
+			(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+			 "unsupported next protocol field mode (BITMASK)");
+	default:
+		return rte_flow_error_set
+			(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+			 "unknown next protocol field mode");
+	}
+	MLX5_ASSERT(field->field_mode == FIELD_MODE_FIXED);
+	if (attr->max_next_header_offset < field->field_base)
+		return rte_flow_error_set
+			(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+			 "next protocol field base exceeds limit");
+	if (field->offset_shift)
+		return rte_flow_error_set
+			(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+			 "unsupported next protocol field shift");
+	node->next_header_field_offset = field->field_base;
+	node->next_header_field_size = field->field_size;
+	return 0;
+}
+
+/* Helper structure to handle field bit intervals. */
+struct mlx5_flex_field_cover {
+	uint16_t num;
+	int32_t start[MLX5_FLEX_ITEM_MAPPING_NUM];
+	int32_t end[MLX5_FLEX_ITEM_MAPPING_NUM];
+	uint8_t mapped[MLX5_FLEX_ITEM_MAPPING_NUM / CHAR_BIT + 1];
+};
+
+static void
+mlx5_flex_insert_field(struct mlx5_flex_field_cover *cover,
+		       uint16_t num, int32_t start, int32_t end)
+{
+	MLX5_ASSERT(num < MLX5_FLEX_ITEM_MAPPING_NUM);
+	MLX5_ASSERT(num <= cover->num);
+	if (num < cover->num) {
+		memmove(&cover->start[num + 1], &cover->start[num],
+			(cover->num - num) * sizeof(int32_t));
+		memmove(&cover->end[num + 1], &cover->end[num],
+			(cover->num - num) * sizeof(int32_t));
+	}
+	cover->start[num] = start;
+	cover->end[num] = end;
+	cover->num++;
+}
+
+static void
+mlx5_flex_merge_field(struct mlx5_flex_field_cover *cover, uint16_t num)
+{
+	uint32_t i, del = 0;
+	int32_t end;
+
+	MLX5_ASSERT(num < MLX5_FLEX_ITEM_MAPPING_NUM);
+	MLX5_ASSERT(num < (cover->num - 1));
+	end = cover->end[num];
+	for (i = num + 1; i < cover->num; i++) {
+		if (end < cover->start[i])
+			break;
+		del++;
+		if (end <= cover->end[i]) {
+			cover->end[num] = cover->end[i];
+			break;
+		}
+	}
+	if (del) {
+		MLX5_ASSERT(del < (cover->num - 1u - num));
+		cover->num -= del;
+		MLX5_ASSERT(cover->num > num);
+		if ((cover->num - num) > 1) {
+			memmove(&cover->start[num + 1],
+				&cover->start[num + 1 + del],
+				(cover->num - num - 1) * sizeof(int32_t));
+			memmove(&cover->end[num + 1],
+				&cover->end[num + 1 + del],
+				(cover->num - num - 1) * sizeof(int32_t));
+		}
+	}
+}
+
+/*
+ * Validate the sample field and update the interval array
+ * if the field parameters match the 'match' field.
+ * Returns:
+ *   < 0  - error
+ *   == 0 - no match, interval array not updated
+ *   > 0  - match, interval array updated
+ */
+static int
+mlx5_flex_cover_sample(struct mlx5_flex_field_cover *cover,
+		       struct rte_flow_item_flex_field *field,
+		       struct rte_flow_item_flex_field *match,
+		       struct mlx5_hca_flex_attr *attr,
+		       struct rte_flow_error *error)
+{
+	int32_t start, end;
+	uint32_t i;
+
+	switch (field->field_mode) {
+	case FIELD_MODE_DUMMY:
+		return 0;
+	case FIELD_MODE_FIXED:
+		if (!(attr->sample_offset_mode &
+		    RTE_BIT32(MLX5_GRAPH_SAMPLE_OFFSET_FIXED)))
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "unsupported sample field mode (FIXED)");
+		if (field->offset_shift)
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "invalid sample field shift (FIXED)");
+		if (field->field_base < 0)
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "invalid sample field base (FIXED)");
+		if (field->field_base / CHAR_BIT > attr->max_sample_base_offset)
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "sample field base exceeds limit (FIXED)");
+		break;
+	case FIELD_MODE_OFFSET:
+		if (!(attr->sample_offset_mode &
+		    RTE_BIT32(MLX5_GRAPH_SAMPLE_OFFSET_FIELD)))
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "unsupported sample field mode (OFFSET)");
+		if (field->field_base / CHAR_BIT >= 0 &&
+		    field->field_base / CHAR_BIT > attr->max_sample_base_offset)
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "sample field base exceeds limit");
+		break;
+	case FIELD_MODE_BITMASK:
+		if (!(attr->sample_offset_mode &
+		    RTE_BIT32(MLX5_GRAPH_SAMPLE_OFFSET_BITMASK)))
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "unsupported sample field mode (BITMASK)");
+		if (field->field_base / CHAR_BIT >= 0 &&
+		    field->field_base / CHAR_BIT > attr->max_sample_base_offset)
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "sample field base exceeds limit");
+		break;
+	default:
+		return rte_flow_error_set
+			(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+			 "unknown data sample field mode");
+	}
+	if (!match) {
+		if (!field->field_size)
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "zero sample field width");
+		if (field->rss_hash)
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "unsupported RSS hash over flex item fields");
+		if (field->tunnel_count != FLEX_TUNNEL_MODE_FIRST &&
+		    field->tunnel_count != FLEX_TUNNEL_MODE_OUTER &&
+		    field->tunnel_count != FLEX_TUNNEL_MODE_INNER)
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "unsupported sample field tunnel mode");
+		if (field->field_id)
+			DRV_LOG(DEBUG, "sample field id hint ignored\n");
+	} else {
+		if (field->field_mode != match->field_mode ||
+		    field->rss_hash != match->rss_hash ||
+		    field->tunnel_count != match->tunnel_count ||
+		    field->offset_base != match->offset_base ||
+		    field->offset_mask != match->offset_mask ||
+		    field->offset_shift != match->offset_shift)
+			return 0;
+	}
+	start = field->field_base;
+	end = start + field->field_size;
+	/* Add the new or similar field to the interval array. */
+	if (!cover->num) {
+		cover->start[cover->num] = start;
+		cover->end[cover->num] = end;
+		cover->num = 1;
+		return 1;
+	}
+	for (i = 0; i < cover->num; i++) {
+		if (start > cover->end[i]) {
+			if (i >= (cover->num - 1u)) {
+				mlx5_flex_insert_field(cover, cover->num,
+						       start, end);
+				break;
+			}
+			continue;
+		}
+		if (end < cover->start[i]) {
+			mlx5_flex_insert_field(cover, i, start, end);
+			break;
+		}
+		if (start < cover->start[i])
+			cover->start[i] = start;
+		if (end > cover->end[i]) {
+			cover->end[i] = end;
+			if (i < (cover->num - 1u))
+				mlx5_flex_merge_field(cover, i);
+		}
+		break;
+	}
+	return 1;
+}
+
+static void
+mlx5_flex_config_sample(struct mlx5_devx_match_sample_attr *na,
+			struct rte_flow_item_flex_field *field)
+{
+	memset(na, 0, sizeof(struct mlx5_devx_match_sample_attr));
+	na->flow_match_sample_en = 1;
+	switch (field->field_mode) {
+	case FIELD_MODE_FIXED:
+		na->flow_match_sample_offset_mode =
+			MLX5_GRAPH_SAMPLE_OFFSET_FIXED;
+		break;
+	case FIELD_MODE_OFFSET:
+		na->flow_match_sample_offset_mode =
+			MLX5_GRAPH_SAMPLE_OFFSET_FIELD;
+		na->flow_match_sample_field_offset = field->offset_base;
+		na->flow_match_sample_field_offset_mask = field->offset_mask;
+		na->flow_match_sample_field_offset_shift = field->offset_shift;
+		break;
+	case FIELD_MODE_BITMASK:
+		na->flow_match_sample_offset_mode =
+			MLX5_GRAPH_SAMPLE_OFFSET_BITMASK;
+		na->flow_match_sample_field_offset = field->offset_base;
+		na->flow_match_sample_field_offset_mask = field->offset_mask;
+		na->flow_match_sample_field_offset_shift = field->offset_shift;
+		break;
+	default:
+		MLX5_ASSERT(false);
+		break;
+	}
+	switch (field->tunnel_count) {
+	case FLEX_TUNNEL_MODE_FIRST:
+		na->flow_match_sample_tunnel_mode =
+			MLX5_GRAPH_SAMPLE_TUNNEL_FIRST;
+		break;
+	case FLEX_TUNNEL_MODE_OUTER:
+		na->flow_match_sample_tunnel_mode =
+			MLX5_GRAPH_SAMPLE_TUNNEL_OUTER;
+		break;
+	case FLEX_TUNNEL_MODE_INNER:
+		na->flow_match_sample_tunnel_mode =
+			MLX5_GRAPH_SAMPLE_TUNNEL_INNER;
+		break;
+	default:
+		MLX5_ASSERT(false);
+		break;
+	}
+}
+
+/* Map the specified field to a set/subset of allocated sample registers. */
+static int
+mlx5_flex_map_sample(struct rte_flow_item_flex_field *field,
+		     struct mlx5_flex_parser_devx *parser,
+		     struct mlx5_flex_item *item,
+		     struct rte_flow_error *error)
+{
+	struct mlx5_devx_match_sample_attr node;
+	int32_t start = field->field_base;
+	int32_t end = start + field->field_size;
+	uint32_t i, done_bits = 0;
+
+	mlx5_flex_config_sample(&node, field);
+	for (i = 0; i < parser->num_samples; i++) {
+		struct mlx5_devx_match_sample_attr *sample =
+			&parser->devx_conf.sample[i];
+		int32_t reg_start, reg_end;
+		int32_t cov_start, cov_end;
+		struct mlx5_flex_pattern_field *trans;
+
+		MLX5_ASSERT(sample->flow_match_sample_en);
+		if (!sample->flow_match_sample_en)
+			break;
+		node.flow_match_sample_field_base_offset =
+			sample->flow_match_sample_field_base_offset;
+		if (memcmp(&node, sample, sizeof(node)))
+			continue;
+		reg_start = (int8_t)sample->flow_match_sample_field_base_offset;
+		reg_start *= CHAR_BIT;
+		reg_end = reg_start + 32;
+		if (end <= reg_start || start >= reg_end)
+			continue;
+		cov_start = RTE_MAX(reg_start, start);
+		cov_end = RTE_MIN(reg_end, end);
+		MLX5_ASSERT(cov_end > cov_start);
+		done_bits += cov_end - cov_start;
+		if (item->mapnum >= MLX5_FLEX_ITEM_MAPPING_NUM)
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "too many flex item pattern translations");
+		trans = &item->map[item->mapnum];
+		item->mapnum++;
+		trans->reg_id = i;
+		trans->shift = cov_start - reg_start;
+		trans->width = cov_end - cov_start;
+	}
+	if (done_bits != field->field_size) {
+		MLX5_ASSERT(false);
+		return rte_flow_error_set
+			(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+			 "failed to map field to sample register");
+	}
+	return 0;
+}
+
+/* Allocate sample registers for the specified field type and interval array. */
+static int
+mlx5_flex_alloc_sample(struct mlx5_flex_field_cover *cover,
+		       struct mlx5_flex_parser_devx *parser,
+		       struct rte_flow_item_flex_field *field,
+		       struct mlx5_hca_flex_attr *attr,
+		       struct rte_flow_error *error)
+{
+	struct mlx5_devx_match_sample_attr node;
+	uint32_t idx = 0;
+
+	mlx5_flex_config_sample(&node, field);
+	while (idx < cover->num) {
+		int32_t start, end;
+
+		/* Sample base offsets are in bytes, should align. */
+		start = RTE_ALIGN_FLOOR(cover->start[idx], CHAR_BIT);
+		node.flow_match_sample_field_base_offset =
+			(start / CHAR_BIT) & 0xFF;
+		/* Allocate sample register. */
+		if (parser->num_samples >= MLX5_GRAPH_NODE_SAMPLE_NUM ||
+		    parser->num_samples >= attr->max_num_sample ||
+		    parser->num_samples >= attr->max_num_prog_sample)
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "no sample registers to handle all flex item fields");
+		parser->devx_conf.sample[parser->num_samples] = node;
+		parser->num_samples++;
+		/* Remove or update covered intervals. */
+		end = start + 32;
+		while (idx < cover->num) {
+			if (end >= cover->end[idx]) {
+				idx++;
+				continue;
+			}
+			if (end > cover->start[idx])
+				cover->start[idx] = end;
+			break;
+		}
+	}
+	return 0;
+}
+
+static int
+mlx5_flex_translate_sample(struct mlx5_hca_flex_attr *attr,
+			   const struct rte_flow_item_flex_conf *conf,
+			   struct mlx5_flex_parser_devx *parser,
+			   struct mlx5_flex_item *item,
+			   struct rte_flow_error *error)
+{
+	struct mlx5_flex_field_cover cover;
+	uint32_t i, j;
+	int ret;
+
+	if (conf->sample_num > MLX5_FLEX_ITEM_MAPPING_NUM)
+		return rte_flow_error_set
+			(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+			 "sample field number exceeds limit");
+	/*
+	 * The application can specify fields smaller or bigger than 32 bits
+	 * covered with a single sample register, and it can specify field
+	 * offsets in any order.
+	 *
+	 * Gather all similar fields together, build an array of bit intervals
+	 * in ascending order and try to cover them with the smallest set of
+	 * sample registers.
+	 */
+	memset(&cover, 0, sizeof(cover));
+	for (i = 0; i < conf->sample_num; i++) {
+		struct rte_flow_item_flex_field *fl = conf->sample_data + i;
+
+		/* Check whether the field was covered in a previous iteration. */
+		if (cover.mapped[i / CHAR_BIT] & (1u << (i % CHAR_BIT)))
+			continue;
+		if (fl->field_mode == FIELD_MODE_DUMMY)
+			continue;
+		/* Build an interval array for the field and similar ones. */
+		cover.num = 0;
+		/* Add the first field to the array unconditionally. */
+		ret = mlx5_flex_cover_sample(&cover, fl, NULL, attr, error);
+		if (ret < 0)
+			return ret;
+		MLX5_ASSERT(ret > 0);
+		cover.mapped[i / CHAR_BIT] |= 1u << (i % CHAR_BIT);
+		for (j = i + 1; j < conf->sample_num; j++) {
+			struct rte_flow_item_flex_field *ft;
+
+			/* Add the field to the array if its type matches. */
+			ft = conf->sample_data + j;
+			ret = mlx5_flex_cover_sample(&cover, ft, fl,
+						     attr, error);
+			if (ret < 0)
+				return ret;
+			if (!ret)
+				continue;
+			cover.mapped[j / CHAR_BIT] |= 1u << (j % CHAR_BIT);
+		}
+		/* Allocate sample registers to cover the array of intervals. */
+		ret = mlx5_flex_alloc_sample(&cover, parser, fl, attr, error);
+		if (ret)
+			return ret;
+	}
+	/* Build the item pattern translation data for flow creation. */
+	item->mapnum = 0;
+	memset(&item->map, 0, sizeof(item->map));
+	for (i = 0; i < conf->sample_num; i++) {
+		struct rte_flow_item_flex_field *fl = conf->sample_data + i;
+
+		ret = mlx5_flex_map_sample(fl, parser, item, error);
+		if (ret) {
+			MLX5_ASSERT(false);
+			return ret;
+		}
+	}
+	return 0;
+}
+
+static int
+mlx5_flex_arc_type(enum rte_flow_item_type type, int in)
+{
+	switch (type) {
+	case RTE_FLOW_ITEM_TYPE_ETH:
+		return MLX5_GRAPH_ARC_NODE_MAC;
+	case RTE_FLOW_ITEM_TYPE_IPV4:
+		return in ? MLX5_GRAPH_ARC_NODE_IP : MLX5_GRAPH_ARC_NODE_IPV4;
+	case RTE_FLOW_ITEM_TYPE_IPV6:
+		return in ? MLX5_GRAPH_ARC_NODE_IP : MLX5_GRAPH_ARC_NODE_IPV6;
+	case RTE_FLOW_ITEM_TYPE_UDP:
+		return MLX5_GRAPH_ARC_NODE_UDP;
+	case RTE_FLOW_ITEM_TYPE_TCP:
+		return MLX5_GRAPH_ARC_NODE_TCP;
+	case RTE_FLOW_ITEM_TYPE_MPLS:
+		return MLX5_GRAPH_ARC_NODE_MPLS;
+	case RTE_FLOW_ITEM_TYPE_GRE:
+		return MLX5_GRAPH_ARC_NODE_GRE;
+	case RTE_FLOW_ITEM_TYPE_GENEVE:
+		return MLX5_GRAPH_ARC_NODE_GENEVE;
+	case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
+		return MLX5_GRAPH_ARC_NODE_VXLAN_GPE;
+	default:
+		return -EINVAL;
+	}
+}
+
+static int
+mlx5_flex_arc_in_eth(const struct rte_flow_item *item,
+		     struct rte_flow_error *error)
+{
+	const struct rte_flow_item_eth *spec = item->spec;
+	const struct rte_flow_item_eth *mask = item->mask;
+	struct rte_flow_item_eth eth = { .hdr.ether_type = RTE_BE16(0xFFFF) };
+
+	if (memcmp(mask, &eth, sizeof(struct rte_flow_item_eth))) {
+		return rte_flow_error_set
+			(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item,
+			 "invalid eth item mask");
+	}
+	return rte_be_to_cpu_16(spec->hdr.ether_type);
+}
+
+static int
+mlx5_flex_arc_in_udp(const struct rte_flow_item *item,
+		     struct rte_flow_error *error)
+{
+	const struct rte_flow_item_udp *spec = item->spec;
+	const struct rte_flow_item_udp *mask = item->mask;
+	struct rte_flow_item_udp udp = { .hdr.dst_port = RTE_BE16(0xFFFF) };
+
+	if (memcmp(mask, &udp, sizeof(struct rte_flow_item_udp))) {
+		return rte_flow_error_set
+			(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item,
+			 "invalid udp item mask");
+	}
+	return rte_be_to_cpu_16(spec->hdr.dst_port);
+}
+
+static int
+mlx5_flex_translate_arc_in(struct mlx5_hca_flex_attr *attr,
+			   const struct rte_flow_item_flex_conf *conf,
+			   struct mlx5_flex_parser_devx *devx,
+			   struct mlx5_flex_item *item,
+			   struct rte_flow_error *error)
+{
+	struct mlx5_devx_graph_node_attr *node = &devx->devx_conf;
+	uint32_t i;
+
+	RTE_SET_USED(item);
+	if (conf->input_num > attr->max_num_arc_in)
+		return rte_flow_error_set
+			(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+			 "too many input links");
+	for (i = 0; i < conf->input_num; i++) {
+		struct mlx5_devx_graph_arc_attr *arc = node->in + i;
+		struct rte_flow_item_flex_link *link = conf->input_link + i;
+		const struct rte_flow_item *rte_item = &link->item;
+		int arc_type;
+		int ret;
+
+		if (!rte_item->spec || !rte_item->mask || rte_item->last)
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "invalid flex item IN arc format");
+		arc_type = mlx5_flex_arc_type(rte_item->type, true);
+		if (arc_type < 0 || !(attr->node_in & RTE_BIT32(arc_type)))
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "unsupported flex item IN arc type");
+		arc->arc_parse_graph_node = arc_type;
+		arc->start_inner_tunnel = link->tunnel ? 1 : 0;
+		/*
+		 * Configure the arc IN condition value. The value location
+		 * depends on the protocol. The current FW version supports
+		 * ETH & UDP for IN arcs only, and the locations for these
+		 * protocols are defined. Add more protocols when available.
+		 */
+		switch (rte_item->type) {
+		case RTE_FLOW_ITEM_TYPE_ETH:
+			ret = mlx5_flex_arc_in_eth(rte_item, error);
+			break;
+		case RTE_FLOW_ITEM_TYPE_UDP:
+			ret = mlx5_flex_arc_in_udp(rte_item, error);
+			break;
+		default:
+			MLX5_ASSERT(false);
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "unsupported flex item IN arc type");
+		}
+		if (ret < 0)
+			return ret;
+		arc->compare_condition_value = (uint16_t)ret;
+	}
+	return 0;
+}
+
+static int
+mlx5_flex_translate_arc_out(struct mlx5_hca_flex_attr *attr,
+			    const struct rte_flow_item_flex_conf *conf,
+			    struct mlx5_flex_parser_devx *devx,
+			    struct mlx5_flex_item *item,
+			    struct rte_flow_error *error)
+{
+	struct mlx5_devx_graph_node_attr *node = &devx->devx_conf;
+	uint32_t i;
+
+	if (conf->output_num > attr->max_num_arc_out)
+		return rte_flow_error_set
+			(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+			 "too many output links");
+	for (i = 0; i < conf->output_num; i++) {
+		struct mlx5_devx_graph_arc_attr *arc = node->out + i;
+		struct rte_flow_item_flex_link *link = conf->output_link + i;
+		const struct rte_flow_item *rte_item = &link->item;
+		int arc_type;
+
+		if (rte_item->spec || rte_item->mask || rte_item->last)
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "flex node: invalid OUT arc format");
+		arc_type = mlx5_flex_arc_type(rte_item->type, false);
+		if (arc_type < 0 || !(attr->node_out & RTE_BIT32(arc_type)))
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "unsupported flex item OUT arc type");
+		arc->arc_parse_graph_node = arc_type;
+		arc->start_inner_tunnel = link->tunnel ? 1 : 0;
+		arc->compare_condition_value = link->next;
+		if (link->tunnel)
+			item->tunnel = 1;
+	}
+	return 0;
+}
+
+/* Translate the RTE flex item API configuration into flex parser settings. */
+static int
+mlx5_flex_translate_conf(struct rte_eth_dev *dev,
+			 const struct rte_flow_item_flex_conf *conf,
+			 struct mlx5_flex_parser_devx *devx,
+			 struct mlx5_flex_item *item,
+			 struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_hca_flex_attr *attr = &priv->config.hca_attr.flex;
+	int ret;
+
+	ret = mlx5_flex_translate_length(attr, conf, devx, error);
+	if (ret)
+		return ret;
+	ret = mlx5_flex_translate_next(attr, conf, devx, error);
+	if (ret)
+		return ret;
+	ret = mlx5_flex_translate_sample(attr, conf, devx, item, error);
+	if (ret)
+		return ret;
+	ret = mlx5_flex_translate_arc_in(attr, conf, devx, item, error);
+	if (ret)
+		return ret;
+	ret = mlx5_flex_translate_arc_out(attr, conf, devx, item, error);
+	if (ret)
+		return ret;
+	return 0;
+}
+
 /**
  * Create the flex item with specified configuration over the Ethernet device.
  *
@@ -145,6 +889,8 @@ flow_dv_item_create(struct rte_eth_dev *dev,
 				   "too many flex items created on the port");
 		return NULL;
 	}
+	if (mlx5_flex_translate_conf(dev, conf, &devx_config, flex, error))
+		goto error;
 	ent = mlx5_list_register(priv->sh->flex_parsers_dv, &devx_config);
 	if (!ent) {
 		rte_flow_error_set(error, ENOMEM,
@@ -153,7 +899,6 @@ flow_dv_item_create(struct rte_eth_dev *dev,
 		goto error;
 	}
 	flex->devx_fp = container_of(ent, struct mlx5_flex_parser_devx, entry);
-	RTE_SET_USED(conf);
 	/* Mark initialized flex item valid. */
 	__atomic_add_fetch(&flex->refcnt, 1, __ATOMIC_RELEASE);
 	return (struct rte_flow_item_flex_handle *)flex;
@@ -278,6 +1023,7 @@ mlx5_flex_parser_remove_cb(void *list_ctx, struct mlx5_list_entry *entry)
 	RTE_SET_USED(list_ctx);
 	MLX5_ASSERT(fp->devx_obj);
 	claim_zero(mlx5_devx_cmd_destroy(fp->devx_obj));
+	DRV_LOG(DEBUG, "DEVx flex parser %p destroyed\n", (const void *)fp);
 	mlx5_free(entry);
 }

From patchwork Fri Oct 1 19:34:14 2021
X-Patchwork-Submitter: Slava Ovsiienko
X-Patchwork-Id: 100356
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Viacheslav Ovsiienko
Date: Fri, 1 Oct 2021 22:34:14 +0300
Message-ID: <20211001193415.23288-14-viacheslavo@nvidia.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20211001193415.23288-1-viacheslavo@nvidia.com>
References: <20210922180418.20663-1-viacheslavo@nvidia.com>
 <20211001193415.23288-1-viacheslavo@nvidia.com>
Subject: [dpdk-dev] [PATCH v2 13/14] net/mlx5: translate flex item pattern into matcher

The matcher is a steering engine entity that represents the flow
pattern for the hardware to match on. In order to provide a match on
the flex item pattern, the appropriate matcher fields should be
configured with values and masks accordingly.

The flex item related matcher fields are an array of eight 32-bit
fields to match against data captured by the sample registers of the
configured flex parser. One packet field presented in the item
pattern can be split between several sample registers, and multiple
fields can be combined into a single sample register to optimize
hardware resource usage (the number of sample registers is limited),
depending on the field modes, widths and offsets. The actual mapping
is complicated and is controlled by special translation data, built
by the PMD on flex item creation.
Signed-off-by: Viacheslav Ovsiienko
---
 drivers/net/mlx5/mlx5.h           |   8 ++
 drivers/net/mlx5/mlx5_flow_flex.c | 209 ++++++++++++++++++++++++++++++
 2 files changed, 217 insertions(+)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index d4fa946485..5cca704977 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1871,6 +1871,14 @@ int flow_dv_item_release(struct rte_eth_dev *dev,
 			 struct rte_flow_error *error);
 int mlx5_flex_item_port_init(struct rte_eth_dev *dev);
 void mlx5_flex_item_port_cleanup(struct rte_eth_dev *dev);
+void mlx5_flex_flow_translate_item(struct rte_eth_dev *dev,
+				   void *matcher, void *key,
+				   const struct rte_flow_item *item);
+int mlx5_flex_acquire_index(struct rte_eth_dev *dev,
+			    struct rte_flow_item_flex_handle *handle,
+			    bool acquire);
+int mlx5_flex_release_index(struct rte_eth_dev *dev, int index);
+
 /* Flex parser list callbacks. */
 struct mlx5_list_entry *mlx5_flex_parser_create_cb(void *list_ctx, void *ctx);
 int mlx5_flex_parser_match_cb(void *list_ctx,
diff --git a/drivers/net/mlx5/mlx5_flow_flex.c b/drivers/net/mlx5/mlx5_flow_flex.c
index 56b91da839..f695198833 100644
--- a/drivers/net/mlx5/mlx5_flow_flex.c
+++ b/drivers/net/mlx5/mlx5_flow_flex.c
@@ -113,6 +113,215 @@ mlx5_flex_free(struct mlx5_priv *priv, struct mlx5_flex_item *item)
 	}
 }
 
+__rte_always_inline static uint32_t
+mlx5_flex_get_bitfield(const struct rte_flow_item_flex *item,
+		       uint32_t pos, uint32_t width)
+{
+	const uint8_t *ptr = item->pattern;
+	uint32_t val;
+
+	/* Process the bitfield start byte. */
+	MLX5_ASSERT(width <= sizeof(uint32_t) * CHAR_BIT);
+	if (item->length <= pos / CHAR_BIT)
+		return 0;
+	val = ptr[pos / CHAR_BIT] >> (pos % CHAR_BIT);
+	if (width <= CHAR_BIT - pos % CHAR_BIT)
+		return val;
+	width -= CHAR_BIT - pos % CHAR_BIT;
+	pos += CHAR_BIT - pos % CHAR_BIT;
+	while (width >= CHAR_BIT) {
+		val <<= CHAR_BIT;
+		if (pos / CHAR_BIT < item->length)
+			val |= ptr[pos / CHAR_BIT];
+		width -= CHAR_BIT;
+		pos += CHAR_BIT;
+	}
+	/* Process the bitfield end byte. */
+	if (width) {
+		val <<= width;
+		if (pos / CHAR_BIT < item->length)
+			val |= ptr[pos / CHAR_BIT] & (RTE_BIT32(width) - 1);
+	}
+	return val;
+}
+
+#define SET_FP_MATCH_SAMPLE_ID(x, def, msk, val, sid) \
+	do { \
+		uint32_t tmp, out = (def); \
+		tmp = MLX5_GET(fte_match_set_misc4, misc4_m, \
+			       prog_sample_field_value_##x); \
+		tmp = (tmp & ~out) | (msk); \
+		MLX5_SET(fte_match_set_misc4, misc4_m, \
+			 prog_sample_field_value_##x, tmp); \
+		tmp = MLX5_GET(fte_match_set_misc4, misc4_v, \
+			       prog_sample_field_value_##x); \
+		tmp = (tmp & ~out) | (val); \
+		MLX5_SET(fte_match_set_misc4, misc4_v, \
+			 prog_sample_field_value_##x, tmp); \
+		tmp = sid; \
+		MLX5_SET(fte_match_set_misc4, misc4_v, \
+			 prog_sample_field_id_##x, tmp); \
+		MLX5_SET(fte_match_set_misc4, misc4_m, \
+			 prog_sample_field_id_##x, tmp); \
+	} while (0)
+
+__rte_always_inline static void
+mlx5_flex_set_match_sample(void *misc4_m, void *misc4_v,
+			   uint32_t def, uint32_t mask, uint32_t value,
+			   uint32_t sample_id, uint32_t id)
+{
+	switch (id) {
+	case 0:
+		SET_FP_MATCH_SAMPLE_ID(0, def, mask, value, sample_id);
+		break;
+	case 1:
+		SET_FP_MATCH_SAMPLE_ID(1, def, mask, value, sample_id);
+		break;
+	case 2:
+		SET_FP_MATCH_SAMPLE_ID(2, def, mask, value, sample_id);
+		break;
+	case 3:
+		SET_FP_MATCH_SAMPLE_ID(3, def, mask, value, sample_id);
+		break;
+	case 4:
+		SET_FP_MATCH_SAMPLE_ID(4, def, mask, value, sample_id);
+		break;
+	case 5:
+		SET_FP_MATCH_SAMPLE_ID(5, def, mask, value, sample_id);
+		break;
+	case 6:
+		SET_FP_MATCH_SAMPLE_ID(6, def, mask, value, sample_id);
+		break;
+	case 7:
+		SET_FP_MATCH_SAMPLE_ID(7, def, mask, value, sample_id);
+		break;
+	default:
+		MLX5_ASSERT(false);
+		break;
+	}
+#undef SET_FP_MATCH_SAMPLE_ID
+}
+
+/**
+ * Translate item pattern into matcher fields according to translation
+ * array.
+ *
+ * @param dev
+ *   Ethernet device to translate flex item on.
+ * @param[in, out] matcher
+ *   Flow matcher to configure.
+ * @param[in, out] key
+ *   Flow matcher value.
+ * @param[in] item
+ *   Flow pattern to translate.
+ */
+void
+mlx5_flex_flow_translate_item(struct rte_eth_dev *dev,
+			      void *matcher, void *key,
+			      const struct rte_flow_item *item)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item_flex *spec, *mask;
+	void *misc4_m = MLX5_ADDR_OF(fte_match_param, matcher,
+				     misc_parameters_4);
+	void *misc4_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_4);
+	struct mlx5_flex_item *tp;
+	uint32_t i, pos = 0;
+
+	MLX5_ASSERT(item->spec && item->mask);
+	spec = item->spec;
+	mask = item->mask;
+	tp = (struct mlx5_flex_item *)spec->handle;
+	MLX5_ASSERT(mlx5_flex_index(priv, tp) >= 0);
+	for (i = 0; i < tp->mapnum; i++) {
+		struct mlx5_flex_pattern_field *map = tp->map + i;
+		uint32_t id = map->reg_id;
+		uint32_t def = (RTE_BIT32(map->width) - 1) << map->shift;
+		uint32_t val = mlx5_flex_get_bitfield(spec, pos, map->width);
+		uint32_t msk = mlx5_flex_get_bitfield(mask, pos, map->width);
+
+		MLX5_ASSERT(map->width);
+		MLX5_ASSERT(id < tp->devx_fp->num_samples);
+		pos += map->width;
+		val <<= map->shift;
+		msk <<= map->shift;
+		mlx5_flex_set_match_sample(misc4_m, misc4_v,
+					   def, msk & def, val & msk & def,
+					   tp->devx_fp->sample_ids[id], id);
+	}
+}
+
+/**
+ * Convert flex item handle (from the RTE flow) to flex item index on port.
+ * Optionally increments the flex item object reference count.
+ *
+ * @param dev
+ *   Ethernet device to acquire flex item on.
+ * @param[in] handle
+ *   Flow item handle from item spec.
+ * @param[in] acquire
+ *   If set - increment reference counter.
+ *
+ * @return
+ *   >=0 - index on success, a negative errno value otherwise
+ *   and rte_errno is set.
+ */
+int
+mlx5_flex_acquire_index(struct rte_eth_dev *dev,
+			struct rte_flow_item_flex_handle *handle,
+			bool acquire)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_flex_item *flex = (struct mlx5_flex_item *)handle;
+	int ret = mlx5_flex_index(priv, flex);
+
+	if (ret < 0) {
+		errno = EINVAL;
+		rte_errno = EINVAL;
+		return ret;
+	}
+	if (acquire)
+		__atomic_add_fetch(&flex->refcnt, 1, __ATOMIC_RELEASE);
+	return ret;
+}
+
+/**
+ * Release flex item index on port - decrements the reference counter
+ * of the flex item at the given index.
+ *
+ * @param dev
+ *   Ethernet device to release flex item on.
+ * @param[in] index
+ *   Flow item index.
+ *
+ * @return
+ *   0 - on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_flex_release_index(struct rte_eth_dev *dev,
+			int index)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_flex_item *flex;
+
+	if (index >= MLX5_PORT_FLEX_ITEM_NUM ||
+	    !(priv->flex_item_map & (1u << index))) {
+		errno = EINVAL;
+		rte_errno = EINVAL;
+		return -EINVAL;
+	}
+	flex = priv->flex_item + index;
+	if (flex->refcnt <= 1) {
+		MLX5_ASSERT(false);
+		errno = EINVAL;
+		rte_errno = EINVAL;
+		return -EINVAL;
+	}
+	__atomic_sub_fetch(&flex->refcnt, 1, __ATOMIC_RELEASE);
+	return 0;
+}
+
 static int
 mlx5_flex_translate_length(struct mlx5_hca_flex_attr *attr,
			    const struct rte_flow_item_flex_conf *conf,

From patchwork Fri Oct 1 19:34:15 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Slava Ovsiienko
X-Patchwork-Id: 100357
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Viacheslav Ovsiienko
Date: Fri, 1 Oct 2021 22:34:15 +0300
Message-ID: <20211001193415.23288-15-viacheslavo@nvidia.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20211001193415.23288-1-viacheslavo@nvidia.com>
References: <20210922180418.20663-1-viacheslavo@nvidia.com>
 <20211001193415.23288-1-viacheslavo@nvidia.com>
Subject: [dpdk-dev] [PATCH v2 14/14] net/mlx5: handle flex item in flows
From: Gregory Etelson

Provide flex item recognition, validation and translation in flow
patterns. Track the flex item referencing.

Signed-off-by: Gregory Etelson
---
 drivers/net/mlx5/mlx5_flow.h      |  8 +++-
 drivers/net/mlx5/mlx5_flow_dv.c   | 70 +++++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_flow_flex.c |  4 +-
 3 files changed, 79 insertions(+), 3 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index a8f8c49dd2..c87d8e3168 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -173,6 +173,10 @@ enum mlx5_feature_name {
 /* Conntrack item. */
 #define MLX5_FLOW_LAYER_ASO_CT (UINT64_C(1) << 35)
 
+/* Flex item */
+#define MLX5_FLOW_ITEM_FLEX (UINT64_C(1) << 36)
+#define MLX5_FLOW_ITEM_FLEX_TUNNEL (UINT64_C(1) << 37)
+
 /* Outer Masks. */
 #define MLX5_FLOW_LAYER_OUTER_L3 \
 	(MLX5_FLOW_LAYER_OUTER_L3_IPV4 | MLX5_FLOW_LAYER_OUTER_L3_IPV6)
@@ -187,7 +191,8 @@ enum mlx5_feature_name {
 	(MLX5_FLOW_LAYER_VXLAN | MLX5_FLOW_LAYER_VXLAN_GPE | \
 	 MLX5_FLOW_LAYER_GRE | MLX5_FLOW_LAYER_NVGRE | MLX5_FLOW_LAYER_MPLS | \
 	 MLX5_FLOW_LAYER_IPIP | MLX5_FLOW_LAYER_IPV6_ENCAP | \
-	 MLX5_FLOW_LAYER_GENEVE | MLX5_FLOW_LAYER_GTP)
+	 MLX5_FLOW_LAYER_GENEVE | MLX5_FLOW_LAYER_GTP | \
+	 MLX5_FLOW_ITEM_FLEX_TUNNEL)
 
 /* Inner Masks. */
 #define MLX5_FLOW_LAYER_INNER_L3 \
@@ -686,6 +691,7 @@ struct mlx5_flow_handle {
 	uint32_t is_meter_flow_id:1; /**< Indate if flow_id is for meter. */
 	uint32_t mark:1; /**< Metadate rxq mark flag. */
 	uint32_t fate_action:3; /**< Fate action type. */
+	uint32_t flex_item; /**< referenced Flex Item bitmask. */
 	union {
 		uint32_t rix_hrxq; /**< Hash Rx queue object index. */
 		uint32_t rix_jump; /**< Index to the jump action resource. */
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index a3c35a5edf..3a785e7925 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -6825,6 +6825,38 @@ flow_dv_validate_item_integrity(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static int
+flow_dv_validate_item_flex(struct rte_eth_dev *dev,
+			   const struct rte_flow_item *item,
+			   uint64_t *last_item,
+			   struct rte_flow_error *error)
+{
+	const struct rte_flow_item_flex *flow_spec = item->spec;
+	const struct rte_flow_item_flex *flow_mask = item->mask;
+	struct mlx5_flex_item *flex;
+
+	if (!flow_spec)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+					  "flex flow item spec cannot be NULL");
+	if (!flow_mask)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+					  "flex flow item mask cannot be NULL");
+	if (item->last)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+					  "flex flow item last not supported");
+	if (mlx5_flex_acquire_index(dev, flow_spec->handle, false) < 0)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+					  "invalid flex flow item handle");
+	flex = (struct mlx5_flex_item *)flow_spec->handle;
+	*last_item = flex->tunnel ? MLX5_FLOW_ITEM_FLEX_TUNNEL :
+		     MLX5_FLOW_ITEM_FLEX;
+	return 0;
+}
+
 /**
  * Internal validation function. For validating both actions and items.
 *
@@ -7266,6 +7298,14 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 			 * list it here as a supported type
 			 */
 			break;
+		case RTE_FLOW_ITEM_TYPE_FLEX:
+			if (item_flags & MLX5_FLOW_ITEM_FLEX)
+				return rte_flow_error_set(error, ENOTSUP,
+					RTE_FLOW_ERROR_TYPE_ITEM,
+					NULL, "multiple flex items not supported");
+			ret = flow_dv_validate_item_flex(dev, items,
+							 &last_item, error);
+			break;
 		default:
 			return rte_flow_error_set(error, ENOTSUP,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
@@ -10123,6 +10163,25 @@ flow_dv_translate_item_aso_ct(struct rte_eth_dev *dev,
 			       reg_value, reg_mask);
 }
 
+static void
+flow_dv_translate_item_flex(struct rte_eth_dev *dev, void *matcher, void *key,
+			    const struct rte_flow_item *item,
+			    struct mlx5_flow *dev_flow)
+{
+	const struct rte_flow_item_flex *spec =
+		(const struct rte_flow_item_flex *)item->spec;
+	int index = mlx5_flex_acquire_index(dev, spec->handle, false);
+
+	MLX5_ASSERT(index >= 0 && index < (int)(sizeof(uint32_t) * CHAR_BIT));
+	if (!(dev_flow->handle->flex_item & RTE_BIT32(index))) {
+		/* Don't count both inner and outer flex items in one rule. */
+		if (mlx5_flex_acquire_index(dev, spec->handle, true) != index)
+			MLX5_ASSERT(false);
+		dev_flow->handle->flex_item |= RTE_BIT32(index);
+	}
+	mlx5_flex_flow_translate_item(dev, matcher, key, item);
+}
+
 static uint32_t matcher_zero[MLX5_ST_SZ_DW(fte_match_param)] = { 0 };
 
 #define HEADER_IS_ZERO(match_criteria, headers)				     \
@@ -13520,6 +13579,10 @@ flow_dv_translate(struct rte_eth_dev *dev,
 			flow_dv_translate_item_aso_ct(dev, match_mask,
 						      match_value, items);
 			break;
+		case RTE_FLOW_ITEM_TYPE_FLEX:
+			flow_dv_translate_item_flex(dev, match_mask,
+						    match_value, items,
+						    dev_flow);
+			break;
 		default:
 			break;
 		}
@@ -14393,6 +14456,12 @@ flow_dv_destroy(struct rte_eth_dev *dev, struct rte_flow *flow)
 		if (!dev_handle)
 			return;
 		flow->dev_handles = dev_handle->next.next;
+		while (dev_handle->flex_item) {
+			int index = rte_bsf32(dev_handle->flex_item);
+
+			mlx5_flex_release_index(dev, index);
+			dev_handle->flex_item &= ~RTE_BIT32(index);
+		}
 		if (dev_handle->dvh.matcher)
 			flow_dv_matcher_release(dev, dev_handle);
 		if (dev_handle->dvh.rix_sample)
@@ -18014,5 +18083,6 @@ const struct mlx5_flow_driver_ops mlx5_flow_dv_drv_ops = {
 	.item_create = flow_dv_item_create,
 	.item_release = flow_dv_item_release,
 };
+
 #endif /* HAVE_IBV_FLOW_DV_SUPPORT */
 
diff --git a/drivers/net/mlx5/mlx5_flow_flex.c b/drivers/net/mlx5/mlx5_flow_flex.c
index f695198833..f1567ddfdd 100644
--- a/drivers/net/mlx5/mlx5_flow_flex.c
+++ b/drivers/net/mlx5/mlx5_flow_flex.c
@@ -1164,8 +1164,8 @@ flow_dv_item_release(struct rte_eth_dev *dev,
 			   &flex->devx_fp->entry);
 	flex->devx_fp = NULL;
 	mlx5_flex_free(priv, flex);
-	if (rc)
-		return rte_flow_error_set(error, rc,
+	if (rc < 0)
+		return rte_flow_error_set(error, EBUSY,
 					  RTE_FLOW_ERROR_TYPE_ITEM, NULL,
 					  "flex item release failure");
 	return 0;