From patchwork Mon Aug 20 11:39:21 2018
X-Patchwork-Submitter: Slava Ovsiienko
X-Patchwork-Id: 43789
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: viacheslavo
To: dev@dpdk.org
Cc: shahafs@mellanox.com, viacheslavo
Date: Mon, 20 Aug 2018 11:39:21 +0000
Message-Id: <1534765161-12321-1-git-send-email-viacheslavo@mellanox.com>
X-Mailer: git-send-email 1.8.3.1
Subject: [dpdk-dev] [RFC] ethdev: flow counters batch query
List-Id: DPDK patches and discussions
Sender: "dev"

There is a demand to perform query operations on sets of counters that
generally belong to different RTE flows.
Counter queries are a very effective way for an application to learn in
detail what is going on in the network and to take actions based on that
knowledge. For example, a modern vSwitch implementation supporting full
hardware offload may need a huge number of counters and may query them
periodically to check configuration aging.

For some devices the counter query can be a very expensive operation and
can take a significant amount of time - it often involves calls into
kernel-mode components and requests to hardware. If an application wants
to query multiple counters belonging to different RTE flows, it has to
execute the queries one by one, because the current RTE API only allows
querying counters that belong to the same flow.

The proposed RTE API extension introduces a compatible and more
performant way for applications to query counter values as a batch.
At creation time a counter can optionally be assigned the newly
introduced 'group_id' identifier (batch identifier). All counters with
the same group_id form a new entity - a 'counter batch'. The group_id
specifies a unique batch within the device with the given port_id;
the same group_id may specify different batches on different devices.

The new method 'rte_flow_query_update()' is proposed to perform the
counter batch query from the device as a single call. This eliminates
much of the overhead involved in issuing multiple queries for counters
belonging to different flows. rte_flow_query_update() fetches the
actual counter values from the device, using the underlying software
and/or hardware, and stores the obtained data in the counter objects
within the PMD. The application can then read the stored data by
invoking the existing rte_flow_query() method with the newly introduced
'read_cached' flag set. This approach is compatible with the current
implementation, improves performance and requires only minor changes
in applications.

Let's assume we have an array of flows with attached counters. If an
application wants to read them, it currently does something like this:

  foreach (flow) {                            // compatible mode
      rte_flow_query(flow, flow_counters[]);  // no read_cached set
  }                                           // works as previously

With the counter batch query implemented, the application can do the
following:

  // new query mode
  rte_flow_query_update(group_id of interest);  // actual query here,
  foreach (flow) {                              // as a single call
      rte_flow_query(flow, flow_counters[]);    // read_cached flag set:
  }                                             // read stored data,
                                                // no underlying calls

For some devices the implementation of rte_flow_query_update() may
require a lot of preparation before performing the actual query. If the
batch is permanent and not expected to change frequently, these
preparations can be cached internally by the implementation. By setting
the RTE_FLOW_QUERY_FLAG_PERMANENT flag the application hints to the PMD
that the batch is long-term, which allows the PMD to optimize successive
calls of rte_flow_query_update() for the same group_id on the given
device.

If a permanent batch changes (a counter with the specified batch id is
added or removed), the PMD should free all resources internally
allocated for batch query optimization.

If RTE_FLOW_QUERY_FLAG_PERMANENT is not set, rte_flow_query_update()
should free the resources allocated for batch query optimization with
the given group_id, including those allocated in previous calls, if any.
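
To make the intended usage concrete, here is a minimal sketch of how an
application might combine the proposed rte_flow_query_update() with the
existing rte_flow_query(). It assumes the rte_flow_query() variant that
takes a pointer to the action being queried; the flows[]/counts[] arrays
and the GROUP_ID value are hypothetical application-side names, not part
of this patch:

  #include <string.h>

  #include <rte_flow.h>

  #define GROUP_ID 1  /* hypothetical batch id chosen by the application */

  /* Read all counters of one batch: a single device query followed by
   * cached per-flow reads. Sketch only, based on the API proposed here. */
  static int
  read_counter_batch(uint16_t port_id, struct rte_flow **flows,
                     struct rte_flow_query_count *counts,
                     unsigned int nb_flows)
  {
          const struct rte_flow_action count_action = {
                  .type = RTE_FLOW_ACTION_TYPE_COUNT,
          };
          struct rte_flow_error error;
          unsigned int i;
          int ret;

          /* One call fetches the whole batch from the device into the PMD. */
          ret = rte_flow_query_update(port_id, GROUP_ID,
                                      RTE_FLOW_QUERY_FLAG_PERMANENT);
          if (ret < 0)
                  return ret;

          /* Per-flow queries now only read the data cached by the PMD. */
          for (i = 0; i < nb_flows; i++) {
                  memset(&counts[i], 0, sizeof(counts[i]));
                  counts[i].read_cached = 1;
                  ret = rte_flow_query(port_id, flows[i], &count_action,
                                       &counts[i], &error);
                  if (ret < 0)
                          return ret;
          }
          return 0;
  }

Compared to the compatible-mode loop above, only the single
rte_flow_query_update() call touches the device; whether passing
RTE_FLOW_QUERY_FLAG_PERMANENT is appropriate depends on how often the
batch membership changes.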
Signed-off-by: Viacheslav Ovsiienko
---
 doc/guides/prog_guide/rte_flow.rst | 76 ++++++++++++++++++++++++++++----------
 lib/librte_ethdev/rte_flow.h       | 59 ++++++++++++++++++++++++++++-
 2 files changed, 113 insertions(+), 22 deletions(-)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index b305a72..84b3b67 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1500,6 +1500,10 @@ action must specify a unique id.
 Counters can be retrieved and reset through ``rte_flow_query()``, see
 ``struct rte_flow_query_count``.
 
+Counters can be assigned a group_id; all counters with a matching group_id
+on the same port are grouped into a batch and can be queried from the
+device with a single call of rte_flow_query_update().
+
 The shared flag indicates whether the counter is unique to the flow rule the
 action is specified with, or whether it is a shared counter.
 
@@ -1515,13 +1519,15 @@ to all ports within that switch domain.
 
 .. table:: COUNT
 
-   +------------+---------------------+
-   | Field      | Value               |
-   +============+=====================+
-   | ``shared`` | shared counter flag |
-   +------------+---------------------+
-   | ``id``     | counter id          |
-   +------------+---------------------+
+   +--------------+----------------------+
+   | Field        | Value                |
+   +==============+======================+
+   | ``shared``   | shared counter flag  |
+   +--------------+----------------------+
+   | ``id``       | counter id           |
+   +--------------+----------------------+
+   | ``group_id`` | batch id             |
+   +--------------+----------------------+
 
 Query structure to retrieve and reset flow rule counters:
 
@@ -1529,19 +1535,21 @@ Query structure to retrieve and reset flow rule counters:
 
 .. table:: COUNT query
 
-   +---------------+-----+-----------------------------------+
-   | Field         | I/O | Value                             |
-   +===============+=====+===================================+
-   | ``reset``     | in  | reset counter after query         |
-   +---------------+-----+-----------------------------------+
-   | ``hits_set``  | out | ``hits`` field is set             |
-   +---------------+-----+-----------------------------------+
-   | ``bytes_set`` | out | ``bytes`` field is set            |
-   +---------------+-----+-----------------------------------+
-   | ``hits``      | out | number of hits for this rule      |
-   +---------------+-----+-----------------------------------+
-   | ``bytes``     | out | number of bytes through this rule |
-   +---------------+-----+-----------------------------------+
+   +-----------------+-----+-------------------------------------+
+   | Field           | I/O | Value                               |
+   +=================+=====+=====================================+
+   | ``reset``       | in  | reset counter after query           |
+   +-----------------+-----+-------------------------------------+
+   | ``hits_set``    | out | ``hits`` field is set               |
+   +-----------------+-----+-------------------------------------+
+   | ``bytes_set``   | out | ``bytes`` field is set              |
+   +-----------------+-----+-------------------------------------+
+   | ``read_cached`` | in  | read cached data instead of device  |
+   +-----------------+-----+-------------------------------------+
+   | ``hits``        | out | number of hits for this rule        |
+   +-----------------+-----+-------------------------------------+
+   | ``bytes``       | out | number of bytes through this rule   |
+   +-----------------+-----+-------------------------------------+
 
 Action: ``RSS``
 ^^^^^^^^^^^^^^^
@@ -2288,6 +2296,34 @@ Return values:
 
 - 0 on success, a negative errno value otherwise and ``rte_errno`` is set.
 
+Batch Query
+~~~~~~~~~~~
+
+Query a batch of existing flow rules.
+
+This function allows retrieving flow-specific data such as counters
+belonging to different flows on the given port in a single batch
+query call.
+
+.. code-block:: c
+
+   int
+   rte_flow_query_update(uint16_t port_id,
+                         uint32_t group_id,
+                         uint32_t flags);
+
+Arguments:
+
+- ``port_id``: port identifier of Ethernet device.
+- ``group_id``: batch to query, specifies the group of actions.
+- ``flags``: can be a combination of ``RTE_FLOW_QUERY_FLAG_RESET``
+  and ``RTE_FLOW_QUERY_FLAG_PERMANENT`` values.
+
+Return values:
+
+- 0 on success, a negative errno value otherwise and ``rte_errno`` is set.
+
+
 Isolated mode
 -------------
 
diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
index f8ba71c..0cbc8fa 100644
--- a/lib/librte_ethdev/rte_flow.h
+++ b/lib/librte_ethdev/rte_flow.h
@@ -1561,10 +1561,14 @@ struct rte_flow_action_queue {
  * Counters can be retrieved and reset through ``rte_flow_query()``, see
  * ``struct rte_flow_query_count``.
  *
+ * Counters can be assigned a group_id; all counters with a matching group_id
+ * on the same port are grouped into a batch and can be queried from the
+ * device with a single call of rte_flow_query_update().
+ *
  * The shared flag indicates whether the counter is unique to the flow rule the
  * action is specified with, or whether it is a shared counter.
 *
- * For a count action with the shared flag set, then then a global device
+ * For a count action with the shared flag set, then a global device
  * namespace is assumed for the counter id, so that any matched flow rules using
  * a count action with the same counter id on the same port will contribute to
  * that counter.
@@ -1576,6 +1580,7 @@ struct rte_flow_action_count {
 	uint32_t shared:1; /**< Share counter ID with other flow rules. */
 	uint32_t reserved:31; /**< Reserved, must be zero. */
 	uint32_t id; /**< Counter ID. */
+	uint32_t group_id; /**< ID of the batch that the counter belongs to. */
 };
 
 /**
@@ -1587,7 +1592,8 @@ struct rte_flow_query_count {
 	uint32_t reset:1; /**< Reset counters after query [in]. */
 	uint32_t hits_set:1; /**< hits field is set [out]. */
 	uint32_t bytes_set:1; /**< bytes field is set [out]. */
-	uint32_t reserved:29; /**< Reserved, must be zero [in, out]. */
+	uint32_t read_cached:1; /**< Read stored data instead of device [in]. */
+	uint32_t reserved:28; /**< Reserved, must be zero [in, out]. */
 	uint64_t hits; /**< Number of hits for this rule [out]. */
 	uint64_t bytes; /**< Number of bytes through this rule [out]. */
 };
@@ -2094,6 +2100,55 @@ struct rte_flow *
 			  struct rte_flow_error *error);
 
 /**
+ * Query a batch of existing flow rules.
+ *
+ * This function allows retrieving flow-specific data such as counters
+ * belonging to different flows on the given port in a single batch
+ * query call.
+ *
+ * \see RTE_FLOW_ACTION_TYPE_COUNT
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param group_id
+ *   Batch identifier, specifies the group of actions to be queried.
+ * @param flags
+ *   RTE_FLOW_QUERY_FLAG_RESET
+ *   RTE_FLOW_QUERY_FLAG_PERMANENT
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+rte_flow_query_update(uint16_t port_id,
+		      uint32_t group_id,
+		      uint32_t flags);
+
+#define RTE_FLOW_QUERY_FLAG_RESET (1 << 0)
+/**< Reset counters after query. */
+/**
+ * For some devices, the implementation of rte_flow_query_update() may
+ * require a lot of preparation before performing the actual query.
+ * If the batch is permanent and not expected to change frequently,
+ * these preparations can be cached internally by the implementation.
+ * By setting the RTE_FLOW_QUERY_FLAG_PERMANENT flag the application
+ * hints to the PMD that the batch is long-term, which allows the PMD
+ * to optimize successive calls of rte_flow_query_update() for the
+ * same group_id on the given device.
+ *
+ * If a permanent batch is changed (a count action with the specified
+ * batch id is added or removed), the PMD should free all resources
+ * internally allocated for batch query optimization.
+ *
+ * If RTE_FLOW_QUERY_FLAG_PERMANENT is not set, rte_flow_query_update()
+ * should free the resources (if any) previously allocated for batch
+ * query optimization with the given group_id.
+ *
+ */
+#define RTE_FLOW_QUERY_FLAG_PERMANENT (1 << 1)
+/**< Assume the batch is permanent. */
+
+/**
  * Restrict ingress traffic to the defined flow rules.
  *
  * Isolated mode guarantees that all ingress traffic comes from defined flow
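
For completeness, below is a hedged sketch of the other half of the
proposal: assigning a counter to a batch at flow creation time through
the new group_id field of struct rte_flow_action_count. The pattern
items, the queue index and the GROUP_ID value are placeholder examples
chosen for illustration, not part of this patch:

  #include <rte_flow.h>

  #define GROUP_ID 1  /* hypothetical batch id chosen by the application */

  /* Create a flow rule whose COUNT action joins batch GROUP_ID. */
  static struct rte_flow *
  create_counted_flow(uint16_t port_id, uint32_t counter_id,
                      struct rte_flow_error *error)
  {
          const struct rte_flow_attr attr = { .ingress = 1 };
          const struct rte_flow_item pattern[] = {
                  { .type = RTE_FLOW_ITEM_TYPE_ETH },
                  { .type = RTE_FLOW_ITEM_TYPE_END },
          };
          const struct rte_flow_action_count count = {
                  .id = counter_id,
                  .group_id = GROUP_ID,   /* field added by this RFC */
          };
          const struct rte_flow_action_queue queue = { .index = 0 };
          const struct rte_flow_action actions[] = {
                  { .type = RTE_FLOW_ACTION_TYPE_COUNT, .conf = &count },
                  { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
                  { .type = RTE_FLOW_ACTION_TYPE_END },
          };

          return rte_flow_create(port_id, &attr, pattern, actions, error);
  }

A flow created this way is then covered by the batch read sketched after
the commit message: one rte_flow_query_update(port_id, GROUP_ID, ...)
call followed by cached rte_flow_query() reads.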