From patchwork Wed Oct 19 13:12:26 2022
X-Patchwork-Submitter: Michael Baum
X-Patchwork-Id: 118569
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Michael Baum
CC: Matan Azrad, Raslan Darawsheh, Ori Kam
Subject: [PATCH v2 1/3] ethdev: add strict queue to pre-configuration flow hints
Date: Wed, 19 Oct 2022 16:12:26 +0300
Message-ID: <20221019131228.2538941-2-michaelba@nvidia.com>
In-Reply-To: <20221019131228.2538941-1-michaelba@nvidia.com>
References: <20220921145409.511328-1-michaelba@nvidia.com> <20221019131228.2538941-1-michaelba@nvidia.com>
The data-path focused flow rule management can handle flow rules in a more optimized way than the traditional one by using hints provided by the application in the initialization phase. In addition to the current hints in the port attributes, the application could provide more hints about its behaviour. One example is how the application operates on the same flow rule:

A. Flows are created/destroyed on one queue, but queried on a different queue or in a queue-less way (i.e. counter query).

B. All operations on a flow rule happen on exactly the same queue, so the PMD can be more optimized than in case A because resources can be isolated and accessed per queue, without locks, for example.

This patch adds a flag for the latter situation and could be extended to cover more situations.

Signed-off-by: Michael Baum
Acked-by: Ori Kam
---
app/test-pmd/cmdline_flow.c | 10 ++++++++++ doc/guides/testpmd_app_ug/testpmd_funcs.rst | 4 ++-- lib/ethdev/rte_flow.h | 14 ++++++++++++++ 3 files changed, 26 insertions(+), 2 deletions(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 810dfb9854..59829371d4 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -226,6 +226,7 @@ enum index { CONFIG_AGING_OBJECTS_NUMBER, CONFIG_METERS_NUMBER, CONFIG_CONN_TRACK_NUMBER, + CONFIG_FLAGS, /* Indirect action arguments */ INDIRECT_ACTION_CREATE, @@ -1092,6 +1093,7 @@ static const enum index next_config_attr[] = { CONFIG_AGING_OBJECTS_NUMBER, CONFIG_METERS_NUMBER, CONFIG_CONN_TRACK_NUMBER, + CONFIG_FLAGS, END, ZERO, }; @@ -2692,6 +2694,14 @@ static const struct token token_list[] = { .args = ARGS(ARGS_ENTRY(struct buffer, args.configure.port_attr.nb_conn_tracks)), }, + [CONFIG_FLAGS] = { + .name = "flags", + .help = "configuration flags", + .next = NEXT(next_config_attr, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.configure.port_attr.flags)), + }, /* Top-level command. */ [PATTERN_TEMPLATE] = { .name = "pattern_template", diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index b3f31df69a..a8b99c8c19 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -2891,7 +2891,7 @@ following sections. [queues_number {number}] [queues_size {size}] [counters_number {number}] [aging_counters_number {number}] - [meters_number {number}] + [meters_number {number}] [flags {number}] - Create a pattern template:: flow pattern_template {port_id} create [pattern_template_id {id}] @@ -3042,7 +3042,7 @@ for asynchronous flow creation/destruction operations.
It is bound to [queues_number {number}] [queues_size {size}] [counters_number {number}] [aging_counters_number {number}] - [meters_number {number}] + [meters_number {number}] [flags {number}] If successful, it will show:: diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index cddbe74c33..a93ec796cb 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -4741,6 +4741,12 @@ rte_flow_flex_item_release(uint16_t port_id, const struct rte_flow_item_flex_handle *handle, struct rte_flow_error *error); +/** + * Indicate all operations for a given flow rule will _strictly_ + * happen on the same queue (create/destroy/query/update). + */ +#define RTE_FLOW_PORT_FLAG_STRICT_QUEUE RTE_BIT32(0) + /** * @warning * @b EXPERIMENTAL: this API may change without prior notice. @@ -4774,6 +4780,10 @@ struct rte_flow_port_info { * @see RTE_FLOW_ACTION_TYPE_CONNTRACK */ uint32_t max_nb_conn_tracks; + /** + * Port supported flags (RTE_FLOW_PORT_FLAG_*). + */ + uint32_t supported_flags; }; /** @@ -4848,6 +4858,10 @@ struct rte_flow_port_attr { * @see RTE_FLOW_ACTION_TYPE_CONNTRACK */ uint32_t nb_conn_tracks; + /** + * Port flags (RTE_FLOW_PORT_FLAG_*). + */ + uint32_t flags; }; /** From patchwork Wed Oct 19 13:12:27 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michael Baum X-Patchwork-Id: 118570 X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 77EC8A0584; Wed, 19 Oct 2022 15:13:22 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 3AC5A42BAD; Wed, 19 Oct 2022 15:13:12 +0200 (CEST) Received: from NAM11-CO1-obe.outbound.protection.outlook.com (mail-co1nam11on2061.outbound.protection.outlook.com [40.107.220.61]) by mails.dpdk.org (Postfix) with ESMTP id 398F9410D1 for ; Wed, 19 Oct 2022 15:13:10 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=lREnBlsr2t6vwyG9vqTQtrUAHt7ipaMp+Rl1HCrB/PuWurYqvRc4tLheSbIjgLFme2SrPS7cYZm02mOpAapOUBNgqSdPWqGieD68jndKJ6wUZKcsOHpZVvdMe42Jf8wzzppCRNjK5JhMQi7qkplPycO2h2M2C76Va/ueXgrsqubu9NdgvC48qqkb3cTKSkl+4W7EUkkuXWs1N3J9fsYh6Sn8BjD5uDjTfYmHQ1SWzmczVX94Bk2ojVr9k5TZMBVXJBdQDVgeMlEU8uc5CuI/Z7Y/7/9Y64qkQKm9X8sxFE+OkbRJ9uD5ME2bB8gvQGv02d0Tq6HZYWFGEnV8mVD64g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=ZRyv/0EEl6c/NhxvG80pg2cT+pyjN5zYE6oV3u4nP6k=; b=BYSCXV7rgTUidiO/fhSD8S+Bhpzxw0zikCJDTpPqqm8mE9cG5T1HiHwb2i1H3qLKh8vlWQ6WRg6LiU331U/ZOtMfvwcV9xg7NFwieX6qWfhMlCR5fLKRhvdckjPj07gbw2CC1cJrvaLlMfxKww+guJJJUPA1fOSK+pZd/GOla1rt11zJho5lcSiV6Rk6W6myJ2l3zhbw3ru6p70HPM4T+d9R5ApaLoZu/FU1R9RR5o+VlbZyRxA902RL75XTqpb7zL1h5ntyKqaj5/qFYcdIjiZZWbLTVFyXD2NuYkn9w9DAUEAaWct9YMphhBOtrF+LE68eXbdPT8urwYfqJgIeXQ== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=none (sender ip is 216.228.117.160) smtp.rcpttodomain=dpdk.org smtp.mailfrom=nvidia.com; dmarc=fail (p=reject sp=reject pct=100) action=oreject header.from=nvidia.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; 
From: Michael Baum
CC: Matan Azrad, Raslan Darawsheh, Ori Kam
Subject: [PATCH v2 2/3] ethdev: add queue-based API to report aged flow rules
Date: Wed, 19 Oct 2022 16:12:27 +0300
Message-ID: <20221019131228.2538941-3-michaelba@nvidia.com>
In-Reply-To: <20221019131228.2538941-1-michaelba@nvidia.com>
References: <20220921145409.511328-1-michaelba@nvidia.com> <20221019131228.2538941-1-michaelba@nvidia.com>
When an application uses queue-based flow rule management and operates on the same flow rule via the same queue, e.g. create/destroy/query, the API for querying aged flow rules should also take a queue ID parameter, just like the other queue-based flow APIs. This way the PMD can work in a more optimized manner, since resources are isolated per queue and need no synchronization.

If the application does use queue-based flow management but configures the port without RTE_FLOW_PORT_FLAG_STRICT_QUEUE, meaning it may operate on a given flow rule via different queues, the queue ID parameter is ignored.

Signed-off-by: Michael Baum
Acked-by: Ori Kam
---
app/test-pmd/cmdline_flow.c | 17 ++- app/test-pmd/config.c | 159 +++++++++++++++++++- app/test-pmd/testpmd.h | 1 + doc/guides/testpmd_app_ug/testpmd_funcs.rst | 86 ++++++++++- lib/ethdev/rte_flow.c | 22 +++ lib/ethdev/rte_flow.h | 49 +++++- lib/ethdev/rte_flow_driver.h | 7 + lib/ethdev/version.map | 1 + 8 files changed, 332 insertions(+), 10 deletions(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 59829371d4..992aeb95b3 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -129,6 +129,7 @@ enum index { /* Queue arguments.
*/ QUEUE_CREATE, QUEUE_DESTROY, + QUEUE_AGED, QUEUE_INDIRECT_ACTION, /* Queue create arguments. */ @@ -1170,6 +1171,7 @@ static const enum index next_table_destroy_attr[] = { static const enum index next_queue_subcmd[] = { QUEUE_CREATE, QUEUE_DESTROY, + QUEUE_AGED, QUEUE_INDIRECT_ACTION, ZERO, }; @@ -2967,6 +2969,13 @@ static const struct token token_list[] = { .args = ARGS(ARGS_ENTRY(struct buffer, queue)), .call = parse_qo_destroy, }, + [QUEUE_AGED] = { + .name = "aged", + .help = "list and destroy aged flows", + .next = NEXT(next_aged_attr, NEXT_ENTRY(COMMON_QUEUE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, queue)), + .call = parse_aged, + }, [QUEUE_INDIRECT_ACTION] = { .name = "indirect_action", .help = "queue indirect actions", @@ -8654,8 +8663,8 @@ parse_aged(struct context *ctx, const struct token *token, /* Nothing else to do if there is no buffer. */ if (!out) return len; - if (!out->command) { - if (ctx->curr != AGED) + if (!out->command || out->command == QUEUE) { + if (ctx->curr != AGED && ctx->curr != QUEUE_AGED) return -1; if (sizeof(*out) > size) return -1; @@ -10610,6 +10619,10 @@ cmd_flow_parsed(const struct buffer *in) case PULL: port_queue_flow_pull(in->port, in->queue); break; + case QUEUE_AGED: + port_queue_flow_aged(in->port, in->queue, + in->args.aged.destroy); + break; case QUEUE_INDIRECT_ACTION_CREATE: port_queue_action_handle_create( in->port, in->queue, in->postpone, diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index 0f7dbd698f..18f3543887 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -2509,6 +2509,7 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id, const struct rte_flow_action *actions) { struct rte_flow_op_attr op_attr = { .postpone = postpone }; + struct rte_flow_attr flow_attr = { 0 }; struct rte_flow *flow; struct rte_port *port; struct port_flow *pf; @@ -2568,7 +2569,7 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id, } job->type = QUEUE_JOB_TYPE_FLOW_CREATE; - pf = port_flow_new(NULL, pattern, actions, &error); + pf = port_flow_new(&flow_attr, pattern, actions, &error); if (!pf) { free(job); return port_flow_complain(&error); @@ -2905,6 +2906,162 @@ port_queue_flow_push(portid_t port_id, queueid_t queue_id) return ret; } +/** Pull queue operation results from the queue. 
*/ +static int +port_queue_aged_flow_destroy(portid_t port_id, queueid_t queue_id, + const uint32_t *rule, int nb_flows) +{ + struct rte_port *port = &ports[port_id]; + struct rte_flow_op_result *res; + struct rte_flow_error error; + uint32_t n = nb_flows; + int ret = 0; + int i; + + res = calloc(port->queue_sz, sizeof(struct rte_flow_op_result)); + if (!res) { + printf("Failed to allocate memory for pulled results\n"); + return -ENOMEM; + } + + memset(&error, 0x66, sizeof(error)); + while (nb_flows > 0) { + int success = 0; + + if (n > port->queue_sz) + n = port->queue_sz; + ret = port_queue_flow_destroy(port_id, queue_id, true, n, rule); + if (ret < 0) { + free(res); + return ret; + } + ret = rte_flow_push(port_id, queue_id, &error); + if (ret < 0) { + printf("Failed to push operations in the queue: %s\n", + strerror(-ret)); + free(res); + return ret; + } + while (success < nb_flows) { + ret = rte_flow_pull(port_id, queue_id, res, + port->queue_sz, &error); + if (ret < 0) { + printf("Failed to pull a operation results: %s\n", + strerror(-ret)); + free(res); + return ret; + } + + for (i = 0; i < ret; i++) { + if (res[i].status == RTE_FLOW_OP_SUCCESS) + success++; + } + } + rule += n; + nb_flows -= n; + n = nb_flows; + } + + free(res); + return ret; +} + +/** List simply and destroy all aged flows per queue. */ +void +port_queue_flow_aged(portid_t port_id, uint32_t queue_id, uint8_t destroy) +{ + void **contexts; + int nb_context, total = 0, idx; + uint32_t *rules = NULL; + struct rte_port *port; + struct rte_flow_error error; + enum age_action_context_type *type; + union { + struct port_flow *pf; + struct port_indirect_action *pia; + } ctx; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return; + port = &ports[port_id]; + if (queue_id >= port->queue_nb) { + printf("Error: queue #%u is invalid\n", queue_id); + return; + } + total = rte_flow_get_q_aged_flows(port_id, queue_id, NULL, 0, &error); + if (total < 0) { + port_flow_complain(&error); + return; + } + printf("Port %u queue %u total aged flows: %d\n", + port_id, queue_id, total); + if (total == 0) + return; + contexts = calloc(total, sizeof(void *)); + if (contexts == NULL) { + printf("Cannot allocate contexts for aged flow\n"); + return; + } + printf("%-20s\tID\tGroup\tPrio\tAttr\n", "Type"); + nb_context = rte_flow_get_q_aged_flows(port_id, queue_id, contexts, + total, &error); + if (nb_context > total) { + printf("Port %u queue %u get aged flows count(%d) > total(%d)\n", + port_id, queue_id, nb_context, total); + free(contexts); + return; + } + if (destroy) { + rules = malloc(sizeof(uint32_t) * nb_context); + if (rules == NULL) + printf("Cannot allocate memory for destroy aged flow\n"); + } + total = 0; + for (idx = 0; idx < nb_context; idx++) { + if (!contexts[idx]) { + printf("Error: get Null context in port %u queue %u\n", + port_id, queue_id); + continue; + } + type = (enum age_action_context_type *)contexts[idx]; + switch (*type) { + case ACTION_AGE_CONTEXT_TYPE_FLOW: + ctx.pf = container_of(type, struct port_flow, age_type); + printf("%-20s\t%" PRIu32 "\t%" PRIu32 "\t%" PRIu32 + "\t%c%c%c\t\n", + "Flow", + ctx.pf->id, + ctx.pf->rule.attr->group, + ctx.pf->rule.attr->priority, + ctx.pf->rule.attr->ingress ? 'i' : '-', + ctx.pf->rule.attr->egress ? 'e' : '-', + ctx.pf->rule.attr->transfer ? 
't' : '-'); + if (rules != NULL) { + rules[total] = ctx.pf->id; + total++; + } + break; + case ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION: + ctx.pia = container_of(type, + struct port_indirect_action, + age_type); + printf("%-20s\t%" PRIu32 "\n", "Indirect action", + ctx.pia->id); + break; + default: + printf("Error: invalid context type %u\n", port_id); + break; + } + } + if (rules != NULL) { + port_queue_aged_flow_destroy(port_id, queue_id, rules, total); + free(rules); + } + printf("\n%d flows destroyed\n", total); + free(contexts); +} + /** Pull queue operation results from the queue. */ int port_queue_flow_pull(portid_t port_id, queueid_t queue_id) diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index acdb7e855d..918c2377d8 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -941,6 +941,7 @@ int port_queue_action_handle_query(portid_t port_id, uint32_t queue_id, bool postpone, uint32_t id); int port_queue_flow_push(portid_t port_id, queueid_t queue_id); int port_queue_flow_pull(portid_t port_id, queueid_t queue_id); +void port_queue_flow_aged(portid_t port_id, uint32_t queue_id, uint8_t destroy); int port_flow_validate(portid_t port_id, const struct rte_flow_attr *attr, const struct rte_flow_item *pattern, diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index a8b99c8c19..8e21b2a5b7 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -2894,9 +2894,10 @@ following sections. [meters_number {number}] [flags {number}] - Create a pattern template:: + flow pattern_template {port_id} create [pattern_template_id {id}] [relaxed {boolean}] [ingress] [egress] [transfer] - template {item} [/ {item} [...]] / end + template {item} [/ {item} [...]] / end - Destroy a pattern template:: @@ -2995,6 +2996,10 @@ following sections. flow aged {port_id} [destroy] +- Enqueue list and destroy aged flow rules:: + + flow queue {port_id} aged {queue_id} [destroy] + - Tunnel offload - create a tunnel stub:: flow tunnel create {port_id} type {tunnel_type} @@ -4236,7 +4241,7 @@ Disabling isolated mode:: testpmd> Dumping HW internal information -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``flow dump`` dumps the hardware's internal representation information of all flows. It is bound to ``rte_flow_dev_dump()``:: @@ -4252,10 +4257,10 @@ Otherwise, it will complain error occurred:: Caught error type [...] ([...]): [...] Listing and destroying aged flow rules -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``flow aged`` simply lists aged flow rules be get from api ``rte_flow_get_aged_flows``, -and ``destroy`` parameter can be used to destroy those flow rules in PMD. +and ``destroy`` parameter can be used to destroy those flow rules in PMD:: flow aged {port_id} [destroy] @@ -4290,7 +4295,7 @@ will be ID 3, ID 1, ID 0:: 1 0 0 i-- 0 0 0 i-- -If attach ``destroy`` parameter, the command will destroy all the list aged flow rules. 
+If attach ``destroy`` parameter, the command will destroy all the list aged flow rules:: testpmd> flow aged 0 destroy Port 0 total aged flows: 4 @@ -4308,6 +4313,77 @@ If attach ``destroy`` parameter, the command will destroy all the list aged flow testpmd> flow aged 0 Port 0 total aged flows: 0 + +Enqueueing listing and destroying aged flow rules +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow queue aged`` simply lists aged flow rules be get from +``rte_flow_get_q_aged_flows`` API, and ``destroy`` parameter can be used to +destroy those flow rules in PMD:: + + flow queue {port_id} aged {queue_id} [destroy] + +Listing current aged flow rules:: + + testpmd> flow queue 0 aged 0 + Port 0 queue 0 total aged flows: 0 + testpmd> flow queue 0 create 0 ingress tanle 0 item_template 0 action_template 0 + pattern eth / ipv4 src is 2.2.2.14 / end + actions age timeout 5 / queue index 0 / end + Flow rule #0 creation enqueued + testpmd> flow queue 0 create 0 ingress tanle 0 item_template 0 action_template 0 + pattern eth / ipv4 src is 2.2.2.15 / end + actions age timeout 4 / queue index 0 / end + Flow rule #1 creation enqueued + testpmd> flow queue 0 create 0 ingress tanle 0 item_template 0 action_template 0 + pattern eth / ipv4 src is 2.2.2.16 / end + actions age timeout 4 / queue index 0 / end + Flow rule #2 creation enqueued + testpmd> flow queue 0 create 0 ingress tanle 0 item_template 0 action_template 0 + pattern eth / ipv4 src is 2.2.2.17 / end + actions age timeout 4 / queue index 0 / end + Flow rule #3 creation enqueued + testpmd> flow pull 0 queue 0 + Queue #0 pulled 4 operations (0 failed, 4 succeeded) + +Aged Rules are simply list as command ``flow queue {port_id} list {queue_id}``, +but strip the detail rule information, all the aged flows are sorted by the +longest timeout time. For example, if those rules is configured in the same time, +ID 2 will be the first aged out rule, the next will be ID 3, ID 1, ID 0:: + + testpmd> flow queue 0 aged 0 + Port 0 queue 0 total aged flows: 4 + ID Group Prio Attr + 2 0 0 --- + 3 0 0 --- + 1 0 0 --- + 0 0 0 --- + + 0 flows destroyed + +If attach ``destroy`` parameter, the command will destroy all the list aged flow rules:: + + testpmd> flow queue 0 aged 0 destroy + Port 0 queue 0 total aged flows: 4 + ID Group Prio Attr + 2 0 0 --- + 3 0 0 --- + 1 0 0 --- + 0 0 0 --- + Flow rule #2 destruction enqueued + Flow rule #3 destruction enqueued + Flow rule #1 destruction enqueued + Flow rule #0 destruction enqueued + + 4 flows destroyed + testpmd> flow queue 0 aged 0 + Port 0 total aged flows: 0 + +.. note:: + + The queue must be empty before attaching ``destroy`` parameter. 
+ + Creating indirect actions ~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c index d11ba270db..7d0c24366c 100644 --- a/lib/ethdev/rte_flow.c +++ b/lib/ethdev/rte_flow.c @@ -1132,6 +1132,28 @@ rte_flow_get_aged_flows(uint16_t port_id, void **contexts, NULL, rte_strerror(ENOTSUP)); } +int +rte_flow_get_q_aged_flows(uint16_t port_id, uint32_t queue_id, void **contexts, + uint32_t nb_contexts, struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + int ret; + + if (unlikely(!ops)) + return -rte_errno; + if (likely(!!ops->get_q_aged_flows)) { + fts_enter(dev); + ret = ops->get_q_aged_flows(dev, queue_id, contexts, + nb_contexts, error); + fts_exit(dev); + return flow_err(port_id, ret, error); + } + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, rte_strerror(ENOTSUP)); +} + struct rte_flow_action_handle * rte_flow_action_handle_create(uint16_t port_id, const struct rte_flow_indir_action_conf *conf, diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index a93ec796cb..64ec8f0903 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -2639,6 +2639,7 @@ enum rte_flow_action_type { * flow. * * See struct rte_flow_action_age. + * See function rte_flow_get_q_aged_flows * See function rte_flow_get_aged_flows * see enum RTE_ETH_EVENT_FLOW_AGED * See struct rte_flow_query_age @@ -2784,8 +2785,8 @@ struct rte_flow_action_queue { * on the flow. RTE_ETH_EVENT_FLOW_AGED event is triggered when a * port detects new aged-out flows. * - * The flow context and the flow handle will be reported by the - * rte_flow_get_aged_flows API. + * The flow context and the flow handle will be reported by the either + * rte_flow_get_aged_flows or rte_flow_get_q_aged_flows APIs. */ struct rte_flow_action_age { uint32_t timeout:24; /**< Time in seconds. */ @@ -4314,6 +4315,50 @@ int rte_flow_get_aged_flows(uint16_t port_id, void **contexts, uint32_t nb_contexts, struct rte_flow_error *error); +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Get aged-out flows of a given port on the given flow queue. + * + * If application configure port attribute with RTE_FLOW_PORT_FLAG_STRICT_QUEUE, + * there is no RTE_ETH_EVENT_FLOW_AGED event and this function must be called to + * get the aged flows synchronously. + * + * If application configure port attribute without + * RTE_FLOW_PORT_FLAG_STRICT_QUEUE, RTE_ETH_EVENT_FLOW_AGED event will be + * triggered at least one new aged out flow was detected on any flow queue after + * the last call to rte_flow_get_q_aged_flows. + * In addition, the @p queue_id will be ignored. + * This function can be called to get the aged flows asynchronously from the + * event callback or synchronously regardless the event. + * + * @param[in] port_id + * Port identifier of Ethernet device. + * @param[in] queue_id + * Flow queue to query. Ignored when RTE_FLOW_PORT_FLAG_STRICT_QUEUE not set. + * @param[in, out] contexts + * The address of an array of pointers to the aged-out flows contexts. + * @param[in] nb_contexts + * The length of context array pointers. + * @param[out] error + * Perform verbose error reporting if not NULL. Initialized in case of + * error only. + * + * @return + * if nb_contexts is 0, return the amount of all aged contexts. + * if nb_contexts is not 0 , return the amount of aged flows reported + * in the context array, otherwise negative errno value. 
+ * + * @see rte_flow_action_age + * @see RTE_ETH_EVENT_FLOW_AGED + * @see rte_flow_port_flag + */ +__rte_experimental +int +rte_flow_get_q_aged_flows(uint16_t port_id, uint32_t queue_id, void **contexts, + uint32_t nb_contexts, struct rte_flow_error *error); + /** * Specify indirect action object configuration */ diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h index 7289deb538..c7d0699c91 100644 --- a/lib/ethdev/rte_flow_driver.h +++ b/lib/ethdev/rte_flow_driver.h @@ -84,6 +84,13 @@ struct rte_flow_ops { void **context, uint32_t nb_contexts, struct rte_flow_error *err); + /** See rte_flow_get_q_aged_flows() */ + int (*get_q_aged_flows) + (struct rte_eth_dev *dev, + uint32_t queue_id, + void **contexts, + uint32_t nb_contexts, + struct rte_flow_error *error); /** See rte_flow_action_handle_create() */ struct rte_flow_action_handle *(*action_handle_create) (struct rte_eth_dev *dev, diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index e749678b96..17201fbe0f 100644 --- a/lib/ethdev/version.map +++ b/lib/ethdev/version.map @@ -295,6 +295,7 @@ EXPERIMENTAL { rte_eth_rx_descriptor_dump; rte_eth_tx_descriptor_dump; rte_flow_async_action_handle_query; + rte_flow_get_q_aged_flows; rte_mtr_meter_policy_get; rte_mtr_meter_profile_get; }; From patchwork Wed Oct 19 13:12:28 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michael Baum X-Patchwork-Id: 118568 X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 2BBB6A0584; Wed, 19 Oct 2022 15:13:08 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 1BFDA42BA5; Wed, 19 Oct 2022 15:13:08 +0200 (CEST) Received: from NAM12-BN8-obe.outbound.protection.outlook.com (mail-bn8nam12on2074.outbound.protection.outlook.com [40.107.237.74]) by mails.dpdk.org (Postfix) with ESMTP id 777A2410D1 for ; Wed, 19 Oct 2022 15:13:06 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=SQ7gJPnmU3JXgydi9fxb6jQCUzTDjTZUUSAUb7gVa5MpNxqjIsUDM9AeU370y4cFZeT2f6NMXkfIrOxK20Bv4MgiQewmv2Su5VCFJ2w8HqCnpTb4/QrpZT8uo8tujL5Rn9reEKfxrDp16Dq97T78Qm/uOWOSnqZwsGuyT1vSar6qx/wniBPjN91U54bq+LC8o4IwyaWWV8F/LHU6Q715q9GmMdoc5lS9xGFZZJcpRBPo+VUt8yapvbRuT+3mYMmHLPbZaKQlcbDl3EdhL1jHB94aafWckJXd3i1Oydb7cHdxUQAtSbxo9Tdq4AyBL/9uTTIW6cLvrCgAP5L+v8fQlw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=WrOOVdua0riNmSpgXhnlvku6mm8oGFh2OJal5KViGeA=; b=KmLUHCP5WVE+DLttS8sDgyrS9mu3VWbH35vRkb/jYyaWVIeopr6NVDI6ZkCshCz2Icbd9cEXIz1pfzCvQj7S6irYCuve1o3QbtJTWSR8QRm12GrslUPPmSR76l1sJ+YcB3otypDNwPrt4ZLOrzR8xGNzkcW1XNU0iqVkq+OSIxuVo/lgiw8wi+Sano9qqkv2Ccl55xQmHADbPOVF4dZ/AW00MfWMG2Lp6T+1AjQTO5a5q//wnx5Oi6udMm3PEw/rjd/DPM52ojtwtDoXujgAJu6g0XjlnJ4BDgcBOkIoNKK/DDUCWIdMDrZiAbq9Epdhj+zOL1OUDVcnC1dKRP5tcQ== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=none (sender ip is 216.228.117.161) smtp.rcpttodomain=dpdk.org smtp.mailfrom=nvidia.com; dmarc=fail (p=reject sp=reject pct=100) action=oreject header.from=nvidia.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; 
From: Michael Baum
CC: Matan Azrad, Raslan Darawsheh, Ori Kam
Subject: [PATCH v2 3/3] ethdev: add structure for indirect AGE update
Date: Wed, 19 Oct 2022 16:12:28 +0300
Message-ID: <20221019131228.2538941-4-michaelba@nvidia.com>
In-Reply-To: <20221019131228.2538941-1-michaelba@nvidia.com>
References: <20220921145409.511328-1-michaelba@nvidia.com> <20221019131228.2538941-1-michaelba@nvidia.com>
Add a new structure for indirect AGE update.

This new structure enables:
1. Updating the timeout value.
2. Stopping AGE checking.
3. Starting AGE checking.
4. Restarting AGE checking.
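For illustration only (not part of the patch itself), here is a minimal sketch of how an application might use the new structure with the existing rte_flow_action_handle_update() API. The helper name, the 10-second timeout and the assumption that `handle` was previously created for an RTE_FLOW_ACTION_TYPE_AGE indirect action are hypothetical:

#include <rte_flow.h>

/* Sketch: change the timeout of an existing indirect AGE action to
 * 10 seconds and reset its sec_since_last_hit counter ("touch" it).
 * Setting .timeout to 0 instead would stop aging for this action.
 */
static int
example_update_indirect_age(uint16_t port_id,
                            struct rte_flow_action_handle *handle)
{
        struct rte_flow_error error;
        struct rte_flow_update_age age_update = {
                .timeout_valid = 1, /* the timeout field below is meaningful */
                .timeout = 10,      /* new timeout, in seconds (24-bit field) */
                .touch = 1,         /* behave as if traffic just hit the flow */
        };

        return rte_flow_action_handle_update(port_id, handle,
                                             &age_update, &error);
}
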
Signed-off-by: Michael Baum Acked-by: Ori Kam --- app/test-pmd/cmdline_flow.c | 66 ++++++++++++++++++++++++++++++ app/test-pmd/config.c | 18 +++++--- doc/guides/prog_guide/rte_flow.rst | 25 +++++++++-- lib/ethdev/rte_flow.h | 28 +++++++++++++ 4 files changed, 128 insertions(+), 9 deletions(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 992aeb95b3..88108498e0 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -586,6 +586,9 @@ enum index { ACTION_SET_IPV6_DSCP_VALUE, ACTION_AGE, ACTION_AGE_TIMEOUT, + ACTION_AGE_UPDATE, + ACTION_AGE_UPDATE_TIMEOUT, + ACTION_AGE_UPDATE_TOUCH, ACTION_SAMPLE, ACTION_SAMPLE_RATIO, ACTION_SAMPLE_INDEX, @@ -1874,6 +1877,7 @@ static const enum index next_action[] = { ACTION_SET_IPV4_DSCP, ACTION_SET_IPV6_DSCP, ACTION_AGE, + ACTION_AGE_UPDATE, ACTION_SAMPLE, ACTION_INDIRECT, ACTION_MODIFY_FIELD, @@ -2110,6 +2114,14 @@ static const enum index action_age[] = { ZERO, }; +static const enum index action_age_update[] = { + ACTION_AGE_UPDATE, + ACTION_AGE_UPDATE_TIMEOUT, + ACTION_AGE_UPDATE_TOUCH, + ACTION_NEXT, + ZERO, +}; + static const enum index action_sample[] = { ACTION_SAMPLE, ACTION_SAMPLE_RATIO, @@ -2188,6 +2200,9 @@ static int parse_vc_spec(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); static int parse_vc_conf(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); +static int parse_vc_conf_timeout(struct context *, const struct token *, + const char *, unsigned int, void *, + unsigned int); static int parse_vc_item_ecpri_type(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -6206,6 +6221,30 @@ static const struct token token_list[] = { .next = NEXT(action_age, NEXT_ENTRY(COMMON_UNSIGNED)), .call = parse_vc_conf, }, + [ACTION_AGE_UPDATE] = { + .name = "age_update", + .help = "update aging parameter", + .next = NEXT(action_age_update), + .priv = PRIV_ACTION(AGE, + sizeof(struct rte_flow_update_age)), + .call = parse_vc, + }, + [ACTION_AGE_UPDATE_TIMEOUT] = { + .name = "timeout", + .help = "age timeout update value", + .args = ARGS(ARGS_ENTRY_BF(struct rte_flow_update_age, + timeout, 24)), + .next = NEXT(action_age_update, NEXT_ENTRY(COMMON_UNSIGNED)), + .call = parse_vc_conf_timeout, + }, + [ACTION_AGE_UPDATE_TOUCH] = { + .name = "touch", + .help = "this flow is touched", + .next = NEXT(action_age_update, NEXT_ENTRY(COMMON_BOOLEAN)), + .args = ARGS(ARGS_ENTRY_BF(struct rte_flow_update_age, + touch, 1)), + .call = parse_vc_conf, + }, [ACTION_SAMPLE] = { .name = "sample", .help = "set a sample action", @@ -7045,6 +7084,33 @@ parse_vc_conf(struct context *ctx, const struct token *token, return len; } +/** Parse action configuration field. */ +static int +parse_vc_conf_timeout(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + struct rte_flow_update_age *update; + + (void)size; + if (ctx->curr != ACTION_AGE_UPDATE_TIMEOUT) + return -1; + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. */ + if (!out) + return len; + /* Point to selected object. */ + ctx->object = out->args.vc.data; + ctx->objmask = NULL; + /* Update the timeout is valid. */ + update = (struct rte_flow_update_age *)out->args.vc.data; + update->timeout_valid = 1; + return len; +} + /** Parse eCPRI common header type field. 
*/ static int parse_vc_item_ecpri_type(struct context *ctx, const struct token *token, diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index 18f3543887..d036fff095 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -1886,6 +1886,7 @@ port_action_handle_update(portid_t port_id, uint32_t id, if (!pia) return -EINVAL; switch (pia->type) { + case RTE_FLOW_ACTION_TYPE_AGE: case RTE_FLOW_ACTION_TYPE_CONNTRACK: update = action->conf; break; @@ -2816,17 +2817,22 @@ port_queue_action_handle_update(portid_t port_id, return -EINVAL; } - if (pia->type == RTE_FLOW_ACTION_TYPE_METER_MARK) { + switch (pia->type) { + case RTE_FLOW_ACTION_TYPE_AGE: + update = action->conf; + break; + case RTE_FLOW_ACTION_TYPE_METER_MARK: rte_memcpy(&mtr_update.meter_mark, action->conf, sizeof(struct rte_flow_action_meter_mark)); mtr_update.profile_valid = 1; - mtr_update.policy_valid = 1; - mtr_update.color_mode_valid = 1; - mtr_update.init_color_valid = 1; - mtr_update.state_valid = 1; + mtr_update.policy_valid = 1; + mtr_update.color_mode_valid = 1; + mtr_update.init_color_valid = 1; + mtr_update.state_valid = 1; update = &mtr_update; - } else { + default: update = action; + break; } if (rte_flow_async_action_handle_update(port_id, queue_id, &attr, diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst index 565868aeea..1ce0277e65 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -2737,7 +2737,7 @@ Otherwise, RTE_FLOW_ERROR_TYPE_ACTION error will be returned. Action: ``AGE`` ^^^^^^^^^^^^^^^ -Set ageing timeout configuration to a flow. +Set aging timeout configuration to a flow. Event RTE_ETH_EVENT_FLOW_AGED will be reported if timeout passed without any matching on the flow. @@ -2756,8 +2756,8 @@ timeout passed without any matching on the flow. | ``context`` | user input flow context | +--------------+---------------------------------+ -Query structure to retrieve ageing status information of a -shared AGE action, or a flow rule using the AGE action: +Query structure to retrieve aging status information of an +indirect AGE action, or a flow rule using the AGE action: .. _table_rte_flow_query_age: @@ -2773,6 +2773,25 @@ shared AGE action, or a flow rule using the AGE action: | ``sec_since_last_hit`` | out | Seconds since last traffic hit | +------------------------------+-----+----------------------------------------+ +Update structure to modify the parameters of an indirect AGE action. +The update structure is used by ``rte_flow_action_handle_update()`` function. + +.. _table_rte_flow_update_age: + +.. 
table:: AGE update + + +-------------------+--------------------------------------------------------------+ + | Field | Value | + +===================+==============================================================+ + | ``reserved`` | 6 bits reserved, must be zero | + +-------------------+--------------------------------------------------------------+ + | ``timeout_valid`` | 1 bit, timeout value is valid | + +-------------------+--------------------------------------------------------------+ + | ``timeout`` | 24 bits timeout value | + +-------------------+--------------------------------------------------------------+ + | ``touch`` | 1 bit, touch the AGE action to set ``sec_since_last_hit`` 0 | + +-------------------+--------------------------------------------------------------+ + Action: ``SAMPLE`` ^^^^^^^^^^^^^^^^^^ diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index 64ec8f0903..a2101e0e11 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -2643,6 +2643,7 @@ enum rte_flow_action_type { * See function rte_flow_get_aged_flows * see enum RTE_ETH_EVENT_FLOW_AGED * See struct rte_flow_query_age + * See struct rte_flow_update_age */ RTE_FLOW_ACTION_TYPE_AGE, @@ -2809,6 +2810,33 @@ struct rte_flow_query_age { uint32_t sec_since_last_hit:24; /**< Seconds since last traffic hit. */ }; +/** + * @warning + * @b EXPERIMENTAL: this structure may change without prior notice + * + * RTE_FLOW_ACTION_TYPE_AGE + * + * Update indirect AGE action attributes: + * - Timeout can be updated including stop/start action: + * +-------------+-------------+------------------------------+ + * | Old Timeout | New Timeout | Updating | + * +=============+=============+==============================+ + * | 0 | positive | Start aging with new value | + * +-------------+-------------+------------------------------+ + * | positive | 0 | Stop aging | + * +-------------+-------------+------------------------------+ + * | positive | positive | Change timeout to new value | + * +-------------+-------------+------------------------------+ + * - sec_since_last_hit can be reset. + */ +struct rte_flow_update_age { + uint32_t reserved:6; /**< Reserved, must be zero. */ + uint32_t timeout_valid:1; /**< The timeout is valid for update. */ + uint32_t timeout:24; /**< Time in seconds. */ + /**< Means that aging should assume packet passed the aging. */ + uint32_t touch:1; +}; + /** * @warning * @b EXPERIMENTAL: this structure may change without prior notice