From patchwork Wed Sep 21 14:54:08 2022
X-Patchwork-Submitter: Michael Baum
X-Patchwork-Id: 116573
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Michael Baum
CC: Matan Azrad, Raslan Darawsheh, Ori Kam
Subject: [PATCH 2/3] ethdev: add queue-based API to report aged flow rules
Date: Wed, 21 Sep 2022 17:54:08 +0300
Message-ID: <20220921145409.511328-3-michaelba@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220921145409.511328-1-michaelba@nvidia.com>
References: <20220921145409.511328-1-michaelba@nvidia.com>

When an application uses queue-based flow rule management and operates on
the same flow rule through the same queue, e.g. create/destroy/query, the
API for querying aged flow rules should also take a queue ID parameter,
just like the other queue-based flow APIs. This way, a PMD can work more
efficiently, since resources are isolated per queue and need no
synchronization.

If an application does use queue-based flow management but configures the
port without RTE_FLOW_PORT_FLAG_STRICT_QUEUE, meaning it may operate on a
given flow rule from different queues, the queue ID parameter is ignored.

Signed-off-by: Michael Baum
Acked-by: Ori Kam
---
 app/test-pmd/cmdline_flow.c                 |  17 ++-
 app/test-pmd/config.c                       | 159 +++++++++++++++++++-
 app/test-pmd/testpmd.h                      |   1 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  86 ++++++++++-
 lib/ethdev/rte_flow.c                       |  22 +++
 lib/ethdev/rte_flow.h                       |  48 +++++-
 lib/ethdev/rte_flow_driver.h                |   7 +
 lib/ethdev/version.map                      |   3 +
 8 files changed, 333 insertions(+), 10 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index a982083d27..4fb90a92cb 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -127,6 +127,7 @@ enum index {
 	/* Queue arguments. */
 	QUEUE_CREATE,
 	QUEUE_DESTROY,
+	QUEUE_AGED,
 	QUEUE_INDIRECT_ACTION,
 
 	/* Queue create arguments. */
@@ -1159,6 +1160,7 @@ static const enum index next_table_destroy_attr[] = {
 static const enum index next_queue_subcmd[] = {
 	QUEUE_CREATE,
 	QUEUE_DESTROY,
+	QUEUE_AGED,
 	QUEUE_INDIRECT_ACTION,
 	ZERO,
 };
@@ -2942,6 +2944,13 @@ static const struct token token_list[] = {
 		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
 		.call = parse_qo_destroy,
 	},
+	[QUEUE_AGED] = {
+		.name = "aged",
+		.help = "list and destroy aged flows",
+		.next = NEXT(next_aged_attr, NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+		.call = parse_aged,
+	},
 	[QUEUE_INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.help = "queue indirect actions",
@@ -8640,8 +8649,8 @@ parse_aged(struct context *ctx, const struct token *token,
 	/* Nothing else to do if there is no buffer. */
 	if (!out)
 		return len;
-	if (!out->command) {
-		if (ctx->curr != AGED)
+	if (!out->command || out->command == QUEUE) {
+		if (ctx->curr != AGED && ctx->curr != QUEUE_AGED)
 			return -1;
 		if (sizeof(*out) > size)
 			return -1;
@@ -10496,6 +10505,10 @@ cmd_flow_parsed(const struct buffer *in)
 	case PULL:
 		port_queue_flow_pull(in->port, in->queue);
 		break;
+	case QUEUE_AGED:
+		port_queue_flow_aged(in->port, in->queue,
+				     in->args.aged.destroy);
+		break;
 	case QUEUE_INDIRECT_ACTION_CREATE:
 		port_queue_action_handle_create(
 				in->port, in->queue, in->postpone,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index a2939867c4..31952467fb 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2662,6 +2662,7 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 		       const struct rte_flow_action *actions)
 {
 	struct rte_flow_op_attr op_attr = { .postpone = postpone };
+	struct rte_flow_attr flow_attr = { 0 };
 	struct rte_flow *flow;
 	struct rte_port *port;
 	struct port_flow *pf;
@@ -2713,7 +2714,7 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 		return -EINVAL;
 	}
 
-	pf = port_flow_new(NULL, pattern, actions, &error);
+	pf = port_flow_new(&flow_attr, pattern, actions, &error);
 	if (!pf)
 		return port_flow_complain(&error);
 	if (age) {
@@ -2950,6 +2951,162 @@ port_queue_flow_push(portid_t port_id, queueid_t queue_id)
 	return ret;
 }
 
+/** Destroy aged flow rules through the flow queue and pull the results. */
+static int
+port_queue_aged_flow_destroy(portid_t port_id, queueid_t queue_id,
+			     const uint32_t *rule, int nb_flows)
+{
+	struct rte_port *port = &ports[port_id];
+	struct rte_flow_op_result *res;
+	struct rte_flow_error error;
+	uint32_t n = nb_flows;
+	int ret = 0;
+	int i;
+
+	res = calloc(port->queue_sz, sizeof(struct rte_flow_op_result));
+	if (!res) {
+		printf("Failed to allocate memory for pulled results\n");
+		return -ENOMEM;
+	}
+
+	memset(&error, 0x66, sizeof(error));
+	while (nb_flows > 0) {
+		int success = 0;
+
+		if (n > port->queue_sz)
+			n = port->queue_sz;
+		ret = port_queue_flow_destroy(port_id, queue_id, true, n, rule);
+		if (ret < 0) {
+			free(res);
+			return ret;
+		}
+		ret = rte_flow_push(port_id, queue_id, &error);
+		if (ret < 0) {
+			printf("Failed to push operations in the queue: %s\n",
+			       strerror(-ret));
+			free(res);
+			return ret;
+		}
+		while (success < nb_flows) {
+			ret = rte_flow_pull(port_id, queue_id, res,
+					    port->queue_sz, &error);
+			if (ret < 0) {
+				printf("Failed to pull operation results: %s\n",
+				       strerror(-ret));
+				free(res);
+				return ret;
+			}
+
+			for (i = 0; i < ret; i++) {
+				if (res[i].status == RTE_FLOW_OP_SUCCESS)
+					success++;
+			}
+		}
+		rule += n;
+		nb_flows -= n;
+		n = nb_flows;
+	}
+
+	free(res);
+	return ret;
+}
+
+/** List and optionally destroy all aged flows per queue. */
+void
+port_queue_flow_aged(portid_t port_id, uint32_t queue_id, uint8_t destroy)
+{
+	void **contexts;
+	int nb_context, total = 0, idx;
+	uint32_t *rules = NULL;
+	struct rte_port *port;
+	struct rte_flow_error error;
+	enum age_action_context_type *type;
+	union {
+		struct port_flow *pf;
+		struct port_indirect_action *pia;
+	} ctx;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return;
+	port = &ports[port_id];
+	if (queue_id >= port->queue_nb) {
+		printf("Error: queue #%u is invalid\n", queue_id);
+		return;
+	}
+	total = rte_flow_get_q_aged_flows(port_id, queue_id, NULL, 0, &error);
+	if (total < 0) {
+		port_flow_complain(&error);
+		return;
+	}
+	printf("Port %u queue %u total aged flows: %d\n",
+	       port_id, queue_id, total);
+	if (total == 0)
+		return;
+	contexts = calloc(total, sizeof(void *));
+	if (contexts == NULL) {
+		printf("Cannot allocate contexts for aged flow\n");
+		return;
+	}
+	printf("%-20s\tID\tGroup\tPrio\tAttr\n", "Type");
+	nb_context = rte_flow_get_q_aged_flows(port_id, queue_id, contexts,
+					       total, &error);
+	if (nb_context > total) {
+		printf("Port %u queue %u get aged flows count(%d) > total(%d)\n",
+		       port_id, queue_id, nb_context, total);
+		free(contexts);
+		return;
+	}
+	if (destroy) {
+		rules = malloc(sizeof(uint32_t) * nb_context);
+		if (rules == NULL)
+			printf("Cannot allocate memory for destroy aged flow\n");
+	}
+	total = 0;
+	for (idx = 0; idx < nb_context; idx++) {
+		if (!contexts[idx]) {
+			printf("Error: get Null context in port %u queue %u\n",
+			       port_id, queue_id);
+			continue;
+		}
+		type = (enum age_action_context_type *)contexts[idx];
+		switch (*type) {
+		case ACTION_AGE_CONTEXT_TYPE_FLOW:
+			ctx.pf = container_of(type, struct port_flow, age_type);
+			printf("%-20s\t%" PRIu32 "\t%" PRIu32 "\t%" PRIu32
+			       "\t%c%c%c\t\n",
+			       "Flow",
+			       ctx.pf->id,
+			       ctx.pf->rule.attr->group,
+			       ctx.pf->rule.attr->priority,
+			       ctx.pf->rule.attr->ingress ? 'i' : '-',
+			       ctx.pf->rule.attr->egress ? 'e' : '-',
+			       ctx.pf->rule.attr->transfer ? 't' : '-');
+			if (rules != NULL) {
+				rules[total] = ctx.pf->id;
+				total++;
+			}
+			break;
+		case ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION:
+			ctx.pia = container_of(type,
+					       struct port_indirect_action,
+					       age_type);
+			printf("%-20s\t%" PRIu32 "\n", "Indirect action",
+			       ctx.pia->id);
+			break;
+		default:
+			printf("Error: invalid context type %u\n", port_id);
+			break;
+		}
+	}
+	if (rules != NULL) {
+		port_queue_aged_flow_destroy(port_id, queue_id, rules, total);
+		free(rules);
+	}
+	printf("\n%d flows destroyed\n", total);
+	free(contexts);
+}
+
 /** Pull queue operation results from the queue. */
 int
 port_queue_flow_pull(portid_t port_id, queueid_t queue_id)
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index fb2f5195d3..4e24dd9ee0 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -982,6 +982,7 @@ int port_queue_action_handle_update(portid_t port_id, uint32_t queue_id,
 				    const struct rte_flow_action *action);
 int port_queue_flow_push(portid_t port_id, queueid_t queue_id);
 int port_queue_flow_pull(portid_t port_id, queueid_t queue_id);
+void port_queue_flow_aged(portid_t port_id, uint32_t queue_id, uint8_t destroy);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 6c12e0286c..e68b852e29 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3085,9 +3085,10 @@ following sections.
        [meters_number {number}] [flags {number}]
 
 - Create a pattern template::
+
    flow pattern_template {port_id} create [pattern_template_id {id}]
        [relaxed {boolean}] [ingress] [egress] [transfer]
-       template {item} [/ {item} [...]] / end
+       template {item} [/ {item} [...]] / end
 
 - Destroy a pattern template::
 
@@ -3186,6 +3187,10 @@ following sections.
 
    flow aged {port_id} [destroy]
 
+- Enqueue list and destroy aged flow rules::
+
+   flow queue {port_id} aged {queue_id} [destroy]
+
 - Tunnel offload - create a tunnel stub::
 
    flow tunnel create {port_id} type {tunnel_type}
@@ -4427,7 +4432,7 @@ Disabling isolated mode::
  testpmd>
 
 Dumping HW internal information
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 ``flow dump`` dumps the hardware's internal representation information of
 all flows. It is bound to ``rte_flow_dev_dump()``::
@@ -4443,10 +4448,10 @@ Otherwise, it will complain error occurred::
  Caught error type [...] ([...]): [...]
 
 Listing and destroying aged flow rules
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 ``flow aged`` simply lists aged flow rules be get from api ``rte_flow_get_aged_flows``,
-and ``destroy`` parameter can be used to destroy those flow rules in PMD.
+and ``destroy`` parameter can be used to destroy those flow rules in PMD::
 
    flow aged {port_id} [destroy]
 
@@ -4481,7 +4486,7 @@ will be ID 3, ID 1, ID 0::
    1       0       0       i--
    0       0       0       i--
 
-If attach ``destroy`` parameter, the command will destroy all the list aged flow rules.
+If attach ``destroy`` parameter, the command will destroy all the list aged flow rules::
 
    testpmd> flow aged 0 destroy
    Port 0 total aged flows: 4
@@ -4499,6 +4504,77 @@ If attach ``destroy`` parameter, the command will destroy all the list aged flow
    testpmd> flow aged 0
    Port 0 total aged flows: 0
 
+
+Enqueueing listing and destroying aged flow rules
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue aged`` simply lists the aged flow rules obtained from the
+``rte_flow_get_q_aged_flows`` API, and the ``destroy`` parameter can be used
+to destroy those flow rules in the PMD::
+
+   flow queue {port_id} aged {queue_id} [destroy]
+
+Listing current aged flow rules::
+
+   testpmd> flow queue 0 aged 0
+   Port 0 queue 0 total aged flows: 0
+   testpmd> flow queue 0 create 0 ingress table 0 item_template 0 action_template 0
+      pattern eth / ipv4 src is 2.2.2.14 / end
+      actions age timeout 5 / queue index 0 / end
+   Flow rule #0 creation enqueued
+   testpmd> flow queue 0 create 0 ingress table 0 item_template 0 action_template 0
+      pattern eth / ipv4 src is 2.2.2.15 / end
+      actions age timeout 4 / queue index 0 / end
+   Flow rule #1 creation enqueued
+   testpmd> flow queue 0 create 0 ingress table 0 item_template 0 action_template 0
+      pattern eth / ipv4 src is 2.2.2.16 / end
+      actions age timeout 4 / queue index 0 / end
+   Flow rule #2 creation enqueued
+   testpmd> flow queue 0 create 0 ingress table 0 item_template 0 action_template 0
+      pattern eth / ipv4 src is 2.2.2.17 / end
+      actions age timeout 4 / queue index 0 / end
+   Flow rule #3 creation enqueued
+   testpmd> flow pull 0 queue 0
+   Queue #0 pulled 4 operations (0 failed, 4 succeeded)
+
+Aged rules are listed as with the ``flow queue {port_id} list {queue_id}``
+command, but with the detailed rule information stripped; all the aged flows
+are sorted by the longest timeout. For example, if those rules were configured
+at the same time, ID 2 will be the first aged-out rule, followed by ID 3,
+ID 1 and ID 0::
+
+   testpmd> flow queue 0 aged 0
+   Port 0 queue 0 total aged flows: 4
+   ID      Group   Prio    Attr
+   2       0       0       ---
+   3       0       0       ---
+   1       0       0       ---
+   0       0       0       ---
+
+   0 flows destroyed
+
+If the ``destroy`` parameter is attached, the command will destroy all the
+listed aged flow rules::
+
+   testpmd> flow queue 0 aged 0 destroy
+   Port 0 queue 0 total aged flows: 4
+   ID      Group   Prio    Attr
+   2       0       0       ---
+   3       0       0       ---
+   1       0       0       ---
+   0       0       0       ---
+   Flow rule #2 destruction enqueued
+   Flow rule #3 destruction enqueued
+   Flow rule #1 destruction enqueued
+   Flow rule #0 destruction enqueued
+
+   4 flows destroyed
+   testpmd> flow queue 0 aged 0
+   Port 0 queue 0 total aged flows: 0
+
+.. note::
+
+   The queue must be empty before attaching the ``destroy`` parameter.
+
+
 Creating indirect actions
 ~~~~~~~~~~~~~~~~~~~~~~~~~
 
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 501be9d602..5c95ac7f8b 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1133,6 +1133,28 @@ rte_flow_get_aged_flows(uint16_t port_id, void **contexts,
 				  NULL, rte_strerror(ENOTSUP));
 }
 
+int
+rte_flow_get_q_aged_flows(uint16_t port_id, uint32_t queue_id, void **contexts,
+			  uint32_t nb_contexts, struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->get_q_aged_flows)) {
+		fts_enter(dev);
+		ret = ops->get_q_aged_flows(dev, queue_id, contexts,
+					    nb_contexts, error);
+		fts_exit(dev);
+		return flow_err(port_id, ret, error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
 struct rte_flow_action_handle *
 rte_flow_action_handle_create(uint16_t port_id,
 			      const struct rte_flow_indir_action_conf *conf,
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index c552771472..d830b02321 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -2930,8 +2930,8 @@ struct rte_flow_action_queue {
  * on the flow. RTE_ETH_EVENT_FLOW_AGED event is triggered when a
  * port detects new aged-out flows.
  *
- * The flow context and the flow handle will be reported by the
- * rte_flow_get_aged_flows API.
+ * The flow context and the flow handle will be reported by either the
+ * rte_flow_get_aged_flows or rte_flow_get_q_aged_flows APIs.
  */
 struct rte_flow_action_age {
 	uint32_t timeout:24; /**< Time in seconds. */
@@ -4443,6 +4443,50 @@ int
 rte_flow_get_aged_flows(uint16_t port_id, void **contexts,
 			uint32_t nb_contexts, struct rte_flow_error *error);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Get aged-out flows of a given port on the given flow queue.
+ *
+ * If the application configures the port attribute with
+ * RTE_FLOW_PORT_FLAG_STRICT_QUEUE, there is no RTE_ETH_EVENT_FLOW_AGED event
+ * and this function must be called to get the aged flows synchronously.
+ *
+ * If the application configures the port attribute without
+ * RTE_FLOW_PORT_FLAG_STRICT_QUEUE, the RTE_ETH_EVENT_FLOW_AGED event will be
+ * triggered when at least one new aged-out flow is detected on any flow queue
+ * after the last call to rte_flow_get_q_aged_flows.
+ * In addition, the @p queue_id will be ignored.
+ * This function can be called to get the aged flows asynchronously from the
+ * event callback or synchronously regardless of the event.
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue_id
+ *   Flow queue to query. Ignored when RTE_FLOW_PORT_FLAG_STRICT_QUEUE is not set.
+ * @param[in, out] contexts
+ *   The address of an array of pointers to the aged-out flows contexts.
+ * @param[in] nb_contexts
+ *   The length of context array pointers.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. Initialized in case of
+ *   error only.
+ *
+ * @return
+ *   If nb_contexts is 0, return the number of all aged contexts.
+ *   If nb_contexts is not 0, return the number of aged flows reported
+ *   in the context array, otherwise a negative errno value.
+ *
+ * @see rte_flow_action_age
+ * @see RTE_ETH_EVENT_FLOW_AGED
+ * @see rte_flow_port_flag
+ */
+__rte_experimental
+int
+rte_flow_get_q_aged_flows(uint16_t port_id, uint32_t queue_id, void **contexts,
+			  uint32_t nb_contexts, struct rte_flow_error *error);
+
 /**
  * Specify indirect action object configuration
  */
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index 2bff732d6a..f0a03bf149 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -84,6 +84,13 @@ struct rte_flow_ops {
 		 void **context,
 		 uint32_t nb_contexts,
 		 struct rte_flow_error *err);
+	/** See rte_flow_get_q_aged_flows() */
+	int (*get_q_aged_flows)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 void **contexts,
+		 uint32_t nb_contexts,
+		 struct rte_flow_error *error);
 	/** See rte_flow_action_handle_create() */
 	struct rte_flow_action_handle *(*action_handle_create)
 		(struct rte_eth_dev *dev,
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 03f52fee91..4a40d24d8f 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -285,6 +285,9 @@ EXPERIMENTAL {
 	rte_mtr_color_in_protocol_priority_get;
 	rte_mtr_color_in_protocol_set;
 	rte_mtr_meter_vlan_table_update;
+
+	# added in 22.11
+	rte_flow_get_q_aged_flows;
 };
 
 INTERNAL {
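
Editorial note, not part of the patch: below is a minimal usage sketch of the
new rte_flow_get_q_aged_flows() API from the application side. It assumes the
port was configured with RTE_FLOW_PORT_FLAG_STRICT_QUEUE via
rte_flow_configure() and that rules were created with an age action whose
context points to application data; the function name drain_aged_flows() and
the handling of the returned contexts are hypothetical. It mirrors the
count-then-fetch pattern used by testpmd's port_queue_flow_aged() above.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#include <rte_flow.h>

/* Poll one flow queue of a port for aged-out flows and report their contexts. */
static int
drain_aged_flows(uint16_t port_id, uint32_t queue_id)
{
	struct rte_flow_error error;
	void **contexts;
	int total, n, i;

	/* First call with nb_contexts == 0 returns the number of aged contexts. */
	total = rte_flow_get_q_aged_flows(port_id, queue_id, NULL, 0, &error);
	if (total <= 0)
		return total; /* 0: nothing aged yet, negative: error. */

	contexts = calloc(total, sizeof(*contexts));
	if (contexts == NULL)
		return -1;

	/* Second call fills the array with up to 'total' aged contexts. */
	n = rte_flow_get_q_aged_flows(port_id, queue_id, contexts, total, &error);
	for (i = 0; i < n; i++) {
		/* Application-defined handling, e.g. enqueue asynchronous
		 * destruction of the rule behind contexts[i] on the same
		 * queue, then rte_flow_push()/rte_flow_pull() as usual.
		 */
		printf("aged context %p on port %u queue %u\n",
		       contexts[i], (unsigned int)port_id,
		       (unsigned int)queue_id);
	}
	free(contexts);
	return n;
}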