From patchwork Tue May 24 15:20:41 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Spike Du
X-Patchwork-Id: 111744
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Spike Du
To:
Cc:
Subject: [PATCH v3 7/7] app/testpmd: add LWM and Host Shaper command
Date: Tue, 24 May 2022 18:20:41 +0300
Message-ID: <20220524152041.737154-8-spiked@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20220524152041.737154-1-spiked@nvidia.com>
References: <20220522055900.417282-1-spiked@nvidia.com>
 <20220524152041.737154-1-spiked@nvidia.com>
MIME-Version: 1.0
Add command line options to support LWM per-rxq configuration.

- Command syntax:
  set port <port_id> rxq <rxq_id> lwm <lwm_num>
  mlx5 set port <port_id> host_shaper lwm_triggered <0|1> rate <rate_num>

- Example commands:
  To configure LWM as 30% of rxq size on port 1 rxq 0:
  testpmd> set port 1 rxq 0 lwm 30

  To disable LWM on port 1 rxq 0:
  testpmd> set port 1 rxq 0 lwm 0

  To enable lwm_triggered on port 1 and disable the current host shaper:
  testpmd> mlx5 set port 1 host_shaper lwm_triggered 1 rate 0

  To disable lwm_triggered and the current host shaper on port 1:
  testpmd> mlx5 set port 1 host_shaper lwm_triggered 0 rate 0

  The rate unit is 100Mbps.
  To disable lwm_triggered and configure a 5Gbps shaper on port 1:
  testpmd> mlx5 set port 1 host_shaper lwm_triggered 0 rate 50

Add sample code to handle the rxq LWM event: it delays a while so that the
rxq empties, then disables the host shaper and re-arms the LWM event.

Signed-off-by: Spike Du
---
 app/test-pmd/cmdline.c          |  74 +++++++++++++
 app/test-pmd/config.c           |  21 ++++
 app/test-pmd/meson.build        |   4 +
 app/test-pmd/testpmd.c          |  24 +++++
 app/test-pmd/testpmd.h          |   1 +
 doc/guides/nics/mlx5.rst        |  46 ++++++++
 drivers/net/mlx5/mlx5_testpmd.c | 184 ++++++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_testpmd.h |  27 +++++
 8 files changed, 381 insertions(+)
 create mode 100644 drivers/net/mlx5/mlx5_testpmd.c
 create mode 100644 drivers/net/mlx5/mlx5_testpmd.h
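Note (not part of the patch): a minimal sketch of how an application other
than testpmd could consume the Rx LWM event. It assumes the
RTE_ETH_EVENT_RX_LWM event and the rte_eth_rx_lwm_set()/rte_eth_rx_lwm_query()
API added earlier in this series; the query loop follows the same pattern as
the testpmd handler below, and the app_* names are illustrative only.

    #include <stdio.h>
    #include <rte_common.h>
    #include <rte_ethdev.h>

    /* Hypothetical application-side event callback. */
    static int
    app_lwm_event_cb(uint16_t port_id, enum rte_eth_event_type type,
                     void *cb_arg, void *ret_param)
    {
            uint16_t rxq_id = 0;
            uint8_t lwm;

            RTE_SET_USED(type);
            RTE_SET_USED(cb_arg);
            RTE_SET_USED(ret_param);
            /* The query rewinds rxq_id internally; loop until it returns 0. */
            while (rte_eth_rx_lwm_query(port_id, &rxq_id, &lwm) > 0) {
                    printf("port %u rxq %u reached LWM %u%%\n",
                           port_id, rxq_id, lwm);
                    rxq_id++;
            }
            return 0;
    }

    static void
    app_lwm_setup(uint16_t port_id, uint16_t rxq_id)
    {
            /* Raise the event when the Rx queue fullness crosses 70%. */
            rte_eth_rx_lwm_set(port_id, rxq_id, 70);
            rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_RX_LWM,
                                          app_lwm_event_cb, NULL);
    }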
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 1e5b294ab3..86342f2ac6 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -67,6 +67,9 @@
 #include "cmdline_mtr.h"
 #include "cmdline_tm.h"
 #include "bpf_cmd.h"
+#ifdef RTE_NET_MLX5
+#include "mlx5_testpmd.h"
+#endif
 
 static struct cmdline *testpmd_cl;
 
@@ -17804,6 +17807,73 @@ cmdline_parse_inst_t cmd_show_port_flow_transfer_proxy = {
 	}
 };
 
+/* *** SET LIMIT WATERMARK FOR A RXQ OF A PORT *** */
+struct cmd_rxq_lwm_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t port;
+	uint16_t port_num;
+	cmdline_fixed_string_t rxq;
+	uint16_t rxq_num;
+	cmdline_fixed_string_t lwm;
+	uint16_t lwm_num;
+};
+
+static void cmd_rxq_lwm_parsed(void *parsed_result,
+		__rte_unused struct cmdline *cl,
+		__rte_unused void *data)
+{
+	struct cmd_rxq_lwm_result *res = parsed_result;
+	int ret = 0;
+
+	if ((strcmp(res->set, "set") == 0) && (strcmp(res->port, "port") == 0)
+	    && (strcmp(res->rxq, "rxq") == 0)
+	    && (strcmp(res->lwm, "lwm") == 0))
+		ret = set_rxq_lwm(res->port_num, res->rxq_num,
+				  res->lwm_num);
+	if (ret < 0)
+		printf("rxq_lwm_cmd error: (%s)\n", strerror(-ret));
+
+}
+
+cmdline_parse_token_string_t cmd_rxq_lwm_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_rxq_lwm_result,
+				 set, "set");
+cmdline_parse_token_string_t cmd_rxq_lwm_port =
+	TOKEN_STRING_INITIALIZER(struct cmd_rxq_lwm_result,
+				 port, "port");
+cmdline_parse_token_num_t cmd_rxq_lwm_portnum =
+	TOKEN_NUM_INITIALIZER(struct cmd_rxq_lwm_result,
+			      port_num, RTE_UINT16);
+cmdline_parse_token_string_t cmd_rxq_lwm_rxq =
+	TOKEN_STRING_INITIALIZER(struct cmd_rxq_lwm_result,
+				 rxq, "rxq");
+cmdline_parse_token_num_t cmd_rxq_lwm_rxqnum =
+	TOKEN_NUM_INITIALIZER(struct cmd_rxq_lwm_result,
+			      rxq_num, RTE_UINT16);
+cmdline_parse_token_string_t cmd_rxq_lwm_lwm =
+	TOKEN_STRING_INITIALIZER(struct cmd_rxq_lwm_result,
+				 lwm, "lwm");
+cmdline_parse_token_num_t cmd_rxq_lwm_lwmnum =
+	TOKEN_NUM_INITIALIZER(struct cmd_rxq_lwm_result,
+			      lwm_num, RTE_UINT16);
+
+cmdline_parse_inst_t cmd_rxq_lwm = {
+	.f = cmd_rxq_lwm_parsed,
+	.data = (void *)0,
+	.help_str = "set port <port_id> rxq <rxq_id> lwm <lwm_num> "
+		"Set lwm for rxq on port_id",
+	.tokens = {
+		(void *)&cmd_rxq_lwm_set,
+		(void *)&cmd_rxq_lwm_port,
+		(void *)&cmd_rxq_lwm_portnum,
+		(void *)&cmd_rxq_lwm_rxq,
+		(void *)&cmd_rxq_lwm_rxqnum,
+		(void *)&cmd_rxq_lwm_lwm,
+		(void *)&cmd_rxq_lwm_lwmnum,
+		NULL,
+	},
+};
+
 /* ******************************************************************** */
 
 /* list of instructions */
@@ -18091,6 +18161,10 @@ cmdline_parse_ctx_t main_ctx[] = {
 	(cmdline_parse_inst_t *)&cmd_show_capability,
 	(cmdline_parse_inst_t *)&cmd_set_flex_is_pattern,
 	(cmdline_parse_inst_t *)&cmd_set_flex_spec_pattern,
+	(cmdline_parse_inst_t *)&cmd_rxq_lwm,
+#ifdef RTE_NET_MLX5
+	(cmdline_parse_inst_t *)&mlx5_test_cmd_port_host_shaper,
+#endif
 	NULL,
 };
 
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 1b1e738f83..a752c6367f 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -6342,3 +6342,24 @@ show_mcast_macs(portid_t port_id)
 		printf("  %s\n", buf);
 	}
 }
+
+int
+set_rxq_lwm(portid_t port_id, uint16_t queue_idx, uint16_t lwm)
+{
+	struct rte_eth_link link;
+	int ret;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN))
+		return -EINVAL;
+	ret = eth_link_get_nowait_print_err(port_id, &link);
+	if (ret < 0)
+		return -EINVAL;
+	if (lwm > 99)
+		return -EINVAL;
+	ret = rte_eth_rx_lwm_set(port_id, queue_idx, lwm);
+
+	if (ret)
+		return ret;
+	return 0;
+}
+
diff --git a/app/test-pmd/meson.build b/app/test-pmd/meson.build
index 43130c8856..c3577a02c1 100644
--- a/app/test-pmd/meson.build
+++ b/app/test-pmd/meson.build
@@ -73,3 +73,7 @@ endif
 if dpdk_conf.has('RTE_NET_DPAA')
     deps += ['bus_dpaa', 'mempool_dpaa', 'net_dpaa']
 endif
+if dpdk_conf.has('RTE_NET_MLX5')
+    deps += 'net_mlx5'
+    sources += files('../../drivers/net/mlx5/mlx5_testpmd.c')
+endif
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 777763f749..ee6693dddf 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -69,6 +69,9 @@
 #ifdef RTE_NET_BOND
 #include
 #endif
+#ifdef RTE_NET_MLX5
+#include "mlx5_testpmd.h"
+#endif
 
 #include "testpmd.h"
 
@@ -420,6 +423,7 @@ static const char * const eth_event_desc[] = {
 	[RTE_ETH_EVENT_NEW] = "device probed",
 	[RTE_ETH_EVENT_DESTROY] = "device released",
 	[RTE_ETH_EVENT_FLOW_AGED] = "flow aged",
+	[RTE_ETH_EVENT_RX_LWM] = "rxq limit reached",
 	[RTE_ETH_EVENT_MAX] = NULL,
 };
 
@@ -3616,6 +3620,10 @@ static int
 eth_event_callback(portid_t port_id, enum rte_eth_event_type type, void *param,
 		   void *ret_param)
 {
+	struct rte_eth_dev_info dev_info;
+	uint16_t rxq_id;
+	uint8_t lwm;
+	int ret;
 	RTE_SET_USED(param);
 	RTE_SET_USED(ret_param);
 
@@ -3647,6 +3655,22 @@ eth_event_callback(portid_t port_id, enum rte_eth_event_type type, void *param,
 		ports[port_id].port_status = RTE_PORT_CLOSED;
 		printf("Port %u is closed\n", port_id);
 		break;
+	case RTE_ETH_EVENT_RX_LWM:
+		ret = rte_eth_dev_info_get(port_id, &dev_info);
+		if (ret != 0)
+			break;
+		/* LWM query API rewinds rxq_id, no need to check max rxq num. */
+		for (rxq_id = 0; ; rxq_id++) {
+			ret = rte_eth_rx_lwm_query(port_id, &rxq_id, &lwm);
+			if (ret <= 0)
+				break;
+			printf("Received LWM event, port:%d rxq_id:%d\n",
+			       port_id, rxq_id);
+#ifdef RTE_NET_MLX5
+			mlx5_test_lwm_event_handler(port_id, rxq_id);
+#endif
+		}
+		break;
 	default:
 		break;
 	}
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index f04a9a11b4..f2ecbe7013 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -1166,6 +1166,7 @@ int update_jumbo_frame_offload(portid_t portid);
 void flex_item_create(portid_t port_id, uint16_t flex_id, const char *filename);
 void flex_item_destroy(portid_t port_id, uint16_t flex_id);
 void port_flex_item_flush(portid_t port_id);
+int set_rxq_lwm(portid_t port_id, uint16_t queue_idx, uint16_t lwm);
 
 extern int flow_parse(const char *src, void *result, unsigned int size,
 		      struct rte_flow_attr **attr,
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 3da6f5a03c..fb1c957544 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -1718,3 +1718,49 @@ on the host port by the firmware upon receiving the LWM event, which allows
 throttling host traffic on LWM events at minimum latency, preventing excess
 drops in the Rx queue.
 
+How to use LWM and Host Shaper
+------------------------------
+
+Testpmd provides sample commands to configure LWM and sample LWM event logic.
+The typical workflow is: testpmd configures LWM for the Rx queues, enables
+lwm_triggered in the host shaper and registers a callback. When host traffic
+is too high and Rx queue fullness rises above LWM, the PMD receives an event,
+the firmware automatically configures a 100Mbps shaper on the host port, and
+the PMD then runs the registered callback, which waits a while to let the Rx
+queue empty and then disables the host shaper.
+
+Let's assume a simple BlueField-2 setup: port 0 is the uplink, port 1 is a
+VF representor, and each port has 2 Rx queues.
+To control traffic from the host to the Arm cores, enable LWM in testpmd by:
+
+.. code-block:: console
+
+   testpmd> mlx5 set port 1 host_shaper lwm_triggered 1 rate 0
+   testpmd> set port 1 rxq 0 lwm 70
+   testpmd> set port 1 rxq 1 lwm 70
+
+The first command disables the current host shaper and enables the LWM
+triggered mode. The remaining commands configure the LWM to 70% of the Rx
+queue size for both Rx queues.
+When host traffic is too high, the testpmd console prints a log about the
+received LWM event, after which the host shaper is disabled.
+The host traffic rate is throttled and fewer drops happen in the Rx queues.
+
+To disable LWM and lwm_triggered, invoke the following commands in testpmd:
+
+.. code-block:: console
+
+   testpmd> mlx5 set port 1 host_shaper lwm_triggered 0 rate 0
+   testpmd> set port 1 rxq 0 lwm 0
+   testpmd> set port 1 rxq 1 lwm 0
+
+It is recommended that an application disables LWM and lwm_triggered before
+exiting, if it enabled them.
+
+The shaper can also be configured with a fixed rate; the rate unit is 100Mbps.
+The command below sets the current shaper to 5Gbps and disables lwm_triggered.
+
+.. code-block:: console
+
+   testpmd> mlx5 set port 1 host_shaper lwm_triggered 0 rate 50
+
diff --git a/drivers/net/mlx5/mlx5_testpmd.c b/drivers/net/mlx5/mlx5_testpmd.c
new file mode 100644
index 0000000000..122d6cbc4f
--- /dev/null
+++ b/drivers/net/mlx5/mlx5_testpmd.c
@@ -0,0 +1,184 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 6WIND S.A.
+ * Copyright 2021 Mellanox Technologies, Ltd
+ */
+
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include "mlx5_testpmd.h"
+
+static uint8_t host_shaper_lwm_triggered[RTE_MAX_ETHPORTS];
+#define SHAPER_DISABLE_DELAY_US 100000 /* 100ms */
+
+/**
+ * Disable the host shaper and re-arm LWM event.
+ *
+ * @param[in] args
+ *   uint32_t integer combining port_id and rxq_id.
+ */
+static void
+mlx5_test_host_shaper_disable(void *args)
+{
+	uint32_t port_rxq_id = (uint32_t)(uintptr_t)args;
+	uint16_t port_id = port_rxq_id & 0xffff;
+	uint16_t qid = (port_rxq_id >> 16) & 0xffff;
+	struct rte_eth_rxq_info qinfo;
+
+	printf("%s disable shaper\n", __func__);
+	if (rte_eth_rx_queue_info_get(port_id, qid, &qinfo)) {
+		printf("rx_queue_info_get returns error\n");
+		return;
+	}
+	/* Rearm the LWM event. */
+	if (rte_eth_rx_lwm_set(port_id, qid, qinfo.lwm)) {
+		printf("config lwm returns error\n");
+		return;
+	}
+	/* Only disable the shaper when lwm_triggered is set. */
+	if (host_shaper_lwm_triggered[port_id] &&
+	    rte_pmd_mlx5_host_shaper_config(port_id, 0, 0))
+		printf("%s disable shaper returns error\n", __func__);
+}
+
+void
+mlx5_test_lwm_event_handler(uint16_t port_id, uint16_t rxq_id)
+{
+	uint32_t port_rxq_id = port_id | (rxq_id << 16);
+
+	rte_eal_alarm_set(SHAPER_DISABLE_DELAY_US,
+			  mlx5_test_host_shaper_disable,
+			  (void *)(uintptr_t)port_rxq_id);
+	printf("%s port_id:%u rxq_id:%u\n", __func__, port_id, rxq_id);
+}
+
+/**
+ * Configure host shaper's lwm_triggered and current rate.
+ *
+ * @param[in] lwm_triggered
+ *   Disable/enable lwm_triggered.
+ * @param[in] rate
+ *   Configure current host shaper rate.
+ * @return
+ *   On success, returns 0.
+ *   On failure, returns < 0.
+ */
+static int
+mlx5_test_set_port_host_shaper(uint16_t port_id, uint16_t lwm_triggered, uint8_t rate)
+{
+	struct rte_eth_link link;
+	bool port_id_valid = false;
+	uint16_t pid;
+	int ret;
+
+	RTE_ETH_FOREACH_DEV(pid)
+		if (port_id == pid) {
+			port_id_valid = true;
+			break;
+		}
+	if (!port_id_valid)
+		return -EINVAL;
+	ret = rte_eth_link_get_nowait(port_id, &link);
+	if (ret < 0)
+		return ret;
+	host_shaper_lwm_triggered[port_id] = lwm_triggered ? 1 : 0;
+	if (!lwm_triggered) {
+		ret = rte_pmd_mlx5_host_shaper_config(port_id, 0,
+			RTE_BIT32(MLX5_HOST_SHAPER_FLAG_LWM_TRIGGERED));
+	} else {
+		ret = rte_pmd_mlx5_host_shaper_config(port_id, 1,
+			RTE_BIT32(MLX5_HOST_SHAPER_FLAG_LWM_TRIGGERED));
+	}
+	if (ret)
+		return ret;
+	ret = rte_pmd_mlx5_host_shaper_config(port_id, rate, 0);
+	if (ret)
+		return ret;
+	return 0;
+}
+
+/* *** SET HOST_SHAPER FOR A PORT *** */
+struct cmd_port_host_shaper_result {
+	cmdline_fixed_string_t mlx5;
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t port;
+	uint16_t port_num;
+	cmdline_fixed_string_t host_shaper;
+	cmdline_fixed_string_t lwm_triggered;
+	uint16_t fr;
+	cmdline_fixed_string_t rate;
+	uint8_t rate_num;
+};
+
+static void cmd_port_host_shaper_parsed(void *parsed_result,
+		__rte_unused struct cmdline *cl,
+		__rte_unused void *data)
+{
+	struct cmd_port_host_shaper_result *res = parsed_result;
+	int ret = 0;
+
+	if ((strcmp(res->mlx5, "mlx5") == 0) &&
+	    (strcmp(res->set, "set") == 0) &&
+	    (strcmp(res->port, "port") == 0) &&
+	    (strcmp(res->host_shaper, "host_shaper") == 0) &&
+	    (strcmp(res->lwm_triggered, "lwm_triggered") == 0) &&
+	    (strcmp(res->rate, "rate") == 0))
+		ret = mlx5_test_set_port_host_shaper(res->port_num, res->fr,
+						     res->rate_num);
+	if (ret < 0)
+		printf("cmd_port_host_shaper error: (%s)\n", strerror(-ret));
+}
+
+cmdline_parse_token_string_t cmd_port_host_shaper_mlx5 =
+	TOKEN_STRING_INITIALIZER(struct cmd_port_host_shaper_result,
+				 mlx5, "mlx5");
+cmdline_parse_token_string_t cmd_port_host_shaper_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_port_host_shaper_result,
+				 set, "set");
+cmdline_parse_token_string_t cmd_port_host_shaper_port =
+	TOKEN_STRING_INITIALIZER(struct cmd_port_host_shaper_result,
+				 port, "port");
+cmdline_parse_token_num_t cmd_port_host_shaper_portnum =
+	TOKEN_NUM_INITIALIZER(struct cmd_port_host_shaper_result,
+			      port_num, RTE_UINT16);
+cmdline_parse_token_string_t cmd_port_host_shaper_host_shaper =
+	TOKEN_STRING_INITIALIZER(struct cmd_port_host_shaper_result,
+				 host_shaper, "host_shaper");
+cmdline_parse_token_string_t cmd_port_host_shaper_lwm_triggered =
+	TOKEN_STRING_INITIALIZER(struct cmd_port_host_shaper_result,
+				 lwm_triggered, "lwm_triggered");
+cmdline_parse_token_num_t cmd_port_host_shaper_fr =
+	TOKEN_NUM_INITIALIZER(struct cmd_port_host_shaper_result,
+			      fr, RTE_UINT16);
+cmdline_parse_token_string_t cmd_port_host_shaper_rate =
+	TOKEN_STRING_INITIALIZER(struct cmd_port_host_shaper_result,
+				 rate, "rate");
+cmdline_parse_token_num_t cmd_port_host_shaper_rate_num =
+	TOKEN_NUM_INITIALIZER(struct cmd_port_host_shaper_result,
+			      rate_num, RTE_UINT8);
+cmdline_parse_inst_t mlx5_test_cmd_port_host_shaper = {
+	.f = cmd_port_host_shaper_parsed,
+	.data = (void *)0,
+	.help_str = "mlx5 set port <port_id> host_shaper lwm_triggered <0|1> "
+		"rate <rate_num>: Set HOST_SHAPER lwm_triggered and rate with port_id",
+	.tokens = {
+		(void *)&cmd_port_host_shaper_mlx5,
+		(void *)&cmd_port_host_shaper_set,
+		(void *)&cmd_port_host_shaper_port,
+		(void *)&cmd_port_host_shaper_portnum,
+		(void *)&cmd_port_host_shaper_host_shaper,
+		(void *)&cmd_port_host_shaper_lwm_triggered,
+		(void *)&cmd_port_host_shaper_fr,
+		(void *)&cmd_port_host_shaper_rate,
+		(void *)&cmd_port_host_shaper_rate_num,
+		NULL,
+	}
+};
diff --git a/drivers/net/mlx5/mlx5_testpmd.h b/drivers/net/mlx5/mlx5_testpmd.h
new file mode 100644
index 0000000000..50f3cf0bf9
--- /dev/null
+++ b/drivers/net/mlx5/mlx5_testpmd.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 6WIND S.A.
+ * Copyright 2021 Mellanox Technologies, Ltd
+ */
+
+#ifndef RTE_PMD_MLX5_TEST_H_
+#define RTE_PMD_MLX5_TEST_H_
+
+#include
+#include
+#include
+
+/**
+ * RTE_ETH_EVENT_RX_LWM handler sample code.
+ * It's called in testpmd. The workflow here is to delay a while until
+ * the Rx queue is empty, then disable the host shaper.
+ *
+ * @param[in] port_id
+ *   Port identifier.
+ * @param[in] rxq_id
+ *   Rx queue identifier.
+ */
+void
+mlx5_test_lwm_event_handler(uint16_t port_id, uint16_t rxq_id);
+
+extern cmdline_parse_inst_t mlx5_test_cmd_port_host_shaper;
+#endif
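
Note (not part of the patch): a minimal sketch of driving the host shaper
outside testpmd, assuming the rte_pmd_mlx5_host_shaper_config() API and the
MLX5_HOST_SHAPER_FLAG_LWM_TRIGGERED flag added earlier in this series. It
mirrors the two-call sequence of mlx5_test_set_port_host_shaper() above,
enabling the LWM-triggered mode with no static shaper rate ("rate 0"); the
helper name is illustrative only.

    #include <rte_bitops.h>
    #include <rte_pmd_mlx5.h>

    /* Hypothetical helper: let firmware arm a 100Mbps host shaper on LWM events. */
    static int
    app_enable_lwm_triggered_shaper(uint16_t port_id)
    {
            int ret;

            /* First call: set the LWM-triggered flag (rate argument 1 = enable). */
            ret = rte_pmd_mlx5_host_shaper_config(port_id, 1,
                            RTE_BIT32(MLX5_HOST_SHAPER_FLAG_LWM_TRIGGERED));
            if (ret)
                    return ret;
            /* Second call: no immediate (static) shaper rate, i.e. "rate 0". */
            return rte_pmd_mlx5_host_shaper_config(port_id, 0, 0);
    }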