From patchwork Tue Jun 14 12:01:34 2022
X-Patchwork-Submitter: Spike Du
X-Patchwork-Id: 112721
X-Patchwork-Delegate: rasland@nvidia.com
From: Spike Du
To: , , , , Wenzhuo Lu , Beilei Xing , Bernard Iremonger , Shahaf Shuler
CC: , , , ,
Subject: [PATCH v7] app/testpmd: add Host Shaper command
Date: Tue, 14 Jun 2022 15:01:34 +0300
Message-ID: <20220614120134.1828188-2-spiked@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20220614120134.1828188-1-spiked@nvidia.com>
References: <20220613025006.1596552-2-spiked@nvidia.com>
 <20220614120134.1828188-1-spiked@nvidia.com>
Add command line options to support host shaper configuration.

- Command syntax:
  mlx5 set port <port_id> host_shaper avail_thresh_triggered <0|1> rate <rate_num>

- Example commands:

  To enable avail_thresh_triggered on port 1 and disable the current host shaper:
  testpmd> mlx5 set port 1 host_shaper avail_thresh_triggered 1 rate 0

  To disable avail_thresh_triggered and the current host shaper on port 1:
  testpmd> mlx5 set port 1 host_shaper avail_thresh_triggered 0 rate 0

  The rate unit is 100Mbps.
  To disable avail_thresh_triggered and configure a shaper of 5Gbps on port 1:
  testpmd> mlx5 set port 1 host_shaper avail_thresh_triggered 0 rate 50

Add sample code to handle the Rx queue available descriptor threshold event:
it delays for a while so that the Rx queue empties, then disables the host
shaper and re-arms the available descriptor threshold event.

Signed-off-by: Spike Du
Acked-by: Matan Azrad
---
 app/test-pmd/testpmd.c          |   7 ++
 doc/guides/nics/mlx5.rst        |  46 +++++++++
 drivers/net/mlx5/meson.build    |   4 +
 drivers/net/mlx5/mlx5_testpmd.c | 206 ++++++++++++++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_testpmd.h |  26 +++++
 5 files changed, 289 insertions(+)
 create mode 100644 drivers/net/mlx5/mlx5_testpmd.c
 create mode 100644 drivers/net/mlx5/mlx5_testpmd.h

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 33d9b85..b491719 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -69,6 +69,9 @@
 #ifdef RTE_NET_BOND
 #include
 #endif
+#ifdef RTE_NET_MLX5
+#include "mlx5_testpmd.h"
+#endif
 
 #include "testpmd.h"
 
@@ -3659,6 +3662,10 @@ struct pmd_test_command {
 			break;
 		printf("Received avail_thresh event, port:%d rxq_id:%d\n",
 		       port_id, rxq_id);
+
+#ifdef RTE_NET_MLX5
+		mlx5_test_avail_thresh_event_handler(port_id, rxq_id);
+#endif
 	}
 	break;
 	default:
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index a1e13e7..b5a3ee3 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -1727,3 +1727,49 @@ which can be installed from OFED mstflint package.
 Meson detects ``libmtcr_ul`` existence at configure stage.
 If the library is detected, the application must link with ``-lmtcr_ul``,
 as done by the pkg-config file libdpdk.pc.
+
+How to use available descriptor threshold and Host Shaper
+----------------------------------------------------------
+
+Testpmd provides sample command lines to configure the available descriptor
+threshold, and sample logic to handle the available descriptor threshold event.
+The typical workflow is: testpmd configures the available descriptor threshold
+for the Rx queues, enables avail_thresh_triggered in the host shaper and
+registers a callback. When traffic from the host is too high and an Rx queue's
+emptiness drops below the available descriptor threshold, the PMD receives an
+event and the firmware automatically configures a 100Mbps shaper on the host
+port. The PMD then calls the previously registered callback, which delays for
+a while to let the Rx queue empty and then disables the host shaper.
+
+Let's assume a simple BlueField-2 setup: port 0 is the uplink, port 1 is a VF
+representor. Each port has 2 Rx queues.
+
+To control traffic from the host to the ARM side, enable the available
+descriptor threshold in testpmd with:
+
+.. code-block:: console
+
+   testpmd> mlx5 set port 1 host_shaper avail_thresh_triggered 1 rate 0
+   testpmd> set port 1 rxq 0 avail_thresh 70
+   testpmd> set port 1 rxq 1 avail_thresh 70
+
+The first command disables the current host shaper and enables the available
+descriptor threshold triggered mode. The remaining commands configure the
+available descriptor threshold to 70% of the Rx queue size for both Rx queues.
+When traffic from the host is too high, the testpmd console prints a log about
+receiving the available descriptor threshold event, and the host shaper is then
+disabled. The traffic rate from the host is throttled and fewer drops happen in
+the Rx queues.
+
+To disable the available descriptor threshold and avail_thresh_triggered,
+invoke the following commands in testpmd:
+
+.. code-block:: console
+
+   testpmd> mlx5 set port 1 host_shaper avail_thresh_triggered 0 rate 0
+   testpmd> set port 1 rxq 0 avail_thresh 0
+   testpmd> set port 1 rxq 1 avail_thresh 0
+
+It is recommended that an application disables the available descriptor
+threshold and avail_thresh_triggered before exiting, if it enabled them.
+
+The shaper can also be configured with a fixed rate; the rate unit is 100Mbps.
+The command below sets the current shaper to 5Gbps and disables
+avail_thresh_triggered.
+
+.. code-block:: console
+
+   testpmd> mlx5 set port 1 host_shaper avail_thresh_triggered 0 rate 50
diff --git a/drivers/net/mlx5/meson.build b/drivers/net/mlx5/meson.build
index 99210fd..941642b 100644
--- a/drivers/net/mlx5/meson.build
+++ b/drivers/net/mlx5/meson.build
@@ -68,4 +68,8 @@ if get_option('buildtype').contains('debug')
 else
     cflags += [ '-UPEDANTIC' ]
 endif
+
+testpmd_sources += files('mlx5_testpmd.c')
+testpmd_drivers_deps += 'net_mlx5'
+
 subdir(exec_env)
diff --git a/drivers/net/mlx5/mlx5_testpmd.c b/drivers/net/mlx5/mlx5_testpmd.c
new file mode 100644
index 0000000..32f2386
--- /dev/null
+++ b/drivers/net/mlx5/mlx5_testpmd.c
@@ -0,0 +1,206 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 6WIND S.A.
+ * Copyright 2021 Mellanox Technologies, Ltd
+ */
+
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include "mlx5_testpmd.h"
+#include "testpmd.h"
+
+static uint8_t host_shaper_avail_thresh_triggered[RTE_MAX_ETHPORTS];
+#define SHAPER_DISABLE_DELAY_US 100000 /* 100ms */
+
+/**
+ * Disable the host shaper and re-arm available descriptor threshold event.
+ *
+ * @param[in] args
+ *   uint32_t integer combining port_id and rxq_id.
+ */
+static void
+mlx5_test_host_shaper_disable(void *args)
+{
+	uint32_t port_rxq_id = (uint32_t)(uintptr_t)args;
+	uint16_t port_id = port_rxq_id & 0xffff;
+	uint16_t qid = (port_rxq_id >> 16) & 0xffff;
+	struct rte_eth_rxq_info qinfo;
+
+	printf("%s disable shaper\n", __func__);
+	if (rte_eth_rx_queue_info_get(port_id, qid, &qinfo)) {
+		printf("rx_queue_info_get returns error\n");
+		return;
+	}
+	/* Rearm the available descriptor threshold event. */
+	if (rte_eth_rx_avail_thresh_set(port_id, qid, qinfo.avail_thresh)) {
+		printf("config avail_thresh returns error\n");
+		return;
+	}
+	/* Only disable the shaper when avail_thresh_triggered is set.
*/ + if (host_shaper_avail_thresh_triggered[port_id] && + rte_pmd_mlx5_host_shaper_config(port_id, 0, 0)) + printf("%s disable shaper returns error\n", __func__); +} + +void +mlx5_test_avail_thresh_event_handler(uint16_t port_id, uint16_t rxq_id) +{ + struct rte_eth_dev_info dev_info; + uint32_t port_rxq_id = port_id | (rxq_id << 16); + + /* Ensure it's MLX5 port. */ + if (rte_eth_dev_info_get(port_id, &dev_info) != 0 || + (strncmp(dev_info.driver_name, "mlx5", 4) != 0)) + return; + rte_eal_alarm_set(SHAPER_DISABLE_DELAY_US, + mlx5_test_host_shaper_disable, + (void *)(uintptr_t)port_rxq_id); + printf("%s port_id:%u rxq_id:%u\n", __func__, port_id, rxq_id); +} + +/** + * Configure host shaper's avail_thresh_triggered and current rate. + * + * @param[in] avail_thresh_triggered + * Disable/enable avail_thresh_triggered. + * @param[in] rate + * Configure current host shaper rate. + * @return + * On success, returns 0. + * On failure, returns < 0. + */ +static int +mlx5_test_set_port_host_shaper(uint16_t port_id, uint16_t avail_thresh_triggered, uint8_t rate) +{ + struct rte_eth_link link; + bool port_id_valid = false; + uint16_t pid; + int ret; + + RTE_ETH_FOREACH_DEV(pid) + if (port_id == pid) { + port_id_valid = true; + break; + } + if (!port_id_valid) + return -EINVAL; + ret = rte_eth_link_get_nowait(port_id, &link); + if (ret < 0) + return ret; + host_shaper_avail_thresh_triggered[port_id] = avail_thresh_triggered ? 1 : 0; + if (!avail_thresh_triggered) { + ret = rte_pmd_mlx5_host_shaper_config(port_id, 0, + RTE_BIT32(MLX5_HOST_SHAPER_FLAG_AVAIL_THRESH_TRIGGERED)); + } else { + ret = rte_pmd_mlx5_host_shaper_config(port_id, 1, + RTE_BIT32(MLX5_HOST_SHAPER_FLAG_AVAIL_THRESH_TRIGGERED)); + } + if (ret) + return ret; + ret = rte_pmd_mlx5_host_shaper_config(port_id, rate, 0); + if (ret) + return ret; + return 0; +} + +/* *** SET HOST_SHAPER FOR A PORT *** */ +struct cmd_port_host_shaper_result { + cmdline_fixed_string_t mlx5; + cmdline_fixed_string_t set; + cmdline_fixed_string_t port; + uint16_t port_num; + cmdline_fixed_string_t host_shaper; + cmdline_fixed_string_t avail_thresh_triggered; + uint16_t fr; + cmdline_fixed_string_t rate; + uint8_t rate_num; +}; + +static void cmd_port_host_shaper_parsed(void *parsed_result, + __rte_unused struct cmdline *cl, + __rte_unused void *data) +{ + struct cmd_port_host_shaper_result *res = parsed_result; + int ret = 0; + + if ((strcmp(res->mlx5, "mlx5") == 0) && + (strcmp(res->set, "set") == 0) && + (strcmp(res->port, "port") == 0) && + (strcmp(res->host_shaper, "host_shaper") == 0) && + (strcmp(res->avail_thresh_triggered, "avail_thresh_triggered") == 0) && + (strcmp(res->rate, "rate") == 0)) + ret = mlx5_test_set_port_host_shaper(res->port_num, res->fr, + res->rate_num); + if (ret < 0) + printf("cmd_port_host_shaper error: (%s)\n", strerror(-ret)); +} + +static cmdline_parse_token_string_t cmd_port_host_shaper_mlx5 = + TOKEN_STRING_INITIALIZER(struct cmd_port_host_shaper_result, + mlx5, "mlx5"); +static cmdline_parse_token_string_t cmd_port_host_shaper_set = + TOKEN_STRING_INITIALIZER(struct cmd_port_host_shaper_result, + set, "set"); +static cmdline_parse_token_string_t cmd_port_host_shaper_port = + TOKEN_STRING_INITIALIZER(struct cmd_port_host_shaper_result, + port, "port"); +static cmdline_parse_token_num_t cmd_port_host_shaper_portnum = + TOKEN_NUM_INITIALIZER(struct cmd_port_host_shaper_result, + port_num, RTE_UINT16); +static cmdline_parse_token_string_t cmd_port_host_shaper_host_shaper = + TOKEN_STRING_INITIALIZER(struct 
cmd_port_host_shaper_result, + host_shaper, "host_shaper"); +static cmdline_parse_token_string_t cmd_port_host_shaper_avail_thresh_triggered = + TOKEN_STRING_INITIALIZER(struct cmd_port_host_shaper_result, + avail_thresh_triggered, "avail_thresh_triggered"); +static cmdline_parse_token_num_t cmd_port_host_shaper_fr = + TOKEN_NUM_INITIALIZER(struct cmd_port_host_shaper_result, + fr, RTE_UINT16); +static cmdline_parse_token_string_t cmd_port_host_shaper_rate = + TOKEN_STRING_INITIALIZER(struct cmd_port_host_shaper_result, + rate, "rate"); +static cmdline_parse_token_num_t cmd_port_host_shaper_rate_num = + TOKEN_NUM_INITIALIZER(struct cmd_port_host_shaper_result, + rate_num, RTE_UINT8); +static cmdline_parse_inst_t mlx5_test_cmd_port_host_shaper = { + .f = cmd_port_host_shaper_parsed, + .data = (void *)0, + .help_str = "mlx5 set port host_shaper avail_thresh_triggered <0|1> " + "rate : Set HOST_SHAPER avail_thresh_triggered and rate with port_id", + .tokens = { + (void *)&cmd_port_host_shaper_mlx5, + (void *)&cmd_port_host_shaper_set, + (void *)&cmd_port_host_shaper_port, + (void *)&cmd_port_host_shaper_portnum, + (void *)&cmd_port_host_shaper_host_shaper, + (void *)&cmd_port_host_shaper_avail_thresh_triggered, + (void *)&cmd_port_host_shaper_fr, + (void *)&cmd_port_host_shaper_rate, + (void *)&cmd_port_host_shaper_rate_num, + NULL, + } +}; + +static struct testpmd_driver_commands mlx5_driver_cmds = { + .commands = { + { + .ctx = &mlx5_test_cmd_port_host_shaper, + .help = "mlx5 set port (port_id) host_shaper avail_thresh_triggered (on|off)" + "rate (rate_num):\n" + " Set HOST_SHAPER avail_thresh_triggered and rate with port_id\n\n", + }, + { + .ctx = NULL, + }, + } +}; +TESTPMD_ADD_DRIVER_COMMANDS(mlx5_driver_cmds); + diff --git a/drivers/net/mlx5/mlx5_testpmd.h b/drivers/net/mlx5/mlx5_testpmd.h new file mode 100644 index 0000000..7a54658 --- /dev/null +++ b/drivers/net/mlx5/mlx5_testpmd.h @@ -0,0 +1,26 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2021 6WIND S.A. + * Copyright 2021 Mellanox Technologies, Ltd + */ + +#ifndef RTE_PMD_MLX5_TEST_H_ +#define RTE_PMD_MLX5_TEST_H_ + +#include +#include +#include + +/** + * RTE_ETH_EVENT_RX_AVAIL_THRESH handler sample code. + * It's called in testpmd, the work flow here is delay a while until + * RX queueu is empty, then disable host shaper. + * + * @param[in] port_id + * Port identifier. + * @param[in] rxq_id + * Rx queue identifier. 
+ */
+void
+mlx5_test_avail_thresh_event_handler(uint16_t port_id, uint16_t rxq_id);
+
+#endif

From patchwork Wed Jun 15 12:58:32 2022
X-Patchwork-Submitter: Spike Du
X-Patchwork-Id: 112764
From: Spike Du
To: , , , , Shahaf Shuler , Ray Kinsella , Neil Horman
CC: , , , ,
Subject: [PATCH v8 2/6] common/mlx5: share interrupt management
Date: Wed, 15 Jun 2022 15:58:32 +0300
Message-ID: <20220615125836.391771-3-spiked@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20220615125836.391771-1-spiked@nvidia.com>
References: <20220614120134.1828188-2-spiked@nvidia.com>
 <20220615125836.391771-1-spiked@nvidia.com>
There is a lot of duplicated code for creating and initializing
rte_intr_handle. Add a new mlx5_os API to do this, and replace all
PMD-related code with this API.

Signed-off-by: Spike Du
---
 drivers/common/mlx5/linux/mlx5_common_os.c   | 131 ++++++++++++++++++++++++++
 drivers/common/mlx5/linux/mlx5_common_os.h   |  11 +++
 drivers/common/mlx5/version.map              |   2 +
 drivers/common/mlx5/windows/mlx5_common_os.h |  24 +++++
 drivers/net/mlx5/linux/mlx5_ethdev_os.c      |  71 --------------
 drivers/net/mlx5/linux/mlx5_os.c             | 132 ++++++---------------------
 drivers/net/mlx5/linux/mlx5_socket.c         |  53 ++---------
 drivers/net/mlx5/mlx5.h                      |   2 -
 drivers/net/mlx5/mlx5_txpp.c                 |  28 ++----
 drivers/net/mlx5/windows/mlx5_ethdev_os.c    |  22 -----
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c          |  48 ++--------
 11 files changed, 217 insertions(+), 307 deletions(-)

diff --git a/drivers/common/mlx5/linux/mlx5_common_os.c b/drivers/common/mlx5/linux/mlx5_common_os.c
index d40cfd5..f10a981 100644
--- a/drivers/common/mlx5/linux/mlx5_common_os.c
+++ b/drivers/common/mlx5/linux/mlx5_common_os.c
@@ -11,6 +11,7 @@
 #endif
 #include
 #include
+#include
 
 #include
 #include
@@ -964,3 +965,133 @@
 	claim_zero(mlx5_glue->dereg_mr(pmd_mr->obj));
 	memset(pmd_mr, 0, sizeof(*pmd_mr));
 }
+
+/**
+ * Rte_intr_handle create and init helper.
+ *
+ * @param[in] mode
+ *   Interrupt instance can be shared between primary and secondary
+ *   processes or not.
+ * @param[in] set_fd_nonblock
+ *   Whether to set fd to O_NONBLOCK.
+ * @param[in] fd
+ *   Fd to set in created intr_handle.
+ * @param[in] cb
+ *   Callback to register for intr_handle.
+ * @param[in] cb_arg
+ *   Callback argument for cb.
+ *
+ * @return
+ *   - Interrupt handle on success.
+ *   - NULL on failure, with rte_errno set.
+ */
+struct rte_intr_handle *
+mlx5_os_interrupt_handler_create(int mode, bool set_fd_nonblock, int fd,
+				 rte_intr_callback_fn cb, void *cb_arg)
+{
+	struct rte_intr_handle *tmp_intr_handle;
+	int ret, flags;
+
+	tmp_intr_handle = rte_intr_instance_alloc(mode);
+	if (!tmp_intr_handle) {
+		rte_errno = ENOMEM;
+		goto err;
+	}
+	if (set_fd_nonblock) {
+		flags = fcntl(fd, F_GETFL);
+		ret = fcntl(fd, F_SETFL, flags | O_NONBLOCK);
+		if (ret) {
+			rte_errno = errno;
+			goto err;
+		}
+	}
+	ret = rte_intr_fd_set(tmp_intr_handle, fd);
+	if (ret)
+		goto err;
+	ret = rte_intr_type_set(tmp_intr_handle, RTE_INTR_HANDLE_EXT);
+	if (ret)
+		goto err;
+	ret = rte_intr_callback_register(tmp_intr_handle, cb, cb_arg);
+	if (ret) {
+		rte_errno = -ret;
+		goto err;
+	}
+	return tmp_intr_handle;
+err:
+	if (tmp_intr_handle)
+		rte_intr_instance_free(tmp_intr_handle);
+	return NULL;
+}
+
+/* Safe unregistration for interrupt callback.
*/ +static void +mlx5_intr_callback_unregister(const struct rte_intr_handle *handle, + rte_intr_callback_fn cb_fn, void *cb_arg) +{ + uint64_t twait = 0; + uint64_t start = 0; + + do { + int ret; + + ret = rte_intr_callback_unregister(handle, cb_fn, cb_arg); + if (ret >= 0) + return; + if (ret != -EAGAIN) { + DRV_LOG(INFO, "failed to unregister interrupt" + " handler (error: %d)", ret); + MLX5_ASSERT(false); + return; + } + if (twait) { + struct timespec onems; + + /* Wait one millisecond and try again. */ + onems.tv_sec = 0; + onems.tv_nsec = NS_PER_S / MS_PER_S; + nanosleep(&onems, 0); + /* Check whether one second elapsed. */ + if ((rte_get_timer_cycles() - start) <= twait) + continue; + } else { + /* + * We get the amount of timer ticks for one second. + * If this amount elapsed it means we spent one + * second in waiting. This branch is executed once + * on first iteration. + */ + twait = rte_get_timer_hz(); + MLX5_ASSERT(twait); + } + /* + * Timeout elapsed, show message (once a second) and retry. + * We have no other acceptable option here, if we ignore + * the unregistering return code the handler will not + * be unregistered, fd will be closed and we may get the + * crush. Hanging and messaging in the loop seems not to be + * the worst choice. + */ + DRV_LOG(INFO, "Retrying to unregister interrupt handler"); + start = rte_get_timer_cycles(); + } while (true); +} + +/** + * Rte_intr_handle destroy helper. + * + * @param[in] intr_handle + * Rte_intr_handle to destroy. + * @param[in] cb + * Callback which is registered to intr_handle. + * @param[in] cb_arg + * Callback argument for cb. + * + */ +void +mlx5_os_interrupt_handler_destroy(struct rte_intr_handle *intr_handle, + rte_intr_callback_fn cb, void *cb_arg) +{ + if (rte_intr_fd_get(intr_handle) >= 0) + mlx5_intr_callback_unregister(intr_handle, cb, cb_arg); + rte_intr_instance_free(intr_handle); +} diff --git a/drivers/common/mlx5/linux/mlx5_common_os.h b/drivers/common/mlx5/linux/mlx5_common_os.h index 27f1192..479bb3c 100644 --- a/drivers/common/mlx5/linux/mlx5_common_os.h +++ b/drivers/common/mlx5/linux/mlx5_common_os.h @@ -15,6 +15,7 @@ #include #include #include +#include #include "mlx5_autoconf.h" #include "mlx5_glue.h" @@ -299,4 +300,14 @@ int mlx5_get_device_guid(const struct rte_pci_addr *dev, uint8_t *guid, size_t len); +__rte_internal +struct rte_intr_handle * +mlx5_os_interrupt_handler_create(int mode, bool set_fd_nonblock, int fd, + rte_intr_callback_fn cb, void *cb_arg); + +__rte_internal +void +mlx5_os_interrupt_handler_destroy(struct rte_intr_handle *intr_handle, + rte_intr_callback_fn cb, void *cb_arg); + #endif /* RTE_PMD_MLX5_COMMON_OS_H_ */ diff --git a/drivers/common/mlx5/version.map b/drivers/common/mlx5/version.map index a23a30a..413dec1 100644 --- a/drivers/common/mlx5/version.map +++ b/drivers/common/mlx5/version.map @@ -153,5 +153,7 @@ INTERNAL { mlx5_mr_mempool2mr_bh; mlx5_mr_mempool_populate_cache; + mlx5_os_interrupt_handler_create; # WINDOWS_NO_EXPORT + mlx5_os_interrupt_handler_destroy; # WINDOWS_NO_EXPORT local: *; }; diff --git a/drivers/common/mlx5/windows/mlx5_common_os.h b/drivers/common/mlx5/windows/mlx5_common_os.h index ee7973f..e9e9108 100644 --- a/drivers/common/mlx5/windows/mlx5_common_os.h +++ b/drivers/common/mlx5/windows/mlx5_common_os.h @@ -9,6 +9,7 @@ #include #include +#include #include "mlx5_autoconf.h" #include "mlx5_glue.h" @@ -253,4 +254,27 @@ __rte_internal int mlx5_os_umem_dereg(void *pumem); +static inline struct rte_intr_handle * +mlx5_os_interrupt_handler_create(int mode, 
bool set_fd_nonblock, int fd, + rte_intr_callback_fn cb, void *cb_arg) +{ + (void)mode; + (void)set_fd_nonblock; + (void)fd; + (void)cb; + (void)cb_arg; + rte_errno = ENOTSUP; + return NULL; +} + +static inline void +mlx5_os_interrupt_handler_destroy(struct rte_intr_handle *intr_handle, + rte_intr_callback_fn cb, void *cb_arg) +{ + (void)intr_handle; + (void)cb; + (void)cb_arg; +} + + #endif /* RTE_PMD_MLX5_COMMON_OS_H_ */ diff --git a/drivers/net/mlx5/linux/mlx5_ethdev_os.c b/drivers/net/mlx5/linux/mlx5_ethdev_os.c index 8fe73f1..a276b2b 100644 --- a/drivers/net/mlx5/linux/mlx5_ethdev_os.c +++ b/drivers/net/mlx5/linux/mlx5_ethdev_os.c @@ -881,77 +881,6 @@ struct ethtool_link_settings { } } -/* - * Unregister callback handler safely. The handler may be active - * while we are trying to unregister it, in this case code -EAGAIN - * is returned by rte_intr_callback_unregister(). This routine checks - * the return code and tries to unregister handler again. - * - * @param handle - * interrupt handle - * @param cb_fn - * pointer to callback routine - * @cb_arg - * opaque callback parameter - */ -void -mlx5_intr_callback_unregister(const struct rte_intr_handle *handle, - rte_intr_callback_fn cb_fn, void *cb_arg) -{ - /* - * Try to reduce timeout management overhead by not calling - * the timer related routines on the first iteration. If the - * unregistering succeeds on first call there will be no - * timer calls at all. - */ - uint64_t twait = 0; - uint64_t start = 0; - - do { - int ret; - - ret = rte_intr_callback_unregister(handle, cb_fn, cb_arg); - if (ret >= 0) - return; - if (ret != -EAGAIN) { - DRV_LOG(INFO, "failed to unregister interrupt" - " handler (error: %d)", ret); - MLX5_ASSERT(false); - return; - } - if (twait) { - struct timespec onems; - - /* Wait one millisecond and try again. */ - onems.tv_sec = 0; - onems.tv_nsec = NS_PER_S / MS_PER_S; - nanosleep(&onems, 0); - /* Check whether one second elapsed. */ - if ((rte_get_timer_cycles() - start) <= twait) - continue; - } else { - /* - * We get the amount of timer ticks for one second. - * If this amount elapsed it means we spent one - * second in waiting. This branch is executed once - * on first iteration. - */ - twait = rte_get_timer_hz(); - MLX5_ASSERT(twait); - } - /* - * Timeout elapsed, show message (once a second) and retry. - * We have no other acceptable option here, if we ignore - * the unregistering return code the handler will not - * be unregistered, fd will be closed and we may get the - * crush. Hanging and messaging in the loop seems not to be - * the worst choice. - */ - DRV_LOG(INFO, "Retrying to unregister interrupt handler"); - start = rte_get_timer_cycles(); - } while (true); -} - /** * Handle DEVX interrupts from the NIC. * This function is probably called from the DPDK host thread. 
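
Before the per-driver conversions that follow, here is the intended usage
pattern of the two helpers added above. This is only a sketch and not part of
the patch: my_event_handler, my_handler_install, my_handler_uninstall and the
fd are placeholder names, and the include path for the common header is
illustrative.

	#include <stdbool.h>
	#include <rte_errno.h>
	#include <rte_interrupts.h>
	#include "mlx5_common_os.h"	/* illustrative include path */

	static void my_event_handler(void *cb_arg);	/* rte_intr_callback_fn */
	static struct rte_intr_handle *my_intr_handle;

	static int
	my_handler_install(int fd, void *ctx)
	{
		/* Allocates the handle, optionally sets O_NONBLOCK on the fd,
		 * sets the fd and RTE_INTR_HANDLE_EXT type, and registers the
		 * callback; on failure it returns NULL with rte_errno set. */
		my_intr_handle = mlx5_os_interrupt_handler_create
			(RTE_INTR_INSTANCE_F_SHARED, true, fd,
			 my_event_handler, ctx);
		return my_intr_handle == NULL ? -rte_errno : 0;
	}

	static void
	my_handler_uninstall(void *ctx)
	{
		/* Unregisters the callback (retrying while the unregister
		 * call returns -EAGAIN) and frees the handle. */
		mlx5_os_interrupt_handler_destroy(my_intr_handle,
						  my_event_handler, ctx);
	}
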
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index a821153..0741028 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -2494,40 +2494,6 @@ mlx5_pmd_socket_uninit(); } -static int -mlx5_os_dev_shared_handler_install_lsc(struct mlx5_dev_ctx_shared *sh) -{ - int nlsk_fd, flags, ret; - - nlsk_fd = mlx5_nl_init(NETLINK_ROUTE, RTMGRP_LINK); - if (nlsk_fd < 0) { - DRV_LOG(ERR, "Failed to create a socket for Netlink events: %s", - rte_strerror(rte_errno)); - return -1; - } - flags = fcntl(nlsk_fd, F_GETFL); - ret = fcntl(nlsk_fd, F_SETFL, flags | O_NONBLOCK); - if (ret != 0) { - DRV_LOG(ERR, "Failed to make Netlink event socket non-blocking: %s", - strerror(errno)); - rte_errno = errno; - goto error; - } - rte_intr_type_set(sh->intr_handle_nl, RTE_INTR_HANDLE_EXT); - rte_intr_fd_set(sh->intr_handle_nl, nlsk_fd); - if (rte_intr_callback_register(sh->intr_handle_nl, - mlx5_dev_interrupt_handler_nl, - sh) != 0) { - DRV_LOG(ERR, "Failed to register Netlink events interrupt"); - rte_intr_fd_set(sh->intr_handle_nl, -1); - goto error; - } - return 0; -error: - close(nlsk_fd); - return -1; -} - /** * Install shared asynchronous device events handler. * This function is implemented to support event sharing @@ -2539,76 +2505,47 @@ void mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh) { - int ret; - int flags; struct ibv_context *ctx = sh->cdev->ctx; + int nlsk_fd; - sh->intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED); - if (sh->intr_handle == NULL) { - DRV_LOG(ERR, "Fail to allocate intr_handle"); - rte_errno = ENOMEM; + sh->intr_handle = mlx5_os_interrupt_handler_create + (RTE_INTR_INSTANCE_F_SHARED, true, + ctx->async_fd, mlx5_dev_interrupt_handler, sh); + if (!sh->intr_handle) { + DRV_LOG(ERR, "Failed to allocate intr_handle."); return; } - rte_intr_fd_set(sh->intr_handle, -1); - - flags = fcntl(ctx->async_fd, F_GETFL); - ret = fcntl(ctx->async_fd, F_SETFL, flags | O_NONBLOCK); - if (ret) { - DRV_LOG(INFO, "failed to change file descriptor async event" - " queue"); - } else { - rte_intr_fd_set(sh->intr_handle, ctx->async_fd); - rte_intr_type_set(sh->intr_handle, RTE_INTR_HANDLE_EXT); - if (rte_intr_callback_register(sh->intr_handle, - mlx5_dev_interrupt_handler, sh)) { - DRV_LOG(INFO, "Fail to install the shared interrupt."); - rte_intr_fd_set(sh->intr_handle, -1); - } + nlsk_fd = mlx5_nl_init(NETLINK_ROUTE, RTMGRP_LINK); + if (nlsk_fd < 0) { + DRV_LOG(ERR, "Failed to create a socket for Netlink events: %s", + rte_strerror(rte_errno)); + return; } - sh->intr_handle_nl = rte_intr_instance_alloc - (RTE_INTR_INSTANCE_F_SHARED); + sh->intr_handle_nl = mlx5_os_interrupt_handler_create + (RTE_INTR_INSTANCE_F_SHARED, true, + nlsk_fd, mlx5_dev_interrupt_handler_nl, sh); if (sh->intr_handle_nl == NULL) { DRV_LOG(ERR, "Fail to allocate intr_handle"); - rte_errno = ENOMEM; return; } - rte_intr_fd_set(sh->intr_handle_nl, -1); - if (mlx5_os_dev_shared_handler_install_lsc(sh) < 0) { - DRV_LOG(INFO, "Fail to install the shared Netlink event handler."); - rte_intr_fd_set(sh->intr_handle_nl, -1); - } if (sh->cdev->config.devx) { #ifdef HAVE_IBV_DEVX_ASYNC - sh->intr_handle_devx = - rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED); - if (!sh->intr_handle_devx) { - DRV_LOG(ERR, "Fail to allocate intr_handle"); - rte_errno = ENOMEM; - return; - } - rte_intr_fd_set(sh->intr_handle_devx, -1); + struct mlx5dv_devx_cmd_comp *devx_comp; + sh->devx_comp = (void *)mlx5_glue->devx_create_cmd_comp(ctx); - struct 
mlx5dv_devx_cmd_comp *devx_comp = sh->devx_comp; + devx_comp = sh->devx_comp; if (!devx_comp) { DRV_LOG(INFO, "failed to allocate devx_comp."); return; } - flags = fcntl(devx_comp->fd, F_GETFL); - ret = fcntl(devx_comp->fd, F_SETFL, flags | O_NONBLOCK); - if (ret) { - DRV_LOG(INFO, "failed to change file descriptor" - " devx comp"); + sh->intr_handle_devx = mlx5_os_interrupt_handler_create + (RTE_INTR_INSTANCE_F_SHARED, true, + devx_comp->fd, + mlx5_dev_interrupt_handler_devx, sh); + if (!sh->intr_handle_devx) { + DRV_LOG(ERR, "Failed to allocate intr_handle."); return; } - rte_intr_fd_set(sh->intr_handle_devx, devx_comp->fd); - rte_intr_type_set(sh->intr_handle_devx, - RTE_INTR_HANDLE_EXT); - if (rte_intr_callback_register(sh->intr_handle_devx, - mlx5_dev_interrupt_handler_devx, sh)) { - DRV_LOG(INFO, "Fail to install the devx shared" - " interrupt."); - rte_intr_fd_set(sh->intr_handle_devx, -1); - } #endif /* HAVE_IBV_DEVX_ASYNC */ } } @@ -2624,24 +2561,13 @@ void mlx5_os_dev_shared_handler_uninstall(struct mlx5_dev_ctx_shared *sh) { - int nlsk_fd; - - if (rte_intr_fd_get(sh->intr_handle) >= 0) - mlx5_intr_callback_unregister(sh->intr_handle, - mlx5_dev_interrupt_handler, sh); - rte_intr_instance_free(sh->intr_handle); - nlsk_fd = rte_intr_fd_get(sh->intr_handle_nl); - if (nlsk_fd >= 0) { - mlx5_intr_callback_unregister - (sh->intr_handle_nl, mlx5_dev_interrupt_handler_nl, sh); - close(nlsk_fd); - } - rte_intr_instance_free(sh->intr_handle_nl); + mlx5_os_interrupt_handler_destroy(sh->intr_handle, + mlx5_dev_interrupt_handler, sh); + mlx5_os_interrupt_handler_destroy(sh->intr_handle_nl, + mlx5_dev_interrupt_handler_nl, sh); #ifdef HAVE_IBV_DEVX_ASYNC - if (rte_intr_fd_get(sh->intr_handle_devx) >= 0) - rte_intr_callback_unregister(sh->intr_handle_devx, - mlx5_dev_interrupt_handler_devx, sh); - rte_intr_instance_free(sh->intr_handle_devx); + mlx5_os_interrupt_handler_destroy(sh->intr_handle_devx, + mlx5_dev_interrupt_handler_devx, sh); if (sh->devx_comp) mlx5_glue->devx_destroy_cmd_comp(sh->devx_comp); #endif diff --git a/drivers/net/mlx5/linux/mlx5_socket.c b/drivers/net/mlx5/linux/mlx5_socket.c index 4882e5f..0e01aff 100644 --- a/drivers/net/mlx5/linux/mlx5_socket.c +++ b/drivers/net/mlx5/linux/mlx5_socket.c @@ -134,51 +134,6 @@ } /** - * Install interrupt handler. - * - * @param dev - * Pointer to Ethernet device. - * @return - * 0 on success, a negative errno value otherwise. - */ -static int -mlx5_pmd_interrupt_handler_install(void) -{ - MLX5_ASSERT(server_socket != -1); - - server_intr_handle = - rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE); - if (server_intr_handle == NULL) { - DRV_LOG(ERR, "Fail to allocate intr_handle"); - return -ENOMEM; - } - if (rte_intr_fd_set(server_intr_handle, server_socket)) - return -rte_errno; - - if (rte_intr_type_set(server_intr_handle, RTE_INTR_HANDLE_EXT)) - return -rte_errno; - - return rte_intr_callback_register(server_intr_handle, - mlx5_pmd_socket_handle, NULL); -} - -/** - * Uninstall interrupt handler. - */ -static void -mlx5_pmd_interrupt_handler_uninstall(void) -{ - if (server_socket != -1) { - mlx5_intr_callback_unregister(server_intr_handle, - mlx5_pmd_socket_handle, - NULL); - } - rte_intr_fd_set(server_intr_handle, 0); - rte_intr_type_set(server_intr_handle, RTE_INTR_HANDLE_UNKNOWN); - rte_intr_instance_free(server_intr_handle); -} - -/** * Initialise the socket to communicate with external tools. 
* * @return @@ -224,7 +179,10 @@ strerror(errno)); goto remove; } - if (mlx5_pmd_interrupt_handler_install()) { + server_intr_handle = mlx5_os_interrupt_handler_create + (RTE_INTR_INSTANCE_F_PRIVATE, false, + server_socket, mlx5_pmd_socket_handle, NULL); + if (server_intr_handle == NULL) { DRV_LOG(WARNING, "cannot register interrupt handler for mlx5 socket: %s", strerror(errno)); goto remove; @@ -248,7 +206,8 @@ { if (server_socket == -1) return; - mlx5_pmd_interrupt_handler_uninstall(); + mlx5_os_interrupt_handler_destroy(server_intr_handle, + mlx5_pmd_socket_handle, NULL); claim_zero(close(server_socket)); server_socket = -1; MKSTR(path, MLX5_SOCKET_PATH, getpid()); diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 305edff..7ebb2cc 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -1682,8 +1682,6 @@ int mlx5_sysfs_switch_info(unsigned int ifindex, struct mlx5_switch_info *info); void mlx5_translate_port_name(const char *port_name_in, struct mlx5_switch_info *port_info_out); -void mlx5_intr_callback_unregister(const struct rte_intr_handle *handle, - rte_intr_callback_fn cb_fn, void *cb_arg); int mlx5_sysfs_bond_info(unsigned int pf_ifindex, unsigned int *ifindex, char *ifname); int mlx5_get_module_info(struct rte_eth_dev *dev, diff --git a/drivers/net/mlx5/mlx5_txpp.c b/drivers/net/mlx5/mlx5_txpp.c index fe74317..f853a67 100644 --- a/drivers/net/mlx5/mlx5_txpp.c +++ b/drivers/net/mlx5/mlx5_txpp.c @@ -741,11 +741,8 @@ static void mlx5_txpp_stop_service(struct mlx5_dev_ctx_shared *sh) { - if (!rte_intr_fd_get(sh->txpp.intr_handle)) - return; - mlx5_intr_callback_unregister(sh->txpp.intr_handle, - mlx5_txpp_interrupt_handler, sh); - rte_intr_instance_free(sh->txpp.intr_handle); + mlx5_os_interrupt_handler_destroy(sh->txpp.intr_handle, + mlx5_txpp_interrupt_handler, sh); } /* Attach interrupt handler and fires first request to Rearm Queue. */ @@ -769,23 +766,12 @@ rte_errno = errno; return -rte_errno; } - sh->txpp.intr_handle = - rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED); - if (sh->txpp.intr_handle == NULL) { - DRV_LOG(ERR, "Fail to allocate intr_handle"); - return -ENOMEM; - } fd = mlx5_os_get_devx_channel_fd(sh->txpp.echan); - if (rte_intr_fd_set(sh->txpp.intr_handle, fd)) - return -rte_errno; - - if (rte_intr_type_set(sh->txpp.intr_handle, RTE_INTR_HANDLE_EXT)) - return -rte_errno; - - if (rte_intr_callback_register(sh->txpp.intr_handle, - mlx5_txpp_interrupt_handler, sh)) { - rte_intr_fd_set(sh->txpp.intr_handle, 0); - DRV_LOG(ERR, "Failed to register CQE interrupt %d.", rte_errno); + sh->txpp.intr_handle = mlx5_os_interrupt_handler_create + (RTE_INTR_INSTANCE_F_SHARED, false, + fd, mlx5_txpp_interrupt_handler, sh); + if (!sh->txpp.intr_handle) { + DRV_LOG(ERR, "Fail to allocate intr_handle"); return -rte_errno; } /* Subscribe CQ event to the event channel controlled by the driver. */ diff --git a/drivers/net/mlx5/windows/mlx5_ethdev_os.c b/drivers/net/mlx5/windows/mlx5_ethdev_os.c index f975265..88d8213 100644 --- a/drivers/net/mlx5/windows/mlx5_ethdev_os.c +++ b/drivers/net/mlx5/windows/mlx5_ethdev_os.c @@ -140,28 +140,6 @@ return 0; } -/* - * Unregister callback handler safely. The handler may be active - * while we are trying to unregister it, in this case code -EAGAIN - * is returned by rte_intr_callback_unregister(). This routine checks - * the return code and tries to unregister handler again. 
- * - * @param handle - * interrupt handle - * @param cb_fn - * pointer to callback routine - * @cb_arg - * opaque callback parameter - */ -void -mlx5_intr_callback_unregister(const struct rte_intr_handle *handle, - rte_intr_callback_fn cb_fn, void *cb_arg) -{ - RTE_SET_USED(handle); - RTE_SET_USED(cb_fn); - RTE_SET_USED(cb_arg); -} - /** * DPDK callback to get flow control status. * diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c index e025be4..fd447cc 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c @@ -93,22 +93,10 @@ static int mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq) { - int ret = -EAGAIN; - - if (rte_intr_fd_get(virtq->intr_handle) >= 0) { - while (ret == -EAGAIN) { - ret = rte_intr_callback_unregister(virtq->intr_handle, - mlx5_vdpa_virtq_kick_handler, virtq); - if (ret == -EAGAIN) { - DRV_LOG(DEBUG, "Try again to unregister fd %d of virtq %hu interrupt", - rte_intr_fd_get(virtq->intr_handle), - virtq->index); - usleep(MLX5_VDPA_INTR_RETRIES_USEC); - } - } - rte_intr_fd_set(virtq->intr_handle, -1); - } - rte_intr_instance_free(virtq->intr_handle); + int ret; + + mlx5_os_interrupt_handler_destroy(virtq->intr_handle, + mlx5_vdpa_virtq_kick_handler, virtq); if (virtq->virtq) { ret = mlx5_vdpa_virtq_stop(virtq->priv, virtq->index); if (ret) @@ -365,35 +353,13 @@ virtq->priv = priv; rte_write32(virtq->index, priv->virtq_db_addr); /* Setup doorbell mapping. */ - virtq->intr_handle = - rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED); + virtq->intr_handle = mlx5_os_interrupt_handler_create( + RTE_INTR_INSTANCE_F_SHARED, false, + vq.kickfd, mlx5_vdpa_virtq_kick_handler, virtq); if (virtq->intr_handle == NULL) { DRV_LOG(ERR, "Fail to allocate intr_handle"); goto error; } - - if (rte_intr_fd_set(virtq->intr_handle, vq.kickfd)) - goto error; - - if (rte_intr_fd_get(virtq->intr_handle) == -1) { - DRV_LOG(WARNING, "Virtq %d kickfd is invalid.", index); - } else { - if (rte_intr_type_set(virtq->intr_handle, RTE_INTR_HANDLE_EXT)) - goto error; - - if (rte_intr_callback_register(virtq->intr_handle, - mlx5_vdpa_virtq_kick_handler, - virtq)) { - rte_intr_fd_set(virtq->intr_handle, -1); - DRV_LOG(ERR, "Failed to register virtq %d interrupt.", - index); - goto error; - } else { - DRV_LOG(DEBUG, "Register fd %d interrupt for virtq %d.", - rte_intr_fd_get(virtq->intr_handle), - index); - } - } /* Subscribe virtq error event. */ virtq->version++; cookie = ((uint64_t)virtq->version << 32) + index;
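
As a closing note on the testpmd patch earlier in this series: from an
application's point of view the registration side looks roughly like the sketch
below. This is an illustration only, not part of either patch:
my_avail_thresh_event_cb and my_setup are placeholder names, the 70% threshold
is the value used in the documentation above, resolving which Rx queue
triggered the event is elided, and rte_pmd_mlx5.h is assumed to declare the
host shaper API introduced by this series.

	#include <rte_ethdev.h>
	#include <rte_bitops.h>
	#include <rte_pmd_mlx5.h>	/* host shaper API from this series */

	static int
	my_avail_thresh_event_cb(uint16_t port_id, enum rte_eth_event_type type,
				 void *cb_arg, void *ret_param)
	{
		RTE_SET_USED(port_id);
		RTE_SET_USED(type);
		RTE_SET_USED(cb_arg);
		RTE_SET_USED(ret_param);
		/* Resolve the triggering Rx queue (elided) and hand off to
		 * logic like mlx5_test_avail_thresh_event_handler() above. */
		return 0;
	}

	static void
	my_setup(uint16_t port_id, uint16_t rxq_id)
	{
		/* Enable avail_thresh_triggered mode on the host shaper. */
		rte_pmd_mlx5_host_shaper_config(port_id, 1,
			RTE_BIT32(MLX5_HOST_SHAPER_FLAG_AVAIL_THRESH_TRIGGERED));
		/* Arm the threshold at 70% of the Rx queue size. */
		rte_eth_rx_avail_thresh_set(port_id, rxq_id, 70);
		/* Get notified when queue emptiness drops below the threshold. */
		rte_eth_dev_callback_register(port_id,
					      RTE_ETH_EVENT_RX_AVAIL_THRESH,
					      my_avail_thresh_event_cb, NULL);
	}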