From patchwork Sat Feb 25 19:58:04 2023
X-Patchwork-Submitter: Dariusz Sosnowski
X-Patchwork-Id: 124549
X-Patchwork-Delegate: rasland@nvidia.com
From: Dariusz Sosnowski
To: Matan Azrad, Viacheslav Ovsiienko
Subject: [PATCH] net/mlx5: reject flow template API reconfiguration
Date: Sat, 25 Feb 2023 19:58:04 +0000
Message-ID: <20230225195804.3581-1-dsosnowski@nvidia.com>
X-Mailer: git-send-email 2.25.1
List-Id: DPDK patches and discussions

Flow Template API allows rte_flow_configure() to be called more than once,
so that applications can dynamically reconfigure the number of resources
available for flow table and flow rule creation. PMDs can reject the change
in configuration and keep the old configuration.
Before this patch, during Flow Template API reconfiguration in the mlx5 PMD,
all allocated resources (HW/FW object pools, flow rules, pattern and actions
templates) were released and reallocated according to the new configuration.
However, this leads to undefined behavior, since all references to templates,
flows or indirect actions held by the user are invalidated.

This patch changes the reconfiguration behavior. The configuration provided
to rte_flow_configure() is stored in the port's private data structure.
On any subsequent call to rte_flow_configure(), the provided configuration
is compared against the stored one. If they are equal, rte_flow_configure()
reports success and the configuration is unaffected. Otherwise, (-ENOTSUP)
is returned.

Signed-off-by: Dariusz Sosnowski
Acked-by: Ori Kam
Acked-by: Suanming Mou
Acked-by: Matan Azrad
---
 doc/guides/nics/mlx5.rst        |   3 +
 drivers/net/mlx5/mlx5.h         |   8 +++
 drivers/net/mlx5/mlx5_flow_hw.c | 104 +++++++++++++++++++++++++++++++-
 3 files changed, 112 insertions(+), 3 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 6510e74fb9..2fc0399b99 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -160,6 +160,9 @@ Limitations
 - Set ``dv_flow_en`` to 2 in order to enable HW steering.
 - Async queue-based ``rte_flow_async`` APIs supported only.
 - NIC ConnectX-5 and before are not supported.
+- Reconfiguring flow API engine is not supported.
+  Any subsequent call to ``rte_flow_configure()`` with different configuration
+  than initially provided will be rejected with ``-ENOTSUP`` error code.
 - Partial match with item template is not supported.
 - IPv6 5-tuple matching is not supported.
 - With E-Switch enabled, ports which share the E-Switch domain

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index a766fb408e..7fc75a2535 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1656,6 +1656,13 @@ struct mlx5_hw_ctrl_flow {
 	struct rte_flow *flow;
 };
 
+/* HW Steering port configuration passed to rte_flow_configure(). */
+struct mlx5_flow_hw_attr {
+	struct rte_flow_port_attr port_attr;
+	uint16_t nb_queue;
+	struct rte_flow_queue_attr *queue_attr;
+};
+
 struct mlx5_flow_hw_ctrl_rx;
 
 struct mlx5_priv {
@@ -1763,6 +1770,7 @@ struct mlx5_priv {
 	uint32_t nb_queue; /* HW steering queue number. */
 	struct mlx5_hws_cnt_pool *hws_cpool; /* HW steering's counter pool. */
 	uint32_t hws_mark_refcnt; /* HWS mark action reference counter. */
+	struct mlx5_flow_hw_attr *hw_attr; /* HW Steering port configuration. */
 #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H)
 	/* Item template list. */
 	LIST_HEAD(flow_hw_itt, rte_flow_pattern_template) flow_hw_itt;
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index a9c7045a3e..29437d8d6c 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -6792,6 +6792,86 @@ mlx5_flow_hw_cleanup_ctrl_rx_templates(struct rte_eth_dev *dev)
 	}
 }
 
+/**
+ * Copy the provided HWS configuration to a newly allocated buffer.
+ *
+ * @param[in] port_attr
+ *   Port configuration attributes.
+ * @param[in] nb_queue
+ *   Number of queues.
+ * @param[in] queue_attr
+ *   Array that holds attributes for each flow queue.
+ *
+ * @return
+ *   Pointer to copied HWS configuration is returned on success.
+ *   Otherwise, NULL is returned and rte_errno is set.
+ */
+static struct mlx5_flow_hw_attr *
+flow_hw_alloc_copy_config(const struct rte_flow_port_attr *port_attr,
+			  const uint16_t nb_queue,
+			  const struct rte_flow_queue_attr *queue_attr[],
+			  struct rte_flow_error *error)
+{
+	struct mlx5_flow_hw_attr *hw_attr;
+	size_t hw_attr_size;
+	unsigned int i;
+
+	hw_attr_size = sizeof(*hw_attr) + nb_queue * sizeof(*hw_attr->queue_attr);
+	hw_attr = mlx5_malloc(MLX5_MEM_ZERO, hw_attr_size, 0, SOCKET_ID_ANY);
+	if (!hw_attr) {
+		rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Not enough memory to store configuration");
+		return NULL;
+	}
+	memcpy(&hw_attr->port_attr, port_attr, sizeof(*port_attr));
+	hw_attr->nb_queue = nb_queue;
+	/* Queue attributes are placed after the mlx5_flow_hw_attr. */
+	hw_attr->queue_attr = (struct rte_flow_queue_attr *)(hw_attr + 1);
+	for (i = 0; i < nb_queue; ++i)
+		memcpy(&hw_attr->queue_attr[i], queue_attr[i], sizeof(hw_attr->queue_attr[i]));
+	return hw_attr;
+}
+
+/**
+ * Compares the preserved HWS configuration with the provided one.
+ *
+ * @param[in] hw_attr
+ *   Pointer to preserved HWS configuration.
+ * @param[in] new_pa
+ *   Port configuration attributes to compare.
+ * @param[in] new_nbq
+ *   Number of queues to compare.
+ * @param[in] new_qa
+ *   Array that holds attributes for each flow queue.
+ *
+ * @return
+ *   True if configurations are the same, false otherwise.
+ */
+static bool
+flow_hw_compare_config(const struct mlx5_flow_hw_attr *hw_attr,
+		       const struct rte_flow_port_attr *new_pa,
+		       const uint16_t new_nbq,
+		       const struct rte_flow_queue_attr *new_qa[])
+{
+	const struct rte_flow_port_attr *old_pa = &hw_attr->port_attr;
+	const uint16_t old_nbq = hw_attr->nb_queue;
+	const struct rte_flow_queue_attr *old_qa = hw_attr->queue_attr;
+	unsigned int i;
+
+	if (old_pa->nb_counters != new_pa->nb_counters ||
+	    old_pa->nb_aging_objects != new_pa->nb_aging_objects ||
+	    old_pa->nb_meters != new_pa->nb_meters ||
+	    old_pa->nb_conn_tracks != new_pa->nb_conn_tracks ||
+	    old_pa->flags != new_pa->flags)
+		return false;
+	if (old_nbq != new_nbq)
+		return false;
+	for (i = 0; i < old_nbq; ++i)
+		if (old_qa[i].size != new_qa[i]->size)
+			return false;
+	return true;
+}
+
 /**
  * Configure port HWS resources.
  *
@@ -6846,9 +6926,12 @@ flow_hw_configure(struct rte_eth_dev *dev,
 		rte_errno = EINVAL;
 		goto err;
 	}
-	/* In case re-configuring, release existing context at first. */
+	/*
+	 * Calling rte_flow_configure() again is allowed if and only if
+	 * provided configuration matches the initially provided one.
+	 */
 	if (priv->dr_ctx) {
-		/* */
+		MLX5_ASSERT(priv->hw_attr != NULL);
 		for (i = 0; i < priv->nb_queue; i++) {
 			hw_q = &priv->hw_q[i];
 			/* Make sure all queues are empty. */
@@ -6857,7 +6940,18 @@ flow_hw_configure(struct rte_eth_dev *dev,
 				goto err;
 			}
 		}
-		flow_hw_resource_release(dev);
+		if (flow_hw_compare_config(priv->hw_attr, port_attr, nb_queue, queue_attr))
+			return 0;
+		else
+			return rte_flow_error_set(error, ENOTSUP,
+						  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+						  "Changing HWS configuration attributes "
+						  "is not supported");
+	}
+	priv->hw_attr = flow_hw_alloc_copy_config(port_attr, nb_queue, queue_attr, error);
+	if (!priv->hw_attr) {
+		ret = -rte_errno;
+		goto err;
 	}
 	ctrl_queue_attr.size = queue_attr[0]->size;
 	nb_q_updated = nb_queue + 1;
@@ -7142,6 +7236,8 @@ flow_hw_configure(struct rte_eth_dev *dev,
 		__atomic_fetch_sub(&host_priv->shared_refcnt, 1, __ATOMIC_RELAXED);
 		priv->shared_host = NULL;
 	}
+	mlx5_free(priv->hw_attr);
+	priv->hw_attr = NULL;
 	/* Do not overwrite the internal errno information. */
 	if (ret)
 		return ret;
@@ -7226,6 +7322,8 @@ flow_hw_resource_release(struct rte_eth_dev *dev)
 		priv->shared_host = NULL;
 	}
 	priv->dr_ctx = NULL;
+	mlx5_free(priv->hw_attr);
+	priv->hw_attr = NULL;
 	priv->nb_queue = 0;
 }