From patchwork Sat Jul 1 14:51:16 2023
X-Patchwork-Submitter: Bing Zhao
X-Patchwork-Id: 129176
X-Patchwork-Delegate: rasland@nvidia.com
From: Bing Zhao <bingz@nvidia.com>
CC: Gregory Etelson
Subject: [PATCH v2] net/mlx5: fix flow workspace destruction
Date: Sat, 1 Jul 2023 17:51:16 +0300
Message-ID: <20230701145116.441135-1-bingz@nvidia.com>
In-Reply-To: <20230701144346.441037-1-bingz@nvidia.com>
References: <20230701144346.441037-1-bingz@nvidia.com>
List-Id: DPDK patches and discussions
From: Gregory Etelson

The PMD uses a pthread key to allocate and access the per-thread flow
workspace memory buffers, and it registered a key destructor function to
clean those buffers up. However, the key destructor was never called by
the pthread library.

This patch keeps track of the per-thread flow workspaces inside the PMD
and releases the workspace memory from the PMD destructor instead.

In addition, the workspace buffer and the RSS queues array are now
allocated in a single memory chunk, sized for the maximal queue number
RTE_ETH_RSS_RETA_SIZE_512. The workspace adjustment can then be removed,
which eliminates two sources of software hiccup:
1. realloc and content copy
2. spinlock acquire and release

Fixes: 5d55a494f4e6 ("net/mlx5: split multi-thread flow handling per OS")
Cc: stable@dpdk.org

Signed-off-by: Gregory Etelson
Signed-off-by: Bing Zhao
Acked-by: Matan Azrad
---
v2: fix a typo in the commit message and remove the needless NULL
pointer initialization of the static variable.
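The tracking scheme described above can be sketched in plain C. This is a
minimal stand-in, not the driver code: it uses a toy `struct ws` instead of
`struct mlx5_flow_workspace` and a pthread mutex instead of DPDK's
`rte_spinlock`, but it follows the same pattern the patch adds: every newly
allocated workspace is pushed onto a global singly linked list under a lock,
and one release routine walks the list and frees every entry.

```c
#include <pthread.h>
#include <stdlib.h>

/* Toy stand-in for struct mlx5_flow_workspace. */
struct ws {
	struct ws *gc;	/* next element on the global release list */
	int id;		/* payload placeholder */
};

static struct ws *gc_head;
static pthread_mutex_t gc_lock = PTHREAD_MUTEX_INITIALIZER;

/* Link a newly allocated workspace onto the global list; any thread may
 * call this, hence the lock. */
static void
ws_gc_add(struct ws *w)
{
	pthread_mutex_lock(&gc_lock);
	w->gc = gc_head;
	gc_head = w;
	pthread_mutex_unlock(&gc_lock);
}

/* Walk the list once, at driver teardown, and free every workspace that
 * was ever registered. Returns the number of freed entries. */
static int
ws_gc_release(void)
{
	int freed = 0;

	while (gc_head) {
		struct ws *w = gc_head;

		gc_head = w->gc;
		free(w);
		freed++;
	}
	return freed;
}
```

Releasing through an explicit list makes the cleanup independent of whether
the pthread key destructor ever fires, which is the core of this fix.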
---
 drivers/net/mlx5/linux/mlx5_flow_os.c |  2 +-
 drivers/net/mlx5/mlx5.c               |  1 +
 drivers/net/mlx5/mlx5_flow.c          | 76 +++++++++++----------------
 drivers/net/mlx5/mlx5_flow.h          |  4 +-
 4 files changed, 36 insertions(+), 47 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_flow_os.c b/drivers/net/mlx5/linux/mlx5_flow_os.c
index 3c9a823edf..b139bb75b9 100644
--- a/drivers/net/mlx5/linux/mlx5_flow_os.c
+++ b/drivers/net/mlx5/linux/mlx5_flow_os.c
@@ -51,7 +51,7 @@ mlx5_flow_os_validate_item_esp(const struct rte_flow_item *item,
 int
 mlx5_flow_os_init_workspace_once(void)
 {
-	if (rte_thread_key_create(&key_workspace, flow_release_workspace)) {
+	if (rte_thread_key_create(&key_workspace, NULL)) {
 		DRV_LOG(ERR, "Can't create flow workspace data thread key.");
 		rte_errno = ENOMEM;
 		return -rte_errno;
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 5f0aa296ba..fd9b76027d 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1838,6 +1838,7 @@ mlx5_free_shared_dev_ctx(struct mlx5_dev_ctx_shared *sh)
 	if (LIST_EMPTY(&mlx5_dev_ctx_list)) {
 		mlx5_os_net_cleanup();
 		mlx5_flow_os_release_workspace();
+		mlx5_flow_workspace_gc_release();
 	}
 	pthread_mutex_unlock(&mlx5_dev_ctx_list_mutex);
 	if (sh->flex_parsers_dv) {
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index cf83db7b60..d3b1252ad6 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -7155,36 +7155,6 @@ flow_tunnel_from_rule(const struct mlx5_flow *flow)
 	return tunnel;
 }
 
-/**
- * Adjust flow RSS workspace if needed.
- *
- * @param wks
- *   Pointer to thread flow work space.
- * @param rss_desc
- *   Pointer to RSS descriptor.
- * @param[in] nrssq_num
- *   New RSS queue number.
- *
- * @return
- *   0 on success, -1 otherwise and rte_errno is set.
- */
-static int
-flow_rss_workspace_adjust(struct mlx5_flow_workspace *wks,
-			  struct mlx5_flow_rss_desc *rss_desc,
-			  uint32_t nrssq_num)
-{
-	if (likely(nrssq_num <= wks->rssq_num))
-		return 0;
-	rss_desc->queue = realloc(rss_desc->queue,
-			  sizeof(*rss_desc->queue) * RTE_ALIGN(nrssq_num, 2));
-	if (!rss_desc->queue) {
-		rte_errno = ENOMEM;
-		return -1;
-	}
-	wks->rssq_num = RTE_ALIGN(nrssq_num, 2);
-	return 0;
-}
-
 /**
  * Create a flow and add it to @p list.
  *
@@ -7303,8 +7273,7 @@ flow_list_create(struct rte_eth_dev *dev, enum mlx5_flow_type type,
 	if (attr->ingress)
 		rss = flow_get_rss_action(dev, p_actions_rx);
 	if (rss) {
-		if (flow_rss_workspace_adjust(wks, rss_desc, rss->queue_num))
-			return 0;
+		MLX5_ASSERT(rss->queue_num <= RTE_ETH_RSS_RETA_SIZE_512);
 		/*
 		 * The following information is required by
 		 * mlx5_flow_hashfields_adjust() in advance.
@@ -8072,12 +8041,34 @@ flow_release_workspace(void *data)
 
 	while (wks) {
 		next = wks->next;
-		free(wks->rss_desc.queue);
 		free(wks);
 		wks = next;
 	}
 }
 
+static struct mlx5_flow_workspace *gc_head;
+static rte_spinlock_t mlx5_flow_workspace_lock = RTE_SPINLOCK_INITIALIZER;
+
+static void
+mlx5_flow_workspace_gc_add(struct mlx5_flow_workspace *ws)
+{
+	rte_spinlock_lock(&mlx5_flow_workspace_lock);
+	ws->gc = gc_head;
+	gc_head = ws;
+	rte_spinlock_unlock(&mlx5_flow_workspace_lock);
+}
+
+void
+mlx5_flow_workspace_gc_release(void)
+{
+	while (gc_head) {
+		struct mlx5_flow_workspace *wks = gc_head;
+
+		gc_head = wks->gc;
+		flow_release_workspace(wks);
+	}
+}
+
 /**
  * Get thread specific current flow workspace.
  *
@@ -8103,23 +8094,17 @@ mlx5_flow_get_thread_workspace(void)
 static struct mlx5_flow_workspace*
 flow_alloc_thread_workspace(void)
 {
-	struct mlx5_flow_workspace *data = calloc(1, sizeof(*data));
+	size_t data_size = RTE_ALIGN(sizeof(struct mlx5_flow_workspace), sizeof(long));
+	size_t rss_queue_array_size = sizeof(uint16_t) * RTE_ETH_RSS_RETA_SIZE_512;
+	struct mlx5_flow_workspace *data = calloc(1, data_size +
+						  rss_queue_array_size);
 
 	if (!data) {
-		DRV_LOG(ERR, "Failed to allocate flow workspace "
-			"memory.");
+		DRV_LOG(ERR, "Failed to allocate flow workspace memory.");
 		return NULL;
 	}
-	data->rss_desc.queue = calloc(1,
-		 sizeof(uint16_t) * MLX5_RSSQ_DEFAULT_NUM);
-	if (!data->rss_desc.queue)
-		goto err;
-	data->rssq_num = MLX5_RSSQ_DEFAULT_NUM;
+	data->rss_desc.queue = RTE_PTR_ADD(data, data_size);
 	return data;
-err:
-	free(data->rss_desc.queue);
-	free(data);
-	return NULL;
 }
 
 /**
@@ -8140,6 +8125,7 @@ mlx5_flow_push_thread_workspace(void)
 		data = flow_alloc_thread_workspace();
 		if (!data)
 			return NULL;
+		mlx5_flow_workspace_gc_add(data);
 	} else if (!curr->inuse) {
 		data = curr;
 	} else if (curr->next) {
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 003e7da3a6..62789853ab 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1496,10 +1496,10 @@ struct mlx5_flow_workspace {
 	/* If creating another flow in same thread, push new as stack. */
 	struct mlx5_flow_workspace *prev;
 	struct mlx5_flow_workspace *next;
+	struct mlx5_flow_workspace *gc;
 	uint32_t inuse; /* can't create new flow with current. */
 	struct mlx5_flow flows[MLX5_NUM_MAX_DEV_FLOWS];
 	struct mlx5_flow_rss_desc rss_desc;
-	uint32_t rssq_num; /* Allocated queue num in rss_desc. */
 	uint32_t flow_idx; /* Intermediate device flow index. */
 	struct mlx5_flow_meter_info *fm; /* Pointer to the meter in flow. */
 	struct mlx5_flow_meter_policy *policy;
@@ -2022,6 +2022,8 @@ struct mlx5_flow_driver_ops {
 struct mlx5_flow_workspace *mlx5_flow_push_thread_workspace(void);
 void mlx5_flow_pop_thread_workspace(void);
 struct mlx5_flow_workspace *mlx5_flow_get_thread_workspace(void);
+void mlx5_flow_workspace_gc_release(void);
+
 __extension__
 struct flow_grp_info {
 	uint64_t external:1;
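The single-chunk allocation introduced in flow_alloc_thread_workspace() can
likewise be illustrated standalone. A minimal sketch, assuming a toy
`struct ws2` in place of `struct mlx5_flow_workspace` and a `MAX_RSS_QUEUES`
constant in place of `RTE_ETH_RSS_RETA_SIZE_512`; the open-coded rounding
stands in for `RTE_ALIGN` and the pointer arithmetic for `RTE_PTR_ADD`.

```c
#include <stdint.h>
#include <stdlib.h>

#define MAX_RSS_QUEUES 512	/* stands in for RTE_ETH_RSS_RETA_SIZE_512 */

/* Toy stand-in for struct mlx5_flow_workspace. */
struct ws2 {
	uint16_t *queue;	/* points into the tail of the same allocation */
};

static struct ws2 *
ws2_alloc(void)
{
	/* Round the struct size up to long alignment, mirroring
	 * RTE_ALIGN(sizeof(struct mlx5_flow_workspace), sizeof(long)). */
	size_t data_size = (sizeof(struct ws2) + sizeof(long) - 1) &
			   ~(sizeof(long) - 1);
	size_t queues_size = sizeof(uint16_t) * MAX_RSS_QUEUES;
	/* One calloc covers both the struct and the maximal queue array,
	 * so the array never needs a later realloc or lock-protected
	 * adjustment. */
	struct ws2 *w = calloc(1, data_size + queues_size);

	if (!w)
		return NULL;
	w->queue = (uint16_t *)((char *)w + data_size);
	return w;
}
```

Sizing for the maximum up front trades a few hundred bytes per thread for the
removal of the realloc-and-copy and spinlock costs on the flow-creation path.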