From patchwork Sat Jul 1 14:43:46 2023
X-Patchwork-Submitter: Bing Zhao
X-Patchwork-Id: 129175
X-Patchwork-Delegate: rasland@nvidia.com
From: Bing Zhao
Cc: Gregory Etelson
Subject: [PATCH] net/mlx5: fix flow workspace destruction
Date: Sat, 1 Jul 2023 17:43:46 +0300
Message-ID: <20230701144346.441037-1-bingz@nvidia.com>
X-Mailer: git-send-email 2.34.1
X-BeenThere: dev@dpdk.org
List-Id: DPDK patches and discussions

From: Gregory Etelson

PMD uses a pthread key to allocate and access the per-thread flow
workspace memory buffers. PMD registered a key destructor function to
clean up flow workspace buffers. However, the key destructor was not
called by the pthread library.

The patch keeps track of the per-thread flow workspaces in PMD.
Flow workspaces memory release is activated from the PMD destructor.

Meanwhile, the workspace buffer and the RSS queue array are allocated
in a single memory chunk with this patch.
The maximal number of queues RTE_ETH_RSS_RETA_SIZE_512 is chosen. Then
the workspace adjustment can be removed to reduce the software hiccup:
  1. realloc and content copy
  2. spinlock acquire and release

Fixes: 5d55a494f4e6 ("net/mlx5: split multi-thread flow handling per OS")
Cc: stable@dpdk.org

Signed-off-by: Gregory Etelson
Signed-off-by: Bing Zhao
---
 drivers/net/mlx5/linux/mlx5_flow_os.c |  2 +-
 drivers/net/mlx5/mlx5.c               |  1 +
 drivers/net/mlx5/mlx5_flow.c          | 76 +++++++++++----------------
 drivers/net/mlx5/mlx5_flow.h          |  4 +-
 4 files changed, 36 insertions(+), 47 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_flow_os.c b/drivers/net/mlx5/linux/mlx5_flow_os.c
index 3c9a823edf..b139bb75b9 100644
--- a/drivers/net/mlx5/linux/mlx5_flow_os.c
+++ b/drivers/net/mlx5/linux/mlx5_flow_os.c
@@ -51,7 +51,7 @@ mlx5_flow_os_validate_item_esp(const struct rte_flow_item *item,
 int
 mlx5_flow_os_init_workspace_once(void)
 {
-	if (rte_thread_key_create(&key_workspace, flow_release_workspace)) {
+	if (rte_thread_key_create(&key_workspace, NULL)) {
 		DRV_LOG(ERR, "Can't create flow workspace data thread key.");
 		rte_errno = ENOMEM;
 		return -rte_errno;
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 5f0aa296ba..fd9b76027d 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1838,6 +1838,7 @@ mlx5_free_shared_dev_ctx(struct mlx5_dev_ctx_shared *sh)
 	if (LIST_EMPTY(&mlx5_dev_ctx_list)) {
 		mlx5_os_net_cleanup();
 		mlx5_flow_os_release_workspace();
+		mlx5_flow_workspace_gc_release();
 	}
 	pthread_mutex_unlock(&mlx5_dev_ctx_list_mutex);
 	if (sh->flex_parsers_dv) {
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index cf83db7b60..b5874bbe22 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -7155,36 +7155,6 @@ flow_tunnel_from_rule(const struct mlx5_flow *flow)
 	return tunnel;
 }
 
-/**
- * Adjust flow RSS workspace if needed.
- *
- * @param wks
- *   Pointer to thread flow work space.
- * @param rss_desc
- *   Pointer to RSS descriptor.
- * @param[in] nrssq_num
- *   New RSS queue number.
- *
- * @return
- *   0 on success, -1 otherwise and rte_errno is set.
- */
-static int
-flow_rss_workspace_adjust(struct mlx5_flow_workspace *wks,
-			  struct mlx5_flow_rss_desc *rss_desc,
-			  uint32_t nrssq_num)
-{
-	if (likely(nrssq_num <= wks->rssq_num))
-		return 0;
-	rss_desc->queue = realloc(rss_desc->queue,
-			  sizeof(*rss_desc->queue) * RTE_ALIGN(nrssq_num, 2));
-	if (!rss_desc->queue) {
-		rte_errno = ENOMEM;
-		return -1;
-	}
-	wks->rssq_num = RTE_ALIGN(nrssq_num, 2);
-	return 0;
-}
-
 /**
  * Create a flow and add it to @p list.
  *
@@ -7303,8 +7273,7 @@ flow_list_create(struct rte_eth_dev *dev, enum mlx5_flow_type type,
 	if (attr->ingress)
 		rss = flow_get_rss_action(dev, p_actions_rx);
 	if (rss) {
-		if (flow_rss_workspace_adjust(wks, rss_desc, rss->queue_num))
-			return 0;
+		MLX5_ASSERT(rss->queue_num <= RTE_ETH_RSS_RETA_SIZE_512);
 		/*
 		 * The following information is required by
 		 * mlx5_flow_hashfields_adjust() in advance.
@@ -8072,12 +8041,34 @@ flow_release_workspace(void *data)
 
 	while (wks) {
 		next = wks->next;
-		free(wks->rss_desc.queue);
 		free(wks);
 		wks = next;
 	}
 }
 
+static struct mlx5_flow_workspace *gc_head = NULL;
+static rte_spinlock_t mlx5_flow_workspace_lock = RTE_SPINLOCK_INITIALIZER;
+
+static void
+mlx5_flow_workspace_gc_add(struct mlx5_flow_workspace *ws)
+{
+	rte_spinlock_lock(&mlx5_flow_workspace_lock);
+	ws->gc = gc_head;
+	gc_head = ws;
+	rte_spinlock_unlock(&mlx5_flow_workspace_lock);
+}
+
+void
+mlx5_flow_workspace_gc_release(void)
+{
+	while (gc_head) {
+		struct mlx5_flow_workspace *wks = gc_head;
+
+		gc_head = wks->gc;
+		flow_release_workspace(wks);
+	}
+}
+
 /**
  * Get thread specific current flow workspace.
  *
@@ -8103,23 +8094,17 @@ mlx5_flow_get_thread_workspace(void)
 static struct mlx5_flow_workspace*
 flow_alloc_thread_workspace(void)
 {
-	struct mlx5_flow_workspace *data = calloc(1, sizeof(*data));
+	size_t data_size = RTE_ALIGN(sizeof(struct mlx5_flow_workspace), sizeof(long));
+	size_t rss_queue_array_size = sizeof(uint16_t) * RTE_ETH_RSS_RETA_SIZE_512;
+	struct mlx5_flow_workspace *data = calloc(1, data_size +
+						  rss_queue_array_size);
 
 	if (!data) {
-		DRV_LOG(ERR, "Failed to allocate flow workspace "
-			"memory.");
+		DRV_LOG(ERR, "Failed to allocate flow workspace memory.");
 		return NULL;
 	}
-	data->rss_desc.queue = calloc(1,
-			sizeof(uint16_t) * MLX5_RSSQ_DEFAULT_NUM);
-	if (!data->rss_desc.queue)
-		goto err;
-	data->rssq_num = MLX5_RSSQ_DEFAULT_NUM;
+	data->rss_desc.queue = RTE_PTR_ADD(data, data_size);
 	return data;
-err:
-	free(data->rss_desc.queue);
-	free(data);
-	return NULL;
 }
 
 /**
@@ -8140,6 +8125,7 @@ mlx5_flow_push_thread_workspace(void)
 		data = flow_alloc_thread_workspace();
 		if (!data)
 			return NULL;
+		mlx5_flow_workspace_gc_add(data);
 	} else if (!curr->inuse) {
 		data = curr;
 	} else if (curr->next) {
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 003e7da3a6..62789853ab 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1496,10 +1496,10 @@ struct mlx5_flow_workspace {
 	/* If creating another flow in same thread, push new as stack. */
 	struct mlx5_flow_workspace *prev;
 	struct mlx5_flow_workspace *next;
+	struct mlx5_flow_workspace *gc;
 	uint32_t inuse; /* can't create new flow with current. */
 	struct mlx5_flow flows[MLX5_NUM_MAX_DEV_FLOWS];
 	struct mlx5_flow_rss_desc rss_desc;
-	uint32_t rssq_num; /* Allocated queue num in rss_desc. */
 	uint32_t flow_idx; /* Intermediate device flow index. */
 	struct mlx5_flow_meter_info *fm; /* Pointer to the meter in flow. */
 	struct mlx5_flow_meter_policy *policy;
@@ -2022,6 +2022,8 @@ struct mlx5_flow_driver_ops {
 struct mlx5_flow_workspace *mlx5_flow_push_thread_workspace(void);
 void mlx5_flow_pop_thread_workspace(void);
 struct mlx5_flow_workspace *mlx5_flow_get_thread_workspace(void);
+void mlx5_flow_workspace_gc_release(void);
+
 __extension__
 struct flow_grp_info {
 	uint64_t external:1;