From patchwork Thu Nov 4 12:33:07 2021
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 103749
X-Patchwork-Delegate: rasland@nvidia.com
From: Xueming Li
Cc: Lior Margalit, Slava Ovsiienko, Matan Azrad, Ori Kam
Date: Thu, 4 Nov 2021 20:33:07 +0800
Message-ID: <20211104123320.1638915-2-xuemingl@nvidia.com>
In-Reply-To: <20211104123320.1638915-1-xuemingl@nvidia.com>
References: <20210727034204.20649-1-xuemingl@nvidia.com>
 <20211104123320.1638915-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH v4 01/14] common/mlx5: introduce user index field
 in completion

On ConnectX devices the completion entry provides a dedicated 24-bit
field that is filled with a static value assigned at Receive Queue
creation time. This patch declares this field. It is a preparation step
for shared RQ support: the field is supposed to provide the actual port
index while completions of shared receive queues are handled.

Signed-off-by: Xueming Li
Acked-by: Slava Ovsiienko
---
 drivers/common/mlx5/mlx5_prm.h           | 8 +++++++-
 drivers/regex/mlx5/mlx5_regex_fastpath.c | 2 +-
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 8014ec2f925..c85634c774c 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -393,7 +393,13 @@ struct mlx5_cqe {
 	uint16_t hdr_type_etc;
 	uint16_t vlan_info;
 	uint8_t lro_num_seg;
-	uint8_t rsvd3[3];
+	union {
+		uint8_t user_index_bytes[3];
+		struct {
+			uint8_t user_index_hi;
+			uint16_t user_index_low;
+		} __rte_packed;
+	};
 	uint32_t flow_table_metadata;
 	uint8_t rsvd4[4];
 	uint32_t byte_cnt;
diff --git a/drivers/regex/mlx5/mlx5_regex_fastpath.c b/drivers/regex/mlx5/mlx5_regex_fastpath.c
index adb5343a46b..6836203ecf2 100644
--- a/drivers/regex/mlx5/mlx5_regex_fastpath.c
+++ b/drivers/regex/mlx5/mlx5_regex_fastpath.c
@@ -559,7 +559,7 @@ mlx5_regexdev_dequeue(struct rte_regexdev *dev, uint16_t qp_id,
 		uint16_t wq_counter
 			= (rte_be_to_cpu_16(cqe->wqe_counter) + 1) &
 			  MLX5_REGEX_MAX_WQE_INDEX;
-		size_t hw_qpid = cqe->rsvd3[2];
+		size_t hw_qpid = cqe->user_index_bytes[2];
 		struct mlx5_regex_hw_qp *qp_obj = &queue->qps[hw_qpid];
 		/* UMR mode WQE counter move as WQE set(4 WQEBBS).*/
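For reference, a consumer of the new field would combine the high byte
with the low 16 bits to recover the full 24-bit value. The helper below
is a minimal sketch, not part of the patch: the function name is
hypothetical, mlx5_prm.h is the driver-internal header changed above,
and the big-endian handling is assumed from the surrounding CQE layout.
This patch itself only reads the single byte user_index_bytes[2] in the
regex PMD.

/* Sketch only: assemble the 24-bit user index from the union declared
 * in mlx5_prm.h above. Assumes big-endian CQE fields, as elsewhere in
 * struct mlx5_cqe; the helper name is hypothetical. */
#include <rte_byteorder.h>
#include <mlx5_prm.h>

static inline uint32_t
mlx5_cqe_user_index(volatile struct mlx5_cqe *cqe)
{
	/* High byte first, then the 16 low bits in network order. */
	return ((uint32_t)cqe->user_index_hi << 16) |
	       rte_be_to_cpu_16(cqe->user_index_low);
}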
From patchwork Thu Nov 4 12:33:08 2021
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 103748
X-Patchwork-Delegate: rasland@nvidia.com
From: Xueming Li
Cc: Lior Margalit, David Christensen, Matan Azrad, Yongseok Koh
Date: Thu, 4 Nov 2021 20:33:08 +0800
Message-ID: <20211104123320.1638915-3-xuemingl@nvidia.com>
In-Reply-To: <20211104123320.1638915-1-xuemingl@nvidia.com>
References: <20210727034204.20649-1-xuemingl@nvidia.com>
 <20211104123320.1638915-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH v4 02/14] net/mlx5: fix field reference for PPC

This patch fixes a stale field reference in the Altivec (PPC)
vectorized Rx path. The previous patch replaced the rsvd3 field of
struct mlx5_cqe with a union for the CQE user index, so the metadata
load that addressed the CQE tail through rsvd3[9] now uses the
equivalent offset rsvd4[2].
Fixes: a18ac6113331 ("net/mlx5: add metadata support to Rx datapath")
Cc: viacheslavo@nvidia.com
Cc: stable@dpdk.org

Signed-off-by: Xueming Li
Acked-by: Slava Ovsiienko
Reviewed-by: David Christensen
---
 drivers/net/mlx5/mlx5_rxtx_vec_altivec.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h b/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
index bcf487c34e9..1d00c1c43d1 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
@@ -974,10 +974,10 @@ rxq_cq_process_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 			(vector unsigned short)cqe_tmp1, cqe_sel_mask1);
 		cqe_tmp2 = (vector unsigned char)(vector unsigned long){
 			*(__rte_aligned(8) unsigned long *)
-			&cq[pos + p3].rsvd3[9], 0LL};
+			&cq[pos + p3].rsvd4[2], 0LL};
 		cqe_tmp1 = (vector unsigned char)(vector unsigned long){
 			*(__rte_aligned(8) unsigned long *)
-			&cq[pos + p2].rsvd3[9], 0LL};
+			&cq[pos + p2].rsvd4[2], 0LL};
 		cqes[3] = (vector unsigned char)
 			vec_sel((vector unsigned short)cqes[3],
 			(vector unsigned short)cqe_tmp2,
@@ -1037,10 +1037,10 @@ rxq_cq_process_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 			(vector unsigned short)cqe_tmp1, cqe_sel_mask1);
 		cqe_tmp2 = (vector unsigned char)(vector unsigned long){
 			*(__rte_aligned(8) unsigned long *)
-			&cq[pos + p1].rsvd3[9], 0LL};
+			&cq[pos + p1].rsvd4[2], 0LL};
 		cqe_tmp1 = (vector unsigned char)(vector unsigned long){
 			*(__rte_aligned(8) unsigned long *)
-			&cq[pos].rsvd3[9], 0LL};
+			&cq[pos].rsvd4[2], 0LL};
 		cqes[1] = (vector unsigned char)
 			vec_sel((vector unsigned short)cqes[1],
 			(vector unsigned short)cqe_tmp2, cqe_sel_mask2);
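The old and the new references name the same byte; the self-contained
check below demonstrates the offset arithmetic. The struct is a
simplified stand-in for the tail of struct mlx5_cqe using the pre-patch
field names (an assumption of this note), not driver code.

/* Standalone sketch: rsvd3[9] and rsvd4[2] address the same byte.
 * Layout mirrors only the relevant tail of struct mlx5_cqe. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct cqe_tail {
	uint8_t  lro_num_seg;
	uint8_t  rsvd3[3];           /* replaced by the user index union */
	uint32_t flow_table_metadata;
	uint8_t  rsvd4[4];
	uint32_t byte_cnt;
};

int main(void)
{
	/* rsvd3 base + 9 == rsvd3(3) + flow_table_metadata(4) + 2 == rsvd4[2] */
	assert(offsetof(struct cqe_tail, rsvd3) + 9 ==
	       offsetof(struct cqe_tail, rsvd4) + 2);
	return 0;
}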
From patchwork Thu Nov 4 12:33:09 2021
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 103750
X-Patchwork-Delegate: rasland@nvidia.com
From: Xueming Li
Cc: Lior Margalit, Slava Ovsiienko, Matan Azrad, Ray Kinsella
Date: Thu, 4 Nov 2021 20:33:09 +0800
Message-ID: <20211104123320.1638915-4-xuemingl@nvidia.com>
In-Reply-To: <20211104123320.1638915-1-xuemingl@nvidia.com>
References: <20210727034204.20649-1-xuemingl@nvidia.com>
 <20211104123320.1638915-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH v4 03/14] common/mlx5: add basic receive memory pool support

The hardware Receive Memory Pool (RMP) object holds the destination for
incoming packets/messages that are routed to the RMP through RQs. RMP
enables sharing of memory across multiple Receive Queues: multiple
Receive Queues can be attached to the same RMP and consume memory from
that shared pool. When using RMPs, completions are reported to the CQ
pointed to by the RQ, and this Completion Queue can be shared as well.

This patch adds DevX support for the PRM RMP object.
Signed-off-by: Xueming Li
Acked-by: Slava Ovsiienko
---
 drivers/common/mlx5/mlx5_devx_cmds.c | 52 +++++++++++++++++
 drivers/common/mlx5/mlx5_devx_cmds.h | 16 ++++++
 drivers/common/mlx5/mlx5_prm.h       | 85 +++++++++++++++++++++++++++-
 drivers/common/mlx5/version.map      |  1 +
 4 files changed, 153 insertions(+), 1 deletion(-)

diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index 12c114a91b6..4ab3070da0c 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -836,6 +836,8 @@ mlx5_devx_cmd_query_hca_attr(void *ctx,
 			MLX5_GET(cmd_hca_cap, hcattr, flow_counter_bulk_alloc);
 	attr->flow_counters_dump = MLX5_GET(cmd_hca_cap, hcattr,
 					    flow_counters_dump);
+	attr->log_max_rmp = MLX5_GET(cmd_hca_cap, hcattr, log_max_rmp);
+	attr->mem_rq_rmp = MLX5_GET(cmd_hca_cap, hcattr, mem_rq_rmp);
 	attr->log_max_rqt_size = MLX5_GET(cmd_hca_cap, hcattr,
 					  log_max_rqt_size);
 	attr->eswitch_manager = MLX5_GET(cmd_hca_cap, hcattr, eswitch_manager);
@@ -1312,6 +1314,56 @@ mlx5_devx_cmd_modify_rq(struct mlx5_devx_obj *rq,
 }
 
 /**
+ * Create RMP using DevX API.
+ *
+ * @param[in] ctx
+ *   Context returned from mlx5 open_device() glue function.
+ * @param [in] rmp_attr
+ *   Pointer to create RMP attributes structure.
+ * @param [in] socket
+ *   CPU socket ID for allocations.
+ *
+ * @return
+ *   The DevX object created, NULL otherwise and rte_errno is set.
+ */
+struct mlx5_devx_obj *
+mlx5_devx_cmd_create_rmp(void *ctx,
+			 struct mlx5_devx_create_rmp_attr *rmp_attr,
+			 int socket)
+{
+	uint32_t in[MLX5_ST_SZ_DW(create_rmp_in)] = {0};
+	uint32_t out[MLX5_ST_SZ_DW(create_rmp_out)] = {0};
+	void *rmp_ctx, *wq_ctx;
+	struct mlx5_devx_wq_attr *wq_attr;
+	struct mlx5_devx_obj *rmp = NULL;
+
+	rmp = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rmp), 0, socket);
+	if (!rmp) {
+		DRV_LOG(ERR, "Failed to allocate RMP data");
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+	MLX5_SET(create_rmp_in, in, opcode, MLX5_CMD_OP_CREATE_RMP);
+	rmp_ctx = MLX5_ADDR_OF(create_rmp_in, in, ctx);
+	MLX5_SET(rmpc, rmp_ctx, state, rmp_attr->state);
+	MLX5_SET(rmpc, rmp_ctx, basic_cyclic_rcv_wqe,
+		 rmp_attr->basic_cyclic_rcv_wqe);
+	wq_ctx = MLX5_ADDR_OF(rmpc, rmp_ctx, wq);
+	wq_attr = &rmp_attr->wq_attr;
+	devx_cmd_fill_wq_data(wq_ctx, wq_attr);
+	rmp->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out,
+					      sizeof(out));
+	if (!rmp->obj) {
+		DRV_LOG(ERR, "Failed to create RMP using DevX");
+		rte_errno = errno;
+		mlx5_free(rmp);
+		return NULL;
+	}
+	rmp->id = MLX5_GET(create_rmp_out, out, rmpn);
+	return rmp;
+}
+
+/*
  * Create TIR using DevX API.
  *
  * @param[in] ctx
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index 2326f1e9686..86ee4f7b78b 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -152,6 +152,8 @@ mlx5_hca_parse_graph_node_base_hdr_len_mask
 struct mlx5_hca_attr {
 	uint32_t eswitch_manager:1;
 	uint32_t flow_counters_dump:1;
+	uint32_t mem_rq_rmp:1;
+	uint32_t log_max_rmp:5;
 	uint32_t log_max_rqt_size:5;
 	uint32_t parse_graph_flex_node:1;
 	uint8_t flow_counter_bulk_alloc_bitmap;
@@ -319,6 +321,17 @@ struct mlx5_devx_modify_rq_attr {
 	uint32_t lwm:16; /* Contained WQ lwm. */
 };
 
+/* Create RMP attributes structure, used by create RMP operation. */
+struct mlx5_devx_create_rmp_attr {
+	uint32_t rsvd0:8;
+	uint32_t state:4;
+	uint32_t rsvd1:20;
+	uint32_t basic_cyclic_rcv_wqe:1;
+	uint32_t rsvd4:31;
+	uint32_t rsvd8[10];
+	struct mlx5_devx_wq_attr wq_attr;
+};
+
 struct mlx5_rx_hash_field_select {
 	uint32_t l3_prot_type:1;
 	uint32_t l4_prot_type:1;
@@ -596,6 +609,9 @@ __rte_internal
 int mlx5_devx_cmd_modify_rq(struct mlx5_devx_obj *rq,
 			    struct mlx5_devx_modify_rq_attr *rq_attr);
 __rte_internal
+struct mlx5_devx_obj *mlx5_devx_cmd_create_rmp(void *ctx,
+			struct mlx5_devx_create_rmp_attr *rq_attr, int socket);
+__rte_internal
 struct mlx5_devx_obj *mlx5_devx_cmd_create_tir(void *ctx,
 					       struct mlx5_devx_tir_attr *tir_attr);
 __rte_internal
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index c85634c774c..304bcdf55a0 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -1069,6 +1069,10 @@ enum {
 	MLX5_CMD_OP_CREATE_RQ = 0x908,
 	MLX5_CMD_OP_MODIFY_RQ = 0x909,
 	MLX5_CMD_OP_QUERY_RQ = 0x90b,
+	MLX5_CMD_OP_CREATE_RMP = 0x90c,
+	MLX5_CMD_OP_MODIFY_RMP = 0x90d,
+	MLX5_CMD_OP_DESTROY_RMP = 0x90e,
+	MLX5_CMD_OP_QUERY_RMP = 0x90f,
 	MLX5_CMD_OP_CREATE_TIS = 0x912,
 	MLX5_CMD_OP_QUERY_TIS = 0x915,
 	MLX5_CMD_OP_CREATE_RQT = 0x916,
@@ -1569,7 +1573,8 @@ struct mlx5_ifc_cmd_hca_cap_bits {
 	u8 reserved_at_378[0x3];
 	u8 log_max_tis[0x5];
 	u8 basic_cyclic_rcv_wqe[0x1];
-	u8 reserved_at_381[0x2];
+	u8 reserved_at_381[0x1];
+	u8 mem_rq_rmp[0x1];
 	u8 log_max_rmp[0x5];
 	u8 reserved_at_388[0x3];
 	u8 log_max_rqt[0x5];
@@ -2243,6 +2248,84 @@ struct mlx5_ifc_query_rq_in_bits {
 	u8 reserved_at_60[0x20];
 };
 
+enum {
+	MLX5_RMPC_STATE_RDY = 0x1,
+	MLX5_RMPC_STATE_ERR = 0x3,
+};
+
+struct mlx5_ifc_rmpc_bits {
+	u8 reserved_at_0[0x8];
+	u8 state[0x4];
+	u8 reserved_at_c[0x14];
+	u8 basic_cyclic_rcv_wqe[0x1];
+	u8 reserved_at_21[0x1f];
+	u8 reserved_at_40[0x140];
+	struct mlx5_ifc_wq_bits wq;
+};
+
+struct mlx5_ifc_query_rmp_out_bits {
+	u8 status[0x8];
+	u8 reserved_at_8[0x18];
+	u8 syndrome[0x20];
+	u8 reserved_at_40[0xc0];
+	struct mlx5_ifc_rmpc_bits rmp_context;
+};
+
+struct mlx5_ifc_query_rmp_in_bits {
+	u8 opcode[0x10];
+	u8 reserved_at_10[0x10];
+	u8 reserved_at_20[0x10];
+	u8 op_mod[0x10];
+	u8 reserved_at_40[0x8];
+	u8 rmpn[0x18];
+	u8 reserved_at_60[0x20];
+};
+
+struct mlx5_ifc_modify_rmp_out_bits {
+	u8 status[0x8];
+	u8 reserved_at_8[0x18];
+	u8 syndrome[0x20];
+	u8 reserved_at_40[0x40];
+};
+
+struct mlx5_ifc_rmp_bitmask_bits {
+	u8 reserved_at_0[0x20];
+	u8 reserved_at_20[0x1f];
+	u8 lwm[0x1];
+};
+
+struct mlx5_ifc_modify_rmp_in_bits {
+	u8 opcode[0x10];
+	u8 uid[0x10];
+	u8 reserved_at_20[0x10];
+	u8 op_mod[0x10];
+	u8 rmp_state[0x4];
+	u8 reserved_at_44[0x4];
+	u8 rmpn[0x18];
+	u8 reserved_at_60[0x20];
+	struct mlx5_ifc_rmp_bitmask_bits bitmask;
+	u8 reserved_at_c0[0x40];
+	struct mlx5_ifc_rmpc_bits ctx;
+};
+
+struct mlx5_ifc_create_rmp_out_bits {
+	u8 status[0x8];
+	u8 reserved_at_8[0x18];
+	u8 syndrome[0x20];
+	u8 reserved_at_40[0x8];
+	u8 rmpn[0x18];
+	u8 reserved_at_60[0x20];
+};
+
+struct mlx5_ifc_create_rmp_in_bits {
+	u8 opcode[0x10];
+	u8 uid[0x10];
+	u8 reserved_at_20[0x10];
+	u8 op_mod[0x10];
+	u8 reserved_at_40[0xc0];
+	struct mlx5_ifc_rmpc_bits ctx;
+};
+
 struct mlx5_ifc_create_tis_out_bits {
 	u8 status[0x8];
 	u8 reserved_at_8[0x18];
diff --git a/drivers/common/mlx5/version.map b/drivers/common/mlx5/version.map
index 0ea8325f9ac..7265ff8c56f 100644
--- a/drivers/common/mlx5/version.map
+++ b/drivers/common/mlx5/version.map
@@ -30,6 +30,7 @@ INTERNAL {
 	mlx5_devx_cmd_create_geneve_tlv_option;
 	mlx5_devx_cmd_create_import_kek_obj;
 	mlx5_devx_cmd_create_qp;
+	mlx5_devx_cmd_create_rmp;
 	mlx5_devx_cmd_create_rq;
 	mlx5_devx_cmd_create_rqt;
 	mlx5_devx_cmd_create_sq;
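As a usage illustration, the new command could be driven as sketched
below. This is an assumption for illustration, not part of the patch:
example_create_rmp() is hypothetical, and the WQ attributes are
expected to come from the caller's umem registration (the next patch
adds the real in-tree caller).

/* Hypothetical caller sketch: create an RMP in ready state from
 * already prepared WQ attributes. "ctx" is the device context from
 * the mlx5 open_device() glue call. */
static struct mlx5_devx_obj *
example_create_rmp(void *ctx, struct mlx5_devx_wq_attr *wq_attr)
{
	struct mlx5_devx_create_rmp_attr rmp_attr = {
		.state = MLX5_RMPC_STATE_RDY,
		.basic_cyclic_rcv_wqe = 1,
	};

	rmp_attr.wq_attr = *wq_attr;
	/* On failure NULL is returned and rte_errno is set. */
	return mlx5_devx_cmd_create_rmp(ctx, &rmp_attr, SOCKET_ID_ANY);
}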
From patchwork Thu Nov 4 12:33:10 2021
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 103751
X-Patchwork-Delegate: rasland@nvidia.com
From: Xueming Li
Cc: Lior Margalit, Slava Ovsiienko, Matan Azrad
Date: Thu, 4 Nov 2021 20:33:10 +0800
Message-ID: <20211104123320.1638915-5-xuemingl@nvidia.com>
In-Reply-To: <20211104123320.1638915-1-xuemingl@nvidia.com>
References: <20210727034204.20649-1-xuemingl@nvidia.com>
 <20211104123320.1638915-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH v4 04/14] common/mlx5: support receive memory pool

The hardware Receive Memory Pool (RMP) object holds the destination for
incoming packets/messages that are routed to the RMP through RQs. RMP
enables sharing of memory across multiple Receive Queues: multiple
Receive Queues can be attached to the same RMP and consume memory from
that shared pool. When using RMPs, completions are reported to the CQ
pointed to by the RQ; the user index set at RQ creation time is carried
in the completion entry.

This patch enables the RMP-based RQ: an RMP is created when
mlx5_devx_rq.rmp is set.

Signed-off-by: Xueming Li
Acked-by: Slava Ovsiienko
---
 drivers/common/mlx5/mlx5_common_devx.c | 295 +++++++++++++++++++++----
 drivers/common/mlx5/mlx5_common_devx.h |  19 +-
 drivers/net/mlx5/mlx5_devx.c           |   4 +-
 3 files changed, 271 insertions(+), 47 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_common_devx.c b/drivers/common/mlx5/mlx5_common_devx.c
index 825f84b1833..85b5282061a 100644
--- a/drivers/common/mlx5/mlx5_common_devx.c
+++ b/drivers/common/mlx5/mlx5_common_devx.c
@@ -271,6 +271,39 @@ mlx5_devx_sq_create(void *ctx, struct mlx5_devx_sq *sq_obj, uint16_t log_wqbb_n,
 	return -rte_errno;
 }
 
+/**
+ * Destroy DevX Receive Queue resources.
+ *
+ * @param[in] rq_res
+ *   DevX RQ resource to destroy.
+ */
+static void
+mlx5_devx_wq_res_destroy(struct mlx5_devx_wq_res *rq_res)
+{
+	if (rq_res->umem_obj)
+		claim_zero(mlx5_os_umem_dereg(rq_res->umem_obj));
+	if (rq_res->umem_buf)
+		mlx5_free((void *)(uintptr_t)rq_res->umem_buf);
+	memset(rq_res, 0, sizeof(*rq_res));
+}
+
+/**
+ * Destroy DevX Receive Memory Pool.
+ *
+ * @param[in] rmp
+ *   DevX RMP to destroy.
+ */
+static void
+mlx5_devx_rmp_destroy(struct mlx5_devx_rmp *rmp)
+{
+	MLX5_ASSERT(rmp->ref_cnt == 0);
+	if (rmp->rmp) {
+		claim_zero(mlx5_devx_cmd_destroy(rmp->rmp));
+		rmp->rmp = NULL;
+	}
+	mlx5_devx_wq_res_destroy(&rmp->wq);
+}
+
 /**
  * Destroy DevX Queue Pair.
  *
@@ -389,55 +422,48 @@ mlx5_devx_qp_create(void *ctx, struct mlx5_devx_qp *qp_obj, uint16_t log_wqbb_n,
 void
 mlx5_devx_rq_destroy(struct mlx5_devx_rq *rq)
 {
-	if (rq->rq)
+	if (rq->rq) {
 		claim_zero(mlx5_devx_cmd_destroy(rq->rq));
-	if (rq->umem_obj)
-		claim_zero(mlx5_os_umem_dereg(rq->umem_obj));
-	if (rq->umem_buf)
-		mlx5_free((void *)(uintptr_t)rq->umem_buf);
+		rq->rq = NULL;
+		if (rq->rmp)
+			rq->rmp->ref_cnt--;
+	}
+	if (rq->rmp == NULL) {
+		mlx5_devx_wq_res_destroy(&rq->wq);
+	} else {
+		if (rq->rmp->ref_cnt == 0)
+			mlx5_devx_rmp_destroy(rq->rmp);
+	}
 }
 
 /**
- * Create Receive Queue using DevX API.
- *
- * Get a pointer to partially initialized attributes structure, and updates the
- * following fields:
- *   wq_umem_valid
- *   wq_umem_id
- *   wq_umem_offset
- *   dbr_umem_valid
- *   dbr_umem_id
- *   dbr_addr
- *   log_wq_pg_sz
- * All other fields are updated by caller.
+ * Create WQ resources using DevX API.
  *
  * @param[in] ctx
  *   Context returned from mlx5 open_device() glue function.
- * @param[in/out] rq_obj
- *   Pointer to RQ to create.
 * @param[in] wqe_size
  *   Size of WQE structure.
  * @param[in] log_wqbb_n
  *   Log of number of WQBBs in queue.
- * @param[in] attr
- *   Pointer to RQ attributes structure.
  * @param[in] socket
 *   Socket to use for allocation.
+ * @param[out] wq_attr
+ *   Pointer to WQ attributes structure.
+ * @param[out] wq_res
+ *   Pointer to WQ resource to create.
  *
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-int
-mlx5_devx_rq_create(void *ctx, struct mlx5_devx_rq *rq_obj, uint32_t wqe_size,
-		    uint16_t log_wqbb_n,
-		    struct mlx5_devx_create_rq_attr *attr, int socket)
+static int
+mlx5_devx_wq_init(void *ctx, uint32_t wqe_size, uint16_t log_wqbb_n, int socket,
+		  struct mlx5_devx_wq_attr *wq_attr,
+		  struct mlx5_devx_wq_res *wq_res)
 {
-	struct mlx5_devx_obj *rq = NULL;
 	struct mlx5dv_devx_umem *umem_obj = NULL;
 	void *umem_buf = NULL;
 	size_t alignment = MLX5_WQE_BUF_ALIGNMENT;
 	uint32_t umem_size, umem_dbrec;
-	uint16_t rq_size = 1 << log_wqbb_n;
 	int ret;
 
 	if (alignment == (size_t)-1) {
@@ -446,7 +472,7 @@ mlx5_devx_rq_create(struct mlx5_devx_rq *rq_obj, uint32_t wqe_size,
 		return -rte_errno;
 	}
 	/* Allocate memory buffer for WQEs and doorbell record. */
-	umem_size = wqe_size * rq_size;
+	umem_size = wqe_size * (1 << log_wqbb_n);
 	umem_dbrec = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
 	umem_size += MLX5_DBR_SIZE;
 	umem_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, umem_size,
@@ -464,14 +490,60 @@ mlx5_devx_rq_create(struct mlx5_devx_rq *rq_obj, uint32_t wqe_size,
 		rte_errno = errno;
 		goto error;
 	}
+	/* Fill WQ attributes for RQ/RMP object creation. */
+	wq_attr->wq_umem_valid = 1;
+	wq_attr->wq_umem_id = mlx5_os_get_umem_id(umem_obj);
+	wq_attr->wq_umem_offset = 0;
+	wq_attr->dbr_umem_valid = 1;
+	wq_attr->dbr_umem_id = wq_attr->wq_umem_id;
+	wq_attr->dbr_addr = umem_dbrec;
+	wq_attr->log_wq_pg_sz = MLX5_LOG_PAGE_SIZE;
 	/* Fill attributes for RQ object creation. */
-	attr->wq_attr.wq_umem_valid = 1;
-	attr->wq_attr.wq_umem_id = mlx5_os_get_umem_id(umem_obj);
-	attr->wq_attr.wq_umem_offset = 0;
-	attr->wq_attr.dbr_umem_valid = 1;
-	attr->wq_attr.dbr_umem_id = attr->wq_attr.wq_umem_id;
-	attr->wq_attr.dbr_addr = umem_dbrec;
-	attr->wq_attr.log_wq_pg_sz = MLX5_LOG_PAGE_SIZE;
+	wq_res->umem_buf = umem_buf;
+	wq_res->umem_obj = umem_obj;
+	wq_res->db_rec = RTE_PTR_ADD(umem_buf, umem_dbrec);
+	return 0;
+error:
+	ret = rte_errno;
+	if (umem_obj)
+		claim_zero(mlx5_os_umem_dereg(umem_obj));
+	if (umem_buf)
+		mlx5_free((void *)(uintptr_t)umem_buf);
+	rte_errno = ret;
+	return -rte_errno;
+}
+
+/**
+ * Create standalone Receive Queue using DevX API.
+ *
+ * @param[in] ctx
+ *   Context returned from mlx5 open_device() glue function.
+ * @param[in/out] rq_obj
+ *   Pointer to RQ to create.
+ * @param[in] wqe_size
+ *   Size of WQE structure.
+ * @param[in] log_wqbb_n
+ *   Log of number of WQBBs in queue.
+ * @param[in] attr
+ *   Pointer to RQ attributes structure.
+ * @param[in] socket
+ *   Socket to use for allocation.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_devx_rq_std_create(void *ctx, struct mlx5_devx_rq *rq_obj,
+			uint32_t wqe_size, uint16_t log_wqbb_n,
+			struct mlx5_devx_create_rq_attr *attr, int socket)
+{
+	struct mlx5_devx_obj *rq;
+	int ret;
+
+	ret = mlx5_devx_wq_init(ctx, wqe_size, log_wqbb_n, socket,
+				&attr->wq_attr, &rq_obj->wq);
+	if (ret != 0)
+		return ret;
 	/* Create receive queue object with DevX. */
 	rq = mlx5_devx_cmd_create_rq(ctx, attr, socket);
 	if (!rq) {
@@ -479,21 +551,160 @@ mlx5_devx_rq_create(void *ctx, struct mlx5_devx_rq *rq_obj, uint32_t wqe_size,
 		rte_errno = ENOMEM;
 		goto error;
 	}
-	rq_obj->umem_buf = umem_buf;
-	rq_obj->umem_obj = umem_obj;
 	rq_obj->rq = rq;
-	rq_obj->db_rec = RTE_PTR_ADD(rq_obj->umem_buf, umem_dbrec);
 	return 0;
 error:
 	ret = rte_errno;
-	if (umem_obj)
-		claim_zero(mlx5_os_umem_dereg(umem_obj));
-	if (umem_buf)
-		mlx5_free((void *)(uintptr_t)umem_buf);
+	mlx5_devx_wq_res_destroy(&rq_obj->wq);
+	rte_errno = ret;
+	return -rte_errno;
+}
+
+/**
+ * Create Receive Memory Pool using DevX API.
+ *
+ * @param[in] ctx
+ *   Context returned from mlx5 open_device() glue function.
+ * @param[in/out] rq_obj
+ *   Pointer to RQ to create.
+ * @param[in] wqe_size
+ *   Size of WQE structure.
+ * @param[in] log_wqbb_n
+ *   Log of number of WQBBs in queue.
+ * @param[in] attr
+ *   Pointer to RQ attributes structure.
+ * @param[in] socket
+ *   Socket to use for allocation.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_devx_rmp_create(void *ctx, struct mlx5_devx_rmp *rmp_obj,
+		     uint32_t wqe_size, uint16_t log_wqbb_n,
+		     struct mlx5_devx_wq_attr *wq_attr, int socket)
+{
+	struct mlx5_devx_create_rmp_attr rmp_attr = { 0 };
+	int ret;
+
+	if (rmp_obj->rmp != NULL)
+		return 0;
+	rmp_attr.wq_attr = *wq_attr;
+	ret = mlx5_devx_wq_init(ctx, wqe_size, log_wqbb_n, socket,
+				&rmp_attr.wq_attr, &rmp_obj->wq);
+	if (ret != 0)
+		return ret;
+	rmp_attr.state = MLX5_RMPC_STATE_RDY;
+	rmp_attr.basic_cyclic_rcv_wqe =
+		wq_attr->wq_type != MLX5_WQ_TYPE_CYCLIC_STRIDING_RQ;
+	/* Create receive memory pool object with DevX. */
+	rmp_obj->rmp = mlx5_devx_cmd_create_rmp(ctx, &rmp_attr, socket);
+	if (rmp_obj->rmp == NULL) {
+		DRV_LOG(ERR, "Can't create DevX RMP object.");
+		rte_errno = ENOMEM;
+		goto error;
+	}
+	return 0;
+error:
+	ret = rte_errno;
+	mlx5_devx_wq_res_destroy(&rmp_obj->wq);
+	rte_errno = ret;
+	return -rte_errno;
+}
+
+/**
+ * Create Shared Receive Queue based on RMP using DevX API.
+ *
+ * @param[in] ctx
+ *   Context returned from mlx5 open_device() glue function.
+ * @param[in/out] rq_obj
+ *   Pointer to RQ to create.
+ * @param[in] wqe_size
+ *   Size of WQE structure.
+ * @param[in] log_wqbb_n
+ *   Log of number of WQBBs in queue.
+ * @param[in] attr
+ *   Pointer to RQ attributes structure.
+ * @param[in] socket
+ *   Socket to use for allocation.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_devx_rq_shared_create(void *ctx, struct mlx5_devx_rq *rq_obj,
+			   uint32_t wqe_size, uint16_t log_wqbb_n,
+			   struct mlx5_devx_create_rq_attr *attr, int socket)
+{
+	struct mlx5_devx_obj *rq;
+	int ret;
+
+	ret = mlx5_devx_rmp_create(ctx, rq_obj->rmp, wqe_size, log_wqbb_n,
+				   &attr->wq_attr, socket);
+	if (ret != 0)
+		return ret;
+	attr->mem_rq_type = MLX5_RQC_MEM_RQ_TYPE_MEMORY_RQ_RMP;
+	attr->rmpn = rq_obj->rmp->rmp->id;
+	attr->flush_in_error_en = 0;
+	memset(&attr->wq_attr, 0, sizeof(attr->wq_attr));
+	/* Create receive queue object with DevX. */
+	rq = mlx5_devx_cmd_create_rq(ctx, attr, socket);
+	if (!rq) {
+		DRV_LOG(ERR, "Can't create DevX RMP RQ object.");
+		rte_errno = ENOMEM;
+		goto error;
+	}
+	rq_obj->rq = rq;
+	rq_obj->rmp->ref_cnt++;
+	return 0;
+error:
+	ret = rte_errno;
+	mlx5_devx_rq_destroy(rq_obj);
+	rte_errno = ret;
+	return -rte_errno;
+}
+
+/**
+ * Create Receive Queue using DevX API. Shared RQ is created only if rmp set.
+ *
+ * Get a pointer to partially initialized attributes structure, and updates the
+ * following fields:
+ *   wq_umem_valid
+ *   wq_umem_id
+ *   wq_umem_offset
+ *   dbr_umem_valid
+ *   dbr_umem_id
+ *   dbr_addr
+ *   log_wq_pg_sz
+ * All other fields are updated by caller.
+ *
+ * @param[in] ctx
+ *   Context returned from mlx5 open_device() glue function.
+ * @param[in/out] rq_obj
+ *   Pointer to RQ to create.
+ * @param[in] wqe_size
+ *   Size of WQE structure.
+ * @param[in] log_wqbb_n
+ *   Log of number of WQBBs in queue.
+ * @param[in] attr
+ *   Pointer to RQ attributes structure.
+ * @param[in] socket
+ *   Socket to use for allocation.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_devx_rq_create(void *ctx, struct mlx5_devx_rq *rq_obj,
+		    uint32_t wqe_size, uint16_t log_wqbb_n,
+		    struct mlx5_devx_create_rq_attr *attr, int socket)
+{
+	if (rq_obj->rmp == NULL)
+		return mlx5_devx_rq_std_create(ctx, rq_obj, wqe_size,
+					       log_wqbb_n, attr, socket);
+	return mlx5_devx_rq_shared_create(ctx, rq_obj, wqe_size,
+					  log_wqbb_n, attr, socket);
+}
 
 /**
  * Change QP state to RTS.
diff --git a/drivers/common/mlx5/mlx5_common_devx.h b/drivers/common/mlx5/mlx5_common_devx.h
index f699405f69b..7ceac040f8b 100644
--- a/drivers/common/mlx5/mlx5_common_devx.h
+++ b/drivers/common/mlx5/mlx5_common_devx.h
@@ -45,14 +45,27 @@ struct mlx5_devx_qp {
 	volatile uint32_t *db_rec; /* The QP doorbell record. */
 };
 
-/* DevX Receive Queue structure. */
-struct mlx5_devx_rq {
-	struct mlx5_devx_obj *rq; /* The RQ DevX object. */
+/* DevX Receive Queue resource structure. */
+struct mlx5_devx_wq_res {
 	void *umem_obj; /* The RQ umem object. */
 	volatile void *umem_buf;
 	volatile uint32_t *db_rec; /* The RQ doorbell record. */
 };
 
+/* DevX Receive Memory Pool structure. */
+struct mlx5_devx_rmp {
+	struct mlx5_devx_obj *rmp; /* The RMP DevX object. */
+	uint32_t ref_cnt; /* Reference count. */
+	struct mlx5_devx_wq_res wq;
+};
+
+/* DevX Receive Queue structure. */
+struct mlx5_devx_rq {
+	struct mlx5_devx_obj *rq; /* The RQ DevX object. */
+	struct mlx5_devx_rmp *rmp; /* Shared RQ RMP object. */
+	struct mlx5_devx_wq_res wq; /* WQ resource of standalone RQ. */
+};
+
 /* mlx5_common_devx.c */
 
 __rte_internal
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 424f77be790..443252df05d 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -515,8 +515,8 @@ mlx5_rxq_devx_obj_new(struct rte_eth_dev *dev, uint16_t idx)
 	ret = mlx5_devx_modify_rq(tmpl, MLX5_RXQ_MOD_RST2RDY);
 	if (ret)
 		goto error;
-	rxq_data->wqes = (void *)(uintptr_t)tmpl->rq_obj.umem_buf;
-	rxq_data->rq_db = (uint32_t *)(uintptr_t)tmpl->rq_obj.db_rec;
+	rxq_data->wqes = (void *)(uintptr_t)tmpl->rq_obj.wq.umem_buf;
+	rxq_data->rq_db = (uint32_t *)(uintptr_t)tmpl->rq_obj.wq.db_rec;
 	rxq_data->cq_arm_sn = 0;
 	rxq_data->cq_ci = 0;
 	mlx5_rxq_initialize(rxq_data);
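To see the sharing contract in one place: pointing several mlx5_devx_rq
objects at the same mlx5_devx_rmp before calling mlx5_devx_rq_create()
makes the first call create the RMP and later calls attach to it via
ref_cnt. The sketch below is hypothetical, not code from the patch; the
function name, WQE size and queue depth are placeholders.

/* Hypothetical sketch: two RQs sharing one RMP. After both calls
 * succeed, shared_rmp.ref_cnt == 2 and mlx5_devx_rq_destroy() frees
 * the RMP only when the last RQ is destroyed. */
static struct mlx5_devx_rmp shared_rmp; /* zero-initialized, rmp == NULL */

static int
example_create_shared_rqs(void *ctx, struct mlx5_devx_rq *rq0,
			  struct mlx5_devx_rq *rq1,
			  struct mlx5_devx_create_rq_attr *rq_attr)
{
	rq0->rmp = &shared_rmp;
	rq1->rmp = &shared_rmp;
	if (mlx5_devx_rq_create(ctx, rq0, sizeof(struct mlx5_wqe_data_seg),
				10, rq_attr, SOCKET_ID_ANY) != 0)
		return -rte_errno;	/* placeholder depth: 2^10 WQEs */
	if (mlx5_devx_rq_create(ctx, rq1, sizeof(struct mlx5_wqe_data_seg),
				10, rq_attr, SOCKET_ID_ANY) != 0) {
		mlx5_devx_rq_destroy(rq0);
		return -rte_errno;
	}
	return 0;
}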
From patchwork Thu Nov 4 12:33:11 2021
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 103753
X-Patchwork-Delegate: rasland@nvidia.com
From: Xueming Li
Cc: Lior Margalit, Slava Ovsiienko, Matan Azrad
Date: Thu, 4 Nov 2021 20:33:11 +0800
Message-ID: <20211104123320.1638915-6-xuemingl@nvidia.com>
In-Reply-To: <20211104123320.1638915-1-xuemingl@nvidia.com>
References: <20210727034204.20649-1-xuemingl@nvidia.com>
 <20211104123320.1638915-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH v4 05/14] net/mlx5: fix Rx queue memory allocation return value

If an error happened during Rx queue mbuf allocation, a boolean value
was returned. According to the function description, the return value
should be an errno value. This patch returns a negative errno value
instead.

Fixes: 0f20acbf5eda ("net/mlx5: implement vectorized MPRQ burst")
Cc: akozyrev@nvidia.com
Cc: stable@dpdk.org

Signed-off-by: Xueming Li
Acked-by: Slava Ovsiienko
---
 drivers/net/mlx5/mlx5_rxq.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 9220bb2c15c..4567b43c1b6 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -129,7 +129,7 @@ rxq_alloc_elts_mprq(struct mlx5_rxq_ctrl *rxq_ctrl)
  *   Pointer to RX queue structure.
  *
  * @return
- *   0 on success, errno value on failure.
+ *   0 on success, negative errno value on failure.
  */
 static int
 rxq_alloc_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl)
@@ -220,7 +220,7 @@ rxq_alloc_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl)
  *   Pointer to RX queue structure.
  *
  * @return
- *   0 on success, errno value on failure.
+ *   0 on success, negative errno value on failure.
  */
 int
 rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl)
@@ -233,7 +233,9 @@ rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl)
 	 */
 	if (mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq))
 		ret = rxq_alloc_elts_mprq(rxq_ctrl);
-	return (ret || rxq_alloc_elts_sprq(rxq_ctrl));
+	if (ret == 0)
+		ret = rxq_alloc_elts_sprq(rxq_ctrl);
+	return ret;
 }
 
 /**
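Outside the driver context, the fix above is easiest to read as the usual
idiom for chaining two allocators that both follow the 0-or-negative-errno
convention: run the second step only if the first one succeeded, and
propagate the first failure unchanged rather than collapsing it into a
boolean. A minimal standalone sketch of that idiom follows; alloc_a() and
alloc_b() are hypothetical helpers, not mlx5 code:

#include <errno.h>
#include <stdio.h>

/* Hypothetical allocation steps: each returns 0 on success or a
 * negative errno value on failure. */
static int alloc_a(void) { return 0; }
static int alloc_b(void) { return -ENOMEM; }

/* Chain the steps as rxq_alloc_elts() does after the fix: the second
 * allocator runs only if the first succeeded, and the first failure is
 * returned as-is. */
static int alloc_all(void)
{
	int ret = alloc_a();

	if (ret == 0)
		ret = alloc_b();
	return ret;
}

int main(void)
{
	int ret = alloc_all();

	if (ret < 0)
		printf("allocation failed: errno %d\n", -ret);
	return 0;
}

The old expression (ret || second_step()) lost the actual error code,
which is exactly what the one-sentence commit message calls out.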
From patchwork Thu Nov 4 12:33:12 2021
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 103752
X-Patchwork-Delegate: rasland@nvidia.com
From: Xueming Li
Cc: Lior Margalit, Slava Ovsiienko, Matan Azrad
Date: Thu, 4 Nov 2021 20:33:12 +0800
Message-ID: <20211104123320.1638915-7-xuemingl@nvidia.com>
In-Reply-To: <20211104123320.1638915-1-xuemingl@nvidia.com>
References: <20210727034204.20649-1-xuemingl@nvidia.com>
 <20211104123320.1638915-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH v4 06/14] net/mlx5: clean Rx queue code

This patch removes unused Rx queue code.

Signed-off-by: Xueming Li
Acked-by: Slava Ovsiienko
---
 drivers/net/mlx5/mlx5_rxq.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 4567b43c1b6..b2e4389ad60 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -674,9 +674,7 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		    struct rte_mempool *mp)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq = (*priv->rxqs)[idx];
-	struct mlx5_rxq_ctrl *rxq_ctrl =
-		container_of(rxq, struct mlx5_rxq_ctrl, rxq);
+	struct mlx5_rxq_ctrl *rxq_ctrl;
 	struct rte_eth_rxseg_split *rx_seg =
 			(struct rte_eth_rxseg_split *)conf->rx_seg;
 	struct rte_eth_rxseg_split rx_single = {.mp = mp};
@@ -743,9 +741,7 @@ mlx5_rx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t idx,
 			    const struct rte_eth_hairpin_conf *hairpin_conf)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq = (*priv->rxqs)[idx];
-	struct mlx5_rxq_ctrl *rxq_ctrl =
-		container_of(rxq, struct mlx5_rxq_ctrl, rxq);
+	struct mlx5_rxq_ctrl *rxq_ctrl;
 	int res;
 
 	res = mlx5_rx_queue_pre_setup(dev, idx, &desc);

From patchwork Thu Nov 4 12:33:13 2021
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 103754
X-Patchwork-Delegate: rasland@nvidia.com
From: Xueming Li
Cc: Lior Margalit, Slava Ovsiienko, Matan Azrad
Date: Thu, 4 Nov 2021 20:33:13 +0800
Message-ID: <20211104123320.1638915-8-xuemingl@nvidia.com>
In-Reply-To: <20211104123320.1638915-1-xuemingl@nvidia.com>
References: <20210727034204.20649-1-xuemingl@nvidia.com>
 <20211104123320.1638915-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH v4 07/14] net/mlx5: split Rx queue into shareable and private

To prepare for shared Rx queues, this patch splits RxQ data into
shareable and private parts. Struct mlx5_rxq_priv holds the per-queue
private data; struct mlx5_rxq_ctrl holds the shared queue resources
and data.

Signed-off-by: Xueming Li
Acked-by: Slava Ovsiienko
---
 drivers/net/mlx5/mlx5.c        |  4 +++
 drivers/net/mlx5/mlx5.h        |  5 ++-
 drivers/net/mlx5/mlx5_ethdev.c | 10 ++++++
 drivers/net/mlx5/mlx5_rx.h     | 17 +++++++--
 drivers/net/mlx5/mlx5_rxq.c    | 66 ++++++++++++++++++++++++++++------
 5 files changed, 88 insertions(+), 14 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index dc15688f216..374cc9757aa 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1700,6 +1700,10 @@ mlx5_dev_close(struct rte_eth_dev *dev)
 		mlx5_free(dev->intr_handle);
 		dev->intr_handle = NULL;
 	}
+	if (priv->rxq_privs != NULL) {
+		mlx5_free(priv->rxq_privs);
+		priv->rxq_privs = NULL;
+	}
 	if (priv->txqs != NULL) {
 		/* XXX race condition if mlx5_tx_burst() is still running. */
 		rte_delay_us_sleep(1000);
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 74af88ec194..4e99fe7d068 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1345,6 +1345,8 @@ enum mlx5_txq_modify_type {
 	MLX5_TXQ_MOD_ERR2RDY, /* modify state from error to ready. */
 };
 
+struct mlx5_rxq_priv;
+
 /* HW objects operations structure. */
 struct mlx5_obj_ops {
 	int (*rxq_obj_modify_vlan_strip)(struct mlx5_rxq_obj *rxq_obj, int on);
@@ -1408,7 +1410,8 @@ struct mlx5_priv {
 	/* RX/TX queues. */
 	unsigned int rxqs_n; /* RX queues array size. */
 	unsigned int txqs_n; /* TX queues array size. */
-	struct mlx5_rxq_data *(*rxqs)[]; /* RX queues. */
+	struct mlx5_rxq_priv *(*rxq_privs)[]; /* RX queue non-shared data. */
+	struct mlx5_rxq_data *(*rxqs)[]; /* (Shared) RX queues. */
 	struct mlx5_txq_data *(*txqs)[]; /* TX queues. */
 	struct rte_mempool *mprq_mp; /* Mempool for Multi-Packet RQ. */
 	struct rte_eth_rss_conf rss_conf; /* RSS configuration. */
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 81fa8845bb5..cde505955df 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -104,6 +104,16 @@ mlx5_dev_configure(struct rte_eth_dev *dev)
 	       MLX5_RSS_HASH_KEY_LEN);
 	priv->rss_conf.rss_key_len = MLX5_RSS_HASH_KEY_LEN;
 	priv->rss_conf.rss_hf = dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
+	priv->rxq_privs = mlx5_realloc(priv->rxq_privs,
+				       MLX5_MEM_RTE | MLX5_MEM_ZERO,
+				       sizeof(void *) * rxqs_n, 0,
+				       SOCKET_ID_ANY);
+	if (priv->rxq_privs == NULL) {
+		DRV_LOG(ERR, "port %u cannot allocate rxq private data",
+			dev->data->port_id);
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
 	priv->rxqs = (void *)dev->data->rx_queues;
 	priv->txqs = (void *)dev->data->tx_queues;
 	if (txqs_n != priv->txqs_n) {
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 69b1263339e..fa24f5cdf3a 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -150,10 +150,14 @@ struct mlx5_rxq_ctrl {
 	struct mlx5_rxq_data rxq; /* Data path structure. */
 	LIST_ENTRY(mlx5_rxq_ctrl) next; /* Pointer to the next element. */
 	uint32_t refcnt; /* Reference counter. */
+	LIST_HEAD(priv, mlx5_rxq_priv) owners; /* Owner rxq list. */
 	struct mlx5_rxq_obj *obj; /* Verbs/DevX elements. */
+	struct mlx5_dev_ctx_shared *sh; /* Shared context. */
 	struct mlx5_priv *priv; /* Back pointer to private data. */
 	enum mlx5_rxq_type type; /* Rxq type. */
 	unsigned int socket; /* CPU socket ID for allocations. */
+	uint32_t share_group; /* Group ID of shared RXQ. */
+	uint16_t share_qid; /* Shared RxQ ID in group. */
 	unsigned int irq:1; /* Whether IRQ is enabled. */
 	uint32_t flow_mark_n; /* Number of Mark/Flag flows using this Queue. */
 	uint32_t flow_tunnels_n[MLX5_FLOW_TUNNEL]; /* Tunnels counters. */
@@ -163,6 +167,14 @@ struct mlx5_rxq_ctrl {
 	uint32_t hairpin_status; /* Hairpin binding status. */
 };
 
+/* RX queue private data. */
+struct mlx5_rxq_priv {
+	uint16_t idx; /* Queue index. */
+	struct mlx5_rxq_ctrl *ctrl; /* Shared Rx Queue. */
+	LIST_ENTRY(mlx5_rxq_priv) owner_entry; /* Entry in shared rxq_ctrl. */
+	struct mlx5_priv *priv; /* Back pointer to private data. */
+};
+
 /* mlx5_rxq.c */
 
 extern uint8_t rss_hash_default_key[];
@@ -186,13 +198,14 @@ void mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev);
 int mlx5_rx_intr_enable(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int mlx5_rxq_obj_verify(struct rte_eth_dev *dev);
-struct mlx5_rxq_ctrl *mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx,
+struct mlx5_rxq_ctrl *mlx5_rxq_new(struct rte_eth_dev *dev,
+				   struct mlx5_rxq_priv *rxq,
 				   uint16_t desc, unsigned int socket,
 				   const struct rte_eth_rxconf *conf,
 				   const struct rte_eth_rxseg_split *rx_seg,
 				   uint16_t n_seg);
 struct mlx5_rxq_ctrl *mlx5_rxq_hairpin_new
-	(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
+	(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq, uint16_t desc,
 	 const struct rte_eth_hairpin_conf *hairpin_conf);
 struct mlx5_rxq_ctrl *mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx);
 int mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx);
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index b2e4389ad60..00df245a5c6 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -674,6 +674,7 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		    struct rte_mempool *mp)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_rxq_priv *rxq;
 	struct mlx5_rxq_ctrl *rxq_ctrl;
 	struct rte_eth_rxseg_split *rx_seg =
 			(struct rte_eth_rxseg_split *)conf->rx_seg;
@@ -708,10 +709,23 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	res = mlx5_rx_queue_pre_setup(dev, idx, &desc);
 	if (res)
 		return res;
-	rxq_ctrl = mlx5_rxq_new(dev, idx, desc, socket, conf, rx_seg, n_seg);
+	rxq = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*rxq), 0,
+			  SOCKET_ID_ANY);
+	if (!rxq) {
+		DRV_LOG(ERR, "port %u unable to allocate rx queue index %u private data",
+			dev->data->port_id, idx);
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
+	rxq->priv = priv;
+	rxq->idx = idx;
+	(*priv->rxq_privs)[idx] = rxq;
+	rxq_ctrl = mlx5_rxq_new(dev, rxq, desc, socket, conf, rx_seg, n_seg);
 	if (!rxq_ctrl) {
-		DRV_LOG(ERR, "port %u unable to allocate queue index %u",
+		DRV_LOG(ERR, "port %u unable to allocate rx queue index %u",
 			dev->data->port_id, idx);
+		mlx5_free(rxq);
+		(*priv->rxq_privs)[idx] = NULL;
 		rte_errno = ENOMEM;
 		return -rte_errno;
 	}
@@ -741,6 +755,7 @@ mlx5_rx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t idx,
 			    const struct rte_eth_hairpin_conf *hairpin_conf)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_rxq_priv *rxq;
 	struct mlx5_rxq_ctrl *rxq_ctrl;
 	int res;
 
@@ -776,14 +791,27 @@ mlx5_rx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t idx,
 			return -rte_errno;
 		}
 	}
-	rxq_ctrl = mlx5_rxq_hairpin_new(dev, idx, desc, hairpin_conf);
+	rxq = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*rxq), 0,
+			  SOCKET_ID_ANY);
+	if (!rxq) {
+		DRV_LOG(ERR, "port %u unable to allocate hairpin rx queue index %u private data",
+			dev->data->port_id, idx);
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
+	rxq->priv = priv;
+	rxq->idx = idx;
+	(*priv->rxq_privs)[idx] = rxq;
+	rxq_ctrl = mlx5_rxq_hairpin_new(dev, rxq, desc, hairpin_conf);
 	if (!rxq_ctrl) {
-		DRV_LOG(ERR, "port %u unable to allocate queue index %u",
+		DRV_LOG(ERR, "port %u unable to allocate hairpin queue index %u",
 			dev->data->port_id, idx);
+		mlx5_free(rxq);
+		(*priv->rxq_privs)[idx] = NULL;
 		rte_errno = ENOMEM;
 		return -rte_errno;
 	}
-	DRV_LOG(DEBUG, "port %u adding Rx queue %u to list",
Rx queue %u to list", dev->data->port_id, idx); (*priv->rxqs)[idx] = &rxq_ctrl->rxq; return 0; @@ -1319,8 +1347,8 @@ mlx5_max_lro_msg_size_adjust(struct rte_eth_dev *dev, uint16_t idx, * * @param dev * Pointer to Ethernet device. - * @param idx - * RX queue index. + * @param rxq + * RX queue private data. * @param desc * Number of descriptors to configure in queue. * @param socket @@ -1330,10 +1358,12 @@ mlx5_max_lro_msg_size_adjust(struct rte_eth_dev *dev, uint16_t idx, * A DPDK queue object on success, NULL otherwise and rte_errno is set. */ struct mlx5_rxq_ctrl * -mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, +mlx5_rxq_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq, + uint16_t desc, unsigned int socket, const struct rte_eth_rxconf *conf, const struct rte_eth_rxseg_split *rx_seg, uint16_t n_seg) { + uint16_t idx = rxq->idx; struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_rxq_ctrl *tmpl; unsigned int mb_len = rte_pktmbuf_data_room_size(rx_seg[0].mp); @@ -1377,6 +1407,9 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, rte_errno = ENOMEM; return NULL; } + LIST_INIT(&tmpl->owners); + rxq->ctrl = tmpl; + LIST_INSERT_HEAD(&tmpl->owners, rxq, owner_entry); MLX5_ASSERT(n_seg && n_seg <= MLX5_MAX_RXQ_NSEG); /* * Build the array of actual buffer offsets and lengths. @@ -1610,6 +1643,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, tmpl->rxq.rss_hash = !!priv->rss_conf.rss_hf && (!!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS)); tmpl->rxq.port_id = dev->data->port_id; + tmpl->sh = priv->sh; tmpl->priv = priv; tmpl->rxq.mp = rx_seg[0].mp; tmpl->rxq.elts_n = log2above(desc); @@ -1637,8 +1671,8 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, * * @param dev * Pointer to Ethernet device. - * @param idx - * RX queue index. + * @param rxq + * RX queue. * @param desc * Number of descriptors to configure in queue. * @param hairpin_conf @@ -1648,9 +1682,11 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, * A DPDK queue object on success, NULL otherwise and rte_errno is set. 
  */
 struct mlx5_rxq_ctrl *
-mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
+mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
+		     uint16_t desc,
 		     const struct rte_eth_hairpin_conf *hairpin_conf)
 {
+	uint16_t idx = rxq->idx;
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_rxq_ctrl *tmpl;
 
@@ -1660,10 +1696,14 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		rte_errno = ENOMEM;
 		return NULL;
 	}
+	LIST_INIT(&tmpl->owners);
+	rxq->ctrl = tmpl;
+	LIST_INSERT_HEAD(&tmpl->owners, rxq, owner_entry);
 	tmpl->type = MLX5_RXQ_TYPE_HAIRPIN;
 	tmpl->socket = SOCKET_ID_ANY;
 	tmpl->rxq.rss_hash = 0;
 	tmpl->rxq.port_id = dev->data->port_id;
+	tmpl->sh = priv->sh;
 	tmpl->priv = priv;
 	tmpl->rxq.mp = NULL;
 	tmpl->rxq.elts_n = log2above(desc);
@@ -1717,6 +1757,7 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_rxq_ctrl *rxq_ctrl;
+	struct mlx5_rxq_priv *rxq = (*priv->rxq_privs)[idx];
 
 	if (priv->rxqs == NULL || (*priv->rxqs)[idx] == NULL)
 		return 0;
@@ -1736,9 +1777,12 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 	if (!__atomic_load_n(&rxq_ctrl->refcnt, __ATOMIC_RELAXED)) {
 		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD)
 			mlx5_mr_btree_free(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
+		LIST_REMOVE(rxq, owner_entry);
 		LIST_REMOVE(rxq_ctrl, next);
 		mlx5_free(rxq_ctrl);
 		(*priv->rxqs)[idx] = NULL;
+		mlx5_free(rxq);
+		(*priv->rxq_privs)[idx] = NULL;
 	}
 	return 0;
 }
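The ownership model introduced by this patch is worth pausing on: one
shareable control structure can be owned by several per-port private queue
objects, linked through the same <sys/queue.h> LIST macros the diff uses
(LIST_INIT, LIST_INSERT_HEAD, LIST_REMOVE). A simplified standalone model
follows; the rxq_ctrl/rxq_priv names and the attach helper are illustrative
stand-ins, not the driver's real definitions:

#include <sys/queue.h>
#include <stdio.h>
#include <stdlib.h>

struct rxq_priv;

/* Shareable part: holds the queue resources and the list of owners. */
struct rxq_ctrl {
	LIST_HEAD(owners_head, rxq_priv) owners; /* All owning queues. */
};

/* Private part: one per port/queue, pointing back at the shared part. */
struct rxq_priv {
	unsigned int idx;                 /* Queue index in its port. */
	struct rxq_ctrl *ctrl;            /* Back pointer to shared part. */
	LIST_ENTRY(rxq_priv) owner_entry; /* Linkage in ctrl->owners. */
};

/* Attach a new private owner to a shared control structure. */
static struct rxq_priv *
rxq_attach(struct rxq_ctrl *ctrl, unsigned int idx)
{
	struct rxq_priv *rxq = calloc(1, sizeof(*rxq));

	if (rxq == NULL)
		return NULL;
	rxq->idx = idx;
	rxq->ctrl = ctrl;
	LIST_INSERT_HEAD(&ctrl->owners, rxq, owner_entry);
	return rxq;
}

int main(void)
{
	struct rxq_ctrl ctrl;
	struct rxq_priv *it;

	LIST_INIT(&ctrl.owners);
	rxq_attach(&ctrl, 0);
	rxq_attach(&ctrl, 1);
	LIST_FOREACH(it, &ctrl.owners, owner_entry)
		printf("owner queue %u shares one ctrl\n", it->idx);
	return 0;
}

In this patch every ctrl still has exactly one owner; the list only pays
off in the later shared-RxQ patches of the series, when several ports
attach to the same control structure.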
From patchwork Thu Nov 4 12:33:14 2021
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 103755
X-Patchwork-Delegate: rasland@nvidia.com
From: Xueming Li
Cc: Lior Margalit, Slava Ovsiienko, Matan Azrad
Date: Thu, 4 Nov 2021 20:33:14 +0800
Message-ID: <20211104123320.1638915-9-xuemingl@nvidia.com>
In-Reply-To: <20211104123320.1638915-1-xuemingl@nvidia.com>
References: <20210727034204.20649-1-xuemingl@nvidia.com>
 <20211104123320.1638915-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH v4 08/14] net/mlx5: move Rx queue reference count

The Rx queue reference count tracks references to the RQ object. To
prepare for shared Rx queues, this patch moves it from rxq_ctrl to the
Rx queue private data.

Signed-off-by: Xueming Li
Acked-by: Slava Ovsiienko
---
 drivers/net/mlx5/mlx5_rx.h      |   8 +-
 drivers/net/mlx5/mlx5_rxq.c     | 169 +++++++++++++++++++++-----------
 drivers/net/mlx5/mlx5_trigger.c |  57 +++++------
 3 files changed, 142 insertions(+), 92 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index fa24f5cdf3a..eccfbf1108d 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -149,7 +149,6 @@ enum mlx5_rxq_type {
 struct mlx5_rxq_ctrl {
 	struct mlx5_rxq_data rxq; /* Data path structure. */
 	LIST_ENTRY(mlx5_rxq_ctrl) next; /* Pointer to the next element. */
-	uint32_t refcnt; /* Reference counter. */
 	LIST_HEAD(priv, mlx5_rxq_priv) owners; /* Owner rxq list. */
 	struct mlx5_rxq_obj *obj; /* Verbs/DevX elements. */
 	struct mlx5_dev_ctx_shared *sh; /* Shared context. */
@@ -170,6 +169,7 @@ struct mlx5_rxq_ctrl {
 /* RX queue private data. */
 struct mlx5_rxq_priv {
 	uint16_t idx; /* Queue index. */
+	uint32_t refcnt; /* Reference counter. */
 	struct mlx5_rxq_ctrl *ctrl; /* Shared Rx Queue. */
 	LIST_ENTRY(mlx5_rxq_priv) owner_entry; /* Entry in shared rxq_ctrl. */
 	struct mlx5_priv *priv; /* Back pointer to private data. */
@@ -207,7 +207,11 @@ struct mlx5_rxq_ctrl *mlx5_rxq_new(struct rte_eth_dev *dev,
 struct mlx5_rxq_ctrl *mlx5_rxq_hairpin_new
 	(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq, uint16_t desc,
 	 const struct rte_eth_hairpin_conf *hairpin_conf);
-struct mlx5_rxq_ctrl *mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx);
+struct mlx5_rxq_priv *mlx5_rxq_ref(struct rte_eth_dev *dev, uint16_t idx);
+uint32_t mlx5_rxq_deref(struct rte_eth_dev *dev, uint16_t idx);
+struct mlx5_rxq_priv *mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx);
+struct mlx5_rxq_ctrl *mlx5_rxq_ctrl_get(struct rte_eth_dev *dev, uint16_t idx);
+struct mlx5_rxq_data *mlx5_rxq_data_get(struct rte_eth_dev *dev, uint16_t idx);
 int mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx);
 int mlx5_rxq_verify(struct rte_eth_dev *dev);
 int rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl);
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 00df245a5c6..8071ddbd61c 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -386,15 +386,13 @@ mlx5_get_rx_port_offloads(void)
 static int
 mlx5_rxq_releasable(struct rte_eth_dev *dev, uint16_t idx)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_ctrl *rxq_ctrl;
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
 
-	if (!(*priv->rxqs)[idx]) {
+	if (rxq == NULL) {
 		rte_errno = EINVAL;
 		return -rte_errno;
 	}
-	rxq_ctrl = container_of((*priv->rxqs)[idx], struct mlx5_rxq_ctrl, rxq);
-	return (__atomic_load_n(&rxq_ctrl->refcnt, __ATOMIC_RELAXED) == 1);
+	return (__atomic_load_n(&rxq->refcnt, __ATOMIC_RELAXED) == 1);
 }
 
 /* Fetches and drops all SW-owned and error CQEs to synchronize CQ. */
@@ -874,8 +872,8 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
 
 	for (i = 0; i != n; ++i) {
 		/* This rxq obj must not be released in this function. */
-		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_get(dev, i);
-		struct mlx5_rxq_obj *rxq_obj = rxq_ctrl ? rxq_ctrl->obj : NULL;
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, i);
+		struct mlx5_rxq_obj *rxq_obj = rxq ? rxq->ctrl->obj : NULL;
 		int rc;
 
 		/* Skip queues that cannot request interrupts. */
@@ -885,11 +883,9 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
 			if (rte_intr_vec_list_index_set(intr_handle, i,
 			   RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID))
 				return -rte_errno;
-			/* Decrease the rxq_ctrl's refcnt */
-			if (rxq_ctrl)
-				mlx5_rxq_release(dev, i);
 			continue;
 		}
+		mlx5_rxq_ref(dev, i);
 		if (count >= RTE_MAX_RXTX_INTR_VEC_ID) {
 			DRV_LOG(ERR,
 				"port %u too many Rx queues for interrupt"
@@ -954,7 +950,7 @@ mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev)
 		 * Need to access directly the queue to release the reference
 		 * kept in mlx5_rx_intr_vec_enable().
 		 */
-		mlx5_rxq_release(dev, i);
+		mlx5_rxq_deref(dev, i);
 	}
 free:
 	rte_intr_free_epoll_fd(intr_handle);
@@ -1003,19 +999,14 @@ mlx5_arm_cq(struct mlx5_rxq_data *rxq, int sq_n_rxq)
 int
 mlx5_rx_intr_enable(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
-	struct mlx5_rxq_ctrl *rxq_ctrl;
-
-	rxq_ctrl = mlx5_rxq_get(dev, rx_queue_id);
-	if (!rxq_ctrl)
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, rx_queue_id);
+	if (!rxq)
 		goto error;
-	if (rxq_ctrl->irq) {
-		if (!rxq_ctrl->obj) {
-			mlx5_rxq_release(dev, rx_queue_id);
+	if (rxq->ctrl->irq) {
+		if (!rxq->ctrl->obj)
 			goto error;
-		}
-		mlx5_arm_cq(&rxq_ctrl->rxq, rxq_ctrl->rxq.cq_arm_sn);
+		mlx5_arm_cq(&rxq->ctrl->rxq, rxq->ctrl->rxq.cq_arm_sn);
 	}
-	mlx5_rxq_release(dev, rx_queue_id);
 	return 0;
 error:
 	rte_errno = EINVAL;
@@ -1037,23 +1028,21 @@ int
 mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_ctrl *rxq_ctrl;
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, rx_queue_id);
 	int ret = 0;
 
-	rxq_ctrl = mlx5_rxq_get(dev, rx_queue_id);
-	if (!rxq_ctrl) {
+	if (!rxq) {
 		rte_errno = EINVAL;
 		return -rte_errno;
 	}
-	if (!rxq_ctrl->obj)
+	if (!rxq->ctrl->obj)
 		goto error;
-	if (rxq_ctrl->irq) {
-		ret = priv->obj_ops.rxq_event_get(rxq_ctrl->obj);
+	if (rxq->ctrl->irq) {
+		ret = priv->obj_ops.rxq_event_get(rxq->ctrl->obj);
 		if (ret < 0)
 			goto error;
-		rxq_ctrl->rxq.cq_arm_sn++;
+		rxq->ctrl->rxq.cq_arm_sn++;
 	}
-	mlx5_rxq_release(dev, rx_queue_id);
 	return 0;
 error:
 	/**
@@ -1064,12 +1053,9 @@ mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		rte_errno = errno;
 	else
 		rte_errno = EINVAL;
-	ret = rte_errno; /* Save rte_errno before cleanup. */
-	mlx5_rxq_release(dev, rx_queue_id);
-	if (ret != EAGAIN)
+	if (rte_errno != EAGAIN)
 		DRV_LOG(WARNING, "port %u unable to disable interrupt on Rx queue %d",
 			dev->data->port_id, rx_queue_id);
-	rte_errno = ret; /* Restore rte_errno. */
 	return -rte_errno;
 }
 
@@ -1657,7 +1643,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
 	tmpl->rxq.uar_lock_cq = &priv->sh->uar_lock_cq;
 #endif
 	tmpl->rxq.idx = idx;
-	__atomic_fetch_add(&tmpl->refcnt, 1, __ATOMIC_RELAXED);
+	mlx5_rxq_ref(dev, idx);
 	LIST_INSERT_HEAD(&priv->rxqsctrl, tmpl, next);
 	return tmpl;
 error:
@@ -1711,11 +1697,53 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
 	tmpl->rxq.mr_ctrl.cache_bh = (struct mlx5_mr_btree) { 0 };
 	tmpl->hairpin_conf = *hairpin_conf;
 	tmpl->rxq.idx = idx;
-	__atomic_fetch_add(&tmpl->refcnt, 1, __ATOMIC_RELAXED);
+	mlx5_rxq_ref(dev, idx);
 	LIST_INSERT_HEAD(&priv->rxqsctrl, tmpl, next);
 	return tmpl;
 }
 
+/**
+ * Increase Rx queue reference count.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   RX queue index.
+ *
+ * @return
+ *   A pointer to the queue if it exists, NULL otherwise.
+ */
+struct mlx5_rxq_priv *
+mlx5_rxq_ref(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
+
+	if (rxq != NULL)
+		__atomic_fetch_add(&rxq->refcnt, 1, __ATOMIC_RELAXED);
+	return rxq;
+}
+
+/**
+ * Dereference a Rx queue.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   RX queue index.
+ *
+ * @return
+ *   Updated reference count.
+ */
+uint32_t
+mlx5_rxq_deref(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
+
+	if (rxq == NULL)
+		return 0;
+	return __atomic_sub_fetch(&rxq->refcnt, 1, __ATOMIC_RELAXED);
+}
+
 /**
  * Get a Rx queue.
  *
@@ -1727,18 +1755,52 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
  * @return
  *   A pointer to the queue if it exists, NULL otherwise.
  */
-struct mlx5_rxq_ctrl *
+struct mlx5_rxq_priv *
 mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
-	struct mlx5_rxq_ctrl *rxq_ctrl = NULL;
 
-	if (rxq_data) {
-		rxq_ctrl = container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
-		__atomic_fetch_add(&rxq_ctrl->refcnt, 1, __ATOMIC_RELAXED);
-	}
-	return rxq_ctrl;
+	if (priv->rxq_privs == NULL)
+		return NULL;
+	return (*priv->rxq_privs)[idx];
+}
+
+/**
+ * Get Rx queue shareable control.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   RX queue index.
+ *
+ * @return
+ *   A pointer to the queue control if it exists, NULL otherwise.
+ */
+struct mlx5_rxq_ctrl *
+mlx5_rxq_ctrl_get(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
+
+	return rxq == NULL ? NULL : rxq->ctrl;
+}
+
+/**
+ * Get Rx queue shareable data.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   RX queue index.
+ *
+ * @return
+ *   A pointer to the queue data if it exists, NULL otherwise.
+ */
+struct mlx5_rxq_data *
+mlx5_rxq_data_get(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
+
+	return rxq == NULL ? NULL : &rxq->ctrl->rxq;
 }
 
 /**
@@ -1756,13 +1818,12 @@ int
 mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_ctrl *rxq_ctrl;
-	struct mlx5_rxq_priv *rxq = (*priv->rxq_privs)[idx];
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
+	struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl;
 
 	if (priv->rxqs == NULL || (*priv->rxqs)[idx] == NULL)
 		return 0;
-	rxq_ctrl = container_of((*priv->rxqs)[idx], struct mlx5_rxq_ctrl, rxq);
-	if (__atomic_sub_fetch(&rxq_ctrl->refcnt, 1, __ATOMIC_RELAXED) > 1)
+	if (mlx5_rxq_deref(dev, idx) > 1)
 		return 1;
 	if (rxq_ctrl->obj) {
 		priv->obj_ops.rxq_obj_release(rxq_ctrl->obj);
@@ -1774,7 +1835,7 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 		rxq_free_elts(rxq_ctrl);
 		dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STOPPED;
 	}
-	if (!__atomic_load_n(&rxq_ctrl->refcnt, __ATOMIC_RELAXED)) {
+	if (!__atomic_load_n(&rxq->refcnt, __ATOMIC_RELAXED)) {
 		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD)
 			mlx5_mr_btree_free(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
 		LIST_REMOVE(rxq, owner_entry);
@@ -1952,7 +2013,7 @@ mlx5_ind_table_obj_release(struct rte_eth_dev *dev,
 		return 1;
 	priv->obj_ops.ind_table_destroy(ind_tbl);
 	for (i = 0; i != ind_tbl->queues_n; ++i)
-		claim_nonzero(mlx5_rxq_release(dev, ind_tbl->queues[i]));
+		claim_nonzero(mlx5_rxq_deref(dev, ind_tbl->queues[i]));
 	mlx5_free(ind_tbl);
 	return 0;
 }
@@ -2009,7 +2070,7 @@ mlx5_ind_table_obj_setup(struct rte_eth_dev *dev,
 			       log2above(priv->config.ind_table_max_size);
 
 	for (i = 0; i != queues_n; ++i) {
-		if (!mlx5_rxq_get(dev, queues[i])) {
+		if (mlx5_rxq_ref(dev, queues[i]) == NULL) {
 			ret = -rte_errno;
 			goto error;
 		}
@@ -2022,7 +2083,7 @@ mlx5_ind_table_obj_setup(struct rte_eth_dev *dev,
 error:
 	err = rte_errno;
 	for (j = 0; j < i; j++)
-		mlx5_rxq_release(dev, ind_tbl->queues[j]);
+		mlx5_rxq_deref(dev, ind_tbl->queues[j]);
 	rte_errno = err;
 	DRV_LOG(DEBUG, "Port %u cannot setup indirection table.",
 		dev->data->port_id);
@@ -2118,7 +2179,7 @@ mlx5_ind_table_obj_modify(struct rte_eth_dev *dev,
 			  bool standalone)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	unsigned int i, j;
+	unsigned int i;
 	int ret = 0, err;
 	const unsigned int n = rte_is_power_of_2(queues_n) ?
 			       log2above(queues_n) :
@@ -2138,15 +2199,11 @@ mlx5_ind_table_obj_modify(struct rte_eth_dev *dev,
 	ret = priv->obj_ops.ind_table_modify(dev, n, queues, queues_n, ind_tbl);
 	if (ret)
 		goto error;
-	for (j = 0; j < ind_tbl->queues_n; j++)
-		mlx5_rxq_release(dev, ind_tbl->queues[j]);
 	ind_tbl->queues_n = queues_n;
 	ind_tbl->queues = queues;
 	return 0;
 error:
 	err = rte_errno;
-	for (j = 0; j < i; j++)
-		mlx5_rxq_release(dev, queues[j]);
 	rte_errno = err;
 	DRV_LOG(DEBUG, "Port %u cannot setup indirection table.",
 		dev->data->port_id);
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index ebeeae279e2..e5d74d275f8 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -201,10 +201,12 @@ mlx5_rxq_start(struct rte_eth_dev *dev)
 	DRV_LOG(DEBUG, "Port %u device_attr.max_sge is %d.",
 		dev->data->port_id, priv->sh->device_attr.max_sge);
 	for (i = 0; i != priv->rxqs_n; ++i) {
-		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_get(dev, i);
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_ref(dev, i);
+		struct mlx5_rxq_ctrl *rxq_ctrl;
 
-		if (!rxq_ctrl)
+		if (rxq == NULL)
 			continue;
+		rxq_ctrl = rxq->ctrl;
 		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
 			/*
 			 * Pre-register the mempools. Regardless of whether
@@ -266,6 +268,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 	struct mlx5_devx_modify_sq_attr sq_attr = { 0 };
 	struct mlx5_devx_modify_rq_attr rq_attr = { 0 };
 	struct mlx5_txq_ctrl *txq_ctrl;
+	struct mlx5_rxq_priv *rxq;
 	struct mlx5_rxq_ctrl *rxq_ctrl;
 	struct mlx5_devx_obj *sq;
 	struct mlx5_devx_obj *rq;
@@ -310,9 +313,8 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 		return -rte_errno;
 	}
 	sq = txq_ctrl->obj->sq;
-	rxq_ctrl = mlx5_rxq_get(dev,
-				txq_ctrl->hairpin_conf.peers[0].queue);
-	if (!rxq_ctrl) {
+	rxq = mlx5_rxq_get(dev, txq_ctrl->hairpin_conf.peers[0].queue);
+	if (rxq == NULL) {
 		mlx5_txq_release(dev, i);
 		rte_errno = EINVAL;
 		DRV_LOG(ERR, "port %u no rxq object found: %d",
@@ -320,6 +322,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 			txq_ctrl->hairpin_conf.peers[0].queue);
 		return -rte_errno;
 	}
+	rxq_ctrl = rxq->ctrl;
 	if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN ||
 	    rxq_ctrl->hairpin_conf.peers[0].queue != i) {
 		rte_errno = ENOMEM;
@@ -354,12 +357,10 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 	rxq_ctrl->hairpin_status = 1;
 	txq_ctrl->hairpin_status = 1;
 	mlx5_txq_release(dev, i);
-	mlx5_rxq_release(dev, txq_ctrl->hairpin_conf.peers[0].queue);
 	return 0;
 error:
 	mlx5_txq_release(dev, i);
-	mlx5_rxq_release(dev, txq_ctrl->hairpin_conf.peers[0].queue);
 	return -rte_errno;
 }
 
@@ -432,27 +433,26 @@ mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue,
 		peer_info->manual_bind = txq_ctrl->hairpin_conf.manual_bind;
 		mlx5_txq_release(dev, peer_queue);
 	} else { /* Peer port used as ingress. */
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, peer_queue);
 		struct mlx5_rxq_ctrl *rxq_ctrl;
 
-		rxq_ctrl = mlx5_rxq_get(dev, peer_queue);
-		if (rxq_ctrl == NULL) {
+		if (rxq == NULL) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "Failed to get port %u Rx queue %d",
 				dev->data->port_id, peer_queue);
 			return -rte_errno;
 		}
+		rxq_ctrl = rxq->ctrl;
 		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u queue %d is not a hairpin Rxq",
 				dev->data->port_id, peer_queue);
-			mlx5_rxq_release(dev, peer_queue);
 			return -rte_errno;
 		}
 		if (rxq_ctrl->obj == NULL || rxq_ctrl->obj->rq == NULL) {
 			rte_errno = ENOMEM;
 			DRV_LOG(ERR, "port %u no Rxq object found: %d",
 				dev->data->port_id, peer_queue);
-			mlx5_rxq_release(dev, peer_queue);
 			return -rte_errno;
 		}
 		peer_info->qp_id = rxq_ctrl->obj->rq->id;
@@ -460,7 +460,6 @@ mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue,
 		peer_info->peer_q = rxq_ctrl->hairpin_conf.peers[0].queue;
 		peer_info->tx_explicit = rxq_ctrl->hairpin_conf.tx_explicit;
 		peer_info->manual_bind = rxq_ctrl->hairpin_conf.manual_bind;
-		mlx5_rxq_release(dev, peer_queue);
 	}
 	return 0;
 }
@@ -559,34 +558,32 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
 		txq_ctrl->hairpin_status = 1;
 		mlx5_txq_release(dev, cur_queue);
 	} else {
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, cur_queue);
 		struct mlx5_rxq_ctrl *rxq_ctrl;
 		struct mlx5_devx_modify_rq_attr rq_attr = { 0 };
 
-		rxq_ctrl = mlx5_rxq_get(dev, cur_queue);
-		if (rxq_ctrl == NULL) {
+		if (rxq == NULL) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "Failed to get port %u Rx queue %d",
 				dev->data->port_id, cur_queue);
 			return -rte_errno;
 		}
+		rxq_ctrl = rxq->ctrl;
 		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u queue %d not a hairpin Rxq",
 				dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return -rte_errno;
 		}
 		if (rxq_ctrl->obj == NULL || rxq_ctrl->obj->rq == NULL) {
 			rte_errno = ENOMEM;
 			DRV_LOG(ERR, "port %u no Rxq object found: %d",
 				dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return -rte_errno;
 		}
 		if (rxq_ctrl->hairpin_status != 0) {
 			DRV_LOG(DEBUG, "port %u Rx queue %d is already bound",
 				dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return 0;
 		}
 		if (peer_info->tx_explicit !=
@@ -594,7 +591,6 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u Rx queue %d and peer Tx rule mode"
 				" mismatch", dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return -rte_errno;
 		}
 		if (peer_info->manual_bind !=
@@ -602,7 +598,6 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u Rx queue %d and peer binding mode"
 				" mismatch", dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return -rte_errno;
 		}
 		rq_attr.state = MLX5_SQC_STATE_RDY;
@@ -612,7 +607,6 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
 		ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr);
 		if (ret == 0)
 			rxq_ctrl->hairpin_status = 1;
-		mlx5_rxq_release(dev, cur_queue);
 	}
 	return ret;
 }
@@ -677,34 +671,32 @@ mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue,
 		txq_ctrl->hairpin_status = 0;
 		mlx5_txq_release(dev, cur_queue);
 	} else {
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, cur_queue);
 		struct mlx5_rxq_ctrl *rxq_ctrl;
 		struct mlx5_devx_modify_rq_attr rq_attr = { 0 };
 
-		rxq_ctrl = mlx5_rxq_get(dev, cur_queue);
-		if (rxq_ctrl == NULL) {
+		if (rxq == NULL) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "Failed to get port %u Rx queue %d",
 				dev->data->port_id, cur_queue);
 			return -rte_errno;
 		}
+		rxq_ctrl = rxq->ctrl;
 		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u queue %d not a hairpin Rxq",
 				dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return -rte_errno;
 		}
 		if (rxq_ctrl->hairpin_status == 0) {
 			DRV_LOG(DEBUG, "port %u Rx queue %d is already unbound",
 				dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return 0;
 		}
 		if (rxq_ctrl->obj == NULL || rxq_ctrl->obj->rq == NULL) {
 			rte_errno = ENOMEM;
 			DRV_LOG(ERR, "port %u no Rxq object found: %d",
 				dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return -rte_errno;
 		}
 		rq_attr.state = MLX5_SQC_STATE_RST;
@@ -712,7 +704,6 @@ mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue,
 		ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr);
 		if (ret == 0)
 			rxq_ctrl->hairpin_status = 0;
-		mlx5_rxq_release(dev, cur_queue);
 	}
 	return ret;
 }
@@ -1014,7 +1005,6 @@ mlx5_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports,
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_txq_ctrl *txq_ctrl;
-	struct mlx5_rxq_ctrl *rxq_ctrl;
 	uint32_t i;
 	uint16_t pp;
 	uint32_t bits[(RTE_MAX_ETHPORTS + 31) / 32] = {0};
@@ -1043,24 +1033,23 @@ mlx5_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports,
 		}
 	} else {
 		for (i = 0; i < priv->rxqs_n; i++) {
-			rxq_ctrl = mlx5_rxq_get(dev, i);
-			if (!rxq_ctrl)
+			struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, i);
+			struct mlx5_rxq_ctrl *rxq_ctrl;
+
+			if (rxq == NULL)
 				continue;
-			if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
-				mlx5_rxq_release(dev, i);
+			rxq_ctrl = rxq->ctrl;
+			if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN)
 				continue;
-			}
 			pp = rxq_ctrl->hairpin_conf.peers[0].port;
 			if (pp >= RTE_MAX_ETHPORTS) {
 				rte_errno = ERANGE;
-				mlx5_rxq_release(dev, i);
 				DRV_LOG(ERR, "port %hu queue %u peer port "
					"out of range %hu",
					priv->dev_data->port_id, i, pp);
 				return -rte_errno;
 			}
 			bits[pp / 32] |= 1 << (pp % 32);
-			mlx5_rxq_release(dev, i);
 		}
 	}
 	for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
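The counting scheme this patch introduces reduces to two helpers: ref()
takes a reference and returns the queue, deref() drops one and returns the
updated count so the caller can tell when the last reference went away.
The diff uses the same GCC __atomic builtins with relaxed ordering shown
below; the struct name, helper names, and the stubbed-out queue lookup in
this sketch are illustrative, not the driver's API:

#include <stdint.h>
#include <stdio.h>

/* Simplified stand-in for the per-queue private data. */
struct rxq_priv {
	uint32_t refcnt;
};

/* Take a reference; NULL-safe like mlx5_rxq_ref() after a failed lookup. */
static struct rxq_priv *
rxq_ref(struct rxq_priv *rxq)
{
	if (rxq != NULL)
		__atomic_fetch_add(&rxq->refcnt, 1, __ATOMIC_RELAXED);
	return rxq;
}

/* Drop a reference and return the count that remains. */
static uint32_t
rxq_deref(struct rxq_priv *rxq)
{
	if (rxq == NULL)
		return 0;
	return __atomic_sub_fetch(&rxq->refcnt, 1, __ATOMIC_RELAXED);
}

int main(void)
{
	struct rxq_priv q = { .refcnt = 0 };
	uint32_t left;

	rxq_ref(&q); /* Queue created: count is 1. */
	rxq_ref(&q); /* Second user, e.g. an indirection table entry. */
	left = rxq_deref(&q);
	printf("still referenced (%u left)\n", left);
	if (rxq_deref(&q) == 0)
		printf("last reference dropped, safe to free\n");
	return 0;
}

Splitting get() (plain lookup, no counting) from ref()/deref() is what lets
the patch delete all the release-on-every-exit-path calls in the hairpin
and interrupt code above: a lookup no longer implies a reference.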
From patchwork Thu Nov 4 12:33:15 2021
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 103756
X-Patchwork-Delegate: rasland@nvidia.com
From: Xueming Li
CC: Lior Margalit, Slava Ovsiienko, Matan Azrad
Date: Thu, 4 Nov 2021 20:33:15 +0800
Message-ID: <20211104123320.1638915-10-xuemingl@nvidia.com>
In-Reply-To: <20211104123320.1638915-1-xuemingl@nvidia.com>
References: <20210727034204.20649-1-xuemingl@nvidia.com> <20211104123320.1638915-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH v4 09/14] net/mlx5: move Rx queue hairpin info to private data

Hairpin info of an Rx queue can't be shared, so move it to the per-queue private data.

Signed-off-by: Xueming Li
Acked-by: Slava Ovsiienko
---
 drivers/net/mlx5/mlx5_rx.h      |  4 ++--
 drivers/net/mlx5/mlx5_rxq.c     | 13 +++++--------
 drivers/net/mlx5/mlx5_trigger.c | 24 ++++++++++++------------
 3 files changed, 19 insertions(+), 22 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h index eccfbf1108d..b21918223b8 100644 --- a/drivers/net/mlx5/mlx5_rx.h +++ b/drivers/net/mlx5/mlx5_rx.h @@ -162,8 +162,6 @@ struct mlx5_rxq_ctrl { uint32_t flow_tunnels_n[MLX5_FLOW_TUNNEL]; /* Tunnels counters. */ uint32_t wqn; /* WQ number. */ uint16_t dump_file_n; /* Number of dump files. */ - struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */ - uint32_t hairpin_status; /* Hairpin binding status. */ }; /* RX queue private data. */ @@ -173,6 +171,8 @@ struct mlx5_rxq_priv { struct mlx5_rxq_ctrl *ctrl; /* Shared Rx Queue. */ LIST_ENTRY(mlx5_rxq_priv) owner_entry; /* Entry in shared rxq_ctrl. */ struct mlx5_priv *priv; /* Back pointer to private data. 
*/ + struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */ + uint32_t hairpin_status; /* Hairpin binding status. */ }; /* mlx5_rxq.c */ diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index 8071ddbd61c..7b637fda643 100644 --- a/drivers/net/mlx5/mlx5_rxq.c +++ b/drivers/net/mlx5/mlx5_rxq.c @@ -1695,8 +1695,8 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq, tmpl->rxq.elts_n = log2above(desc); tmpl->rxq.elts = NULL; tmpl->rxq.mr_ctrl.cache_bh = (struct mlx5_mr_btree) { 0 }; - tmpl->hairpin_conf = *hairpin_conf; tmpl->rxq.idx = idx; + rxq->hairpin_conf = *hairpin_conf; mlx5_rxq_ref(dev, idx); LIST_INSERT_HEAD(&priv->rxqsctrl, tmpl, next); return tmpl; @@ -1913,14 +1913,11 @@ const struct rte_eth_hairpin_conf * mlx5_rxq_get_hairpin_conf(struct rte_eth_dev *dev, uint16_t idx) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_ctrl *rxq_ctrl = NULL; + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx); - if (idx < priv->rxqs_n && (*priv->rxqs)[idx]) { - rxq_ctrl = container_of((*priv->rxqs)[idx], - struct mlx5_rxq_ctrl, - rxq); - if (rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN) - return &rxq_ctrl->hairpin_conf; + if (idx < priv->rxqs_n && rxq != NULL) { + if (rxq->ctrl->type == MLX5_RXQ_TYPE_HAIRPIN) + return &rxq->hairpin_conf; } return NULL; } diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c index e5d74d275f8..a124f74fcda 100644 --- a/drivers/net/mlx5/mlx5_trigger.c +++ b/drivers/net/mlx5/mlx5_trigger.c @@ -324,7 +324,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev) } rxq_ctrl = rxq->ctrl; if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN || - rxq_ctrl->hairpin_conf.peers[0].queue != i) { + rxq->hairpin_conf.peers[0].queue != i) { rte_errno = ENOMEM; DRV_LOG(ERR, "port %u Tx queue %d can't be binded to " "Rx queue %d", dev->data->port_id, @@ -354,7 +354,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev) if (ret) goto error; /* Qs with auto-bind will be destroyed directly. 
*/ - rxq_ctrl->hairpin_status = 1; + rxq->hairpin_status = 1; txq_ctrl->hairpin_status = 1; mlx5_txq_release(dev, i); } @@ -457,9 +457,9 @@ mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue, } peer_info->qp_id = rxq_ctrl->obj->rq->id; peer_info->vhca_id = priv->config.hca_attr.vhca_id; - peer_info->peer_q = rxq_ctrl->hairpin_conf.peers[0].queue; - peer_info->tx_explicit = rxq_ctrl->hairpin_conf.tx_explicit; - peer_info->manual_bind = rxq_ctrl->hairpin_conf.manual_bind; + peer_info->peer_q = rxq->hairpin_conf.peers[0].queue; + peer_info->tx_explicit = rxq->hairpin_conf.tx_explicit; + peer_info->manual_bind = rxq->hairpin_conf.manual_bind; } return 0; } @@ -581,20 +581,20 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue, dev->data->port_id, cur_queue); return -rte_errno; } - if (rxq_ctrl->hairpin_status != 0) { + if (rxq->hairpin_status != 0) { DRV_LOG(DEBUG, "port %u Rx queue %d is already bound", dev->data->port_id, cur_queue); return 0; } if (peer_info->tx_explicit != - rxq_ctrl->hairpin_conf.tx_explicit) { + rxq->hairpin_conf.tx_explicit) { rte_errno = EINVAL; DRV_LOG(ERR, "port %u Rx queue %d and peer Tx rule mode" " mismatch", dev->data->port_id, cur_queue); return -rte_errno; } if (peer_info->manual_bind != - rxq_ctrl->hairpin_conf.manual_bind) { + rxq->hairpin_conf.manual_bind) { rte_errno = EINVAL; DRV_LOG(ERR, "port %u Rx queue %d and peer binding mode" " mismatch", dev->data->port_id, cur_queue); @@ -606,7 +606,7 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue, rq_attr.hairpin_peer_vhca = peer_info->vhca_id; ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr); if (ret == 0) - rxq_ctrl->hairpin_status = 1; + rxq->hairpin_status = 1; } return ret; } @@ -688,7 +688,7 @@ mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue, dev->data->port_id, cur_queue); return -rte_errno; } - if (rxq_ctrl->hairpin_status == 0) { + if (rxq->hairpin_status == 0) { DRV_LOG(DEBUG, "port %u Rx queue %d is already unbound", dev->data->port_id, cur_queue); return 0; @@ -703,7 +703,7 @@ mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue, rq_attr.rq_state = MLX5_SQC_STATE_RST; ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr); if (ret == 0) - rxq_ctrl->hairpin_status = 0; + rxq->hairpin_status = 0; } return ret; } @@ -1041,7 +1041,7 @@ mlx5_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports, rxq_ctrl = rxq->ctrl; if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) continue; - pp = rxq_ctrl->hairpin_conf.peers[0].port; + pp = rxq->hairpin_conf.peers[0].port; if (pp >= RTE_MAX_ETHPORTS) { rte_errno = ERANGE; DRV_LOG(ERR, "port %hu queue %u peer port "
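The layout change made by this patch, sketched with simplified stand-in types rather than the driver's real definitions: state that is identical for every owner of a shared queue stays in the control structure, while the hairpin settings, which differ per owning port/queue, move into the private handle.

	/* Simplified stand-ins illustrating the split after this patch. */
	struct hairpin_conf_s {
		unsigned short peer_queue;   /* peers differ per owning queue */
		int tx_explicit;
		int manual_bind;
	};

	struct rxq_ctrl_s {                  /* shareable across owners */
		int type;
		unsigned int wqn;            /* one HW object for all owners */
	};

	struct rxq_priv_s {                  /* one per (port, queue index) */
		struct rxq_ctrl_s *ctrl;             /* shared part */
		struct hairpin_conf_s hairpin_conf;  /* moved here */
		unsigned int hairpin_status;         /* moved here */
	};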
From patchwork Thu Nov 4 12:33:16 2021
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 103757
X-Patchwork-Delegate: rasland@nvidia.com
From: Xueming Li
CC: Lior Margalit, Slava Ovsiienko, Matan Azrad
Date: Thu, 4 Nov 2021 20:33:16 +0800
Message-ID: <20211104123320.1638915-11-xuemingl@nvidia.com>
In-Reply-To: <20211104123320.1638915-1-xuemingl@nvidia.com>
References: <20210727034204.20649-1-xuemingl@nvidia.com> <20211104123320.1638915-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH v4 10/14] net/mlx5: remove port info from shareable Rx queue

To prepare for shared Rx queues, remove the port info from the shareable Rx queue control structure.
Signed-off-by: Xueming Li Acked-by: Slava Ovsiienko --- drivers/net/mlx5/mlx5_devx.c | 2 +- drivers/net/mlx5/mlx5_rx.c | 15 +++-------- drivers/net/mlx5/mlx5_rx.h | 7 ++++-- drivers/net/mlx5/mlx5_rxq.c | 43 ++++++++++++++++++++++---------- drivers/net/mlx5/mlx5_rxtx_vec.c | 2 +- drivers/net/mlx5/mlx5_trigger.c | 13 +++++----- 6 files changed, 47 insertions(+), 35 deletions(-) diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c index 443252df05d..8b3651f5034 100644 --- a/drivers/net/mlx5/mlx5_devx.c +++ b/drivers/net/mlx5/mlx5_devx.c @@ -918,7 +918,7 @@ mlx5_rxq_devx_obj_drop_create(struct rte_eth_dev *dev) } rxq->rxq_ctrl = rxq_ctrl; rxq_ctrl->type = MLX5_RXQ_TYPE_STANDARD; - rxq_ctrl->priv = priv; + rxq_ctrl->sh = priv->sh; rxq_ctrl->obj = rxq; rxq_data = &rxq_ctrl->rxq; /* Create CQ using DevX API. */ diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c index 258a6453144..d41905a2a04 100644 --- a/drivers/net/mlx5/mlx5_rx.c +++ b/drivers/net/mlx5/mlx5_rx.c @@ -118,15 +118,7 @@ int mlx5_rx_descriptor_status(void *rx_queue, uint16_t offset) { struct mlx5_rxq_data *rxq = rx_queue; - struct mlx5_rxq_ctrl *rxq_ctrl = - container_of(rxq, struct mlx5_rxq_ctrl, rxq); - struct rte_eth_dev *dev = ETH_DEV(rxq_ctrl->priv); - if (dev->rx_pkt_burst == NULL || - dev->rx_pkt_burst == removed_rx_burst) { - rte_errno = ENOTSUP; - return -rte_errno; - } if (offset >= (1 << rxq->cqe_n)) { rte_errno = EINVAL; return -rte_errno; @@ -438,10 +430,10 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec) sm.is_wq = 1; sm.queue_id = rxq->idx; sm.state = IBV_WQS_RESET; - if (mlx5_queue_state_modify(ETH_DEV(rxq_ctrl->priv), &sm)) + if (mlx5_queue_state_modify(RXQ_DEV(rxq_ctrl), &sm)) return -1; if (rxq_ctrl->dump_file_n < - rxq_ctrl->priv->config.max_dump_files_num) { + RXQ_PORT(rxq_ctrl)->config.max_dump_files_num) { MKSTR(err_str, "Unexpected CQE error syndrome " "0x%02x CQN = %u RQN = %u wqe_counter = %u" " rq_ci = %u cq_ci = %u", u.err_cqe->syndrome, @@ -478,8 +470,7 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec) sm.is_wq = 1; sm.queue_id = rxq->idx; sm.state = IBV_WQS_RDY; - if (mlx5_queue_state_modify(ETH_DEV(rxq_ctrl->priv), - &sm)) + if (mlx5_queue_state_modify(RXQ_DEV(rxq_ctrl), &sm)) return -1; if (vec) { const uint32_t elts_n = diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h index b21918223b8..c04c0c73349 100644 --- a/drivers/net/mlx5/mlx5_rx.h +++ b/drivers/net/mlx5/mlx5_rx.h @@ -22,6 +22,10 @@ /* Support tunnel matching. */ #define MLX5_FLOW_TUNNEL 10 +#define RXQ_PORT(rxq_ctrl) LIST_FIRST(&(rxq_ctrl)->owners)->priv +#define RXQ_DEV(rxq_ctrl) ETH_DEV(RXQ_PORT(rxq_ctrl)) +#define RXQ_PORT_ID(rxq_ctrl) PORT_ID(RXQ_PORT(rxq_ctrl)) + /* First entry must be NULL for comparison. */ #define mlx5_mr_btree_len(bt) ((bt)->len - 1) @@ -152,7 +156,6 @@ struct mlx5_rxq_ctrl { LIST_HEAD(priv, mlx5_rxq_priv) owners; /* Owner rxq list. */ struct mlx5_rxq_obj *obj; /* Verbs/DevX elements. */ struct mlx5_dev_ctx_shared *sh; /* Shared context. */ - struct mlx5_priv *priv; /* Back pointer to private data. */ enum mlx5_rxq_type type; /* Rxq type. */ unsigned int socket; /* CPU socket ID for allocations. */ uint32_t share_group; /* Group ID of shared RXQ. */ @@ -318,7 +321,7 @@ mlx5_rx_addr2mr(struct mlx5_rxq_data *rxq, uintptr_t addr) */ rxq_ctrl = container_of(rxq, struct mlx5_rxq_ctrl, rxq); mp = mlx5_rxq_mprq_enabled(rxq) ? 
rxq->mprq_mp : rxq->mp; - return mlx5_mr_mempool2mr_bh(&rxq_ctrl->priv->sh->cdev->mr_scache, + return mlx5_mr_mempool2mr_bh(&rxq_ctrl->sh->cdev->mr_scache, mr_ctrl, mp, addr); } diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index 7b637fda643..5a20966e2ca 100644 --- a/drivers/net/mlx5/mlx5_rxq.c +++ b/drivers/net/mlx5/mlx5_rxq.c @@ -148,8 +148,14 @@ rxq_alloc_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl) buf = rte_pktmbuf_alloc(seg->mp); if (buf == NULL) { - DRV_LOG(ERR, "port %u empty mbuf pool", - PORT_ID(rxq_ctrl->priv)); + if (rxq_ctrl->share_group == 0) + DRV_LOG(ERR, "port %u queue %u empty mbuf pool", + RXQ_PORT_ID(rxq_ctrl), + rxq_ctrl->rxq.idx); + else + DRV_LOG(ERR, "share group %u queue %u empty mbuf pool", + rxq_ctrl->share_group, + rxq_ctrl->share_qid); rte_errno = ENOMEM; goto error; } @@ -193,11 +199,16 @@ rxq_alloc_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl) for (j = 0; j < MLX5_VPMD_DESCS_PER_LOOP; ++j) (*rxq->elts)[elts_n + j] = &rxq->fake_mbuf; } - DRV_LOG(DEBUG, - "port %u SPRQ queue %u allocated and configured %u segments" - " (max %u packets)", - PORT_ID(rxq_ctrl->priv), rxq_ctrl->rxq.idx, elts_n, - elts_n / (1 << rxq_ctrl->rxq.sges_n)); + if (rxq_ctrl->share_group == 0) + DRV_LOG(DEBUG, + "port %u SPRQ queue %u allocated and configured %u segments (max %u packets)", + RXQ_PORT_ID(rxq_ctrl), rxq_ctrl->rxq.idx, elts_n, + elts_n / (1 << rxq_ctrl->rxq.sges_n)); + else + DRV_LOG(DEBUG, + "share group %u SPRQ queue %u allocated and configured %u segments (max %u packets)", + rxq_ctrl->share_group, rxq_ctrl->share_qid, elts_n, + elts_n / (1 << rxq_ctrl->rxq.sges_n)); return 0; error: err = rte_errno; /* Save rte_errno before cleanup. */ @@ -207,8 +218,12 @@ rxq_alloc_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl) rte_pktmbuf_free_seg((*rxq_ctrl->rxq.elts)[i]); (*rxq_ctrl->rxq.elts)[i] = NULL; } - DRV_LOG(DEBUG, "port %u SPRQ queue %u failed, freed everything", - PORT_ID(rxq_ctrl->priv), rxq_ctrl->rxq.idx); + if (rxq_ctrl->share_group == 0) + DRV_LOG(DEBUG, "port %u SPRQ queue %u failed, freed everything", + RXQ_PORT_ID(rxq_ctrl), rxq_ctrl->rxq.idx); + else + DRV_LOG(DEBUG, "share group %u SPRQ queue %u failed, freed everything", + rxq_ctrl->share_group, rxq_ctrl->share_qid); rte_errno = err; /* Restore rte_errno. 
*/ return -rte_errno; } @@ -284,8 +299,12 @@ rxq_free_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl) uint16_t used = q_n - (elts_ci - rxq->rq_pi); uint16_t i; - DRV_LOG(DEBUG, "port %u Rx queue %u freeing %d WRs", - PORT_ID(rxq_ctrl->priv), rxq->idx, q_n); + if (rxq_ctrl->share_group == 0) + DRV_LOG(DEBUG, "port %u Rx queue %u freeing %d WRs", + RXQ_PORT_ID(rxq_ctrl), rxq->idx, q_n); + else + DRV_LOG(DEBUG, "share group %u Rx queue %u freeing %d WRs", + rxq_ctrl->share_group, rxq_ctrl->share_qid, q_n); if (rxq->elts == NULL) return; /** @@ -1630,7 +1649,6 @@ mlx5_rxq_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq, (!!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS)); tmpl->rxq.port_id = dev->data->port_id; tmpl->sh = priv->sh; - tmpl->priv = priv; tmpl->rxq.mp = rx_seg[0].mp; tmpl->rxq.elts_n = log2above(desc); tmpl->rxq.rq_repl_thresh = @@ -1690,7 +1708,6 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq, tmpl->rxq.rss_hash = 0; tmpl->rxq.port_id = dev->data->port_id; tmpl->sh = priv->sh; - tmpl->priv = priv; tmpl->rxq.mp = NULL; tmpl->rxq.elts_n = log2above(desc); tmpl->rxq.elts = NULL; diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.c b/drivers/net/mlx5/mlx5_rxtx_vec.c index ecd273e00a8..511681841ca 100644 --- a/drivers/net/mlx5/mlx5_rxtx_vec.c +++ b/drivers/net/mlx5/mlx5_rxtx_vec.c @@ -550,7 +550,7 @@ mlx5_rxq_check_vec_support(struct mlx5_rxq_data *rxq) struct mlx5_rxq_ctrl *ctrl = container_of(rxq, struct mlx5_rxq_ctrl, rxq); - if (!ctrl->priv->config.rx_vec_en || rxq->sges_n != 0) + if (!RXQ_PORT(ctrl)->config.rx_vec_en || rxq->sges_n != 0) return -ENOTSUP; if (rxq->lro) return -ENOTSUP; diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c index a124f74fcda..caafdf27e8f 100644 --- a/drivers/net/mlx5/mlx5_trigger.c +++ b/drivers/net/mlx5/mlx5_trigger.c @@ -131,9 +131,11 @@ mlx5_rxq_mempool_register_cb(struct rte_mempool *mp, void *opaque, * 0 on success, (-1) on failure and rte_errno is set. */ static int -mlx5_rxq_mempool_register(struct mlx5_rxq_ctrl *rxq_ctrl) +mlx5_rxq_mempool_register(struct rte_eth_dev *dev, + struct mlx5_rxq_ctrl *rxq_ctrl) { - struct mlx5_priv *priv = rxq_ctrl->priv; + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_dev_ctx_shared *sh = rxq_ctrl->sh; struct rte_mempool *mp; uint32_t s; int ret = 0; @@ -148,9 +150,8 @@ mlx5_rxq_mempool_register(struct mlx5_rxq_ctrl *rxq_ctrl) } for (s = 0; s < rxq_ctrl->rxq.rxseg_n; s++) { mp = rxq_ctrl->rxq.rxseg[s].mp; - ret = mlx5_mr_mempool_register(&priv->sh->cdev->mr_scache, - priv->sh->cdev->pd, mp, - &priv->mp_id); + ret = mlx5_mr_mempool_register(&sh->cdev->mr_scache, + sh->cdev->pd, mp, &priv->mp_id); if (ret < 0 && rte_errno != EEXIST) return ret; rte_mempool_mem_iter(mp, mlx5_rxq_mempool_register_cb, @@ -213,7 +214,7 @@ mlx5_rxq_start(struct rte_eth_dev *dev) * the implicit registration is enabled or not, * Rx mempool destruction is tracked to free MRs. 
*/ - if (mlx5_rxq_mempool_register(rxq_ctrl) < 0) + if (mlx5_rxq_mempool_register(dev, rxq_ctrl) < 0) goto error; ret = rxq_alloc_elts(rxq_ctrl); if (ret)
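With the stored back pointer removed, per-port data is reached through the control structure's owner list, as the RXQ_PORT()/RXQ_DEV()/RXQ_PORT_ID() macros added in mlx5_rx.h do. A rough self-contained sketch of that indirection, with illustrative stand-in names rather than the driver's definitions:

	#include <sys/queue.h>

	struct port_priv_s { unsigned short port_id; };

	struct rxq_priv_s2 {
		struct port_priv_s *priv;            /* back pointer to the port */
		LIST_ENTRY(rxq_priv_s2) owner_entry; /* linkage in ctrl->owners */
	};

	struct rxq_ctrl_s2 {
		LIST_HEAD(owners_head, rxq_priv_s2) owners; /* all owning queues */
	};

	/* Any owner's port works where one representative port is enough,
	 * e.g. for log messages; shared queues log group/queue ids instead. */
	#define CTRL_PORT(ctrl)    (LIST_FIRST(&(ctrl)->owners)->priv)
	#define CTRL_PORT_ID(ctrl) (CTRL_PORT(ctrl)->port_id)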
From patchwork Thu Nov 4 12:33:17 2021
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 103758
X-Patchwork-Delegate: rasland@nvidia.com
From: Xueming Li
CC: Lior Margalit, Slava Ovsiienko, Matan Azrad, Anatoly Burakov
Date: Thu, 4 Nov 2021 20:33:17 +0800
Message-ID: <20211104123320.1638915-12-xuemingl@nvidia.com>
In-Reply-To: <20211104123320.1638915-1-xuemingl@nvidia.com>
References: <20210727034204.20649-1-xuemingl@nvidia.com> <20211104123320.1638915-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH v4 11/14] net/mlx5: move Rx queue DevX resource

To support shared Rx queues, move the DevX RQ, which is a per-queue resource, into the Rx queue private data.

Signed-off-by: Xueming Li
Acked-by: Slava Ovsiienko
---
 drivers/net/mlx5/linux/mlx5_verbs.c | 154 +++++++++++--------
 drivers/net/mlx5/mlx5.h             |  11 +-
 drivers/net/mlx5/mlx5_devx.c        | 227 +++++++++++++---------------
 drivers/net/mlx5/mlx5_rx.h          |   1 +
 drivers/net/mlx5/mlx5_rxq.c         |  44 +++---
 drivers/net/mlx5/mlx5_rxtx.c        |   6 +-
 drivers/net/mlx5/mlx5_trigger.c     |   2 +-
 drivers/net/mlx5/mlx5_vlan.c        |  16 +-
 8 files changed, 240 insertions(+), 221 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c index 4779b37aa65..5d4ae3ea752 100644 --- a/drivers/net/mlx5/linux/mlx5_verbs.c +++ b/drivers/net/mlx5/linux/mlx5_verbs.c @@ -29,13 +29,13 @@ /** * Modify Rx WQ vlan stripping offload * - * @param rxq_obj - * Rx queue object. + * @param rxq + * Rx queue. * * @return 0 on success, non-0 otherwise */ static int -mlx5_rxq_obj_modify_wq_vlan_strip(struct mlx5_rxq_obj *rxq_obj, int on) +mlx5_rxq_obj_modify_wq_vlan_strip(struct mlx5_rxq_priv *rxq, int on) { uint16_t vlan_offloads = (on ? IBV_WQ_FLAGS_CVLAN_STRIPPING : 0) | @@ -47,14 +47,14 @@ mlx5_rxq_obj_modify_wq_vlan_strip(struct mlx5_rxq_obj *rxq_obj, int on) .flags = vlan_offloads, }; - return mlx5_glue->modify_wq(rxq_obj->wq, &mod); + return mlx5_glue->modify_wq(rxq->ctrl->obj->wq, &mod); } /** * Modifies the attributes for the specified WQ. * - * @param rxq_obj - * Verbs Rx queue object. + * @param rxq + * Verbs Rx queue. * @param type * Type of change queue state. * @@ -62,14 +62,14 @@ mlx5_rxq_obj_modify_wq_vlan_strip(struct mlx5_rxq_obj *rxq_obj, int on) * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int -mlx5_ibv_modify_wq(struct mlx5_rxq_obj *rxq_obj, uint8_t type) +mlx5_ibv_modify_wq(struct mlx5_rxq_priv *rxq, uint8_t type) { struct ibv_wq_attr mod = { .attr_mask = IBV_WQ_ATTR_STATE, .wq_state = (enum ibv_wq_state)type, }; - return mlx5_glue->modify_wq(rxq_obj->wq, &mod); + return mlx5_glue->modify_wq(rxq->ctrl->obj->wq, &mod); } /** @@ -139,21 +139,18 @@ mlx5_ibv_modify_qp(struct mlx5_txq_obj *obj, enum mlx5_txq_modify_type type, /** * Create a CQ Verbs object. * - * @param dev - * Pointer to Ethernet device. - * @param idx - * Queue index in DPDK Rx queue array. + * @param rxq + * Pointer to Rx queue. * * @return * The Verbs CQ object initialized, NULL otherwise and rte_errno is set. 
*/ static struct ibv_cq * -mlx5_rxq_ibv_cq_create(struct rte_eth_dev *dev, uint16_t idx) +mlx5_rxq_ibv_cq_create(struct mlx5_rxq_priv *rxq) { - struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx]; - struct mlx5_rxq_ctrl *rxq_ctrl = - container_of(rxq_data, struct mlx5_rxq_ctrl, rxq); + struct mlx5_priv *priv = rxq->priv; + struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl; + struct mlx5_rxq_data *rxq_data = &rxq_ctrl->rxq; struct mlx5_rxq_obj *rxq_obj = rxq_ctrl->obj; unsigned int cqe_n = mlx5_rxq_cqe_num(rxq_data); struct { @@ -199,7 +196,7 @@ mlx5_rxq_ibv_cq_create(struct rte_eth_dev *dev, uint16_t idx) DRV_LOG(DEBUG, "Port %u Rx CQE compression is disabled for HW" " timestamp.", - dev->data->port_id); + priv->dev_data->port_id); } #ifdef HAVE_IBV_MLX5_MOD_CQE_128B_PAD if (RTE_CACHE_LINE_SIZE == 128) { @@ -216,21 +213,18 @@ mlx5_rxq_ibv_cq_create(struct rte_eth_dev *dev, uint16_t idx) /** * Create a WQ Verbs object. * - * @param dev - * Pointer to Ethernet device. - * @param idx - * Queue index in DPDK Rx queue array. + * @param rxq + * Pointer to Rx queue. * * @return * The Verbs WQ object initialized, NULL otherwise and rte_errno is set. */ static struct ibv_wq * -mlx5_rxq_ibv_wq_create(struct rte_eth_dev *dev, uint16_t idx) +mlx5_rxq_ibv_wq_create(struct mlx5_rxq_priv *rxq) { - struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx]; - struct mlx5_rxq_ctrl *rxq_ctrl = - container_of(rxq_data, struct mlx5_rxq_ctrl, rxq); + struct mlx5_priv *priv = rxq->priv; + struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl; + struct mlx5_rxq_data *rxq_data = &rxq_ctrl->rxq; struct mlx5_rxq_obj *rxq_obj = rxq_ctrl->obj; unsigned int wqe_n = 1 << rxq_data->elts_n; struct { @@ -297,7 +291,7 @@ mlx5_rxq_ibv_wq_create(struct rte_eth_dev *dev, uint16_t idx) DRV_LOG(ERR, "Port %u Rx queue %u requested %u*%u but got" " %u*%u WRs*SGEs.", - dev->data->port_id, idx, + priv->dev_data->port_id, rxq->idx, wqe_n >> rxq_data->sges_n, (1 << rxq_data->sges_n), wq_attr.ibv.max_wr, wq_attr.ibv.max_sge); @@ -312,21 +306,20 @@ mlx5_rxq_ibv_wq_create(struct rte_eth_dev *dev, uint16_t idx) /** * Create the Rx queue Verbs object. * - * @param dev - * Pointer to Ethernet device. - * @param idx - * Queue index in DPDK Rx queue array. + * @param rxq + * Pointer to Rx queue. * * @return * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int -mlx5_rxq_ibv_obj_new(struct rte_eth_dev *dev, uint16_t idx) +mlx5_rxq_ibv_obj_new(struct mlx5_rxq_priv *rxq) { - struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx]; - struct mlx5_rxq_ctrl *rxq_ctrl = - container_of(rxq_data, struct mlx5_rxq_ctrl, rxq); + uint16_t idx = rxq->idx; + struct mlx5_priv *priv = rxq->priv; + uint16_t port_id = priv->dev_data->port_id; + struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl; + struct mlx5_rxq_data *rxq_data = &rxq_ctrl->rxq; struct mlx5_rxq_obj *tmpl = rxq_ctrl->obj; struct mlx5dv_cq cq_info; struct mlx5dv_rwq rwq; @@ -341,17 +334,17 @@ mlx5_rxq_ibv_obj_new(struct rte_eth_dev *dev, uint16_t idx) mlx5_glue->create_comp_channel(priv->sh->cdev->ctx); if (!tmpl->ibv_channel) { DRV_LOG(ERR, "Port %u: comp channel creation failure.", - dev->data->port_id); + port_id); rte_errno = ENOMEM; goto error; } tmpl->fd = ((struct ibv_comp_channel *)(tmpl->ibv_channel))->fd; } /* Create CQ using Verbs API. 
*/ - tmpl->ibv_cq = mlx5_rxq_ibv_cq_create(dev, idx); + tmpl->ibv_cq = mlx5_rxq_ibv_cq_create(rxq); if (!tmpl->ibv_cq) { DRV_LOG(ERR, "Port %u Rx queue %u CQ creation failure.", - dev->data->port_id, idx); + port_id, idx); rte_errno = ENOMEM; goto error; } @@ -366,7 +359,7 @@ mlx5_rxq_ibv_obj_new(struct rte_eth_dev *dev, uint16_t idx) DRV_LOG(ERR, "Port %u wrong MLX5_CQE_SIZE environment " "variable value: it should be set to %u.", - dev->data->port_id, RTE_CACHE_LINE_SIZE); + port_id, RTE_CACHE_LINE_SIZE); rte_errno = EINVAL; goto error; } @@ -377,19 +370,19 @@ mlx5_rxq_ibv_obj_new(struct rte_eth_dev *dev, uint16_t idx) rxq_data->cq_uar = cq_info.cq_uar; rxq_data->cqn = cq_info.cqn; /* Create WQ (RQ) using Verbs API. */ - tmpl->wq = mlx5_rxq_ibv_wq_create(dev, idx); + tmpl->wq = mlx5_rxq_ibv_wq_create(rxq); if (!tmpl->wq) { DRV_LOG(ERR, "Port %u Rx queue %u WQ creation failure.", - dev->data->port_id, idx); + port_id, idx); rte_errno = ENOMEM; goto error; } /* Change queue state to ready. */ - ret = mlx5_ibv_modify_wq(tmpl, IBV_WQS_RDY); + ret = mlx5_ibv_modify_wq(rxq, IBV_WQS_RDY); if (ret) { DRV_LOG(ERR, "Port %u Rx queue %u WQ state to IBV_WQS_RDY failed.", - dev->data->port_id, idx); + port_id, idx); rte_errno = ret; goto error; } @@ -405,7 +398,7 @@ mlx5_rxq_ibv_obj_new(struct rte_eth_dev *dev, uint16_t idx) rxq_data->cq_arm_sn = 0; mlx5_rxq_initialize(rxq_data); rxq_data->cq_ci = 0; - dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STARTED; + priv->dev_data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STARTED; rxq_ctrl->wqn = ((struct ibv_wq *)(tmpl->wq))->wq_num; return 0; error: @@ -423,12 +416,14 @@ mlx5_rxq_ibv_obj_new(struct rte_eth_dev *dev, uint16_t idx) /** * Release an Rx verbs queue object. * - * @param rxq_obj - * Verbs Rx queue object. + * @param rxq + * Pointer to Rx queue. 
*/ static void -mlx5_rxq_ibv_obj_release(struct mlx5_rxq_obj *rxq_obj) +mlx5_rxq_ibv_obj_release(struct mlx5_rxq_priv *rxq) { + struct mlx5_rxq_obj *rxq_obj = rxq->ctrl->obj; + MLX5_ASSERT(rxq_obj); MLX5_ASSERT(rxq_obj->wq); MLX5_ASSERT(rxq_obj->ibv_cq); @@ -652,12 +647,24 @@ static void mlx5_rxq_ibv_obj_drop_release(struct rte_eth_dev *dev) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_obj *rxq = priv->drop_queue.rxq; + struct mlx5_rxq_priv *rxq = priv->drop_queue.rxq; + struct mlx5_rxq_obj *rxq_obj; - if (rxq->wq) - claim_zero(mlx5_glue->destroy_wq(rxq->wq)); - if (rxq->ibv_cq) - claim_zero(mlx5_glue->destroy_cq(rxq->ibv_cq)); + if (rxq == NULL) + return; + if (rxq->ctrl == NULL) + goto free_priv; + rxq_obj = rxq->ctrl->obj; + if (rxq_obj == NULL) + goto free_ctrl; + if (rxq_obj->wq) + claim_zero(mlx5_glue->destroy_wq(rxq_obj->wq)); + if (rxq_obj->ibv_cq) + claim_zero(mlx5_glue->destroy_cq(rxq_obj->ibv_cq)); + mlx5_free(rxq_obj); +free_ctrl: + mlx5_free(rxq->ctrl); +free_priv: mlx5_free(rxq); priv->drop_queue.rxq = NULL; } @@ -676,39 +683,58 @@ mlx5_rxq_ibv_obj_drop_create(struct rte_eth_dev *dev) { struct mlx5_priv *priv = dev->data->dev_private; struct ibv_context *ctx = priv->sh->cdev->ctx; - struct mlx5_rxq_obj *rxq = priv->drop_queue.rxq; + struct mlx5_rxq_priv *rxq = priv->drop_queue.rxq; + struct mlx5_rxq_ctrl *rxq_ctrl = NULL; + struct mlx5_rxq_obj *rxq_obj = NULL; - if (rxq) + if (rxq != NULL) return 0; rxq = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rxq), 0, SOCKET_ID_ANY); - if (!rxq) { + if (rxq == NULL) { DRV_LOG(DEBUG, "Port %u cannot allocate drop Rx queue memory.", dev->data->port_id); rte_errno = ENOMEM; return -rte_errno; } priv->drop_queue.rxq = rxq; - rxq->ibv_cq = mlx5_glue->create_cq(ctx, 1, NULL, NULL, 0); - if (!rxq->ibv_cq) { + rxq_ctrl = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rxq_ctrl), 0, + SOCKET_ID_ANY); + if (rxq_ctrl == NULL) { + DRV_LOG(DEBUG, "Port %u cannot allocate drop Rx queue control memory.", + dev->data->port_id); + rte_errno = ENOMEM; + goto error; + } + rxq->ctrl = rxq_ctrl; + rxq_obj = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rxq_obj), 0, + SOCKET_ID_ANY); + if (rxq_obj == NULL) { + DRV_LOG(DEBUG, "Port %u cannot allocate drop Rx queue memory.", + dev->data->port_id); + rte_errno = ENOMEM; + goto error; + } + rxq_ctrl->obj = rxq_obj; + rxq_obj->ibv_cq = mlx5_glue->create_cq(ctx, 1, NULL, NULL, 0); + if (!rxq_obj->ibv_cq) { DRV_LOG(DEBUG, "Port %u cannot allocate CQ for drop queue.", dev->data->port_id); rte_errno = errno; goto error; } - rxq->wq = mlx5_glue->create_wq(ctx, &(struct ibv_wq_init_attr){ + rxq_obj->wq = mlx5_glue->create_wq(ctx, &(struct ibv_wq_init_attr){ .wq_type = IBV_WQT_RQ, .max_wr = 1, .max_sge = 1, .pd = priv->sh->cdev->pd, - .cq = rxq->ibv_cq, + .cq = rxq_obj->ibv_cq, }); - if (!rxq->wq) { + if (!rxq_obj->wq) { DRV_LOG(DEBUG, "Port %u cannot allocate WQ for drop queue.", dev->data->port_id); rte_errno = errno; goto error; } - priv->drop_queue.rxq = rxq; return 0; error: mlx5_rxq_ibv_obj_drop_release(dev); @@ -737,7 +763,7 @@ mlx5_ibv_drop_action_create(struct rte_eth_dev *dev) ret = mlx5_rxq_ibv_obj_drop_create(dev); if (ret < 0) goto error; - rxq = priv->drop_queue.rxq; + rxq = priv->drop_queue.rxq->ctrl->obj; ind_tbl = mlx5_glue->create_rwq_ind_table (priv->sh->cdev->ctx, &(struct ibv_rwq_ind_table_init_attr){ diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 4e99fe7d068..967d92b4ad6 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -300,7 +300,7 @@ struct mlx5_vf_vlan { /* Flow 
drop context necessary due to Verbs API. */ struct mlx5_drop { struct mlx5_hrxq *hrxq; /* Hash Rx queue queue. */ - struct mlx5_rxq_obj *rxq; /* Rx queue object. */ + struct mlx5_rxq_priv *rxq; /* Rx queue. */ }; /* Loopback dummy queue resources required due to Verbs API. */ @@ -1267,7 +1267,6 @@ struct mlx5_rxq_obj { }; struct mlx5_devx_obj *rq; /* DevX RQ object for hairpin. */ struct { - struct mlx5_devx_rq rq_obj; /* DevX RQ object. */ struct mlx5_devx_cq cq_obj; /* DevX CQ object. */ void *devx_channel; }; @@ -1349,11 +1348,11 @@ struct mlx5_rxq_priv; /* HW objects operations structure. */ struct mlx5_obj_ops { - int (*rxq_obj_modify_vlan_strip)(struct mlx5_rxq_obj *rxq_obj, int on); - int (*rxq_obj_new)(struct rte_eth_dev *dev, uint16_t idx); + int (*rxq_obj_modify_vlan_strip)(struct mlx5_rxq_priv *rxq, int on); + int (*rxq_obj_new)(struct mlx5_rxq_priv *rxq); int (*rxq_event_get)(struct mlx5_rxq_obj *rxq_obj); - int (*rxq_obj_modify)(struct mlx5_rxq_obj *rxq_obj, uint8_t type); - void (*rxq_obj_release)(struct mlx5_rxq_obj *rxq_obj); + int (*rxq_obj_modify)(struct mlx5_rxq_priv *rxq, uint8_t type); + void (*rxq_obj_release)(struct mlx5_rxq_priv *rxq); int (*ind_table_new)(struct rte_eth_dev *dev, const unsigned int log_n, struct mlx5_ind_table_obj *ind_tbl); int (*ind_table_modify)(struct rte_eth_dev *dev, diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c index 8b3651f5034..b90a5d82458 100644 --- a/drivers/net/mlx5/mlx5_devx.c +++ b/drivers/net/mlx5/mlx5_devx.c @@ -30,14 +30,16 @@ /** * Modify RQ vlan stripping offload * - * @param rxq_obj - * Rx queue object. + * @param rxq + * Rx queue. + * @param on + * Enable/disable VLAN stripping. * * @return * 0 on success, non-0 otherwise */ static int -mlx5_rxq_obj_modify_rq_vlan_strip(struct mlx5_rxq_obj *rxq_obj, int on) +mlx5_rxq_obj_modify_rq_vlan_strip(struct mlx5_rxq_priv *rxq, int on) { struct mlx5_devx_modify_rq_attr rq_attr; @@ -46,14 +48,14 @@ mlx5_rxq_obj_modify_rq_vlan_strip(struct mlx5_rxq_obj *rxq_obj, int on) rq_attr.state = MLX5_RQC_STATE_RDY; rq_attr.vsd = (on ? 0 : 1); rq_attr.modify_bitmask = MLX5_MODIFY_RQ_IN_MODIFY_BITMASK_VSD; - return mlx5_devx_cmd_modify_rq(rxq_obj->rq_obj.rq, &rq_attr); + return mlx5_devx_cmd_modify_rq(rxq->devx_rq.rq, &rq_attr); } /** * Modify RQ using DevX API. * - * @param rxq_obj - * DevX Rx queue object. + * @param rxq + * DevX rx queue. * @param type * Type of change queue state. * @@ -61,7 +63,7 @@ mlx5_rxq_obj_modify_rq_vlan_strip(struct mlx5_rxq_obj *rxq_obj, int on) * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int -mlx5_devx_modify_rq(struct mlx5_rxq_obj *rxq_obj, uint8_t type) +mlx5_devx_modify_rq(struct mlx5_rxq_priv *rxq, uint8_t type) { struct mlx5_devx_modify_rq_attr rq_attr; @@ -86,7 +88,7 @@ mlx5_devx_modify_rq(struct mlx5_rxq_obj *rxq_obj, uint8_t type) default: break; } - return mlx5_devx_cmd_modify_rq(rxq_obj->rq_obj.rq, &rq_attr); + return mlx5_devx_cmd_modify_rq(rxq->devx_rq.rq, &rq_attr); } /** @@ -145,42 +147,34 @@ mlx5_txq_devx_modify(struct mlx5_txq_obj *obj, enum mlx5_txq_modify_type type, return 0; } -/** - * Destroy the Rx queue DevX object. - * - * @param rxq_obj - * Rxq object to destroy. - */ -static void -mlx5_rxq_release_devx_resources(struct mlx5_rxq_obj *rxq_obj) -{ - mlx5_devx_rq_destroy(&rxq_obj->rq_obj); - memset(&rxq_obj->rq_obj, 0, sizeof(rxq_obj->rq_obj)); - mlx5_devx_cq_destroy(&rxq_obj->cq_obj); - memset(&rxq_obj->cq_obj, 0, sizeof(rxq_obj->cq_obj)); -} - /** * Release an Rx DevX queue object. 
* - * @param rxq_obj - * DevX Rx queue object. + * @param rxq + * DevX Rx queue. */ static void -mlx5_rxq_devx_obj_release(struct mlx5_rxq_obj *rxq_obj) +mlx5_rxq_devx_obj_release(struct mlx5_rxq_priv *rxq) { - MLX5_ASSERT(rxq_obj); + struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl; + struct mlx5_rxq_obj *rxq_obj = rxq_ctrl->obj; + + MLX5_ASSERT(rxq != NULL); + MLX5_ASSERT(rxq_ctrl != NULL); if (rxq_obj->rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN) { MLX5_ASSERT(rxq_obj->rq); - mlx5_devx_modify_rq(rxq_obj, MLX5_RXQ_MOD_RDY2RST); + mlx5_devx_modify_rq(rxq, MLX5_RXQ_MOD_RDY2RST); claim_zero(mlx5_devx_cmd_destroy(rxq_obj->rq)); } else { - MLX5_ASSERT(rxq_obj->cq_obj.cq); - MLX5_ASSERT(rxq_obj->rq_obj.rq); - mlx5_rxq_release_devx_resources(rxq_obj); - if (rxq_obj->devx_channel) + mlx5_devx_rq_destroy(&rxq->devx_rq); + memset(&rxq->devx_rq, 0, sizeof(rxq->devx_rq)); + mlx5_devx_cq_destroy(&rxq_obj->cq_obj); + memset(&rxq_obj->cq_obj, 0, sizeof(rxq_obj->cq_obj)); + if (rxq_obj->devx_channel) { mlx5_os_devx_destroy_event_channel (rxq_obj->devx_channel); + rxq_obj->devx_channel = NULL; + } } } @@ -224,22 +218,19 @@ mlx5_rx_devx_get_event(struct mlx5_rxq_obj *rxq_obj) /** * Create a RQ object using DevX. * - * @param dev - * Pointer to Ethernet device. - * @param rxq_data - * RX queue data. + * @param rxq + * Pointer to Rx queue. * * @return * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int -mlx5_rxq_create_devx_rq_resources(struct rte_eth_dev *dev, - struct mlx5_rxq_data *rxq_data) +mlx5_rxq_create_devx_rq_resources(struct mlx5_rxq_priv *rxq) { - struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_priv *priv = rxq->priv; struct mlx5_common_device *cdev = priv->sh->cdev; - struct mlx5_rxq_ctrl *rxq_ctrl = - container_of(rxq_data, struct mlx5_rxq_ctrl, rxq); + struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl; + struct mlx5_rxq_data *rxq_data = &rxq->ctrl->rxq; struct mlx5_devx_create_rq_attr rq_attr = { 0 }; uint16_t log_desc_n = rxq_data->elts_n - rxq_data->sges_n; uint32_t wqe_size, log_wqe_size; @@ -281,31 +272,29 @@ mlx5_rxq_create_devx_rq_resources(struct rte_eth_dev *dev, rq_attr.wq_attr.pd = cdev->pdn; rq_attr.counter_set_id = priv->counter_set_id; /* Create RQ using DevX API. */ - return mlx5_devx_rq_create(cdev->ctx, &rxq_ctrl->obj->rq_obj, wqe_size, + return mlx5_devx_rq_create(cdev->ctx, &rxq->devx_rq, wqe_size, log_desc_n, &rq_attr, rxq_ctrl->socket); } /** * Create a DevX CQ object for an Rx queue. * - * @param dev - * Pointer to Ethernet device. - * @param rxq_data - * RX queue data. + * @param rxq + * Pointer to Rx queue. * * @return * 0 on success, a negative errno value otherwise and rte_errno is set. 
*/ static int -mlx5_rxq_create_devx_cq_resources(struct rte_eth_dev *dev, - struct mlx5_rxq_data *rxq_data) +mlx5_rxq_create_devx_cq_resources(struct mlx5_rxq_priv *rxq) { struct mlx5_devx_cq *cq_obj = 0; struct mlx5_devx_cq_attr cq_attr = { 0 }; - struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_priv *priv = rxq->priv; struct mlx5_dev_ctx_shared *sh = priv->sh; - struct mlx5_rxq_ctrl *rxq_ctrl = - container_of(rxq_data, struct mlx5_rxq_ctrl, rxq); + uint16_t port_id = priv->dev_data->port_id; + struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl; + struct mlx5_rxq_data *rxq_data = &rxq_ctrl->rxq; unsigned int cqe_n = mlx5_rxq_cqe_num(rxq_data); uint32_t log_cqe_n; uint16_t event_nums[1] = { 0 }; @@ -346,7 +335,7 @@ mlx5_rxq_create_devx_cq_resources(struct rte_eth_dev *dev, } DRV_LOG(DEBUG, "Port %u Rx CQE compression is enabled, format %d.", - dev->data->port_id, priv->config.cqe_comp_fmt); + port_id, priv->config.cqe_comp_fmt); /* * For vectorized Rx, it must not be doubled in order to * make cq_ci and rq_ci aligned. @@ -355,13 +344,12 @@ mlx5_rxq_create_devx_cq_resources(struct rte_eth_dev *dev, cqe_n *= 2; } else if (priv->config.cqe_comp && rxq_data->hw_timestamp) { DRV_LOG(DEBUG, - "Port %u Rx CQE compression is disabled for HW" - " timestamp.", - dev->data->port_id); + "Port %u Rx CQE compression is disabled for HW timestamp.", + port_id); } else if (priv->config.cqe_comp && rxq_data->lro) { DRV_LOG(DEBUG, "Port %u Rx CQE compression is disabled for LRO.", - dev->data->port_id); + port_id); } cq_attr.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->devx_rx_uar); log_cqe_n = log2above(cqe_n); @@ -399,27 +387,23 @@ mlx5_rxq_create_devx_cq_resources(struct rte_eth_dev *dev, /** * Create the Rx hairpin queue object. * - * @param dev - * Pointer to Ethernet device. - * @param idx - * Queue index in DPDK Rx queue array. + * @param rxq + * Pointer to Rx queue. * * @return * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int -mlx5_rxq_obj_hairpin_new(struct rte_eth_dev *dev, uint16_t idx) +mlx5_rxq_obj_hairpin_new(struct mlx5_rxq_priv *rxq) { - struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx]; - struct mlx5_rxq_ctrl *rxq_ctrl = - container_of(rxq_data, struct mlx5_rxq_ctrl, rxq); + uint16_t idx = rxq->idx; + struct mlx5_priv *priv = rxq->priv; + struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl; struct mlx5_devx_create_rq_attr attr = { 0 }; struct mlx5_rxq_obj *tmpl = rxq_ctrl->obj; uint32_t max_wq_data; - MLX5_ASSERT(rxq_data); - MLX5_ASSERT(tmpl); + MLX5_ASSERT(rxq != NULL && rxq->ctrl != NULL && tmpl != NULL); tmpl->rxq_ctrl = rxq_ctrl; attr.hairpin = 1; max_wq_data = priv->config.hca_attr.log_max_hairpin_wq_data_sz; @@ -448,39 +432,36 @@ mlx5_rxq_obj_hairpin_new(struct rte_eth_dev *dev, uint16_t idx) if (!tmpl->rq) { DRV_LOG(ERR, "Port %u Rx hairpin queue %u can't create rq object.", - dev->data->port_id, idx); + priv->dev_data->port_id, idx); rte_errno = errno; return -rte_errno; } - dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_HAIRPIN; + priv->dev_data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_HAIRPIN; return 0; } /** * Create the Rx queue DevX object. * - * @param dev - * Pointer to Ethernet device. - * @param idx - * Queue index in DPDK Rx queue array. + * @param rxq + * Pointer to Rx queue. * * @return * 0 on success, a negative errno value otherwise and rte_errno is set. 
*/ static int -mlx5_rxq_devx_obj_new(struct rte_eth_dev *dev, uint16_t idx) +mlx5_rxq_devx_obj_new(struct mlx5_rxq_priv *rxq) { - struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx]; - struct mlx5_rxq_ctrl *rxq_ctrl = - container_of(rxq_data, struct mlx5_rxq_ctrl, rxq); + struct mlx5_priv *priv = rxq->priv; + struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl; + struct mlx5_rxq_data *rxq_data = &rxq_ctrl->rxq; struct mlx5_rxq_obj *tmpl = rxq_ctrl->obj; int ret = 0; MLX5_ASSERT(rxq_data); MLX5_ASSERT(tmpl); if (rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN) - return mlx5_rxq_obj_hairpin_new(dev, idx); + return mlx5_rxq_obj_hairpin_new(rxq); tmpl->rxq_ctrl = rxq_ctrl; if (rxq_ctrl->irq) { int devx_ev_flag = @@ -498,34 +479,32 @@ mlx5_rxq_devx_obj_new(struct rte_eth_dev *dev, uint16_t idx) tmpl->fd = mlx5_os_get_devx_channel_fd(tmpl->devx_channel); } /* Create CQ using DevX API. */ - ret = mlx5_rxq_create_devx_cq_resources(dev, rxq_data); + ret = mlx5_rxq_create_devx_cq_resources(rxq); if (ret) { DRV_LOG(ERR, "Failed to create CQ."); goto error; } /* Create RQ using DevX API. */ - ret = mlx5_rxq_create_devx_rq_resources(dev, rxq_data); + ret = mlx5_rxq_create_devx_rq_resources(rxq); if (ret) { DRV_LOG(ERR, "Port %u Rx queue %u RQ creation failure.", - dev->data->port_id, idx); + priv->dev_data->port_id, rxq->idx); rte_errno = ENOMEM; goto error; } /* Change queue state to ready. */ - ret = mlx5_devx_modify_rq(tmpl, MLX5_RXQ_MOD_RST2RDY); + ret = mlx5_devx_modify_rq(rxq, MLX5_RXQ_MOD_RST2RDY); if (ret) goto error; - rxq_data->wqes = (void *)(uintptr_t)tmpl->rq_obj.wq.umem_buf; - rxq_data->rq_db = (uint32_t *)(uintptr_t)tmpl->rq_obj.wq.db_rec; - rxq_data->cq_arm_sn = 0; - rxq_data->cq_ci = 0; + rxq_data->wqes = (void *)(uintptr_t)rxq->devx_rq.wq.umem_buf; + rxq_data->rq_db = (uint32_t *)(uintptr_t)rxq->devx_rq.wq.db_rec; mlx5_rxq_initialize(rxq_data); - dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STARTED; - rxq_ctrl->wqn = tmpl->rq_obj.rq->id; + priv->dev_data->rx_queue_state[rxq->idx] = RTE_ETH_QUEUE_STATE_STARTED; + rxq_ctrl->wqn = rxq->devx_rq.rq->id; return 0; error: ret = rte_errno; /* Save rte_errno before cleanup. */ - mlx5_rxq_devx_obj_release(tmpl); + mlx5_rxq_devx_obj_release(rxq); rte_errno = ret; /* Restore rte_errno. 
*/ return -rte_errno; } @@ -571,15 +550,15 @@ mlx5_devx_ind_table_create_rqt_attr(struct rte_eth_dev *dev, rqt_attr->rqt_actual_size = rqt_n; if (queues == NULL) { for (i = 0; i < rqt_n; i++) - rqt_attr->rq_list[i] = priv->drop_queue.rxq->rq->id; + rqt_attr->rq_list[i] = + priv->drop_queue.rxq->devx_rq.rq->id; return rqt_attr; } for (i = 0; i != queues_n; ++i) { - struct mlx5_rxq_data *rxq = (*priv->rxqs)[queues[i]]; - struct mlx5_rxq_ctrl *rxq_ctrl = - container_of(rxq, struct mlx5_rxq_ctrl, rxq); + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, queues[i]); - rqt_attr->rq_list[i] = rxq_ctrl->obj->rq_obj.rq->id; + MLX5_ASSERT(rxq != NULL); + rqt_attr->rq_list[i] = rxq->devx_rq.rq->id; } MLX5_ASSERT(i > 0); for (j = 0; i != rqt_n; ++j, ++i) @@ -719,7 +698,7 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key, } } } else { - rxq_obj_type = priv->drop_queue.rxq->rxq_ctrl->type; + rxq_obj_type = priv->drop_queue.rxq->ctrl->type; } memset(tir_attr, 0, sizeof(*tir_attr)); tir_attr->disp_type = MLX5_TIRC_DISP_TYPE_INDIRECT; @@ -891,9 +870,9 @@ mlx5_rxq_devx_obj_drop_create(struct rte_eth_dev *dev) { struct mlx5_priv *priv = dev->data->dev_private; int socket_id = dev->device->numa_node; - struct mlx5_rxq_ctrl *rxq_ctrl; - struct mlx5_rxq_data *rxq_data; - struct mlx5_rxq_obj *rxq = NULL; + struct mlx5_rxq_priv *rxq; + struct mlx5_rxq_ctrl *rxq_ctrl = NULL; + struct mlx5_rxq_obj *rxq_obj = NULL; int ret; /* @@ -901,6 +880,13 @@ mlx5_rxq_devx_obj_drop_create(struct rte_eth_dev *dev) * They are required to hold pointers for cleanup * and are only accessible via drop queue DevX objects. */ + rxq = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rxq), 0, socket_id); + if (rxq == NULL) { + DRV_LOG(ERR, "Port %u could not allocate drop queue private", + dev->data->port_id); + rte_errno = ENOMEM; + goto error; + } rxq_ctrl = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rxq_ctrl), 0, socket_id); if (rxq_ctrl == NULL) { @@ -909,27 +895,29 @@ mlx5_rxq_devx_obj_drop_create(struct rte_eth_dev *dev) rte_errno = ENOMEM; goto error; } - rxq = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rxq), 0, socket_id); - if (rxq == NULL) { + rxq_obj = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rxq_obj), 0, socket_id); + if (rxq_obj == NULL) { DRV_LOG(ERR, "Port %u could not allocate drop queue object", dev->data->port_id); rte_errno = ENOMEM; goto error; } - rxq->rxq_ctrl = rxq_ctrl; + rxq_obj->rxq_ctrl = rxq_ctrl; rxq_ctrl->type = MLX5_RXQ_TYPE_STANDARD; rxq_ctrl->sh = priv->sh; - rxq_ctrl->obj = rxq; - rxq_data = &rxq_ctrl->rxq; + rxq_ctrl->obj = rxq_obj; + rxq->ctrl = rxq_ctrl; + rxq->priv = priv; + LIST_INSERT_HEAD(&rxq_ctrl->owners, rxq, owner_entry); /* Create CQ using DevX API. */ - ret = mlx5_rxq_create_devx_cq_resources(dev, rxq_data); + ret = mlx5_rxq_create_devx_cq_resources(rxq); if (ret != 0) { DRV_LOG(ERR, "Port %u drop queue CQ creation failed.", dev->data->port_id); goto error; } /* Create RQ using DevX API. */ - ret = mlx5_rxq_create_devx_rq_resources(dev, rxq_data); + ret = mlx5_rxq_create_devx_rq_resources(rxq); if (ret != 0) { DRV_LOG(ERR, "Port %u drop queue RQ creation failed.", dev->data->port_id); @@ -945,18 +933,20 @@ mlx5_rxq_devx_obj_drop_create(struct rte_eth_dev *dev) return 0; error: ret = rte_errno; /* Save rte_errno before cleanup. 
*/ - if (rxq != NULL) { - if (rxq->rq_obj.rq != NULL) - mlx5_devx_rq_destroy(&rxq->rq_obj); - if (rxq->cq_obj.cq != NULL) - mlx5_devx_cq_destroy(&rxq->cq_obj); - if (rxq->devx_channel) + if (rxq != NULL && rxq->devx_rq.rq != NULL) + mlx5_devx_rq_destroy(&rxq->devx_rq); + if (rxq_obj != NULL) { + if (rxq_obj->cq_obj.cq != NULL) + mlx5_devx_cq_destroy(&rxq_obj->cq_obj); + if (rxq_obj->devx_channel) mlx5_os_devx_destroy_event_channel - (rxq->devx_channel); - mlx5_free(rxq); + (rxq_obj->devx_channel); + mlx5_free(rxq_obj); } if (rxq_ctrl != NULL) mlx5_free(rxq_ctrl); + if (rxq != NULL) + mlx5_free(rxq); rte_errno = ret; /* Restore rte_errno. */ return -rte_errno; } @@ -971,12 +961,13 @@ static void mlx5_rxq_devx_obj_drop_release(struct rte_eth_dev *dev) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_obj *rxq = priv->drop_queue.rxq; - struct mlx5_rxq_ctrl *rxq_ctrl = rxq->rxq_ctrl; + struct mlx5_rxq_priv *rxq = priv->drop_queue.rxq; + struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl; mlx5_rxq_devx_obj_release(rxq); - mlx5_free(rxq); + mlx5_free(rxq_ctrl->obj); mlx5_free(rxq_ctrl); + mlx5_free(rxq); priv->drop_queue.rxq = NULL; } @@ -996,7 +987,7 @@ mlx5_devx_drop_action_destroy(struct rte_eth_dev *dev) mlx5_devx_tir_destroy(hrxq); if (hrxq->ind_table->ind_table != NULL) mlx5_devx_ind_table_destroy(hrxq->ind_table); - if (priv->drop_queue.rxq->rq != NULL) + if (priv->drop_queue.rxq->devx_rq.rq != NULL) mlx5_rxq_devx_obj_drop_release(dev); } diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h index c04c0c73349..337dcca59fb 100644 --- a/drivers/net/mlx5/mlx5_rx.h +++ b/drivers/net/mlx5/mlx5_rx.h @@ -174,6 +174,7 @@ struct mlx5_rxq_priv { struct mlx5_rxq_ctrl *ctrl; /* Shared Rx Queue. */ LIST_ENTRY(mlx5_rxq_priv) owner_entry; /* Entry in shared rxq_ctrl. */ struct mlx5_priv *priv; /* Back pointer to private data. */ + struct mlx5_devx_rq devx_rq; struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */ uint32_t hairpin_status; /* Hairpin binding status. */ }; diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index 5a20966e2ca..2850a220399 100644 --- a/drivers/net/mlx5/mlx5_rxq.c +++ b/drivers/net/mlx5/mlx5_rxq.c @@ -471,13 +471,13 @@ int mlx5_rx_queue_stop_primary(struct rte_eth_dev *dev, uint16_t idx) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_data *rxq = (*priv->rxqs)[idx]; - struct mlx5_rxq_ctrl *rxq_ctrl = - container_of(rxq, struct mlx5_rxq_ctrl, rxq); + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx); + struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl; int ret; + MLX5_ASSERT(rxq != NULL && rxq_ctrl != NULL); MLX5_ASSERT(rte_eal_process_type() == RTE_PROC_PRIMARY); - ret = priv->obj_ops.rxq_obj_modify(rxq_ctrl->obj, MLX5_RXQ_MOD_RDY2RST); + ret = priv->obj_ops.rxq_obj_modify(rxq, MLX5_RXQ_MOD_RDY2RST); if (ret) { DRV_LOG(ERR, "Cannot change Rx WQ state to RESET: %s", strerror(errno)); @@ -485,7 +485,7 @@ mlx5_rx_queue_stop_primary(struct rte_eth_dev *dev, uint16_t idx) return ret; } /* Remove all processes CQEs. */ - rxq_sync_cq(rxq); + rxq_sync_cq(&rxq_ctrl->rxq); /* Free all involved mbufs. */ rxq_free_elts(rxq_ctrl); /* Set the actual queue state. 
*/ @@ -557,26 +557,26 @@ int mlx5_rx_queue_start_primary(struct rte_eth_dev *dev, uint16_t idx) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_data *rxq = (*priv->rxqs)[idx]; - struct mlx5_rxq_ctrl *rxq_ctrl = - container_of(rxq, struct mlx5_rxq_ctrl, rxq); + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx); + struct mlx5_rxq_data *rxq_data = &rxq->ctrl->rxq; int ret; - MLX5_ASSERT(rte_eal_process_type() == RTE_PROC_PRIMARY); + MLX5_ASSERT(rxq != NULL && rxq->ctrl != NULL); + MLX5_ASSERT(rte_eal_process_type() == RTE_PROC_PRIMARY); /* Allocate needed buffers. */ - ret = rxq_alloc_elts(rxq_ctrl); + ret = rxq_alloc_elts(rxq->ctrl); if (ret) { DRV_LOG(ERR, "Cannot reallocate buffers for Rx WQ"); rte_errno = errno; return ret; } rte_io_wmb(); - *rxq->cq_db = rte_cpu_to_be_32(rxq->cq_ci); + *rxq_data->cq_db = rte_cpu_to_be_32(rxq_data->cq_ci); rte_io_wmb(); /* Reset RQ consumer before moving queue to READY state. */ - *rxq->rq_db = rte_cpu_to_be_32(0); + *rxq_data->rq_db = rte_cpu_to_be_32(0); rte_io_wmb(); - ret = priv->obj_ops.rxq_obj_modify(rxq_ctrl->obj, MLX5_RXQ_MOD_RST2RDY); + ret = priv->obj_ops.rxq_obj_modify(rxq, MLX5_RXQ_MOD_RST2RDY); if (ret) { DRV_LOG(ERR, "Cannot change Rx WQ state to READY: %s", strerror(errno)); @@ -584,8 +584,8 @@ mlx5_rx_queue_start_primary(struct rte_eth_dev *dev, uint16_t idx) return ret; } /* Reinitialize RQ - set WQEs. */ - mlx5_rxq_initialize(rxq); - rxq->err_state = MLX5_RXQ_ERR_STATE_NO_ERROR; + mlx5_rxq_initialize(rxq_data); + rxq_data->err_state = MLX5_RXQ_ERR_STATE_NO_ERROR; /* Set actual queue state. */ dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STARTED; return 0; @@ -1835,15 +1835,19 @@ int mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx); - struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl; + struct mlx5_rxq_priv *rxq; + struct mlx5_rxq_ctrl *rxq_ctrl; - if (priv->rxqs == NULL || (*priv->rxqs)[idx] == NULL) + if (priv->rxq_privs == NULL) + return 0; + rxq = mlx5_rxq_get(dev, idx); + if (rxq == NULL) return 0; if (mlx5_rxq_deref(dev, idx) > 1) return 1; - if (rxq_ctrl->obj) { - priv->obj_ops.rxq_obj_release(rxq_ctrl->obj); + rxq_ctrl = rxq->ctrl; + if (rxq_ctrl->obj != NULL) { + priv->obj_ops.rxq_obj_release(rxq); LIST_REMOVE(rxq_ctrl->obj, next); mlx5_free(rxq_ctrl->obj); rxq_ctrl->obj = NULL; diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c index 0bcdff1b116..54d410b513b 100644 --- a/drivers/net/mlx5/mlx5_rxtx.c +++ b/drivers/net/mlx5/mlx5_rxtx.c @@ -373,11 +373,9 @@ mlx5_queue_state_modify_primary(struct rte_eth_dev *dev, struct mlx5_priv *priv = dev->data->dev_private; if (sm->is_wq) { - struct mlx5_rxq_data *rxq = (*priv->rxqs)[sm->queue_id]; - struct mlx5_rxq_ctrl *rxq_ctrl = - container_of(rxq, struct mlx5_rxq_ctrl, rxq); + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, sm->queue_id); - ret = priv->obj_ops.rxq_obj_modify(rxq_ctrl->obj, sm->state); + ret = priv->obj_ops.rxq_obj_modify(rxq, sm->state); if (ret) { DRV_LOG(ERR, "Cannot change Rx WQ state to %u - %s", sm->state, strerror(errno)); diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c index caafdf27e8f..2cf62a9780d 100644 --- a/drivers/net/mlx5/mlx5_trigger.c +++ b/drivers/net/mlx5/mlx5_trigger.c @@ -231,7 +231,7 @@ mlx5_rxq_start(struct rte_eth_dev *dev) rte_errno = ENOMEM; goto error; } - ret = priv->obj_ops.rxq_obj_new(dev, i); + ret = priv->obj_ops.rxq_obj_new(rxq); if (ret) { 
mlx5_free(rxq_ctrl->obj); rxq_ctrl->obj = NULL; diff --git a/drivers/net/mlx5/mlx5_vlan.c b/drivers/net/mlx5/mlx5_vlan.c index 07792fc5d94..ea841bb32fb 100644 --- a/drivers/net/mlx5/mlx5_vlan.c +++ b/drivers/net/mlx5/mlx5_vlan.c @@ -91,11 +91,11 @@ void mlx5_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_data *rxq = (*priv->rxqs)[queue]; - struct mlx5_rxq_ctrl *rxq_ctrl = - container_of(rxq, struct mlx5_rxq_ctrl, rxq); + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, queue); + struct mlx5_rxq_data *rxq_data = &rxq->ctrl->rxq; int ret = 0; + MLX5_ASSERT(rxq != NULL && rxq->ctrl != NULL); /* Validate hw support */ if (!priv->config.hw_vlan_strip) { DRV_LOG(ERR, "port %u VLAN stripping is not supported", @@ -109,20 +109,20 @@ mlx5_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on) return; } DRV_LOG(DEBUG, "port %u set VLAN stripping offloads %d for port %u queue %d", - dev->data->port_id, on, rxq->port_id, queue); - if (!rxq_ctrl->obj) { + dev->data->port_id, on, rxq_data->port_id, queue); + if (rxq->ctrl->obj == NULL) { /* Update related bits in RX queue. */ - rxq->vlan_strip = !!on; + rxq_data->vlan_strip = !!on; return; } - ret = priv->obj_ops.rxq_obj_modify_vlan_strip(rxq_ctrl->obj, on); + ret = priv->obj_ops.rxq_obj_modify_vlan_strip(rxq, on); if (ret) { DRV_LOG(ERR, "Port %u failed to modify object stripping mode:" " %s", dev->data->port_id, strerror(rte_errno)); return; } /* Update related bits in RX queue. */ - rxq->vlan_strip = !!on; + rxq_data->vlan_strip = !!on; } /**

From patchwork Thu Nov 4 12:33:18 2021 X-Patchwork-Submitter: Xueming Li X-Patchwork-Id: 103759 X-Patchwork-Delegate: rasland@nvidia.com From: Xueming Li To: CC: , Lior Margalit , "Slava Ovsiienko" , Matan Azrad Date: Thu, 4 Nov 2021 20:33:18 +0800 Message-ID: <20211104123320.1638915-13-xuemingl@nvidia.com> In-Reply-To: <20211104123320.1638915-1-xuemingl@nvidia.com> References: <20210727034204.20649-1-xuemingl@nvidia.com> <20211104123320.1638915-1-xuemingl@nvidia.com> Subject: [dpdk-dev] [PATCH v4 12/14] net/mlx5: remove Rx queue data list from device

The Rx queue data list (priv->rxqs) is now redundant: the Rx queue private list (priv->rxq_privs) carries the same information. Remove it and replace all accesses with the universal wrapper API.
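For orientation, the universal wrapper API mentioned above reduces to a small set of accessors layered over priv->rxq_privs. The sketch below is reconstructed from the call sites in the hunks that follow, not copied from the series; the authoritative definitions live in drivers/net/mlx5/mlx5_rx.h and may differ in detail:

static inline struct mlx5_rxq_ctrl *
mlx5_rxq_ctrl_get(struct rte_eth_dev *dev, uint16_t idx)
{
	/* mlx5_rxq_get() looks up the per-queue private data in
	 * (*priv->rxq_privs)[idx]; its body is visible in the
	 * mlx5_rxq.c hunk further below. */
	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);

	/* Control structure, possibly shared between member ports. */
	return rxq == NULL ? NULL : rxq->ctrl;
}

static inline struct mlx5_rxq_data *
mlx5_rxq_data_get(struct rte_eth_dev *dev, uint16_t idx)
{
	struct mlx5_rxq_ctrl *ctrl = mlx5_rxq_ctrl_get(dev, idx);

	/* Data-path structure embedded in the control structure. */
	return ctrl == NULL ? NULL : &ctrl->rxq;
}

Every caller in the hunks below checks the returned pointer for NULL instead of indexing priv->rxqs directly, which is what lets the array be dropped from struct mlx5_priv.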
Signed-off-by: Xueming Li Acked-by: Slava Ovsiienko --- drivers/net/mlx5/linux/mlx5_verbs.c | 7 ++--- drivers/net/mlx5/mlx5.c | 10 +----- drivers/net/mlx5/mlx5.h | 1 - drivers/net/mlx5/mlx5_devx.c | 12 +++++--- drivers/net/mlx5/mlx5_ethdev.c | 6 +--- drivers/net/mlx5/mlx5_flow.c | 47 +++++++++++++++-------------- drivers/net/mlx5/mlx5_rss.c | 6 ++-- drivers/net/mlx5/mlx5_rx.c | 15 +++++---- drivers/net/mlx5/mlx5_rx.h | 9 +++--- drivers/net/mlx5/mlx5_rxq.c | 43 ++++++++++++-------------- drivers/net/mlx5/mlx5_rxtx_vec.c | 6 ++-- drivers/net/mlx5/mlx5_stats.c | 9 +++--- drivers/net/mlx5/mlx5_trigger.c | 2 +- 13 files changed, 79 insertions(+), 94 deletions(-) diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c index 5d4ae3ea752..f78916c868f 100644 --- a/drivers/net/mlx5/linux/mlx5_verbs.c +++ b/drivers/net/mlx5/linux/mlx5_verbs.c @@ -486,11 +486,10 @@ mlx5_ibv_ind_table_new(struct rte_eth_dev *dev, const unsigned int log_n, MLX5_ASSERT(ind_tbl); for (i = 0; i != ind_tbl->queues_n; ++i) { - struct mlx5_rxq_data *rxq = (*priv->rxqs)[ind_tbl->queues[i]]; - struct mlx5_rxq_ctrl *rxq_ctrl = - container_of(rxq, struct mlx5_rxq_ctrl, rxq); + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, + ind_tbl->queues[i]); - wq[i] = rxq_ctrl->obj->wq; + wq[i] = rxq->ctrl->obj->wq; } MLX5_ASSERT(i > 0); /* Finalise indirection table. */ diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 374cc9757aa..8614b8ffddd 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1687,20 +1687,12 @@ mlx5_dev_close(struct rte_eth_dev *dev) /* Free the eCPRI flex parser resource. */ mlx5_flex_parser_ecpri_release(dev); mlx5_flex_item_port_cleanup(dev); - if (priv->rxqs != NULL) { + if (priv->rxq_privs != NULL) { /* XXX race condition if mlx5_rx_burst() is still running. */ rte_delay_us_sleep(1000); for (i = 0; (i != priv->rxqs_n); ++i) mlx5_rxq_release(dev, i); priv->rxqs_n = 0; - priv->rxqs = NULL; - } - if (priv->representor) { - /* Each representor has a dedicated interrupts handler */ - mlx5_free(dev->intr_handle); - dev->intr_handle = NULL; - } - if (priv->rxq_privs != NULL) { mlx5_free(priv->rxq_privs); priv->rxq_privs = NULL; } diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 967d92b4ad6..a037a33debf 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -1410,7 +1410,6 @@ struct mlx5_priv { unsigned int rxqs_n; /* RX queues array size. */ unsigned int txqs_n; /* TX queues array size. */ struct mlx5_rxq_priv *(*rxq_privs)[]; /* RX queue non-shared data. */ - struct mlx5_rxq_data *(*rxqs)[]; /* (Shared) RX queues. */ struct mlx5_txq_data *(*txqs)[]; /* TX queues. */ struct rte_mempool *mprq_mp; /* Mempool for Multi-Packet RQ. */ struct rte_eth_rss_conf rss_conf; /* RSS configuration. */ diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c index b90a5d82458..668d47025e8 100644 --- a/drivers/net/mlx5/mlx5_devx.c +++ b/drivers/net/mlx5/mlx5_devx.c @@ -684,15 +684,17 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key, /* NULL queues designate drop queue. */ if (ind_tbl->queues != NULL) { - struct mlx5_rxq_data *rxq_data = - (*priv->rxqs)[ind_tbl->queues[0]]; struct mlx5_rxq_ctrl *rxq_ctrl = - container_of(rxq_data, struct mlx5_rxq_ctrl, rxq); - rxq_obj_type = rxq_ctrl->type; + mlx5_rxq_ctrl_get(dev, ind_tbl->queues[0]); + rxq_obj_type = rxq_ctrl != NULL ? rxq_ctrl->type : + MLX5_RXQ_TYPE_STANDARD; /* Enable TIR LRO only if all the queues were configured for. 
*/ for (i = 0; i < ind_tbl->queues_n; ++i) { - if (!(*priv->rxqs)[ind_tbl->queues[i]]->lro) { + struct mlx5_rxq_data *rxq_i = + mlx5_rxq_data_get(dev, ind_tbl->queues[i]); + + if (rxq_i != NULL && !rxq_i->lro) { lro = false; break; } diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c index cde505955df..bb38d5d2ade 100644 --- a/drivers/net/mlx5/mlx5_ethdev.c +++ b/drivers/net/mlx5/mlx5_ethdev.c @@ -114,7 +114,6 @@ mlx5_dev_configure(struct rte_eth_dev *dev) rte_errno = ENOMEM; return -rte_errno; } - priv->rxqs = (void *)dev->data->rx_queues; priv->txqs = (void *)dev->data->tx_queues; if (txqs_n != priv->txqs_n) { DRV_LOG(INFO, "port %u Tx queues number update: %u -> %u", @@ -171,11 +170,8 @@ mlx5_dev_configure_rss_reta(struct rte_eth_dev *dev) return -rte_errno; } for (i = 0, j = 0; i < rxqs_n; i++) { - struct mlx5_rxq_data *rxq_data; - struct mlx5_rxq_ctrl *rxq_ctrl; + struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i); - rxq_data = (*priv->rxqs)[i]; - rxq_ctrl = container_of(rxq_data, struct mlx5_rxq_ctrl, rxq); if (rxq_ctrl && rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) rss_queue_arr[j++] = i; } diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 5435660a2dd..2f30a355258 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -1210,10 +1210,11 @@ flow_drv_rxq_flags_set(struct rte_eth_dev *dev, return; for (i = 0; i != ind_tbl->queues_n; ++i) { int idx = ind_tbl->queues[i]; - struct mlx5_rxq_ctrl *rxq_ctrl = - container_of((*priv->rxqs)[idx], - struct mlx5_rxq_ctrl, rxq); + struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, idx); + MLX5_ASSERT(rxq_ctrl != NULL); + if (rxq_ctrl == NULL) + continue; /* * To support metadata register copy on Tx loopback, * this must be always enabled (metadata may arive @@ -1305,10 +1306,11 @@ flow_drv_rxq_flags_trim(struct rte_eth_dev *dev, MLX5_ASSERT(dev->data->dev_started); for (i = 0; i != ind_tbl->queues_n; ++i) { int idx = ind_tbl->queues[i]; - struct mlx5_rxq_ctrl *rxq_ctrl = - container_of((*priv->rxqs)[idx], - struct mlx5_rxq_ctrl, rxq); + struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, idx); + MLX5_ASSERT(rxq_ctrl != NULL); + if (rxq_ctrl == NULL) + continue; if (priv->config.dv_flow_en && priv->config.dv_xmeta_en != MLX5_XMETA_MODE_LEGACY && mlx5_flow_ext_mreg_supported(dev)) { @@ -1369,18 +1371,16 @@ flow_rxq_flags_clear(struct rte_eth_dev *dev) unsigned int i; for (i = 0; i != priv->rxqs_n; ++i) { - struct mlx5_rxq_ctrl *rxq_ctrl; + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, i); unsigned int j; - if (!(*priv->rxqs)[i]) + if (rxq == NULL || rxq->ctrl == NULL) continue; - rxq_ctrl = container_of((*priv->rxqs)[i], - struct mlx5_rxq_ctrl, rxq); - rxq_ctrl->flow_mark_n = 0; - rxq_ctrl->rxq.mark = 0; + rxq->ctrl->flow_mark_n = 0; + rxq->ctrl->rxq.mark = 0; for (j = 0; j != MLX5_FLOW_TUNNEL; ++j) - rxq_ctrl->flow_tunnels_n[j] = 0; - rxq_ctrl->rxq.tunnel = 0; + rxq->ctrl->flow_tunnels_n[j] = 0; + rxq->ctrl->rxq.tunnel = 0; } } @@ -1394,13 +1394,15 @@ void mlx5_flow_rxq_dynf_metadata_set(struct rte_eth_dev *dev) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_data *data; unsigned int i; for (i = 0; i != priv->rxqs_n; ++i) { - if (!(*priv->rxqs)[i]) + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, i); + struct mlx5_rxq_data *data; + + if (rxq == NULL || rxq->ctrl == NULL) continue; - data = (*priv->rxqs)[i]; + data = &rxq->ctrl->rxq; if (!rte_flow_dynf_metadata_avail()) { data->dynf_meta = 0; data->flow_meta_mask = 0; @@ -1591,7 +1593,7 
@@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action, RTE_FLOW_ERROR_TYPE_ACTION_CONF, &queue->index, "queue index out of range"); - if (!(*priv->rxqs)[queue->index]) + if (mlx5_rxq_get(dev, queue->index) == NULL) return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION_CONF, &queue->index, @@ -1622,7 +1624,7 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action, * 0 on success, a negative errno code on error. */ static int -mlx5_validate_rss_queues(const struct rte_eth_dev *dev, +mlx5_validate_rss_queues(struct rte_eth_dev *dev, const uint16_t *queues, uint32_t queues_n, const char **error, uint32_t *queue_idx) { @@ -1631,20 +1633,19 @@ mlx5_validate_rss_queues(const struct rte_eth_dev *dev, uint32_t i; for (i = 0; i != queues_n; ++i) { - struct mlx5_rxq_ctrl *rxq_ctrl; + struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, + queues[i]); if (queues[i] >= priv->rxqs_n) { *error = "queue index out of range"; *queue_idx = i; return -EINVAL; } - if (!(*priv->rxqs)[queues[i]]) { + if (rxq_ctrl == NULL) { *error = "queue is not configured"; *queue_idx = i; return -EINVAL; } - rxq_ctrl = container_of((*priv->rxqs)[queues[i]], - struct mlx5_rxq_ctrl, rxq); if (i == 0) rxq_type = rxq_ctrl->type; if (rxq_type != rxq_ctrl->type) { diff --git a/drivers/net/mlx5/mlx5_rss.c b/drivers/net/mlx5/mlx5_rss.c index a04e22398db..75af05b7b02 100644 --- a/drivers/net/mlx5/mlx5_rss.c +++ b/drivers/net/mlx5/mlx5_rss.c @@ -65,9 +65,11 @@ mlx5_rss_hash_update(struct rte_eth_dev *dev, priv->rss_conf.rss_hf = rss_conf->rss_hf; /* Enable the RSS hash in all Rx queues. */ for (i = 0, idx = 0; idx != priv->rxqs_n; ++i) { - if (!(*priv->rxqs)[i]) + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, i); + + if (rxq == NULL || rxq->ctrl == NULL) continue; - (*priv->rxqs)[i]->rss_hash = !!rss_conf->rss_hf && + rxq->ctrl->rxq.rss_hash = !!rss_conf->rss_hf && !!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS); ++idx; } diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c index d41905a2a04..1ffa1b95b88 100644 --- a/drivers/net/mlx5/mlx5_rx.c +++ b/drivers/net/mlx5/mlx5_rx.c @@ -148,10 +148,8 @@ void mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t rx_queue_id, struct rte_eth_rxq_info *qinfo) { - struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_data *rxq = (*priv->rxqs)[rx_queue_id]; - struct mlx5_rxq_ctrl *rxq_ctrl = - container_of(rxq, struct mlx5_rxq_ctrl, rxq); + struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, rx_queue_id); + struct mlx5_rxq_data *rxq = mlx5_rxq_data_get(dev, rx_queue_id); if (!rxq) return; @@ -162,7 +160,10 @@ mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t rx_queue_id, qinfo->conf.rx_thresh.wthresh = 0; qinfo->conf.rx_free_thresh = rxq->rq_repl_thresh; qinfo->conf.rx_drop_en = 1; - qinfo->conf.rx_deferred_start = rxq_ctrl ? 0 : 1; + if (rxq_ctrl == NULL || rxq_ctrl->obj == NULL) + qinfo->conf.rx_deferred_start = 0; + else + qinfo->conf.rx_deferred_start = 1; qinfo->conf.offloads = dev->data->dev_conf.rxmode.offloads; qinfo->scattered_rx = dev->data->scattered_rx; qinfo->nb_desc = mlx5_rxq_mprq_enabled(rxq) ? 
@@ -191,10 +192,8 @@ mlx5_rx_burst_mode_get(struct rte_eth_dev *dev, struct rte_eth_burst_mode *mode) { eth_rx_burst_t pkt_burst = dev->rx_pkt_burst; - struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_data *rxq; + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, rx_queue_id); - rxq = (*priv->rxqs)[rx_queue_id]; if (!rxq) { rte_errno = EINVAL; return -rte_errno; diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h index 337dcca59fb..413e36f6d8d 100644 --- a/drivers/net/mlx5/mlx5_rx.h +++ b/drivers/net/mlx5/mlx5_rx.h @@ -603,14 +603,13 @@ mlx5_mprq_enabled(struct rte_eth_dev *dev) return 0; /* All the configured queues should be enabled. */ for (i = 0; i < priv->rxqs_n; ++i) { - struct mlx5_rxq_data *rxq = (*priv->rxqs)[i]; - struct mlx5_rxq_ctrl *rxq_ctrl = container_of - (rxq, struct mlx5_rxq_ctrl, rxq); + struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i); - if (rxq == NULL || rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD) + if (rxq_ctrl == NULL || + rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD) continue; n_ibv++; - if (mlx5_rxq_mprq_enabled(rxq)) + if (mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq)) ++n; } /* Multi-Packet RQ can't be partially configured. */ diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index 2850a220399..f3fc618ed2c 100644 --- a/drivers/net/mlx5/mlx5_rxq.c +++ b/drivers/net/mlx5/mlx5_rxq.c @@ -748,7 +748,7 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, } DRV_LOG(DEBUG, "port %u adding Rx queue %u to list", dev->data->port_id, idx); - (*priv->rxqs)[idx] = &rxq_ctrl->rxq; + dev->data->rx_queues[idx] = &rxq_ctrl->rxq; return 0; } @@ -830,7 +830,7 @@ mlx5_rx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t idx, } DRV_LOG(DEBUG, "port %u adding hairpin Rx queue %u to list", dev->data->port_id, idx); - (*priv->rxqs)[idx] = &rxq_ctrl->rxq; + dev->data->rx_queues[idx] = &rxq_ctrl->rxq; return 0; } @@ -1163,7 +1163,7 @@ mlx5_mprq_free_mp(struct rte_eth_dev *dev) rte_mempool_free(mp); /* Unset mempool for each Rx queue. */ for (i = 0; i != priv->rxqs_n; ++i) { - struct mlx5_rxq_data *rxq = (*priv->rxqs)[i]; + struct mlx5_rxq_data *rxq = mlx5_rxq_data_get(dev, i); if (rxq == NULL) continue; @@ -1204,12 +1204,13 @@ mlx5_mprq_alloc_mp(struct rte_eth_dev *dev) return 0; /* Count the total number of descriptors configured. */ for (i = 0; i != priv->rxqs_n; ++i) { - struct mlx5_rxq_data *rxq = (*priv->rxqs)[i]; - struct mlx5_rxq_ctrl *rxq_ctrl = container_of - (rxq, struct mlx5_rxq_ctrl, rxq); + struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i); + struct mlx5_rxq_data *rxq; - if (rxq == NULL || rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD) + if (rxq_ctrl == NULL || + rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD) continue; + rxq = &rxq_ctrl->rxq; n_ibv++; desc += 1 << rxq->elts_n; /* Get the max number of strides. */ @@ -1292,13 +1293,12 @@ mlx5_mprq_alloc_mp(struct rte_eth_dev *dev) exit: /* Set mempool for each Rx queue. 
*/ for (i = 0; i != priv->rxqs_n; ++i) { - struct mlx5_rxq_data *rxq = (*priv->rxqs)[i]; - struct mlx5_rxq_ctrl *rxq_ctrl = container_of - (rxq, struct mlx5_rxq_ctrl, rxq); + struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i); - if (rxq == NULL || rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD) + if (rxq_ctrl == NULL || + rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD) continue; - rxq->mprq_mp = mp; + rxq_ctrl->rxq.mprq_mp = mp; } DRV_LOG(INFO, "port %u Multi-Packet RQ is configured", dev->data->port_id); @@ -1777,8 +1777,7 @@ mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx) { struct mlx5_priv *priv = dev->data->dev_private; - if (priv->rxq_privs == NULL) - return NULL; + MLX5_ASSERT(priv->rxq_privs != NULL); return (*priv->rxq_privs)[idx]; } @@ -1862,7 +1861,7 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx) LIST_REMOVE(rxq, owner_entry); LIST_REMOVE(rxq_ctrl, next); mlx5_free(rxq_ctrl); - (*priv->rxqs)[idx] = NULL; + dev->data->rx_queues[idx] = NULL; mlx5_free(rxq); (*priv->rxq_privs)[idx] = NULL; } @@ -1908,14 +1907,10 @@ enum mlx5_rxq_type mlx5_rxq_get_type(struct rte_eth_dev *dev, uint16_t idx) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_ctrl *rxq_ctrl = NULL; + struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, idx); - if (idx < priv->rxqs_n && (*priv->rxqs)[idx]) { - rxq_ctrl = container_of((*priv->rxqs)[idx], - struct mlx5_rxq_ctrl, - rxq); + if (idx < priv->rxqs_n && rxq_ctrl != NULL) return rxq_ctrl->type; - } return MLX5_RXQ_TYPE_UNDEFINED; } @@ -2682,13 +2677,13 @@ mlx5_rxq_timestamp_set(struct rte_eth_dev *dev) { struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_dev_ctx_shared *sh = priv->sh; - struct mlx5_rxq_data *data; unsigned int i; for (i = 0; i != priv->rxqs_n; ++i) { - if (!(*priv->rxqs)[i]) + struct mlx5_rxq_data *data = mlx5_rxq_data_get(dev, i); + + if (data == NULL) continue; - data = (*priv->rxqs)[i]; data->sh = sh; data->rt_timestamp = priv->config.rt_timestamp; } diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.c b/drivers/net/mlx5/mlx5_rxtx_vec.c index 511681841ca..6212ce8247d 100644 --- a/drivers/net/mlx5/mlx5_rxtx_vec.c +++ b/drivers/net/mlx5/mlx5_rxtx_vec.c @@ -578,11 +578,11 @@ mlx5_check_vec_rx_support(struct rte_eth_dev *dev) return -ENOTSUP; /* All the configured queues should support. */ for (i = 0; i < priv->rxqs_n; ++i) { - struct mlx5_rxq_data *rxq = (*priv->rxqs)[i]; + struct mlx5_rxq_data *rxq_data = mlx5_rxq_data_get(dev, i); - if (!rxq) + if (!rxq_data) continue; - if (mlx5_rxq_check_vec_support(rxq) < 0) + if (mlx5_rxq_check_vec_support(rxq_data) < 0) break; } if (i != priv->rxqs_n) diff --git a/drivers/net/mlx5/mlx5_stats.c b/drivers/net/mlx5/mlx5_stats.c index ae2f5668a74..732775954ad 100644 --- a/drivers/net/mlx5/mlx5_stats.c +++ b/drivers/net/mlx5/mlx5_stats.c @@ -107,7 +107,7 @@ mlx5_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) memset(&tmp, 0, sizeof(tmp)); /* Add software counters. 
*/ for (i = 0; (i != priv->rxqs_n); ++i) { - struct mlx5_rxq_data *rxq = (*priv->rxqs)[i]; + struct mlx5_rxq_data *rxq = mlx5_rxq_data_get(dev, i); if (rxq == NULL) continue; @@ -181,10 +181,11 @@ mlx5_stats_reset(struct rte_eth_dev *dev) unsigned int i; for (i = 0; (i != priv->rxqs_n); ++i) { - if ((*priv->rxqs)[i] == NULL) + struct mlx5_rxq_data *rxq_data = mlx5_rxq_data_get(dev, i); + + if (rxq_data == NULL) continue; - memset(&(*priv->rxqs)[i]->stats, 0, - sizeof(struct mlx5_rxq_stats)); + memset(&rxq_data->stats, 0, sizeof(struct mlx5_rxq_stats)); } for (i = 0; (i != priv->txqs_n); ++i) { if ((*priv->txqs)[i] == NULL) diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c index 2cf62a9780d..72475e4b5b5 100644 --- a/drivers/net/mlx5/mlx5_trigger.c +++ b/drivers/net/mlx5/mlx5_trigger.c @@ -227,7 +227,7 @@ mlx5_rxq_start(struct rte_eth_dev *dev) if (!rxq_ctrl->obj) { DRV_LOG(ERR, "Port %u Rx queue %u can't allocate resources.", - dev->data->port_id, (*priv->rxqs)[i]->idx); + dev->data->port_id, i); rte_errno = ENOMEM; goto error; }

From patchwork Thu Nov 4 12:33:19 2021 X-Patchwork-Submitter: Xueming Li X-Patchwork-Id: 103760 X-Patchwork-Delegate: rasland@nvidia.com From: Xueming Li To: CC: , Lior Margalit , "Slava Ovsiienko" , Matan Azrad Date: Thu, 4 Nov 2021 20:33:19 +0800 Message-ID: <20211104123320.1638915-14-xuemingl@nvidia.com> In-Reply-To: <20211104123320.1638915-1-xuemingl@nvidia.com> References: <20210727034204.20649-1-xuemingl@nvidia.com> <20211104123320.1638915-1-xuemingl@nvidia.com> Subject: [dpdk-dev] [PATCH v4 13/14] net/mlx5: support shared Rx queue

This patch introduces shared RxQ. All shared Rx queues with the same group and queue ID share the same rxq_ctrl. Since rxq_ctrl and rxq_data are shared, queues from different member ports share the same WQ and CQ, essentially one Rx WQ into which mbufs are filled. The shared rxq_data is set as the RxQ object in the device Rx queue array of every member port and is used for receiving packets. Polling the queue of any member port returns packets from any member; mbuf->port identifies the source port.

Signed-off-by: Xueming Li Acked-by: Slava Ovsiienko --- doc/guides/nics/features/mlx5.ini | 1 + doc/guides/nics/mlx5.rst | 6 + drivers/net/mlx5/linux/mlx5_os.c | 2 + drivers/net/mlx5/linux/mlx5_verbs.c | 8 +- drivers/net/mlx5/mlx5.h | 2 + drivers/net/mlx5/mlx5_devx.c | 46 +++-- drivers/net/mlx5/mlx5_ethdev.c | 5 + drivers/net/mlx5/mlx5_rx.h | 3 + drivers/net/mlx5/mlx5_rxq.c | 273 ++++++++++++++++++++++++---- drivers/net/mlx5/mlx5_trigger.c | 61 ++++--- 10 files changed, 329 insertions(+), 78 deletions(-) diff --git a/doc/guides/nics/features/mlx5.ini b/doc/guides/nics/features/mlx5.ini index 403f58cd7e2..7cbd11bb160 100644 --- a/doc/guides/nics/features/mlx5.ini +++ b/doc/guides/nics/features/mlx5.ini @@ -11,6 +11,7 @@ Removal event = Y Rx interrupt = Y Fast mbuf free = Y Queue start/stop = Y +Shared Rx queue = Y Burst mode info = Y Power mgmt address monitor = Y MTU update = Y diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst index bb92520dff4..824971d89ae 100644 --- a/doc/guides/nics/mlx5.rst +++ b/doc/guides/nics/mlx5.rst @@ -113,6 +113,7 @@ Features - Connection tracking. - Sub-Function representors. - Sub-Function. +- Shared Rx queue. Limitations @@ -465,6 +466,11 @@ Limitations - In order to achieve best insertion rate, application should manage the flows per lcore. - Better to disable memory reclaim by setting ``reclaim_mem_mode`` to 0 to accelerate the flow object allocation and release with cache. + Shared Rx queue: + + - Counters of received packets and bytes of all devices in the same share group are identical. + - Counters of received packets and bytes of all queues with the same group and queue ID are identical. + - HW hashed bonding - TXQ affinity subjects to HW hash once enabled.
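Before the driver hunks, it is worth seeing how an application consumes this feature. The capability flag and rxconf share fields referenced by this patch (RTE_ETH_DEV_CAPA_RXQ_SHARE, conf->share_group, conf->share_qid) come from the 21.11 ethdev API; the sketch below is illustrative only, and the port pair, descriptor count, and group numbering are assumptions, not part of the patch:

#include <errno.h>
#include <rte_errno.h>
#include <rte_ethdev.h>

/* Illustrative sketch: attach queue 0 of two already-configured member
 * ports to shared Rx queue group 1. Both ports must use the same
 * mempool and a matching configuration, as enforced by
 * mlx5_shared_rxq_match() in the hunks below. */
static int
setup_shared_rxq(uint16_t port_a, uint16_t port_b, struct rte_mempool *mp)
{
	uint16_t ports[] = { port_a, port_b };
	struct rte_eth_rxconf rxconf = {
		.share_group = 1, /* non-zero group ID enables sharing */
		.share_qid = 0,   /* shared queue index within the group */
	};
	unsigned int i;

	for (i = 0; i < RTE_DIM(ports); i++) {
		struct rte_eth_dev_info info;

		if (rte_eth_dev_info_get(ports[i], &info) != 0 ||
		    !(info.dev_capa & RTE_ETH_DEV_CAPA_RXQ_SHARE))
			return -ENOTSUP; /* PMD or firmware lacks shared RxQ */
		if (rte_eth_rx_queue_setup(ports[i], 0, 512,
					   rte_eth_dev_socket_id(ports[i]),
					   &rxconf, mp) != 0)
			return -rte_errno;
	}
	return 0;
}

Polling queue 0 of either member port then returns packets for both, with mbuf->port identifying the actual receiving port, exactly as the commit message describes.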
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index f51da8c3a38..e0304b685e5 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -420,6 +420,7 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv) mlx5_glue->dr_create_flow_action_default_miss(); if (!sh->default_miss_action) DRV_LOG(WARNING, "Default miss action is not supported."); + LIST_INIT(&sh->shared_rxqs); return 0; error: /* Rollback the created objects. */ @@ -494,6 +495,7 @@ mlx5_os_free_shared_dr(struct mlx5_priv *priv) MLX5_ASSERT(sh && sh->refcnt); if (sh->refcnt > 1) return; + MLX5_ASSERT(LIST_EMPTY(&sh->shared_rxqs)); #ifdef HAVE_MLX5DV_DR if (sh->rx_domain) { mlx5_glue->dr_destroy_domain(sh->rx_domain); diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c index f78916c868f..9d299542614 100644 --- a/drivers/net/mlx5/linux/mlx5_verbs.c +++ b/drivers/net/mlx5/linux/mlx5_verbs.c @@ -424,14 +424,16 @@ mlx5_rxq_ibv_obj_release(struct mlx5_rxq_priv *rxq) { struct mlx5_rxq_obj *rxq_obj = rxq->ctrl->obj; - MLX5_ASSERT(rxq_obj); - MLX5_ASSERT(rxq_obj->wq); - MLX5_ASSERT(rxq_obj->ibv_cq); + if (rxq_obj == NULL || rxq_obj->wq == NULL) + return; claim_zero(mlx5_glue->destroy_wq(rxq_obj->wq)); + rxq_obj->wq = NULL; + MLX5_ASSERT(rxq_obj->ibv_cq); claim_zero(mlx5_glue->destroy_cq(rxq_obj->ibv_cq)); if (rxq_obj->ibv_channel) claim_zero(mlx5_glue->destroy_comp_channel (rxq_obj->ibv_channel)); + rxq->ctrl->started = false; } /** diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index a037a33debf..51f45788381 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -1200,6 +1200,7 @@ struct mlx5_dev_ctx_shared { struct mlx5_ecpri_parser_profile ecpri_parser; /* Flex parser profiles information. */ void *devx_rx_uar; /* DevX UAR for Rx. */ + LIST_HEAD(shared_rxqs, mlx5_rxq_ctrl) shared_rxqs; /* Shared RXQs. */ struct mlx5_aso_age_mng *aso_age_mng; /* Management data for aging mechanism using ASO Flow Hit. */ struct mlx5_geneve_tlv_option_resource *geneve_tlv_option_resource; @@ -1267,6 +1268,7 @@ struct mlx5_rxq_obj { }; struct mlx5_devx_obj *rq; /* DevX RQ object for hairpin. */ struct { + struct mlx5_devx_rmp devx_rmp; /* RMP for shared RQ. */ struct mlx5_devx_cq cq_obj; /* DevX CQ object. 
*/ void *devx_channel; }; diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c index 668d47025e8..d3d189ab7f2 100644 --- a/drivers/net/mlx5/mlx5_devx.c +++ b/drivers/net/mlx5/mlx5_devx.c @@ -88,6 +88,8 @@ mlx5_devx_modify_rq(struct mlx5_rxq_priv *rxq, uint8_t type) default: break; } + if (rxq->ctrl->type == MLX5_RXQ_TYPE_HAIRPIN) + return mlx5_devx_cmd_modify_rq(rxq->ctrl->obj->rq, &rq_attr); return mlx5_devx_cmd_modify_rq(rxq->devx_rq.rq, &rq_attr); } @@ -156,18 +158,21 @@ mlx5_txq_devx_modify(struct mlx5_txq_obj *obj, enum mlx5_txq_modify_type type, static void mlx5_rxq_devx_obj_release(struct mlx5_rxq_priv *rxq) { - struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl; - struct mlx5_rxq_obj *rxq_obj = rxq_ctrl->obj; + struct mlx5_rxq_obj *rxq_obj = rxq->ctrl->obj; - MLX5_ASSERT(rxq != NULL); - MLX5_ASSERT(rxq_ctrl != NULL); + if (rxq_obj == NULL) + return; if (rxq_obj->rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN) { - MLX5_ASSERT(rxq_obj->rq); + if (rxq_obj->rq == NULL) + return; mlx5_devx_modify_rq(rxq, MLX5_RXQ_MOD_RDY2RST); claim_zero(mlx5_devx_cmd_destroy(rxq_obj->rq)); } else { + if (rxq->devx_rq.rq == NULL) + return; mlx5_devx_rq_destroy(&rxq->devx_rq); - memset(&rxq->devx_rq, 0, sizeof(rxq->devx_rq)); + if (rxq->devx_rq.rmp != NULL && rxq->devx_rq.rmp->ref_cnt > 0) + return; mlx5_devx_cq_destroy(&rxq_obj->cq_obj); memset(&rxq_obj->cq_obj, 0, sizeof(rxq_obj->cq_obj)); if (rxq_obj->devx_channel) { @@ -176,6 +181,7 @@ mlx5_rxq_devx_obj_release(struct mlx5_rxq_priv *rxq) rxq_obj->devx_channel = NULL; } } + rxq->ctrl->started = false; } /** @@ -271,6 +277,8 @@ mlx5_rxq_create_devx_rq_resources(struct mlx5_rxq_priv *rxq) MLX5_WQ_END_PAD_MODE_NONE; rq_attr.wq_attr.pd = cdev->pdn; rq_attr.counter_set_id = priv->counter_set_id; + if (rxq_data->shared) /* Create RMP based RQ. */ + rxq->devx_rq.rmp = &rxq_ctrl->obj->devx_rmp; /* Create RQ using DevX API. 
*/ return mlx5_devx_rq_create(cdev->ctx, &rxq->devx_rq, wqe_size, log_desc_n, &rq_attr, rxq_ctrl->socket); @@ -300,6 +308,8 @@ mlx5_rxq_create_devx_cq_resources(struct mlx5_rxq_priv *rxq) uint16_t event_nums[1] = { 0 }; int ret = 0; + if (rxq_ctrl->started) + return 0; if (priv->config.cqe_comp && !rxq_data->hw_timestamp && !rxq_data->lro) { cq_attr.cqe_comp_en = 1u; @@ -365,6 +375,7 @@ mlx5_rxq_create_devx_cq_resources(struct mlx5_rxq_priv *rxq) rxq_data->cq_uar = mlx5_os_get_devx_uar_base_addr(sh->devx_rx_uar); rxq_data->cqe_n = log_cqe_n; rxq_data->cqn = cq_obj->cq->id; + rxq_data->cq_ci = 0; if (rxq_ctrl->obj->devx_channel) { ret = mlx5_os_devx_subscribe_devx_event (rxq_ctrl->obj->devx_channel, @@ -463,7 +474,7 @@ mlx5_rxq_devx_obj_new(struct mlx5_rxq_priv *rxq) if (rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN) return mlx5_rxq_obj_hairpin_new(rxq); tmpl->rxq_ctrl = rxq_ctrl; - if (rxq_ctrl->irq) { + if (rxq_ctrl->irq && !rxq_ctrl->started) { int devx_ev_flag = MLX5DV_DEVX_CREATE_EVENT_CHANNEL_FLAGS_OMIT_EV_DATA; @@ -496,11 +507,19 @@ mlx5_rxq_devx_obj_new(struct mlx5_rxq_priv *rxq) ret = mlx5_devx_modify_rq(rxq, MLX5_RXQ_MOD_RST2RDY); if (ret) goto error; - rxq_data->wqes = (void *)(uintptr_t)rxq->devx_rq.wq.umem_buf; - rxq_data->rq_db = (uint32_t *)(uintptr_t)rxq->devx_rq.wq.db_rec; - mlx5_rxq_initialize(rxq_data); + if (!rxq_data->shared) { + rxq_data->wqes = (void *)(uintptr_t)rxq->devx_rq.wq.umem_buf; + rxq_data->rq_db = (uint32_t *)(uintptr_t)rxq->devx_rq.wq.db_rec; + } else if (!rxq_ctrl->started) { + rxq_data->wqes = (void *)(uintptr_t)tmpl->devx_rmp.wq.umem_buf; + rxq_data->rq_db = + (uint32_t *)(uintptr_t)tmpl->devx_rmp.wq.db_rec; + } + if (!rxq_ctrl->started) { + mlx5_rxq_initialize(rxq_data); + rxq_ctrl->wqn = rxq->devx_rq.rq->id; + } priv->dev_data->rx_queue_state[rxq->idx] = RTE_ETH_QUEUE_STATE_STARTED; - rxq_ctrl->wqn = rxq->devx_rq.rq->id; return 0; error: ret = rte_errno; /* Save rte_errno before cleanup. */ @@ -558,7 +577,10 @@ mlx5_devx_ind_table_create_rqt_attr(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, queues[i]); MLX5_ASSERT(rxq != NULL); - rqt_attr->rq_list[i] = rxq->devx_rq.rq->id; + if (rxq->ctrl->type == MLX5_RXQ_TYPE_HAIRPIN) + rqt_attr->rq_list[i] = rxq->ctrl->obj->rq->id; + else + rqt_attr->rq_list[i] = rxq->devx_rq.rq->id; } MLX5_ASSERT(i > 0); for (j = 0; i != rqt_n; ++j, ++i) diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c index bb38d5d2ade..dc647d5580c 100644 --- a/drivers/net/mlx5/mlx5_ethdev.c +++ b/drivers/net/mlx5/mlx5_ethdev.c @@ -26,6 +26,7 @@ #include "mlx5_rx.h" #include "mlx5_tx.h" #include "mlx5_autoconf.h" +#include "mlx5_devx.h" /** * Get the interface index from device name. @@ -336,9 +337,13 @@ mlx5_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info) info->flow_type_rss_offloads = ~MLX5_RSS_HF_MASK; mlx5_set_default_params(dev, info); mlx5_set_txlimit_params(dev, info); + if (priv->config.hca_attr.mem_rq_rmp && + priv->obj_ops.rxq_obj_new == devx_obj_ops.rxq_obj_new) + info->dev_capa |= RTE_ETH_DEV_CAPA_RXQ_SHARE; info->switch_info.name = dev->data->name; info->switch_info.domain_id = priv->domain_id; info->switch_info.port_id = priv->representor_id; + info->switch_info.rx_domain = 0; /* No sub Rx domains. 
*/ if (priv->representor) { uint16_t port_id; diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h index 413e36f6d8d..eda6eca8dea 100644 --- a/drivers/net/mlx5/mlx5_rx.h +++ b/drivers/net/mlx5/mlx5_rx.h @@ -96,6 +96,7 @@ struct mlx5_rxq_data { unsigned int lro:1; /* Enable LRO. */ unsigned int dynf_meta:1; /* Dynamic metadata is configured. */ unsigned int mcqe_format:3; /* CQE compression format. */ + unsigned int shared:1; /* Shared RXQ. */ volatile uint32_t *rq_db; volatile uint32_t *cq_db; uint16_t port_id; @@ -158,8 +159,10 @@ struct mlx5_rxq_ctrl { struct mlx5_dev_ctx_shared *sh; /* Shared context. */ enum mlx5_rxq_type type; /* Rxq type. */ unsigned int socket; /* CPU socket ID for allocations. */ + LIST_ENTRY(mlx5_rxq_ctrl) share_entry; /* Entry in shared RXQ list. */ uint32_t share_group; /* Group ID of shared RXQ. */ uint16_t share_qid; /* Shared RxQ ID in group. */ + unsigned int started:1; /* Whether (shared) RXQ has been started. */ unsigned int irq:1; /* Whether IRQ is enabled. */ uint32_t flow_mark_n; /* Number of Mark/Flag flows using this Queue. */ uint32_t flow_tunnels_n[MLX5_FLOW_TUNNEL]; /* Tunnels counters. */ diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index f3fc618ed2c..8feb3e2c0fb 100644 --- a/drivers/net/mlx5/mlx5_rxq.c +++ b/drivers/net/mlx5/mlx5_rxq.c @@ -29,6 +29,7 @@ #include "mlx5_rx.h" #include "mlx5_utils.h" #include "mlx5_autoconf.h" +#include "mlx5_devx.h" /* Default RSS hash key also used for ConnectX-3. */ @@ -633,14 +634,19 @@ mlx5_rx_queue_start(struct rte_eth_dev *dev, uint16_t idx) * RX queue index. * @param desc * Number of descriptors to configure in queue. + * @param[out] rxq_ctrl + * Address of pointer to shared Rx queue control. * * @return * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int -mlx5_rx_queue_pre_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t *desc) +mlx5_rx_queue_pre_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t *desc, + struct mlx5_rxq_ctrl **rxq_ctrl) { struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_rxq_priv *rxq; + bool empty; if (!rte_is_power_of_2(*desc)) { *desc = 1 << log2above(*desc); @@ -657,16 +663,143 @@ mlx5_rx_queue_pre_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t *desc) rte_errno = EOVERFLOW; return -rte_errno; } - if (!mlx5_rxq_releasable(dev, idx)) { - DRV_LOG(ERR, "port %u unable to release queue index %u", - dev->data->port_id, idx); - rte_errno = EBUSY; - return -rte_errno; + if (rxq_ctrl == NULL || *rxq_ctrl == NULL) + return 0; + if (!(*rxq_ctrl)->rxq.shared) { + if (!mlx5_rxq_releasable(dev, idx)) { + DRV_LOG(ERR, "port %u unable to release queue index %u", + dev->data->port_id, idx); + rte_errno = EBUSY; + return -rte_errno; + } + mlx5_rxq_release(dev, idx); + } else { + if ((*rxq_ctrl)->obj != NULL) + /* Some port using shared Rx queue has been started. */ + return 0; + /* Release all owner RxQ to reconfigure Shared RxQ. */ + do { + rxq = LIST_FIRST(&(*rxq_ctrl)->owners); + LIST_REMOVE(rxq, owner_entry); + empty = LIST_EMPTY(&(*rxq_ctrl)->owners); + mlx5_rxq_release(ETH_DEV(rxq->priv), rxq->idx); + } while (!empty); + *rxq_ctrl = NULL; } - mlx5_rxq_release(dev, idx); return 0; } +/** + * Get the shared Rx queue object that matches group and queue index. + * + * @param dev + * Pointer to Ethernet device structure. + * @param group + * Shared RXQ group. + * @param share_qid + * Shared RX queue index. + * + * @return + * Shared RXQ object that matching, or NULL if not found. 
+ */ +static struct mlx5_rxq_ctrl * +mlx5_shared_rxq_get(struct rte_eth_dev *dev, uint32_t group, uint16_t share_qid) +{ + struct mlx5_rxq_ctrl *rxq_ctrl; + struct mlx5_priv *priv = dev->data->dev_private; + + LIST_FOREACH(rxq_ctrl, &priv->sh->shared_rxqs, share_entry) { + if (rxq_ctrl->share_group == group && + rxq_ctrl->share_qid == share_qid) + return rxq_ctrl; + } + return NULL; +} + +/** + * Check whether requested Rx queue configuration matches shared RXQ. + * + * @param rxq_ctrl + * Pointer to shared RXQ. + * @param dev + * Pointer to Ethernet device structure. + * @param idx + * Queue index. + * @param desc + * Number of descriptors to configure in queue. + * @param socket + * NUMA socket on which memory must be allocated. + * @param[in] conf + * Thresholds parameters. + * @param mp + * Memory pool for buffer allocations. + * + * @return + * true if the requested configuration matches the shared RXQ, false otherwise. + */ +static bool +mlx5_shared_rxq_match(struct mlx5_rxq_ctrl *rxq_ctrl, struct rte_eth_dev *dev, + uint16_t idx, uint16_t desc, unsigned int socket, + const struct rte_eth_rxconf *conf, + struct rte_mempool *mp) +{ + struct mlx5_priv *spriv = LIST_FIRST(&rxq_ctrl->owners)->priv; + struct mlx5_priv *priv = dev->data->dev_private; + unsigned int i; + + RTE_SET_USED(conf); + if (rxq_ctrl->socket != socket) { + DRV_LOG(ERR, "port %u queue index %u failed to join shared group: socket mismatch", + dev->data->port_id, idx); + return false; + } + if (rxq_ctrl->rxq.elts_n != log2above(desc)) { + DRV_LOG(ERR, "port %u queue index %u failed to join shared group: descriptor number mismatch", + dev->data->port_id, idx); + return false; + } + if (priv->mtu != spriv->mtu) { + DRV_LOG(ERR, "port %u queue index %u failed to join shared group: mtu mismatch", + dev->data->port_id, idx); + return false; + } + if (priv->dev_data->dev_conf.intr_conf.rxq != + spriv->dev_data->dev_conf.intr_conf.rxq) { + DRV_LOG(ERR, "port %u queue index %u failed to join shared group: interrupt mismatch", + dev->data->port_id, idx); + return false; + } + if (mp != NULL && rxq_ctrl->rxq.mp != mp) { + DRV_LOG(ERR, "port %u queue index %u failed to join shared group: mempool mismatch", + dev->data->port_id, idx); + return false; + } else if (mp == NULL) { + for (i = 0; i < conf->rx_nseg; i++) { + if (conf->rx_seg[i].split.mp != + rxq_ctrl->rxq.rxseg[i].mp || + conf->rx_seg[i].split.length != + rxq_ctrl->rxq.rxseg[i].length) { + DRV_LOG(ERR, "port %u queue index %u failed to join shared group: segment %u configuration mismatch", + dev->data->port_id, idx, i); + return false; + } + } + } + if (priv->config.hw_padding != spriv->config.hw_padding) { + DRV_LOG(ERR, "port %u queue index %u failed to join shared group: padding mismatch", + dev->data->port_id, idx); + return false; + } + if (priv->config.cqe_comp != spriv->config.cqe_comp || + (priv->config.cqe_comp && + priv->config.cqe_comp_fmt != spriv->config.cqe_comp_fmt)) { + DRV_LOG(ERR, "port %u queue index %u failed to join shared group: CQE compression mismatch", + dev->data->port_id, idx); + return false; + } + return true; +} + /** * * @param dev @@ -692,12 +825,14 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, { struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_rxq_priv *rxq; - struct mlx5_rxq_ctrl *rxq_ctrl; + struct mlx5_rxq_ctrl *rxq_ctrl = NULL; struct rte_eth_rxseg_split *rx_seg = (struct rte_eth_rxseg_split *)conf->rx_seg; struct rte_eth_rxseg_split rx_single = {.mp = mp}; uint16_t n_seg = conf->rx_nseg; int res; +
uint64_t offloads = conf->offloads | + dev->data->dev_conf.rxmode.offloads; if (mp) { /* @@ -709,9 +844,6 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, n_seg = 1; } if (n_seg > 1) { - uint64_t offloads = conf->offloads | - dev->data->dev_conf.rxmode.offloads; - /* The offloads should be checked on rte_eth_dev layer. */ MLX5_ASSERT(offloads & RTE_ETH_RX_OFFLOAD_SCATTER); if (!(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { @@ -723,9 +855,46 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, } MLX5_ASSERT(n_seg < MLX5_MAX_RXQ_NSEG); } - res = mlx5_rx_queue_pre_setup(dev, idx, &desc); + if (conf->share_group > 0) { + if (!priv->config.hca_attr.mem_rq_rmp) { + DRV_LOG(ERR, "port %u queue index %u shared Rx queue not supported by fw", + dev->data->port_id, idx); + rte_errno = EINVAL; + return -rte_errno; + } + if (priv->obj_ops.rxq_obj_new != devx_obj_ops.rxq_obj_new) { + DRV_LOG(ERR, "port %u queue index %u shared Rx queue needs DevX api", + dev->data->port_id, idx); + rte_errno = EINVAL; + return -rte_errno; + } + if (conf->share_qid >= priv->rxqs_n) { + DRV_LOG(ERR, "port %u shared Rx queue index %u > number of Rx queues %u", + dev->data->port_id, conf->share_qid, + priv->rxqs_n); + rte_errno = EINVAL; + return -rte_errno; + } + if (priv->config.mprq.enabled) { + DRV_LOG(ERR, "port %u shared Rx queue index %u: not supported when MPRQ enabled", + dev->data->port_id, conf->share_qid); + rte_errno = EINVAL; + return -rte_errno; + } + /* Try to reuse shared RXQ. */ + rxq_ctrl = mlx5_shared_rxq_get(dev, conf->share_group, + conf->share_qid); + if (rxq_ctrl != NULL && + !mlx5_shared_rxq_match(rxq_ctrl, dev, idx, desc, socket, + conf, mp)) { + rte_errno = EINVAL; + return -rte_errno; + } + } + res = mlx5_rx_queue_pre_setup(dev, idx, &desc, &rxq_ctrl); if (res) return res; + /* Allocate RXQ. */ rxq = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*rxq), 0, SOCKET_ID_ANY); if (!rxq) { @@ -737,15 +906,23 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, rxq->priv = priv; rxq->idx = idx; (*priv->rxq_privs)[idx] = rxq; - rxq_ctrl = mlx5_rxq_new(dev, rxq, desc, socket, conf, rx_seg, n_seg); - if (!rxq_ctrl) { - DRV_LOG(ERR, "port %u unable to allocate rx queue index %u", - dev->data->port_id, idx); - mlx5_free(rxq); - (*priv->rxq_privs)[idx] = NULL; - rte_errno = ENOMEM; - return -rte_errno; + if (rxq_ctrl != NULL) { + /* Join owner list. 
*/ + LIST_INSERT_HEAD(&rxq_ctrl->owners, rxq, owner_entry); + rxq->ctrl = rxq_ctrl; + } else { + rxq_ctrl = mlx5_rxq_new(dev, rxq, desc, socket, conf, rx_seg, + n_seg); + if (rxq_ctrl == NULL) { + DRV_LOG(ERR, "port %u unable to allocate rx queue index %u", + dev->data->port_id, idx); + mlx5_free(rxq); + (*priv->rxq_privs)[idx] = NULL; + rte_errno = ENOMEM; + return -rte_errno; + } } + mlx5_rxq_ref(dev, idx); DRV_LOG(DEBUG, "port %u adding Rx queue %u to list", dev->data->port_id, idx); dev->data->rx_queues[idx] = &rxq_ctrl->rxq; @@ -776,7 +953,7 @@ mlx5_rx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t idx, struct mlx5_rxq_ctrl *rxq_ctrl; int res; - res = mlx5_rx_queue_pre_setup(dev, idx, &desc); + res = mlx5_rx_queue_pre_setup(dev, idx, &desc, NULL); if (res) return res; if (hairpin_conf->peer_count != 1) { @@ -1095,6 +1272,9 @@ mlx5_rxq_obj_verify(struct rte_eth_dev *dev) struct mlx5_rxq_obj *rxq_obj; LIST_FOREACH(rxq_obj, &priv->rxqsobj, next) { + if (rxq_obj->rxq_ctrl->rxq.shared && + !LIST_EMPTY(&rxq_obj->rxq_ctrl->owners)) + continue; DRV_LOG(DEBUG, "port %u Rx queue %u still referenced", dev->data->port_id, rxq_obj->rxq_ctrl->rxq.idx); ++ret; @@ -1413,6 +1593,12 @@ mlx5_rxq_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq, return NULL; } LIST_INIT(&tmpl->owners); + if (conf->share_group > 0) { + tmpl->rxq.shared = 1; + tmpl->share_group = conf->share_group; + tmpl->share_qid = conf->share_qid; + LIST_INSERT_HEAD(&priv->sh->shared_rxqs, tmpl, share_entry); + } rxq->ctrl = tmpl; LIST_INSERT_HEAD(&tmpl->owners, rxq, owner_entry); MLX5_ASSERT(n_seg && n_seg <= MLX5_MAX_RXQ_NSEG); @@ -1661,7 +1847,6 @@ mlx5_rxq_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq, tmpl->rxq.uar_lock_cq = &priv->sh->uar_lock_cq; #endif tmpl->rxq.idx = idx; - mlx5_rxq_ref(dev, idx); LIST_INSERT_HEAD(&priv->rxqsctrl, tmpl, next); return tmpl; error: @@ -1836,31 +2021,41 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx) struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_rxq_priv *rxq; struct mlx5_rxq_ctrl *rxq_ctrl; + uint32_t refcnt; if (priv->rxq_privs == NULL) return 0; rxq = mlx5_rxq_get(dev, idx); - if (rxq == NULL) + if (rxq == NULL || rxq->refcnt == 0) return 0; - if (mlx5_rxq_deref(dev, idx) > 1) - return 1; rxq_ctrl = rxq->ctrl; - if (rxq_ctrl->obj != NULL) { + refcnt = mlx5_rxq_deref(dev, idx); + if (refcnt > 1) { + return 1; + } else if (refcnt == 1) { /* RxQ stopped. */ priv->obj_ops.rxq_obj_release(rxq); - LIST_REMOVE(rxq_ctrl->obj, next); - mlx5_free(rxq_ctrl->obj); - rxq_ctrl->obj = NULL; - } - if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) { - rxq_free_elts(rxq_ctrl); - dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STOPPED; - } - if (!__atomic_load_n(&rxq->refcnt, __ATOMIC_RELAXED)) { - if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) - mlx5_mr_btree_free(&rxq_ctrl->rxq.mr_ctrl.cache_bh); + if (!rxq_ctrl->started && rxq_ctrl->obj != NULL) { + LIST_REMOVE(rxq_ctrl->obj, next); + mlx5_free(rxq_ctrl->obj); + rxq_ctrl->obj = NULL; + } + if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) { + if (!rxq_ctrl->started) + rxq_free_elts(rxq_ctrl); + dev->data->rx_queue_state[idx] = + RTE_ETH_QUEUE_STATE_STOPPED; + } + } else { /* Refcnt zero, closing device. 
*/ LIST_REMOVE(rxq, owner_entry); - LIST_REMOVE(rxq_ctrl, next); - mlx5_free(rxq_ctrl); + if (LIST_EMPTY(&rxq_ctrl->owners)) { + if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) + mlx5_mr_btree_free + (&rxq_ctrl->rxq.mr_ctrl.cache_bh); + if (rxq_ctrl->rxq.shared) + LIST_REMOVE(rxq_ctrl, share_entry); + LIST_REMOVE(rxq_ctrl, next); + mlx5_free(rxq_ctrl); + } dev->data->rx_queues[idx] = NULL; mlx5_free(rxq); (*priv->rxq_privs)[idx] = NULL; diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c index 72475e4b5b5..a3e62e95335 100644 --- a/drivers/net/mlx5/mlx5_trigger.c +++ b/drivers/net/mlx5/mlx5_trigger.c @@ -176,6 +176,39 @@ mlx5_rxq_stop(struct rte_eth_dev *dev) mlx5_rxq_release(dev, i); } +static int +mlx5_rxq_ctrl_prepare(struct rte_eth_dev *dev, struct mlx5_rxq_ctrl *rxq_ctrl, + unsigned int idx) +{ + int ret = 0; + + if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) { + /* + * Pre-register the mempools. Regardless of whether + * the implicit registration is enabled or not, + * Rx mempool destruction is tracked to free MRs. + */ + if (mlx5_rxq_mempool_register(dev, rxq_ctrl) < 0) + return -rte_errno; + ret = rxq_alloc_elts(rxq_ctrl); + if (ret) + return ret; + } + MLX5_ASSERT(!rxq_ctrl->obj); + rxq_ctrl->obj = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, + sizeof(*rxq_ctrl->obj), 0, + rxq_ctrl->socket); + if (!rxq_ctrl->obj) { + DRV_LOG(ERR, "Port %u Rx queue %u can't allocate resources.", + dev->data->port_id, idx); + rte_errno = ENOMEM; + return -rte_errno; + } + DRV_LOG(DEBUG, "Port %u rxq %u updated with %p.", dev->data->port_id, + idx, (void *)&rxq_ctrl->obj); + return 0; +} + /** * Start traffic on Rx queues. * @@ -208,28 +241,10 @@ mlx5_rxq_start(struct rte_eth_dev *dev) if (rxq == NULL) continue; rxq_ctrl = rxq->ctrl; - if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) { - /* - * Pre-register the mempools. Regardless of whether - * the implicit registration is enabled or not, - * Rx mempool destruction is tracked to free MRs. 
- */ - if (mlx5_rxq_mempool_register(dev, rxq_ctrl) < 0) - goto error; - ret = rxq_alloc_elts(rxq_ctrl); - if (ret) + if (!rxq_ctrl->started) { + if (mlx5_rxq_ctrl_prepare(dev, rxq_ctrl, i) < 0) goto error; - } - MLX5_ASSERT(!rxq_ctrl->obj); - rxq_ctrl->obj = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, - sizeof(*rxq_ctrl->obj), 0, - rxq_ctrl->socket); - if (!rxq_ctrl->obj) { - DRV_LOG(ERR, - "Port %u Rx queue %u can't allocate resources.", - dev->data->port_id, i); - rte_errno = ENOMEM; - goto error; + LIST_INSERT_HEAD(&priv->rxqsobj, rxq_ctrl->obj, next); } ret = priv->obj_ops.rxq_obj_new(rxq); if (ret) { @@ -237,9 +252,7 @@ mlx5_rxq_start(struct rte_eth_dev *dev) rxq_ctrl->obj = NULL; goto error; } - DRV_LOG(DEBUG, "Port %u rxq %u updated with %p.", - dev->data->port_id, i, (void *)&rxq_ctrl->obj); - LIST_INSERT_HEAD(&priv->rxqsobj, rxq_ctrl->obj, next); + rxq_ctrl->started = true; } return 0; error:
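
The control-path changes above are what let several ports map a queue index onto one shared RxQ: the first port to configure a given (share_group, share_qid) pair allocates the control structure, later ports only join its owner list after mlx5_shared_rxq_match() verifies a compatible configuration, and mlx5_rxq_start() creates the DevX objects once per group. From the application side, sharing is requested entirely through struct rte_eth_rxconf. Below is a minimal sketch of such a setup; it is not part of the patch, the two-port topology and the setup_shared_rxq() helper are illustrative assumptions, while the share_group/share_qid fields and the RTE_ETH_DEV_CAPA_RXQ_SHARE flag come from the ethdev shared Rx queue API this series builds on:

#include <errno.h>
#include <string.h>
#include <rte_ethdev.h>

/* Hypothetical helper: let queue 0 of two ports share a single Rx queue.
 * Per mlx5_shared_rxq_match() above, descriptor count, mempool, NUMA
 * socket, MTU, interrupt mode and CQE compression settings must be
 * identical for every port joining the group. */
static int
setup_shared_rxq(uint16_t port0, uint16_t port1, struct rte_mempool *mp)
{
	struct rte_eth_dev_info info;
	struct rte_eth_rxconf rxconf;
	const uint16_t nb_desc = 1024;
	int ret;

	ret = rte_eth_dev_info_get(port0, &info);
	if (ret != 0)
		return ret;
	if (!(info.dev_capa & RTE_ETH_DEV_CAPA_RXQ_SHARE))
		return -ENOTSUP; /* No RMP in FW, or Verbs Rx objects in use. */
	memset(&rxconf, 0, sizeof(rxconf));
	rxconf.share_group = 1; /* Non-zero group ID requests sharing. */
	rxconf.share_qid = 0;   /* Slot of the shared queue inside the group. */
	ret = rte_eth_rx_queue_setup(port0, 0, nb_desc, 0, &rxconf, mp);
	if (ret != 0)
		return ret;
	/* The second port joins the shared RxQ created by the first one. */
	return rte_eth_rx_queue_setup(port1, 0, nb_desc, 0, &rxconf, mp);
}

The group ID starts at 1 because the driver treats share_group == 0 as "no sharing"; see the conf->share_group > 0 checks in mlx5_rx_queue_setup() above.
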
From patchwork Thu Nov 4 12:33:20 2021
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 103761
X-Patchwork-Delegate: rasland@nvidia.com
From: Xueming Li
To:
CC: Viacheslav Ovsiienko, Lior Margalit, Matan Azrad, David Christensen, Ruifeng Wang, Bruce Richardson, Konstantin Ananyev
Date: Thu, 4 Nov 2021 20:33:20 +0800
Message-ID: <20211104123320.1638915-15-xuemingl@nvidia.com>
In-Reply-To: <20211104123320.1638915-1-xuemingl@nvidia.com>
References: <20210727034204.20649-1-xuemingl@nvidia.com> <20211104123320.1638915-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH v4 14/14] net/mlx5: add shared Rx queue port datapath support

From: Viacheslav Ovsiienko

When receiving a packet, the mlx5 PMD takes the mbuf port number from the RxQ data. To support shared RxQ, save the port number into the RQ context as a user index; a received packet then resolves its port number from the CQE user index, which is derived from the RQ context. The legacy Verbs API doesn't support setting the RQ user index, so in that case the port number is still read from the RxQ.

Signed-off-by: Xueming Li
Signed-off-by: Viacheslav Ovsiienko
Acked-by: Slava Ovsiienko
Reviewed-by: David Christensen
--- drivers/net/mlx5/mlx5_devx.c | 1 + drivers/net/mlx5/mlx5_rx.c | 1 + drivers/net/mlx5/mlx5_rxq.c | 3 ++- drivers/net/mlx5/mlx5_rxtx_vec_altivec.h | 6 ++++++ drivers/net/mlx5/mlx5_rxtx_vec_neon.h | 12 +++++++++++- drivers/net/mlx5/mlx5_rxtx_vec_sse.h | 8 +++++++- 6 files changed, 28 insertions(+), 3 deletions(-) diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c index d3d189ab7f2..a9f9f4af700 100644 --- a/drivers/net/mlx5/mlx5_devx.c +++ b/drivers/net/mlx5/mlx5_devx.c @@ -277,6 +277,7 @@ mlx5_rxq_create_devx_rq_resources(struct mlx5_rxq_priv *rxq) MLX5_WQ_END_PAD_MODE_NONE; rq_attr.wq_attr.pd = cdev->pdn; rq_attr.counter_set_id = priv->counter_set_id; + rq_attr.user_index = rte_cpu_to_be_16(priv->dev_data->port_id); if (rxq_data->shared) /* Create RMP based RQ. */ rxq->devx_rq.rmp = &rxq_ctrl->obj->devx_rmp; /* Create RQ using DevX API. */ diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c index 1ffa1b95b88..4d85f64accd 100644 --- a/drivers/net/mlx5/mlx5_rx.c +++ b/drivers/net/mlx5/mlx5_rx.c @@ -709,6 +709,7 @@ rxq_cq_to_mbuf(struct mlx5_rxq_data *rxq, struct rte_mbuf *pkt, { /* Update packet information. */ pkt->packet_type = rxq_cq_to_pkt_type(rxq, cqe, mcqe); + pkt->port = unlikely(rxq->shared) ?
cqe->user_index_low : rxq->port_id; if (rxq->rss_hash) { uint32_t rss_hash_res = 0; diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index 8feb3e2c0fb..4515d531835 100644 --- a/drivers/net/mlx5/mlx5_rxq.c +++ b/drivers/net/mlx5/mlx5_rxq.c @@ -186,7 +186,8 @@ rxq_alloc_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl) mbuf_init->data_off = RTE_PKTMBUF_HEADROOM; rte_mbuf_refcnt_set(mbuf_init, 1); mbuf_init->nb_segs = 1; - mbuf_init->port = rxq->port_id; + /* For shared queues port is provided in CQE */ + mbuf_init->port = rxq->shared ? 0 : rxq->port_id; if (priv->flags & RTE_PKTMBUF_POOL_F_PINNED_EXT_BUF) mbuf_init->ol_flags = RTE_MBUF_F_EXTERNAL; /* diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h b/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h index 1d00c1c43d1..423e229508c 100644 --- a/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h +++ b/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h @@ -1189,6 +1189,12 @@ rxq_cq_process_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq, /* D.5 fill in mbuf - rearm_data and packet_type. */ rxq_cq_to_ptype_oflags_v(rxq, cqes, opcode, &pkts[pos]); + if (unlikely(rxq->shared)) { + pkts[pos]->port = cq[pos].user_index_low; + pkts[pos + p1]->port = cq[pos + p1].user_index_low; + pkts[pos + p2]->port = cq[pos + p2].user_index_low; + pkts[pos + p3]->port = cq[pos + p3].user_index_low; + } if (rxq->hw_timestamp) { int offset = rxq->timestamp_offset; if (rxq->rt_timestamp) { diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_neon.h b/drivers/net/mlx5/mlx5_rxtx_vec_neon.h index aa36df29a09..b1d16baa619 100644 --- a/drivers/net/mlx5/mlx5_rxtx_vec_neon.h +++ b/drivers/net/mlx5/mlx5_rxtx_vec_neon.h @@ -787,7 +787,17 @@ rxq_cq_process_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq, /* C.4 fill in mbuf - rearm_data and packet_type. */ rxq_cq_to_ptype_oflags_v(rxq, ptype_info, flow_tag, opcode, &elts[pos]); - if (rxq->hw_timestamp) { + if (unlikely(rxq->shared)) { + elts[pos]->port = container_of(p0, struct mlx5_cqe, + pkt_info)->user_index_low; + elts[pos + 1]->port = container_of(p1, struct mlx5_cqe, + pkt_info)->user_index_low; + elts[pos + 2]->port = container_of(p2, struct mlx5_cqe, + pkt_info)->user_index_low; + elts[pos + 3]->port = container_of(p3, struct mlx5_cqe, + pkt_info)->user_index_low; + } + if (unlikely(rxq->hw_timestamp)) { int offset = rxq->timestamp_offset; if (rxq->rt_timestamp) { struct mlx5_dev_ctx_shared *sh = rxq->sh; diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_sse.h b/drivers/net/mlx5/mlx5_rxtx_vec_sse.h index b0fc29d7b9e..f3d838389e2 100644 --- a/drivers/net/mlx5/mlx5_rxtx_vec_sse.h +++ b/drivers/net/mlx5/mlx5_rxtx_vec_sse.h @@ -736,7 +736,13 @@ rxq_cq_process_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq, *err |= _mm_cvtsi128_si64(opcode); /* D.5 fill in mbuf - rearm_data and packet_type. */ rxq_cq_to_ptype_oflags_v(rxq, cqes, opcode, &pkts[pos]); - if (rxq->hw_timestamp) { + if (unlikely(rxq->shared)) { + pkts[pos]->port = cq[pos].user_index_low; + pkts[pos + p1]->port = cq[pos + p1].user_index_low; + pkts[pos + p2]->port = cq[pos + p2].user_index_low; + pkts[pos + p3]->port = cq[pos + p3].user_index_low; + } + if (unlikely(rxq->hw_timestamp)) { int offset = rxq->timestamp_offset; if (rxq->rt_timestamp) { struct mlx5_dev_ctx_shared *sh = rxq->sh;
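
With the user index in place, one burst call on a shared queue can return packets that arrived on any member port, and mbuf->port is the only reliable way to tell them apart. A minimal, hypothetical polling sketch follows; handle_packet() is an assumed application callback, not part of the patch:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

/* Assumed application callback doing per-port processing. */
extern void handle_packet(uint16_t port_id, struct rte_mbuf *m);

/* Poll a shared RxQ through any member port. The PMD fills m->port from
 * the CQE user index, so it names the port the packet actually arrived
 * on, which is not necessarily member_port. */
static void
poll_shared_rxq(uint16_t member_port, uint16_t queue_id)
{
	struct rte_mbuf *pkts[BURST_SIZE];
	uint16_t nb, i;

	nb = rte_eth_rx_burst(member_port, queue_id, pkts, BURST_SIZE);
	for (i = 0; i < nb; i++)
		handle_packet(pkts[i]->port, pkts[i]);
}

This is also why rxq_alloc_elts_sprq() above stops pre-setting mbuf_init->port for shared queues: the field is rewritten per packet in the scalar path (rxq_cq_to_mbuf()) and in all three vectorized paths (SSE, NEON and Altivec).
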