From patchwork Sun Sep 26 11:18:54 2021
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 99684
X-Patchwork-Delegate: rasland@nvidia.com
From: Xueming Li <xuemingl@nvidia.com>
CC: Lior Margalit, Matan Azrad, Viacheslav Ovsiienko, Ori Kam
Date: Sun, 26 Sep 2021 19:18:54 +0800
Message-ID: <20210926111904.237736-2-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20210926111904.237736-1-xuemingl@nvidia.com>
References: <20210926111904.237736-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH 01/11] common/mlx5: support receive queue user index
The RQ user index is saved in the CQE when a packet is received by the RQ.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
---
 drivers/common/mlx5/mlx5_prm.h           | 8 +++++++-
 drivers/regex/mlx5/mlx5_regex_fastpath.c | 2 +-
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index d361bcf90ef..72af3710a8f 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -393,7 +393,13 @@ struct mlx5_cqe {
 	uint16_t hdr_type_etc;
 	uint16_t vlan_info;
 	uint8_t lro_num_seg;
-	uint8_t rsvd3[3];
+	union {
+		uint8_t user_index_bytes[3];
+		struct {
+			uint8_t user_index_hi;
+			uint16_t user_index_low;
+		} __rte_packed;
+	};
 	uint32_t flow_table_metadata;
 	uint8_t rsvd4[4];
 	uint32_t byte_cnt;
diff --git a/drivers/regex/mlx5/mlx5_regex_fastpath.c b/drivers/regex/mlx5/mlx5_regex_fastpath.c
index c79445ce7d3..a151e4ce8dc 100644
--- a/drivers/regex/mlx5/mlx5_regex_fastpath.c
+++ b/drivers/regex/mlx5/mlx5_regex_fastpath.c
@@ -567,7 +567,7 @@ mlx5_regexdev_dequeue(struct rte_regexdev *dev, uint16_t qp_id,
 		uint16_t wq_counter
 			= (rte_be_to_cpu_16(cqe->wqe_counter) + 1) &
 			  MLX5_REGEX_MAX_WQE_INDEX;
-		size_t sqid = cqe->rsvd3[2];
+		size_t sqid = cqe->user_index_bytes[2];
 		struct mlx5_regex_sq *sq = &queue->sqs[sqid];
 
 		/* UMR mode WQE counter move as WQE set(4 WQEBBS).*/
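
The union above exposes the same three bytes two ways: as raw bytes, and as a
one-byte high part plus a 16-bit low part. Assuming the field is stored
big-endian like the neighboring CQE fields, a consumer could assemble the full
24-bit user index with a helper along these lines (a sketch only; the helper
name is hypothetical and not part of the patch):

/* Hypothetical helper: recover the 24-bit RQ user index from the CQE. */
static inline uint32_t
mlx5_cqe_user_index(const struct mlx5_cqe *cqe)
{
	/* user_index_hi holds bits 23:16; user_index_low is big-endian. */
	return ((uint32_t)cqe->user_index_hi << 16) |
	       rte_be_to_cpu_16(cqe->user_index_low);
}

The regex fast path above only consumes the least significant byte, hence the
direct user_index_bytes[2] read in the hunk.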

From patchwork Sun Sep 26 11:18:55 2021
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 99685
X-Patchwork-Delegate: rasland@nvidia.com
From: Xueming Li <xuemingl@nvidia.com>
CC: Lior Margalit, Matan Azrad, Viacheslav Ovsiienko, Ray Kinsella
Date: Sun, 26 Sep 2021 19:18:55 +0800
Message-ID: <20210926111904.237736-3-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20210926111904.237736-1-xuemingl@nvidia.com>
References: <20210926111904.237736-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH 02/11] common/mlx5: support receive memory pool
Add DevX support for the PRM shared receive memory pool (RMP) object. The RMP
is used to implement shared Rx queues: multiple RQs can share the same RMP,
and memory buffers are supplied to the RMP. This patch makes the RMP-based RQ
optional; it is created only when mlx5_devx_rq.rmp is set.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
---
 drivers/common/mlx5/mlx5_common_devx.c | 310 +++++++++++++++++++++----
 drivers/common/mlx5/mlx5_common_devx.h |  19 +-
 drivers/common/mlx5/mlx5_devx_cmds.c   |  52 +++++
 drivers/common/mlx5/mlx5_devx_cmds.h   |  16 ++
 drivers/common/mlx5/mlx5_prm.h         |  85 ++++++-
 drivers/common/mlx5/version.map        |   1 +
 drivers/net/mlx5/mlx5_devx.c           |   2 +-
 7 files changed, 434 insertions(+), 51 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_common_devx.c b/drivers/common/mlx5/mlx5_common_devx.c
index 22c8d356c45..cd6f13a66b6 100644
--- a/drivers/common/mlx5/mlx5_common_devx.c
+++ b/drivers/common/mlx5/mlx5_common_devx.c
@@ -271,6 +271,39 @@ mlx5_devx_sq_create(void *ctx, struct mlx5_devx_sq *sq_obj, uint16_t log_wqbb_n,
 	return -rte_errno;
 }
 
+/**
+ * Destroy DevX Receive Queue resources.
+ *
+ * @param[in] rq_res
+ *   DevX RQ resource to destroy.
+ */
+static void
+mlx5_devx_wq_res_destroy(struct mlx5_devx_wq_res *rq_res)
+{
+	if (rq_res->umem_obj)
+		claim_zero(mlx5_os_umem_dereg(rq_res->umem_obj));
+	if (rq_res->umem_buf)
+		mlx5_free((void *)(uintptr_t)rq_res->umem_buf);
+	memset(rq_res, 0, sizeof(*rq_res));
+}
+
+/**
+ * Destroy DevX Receive Memory Pool.
+ *
+ * @param[in] rmp
+ *   DevX RMP to destroy.
+ */
+static void
+mlx5_devx_rmp_destroy(struct mlx5_devx_rmp *rmp)
+{
+	MLX5_ASSERT(rmp->ref_cnt == 0);
+	if (rmp->rmp) {
+		claim_zero(mlx5_devx_cmd_destroy(rmp->rmp));
+		rmp->rmp = NULL;
+	}
+	mlx5_devx_wq_res_destroy(&rmp->wq);
+}
+
 /**
  * Destroy DevX Receive Queue.
  *
@@ -280,55 +313,47 @@ mlx5_devx_sq_create(void *ctx, struct mlx5_devx_sq *sq_obj, uint16_t log_wqbb_n,
 void
 mlx5_devx_rq_destroy(struct mlx5_devx_rq *rq)
 {
-	if (rq->rq)
+	if (rq->rq) {
 		claim_zero(mlx5_devx_cmd_destroy(rq->rq));
-	if (rq->umem_obj)
-		claim_zero(mlx5_os_umem_dereg(rq->umem_obj));
-	if (rq->umem_buf)
-		mlx5_free((void *)(uintptr_t)rq->umem_buf);
+		rq->rq = NULL;
+	}
+	if (rq->rmp == NULL) {
+		mlx5_devx_wq_res_destroy(&rq->wq);
+	} else {
+		MLX5_ASSERT(rq->rmp->ref_cnt > 0);
+		rq->rmp->ref_cnt--;
+		if (rq->rmp->ref_cnt == 0)
+			mlx5_devx_rmp_destroy(rq->rmp);
+	}
+	rq->db_rec = 0;
 }
 
 /**
- * Create Receive Queue using DevX API.
- *
- * Get a pointer to partially initialized attributes structure, and updates the
- * following fields:
- *   wq_umem_valid
- *   wq_umem_id
- *   wq_umem_offset
- *   dbr_umem_valid
- *   dbr_umem_id
- *   dbr_addr
- *   log_wq_pg_sz
- * All other fields are updated by caller.
+ * Create WQ resources using DevX API.
  *
  * @param[in] ctx
  *   Context returned from mlx5 open_device() glue function.
- * @param[in/out] rq_obj
- *   Pointer to RQ to create.
+ * @param[in/out] rq_res
+ *   Pointer to RQ resource to create.
  * @param[in] wqe_size
  *   Size of WQE structure.
  * @param[in] log_wqbb_n
 *   Log of number of WQBBs in queue.
- * @param[in] attr
- *   Pointer to RQ attributes structure.
- * @param[in] socket
- *   Socket to use for allocation.
+ * @param[in] wq_attr
+ *   Pointer to WQ attributes structure.
 *
 * @return
 *   0 on success, a negative errno value otherwise and rte_errno is set.
 */
-int
-mlx5_devx_rq_create(void *ctx, struct mlx5_devx_rq *rq_obj, uint32_t wqe_size,
-		    uint16_t log_wqbb_n,
-		    struct mlx5_devx_create_rq_attr *attr, int socket)
+static int
+mlx5_devx_wq_res_create(void *ctx, struct mlx5_devx_wq_res *rq_res,
+			uint32_t wqe_size, uint16_t log_wqbb_n,
+			struct mlx5_devx_wq_attr *wq_attr, int socket)
 {
-	struct mlx5_devx_obj *rq = NULL;
 	struct mlx5dv_devx_umem *umem_obj = NULL;
 	void *umem_buf = NULL;
 	size_t alignment = MLX5_WQE_BUF_ALIGNMENT;
-	uint32_t umem_size, umem_dbrec;
-	uint16_t rq_size = 1 << log_wqbb_n;
+	uint32_t umem_size;
 	int ret;
 
 	if (alignment == (size_t)-1) {
@@ -337,8 +362,7 @@ mlx5_devx_rq_create(void *ctx, struct mlx5_devx_rq *rq_obj, uint32_t wqe_size,
 		return -rte_errno;
 	}
 	/* Allocate memory buffer for WQEs and doorbell record. */
-	umem_size = wqe_size * rq_size;
-	umem_dbrec = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
+	umem_size = wqe_size * (1 << log_wqbb_n);
 	umem_size += MLX5_DBR_SIZE;
 	umem_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, umem_size,
 			       alignment, socket);
@@ -355,14 +379,58 @@ mlx5_devx_rq_create(void *ctx, struct mlx5_devx_rq *rq_obj, uint32_t wqe_size,
 		rte_errno = errno;
 		goto error;
 	}
-	/* Fill attributes for RQ object creation. */
-	attr->wq_attr.wq_umem_valid = 1;
-	attr->wq_attr.wq_umem_id = mlx5_os_get_umem_id(umem_obj);
-	attr->wq_attr.wq_umem_offset = 0;
-	attr->wq_attr.dbr_umem_valid = 1;
-	attr->wq_attr.dbr_umem_id = attr->wq_attr.wq_umem_id;
-	attr->wq_attr.dbr_addr = umem_dbrec;
-	attr->wq_attr.log_wq_pg_sz = MLX5_LOG_PAGE_SIZE;
+	rq_res->umem_buf = umem_buf;
+	rq_res->umem_obj = umem_obj;
+	/* Fill WQ attributes. */
+	wq_attr->wq_umem_valid = 1;
+	wq_attr->wq_umem_id = mlx5_os_get_umem_id(umem_obj);
+	wq_attr->wq_umem_offset = 0;
+	wq_attr->dbr_umem_valid = 1;
+	wq_attr->dbr_umem_id = wq_attr->wq_umem_id;
+	wq_attr->dbr_addr = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
+	wq_attr->log_wq_pg_sz = MLX5_LOG_PAGE_SIZE;
+	return 0;
+error:
+	ret = rte_errno;
+	if (umem_obj)
+		claim_zero(mlx5_os_umem_dereg(umem_obj));
+	if (umem_buf)
+		mlx5_free((void *)(uintptr_t)umem_buf);
+	rte_errno = ret;
+	return -rte_errno;
+}
+
+/**
+ * Create standalone Receive Queue using DevX API.
+ *
+ * @param[in] ctx
+ *   Context returned from mlx5 open_device() glue function.
+ * @param[in/out] rq_obj
+ *   Pointer to RQ to create.
+ * @param[in] wqe_size
+ *   Size of WQE structure.
+ * @param[in] log_wqbb_n
+ *   Log of number of WQBBs in queue.
+ * @param[in] attr
+ *   Pointer to RQ attributes structure.
+ * @param[in] socket
+ *   Socket to use for allocation.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_devx_rq_std_create(void *ctx, struct mlx5_devx_rq *rq_obj,
+			uint32_t wqe_size, uint16_t log_wqbb_n,
+			struct mlx5_devx_create_rq_attr *attr, int socket)
+{
+	struct mlx5_devx_obj *rq = NULL;
+	int ret;
+
+	ret = mlx5_devx_wq_res_create(ctx, &rq_obj->wq, wqe_size, log_wqbb_n,
+				      &attr->wq_attr, socket);
+	if (ret != 0)
+		return ret;
 	/* Create receive queue object with DevX. */
 	rq = mlx5_devx_cmd_create_rq(ctx, attr, socket);
 	if (!rq) {
@@ -370,18 +438,166 @@ mlx5_devx_rq_create(void *ctx, struct mlx5_devx_rq *rq_obj, uint32_t wqe_size,
 		rte_errno = ENOMEM;
 		goto error;
 	}
-	rq_obj->umem_buf = umem_buf;
-	rq_obj->umem_obj = umem_obj;
 	rq_obj->rq = rq;
-	rq_obj->db_rec = RTE_PTR_ADD(rq_obj->umem_buf, umem_dbrec);
 	return 0;
 error:
 	ret = rte_errno;
-	if (umem_obj)
-		claim_zero(mlx5_os_umem_dereg(umem_obj));
-	if (umem_buf)
-		mlx5_free((void *)(uintptr_t)umem_buf);
+	mlx5_devx_wq_res_destroy(&rq_obj->wq);
 	rte_errno = ret;
 	return -rte_errno;
 }
 
+/**
+ * Create Receive Memory Pool using DevX API.
+ *
+ * @param[in] ctx
+ *   Context returned from mlx5 open_device() glue function.
+ * @param[in/out] rmp_obj
+ *   Pointer to RMP to create.
+ * @param[in] wqe_size
+ *   Size of WQE structure.
+ * @param[in] log_wqbb_n
+ *   Log of number of WQBBs in queue.
+ * @param[in] wq_attr
+ *   Pointer to WQ attributes structure.
+ * @param[in] socket
+ *   Socket to use for allocation.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_devx_rmp_create(void *ctx, struct mlx5_devx_rmp *rmp_obj,
+		     uint32_t wqe_size, uint16_t log_wqbb_n,
+		     struct mlx5_devx_wq_attr *wq_attr, int socket)
+{
+	struct mlx5_devx_create_rmp_attr rmp_attr = { 0 };
+	int ret;
+
+	rmp_attr.wq_attr = *wq_attr;
+	ret = mlx5_devx_wq_res_create(ctx, &rmp_obj->wq, wqe_size, log_wqbb_n,
+				      &rmp_attr.wq_attr, socket);
+	if (ret != 0)
+		return ret;
+	rmp_attr.state = MLX5_RMPC_STATE_RDY;
+	rmp_attr.basic_cyclic_rcv_wqe =
+		wq_attr->wq_type == MLX5_WQ_TYPE_CYCLIC_STRIDING_RQ ?
+		0 : 1;
+	/* Create receive memory pool object with DevX. */
+	rmp_obj->rmp = mlx5_devx_cmd_create_rmp(ctx, &rmp_attr, socket);
+	if (rmp_obj->rmp == NULL) {
+		DRV_LOG(ERR, "Can't create DevX RMP object.");
+		rte_errno = ENOMEM;
+		goto error;
+	}
+	return 0;
+error:
+	ret = rte_errno;
+	mlx5_devx_wq_res_destroy(&rmp_obj->wq);
+	rte_errno = ret;
+	return -rte_errno;
+}
+
+/**
+ * Create Shared Receive Queue based on RMP using DevX API.
+ *
+ * @param[in] ctx
+ *   Context returned from mlx5 open_device() glue function.
+ * @param[in/out] rq_obj
+ *   Pointer to RQ to create.
+ * @param[in] wqe_size
+ *   Size of WQE structure.
+ * @param[in] log_wqbb_n
+ *   Log of number of WQBBs in queue.
+ * @param[in] attr
+ *   Pointer to RQ attributes structure.
+ * @param[in] socket
+ *   Socket to use for allocation.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_devx_rq_shared_create(void *ctx, struct mlx5_devx_rq *rq_obj,
+			   uint32_t wqe_size, uint16_t log_wqbb_n,
+			   struct mlx5_devx_create_rq_attr *attr, int socket)
+{
+	struct mlx5_devx_obj *rq = NULL;
+	int ret;
+
+	ret = mlx5_devx_rmp_create(ctx, rq_obj->rmp, wqe_size, log_wqbb_n,
+				   &attr->wq_attr, socket);
+	if (ret != 0)
+		return ret;
+	rq_obj->rmp->ref_cnt++;
+	attr->mem_rq_type = MLX5_RQC_MEM_RQ_TYPE_MEMORY_RQ_RMP;
+	attr->rmpn = rq_obj->rmp->rmp->id;
+	attr->flush_in_error_en = 0;
+	memset(&attr->wq_attr, 0, sizeof(attr->wq_attr));
+	/* Create receive queue object with DevX. */
+	rq = mlx5_devx_cmd_create_rq(ctx, attr, socket);
+	if (!rq) {
+		DRV_LOG(ERR, "Can't create DevX RMP RQ object.");
+		rte_errno = ENOMEM;
+		goto error;
+	}
+	rq_obj->rq = rq;
+	return 0;
+error:
+	ret = rte_errno;
+	mlx5_devx_rq_destroy(rq_obj);
+	rte_errno = ret;
+	return -rte_errno;
+}
+
+/**
+ * Create Receive Queue using DevX API. Shared RQ is created only if rmp set.
+ *
+ * Get a pointer to partially initialized attributes structure, and updates the
+ * following fields:
+ *   wq_umem_valid
+ *   wq_umem_id
+ *   wq_umem_offset
+ *   dbr_umem_valid
+ *   dbr_umem_id
+ *   dbr_addr
+ *   log_wq_pg_sz
+ * All other fields are updated by caller.
+ *
+ * @param[in] ctx
+ *   Context returned from mlx5 open_device() glue function.
+ * @param[in/out] rq_obj
+ *   Pointer to RQ to create.
+ * @param[in] wqe_size
+ *   Size of WQE structure.
+ * @param[in] log_wqbb_n
+ *   Log of number of WQBBs in queue.
+ * @param[in] attr
+ *   Pointer to RQ attributes structure.
+ * @param[in] socket
+ *   Socket to use for allocation.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_devx_rq_create(void *ctx, struct mlx5_devx_rq *rq_obj,
+		    uint32_t wqe_size, uint16_t log_wqbb_n,
+		    struct mlx5_devx_create_rq_attr *attr, int socket)
+{
+	uint32_t umem_size, umem_dbrec;
+	int ret;
+
+	if (rq_obj->rmp == NULL)
+		ret = mlx5_devx_rq_std_create(ctx, rq_obj, wqe_size,
+					      log_wqbb_n, attr, socket);
+	else
+		ret = mlx5_devx_rq_shared_create(ctx, rq_obj, wqe_size,
+						 log_wqbb_n, attr, socket);
+	if (ret != 0)
+		return ret;
+	umem_size = wqe_size * (1 << log_wqbb_n);
+	umem_dbrec = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
+	rq_obj->db_rec = RTE_PTR_ADD(rq_obj->wq.umem_buf, umem_dbrec);
+	return 0;
+}
diff --git a/drivers/common/mlx5/mlx5_common_devx.h b/drivers/common/mlx5/mlx5_common_devx.h
index aad0184e5ac..328b6ce9324 100644
--- a/drivers/common/mlx5/mlx5_common_devx.h
+++ b/drivers/common/mlx5/mlx5_common_devx.h
@@ -33,11 +33,26 @@ struct mlx5_devx_sq {
 	volatile uint32_t *db_rec; /* The SQ doorbell record. */
 };
 
+/* DevX Receive Queue resource structure. */
+struct mlx5_devx_wq_res {
+	void *umem_obj; /* The RQ umem object. */
+	volatile void *umem_buf;
+};
+
+/* DevX Receive Memory Pool structure. */
+struct mlx5_devx_rmp {
+	struct mlx5_devx_obj *rmp; /* The RMP DevX object. */
+	uint32_t ref_cnt; /* Reference count. */
+	struct mlx5_devx_wq_res wq;
+};
+
 /* DevX Receive Queue structure. */
 struct mlx5_devx_rq {
 	struct mlx5_devx_obj *rq; /* The RQ DevX object. */
-	void *umem_obj; /* The RQ umem object. */
-	volatile void *umem_buf;
+	union {
+		struct mlx5_devx_rmp *rmp; /* Shared RQ RMP object. */
+		struct mlx5_devx_wq_res wq; /* WQ resource of standalone RQ. */
+	};
 	volatile uint32_t *db_rec; /* The RQ doorbell record. */
 };
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index 56407cc332f..120331e9c87 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -766,6 +766,8 @@ mlx5_devx_cmd_query_hca_attr(void *ctx,
 			MLX5_GET(cmd_hca_cap, hcattr, flow_counter_bulk_alloc);
 	attr->flow_counters_dump = MLX5_GET(cmd_hca_cap, hcattr,
 					    flow_counters_dump);
+	attr->log_max_rmp = MLX5_GET(cmd_hca_cap, hcattr, log_max_rmp);
+	attr->mem_rq_rmp = MLX5_GET(cmd_hca_cap, hcattr, mem_rq_rmp);
 	attr->log_max_rqt_size = MLX5_GET(cmd_hca_cap, hcattr,
 					  log_max_rqt_size);
 	attr->eswitch_manager = MLX5_GET(cmd_hca_cap, hcattr, eswitch_manager);
@@ -1250,6 +1252,56 @@ mlx5_devx_cmd_modify_rq(struct mlx5_devx_obj *rq,
 }
 
 /**
+ * Create RMP using DevX API.
+ *
+ * @param[in] ctx
+ *   Context returned from mlx5 open_device() glue function.
+ * @param [in] rmp_attr
+ *   Pointer to create RMP attributes structure.
+ * @param [in] socket
+ *   CPU socket ID for allocations.
+ *
+ * @return
+ *   The DevX object created, NULL otherwise and rte_errno is set.
+ */
+struct mlx5_devx_obj *
+mlx5_devx_cmd_create_rmp(void *ctx,
+			 struct mlx5_devx_create_rmp_attr *rmp_attr,
+			 int socket)
+{
+	uint32_t in[MLX5_ST_SZ_DW(create_rmp_in)] = {0};
+	uint32_t out[MLX5_ST_SZ_DW(create_rmp_out)] = {0};
+	void *rmp_ctx, *wq_ctx;
+	struct mlx5_devx_wq_attr *wq_attr;
+	struct mlx5_devx_obj *rmp = NULL;
+
+	rmp = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rmp), 0, socket);
+	if (!rmp) {
+		DRV_LOG(ERR, "Failed to allocate RMP data");
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+	MLX5_SET(create_rmp_in, in, opcode, MLX5_CMD_OP_CREATE_RMP);
+	rmp_ctx = MLX5_ADDR_OF(create_rmp_in, in, ctx);
+	MLX5_SET(rmpc, rmp_ctx, state, rmp_attr->state);
+	MLX5_SET(rmpc, rmp_ctx, basic_cyclic_rcv_wqe,
+		 rmp_attr->basic_cyclic_rcv_wqe);
+	wq_ctx = MLX5_ADDR_OF(rmpc, rmp_ctx, wq);
+	wq_attr = &rmp_attr->wq_attr;
+	devx_cmd_fill_wq_data(wq_ctx, wq_attr);
+	rmp->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out,
+					      sizeof(out));
+	if (!rmp->obj) {
+		DRV_LOG(ERR, "Failed to create RMP using DevX");
+		rte_errno = errno;
+		mlx5_free(rmp);
+		return NULL;
+	}
+	rmp->id = MLX5_GET(create_rmp_out, out, rmpn);
+	return rmp;
+}
+
+/*
  * Create TIR using DevX API.
  *
  * @param[in] ctx
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index e576e30f242..fa8ba89abe6 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -101,6 +101,8 @@ struct mlx5_hca_flow_attr {
 struct mlx5_hca_attr {
 	uint32_t eswitch_manager:1;
 	uint32_t flow_counters_dump:1;
+	uint32_t mem_rq_rmp:1;
+	uint32_t log_max_rmp:5;
 	uint32_t log_max_rqt_size:5;
 	uint32_t parse_graph_flex_node:1;
 	uint8_t flow_counter_bulk_alloc_bitmap;
@@ -245,6 +247,17 @@ struct mlx5_devx_modify_rq_attr {
 	uint32_t lwm:16; /* Contained WQ lwm. */
 };
 
+/* Create RMP attributes structure, used by create RMP operation. */
+struct mlx5_devx_create_rmp_attr {
+	uint32_t rsvd0:8;
+	uint32_t state:4;
+	uint32_t rsvd1:20;
+	uint32_t basic_cyclic_rcv_wqe:1;
+	uint32_t rsvd4:31;
+	uint32_t rsvd8[10];
+	struct mlx5_devx_wq_attr wq_attr;
+};
+
 struct mlx5_rx_hash_field_select {
 	uint32_t l3_prot_type:1;
 	uint32_t l4_prot_type:1;
@@ -520,6 +533,9 @@ __rte_internal
 int mlx5_devx_cmd_modify_rq(struct mlx5_devx_obj *rq,
 			    struct mlx5_devx_modify_rq_attr *rq_attr);
 __rte_internal
+struct mlx5_devx_obj *mlx5_devx_cmd_create_rmp(void *ctx,
+			struct mlx5_devx_create_rmp_attr *rq_attr, int socket);
+__rte_internal
 struct mlx5_devx_obj *mlx5_devx_cmd_create_tir(void *ctx,
 					       struct mlx5_devx_tir_attr *tir_attr);
 __rte_internal
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 72af3710a8f..df0991ee402 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -1061,6 +1061,10 @@ enum {
 	MLX5_CMD_OP_CREATE_RQ = 0x908,
 	MLX5_CMD_OP_MODIFY_RQ = 0x909,
 	MLX5_CMD_OP_QUERY_RQ = 0x90b,
+	MLX5_CMD_OP_CREATE_RMP = 0x90c,
+	MLX5_CMD_OP_MODIFY_RMP = 0x90d,
+	MLX5_CMD_OP_DESTROY_RMP = 0x90e,
+	MLX5_CMD_OP_QUERY_RMP = 0x90f,
 	MLX5_CMD_OP_CREATE_TIS = 0x912,
 	MLX5_CMD_OP_QUERY_TIS = 0x915,
 	MLX5_CMD_OP_CREATE_RQT = 0x916,
@@ -1557,7 +1561,8 @@ struct mlx5_ifc_cmd_hca_cap_bits {
 	u8 reserved_at_378[0x3];
 	u8 log_max_tis[0x5];
 	u8 basic_cyclic_rcv_wqe[0x1];
-	u8 reserved_at_381[0x2];
+	u8 reserved_at_381[0x1];
+	u8 mem_rq_rmp[0x1];
 	u8 log_max_rmp[0x5];
 	u8 reserved_at_388[0x3];
 	u8 log_max_rqt[0x5];
@@ -2159,6 +2164,84 @@ struct mlx5_ifc_query_rq_in_bits {
 	u8 reserved_at_60[0x20];
 };
 
+enum {
+	MLX5_RMPC_STATE_RDY = 0x1,
+	MLX5_RMPC_STATE_ERR = 0x3,
+};
+
+struct mlx5_ifc_rmpc_bits {
+	u8 reserved_at_0[0x8];
+	u8 state[0x4];
+	u8 reserved_at_c[0x14];
+	u8 basic_cyclic_rcv_wqe[0x1];
+	u8 reserved_at_21[0x1f];
+	u8 reserved_at_40[0x140];
+	struct mlx5_ifc_wq_bits wq;
+};
+
+struct mlx5_ifc_query_rmp_out_bits {
+	u8 status[0x8];
+	u8 reserved_at_8[0x18];
+	u8 syndrome[0x20];
+	u8 reserved_at_40[0xc0];
+	struct mlx5_ifc_rmpc_bits rmp_context;
+};
+
+struct mlx5_ifc_query_rmp_in_bits {
+	u8 opcode[0x10];
+	u8 reserved_at_10[0x10];
+	u8 reserved_at_20[0x10];
+	u8 op_mod[0x10];
+	u8 reserved_at_40[0x8];
+	u8 rmpn[0x18];
+	u8 reserved_at_60[0x20];
+};
+
+struct mlx5_ifc_modify_rmp_out_bits {
+	u8 status[0x8];
+	u8 reserved_at_8[0x18];
+	u8 syndrome[0x20];
+	u8 reserved_at_40[0x40];
+};
+
+struct mlx5_ifc_rmp_bitmask_bits {
+	u8 reserved_at_0[0x20];
+	u8 reserved_at_20[0x1f];
+	u8 lwm[0x1];
+};
+
+struct mlx5_ifc_modify_rmp_in_bits {
+	u8 opcode[0x10];
+	u8 uid[0x10];
+	u8 reserved_at_20[0x10];
+	u8 op_mod[0x10];
+	u8 rmp_state[0x4];
+	u8 reserved_at_44[0x4];
+	u8 rmpn[0x18];
+	u8 reserved_at_60[0x20];
+	struct mlx5_ifc_rmp_bitmask_bits bitmask;
+	u8 reserved_at_c0[0x40];
+	struct mlx5_ifc_rmpc_bits ctx;
+};
+
+struct mlx5_ifc_create_rmp_out_bits {
+	u8 status[0x8];
+	u8 reserved_at_8[0x18];
+	u8 syndrome[0x20];
+	u8 reserved_at_40[0x8];
+	u8 rmpn[0x18];
+	u8 reserved_at_60[0x20];
+};
+
+struct mlx5_ifc_create_rmp_in_bits {
+	u8 opcode[0x10];
+	u8 uid[0x10];
+	u8 reserved_at_20[0x10];
+	u8 op_mod[0x10];
+	u8 reserved_at_40[0xc0];
+	struct mlx5_ifc_rmpc_bits ctx;
+};
+
 struct mlx5_ifc_create_tis_out_bits {
 	u8 status[0x8];
 	u8 reserved_at_8[0x18];
diff --git a/drivers/common/mlx5/version.map b/drivers/common/mlx5/version.map
index e5cb6b70604..40975078cc4 100644
--- a/drivers/common/mlx5/version.map
+++ b/drivers/common/mlx5/version.map
@@ -31,6 +31,7 @@ INTERNAL {
 	mlx5_devx_cmd_create_geneve_tlv_option;
 	mlx5_devx_cmd_create_import_kek_obj;
 	mlx5_devx_cmd_create_qp;
+	mlx5_devx_cmd_create_rmp;
 	mlx5_devx_cmd_create_rq;
 	mlx5_devx_cmd_create_rqt;
 	mlx5_devx_cmd_create_sq;
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 447d6bafb93..4d479c19e6c 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -514,7 +514,7 @@ mlx5_rxq_devx_obj_new(struct rte_eth_dev *dev, uint16_t idx)
 	ret = mlx5_devx_modify_rq(tmpl, MLX5_RXQ_MOD_RST2RDY);
 	if (ret)
 		goto error;
-	rxq_data->wqes = (void *)(uintptr_t)tmpl->rq_obj.umem_buf;
+	rxq_data->wqes = (void *)(uintptr_t)tmpl->rq_obj.wq.umem_buf;
 	rxq_data->rq_db = (uint32_t *)(uintptr_t)tmpl->rq_obj.db_rec;
 	rxq_data->cq_arm_sn = 0;
 	rxq_data->cq_ci = 0;
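
Taken together, the new helpers keep a single entry point whose behavior is
selected by whether the caller points rq_obj->rmp at a shared mlx5_devx_rmp
before the call. A minimal usage sketch, not part of the patch (ctx, wqe_size,
log_wqbb_n and rq_attr stand for values the caller already prepared):

	struct mlx5_devx_rmp rmp = { 0 };
	struct mlx5_devx_rq rq = { 0 };

	rq.rmp = &rmp;	/* non-NULL requests an RMP-based (shared) RQ */
	if (mlx5_devx_rq_create(ctx, &rq, wqe_size, log_wqbb_n,
				&rq_attr, 0) != 0)
		return -rte_errno;	/* rte_errno is set by the helper */
	/*
	 * ... use the queue; on teardown, mlx5_devx_rq_destroy(&rq)
	 * decrements rmp.ref_cnt and destroys the RMP once the count
	 * reaches zero.
	 */

Leaving rq.rmp as NULL preserves the previous standalone-RQ behavior,
including the umem allocation and doorbell record setup.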

From patchwork Sun Sep 26 11:18:56 2021
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 99686
X-Patchwork-Delegate: rasland@nvidia.com
From: Xueming Li <xuemingl@nvidia.com>
CC: Lior Margalit, Matan Azrad, Viacheslav Ovsiienko
Date: Sun, 26 Sep 2021 19:18:56 +0800
Message-ID: <20210926111904.237736-4-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20210926111904.237736-1-xuemingl@nvidia.com>
References: <20210926111904.237736-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH 03/11] net/mlx5: clean Rx queue code
Remove unused Rx queue code.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
---
 drivers/net/mlx5/mlx5_rxq.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 396de327d11..7e97cdd4bc0 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -674,9 +674,7 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		    struct rte_mempool *mp)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq = (*priv->rxqs)[idx];
-	struct mlx5_rxq_ctrl *rxq_ctrl =
-		container_of(rxq, struct mlx5_rxq_ctrl, rxq);
+	struct mlx5_rxq_ctrl *rxq_ctrl;
 	struct rte_eth_rxseg_split *rx_seg =
 			(struct rte_eth_rxseg_split *)conf->rx_seg;
 	struct rte_eth_rxseg_split rx_single = {.mp = mp};
@@ -743,9 +741,7 @@ mlx5_rx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t idx,
 			    const struct rte_eth_hairpin_conf *hairpin_conf)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq = (*priv->rxqs)[idx];
-	struct mlx5_rxq_ctrl *rxq_ctrl =
-		container_of(rxq, struct mlx5_rxq_ctrl, rxq);
+	struct mlx5_rxq_ctrl *rxq_ctrl;
 	int res;
 
 	res = mlx5_rx_queue_pre_setup(dev, idx, &desc);

From patchwork Sun Sep 26 11:18:57 2021
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 99687
X-Patchwork-Delegate: rasland@nvidia.com
From: Xueming Li <xuemingl@nvidia.com>
CC: Lior Margalit, Matan Azrad, Viacheslav Ovsiienko
Date: Sun, 26 Sep 2021 19:18:57 +0800
Message-ID: <20210926111904.237736-5-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20210926111904.237736-1-xuemingl@nvidia.com>
References: <20210926111904.237736-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH 04/11] net/mlx5: split multiple packet Rq memory pool

Port info is invisible from a shared Rx queue, so split the MPRQ mempool from
per-device to per-Rx-queue, and change the pool creation flag to
single-consumer (MEMPOOL_F_SC_GET).
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
---
 drivers/net/mlx5/mlx5.c         |   1 -
 drivers/net/mlx5/mlx5_rx.h      |   4 +-
 drivers/net/mlx5/mlx5_rxq.c     | 109 ++++++++++++--------------------
 drivers/net/mlx5/mlx5_trigger.c |  10 ++-
 4 files changed, 47 insertions(+), 77 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index f84e061fe71..3abb8c97e76 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1602,7 +1602,6 @@ mlx5_dev_close(struct rte_eth_dev *dev)
 	mlx5_drop_action_destroy(dev);
 	if (priv->mreg_cp_tbl)
 		mlx5_hlist_destroy(priv->mreg_cp_tbl);
-	mlx5_mprq_free_mp(dev);
 	if (priv->sh->ct_mng)
 		mlx5_flow_aso_ct_mng_close(priv->sh);
 	mlx5_os_free_shared_dr(priv);
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index d44c8078dea..a8e0c3162b0 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -179,8 +179,8 @@ struct mlx5_rxq_ctrl {
 extern uint8_t rss_hash_default_key[];
 
 unsigned int mlx5_rxq_cqe_num(struct mlx5_rxq_data *rxq_data);
-int mlx5_mprq_free_mp(struct rte_eth_dev *dev);
-int mlx5_mprq_alloc_mp(struct rte_eth_dev *dev);
+int mlx5_mprq_free_mp(struct rte_eth_dev *dev, struct mlx5_rxq_ctrl *rxq_ctrl);
+int mlx5_mprq_alloc_mp(struct rte_eth_dev *dev, struct mlx5_rxq_ctrl *rxq_ctrl);
 int mlx5_rx_queue_start(struct rte_eth_dev *dev, uint16_t queue_id);
 int mlx5_rx_queue_stop(struct rte_eth_dev *dev, uint16_t queue_id);
 int mlx5_rx_queue_start_primary(struct rte_eth_dev *dev, uint16_t queue_id);
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 7e97cdd4bc0..14de8d0e6a4 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1087,7 +1087,7 @@ mlx5_mprq_buf_init(struct rte_mempool *mp, void *opaque_arg,
 }
 
 /**
- * Free mempool of Multi-Packet RQ.
+ * Free RXQ mempool of Multi-Packet RQ.
 *
 * @param dev
 *   Pointer to Ethernet device.
@@ -1096,16 +1096,15 @@ mlx5_mprq_buf_init(struct rte_mempool *mp, void *opaque_arg,
 *   0 on success, negative errno value on failure.
 */
 int
-mlx5_mprq_free_mp(struct rte_eth_dev *dev)
+mlx5_mprq_free_mp(struct rte_eth_dev *dev, struct mlx5_rxq_ctrl *rxq_ctrl)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct rte_mempool *mp = priv->mprq_mp;
-	unsigned int i;
+	struct mlx5_rxq_data *rxq = &rxq_ctrl->rxq;
+	struct rte_mempool *mp = rxq->mprq_mp;
 
 	if (mp == NULL)
 		return 0;
-	DRV_LOG(DEBUG, "port %u freeing mempool (%s) for Multi-Packet RQ",
-		dev->data->port_id, mp->name);
+	DRV_LOG(DEBUG, "port %u queue %hu freeing mempool (%s) for Multi-Packet RQ",
+		dev->data->port_id, rxq->idx, mp->name);
 	/*
 	 * If a buffer in the pool has been externally attached to a mbuf and it
 	 * is still in use by application, destroying the Rx queue can spoil
@@ -1123,34 +1122,28 @@ mlx5_mprq_free_mp(struct rte_eth_dev *dev)
 		return -rte_errno;
 	}
 	rte_mempool_free(mp);
-	/* Unset mempool for each Rx queue. */
-	for (i = 0; i != priv->rxqs_n; ++i) {
-		struct mlx5_rxq_data *rxq = (*priv->rxqs)[i];
-
-		if (rxq == NULL)
-			continue;
-		rxq->mprq_mp = NULL;
-	}
-	priv->mprq_mp = NULL;
+	rxq->mprq_mp = NULL;
 	return 0;
 }
 
 /**
- * Allocate a mempool for Multi-Packet RQ. All configured Rx queues share the
- * mempool. If already allocated, reuse it if there're enough elements.
+ * Allocate RXQ a mempool for Multi-Packet RQ.
+ * If already allocated, reuse it if there're enough elements.
 * Otherwise, resize it.
 *
 * @param dev
 *   Pointer to Ethernet device.
+ * @param rxq_ctrl
+ *   Pointer to RXQ.
 *
 * @return
 *   0 on success, negative errno value on failure.
 */
 int
-mlx5_mprq_alloc_mp(struct rte_eth_dev *dev)
+mlx5_mprq_alloc_mp(struct rte_eth_dev *dev, struct mlx5_rxq_ctrl *rxq_ctrl)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct rte_mempool *mp = priv->mprq_mp;
+	struct mlx5_rxq_data *rxq = &rxq_ctrl->rxq;
+	struct rte_mempool *mp = rxq->mprq_mp;
 	char name[RTE_MEMPOOL_NAMESIZE];
 	unsigned int desc = 0;
 	unsigned int buf_len;
@@ -1158,28 +1151,15 @@ mlx5_mprq_alloc_mp(struct rte_eth_dev *dev)
 	unsigned int obj_size;
 	unsigned int strd_num_n = 0;
 	unsigned int strd_sz_n = 0;
-	unsigned int i;
-	unsigned int n_ibv = 0;
 
-	if (!mlx5_mprq_enabled(dev))
+	if (rxq_ctrl == NULL || rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
 		return 0;
-	/* Count the total number of descriptors configured. */
-	for (i = 0; i != priv->rxqs_n; ++i) {
-		struct mlx5_rxq_data *rxq = (*priv->rxqs)[i];
-		struct mlx5_rxq_ctrl *rxq_ctrl = container_of
-			(rxq, struct mlx5_rxq_ctrl, rxq);
-
-		if (rxq == NULL || rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
-			continue;
-		n_ibv++;
-		desc += 1 << rxq->elts_n;
-		/* Get the max number of strides. */
-		if (strd_num_n < rxq->strd_num_n)
-			strd_num_n = rxq->strd_num_n;
-		/* Get the max size of a stride. */
-		if (strd_sz_n < rxq->strd_sz_n)
-			strd_sz_n = rxq->strd_sz_n;
-	}
+	/* Number of descriptors configured. */
+	desc = 1 << rxq->elts_n;
+	/* Get the max number of strides. */
+	strd_num_n = rxq->strd_num_n;
+	/* Get the max size of a stride. */
+	strd_sz_n = rxq->strd_sz_n;
 	MLX5_ASSERT(strd_num_n && strd_sz_n);
 	buf_len = (1 << strd_num_n) * (1 << strd_sz_n);
 	obj_size = sizeof(struct mlx5_mprq_buf) + buf_len + (1 << strd_num_n) *
@@ -1196,7 +1176,7 @@ mlx5_mprq_alloc_mp(struct rte_eth_dev *dev)
 	 * this Mempool gets available again.
 	 */
 	desc *= 4;
-	obj_num = desc + MLX5_MPRQ_MP_CACHE_SZ * n_ibv;
+	obj_num = desc + MLX5_MPRQ_MP_CACHE_SZ;
 	/*
 	 * rte_mempool_create_empty() has sanity check to refuse large cache
 	 * size compared to the number of elements.
@@ -1209,50 +1189,41 @@ mlx5_mprq_alloc_mp(struct rte_eth_dev *dev)
 		DRV_LOG(DEBUG, "port %u mempool %s is being reused",
 			dev->data->port_id, mp->name);
 		/* Reuse. */
-		goto exit;
-	} else if (mp != NULL) {
-		DRV_LOG(DEBUG, "port %u mempool %s should be resized, freeing it",
-			dev->data->port_id, mp->name);
+		return 0;
+	}
+	if (mp != NULL) {
+		DRV_LOG(DEBUG, "port %u queue %u mempool %s should be resized, freeing it",
+			dev->data->port_id, rxq->idx, mp->name);
 		/*
 		 * If failed to free, which means it may be still in use, no way
 		 * but to keep using the existing one. On buffer underrun,
 		 * packets will be memcpy'd instead of external buffer
 		 * attachment.
 		 */
-		if (mlx5_mprq_free_mp(dev)) {
+		if (mlx5_mprq_free_mp(dev, rxq_ctrl) != 0) {
 			if (mp->elt_size >= obj_size)
-				goto exit;
+				return 0;
 			else
 				return -rte_errno;
 		}
 	}
-	snprintf(name, sizeof(name), "port-%u-mprq", dev->data->port_id);
+	snprintf(name, sizeof(name), "port-%u-queue-%hu-mprq",
+		 dev->data->port_id, rxq->idx);
 	mp = rte_mempool_create(name, obj_num, obj_size, MLX5_MPRQ_MP_CACHE_SZ,
 				0, NULL, NULL, mlx5_mprq_buf_init,
-				(void *)((uintptr_t)1 << strd_num_n),
-				dev->device->numa_node, 0);
+				(void *)(uintptr_t)(1 << strd_num_n),
+				dev->device->numa_node, MEMPOOL_F_SC_GET);
 	if (mp == NULL) {
 		DRV_LOG(ERR,
-			"port %u failed to allocate a mempool for"
+			"port %u queue %hu failed to allocate a mempool for"
 			" Multi-Packet RQ, count=%u, size=%u",
-			dev->data->port_id, obj_num, obj_size);
+			dev->data->port_id, rxq->idx, obj_num, obj_size);
 		rte_errno = ENOMEM;
 		return -rte_errno;
 	}
-	priv->mprq_mp = mp;
-exit:
-	/* Set mempool for each Rx queue. */
-	for (i = 0; i != priv->rxqs_n; ++i) {
-		struct mlx5_rxq_data *rxq = (*priv->rxqs)[i];
-		struct mlx5_rxq_ctrl *rxq_ctrl = container_of
-			(rxq, struct mlx5_rxq_ctrl, rxq);
-
-		if (rxq == NULL || rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
-			continue;
-		rxq->mprq_mp = mp;
-	}
-	DRV_LOG(INFO, "port %u Multi-Packet RQ is configured",
-		dev->data->port_id);
+	rxq->mprq_mp = mp;
+	DRV_LOG(INFO, "port %u queue %hu Multi-Packet RQ is configured",
+		dev->data->port_id, rxq->idx);
 	return 0;
 }
 
@@ -1717,8 +1688,10 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 		dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STOPPED;
 	}
 	if (!__atomic_load_n(&rxq_ctrl->refcnt, __ATOMIC_RELAXED)) {
-		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD)
+		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
 			mlx5_mr_btree_free(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
+			mlx5_mprq_free_mp(dev, rxq_ctrl);
+		}
 		LIST_REMOVE(rxq_ctrl, next);
 		mlx5_free(rxq_ctrl);
 		(*priv->rxqs)[idx] = NULL;
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index c3adf5082e6..0753dbad053 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -138,11 +138,6 @@ mlx5_rxq_start(struct rte_eth_dev *dev)
 	unsigned int i;
 	int ret = 0;
 
-	/* Allocate/reuse/resize mempool for Multi-Packet RQ. */
-	if (mlx5_mprq_alloc_mp(dev)) {
-		/* Should not release Rx queues but return immediately. */
-		return -rte_errno;
-	}
 	DRV_LOG(DEBUG, "Port %u device_attr.max_qp_wr is %d.",
 		dev->data->port_id, priv->sh->device_attr.max_qp_wr);
 	DRV_LOG(DEBUG, "Port %u device_attr.max_sge is %d.",
@@ -153,8 +148,11 @@ mlx5_rxq_start(struct rte_eth_dev *dev)
 		if (!rxq_ctrl)
 			continue;
 		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
-			/* Pre-register Rx mempools. */
 			if (mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq)) {
+				/* Allocate/reuse/resize mempool for MPRQ. */
+				if (mlx5_mprq_alloc_mp(dev, rxq_ctrl) < 0)
+					goto error;
+				/* Pre-register Rx mempools. */
 				mlx5_mr_update_mp(dev, &rxq_ctrl->rxq.mr_ctrl,
 						  rxq_ctrl->rxq.mprq_mp);
 			} else {
*/ mlx5_mr_update_mp(dev, &rxq_ctrl->rxq.mr_ctrl, rxq_ctrl->rxq.mprq_mp); } else { From patchwork Sun Sep 26 11:18:58 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xueming Li X-Patchwork-Id: 99688 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 84FD2A0547; Sun, 26 Sep 2021 13:20:03 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 75085410E4; Sun, 26 Sep 2021 13:20:03 +0200 (CEST) Received: from NAM12-MW2-obe.outbound.protection.outlook.com (mail-mw2nam12on2050.outbound.protection.outlook.com [40.107.244.50]) by mails.dpdk.org (Postfix) with ESMTP id 69E1340F35 for ; Sun, 26 Sep 2021 13:20:02 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=QADRp0OwvM3k4X0sdBrxc0BUKliXL/ear7w6PWNlOkErGLWtGk9r8bytWfbSjQUClulN8XYJ0uhrvdXW7/yC/1FENvRPa+S40alA0mN3XMTzRKuLN4delzb6TajYWqGnB13VmB8O0xLJ+w9ICHtCnL3qbTHwPZqIohRVNrUcGygE5nXxD6reEjDmq6zkSbWKvS9+PKCr56spfX3/ihIecn/qZ/a2p8moCCaZuzFCFhVTPmqRGD9+B4rKw/gbOn4I0u+QBDJtmeLHUfmEb92FKYM2PDMfYdGxt1zSZ+GrDIL1xSeHwgikBE9bXyQcLc9HN8NbwBmTE5x54tSGb4aAXQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version; bh=nkMenBUxzD0NxwYx1j0dI+EQtE1rfnAnAgg92RWHaNc=; b=jwl7SpzUWA6yjt4T0OfczlejRmTJIgDzHt7qggW7ohXjjc30TI5Q7UcTfYAQU6ssawVG8cJeUz4zW6kyuNkR5Ok5/1QTzAlHV97yPV981ji4RLlo+Nf81z6yTWwsxoP8WMtc3NeJmLUA8ZgyUv6vw70YWagn7Of1wCaYLDB8VdXi5wIbd8MG4MlgT6L7eZvJ7OUkyXSHW+hV+Li4mFWi8LoWPJWk74UtOVGC6uJpptVmdf7PASO3yzX2uE50N9zESRs2yJ1oUO3odzw/VgpZci8SC0zQwEFDuX8J7VEgwzWtspOYtVI2JIEuBMKrd2r0kJP9lJvrQGCy3vCjGU62oQ== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 216.228.112.34) smtp.rcpttodomain=dpdk.org smtp.mailfrom=nvidia.com; dmarc=pass (p=quarantine sp=none pct=100) action=none header.from=nvidia.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=nkMenBUxzD0NxwYx1j0dI+EQtE1rfnAnAgg92RWHaNc=; b=rMAVFL65eAUCF73Zv9ZwTEzILAfuxEaYwFyb1OT6j50O36paEze9ciuYqFNhsKhd40iJfet2O/ngh/wmJjqVx1sUqhnG1gn7Z2R4Xak8US6XODdQcrQgT/y5tVSzgyjeOxChY+ecFjnvJcnOGr3IjH0HAbUUYWYchqyXyz5f4Nknm0k/lg21UdWC4xltH9iMSIsM/2ZHGlgpPaGvcpChlMfUMQfP00ImpXldhutILo1VnPilrzN//Q+NMI9Ucl2iSjEnddOBP3nWVxFsxzbw8CAwGVvv5AK5YzhN0TXcgfxeLJi6q26rUmE4T9KAsd4WgZoQxCyKXx4J3nri7Uxxrw== Received: from BN9PR03CA0095.namprd03.prod.outlook.com (2603:10b6:408:fd::10) by DM5PR1201MB2504.namprd12.prod.outlook.com (2603:10b6:3:e3::22) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4544.19; Sun, 26 Sep 2021 11:20:00 +0000 Received: from BN8NAM11FT009.eop-nam11.prod.protection.outlook.com (2603:10b6:408:fd:cafe::a2) by BN9PR03CA0095.outlook.office365.com (2603:10b6:408:fd::10) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4544.13 via Frontend Transport; Sun, 26 Sep 2021 11:20:00 +0000 X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 216.228.112.34) smtp.mailfrom=nvidia.com; dpdk.org; dkim=none (message not signed) header.d=none;dpdk.org; dmarc=pass action=none 
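The change above makes each Rx queue size and create its own Multi-Packet RQ
mempool instead of sharing one pool across all queues, which is why the pool
can now be created with MEMPOOL_F_SC_GET (a single consumer) and why the
cache headroom is added once rather than once per queue. A minimal sketch of
the per-queue sizing rule follows; the cache constant and the standalone
program around it are illustrative stand-ins, not the driver's code:

#include <stdio.h>

#define MP_CACHE_SZ 32 /* stand-in value, not MLX5_MPRQ_MP_CACHE_SZ */

/* Per-queue object count as computed in mlx5_mprq_alloc_mp(): 4x the
 * descriptor count to cover buffers still held by the application, plus
 * one cache's worth, since a single queue is now the pool's only consumer. */
static unsigned int
mprq_obj_num(unsigned int elts_n)
{
	unsigned int desc = 1u << elts_n; /* descriptors configured */

	desc *= 4;
	return desc + MP_CACHE_SZ;
}

int
main(void)
{
	/* e.g. elts_n = 10 -> 1024 descriptors -> 4128 mempool objects */
	printf("1024 descriptors -> %u objects\n", mprq_obj_num(10));
	return 0;
}
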
From patchwork Sun Sep 26 11:18:58 2021
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 99688
X-Patchwork-Delegate: rasland@nvidia.com
From: Xueming Li
To: dev@dpdk.org
CC: Lior Margalit, Matan Azrad, Viacheslav Ovsiienko
Date: Sun, 26 Sep 2021 19:18:58 +0800
Message-ID: <20210926111904.237736-6-xuemingl@nvidia.com>
In-Reply-To: <20210926111904.237736-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH 05/11] net/mlx5: split Rx queue

To prepare for shared Rx queues, split the Rx queue data into shareable
and private parts: struct mlx5_rxq_priv holds the per-queue private data,
while struct mlx5_rxq_ctrl holds the shareable queue resources and data.

Signed-off-by: Xueming Li
---
 drivers/net/mlx5/mlx5.c        |  4 +++
 drivers/net/mlx5/mlx5.h        |  5 ++-
 drivers/net/mlx5/mlx5_ethdev.c | 10 ++++++
 drivers/net/mlx5/mlx5_rx.h     | 15 ++++++--
 drivers/net/mlx5/mlx5_rxq.c    | 66 ++++++++++++++++++++++++++++------
 5 files changed, 86 insertions(+), 14 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 3abb8c97e76..749729d6fbe 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1585,6 +1585,10 @@ mlx5_dev_close(struct rte_eth_dev *dev)
 		mlx5_free(dev->intr_handle);
 		dev->intr_handle = NULL;
 	}
+	if (priv->rxq_privs != NULL) {
+		mlx5_free(priv->rxq_privs);
+		priv->rxq_privs = NULL;
+	}
 	if (priv->txqs != NULL) {
 		/* XXX race condition if mlx5_tx_burst() is still running. */
 		rte_delay_us_sleep(1000);
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index e02714e2319..d06f828ed33 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1335,6 +1335,8 @@ enum mlx5_txq_modify_type {
 	MLX5_TXQ_MOD_ERR2RDY, /* modify state from error to ready. */
 };

+struct mlx5_rxq_priv;
+
 /* HW objects operations structure. */
 struct mlx5_obj_ops {
 	int (*rxq_obj_modify_vlan_strip)(struct mlx5_rxq_obj *rxq_obj, int on);
@@ -1404,7 +1406,8 @@ struct mlx5_priv {
 	/* RX/TX queues. */
 	unsigned int rxqs_n; /* RX queues array size. */
 	unsigned int txqs_n; /* TX queues array size. */
-	struct mlx5_rxq_data *(*rxqs)[]; /* RX queues. */
+	struct mlx5_rxq_priv *(*rxq_privs)[]; /* RX queue non-shared data. */
+	struct mlx5_rxq_data *(*rxqs)[]; /* (Shared) RX queues. */
 	struct mlx5_txq_data *(*txqs)[]; /* TX queues. */
 	struct rte_mempool *mprq_mp; /* Mempool for Multi-Packet RQ. */
 	struct rte_eth_rss_conf rss_conf; /* RSS configuration. */
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 82e2284d986..7071a5f7039 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -104,6 +104,16 @@ mlx5_dev_configure(struct rte_eth_dev *dev)
 	       MLX5_RSS_HASH_KEY_LEN);
 	priv->rss_conf.rss_key_len = MLX5_RSS_HASH_KEY_LEN;
 	priv->rss_conf.rss_hf = dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
+	priv->rxq_privs = mlx5_realloc(priv->rxq_privs,
+				       MLX5_MEM_ANY | MLX5_MEM_ZERO,
+				       sizeof(void *) * rxqs_n, 0,
+				       SOCKET_ID_ANY);
+	if (priv->rxq_privs == NULL) {
+		DRV_LOG(ERR, "port %u cannot allocate rxq private data",
+			dev->data->port_id);
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
 	priv->rxqs = (void *)dev->data->rx_queues;
 	priv->txqs = (void *)dev->data->tx_queues;
 	if (txqs_n != priv->txqs_n) {
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index a8e0c3162b0..db6252e8e86 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -161,7 +161,9 @@ struct mlx5_rxq_ctrl {
 	struct mlx5_rxq_data rxq; /* Data path structure. */
 	LIST_ENTRY(mlx5_rxq_ctrl) next; /* Pointer to the next element. */
 	uint32_t refcnt; /* Reference counter.
*/ + LIST_HEAD(priv, mlx5_rxq_priv) owners; /* Owner rxq list. */ struct mlx5_rxq_obj *obj; /* Verbs/DevX elements. */ + struct mlx5_dev_ctx_shared *sh; /* Shared context. */ struct mlx5_priv *priv; /* Back pointer to private data. */ enum mlx5_rxq_type type; /* Rxq type. */ unsigned int socket; /* CPU socket ID for allocations. */ @@ -174,6 +176,14 @@ struct mlx5_rxq_ctrl { uint32_t hairpin_status; /* Hairpin binding status. */ }; +/* RX queue private data. */ +struct mlx5_rxq_priv { + uint16_t idx; /* Queue index. */ + struct mlx5_rxq_ctrl *ctrl; /* Shared Rx Queue. */ + LIST_ENTRY(mlx5_rxq_priv) owner_entry; /* Entry in shared rxq_ctrl. */ + struct mlx5_priv *priv; /* Back pointer to private data. */ +}; + /* mlx5_rxq.c */ extern uint8_t rss_hash_default_key[]; @@ -197,13 +207,14 @@ void mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev); int mlx5_rx_intr_enable(struct rte_eth_dev *dev, uint16_t rx_queue_id); int mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id); int mlx5_rxq_obj_verify(struct rte_eth_dev *dev); -struct mlx5_rxq_ctrl *mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, +struct mlx5_rxq_ctrl *mlx5_rxq_new(struct rte_eth_dev *dev, + struct mlx5_rxq_priv *rxq, uint16_t desc, unsigned int socket, const struct rte_eth_rxconf *conf, const struct rte_eth_rxseg_split *rx_seg, uint16_t n_seg); struct mlx5_rxq_ctrl *mlx5_rxq_hairpin_new - (struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, + (struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq, uint16_t desc, const struct rte_eth_hairpin_conf *hairpin_conf); struct mlx5_rxq_ctrl *mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx); int mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx); diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index 14de8d0e6a4..70e73690aa7 100644 --- a/drivers/net/mlx5/mlx5_rxq.c +++ b/drivers/net/mlx5/mlx5_rxq.c @@ -674,6 +674,7 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, struct rte_mempool *mp) { struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_rxq_priv *rxq; struct mlx5_rxq_ctrl *rxq_ctrl; struct rte_eth_rxseg_split *rx_seg = (struct rte_eth_rxseg_split *)conf->rx_seg; @@ -708,10 +709,23 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, res = mlx5_rx_queue_pre_setup(dev, idx, &desc); if (res) return res; - rxq_ctrl = mlx5_rxq_new(dev, idx, desc, socket, conf, rx_seg, n_seg); + rxq = mlx5_malloc(MLX5_MEM_ANY | MLX5_MEM_ZERO, sizeof(*rxq), 0, + SOCKET_ID_ANY); + if (!rxq) { + DRV_LOG(ERR, "port %u unable to allocate rx queue index %u private data", + dev->data->port_id, idx); + rte_errno = ENOMEM; + return -rte_errno; + } + rxq->priv = priv; + rxq->idx = idx; + (*priv->rxq_privs)[idx] = rxq; + rxq_ctrl = mlx5_rxq_new(dev, rxq, desc, socket, conf, rx_seg, n_seg); if (!rxq_ctrl) { - DRV_LOG(ERR, "port %u unable to allocate queue index %u", + DRV_LOG(ERR, "port %u unable to allocate rx queue index %u", dev->data->port_id, idx); + mlx5_free(rxq); + (*priv->rxq_privs)[idx] = NULL; rte_errno = ENOMEM; return -rte_errno; } @@ -741,6 +755,7 @@ mlx5_rx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t idx, const struct rte_eth_hairpin_conf *hairpin_conf) { struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_rxq_priv *rxq; struct mlx5_rxq_ctrl *rxq_ctrl; int res; @@ -776,14 +791,27 @@ mlx5_rx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t idx, return -rte_errno; } } - rxq_ctrl = mlx5_rxq_hairpin_new(dev, idx, desc, hairpin_conf); + rxq = mlx5_malloc(MLX5_MEM_ANY | 
MLX5_MEM_ZERO, sizeof(*rxq), 0, + SOCKET_ID_ANY); + if (!rxq) { + DRV_LOG(ERR, "port %u unable to allocate hairpin rx queue index %u private data", + dev->data->port_id, idx); + rte_errno = ENOMEM; + return -rte_errno; + } + rxq->priv = priv; + rxq->idx = idx; + (*priv->rxq_privs)[idx] = rxq; + rxq_ctrl = mlx5_rxq_hairpin_new(dev, rxq, desc, hairpin_conf); if (!rxq_ctrl) { - DRV_LOG(ERR, "port %u unable to allocate queue index %u", + DRV_LOG(ERR, "port %u unable to allocate hairpin queue index %u", dev->data->port_id, idx); + mlx5_free(rxq); + (*priv->rxq_privs)[idx] = NULL; rte_errno = ENOMEM; return -rte_errno; } - DRV_LOG(DEBUG, "port %u adding Rx queue %u to list", + DRV_LOG(DEBUG, "port %u adding hairpin Rx queue %u to list", dev->data->port_id, idx); (*priv->rxqs)[idx] = &rxq_ctrl->rxq; return 0; @@ -1274,8 +1302,8 @@ mlx5_max_lro_msg_size_adjust(struct rte_eth_dev *dev, uint16_t idx, * * @param dev * Pointer to Ethernet device. - * @param idx - * RX queue index. + * @param rxq + * RX queue private data. * @param desc * Number of descriptors to configure in queue. * @param socket @@ -1285,10 +1313,12 @@ mlx5_max_lro_msg_size_adjust(struct rte_eth_dev *dev, uint16_t idx, * A DPDK queue object on success, NULL otherwise and rte_errno is set. */ struct mlx5_rxq_ctrl * -mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, +mlx5_rxq_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq, + uint16_t desc, unsigned int socket, const struct rte_eth_rxconf *conf, const struct rte_eth_rxseg_split *rx_seg, uint16_t n_seg) { + uint16_t idx = rxq->idx; struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_rxq_ctrl *tmpl; unsigned int mb_len = rte_pktmbuf_data_room_size(rx_seg[0].mp); @@ -1331,6 +1361,9 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, rte_errno = ENOMEM; return NULL; } + LIST_INIT(&tmpl->owners); + rxq->ctrl = tmpl; + LIST_INSERT_HEAD(&tmpl->owners, rxq, owner_entry); MLX5_ASSERT(n_seg && n_seg <= MLX5_MAX_RXQ_NSEG); /* * Build the array of actual buffer offsets and lengths. @@ -1564,6 +1597,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, tmpl->rxq.rss_hash = !!priv->rss_conf.rss_hf && (!!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS)); tmpl->rxq.port_id = dev->data->port_id; + tmpl->sh = priv->sh; tmpl->priv = priv; tmpl->rxq.mp = rx_seg[0].mp; tmpl->rxq.elts_n = log2above(desc); @@ -1591,8 +1625,8 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, * * @param dev * Pointer to Ethernet device. - * @param idx - * RX queue index. + * @param rxq + * RX queue. * @param desc * Number of descriptors to configure in queue. * @param hairpin_conf @@ -1602,9 +1636,11 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, * A DPDK queue object on success, NULL otherwise and rte_errno is set. 
 */
 struct mlx5_rxq_ctrl *
-mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
+mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
+		     uint16_t desc,
 		     const struct rte_eth_hairpin_conf *hairpin_conf)
 {
+	uint16_t idx = rxq->idx;
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_rxq_ctrl *tmpl;

@@ -1614,10 +1650,14 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		rte_errno = ENOMEM;
 		return NULL;
 	}
+	LIST_INIT(&tmpl->owners);
+	rxq->ctrl = tmpl;
+	LIST_INSERT_HEAD(&tmpl->owners, rxq, owner_entry);
 	tmpl->type = MLX5_RXQ_TYPE_HAIRPIN;
 	tmpl->socket = SOCKET_ID_ANY;
 	tmpl->rxq.rss_hash = 0;
 	tmpl->rxq.port_id = dev->data->port_id;
+	tmpl->sh = priv->sh;
 	tmpl->priv = priv;
 	tmpl->rxq.mp = NULL;
 	tmpl->rxq.elts_n = log2above(desc);
@@ -1671,6 +1711,7 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_rxq_ctrl *rxq_ctrl;
+	struct mlx5_rxq_priv *rxq = (*priv->rxq_privs)[idx];

 	if (priv->rxqs == NULL || (*priv->rxqs)[idx] == NULL)
 		return 0;
@@ -1692,9 +1733,12 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 			mlx5_mr_btree_free(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
 			mlx5_mprq_free_mp(dev, rxq_ctrl);
 		}
+		LIST_REMOVE(rxq, owner_entry);
 		LIST_REMOVE(rxq_ctrl, next);
 		mlx5_free(rxq_ctrl);
 		(*priv->rxqs)[idx] = NULL;
+		mlx5_free(rxq);
+		(*priv->rxq_privs)[idx] = NULL;
 	}
 	return 0;
 }
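The split introduced here follows a simple owner/ownee pattern: each
per-queue private structure links itself onto the shareable control
structure's owners list and keeps a back pointer to it. A reduced sketch,
with the field sets trimmed to what the pattern needs (names mirror the
patch; everything else is a stand-in):

#include <sys/queue.h>
#include <stdint.h>

struct rxq_priv;

/* Shareable part: one instance may eventually serve several queues. */
struct rxq_ctrl {
	LIST_HEAD(rxq_owners, rxq_priv) owners; /* queues using this ctrl */
};

/* Per-queue part: never shared. */
struct rxq_priv {
	uint16_t idx;                     /* ethdev queue index */
	struct rxq_ctrl *ctrl;            /* back pointer to shared part */
	LIST_ENTRY(rxq_priv) owner_entry; /* linkage in ctrl->owners */
};

static void
rxq_ctrl_init(struct rxq_ctrl *ctrl)
{
	LIST_INIT(&ctrl->owners);
}

/* Mirrors the LIST_INSERT_HEAD() calls in mlx5_rxq_new() and
 * mlx5_rxq_hairpin_new() above. */
static void
rxq_attach(struct rxq_ctrl *ctrl, struct rxq_priv *rxq, uint16_t idx)
{
	rxq->idx = idx;
	rxq->ctrl = ctrl;
	LIST_INSERT_HEAD(&ctrl->owners, rxq, owner_entry);
}
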
From patchwork Sun Sep 26 11:18:59 2021
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 99690
X-Patchwork-Delegate: rasland@nvidia.com
From: Xueming Li
To: dev@dpdk.org
CC: Lior Margalit, Matan Azrad, Viacheslav Ovsiienko
Date: Sun, 26 Sep 2021 19:18:59 +0800
Message-ID: <20210926111904.237736-7-xuemingl@nvidia.com>
In-Reply-To: <20210926111904.237736-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH 06/11] net/mlx5: move Rx queue reference count

The Rx queue reference count tracks the users of an RQ and is used by the
RQ table. To prepare for shared Rx queues, move it from rxq_ctrl to the
Rx queue private data.

Signed-off-by: Xueming Li
---
 drivers/net/mlx5/mlx5_rx.h      |   8 +-
 drivers/net/mlx5/mlx5_rxq.c     | 173 +++++++++++++++++++++-----------
 drivers/net/mlx5/mlx5_trigger.c |  57 +++++------
 3 files changed, 144 insertions(+), 94 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index db6252e8e86..fe19414c130 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -160,7 +160,6 @@ enum mlx5_rxq_type {
 struct mlx5_rxq_ctrl {
 	struct mlx5_rxq_data rxq; /* Data path structure. */
 	LIST_ENTRY(mlx5_rxq_ctrl) next; /* Pointer to the next element. */
-	uint32_t refcnt; /* Reference counter. */
 	LIST_HEAD(priv, mlx5_rxq_priv) owners; /* Owner rxq list. */
 	struct mlx5_rxq_obj *obj; /* Verbs/DevX elements. */
 	struct mlx5_dev_ctx_shared *sh; /* Shared context. */
@@ -179,6 +178,7 @@ struct mlx5_rxq_ctrl {
 /* RX queue private data. */
 struct mlx5_rxq_priv {
 	uint16_t idx; /* Queue index. */
+	uint32_t refcnt; /* Reference counter. */
 	struct mlx5_rxq_ctrl *ctrl; /* Shared Rx Queue. */
 	LIST_ENTRY(mlx5_rxq_priv) owner_entry; /* Entry in shared rxq_ctrl. */
 	struct mlx5_priv *priv; /* Back pointer to private data.
*/ @@ -216,7 +216,11 @@ struct mlx5_rxq_ctrl *mlx5_rxq_new(struct rte_eth_dev *dev, struct mlx5_rxq_ctrl *mlx5_rxq_hairpin_new (struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq, uint16_t desc, const struct rte_eth_hairpin_conf *hairpin_conf); -struct mlx5_rxq_ctrl *mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx); +struct mlx5_rxq_priv *mlx5_rxq_ref(struct rte_eth_dev *dev, uint16_t idx); +uint32_t mlx5_rxq_deref(struct rte_eth_dev *dev, uint16_t idx); +struct mlx5_rxq_priv *mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx); +struct mlx5_rxq_ctrl *mlx5_rxq_ctrl_get(struct rte_eth_dev *dev, uint16_t idx); +struct mlx5_rxq_data *mlx5_rxq_data_get(struct rte_eth_dev *dev, uint16_t idx); int mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx); int mlx5_rxq_verify(struct rte_eth_dev *dev); int rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl); diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index 70e73690aa7..7f28646f55c 100644 --- a/drivers/net/mlx5/mlx5_rxq.c +++ b/drivers/net/mlx5/mlx5_rxq.c @@ -386,15 +386,13 @@ mlx5_get_rx_port_offloads(void) static int mlx5_rxq_releasable(struct rte_eth_dev *dev, uint16_t idx) { - struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_ctrl *rxq_ctrl; + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx); - if (!(*priv->rxqs)[idx]) { + if (rxq == NULL) { rte_errno = EINVAL; return -rte_errno; } - rxq_ctrl = container_of((*priv->rxqs)[idx], struct mlx5_rxq_ctrl, rxq); - return (__atomic_load_n(&rxq_ctrl->refcnt, __ATOMIC_RELAXED) == 1); + return (__atomic_load_n(&rxq->refcnt, __ATOMIC_RELAXED) == 1); } /* Fetches and drops all SW-owned and error CQEs to synchronize CQ. */ @@ -874,8 +872,8 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev) intr_handle->type = RTE_INTR_HANDLE_EXT; for (i = 0; i != n; ++i) { /* This rxq obj must not be released in this function. */ - struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_get(dev, i); - struct mlx5_rxq_obj *rxq_obj = rxq_ctrl ? rxq_ctrl->obj : NULL; + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, i); + struct mlx5_rxq_obj *rxq_obj = rxq ? rxq->ctrl->obj : NULL; int rc; /* Skip queues that cannot request interrupts. */ @@ -885,11 +883,9 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev) intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID; - /* Decrease the rxq_ctrl's refcnt */ - if (rxq_ctrl) - mlx5_rxq_release(dev, i); continue; } + mlx5_rxq_ref(dev, i); if (count >= RTE_MAX_RXTX_INTR_VEC_ID) { DRV_LOG(ERR, "port %u too many Rx queues for interrupt" @@ -949,7 +945,7 @@ mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev) * Need to access directly the queue to release the reference * kept in mlx5_rx_intr_vec_enable(). 
*/ - mlx5_rxq_release(dev, i); + mlx5_rxq_deref(dev, i); } free: rte_intr_free_epoll_fd(intr_handle); @@ -998,19 +994,14 @@ mlx5_arm_cq(struct mlx5_rxq_data *rxq, int sq_n_rxq) int mlx5_rx_intr_enable(struct rte_eth_dev *dev, uint16_t rx_queue_id) { - struct mlx5_rxq_ctrl *rxq_ctrl; - - rxq_ctrl = mlx5_rxq_get(dev, rx_queue_id); - if (!rxq_ctrl) + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, rx_queue_id); + if (!rxq) goto error; - if (rxq_ctrl->irq) { - if (!rxq_ctrl->obj) { - mlx5_rxq_release(dev, rx_queue_id); + if (rxq->ctrl->irq) { + if (!rxq->ctrl->obj) goto error; - } - mlx5_arm_cq(&rxq_ctrl->rxq, rxq_ctrl->rxq.cq_arm_sn); + mlx5_arm_cq(&rxq->ctrl->rxq, rxq->ctrl->rxq.cq_arm_sn); } - mlx5_rxq_release(dev, rx_queue_id); return 0; error: rte_errno = EINVAL; @@ -1032,23 +1023,21 @@ int mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_ctrl *rxq_ctrl; + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, rx_queue_id); int ret = 0; - rxq_ctrl = mlx5_rxq_get(dev, rx_queue_id); - if (!rxq_ctrl) { + if (!rxq) { rte_errno = EINVAL; return -rte_errno; } - if (!rxq_ctrl->obj) + if (!rxq->ctrl->obj) goto error; - if (rxq_ctrl->irq) { - ret = priv->obj_ops.rxq_event_get(rxq_ctrl->obj); + if (rxq->ctrl->irq) { + ret = priv->obj_ops.rxq_event_get(rxq->ctrl->obj); if (ret < 0) goto error; - rxq_ctrl->rxq.cq_arm_sn++; + rxq->ctrl->rxq.cq_arm_sn++; } - mlx5_rxq_release(dev, rx_queue_id); return 0; error: /** @@ -1059,12 +1048,9 @@ mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id) rte_errno = errno; else rte_errno = EINVAL; - ret = rte_errno; /* Save rte_errno before cleanup. */ - mlx5_rxq_release(dev, rx_queue_id); - if (ret != EAGAIN) + if (rte_errno != EAGAIN) DRV_LOG(WARNING, "port %u unable to disable interrupt on Rx queue %d", dev->data->port_id, rx_queue_id); - rte_errno = ret; /* Restore rte_errno. */ return -rte_errno; } @@ -1611,7 +1597,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq, tmpl->rxq.uar_lock_cq = &priv->sh->uar_lock_cq; #endif tmpl->rxq.idx = idx; - __atomic_fetch_add(&tmpl->refcnt, 1, __ATOMIC_RELAXED); + mlx5_rxq_ref(dev, idx); LIST_INSERT_HEAD(&priv->rxqsctrl, tmpl, next); return tmpl; error: @@ -1665,11 +1651,53 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq, tmpl->rxq.mr_ctrl.cache_bh = (struct mlx5_mr_btree) { 0 }; tmpl->hairpin_conf = *hairpin_conf; tmpl->rxq.idx = idx; - __atomic_fetch_add(&tmpl->refcnt, 1, __ATOMIC_RELAXED); + mlx5_rxq_ref(dev, idx); LIST_INSERT_HEAD(&priv->rxqsctrl, tmpl, next); return tmpl; } +/** + * Increase Rx queue reference count. + * + * @param dev + * Pointer to Ethernet device. + * @param idx + * RX queue index. + * + * @return + * A pointer to the queue if it exists, NULL otherwise. + */ +inline struct mlx5_rxq_priv * +mlx5_rxq_ref(struct rte_eth_dev *dev, uint16_t idx) +{ + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx); + + if (rxq != NULL) + __atomic_fetch_add(&rxq->refcnt, 1, __ATOMIC_RELAXED); + return rxq; +} + +/** + * Dereference a Rx queue. + * + * @param dev + * Pointer to Ethernet device. + * @param idx + * RX queue index. + * + * @return + * Updated reference count. + */ +inline uint32_t +mlx5_rxq_deref(struct rte_eth_dev *dev, uint16_t idx) +{ + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx); + + if (rxq == NULL) + return 0; + return __atomic_sub_fetch(&rxq->refcnt, 1, __ATOMIC_RELAXED); +} + /** * Get a Rx queue. 
* @@ -1681,18 +1709,52 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq, * @return * A pointer to the queue if it exists, NULL otherwise. */ -struct mlx5_rxq_ctrl * +inline struct mlx5_rxq_priv * mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx]; - struct mlx5_rxq_ctrl *rxq_ctrl = NULL; - if (rxq_data) { - rxq_ctrl = container_of(rxq_data, struct mlx5_rxq_ctrl, rxq); - __atomic_fetch_add(&rxq_ctrl->refcnt, 1, __ATOMIC_RELAXED); - } - return rxq_ctrl; + if (priv->rxq_privs == NULL) + return NULL; + return (*priv->rxq_privs)[idx]; +} + +/** + * Get Rx queue shareable control. + * + * @param dev + * Pointer to Ethernet device. + * @param idx + * RX queue index. + * + * @return + * A pointer to the queue control if it exists, NULL otherwise. + */ +inline struct mlx5_rxq_ctrl * +mlx5_rxq_ctrl_get(struct rte_eth_dev *dev, uint16_t idx) +{ + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx); + + return rxq == NULL ? NULL : rxq->ctrl; +} + +/** + * Get Rx queue shareable data. + * + * @param dev + * Pointer to Ethernet device. + * @param idx + * RX queue index. + * + * @return + * A pointer to the queue data if it exists, NULL otherwise. + */ +inline struct mlx5_rxq_data * +mlx5_rxq_data_get(struct rte_eth_dev *dev, uint16_t idx) +{ + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx); + + return rxq == NULL ? NULL : &rxq->ctrl->rxq; } /** @@ -1710,13 +1772,12 @@ int mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_ctrl *rxq_ctrl; - struct mlx5_rxq_priv *rxq = (*priv->rxq_privs)[idx]; + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx); + struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl; if (priv->rxqs == NULL || (*priv->rxqs)[idx] == NULL) return 0; - rxq_ctrl = container_of((*priv->rxqs)[idx], struct mlx5_rxq_ctrl, rxq); - if (__atomic_sub_fetch(&rxq_ctrl->refcnt, 1, __ATOMIC_RELAXED) > 1) + if (mlx5_rxq_deref(dev, idx) > 1) return 1; if (rxq_ctrl->obj) { priv->obj_ops.rxq_obj_release(rxq_ctrl->obj); @@ -1728,7 +1789,7 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx) rxq_free_elts(rxq_ctrl); dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STOPPED; } - if (!__atomic_load_n(&rxq_ctrl->refcnt, __ATOMIC_RELAXED)) { + if (!__atomic_load_n(&rxq->refcnt, __ATOMIC_RELAXED)) { if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) { mlx5_mr_btree_free(&rxq_ctrl->rxq.mr_ctrl.cache_bh); mlx5_mprq_free_mp(dev, rxq_ctrl); @@ -1908,7 +1969,7 @@ mlx5_ind_table_obj_release(struct rte_eth_dev *dev, return 1; priv->obj_ops.ind_table_destroy(ind_tbl); for (i = 0; i != ind_tbl->queues_n; ++i) - claim_nonzero(mlx5_rxq_release(dev, ind_tbl->queues[i])); + claim_nonzero(mlx5_rxq_deref(dev, ind_tbl->queues[i])); mlx5_free(ind_tbl); return 0; } @@ -1965,7 +2026,7 @@ mlx5_ind_table_obj_setup(struct rte_eth_dev *dev, log2above(priv->config.ind_table_max_size); for (i = 0; i != queues_n; ++i) { - if (!mlx5_rxq_get(dev, queues[i])) { + if (mlx5_rxq_ref(dev, queues[i]) == NULL) { ret = -rte_errno; goto error; } @@ -1978,7 +2039,7 @@ mlx5_ind_table_obj_setup(struct rte_eth_dev *dev, error: err = rte_errno; for (j = 0; j < i; j++) - mlx5_rxq_release(dev, ind_tbl->queues[j]); + mlx5_rxq_deref(dev, ind_tbl->queues[j]); rte_errno = err; DRV_LOG(DEBUG, "Port %u cannot setup indirection table.", dev->data->port_id); @@ -2074,7 +2135,7 @@ mlx5_ind_table_obj_modify(struct rte_eth_dev *dev, bool standalone) { struct mlx5_priv 
*priv = dev->data->dev_private; - unsigned int i, j; + unsigned int i; int ret = 0, err; const unsigned int n = rte_is_power_of_2(queues_n) ? log2above(queues_n) : @@ -2094,15 +2155,11 @@ mlx5_ind_table_obj_modify(struct rte_eth_dev *dev, ret = priv->obj_ops.ind_table_modify(dev, n, queues, queues_n, ind_tbl); if (ret) goto error; - for (j = 0; j < ind_tbl->queues_n; j++) - mlx5_rxq_release(dev, ind_tbl->queues[j]); ind_tbl->queues_n = queues_n; ind_tbl->queues = queues; return 0; error: err = rte_errno; - for (j = 0; j < i; j++) - mlx5_rxq_release(dev, queues[j]); rte_errno = err; DRV_LOG(DEBUG, "Port %u cannot setup indirection table.", dev->data->port_id); @@ -2135,7 +2192,7 @@ mlx5_ind_table_obj_attach(struct rte_eth_dev *dev, return ret; } for (i = 0; i < ind_tbl->queues_n; i++) - mlx5_rxq_get(dev, ind_tbl->queues[i]); + mlx5_rxq_ref(dev, ind_tbl->queues[i]); return 0; } @@ -2172,7 +2229,7 @@ mlx5_ind_table_obj_detach(struct rte_eth_dev *dev, return ret; } for (i = 0; i < ind_tbl->queues_n; i++) - mlx5_rxq_release(dev, ind_tbl->queues[i]); + mlx5_rxq_deref(dev, ind_tbl->queues[i]); return ret; } diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c index 0753dbad053..a49254c96f6 100644 --- a/drivers/net/mlx5/mlx5_trigger.c +++ b/drivers/net/mlx5/mlx5_trigger.c @@ -143,10 +143,12 @@ mlx5_rxq_start(struct rte_eth_dev *dev) DRV_LOG(DEBUG, "Port %u device_attr.max_sge is %d.", dev->data->port_id, priv->sh->device_attr.max_sge); for (i = 0; i != priv->rxqs_n; ++i) { - struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_get(dev, i); + struct mlx5_rxq_priv *rxq = mlx5_rxq_ref(dev, i); + struct mlx5_rxq_ctrl *rxq_ctrl; - if (!rxq_ctrl) + if (rxq == NULL) continue; + rxq_ctrl = rxq->ctrl; if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) { if (mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq)) { /* Allocate/reuse/resize mempool for MPRQ. */ @@ -215,6 +217,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev) struct mlx5_devx_modify_sq_attr sq_attr = { 0 }; struct mlx5_devx_modify_rq_attr rq_attr = { 0 }; struct mlx5_txq_ctrl *txq_ctrl; + struct mlx5_rxq_priv *rxq; struct mlx5_rxq_ctrl *rxq_ctrl; struct mlx5_devx_obj *sq; struct mlx5_devx_obj *rq; @@ -259,9 +262,8 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev) return -rte_errno; } sq = txq_ctrl->obj->sq; - rxq_ctrl = mlx5_rxq_get(dev, - txq_ctrl->hairpin_conf.peers[0].queue); - if (!rxq_ctrl) { + rxq = mlx5_rxq_get(dev, txq_ctrl->hairpin_conf.peers[0].queue); + if (rxq == NULL) { mlx5_txq_release(dev, i); rte_errno = EINVAL; DRV_LOG(ERR, "port %u no rxq object found: %d", @@ -269,6 +271,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev) txq_ctrl->hairpin_conf.peers[0].queue); return -rte_errno; } + rxq_ctrl = rxq->ctrl; if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN || rxq_ctrl->hairpin_conf.peers[0].queue != i) { rte_errno = ENOMEM; @@ -303,12 +306,10 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev) rxq_ctrl->hairpin_status = 1; txq_ctrl->hairpin_status = 1; mlx5_txq_release(dev, i); - mlx5_rxq_release(dev, txq_ctrl->hairpin_conf.peers[0].queue); } return 0; error: mlx5_txq_release(dev, i); - mlx5_rxq_release(dev, txq_ctrl->hairpin_conf.peers[0].queue); return -rte_errno; } @@ -381,27 +382,26 @@ mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue, peer_info->manual_bind = txq_ctrl->hairpin_conf.manual_bind; mlx5_txq_release(dev, peer_queue); } else { /* Peer port used as ingress. 
*/ + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, peer_queue); struct mlx5_rxq_ctrl *rxq_ctrl; - rxq_ctrl = mlx5_rxq_get(dev, peer_queue); - if (rxq_ctrl == NULL) { + if (rxq == NULL) { rte_errno = EINVAL; DRV_LOG(ERR, "Failed to get port %u Rx queue %d", dev->data->port_id, peer_queue); return -rte_errno; } + rxq_ctrl = rxq->ctrl; if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) { rte_errno = EINVAL; DRV_LOG(ERR, "port %u queue %d is not a hairpin Rxq", dev->data->port_id, peer_queue); - mlx5_rxq_release(dev, peer_queue); return -rte_errno; } if (rxq_ctrl->obj == NULL || rxq_ctrl->obj->rq == NULL) { rte_errno = ENOMEM; DRV_LOG(ERR, "port %u no Rxq object found: %d", dev->data->port_id, peer_queue); - mlx5_rxq_release(dev, peer_queue); return -rte_errno; } peer_info->qp_id = rxq_ctrl->obj->rq->id; @@ -409,7 +409,6 @@ mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue, peer_info->peer_q = rxq_ctrl->hairpin_conf.peers[0].queue; peer_info->tx_explicit = rxq_ctrl->hairpin_conf.tx_explicit; peer_info->manual_bind = rxq_ctrl->hairpin_conf.manual_bind; - mlx5_rxq_release(dev, peer_queue); } return 0; } @@ -508,34 +507,32 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue, txq_ctrl->hairpin_status = 1; mlx5_txq_release(dev, cur_queue); } else { + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, cur_queue); struct mlx5_rxq_ctrl *rxq_ctrl; struct mlx5_devx_modify_rq_attr rq_attr = { 0 }; - rxq_ctrl = mlx5_rxq_get(dev, cur_queue); - if (rxq_ctrl == NULL) { + if (rxq == NULL) { rte_errno = EINVAL; DRV_LOG(ERR, "Failed to get port %u Rx queue %d", dev->data->port_id, cur_queue); return -rte_errno; } + rxq_ctrl = rxq->ctrl; if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) { rte_errno = EINVAL; DRV_LOG(ERR, "port %u queue %d not a hairpin Rxq", dev->data->port_id, cur_queue); - mlx5_rxq_release(dev, cur_queue); return -rte_errno; } if (rxq_ctrl->obj == NULL || rxq_ctrl->obj->rq == NULL) { rte_errno = ENOMEM; DRV_LOG(ERR, "port %u no Rxq object found: %d", dev->data->port_id, cur_queue); - mlx5_rxq_release(dev, cur_queue); return -rte_errno; } if (rxq_ctrl->hairpin_status != 0) { DRV_LOG(DEBUG, "port %u Rx queue %d is already bound", dev->data->port_id, cur_queue); - mlx5_rxq_release(dev, cur_queue); return 0; } if (peer_info->tx_explicit != @@ -543,7 +540,6 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue, rte_errno = EINVAL; DRV_LOG(ERR, "port %u Rx queue %d and peer Tx rule mode" " mismatch", dev->data->port_id, cur_queue); - mlx5_rxq_release(dev, cur_queue); return -rte_errno; } if (peer_info->manual_bind != @@ -551,7 +547,6 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue, rte_errno = EINVAL; DRV_LOG(ERR, "port %u Rx queue %d and peer binding mode" " mismatch", dev->data->port_id, cur_queue); - mlx5_rxq_release(dev, cur_queue); return -rte_errno; } rq_attr.state = MLX5_SQC_STATE_RDY; @@ -561,7 +556,6 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue, ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr); if (ret == 0) rxq_ctrl->hairpin_status = 1; - mlx5_rxq_release(dev, cur_queue); } return ret; } @@ -626,34 +620,32 @@ mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue, txq_ctrl->hairpin_status = 0; mlx5_txq_release(dev, cur_queue); } else { + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, cur_queue); struct mlx5_rxq_ctrl *rxq_ctrl; struct mlx5_devx_modify_rq_attr rq_attr = { 0 }; - rxq_ctrl = mlx5_rxq_get(dev, cur_queue); - if (rxq_ctrl == NULL) { + if 
(rxq == NULL) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "Failed to get port %u Rx queue %d",
 				dev->data->port_id, cur_queue);
 			return -rte_errno;
 		}
+		rxq_ctrl = rxq->ctrl;
 		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u queue %d not a hairpin Rxq",
 				dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return -rte_errno;
 		}
 		if (rxq_ctrl->hairpin_status == 0) {
 			DRV_LOG(DEBUG, "port %u Rx queue %d is already unbound",
 				dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return 0;
 		}
 		if (rxq_ctrl->obj == NULL || rxq_ctrl->obj->rq == NULL) {
 			rte_errno = ENOMEM;
 			DRV_LOG(ERR, "port %u no Rxq object found: %d",
 				dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return -rte_errno;
 		}
 		rq_attr.state = MLX5_SQC_STATE_RST;
@@ -661,7 +653,6 @@ mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue,
 		ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr);
 		if (ret == 0)
 			rxq_ctrl->hairpin_status = 0;
-		mlx5_rxq_release(dev, cur_queue);
 	}
 	return ret;
 }
@@ -963,7 +954,6 @@ mlx5_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports,
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_txq_ctrl *txq_ctrl;
-	struct mlx5_rxq_ctrl *rxq_ctrl;
 	uint32_t i;
 	uint16_t pp;
 	uint32_t bits[(RTE_MAX_ETHPORTS + 31) / 32] = {0};
@@ -992,24 +982,23 @@ mlx5_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports,
 		}
 	} else {
 		for (i = 0; i < priv->rxqs_n; i++) {
-			rxq_ctrl = mlx5_rxq_get(dev, i);
-			if (!rxq_ctrl)
+			struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, i);
+			struct mlx5_rxq_ctrl *rxq_ctrl;
+
+			if (rxq == NULL)
 				continue;
-			if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
-				mlx5_rxq_release(dev, i);
+			rxq_ctrl = rxq->ctrl;
+			if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN)
 				continue;
-			}
 			pp = rxq_ctrl->hairpin_conf.peers[0].port;
 			if (pp >= RTE_MAX_ETHPORTS) {
 				rte_errno = ERANGE;
-				mlx5_rxq_release(dev, i);
 				DRV_LOG(ERR, "port %hu queue %u peer port "
 					"out of range %hu",
 					priv->dev_data->port_id, i, pp);
 				return -rte_errno;
 			}
 			bits[pp / 32] |= 1 << (pp % 32);
-			mlx5_rxq_release(dev, i);
 		}
 	}
 	for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
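The ref/deref pair introduced above replaces the old implicit get/release
convention: callers that only need to look a queue up no longer touch the
counter, and only mlx5_rxq_ref()/mlx5_rxq_deref() move it. A minimal model
of the pair, dropping the lookup-by-index step the driver performs first
(names shortened; not the driver's exact code):

#include <stdint.h>
#include <stddef.h>

struct rxq {
	uint32_t refcnt; /* reference counter, moved to per-queue data */
};

static struct rxq *
rxq_ref(struct rxq *q)
{
	if (q != NULL)
		__atomic_fetch_add(&q->refcnt, 1, __ATOMIC_RELAXED);
	return q;
}

static uint32_t
rxq_deref(struct rxq *q)
{
	if (q == NULL)
		return 0;
	/* Returns the updated count so the caller can free on zero. */
	return __atomic_sub_fetch(&q->refcnt, 1, __ATOMIC_RELAXED);
}
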
From patchwork Sun Sep 26 11:19:00 2021
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 99689
X-Patchwork-Delegate: rasland@nvidia.com
From: Xueming Li
To: dev@dpdk.org
CC: Lior Margalit, Matan Azrad, Viacheslav Ovsiienko
Date: Sun, 26 Sep 2021 19:19:00 +0800
Message-ID: <20210926111904.237736-8-xuemingl@nvidia.com>
In-Reply-To: <20210926111904.237736-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH 07/11] net/mlx5: move Rx queue hairpin info to private data

The hairpin info of an Rx queue cannot be shared, so move it to the
private queue data.

Signed-off-by: Xueming Li
---
 drivers/net/mlx5/mlx5_rx.h      |  4 ++--
 drivers/net/mlx5/mlx5_rxq.c     | 13 +++++--------
 drivers/net/mlx5/mlx5_trigger.c | 24 ++++++++++++------------
 3 files changed, 19 insertions(+), 22 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index fe19414c130..2ed544556f5 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -171,8 +171,6 @@ struct mlx5_rxq_ctrl {
 	uint32_t flow_tunnels_n[MLX5_FLOW_TUNNEL]; /* Tunnels counters. */
 	uint32_t wqn; /* WQ number. */
 	uint16_t dump_file_n; /* Number of dump files. */
-	struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */
-	uint32_t hairpin_status; /* Hairpin binding status. */
 };

 /* RX queue private data. */
@@ -182,6 +180,8 @@ struct mlx5_rxq_priv {
 	struct mlx5_rxq_ctrl *ctrl; /* Shared Rx Queue. */
 	LIST_ENTRY(mlx5_rxq_priv) owner_entry; /* Entry in shared rxq_ctrl. */
 	struct mlx5_priv *priv; /* Back pointer to private data. */
+	struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration.
*/ + uint32_t hairpin_status; /* Hairpin binding status. */ }; /* mlx5_rxq.c */ diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index 7f28646f55c..21cb1000899 100644 --- a/drivers/net/mlx5/mlx5_rxq.c +++ b/drivers/net/mlx5/mlx5_rxq.c @@ -1649,8 +1649,8 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq, tmpl->rxq.elts_n = log2above(desc); tmpl->rxq.elts = NULL; tmpl->rxq.mr_ctrl.cache_bh = (struct mlx5_mr_btree) { 0 }; - tmpl->hairpin_conf = *hairpin_conf; tmpl->rxq.idx = idx; + rxq->hairpin_conf = *hairpin_conf; mlx5_rxq_ref(dev, idx); LIST_INSERT_HEAD(&priv->rxqsctrl, tmpl, next); return tmpl; @@ -1869,14 +1869,11 @@ const struct rte_eth_hairpin_conf * mlx5_rxq_get_hairpin_conf(struct rte_eth_dev *dev, uint16_t idx) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_ctrl *rxq_ctrl = NULL; + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx); - if (idx < priv->rxqs_n && (*priv->rxqs)[idx]) { - rxq_ctrl = container_of((*priv->rxqs)[idx], - struct mlx5_rxq_ctrl, - rxq); - if (rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN) - return &rxq_ctrl->hairpin_conf; + if (idx < priv->rxqs_n && rxq != NULL) { + if (rxq->ctrl->type == MLX5_RXQ_TYPE_HAIRPIN) + return &rxq->hairpin_conf; } return NULL; } diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c index a49254c96f6..f376f4d6fc4 100644 --- a/drivers/net/mlx5/mlx5_trigger.c +++ b/drivers/net/mlx5/mlx5_trigger.c @@ -273,7 +273,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev) } rxq_ctrl = rxq->ctrl; if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN || - rxq_ctrl->hairpin_conf.peers[0].queue != i) { + rxq->hairpin_conf.peers[0].queue != i) { rte_errno = ENOMEM; DRV_LOG(ERR, "port %u Tx queue %d can't be binded to " "Rx queue %d", dev->data->port_id, @@ -303,7 +303,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev) if (ret) goto error; /* Qs with auto-bind will be destroyed directly. 
*/ - rxq_ctrl->hairpin_status = 1; + rxq->hairpin_status = 1; txq_ctrl->hairpin_status = 1; mlx5_txq_release(dev, i); } @@ -406,9 +406,9 @@ mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue, } peer_info->qp_id = rxq_ctrl->obj->rq->id; peer_info->vhca_id = priv->config.hca_attr.vhca_id; - peer_info->peer_q = rxq_ctrl->hairpin_conf.peers[0].queue; - peer_info->tx_explicit = rxq_ctrl->hairpin_conf.tx_explicit; - peer_info->manual_bind = rxq_ctrl->hairpin_conf.manual_bind; + peer_info->peer_q = rxq->hairpin_conf.peers[0].queue; + peer_info->tx_explicit = rxq->hairpin_conf.tx_explicit; + peer_info->manual_bind = rxq->hairpin_conf.manual_bind; } return 0; } @@ -530,20 +530,20 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue, dev->data->port_id, cur_queue); return -rte_errno; } - if (rxq_ctrl->hairpin_status != 0) { + if (rxq->hairpin_status != 0) { DRV_LOG(DEBUG, "port %u Rx queue %d is already bound", dev->data->port_id, cur_queue); return 0; } if (peer_info->tx_explicit != - rxq_ctrl->hairpin_conf.tx_explicit) { + rxq->hairpin_conf.tx_explicit) { rte_errno = EINVAL; DRV_LOG(ERR, "port %u Rx queue %d and peer Tx rule mode" " mismatch", dev->data->port_id, cur_queue); return -rte_errno; } if (peer_info->manual_bind != - rxq_ctrl->hairpin_conf.manual_bind) { + rxq->hairpin_conf.manual_bind) { rte_errno = EINVAL; DRV_LOG(ERR, "port %u Rx queue %d and peer binding mode" " mismatch", dev->data->port_id, cur_queue); @@ -555,7 +555,7 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue, rq_attr.hairpin_peer_vhca = peer_info->vhca_id; ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr); if (ret == 0) - rxq_ctrl->hairpin_status = 1; + rxq->hairpin_status = 1; } return ret; } @@ -637,7 +637,7 @@ mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue, dev->data->port_id, cur_queue); return -rte_errno; } - if (rxq_ctrl->hairpin_status == 0) { + if (rxq->hairpin_status == 0) { DRV_LOG(DEBUG, "port %u Rx queue %d is already unbound", dev->data->port_id, cur_queue); return 0; @@ -652,7 +652,7 @@ mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue, rq_attr.rq_state = MLX5_SQC_STATE_RST; ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr); if (ret == 0) - rxq_ctrl->hairpin_status = 0; + rxq->hairpin_status = 0; } return ret; } @@ -990,7 +990,7 @@ mlx5_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports, rxq_ctrl = rxq->ctrl; if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) continue; - pp = rxq_ctrl->hairpin_conf.peers[0].port; + pp = rxq->hairpin_conf.peers[0].port; if (pp >= RTE_MAX_ETHPORTS) { rte_errno = ERANGE; DRV_LOG(ERR, "port %hu queue %u peer port "
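Taken together, the hunks above split hairpin state out of the shareable control structure. The minimal C sketch below shows the resulting layout; the type and field names (hairpin_conf, rxq_ctrl, rxq_priv) are simplified stand-ins for the driver's rte_eth_hairpin_conf, mlx5_rxq_ctrl and mlx5_rxq_priv, not the actual definitions:

#include <stdint.h>

/* Illustrative stand-in for rte_eth_hairpin_conf. */
struct hairpin_conf {
	uint16_t peer_queue;    /* peer Tx queue index */
	uint32_t tx_explicit:1; /* explicit Tx rule mode */
	uint32_t manual_bind:1; /* manual vs. automatic binding */
};

/* Shareable control structure: the hairpin fields are gone. */
struct rxq_ctrl {
	uint32_t wqn;           /* WQ number */
	uint16_t dump_file_n;   /* number of dump files */
};

/* Per-queue private data: hairpin state now lives here. */
struct rxq_priv {
	uint16_t idx;                     /* queue index */
	struct rxq_ctrl *ctrl;            /* pointer to the shareable part */
	struct hairpin_conf hairpin_conf; /* per-queue hairpin setup */
	uint32_t hairpin_status;          /* 0 = unbound, 1 = bound */
};

With this split, queues that end up sharing one control structure can still carry distinct hairpin peers and binding states.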
From patchwork Sun Sep 26 11:19:01 2021
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 99691
From: Xueming Li
To: dev@dpdk.org
CC: Lior Margalit, Matan Azrad, Viacheslav Ovsiienko
Date: Sun, 26 Sep 2021 19:19:01 +0800
Message-ID: <20210926111904.237736-9-xuemingl@nvidia.com>
In-Reply-To: <20210926111904.237736-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH 08/11] net/mlx5: remove port info from shareable Rx queue

To prepare for shared Rx queues, remove the port info from the shareable Rx queue control structure.
Signed-off-by: Xueming Li --- drivers/net/mlx5/mlx5_devx.c | 2 +- drivers/net/mlx5/mlx5_mr.c | 7 ++++--- drivers/net/mlx5/mlx5_rx.c | 15 +++------------ drivers/net/mlx5/mlx5_rx.h | 5 ++++- drivers/net/mlx5/mlx5_rxq.c | 10 ++++------ drivers/net/mlx5/mlx5_rxtx_vec.c | 2 +- 6 files changed, 17 insertions(+), 24 deletions(-) diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c index 4d479c19e6c..71e4bce1588 100644 --- a/drivers/net/mlx5/mlx5_devx.c +++ b/drivers/net/mlx5/mlx5_devx.c @@ -916,7 +916,7 @@ mlx5_rxq_devx_obj_drop_create(struct rte_eth_dev *dev) } rxq->rxq_ctrl = rxq_ctrl; rxq_ctrl->type = MLX5_RXQ_TYPE_STANDARD; - rxq_ctrl->priv = priv; + rxq_ctrl->sh = priv->sh; rxq_ctrl->obj = rxq; rxq_data = &rxq_ctrl->rxq; /* Create CQ using DevX API. */ diff --git a/drivers/net/mlx5/mlx5_mr.c b/drivers/net/mlx5/mlx5_mr.c index 44afda731fc..8d48b4614ee 100644 --- a/drivers/net/mlx5/mlx5_mr.c +++ b/drivers/net/mlx5/mlx5_mr.c @@ -82,10 +82,11 @@ mlx5_rx_addr2mr_bh(struct mlx5_rxq_data *rxq, uintptr_t addr) struct mlx5_rxq_ctrl *rxq_ctrl = container_of(rxq, struct mlx5_rxq_ctrl, rxq); struct mlx5_mr_ctrl *mr_ctrl = &rxq->mr_ctrl; - struct mlx5_priv *priv = rxq_ctrl->priv; + struct mlx5_priv *priv = RXQ_PORT(rxq_ctrl); + struct mlx5_dev_ctx_shared *sh = rxq_ctrl->sh; - return mlx5_mr_addr2mr_bh(priv->sh->pd, &priv->mp_id, - &priv->sh->share_cache, mr_ctrl, addr, + return mlx5_mr_addr2mr_bh(sh->pd, &priv->mp_id, + &sh->share_cache, mr_ctrl, addr, priv->config.mr_ext_memseg_en); } diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c index e3b1051ba46..09de26c0d39 100644 --- a/drivers/net/mlx5/mlx5_rx.c +++ b/drivers/net/mlx5/mlx5_rx.c @@ -118,15 +118,7 @@ int mlx5_rx_descriptor_status(void *rx_queue, uint16_t offset) { struct mlx5_rxq_data *rxq = rx_queue; - struct mlx5_rxq_ctrl *rxq_ctrl = - container_of(rxq, struct mlx5_rxq_ctrl, rxq); - struct rte_eth_dev *dev = ETH_DEV(rxq_ctrl->priv); - if (dev->rx_pkt_burst == NULL || - dev->rx_pkt_burst == removed_rx_burst) { - rte_errno = ENOTSUP; - return -rte_errno; - } if (offset >= (1 << rxq->cqe_n)) { rte_errno = EINVAL; return -rte_errno; @@ -438,10 +430,10 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec) sm.is_wq = 1; sm.queue_id = rxq->idx; sm.state = IBV_WQS_RESET; - if (mlx5_queue_state_modify(ETH_DEV(rxq_ctrl->priv), &sm)) + if (mlx5_queue_state_modify(RXQ_DEV(rxq_ctrl), &sm)) return -1; if (rxq_ctrl->dump_file_n < - rxq_ctrl->priv->config.max_dump_files_num) { + RXQ_PORT(rxq_ctrl)->config.max_dump_files_num) { MKSTR(err_str, "Unexpected CQE error syndrome " "0x%02x CQN = %u RQN = %u wqe_counter = %u" " rq_ci = %u cq_ci = %u", u.err_cqe->syndrome, @@ -478,8 +470,7 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec) sm.is_wq = 1; sm.queue_id = rxq->idx; sm.state = IBV_WQS_RDY; - if (mlx5_queue_state_modify(ETH_DEV(rxq_ctrl->priv), - &sm)) + if (mlx5_queue_state_modify(RXQ_DEV(rxq_ctrl), &sm)) return -1; if (vec) { const uint32_t elts_n = diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h index 2ed544556f5..4eed4176324 100644 --- a/drivers/net/mlx5/mlx5_rx.h +++ b/drivers/net/mlx5/mlx5_rx.h @@ -23,6 +23,10 @@ /* Support tunnel matching. */ #define MLX5_FLOW_TUNNEL 10 +#define RXQ_PORT(rxq_ctrl) LIST_FIRST(&(rxq_ctrl)->owners)->priv +#define RXQ_DEV(rxq_ctrl) ETH_DEV(RXQ_PORT(rxq_ctrl)) +#define RXQ_PORT_ID(rxq_ctrl) PORT_ID(RXQ_PORT(rxq_ctrl)) + struct mlx5_rxq_stats { #ifdef MLX5_PMD_SOFT_COUNTERS uint64_t ipackets; /**< Total of successfully received packets. 
*/ @@ -163,7 +167,6 @@ struct mlx5_rxq_ctrl { LIST_HEAD(priv, mlx5_rxq_priv) owners; /* Owner rxq list. */ struct mlx5_rxq_obj *obj; /* Verbs/DevX elements. */ struct mlx5_dev_ctx_shared *sh; /* Shared context. */ - struct mlx5_priv *priv; /* Back pointer to private data. */ enum mlx5_rxq_type type; /* Rxq type. */ unsigned int socket; /* CPU socket ID for allocations. */ unsigned int irq:1; /* Whether IRQ is enabled. */ diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index 21cb1000899..3aac7cc20ba 100644 --- a/drivers/net/mlx5/mlx5_rxq.c +++ b/drivers/net/mlx5/mlx5_rxq.c @@ -148,7 +148,7 @@ rxq_alloc_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl) buf = rte_pktmbuf_alloc(seg->mp); if (buf == NULL) { DRV_LOG(ERR, "port %u empty mbuf pool", - PORT_ID(rxq_ctrl->priv)); + RXQ_PORT_ID(rxq_ctrl)); rte_errno = ENOMEM; goto error; } @@ -195,7 +195,7 @@ rxq_alloc_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl) DRV_LOG(DEBUG, "port %u SPRQ queue %u allocated and configured %u segments" " (max %u packets)", - PORT_ID(rxq_ctrl->priv), rxq_ctrl->rxq.idx, elts_n, + RXQ_PORT_ID(rxq_ctrl), rxq_ctrl->rxq.idx, elts_n, elts_n / (1 << rxq_ctrl->rxq.sges_n)); return 0; error: @@ -207,7 +207,7 @@ rxq_alloc_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl) (*rxq_ctrl->rxq.elts)[i] = NULL; } DRV_LOG(DEBUG, "port %u SPRQ queue %u failed, freed everything", - PORT_ID(rxq_ctrl->priv), rxq_ctrl->rxq.idx); + RXQ_PORT_ID(rxq_ctrl), rxq_ctrl->rxq.idx); rte_errno = err; /* Restore rte_errno. */ return -rte_errno; } @@ -284,7 +284,7 @@ rxq_free_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl) uint16_t i; DRV_LOG(DEBUG, "port %u Rx queue %u freeing %d WRs", - PORT_ID(rxq_ctrl->priv), rxq->idx, q_n); + RXQ_PORT_ID(rxq_ctrl), rxq->idx, q_n); if (rxq->elts == NULL) return; /** @@ -1584,7 +1584,6 @@ mlx5_rxq_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq, (!!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS)); tmpl->rxq.port_id = dev->data->port_id; tmpl->sh = priv->sh; - tmpl->priv = priv; tmpl->rxq.mp = rx_seg[0].mp; tmpl->rxq.elts_n = log2above(desc); tmpl->rxq.rq_repl_thresh = @@ -1644,7 +1643,6 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq, tmpl->rxq.rss_hash = 0; tmpl->rxq.port_id = dev->data->port_id; tmpl->sh = priv->sh; - tmpl->priv = priv; tmpl->rxq.mp = NULL; tmpl->rxq.elts_n = log2above(desc); tmpl->rxq.elts = NULL; diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.c b/drivers/net/mlx5/mlx5_rxtx_vec.c index ecd273e00a8..511681841ca 100644 --- a/drivers/net/mlx5/mlx5_rxtx_vec.c +++ b/drivers/net/mlx5/mlx5_rxtx_vec.c @@ -550,7 +550,7 @@ mlx5_rxq_check_vec_support(struct mlx5_rxq_data *rxq) struct mlx5_rxq_ctrl *ctrl = container_of(rxq, struct mlx5_rxq_ctrl, rxq); - if (!ctrl->priv->config.rx_vec_en || rxq->sges_n != 0) + if (!RXQ_PORT(ctrl)->config.rx_vec_en || rxq->sges_n != 0) return -ENOTSUP; if (rxq->lro) return -ENOTSUP;
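With the priv back pointer removed, call sites that need the owning port go through the RXQ_PORT()/RXQ_PORT_ID() accessors added in mlx5_rx.h, which reach the first queue on the control structure's owners list. Below is a self-contained sketch of that mechanism using simplified stand-in types rather than the driver's own:

#include <stdint.h>
#include <sys/queue.h>

struct rxq_priv; /* forward declaration for the list macros */

struct port_priv {
	uint16_t port_id; /* stand-in for mlx5_priv */
};

/* The shareable control structure keeps a list of owning queues
 * instead of caching a single port's private data. */
struct rxq_ctrl {
	LIST_HEAD(rxq_owners, rxq_priv) owners;
};

struct rxq_priv {
	LIST_ENTRY(rxq_priv) owner_entry; /* entry in ctrl->owners */
	struct port_priv *priv;           /* back pointer to the port */
};

/* Mirrors the patch's RXQ_PORT()/RXQ_PORT_ID(): recover port data from
 * the first owner on the list (assumes at least one owner is linked,
 * which holds while the queue is configured). */
#define RXQ_PORT(ctrl)    (LIST_FIRST(&(ctrl)->owners)->priv)
#define RXQ_PORT_ID(ctrl) (RXQ_PORT(ctrl)->port_id)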
From patchwork Sun Sep 26 11:19:02 2021
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 99692
From: Xueming Li
To: dev@dpdk.org
CC: Lior Margalit, Matan Azrad, Viacheslav Ovsiienko, Anatoly Burakov
Date: Sun, 26 Sep 2021 19:19:02 +0800
Message-ID: <20210926111904.237736-10-xuemingl@nvidia.com>
In-Reply-To: <20210926111904.237736-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH 09/11] net/mlx5: move Rx queue DevX resource

To support shared Rx queues, move the DevX RQ, which is a per-queue resource, to the Rx queue private data.
Signed-off-by: Xueming Li --- drivers/net/mlx5/linux/mlx5_verbs.c | 154 +++++++++++-------- drivers/net/mlx5/mlx5.h | 11 +- drivers/net/mlx5/mlx5_devx.c | 227 ++++++++++++++-------------- drivers/net/mlx5/mlx5_rx.h | 1 + drivers/net/mlx5/mlx5_rxq.c | 44 +++--- drivers/net/mlx5/mlx5_rxtx.c | 6 +- drivers/net/mlx5/mlx5_trigger.c | 2 +- drivers/net/mlx5/mlx5_vlan.c | 16 +- 8 files changed, 241 insertions(+), 220 deletions(-) diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c index d4fa202ac4b..a2a9b9c1f98 100644 --- a/drivers/net/mlx5/linux/mlx5_verbs.c +++ b/drivers/net/mlx5/linux/mlx5_verbs.c @@ -71,13 +71,13 @@ const struct mlx5_mr_ops mlx5_mr_verbs_ops = { /** * Modify Rx WQ vlan stripping offload * - * @param rxq_obj - * Rx queue object. + * @param rxq + * Rx queue. * * @return 0 on success, non-0 otherwise */ static int -mlx5_rxq_obj_modify_wq_vlan_strip(struct mlx5_rxq_obj *rxq_obj, int on) +mlx5_rxq_obj_modify_wq_vlan_strip(struct mlx5_rxq_priv *rxq, int on) { uint16_t vlan_offloads = (on ? IBV_WQ_FLAGS_CVLAN_STRIPPING : 0) | @@ -89,14 +89,14 @@ mlx5_rxq_obj_modify_wq_vlan_strip(struct mlx5_rxq_obj *rxq_obj, int on) .flags = vlan_offloads, }; - return mlx5_glue->modify_wq(rxq_obj->wq, &mod); + return mlx5_glue->modify_wq(rxq->ctrl->obj->wq, &mod); } /** * Modifies the attributes for the specified WQ. * - * @param rxq_obj - * Verbs Rx queue object. + * @param rxq + * Verbs Rx queue. * @param type * Type of change queue state. * @@ -104,14 +104,14 @@ mlx5_rxq_obj_modify_wq_vlan_strip(struct mlx5_rxq_obj *rxq_obj, int on) * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int -mlx5_ibv_modify_wq(struct mlx5_rxq_obj *rxq_obj, uint8_t type) +mlx5_ibv_modify_wq(struct mlx5_rxq_priv *rxq, uint8_t type) { struct ibv_wq_attr mod = { .attr_mask = IBV_WQ_ATTR_STATE, .wq_state = (enum ibv_wq_state)type, }; - return mlx5_glue->modify_wq(rxq_obj->wq, &mod); + return mlx5_glue->modify_wq(rxq->ctrl->obj->wq, &mod); } /** @@ -181,21 +181,18 @@ mlx5_ibv_modify_qp(struct mlx5_txq_obj *obj, enum mlx5_txq_modify_type type, /** * Create a CQ Verbs object. * - * @param dev - * Pointer to Ethernet device. - * @param idx - * Queue index in DPDK Rx queue array. + * @param rxq + * Pointer to Rx queue. * * @return * The Verbs CQ object initialized, NULL otherwise and rte_errno is set. */ static struct ibv_cq * -mlx5_rxq_ibv_cq_create(struct rte_eth_dev *dev, uint16_t idx) +mlx5_rxq_ibv_cq_create(struct mlx5_rxq_priv *rxq) { - struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx]; - struct mlx5_rxq_ctrl *rxq_ctrl = - container_of(rxq_data, struct mlx5_rxq_ctrl, rxq); + struct mlx5_priv *priv = rxq->priv; + struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl; + struct mlx5_rxq_data *rxq_data = &rxq_ctrl->rxq; struct mlx5_rxq_obj *rxq_obj = rxq_ctrl->obj; unsigned int cqe_n = mlx5_rxq_cqe_num(rxq_data); struct { @@ -241,7 +238,7 @@ mlx5_rxq_ibv_cq_create(struct rte_eth_dev *dev, uint16_t idx) DRV_LOG(DEBUG, "Port %u Rx CQE compression is disabled for HW" " timestamp.", - dev->data->port_id); + priv->dev_data->port_id); } #ifdef HAVE_IBV_MLX5_MOD_CQE_128B_PAD if (RTE_CACHE_LINE_SIZE == 128) { @@ -257,21 +254,18 @@ mlx5_rxq_ibv_cq_create(struct rte_eth_dev *dev, uint16_t idx) /** * Create a WQ Verbs object. * - * @param dev - * Pointer to Ethernet device. - * @param idx - * Queue index in DPDK Rx queue array. + * @param rxq + * Pointer to Rx queue. 
* * @return * The Verbs WQ object initialized, NULL otherwise and rte_errno is set. */ static struct ibv_wq * -mlx5_rxq_ibv_wq_create(struct rte_eth_dev *dev, uint16_t idx) +mlx5_rxq_ibv_wq_create(struct mlx5_rxq_priv *rxq) { - struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx]; - struct mlx5_rxq_ctrl *rxq_ctrl = - container_of(rxq_data, struct mlx5_rxq_ctrl, rxq); + struct mlx5_priv *priv = rxq->priv; + struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl; + struct mlx5_rxq_data *rxq_data = &rxq_ctrl->rxq; struct mlx5_rxq_obj *rxq_obj = rxq_ctrl->obj; unsigned int wqe_n = 1 << rxq_data->elts_n; struct { @@ -338,7 +332,7 @@ mlx5_rxq_ibv_wq_create(struct rte_eth_dev *dev, uint16_t idx) DRV_LOG(ERR, "Port %u Rx queue %u requested %u*%u but got" " %u*%u WRs*SGEs.", - dev->data->port_id, idx, + priv->dev_data->port_id, rxq->idx, wqe_n >> rxq_data->sges_n, (1 << rxq_data->sges_n), wq_attr.ibv.max_wr, wq_attr.ibv.max_sge); @@ -353,21 +347,20 @@ mlx5_rxq_ibv_wq_create(struct rte_eth_dev *dev, uint16_t idx) /** * Create the Rx queue Verbs object. * - * @param dev - * Pointer to Ethernet device. - * @param idx - * Queue index in DPDK Rx queue array. + * @param rxq + * Pointer to Rx queue. * * @return * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int -mlx5_rxq_ibv_obj_new(struct rte_eth_dev *dev, uint16_t idx) +mlx5_rxq_ibv_obj_new(struct mlx5_rxq_priv *rxq) { - struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx]; - struct mlx5_rxq_ctrl *rxq_ctrl = - container_of(rxq_data, struct mlx5_rxq_ctrl, rxq); + uint16_t idx = rxq->idx; + struct mlx5_priv *priv = rxq->priv; + uint16_t port_id = priv->dev_data->port_id; + struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl; + struct mlx5_rxq_data *rxq_data = &rxq_ctrl->rxq; struct mlx5_rxq_obj *tmpl = rxq_ctrl->obj; struct mlx5dv_cq cq_info; struct mlx5dv_rwq rwq; @@ -382,17 +375,17 @@ mlx5_rxq_ibv_obj_new(struct rte_eth_dev *dev, uint16_t idx) mlx5_glue->create_comp_channel(priv->sh->ctx); if (!tmpl->ibv_channel) { DRV_LOG(ERR, "Port %u: comp channel creation failure.", - dev->data->port_id); + port_id); rte_errno = ENOMEM; goto error; } tmpl->fd = ((struct ibv_comp_channel *)(tmpl->ibv_channel))->fd; } /* Create CQ using Verbs API. */ - tmpl->ibv_cq = mlx5_rxq_ibv_cq_create(dev, idx); + tmpl->ibv_cq = mlx5_rxq_ibv_cq_create(rxq); if (!tmpl->ibv_cq) { DRV_LOG(ERR, "Port %u Rx queue %u CQ creation failure.", - dev->data->port_id, idx); + port_id, idx); rte_errno = ENOMEM; goto error; } @@ -407,7 +400,7 @@ mlx5_rxq_ibv_obj_new(struct rte_eth_dev *dev, uint16_t idx) DRV_LOG(ERR, "Port %u wrong MLX5_CQE_SIZE environment " "variable value: it should be set to %u.", - dev->data->port_id, RTE_CACHE_LINE_SIZE); + port_id, RTE_CACHE_LINE_SIZE); rte_errno = EINVAL; goto error; } @@ -418,19 +411,19 @@ mlx5_rxq_ibv_obj_new(struct rte_eth_dev *dev, uint16_t idx) rxq_data->cq_uar = cq_info.cq_uar; rxq_data->cqn = cq_info.cqn; /* Create WQ (RQ) using Verbs API. */ - tmpl->wq = mlx5_rxq_ibv_wq_create(dev, idx); + tmpl->wq = mlx5_rxq_ibv_wq_create(rxq); if (!tmpl->wq) { DRV_LOG(ERR, "Port %u Rx queue %u WQ creation failure.", - dev->data->port_id, idx); + port_id, idx); rte_errno = ENOMEM; goto error; } /* Change queue state to ready. 
*/ - ret = mlx5_ibv_modify_wq(tmpl, IBV_WQS_RDY); + ret = mlx5_ibv_modify_wq(rxq, IBV_WQS_RDY); if (ret) { DRV_LOG(ERR, "Port %u Rx queue %u WQ state to IBV_WQS_RDY failed.", - dev->data->port_id, idx); + port_id, idx); rte_errno = ret; goto error; } @@ -446,7 +439,7 @@ mlx5_rxq_ibv_obj_new(struct rte_eth_dev *dev, uint16_t idx) rxq_data->cq_arm_sn = 0; mlx5_rxq_initialize(rxq_data); rxq_data->cq_ci = 0; - dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STARTED; + priv->dev_data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STARTED; rxq_ctrl->wqn = ((struct ibv_wq *)(tmpl->wq))->wq_num; return 0; error: @@ -464,12 +457,14 @@ mlx5_rxq_ibv_obj_new(struct rte_eth_dev *dev, uint16_t idx) /** * Release an Rx verbs queue object. * - * @param rxq_obj - * Verbs Rx queue object. + * @param rxq + * Pointer to Rx queue. */ static void -mlx5_rxq_ibv_obj_release(struct mlx5_rxq_obj *rxq_obj) +mlx5_rxq_ibv_obj_release(struct mlx5_rxq_priv *rxq) { + struct mlx5_rxq_obj *rxq_obj = rxq->ctrl->obj; + MLX5_ASSERT(rxq_obj); MLX5_ASSERT(rxq_obj->wq); MLX5_ASSERT(rxq_obj->ibv_cq); @@ -692,12 +687,24 @@ static void mlx5_rxq_ibv_obj_drop_release(struct rte_eth_dev *dev) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_obj *rxq = priv->drop_queue.rxq; + struct mlx5_rxq_priv *rxq = priv->drop_queue.rxq; + struct mlx5_rxq_obj *rxq_obj; - if (rxq->wq) - claim_zero(mlx5_glue->destroy_wq(rxq->wq)); - if (rxq->ibv_cq) - claim_zero(mlx5_glue->destroy_cq(rxq->ibv_cq)); + if (rxq == NULL) + return; + if (rxq->ctrl == NULL) + goto free_priv; + rxq_obj = rxq->ctrl->obj; + if (rxq_obj == NULL) + goto free_ctrl; + if (rxq_obj->wq) + claim_zero(mlx5_glue->destroy_wq(rxq_obj->wq)); + if (rxq_obj->ibv_cq) + claim_zero(mlx5_glue->destroy_cq(rxq_obj->ibv_cq)); + mlx5_free(rxq_obj); +free_ctrl: + mlx5_free(rxq->ctrl); +free_priv: mlx5_free(rxq); priv->drop_queue.rxq = NULL; } @@ -716,39 +723,58 @@ mlx5_rxq_ibv_obj_drop_create(struct rte_eth_dev *dev) { struct mlx5_priv *priv = dev->data->dev_private; struct ibv_context *ctx = priv->sh->ctx; - struct mlx5_rxq_obj *rxq = priv->drop_queue.rxq; + struct mlx5_rxq_priv *rxq = priv->drop_queue.rxq; + struct mlx5_rxq_ctrl *rxq_ctrl = NULL; + struct mlx5_rxq_obj *rxq_obj = NULL; - if (rxq) + if (rxq != NULL) return 0; rxq = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rxq), 0, SOCKET_ID_ANY); - if (!rxq) { + if (rxq == NULL) { DRV_LOG(DEBUG, "Port %u cannot allocate drop Rx queue memory.", dev->data->port_id); rte_errno = ENOMEM; return -rte_errno; } priv->drop_queue.rxq = rxq; - rxq->ibv_cq = mlx5_glue->create_cq(ctx, 1, NULL, NULL, 0); - if (!rxq->ibv_cq) { + rxq_ctrl = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rxq_ctrl), 0, + SOCKET_ID_ANY); + if (rxq_ctrl == NULL) { + DRV_LOG(DEBUG, "Port %u cannot allocate drop Rx queue control memory.", + dev->data->port_id); + rte_errno = ENOMEM; + goto error; + } + rxq->ctrl = rxq_ctrl; + rxq_obj = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rxq_obj), 0, + SOCKET_ID_ANY); + if (rxq_obj == NULL) { + DRV_LOG(DEBUG, "Port %u cannot allocate drop Rx queue memory.", + dev->data->port_id); + rte_errno = ENOMEM; + goto error; + } + rxq_ctrl->obj = rxq_obj; + rxq_obj->ibv_cq = mlx5_glue->create_cq(ctx, 1, NULL, NULL, 0); + if (!rxq_obj->ibv_cq) { DRV_LOG(DEBUG, "Port %u cannot allocate CQ for drop queue.", dev->data->port_id); rte_errno = errno; goto error; } - rxq->wq = mlx5_glue->create_wq(ctx, &(struct ibv_wq_init_attr){ + rxq_obj->wq = mlx5_glue->create_wq(ctx, &(struct ibv_wq_init_attr){ .wq_type = IBV_WQT_RQ, .max_wr = 1, .max_sge = 1, .pd = priv->sh->pd, 
- .cq = rxq->ibv_cq, + .cq = rxq_obj->ibv_cq, }); - if (!rxq->wq) { + if (!rxq_obj->wq) { DRV_LOG(DEBUG, "Port %u cannot allocate WQ for drop queue.", dev->data->port_id); rte_errno = errno; goto error; } - priv->drop_queue.rxq = rxq; return 0; error: mlx5_rxq_ibv_obj_drop_release(dev); @@ -777,7 +803,7 @@ mlx5_ibv_drop_action_create(struct rte_eth_dev *dev) ret = mlx5_rxq_ibv_obj_drop_create(dev); if (ret < 0) goto error; - rxq = priv->drop_queue.rxq; + rxq = priv->drop_queue.rxq->ctrl->obj; ind_tbl = mlx5_glue->create_rwq_ind_table (priv->sh->ctx, &(struct ibv_rwq_ind_table_init_attr){ diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index d06f828ed33..c674f5ba9c4 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -310,7 +310,7 @@ struct mlx5_vf_vlan { /* Flow drop context necessary due to Verbs API. */ struct mlx5_drop { struct mlx5_hrxq *hrxq; /* Hash Rx queue queue. */ - struct mlx5_rxq_obj *rxq; /* Rx queue object. */ + struct mlx5_rxq_priv *rxq; /* Rx queue. */ }; /* Loopback dummy queue resources required due to Verbs API. */ @@ -1257,7 +1257,6 @@ struct mlx5_rxq_obj { }; struct mlx5_devx_obj *rq; /* DevX RQ object for hairpin. */ struct { - struct mlx5_devx_rq rq_obj; /* DevX RQ object. */ struct mlx5_devx_cq cq_obj; /* DevX CQ object. */ void *devx_channel; }; @@ -1339,11 +1338,11 @@ struct mlx5_rxq_priv; /* HW objects operations structure. */ struct mlx5_obj_ops { - int (*rxq_obj_modify_vlan_strip)(struct mlx5_rxq_obj *rxq_obj, int on); - int (*rxq_obj_new)(struct rte_eth_dev *dev, uint16_t idx); + int (*rxq_obj_modify_vlan_strip)(struct mlx5_rxq_priv *rxq, int on); + int (*rxq_obj_new)(struct mlx5_rxq_priv *rxq); int (*rxq_event_get)(struct mlx5_rxq_obj *rxq_obj); - int (*rxq_obj_modify)(struct mlx5_rxq_obj *rxq_obj, uint8_t type); - void (*rxq_obj_release)(struct mlx5_rxq_obj *rxq_obj); + int (*rxq_obj_modify)(struct mlx5_rxq_priv *rxq, uint8_t type); + void (*rxq_obj_release)(struct mlx5_rxq_priv *rxq); int (*ind_table_new)(struct rte_eth_dev *dev, const unsigned int log_n, struct mlx5_ind_table_obj *ind_tbl); int (*ind_table_modify)(struct rte_eth_dev *dev, diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c index 71e4bce1588..d219e255f0a 100644 --- a/drivers/net/mlx5/mlx5_devx.c +++ b/drivers/net/mlx5/mlx5_devx.c @@ -30,14 +30,16 @@ /** * Modify RQ vlan stripping offload * - * @param rxq_obj - * Rx queue object. + * @param rxq + * Rx queue. + * @param on + * Enable/disable VLAN stripping. * * @return * 0 on success, non-0 otherwise */ static int -mlx5_rxq_obj_modify_rq_vlan_strip(struct mlx5_rxq_obj *rxq_obj, int on) +mlx5_rxq_obj_modify_rq_vlan_strip(struct mlx5_rxq_priv *rxq, int on) { struct mlx5_devx_modify_rq_attr rq_attr; @@ -46,14 +48,14 @@ mlx5_rxq_obj_modify_rq_vlan_strip(struct mlx5_rxq_obj *rxq_obj, int on) rq_attr.state = MLX5_RQC_STATE_RDY; rq_attr.vsd = (on ? 0 : 1); rq_attr.modify_bitmask = MLX5_MODIFY_RQ_IN_MODIFY_BITMASK_VSD; - return mlx5_devx_cmd_modify_rq(rxq_obj->rq_obj.rq, &rq_attr); + return mlx5_devx_cmd_modify_rq(rxq->devx_rq.rq, &rq_attr); } /** * Modify RQ using DevX API. * - * @param rxq_obj - * DevX Rx queue object. + * @param rxq + * DevX rx queue. * @param type * Type of change queue state. * @@ -61,7 +63,7 @@ mlx5_rxq_obj_modify_rq_vlan_strip(struct mlx5_rxq_obj *rxq_obj, int on) * 0 on success, a negative errno value otherwise and rte_errno is set. 
*/ static int -mlx5_devx_modify_rq(struct mlx5_rxq_obj *rxq_obj, uint8_t type) +mlx5_devx_modify_rq(struct mlx5_rxq_priv *rxq, uint8_t type) { struct mlx5_devx_modify_rq_attr rq_attr; @@ -86,7 +88,7 @@ mlx5_devx_modify_rq(struct mlx5_rxq_obj *rxq_obj, uint8_t type) default: break; } - return mlx5_devx_cmd_modify_rq(rxq_obj->rq_obj.rq, &rq_attr); + return mlx5_devx_cmd_modify_rq(rxq->devx_rq.rq, &rq_attr); } /** @@ -145,42 +147,34 @@ mlx5_devx_modify_sq(struct mlx5_txq_obj *obj, enum mlx5_txq_modify_type type, return 0; } -/** - * Destroy the Rx queue DevX object. - * - * @param rxq_obj - * Rxq object to destroy. - */ -static void -mlx5_rxq_release_devx_resources(struct mlx5_rxq_obj *rxq_obj) -{ - mlx5_devx_rq_destroy(&rxq_obj->rq_obj); - memset(&rxq_obj->rq_obj, 0, sizeof(rxq_obj->rq_obj)); - mlx5_devx_cq_destroy(&rxq_obj->cq_obj); - memset(&rxq_obj->cq_obj, 0, sizeof(rxq_obj->cq_obj)); -} - /** * Release an Rx DevX queue object. * - * @param rxq_obj - * DevX Rx queue object. + * @param rxq + * DevX Rx queue. */ static void -mlx5_rxq_devx_obj_release(struct mlx5_rxq_obj *rxq_obj) +mlx5_rxq_devx_obj_release(struct mlx5_rxq_priv *rxq) { - MLX5_ASSERT(rxq_obj); + struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl; + struct mlx5_rxq_obj *rxq_obj = rxq_ctrl->obj; + + MLX5_ASSERT(rxq != NULL); + MLX5_ASSERT(rxq_ctrl != NULL); if (rxq_obj->rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN) { MLX5_ASSERT(rxq_obj->rq); - mlx5_devx_modify_rq(rxq_obj, MLX5_RXQ_MOD_RDY2RST); + mlx5_devx_modify_rq(rxq, MLX5_RXQ_MOD_RDY2RST); claim_zero(mlx5_devx_cmd_destroy(rxq_obj->rq)); } else { - MLX5_ASSERT(rxq_obj->cq_obj.cq); - MLX5_ASSERT(rxq_obj->rq_obj.rq); - mlx5_rxq_release_devx_resources(rxq_obj); - if (rxq_obj->devx_channel) + mlx5_devx_rq_destroy(&rxq->devx_rq); + memset(&rxq->devx_rq, 0, sizeof(rxq->devx_rq)); + mlx5_devx_cq_destroy(&rxq_obj->cq_obj); + memset(&rxq_obj->cq_obj, 0, sizeof(rxq_obj->cq_obj)); + if (rxq_obj->devx_channel) { mlx5_os_devx_destroy_event_channel (rxq_obj->devx_channel); + rxq_obj->devx_channel = NULL; + } } } @@ -224,21 +218,18 @@ mlx5_rx_devx_get_event(struct mlx5_rxq_obj *rxq_obj) /** * Create a RQ object using DevX. * - * @param dev - * Pointer to Ethernet device. - * @param rxq_data - * RX queue data. + * @param rxq + * Pointer to Rx queue. * * @return * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int -mlx5_rxq_create_devx_rq_resources(struct rte_eth_dev *dev, - struct mlx5_rxq_data *rxq_data) +mlx5_rxq_create_devx_rq_resources(struct mlx5_rxq_priv *rxq) { - struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_ctrl *rxq_ctrl = - container_of(rxq_data, struct mlx5_rxq_ctrl, rxq); + struct mlx5_priv *priv = rxq->priv; + struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl; + struct mlx5_rxq_data *rxq_data = &rxq->ctrl->rxq; struct mlx5_devx_create_rq_attr rq_attr = { 0 }; uint16_t log_desc_n = rxq_data->elts_n - rxq_data->sges_n; uint32_t wqe_size, log_wqe_size; @@ -279,7 +270,7 @@ mlx5_rxq_create_devx_rq_resources(struct rte_eth_dev *dev, rq_attr.wq_attr.pd = priv->sh->pdn; rq_attr.counter_set_id = priv->counter_set_id; /* Create RQ using DevX API. */ - return mlx5_devx_rq_create(priv->sh->ctx, &rxq_ctrl->obj->rq_obj, + return mlx5_devx_rq_create(priv->sh->ctx, &rxq->devx_rq, wqe_size, log_desc_n, &rq_attr, rxq_ctrl->socket); } @@ -287,24 +278,22 @@ mlx5_rxq_create_devx_rq_resources(struct rte_eth_dev *dev, /** * Create a DevX CQ object for an Rx queue. * - * @param dev - * Pointer to Ethernet device. - * @param rxq_data - * RX queue data. 
+ * @param rxq + * Pointer to Rx queue. * * @return * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int -mlx5_rxq_create_devx_cq_resources(struct rte_eth_dev *dev, - struct mlx5_rxq_data *rxq_data) +mlx5_rxq_create_devx_cq_resources(struct mlx5_rxq_priv *rxq) { struct mlx5_devx_cq *cq_obj = 0; struct mlx5_devx_cq_attr cq_attr = { 0 }; - struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_priv *priv = rxq->priv; struct mlx5_dev_ctx_shared *sh = priv->sh; - struct mlx5_rxq_ctrl *rxq_ctrl = - container_of(rxq_data, struct mlx5_rxq_ctrl, rxq); + uint16_t port_id = priv->dev_data->port_id; + struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl; + struct mlx5_rxq_data *rxq_data = &rxq_ctrl->rxq; unsigned int cqe_n = mlx5_rxq_cqe_num(rxq_data); uint32_t log_cqe_n; uint16_t event_nums[1] = { 0 }; @@ -345,7 +334,7 @@ mlx5_rxq_create_devx_cq_resources(struct rte_eth_dev *dev, } DRV_LOG(DEBUG, "Port %u Rx CQE compression is enabled, format %d.", - dev->data->port_id, priv->config.cqe_comp_fmt); + port_id, priv->config.cqe_comp_fmt); /* * For vectorized Rx, it must not be doubled in order to * make cq_ci and rq_ci aligned. @@ -354,13 +343,12 @@ mlx5_rxq_create_devx_cq_resources(struct rte_eth_dev *dev, cqe_n *= 2; } else if (priv->config.cqe_comp && rxq_data->hw_timestamp) { DRV_LOG(DEBUG, - "Port %u Rx CQE compression is disabled for HW" - " timestamp.", - dev->data->port_id); + "Port %u Rx CQE compression is disabled for HW timestamp.", + port_id); } else if (priv->config.cqe_comp && rxq_data->lro) { DRV_LOG(DEBUG, "Port %u Rx CQE compression is disabled for LRO.", - dev->data->port_id); + port_id); } cq_attr.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->devx_rx_uar); log_cqe_n = log2above(cqe_n); @@ -398,27 +386,23 @@ mlx5_rxq_create_devx_cq_resources(struct rte_eth_dev *dev, /** * Create the Rx hairpin queue object. * - * @param dev - * Pointer to Ethernet device. - * @param idx - * Queue index in DPDK Rx queue array. + * @param rxq + * Pointer to Rx queue. * * @return * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int -mlx5_rxq_obj_hairpin_new(struct rte_eth_dev *dev, uint16_t idx) +mlx5_rxq_obj_hairpin_new(struct mlx5_rxq_priv *rxq) { - struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx]; - struct mlx5_rxq_ctrl *rxq_ctrl = - container_of(rxq_data, struct mlx5_rxq_ctrl, rxq); + uint16_t idx = rxq->idx; + struct mlx5_priv *priv = rxq->priv; + struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl; struct mlx5_devx_create_rq_attr attr = { 0 }; struct mlx5_rxq_obj *tmpl = rxq_ctrl->obj; uint32_t max_wq_data; - MLX5_ASSERT(rxq_data); - MLX5_ASSERT(tmpl); + MLX5_ASSERT(rxq != NULL && rxq->ctrl != NULL && tmpl != NULL); tmpl->rxq_ctrl = rxq_ctrl; attr.hairpin = 1; max_wq_data = priv->config.hca_attr.log_max_hairpin_wq_data_sz; @@ -447,39 +431,36 @@ mlx5_rxq_obj_hairpin_new(struct rte_eth_dev *dev, uint16_t idx) if (!tmpl->rq) { DRV_LOG(ERR, "Port %u Rx hairpin queue %u can't create rq object.", - dev->data->port_id, idx); + priv->dev_data->port_id, idx); rte_errno = errno; return -rte_errno; } - dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_HAIRPIN; + priv->dev_data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_HAIRPIN; return 0; } /** * Create the Rx queue DevX object. * - * @param dev - * Pointer to Ethernet device. - * @param idx - * Queue index in DPDK Rx queue array. + * @param rxq + * Pointer to Rx queue. * * @return * 0 on success, a negative errno value otherwise and rte_errno is set. 
*/ static int -mlx5_rxq_devx_obj_new(struct rte_eth_dev *dev, uint16_t idx) +mlx5_rxq_devx_obj_new(struct mlx5_rxq_priv *rxq) { - struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx]; - struct mlx5_rxq_ctrl *rxq_ctrl = - container_of(rxq_data, struct mlx5_rxq_ctrl, rxq); + struct mlx5_priv *priv = rxq->priv; + struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl; + struct mlx5_rxq_data *rxq_data = &rxq_ctrl->rxq; struct mlx5_rxq_obj *tmpl = rxq_ctrl->obj; int ret = 0; MLX5_ASSERT(rxq_data); MLX5_ASSERT(tmpl); if (rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN) - return mlx5_rxq_obj_hairpin_new(dev, idx); + return mlx5_rxq_obj_hairpin_new(rxq); tmpl->rxq_ctrl = rxq_ctrl; if (rxq_ctrl->irq) { int devx_ev_flag = @@ -497,34 +478,32 @@ mlx5_rxq_devx_obj_new(struct rte_eth_dev *dev, uint16_t idx) tmpl->fd = mlx5_os_get_devx_channel_fd(tmpl->devx_channel); } /* Create CQ using DevX API. */ - ret = mlx5_rxq_create_devx_cq_resources(dev, rxq_data); + ret = mlx5_rxq_create_devx_cq_resources(rxq); if (ret) { DRV_LOG(ERR, "Failed to create CQ."); goto error; } /* Create RQ using DevX API. */ - ret = mlx5_rxq_create_devx_rq_resources(dev, rxq_data); + ret = mlx5_rxq_create_devx_rq_resources(rxq); if (ret) { DRV_LOG(ERR, "Port %u Rx queue %u RQ creation failure.", - dev->data->port_id, idx); + priv->dev_data->port_id, rxq->idx); rte_errno = ENOMEM; goto error; } /* Change queue state to ready. */ - ret = mlx5_devx_modify_rq(tmpl, MLX5_RXQ_MOD_RST2RDY); + ret = mlx5_devx_modify_rq(rxq, MLX5_RXQ_MOD_RST2RDY); if (ret) goto error; - rxq_data->wqes = (void *)(uintptr_t)tmpl->rq_obj.wq.umem_buf; - rxq_data->rq_db = (uint32_t *)(uintptr_t)tmpl->rq_obj.db_rec; - rxq_data->cq_arm_sn = 0; - rxq_data->cq_ci = 0; + rxq_data->wqes = (void *)(uintptr_t)rxq->devx_rq.wq.umem_buf; + rxq_data->rq_db = (uint32_t *)(uintptr_t)rxq->devx_rq.db_rec; mlx5_rxq_initialize(rxq_data); - dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STARTED; - rxq_ctrl->wqn = tmpl->rq_obj.rq->id; + priv->dev_data->rx_queue_state[rxq->idx] = RTE_ETH_QUEUE_STATE_STARTED; + rxq_ctrl->wqn = rxq->devx_rq.rq->id; return 0; error: ret = rte_errno; /* Save rte_errno before cleanup. */ - mlx5_rxq_devx_obj_release(tmpl); + mlx5_rxq_devx_obj_release(rxq); rte_errno = ret; /* Restore rte_errno. 
*/ return -rte_errno; } @@ -570,15 +549,15 @@ mlx5_devx_ind_table_create_rqt_attr(struct rte_eth_dev *dev, rqt_attr->rqt_actual_size = rqt_n; if (queues == NULL) { for (i = 0; i < rqt_n; i++) - rqt_attr->rq_list[i] = priv->drop_queue.rxq->rq->id; + rqt_attr->rq_list[i] = + priv->drop_queue.rxq->devx_rq.rq->id; return rqt_attr; } for (i = 0; i != queues_n; ++i) { - struct mlx5_rxq_data *rxq = (*priv->rxqs)[queues[i]]; - struct mlx5_rxq_ctrl *rxq_ctrl = - container_of(rxq, struct mlx5_rxq_ctrl, rxq); + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, queues[i]); - rqt_attr->rq_list[i] = rxq_ctrl->obj->rq_obj.rq->id; + MLX5_ASSERT(rxq != NULL); + rqt_attr->rq_list[i] = rxq->devx_rq.rq->id; } MLX5_ASSERT(i > 0); for (j = 0; i != rqt_n; ++j, ++i) @@ -717,7 +696,7 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key, } } } else { - rxq_obj_type = priv->drop_queue.rxq->rxq_ctrl->type; + rxq_obj_type = priv->drop_queue.rxq->ctrl->type; } memset(tir_attr, 0, sizeof(*tir_attr)); tir_attr->disp_type = MLX5_TIRC_DISP_TYPE_INDIRECT; @@ -889,16 +868,23 @@ mlx5_rxq_devx_obj_drop_create(struct rte_eth_dev *dev) { struct mlx5_priv *priv = dev->data->dev_private; int socket_id = dev->device->numa_node; - struct mlx5_rxq_ctrl *rxq_ctrl; - struct mlx5_rxq_data *rxq_data; - struct mlx5_rxq_obj *rxq = NULL; + struct mlx5_rxq_priv *rxq; + struct mlx5_rxq_ctrl *rxq_ctrl = NULL; + struct mlx5_rxq_obj *rxq_obj = NULL; int ret; /* - * Initialize dummy control structures. + * Initialize dummy Rx queue structures. * They are required to hold pointers for cleanup * and are only accessible via drop queue DevX objects. */ + rxq = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rxq), 0, socket_id); + if (rxq == NULL) { + DRV_LOG(ERR, "Port %u could not allocate drop queue", + dev->data->port_id); + rte_errno = ENOMEM; + goto error; + } rxq_ctrl = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rxq_ctrl), 0, socket_id); if (rxq_ctrl == NULL) { @@ -907,27 +893,29 @@ mlx5_rxq_devx_obj_drop_create(struct rte_eth_dev *dev) rte_errno = ENOMEM; goto error; } - rxq = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rxq), 0, socket_id); - if (rxq == NULL) { + rxq_obj = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rxq_obj), 0, socket_id); + if (rxq_obj == NULL) { DRV_LOG(ERR, "Port %u could not allocate drop queue object", dev->data->port_id); rte_errno = ENOMEM; goto error; } - rxq->rxq_ctrl = rxq_ctrl; + rxq->priv = priv; + rxq->ctrl = rxq_ctrl; + LIST_INSERT_HEAD(&rxq_ctrl->owners, rxq, owner_entry); + rxq_obj->rxq_ctrl = rxq_ctrl; rxq_ctrl->type = MLX5_RXQ_TYPE_STANDARD; rxq_ctrl->sh = priv->sh; - rxq_ctrl->obj = rxq; - rxq_data = &rxq_ctrl->rxq; + rxq_ctrl->obj = rxq_obj; /* Create CQ using DevX API. */ - ret = mlx5_rxq_create_devx_cq_resources(dev, rxq_data); + ret = mlx5_rxq_create_devx_cq_resources(rxq); if (ret != 0) { DRV_LOG(ERR, "Port %u drop queue CQ creation failed.", dev->data->port_id); goto error; } /* Create RQ using DevX API. */ - ret = mlx5_rxq_create_devx_rq_resources(dev, rxq_data); + ret = mlx5_rxq_create_devx_rq_resources(rxq); if (ret != 0) { DRV_LOG(ERR, "Port %u drop queue RQ creation failed.", dev->data->port_id); @@ -944,15 +932,18 @@ mlx5_rxq_devx_obj_drop_create(struct rte_eth_dev *dev) error: ret = rte_errno; /* Save rte_errno before cleanup. 
*/ if (rxq != NULL) { - if (rxq->rq_obj.rq != NULL) - mlx5_devx_rq_destroy(&rxq->rq_obj); - if (rxq->cq_obj.cq != NULL) - mlx5_devx_cq_destroy(&rxq->cq_obj); - if (rxq->devx_channel) - mlx5_os_devx_destroy_event_channel - (rxq->devx_channel); + if (rxq->devx_rq.rq != NULL) + claim_zero(mlx5_devx_rq_destroy(&rxq->devx_rq)); mlx5_free(rxq); } + if (rxq_obj != NULL) { + if (rxq_obj->cq_obj.cq != NULL) + mlx5_devx_cq_destroy(&rxq_obj->cq_obj); + if (rxq_obj->devx_channel) + mlx5_os_devx_destroy_event_channel + (rxq_obj->devx_channel); + mlx5_free(rxq_obj); + } if (rxq_ctrl != NULL) mlx5_free(rxq_ctrl); rte_errno = ret; /* Restore rte_errno. */ @@ -969,12 +960,14 @@ static void mlx5_rxq_devx_obj_drop_release(struct rte_eth_dev *dev) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_obj *rxq = priv->drop_queue.rxq; - struct mlx5_rxq_ctrl *rxq_ctrl = rxq->rxq_ctrl; + struct mlx5_rxq_priv *rxq = priv->drop_queue.rxq; + struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl; + struct mlx5_rxq_obj *rxq_obj = rxq_ctrl->obj; mlx5_rxq_devx_obj_release(rxq); mlx5_free(rxq); mlx5_free(rxq_ctrl); + mlx5_free(rxq_obj); priv->drop_queue.rxq = NULL; } @@ -994,7 +987,7 @@ mlx5_devx_drop_action_destroy(struct rte_eth_dev *dev) mlx5_devx_tir_destroy(hrxq); if (hrxq->ind_table->ind_table != NULL) mlx5_devx_ind_table_destroy(hrxq->ind_table); - if (priv->drop_queue.rxq->rq != NULL) + if (priv->drop_queue.rxq != NULL) mlx5_rxq_devx_obj_drop_release(dev); } diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h index 4eed4176324..25f7fc2071a 100644 --- a/drivers/net/mlx5/mlx5_rx.h +++ b/drivers/net/mlx5/mlx5_rx.h @@ -183,6 +183,7 @@ struct mlx5_rxq_priv { struct mlx5_rxq_ctrl *ctrl; /* Shared Rx Queue. */ LIST_ENTRY(mlx5_rxq_priv) owner_entry; /* Entry in shared rxq_ctrl. */ struct mlx5_priv *priv; /* Back pointer to private data. */ + struct mlx5_devx_rq devx_rq; struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */ uint32_t hairpin_status; /* Hairpin binding status. */ }; diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index 3aac7cc20ba..98408da3c8e 100644 --- a/drivers/net/mlx5/mlx5_rxq.c +++ b/drivers/net/mlx5/mlx5_rxq.c @@ -452,13 +452,13 @@ int mlx5_rx_queue_stop_primary(struct rte_eth_dev *dev, uint16_t idx) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_data *rxq = (*priv->rxqs)[idx]; - struct mlx5_rxq_ctrl *rxq_ctrl = - container_of(rxq, struct mlx5_rxq_ctrl, rxq); + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx); + struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl; int ret; + MLX5_ASSERT(rxq != NULL && rxq_ctrl != NULL); MLX5_ASSERT(rte_eal_process_type() == RTE_PROC_PRIMARY); - ret = priv->obj_ops.rxq_obj_modify(rxq_ctrl->obj, MLX5_RXQ_MOD_RDY2RST); + ret = priv->obj_ops.rxq_obj_modify(rxq, MLX5_RXQ_MOD_RDY2RST); if (ret) { DRV_LOG(ERR, "Cannot change Rx WQ state to RESET: %s", strerror(errno)); @@ -466,7 +466,7 @@ mlx5_rx_queue_stop_primary(struct rte_eth_dev *dev, uint16_t idx) return ret; } /* Remove all processes CQEs. */ - rxq_sync_cq(rxq); + rxq_sync_cq(&rxq_ctrl->rxq); /* Free all involved mbufs. */ rxq_free_elts(rxq_ctrl); /* Set the actual queue state. 
*/ @@ -538,26 +538,26 @@ int mlx5_rx_queue_start_primary(struct rte_eth_dev *dev, uint16_t idx) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_data *rxq = (*priv->rxqs)[idx]; - struct mlx5_rxq_ctrl *rxq_ctrl = - container_of(rxq, struct mlx5_rxq_ctrl, rxq); + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx); + struct mlx5_rxq_data *rxq_data = &rxq->ctrl->rxq; int ret; - MLX5_ASSERT(rte_eal_process_type() == RTE_PROC_PRIMARY); + MLX5_ASSERT(rxq != NULL && rxq->ctrl != NULL); + MLX5_ASSERT(rte_eal_process_type() == RTE_PROC_PRIMARY); /* Allocate needed buffers. */ - ret = rxq_alloc_elts(rxq_ctrl); + ret = rxq_alloc_elts(rxq->ctrl); if (ret) { DRV_LOG(ERR, "Cannot reallocate buffers for Rx WQ"); rte_errno = errno; return ret; } rte_io_wmb(); - *rxq->cq_db = rte_cpu_to_be_32(rxq->cq_ci); + *rxq_data->cq_db = rte_cpu_to_be_32(rxq_data->cq_ci); rte_io_wmb(); /* Reset RQ consumer before moving queue to READY state. */ - *rxq->rq_db = rte_cpu_to_be_32(0); + *rxq_data->rq_db = rte_cpu_to_be_32(0); rte_io_wmb(); - ret = priv->obj_ops.rxq_obj_modify(rxq_ctrl->obj, MLX5_RXQ_MOD_RST2RDY); + ret = priv->obj_ops.rxq_obj_modify(rxq, MLX5_RXQ_MOD_RST2RDY); if (ret) { DRV_LOG(ERR, "Cannot change Rx WQ state to READY: %s", strerror(errno)); @@ -565,8 +565,8 @@ mlx5_rx_queue_start_primary(struct rte_eth_dev *dev, uint16_t idx) return ret; } /* Reinitialize RQ - set WQEs. */ - mlx5_rxq_initialize(rxq); - rxq->err_state = MLX5_RXQ_ERR_STATE_NO_ERROR; + mlx5_rxq_initialize(rxq_data); + rxq_data->err_state = MLX5_RXQ_ERR_STATE_NO_ERROR; /* Set actual queue state. */ dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STARTED; return 0; @@ -1770,15 +1770,19 @@ int mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx); - struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl; + struct mlx5_rxq_priv *rxq; + struct mlx5_rxq_ctrl *rxq_ctrl; - if (priv->rxqs == NULL || (*priv->rxqs)[idx] == NULL) + if (priv->rxq_privs == NULL) + return 0; + rxq = mlx5_rxq_get(dev, idx); + if (rxq == NULL) return 0; if (mlx5_rxq_deref(dev, idx) > 1) return 1; - if (rxq_ctrl->obj) { - priv->obj_ops.rxq_obj_release(rxq_ctrl->obj); + rxq_ctrl = rxq->ctrl; + if (rxq_ctrl->obj != NULL) { + priv->obj_ops.rxq_obj_release(rxq); LIST_REMOVE(rxq_ctrl->obj, next); mlx5_free(rxq_ctrl->obj); rxq_ctrl->obj = NULL; diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c index 7b984eff35f..d44d6d8e4c3 100644 --- a/drivers/net/mlx5/mlx5_rxtx.c +++ b/drivers/net/mlx5/mlx5_rxtx.c @@ -374,11 +374,9 @@ mlx5_queue_state_modify_primary(struct rte_eth_dev *dev, struct mlx5_priv *priv = dev->data->dev_private; if (sm->is_wq) { - struct mlx5_rxq_data *rxq = (*priv->rxqs)[sm->queue_id]; - struct mlx5_rxq_ctrl *rxq_ctrl = - container_of(rxq, struct mlx5_rxq_ctrl, rxq); + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, sm->queue_id); - ret = priv->obj_ops.rxq_obj_modify(rxq_ctrl->obj, sm->state); + ret = priv->obj_ops.rxq_obj_modify(rxq, sm->state); if (ret) { DRV_LOG(ERR, "Cannot change Rx WQ state to %u - %s", sm->state, strerror(errno)); diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c index f376f4d6fc4..b3188f510fb 100644 --- a/drivers/net/mlx5/mlx5_trigger.c +++ b/drivers/net/mlx5/mlx5_trigger.c @@ -180,7 +180,7 @@ mlx5_rxq_start(struct rte_eth_dev *dev) rte_errno = ENOMEM; goto error; } - ret = priv->obj_ops.rxq_obj_new(dev, i); + ret = priv->obj_ops.rxq_obj_new(rxq); if (ret) { 
mlx5_free(rxq_ctrl->obj); goto error; diff --git a/drivers/net/mlx5/mlx5_vlan.c b/drivers/net/mlx5/mlx5_vlan.c index 60f97f2d2d1..586ba7166cb 100644 --- a/drivers/net/mlx5/mlx5_vlan.c +++ b/drivers/net/mlx5/mlx5_vlan.c @@ -91,11 +91,11 @@ void mlx5_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_data *rxq = (*priv->rxqs)[queue]; - struct mlx5_rxq_ctrl *rxq_ctrl = - container_of(rxq, struct mlx5_rxq_ctrl, rxq); + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, queue); + struct mlx5_rxq_data *rxq_data = &rxq->ctrl->rxq; int ret = 0; + MLX5_ASSERT(rxq != NULL && rxq->ctrl != NULL); /* Validate hw support */ if (!priv->config.hw_vlan_strip) { DRV_LOG(ERR, "port %u VLAN stripping is not supported", @@ -109,20 +109,20 @@ mlx5_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on) return; } DRV_LOG(DEBUG, "port %u set VLAN stripping offloads %d for port %uqueue %d", - dev->data->port_id, on, rxq->port_id, queue); - if (!rxq_ctrl->obj) { + dev->data->port_id, on, rxq_data->port_id, queue); + if (rxq->ctrl->obj == NULL) { /* Update related bits in RX queue. */ - rxq->vlan_strip = !!on; + rxq_data->vlan_strip = !!on; return; } - ret = priv->obj_ops.rxq_obj_modify_vlan_strip(rxq_ctrl->obj, on); + ret = priv->obj_ops.rxq_obj_modify_vlan_strip(rxq, on); if (ret) { DRV_LOG(ERR, "Port %u failed to modify object stripping mode:" " %s", dev->data->port_id, strerror(rte_errno)); return; } /* Update related bits in RX queue. */ - rxq->vlan_strip = !!on; + rxq_data->vlan_strip = !!on; } /**
From patchwork Sun Sep 26 11:19:03 2021
From: Xueming Li <xuemingl@nvidia.com>
To: dev@dpdk.org
CC: Lior Margalit, Matan Azrad, Viacheslav Ovsiienko
Date: Sun, 26 Sep 2021 19:19:03 +0800
Message-ID: <20210926111904.237736-11-xuemingl@nvidia.com>
In-Reply-To: <20210926111904.237736-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH 10/11] net/mlx5: remove Rx queue data list from device

The Rx queue data list (priv->rxqs) can be replaced by the Rx queue list
(priv->rxq_privs). Remove it and replace all accesses with the universal
wrapper API.
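For orientation, the wrapper API mentioned above takes roughly the following shape. This is a minimal sketch reconstructed from how the callers in this patch use it; the real definitions land earlier in this series, and their exact NULL and bounds handling may differ:

	/* Sketch only: illustrates the accessor layering this series converts
	 * callers to, not the authoritative definitions. */
	static inline struct mlx5_rxq_priv *
	mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx)
	{
		struct mlx5_priv *priv = dev->data->dev_private;

		/* Per-port, non-shared Rx queue private data. */
		if (priv->rxq_privs == NULL || idx >= priv->rxqs_n)
			return NULL;
		return (*priv->rxq_privs)[idx];
	}

	static inline struct mlx5_rxq_ctrl *
	mlx5_rxq_ctrl_get(struct rte_eth_dev *dev, uint16_t idx)
	{
		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);

		/* Control structure, possibly shared between queues. */
		return rxq == NULL ? NULL : rxq->ctrl;
	}

	static inline struct mlx5_rxq_data *
	mlx5_rxq_data_get(struct rte_eth_dev *dev, uint16_t idx)
	{
		struct mlx5_rxq_ctrl *ctrl = mlx5_rxq_ctrl_get(dev, idx);

		/* Data-path structure embedded in the control structure. */
		return ctrl == NULL ? NULL : &ctrl->rxq;
	}

With these wrappers, every former (*priv->rxqs)[idx] dereference becomes a call that tolerates an unconfigured queue, which is what allows the priv->rxqs array itself to be dropped below.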
Signed-off-by: Xueming Li --- drivers/net/mlx5/linux/mlx5_verbs.c | 7 ++--- drivers/net/mlx5/mlx5.c | 10 +------ drivers/net/mlx5/mlx5.h | 1 - drivers/net/mlx5/mlx5_devx.c | 13 +++++---- drivers/net/mlx5/mlx5_ethdev.c | 6 +--- drivers/net/mlx5/mlx5_flow.c | 45 +++++++++++++++-------------- drivers/net/mlx5/mlx5_rss.c | 6 ++-- drivers/net/mlx5/mlx5_rx.c | 16 ++++------ drivers/net/mlx5/mlx5_rx.h | 9 +++--- drivers/net/mlx5/mlx5_rxq.c | 23 ++++++--------- drivers/net/mlx5/mlx5_rxtx_vec.c | 6 ++-- drivers/net/mlx5/mlx5_stats.c | 9 +++--- drivers/net/mlx5/mlx5_trigger.c | 2 +- 13 files changed, 66 insertions(+), 87 deletions(-) diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c index a2a9b9c1f98..0e68a13208b 100644 --- a/drivers/net/mlx5/linux/mlx5_verbs.c +++ b/drivers/net/mlx5/linux/mlx5_verbs.c @@ -527,11 +527,10 @@ mlx5_ibv_ind_table_new(struct rte_eth_dev *dev, const unsigned int log_n, MLX5_ASSERT(ind_tbl); for (i = 0; i != ind_tbl->queues_n; ++i) { - struct mlx5_rxq_data *rxq = (*priv->rxqs)[ind_tbl->queues[i]]; - struct mlx5_rxq_ctrl *rxq_ctrl = - container_of(rxq, struct mlx5_rxq_ctrl, rxq); + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, + ind_tbl->queues[i]); - wq[i] = rxq_ctrl->obj->wq; + wq[i] = rxq->ctrl->obj->wq; } MLX5_ASSERT(i > 0); /* Finalise indirection table. */ diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 749729d6fbe..6681b74c8f0 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1572,20 +1572,12 @@ mlx5_dev_close(struct rte_eth_dev *dev) mlx5_mp_os_req_stop_rxtx(dev); /* Free the eCPRI flex parser resource. */ mlx5_flex_parser_ecpri_release(dev); - if (priv->rxqs != NULL) { + if (priv->rxq_privs != NULL) { /* XXX race condition if mlx5_rx_burst() is still running. */ rte_delay_us_sleep(1000); for (i = 0; (i != priv->rxqs_n); ++i) mlx5_rxq_release(dev, i); priv->rxqs_n = 0; - priv->rxqs = NULL; - } - if (priv->representor) { - /* Each representor has a dedicated interrupts handler */ - mlx5_free(dev->intr_handle); - dev->intr_handle = NULL; - } - if (priv->rxq_privs != NULL) { mlx5_free(priv->rxq_privs); priv->rxq_privs = NULL; } diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index c674f5ba9c4..6a9c99a8826 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -1406,7 +1406,6 @@ struct mlx5_priv { unsigned int rxqs_n; /* RX queues array size. */ unsigned int txqs_n; /* TX queues array size. */ struct mlx5_rxq_priv *(*rxq_privs)[]; /* RX queue non-shared data. */ - struct mlx5_rxq_data *(*rxqs)[]; /* (Shared) RX queues. */ struct mlx5_txq_data *(*txqs)[]; /* TX queues. */ struct rte_mempool *mprq_mp; /* Mempool for Multi-Packet RQ. */ struct rte_eth_rss_conf rss_conf; /* RSS configuration. */ diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c index d219e255f0a..371ff387c99 100644 --- a/drivers/net/mlx5/mlx5_devx.c +++ b/drivers/net/mlx5/mlx5_devx.c @@ -682,15 +682,16 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key, /* NULL queues designate drop queue. */ if (ind_tbl->queues != NULL) { - struct mlx5_rxq_data *rxq_data = - (*priv->rxqs)[ind_tbl->queues[0]]; - struct mlx5_rxq_ctrl *rxq_ctrl = - container_of(rxq_data, struct mlx5_rxq_ctrl, rxq); - rxq_obj_type = rxq_ctrl->type; + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, + ind_tbl->queues[0]); + rxq_obj_type = rxq->ctrl->type; /* Enable TIR LRO only if all the queues were configured for. 
*/ for (i = 0; i < ind_tbl->queues_n; ++i) { - if (!(*priv->rxqs)[ind_tbl->queues[i]]->lro) { + struct mlx5_rxq_data *rxq_i = + mlx5_rxq_data_get(dev, ind_tbl->queues[i]); + + if (rxq_i != NULL && !rxq_i->lro) { lro = false; break; } diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c index 7071a5f7039..16e96da8d24 100644 --- a/drivers/net/mlx5/mlx5_ethdev.c +++ b/drivers/net/mlx5/mlx5_ethdev.c @@ -114,7 +114,6 @@ mlx5_dev_configure(struct rte_eth_dev *dev) rte_errno = ENOMEM; return -rte_errno; } - priv->rxqs = (void *)dev->data->rx_queues; priv->txqs = (void *)dev->data->tx_queues; if (txqs_n != priv->txqs_n) { DRV_LOG(INFO, "port %u Tx queues number update: %u -> %u", @@ -171,11 +170,8 @@ mlx5_dev_configure_rss_reta(struct rte_eth_dev *dev) return -rte_errno; } for (i = 0, j = 0; i < rxqs_n; i++) { - struct mlx5_rxq_data *rxq_data; - struct mlx5_rxq_ctrl *rxq_ctrl; + struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i); - rxq_data = (*priv->rxqs)[i]; - rxq_ctrl = container_of(rxq_data, struct mlx5_rxq_ctrl, rxq); if (rxq_ctrl && rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) rss_queue_arr[j++] = i; } diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index c10b9112593..49a74edd2e6 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -1166,10 +1166,11 @@ flow_drv_rxq_flags_set(struct rte_eth_dev *dev, return; for (i = 0; i != ind_tbl->queues_n; ++i) { int idx = ind_tbl->queues[i]; - struct mlx5_rxq_ctrl *rxq_ctrl = - container_of((*priv->rxqs)[idx], - struct mlx5_rxq_ctrl, rxq); + struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, idx); + MLX5_ASSERT(rxq_ctrl != NULL); + if (rxq_ctrl == NULL) + continue; /* * To support metadata register copy on Tx loopback, * this must be always enabled (metadata may arive @@ -1261,10 +1262,11 @@ flow_drv_rxq_flags_trim(struct rte_eth_dev *dev, MLX5_ASSERT(dev->data->dev_started); for (i = 0; i != ind_tbl->queues_n; ++i) { int idx = ind_tbl->queues[i]; - struct mlx5_rxq_ctrl *rxq_ctrl = - container_of((*priv->rxqs)[idx], - struct mlx5_rxq_ctrl, rxq); + struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, idx); + MLX5_ASSERT(rxq_ctrl != NULL); + if (rxq_ctrl == NULL) + continue; if (priv->config.dv_flow_en && priv->config.dv_xmeta_en != MLX5_XMETA_MODE_LEGACY && mlx5_flow_ext_mreg_supported(dev)) { @@ -1325,18 +1327,16 @@ flow_rxq_flags_clear(struct rte_eth_dev *dev) unsigned int i; for (i = 0; i != priv->rxqs_n; ++i) { - struct mlx5_rxq_ctrl *rxq_ctrl; + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, i); unsigned int j; - if (!(*priv->rxqs)[i]) + if (rxq == NULL || rxq->ctrl == NULL) continue; - rxq_ctrl = container_of((*priv->rxqs)[i], - struct mlx5_rxq_ctrl, rxq); - rxq_ctrl->flow_mark_n = 0; - rxq_ctrl->rxq.mark = 0; + rxq->ctrl->flow_mark_n = 0; + rxq->ctrl->rxq.mark = 0; for (j = 0; j != MLX5_FLOW_TUNNEL; ++j) - rxq_ctrl->flow_tunnels_n[j] = 0; - rxq_ctrl->rxq.tunnel = 0; + rxq->ctrl->flow_tunnels_n[j] = 0; + rxq->ctrl->rxq.tunnel = 0; } } @@ -1350,13 +1350,15 @@ void mlx5_flow_rxq_dynf_metadata_set(struct rte_eth_dev *dev) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_data *data; unsigned int i; for (i = 0; i != priv->rxqs_n; ++i) { - if (!(*priv->rxqs)[i]) + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, i); + struct mlx5_rxq_data *data; + + if (rxq == NULL || rxq->ctrl == NULL) continue; - data = (*priv->rxqs)[i]; + data = &rxq->ctrl->rxq; if (!rte_flow_dynf_metadata_avail()) { data->dynf_meta = 0; data->flow_meta_mask = 0; @@ -1547,7 +1549,7 
@@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action, RTE_FLOW_ERROR_TYPE_ACTION_CONF, &queue->index, "queue index out of range"); - if (!(*priv->rxqs)[queue->index]) + if (mlx5_rxq_get(dev, queue->index) == NULL) return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION_CONF, &queue->index, @@ -1578,7 +1580,7 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action, * 0 on success, a negative errno code on error. */ static int -mlx5_validate_rss_queues(const struct rte_eth_dev *dev, +mlx5_validate_rss_queues(struct rte_eth_dev *dev, const uint16_t *queues, uint32_t queues_n, const char **error, uint32_t *queue_idx) { @@ -1594,13 +1596,12 @@ mlx5_validate_rss_queues(const struct rte_eth_dev *dev, *queue_idx = i; return -EINVAL; } - if (!(*priv->rxqs)[queues[i]]) { + rxq_ctrl = mlx5_rxq_ctrl_get(dev, queues[i]); + if (rxq_ctrl == NULL) { *error = "queue is not configured"; *queue_idx = i; return -EINVAL; } - rxq_ctrl = container_of((*priv->rxqs)[queues[i]], - struct mlx5_rxq_ctrl, rxq); if (i == 0) rxq_type = rxq_ctrl->type; if (rxq_type != rxq_ctrl->type) { diff --git a/drivers/net/mlx5/mlx5_rss.c b/drivers/net/mlx5/mlx5_rss.c index c32129cdc2b..9ffc44b179f 100644 --- a/drivers/net/mlx5/mlx5_rss.c +++ b/drivers/net/mlx5/mlx5_rss.c @@ -65,9 +65,11 @@ mlx5_rss_hash_update(struct rte_eth_dev *dev, priv->rss_conf.rss_hf = rss_conf->rss_hf; /* Enable the RSS hash in all Rx queues. */ for (i = 0, idx = 0; idx != priv->rxqs_n; ++i) { - if (!(*priv->rxqs)[i]) + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, i); + + if (rxq == NULL || rxq->ctrl == NULL) continue; - (*priv->rxqs)[i]->rss_hash = !!rss_conf->rss_hf && + rxq->ctrl->rxq.rss_hash = !!rss_conf->rss_hf && !!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS); ++idx; } diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c index 09de26c0d39..13fbd12b22c 100644 --- a/drivers/net/mlx5/mlx5_rx.c +++ b/drivers/net/mlx5/mlx5_rx.c @@ -148,10 +148,8 @@ void mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t rx_queue_id, struct rte_eth_rxq_info *qinfo) { - struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_data *rxq = (*priv->rxqs)[rx_queue_id]; - struct mlx5_rxq_ctrl *rxq_ctrl = - container_of(rxq, struct mlx5_rxq_ctrl, rxq); + struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, rx_queue_id); + struct mlx5_rxq_data *rxq = mlx5_rxq_data_get(dev, rx_queue_id); if (!rxq) return; @@ -162,7 +160,7 @@ mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t rx_queue_id, qinfo->conf.rx_thresh.wthresh = 0; qinfo->conf.rx_free_thresh = rxq->rq_repl_thresh; qinfo->conf.rx_drop_en = 1; - qinfo->conf.rx_deferred_start = rxq_ctrl ? 0 : 1; + qinfo->conf.rx_deferred_start = rxq_ctrl->obj == NULL ? 0 : 1; qinfo->conf.offloads = dev->data->dev_conf.rxmode.offloads; qinfo->scattered_rx = dev->data->scattered_rx; qinfo->nb_desc = mlx5_rxq_mprq_enabled(rxq) ? 
@@ -191,10 +189,8 @@ mlx5_rx_burst_mode_get(struct rte_eth_dev *dev, struct rte_eth_burst_mode *mode) { eth_rx_burst_t pkt_burst = dev->rx_pkt_burst; - struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_data *rxq; + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, rx_queue_id); - rxq = (*priv->rxqs)[rx_queue_id]; if (!rxq) { rte_errno = EINVAL; return -rte_errno; @@ -245,15 +241,13 @@ mlx5_rx_burst_mode_get(struct rte_eth_dev *dev, uint32_t mlx5_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id) { - struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_data *rxq; + struct mlx5_rxq_data *rxq = mlx5_rxq_data_get(dev, rx_queue_id); if (dev->rx_pkt_burst == NULL || dev->rx_pkt_burst == removed_rx_burst) { rte_errno = ENOTSUP; return -rte_errno; } - rxq = (*priv->rxqs)[rx_queue_id]; if (!rxq) { rte_errno = EINVAL; return -rte_errno; diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h index 25f7fc2071a..161399c764d 100644 --- a/drivers/net/mlx5/mlx5_rx.h +++ b/drivers/net/mlx5/mlx5_rx.h @@ -606,14 +606,13 @@ mlx5_mprq_enabled(struct rte_eth_dev *dev) return 0; /* All the configured queues should be enabled. */ for (i = 0; i < priv->rxqs_n; ++i) { - struct mlx5_rxq_data *rxq = (*priv->rxqs)[i]; - struct mlx5_rxq_ctrl *rxq_ctrl = container_of - (rxq, struct mlx5_rxq_ctrl, rxq); + struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i); - if (rxq == NULL || rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD) + if (rxq_ctrl == NULL || + rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD) continue; n_ibv++; - if (mlx5_rxq_mprq_enabled(rxq)) + if (mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq)) ++n; } /* Multi-Packet RQ can't be partially configured. */ diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index 98408da3c8e..cde01a48022 100644 --- a/drivers/net/mlx5/mlx5_rxq.c +++ b/drivers/net/mlx5/mlx5_rxq.c @@ -729,7 +729,7 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, } DRV_LOG(DEBUG, "port %u adding Rx queue %u to list", dev->data->port_id, idx); - (*priv->rxqs)[idx] = &rxq_ctrl->rxq; + dev->data->rx_queues[idx] = &rxq_ctrl->rxq; return 0; } @@ -811,7 +811,7 @@ mlx5_rx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t idx, } DRV_LOG(DEBUG, "port %u adding hairpin Rx queue %u to list", dev->data->port_id, idx); - (*priv->rxqs)[idx] = &rxq_ctrl->rxq; + dev->data->rx_queues[idx] = &rxq_ctrl->rxq; return 0; } @@ -1712,8 +1712,7 @@ mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx) { struct mlx5_priv *priv = dev->data->dev_private; - if (priv->rxq_privs == NULL) - return NULL; + MLX5_ASSERT(priv->rxq_privs != NULL); return (*priv->rxq_privs)[idx]; } @@ -1799,7 +1798,7 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx) LIST_REMOVE(rxq, owner_entry); LIST_REMOVE(rxq_ctrl, next); mlx5_free(rxq_ctrl); - (*priv->rxqs)[idx] = NULL; + dev->data->rx_queues[idx] = NULL; mlx5_free(rxq); (*priv->rxq_privs)[idx] = NULL; } @@ -1845,14 +1844,10 @@ enum mlx5_rxq_type mlx5_rxq_get_type(struct rte_eth_dev *dev, uint16_t idx) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_ctrl *rxq_ctrl = NULL; + struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, idx); - if (idx < priv->rxqs_n && (*priv->rxqs)[idx]) { - rxq_ctrl = container_of((*priv->rxqs)[idx], - struct mlx5_rxq_ctrl, - rxq); + if (idx < priv->rxqs_n && rxq_ctrl != NULL) return rxq_ctrl->type; - } return MLX5_RXQ_TYPE_UNDEFINED; } @@ -2619,13 +2614,13 @@ mlx5_rxq_timestamp_set(struct rte_eth_dev *dev) { struct mlx5_priv *priv = dev->data->dev_private; 
struct mlx5_dev_ctx_shared *sh = priv->sh; - struct mlx5_rxq_data *data; unsigned int i; for (i = 0; i != priv->rxqs_n; ++i) { - if (!(*priv->rxqs)[i]) + struct mlx5_rxq_data *data = mlx5_rxq_data_get(dev, i); + + if (data == NULL) continue; - data = (*priv->rxqs)[i]; data->sh = sh; data->rt_timestamp = priv->config.rt_timestamp; } diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.c b/drivers/net/mlx5/mlx5_rxtx_vec.c index 511681841ca..6212ce8247d 100644 --- a/drivers/net/mlx5/mlx5_rxtx_vec.c +++ b/drivers/net/mlx5/mlx5_rxtx_vec.c @@ -578,11 +578,11 @@ mlx5_check_vec_rx_support(struct rte_eth_dev *dev) return -ENOTSUP; /* All the configured queues should support. */ for (i = 0; i < priv->rxqs_n; ++i) { - struct mlx5_rxq_data *rxq = (*priv->rxqs)[i]; + struct mlx5_rxq_data *rxq_data = mlx5_rxq_data_get(dev, i); - if (!rxq) + if (!rxq_data) continue; - if (mlx5_rxq_check_vec_support(rxq) < 0) + if (mlx5_rxq_check_vec_support(rxq_data) < 0) break; } if (i != priv->rxqs_n) diff --git a/drivers/net/mlx5/mlx5_stats.c b/drivers/net/mlx5/mlx5_stats.c index ae2f5668a74..732775954ad 100644 --- a/drivers/net/mlx5/mlx5_stats.c +++ b/drivers/net/mlx5/mlx5_stats.c @@ -107,7 +107,7 @@ mlx5_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) memset(&tmp, 0, sizeof(tmp)); /* Add software counters. */ for (i = 0; (i != priv->rxqs_n); ++i) { - struct mlx5_rxq_data *rxq = (*priv->rxqs)[i]; + struct mlx5_rxq_data *rxq = mlx5_rxq_data_get(dev, i); if (rxq == NULL) continue; @@ -181,10 +181,11 @@ mlx5_stats_reset(struct rte_eth_dev *dev) unsigned int i; for (i = 0; (i != priv->rxqs_n); ++i) { - if ((*priv->rxqs)[i] == NULL) + struct mlx5_rxq_data *rxq_data = mlx5_rxq_data_get(dev, i); + + if (rxq_data == NULL) continue; - memset(&(*priv->rxqs)[i]->stats, 0, - sizeof(struct mlx5_rxq_stats)); + memset(&rxq_data->stats, 0, sizeof(struct mlx5_rxq_stats)); } for (i = 0; (i != priv->txqs_n); ++i) { if ((*priv->txqs)[i] == NULL) diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c index b3188f510fb..1e865e74e39 100644 --- a/drivers/net/mlx5/mlx5_trigger.c +++ b/drivers/net/mlx5/mlx5_trigger.c @@ -176,7 +176,7 @@ mlx5_rxq_start(struct rte_eth_dev *dev) if (!rxq_ctrl->obj) { DRV_LOG(ERR, "Port %u Rx queue %u can't allocate resources.", - dev->data->port_id, (*priv->rxqs)[i]->idx); + dev->data->port_id, i); rte_errno = ENOMEM; goto error; }
From patchwork Sun Sep 26 11:19:04 2021
From: Xueming Li <xuemingl@nvidia.com>
To: dev@dpdk.org
CC: Lior Margalit, Matan Azrad, Viacheslav Ovsiienko
Date: Sun, 26 Sep 2021 19:19:04 +0800
Message-ID: <20210926111904.237736-12-xuemingl@nvidia.com>
In-Reply-To: <20210926111904.237736-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH 11/11] net/mlx5: support shared Rx queue

This patch introduces shared RXQ. All shared Rx queues with the same
group and queue ID share the same rxq_ctrl. Because rxq_ctrl and
rxq_data are shared, all queues from different member ports share the
same WQ and CQ: essentially one Rx WQ, and mbufs are posted into this
singleton WQ. The shared rxq_data is set as the rxq object into the
device Rx queues of all member ports and is used for receiving packets.
Polling the queue of any member port may return packets from any
member; mbuf->port identifies the source port.
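A minimal application-side sketch of how such a shared Rx queue would be used, assuming the RTE_ETH_RX_OFFLOAD_SHARED_RXQ flag and the shared_group field of struct rte_eth_rxconf from the companion ethdev-layer changes; handle_packet() is a hypothetical placeholder, not part of this series:

	#include <rte_ethdev.h>
	#include <rte_mbuf.h>

	/* Hypothetical application handler. */
	void handle_packet(uint16_t src_port, struct rte_mbuf *m);

	/*
	 * Configure queue 'q' of a member port into shared group 1; member
	 * ports configured with the same group and queue ID end up sharing
	 * one Rx WQ.
	 */
	static int
	setup_shared_rxq(uint16_t port, uint16_t q, uint16_t nb_desc,
			 unsigned int socket, struct rte_mempool *mp)
	{
		struct rte_eth_rxconf rxconf = {
			.offloads = RTE_ETH_RX_OFFLOAD_SHARED_RXQ,
			.shared_group = 1,
		};

		return rte_eth_rx_queue_setup(port, q, nb_desc, socket,
					      &rxconf, mp);
	}

	/*
	 * Polling any member port's queue drains the shared WQ; the real
	 * source port of each packet is carried in mbuf->port, not implied
	 * by the port polled.
	 */
	static void
	poll_shared_rxq(uint16_t any_member_port, uint16_t q)
	{
		struct rte_mbuf *pkts[32];
		uint16_t n = rte_eth_rx_burst(any_member_port, q, pkts, 32);
		uint16_t i;

		for (i = 0; i < n; i++)
			handle_packet(pkts[i]->port, pkts[i]);
	}

This is also why the statistics limitation documented below exists: member ports share one WQ, so per-port Rx counters in the same share group cannot be told apart.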
Signed-off-by: Xueming Li --- doc/guides/nics/features/mlx5.ini | 1 + doc/guides/nics/mlx5.rst | 6 + drivers/net/mlx5/linux/mlx5_os.c | 2 + drivers/net/mlx5/mlx5.h | 2 + drivers/net/mlx5/mlx5_devx.c | 9 +- drivers/net/mlx5/mlx5_rx.h | 7 + drivers/net/mlx5/mlx5_rxq.c | 208 ++++++++++++++++++++++++++---- drivers/net/mlx5/mlx5_trigger.c | 76 ++++++----- 8 files changed, 255 insertions(+), 56 deletions(-) diff --git a/doc/guides/nics/features/mlx5.ini b/doc/guides/nics/features/mlx5.ini index f01abd4231f..ff5e669acc1 100644 --- a/doc/guides/nics/features/mlx5.ini +++ b/doc/guides/nics/features/mlx5.ini @@ -11,6 +11,7 @@ Removal event = Y Rx interrupt = Y Fast mbuf free = Y Queue start/stop = Y +Shared Rx queue = Y Burst mode info = Y Power mgmt address monitor = Y MTU update = Y diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst index ca3e7f560da..494ee957c1d 100644 --- a/doc/guides/nics/mlx5.rst +++ b/doc/guides/nics/mlx5.rst @@ -113,6 +113,7 @@ Features - Connection tracking. - Sub-Function representors. - Sub-Function. +- Shared Rx queue. Limitations @@ -464,6 +465,11 @@ Limitations - In order to achieve best insertion rate, application should manage the flows per lcore. - Better to disable memory reclaim by setting ``reclaim_mem_mode`` to 0 to accelerate the flow object allocation and release with cache. + Shared Rx queue: + + - Counter of received packets and bytes number of devices in same share group are same. + - Counter of received packets and bytes number of queues in same group and queue ID are same. + Statistics ---------- diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index 27233b679c6..b631768b4f9 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -457,6 +457,7 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv) mlx5_glue->dr_create_flow_action_default_miss(); if (!sh->default_miss_action) DRV_LOG(WARNING, "Default miss action is not supported."); + LIST_INIT(&sh->shared_rxqs); return 0; error: /* Rollback the created objects. */ @@ -531,6 +532,7 @@ mlx5_os_free_shared_dr(struct mlx5_priv *priv) MLX5_ASSERT(sh && sh->refcnt); if (sh->refcnt > 1) return; + MLX5_ASSERT(LIST_EMPTY(&sh->shared_rxqs)); #ifdef HAVE_MLX5DV_DR if (sh->rx_domain) { mlx5_glue->dr_destroy_domain(sh->rx_domain); diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 6a9c99a8826..c671c8a354f 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -1193,6 +1193,7 @@ struct mlx5_dev_ctx_shared { struct mlx5_flex_parser_profiles fp[MLX5_FLEX_PARSER_MAX]; /* Flex parser profiles information. */ void *devx_rx_uar; /* DevX UAR for Rx. */ + LIST_HEAD(shared_rxqs, mlx5_rxq_ctrl) shared_rxqs; /* Shared RXQs. */ struct mlx5_aso_age_mng *aso_age_mng; /* Management data for aging mechanism using ASO Flow Hit. */ struct mlx5_geneve_tlv_option_resource *geneve_tlv_option_resource; @@ -1257,6 +1258,7 @@ struct mlx5_rxq_obj { }; struct mlx5_devx_obj *rq; /* DevX RQ object for hairpin. */ struct { + struct mlx5_devx_rmp devx_rmp; /* RMP for shared RQ. */ struct mlx5_devx_cq cq_obj; /* DevX CQ object. 
*/ void *devx_channel; }; diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c index 371ff387c99..01561639038 100644 --- a/drivers/net/mlx5/mlx5_devx.c +++ b/drivers/net/mlx5/mlx5_devx.c @@ -170,6 +170,8 @@ mlx5_rxq_devx_obj_release(struct mlx5_rxq_priv *rxq) memset(&rxq->devx_rq, 0, sizeof(rxq->devx_rq)); mlx5_devx_cq_destroy(&rxq_obj->cq_obj); memset(&rxq_obj->cq_obj, 0, sizeof(rxq_obj->cq_obj)); + if (!RXQ_CTRL_LAST(rxq)) + return; if (rxq_obj->devx_channel) { mlx5_os_devx_destroy_event_channel (rxq_obj->devx_channel); @@ -270,6 +272,8 @@ mlx5_rxq_create_devx_rq_resources(struct mlx5_rxq_priv *rxq) rq_attr.wq_attr.pd = priv->sh->pdn; rq_attr.counter_set_id = priv->counter_set_id; /* Create RQ using DevX API. */ + if (rxq_data->shared) + rxq->devx_rq.rmp = &rxq_ctrl->obj->devx_rmp; return mlx5_devx_rq_create(priv->sh->ctx, &rxq->devx_rq, wqe_size, log_desc_n, &rq_attr, rxq_ctrl->socket); @@ -495,7 +499,10 @@ mlx5_rxq_devx_obj_new(struct mlx5_rxq_priv *rxq) ret = mlx5_devx_modify_rq(rxq, MLX5_RXQ_MOD_RST2RDY); if (ret) goto error; - rxq_data->wqes = (void *)(uintptr_t)rxq->devx_rq.wq.umem_buf; + if (rxq_data->shared) + rxq_data->wqes = (void *)(uintptr_t)tmpl->devx_rmp.wq.umem_buf; + else + rxq_data->wqes = (void *)(uintptr_t)rxq->devx_rq.wq.umem_buf; rxq_data->rq_db = (uint32_t *)(uintptr_t)rxq->devx_rq.db_rec; mlx5_rxq_initialize(rxq_data); priv->dev_data->rx_queue_state[rxq->idx] = RTE_ETH_QUEUE_STATE_STARTED; diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h index 161399c764d..a83fa6e8db1 100644 --- a/drivers/net/mlx5/mlx5_rx.h +++ b/drivers/net/mlx5/mlx5_rx.h @@ -26,6 +26,9 @@ #define RXQ_PORT(rxq_ctrl) LIST_FIRST(&(rxq_ctrl)->owners)->priv #define RXQ_DEV(rxq_ctrl) ETH_DEV(RXQ_PORT(rxq_ctrl)) #define RXQ_PORT_ID(rxq_ctrl) PORT_ID(RXQ_PORT(rxq_ctrl)) +#define RXQ_CTRL_LAST(rxq) \ + (LIST_FIRST(&(rxq)->ctrl->owners) == (rxq) && \ + LIST_NEXT((rxq), owner_entry) == NULL) struct mlx5_rxq_stats { #ifdef MLX5_PMD_SOFT_COUNTERS @@ -107,6 +110,7 @@ struct mlx5_rxq_data { unsigned int lro:1; /* Enable LRO. */ unsigned int dynf_meta:1; /* Dynamic metadata is configured. */ unsigned int mcqe_format:3; /* CQE compression format. */ + unsigned int shared:1; /* Shared RXQ. */ volatile uint32_t *rq_db; volatile uint32_t *cq_db; uint16_t port_id; @@ -169,6 +173,9 @@ struct mlx5_rxq_ctrl { struct mlx5_dev_ctx_shared *sh; /* Shared context. */ enum mlx5_rxq_type type; /* Rxq type. */ unsigned int socket; /* CPU socket ID for allocations. */ + LIST_ENTRY(mlx5_rxq_ctrl) share_entry; /* Entry in shared RXQ list. */ + uint32_t share_group; /* Group ID of shared RXQ. */ + unsigned int started:1; /* Whether (shared) RXQ has been started. */ unsigned int irq:1; /* Whether IRQ is enabled. */ uint32_t flow_mark_n; /* Number of Mark/Flag flows using this Queue. */ uint32_t flow_tunnels_n[MLX5_FLOW_TUNNEL]; /* Tunnels counters. */ diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index cde01a48022..45f78ad076b 100644 --- a/drivers/net/mlx5/mlx5_rxq.c +++ b/drivers/net/mlx5/mlx5_rxq.c @@ -28,6 +28,7 @@ #include "mlx5_rx.h" #include "mlx5_utils.h" #include "mlx5_autoconf.h" +#include "mlx5_devx.h" /* Default RSS hash key also used for ConnectX-3. 
*/ @@ -352,6 +353,9 @@ mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev) offloads |= DEV_RX_OFFLOAD_VLAN_STRIP; if (MLX5_LRO_SUPPORTED(dev)) offloads |= DEV_RX_OFFLOAD_TCP_LRO; + if (priv->config.hca_attr.mem_rq_rmp && + priv->obj_ops.rxq_obj_new == devx_obj_ops.rxq_obj_new) + offloads |= RTE_ETH_RX_OFFLOAD_SHARED_RXQ; return offloads; } @@ -648,6 +652,114 @@ mlx5_rx_queue_pre_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t *desc) return 0; } +/** + * Get the shared Rx queue object that matches group and queue index. + * + * @param dev + * Pointer to Ethernet device structure. + * @param group + * Shared RXQ group. + * @param idx + * RX queue index. + * + * @return + * Shared RXQ object that matching, or NULL if not found. + */ +static struct mlx5_rxq_ctrl * +mlx5_shared_rxq_get(struct rte_eth_dev *dev, uint32_t group, uint16_t idx) +{ + struct mlx5_rxq_ctrl *rxq_ctrl; + struct mlx5_priv *priv = dev->data->dev_private; + + LIST_FOREACH(rxq_ctrl, &priv->sh->shared_rxqs, share_entry) { + if (rxq_ctrl->share_group == group && rxq_ctrl->rxq.idx == idx) + return rxq_ctrl; + } + return NULL; +} + +/** + * Check whether requested Rx queue configuration matches shared RXQ. + * + * @param rxq_ctrl + * Pointer to shared RXQ. + * @param dev + * Pointer to Ethernet device structure. + * @param idx + * Queue index. + * @param desc + * Number of descriptors to configure in queue. + * @param socket + * NUMA socket on which memory must be allocated. + * @param[in] conf + * Thresholds parameters. + * @param mp + * Memory pool for buffer allocations. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +static bool +mlx5_shared_rxq_match(struct mlx5_rxq_ctrl *rxq_ctrl, struct rte_eth_dev *dev, + uint16_t idx, uint16_t desc, unsigned int socket, + const struct rte_eth_rxconf *conf, + struct rte_mempool *mp) +{ + struct mlx5_priv *spriv = LIST_FIRST(&rxq_ctrl->owners)->priv; + struct mlx5_priv *priv = dev->data->dev_private; + unsigned int mprq_stride_nums = priv->config.mprq.stride_num_n ? 
+ priv->config.mprq.stride_num_n : MLX5_MPRQ_STRIDE_NUM_N; + + RTE_SET_USED(conf); + if (rxq_ctrl->socket != socket) { + DRV_LOG(ERR, "port %u queue index %u failed to join shared group: socket mismatch", + dev->data->port_id, idx); + return false; + } + if (priv->config.mprq.enabled) + desc >>= mprq_stride_nums; + if (rxq_ctrl->rxq.elts_n != log2above(desc)) { + DRV_LOG(ERR, "port %u queue index %u failed to join shared group: descriptor number mismatch", + dev->data->port_id, idx); + return false; + } + if (priv->mtu != spriv->mtu) { + DRV_LOG(ERR, "port %u queue index %u failed to join shared group: mtu mismatch", + dev->data->port_id, idx); + return false; + } + if (priv->dev_data->dev_conf.intr_conf.rxq != + spriv->dev_data->dev_conf.intr_conf.rxq) { + DRV_LOG(ERR, "port %u queue index %u failed to join shared group: interrupt mismatch", + dev->data->port_id, idx); + return false; + } + if (!spriv->config.mprq.enabled && rxq_ctrl->rxq.mp != mp) { + DRV_LOG(ERR, "port %u queue index %u failed to join shared group: mempool mismatch", + dev->data->port_id, idx); + return false; + } + if (priv->config.hw_padding != spriv->config.hw_padding) { + DRV_LOG(ERR, "port %u queue index %u failed to join shared group: padding mismatch", + dev->data->port_id, idx); + return false; + } + if (memcmp(&priv->config.mprq, &spriv->config.mprq, + sizeof(priv->config.mprq)) != 0) { + DRV_LOG(ERR, "port %u queue index %u failed to join shared group: MPRQ mismatch", + dev->data->port_id, idx); + return false; + } + if (priv->config.cqe_comp != spriv->config.cqe_comp || + (priv->config.cqe_comp && + priv->config.cqe_comp_fmt != spriv->config.cqe_comp_fmt)) { + DRV_LOG(ERR, "port %u queue index %u failed to join shared group: CQE compression mismatch", + dev->data->port_id, idx); + return false; + } + return true; +} + /** * * @param dev @@ -673,12 +785,14 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, { struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_rxq_priv *rxq; - struct mlx5_rxq_ctrl *rxq_ctrl; + struct mlx5_rxq_ctrl *rxq_ctrl = NULL; struct rte_eth_rxseg_split *rx_seg = (struct rte_eth_rxseg_split *)conf->rx_seg; struct rte_eth_rxseg_split rx_single = {.mp = mp}; uint16_t n_seg = conf->rx_nseg; int res; + uint64_t offloads = conf->offloads | + dev->data->dev_conf.rxmode.offloads; if (mp) { /* @@ -690,9 +804,6 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, n_seg = 1; } if (n_seg > 1) { - uint64_t offloads = conf->offloads | - dev->data->dev_conf.rxmode.offloads; - /* The offloads should be checked on rte_eth_dev layer. */ MLX5_ASSERT(offloads & DEV_RX_OFFLOAD_SCATTER); if (!(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { @@ -704,9 +815,32 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, } MLX5_ASSERT(n_seg < MLX5_MAX_RXQ_NSEG); } + if (offloads & RTE_ETH_RX_OFFLOAD_SHARED_RXQ) { + if (!priv->config.hca_attr.mem_rq_rmp) { + DRV_LOG(ERR, "port %u queue index %u shared Rx queue not supported by fw", + dev->data->port_id, idx); + rte_errno = EINVAL; + return -rte_errno; + } + if (priv->obj_ops.rxq_obj_new != devx_obj_ops.rxq_obj_new) { + DRV_LOG(ERR, "port %u queue index %u shared Rx queue needs DevX api", + dev->data->port_id, idx); + rte_errno = EINVAL; + return -rte_errno; + } + /* Try to reuse shared RXQ. 
*/ + rxq_ctrl = mlx5_shared_rxq_get(dev, conf->shared_group, idx); + if (rxq_ctrl != NULL && + !mlx5_shared_rxq_match(rxq_ctrl, dev, idx, desc, socket, + conf, mp)) { + rte_errno = EINVAL; + return -rte_errno; + } + } res = mlx5_rx_queue_pre_setup(dev, idx, &desc); if (res) return res; + /* Allocate RXQ. */ rxq = mlx5_malloc(MLX5_MEM_ANY | MLX5_MEM_ZERO, sizeof(*rxq), 0, SOCKET_ID_ANY); if (!rxq) { @@ -718,14 +852,22 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, rxq->priv = priv; rxq->idx = idx; (*priv->rxq_privs)[idx] = rxq; - rxq_ctrl = mlx5_rxq_new(dev, rxq, desc, socket, conf, rx_seg, n_seg); - if (!rxq_ctrl) { - DRV_LOG(ERR, "port %u unable to allocate rx queue index %u", - dev->data->port_id, idx); - mlx5_free(rxq); - (*priv->rxq_privs)[idx] = NULL; - rte_errno = ENOMEM; - return -rte_errno; + if (rxq_ctrl != NULL) { + /* Join owner list of shared RXQ. */ + LIST_INSERT_HEAD(&rxq_ctrl->owners, rxq, owner_entry); + rxq->ctrl = rxq_ctrl; + } else { + /* Create new shared RXQ. */ + rxq_ctrl = mlx5_rxq_new(dev, rxq, desc, socket, conf, rx_seg, + n_seg); + if (rxq_ctrl == NULL) { + DRV_LOG(ERR, "port %u unable to allocate rx queue index %u", + dev->data->port_id, idx); + mlx5_free(rxq); + (*priv->rxq_privs)[idx] = NULL; + rte_errno = ENOMEM; + return -rte_errno; + } } DRV_LOG(DEBUG, "port %u adding Rx queue %u to list", dev->data->port_id, idx); @@ -1071,6 +1213,9 @@ mlx5_rxq_obj_verify(struct rte_eth_dev *dev) struct mlx5_rxq_obj *rxq_obj; LIST_FOREACH(rxq_obj, &priv->rxqsobj, next) { + if (rxq_obj->rxq_ctrl->rxq.shared && + !LIST_EMPTY(&rxq_obj->rxq_ctrl->owners)) + continue; DRV_LOG(DEBUG, "port %u Rx queue %u still referenced", dev->data->port_id, rxq_obj->rxq_ctrl->rxq.idx); ++ret; @@ -1348,6 +1493,10 @@ mlx5_rxq_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq, return NULL; } LIST_INIT(&tmpl->owners); + if (offloads & RTE_ETH_RX_OFFLOAD_SHARED_RXQ) { + tmpl->rxq.shared = 1; + LIST_INSERT_HEAD(&priv->sh->shared_rxqs, tmpl, share_entry); + } rxq->ctrl = tmpl; LIST_INSERT_HEAD(&tmpl->owners, rxq, owner_entry); MLX5_ASSERT(n_seg && n_seg <= MLX5_MAX_RXQ_NSEG); @@ -1771,6 +1920,7 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx) struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_rxq_priv *rxq; struct mlx5_rxq_ctrl *rxq_ctrl; + bool free_ctrl; if (priv->rxq_privs == NULL) return 0; @@ -1780,24 +1930,36 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx) if (mlx5_rxq_deref(dev, idx) > 1) return 1; rxq_ctrl = rxq->ctrl; - if (rxq_ctrl->obj != NULL) { + /* If the last entry in share RXQ. 
*/ + free_ctrl = RXQ_CTRL_LAST(rxq); + if (rxq->devx_rq.rq != NULL) priv->obj_ops.rxq_obj_release(rxq); - LIST_REMOVE(rxq_ctrl->obj, next); - mlx5_free(rxq_ctrl->obj); - rxq_ctrl->obj = NULL; + if (free_ctrl) { + if (rxq_ctrl->obj != NULL) { + LIST_REMOVE(rxq_ctrl->obj, next); + mlx5_free(rxq_ctrl->obj); + rxq_ctrl->obj = NULL; + } + rxq_ctrl->started = false; } if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) { - rxq_free_elts(rxq_ctrl); + if (free_ctrl) + rxq_free_elts(rxq_ctrl); dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STOPPED; } if (!__atomic_load_n(&rxq->refcnt, __ATOMIC_RELAXED)) { - if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) { - mlx5_mr_btree_free(&rxq_ctrl->rxq.mr_ctrl.cache_bh); - mlx5_mprq_free_mp(dev, rxq_ctrl); - } LIST_REMOVE(rxq, owner_entry); - LIST_REMOVE(rxq_ctrl, next); - mlx5_free(rxq_ctrl); + if (free_ctrl) { + if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) { + mlx5_mr_btree_free + (&rxq_ctrl->rxq.mr_ctrl.cache_bh); + mlx5_mprq_free_mp(dev, rxq_ctrl); + } + if (rxq_ctrl->rxq.shared) + LIST_REMOVE(rxq_ctrl, share_entry); + LIST_REMOVE(rxq_ctrl, next); + mlx5_free(rxq_ctrl); + } dev->data->rx_queues[idx] = NULL; mlx5_free(rxq); (*priv->rxq_privs)[idx] = NULL; diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c index 1e865e74e39..2fd8c70cce5 100644 --- a/drivers/net/mlx5/mlx5_trigger.c +++ b/drivers/net/mlx5/mlx5_trigger.c @@ -122,6 +122,46 @@ mlx5_rxq_stop(struct rte_eth_dev *dev) mlx5_rxq_release(dev, i); } +static int +mlx5_rxq_ctrl_prepare(struct rte_eth_dev *dev, struct mlx5_rxq_ctrl *rxq_ctrl, + unsigned int idx) +{ + int ret = 0; + + if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) { + if (mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq)) { + /* Allocate/reuse/resize mempool for MPRQ. */ + if (mlx5_mprq_alloc_mp(dev, rxq_ctrl) < 0) + return -rte_errno; + + /* Pre-register Rx mempools. */ + mlx5_mr_update_mp(dev, &rxq_ctrl->rxq.mr_ctrl, + rxq_ctrl->rxq.mprq_mp); + } else { + uint32_t s; + for (s = 0; s < rxq_ctrl->rxq.rxseg_n; s++) + mlx5_mr_update_mp(dev, &rxq_ctrl->rxq.mr_ctrl, + rxq_ctrl->rxq.rxseg[s].mp); + } + ret = rxq_alloc_elts(rxq_ctrl); + if (ret) + return ret; + } + MLX5_ASSERT(!rxq_ctrl->obj); + rxq_ctrl->obj = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, + sizeof(*rxq_ctrl->obj), 0, + rxq_ctrl->socket); + if (!rxq_ctrl->obj) { + DRV_LOG(ERR, "Port %u Rx queue %u can't allocate resources.", + dev->data->port_id, idx); + rte_errno = ENOMEM; + return -rte_errno; + } + DRV_LOG(DEBUG, "Port %u rxq %u updated with %p.", dev->data->port_id, + idx, (void *)&rxq_ctrl->obj); + return 0; +} + /** * Start traffic on Rx queues. * @@ -149,45 +189,17 @@ mlx5_rxq_start(struct rte_eth_dev *dev) if (rxq == NULL) continue; rxq_ctrl = rxq->ctrl; - if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) { - if (mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq)) { - /* Allocate/reuse/resize mempool for MPRQ. */ - if (mlx5_mprq_alloc_mp(dev, rxq_ctrl) < 0) - goto error; - /* Pre-register Rx mempools. 
*/ - mlx5_mr_update_mp(dev, &rxq_ctrl->rxq.mr_ctrl, - rxq_ctrl->rxq.mprq_mp); - } else { - uint32_t s; - - for (s = 0; s < rxq_ctrl->rxq.rxseg_n; s++) - mlx5_mr_update_mp - (dev, &rxq_ctrl->rxq.mr_ctrl, - rxq_ctrl->rxq.rxseg[s].mp); - } - ret = rxq_alloc_elts(rxq_ctrl); - if (ret) + if (!rxq_ctrl->started) { + if (mlx5_rxq_ctrl_prepare(dev, rxq_ctrl, i) < 0) goto error; - } - MLX5_ASSERT(!rxq_ctrl->obj); - rxq_ctrl->obj = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, - sizeof(*rxq_ctrl->obj), 0, - rxq_ctrl->socket); - if (!rxq_ctrl->obj) { - DRV_LOG(ERR, - "Port %u Rx queue %u can't allocate resources.", - dev->data->port_id, i); - rte_errno = ENOMEM; - goto error; + LIST_INSERT_HEAD(&priv->rxqsobj, rxq_ctrl->obj, next); + rxq_ctrl->started = true; } ret = priv->obj_ops.rxq_obj_new(rxq); if (ret) { mlx5_free(rxq_ctrl->obj); goto error; } - DRV_LOG(DEBUG, "Port %u rxq %u updated with %p.", - dev->data->port_id, i, (void *)&rxq_ctrl->obj); - LIST_INSERT_HEAD(&priv->rxqsobj, rxq_ctrl->obj, next); } return 0; error: