From patchwork Fri May 26 03:14:13 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 127522
X-Patchwork-Delegate: gakhil@marvell.com
From: Suanming Mou
To: Matan Azrad , Viacheslav Ovsiienko
CC: ,
Subject: [PATCH v2 1/9] common/mlx5: export memory region lookup by address
Date: Fri, 26 May 2023 06:14:13 +0300
Message-ID: <20230526031422.913377-2-suanmingm@nvidia.com>
In-Reply-To: <20230526031422.913377-1-suanmingm@nvidia.com>
References: <20230418092325.2578712-1-suanmingm@nvidia.com>
 <20230526031422.913377-1-suanmingm@nvidia.com>
In case the user provides a buffer address that has no associated
mempool, a lookup of the memory region by raw address is required.
This commit exports the mlx5_mr_addr2mr_bh() function for that purpose.

Signed-off-by: Suanming Mou
---
 drivers/common/mlx5/mlx5_common_mr.c | 2 +-
 drivers/common/mlx5/mlx5_common_mr.h | 4 ++++
 drivers/common/mlx5/version.map      | 1 +
 3 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/common/mlx5/mlx5_common_mr.c b/drivers/common/mlx5/mlx5_common_mr.c
index 7b14b0c7bf..40ff9153bd 100644
--- a/drivers/common/mlx5/mlx5_common_mr.c
+++ b/drivers/common/mlx5/mlx5_common_mr.c
@@ -1059,7 +1059,7 @@ mr_lookup_caches(struct mlx5_mr_ctrl *mr_ctrl,
  * @return
  *   Searched LKey on success, UINT32_MAX on no match.
  */
-static uint32_t
+uint32_t
 mlx5_mr_addr2mr_bh(struct mlx5_mr_ctrl *mr_ctrl, uintptr_t addr)
 {
 	uint32_t lkey;
diff --git a/drivers/common/mlx5/mlx5_common_mr.h b/drivers/common/mlx5/mlx5_common_mr.h
index 12def1585f..66623868a2 100644
--- a/drivers/common/mlx5/mlx5_common_mr.h
+++ b/drivers/common/mlx5/mlx5_common_mr.h
@@ -240,6 +240,10 @@ mlx5_mr_create(struct mlx5_common_device *cdev,
 	       struct mlx5_mr_share_cache *share_cache,
 	       struct mr_cache_entry *entry, uintptr_t addr);
 
+__rte_internal
+uint32_t
+mlx5_mr_addr2mr_bh(struct mlx5_mr_ctrl *mr_ctrl, uintptr_t addr);
+
 /* mlx5_common_verbs.c */
 
 __rte_internal
diff --git a/drivers/common/mlx5/version.map b/drivers/common/mlx5/version.map
index e05e1aa8c5..f860b069de 100644
--- a/drivers/common/mlx5/version.map
+++ b/drivers/common/mlx5/version.map
@@ -122,6 +122,7 @@ INTERNAL {
 	mlx5_mr_ctrl_init;
 	mlx5_mr_flush_local_cache;
 	mlx5_mr_mb2mr_bh;
+	mlx5_mr_addr2mr_bh;
 
 	mlx5_nl_allmulti; # WINDOWS_NO_EXPORT
 	mlx5_nl_ifindex; # WINDOWS_NO_EXPORT
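For illustration, a minimal sketch of how a driver-internal caller could
consume the newly exported lookup for a raw, non-mempool buffer. The
helper name and surrounding context are hypothetical and not part of this
patch; only mlx5_mr_addr2mr_bh() and its UINT32_MAX no-match semantics
come from the diff above:

#include <stdint.h>

#include "mlx5_common_mr.h"

/* Hypothetical helper: resolve the LKey covering a raw buffer address
 * (no rte_mempool attached), e.g. a user-allocated crypto buffer.
 * mr_ctrl is the per-queue MR cache control structure.
 */
static inline uint32_t
lkey_from_raw_addr(struct mlx5_mr_ctrl *mr_ctrl, const void *buf)
{
	uint32_t lkey = mlx5_mr_addr2mr_bh(mr_ctrl, (uintptr_t)buf);

	if (lkey == UINT32_MAX)
		return UINT32_MAX; /* No memory region covers this address. */
	return lkey; /* Ready to be written into a WQE data segment. */
}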
From patchwork Fri May 26 03:14:14 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 127523
X-Patchwork-Delegate: gakhil@marvell.com
From: Suanming Mou
To: Matan Azrad
CC: ,
Subject: [PATCH v2 2/9] crypto/mlx5: split AES-XTS
Date: Fri, 26 May 2023 06:14:14 +0300
Message-ID: <20230526031422.913377-3-suanmingm@nvidia.com>
In-Reply-To: <20230526031422.913377-1-suanmingm@nvidia.com>
References: <20230418092325.2578712-1-suanmingm@nvidia.com>
 <20230526031422.913377-1-suanmingm@nvidia.com>

As other crypto algorithms will be supported, this commit splits the
AES-XTS code out into a new mlx5_crypto_xts.c file, leaving only the
common code in mlx5_crypto.c.
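After the split, mlx5_crypto.c keeps only the common probe path and ops
table, and each algorithm file installs its specific handlers at init
time. A condensed sketch of that override pattern, using names introduced
later in this diff (error handling trimmed, so this is illustrative
rather than the literal patch code):

/* Each engine init hook plugs its own handlers into the shared dev. */
int
mlx5_crypto_engine_init_sketch(struct mlx5_crypto_priv *priv)
{
	struct rte_cryptodev *dev = priv->crypto_dev;

	dev->dev_ops->sym_session_configure = mlx5_crypto_xts_sym_session_configure;
	dev->dev_ops->queue_pair_setup = mlx5_crypto_xts_queue_pair_setup;
	dev->dev_ops->queue_pair_release = mlx5_crypto_xts_queue_pair_release;
	dev->enqueue_burst = mlx5_crypto_xts_enqueue_burst;
	dev->dequeue_burst = mlx5_crypto_xts_dequeue_burst;
	priv->caps = mlx5_crypto_caps; /* Per-engine capability array. */
	return 0;
}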
Signed-off-by: Suanming Mou
---
 drivers/crypto/mlx5/meson.build       |   1 +
 drivers/crypto/mlx5/mlx5_crypto.c     | 642 ++------------------------
 drivers/crypto/mlx5/mlx5_crypto.h     |  33 ++
 drivers/crypto/mlx5/mlx5_crypto_xts.c | 594 ++++++++++++++++++++++++
 4 files changed, 667 insertions(+), 603 deletions(-)
 create mode 100644 drivers/crypto/mlx5/mlx5_crypto_xts.c

diff --git a/drivers/crypto/mlx5/meson.build b/drivers/crypto/mlx5/meson.build
index a2691ec0f0..045e8ce81d 100644
--- a/drivers/crypto/mlx5/meson.build
+++ b/drivers/crypto/mlx5/meson.build
@@ -15,6 +15,7 @@ endif
 
 sources = files(
         'mlx5_crypto.c',
+        'mlx5_crypto_xts.c',
         'mlx5_crypto_dek.c',
 )
 
diff --git a/drivers/crypto/mlx5/mlx5_crypto.c b/drivers/crypto/mlx5/mlx5_crypto.c
index 5267f48c1e..2e6bcc6ddc 100644
--- a/drivers/crypto/mlx5/mlx5_crypto.c
+++ b/drivers/crypto/mlx5/mlx5_crypto.c
@@ -40,33 +40,6 @@
 int mlx5_crypto_logtype;
 
 uint8_t mlx5_crypto_driver_id;
 
-const struct rte_cryptodev_capabilities mlx5_crypto_caps[] = {
-	{		/* AES XTS */
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
-		{.sym = {
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
-			{.cipher = {
-				.algo = RTE_CRYPTO_CIPHER_AES_XTS,
-				.block_size = 16,
-				.key_size = {
-					.min = 32,
-					.max = 64,
-					.increment = 32
-				},
-				.iv_size = {
-					.min = 16,
-					.max = 16,
-					.increment = 0
-				},
-				.dataunit_set =
-				RTE_CRYPTO_CIPHER_DATA_UNIT_LEN_512_BYTES |
-				RTE_CRYPTO_CIPHER_DATA_UNIT_LEN_4096_BYTES |
-				RTE_CRYPTO_CIPHER_DATA_UNIT_LEN_1_MEGABYTES,
-			}, }
-		}, }
-	},
-};
-
 static const char mlx5_crypto_drv_name[] = RTE_STR(MLX5_CRYPTO_DRIVER_NAME);
 
 static const struct rte_driver mlx5_drv = {
@@ -76,21 +49,6 @@ static const struct rte_driver mlx5_drv = {
 
 static struct cryptodev_driver mlx5_cryptodev_driver;
 
-struct mlx5_crypto_session {
-	uint32_t bs_bpt_eo_es;
-	/**< bsf_size, bsf_p_type, encryption_order and encryption standard,
-	 * saved in big endian format.
-	 */
-	uint32_t bsp_res;
-	/**< crypto_block_size_pointer and reserved 24 bits saved in big
-	 * endian format.
-	 */
-	uint32_t iv_offset:16;
-	/**< Starting point for Initialisation Vector. */
-	struct mlx5_crypto_dek *dek; /**< Pointer to dek struct. */
-	uint32_t dek_id; /**< DEK ID */
-} __rte_packed;
-
 static void
 mlx5_crypto_dev_infos_get(struct rte_cryptodev *dev,
 			  struct rte_cryptodev_info *dev_info)
@@ -102,7 +60,7 @@ mlx5_crypto_dev_infos_get(struct rte_cryptodev *dev,
 		dev_info->driver_id = mlx5_crypto_driver_id;
 		dev_info->feature_flags =
 			MLX5_CRYPTO_FEATURE_FLAGS(priv->is_wrapped_mode);
-		dev_info->capabilities = mlx5_crypto_caps;
+		dev_info->capabilities = priv->caps;
 		dev_info->max_nb_queue_pairs = MLX5_CRYPTO_MAX_QPS;
 		dev_info->min_mbuf_headroom_req = 0;
 		dev_info->min_mbuf_tailroom_req = 0;
@@ -114,6 +72,38 @@ mlx5_crypto_dev_infos_get(struct rte_cryptodev *dev,
 	}
 }
 
+void
+mlx5_crypto_indirect_mkeys_release(struct mlx5_crypto_qp *qp,
+				   uint16_t n)
+{
+	uint32_t i;
+
+	for (i = 0; i < n; i++)
+		if (qp->mkey[i])
+			claim_zero(mlx5_devx_cmd_destroy(qp->mkey[i]));
+}
+
+int
+mlx5_crypto_indirect_mkeys_prepare(struct mlx5_crypto_priv *priv,
+				   struct mlx5_crypto_qp *qp,
+				   struct mlx5_devx_mkey_attr *attr,
+				   mlx5_crypto_mkey_update_t update_cb)
+{
+	uint32_t i;
+
+	for (i = 0; i < qp->entries_n; i++) {
+		attr->klm_array = update_cb(priv, qp, i);
+		qp->mkey[i] = mlx5_devx_cmd_mkey_create(priv->cdev->ctx, attr);
+		if (!qp->mkey[i])
+			goto error;
+	}
+	return 0;
+error:
+	DRV_LOG(ERR, "Failed to allocate indirect mkey.");
+	mlx5_crypto_indirect_mkeys_release(qp, i);
+	return -1;
+}
+
 static int
 mlx5_crypto_dev_configure(struct rte_cryptodev *dev,
 			  struct rte_cryptodev_config *config)
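The refactored mlx5_crypto_indirect_mkeys_prepare() above takes a
callback so that each algorithm can point attr->klm_array at its own
per-descriptor KLM storage before each mkey is created. A sketch of how
an engine would use it; the callback body mirrors the AES-XTS one added
later in this patch, but the engine_* names here are illustrative only:

/* Hypothetical engine callback: return the KLM array embedded in the
 * WQE set of descriptor 'idx' inside the queue umem buffer.
 */
static void *
engine_mkey_klm_update(struct mlx5_crypto_priv *priv,
		       struct mlx5_crypto_qp *qp, uint32_t idx)
{
	return RTE_PTR_ADD(qp->qp_obj.umem_buf, priv->wqe_set_size * idx);
}

static int
engine_mkeys_prepare(struct mlx5_crypto_priv *priv, struct mlx5_crypto_qp *qp)
{
	struct mlx5_devx_mkey_attr attr = {
		.pd = priv->cdev->pdn,
		.umr_en = 1,
		.crypto_en = 1,
		.set_remote_rw = 1,
		.klm_num = MLX5_CRYPTO_KLM_SEGS_NUM(priv->umr_wqe_size),
	};

	/* One indirect mkey per queue entry; partially created mkeys are
	 * released by the helper itself on failure.
	 */
	return mlx5_crypto_indirect_mkeys_prepare(priv, qp, &attr,
						  engine_mkey_klm_update);
}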
@@ -168,72 +158,6 @@ mlx5_crypto_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
 	return sizeof(struct mlx5_crypto_session);
 }
 
-static int
-mlx5_crypto_sym_session_configure(struct rte_cryptodev *dev,
-				  struct rte_crypto_sym_xform *xform,
-				  struct rte_cryptodev_sym_session *session)
-{
-	struct mlx5_crypto_priv *priv = dev->data->dev_private;
-	struct mlx5_crypto_session *sess_private_data =
-		CRYPTODEV_GET_SYM_SESS_PRIV(session);
-	struct rte_crypto_cipher_xform *cipher;
-	uint8_t encryption_order;
-
-	if (unlikely(xform->next != NULL)) {
-		DRV_LOG(ERR, "Xform next is not supported.");
-		return -ENOTSUP;
-	}
-	if (unlikely((xform->type != RTE_CRYPTO_SYM_XFORM_CIPHER) ||
-		     (xform->cipher.algo != RTE_CRYPTO_CIPHER_AES_XTS))) {
-		DRV_LOG(ERR, "Only AES-XTS algorithm is supported.");
-		return -ENOTSUP;
-	}
-	cipher = &xform->cipher;
-	sess_private_data->dek = mlx5_crypto_dek_prepare(priv, cipher);
-	if (sess_private_data->dek == NULL) {
-		DRV_LOG(ERR, "Failed to prepare dek.");
-		return -ENOMEM;
-	}
-	if (cipher->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT)
-		encryption_order = MLX5_ENCRYPTION_ORDER_ENCRYPTED_RAW_MEMORY;
-	else
-		encryption_order = MLX5_ENCRYPTION_ORDER_ENCRYPTED_RAW_WIRE;
-	sess_private_data->bs_bpt_eo_es = rte_cpu_to_be_32
-			(MLX5_BSF_SIZE_64B << MLX5_BSF_SIZE_OFFSET |
-			 MLX5_BSF_P_TYPE_CRYPTO << MLX5_BSF_P_TYPE_OFFSET |
-			 encryption_order << MLX5_ENCRYPTION_ORDER_OFFSET |
-			 MLX5_ENCRYPTION_STANDARD_AES_XTS);
-	switch (xform->cipher.dataunit_len) {
-	case 0:
-		sess_private_data->bsp_res = 0;
-		break;
-	case 512:
-		sess_private_data->bsp_res = rte_cpu_to_be_32
-					     ((uint32_t)MLX5_BLOCK_SIZE_512B <<
-					      MLX5_BLOCK_SIZE_OFFSET);
-		break;
-	case 4096:
-		sess_private_data->bsp_res = rte_cpu_to_be_32
-					     ((uint32_t)MLX5_BLOCK_SIZE_4096B <<
-					      MLX5_BLOCK_SIZE_OFFSET);
-		break;
-	case 1048576:
-		sess_private_data->bsp_res = rte_cpu_to_be_32
-					     ((uint32_t)MLX5_BLOCK_SIZE_1MB <<
-					      MLX5_BLOCK_SIZE_OFFSET);
-		break;
-	default:
-		DRV_LOG(ERR, "Cipher data unit length is not supported.");
-		return -ENOTSUP;
-	}
-	sess_private_data->iv_offset = cipher->iv.offset;
-	sess_private_data->dek_id =
-		rte_cpu_to_be_32(sess_private_data->dek->obj->id &
-				 0xffffff);
-	DRV_LOG(DEBUG, "Session %p was configured.", sess_private_data);
-	return 0;
-}
-
 static void
 mlx5_crypto_sym_session_clear(struct rte_cryptodev *dev,
 			      struct rte_cryptodev_sym_session *sess)
@@ -249,412 +173,6 @@ mlx5_crypto_sym_session_clear(struct rte_cryptodev *dev,
 	DRV_LOG(DEBUG, "Session %p was cleared.", spriv);
 }
 
-static void
-mlx5_crypto_indirect_mkeys_release(struct mlx5_crypto_qp *qp, uint16_t n)
-{
-	uint16_t i;
-
-	for (i = 0; i < n; i++)
-		if (qp->mkey[i])
-			claim_zero(mlx5_devx_cmd_destroy(qp->mkey[i]));
-}
-
-static void
-mlx5_crypto_qp_release(struct mlx5_crypto_qp *qp)
-{
-	if (qp == NULL)
-		return;
-	mlx5_devx_qp_destroy(&qp->qp_obj);
-	mlx5_mr_btree_free(&qp->mr_ctrl.cache_bh);
-	mlx5_devx_cq_destroy(&qp->cq_obj);
-	rte_free(qp);
-}
-
-static int
-mlx5_crypto_queue_pair_release(struct rte_cryptodev *dev, uint16_t qp_id)
-{
-	struct mlx5_crypto_qp *qp = dev->data->queue_pairs[qp_id];
-
-	mlx5_crypto_indirect_mkeys_release(qp, qp->entries_n);
-	mlx5_crypto_qp_release(qp);
-	dev->data->queue_pairs[qp_id] = NULL;
-	return 0;
-}
-
-static __rte_noinline uint32_t
-mlx5_crypto_get_block_size(struct rte_crypto_op *op)
-{
-	uint32_t bl = op->sym->cipher.data.length;
-
-	switch (bl) {
-	case (1 << 20):
-		return RTE_BE32(MLX5_BLOCK_SIZE_1MB << MLX5_BLOCK_SIZE_OFFSET);
-	case (1 << 12):
-		return RTE_BE32(MLX5_BLOCK_SIZE_4096B <<
-				MLX5_BLOCK_SIZE_OFFSET);
-	case (1 << 9):
-		return RTE_BE32(MLX5_BLOCK_SIZE_512B << MLX5_BLOCK_SIZE_OFFSET);
-	default:
-		DRV_LOG(ERR, "Unknown block size: %u.", bl);
-		return UINT32_MAX;
-	}
-}
-
-static __rte_always_inline uint32_t
-mlx5_crypto_klm_set(struct mlx5_crypto_qp *qp, struct rte_mbuf *mbuf,
-		    struct mlx5_wqe_dseg *klm, uint32_t offset,
-		    uint32_t *remain)
-{
-	uint32_t data_len = (rte_pktmbuf_data_len(mbuf) - offset);
-	uintptr_t addr = rte_pktmbuf_mtod_offset(mbuf, uintptr_t, offset);
-
-	if (data_len > *remain)
-		data_len = *remain;
-	*remain -= data_len;
-	klm->bcount = rte_cpu_to_be_32(data_len);
-	klm->pbuf = rte_cpu_to_be_64(addr);
-	klm->lkey = mlx5_mr_mb2mr(&qp->mr_ctrl, mbuf);
-	return klm->lkey;
-
-}
-
-static __rte_always_inline uint32_t
-mlx5_crypto_klms_set(struct mlx5_crypto_qp *qp, struct rte_crypto_op *op,
-		     struct rte_mbuf *mbuf, struct mlx5_wqe_dseg *klm)
-{
-	uint32_t remain_len = op->sym->cipher.data.length;
-	uint32_t nb_segs = mbuf->nb_segs;
-	uint32_t klm_n = 1u;
-
-	/* First mbuf needs to take the cipher offset. */
-	if (unlikely(mlx5_crypto_klm_set(qp, mbuf, klm,
-		     op->sym->cipher.data.offset, &remain_len) == UINT32_MAX)) {
-		op->status = RTE_CRYPTO_OP_STATUS_ERROR;
-		return 0;
-	}
-	while (remain_len) {
-		nb_segs--;
-		mbuf = mbuf->next;
-		if (unlikely(mbuf == NULL || nb_segs == 0)) {
-			op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
-			return 0;
-		}
-		if (unlikely(mlx5_crypto_klm_set(qp, mbuf, ++klm, 0,
-						 &remain_len) == UINT32_MAX)) {
-			op->status = RTE_CRYPTO_OP_STATUS_ERROR;
-			return 0;
-		}
-		klm_n++;
-	}
-	return klm_n;
-}
-
-static __rte_always_inline int
-mlx5_crypto_wqe_set(struct mlx5_crypto_priv *priv,
-		    struct mlx5_crypto_qp *qp,
-		    struct rte_crypto_op *op,
-		    struct mlx5_umr_wqe *umr)
-{
-	struct mlx5_crypto_session *sess = CRYPTODEV_GET_SYM_SESS_PRIV(op->sym->session);
-	struct mlx5_wqe_cseg *cseg = &umr->ctr;
-	struct mlx5_wqe_mkey_cseg *mkc = &umr->mkc;
-	struct mlx5_wqe_dseg *klms = &umr->kseg[0];
-	struct mlx5_wqe_umr_bsf_seg *bsf = ((struct mlx5_wqe_umr_bsf_seg *)
-				      RTE_PTR_ADD(umr, priv->umr_wqe_size)) - 1;
-	uint32_t ds;
-	bool ipl = op->sym->m_dst == NULL || op->sym->m_dst == op->sym->m_src;
-	/* Set UMR WQE. */
-	uint32_t klm_n = mlx5_crypto_klms_set(qp, op,
-				ipl ? op->sym->m_src : op->sym->m_dst, klms);
-
-	if (unlikely(klm_n == 0))
-		return 0;
-	bsf->bs_bpt_eo_es = sess->bs_bpt_eo_es;
-	if (unlikely(!sess->bsp_res)) {
-		bsf->bsp_res = mlx5_crypto_get_block_size(op);
-		if (unlikely(bsf->bsp_res == UINT32_MAX)) {
-			op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
-			return 0;
-		}
-	} else {
-		bsf->bsp_res = sess->bsp_res;
-	}
-	bsf->raw_data_size = rte_cpu_to_be_32(op->sym->cipher.data.length);
-	memcpy(bsf->xts_initial_tweak,
-	       rte_crypto_op_ctod_offset(op, uint8_t *, sess->iv_offset), 16);
-	bsf->res_dp = sess->dek_id;
-	mkc->len = rte_cpu_to_be_64(op->sym->cipher.data.length);
-	cseg->opcode = rte_cpu_to_be_32((qp->db_pi << 8) | MLX5_OPCODE_UMR);
-	qp->db_pi += priv->umr_wqe_stride;
-	/* Set RDMA_WRITE WQE. */
-	cseg = RTE_PTR_ADD(cseg, priv->umr_wqe_size);
-	klms = RTE_PTR_ADD(cseg, sizeof(struct mlx5_rdma_write_wqe));
-	if (!ipl) {
-		klm_n = mlx5_crypto_klms_set(qp, op, op->sym->m_src, klms);
-		if (unlikely(klm_n == 0))
-			return 0;
-	} else {
-		memcpy(klms, &umr->kseg[0], sizeof(*klms) * klm_n);
-	}
-	ds = 2 + klm_n;
-	cseg->sq_ds = rte_cpu_to_be_32((qp->qp_obj.qp->id << 8) | ds);
-	cseg->opcode = rte_cpu_to_be_32((qp->db_pi << 8) |
-							MLX5_OPCODE_RDMA_WRITE);
-	ds = RTE_ALIGN(ds, 4);
-	qp->db_pi += ds >> 2;
-	/* Set NOP WQE if needed. */
-	if (priv->max_rdmar_ds > ds) {
-		cseg += ds;
-		ds = priv->max_rdmar_ds - ds;
-		cseg->sq_ds = rte_cpu_to_be_32((qp->qp_obj.qp->id << 8) | ds);
-		cseg->opcode = rte_cpu_to_be_32((qp->db_pi << 8) |
-							       MLX5_OPCODE_NOP);
-		qp->db_pi += ds >> 2; /* Here, DS is 4 aligned for sure. */
-	}
-	qp->wqe = (uint8_t *)cseg;
-	return 1;
-}
-
-static uint16_t
-mlx5_crypto_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops,
-			  uint16_t nb_ops)
-{
-	struct mlx5_crypto_qp *qp = queue_pair;
-	struct mlx5_crypto_priv *priv = qp->priv;
-	struct mlx5_umr_wqe *umr;
-	struct rte_crypto_op *op;
-	uint16_t mask = qp->entries_n - 1;
-	uint16_t remain = qp->entries_n - (qp->pi - qp->ci);
-	uint32_t idx;
-
-	if (remain < nb_ops)
-		nb_ops = remain;
-	else
-		remain = nb_ops;
-	if (unlikely(remain == 0))
-		return 0;
-	do {
-		idx = qp->pi & mask;
-		op = *ops++;
-		umr = RTE_PTR_ADD(qp->qp_obj.umem_buf,
-				  priv->wqe_set_size * idx);
-		if (unlikely(mlx5_crypto_wqe_set(priv, qp, op, umr) == 0)) {
-			qp->stats.enqueue_err_count++;
-			if (remain != nb_ops) {
-				qp->stats.enqueued_count -= remain;
-				break;
-			}
-			return 0;
-		}
-		qp->ops[idx] = op;
-		qp->pi++;
-	} while (--remain);
-	qp->stats.enqueued_count += nb_ops;
-	mlx5_doorbell_ring(&priv->uar.bf_db, *(volatile uint64_t *)qp->wqe,
-			   qp->db_pi, &qp->qp_obj.db_rec[MLX5_SND_DBR],
-			   !priv->uar.dbnc);
-	return nb_ops;
-}
-
-static __rte_noinline void
-mlx5_crypto_cqe_err_handle(struct mlx5_crypto_qp *qp, struct rte_crypto_op *op)
-{
-	const uint32_t idx = qp->ci & (qp->entries_n - 1);
-	volatile struct mlx5_err_cqe *cqe = (volatile struct mlx5_err_cqe *)
-							&qp->cq_obj.cqes[idx];
-
-	op->status = RTE_CRYPTO_OP_STATUS_ERROR;
-	qp->stats.dequeue_err_count++;
-	DRV_LOG(ERR, "CQE ERR:%x.\n", rte_be_to_cpu_32(cqe->syndrome));
-}
-
-static uint16_t
-mlx5_crypto_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
-			  uint16_t nb_ops)
-{
-	struct mlx5_crypto_qp *qp = queue_pair;
-	volatile struct mlx5_cqe *restrict cqe;
-	struct rte_crypto_op *restrict op;
-	const unsigned int cq_size = qp->entries_n;
-	const unsigned int mask = cq_size - 1;
-	uint32_t idx;
-	uint32_t next_idx = qp->ci & mask;
-	const uint16_t max = RTE_MIN((uint16_t)(qp->pi - qp->ci), nb_ops);
-	uint16_t i = 0;
-	int ret;
-
-	if (unlikely(max == 0))
-		return 0;
-	do {
-		idx = next_idx;
-		next_idx = (qp->ci + 1) & mask;
-		op = qp->ops[idx];
-		cqe = &qp->cq_obj.cqes[idx];
-		ret = check_cqe(cqe, cq_size, qp->ci);
-		rte_io_rmb();
-		if (unlikely(ret != MLX5_CQE_STATUS_SW_OWN)) {
-			if (unlikely(ret != MLX5_CQE_STATUS_HW_OWN))
-				mlx5_crypto_cqe_err_handle(qp, op);
-			break;
-		}
-		op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
-		ops[i++] = op;
-		qp->ci++;
-	} while (i < max);
-	if (likely(i != 0)) {
-		rte_io_wmb();
-		qp->cq_obj.db_rec[0] = rte_cpu_to_be_32(qp->ci);
-		qp->stats.dequeued_count += i;
-	}
-	return i;
-}
-
-static void
-mlx5_crypto_qp_init(struct mlx5_crypto_priv *priv, struct mlx5_crypto_qp *qp)
-{
-	uint32_t i;
-
-	for (i = 0 ; i < qp->entries_n; i++) {
-		struct mlx5_wqe_cseg *cseg = RTE_PTR_ADD(qp->qp_obj.umem_buf,
-			i * priv->wqe_set_size);
-		struct mlx5_wqe_umr_cseg *ucseg = (struct mlx5_wqe_umr_cseg *)
-								     (cseg + 1);
-		struct mlx5_wqe_umr_bsf_seg *bsf =
-			(struct mlx5_wqe_umr_bsf_seg *)(RTE_PTR_ADD(cseg,
-						       priv->umr_wqe_size)) - 1;
-		struct mlx5_wqe_rseg *rseg;
-
-		/* Init UMR WQE. */
-		cseg->sq_ds = rte_cpu_to_be_32((qp->qp_obj.qp->id << 8) |
-					 (priv->umr_wqe_size / MLX5_WSEG_SIZE));
-		cseg->flags = RTE_BE32(MLX5_COMP_ONLY_FIRST_ERR <<
-				       MLX5_COMP_MODE_OFFSET);
-		cseg->misc = rte_cpu_to_be_32(qp->mkey[i]->id);
-		ucseg->if_cf_toe_cq_res = RTE_BE32(1u << MLX5_UMRC_IF_OFFSET);
-		ucseg->mkey_mask = RTE_BE64(1u << 0); /* Mkey length bit. */
-		ucseg->ko_to_bs = rte_cpu_to_be_32
-			((MLX5_CRYPTO_KLM_SEGS_NUM(priv->umr_wqe_size) <<
-			 MLX5_UMRC_KO_OFFSET) | (4 << MLX5_UMRC_TO_BS_OFFSET));
-		bsf->keytag = priv->keytag;
-		/* Init RDMA WRITE WQE. */
-		cseg = RTE_PTR_ADD(cseg, priv->umr_wqe_size);
-		cseg->flags = RTE_BE32((MLX5_COMP_ALWAYS <<
-				      MLX5_COMP_MODE_OFFSET) |
-				      MLX5_WQE_CTRL_INITIATOR_SMALL_FENCE);
-		rseg = (struct mlx5_wqe_rseg *)(cseg + 1);
-		rseg->rkey = rte_cpu_to_be_32(qp->mkey[i]->id);
-	}
-}
-
-static int
-mlx5_crypto_indirect_mkeys_prepare(struct mlx5_crypto_priv *priv,
-				   struct mlx5_crypto_qp *qp)
-{
-	struct mlx5_umr_wqe *umr;
-	uint32_t i;
-	struct mlx5_devx_mkey_attr attr = {
-		.pd = priv->cdev->pdn,
-		.umr_en = 1,
-		.crypto_en = 1,
-		.set_remote_rw = 1,
-		.klm_num = MLX5_CRYPTO_KLM_SEGS_NUM(priv->umr_wqe_size),
-	};
-
-	for (umr = (struct mlx5_umr_wqe *)qp->qp_obj.umem_buf, i = 0;
-	   i < qp->entries_n; i++, umr = RTE_PTR_ADD(umr, priv->wqe_set_size)) {
-		attr.klm_array = (struct mlx5_klm *)&umr->kseg[0];
-		qp->mkey[i] = mlx5_devx_cmd_mkey_create(priv->cdev->ctx, &attr);
-		if (!qp->mkey[i])
-			goto error;
-	}
-	return 0;
-error:
-	DRV_LOG(ERR, "Failed to allocate indirect mkey.");
-	mlx5_crypto_indirect_mkeys_release(qp, i);
-	return -1;
-}
-
-static int
-mlx5_crypto_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
-			     const struct rte_cryptodev_qp_conf *qp_conf,
-			     int socket_id)
-{
-	struct mlx5_crypto_priv *priv = dev->data->dev_private;
-	struct mlx5_devx_qp_attr attr = {0};
-	struct mlx5_crypto_qp *qp;
-	uint16_t log_nb_desc = rte_log2_u32(qp_conf->nb_descriptors);
-	uint32_t ret;
-	uint32_t alloc_size = sizeof(*qp);
-	uint32_t log_wqbb_n;
-	struct mlx5_devx_cq_attr cq_attr = {
-		.uar_page_id = mlx5_os_get_devx_uar_page_id(priv->uar.obj),
-	};
-
-	if (dev->data->queue_pairs[qp_id] != NULL)
-		mlx5_crypto_queue_pair_release(dev, qp_id);
-	alloc_size = RTE_ALIGN(alloc_size, RTE_CACHE_LINE_SIZE);
-	alloc_size += (sizeof(struct rte_crypto_op *) +
-		       sizeof(struct mlx5_devx_obj *)) *
-		       RTE_BIT32(log_nb_desc);
-	qp = rte_zmalloc_socket(__func__, alloc_size, RTE_CACHE_LINE_SIZE,
-				socket_id);
-	if (qp == NULL) {
-		DRV_LOG(ERR, "Failed to allocate QP memory.");
-		rte_errno = ENOMEM;
-		return -rte_errno;
-	}
-	if (mlx5_devx_cq_create(priv->cdev->ctx, &qp->cq_obj, log_nb_desc,
-				&cq_attr, socket_id) != 0) {
-		DRV_LOG(ERR, "Failed to create CQ.");
-		goto error;
-	}
-	log_wqbb_n = rte_log2_u32(RTE_BIT32(log_nb_desc) *
-				(priv->wqe_set_size / MLX5_SEND_WQE_BB));
-	attr.pd = priv->cdev->pdn;
-	attr.uar_index = mlx5_os_get_devx_uar_page_id(priv->uar.obj);
-	attr.cqn = qp->cq_obj.cq->id;
-	attr.num_of_receive_wqes = 0;
-	attr.num_of_send_wqbbs = RTE_BIT32(log_wqbb_n);
-	attr.ts_format =
-		mlx5_ts_format_conv(priv->cdev->config.hca_attr.qp_ts_format);
-	ret = mlx5_devx_qp_create(priv->cdev->ctx, &qp->qp_obj,
-				  attr.num_of_send_wqbbs * MLX5_WQE_SIZE,
-				  &attr, socket_id);
-	if (ret) {
-		DRV_LOG(ERR, "Failed to create QP.");
-		goto error;
-	}
-	if (mlx5_mr_ctrl_init(&qp->mr_ctrl, &priv->cdev->mr_scache.dev_gen,
-			      priv->dev_config.socket_id) != 0) {
-		DRV_LOG(ERR, "Cannot allocate MR Btree for qp %u.",
-			(uint32_t)qp_id);
-		rte_errno = ENOMEM;
-		goto error;
-	}
-	/*
-	 * In Order to configure self loopback, when calling devx qp2rts the
-	 * remote QP id that is used is the id of the same QP.
-	 */
-	if (mlx5_devx_qp2rts(&qp->qp_obj, qp->qp_obj.qp->id))
-		goto error;
-	qp->mkey = (struct mlx5_devx_obj **)RTE_ALIGN((uintptr_t)(qp + 1),
-							   RTE_CACHE_LINE_SIZE);
-	qp->ops = (struct rte_crypto_op **)(qp->mkey + RTE_BIT32(log_nb_desc));
-	qp->entries_n = 1 << log_nb_desc;
-	if (mlx5_crypto_indirect_mkeys_prepare(priv, qp)) {
-		DRV_LOG(ERR, "Cannot allocate indirect memory regions.");
-		rte_errno = ENOMEM;
-		goto error;
-	}
-	mlx5_crypto_qp_init(priv, qp);
-	qp->priv = priv;
-	dev->data->queue_pairs[qp_id] = qp;
-	return 0;
-error:
-	mlx5_crypto_qp_release(qp);
-	return -1;
-}
-
 static void
 mlx5_crypto_stats_get(struct rte_cryptodev *dev,
 		      struct rte_cryptodev_stats *stats)
@@ -691,10 +209,7 @@ static struct rte_cryptodev_ops mlx5_crypto_ops = {
 	.dev_infos_get = mlx5_crypto_dev_infos_get,
 	.stats_get = mlx5_crypto_stats_get,
 	.stats_reset = mlx5_crypto_stats_reset,
-	.queue_pair_setup = mlx5_crypto_queue_pair_setup,
-	.queue_pair_release = mlx5_crypto_queue_pair_release,
 	.sym_session_get_size = mlx5_crypto_sym_session_get_size,
-	.sym_session_configure = mlx5_crypto_sym_session_configure,
 	.sym_session_clear = mlx5_crypto_sym_session_clear,
 	.sym_get_raw_dp_ctx_size = NULL,
 	.sym_configure_raw_dp_ctx = NULL,
@@ -796,81 +311,6 @@ mlx5_crypto_parse_devargs(struct mlx5_kvargs_ctrl *mkvlist,
 	return 0;
 }
 
-/*
- * Calculate UMR WQE size and RDMA Write WQE size with the
- * following limitations:
- *	- Each WQE size is multiple of 64.
- *	- The summarize of both UMR WQE and RDMA_W WQE is a power of 2.
- *	- The number of entries in the UMR WQE's KLM list is multiple of 4.
- */
-static void
-mlx5_crypto_get_wqe_sizes(uint32_t segs_num, uint32_t *umr_size,
-			uint32_t *rdmaw_size)
-{
-	uint32_t diff, wqe_set_size;
-
-	*umr_size = MLX5_CRYPTO_UMR_WQE_STATIC_SIZE +
-			RTE_ALIGN(segs_num, 4) *
-			sizeof(struct mlx5_wqe_dseg);
-	/* Make sure UMR WQE size is multiple of WQBB. */
-	*umr_size = RTE_ALIGN(*umr_size, MLX5_SEND_WQE_BB);
-	*rdmaw_size = sizeof(struct mlx5_rdma_write_wqe) +
-			sizeof(struct mlx5_wqe_dseg) *
-			(segs_num <= 2 ? 2 : 2 +
-			RTE_ALIGN(segs_num - 2, 4));
-	/* Make sure RDMA_WRITE WQE size is multiple of WQBB. */
-	*rdmaw_size = RTE_ALIGN(*rdmaw_size, MLX5_SEND_WQE_BB);
-	wqe_set_size = *rdmaw_size + *umr_size;
-	diff = rte_align32pow2(wqe_set_size) - wqe_set_size;
-	/* Make sure wqe_set size is power of 2. */
-	if (diff)
-		*umr_size += diff;
-}
-
-static uint8_t
-mlx5_crypto_max_segs_num(uint16_t max_wqe_size)
-{
-	int klms_sizes = max_wqe_size - MLX5_CRYPTO_UMR_WQE_STATIC_SIZE;
-	uint32_t max_segs_cap = RTE_ALIGN_FLOOR(klms_sizes, MLX5_SEND_WQE_BB) /
-			sizeof(struct mlx5_wqe_dseg);
-
-	MLX5_ASSERT(klms_sizes >= MLX5_SEND_WQE_BB);
-	while (max_segs_cap) {
-		uint32_t umr_wqe_size, rdmw_wqe_size;
-
-		mlx5_crypto_get_wqe_sizes(max_segs_cap, &umr_wqe_size,
-						&rdmw_wqe_size);
-		if (umr_wqe_size <= max_wqe_size &&
-				rdmw_wqe_size <= max_wqe_size)
-			break;
-		max_segs_cap -= 4;
-	}
-	return max_segs_cap;
-}
-
-static int
-mlx5_crypto_configure_wqe_size(struct mlx5_crypto_priv *priv,
-				uint16_t max_wqe_size, uint32_t max_segs_num)
-{
-	uint32_t rdmw_wqe_size, umr_wqe_size;
-
-	mlx5_crypto_get_wqe_sizes(max_segs_num, &umr_wqe_size,
-					&rdmw_wqe_size);
-	priv->wqe_set_size = rdmw_wqe_size + umr_wqe_size;
-	if (umr_wqe_size > max_wqe_size ||
-				rdmw_wqe_size > max_wqe_size) {
-		DRV_LOG(ERR, "Invalid max_segs_num: %u. should be %u or lower.",
-			max_segs_num,
-			mlx5_crypto_max_segs_num(max_wqe_size));
-		rte_errno = EINVAL;
-		return -EINVAL;
-	}
-	priv->umr_wqe_size = (uint16_t)umr_wqe_size;
-	priv->umr_wqe_stride = priv->umr_wqe_size / MLX5_SEND_WQE_BB;
-	priv->max_rdmar_ds = rdmw_wqe_size / sizeof(struct mlx5_wqe_dseg);
-	return 0;
-}
-
 static int
 mlx5_crypto_dev_probe(struct mlx5_common_device *cdev,
 		      struct mlx5_kvargs_ctrl *mkvlist)
@@ -916,14 +356,18 @@ mlx5_crypto_dev_probe(struct mlx5_common_device *cdev,
 	DRV_LOG(INFO,
 		"Crypto device %s was created successfully.", ibdev_name);
 	crypto_dev->dev_ops = &mlx5_crypto_ops;
-	crypto_dev->dequeue_burst = mlx5_crypto_dequeue_burst;
-	crypto_dev->enqueue_burst = mlx5_crypto_enqueue_burst;
 	crypto_dev->feature_flags = MLX5_CRYPTO_FEATURE_FLAGS(wrapped_mode);
 	crypto_dev->driver_id = mlx5_crypto_driver_id;
 	priv = crypto_dev->data->dev_private;
 	priv->cdev = cdev;
 	priv->crypto_dev = crypto_dev;
 	priv->is_wrapped_mode = wrapped_mode;
+	priv->max_segs_num = devarg_prms.max_segs_num;
+	ret = mlx5_crypto_xts_init(priv);
+	if (ret) {
+		DRV_LOG(ERR, "Failed to init AES-XTS crypto.");
+		return -ENOTSUP;
+	}
 	if (mlx5_devx_uar_prepare(cdev, &priv->uar) != 0) {
 		rte_cryptodev_pmd_destroy(priv->crypto_dev);
 		return -1;
@@ -939,14 +383,6 @@ mlx5_crypto_dev_probe(struct mlx5_common_device *cdev,
 		}
 		priv->login_obj = login;
 	}
-	ret = mlx5_crypto_configure_wqe_size(priv,
-		cdev->config.hca_attr.max_wqe_sz_sq, devarg_prms.max_segs_num);
-	if (ret) {
-		claim_zero(mlx5_devx_cmd_destroy(priv->login_obj));
-		mlx5_devx_uar_release(&priv->uar);
-		rte_cryptodev_pmd_destroy(priv->crypto_dev);
-		return -1;
-	}
 	priv->keytag = rte_cpu_to_be_64(devarg_prms.keytag);
 	DRV_LOG(INFO, "Max number of segments: %u.",
 		(unsigned int)RTE_MIN(
diff --git a/drivers/crypto/mlx5/mlx5_crypto.h b/drivers/crypto/mlx5/mlx5_crypto.h
index a2771b3dab..05d8fe97fe 100644
--- a/drivers/crypto/mlx5/mlx5_crypto.h
+++ b/drivers/crypto/mlx5/mlx5_crypto.h
@@ -31,6 +31,7 @@ struct mlx5_crypto_priv {
 	struct mlx5_uar uar; /* User Access Region. */
 	uint32_t max_segs_num; /* Maximum supported data segs. */
 	struct mlx5_hlist *dek_hlist; /* Dek hash list. */
+	const struct rte_cryptodev_capabilities *caps;
 	struct rte_cryptodev_config dev_config;
 	struct mlx5_devx_obj *login_obj;
 	uint64_t keytag;
@@ -70,6 +71,35 @@ struct mlx5_crypto_devarg_params {
 	uint32_t max_segs_num;
 };
 
+struct mlx5_crypto_session {
+	uint32_t bs_bpt_eo_es;
+	/**< bsf_size, bsf_p_type, encryption_order and encryption standard,
+	 * saved in big endian format.
+	 */
+	uint32_t bsp_res;
+	/**< crypto_block_size_pointer and reserved 24 bits saved in big
+	 * endian format.
+	 */
+	uint32_t iv_offset:16;
+	/**< Starting point for Initialisation Vector. */
+	struct mlx5_crypto_dek *dek; /**< Pointer to dek struct. */
+	uint32_t dek_id; /**< DEK ID */
+} __rte_packed;
+
+typedef void *(*mlx5_crypto_mkey_update_t)(struct mlx5_crypto_priv *priv,
+					   struct mlx5_crypto_qp *qp,
+					   uint32_t idx);
+
+void
+mlx5_crypto_indirect_mkeys_release(struct mlx5_crypto_qp *qp,
+				   uint16_t n);
+
+int
+mlx5_crypto_indirect_mkeys_prepare(struct mlx5_crypto_priv *priv,
+				   struct mlx5_crypto_qp *qp,
+				   struct mlx5_devx_mkey_attr *attr,
+				   mlx5_crypto_mkey_update_t update_cb);
+
 int
 mlx5_crypto_dek_destroy(struct mlx5_crypto_priv *priv,
 			struct mlx5_crypto_dek *dek);
@@ -84,4 +114,7 @@ mlx5_crypto_dek_setup(struct mlx5_crypto_priv *priv);
 void
 mlx5_crypto_dek_unset(struct mlx5_crypto_priv *priv);
 
+int
+mlx5_crypto_xts_init(struct mlx5_crypto_priv *priv);
+
 #endif /* MLX5_CRYPTO_H_ */
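mlx5_crypto_session moves into the header so that every engine file can
fill it; its fields are pre-swapped to big endian at session-configure
time so the data path can copy them straight into the BSF segment. For
reference, an illustrative helper equivalent to the dataunit_len switch
in this patch's session-configure code (the helper name is hypothetical;
the constants and mapping come from the diff, and a zero data-unit length
defers block-size selection to per-op detection):

static inline uint32_t
bsp_res_from_dataunit_len(uint32_t dataunit_len)
{
	switch (dataunit_len) {
	case 0:
		return 0; /* Resolved per op on the data path. */
	case 512:
		return rte_cpu_to_be_32((uint32_t)MLX5_BLOCK_SIZE_512B <<
					MLX5_BLOCK_SIZE_OFFSET);
	case 4096:
		return rte_cpu_to_be_32((uint32_t)MLX5_BLOCK_SIZE_4096B <<
					MLX5_BLOCK_SIZE_OFFSET);
	case 1048576:
		return rte_cpu_to_be_32((uint32_t)MLX5_BLOCK_SIZE_1MB <<
					MLX5_BLOCK_SIZE_OFFSET);
	default:
		return UINT32_MAX; /* Unsupported data-unit length. */
	}
}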
diff --git a/drivers/crypto/mlx5/mlx5_crypto_xts.c b/drivers/crypto/mlx5/mlx5_crypto_xts.c
new file mode 100644
index 0000000000..964d02e6ed
--- /dev/null
+++ b/drivers/crypto/mlx5/mlx5_crypto_xts.c
@@ -0,0 +1,594 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2023 NVIDIA Corporation & Affiliates
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+
+#include "mlx5_crypto_utils.h"
+#include "mlx5_crypto.h"
+
+const struct rte_cryptodev_capabilities mlx5_crypto_caps[] = {
+	{		/* AES XTS */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_AES_XTS,
+				.block_size = 16,
+				.key_size = {
+					.min = 32,
+					.max = 64,
+					.increment = 32
+				},
+				.iv_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				},
+				.dataunit_set =
+				RTE_CRYPTO_CIPHER_DATA_UNIT_LEN_512_BYTES |
+				RTE_CRYPTO_CIPHER_DATA_UNIT_LEN_4096_BYTES |
+				RTE_CRYPTO_CIPHER_DATA_UNIT_LEN_1_MEGABYTES,
+			}, }
+		}, }
+	},
+};
+
+static int
+mlx5_crypto_xts_sym_session_configure(struct rte_cryptodev *dev,
+				      struct rte_crypto_sym_xform *xform,
+				      struct rte_cryptodev_sym_session *session)
+{
+	struct mlx5_crypto_priv *priv = dev->data->dev_private;
+	struct mlx5_crypto_session *sess_private_data =
+		CRYPTODEV_GET_SYM_SESS_PRIV(session);
+	struct rte_crypto_cipher_xform *cipher;
+	uint8_t encryption_order;
+
+	if (unlikely(xform->next != NULL)) {
+		DRV_LOG(ERR, "Xform next is not supported.");
+		return -ENOTSUP;
+	}
+	if (unlikely((xform->type != RTE_CRYPTO_SYM_XFORM_CIPHER) ||
+		     (xform->cipher.algo != RTE_CRYPTO_CIPHER_AES_XTS))) {
+		DRV_LOG(ERR, "Only AES-XTS algorithm is supported.");
+		return -ENOTSUP;
+	}
+	cipher = &xform->cipher;
+	sess_private_data->dek = mlx5_crypto_dek_prepare(priv, cipher);
+	if (sess_private_data->dek == NULL) {
+		DRV_LOG(ERR, "Failed to prepare dek.");
+		return -ENOMEM;
+	}
+	if (cipher->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT)
+		encryption_order = MLX5_ENCRYPTION_ORDER_ENCRYPTED_RAW_MEMORY;
+	else
+		encryption_order = MLX5_ENCRYPTION_ORDER_ENCRYPTED_RAW_WIRE;
+	sess_private_data->bs_bpt_eo_es = rte_cpu_to_be_32
+			(MLX5_BSF_SIZE_64B << MLX5_BSF_SIZE_OFFSET |
+			 MLX5_BSF_P_TYPE_CRYPTO << MLX5_BSF_P_TYPE_OFFSET |
+			 encryption_order << MLX5_ENCRYPTION_ORDER_OFFSET |
+			 MLX5_ENCRYPTION_STANDARD_AES_XTS);
+	switch (xform->cipher.dataunit_len) {
+	case 0:
+		sess_private_data->bsp_res = 0;
+		break;
+	case 512:
+		sess_private_data->bsp_res = rte_cpu_to_be_32
+					     ((uint32_t)MLX5_BLOCK_SIZE_512B <<
+					      MLX5_BLOCK_SIZE_OFFSET);
+		break;
+	case 4096:
+		sess_private_data->bsp_res = rte_cpu_to_be_32
+					     ((uint32_t)MLX5_BLOCK_SIZE_4096B <<
+					      MLX5_BLOCK_SIZE_OFFSET);
+		break;
+	case 1048576:
+		sess_private_data->bsp_res = rte_cpu_to_be_32
+					     ((uint32_t)MLX5_BLOCK_SIZE_1MB <<
+					      MLX5_BLOCK_SIZE_OFFSET);
+		break;
+	default:
+		DRV_LOG(ERR, "Cipher data unit length is not supported.");
+		return -ENOTSUP;
+	}
+	sess_private_data->iv_offset = cipher->iv.offset;
+	sess_private_data->dek_id =
+		rte_cpu_to_be_32(sess_private_data->dek->obj->id &
+				 0xffffff);
+	DRV_LOG(DEBUG, "Session %p was configured.", sess_private_data);
+	return 0;
+}
+
+static void
+mlx5_crypto_xts_qp_release(struct mlx5_crypto_qp *qp)
+{
+	if (qp == NULL)
+		return;
+	mlx5_devx_qp_destroy(&qp->qp_obj);
+	mlx5_mr_btree_free(&qp->mr_ctrl.cache_bh);
+	mlx5_devx_cq_destroy(&qp->cq_obj);
+	rte_free(qp);
+}
+
+static int
+mlx5_crypto_xts_queue_pair_release(struct rte_cryptodev *dev, uint16_t qp_id)
+{
+	struct mlx5_crypto_qp *qp = dev->data->queue_pairs[qp_id];
+
+	mlx5_crypto_indirect_mkeys_release(qp, qp->entries_n);
+	mlx5_crypto_xts_qp_release(qp);
+	dev->data->queue_pairs[qp_id] = NULL;
+	return 0;
+}
+
+static __rte_noinline uint32_t
+mlx5_crypto_xts_get_block_size(struct rte_crypto_op *op)
+{
+	uint32_t bl = op->sym->cipher.data.length;
+
+	switch (bl) {
+	case (1 << 20):
+		return RTE_BE32(MLX5_BLOCK_SIZE_1MB << MLX5_BLOCK_SIZE_OFFSET);
+	case (1 << 12):
+		return RTE_BE32(MLX5_BLOCK_SIZE_4096B <<
+				MLX5_BLOCK_SIZE_OFFSET);
+	case (1 << 9):
+		return RTE_BE32(MLX5_BLOCK_SIZE_512B << MLX5_BLOCK_SIZE_OFFSET);
+	default:
+		DRV_LOG(ERR, "Unknown block size: %u.", bl);
+		return UINT32_MAX;
+	}
+}
+
+static __rte_always_inline uint32_t
+mlx5_crypto_xts_klm_set(struct mlx5_crypto_qp *qp, struct rte_mbuf *mbuf,
+			struct mlx5_wqe_dseg *klm, uint32_t offset,
+			uint32_t *remain)
+{
+	uint32_t data_len = (rte_pktmbuf_data_len(mbuf) - offset);
+	uintptr_t addr = rte_pktmbuf_mtod_offset(mbuf, uintptr_t, offset);
+
+	if (data_len > *remain)
+		data_len = *remain;
+	*remain -= data_len;
+	klm->bcount = rte_cpu_to_be_32(data_len);
+	klm->pbuf = rte_cpu_to_be_64(addr);
+	klm->lkey = mlx5_mr_mb2mr(&qp->mr_ctrl, mbuf);
+	return klm->lkey;
+
+}
+
+static __rte_always_inline uint32_t
+mlx5_crypto_xts_klms_set(struct mlx5_crypto_qp *qp, struct rte_crypto_op *op,
+			 struct rte_mbuf *mbuf, struct mlx5_wqe_dseg *klm)
+{
+	uint32_t remain_len = op->sym->cipher.data.length;
+	uint32_t nb_segs = mbuf->nb_segs;
+	uint32_t klm_n = 1u;
+
+	/* First mbuf needs to take the cipher offset. */
+	if (unlikely(mlx5_crypto_xts_klm_set(qp, mbuf, klm,
+		     op->sym->cipher.data.offset, &remain_len) == UINT32_MAX)) {
+		op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+		return 0;
+	}
+	while (remain_len) {
+		nb_segs--;
+		mbuf = mbuf->next;
+		if (unlikely(mbuf == NULL || nb_segs == 0)) {
+			op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
+			return 0;
+		}
+		if (unlikely(mlx5_crypto_xts_klm_set(qp, mbuf, ++klm, 0,
+						&remain_len) == UINT32_MAX)) {
+			op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+			return 0;
+		}
+		klm_n++;
+	}
+	return klm_n;
+}
+
+static __rte_always_inline int
+mlx5_crypto_xts_wqe_set(struct mlx5_crypto_priv *priv,
+			struct mlx5_crypto_qp *qp,
+			struct rte_crypto_op *op,
+			struct mlx5_umr_wqe *umr)
+{
+	struct mlx5_crypto_session *sess = CRYPTODEV_GET_SYM_SESS_PRIV(op->sym->session);
+	struct mlx5_wqe_cseg *cseg = &umr->ctr;
+	struct mlx5_wqe_mkey_cseg *mkc = &umr->mkc;
+	struct mlx5_wqe_dseg *klms = &umr->kseg[0];
+	struct mlx5_wqe_umr_bsf_seg *bsf = ((struct mlx5_wqe_umr_bsf_seg *)
+				      RTE_PTR_ADD(umr, priv->umr_wqe_size)) - 1;
+	uint32_t ds;
+	bool ipl = op->sym->m_dst == NULL || op->sym->m_dst == op->sym->m_src;
+	/* Set UMR WQE. */
+	uint32_t klm_n = mlx5_crypto_xts_klms_set(qp, op,
+				ipl ? op->sym->m_src : op->sym->m_dst, klms);
+
+	if (unlikely(klm_n == 0))
+		return 0;
+	bsf->bs_bpt_eo_es = sess->bs_bpt_eo_es;
+	if (unlikely(!sess->bsp_res)) {
+		bsf->bsp_res = mlx5_crypto_xts_get_block_size(op);
+		if (unlikely(bsf->bsp_res == UINT32_MAX)) {
+			op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
+			return 0;
+		}
+	} else {
+		bsf->bsp_res = sess->bsp_res;
+	}
+	bsf->raw_data_size = rte_cpu_to_be_32(op->sym->cipher.data.length);
+	memcpy(bsf->xts_initial_tweak,
+	       rte_crypto_op_ctod_offset(op, uint8_t *, sess->iv_offset), 16);
+	bsf->res_dp = sess->dek_id;
+	mkc->len = rte_cpu_to_be_64(op->sym->cipher.data.length);
+	cseg->opcode = rte_cpu_to_be_32((qp->db_pi << 8) | MLX5_OPCODE_UMR);
+	qp->db_pi += priv->umr_wqe_stride;
+	/* Set RDMA_WRITE WQE. */
+	cseg = RTE_PTR_ADD(cseg, priv->umr_wqe_size);
+	klms = RTE_PTR_ADD(cseg, sizeof(struct mlx5_rdma_write_wqe));
+	if (!ipl) {
+		klm_n = mlx5_crypto_xts_klms_set(qp, op, op->sym->m_src, klms);
+		if (unlikely(klm_n == 0))
+			return 0;
+	} else {
+		memcpy(klms, &umr->kseg[0], sizeof(*klms) * klm_n);
+	}
+	ds = 2 + klm_n;
+	cseg->sq_ds = rte_cpu_to_be_32((qp->qp_obj.qp->id << 8) | ds);
+	cseg->opcode = rte_cpu_to_be_32((qp->db_pi << 8) |
+							MLX5_OPCODE_RDMA_WRITE);
+	ds = RTE_ALIGN(ds, 4);
+	qp->db_pi += ds >> 2;
+	/* Set NOP WQE if needed. */
+	if (priv->max_rdmar_ds > ds) {
+		cseg += ds;
+		ds = priv->max_rdmar_ds - ds;
+		cseg->sq_ds = rte_cpu_to_be_32((qp->qp_obj.qp->id << 8) | ds);
+		cseg->opcode = rte_cpu_to_be_32((qp->db_pi << 8) |
+							       MLX5_OPCODE_NOP);
+		qp->db_pi += ds >> 2; /* Here, DS is 4 aligned for sure. */
+	}
+	qp->wqe = (uint8_t *)cseg;
+	return 1;
+}
+
+static uint16_t
+mlx5_crypto_xts_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops,
+			      uint16_t nb_ops)
+{
+	struct mlx5_crypto_qp *qp = queue_pair;
+	struct mlx5_crypto_priv *priv = qp->priv;
+	struct mlx5_umr_wqe *umr;
+	struct rte_crypto_op *op;
+	uint16_t mask = qp->entries_n - 1;
+	uint16_t remain = qp->entries_n - (qp->pi - qp->ci);
+	uint32_t idx;
+
+	if (remain < nb_ops)
+		nb_ops = remain;
+	else
+		remain = nb_ops;
+	if (unlikely(remain == 0))
+		return 0;
+	do {
+		idx = qp->pi & mask;
+		op = *ops++;
+		umr = RTE_PTR_ADD(qp->qp_obj.umem_buf,
+				  priv->wqe_set_size * idx);
+		if (unlikely(mlx5_crypto_xts_wqe_set(priv, qp, op, umr) == 0)) {
+			qp->stats.enqueue_err_count++;
+			if (remain != nb_ops) {
+				qp->stats.enqueued_count -= remain;
+				break;
+			}
+			return 0;
+		}
+		qp->ops[idx] = op;
+		qp->pi++;
+	} while (--remain);
+	qp->stats.enqueued_count += nb_ops;
+	mlx5_doorbell_ring(&priv->uar.bf_db, *(volatile uint64_t *)qp->wqe,
+			   qp->db_pi, &qp->qp_obj.db_rec[MLX5_SND_DBR],
+			   !priv->uar.dbnc);
+	return nb_ops;
+}
+
+static __rte_noinline void
+mlx5_crypto_xts_cqe_err_handle(struct mlx5_crypto_qp *qp, struct rte_crypto_op *op)
+{
+	const uint32_t idx = qp->ci & (qp->entries_n - 1);
+	volatile struct mlx5_err_cqe *cqe = (volatile struct mlx5_err_cqe *)
+							&qp->cq_obj.cqes[idx];
+
+	op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+	qp->stats.dequeue_err_count++;
+	DRV_LOG(ERR, "CQE ERR:%x.\n", rte_be_to_cpu_32(cqe->syndrome));
+}
+
+static uint16_t
+mlx5_crypto_xts_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
+			      uint16_t nb_ops)
+{
+	struct mlx5_crypto_qp *qp = queue_pair;
+	volatile struct mlx5_cqe *restrict cqe;
+	struct rte_crypto_op *restrict op;
+	const unsigned int cq_size = qp->entries_n;
+	const unsigned int mask = cq_size - 1;
+	uint32_t idx;
+	uint32_t next_idx = qp->ci & mask;
+	const uint16_t max = RTE_MIN((uint16_t)(qp->pi - qp->ci), nb_ops);
+	uint16_t i = 0;
+	int ret;
+
+	if (unlikely(max == 0))
+		return 0;
+	do {
+		idx = next_idx;
+		next_idx = (qp->ci + 1) & mask;
+		op = qp->ops[idx];
+		cqe = &qp->cq_obj.cqes[idx];
+		ret = check_cqe(cqe, cq_size, qp->ci);
+		rte_io_rmb();
+		if (unlikely(ret != MLX5_CQE_STATUS_SW_OWN)) {
+			if (unlikely(ret != MLX5_CQE_STATUS_HW_OWN))
+				mlx5_crypto_xts_cqe_err_handle(qp, op);
+			break;
+		}
+		op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+		ops[i++] = op;
+		qp->ci++;
+	} while (i < max);
+	if (likely(i != 0)) {
+		rte_io_wmb();
+		qp->cq_obj.db_rec[0] = rte_cpu_to_be_32(qp->ci);
+		qp->stats.dequeued_count += i;
+	}
+	return i;
+}
+
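Both burst functions above rely on entries_n being a power of two: the
free-running 16-bit pi/ci counters are compared by unsigned subtraction
(wraparound-safe) and masked for ring indexing. A standalone illustration
of that arithmetic, independent of the driver and not part of this patch:

#include <stdint.h>

struct ring_sketch {
	uint16_t pi;        /* Producer index, free running. */
	uint16_t ci;        /* Consumer index, free running. */
	uint16_t entries_n; /* Must be a power of two. */
};

/* Occupied entries: still valid after pi wraps past 65535, because the
 * subtraction result is reduced modulo 2^16 on conversion to uint16_t.
 */
static inline uint16_t
ring_used(const struct ring_sketch *r)
{
	return r->pi - r->ci;
}

/* Storage slot for a free-running index: the mask replaces modulo. */
static inline uint16_t
ring_slot(const struct ring_sketch *r, uint16_t idx)
{
	return idx & (r->entries_n - 1);
}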
+static void
+mlx5_crypto_xts_qp_init(struct mlx5_crypto_priv *priv, struct mlx5_crypto_qp *qp)
+{
+	uint32_t i;
+
+	for (i = 0 ; i < qp->entries_n; i++) {
+		struct mlx5_wqe_cseg *cseg = RTE_PTR_ADD(qp->qp_obj.umem_buf,
+			i * priv->wqe_set_size);
+		struct mlx5_wqe_umr_cseg *ucseg = (struct mlx5_wqe_umr_cseg *)
+								     (cseg + 1);
+		struct mlx5_wqe_umr_bsf_seg *bsf =
+			(struct mlx5_wqe_umr_bsf_seg *)(RTE_PTR_ADD(cseg,
+						       priv->umr_wqe_size)) - 1;
+		struct mlx5_wqe_rseg *rseg;
+
+		/* Init UMR WQE. */
+		cseg->sq_ds = rte_cpu_to_be_32((qp->qp_obj.qp->id << 8) |
+					 (priv->umr_wqe_size / MLX5_WSEG_SIZE));
+		cseg->flags = RTE_BE32(MLX5_COMP_ONLY_FIRST_ERR <<
+				       MLX5_COMP_MODE_OFFSET);
+		cseg->misc = rte_cpu_to_be_32(qp->mkey[i]->id);
+		ucseg->if_cf_toe_cq_res = RTE_BE32(1u << MLX5_UMRC_IF_OFFSET);
+		ucseg->mkey_mask = RTE_BE64(1u << 0); /* Mkey length bit. */
+		ucseg->ko_to_bs = rte_cpu_to_be_32
+			((MLX5_CRYPTO_KLM_SEGS_NUM(priv->umr_wqe_size) <<
+			 MLX5_UMRC_KO_OFFSET) | (4 << MLX5_UMRC_TO_BS_OFFSET));
+		bsf->keytag = priv->keytag;
+		/* Init RDMA WRITE WQE. */
+		cseg = RTE_PTR_ADD(cseg, priv->umr_wqe_size);
+		cseg->flags = RTE_BE32((MLX5_COMP_ALWAYS <<
+				      MLX5_COMP_MODE_OFFSET) |
+				      MLX5_WQE_CTRL_INITIATOR_SMALL_FENCE);
+		rseg = (struct mlx5_wqe_rseg *)(cseg + 1);
+		rseg->rkey = rte_cpu_to_be_32(qp->mkey[i]->id);
+	}
+}
+
+static void *
+mlx5_crypto_gcm_mkey_klm_update(struct mlx5_crypto_priv *priv,
+				struct mlx5_crypto_qp *qp,
+				uint32_t idx)
+{
+	return RTE_PTR_ADD(qp->qp_obj.umem_buf, priv->wqe_set_size * idx);
+}
+
+static int
+mlx5_crypto_xts_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+				 const struct rte_cryptodev_qp_conf *qp_conf,
+				 int socket_id)
+{
+	struct mlx5_crypto_priv *priv = dev->data->dev_private;
+	struct mlx5_devx_qp_attr attr = {0};
+	struct mlx5_crypto_qp *qp;
+	uint16_t log_nb_desc = rte_log2_u32(qp_conf->nb_descriptors);
+	uint32_t ret;
+	uint32_t alloc_size = sizeof(*qp);
+	uint32_t log_wqbb_n;
+	struct mlx5_devx_cq_attr cq_attr = {
+		.uar_page_id = mlx5_os_get_devx_uar_page_id(priv->uar.obj),
+	};
+	struct mlx5_devx_mkey_attr mkey_attr = {
+		.pd = priv->cdev->pdn,
+		.umr_en = 1,
+		.crypto_en = 1,
+		.set_remote_rw = 1,
+		.klm_num = MLX5_CRYPTO_KLM_SEGS_NUM(priv->umr_wqe_size),
+	};
+
+	if (dev->data->queue_pairs[qp_id] != NULL)
+		mlx5_crypto_xts_queue_pair_release(dev, qp_id);
+	alloc_size = RTE_ALIGN(alloc_size, RTE_CACHE_LINE_SIZE);
+	alloc_size += (sizeof(struct rte_crypto_op *) +
+		       sizeof(struct mlx5_devx_obj *)) *
+		       RTE_BIT32(log_nb_desc);
+	qp = rte_zmalloc_socket(__func__, alloc_size, RTE_CACHE_LINE_SIZE,
+				socket_id);
+	if (qp == NULL) {
+		DRV_LOG(ERR, "Failed to allocate QP memory.");
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
+	if (mlx5_devx_cq_create(priv->cdev->ctx, &qp->cq_obj, log_nb_desc,
+				&cq_attr, socket_id) != 0) {
+		DRV_LOG(ERR, "Failed to create CQ.");
+		goto error;
+	}
+	log_wqbb_n = rte_log2_u32(RTE_BIT32(log_nb_desc) *
+				(priv->wqe_set_size / MLX5_SEND_WQE_BB));
+	attr.pd = priv->cdev->pdn;
+	attr.uar_index = mlx5_os_get_devx_uar_page_id(priv->uar.obj);
+	attr.cqn = qp->cq_obj.cq->id;
+	attr.num_of_receive_wqes = 0;
+	attr.num_of_send_wqbbs = RTE_BIT32(log_wqbb_n);
+	attr.ts_format =
+		mlx5_ts_format_conv(priv->cdev->config.hca_attr.qp_ts_format);
+	ret = mlx5_devx_qp_create(priv->cdev->ctx, &qp->qp_obj,
+				  attr.num_of_send_wqbbs * MLX5_WQE_SIZE,
+				  &attr, socket_id);
+	if (ret) {
+		DRV_LOG(ERR, "Failed to create QP.");
+		goto error;
+	}
+	if (mlx5_mr_ctrl_init(&qp->mr_ctrl, &priv->cdev->mr_scache.dev_gen,
+			      priv->dev_config.socket_id) != 0) {
+		DRV_LOG(ERR, "Cannot allocate MR Btree for qp %u.",
+			(uint32_t)qp_id);
+		rte_errno = ENOMEM;
+		goto error;
+	}
+	/*
+	 * In Order to configure self loopback, when calling devx qp2rts the
+	 * remote QP id that is used is the id of the same QP.
+	 */
+	if (mlx5_devx_qp2rts(&qp->qp_obj, qp->qp_obj.qp->id))
+		goto error;
+	qp->mkey = (struct mlx5_devx_obj **)RTE_ALIGN((uintptr_t)(qp + 1),
+							   RTE_CACHE_LINE_SIZE);
+	qp->ops = (struct rte_crypto_op **)(qp->mkey + RTE_BIT32(log_nb_desc));
+	qp->entries_n = 1 << log_nb_desc;
+	if (mlx5_crypto_indirect_mkeys_prepare(priv, qp, &mkey_attr,
+					       mlx5_crypto_gcm_mkey_klm_update)) {
+		DRV_LOG(ERR, "Cannot allocate indirect memory regions.");
+		rte_errno = ENOMEM;
+		goto error;
+	}
+	mlx5_crypto_xts_qp_init(priv, qp);
+	qp->priv = priv;
+	dev->data->queue_pairs[qp_id] = qp;
+	return 0;
+error:
+	mlx5_crypto_xts_qp_release(qp);
+	return -1;
+}
+
+/*
+ * Calculate UMR WQE size and RDMA Write WQE size with the
+ * following limitations:
+ *	- Each WQE size is multiple of 64.
+ *	- The summarize of both UMR WQE and RDMA_W WQE is a power of 2.
+ *	- The number of entries in the UMR WQE's KLM list is multiple of 4.
+ */
+static void
+mlx5_crypto_xts_get_wqe_sizes(uint32_t segs_num, uint32_t *umr_size,
+			      uint32_t *rdmaw_size)
+{
+	uint32_t diff, wqe_set_size;
+
+	*umr_size = MLX5_CRYPTO_UMR_WQE_STATIC_SIZE +
+			RTE_ALIGN(segs_num, 4) *
+			sizeof(struct mlx5_wqe_dseg);
+	/* Make sure UMR WQE size is multiple of WQBB. */
+	*umr_size = RTE_ALIGN(*umr_size, MLX5_SEND_WQE_BB);
+	*rdmaw_size = sizeof(struct mlx5_rdma_write_wqe) +
+			sizeof(struct mlx5_wqe_dseg) *
+			(segs_num <= 2 ? 2 : 2 +
+			RTE_ALIGN(segs_num - 2, 4));
+	/* Make sure RDMA_WRITE WQE size is multiple of WQBB. */
+	*rdmaw_size = RTE_ALIGN(*rdmaw_size, MLX5_SEND_WQE_BB);
+	wqe_set_size = *rdmaw_size + *umr_size;
+	diff = rte_align32pow2(wqe_set_size) - wqe_set_size;
+	/* Make sure wqe_set size is power of 2. */
+	if (diff)
+		*umr_size += diff;
+}
+
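A worked example of the sizing rules above. The 64-byte WQBB and the
16-byte data segment match the mlx5 PRM; the two static header sizes are
stand-in values for this illustration only, and the helper names are
hypothetical. Only the rounding logic mirrors
mlx5_crypto_xts_get_wqe_sizes():

#include <stdint.h>
#include <stdio.h>

#define BB          64u  /* MLX5_SEND_WQE_BB */
#define DSEG        16u  /* sizeof(struct mlx5_wqe_dseg) */
#define UMR_STATIC 192u  /* stand-in for MLX5_CRYPTO_UMR_WQE_STATIC_SIZE */
#define RDMAW_HDR   48u  /* stand-in for sizeof(struct mlx5_rdma_write_wqe) */

static uint32_t align_up(uint32_t v, uint32_t a) { return (v + a - 1) / a * a; }
static uint32_t next_pow2(uint32_t v) { uint32_t p = 1; while (p < v) p <<= 1; return p; }

int main(void)
{
	uint32_t segs_num = 8;
	/* KLM list entries are rounded up to a multiple of 4. */
	uint32_t umr = align_up(UMR_STATIC + align_up(segs_num, 4) * DSEG, BB);
	uint32_t rdmaw = align_up(RDMAW_HDR + DSEG *
			(segs_num <= 2 ? 2 : 2 + align_up(segs_num - 2, 4)), BB);
	uint32_t set = umr + rdmaw;

	/* Pad the UMR WQE so the whole set becomes a power of two. */
	umr += next_pow2(set) - set;
	printf("umr=%u rdmaw=%u set=%u\n", umr, rdmaw, umr + rdmaw);
	/* With the stand-ins above: umr=768 rdmaw=256 set=1024. */
	return 0;
}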
*/ + if (diff) + *umr_size += diff; +} + +static uint8_t +mlx5_crypto_xts_max_segs_num(uint16_t max_wqe_size) +{ + int klms_sizes = max_wqe_size - MLX5_CRYPTO_UMR_WQE_STATIC_SIZE; + uint32_t max_segs_cap = RTE_ALIGN_FLOOR(klms_sizes, MLX5_SEND_WQE_BB) / + sizeof(struct mlx5_wqe_dseg); + + MLX5_ASSERT(klms_sizes >= MLX5_SEND_WQE_BB); + while (max_segs_cap) { + uint32_t umr_wqe_size, rdmw_wqe_size; + + mlx5_crypto_xts_get_wqe_sizes(max_segs_cap, &umr_wqe_size, + &rdmw_wqe_size); + if (umr_wqe_size <= max_wqe_size && + rdmw_wqe_size <= max_wqe_size) + break; + max_segs_cap -= 4; + } + return max_segs_cap; +} + +static int +mlx5_crypto_xts_configure_wqe_size(struct mlx5_crypto_priv *priv, + uint16_t max_wqe_size, uint32_t max_segs_num) +{ + uint32_t rdmw_wqe_size, umr_wqe_size; + + mlx5_crypto_xts_get_wqe_sizes(max_segs_num, &umr_wqe_size, + &rdmw_wqe_size); + priv->wqe_set_size = rdmw_wqe_size + umr_wqe_size; + if (umr_wqe_size > max_wqe_size || + rdmw_wqe_size > max_wqe_size) { + DRV_LOG(ERR, "Invalid max_segs_num: %u. should be %u or lower.", + max_segs_num, + mlx5_crypto_xts_max_segs_num(max_wqe_size)); + rte_errno = EINVAL; + return -EINVAL; + } + priv->umr_wqe_size = (uint16_t)umr_wqe_size; + priv->umr_wqe_stride = priv->umr_wqe_size / MLX5_SEND_WQE_BB; + priv->max_rdmar_ds = rdmw_wqe_size / sizeof(struct mlx5_wqe_dseg); + return 0; +} + +int +mlx5_crypto_xts_init(struct mlx5_crypto_priv *priv) +{ + struct mlx5_common_device *cdev = priv->cdev; + struct rte_cryptodev *crypto_dev = priv->crypto_dev; + struct rte_cryptodev_ops *dev_ops = crypto_dev->dev_ops; + int ret; + + ret = mlx5_crypto_xts_configure_wqe_size(priv, + cdev->config.hca_attr.max_wqe_sz_sq, priv->max_segs_num); + if (ret) + return -EINVAL; + /* Override AES-XST specified ops. */ + dev_ops->sym_session_configure = mlx5_crypto_xts_sym_session_configure; + dev_ops->queue_pair_setup = mlx5_crypto_xts_queue_pair_setup; + dev_ops->queue_pair_release = mlx5_crypto_xts_queue_pair_release; + crypto_dev->dequeue_burst = mlx5_crypto_xts_dequeue_burst; + crypto_dev->enqueue_burst = mlx5_crypto_xts_enqueue_burst; + priv->caps = mlx5_crypto_caps; + return 0; +} + From patchwork Fri May 26 03:14:15 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suanming Mou X-Patchwork-Id: 127524 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 38F9E42BA3; Fri, 26 May 2023 05:16:04 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 28A5A42D2C; Fri, 26 May 2023 05:15:50 +0200 (CEST) Received: from NAM04-MW2-obe.outbound.protection.outlook.com (mail-mw2nam04on2059.outbound.protection.outlook.com [40.107.101.59]) by mails.dpdk.org (Postfix) with ESMTP id B6D2040A89 for ; Fri, 26 May 2023 05:15:46 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=oDYf492j1NfbHqffJpgXQvmMhn2yTLqKBB+z53oUWLGVML8yznw5k+u6+9bBnr8mB8Rvz9/hf39uhd3WH3UKO+3+jrJiIIw6MnXMLWVS0mGMcdGtnhXRsCTp1exXGA3eGJFEfXe/Uy6J457/eEvPWXbxxozxjgSJa+1oWPRdFGgdHb2gCdiDiVkBC4cGOF7iD+YJULMh4o8yvVtZnanKGL9hVbwBhtVY6PxEqCn7TY41ttXvhOWm6VwdknuUQjAvGoaDUdSxd3xqPQ5rFPYunKormgeu+bTju+Z86S+qPnyNZAl+M8fi1eW5wKlUrSzsgRb+0/V2MkRVZhD+/1b2Yg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; 
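To make the WQE-set sizing rules above concrete, here is a minimal standalone
sketch that applies the same three constraints. The constants are stand-ins,
not the driver's real values: 64-byte WQBBs and 16-byte data segments match
mlx5, but UMR_WQE_STATIC_SIZE and RDMA_WRITE_STATIC_SIZE are assumptions, so
the printed sizes illustrate the padding behaviour only:

#include <stdint.h>
#include <stdio.h>

#define SEND_WQE_BB		64u	/* WQE basic block size */
#define DSEG_SIZE		16u	/* sizeof(struct mlx5_wqe_dseg) */
#define UMR_WQE_STATIC_SIZE	192u	/* assumed static UMR WQE part */
#define RDMA_WRITE_STATIC_SIZE	32u	/* assumed cseg + rseg */

static uint32_t roundup(uint32_t v, uint32_t a) { return (v + a - 1) / a * a; }
static uint32_t pow2ceil(uint32_t v) { uint32_t p = 1; while (p < v) p <<= 1; return p; }

int main(void)
{
	for (uint32_t segs = 1; segs <= 16; segs++) {
		/* KLM list entries are padded to a multiple of 4. */
		uint32_t umr = roundup(UMR_WQE_STATIC_SIZE +
				       roundup(segs, 4) * DSEG_SIZE, SEND_WQE_BB);
		uint32_t rdmaw = roundup(RDMA_WRITE_STATIC_SIZE + DSEG_SIZE *
					 (segs <= 2 ? 2 : 2 + roundup(segs - 2, 4)),
					 SEND_WQE_BB);
		uint32_t set = umr + rdmaw;

		/* Pad the UMR WQE so the whole set is a power of 2. */
		umr += pow2ceil(set) - set;
		printf("segs=%2u umr=%4u rdmaw=%4u set=%4u\n",
		       segs, umr, rdmaw, umr + rdmaw);
	}
	return 0;
}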
From patchwork Fri May 26 03:14:15 2023
From: Suanming Mou
To: Matan Azrad, Viacheslav Ovsiienko
Subject: [PATCH v2 3/9] crypto/mlx5: add AES-GCM query and initialization
Date: Fri, 26 May 2023 06:14:15 +0300
Message-ID: <20230526031422.913377-4-suanmingm@nvidia.com>

AES-GCM provides both authenticated encryption and the ability to check
the integrity and authentication of additional authenticated data (AAD)
that is sent in the clear.

This commit adds the AES-GCM attributes query and initialization function.
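For orientation, the bits queried below can gate whether the PMD exposes
AES-GCM at all. A hedged sketch of such a gate (the struct mirrors the
mlx5_hca_crypto_mmo_attr fields added by this patch; the helper name is
invented for illustration):

#include <stdbool.h>
#include <stdint.h>

struct crypto_mmo_caps {
	uint32_t crypto_mmo_qp:1;
	uint32_t gcm_256_encrypt:1;
	uint32_t gcm_128_encrypt:1;
	uint32_t gcm_256_decrypt:1;
	uint32_t gcm_128_decrypt:1;
	uint32_t gcm_auth_tag_128:1;
	uint32_t gcm_auth_tag_96:1;
	uint32_t log_crypto_mmo_max_size:6;
};

/* Hypothetical helper: true when usable AES-GCM offload is present. */
static bool
crypto_gcm_supported(const struct crypto_mmo_caps *c)
{
	bool keys_ok = (c->gcm_128_encrypt && c->gcm_128_decrypt) ||
		       (c->gcm_256_encrypt && c->gcm_256_decrypt);
	bool tags_ok = c->gcm_auth_tag_128 || c->gcm_auth_tag_96;

	return c->crypto_mmo_qp && keys_ok && tags_ok;
}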
Signed-off-by: Suanming Mou --- drivers/common/mlx5/mlx5_devx_cmds.c | 15 +++++++++++ drivers/common/mlx5/mlx5_devx_cmds.h | 13 ++++++++++ drivers/common/mlx5/mlx5_prm.h | 19 +++++++++++--- drivers/crypto/mlx5/meson.build | 1 + drivers/crypto/mlx5/mlx5_crypto.c | 4 ++- drivers/crypto/mlx5/mlx5_crypto.h | 3 +++ drivers/crypto/mlx5/mlx5_crypto_gcm.c | 36 +++++++++++++++++++++++++++ 7 files changed, 87 insertions(+), 4 deletions(-) create mode 100644 drivers/crypto/mlx5/mlx5_crypto_gcm.c diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c index 1e418a0353..4332081165 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.c +++ b/drivers/common/mlx5/mlx5_devx_cmds.c @@ -1117,6 +1117,21 @@ mlx5_devx_cmd_query_hca_attr(void *ctx, attr->crypto_wrapped_import_method = !!(MLX5_GET(crypto_caps, hcattr, wrapped_import_method) & 1 << 2); + attr->crypto_mmo.crypto_mmo_qp = MLX5_GET(crypto_caps, hcattr, crypto_mmo_qp); + attr->crypto_mmo.gcm_256_encrypt = + MLX5_GET(crypto_caps, hcattr, crypto_aes_gcm_256_encrypt); + attr->crypto_mmo.gcm_128_encrypt = + MLX5_GET(crypto_caps, hcattr, crypto_aes_gcm_128_encrypt); + attr->crypto_mmo.gcm_256_decrypt = + MLX5_GET(crypto_caps, hcattr, crypto_aes_gcm_256_decrypt); + attr->crypto_mmo.gcm_128_decrypt = + MLX5_GET(crypto_caps, hcattr, crypto_aes_gcm_128_decrypt); + attr->crypto_mmo.gcm_auth_tag_128 = + MLX5_GET(crypto_caps, hcattr, gcm_auth_tag_128); + attr->crypto_mmo.gcm_auth_tag_96 = + MLX5_GET(crypto_caps, hcattr, gcm_auth_tag_96); + attr->crypto_mmo.log_crypto_mmo_max_size = + MLX5_GET(crypto_caps, hcattr, log_crypto_mmo_max_size); } if (hca_cap_2_sup) { hcattr = mlx5_devx_get_hca_cap(ctx, in, out, &rc, diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h index dc3359268d..cb3f3a211b 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.h +++ b/drivers/common/mlx5/mlx5_devx_cmds.h @@ -125,6 +125,18 @@ struct mlx5_hca_flex_attr { uint8_t header_length_mask_width; }; +__extension__ +struct mlx5_hca_crypto_mmo_attr { + uint32_t crypto_mmo_qp:1; + uint32_t gcm_256_encrypt:1; + uint32_t gcm_128_encrypt:1; + uint32_t gcm_256_decrypt:1; + uint32_t gcm_128_decrypt:1; + uint32_t gcm_auth_tag_128:1; + uint32_t gcm_auth_tag_96:1; + uint32_t log_crypto_mmo_max_size:6; +}; + /* ISO C restricts enumerator values to range of 'int' */ __extension__ enum { @@ -250,6 +262,7 @@ struct mlx5_hca_attr { struct mlx5_hca_vdpa_attr vdpa; struct mlx5_hca_flow_attr flow; struct mlx5_hca_flex_attr flex; + struct mlx5_hca_crypto_mmo_attr crypto_mmo; int log_max_qp_sz; int log_max_cq_sz; int log_max_qp; diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index 9f749a2dcc..b4446f56b9 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -4577,7 +4577,9 @@ struct mlx5_ifc_crypto_caps_bits { u8 synchronize_dek[0x1]; u8 int_kek_manual[0x1]; u8 int_kek_auto[0x1]; - u8 reserved_at_6[0x12]; + u8 reserved_at_6[0xd]; + u8 sw_wrapped_dek_key_purpose[0x1]; + u8 reserved_at_14[0x4]; u8 wrapped_import_method[0x8]; u8 reserved_at_20[0x3]; u8 log_dek_max_alloc[0x5]; @@ -4594,8 +4596,19 @@ struct mlx5_ifc_crypto_caps_bits { u8 log_dek_granularity[0x5]; u8 reserved_at_68[0x3]; u8 log_max_num_int_kek[0x5]; - u8 reserved_at_70[0x10]; - u8 reserved_at_80[0x780]; + u8 sw_wrapped_dek_new[0x10]; + u8 reserved_at_80[0x80]; + u8 crypto_mmo_qp[0x1]; + u8 crypto_aes_gcm_256_encrypt[0x1]; + u8 crypto_aes_gcm_128_encrypt[0x1]; + u8 crypto_aes_gcm_256_decrypt[0x1]; + u8 crypto_aes_gcm_128_decrypt[0x1]; + 
u8 gcm_auth_tag_128[0x1];
+	u8 gcm_auth_tag_96[0x1];
+	u8 reserved_at_107[0x3];
+	u8 log_crypto_mmo_max_size[0x6];
+	u8 reserved_at_110[0x10];
+	u8 reserved_at_120[0x6e0];
 };
 
 struct mlx5_ifc_crypto_commissioning_register_bits {
diff --git a/drivers/crypto/mlx5/meson.build b/drivers/crypto/mlx5/meson.build
index 045e8ce81d..17ffce89f0 100644
--- a/drivers/crypto/mlx5/meson.build
+++ b/drivers/crypto/mlx5/meson.build
@@ -16,6 +16,7 @@ endif
 sources = files(
         'mlx5_crypto.c',
         'mlx5_crypto_xts.c',
+        'mlx5_crypto_gcm.c',
         'mlx5_crypto_dek.c',
 )
diff --git a/drivers/crypto/mlx5/mlx5_crypto.c b/drivers/crypto/mlx5/mlx5_crypto.c
index 2e6bcc6ddc..ff632cd69a 100644
--- a/drivers/crypto/mlx5/mlx5_crypto.c
+++ b/drivers/crypto/mlx5/mlx5_crypto.c
@@ -335,7 +335,9 @@ mlx5_crypto_dev_probe(struct mlx5_common_device *cdev,
 		rte_errno = ENOTSUP;
 		return -rte_errno;
 	}
-	if (!cdev->config.hca_attr.crypto || !cdev->config.hca_attr.aes_xts) {
+	if (!cdev->config.hca_attr.crypto ||
+	    (!cdev->config.hca_attr.aes_xts &&
+	     !cdev->config.hca_attr.crypto_mmo.crypto_mmo_qp)) {
 		DRV_LOG(ERR, "Not enough capabilities to support crypto "
 			"operations, maybe old FW/OFED version?");
 		rte_errno = ENOTSUP;
diff --git a/drivers/crypto/mlx5/mlx5_crypto.h b/drivers/crypto/mlx5/mlx5_crypto.h
index 05d8fe97fe..76f368ee91 100644
--- a/drivers/crypto/mlx5/mlx5_crypto.h
+++ b/drivers/crypto/mlx5/mlx5_crypto.h
@@ -117,4 +117,7 @@ mlx5_crypto_dek_unset(struct mlx5_crypto_priv *priv);
 int
 mlx5_crypto_xts_init(struct mlx5_crypto_priv *priv);
 
+int
+mlx5_crypto_gcm_init(struct mlx5_crypto_priv *priv);
+
 #endif /* MLX5_CRYPTO_H_ */
diff --git a/drivers/crypto/mlx5/mlx5_crypto_gcm.c b/drivers/crypto/mlx5/mlx5_crypto_gcm.c
new file mode 100644
index 0000000000..bd78c6d66b
--- /dev/null
+++ b/drivers/crypto/mlx5/mlx5_crypto_gcm.c
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2023 NVIDIA Corporation & Affiliates
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+
+#include "mlx5_crypto_utils.h"
+#include "mlx5_crypto.h"
+
+static struct rte_cryptodev_capabilities mlx5_crypto_gcm_caps[] = {
+	{
+		.op = RTE_CRYPTO_OP_TYPE_UNDEFINED,
+	},
+	{
+		.op = RTE_CRYPTO_OP_TYPE_UNDEFINED,
+	}
+};
+
+int
+mlx5_crypto_gcm_init(struct mlx5_crypto_priv *priv)
+{
+	priv->caps = mlx5_crypto_gcm_caps;
+	return 0;
+}
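One practical consumer of the new capability is a size check before building
a GCM operation: log_crypto_mmo_max_size bounds a single crypto MMO transfer.
A hedged sketch (the helper name is invented, and whether the limit covers
payload only or payload plus AAD would need the PRM to confirm):

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical helper: reject operations larger than the device's
 * single crypto-MMO limit, taken from log_crypto_mmo_max_size.
 */
static inline bool
crypto_mmo_len_ok(uint32_t data_len, uint8_t log_crypto_mmo_max_size)
{
	return (uint64_t)data_len <= (UINT64_C(1) << log_crypto_mmo_max_size);
}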
From patchwork Fri May 26 03:14:16 2023
From: Suanming Mou
To: Matan Azrad
Subject: [PATCH v2 4/9] crypto/mlx5: add AES-GCM encryption key
Date: Fri, 26 May 2023 06:14:16 +0300
Message-ID: <20230526031422.913377-5-suanmingm@nvidia.com>

The crypto device requires a DEK (data encryption key) object for
data encryption/decryption operations.

This commit adds AES-GCM DEK object management support.
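For reference, the key-extraction dispatch this patch introduces can be
exercised standalone. This sketch uses the public rte_crypto_sym_xform
layout and mirrors the logic of mlx5_crypto_dek_get_key() (error logging
trimmed); it is an illustration, not the driver function itself:

#include <stdint.h>
#include <rte_crypto_sym.h>

static int
dek_key_of(const struct rte_crypto_sym_xform *xform,
	   const uint8_t **key, uint16_t *len)
{
	switch (xform->type) {
	case RTE_CRYPTO_SYM_XFORM_CIPHER:	/* AES-XTS path */
		*key = xform->cipher.key.data;
		*len = xform->cipher.key.length;
		return 0;
	case RTE_CRYPTO_SYM_XFORM_AEAD:		/* AES-GCM path */
		*key = xform->aead.key.data;
		*len = xform->aead.key.length;
		return 0;
	default:
		return -1;
	}
}

The driver then checksums these key bytes (__rte_raw_cksum) into a 64-bit
hash and uses it as the DEK hlist lookup key, so identical keys share one
DEK object.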
Signed-off-by: Suanming Mou --- drivers/crypto/mlx5/mlx5_crypto.h | 17 ++++- drivers/crypto/mlx5/mlx5_crypto_dek.c | 102 +++++++++++++------------- drivers/crypto/mlx5/mlx5_crypto_gcm.c | 31 ++++++++ drivers/crypto/mlx5/mlx5_crypto_xts.c | 53 ++++++++++++- 4 files changed, 148 insertions(+), 55 deletions(-) diff --git a/drivers/crypto/mlx5/mlx5_crypto.h b/drivers/crypto/mlx5/mlx5_crypto.h index 76f368ee91..bb5a557a38 100644 --- a/drivers/crypto/mlx5/mlx5_crypto.h +++ b/drivers/crypto/mlx5/mlx5_crypto.h @@ -86,6 +86,11 @@ struct mlx5_crypto_session { uint32_t dek_id; /**< DEK ID */ } __rte_packed; +struct mlx5_crypto_dek_ctx { + struct rte_crypto_sym_xform *xform; + struct mlx5_crypto_priv *priv; +}; + typedef void *(*mlx5_crypto_mkey_update_t)(struct mlx5_crypto_priv *priv, struct mlx5_crypto_qp *qp, uint32_t idx); @@ -106,7 +111,7 @@ mlx5_crypto_dek_destroy(struct mlx5_crypto_priv *priv, struct mlx5_crypto_dek * mlx5_crypto_dek_prepare(struct mlx5_crypto_priv *priv, - struct rte_crypto_cipher_xform *cipher); + struct rte_crypto_sym_xform *xform); int mlx5_crypto_dek_setup(struct mlx5_crypto_priv *priv); @@ -120,4 +125,14 @@ mlx5_crypto_xts_init(struct mlx5_crypto_priv *priv); int mlx5_crypto_gcm_init(struct mlx5_crypto_priv *priv); +int +mlx5_crypto_dek_fill_xts_attr(struct mlx5_crypto_dek *dek, + struct mlx5_devx_dek_attr *dek_attr, + void *cb_ctx); + +int +mlx5_crypto_dek_fill_gcm_attr(struct mlx5_crypto_dek *dek, + struct mlx5_devx_dek_attr *dek_attr, + void *cb_ctx); + #endif /* MLX5_CRYPTO_H_ */ diff --git a/drivers/crypto/mlx5/mlx5_crypto_dek.c b/drivers/crypto/mlx5/mlx5_crypto_dek.c index 7339ef2bd9..716bcc0545 100644 --- a/drivers/crypto/mlx5/mlx5_crypto_dek.c +++ b/drivers/crypto/mlx5/mlx5_crypto_dek.c @@ -13,10 +13,24 @@ #include "mlx5_crypto_utils.h" #include "mlx5_crypto.h" -struct mlx5_crypto_dek_ctx { - struct rte_crypto_cipher_xform *cipher; - struct mlx5_crypto_priv *priv; -}; +static int +mlx5_crypto_dek_get_key(struct rte_crypto_sym_xform *xform, + const uint8_t **key, + uint16_t *key_len) +{ + if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER) { + *key = xform->cipher.key.data; + *key_len = xform->cipher.key.length; + } else if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) { + *key = xform->aead.key.data; + *key_len = xform->aead.key.length; + } else { + DRV_LOG(ERR, "Xform dek type not supported."); + rte_errno = -EINVAL; + return -1; + } + return 0; +} int mlx5_crypto_dek_destroy(struct mlx5_crypto_priv *priv, @@ -27,19 +41,22 @@ mlx5_crypto_dek_destroy(struct mlx5_crypto_priv *priv, struct mlx5_crypto_dek * mlx5_crypto_dek_prepare(struct mlx5_crypto_priv *priv, - struct rte_crypto_cipher_xform *cipher) + struct rte_crypto_sym_xform *xform) { + const uint8_t *key; + uint16_t key_len; struct mlx5_hlist *dek_hlist = priv->dek_hlist; struct mlx5_crypto_dek_ctx dek_ctx = { - .cipher = cipher, + .xform = xform, .priv = priv, }; - struct rte_crypto_cipher_xform *cipher_ctx = cipher; - uint64_t key64 = __rte_raw_cksum(cipher_ctx->key.data, - cipher_ctx->key.length, 0); - struct mlx5_list_entry *entry = mlx5_hlist_register(dek_hlist, - key64, &dek_ctx); + uint64_t key64; + struct mlx5_list_entry *entry; + if (mlx5_crypto_dek_get_key(xform, &key, &key_len)) + return NULL; + key64 = __rte_raw_cksum(key, key_len, 0); + entry = mlx5_hlist_register(dek_hlist, key64, &dek_ctx); return entry == NULL ? 
NULL : container_of(entry, struct mlx5_crypto_dek, entry); } @@ -76,76 +93,55 @@ mlx5_crypto_dek_match_cb(void *tool_ctx __rte_unused, struct mlx5_list_entry *entry, void *cb_ctx) { struct mlx5_crypto_dek_ctx *ctx = cb_ctx; - struct rte_crypto_cipher_xform *cipher_ctx = ctx->cipher; + struct rte_crypto_sym_xform *xform = ctx->xform; struct mlx5_crypto_dek *dek = container_of(entry, typeof(*dek), entry); uint32_t key_len = dek->size; + uint16_t xkey_len; + const uint8_t *key; - if (key_len != cipher_ctx->key.length) + if (mlx5_crypto_dek_get_key(xform, &key, &xkey_len)) + return -1; + if (key_len != xkey_len) return -1; - return memcmp(cipher_ctx->key.data, dek->data, cipher_ctx->key.length); + return memcmp(key, dek->data, xkey_len); } static struct mlx5_list_entry * mlx5_crypto_dek_create_cb(void *tool_ctx __rte_unused, void *cb_ctx) { struct mlx5_crypto_dek_ctx *ctx = cb_ctx; - struct rte_crypto_cipher_xform *cipher_ctx = ctx->cipher; + struct rte_crypto_sym_xform *xform = ctx->xform; struct mlx5_crypto_dek *dek = rte_zmalloc(__func__, sizeof(*dek), RTE_CACHE_LINE_SIZE); struct mlx5_devx_dek_attr dek_attr = { .pd = ctx->priv->cdev->pdn, - .key_purpose = MLX5_CRYPTO_KEY_PURPOSE_AES_XTS, - .has_keytag = 1, }; - bool is_wrapped = ctx->priv->is_wrapped_mode; + int ret = -1; if (dek == NULL) { DRV_LOG(ERR, "Failed to allocate dek memory."); return NULL; } - if (is_wrapped) { - switch (cipher_ctx->key.length) { - case 48: - dek->size = 48; - dek_attr.key_size = MLX5_CRYPTO_KEY_SIZE_128b; - break; - case 80: - dek->size = 80; - dek_attr.key_size = MLX5_CRYPTO_KEY_SIZE_256b; - break; - default: - DRV_LOG(ERR, "Wrapped key size not supported."); - return NULL; - } - } else { - switch (cipher_ctx->key.length) { - case 32: - dek->size = 40; - dek_attr.key_size = MLX5_CRYPTO_KEY_SIZE_128b; - break; - case 64: - dek->size = 72; - dek_attr.key_size = MLX5_CRYPTO_KEY_SIZE_256b; - break; - default: - DRV_LOG(ERR, "Key size not supported."); - return NULL; - } - memcpy(&dek_attr.key[cipher_ctx->key.length], - &ctx->priv->keytag, 8); - } - memcpy(&dek_attr.key, cipher_ctx->key.data, cipher_ctx->key.length); + if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER) + ret = mlx5_crypto_dek_fill_xts_attr(dek, &dek_attr, cb_ctx); + else if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) + ret = mlx5_crypto_dek_fill_gcm_attr(dek, &dek_attr, cb_ctx); + if (ret) + goto fail; dek->obj = mlx5_devx_cmd_create_dek_obj(ctx->priv->cdev->ctx, &dek_attr); if (dek->obj == NULL) { - rte_free(dek); - return NULL; + DRV_LOG(ERR, "Failed to create dek obj."); + goto fail; } - memcpy(&dek->data, cipher_ctx->key.data, cipher_ctx->key.length); return &dek->entry; +fail: + rte_free(dek); + return NULL; } + static void mlx5_crypto_dek_remove_cb(void *tool_ctx __rte_unused, struct mlx5_list_entry *entry) diff --git a/drivers/crypto/mlx5/mlx5_crypto_gcm.c b/drivers/crypto/mlx5/mlx5_crypto_gcm.c index bd78c6d66b..676bec6b18 100644 --- a/drivers/crypto/mlx5/mlx5_crypto_gcm.c +++ b/drivers/crypto/mlx5/mlx5_crypto_gcm.c @@ -27,6 +27,37 @@ static struct rte_cryptodev_capabilities mlx5_crypto_gcm_caps[] = { } }; +int +mlx5_crypto_dek_fill_gcm_attr(struct mlx5_crypto_dek *dek, + struct mlx5_devx_dek_attr *dek_attr, + void *cb_ctx) +{ + struct mlx5_crypto_dek_ctx *ctx = cb_ctx; + struct rte_crypto_aead_xform *aead_ctx = &ctx->xform->aead; + + if (aead_ctx->algo != RTE_CRYPTO_AEAD_AES_GCM) { + DRV_LOG(ERR, "Only AES-GCM algo supported."); + return -EINVAL; + } + dek_attr->key_purpose = MLX5_CRYPTO_KEY_PURPOSE_GCM; + switch (aead_ctx->key.length) { + 
case 16:
+		dek->size = 16;
+		dek_attr->key_size = MLX5_CRYPTO_KEY_SIZE_128b;
+		break;
+	case 32:
+		dek->size = 32;
+		dek_attr->key_size = MLX5_CRYPTO_KEY_SIZE_256b;
+		break;
+	default:
+		DRV_LOG(ERR, "Key size not supported.");
+		return -EINVAL;
+	}
+	memcpy(&dek_attr->key, aead_ctx->key.data, aead_ctx->key.length);
+	memcpy(&dek->data, aead_ctx->key.data, aead_ctx->key.length);
+	return 0;
+}
+
 int
 mlx5_crypto_gcm_init(struct mlx5_crypto_priv *priv)
 {
diff --git a/drivers/crypto/mlx5/mlx5_crypto_xts.c b/drivers/crypto/mlx5/mlx5_crypto_xts.c
index 964d02e6ed..661da5f589 100644
--- a/drivers/crypto/mlx5/mlx5_crypto_xts.c
+++ b/drivers/crypto/mlx5/mlx5_crypto_xts.c
@@ -45,6 +45,57 @@ const struct rte_cryptodev_capabilities mlx5_crypto_caps[] = {
 	},
 };
 
+int
+mlx5_crypto_dek_fill_xts_attr(struct mlx5_crypto_dek *dek,
+			      struct mlx5_devx_dek_attr *dek_attr,
+			      void *cb_ctx)
+{
+	struct mlx5_crypto_dek_ctx *ctx = cb_ctx;
+	struct rte_crypto_cipher_xform *cipher_ctx = &ctx->xform->cipher;
+	bool is_wrapped = ctx->priv->is_wrapped_mode;
+
+	if (cipher_ctx->algo != RTE_CRYPTO_CIPHER_AES_XTS) {
+		DRV_LOG(ERR, "Only AES-XTS algo supported.");
+		return -EINVAL;
+	}
+	dek_attr->key_purpose = MLX5_CRYPTO_KEY_PURPOSE_AES_XTS;
+	dek_attr->has_keytag = 1;
+	if (is_wrapped) {
+		switch (cipher_ctx->key.length) {
+		case 48:
+			dek->size = 48;
+			dek_attr->key_size = MLX5_CRYPTO_KEY_SIZE_128b;
+			break;
+		case 80:
+			dek->size = 80;
+			dek_attr->key_size = MLX5_CRYPTO_KEY_SIZE_256b;
+			break;
+		default:
+			DRV_LOG(ERR, "Wrapped key size not supported.");
+			return -EINVAL;
+		}
+	} else {
+		switch (cipher_ctx->key.length) {
+		case 32:
+			dek->size = 40;
+			dek_attr->key_size = MLX5_CRYPTO_KEY_SIZE_128b;
+			break;
+		case 64:
+			dek->size = 72;
+			dek_attr->key_size = MLX5_CRYPTO_KEY_SIZE_256b;
+			break;
+		default:
+			DRV_LOG(ERR, "Key size not supported.");
+			return -EINVAL;
+		}
+		memcpy(&dek_attr->key[cipher_ctx->key.length],
+		       &ctx->priv->keytag, 8);
+	}
+	memcpy(&dek_attr->key, cipher_ctx->key.data, cipher_ctx->key.length);
+	memcpy(&dek->data, cipher_ctx->key.data, cipher_ctx->key.length);
+	return 0;
+}
+
 static int
 mlx5_crypto_xts_sym_session_configure(struct rte_cryptodev *dev,
 				      struct rte_crypto_sym_xform *xform,
@@ -66,7 +117,7 @@ mlx5_crypto_xts_sym_session_configure(struct rte_cryptodev *dev,
 		return -ENOTSUP;
 	}
 	cipher = &xform->cipher;
-	sess_private_data->dek = mlx5_crypto_dek_prepare(priv, cipher);
+	sess_private_data->dek = mlx5_crypto_dek_prepare(priv, xform);
 	if (sess_private_data->dek == NULL) {
 		DRV_LOG(ERR, "Failed to prepare dek.");
 		return -ENOMEM;
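The XTS sizing above is easy to misread: in plaintext (non-wrapped) mode the
8-byte keytag is appended after the two concatenated AES keys, which is why a
32-byte XTS key yields a 40-byte DEK and a 64-byte key a 72-byte DEK. A small
sketch of that payload layout for the AES-128-XTS case (sizes taken from the
switch above; the helper is illustrative only):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Build the non-wrapped AES-128-XTS DEK payload:
 * [ key1 (16B) | key2 (16B) | keytag (8B) ] -> 40 bytes total.
 */
static size_t
xts_dek_payload(uint8_t dst[40], const uint8_t key[32], uint64_t keytag)
{
	memcpy(dst, key, 32);
	memcpy(dst + 32, &keytag, sizeof(keytag));
	return 40;
}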
From patchwork Fri May 26 03:14:17 2023
From: Suanming Mou
To: Matan Azrad, Viacheslav Ovsiienko
Subject: [PATCH v2 5/9] crypto/mlx5: add AES-GCM session configure
Date: Fri, 26 May 2023 06:14:17 +0300
Message-ID: <20230526031422.913377-6-suanmingm@nvidia.com>

Sessions are used in symmetric transformations in order to prepare
objects and data for the packet processing stage. The AES-GCM session
includes the IV, AAD, digest (tag), DEK, and operation mode information.
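For context, this is roughly what the application-side AES-GCM transform
consumed by the session configure below looks like with the public
cryptodev API. The device id, key material, mempool, and the IV/AAD/tag
sizes are placeholders, and the create call is shown with the DPDK >= 22.11
signature; older releases differ:

#include <rte_cryptodev.h>
#include <rte_crypto_sym.h>

static struct rte_cryptodev_sym_session *
create_gcm_session(uint8_t dev_id, struct rte_mempool *sess_mp,
		   uint8_t *key, uint16_t key_len)
{
	struct rte_crypto_sym_xform xform = {
		.type = RTE_CRYPTO_SYM_XFORM_AEAD,
		.aead = {
			.op = RTE_CRYPTO_AEAD_OP_ENCRYPT,
			.algo = RTE_CRYPTO_AEAD_AES_GCM,
			.key = { .data = key, .length = key_len },
			/* IV stored in the op's private area. */
			.iv = { .offset = sizeof(struct rte_crypto_op) +
					  sizeof(struct rte_crypto_sym_op),
				.length = 12 },	/* 96-bit GCM IV */
			.digest_length = 16,	/* 128-bit tag */
			.aad_length = 16,	/* example AAD size */
		},
	};

	return rte_cryptodev_sym_session_create(dev_id, &xform, sess_mp);
}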
Signed-off-by: Suanming Mou --- drivers/common/mlx5/mlx5_prm.h | 12 +++++++ drivers/crypto/mlx5/mlx5_crypto.h | 40 ++++++++++++++++++----- drivers/crypto/mlx5/mlx5_crypto_gcm.c | 47 +++++++++++++++++++++++++++ 3 files changed, 91 insertions(+), 8 deletions(-) diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index b4446f56b9..3b26499a47 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -523,11 +523,23 @@ enum { MLX5_BLOCK_SIZE_4048B = 0x6, }; +enum { + MLX5_ENCRYPTION_TYPE_AES_GCM = 0x3, +}; + +enum { + MLX5_CRYPTO_OP_TYPE_ENCRYPTION = 0x0, + MLX5_CRYPTO_OP_TYPE_DECRYPTION = 0x1, +}; + #define MLX5_BSF_SIZE_OFFSET 30 #define MLX5_BSF_P_TYPE_OFFSET 24 #define MLX5_ENCRYPTION_ORDER_OFFSET 16 #define MLX5_BLOCK_SIZE_OFFSET 24 +#define MLX5_CRYPTO_MMO_TYPE_OFFSET 24 +#define MLX5_CRYPTO_MMO_OP_OFFSET 20 + struct mlx5_wqe_umr_bsf_seg { /* * bs_bpt_eo_es contains: diff --git a/drivers/crypto/mlx5/mlx5_crypto.h b/drivers/crypto/mlx5/mlx5_crypto.h index bb5a557a38..6cb4d4ddec 100644 --- a/drivers/crypto/mlx5/mlx5_crypto.h +++ b/drivers/crypto/mlx5/mlx5_crypto.h @@ -72,16 +72,40 @@ struct mlx5_crypto_devarg_params { }; struct mlx5_crypto_session { - uint32_t bs_bpt_eo_es; - /**< bsf_size, bsf_p_type, encryption_order and encryption standard, - * saved in big endian format. - */ - uint32_t bsp_res; - /**< crypto_block_size_pointer and reserved 24 bits saved in big - * endian format. - */ + union { + /**< AES-XTS configuration. */ + struct { + uint32_t bs_bpt_eo_es; + /**< bsf_size, bsf_p_type, encryption_order and encryption standard, + * saved in big endian format. + */ + uint32_t bsp_res; + /**< crypto_block_size_pointer and reserved 24 bits saved in big + * endian format. + */ + }; + /**< AES-GCM configuration. */ + struct { + uint32_t mmo_ctrl; + /**< Crypto control fields with algo type and op type in big + * endian format. + */ + uint32_t wqe_aad_len; + /**< Crypto AAD length field in big endian format. */ + uint32_t wqe_tag_len; + /**< Crypto tag length field in big endian format. */ + uint16_t tag_len; + /**< AES-GCM crypto digest size in bytes. */ + uint16_t aad_len; + /**< The length of the additional authenticated data (AAD) in bytes. */ + uint32_t op_type; + /**< Operation type. */ + }; + }; uint32_t iv_offset:16; /**< Starting point for Initialisation Vector. */ + uint32_t iv_len; + /**< Initialisation Vector length. */ struct mlx5_crypto_dek *dek; /**< Pointer to dek struct. 
*/ uint32_t dek_id; /**< DEK ID */ } __rte_packed; diff --git a/drivers/crypto/mlx5/mlx5_crypto_gcm.c b/drivers/crypto/mlx5/mlx5_crypto_gcm.c index 676bec6b18..6b6a3df57c 100644 --- a/drivers/crypto/mlx5/mlx5_crypto_gcm.c +++ b/drivers/crypto/mlx5/mlx5_crypto_gcm.c @@ -58,9 +58,56 @@ mlx5_crypto_dek_fill_gcm_attr(struct mlx5_crypto_dek *dek, return 0; } +static int +mlx5_crypto_sym_gcm_session_configure(struct rte_cryptodev *dev, + struct rte_crypto_sym_xform *xform, + struct rte_cryptodev_sym_session *session) +{ + struct mlx5_crypto_priv *priv = dev->data->dev_private; + struct mlx5_crypto_session *sess_private_data = CRYPTODEV_GET_SYM_SESS_PRIV(session); + struct rte_crypto_aead_xform *aead = &xform->aead; + uint32_t op_type; + + if (unlikely(xform->next != NULL)) { + DRV_LOG(ERR, "Xform next is not supported."); + return -ENOTSUP; + } + if (aead->algo != RTE_CRYPTO_AEAD_AES_GCM) { + DRV_LOG(ERR, "Only AES-GCM algorithm is supported."); + return -ENOTSUP; + } + if (aead->op == RTE_CRYPTO_AEAD_OP_ENCRYPT) + op_type = MLX5_CRYPTO_OP_TYPE_ENCRYPTION; + else + op_type = MLX5_CRYPTO_OP_TYPE_DECRYPTION; + sess_private_data->op_type = op_type; + sess_private_data->mmo_ctrl = rte_cpu_to_be_32 + (op_type << MLX5_CRYPTO_MMO_OP_OFFSET | + MLX5_ENCRYPTION_TYPE_AES_GCM << MLX5_CRYPTO_MMO_TYPE_OFFSET); + sess_private_data->aad_len = aead->aad_length; + sess_private_data->tag_len = aead->digest_length; + sess_private_data->iv_offset = aead->iv.offset; + sess_private_data->iv_len = aead->iv.length; + sess_private_data->dek = mlx5_crypto_dek_prepare(priv, xform); + if (sess_private_data->dek == NULL) { + DRV_LOG(ERR, "Failed to prepare dek."); + return -ENOMEM; + } + sess_private_data->dek_id = + rte_cpu_to_be_32(sess_private_data->dek->obj->id & + 0xffffff); + DRV_LOG(DEBUG, "Session %p was configured.", sess_private_data); + return 0; +} + int mlx5_crypto_gcm_init(struct mlx5_crypto_priv *priv) { + struct rte_cryptodev *crypto_dev = priv->crypto_dev; + struct rte_cryptodev_ops *dev_ops = crypto_dev->dev_ops; + + /* Override AES-GCM specified ops. 
 */
+	dev_ops->sym_session_configure = mlx5_crypto_sym_gcm_session_configure;
 	priv->caps = mlx5_crypto_gcm_caps;
 	return 0;
 }

From patchwork Fri May 26 03:14:18 2023
From: Suanming Mou
To: Matan Azrad, Viacheslav Ovsiienko
Subject: [PATCH v2 6/9] common/mlx5: add WQE-based QP synchronous basics
Date: Fri, 26 May 2023 06:14:18 +0300
Message-ID: <20230526031422.913377-7-suanmingm@nvidia.com>
Nvidia HW provides a synchronous mechanism between QPs. When creating
the QPs, the user can set one as primary and the other as follower. The
follower QP's WQE execution can then be controlled by the primary QP
via a SEND_EN WQE.

This commit introduces the SEND_EN WQE to improve the WQE execution
sync-up between primary and follower QPs.

Signed-off-by: Suanming Mou
---
 drivers/common/mlx5/mlx5_devx_cmds.c |  6 ++++++
 drivers/common/mlx5/mlx5_devx_cmds.h |  3 +++
 drivers/common/mlx5/mlx5_prm.h       | 11 +++++++++++
 3 files changed, 20 insertions(+)

diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index 4332081165..ef87862a6d 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -2475,6 +2475,12 @@ mlx5_devx_cmd_create_qp(void *ctx,
 			 attr->dbr_umem_valid);
 		MLX5_SET(qpc, qpc, dbr_umem_id, attr->dbr_umem_id);
 	}
+	if (attr->cd_master)
+		MLX5_SET(qpc, qpc, cd_master, attr->cd_master);
+	if (attr->cd_slave_send)
+		MLX5_SET(qpc, qpc, cd_slave_send, attr->cd_slave_send);
+	if (attr->cd_slave_recv)
+		MLX5_SET(qpc, qpc, cd_slave_receive, attr->cd_slave_recv);
 	MLX5_SET64(qpc, qpc, dbr_addr, attr->dbr_address);
 	MLX5_SET64(create_qp_in, in, wq_umem_offset,
 		   attr->wq_umem_offset);
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index cb3f3a211b..e071cd841f 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -559,6 +559,9 @@ struct mlx5_devx_qp_attr {
 	uint64_t wq_umem_offset;
 	uint32_t user_index:24;
 	uint32_t mmo:1;
+	uint32_t cd_master:1;
+	uint32_t cd_slave_send:1;
+	uint32_t cd_slave_recv:1;
 };
 
 struct mlx5_devx_virtio_q_couners_attr {
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 3b26499a47..96d5eb8de3 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -589,6 +589,17 @@ struct mlx5_rdma_write_wqe {
 	struct mlx5_wqe_dseg dseg[];
 } __rte_packed;
 
+struct mlx5_wqe_send_en_seg {
+	uint32_t reserve[2];
+	uint32_t sqnpc;
+	uint32_t qpn;
+} __rte_packed;
+
+struct mlx5_wqe_send_en_wqe {
+	struct mlx5_wqe_cseg ctr;
+	struct mlx5_wqe_send_en_seg sseg;
+} __rte_packed;
+
 #ifdef PEDANTIC
 #pragma GCC diagnostic error "-Wpedantic"
 #endif
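To make the primary/follower mechanism concrete, here is a hedged sketch of
filling the data segment of one SEND_EN WQE with the structures added above.
The field encodings are assumptions inferred from the segment layout (sqnpc
carrying the follower SQ's producer counter, qpn its QP number, both
big-endian); control-segment opcode setup is omitted:

#include <rte_byteorder.h>
#include <mlx5_prm.h>

/* Sketch: arm the follower QP so it may execute WQEs up to 'wqe_idx'.
 * Posted on the primary (cd_master) QP; the follower was created with
 * cd_slave_send set.
 */
static void
send_en_wqe_fill(struct mlx5_wqe_send_en_wqe *wqe,
		 uint32_t follower_qpn, uint16_t wqe_idx)
{
	wqe->sseg.sqnpc = rte_cpu_to_be_32(wqe_idx);
	wqe->sseg.qpn = rte_cpu_to_be_32(follower_qpn);
}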
05:15:59 +0200 (CEST) Received: from NAM12-BN8-obe.outbound.protection.outlook.com (mail-bn8nam12on2040.outbound.protection.outlook.com [40.107.237.40]) by mails.dpdk.org (Postfix) with ESMTP id 66DA942D51 for ; Fri, 26 May 2023 05:15:56 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=WznoRN4URrTpbFzH92nG130/iF2tpX8vpJzlYNKY5lOHrqrRTuCnOciEu2MkyBWd5sq02/zbS1btqnIg47drFmW6y2xMl3kDp27TBAD/74o5zUsBzWwcRDffV6CrlKZqqygtpz+cCYNoS23y2nYWbUQ/0uoYpfSn66NkUNoTGXEd7MrF5CW4l9onYQ52QOc6PtcG0RtEDikG/sYs0k6yPq4dxHB0xdvBGKjQMBwWv6JyoF73uCpqOqrKrXawS0NpGpHyqrqzGog27FXwaOR5vNGOyhLUyPoRzqXN2UubXopKrxTQQIsg/iid0Z7TZQCWlEbI3/3+rWnbv4vNUqpWJg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=SRXusWgaj8C9LdWZeW2Fubj8lsJN/d+UbxmtGCsf5Mk=; b=QfI+Bps0qETNzOwb8bao0RYzOPGnGJz5I614dBqfE84ANz0GTWVAa0Xgy6gQcvA+5F8pSi/OPFvVxFWaAY+ykM/mS2kith5M9pePamUTuUypvcA+xz63BFiljoNR/XGwznVpf17GIfaHdv2MwJGbAEvYuzzH1QQqZy5YOjbTAtFM3QnVz985Wnu3XVA+wxP/tDLLpz9fNfikq0nrsecIj0R+cjv4yAsW/DvdlK8xp0tQVL20sCyQHPpGBQ+Jc5AW/+5y8y8xRqK9l00CPU6FEEyxbKtUQN5S6mntcAVx/cISUkgLviqkXU8JTP84hsZewHTkOJZwHCFAEyZb+ImY4w== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 216.228.117.161) smtp.rcpttodomain=dpdk.org smtp.mailfrom=nvidia.com; dmarc=pass (p=reject sp=reject pct=100) action=none header.from=nvidia.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=SRXusWgaj8C9LdWZeW2Fubj8lsJN/d+UbxmtGCsf5Mk=; b=hafuRgwU12O7WZJBIdlE4JByhMTpJ/vQzfSVy3qO4AkmRXk64RX9lsyW+0v/pvd115F2+bioLm8itR851qkxwwUAp2y431hX3nbukN0u5LYg7KxXVyBrTobkWjMLGggLj3pgPMo1fDCrHxZwsb8f2RGNVYl4Kwbewez9R333JWD5c4R+kB4CFflubENpVLOAect11hgkQrzodxJ2eMXoaSsrqpdves2vYFnu25Ua74HJ/eZsQ7/Mso+USy9HbB0purbtXQX7C5KOjItCmidGJtA/vUXVmMcMyhCB0ZLdq1gvDvq7VWNTX/icgM60n7eMuiv/Pw6m0x7co9PPBqKDrQ== Received: from BN0PR10CA0028.namprd10.prod.outlook.com (2603:10b6:408:143::34) by MW6PR12MB8898.namprd12.prod.outlook.com (2603:10b6:303:246::8) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.16; Fri, 26 May 2023 03:15:53 +0000 Received: from BN8NAM11FT091.eop-nam11.prod.protection.outlook.com (2603:10b6:408:143:cafe::5c) by BN0PR10CA0028.outlook.office365.com (2603:10b6:408:143::34) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.18 via Frontend Transport; Fri, 26 May 2023 03:15:52 +0000 X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 216.228.117.161) smtp.mailfrom=nvidia.com; dkim=none (message not signed) header.d=none;dmarc=pass action=none header.from=nvidia.com; Received-SPF: Pass (protection.outlook.com: domain of nvidia.com designates 216.228.117.161 as permitted sender) receiver=protection.outlook.com; client-ip=216.228.117.161; helo=mail.nvidia.com; pr=C Received: from mail.nvidia.com (216.228.117.161) by BN8NAM11FT091.mail.protection.outlook.com (10.13.176.134) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.18 via Frontend Transport; Fri, 26 May 2023 03:15:52 +0000 Received: from rnnvmail201.nvidia.com (10.129.68.8) by mail.nvidia.com (10.129.200.67) with Microsoft SMTP 
From patchwork Fri May 26 03:14:19 2023
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 127529
X-Patchwork-Delegate: gakhil@marvell.com
From: Suanming Mou
To: Matan Azrad, Viacheslav Ovsiienko
Subject: [PATCH v2 7/9] crypto/mlx5: add queue pair setup for GCM
Date: Fri, 26 May 2023 06:14:19 +0300
Message-ID: <20230526031422.913377-8-suanmingm@nvidia.com>
In-Reply-To: <20230526031422.913377-1-suanmingm@nvidia.com>
References: <20230418092325.2578712-1-suanmingm@nvidia.com>
 <20230526031422.913377-1-suanmingm@nvidia.com>
List-Id: DPDK patches and discussions

The crypto queue pair handles the encryption/decryption operations. The
AES-GCM AEAD API provides the AAD, mbuf and digest separately, while the
low-level FW accepts the data only in a single contiguous memory region.
Two internal QPs are therefore created per AES-GCM queue pair: a UMR QP
that organizes the memory to be contiguous when it is not, and a crypto
QP that performs the encryption/decryption.

If the buffers are found to be implicitly contiguous, they are sent to
the crypto QP directly; otherwise they are first handled by the UMR QP,
which maps them into one contiguous address space, and the well-organized
"new" buffer is then handled by the crypto QP.

The crypto QP is initialized as follower and the UMR QP as leader. When
a crypto operation's input buffer requires address-space conversion by
the UMR QP, the crypto QP processing is triggered by the UMR QP;
otherwise the crypto QP doorbell is rung directly. The existing
max_segs_num devarg defines how many segments a chained mbuf may
contain, as it did for AES-XTS before.
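To make the sizing in the diff below easier to follow (illustrative, not
from the patch): the UMR QP is sized so every descriptor can carry one UMR
WQE (3 WQEBBs) plus one SEND_EN WQE (1 WQEBB), and the registered buffer is
laid out so the KLM array behind the opaque area starts on a 2KB boundary.
A self-contained model of that arithmetic, with constant values assumed
from the mlx5 PRM definitions in this series:

#include <stdint.h>
#include <stdio.h>

#define SEND_WQE_BB       64u         /* MLX5_SEND_WQE_BB */
#define UMR_KLM_PTR_ALIGN (1u << 11)  /* 2KB, MLX5_UMR_KLM_PTR_ALIGN */
#define KLM_SIZE          16u         /* sizeof(struct mlx5_klm), assumed */
#define OPAQUE_SIZE       64u         /* sizeof(union mlx5_gga_crypto_opaque) */

static uint32_t align_up(uint32_t v, uint32_t a) { return (v + a - 1) / a * a; }

int main(void)
{
	uint32_t entries = 256, max_klm_num = 128;
	/* One UMR WQE (3 WQEBBs) + one SEND_EN WQE (1 WQEBB) per descriptor. */
	uint32_t umr_wqbbs = entries * (3 + 1);
	/* Opaque area padded to 2KB so the KLM array behind it is aligned. */
	uint32_t opaq_size = align_up(OPAQUE_SIZE * entries, UMR_KLM_PTR_ALIGN);
	uint32_t mr_size = opaq_size + max_klm_num * KLM_SIZE * entries;

	printf("UMR SQ: %u WQEBBs (%u bytes), MR: %u bytes (KLMs at +%u)\n",
	       umr_wqbbs, umr_wqbbs * SEND_WQE_BB, mr_size, opaq_size);
	return 0;
}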
Signed-off-by: Suanming Mou --- drivers/common/mlx5/mlx5_common_mr.h | 1 + drivers/common/mlx5/mlx5_prm.h | 22 +++ drivers/common/mlx5/version.map | 2 + drivers/crypto/mlx5/mlx5_crypto.h | 15 ++ drivers/crypto/mlx5/mlx5_crypto_gcm.c | 230 ++++++++++++++++++++++++++ 5 files changed, 270 insertions(+) diff --git a/drivers/common/mlx5/mlx5_common_mr.h b/drivers/common/mlx5/mlx5_common_mr.h index 66623868a2..8789d403b1 100644 --- a/drivers/common/mlx5/mlx5_common_mr.h +++ b/drivers/common/mlx5/mlx5_common_mr.h @@ -254,6 +254,7 @@ __rte_internal void mlx5_common_verbs_dereg_mr(struct mlx5_pmd_mr *pmd_mr); +__rte_internal void mlx5_os_set_reg_mr_cb(mlx5_reg_mr_t *reg_mr_cb, mlx5_dereg_mr_t *dereg_mr_cb); diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index 96d5eb8de3..a502e29bd8 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -470,6 +470,15 @@ struct mlx5_wqe_rseg { #define MLX5_UMRC_KO_OFFSET 16u #define MLX5_UMRC_TO_BS_OFFSET 0u +/* + * As PRM describes, the address of the UMR pointer must be + * aligned to 2KB. + */ +#define MLX5_UMR_KLM_PTR_ALIGN (1 << 11) + +#define MLX5_UMR_KLM_NUM_ALIGN \ + (MLX5_UMR_KLM_PTR_ALIGN / sizeof(struct mlx5_klm)) + struct mlx5_wqe_umr_cseg { uint32_t if_cf_toe_cq_res; uint32_t ko_to_bs; @@ -674,6 +683,19 @@ union mlx5_gga_compress_opaque { uint32_t data[64]; }; +union mlx5_gga_crypto_opaque { + struct { + uint32_t syndrome; + uint32_t reserved0[2]; + struct { + uint32_t iv[3]; + uint32_t tag_size; + uint32_t aad_size; + } cp __rte_packed; + } __rte_packed; + uint8_t data[64]; +}; + struct mlx5_ifc_regexp_mmo_control_bits { uint8_t reserved_at_31[0x2]; uint8_t le[0x1]; diff --git a/drivers/common/mlx5/version.map b/drivers/common/mlx5/version.map index f860b069de..0758ba76de 100644 --- a/drivers/common/mlx5/version.map +++ b/drivers/common/mlx5/version.map @@ -159,5 +159,7 @@ INTERNAL { mlx5_os_interrupt_handler_create; # WINDOWS_NO_EXPORT mlx5_os_interrupt_handler_destroy; # WINDOWS_NO_EXPORT + + mlx5_os_set_reg_mr_cb; local: *; }; diff --git a/drivers/crypto/mlx5/mlx5_crypto.h b/drivers/crypto/mlx5/mlx5_crypto.h index 6cb4d4ddec..88a09a6b1c 100644 --- a/drivers/crypto/mlx5/mlx5_crypto.h +++ b/drivers/crypto/mlx5/mlx5_crypto.h @@ -28,8 +28,11 @@ struct mlx5_crypto_priv { TAILQ_ENTRY(mlx5_crypto_priv) next; struct mlx5_common_device *cdev; /* Backend mlx5 device.
*/ struct rte_cryptodev *crypto_dev; + mlx5_reg_mr_t reg_mr_cb; /* Callback to reg_mr func */ + mlx5_dereg_mr_t dereg_mr_cb; /* Callback to dereg_mr func */ struct mlx5_uar uar; /* User Access Region. */ uint32_t max_segs_num; /* Maximum supported data segs. */ + uint32_t max_klm_num; /* Maximum supported klm. */ struct mlx5_hlist *dek_hlist; /* Dek hash list. */ const struct rte_cryptodev_capabilities *caps; struct rte_cryptodev_config dev_config; @@ -46,15 +49,27 @@ struct mlx5_crypto_qp { struct mlx5_crypto_priv *priv; struct mlx5_devx_cq cq_obj; struct mlx5_devx_qp qp_obj; + struct mlx5_devx_qp umr_qp_obj; struct rte_cryptodev_stats stats; struct rte_crypto_op **ops; struct mlx5_devx_obj **mkey; /* WQE's indirect mekys. */ + struct mlx5_klm *klm_array; + union mlx5_gga_crypto_opaque *opaque_addr; struct mlx5_mr_ctrl mr_ctrl; + struct mlx5_pmd_mr mr; + /* Crypto QP. */ uint8_t *wqe; uint16_t entries_n; + uint16_t cq_entries_n; uint16_t pi; uint16_t ci; uint16_t db_pi; + /* UMR QP. */ + uint8_t *umr_wqe; + uint16_t umr_wqbbs; + uint16_t umr_pi; + uint16_t umr_ci; + uint32_t umr_errors; }; struct mlx5_crypto_dek { diff --git a/drivers/crypto/mlx5/mlx5_crypto_gcm.c b/drivers/crypto/mlx5/mlx5_crypto_gcm.c index 6b6a3df57c..dfef5455b4 100644 --- a/drivers/crypto/mlx5/mlx5_crypto_gcm.c +++ b/drivers/crypto/mlx5/mlx5_crypto_gcm.c @@ -18,6 +18,20 @@ #include "mlx5_crypto_utils.h" #include "mlx5_crypto.h" +/* + * AES-GCM uses indirect KLM mode. The UMR WQE comprises of WQE control + + * UMR control + mkey context + indirect KLM. The WQE size is aligned to + * be 3 WQEBBS. + */ +#define MLX5_UMR_GCM_WQE_SIZE \ + (RTE_ALIGN(sizeof(struct mlx5_umr_wqe) + sizeof(struct mlx5_wqe_dseg), \ + MLX5_SEND_WQE_BB)) + +#define MLX5_UMR_GCM_WQE_SET_SIZE \ + (MLX5_UMR_GCM_WQE_SIZE + \ + RTE_ALIGN(sizeof(struct mlx5_wqe_send_en_wqe), \ + MLX5_SEND_WQE_BB)) + static struct rte_cryptodev_capabilities mlx5_crypto_gcm_caps[] = { { .op = RTE_CRYPTO_OP_TYPE_UNDEFINED, @@ -84,6 +98,8 @@ mlx5_crypto_sym_gcm_session_configure(struct rte_cryptodev *dev, sess_private_data->mmo_ctrl = rte_cpu_to_be_32 (op_type << MLX5_CRYPTO_MMO_OP_OFFSET | MLX5_ENCRYPTION_TYPE_AES_GCM << MLX5_CRYPTO_MMO_TYPE_OFFSET); + sess_private_data->wqe_aad_len = rte_cpu_to_be_32((uint32_t)aead->aad_length); + sess_private_data->wqe_tag_len = rte_cpu_to_be_32((uint32_t)aead->digest_length); sess_private_data->aad_len = aead->aad_length; sess_private_data->tag_len = aead->digest_length; sess_private_data->iv_offset = aead->iv.offset; @@ -100,6 +116,216 @@ mlx5_crypto_sym_gcm_session_configure(struct rte_cryptodev *dev, return 0; } +static void * +mlx5_crypto_gcm_mkey_klm_update(struct mlx5_crypto_priv *priv, + struct mlx5_crypto_qp *qp __rte_unused, + uint32_t idx) +{ + return &qp->klm_array[idx * priv->max_klm_num]; +} + +static int +mlx5_crypto_gcm_qp_release(struct rte_cryptodev *dev, uint16_t qp_id) +{ + struct mlx5_crypto_priv *priv = dev->data->dev_private; + struct mlx5_crypto_qp *qp = dev->data->queue_pairs[qp_id]; + + if (qp->umr_qp_obj.qp != NULL) + mlx5_devx_qp_destroy(&qp->umr_qp_obj); + if (qp->qp_obj.qp != NULL) + mlx5_devx_qp_destroy(&qp->qp_obj); + if (qp->cq_obj.cq != NULL) + mlx5_devx_cq_destroy(&qp->cq_obj); + if (qp->mr.obj != NULL) { + void *opaq = qp->mr.addr; + + priv->dereg_mr_cb(&qp->mr); + rte_free(opaq); + } + mlx5_crypto_indirect_mkeys_release(qp, qp->entries_n); + mlx5_mr_btree_free(&qp->mr_ctrl.cache_bh); + rte_free(qp); + dev->data->queue_pairs[qp_id] = NULL; + return 0; +} + +static void +mlx5_crypto_gcm_init_qp(struct 
mlx5_crypto_qp *qp) +{ + volatile struct mlx5_gga_wqe *restrict wqe = + (volatile struct mlx5_gga_wqe *)qp->qp_obj.wqes; + volatile union mlx5_gga_crypto_opaque *opaq = qp->opaque_addr; + const uint32_t sq_ds = rte_cpu_to_be_32((qp->qp_obj.qp->id << 8) | 4u); + const uint32_t flags = RTE_BE32(MLX5_COMP_ALWAYS << + MLX5_COMP_MODE_OFFSET); + const uint32_t opaq_lkey = rte_cpu_to_be_32(qp->mr.lkey); + int i; + + /* All the next fields state should stay constant. */ + for (i = 0; i < qp->entries_n; ++i, ++wqe) { + wqe->sq_ds = sq_ds; + wqe->flags = flags; + wqe->opaque_lkey = opaq_lkey; + wqe->opaque_vaddr = rte_cpu_to_be_64((uint64_t)(uintptr_t)&opaq[i]); + } +} + +static inline int +mlx5_crypto_gcm_umr_qp_setup(struct rte_cryptodev *dev, struct mlx5_crypto_qp *qp, + int socket_id) +{ + struct mlx5_crypto_priv *priv = dev->data->dev_private; + struct mlx5_devx_qp_attr attr = {0}; + uint32_t ret; + uint32_t log_wqbb_n; + + /* Set UMR + SEND_EN WQE as maximum same with crypto. */ + log_wqbb_n = rte_log2_u32(qp->entries_n * + (MLX5_UMR_GCM_WQE_SET_SIZE / MLX5_SEND_WQE_BB)); + attr.pd = priv->cdev->pdn; + attr.uar_index = mlx5_os_get_devx_uar_page_id(priv->uar.obj); + attr.cqn = qp->cq_obj.cq->id; + attr.num_of_receive_wqes = 0; + attr.num_of_send_wqbbs = RTE_BIT32(log_wqbb_n); + attr.ts_format = + mlx5_ts_format_conv(priv->cdev->config.hca_attr.qp_ts_format); + attr.cd_master = 1; + ret = mlx5_devx_qp_create(priv->cdev->ctx, &qp->umr_qp_obj, + attr.num_of_send_wqbbs * MLX5_SEND_WQE_BB, + &attr, socket_id); + if (ret) { + DRV_LOG(ERR, "Failed to create UMR QP."); + return -1; + } + if (mlx5_devx_qp2rts(&qp->umr_qp_obj, qp->umr_qp_obj.qp->id)) { + DRV_LOG(ERR, "Failed to change UMR QP state to RTS."); + return -1; + } + /* Save the UMR WQEBBS for checking the WQE boundary. */ + qp->umr_wqbbs = attr.num_of_send_wqbbs; + return 0; +} + +static int +mlx5_crypto_gcm_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id, + const struct rte_cryptodev_qp_conf *qp_conf, + int socket_id) +{ + struct mlx5_crypto_priv *priv = dev->data->dev_private; + struct mlx5_hca_attr *attr = &priv->cdev->config.hca_attr; + struct mlx5_crypto_qp *qp; + struct mlx5_devx_cq_attr cq_attr = { + .uar_page_id = mlx5_os_get_devx_uar_page_id(priv->uar.obj), + }; + struct mlx5_devx_qp_attr qp_attr = { + .pd = priv->cdev->pdn, + .uar_index = mlx5_os_get_devx_uar_page_id(priv->uar.obj), + .user_index = qp_id, + }; + struct mlx5_devx_mkey_attr mkey_attr = { + .pd = priv->cdev->pdn, + .umr_en = 1, + .klm_num = priv->max_klm_num, + }; + uint32_t log_ops_n = rte_log2_u32(qp_conf->nb_descriptors); + uint32_t entries = RTE_BIT32(log_ops_n); + uint32_t alloc_size = sizeof(*qp); + size_t mr_size, opaq_size; + void *mr_buf; + int ret; + + alloc_size = RTE_ALIGN(alloc_size, RTE_CACHE_LINE_SIZE); + alloc_size += (sizeof(struct rte_crypto_op *) + + sizeof(struct mlx5_devx_obj *)) * entries; + qp = rte_zmalloc_socket(__func__, alloc_size, RTE_CACHE_LINE_SIZE, + socket_id); + if (qp == NULL) { + DRV_LOG(ERR, "Failed to allocate qp memory."); + rte_errno = ENOMEM; + return -rte_errno; + } + qp->priv = priv; + qp->entries_n = entries; + if (mlx5_mr_ctrl_init(&qp->mr_ctrl, &priv->cdev->mr_scache.dev_gen, + priv->dev_config.socket_id)) { + DRV_LOG(ERR, "Cannot allocate MR Btree for qp %u.", + (uint32_t)qp_id); + rte_errno = ENOMEM; + goto err; + } + /* + * The following KLM pointer must be aligned with + * MLX5_UMR_KLM_PTR_ALIGN. Aligned opaq_size here + * to make the KLM pointer with offset be aligned. 
+ */ + opaq_size = RTE_ALIGN(sizeof(union mlx5_gga_crypto_opaque) * entries, + MLX5_UMR_KLM_PTR_ALIGN); + mr_size = (priv->max_klm_num * sizeof(struct mlx5_klm) * entries) + opaq_size; + mr_buf = rte_calloc(__func__, (size_t)1, mr_size, MLX5_UMR_KLM_PTR_ALIGN); + if (mr_buf == NULL) { + DRV_LOG(ERR, "Failed to allocate mr memory."); + rte_errno = ENOMEM; + goto err; + } + if (priv->reg_mr_cb(priv->cdev->pd, mr_buf, mr_size, &qp->mr) != 0) { + rte_free(mr_buf); + DRV_LOG(ERR, "Failed to register opaque MR."); + rte_errno = ENOMEM; + goto err; + } + qp->opaque_addr = qp->mr.addr; + qp->klm_array = RTE_PTR_ADD(qp->opaque_addr, opaq_size); + /* + * Triple the CQ size as UMR QP which contains UMR and SEND_EN WQE + * will share this CQ . + */ + qp->cq_entries_n = rte_align32pow2(entries * 3); + ret = mlx5_devx_cq_create(priv->cdev->ctx, &qp->cq_obj, + rte_log2_u32(qp->cq_entries_n), + &cq_attr, socket_id); + if (ret != 0) { + DRV_LOG(ERR, "Failed to create CQ."); + goto err; + } + qp_attr.cqn = qp->cq_obj.cq->id; + qp_attr.ts_format = mlx5_ts_format_conv(attr->qp_ts_format); + qp_attr.num_of_receive_wqes = 0; + qp_attr.num_of_send_wqbbs = entries; + qp_attr.mmo = attr->crypto_mmo.crypto_mmo_qp; + /* Set MMO QP as follower as the input data may depend on UMR. */ + qp_attr.cd_slave_send = 1; + ret = mlx5_devx_qp_create(priv->cdev->ctx, &qp->qp_obj, + qp_attr.num_of_send_wqbbs * MLX5_WQE_SIZE, + &qp_attr, socket_id); + if (ret != 0) { + DRV_LOG(ERR, "Failed to create QP."); + goto err; + } + mlx5_crypto_gcm_init_qp(qp); + ret = mlx5_devx_qp2rts(&qp->qp_obj, 0); + if (ret) + goto err; + qp->ops = (struct rte_crypto_op **)(qp + 1); + qp->mkey = (struct mlx5_devx_obj **)(qp->ops + entries); + if (mlx5_crypto_gcm_umr_qp_setup(dev, qp, socket_id)) { + DRV_LOG(ERR, "Failed to setup UMR QP."); + goto err; + } + DRV_LOG(INFO, "QP %u: SQN=0x%X CQN=0x%X entries num = %u", + (uint32_t)qp_id, qp->qp_obj.qp->id, qp->cq_obj.cq->id, entries); + if (mlx5_crypto_indirect_mkeys_prepare(priv, qp, &mkey_attr, + mlx5_crypto_gcm_mkey_klm_update)) { + DRV_LOG(ERR, "Cannot allocate indirect memory regions."); + rte_errno = ENOMEM; + goto err; + } + dev->data->queue_pairs[qp_id] = qp; + return 0; +err: + mlx5_crypto_gcm_qp_release(dev, qp_id); + return -1; +} + int mlx5_crypto_gcm_init(struct mlx5_crypto_priv *priv) { @@ -108,6 +334,10 @@ mlx5_crypto_gcm_init(struct mlx5_crypto_priv *priv) /* Override AES-GCM specified ops. 
*/ dev_ops->sym_session_configure = mlx5_crypto_sym_gcm_session_configure; + mlx5_os_set_reg_mr_cb(&priv->reg_mr_cb, &priv->dereg_mr_cb); + dev_ops->queue_pair_setup = mlx5_crypto_gcm_qp_setup; + dev_ops->queue_pair_release = mlx5_crypto_gcm_qp_release; + priv->max_klm_num = RTE_ALIGN((priv->max_segs_num + 1) * 2 + 1, MLX5_UMR_KLM_NUM_ALIGN); priv->caps = mlx5_crypto_gcm_caps; return 0; }
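A note on the max_klm_num formula above (illustrative, not from the patch):
each side of an operation may need up to max_segs_num data-segment KLMs
plus an AAD entry, doubled for the out-of-place destination list, plus one
digest entry; the count is then rounded up to MLX5_UMR_KLM_NUM_ALIGN (2KB
worth of 16-byte KLMs). A tiny standalone check of that arithmetic, with
the constants assumed from the series:

#include <stdint.h>
#include <stdio.h>

/* MLX5_UMR_KLM_NUM_ALIGN = 2KB / sizeof(struct mlx5_klm) = 2048 / 16. */
#define UMR_KLM_NUM_ALIGN (2048u / 16u)

static uint32_t
max_klm_num(uint32_t max_segs_num)
{
	uint32_t n = (max_segs_num + 1) * 2 + 1; /* formula from the patch */

	return (n + UMR_KLM_NUM_ALIGN - 1) / UMR_KLM_NUM_ALIGN *
	       UMR_KLM_NUM_ALIGN;
}

int main(void)
{
	printf("max_segs_num=8  -> max_klm_num=%u\n", max_klm_num(8));  /* 128 */
	printf("max_segs_num=64 -> max_klm_num=%u\n", max_klm_num(64)); /* 256 */
	return 0;
}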
From patchwork Fri May 26 03:14:20 2023
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 127528
X-Patchwork-Delegate: gakhil@marvell.com
From: Suanming Mou
To: Matan Azrad, Viacheslav Ovsiienko
Subject: [PATCH v2 8/9] crypto/mlx5: add enqueue and dequeue operations
Date: Fri, 26 May 2023 06:14:20 +0300
Message-ID: <20230526031422.913377-9-suanmingm@nvidia.com>
In-Reply-To: <20230526031422.913377-1-suanmingm@nvidia.com>
References: <20230418092325.2578712-1-suanmingm@nvidia.com>
 <20230526031422.913377-1-suanmingm@nvidia.com>
List-Id: DPDK patches and discussions
The crypto operations are performed with crypto WQEs. If the input buffers
(AAD, mbuf, digest) are not contiguous and there is not enough
headroom/tailroom for copying the AAD/digest, then, as the FW requires, a
UMR WQE is needed to provide a contiguous address space for the crypto
WQE. The UMR WQEs and crypto WQEs are handled in two different QPs.

A crypto operation with non-contiguous buffers gets its own UMR WQE, while
an operation with contiguous buffers needs no UMR WQE. Once all the
operation WQEs of an enqueue burst have been built, and if any UMR WQEs
were built, an additional SEND_EN WQE is appended as the final WQE of the
burst in the UMR QP. The purpose of that SEND_EN WQE is to trigger the
crypto QP processing once the UMR-prepared input address spaces are ready.

The QP for crypto operations contains only crypto WQEs, whose invariant
parts are pre-built at QP setup. Its processing is triggered either by a
doorbell ring or by the SEND_EN WQE from the UMR QP.
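For illustration (not part of the patch): a simplified, self-contained
model of the burst-end decision described above. Ring the crypto QP
doorbell directly when no UMR work was generated; otherwise append a
SEND_EN WQE and ring the UMR QP. The helper names are hypothetical
stand-ins for the driver's internals.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-ins for the driver's doorbell/WQE helpers. */
static void ring_crypto_doorbell(uint16_t pi) { printf("crypto DB, pi=%u\n", pi); }
static void ring_umr_doorbell(uint16_t pi)    { printf("UMR DB, pi=%u\n", pi); }
static void build_send_en(uint16_t crypto_pi) { printf("SEND_EN fences up to pi=%u\n", crypto_pi); }

/* Burst-end logic modeled on mlx5_crypto_gcm_enqueue_burst(). */
static void
finish_burst(uint16_t crypto_pi, uint16_t umr_pi, unsigned int umr_cnt,
	     bool *has_umr)
{
	if (umr_cnt == 0 && !*has_umr) {
		/* All ops contiguous: the crypto QP can run on its own. */
		ring_crypto_doorbell(crypto_pi);
	} else {
		/* Let the UMR QP release the follower via SEND_EN. In the
		 * real driver has_umr is cleared again at dequeue time. */
		build_send_en(crypto_pi);
		ring_umr_doorbell(umr_pi + 1);
		*has_umr = true;
	}
}

int main(void)
{
	bool has_umr = false;

	finish_burst(8, 0, 0, &has_umr);  /* contiguous-only burst */
	finish_burst(16, 4, 2, &has_umr); /* burst containing 2 UMR ops */
	return 0;
}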
Signed-off-by: Suanming Mou --- drivers/common/mlx5/mlx5_prm.h | 1 + drivers/crypto/mlx5/mlx5_crypto.c | 9 +- drivers/crypto/mlx5/mlx5_crypto.h | 8 + drivers/crypto/mlx5/mlx5_crypto_gcm.c | 588 ++++++++++++++++++++++++++ 4 files changed, 604 insertions(+), 2 deletions(-) diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index a502e29bd8..98b71a4031 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -617,6 +617,7 @@ struct mlx5_wqe_send_en_wqe { /* MMO metadata segment */ #define MLX5_OPCODE_MMO 0x2fu +#define MLX5_OPC_MOD_MMO_CRYPTO 0x6u #define MLX5_OPC_MOD_MMO_REGEX 0x4u #define MLX5_OPC_MOD_MMO_COMP 0x2u #define MLX5_OPC_MOD_MMO_DECOMP 0x3u diff --git a/drivers/crypto/mlx5/mlx5_crypto.c b/drivers/crypto/mlx5/mlx5_crypto.c index ff632cd69a..4d7d3ef2a3 100644 --- a/drivers/crypto/mlx5/mlx5_crypto.c +++ b/drivers/crypto/mlx5/mlx5_crypto.c @@ -62,8 +62,13 @@ mlx5_crypto_dev_infos_get(struct rte_cryptodev *dev, MLX5_CRYPTO_FEATURE_FLAGS(priv->is_wrapped_mode); dev_info->capabilities = priv->caps; dev_info->max_nb_queue_pairs = MLX5_CRYPTO_MAX_QPS; - dev_info->min_mbuf_headroom_req = 0; - dev_info->min_mbuf_tailroom_req = 0; + if (priv->caps->sym.xform_type == RTE_CRYPTO_SYM_XFORM_AEAD) { + dev_info->min_mbuf_headroom_req = MLX5_CRYPTO_GCM_MAX_AAD; + dev_info->min_mbuf_tailroom_req = MLX5_CRYPTO_GCM_MAX_DIGEST; + } else { + dev_info->min_mbuf_headroom_req = 0; + dev_info->min_mbuf_tailroom_req = 0; + } dev_info->sym.max_nb_sessions = 0; /* * If 0, the device does not have any limitation in number of diff --git a/drivers/crypto/mlx5/mlx5_crypto.h b/drivers/crypto/mlx5/mlx5_crypto.h index 88a09a6b1c..6dcb41b27c 100644 --- a/drivers/crypto/mlx5/mlx5_crypto.h +++ b/drivers/crypto/mlx5/mlx5_crypto.h @@ -23,6 +23,8 @@ #define MLX5_CRYPTO_KLM_SEGS_NUM(umr_wqe_sz) ((umr_wqe_sz -\ MLX5_CRYPTO_UMR_WQE_STATIC_SIZE) /\ MLX5_WSEG_SIZE) +#define MLX5_CRYPTO_GCM_MAX_AAD 64 +#define MLX5_CRYPTO_GCM_MAX_DIGEST 16 struct mlx5_crypto_priv { TAILQ_ENTRY(mlx5_crypto_priv) next; @@ -61,6 +63,9 @@ struct mlx5_crypto_qp { uint8_t *wqe; uint16_t entries_n; uint16_t cq_entries_n; + uint16_t reported_ci; + uint16_t qp_ci; + uint16_t cq_ci; uint16_t pi; uint16_t ci; uint16_t db_pi; @@ -70,6 +75,9 @@ struct mlx5_crypto_qp { uint16_t umr_pi; uint16_t umr_ci; uint32_t umr_errors; + uint16_t last_gga_pi; + bool has_umr; + uint16_t cpy_tag_op; }; struct mlx5_crypto_dek { diff --git a/drivers/crypto/mlx5/mlx5_crypto_gcm.c b/drivers/crypto/mlx5/mlx5_crypto_gcm.c index dfef5455b4..2231bcbe6f 100644 --- a/drivers/crypto/mlx5/mlx5_crypto_gcm.c +++ b/drivers/crypto/mlx5/mlx5_crypto_gcm.c @@ -9,6 +9,7 @@ #include #include #include +#include #include #include @@ -32,6 +33,40 @@ RTE_ALIGN(sizeof(struct mlx5_wqe_send_en_wqe), \ MLX5_SEND_WQE_BB)) +#define MLX5_UMR_GCM_WQE_STRIDE \ + (MLX5_UMR_GCM_WQE_SIZE / MLX5_SEND_WQE_BB) + +#define MLX5_MMO_CRYPTO_OPC (MLX5_OPCODE_MMO | \ + (MLX5_OPC_MOD_MMO_CRYPTO << WQE_CSEG_OPC_MOD_OFFSET)) + +/* + * The status default value is RTE_CRYPTO_OP_STATUS_SUCCESS. + * Copy tag should fill different value to status. 
+ */ +#define MLX5_CRYPTO_OP_STATUS_GCM_TAG_COPY (RTE_CRYPTO_OP_STATUS_SUCCESS + 1) + +struct mlx5_crypto_gcm_op_info { + bool need_umr; + bool is_oop; + bool is_enc; + void *digest; + void *src_addr; +}; + +struct mlx5_crypto_gcm_data { + void *src_addr; + uint32_t src_bytes; + void *dst_addr; + uint32_t dst_bytes; + uint32_t src_mkey; + uint32_t dst_mkey; +}; + +struct mlx5_crypto_gcm_tag_cpy_info { + void *digest; + uint8_t tag_len; +} __rte_packed; + static struct rte_cryptodev_capabilities mlx5_crypto_gcm_caps[] = { { .op = RTE_CRYPTO_OP_TYPE_UNDEFINED, @@ -326,6 +361,557 @@ mlx5_crypto_gcm_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id, return -1; } +static __rte_always_inline void +mlx5_crypto_gcm_get_op_info(struct mlx5_crypto_qp *qp, + struct rte_crypto_op *op, + struct mlx5_crypto_gcm_op_info *op_info) +{ + struct mlx5_crypto_session *sess = CRYPTODEV_GET_SYM_SESS_PRIV(op->sym->session); + struct rte_mbuf *m_src = op->sym->m_src; + void *aad_addr = op->sym->aead.aad.data; + void *tag_addr = op->sym->aead.digest.data; + void *src_addr = rte_pktmbuf_mtod_offset(m_src, void *, op->sym->aead.data.offset); + struct rte_mbuf *m_dst = m_src; + void *dst_addr = src_addr; + void *expected_aad = NULL; + void *expected_tag = NULL; + bool is_enc = sess->op_type == MLX5_CRYPTO_OP_TYPE_ENCRYPTION; + bool cp_aad = false; + bool cp_tag = false; + + op_info->is_oop = false; + op_info->need_umr = false; + op_info->is_enc = is_enc; + op_info->digest = NULL; + op_info->src_addr = aad_addr; + if (op->sym->m_dst && op->sym->m_dst != m_src) { + op_info->is_oop = true; + m_dst = op->sym->m_dst; + dst_addr = rte_pktmbuf_mtod_offset(m_dst, void *, op->sym->aead.data.offset); + if (m_dst->nb_segs > 1) { + op_info->need_umr = true; + return; + } + /* + * If the op's mbuf has extra data offset, don't copy AAD to + * this area. + */ + if (rte_pktmbuf_headroom(m_dst) < sess->aad_len || + op->sym->aead.data.offset) { + op_info->need_umr = true; + return; + } + } + if (m_src->nb_segs > 1) { + op_info->need_umr = true; + return; + } + expected_aad = RTE_PTR_SUB(src_addr, sess->aad_len); + if (expected_aad != aad_addr) { + /* + * If the op's mbuf has extra data offset, don't copy AAD to + * this area. + */ + if (sess->aad_len > MLX5_CRYPTO_GCM_MAX_AAD || + sess->aad_len > rte_pktmbuf_headroom(m_src) || + op->sym->aead.data.offset) { + op_info->need_umr = true; + return; + } + cp_aad = true; + op_info->src_addr = expected_aad; + } + expected_tag = RTE_PTR_ADD(is_enc ? dst_addr : src_addr, op->sym->aead.data.length); + if (expected_tag != tag_addr) { + struct rte_mbuf *mbuf = is_enc ? m_dst : m_src; + + /* + * If op's mbuf is not fully set as payload, don't copy digest to + * the left area. 
+ */ + if (rte_pktmbuf_tailroom(mbuf) < sess->tag_len || + rte_pktmbuf_data_len(mbuf) != op->sym->aead.data.length) { + op_info->need_umr = true; + return; + } + if (is_enc) { + op_info->digest = expected_tag; + qp->cpy_tag_op++; + } else { + cp_tag = true; + } + } + if (cp_aad) + memcpy(expected_aad, aad_addr, sess->aad_len); + if (cp_tag) + memcpy(expected_tag, tag_addr, sess->tag_len); +} + +static __rte_always_inline uint32_t +_mlx5_crypto_gcm_umr_build_mbuf_klm(struct mlx5_crypto_qp *qp, + struct rte_mbuf *mbuf, + struct mlx5_klm *klm, + uint32_t offset, + uint32_t *remain) +{ + uint32_t data_len = (rte_pktmbuf_data_len(mbuf) - offset); + uintptr_t addr = rte_pktmbuf_mtod_offset(mbuf, uintptr_t, offset); + + if (data_len > *remain) + data_len = *remain; + *remain -= data_len; + klm->byte_count = rte_cpu_to_be_32(data_len); + klm->address = rte_cpu_to_be_64(addr); + klm->mkey = mlx5_mr_mb2mr(&qp->mr_ctrl, mbuf); + return klm->mkey; +} + +static __rte_always_inline int +mlx5_crypto_gcm_build_mbuf_chain_klms(struct mlx5_crypto_qp *qp, + struct rte_crypto_op *op, + struct rte_mbuf *mbuf, + struct mlx5_klm *klm) +{ + uint32_t remain_len = op->sym->aead.data.length; + __rte_unused uint32_t nb_segs = mbuf->nb_segs; + uint32_t klm_n = 0; + + /* mbuf seg num should be less than max_segs_num. */ + MLX5_ASSERT(nb_segs <= qp->priv->max_segs_num); + /* First mbuf needs to take the data offset. */ + if (unlikely(_mlx5_crypto_gcm_umr_build_mbuf_klm(qp, mbuf, klm, + op->sym->aead.data.offset, &remain_len) == UINT32_MAX)) { + op->status = RTE_CRYPTO_OP_STATUS_ERROR; + return 0; + } + klm++; + klm_n++; + while (remain_len) { + nb_segs--; + mbuf = mbuf->next; + MLX5_ASSERT(mbuf && nb_segs); + if (unlikely(_mlx5_crypto_gcm_umr_build_mbuf_klm(qp, mbuf, klm, + 0, &remain_len) == UINT32_MAX)) { + op->status = RTE_CRYPTO_OP_STATUS_ERROR; + return 0; + } + klm++; + klm_n++; + } + return klm_n; +} + +static __rte_always_inline int +mlx5_crypto_gcm_build_klm_by_addr(struct mlx5_crypto_qp *qp, + struct mlx5_klm *klm, + void *addr, + uint32_t len) +{ + klm->byte_count = rte_cpu_to_be_32(len); + klm->address = rte_cpu_to_be_64((uintptr_t)addr); + klm->mkey = mlx5_mr_addr2mr_bh(&qp->mr_ctrl, (uintptr_t)addr); + if (klm->mkey == UINT32_MAX) + return 0; + return 1; +} + +static __rte_always_inline int +mlx5_crypto_gcm_build_op_klm(struct mlx5_crypto_qp *qp, + struct rte_crypto_op *op, + struct mlx5_crypto_gcm_op_info *op_info, + struct mlx5_klm *klm, + uint32_t *len) +{ + struct mlx5_crypto_session *sess = CRYPTODEV_GET_SYM_SESS_PRIV(op->sym->session); + struct mlx5_klm *digest = NULL, *aad = NULL; + uint32_t total_len = op->sym->aead.data.length + sess->aad_len + sess->tag_len; + uint32_t klm_n = 0, klm_src = 0, klm_dst = 0; + + /* Build AAD KLM. */ + aad = klm; + if (!mlx5_crypto_gcm_build_klm_by_addr(qp, aad, op->sym->aead.aad.data, sess->aad_len)) + return 0; + klm_n++; + /* Build src mubf KLM. */ + klm_src = mlx5_crypto_gcm_build_mbuf_chain_klms(qp, op, op->sym->m_src, &klm[klm_n]); + if (!klm_src) + return 0; + klm_n += klm_src; + /* Reserve digest KLM if needed. */ + if (!op_info->is_oop || + sess->op_type == MLX5_CRYPTO_OP_TYPE_DECRYPTION) { + digest = &klm[klm_n]; + klm_n++; + } + /* Build dst mbuf KLM. */ + if (op_info->is_oop) { + klm[klm_n] = *aad; + klm_n++; + klm_dst = mlx5_crypto_gcm_build_mbuf_chain_klms(qp, op, op->sym->m_dst, + &klm[klm_n]); + if (!klm_dst) + return 0; + klm_n += klm_dst; + total_len += (op->sym->aead.data.length + sess->aad_len); + } + /* Update digest at the end if it is not set. 
*/ + if (!digest) { + digest = &klm[klm_n]; + klm_n++; + } + /* Build digest KLM. */ + if (!mlx5_crypto_gcm_build_klm_by_addr(qp, digest, op->sym->aead.digest.data, + sess->tag_len)) + return 0; + *len = total_len; + return klm_n; +} + +static __rte_always_inline struct mlx5_wqe_cseg * +mlx5_crypto_gcm_get_umr_wqe(struct mlx5_crypto_qp *qp) +{ + uint32_t wqe_offset = qp->umr_pi & (qp->umr_wqbbs - 1); + uint32_t left_wqbbs = qp->umr_wqbbs - wqe_offset; + struct mlx5_wqe_cseg *wqe; + + /* If UMR WQE is near the boundary. */ + if (left_wqbbs < MLX5_UMR_GCM_WQE_STRIDE) { + /* Append NOP WQE as the left WQEBBS is not enough for UMR. */ + wqe = RTE_PTR_ADD(qp->umr_qp_obj.umem_buf, wqe_offset * MLX5_SEND_WQE_BB); + wqe->opcode = rte_cpu_to_be_32(MLX5_OPCODE_NOP | ((uint32_t)qp->umr_pi << 8)); + wqe->sq_ds = rte_cpu_to_be_32((qp->umr_qp_obj.qp->id << 8) | (left_wqbbs << 2)); + wqe->flags = RTE_BE32(0); + wqe->misc = RTE_BE32(0); + qp->umr_pi += left_wqbbs; + wqe_offset = qp->umr_pi & (qp->umr_wqbbs - 1); + } + wqe_offset *= MLX5_SEND_WQE_BB; + return RTE_PTR_ADD(qp->umr_qp_obj.umem_buf, wqe_offset); +} + +static __rte_always_inline int +mlx5_crypto_gcm_build_umr(struct mlx5_crypto_qp *qp, + struct rte_crypto_op *op, + uint32_t idx, + struct mlx5_crypto_gcm_op_info *op_info, + struct mlx5_crypto_gcm_data *data) +{ + struct mlx5_crypto_priv *priv = qp->priv; + struct mlx5_crypto_session *sess = CRYPTODEV_GET_SYM_SESS_PRIV(op->sym->session); + struct mlx5_wqe_cseg *wqe; + struct mlx5_wqe_umr_cseg *ucseg; + struct mlx5_wqe_mkey_cseg *mkc; + struct mlx5_klm *iklm; + struct mlx5_klm *klm = &qp->klm_array[idx * priv->max_klm_num]; + uint16_t klm_size, klm_align; + uint32_t total_len; + + /* Build KLM base on the op. */ + klm_size = mlx5_crypto_gcm_build_op_klm(qp, op, op_info, klm, &total_len); + if (!klm_size) + return -EINVAL; + klm_align = RTE_ALIGN(klm_size, 4); + /* Get UMR WQE memory. */ + wqe = mlx5_crypto_gcm_get_umr_wqe(qp); + memset(wqe, 0, MLX5_UMR_GCM_WQE_SIZE); + /* Set WQE control seg. Non-inline KLM UMR WQE size must be 9 WQE_DS. */ + wqe->opcode = rte_cpu_to_be_32(MLX5_OPCODE_UMR | ((uint32_t)qp->umr_pi << 8)); + wqe->sq_ds = rte_cpu_to_be_32((qp->umr_qp_obj.qp->id << 8) | 9); + wqe->flags = RTE_BE32(MLX5_COMP_ONLY_FIRST_ERR << MLX5_COMP_MODE_OFFSET); + wqe->misc = rte_cpu_to_be_32(qp->mkey[idx]->id); + /* Set UMR WQE control seg. */ + ucseg = (struct mlx5_wqe_umr_cseg *)(wqe + 1); + ucseg->mkey_mask |= RTE_BE64(1u << 0); + ucseg->ko_to_bs = rte_cpu_to_be_32(klm_align << MLX5_UMRC_KO_OFFSET); + /* Set mkey context seg. */ + mkc = (struct mlx5_wqe_mkey_cseg *)(ucseg + 1); + mkc->len = rte_cpu_to_be_64(total_len); + mkc->qpn_mkey = rte_cpu_to_be_32(0xffffff00 | (qp->mkey[idx]->id & 0xff)); + /* Set UMR pointer to data seg. */ + iklm = (struct mlx5_klm *)(mkc + 1); + iklm->address = rte_cpu_to_be_64((uintptr_t)((char *)klm)); + iklm->mkey = rte_cpu_to_be_32(qp->mr.lkey); + data->src_mkey = rte_cpu_to_be_32(qp->mkey[idx]->id); + data->dst_mkey = data->src_mkey; + data->src_addr = 0; + data->src_bytes = sess->aad_len + op->sym->aead.data.length; + data->dst_bytes = data->src_bytes; + if (op_info->is_enc) + data->dst_bytes += sess->tag_len; + else + data->src_bytes += sess->tag_len; + if (op_info->is_oop) + data->dst_addr = (void *)(uintptr_t)(data->src_bytes); + else + data->dst_addr = 0; + /* Clear the padding memory. 
*/ + memset(&klm[klm_size], 0, sizeof(struct mlx5_klm) * (klm_align - klm_size)); + /* Update PI and WQE */ + qp->umr_pi += MLX5_UMR_GCM_WQE_STRIDE; + qp->umr_wqe = (uint8_t *)wqe; + return 0; +} + +static __rte_always_inline void +mlx5_crypto_gcm_build_send_en(struct mlx5_crypto_qp *qp) +{ + uint32_t wqe_offset = (qp->umr_pi & (qp->umr_wqbbs - 1)) * MLX5_SEND_WQE_BB; + struct mlx5_wqe_cseg *cs = RTE_PTR_ADD(qp->umr_qp_obj.wqes, wqe_offset); + struct mlx5_wqe_qseg *qs = RTE_PTR_ADD(cs, sizeof(struct mlx5_wqe_cseg)); + + cs->opcode = rte_cpu_to_be_32(MLX5_OPCODE_SEND_EN | ((uint32_t)qp->umr_pi << 8)); + cs->sq_ds = rte_cpu_to_be_32((qp->umr_qp_obj.qp->id << 8) | 2); + /* + * No need to generate the SEND_EN CQE as we want only GGA CQE + * in the CQ normally. We can compare qp->last_send_gga_pi with + * qp->pi to know if all SEND_EN be consumed. + */ + cs->flags = RTE_BE32((MLX5_COMP_ONLY_FIRST_ERR << MLX5_COMP_MODE_OFFSET) | + MLX5_WQE_CTRL_INITIATOR_SMALL_FENCE); + cs->misc = RTE_BE32(0); + qs->max_index = rte_cpu_to_be_32(qp->pi); + qs->qpn_cqn = rte_cpu_to_be_32(qp->qp_obj.qp->id); + qp->umr_wqe = (uint8_t *)cs; + qp->umr_pi += 1; +} + +static __rte_always_inline void +mlx5_crypto_gcm_wqe_set(struct mlx5_crypto_qp *qp, + struct rte_crypto_op *op, + uint32_t idx, + struct mlx5_crypto_gcm_data *data) +{ + struct mlx5_crypto_session *sess = CRYPTODEV_GET_SYM_SESS_PRIV(op->sym->session); + struct mlx5_gga_wqe *wqe = &((struct mlx5_gga_wqe *)qp->qp_obj.wqes)[idx]; + union mlx5_gga_crypto_opaque *opaq = qp->opaque_addr; + + memcpy(opaq[idx].cp.iv, + rte_crypto_op_ctod_offset(op, uint8_t *, sess->iv_offset), sess->iv_len); + opaq[idx].cp.tag_size = sess->wqe_tag_len; + opaq[idx].cp.aad_size = sess->wqe_aad_len; + /* Update control seg. */ + wqe->opcode = rte_cpu_to_be_32(MLX5_MMO_CRYPTO_OPC + (qp->pi << 8)); + wqe->gga_ctrl1 = sess->mmo_ctrl; + wqe->gga_ctrl2 = sess->dek_id; + wqe->flags = RTE_BE32(MLX5_COMP_ONLY_FIRST_ERR << MLX5_COMP_MODE_OFFSET); + /* Update op_info seg. */ + wqe->gather.bcount = rte_cpu_to_be_32(data->src_bytes); + wqe->gather.lkey = data->src_mkey; + wqe->gather.pbuf = rte_cpu_to_be_64((uintptr_t)data->src_addr); + /* Update output seg. 
*/ + wqe->scatter.bcount = rte_cpu_to_be_32(data->dst_bytes); + wqe->scatter.lkey = data->dst_mkey; + wqe->scatter.pbuf = rte_cpu_to_be_64((uintptr_t)data->dst_addr); + qp->wqe = (uint8_t *)wqe; +} + +static uint16_t +mlx5_crypto_gcm_enqueue_burst(void *queue_pair, + struct rte_crypto_op **ops, + uint16_t nb_ops) +{ + struct mlx5_crypto_qp *qp = queue_pair; + struct mlx5_crypto_session *sess; + struct mlx5_crypto_priv *priv = qp->priv; + struct mlx5_crypto_gcm_tag_cpy_info *tag; + struct mlx5_crypto_gcm_data gcm_data; + struct rte_crypto_op *op; + struct mlx5_crypto_gcm_op_info op_info; + uint16_t mask = qp->entries_n - 1; + uint16_t remain = qp->entries_n - (qp->pi - qp->qp_ci); + uint32_t idx; + uint16_t umr_cnt = 0; + + if (remain < nb_ops) + nb_ops = remain; + else + remain = nb_ops; + if (unlikely(remain == 0)) + return 0; + do { + op = *ops++; + sess = CRYPTODEV_GET_SYM_SESS_PRIV(op->sym->session); + idx = qp->pi & mask; + mlx5_crypto_gcm_get_op_info(qp, op, &op_info); + if (!op_info.need_umr) { + gcm_data.src_addr = op_info.src_addr; + gcm_data.src_bytes = op->sym->aead.data.length + sess->aad_len; + gcm_data.src_mkey = mlx5_mr_mb2mr(&qp->mr_ctrl, op->sym->m_src); + if (op_info.is_oop) { + gcm_data.dst_addr = RTE_PTR_SUB + (rte_pktmbuf_mtod_offset(op->sym->m_dst, + void *, op->sym->aead.data.offset), sess->aad_len); + gcm_data.dst_mkey = mlx5_mr_mb2mr(&qp->mr_ctrl, op->sym->m_dst); + } else { + gcm_data.dst_addr = gcm_data.src_addr; + gcm_data.dst_mkey = gcm_data.src_mkey; + } + gcm_data.dst_bytes = gcm_data.src_bytes; + if (op_info.is_enc) + gcm_data.dst_bytes += sess->tag_len; + else + gcm_data.src_bytes += sess->tag_len; + } else { + if (unlikely(mlx5_crypto_gcm_build_umr(qp, op, idx, + &op_info, &gcm_data))) { + qp->stats.enqueue_err_count++; + if (remain != nb_ops) { + qp->stats.enqueued_count -= remain; + break; + } + return 0; + } + umr_cnt++; + } + mlx5_crypto_gcm_wqe_set(qp, op, idx, &gcm_data); + if (op_info.digest) { + tag = (struct mlx5_crypto_gcm_tag_cpy_info *)op->sym->aead.digest.data; + tag->digest = op_info.digest; + tag->tag_len = sess->tag_len; + op->status = MLX5_CRYPTO_OP_STATUS_GCM_TAG_COPY; + } else { + op->status = RTE_CRYPTO_OP_STATUS_SUCCESS; + } + qp->ops[idx] = op; + qp->pi++; + } while (--remain); + qp->stats.enqueued_count += nb_ops; + /* Update the last GGA cseg with COMP. */ + ((struct mlx5_wqe_cseg *)qp->wqe)->flags = + RTE_BE32(MLX5_COMP_ALWAYS << MLX5_COMP_MODE_OFFSET); + /* Only when there are no pending SEND_EN WQEs in background. 
*/ + if (!umr_cnt && !qp->has_umr) { + mlx5_doorbell_ring(&priv->uar.bf_db, *(volatile uint64_t *)qp->wqe, + qp->pi, &qp->qp_obj.db_rec[MLX5_SND_DBR], + !priv->uar.dbnc); + } else { + mlx5_crypto_gcm_build_send_en(qp); + mlx5_doorbell_ring(&priv->uar.bf_db, *(volatile uint64_t *)qp->umr_wqe, + qp->umr_pi, &qp->umr_qp_obj.db_rec[MLX5_SND_DBR], + !priv->uar.dbnc); + qp->last_gga_pi = qp->pi; + qp->has_umr = true; + } + return nb_ops; +} + +static __rte_noinline void +mlx5_crypto_gcm_cqe_err_handle(struct mlx5_crypto_qp *qp, struct rte_crypto_op *op) +{ + uint8_t op_code; + const uint32_t idx = qp->cq_ci & (qp->entries_n - 1); + volatile struct mlx5_err_cqe *cqe = (volatile struct mlx5_err_cqe *) + &qp->cq_obj.cqes[idx]; + + op_code = rte_be_to_cpu_32(cqe->s_wqe_opcode_qpn) >> MLX5_CQ_INDEX_WIDTH; + DRV_LOG(ERR, "CQE ERR:0x%x, Vender_ERR:0x%x, OP:0x%x, QPN:0x%x, WQE_CNT:0x%x", + cqe->syndrome, cqe->vendor_err_synd, op_code, + (rte_be_to_cpu_32(cqe->s_wqe_opcode_qpn) & 0xffffff), + rte_be_to_cpu_16(cqe->wqe_counter)); + if (op && op_code == MLX5_OPCODE_MMO) { + op->status = RTE_CRYPTO_OP_STATUS_ERROR; + qp->stats.dequeue_err_count++; + } +} + +static __rte_always_inline void +mlx5_crypto_gcm_fill_op(struct mlx5_crypto_qp *qp, + struct rte_crypto_op **ops, + uint16_t orci, + uint16_t rci, + uint16_t op_mask) +{ + uint16_t n; + + orci &= op_mask; + rci &= op_mask; + if (unlikely(orci > rci)) { + n = op_mask - orci + 1; + memcpy(ops, &qp->ops[orci], n * sizeof(*ops)); + orci = 0; + } else { + n = 0; + } + /* rci can be 0 here, memcpy will skip that. */ + memcpy(&ops[n], &qp->ops[orci], (rci - orci) * sizeof(*ops)); +} + +static __rte_always_inline void +mlx5_crypto_gcm_cpy_tag(struct mlx5_crypto_qp *qp, + uint16_t orci, + uint16_t rci, + uint16_t op_mask) +{ + struct rte_crypto_op *op; + struct mlx5_crypto_gcm_tag_cpy_info *tag; + + while (qp->cpy_tag_op && orci != rci) { + op = qp->ops[orci & op_mask]; + if (op->status == MLX5_CRYPTO_OP_STATUS_GCM_TAG_COPY) { + tag = (struct mlx5_crypto_gcm_tag_cpy_info *)op->sym->aead.digest.data; + memcpy(op->sym->aead.digest.data, tag->digest, tag->tag_len); + op->status = RTE_CRYPTO_OP_STATUS_SUCCESS; + qp->cpy_tag_op--; + } + orci++; + } +} + +static uint16_t +mlx5_crypto_gcm_dequeue_burst(void *queue_pair, + struct rte_crypto_op **ops, + uint16_t nb_ops) +{ + struct mlx5_crypto_qp *qp = queue_pair; + volatile struct mlx5_cqe *restrict cqe; + const unsigned int cq_size = qp->cq_entries_n; + const unsigned int mask = cq_size - 1; + const unsigned int op_mask = qp->entries_n - 1; + uint32_t idx; + uint32_t next_idx = qp->cq_ci & mask; + uint16_t reported_ci = qp->reported_ci; + uint16_t qp_ci = qp->qp_ci; + const uint16_t max = RTE_MIN((uint16_t)(qp->pi - reported_ci), nb_ops); + uint16_t op_num = 0; + int ret; + + if (unlikely(max == 0)) + return 0; + while (qp_ci - reported_ci < max) { + idx = next_idx; + next_idx = (qp->cq_ci + 1) & mask; + cqe = &qp->cq_obj.cqes[idx]; + ret = check_cqe(cqe, cq_size, qp->cq_ci); + if (unlikely(ret != MLX5_CQE_STATUS_SW_OWN)) { + if (unlikely(ret != MLX5_CQE_STATUS_HW_OWN)) + mlx5_crypto_gcm_cqe_err_handle(qp, + qp->ops[reported_ci & op_mask]); + break; + } + qp_ci = rte_be_to_cpu_16(cqe->wqe_counter) + 1; + if (qp->has_umr && + (qp->last_gga_pi + 1) == qp_ci) + qp->has_umr = false; + qp->cq_ci++; + } + /* If wqe_counter changed, means CQE handled. 
*/ + if (likely(qp->qp_ci != qp_ci)) { + qp->qp_ci = qp_ci; + rte_io_wmb(); + qp->cq_obj.db_rec[0] = rte_cpu_to_be_32(qp->cq_ci); + } + /* If reported_ci is not same with qp_ci, means op retrieved. */ + if (qp_ci != reported_ci) { + op_num = RTE_MIN((uint16_t)(qp_ci - reported_ci), max); + reported_ci += op_num; + mlx5_crypto_gcm_cpy_tag(qp, qp->reported_ci, reported_ci, op_mask); + mlx5_crypto_gcm_fill_op(qp, ops, qp->reported_ci, reported_ci, op_mask); + qp->stats.dequeued_count += op_num; + qp->reported_ci = reported_ci; + } + return op_num; +} + int mlx5_crypto_gcm_init(struct mlx5_crypto_priv *priv) { @@ -337,6 +923,8 @@ mlx5_crypto_gcm_init(struct mlx5_crypto_priv *priv) mlx5_os_set_reg_mr_cb(&priv->reg_mr_cb, &priv->dereg_mr_cb); dev_ops->queue_pair_setup = mlx5_crypto_gcm_qp_setup; dev_ops->queue_pair_release = mlx5_crypto_gcm_qp_release; + crypto_dev->dequeue_burst = mlx5_crypto_gcm_dequeue_burst; + crypto_dev->enqueue_burst = mlx5_crypto_gcm_enqueue_burst; priv->max_klm_num = RTE_ALIGN((priv->max_segs_num + 1) * 2 + 1, MLX5_UMR_KLM_NUM_ALIGN); priv->caps = mlx5_crypto_gcm_caps; return 0; }
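For illustration (not part of the patch): the enqueue path above marks
encrypt operations whose digest cannot be written contiguously with a
private status value, and the dequeue path copies the tag out before
reporting success. A minimal standalone model of that deferred copy; the
real driver stashes the copy info behind the digest pointer, while this
sketch keeps the fields separate for clarity.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

enum op_status { OP_SUCCESS = 0, OP_GCM_TAG_COPY = 1 }; /* sentinel model */

struct op {
	enum op_status status;
	uint8_t *user_digest;  /* where the API asked the tag to go */
	const uint8_t *hw_tag; /* where the HW actually wrote it */
	uint8_t tag_len;
};

/* Modeled on mlx5_crypto_gcm_cpy_tag(): resolve deferred tag copies. */
static void
complete_op(struct op *op)
{
	if (op->status == OP_GCM_TAG_COPY) {
		memcpy(op->user_digest, op->hw_tag, op->tag_len);
		op->status = OP_SUCCESS;
	}
}

int main(void)
{
	uint8_t hw[16] = { 0xaa }, user[16] = { 0 };
	struct op op = { OP_GCM_TAG_COPY, user, hw, 16 };

	complete_op(&op);
	printf("status=%d first_byte=0x%x\n", op.status, user[0]);
	return 0;
}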
From patchwork Fri May 26 03:14:21 2023
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 127530
X-Patchwork-Delegate: gakhil@marvell.com
From: Suanming Mou
To: Matan Azrad
Subject: [PATCH v2 9/9] crypto/mlx5: enable AES-GCM capability
Date: Fri, 26 May 2023 06:14:21 +0300
Message-ID: <20230526031422.913377-10-suanmingm@nvidia.com>
In-Reply-To: <20230526031422.913377-1-suanmingm@nvidia.com>
References: <20230418092325.2578712-1-suanmingm@nvidia.com>
 <20230526031422.913377-1-suanmingm@nvidia.com>
List-Id: DPDK patches and discussions
This commit generates the AES-GCM capability based on the NIC attributes
and enables the AES-GCM algorithm. A new devarg "algo" is added to select
whether the crypto PMD is initialized for AES-GCM (algo=1) or AES-XTS
(algo=0, the default).

Signed-off-by: Suanming Mou
---
 doc/guides/cryptodevs/mlx5.rst         | 48 +++++++++++++++++++-
 doc/guides/rel_notes/release_23_07.rst |  1 +
 drivers/crypto/mlx5/mlx5_crypto.c      | 26 +++++++++--
 drivers/crypto/mlx5/mlx5_crypto.h      |  1 +
 drivers/crypto/mlx5/mlx5_crypto_gcm.c  | 63 ++++++++++++++++++++++++++
 5 files changed, 134 insertions(+), 5 deletions(-)

diff --git a/doc/guides/cryptodevs/mlx5.rst b/doc/guides/cryptodevs/mlx5.rst
index b35ac5f5f2..9a0ae8b0d2 100644
--- a/doc/guides/cryptodevs/mlx5.rst
+++ b/doc/guides/cryptodevs/mlx5.rst
@@ -21,6 +21,11 @@ and **NVIDIA BlueField-3** family adapters.
 Overview
 --------
 
+The NVIDIA MLX5 crypto driver supports AES-XTS and AES-GCM encryption.
+
+AES-XTS
+-------
+
 The device can provide disk encryption services,
 allowing data encryption and decryption towards a disk.
 Having all encryption/decryption operations done in a single device
@@ -38,13 +43,19 @@ The encryption does not require text to be aligned to the AES block size (128b).
 See :doc:`../../platform/mlx5` guide for more design details.
 
+AES-GCM
+-------
+
+AES-GCM encryption and decryption process the traffic as the standard
+RTE crypto API defines. The supported AAD, digest and key sizes can be
+read from dev_info.
+
+
 Configuration
 -------------
 
 See the :ref:`mlx5 common configuration `.
 
 A device comes out of NVIDIA factory with pre-defined import methods.
-There are two possible import methods: wrapped or plaintext.
+There are two possible import methods: wrapped or plaintext (valid for
+AES-XTS only).
 
 In case the device is in wrapped mode, it needs to be moved to crypto
 operational mode.
 In order to move the device to crypto operational mode, credential and KEK
@@ -120,24 +131,36 @@ Driver options
 
 Please refer to :ref:`mlx5 common options `
 for an additional list of options shared with other mlx5 drivers.
 
+- ``algo`` parameter [int]
+
+  - 0. AES-XTS crypto.
+
+  - 1. AES-GCM crypto.
+
+  Set to zero (AES-XTS) by default.
+
 - ``wcs_file`` parameter [string] - mandatory in wrapped mode
 
   File path including only the wrapped credential in string format of hexadecimal
   numbers, represent 48 bytes (8 bytes IV added by the AES key wrap algorithm).
+  This option is valid only for AES-XTS.
 
 - ``import_kek_id`` parameter [int]
 
   The identifier of the KEK, default value is 0 represents the operational
   register import_kek.
+  This option is valid only for AES-XTS.
 
 - ``credential_id`` parameter [int]
 
   The identifier of the credential, default value is 0 represents the
   operational register credential.
+  This option is valid only for AES-XTS.
 
 - ``keytag`` parameter [int]
 
   The plaintext of the keytag appended to the AES-XTS keys, default value is 0.
+  This option is valid only for AES-XTS.
 
 - ``max_segs_num`` parameter [int]
 
@@ -161,6 +184,8 @@ Limitations
 
 - The supported data-unit lengths are 512B and 4KB and 1MB. In case the `dataunit_len`
   is not provided in the cipher xform, the OP length is limited to the above values.
 
+- AES-GCM is only supported on BlueField-3.
+- AES-GCM supports only the plaintext key import mode.
 
 Prerequisites
@@ -172,6 +197,7 @@ FW Prerequisites
 
 - xx.31.0328 for ConnectX-6.
 - xx.32.0108 for ConnectX-6 Dx and BlueField-2.
 - xx.36.xxxx for ConnectX-7 and BlueField-3.
+- xx.37.3010 or newer for BlueField-3, for AES-GCM.
 
 Linux Prerequisites
 ~~~~~~~~~~~~~~~~~~~
@@ -186,3 +212,23 @@ Windows Prerequisites
 
 - NVIDIA WINOF-2 version: **2.60** or higher.
 See :ref:`mlx5 common prerequisites ` for more details.
+
+
+Notes for rte_crypto AES-GCM
+----------------------------
+
+In AES-GCM mode, the HW requires contiguous input and output of the
+Additional Authenticated Data (AAD), the payload and the digest (when
+present). However, the RTE API provides only a single AAD input, which
+means that in out-of-place mode the same AAD buffer is used for both
+input and output. This reuse breaks the contiguous output, which
+degrades performance and adds an extra UMR WQE. A digest that is not
+contiguous after the payload likewise requires an extra UMR WQE.
+
+To address this, the RTE API exposes min_mbuf_headroom_req and
+min_mbuf_tailroom_req in rte_cryptodev_info as hints from the PMD.
+They indicate that the PMD may use the buffer space before and after
+the mbuf payload as AAD and digest space, copying the AAD and digest
+there directly. The application must therefore reserve enough headroom
+and tailroom in its mbufs; otherwise, for non-contiguous operations, an
+extra UMR WQE is used.
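For illustration (not part of the patch): an application-side sketch of
honoring these hints before building an AEAD operation.
rte_cryptodev_info_get(), rte_pktmbuf_headroom() and
rte_pktmbuf_tailroom() are standard DPDK APIs; the check itself is an
assumption about how a consumer would use the hint.

#include <rte_cryptodev.h>
#include <rte_mbuf.h>

/* Return 0 when `m` leaves enough spare room for the PMD to stage the
 * AAD before the payload and the digest after it, as the note above
 * describes; -1 otherwise. */
static int
check_gcm_room(uint8_t dev_id, struct rte_mbuf *m)
{
	struct rte_cryptodev_info info;

	rte_cryptodev_info_get(dev_id, &info);
	if (rte_pktmbuf_headroom(m) < info.min_mbuf_headroom_req ||
	    rte_pktmbuf_tailroom(m) < info.min_mbuf_tailroom_req)
		return -1; /* PMD would fall back to an extra UMR WQE. */
	return 0;
}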
diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
index 946f89e83b..fbbdceab0b 100644
--- a/doc/guides/rel_notes/release_23_07.rst
+++ b/doc/guides/rel_notes/release_23_07.rst
@@ -29,6 +29,7 @@ New Features
   * Added support for multi-packet RQ on Windows.
   * Added support for CQE compression on Windows.
   * Added support for enhanced multi-packet write on Windows.
+  * Added support for AES-GCM crypto.
 
 * **Added flow matching of tx queue.**
 
diff --git a/drivers/crypto/mlx5/mlx5_crypto.c b/drivers/crypto/mlx5/mlx5_crypto.c
index 4d7d3ef2a3..081e96ad4d 100644
--- a/drivers/crypto/mlx5/mlx5_crypto.c
+++ b/drivers/crypto/mlx5/mlx5_crypto.c
@@ -269,6 +269,14 @@ mlx5_crypto_args_check_handler(const char *key, const char *val, void *opaque)
 		attr->credential_pointer = (uint32_t)tmp;
 	} else if (strcmp(key, "keytag") == 0) {
 		devarg_prms->keytag = tmp;
+	} else if (strcmp(key, "algo") == 0) {
+		if (tmp == 1) {
+			devarg_prms->is_aes_gcm = 1;
+		} else if (tmp > 1) {
+			DRV_LOG(ERR, "Invalid algo.");
+			rte_errno = EINVAL;
+			return -rte_errno;
+		}
 	}
 	return 0;
 }
@@ -285,6 +293,7 @@ mlx5_crypto_parse_devargs(struct mlx5_kvargs_ctrl *mkvlist,
 		"keytag",
 		"max_segs_num",
 		"wcs_file",
+		"algo",
 		NULL,
 	};
 
@@ -370,10 +379,19 @@ mlx5_crypto_dev_probe(struct mlx5_common_device *cdev,
 	priv->crypto_dev = crypto_dev;
 	priv->is_wrapped_mode = wrapped_mode;
 	priv->max_segs_num = devarg_prms.max_segs_num;
-	ret = mlx5_crypto_xts_init(priv);
-	if (ret) {
-		DRV_LOG(ERR, "Failed to init AES-XTS crypto.");
-		return -ENOTSUP;
+	/* Init and override AES-GCM configuration. */
+	if (devarg_prms.is_aes_gcm) {
+		ret = mlx5_crypto_gcm_init(priv);
+		if (ret) {
+			DRV_LOG(ERR, "Failed to init AES-GCM crypto.");
+			return -ENOTSUP;
+		}
+	} else {
+		ret = mlx5_crypto_xts_init(priv);
+		if (ret) {
+			DRV_LOG(ERR, "Failed to init AES-XTS crypto.");
+			return -ENOTSUP;
+		}
 	}
 	if (mlx5_devx_uar_prepare(cdev, &priv->uar) != 0) {
 		rte_cryptodev_pmd_destroy(priv->crypto_dev);
diff --git a/drivers/crypto/mlx5/mlx5_crypto.h b/drivers/crypto/mlx5/mlx5_crypto.h
index 6dcb41b27c..36dacdcda4 100644
--- a/drivers/crypto/mlx5/mlx5_crypto.h
+++ b/drivers/crypto/mlx5/mlx5_crypto.h
@@ -92,6 +92,7 @@ struct mlx5_crypto_devarg_params {
 	struct mlx5_devx_crypto_login_attr login_attr;
 	uint64_t keytag;
 	uint32_t max_segs_num;
+	uint32_t is_aes_gcm:1;
 };
 
 struct mlx5_crypto_session {
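As a usage sketch (the PCI address and application are placeholders),
AES-GCM mode is selected through the common mlx5 device arguments, e.g.:

    dpdk-test-crypto-perf -a 0000:04:00.0,class=crypto,algo=1 ...

Any "algo" value greater than 1 is rejected with EINVAL by the parser
above, and omitting the devarg keeps the AES-XTS default.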
diff --git a/drivers/crypto/mlx5/mlx5_crypto_gcm.c b/drivers/crypto/mlx5/mlx5_crypto_gcm.c
index 2231bcbe6f..d481cd0716 100644
--- a/drivers/crypto/mlx5/mlx5_crypto_gcm.c
+++ b/drivers/crypto/mlx5/mlx5_crypto_gcm.c
@@ -107,6 +107,60 @@ mlx5_crypto_dek_fill_gcm_attr(struct mlx5_crypto_dek *dek,
 	return 0;
 }
 
+static int
+mlx5_crypto_generate_gcm_cap(struct mlx5_hca_crypto_mmo_attr *mmo_attr,
+			     struct rte_cryptodev_capabilities *cap)
+{
+	/* Init key size. */
+	if (mmo_attr->gcm_128_encrypt && mmo_attr->gcm_128_decrypt &&
+	    mmo_attr->gcm_256_encrypt && mmo_attr->gcm_256_decrypt) {
+		cap->sym.aead.key_size.min = 16;
+		cap->sym.aead.key_size.max = 32;
+		cap->sym.aead.key_size.increment = 16;
+	} else if (mmo_attr->gcm_256_encrypt && mmo_attr->gcm_256_decrypt) {
+		cap->sym.aead.key_size.min = 32;
+		cap->sym.aead.key_size.max = 32;
+		cap->sym.aead.key_size.increment = 0;
+	} else if (mmo_attr->gcm_128_encrypt && mmo_attr->gcm_128_decrypt) {
+		cap->sym.aead.key_size.min = 16;
+		cap->sym.aead.key_size.max = 16;
+		cap->sym.aead.key_size.increment = 0;
+	} else {
+		DRV_LOG(ERR, "No AES-GCM encryption/decryption capability available.");
+		return -1;
+	}
+	/* Init tag size. */
+	if (mmo_attr->gcm_auth_tag_128 && mmo_attr->gcm_auth_tag_96) {
+		cap->sym.aead.digest_size.min = 12;
+		cap->sym.aead.digest_size.max = 16;
+		cap->sym.aead.digest_size.increment = 4;
+	} else if (mmo_attr->gcm_auth_tag_96) {
+		cap->sym.aead.digest_size.min = 12;
+		cap->sym.aead.digest_size.max = 12;
+		cap->sym.aead.digest_size.increment = 0;
+	} else if (mmo_attr->gcm_auth_tag_128) {
+		cap->sym.aead.digest_size.min = 16;
+		cap->sym.aead.digest_size.max = 16;
+		cap->sym.aead.digest_size.increment = 0;
+	} else {
+		DRV_LOG(ERR, "No AES-GCM tag size supported.");
+		return -1;
+	}
+	/* Init AAD size. */
+	cap->sym.aead.aad_size.min = 0;
+	cap->sym.aead.aad_size.max = UINT16_MAX;
+	cap->sym.aead.aad_size.increment = 1;
+	/* Init IV size. */
+	cap->sym.aead.iv_size.min = 12;
+	cap->sym.aead.iv_size.max = 12;
+	cap->sym.aead.iv_size.increment = 0;
+	/* Init the remaining fields. */
+	cap->op = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+	cap->sym.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD;
+	cap->sym.aead.algo = RTE_CRYPTO_AEAD_AES_GCM;
+	return 0;
+}
+
 static int
 mlx5_crypto_sym_gcm_session_configure(struct rte_cryptodev *dev,
 				      struct rte_crypto_sym_xform *xform,
@@ -915,8 +969,10 @@ mlx5_crypto_gcm_dequeue_burst(void *queue_pair,
 int
 mlx5_crypto_gcm_init(struct mlx5_crypto_priv *priv)
 {
+	struct mlx5_common_device *cdev = priv->cdev;
 	struct rte_cryptodev *crypto_dev = priv->crypto_dev;
 	struct rte_cryptodev_ops *dev_ops = crypto_dev->dev_ops;
+	int ret;
 
 	/* Override AES-GCM specified ops. */
 	dev_ops->sym_session_configure = mlx5_crypto_sym_gcm_session_configure;
@@ -926,6 +982,13 @@ mlx5_crypto_gcm_init(struct mlx5_crypto_priv *priv)
 	crypto_dev->dequeue_burst = mlx5_crypto_gcm_dequeue_burst;
 	crypto_dev->enqueue_burst = mlx5_crypto_gcm_enqueue_burst;
 	priv->max_klm_num = RTE_ALIGN((priv->max_segs_num + 1) * 2 + 1, MLX5_UMR_KLM_NUM_ALIGN);
+	/* Generate GCM capability. */
+	ret = mlx5_crypto_generate_gcm_cap(&cdev->config.hca_attr.crypto_mmo,
+					   mlx5_crypto_gcm_caps);
+	if (ret) {
+		DRV_LOG(ERR, "Not enough AES-GCM capability.");
+		return -1;
+	}
 	priv->caps = mlx5_crypto_gcm_caps;
 	return 0;
 }
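For completeness, a sketch of how an application might probe the
capability that mlx5_crypto_generate_gcm_cap() produces. This is not part
of the patch; dev_id and the parameter sizes are illustrative assumptions:

#include <rte_cryptodev.h>

/*
 * Sketch: verify that a 32-byte key, 16-byte tag, 64-byte AAD, and
 * 12-byte IV all fall within the advertised AES-GCM AEAD capability.
 */
static int
check_gcm_support(uint8_t dev_id)
{
	const struct rte_cryptodev_symmetric_capability *cap;
	struct rte_cryptodev_sym_capability_idx idx = {
		.type = RTE_CRYPTO_SYM_XFORM_AEAD,
		.algo.aead = RTE_CRYPTO_AEAD_AES_GCM,
	};

	cap = rte_cryptodev_sym_capability_get(dev_id, &idx);
	if (cap == NULL)
		return -1; /* AES-GCM not advertised at all. */
	/* Returns 0 when every size is within the advertised ranges. */
	return rte_cryptodev_sym_capability_check_aead(cap, 32, 16, 64, 12);
}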