From patchwork Sun Dec 4 18:17:12 2016
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 17649
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Hemant Agrawal
Date: Sun, 4 Dec 2016 23:47:12 +0530
Message-ID: <1480875447-23680-18-git-send-email-hemant.agrawal@nxp.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1480875447-23680-1-git-send-email-hemant.agrawal@nxp.com>
References: <1480875447-23680-1-git-send-email-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH 17/32] net/dpaa2: dpbp based mempool hw offload driver
List-Id: DPDK patches and discussions

DPBP represents a buffer pool instance in the DPAA2-QBMAN HW accelerator.
All buffers need to be programmed into the HW accelerator.

Signed-off-by: Hemant Agrawal
---
 config/defconfig_arm64-dpaa2-linuxapp-gcc |   5 +
 drivers/net/dpaa2/Makefile                |   2 +
 drivers/net/dpaa2/base/dpaa2_hw_dpbp.c    | 366 ++++++++++++++++++++++++++++++
 drivers/net/dpaa2/base/dpaa2_hw_dpbp.h    | 101 +++++++++
 drivers/net/dpaa2/base/dpaa2_hw_pvt.h     |   7 +
 drivers/net/dpaa2/dpaa2_vfio.c            |  13 +-
 6 files changed, 493 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/dpaa2/base/dpaa2_hw_dpbp.c
 create mode 100644 drivers/net/dpaa2/base/dpaa2_hw_dpbp.h

diff --git a/config/defconfig_arm64-dpaa2-linuxapp-gcc b/config/defconfig_arm64-dpaa2-linuxapp-gcc
index 5ff884b..bcb6e88 100644
--- a/config/defconfig_arm64-dpaa2-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa2-linuxapp-gcc
@@ -42,6 +42,11 @@ CONFIG_RTE_ARCH_ARM_TUNE="cortex-a57+fp+simd"
 CONFIG_RTE_MAX_LCORE=8
 CONFIG_RTE_MAX_NUMA_NODES=1
 
+CONFIG_RTE_PKTMBUF_HEADROOM=256
+# FSL DPAA2 based hw mempool
+#
+CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="dpaa2"
+
 # Compile software PMD backed by NXP DPAA2 files
 #
 CONFIG_RTE_LIBRTE_DPAA2_PMD=y
diff --git a/drivers/net/dpaa2/Makefile b/drivers/net/dpaa2/Makefile
index b04c3d2..c4981b2 100644
--- a/drivers/net/dpaa2/Makefile
+++ b/drivers/net/dpaa2/Makefile
@@ -56,6 +56,8 @@ EXPORT_MAP := rte_pmd_dpaa2_version.map
 LIBABIVER := 1
 
 SRCS-$(CONFIG_RTE_LIBRTE_DPAA2_PMD) += base/dpaa2_hw_dpio.c
+SRCS-$(CONFIG_RTE_LIBRTE_DPAA2_PMD) += base/dpaa2_hw_dpbp.c
+
 SRCS-$(CONFIG_RTE_LIBRTE_DPAA2_PMD) += dpaa2_vfio.c
 
 # Interfaces with DPDK
 SRCS-$(CONFIG_RTE_LIBRTE_DPAA2_PMD) += dpaa2_bus.c
diff --git a/drivers/net/dpaa2/base/dpaa2_hw_dpbp.c b/drivers/net/dpaa2/base/dpaa2_hw_dpbp.c
new file mode 100644
index 0000000..2b30036
--- /dev/null
+++ b/drivers/net/dpaa2/base/dpaa2_hw_dpbp.c
@@ -0,0 +1,366 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright (c) 2016 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "dpaa2_logs.h"
+#include
+#include
+#include
+#include
+
+static struct dpbp_node *g_dpbp_list;
+static struct dpbp_node *avail_dpbp;
+
+struct dpaa2_bp_info bpid_info[MAX_BPID];
+
+struct dpaa2_bp_list *h_bp_list;
+
+int
+dpaa2_create_dpbp_device(
+		int dpbp_id)
+{
+	struct dpbp_node *dpbp_node;
+	int ret;
+
+	/* Allocate DPAA2 dpbp handle */
+	dpbp_node = (struct dpbp_node *)malloc(sizeof(struct dpbp_node));
+	if (!dpbp_node) {
+		PMD_INIT_LOG(ERR, "Memory allocation failed for DPBP Device");
+		return -1;
+	}
+
+	/* Open the dpbp object */
+	dpbp_node->dpbp.regs = mcp_ptr_list[MC_PORTAL_INDEX];
+	ret = dpbp_open(&dpbp_node->dpbp,
+			CMD_PRI_LOW, dpbp_id, &dpbp_node->token);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Resource alloc failure with err code: %d",
+			     ret);
+		free(dpbp_node);
+		return -1;
+	}
+
+	/* Clean the device first */
+	ret = dpbp_reset(&dpbp_node->dpbp, CMD_PRI_LOW, dpbp_node->token);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failure cleaning dpbp device with"
+			" error code %d\n", ret);
+		return -1;
+	}
+
+	dpbp_node->dpbp_id = dpbp_id;
+	/* Add the dpbp handle into the global list */
+	dpbp_node->next = g_dpbp_list;
+	g_dpbp_list = dpbp_node;
+	avail_dpbp = g_dpbp_list;
+
+	PMD_INIT_LOG(DEBUG, "Buffer pool resource initialized %d", dpbp_id);
+
+	return 0;
+}
+
+static int
+hw_mbuf_create_pool(struct rte_mempool *mp)
+{
+	struct dpaa2_bp_list *bp_list;
+	struct dpbp_attr dpbp_attr;
+	uint32_t bpid;
+	int ret;
+
+	if (!avail_dpbp) {
+		PMD_DRV_LOG(ERR, "DPAA2 resources not available");
+		return -1;
+	}
+
+	ret = dpbp_enable(&avail_dpbp->dpbp, CMD_PRI_LOW, avail_dpbp->token);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Resource enable failure with"
+			" err code: %d\n", ret);
+		return -1;
+	}
+
+	ret = dpbp_get_attributes(&avail_dpbp->dpbp, CMD_PRI_LOW,
+				  avail_dpbp->token, &dpbp_attr);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Resource read failure with"
+			" err code: %d\n", ret);
+		ret = dpbp_disable(&avail_dpbp->dpbp, CMD_PRI_LOW,
+				   avail_dpbp->token);
+		return -1;
+	}
+
+	/* Allocate the bp_list which will be added into global_bp_list */
+	bp_list = (struct dpaa2_bp_list *)malloc(sizeof(struct dpaa2_bp_list));
+	if (!bp_list) {
+		PMD_INIT_LOG(ERR, "No heap memory available");
+		return -1;
+	}
+
+	/* Set parameters of buffer pool list */
+	bp_list->buf_pool.num_bufs = mp->size;
+	bp_list->buf_pool.size = mp->elt_size
+			- sizeof(struct rte_mbuf) - rte_pktmbuf_priv_size(mp);
+	bp_list->buf_pool.bpid = dpbp_attr.bpid;
+	bp_list->buf_pool.h_bpool_mem = NULL;
+	bp_list->buf_pool.mp = mp;
+	bp_list->buf_pool.dpbp_node = avail_dpbp;
+	bp_list->next = h_bp_list;
+
+	bpid = dpbp_attr.bpid;
+
+	/* Increment the available DPBP */
+	avail_dpbp = avail_dpbp->next;
+
+	bpid_info[bpid].meta_data_size = sizeof(struct rte_mbuf)
+				+ rte_pktmbuf_priv_size(mp);
+	bpid_info[bpid].bp_list = bp_list;
+	bpid_info[bpid].bpid = bpid;
+
+	mp->pool_data = (void *)&bpid_info[bpid];
+
+	PMD_INIT_LOG(DEBUG, "BP List created for bpid =%d", dpbp_attr.bpid);
+
+	h_bp_list = bp_list;
+	/* Identification for our offloaded pool_data structure
+	 */
+	mp->flags |= MEMPOOL_F_HW_PKT_POOL;
+	return 0;
+}
+
+static void
+hw_mbuf_free_pool(struct rte_mempool *mp __rte_unused)
+{
+	/* TODO:
+	 * 1. Release bp_list memory allocation
+	 * 2. opposite of dpbp_enable()
+	 *
+	 */
+	struct dpaa2_bp_list *bp;
+
+	/* Iterate over h_bp_list linked list and release each element */
+	while (h_bp_list) {
+		bp = h_bp_list;
+		h_bp_list = bp->next;
+
+		/* TODO: Should be changed to rte_free */
+		free(bp);
+	}
+}
+
+static
+void dpaa2_mbuf_release(struct rte_mempool *pool __rte_unused,
+			void * const *obj_table,
+			uint32_t bpid,
+			uint32_t meta_data_size,
+			int count)
+{
+	struct qbman_release_desc releasedesc;
+	struct qbman_swp *swp;
+	int ret;
+	int i, n;
+	uint64_t bufs[DPAA2_MBUF_MAX_ACQ_REL];
+
+	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
+		ret = dpaa2_affine_qbman_swp();
+		if (ret != 0) {
+			RTE_LOG(ERR, PMD, "Failed to allocate IO portal");
+			return;
+		}
+	}
+	swp = DPAA2_PER_LCORE_PORTAL;
+
+	/* Create a release descriptor required for releasing
+	 * buffers into QBMAN
+	 */
+	qbman_release_desc_clear(&releasedesc);
+	qbman_release_desc_set_bpid(&releasedesc, bpid);
+
+	n = count % DPAA2_MBUF_MAX_ACQ_REL;
+
+	/* convert mbuf to buffers for the remainder */
+	for (i = 0; i < n; i++) {
+#ifdef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
+		bufs[i] = (uint64_t)rte_mempool_virt2phy(pool, obj_table[i])
+				+ meta_data_size;
+#else
+		bufs[i] = (uint64_t)obj_table[i] + meta_data_size;
+#endif
+	}
+	/* feed them to bman */
+	do {
+		ret = qbman_swp_release(swp, &releasedesc, bufs, n);
+	} while (ret == -EBUSY);
+
+	/* if there are more buffers to free */
+	while (n < count) {
+		/* convert mbuf to buffers */
+		for (i = 0; i < DPAA2_MBUF_MAX_ACQ_REL; i++) {
+#ifdef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
+			bufs[i] = (uint64_t)
+				rte_mempool_virt2phy(pool, obj_table[n + i])
+					+ meta_data_size;
+#else
+			bufs[i] = (uint64_t)obj_table[n + i] + meta_data_size;
+#endif
+		}
+
+		do {
+			ret = qbman_swp_release(swp, &releasedesc, bufs,
+						DPAA2_MBUF_MAX_ACQ_REL);
+		} while (ret == -EBUSY);
+		n += DPAA2_MBUF_MAX_ACQ_REL;
+	}
+}
+
+int hw_mbuf_alloc_bulk(struct rte_mempool *pool,
+		       void **obj_table, unsigned count)
+{
+#ifdef RTE_LIBRTE_DPAA2_DEBUG_DRIVER
+	static int alloc;
+#endif
+	struct qbman_swp *swp;
+	uint32_t mbuf_size;
+	uint16_t bpid;
+	uint64_t bufs[DPAA2_MBUF_MAX_ACQ_REL];
+	int i, ret;
+	unsigned n = 0;
+	struct dpaa2_bp_info *bp_info;
+
+	bp_info = mempool_to_bpinfo(pool);
+
+	if (!(bp_info->bp_list)) {
+		RTE_LOG(ERR, PMD, "DPAA2 buffer pool not configured\n");
+		return -2;
+	}
+
+	bpid = bp_info->bpid;
+
+	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
+		ret = dpaa2_affine_qbman_swp();
+		if (ret != 0) {
+			RTE_LOG(ERR, PMD, "Failed to allocate IO portal");
+			return -1;
+		}
+	}
+	swp = DPAA2_PER_LCORE_PORTAL;
+
+	mbuf_size = sizeof(struct rte_mbuf) + rte_pktmbuf_priv_size(pool);
+
+	while (n < count) {
+		/* Acquire is all-or-nothing, so we drain in 7s,
+		 * then the remainder.
+		 */
+		if ((count - n) > DPAA2_MBUF_MAX_ACQ_REL) {
+			ret = qbman_swp_acquire(swp, bpid, bufs,
+						DPAA2_MBUF_MAX_ACQ_REL);
+		} else {
+			ret = qbman_swp_acquire(swp, bpid, bufs,
+						count - n);
+		}
+		/* In case of less than requested number of buffers available
+		 * in pool, qbman_swp_acquire returns 0
+		 */
+		if (ret <= 0) {
+			PMD_TX_LOG(ERR, "Buffer acquire failed with"
+				   " err code: %d", ret);
+			/* The API expect the exact number of requested bufs */
+			/* Releasing all buffers allocated */
+			dpaa2_mbuf_release(pool, obj_table, bpid,
+					   bp_info->meta_data_size, n);
+			return -1;
+		}
+		/* assigning mbuf from the acquired objects */
+		for (i = 0; (i < ret) && bufs[i]; i++) {
+			/* TODO-errata - observed that bufs may be null
+			 * i.e. first buffer is valid,
+			 * remaining 6 buffers may be null
+			 */
+			obj_table[n] = (struct rte_mbuf *)(bufs[i] - mbuf_size);
+			rte_mbuf_refcnt_set((struct rte_mbuf *)obj_table[n], 0);
+			PMD_TX_LOG(DEBUG, "Acquired %p address %p from BMAN",
+				   (void *)bufs[i], (void *)obj_table[n]);
+			n++;
+		}
+	}
+
+#ifdef RTE_LIBRTE_DPAA2_DEBUG_DRIVER
+	alloc += n;
+	PMD_TX_LOG(DEBUG, "Total = %d , req = %d done = %d",
+		   alloc, count, n);
+#endif
+	return 0;
+}
+
+static int
+hw_mbuf_free_bulk(struct rte_mempool *pool,
+		  void * const *obj_table, unsigned n)
+{
+	struct dpaa2_bp_info *bp_info;
+
+	bp_info = mempool_to_bpinfo(pool);
+	if (!(bp_info->bp_list)) {
+		RTE_LOG(ERR, PMD, "DPAA2 buffer pool not configured");
+		return -1;
+	}
+	dpaa2_mbuf_release(pool, obj_table, bp_info->bpid,
+			   bp_info->meta_data_size, n);
+
+	return 0;
+}
+
+struct rte_mempool_ops dpaa2_mpool_ops = {
+	.name = "dpaa2",
+	.alloc = hw_mbuf_create_pool,
+	.free = hw_mbuf_free_pool,
+	.enqueue = hw_mbuf_free_bulk,
+	.dequeue = hw_mbuf_alloc_bulk,
+};
+
+MEMPOOL_REGISTER_OPS(dpaa2_mpool_ops);
diff --git a/drivers/net/dpaa2/base/dpaa2_hw_dpbp.h b/drivers/net/dpaa2/base/dpaa2_hw_dpbp.h
new file mode 100644
index 0000000..6efe24f
--- /dev/null
+++ b/drivers/net/dpaa2/base/dpaa2_hw_dpbp.h
@@ -0,0 +1,101 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright (c) 2016 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _DPAA2_HW_DPBP_H_
+#define _DPAA2_HW_DPBP_H_
+
+#define DPAA2_MAX_BUF_POOLS	8
+
+struct dpbp_node {
+	struct dpbp_node *next;
+	struct fsl_mc_io dpbp;
+	uint16_t token;
+	int dpbp_id;
+};
+
+struct buf_pool_cfg {
+	void *addr; /*!< The address from where DPAA2 will carve out the
+		     * buffers. 'addr' should be 'NULL' if user wants
+		     * to create buffers from the memory which user
+		     * asked DPAA2 to reserve during 'nadk init' */
+	phys_addr_t phys_addr;  /*!< corresponding physical address
+				 * of the memory provided in addr */
+	uint32_t num; /*!< number of buffers */
+	uint32_t size; /*!< size of each buffer. 'size' should include
+			* any headroom to be reserved and alignment */
+	uint16_t align; /*!< Buffer alignment (in bytes) */
+	uint16_t bpid; /*!< The buffer pool id. This will be filled
+			* in by DPAA2 for each buffer pool */
+};
+
+struct buf_pool {
+	uint32_t size;
+	uint32_t num_bufs;
+	uint16_t bpid;
+	uint8_t *h_bpool_mem;
+	struct rte_mempool *mp;
+	struct dpbp_node *dpbp_node;
+};
+
+/*!
+ * Buffer pool list configuration structure. User need to give DPAA2 the
+ * valid number of 'num_buf_pools'.
+ */
+struct dpaa2_bp_list_cfg {
+	struct buf_pool_cfg buf_pool; /* Configuration
+				       * of each buffer pool */
+};
+
+struct dpaa2_bp_list {
+	struct dpaa2_bp_list *next;
+	struct rte_mempool *mp;
+	struct buf_pool buf_pool;
+};
+
+struct dpaa2_bp_info {
+	uint32_t meta_data_size;
+	uint32_t bpid;
+	struct dpaa2_bp_list *bp_list;
+};
+
+#define mempool_to_bpinfo(mp) ((struct dpaa2_bp_info *)mp->pool_data)
+#define mempool_to_bpid(mp) ((mempool_to_bpinfo(mp))->bpid)
+
+extern struct dpaa2_bp_info bpid_info[MAX_BPID];
+
+int dpaa2_create_dpbp_device(int dpbp_id);
+
+int hw_mbuf_alloc_bulk(struct rte_mempool *pool,
+		       void **obj_table, unsigned count);
+
+#endif /* _DPAA2_HW_DPBP_H_ */
diff --git a/drivers/net/dpaa2/base/dpaa2_hw_pvt.h b/drivers/net/dpaa2/base/dpaa2_hw_pvt.h
index 7dffd5d..5038209 100644
--- a/drivers/net/dpaa2/base/dpaa2_hw_pvt.h
+++ b/drivers/net/dpaa2/base/dpaa2_hw_pvt.h
@@ -41,6 +41,13 @@
 #define MC_PORTAL_INDEX		0
 #define NUM_DPIO_REGIONS	2
 
+#define MEMPOOL_F_HW_PKT_POOL	0x8000 /**< mpool flag to check offloaded pool */
+
+/* Maximum release/acquire from QBMAN */
+#define DPAA2_MBUF_MAX_ACQ_REL	7
+
+#define MAX_BPID 256
+
 struct dpaa2_dpio_dev {
 	TAILQ_ENTRY(dpaa2_dpio_dev) next;
 		/**< Pointer to Next device instance */
diff --git a/drivers/net/dpaa2/dpaa2_vfio.c b/drivers/net/dpaa2/dpaa2_vfio.c
index 71b491b..946a444 100644
--- a/drivers/net/dpaa2/dpaa2_vfio.c
+++ b/drivers/net/dpaa2/dpaa2_vfio.c
@@ -62,7 +62,10 @@
 #include
 #include "dpaa2_vfio.h"
 
+/* DPAA2 Base interface files */
+#include
 #include
+#include
 
 #define VFIO_MAX_CONTAINERS	1
 
@@ -272,7 +275,7 @@ int dpaa2_vfio_process_group(struct rte_bus *bus)
 	char path[PATH_MAX];
 	int64_t v_addr;
 	int ndev_count;
-	int dpio_count = 0;
+	int dpio_count = 0, dpbp_count = 0;
 	struct dpaa2_vfio_group *group = &vfio_groups[0];
 	static int process_once;
 
@@ -423,12 +426,20 @@ int dpaa2_vfio_process_group(struct rte_bus *bus)
 		if (!ret)
 			dpio_count++;
 	}
+	if (!strcmp(object_type, "dpbp")) {
+		ret = dpaa2_create_dpbp_device(object_id);
+		if (!ret)
+			dpbp_count++;
+	}
 	}
 	closedir(d);
 
 	ret = dpaa2_affine_qbman_swp();
 	if (ret)
 		DPAA2_VFIO_LOG(DEBUG, "Error in affining qbman swp %d", ret);
+
+	RTE_LOG(INFO, PMD, "DPAA2: Added dpbp_count = %d dpio_count=%d\n",
+		dpbp_count, dpio_count);
 	return 0;
 
 FAILURE:
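[Reviewer note, not part of the patch] The acquire loop in hw_mbuf_alloc_bulk() pulls buffers from QBMAN in batches capped at DPAA2_MBUF_MAX_ACQ_REL (7), since qbman_swp_acquire() is limited to 7 buffers per call. A minimal sketch of just that batching arithmetic; acquire_calls() is a hypothetical helper invented for illustration, not a function in the patch:

```c
#include <assert.h>

#define DPAA2_MBUF_MAX_ACQ_REL 7 /* per-call limit of qbman_swp_acquire() */

/* Hypothetical helper: count how many acquire calls are needed for
 * 'count' buffers and report the size of the final (possibly partial)
 * batch, mirroring the while-loop in hw_mbuf_alloc_bulk(). */
static int acquire_calls(unsigned count, unsigned *last_batch)
{
	unsigned n = 0;
	int calls = 0;

	while (n < count) {
		/* Full batches of 7 first, then the remainder. */
		unsigned req = (count - n > DPAA2_MBUF_MAX_ACQ_REL)
				? DPAA2_MBUF_MAX_ACQ_REL : (count - n);
		n += req;
		*last_batch = req;
		calls++;
	}
	return calls;
}
```

Note that the release path (dpaa2_mbuf_release) orders things the other way around: it frees the remainder (count % 7) first and then full batches of 7, but the batch-size cap is the same.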
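[Reviewer note, not part of the patch] The layout convention running through the patch is that QBMAN holds the address of the data area, which sits meta_data_size = sizeof(struct rte_mbuf) + priv_size bytes past the mbuf header, while the hardware pool's per-buffer size is elt_size minus that metadata. A sketch of the arithmetic with illustrative stand-in constants (the real sizes come from the mempool at runtime):

```c
#include <assert.h>
#include <stdint.h>

/* Assumed stand-ins: the driver uses sizeof(struct rte_mbuf) and
 * rte_pktmbuf_priv_size(mp); these numbers are only for illustration. */
#define MBUF_STRUCT_SIZE 128u
#define PRIV_SIZE         16u

/* Address handed to QBMAN, as in dpaa2_mbuf_release(). */
static uint64_t hw_buf_from_mbuf(uint64_t mbuf_addr, uint32_t meta_data_size)
{
	return mbuf_addr + meta_data_size;
}

/* Recovering the mbuf header, as in hw_mbuf_alloc_bulk(). */
static uint64_t mbuf_from_hw_buf(uint64_t buf_addr, uint32_t meta_data_size)
{
	return buf_addr - meta_data_size;
}

/* Hardware buffer size programmed in hw_mbuf_create_pool(). */
static uint32_t hw_buf_size(uint32_t elt_size, uint32_t meta_data_size)
{
	return elt_size - meta_data_size;
}
```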