From patchwork Sat Dec 11 09:04:35 2021
X-Patchwork-Submitter: Jerin Jacob
X-Patchwork-Id: 105109
X-Patchwork-Delegate: thomas@monjalon.net
From: Jerin Jacob
To: Thomas Monjalon, Akhil Goyal, Declan Doherty, Jerin Jacob, Ruifeng Wang, Jan Viktorin, Bruce Richardson, Ray Kinsella, Ankur Dwivedi, Anoob Joseph, Radha Mohan Chintakuntla, Veerasenareddy Burru, Pavan Nikhilesh, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Nalla Pradeep, Ciara Power, Shijith Thotton, Ashwin Sekhar T K, Anatoly Burakov
Subject: [dpdk-dev] [PATCH v5 5/5] drivers: remove octeontx2 drivers
Date: Sat, 11 Dec 2021 14:34:35 +0530
Message-ID: <20211211090435.2889574-6-jerinj@marvell.com>
In-Reply-To: <20211211090435.2889574-1-jerinj@marvell.com>
References: <20211207183143.27145-1-lironh@marvell.com> <20211211090435.2889574-1-jerinj@marvell.com>
List-Id: DPDK patches and discussions

As per the deprecation notice, and in view of enabling a unified driver for the octeontx2 (cn9k) and octeontx3 (cn10k) SoCs, remove the drivers/octeontx2/ drivers and replace them with drivers/cnxk/, which supports both octeontx2 (cn9k) and octeontx3 (cn10k) SoCs.
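The directory replacement described above follows one pattern across all five driver classes. As an illustrative sketch (this loop is not part of the patch, just a summary of the mapping), it can be written as:

```shell
# Each removed octeontx2 driver directory has a cnxk replacement
# in the same driver class.
for cls in common mempool net event crypto; do
    echo "drivers/${cls}/octeontx2 -> drivers/${cls}/cnxk"
done
```

The cnxk tree is shared, so both cn9k and cn10k targets build from the same replacement directories.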
This patch does the following:

- Replace drivers/common/octeontx2/ with drivers/common/cnxk/
- Replace drivers/mempool/octeontx2/ with drivers/mempool/cnxk/
- Replace drivers/net/octeontx2/ with drivers/net/cnxk/
- Replace drivers/event/octeontx2/ with drivers/event/cnxk/
- Replace drivers/crypto/octeontx2/ with drivers/crypto/cnxk/
- Rename config/arm/arm64_octeontx2_linux_gcc to config/arm/arm64_cn9k_linux_gcc
- Update the documentation and MAINTAINERS to reflect the change.
- Change references to OCTEONTX2 to OCTEON 9.

Old release notes and the kernel-related documentation are not updated by this change.

Signed-off-by: Jerin Jacob
Acked-by: Ferruh Yigit
Acked-by: Akhil Goyal
Acked-by: Ruifeng Wang
---
MAINTAINERS | 37 - app/test/meson.build | 1 - app/test/test_cryptodev.c | 7 - app/test/test_cryptodev.h | 1 - app/test/test_cryptodev_asym.c | 17 - app/test/test_eventdev.c | 8 - config/arm/arm64_cn10k_linux_gcc | 1 - ...teontx2_linux_gcc => arm64_cn9k_linux_gcc} | 3 +- config/arm/meson.build | 10 +- devtools/check-abi.sh | 2 +- doc/guides/cryptodevs/features/octeontx2.ini | 87 - doc/guides/cryptodevs/index.rst | 1 - doc/guides/cryptodevs/octeontx2.rst | 188 - doc/guides/dmadevs/cnxk.rst | 2 +- doc/guides/eventdevs/features/octeontx2.ini | 30 - doc/guides/eventdevs/index.rst | 1 - doc/guides/eventdevs/octeontx2.rst | 178 - doc/guides/mempool/index.rst | 1 - doc/guides/mempool/octeontx2.rst | 92 - doc/guides/nics/cnxk.rst | 4 +- doc/guides/nics/features/octeontx2.ini | 97 - doc/guides/nics/features/octeontx2_vec.ini | 48 - doc/guides/nics/features/octeontx2_vf.ini | 45 - doc/guides/nics/index.rst | 1 - doc/guides/nics/octeontx2.rst | 465 --- doc/guides/nics/octeontx_ep.rst | 4 +- doc/guides/platform/cnxk.rst | 12 + .../octeontx2_packet_flow_hw_accelerators.svg | 2804 -------------- .../img/octeontx2_resource_virtualization.svg | 2418 ------------ doc/guides/platform/index.rst | 1 - doc/guides/platform/octeontx2.rst | 520 --- doc/guides/rel_notes/deprecation.rst | 
17 - doc/guides/rel_notes/release_19_08.rst | 8 +- doc/guides/rel_notes/release_19_11.rst | 2 +- doc/guides/tools/cryptoperf.rst | 1 - drivers/common/meson.build | 1 - drivers/common/octeontx2/hw/otx2_nix.h | 1391 ------- drivers/common/octeontx2/hw/otx2_npa.h | 305 -- drivers/common/octeontx2/hw/otx2_npc.h | 503 --- drivers/common/octeontx2/hw/otx2_ree.h | 27 - drivers/common/octeontx2/hw/otx2_rvu.h | 219 -- drivers/common/octeontx2/hw/otx2_sdp.h | 184 - drivers/common/octeontx2/hw/otx2_sso.h | 209 -- drivers/common/octeontx2/hw/otx2_ssow.h | 56 - drivers/common/octeontx2/hw/otx2_tim.h | 34 - drivers/common/octeontx2/meson.build | 24 - drivers/common/octeontx2/otx2_common.c | 216 -- drivers/common/octeontx2/otx2_common.h | 179 - drivers/common/octeontx2/otx2_dev.c | 1074 ------ drivers/common/octeontx2/otx2_dev.h | 161 - drivers/common/octeontx2/otx2_io_arm64.h | 114 - drivers/common/octeontx2/otx2_io_generic.h | 75 - drivers/common/octeontx2/otx2_irq.c | 288 -- drivers/common/octeontx2/otx2_irq.h | 28 - drivers/common/octeontx2/otx2_mbox.c | 465 --- drivers/common/octeontx2/otx2_mbox.h | 1958 ---------- drivers/common/octeontx2/otx2_sec_idev.c | 183 - drivers/common/octeontx2/otx2_sec_idev.h | 43 - drivers/common/octeontx2/version.map | 44 - drivers/crypto/meson.build | 1 - drivers/crypto/octeontx2/meson.build | 30 - drivers/crypto/octeontx2/otx2_cryptodev.c | 188 - drivers/crypto/octeontx2/otx2_cryptodev.h | 63 - .../octeontx2/otx2_cryptodev_capabilities.c | 924 ----- .../octeontx2/otx2_cryptodev_capabilities.h | 45 - .../octeontx2/otx2_cryptodev_hw_access.c | 225 -- .../octeontx2/otx2_cryptodev_hw_access.h | 161 - .../crypto/octeontx2/otx2_cryptodev_mbox.c | 285 -- .../crypto/octeontx2/otx2_cryptodev_mbox.h | 37 - drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 1438 ------- drivers/crypto/octeontx2/otx2_cryptodev_ops.h | 15 - .../octeontx2/otx2_cryptodev_ops_helper.h | 82 - drivers/crypto/octeontx2/otx2_cryptodev_qp.h | 46 - 
drivers/crypto/octeontx2/otx2_cryptodev_sec.c | 655 ---- drivers/crypto/octeontx2/otx2_cryptodev_sec.h | 64 - .../crypto/octeontx2/otx2_ipsec_anti_replay.h | 227 -- drivers/crypto/octeontx2/otx2_ipsec_fp.h | 371 -- drivers/crypto/octeontx2/otx2_ipsec_po.h | 447 --- drivers/crypto/octeontx2/otx2_ipsec_po_ops.h | 167 - drivers/crypto/octeontx2/otx2_security.h | 37 - drivers/crypto/octeontx2/version.map | 13 - drivers/event/cnxk/cn9k_eventdev.c | 10 + drivers/event/meson.build | 1 - drivers/event/octeontx2/meson.build | 26 - drivers/event/octeontx2/otx2_evdev.c | 1900 ---------- drivers/event/octeontx2/otx2_evdev.h | 430 --- drivers/event/octeontx2/otx2_evdev_adptr.c | 656 ---- .../event/octeontx2/otx2_evdev_crypto_adptr.c | 132 - .../octeontx2/otx2_evdev_crypto_adptr_rx.h | 77 - .../octeontx2/otx2_evdev_crypto_adptr_tx.h | 83 - drivers/event/octeontx2/otx2_evdev_irq.c | 272 -- drivers/event/octeontx2/otx2_evdev_selftest.c | 1517 -------- drivers/event/octeontx2/otx2_evdev_stats.h | 286 -- drivers/event/octeontx2/otx2_tim_evdev.c | 735 ---- drivers/event/octeontx2/otx2_tim_evdev.h | 256 -- drivers/event/octeontx2/otx2_tim_worker.c | 192 - drivers/event/octeontx2/otx2_tim_worker.h | 598 --- drivers/event/octeontx2/otx2_worker.c | 372 -- drivers/event/octeontx2/otx2_worker.h | 339 -- drivers/event/octeontx2/otx2_worker_dual.c | 345 -- drivers/event/octeontx2/otx2_worker_dual.h | 110 - drivers/event/octeontx2/version.map | 3 - drivers/mempool/cnxk/cnxk_mempool.c | 56 +- drivers/mempool/meson.build | 1 - drivers/mempool/octeontx2/meson.build | 18 - drivers/mempool/octeontx2/otx2_mempool.c | 457 --- drivers/mempool/octeontx2/otx2_mempool.h | 221 -- .../mempool/octeontx2/otx2_mempool_debug.c | 135 - drivers/mempool/octeontx2/otx2_mempool_irq.c | 303 -- drivers/mempool/octeontx2/otx2_mempool_ops.c | 901 ----- drivers/mempool/octeontx2/version.map | 8 - drivers/net/cnxk/cn9k_ethdev.c | 15 + drivers/net/meson.build | 1 - drivers/net/octeontx2/meson.build | 47 - 
drivers/net/octeontx2/otx2_ethdev.c | 2814 -------------- drivers/net/octeontx2/otx2_ethdev.h | 619 --- drivers/net/octeontx2/otx2_ethdev_debug.c | 811 ---- drivers/net/octeontx2/otx2_ethdev_devargs.c | 215 -- drivers/net/octeontx2/otx2_ethdev_irq.c | 493 --- drivers/net/octeontx2/otx2_ethdev_ops.c | 589 --- drivers/net/octeontx2/otx2_ethdev_sec.c | 923 ----- drivers/net/octeontx2/otx2_ethdev_sec.h | 130 - drivers/net/octeontx2/otx2_ethdev_sec_tx.h | 182 - drivers/net/octeontx2/otx2_flow.c | 1189 ------ drivers/net/octeontx2/otx2_flow.h | 414 -- drivers/net/octeontx2/otx2_flow_ctrl.c | 252 -- drivers/net/octeontx2/otx2_flow_dump.c | 595 --- drivers/net/octeontx2/otx2_flow_parse.c | 1239 ------ drivers/net/octeontx2/otx2_flow_utils.c | 969 ----- drivers/net/octeontx2/otx2_link.c | 287 -- drivers/net/octeontx2/otx2_lookup.c | 352 -- drivers/net/octeontx2/otx2_mac.c | 151 - drivers/net/octeontx2/otx2_mcast.c | 339 -- drivers/net/octeontx2/otx2_ptp.c | 450 --- drivers/net/octeontx2/otx2_rss.c | 427 --- drivers/net/octeontx2/otx2_rx.c | 430 --- drivers/net/octeontx2/otx2_rx.h | 583 --- drivers/net/octeontx2/otx2_stats.c | 397 -- drivers/net/octeontx2/otx2_tm.c | 3317 ----------------- drivers/net/octeontx2/otx2_tm.h | 176 - drivers/net/octeontx2/otx2_tx.c | 1077 ------ drivers/net/octeontx2/otx2_tx.h | 791 ---- drivers/net/octeontx2/otx2_vlan.c | 1035 ----- drivers/net/octeontx2/version.map | 3 - drivers/net/octeontx_ep/otx2_ep_vf.h | 2 +- drivers/net/octeontx_ep/otx_ep_common.h | 16 +- drivers/net/octeontx_ep/otx_ep_ethdev.c | 8 +- drivers/net/octeontx_ep/otx_ep_rxtx.c | 10 +- usertools/dpdk-devbind.py | 12 +- 149 files changed, 92 insertions(+), 52124 deletions(-) rename config/arm/{arm64_octeontx2_linux_gcc => arm64_cn9k_linux_gcc} (84%) delete mode 100644 doc/guides/cryptodevs/features/octeontx2.ini delete mode 100644 doc/guides/cryptodevs/octeontx2.rst delete mode 100644 doc/guides/eventdevs/features/octeontx2.ini delete mode 100644 
doc/guides/eventdevs/octeontx2.rst delete mode 100644 doc/guides/mempool/octeontx2.rst delete mode 100644 doc/guides/nics/features/octeontx2.ini delete mode 100644 doc/guides/nics/features/octeontx2_vec.ini delete mode 100644 doc/guides/nics/features/octeontx2_vf.ini delete mode 100644 doc/guides/nics/octeontx2.rst delete mode 100644 doc/guides/platform/img/octeontx2_packet_flow_hw_accelerators.svg delete mode 100644 doc/guides/platform/img/octeontx2_resource_virtualization.svg delete mode 100644 doc/guides/platform/octeontx2.rst delete mode 100644 drivers/common/octeontx2/hw/otx2_nix.h delete mode 100644 drivers/common/octeontx2/hw/otx2_npa.h delete mode 100644 drivers/common/octeontx2/hw/otx2_npc.h delete mode 100644 drivers/common/octeontx2/hw/otx2_ree.h delete mode 100644 drivers/common/octeontx2/hw/otx2_rvu.h delete mode 100644 drivers/common/octeontx2/hw/otx2_sdp.h delete mode 100644 drivers/common/octeontx2/hw/otx2_sso.h delete mode 100644 drivers/common/octeontx2/hw/otx2_ssow.h delete mode 100644 drivers/common/octeontx2/hw/otx2_tim.h delete mode 100644 drivers/common/octeontx2/meson.build delete mode 100644 drivers/common/octeontx2/otx2_common.c delete mode 100644 drivers/common/octeontx2/otx2_common.h delete mode 100644 drivers/common/octeontx2/otx2_dev.c delete mode 100644 drivers/common/octeontx2/otx2_dev.h delete mode 100644 drivers/common/octeontx2/otx2_io_arm64.h delete mode 100644 drivers/common/octeontx2/otx2_io_generic.h delete mode 100644 drivers/common/octeontx2/otx2_irq.c delete mode 100644 drivers/common/octeontx2/otx2_irq.h delete mode 100644 drivers/common/octeontx2/otx2_mbox.c delete mode 100644 drivers/common/octeontx2/otx2_mbox.h delete mode 100644 drivers/common/octeontx2/otx2_sec_idev.c delete mode 100644 drivers/common/octeontx2/otx2_sec_idev.h delete mode 100644 drivers/common/octeontx2/version.map delete mode 100644 drivers/crypto/octeontx2/meson.build delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev.c delete mode 100644 
drivers/crypto/octeontx2/otx2_cryptodev.h delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_capabilities.c delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_capabilities.h delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_hw_access.h delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_mbox.c delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_mbox.h delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_ops.c delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_ops.h delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_qp.h delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_sec.c delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_sec.h delete mode 100644 drivers/crypto/octeontx2/otx2_ipsec_anti_replay.h delete mode 100644 drivers/crypto/octeontx2/otx2_ipsec_fp.h delete mode 100644 drivers/crypto/octeontx2/otx2_ipsec_po.h delete mode 100644 drivers/crypto/octeontx2/otx2_ipsec_po_ops.h delete mode 100644 drivers/crypto/octeontx2/otx2_security.h delete mode 100644 drivers/crypto/octeontx2/version.map delete mode 100644 drivers/event/octeontx2/meson.build delete mode 100644 drivers/event/octeontx2/otx2_evdev.c delete mode 100644 drivers/event/octeontx2/otx2_evdev.h delete mode 100644 drivers/event/octeontx2/otx2_evdev_adptr.c delete mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr.c delete mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h delete mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h delete mode 100644 drivers/event/octeontx2/otx2_evdev_irq.c delete mode 100644 drivers/event/octeontx2/otx2_evdev_selftest.c delete mode 100644 drivers/event/octeontx2/otx2_evdev_stats.h delete mode 100644 drivers/event/octeontx2/otx2_tim_evdev.c delete mode 100644 drivers/event/octeontx2/otx2_tim_evdev.h delete mode 100644 
drivers/event/octeontx2/otx2_tim_worker.c delete mode 100644 drivers/event/octeontx2/otx2_tim_worker.h delete mode 100644 drivers/event/octeontx2/otx2_worker.c delete mode 100644 drivers/event/octeontx2/otx2_worker.h delete mode 100644 drivers/event/octeontx2/otx2_worker_dual.c delete mode 100644 drivers/event/octeontx2/otx2_worker_dual.h delete mode 100644 drivers/event/octeontx2/version.map delete mode 100644 drivers/mempool/octeontx2/meson.build delete mode 100644 drivers/mempool/octeontx2/otx2_mempool.c delete mode 100644 drivers/mempool/octeontx2/otx2_mempool.h delete mode 100644 drivers/mempool/octeontx2/otx2_mempool_debug.c delete mode 100644 drivers/mempool/octeontx2/otx2_mempool_irq.c delete mode 100644 drivers/mempool/octeontx2/otx2_mempool_ops.c delete mode 100644 drivers/mempool/octeontx2/version.map delete mode 100644 drivers/net/octeontx2/meson.build delete mode 100644 drivers/net/octeontx2/otx2_ethdev.c delete mode 100644 drivers/net/octeontx2/otx2_ethdev.h delete mode 100644 drivers/net/octeontx2/otx2_ethdev_debug.c delete mode 100644 drivers/net/octeontx2/otx2_ethdev_devargs.c delete mode 100644 drivers/net/octeontx2/otx2_ethdev_irq.c delete mode 100644 drivers/net/octeontx2/otx2_ethdev_ops.c delete mode 100644 drivers/net/octeontx2/otx2_ethdev_sec.c delete mode 100644 drivers/net/octeontx2/otx2_ethdev_sec.h delete mode 100644 drivers/net/octeontx2/otx2_ethdev_sec_tx.h delete mode 100644 drivers/net/octeontx2/otx2_flow.c delete mode 100644 drivers/net/octeontx2/otx2_flow.h delete mode 100644 drivers/net/octeontx2/otx2_flow_ctrl.c delete mode 100644 drivers/net/octeontx2/otx2_flow_dump.c delete mode 100644 drivers/net/octeontx2/otx2_flow_parse.c delete mode 100644 drivers/net/octeontx2/otx2_flow_utils.c delete mode 100644 drivers/net/octeontx2/otx2_link.c delete mode 100644 drivers/net/octeontx2/otx2_lookup.c delete mode 100644 drivers/net/octeontx2/otx2_mac.c delete mode 100644 drivers/net/octeontx2/otx2_mcast.c delete mode 100644 
drivers/net/octeontx2/otx2_ptp.c delete mode 100644 drivers/net/octeontx2/otx2_rss.c delete mode 100644 drivers/net/octeontx2/otx2_rx.c delete mode 100644 drivers/net/octeontx2/otx2_rx.h delete mode 100644 drivers/net/octeontx2/otx2_stats.c delete mode 100644 drivers/net/octeontx2/otx2_tm.c delete mode 100644 drivers/net/octeontx2/otx2_tm.h delete mode 100644 drivers/net/octeontx2/otx2_tx.c delete mode 100644 drivers/net/octeontx2/otx2_tx.h delete mode 100644 drivers/net/octeontx2/otx2_vlan.c delete mode 100644 drivers/net/octeontx2/version.map diff --git a/MAINTAINERS b/MAINTAINERS index 854b81f2a3..336bbb3547 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -534,15 +534,6 @@ T: git://dpdk.org/next/dpdk-next-net-mrvl F: drivers/mempool/cnxk/ F: doc/guides/mempool/cnxk.rst -Marvell OCTEON TX2 -M: Jerin Jacob -M: Nithin Dabilpuram -F: drivers/common/octeontx2/ -F: drivers/mempool/octeontx2/ -F: doc/guides/platform/img/octeontx2_* -F: doc/guides/platform/octeontx2.rst -F: doc/guides/mempool/octeontx2.rst - Bus Drivers ----------- @@ -795,21 +786,6 @@ F: drivers/net/mvneta/ F: doc/guides/nics/mvneta.rst F: doc/guides/nics/features/mvneta.ini -Marvell OCTEON TX2 -M: Jerin Jacob -M: Nithin Dabilpuram -M: Kiran Kumar K -T: git://dpdk.org/next/dpdk-next-net-mrvl -F: drivers/net/octeontx2/ -F: doc/guides/nics/features/octeontx2*.ini -F: doc/guides/nics/octeontx2.rst - -Marvell OCTEON TX2 - security -M: Anoob Joseph -T: git://dpdk.org/next/dpdk-next-crypto -F: drivers/common/octeontx2/otx2_sec* -F: drivers/net/octeontx2/otx2_ethdev_sec* - Marvell OCTEON TX EP - endpoint M: Nalla Pradeep M: Radha Mohan Chintakuntla @@ -1115,13 +1091,6 @@ F: drivers/crypto/nitrox/ F: doc/guides/cryptodevs/nitrox.rst F: doc/guides/cryptodevs/features/nitrox.ini -Marvell OCTEON TX2 crypto -M: Ankur Dwivedi -M: Anoob Joseph -F: drivers/crypto/octeontx2/ -F: doc/guides/cryptodevs/octeontx2.rst -F: doc/guides/cryptodevs/features/octeontx2.ini - Mellanox mlx5 M: Matan Azrad F: drivers/crypto/mlx5/ @@ 
-1298,12 +1267,6 @@ M: Shijith Thotton F: drivers/event/cnxk/ F: doc/guides/eventdevs/cnxk.rst -Marvell OCTEON TX2 -M: Pavan Nikhilesh -M: Jerin Jacob -F: drivers/event/octeontx2/ -F: doc/guides/eventdevs/octeontx2.rst - NXP DPAA eventdev M: Hemant Agrawal M: Nipun Gupta diff --git a/app/test/meson.build b/app/test/meson.build index 2b480adfba..344a609a4d 100644 --- a/app/test/meson.build +++ b/app/test/meson.build @@ -341,7 +341,6 @@ driver_test_names = [ 'cryptodev_dpaa_sec_autotest', 'cryptodev_dpaa2_sec_autotest', 'cryptodev_null_autotest', - 'cryptodev_octeontx2_autotest', 'cryptodev_openssl_autotest', 'cryptodev_openssl_asym_autotest', 'cryptodev_qat_autotest', diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c index 10b48cdadb..293f59b48c 100644 --- a/app/test/test_cryptodev.c +++ b/app/test/test_cryptodev.c @@ -15615,12 +15615,6 @@ test_cryptodev_octeontx(void) return run_cryptodev_testsuite(RTE_STR(CRYPTODEV_NAME_OCTEONTX_SYM_PMD)); } -static int -test_cryptodev_octeontx2(void) -{ - return run_cryptodev_testsuite(RTE_STR(CRYPTODEV_NAME_OCTEONTX2_PMD)); -} - static int test_cryptodev_caam_jr(void) { @@ -15733,7 +15727,6 @@ REGISTER_TEST_COMMAND(cryptodev_dpaa_sec_autotest, test_cryptodev_dpaa_sec); REGISTER_TEST_COMMAND(cryptodev_ccp_autotest, test_cryptodev_ccp); REGISTER_TEST_COMMAND(cryptodev_virtio_autotest, test_cryptodev_virtio); REGISTER_TEST_COMMAND(cryptodev_octeontx_autotest, test_cryptodev_octeontx); -REGISTER_TEST_COMMAND(cryptodev_octeontx2_autotest, test_cryptodev_octeontx2); REGISTER_TEST_COMMAND(cryptodev_caam_jr_autotest, test_cryptodev_caam_jr); REGISTER_TEST_COMMAND(cryptodev_nitrox_autotest, test_cryptodev_nitrox); REGISTER_TEST_COMMAND(cryptodev_bcmfs_autotest, test_cryptodev_bcmfs); diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h index 90c8287365..70f23a3f67 100644 --- a/app/test/test_cryptodev.h +++ b/app/test/test_cryptodev.h @@ -68,7 +68,6 @@ #define CRYPTODEV_NAME_CCP_PMD crypto_ccp #define 
CRYPTODEV_NAME_VIRTIO_PMD crypto_virtio #define CRYPTODEV_NAME_OCTEONTX_SYM_PMD crypto_octeontx -#define CRYPTODEV_NAME_OCTEONTX2_PMD crypto_octeontx2 #define CRYPTODEV_NAME_CAAM_JR_PMD crypto_caam_jr #define CRYPTODEV_NAME_NITROX_PMD crypto_nitrox_sym #define CRYPTODEV_NAME_BCMFS_PMD crypto_bcmfs diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c index 9d19a6d6d9..68f4d8e7a6 100644 --- a/app/test/test_cryptodev_asym.c +++ b/app/test/test_cryptodev_asym.c @@ -2375,20 +2375,6 @@ test_cryptodev_octeontx_asym(void) return unit_test_suite_runner(&cryptodev_octeontx_asym_testsuite); } -static int -test_cryptodev_octeontx2_asym(void) -{ - gbl_driver_id = rte_cryptodev_driver_id_get( - RTE_STR(CRYPTODEV_NAME_OCTEONTX2_PMD)); - if (gbl_driver_id == -1) { - RTE_LOG(ERR, USER1, "OCTEONTX2 PMD must be loaded.\n"); - return TEST_FAILED; - } - - /* Use test suite registered for crypto_octeontx PMD */ - return unit_test_suite_runner(&cryptodev_octeontx_asym_testsuite); -} - static int test_cryptodev_cn9k_asym(void) { @@ -2424,8 +2410,5 @@ REGISTER_TEST_COMMAND(cryptodev_qat_asym_autotest, test_cryptodev_qat_asym); REGISTER_TEST_COMMAND(cryptodev_octeontx_asym_autotest, test_cryptodev_octeontx_asym); - -REGISTER_TEST_COMMAND(cryptodev_octeontx2_asym_autotest, - test_cryptodev_octeontx2_asym); REGISTER_TEST_COMMAND(cryptodev_cn9k_asym_autotest, test_cryptodev_cn9k_asym); REGISTER_TEST_COMMAND(cryptodev_cn10k_asym_autotest, test_cryptodev_cn10k_asym); diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c index 843d9766b0..10028fe11d 100644 --- a/app/test/test_eventdev.c +++ b/app/test/test_eventdev.c @@ -1018,12 +1018,6 @@ test_eventdev_selftest_octeontx(void) return test_eventdev_selftest_impl("event_octeontx", ""); } -static int -test_eventdev_selftest_octeontx2(void) -{ - return test_eventdev_selftest_impl("event_octeontx2", ""); -} - static int test_eventdev_selftest_dpaa2(void) { @@ -1052,8 +1046,6 @@ 
REGISTER_TEST_COMMAND(eventdev_common_autotest, test_eventdev_common); REGISTER_TEST_COMMAND(eventdev_selftest_sw, test_eventdev_selftest_sw); REGISTER_TEST_COMMAND(eventdev_selftest_octeontx, test_eventdev_selftest_octeontx); -REGISTER_TEST_COMMAND(eventdev_selftest_octeontx2, - test_eventdev_selftest_octeontx2); REGISTER_TEST_COMMAND(eventdev_selftest_dpaa2, test_eventdev_selftest_dpaa2); REGISTER_TEST_COMMAND(eventdev_selftest_dlb2, test_eventdev_selftest_dlb2); REGISTER_TEST_COMMAND(eventdev_selftest_cn9k, test_eventdev_selftest_cn9k); diff --git a/config/arm/arm64_cn10k_linux_gcc b/config/arm/arm64_cn10k_linux_gcc index 88e5f10945..a3578c03a1 100644 --- a/config/arm/arm64_cn10k_linux_gcc +++ b/config/arm/arm64_cn10k_linux_gcc @@ -14,4 +14,3 @@ endian = 'little' [properties] platform = 'cn10k' -disable_drivers = 'common/octeontx2' diff --git a/config/arm/arm64_octeontx2_linux_gcc b/config/arm/arm64_cn9k_linux_gcc similarity index 84% rename from config/arm/arm64_octeontx2_linux_gcc rename to config/arm/arm64_cn9k_linux_gcc index 8fbdd3868d..a94b44a551 100644 --- a/config/arm/arm64_octeontx2_linux_gcc +++ b/config/arm/arm64_cn9k_linux_gcc @@ -13,5 +13,4 @@ cpu = 'armv8-a' endian = 'little' [properties] -platform = 'octeontx2' -disable_drivers = 'common/cnxk' +platform = 'cn9k' diff --git a/config/arm/meson.build b/config/arm/meson.build index 213324d262..16e808cdd5 100644 --- a/config/arm/meson.build +++ b/config/arm/meson.build @@ -139,7 +139,7 @@ implementer_cavium = { 'march_features': ['crc', 'crypto', 'lse'], 'compiler_options': ['-mcpu=octeontx2'], 'flags': [ - ['RTE_MACHINE', '"octeontx2"'], + ['RTE_MACHINE', '"cn9k"'], ['RTE_ARM_FEATURE_ATOMICS', true], ['RTE_USE_C11_MEM_MODEL', true], ['RTE_MAX_LCORE', 36], @@ -340,8 +340,8 @@ soc_n2 = { 'numa': false } -soc_octeontx2 = { - 'description': 'Marvell OCTEON TX2', +soc_cn9k = { + 'description': 'Marvell OCTEON 9', 'implementer': '0x43', 'part_number': '0xb2', 'numa': false @@ -377,6 +377,7 @@ 
generic_aarch32: Generic un-optimized build for armv8 aarch32 execution mode. armada: Marvell ARMADA bluefield: NVIDIA BlueField centriq2400: Qualcomm Centriq 2400 +cn9k: Marvell OCTEON 9 cn10k: Marvell OCTEON 10 dpaa: NXP DPAA emag: Ampere eMAG @@ -385,7 +386,6 @@ kunpeng920: HiSilicon Kunpeng 920 kunpeng930: HiSilicon Kunpeng 930 n1sdp: Arm Neoverse N1SDP n2: Arm Neoverse N2 -octeontx2: Marvell OCTEON TX2 stingray: Broadcom Stingray thunderx2: Marvell ThunderX2 T99 thunderxt88: Marvell ThunderX T88 @@ -399,6 +399,7 @@ socs = { 'armada': soc_armada, 'bluefield': soc_bluefield, 'centriq2400': soc_centriq2400, + 'cn9k': soc_cn9k, 'cn10k' : soc_cn10k, 'dpaa': soc_dpaa, 'emag': soc_emag, @@ -407,7 +408,6 @@ socs = { 'kunpeng930': soc_kunpeng930, 'n1sdp': soc_n1sdp, 'n2': soc_n2, - 'octeontx2': soc_octeontx2, 'stingray': soc_stingray, 'thunderx2': soc_thunderx2, 'thunderxt88': soc_thunderxt88 diff --git a/devtools/check-abi.sh b/devtools/check-abi.sh index 5e654189a8..675f10142e 100755 --- a/devtools/check-abi.sh +++ b/devtools/check-abi.sh @@ -48,7 +48,7 @@ for dump in $(find $refdir -name "*.dump"); do echo "Skipped removed driver $name." 
continue fi - if grep -qE "\`_ - -Features --------- - -The OCTEON TX2 crypto PMD has support for: - -Symmetric Crypto Algorithms -~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Cipher algorithms: - -* ``RTE_CRYPTO_CIPHER_NULL`` -* ``RTE_CRYPTO_CIPHER_3DES_CBC`` -* ``RTE_CRYPTO_CIPHER_3DES_ECB`` -* ``RTE_CRYPTO_CIPHER_AES_CBC`` -* ``RTE_CRYPTO_CIPHER_AES_CTR`` -* ``RTE_CRYPTO_CIPHER_AES_XTS`` -* ``RTE_CRYPTO_CIPHER_DES_CBC`` -* ``RTE_CRYPTO_CIPHER_KASUMI_F8`` -* ``RTE_CRYPTO_CIPHER_SNOW3G_UEA2`` -* ``RTE_CRYPTO_CIPHER_ZUC_EEA3`` - -Hash algorithms: - -* ``RTE_CRYPTO_AUTH_NULL`` -* ``RTE_CRYPTO_AUTH_AES_GMAC`` -* ``RTE_CRYPTO_AUTH_KASUMI_F9`` -* ``RTE_CRYPTO_AUTH_MD5`` -* ``RTE_CRYPTO_AUTH_MD5_HMAC`` -* ``RTE_CRYPTO_AUTH_SHA1`` -* ``RTE_CRYPTO_AUTH_SHA1_HMAC`` -* ``RTE_CRYPTO_AUTH_SHA224`` -* ``RTE_CRYPTO_AUTH_SHA224_HMAC`` -* ``RTE_CRYPTO_AUTH_SHA256`` -* ``RTE_CRYPTO_AUTH_SHA256_HMAC`` -* ``RTE_CRYPTO_AUTH_SHA384`` -* ``RTE_CRYPTO_AUTH_SHA384_HMAC`` -* ``RTE_CRYPTO_AUTH_SHA512`` -* ``RTE_CRYPTO_AUTH_SHA512_HMAC`` -* ``RTE_CRYPTO_AUTH_SNOW3G_UIA2`` -* ``RTE_CRYPTO_AUTH_ZUC_EIA3`` - -AEAD algorithms: - -* ``RTE_CRYPTO_AEAD_AES_GCM`` -* ``RTE_CRYPTO_AEAD_CHACHA20_POLY1305`` - -Asymmetric Crypto Algorithms -~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -* ``RTE_CRYPTO_ASYM_XFORM_RSA`` -* ``RTE_CRYPTO_ASYM_XFORM_MODEX`` - - -Installation ------------- - -The OCTEON TX2 crypto PMD may be compiled natively on an OCTEON TX2 platform or -cross-compiled on an x86 platform. - -Refer to :doc:`../platform/octeontx2` for instructions to build your DPDK -application. - -.. note:: - - The OCTEON TX2 crypto PMD uses services from the kernel mode OCTEON TX2 - crypto PF driver in linux. This driver is included in the OCTEON TX SDK. - -Initialization --------------- - -List the CPT PF devices available on your OCTEON TX2 platform: - -.. code-block:: console - - lspci -d:a0fd - -``a0fd`` is the CPT PF device id. You should see output similar to: - -.. 
code-block:: console - - 0002:10:00.0 Class 1080: Device 177d:a0fd - -Set ``sriov_numvfs`` on the CPT PF device, to create a VF: - -.. code-block:: console - - echo 1 > /sys/bus/pci/drivers/octeontx2-cpt/0002:10:00.0/sriov_numvfs - -Bind the CPT VF device to the vfio_pci driver: - -.. code-block:: console - - echo '177d a0fe' > /sys/bus/pci/drivers/vfio-pci/new_id - echo 0002:10:00.1 > /sys/bus/pci/devices/0002:10:00.1/driver/unbind - echo 0002:10:00.1 > /sys/bus/pci/drivers/vfio-pci/bind - -Another way to bind the VF would be to use the ``dpdk-devbind.py`` script: - -.. code-block:: console - - cd - ./usertools/dpdk-devbind.py -u 0002:10:00.1 - ./usertools/dpdk-devbind.py -b vfio-pci 0002:10.00.1 - -.. note:: - - * For CN98xx SoC, it is recommended to use even and odd DBDF VFs to achieve - higher performance as even VF uses one crypto engine and odd one uses - another crypto engine. - - * Ensure that sufficient huge pages are available for your application:: - - dpdk-hugepages.py --setup 4G --pagesize 512M - - Refer to :ref:`linux_gsg_hugepages` for more details. - -Debugging Options ------------------ - -.. _table_octeontx2_crypto_debug_options: - -.. table:: OCTEON TX2 crypto PMD debug options - - +---+------------+-------------------------------------------------------+ - | # | Component | EAL log command | - +===+============+=======================================================+ - | 1 | CPT | --log-level='pmd\.crypto\.octeontx2,8' | - +---+------------+-------------------------------------------------------+ - -Testing -------- - -The symmetric crypto operations on OCTEON TX2 crypto PMD may be verified by running the test -application: - -.. code-block:: console - - ./dpdk-test - RTE>>cryptodev_octeontx2_autotest - -The asymmetric crypto operations on OCTEON TX2 crypto PMD may be verified by running the test -application: - -.. 
code-block:: console - - ./dpdk-test - RTE>>cryptodev_octeontx2_asym_autotest - - -Lookaside IPsec Support ------------------------ - -The OCTEON TX2 SoC can accelerate IPsec traffic in lookaside protocol mode, -with its **cryptographic accelerator (CPT)**. ``OCTEON TX2 crypto PMD`` implements -this as an ``RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL`` offload. - -Refer to :doc:`../prog_guide/rte_security` for more details on protocol offloads. - -This feature can be tested with ipsec-secgw sample application. - - -Features supported -~~~~~~~~~~~~~~~~~~ - -* IPv4 -* IPv6 -* ESP -* Tunnel mode -* Transport mode(IPv4) -* ESN -* Anti-replay -* UDP Encapsulation -* AES-128/192/256-GCM -* AES-128/192/256-CBC-SHA1-HMAC -* AES-128/192/256-CBC-SHA256-128-HMAC diff --git a/doc/guides/dmadevs/cnxk.rst b/doc/guides/dmadevs/cnxk.rst index da2dd59071..418b9a9d63 100644 --- a/doc/guides/dmadevs/cnxk.rst +++ b/doc/guides/dmadevs/cnxk.rst @@ -7,7 +7,7 @@ CNXK DMA Device Driver ====================== The ``cnxk`` dmadev driver provides a poll-mode driver (PMD) for Marvell DPI DMA -Hardware Accelerator block found in OCTEONTX2 and OCTEONTX3 family of SoCs. +Hardware Accelerator block found in OCTEON 9 and OCTEON 10 family of SoCs. Each DMA queue is exposed as a VF function when SRIOV is enabled. The block supports following modes of DMA transfers: diff --git a/doc/guides/eventdevs/features/octeontx2.ini b/doc/guides/eventdevs/features/octeontx2.ini deleted file mode 100644 index 05b84beb6e..0000000000 --- a/doc/guides/eventdevs/features/octeontx2.ini +++ /dev/null @@ -1,30 +0,0 @@ -; -; Supported features of the 'octeontx2' eventdev driver. -; -; Refer to default.ini for the full list of available PMD features. 
-; -[Scheduling Features] -queue_qos = Y -distributed_sched = Y -queue_all_types = Y -nonseq_mode = Y -runtime_port_link = Y -multiple_queue_port = Y -carry_flow_id = Y -maintenance_free = Y - -[Eth Rx adapter Features] -internal_port = Y -multi_eventq = Y - -[Eth Tx adapter Features] -internal_port = Y - -[Crypto adapter Features] -internal_port_op_new = Y -internal_port_op_fwd = Y -internal_port_qp_ev_bind = Y - -[Timer adapter Features] -internal_port = Y -periodic = Y diff --git a/doc/guides/eventdevs/index.rst b/doc/guides/eventdevs/index.rst index b11657f7ae..eed19ad28c 100644 --- a/doc/guides/eventdevs/index.rst +++ b/doc/guides/eventdevs/index.rst @@ -19,5 +19,4 @@ application through the eventdev API. dsw sw octeontx - octeontx2 opdl diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst deleted file mode 100644 index 0fa57abfa3..0000000000 --- a/doc/guides/eventdevs/octeontx2.rst +++ /dev/null @@ -1,178 +0,0 @@ -.. SPDX-License-Identifier: BSD-3-Clause - Copyright(c) 2019 Marvell International Ltd. - -OCTEON TX2 SSO Eventdev Driver -=============================== - -The OCTEON TX2 SSO PMD (**librte_event_octeontx2**) provides poll mode -eventdev driver support for the inbuilt event device found in the **Marvell OCTEON TX2** -SoC family. - -More information about OCTEON TX2 SoC can be found at `Marvell Official Website -`_. 
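[Editor's sketch, not part of the patch.] The SSO runtime options documented below are all passed through the EAL ``-a`` (allow) option, and several devargs can be combined in one comma-separated entry. A minimal illustration, using the placeholder BDF ``0002:0e:00.0`` and the example values from this removed guide:

```shell
# Build a combined EAL allow-list entry from individual SSO devargs.
# Values (xae_cnt, single_ws, qos) are the example values used in this guide;
# adjust them per deployment before launching the application with this entry.
DEV="0002:0e:00.0"
DEVARGS="xae_cnt=16384,single_ws=1,qos=[1-50-50-50]"
echo "-a ${DEV},${DEVARGS}"
```

Running the snippet prints the exact ``-a`` argument an eventdev application would receive.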
- -Features --------- - -Features of the OCTEON TX2 SSO PMD are: - -- 256 Event queues -- 26 (dual) and 52 (single) Event ports -- HW event scheduler -- Supports 1M flows per event queue -- Flow based event pipelining -- Flow pinning support in flow based event pipelining -- Queue based event pipelining -- Supports ATOMIC, ORDERED, PARALLEL schedule types per flow -- Event scheduling QoS based on event queue priority -- Open system with configurable amount of outstanding events limited only by - DRAM -- HW accelerated dequeue timeout support to enable power management -- HW managed event timers support through TIM, with high precision and - time granularity of 2.5us. -- Up to 256 TIM rings aka event timer adapters. -- Up to 8 rings traversed in parallel. -- HW managed packets enqueued from ethdev to eventdev exposed through event eth - RX adapter. -- N:1 ethernet device Rx queue to Event queue mapping. -- Lockfree Tx from event eth Tx adapter using ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` - capability while maintaining receive packet order. -- Full Rx/Tx offload support defined through ethdev queue config. - -Prerequisites and Compilation procedure ---------------------------------------- - - See :doc:`../platform/octeontx2` for setup information. - - -Runtime Config Options ----------------------- - -- ``Maximum number of in-flight events`` (default ``8192``) - - In **Marvell OCTEON TX2** the max number of in-flight events are only limited - by DRAM size, the ``xae_cnt`` devargs parameter is introduced to provide - upper limit for in-flight events. - For example:: - - -a 0002:0e:00.0,xae_cnt=16384 - -- ``Force legacy mode`` - - The ``single_ws`` devargs parameter is introduced to force legacy mode i.e - single workslot mode in SSO and disable the default dual workslot mode. - For example:: - - -a 0002:0e:00.0,single_ws=1 - -- ``Event Group QoS support`` - - SSO GGRPs i.e. queue uses DRAM & SRAM buffers to hold in-flight - events. 
By default the buffers are assigned to the SSO GGRPs to - satisfy minimum HW requirements. SSO is free to assign the remaining - buffers to GGRPs based on a preconfigured threshold. - We can control the QoS of SSO GGRP by modifying the above mentioned - thresholds. GGRPs that have higher importance can be assigned higher - thresholds than the rest. The dictionary format is as follows - [Qx-XAQ-TAQ-IAQ][Qz-XAQ-TAQ-IAQ] expressed in percentages, 0 represents - default. - For example:: - - -a 0002:0e:00.0,qos=[1-50-50-50] - -- ``TIM disable NPA`` - - By default chunks are allocated from NPA then TIM can automatically free - them when traversing the list of chunks. The ``tim_disable_npa`` devargs - parameter disables NPA and uses software mempool to manage chunks - For example:: - - -a 0002:0e:00.0,tim_disable_npa=1 - -- ``TIM modify chunk slots`` - - The ``tim_chnk_slots`` devargs can be used to modify number of chunk slots. - Chunks are used to store event timers, a chunk can be visualised as an array - where the last element points to the next chunk and rest of them are used to - store events. TIM traverses the list of chunks and enqueues the event timers - to SSO. The default value is 255 and the max value is 4095. - For example:: - - -a 0002:0e:00.0,tim_chnk_slots=1023 - -- ``TIM enable arm/cancel statistics`` - - The ``tim_stats_ena`` devargs can be used to enable arm and cancel stats of - event timer adapter. - For example:: - - -a 0002:0e:00.0,tim_stats_ena=1 - -- ``TIM limit max rings reserved`` - - The ``tim_rings_lmt`` devargs can be used to limit the max number of TIM - rings i.e. event timer adapter reserved on probe. Since, TIM rings are HW - resources we can avoid starving other applications by not grabbing all the - rings. - For example:: - - -a 0002:0e:00.0,tim_rings_lmt=5 - -- ``TIM ring control internal parameters`` - - When using multiple TIM rings the ``tim_ring_ctl`` devargs can be used to - control each TIM rings internal parameters uniquely. 
The following dict - format is expected [ring-chnk_slots-disable_npa-stats_ena]. 0 represents - default values. - For Example:: - - -a 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0] - -- ``Lock NPA contexts in NDC`` - - Lock NPA aura and pool contexts in NDC cache. - The device args take hexadecimal bitmask where each bit represent the - corresponding aura/pool id. - - For example:: - - -a 0002:0e:00.0,npa_lock_mask=0xf - -- ``Force Rx Back pressure`` - - Force Rx back pressure when same mempool is used across ethernet device - connected to event device. - - For example:: - - -a 0002:0e:00.0,force_rx_bp=1 - -Debugging Options ------------------ - -.. _table_octeontx2_event_debug_options: - -.. table:: OCTEON TX2 event device debug options - - +---+------------+-------------------------------------------------------+ - | # | Component | EAL log command | - +===+============+=======================================================+ - | 1 | SSO | --log-level='pmd\.event\.octeontx2,8' | - +---+------------+-------------------------------------------------------+ - | 2 | TIM | --log-level='pmd\.event\.octeontx2\.timer,8' | - +---+------------+-------------------------------------------------------+ - -Limitations ------------ - -Rx adapter support -~~~~~~~~~~~~~~~~~~ - -Using the same mempool for all the ethernet device ports connected to -event device would cause back pressure to be asserted only on the first -ethernet device. -Back pressure is automatically disabled when using same mempool for all the -ethernet devices connected to event device to override this applications can -use `force_rx_bp=1` device arguments. -Using unique mempool per each ethernet device is recommended when they are -connected to event device. diff --git a/doc/guides/mempool/index.rst b/doc/guides/mempool/index.rst index ce53bc1ac7..e4b6ee7d31 100644 --- a/doc/guides/mempool/index.rst +++ b/doc/guides/mempool/index.rst @@ -13,6 +13,5 @@ application through the mempool API. 
cnxk octeontx - octeontx2 ring stack diff --git a/doc/guides/mempool/octeontx2.rst b/doc/guides/mempool/octeontx2.rst deleted file mode 100644 index 1272c1e72b..0000000000 --- a/doc/guides/mempool/octeontx2.rst +++ /dev/null @@ -1,92 +0,0 @@ -.. SPDX-License-Identifier: BSD-3-Clause - Copyright(c) 2019 Marvell International Ltd. - -OCTEON TX2 NPA Mempool Driver -============================= - -The OCTEON TX2 NPA PMD (**librte_mempool_octeontx2**) provides mempool -driver support for the integrated mempool device found in **Marvell OCTEON TX2** SoC family. - -More information about OCTEON TX2 SoC can be found at `Marvell Official Website -`_. - -Features --------- - -OCTEON TX2 NPA PMD supports: - -- Up to 128 NPA LFs -- 1M Pools per LF -- HW mempool manager -- Ethdev Rx buffer allocation in HW to save CPU cycles in the Rx path. -- Ethdev Tx buffer recycling in HW to save CPU cycles in the Tx path. - -Prerequisites and Compilation procedure ---------------------------------------- - - See :doc:`../platform/octeontx2` for setup information. - -Pre-Installation Configuration ------------------------------- - - -Runtime Config Options -~~~~~~~~~~~~~~~~~~~~~~ - -- ``Maximum number of mempools per application`` (default ``128``) - - The maximum number of mempools per application needs to be configured on - HW during mempool driver initialization. HW can support up to 1M mempools, - Since each mempool costs set of HW resources, the ``max_pools`` ``devargs`` - parameter is being introduced to configure the number of mempools required - for the application. - For example:: - - -a 0002:02:00.0,max_pools=512 - - With the above configuration, the driver will set up only 512 mempools for - the given application to save HW resources. - -.. note:: - - Since this configuration is per application, the end user needs to - provide ``max_pools`` parameter to the first PCIe device probed by the given - application. 
- -- ``Lock NPA contexts in NDC`` - - Lock NPA aura and pool contexts in NDC cache. - The device args take hexadecimal bitmask where each bit represent the - corresponding aura/pool id. - - For example:: - - -a 0002:02:00.0,npa_lock_mask=0xf - -Debugging Options -~~~~~~~~~~~~~~~~~ - -.. _table_octeontx2_mempool_debug_options: - -.. table:: OCTEON TX2 mempool debug options - - +---+------------+-------------------------------------------------------+ - | # | Component | EAL log command | - +===+============+=======================================================+ - | 1 | NPA | --log-level='pmd\.mempool.octeontx2,8' | - +---+------------+-------------------------------------------------------+ - -Standalone mempool device -~~~~~~~~~~~~~~~~~~~~~~~~~ - - The ``usertools/dpdk-devbind.py`` script shall enumerate all the mempool devices - available in the system. In order to avoid, the end user to bind the mempool - device prior to use ethdev and/or eventdev device, the respective driver - configures an NPA LF and attach to the first probed ethdev or eventdev device. - In case, if end user need to run mempool as a standalone device - (without ethdev or eventdev), end user needs to bind a mempool device using - ``usertools/dpdk-devbind.py`` - - Example command to run ``mempool_autotest`` test with standalone OCTEONTX2 NPA device:: - - echo "mempool_autotest" | /app/test/dpdk-test -c 0xf0 --mbuf-pool-ops-name="octeontx2_npa" diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst index 84f9865654..2119ba51c8 100644 --- a/doc/guides/nics/cnxk.rst +++ b/doc/guides/nics/cnxk.rst @@ -178,7 +178,7 @@ Runtime Config Options * ``rss_adder<7:0> = flow_tag<7:0>`` Latter one aligns with standard NIC behavior vs former one is a legacy - RSS adder scheme used in OCTEON TX2 products. + RSS adder scheme used in OCTEON 9 products. By default, the driver runs in the latter mode. Setting this flag to 1 to select the legacy mode. 
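[Editor's sketch, not part of the patch.] The ``cnxk.rst`` hunk above only renames the product in the description of the legacy RSS adder scheme; assuming the devarg keeps the ``tag_as_xor`` name used by the removed octeontx2 guide, selecting the legacy mode would look like:

```shell
# Pass tag_as_xor=1 in the EAL allow list to select the legacy (OCTEON 9)
# RSS adder scheme discussed in the hunk above. The BDF is a placeholder.
BDF="0002:02:00.0"
echo "-a ${BDF},tag_as_xor=1"
```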
@@ -291,7 +291,7 @@ Limitations The OCTEON CN9K/CN10K SoC family NIC has inbuilt HW assisted external mempool manager. ``net_cnxk`` PMD only works with ``mempool_cnxk`` mempool handler as it is performance wise most effective way for packet allocation and Tx buffer -recycling on OCTEON TX2 SoC platform. +recycling on OCTEON 9 SoC platform. CRC stripping ~~~~~~~~~~~~~ diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini deleted file mode 100644 index bf0c2890f2..0000000000 --- a/doc/guides/nics/features/octeontx2.ini +++ /dev/null @@ -1,97 +0,0 @@ -; -; Supported features of the 'octeontx2' network poll mode driver. -; -; Refer to default.ini for the full list of available PMD features. -; -[Features] -Speed capabilities = Y -Rx interrupt = Y -Lock-free Tx queue = Y -SR-IOV = Y -Multiprocess aware = Y -Link status = Y -Link status event = Y -Runtime Rx queue setup = Y -Runtime Tx queue setup = Y -Burst mode info = Y -Fast mbuf free = Y -Free Tx mbuf on demand = Y -Queue start/stop = Y -MTU update = Y -TSO = Y -Promiscuous mode = Y -Allmulticast mode = Y -Unicast MAC filter = Y -Multicast MAC filter = Y -RSS hash = Y -RSS key update = Y -RSS reta update = Y -Inner RSS = Y -Inline protocol = Y -VLAN filter = Y -Flow control = Y -Rate limitation = Y -Scattered Rx = Y -VLAN offload = Y -QinQ offload = Y -L3 checksum offload = Y -L4 checksum offload = Y -Inner L3 checksum = Y -Inner L4 checksum = Y -Packet type parsing = Y -Timesync = Y -Timestamp offload = Y -Rx descriptor status = Y -Tx descriptor status = Y -Basic stats = Y -Stats per queue = Y -Extended stats = Y -FW version = Y -Module EEPROM dump = Y -Registers dump = Y -Linux = Y -ARMv8 = Y -Usage doc = Y - -[rte_flow items] -any = Y -arp_eth_ipv4 = Y -esp = Y -eth = Y -e_tag = Y -geneve = Y -gre = Y -gre_key = Y -gtpc = Y -gtpu = Y -higig2 = Y -icmp = Y -ipv4 = Y -ipv6 = Y -ipv6_ext = Y -mpls = Y -nvgre = Y -raw = Y -sctp = Y -tcp = Y -udp = Y -vlan = Y -vxlan = Y 
-vxlan_gpe = Y - -[rte_flow actions] -count = Y -drop = Y -flag = Y -mark = Y -of_pop_vlan = Y -of_push_vlan = Y -of_set_vlan_pcp = Y -of_set_vlan_vid = Y -pf = Y -port_id = Y -port_representor = Y -queue = Y -rss = Y -security = Y -vf = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini deleted file mode 100644 index c405db7cf9..0000000000 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ /dev/null @@ -1,48 +0,0 @@ -; -; Supported features of the 'octeontx2_vec' network poll mode driver. -; -; Refer to default.ini for the full list of available PMD features. -; -[Features] -Speed capabilities = Y -Lock-free Tx queue = Y -SR-IOV = Y -Multiprocess aware = Y -Link status = Y -Link status event = Y -Runtime Rx queue setup = Y -Runtime Tx queue setup = Y -Burst mode info = Y -Fast mbuf free = Y -Free Tx mbuf on demand = Y -Queue start/stop = Y -MTU update = Y -Promiscuous mode = Y -Allmulticast mode = Y -Unicast MAC filter = Y -Multicast MAC filter = Y -RSS hash = Y -RSS key update = Y -RSS reta update = Y -Inner RSS = Y -VLAN filter = Y -Flow control = Y -Rate limitation = Y -VLAN offload = Y -QinQ offload = Y -L3 checksum offload = Y -L4 checksum offload = Y -Inner L3 checksum = Y -Inner L4 checksum = Y -Packet type parsing = Y -Rx descriptor status = Y -Tx descriptor status = Y -Basic stats = Y -Extended stats = Y -Stats per queue = Y -FW version = Y -Module EEPROM dump = Y -Registers dump = Y -Linux = Y -ARMv8 = Y -Usage doc = Y diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini deleted file mode 100644 index 5ac7a49a5c..0000000000 --- a/doc/guides/nics/features/octeontx2_vf.ini +++ /dev/null @@ -1,45 +0,0 @@ -; -; Supported features of the 'octeontx2_vf' network poll mode driver. -; -; Refer to default.ini for the full list of available PMD features. 
-; -[Features] -Speed capabilities = Y -Lock-free Tx queue = Y -Multiprocess aware = Y -Rx interrupt = Y -Link status = Y -Link status event = Y -Runtime Rx queue setup = Y -Runtime Tx queue setup = Y -Burst mode info = Y -Fast mbuf free = Y -Free Tx mbuf on demand = Y -Queue start/stop = Y -TSO = Y -RSS hash = Y -RSS key update = Y -RSS reta update = Y -Inner RSS = Y -Inline protocol = Y -VLAN filter = Y -Rate limitation = Y -Scattered Rx = Y -VLAN offload = Y -QinQ offload = Y -L3 checksum offload = Y -L4 checksum offload = Y -Inner L3 checksum = Y -Inner L4 checksum = Y -Packet type parsing = Y -Rx descriptor status = Y -Tx descriptor status = Y -Basic stats = Y -Extended stats = Y -Stats per queue = Y -FW version = Y -Module EEPROM dump = Y -Registers dump = Y -Linux = Y -ARMv8 = Y -Usage doc = Y diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst index 1c94caccea..f48e9f815c 100644 --- a/doc/guides/nics/index.rst +++ b/doc/guides/nics/index.rst @@ -52,7 +52,6 @@ Network Interface Controller Drivers ngbe null octeontx - octeontx2 octeontx_ep pfe qede diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst deleted file mode 100644 index 4ce067f2c5..0000000000 --- a/doc/guides/nics/octeontx2.rst +++ /dev/null @@ -1,465 +0,0 @@ -.. SPDX-License-Identifier: BSD-3-Clause - Copyright(C) 2019 Marvell International Ltd. - -OCTEON TX2 Poll Mode driver -=========================== - -The OCTEON TX2 ETHDEV PMD (**librte_net_octeontx2**) provides poll mode ethdev -driver support for the inbuilt network device found in **Marvell OCTEON TX2** -SoC family as well as for their virtual functions (VF) in SR-IOV context. - -More information can be found at `Marvell Official Website -`_. 
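[Editor's sketch, not part of the original guide.] Each runtime option in the sections below is shown with its own ``-a`` example; in practice several devargs are combined into a single comma-separated allow-list entry when launching testpmd. An illustration using example values drawn from later sections of this guide (the testpmd path and BDF are the placeholders used throughout):

```shell
# Combine several documented devargs (reta_size, flow_max_priority,
# max_sqb_count) into one allow-list entry and print the resulting
# testpmd command line; values are this guide's examples.
BDF="0002:02:00.0"
ARGS="reta_size=256,flow_max_priority=10,max_sqb_count=64"
echo "./app/dpdk-testpmd -c 0x300 -a ${BDF},${ARGS} -- --portmask=0x1 --nb-cores=1"
```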
- -Features --------- - -Features of the OCTEON TX2 Ethdev PMD are: - -- Packet type information -- Promiscuous mode -- Jumbo frames -- SR-IOV VF -- Lock-free Tx queue -- Multiple queues for TX and RX -- Receiver Side Scaling (RSS) -- MAC/VLAN filtering -- Multicast MAC filtering -- Generic flow API -- Inner and Outer Checksum offload -- VLAN/QinQ stripping and insertion -- Port hardware statistics -- Link state information -- Link flow control -- MTU update -- Scatter-Gather IO support -- Vector Poll mode driver -- Debug utilities - Context dump and error interrupt support -- IEEE1588 timestamping -- HW offloaded `ethdev Rx queue` to `eventdev event queue` packet injection -- Support Rx interrupt -- Inline IPsec processing support -- :ref:`Traffic Management API ` - -Prerequisites -------------- - -See :doc:`../platform/octeontx2` for setup information. - - -Driver compilation and testing ------------------------------- - -Refer to the document :ref:`compiling and testing a PMD for a NIC ` -for details. - -#. Running testpmd: - - Follow instructions available in the document - :ref:`compiling and testing a PMD for a NIC ` - to run testpmd. - - Example output: - - .. code-block:: console - - .//app/dpdk-testpmd -c 0x300 -a 0002:02:00.0 -- --portmask=0x1 --nb-cores=1 --port-topology=loop --rxq=1 --txq=1 - EAL: Detected 24 lcore(s) - EAL: Detected 1 NUMA nodes - EAL: Multi-process socket /var/run/dpdk/rte/mp_socket - EAL: No available hugepages reported in hugepages-2048kB - EAL: Probing VFIO support... - EAL: VFIO support initialized - EAL: PCI device 0002:02:00.0 on NUMA socket 0 - EAL: probe driver: 177d:a063 net_octeontx2 - EAL: using IOMMU type 1 (Type 1) - testpmd: create a new mbuf pool : n=267456, size=2176, socket=0 - testpmd: preferred mempool ops selected: octeontx2_npa - Configuring Port 0 (socket 0) - PMD: Port 0: Link Up - speed 40000 Mbps - full-duplex - - Port 0: link state change event - Port 0: 36:10:66:88:7A:57 - Checking link statuses... 
- Done - No commandline core given, start packet forwarding - io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native - Logical Core 9 (socket 0) forwards packets on 1 streams: - RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00 - - io packet forwarding packets/burst=32 - nb forwarding cores=1 - nb forwarding ports=1 - port 0: RX queue number: 1 Tx queue number: 1 - Rx offloads=0x0 Tx offloads=0x10000 - RX queue: 0 - RX desc=512 - RX free threshold=0 - RX threshold registers: pthresh=0 hthresh=0 wthresh=0 - RX Offloads=0x0 - TX queue: 0 - TX desc=512 - TX free threshold=0 - TX threshold registers: pthresh=0 hthresh=0 wthresh=0 - TX offloads=0x10000 - TX RS bit threshold=0 - Press enter to exit - -Runtime Config Options ----------------------- - -- ``Rx&Tx scalar mode enable`` (default ``0``) - - Ethdev supports both scalar and vector mode, it may be selected at runtime - using ``scalar_enable`` ``devargs`` parameter. - -- ``RSS reta size`` (default ``64``) - - RSS redirection table size may be configured during runtime using ``reta_size`` - ``devargs`` parameter. - - For example:: - - -a 0002:02:00.0,reta_size=256 - - With the above configuration, reta table of size 256 is populated. - -- ``Flow priority levels`` (default ``3``) - - RTE Flow priority levels can be configured during runtime using - ``flow_max_priority`` ``devargs`` parameter. - - For example:: - - -a 0002:02:00.0,flow_max_priority=10 - - With the above configuration, priority level was set to 10 (0-9). Max - priority level supported is 32. - -- ``Reserve Flow entries`` (default ``8``) - - RTE flow entries can be pre allocated and the size of pre allocation can be - selected runtime using ``flow_prealloc_size`` ``devargs`` parameter. - - For example:: - - -a 0002:02:00.0,flow_prealloc_size=4 - - With the above configuration, pre alloc size was set to 4. Max pre alloc - size supported is 32. 
- -- ``Max SQB buffer count`` (default ``512``) - - Send queue descriptor buffer count may be limited during runtime using - ``max_sqb_count`` ``devargs`` parameter. - - For example:: - - -a 0002:02:00.0,max_sqb_count=64 - - With the above configuration, each send queue's descriptor buffer count is - limited to a maximum of 64 buffers. - -- ``Switch header enable`` (default ``none``) - - A port can be configured to a specific switch header type by using - ``switch_header`` ``devargs`` parameter. - - For example:: - - -a 0002:02:00.0,switch_header="higig2" - - With the above configuration, higig2 will be enabled on that port and the - traffic on this port should be higig2 traffic only. Supported switch header - types are "chlen24b", "chlen90b", "dsa", "exdsa", "higig2" and "vlan_exdsa". - -- ``RSS tag as XOR`` (default ``0``) - - C0 HW revision onward, The HW gives an option to configure the RSS adder as - - * ``rss_adder<7:0> = flow_tag<7:0> ^ flow_tag<15:8> ^ flow_tag<23:16> ^ flow_tag<31:24>`` - - * ``rss_adder<7:0> = flow_tag<7:0>`` - - Latter one aligns with standard NIC behavior vs former one is a legacy - RSS adder scheme used in OCTEON TX2 products. - - By default, the driver runs in the latter mode from C0 HW revision onward. - Setting this flag to 1 to select the legacy mode. - - For example to select the legacy mode(RSS tag adder as XOR):: - - -a 0002:02:00.0,tag_as_xor=1 - -- ``Max SPI for inbound inline IPsec`` (default ``1``) - - Max SPI supported for inbound inline IPsec processing can be specified by - ``ipsec_in_max_spi`` ``devargs`` parameter. - - For example:: - - -a 0002:02:00.0,ipsec_in_max_spi=128 - - With the above configuration, application can enable inline IPsec processing - on 128 SAs (SPI 0-127). - -- ``Lock Rx contexts in NDC cache`` - - Lock Rx contexts in NDC cache by using ``lock_rx_ctx`` parameter. 
- - For example:: - - -a 0002:02:00.0,lock_rx_ctx=1 - -- ``Lock Tx contexts in NDC cache`` - - Lock Tx contexts in NDC cache by using ``lock_tx_ctx`` parameter. - - For example:: - - -a 0002:02:00.0,lock_tx_ctx=1 - -.. note:: - - Above devarg parameters are configurable per device, user needs to pass the - parameters to all the PCIe devices if application requires to configure on - all the ethdev ports. - -- ``Lock NPA contexts in NDC`` - - Lock NPA aura and pool contexts in NDC cache. - The device args take hexadecimal bitmask where each bit represent the - corresponding aura/pool id. - - For example:: - - -a 0002:02:00.0,npa_lock_mask=0xf - -.. _otx2_tmapi: - -Traffic Management API ----------------------- - -OCTEON TX2 PMD supports generic DPDK Traffic Management API which allows to -configure the following features: - -#. Hierarchical scheduling -#. Single rate - Two color, Two rate - Three color shaping - -Both DWRR and Static Priority(SP) hierarchical scheduling is supported. - -Every parent can have atmost 10 SP Children and unlimited DWRR children. - -Both PF & VF supports traffic management API with PF supporting 6 levels -and VF supporting 5 levels of topology. - -Limitations ------------ - -``mempool_octeontx2`` external mempool handler dependency -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The OCTEON TX2 SoC family NIC has inbuilt HW assisted external mempool manager. -``net_octeontx2`` PMD only works with ``mempool_octeontx2`` mempool handler -as it is performance wise most effective way for packet allocation and Tx buffer -recycling on OCTEON TX2 SoC platform. - -CRC stripping -~~~~~~~~~~~~~ - -The OCTEON TX2 SoC family NICs strip the CRC for every packet being received by -the host interface irrespective of the offload configuration. - -Multicast MAC filtering -~~~~~~~~~~~~~~~~~~~~~~~ - -``net_octeontx2`` PMD supports multicast mac filtering feature only on physical -function devices. 
- -SDP interface support -~~~~~~~~~~~~~~~~~~~~~ -OCTEON TX2 SDP interface support is limited to PF device, No VF support. - -Inline Protocol Processing -~~~~~~~~~~~~~~~~~~~~~~~~~~ -``net_octeontx2`` PMD doesn't support the following features for packets to be -inline protocol processed. -- TSO offload -- VLAN/QinQ offload -- Fragmentation - -Debugging Options ------------------ - -.. _table_octeontx2_ethdev_debug_options: - -.. table:: OCTEON TX2 ethdev debug options - - +---+------------+-------------------------------------------------------+ - | # | Component | EAL log command | - +===+============+=======================================================+ - | 1 | NIX | --log-level='pmd\.net.octeontx2,8' | - +---+------------+-------------------------------------------------------+ - | 2 | NPC | --log-level='pmd\.net.octeontx2\.flow,8' | - +---+------------+-------------------------------------------------------+ - -RTE Flow Support ----------------- - -The OCTEON TX2 SoC family NIC has support for the following patterns and -actions. - -Patterns: - -.. _table_octeontx2_supported_flow_item_types: - -.. 
table:: Item types - - +----+--------------------------------+ - | # | Pattern Type | - +====+================================+ - | 1 | RTE_FLOW_ITEM_TYPE_ETH | - +----+--------------------------------+ - | 2 | RTE_FLOW_ITEM_TYPE_VLAN | - +----+--------------------------------+ - | 3 | RTE_FLOW_ITEM_TYPE_E_TAG | - +----+--------------------------------+ - | 4 | RTE_FLOW_ITEM_TYPE_IPV4 | - +----+--------------------------------+ - | 5 | RTE_FLOW_ITEM_TYPE_IPV6 | - +----+--------------------------------+ - | 6 | RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4| - +----+--------------------------------+ - | 7 | RTE_FLOW_ITEM_TYPE_MPLS | - +----+--------------------------------+ - | 8 | RTE_FLOW_ITEM_TYPE_ICMP | - +----+--------------------------------+ - | 9 | RTE_FLOW_ITEM_TYPE_UDP | - +----+--------------------------------+ - | 10 | RTE_FLOW_ITEM_TYPE_TCP | - +----+--------------------------------+ - | 11 | RTE_FLOW_ITEM_TYPE_SCTP | - +----+--------------------------------+ - | 12 | RTE_FLOW_ITEM_TYPE_ESP | - +----+--------------------------------+ - | 13 | RTE_FLOW_ITEM_TYPE_GRE | - +----+--------------------------------+ - | 14 | RTE_FLOW_ITEM_TYPE_NVGRE | - +----+--------------------------------+ - | 15 | RTE_FLOW_ITEM_TYPE_VXLAN | - +----+--------------------------------+ - | 16 | RTE_FLOW_ITEM_TYPE_GTPC | - +----+--------------------------------+ - | 17 | RTE_FLOW_ITEM_TYPE_GTPU | - +----+--------------------------------+ - | 18 | RTE_FLOW_ITEM_TYPE_GENEVE | - +----+--------------------------------+ - | 19 | RTE_FLOW_ITEM_TYPE_VXLAN_GPE | - +----+--------------------------------+ - | 20 | RTE_FLOW_ITEM_TYPE_IPV6_EXT | - +----+--------------------------------+ - | 21 | RTE_FLOW_ITEM_TYPE_VOID | - +----+--------------------------------+ - | 22 | RTE_FLOW_ITEM_TYPE_ANY | - +----+--------------------------------+ - | 23 | RTE_FLOW_ITEM_TYPE_GRE_KEY | - +----+--------------------------------+ - | 24 | RTE_FLOW_ITEM_TYPE_HIGIG2 | - +----+--------------------------------+ - | 25 | 
RTE_FLOW_ITEM_TYPE_RAW | - +----+--------------------------------+ - -.. note:: - - ``RTE_FLOW_ITEM_TYPE_GRE_KEY`` works only when checksum and routing - bits in the GRE header are equal to 0. - -Actions: - -.. _table_octeontx2_supported_ingress_action_types: - -.. table:: Ingress action types - - +----+-----------------------------------------+ - | # | Action Type | - +====+=========================================+ - | 1 | RTE_FLOW_ACTION_TYPE_VOID | - +----+-----------------------------------------+ - | 2 | RTE_FLOW_ACTION_TYPE_MARK | - +----+-----------------------------------------+ - | 3 | RTE_FLOW_ACTION_TYPE_FLAG | - +----+-----------------------------------------+ - | 4 | RTE_FLOW_ACTION_TYPE_COUNT | - +----+-----------------------------------------+ - | 5 | RTE_FLOW_ACTION_TYPE_DROP | - +----+-----------------------------------------+ - | 6 | RTE_FLOW_ACTION_TYPE_QUEUE | - +----+-----------------------------------------+ - | 7 | RTE_FLOW_ACTION_TYPE_RSS | - +----+-----------------------------------------+ - | 8 | RTE_FLOW_ACTION_TYPE_SECURITY | - +----+-----------------------------------------+ - | 9 | RTE_FLOW_ACTION_TYPE_PF | - +----+-----------------------------------------+ - | 10 | RTE_FLOW_ACTION_TYPE_VF | - +----+-----------------------------------------+ - | 11 | RTE_FLOW_ACTION_TYPE_OF_POP_VLAN | - +----+-----------------------------------------+ - | 12 | RTE_FLOW_ACTION_TYPE_PORT_ID | - +----+-----------------------------------------+ - | 13 | RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR | - +----+-----------------------------------------+ - -.. note:: - - ``RTE_FLOW_ACTION_TYPE_PORT_ID``, ``RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR`` - are only supported between PF and its VFs. - -.. _table_octeontx2_supported_egress_action_types: - -.. 
table:: Egress action types - - +----+-----------------------------------------+ - | # | Action Type | - +====+=========================================+ - | 1 | RTE_FLOW_ACTION_TYPE_COUNT | - +----+-----------------------------------------+ - | 2 | RTE_FLOW_ACTION_TYPE_DROP | - +----+-----------------------------------------+ - | 3 | RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN | - +----+-----------------------------------------+ - | 4 | RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID | - +----+-----------------------------------------+ - | 5 | RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP | - +----+-----------------------------------------+ - -Custom protocols supported in RTE Flow -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The ``RTE_FLOW_ITEM_TYPE_RAW`` can be used to parse the below custom protocols. - -* ``vlan_exdsa`` and ``exdsa`` can be parsed at L2 level. -* ``NGIO`` can be parsed at L3 level. - -For ``vlan_exdsa`` and ``exdsa``, the port has to be configured with the -respective switch header. - -For example:: - - -a 0002:02:00.0,switch_header="vlan_exdsa" - -The below fields of ``struct rte_flow_item_raw`` shall be used to specify the -pattern. - -- ``relative`` Selects the layer at which parsing is done. - - - 0 for ``exdsa`` and ``vlan_exdsa``. - - - 1 for ``NGIO``. - -- ``offset`` The offset in the header where the pattern should be matched. -- ``length`` Length of the pattern. -- ``pattern`` Pattern as a byte string. 
- -Example usage in testpmd:: - - ./dpdk-testpmd -c 3 -w 0002:02:00.0,switch_header=exdsa -- -i \ - --rx-offloads=0x00080000 --rxq 8 --txq 8 - testpmd> flow create 0 ingress pattern eth / raw relative is 0 pattern \ - spec ab pattern mask ab offset is 4 / end actions queue index 1 / end diff --git a/doc/guides/nics/octeontx_ep.rst b/doc/guides/nics/octeontx_ep.rst index b512ccfdab..2ec8a034b5 100644 --- a/doc/guides/nics/octeontx_ep.rst +++ b/doc/guides/nics/octeontx_ep.rst @@ -5,7 +5,7 @@ OCTEON TX EP Poll Mode driver ============================= The OCTEON TX EP ETHDEV PMD (**librte_pmd_octeontx_ep**) provides poll mode -ethdev driver support for the virtual functions (VF) of **Marvell OCTEON TX2** +ethdev driver support for the virtual functions (VF) of **Marvell OCTEON 9** and **Cavium OCTEON TX** families of adapters in SR-IOV context. More information can be found at `Marvell Official Website @@ -24,4 +24,4 @@ must be installed separately: allocates resources such as number of VFs, input/output queues for itself and the number of i/o queues each VF can use. -See :doc:`../platform/octeontx2` for SDP interface information which provides PCIe endpoint support for a remote host. +See :doc:`../platform/cnxk` for SDP interface information which provides PCIe endpoint support for a remote host. diff --git a/doc/guides/platform/cnxk.rst b/doc/guides/platform/cnxk.rst index 5213df3ccd..97e38c868c 100644 --- a/doc/guides/platform/cnxk.rst +++ b/doc/guides/platform/cnxk.rst @@ -13,6 +13,9 @@ More information about CN9K and CN10K SoC can be found at `Marvell Official Webs Supported OCTEON cnxk SoCs -------------------------- +- CN93xx +- CN96xx +- CN98xx - CN106xx - CNF105xx @@ -583,6 +586,15 @@ Cross Compilation Refer to :doc:`../linux_gsg/cross_build_dpdk_for_arm64` for generic arm64 details. +CN9K: + +.. code-block:: console + + meson build --cross-file config/arm/arm64_cn9k_linux_gcc + ninja -C build + +CN10K: + .. 
code-block:: console
+
+   meson build --cross-file config/arm/arm64_cn10k_linux_gcc
+   ninja -C build
diff --git a/doc/guides/platform/img/octeontx2_packet_flow_hw_accelerators.svg b/doc/guides/platform/img/octeontx2_packet_flow_hw_accelerators.svg
deleted file mode 100644
index ecd575947a..0000000000
--- a/doc/guides/platform/img/octeontx2_packet_flow_hw_accelerators.svg
+++ /dev/null
@@ -1,2804 +0,0 @@
[deleted SVG figure "OCTEON TX2 packet flow with HW accelerators": the drawing showed ethdev ports (NIX), ingress/egress classification (NPC), Rx/Tx queues, egress traffic manager (NIX), SSO scheduler supporting poll and/or event mode, ARMv8 cores, a HW loopback device, and the hardware/software libraries: mempool (NPA), timer (TIM), crypto (CPT), compress (ZIP), shared memory, SW ring, hash/LPM/ACL, mbuf, (de)fragmentation]
diff --git a/doc/guides/platform/img/octeontx2_resource_virtualization.svg b/doc/guides/platform/img/octeontx2_resource_virtualization.svg
deleted file mode 100644
index bf976b52af..0000000000
--- a/doc/guides/platform/img/octeontx2_resource_virtualization.svg
+++ /dev/null
@@
-1,2418 +0,0 @@
[deleted SVG figure "OCTEON TX2 resource virtualization and provisioning example": the drawing showed the RVU AF blocks (NIX AF, NPA AF, SSO AF, NPC AF, CPT AF), the Linux AF driver (octeontx2_af) on PF0, CGX-0/1/2 LMACs with the CGX-FW interface, AF-PF and PF-VF mailboxes, Linux netdev PF/VF drivers (octeontx2_pf/octeontx2_vf) and DPDK ethdev/eventdev/crypto PF and VF drivers with their NIX/NPA/TIM/SSO/CPT LFs, plus DPDK-APP1 (one ethdev over Linux PF) and DPDK-APP2 (two ethdevs (PF, VF), eventdev, timer adapter and cryptodev)]
diff --git a/doc/guides/platform/index.rst b/doc/guides/platform/index.rst
index 7614e1a368..2ff91a6018 100644
--- a/doc/guides/platform/index.rst
+++ b/doc/guides/platform/index.rst
@@ -15,4 +15,3 @@ The following are platform specific guides and setup information.
    dpaa
    dpaa2
    octeontx
-   octeontx2
diff --git a/doc/guides/platform/octeontx2.rst b/doc/guides/platform/octeontx2.rst
deleted file mode 100644
index 5ab43abbdd..0000000000
--- a/doc/guides/platform/octeontx2.rst
+++ /dev/null
@@ -1,520 +0,0 @@
-..
SPDX-License-Identifier: BSD-3-Clause - Copyright(c) 2019 Marvell International Ltd. - -Marvell OCTEON TX2 Platform Guide -================================= - -This document gives an overview of **Marvell OCTEON TX2** RVU H/W block, -packet flow and procedure to build DPDK on OCTEON TX2 platform. - -More information about OCTEON TX2 SoC can be found at `Marvell Official Website -`_. - -Supported OCTEON TX2 SoCs -------------------------- - -- CN98xx -- CN96xx -- CN93xx - -OCTEON TX2 Resource Virtualization Unit architecture ----------------------------------------------------- - -The :numref:`figure_octeontx2_resource_virtualization` diagram depicts the -RVU architecture and a resource provisioning example. - -.. _figure_octeontx2_resource_virtualization: - -.. figure:: img/octeontx2_resource_virtualization.* - - OCTEON TX2 Resource virtualization architecture and provisioning example - - -Resource Virtualization Unit (RVU) on Marvell's OCTEON TX2 SoC maps HW -resources belonging to the network, crypto and other functional blocks onto -PCI-compatible physical and virtual functions. - -Each functional block has multiple local functions (LFs) for -provisioning to different PCIe devices. RVU supports multiple PCIe SRIOV -physical functions (PFs) and virtual functions (VFs). - -The :numref:`table_octeontx2_rvu_dpdk_mapping` shows the various local -functions (LFs) provided by the RVU and its functional mapping to -DPDK subsystem. - -.. _table_octeontx2_rvu_dpdk_mapping: - -.. 
table:: RVU managed functional blocks and their mapping to DPDK subsystems
-
-   +---+-----+--------------------------------------------------------------+
-   | # | LF  | DPDK subsystem mapping                                       |
-   +===+=====+==============================================================+
-   | 1 | NIX | rte_ethdev, rte_tm, rte_event_eth_[rt]x_adapter, rte_security|
-   +---+-----+--------------------------------------------------------------+
-   | 2 | NPA | rte_mempool                                                  |
-   +---+-----+--------------------------------------------------------------+
-   | 3 | NPC | rte_flow                                                     |
-   +---+-----+--------------------------------------------------------------+
-   | 4 | CPT | rte_cryptodev, rte_event_crypto_adapter                      |
-   +---+-----+--------------------------------------------------------------+
-   | 5 | SSO | rte_eventdev                                                 |
-   +---+-----+--------------------------------------------------------------+
-   | 6 | TIM | rte_event_timer_adapter                                      |
-   +---+-----+--------------------------------------------------------------+
-   | 7 | LBK | rte_ethdev                                                   |
-   +---+-----+--------------------------------------------------------------+
-   | 8 | DPI | rte_rawdev                                                   |
-   +---+-----+--------------------------------------------------------------+
-   | 9 | SDP | rte_ethdev                                                   |
-   +---+-----+--------------------------------------------------------------+
-   | 10| REE | rte_regexdev                                                 |
-   +---+-----+--------------------------------------------------------------+
-
-PF0 is called the administrative function (AF) and has exclusive privileges
-to provision the RVU functional blocks' LFs to each of the PF/VFs.
-
-PF/VFs communicate with the AF via a shared memory region (mailbox). Upon
-receiving requests from a PF/VF, the AF does resource provisioning and other
-HW configuration.
-
-The AF is always attached to the host, but PF/VFs may be used by the host
-kernel itself, or attached to VMs or to userspace applications like DPDK.
-So the AF has to handle provisioning/configuration requests sent by any
-device from any domain.
-
-The AF driver does not receive or process any data.
-It is only a configuration driver used in the control path.
-
-The :numref:`figure_octeontx2_resource_virtualization` diagram also shows a
-resource provisioning example where,
-
-1. PFx and PFx-VF0 are bound to the Linux netdev driver.
-2. The PFx-VF1 ethdev driver is bound to the first DPDK application.
-3. The PFy ethdev driver, PFy-VF0 ethdev driver, PFz eventdev driver and
-   PFm-VF0 cryptodev driver are bound to the second DPDK application.
-
-LBK HW Access
--------------
-
-The Loopback HW Unit (LBK) receives packets from NIX-RX and sends packets back
-to NIX-TX. The loopback block has N channels and contains data buffering that
-is shared across all channels. The LBK HW unit is abstracted using the ethdev
-subsystem, where PF0's VFs are exposed as ethdev devices and odd-even pairs of
-VFs are tied together; that is, packets sent on an odd VF end up received on
-the even VF and vice versa. This enables a HW accelerated means of
-communication between two domains, where the even VF is bound to the first
-domain and the odd VF is bound to the second domain.
-
-Typical application usage models are,
-
-#. Communication between the Linux kernel and a DPDK application.
-#. Exception path to the Linux kernel from a DPDK application as a SW ``KNI``
-   replacement.
-#. Communication between two different DPDK applications.
-
-SDP interface
--------------
-
-The System DPI Packet Interface unit (SDP) provides PCIe endpoint support for
-a remote host to DMA packets into and out of the OCTEON TX2 SoC. The SDP
-interface is available only when the OCTEON TX2 SoC is connected in PCIe
-endpoint mode. It can be used to send/receive packets to/from the remote host
-machine using the input/output queue pairs exposed to it. The SDP interface
-receives input packets from the remote host via NIX-RX and sends packets to
-the remote host using NIX-TX. The remote host machine needs to use the
-corresponding driver (kernel/user mode) to communicate with the SDP interface
-on the OCTEON TX2 SoC.
SDP supports a
-single PCIe SRIOV physical function (PF) and multiple virtual functions (VFs).
-Users can bind the PF or a VF to use the SDP interface, and it will be
-enumerated as an ethdev port.
-
-The primary use case for SDP is to enable the smart NIC use case. Typical
-usage models are,
-
-#. Communication channel between the remote host and the OCTEON TX2 SoC over
-   PCIe.
-#. Transfer of packets received from the network interface to the remote host
-   over PCIe and vice-versa.
-
-OCTEON TX2 packet flow
-----------------------
-
-The :numref:`figure_octeontx2_packet_flow_hw_accelerators` diagram depicts
-the packet flow on the OCTEON TX2 SoC in conjunction with the use of various
-HW accelerators.
-
-.. _figure_octeontx2_packet_flow_hw_accelerators:
-
-.. figure:: img/octeontx2_packet_flow_hw_accelerators.*
-
-   OCTEON TX2 packet flow in conjunction with use of HW accelerators
-
-HW Offload Drivers
-------------------
-
-This section lists the dataplane H/W blocks available in the OCTEON TX2 SoC.
-
-#. **Ethdev Driver**
-   See :doc:`../nics/octeontx2` for NIX Ethdev driver information.
-
-#. **Mempool Driver**
-   See :doc:`../mempool/octeontx2` for NPA mempool driver information.
-
-#. **Event Device Driver**
-   See :doc:`../eventdevs/octeontx2` for SSO event device driver information.
-
-#. **Crypto Device Driver**
-   See :doc:`../cryptodevs/octeontx2` for CPT crypto device driver information.
-
-Procedure to Setup Platform
----------------------------
-
-There are three main prerequisites for setting up DPDK on an OCTEON TX2
-compatible board:
-
-1. **OCTEON TX2 Linux kernel driver**
-
-   The dependent kernel drivers can be obtained from the
-   `kernel.org `_.
-
-   Alternatively, the Marvell SDK also provides the required kernel drivers.
-
-   The Linux kernel should be configured with the following features enabled:
-
-..
code-block:: console
-
-   # 64K pages enabled for better performance
-   CONFIG_ARM64_64K_PAGES=y
-   CONFIG_ARM64_VA_BITS_48=y
-   # huge pages support enabled
-   CONFIG_HUGETLBFS=y
-   CONFIG_HUGETLB_PAGE=y
-   # VFIO enabled with TYPE1 IOMMU at minimum
-   CONFIG_VFIO_IOMMU_TYPE1=y
-   CONFIG_VFIO_VIRQFD=y
-   CONFIG_VFIO=y
-   CONFIG_VFIO_NOIOMMU=y
-   CONFIG_VFIO_PCI=y
-   CONFIG_VFIO_PCI_MMAP=y
-   # SMMUv3 driver
-   CONFIG_ARM_SMMU_V3=y
-   # ARMv8.1 LSE atomics
-   CONFIG_ARM64_LSE_ATOMICS=y
-   # OCTEONTX2 drivers
-   CONFIG_OCTEONTX2_MBOX=y
-   CONFIG_OCTEONTX2_AF=y
-   # Enable if netdev PF driver required
-   CONFIG_OCTEONTX2_PF=y
-   # Enable if netdev VF driver required
-   CONFIG_OCTEONTX2_VF=y
-   CONFIG_CRYPTO_DEV_OCTEONTX2_CPT=y
-   # Enable if OCTEONTX2 DMA PF driver required
-   CONFIG_OCTEONTX2_DPI_PF=n
-
-2. **ARM64 Linux Tool Chain**
-
-   For example, the *aarch64* Linaro Toolchain, which can be obtained from
-   `here `_.
-
-   Alternatively, the Marvell SDK also provides a GNU GCC toolchain, which is
-   optimized for the OCTEON TX2 CPU.
-
-3. **Root filesystem**
-
-   Any *aarch64* supporting root filesystem may be used. For example,
-   Ubuntu 15.10 (Wily) or 16.04 LTS (Xenial) userland, which can be obtained
-   from ``_.
-
-   Alternatively, the Marvell SDK provides a buildroot based root filesystem.
-   The SDK includes all the above prerequisites necessary to bring up the
-   OCTEON TX2 board.
-
-- Follow the DPDK :doc:`../linux_gsg/index` to set up the basic DPDK
-  environment.
-
-
-Debugging Options
------------------
-
-.. _table_octeontx2_common_debug_options:
-
-..
table:: OCTEON TX2 common debug options
-
-   +---+------------+-------------------------------------------------------+
-   | # | Component  | EAL log command                                       |
-   +===+============+=======================================================+
-   | 1 | Common     | --log-level='pmd\.octeontx2\.base,8'                  |
-   +---+------------+-------------------------------------------------------+
-   | 2 | Mailbox    | --log-level='pmd\.octeontx2\.mbox,8'                  |
-   +---+------------+-------------------------------------------------------+
-
-Debugfs support
-~~~~~~~~~~~~~~~
-
-The **OCTEON TX2 Linux kernel driver** provides support to dump RVU block
-context or stats using debugfs.
-
-Enable ``debugfs`` by:
-
-1. Compile the kernel with debugfs enabled, i.e. ``CONFIG_DEBUG_FS=y``.
-2. Boot OCTEON TX2 with a debugfs supported kernel.
-3. Verify ``debugfs`` is mounted by default (``mount | grep -i debugfs``) or
-   mount it manually:
-
-.. code-block:: console
-
-   # mount -t debugfs none /sys/kernel/debug
-
-Currently ``debugfs`` supports the following RVU blocks: NIX, NPA, NPC, NDC,
-SSO and CGX.
-
-The file structure under ``/sys/kernel/debug`` is as follows:
-
-..
code-block:: console
-
-   octeontx2/
-   |-- cgx
-   |   |-- cgx0
-   |   |   '-- lmac0
-   |   |       '-- stats
-   |   |-- cgx1
-   |   |   |-- lmac0
-   |   |   |   '-- stats
-   |   |   '-- lmac1
-   |   |       '-- stats
-   |   '-- cgx2
-   |       '-- lmac0
-   |           '-- stats
-   |-- cpt
-   |   |-- cpt_engines_info
-   |   |-- cpt_engines_sts
-   |   |-- cpt_err_info
-   |   |-- cpt_lfs_info
-   |   '-- cpt_pc
-   |-- nix
-   |   |-- cq_ctx
-   |   |-- ndc_rx_cache
-   |   |-- ndc_rx_hits_miss
-   |   |-- ndc_tx_cache
-   |   |-- ndc_tx_hits_miss
-   |   |-- qsize
-   |   |-- rq_ctx
-   |   |-- sq_ctx
-   |   '-- tx_stall_hwissue
-   |-- npa
-   |   |-- aura_ctx
-   |   |-- ndc_cache
-   |   |-- ndc_hits_miss
-   |   |-- pool_ctx
-   |   '-- qsize
-   |-- npc
-   |   |-- mcam_info
-   |   '-- rx_miss_act_stats
-   |-- rsrc_alloc
-   '-- sso
-       |-- hws
-       |   '-- sso_hws_info
-       '-- hwgrp
-           |-- sso_hwgrp_aq_thresh
-           |-- sso_hwgrp_iaq_walk
-           |-- sso_hwgrp_pc
-           |-- sso_hwgrp_free_list_walk
-           |-- sso_hwgrp_ient_walk
-           '-- sso_hwgrp_taq_walk
-
-RVU block LF allocation:
-
-.. code-block:: console
-
-   cat /sys/kernel/debug/octeontx2/rsrc_alloc
-
-   pcifunc    NPA    NIX    SSO GROUP    SSOWS    TIM    CPT
-   PF1        0      0
-   PF4               1
-   PF13                     0, 1         0, 1     0
-
-CGX example usage:
-
-..
code-block:: console
-
-   cat /sys/kernel/debug/octeontx2/cgx/cgx2/lmac0/stats
-
-   =======Link Status======
-   Link is UP 40000 Mbps
-   =======RX_STATS======
-   Received packets: 0
-   Octets of received packets: 0
-   Received PAUSE packets: 0
-   Received PAUSE and control packets: 0
-   Filtered DMAC0 (NIX-bound) packets: 0
-   Filtered DMAC0 (NIX-bound) octets: 0
-   Packets dropped due to RX FIFO full: 0
-   Octets dropped due to RX FIFO full: 0
-   Error packets: 0
-   Filtered DMAC1 (NCSI-bound) packets: 0
-   Filtered DMAC1 (NCSI-bound) octets: 0
-   NCSI-bound packets dropped: 0
-   NCSI-bound octets dropped: 0
-   =======TX_STATS======
-   Packets dropped due to excessive collisions: 0
-   Packets dropped due to excessive deferral: 0
-   Multiple collisions before successful transmission: 0
-   Single collisions before successful transmission: 0
-   Total octets sent on the interface: 0
-   Total frames sent on the interface: 0
-   Packets sent with an octet count < 64: 0
-   Packets sent with an octet count == 64: 0
-   Packets sent with an octet count of 65-127: 0
-   Packets sent with an octet count of 128-255: 0
-   Packets sent with an octet count of 256-511: 0
-   Packets sent with an octet count of 512-1023: 0
-   Packets sent with an octet count of 1024-1518: 0
-   Packets sent with an octet count of > 1518: 0
-   Packets sent to a broadcast DMAC: 0
-   Packets sent to the multicast DMAC: 0
-   Transmit underflow and were truncated: 0
-   Control/PAUSE packets sent: 0
-
-CPT example usage:
-
-.. code-block:: console
-
-   cat /sys/kernel/debug/octeontx2/cpt/cpt_pc
-
-   CPT instruction requests   0
-   CPT instruction latency    0
-   CPT NCB read requests      0
-   CPT NCB read latency       0
-   CPT read requests caused by UC fills   0
-   CPT active cycles pc       1395642
-   CPT clock count pc         5579867595493
-
-NIX example usage:
-
-..
code-block:: console - - Usage: echo [cq number/all] > /sys/kernel/debug/octeontx2/nix/cq_ctx - cat /sys/kernel/debug/octeontx2/nix/cq_ctx - echo 0 0 > /sys/kernel/debug/octeontx2/nix/cq_ctx - cat /sys/kernel/debug/octeontx2/nix/cq_ctx - - =====cq_ctx for nixlf:0 and qidx:0 is===== - W0: base 158ef1a00 - - W1: wrptr 0 - W1: avg_con 0 - W1: cint_idx 0 - W1: cq_err 0 - W1: qint_idx 0 - W1: bpid 0 - W1: bp_ena 0 - - W2: update_time 31043 - W2:avg_level 255 - W2: head 0 - W2:tail 0 - - W3: cq_err_int_ena 5 - W3:cq_err_int 0 - W3: qsize 4 - W3:caching 1 - W3: substream 0x000 - W3: ena 1 - W3: drop_ena 1 - W3: drop 64 - W3: bp 0 - -NPA example usage: - -.. code-block:: console - - Usage: echo [pool number/all] > /sys/kernel/debug/octeontx2/npa/pool_ctx - cat /sys/kernel/debug/octeontx2/npa/pool_ctx - echo 0 0 > /sys/kernel/debug/octeontx2/npa/pool_ctx - cat /sys/kernel/debug/octeontx2/npa/pool_ctx - - ======POOL : 0======= - W0: Stack base 1375bff00 - W1: ena 1 - W1: nat_align 1 - W1: stack_caching 1 - W1: stack_way_mask 0 - W1: buf_offset 1 - W1: buf_size 19 - W2: stack_max_pages 24315 - W2: stack_pages 24314 - W3: op_pc 267456 - W4: stack_offset 2 - W4: shift 5 - W4: avg_level 255 - W4: avg_con 0 - W4: fc_ena 0 - W4: fc_stype 0 - W4: fc_hyst_bits 0 - W4: fc_up_crossing 0 - W4: update_time 62993 - W5: fc_addr 0 - W6: ptr_start 1593adf00 - W7: ptr_end 180000000 - W8: err_int 0 - W8: err_int_ena 7 - W8: thresh_int 0 - W8: thresh_int_ena 0 - W8: thresh_up 0 - W8: thresh_qint_idx 0 - W8: err_qint_idx 0 - -NPC example usage: - -.. code-block:: console - - cat /sys/kernel/debug/octeontx2/npc/mcam_info - - NPC MCAM info: - RX keywidth : 224bits - TX keywidth : 224bits - - MCAM entries : 2048 - Reserved : 158 - Available : 1890 - - MCAM counters : 512 - Reserved : 1 - Available : 511 - -SSO example usage: - -.. 
code-block:: console - - Usage: echo [/all] > /sys/kernel/debug/octeontx2/sso/hws/sso_hws_info - echo 0 > /sys/kernel/debug/octeontx2/sso/hws/sso_hws_info - - ================================================== - SSOW HWS[0] Arbitration State 0x0 - SSOW HWS[0] Guest Machine Control 0x0 - SSOW HWS[0] SET[0] Group Mask[0] 0xffffffffffffffff - SSOW HWS[0] SET[0] Group Mask[1] 0xffffffffffffffff - SSOW HWS[0] SET[0] Group Mask[2] 0xffffffffffffffff - SSOW HWS[0] SET[0] Group Mask[3] 0xffffffffffffffff - SSOW HWS[0] SET[1] Group Mask[0] 0xffffffffffffffff - SSOW HWS[0] SET[1] Group Mask[1] 0xffffffffffffffff - SSOW HWS[0] SET[1] Group Mask[2] 0xffffffffffffffff - SSOW HWS[0] SET[1] Group Mask[3] 0xffffffffffffffff - ================================================== - -Compile DPDK ------------- - -DPDK may be compiled either natively on OCTEON TX2 platform or cross-compiled on -an x86 based platform. - -Native Compilation -~~~~~~~~~~~~~~~~~~ - -.. code-block:: console - - meson build - ninja -C build - -Cross Compilation -~~~~~~~~~~~~~~~~~ - -Refer to :doc:`../linux_gsg/cross_build_dpdk_for_arm64` for generic arm64 details. - -.. code-block:: console - - meson build --cross-file config/arm/arm64_octeontx2_linux_gcc - ninja -C build - -.. note:: - - By default, meson cross compilation uses ``aarch64-linux-gnu-gcc`` toolchain, - if Marvell toolchain is available then it can be used by overriding the - c, cpp, ar, strip ``binaries`` attributes to respective Marvell - toolchain binaries in ``config/arm/arm64_octeontx2_linux_gcc`` file. 
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst index 5581822d10..4e5b23c53d 100644 --- a/doc/guides/rel_notes/deprecation.rst +++ b/doc/guides/rel_notes/deprecation.rst @@ -125,20 +125,3 @@ Deprecation Notices applications should be updated to use the ``dmadev`` library instead, with the underlying HW-functionality being provided by the ``ioat`` or ``idxd`` dma drivers - -* drivers/octeontx2: remove octeontx2 drivers - - In the view of enabling unified driver for ``octeontx2(cn9k)``/``octeontx3(cn10k)``, - removing ``drivers/octeontx2`` drivers and replace with ``drivers/cnxk/`` which - supports both ``octeontx2(cn9k)`` and ``octeontx3(cn10k)`` SoCs. - This deprecation notice is to do following actions in DPDK v22.02 version. - - #. Replace ``drivers/common/octeontx2/`` with ``drivers/common/cnxk/`` - #. Replace ``drivers/mempool/octeontx2/`` with ``drivers/mempool/cnxk/`` - #. Replace ``drivers/net/octeontx2/`` with ``drivers/net/cnxk/`` - #. Replace ``drivers/event/octeontx2/`` with ``drivers/event/cnxk/`` - #. Replace ``drivers/crypto/octeontx2/`` with ``drivers/crypto/cnxk/`` - #. Rename ``drivers/regex/octeontx2/`` as ``drivers/regex/cn9k/`` - #. Rename ``config/arm/arm64_octeontx2_linux_gcc`` as ``config/arm/arm64_cn9k_linux_gcc`` - - Last two actions are to align naming convention as cnxk scheme. diff --git a/doc/guides/rel_notes/release_19_08.rst b/doc/guides/rel_notes/release_19_08.rst index 1a0e6111d7..31fcebdf95 100644 --- a/doc/guides/rel_notes/release_19_08.rst +++ b/doc/guides/rel_notes/release_19_08.rst @@ -152,11 +152,11 @@ New Features ``eventdev Tx adapter``, ``eventdev Timer adapter`` and ``rawdev DMA`` drivers for various HW co-processors available in ``OCTEON TX2`` SoC. 
- See :doc:`../platform/octeontx2` and driver information: + See ``platform/octeontx2`` and driver information: - * :doc:`../nics/octeontx2` - * :doc:`../mempool/octeontx2` - * :doc:`../eventdevs/octeontx2` + * ``nics/octeontx2`` + * ``mempool/octeontx2`` + * ``eventdevs/octeontx2`` * ``rawdevs/octeontx2_dma`` * **Introduced the Intel NTB PMD.** diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst index 302b3e5f37..79f3475ae6 100644 --- a/doc/guides/rel_notes/release_19_11.rst +++ b/doc/guides/rel_notes/release_19_11.rst @@ -192,7 +192,7 @@ New Features Added a new PMD for hardware crypto offload block on ``OCTEON TX2`` SoC. - See :doc:`../cryptodevs/octeontx2` for more details + See ``cryptodevs/octeontx2`` for more details * **Updated NXP crypto PMDs for PDCP support.** diff --git a/doc/guides/tools/cryptoperf.rst b/doc/guides/tools/cryptoperf.rst index ce93483291..d3d5ebe4dc 100644 --- a/doc/guides/tools/cryptoperf.rst +++ b/doc/guides/tools/cryptoperf.rst @@ -157,7 +157,6 @@ The following are the application command-line options: crypto_mvsam crypto_null crypto_octeontx - crypto_octeontx2 crypto_openssl crypto_qat crypto_scheduler diff --git a/drivers/common/meson.build b/drivers/common/meson.build index 4acbad60b1..ea261dd70a 100644 --- a/drivers/common/meson.build +++ b/drivers/common/meson.build @@ -8,5 +8,4 @@ drivers = [ 'iavf', 'mvep', 'octeontx', - 'octeontx2', ] diff --git a/drivers/common/octeontx2/hw/otx2_nix.h b/drivers/common/octeontx2/hw/otx2_nix.h deleted file mode 100644 index e3b68505b7..0000000000 --- a/drivers/common/octeontx2/hw/otx2_nix.h +++ /dev/null @@ -1,1391 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2019 Marvell International Ltd. 
- */ - -#ifndef __OTX2_NIX_HW_H__ -#define __OTX2_NIX_HW_H__ - -/* Register offsets */ - -#define NIX_AF_CFG (0x0ull) -#define NIX_AF_STATUS (0x10ull) -#define NIX_AF_NDC_CFG (0x18ull) -#define NIX_AF_CONST (0x20ull) -#define NIX_AF_CONST1 (0x28ull) -#define NIX_AF_CONST2 (0x30ull) -#define NIX_AF_CONST3 (0x38ull) -#define NIX_AF_SQ_CONST (0x40ull) -#define NIX_AF_CQ_CONST (0x48ull) -#define NIX_AF_RQ_CONST (0x50ull) -#define NIX_AF_PSE_CONST (0x60ull) -#define NIX_AF_TL1_CONST (0x70ull) -#define NIX_AF_TL2_CONST (0x78ull) -#define NIX_AF_TL3_CONST (0x80ull) -#define NIX_AF_TL4_CONST (0x88ull) -#define NIX_AF_MDQ_CONST (0x90ull) -#define NIX_AF_MC_MIRROR_CONST (0x98ull) -#define NIX_AF_LSO_CFG (0xa8ull) -#define NIX_AF_BLK_RST (0xb0ull) -#define NIX_AF_TX_TSTMP_CFG (0xc0ull) -#define NIX_AF_RX_CFG (0xd0ull) -#define NIX_AF_AVG_DELAY (0xe0ull) -#define NIX_AF_CINT_DELAY (0xf0ull) -#define NIX_AF_RX_MCAST_BASE (0x100ull) -#define NIX_AF_RX_MCAST_CFG (0x110ull) -#define NIX_AF_RX_MCAST_BUF_BASE (0x120ull) -#define NIX_AF_RX_MCAST_BUF_CFG (0x130ull) -#define NIX_AF_RX_MIRROR_BUF_BASE (0x140ull) -#define NIX_AF_RX_MIRROR_BUF_CFG (0x148ull) -#define NIX_AF_LF_RST (0x150ull) -#define NIX_AF_GEN_INT (0x160ull) -#define NIX_AF_GEN_INT_W1S (0x168ull) -#define NIX_AF_GEN_INT_ENA_W1S (0x170ull) -#define NIX_AF_GEN_INT_ENA_W1C (0x178ull) -#define NIX_AF_ERR_INT (0x180ull) -#define NIX_AF_ERR_INT_W1S (0x188ull) -#define NIX_AF_ERR_INT_ENA_W1S (0x190ull) -#define NIX_AF_ERR_INT_ENA_W1C (0x198ull) -#define NIX_AF_RAS (0x1a0ull) -#define NIX_AF_RAS_W1S (0x1a8ull) -#define NIX_AF_RAS_ENA_W1S (0x1b0ull) -#define NIX_AF_RAS_ENA_W1C (0x1b8ull) -#define NIX_AF_RVU_INT (0x1c0ull) -#define NIX_AF_RVU_INT_W1S (0x1c8ull) -#define NIX_AF_RVU_INT_ENA_W1S (0x1d0ull) -#define NIX_AF_RVU_INT_ENA_W1C (0x1d8ull) -#define NIX_AF_TCP_TIMER (0x1e0ull) -#define NIX_AF_RX_DEF_OL2 (0x200ull) -#define NIX_AF_RX_DEF_OIP4 (0x210ull) -#define NIX_AF_RX_DEF_IIP4 (0x220ull) -#define NIX_AF_RX_DEF_OIP6 
(0x230ull) -#define NIX_AF_RX_DEF_IIP6 (0x240ull) -#define NIX_AF_RX_DEF_OTCP (0x250ull) -#define NIX_AF_RX_DEF_ITCP (0x260ull) -#define NIX_AF_RX_DEF_OUDP (0x270ull) -#define NIX_AF_RX_DEF_IUDP (0x280ull) -#define NIX_AF_RX_DEF_OSCTP (0x290ull) -#define NIX_AF_RX_DEF_ISCTP (0x2a0ull) -#define NIX_AF_RX_DEF_IPSECX(a) (0x2b0ull | (uint64_t)(a) << 3) -#define NIX_AF_RX_IPSEC_GEN_CFG (0x300ull) -#define NIX_AF_RX_CPTX_INST_QSEL(a) (0x320ull | (uint64_t)(a) << 3) -#define NIX_AF_RX_CPTX_CREDIT(a) (0x360ull | (uint64_t)(a) << 3) -#define NIX_AF_NDC_RX_SYNC (0x3e0ull) -#define NIX_AF_NDC_TX_SYNC (0x3f0ull) -#define NIX_AF_AQ_CFG (0x400ull) -#define NIX_AF_AQ_BASE (0x410ull) -#define NIX_AF_AQ_STATUS (0x420ull) -#define NIX_AF_AQ_DOOR (0x430ull) -#define NIX_AF_AQ_DONE_WAIT (0x440ull) -#define NIX_AF_AQ_DONE (0x450ull) -#define NIX_AF_AQ_DONE_ACK (0x460ull) -#define NIX_AF_AQ_DONE_TIMER (0x470ull) -#define NIX_AF_AQ_DONE_ENA_W1S (0x490ull) -#define NIX_AF_AQ_DONE_ENA_W1C (0x498ull) -#define NIX_AF_RX_LINKX_CFG(a) (0x540ull | (uint64_t)(a) << 16) -#define NIX_AF_RX_SW_SYNC (0x550ull) -#define NIX_AF_RX_LINKX_WRR_CFG(a) (0x560ull | (uint64_t)(a) << 16) -#define NIX_AF_EXPR_TX_FIFO_STATUS (0x640ull) -#define NIX_AF_NORM_TX_FIFO_STATUS (0x648ull) -#define NIX_AF_SDP_TX_FIFO_STATUS (0x650ull) -#define NIX_AF_TX_NPC_CAPTURE_CONFIG (0x660ull) -#define NIX_AF_TX_NPC_CAPTURE_INFO (0x668ull) -#define NIX_AF_TX_NPC_CAPTURE_RESPX(a) (0x680ull | (uint64_t)(a) << 3) -#define NIX_AF_SEB_ACTIVE_CYCLES_PCX(a) (0x6c0ull | (uint64_t)(a) << 3) -#define NIX_AF_SMQX_CFG(a) (0x700ull | (uint64_t)(a) << 16) -#define NIX_AF_SMQX_HEAD(a) (0x710ull | (uint64_t)(a) << 16) -#define NIX_AF_SMQX_TAIL(a) (0x720ull | (uint64_t)(a) << 16) -#define NIX_AF_SMQX_STATUS(a) (0x730ull | (uint64_t)(a) << 16) -#define NIX_AF_SMQX_NXT_HEAD(a) (0x740ull | (uint64_t)(a) << 16) -#define NIX_AF_SQM_ACTIVE_CYCLES_PC (0x770ull) -#define NIX_AF_PSE_CHANNEL_LEVEL (0x800ull) -#define NIX_AF_PSE_SHAPER_CFG (0x810ull) 
-#define NIX_AF_PSE_ACTIVE_CYCLES_PC (0x8c0ull) -#define NIX_AF_MARK_FORMATX_CTL(a) (0x900ull | (uint64_t)(a) << 18) -#define NIX_AF_TX_LINKX_NORM_CREDIT(a) (0xa00ull | (uint64_t)(a) << 16) -#define NIX_AF_TX_LINKX_EXPR_CREDIT(a) (0xa10ull | (uint64_t)(a) << 16) -#define NIX_AF_TX_LINKX_SW_XOFF(a) (0xa20ull | (uint64_t)(a) << 16) -#define NIX_AF_TX_LINKX_HW_XOFF(a) (0xa30ull | (uint64_t)(a) << 16) -#define NIX_AF_SDP_LINK_CREDIT (0xa40ull) -#define NIX_AF_SDP_SW_XOFFX(a) (0xa60ull | (uint64_t)(a) << 3) -#define NIX_AF_SDP_HW_XOFFX(a) (0xac0ull | (uint64_t)(a) << 3) -#define NIX_AF_TL4X_BP_STATUS(a) (0xb00ull | (uint64_t)(a) << 16) -#define NIX_AF_TL4X_SDP_LINK_CFG(a) (0xb10ull | (uint64_t)(a) << 16) -#define NIX_AF_TL1X_SCHEDULE(a) (0xc00ull | (uint64_t)(a) << 16) -#define NIX_AF_TL1X_SHAPE(a) (0xc10ull | (uint64_t)(a) << 16) -#define NIX_AF_TL1X_CIR(a) (0xc20ull | (uint64_t)(a) << 16) -#define NIX_AF_TL1X_SHAPE_STATE(a) (0xc50ull | (uint64_t)(a) << 16) -#define NIX_AF_TL1X_SW_XOFF(a) (0xc70ull | (uint64_t)(a) << 16) -#define NIX_AF_TL1X_TOPOLOGY(a) (0xc80ull | (uint64_t)(a) << 16) -#define NIX_AF_TL1X_MD_DEBUG0(a) (0xcc0ull | (uint64_t)(a) << 16) -#define NIX_AF_TL1X_MD_DEBUG1(a) (0xcc8ull | (uint64_t)(a) << 16) -#define NIX_AF_TL1X_MD_DEBUG2(a) (0xcd0ull | (uint64_t)(a) << 16) -#define NIX_AF_TL1X_MD_DEBUG3(a) (0xcd8ull | (uint64_t)(a) << 16) -#define NIX_AF_TL1X_DROPPED_PACKETS(a) (0xd20ull | (uint64_t)(a) << 16) -#define NIX_AF_TL1X_DROPPED_BYTES(a) (0xd30ull | (uint64_t)(a) << 16) -#define NIX_AF_TL1X_RED_PACKETS(a) (0xd40ull | (uint64_t)(a) << 16) -#define NIX_AF_TL1X_RED_BYTES(a) (0xd50ull | (uint64_t)(a) << 16) -#define NIX_AF_TL1X_YELLOW_PACKETS(a) (0xd60ull | (uint64_t)(a) << 16) -#define NIX_AF_TL1X_YELLOW_BYTES(a) (0xd70ull | (uint64_t)(a) << 16) -#define NIX_AF_TL1X_GREEN_PACKETS(a) (0xd80ull | (uint64_t)(a) << 16) -#define NIX_AF_TL1X_GREEN_BYTES(a) (0xd90ull | (uint64_t)(a) << 16) -#define NIX_AF_TL2X_SCHEDULE(a) (0xe00ull | (uint64_t)(a) << 16) 
-#define NIX_AF_TL2X_SHAPE(a)	(0xe10ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_CIR(a)	(0xe20ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_PIR(a)	(0xe30ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_SCHED_STATE(a)	(0xe40ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_SHAPE_STATE(a)	(0xe50ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_SW_XOFF(a)	(0xe70ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_TOPOLOGY(a)	(0xe80ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_PARENT(a)	(0xe88ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_MD_DEBUG0(a)	(0xec0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_MD_DEBUG1(a)	(0xec8ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_MD_DEBUG2(a)	(0xed0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_MD_DEBUG3(a)	(0xed8ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_SCHEDULE(a) \
-	(0x1000ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_SHAPE(a) \
-	(0x1010ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_CIR(a) \
-	(0x1020ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_PIR(a) \
-	(0x1030ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_SCHED_STATE(a) \
-	(0x1040ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_SHAPE_STATE(a) \
-	(0x1050ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_SW_XOFF(a) \
-	(0x1070ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_TOPOLOGY(a) \
-	(0x1080ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_PARENT(a) \
-	(0x1088ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_MD_DEBUG0(a) \
-	(0x10c0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_MD_DEBUG1(a) \
-	(0x10c8ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_MD_DEBUG2(a) \
-	(0x10d0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_MD_DEBUG3(a) \
-	(0x10d8ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_SCHEDULE(a) \
-	(0x1200ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_SHAPE(a) \
-	(0x1210ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_CIR(a) \
-	(0x1220ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_PIR(a) \
-	(0x1230ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_SCHED_STATE(a) \
-	(0x1240ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_SHAPE_STATE(a) \
-	(0x1250ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_SW_XOFF(a) \
-	(0x1270ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_TOPOLOGY(a) \
-	(0x1280ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_PARENT(a) \
-	(0x1288ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_MD_DEBUG0(a) \
-	(0x12c0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_MD_DEBUG1(a) \
-	(0x12c8ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_MD_DEBUG2(a) \
-	(0x12d0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_MD_DEBUG3(a) \
-	(0x12d8ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_SCHEDULE(a) \
-	(0x1400ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_SHAPE(a) \
-	(0x1410ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_CIR(a) \
-	(0x1420ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_PIR(a) \
-	(0x1430ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_SCHED_STATE(a) \
-	(0x1440ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_SHAPE_STATE(a) \
-	(0x1450ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_SW_XOFF(a) \
-	(0x1470ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_PARENT(a) \
-	(0x1480ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_MD_DEBUG(a) \
-	(0x14c0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3_TL2X_CFG(a) \
-	(0x1600ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3_TL2X_BP_STATUS(a) \
-	(0x1610ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3_TL2X_LINKX_CFG(a, b) \
-	(0x1700ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3)
-#define NIX_AF_RX_FLOW_KEY_ALGX_FIELDX(a, b) \
-	(0x1800ull | (uint64_t)(a) << 18 | (uint64_t)(b) << 3)
-#define NIX_AF_TX_MCASTX(a) \
-	(0x1900ull | (uint64_t)(a) << 15)
-#define NIX_AF_TX_VTAG_DEFX_CTL(a) \
-	(0x1a00ull | (uint64_t)(a) << 16)
-#define NIX_AF_TX_VTAG_DEFX_DATA(a) \
-	(0x1a10ull | (uint64_t)(a) << 16)
-#define NIX_AF_RX_BPIDX_STATUS(a) \
-	(0x1a20ull | (uint64_t)(a) << 17)
-#define NIX_AF_RX_CHANX_CFG(a) \
-	(0x1a30ull | (uint64_t)(a) << 15)
-#define NIX_AF_CINT_TIMERX(a) \
-	(0x1a40ull | (uint64_t)(a) << 18)
-#define NIX_AF_LSO_FORMATX_FIELDX(a, b) \
-	(0x1b00ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3)
-#define NIX_AF_LFX_CFG(a) \
-	(0x4000ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_SQS_CFG(a) \
-	(0x4020ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_TX_CFG2(a) \
-	(0x4028ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_SQS_BASE(a) \
-	(0x4030ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RQS_CFG(a) \
-	(0x4040ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RQS_BASE(a) \
-	(0x4050ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_CQS_CFG(a) \
-	(0x4060ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_CQS_BASE(a) \
-	(0x4070ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_TX_CFG(a) \
-	(0x4080ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_TX_PARSE_CFG(a) \
-	(0x4090ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RX_CFG(a) \
-	(0x40a0ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RSS_CFG(a) \
-	(0x40c0ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RSS_BASE(a) \
-	(0x40d0ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_QINTS_CFG(a) \
-	(0x4100ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_QINTS_BASE(a) \
-	(0x4110ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_CINTS_CFG(a) \
-	(0x4120ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_CINTS_BASE(a) \
-	(0x4130ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RX_IPSEC_CFG0(a) \
-	(0x4140ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RX_IPSEC_CFG1(a) \
-	(0x4148ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RX_IPSEC_DYNO_CFG(a) \
-	(0x4150ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RX_IPSEC_DYNO_BASE(a) \
-	(0x4158ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RX_IPSEC_SA_BASE(a) \
-	(0x4170ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_TX_STATUS(a) \
-	(0x4180ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RX_VTAG_TYPEX(a, b) \
-	(0x4200ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3)
-#define NIX_AF_LFX_LOCKX(a, b) \
-	(0x4300ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3)
-#define NIX_AF_LFX_TX_STATX(a, b) \
-	(0x4400ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3)
-#define NIX_AF_LFX_RX_STATX(a, b) \
-	(0x4500ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3)
-#define NIX_AF_LFX_RSS_GRPX(a, b) \
-	(0x4600ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3)
-#define NIX_AF_RX_NPC_MC_RCV	(0x4700ull)
-#define NIX_AF_RX_NPC_MC_DROP	(0x4710ull)
-#define NIX_AF_RX_NPC_MIRROR_RCV	(0x4720ull)
-#define NIX_AF_RX_NPC_MIRROR_DROP	(0x4730ull)
-#define NIX_AF_RX_ACTIVE_CYCLES_PCX(a) \
-	(0x4800ull | (uint64_t)(a) << 16)
-#define NIX_PRIV_AF_INT_CFG	(0x8000000ull)
-#define NIX_PRIV_LFX_CFG(a) \
-	(0x8000010ull | (uint64_t)(a) << 8)
-#define NIX_PRIV_LFX_INT_CFG(a) \
-	(0x8000020ull | (uint64_t)(a) << 8)
-#define NIX_AF_RVU_LF_CFG_DEBUG	(0x8000030ull)
-
-#define NIX_LF_RX_SECRETX(a)	(0x0ull | (uint64_t)(a) << 3)
-#define NIX_LF_CFG	(0x100ull)
-#define NIX_LF_GINT	(0x200ull)
-#define NIX_LF_GINT_W1S	(0x208ull)
-#define NIX_LF_GINT_ENA_W1C	(0x210ull)
-#define NIX_LF_GINT_ENA_W1S	(0x218ull)
-#define NIX_LF_ERR_INT	(0x220ull)
-#define NIX_LF_ERR_INT_W1S	(0x228ull)
-#define NIX_LF_ERR_INT_ENA_W1C	(0x230ull)
-#define NIX_LF_ERR_INT_ENA_W1S	(0x238ull)
-#define NIX_LF_RAS	(0x240ull)
-#define NIX_LF_RAS_W1S	(0x248ull)
-#define NIX_LF_RAS_ENA_W1C	(0x250ull)
-#define NIX_LF_RAS_ENA_W1S	(0x258ull)
-#define NIX_LF_SQ_OP_ERR_DBG	(0x260ull)
-#define NIX_LF_MNQ_ERR_DBG	(0x270ull)
-#define NIX_LF_SEND_ERR_DBG	(0x280ull)
-#define NIX_LF_TX_STATX(a)	(0x300ull | (uint64_t)(a) << 3)
-#define NIX_LF_RX_STATX(a)	(0x400ull | (uint64_t)(a) << 3)
-#define NIX_LF_OP_SENDX(a)	(0x800ull | (uint64_t)(a) << 3)
-#define NIX_LF_RQ_OP_INT	(0x900ull)
-#define NIX_LF_RQ_OP_OCTS	(0x910ull)
-#define NIX_LF_RQ_OP_PKTS	(0x920ull)
-#define NIX_LF_RQ_OP_DROP_OCTS	(0x930ull)
-#define NIX_LF_RQ_OP_DROP_PKTS	(0x940ull)
-#define NIX_LF_RQ_OP_RE_PKTS	(0x950ull)
-#define NIX_LF_OP_IPSEC_DYNO_CNT	(0x980ull)
-#define NIX_LF_SQ_OP_INT	(0xa00ull)
-#define NIX_LF_SQ_OP_OCTS	(0xa10ull)
-#define NIX_LF_SQ_OP_PKTS	(0xa20ull)
-#define NIX_LF_SQ_OP_STATUS	(0xa30ull)
-#define NIX_LF_SQ_OP_DROP_OCTS	(0xa40ull)
-#define NIX_LF_SQ_OP_DROP_PKTS	(0xa50ull)
-#define NIX_LF_CQ_OP_INT	(0xb00ull)
-#define NIX_LF_CQ_OP_DOOR	(0xb30ull)
-#define NIX_LF_CQ_OP_STATUS	(0xb40ull)
-#define NIX_LF_QINTX_CNT(a)	(0xc00ull | (uint64_t)(a) << 12)
-#define NIX_LF_QINTX_INT(a)	(0xc10ull | (uint64_t)(a) << 12)
-#define NIX_LF_QINTX_ENA_W1S(a)	(0xc20ull | (uint64_t)(a) << 12)
-#define NIX_LF_QINTX_ENA_W1C(a)	(0xc30ull | (uint64_t)(a) << 12)
-#define NIX_LF_CINTX_CNT(a)	(0xd00ull | (uint64_t)(a) << 12)
-#define NIX_LF_CINTX_WAIT(a)	(0xd10ull | (uint64_t)(a) << 12)
-#define NIX_LF_CINTX_INT(a)	(0xd20ull | (uint64_t)(a) << 12)
-#define NIX_LF_CINTX_INT_W1S(a)	(0xd30ull | (uint64_t)(a) << 12)
-#define NIX_LF_CINTX_ENA_W1S(a)	(0xd40ull | (uint64_t)(a) << 12)
-#define NIX_LF_CINTX_ENA_W1C(a)	(0xd50ull | (uint64_t)(a) << 12)
-
-
-/* Enum offsets */
-
-#define NIX_TX_VTAGOP_NOP	(0x0ull)
-#define NIX_TX_VTAGOP_INSERT	(0x1ull)
-#define NIX_TX_VTAGOP_REPLACE	(0x2ull)
-
-#define NIX_TX_ACTIONOP_DROP	(0x0ull)
-#define NIX_TX_ACTIONOP_UCAST_DEFAULT	(0x1ull)
-#define NIX_TX_ACTIONOP_UCAST_CHAN	(0x2ull)
-#define NIX_TX_ACTIONOP_MCAST	(0x3ull)
-#define NIX_TX_ACTIONOP_DROP_VIOL	(0x5ull)
-
-#define NIX_INTF_RX	(0x0ull)
-#define NIX_INTF_TX	(0x1ull)
-
-#define NIX_TXLAYER_OL3	(0x0ull)
-#define NIX_TXLAYER_OL4	(0x1ull)
-#define NIX_TXLAYER_IL3	(0x2ull)
-#define NIX_TXLAYER_IL4	(0x3ull)
-
-#define NIX_SUBDC_NOP	(0x0ull)
-#define NIX_SUBDC_EXT	(0x1ull)
-#define NIX_SUBDC_CRC	(0x2ull)
-#define NIX_SUBDC_IMM	(0x3ull)
-#define NIX_SUBDC_SG	(0x4ull)
-#define NIX_SUBDC_MEM	(0x5ull)
-#define NIX_SUBDC_JUMP	(0x6ull)
-#define NIX_SUBDC_WORK	(0x7ull)
-#define NIX_SUBDC_SOD	(0xfull)
-
-#define NIX_STYPE_STF	(0x0ull)
-#define NIX_STYPE_STT	(0x1ull)
-#define NIX_STYPE_STP	(0x2ull)
-
-#define NIX_STAT_LF_TX_TX_UCAST	(0x0ull)
-#define NIX_STAT_LF_TX_TX_BCAST	(0x1ull)
-#define NIX_STAT_LF_TX_TX_MCAST	(0x2ull)
-#define NIX_STAT_LF_TX_TX_DROP	(0x3ull)
-#define NIX_STAT_LF_TX_TX_OCTS	(0x4ull)
-
-#define NIX_STAT_LF_RX_RX_OCTS	(0x0ull)
-#define NIX_STAT_LF_RX_RX_UCAST	(0x1ull)
-#define NIX_STAT_LF_RX_RX_BCAST	(0x2ull)
-#define NIX_STAT_LF_RX_RX_MCAST	(0x3ull)
-#define NIX_STAT_LF_RX_RX_DROP	(0x4ull)
-#define NIX_STAT_LF_RX_RX_DROP_OCTS	(0x5ull)
-#define NIX_STAT_LF_RX_RX_FCS	(0x6ull)
-#define NIX_STAT_LF_RX_RX_ERR	(0x7ull)
-#define NIX_STAT_LF_RX_RX_DRP_BCAST	(0x8ull)
-#define NIX_STAT_LF_RX_RX_DRP_MCAST	(0x9ull)
-#define NIX_STAT_LF_RX_RX_DRP_L3BCAST	(0xaull)
-#define NIX_STAT_LF_RX_RX_DRP_L3MCAST	(0xbull)
-
-#define NIX_SQOPERR_SQ_OOR	(0x0ull)
-#define NIX_SQOPERR_SQ_CTX_FAULT	(0x1ull)
-#define NIX_SQOPERR_SQ_CTX_POISON	(0x2ull)
-#define NIX_SQOPERR_SQ_DISABLED	(0x3ull)
-#define NIX_SQOPERR_MAX_SQE_SIZE_ERR	(0x4ull)
-#define NIX_SQOPERR_SQE_OFLOW	(0x5ull)
-#define NIX_SQOPERR_SQB_NULL	(0x6ull)
-#define NIX_SQOPERR_SQB_FAULT	(0x7ull)
-
-#define NIX_XQESZ_W64	(0x0ull)
-#define NIX_XQESZ_W16	(0x1ull)
-
-#define NIX_VTAGSIZE_T4	(0x0ull)
-#define NIX_VTAGSIZE_T8	(0x1ull)
-
-#define NIX_RX_ACTIONOP_DROP	(0x0ull)
-#define NIX_RX_ACTIONOP_UCAST	(0x1ull)
-#define NIX_RX_ACTIONOP_UCAST_IPSEC	(0x2ull)
-#define NIX_RX_ACTIONOP_MCAST	(0x3ull)
-#define NIX_RX_ACTIONOP_RSS	(0x4ull)
-#define NIX_RX_ACTIONOP_PF_FUNC_DROP	(0x5ull)
-#define NIX_RX_ACTIONOP_MIRROR	(0x6ull)
-
-#define NIX_RX_VTAGACTION_VTAG0_RELPTR	(0x0ull)
-#define NIX_RX_VTAGACTION_VTAG1_RELPTR	(0x4ull)
-#define NIX_RX_VTAGACTION_VTAG_VALID	(0x1ull)
-#define NIX_TX_VTAGACTION_VTAG0_RELPTR \
-	(sizeof(struct nix_inst_hdr_s) + 2 * 6)
-#define NIX_TX_VTAGACTION_VTAG1_RELPTR \
-	(sizeof(struct nix_inst_hdr_s) + 2 * 6 + 4)
-#define NIX_RQINT_DROP	(0x0ull)
-#define NIX_RQINT_RED	(0x1ull)
-#define NIX_RQINT_R2	(0x2ull)
-#define NIX_RQINT_R3	(0x3ull)
-#define NIX_RQINT_R4	(0x4ull)
-#define NIX_RQINT_R5	(0x5ull)
-#define NIX_RQINT_R6	(0x6ull)
-#define NIX_RQINT_R7	(0x7ull)
-
-#define NIX_MAXSQESZ_W16	(0x0ull)
-#define NIX_MAXSQESZ_W8	(0x1ull)
-
-#define NIX_LSOALG_NOP	(0x0ull)
-#define NIX_LSOALG_ADD_SEGNUM	(0x1ull)
-#define NIX_LSOALG_ADD_PAYLEN	(0x2ull)
-#define NIX_LSOALG_ADD_OFFSET	(0x3ull)
-#define NIX_LSOALG_TCP_FLAGS	(0x4ull)
-
-#define NIX_MNQERR_SQ_CTX_FAULT	(0x0ull)
-#define NIX_MNQERR_SQ_CTX_POISON	(0x1ull)
-#define NIX_MNQERR_SQB_FAULT	(0x2ull)
-#define NIX_MNQERR_SQB_POISON	(0x3ull)
-#define NIX_MNQERR_TOTAL_ERR	(0x4ull)
-#define NIX_MNQERR_LSO_ERR	(0x5ull)
-#define NIX_MNQERR_CQ_QUERY_ERR	(0x6ull)
-#define NIX_MNQERR_MAX_SQE_SIZE_ERR	(0x7ull)
-#define NIX_MNQERR_MAXLEN_ERR	(0x8ull)
-#define NIX_MNQERR_SQE_SIZEM1_ZERO	(0x9ull)
-
-#define NIX_MDTYPE_RSVD	(0x0ull)
-#define NIX_MDTYPE_FLUSH	(0x1ull)
-#define NIX_MDTYPE_PMD	(0x2ull)
-
-#define NIX_NDC_TX_PORT_LMT	(0x0ull)
-#define NIX_NDC_TX_PORT_ENQ	(0x1ull)
-#define NIX_NDC_TX_PORT_MNQ	(0x2ull)
-#define NIX_NDC_TX_PORT_DEQ	(0x3ull)
-#define NIX_NDC_TX_PORT_DMA	(0x4ull)
-#define NIX_NDC_TX_PORT_XQE	(0x5ull)
-
-#define NIX_NDC_RX_PORT_AQ	(0x0ull)
-#define NIX_NDC_RX_PORT_CQ	(0x1ull)
-#define NIX_NDC_RX_PORT_CINT	(0x2ull)
-#define NIX_NDC_RX_PORT_MC	(0x3ull)
-#define NIX_NDC_RX_PORT_PKT	(0x4ull)
-#define NIX_NDC_RX_PORT_RQ	(0x5ull)
-
-#define NIX_RE_OPCODE_RE_NONE	(0x0ull)
-#define NIX_RE_OPCODE_RE_PARTIAL	(0x1ull)
-#define NIX_RE_OPCODE_RE_JABBER	(0x2ull)
-#define NIX_RE_OPCODE_RE_FCS	(0x7ull)
-#define NIX_RE_OPCODE_RE_FCS_RCV	(0x8ull)
-#define NIX_RE_OPCODE_RE_TERMINATE	(0x9ull)
-#define NIX_RE_OPCODE_RE_RX_CTL	(0xbull)
-#define NIX_RE_OPCODE_RE_SKIP	(0xcull)
-#define NIX_RE_OPCODE_RE_DMAPKT	(0xfull)
-#define NIX_RE_OPCODE_UNDERSIZE	(0x10ull)
-#define NIX_RE_OPCODE_OVERSIZE	(0x11ull)
-#define NIX_RE_OPCODE_OL2_LENMISM	(0x12ull)
-
-#define NIX_REDALG_STD	(0x0ull)
-#define NIX_REDALG_SEND	(0x1ull)
-#define NIX_REDALG_STALL	(0x2ull)
-#define NIX_REDALG_DISCARD	(0x3ull)
-
-#define NIX_RX_MCOP_RQ	(0x0ull)
-#define NIX_RX_MCOP_RSS	(0x1ull)
-
-#define NIX_RX_PERRCODE_NPC_RESULT_ERR	(0x2ull)
-#define NIX_RX_PERRCODE_MCAST_FAULT	(0x4ull)
-#define NIX_RX_PERRCODE_MIRROR_FAULT	(0x5ull)
-#define NIX_RX_PERRCODE_MCAST_POISON	(0x6ull)
-#define NIX_RX_PERRCODE_MIRROR_POISON	(0x7ull)
-#define NIX_RX_PERRCODE_DATA_FAULT	(0x8ull)
-#define NIX_RX_PERRCODE_MEMOUT	(0x9ull)
-#define NIX_RX_PERRCODE_BUFS_OFLOW	(0xaull)
-#define NIX_RX_PERRCODE_OL3_LEN	(0x10ull)
-#define NIX_RX_PERRCODE_OL4_LEN	(0x11ull)
-#define NIX_RX_PERRCODE_OL4_CHK	(0x12ull)
-#define NIX_RX_PERRCODE_OL4_PORT	(0x13ull)
-#define NIX_RX_PERRCODE_IL3_LEN	(0x20ull)
-#define NIX_RX_PERRCODE_IL4_LEN	(0x21ull)
-#define NIX_RX_PERRCODE_IL4_CHK	(0x22ull)
-#define NIX_RX_PERRCODE_IL4_PORT	(0x23ull)
-
-#define NIX_SENDCRCALG_CRC32	(0x0ull)
-#define NIX_SENDCRCALG_CRC32C	(0x1ull)
-#define NIX_SENDCRCALG_ONES16	(0x2ull)
-
-#define NIX_SENDL3TYPE_NONE	(0x0ull)
-#define NIX_SENDL3TYPE_IP4	(0x2ull)
-#define NIX_SENDL3TYPE_IP4_CKSUM	(0x3ull)
-#define NIX_SENDL3TYPE_IP6	(0x4ull)
-
-#define NIX_SENDL4TYPE_NONE	(0x0ull)
-#define NIX_SENDL4TYPE_TCP_CKSUM	(0x1ull)
-#define NIX_SENDL4TYPE_SCTP_CKSUM	(0x2ull)
-#define NIX_SENDL4TYPE_UDP_CKSUM	(0x3ull)
-
-#define NIX_SENDLDTYPE_LDD	(0x0ull)
-#define NIX_SENDLDTYPE_LDT	(0x1ull)
-#define NIX_SENDLDTYPE_LDWB	(0x2ull)
-
-#define NIX_SENDMEMALG_SET	(0x0ull)
-#define NIX_SENDMEMALG_SETTSTMP	(0x1ull)
-#define NIX_SENDMEMALG_SETRSLT	(0x2ull)
-#define NIX_SENDMEMALG_ADD	(0x8ull)
-#define NIX_SENDMEMALG_SUB	(0x9ull)
-#define NIX_SENDMEMALG_ADDLEN	(0xaull)
-#define NIX_SENDMEMALG_SUBLEN	(0xbull)
-#define NIX_SENDMEMALG_ADDMBUF	(0xcull)
-#define NIX_SENDMEMALG_SUBMBUF	(0xdull)
-
-#define NIX_SENDMEMDSZ_B64	(0x0ull)
-#define NIX_SENDMEMDSZ_B32	(0x1ull)
-#define NIX_SENDMEMDSZ_B16	(0x2ull)
-#define NIX_SENDMEMDSZ_B8	(0x3ull)
-
-#define NIX_SEND_STATUS_GOOD	(0x0ull)
-#define NIX_SEND_STATUS_SQ_CTX_FAULT	(0x1ull)
-#define NIX_SEND_STATUS_SQ_CTX_POISON	(0x2ull)
-#define NIX_SEND_STATUS_SQB_FAULT	(0x3ull)
-#define NIX_SEND_STATUS_SQB_POISON	(0x4ull)
-#define NIX_SEND_STATUS_SEND_HDR_ERR	(0x5ull)
-#define NIX_SEND_STATUS_SEND_EXT_ERR	(0x6ull)
-#define NIX_SEND_STATUS_JUMP_FAULT	(0x7ull)
-#define NIX_SEND_STATUS_JUMP_POISON	(0x8ull)
-#define NIX_SEND_STATUS_SEND_CRC_ERR	(0x10ull)
-#define NIX_SEND_STATUS_SEND_IMM_ERR	(0x11ull)
-#define NIX_SEND_STATUS_SEND_SG_ERR	(0x12ull)
-#define NIX_SEND_STATUS_SEND_MEM_ERR	(0x13ull)
-#define NIX_SEND_STATUS_INVALID_SUBDC	(0x14ull)
-#define NIX_SEND_STATUS_SUBDC_ORDER_ERR	(0x15ull)
-#define NIX_SEND_STATUS_DATA_FAULT	(0x16ull)
-#define NIX_SEND_STATUS_DATA_POISON	(0x17ull)
-#define NIX_SEND_STATUS_NPC_DROP_ACTION	(0x20ull)
-#define NIX_SEND_STATUS_LOCK_VIOL	(0x21ull)
-#define NIX_SEND_STATUS_NPC_UCAST_CHAN_ERR	(0x22ull)
-#define NIX_SEND_STATUS_NPC_MCAST_CHAN_ERR	(0x23ull)
-#define NIX_SEND_STATUS_NPC_MCAST_ABORT	(0x24ull)
-#define NIX_SEND_STATUS_NPC_VTAG_PTR_ERR	(0x25ull)
-#define NIX_SEND_STATUS_NPC_VTAG_SIZE_ERR	(0x26ull)
-#define NIX_SEND_STATUS_SEND_MEM_FAULT	(0x27ull)
-
-#define NIX_SQINT_LMT_ERR	(0x0ull)
-#define NIX_SQINT_MNQ_ERR	(0x1ull)
-#define NIX_SQINT_SEND_ERR	(0x2ull)
-#define NIX_SQINT_SQB_ALLOC_FAIL	(0x3ull)
-
-#define NIX_XQE_TYPE_INVALID	(0x0ull)
-#define NIX_XQE_TYPE_RX	(0x1ull)
-#define NIX_XQE_TYPE_RX_IPSECS	(0x2ull)
-#define NIX_XQE_TYPE_RX_IPSECH	(0x3ull)
-#define NIX_XQE_TYPE_RX_IPSECD	(0x4ull)
-#define NIX_XQE_TYPE_SEND	(0x8ull)
-
-#define NIX_AQ_COMP_NOTDONE	(0x0ull)
-#define NIX_AQ_COMP_GOOD	(0x1ull)
-#define NIX_AQ_COMP_SWERR	(0x2ull)
-#define NIX_AQ_COMP_CTX_POISON	(0x3ull)
-#define NIX_AQ_COMP_CTX_FAULT	(0x4ull)
-#define NIX_AQ_COMP_LOCKERR	(0x5ull)
-#define NIX_AQ_COMP_SQB_ALLOC_FAIL	(0x6ull)
-
-#define NIX_AF_INT_VEC_RVU	(0x0ull)
-#define NIX_AF_INT_VEC_GEN	(0x1ull)
-#define NIX_AF_INT_VEC_AQ_DONE	(0x2ull)
-#define NIX_AF_INT_VEC_AF_ERR	(0x3ull)
-#define NIX_AF_INT_VEC_POISON	(0x4ull)
-
-#define NIX_AQINT_GEN_RX_MCAST_DROP	(0x0ull)
-#define NIX_AQINT_GEN_RX_MIRROR_DROP	(0x1ull)
-#define NIX_AQINT_GEN_TL1_DRAIN	(0x3ull)
-#define NIX_AQINT_GEN_SMQ_FLUSH_DONE	(0x4ull)
-
-#define NIX_AQ_INSTOP_NOP	(0x0ull)
-#define NIX_AQ_INSTOP_INIT	(0x1ull)
-#define NIX_AQ_INSTOP_WRITE	(0x2ull)
-#define NIX_AQ_INSTOP_READ	(0x3ull)
-#define NIX_AQ_INSTOP_LOCK	(0x4ull)
-#define NIX_AQ_INSTOP_UNLOCK	(0x5ull)
-
-#define NIX_AQ_CTYPE_RQ	(0x0ull)
-#define NIX_AQ_CTYPE_SQ	(0x1ull)
-#define NIX_AQ_CTYPE_CQ	(0x2ull)
-#define NIX_AQ_CTYPE_MCE	(0x3ull)
-#define NIX_AQ_CTYPE_RSS	(0x4ull)
-#define NIX_AQ_CTYPE_DYNO	(0x5ull)
-
-#define NIX_COLORRESULT_GREEN	(0x0ull)
-#define NIX_COLORRESULT_YELLOW	(0x1ull)
-#define NIX_COLORRESULT_RED_SEND	(0x2ull)
-#define NIX_COLORRESULT_RED_DROP	(0x3ull)
-
-#define NIX_CHAN_LBKX_CHX(a, b) \
-	(0x000ull | ((uint64_t)(a) << 8) | (uint64_t)(b))
-#define NIX_CHAN_R4	(0x400ull)
-#define NIX_CHAN_R5	(0x500ull)
-#define NIX_CHAN_R6	(0x600ull)
-#define NIX_CHAN_SDP_CH_END	(0x7ffull)
-#define NIX_CHAN_SDP_CH_START	(0x700ull)
-#define NIX_CHAN_CGXX_LMACX_CHX(a, b, c) \
-	(0x800ull | ((uint64_t)(a) << 8) | ((uint64_t)(b) << 4) | \
-	(uint64_t)(c))
-
-#define NIX_INTF_SDP	(0x4ull)
-#define NIX_INTF_CGX0	(0x0ull)
-#define NIX_INTF_CGX1	(0x1ull)
-#define NIX_INTF_CGX2	(0x2ull)
-#define NIX_INTF_LBK0	(0x3ull)
-
-#define NIX_CQERRINT_DOOR_ERR	(0x0ull)
-#define NIX_CQERRINT_WR_FULL	(0x1ull)
-#define NIX_CQERRINT_CQE_FAULT	(0x2ull)
-
-#define NIX_LF_INT_VEC_GINT	(0x80ull)
-#define NIX_LF_INT_VEC_ERR_INT	(0x81ull)
-#define NIX_LF_INT_VEC_POISON	(0x82ull)
-#define NIX_LF_INT_VEC_QINT_END	(0x3full)
-#define NIX_LF_INT_VEC_QINT_START	(0x0ull)
-#define NIX_LF_INT_VEC_CINT_END	(0x7full)
-#define NIX_LF_INT_VEC_CINT_START	(0x40ull)
-
-/* Enums definitions */
-
-/* Structures definitions */
-
-/* NIX admin queue instruction structure */
-struct nix_aq_inst_s {
-	uint64_t op : 4;
-	uint64_t ctype : 4;
-	uint64_t lf : 7;
-	uint64_t rsvd_23_15 : 9;
-	uint64_t cindex : 20;
-	uint64_t rsvd_62_44 : 19;
-	uint64_t doneint : 1;
-	uint64_t res_addr : 64; /* W1 */
-};
-
-/* NIX admin queue result structure */
-struct nix_aq_res_s {
-	uint64_t op : 4;
-	uint64_t ctype : 4;
-	uint64_t compcode : 8;
-	uint64_t doneint : 1;
-	uint64_t rsvd_63_17 : 47;
-	uint64_t rsvd_127_64 : 64; /* W1 */
-};
-
-/* NIX completion interrupt context hardware structure */
-struct nix_cint_hw_s {
-	uint64_t ecount : 32;
-	uint64_t qcount : 16;
-	uint64_t intr : 1;
-	uint64_t ena : 1;
-	uint64_t timer_idx : 8;
-	uint64_t rsvd_63_58 : 6;
-	uint64_t ecount_wait : 32;
-	uint64_t qcount_wait : 16;
-	uint64_t time_wait : 8;
-	uint64_t rsvd_127_120 : 8;
-};
-
-/* NIX completion queue entry header structure */
-struct nix_cqe_hdr_s {
-	uint64_t tag : 32;
-	uint64_t q : 20;
-	uint64_t rsvd_57_52 : 6;
-	uint64_t node : 2;
-	uint64_t cqe_type : 4;
-};
-
-/* NIX completion queue context structure */
-struct nix_cq_ctx_s {
-	uint64_t base : 64;/* W0 */
-	uint64_t rsvd_67_64 : 4;
-	uint64_t bp_ena : 1;
-	uint64_t rsvd_71_69 : 3;
-	uint64_t bpid : 9;
-	uint64_t rsvd_83_81 : 3;
-	uint64_t qint_idx : 7;
-	uint64_t cq_err : 1;
-	uint64_t cint_idx : 7;
-	uint64_t avg_con : 9;
-	uint64_t wrptr : 20;
-	uint64_t tail : 20;
-	uint64_t head : 20;
-	uint64_t avg_level : 8;
-	uint64_t update_time : 16;
-	uint64_t bp : 8;
-	uint64_t drop : 8;
-	uint64_t drop_ena : 1;
-	uint64_t ena : 1;
-	uint64_t rsvd_211_210 : 2;
-	uint64_t substream : 20;
-	uint64_t caching : 1;
-	uint64_t rsvd_235_233 : 3;
-	uint64_t qsize : 4;
-	uint64_t cq_err_int : 8;
-	uint64_t cq_err_int_ena : 8;
-};
-
-/* NIX instruction header structure */
-struct nix_inst_hdr_s {
-	uint64_t pf_func : 16;
-	uint64_t sq : 20;
-	uint64_t rsvd_63_36 : 28;
-};
-
-/* NIX i/o virtual address structure */
-struct nix_iova_s {
-	uint64_t addr : 64; /* W0 */
-};
-
-/* NIX IPsec dynamic ordering counter structure */
-struct nix_ipsec_dyno_s {
-	uint32_t count : 32; /* W0 */
-};
-
-/* NIX memory value structure */
-struct nix_mem_result_s {
-	uint64_t v : 1;
-	uint64_t color : 2;
-	uint64_t rsvd_63_3 : 61;
-};
-
-/* NIX statistics operation write data structure */
-struct nix_op_q_wdata_s {
-	uint64_t rsvd_31_0 : 32;
-	uint64_t q : 20;
-	uint64_t rsvd_63_52 : 12;
-};
-
-/* NIX queue interrupt context hardware structure */
-struct nix_qint_hw_s {
-	uint32_t count : 22;
-	uint32_t rsvd_30_22 : 9;
-	uint32_t ena : 1;
-};
-
-/* NIX receive queue context structure */
-struct nix_rq_ctx_hw_s {
-	uint64_t ena : 1;
-	uint64_t sso_ena : 1;
-	uint64_t ipsech_ena : 1;
-	uint64_t ena_wqwd : 1;
-	uint64_t cq : 20;
-	uint64_t substream : 20;
-	uint64_t wqe_aura : 20;
-	uint64_t spb_aura : 20;
-	uint64_t lpb_aura : 20;
-	uint64_t sso_grp : 10;
-	uint64_t sso_tt : 2;
-	uint64_t pb_caching : 2;
-	uint64_t wqe_caching : 1;
-	uint64_t xqe_drop_ena : 1;
-	uint64_t spb_drop_ena : 1;
-	uint64_t lpb_drop_ena : 1;
-	uint64_t wqe_skip : 2;
-	uint64_t rsvd_127_124 : 4;
-	uint64_t rsvd_139_128 : 12;
-	uint64_t spb_sizem1 : 6;
-	uint64_t rsvd_150_146 : 5;
-	uint64_t spb_ena : 1;
-	uint64_t lpb_sizem1 : 12;
-	uint64_t first_skip : 7;
-	uint64_t rsvd_171 : 1;
-	uint64_t later_skip : 6;
-	uint64_t xqe_imm_size : 6;
-	uint64_t rsvd_189_184 : 6;
-	uint64_t xqe_imm_copy : 1;
-	uint64_t xqe_hdr_split : 1;
-	uint64_t xqe_drop : 8;
-	uint64_t xqe_pass : 8;
-	uint64_t wqe_pool_drop : 8;
-	uint64_t wqe_pool_pass : 8;
-	uint64_t spb_aura_drop : 8;
-	uint64_t spb_aura_pass : 8;
-	uint64_t spb_pool_drop : 8;
-	uint64_t spb_pool_pass : 8;
-	uint64_t lpb_aura_drop : 8;
-	uint64_t lpb_aura_pass : 8;
-	uint64_t lpb_pool_drop : 8;
-	uint64_t lpb_pool_pass : 8;
-	uint64_t rsvd_319_288 : 32;
-	uint64_t ltag : 24;
-	uint64_t good_utag : 8;
-	uint64_t bad_utag : 8;
-	uint64_t flow_tagw : 6;
-	uint64_t rsvd_383_366 : 18;
-	uint64_t octs : 48;
-	uint64_t rsvd_447_432 : 16;
-	uint64_t pkts : 48;
-	uint64_t rsvd_511_496 : 16;
-	uint64_t drop_octs : 48;
-	uint64_t rsvd_575_560 : 16;
-	uint64_t drop_pkts : 48;
-	uint64_t rsvd_639_624 : 16;
-	uint64_t re_pkts : 48;
-	uint64_t rsvd_702_688 : 15;
-	uint64_t ena_copy : 1;
-	uint64_t rsvd_739_704 : 36;
-	uint64_t rq_int : 8;
-	uint64_t rq_int_ena : 8;
-	uint64_t qint_idx : 7;
-	uint64_t rsvd_767_763 : 5;
-	uint64_t rsvd_831_768 : 64;/* W12 */
-	uint64_t rsvd_895_832 : 64;/* W13 */
-	uint64_t rsvd_959_896 : 64;/* W14 */
-	uint64_t rsvd_1023_960 : 64;/* W15 */
-};
-
-/* NIX receive queue context structure */
-struct nix_rq_ctx_s {
-	uint64_t ena : 1;
-	uint64_t sso_ena : 1;
-	uint64_t ipsech_ena : 1;
-	uint64_t ena_wqwd : 1;
-	uint64_t cq : 20;
-	uint64_t substream : 20;
-	uint64_t wqe_aura : 20;
-	uint64_t spb_aura : 20;
-	uint64_t lpb_aura : 20;
-	uint64_t sso_grp : 10;
-	uint64_t sso_tt : 2;
-	uint64_t pb_caching : 2;
-	uint64_t wqe_caching : 1;
-	uint64_t xqe_drop_ena : 1;
-	uint64_t spb_drop_ena : 1;
-	uint64_t lpb_drop_ena : 1;
-	uint64_t rsvd_127_122 : 6;
-	uint64_t rsvd_139_128 : 12;
-	uint64_t spb_sizem1 : 6;
-	uint64_t wqe_skip : 2;
-	uint64_t rsvd_150_148 : 3;
-	uint64_t spb_ena : 1;
-	uint64_t lpb_sizem1 : 12;
-	uint64_t first_skip : 7;
-	uint64_t rsvd_171 : 1;
-	uint64_t later_skip : 6;
-	uint64_t xqe_imm_size : 6;
-	uint64_t rsvd_189_184 : 6;
-	uint64_t xqe_imm_copy : 1;
-	uint64_t xqe_hdr_split : 1;
-	uint64_t xqe_drop : 8;
-	uint64_t xqe_pass : 8;
-	uint64_t wqe_pool_drop : 8;
-	uint64_t wqe_pool_pass : 8;
-	uint64_t spb_aura_drop : 8;
-	uint64_t spb_aura_pass : 8;
-	uint64_t spb_pool_drop : 8;
-	uint64_t spb_pool_pass : 8;
-	uint64_t lpb_aura_drop : 8;
-	uint64_t lpb_aura_pass : 8;
-	uint64_t lpb_pool_drop : 8;
-	uint64_t lpb_pool_pass : 8;
-	uint64_t rsvd_291_288 : 4;
-	uint64_t rq_int : 8;
-	uint64_t rq_int_ena : 8;
-	uint64_t qint_idx : 7;
-	uint64_t rsvd_319_315 : 5;
-	uint64_t ltag : 24;
-	uint64_t good_utag : 8;
-	uint64_t bad_utag : 8;
-	uint64_t flow_tagw : 6;
-	uint64_t rsvd_383_366 : 18;
-	uint64_t octs : 48;
-	uint64_t rsvd_447_432 : 16;
-	uint64_t pkts : 48;
-	uint64_t rsvd_511_496 : 16;
-	uint64_t drop_octs : 48;
-	uint64_t rsvd_575_560 : 16;
-	uint64_t drop_pkts : 48;
-	uint64_t rsvd_639_624 : 16;
-	uint64_t re_pkts : 48;
-	uint64_t rsvd_703_688 : 16;
-	uint64_t rsvd_767_704 : 64;/* W11 */
-	uint64_t rsvd_831_768 : 64;/* W12 */
-	uint64_t rsvd_895_832 : 64;/* W13 */
-	uint64_t rsvd_959_896 : 64;/* W14 */
-	uint64_t rsvd_1023_960 : 64;/* W15 */
-};
-
-/* NIX receive side scaling entry structure */
-struct nix_rsse_s {
-	uint32_t rq : 20;
-	uint32_t rsvd_31_20 : 12;
-};
-
-/* NIX receive action structure */
-struct nix_rx_action_s {
-	uint64_t op : 4;
-	uint64_t pf_func : 16;
-	uint64_t index : 20;
-	uint64_t match_id : 16;
-	uint64_t flow_key_alg : 5;
-	uint64_t rsvd_63_61 : 3;
-};
-
-/* NIX receive immediate sub descriptor structure */
-struct nix_rx_imm_s {
-	uint64_t size : 16;
-	uint64_t apad : 3;
-	uint64_t rsvd_59_19 : 41;
-	uint64_t subdc : 4;
-};
-
-/* NIX receive multicast/mirror entry structure */
-struct nix_rx_mce_s {
-	uint64_t op : 2;
-	uint64_t rsvd_2 : 1;
-	uint64_t eol : 1;
-	uint64_t index : 20;
-	uint64_t rsvd_31_24 : 8;
-	uint64_t pf_func : 16;
-	uint64_t next : 16;
-};
-
-/* NIX receive parse structure */
-struct nix_rx_parse_s {
-	uint64_t chan : 12;
-	uint64_t desc_sizem1 : 5;
-	uint64_t imm_copy : 1;
-	uint64_t express : 1;
-	uint64_t wqwd : 1;
-	uint64_t errlev : 4;
-	uint64_t errcode : 8;
-	uint64_t latype : 4;
-	uint64_t lbtype : 4;
-	uint64_t lctype : 4;
-	uint64_t ldtype : 4;
-	uint64_t letype : 4;
-	uint64_t lftype : 4;
-	uint64_t lgtype : 4;
-	uint64_t lhtype : 4;
-	uint64_t pkt_lenm1 : 16;
-	uint64_t l2m : 1;
-	uint64_t l2b : 1;
-	uint64_t l3m : 1;
-	uint64_t l3b : 1;
-	uint64_t vtag0_valid : 1;
-	uint64_t vtag0_gone : 1;
-	uint64_t vtag1_valid : 1;
-	uint64_t vtag1_gone : 1;
-	uint64_t pkind : 6;
-	uint64_t rsvd_95_94 : 2;
-	uint64_t vtag0_tci : 16;
-	uint64_t vtag1_tci : 16;
-	uint64_t laflags : 8;
-	uint64_t lbflags : 8;
-	uint64_t lcflags : 8;
-	uint64_t ldflags : 8;
-	uint64_t leflags : 8;
-	uint64_t lfflags : 8;
-	uint64_t lgflags : 8;
-	uint64_t lhflags : 8;
-	uint64_t eoh_ptr : 8;
-	uint64_t wqe_aura : 20;
-	uint64_t pb_aura : 20;
-	uint64_t match_id : 16;
-	uint64_t laptr : 8;
-	uint64_t lbptr : 8;
-	uint64_t lcptr : 8;
-	uint64_t ldptr : 8;
-	uint64_t leptr : 8;
-	uint64_t lfptr : 8;
-	uint64_t lgptr : 8;
-	uint64_t lhptr : 8;
-	uint64_t vtag0_ptr : 8;
-	uint64_t vtag1_ptr : 8;
-	uint64_t flow_key_alg : 5;
-	uint64_t rsvd_383_341 : 43;
-	uint64_t rsvd_447_384 : 64; /* W6 */
-};
-
-/* NIX receive scatter/gather sub descriptor structure */
-struct nix_rx_sg_s {
-	uint64_t seg1_size : 16;
-	uint64_t seg2_size : 16;
-	uint64_t seg3_size : 16;
-	uint64_t segs : 2;
-	uint64_t rsvd_59_50 : 10;
-	uint64_t subdc : 4;
-};
-
-/* NIX receive vtag action structure */
-struct nix_rx_vtag_action_s {
-	uint64_t vtag0_relptr : 8;
-	uint64_t vtag0_lid : 3;
-	uint64_t rsvd_11 : 1;
-	uint64_t vtag0_type : 3;
-	uint64_t vtag0_valid : 1;
-	uint64_t rsvd_31_16 : 16;
-	uint64_t vtag1_relptr : 8;
-	uint64_t vtag1_lid : 3;
-	uint64_t rsvd_43 : 1;
-	uint64_t vtag1_type : 3;
-	uint64_t vtag1_valid : 1;
-	uint64_t rsvd_63_48 : 16;
-};
-
-/* NIX send completion structure */
-struct nix_send_comp_s {
-	uint64_t status : 8;
-	uint64_t sqe_id : 16;
-	uint64_t rsvd_63_24 : 40;
-};
-
-/* NIX send CRC sub descriptor structure */
-struct nix_send_crc_s {
-	uint64_t size : 16;
-	uint64_t start : 16;
-	uint64_t insert : 16;
-	uint64_t rsvd_57_48 : 10;
-	uint64_t alg : 2;
-	uint64_t subdc : 4;
-	uint64_t iv : 32;
-	uint64_t rsvd_127_96 : 32;
-};
-
-/* NIX send extended header sub descriptor structure */
-RTE_STD_C11
-union nix_send_ext_w0_u {
-	uint64_t u;
-	struct {
-		uint64_t lso_mps : 14;
-		uint64_t lso : 1;
-		uint64_t tstmp : 1;
-		uint64_t lso_sb : 8;
-		uint64_t lso_format : 5;
-		uint64_t rsvd_31_29 : 3;
-		uint64_t shp_chg : 9;
-		uint64_t shp_dis : 1;
-		uint64_t shp_ra : 2;
-		uint64_t markptr : 8;
-		uint64_t markform : 7;
-		uint64_t mark_en : 1;
-		uint64_t subdc : 4;
-	};
-};
-
-RTE_STD_C11
-union nix_send_ext_w1_u {
-	uint64_t u;
-	struct {
-		uint64_t vlan0_ins_ptr : 8;
-		uint64_t vlan0_ins_tci : 16;
-		uint64_t vlan1_ins_ptr : 8;
-		uint64_t vlan1_ins_tci : 16;
-		uint64_t vlan0_ins_ena : 1;
-		uint64_t vlan1_ins_ena : 1;
-		uint64_t rsvd_127_114 : 14;
-	};
-};
-
-struct nix_send_ext_s {
-	union nix_send_ext_w0_u w0;
-	union nix_send_ext_w1_u w1;
-};
-
-/* NIX send header sub descriptor structure */
-RTE_STD_C11
-union nix_send_hdr_w0_u {
-	uint64_t u;
-	struct {
-		uint64_t total : 18;
-		uint64_t rsvd_18 : 1;
-		uint64_t df : 1;
-		uint64_t aura : 20;
-		uint64_t sizem1 : 3;
-		uint64_t pnc : 1;
-		uint64_t sq : 20;
-	};
-};
-
-RTE_STD_C11
-union nix_send_hdr_w1_u {
-	uint64_t u;
-	struct {
-		uint64_t ol3ptr : 8;
-		uint64_t ol4ptr : 8;
-		uint64_t il3ptr : 8;
-		uint64_t il4ptr : 8;
-		uint64_t ol3type : 4;
-		uint64_t ol4type : 4;
-		uint64_t il3type : 4;
-		uint64_t il4type : 4;
-		uint64_t sqe_id : 16;
-	};
-};
-
-struct nix_send_hdr_s {
-	union nix_send_hdr_w0_u w0;
-	union nix_send_hdr_w1_u w1;
-};
-
-/* NIX send immediate sub descriptor structure */
-struct nix_send_imm_s {
-	uint64_t size : 16;
-	uint64_t apad : 3;
-	uint64_t rsvd_59_19 : 41;
-	uint64_t subdc : 4;
-};
-
-/* NIX send jump sub descriptor structure */
-struct nix_send_jump_s {
-	uint64_t sizem1 : 7;
-	uint64_t rsvd_13_7 : 7;
-	uint64_t ld_type : 2;
-	uint64_t aura : 20;
-	uint64_t rsvd_58_36 : 23;
-	uint64_t f : 1;
-	uint64_t subdc : 4;
-	uint64_t addr : 64; /* W1 */
-};
-
-/* NIX send memory sub descriptor structure */
-struct nix_send_mem_s {
-	uint64_t offset : 16;
-	uint64_t rsvd_52_16 : 37;
-	uint64_t wmem : 1;
-	uint64_t dsz : 2;
-	uint64_t alg : 4;
-	uint64_t subdc : 4;
-	uint64_t addr : 64; /* W1 */
-};
-
-/* NIX send scatter/gather sub descriptor structure */
-RTE_STD_C11
-union nix_send_sg_s {
-	uint64_t u;
-	struct {
-		uint64_t seg1_size : 16;
-		uint64_t seg2_size : 16;
-		uint64_t seg3_size : 16;
-		uint64_t segs : 2;
-		uint64_t rsvd_54_50 : 5;
-		uint64_t i1 : 1;
-		uint64_t i2 : 1;
-		uint64_t i3 : 1;
-		uint64_t ld_type : 2;
-		uint64_t subdc : 4;
-	};
-};
-
-/* NIX send work sub descriptor structure */
-struct nix_send_work_s {
-	uint64_t tag : 32;
-	uint64_t tt : 2;
-	uint64_t grp : 10;
-	uint64_t rsvd_59_44 : 16;
-	uint64_t subdc : 4;
-	uint64_t addr : 64; /* W1 */
-};
-
-/* NIX sq context hardware structure */
-struct nix_sq_ctx_hw_s {
-	uint64_t ena : 1;
-	uint64_t substream : 20;
-	uint64_t max_sqe_size : 2;
-	uint64_t sqe_way_mask : 16;
-	uint64_t sqb_aura : 20;
-	uint64_t gbl_rsvd1 : 5;
-	uint64_t cq_id : 20;
-	uint64_t cq_ena : 1;
-	uint64_t qint_idx : 6;
-	uint64_t gbl_rsvd2 : 1;
-	uint64_t sq_int : 8;
-	uint64_t sq_int_ena : 8;
-	uint64_t xoff : 1;
-	uint64_t sqe_stype : 2;
-	uint64_t gbl_rsvd : 17;
-	uint64_t head_sqb : 64;/* W2 */
-	uint64_t head_offset : 6;
-	uint64_t sqb_dequeue_count : 16;
-	uint64_t default_chan : 12;
-	uint64_t sdp_mcast : 1;
-	uint64_t sso_ena : 1;
-	uint64_t dse_rsvd1 : 28;
-	uint64_t sqb_enqueue_count : 16;
-	uint64_t tail_offset : 6;
-	uint64_t lmt_dis : 1;
-	uint64_t smq_rr_quantum : 24;
-	uint64_t dnq_rsvd1 : 17;
-	uint64_t tail_sqb : 64;/* W5 */
-	uint64_t next_sqb : 64;/* W6 */
-	uint64_t mnq_dis : 1;
-	uint64_t smq : 9;
-	uint64_t smq_pend : 1;
-	uint64_t smq_next_sq : 20;
-	uint64_t smq_next_sq_vld : 1;
-	uint64_t scm1_rsvd2 : 32;
-	uint64_t smenq_sqb : 64;/* W8 */
-	uint64_t smenq_offset : 6;
-	uint64_t cq_limit : 8;
-	uint64_t smq_rr_count : 25;
-	uint64_t scm_lso_rem : 18;
-	uint64_t scm_dq_rsvd0 : 7;
-	uint64_t smq_lso_segnum : 8;
-	uint64_t vfi_lso_total : 18;
-	uint64_t vfi_lso_sizem1 : 3;
-	uint64_t vfi_lso_sb : 8;
-	uint64_t vfi_lso_mps : 14;
-	uint64_t vfi_lso_vlan0_ins_ena : 1;
-	uint64_t vfi_lso_vlan1_ins_ena : 1;
-	uint64_t vfi_lso_vld : 1;
-	uint64_t smenq_next_sqb_vld : 1;
-	uint64_t scm_dq_rsvd1 : 9;
-	uint64_t smenq_next_sqb : 64;/* W11 */
-	uint64_t seb_rsvd1 : 64;/* W12 */
-	uint64_t drop_pkts : 48;
-	uint64_t drop_octs_lsw : 16;
-	uint64_t drop_octs_msw : 32;
-	uint64_t pkts_lsw : 32;
-	uint64_t pkts_msw : 16;
-	uint64_t octs : 48;
-};
-
-/* NIX send queue context structure */
-struct nix_sq_ctx_s {
-	uint64_t ena : 1;
-	uint64_t qint_idx : 6;
-	uint64_t substream : 20;
-	uint64_t sdp_mcast : 1;
-	uint64_t cq : 20;
-	uint64_t sqe_way_mask : 16;
-	uint64_t smq : 9;
-	uint64_t cq_ena : 1;
-	uint64_t xoff : 1;
-	uint64_t sso_ena : 1;
-	uint64_t smq_rr_quantum : 24;
-	uint64_t default_chan : 12;
-	uint64_t sqb_count : 16;
-	uint64_t smq_rr_count : 25;
-	uint64_t sqb_aura : 20;
-	uint64_t sq_int : 8;
-	uint64_t sq_int_ena : 8;
-	uint64_t sqe_stype : 2;
-	uint64_t rsvd_191 : 1;
-	uint64_t max_sqe_size : 2;
-	uint64_t cq_limit : 8;
-	uint64_t lmt_dis : 1;
-	uint64_t mnq_dis : 1;
-	uint64_t smq_next_sq : 20;
-	uint64_t smq_lso_segnum : 8;
-	uint64_t tail_offset : 6;
-	uint64_t smenq_offset : 6;
-	uint64_t head_offset : 6;
-	uint64_t smenq_next_sqb_vld : 1;
-	uint64_t smq_pend : 1;
-	uint64_t smq_next_sq_vld : 1;
-	uint64_t rsvd_255_253 : 3;
-	uint64_t next_sqb : 64;/* W4 */
-	uint64_t tail_sqb : 64;/* W5 */
-	uint64_t smenq_sqb : 64;/* W6 */
-	uint64_t smenq_next_sqb : 64;/* W7 */
-	uint64_t head_sqb : 64;/* W8 */
-	uint64_t rsvd_583_576 : 8;
-	uint64_t vfi_lso_total : 18;
-	uint64_t vfi_lso_sizem1 : 3;
-	uint64_t vfi_lso_sb : 8;
-	uint64_t vfi_lso_mps : 14;
-	uint64_t vfi_lso_vlan0_ins_ena : 1;
-	uint64_t vfi_lso_vlan1_ins_ena : 1;
-	uint64_t vfi_lso_vld : 1;
-	uint64_t rsvd_639_630 : 10;
-	uint64_t scm_lso_rem : 18;
-	uint64_t rsvd_703_658 : 46;
-	uint64_t octs : 48;
-	uint64_t rsvd_767_752 : 16;
-	uint64_t pkts : 48;
-	uint64_t rsvd_831_816 : 16;
-	uint64_t rsvd_895_832 : 64;/* W13 */
-	uint64_t drop_octs : 48;
-	uint64_t rsvd_959_944 : 16;
-	uint64_t drop_pkts : 48;
-	uint64_t rsvd_1023_1008 : 16;
-};
-
-/* NIX transmit action structure */
-struct nix_tx_action_s {
-	uint64_t op : 4;
-	uint64_t rsvd_11_4 : 8;
-	uint64_t index : 20;
-	uint64_t match_id : 16;
-	uint64_t rsvd_63_48 : 16;
-};
-
-/* NIX transmit vtag action structure */
-struct nix_tx_vtag_action_s {
-	uint64_t vtag0_relptr : 8;
-	uint64_t vtag0_lid : 3;
-	uint64_t rsvd_11 : 1;
-	uint64_t vtag0_op : 2;
-	uint64_t rsvd_15_14 : 2;
-	uint64_t vtag0_def : 10;
-	uint64_t rsvd_31_26 : 6;
-	uint64_t vtag1_relptr : 8;
-	uint64_t vtag1_lid : 3;
-	uint64_t rsvd_43 : 1;
-	uint64_t vtag1_op : 2;
-	uint64_t rsvd_47_46 : 2;
-	uint64_t vtag1_def : 10;
-	uint64_t rsvd_63_58 : 6;
-};
-
-/* NIX work queue entry header structure */
-struct nix_wqe_hdr_s {
-	uint64_t tag : 32;
-	uint64_t tt : 2;
-	uint64_t grp : 10;
-	uint64_t node : 2;
-	uint64_t q : 14;
-	uint64_t wqe_type : 4;
-};
-
-/* NIX Rx flow key algorithm field structure */
-struct nix_rx_flowkey_alg {
-	uint64_t key_offset :6;
-	uint64_t ln_mask :1;
-	uint64_t fn_mask :1;
-	uint64_t hdr_offset :8;
-	uint64_t bytesm1 :5;
-	uint64_t lid :3;
-	uint64_t reserved_24_24 :1;
-	uint64_t ena :1;
-	uint64_t sel_chan :1;
-	uint64_t ltype_mask :4;
-	uint64_t ltype_match :4;
-	uint64_t reserved_35_63 :29;
-};
-
-/* NIX LSO format field structure */
-struct nix_lso_format {
-	uint64_t offset : 8;
-	uint64_t layer : 2;
-	uint64_t rsvd_10_11 : 2;
-	uint64_t sizem1 : 2;
-	uint64_t rsvd_14_15 : 2;
-	uint64_t alg : 3;
-	uint64_t rsvd_19_63 : 45;
-};
-
-#define NIX_LSO_FIELD_MAX	(8)
-#define NIX_LSO_FIELD_ALG_MASK	GENMASK(18, 16)
-#define NIX_LSO_FIELD_SZ_MASK	GENMASK(13, 12)
-#define NIX_LSO_FIELD_LY_MASK	GENMASK(9, 8)
-#define NIX_LSO_FIELD_OFF_MASK	GENMASK(7, 0)
-
-#define NIX_LSO_FIELD_MASK \
-	(NIX_LSO_FIELD_OFF_MASK | \
-	NIX_LSO_FIELD_LY_MASK | \
-	NIX_LSO_FIELD_SZ_MASK | \
-	NIX_LSO_FIELD_ALG_MASK)
-
-#endif /* __OTX2_NIX_HW_H__ */
diff --git a/drivers/common/octeontx2/hw/otx2_npa.h b/drivers/common/octeontx2/hw/otx2_npa.h
deleted file mode 100644
index 2224216c96..0000000000
--- a/drivers/common/octeontx2/hw/otx2_npa.h
+++ /dev/null
@@ -1,305 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_NPA_HW_H__
-#define __OTX2_NPA_HW_H__
-
-/* Register offsets */
-
-#define NPA_AF_BLK_RST	(0x0ull)
-#define NPA_AF_CONST	(0x10ull)
-#define NPA_AF_CONST1	(0x18ull)
-#define NPA_AF_LF_RST	(0x20ull)
-#define NPA_AF_GEN_CFG	(0x30ull)
-#define NPA_AF_NDC_CFG	(0x40ull)
-#define NPA_AF_NDC_SYNC	(0x50ull)
-#define NPA_AF_INP_CTL	(0xd0ull)
-#define NPA_AF_ACTIVE_CYCLES_PC	(0xf0ull)
-#define NPA_AF_AVG_DELAY	(0x100ull)
-#define NPA_AF_GEN_INT	(0x140ull)
-#define NPA_AF_GEN_INT_W1S	(0x148ull)
-#define NPA_AF_GEN_INT_ENA_W1S	(0x150ull)
-#define NPA_AF_GEN_INT_ENA_W1C	(0x158ull)
-#define NPA_AF_RVU_INT	(0x160ull)
-#define NPA_AF_RVU_INT_W1S	(0x168ull)
-#define NPA_AF_RVU_INT_ENA_W1S	(0x170ull)
-#define NPA_AF_RVU_INT_ENA_W1C	(0x178ull)
-#define NPA_AF_ERR_INT	(0x180ull)
-#define NPA_AF_ERR_INT_W1S	(0x188ull)
-#define NPA_AF_ERR_INT_ENA_W1S	(0x190ull)
-#define NPA_AF_ERR_INT_ENA_W1C	(0x198ull)
-#define NPA_AF_RAS	(0x1a0ull)
-#define NPA_AF_RAS_W1S	(0x1a8ull)
-#define NPA_AF_RAS_ENA_W1S	(0x1b0ull)
-#define NPA_AF_RAS_ENA_W1C	(0x1b8ull)
-#define NPA_AF_AQ_CFG	(0x600ull)
-#define NPA_AF_AQ_BASE	(0x610ull)
-#define NPA_AF_AQ_STATUS	(0x620ull)
-#define NPA_AF_AQ_DOOR	(0x630ull)
-#define NPA_AF_AQ_DONE_WAIT	(0x640ull)
-#define NPA_AF_AQ_DONE	(0x650ull)
-#define NPA_AF_AQ_DONE_ACK	(0x660ull)
-#define NPA_AF_AQ_DONE_TIMER	(0x670ull)
-#define NPA_AF_AQ_DONE_INT	(0x680ull)
-#define NPA_AF_AQ_DONE_ENA_W1S	(0x690ull)
-#define NPA_AF_AQ_DONE_ENA_W1C	(0x698ull)
-#define NPA_AF_LFX_AURAS_CFG(a)	(0x4000ull | (uint64_t)(a) << 18)
-#define NPA_AF_LFX_LOC_AURAS_BASE(a)	(0x4010ull | (uint64_t)(a) << 18)
-#define NPA_AF_LFX_QINTS_CFG(a)	(0x4100ull | (uint64_t)(a) << 18)
-#define NPA_AF_LFX_QINTS_BASE(a)	(0x4110ull | (uint64_t)(a) << 18)
-#define NPA_PRIV_AF_INT_CFG	(0x10000ull)
-#define NPA_PRIV_LFX_CFG(a)	(0x10010ull | (uint64_t)(a) << 8)
-#define NPA_PRIV_LFX_INT_CFG(a)	(0x10020ull | (uint64_t)(a) << 8)
-#define NPA_AF_RVU_LF_CFG_DEBUG	(0x10030ull)
-#define NPA_AF_DTX_FILTER_CTL	(0x10040ull)
-
-#define NPA_LF_AURA_OP_ALLOCX(a)	(0x10ull | (uint64_t)(a) << 3)
-#define NPA_LF_AURA_OP_FREE0	(0x20ull)
-#define NPA_LF_AURA_OP_FREE1	(0x28ull)
-#define NPA_LF_AURA_OP_CNT	(0x30ull)
-#define NPA_LF_AURA_OP_LIMIT	(0x50ull)
-#define NPA_LF_AURA_OP_INT	(0x60ull)
-#define NPA_LF_AURA_OP_THRESH	(0x70ull)
-#define NPA_LF_POOL_OP_PC	(0x100ull)
-#define NPA_LF_POOL_OP_AVAILABLE	(0x110ull)
-#define NPA_LF_POOL_OP_PTR_START0	(0x120ull)
-#define NPA_LF_POOL_OP_PTR_START1	(0x128ull)
-#define NPA_LF_POOL_OP_PTR_END0	(0x130ull)
-#define NPA_LF_POOL_OP_PTR_END1	(0x138ull)
-#define NPA_LF_POOL_OP_INT	(0x160ull)
-#define NPA_LF_POOL_OP_THRESH	(0x170ull)
-#define NPA_LF_ERR_INT	(0x200ull)
-#define NPA_LF_ERR_INT_W1S	(0x208ull)
-#define NPA_LF_ERR_INT_ENA_W1C	(0x210ull)
-#define NPA_LF_ERR_INT_ENA_W1S	(0x218ull)
-#define NPA_LF_RAS	(0x220ull)
-#define NPA_LF_RAS_W1S	(0x228ull)
-#define NPA_LF_RAS_ENA_W1C	(0x230ull)
-#define NPA_LF_RAS_ENA_W1S	(0x238ull)
-#define NPA_LF_QINTX_CNT(a)	(0x300ull | (uint64_t)(a) << 12)
-#define NPA_LF_QINTX_INT(a)	(0x310ull | (uint64_t)(a) << 12)
-#define NPA_LF_QINTX_ENA_W1S(a)	(0x320ull | (uint64_t)(a) << 12)
-#define NPA_LF_QINTX_ENA_W1C(a)	(0x330ull | (uint64_t)(a) << 12)
-
-
-/* Enum offsets */
-
-#define NPA_AQ_COMP_NOTDONE	(0x0ull)
-#define NPA_AQ_COMP_GOOD	(0x1ull)
-#define NPA_AQ_COMP_SWERR	(0x2ull)
-#define NPA_AQ_COMP_CTX_POISON	(0x3ull)
-#define NPA_AQ_COMP_CTX_FAULT	(0x4ull)
-#define NPA_AQ_COMP_LOCKERR	(0x5ull)
-
-#define NPA_AF_INT_VEC_RVU	(0x0ull)
-#define NPA_AF_INT_VEC_GEN	(0x1ull)
-#define NPA_AF_INT_VEC_AQ_DONE	(0x2ull)
-#define NPA_AF_INT_VEC_AF_ERR	(0x3ull)
-#define NPA_AF_INT_VEC_POISON	(0x4ull)
-
-#define NPA_AQ_INSTOP_NOP	(0x0ull)
-#define NPA_AQ_INSTOP_INIT	(0x1ull)
-#define NPA_AQ_INSTOP_WRITE	(0x2ull)
-#define NPA_AQ_INSTOP_READ	(0x3ull)
-#define NPA_AQ_INSTOP_LOCK	(0x4ull)
-#define NPA_AQ_INSTOP_UNLOCK	(0x5ull)
-
-#define NPA_AQ_CTYPE_AURA	(0x0ull)
-#define NPA_AQ_CTYPE_POOL	(0x1ull)
-
-#define NPA_BPINTF_NIX0_RX (0x0ull) -#define NPA_BPINTF_NIX1_RX (0x1ull) - -#define NPA_AURA_ERR_INT_AURA_FREE_UNDER (0x0ull) -#define NPA_AURA_ERR_INT_AURA_ADD_OVER (0x1ull) -#define NPA_AURA_ERR_INT_AURA_ADD_UNDER (0x2ull) -#define NPA_AURA_ERR_INT_POOL_DIS (0x3ull) -#define NPA_AURA_ERR_INT_R4 (0x4ull) -#define NPA_AURA_ERR_INT_R5 (0x5ull) -#define NPA_AURA_ERR_INT_R6 (0x6ull) -#define NPA_AURA_ERR_INT_R7 (0x7ull) - -#define NPA_LF_INT_VEC_ERR_INT (0x40ull) -#define NPA_LF_INT_VEC_POISON (0x41ull) -#define NPA_LF_INT_VEC_QINT_END (0x3full) -#define NPA_LF_INT_VEC_QINT_START (0x0ull) - -#define NPA_INPQ_SSO (0x4ull) -#define NPA_INPQ_TIM (0x5ull) -#define NPA_INPQ_DPI (0x6ull) -#define NPA_INPQ_AURA_OP (0xeull) -#define NPA_INPQ_INTERNAL_RSV (0xfull) -#define NPA_INPQ_NIX0_RX (0x0ull) -#define NPA_INPQ_NIX1_RX (0x2ull) -#define NPA_INPQ_NIX0_TX (0x1ull) -#define NPA_INPQ_NIX1_TX (0x3ull) -#define NPA_INPQ_R_END (0xdull) -#define NPA_INPQ_R_START (0x7ull) - -#define NPA_POOL_ERR_INT_OVFLS (0x0ull) -#define NPA_POOL_ERR_INT_RANGE (0x1ull) -#define NPA_POOL_ERR_INT_PERR (0x2ull) -#define NPA_POOL_ERR_INT_R3 (0x3ull) -#define NPA_POOL_ERR_INT_R4 (0x4ull) -#define NPA_POOL_ERR_INT_R5 (0x5ull) -#define NPA_POOL_ERR_INT_R6 (0x6ull) -#define NPA_POOL_ERR_INT_R7 (0x7ull) - -#define NPA_NDC0_PORT_AURA0 (0x0ull) -#define NPA_NDC0_PORT_AURA1 (0x1ull) -#define NPA_NDC0_PORT_POOL0 (0x2ull) -#define NPA_NDC0_PORT_POOL1 (0x3ull) -#define NPA_NDC0_PORT_STACK0 (0x4ull) -#define NPA_NDC0_PORT_STACK1 (0x5ull) - -#define NPA_LF_ERR_INT_AURA_DIS (0x0ull) -#define NPA_LF_ERR_INT_AURA_OOR (0x1ull) -#define NPA_LF_ERR_INT_AURA_FAULT (0xcull) -#define NPA_LF_ERR_INT_POOL_FAULT (0xdull) -#define NPA_LF_ERR_INT_STACK_FAULT (0xeull) -#define NPA_LF_ERR_INT_QINT_FAULT (0xfull) - -/* Structures definitions */ - -/* NPA admin queue instruction structure */ -struct npa_aq_inst_s { - uint64_t op : 4; - uint64_t ctype : 4; - uint64_t lf : 9; - uint64_t rsvd_23_17 : 7; - uint64_t cindex : 20; - 
uint64_t rsvd_62_44 : 19; - uint64_t doneint : 1; - uint64_t res_addr : 64; /* W1 */ -}; - -/* NPA admin queue result structure */ -struct npa_aq_res_s { - uint64_t op : 4; - uint64_t ctype : 4; - uint64_t compcode : 8; - uint64_t doneint : 1; - uint64_t rsvd_63_17 : 47; - uint64_t rsvd_127_64 : 64; /* W1 */ -}; - -/* NPA aura operation write data structure */ -struct npa_aura_op_wdata_s { - uint64_t aura : 20; - uint64_t rsvd_62_20 : 43; - uint64_t drop : 1; -}; - -/* NPA aura context structure */ -struct npa_aura_s { - uint64_t pool_addr : 64;/* W0 */ - uint64_t ena : 1; - uint64_t rsvd_66_65 : 2; - uint64_t pool_caching : 1; - uint64_t pool_way_mask : 16; - uint64_t avg_con : 9; - uint64_t rsvd_93 : 1; - uint64_t pool_drop_ena : 1; - uint64_t aura_drop_ena : 1; - uint64_t bp_ena : 2; - uint64_t rsvd_103_98 : 6; - uint64_t aura_drop : 8; - uint64_t shift : 6; - uint64_t rsvd_119_118 : 2; - uint64_t avg_level : 8; - uint64_t count : 36; - uint64_t rsvd_167_164 : 4; - uint64_t nix0_bpid : 9; - uint64_t rsvd_179_177 : 3; - uint64_t nix1_bpid : 9; - uint64_t rsvd_191_189 : 3; - uint64_t limit : 36; - uint64_t rsvd_231_228 : 4; - uint64_t bp : 8; - uint64_t rsvd_243_240 : 4; - uint64_t fc_ena : 1; - uint64_t fc_up_crossing : 1; - uint64_t fc_stype : 2; - uint64_t fc_hyst_bits : 4; - uint64_t rsvd_255_252 : 4; - uint64_t fc_addr : 64;/* W4 */ - uint64_t pool_drop : 8; - uint64_t update_time : 16; - uint64_t err_int : 8; - uint64_t err_int_ena : 8; - uint64_t thresh_int : 1; - uint64_t thresh_int_ena : 1; - uint64_t thresh_up : 1; - uint64_t rsvd_363 : 1; - uint64_t thresh_qint_idx : 7; - uint64_t rsvd_371 : 1; - uint64_t err_qint_idx : 7; - uint64_t rsvd_383_379 : 5; - uint64_t thresh : 36; - uint64_t rsvd_447_420 : 28; - uint64_t rsvd_511_448 : 64;/* W7 */ -}; - -/* NPA pool context structure */ -struct npa_pool_s { - uint64_t stack_base : 64;/* W0 */ - uint64_t ena : 1; - uint64_t nat_align : 1; - uint64_t rsvd_67_66 : 2; - uint64_t stack_caching : 1; - uint64_t 
rsvd_71_69 : 3; - uint64_t stack_way_mask : 16; - uint64_t buf_offset : 12; - uint64_t rsvd_103_100 : 4; - uint64_t buf_size : 11; - uint64_t rsvd_127_115 : 13; - uint64_t stack_max_pages : 32; - uint64_t stack_pages : 32; - uint64_t op_pc : 48; - uint64_t rsvd_255_240 : 16; - uint64_t stack_offset : 4; - uint64_t rsvd_263_260 : 4; - uint64_t shift : 6; - uint64_t rsvd_271_270 : 2; - uint64_t avg_level : 8; - uint64_t avg_con : 9; - uint64_t fc_ena : 1; - uint64_t fc_stype : 2; - uint64_t fc_hyst_bits : 4; - uint64_t fc_up_crossing : 1; - uint64_t rsvd_299_297 : 3; - uint64_t update_time : 16; - uint64_t rsvd_319_316 : 4; - uint64_t fc_addr : 64;/* W5 */ - uint64_t ptr_start : 64;/* W6 */ - uint64_t ptr_end : 64;/* W7 */ - uint64_t rsvd_535_512 : 24; - uint64_t err_int : 8; - uint64_t err_int_ena : 8; - uint64_t thresh_int : 1; - uint64_t thresh_int_ena : 1; - uint64_t thresh_up : 1; - uint64_t rsvd_555 : 1; - uint64_t thresh_qint_idx : 7; - uint64_t rsvd_563 : 1; - uint64_t err_qint_idx : 7; - uint64_t rsvd_575_571 : 5; - uint64_t thresh : 36; - uint64_t rsvd_639_612 : 28; - uint64_t rsvd_703_640 : 64;/* W10 */ - uint64_t rsvd_767_704 : 64;/* W11 */ - uint64_t rsvd_831_768 : 64;/* W12 */ - uint64_t rsvd_895_832 : 64;/* W13 */ - uint64_t rsvd_959_896 : 64;/* W14 */ - uint64_t rsvd_1023_960 : 64;/* W15 */ -}; - -/* NPA queue interrupt context hardware structure */ -struct npa_qint_hw_s { - uint32_t count : 22; - uint32_t rsvd_30_22 : 9; - uint32_t ena : 1; -}; - -#endif /* __OTX2_NPA_HW_H__ */ diff --git a/drivers/common/octeontx2/hw/otx2_npc.h b/drivers/common/octeontx2/hw/otx2_npc.h deleted file mode 100644 index b4e3c1eedc..0000000000 --- a/drivers/common/octeontx2/hw/otx2_npc.h +++ /dev/null @@ -1,503 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2019 Marvell International Ltd. 
- */ - -#ifndef __OTX2_NPC_HW_H__ -#define __OTX2_NPC_HW_H__ - -/* Register offsets */ - -#define NPC_AF_CFG (0x0ull) -#define NPC_AF_ACTIVE_PC (0x10ull) -#define NPC_AF_CONST (0x20ull) -#define NPC_AF_CONST1 (0x30ull) -#define NPC_AF_BLK_RST (0x40ull) -#define NPC_AF_MCAM_SCRUB_CTL (0xa0ull) -#define NPC_AF_KCAM_SCRUB_CTL (0xb0ull) -#define NPC_AF_KPUX_CFG(a) \ - (0x500ull | (uint64_t)(a) << 3) -#define NPC_AF_PCK_CFG (0x600ull) -#define NPC_AF_PCK_DEF_OL2 (0x610ull) -#define NPC_AF_PCK_DEF_OIP4 (0x620ull) -#define NPC_AF_PCK_DEF_OIP6 (0x630ull) -#define NPC_AF_PCK_DEF_IIP4 (0x640ull) -#define NPC_AF_KEX_LDATAX_FLAGS_CFG(a) \ - (0x800ull | (uint64_t)(a) << 3) -#define NPC_AF_INTFX_KEX_CFG(a) \ - (0x1010ull | (uint64_t)(a) << 8) -#define NPC_AF_PKINDX_ACTION0(a) \ - (0x80000ull | (uint64_t)(a) << 6) -#define NPC_AF_PKINDX_ACTION1(a) \ - (0x80008ull | (uint64_t)(a) << 6) -#define NPC_AF_PKINDX_CPI_DEFX(a, b) \ - (0x80020ull | (uint64_t)(a) << 6 | (uint64_t)(b) << 3) -#define NPC_AF_CHLEN90B_PKIND (0x3bull) -#define NPC_AF_KPUX_ENTRYX_CAMX(a, b, c) \ - (0x100000ull | (uint64_t)(a) << 14 | (uint64_t)(b) << 6 | \ - (uint64_t)(c) << 3) -#define NPC_AF_KPUX_ENTRYX_ACTION0(a, b) \ - (0x100020ull | (uint64_t)(a) << 14 | (uint64_t)(b) << 6) -#define NPC_AF_KPUX_ENTRYX_ACTION1(a, b) \ - (0x100028ull | (uint64_t)(a) << 14 | (uint64_t)(b) << 6) -#define NPC_AF_KPUX_ENTRY_DISX(a, b) \ - (0x180000ull | (uint64_t)(a) << 6 | (uint64_t)(b) << 3) -#define NPC_AF_CPIX_CFG(a) \ - (0x200000ull | (uint64_t)(a) << 3) -#define NPC_AF_INTFX_LIDX_LTX_LDX_CFG(a, b, c, d) \ - (0x900000ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 12 | \ - (uint64_t)(c) << 5 | (uint64_t)(d) << 3) -#define NPC_AF_INTFX_LDATAX_FLAGSX_CFG(a, b, c) \ - (0x980000ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 12 | \ - (uint64_t)(c) << 3) -#define NPC_AF_MCAMEX_BANKX_CAMX_INTF(a, b, c) \ - (0x1000000ull | (uint64_t)(a) << 10 | (uint64_t)(b) << 6 | \ - (uint64_t)(c) << 3) -#define NPC_AF_MCAMEX_BANKX_CAMX_W0(a, b, c) \ - 
(0x1000010ull | (uint64_t)(a) << 10 | (uint64_t)(b) << 6 | \ - (uint64_t)(c) << 3) -#define NPC_AF_MCAMEX_BANKX_CAMX_W1(a, b, c) \ - (0x1000020ull | (uint64_t)(a) << 10 | (uint64_t)(b) << 6 | \ - (uint64_t)(c) << 3) -#define NPC_AF_MCAMEX_BANKX_CFG(a, b) \ - (0x1800000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4) -#define NPC_AF_MCAMEX_BANKX_STAT_ACT(a, b) \ - (0x1880000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4) -#define NPC_AF_MATCH_STATX(a) \ - (0x1880008ull | (uint64_t)(a) << 8) -#define NPC_AF_INTFX_MISS_STAT_ACT(a) \ - (0x1880040ull + (uint64_t)(a) * 0x8) -#define NPC_AF_MCAMEX_BANKX_ACTION(a, b) \ - (0x1900000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4) -#define NPC_AF_MCAMEX_BANKX_TAG_ACT(a, b) \ - (0x1900008ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4) -#define NPC_AF_INTFX_MISS_ACT(a) \ - (0x1a00000ull | (uint64_t)(a) << 4) -#define NPC_AF_INTFX_MISS_TAG_ACT(a) \ - (0x1b00008ull | (uint64_t)(a) << 4) -#define NPC_AF_MCAM_BANKX_HITX(a, b) \ - (0x1c80000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4) -#define NPC_AF_LKUP_CTL (0x2000000ull) -#define NPC_AF_LKUP_DATAX(a) \ - (0x2000200ull | (uint64_t)(a) << 4) -#define NPC_AF_LKUP_RESULTX(a) \ - (0x2000400ull | (uint64_t)(a) << 4) -#define NPC_AF_INTFX_STAT(a) \ - (0x2000800ull | (uint64_t)(a) << 4) -#define NPC_AF_DBG_CTL (0x3000000ull) -#define NPC_AF_DBG_STATUS (0x3000010ull) -#define NPC_AF_KPUX_DBG(a) \ - (0x3000020ull | (uint64_t)(a) << 8) -#define NPC_AF_IKPU_ERR_CTL (0x3000080ull) -#define NPC_AF_KPUX_ERR_CTL(a) \ - (0x30000a0ull | (uint64_t)(a) << 8) -#define NPC_AF_MCAM_DBG (0x3001000ull) -#define NPC_AF_DBG_DATAX(a) \ - (0x3001400ull | (uint64_t)(a) << 4) -#define NPC_AF_DBG_RESULTX(a) \ - (0x3001800ull | (uint64_t)(a) << 4) - - -/* Enum offsets */ - -#define NPC_INTF_NIX0_RX (0x0ull) -#define NPC_INTF_NIX0_TX (0x1ull) - -#define NPC_LKUPOP_PKT (0x0ull) -#define NPC_LKUPOP_KEY (0x1ull) - -#define NPC_MCAM_KEY_X1 (0x0ull) -#define NPC_MCAM_KEY_X2 (0x1ull) -#define NPC_MCAM_KEY_X4 (0x2ull) - 
-enum NPC_ERRLEV_E { - NPC_ERRLEV_RE = 0, - NPC_ERRLEV_LA = 1, - NPC_ERRLEV_LB = 2, - NPC_ERRLEV_LC = 3, - NPC_ERRLEV_LD = 4, - NPC_ERRLEV_LE = 5, - NPC_ERRLEV_LF = 6, - NPC_ERRLEV_LG = 7, - NPC_ERRLEV_LH = 8, - NPC_ERRLEV_R9 = 9, - NPC_ERRLEV_R10 = 10, - NPC_ERRLEV_R11 = 11, - NPC_ERRLEV_R12 = 12, - NPC_ERRLEV_R13 = 13, - NPC_ERRLEV_R14 = 14, - NPC_ERRLEV_NIX = 15, - NPC_ERRLEV_ENUM_LAST = 16, -}; - -enum npc_kpu_err_code { - NPC_EC_NOERR = 0, /* has to be zero */ - NPC_EC_UNK, - NPC_EC_IH_LENGTH, - NPC_EC_EDSA_UNK, - NPC_EC_L2_K1, - NPC_EC_L2_K2, - NPC_EC_L2_K3, - NPC_EC_L2_K3_ETYPE_UNK, - NPC_EC_L2_K4, - NPC_EC_MPLS_2MANY, - NPC_EC_MPLS_UNK, - NPC_EC_NSH_UNK, - NPC_EC_IP_TTL_0, - NPC_EC_IP_FRAG_OFFSET_1, - NPC_EC_IP_VER, - NPC_EC_IP6_HOP_0, - NPC_EC_IP6_VER, - NPC_EC_TCP_FLAGS_FIN_ONLY, - NPC_EC_TCP_FLAGS_ZERO, - NPC_EC_TCP_FLAGS_RST_FIN, - NPC_EC_TCP_FLAGS_URG_SYN, - NPC_EC_TCP_FLAGS_RST_SYN, - NPC_EC_TCP_FLAGS_SYN_FIN, - NPC_EC_VXLAN, - NPC_EC_NVGRE, - NPC_EC_GRE, - NPC_EC_GRE_VER1, - NPC_EC_L4, - NPC_EC_OIP4_CSUM, - NPC_EC_IIP4_CSUM, - NPC_EC_LAST /* has to be the last item */ -}; - -enum NPC_LID_E { - NPC_LID_LA = 0, - NPC_LID_LB, - NPC_LID_LC, - NPC_LID_LD, - NPC_LID_LE, - NPC_LID_LF, - NPC_LID_LG, - NPC_LID_LH, -}; - -#define NPC_LT_NA 0 - -enum npc_kpu_la_ltype { - NPC_LT_LA_8023 = 1, - NPC_LT_LA_ETHER, - NPC_LT_LA_IH_NIX_ETHER, - NPC_LT_LA_IH_8_ETHER, - NPC_LT_LA_IH_4_ETHER, - NPC_LT_LA_IH_2_ETHER, - NPC_LT_LA_HIGIG2_ETHER, - NPC_LT_LA_IH_NIX_HIGIG2_ETHER, - NPC_LT_LA_CUSTOM_L2_90B_ETHER, - NPC_LT_LA_CPT_HDR, - NPC_LT_LA_CUSTOM_L2_24B_ETHER, - NPC_LT_LA_CUSTOM0 = 0xE, - NPC_LT_LA_CUSTOM1 = 0xF, -}; - -enum npc_kpu_lb_ltype { - NPC_LT_LB_ETAG = 1, - NPC_LT_LB_CTAG, - NPC_LT_LB_STAG_QINQ, - NPC_LT_LB_BTAG, - NPC_LT_LB_ITAG, - NPC_LT_LB_DSA, - NPC_LT_LB_DSA_VLAN, - NPC_LT_LB_EDSA, - NPC_LT_LB_EDSA_VLAN, - NPC_LT_LB_EXDSA, - NPC_LT_LB_EXDSA_VLAN, - NPC_LT_LB_FDSA, - NPC_LT_LB_VLAN_EXDSA, - NPC_LT_LB_CUSTOM0 = 0xE, - NPC_LT_LB_CUSTOM1 = 0xF, -}; - -enum 
npc_kpu_lc_ltype { - NPC_LT_LC_PTP = 1, - NPC_LT_LC_IP, - NPC_LT_LC_IP_OPT, - NPC_LT_LC_IP6, - NPC_LT_LC_IP6_EXT, - NPC_LT_LC_ARP, - NPC_LT_LC_RARP, - NPC_LT_LC_MPLS, - NPC_LT_LC_NSH, - NPC_LT_LC_FCOE, - NPC_LT_LC_NGIO, - NPC_LT_LC_CUSTOM0 = 0xE, - NPC_LT_LC_CUSTOM1 = 0xF, -}; - -/* Don't modify Ltypes up to SCTP, otherwise it will - * effect flow tag calculation and thus RSS. - */ -enum npc_kpu_ld_ltype { - NPC_LT_LD_TCP = 1, - NPC_LT_LD_UDP, - NPC_LT_LD_ICMP, - NPC_LT_LD_SCTP, - NPC_LT_LD_ICMP6, - NPC_LT_LD_CUSTOM0, - NPC_LT_LD_CUSTOM1, - NPC_LT_LD_IGMP = 8, - NPC_LT_LD_AH, - NPC_LT_LD_GRE, - NPC_LT_LD_NVGRE, - NPC_LT_LD_NSH, - NPC_LT_LD_TU_MPLS_IN_NSH, - NPC_LT_LD_TU_MPLS_IN_IP, -}; - -enum npc_kpu_le_ltype { - NPC_LT_LE_VXLAN = 1, - NPC_LT_LE_GENEVE, - NPC_LT_LE_ESP, - NPC_LT_LE_GTPU = 4, - NPC_LT_LE_VXLANGPE, - NPC_LT_LE_GTPC, - NPC_LT_LE_NSH, - NPC_LT_LE_TU_MPLS_IN_GRE, - NPC_LT_LE_TU_NSH_IN_GRE, - NPC_LT_LE_TU_MPLS_IN_UDP, - NPC_LT_LE_CUSTOM0 = 0xE, - NPC_LT_LE_CUSTOM1 = 0xF, -}; - -enum npc_kpu_lf_ltype { - NPC_LT_LF_TU_ETHER = 1, - NPC_LT_LF_TU_PPP, - NPC_LT_LF_TU_MPLS_IN_VXLANGPE, - NPC_LT_LF_TU_NSH_IN_VXLANGPE, - NPC_LT_LF_TU_MPLS_IN_NSH, - NPC_LT_LF_TU_3RD_NSH, - NPC_LT_LF_CUSTOM0 = 0xE, - NPC_LT_LF_CUSTOM1 = 0xF, -}; - -enum npc_kpu_lg_ltype { - NPC_LT_LG_TU_IP = 1, - NPC_LT_LG_TU_IP6, - NPC_LT_LG_TU_ARP, - NPC_LT_LG_TU_ETHER_IN_NSH, - NPC_LT_LG_CUSTOM0 = 0xE, - NPC_LT_LG_CUSTOM1 = 0xF, -}; - -/* Don't modify Ltypes up to SCTP, otherwise it will - * effect flow tag calculation and thus RSS. 
- */ -enum npc_kpu_lh_ltype { - NPC_LT_LH_TU_TCP = 1, - NPC_LT_LH_TU_UDP, - NPC_LT_LH_TU_ICMP, - NPC_LT_LH_TU_SCTP, - NPC_LT_LH_TU_ICMP6, - NPC_LT_LH_TU_IGMP = 8, - NPC_LT_LH_TU_ESP, - NPC_LT_LH_TU_AH, - NPC_LT_LH_CUSTOM0 = 0xE, - NPC_LT_LH_CUSTOM1 = 0xF, -}; - -/* Structures definitions */ -struct npc_kpu_profile_cam { - uint8_t state; - uint8_t state_mask; - uint16_t dp0; - uint16_t dp0_mask; - uint16_t dp1; - uint16_t dp1_mask; - uint16_t dp2; - uint16_t dp2_mask; -}; - -struct npc_kpu_profile_action { - uint8_t errlev; - uint8_t errcode; - uint8_t dp0_offset; - uint8_t dp1_offset; - uint8_t dp2_offset; - uint8_t bypass_count; - uint8_t parse_done; - uint8_t next_state; - uint8_t ptr_advance; - uint8_t cap_ena; - uint8_t lid; - uint8_t ltype; - uint8_t flags; - uint8_t offset; - uint8_t mask; - uint8_t right; - uint8_t shift; -}; - -struct npc_kpu_profile { - int cam_entries; - int action_entries; - struct npc_kpu_profile_cam *cam; - struct npc_kpu_profile_action *action; -}; - -/* NPC KPU register formats */ -struct npc_kpu_cam { - uint64_t dp0_data : 16; - uint64_t dp1_data : 16; - uint64_t dp2_data : 16; - uint64_t state : 8; - uint64_t rsvd_63_56 : 8; -}; - -struct npc_kpu_action0 { - uint64_t var_len_shift : 3; - uint64_t var_len_right : 1; - uint64_t var_len_mask : 8; - uint64_t var_len_offset : 8; - uint64_t ptr_advance : 8; - uint64_t capture_flags : 8; - uint64_t capture_ltype : 4; - uint64_t capture_lid : 3; - uint64_t rsvd_43 : 1; - uint64_t next_state : 8; - uint64_t parse_done : 1; - uint64_t capture_ena : 1; - uint64_t byp_count : 3; - uint64_t rsvd_63_57 : 7; -}; - -struct npc_kpu_action1 { - uint64_t dp0_offset : 8; - uint64_t dp1_offset : 8; - uint64_t dp2_offset : 8; - uint64_t errcode : 8; - uint64_t errlev : 4; - uint64_t rsvd_63_36 : 28; -}; - -struct npc_kpu_pkind_cpi_def { - uint64_t cpi_base : 10; - uint64_t rsvd_11_10 : 2; - uint64_t add_shift : 3; - uint64_t rsvd_15 : 1; - uint64_t add_mask : 8; - uint64_t add_offset : 8; - uint64_t 
flags_mask : 8; - uint64_t flags_match : 8; - uint64_t ltype_mask : 4; - uint64_t ltype_match : 4; - uint64_t lid : 3; - uint64_t rsvd_62_59 : 4; - uint64_t ena : 1; -}; - -struct nix_rx_action { - uint64_t op :4; - uint64_t pf_func :16; - uint64_t index :20; - uint64_t match_id :16; - uint64_t flow_key_alg :5; - uint64_t rsvd_63_61 :3; -}; - -struct nix_tx_action { - uint64_t op :4; - uint64_t rsvd_11_4 :8; - uint64_t index :20; - uint64_t match_id :16; - uint64_t rsvd_63_48 :16; -}; - -/* NPC layer parse information structure */ -struct npc_layer_info_s { - uint32_t lptr : 8; - uint32_t flags : 8; - uint32_t ltype : 4; - uint32_t rsvd_31_20 : 12; -}; - -/* NPC layer mcam search key extract structure */ -struct npc_layer_kex_s { - uint16_t flags : 8; - uint16_t ltype : 4; - uint16_t rsvd_15_12 : 4; -}; - -/* NPC mcam search key x1 structure */ -struct npc_mcam_key_x1_s { - uint64_t intf : 2; - uint64_t rsvd_63_2 : 62; - uint64_t kw0 : 64; /* W1 */ - uint64_t kw1 : 48; - uint64_t rsvd_191_176 : 16; -}; - -/* NPC mcam search key x2 structure */ -struct npc_mcam_key_x2_s { - uint64_t intf : 2; - uint64_t rsvd_63_2 : 62; - uint64_t kw0 : 64; /* W1 */ - uint64_t kw1 : 64; /* W2 */ - uint64_t kw2 : 64; /* W3 */ - uint64_t kw3 : 32; - uint64_t rsvd_319_288 : 32; -}; - -/* NPC mcam search key x4 structure */ -struct npc_mcam_key_x4_s { - uint64_t intf : 2; - uint64_t rsvd_63_2 : 62; - uint64_t kw0 : 64; /* W1 */ - uint64_t kw1 : 64; /* W2 */ - uint64_t kw2 : 64; /* W3 */ - uint64_t kw3 : 64; /* W4 */ - uint64_t kw4 : 64; /* W5 */ - uint64_t kw5 : 64; /* W6 */ - uint64_t kw6 : 64; /* W7 */ -}; - -/* NPC parse key extract structure */ -struct npc_parse_kex_s { - uint64_t chan : 12; - uint64_t errlev : 4; - uint64_t errcode : 8; - uint64_t l2m : 1; - uint64_t l2b : 1; - uint64_t l3m : 1; - uint64_t l3b : 1; - uint64_t la : 12; - uint64_t lb : 12; - uint64_t lc : 12; - uint64_t ld : 12; - uint64_t le : 12; - uint64_t lf : 12; - uint64_t lg : 12; - uint64_t lh : 12; - uint64_t 
rsvd_127_124 : 4; -}; - -/* NPC result structure */ -struct npc_result_s { - uint64_t intf : 2; - uint64_t pkind : 6; - uint64_t chan : 12; - uint64_t errlev : 4; - uint64_t errcode : 8; - uint64_t l2m : 1; - uint64_t l2b : 1; - uint64_t l3m : 1; - uint64_t l3b : 1; - uint64_t eoh_ptr : 8; - uint64_t rsvd_63_44 : 20; - uint64_t action : 64; /* W1 */ - uint64_t vtag_action : 64; /* W2 */ - uint64_t la : 20; - uint64_t lb : 20; - uint64_t lc : 20; - uint64_t rsvd_255_252 : 4; - uint64_t ld : 20; - uint64_t le : 20; - uint64_t lf : 20; - uint64_t rsvd_319_316 : 4; - uint64_t lg : 20; - uint64_t lh : 20; - uint64_t rsvd_383_360 : 24; -}; - -#endif /* __OTX2_NPC_HW_H__ */ diff --git a/drivers/common/octeontx2/hw/otx2_ree.h b/drivers/common/octeontx2/hw/otx2_ree.h deleted file mode 100644 index b7481f125f..0000000000 --- a/drivers/common/octeontx2/hw/otx2_ree.h +++ /dev/null @@ -1,27 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright (C) 2020 Marvell International Ltd. - */ - -#ifndef __OTX2_REE_HW_H__ -#define __OTX2_REE_HW_H__ - -/* REE BAR0*/ -#define REE_AF_REEXM_MAX_MATCH (0x80c8) - -/* REE BAR02 */ -#define REE_LF_MISC_INT (0x300) -#define REE_LF_DONE_INT (0x120) - -#define REE_AF_QUEX_GMCTL(a) (0x800 | (a) << 3) - -#define REE_AF_INT_VEC_RAS (0x0ull) -#define REE_AF_INT_VEC_RVU (0x1ull) -#define REE_AF_INT_VEC_QUE_DONE (0x2ull) -#define REE_AF_INT_VEC_AQ (0x3ull) - -/* ENUMS */ - -#define REE_LF_INT_VEC_QUE_DONE (0x0ull) -#define REE_LF_INT_VEC_MISC (0x1ull) - -#endif /* __OTX2_REE_HW_H__*/ diff --git a/drivers/common/octeontx2/hw/otx2_rvu.h b/drivers/common/octeontx2/hw/otx2_rvu.h deleted file mode 100644 index b98dbcb1cd..0000000000 --- a/drivers/common/octeontx2/hw/otx2_rvu.h +++ /dev/null @@ -1,219 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2019 Marvell International Ltd. 
- */ - -#ifndef __OTX2_RVU_HW_H__ -#define __OTX2_RVU_HW_H__ - -/* Register offsets */ - -#define RVU_AF_MSIXTR_BASE (0x10ull) -#define RVU_AF_BLK_RST (0x30ull) -#define RVU_AF_PF_BAR4_ADDR (0x40ull) -#define RVU_AF_RAS (0x100ull) -#define RVU_AF_RAS_W1S (0x108ull) -#define RVU_AF_RAS_ENA_W1S (0x110ull) -#define RVU_AF_RAS_ENA_W1C (0x118ull) -#define RVU_AF_GEN_INT (0x120ull) -#define RVU_AF_GEN_INT_W1S (0x128ull) -#define RVU_AF_GEN_INT_ENA_W1S (0x130ull) -#define RVU_AF_GEN_INT_ENA_W1C (0x138ull) -#define RVU_AF_AFPFX_MBOXX(a, b) \ - (0x2000ull | (uint64_t)(a) << 4 | (uint64_t)(b) << 3) -#define RVU_AF_PFME_STATUS (0x2800ull) -#define RVU_AF_PFTRPEND (0x2810ull) -#define RVU_AF_PFTRPEND_W1S (0x2820ull) -#define RVU_AF_PF_RST (0x2840ull) -#define RVU_AF_HWVF_RST (0x2850ull) -#define RVU_AF_PFAF_MBOX_INT (0x2880ull) -#define RVU_AF_PFAF_MBOX_INT_W1S (0x2888ull) -#define RVU_AF_PFAF_MBOX_INT_ENA_W1S (0x2890ull) -#define RVU_AF_PFAF_MBOX_INT_ENA_W1C (0x2898ull) -#define RVU_AF_PFFLR_INT (0x28a0ull) -#define RVU_AF_PFFLR_INT_W1S (0x28a8ull) -#define RVU_AF_PFFLR_INT_ENA_W1S (0x28b0ull) -#define RVU_AF_PFFLR_INT_ENA_W1C (0x28b8ull) -#define RVU_AF_PFME_INT (0x28c0ull) -#define RVU_AF_PFME_INT_W1S (0x28c8ull) -#define RVU_AF_PFME_INT_ENA_W1S (0x28d0ull) -#define RVU_AF_PFME_INT_ENA_W1C (0x28d8ull) -#define RVU_PRIV_CONST (0x8000000ull) -#define RVU_PRIV_GEN_CFG (0x8000010ull) -#define RVU_PRIV_CLK_CFG (0x8000020ull) -#define RVU_PRIV_ACTIVE_PC (0x8000030ull) -#define RVU_PRIV_PFX_CFG(a) (0x8000100ull | (uint64_t)(a) << 16) -#define RVU_PRIV_PFX_MSIX_CFG(a) (0x8000110ull | (uint64_t)(a) << 16) -#define RVU_PRIV_PFX_ID_CFG(a) (0x8000120ull | (uint64_t)(a) << 16) -#define RVU_PRIV_PFX_INT_CFG(a) (0x8000200ull | (uint64_t)(a) << 16) -#define RVU_PRIV_PFX_NIXX_CFG(a, b) \ - (0x8000300ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3) -#define RVU_PRIV_PFX_NPA_CFG(a) (0x8000310ull | (uint64_t)(a) << 16) -#define RVU_PRIV_PFX_SSO_CFG(a) (0x8000320ull | (uint64_t)(a) << 16) 
-#define RVU_PRIV_PFX_SSOW_CFG(a) (0x8000330ull | (uint64_t)(a) << 16) -#define RVU_PRIV_PFX_TIM_CFG(a) (0x8000340ull | (uint64_t)(a) << 16) -#define RVU_PRIV_PFX_CPTX_CFG(a, b) \ - (0x8000350ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3) -#define RVU_PRIV_BLOCK_TYPEX_REV(a) (0x8000400ull | (uint64_t)(a) << 3) -#define RVU_PRIV_HWVFX_INT_CFG(a) (0x8001280ull | (uint64_t)(a) << 16) -#define RVU_PRIV_HWVFX_NIXX_CFG(a, b) \ - (0x8001300ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3) -#define RVU_PRIV_HWVFX_NPA_CFG(a) (0x8001310ull | (uint64_t)(a) << 16) -#define RVU_PRIV_HWVFX_SSO_CFG(a) (0x8001320ull | (uint64_t)(a) << 16) -#define RVU_PRIV_HWVFX_SSOW_CFG(a) (0x8001330ull | (uint64_t)(a) << 16) -#define RVU_PRIV_HWVFX_TIM_CFG(a) (0x8001340ull | (uint64_t)(a) << 16) -#define RVU_PRIV_HWVFX_CPTX_CFG(a, b) \ - (0x8001350ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3) - -#define RVU_PF_VFX_PFVF_MBOXX(a, b) \ - (0x0ull | (uint64_t)(a) << 12 | (uint64_t)(b) << 3) -#define RVU_PF_VF_BAR4_ADDR (0x10ull) -#define RVU_PF_BLOCK_ADDRX_DISC(a) (0x200ull | (uint64_t)(a) << 3) -#define RVU_PF_VFME_STATUSX(a) (0x800ull | (uint64_t)(a) << 3) -#define RVU_PF_VFTRPENDX(a) (0x820ull | (uint64_t)(a) << 3) -#define RVU_PF_VFTRPEND_W1SX(a) (0x840ull | (uint64_t)(a) << 3) -#define RVU_PF_VFPF_MBOX_INTX(a) (0x880ull | (uint64_t)(a) << 3) -#define RVU_PF_VFPF_MBOX_INT_W1SX(a) (0x8a0ull | (uint64_t)(a) << 3) -#define RVU_PF_VFPF_MBOX_INT_ENA_W1SX(a) (0x8c0ull | (uint64_t)(a) << 3) -#define RVU_PF_VFPF_MBOX_INT_ENA_W1CX(a) (0x8e0ull | (uint64_t)(a) << 3) -#define RVU_PF_VFFLR_INTX(a) (0x900ull | (uint64_t)(a) << 3) -#define RVU_PF_VFFLR_INT_W1SX(a) (0x920ull | (uint64_t)(a) << 3) -#define RVU_PF_VFFLR_INT_ENA_W1SX(a) (0x940ull | (uint64_t)(a) << 3) -#define RVU_PF_VFFLR_INT_ENA_W1CX(a) (0x960ull | (uint64_t)(a) << 3) -#define RVU_PF_VFME_INTX(a) (0x980ull | (uint64_t)(a) << 3) -#define RVU_PF_VFME_INT_W1SX(a) (0x9a0ull | (uint64_t)(a) << 3) -#define RVU_PF_VFME_INT_ENA_W1SX(a) (0x9c0ull | 
(uint64_t)(a) << 3) -#define RVU_PF_VFME_INT_ENA_W1CX(a) (0x9e0ull | (uint64_t)(a) << 3) -#define RVU_PF_PFAF_MBOXX(a) (0xc00ull | (uint64_t)(a) << 3) -#define RVU_PF_INT (0xc20ull) -#define RVU_PF_INT_W1S (0xc28ull) -#define RVU_PF_INT_ENA_W1S (0xc30ull) -#define RVU_PF_INT_ENA_W1C (0xc38ull) -#define RVU_PF_MSIX_VECX_ADDR(a) (0x80000ull | (uint64_t)(a) << 4) -#define RVU_PF_MSIX_VECX_CTL(a) (0x80008ull | (uint64_t)(a) << 4) -#define RVU_PF_MSIX_PBAX(a) (0xf0000ull | (uint64_t)(a) << 3) -#define RVU_VF_VFPF_MBOXX(a) (0x0ull | (uint64_t)(a) << 3) -#define RVU_VF_INT (0x20ull) -#define RVU_VF_INT_W1S (0x28ull) -#define RVU_VF_INT_ENA_W1S (0x30ull) -#define RVU_VF_INT_ENA_W1C (0x38ull) -#define RVU_VF_BLOCK_ADDRX_DISC(a) (0x200ull | (uint64_t)(a) << 3) -#define RVU_VF_MSIX_VECX_ADDR(a) (0x80000ull | (uint64_t)(a) << 4) -#define RVU_VF_MSIX_VECX_CTL(a) (0x80008ull | (uint64_t)(a) << 4) -#define RVU_VF_MSIX_PBAX(a) (0xf0000ull | (uint64_t)(a) << 3) - - -/* Enum offsets */ - -#define RVU_BAR_RVU_PF_END_BAR0 (0x84f000000000ull) -#define RVU_BAR_RVU_PF_START_BAR0 (0x840000000000ull) -#define RVU_BAR_RVU_PFX_FUNCX_BAR2(a, b) \ - (0x840200000000ull | ((uint64_t)(a) << 36) | ((uint64_t)(b) << 25)) - -#define RVU_AF_INT_VEC_POISON (0x0ull) -#define RVU_AF_INT_VEC_PFFLR (0x1ull) -#define RVU_AF_INT_VEC_PFME (0x2ull) -#define RVU_AF_INT_VEC_GEN (0x3ull) -#define RVU_AF_INT_VEC_MBOX (0x4ull) - -#define RVU_BLOCK_TYPE_RVUM (0x0ull) -#define RVU_BLOCK_TYPE_LMT (0x2ull) -#define RVU_BLOCK_TYPE_NIX (0x3ull) -#define RVU_BLOCK_TYPE_NPA (0x4ull) -#define RVU_BLOCK_TYPE_NPC (0x5ull) -#define RVU_BLOCK_TYPE_SSO (0x6ull) -#define RVU_BLOCK_TYPE_SSOW (0x7ull) -#define RVU_BLOCK_TYPE_TIM (0x8ull) -#define RVU_BLOCK_TYPE_CPT (0x9ull) -#define RVU_BLOCK_TYPE_NDC (0xaull) -#define RVU_BLOCK_TYPE_DDF (0xbull) -#define RVU_BLOCK_TYPE_ZIP (0xcull) -#define RVU_BLOCK_TYPE_RAD (0xdull) -#define RVU_BLOCK_TYPE_DFA (0xeull) -#define RVU_BLOCK_TYPE_HNA (0xfull) -#define RVU_BLOCK_TYPE_REE (0xeull) - 
-#define RVU_BLOCK_ADDR_RVUM (0x0ull) -#define RVU_BLOCK_ADDR_LMT (0x1ull) -#define RVU_BLOCK_ADDR_NPA (0x3ull) -#define RVU_BLOCK_ADDR_NIX0 (0x4ull) -#define RVU_BLOCK_ADDR_NIX1 (0x5ull) -#define RVU_BLOCK_ADDR_NPC (0x6ull) -#define RVU_BLOCK_ADDR_SSO (0x7ull) -#define RVU_BLOCK_ADDR_SSOW (0x8ull) -#define RVU_BLOCK_ADDR_TIM (0x9ull) -#define RVU_BLOCK_ADDR_CPT0 (0xaull) -#define RVU_BLOCK_ADDR_CPT1 (0xbull) -#define RVU_BLOCK_ADDR_NDC0 (0xcull) -#define RVU_BLOCK_ADDR_NDC1 (0xdull) -#define RVU_BLOCK_ADDR_NDC2 (0xeull) -#define RVU_BLOCK_ADDR_R_END (0x1full) -#define RVU_BLOCK_ADDR_R_START (0x14ull) -#define RVU_BLOCK_ADDR_REE0 (0x14ull) -#define RVU_BLOCK_ADDR_REE1 (0x15ull) - -#define RVU_VF_INT_VEC_MBOX (0x0ull) - -#define RVU_PF_INT_VEC_AFPF_MBOX (0x6ull) -#define RVU_PF_INT_VEC_VFFLR0 (0x0ull) -#define RVU_PF_INT_VEC_VFFLR1 (0x1ull) -#define RVU_PF_INT_VEC_VFME0 (0x2ull) -#define RVU_PF_INT_VEC_VFME1 (0x3ull) -#define RVU_PF_INT_VEC_VFPF_MBOX0 (0x4ull) -#define RVU_PF_INT_VEC_VFPF_MBOX1 (0x5ull) - - -#define AF_BAR2_ALIASX_SIZE (0x100000ull) - -#define TIM_AF_BAR2_SEL (0x9000000ull) -#define SSO_AF_BAR2_SEL (0x9000000ull) -#define NIX_AF_BAR2_SEL (0x9000000ull) -#define SSOW_AF_BAR2_SEL (0x9000000ull) -#define NPA_AF_BAR2_SEL (0x9000000ull) -#define CPT_AF_BAR2_SEL (0x9000000ull) -#define RVU_AF_BAR2_SEL (0x9000000ull) -#define REE_AF_BAR2_SEL (0x9000000ull) - -#define AF_BAR2_ALIASX(a, b) \ - (0x9100000ull | (uint64_t)(a) << 12 | (uint64_t)(b)) -#define TIM_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b) -#define SSO_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b) -#define NIX_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(0, b) -#define SSOW_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b) -#define NPA_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(0, b) -#define CPT_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b) -#define RVU_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b) -#define REE_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b) - -/* Structures definitions */ - -/* RVU admin function register address 
structure */ -struct rvu_af_addr_s { - uint64_t addr : 28; - uint64_t block : 5; - uint64_t rsvd_63_33 : 31; -}; - -/* RVU function-unique address structure */ -struct rvu_func_addr_s { - uint32_t addr : 12; - uint32_t lf_slot : 8; - uint32_t block : 5; - uint32_t rsvd_31_25 : 7; -}; - -/* RVU msi-x vector structure */ -struct rvu_msix_vec_s { - uint64_t addr : 64; /* W0 */ - uint64_t data : 32; - uint64_t mask : 1; - uint64_t pend : 1; - uint64_t rsvd_127_98 : 30; -}; - -/* RVU pf function identification structure */ -struct rvu_pf_func_s { - uint16_t func : 10; - uint16_t pf : 6; -}; - -#endif /* __OTX2_RVU_HW_H__ */ diff --git a/drivers/common/octeontx2/hw/otx2_sdp.h b/drivers/common/octeontx2/hw/otx2_sdp.h deleted file mode 100644 index 1e690f8b32..0000000000 --- a/drivers/common/octeontx2/hw/otx2_sdp.h +++ /dev/null @@ -1,184 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2019 Marvell International Ltd. - */ - -#ifndef __OTX2_SDP_HW_H_ -#define __OTX2_SDP_HW_H_ - -/* SDP VF IOQs */ -#define SDP_MIN_RINGS_PER_VF (1) -#define SDP_MAX_RINGS_PER_VF (8) - -/* SDP VF IQ configuration */ -#define SDP_VF_MAX_IQ_DESCRIPTORS (512) -#define SDP_VF_MIN_IQ_DESCRIPTORS (128) - -#define SDP_VF_DB_MIN (1) -#define SDP_VF_DB_TIMEOUT (1) -#define SDP_VF_INTR_THRESHOLD (0xFFFFFFFF) - -#define SDP_VF_64BYTE_INSTR (64) -#define SDP_VF_32BYTE_INSTR (32) - -/* SDP VF OQ configuration */ -#define SDP_VF_MAX_OQ_DESCRIPTORS (512) -#define SDP_VF_MIN_OQ_DESCRIPTORS (128) -#define SDP_VF_OQ_BUF_SIZE (2048) -#define SDP_VF_OQ_REFIL_THRESHOLD (16) - -#define SDP_VF_OQ_INFOPTR_MODE (1) -#define SDP_VF_OQ_BUFPTR_MODE (0) - -#define SDP_VF_OQ_INTR_PKT (1) -#define SDP_VF_OQ_INTR_TIME (10) -#define SDP_VF_CFG_IO_QUEUES SDP_MAX_RINGS_PER_VF - -/* Wait time in milliseconds for FLR */ -#define SDP_VF_PCI_FLR_WAIT (100) -#define SDP_VF_BUSY_LOOP_COUNT (10000) - -#define SDP_VF_MAX_IO_QUEUES SDP_MAX_RINGS_PER_VF -#define SDP_VF_MIN_IO_QUEUES SDP_MIN_RINGS_PER_VF - -/* SDP VF 
IOQs per rawdev */ -#define SDP_VF_MAX_IOQS_PER_RAWDEV SDP_VF_MAX_IO_QUEUES -#define SDP_VF_DEFAULT_IOQS_PER_RAWDEV SDP_VF_MIN_IO_QUEUES - -/* SDP VF Register definitions */ -#define SDP_VF_RING_OFFSET (0x1ull << 17) - -/* SDP VF IQ Registers */ -#define SDP_VF_R_IN_CONTROL_START (0x10000) -#define SDP_VF_R_IN_ENABLE_START (0x10010) -#define SDP_VF_R_IN_INSTR_BADDR_START (0x10020) -#define SDP_VF_R_IN_INSTR_RSIZE_START (0x10030) -#define SDP_VF_R_IN_INSTR_DBELL_START (0x10040) -#define SDP_VF_R_IN_CNTS_START (0x10050) -#define SDP_VF_R_IN_INT_LEVELS_START (0x10060) -#define SDP_VF_R_IN_PKT_CNT_START (0x10080) -#define SDP_VF_R_IN_BYTE_CNT_START (0x10090) - -#define SDP_VF_R_IN_CONTROL(ring) \ - (SDP_VF_R_IN_CONTROL_START + ((ring) * SDP_VF_RING_OFFSET)) - -#define SDP_VF_R_IN_ENABLE(ring) \ - (SDP_VF_R_IN_ENABLE_START + ((ring) * SDP_VF_RING_OFFSET)) - -#define SDP_VF_R_IN_INSTR_BADDR(ring) \ - (SDP_VF_R_IN_INSTR_BADDR_START + ((ring) * SDP_VF_RING_OFFSET)) - -#define SDP_VF_R_IN_INSTR_RSIZE(ring) \ - (SDP_VF_R_IN_INSTR_RSIZE_START + ((ring) * SDP_VF_RING_OFFSET)) - -#define SDP_VF_R_IN_INSTR_DBELL(ring) \ - (SDP_VF_R_IN_INSTR_DBELL_START + ((ring) * SDP_VF_RING_OFFSET)) - -#define SDP_VF_R_IN_CNTS(ring) \ - (SDP_VF_R_IN_CNTS_START + ((ring) * SDP_VF_RING_OFFSET)) - -#define SDP_VF_R_IN_INT_LEVELS(ring) \ - (SDP_VF_R_IN_INT_LEVELS_START + ((ring) * SDP_VF_RING_OFFSET)) - -#define SDP_VF_R_IN_PKT_CNT(ring) \ - (SDP_VF_R_IN_PKT_CNT_START + ((ring) * SDP_VF_RING_OFFSET)) - -#define SDP_VF_R_IN_BYTE_CNT(ring) \ - (SDP_VF_R_IN_BYTE_CNT_START + ((ring) * SDP_VF_RING_OFFSET)) - -/* SDP VF IQ Masks */ -#define SDP_VF_R_IN_CTL_RPVF_MASK (0xF) -#define SDP_VF_R_IN_CTL_RPVF_POS (48) - -#define SDP_VF_R_IN_CTL_IDLE (0x1ull << 28) -#define SDP_VF_R_IN_CTL_RDSIZE (0x3ull << 25) /* Setting to max(4) */ -#define SDP_VF_R_IN_CTL_IS_64B (0x1ull << 24) -#define SDP_VF_R_IN_CTL_D_NSR (0x1ull << 8) -#define SDP_VF_R_IN_CTL_D_ESR (0x1ull << 6) -#define SDP_VF_R_IN_CTL_D_ROR (0x1ull << 
5) -#define SDP_VF_R_IN_CTL_NSR (0x1ull << 3) -#define SDP_VF_R_IN_CTL_ESR (0x1ull << 1) -#define SDP_VF_R_IN_CTL_ROR (0x1ull << 0) - -#define SDP_VF_R_IN_CTL_MASK \ - (SDP_VF_R_IN_CTL_RDSIZE | SDP_VF_R_IN_CTL_IS_64B) - -/* SDP VF OQ Registers */ -#define SDP_VF_R_OUT_CNTS_START (0x10100) -#define SDP_VF_R_OUT_INT_LEVELS_START (0x10110) -#define SDP_VF_R_OUT_SLIST_BADDR_START (0x10120) -#define SDP_VF_R_OUT_SLIST_RSIZE_START (0x10130) -#define SDP_VF_R_OUT_SLIST_DBELL_START (0x10140) -#define SDP_VF_R_OUT_CONTROL_START (0x10150) -#define SDP_VF_R_OUT_ENABLE_START (0x10160) -#define SDP_VF_R_OUT_PKT_CNT_START (0x10180) -#define SDP_VF_R_OUT_BYTE_CNT_START (0x10190) - -#define SDP_VF_R_OUT_CONTROL(ring) \ - (SDP_VF_R_OUT_CONTROL_START + ((ring) * SDP_VF_RING_OFFSET)) - -#define SDP_VF_R_OUT_ENABLE(ring) \ - (SDP_VF_R_OUT_ENABLE_START + ((ring) * SDP_VF_RING_OFFSET)) - -#define SDP_VF_R_OUT_SLIST_BADDR(ring) \ - (SDP_VF_R_OUT_SLIST_BADDR_START + ((ring) * SDP_VF_RING_OFFSET)) - -#define SDP_VF_R_OUT_SLIST_RSIZE(ring) \ - (SDP_VF_R_OUT_SLIST_RSIZE_START + ((ring) * SDP_VF_RING_OFFSET)) - -#define SDP_VF_R_OUT_SLIST_DBELL(ring) \ - (SDP_VF_R_OUT_SLIST_DBELL_START + ((ring) * SDP_VF_RING_OFFSET)) - -#define SDP_VF_R_OUT_CNTS(ring) \ - (SDP_VF_R_OUT_CNTS_START + ((ring) * SDP_VF_RING_OFFSET)) - -#define SDP_VF_R_OUT_INT_LEVELS(ring) \ - (SDP_VF_R_OUT_INT_LEVELS_START + ((ring) * SDP_VF_RING_OFFSET)) - -#define SDP_VF_R_OUT_PKT_CNT(ring) \ - (SDP_VF_R_OUT_PKT_CNT_START + ((ring) * SDP_VF_RING_OFFSET)) - -#define SDP_VF_R_OUT_BYTE_CNT(ring) \ - (SDP_VF_R_OUT_BYTE_CNT_START + ((ring) * SDP_VF_RING_OFFSET)) - -/* SDP VF OQ Masks */ -#define SDP_VF_R_OUT_CTL_IDLE (1ull << 40) -#define SDP_VF_R_OUT_CTL_ES_I (1ull << 34) -#define SDP_VF_R_OUT_CTL_NSR_I (1ull << 33) -#define SDP_VF_R_OUT_CTL_ROR_I (1ull << 32) -#define SDP_VF_R_OUT_CTL_ES_D (1ull << 30) -#define SDP_VF_R_OUT_CTL_NSR_D (1ull << 29) -#define SDP_VF_R_OUT_CTL_ROR_D (1ull << 28) -#define SDP_VF_R_OUT_CTL_ES_P (1ull 
<< 26) -#define SDP_VF_R_OUT_CTL_NSR_P (1ull << 25) -#define SDP_VF_R_OUT_CTL_ROR_P (1ull << 24) -#define SDP_VF_R_OUT_CTL_IMODE (1ull << 23) - -#define SDP_VF_R_OUT_INT_LEVELS_BMODE (1ull << 63) -#define SDP_VF_R_OUT_INT_LEVELS_TIMET (32) - -/* SDP Instruction Header */ -struct sdp_instr_ih { - /* Data Len */ - uint64_t tlen:16; - - /* Reserved1 */ - uint64_t rsvd1:20; - - /* PKIND for SDP */ - uint64_t pkind:6; - - /* Front Data size */ - uint64_t fsz:6; - - /* No. of entries in gather list */ - uint64_t gsz:14; - - /* Gather indicator */ - uint64_t gather:1; - - /* Reserved2 */ - uint64_t rsvd2:1; -} __rte_packed; - -#endif /* __OTX2_SDP_HW_H_ */ - diff --git a/drivers/common/octeontx2/hw/otx2_sso.h b/drivers/common/octeontx2/hw/otx2_sso.h deleted file mode 100644 index 98a8130b16..0000000000 --- a/drivers/common/octeontx2/hw/otx2_sso.h +++ /dev/null @@ -1,209 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2019 Marvell International Ltd. - */ - -#ifndef __OTX2_SSO_HW_H__ -#define __OTX2_SSO_HW_H__ - -/* Register offsets */ - -#define SSO_AF_CONST (0x1000ull) -#define SSO_AF_CONST1 (0x1008ull) -#define SSO_AF_WQ_INT_PC (0x1020ull) -#define SSO_AF_NOS_CNT (0x1050ull) -#define SSO_AF_AW_WE (0x1080ull) -#define SSO_AF_WS_CFG (0x1088ull) -#define SSO_AF_GWE_CFG (0x1098ull) -#define SSO_AF_GWE_RANDOM (0x10b0ull) -#define SSO_AF_LF_HWGRP_RST (0x10e0ull) -#define SSO_AF_AW_CFG (0x10f0ull) -#define SSO_AF_BLK_RST (0x10f8ull) -#define SSO_AF_ACTIVE_CYCLES0 (0x1100ull) -#define SSO_AF_ACTIVE_CYCLES1 (0x1108ull) -#define SSO_AF_ACTIVE_CYCLES2 (0x1110ull) -#define SSO_AF_ERR0 (0x1220ull) -#define SSO_AF_ERR0_W1S (0x1228ull) -#define SSO_AF_ERR0_ENA_W1C (0x1230ull) -#define SSO_AF_ERR0_ENA_W1S (0x1238ull) -#define SSO_AF_ERR2 (0x1260ull) -#define SSO_AF_ERR2_W1S (0x1268ull) -#define SSO_AF_ERR2_ENA_W1C (0x1270ull) -#define SSO_AF_ERR2_ENA_W1S (0x1278ull) -#define SSO_AF_UNMAP_INFO (0x12f0ull) -#define SSO_AF_UNMAP_INFO2 (0x1300ull) -#define 
SSO_AF_UNMAP_INFO3 (0x1310ull) -#define SSO_AF_RAS (0x1420ull) -#define SSO_AF_RAS_W1S (0x1430ull) -#define SSO_AF_RAS_ENA_W1C (0x1460ull) -#define SSO_AF_RAS_ENA_W1S (0x1470ull) -#define SSO_AF_AW_INP_CTL (0x2070ull) -#define SSO_AF_AW_ADD (0x2080ull) -#define SSO_AF_AW_READ_ARB (0x2090ull) -#define SSO_AF_XAQ_REQ_PC (0x20b0ull) -#define SSO_AF_XAQ_LATENCY_PC (0x20b8ull) -#define SSO_AF_TAQ_CNT (0x20c0ull) -#define SSO_AF_TAQ_ADD (0x20e0ull) -#define SSO_AF_POISONX(a) (0x2100ull | (uint64_t)(a) << 3) -#define SSO_AF_POISONX_W1S(a) (0x2200ull | (uint64_t)(a) << 3) -#define SSO_PRIV_AF_INT_CFG (0x3000ull) -#define SSO_AF_RVU_LF_CFG_DEBUG (0x3800ull) -#define SSO_PRIV_LFX_HWGRP_CFG(a) (0x10000ull | (uint64_t)(a) << 3) -#define SSO_PRIV_LFX_HWGRP_INT_CFG(a) (0x20000ull | (uint64_t)(a) << 3) -#define SSO_AF_IU_ACCNTX_CFG(a) (0x50000ull | (uint64_t)(a) << 3) -#define SSO_AF_IU_ACCNTX_RST(a) (0x60000ull | (uint64_t)(a) << 3) -#define SSO_AF_XAQX_HEAD_PTR(a) (0x80000ull | (uint64_t)(a) << 3) -#define SSO_AF_XAQX_TAIL_PTR(a) (0x90000ull | (uint64_t)(a) << 3) -#define SSO_AF_XAQX_HEAD_NEXT(a) (0xa0000ull | (uint64_t)(a) << 3) -#define SSO_AF_XAQX_TAIL_NEXT(a) (0xb0000ull | (uint64_t)(a) << 3) -#define SSO_AF_TIAQX_STATUS(a) (0xc0000ull | (uint64_t)(a) << 3) -#define SSO_AF_TOAQX_STATUS(a) (0xd0000ull | (uint64_t)(a) << 3) -#define SSO_AF_XAQX_GMCTL(a) (0xe0000ull | (uint64_t)(a) << 3) -#define SSO_AF_HWGRPX_IAQ_THR(a) (0x200000ull | (uint64_t)(a) << 12) -#define SSO_AF_HWGRPX_TAQ_THR(a) (0x200010ull | (uint64_t)(a) << 12) -#define SSO_AF_HWGRPX_PRI(a) (0x200020ull | (uint64_t)(a) << 12) -#define SSO_AF_HWGRPX_WS_PC(a) (0x200050ull | (uint64_t)(a) << 12) -#define SSO_AF_HWGRPX_EXT_PC(a) (0x200060ull | (uint64_t)(a) << 12) -#define SSO_AF_HWGRPX_WA_PC(a) (0x200070ull | (uint64_t)(a) << 12) -#define SSO_AF_HWGRPX_TS_PC(a) (0x200080ull | (uint64_t)(a) << 12) -#define SSO_AF_HWGRPX_DS_PC(a) (0x200090ull | (uint64_t)(a) << 12) -#define SSO_AF_HWGRPX_DQ_PC(a) (0x2000A0ull | 
(uint64_t)(a) << 12) -#define SSO_AF_HWGRPX_PAGE_CNT(a) (0x200100ull | (uint64_t)(a) << 12) -#define SSO_AF_HWGRPX_AW_STATUS(a) (0x200110ull | (uint64_t)(a) << 12) -#define SSO_AF_HWGRPX_AW_CFG(a) (0x200120ull | (uint64_t)(a) << 12) -#define SSO_AF_HWGRPX_AW_TAGSPACE(a) (0x200130ull | (uint64_t)(a) << 12) -#define SSO_AF_HWGRPX_XAQ_AURA(a) (0x200140ull | (uint64_t)(a) << 12) -#define SSO_AF_HWGRPX_XAQ_LIMIT(a) (0x200220ull | (uint64_t)(a) << 12) -#define SSO_AF_HWGRPX_IU_ACCNT(a) (0x200230ull | (uint64_t)(a) << 12) -#define SSO_AF_HWSX_ARB(a) (0x400100ull | (uint64_t)(a) << 12) -#define SSO_AF_HWSX_INV(a) (0x400180ull | (uint64_t)(a) << 12) -#define SSO_AF_HWSX_GMCTL(a) (0x400200ull | (uint64_t)(a) << 12) -#define SSO_AF_HWSX_SX_GRPMSKX(a, b, c) \ - (0x400400ull | (uint64_t)(a) << 12 | (uint64_t)(b) << 5 | \ - (uint64_t)(c) << 3) -#define SSO_AF_IPL_FREEX(a) (0x800000ull | (uint64_t)(a) << 3) -#define SSO_AF_IPL_IAQX(a) (0x840000ull | (uint64_t)(a) << 3) -#define SSO_AF_IPL_DESCHEDX(a) (0x860000ull | (uint64_t)(a) << 3) -#define SSO_AF_IPL_CONFX(a) (0x880000ull | (uint64_t)(a) << 3) -#define SSO_AF_NPA_DIGESTX(a) (0x900000ull | (uint64_t)(a) << 3) -#define SSO_AF_NPA_DIGESTX_W1S(a) (0x900100ull | (uint64_t)(a) << 3) -#define SSO_AF_BFP_DIGESTX(a) (0x900200ull | (uint64_t)(a) << 3) -#define SSO_AF_BFP_DIGESTX_W1S(a) (0x900300ull | (uint64_t)(a) << 3) -#define SSO_AF_BFPN_DIGESTX(a) (0x900400ull | (uint64_t)(a) << 3) -#define SSO_AF_BFPN_DIGESTX_W1S(a) (0x900500ull | (uint64_t)(a) << 3) -#define SSO_AF_GRPDIS_DIGESTX(a) (0x900600ull | (uint64_t)(a) << 3) -#define SSO_AF_GRPDIS_DIGESTX_W1S(a) (0x900700ull | (uint64_t)(a) << 3) -#define SSO_AF_AWEMPTY_DIGESTX(a) (0x900800ull | (uint64_t)(a) << 3) -#define SSO_AF_AWEMPTY_DIGESTX_W1S(a) (0x900900ull | (uint64_t)(a) << 3) -#define SSO_AF_WQP0_DIGESTX(a) (0x900a00ull | (uint64_t)(a) << 3) -#define SSO_AF_WQP0_DIGESTX_W1S(a) (0x900b00ull | (uint64_t)(a) << 3) -#define SSO_AF_AW_DROPPED_DIGESTX(a) (0x900c00ull | 
(uint64_t)(a) << 3) -#define SSO_AF_AW_DROPPED_DIGESTX_W1S(a) (0x900d00ull | (uint64_t)(a) << 3) -#define SSO_AF_QCTLDIS_DIGESTX(a) (0x900e00ull | (uint64_t)(a) << 3) -#define SSO_AF_QCTLDIS_DIGESTX_W1S(a) (0x900f00ull | (uint64_t)(a) << 3) -#define SSO_AF_XAQDIS_DIGESTX(a) (0x901000ull | (uint64_t)(a) << 3) -#define SSO_AF_XAQDIS_DIGESTX_W1S(a) (0x901100ull | (uint64_t)(a) << 3) -#define SSO_AF_FLR_AQ_DIGESTX(a) (0x901200ull | (uint64_t)(a) << 3) -#define SSO_AF_FLR_AQ_DIGESTX_W1S(a) (0x901300ull | (uint64_t)(a) << 3) -#define SSO_AF_WS_GMULTI_DIGESTX(a) (0x902000ull | (uint64_t)(a) << 3) -#define SSO_AF_WS_GMULTI_DIGESTX_W1S(a) (0x902100ull | (uint64_t)(a) << 3) -#define SSO_AF_WS_GUNMAP_DIGESTX(a) (0x902200ull | (uint64_t)(a) << 3) -#define SSO_AF_WS_GUNMAP_DIGESTX_W1S(a) (0x902300ull | (uint64_t)(a) << 3) -#define SSO_AF_WS_AWE_DIGESTX(a) (0x902400ull | (uint64_t)(a) << 3) -#define SSO_AF_WS_AWE_DIGESTX_W1S(a) (0x902500ull | (uint64_t)(a) << 3) -#define SSO_AF_WS_GWI_DIGESTX(a) (0x902600ull | (uint64_t)(a) << 3) -#define SSO_AF_WS_GWI_DIGESTX_W1S(a) (0x902700ull | (uint64_t)(a) << 3) -#define SSO_AF_WS_NE_DIGESTX(a) (0x902800ull | (uint64_t)(a) << 3) -#define SSO_AF_WS_NE_DIGESTX_W1S(a) (0x902900ull | (uint64_t)(a) << 3) -#define SSO_AF_IENTX_TAG(a) (0xa00000ull | (uint64_t)(a) << 3) -#define SSO_AF_IENTX_GRP(a) (0xa20000ull | (uint64_t)(a) << 3) -#define SSO_AF_IENTX_PENDTAG(a) (0xa40000ull | (uint64_t)(a) << 3) -#define SSO_AF_IENTX_LINKS(a) (0xa60000ull | (uint64_t)(a) << 3) -#define SSO_AF_IENTX_QLINKS(a) (0xa80000ull | (uint64_t)(a) << 3) -#define SSO_AF_IENTX_WQP(a) (0xaa0000ull | (uint64_t)(a) << 3) -#define SSO_AF_TAQX_LINK(a) (0xc00000ull | (uint64_t)(a) << 3) -#define SSO_AF_TAQX_WAEX_TAG(a, b) \ - (0xe00000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4) -#define SSO_AF_TAQX_WAEX_WQP(a, b) \ - (0xe00008ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4) - -#define SSO_LF_GGRP_OP_ADD_WORK0 (0x0ull) -#define SSO_LF_GGRP_OP_ADD_WORK1 (0x8ull) -#define 
SSO_LF_GGRP_QCTL (0x20ull) -#define SSO_LF_GGRP_EXE_DIS (0x80ull) -#define SSO_LF_GGRP_INT (0x100ull) -#define SSO_LF_GGRP_INT_W1S (0x108ull) -#define SSO_LF_GGRP_INT_ENA_W1S (0x110ull) -#define SSO_LF_GGRP_INT_ENA_W1C (0x118ull) -#define SSO_LF_GGRP_INT_THR (0x140ull) -#define SSO_LF_GGRP_INT_CNT (0x180ull) -#define SSO_LF_GGRP_XAQ_CNT (0x1b0ull) -#define SSO_LF_GGRP_AQ_CNT (0x1c0ull) -#define SSO_LF_GGRP_AQ_THR (0x1e0ull) -#define SSO_LF_GGRP_MISC_CNT (0x200ull) - -#define SSO_AF_IAQ_FREE_CNT_MASK 0x3FFFull -#define SSO_AF_IAQ_RSVD_FREE_MASK 0x3FFFull -#define SSO_AF_IAQ_RSVD_FREE_SHIFT 16 -#define SSO_AF_IAQ_FREE_CNT_MAX SSO_AF_IAQ_FREE_CNT_MASK -#define SSO_AF_AW_ADD_RSVD_FREE_MASK 0x3FFFull -#define SSO_AF_AW_ADD_RSVD_FREE_SHIFT 16 -#define SSO_HWGRP_IAQ_MAX_THR_MASK 0x3FFFull -#define SSO_HWGRP_IAQ_RSVD_THR_MASK 0x3FFFull -#define SSO_HWGRP_IAQ_MAX_THR_SHIFT 32 -#define SSO_HWGRP_IAQ_RSVD_THR 0x2 - -#define SSO_AF_TAQ_FREE_CNT_MASK 0x7FFull -#define SSO_AF_TAQ_RSVD_FREE_MASK 0x7FFull -#define SSO_AF_TAQ_RSVD_FREE_SHIFT 16 -#define SSO_AF_TAQ_FREE_CNT_MAX SSO_AF_TAQ_FREE_CNT_MASK -#define SSO_AF_TAQ_ADD_RSVD_FREE_MASK 0x1FFFull -#define SSO_AF_TAQ_ADD_RSVD_FREE_SHIFT 16 -#define SSO_HWGRP_TAQ_MAX_THR_MASK 0x7FFull -#define SSO_HWGRP_TAQ_RSVD_THR_MASK 0x7FFull -#define SSO_HWGRP_TAQ_MAX_THR_SHIFT 32 -#define SSO_HWGRP_TAQ_RSVD_THR 0x3 - -#define SSO_HWGRP_PRI_AFF_MASK 0xFull -#define SSO_HWGRP_PRI_AFF_SHIFT 8 -#define SSO_HWGRP_PRI_WGT_MASK 0x3Full -#define SSO_HWGRP_PRI_WGT_SHIFT 16 -#define SSO_HWGRP_PRI_WGT_LEFT_MASK 0x3Full -#define SSO_HWGRP_PRI_WGT_LEFT_SHIFT 24 - -#define SSO_HWGRP_AW_CFG_RWEN BIT_ULL(0) -#define SSO_HWGRP_AW_CFG_LDWB BIT_ULL(1) -#define SSO_HWGRP_AW_CFG_LDT BIT_ULL(2) -#define SSO_HWGRP_AW_CFG_STT BIT_ULL(3) -#define SSO_HWGRP_AW_CFG_XAQ_BYP_DIS BIT_ULL(4) - -#define SSO_HWGRP_AW_STS_TPTR_VLD BIT_ULL(8) -#define SSO_HWGRP_AW_STS_NPA_FETCH BIT_ULL(9) -#define SSO_HWGRP_AW_STS_XAQ_BUFSC_MASK 0x7ull -#define SSO_HWGRP_AW_STS_INIT_STS 
0x18ull - -/* Enum offsets */ - -#define SSO_LF_INT_VEC_GRP (0x0ull) - -#define SSO_AF_INT_VEC_ERR0 (0x0ull) -#define SSO_AF_INT_VEC_ERR2 (0x1ull) -#define SSO_AF_INT_VEC_RAS (0x2ull) - -#define SSO_WA_IOBN (0x0ull) -#define SSO_WA_NIXRX (0x1ull) -#define SSO_WA_CPT (0x2ull) -#define SSO_WA_ADDWQ (0x3ull) -#define SSO_WA_DPI (0x4ull) -#define SSO_WA_NIXTX (0x5ull) -#define SSO_WA_TIM (0x6ull) -#define SSO_WA_ZIP (0x7ull) - -#define SSO_TT_ORDERED (0x0ull) -#define SSO_TT_ATOMIC (0x1ull) -#define SSO_TT_UNTAGGED (0x2ull) -#define SSO_TT_EMPTY (0x3ull) - - -/* Structures definitions */ - -#endif /* __OTX2_SSO_HW_H__ */ diff --git a/drivers/common/octeontx2/hw/otx2_ssow.h b/drivers/common/octeontx2/hw/otx2_ssow.h deleted file mode 100644 index 8a44578036..0000000000 --- a/drivers/common/octeontx2/hw/otx2_ssow.h +++ /dev/null @@ -1,56 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2019 Marvell International Ltd. - */ - -#ifndef __OTX2_SSOW_HW_H__ -#define __OTX2_SSOW_HW_H__ - -/* Register offsets */ - -#define SSOW_AF_RVU_LF_HWS_CFG_DEBUG (0x10ull) -#define SSOW_AF_LF_HWS_RST (0x30ull) -#define SSOW_PRIV_LFX_HWS_CFG(a) (0x1000ull | (uint64_t)(a) << 3) -#define SSOW_PRIV_LFX_HWS_INT_CFG(a) (0x2000ull | (uint64_t)(a) << 3) -#define SSOW_AF_SCRATCH_WS (0x100000ull) -#define SSOW_AF_SCRATCH_GW (0x200000ull) -#define SSOW_AF_SCRATCH_AW (0x300000ull) - -#define SSOW_LF_GWS_LINKS (0x10ull) -#define SSOW_LF_GWS_PENDWQP (0x40ull) -#define SSOW_LF_GWS_PENDSTATE (0x50ull) -#define SSOW_LF_GWS_NW_TIM (0x70ull) -#define SSOW_LF_GWS_GRPMSK_CHG (0x80ull) -#define SSOW_LF_GWS_INT (0x100ull) -#define SSOW_LF_GWS_INT_W1S (0x108ull) -#define SSOW_LF_GWS_INT_ENA_W1S (0x110ull) -#define SSOW_LF_GWS_INT_ENA_W1C (0x118ull) -#define SSOW_LF_GWS_TAG (0x200ull) -#define SSOW_LF_GWS_WQP (0x210ull) -#define SSOW_LF_GWS_SWTP (0x220ull) -#define SSOW_LF_GWS_PENDTAG (0x230ull) -#define SSOW_LF_GWS_OP_ALLOC_WE (0x400ull) -#define SSOW_LF_GWS_OP_GET_WORK (0x600ull) -#define 
SSOW_LF_GWS_OP_SWTAG_FLUSH (0x800ull) -#define SSOW_LF_GWS_OP_SWTAG_UNTAG (0x810ull) -#define SSOW_LF_GWS_OP_SWTP_CLR (0x820ull) -#define SSOW_LF_GWS_OP_UPD_WQP_GRP0 (0x830ull) -#define SSOW_LF_GWS_OP_UPD_WQP_GRP1 (0x838ull) -#define SSOW_LF_GWS_OP_DESCHED (0x880ull) -#define SSOW_LF_GWS_OP_DESCHED_NOSCH (0x8c0ull) -#define SSOW_LF_GWS_OP_SWTAG_DESCHED (0x980ull) -#define SSOW_LF_GWS_OP_SWTAG_NOSCHED (0x9c0ull) -#define SSOW_LF_GWS_OP_CLR_NSCHED0 (0xa00ull) -#define SSOW_LF_GWS_OP_CLR_NSCHED1 (0xa08ull) -#define SSOW_LF_GWS_OP_SWTP_SET (0xc00ull) -#define SSOW_LF_GWS_OP_SWTAG_NORM (0xc10ull) -#define SSOW_LF_GWS_OP_SWTAG_FULL0 (0xc20ull) -#define SSOW_LF_GWS_OP_SWTAG_FULL1 (0xc28ull) -#define SSOW_LF_GWS_OP_GWC_INVAL (0xe00ull) - - -/* Enum offsets */ - -#define SSOW_LF_INT_VEC_IOP (0x0ull) - - -#endif /* __OTX2_SSOW_HW_H__ */ diff --git a/drivers/common/octeontx2/hw/otx2_tim.h b/drivers/common/octeontx2/hw/otx2_tim.h deleted file mode 100644 index 41442ad0a8..0000000000 --- a/drivers/common/octeontx2/hw/otx2_tim.h +++ /dev/null @@ -1,34 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2019 Marvell International Ltd. 
- */ - -#ifndef __OTX2_TIM_HW_H__ -#define __OTX2_TIM_HW_H__ - -/* TIM */ -#define TIM_AF_CONST (0x90) -#define TIM_PRIV_LFX_CFG(a) (0x20000 | (a) << 3) -#define TIM_PRIV_LFX_INT_CFG(a) (0x24000 | (a) << 3) -#define TIM_AF_RVU_LF_CFG_DEBUG (0x30000) -#define TIM_AF_BLK_RST (0x10) -#define TIM_AF_LF_RST (0x20) -#define TIM_AF_BLK_RST (0x10) -#define TIM_AF_RINGX_GMCTL(a) (0x2000 | (a) << 3) -#define TIM_AF_RINGX_CTL0(a) (0x4000 | (a) << 3) -#define TIM_AF_RINGX_CTL1(a) (0x6000 | (a) << 3) -#define TIM_AF_RINGX_CTL2(a) (0x8000 | (a) << 3) -#define TIM_AF_FLAGS_REG (0x80) -#define TIM_AF_FLAGS_REG_ENA_TIM BIT_ULL(0) -#define TIM_AF_RINGX_CTL1_ENA BIT_ULL(47) -#define TIM_AF_RINGX_CTL1_RCF_BUSY BIT_ULL(50) -#define TIM_AF_RINGX_CLT1_CLK_10NS (0) -#define TIM_AF_RINGX_CLT1_CLK_GPIO (1) -#define TIM_AF_RINGX_CLT1_CLK_GTI (2) -#define TIM_AF_RINGX_CLT1_CLK_PTP (3) - -/* ENUMS */ - -#define TIM_LF_INT_VEC_NRSPERR_INT (0x0ull) -#define TIM_LF_INT_VEC_RAS_INT (0x1ull) - -#endif /* __OTX2_TIM_HW_H__ */ diff --git a/drivers/common/octeontx2/meson.build b/drivers/common/octeontx2/meson.build deleted file mode 100644 index 223ba5ef51..0000000000 --- a/drivers/common/octeontx2/meson.build +++ /dev/null @@ -1,24 +0,0 @@ -# SPDX-License-Identifier: BSD-3-Clause -# Copyright(C) 2019 Marvell International Ltd. 
-# - -if not is_linux or not dpdk_conf.get('RTE_ARCH_64') - build = false - reason = 'only supported on 64-bit Linux' - subdir_done() -endif - -sources= files( - 'otx2_common.c', - 'otx2_dev.c', - 'otx2_irq.c', - 'otx2_mbox.c', - 'otx2_sec_idev.c', -) - -deps = ['eal', 'pci', 'ethdev', 'kvargs'] -includes += include_directories( - '../../common/octeontx2', - '../../mempool/octeontx2', - '../../bus/pci', -) diff --git a/drivers/common/octeontx2/otx2_common.c b/drivers/common/octeontx2/otx2_common.c deleted file mode 100644 index d23c50242e..0000000000 --- a/drivers/common/octeontx2/otx2_common.c +++ /dev/null @@ -1,216 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2019 Marvell International Ltd. - */ - -#include -#include -#include - -#include "otx2_common.h" -#include "otx2_dev.h" -#include "otx2_mbox.h" - -/** - * @internal - * Set default NPA configuration. - */ -void -otx2_npa_set_defaults(struct otx2_idev_cfg *idev) -{ - idev->npa_pf_func = 0; - rte_atomic16_set(&idev->npa_refcnt, 0); -} - -/** - * @internal - * Get intra device config structure. - */ -struct otx2_idev_cfg * -otx2_intra_dev_get_cfg(void) -{ - const char name[] = "octeontx2_intra_device_conf"; - const struct rte_memzone *mz; - struct otx2_idev_cfg *idev; - - mz = rte_memzone_lookup(name); - if (mz != NULL) - return mz->addr; - - /* Request for the first time */ - mz = rte_memzone_reserve_aligned(name, sizeof(struct otx2_idev_cfg), - SOCKET_ID_ANY, 0, OTX2_ALIGN); - if (mz != NULL) { - idev = mz->addr; - idev->sso_pf_func = 0; - idev->npa_lf = NULL; - otx2_npa_set_defaults(idev); - return idev; - } - return NULL; -} - -/** - * @internal - * Get SSO PF_FUNC. - */ -uint16_t -otx2_sso_pf_func_get(void) -{ - struct otx2_idev_cfg *idev; - uint16_t sso_pf_func; - - sso_pf_func = 0; - idev = otx2_intra_dev_get_cfg(); - - if (idev != NULL) - sso_pf_func = idev->sso_pf_func; - - return sso_pf_func; -} - -/** - * @internal - * Set SSO PF_FUNC. 
- */ -void -otx2_sso_pf_func_set(uint16_t sso_pf_func) -{ - struct otx2_idev_cfg *idev; - - idev = otx2_intra_dev_get_cfg(); - - if (idev != NULL) { - idev->sso_pf_func = sso_pf_func; - rte_smp_wmb(); - } -} - -/** - * @internal - * Get NPA PF_FUNC. - */ -uint16_t -otx2_npa_pf_func_get(void) -{ - struct otx2_idev_cfg *idev; - uint16_t npa_pf_func; - - npa_pf_func = 0; - idev = otx2_intra_dev_get_cfg(); - - if (idev != NULL) - npa_pf_func = idev->npa_pf_func; - - return npa_pf_func; -} - -/** - * @internal - * Get NPA lf object. - */ -struct otx2_npa_lf * -otx2_npa_lf_obj_get(void) -{ - struct otx2_idev_cfg *idev; - - idev = otx2_intra_dev_get_cfg(); - - if (idev != NULL && rte_atomic16_read(&idev->npa_refcnt)) - return idev->npa_lf; - - return NULL; -} - -/** - * @internal - * Is NPA lf active for the given device?. - */ -int -otx2_npa_lf_active(void *otx2_dev) -{ - struct otx2_dev *dev = otx2_dev; - struct otx2_idev_cfg *idev; - - /* Check if npalf is actively used on this dev */ - idev = otx2_intra_dev_get_cfg(); - if (!idev || !idev->npa_lf || idev->npa_lf->mbox != dev->mbox) - return 0; - - return rte_atomic16_read(&idev->npa_refcnt); -} - -/* - * @internal - * Gets reference only to existing NPA LF object. - */ -int otx2_npa_lf_obj_ref(void) -{ - struct otx2_idev_cfg *idev; - uint16_t cnt; - int rc; - - idev = otx2_intra_dev_get_cfg(); - - /* Check if ref not possible */ - if (idev == NULL) - return -EINVAL; - - - /* Get ref only if > 0 */ - cnt = rte_atomic16_read(&idev->npa_refcnt); - while (cnt != 0) { - rc = rte_atomic16_cmpset(&idev->npa_refcnt_u16, cnt, cnt + 1); - if (rc) - break; - - cnt = rte_atomic16_read(&idev->npa_refcnt); - } - - return cnt ? 
0 : -EINVAL; -} - -static int -parse_npa_lock_mask(const char *key, const char *value, void *extra_args) -{ - RTE_SET_USED(key); - uint64_t val; - - val = strtoull(value, NULL, 16); - - *(uint64_t *)extra_args = val; - - return 0; -} - -/* - * @internal - * Parse common device arguments - */ -void otx2_parse_common_devargs(struct rte_kvargs *kvlist) -{ - - struct otx2_idev_cfg *idev; - uint64_t npa_lock_mask = 0; - - idev = otx2_intra_dev_get_cfg(); - - if (idev == NULL) - return; - - rte_kvargs_process(kvlist, OTX2_NPA_LOCK_MASK, - &parse_npa_lock_mask, &npa_lock_mask); - - idev->npa_lock_mask = npa_lock_mask; -} - -RTE_LOG_REGISTER(otx2_logtype_base, pmd.octeontx2.base, NOTICE); -RTE_LOG_REGISTER(otx2_logtype_mbox, pmd.octeontx2.mbox, NOTICE); -RTE_LOG_REGISTER(otx2_logtype_npa, pmd.mempool.octeontx2, NOTICE); -RTE_LOG_REGISTER(otx2_logtype_nix, pmd.net.octeontx2, NOTICE); -RTE_LOG_REGISTER(otx2_logtype_npc, pmd.net.octeontx2.flow, NOTICE); -RTE_LOG_REGISTER(otx2_logtype_tm, pmd.net.octeontx2.tm, NOTICE); -RTE_LOG_REGISTER(otx2_logtype_sso, pmd.event.octeontx2, NOTICE); -RTE_LOG_REGISTER(otx2_logtype_tim, pmd.event.octeontx2.timer, NOTICE); -RTE_LOG_REGISTER(otx2_logtype_dpi, pmd.raw.octeontx2.dpi, NOTICE); -RTE_LOG_REGISTER(otx2_logtype_ep, pmd.raw.octeontx2.ep, NOTICE); -RTE_LOG_REGISTER(otx2_logtype_ree, pmd.regex.octeontx2, NOTICE); diff --git a/drivers/common/octeontx2/otx2_common.h b/drivers/common/octeontx2/otx2_common.h deleted file mode 100644 index cd52e098e6..0000000000 --- a/drivers/common/octeontx2/otx2_common.h +++ /dev/null @@ -1,179 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2019 Marvell International Ltd. 
- */ - -#ifndef _OTX2_COMMON_H_ -#define _OTX2_COMMON_H_ - -#include -#include -#include -#include -#include -#include -#include - -#include "hw/otx2_rvu.h" -#include "hw/otx2_nix.h" -#include "hw/otx2_npc.h" -#include "hw/otx2_npa.h" -#include "hw/otx2_sdp.h" -#include "hw/otx2_sso.h" -#include "hw/otx2_ssow.h" -#include "hw/otx2_tim.h" -#include "hw/otx2_ree.h" - -/* Alignment */ -#define OTX2_ALIGN 128 - -/* Bits manipulation */ -#ifndef BIT_ULL -#define BIT_ULL(nr) (1ULL << (nr)) -#endif -#ifndef BIT -#define BIT(nr) (1UL << (nr)) -#endif - -#ifndef BITS_PER_LONG -#define BITS_PER_LONG (__SIZEOF_LONG__ * 8) -#endif -#ifndef BITS_PER_LONG_LONG -#define BITS_PER_LONG_LONG (__SIZEOF_LONG_LONG__ * 8) -#endif - -#ifndef GENMASK -#define GENMASK(h, l) \ - (((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h)))) -#endif -#ifndef GENMASK_ULL -#define GENMASK_ULL(h, l) \ - (((~0ULL) - (1ULL << (l)) + 1) & \ - (~0ULL >> (BITS_PER_LONG_LONG - 1 - (h)))) -#endif - -#define OTX2_NPA_LOCK_MASK "npa_lock_mask" - -/* Intra device related functions */ -struct otx2_npa_lf; -struct otx2_idev_cfg { - uint16_t sso_pf_func; - uint16_t npa_pf_func; - struct otx2_npa_lf *npa_lf; - RTE_STD_C11 - union { - rte_atomic16_t npa_refcnt; - uint16_t npa_refcnt_u16; - }; - uint64_t npa_lock_mask; -}; - -__rte_internal -struct otx2_idev_cfg *otx2_intra_dev_get_cfg(void); -__rte_internal -void otx2_sso_pf_func_set(uint16_t sso_pf_func); -__rte_internal -uint16_t otx2_sso_pf_func_get(void); -__rte_internal -uint16_t otx2_npa_pf_func_get(void); -__rte_internal -struct otx2_npa_lf *otx2_npa_lf_obj_get(void); -__rte_internal -void otx2_npa_set_defaults(struct otx2_idev_cfg *idev); -__rte_internal -int otx2_npa_lf_active(void *dev); -__rte_internal -int otx2_npa_lf_obj_ref(void); -__rte_internal -void otx2_parse_common_devargs(struct rte_kvargs *kvlist); - -/* Log */ -extern int otx2_logtype_base; -extern int otx2_logtype_mbox; -extern int otx2_logtype_npa; -extern int otx2_logtype_nix; -extern int 
otx2_logtype_sso; -extern int otx2_logtype_npc; -extern int otx2_logtype_tm; -extern int otx2_logtype_tim; -extern int otx2_logtype_dpi; -extern int otx2_logtype_ep; -extern int otx2_logtype_ree; - -#define otx2_err(fmt, args...) \ - RTE_LOG(ERR, PMD, "%s():%u " fmt "\n", \ - __func__, __LINE__, ## args) - -#define otx2_info(fmt, args...) \ - RTE_LOG(INFO, PMD, fmt"\n", ## args) - -#define otx2_dbg(subsystem, fmt, args...) \ - rte_log(RTE_LOG_DEBUG, otx2_logtype_ ## subsystem, \ - "[%s] %s():%u " fmt "\n", \ - #subsystem, __func__, __LINE__, ##args) - -#define otx2_base_dbg(fmt, ...) otx2_dbg(base, fmt, ##__VA_ARGS__) -#define otx2_mbox_dbg(fmt, ...) otx2_dbg(mbox, fmt, ##__VA_ARGS__) -#define otx2_npa_dbg(fmt, ...) otx2_dbg(npa, fmt, ##__VA_ARGS__) -#define otx2_nix_dbg(fmt, ...) otx2_dbg(nix, fmt, ##__VA_ARGS__) -#define otx2_sso_dbg(fmt, ...) otx2_dbg(sso, fmt, ##__VA_ARGS__) -#define otx2_npc_dbg(fmt, ...) otx2_dbg(npc, fmt, ##__VA_ARGS__) -#define otx2_tm_dbg(fmt, ...) otx2_dbg(tm, fmt, ##__VA_ARGS__) -#define otx2_tim_dbg(fmt, ...) otx2_dbg(tim, fmt, ##__VA_ARGS__) -#define otx2_dpi_dbg(fmt, ...) otx2_dbg(dpi, fmt, ##__VA_ARGS__) -#define otx2_sdp_dbg(fmt, ...) otx2_dbg(ep, fmt, ##__VA_ARGS__) -#define otx2_ree_dbg(fmt, ...) 
otx2_dbg(ree, fmt, ##__VA_ARGS__) - -/* PCI IDs */ -#define PCI_VENDOR_ID_CAVIUM 0x177D -#define PCI_DEVID_OCTEONTX2_RVU_PF 0xA063 -#define PCI_DEVID_OCTEONTX2_RVU_VF 0xA064 -#define PCI_DEVID_OCTEONTX2_RVU_AF 0xA065 -#define PCI_DEVID_OCTEONTX2_RVU_SSO_TIM_PF 0xA0F9 -#define PCI_DEVID_OCTEONTX2_RVU_SSO_TIM_VF 0xA0FA -#define PCI_DEVID_OCTEONTX2_RVU_NPA_PF 0xA0FB -#define PCI_DEVID_OCTEONTX2_RVU_NPA_VF 0xA0FC -#define PCI_DEVID_OCTEONTX2_RVU_CPT_PF 0xA0FD -#define PCI_DEVID_OCTEONTX2_RVU_CPT_VF 0xA0FE -#define PCI_DEVID_OCTEONTX2_RVU_AF_VF 0xA0f8 -#define PCI_DEVID_OCTEONTX2_DPI_VF 0xA081 -#define PCI_DEVID_OCTEONTX2_EP_NET_VF 0xB203 /* OCTEON TX2 EP mode */ -/* OCTEON TX2 98xx EP mode */ -#define PCI_DEVID_CN98XX_EP_NET_VF 0xB103 -#define PCI_DEVID_OCTEONTX2_EP_RAW_VF 0xB204 /* OCTEON TX2 EP mode */ -#define PCI_DEVID_OCTEONTX2_RVU_SDP_PF 0xA0f6 -#define PCI_DEVID_OCTEONTX2_RVU_SDP_VF 0xA0f7 -#define PCI_DEVID_OCTEONTX2_RVU_REE_PF 0xA0f4 -#define PCI_DEVID_OCTEONTX2_RVU_REE_VF 0xA0f5 - -/* - * REVID for RVU PCIe devices. 
- * Bits 0..1: minor pass - * Bits 3..2: major pass - * Bits 7..4: midr id, 0:96, 1:95, 2:loki, f:unknown - */ - -#define RVU_PCI_REV_MIDR_ID(rev_id) (rev_id >> 4) -#define RVU_PCI_REV_MAJOR(rev_id) ((rev_id >> 2) & 0x3) -#define RVU_PCI_REV_MINOR(rev_id) (rev_id & 0x3) - -#define RVU_PCI_CN96XX_MIDR_ID 0x0 -#define RVU_PCI_CNF95XX_MIDR_ID 0x1 - -/* PCI Config offsets */ -#define RVU_PCI_REVISION_ID 0x08 - -/* IO Access */ -#define otx2_read64(addr) rte_read64_relaxed((void *)(addr)) -#define otx2_write64(val, addr) rte_write64_relaxed((val), (void *)(addr)) - -#if defined(RTE_ARCH_ARM64) -#include "otx2_io_arm64.h" -#else -#include "otx2_io_generic.h" -#endif - -/* Fastpath lookup */ -#define OTX2_NIX_FASTPATH_LOOKUP_MEM "otx2_nix_fastpath_lookup_mem" -#define OTX2_NIX_SA_TBL_START (4096*4 + 69632*2) - -#endif /* _OTX2_COMMON_H_ */ diff --git a/drivers/common/octeontx2/otx2_dev.c b/drivers/common/octeontx2/otx2_dev.c deleted file mode 100644 index 08dca87848..0000000000 --- a/drivers/common/octeontx2/otx2_dev.c +++ /dev/null @@ -1,1074 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2019 Marvell International Ltd. 
- */ - -#include -#include -#include -#include - -#include -#include -#include -#include -#include - -#include "otx2_dev.h" -#include "otx2_mbox.h" - -#define RVU_MAX_VF 64 /* RVU_PF_VFPF_MBOX_INT(0..1) */ -#define RVU_MAX_INT_RETRY 3 - -/* PF/VF message handling timer */ -#define VF_PF_MBOX_TIMER_MS (20 * 1000) - -static void * -mbox_mem_map(off_t off, size_t size) -{ - void *va = MAP_FAILED; - int mem_fd; - - if (size <= 0) - goto error; - - mem_fd = open("/dev/mem", O_RDWR); - if (mem_fd < 0) - goto error; - - va = rte_mem_map(NULL, size, RTE_PROT_READ | RTE_PROT_WRITE, - RTE_MAP_SHARED, mem_fd, off); - close(mem_fd); - - if (va == NULL) - otx2_err("Failed to mmap sz=0x%zx, fd=%d, off=%jd", - size, mem_fd, (intmax_t)off); -error: - return va; -} - -static void -mbox_mem_unmap(void *va, size_t size) -{ - if (va) - rte_mem_unmap(va, size); -} - -static int -pf_af_sync_msg(struct otx2_dev *dev, struct mbox_msghdr **rsp) -{ - uint32_t timeout = 0, sleep = 1; struct otx2_mbox *mbox = dev->mbox; - struct otx2_mbox_dev *mdev = &mbox->dev[0]; - volatile uint64_t int_status; - struct mbox_msghdr *msghdr; - uint64_t off; - int rc = 0; - - /* We need to disable PF interrupts. 
We are in timer interrupt */ - otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C); - - /* Send message */ - otx2_mbox_msg_send(mbox, 0); - - do { - rte_delay_ms(sleep); - timeout += sleep; - if (timeout >= MBOX_RSP_TIMEOUT) { - otx2_err("Message timeout: %dms", MBOX_RSP_TIMEOUT); - rc = -EIO; - break; - } - int_status = otx2_read64(dev->bar2 + RVU_PF_INT); - } while ((int_status & 0x1) != 0x1); - - /* Clear */ - otx2_write64(int_status, dev->bar2 + RVU_PF_INT); - - /* Enable interrupts */ - otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1S); - - if (rc == 0) { - /* Get message */ - off = mbox->rx_start + - RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN); - msghdr = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + off); - if (rsp) - *rsp = msghdr; - rc = msghdr->rc; - } - - return rc; -} - -static int -af_pf_wait_msg(struct otx2_dev *dev, uint16_t vf, int num_msg) -{ - uint32_t timeout = 0, sleep = 1; struct otx2_mbox *mbox = dev->mbox; - struct otx2_mbox_dev *mdev = &mbox->dev[0]; - volatile uint64_t int_status; - struct mbox_hdr *req_hdr; - struct mbox_msghdr *msg; - struct mbox_msghdr *rsp; - uint64_t offset; - size_t size; - int i; - - /* We need to disable PF interrupts. 
We are in timer interrupt */ - otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C); - - /* Send message */ - otx2_mbox_msg_send(mbox, 0); - - do { - rte_delay_ms(sleep); - timeout++; - if (timeout >= MBOX_RSP_TIMEOUT) { - otx2_err("Routed messages %d timeout: %dms", - num_msg, MBOX_RSP_TIMEOUT); - break; - } - int_status = otx2_read64(dev->bar2 + RVU_PF_INT); - } while ((int_status & 0x1) != 0x1); - - /* Clear */ - otx2_write64(~0ull, dev->bar2 + RVU_PF_INT); - - /* Enable interrupts */ - otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1S); - - rte_spinlock_lock(&mdev->mbox_lock); - - req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start); - if (req_hdr->num_msgs != num_msg) - otx2_err("Routed messages: %d received: %d", num_msg, - req_hdr->num_msgs); - - /* Get messages from mbox */ - offset = mbox->rx_start + - RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN); - for (i = 0; i < req_hdr->num_msgs; i++) { - msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset); - size = mbox->rx_start + msg->next_msgoff - offset; - - /* Reserve PF/VF mbox message */ - size = RTE_ALIGN(size, MBOX_MSG_ALIGN); - rsp = otx2_mbox_alloc_msg(&dev->mbox_vfpf, vf, size); - otx2_mbox_rsp_init(msg->id, rsp); - - /* Copy message from AF<->PF mbox to PF<->VF mbox */ - otx2_mbox_memcpy((uint8_t *)rsp + sizeof(struct mbox_msghdr), - (uint8_t *)msg + sizeof(struct mbox_msghdr), - size - sizeof(struct mbox_msghdr)); - - /* Set status and sender pf_func data */ - rsp->rc = msg->rc; - rsp->pcifunc = msg->pcifunc; - - /* Whenever a PF comes up, AF sends the link status to it but - * when VF comes up no such event is sent to respective VF. - * Using MBOX_MSG_NIX_LF_START_RX response from AF for the - * purpose and send the link status of PF to VF. 
- */ - if (msg->id == MBOX_MSG_NIX_LF_START_RX) { - /* Send link status to VF */ - struct cgx_link_user_info linfo; - struct mbox_msghdr *vf_msg; - size_t sz; - - /* Get the link status */ - if (dev->ops && dev->ops->link_status_get) - dev->ops->link_status_get(dev, &linfo); - - sz = RTE_ALIGN(otx2_mbox_id2size( - MBOX_MSG_CGX_LINK_EVENT), MBOX_MSG_ALIGN); - /* Prepare the message to be sent */ - vf_msg = otx2_mbox_alloc_msg(&dev->mbox_vfpf_up, vf, - sz); - otx2_mbox_req_init(MBOX_MSG_CGX_LINK_EVENT, vf_msg); - memcpy((uint8_t *)vf_msg + sizeof(struct mbox_msghdr), - &linfo, sizeof(struct cgx_link_user_info)); - - vf_msg->rc = msg->rc; - vf_msg->pcifunc = msg->pcifunc; - /* Send to VF */ - otx2_mbox_msg_send(&dev->mbox_vfpf_up, vf); - } - offset = mbox->rx_start + msg->next_msgoff; - } - rte_spinlock_unlock(&mdev->mbox_lock); - - return req_hdr->num_msgs; -} - -static int -vf_pf_process_msgs(struct otx2_dev *dev, uint16_t vf) -{ - int offset, routed = 0; struct otx2_mbox *mbox = &dev->mbox_vfpf; - struct otx2_mbox_dev *mdev = &mbox->dev[vf]; - struct mbox_hdr *req_hdr; - struct mbox_msghdr *msg; - size_t size; - uint16_t i; - - req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start); - if (!req_hdr->num_msgs) - return 0; - - offset = mbox->rx_start + RTE_ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN); - - for (i = 0; i < req_hdr->num_msgs; i++) { - - msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset); - size = mbox->rx_start + msg->next_msgoff - offset; - - /* RVU_PF_FUNC_S */ - msg->pcifunc = otx2_pfvf_func(dev->pf, vf); - - if (msg->id == MBOX_MSG_READY) { - struct ready_msg_rsp *rsp; - uint16_t max_bits = sizeof(dev->active_vfs[0]) * 8; - - /* Handle READY message in PF */ - dev->active_vfs[vf / max_bits] |= - BIT_ULL(vf % max_bits); - rsp = (struct ready_msg_rsp *) - otx2_mbox_alloc_msg(mbox, vf, sizeof(*rsp)); - otx2_mbox_rsp_init(msg->id, rsp); - - /* PF/VF function ID */ - rsp->hdr.pcifunc = msg->pcifunc; - rsp->hdr.rc = 0; - } else { - 
struct mbox_msghdr *af_req; - /* Reserve AF/PF mbox message */ - size = RTE_ALIGN(size, MBOX_MSG_ALIGN); - af_req = otx2_mbox_alloc_msg(dev->mbox, 0, size); - otx2_mbox_req_init(msg->id, af_req); - - /* Copy message from VF<->PF mbox to PF<->AF mbox */ - otx2_mbox_memcpy((uint8_t *)af_req + - sizeof(struct mbox_msghdr), - (uint8_t *)msg + sizeof(struct mbox_msghdr), - size - sizeof(struct mbox_msghdr)); - af_req->pcifunc = msg->pcifunc; - routed++; - } - offset = mbox->rx_start + msg->next_msgoff; - } - - if (routed > 0) { - otx2_base_dbg("pf:%d routed %d messages from vf:%d to AF", - dev->pf, routed, vf); - af_pf_wait_msg(dev, vf, routed); - otx2_mbox_reset(dev->mbox, 0); - } - - /* Send mbox responses to VF */ - if (mdev->num_msgs) { - otx2_base_dbg("pf:%d reply %d messages to vf:%d", - dev->pf, mdev->num_msgs, vf); - otx2_mbox_msg_send(mbox, vf); - } - - return i; -} - -static int -vf_pf_process_up_msgs(struct otx2_dev *dev, uint16_t vf) -{ - struct otx2_mbox *mbox = &dev->mbox_vfpf_up; - struct otx2_mbox_dev *mdev = &mbox->dev[vf]; - struct mbox_hdr *req_hdr; - struct mbox_msghdr *msg; - int msgs_acked = 0; - int offset; - uint16_t i; - - req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start); - if (req_hdr->num_msgs == 0) - return 0; - - offset = mbox->rx_start + RTE_ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN); - - for (i = 0; i < req_hdr->num_msgs; i++) { - msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset); - - msgs_acked++; - /* RVU_PF_FUNC_S */ - msg->pcifunc = otx2_pfvf_func(dev->pf, vf); - - switch (msg->id) { - case MBOX_MSG_CGX_LINK_EVENT: - otx2_base_dbg("PF: Msg 0x%x (%s) fn:0x%x (pf:%d,vf:%d)", - msg->id, otx2_mbox_id2name(msg->id), - msg->pcifunc, otx2_get_pf(msg->pcifunc), - otx2_get_vf(msg->pcifunc)); - break; - case MBOX_MSG_CGX_PTP_RX_INFO: - otx2_base_dbg("PF: Msg 0x%x (%s) fn:0x%x (pf:%d,vf:%d)", - msg->id, otx2_mbox_id2name(msg->id), - msg->pcifunc, otx2_get_pf(msg->pcifunc), - otx2_get_vf(msg->pcifunc)); - break; - 
default: - otx2_err("Not handled UP msg 0x%x (%s) func:0x%x", - msg->id, otx2_mbox_id2name(msg->id), - msg->pcifunc); - } - offset = mbox->rx_start + msg->next_msgoff; - } - otx2_mbox_reset(mbox, vf); - mdev->msgs_acked = msgs_acked; - rte_wmb(); - - return i; -} - -static void -otx2_vf_pf_mbox_handle_msg(void *param) -{ - uint16_t vf, max_vf, max_bits; - struct otx2_dev *dev = param; - - max_bits = sizeof(dev->intr.bits[0]) * sizeof(uint64_t); - max_vf = max_bits * MAX_VFPF_DWORD_BITS; - - for (vf = 0; vf < max_vf; vf++) { - if (dev->intr.bits[vf/max_bits] & BIT_ULL(vf%max_bits)) { - otx2_base_dbg("Process vf:%d request (pf:%d, vf:%d)", - vf, dev->pf, dev->vf); - vf_pf_process_msgs(dev, vf); - /* UP messages */ - vf_pf_process_up_msgs(dev, vf); - dev->intr.bits[vf/max_bits] &= ~(BIT_ULL(vf%max_bits)); - } - } - dev->timer_set = 0; -} - -static void -otx2_vf_pf_mbox_irq(void *param) -{ - struct otx2_dev *dev = param; - bool alarm_set = false; - uint64_t intr; - int vfpf; - - for (vfpf = 0; vfpf < MAX_VFPF_DWORD_BITS; ++vfpf) { - intr = otx2_read64(dev->bar2 + RVU_PF_VFPF_MBOX_INTX(vfpf)); - if (!intr) - continue; - - otx2_base_dbg("vfpf: %d intr: 0x%" PRIx64 " (pf:%d, vf:%d)", - vfpf, intr, dev->pf, dev->vf); - - /* Save and clear intr bits */ - dev->intr.bits[vfpf] |= intr; - otx2_write64(intr, dev->bar2 + RVU_PF_VFPF_MBOX_INTX(vfpf)); - alarm_set = true; - } - - if (!dev->timer_set && alarm_set) { - dev->timer_set = 1; - /* Start timer to handle messages */ - rte_eal_alarm_set(VF_PF_MBOX_TIMER_MS, - otx2_vf_pf_mbox_handle_msg, dev); - } -} - -static void -otx2_process_msgs(struct otx2_dev *dev, struct otx2_mbox *mbox) -{ - struct otx2_mbox_dev *mdev = &mbox->dev[0]; - struct mbox_hdr *req_hdr; - struct mbox_msghdr *msg; - int msgs_acked = 0; - int offset; - uint16_t i; - - req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start); - if (req_hdr->num_msgs == 0) - return; - - offset = mbox->rx_start + RTE_ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN); - for 
(i = 0; i < req_hdr->num_msgs; i++) { - msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset); - - msgs_acked++; - otx2_base_dbg("Message 0x%x (%s) pf:%d/vf:%d", - msg->id, otx2_mbox_id2name(msg->id), - otx2_get_pf(msg->pcifunc), - otx2_get_vf(msg->pcifunc)); - - switch (msg->id) { - /* Add message id's that are handled here */ - case MBOX_MSG_READY: - /* Get our identity */ - dev->pf_func = msg->pcifunc; - break; - - default: - if (msg->rc) - otx2_err("Message (%s) response has err=%d", - otx2_mbox_id2name(msg->id), msg->rc); - break; - } - offset = mbox->rx_start + msg->next_msgoff; - } - - otx2_mbox_reset(mbox, 0); - /* Update acked if someone is waiting a message */ - mdev->msgs_acked = msgs_acked; - rte_wmb(); -} - -/* Copies the message received from AF and sends it to VF */ -static void -pf_vf_mbox_send_up_msg(struct otx2_dev *dev, void *rec_msg) -{ - uint16_t max_bits = sizeof(dev->active_vfs[0]) * sizeof(uint64_t); - struct otx2_mbox *vf_mbox = &dev->mbox_vfpf_up; - struct msg_req *msg = rec_msg; - struct mbox_msghdr *vf_msg; - uint16_t vf; - size_t size; - - size = RTE_ALIGN(otx2_mbox_id2size(msg->hdr.id), MBOX_MSG_ALIGN); - /* Send UP message to all VF's */ - for (vf = 0; vf < vf_mbox->ndevs; vf++) { - /* VF active */ - if (!(dev->active_vfs[vf / max_bits] & (BIT_ULL(vf)))) - continue; - - otx2_base_dbg("(%s) size: %zx to VF: %d", - otx2_mbox_id2name(msg->hdr.id), size, vf); - - /* Reserve PF/VF mbox message */ - vf_msg = otx2_mbox_alloc_msg(vf_mbox, vf, size); - if (!vf_msg) { - otx2_err("Failed to alloc VF%d UP message", vf); - continue; - } - otx2_mbox_req_init(msg->hdr.id, vf_msg); - - /* - * Copy message from AF<->PF UP mbox - * to PF<->VF UP mbox - */ - otx2_mbox_memcpy((uint8_t *)vf_msg + - sizeof(struct mbox_msghdr), (uint8_t *)msg - + sizeof(struct mbox_msghdr), size - - sizeof(struct mbox_msghdr)); - - vf_msg->rc = msg->hdr.rc; - /* Set PF to be a sender */ - vf_msg->pcifunc = dev->pf_func; - - /* Send to VF */ - 
otx2_mbox_msg_send(vf_mbox, vf); - } -} - -static int -otx2_mbox_up_handler_cgx_link_event(struct otx2_dev *dev, - struct cgx_link_info_msg *msg, - struct msg_rsp *rsp) -{ - struct cgx_link_user_info *linfo = &msg->link_info; - - otx2_base_dbg("pf:%d/vf:%d NIC Link %s --> 0x%x (%s) from: pf:%d/vf:%d", - otx2_get_pf(dev->pf_func), otx2_get_vf(dev->pf_func), - linfo->link_up ? "UP" : "DOWN", msg->hdr.id, - otx2_mbox_id2name(msg->hdr.id), - otx2_get_pf(msg->hdr.pcifunc), - otx2_get_vf(msg->hdr.pcifunc)); - - /* PF gets link notification from AF */ - if (otx2_get_pf(msg->hdr.pcifunc) == 0) { - if (dev->ops && dev->ops->link_status_update) - dev->ops->link_status_update(dev, linfo); - - /* Forward the same message as received from AF to VF */ - pf_vf_mbox_send_up_msg(dev, msg); - } else { - /* VF gets link up notification */ - if (dev->ops && dev->ops->link_status_update) - dev->ops->link_status_update(dev, linfo); - } - - rsp->hdr.rc = 0; - return 0; -} - -static int -otx2_mbox_up_handler_cgx_ptp_rx_info(struct otx2_dev *dev, - struct cgx_ptp_rx_info_msg *msg, - struct msg_rsp *rsp) -{ - otx2_nix_dbg("pf:%d/vf:%d PTP mode %s --> 0x%x (%s) from: pf:%d/vf:%d", - otx2_get_pf(dev->pf_func), - otx2_get_vf(dev->pf_func), - msg->ptp_en ? 
"ENABLED" : "DISABLED", - msg->hdr.id, otx2_mbox_id2name(msg->hdr.id), - otx2_get_pf(msg->hdr.pcifunc), - otx2_get_vf(msg->hdr.pcifunc)); - - /* PF gets PTP notification from AF */ - if (otx2_get_pf(msg->hdr.pcifunc) == 0) { - if (dev->ops && dev->ops->ptp_info_update) - dev->ops->ptp_info_update(dev, msg->ptp_en); - - /* Forward the same message as received from AF to VF */ - pf_vf_mbox_send_up_msg(dev, msg); - } else { - /* VF gets PTP notification */ - if (dev->ops && dev->ops->ptp_info_update) - dev->ops->ptp_info_update(dev, msg->ptp_en); - } - - rsp->hdr.rc = 0; - return 0; -} - -static int -mbox_process_msgs_up(struct otx2_dev *dev, struct mbox_msghdr *req) -{ - /* Check if valid, if not reply with a invalid msg */ - if (req->sig != OTX2_MBOX_REQ_SIG) - return -EIO; - - switch (req->id) { -#define M(_name, _id, _fn_name, _req_type, _rsp_type) \ - case _id: { \ - struct _rsp_type *rsp; \ - int err; \ - \ - rsp = (struct _rsp_type *)otx2_mbox_alloc_msg( \ - &dev->mbox_up, 0, \ - sizeof(struct _rsp_type)); \ - if (!rsp) \ - return -ENOMEM; \ - \ - rsp->hdr.id = _id; \ - rsp->hdr.sig = OTX2_MBOX_RSP_SIG; \ - rsp->hdr.pcifunc = dev->pf_func; \ - rsp->hdr.rc = 0; \ - \ - err = otx2_mbox_up_handler_ ## _fn_name( \ - dev, (struct _req_type *)req, rsp); \ - return err; \ - } -MBOX_UP_CGX_MESSAGES -#undef M - - default : - otx2_reply_invalid_msg(&dev->mbox_up, 0, 0, req->id); - } - - return -ENODEV; -} - -static void -otx2_process_msgs_up(struct otx2_dev *dev, struct otx2_mbox *mbox) -{ - struct otx2_mbox_dev *mdev = &mbox->dev[0]; - struct mbox_hdr *req_hdr; - struct mbox_msghdr *msg; - int i, err, offset; - - req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start); - if (req_hdr->num_msgs == 0) - return; - - offset = mbox->rx_start + RTE_ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN); - for (i = 0; i < req_hdr->num_msgs; i++) { - msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset); - - otx2_base_dbg("Message 0x%x (%s) pf:%d/vf:%d", - msg->id, 
otx2_mbox_id2name(msg->id), - otx2_get_pf(msg->pcifunc), - otx2_get_vf(msg->pcifunc)); - err = mbox_process_msgs_up(dev, msg); - if (err) - otx2_err("Error %d handling 0x%x (%s)", - err, msg->id, otx2_mbox_id2name(msg->id)); - offset = mbox->rx_start + msg->next_msgoff; - } - /* Send mbox responses */ - if (mdev->num_msgs) { - otx2_base_dbg("Reply num_msgs:%d", mdev->num_msgs); - otx2_mbox_msg_send(mbox, 0); - } -} - -static void -otx2_pf_vf_mbox_irq(void *param) -{ - struct otx2_dev *dev = param; - uint64_t intr; - - intr = otx2_read64(dev->bar2 + RVU_VF_INT); - if (intr == 0) - otx2_base_dbg("Proceeding to check mbox UP messages if any"); - - otx2_write64(intr, dev->bar2 + RVU_VF_INT); - otx2_base_dbg("Irq 0x%" PRIx64 "(pf:%d,vf:%d)", intr, dev->pf, dev->vf); - - /* First process all configuration messages */ - otx2_process_msgs(dev, dev->mbox); - - /* Process Uplink messages */ - otx2_process_msgs_up(dev, &dev->mbox_up); -} - -static void -otx2_af_pf_mbox_irq(void *param) -{ - struct otx2_dev *dev = param; - uint64_t intr; - - intr = otx2_read64(dev->bar2 + RVU_PF_INT); - if (intr == 0) - otx2_base_dbg("Proceeding to check mbox UP messages if any"); - - otx2_write64(intr, dev->bar2 + RVU_PF_INT); - otx2_base_dbg("Irq 0x%" PRIx64 "(pf:%d,vf:%d)", intr, dev->pf, dev->vf); - - /* First process all configuration messages */ - otx2_process_msgs(dev, dev->mbox); - - /* Process Uplink messages */ - otx2_process_msgs_up(dev, &dev->mbox_up); -} - -static int -mbox_register_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev) -{ - struct rte_intr_handle *intr_handle = pci_dev->intr_handle; - int i, rc; - - /* HW clear irq */ - for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i) - otx2_write64(~0ull, dev->bar2 + - RVU_PF_VFPF_MBOX_INT_ENA_W1CX(i)); - - otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C); - - dev->timer_set = 0; - - /* MBOX interrupt for VF(0...63) <-> PF */ - rc = otx2_register_irq(intr_handle, otx2_vf_pf_mbox_irq, dev, - RVU_PF_INT_VEC_VFPF_MBOX0); - - if 
(rc) { - otx2_err("Fail to register PF(VF0-63) mbox irq"); - return rc; - } - /* MBOX interrupt for VF(64...128) <-> PF */ - rc = otx2_register_irq(intr_handle, otx2_vf_pf_mbox_irq, dev, - RVU_PF_INT_VEC_VFPF_MBOX1); - - if (rc) { - otx2_err("Fail to register PF(VF64-128) mbox irq"); - return rc; - } - /* MBOX interrupt AF <-> PF */ - rc = otx2_register_irq(intr_handle, otx2_af_pf_mbox_irq, - dev, RVU_PF_INT_VEC_AFPF_MBOX); - if (rc) { - otx2_err("Fail to register AF<->PF mbox irq"); - return rc; - } - - /* HW enable intr */ - for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i) - otx2_write64(~0ull, dev->bar2 + - RVU_PF_VFPF_MBOX_INT_ENA_W1SX(i)); - - otx2_write64(~0ull, dev->bar2 + RVU_PF_INT); - otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1S); - - return rc; -} - -static int -mbox_register_vf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev) -{ - struct rte_intr_handle *intr_handle = pci_dev->intr_handle; - int rc; - - /* Clear irq */ - otx2_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1C); - - /* MBOX interrupt PF <-> VF */ - rc = otx2_register_irq(intr_handle, otx2_pf_vf_mbox_irq, - dev, RVU_VF_INT_VEC_MBOX); - if (rc) { - otx2_err("Fail to register PF<->VF mbox irq"); - return rc; - } - - /* HW enable intr */ - otx2_write64(~0ull, dev->bar2 + RVU_VF_INT); - otx2_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1S); - - return rc; -} - -static int -mbox_register_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev) -{ - if (otx2_dev_is_vf(dev)) - return mbox_register_vf_irq(pci_dev, dev); - else - return mbox_register_pf_irq(pci_dev, dev); -} - -static void -mbox_unregister_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev) -{ - struct rte_intr_handle *intr_handle = pci_dev->intr_handle; - int i; - - /* HW clear irq */ - for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i) - otx2_write64(~0ull, dev->bar2 + - RVU_PF_VFPF_MBOX_INT_ENA_W1CX(i)); - - otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C); - - dev->timer_set = 0; - - 
rte_eal_alarm_cancel(otx2_vf_pf_mbox_handle_msg, dev); - - /* Unregister the interrupt handler for each vectors */ - /* MBOX interrupt for VF(0...63) <-> PF */ - otx2_unregister_irq(intr_handle, otx2_vf_pf_mbox_irq, dev, - RVU_PF_INT_VEC_VFPF_MBOX0); - - /* MBOX interrupt for VF(64...128) <-> PF */ - otx2_unregister_irq(intr_handle, otx2_vf_pf_mbox_irq, dev, - RVU_PF_INT_VEC_VFPF_MBOX1); - - /* MBOX interrupt AF <-> PF */ - otx2_unregister_irq(intr_handle, otx2_af_pf_mbox_irq, dev, - RVU_PF_INT_VEC_AFPF_MBOX); - -} - -static void -mbox_unregister_vf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev) -{ - struct rte_intr_handle *intr_handle = pci_dev->intr_handle; - - /* Clear irq */ - otx2_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1C); - - /* Unregister the interrupt handler */ - otx2_unregister_irq(intr_handle, otx2_pf_vf_mbox_irq, dev, - RVU_VF_INT_VEC_MBOX); -} - -static void -mbox_unregister_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev) -{ - if (otx2_dev_is_vf(dev)) - mbox_unregister_vf_irq(pci_dev, dev); - else - mbox_unregister_pf_irq(pci_dev, dev); -} - -static int -vf_flr_send_msg(struct otx2_dev *dev, uint16_t vf) -{ - struct otx2_mbox *mbox = dev->mbox; - struct msg_req *req; - int rc; - - req = otx2_mbox_alloc_msg_vf_flr(mbox); - /* Overwrite pcifunc to indicate VF */ - req->hdr.pcifunc = otx2_pfvf_func(dev->pf, vf); - - /* Sync message in interrupt context */ - rc = pf_af_sync_msg(dev, NULL); - if (rc) - otx2_err("Failed to send VF FLR mbox msg, rc=%d", rc); - - return rc; -} - -static void -otx2_pf_vf_flr_irq(void *param) -{ - struct otx2_dev *dev = (struct otx2_dev *)param; - uint16_t max_vf = 64, vf; - uintptr_t bar2; - uint64_t intr; - int i; - - max_vf = (dev->maxvf > 0) ? 
dev->maxvf : 64; - bar2 = dev->bar2; - - otx2_base_dbg("FLR VF interrupt: max_vf: %d", max_vf); - - for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i) { - intr = otx2_read64(bar2 + RVU_PF_VFFLR_INTX(i)); - if (!intr) - continue; - - for (vf = 0; vf < max_vf; vf++) { - if (!(intr & (1ULL << vf))) - continue; - - otx2_base_dbg("FLR: i :%d intr: 0x%" PRIx64 ", vf-%d", - i, intr, (64 * i + vf)); - /* Clear interrupt */ - otx2_write64(BIT_ULL(vf), bar2 + RVU_PF_VFFLR_INTX(i)); - /* Disable the interrupt */ - otx2_write64(BIT_ULL(vf), - bar2 + RVU_PF_VFFLR_INT_ENA_W1CX(i)); - /* Inform AF about VF reset */ - vf_flr_send_msg(dev, vf); - - /* Signal FLR finish */ - otx2_write64(BIT_ULL(vf), bar2 + RVU_PF_VFTRPENDX(i)); - /* Enable interrupt */ - otx2_write64(~0ull, - bar2 + RVU_PF_VFFLR_INT_ENA_W1SX(i)); - } - } -} - -static int -vf_flr_unregister_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev) -{ - struct rte_intr_handle *intr_handle = pci_dev->intr_handle; - int i; - - otx2_base_dbg("Unregister VF FLR interrupts for %s", pci_dev->name); - - /* HW clear irq */ - for (i = 0; i < MAX_VFPF_DWORD_BITS; i++) - otx2_write64(~0ull, dev->bar2 + RVU_PF_VFFLR_INT_ENA_W1CX(i)); - - otx2_unregister_irq(intr_handle, otx2_pf_vf_flr_irq, dev, - RVU_PF_INT_VEC_VFFLR0); - - otx2_unregister_irq(intr_handle, otx2_pf_vf_flr_irq, dev, - RVU_PF_INT_VEC_VFFLR1); - - return 0; -} - -static int -vf_flr_register_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev) -{ - struct rte_intr_handle *handle = pci_dev->intr_handle; - int i, rc; - - otx2_base_dbg("Register VF FLR interrupts for %s", pci_dev->name); - - rc = otx2_register_irq(handle, otx2_pf_vf_flr_irq, dev, - RVU_PF_INT_VEC_VFFLR0); - if (rc) - otx2_err("Failed to init RVU_PF_INT_VEC_VFFLR0 rc=%d", rc); - - rc = otx2_register_irq(handle, otx2_pf_vf_flr_irq, dev, - RVU_PF_INT_VEC_VFFLR1); - if (rc) - otx2_err("Failed to init RVU_PF_INT_VEC_VFFLR1 rc=%d", rc); - - /* Enable HW interrupt */ - for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i) { 
- otx2_write64(~0ull, dev->bar2 + RVU_PF_VFFLR_INTX(i)); - otx2_write64(~0ull, dev->bar2 + RVU_PF_VFTRPENDX(i)); - otx2_write64(~0ull, dev->bar2 + RVU_PF_VFFLR_INT_ENA_W1SX(i)); - } - return 0; -} - -/** - * @internal - * Get number of active VFs for the given PF device. - */ -int -otx2_dev_active_vfs(void *otx2_dev) -{ - struct otx2_dev *dev = otx2_dev; - int i, count = 0; - - for (i = 0; i < MAX_VFPF_DWORD_BITS; i++) - count += __builtin_popcount(dev->active_vfs[i]); - - return count; -} - -static void -otx2_update_vf_hwcap(struct rte_pci_device *pci_dev, struct otx2_dev *dev) -{ - switch (pci_dev->id.device_id) { - case PCI_DEVID_OCTEONTX2_RVU_PF: - break; - case PCI_DEVID_OCTEONTX2_RVU_SSO_TIM_VF: - case PCI_DEVID_OCTEONTX2_RVU_NPA_VF: - case PCI_DEVID_OCTEONTX2_RVU_CPT_VF: - case PCI_DEVID_OCTEONTX2_RVU_AF_VF: - case PCI_DEVID_OCTEONTX2_RVU_VF: - case PCI_DEVID_OCTEONTX2_RVU_SDP_VF: - dev->hwcap |= OTX2_HWCAP_F_VF; - break; - } -} - -/** - * @internal - * Initialize the otx2 device - */ -int -otx2_dev_priv_init(struct rte_pci_device *pci_dev, void *otx2_dev) -{ - int up_direction = MBOX_DIR_PFAF_UP; - int rc, direction = MBOX_DIR_PFAF; - uint64_t intr_offset = RVU_PF_INT; - struct otx2_dev *dev = otx2_dev; - uintptr_t bar2, bar4; - uint64_t bar4_addr; - void *hwbase; - - bar2 = (uintptr_t)pci_dev->mem_resource[2].addr; - bar4 = (uintptr_t)pci_dev->mem_resource[4].addr; - - if (bar2 == 0 || bar4 == 0) { - otx2_err("Failed to get pci bars"); - rc = -ENODEV; - goto error; - } - - dev->node = pci_dev->device.numa_node; - dev->maxvf = pci_dev->max_vfs; - dev->bar2 = bar2; - dev->bar4 = bar4; - - otx2_update_vf_hwcap(pci_dev, dev); - - if (otx2_dev_is_vf(dev)) { - direction = MBOX_DIR_VFPF; - up_direction = MBOX_DIR_VFPF_UP; - intr_offset = RVU_VF_INT; - } - - /* Initialize the local mbox */ - rc = otx2_mbox_init(&dev->mbox_local, bar4, bar2, direction, 1, - intr_offset); - if (rc) - goto error; - dev->mbox = &dev->mbox_local; - - rc = otx2_mbox_init(&dev->mbox_up, 
bar4, bar2, up_direction, 1, - intr_offset); - if (rc) - goto error; - - /* Register mbox interrupts */ - rc = mbox_register_irq(pci_dev, dev); - if (rc) - goto mbox_fini; - - /* Check the readiness of PF/VF */ - rc = otx2_send_ready_msg(dev->mbox, &dev->pf_func); - if (rc) - goto mbox_unregister; - - dev->pf = otx2_get_pf(dev->pf_func); - dev->vf = otx2_get_vf(dev->pf_func); - memset(&dev->active_vfs, 0, sizeof(dev->active_vfs)); - - /* Found VF devices in a PF device */ - if (pci_dev->max_vfs > 0) { - - /* Remap mbox area for all vf's */ - bar4_addr = otx2_read64(bar2 + RVU_PF_VF_BAR4_ADDR); - if (bar4_addr == 0) { - rc = -ENODEV; - goto mbox_fini; - } - - hwbase = mbox_mem_map(bar4_addr, MBOX_SIZE * pci_dev->max_vfs); - if (hwbase == MAP_FAILED) { - rc = -ENOMEM; - goto mbox_fini; - } - /* Init mbox object */ - rc = otx2_mbox_init(&dev->mbox_vfpf, (uintptr_t)hwbase, - bar2, MBOX_DIR_PFVF, pci_dev->max_vfs, - intr_offset); - if (rc) - goto iounmap; - - /* PF -> VF UP messages */ - rc = otx2_mbox_init(&dev->mbox_vfpf_up, (uintptr_t)hwbase, - bar2, MBOX_DIR_PFVF_UP, pci_dev->max_vfs, - intr_offset); - if (rc) - goto mbox_fini; - } - - /* Register VF-FLR irq handlers */ - if (otx2_dev_is_pf(dev)) { - rc = vf_flr_register_irqs(pci_dev, dev); - if (rc) - goto iounmap; - } - dev->mbox_active = 1; - return rc; - -iounmap: - mbox_mem_unmap(hwbase, MBOX_SIZE * pci_dev->max_vfs); -mbox_unregister: - mbox_unregister_irq(pci_dev, dev); -mbox_fini: - otx2_mbox_fini(dev->mbox); - otx2_mbox_fini(&dev->mbox_up); -error: - return rc; -} - -/** - * @internal - * Finalize the otx2 device - */ -void -otx2_dev_fini(struct rte_pci_device *pci_dev, void *otx2_dev) -{ - struct rte_intr_handle *intr_handle = pci_dev->intr_handle; - struct otx2_dev *dev = otx2_dev; - struct otx2_idev_cfg *idev; - struct otx2_mbox *mbox; - - /* Clear references to this pci dev */ - idev = otx2_intra_dev_get_cfg(); - if (idev->npa_lf && idev->npa_lf->pci_dev == pci_dev) - idev->npa_lf = NULL; - - 
mbox_unregister_irq(pci_dev, dev); - - if (otx2_dev_is_pf(dev)) - vf_flr_unregister_irqs(pci_dev, dev); - /* Release PF - VF */ - mbox = &dev->mbox_vfpf; - if (mbox->hwbase && mbox->dev) - mbox_mem_unmap((void *)mbox->hwbase, - MBOX_SIZE * pci_dev->max_vfs); - otx2_mbox_fini(mbox); - mbox = &dev->mbox_vfpf_up; - otx2_mbox_fini(mbox); - - /* Release PF - AF */ - mbox = dev->mbox; - otx2_mbox_fini(mbox); - mbox = &dev->mbox_up; - otx2_mbox_fini(mbox); - dev->mbox_active = 0; - - /* Disable MSIX vectors */ - otx2_disable_irqs(intr_handle); -} diff --git a/drivers/common/octeontx2/otx2_dev.h b/drivers/common/octeontx2/otx2_dev.h deleted file mode 100644 index d5b2b0d9af..0000000000 --- a/drivers/common/octeontx2/otx2_dev.h +++ /dev/null @@ -1,161 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2019 Marvell International Ltd. - */ - -#ifndef _OTX2_DEV_H -#define _OTX2_DEV_H - -#include - -#include "otx2_common.h" -#include "otx2_irq.h" -#include "otx2_mbox.h" -#include "otx2_mempool.h" - -/* Common HWCAP flags. 
Use from LSB bits */ -#define OTX2_HWCAP_F_VF BIT_ULL(8) /* VF device */ -#define otx2_dev_is_vf(dev) (dev->hwcap & OTX2_HWCAP_F_VF) -#define otx2_dev_is_pf(dev) (!(dev->hwcap & OTX2_HWCAP_F_VF)) -#define otx2_dev_is_lbk(dev) ((dev->hwcap & OTX2_HWCAP_F_VF) && \ - (dev->tx_chan_base < 0x700)) -#define otx2_dev_revid(dev) (dev->hwcap & 0xFF) -#define otx2_dev_is_sdp(dev) (dev->sdp_link) - -#define otx2_dev_is_vf_or_sdp(dev) \ - (otx2_dev_is_vf(dev) || otx2_dev_is_sdp(dev)) - -#define otx2_dev_is_A0(dev) \ - ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x0) && \ - (RVU_PCI_REV_MINOR(otx2_dev_revid(dev)) == 0x0)) -#define otx2_dev_is_Ax(dev) \ - ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x0)) - -#define otx2_dev_is_95xx_A0(dev) \ - ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x0) && \ - (RVU_PCI_REV_MINOR(otx2_dev_revid(dev)) == 0x0) && \ - (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x1)) -#define otx2_dev_is_95xx_Ax(dev) \ - ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x0) && \ - (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x1)) - -#define otx2_dev_is_96xx_A0(dev) \ - ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x0) && \ - (RVU_PCI_REV_MINOR(otx2_dev_revid(dev)) == 0x0) && \ - (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x0)) -#define otx2_dev_is_96xx_Ax(dev) \ - ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x0) && \ - (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x0)) - -#define otx2_dev_is_96xx_Cx(dev) \ - ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x2) && \ - (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x0)) - -#define otx2_dev_is_96xx_C0(dev) \ - ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x2) && \ - (RVU_PCI_REV_MINOR(otx2_dev_revid(dev)) == 0x0) && \ - (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x0)) - -#define otx2_dev_is_98xx(dev) \ - (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x3) - -struct otx2_dev; - -/* Link status update callback */ -typedef void (*otx2_link_status_update_t)(struct otx2_dev *dev, - struct cgx_link_user_info *link); -/* PTP 
info callback */ -typedef int (*otx2_ptp_info_t)(struct otx2_dev *dev, bool ptp_en); -/* Link status get callback */ -typedef void (*otx2_link_status_get_t)(struct otx2_dev *dev, - struct cgx_link_user_info *link); - -struct otx2_dev_ops { - otx2_link_status_update_t link_status_update; - otx2_ptp_info_t ptp_info_update; - otx2_link_status_get_t link_status_get; -}; - -#define OTX2_DEV \ - int node __rte_cache_aligned; \ - uint16_t pf; \ - int16_t vf; \ - uint16_t pf_func; \ - uint8_t mbox_active; \ - bool drv_inited; \ - uint64_t active_vfs[MAX_VFPF_DWORD_BITS]; \ - uintptr_t bar2; \ - uintptr_t bar4; \ - struct otx2_mbox mbox_local; \ - struct otx2_mbox mbox_up; \ - struct otx2_mbox mbox_vfpf; \ - struct otx2_mbox mbox_vfpf_up; \ - otx2_intr_t intr; \ - int timer_set; /* ~0 : no alarm handling */ \ - uint64_t hwcap; \ - struct otx2_npa_lf npalf; \ - struct otx2_mbox *mbox; \ - uint16_t maxvf; \ - const struct otx2_dev_ops *ops - -struct otx2_dev { - OTX2_DEV; -}; - -__rte_internal -int otx2_dev_priv_init(struct rte_pci_device *pci_dev, void *otx2_dev); - -/* Common dev init and fini routines */ - -static __rte_always_inline int -otx2_dev_init(struct rte_pci_device *pci_dev, void *otx2_dev) -{ - struct otx2_dev *dev = otx2_dev; - uint8_t rev_id; - int rc; - - rc = rte_pci_read_config(pci_dev, &rev_id, - 1, RVU_PCI_REVISION_ID); - if (rc != 1) { - otx2_err("Failed to read pci revision id, rc=%d", rc); - return rc; - } - - dev->hwcap = rev_id; - return otx2_dev_priv_init(pci_dev, otx2_dev); -} - -__rte_internal -void otx2_dev_fini(struct rte_pci_device *pci_dev, void *otx2_dev); -__rte_internal -int otx2_dev_active_vfs(void *otx2_dev); - -#define RVU_PFVF_PF_SHIFT 10 -#define RVU_PFVF_PF_MASK 0x3F -#define RVU_PFVF_FUNC_SHIFT 0 -#define RVU_PFVF_FUNC_MASK 0x3FF - -static inline int -otx2_get_vf(uint16_t pf_func) -{ - return (((pf_func >> RVU_PFVF_FUNC_SHIFT) & RVU_PFVF_FUNC_MASK) - 1); -} - -static inline int -otx2_get_pf(uint16_t pf_func) -{ - return (pf_func >> 
RVU_PFVF_PF_SHIFT) & RVU_PFVF_PF_MASK; -} - -static inline int -otx2_pfvf_func(int pf, int vf) -{ - return (pf << RVU_PFVF_PF_SHIFT) | ((vf << RVU_PFVF_FUNC_SHIFT) + 1); -} - -static inline int -otx2_is_afvf(uint16_t pf_func) -{ - return !(pf_func & ~RVU_PFVF_FUNC_MASK); -} - -#endif /* _OTX2_DEV_H */ diff --git a/drivers/common/octeontx2/otx2_io_arm64.h b/drivers/common/octeontx2/otx2_io_arm64.h deleted file mode 100644 index 34268e3af3..0000000000 --- a/drivers/common/octeontx2/otx2_io_arm64.h +++ /dev/null @@ -1,114 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2019 Marvell International Ltd. - */ - -#ifndef _OTX2_IO_ARM64_H_ -#define _OTX2_IO_ARM64_H_ - -#define otx2_load_pair(val0, val1, addr) ({ \ - asm volatile( \ - "ldp %x[x0], %x[x1], [%x[p1]]" \ - :[x0]"=r"(val0), [x1]"=r"(val1) \ - :[p1]"r"(addr) \ - ); }) - -#define otx2_store_pair(val0, val1, addr) ({ \ - asm volatile( \ - "stp %x[x0], %x[x1], [%x[p1],#0]!" \ - ::[x0]"r"(val0), [x1]"r"(val1), [p1]"r"(addr) \ - ); }) - -#define otx2_prefetch_store_keep(ptr) ({\ - asm volatile("prfm pstl1keep, [%x0]\n" : : "r" (ptr)); }) - -#if defined(__ARM_FEATURE_SVE) -#define __LSE_PREAMBLE " .cpu generic+lse+sve\n" -#else -#define __LSE_PREAMBLE " .cpu generic+lse\n" -#endif - -static __rte_always_inline uint64_t -otx2_atomic64_add_nosync(int64_t incr, int64_t *ptr) -{ - uint64_t result; - - /* Atomic add with no ordering */ - asm volatile ( - __LSE_PREAMBLE - "ldadd %x[i], %x[r], [%[b]]" - : [r] "=r" (result), "+m" (*ptr) - : [i] "r" (incr), [b] "r" (ptr) - : "memory"); - return result; -} - -static __rte_always_inline uint64_t -otx2_atomic64_add_sync(int64_t incr, int64_t *ptr) -{ - uint64_t result; - - /* Atomic add with ordering */ - asm volatile ( - __LSE_PREAMBLE - "ldadda %x[i], %x[r], [%[b]]" - : [r] "=r" (result), "+m" (*ptr) - : [i] "r" (incr), [b] "r" (ptr) - : "memory"); - return result; -} - -static __rte_always_inline uint64_t -otx2_lmt_submit(rte_iova_t io_address) -{ - uint64_t 
result; - - asm volatile ( - __LSE_PREAMBLE - "ldeor xzr,%x[rf],[%[rs]]" : - [rf] "=r"(result): [rs] "r"(io_address)); - return result; -} - -static __rte_always_inline uint64_t -otx2_lmt_submit_release(rte_iova_t io_address) -{ - uint64_t result; - - asm volatile ( - __LSE_PREAMBLE - "ldeorl xzr,%x[rf],[%[rs]]" : - [rf] "=r"(result) : [rs] "r"(io_address)); - return result; -} - -static __rte_always_inline void -otx2_lmt_mov(void *out, const void *in, const uint32_t lmtext) -{ - volatile const __uint128_t *src128 = (const __uint128_t *)in; - volatile __uint128_t *dst128 = (__uint128_t *)out; - dst128[0] = src128[0]; - dst128[1] = src128[1]; - /* lmtext receives following value: - * 1: NIX_SUBDC_EXT needed i.e. tx vlan case - * 2: NIX_SUBDC_EXT + NIX_SUBDC_MEM i.e. tstamp case - */ - if (lmtext) { - dst128[2] = src128[2]; - if (lmtext > 1) - dst128[3] = src128[3]; - } -} - -static __rte_always_inline void -otx2_lmt_mov_seg(void *out, const void *in, const uint16_t segdw) -{ - volatile const __uint128_t *src128 = (const __uint128_t *)in; - volatile __uint128_t *dst128 = (__uint128_t *)out; - uint8_t i; - - for (i = 0; i < segdw; i++) - dst128[i] = src128[i]; -} - -#undef __LSE_PREAMBLE -#endif /* _OTX2_IO_ARM64_H_ */ diff --git a/drivers/common/octeontx2/otx2_io_generic.h b/drivers/common/octeontx2/otx2_io_generic.h deleted file mode 100644 index 3436a6c3d5..0000000000 --- a/drivers/common/octeontx2/otx2_io_generic.h +++ /dev/null @@ -1,75 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2019 Marvell International Ltd. 
- */ - -#ifndef _OTX2_IO_GENERIC_H_ -#define _OTX2_IO_GENERIC_H_ - -#include - -#define otx2_load_pair(val0, val1, addr) \ -do { \ - val0 = rte_read64_relaxed((void *)(addr)); \ - val1 = rte_read64_relaxed((uint8_t *)(addr) + 8); \ -} while (0) - -#define otx2_store_pair(val0, val1, addr) \ -do { \ - rte_write64_relaxed(val0, (void *)(addr)); \ - rte_write64_relaxed(val1, (((uint8_t *)(addr)) + 8)); \ -} while (0) - -#define otx2_prefetch_store_keep(ptr) do {} while (0) - -static inline uint64_t -otx2_atomic64_add_nosync(int64_t incr, int64_t *ptr) -{ - RTE_SET_USED(ptr); - RTE_SET_USED(incr); - - return 0; -} - -static inline uint64_t -otx2_atomic64_add_sync(int64_t incr, int64_t *ptr) -{ - RTE_SET_USED(ptr); - RTE_SET_USED(incr); - - return 0; -} - -static inline int64_t -otx2_lmt_submit(uint64_t io_address) -{ - RTE_SET_USED(io_address); - - return 0; -} - -static inline int64_t -otx2_lmt_submit_release(uint64_t io_address) -{ - RTE_SET_USED(io_address); - - return 0; -} - -static __rte_always_inline void -otx2_lmt_mov(void *out, const void *in, const uint32_t lmtext) -{ - /* Copy four words if lmtext = 0 - * six words if lmtext = 1 - * eight words if lmtext =2 - */ - memcpy(out, in, (4 + (2 * lmtext)) * sizeof(uint64_t)); -} - -static __rte_always_inline void -otx2_lmt_mov_seg(void *out, const void *in, const uint16_t segdw) -{ - RTE_SET_USED(out); - RTE_SET_USED(in); - RTE_SET_USED(segdw); -} -#endif /* _OTX2_IO_GENERIC_H_ */ diff --git a/drivers/common/octeontx2/otx2_irq.c b/drivers/common/octeontx2/otx2_irq.c deleted file mode 100644 index 93fc95c0e1..0000000000 --- a/drivers/common/octeontx2/otx2_irq.c +++ /dev/null @@ -1,288 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2019 Marvell International Ltd. 
- */ - -#include -#include -#include -#include - -#include "otx2_common.h" -#include "otx2_irq.h" - -#ifdef RTE_EAL_VFIO - -#include -#include -#include -#include -#include - -#define MAX_INTR_VEC_ID RTE_MAX_RXTX_INTR_VEC_ID -#define MSIX_IRQ_SET_BUF_LEN (sizeof(struct vfio_irq_set) + \ - sizeof(int) * (MAX_INTR_VEC_ID)) - -static int -irq_get_info(struct rte_intr_handle *intr_handle) -{ - struct vfio_irq_info irq = { .argsz = sizeof(irq) }; - int rc, vfio_dev_fd; - - irq.index = VFIO_PCI_MSIX_IRQ_INDEX; - - vfio_dev_fd = rte_intr_dev_fd_get(intr_handle); - rc = ioctl(vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq); - if (rc < 0) { - otx2_err("Failed to get IRQ info rc=%d errno=%d", rc, errno); - return rc; - } - - otx2_base_dbg("Flags=0x%x index=0x%x count=0x%x max_intr_vec_id=0x%x", - irq.flags, irq.index, irq.count, MAX_INTR_VEC_ID); - - if (irq.count > MAX_INTR_VEC_ID) { - otx2_err("HW max=%d > MAX_INTR_VEC_ID: %d", - rte_intr_max_intr_get(intr_handle), - MAX_INTR_VEC_ID); - if (rte_intr_max_intr_set(intr_handle, MAX_INTR_VEC_ID)) - return -1; - } else { - if (rte_intr_max_intr_set(intr_handle, irq.count)) - return -1; - } - - return 0; -} - -static int -irq_config(struct rte_intr_handle *intr_handle, unsigned int vec) -{ - char irq_set_buf[MSIX_IRQ_SET_BUF_LEN]; - struct vfio_irq_set *irq_set; - int len, rc, vfio_dev_fd; - int32_t *fd_ptr; - - if (vec > (uint32_t)rte_intr_max_intr_get(intr_handle)) { - otx2_err("vector=%d greater than max_intr=%d", vec, - rte_intr_max_intr_get(intr_handle)); - return -EINVAL; - } - - len = sizeof(struct vfio_irq_set) + sizeof(int32_t); - - irq_set = (struct vfio_irq_set *)irq_set_buf; - irq_set->argsz = len; - - irq_set->start = vec; - irq_set->count = 1; - irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD | - VFIO_IRQ_SET_ACTION_TRIGGER; - irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX; - - /* Use vec fd to set interrupt vectors */ - fd_ptr = (int32_t *)&irq_set->data[0]; - fd_ptr[0] = rte_intr_efds_index_get(intr_handle, vec); - - 
vfio_dev_fd = rte_intr_dev_fd_get(intr_handle); - rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); - if (rc) - otx2_err("Failed to set_irqs vector=0x%x rc=%d", vec, rc); - - return rc; -} - -static int -irq_init(struct rte_intr_handle *intr_handle) -{ - char irq_set_buf[MSIX_IRQ_SET_BUF_LEN]; - struct vfio_irq_set *irq_set; - int len, rc, vfio_dev_fd; - int32_t *fd_ptr; - uint32_t i; - - if (rte_intr_max_intr_get(intr_handle) > MAX_INTR_VEC_ID) { - otx2_err("Max_intr=%d greater than MAX_INTR_VEC_ID=%d", - rte_intr_max_intr_get(intr_handle), - MAX_INTR_VEC_ID); - return -ERANGE; - } - - len = sizeof(struct vfio_irq_set) + - sizeof(int32_t) * rte_intr_max_intr_get(intr_handle); - - irq_set = (struct vfio_irq_set *)irq_set_buf; - irq_set->argsz = len; - irq_set->start = 0; - irq_set->count = rte_intr_max_intr_get(intr_handle); - irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD | - VFIO_IRQ_SET_ACTION_TRIGGER; - irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX; - - fd_ptr = (int32_t *)&irq_set->data[0]; - for (i = 0; i < irq_set->count; i++) - fd_ptr[i] = -1; - - vfio_dev_fd = rte_intr_dev_fd_get(intr_handle); - rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); - if (rc) - otx2_err("Failed to set irqs vector rc=%d", rc); - - return rc; -} - -/** - * @internal - * Disable IRQ - */ -int -otx2_disable_irqs(struct rte_intr_handle *intr_handle) -{ - /* Clear max_intr to indicate re-init next time */ - if (rte_intr_max_intr_set(intr_handle, 0)) - return -1; - return rte_intr_disable(intr_handle); -} - -/** - * @internal - * Register IRQ - */ -int -otx2_register_irq(struct rte_intr_handle *intr_handle, - rte_intr_callback_fn cb, void *data, unsigned int vec) -{ - struct rte_intr_handle *tmp_handle; - uint32_t nb_efd, tmp_nb_efd; - int rc, fd; - - /* If no max_intr read from VFIO */ - if (rte_intr_max_intr_get(intr_handle) == 0) { - irq_get_info(intr_handle); - irq_init(intr_handle); - } - - if (vec > (uint32_t)rte_intr_max_intr_get(intr_handle)) { - otx2_err("Vector=%d greater 
than max_intr=%d", vec, - rte_intr_max_intr_get(intr_handle)); - return -EINVAL; - } - - tmp_handle = intr_handle; - /* Create new eventfd for interrupt vector */ - fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC); - if (fd == -1) - return -ENODEV; - - if (rte_intr_fd_set(tmp_handle, fd)) - return errno; - - /* Register vector interrupt callback */ - rc = rte_intr_callback_register(tmp_handle, cb, data); - if (rc) { - otx2_err("Failed to register vector:0x%x irq callback.", vec); - return rc; - } - - rte_intr_efds_index_set(intr_handle, vec, fd); - nb_efd = (vec > (uint32_t)rte_intr_nb_efd_get(intr_handle)) ? - vec : (uint32_t)rte_intr_nb_efd_get(intr_handle); - rte_intr_nb_efd_set(intr_handle, nb_efd); - - tmp_nb_efd = rte_intr_nb_efd_get(intr_handle) + 1; - if (tmp_nb_efd > (uint32_t)rte_intr_max_intr_get(intr_handle)) - rte_intr_max_intr_set(intr_handle, tmp_nb_efd); - - otx2_base_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)", vec, - rte_intr_nb_efd_get(intr_handle), - rte_intr_max_intr_get(intr_handle)); - - /* Enable MSIX vectors to VFIO */ - return irq_config(intr_handle, vec); -} - -/** - * @internal - * Unregister IRQ - */ -void -otx2_unregister_irq(struct rte_intr_handle *intr_handle, - rte_intr_callback_fn cb, void *data, unsigned int vec) -{ - struct rte_intr_handle *tmp_handle; - uint8_t retries = 5; /* 5 ms */ - int rc, fd; - - if (vec > (uint32_t)rte_intr_max_intr_get(intr_handle)) { - otx2_err("Error unregistering MSI-X interrupts vec:%d > %d", - vec, rte_intr_max_intr_get(intr_handle)); - return; - } - - tmp_handle = intr_handle; - fd = rte_intr_efds_index_get(intr_handle, vec); - if (fd == -1) - return; - - if (rte_intr_fd_set(tmp_handle, fd)) - return; - - do { - /* Un-register callback func from platform lib */ - rc = rte_intr_callback_unregister(tmp_handle, cb, data); - /* Retry only if -EAGAIN */ - if (rc != -EAGAIN) - break; - rte_delay_ms(1); - retries--; - } while (retries); - - if (rc < 0) { - otx2_err("Error unregistering MSI-X vec %d 
cb, rc=%d", vec, rc); - return; - } - - otx2_base_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)", vec, - rte_intr_nb_efd_get(intr_handle), - rte_intr_max_intr_get(intr_handle)); - - if (rte_intr_efds_index_get(intr_handle, vec) != -1) - close(rte_intr_efds_index_get(intr_handle, vec)); - /* Disable MSIX vectors from VFIO */ - rte_intr_efds_index_set(intr_handle, vec, -1); - irq_config(intr_handle, vec); -} - -#else - -/** - * @internal - * Register IRQ - */ -int otx2_register_irq(__rte_unused struct rte_intr_handle *intr_handle, - __rte_unused rte_intr_callback_fn cb, - __rte_unused void *data, __rte_unused unsigned int vec) -{ - return -ENOTSUP; -} - - -/** - * @internal - * Unregister IRQ - */ -void otx2_unregister_irq(__rte_unused struct rte_intr_handle *intr_handle, - __rte_unused rte_intr_callback_fn cb, - __rte_unused void *data, __rte_unused unsigned int vec) -{ -} - -/** - * @internal - * Disable IRQ - */ -int otx2_disable_irqs(__rte_unused struct rte_intr_handle *intr_handle) -{ - return -ENOTSUP; -} - -#endif /* RTE_EAL_VFIO */ diff --git a/drivers/common/octeontx2/otx2_irq.h b/drivers/common/octeontx2/otx2_irq.h deleted file mode 100644 index 0683cf5543..0000000000 --- a/drivers/common/octeontx2/otx2_irq.h +++ /dev/null @@ -1,28 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2019 Marvell International Ltd. 
- */ - -#ifndef _OTX2_IRQ_H_ -#define _OTX2_IRQ_H_ - -#include -#include - -#include "otx2_common.h" - -typedef struct { -/* 128 devices translate to two 64 bits dwords */ -#define MAX_VFPF_DWORD_BITS 2 - uint64_t bits[MAX_VFPF_DWORD_BITS]; -} otx2_intr_t; - -__rte_internal -int otx2_register_irq(struct rte_intr_handle *intr_handle, - rte_intr_callback_fn cb, void *data, unsigned int vec); -__rte_internal -void otx2_unregister_irq(struct rte_intr_handle *intr_handle, - rte_intr_callback_fn cb, void *data, unsigned int vec); -__rte_internal -int otx2_disable_irqs(struct rte_intr_handle *intr_handle); - -#endif /* _OTX2_IRQ_H_ */ diff --git a/drivers/common/octeontx2/otx2_mbox.c b/drivers/common/octeontx2/otx2_mbox.c deleted file mode 100644 index 6df1e8ea63..0000000000 --- a/drivers/common/octeontx2/otx2_mbox.c +++ /dev/null @@ -1,465 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2019 Marvell International Ltd. - */ - -#include -#include -#include -#include - -#include -#include -#include - -#include "otx2_mbox.h" -#include "otx2_dev.h" - -#define RVU_AF_AFPF_MBOX0 (0x02000) -#define RVU_AF_AFPF_MBOX1 (0x02008) - -#define RVU_PF_PFAF_MBOX0 (0xC00) -#define RVU_PF_PFAF_MBOX1 (0xC08) - -#define RVU_PF_VFX_PFVF_MBOX0 (0x0000) -#define RVU_PF_VFX_PFVF_MBOX1 (0x0008) - -#define RVU_VF_VFPF_MBOX0 (0x0000) -#define RVU_VF_VFPF_MBOX1 (0x0008) - -static inline uint16_t -msgs_offset(void) -{ - return RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN); -} - -void -otx2_mbox_fini(struct otx2_mbox *mbox) -{ - mbox->reg_base = 0; - mbox->hwbase = 0; - rte_free(mbox->dev); - mbox->dev = NULL; -} - -void -otx2_mbox_reset(struct otx2_mbox *mbox, int devid) -{ - struct otx2_mbox_dev *mdev = &mbox->dev[devid]; - struct mbox_hdr *tx_hdr = - (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->tx_start); - struct mbox_hdr *rx_hdr = - (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start); - - rte_spinlock_lock(&mdev->mbox_lock); - mdev->msg_size = 0; - 
mdev->rsp_size = 0; - tx_hdr->msg_size = 0; - tx_hdr->num_msgs = 0; - rx_hdr->msg_size = 0; - rx_hdr->num_msgs = 0; - rte_spinlock_unlock(&mdev->mbox_lock); -} - -int -otx2_mbox_init(struct otx2_mbox *mbox, uintptr_t hwbase, uintptr_t reg_base, - int direction, int ndevs, uint64_t intr_offset) -{ - struct otx2_mbox_dev *mdev; - int devid; - - mbox->intr_offset = intr_offset; - mbox->reg_base = reg_base; - mbox->hwbase = hwbase; - - switch (direction) { - case MBOX_DIR_AFPF: - case MBOX_DIR_PFVF: - mbox->tx_start = MBOX_DOWN_TX_START; - mbox->rx_start = MBOX_DOWN_RX_START; - mbox->tx_size = MBOX_DOWN_TX_SIZE; - mbox->rx_size = MBOX_DOWN_RX_SIZE; - break; - case MBOX_DIR_PFAF: - case MBOX_DIR_VFPF: - mbox->tx_start = MBOX_DOWN_RX_START; - mbox->rx_start = MBOX_DOWN_TX_START; - mbox->tx_size = MBOX_DOWN_RX_SIZE; - mbox->rx_size = MBOX_DOWN_TX_SIZE; - break; - case MBOX_DIR_AFPF_UP: - case MBOX_DIR_PFVF_UP: - mbox->tx_start = MBOX_UP_TX_START; - mbox->rx_start = MBOX_UP_RX_START; - mbox->tx_size = MBOX_UP_TX_SIZE; - mbox->rx_size = MBOX_UP_RX_SIZE; - break; - case MBOX_DIR_PFAF_UP: - case MBOX_DIR_VFPF_UP: - mbox->tx_start = MBOX_UP_RX_START; - mbox->rx_start = MBOX_UP_TX_START; - mbox->tx_size = MBOX_UP_RX_SIZE; - mbox->rx_size = MBOX_UP_TX_SIZE; - break; - default: - return -ENODEV; - } - - switch (direction) { - case MBOX_DIR_AFPF: - case MBOX_DIR_AFPF_UP: - mbox->trigger = RVU_AF_AFPF_MBOX0; - mbox->tr_shift = 4; - break; - case MBOX_DIR_PFAF: - case MBOX_DIR_PFAF_UP: - mbox->trigger = RVU_PF_PFAF_MBOX1; - mbox->tr_shift = 0; - break; - case MBOX_DIR_PFVF: - case MBOX_DIR_PFVF_UP: - mbox->trigger = RVU_PF_VFX_PFVF_MBOX0; - mbox->tr_shift = 12; - break; - case MBOX_DIR_VFPF: - case MBOX_DIR_VFPF_UP: - mbox->trigger = RVU_VF_VFPF_MBOX1; - mbox->tr_shift = 0; - break; - default: - return -ENODEV; - } - - mbox->dev = rte_zmalloc("mbox dev", - ndevs * sizeof(struct otx2_mbox_dev), - OTX2_ALIGN); - if (!mbox->dev) { - otx2_mbox_fini(mbox); - return -ENOMEM; - } - 
mbox->ndevs = ndevs; - for (devid = 0; devid < ndevs; devid++) { - mdev = &mbox->dev[devid]; - mdev->mbase = (void *)(mbox->hwbase + (devid * MBOX_SIZE)); - rte_spinlock_init(&mdev->mbox_lock); - /* Init header to reset value */ - otx2_mbox_reset(mbox, devid); - } - - return 0; -} - -/** - * @internal - * Allocate a message response - */ -struct mbox_msghdr * -otx2_mbox_alloc_msg_rsp(struct otx2_mbox *mbox, int devid, int size, - int size_rsp) -{ - struct otx2_mbox_dev *mdev = &mbox->dev[devid]; - struct mbox_msghdr *msghdr = NULL; - - rte_spinlock_lock(&mdev->mbox_lock); - size = RTE_ALIGN(size, MBOX_MSG_ALIGN); - size_rsp = RTE_ALIGN(size_rsp, MBOX_MSG_ALIGN); - /* Check if there is space in mailbox */ - if ((mdev->msg_size + size) > mbox->tx_size - msgs_offset()) - goto exit; - if ((mdev->rsp_size + size_rsp) > mbox->rx_size - msgs_offset()) - goto exit; - if (mdev->msg_size == 0) - mdev->num_msgs = 0; - mdev->num_msgs++; - - msghdr = (struct mbox_msghdr *)(((uintptr_t)mdev->mbase + - mbox->tx_start + msgs_offset() + mdev->msg_size)); - - /* Clear the whole msg region */ - otx2_mbox_memset(msghdr, 0, sizeof(*msghdr) + size); - /* Init message header with reset values */ - msghdr->ver = OTX2_MBOX_VERSION; - mdev->msg_size += size; - mdev->rsp_size += size_rsp; - msghdr->next_msgoff = mdev->msg_size + msgs_offset(); -exit: - rte_spinlock_unlock(&mdev->mbox_lock); - - return msghdr; -} - -/** - * @internal - * Send a mailbox message - */ -void -otx2_mbox_msg_send(struct otx2_mbox *mbox, int devid) -{ - struct otx2_mbox_dev *mdev = &mbox->dev[devid]; - struct mbox_hdr *tx_hdr = - (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->tx_start); - struct mbox_hdr *rx_hdr = - (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start); - - /* Reset header for next messages */ - tx_hdr->msg_size = mdev->msg_size; - mdev->msg_size = 0; - mdev->rsp_size = 0; - mdev->msgs_acked = 0; - - /* num_msgs != 0 signals to the peer that the buffer has a number of - * messages. 
So this should be written after copying txmem - */ - tx_hdr->num_msgs = mdev->num_msgs; - rx_hdr->num_msgs = 0; - - /* Sync mbox data into memory */ - rte_wmb(); - - /* The interrupt should be fired after num_msgs is written - * to the shared memory - */ - rte_write64(1, (volatile void *)(mbox->reg_base + - (mbox->trigger | (devid << mbox->tr_shift)))); -} - -/** - * @internal - * Wait and get mailbox response - */ -int -otx2_mbox_get_rsp(struct otx2_mbox *mbox, int devid, void **msg) -{ - struct otx2_mbox_dev *mdev = &mbox->dev[devid]; - struct mbox_msghdr *msghdr; - uint64_t offset; - int rc; - - rc = otx2_mbox_wait_for_rsp(mbox, devid); - if (rc != 1) - return -EIO; - - rte_rmb(); - - offset = mbox->rx_start + - RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN); - msghdr = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset); - if (msg != NULL) - *msg = msghdr; - - return msghdr->rc; -} - -/** - * Polling for given wait time to get mailbox response - */ -static int -mbox_poll(struct otx2_mbox *mbox, uint32_t wait) -{ - uint32_t timeout = 0, sleep = 1; - uint32_t wait_us = wait * 1000; - uint64_t rsp_reg = 0; - uintptr_t reg_addr; - - reg_addr = mbox->reg_base + mbox->intr_offset; - do { - rsp_reg = otx2_read64(reg_addr); - - if (timeout >= wait_us) - return -ETIMEDOUT; - - rte_delay_us(sleep); - timeout += sleep; - } while (!rsp_reg); - - rte_smp_rmb(); - - /* Clear interrupt */ - otx2_write64(rsp_reg, reg_addr); - - /* Reset mbox */ - otx2_mbox_reset(mbox, 0); - - return 0; -} - -/** - * @internal - * Wait and get mailbox response with timeout - */ -int -otx2_mbox_get_rsp_tmo(struct otx2_mbox *mbox, int devid, void **msg, - uint32_t tmo) -{ - struct otx2_mbox_dev *mdev = &mbox->dev[devid]; - struct mbox_msghdr *msghdr; - uint64_t offset; - int rc; - - rc = otx2_mbox_wait_for_rsp_tmo(mbox, devid, tmo); - if (rc != 1) - return -EIO; - - rte_rmb(); - - offset = mbox->rx_start + - RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN); - msghdr = (struct 
mbox_msghdr *)((uintptr_t)mdev->mbase + offset); - if (msg != NULL) - *msg = msghdr; - - return msghdr->rc; -} - -static int -mbox_wait(struct otx2_mbox *mbox, int devid, uint32_t rst_timo) -{ - volatile struct otx2_mbox_dev *mdev = &mbox->dev[devid]; - uint32_t timeout = 0, sleep = 1; - - rst_timo = rst_timo * 1000; /* Milli seconds to micro seconds */ - while (mdev->num_msgs > mdev->msgs_acked) { - rte_delay_us(sleep); - timeout += sleep; - if (timeout >= rst_timo) { - struct mbox_hdr *tx_hdr = - (struct mbox_hdr *)((uintptr_t)mdev->mbase + - mbox->tx_start); - struct mbox_hdr *rx_hdr = - (struct mbox_hdr *)((uintptr_t)mdev->mbase + - mbox->rx_start); - - otx2_err("MBOX[devid: %d] message wait timeout %d, " - "num_msgs: %d, msgs_acked: %d " - "(tx/rx num_msgs: %d/%d), msg_size: %d, " - "rsp_size: %d", - devid, timeout, mdev->num_msgs, - mdev->msgs_acked, tx_hdr->num_msgs, - rx_hdr->num_msgs, mdev->msg_size, - mdev->rsp_size); - - return -EIO; - } - rte_rmb(); - } - return 0; -} - -int -otx2_mbox_wait_for_rsp_tmo(struct otx2_mbox *mbox, int devid, uint32_t tmo) -{ - struct otx2_mbox_dev *mdev = &mbox->dev[devid]; - int rc = 0; - - /* Sync with mbox region */ - rte_rmb(); - - if (mbox->trigger == RVU_PF_VFX_PFVF_MBOX1 || - mbox->trigger == RVU_PF_VFX_PFVF_MBOX0) { - /* In case of VF, Wait a bit more to account round trip delay */ - tmo = tmo * 2; - } - - /* Wait message */ - if (rte_thread_is_intr()) - rc = mbox_poll(mbox, tmo); - else - rc = mbox_wait(mbox, devid, tmo); - - if (!rc) - rc = mdev->num_msgs; - - return rc; -} - -/** - * @internal - * Wait for the mailbox response - */ -int -otx2_mbox_wait_for_rsp(struct otx2_mbox *mbox, int devid) -{ - return otx2_mbox_wait_for_rsp_tmo(mbox, devid, MBOX_RSP_TIMEOUT); -} - -int -otx2_mbox_get_availmem(struct otx2_mbox *mbox, int devid) -{ - struct otx2_mbox_dev *mdev = &mbox->dev[devid]; - int avail; - - rte_spinlock_lock(&mdev->mbox_lock); - avail = mbox->tx_size - mdev->msg_size - msgs_offset(); - 
rte_spinlock_unlock(&mdev->mbox_lock); - - return avail; -} - -int -otx2_send_ready_msg(struct otx2_mbox *mbox, uint16_t *pcifunc) -{ - struct ready_msg_rsp *rsp; - int rc; - - otx2_mbox_alloc_msg_ready(mbox); - - otx2_mbox_msg_send(mbox, 0); - rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp); - if (rc) - return rc; - - if (rsp->hdr.ver != OTX2_MBOX_VERSION) { - otx2_err("Incompatible MBox versions(AF: 0x%04x DPDK: 0x%04x)", - rsp->hdr.ver, OTX2_MBOX_VERSION); - return -EPIPE; - } - - if (pcifunc) - *pcifunc = rsp->hdr.pcifunc; - - return 0; -} - -int -otx2_reply_invalid_msg(struct otx2_mbox *mbox, int devid, uint16_t pcifunc, - uint16_t id) -{ - struct msg_rsp *rsp; - - rsp = (struct msg_rsp *)otx2_mbox_alloc_msg(mbox, devid, sizeof(*rsp)); - if (!rsp) - return -ENOMEM; - rsp->hdr.id = id; - rsp->hdr.sig = OTX2_MBOX_RSP_SIG; - rsp->hdr.rc = MBOX_MSG_INVALID; - rsp->hdr.pcifunc = pcifunc; - - return 0; -} - -/** - * @internal - * Convert mail box ID to name - */ -const char *otx2_mbox_id2name(uint16_t id) -{ - switch (id) { -#define M(_name, _id, _1, _2, _3) case _id: return # _name; - MBOX_MESSAGES - MBOX_UP_CGX_MESSAGES -#undef M - default : - return "INVALID ID"; - } -} - -int otx2_mbox_id2size(uint16_t id) -{ - switch (id) { -#define M(_1, _id, _2, _req_type, _3) case _id: return sizeof(struct _req_type); - MBOX_MESSAGES - MBOX_UP_CGX_MESSAGES -#undef M - default : - return 0; - } -} diff --git a/drivers/common/octeontx2/otx2_mbox.h b/drivers/common/octeontx2/otx2_mbox.h deleted file mode 100644 index 25b521a7fa..0000000000 --- a/drivers/common/octeontx2/otx2_mbox.h +++ /dev/null @@ -1,1958 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2019 Marvell International Ltd. 
- */ - -#ifndef __OTX2_MBOX_H__ -#define __OTX2_MBOX_H__ - -#include -#include - -#include -#include - -#include - -#define SZ_64K (64ULL * 1024ULL) -#define SZ_1K (1ULL * 1024ULL) -#define MBOX_SIZE SZ_64K - -/* AF/PF: PF initiated, PF/VF VF initiated */ -#define MBOX_DOWN_RX_START 0 -#define MBOX_DOWN_RX_SIZE (46 * SZ_1K) -#define MBOX_DOWN_TX_START (MBOX_DOWN_RX_START + MBOX_DOWN_RX_SIZE) -#define MBOX_DOWN_TX_SIZE (16 * SZ_1K) -/* AF/PF: AF initiated, PF/VF PF initiated */ -#define MBOX_UP_RX_START (MBOX_DOWN_TX_START + MBOX_DOWN_TX_SIZE) -#define MBOX_UP_RX_SIZE SZ_1K -#define MBOX_UP_TX_START (MBOX_UP_RX_START + MBOX_UP_RX_SIZE) -#define MBOX_UP_TX_SIZE SZ_1K - -#if MBOX_UP_TX_SIZE + MBOX_UP_TX_START != MBOX_SIZE -# error "Incorrect mailbox area sizes" -#endif - -#define INTR_MASK(pfvfs) ((pfvfs < 64) ? (BIT_ULL(pfvfs) - 1) : (~0ull)) - -#define MBOX_RSP_TIMEOUT 3000 /* Time to wait for mbox response in ms */ - -#define MBOX_MSG_ALIGN 16 /* Align mbox msg start to 16bytes */ - -/* Mailbox directions */ -#define MBOX_DIR_AFPF 0 /* AF replies to PF */ -#define MBOX_DIR_PFAF 1 /* PF sends messages to AF */ -#define MBOX_DIR_PFVF 2 /* PF replies to VF */ -#define MBOX_DIR_VFPF 3 /* VF sends messages to PF */ -#define MBOX_DIR_AFPF_UP 4 /* AF sends messages to PF */ -#define MBOX_DIR_PFAF_UP 5 /* PF replies to AF */ -#define MBOX_DIR_PFVF_UP 6 /* PF sends messages to VF */ -#define MBOX_DIR_VFPF_UP 7 /* VF replies to PF */ - -/* Device memory does not support unaligned access, instruct compiler to - * not optimize the memory access when working with mailbox memory. 
- */ -#define __otx2_io volatile - -struct otx2_mbox_dev { - void *mbase; /* This dev's mbox region */ - rte_spinlock_t mbox_lock; - uint16_t msg_size; /* Total msg size to be sent */ - uint16_t rsp_size; /* Total rsp size to be sure the reply is ok */ - uint16_t num_msgs; /* No of msgs sent or waiting for response */ - uint16_t msgs_acked; /* No of msgs for which response is received */ -}; - -struct otx2_mbox { - uintptr_t hwbase; /* Mbox region advertised by HW */ - uintptr_t reg_base;/* CSR base for this dev */ - uint64_t trigger; /* Trigger mbox notification */ - uint16_t tr_shift; /* Mbox trigger shift */ - uint64_t rx_start; /* Offset of Rx region in mbox memory */ - uint64_t tx_start; /* Offset of Tx region in mbox memory */ - uint16_t rx_size; /* Size of Rx region */ - uint16_t tx_size; /* Size of Tx region */ - uint16_t ndevs; /* The number of peers */ - struct otx2_mbox_dev *dev; - uint64_t intr_offset; /* Offset to interrupt register */ -}; - -/* Header which precedes all mbox messages */ -struct mbox_hdr { - uint64_t __otx2_io msg_size; /* Total msgs size embedded */ - uint16_t __otx2_io num_msgs; /* No of msgs embedded */ -}; - -/* Header which precedes every msg and is also part of it */ -struct mbox_msghdr { - uint16_t __otx2_io pcifunc; /* Who's sending this msg */ - uint16_t __otx2_io id; /* Mbox message ID */ -#define OTX2_MBOX_REQ_SIG (0xdead) -#define OTX2_MBOX_RSP_SIG (0xbeef) - /* Signature, for validating corrupted msgs */ - uint16_t __otx2_io sig; -#define OTX2_MBOX_VERSION (0x000b) - /* Version of msg's structure for this ID */ - uint16_t __otx2_io ver; - /* Offset of next msg within mailbox region */ - uint16_t __otx2_io next_msgoff; - int __otx2_io rc; /* Msg processed response code */ -}; - -/* Mailbox message types */ -#define MBOX_MSG_MASK 0xFFFF -#define MBOX_MSG_INVALID 0xFFFE -#define MBOX_MSG_MAX 0xFFFF - -#define MBOX_MESSAGES \ -/* Generic mbox IDs (range 0x000 - 0x1FF) */ \ -M(READY, 0x001, ready, msg_req, ready_msg_rsp) \ 
-M(ATTACH_RESOURCES, 0x002, attach_resources, rsrc_attach_req, msg_rsp)\ -M(DETACH_RESOURCES, 0x003, detach_resources, rsrc_detach_req, msg_rsp)\ -M(FREE_RSRC_CNT, 0x004, free_rsrc_cnt, msg_req, free_rsrcs_rsp) \ -M(MSIX_OFFSET, 0x005, msix_offset, msg_req, msix_offset_rsp) \ -M(VF_FLR, 0x006, vf_flr, msg_req, msg_rsp) \ -M(PTP_OP, 0x007, ptp_op, ptp_req, ptp_rsp) \ -M(GET_HW_CAP, 0x008, get_hw_cap, msg_req, get_hw_cap_rsp) \ -M(NDC_SYNC_OP, 0x009, ndc_sync_op, ndc_sync_op, msg_rsp) \ -/* CGX mbox IDs (range 0x200 - 0x3FF) */ \ -M(CGX_START_RXTX, 0x200, cgx_start_rxtx, msg_req, msg_rsp) \ -M(CGX_STOP_RXTX, 0x201, cgx_stop_rxtx, msg_req, msg_rsp) \ -M(CGX_STATS, 0x202, cgx_stats, msg_req, cgx_stats_rsp) \ -M(CGX_MAC_ADDR_SET, 0x203, cgx_mac_addr_set, cgx_mac_addr_set_or_get,\ - cgx_mac_addr_set_or_get) \ -M(CGX_MAC_ADDR_GET, 0x204, cgx_mac_addr_get, cgx_mac_addr_set_or_get,\ - cgx_mac_addr_set_or_get) \ -M(CGX_PROMISC_ENABLE, 0x205, cgx_promisc_enable, msg_req, msg_rsp) \ -M(CGX_PROMISC_DISABLE, 0x206, cgx_promisc_disable, msg_req, msg_rsp) \ -M(CGX_START_LINKEVENTS, 0x207, cgx_start_linkevents, msg_req, msg_rsp) \ -M(CGX_STOP_LINKEVENTS, 0x208, cgx_stop_linkevents, msg_req, msg_rsp) \ -M(CGX_GET_LINKINFO, 0x209, cgx_get_linkinfo, msg_req, cgx_link_info_msg)\ -M(CGX_INTLBK_ENABLE, 0x20A, cgx_intlbk_enable, msg_req, msg_rsp) \ -M(CGX_INTLBK_DISABLE, 0x20B, cgx_intlbk_disable, msg_req, msg_rsp) \ -M(CGX_PTP_RX_ENABLE, 0x20C, cgx_ptp_rx_enable, msg_req, msg_rsp) \ -M(CGX_PTP_RX_DISABLE, 0x20D, cgx_ptp_rx_disable, msg_req, msg_rsp) \ -M(CGX_CFG_PAUSE_FRM, 0x20E, cgx_cfg_pause_frm, cgx_pause_frm_cfg, \ - cgx_pause_frm_cfg) \ -M(CGX_FW_DATA_GET, 0x20F, cgx_get_aux_link_info, msg_req, cgx_fw_data) \ -M(CGX_FEC_SET, 0x210, cgx_set_fec_param, fec_mode, fec_mode) \ -M(CGX_MAC_ADDR_ADD, 0x211, cgx_mac_addr_add, cgx_mac_addr_add_req, \ - cgx_mac_addr_add_rsp) \ -M(CGX_MAC_ADDR_DEL, 0x212, cgx_mac_addr_del, cgx_mac_addr_del_req, \ - msg_rsp) \ -M(CGX_MAC_MAX_ENTRIES_GET, 0x213, 
cgx_mac_max_entries_get, msg_req, \ - cgx_max_dmac_entries_get_rsp) \ -M(CGX_SET_LINK_STATE, 0x214, cgx_set_link_state, \ - cgx_set_link_state_msg, msg_rsp) \ -M(CGX_GET_PHY_MOD_TYPE, 0x215, cgx_get_phy_mod_type, msg_req, \ - cgx_phy_mod_type) \ -M(CGX_SET_PHY_MOD_TYPE, 0x216, cgx_set_phy_mod_type, cgx_phy_mod_type, \ - msg_rsp) \ -M(CGX_FEC_STATS, 0x217, cgx_fec_stats, msg_req, cgx_fec_stats_rsp) \ -M(CGX_SET_LINK_MODE, 0x218, cgx_set_link_mode, cgx_set_link_mode_req,\ - cgx_set_link_mode_rsp) \ -M(CGX_GET_PHY_FEC_STATS, 0x219, cgx_get_phy_fec_stats, msg_req, msg_rsp) \ -M(CGX_STATS_RST, 0x21A, cgx_stats_rst, msg_req, msg_rsp) \ -/* NPA mbox IDs (range 0x400 - 0x5FF) */ \ -M(NPA_LF_ALLOC, 0x400, npa_lf_alloc, npa_lf_alloc_req, \ - npa_lf_alloc_rsp) \ -M(NPA_LF_FREE, 0x401, npa_lf_free, msg_req, msg_rsp) \ -M(NPA_AQ_ENQ, 0x402, npa_aq_enq, npa_aq_enq_req, npa_aq_enq_rsp)\ -M(NPA_HWCTX_DISABLE, 0x403, npa_hwctx_disable, hwctx_disable_req, msg_rsp)\ -/* SSO/SSOW mbox IDs (range 0x600 - 0x7FF) */ \ -M(SSO_LF_ALLOC, 0x600, sso_lf_alloc, sso_lf_alloc_req, \ - sso_lf_alloc_rsp) \ -M(SSO_LF_FREE, 0x601, sso_lf_free, sso_lf_free_req, msg_rsp) \ -M(SSOW_LF_ALLOC, 0x602, ssow_lf_alloc, ssow_lf_alloc_req, msg_rsp)\ -M(SSOW_LF_FREE, 0x603, ssow_lf_free, ssow_lf_free_req, msg_rsp) \ -M(SSO_HW_SETCONFIG, 0x604, sso_hw_setconfig, sso_hw_setconfig, \ - msg_rsp) \ -M(SSO_GRP_SET_PRIORITY, 0x605, sso_grp_set_priority, sso_grp_priority, \ - msg_rsp) \ -M(SSO_GRP_GET_PRIORITY, 0x606, sso_grp_get_priority, sso_info_req, \ - sso_grp_priority) \ -M(SSO_WS_CACHE_INV, 0x607, sso_ws_cache_inv, msg_req, msg_rsp) \ -M(SSO_GRP_QOS_CONFIG, 0x608, sso_grp_qos_config, sso_grp_qos_cfg, \ - msg_rsp) \ -M(SSO_GRP_GET_STATS, 0x609, sso_grp_get_stats, sso_info_req, \ - sso_grp_stats) \ -M(SSO_HWS_GET_STATS, 0x610, sso_hws_get_stats, sso_info_req, \ - sso_hws_stats) \ -M(SSO_HW_RELEASE_XAQ, 0x611, sso_hw_release_xaq_aura, \ - sso_release_xaq, msg_rsp) \ -/* TIM mbox IDs (range 0x800 - 0x9FF) */ \ 
-M(TIM_LF_ALLOC, 0x800, tim_lf_alloc, tim_lf_alloc_req, \ - tim_lf_alloc_rsp) \ -M(TIM_LF_FREE, 0x801, tim_lf_free, tim_ring_req, msg_rsp) \ -M(TIM_CONFIG_RING, 0x802, tim_config_ring, tim_config_req, msg_rsp)\ -M(TIM_ENABLE_RING, 0x803, tim_enable_ring, tim_ring_req, \ - tim_enable_rsp) \ -M(TIM_DISABLE_RING, 0x804, tim_disable_ring, tim_ring_req, msg_rsp) \ -/* CPT mbox IDs (range 0xA00 - 0xBFF) */ \ -M(CPT_LF_ALLOC, 0xA00, cpt_lf_alloc, cpt_lf_alloc_req_msg, \ - cpt_lf_alloc_rsp_msg) \ -M(CPT_LF_FREE, 0xA01, cpt_lf_free, msg_req, msg_rsp) \ -M(CPT_RD_WR_REGISTER, 0xA02, cpt_rd_wr_register, cpt_rd_wr_reg_msg, \ - cpt_rd_wr_reg_msg) \ -M(CPT_SET_CRYPTO_GRP, 0xA03, cpt_set_crypto_grp, \ - cpt_set_crypto_grp_req_msg, \ - msg_rsp) \ -M(CPT_INLINE_IPSEC_CFG, 0xA04, cpt_inline_ipsec_cfg, \ - cpt_inline_ipsec_cfg_msg, msg_rsp) \ -M(CPT_RX_INLINE_LF_CFG, 0xBFE, cpt_rx_inline_lf_cfg, \ - cpt_rx_inline_lf_cfg_msg, msg_rsp) \ -M(CPT_GET_CAPS, 0xBFD, cpt_caps_get, msg_req, cpt_caps_rsp_msg) \ -/* REE mbox IDs (range 0xE00 - 0xFFF) */ \ -M(REE_CONFIG_LF, 0xE01, ree_config_lf, ree_lf_req_msg, \ - msg_rsp) \ -M(REE_RD_WR_REGISTER, 0xE02, ree_rd_wr_register, ree_rd_wr_reg_msg, \ - ree_rd_wr_reg_msg) \ -M(REE_RULE_DB_PROG, 0xE03, ree_rule_db_prog, \ - ree_rule_db_prog_req_msg, \ - msg_rsp) \ -M(REE_RULE_DB_LEN_GET, 0xE04, ree_rule_db_len_get, ree_req_msg, \ - ree_rule_db_len_rsp_msg) \ -M(REE_RULE_DB_GET, 0xE05, ree_rule_db_get, \ - ree_rule_db_get_req_msg, \ - ree_rule_db_get_rsp_msg) \ -/* NPC mbox IDs (range 0x6000 - 0x7FFF) */ \ -M(NPC_MCAM_ALLOC_ENTRY, 0x6000, npc_mcam_alloc_entry, \ - npc_mcam_alloc_entry_req, \ - npc_mcam_alloc_entry_rsp) \ -M(NPC_MCAM_FREE_ENTRY, 0x6001, npc_mcam_free_entry, \ - npc_mcam_free_entry_req, msg_rsp) \ -M(NPC_MCAM_WRITE_ENTRY, 0x6002, npc_mcam_write_entry, \ - npc_mcam_write_entry_req, msg_rsp) \ -M(NPC_MCAM_ENA_ENTRY, 0x6003, npc_mcam_ena_entry, \ - npc_mcam_ena_dis_entry_req, msg_rsp) \ -M(NPC_MCAM_DIS_ENTRY, 0x6004, npc_mcam_dis_entry, \ - 
npc_mcam_ena_dis_entry_req, msg_rsp) \ -M(NPC_MCAM_SHIFT_ENTRY, 0x6005, npc_mcam_shift_entry, \ - npc_mcam_shift_entry_req, \ - npc_mcam_shift_entry_rsp) \ -M(NPC_MCAM_ALLOC_COUNTER, 0x6006, npc_mcam_alloc_counter, \ - npc_mcam_alloc_counter_req, \ - npc_mcam_alloc_counter_rsp) \ -M(NPC_MCAM_FREE_COUNTER, 0x6007, npc_mcam_free_counter, \ - npc_mcam_oper_counter_req, \ - msg_rsp) \ -M(NPC_MCAM_UNMAP_COUNTER, 0x6008, npc_mcam_unmap_counter, \ - npc_mcam_unmap_counter_req, \ - msg_rsp) \ -M(NPC_MCAM_CLEAR_COUNTER, 0x6009, npc_mcam_clear_counter, \ - npc_mcam_oper_counter_req, \ - msg_rsp) \ -M(NPC_MCAM_COUNTER_STATS, 0x600a, npc_mcam_counter_stats, \ - npc_mcam_oper_counter_req, \ - npc_mcam_oper_counter_rsp) \ -M(NPC_MCAM_ALLOC_AND_WRITE_ENTRY, 0x600b, npc_mcam_alloc_and_write_entry,\ - npc_mcam_alloc_and_write_entry_req, \ - npc_mcam_alloc_and_write_entry_rsp) \ -M(NPC_GET_KEX_CFG, 0x600c, npc_get_kex_cfg, msg_req, \ - npc_get_kex_cfg_rsp) \ -M(NPC_INSTALL_FLOW, 0x600d, npc_install_flow, \ - npc_install_flow_req, \ - npc_install_flow_rsp) \ -M(NPC_DELETE_FLOW, 0x600e, npc_delete_flow, \ - npc_delete_flow_req, msg_rsp) \ -M(NPC_MCAM_READ_ENTRY, 0x600f, npc_mcam_read_entry, \ - npc_mcam_read_entry_req, \ - npc_mcam_read_entry_rsp) \ -M(NPC_SET_PKIND, 0x6010, npc_set_pkind, \ - npc_set_pkind, \ - msg_rsp) \ -M(NPC_MCAM_READ_BASE_RULE, 0x6011, npc_read_base_steer_rule, msg_req, \ - npc_mcam_read_base_rule_rsp) \ -/* NIX mbox IDs (range 0x8000 - 0xFFFF) */ \ -M(NIX_LF_ALLOC, 0x8000, nix_lf_alloc, nix_lf_alloc_req, \ - nix_lf_alloc_rsp) \ -M(NIX_LF_FREE, 0x8001, nix_lf_free, nix_lf_free_req, msg_rsp) \ -M(NIX_AQ_ENQ, 0x8002, nix_aq_enq, nix_aq_enq_req, \ - nix_aq_enq_rsp) \ -M(NIX_HWCTX_DISABLE, 0x8003, nix_hwctx_disable, hwctx_disable_req, \ - msg_rsp) \ -M(NIX_TXSCH_ALLOC, 0x8004, nix_txsch_alloc, nix_txsch_alloc_req, \ - nix_txsch_alloc_rsp) \ -M(NIX_TXSCH_FREE, 0x8005, nix_txsch_free, nix_txsch_free_req, \ - msg_rsp) \ -M(NIX_TXSCHQ_CFG, 0x8006, nix_txschq_cfg, 
nix_txschq_config, \ - nix_txschq_config) \ -M(NIX_STATS_RST, 0x8007, nix_stats_rst, msg_req, msg_rsp) \ -M(NIX_VTAG_CFG, 0x8008, nix_vtag_cfg, nix_vtag_config, msg_rsp) \ -M(NIX_RSS_FLOWKEY_CFG, 0x8009, nix_rss_flowkey_cfg, \ - nix_rss_flowkey_cfg, \ - nix_rss_flowkey_cfg_rsp) \ -M(NIX_SET_MAC_ADDR, 0x800a, nix_set_mac_addr, nix_set_mac_addr, \ - msg_rsp) \ -M(NIX_SET_RX_MODE, 0x800b, nix_set_rx_mode, nix_rx_mode, msg_rsp) \ -M(NIX_SET_HW_FRS, 0x800c, nix_set_hw_frs, nix_frs_cfg, msg_rsp) \ -M(NIX_LF_START_RX, 0x800d, nix_lf_start_rx, msg_req, msg_rsp) \ -M(NIX_LF_STOP_RX, 0x800e, nix_lf_stop_rx, msg_req, msg_rsp) \ -M(NIX_MARK_FORMAT_CFG, 0x800f, nix_mark_format_cfg, \ - nix_mark_format_cfg, \ - nix_mark_format_cfg_rsp) \ -M(NIX_SET_RX_CFG, 0x8010, nix_set_rx_cfg, nix_rx_cfg, msg_rsp) \ -M(NIX_LSO_FORMAT_CFG, 0x8011, nix_lso_format_cfg, nix_lso_format_cfg, \ - nix_lso_format_cfg_rsp) \ -M(NIX_LF_PTP_TX_ENABLE, 0x8013, nix_lf_ptp_tx_enable, msg_req, \ - msg_rsp) \ -M(NIX_LF_PTP_TX_DISABLE, 0x8014, nix_lf_ptp_tx_disable, msg_req, \ - msg_rsp) \ -M(NIX_SET_VLAN_TPID, 0x8015, nix_set_vlan_tpid, nix_set_vlan_tpid, \ - msg_rsp) \ -M(NIX_BP_ENABLE, 0x8016, nix_bp_enable, nix_bp_cfg_req, \ - nix_bp_cfg_rsp) \ -M(NIX_BP_DISABLE, 0x8017, nix_bp_disable, nix_bp_cfg_req, msg_rsp)\ -M(NIX_GET_MAC_ADDR, 0x8018, nix_get_mac_addr, msg_req, \ - nix_get_mac_addr_rsp) \ -M(NIX_INLINE_IPSEC_CFG, 0x8019, nix_inline_ipsec_cfg, \ - nix_inline_ipsec_cfg, msg_rsp) \ -M(NIX_INLINE_IPSEC_LF_CFG, \ - 0x801a, nix_inline_ipsec_lf_cfg, \ - nix_inline_ipsec_lf_cfg, msg_rsp) - -/* Messages initiated by AF (range 0xC00 - 0xDFF) */ -#define MBOX_UP_CGX_MESSAGES \ -M(CGX_LINK_EVENT, 0xC00, cgx_link_event, cgx_link_info_msg, \ - msg_rsp) \ -M(CGX_PTP_RX_INFO, 0xC01, cgx_ptp_rx_info, cgx_ptp_rx_info_msg, \ - msg_rsp) - -enum { -#define M(_name, _id, _1, _2, _3) MBOX_MSG_ ## _name = _id, -MBOX_MESSAGES -MBOX_UP_CGX_MESSAGES -#undef M -}; - -/* Mailbox message formats */ - -#define RVU_DEFAULT_PF_FUNC 
0xFFFF - -/* Generic request msg used for those mbox messages which - * don't send any data in the request. - */ -struct msg_req { - struct mbox_msghdr hdr; -}; - -/* Generic response msg used a ack or response for those mbox - * messages which doesn't have a specific rsp msg format. - */ -struct msg_rsp { - struct mbox_msghdr hdr; -}; - -/* RVU mailbox error codes - * Range 256 - 300. - */ -enum rvu_af_status { - RVU_INVALID_VF_ID = -256, -}; - -struct ready_msg_rsp { - struct mbox_msghdr hdr; - uint16_t __otx2_io sclk_feq; /* SCLK frequency */ - uint16_t __otx2_io rclk_freq; /* RCLK frequency */ -}; - -enum npc_pkind_type { - NPC_RX_CUSTOM_PRE_L2_PKIND = 55ULL, - NPC_RX_VLAN_EXDSA_PKIND = 56ULL, - NPC_RX_CHLEN24B_PKIND, - NPC_RX_CPT_HDR_PKIND, - NPC_RX_CHLEN90B_PKIND, - NPC_TX_HIGIG_PKIND, - NPC_RX_HIGIG_PKIND, - NPC_RX_EXDSA_PKIND, - NPC_RX_EDSA_PKIND, - NPC_TX_DEF_PKIND, -}; - -#define OTX2_PRIV_FLAGS_CH_LEN_90B 254 -#define OTX2_PRIV_FLAGS_CH_LEN_24B 255 - -/* Struct to set pkind */ -struct npc_set_pkind { - struct mbox_msghdr hdr; -#define OTX2_PRIV_FLAGS_DEFAULT BIT_ULL(0) -#define OTX2_PRIV_FLAGS_EDSA BIT_ULL(1) -#define OTX2_PRIV_FLAGS_HIGIG BIT_ULL(2) -#define OTX2_PRIV_FLAGS_FDSA BIT_ULL(3) -#define OTX2_PRIV_FLAGS_EXDSA BIT_ULL(4) -#define OTX2_PRIV_FLAGS_VLAN_EXDSA BIT_ULL(5) -#define OTX2_PRIV_FLAGS_CUSTOM BIT_ULL(63) - uint64_t __otx2_io mode; -#define PKIND_TX BIT_ULL(0) -#define PKIND_RX BIT_ULL(1) - uint8_t __otx2_io dir; - uint8_t __otx2_io pkind; /* valid only in case custom flag */ - uint8_t __otx2_io var_len_off; - /* Offset of custom header length field. - * Valid only for pkind NPC_RX_CUSTOM_PRE_L2_PKIND - */ - uint8_t __otx2_io var_len_off_mask; /* Mask for length with in offset */ - uint8_t __otx2_io shift_dir; - /* Shift direction to get length of the - * header at var_len_off - */ -}; - -/* Structure for requesting resource provisioning. 
- * 'modify' flag to be used when either requesting more - * or to detach partial of a certain resource type. - * Rest of the fields specify how many of what type to - * be attached. - * To request LFs from two blocks of same type this mailbox - * can be sent twice as below: - * struct rsrc_attach *attach; - * .. Allocate memory for message .. - * attach->cptlfs = 3; <3 LFs from CPT0> - * .. Send message .. - * .. Allocate memory for message .. - * attach->modify = 1; - * attach->cpt_blkaddr = BLKADDR_CPT1; - * attach->cptlfs = 2; <2 LFs from CPT1> - * .. Send message .. - */ -struct rsrc_attach_req { - struct mbox_msghdr hdr; - uint8_t __otx2_io modify:1; - uint8_t __otx2_io npalf:1; - uint8_t __otx2_io nixlf:1; - uint16_t __otx2_io sso; - uint16_t __otx2_io ssow; - uint16_t __otx2_io timlfs; - uint16_t __otx2_io cptlfs; - uint16_t __otx2_io reelfs; - /* BLKADDR_CPT0/BLKADDR_CPT1 or 0 for BLKADDR_CPT0 */ - int __otx2_io cpt_blkaddr; - /* BLKADDR_REE0/BLKADDR_REE1 or 0 for BLKADDR_REE0 */ - int __otx2_io ree_blkaddr; -}; - -/* Structure for relinquishing resources. - * 'partial' flag to be used when relinquishing all resources - * but only of a certain type. If not set, all resources of all - * types provisioned to the RVU function will be detached. - */ -struct rsrc_detach_req { - struct mbox_msghdr hdr; - uint8_t __otx2_io partial:1; - uint8_t __otx2_io npalf:1; - uint8_t __otx2_io nixlf:1; - uint8_t __otx2_io sso:1; - uint8_t __otx2_io ssow:1; - uint8_t __otx2_io timlfs:1; - uint8_t __otx2_io cptlfs:1; - uint8_t __otx2_io reelfs:1; -}; - -/* NIX Transmit schedulers */ -#define NIX_TXSCH_LVL_SMQ 0x0 -#define NIX_TXSCH_LVL_MDQ 0x0 -#define NIX_TXSCH_LVL_TL4 0x1 -#define NIX_TXSCH_LVL_TL3 0x2 -#define NIX_TXSCH_LVL_TL2 0x3 -#define NIX_TXSCH_LVL_TL1 0x4 -#define NIX_TXSCH_LVL_CNT 0x5 - -/* - * Number of resources available to the caller. - * In reply to MBOX_MSG_FREE_RSRC_CNT. 
- */ -struct free_rsrcs_rsp { - struct mbox_msghdr hdr; - uint16_t __otx2_io schq[NIX_TXSCH_LVL_CNT]; - uint16_t __otx2_io sso; - uint16_t __otx2_io tim; - uint16_t __otx2_io ssow; - uint16_t __otx2_io cpt; - uint8_t __otx2_io npa; - uint8_t __otx2_io nix; - uint16_t __otx2_io schq_nix1[NIX_TXSCH_LVL_CNT]; - uint8_t __otx2_io nix1; - uint8_t __otx2_io cpt1; - uint8_t __otx2_io ree0; - uint8_t __otx2_io ree1; -}; - -#define MSIX_VECTOR_INVALID 0xFFFF -#define MAX_RVU_BLKLF_CNT 256 - -struct msix_offset_rsp { - struct mbox_msghdr hdr; - uint16_t __otx2_io npa_msixoff; - uint16_t __otx2_io nix_msixoff; - uint16_t __otx2_io sso; - uint16_t __otx2_io ssow; - uint16_t __otx2_io timlfs; - uint16_t __otx2_io cptlfs; - uint16_t __otx2_io sso_msixoff[MAX_RVU_BLKLF_CNT]; - uint16_t __otx2_io ssow_msixoff[MAX_RVU_BLKLF_CNT]; - uint16_t __otx2_io timlf_msixoff[MAX_RVU_BLKLF_CNT]; - uint16_t __otx2_io cptlf_msixoff[MAX_RVU_BLKLF_CNT]; - uint16_t __otx2_io cpt1_lfs; - uint16_t __otx2_io ree0_lfs; - uint16_t __otx2_io ree1_lfs; - uint16_t __otx2_io cpt1_lf_msixoff[MAX_RVU_BLKLF_CNT]; - uint16_t __otx2_io ree0_lf_msixoff[MAX_RVU_BLKLF_CNT]; - uint16_t __otx2_io ree1_lf_msixoff[MAX_RVU_BLKLF_CNT]; - -}; - -/* CGX mbox message formats */ - -struct cgx_stats_rsp { - struct mbox_msghdr hdr; -#define CGX_RX_STATS_COUNT 13 -#define CGX_TX_STATS_COUNT 18 - uint64_t __otx2_io rx_stats[CGX_RX_STATS_COUNT]; - uint64_t __otx2_io tx_stats[CGX_TX_STATS_COUNT]; -}; - -struct cgx_fec_stats_rsp { - struct mbox_msghdr hdr; - uint64_t __otx2_io fec_corr_blks; - uint64_t __otx2_io fec_uncorr_blks; -}; -/* Structure for requesting the operation for - * setting/getting mac address in the CGX interface - */ -struct cgx_mac_addr_set_or_get { - struct mbox_msghdr hdr; - uint8_t __otx2_io mac_addr[RTE_ETHER_ADDR_LEN]; -}; - -/* Structure for requesting the operation to - * add DMAC filter entry into CGX interface - */ -struct cgx_mac_addr_add_req { - struct mbox_msghdr hdr; - uint8_t __otx2_io 
mac_addr[RTE_ETHER_ADDR_LEN]; -}; - -/* Structure for response against the operation to - * add DMAC filter entry into CGX interface - */ -struct cgx_mac_addr_add_rsp { - struct mbox_msghdr hdr; - uint8_t __otx2_io index; -}; - -/* Structure for requesting the operation to - * delete DMAC filter entry from CGX interface - */ -struct cgx_mac_addr_del_req { - struct mbox_msghdr hdr; - uint8_t __otx2_io index; -}; - -/* Structure for response against the operation to - * get maximum supported DMAC filter entries - */ -struct cgx_max_dmac_entries_get_rsp { - struct mbox_msghdr hdr; - uint8_t __otx2_io max_dmac_filters; -}; - -struct cgx_link_user_info { - uint64_t __otx2_io link_up:1; - uint64_t __otx2_io full_duplex:1; - uint64_t __otx2_io lmac_type_id:4; - uint64_t __otx2_io speed:20; /* speed in Mbps */ - uint64_t __otx2_io an:1; /* AN supported or not */ - uint64_t __otx2_io fec:2; /* FEC type if enabled else 0 */ - uint64_t __otx2_io port:8; -#define LMACTYPE_STR_LEN 16 - char lmac_type[LMACTYPE_STR_LEN]; -}; - -struct cgx_link_info_msg { - struct mbox_msghdr hdr; - struct cgx_link_user_info link_info; -}; - -struct cgx_ptp_rx_info_msg { - struct mbox_msghdr hdr; - uint8_t __otx2_io ptp_en; -}; - -struct cgx_pause_frm_cfg { - struct mbox_msghdr hdr; - uint8_t __otx2_io set; - /* set = 1 if the request is to config pause frames */ - /* set = 0 if the request is to fetch pause frames config */ - uint8_t __otx2_io rx_pause; - uint8_t __otx2_io tx_pause; -}; - -struct sfp_eeprom_s { -#define SFP_EEPROM_SIZE 256 - uint16_t __otx2_io sff_id; - uint8_t __otx2_io buf[SFP_EEPROM_SIZE]; - uint64_t __otx2_io reserved; -}; - -enum fec_type { - OTX2_FEC_NONE, - OTX2_FEC_BASER, - OTX2_FEC_RS, -}; - -struct phy_s { - uint64_t __otx2_io can_change_mod_type : 1; - uint64_t __otx2_io mod_type : 1; -}; - -struct cgx_lmac_fwdata_s { - uint16_t __otx2_io rw_valid; - uint64_t __otx2_io supported_fec; - uint64_t __otx2_io supported_an; - uint64_t __otx2_io supported_link_modes; - /* 
Only applicable if AN is supported */ - uint64_t __otx2_io advertised_fec; - uint64_t __otx2_io advertised_link_modes; - /* Only applicable if SFP/QSFP slot is present */ - struct sfp_eeprom_s sfp_eeprom; - struct phy_s phy; -#define LMAC_FWDATA_RESERVED_MEM 1023 - uint64_t __otx2_io reserved[LMAC_FWDATA_RESERVED_MEM]; -}; - -struct cgx_fw_data { - struct mbox_msghdr hdr; - struct cgx_lmac_fwdata_s fwdata; -}; - -struct fec_mode { - struct mbox_msghdr hdr; - int __otx2_io fec; -}; - -struct cgx_set_link_state_msg { - struct mbox_msghdr hdr; - uint8_t __otx2_io enable; -}; - -struct cgx_phy_mod_type { - struct mbox_msghdr hdr; - int __otx2_io mod; -}; - -struct cgx_set_link_mode_args { - uint32_t __otx2_io speed; - uint8_t __otx2_io duplex; - uint8_t __otx2_io an; - uint8_t __otx2_io ports; - uint64_t __otx2_io mode; -}; - -struct cgx_set_link_mode_req { - struct mbox_msghdr hdr; - struct cgx_set_link_mode_args args; -}; - -struct cgx_set_link_mode_rsp { - struct mbox_msghdr hdr; - int __otx2_io status; -}; -/* NPA mbox message formats */ - -/* NPA mailbox error codes - * Range 301 - 400. - */ -enum npa_af_status { - NPA_AF_ERR_PARAM = -301, - NPA_AF_ERR_AQ_FULL = -302, - NPA_AF_ERR_AQ_ENQUEUE = -303, - NPA_AF_ERR_AF_LF_INVALID = -304, - NPA_AF_ERR_AF_LF_ALLOC = -305, - NPA_AF_ERR_LF_RESET = -306, -}; - -#define NPA_AURA_SZ_0 0 -#define NPA_AURA_SZ_128 1 -#define NPA_AURA_SZ_256 2 -#define NPA_AURA_SZ_512 3 -#define NPA_AURA_SZ_1K 4 -#define NPA_AURA_SZ_2K 5 -#define NPA_AURA_SZ_4K 6 -#define NPA_AURA_SZ_8K 7 -#define NPA_AURA_SZ_16K 8 -#define NPA_AURA_SZ_32K 9 -#define NPA_AURA_SZ_64K 10 -#define NPA_AURA_SZ_128K 11 -#define NPA_AURA_SZ_256K 12 -#define NPA_AURA_SZ_512K 13 -#define NPA_AURA_SZ_1M 14 -#define NPA_AURA_SZ_MAX 15 - -/* For NPA LF context alloc and init */ -struct npa_lf_alloc_req { - struct mbox_msghdr hdr; - int __otx2_io node; - int __otx2_io aura_sz; /* No of auras. 
See NPA_AURA_SZ_* */ - uint32_t __otx2_io nr_pools; /* No of pools */ - uint64_t __otx2_io way_mask; -}; - -struct npa_lf_alloc_rsp { - struct mbox_msghdr hdr; - uint32_t __otx2_io stack_pg_ptrs; /* No of ptrs per stack page */ - uint32_t __otx2_io stack_pg_bytes; /* Size of stack page */ - uint16_t __otx2_io qints; /* NPA_AF_CONST::QINTS */ -}; - -/* NPA AQ enqueue msg */ -struct npa_aq_enq_req { - struct mbox_msghdr hdr; - uint32_t __otx2_io aura_id; - uint8_t __otx2_io ctype; - uint8_t __otx2_io op; - union { - /* Valid when op == WRITE/INIT and ctype == AURA. - * LF fills the pool_id in aura.pool_addr. AF will translate - * the pool_id to pool context pointer. - */ - __otx2_io struct npa_aura_s aura; - /* Valid when op == WRITE/INIT and ctype == POOL */ - __otx2_io struct npa_pool_s pool; - }; - /* Mask data when op == WRITE (1=write, 0=don't write) */ - union { - /* Valid when op == WRITE and ctype == AURA */ - __otx2_io struct npa_aura_s aura_mask; - /* Valid when op == WRITE and ctype == POOL */ - __otx2_io struct npa_pool_s pool_mask; - }; -}; - -struct npa_aq_enq_rsp { - struct mbox_msghdr hdr; - union { - /* Valid when op == READ and ctype == AURA */ - __otx2_io struct npa_aura_s aura; - /* Valid when op == READ and ctype == POOL */ - __otx2_io struct npa_pool_s pool; - }; -}; - -/* Disable all contexts of type 'ctype' */ -struct hwctx_disable_req { - struct mbox_msghdr hdr; - uint8_t __otx2_io ctype; -}; - -/* NIX mbox message formats */ - -/* NIX mailbox error codes - * Range 401 - 500. 
- */ -enum nix_af_status { - NIX_AF_ERR_PARAM = -401, - NIX_AF_ERR_AQ_FULL = -402, - NIX_AF_ERR_AQ_ENQUEUE = -403, - NIX_AF_ERR_AF_LF_INVALID = -404, - NIX_AF_ERR_AF_LF_ALLOC = -405, - NIX_AF_ERR_TLX_ALLOC_FAIL = -406, - NIX_AF_ERR_TLX_INVALID = -407, - NIX_AF_ERR_RSS_SIZE_INVALID = -408, - NIX_AF_ERR_RSS_GRPS_INVALID = -409, - NIX_AF_ERR_FRS_INVALID = -410, - NIX_AF_ERR_RX_LINK_INVALID = -411, - NIX_AF_INVAL_TXSCHQ_CFG = -412, - NIX_AF_SMQ_FLUSH_FAILED = -413, - NIX_AF_ERR_LF_RESET = -414, - NIX_AF_ERR_RSS_NOSPC_FIELD = -415, - NIX_AF_ERR_RSS_NOSPC_ALGO = -416, - NIX_AF_ERR_MARK_CFG_FAIL = -417, - NIX_AF_ERR_LSO_CFG_FAIL = -418, - NIX_AF_INVAL_NPA_PF_FUNC = -419, - NIX_AF_INVAL_SSO_PF_FUNC = -420, - NIX_AF_ERR_TX_VTAG_NOSPC = -421, - NIX_AF_ERR_RX_VTAG_INUSE = -422, - NIX_AF_ERR_PTP_CONFIG_FAIL = -423, -}; - -/* For NIX LF context alloc and init */ -struct nix_lf_alloc_req { - struct mbox_msghdr hdr; - int __otx2_io node; - uint32_t __otx2_io rq_cnt; /* No of receive queues */ - uint32_t __otx2_io sq_cnt; /* No of send queues */ - uint32_t __otx2_io cq_cnt; /* No of completion queues */ - uint8_t __otx2_io xqe_sz; - uint16_t __otx2_io rss_sz; - uint8_t __otx2_io rss_grps; - uint16_t __otx2_io npa_func; - /* RVU_DEFAULT_PF_FUNC == default pf_func associated with lf */ - uint16_t __otx2_io sso_func; - uint64_t __otx2_io rx_cfg; /* See NIX_AF_LF(0..127)_RX_CFG */ - uint64_t __otx2_io way_mask; -#define NIX_LF_RSS_TAG_LSB_AS_ADDER BIT_ULL(0) - uint64_t flags; -}; - -struct nix_lf_alloc_rsp { - struct mbox_msghdr hdr; - uint16_t __otx2_io sqb_size; - uint16_t __otx2_io rx_chan_base; - uint16_t __otx2_io tx_chan_base; - uint8_t __otx2_io rx_chan_cnt; /* Total number of RX channels */ - uint8_t __otx2_io tx_chan_cnt; /* Total number of TX channels */ - uint8_t __otx2_io lso_tsov4_idx; - uint8_t __otx2_io lso_tsov6_idx; - uint8_t __otx2_io mac_addr[RTE_ETHER_ADDR_LEN]; - uint8_t __otx2_io lf_rx_stats; /* NIX_AF_CONST1::LF_RX_STATS */ - uint8_t __otx2_io lf_tx_stats; /* 
NIX_AF_CONST1::LF_TX_STATS */ - uint16_t __otx2_io cints; /* NIX_AF_CONST2::CINTS */ - uint16_t __otx2_io qints; /* NIX_AF_CONST2::QINTS */ - uint8_t __otx2_io hw_rx_tstamp_en; /*set if rx timestamping enabled */ - uint8_t __otx2_io cgx_links; /* No. of CGX links present in HW */ - uint8_t __otx2_io lbk_links; /* No. of LBK links present in HW */ - uint8_t __otx2_io sdp_links; /* No. of SDP links present in HW */ - uint8_t __otx2_io tx_link; /* Transmit channel link number */ -}; - -struct nix_lf_free_req { - struct mbox_msghdr hdr; -#define NIX_LF_DISABLE_FLOWS BIT_ULL(0) -#define NIX_LF_DONT_FREE_TX_VTAG BIT_ULL(1) - uint64_t __otx2_io flags; -}; - -/* NIX AQ enqueue msg */ -struct nix_aq_enq_req { - struct mbox_msghdr hdr; - uint32_t __otx2_io qidx; - uint8_t __otx2_io ctype; - uint8_t __otx2_io op; - union { - /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_RQ */ - __otx2_io struct nix_rq_ctx_s rq; - /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_SQ */ - __otx2_io struct nix_sq_ctx_s sq; - /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_CQ */ - __otx2_io struct nix_cq_ctx_s cq; - /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_RSS */ - __otx2_io struct nix_rsse_s rss; - /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_MCE */ - __otx2_io struct nix_rx_mce_s mce; - }; - /* Mask data when op == WRITE (1=write, 0=don't write) */ - union { - /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_RQ */ - __otx2_io struct nix_rq_ctx_s rq_mask; - /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_SQ */ - __otx2_io struct nix_sq_ctx_s sq_mask; - /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_CQ */ - __otx2_io struct nix_cq_ctx_s cq_mask; - /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_RSS */ - __otx2_io struct nix_rsse_s rss_mask; - /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_MCE */ - __otx2_io struct nix_rx_mce_s mce_mask; - }; -}; - -struct nix_aq_enq_rsp { - struct mbox_msghdr hdr; - union { - __otx2_io 
struct nix_rq_ctx_s rq; - __otx2_io struct nix_sq_ctx_s sq; - __otx2_io struct nix_cq_ctx_s cq; - __otx2_io struct nix_rsse_s rss; - __otx2_io struct nix_rx_mce_s mce; - }; -}; - -/* Tx scheduler/shaper mailbox messages */ - -#define MAX_TXSCHQ_PER_FUNC 128 - -struct nix_txsch_alloc_req { - struct mbox_msghdr hdr; - /* Scheduler queue count request at each level */ - uint16_t __otx2_io schq_contig[NIX_TXSCH_LVL_CNT]; /* Contig. queues */ - uint16_t __otx2_io schq[NIX_TXSCH_LVL_CNT]; /* Non-Contig. queues */ -}; - -struct nix_txsch_alloc_rsp { - struct mbox_msghdr hdr; - /* Scheduler queue count allocated at each level */ - uint16_t __otx2_io schq_contig[NIX_TXSCH_LVL_CNT]; /* Contig. queues */ - uint16_t __otx2_io schq[NIX_TXSCH_LVL_CNT]; /* Non-Contig. queues */ - /* Scheduler queue list allocated at each level */ - uint16_t __otx2_io - schq_contig_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC]; - uint16_t __otx2_io schq_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC]; - /* Traffic aggregation scheduler level */ - uint8_t __otx2_io aggr_level; - /* Aggregation lvl's RR_PRIO config */ - uint8_t __otx2_io aggr_lvl_rr_prio; - /* LINKX_CFG CSRs mapped to TL3 or TL2's index ? 
*/ - uint8_t __otx2_io link_cfg_lvl; -}; - -struct nix_txsch_free_req { - struct mbox_msghdr hdr; -#define TXSCHQ_FREE_ALL BIT_ULL(0) - uint16_t __otx2_io flags; - /* Scheduler queue level to be freed */ - uint16_t __otx2_io schq_lvl; - /* List of scheduler queues to be freed */ - uint16_t __otx2_io schq; -}; - -struct nix_txschq_config { - struct mbox_msghdr hdr; - uint8_t __otx2_io lvl; /* SMQ/MDQ/TL4/TL3/TL2/TL1 */ - uint8_t __otx2_io read; -#define TXSCHQ_IDX_SHIFT 16 -#define TXSCHQ_IDX_MASK (BIT_ULL(10) - 1) -#define TXSCHQ_IDX(reg, shift) (((reg) >> (shift)) & TXSCHQ_IDX_MASK) - uint8_t __otx2_io num_regs; -#define MAX_REGS_PER_MBOX_MSG 20 - uint64_t __otx2_io reg[MAX_REGS_PER_MBOX_MSG]; - uint64_t __otx2_io regval[MAX_REGS_PER_MBOX_MSG]; - /* All 0's => overwrite with new value */ - uint64_t __otx2_io regval_mask[MAX_REGS_PER_MBOX_MSG]; -}; - -struct nix_vtag_config { - struct mbox_msghdr hdr; - /* '0' for 4 octet VTAG, '1' for 8 octet VTAG */ - uint8_t __otx2_io vtag_size; - /* cfg_type is '0' for tx vlan cfg - * cfg_type is '1' for rx vlan cfg - */ - uint8_t __otx2_io cfg_type; - union { - /* Valid when cfg_type is '0' */ - struct { - uint64_t __otx2_io vtag0; - uint64_t __otx2_io vtag1; - - /* cfg_vtag0 & cfg_vtag1 fields are valid - * when free_vtag0 & free_vtag1 are '0's. - */ - /* cfg_vtag0 = 1 to configure vtag0 */ - uint8_t __otx2_io cfg_vtag0 :1; - /* cfg_vtag1 = 1 to configure vtag1 */ - uint8_t __otx2_io cfg_vtag1 :1; - - /* vtag0_idx & vtag1_idx are only valid when - * both cfg_vtag0 & cfg_vtag1 are '0's, - * these fields are used along with free_vtag0 - * & free_vtag1 to free the nix lf's tx_vlan - * configuration. - * - * Denotes the indices of tx_vtag def registers - * that needs to be cleared and freed. - */ - int __otx2_io vtag0_idx; - int __otx2_io vtag1_idx; - - /* Free_vtag0 & free_vtag1 fields are valid - * when cfg_vtag0 & cfg_vtag1 are '0's. 
- */ - /* Free_vtag0 = 1 clears vtag0 configuration - * vtag0_idx denotes the index to be cleared. - */ - uint8_t __otx2_io free_vtag0 :1; - /* Free_vtag1 = 1 clears vtag1 configuration - * vtag1_idx denotes the index to be cleared. - */ - uint8_t __otx2_io free_vtag1 :1; - } tx; - - /* Valid when cfg_type is '1' */ - struct { - /* Rx vtag type index, valid values are in 0..7 range */ - uint8_t __otx2_io vtag_type; - /* Rx vtag strip */ - uint8_t __otx2_io strip_vtag :1; - /* Rx vtag capture */ - uint8_t __otx2_io capture_vtag :1; - } rx; - }; -}; - -struct nix_vtag_config_rsp { - struct mbox_msghdr hdr; - /* Indices of tx_vtag def registers used to configure - * tx vtag0 & vtag1 headers, these indices are valid - * when nix_vtag_config mbox requested for vtag0 and/ - * or vtag1 configuration. - */ - int __otx2_io vtag0_idx; - int __otx2_io vtag1_idx; -}; - -struct nix_rss_flowkey_cfg { - struct mbox_msghdr hdr; - int __otx2_io mcam_index; /* MCAM entry index to modify */ - uint32_t __otx2_io flowkey_cfg; /* Flowkey types selected */ -#define FLOW_KEY_TYPE_PORT BIT(0) -#define FLOW_KEY_TYPE_IPV4 BIT(1) -#define FLOW_KEY_TYPE_IPV6 BIT(2) -#define FLOW_KEY_TYPE_TCP BIT(3) -#define FLOW_KEY_TYPE_UDP BIT(4) -#define FLOW_KEY_TYPE_SCTP BIT(5) -#define FLOW_KEY_TYPE_NVGRE BIT(6) -#define FLOW_KEY_TYPE_VXLAN BIT(7) -#define FLOW_KEY_TYPE_GENEVE BIT(8) -#define FLOW_KEY_TYPE_ETH_DMAC BIT(9) -#define FLOW_KEY_TYPE_IPV6_EXT BIT(10) -#define FLOW_KEY_TYPE_GTPU BIT(11) -#define FLOW_KEY_TYPE_INNR_IPV4 BIT(12) -#define FLOW_KEY_TYPE_INNR_IPV6 BIT(13) -#define FLOW_KEY_TYPE_INNR_TCP BIT(14) -#define FLOW_KEY_TYPE_INNR_UDP BIT(15) -#define FLOW_KEY_TYPE_INNR_SCTP BIT(16) -#define FLOW_KEY_TYPE_INNR_ETH_DMAC BIT(17) -#define FLOW_KEY_TYPE_CH_LEN_90B BIT(18) -#define FLOW_KEY_TYPE_CUSTOM0 BIT(19) -#define FLOW_KEY_TYPE_VLAN BIT(20) -#define FLOW_KEY_TYPE_L4_DST BIT(28) -#define FLOW_KEY_TYPE_L4_SRC BIT(29) -#define FLOW_KEY_TYPE_L3_DST BIT(30) -#define FLOW_KEY_TYPE_L3_SRC BIT(31) 
- uint8_t __otx2_io group; /* RSS context or group */ -}; - -struct nix_rss_flowkey_cfg_rsp { - struct mbox_msghdr hdr; - uint8_t __otx2_io alg_idx; /* Selected algo index */ -}; - -struct nix_set_mac_addr { - struct mbox_msghdr hdr; - uint8_t __otx2_io mac_addr[RTE_ETHER_ADDR_LEN]; -}; - -struct nix_get_mac_addr_rsp { - struct mbox_msghdr hdr; - uint8_t __otx2_io mac_addr[RTE_ETHER_ADDR_LEN]; -}; - -struct nix_mark_format_cfg { - struct mbox_msghdr hdr; - uint8_t __otx2_io offset; - uint8_t __otx2_io y_mask; - uint8_t __otx2_io y_val; - uint8_t __otx2_io r_mask; - uint8_t __otx2_io r_val; -}; - -struct nix_mark_format_cfg_rsp { - struct mbox_msghdr hdr; - uint8_t __otx2_io mark_format_idx; -}; - -struct nix_lso_format_cfg { - struct mbox_msghdr hdr; - uint64_t __otx2_io field_mask; - uint64_t __otx2_io fields[NIX_LSO_FIELD_MAX]; -}; - -struct nix_lso_format_cfg_rsp { - struct mbox_msghdr hdr; - uint8_t __otx2_io lso_format_idx; -}; - -struct nix_rx_mode { - struct mbox_msghdr hdr; -#define NIX_RX_MODE_UCAST BIT(0) -#define NIX_RX_MODE_PROMISC BIT(1) -#define NIX_RX_MODE_ALLMULTI BIT(2) - uint16_t __otx2_io mode; -}; - -struct nix_rx_cfg { - struct mbox_msghdr hdr; -#define NIX_RX_OL3_VERIFY BIT(0) -#define NIX_RX_OL4_VERIFY BIT(1) - uint8_t __otx2_io len_verify; /* Outer L3/L4 len check */ -#define NIX_RX_CSUM_OL4_VERIFY BIT(0) - uint8_t __otx2_io csum_verify; /* Outer L4 checksum verification */ -}; - -struct nix_frs_cfg { - struct mbox_msghdr hdr; - uint8_t __otx2_io update_smq; /* Update SMQ's min/max lens */ - uint8_t __otx2_io update_minlen; /* Set minlen also */ - uint8_t __otx2_io sdp_link; /* Set SDP RX link */ - uint16_t __otx2_io maxlen; - uint16_t __otx2_io minlen; -}; - -struct nix_set_vlan_tpid { - struct mbox_msghdr hdr; -#define NIX_VLAN_TYPE_INNER 0 -#define NIX_VLAN_TYPE_OUTER 1 - uint8_t __otx2_io vlan_type; - uint16_t __otx2_io tpid; -}; - -struct nix_bp_cfg_req { - struct mbox_msghdr hdr; - uint16_t __otx2_io chan_base; /* Starting channel 
number */ - uint8_t __otx2_io chan_cnt; /* Number of channels */ - uint8_t __otx2_io bpid_per_chan; - /* bpid_per_chan = 0 assigns single bp id for range of channels */ - /* bpid_per_chan = 1 assigns separate bp id for each channel */ -}; - -/* PF can be mapped to either CGX or LBK interface, - * so maximum 64 channels are possible. - */ -#define NIX_MAX_CHAN 64 -struct nix_bp_cfg_rsp { - struct mbox_msghdr hdr; - /* Channel and bpid mapping */ - uint16_t __otx2_io chan_bpid[NIX_MAX_CHAN]; - /* Number of channel for which bpids are assigned */ - uint8_t __otx2_io chan_cnt; -}; - -/* Global NIX inline IPSec configuration */ -struct nix_inline_ipsec_cfg { - struct mbox_msghdr hdr; - uint32_t __otx2_io cpt_credit; - struct { - uint8_t __otx2_io egrp; - uint8_t __otx2_io opcode; - } gen_cfg; - struct { - uint16_t __otx2_io cpt_pf_func; - uint8_t __otx2_io cpt_slot; - } inst_qsel; - uint8_t __otx2_io enable; -}; - -/* Per NIX LF inline IPSec configuration */ -struct nix_inline_ipsec_lf_cfg { - struct mbox_msghdr hdr; - uint64_t __otx2_io sa_base_addr; - struct { - uint32_t __otx2_io tag_const; - uint16_t __otx2_io lenm1_max; - uint8_t __otx2_io sa_pow2_size; - uint8_t __otx2_io tt; - } ipsec_cfg0; - struct { - uint32_t __otx2_io sa_idx_max; - uint8_t __otx2_io sa_idx_w; - } ipsec_cfg1; - uint8_t __otx2_io enable; -}; - -/* SSO mailbox error codes - * Range 501 - 600. 
- */ -enum sso_af_status { - SSO_AF_ERR_PARAM = -501, - SSO_AF_ERR_LF_INVALID = -502, - SSO_AF_ERR_AF_LF_ALLOC = -503, - SSO_AF_ERR_GRP_EBUSY = -504, - SSO_AF_INVAL_NPA_PF_FUNC = -505, -}; - -struct sso_lf_alloc_req { - struct mbox_msghdr hdr; - int __otx2_io node; - uint16_t __otx2_io hwgrps; -}; - -struct sso_lf_alloc_rsp { - struct mbox_msghdr hdr; - uint32_t __otx2_io xaq_buf_size; - uint32_t __otx2_io xaq_wq_entries; - uint32_t __otx2_io in_unit_entries; - uint16_t __otx2_io hwgrps; -}; - -struct sso_lf_free_req { - struct mbox_msghdr hdr; - int __otx2_io node; - uint16_t __otx2_io hwgrps; -}; - -/* SSOW mailbox error codes - * Range 601 - 700. - */ -enum ssow_af_status { - SSOW_AF_ERR_PARAM = -601, - SSOW_AF_ERR_LF_INVALID = -602, - SSOW_AF_ERR_AF_LF_ALLOC = -603, -}; - -struct ssow_lf_alloc_req { - struct mbox_msghdr hdr; - int __otx2_io node; - uint16_t __otx2_io hws; -}; - -struct ssow_lf_free_req { - struct mbox_msghdr hdr; - int __otx2_io node; - uint16_t __otx2_io hws; -}; - -struct sso_hw_setconfig { - struct mbox_msghdr hdr; - uint32_t __otx2_io npa_aura_id; - uint16_t __otx2_io npa_pf_func; - uint16_t __otx2_io hwgrps; -}; - -struct sso_release_xaq { - struct mbox_msghdr hdr; - uint16_t __otx2_io hwgrps; -}; - -struct sso_info_req { - struct mbox_msghdr hdr; - union { - uint16_t __otx2_io grp; - uint16_t __otx2_io hws; - }; -}; - -struct sso_grp_priority { - struct mbox_msghdr hdr; - uint16_t __otx2_io grp; - uint8_t __otx2_io priority; - uint8_t __otx2_io affinity; - uint8_t __otx2_io weight; -}; - -struct sso_grp_qos_cfg { - struct mbox_msghdr hdr; - uint16_t __otx2_io grp; - uint32_t __otx2_io xaq_limit; - uint16_t __otx2_io taq_thr; - uint16_t __otx2_io iaq_thr; -}; - -struct sso_grp_stats { - struct mbox_msghdr hdr; - uint16_t __otx2_io grp; - uint64_t __otx2_io ws_pc; - uint64_t __otx2_io ext_pc; - uint64_t __otx2_io wa_pc; - uint64_t __otx2_io ts_pc; - uint64_t __otx2_io ds_pc; - uint64_t __otx2_io dq_pc; - uint64_t __otx2_io aw_status; - 
uint64_t __otx2_io page_cnt; -}; - -struct sso_hws_stats { - struct mbox_msghdr hdr; - uint16_t __otx2_io hws; - uint64_t __otx2_io arbitration; -}; - -/* CPT mailbox error codes - * Range 901 - 1000. - */ -enum cpt_af_status { - CPT_AF_ERR_PARAM = -901, - CPT_AF_ERR_GRP_INVALID = -902, - CPT_AF_ERR_LF_INVALID = -903, - CPT_AF_ERR_ACCESS_DENIED = -904, - CPT_AF_ERR_SSO_PF_FUNC_INVALID = -905, - CPT_AF_ERR_NIX_PF_FUNC_INVALID = -906, - CPT_AF_ERR_INLINE_IPSEC_INB_ENA = -907, - CPT_AF_ERR_INLINE_IPSEC_OUT_ENA = -908 -}; - -/* CPT mbox message formats */ - -struct cpt_rd_wr_reg_msg { - struct mbox_msghdr hdr; - uint64_t __otx2_io reg_offset; - uint64_t __otx2_io *ret_val; - uint64_t __otx2_io val; - uint8_t __otx2_io is_write; - /* BLKADDR_CPT0/BLKADDR_CPT1 or 0 for BLKADDR_CPT0 */ - uint8_t __otx2_io blkaddr; -}; - -struct cpt_set_crypto_grp_req_msg { - struct mbox_msghdr hdr; - uint8_t __otx2_io crypto_eng_grp; -}; - -struct cpt_lf_alloc_req_msg { - struct mbox_msghdr hdr; - uint16_t __otx2_io nix_pf_func; - uint16_t __otx2_io sso_pf_func; - uint16_t __otx2_io eng_grpmask; - /* BLKADDR_CPT0/BLKADDR_CPT1 or 0 for BLKADDR_CPT0 */ - uint8_t __otx2_io blkaddr; -}; - -struct cpt_lf_alloc_rsp_msg { - struct mbox_msghdr hdr; - uint16_t __otx2_io eng_grpmsk; -}; - -#define CPT_INLINE_INBOUND 0 -#define CPT_INLINE_OUTBOUND 1 - -struct cpt_inline_ipsec_cfg_msg { - struct mbox_msghdr hdr; - uint8_t __otx2_io enable; - uint8_t __otx2_io slot; - uint8_t __otx2_io dir; - uint16_t __otx2_io sso_pf_func; /* Inbound path SSO_PF_FUNC */ - uint16_t __otx2_io nix_pf_func; /* Outbound path NIX_PF_FUNC */ -}; - -struct cpt_rx_inline_lf_cfg_msg { - struct mbox_msghdr hdr; - uint16_t __otx2_io sso_pf_func; -}; - -enum cpt_eng_type { - CPT_ENG_TYPE_AE = 1, - CPT_ENG_TYPE_SE = 2, - CPT_ENG_TYPE_IE = 3, - CPT_MAX_ENG_TYPES, -}; - -/* CPT HW capabilities */ -union cpt_eng_caps { - uint64_t __otx2_io u; - struct { - uint64_t __otx2_io reserved_0_4:5; - uint64_t __otx2_io mul:1; - uint64_t 
__otx2_io sha1_sha2:1; - uint64_t __otx2_io chacha20:1; - uint64_t __otx2_io zuc_snow3g:1; - uint64_t __otx2_io sha3:1; - uint64_t __otx2_io aes:1; - uint64_t __otx2_io kasumi:1; - uint64_t __otx2_io des:1; - uint64_t __otx2_io crc:1; - uint64_t __otx2_io reserved_14_63:50; - }; -}; - -struct cpt_caps_rsp_msg { - struct mbox_msghdr hdr; - uint16_t __otx2_io cpt_pf_drv_version; - uint8_t __otx2_io cpt_revision; - union cpt_eng_caps eng_caps[CPT_MAX_ENG_TYPES]; -}; - -/* NPC mbox message structs */ - -#define NPC_MCAM_ENTRY_INVALID 0xFFFF -#define NPC_MCAM_INVALID_MAP 0xFFFF - -/* NPC mailbox error codes - * Range 701 - 800. - */ -enum npc_af_status { - NPC_MCAM_INVALID_REQ = -701, - NPC_MCAM_ALLOC_DENIED = -702, - NPC_MCAM_ALLOC_FAILED = -703, - NPC_MCAM_PERM_DENIED = -704, - NPC_AF_ERR_HIGIG_CONFIG_FAIL = -705, -}; - -struct npc_mcam_alloc_entry_req { - struct mbox_msghdr hdr; -#define NPC_MAX_NONCONTIG_ENTRIES 256 - uint8_t __otx2_io contig; /* Contiguous entries ? */ -#define NPC_MCAM_ANY_PRIO 0 -#define NPC_MCAM_LOWER_PRIO 1 -#define NPC_MCAM_HIGHER_PRIO 2 - uint8_t __otx2_io priority; /* Lower or higher w.r.t ref_entry */ - uint16_t __otx2_io ref_entry; - uint16_t __otx2_io count; /* Number of entries requested */ -}; - -struct npc_mcam_alloc_entry_rsp { - struct mbox_msghdr hdr; - /* Entry alloc'ed or start index if contiguous. - * Invalid in case of non-contiguous. 
- */ - uint16_t __otx2_io entry; - uint16_t __otx2_io count; /* Number of entries allocated */ - uint16_t __otx2_io free_count; /* Number of entries available */ - uint16_t __otx2_io entry_list[NPC_MAX_NONCONTIG_ENTRIES]; -}; - -struct npc_mcam_free_entry_req { - struct mbox_msghdr hdr; - uint16_t __otx2_io entry; /* Entry index to be freed */ - uint8_t __otx2_io all; /* Free all entries alloc'ed to this PFVF */ -}; - -struct mcam_entry { -#define NPC_MAX_KWS_IN_KEY 7 /* Number of keywords in max key width */ - uint64_t __otx2_io kw[NPC_MAX_KWS_IN_KEY]; - uint64_t __otx2_io kw_mask[NPC_MAX_KWS_IN_KEY]; - uint64_t __otx2_io action; - uint64_t __otx2_io vtag_action; -}; - -struct npc_mcam_write_entry_req { - struct mbox_msghdr hdr; - struct mcam_entry entry_data; - uint16_t __otx2_io entry; /* MCAM entry to write this match key */ - uint16_t __otx2_io cntr; /* Counter for this MCAM entry */ - uint8_t __otx2_io intf; /* Rx or Tx interface */ - uint8_t __otx2_io enable_entry;/* Enable this MCAM entry ? */ - uint8_t __otx2_io set_cntr; /* Set counter for this entry ? */ -}; - -/* Enable/Disable a given entry */ -struct npc_mcam_ena_dis_entry_req { - struct mbox_msghdr hdr; - uint16_t __otx2_io entry; -}; - -struct npc_mcam_shift_entry_req { - struct mbox_msghdr hdr; -#define NPC_MCAM_MAX_SHIFTS 64 - uint16_t __otx2_io curr_entry[NPC_MCAM_MAX_SHIFTS]; - uint16_t __otx2_io new_entry[NPC_MCAM_MAX_SHIFTS]; - uint16_t __otx2_io shift_count; /* Number of entries to shift */ -}; - -struct npc_mcam_shift_entry_rsp { - struct mbox_msghdr hdr; - /* Index in 'curr_entry', not entry itself */ - uint16_t __otx2_io failed_entry_idx; -}; - -struct npc_mcam_alloc_counter_req { - struct mbox_msghdr hdr; - uint8_t __otx2_io contig; /* Contiguous counters ? */ -#define NPC_MAX_NONCONTIG_COUNTERS 64 - uint16_t __otx2_io count; /* Number of counters requested */ -}; - -struct npc_mcam_alloc_counter_rsp { - struct mbox_msghdr hdr; - /* Counter alloc'ed or start idx if contiguous. 
- * Invalid incase of non-contiguous. - */ - uint16_t __otx2_io cntr; - uint16_t __otx2_io count; /* Number of counters allocated */ - uint16_t __otx2_io cntr_list[NPC_MAX_NONCONTIG_COUNTERS]; -}; - -struct npc_mcam_oper_counter_req { - struct mbox_msghdr hdr; - uint16_t __otx2_io cntr; /* Free a counter or clear/fetch it's stats */ -}; - -struct npc_mcam_oper_counter_rsp { - struct mbox_msghdr hdr; - /* valid only while fetching counter's stats */ - uint64_t __otx2_io stat; -}; - -struct npc_mcam_unmap_counter_req { - struct mbox_msghdr hdr; - uint16_t __otx2_io cntr; - uint16_t __otx2_io entry; /* Entry and counter to be unmapped */ - uint8_t __otx2_io all; /* Unmap all entries using this counter ? */ -}; - -struct npc_mcam_alloc_and_write_entry_req { - struct mbox_msghdr hdr; - struct mcam_entry entry_data; - uint16_t __otx2_io ref_entry; - uint8_t __otx2_io priority; /* Lower or higher w.r.t ref_entry */ - uint8_t __otx2_io intf; /* Rx or Tx interface */ - uint8_t __otx2_io enable_entry;/* Enable this MCAM entry ? */ - uint8_t __otx2_io alloc_cntr; /* Allocate counter and map ? 
*/ -}; - -struct npc_mcam_alloc_and_write_entry_rsp { - struct mbox_msghdr hdr; - uint16_t __otx2_io entry; - uint16_t __otx2_io cntr; -}; - -struct npc_get_kex_cfg_rsp { - struct mbox_msghdr hdr; - uint64_t __otx2_io rx_keyx_cfg; /* NPC_AF_INTF(0)_KEX_CFG */ - uint64_t __otx2_io tx_keyx_cfg; /* NPC_AF_INTF(1)_KEX_CFG */ -#define NPC_MAX_INTF 2 -#define NPC_MAX_LID 8 -#define NPC_MAX_LT 16 -#define NPC_MAX_LD 2 -#define NPC_MAX_LFL 16 - /* NPC_AF_KEX_LDATA(0..1)_FLAGS_CFG */ - uint64_t __otx2_io kex_ld_flags[NPC_MAX_LD]; - /* NPC_AF_INTF(0..1)_LID(0..7)_LT(0..15)_LD(0..1)_CFG */ - uint64_t __otx2_io - intf_lid_lt_ld[NPC_MAX_INTF][NPC_MAX_LID][NPC_MAX_LT][NPC_MAX_LD]; - /* NPC_AF_INTF(0..1)_LDATA(0..1)_FLAGS(0..15)_CFG */ - uint64_t __otx2_io - intf_ld_flags[NPC_MAX_INTF][NPC_MAX_LD][NPC_MAX_LFL]; -#define MKEX_NAME_LEN 128 - uint8_t __otx2_io mkex_pfl_name[MKEX_NAME_LEN]; -}; - -enum header_fields { - NPC_DMAC, - NPC_SMAC, - NPC_ETYPE, - NPC_OUTER_VID, - NPC_TOS, - NPC_SIP_IPV4, - NPC_DIP_IPV4, - NPC_SIP_IPV6, - NPC_DIP_IPV6, - NPC_SPORT_TCP, - NPC_DPORT_TCP, - NPC_SPORT_UDP, - NPC_DPORT_UDP, - NPC_FDSA_VAL, - NPC_HEADER_FIELDS_MAX, -}; - -struct flow_msg { - unsigned char __otx2_io dmac[6]; - unsigned char __otx2_io smac[6]; - uint16_t __otx2_io etype; - uint16_t __otx2_io vlan_etype; - uint16_t __otx2_io vlan_tci; - union { - uint32_t __otx2_io ip4src; - uint32_t __otx2_io ip6src[4]; - }; - union { - uint32_t __otx2_io ip4dst; - uint32_t __otx2_io ip6dst[4]; - }; - uint8_t __otx2_io tos; - uint8_t __otx2_io ip_ver; - uint8_t __otx2_io ip_proto; - uint8_t __otx2_io tc; - uint16_t __otx2_io sport; - uint16_t __otx2_io dport; -}; - -struct npc_install_flow_req { - struct mbox_msghdr hdr; - struct flow_msg packet; - struct flow_msg mask; - uint64_t __otx2_io features; - uint16_t __otx2_io entry; - uint16_t __otx2_io channel; - uint8_t __otx2_io intf; - uint8_t __otx2_io set_cntr; - uint8_t __otx2_io default_rule; - /* Overwrite(0) or append(1) flow to default rule? 
*/ - uint8_t __otx2_io append; - uint16_t __otx2_io vf; - /* action */ - uint32_t __otx2_io index; - uint16_t __otx2_io match_id; - uint8_t __otx2_io flow_key_alg; - uint8_t __otx2_io op; - /* vtag action */ - uint8_t __otx2_io vtag0_type; - uint8_t __otx2_io vtag0_valid; - uint8_t __otx2_io vtag1_type; - uint8_t __otx2_io vtag1_valid; - - /* vtag tx action */ - uint16_t __otx2_io vtag0_def; - uint8_t __otx2_io vtag0_op; - uint16_t __otx2_io vtag1_def; - uint8_t __otx2_io vtag1_op; -}; - -struct npc_install_flow_rsp { - struct mbox_msghdr hdr; - /* Negative if no counter else counter number */ - int __otx2_io counter; -}; - -struct npc_delete_flow_req { - struct mbox_msghdr hdr; - uint16_t __otx2_io entry; - uint16_t __otx2_io start;/*Disable range of entries */ - uint16_t __otx2_io end; - uint8_t __otx2_io all; /* PF + VFs */ -}; - -struct npc_mcam_read_entry_req { - struct mbox_msghdr hdr; - /* MCAM entry to read */ - uint16_t __otx2_io entry; -}; - -struct npc_mcam_read_entry_rsp { - struct mbox_msghdr hdr; - struct mcam_entry entry_data; - uint8_t __otx2_io intf; - uint8_t __otx2_io enable; -}; - -struct npc_mcam_read_base_rule_rsp { - struct mbox_msghdr hdr; - struct mcam_entry entry_data; -}; - -/* TIM mailbox error codes - * Range 801 - 900. 
- */ -enum tim_af_status { - TIM_AF_NO_RINGS_LEFT = -801, - TIM_AF_INVALID_NPA_PF_FUNC = -802, - TIM_AF_INVALID_SSO_PF_FUNC = -803, - TIM_AF_RING_STILL_RUNNING = -804, - TIM_AF_LF_INVALID = -805, - TIM_AF_CSIZE_NOT_ALIGNED = -806, - TIM_AF_CSIZE_TOO_SMALL = -807, - TIM_AF_CSIZE_TOO_BIG = -808, - TIM_AF_INTERVAL_TOO_SMALL = -809, - TIM_AF_INVALID_BIG_ENDIAN_VALUE = -810, - TIM_AF_INVALID_CLOCK_SOURCE = -811, - TIM_AF_GPIO_CLK_SRC_NOT_ENABLED = -812, - TIM_AF_INVALID_BSIZE = -813, - TIM_AF_INVALID_ENABLE_PERIODIC = -814, - TIM_AF_INVALID_ENABLE_DONTFREE = -815, - TIM_AF_ENA_DONTFRE_NSET_PERIODIC = -816, - TIM_AF_RING_ALREADY_DISABLED = -817, -}; - -enum tim_clk_srcs { - TIM_CLK_SRCS_TENNS = 0, - TIM_CLK_SRCS_GPIO = 1, - TIM_CLK_SRCS_GTI = 2, - TIM_CLK_SRCS_PTP = 3, - TIM_CLK_SRSC_INVALID, -}; - -enum tim_gpio_edge { - TIM_GPIO_NO_EDGE = 0, - TIM_GPIO_LTOH_TRANS = 1, - TIM_GPIO_HTOL_TRANS = 2, - TIM_GPIO_BOTH_TRANS = 3, - TIM_GPIO_INVALID, -}; - -enum ptp_op { - PTP_OP_ADJFINE = 0, /* adjfine(req.scaled_ppm); */ - PTP_OP_GET_CLOCK = 1, /* rsp.clk = get_clock() */ -}; - -struct ptp_req { - struct mbox_msghdr hdr; - uint8_t __otx2_io op; - int64_t __otx2_io scaled_ppm; - uint8_t __otx2_io is_pmu; -}; - -struct ptp_rsp { - struct mbox_msghdr hdr; - uint64_t __otx2_io clk; - uint64_t __otx2_io tsc; -}; - -struct get_hw_cap_rsp { - struct mbox_msghdr hdr; - /* Schq mapping fixed or flexible */ - uint8_t __otx2_io nix_fixed_txschq_mapping; - uint8_t __otx2_io nix_shaping; /* Is shaping and coloring supported */ -}; - -struct ndc_sync_op { - struct mbox_msghdr hdr; - uint8_t __otx2_io nix_lf_tx_sync; - uint8_t __otx2_io nix_lf_rx_sync; - uint8_t __otx2_io npa_lf_sync; -}; - -struct tim_lf_alloc_req { - struct mbox_msghdr hdr; - uint16_t __otx2_io ring; - uint16_t __otx2_io npa_pf_func; - uint16_t __otx2_io sso_pf_func; -}; - -struct tim_ring_req { - struct mbox_msghdr hdr; - uint16_t __otx2_io ring; -}; - -struct tim_config_req { - struct mbox_msghdr hdr; - uint16_t 
__otx2_io ring; - uint8_t __otx2_io bigendian; - uint8_t __otx2_io clocksource; - uint8_t __otx2_io enableperiodic; - uint8_t __otx2_io enabledontfreebuffer; - uint32_t __otx2_io bucketsize; - uint32_t __otx2_io chunksize; - uint32_t __otx2_io interval; -}; - -struct tim_lf_alloc_rsp { - struct mbox_msghdr hdr; - uint64_t __otx2_io tenns_clk; -}; - -struct tim_enable_rsp { - struct mbox_msghdr hdr; - uint64_t __otx2_io timestarted; - uint32_t __otx2_io currentbucket; -}; - -/* REE mailbox error codes - * Range 1001 - 1100. - */ -enum ree_af_status { - REE_AF_ERR_RULE_UNKNOWN_VALUE = -1001, - REE_AF_ERR_LF_NO_MORE_RESOURCES = -1002, - REE_AF_ERR_LF_INVALID = -1003, - REE_AF_ERR_ACCESS_DENIED = -1004, - REE_AF_ERR_RULE_DB_PARTIAL = -1005, - REE_AF_ERR_RULE_DB_EQ_BAD_VALUE = -1006, - REE_AF_ERR_RULE_DB_BLOCK_ALLOC_FAILED = -1007, - REE_AF_ERR_BLOCK_NOT_IMPLEMENTED = -1008, - REE_AF_ERR_RULE_DB_INC_OFFSET_TOO_BIG = -1009, - REE_AF_ERR_RULE_DB_OFFSET_TOO_BIG = -1010, - REE_AF_ERR_Q_IS_GRACEFUL_DIS = -1011, - REE_AF_ERR_Q_NOT_GRACEFUL_DIS = -1012, - REE_AF_ERR_RULE_DB_ALLOC_FAILED = -1013, - REE_AF_ERR_RULE_DB_TOO_BIG = -1014, - REE_AF_ERR_RULE_DB_GEQ_BAD_VALUE = -1015, - REE_AF_ERR_RULE_DB_LEQ_BAD_VALUE = -1016, - REE_AF_ERR_RULE_DB_WRONG_LENGTH = -1017, - REE_AF_ERR_RULE_DB_WRONG_OFFSET = -1018, - REE_AF_ERR_RULE_DB_BLOCK_TOO_BIG = -1019, - REE_AF_ERR_RULE_DB_SHOULD_FILL_REQUEST = -1020, - REE_AF_ERR_RULE_DBI_ALLOC_FAILED = -1021, - REE_AF_ERR_LF_WRONG_PRIORITY = -1022, - REE_AF_ERR_LF_SIZE_TOO_BIG = -1023, -}; - -/* REE mbox message formats */ - -struct ree_req_msg { - struct mbox_msghdr hdr; - uint32_t __otx2_io blkaddr; -}; - -struct ree_lf_req_msg { - struct mbox_msghdr hdr; - uint32_t __otx2_io blkaddr; - uint32_t __otx2_io size; - uint8_t __otx2_io lf; - uint8_t __otx2_io pri; -}; - -struct ree_rule_db_prog_req_msg { - struct mbox_msghdr hdr; -#define REE_RULE_DB_REQ_BLOCK_SIZE (MBOX_SIZE >> 1) - uint8_t __otx2_io rule_db[REE_RULE_DB_REQ_BLOCK_SIZE]; - uint32_t 
__otx2_io blkaddr; /* REE0 or REE1 */ - uint32_t __otx2_io total_len; /* total len of rule db */ - uint32_t __otx2_io offset; /* offset of current rule db block */ - uint16_t __otx2_io len; /* length of rule db block */ - uint8_t __otx2_io is_last; /* is this the last block */ - uint8_t __otx2_io is_incremental; /* is incremental flow */ - uint8_t __otx2_io is_dbi; /* is rule db incremental */ -}; - -struct ree_rule_db_get_req_msg { - struct mbox_msghdr hdr; - uint32_t __otx2_io blkaddr; - uint32_t __otx2_io offset; /* retrieve db from this offset */ - uint8_t __otx2_io is_dbi; /* is request for rule db incremental */ -}; - -struct ree_rd_wr_reg_msg { - struct mbox_msghdr hdr; - uint64_t __otx2_io reg_offset; - uint64_t __otx2_io *ret_val; - uint64_t __otx2_io val; - uint32_t __otx2_io blkaddr; - uint8_t __otx2_io is_write; -}; - -struct ree_rule_db_len_rsp_msg { - struct mbox_msghdr hdr; - uint32_t __otx2_io blkaddr; - uint32_t __otx2_io len; - uint32_t __otx2_io inc_len; -}; - -struct ree_rule_db_get_rsp_msg { - struct mbox_msghdr hdr; -#define REE_RULE_DB_RSP_BLOCK_SIZE (MBOX_DOWN_TX_SIZE - SZ_1K) - uint8_t __otx2_io rule_db[REE_RULE_DB_RSP_BLOCK_SIZE]; - uint32_t __otx2_io total_len; /* total len of rule db */ - uint32_t __otx2_io offset; /* offset of current rule db block */ - uint16_t __otx2_io len; /* length of rule db block */ - uint8_t __otx2_io is_last; /* is this the last block */ -}; - -__rte_internal -const char *otx2_mbox_id2name(uint16_t id); -int otx2_mbox_id2size(uint16_t id); -void otx2_mbox_reset(struct otx2_mbox *mbox, int devid); -int otx2_mbox_init(struct otx2_mbox *mbox, uintptr_t hwbase, uintptr_t reg_base, - int direction, int ndevsi, uint64_t intr_offset); -void otx2_mbox_fini(struct otx2_mbox *mbox); -__rte_internal -void otx2_mbox_msg_send(struct otx2_mbox *mbox, int devid); -__rte_internal -int otx2_mbox_wait_for_rsp(struct otx2_mbox *mbox, int devid); -int otx2_mbox_wait_for_rsp_tmo(struct otx2_mbox *mbox, int devid, uint32_t tmo); 
-__rte_internal -int otx2_mbox_get_rsp(struct otx2_mbox *mbox, int devid, void **msg); -__rte_internal -int otx2_mbox_get_rsp_tmo(struct otx2_mbox *mbox, int devid, void **msg, - uint32_t tmo); -int otx2_mbox_get_availmem(struct otx2_mbox *mbox, int devid); -__rte_internal -struct mbox_msghdr *otx2_mbox_alloc_msg_rsp(struct otx2_mbox *mbox, int devid, - int size, int size_rsp); - -static inline struct mbox_msghdr * -otx2_mbox_alloc_msg(struct otx2_mbox *mbox, int devid, int size) -{ - return otx2_mbox_alloc_msg_rsp(mbox, devid, size, 0); -} - -static inline void -otx2_mbox_req_init(uint16_t mbox_id, void *msghdr) -{ - struct mbox_msghdr *hdr = msghdr; - - hdr->sig = OTX2_MBOX_REQ_SIG; - hdr->ver = OTX2_MBOX_VERSION; - hdr->id = mbox_id; - hdr->pcifunc = 0; -} - -static inline void -otx2_mbox_rsp_init(uint16_t mbox_id, void *msghdr) -{ - struct mbox_msghdr *hdr = msghdr; - - hdr->sig = OTX2_MBOX_RSP_SIG; - hdr->rc = -ETIMEDOUT; - hdr->id = mbox_id; -} - -static inline bool -otx2_mbox_nonempty(struct otx2_mbox *mbox, int devid) -{ - struct otx2_mbox_dev *mdev = &mbox->dev[devid]; - bool ret; - - rte_spinlock_lock(&mdev->mbox_lock); - ret = mdev->num_msgs != 0; - rte_spinlock_unlock(&mdev->mbox_lock); - - return ret; -} - -static inline int -otx2_mbox_process(struct otx2_mbox *mbox) -{ - otx2_mbox_msg_send(mbox, 0); - return otx2_mbox_get_rsp(mbox, 0, NULL); -} - -static inline int -otx2_mbox_process_msg(struct otx2_mbox *mbox, void **msg) -{ - otx2_mbox_msg_send(mbox, 0); - return otx2_mbox_get_rsp(mbox, 0, msg); -} - -static inline int -otx2_mbox_process_tmo(struct otx2_mbox *mbox, uint32_t tmo) -{ - otx2_mbox_msg_send(mbox, 0); - return otx2_mbox_get_rsp_tmo(mbox, 0, NULL, tmo); -} - -static inline int -otx2_mbox_process_msg_tmo(struct otx2_mbox *mbox, void **msg, uint32_t tmo) -{ - otx2_mbox_msg_send(mbox, 0); - return otx2_mbox_get_rsp_tmo(mbox, 0, msg, tmo); -} - -int otx2_send_ready_msg(struct otx2_mbox *mbox, uint16_t *pf_func /* out */); -int 
otx2_reply_invalid_msg(struct otx2_mbox *mbox, int devid, uint16_t pf_func, - uint16_t id); - -#define M(_name, _id, _fn_name, _req_type, _rsp_type) \ -static inline struct _req_type \ -*otx2_mbox_alloc_msg_ ## _fn_name(struct otx2_mbox *mbox) \ -{ \ - struct _req_type *req; \ - \ - req = (struct _req_type *)otx2_mbox_alloc_msg_rsp( \ - mbox, 0, sizeof(struct _req_type), \ - sizeof(struct _rsp_type)); \ - if (!req) \ - return NULL; \ - \ - req->hdr.sig = OTX2_MBOX_REQ_SIG; \ - req->hdr.id = _id; \ - otx2_mbox_dbg("id=0x%x (%s)", \ - req->hdr.id, otx2_mbox_id2name(req->hdr.id)); \ - return req; \ -} - -MBOX_MESSAGES -#undef M - -/* This is required for copy operations from device memory which do not work on - * addresses which are unaligned to 16B. This is because of specific - * optimizations to libc memcpy. - */ -static inline volatile void * -otx2_mbox_memcpy(volatile void *d, const volatile void *s, size_t l) -{ - const volatile uint8_t *sb; - volatile uint8_t *db; - size_t i; - - if (!d || !s) - return NULL; - db = (volatile uint8_t *)d; - sb = (const volatile uint8_t *)s; - for (i = 0; i < l; i++) - db[i] = sb[i]; - return d; -} - -/* This is required for memory operations from device memory which do not - * work on addresses which are unaligned to 16B. This is because of specific - * optimizations to libc memset. - */ -static inline void -otx2_mbox_memset(volatile void *d, uint8_t val, size_t l) -{ - volatile uint8_t *db; - size_t i = 0; - - if (!d || !l) - return; - db = (volatile uint8_t *)d; - for (i = 0; i < l; i++) - db[i] = val; -} - -#endif /* __OTX2_MBOX_H__ */ diff --git a/drivers/common/octeontx2/otx2_sec_idev.c b/drivers/common/octeontx2/otx2_sec_idev.c deleted file mode 100644 index b561b67174..0000000000 --- a/drivers/common/octeontx2/otx2_sec_idev.c +++ /dev/null @@ -1,183 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2020 Marvell International Ltd. 
- */ - -#include -#include -#include -#include - -#include "otx2_common.h" -#include "otx2_sec_idev.h" - -static struct otx2_sec_idev_cfg sec_cfg[OTX2_MAX_INLINE_PORTS]; - -/** - * @internal - * Check if rte_eth_dev is security offload capable otx2_eth_dev - */ -uint8_t -otx2_eth_dev_is_sec_capable(struct rte_eth_dev *eth_dev) -{ - struct rte_pci_device *pci_dev; - - pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); - - if (pci_dev->id.device_id == PCI_DEVID_OCTEONTX2_RVU_PF || - pci_dev->id.device_id == PCI_DEVID_OCTEONTX2_RVU_VF || - pci_dev->id.device_id == PCI_DEVID_OCTEONTX2_RVU_AF_VF) - return 1; - - return 0; -} - -int -otx2_sec_idev_cfg_init(int port_id) -{ - struct otx2_sec_idev_cfg *cfg; - int i; - - cfg = &sec_cfg[port_id]; - cfg->tx_cpt_idx = 0; - rte_spinlock_init(&cfg->tx_cpt_lock); - - for (i = 0; i < OTX2_MAX_CPT_QP_PER_PORT; i++) { - cfg->tx_cpt[i].qp = NULL; - rte_atomic16_set(&cfg->tx_cpt[i].ref_cnt, 0); - } - - return 0; -} - -int -otx2_sec_idev_tx_cpt_qp_add(uint16_t port_id, struct otx2_cpt_qp *qp) -{ - struct otx2_sec_idev_cfg *cfg; - int i, ret; - - if (qp == NULL || port_id >= OTX2_MAX_INLINE_PORTS) - return -EINVAL; - - cfg = &sec_cfg[port_id]; - - /* Find a free slot to save CPT LF */ - - rte_spinlock_lock(&cfg->tx_cpt_lock); - - for (i = 0; i < OTX2_MAX_CPT_QP_PER_PORT; i++) { - if (cfg->tx_cpt[i].qp == NULL) { - cfg->tx_cpt[i].qp = qp; - ret = 0; - goto unlock; - } - } - - ret = -EINVAL; - -unlock: - rte_spinlock_unlock(&cfg->tx_cpt_lock); - return ret; -} - -int -otx2_sec_idev_tx_cpt_qp_remove(struct otx2_cpt_qp *qp) -{ - struct otx2_sec_idev_cfg *cfg; - uint16_t port_id; - int i, ret; - - if (qp == NULL) - return -EINVAL; - - for (port_id = 0; port_id < OTX2_MAX_INLINE_PORTS; port_id++) { - cfg = &sec_cfg[port_id]; - - rte_spinlock_lock(&cfg->tx_cpt_lock); - - for (i = 0; i < OTX2_MAX_CPT_QP_PER_PORT; i++) { - if (cfg->tx_cpt[i].qp != qp) - continue; - - /* Don't free if the QP is in use by any sec session */ - if 
(rte_atomic16_read(&cfg->tx_cpt[i].ref_cnt)) { - ret = -EBUSY; - } else { - cfg->tx_cpt[i].qp = NULL; - ret = 0; - } - - goto unlock; - } - - rte_spinlock_unlock(&cfg->tx_cpt_lock); - } - - return -ENOENT; - -unlock: - rte_spinlock_unlock(&cfg->tx_cpt_lock); - return ret; -} - -int -otx2_sec_idev_tx_cpt_qp_get(uint16_t port_id, struct otx2_cpt_qp **qp) -{ - struct otx2_sec_idev_cfg *cfg; - uint16_t index; - int i, ret; - - if (port_id >= OTX2_MAX_INLINE_PORTS || qp == NULL) - return -EINVAL; - - cfg = &sec_cfg[port_id]; - - rte_spinlock_lock(&cfg->tx_cpt_lock); - - index = cfg->tx_cpt_idx; - - /* Get the next index with valid data */ - for (i = 0; i < OTX2_MAX_CPT_QP_PER_PORT; i++) { - if (cfg->tx_cpt[index].qp != NULL) - break; - index = (index + 1) % OTX2_MAX_CPT_QP_PER_PORT; - } - - if (i >= OTX2_MAX_CPT_QP_PER_PORT) { - ret = -EINVAL; - goto unlock; - } - - *qp = cfg->tx_cpt[index].qp; - rte_atomic16_inc(&cfg->tx_cpt[index].ref_cnt); - - cfg->tx_cpt_idx = (index + 1) % OTX2_MAX_CPT_QP_PER_PORT; - - ret = 0; - -unlock: - rte_spinlock_unlock(&cfg->tx_cpt_lock); - return ret; -} - -int -otx2_sec_idev_tx_cpt_qp_put(struct otx2_cpt_qp *qp) -{ - struct otx2_sec_idev_cfg *cfg; - uint16_t port_id; - int i; - - if (qp == NULL) - return -EINVAL; - - for (port_id = 0; port_id < OTX2_MAX_INLINE_PORTS; port_id++) { - cfg = &sec_cfg[port_id]; - for (i = 0; i < OTX2_MAX_CPT_QP_PER_PORT; i++) { - if (cfg->tx_cpt[i].qp == qp) { - rte_atomic16_dec(&cfg->tx_cpt[i].ref_cnt); - return 0; - } - } - } - - return -EINVAL; -} diff --git a/drivers/common/octeontx2/otx2_sec_idev.h b/drivers/common/octeontx2/otx2_sec_idev.h deleted file mode 100644 index 89cdaf66ab..0000000000 --- a/drivers/common/octeontx2/otx2_sec_idev.h +++ /dev/null @@ -1,43 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2020 Marvell International Ltd. 
- */ - -#ifndef _OTX2_SEC_IDEV_H_ -#define _OTX2_SEC_IDEV_H_ - -#include - -#define OTX2_MAX_CPT_QP_PER_PORT 64 -#define OTX2_MAX_INLINE_PORTS 64 - -struct otx2_cpt_qp; - -struct otx2_sec_idev_cfg { - struct { - struct otx2_cpt_qp *qp; - rte_atomic16_t ref_cnt; - } tx_cpt[OTX2_MAX_CPT_QP_PER_PORT]; - - uint16_t tx_cpt_idx; - rte_spinlock_t tx_cpt_lock; -}; - -__rte_internal -uint8_t otx2_eth_dev_is_sec_capable(struct rte_eth_dev *eth_dev); - -__rte_internal -int otx2_sec_idev_cfg_init(int port_id); - -__rte_internal -int otx2_sec_idev_tx_cpt_qp_add(uint16_t port_id, struct otx2_cpt_qp *qp); - -__rte_internal -int otx2_sec_idev_tx_cpt_qp_remove(struct otx2_cpt_qp *qp); - -__rte_internal -int otx2_sec_idev_tx_cpt_qp_put(struct otx2_cpt_qp *qp); - -__rte_internal -int otx2_sec_idev_tx_cpt_qp_get(uint16_t port_id, struct otx2_cpt_qp **qp); - -#endif /* _OTX2_SEC_IDEV_H_ */ diff --git a/drivers/common/octeontx2/version.map b/drivers/common/octeontx2/version.map deleted file mode 100644 index b58f19ce32..0000000000 --- a/drivers/common/octeontx2/version.map +++ /dev/null @@ -1,44 +0,0 @@ -INTERNAL { - global: - - otx2_dev_active_vfs; - otx2_dev_fini; - otx2_dev_priv_init; - otx2_disable_irqs; - otx2_eth_dev_is_sec_capable; - otx2_intra_dev_get_cfg; - otx2_logtype_base; - otx2_logtype_dpi; - otx2_logtype_ep; - otx2_logtype_mbox; - otx2_logtype_nix; - otx2_logtype_npa; - otx2_logtype_npc; - otx2_logtype_ree; - otx2_logtype_sso; - otx2_logtype_tim; - otx2_logtype_tm; - otx2_mbox_alloc_msg_rsp; - otx2_mbox_get_rsp; - otx2_mbox_get_rsp_tmo; - otx2_mbox_id2name; - otx2_mbox_msg_send; - otx2_mbox_wait_for_rsp; - otx2_npa_lf_active; - otx2_npa_lf_obj_get; - otx2_npa_lf_obj_ref; - otx2_npa_pf_func_get; - otx2_npa_set_defaults; - otx2_parse_common_devargs; - otx2_register_irq; - otx2_sec_idev_cfg_init; - otx2_sec_idev_tx_cpt_qp_add; - otx2_sec_idev_tx_cpt_qp_get; - otx2_sec_idev_tx_cpt_qp_put; - otx2_sec_idev_tx_cpt_qp_remove; - otx2_sso_pf_func_get; - otx2_sso_pf_func_set; - 
otx2_unregister_irq; - - local: *; -}; diff --git a/drivers/crypto/meson.build b/drivers/crypto/meson.build index 59f02ea47c..147b8cf633 100644 --- a/drivers/crypto/meson.build +++ b/drivers/crypto/meson.build @@ -16,7 +16,6 @@ drivers = [ 'nitrox', 'null', 'octeontx', - 'octeontx2', 'openssl', 'scheduler', 'virtio', diff --git a/drivers/crypto/octeontx2/meson.build b/drivers/crypto/octeontx2/meson.build deleted file mode 100644 index 3b387cc570..0000000000 --- a/drivers/crypto/octeontx2/meson.build +++ /dev/null @@ -1,30 +0,0 @@ -# SPDX-License-Identifier: BSD-3-Clause -# Copyright (C) 2019 Marvell International Ltd. - -if not is_linux or not dpdk_conf.get('RTE_ARCH_64') - build = false - reason = 'only supported on 64-bit Linux' - subdir_done() -endif - -deps += ['bus_pci'] -deps += ['common_cpt'] -deps += ['common_octeontx2'] -deps += ['ethdev'] -deps += ['eventdev'] -deps += ['security'] - -sources = files( - 'otx2_cryptodev.c', - 'otx2_cryptodev_capabilities.c', - 'otx2_cryptodev_hw_access.c', - 'otx2_cryptodev_mbox.c', - 'otx2_cryptodev_ops.c', - 'otx2_cryptodev_sec.c', -) - -includes += include_directories('../../common/cpt') -includes += include_directories('../../common/octeontx2') -includes += include_directories('../../crypto/octeontx2') -includes += include_directories('../../mempool/octeontx2') -includes += include_directories('../../net/octeontx2') diff --git a/drivers/crypto/octeontx2/otx2_cryptodev.c b/drivers/crypto/octeontx2/otx2_cryptodev.c deleted file mode 100644 index fc7ad05366..0000000000 --- a/drivers/crypto/octeontx2/otx2_cryptodev.c +++ /dev/null @@ -1,188 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright (C) 2019 Marvell International Ltd. 
- */ - -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include "otx2_common.h" -#include "otx2_cryptodev.h" -#include "otx2_cryptodev_capabilities.h" -#include "otx2_cryptodev_mbox.h" -#include "otx2_cryptodev_ops.h" -#include "otx2_cryptodev_sec.h" -#include "otx2_dev.h" - -/* CPT common headers */ -#include "cpt_common.h" -#include "cpt_pmd_logs.h" - -uint8_t otx2_cryptodev_driver_id; - -static struct rte_pci_id pci_id_cpt_table[] = { - { - RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, - PCI_DEVID_OCTEONTX2_RVU_CPT_VF) - }, - /* sentinel */ - { - .device_id = 0 - }, -}; - -uint64_t -otx2_cpt_default_ff_get(void) -{ - return RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO | - RTE_CRYPTODEV_FF_HW_ACCELERATED | - RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING | - RTE_CRYPTODEV_FF_IN_PLACE_SGL | - RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT | - RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT | - RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT | - RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO | - RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_QT | - RTE_CRYPTODEV_FF_SYM_SESSIONLESS | - RTE_CRYPTODEV_FF_SECURITY | - RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED; -} - -static int -otx2_cpt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, - struct rte_pci_device *pci_dev) -{ - struct rte_cryptodev_pmd_init_params init_params = { - .name = "", - .socket_id = rte_socket_id(), - .private_data_size = sizeof(struct otx2_cpt_vf) - }; - char name[RTE_CRYPTODEV_NAME_MAX_LEN]; - struct rte_cryptodev *dev; - struct otx2_dev *otx2_dev; - struct otx2_cpt_vf *vf; - uint16_t nb_queues; - int ret; - - rte_pci_device_name(&pci_dev->addr, name, sizeof(name)); - - dev = rte_cryptodev_pmd_create(name, &pci_dev->device, &init_params); - if (dev == NULL) { - ret = -ENODEV; - goto exit; - } - - dev->dev_ops = &otx2_cpt_ops; - - dev->driver_id = otx2_cryptodev_driver_id; - - /* Get private data space allocated */ - vf = dev->data->dev_private; - - otx2_dev = &vf->otx2_dev; - - if (rte_eal_process_type() == RTE_PROC_PRIMARY) { - /* Initialize 
the base otx2_dev object */ - ret = otx2_dev_init(pci_dev, otx2_dev); - if (ret) { - CPT_LOG_ERR("Could not initialize otx2_dev"); - goto pmd_destroy; - } - - /* Get number of queues available on the device */ - ret = otx2_cpt_available_queues_get(dev, &nb_queues); - if (ret) { - CPT_LOG_ERR("Could not determine the number of queues available"); - goto otx2_dev_fini; - } - - /* Don't exceed the limits set per VF */ - nb_queues = RTE_MIN(nb_queues, OTX2_CPT_MAX_QUEUES_PER_VF); - - if (nb_queues == 0) { - CPT_LOG_ERR("No free queues available on the device"); - goto otx2_dev_fini; - } - - vf->max_queues = nb_queues; - - CPT_LOG_INFO("Max queues supported by device: %d", - vf->max_queues); - - ret = otx2_cpt_hardware_caps_get(dev, vf->hw_caps); - if (ret) { - CPT_LOG_ERR("Could not determine hardware capabilities"); - goto otx2_dev_fini; - } - } - - otx2_crypto_capabilities_init(vf->hw_caps); - otx2_crypto_sec_capabilities_init(vf->hw_caps); - - /* Create security ctx */ - ret = otx2_crypto_sec_ctx_create(dev); - if (ret) - goto otx2_dev_fini; - - dev->feature_flags = otx2_cpt_default_ff_get(); - - if (rte_eal_process_type() == RTE_PROC_SECONDARY) - otx2_cpt_set_enqdeq_fns(dev); - - rte_cryptodev_pmd_probing_finish(dev); - - return 0; - -otx2_dev_fini: - if (rte_eal_process_type() == RTE_PROC_PRIMARY) - otx2_dev_fini(pci_dev, otx2_dev); -pmd_destroy: - rte_cryptodev_pmd_destroy(dev); -exit: - CPT_LOG_ERR("Could not create device (vendor_id: 0x%x device_id: 0x%x)", - pci_dev->id.vendor_id, pci_dev->id.device_id); - return ret; -} - -static int -otx2_cpt_pci_remove(struct rte_pci_device *pci_dev) -{ - char name[RTE_CRYPTODEV_NAME_MAX_LEN]; - struct rte_cryptodev *dev; - - if (pci_dev == NULL) - return -EINVAL; - - rte_pci_device_name(&pci_dev->addr, name, sizeof(name)); - - dev = rte_cryptodev_pmd_get_named_dev(name); - if (dev == NULL) - return -ENODEV; - - /* Destroy security ctx */ - otx2_crypto_sec_ctx_destroy(dev); - - return rte_cryptodev_pmd_destroy(dev); -} - 
-static struct rte_pci_driver otx2_cryptodev_pmd = { - .id_table = pci_id_cpt_table, - .drv_flags = RTE_PCI_DRV_NEED_MAPPING, - .probe = otx2_cpt_pci_probe, - .remove = otx2_cpt_pci_remove, -}; - -static struct cryptodev_driver otx2_cryptodev_drv; - -RTE_PMD_REGISTER_PCI(CRYPTODEV_NAME_OCTEONTX2_PMD, otx2_cryptodev_pmd); -RTE_PMD_REGISTER_PCI_TABLE(CRYPTODEV_NAME_OCTEONTX2_PMD, pci_id_cpt_table); -RTE_PMD_REGISTER_KMOD_DEP(CRYPTODEV_NAME_OCTEONTX2_PMD, "vfio-pci"); -RTE_PMD_REGISTER_CRYPTO_DRIVER(otx2_cryptodev_drv, otx2_cryptodev_pmd.driver, - otx2_cryptodev_driver_id); -RTE_LOG_REGISTER_DEFAULT(otx2_cpt_logtype, NOTICE); diff --git a/drivers/crypto/octeontx2/otx2_cryptodev.h b/drivers/crypto/octeontx2/otx2_cryptodev.h deleted file mode 100644 index 15ecfe45b6..0000000000 --- a/drivers/crypto/octeontx2/otx2_cryptodev.h +++ /dev/null @@ -1,63 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright (C) 2019 Marvell International Ltd. - */ - -#ifndef _OTX2_CRYPTODEV_H_ -#define _OTX2_CRYPTODEV_H_ - -#include "cpt_common.h" -#include "cpt_hw_types.h" - -#include "otx2_dev.h" - -/* Marvell OCTEON TX2 Crypto PMD device name */ -#define CRYPTODEV_NAME_OCTEONTX2_PMD crypto_octeontx2 - -#define OTX2_CPT_MAX_LFS 128 -#define OTX2_CPT_MAX_QUEUES_PER_VF 64 -#define OTX2_CPT_MAX_BLKS 2 -#define OTX2_CPT_PMD_VERSION 3 -#define OTX2_CPT_REVISION_ID_3 3 - -/** - * Device private data - */ -struct otx2_cpt_vf { - struct otx2_dev otx2_dev; - /**< Base class */ - uint16_t max_queues; - /**< Max queues supported */ - uint8_t nb_queues; - /**< Number of crypto queues attached */ - uint16_t lf_msixoff[OTX2_CPT_MAX_LFS]; - /**< MSI-X offsets */ - uint8_t lf_blkaddr[OTX2_CPT_MAX_LFS]; - /**< CPT0/1 BLKADDR of LFs */ - uint8_t cpt_revision; - /**< CPT revision */ - uint8_t err_intr_registered:1; - /**< Are error interrupts registered? 
*/ - union cpt_eng_caps hw_caps[CPT_MAX_ENG_TYPES]; - /**< CPT device capabilities */ -}; - -struct cpt_meta_info { - uint64_t deq_op_info[5]; - uint64_t comp_code_sz; - union cpt_res_s cpt_res __rte_aligned(16); - struct cpt_request_info cpt_req; -}; - -#define CPT_LOGTYPE otx2_cpt_logtype - -extern int otx2_cpt_logtype; - -/* - * Crypto device driver ID - */ -extern uint8_t otx2_cryptodev_driver_id; - -uint64_t otx2_cpt_default_ff_get(void); -void otx2_cpt_set_enqdeq_fns(struct rte_cryptodev *dev); - -#endif /* _OTX2_CRYPTODEV_H_ */ diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_capabilities.c b/drivers/crypto/octeontx2/otx2_cryptodev_capabilities.c deleted file mode 100644 index ba3fbbbe22..0000000000 --- a/drivers/crypto/octeontx2/otx2_cryptodev_capabilities.c +++ /dev/null @@ -1,924 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright (C) 2019 Marvell International Ltd. - */ - -#include -#include - -#include "otx2_cryptodev.h" -#include "otx2_cryptodev_capabilities.h" -#include "otx2_mbox.h" - -#define CPT_EGRP_GET(hw_caps, name, egrp) do { \ - if ((hw_caps[CPT_ENG_TYPE_SE].name) && \ - (hw_caps[CPT_ENG_TYPE_IE].name)) \ - *egrp = OTX2_CPT_EGRP_SE_IE; \ - else if (hw_caps[CPT_ENG_TYPE_SE].name) \ - *egrp = OTX2_CPT_EGRP_SE; \ - else if (hw_caps[CPT_ENG_TYPE_AE].name) \ - *egrp = OTX2_CPT_EGRP_AE; \ - else \ - *egrp = OTX2_CPT_EGRP_MAX; \ -} while (0) - -#define CPT_CAPS_ADD(hw_caps, name) do { \ - enum otx2_cpt_egrp egrp; \ - CPT_EGRP_GET(hw_caps, name, &egrp); \ - if (egrp < OTX2_CPT_EGRP_MAX) \ - cpt_caps_add(caps_##name, RTE_DIM(caps_##name)); \ -} while (0) - -#define SEC_CAPS_ADD(hw_caps, name) do { \ - enum otx2_cpt_egrp egrp; \ - CPT_EGRP_GET(hw_caps, name, &egrp); \ - if (egrp < OTX2_CPT_EGRP_MAX) \ - sec_caps_add(sec_caps_##name, RTE_DIM(sec_caps_##name));\ -} while (0) - -#define OTX2_CPT_MAX_CAPS 34 -#define OTX2_SEC_MAX_CAPS 4 - -static struct rte_cryptodev_capabilities otx2_cpt_caps[OTX2_CPT_MAX_CAPS]; -static struct 
rte_cryptodev_capabilities otx2_cpt_sec_caps[OTX2_SEC_MAX_CAPS]; - -static const struct rte_cryptodev_capabilities caps_mul[] = { - { /* RSA */ - .op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC, - {.asym = { - .xform_capa = { - .xform_type = RTE_CRYPTO_ASYM_XFORM_RSA, - .op_types = ((1 << RTE_CRYPTO_ASYM_OP_SIGN) | - (1 << RTE_CRYPTO_ASYM_OP_VERIFY) | - (1 << RTE_CRYPTO_ASYM_OP_ENCRYPT) | - (1 << RTE_CRYPTO_ASYM_OP_DECRYPT)), - {.modlen = { - .min = 17, - .max = 1024, - .increment = 1 - }, } - } - }, } - }, - { /* MOD_EXP */ - .op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC, - {.asym = { - .xform_capa = { - .xform_type = RTE_CRYPTO_ASYM_XFORM_MODEX, - .op_types = 0, - {.modlen = { - .min = 17, - .max = 1024, - .increment = 1 - }, } - } - }, } - }, - { /* ECDSA */ - .op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC, - {.asym = { - .xform_capa = { - .xform_type = RTE_CRYPTO_ASYM_XFORM_ECDSA, - .op_types = ((1 << RTE_CRYPTO_ASYM_OP_SIGN) | - (1 << RTE_CRYPTO_ASYM_OP_VERIFY)), - } - }, - } - }, - { /* ECPM */ - .op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC, - {.asym = { - .xform_capa = { - .xform_type = RTE_CRYPTO_ASYM_XFORM_ECPM, - .op_types = 0 - } - }, - } - }, -}; - -static const struct rte_cryptodev_capabilities caps_sha1_sha2[] = { - { /* SHA1 */ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, - {.sym = { - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, - {.auth = { - .algo = RTE_CRYPTO_AUTH_SHA1, - .block_size = 64, - .key_size = { - .min = 0, - .max = 0, - .increment = 0 - }, - .digest_size = { - .min = 20, - .max = 20, - .increment = 0 - }, - }, } - }, } - }, - { /* SHA1 HMAC */ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, - {.sym = { - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, - {.auth = { - .algo = RTE_CRYPTO_AUTH_SHA1_HMAC, - .block_size = 64, - .key_size = { - .min = 1, - .max = 1024, - .increment = 1 - }, - .digest_size = { - .min = 12, - .max = 20, - .increment = 8 - }, - }, } - }, } - }, - { /* SHA224 */ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, - {.sym = { - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, - {.auth = { - .algo = 
RTE_CRYPTO_AUTH_SHA224, - .block_size = 64, - .key_size = { - .min = 0, - .max = 0, - .increment = 0 - }, - .digest_size = { - .min = 28, - .max = 28, - .increment = 0 - }, - }, } - }, } - }, - { /* SHA224 HMAC */ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, - {.sym = { - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, - {.auth = { - .algo = RTE_CRYPTO_AUTH_SHA224_HMAC, - .block_size = 64, - .key_size = { - .min = 1, - .max = 1024, - .increment = 1 - }, - .digest_size = { - .min = 28, - .max = 28, - .increment = 0 - }, - }, } - }, } - }, - { /* SHA256 */ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, - {.sym = { - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, - {.auth = { - .algo = RTE_CRYPTO_AUTH_SHA256, - .block_size = 64, - .key_size = { - .min = 0, - .max = 0, - .increment = 0 - }, - .digest_size = { - .min = 32, - .max = 32, - .increment = 0 - }, - }, } - }, } - }, - { /* SHA256 HMAC */ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, - {.sym = { - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, - {.auth = { - .algo = RTE_CRYPTO_AUTH_SHA256_HMAC, - .block_size = 64, - .key_size = { - .min = 1, - .max = 1024, - .increment = 1 - }, - .digest_size = { - .min = 16, - .max = 32, - .increment = 16 - }, - }, } - }, } - }, - { /* SHA384 */ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, - {.sym = { - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, - {.auth = { - .algo = RTE_CRYPTO_AUTH_SHA384, - .block_size = 64, - .key_size = { - .min = 0, - .max = 0, - .increment = 0 - }, - .digest_size = { - .min = 48, - .max = 48, - .increment = 0 - }, - }, } - }, } - }, - { /* SHA384 HMAC */ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, - {.sym = { - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, - {.auth = { - .algo = RTE_CRYPTO_AUTH_SHA384_HMAC, - .block_size = 64, - .key_size = { - .min = 1, - .max = 1024, - .increment = 1 - }, - .digest_size = { - .min = 24, - .max = 48, - .increment = 24 - }, - }, } - }, } - }, - { /* SHA512 */ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, - {.sym = { - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, - {.auth = { - .algo = 
RTE_CRYPTO_AUTH_SHA512, - .block_size = 128, - .key_size = { - .min = 0, - .max = 0, - .increment = 0 - }, - .digest_size = { - .min = 64, - .max = 64, - .increment = 0 - }, - }, } - }, } - }, - { /* SHA512 HMAC */ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, - {.sym = { - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, - {.auth = { - .algo = RTE_CRYPTO_AUTH_SHA512_HMAC, - .block_size = 128, - .key_size = { - .min = 1, - .max = 1024, - .increment = 1 - }, - .digest_size = { - .min = 32, - .max = 64, - .increment = 32 - }, - }, } - }, } - }, - { /* MD5 */ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, - {.sym = { - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, - {.auth = { - .algo = RTE_CRYPTO_AUTH_MD5, - .block_size = 64, - .key_size = { - .min = 0, - .max = 0, - .increment = 0 - }, - .digest_size = { - .min = 16, - .max = 16, - .increment = 0 - }, - }, } - }, } - }, - { /* MD5 HMAC */ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, - {.sym = { - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, - {.auth = { - .algo = RTE_CRYPTO_AUTH_MD5_HMAC, - .block_size = 64, - .key_size = { - .min = 8, - .max = 64, - .increment = 8 - }, - .digest_size = { - .min = 12, - .max = 16, - .increment = 4 - }, - }, } - }, } - }, -}; - -static const struct rte_cryptodev_capabilities caps_chacha20[] = { - { /* Chacha20-Poly1305 */ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, - {.sym = { - .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD, - {.aead = { - .algo = RTE_CRYPTO_AEAD_CHACHA20_POLY1305, - .block_size = 64, - .key_size = { - .min = 32, - .max = 32, - .increment = 0 - }, - .digest_size = { - .min = 16, - .max = 16, - .increment = 0 - }, - .aad_size = { - .min = 0, - .max = 1024, - .increment = 1 - }, - .iv_size = { - .min = 12, - .max = 12, - .increment = 0 - }, - }, } - }, } - } -}; - -static const struct rte_cryptodev_capabilities caps_zuc_snow3g[] = { - { /* SNOW 3G (UEA2) */ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, - {.sym = { - .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, - {.cipher = { - .algo = RTE_CRYPTO_CIPHER_SNOW3G_UEA2, - .block_size = 
16, - .key_size = { - .min = 16, - .max = 16, - .increment = 0 - }, - .iv_size = { - .min = 16, - .max = 16, - .increment = 0 - } - }, } - }, } - }, - { /* ZUC (EEA3) */ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, - {.sym = { - .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, - {.cipher = { - .algo = RTE_CRYPTO_CIPHER_ZUC_EEA3, - .block_size = 16, - .key_size = { - .min = 16, - .max = 16, - .increment = 0 - }, - .iv_size = { - .min = 16, - .max = 16, - .increment = 0 - } - }, } - }, } - }, - { /* SNOW 3G (UIA2) */ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, - {.sym = { - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, - {.auth = { - .algo = RTE_CRYPTO_AUTH_SNOW3G_UIA2, - .block_size = 16, - .key_size = { - .min = 16, - .max = 16, - .increment = 0 - }, - .digest_size = { - .min = 4, - .max = 4, - .increment = 0 - }, - .iv_size = { - .min = 16, - .max = 16, - .increment = 0 - } - }, } - }, } - }, - { /* ZUC (EIA3) */ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, - {.sym = { - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, - {.auth = { - .algo = RTE_CRYPTO_AUTH_ZUC_EIA3, - .block_size = 16, - .key_size = { - .min = 16, - .max = 16, - .increment = 0 - }, - .digest_size = { - .min = 4, - .max = 4, - .increment = 0 - }, - .iv_size = { - .min = 16, - .max = 16, - .increment = 0 - } - }, } - }, } - }, -}; - -static const struct rte_cryptodev_capabilities caps_aes[] = { - { /* AES GMAC (AUTH) */ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, - {.sym = { - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, - {.auth = { - .algo = RTE_CRYPTO_AUTH_AES_GMAC, - .block_size = 16, - .key_size = { - .min = 16, - .max = 32, - .increment = 8 - }, - .digest_size = { - .min = 8, - .max = 16, - .increment = 4 - }, - .iv_size = { - .min = 12, - .max = 12, - .increment = 0 - } - }, } - }, } - }, - { /* AES CBC */ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, - {.sym = { - .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, - {.cipher = { - .algo = RTE_CRYPTO_CIPHER_AES_CBC, - .block_size = 16, - .key_size = { - .min = 16, - .max = 32, - .increment = 8 - }, - 
.iv_size = { - .min = 16, - .max = 16, - .increment = 0 - } - }, } - }, } - }, - { /* AES CTR */ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, - {.sym = { - .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, - {.cipher = { - .algo = RTE_CRYPTO_CIPHER_AES_CTR, - .block_size = 16, - .key_size = { - .min = 16, - .max = 32, - .increment = 8 - }, - .iv_size = { - .min = 12, - .max = 16, - .increment = 4 - } - }, } - }, } - }, - { /* AES XTS */ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, - {.sym = { - .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, - {.cipher = { - .algo = RTE_CRYPTO_CIPHER_AES_XTS, - .block_size = 16, - .key_size = { - .min = 32, - .max = 64, - .increment = 0 - }, - .iv_size = { - .min = 16, - .max = 16, - .increment = 0 - } - }, } - }, } - }, - { /* AES GCM */ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, - {.sym = { - .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD, - {.aead = { - .algo = RTE_CRYPTO_AEAD_AES_GCM, - .block_size = 16, - .key_size = { - .min = 16, - .max = 32, - .increment = 8 - }, - .digest_size = { - .min = 4, - .max = 16, - .increment = 1 - }, - .aad_size = { - .min = 0, - .max = 1024, - .increment = 1 - }, - .iv_size = { - .min = 12, - .max = 12, - .increment = 0 - } - }, } - }, } - }, -}; - -static const struct rte_cryptodev_capabilities caps_kasumi[] = { - { /* KASUMI (F8) */ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, - {.sym = { - .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, - {.cipher = { - .algo = RTE_CRYPTO_CIPHER_KASUMI_F8, - .block_size = 8, - .key_size = { - .min = 16, - .max = 16, - .increment = 0 - }, - .iv_size = { - .min = 8, - .max = 8, - .increment = 0 - } - }, } - }, } - }, - { /* KASUMI (F9) */ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, - {.sym = { - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, - {.auth = { - .algo = RTE_CRYPTO_AUTH_KASUMI_F9, - .block_size = 8, - .key_size = { - .min = 16, - .max = 16, - .increment = 0 - }, - .digest_size = { - .min = 4, - .max = 4, - .increment = 0 - }, - }, } - }, } - }, -}; - -static const struct rte_cryptodev_capabilities caps_des[] = { 
- { /* 3DES CBC */ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, - {.sym = { - .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, - {.cipher = { - .algo = RTE_CRYPTO_CIPHER_3DES_CBC, - .block_size = 8, - .key_size = { - .min = 24, - .max = 24, - .increment = 0 - }, - .iv_size = { - .min = 8, - .max = 16, - .increment = 8 - } - }, } - }, } - }, - { /* 3DES ECB */ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, - {.sym = { - .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, - {.cipher = { - .algo = RTE_CRYPTO_CIPHER_3DES_ECB, - .block_size = 8, - .key_size = { - .min = 24, - .max = 24, - .increment = 0 - }, - .iv_size = { - .min = 0, - .max = 0, - .increment = 0 - } - }, } - }, } - }, - { /* DES CBC */ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, - {.sym = { - .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, - {.cipher = { - .algo = RTE_CRYPTO_CIPHER_DES_CBC, - .block_size = 8, - .key_size = { - .min = 8, - .max = 8, - .increment = 0 - }, - .iv_size = { - .min = 8, - .max = 8, - .increment = 0 - } - }, } - }, } - }, -}; - -static const struct rte_cryptodev_capabilities caps_null[] = { - { /* NULL (AUTH) */ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, - {.sym = { - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, - {.auth = { - .algo = RTE_CRYPTO_AUTH_NULL, - .block_size = 1, - .key_size = { - .min = 0, - .max = 0, - .increment = 0 - }, - .digest_size = { - .min = 0, - .max = 0, - .increment = 0 - }, - }, }, - }, }, - }, - { /* NULL (CIPHER) */ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, - {.sym = { - .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, - {.cipher = { - .algo = RTE_CRYPTO_CIPHER_NULL, - .block_size = 1, - .key_size = { - .min = 0, - .max = 0, - .increment = 0 - }, - .iv_size = { - .min = 0, - .max = 0, - .increment = 0 - } - }, }, - }, } - }, -}; - -static const struct rte_cryptodev_capabilities caps_end[] = { - RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST() -}; - -static const struct rte_cryptodev_capabilities sec_caps_aes[] = { - { /* AES GCM */ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, - {.sym = { - .xform_type = 
RTE_CRYPTO_SYM_XFORM_AEAD, - {.aead = { - .algo = RTE_CRYPTO_AEAD_AES_GCM, - .block_size = 16, - .key_size = { - .min = 16, - .max = 32, - .increment = 8 - }, - .digest_size = { - .min = 16, - .max = 16, - .increment = 0 - }, - .aad_size = { - .min = 8, - .max = 12, - .increment = 4 - }, - .iv_size = { - .min = 12, - .max = 12, - .increment = 0 - } - }, } - }, } - }, - { /* AES CBC */ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, - {.sym = { - .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, - {.cipher = { - .algo = RTE_CRYPTO_CIPHER_AES_CBC, - .block_size = 16, - .key_size = { - .min = 16, - .max = 32, - .increment = 8 - }, - .iv_size = { - .min = 16, - .max = 16, - .increment = 0 - } - }, } - }, } - }, -}; - -static const struct rte_cryptodev_capabilities sec_caps_sha1_sha2[] = { - { /* SHA1 HMAC */ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, - {.sym = { - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, - {.auth = { - .algo = RTE_CRYPTO_AUTH_SHA1_HMAC, - .block_size = 64, - .key_size = { - .min = 1, - .max = 1024, - .increment = 1 - }, - .digest_size = { - .min = 12, - .max = 20, - .increment = 8 - }, - }, } - }, } - }, - { /* SHA256 HMAC */ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, - {.sym = { - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, - {.auth = { - .algo = RTE_CRYPTO_AUTH_SHA256_HMAC, - .block_size = 64, - .key_size = { - .min = 1, - .max = 1024, - .increment = 1 - }, - .digest_size = { - .min = 16, - .max = 32, - .increment = 16 - }, - }, } - }, } - }, -}; - -static const struct rte_security_capability -otx2_crypto_sec_capabilities[] = { - { /* IPsec Lookaside Protocol ESP Tunnel Ingress */ - .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL, - .protocol = RTE_SECURITY_PROTOCOL_IPSEC, - .ipsec = { - .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP, - .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL, - .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS, - .options = { 0 } - }, - .crypto_capabilities = otx2_cpt_sec_caps, - .ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA - }, - { /* IPsec Lookaside Protocol 
ESP Tunnel Egress */
-		.action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
-		.protocol = RTE_SECURITY_PROTOCOL_IPSEC,
-		.ipsec = {
-			.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
-			.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
-			.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
-			.options = { 0 }
-		},
-		.crypto_capabilities = otx2_cpt_sec_caps,
-		.ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
-	},
-	{
-		.action = RTE_SECURITY_ACTION_TYPE_NONE
-	}
-};
-
-static void
-cpt_caps_add(const struct rte_cryptodev_capabilities *caps, int nb_caps)
-{
-	static int cur_pos;
-
-	if (cur_pos + nb_caps > OTX2_CPT_MAX_CAPS)
-		return;
-
-	memcpy(&otx2_cpt_caps[cur_pos], caps, nb_caps * sizeof(caps[0]));
-	cur_pos += nb_caps;
-}
-
-void
-otx2_crypto_capabilities_init(union cpt_eng_caps *hw_caps)
-{
-	CPT_CAPS_ADD(hw_caps, mul);
-	CPT_CAPS_ADD(hw_caps, sha1_sha2);
-	CPT_CAPS_ADD(hw_caps, chacha20);
-	CPT_CAPS_ADD(hw_caps, zuc_snow3g);
-	CPT_CAPS_ADD(hw_caps, aes);
-	CPT_CAPS_ADD(hw_caps, kasumi);
-	CPT_CAPS_ADD(hw_caps, des);
-
-	cpt_caps_add(caps_null, RTE_DIM(caps_null));
-	cpt_caps_add(caps_end, RTE_DIM(caps_end));
-}
-
-const struct rte_cryptodev_capabilities *
-otx2_cpt_capabilities_get(void)
-{
-	return otx2_cpt_caps;
-}
-
-static void
-sec_caps_add(const struct rte_cryptodev_capabilities *caps, int nb_caps)
-{
-	static int cur_pos;
-
-	if (cur_pos + nb_caps > OTX2_SEC_MAX_CAPS)
-		return;
-
-	memcpy(&otx2_cpt_sec_caps[cur_pos], caps, nb_caps * sizeof(caps[0]));
-	cur_pos += nb_caps;
-}
-
-void
-otx2_crypto_sec_capabilities_init(union cpt_eng_caps *hw_caps)
-{
-	SEC_CAPS_ADD(hw_caps, aes);
-	SEC_CAPS_ADD(hw_caps, sha1_sha2);
-
-	sec_caps_add(caps_end, RTE_DIM(caps_end));
-}
-
-const struct rte_security_capability *
-otx2_crypto_sec_capabilities_get(void *device __rte_unused)
-{
-	return otx2_crypto_sec_capabilities;
-}
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_capabilities.h b/drivers/crypto/octeontx2/otx2_cryptodev_capabilities.h
deleted file mode 100644
index
c1e0001190..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_capabilities.h
+++ /dev/null
@@ -1,45 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_CRYPTODEV_CAPABILITIES_H_
-#define _OTX2_CRYPTODEV_CAPABILITIES_H_
-
-#include
-
-#include "otx2_mbox.h"
-
-enum otx2_cpt_egrp {
-	OTX2_CPT_EGRP_SE = 0,
-	OTX2_CPT_EGRP_SE_IE = 1,
-	OTX2_CPT_EGRP_AE = 2,
-	OTX2_CPT_EGRP_MAX,
-};
-
-/*
- * Initialize crypto capabilities for the device
- *
- */
-void otx2_crypto_capabilities_init(union cpt_eng_caps *hw_caps);
-
-/*
- * Get capabilities list for the device
- *
- */
-const struct rte_cryptodev_capabilities *
-otx2_cpt_capabilities_get(void);
-
-/*
- * Initialize security capabilities for the device
- *
- */
-void otx2_crypto_sec_capabilities_init(union cpt_eng_caps *hw_caps);
-
-/*
- * Get security capabilities list for the device
- *
- */
-const struct rte_security_capability *
-otx2_crypto_sec_capabilities_get(void *device __rte_unused);
-
-#endif /* _OTX2_CRYPTODEV_CAPABILITIES_H_ */
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c b/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
deleted file mode 100644
index d5d6b5bad7..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
+++ /dev/null
@@ -1,225 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */ -#include - -#include "otx2_common.h" -#include "otx2_cryptodev.h" -#include "otx2_cryptodev_hw_access.h" -#include "otx2_cryptodev_mbox.h" -#include "otx2_cryptodev_ops.h" -#include "otx2_dev.h" - -#include "cpt_pmd_logs.h" - -static void -otx2_cpt_lf_err_intr_handler(void *param) -{ - uintptr_t base = (uintptr_t)param; - uint8_t lf_id; - uint64_t intr; - - lf_id = (base >> 12) & 0xFF; - - intr = otx2_read64(base + OTX2_CPT_LF_MISC_INT); - if (intr == 0) - return; - - CPT_LOG_ERR("LF %d MISC_INT: 0x%" PRIx64 "", lf_id, intr); - - /* Clear interrupt */ - otx2_write64(intr, base + OTX2_CPT_LF_MISC_INT); -} - -static void -otx2_cpt_lf_err_intr_unregister(const struct rte_cryptodev *dev, - uint16_t msix_off, uintptr_t base) -{ - struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device); - struct rte_intr_handle *handle = pci_dev->intr_handle; - - /* Disable error interrupts */ - otx2_write64(~0ull, base + OTX2_CPT_LF_MISC_INT_ENA_W1C); - - otx2_unregister_irq(handle, otx2_cpt_lf_err_intr_handler, (void *)base, - msix_off); -} - -void -otx2_cpt_err_intr_unregister(const struct rte_cryptodev *dev) -{ - struct otx2_cpt_vf *vf = dev->data->dev_private; - uintptr_t base; - uint32_t i; - - for (i = 0; i < vf->nb_queues; i++) { - base = OTX2_CPT_LF_BAR2(vf, vf->lf_blkaddr[i], i); - otx2_cpt_lf_err_intr_unregister(dev, vf->lf_msixoff[i], base); - } - - vf->err_intr_registered = 0; -} - -static int -otx2_cpt_lf_err_intr_register(const struct rte_cryptodev *dev, - uint16_t msix_off, uintptr_t base) -{ - struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device); - struct rte_intr_handle *handle = pci_dev->intr_handle; - int ret; - - /* Disable error interrupts */ - otx2_write64(~0ull, base + OTX2_CPT_LF_MISC_INT_ENA_W1C); - - /* Register error interrupt handler */ - ret = otx2_register_irq(handle, otx2_cpt_lf_err_intr_handler, - (void *)base, msix_off); - if (ret) - return ret; - - /* Enable error interrupts */ - otx2_write64(~0ull, base + 
OTX2_CPT_LF_MISC_INT_ENA_W1S); - - return 0; -} - -int -otx2_cpt_err_intr_register(const struct rte_cryptodev *dev) -{ - struct otx2_cpt_vf *vf = dev->data->dev_private; - uint32_t i, j, ret; - uintptr_t base; - - for (i = 0; i < vf->nb_queues; i++) { - if (vf->lf_msixoff[i] == MSIX_VECTOR_INVALID) { - CPT_LOG_ERR("Invalid CPT LF MSI-X offset: 0x%x", - vf->lf_msixoff[i]); - return -EINVAL; - } - } - - for (i = 0; i < vf->nb_queues; i++) { - base = OTX2_CPT_LF_BAR2(vf, vf->lf_blkaddr[i], i); - ret = otx2_cpt_lf_err_intr_register(dev, vf->lf_msixoff[i], - base); - if (ret) - goto intr_unregister; - } - - vf->err_intr_registered = 1; - return 0; - -intr_unregister: - /* Unregister the ones already registered */ - for (j = 0; j < i; j++) { - base = OTX2_CPT_LF_BAR2(vf, vf->lf_blkaddr[j], j); - otx2_cpt_lf_err_intr_unregister(dev, vf->lf_msixoff[j], base); - } - - /* - * Failed to register error interrupt. Not returning error as this would - * prevent application from enabling larger number of devs. - * - * This failure is a known issue because otx2_dev_init() initializes - * interrupts based on static values from ATF, and the actual number - * of interrupts needed (which is based on LFs) can be determined only - * after otx2_dev_init() sets up interrupts which includes mbox - * interrupts. - */ - return 0; -} - -int -otx2_cpt_iq_enable(const struct rte_cryptodev *dev, - const struct otx2_cpt_qp *qp, uint8_t grp_mask, uint8_t pri, - uint32_t size_div40) -{ - union otx2_cpt_af_lf_ctl af_lf_ctl; - union otx2_cpt_lf_inprog inprog; - union otx2_cpt_lf_q_base base; - union otx2_cpt_lf_q_size size; - union otx2_cpt_lf_ctl lf_ctl; - int ret; - - /* Set engine group mask and priority */ - - ret = otx2_cpt_af_reg_read(dev, OTX2_CPT_AF_LF_CTL(qp->id), - qp->blkaddr, &af_lf_ctl.u); - if (ret) - return ret; - af_lf_ctl.s.grp = grp_mask; - af_lf_ctl.s.pri = pri ? 
1 : 0; - ret = otx2_cpt_af_reg_write(dev, OTX2_CPT_AF_LF_CTL(qp->id), - qp->blkaddr, af_lf_ctl.u); - if (ret) - return ret; - - /* Set instruction queue base address */ - - base.u = otx2_read64(qp->base + OTX2_CPT_LF_Q_BASE); - base.s.fault = 0; - base.s.stopped = 0; - base.s.addr = qp->iq_dma_addr >> 7; - otx2_write64(base.u, qp->base + OTX2_CPT_LF_Q_BASE); - - /* Set instruction queue size */ - - size.u = otx2_read64(qp->base + OTX2_CPT_LF_Q_SIZE); - size.s.size_div40 = size_div40; - otx2_write64(size.u, qp->base + OTX2_CPT_LF_Q_SIZE); - - /* Enable instruction queue */ - - lf_ctl.u = otx2_read64(qp->base + OTX2_CPT_LF_CTL); - lf_ctl.s.ena = 1; - otx2_write64(lf_ctl.u, qp->base + OTX2_CPT_LF_CTL); - - /* Start instruction execution */ - - inprog.u = otx2_read64(qp->base + OTX2_CPT_LF_INPROG); - inprog.s.eena = 1; - otx2_write64(inprog.u, qp->base + OTX2_CPT_LF_INPROG); - - return 0; -} - -void -otx2_cpt_iq_disable(struct otx2_cpt_qp *qp) -{ - union otx2_cpt_lf_q_grp_ptr grp_ptr; - union otx2_cpt_lf_inprog inprog; - union otx2_cpt_lf_ctl ctl; - int cnt; - - /* Stop instruction execution */ - inprog.u = otx2_read64(qp->base + OTX2_CPT_LF_INPROG); - inprog.s.eena = 0x0; - otx2_write64(inprog.u, qp->base + OTX2_CPT_LF_INPROG); - - /* Disable instructions enqueuing */ - ctl.u = otx2_read64(qp->base + OTX2_CPT_LF_CTL); - ctl.s.ena = 0; - otx2_write64(ctl.u, qp->base + OTX2_CPT_LF_CTL); - - /* Wait for instruction queue to become empty */ - cnt = 0; - do { - inprog.u = otx2_read64(qp->base + OTX2_CPT_LF_INPROG); - if (inprog.s.grb_partial) - cnt = 0; - else - cnt++; - grp_ptr.u = otx2_read64(qp->base + OTX2_CPT_LF_Q_GRP_PTR); - } while ((cnt < 10) && (grp_ptr.s.nq_ptr != grp_ptr.s.dq_ptr)); - - cnt = 0; - do { - inprog.u = otx2_read64(qp->base + OTX2_CPT_LF_INPROG); - if ((inprog.s.inflight == 0) && - (inprog.s.gwb_cnt < 40) && - ((inprog.s.grb_cnt == 0) || (inprog.s.grb_cnt == 40))) - cnt++; - else - cnt = 0; - } while (cnt < 10); -} diff --git 
a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.h b/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.h deleted file mode 100644 index 90a338e05a..0000000000 --- a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.h +++ /dev/null @@ -1,161 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright (C) 2019 Marvell International Ltd. - */ - -#ifndef _OTX2_CRYPTODEV_HW_ACCESS_H_ -#define _OTX2_CRYPTODEV_HW_ACCESS_H_ - -#include - -#include -#include - -#include "cpt_common.h" -#include "cpt_hw_types.h" -#include "cpt_mcode_defines.h" - -#include "otx2_dev.h" -#include "otx2_cryptodev_qp.h" - -/* CPT instruction queue length. - * Use queue size as power of 2 for aiding in pending queue calculations. - */ -#define OTX2_CPT_DEFAULT_CMD_QLEN 8192 - -/* Mask which selects all engine groups */ -#define OTX2_CPT_ENG_GRPS_MASK 0xFF - -/* Register offsets */ - -/* LMT LF registers */ -#define OTX2_LMT_LF_LMTLINE(a) (0x0ull | (uint64_t)(a) << 3) - -/* CPT LF registers */ -#define OTX2_CPT_LF_CTL 0x10ull -#define OTX2_CPT_LF_INPROG 0x40ull -#define OTX2_CPT_LF_MISC_INT 0xb0ull -#define OTX2_CPT_LF_MISC_INT_ENA_W1S 0xd0ull -#define OTX2_CPT_LF_MISC_INT_ENA_W1C 0xe0ull -#define OTX2_CPT_LF_Q_BASE 0xf0ull -#define OTX2_CPT_LF_Q_SIZE 0x100ull -#define OTX2_CPT_LF_Q_GRP_PTR 0x120ull -#define OTX2_CPT_LF_NQ(a) (0x400ull | (uint64_t)(a) << 3) - -#define OTX2_CPT_AF_LF_CTL(a) (0x27000ull | (uint64_t)(a) << 3) -#define OTX2_CPT_AF_LF_CTL2(a) (0x29000ull | (uint64_t)(a) << 3) - -#define OTX2_CPT_LF_BAR2(vf, blk_addr, q_id) \ - ((vf)->otx2_dev.bar2 + \ - ((blk_addr << 20) | ((q_id) << 12))) - -#define OTX2_CPT_QUEUE_HI_PRIO 0x1 - -union otx2_cpt_lf_ctl { - uint64_t u; - struct { - uint64_t ena : 1; - uint64_t fc_ena : 1; - uint64_t fc_up_crossing : 1; - uint64_t reserved_3_3 : 1; - uint64_t fc_hyst_bits : 4; - uint64_t reserved_8_63 : 56; - } s; -}; - -union otx2_cpt_lf_inprog { - uint64_t u; - struct { - uint64_t inflight : 9; - uint64_t reserved_9_15 : 7; - uint64_t eena : 1; - 
uint64_t grp_drp : 1;
-		uint64_t reserved_18_30 : 13;
-		uint64_t grb_partial : 1;
-		uint64_t grb_cnt : 8;
-		uint64_t gwb_cnt : 8;
-		uint64_t reserved_48_63 : 16;
-	} s;
-};
-
-union otx2_cpt_lf_q_base {
-	uint64_t u;
-	struct {
-		uint64_t fault : 1;
-		uint64_t stopped : 1;
-		uint64_t reserved_2_6 : 5;
-		uint64_t addr : 46;
-		uint64_t reserved_53_63 : 11;
-	} s;
-};
-
-union otx2_cpt_lf_q_size {
-	uint64_t u;
-	struct {
-		uint64_t size_div40 : 15;
-		uint64_t reserved_15_63 : 49;
-	} s;
-};
-
-union otx2_cpt_af_lf_ctl {
-	uint64_t u;
-	struct {
-		uint64_t pri : 1;
-		uint64_t reserved_1_8 : 8;
-		uint64_t pf_func_inst : 1;
-		uint64_t cont_err : 1;
-		uint64_t reserved_11_15 : 5;
-		uint64_t nixtx_en : 1;
-		uint64_t reserved_17_47 : 31;
-		uint64_t grp : 8;
-		uint64_t reserved_56_63 : 8;
-	} s;
-};
-
-union otx2_cpt_af_lf_ctl2 {
-	uint64_t u;
-	struct {
-		uint64_t exe_no_swap : 1;
-		uint64_t exe_ldwb : 1;
-		uint64_t reserved_2_31 : 30;
-		uint64_t sso_pf_func : 16;
-		uint64_t nix_pf_func : 16;
-	} s;
-};
-
-union otx2_cpt_lf_q_grp_ptr {
-	uint64_t u;
-	struct {
-		uint64_t dq_ptr : 15;
-		uint64_t reserved_31_15 : 17;
-		uint64_t nq_ptr : 15;
-		uint64_t reserved_47_62 : 16;
-		uint64_t xq_xor : 1;
-	} s;
-};
-
-/*
- * Enumeration cpt_9x_comp_e
- *
- * CPT 9X Completion Enumeration
- * Enumerates the values of CPT_RES_S[COMPCODE].
- */
-enum cpt_9x_comp_e {
-	CPT_9X_COMP_E_NOTDONE = 0x00,
-	CPT_9X_COMP_E_GOOD = 0x01,
-	CPT_9X_COMP_E_FAULT = 0x02,
-	CPT_9X_COMP_E_HWERR = 0x04,
-	CPT_9X_COMP_E_INSTERR = 0x05,
-	CPT_9X_COMP_E_LAST_ENTRY = 0x06
-};
-
-void otx2_cpt_err_intr_unregister(const struct rte_cryptodev *dev);
-
-int otx2_cpt_err_intr_register(const struct rte_cryptodev *dev);
-
-int otx2_cpt_iq_enable(const struct rte_cryptodev *dev,
-		       const struct otx2_cpt_qp *qp, uint8_t grp_mask,
-		       uint8_t pri, uint32_t size_div40);
-
-void otx2_cpt_iq_disable(struct otx2_cpt_qp *qp);
-
-#endif /* _OTX2_CRYPTODEV_HW_ACCESS_H_ */
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_mbox.c b/drivers/crypto/octeontx2/otx2_cryptodev_mbox.c
deleted file mode 100644
index f9e7b0b474..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_mbox.c
+++ /dev/null
@@ -1,285 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-#include
-#include
-
-#include "otx2_cryptodev.h"
-#include "otx2_cryptodev_hw_access.h"
-#include "otx2_cryptodev_mbox.h"
-#include "otx2_dev.h"
-#include "otx2_ethdev.h"
-#include "otx2_sec_idev.h"
-#include "otx2_mbox.h"
-
-#include "cpt_pmd_logs.h"
-
-int
-otx2_cpt_hardware_caps_get(const struct rte_cryptodev *dev,
-			   union cpt_eng_caps *hw_caps)
-{
-	struct otx2_cpt_vf *vf = dev->data->dev_private;
-	struct otx2_dev *otx2_dev = &vf->otx2_dev;
-	struct cpt_caps_rsp_msg *rsp;
-	int ret;
-
-	otx2_mbox_alloc_msg_cpt_caps_get(otx2_dev->mbox);
-
-	ret = otx2_mbox_process_msg(otx2_dev->mbox, (void *)&rsp);
-	if (ret)
-		return -EIO;
-
-	if (rsp->cpt_pf_drv_version != OTX2_CPT_PMD_VERSION) {
-		otx2_err("Incompatible CPT PMD version"
-			 "(Kernel: 0x%04x DPDK: 0x%04x)",
-			 rsp->cpt_pf_drv_version, OTX2_CPT_PMD_VERSION);
-		return -EPIPE;
-	}
-
-	vf->cpt_revision = rsp->cpt_revision;
-	otx2_mbox_memcpy(hw_caps, rsp->eng_caps,
-			 sizeof(union cpt_eng_caps) * CPT_MAX_ENG_TYPES);
-
-	return 0;
-}
-
-int
-otx2_cpt_available_queues_get(const struct
rte_cryptodev *dev, - uint16_t *nb_queues) -{ - struct otx2_cpt_vf *vf = dev->data->dev_private; - struct otx2_dev *otx2_dev = &vf->otx2_dev; - struct free_rsrcs_rsp *rsp; - int ret; - - otx2_mbox_alloc_msg_free_rsrc_cnt(otx2_dev->mbox); - - ret = otx2_mbox_process_msg(otx2_dev->mbox, (void *)&rsp); - if (ret) - return -EIO; - - *nb_queues = rsp->cpt + rsp->cpt1; - return 0; -} - -int -otx2_cpt_queues_attach(const struct rte_cryptodev *dev, uint8_t nb_queues) -{ - struct otx2_cpt_vf *vf = dev->data->dev_private; - struct otx2_mbox *mbox = vf->otx2_dev.mbox; - int blkaddr[OTX2_CPT_MAX_BLKS]; - struct rsrc_attach_req *req; - int blknum = 0; - int i, ret; - - blkaddr[0] = RVU_BLOCK_ADDR_CPT0; - blkaddr[1] = RVU_BLOCK_ADDR_CPT1; - - /* Ask AF to attach required LFs */ - - req = otx2_mbox_alloc_msg_attach_resources(mbox); - - if ((vf->cpt_revision == OTX2_CPT_REVISION_ID_3) && - (vf->otx2_dev.pf_func & 0x1)) - blknum = (blknum + 1) % OTX2_CPT_MAX_BLKS; - - /* 1 LF = 1 queue */ - req->cptlfs = nb_queues; - req->cpt_blkaddr = blkaddr[blknum]; - - ret = otx2_mbox_process(mbox); - if (ret == -ENOSPC) { - if (vf->cpt_revision == OTX2_CPT_REVISION_ID_3) { - blknum = (blknum + 1) % OTX2_CPT_MAX_BLKS; - req->cpt_blkaddr = blkaddr[blknum]; - if (otx2_mbox_process(mbox) < 0) - return -EIO; - } else { - return -EIO; - } - } else if (ret < 0) { - return -EIO; - } - - /* Update number of attached queues */ - vf->nb_queues = nb_queues; - for (i = 0; i < nb_queues; i++) - vf->lf_blkaddr[i] = req->cpt_blkaddr; - - return 0; -} - -int -otx2_cpt_queues_detach(const struct rte_cryptodev *dev) -{ - struct otx2_cpt_vf *vf = dev->data->dev_private; - struct otx2_mbox *mbox = vf->otx2_dev.mbox; - struct rsrc_detach_req *req; - - req = otx2_mbox_alloc_msg_detach_resources(mbox); - req->cptlfs = true; - req->partial = true; - if (otx2_mbox_process(mbox) < 0) - return -EIO; - - /* Queues have been detached */ - vf->nb_queues = 0; - - return 0; -} - -int -otx2_cpt_msix_offsets_get(const struct 
rte_cryptodev *dev) -{ - struct otx2_cpt_vf *vf = dev->data->dev_private; - struct otx2_mbox *mbox = vf->otx2_dev.mbox; - struct msix_offset_rsp *rsp; - uint32_t i, ret; - - /* Get CPT MSI-X vector offsets */ - - otx2_mbox_alloc_msg_msix_offset(mbox); - - ret = otx2_mbox_process_msg(mbox, (void *)&rsp); - if (ret) - return ret; - - for (i = 0; i < vf->nb_queues; i++) - vf->lf_msixoff[i] = (vf->lf_blkaddr[i] == RVU_BLOCK_ADDR_CPT1) ? - rsp->cpt1_lf_msixoff[i] : rsp->cptlf_msixoff[i]; - - return 0; -} - -static int -otx2_cpt_send_mbox_msg(struct otx2_cpt_vf *vf) -{ - struct otx2_mbox *mbox = vf->otx2_dev.mbox; - int ret; - - otx2_mbox_msg_send(mbox, 0); - - ret = otx2_mbox_wait_for_rsp(mbox, 0); - if (ret < 0) { - CPT_LOG_ERR("Could not get mailbox response"); - return ret; - } - - return 0; -} - -int -otx2_cpt_af_reg_read(const struct rte_cryptodev *dev, uint64_t reg, - uint8_t blkaddr, uint64_t *val) -{ - struct otx2_cpt_vf *vf = dev->data->dev_private; - struct otx2_mbox *mbox = vf->otx2_dev.mbox; - struct otx2_mbox_dev *mdev = &mbox->dev[0]; - struct cpt_rd_wr_reg_msg *msg; - int ret, off; - - msg = (struct cpt_rd_wr_reg_msg *) - otx2_mbox_alloc_msg_rsp(mbox, 0, sizeof(*msg), - sizeof(*msg)); - if (msg == NULL) { - CPT_LOG_ERR("Could not allocate mailbox message"); - return -EFAULT; - } - - msg->hdr.id = MBOX_MSG_CPT_RD_WR_REGISTER; - msg->hdr.sig = OTX2_MBOX_REQ_SIG; - msg->hdr.pcifunc = vf->otx2_dev.pf_func; - msg->is_write = 0; - msg->reg_offset = reg; - msg->ret_val = val; - msg->blkaddr = blkaddr; - - ret = otx2_cpt_send_mbox_msg(vf); - if (ret < 0) - return ret; - - off = mbox->rx_start + - RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN); - msg = (struct cpt_rd_wr_reg_msg *) ((uintptr_t)mdev->mbase + off); - - *val = msg->val; - - return 0; -} - -int -otx2_cpt_af_reg_write(const struct rte_cryptodev *dev, uint64_t reg, - uint8_t blkaddr, uint64_t val) -{ - struct otx2_cpt_vf *vf = dev->data->dev_private; - struct otx2_mbox *mbox = vf->otx2_dev.mbox; - 
struct cpt_rd_wr_reg_msg *msg; - - msg = (struct cpt_rd_wr_reg_msg *) - otx2_mbox_alloc_msg_rsp(mbox, 0, sizeof(*msg), - sizeof(*msg)); - if (msg == NULL) { - CPT_LOG_ERR("Could not allocate mailbox message"); - return -EFAULT; - } - - msg->hdr.id = MBOX_MSG_CPT_RD_WR_REGISTER; - msg->hdr.sig = OTX2_MBOX_REQ_SIG; - msg->hdr.pcifunc = vf->otx2_dev.pf_func; - msg->is_write = 1; - msg->reg_offset = reg; - msg->val = val; - msg->blkaddr = blkaddr; - - return otx2_cpt_send_mbox_msg(vf); -} - -int -otx2_cpt_inline_init(const struct rte_cryptodev *dev) -{ - struct otx2_cpt_vf *vf = dev->data->dev_private; - struct otx2_mbox *mbox = vf->otx2_dev.mbox; - struct cpt_rx_inline_lf_cfg_msg *msg; - int ret; - - msg = otx2_mbox_alloc_msg_cpt_rx_inline_lf_cfg(mbox); - msg->sso_pf_func = otx2_sso_pf_func_get(); - - otx2_mbox_msg_send(mbox, 0); - ret = otx2_mbox_process(mbox); - if (ret < 0) - return -EIO; - - return 0; -} - -int -otx2_cpt_qp_ethdev_bind(const struct rte_cryptodev *dev, struct otx2_cpt_qp *qp, - uint16_t port_id) -{ - struct rte_eth_dev *eth_dev = &rte_eth_devices[port_id]; - struct otx2_cpt_vf *vf = dev->data->dev_private; - struct otx2_mbox *mbox = vf->otx2_dev.mbox; - struct cpt_inline_ipsec_cfg_msg *msg; - struct otx2_eth_dev *otx2_eth_dev; - int ret; - - if (!otx2_eth_dev_is_sec_capable(&rte_eth_devices[port_id])) - return -EINVAL; - - otx2_eth_dev = otx2_eth_pmd_priv(eth_dev); - - msg = otx2_mbox_alloc_msg_cpt_inline_ipsec_cfg(mbox); - msg->dir = CPT_INLINE_OUTBOUND; - msg->enable = 1; - msg->slot = qp->id; - - msg->nix_pf_func = otx2_eth_dev->pf_func; - - otx2_mbox_msg_send(mbox, 0); - ret = otx2_mbox_process(mbox); - if (ret < 0) - return -EIO; - - return 0; -} diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_mbox.h b/drivers/crypto/octeontx2/otx2_cryptodev_mbox.h deleted file mode 100644 index 03323e418c..0000000000 --- a/drivers/crypto/octeontx2/otx2_cryptodev_mbox.h +++ /dev/null @@ -1,37 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright 
(C) 2019 Marvell International Ltd. - */ - -#ifndef _OTX2_CRYPTODEV_MBOX_H_ -#define _OTX2_CRYPTODEV_MBOX_H_ - -#include - -#include "otx2_cryptodev_hw_access.h" - -int otx2_cpt_hardware_caps_get(const struct rte_cryptodev *dev, - union cpt_eng_caps *hw_caps); - -int otx2_cpt_available_queues_get(const struct rte_cryptodev *dev, - uint16_t *nb_queues); - -int otx2_cpt_queues_attach(const struct rte_cryptodev *dev, uint8_t nb_queues); - -int otx2_cpt_queues_detach(const struct rte_cryptodev *dev); - -int otx2_cpt_msix_offsets_get(const struct rte_cryptodev *dev); - -__rte_internal -int otx2_cpt_af_reg_read(const struct rte_cryptodev *dev, uint64_t reg, - uint8_t blkaddr, uint64_t *val); - -__rte_internal -int otx2_cpt_af_reg_write(const struct rte_cryptodev *dev, uint64_t reg, - uint8_t blkaddr, uint64_t val); - -int otx2_cpt_qp_ethdev_bind(const struct rte_cryptodev *dev, - struct otx2_cpt_qp *qp, uint16_t port_id); - -int otx2_cpt_inline_init(const struct rte_cryptodev *dev); - -#endif /* _OTX2_CRYPTODEV_MBOX_H_ */ diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c deleted file mode 100644 index 339b82f33e..0000000000 --- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c +++ /dev/null @@ -1,1438 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright (C) 2019 Marvell International Ltd. 
- */ - -#include - -#include -#include -#include -#include - -#include "otx2_cryptodev.h" -#include "otx2_cryptodev_capabilities.h" -#include "otx2_cryptodev_hw_access.h" -#include "otx2_cryptodev_mbox.h" -#include "otx2_cryptodev_ops.h" -#include "otx2_cryptodev_ops_helper.h" -#include "otx2_ipsec_anti_replay.h" -#include "otx2_ipsec_po_ops.h" -#include "otx2_mbox.h" -#include "otx2_sec_idev.h" -#include "otx2_security.h" - -#include "cpt_hw_types.h" -#include "cpt_pmd_logs.h" -#include "cpt_pmd_ops_helper.h" -#include "cpt_ucode.h" -#include "cpt_ucode_asym.h" - -#define METABUF_POOL_CACHE_SIZE 512 - -static uint64_t otx2_fpm_iova[CPT_EC_ID_PMAX]; - -/* Forward declarations */ - -static int -otx2_cpt_queue_pair_release(struct rte_cryptodev *dev, uint16_t qp_id); - -static void -qp_memzone_name_get(char *name, int size, int dev_id, int qp_id) -{ - snprintf(name, size, "otx2_cpt_lf_mem_%u:%u", dev_id, qp_id); -} - -static int -otx2_cpt_metabuf_mempool_create(const struct rte_cryptodev *dev, - struct otx2_cpt_qp *qp, uint8_t qp_id, - unsigned int nb_elements) -{ - char mempool_name[RTE_MEMPOOL_NAMESIZE]; - struct cpt_qp_meta_info *meta_info; - int lcore_cnt = rte_lcore_count(); - int ret, max_mlen, mb_pool_sz; - struct rte_mempool *pool; - int asym_mlen = 0; - int lb_mlen = 0; - int sg_mlen = 0; - - if (dev->feature_flags & RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO) { - - /* Get meta len for scatter gather mode */ - sg_mlen = cpt_pmd_ops_helper_get_mlen_sg_mode(); - - /* Extra 32B saved for future considerations */ - sg_mlen += 4 * sizeof(uint64_t); - - /* Get meta len for linear buffer (direct) mode */ - lb_mlen = cpt_pmd_ops_helper_get_mlen_direct_mode(); - - /* Extra 32B saved for future considerations */ - lb_mlen += 4 * sizeof(uint64_t); - } - - if (dev->feature_flags & RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO) { - - /* Get meta len required for asymmetric operations */ - asym_mlen = cpt_pmd_ops_helper_asym_get_mlen(); - } - - /* - * Check max requirement for meta buffer to 
- * support crypto op of any type (sym/asym). - */ - max_mlen = RTE_MAX(RTE_MAX(lb_mlen, sg_mlen), asym_mlen); - - /* Allocate mempool */ - - snprintf(mempool_name, RTE_MEMPOOL_NAMESIZE, "otx2_cpt_mb_%u:%u", - dev->data->dev_id, qp_id); - - mb_pool_sz = nb_elements; - - /* For poll mode, core that enqueues and core that dequeues can be - * different. For event mode, all cores are allowed to use same crypto - * queue pair. - */ - mb_pool_sz += (RTE_MAX(2, lcore_cnt) * METABUF_POOL_CACHE_SIZE); - - pool = rte_mempool_create_empty(mempool_name, mb_pool_sz, max_mlen, - METABUF_POOL_CACHE_SIZE, 0, - rte_socket_id(), 0); - - if (pool == NULL) { - CPT_LOG_ERR("Could not create mempool for metabuf"); - return rte_errno; - } - - ret = rte_mempool_set_ops_byname(pool, RTE_MBUF_DEFAULT_MEMPOOL_OPS, - NULL); - if (ret) { - CPT_LOG_ERR("Could not set mempool ops"); - goto mempool_free; - } - - ret = rte_mempool_populate_default(pool); - if (ret <= 0) { - CPT_LOG_ERR("Could not populate metabuf pool"); - goto mempool_free; - } - - meta_info = &qp->meta_info; - - meta_info->pool = pool; - meta_info->lb_mlen = lb_mlen; - meta_info->sg_mlen = sg_mlen; - - return 0; - -mempool_free: - rte_mempool_free(pool); - return ret; -} - -static void -otx2_cpt_metabuf_mempool_destroy(struct otx2_cpt_qp *qp) -{ - struct cpt_qp_meta_info *meta_info = &qp->meta_info; - - rte_mempool_free(meta_info->pool); - - meta_info->pool = NULL; - meta_info->lb_mlen = 0; - meta_info->sg_mlen = 0; -} - -static int -otx2_cpt_qp_inline_cfg(const struct rte_cryptodev *dev, struct otx2_cpt_qp *qp) -{ - static rte_atomic16_t port_offset = RTE_ATOMIC16_INIT(-1); - uint16_t port_id, nb_ethport = rte_eth_dev_count_avail(); - int i, ret; - - for (i = 0; i < nb_ethport; i++) { - port_id = rte_atomic16_add_return(&port_offset, 1) % nb_ethport; - if (otx2_eth_dev_is_sec_capable(&rte_eth_devices[port_id])) - break; - } - - if (i >= nb_ethport) - return 0; - - ret = otx2_cpt_qp_ethdev_bind(dev, qp, port_id); - if (ret) - 
return ret; - - /* Publish inline Tx QP to eth dev security */ - ret = otx2_sec_idev_tx_cpt_qp_add(port_id, qp); - if (ret) - return ret; - - return 0; -} - -static struct otx2_cpt_qp * -otx2_cpt_qp_create(const struct rte_cryptodev *dev, uint16_t qp_id, - uint8_t group) -{ - struct otx2_cpt_vf *vf = dev->data->dev_private; - uint64_t pg_sz = sysconf(_SC_PAGESIZE); - const struct rte_memzone *lf_mem; - uint32_t len, iq_len, size_div40; - char name[RTE_MEMZONE_NAMESIZE]; - uint64_t used_len, iova; - struct otx2_cpt_qp *qp; - uint64_t lmtline; - uint8_t *va; - int ret; - - /* Allocate queue pair */ - qp = rte_zmalloc_socket("OCTEON TX2 Crypto PMD Queue Pair", sizeof(*qp), - OTX2_ALIGN, 0); - if (qp == NULL) { - CPT_LOG_ERR("Could not allocate queue pair"); - return NULL; - } - - /* - * Pending queue updates make assumption that queue size is a power - * of 2. - */ - RTE_BUILD_BUG_ON(!RTE_IS_POWER_OF_2(OTX2_CPT_DEFAULT_CMD_QLEN)); - - iq_len = OTX2_CPT_DEFAULT_CMD_QLEN; - - /* - * Queue size must be a multiple of 40 and effective queue size to - * software is (size_div40 - 1) * 40 - */ - size_div40 = (iq_len + 40 - 1) / 40 + 1; - - /* For pending queue */ - len = iq_len * RTE_ALIGN(sizeof(qp->pend_q.rid_queue[0]), 8); - - /* Space for instruction group memory */ - len += size_div40 * 16; - - /* So that instruction queues start as pg size aligned */ - len = RTE_ALIGN(len, pg_sz); - - /* For instruction queues */ - len += OTX2_CPT_DEFAULT_CMD_QLEN * sizeof(union cpt_inst_s); - - /* Wastage after instruction queues */ - len = RTE_ALIGN(len, pg_sz); - - qp_memzone_name_get(name, RTE_MEMZONE_NAMESIZE, dev->data->dev_id, - qp_id); - - lf_mem = rte_memzone_reserve_aligned(name, len, vf->otx2_dev.node, - RTE_MEMZONE_SIZE_HINT_ONLY | RTE_MEMZONE_256MB, - RTE_CACHE_LINE_SIZE); - if (lf_mem == NULL) { - CPT_LOG_ERR("Could not allocate reserved memzone"); - goto qp_free; - } - - va = lf_mem->addr; - iova = lf_mem->iova; - - memset(va, 0, len); - - ret = 
otx2_cpt_metabuf_mempool_create(dev, qp, qp_id, iq_len); - if (ret) { - CPT_LOG_ERR("Could not create mempool for metabuf"); - goto lf_mem_free; - } - - /* Initialize pending queue */ - qp->pend_q.rid_queue = (void **)va; - qp->pend_q.tail = 0; - qp->pend_q.head = 0; - - used_len = iq_len * RTE_ALIGN(sizeof(qp->pend_q.rid_queue[0]), 8); - used_len += size_div40 * 16; - used_len = RTE_ALIGN(used_len, pg_sz); - iova += used_len; - - qp->iq_dma_addr = iova; - qp->id = qp_id; - qp->blkaddr = vf->lf_blkaddr[qp_id]; - qp->base = OTX2_CPT_LF_BAR2(vf, qp->blkaddr, qp_id); - - lmtline = vf->otx2_dev.bar2 + - (RVU_BLOCK_ADDR_LMT << 20 | qp_id << 12) + - OTX2_LMT_LF_LMTLINE(0); - - qp->lmtline = (void *)lmtline; - - qp->lf_nq_reg = qp->base + OTX2_CPT_LF_NQ(0); - - ret = otx2_sec_idev_tx_cpt_qp_remove(qp); - if (ret && (ret != -ENOENT)) { - CPT_LOG_ERR("Could not delete inline configuration"); - goto mempool_destroy; - } - - otx2_cpt_iq_disable(qp); - - ret = otx2_cpt_qp_inline_cfg(dev, qp); - if (ret) { - CPT_LOG_ERR("Could not configure queue for inline IPsec"); - goto mempool_destroy; - } - - ret = otx2_cpt_iq_enable(dev, qp, group, OTX2_CPT_QUEUE_HI_PRIO, - size_div40); - if (ret) { - CPT_LOG_ERR("Could not enable instruction queue"); - goto mempool_destroy; - } - - return qp; - -mempool_destroy: - otx2_cpt_metabuf_mempool_destroy(qp); -lf_mem_free: - rte_memzone_free(lf_mem); -qp_free: - rte_free(qp); - return NULL; -} - -static int -otx2_cpt_qp_destroy(const struct rte_cryptodev *dev, struct otx2_cpt_qp *qp) -{ - const struct rte_memzone *lf_mem; - char name[RTE_MEMZONE_NAMESIZE]; - int ret; - - ret = otx2_sec_idev_tx_cpt_qp_remove(qp); - if (ret && (ret != -ENOENT)) { - CPT_LOG_ERR("Could not delete inline configuration"); - return ret; - } - - otx2_cpt_iq_disable(qp); - - otx2_cpt_metabuf_mempool_destroy(qp); - - qp_memzone_name_get(name, RTE_MEMZONE_NAMESIZE, dev->data->dev_id, - qp->id); - - lf_mem = rte_memzone_lookup(name); - - ret = rte_memzone_free(lf_mem); - if 
(ret) - return ret; - - rte_free(qp); - - return 0; -} - -static int -sym_xform_verify(struct rte_crypto_sym_xform *xform) -{ - if (xform->next) { - if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH && - xform->next->type == RTE_CRYPTO_SYM_XFORM_CIPHER && - xform->next->cipher.op == RTE_CRYPTO_CIPHER_OP_ENCRYPT && - (xform->auth.algo != RTE_CRYPTO_AUTH_SHA1_HMAC || - xform->next->cipher.algo != RTE_CRYPTO_CIPHER_AES_CBC)) - return -ENOTSUP; - - if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER && - xform->cipher.op == RTE_CRYPTO_CIPHER_OP_DECRYPT && - xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH && - (xform->cipher.algo != RTE_CRYPTO_CIPHER_AES_CBC || - xform->next->auth.algo != RTE_CRYPTO_AUTH_SHA1_HMAC)) - return -ENOTSUP; - - if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER && - xform->cipher.algo == RTE_CRYPTO_CIPHER_3DES_CBC && - xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH && - xform->next->auth.algo == RTE_CRYPTO_AUTH_SHA1) - return -ENOTSUP; - - if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH && - xform->auth.algo == RTE_CRYPTO_AUTH_SHA1 && - xform->next->type == RTE_CRYPTO_SYM_XFORM_CIPHER && - xform->next->cipher.algo == RTE_CRYPTO_CIPHER_3DES_CBC) - return -ENOTSUP; - - } else { - if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH && - xform->auth.algo == RTE_CRYPTO_AUTH_NULL && - xform->auth.op == RTE_CRYPTO_AUTH_OP_VERIFY) - return -ENOTSUP; - } - return 0; -} - -static int -sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform, - struct rte_cryptodev_sym_session *sess, - struct rte_mempool *pool) -{ - struct rte_crypto_sym_xform *temp_xform = xform; - struct cpt_sess_misc *misc; - vq_cmd_word3_t vq_cmd_w3; - void *priv; - int ret; - - ret = sym_xform_verify(xform); - if (unlikely(ret)) - return ret; - - if (unlikely(rte_mempool_get(pool, &priv))) { - CPT_LOG_ERR("Could not allocate session private data"); - return -ENOMEM; - } - - memset(priv, 0, sizeof(struct cpt_sess_misc) + - offsetof(struct cpt_ctx, mc_ctx)); - - misc = priv; - - for ( ; xform 
!= NULL; xform = xform->next) { - switch (xform->type) { - case RTE_CRYPTO_SYM_XFORM_AEAD: - ret = fill_sess_aead(xform, misc); - break; - case RTE_CRYPTO_SYM_XFORM_CIPHER: - ret = fill_sess_cipher(xform, misc); - break; - case RTE_CRYPTO_SYM_XFORM_AUTH: - if (xform->auth.algo == RTE_CRYPTO_AUTH_AES_GMAC) - ret = fill_sess_gmac(xform, misc); - else - ret = fill_sess_auth(xform, misc); - break; - default: - ret = -1; - } - - if (ret) - goto priv_put; - } - - if ((GET_SESS_FC_TYPE(misc) == HASH_HMAC) && - cpt_mac_len_verify(&temp_xform->auth)) { - CPT_LOG_ERR("MAC length is not supported"); - struct cpt_ctx *ctx = SESS_PRIV(misc); - if (ctx->auth_key != NULL) { - rte_free(ctx->auth_key); - ctx->auth_key = NULL; - } - ret = -ENOTSUP; - goto priv_put; - } - - set_sym_session_private_data(sess, driver_id, misc); - - misc->ctx_dma_addr = rte_mempool_virt2iova(misc) + - sizeof(struct cpt_sess_misc); - - vq_cmd_w3.u64 = 0; - vq_cmd_w3.s.cptr = misc->ctx_dma_addr + offsetof(struct cpt_ctx, - mc_ctx); - - /* - * IE engines support IPsec operations - * SE engines support IPsec operations, Chacha-Poly and - * Air-Crypto operations - */ - if (misc->zsk_flag || misc->chacha_poly) - vq_cmd_w3.s.grp = OTX2_CPT_EGRP_SE; - else - vq_cmd_w3.s.grp = OTX2_CPT_EGRP_SE_IE; - - misc->cpt_inst_w7 = vq_cmd_w3.u64; - - return 0; - -priv_put: - rte_mempool_put(pool, priv); - - return -ENOTSUP; -} - -static __rte_always_inline int32_t __rte_hot -otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp, - struct cpt_request_info *req, - void *lmtline, - struct rte_crypto_op *op, - uint64_t cpt_inst_w7) -{ - union rte_event_crypto_metadata *m_data; - union cpt_inst_s inst; - uint64_t lmt_status; - - if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) { - m_data = rte_cryptodev_sym_session_get_user_data( - op->sym->session); - if (m_data == NULL) { - rte_pktmbuf_free(op->sym->m_src); - rte_crypto_op_free(op); - rte_errno = EINVAL; - return -EINVAL; - } - } else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS 
&& - op->private_data_offset) { - m_data = (union rte_event_crypto_metadata *) - ((uint8_t *)op + - op->private_data_offset); - } else { - return -EINVAL; - } - - inst.u[0] = 0; - inst.s9x.res_addr = req->comp_baddr; - inst.u[2] = 0; - inst.u[3] = 0; - - inst.s9x.ei0 = req->ist.ei0; - inst.s9x.ei1 = req->ist.ei1; - inst.s9x.ei2 = req->ist.ei2; - inst.s9x.ei3 = cpt_inst_w7; - - inst.u[2] = (((RTE_EVENT_TYPE_CRYPTODEV << 28) | - m_data->response_info.flow_id) | - ((uint64_t)m_data->response_info.sched_type << 32) | - ((uint64_t)m_data->response_info.queue_id << 34)); - inst.u[3] = 1 | (((uint64_t)req >> 3) << 3); - req->qp = qp; - - do { - /* Copy CPT command to LMTLINE */ - memcpy(lmtline, &inst, sizeof(inst)); - - /* - * Make sure compiler does not reorder memcpy and ldeor. - * LMTST transactions are always flushed from the write - * buffer immediately, a DMB is not required to push out - * LMTSTs. - */ - rte_io_wmb(); - lmt_status = otx2_lmt_submit(qp->lf_nq_reg); - } while (lmt_status == 0); - - return 0; -} - -static __rte_always_inline int32_t __rte_hot -otx2_cpt_enqueue_req(const struct otx2_cpt_qp *qp, - struct pending_queue *pend_q, - struct cpt_request_info *req, - struct rte_crypto_op *op, - uint64_t cpt_inst_w7, - unsigned int burst_index) -{ - void *lmtline = qp->lmtline; - union cpt_inst_s inst; - uint64_t lmt_status; - - if (qp->ca_enable) - return otx2_ca_enqueue_req(qp, req, lmtline, op, cpt_inst_w7); - - inst.u[0] = 0; - inst.s9x.res_addr = req->comp_baddr; - inst.u[2] = 0; - inst.u[3] = 0; - - inst.s9x.ei0 = req->ist.ei0; - inst.s9x.ei1 = req->ist.ei1; - inst.s9x.ei2 = req->ist.ei2; - inst.s9x.ei3 = cpt_inst_w7; - - req->time_out = rte_get_timer_cycles() + - DEFAULT_COMMAND_TIMEOUT * rte_get_timer_hz(); - - do { - /* Copy CPT command to LMTLINE */ - memcpy(lmtline, &inst, sizeof(inst)); - - /* - * Make sure compiler does not reorder memcpy and ldeor. 
- * LMTST transactions are always flushed from the write - * buffer immediately, a DMB is not required to push out - * LMTSTs. - */ - rte_io_wmb(); - lmt_status = otx2_lmt_submit(qp->lf_nq_reg); - } while (lmt_status == 0); - - pending_queue_push(pend_q, req, burst_index, OTX2_CPT_DEFAULT_CMD_QLEN); - - return 0; -} - -static __rte_always_inline int32_t __rte_hot -otx2_cpt_enqueue_asym(struct otx2_cpt_qp *qp, - struct rte_crypto_op *op, - struct pending_queue *pend_q, - unsigned int burst_index) -{ - struct cpt_qp_meta_info *minfo = &qp->meta_info; - struct rte_crypto_asym_op *asym_op = op->asym; - struct asym_op_params params = {0}; - struct cpt_asym_sess_misc *sess; - uintptr_t *cop; - void *mdata; - int ret; - - if (unlikely(rte_mempool_get(minfo->pool, &mdata) < 0)) { - CPT_LOG_ERR("Could not allocate meta buffer for request"); - return -ENOMEM; - } - - sess = get_asym_session_private_data(asym_op->session, - otx2_cryptodev_driver_id); - - /* Store IO address of the mdata to meta_buf */ - params.meta_buf = rte_mempool_virt2iova(mdata); - - cop = mdata; - cop[0] = (uintptr_t)mdata; - cop[1] = (uintptr_t)op; - cop[2] = cop[3] = 0ULL; - - params.req = RTE_PTR_ADD(cop, 4 * sizeof(uintptr_t)); - params.req->op = cop; - - /* Adjust meta_buf to point to end of cpt_request_info structure */ - params.meta_buf += (4 * sizeof(uintptr_t)) + - sizeof(struct cpt_request_info); - switch (sess->xfrm_type) { - case RTE_CRYPTO_ASYM_XFORM_MODEX: - ret = cpt_modex_prep(&params, &sess->mod_ctx); - if (unlikely(ret)) - goto req_fail; - break; - case RTE_CRYPTO_ASYM_XFORM_RSA: - ret = cpt_enqueue_rsa_op(op, &params, sess); - if (unlikely(ret)) - goto req_fail; - break; - case RTE_CRYPTO_ASYM_XFORM_ECDSA: - ret = cpt_enqueue_ecdsa_op(op, &params, sess, otx2_fpm_iova); - if (unlikely(ret)) - goto req_fail; - break; - case RTE_CRYPTO_ASYM_XFORM_ECPM: - ret = cpt_ecpm_prep(&asym_op->ecpm, &params, - sess->ec_ctx.curveid); - if (unlikely(ret)) - goto req_fail; - break; - default: - op->status = 
RTE_CRYPTO_OP_STATUS_INVALID_ARGS; - ret = -EINVAL; - goto req_fail; - } - - ret = otx2_cpt_enqueue_req(qp, pend_q, params.req, op, - sess->cpt_inst_w7, burst_index); - if (unlikely(ret)) { - CPT_LOG_DP_ERR("Could not enqueue crypto req"); - goto req_fail; - } - - return 0; - -req_fail: - free_op_meta(mdata, minfo->pool); - - return ret; -} - -static __rte_always_inline int __rte_hot -otx2_cpt_enqueue_sym(struct otx2_cpt_qp *qp, struct rte_crypto_op *op, - struct pending_queue *pend_q, unsigned int burst_index) -{ - struct rte_crypto_sym_op *sym_op = op->sym; - struct cpt_request_info *req; - struct cpt_sess_misc *sess; - uint64_t cpt_op; - void *mdata; - int ret; - - sess = get_sym_session_private_data(sym_op->session, - otx2_cryptodev_driver_id); - - cpt_op = sess->cpt_op; - - if (cpt_op & CPT_OP_CIPHER_MASK) - ret = fill_fc_params(op, sess, &qp->meta_info, &mdata, - (void **)&req); - else - ret = fill_digest_params(op, sess, &qp->meta_info, &mdata, - (void **)&req); - - if (unlikely(ret)) { - CPT_LOG_DP_ERR("Crypto req : op %p, cpt_op 0x%x ret 0x%x", - op, (unsigned int)cpt_op, ret); - return ret; - } - - ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess->cpt_inst_w7, - burst_index); - if (unlikely(ret)) { - /* Free buffer allocated by fill params routines */ - free_op_meta(mdata, qp->meta_info.pool); - } - - return ret; -} - -static __rte_always_inline int __rte_hot -otx2_cpt_enqueue_sec(struct otx2_cpt_qp *qp, struct rte_crypto_op *op, - struct pending_queue *pend_q, - const unsigned int burst_index) -{ - uint32_t winsz, esn_low = 0, esn_hi = 0, seql = 0, seqh = 0; - struct rte_mbuf *m_src = op->sym->m_src; - struct otx2_sec_session_ipsec_lp *sess; - struct otx2_ipsec_po_sa_ctl *ctl_wrd; - struct otx2_ipsec_po_in_sa *sa; - struct otx2_sec_session *priv; - struct cpt_request_info *req; - uint64_t seq_in_sa, seq = 0; - uint8_t esn; - int ret; - - priv = get_sec_session_private_data(op->sym->sec_session); - sess = &priv->ipsec.lp; - sa = &sess->in_sa; - - 
ctl_wrd = &sa->ctl; - esn = ctl_wrd->esn_en; - winsz = sa->replay_win_sz; - - if (ctl_wrd->direction == OTX2_IPSEC_PO_SA_DIRECTION_OUTBOUND) - ret = process_outb_sa(op, sess, &qp->meta_info, (void **)&req); - else { - if (winsz) { - esn_low = rte_be_to_cpu_32(sa->esn_low); - esn_hi = rte_be_to_cpu_32(sa->esn_hi); - seql = *rte_pktmbuf_mtod_offset(m_src, uint32_t *, - sizeof(struct rte_ipv4_hdr) + 4); - seql = rte_be_to_cpu_32(seql); - - if (!esn) - seq = (uint64_t)seql; - else { - seqh = anti_replay_get_seqh(winsz, seql, esn_hi, - esn_low); - seq = ((uint64_t)seqh << 32) | seql; - } - - if (unlikely(seq == 0)) - return IPSEC_ANTI_REPLAY_FAILED; - - ret = anti_replay_check(sa->replay, seq, winsz); - if (unlikely(ret)) { - otx2_err("Anti replay check failed"); - return IPSEC_ANTI_REPLAY_FAILED; - } - - if (esn) { - seq_in_sa = ((uint64_t)esn_hi << 32) | esn_low; - if (seq > seq_in_sa) { - sa->esn_low = rte_cpu_to_be_32(seql); - sa->esn_hi = rte_cpu_to_be_32(seqh); - } - } - } - - ret = process_inb_sa(op, sess, &qp->meta_info, (void **)&req); - } - - if (unlikely(ret)) { - otx2_err("Crypto req : op %p, ret 0x%x", op, ret); - return ret; - } - - ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess->cpt_inst_w7, - burst_index); - - return ret; -} - -static __rte_always_inline int __rte_hot -otx2_cpt_enqueue_sym_sessless(struct otx2_cpt_qp *qp, struct rte_crypto_op *op, - struct pending_queue *pend_q, - unsigned int burst_index) -{ - const int driver_id = otx2_cryptodev_driver_id; - struct rte_crypto_sym_op *sym_op = op->sym; - struct rte_cryptodev_sym_session *sess; - int ret; - - /* Create temporary session */ - sess = rte_cryptodev_sym_session_create(qp->sess_mp); - if (sess == NULL) - return -ENOMEM; - - ret = sym_session_configure(driver_id, sym_op->xform, sess, - qp->sess_mp_priv); - if (ret) - goto sess_put; - - sym_op->session = sess; - - ret = otx2_cpt_enqueue_sym(qp, op, pend_q, burst_index); - - if (unlikely(ret)) - goto priv_put; - - return 0; - -priv_put: - 
sym_session_clear(driver_id, sess); -sess_put: - rte_mempool_put(qp->sess_mp, sess); - return ret; -} - -static uint16_t -otx2_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops) -{ - uint16_t nb_allowed, count = 0; - struct otx2_cpt_qp *qp = qptr; - struct pending_queue *pend_q; - struct rte_crypto_op *op; - int ret; - - pend_q = &qp->pend_q; - - nb_allowed = pending_queue_free_slots(pend_q, - OTX2_CPT_DEFAULT_CMD_QLEN, 0); - nb_ops = RTE_MIN(nb_ops, nb_allowed); - - for (count = 0; count < nb_ops; count++) { - op = ops[count]; - if (op->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) { - if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) - ret = otx2_cpt_enqueue_sec(qp, op, pend_q, - count); - else if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) - ret = otx2_cpt_enqueue_sym(qp, op, pend_q, - count); - else - ret = otx2_cpt_enqueue_sym_sessless(qp, op, - pend_q, count); - } else if (op->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC) { - if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) - ret = otx2_cpt_enqueue_asym(qp, op, pend_q, - count); - else - break; - } else - break; - - if (unlikely(ret)) - break; - } - - if (unlikely(!qp->ca_enable)) - pending_queue_commit(pend_q, count, OTX2_CPT_DEFAULT_CMD_QLEN); - - return count; -} - -static __rte_always_inline void -otx2_cpt_asym_rsa_op(struct rte_crypto_op *cop, struct cpt_request_info *req, - struct rte_crypto_rsa_xform *rsa_ctx) -{ - struct rte_crypto_rsa_op_param *rsa = &cop->asym->rsa; - - switch (rsa->op_type) { - case RTE_CRYPTO_ASYM_OP_ENCRYPT: - rsa->cipher.length = rsa_ctx->n.length; - memcpy(rsa->cipher.data, req->rptr, rsa->cipher.length); - break; - case RTE_CRYPTO_ASYM_OP_DECRYPT: - if (rsa->pad == RTE_CRYPTO_RSA_PADDING_NONE) { - rsa->message.length = rsa_ctx->n.length; - memcpy(rsa->message.data, req->rptr, - rsa->message.length); - } else { - /* Get length of decrypted output */ - rsa->message.length = rte_cpu_to_be_16 - (*((uint16_t *)req->rptr)); - /* - * Offset output data pointer by length 
field - * (2 bytes) and copy decrypted data. - */ - memcpy(rsa->message.data, req->rptr + 2, - rsa->message.length); - } - break; - case RTE_CRYPTO_ASYM_OP_SIGN: - rsa->sign.length = rsa_ctx->n.length; - memcpy(rsa->sign.data, req->rptr, rsa->sign.length); - break; - case RTE_CRYPTO_ASYM_OP_VERIFY: - if (rsa->pad == RTE_CRYPTO_RSA_PADDING_NONE) { - rsa->sign.length = rsa_ctx->n.length; - memcpy(rsa->sign.data, req->rptr, rsa->sign.length); - } else { - /* Get length of signed output */ - rsa->sign.length = rte_cpu_to_be_16 - (*((uint16_t *)req->rptr)); - /* - * Offset output data pointer by length field - * (2 bytes) and copy signed data. - */ - memcpy(rsa->sign.data, req->rptr + 2, - rsa->sign.length); - } - if (memcmp(rsa->sign.data, rsa->message.data, - rsa->message.length)) { - CPT_LOG_DP_ERR("RSA verification failed"); - cop->status = RTE_CRYPTO_OP_STATUS_ERROR; - } - break; - default: - CPT_LOG_DP_DEBUG("Invalid RSA operation type"); - cop->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS; - break; - } -} - -static __rte_always_inline void -otx2_cpt_asym_dequeue_ecdsa_op(struct rte_crypto_ecdsa_op_param *ecdsa, - struct cpt_request_info *req, - struct cpt_asym_ec_ctx *ec) -{ - int prime_len = ec_grp[ec->curveid].prime.length; - - if (ecdsa->op_type == RTE_CRYPTO_ASYM_OP_VERIFY) - return; - - /* Separate out sign r and s components */ - memcpy(ecdsa->r.data, req->rptr, prime_len); - memcpy(ecdsa->s.data, req->rptr + RTE_ALIGN_CEIL(prime_len, 8), - prime_len); - ecdsa->r.length = prime_len; - ecdsa->s.length = prime_len; -} - -static __rte_always_inline void -otx2_cpt_asym_dequeue_ecpm_op(struct rte_crypto_ecpm_op_param *ecpm, - struct cpt_request_info *req, - struct cpt_asym_ec_ctx *ec) -{ - int prime_len = ec_grp[ec->curveid].prime.length; - - memcpy(ecpm->r.x.data, req->rptr, prime_len); - memcpy(ecpm->r.y.data, req->rptr + RTE_ALIGN_CEIL(prime_len, 8), - prime_len); - ecpm->r.x.length = prime_len; - ecpm->r.y.length = prime_len; -} - -static void 
-otx2_cpt_asym_post_process(struct rte_crypto_op *cop, - struct cpt_request_info *req) -{ - struct rte_crypto_asym_op *op = cop->asym; - struct cpt_asym_sess_misc *sess; - - sess = get_asym_session_private_data(op->session, - otx2_cryptodev_driver_id); - - switch (sess->xfrm_type) { - case RTE_CRYPTO_ASYM_XFORM_RSA: - otx2_cpt_asym_rsa_op(cop, req, &sess->rsa_ctx); - break; - case RTE_CRYPTO_ASYM_XFORM_MODEX: - op->modex.result.length = sess->mod_ctx.modulus.length; - memcpy(op->modex.result.data, req->rptr, - op->modex.result.length); - break; - case RTE_CRYPTO_ASYM_XFORM_ECDSA: - otx2_cpt_asym_dequeue_ecdsa_op(&op->ecdsa, req, &sess->ec_ctx); - break; - case RTE_CRYPTO_ASYM_XFORM_ECPM: - otx2_cpt_asym_dequeue_ecpm_op(&op->ecpm, req, &sess->ec_ctx); - break; - default: - CPT_LOG_DP_DEBUG("Invalid crypto xform type"); - cop->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS; - break; - } -} - -static void -otx2_cpt_sec_post_process(struct rte_crypto_op *cop, uintptr_t *rsp) -{ - struct cpt_request_info *req = (struct cpt_request_info *)rsp[2]; - vq_cmd_word0_t *word0 = (vq_cmd_word0_t *)&req->ist.ei0; - struct rte_crypto_sym_op *sym_op = cop->sym; - struct rte_mbuf *m = sym_op->m_src; - struct rte_ipv6_hdr *ip6; - struct rte_ipv4_hdr *ip; - uint16_t m_len = 0; - int mdata_len; - char *data; - - mdata_len = (int)rsp[3]; - rte_pktmbuf_trim(m, mdata_len); - - if (word0->s.opcode.major == OTX2_IPSEC_PO_PROCESS_IPSEC_INB) { - data = rte_pktmbuf_mtod(m, char *); - ip = (struct rte_ipv4_hdr *)(data + - OTX2_IPSEC_PO_INB_RPTR_HDR); - - if ((ip->version_ihl >> 4) == 4) { - m_len = rte_be_to_cpu_16(ip->total_length); - } else { - ip6 = (struct rte_ipv6_hdr *)(data + - OTX2_IPSEC_PO_INB_RPTR_HDR); - m_len = rte_be_to_cpu_16(ip6->payload_len) + - sizeof(struct rte_ipv6_hdr); - } - - m->data_len = m_len; - m->pkt_len = m_len; - m->data_off += OTX2_IPSEC_PO_INB_RPTR_HDR; - } -} - -static inline void -otx2_cpt_dequeue_post_process(struct otx2_cpt_qp *qp, struct rte_crypto_op *cop, - 
uintptr_t *rsp, uint8_t cc) -{ - unsigned int sz; - - if (cop->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) { - if (cop->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) { - if (likely(cc == OTX2_IPSEC_PO_CC_SUCCESS)) { - otx2_cpt_sec_post_process(cop, rsp); - cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS; - } else - cop->status = RTE_CRYPTO_OP_STATUS_ERROR; - - return; - } - - if (likely(cc == NO_ERR)) { - /* Verify authentication data if required */ - if (unlikely(rsp[2])) - compl_auth_verify(cop, (uint8_t *)rsp[2], - rsp[3]); - else - cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS; - } else { - if (cc == ERR_GC_ICV_MISCOMPARE) - cop->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED; - else - cop->status = RTE_CRYPTO_OP_STATUS_ERROR; - } - - if (unlikely(cop->sess_type == RTE_CRYPTO_OP_SESSIONLESS)) { - sym_session_clear(otx2_cryptodev_driver_id, - cop->sym->session); - sz = rte_cryptodev_sym_get_existing_header_session_size( - cop->sym->session); - memset(cop->sym->session, 0, sz); - rte_mempool_put(qp->sess_mp, cop->sym->session); - cop->sym->session = NULL; - } - } - - if (cop->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC) { - if (likely(cc == NO_ERR)) { - cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS; - /* - * Pass cpt_req_info stored in metabuf during - * enqueue. 
- */ - rsp = RTE_PTR_ADD(rsp, 4 * sizeof(uintptr_t)); - otx2_cpt_asym_post_process(cop, - (struct cpt_request_info *)rsp); - } else - cop->status = RTE_CRYPTO_OP_STATUS_ERROR; - } -} - -static uint16_t -otx2_cpt_dequeue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops) -{ - int i, nb_pending, nb_completed; - struct otx2_cpt_qp *qp = qptr; - struct pending_queue *pend_q; - struct cpt_request_info *req; - struct rte_crypto_op *cop; - uint8_t cc[nb_ops]; - uintptr_t *rsp; - void *metabuf; - - pend_q = &qp->pend_q; - - nb_pending = pending_queue_level(pend_q, OTX2_CPT_DEFAULT_CMD_QLEN); - - /* Ensure pcount isn't read before data lands */ - rte_atomic_thread_fence(__ATOMIC_ACQUIRE); - - nb_ops = RTE_MIN(nb_ops, nb_pending); - - for (i = 0; i < nb_ops; i++) { - pending_queue_peek(pend_q, (void **)&req, - OTX2_CPT_DEFAULT_CMD_QLEN, 0); - - cc[i] = otx2_cpt_compcode_get(req); - - if (unlikely(cc[i] == ERR_REQ_PENDING)) - break; - - ops[i] = req->op; - - pending_queue_pop(pend_q, OTX2_CPT_DEFAULT_CMD_QLEN); - } - - nb_completed = i; - - for (i = 0; i < nb_completed; i++) { - rsp = (void *)ops[i]; - - metabuf = (void *)rsp[0]; - cop = (void *)rsp[1]; - - ops[i] = cop; - - otx2_cpt_dequeue_post_process(qp, cop, rsp, cc[i]); - - free_op_meta(metabuf, qp->meta_info.pool); - } - - return nb_completed; -} - -void -otx2_cpt_set_enqdeq_fns(struct rte_cryptodev *dev) -{ - dev->enqueue_burst = otx2_cpt_enqueue_burst; - dev->dequeue_burst = otx2_cpt_dequeue_burst; - - rte_mb(); -} - -/* PMD ops */ - -static int -otx2_cpt_dev_config(struct rte_cryptodev *dev, - struct rte_cryptodev_config *conf) -{ - struct otx2_cpt_vf *vf = dev->data->dev_private; - int ret; - - if (conf->nb_queue_pairs > vf->max_queues) { - CPT_LOG_ERR("Invalid number of queue pairs requested"); - return -EINVAL; - } - - dev->feature_flags = otx2_cpt_default_ff_get() & ~conf->ff_disable; - - if (dev->feature_flags & RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO) { - /* Initialize shared FPM table */ - ret = 
cpt_fpm_init(otx2_fpm_iova); - if (ret) - return ret; - } - - /* Unregister error interrupts */ - if (vf->err_intr_registered) - otx2_cpt_err_intr_unregister(dev); - - /* Detach queues */ - if (vf->nb_queues) { - ret = otx2_cpt_queues_detach(dev); - if (ret) { - CPT_LOG_ERR("Could not detach CPT queues"); - return ret; - } - } - - /* Attach queues */ - ret = otx2_cpt_queues_attach(dev, conf->nb_queue_pairs); - if (ret) { - CPT_LOG_ERR("Could not attach CPT queues"); - return -ENODEV; - } - - ret = otx2_cpt_msix_offsets_get(dev); - if (ret) { - CPT_LOG_ERR("Could not get MSI-X offsets"); - goto queues_detach; - } - - /* Register error interrupts */ - ret = otx2_cpt_err_intr_register(dev); - if (ret) { - CPT_LOG_ERR("Could not register error interrupts"); - goto queues_detach; - } - - ret = otx2_cpt_inline_init(dev); - if (ret) { - CPT_LOG_ERR("Could not enable inline IPsec"); - goto intr_unregister; - } - - otx2_cpt_set_enqdeq_fns(dev); - - return 0; - -intr_unregister: - otx2_cpt_err_intr_unregister(dev); -queues_detach: - otx2_cpt_queues_detach(dev); - return ret; -} - -static int -otx2_cpt_dev_start(struct rte_cryptodev *dev) -{ - RTE_SET_USED(dev); - - CPT_PMD_INIT_FUNC_TRACE(); - - return 0; -} - -static void -otx2_cpt_dev_stop(struct rte_cryptodev *dev) -{ - CPT_PMD_INIT_FUNC_TRACE(); - - if (dev->feature_flags & RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO) - cpt_fpm_clear(); -} - -static int -otx2_cpt_dev_close(struct rte_cryptodev *dev) -{ - struct otx2_cpt_vf *vf = dev->data->dev_private; - int i, ret = 0; - - for (i = 0; i < dev->data->nb_queue_pairs; i++) { - ret = otx2_cpt_queue_pair_release(dev, i); - if (ret) - return ret; - } - - /* Unregister error interrupts */ - if (vf->err_intr_registered) - otx2_cpt_err_intr_unregister(dev); - - /* Detach queues */ - if (vf->nb_queues) { - ret = otx2_cpt_queues_detach(dev); - if (ret) - CPT_LOG_ERR("Could not detach CPT queues"); - } - - return ret; -} - -static void -otx2_cpt_dev_info_get(struct rte_cryptodev *dev, - 
struct rte_cryptodev_info *info) -{ - struct otx2_cpt_vf *vf = dev->data->dev_private; - - if (info != NULL) { - info->max_nb_queue_pairs = vf->max_queues; - info->feature_flags = otx2_cpt_default_ff_get(); - info->capabilities = otx2_cpt_capabilities_get(); - info->sym.max_nb_sessions = 0; - info->driver_id = otx2_cryptodev_driver_id; - info->min_mbuf_headroom_req = OTX2_CPT_MIN_HEADROOM_REQ; - info->min_mbuf_tailroom_req = OTX2_CPT_MIN_TAILROOM_REQ; - } -} - -static int -otx2_cpt_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id, - const struct rte_cryptodev_qp_conf *conf, - int socket_id __rte_unused) -{ - uint8_t grp_mask = OTX2_CPT_ENG_GRPS_MASK; - struct rte_pci_device *pci_dev; - struct otx2_cpt_qp *qp; - - CPT_PMD_INIT_FUNC_TRACE(); - - if (dev->data->queue_pairs[qp_id] != NULL) - otx2_cpt_queue_pair_release(dev, qp_id); - - if (conf->nb_descriptors > OTX2_CPT_DEFAULT_CMD_QLEN) { - CPT_LOG_ERR("Could not setup queue pair for %u descriptors", - conf->nb_descriptors); - return -EINVAL; - } - - pci_dev = RTE_DEV_TO_PCI(dev->device); - - if (pci_dev->mem_resource[2].addr == NULL) { - CPT_LOG_ERR("Invalid PCI mem address"); - return -EIO; - } - - qp = otx2_cpt_qp_create(dev, qp_id, grp_mask); - if (qp == NULL) { - CPT_LOG_ERR("Could not create queue pair %d", qp_id); - return -ENOMEM; - } - - qp->sess_mp = conf->mp_session; - qp->sess_mp_priv = conf->mp_session_private; - dev->data->queue_pairs[qp_id] = qp; - - return 0; -} - -static int -otx2_cpt_queue_pair_release(struct rte_cryptodev *dev, uint16_t qp_id) -{ - struct otx2_cpt_qp *qp = dev->data->queue_pairs[qp_id]; - int ret; - - CPT_PMD_INIT_FUNC_TRACE(); - - if (qp == NULL) - return -EINVAL; - - CPT_LOG_INFO("Releasing queue pair %d", qp_id); - - ret = otx2_cpt_qp_destroy(dev, qp); - if (ret) { - CPT_LOG_ERR("Could not destroy queue pair %d", qp_id); - return ret; - } - - dev->data->queue_pairs[qp_id] = NULL; - - return 0; -} - -static unsigned int -otx2_cpt_sym_session_get_size(struct 
rte_cryptodev *dev __rte_unused) -{ - return cpt_get_session_size(); -} - -static int -otx2_cpt_sym_session_configure(struct rte_cryptodev *dev, - struct rte_crypto_sym_xform *xform, - struct rte_cryptodev_sym_session *sess, - struct rte_mempool *pool) -{ - CPT_PMD_INIT_FUNC_TRACE(); - - return sym_session_configure(dev->driver_id, xform, sess, pool); -} - -static void -otx2_cpt_sym_session_clear(struct rte_cryptodev *dev, - struct rte_cryptodev_sym_session *sess) -{ - CPT_PMD_INIT_FUNC_TRACE(); - - return sym_session_clear(dev->driver_id, sess); -} - -static unsigned int -otx2_cpt_asym_session_size_get(struct rte_cryptodev *dev __rte_unused) -{ - return sizeof(struct cpt_asym_sess_misc); -} - -static int -otx2_cpt_asym_session_cfg(struct rte_cryptodev *dev, - struct rte_crypto_asym_xform *xform, - struct rte_cryptodev_asym_session *sess, - struct rte_mempool *pool) -{ - struct cpt_asym_sess_misc *priv; - vq_cmd_word3_t vq_cmd_w3; - int ret; - - CPT_PMD_INIT_FUNC_TRACE(); - - if (rte_mempool_get(pool, (void **)&priv)) { - CPT_LOG_ERR("Could not allocate session_private_data"); - return -ENOMEM; - } - - memset(priv, 0, sizeof(struct cpt_asym_sess_misc)); - - ret = cpt_fill_asym_session_parameters(priv, xform); - if (ret) { - CPT_LOG_ERR("Could not configure session parameters"); - - /* Return session to mempool */ - rte_mempool_put(pool, priv); - return ret; - } - - vq_cmd_w3.u64 = 0; - vq_cmd_w3.s.grp = OTX2_CPT_EGRP_AE; - priv->cpt_inst_w7 = vq_cmd_w3.u64; - - set_asym_session_private_data(sess, dev->driver_id, priv); - - return 0; -} - -static void -otx2_cpt_asym_session_clear(struct rte_cryptodev *dev, - struct rte_cryptodev_asym_session *sess) -{ - struct cpt_asym_sess_misc *priv; - struct rte_mempool *sess_mp; - - CPT_PMD_INIT_FUNC_TRACE(); - - priv = get_asym_session_private_data(sess, dev->driver_id); - if (priv == NULL) - return; - - /* Free resources allocated in session_cfg */ - cpt_free_asym_session_parameters(priv); - - /* Reset and free object back to 
pool */ - memset(priv, 0, otx2_cpt_asym_session_size_get(dev)); - sess_mp = rte_mempool_from_obj(priv); - set_asym_session_private_data(sess, dev->driver_id, NULL); - rte_mempool_put(sess_mp, priv); -} - -struct rte_cryptodev_ops otx2_cpt_ops = { - /* Device control ops */ - .dev_configure = otx2_cpt_dev_config, - .dev_start = otx2_cpt_dev_start, - .dev_stop = otx2_cpt_dev_stop, - .dev_close = otx2_cpt_dev_close, - .dev_infos_get = otx2_cpt_dev_info_get, - - .stats_get = NULL, - .stats_reset = NULL, - .queue_pair_setup = otx2_cpt_queue_pair_setup, - .queue_pair_release = otx2_cpt_queue_pair_release, - - /* Symmetric crypto ops */ - .sym_session_get_size = otx2_cpt_sym_session_get_size, - .sym_session_configure = otx2_cpt_sym_session_configure, - .sym_session_clear = otx2_cpt_sym_session_clear, - - /* Asymmetric crypto ops */ - .asym_session_get_size = otx2_cpt_asym_session_size_get, - .asym_session_configure = otx2_cpt_asym_session_cfg, - .asym_session_clear = otx2_cpt_asym_session_clear, - -}; diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.h b/drivers/crypto/octeontx2/otx2_cryptodev_ops.h deleted file mode 100644 index 7faf7ad034..0000000000 --- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.h +++ /dev/null @@ -1,15 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright (C) 2019 Marvell International Ltd. - */ - -#ifndef _OTX2_CRYPTODEV_OPS_H_ -#define _OTX2_CRYPTODEV_OPS_H_ - -#include - -#define OTX2_CPT_MIN_HEADROOM_REQ 48 -#define OTX2_CPT_MIN_TAILROOM_REQ 208 - -extern struct rte_cryptodev_ops otx2_cpt_ops; - -#endif /* _OTX2_CRYPTODEV_OPS_H_ */ diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h b/drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h deleted file mode 100644 index 01c081a216..0000000000 --- a/drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h +++ /dev/null @@ -1,82 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright (C) 2020 Marvell International Ltd. 
- */ - -#ifndef _OTX2_CRYPTODEV_OPS_HELPER_H_ -#define _OTX2_CRYPTODEV_OPS_HELPER_H_ - -#include "cpt_pmd_logs.h" - -static void -sym_session_clear(int driver_id, struct rte_cryptodev_sym_session *sess) -{ - void *priv = get_sym_session_private_data(sess, driver_id); - struct cpt_sess_misc *misc; - struct rte_mempool *pool; - struct cpt_ctx *ctx; - - if (priv == NULL) - return; - - misc = priv; - ctx = SESS_PRIV(misc); - - if (ctx->auth_key != NULL) - rte_free(ctx->auth_key); - - memset(priv, 0, cpt_get_session_size()); - - pool = rte_mempool_from_obj(priv); - - set_sym_session_private_data(sess, driver_id, NULL); - - rte_mempool_put(pool, priv); -} - -static __rte_always_inline uint8_t -otx2_cpt_compcode_get(struct cpt_request_info *req) -{ - volatile struct cpt_res_s_9s *res; - uint8_t ret; - - res = (volatile struct cpt_res_s_9s *)req->completion_addr; - - if (unlikely(res->compcode == CPT_9X_COMP_E_NOTDONE)) { - if (rte_get_timer_cycles() < req->time_out) - return ERR_REQ_PENDING; - - CPT_LOG_DP_ERR("Request timed out"); - return ERR_REQ_TIMEOUT; - } - - if (likely(res->compcode == CPT_9X_COMP_E_GOOD)) { - ret = NO_ERR; - if (unlikely(res->uc_compcode)) { - ret = res->uc_compcode; - CPT_LOG_DP_DEBUG("Request failed with microcode error"); - CPT_LOG_DP_DEBUG("MC completion code 0x%x", - res->uc_compcode); - } - } else { - CPT_LOG_DP_DEBUG("HW completion code 0x%x", res->compcode); - - ret = res->compcode; - switch (res->compcode) { - case CPT_9X_COMP_E_INSTERR: - CPT_LOG_DP_ERR("Request failed with instruction error"); - break; - case CPT_9X_COMP_E_FAULT: - CPT_LOG_DP_ERR("Request failed with DMA fault"); - break; - case CPT_9X_COMP_E_HWERR: - CPT_LOG_DP_ERR("Request failed with hardware error"); - break; - default: - CPT_LOG_DP_ERR("Request failed with unknown completion code"); - } - } - - return ret; -} - -#endif /* _OTX2_CRYPTODEV_OPS_HELPER_H_ */ diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_qp.h b/drivers/crypto/octeontx2/otx2_cryptodev_qp.h deleted 
file mode 100644 index 95bce3621a..0000000000 --- a/drivers/crypto/octeontx2/otx2_cryptodev_qp.h +++ /dev/null @@ -1,46 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright (C) 2020-2021 Marvell. - */ - -#ifndef _OTX2_CRYPTODEV_QP_H_ -#define _OTX2_CRYPTODEV_QP_H_ - -#include -#include -#include -#include - -#include "cpt_common.h" - -struct otx2_cpt_qp { - uint32_t id; - /**< Queue pair id */ - uint8_t blkaddr; - /**< CPT0/1 BLKADDR of LF */ - uintptr_t base; - /**< Base address where BAR is mapped */ - void *lmtline; - /**< Address of LMTLINE */ - rte_iova_t lf_nq_reg; - /**< LF enqueue register address */ - struct pending_queue pend_q; - /**< Pending queue */ - struct rte_mempool *sess_mp; - /**< Session mempool */ - struct rte_mempool *sess_mp_priv; - /**< Session private data mempool */ - struct cpt_qp_meta_info meta_info; - /**< Metabuf info required to support operations on the queue pair */ - rte_iova_t iq_dma_addr; - /**< Instruction queue address */ - struct rte_event ev; - /**< Event information required for binding cryptodev queue to - * eventdev queue. Used by crypto adapter. - */ - uint8_t ca_enable; - /**< Set when queue pair is added to crypto adapter */ - uint8_t qp_ev_bind; - /**< Set when queue pair is bound to event queue */ -}; - -#endif /* _OTX2_CRYPTODEV_QP_H_ */ diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_sec.c b/drivers/crypto/octeontx2/otx2_cryptodev_sec.c deleted file mode 100644 index 9a4f84f8d8..0000000000 --- a/drivers/crypto/octeontx2/otx2_cryptodev_sec.c +++ /dev/null @@ -1,655 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright (C) 2020 Marvell International Ltd. 
- */ - -#include -#include -#include -#include -#include -#include -#include -#include - -#include "otx2_cryptodev.h" -#include "otx2_cryptodev_capabilities.h" -#include "otx2_cryptodev_hw_access.h" -#include "otx2_cryptodev_ops.h" -#include "otx2_cryptodev_sec.h" -#include "otx2_security.h" - -static int -ipsec_lp_len_precalc(struct rte_security_ipsec_xform *ipsec, - struct rte_crypto_sym_xform *xform, - struct otx2_sec_session_ipsec_lp *lp) -{ - struct rte_crypto_sym_xform *cipher_xform, *auth_xform; - - lp->partial_len = 0; - if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) { - if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV4) - lp->partial_len = sizeof(struct rte_ipv4_hdr); - else if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV6) - lp->partial_len = sizeof(struct rte_ipv6_hdr); - else - return -EINVAL; - } - - if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_ESP) { - lp->partial_len += sizeof(struct rte_esp_hdr); - lp->roundup_len = sizeof(struct rte_esp_tail); - } else if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_AH) { - lp->partial_len += OTX2_SEC_AH_HDR_LEN; - } else { - return -EINVAL; - } - - if (ipsec->options.udp_encap) - lp->partial_len += sizeof(struct rte_udp_hdr); - - if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) { - if (xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) { - lp->partial_len += OTX2_SEC_AES_GCM_IV_LEN; - lp->partial_len += OTX2_SEC_AES_GCM_MAC_LEN; - lp->roundup_byte = OTX2_SEC_AES_GCM_ROUNDUP_BYTE_LEN; - return 0; - } else { - return -EINVAL; - } - } - - if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) { - cipher_xform = xform; - auth_xform = xform->next; - } else if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) { - auth_xform = xform; - cipher_xform = xform->next; - } else { - return -EINVAL; - } - - if (cipher_xform->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) { - lp->partial_len += OTX2_SEC_AES_CBC_IV_LEN; - lp->roundup_byte = OTX2_SEC_AES_CBC_ROUNDUP_BYTE_LEN; - } else { - return -EINVAL; - } - - 
if (auth_xform->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC) - lp->partial_len += OTX2_SEC_SHA1_HMAC_LEN; - else if (auth_xform->auth.algo == RTE_CRYPTO_AUTH_SHA256_HMAC) - lp->partial_len += OTX2_SEC_SHA2_HMAC_LEN; - else - return -EINVAL; - - return 0; -} - -static int -otx2_cpt_enq_sa_write(struct otx2_sec_session_ipsec_lp *lp, - struct otx2_cpt_qp *qptr, uint8_t opcode) -{ - uint64_t lmt_status, time_out; - void *lmtline = qptr->lmtline; - struct otx2_cpt_inst_s inst; - struct otx2_cpt_res *res; - uint64_t *mdata; - int ret = 0; - - if (unlikely(rte_mempool_get(qptr->meta_info.pool, - (void **)&mdata) < 0)) - return -ENOMEM; - - res = (struct otx2_cpt_res *)RTE_PTR_ALIGN(mdata, 16); - res->compcode = CPT_9X_COMP_E_NOTDONE; - - inst.opcode = opcode | (lp->ctx_len << 8); - inst.param1 = 0; - inst.param2 = 0; - inst.dlen = lp->ctx_len << 3; - inst.dptr = rte_mempool_virt2iova(lp); - inst.rptr = 0; - inst.cptr = rte_mempool_virt2iova(lp); - inst.egrp = OTX2_CPT_EGRP_SE; - - inst.u64[0] = 0; - inst.u64[2] = 0; - inst.u64[3] = 0; - inst.res_addr = rte_mempool_virt2iova(res); - - rte_io_wmb(); - - do { - /* Copy CPT command to LMTLINE */ - otx2_lmt_mov(lmtline, &inst, 2); - lmt_status = otx2_lmt_submit(qptr->lf_nq_reg); - } while (lmt_status == 0); - - time_out = rte_get_timer_cycles() + - DEFAULT_COMMAND_TIMEOUT * rte_get_timer_hz(); - - while (res->compcode == CPT_9X_COMP_E_NOTDONE) { - if (rte_get_timer_cycles() > time_out) { - rte_mempool_put(qptr->meta_info.pool, mdata); - otx2_err("Request timed out"); - return -ETIMEDOUT; - } - rte_io_rmb(); - } - - if (unlikely(res->compcode != CPT_9X_COMP_E_GOOD)) { - ret = res->compcode; - switch (ret) { - case CPT_9X_COMP_E_INSTERR: - otx2_err("Request failed with instruction error"); - break; - case CPT_9X_COMP_E_FAULT: - otx2_err("Request failed with DMA fault"); - break; - case CPT_9X_COMP_E_HWERR: - otx2_err("Request failed with hardware error"); - break; - default: - otx2_err("Request failed with unknown hardware " - 
"completion code : 0x%x", ret); - } - goto mempool_put; - } - - if (unlikely(res->uc_compcode != OTX2_IPSEC_PO_CC_SUCCESS)) { - ret = res->uc_compcode; - switch (ret) { - case OTX2_IPSEC_PO_CC_AUTH_UNSUPPORTED: - otx2_err("Invalid auth type"); - break; - case OTX2_IPSEC_PO_CC_ENCRYPT_UNSUPPORTED: - otx2_err("Invalid encrypt type"); - break; - default: - otx2_err("Request failed with unknown microcode " - "completion code : 0x%x", ret); - } - } - -mempool_put: - rte_mempool_put(qptr->meta_info.pool, mdata); - return ret; -} - -static void -set_session_misc_attributes(struct otx2_sec_session_ipsec_lp *sess, - struct rte_crypto_sym_xform *crypto_xform, - struct rte_crypto_sym_xform *auth_xform, - struct rte_crypto_sym_xform *cipher_xform) -{ - if (crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) { - sess->iv_offset = crypto_xform->aead.iv.offset; - sess->iv_length = crypto_xform->aead.iv.length; - sess->aad_length = crypto_xform->aead.aad_length; - sess->mac_len = crypto_xform->aead.digest_length; - } else { - sess->iv_offset = cipher_xform->cipher.iv.offset; - sess->iv_length = cipher_xform->cipher.iv.length; - sess->auth_iv_offset = auth_xform->auth.iv.offset; - sess->auth_iv_length = auth_xform->auth.iv.length; - sess->mac_len = auth_xform->auth.digest_length; - } -} - -static int -crypto_sec_ipsec_outb_session_create(struct rte_cryptodev *crypto_dev, - struct rte_security_ipsec_xform *ipsec, - struct rte_crypto_sym_xform *crypto_xform, - struct rte_security_session *sec_sess) -{ - struct rte_crypto_sym_xform *auth_xform, *cipher_xform; - struct otx2_ipsec_po_ip_template *template = NULL; - const uint8_t *cipher_key, *auth_key; - struct otx2_sec_session_ipsec_lp *lp; - struct otx2_ipsec_po_sa_ctl *ctl; - int cipher_key_len, auth_key_len; - struct otx2_ipsec_po_out_sa *sa; - struct otx2_sec_session *sess; - struct otx2_cpt_inst_s inst; - struct rte_ipv6_hdr *ip6; - struct rte_ipv4_hdr *ip; - int ret, ctx_len; - - sess = get_sec_session_private_data(sec_sess); - 
sess->ipsec.dir = RTE_SECURITY_IPSEC_SA_DIR_EGRESS; - lp = &sess->ipsec.lp; - - sa = &lp->out_sa; - ctl = &sa->ctl; - if (ctl->valid) { - otx2_err("SA already registered"); - return -EINVAL; - } - - memset(sa, 0, sizeof(struct otx2_ipsec_po_out_sa)); - - /* Initialize lookaside ipsec private data */ - lp->ip_id = 0; - lp->seq_lo = 1; - lp->seq_hi = 0; - - ret = ipsec_po_sa_ctl_set(ipsec, crypto_xform, ctl); - if (ret) - return ret; - - ret = ipsec_lp_len_precalc(ipsec, crypto_xform, lp); - if (ret) - return ret; - - /* Start ip id from 1 */ - lp->ip_id = 1; - - if (ctl->enc_type == OTX2_IPSEC_PO_SA_ENC_AES_GCM) { - template = &sa->aes_gcm.template; - ctx_len = offsetof(struct otx2_ipsec_po_out_sa, - aes_gcm.template) + sizeof( - sa->aes_gcm.template.ip4); - ctx_len = RTE_ALIGN_CEIL(ctx_len, 8); - lp->ctx_len = ctx_len >> 3; - } else if (ctl->auth_type == - OTX2_IPSEC_PO_SA_AUTH_SHA1) { - template = &sa->sha1.template; - ctx_len = offsetof(struct otx2_ipsec_po_out_sa, - sha1.template) + sizeof( - sa->sha1.template.ip4); - ctx_len = RTE_ALIGN_CEIL(ctx_len, 8); - lp->ctx_len = ctx_len >> 3; - } else if (ctl->auth_type == - OTX2_IPSEC_PO_SA_AUTH_SHA2_256) { - template = &sa->sha2.template; - ctx_len = offsetof(struct otx2_ipsec_po_out_sa, - sha2.template) + sizeof( - sa->sha2.template.ip4); - ctx_len = RTE_ALIGN_CEIL(ctx_len, 8); - lp->ctx_len = ctx_len >> 3; - } else { - return -EINVAL; - } - ip = &template->ip4.ipv4_hdr; - if (ipsec->options.udp_encap) { - ip->next_proto_id = IPPROTO_UDP; - template->ip4.udp_src = rte_be_to_cpu_16(4500); - template->ip4.udp_dst = rte_be_to_cpu_16(4500); - } else { - ip->next_proto_id = IPPROTO_ESP; - } - - if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) { - if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV4) { - ip->version_ihl = RTE_IPV4_VHL_DEF; - ip->time_to_live = ipsec->tunnel.ipv4.ttl; - ip->type_of_service |= (ipsec->tunnel.ipv4.dscp << 2); - if (ipsec->tunnel.ipv4.df) - ip->fragment_offset = BIT(14); - 
memcpy(&ip->src_addr, &ipsec->tunnel.ipv4.src_ip, - sizeof(struct in_addr)); - memcpy(&ip->dst_addr, &ipsec->tunnel.ipv4.dst_ip, - sizeof(struct in_addr)); - } else if (ipsec->tunnel.type == - RTE_SECURITY_IPSEC_TUNNEL_IPV6) { - - if (ctl->enc_type == OTX2_IPSEC_PO_SA_ENC_AES_GCM) { - template = &sa->aes_gcm.template; - ctx_len = offsetof(struct otx2_ipsec_po_out_sa, - aes_gcm.template) + sizeof( - sa->aes_gcm.template.ip6); - ctx_len = RTE_ALIGN_CEIL(ctx_len, 8); - lp->ctx_len = ctx_len >> 3; - } else if (ctl->auth_type == - OTX2_IPSEC_PO_SA_AUTH_SHA1) { - template = &sa->sha1.template; - ctx_len = offsetof(struct otx2_ipsec_po_out_sa, - sha1.template) + sizeof( - sa->sha1.template.ip6); - ctx_len = RTE_ALIGN_CEIL(ctx_len, 8); - lp->ctx_len = ctx_len >> 3; - } else if (ctl->auth_type == - OTX2_IPSEC_PO_SA_AUTH_SHA2_256) { - template = &sa->sha2.template; - ctx_len = offsetof(struct otx2_ipsec_po_out_sa, - sha2.template) + sizeof( - sa->sha2.template.ip6); - ctx_len = RTE_ALIGN_CEIL(ctx_len, 8); - lp->ctx_len = ctx_len >> 3; - } else { - return -EINVAL; - } - - ip6 = &template->ip6.ipv6_hdr; - if (ipsec->options.udp_encap) { - ip6->proto = IPPROTO_UDP; - template->ip6.udp_src = rte_be_to_cpu_16(4500); - template->ip6.udp_dst = rte_be_to_cpu_16(4500); - } else { - ip6->proto = (ipsec->proto == - RTE_SECURITY_IPSEC_SA_PROTO_ESP) ? 
- IPPROTO_ESP : IPPROTO_AH; - } - ip6->vtc_flow = rte_cpu_to_be_32(0x60000000 | - ((ipsec->tunnel.ipv6.dscp << - RTE_IPV6_HDR_TC_SHIFT) & - RTE_IPV6_HDR_TC_MASK) | - ((ipsec->tunnel.ipv6.flabel << - RTE_IPV6_HDR_FL_SHIFT) & - RTE_IPV6_HDR_FL_MASK)); - ip6->hop_limits = ipsec->tunnel.ipv6.hlimit; - memcpy(&ip6->src_addr, &ipsec->tunnel.ipv6.src_addr, - sizeof(struct in6_addr)); - memcpy(&ip6->dst_addr, &ipsec->tunnel.ipv6.dst_addr, - sizeof(struct in6_addr)); - } - } - - cipher_xform = crypto_xform; - auth_xform = crypto_xform->next; - - cipher_key_len = 0; - auth_key_len = 0; - - if (crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) { - if (crypto_xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) - memcpy(sa->iv.gcm.nonce, &ipsec->salt, 4); - cipher_key = crypto_xform->aead.key.data; - cipher_key_len = crypto_xform->aead.key.length; - } else { - cipher_key = cipher_xform->cipher.key.data; - cipher_key_len = cipher_xform->cipher.key.length; - auth_key = auth_xform->auth.key.data; - auth_key_len = auth_xform->auth.key.length; - - if (auth_xform->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC) - memcpy(sa->sha1.hmac_key, auth_key, auth_key_len); - else if (auth_xform->auth.algo == RTE_CRYPTO_AUTH_SHA256_HMAC) - memcpy(sa->sha2.hmac_key, auth_key, auth_key_len); - } - - if (cipher_key_len != 0) - memcpy(sa->cipher_key, cipher_key, cipher_key_len); - else - return -EINVAL; - - inst.u64[7] = 0; - inst.egrp = OTX2_CPT_EGRP_SE; - inst.cptr = rte_mempool_virt2iova(sa); - - lp->cpt_inst_w7 = inst.u64[7]; - lp->ucmd_opcode = (lp->ctx_len << 8) | - (OTX2_IPSEC_PO_PROCESS_IPSEC_OUTB); - - /* Set per packet IV and IKEv2 bits */ - lp->ucmd_param1 = BIT(11) | BIT(9); - lp->ucmd_param2 = 0; - - set_session_misc_attributes(lp, crypto_xform, - auth_xform, cipher_xform); - - return otx2_cpt_enq_sa_write(lp, crypto_dev->data->queue_pairs[0], - OTX2_IPSEC_PO_WRITE_IPSEC_OUTB); -} - -static int -crypto_sec_ipsec_inb_session_create(struct rte_cryptodev *crypto_dev, - struct rte_security_ipsec_xform 
*ipsec, - struct rte_crypto_sym_xform *crypto_xform, - struct rte_security_session *sec_sess) -{ - struct rte_crypto_sym_xform *auth_xform, *cipher_xform; - const uint8_t *cipher_key, *auth_key; - struct otx2_sec_session_ipsec_lp *lp; - struct otx2_ipsec_po_sa_ctl *ctl; - int cipher_key_len, auth_key_len; - struct otx2_ipsec_po_in_sa *sa; - struct otx2_sec_session *sess; - struct otx2_cpt_inst_s inst; - int ret; - - sess = get_sec_session_private_data(sec_sess); - sess->ipsec.dir = RTE_SECURITY_IPSEC_SA_DIR_INGRESS; - lp = &sess->ipsec.lp; - - sa = &lp->in_sa; - ctl = &sa->ctl; - - if (ctl->valid) { - otx2_err("SA already registered"); - return -EINVAL; - } - - memset(sa, 0, sizeof(struct otx2_ipsec_po_in_sa)); - sa->replay_win_sz = ipsec->replay_win_sz; - - ret = ipsec_po_sa_ctl_set(ipsec, crypto_xform, ctl); - if (ret) - return ret; - - auth_xform = crypto_xform; - cipher_xform = crypto_xform->next; - - cipher_key_len = 0; - auth_key_len = 0; - - if (crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) { - if (crypto_xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) - memcpy(sa->iv.gcm.nonce, &ipsec->salt, 4); - cipher_key = crypto_xform->aead.key.data; - cipher_key_len = crypto_xform->aead.key.length; - - lp->ctx_len = offsetof(struct otx2_ipsec_po_in_sa, - aes_gcm.hmac_key[0]) >> 3; - RTE_ASSERT(lp->ctx_len == OTX2_IPSEC_PO_AES_GCM_INB_CTX_LEN); - } else { - cipher_key = cipher_xform->cipher.key.data; - cipher_key_len = cipher_xform->cipher.key.length; - auth_key = auth_xform->auth.key.data; - auth_key_len = auth_xform->auth.key.length; - - if (auth_xform->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC) { - memcpy(sa->aes_gcm.hmac_key, auth_key, auth_key_len); - lp->ctx_len = offsetof(struct otx2_ipsec_po_in_sa, - aes_gcm.selector) >> 3; - } else if (auth_xform->auth.algo == - RTE_CRYPTO_AUTH_SHA256_HMAC) { - memcpy(sa->sha2.hmac_key, auth_key, auth_key_len); - lp->ctx_len = offsetof(struct otx2_ipsec_po_in_sa, - sha2.selector) >> 3; - } - } - - if (cipher_key_len != 0) - 
memcpy(sa->cipher_key, cipher_key, cipher_key_len); - else - return -EINVAL; - - inst.u64[7] = 0; - inst.egrp = OTX2_CPT_EGRP_SE; - inst.cptr = rte_mempool_virt2iova(sa); - - lp->cpt_inst_w7 = inst.u64[7]; - lp->ucmd_opcode = (lp->ctx_len << 8) | - (OTX2_IPSEC_PO_PROCESS_IPSEC_INB); - lp->ucmd_param1 = 0; - - /* Set IKEv2 bit */ - lp->ucmd_param2 = BIT(12); - - set_session_misc_attributes(lp, crypto_xform, - auth_xform, cipher_xform); - - if (sa->replay_win_sz) { - if (sa->replay_win_sz > OTX2_IPSEC_MAX_REPLAY_WIN_SZ) { - otx2_err("Replay window size is not supported"); - return -ENOTSUP; - } - sa->replay = rte_zmalloc(NULL, sizeof(struct otx2_ipsec_replay), - 0); - if (sa->replay == NULL) - return -ENOMEM; - - /* Set window bottom to 1, base and top to size of window */ - sa->replay->winb = 1; - sa->replay->wint = sa->replay_win_sz; - sa->replay->base = sa->replay_win_sz; - sa->esn_low = 0; - sa->esn_hi = 0; - } - - return otx2_cpt_enq_sa_write(lp, crypto_dev->data->queue_pairs[0], - OTX2_IPSEC_PO_WRITE_IPSEC_INB); -} - -static int -crypto_sec_ipsec_session_create(struct rte_cryptodev *crypto_dev, - struct rte_security_ipsec_xform *ipsec, - struct rte_crypto_sym_xform *crypto_xform, - struct rte_security_session *sess) -{ - int ret; - - if (crypto_dev->data->queue_pairs[0] == NULL) { - otx2_err("Setup cpt queue pair before creating sec session"); - return -EPERM; - } - - ret = ipsec_po_xform_verify(ipsec, crypto_xform); - if (ret) - return ret; - - if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) - return crypto_sec_ipsec_inb_session_create(crypto_dev, ipsec, - crypto_xform, sess); - else - return crypto_sec_ipsec_outb_session_create(crypto_dev, ipsec, - crypto_xform, sess); -} - -static int -otx2_crypto_sec_session_create(void *device, - struct rte_security_session_conf *conf, - struct rte_security_session *sess, - struct rte_mempool *mempool) -{ - struct otx2_sec_session *priv; - int ret; - - if (conf->action_type != 
RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL) - return -ENOTSUP; - - if (rte_security_dynfield_register() < 0) - return -rte_errno; - - if (rte_mempool_get(mempool, (void **)&priv)) { - otx2_err("Could not allocate security session private data"); - return -ENOMEM; - } - - set_sec_session_private_data(sess, priv); - - priv->userdata = conf->userdata; - - if (conf->protocol == RTE_SECURITY_PROTOCOL_IPSEC) - ret = crypto_sec_ipsec_session_create(device, &conf->ipsec, - conf->crypto_xform, - sess); - else - ret = -ENOTSUP; - - if (ret) - goto mempool_put; - - return 0; - -mempool_put: - rte_mempool_put(mempool, priv); - set_sec_session_private_data(sess, NULL); - return ret; -} - -static int -otx2_crypto_sec_session_destroy(void *device __rte_unused, - struct rte_security_session *sess) -{ - struct otx2_sec_session *priv; - struct rte_mempool *sess_mp; - - priv = get_sec_session_private_data(sess); - - if (priv == NULL) - return 0; - - sess_mp = rte_mempool_from_obj(priv); - - memset(priv, 0, sizeof(*priv)); - - set_sec_session_private_data(sess, NULL); - rte_mempool_put(sess_mp, priv); - - return 0; -} - -static unsigned int -otx2_crypto_sec_session_get_size(void *device __rte_unused) -{ - return sizeof(struct otx2_sec_session); -} - -static int -otx2_crypto_sec_set_pkt_mdata(void *device __rte_unused, - struct rte_security_session *session, - struct rte_mbuf *m, void *params __rte_unused) -{ - /* Set security session as the pkt metadata */ - *rte_security_dynfield(m) = (rte_security_dynfield_t)session; - - return 0; -} - -static int -otx2_crypto_sec_get_userdata(void *device __rte_unused, uint64_t md, - void **userdata) -{ - /* Retrieve userdata */ - *userdata = (void *)md; - - return 0; -} - -static struct rte_security_ops otx2_crypto_sec_ops = { - .session_create = otx2_crypto_sec_session_create, - .session_destroy = otx2_crypto_sec_session_destroy, - .session_get_size = otx2_crypto_sec_session_get_size, - .set_pkt_metadata = otx2_crypto_sec_set_pkt_mdata, - 
.get_userdata = otx2_crypto_sec_get_userdata, - .capabilities_get = otx2_crypto_sec_capabilities_get -}; - -int -otx2_crypto_sec_ctx_create(struct rte_cryptodev *cdev) -{ - struct rte_security_ctx *ctx; - - ctx = rte_malloc("otx2_cpt_dev_sec_ctx", - sizeof(struct rte_security_ctx), 0); - - if (ctx == NULL) - return -ENOMEM; - - /* Populate ctx */ - ctx->device = cdev; - ctx->ops = &otx2_crypto_sec_ops; - ctx->sess_cnt = 0; - - cdev->security_ctx = ctx; - - return 0; -} - -void -otx2_crypto_sec_ctx_destroy(struct rte_cryptodev *cdev) -{ - rte_free(cdev->security_ctx); -} diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_sec.h b/drivers/crypto/octeontx2/otx2_cryptodev_sec.h deleted file mode 100644 index ff3329c9c1..0000000000 --- a/drivers/crypto/octeontx2/otx2_cryptodev_sec.h +++ /dev/null @@ -1,64 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright (C) 2020 Marvell International Ltd. - */ - -#ifndef __OTX2_CRYPTODEV_SEC_H__ -#define __OTX2_CRYPTODEV_SEC_H__ - -#include - -#include "otx2_ipsec_po.h" - -struct otx2_sec_session_ipsec_lp { - RTE_STD_C11 - union { - /* Inbound SA */ - struct otx2_ipsec_po_in_sa in_sa; - /* Outbound SA */ - struct otx2_ipsec_po_out_sa out_sa; - }; - - uint64_t cpt_inst_w7; - union { - uint64_t ucmd_w0; - struct { - uint16_t ucmd_dlen; - uint16_t ucmd_param2; - uint16_t ucmd_param1; - uint16_t ucmd_opcode; - }; - }; - - uint8_t partial_len; - uint8_t roundup_len; - uint8_t roundup_byte; - uint16_t ip_id; - union { - uint64_t esn; - struct { - uint32_t seq_lo; - uint32_t seq_hi; - }; - }; - - /** Context length in 8-byte words */ - size_t ctx_len; - /** Auth IV offset in bytes */ - uint16_t auth_iv_offset; - /** IV offset in bytes */ - uint16_t iv_offset; - /** AAD length */ - uint16_t aad_length; - /** MAC len in bytes */ - uint8_t mac_len; - /** IV length in bytes */ - uint8_t iv_length; - /** Auth IV length in bytes */ - uint8_t auth_iv_length; -}; - -int otx2_crypto_sec_ctx_create(struct rte_cryptodev *crypto_dev); - 
-void otx2_crypto_sec_ctx_destroy(struct rte_cryptodev *crypto_dev); - -#endif /* __OTX2_CRYPTODEV_SEC_H__ */ diff --git a/drivers/crypto/octeontx2/otx2_ipsec_anti_replay.h b/drivers/crypto/octeontx2/otx2_ipsec_anti_replay.h deleted file mode 100644 index 089a3d073a..0000000000 --- a/drivers/crypto/octeontx2/otx2_ipsec_anti_replay.h +++ /dev/null @@ -1,227 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright (C) 2020 Marvell International Ltd. - */ - -#ifndef __OTX2_IPSEC_ANTI_REPLAY_H__ -#define __OTX2_IPSEC_ANTI_REPLAY_H__ - -#include - -#include "otx2_ipsec_fp.h" - -#define WORD_SHIFT 6 -#define WORD_SIZE (1 << WORD_SHIFT) -#define WORD_MASK (WORD_SIZE - 1) - -#define IPSEC_ANTI_REPLAY_FAILED (-1) - -static inline int -anti_replay_check(struct otx2_ipsec_replay *replay, uint64_t seq, - uint64_t winsz) -{ - uint64_t *window = &replay->window[0]; - uint64_t ex_winsz = winsz + WORD_SIZE; - uint64_t winwords = ex_winsz >> WORD_SHIFT; - uint64_t base = replay->base; - uint32_t winb = replay->winb; - uint32_t wint = replay->wint; - uint64_t seqword, shiftwords; - uint64_t bit_pos; - uint64_t shift; - uint64_t *wptr; - uint64_t tmp; - - if (winsz > 64) - goto slow_shift; - /* Check if the seq is the biggest one yet */ - if (likely(seq > base)) { - shift = seq - base; - if (shift < winsz) { /* In window */ - /* - * If more than 64-bit anti-replay window, - * use slow shift routine - */ - wptr = window + (shift >> WORD_SHIFT); - *wptr <<= shift; - *wptr |= 1ull; - } else { - /* No special handling of window size > 64 */ - wptr = window + ((winsz - 1) >> WORD_SHIFT); - /* - * Zero out the whole window (especially for - * bigger than 64b window) till the last 64b word - * as the incoming sequence number minus - * base sequence is more than the window size. - */ - while (window != wptr) - *window++ = 0ull; - /* - * Set the last bit (of the window) to 1 - * as that corresponds to the base sequence number. 
- * Now any incoming sequence number which is - * (base - window size - 1) will pass anti-replay check - */ - *wptr = 1ull; - } - /* - * Set the base to incoming sequence number as - * that is the biggest sequence number seen yet - */ - replay->base = seq; - return 0; - } - - bit_pos = base - seq; - - /* If seq falls behind the window, return failure */ - if (bit_pos >= winsz) - return IPSEC_ANTI_REPLAY_FAILED; - - /* seq is within anti-replay window */ - wptr = window + ((winsz - bit_pos - 1) >> WORD_SHIFT); - bit_pos &= WORD_MASK; - - /* Check if this is a replayed packet */ - if (*wptr & ((1ull) << bit_pos)) - return IPSEC_ANTI_REPLAY_FAILED; - - /* mark as seen */ - *wptr |= ((1ull) << bit_pos); - return 0; - -slow_shift: - if (likely(seq > base)) { - uint32_t i; - - shift = seq - base; - if (unlikely(shift >= winsz)) { - /* - * shift is bigger than the window, - * so just zero out everything - */ - for (i = 0; i < winwords; i++) - window[i] = 0; -winupdate: - /* Find out the word */ - seqword = ((seq - 1) % ex_winsz) >> WORD_SHIFT; - - /* Find out the bit in the word */ - bit_pos = (seq - 1) & WORD_MASK; - - /* - * Set the bit corresponding to sequence number - * in window to mark it as received - */ - window[seqword] |= (1ull << (63 - bit_pos)); - - /* wint and winb range from 1 to ex_winsz */ - replay->wint = ((wint + shift - 1) % ex_winsz) + 1; - replay->winb = ((winb + shift - 1) % ex_winsz) + 1; - - replay->base = seq; - return 0; - } - - /* - * New sequence number is bigger than the base but - * it's not bigger than base + window size - */ - - shiftwords = ((wint + shift - 1) >> WORD_SHIFT) - - ((wint - 1) >> WORD_SHIFT); - if (unlikely(shiftwords)) { - tmp = (wint + WORD_SIZE - 1) / WORD_SIZE; - for (i = 0; i < shiftwords; i++) { - tmp %= winwords; - window[tmp++] = 0; - } - } - - goto winupdate; - } - - /* Sequence number is before the window */ - if (unlikely((seq + winsz) <= base)) - return IPSEC_ANTI_REPLAY_FAILED; - - /* Sequence number is within 
the window */ - - /* Find out the word */ - seqword = ((seq - 1) % ex_winsz) >> WORD_SHIFT; - - /* Find out the bit in the word */ - bit_pos = (seq - 1) & WORD_MASK; - - /* Check if this is a replayed packet */ - if (window[seqword] & (1ull << (63 - bit_pos))) - return IPSEC_ANTI_REPLAY_FAILED; - - /* - * Set the bit corresponding to sequence number - * in window to mark it as received - */ - window[seqword] |= (1ull << (63 - bit_pos)); - - return 0; -} - -static inline int -cpt_ipsec_ip_antireplay_check(struct otx2_ipsec_fp_in_sa *sa, void *l3_ptr) -{ - struct otx2_ipsec_fp_res_hdr *hdr = l3_ptr; - uint64_t seq_in_sa; - uint32_t seqh = 0; - uint32_t seql; - uint64_t seq; - uint8_t esn; - int ret; - - esn = sa->ctl.esn_en; - seql = rte_be_to_cpu_32(hdr->seq_no_lo); - - if (!esn) - seq = (uint64_t)seql; - else { - seqh = rte_be_to_cpu_32(hdr->seq_no_hi); - seq = ((uint64_t)seqh << 32) | seql; - } - - if (unlikely(seq == 0)) - return IPSEC_ANTI_REPLAY_FAILED; - - rte_spinlock_lock(&sa->replay->lock); - ret = anti_replay_check(sa->replay, seq, sa->replay_win_sz); - if (esn && (ret == 0)) { - seq_in_sa = ((uint64_t)rte_be_to_cpu_32(sa->esn_hi) << 32) | - rte_be_to_cpu_32(sa->esn_low); - if (seq > seq_in_sa) { - sa->esn_low = rte_cpu_to_be_32(seql); - sa->esn_hi = rte_cpu_to_be_32(seqh); - } - } - rte_spinlock_unlock(&sa->replay->lock); - - return ret; -} - -static inline uint32_t -anti_replay_get_seqh(uint32_t winsz, uint32_t seql, - uint32_t esn_hi, uint32_t esn_low) -{ - uint32_t win_low = esn_low - winsz + 1; - - if (esn_low > winsz - 1) { - /* Window is in one sequence number subspace */ - if (seql > win_low) - return esn_hi; - else - return esn_hi + 1; - } else { - /* Window is split across two sequence number subspaces */ - if (seql > win_low) - return esn_hi - 1; - else - return esn_hi; - } -} -#endif /* __OTX2_IPSEC_ANTI_REPLAY_H__ */ diff --git a/drivers/crypto/octeontx2/otx2_ipsec_fp.h b/drivers/crypto/octeontx2/otx2_ipsec_fp.h deleted file mode 100644 index 
2461e7462b..0000000000 --- a/drivers/crypto/octeontx2/otx2_ipsec_fp.h +++ /dev/null @@ -1,371 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2020 Marvell International Ltd. - */ - -#ifndef __OTX2_IPSEC_FP_H__ -#define __OTX2_IPSEC_FP_H__ - -#include -#include - -/* Macros for anti replay and ESN */ -#define OTX2_IPSEC_MAX_REPLAY_WIN_SZ 1024 - -struct otx2_ipsec_fp_res_hdr { - uint32_t spi; - uint32_t seq_no_lo; - uint32_t seq_no_hi; - uint32_t rsvd; -}; - -enum { - OTX2_IPSEC_FP_SA_DIRECTION_INBOUND = 0, - OTX2_IPSEC_FP_SA_DIRECTION_OUTBOUND = 1, -}; - -enum { - OTX2_IPSEC_FP_SA_IP_VERSION_4 = 0, - OTX2_IPSEC_FP_SA_IP_VERSION_6 = 1, -}; - -enum { - OTX2_IPSEC_FP_SA_MODE_TRANSPORT = 0, - OTX2_IPSEC_FP_SA_MODE_TUNNEL = 1, -}; - -enum { - OTX2_IPSEC_FP_SA_PROTOCOL_AH = 0, - OTX2_IPSEC_FP_SA_PROTOCOL_ESP = 1, -}; - -enum { - OTX2_IPSEC_FP_SA_AES_KEY_LEN_128 = 1, - OTX2_IPSEC_FP_SA_AES_KEY_LEN_192 = 2, - OTX2_IPSEC_FP_SA_AES_KEY_LEN_256 = 3, -}; - -enum { - OTX2_IPSEC_FP_SA_ENC_NULL = 0, - OTX2_IPSEC_FP_SA_ENC_DES_CBC = 1, - OTX2_IPSEC_FP_SA_ENC_3DES_CBC = 2, - OTX2_IPSEC_FP_SA_ENC_AES_CBC = 3, - OTX2_IPSEC_FP_SA_ENC_AES_CTR = 4, - OTX2_IPSEC_FP_SA_ENC_AES_GCM = 5, - OTX2_IPSEC_FP_SA_ENC_AES_CCM = 6, -}; - -enum { - OTX2_IPSEC_FP_SA_AUTH_NULL = 0, - OTX2_IPSEC_FP_SA_AUTH_MD5 = 1, - OTX2_IPSEC_FP_SA_AUTH_SHA1 = 2, - OTX2_IPSEC_FP_SA_AUTH_SHA2_224 = 3, - OTX2_IPSEC_FP_SA_AUTH_SHA2_256 = 4, - OTX2_IPSEC_FP_SA_AUTH_SHA2_384 = 5, - OTX2_IPSEC_FP_SA_AUTH_SHA2_512 = 6, - OTX2_IPSEC_FP_SA_AUTH_AES_GMAC = 7, - OTX2_IPSEC_FP_SA_AUTH_AES_XCBC_128 = 8, -}; - -enum { - OTX2_IPSEC_FP_SA_FRAG_POST = 0, - OTX2_IPSEC_FP_SA_FRAG_PRE = 1, -}; - -enum { - OTX2_IPSEC_FP_SA_ENCAP_NONE = 0, - OTX2_IPSEC_FP_SA_ENCAP_UDP = 1, -}; - -struct otx2_ipsec_fp_sa_ctl { - rte_be32_t spi : 32; - uint64_t exp_proto_inter_frag : 8; - uint64_t rsvd_42_40 : 3; - uint64_t esn_en : 1; - uint64_t rsvd_45_44 : 2; - uint64_t encap_type : 2; - uint64_t enc_type : 3; - uint64_t rsvd_48 : 1; - 
uint64_t auth_type : 4; - uint64_t valid : 1; - uint64_t direction : 1; - uint64_t outer_ip_ver : 1; - uint64_t inner_ip_ver : 1; - uint64_t ipsec_mode : 1; - uint64_t ipsec_proto : 1; - uint64_t aes_key_len : 2; -}; - -struct otx2_ipsec_fp_out_sa { - /* w0 */ - struct otx2_ipsec_fp_sa_ctl ctl; - - /* w1 */ - uint8_t nonce[4]; - uint16_t udp_src; - uint16_t udp_dst; - - /* w2 */ - uint32_t ip_src; - uint32_t ip_dst; - - /* w3-w6 */ - uint8_t cipher_key[32]; - - /* w7-w12 */ - uint8_t hmac_key[48]; -}; - -struct otx2_ipsec_replay { - rte_spinlock_t lock; - uint32_t winb; - uint32_t wint; - uint64_t base; /**< base of the anti-replay window */ - uint64_t window[17]; /**< anti-replay window */ -}; - -struct otx2_ipsec_fp_in_sa { - /* w0 */ - struct otx2_ipsec_fp_sa_ctl ctl; - - /* w1 */ - uint8_t nonce[4]; /* Only for AES-GCM */ - uint32_t unused; - - /* w2 */ - uint32_t esn_hi; - uint32_t esn_low; - - /* w3-w6 */ - uint8_t cipher_key[32]; - - /* w7-w12 */ - uint8_t hmac_key[48]; - - RTE_STD_C11 - union { - void *userdata; - uint64_t udata64; - }; - union { - struct otx2_ipsec_replay *replay; - uint64_t replay64; - }; - uint32_t replay_win_sz; - - uint32_t reserved1; -}; - -static inline int -ipsec_fp_xform_cipher_verify(struct rte_crypto_sym_xform *xform) -{ - if (xform->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) { - switch (xform->cipher.key.length) { - case 16: - case 24: - case 32: - break; - default: - return -ENOTSUP; - } - return 0; - } - - return -ENOTSUP; -} - -static inline int -ipsec_fp_xform_auth_verify(struct rte_crypto_sym_xform *xform) -{ - uint16_t keylen = xform->auth.key.length; - - if (xform->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC) { - if (keylen >= 20 && keylen <= 64) - return 0; - } - - return -ENOTSUP; -} - -static inline int -ipsec_fp_xform_aead_verify(struct rte_security_ipsec_xform *ipsec, - struct rte_crypto_sym_xform *xform) -{ - if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS && - xform->aead.op != RTE_CRYPTO_AEAD_OP_ENCRYPT) - 
return -EINVAL; - - if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS && - xform->aead.op != RTE_CRYPTO_AEAD_OP_DECRYPT) - return -EINVAL; - - if (xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) { - switch (xform->aead.key.length) { - case 16: - case 24: - case 32: - break; - default: - return -EINVAL; - } - return 0; - } - - return -ENOTSUP; -} - -static inline int -ipsec_fp_xform_verify(struct rte_security_ipsec_xform *ipsec, - struct rte_crypto_sym_xform *xform) -{ - struct rte_crypto_sym_xform *auth_xform, *cipher_xform; - int ret; - - if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) - return ipsec_fp_xform_aead_verify(ipsec, xform); - - if (xform->next == NULL) - return -EINVAL; - - if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) { - /* Ingress */ - if (xform->type != RTE_CRYPTO_SYM_XFORM_AUTH || - xform->next->type != RTE_CRYPTO_SYM_XFORM_CIPHER) - return -EINVAL; - auth_xform = xform; - cipher_xform = xform->next; - } else { - /* Egress */ - if (xform->type != RTE_CRYPTO_SYM_XFORM_CIPHER || - xform->next->type != RTE_CRYPTO_SYM_XFORM_AUTH) - return -EINVAL; - cipher_xform = xform; - auth_xform = xform->next; - } - - ret = ipsec_fp_xform_cipher_verify(cipher_xform); - if (ret) - return ret; - - ret = ipsec_fp_xform_auth_verify(auth_xform); - if (ret) - return ret; - - return 0; -} - -static inline int -ipsec_fp_sa_ctl_set(struct rte_security_ipsec_xform *ipsec, - struct rte_crypto_sym_xform *xform, - struct otx2_ipsec_fp_sa_ctl *ctl) -{ - struct rte_crypto_sym_xform *cipher_xform, *auth_xform; - int aes_key_len; - - if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) { - ctl->direction = OTX2_IPSEC_FP_SA_DIRECTION_OUTBOUND; - cipher_xform = xform; - auth_xform = xform->next; - } else if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) { - ctl->direction = OTX2_IPSEC_FP_SA_DIRECTION_INBOUND; - auth_xform = xform; - cipher_xform = xform->next; - } else { - return -EINVAL; - } - - if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) { 
- if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV4) - ctl->outer_ip_ver = OTX2_IPSEC_FP_SA_IP_VERSION_4; - else if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV6) - ctl->outer_ip_ver = OTX2_IPSEC_FP_SA_IP_VERSION_6; - else - return -EINVAL; - } - - ctl->inner_ip_ver = OTX2_IPSEC_FP_SA_IP_VERSION_4; - - if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT) - ctl->ipsec_mode = OTX2_IPSEC_FP_SA_MODE_TRANSPORT; - else if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) - ctl->ipsec_mode = OTX2_IPSEC_FP_SA_MODE_TUNNEL; - else - return -EINVAL; - - if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_AH) - ctl->ipsec_proto = OTX2_IPSEC_FP_SA_PROTOCOL_AH; - else if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_ESP) - ctl->ipsec_proto = OTX2_IPSEC_FP_SA_PROTOCOL_ESP; - else - return -EINVAL; - - if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) { - if (xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) { - ctl->enc_type = OTX2_IPSEC_FP_SA_ENC_AES_GCM; - aes_key_len = xform->aead.key.length; - } else { - return -ENOTSUP; - } - } else if (cipher_xform->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) { - ctl->enc_type = OTX2_IPSEC_FP_SA_ENC_AES_CBC; - aes_key_len = cipher_xform->cipher.key.length; - } else { - return -ENOTSUP; - } - - switch (aes_key_len) { - case 16: - ctl->aes_key_len = OTX2_IPSEC_FP_SA_AES_KEY_LEN_128; - break; - case 24: - ctl->aes_key_len = OTX2_IPSEC_FP_SA_AES_KEY_LEN_192; - break; - case 32: - ctl->aes_key_len = OTX2_IPSEC_FP_SA_AES_KEY_LEN_256; - break; - default: - return -EINVAL; - } - - if (xform->type != RTE_CRYPTO_SYM_XFORM_AEAD) { - switch (auth_xform->auth.algo) { - case RTE_CRYPTO_AUTH_NULL: - ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_NULL; - break; - case RTE_CRYPTO_AUTH_MD5_HMAC: - ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_MD5; - break; - case RTE_CRYPTO_AUTH_SHA1_HMAC: - ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_SHA1; - break; - case RTE_CRYPTO_AUTH_SHA224_HMAC: - ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_SHA2_224; - break; - case 
RTE_CRYPTO_AUTH_SHA256_HMAC: - ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_SHA2_256; - break; - case RTE_CRYPTO_AUTH_SHA384_HMAC: - ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_SHA2_384; - break; - case RTE_CRYPTO_AUTH_SHA512_HMAC: - ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_SHA2_512; - break; - case RTE_CRYPTO_AUTH_AES_GMAC: - ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_AES_GMAC; - break; - case RTE_CRYPTO_AUTH_AES_XCBC_MAC: - ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_AES_XCBC_128; - break; - default: - return -ENOTSUP; - } - } - - if (ipsec->options.esn == 1) - ctl->esn_en = 1; - - ctl->spi = rte_cpu_to_be_32(ipsec->spi); - - return 0; -} - -#endif /* __OTX2_IPSEC_FP_H__ */ diff --git a/drivers/crypto/octeontx2/otx2_ipsec_po.h b/drivers/crypto/octeontx2/otx2_ipsec_po.h deleted file mode 100644 index 695f552644..0000000000 --- a/drivers/crypto/octeontx2/otx2_ipsec_po.h +++ /dev/null @@ -1,447 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2020 Marvell International Ltd. - */ - -#ifndef __OTX2_IPSEC_PO_H__ -#define __OTX2_IPSEC_PO_H__ - -#include -#include -#include - -#define OTX2_IPSEC_PO_AES_GCM_INB_CTX_LEN 0x09 - -#define OTX2_IPSEC_PO_WRITE_IPSEC_OUTB 0x20 -#define OTX2_IPSEC_PO_WRITE_IPSEC_INB 0x21 -#define OTX2_IPSEC_PO_PROCESS_IPSEC_OUTB 0x23 -#define OTX2_IPSEC_PO_PROCESS_IPSEC_INB 0x24 - -#define OTX2_IPSEC_PO_INB_RPTR_HDR 0x8 - -enum otx2_ipsec_po_comp_e { - OTX2_IPSEC_PO_CC_SUCCESS = 0x00, - OTX2_IPSEC_PO_CC_AUTH_UNSUPPORTED = 0xB0, - OTX2_IPSEC_PO_CC_ENCRYPT_UNSUPPORTED = 0xB1, -}; - -enum { - OTX2_IPSEC_PO_SA_DIRECTION_INBOUND = 0, - OTX2_IPSEC_PO_SA_DIRECTION_OUTBOUND = 1, -}; - -enum { - OTX2_IPSEC_PO_SA_IP_VERSION_4 = 0, - OTX2_IPSEC_PO_SA_IP_VERSION_6 = 1, -}; - -enum { - OTX2_IPSEC_PO_SA_MODE_TRANSPORT = 0, - OTX2_IPSEC_PO_SA_MODE_TUNNEL = 1, -}; - -enum { - OTX2_IPSEC_PO_SA_PROTOCOL_AH = 0, - OTX2_IPSEC_PO_SA_PROTOCOL_ESP = 1, -}; - -enum { - OTX2_IPSEC_PO_SA_AES_KEY_LEN_128 = 1, - OTX2_IPSEC_PO_SA_AES_KEY_LEN_192 = 2, - 
OTX2_IPSEC_PO_SA_AES_KEY_LEN_256 = 3, -}; - -enum { - OTX2_IPSEC_PO_SA_ENC_NULL = 0, - OTX2_IPSEC_PO_SA_ENC_DES_CBC = 1, - OTX2_IPSEC_PO_SA_ENC_3DES_CBC = 2, - OTX2_IPSEC_PO_SA_ENC_AES_CBC = 3, - OTX2_IPSEC_PO_SA_ENC_AES_CTR = 4, - OTX2_IPSEC_PO_SA_ENC_AES_GCM = 5, - OTX2_IPSEC_PO_SA_ENC_AES_CCM = 6, -}; - -enum { - OTX2_IPSEC_PO_SA_AUTH_NULL = 0, - OTX2_IPSEC_PO_SA_AUTH_MD5 = 1, - OTX2_IPSEC_PO_SA_AUTH_SHA1 = 2, - OTX2_IPSEC_PO_SA_AUTH_SHA2_224 = 3, - OTX2_IPSEC_PO_SA_AUTH_SHA2_256 = 4, - OTX2_IPSEC_PO_SA_AUTH_SHA2_384 = 5, - OTX2_IPSEC_PO_SA_AUTH_SHA2_512 = 6, - OTX2_IPSEC_PO_SA_AUTH_AES_GMAC = 7, - OTX2_IPSEC_PO_SA_AUTH_AES_XCBC_128 = 8, -}; - -enum { - OTX2_IPSEC_PO_SA_FRAG_POST = 0, - OTX2_IPSEC_PO_SA_FRAG_PRE = 1, -}; - -enum { - OTX2_IPSEC_PO_SA_ENCAP_NONE = 0, - OTX2_IPSEC_PO_SA_ENCAP_UDP = 1, -}; - -struct otx2_ipsec_po_out_hdr { - uint32_t ip_id; - uint32_t seq; - uint8_t iv[16]; -}; - -union otx2_ipsec_po_bit_perfect_iv { - uint8_t aes_iv[16]; - uint8_t des_iv[8]; - struct { - uint8_t nonce[4]; - uint8_t iv[8]; - uint8_t counter[4]; - } gcm; -}; - -struct otx2_ipsec_po_traffic_selector { - rte_be16_t src_port[2]; - rte_be16_t dst_port[2]; - RTE_STD_C11 - union { - struct { - rte_be32_t src_addr[2]; - rte_be32_t dst_addr[2]; - } ipv4; - struct { - uint8_t src_addr[32]; - uint8_t dst_addr[32]; - } ipv6; - }; -}; - -struct otx2_ipsec_po_sa_ctl { - rte_be32_t spi : 32; - uint64_t exp_proto_inter_frag : 8; - uint64_t rsvd_42_40 : 3; - uint64_t esn_en : 1; - uint64_t rsvd_45_44 : 2; - uint64_t encap_type : 2; - uint64_t enc_type : 3; - uint64_t rsvd_48 : 1; - uint64_t auth_type : 4; - uint64_t valid : 1; - uint64_t direction : 1; - uint64_t outer_ip_ver : 1; - uint64_t inner_ip_ver : 1; - uint64_t ipsec_mode : 1; - uint64_t ipsec_proto : 1; - uint64_t aes_key_len : 2; -}; - -struct otx2_ipsec_po_in_sa { - /* w0 */ - struct otx2_ipsec_po_sa_ctl ctl; - - /* w1-w4 */ - uint8_t cipher_key[32]; - - /* w5-w6 */ - union otx2_ipsec_po_bit_perfect_iv iv; - - /* w7 */ - 
uint32_t esn_hi; - uint32_t esn_low; - - /* w8 */ - uint8_t udp_encap[8]; - - /* w9-w33 */ - union { - struct { - uint8_t hmac_key[48]; - struct otx2_ipsec_po_traffic_selector selector; - } aes_gcm; - struct { - uint8_t hmac_key[64]; - uint8_t hmac_iv[64]; - struct otx2_ipsec_po_traffic_selector selector; - } sha2; - }; - union { - struct otx2_ipsec_replay *replay; - uint64_t replay64; - }; - uint32_t replay_win_sz; -}; - -struct otx2_ipsec_po_ip_template { - RTE_STD_C11 - union { - struct { - struct rte_ipv4_hdr ipv4_hdr; - uint16_t udp_src; - uint16_t udp_dst; - } ip4; - struct { - struct rte_ipv6_hdr ipv6_hdr; - uint16_t udp_src; - uint16_t udp_dst; - } ip6; - }; -}; - -struct otx2_ipsec_po_out_sa { - /* w0 */ - struct otx2_ipsec_po_sa_ctl ctl; - - /* w1-w4 */ - uint8_t cipher_key[32]; - - /* w5-w6 */ - union otx2_ipsec_po_bit_perfect_iv iv; - - /* w7 */ - uint32_t esn_hi; - uint32_t esn_low; - - /* w8-w55 */ - union { - struct { - struct otx2_ipsec_po_ip_template template; - } aes_gcm; - struct { - uint8_t hmac_key[24]; - uint8_t unused[24]; - struct otx2_ipsec_po_ip_template template; - } sha1; - struct { - uint8_t hmac_key[64]; - uint8_t hmac_iv[64]; - struct otx2_ipsec_po_ip_template template; - } sha2; - }; -}; - -static inline int -ipsec_po_xform_cipher_verify(struct rte_crypto_sym_xform *xform) -{ - if (xform->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) { - switch (xform->cipher.key.length) { - case 16: - case 24: - case 32: - break; - default: - return -ENOTSUP; - } - return 0; - } - - return -ENOTSUP; -} - -static inline int -ipsec_po_xform_auth_verify(struct rte_crypto_sym_xform *xform) -{ - uint16_t keylen = xform->auth.key.length; - - if (xform->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC) { - if (keylen >= 20 && keylen <= 64) - return 0; - } else if (xform->auth.algo == RTE_CRYPTO_AUTH_SHA256_HMAC) { - if (keylen >= 32 && keylen <= 64) - return 0; - } - - return -ENOTSUP; -} - -static inline int -ipsec_po_xform_aead_verify(struct rte_security_ipsec_xform 
*ipsec, - struct rte_crypto_sym_xform *xform) -{ - if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS && - xform->aead.op != RTE_CRYPTO_AEAD_OP_ENCRYPT) - return -EINVAL; - - if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS && - xform->aead.op != RTE_CRYPTO_AEAD_OP_DECRYPT) - return -EINVAL; - - if (xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) { - switch (xform->aead.key.length) { - case 16: - case 24: - case 32: - break; - default: - return -EINVAL; - } - return 0; - } - - return -ENOTSUP; -} - -static inline int -ipsec_po_xform_verify(struct rte_security_ipsec_xform *ipsec, - struct rte_crypto_sym_xform *xform) -{ - struct rte_crypto_sym_xform *auth_xform, *cipher_xform; - int ret; - - if (ipsec->life.bytes_hard_limit != 0 || - ipsec->life.bytes_soft_limit != 0 || - ipsec->life.packets_hard_limit != 0 || - ipsec->life.packets_soft_limit != 0) - return -ENOTSUP; - - if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) - return ipsec_po_xform_aead_verify(ipsec, xform); - - if (xform->next == NULL) - return -EINVAL; - - if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) { - /* Ingress */ - if (xform->type != RTE_CRYPTO_SYM_XFORM_AUTH || - xform->next->type != RTE_CRYPTO_SYM_XFORM_CIPHER) - return -EINVAL; - auth_xform = xform; - cipher_xform = xform->next; - } else { - /* Egress */ - if (xform->type != RTE_CRYPTO_SYM_XFORM_CIPHER || - xform->next->type != RTE_CRYPTO_SYM_XFORM_AUTH) - return -EINVAL; - cipher_xform = xform; - auth_xform = xform->next; - } - - ret = ipsec_po_xform_cipher_verify(cipher_xform); - if (ret) - return ret; - - ret = ipsec_po_xform_auth_verify(auth_xform); - if (ret) - return ret; - - return 0; -} - -static inline int -ipsec_po_sa_ctl_set(struct rte_security_ipsec_xform *ipsec, - struct rte_crypto_sym_xform *xform, - struct otx2_ipsec_po_sa_ctl *ctl) -{ - struct rte_crypto_sym_xform *cipher_xform, *auth_xform; - int aes_key_len; - - if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) { - ctl->direction = 
OTX2_IPSEC_PO_SA_DIRECTION_OUTBOUND; - cipher_xform = xform; - auth_xform = xform->next; - } else if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) { - ctl->direction = OTX2_IPSEC_PO_SA_DIRECTION_INBOUND; - auth_xform = xform; - cipher_xform = xform->next; - } else { - return -EINVAL; - } - - if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) { - if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV4) - ctl->outer_ip_ver = OTX2_IPSEC_PO_SA_IP_VERSION_4; - else if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV6) - ctl->outer_ip_ver = OTX2_IPSEC_PO_SA_IP_VERSION_6; - else - return -EINVAL; - } - - ctl->inner_ip_ver = ctl->outer_ip_ver; - - if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT) - ctl->ipsec_mode = OTX2_IPSEC_PO_SA_MODE_TRANSPORT; - else if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) - ctl->ipsec_mode = OTX2_IPSEC_PO_SA_MODE_TUNNEL; - else - return -EINVAL; - - if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_AH) - ctl->ipsec_proto = OTX2_IPSEC_PO_SA_PROTOCOL_AH; - else if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_ESP) - ctl->ipsec_proto = OTX2_IPSEC_PO_SA_PROTOCOL_ESP; - else - return -EINVAL; - - if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) { - if (xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) { - ctl->enc_type = OTX2_IPSEC_PO_SA_ENC_AES_GCM; - aes_key_len = xform->aead.key.length; - } else { - return -ENOTSUP; - } - } else if (cipher_xform->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) { - ctl->enc_type = OTX2_IPSEC_PO_SA_ENC_AES_CBC; - aes_key_len = cipher_xform->cipher.key.length; - } else { - return -ENOTSUP; - } - - - switch (aes_key_len) { - case 16: - ctl->aes_key_len = OTX2_IPSEC_PO_SA_AES_KEY_LEN_128; - break; - case 24: - ctl->aes_key_len = OTX2_IPSEC_PO_SA_AES_KEY_LEN_192; - break; - case 32: - ctl->aes_key_len = OTX2_IPSEC_PO_SA_AES_KEY_LEN_256; - break; - default: - return -EINVAL; - } - - if (xform->type != RTE_CRYPTO_SYM_XFORM_AEAD) { - switch (auth_xform->auth.algo) { - case RTE_CRYPTO_AUTH_NULL: - 
ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_NULL; - break; - case RTE_CRYPTO_AUTH_MD5_HMAC: - ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_MD5; - break; - case RTE_CRYPTO_AUTH_SHA1_HMAC: - ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_SHA1; - break; - case RTE_CRYPTO_AUTH_SHA224_HMAC: - ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_SHA2_224; - break; - case RTE_CRYPTO_AUTH_SHA256_HMAC: - ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_SHA2_256; - break; - case RTE_CRYPTO_AUTH_SHA384_HMAC: - ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_SHA2_384; - break; - case RTE_CRYPTO_AUTH_SHA512_HMAC: - ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_SHA2_512; - break; - case RTE_CRYPTO_AUTH_AES_GMAC: - ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_AES_GMAC; - break; - case RTE_CRYPTO_AUTH_AES_XCBC_MAC: - ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_AES_XCBC_128; - break; - default: - return -ENOTSUP; - } - } - - if (ipsec->options.esn) - ctl->esn_en = 1; - - if (ipsec->options.udp_encap == 1) - ctl->encap_type = OTX2_IPSEC_PO_SA_ENCAP_UDP; - - ctl->spi = rte_cpu_to_be_32(ipsec->spi); - ctl->valid = 1; - - return 0; -} - -#endif /* __OTX2_IPSEC_PO_H__ */ diff --git a/drivers/crypto/octeontx2/otx2_ipsec_po_ops.h b/drivers/crypto/octeontx2/otx2_ipsec_po_ops.h deleted file mode 100644 index c3abf02187..0000000000 --- a/drivers/crypto/octeontx2/otx2_ipsec_po_ops.h +++ /dev/null @@ -1,167 +0,0 @@ - -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2019 Marvell International Ltd. 
- */ - -#ifndef __OTX2_IPSEC_PO_OPS_H__ -#define __OTX2_IPSEC_PO_OPS_H__ - -#include -#include - -#include "otx2_cryptodev.h" -#include "otx2_security.h" - -static __rte_always_inline int32_t -otx2_ipsec_po_out_rlen_get(struct otx2_sec_session_ipsec_lp *sess, - uint32_t plen) -{ - uint32_t enc_payload_len; - - enc_payload_len = RTE_ALIGN_CEIL(plen + sess->roundup_len, - sess->roundup_byte); - - return sess->partial_len + enc_payload_len; -} - -static __rte_always_inline struct cpt_request_info * -alloc_request_struct(char *maddr, void *cop, int mdata_len) -{ - struct cpt_request_info *req; - struct cpt_meta_info *meta; - uint8_t *resp_addr; - uintptr_t *op; - - meta = (void *)RTE_PTR_ALIGN((uint8_t *)maddr, 16); - - op = (uintptr_t *)meta->deq_op_info; - req = &meta->cpt_req; - resp_addr = (uint8_t *)&meta->cpt_res; - - req->completion_addr = (uint64_t *)((uint8_t *)resp_addr); - *req->completion_addr = COMPLETION_CODE_INIT; - req->comp_baddr = rte_mem_virt2iova(resp_addr); - req->op = op; - - op[0] = (uintptr_t)((uint64_t)meta | 1ull); - op[1] = (uintptr_t)cop; - op[2] = (uintptr_t)req; - op[3] = mdata_len; - - return req; -} - -static __rte_always_inline int -process_outb_sa(struct rte_crypto_op *cop, - struct otx2_sec_session_ipsec_lp *sess, - struct cpt_qp_meta_info *m_info, void **prep_req) -{ - uint32_t dlen, rlen, extend_head, extend_tail; - struct rte_crypto_sym_op *sym_op = cop->sym; - struct rte_mbuf *m_src = sym_op->m_src; - struct cpt_request_info *req = NULL; - struct otx2_ipsec_po_out_hdr *hdr; - struct otx2_ipsec_po_out_sa *sa; - int hdr_len, mdata_len, ret = 0; - vq_cmd_word0_t word0; - char *mdata, *data; - - sa = &sess->out_sa; - hdr_len = sizeof(*hdr); - - dlen = rte_pktmbuf_pkt_len(m_src) + hdr_len; - rlen = otx2_ipsec_po_out_rlen_get(sess, dlen - hdr_len); - - extend_head = hdr_len + RTE_ETHER_HDR_LEN; - extend_tail = rlen - dlen; - mdata_len = m_info->lb_mlen + 8; - - mdata = rte_pktmbuf_append(m_src, extend_tail + mdata_len); - if 
(unlikely(mdata == NULL)) { - otx2_err("Not enough tail room\n"); - ret = -ENOMEM; - goto exit; - } - - mdata += extend_tail; /* mdata follows encrypted data */ - req = alloc_request_struct(mdata, (void *)cop, mdata_len); - - data = rte_pktmbuf_prepend(m_src, extend_head); - if (unlikely(data == NULL)) { - otx2_err("Not enough head room\n"); - ret = -ENOMEM; - goto exit; - } - - /* - * Move the Ethernet header, to insert otx2_ipsec_po_out_hdr prior - * to the IP header - */ - memcpy(data, data + hdr_len, RTE_ETHER_HDR_LEN); - - hdr = (struct otx2_ipsec_po_out_hdr *)rte_pktmbuf_adj(m_src, - RTE_ETHER_HDR_LEN); - - memcpy(&hdr->iv[0], rte_crypto_op_ctod_offset(cop, uint8_t *, - sess->iv_offset), sess->iv_length); - - /* Prepare CPT instruction */ - word0.u64 = sess->ucmd_w0; - word0.s.dlen = dlen; - - req->ist.ei0 = word0.u64; - req->ist.ei1 = rte_pktmbuf_iova(m_src); - req->ist.ei2 = req->ist.ei1; - - sa->esn_hi = sess->seq_hi; - - hdr->seq = rte_cpu_to_be_32(sess->seq_lo); - hdr->ip_id = rte_cpu_to_be_32(sess->ip_id); - - sess->ip_id++; - sess->esn++; - -exit: - *prep_req = req; - - return ret; -} - -static __rte_always_inline int -process_inb_sa(struct rte_crypto_op *cop, - struct otx2_sec_session_ipsec_lp *sess, - struct cpt_qp_meta_info *m_info, void **prep_req) -{ - struct rte_crypto_sym_op *sym_op = cop->sym; - struct rte_mbuf *m_src = sym_op->m_src; - struct cpt_request_info *req = NULL; - int mdata_len, ret = 0; - vq_cmd_word0_t word0; - uint32_t dlen; - char *mdata; - - dlen = rte_pktmbuf_pkt_len(m_src); - mdata_len = m_info->lb_mlen + 8; - - mdata = rte_pktmbuf_append(m_src, mdata_len); - if (unlikely(mdata == NULL)) { - otx2_err("Not enough tail room\n"); - ret = -ENOMEM; - goto exit; - } - - req = alloc_request_struct(mdata, (void *)cop, mdata_len); - - /* Prepare CPT instruction */ - word0.u64 = sess->ucmd_w0; - word0.s.dlen = dlen; - - req->ist.ei0 = word0.u64; - req->ist.ei1 = rte_pktmbuf_iova(m_src); - req->ist.ei2 = req->ist.ei1; - -exit: - 
*prep_req = req; - return ret; -} -#endif /* __OTX2_IPSEC_PO_OPS_H__ */ diff --git a/drivers/crypto/octeontx2/otx2_security.h b/drivers/crypto/octeontx2/otx2_security.h deleted file mode 100644 index 29c8fc351b..0000000000 --- a/drivers/crypto/octeontx2/otx2_security.h +++ /dev/null @@ -1,37 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright (C) 2020 Marvell International Ltd. - */ - -#ifndef __OTX2_SECURITY_H__ -#define __OTX2_SECURITY_H__ - -#include - -#include "otx2_cryptodev_sec.h" -#include "otx2_ethdev_sec.h" - -#define OTX2_SEC_AH_HDR_LEN 12 -#define OTX2_SEC_AES_GCM_IV_LEN 8 -#define OTX2_SEC_AES_GCM_MAC_LEN 16 -#define OTX2_SEC_AES_CBC_IV_LEN 16 -#define OTX2_SEC_SHA1_HMAC_LEN 12 -#define OTX2_SEC_SHA2_HMAC_LEN 16 - -#define OTX2_SEC_AES_GCM_ROUNDUP_BYTE_LEN 4 -#define OTX2_SEC_AES_CBC_ROUNDUP_BYTE_LEN 16 - -struct otx2_sec_session_ipsec { - union { - struct otx2_sec_session_ipsec_ip ip; - struct otx2_sec_session_ipsec_lp lp; - }; - enum rte_security_ipsec_sa_direction dir; -}; - -struct otx2_sec_session { - struct otx2_sec_session_ipsec ipsec; - void *userdata; - /**< Userdata registered by the application */ -} __rte_cache_aligned; - -#endif /* __OTX2_SECURITY_H__ */ diff --git a/drivers/crypto/octeontx2/version.map b/drivers/crypto/octeontx2/version.map deleted file mode 100644 index d36663132a..0000000000 --- a/drivers/crypto/octeontx2/version.map +++ /dev/null @@ -1,13 +0,0 @@ -DPDK_22 { - local: *; -}; - -INTERNAL { - global: - - otx2_cryptodev_driver_id; - otx2_cpt_af_reg_read; - otx2_cpt_af_reg_write; - - local: *; -}; diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c index b68ce6c0a4..8db9775d7b 100644 --- a/drivers/event/cnxk/cn9k_eventdev.c +++ b/drivers/event/cnxk/cn9k_eventdev.c @@ -1127,6 +1127,16 @@ cn9k_sso_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev) } static const struct rte_pci_id cn9k_pci_sso_map[] = { + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KA, 
PCI_DEVID_CNXK_RVU_SSO_TIM_PF), + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KB, PCI_DEVID_CNXK_RVU_SSO_TIM_PF), + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KC, PCI_DEVID_CNXK_RVU_SSO_TIM_PF), + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KD, PCI_DEVID_CNXK_RVU_SSO_TIM_PF), + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KE, PCI_DEVID_CNXK_RVU_SSO_TIM_PF), + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KA, PCI_DEVID_CNXK_RVU_SSO_TIM_VF), + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KB, PCI_DEVID_CNXK_RVU_SSO_TIM_VF), + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KC, PCI_DEVID_CNXK_RVU_SSO_TIM_VF), + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KD, PCI_DEVID_CNXK_RVU_SSO_TIM_VF), + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KE, PCI_DEVID_CNXK_RVU_SSO_TIM_VF), { .vendor_id = 0, }, diff --git a/drivers/event/meson.build b/drivers/event/meson.build index 63d6b410b2..d6706b57f7 100644 --- a/drivers/event/meson.build +++ b/drivers/event/meson.build @@ -11,7 +11,6 @@ drivers = [ 'dpaa', 'dpaa2', 'dsw', - 'octeontx2', 'opdl', 'skeleton', 'sw', diff --git a/drivers/event/octeontx2/meson.build b/drivers/event/octeontx2/meson.build deleted file mode 100644 index ce360af5f8..0000000000 --- a/drivers/event/octeontx2/meson.build +++ /dev/null @@ -1,26 +0,0 @@ -# SPDX-License-Identifier: BSD-3-Clause -# Copyright(C) 2019 Marvell International Ltd. 
-# - -if not is_linux or not dpdk_conf.get('RTE_ARCH_64') - build = false - reason = 'only supported on 64-bit Linux' - subdir_done() -endif - -sources = files( - 'otx2_worker.c', - 'otx2_worker_dual.c', - 'otx2_evdev.c', - 'otx2_evdev_adptr.c', - 'otx2_evdev_crypto_adptr.c', - 'otx2_evdev_irq.c', - 'otx2_evdev_selftest.c', - 'otx2_tim_evdev.c', - 'otx2_tim_worker.c', -) - -deps += ['bus_pci', 'common_octeontx2', 'crypto_octeontx2', 'mempool_octeontx2', 'net_octeontx2'] - -includes += include_directories('../../crypto/octeontx2') -includes += include_directories('../../common/cpt') diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c deleted file mode 100644 index ccf28b678b..0000000000 --- a/drivers/event/octeontx2/otx2_evdev.c +++ /dev/null @@ -1,1900 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2019 Marvell International Ltd. - */ - -#include - -#include -#include -#include -#include -#include -#include -#include - -#include "otx2_evdev.h" -#include "otx2_evdev_crypto_adptr_tx.h" -#include "otx2_evdev_stats.h" -#include "otx2_irq.h" -#include "otx2_tim_evdev.h" - -static inline int -sso_get_msix_offsets(const struct rte_eventdev *event_dev) -{ - struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev); - uint8_t nb_ports = dev->nb_event_ports * (dev->dual_ws ? 
2 : 1); - struct otx2_mbox *mbox = dev->mbox; - struct msix_offset_rsp *msix_rsp; - int i, rc; - - /* Get SSO and SSOW MSIX vector offsets */ - otx2_mbox_alloc_msg_msix_offset(mbox); - rc = otx2_mbox_process_msg(mbox, (void *)&msix_rsp); - - for (i = 0; i < nb_ports; i++) - dev->ssow_msixoff[i] = msix_rsp->ssow_msixoff[i]; - - for (i = 0; i < dev->nb_event_queues; i++) - dev->sso_msixoff[i] = msix_rsp->sso_msixoff[i]; - - return rc; -} - -void -sso_fastpath_fns_set(struct rte_eventdev *event_dev) -{ - struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev); - /* Single WS modes */ - const event_dequeue_t ssogws_deq[2][2][2][2][2][2][2] = { -#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ - [f6][f5][f4][f3][f2][f1][f0] = otx2_ssogws_deq_ ##name, -SSO_RX_ADPTR_ENQ_FASTPATH_FUNC -#undef R - }; - - const event_dequeue_burst_t ssogws_deq_burst[2][2][2][2][2][2][2] = { -#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ - [f6][f5][f4][f3][f2][f1][f0] = otx2_ssogws_deq_burst_ ##name, -SSO_RX_ADPTR_ENQ_FASTPATH_FUNC -#undef R - }; - - const event_dequeue_t ssogws_deq_timeout[2][2][2][2][2][2][2] = { -#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ - [f6][f5][f4][f3][f2][f1][f0] = otx2_ssogws_deq_timeout_ ##name, -SSO_RX_ADPTR_ENQ_FASTPATH_FUNC -#undef R - }; - - const event_dequeue_burst_t - ssogws_deq_timeout_burst[2][2][2][2][2][2][2] = { -#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ - [f6][f5][f4][f3][f2][f1][f0] = \ - otx2_ssogws_deq_timeout_burst_ ##name, -SSO_RX_ADPTR_ENQ_FASTPATH_FUNC -#undef R - }; - - const event_dequeue_t ssogws_deq_seg[2][2][2][2][2][2][2] = { -#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ - [f6][f5][f4][f3][f2][f1][f0] = otx2_ssogws_deq_seg_ ##name, -SSO_RX_ADPTR_ENQ_FASTPATH_FUNC -#undef R - }; - - const event_dequeue_burst_t - ssogws_deq_seg_burst[2][2][2][2][2][2][2] = { -#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ - [f6][f5][f4][f3][f2][f1][f0] = \ - otx2_ssogws_deq_seg_burst_ ##name, 
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC -#undef R - }; - - const event_dequeue_t ssogws_deq_seg_timeout[2][2][2][2][2][2][2] = { -#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ - [f6][f5][f4][f3][f2][f1][f0] = \ - otx2_ssogws_deq_seg_timeout_ ##name, -SSO_RX_ADPTR_ENQ_FASTPATH_FUNC -#undef R - }; - - const event_dequeue_burst_t - ssogws_deq_seg_timeout_burst[2][2][2][2][2][2][2] = { -#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ - [f6][f5][f4][f3][f2][f1][f0] = \ - otx2_ssogws_deq_seg_timeout_burst_ ##name, -SSO_RX_ADPTR_ENQ_FASTPATH_FUNC -#undef R - }; - - - /* Dual WS modes */ - const event_dequeue_t ssogws_dual_deq[2][2][2][2][2][2][2] = { -#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ - [f6][f5][f4][f3][f2][f1][f0] = otx2_ssogws_dual_deq_ ##name, -SSO_RX_ADPTR_ENQ_FASTPATH_FUNC -#undef R - }; - - const event_dequeue_burst_t - ssogws_dual_deq_burst[2][2][2][2][2][2][2] = { -#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ - [f6][f5][f4][f3][f2][f1][f0] = \ - otx2_ssogws_dual_deq_burst_ ##name, -SSO_RX_ADPTR_ENQ_FASTPATH_FUNC -#undef R - }; - - const event_dequeue_t ssogws_dual_deq_timeout[2][2][2][2][2][2][2] = { -#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ - [f6][f5][f4][f3][f2][f1][f0] = \ - otx2_ssogws_dual_deq_timeout_ ##name, -SSO_RX_ADPTR_ENQ_FASTPATH_FUNC -#undef R - }; - - const event_dequeue_burst_t - ssogws_dual_deq_timeout_burst[2][2][2][2][2][2][2] = { -#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ - [f6][f5][f4][f3][f2][f1][f0] = \ - otx2_ssogws_dual_deq_timeout_burst_ ##name, -SSO_RX_ADPTR_ENQ_FASTPATH_FUNC -#undef R - }; - - const event_dequeue_t ssogws_dual_deq_seg[2][2][2][2][2][2][2] = { -#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ - [f6][f5][f4][f3][f2][f1][f0] = otx2_ssogws_dual_deq_seg_ ##name, -SSO_RX_ADPTR_ENQ_FASTPATH_FUNC -#undef R - }; - - const event_dequeue_burst_t - ssogws_dual_deq_seg_burst[2][2][2][2][2][2][2] = { -#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ - 
[f6][f5][f4][f3][f2][f1][f0] = \ - otx2_ssogws_dual_deq_seg_burst_ ##name, -SSO_RX_ADPTR_ENQ_FASTPATH_FUNC -#undef R - }; - - const event_dequeue_t - ssogws_dual_deq_seg_timeout[2][2][2][2][2][2][2] = { -#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ - [f6][f5][f4][f3][f2][f1][f0] = \ - otx2_ssogws_dual_deq_seg_timeout_ ##name, -SSO_RX_ADPTR_ENQ_FASTPATH_FUNC -#undef R - }; - - const event_dequeue_burst_t - ssogws_dual_deq_seg_timeout_burst[2][2][2][2][2][2][2] = { -#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ - [f6][f5][f4][f3][f2][f1][f0] = \ - otx2_ssogws_dual_deq_seg_timeout_burst_ ##name, -SSO_RX_ADPTR_ENQ_FASTPATH_FUNC -#undef R - }; - - /* Tx modes */ - const event_tx_adapter_enqueue_t - ssogws_tx_adptr_enq[2][2][2][2][2][2][2] = { -#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \ - [f6][f5][f4][f3][f2][f1][f0] = \ - otx2_ssogws_tx_adptr_enq_ ## name, - SSO_TX_ADPTR_ENQ_FASTPATH_FUNC -#undef T - }; - - const event_tx_adapter_enqueue_t - ssogws_tx_adptr_enq_seg[2][2][2][2][2][2][2] = { -#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \ - [f6][f5][f4][f3][f2][f1][f0] = \ - otx2_ssogws_tx_adptr_enq_seg_ ## name, - SSO_TX_ADPTR_ENQ_FASTPATH_FUNC -#undef T - }; - - const event_tx_adapter_enqueue_t - ssogws_dual_tx_adptr_enq[2][2][2][2][2][2][2] = { -#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \ - [f6][f5][f4][f3][f2][f1][f0] = \ - otx2_ssogws_dual_tx_adptr_enq_ ## name, - SSO_TX_ADPTR_ENQ_FASTPATH_FUNC -#undef T - }; - - const event_tx_adapter_enqueue_t - ssogws_dual_tx_adptr_enq_seg[2][2][2][2][2][2][2] = { -#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \ - [f6][f5][f4][f3][f2][f1][f0] = \ - otx2_ssogws_dual_tx_adptr_enq_seg_ ## name, - SSO_TX_ADPTR_ENQ_FASTPATH_FUNC -#undef T - }; - - event_dev->enqueue = otx2_ssogws_enq; - event_dev->enqueue_burst = otx2_ssogws_enq_burst; - event_dev->enqueue_new_burst = otx2_ssogws_enq_new_burst; - event_dev->enqueue_forward_burst = otx2_ssogws_enq_fwd_burst; - if 
(dev->rx_offloads & NIX_RX_MULTI_SEG_F) { - event_dev->dequeue = ssogws_deq_seg - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; - event_dev->dequeue_burst = ssogws_deq_seg_burst - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; - if (dev->is_timeout_deq) { - event_dev->dequeue = ssogws_deq_seg_timeout - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; - event_dev->dequeue_burst = - ssogws_deq_seg_timeout_burst - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; - } - } else { - event_dev->dequeue = ssogws_deq - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)] - 
-            [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
-            [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
-            [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
-        event_dev->dequeue_burst = ssogws_deq_burst
-            [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]
-            [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
-            [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
-            [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
-            [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
-            [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
-            [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
-        if (dev->is_timeout_deq) {
-            event_dev->dequeue = ssogws_deq_timeout
-                [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]
-                [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
-                [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
-                [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
-                [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
-                [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
-                [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
-            event_dev->dequeue_burst =
-                ssogws_deq_timeout_burst
-                [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]
-                [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
-                [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
-                [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
-                [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
-                [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
-                [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
-        }
-    }
-
-    if (dev->tx_offloads & NIX_TX_MULTI_SEG_F) {
-        /* [SEC] [TSMP] [MBUF_NOFF] [VLAN] [OL3_L4_CSUM] [L3_L4_CSUM] */
-        event_dev->txa_enqueue = ssogws_tx_adptr_enq_seg
-            [!!(dev->tx_offloads & NIX_TX_OFFLOAD_SECURITY_F)]
-            [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]
-            [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)]
-            [!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)]
-            [!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)]
-            [!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
-            [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
-    } else {
-        event_dev->txa_enqueue = ssogws_tx_adptr_enq
-            [!!(dev->tx_offloads & NIX_TX_OFFLOAD_SECURITY_F)]
-            [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]
-            [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)]
-            [!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)]
-            [!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)]
-            [!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
-            [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
-    }
-    event_dev->ca_enqueue = otx2_ssogws_ca_enq;
-
-    if (dev->dual_ws) {
-        event_dev->enqueue = otx2_ssogws_dual_enq;
-        event_dev->enqueue_burst = otx2_ssogws_dual_enq_burst;
-        event_dev->enqueue_new_burst =
-            otx2_ssogws_dual_enq_new_burst;
-        event_dev->enqueue_forward_burst =
-            otx2_ssogws_dual_enq_fwd_burst;
-
-        if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) {
-            event_dev->dequeue = ssogws_dual_deq_seg
-                [!!(dev->rx_offloads &
-                    NIX_RX_OFFLOAD_SECURITY_F)]
-                [!!(dev->rx_offloads &
-                    NIX_RX_OFFLOAD_TSTAMP_F)]
-                [!!(dev->rx_offloads &
-                    NIX_RX_OFFLOAD_MARK_UPDATE_F)]
-                [!!(dev->rx_offloads &
-                    NIX_RX_OFFLOAD_VLAN_STRIP_F)]
-                [!!(dev->rx_offloads &
-                    NIX_RX_OFFLOAD_CHECKSUM_F)]
-                [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
-                [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
-            event_dev->dequeue_burst = ssogws_dual_deq_seg_burst
-                [!!(dev->rx_offloads &
-                    NIX_RX_OFFLOAD_SECURITY_F)]
-                [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
-                [!!(dev->rx_offloads &
-                    NIX_RX_OFFLOAD_MARK_UPDATE_F)]
-                [!!(dev->rx_offloads &
-                    NIX_RX_OFFLOAD_VLAN_STRIP_F)]
-                [!!(dev->rx_offloads &
-                    NIX_RX_OFFLOAD_CHECKSUM_F)]
-                [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
-                [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
-            if (dev->is_timeout_deq) {
-                event_dev->dequeue =
-                    ssogws_dual_deq_seg_timeout
-                    [!!(dev->rx_offloads &
-                        NIX_RX_OFFLOAD_SECURITY_F)]
-                    [!!(dev->rx_offloads &
-                        NIX_RX_OFFLOAD_TSTAMP_F)]
-                    [!!(dev->rx_offloads &
-                        NIX_RX_OFFLOAD_MARK_UPDATE_F)]
-                    [!!(dev->rx_offloads &
-                        NIX_RX_OFFLOAD_VLAN_STRIP_F)]
-                    [!!(dev->rx_offloads &
-                        NIX_RX_OFFLOAD_CHECKSUM_F)]
-                    [!!(dev->rx_offloads &
-                        NIX_RX_OFFLOAD_PTYPE_F)]
-                    [!!(dev->rx_offloads &
-                        NIX_RX_OFFLOAD_RSS_F)];
-                event_dev->dequeue_burst =
-                    ssogws_dual_deq_seg_timeout_burst
-                    [!!(dev->rx_offloads &
-                        NIX_RX_OFFLOAD_SECURITY_F)]
-                    [!!(dev->rx_offloads &
-                        NIX_RX_OFFLOAD_TSTAMP_F)]
-                    [!!(dev->rx_offloads &
-                        NIX_RX_OFFLOAD_MARK_UPDATE_F)]
-                    [!!(dev->rx_offloads &
-                        NIX_RX_OFFLOAD_VLAN_STRIP_F)]
-                    [!!(dev->rx_offloads &
-                        NIX_RX_OFFLOAD_CHECKSUM_F)]
-                    [!!(dev->rx_offloads &
-                        NIX_RX_OFFLOAD_PTYPE_F)]
-                    [!!(dev->rx_offloads &
-                        NIX_RX_OFFLOAD_RSS_F)];
-            }
-        } else {
-            event_dev->dequeue = ssogws_dual_deq
-                [!!(dev->rx_offloads &
-                    NIX_RX_OFFLOAD_SECURITY_F)]
-                [!!(dev->rx_offloads &
-                    NIX_RX_OFFLOAD_TSTAMP_F)]
-                [!!(dev->rx_offloads &
-                    NIX_RX_OFFLOAD_MARK_UPDATE_F)]
-                [!!(dev->rx_offloads &
-                    NIX_RX_OFFLOAD_VLAN_STRIP_F)]
-                [!!(dev->rx_offloads &
-                    NIX_RX_OFFLOAD_CHECKSUM_F)]
-                [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
-                [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
-            event_dev->dequeue_burst = ssogws_dual_deq_burst
-                [!!(dev->rx_offloads &
-                    NIX_RX_OFFLOAD_SECURITY_F)]
-                [!!(dev->rx_offloads &
-                    NIX_RX_OFFLOAD_TSTAMP_F)]
-                [!!(dev->rx_offloads &
-                    NIX_RX_OFFLOAD_MARK_UPDATE_F)]
-                [!!(dev->rx_offloads &
-                    NIX_RX_OFFLOAD_VLAN_STRIP_F)]
-                [!!(dev->rx_offloads &
-                    NIX_RX_OFFLOAD_CHECKSUM_F)]
-                [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
-                [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
-            if (dev->is_timeout_deq) {
-                event_dev->dequeue =
-                    ssogws_dual_deq_timeout
-                    [!!(dev->rx_offloads &
-                        NIX_RX_OFFLOAD_SECURITY_F)]
-                    [!!(dev->rx_offloads &
-                        NIX_RX_OFFLOAD_TSTAMP_F)]
-                    [!!(dev->rx_offloads &
-                        NIX_RX_OFFLOAD_MARK_UPDATE_F)]
-                    [!!(dev->rx_offloads &
-                        NIX_RX_OFFLOAD_VLAN_STRIP_F)]
-                    [!!(dev->rx_offloads &
-                        NIX_RX_OFFLOAD_CHECKSUM_F)]
-                    [!!(dev->rx_offloads &
-                        NIX_RX_OFFLOAD_PTYPE_F)]
-                    [!!(dev->rx_offloads &
-                        NIX_RX_OFFLOAD_RSS_F)];
-                event_dev->dequeue_burst =
-                    ssogws_dual_deq_timeout_burst
-                    [!!(dev->rx_offloads &
-                        NIX_RX_OFFLOAD_SECURITY_F)]
-                    [!!(dev->rx_offloads &
-                        NIX_RX_OFFLOAD_TSTAMP_F)]
-                    [!!(dev->rx_offloads &
-                        NIX_RX_OFFLOAD_MARK_UPDATE_F)]
-                    [!!(dev->rx_offloads &
-                        NIX_RX_OFFLOAD_VLAN_STRIP_F)]
-                    [!!(dev->rx_offloads &
-                        NIX_RX_OFFLOAD_CHECKSUM_F)]
-                    [!!(dev->rx_offloads &
-                        NIX_RX_OFFLOAD_PTYPE_F)]
-                    [!!(dev->rx_offloads &
-                        NIX_RX_OFFLOAD_RSS_F)];
-            }
-        }
-
-        if (dev->tx_offloads & NIX_TX_MULTI_SEG_F) {
-            /* [SEC] [TSMP] [MBUF_NOFF] [VLAN] [OL3_L4_CSUM] [L3_L4_CSUM] */
-            event_dev->txa_enqueue = ssogws_dual_tx_adptr_enq_seg
-                [!!(dev->tx_offloads &
-                    NIX_TX_OFFLOAD_SECURITY_F)]
-                [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]
-                [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)]
-                [!!(dev->tx_offloads &
-                    NIX_TX_OFFLOAD_MBUF_NOFF_F)]
-                [!!(dev->tx_offloads &
-                    NIX_TX_OFFLOAD_VLAN_QINQ_F)]
-                [!!(dev->tx_offloads &
-                    NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
-                [!!(dev->tx_offloads &
-                    NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
-        } else {
-            event_dev->txa_enqueue = ssogws_dual_tx_adptr_enq
-                [!!(dev->tx_offloads &
-                    NIX_TX_OFFLOAD_SECURITY_F)]
-                [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]
-                [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)]
-                [!!(dev->tx_offloads &
-                    NIX_TX_OFFLOAD_MBUF_NOFF_F)]
-                [!!(dev->tx_offloads &
-                    NIX_TX_OFFLOAD_VLAN_QINQ_F)]
-                [!!(dev->tx_offloads &
-                    NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
-                [!!(dev->tx_offloads &
-                    NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
-        }
-        event_dev->ca_enqueue = otx2_ssogws_dual_ca_enq;
-    }
-
-    event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue;
-    rte_mb();
-}
-
-static void
-otx2_sso_info_get(struct rte_eventdev *event_dev,
-          struct rte_event_dev_info *dev_info)
-{
-    struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
-
-    dev_info->driver_name = RTE_STR(EVENTDEV_NAME_OCTEONTX2_PMD);
-    dev_info->min_dequeue_timeout_ns = dev->min_dequeue_timeout_ns;
-    dev_info->max_dequeue_timeout_ns = dev->max_dequeue_timeout_ns;
-    dev_info->max_event_queues = dev->max_event_queues;
-    dev_info->max_event_queue_flows = (1ULL << 20);
-    dev_info->max_event_queue_priority_levels = 8;
-    dev_info->max_event_priority_levels = 1;
-    dev_info->max_event_ports = dev->max_event_ports;
-    dev_info->max_event_port_dequeue_depth = 1;
-    dev_info->max_event_port_enqueue_depth = 1;
-    dev_info->max_num_events = dev->max_num_events;
-    dev_info->event_dev_cap = RTE_EVENT_DEV_CAP_QUEUE_QOS |
-                    RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED |
-                    RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES |
-                    RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
-                    RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
-                    RTE_EVENT_DEV_CAP_NONSEQ_MODE |
-                    RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
-                    RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
-}
-
-static void
-sso_port_link_modify(struct otx2_ssogws *ws, uint8_t queue, uint8_t enable)
-{
-    uintptr_t base = OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op);
-    uint64_t val;
-
-    val = queue;
-    val |= 0ULL << 12; /* SET 0 */
-    val |= 0x8000800080000000; /* Dont modify rest of the masks */
-    val |= (uint64_t)enable << 14; /* Enable/Disable Membership. */
-
-    otx2_write64(val, base + SSOW_LF_GWS_GRPMSK_CHG);
-}
-
-static int
-otx2_sso_port_link(struct rte_eventdev *event_dev, void *port,
-           const uint8_t queues[], const uint8_t priorities[],
-           uint16_t nb_links)
-{
-    struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
-    uint8_t port_id = 0;
-    uint16_t link;
-
-    RTE_SET_USED(priorities);
-    for (link = 0; link < nb_links; link++) {
-        if (dev->dual_ws) {
-            struct otx2_ssogws_dual *ws = port;
-
-            port_id = ws->port;
-            sso_port_link_modify((struct otx2_ssogws *)
-                    &ws->ws_state[0], queues[link], true);
-            sso_port_link_modify((struct otx2_ssogws *)
-                    &ws->ws_state[1], queues[link], true);
-        } else {
-            struct otx2_ssogws *ws = port;
-
-            port_id = ws->port;
-            sso_port_link_modify(ws, queues[link], true);
-        }
-    }
-    sso_func_trace("Port=%d nb_links=%d", port_id, nb_links);
-
-    return (int)nb_links;
-}
-
-static int
-otx2_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
-             uint8_t queues[], uint16_t nb_unlinks)
-{
-    struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
-    uint8_t port_id = 0;
-    uint16_t unlink;
-
-    for (unlink = 0; unlink < nb_unlinks; unlink++) {
-        if (dev->dual_ws) {
-            struct otx2_ssogws_dual *ws = port;
-
-            port_id = ws->port;
-            sso_port_link_modify((struct otx2_ssogws *)
-                    &ws->ws_state[0], queues[unlink],
-                    false);
-            sso_port_link_modify((struct otx2_ssogws *)
-                    &ws->ws_state[1], queues[unlink],
-                    false);
-        } else {
-            struct otx2_ssogws *ws = port;
-
-            port_id = ws->port;
-            sso_port_link_modify(ws, queues[unlink], false);
-        }
-    }
-    sso_func_trace("Port=%d nb_unlinks=%d", port_id, nb_unlinks);
-
-    return (int)nb_unlinks;
-}
-
-static int
-sso_hw_lf_cfg(struct otx2_mbox *mbox, enum otx2_sso_lf_type type,
-          uint16_t nb_lf, uint8_t attach)
-{
-    if (attach) {
-        struct rsrc_attach_req *req;
-
-        req = otx2_mbox_alloc_msg_attach_resources(mbox);
-        switch (type) {
-        case SSO_LF_GGRP:
-            req->sso = nb_lf;
-            break;
-        case SSO_LF_GWS:
-            req->ssow = nb_lf;
-            break;
-        default:
-            return -EINVAL;
-        }
-        req->modify = true;
-        if (otx2_mbox_process(mbox) < 0)
-            return -EIO;
-    } else {
-        struct rsrc_detach_req *req;
-
-        req = otx2_mbox_alloc_msg_detach_resources(mbox);
-        switch (type) {
-        case SSO_LF_GGRP:
-            req->sso = true;
-            break;
-        case SSO_LF_GWS:
-            req->ssow = true;
-            break;
-        default:
-            return -EINVAL;
-        }
-        req->partial = true;
-        if (otx2_mbox_process(mbox) < 0)
-            return -EIO;
-    }
-
-    return 0;
-}
-
-static int
-sso_lf_cfg(struct otx2_sso_evdev *dev, struct otx2_mbox *mbox,
-       enum otx2_sso_lf_type type, uint16_t nb_lf, uint8_t alloc)
-{
-    void *rsp;
-    int rc;
-
-    if (alloc) {
-        switch (type) {
-        case SSO_LF_GGRP:
-        {
-            struct sso_lf_alloc_req *req_ggrp;
-            req_ggrp = otx2_mbox_alloc_msg_sso_lf_alloc(mbox);
-            req_ggrp->hwgrps = nb_lf;
-        }
-        break;
-        case SSO_LF_GWS:
-        {
-            struct ssow_lf_alloc_req *req_hws;
-            req_hws = otx2_mbox_alloc_msg_ssow_lf_alloc(mbox);
-            req_hws->hws = nb_lf;
-        }
-        break;
-        default:
-            return -EINVAL;
-        }
-    } else {
-        switch (type) {
-        case SSO_LF_GGRP:
-        {
-            struct sso_lf_free_req *req_ggrp;
-            req_ggrp = otx2_mbox_alloc_msg_sso_lf_free(mbox);
-            req_ggrp->hwgrps = nb_lf;
-        }
-        break;
-        case SSO_LF_GWS:
-        {
-            struct ssow_lf_free_req *req_hws;
-            req_hws = otx2_mbox_alloc_msg_ssow_lf_free(mbox);
-            req_hws->hws = nb_lf;
-        }
-        break;
-        default:
-            return -EINVAL;
-        }
-    }
-
-    rc = otx2_mbox_process_msg_tmo(mbox, (void **)&rsp, ~0);
-    if (rc < 0)
-        return rc;
-
-    if (alloc && type == SSO_LF_GGRP) {
-        struct sso_lf_alloc_rsp *rsp_ggrp = rsp;
-
-        dev->xaq_buf_size = rsp_ggrp->xaq_buf_size;
-        dev->xae_waes = rsp_ggrp->xaq_wq_entries;
-        dev->iue = rsp_ggrp->in_unit_entries;
-    }
-
-    return 0;
-}
-
-static void
-otx2_sso_port_release(void *port)
-{
-    struct otx2_ssogws_cookie *gws_cookie = ssogws_get_cookie(port);
-    struct otx2_sso_evdev *dev;
-    int i;
-
-    if (!gws_cookie->configured)
-        goto free;
-
-    dev = sso_pmd_priv(gws_cookie->event_dev);
-    if (dev->dual_ws) {
-        struct otx2_ssogws_dual *ws = port;
-
-        for (i = 0; i < dev->nb_event_queues; i++) {
-            sso_port_link_modify((struct otx2_ssogws *)
-                    &ws->ws_state[0], i, false);
-            sso_port_link_modify((struct otx2_ssogws *)
-                    &ws->ws_state[1], i, false);
-        }
-        memset(ws, 0, sizeof(*ws));
-    } else {
-        struct otx2_ssogws *ws = port;
-
-        for (i = 0; i < dev->nb_event_queues; i++)
-            sso_port_link_modify(ws, i, false);
-        memset(ws, 0, sizeof(*ws));
-    }
-
-    memset(gws_cookie, 0, sizeof(*gws_cookie));
-
-free:
-    rte_free(gws_cookie);
-}
-
-static void
-otx2_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id)
-{
-    RTE_SET_USED(event_dev);
-    RTE_SET_USED(queue_id);
-}
-
-static void
-sso_restore_links(const struct rte_eventdev *event_dev)
-{
-    struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
-    uint16_t *links_map;
-    int i, j;
-
-    for (i = 0; i < dev->nb_event_ports; i++) {
-        links_map = event_dev->data->links_map;
-        /* Point links_map to this port specific area */
-        links_map += (i * RTE_EVENT_MAX_QUEUES_PER_DEV);
-        if (dev->dual_ws) {
-            struct otx2_ssogws_dual *ws;
-
-            ws = event_dev->data->ports[i];
-            for (j = 0; j < dev->nb_event_queues; j++) {
-                if (links_map[j] == 0xdead)
-                    continue;
-                sso_port_link_modify((struct otx2_ssogws *)
-                        &ws->ws_state[0], j, true);
-                sso_port_link_modify((struct otx2_ssogws *)
-                        &ws->ws_state[1], j, true);
-                sso_func_trace("Restoring port %d queue %d "
-                           "link", i, j);
-            }
-        } else {
-            struct otx2_ssogws *ws;
-
-            ws = event_dev->data->ports[i];
-            for (j = 0; j < dev->nb_event_queues; j++) {
-                if (links_map[j] == 0xdead)
-                    continue;
-                sso_port_link_modify(ws, j, true);
-                sso_func_trace("Restoring port %d queue %d "
-                           "link", i, j);
-            }
-        }
-    }
-}
-
-static void
-sso_set_port_ops(struct otx2_ssogws *ws, uintptr_t base)
-{
-    ws->tag_op = base + SSOW_LF_GWS_TAG;
-    ws->wqp_op = base + SSOW_LF_GWS_WQP;
-    ws->getwrk_op = base + SSOW_LF_GWS_OP_GET_WORK;
-    ws->swtag_flush_op = base + SSOW_LF_GWS_OP_SWTAG_FLUSH;
-    ws->swtag_norm_op = base + SSOW_LF_GWS_OP_SWTAG_NORM;
-    ws->swtag_desched_op = base + SSOW_LF_GWS_OP_SWTAG_DESCHED;
-}
-
-static int
-sso_configure_dual_ports(const struct rte_eventdev *event_dev)
-{
-    struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
-    struct otx2_mbox *mbox = dev->mbox;
-    uint8_t vws = 0;
-    uint8_t nb_lf;
-    int i, rc;
-
-    otx2_sso_dbg("Configuring event ports %d", dev->nb_event_ports);
-
-    nb_lf = dev->nb_event_ports * 2;
-    /* Ask AF to attach required LFs. */
-    rc = sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, true);
-    if (rc < 0) {
-        otx2_err("Failed to attach SSO GWS LF");
-        return -ENODEV;
-    }
-
-    if (sso_lf_cfg(dev, mbox, SSO_LF_GWS, nb_lf, true) < 0) {
-        sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, false);
-        otx2_err("Failed to init SSO GWS LF");
-        return -ENODEV;
-    }
-
-    for (i = 0; i < dev->nb_event_ports; i++) {
-        struct otx2_ssogws_cookie *gws_cookie;
-        struct otx2_ssogws_dual *ws;
-        uintptr_t base;
-
-        if (event_dev->data->ports[i] != NULL) {
-            ws = event_dev->data->ports[i];
-        } else {
-            /* Allocate event port memory */
-            ws = rte_zmalloc_socket("otx2_sso_ws",
-                    sizeof(struct otx2_ssogws_dual) +
-                    RTE_CACHE_LINE_SIZE,
-                    RTE_CACHE_LINE_SIZE,
-                    event_dev->data->socket_id);
-            if (ws == NULL) {
-                otx2_err("Failed to alloc memory for port=%d",
-                     i);
-                rc = -ENOMEM;
-                break;
-            }
-
-            /* First cache line is reserved for cookie */
-            ws = (struct otx2_ssogws_dual *)
-                ((uint8_t *)ws + RTE_CACHE_LINE_SIZE);
-        }
-
-        ws->port = i;
-        base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | vws << 12);
-        sso_set_port_ops((struct otx2_ssogws *)&ws->ws_state[0], base);
-        ws->base[0] = base;
-        vws++;
-
-        base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | vws << 12);
-        sso_set_port_ops((struct otx2_ssogws *)&ws->ws_state[1], base);
-        ws->base[1] = base;
-        vws++;
-
-        gws_cookie = ssogws_get_cookie(ws);
-        gws_cookie->event_dev = event_dev;
-        gws_cookie->configured = 1;
-
-        event_dev->data->ports[i] = ws;
-    }
-
-    if (rc < 0) {
-        sso_lf_cfg(dev, mbox, SSO_LF_GWS, nb_lf, false);
-        sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, false);
-    }
-
-    return rc;
-}
-
-static int
-sso_configure_ports(const struct rte_eventdev *event_dev)
-{
-    struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
-    struct otx2_mbox *mbox = dev->mbox;
-    uint8_t nb_lf;
-    int i, rc;
-
-    otx2_sso_dbg("Configuring event ports %d", dev->nb_event_ports);
-
-    nb_lf = dev->nb_event_ports;
-    /* Ask AF to attach required LFs. */
-    rc = sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, true);
-    if (rc < 0) {
-        otx2_err("Failed to attach SSO GWS LF");
-        return -ENODEV;
-    }
-
-    if (sso_lf_cfg(dev, mbox, SSO_LF_GWS, nb_lf, true) < 0) {
-        sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, false);
-        otx2_err("Failed to init SSO GWS LF");
-        return -ENODEV;
-    }
-
-    for (i = 0; i < nb_lf; i++) {
-        struct otx2_ssogws_cookie *gws_cookie;
-        struct otx2_ssogws *ws;
-        uintptr_t base;
-
-        if (event_dev->data->ports[i] != NULL) {
-            ws = event_dev->data->ports[i];
-        } else {
-            /* Allocate event port memory */
-            ws = rte_zmalloc_socket("otx2_sso_ws",
-                    sizeof(struct otx2_ssogws) +
-                    RTE_CACHE_LINE_SIZE,
-                    RTE_CACHE_LINE_SIZE,
-                    event_dev->data->socket_id);
-            if (ws == NULL) {
-                otx2_err("Failed to alloc memory for port=%d",
-                     i);
-                rc = -ENOMEM;
-                break;
-            }
-
-            /* First cache line is reserved for cookie */
-            ws = (struct otx2_ssogws *)
-                ((uint8_t *)ws + RTE_CACHE_LINE_SIZE);
-        }
-
-        ws->port = i;
-        base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | i << 12);
-        sso_set_port_ops(ws, base);
-        ws->base = base;
-
-        gws_cookie = ssogws_get_cookie(ws);
-        gws_cookie->event_dev = event_dev;
-        gws_cookie->configured = 1;
-
-        event_dev->data->ports[i] = ws;
-    }
-
-    if (rc < 0) {
-        sso_lf_cfg(dev, mbox, SSO_LF_GWS, nb_lf, false);
-        sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, false);
-    }
-
-    return rc;
-}
-
-static int
-sso_configure_queues(const struct rte_eventdev *event_dev)
-{
-    struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
-    struct otx2_mbox *mbox = dev->mbox;
-    uint8_t nb_lf;
-    int rc;
-
-    otx2_sso_dbg("Configuring event queues %d", dev->nb_event_queues);
-
-    nb_lf = dev->nb_event_queues;
-    /* Ask AF to attach required LFs. */
-    rc = sso_hw_lf_cfg(mbox, SSO_LF_GGRP, nb_lf, true);
-    if (rc < 0) {
-        otx2_err("Failed to attach SSO GGRP LF");
-        return -ENODEV;
-    }
-
-    if (sso_lf_cfg(dev, mbox, SSO_LF_GGRP, nb_lf, true) < 0) {
-        sso_hw_lf_cfg(mbox, SSO_LF_GGRP, nb_lf, false);
-        otx2_err("Failed to init SSO GGRP LF");
-        return -ENODEV;
-    }
-
-    return rc;
-}
-
-static int
-sso_xaq_allocate(struct otx2_sso_evdev *dev)
-{
-    const struct rte_memzone *mz;
-    struct npa_aura_s *aura;
-    static int reconfig_cnt;
-    char pool_name[RTE_MEMZONE_NAMESIZE];
-    uint32_t xaq_cnt;
-    int rc;
-
-    if (dev->xaq_pool)
-        rte_mempool_free(dev->xaq_pool);
-
-    /*
-     * Allocate memory for Add work backpressure.
-     */
-    mz = rte_memzone_lookup(OTX2_SSO_FC_NAME);
-    if (mz == NULL)
-        mz = rte_memzone_reserve_aligned(OTX2_SSO_FC_NAME,
-                         OTX2_ALIGN +
-                         sizeof(struct npa_aura_s),
-                         rte_socket_id(),
-                         RTE_MEMZONE_IOVA_CONTIG,
-                         OTX2_ALIGN);
-    if (mz == NULL) {
-        otx2_err("Failed to allocate mem for fcmem");
-        return -ENOMEM;
-    }
-
-    dev->fc_iova = mz->iova;
-    dev->fc_mem = mz->addr;
-    *dev->fc_mem = 0;
-    aura = (struct npa_aura_s *)((uintptr_t)dev->fc_mem + OTX2_ALIGN);
-    memset(aura, 0, sizeof(struct npa_aura_s));
-
-    aura->fc_ena = 1;
-    aura->fc_addr = dev->fc_iova;
-    aura->fc_hyst_bits = 0; /* Store count on all updates */
-
-    /* Taken from HRM 14.3.3(4) */
-    xaq_cnt = dev->nb_event_queues * OTX2_SSO_XAQ_CACHE_CNT;
-    if (dev->xae_cnt)
-        xaq_cnt += dev->xae_cnt / dev->xae_waes;
-    else if (dev->adptr_xae_cnt)
-        xaq_cnt += (dev->adptr_xae_cnt / dev->xae_waes) +
-            (OTX2_SSO_XAQ_SLACK * dev->nb_event_queues);
-    else
-        xaq_cnt += (dev->iue / dev->xae_waes) +
-            (OTX2_SSO_XAQ_SLACK * dev->nb_event_queues);
-
-    otx2_sso_dbg("Configuring %d xaq buffers", xaq_cnt);
-    /* Setup XAQ based on number of nb queues. */
-    snprintf(pool_name, 30, "otx2_xaq_buf_pool_%d", reconfig_cnt);
-    dev->xaq_pool = (void *)rte_mempool_create_empty(pool_name,
-            xaq_cnt, dev->xaq_buf_size, 0, 0,
-            rte_socket_id(), 0);
-
-    if (dev->xaq_pool == NULL) {
-        otx2_err("Unable to create empty mempool.");
-        rte_memzone_free(mz);
-        return -ENOMEM;
-    }
-
-    rc = rte_mempool_set_ops_byname(dev->xaq_pool,
-                    rte_mbuf_platform_mempool_ops(), aura);
-    if (rc != 0) {
-        otx2_err("Unable to set xaqpool ops.");
-        goto alloc_fail;
-    }
-
-    rc = rte_mempool_populate_default(dev->xaq_pool);
-    if (rc < 0) {
-        otx2_err("Unable to set populate xaqpool.");
-        goto alloc_fail;
-    }
-    reconfig_cnt++;
-    /* When SW does addwork (enqueue) check if there is space in XAQ by
-     * comparing fc_addr above against the xaq_lmt calculated below.
-     * There should be a minimum headroom (OTX2_SSO_XAQ_SLACK / 2) for SSO
-     * to request XAQ to cache them even before enqueue is called.
-     */
-    dev->xaq_lmt = xaq_cnt - (OTX2_SSO_XAQ_SLACK / 2 *
-                  dev->nb_event_queues);
-    dev->nb_xaq_cfg = xaq_cnt;
-
-    return 0;
-alloc_fail:
-    rte_mempool_free(dev->xaq_pool);
-    rte_memzone_free(mz);
-    return rc;
-}
-
-static int
-sso_ggrp_alloc_xaq(struct otx2_sso_evdev *dev)
-{
-    struct otx2_mbox *mbox = dev->mbox;
-    struct sso_hw_setconfig *req;
-
-    otx2_sso_dbg("Configuring XAQ for GGRPs");
-    req = otx2_mbox_alloc_msg_sso_hw_setconfig(mbox);
-    req->npa_pf_func = otx2_npa_pf_func_get();
-    req->npa_aura_id = npa_lf_aura_handle_to_aura(dev->xaq_pool->pool_id);
-    req->hwgrps = dev->nb_event_queues;
-
-    return otx2_mbox_process(mbox);
-}
-
-static int
-sso_ggrp_free_xaq(struct otx2_sso_evdev *dev)
-{
-    struct otx2_mbox *mbox = dev->mbox;
-    struct sso_release_xaq *req;
-
-    otx2_sso_dbg("Freeing XAQ for GGRPs");
-    req = otx2_mbox_alloc_msg_sso_hw_release_xaq_aura(mbox);
-    req->hwgrps = dev->nb_event_queues;
-
-    return otx2_mbox_process(mbox);
-}
-
-static void
-sso_lf_teardown(struct otx2_sso_evdev *dev,
-        enum otx2_sso_lf_type lf_type)
-{
-    uint8_t nb_lf;
-
-    switch (lf_type) {
-    case SSO_LF_GGRP:
-        nb_lf = dev->nb_event_queues;
-        break;
-    case SSO_LF_GWS:
-        nb_lf = dev->nb_event_ports;
-        nb_lf *= dev->dual_ws ? 2 : 1;
-        break;
-    default:
-        return;
-    }
-
-    sso_lf_cfg(dev, dev->mbox, lf_type, nb_lf, false);
-    sso_hw_lf_cfg(dev->mbox, lf_type, nb_lf, false);
-}
-
-static int
-otx2_sso_configure(const struct rte_eventdev *event_dev)
-{
-    struct rte_event_dev_config *conf = &event_dev->data->dev_conf;
-    struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
-    uint32_t deq_tmo_ns;
-    int rc;
-
-    sso_func_trace();
-    deq_tmo_ns = conf->dequeue_timeout_ns;
-
-    if (deq_tmo_ns == 0)
-        deq_tmo_ns = dev->min_dequeue_timeout_ns;
-
-    if (deq_tmo_ns < dev->min_dequeue_timeout_ns ||
-        deq_tmo_ns > dev->max_dequeue_timeout_ns) {
-        otx2_err("Unsupported dequeue timeout requested");
-        return -EINVAL;
-    }
-
-    if (conf->event_dev_cfg & RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT)
-        dev->is_timeout_deq = 1;
-
-    dev->deq_tmo_ns = deq_tmo_ns;
-
-    if (conf->nb_event_ports > dev->max_event_ports ||
-        conf->nb_event_queues > dev->max_event_queues) {
-        otx2_err("Unsupported event queues/ports requested");
-        return -EINVAL;
-    }
-
-    if (conf->nb_event_port_dequeue_depth > 1) {
-        otx2_err("Unsupported event port deq depth requested");
-        return -EINVAL;
-    }
-
-    if (conf->nb_event_port_enqueue_depth > 1) {
-        otx2_err("Unsupported event port enq depth requested");
-        return -EINVAL;
-    }
-
-    if (dev->configured)
-        sso_unregister_irqs(event_dev);
-
-    if (dev->nb_event_queues) {
-        /* Finit any previous queues. */
-        sso_lf_teardown(dev, SSO_LF_GGRP);
-    }
-    if (dev->nb_event_ports) {
-        /* Finit any previous ports. */
-        sso_lf_teardown(dev, SSO_LF_GWS);
-    }
-
-    dev->nb_event_queues = conf->nb_event_queues;
-    dev->nb_event_ports = conf->nb_event_ports;
-
-    if (dev->dual_ws)
-        rc = sso_configure_dual_ports(event_dev);
-    else
-        rc = sso_configure_ports(event_dev);
-
-    if (rc < 0) {
-        otx2_err("Failed to configure event ports");
-        return -ENODEV;
-    }
-
-    if (sso_configure_queues(event_dev) < 0) {
-        otx2_err("Failed to configure event queues");
-        rc = -ENODEV;
-        goto teardown_hws;
-    }
-
-    if (sso_xaq_allocate(dev) < 0) {
-        rc = -ENOMEM;
-        goto teardown_hwggrp;
-    }
-
-    /* Restore any prior port-queue mapping. */
-    sso_restore_links(event_dev);
-    rc = sso_ggrp_alloc_xaq(dev);
-    if (rc < 0) {
-        otx2_err("Failed to alloc xaq to ggrp %d", rc);
-        goto teardown_hwggrp;
-    }
-
-    rc = sso_get_msix_offsets(event_dev);
-    if (rc < 0) {
-        otx2_err("Failed to get msix offsets %d", rc);
-        goto teardown_hwggrp;
-    }
-
-    rc = sso_register_irqs(event_dev);
-    if (rc < 0) {
-        otx2_err("Failed to register irq %d", rc);
-        goto teardown_hwggrp;
-    }
-
-    dev->configured = 1;
-    rte_mb();
-
-    return 0;
-teardown_hwggrp:
-    sso_lf_teardown(dev, SSO_LF_GGRP);
-teardown_hws:
-    sso_lf_teardown(dev, SSO_LF_GWS);
-    dev->nb_event_queues = 0;
-    dev->nb_event_ports = 0;
-    dev->configured = 0;
-    return rc;
-}
-
-static void
-otx2_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
-            struct rte_event_queue_conf *queue_conf)
-{
-    RTE_SET_USED(event_dev);
-    RTE_SET_USED(queue_id);
-
-    queue_conf->nb_atomic_flows = (1ULL << 20);
-    queue_conf->nb_atomic_order_sequences = (1ULL << 20);
-    queue_conf->event_queue_cfg = RTE_EVENT_QUEUE_CFG_ALL_TYPES;
-    queue_conf->priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
-}
-
-static int
-otx2_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
-             const struct rte_event_queue_conf *queue_conf)
-{
-    struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
-    struct otx2_mbox *mbox = dev->mbox;
-    struct sso_grp_priority *req;
-    int rc;
-
-    sso_func_trace("Queue=%d prio=%d", queue_id, queue_conf->priority);
-
-    req = otx2_mbox_alloc_msg_sso_grp_set_priority(dev->mbox);
-    req->grp = queue_id;
-    req->weight = 0xFF;
-    req->affinity = 0xFF;
-    /* Normalize <0-255> to <0-7> */
-    req->priority = queue_conf->priority / 32;
-
-    rc = otx2_mbox_process(mbox);
-    if (rc < 0) {
-        otx2_err("Failed to set priority queue=%d", queue_id);
-        return rc;
-    }
-
-    return 0;
-}
-
-static void
-otx2_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
-               struct rte_event_port_conf *port_conf)
-{
-    struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
-
-    RTE_SET_USED(port_id);
-    port_conf->new_event_threshold = dev->max_num_events;
-    port_conf->dequeue_depth = 1;
-    port_conf->enqueue_depth = 1;
-}
-
-static int
-otx2_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
-            const struct rte_event_port_conf *port_conf)
-{
-    struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
-    uintptr_t grps_base[OTX2_SSO_MAX_VHGRP] = {0};
-    uint64_t val;
-    uint16_t q;
-
-    sso_func_trace("Port=%d", port_id);
-    RTE_SET_USED(port_conf);
-
-    if (event_dev->data->ports[port_id] == NULL) {
-        otx2_err("Invalid port Id %d", port_id);
-        return -EINVAL;
-    }
-
-    for (q = 0; q < dev->nb_event_queues; q++) {
-        grps_base[q] = dev->bar2 + (RVU_BLOCK_ADDR_SSO << 20 | q << 12);
-        if (grps_base[q] == 0) {
-            otx2_err("Failed to get grp[%d] base addr", q);
-            return -EINVAL;
-        }
-    }
-
-    /* Set get_work timeout for HWS */
-    val = NSEC2USEC(dev->deq_tmo_ns) - 1;
-
-    if (dev->dual_ws) {
-        struct otx2_ssogws_dual *ws = event_dev->data->ports[port_id];
-
-        rte_memcpy(ws->grps_base, grps_base,
-               sizeof(uintptr_t) * OTX2_SSO_MAX_VHGRP);
-        ws->fc_mem = dev->fc_mem;
-        ws->xaq_lmt = dev->xaq_lmt;
-        ws->tstamp = dev->tstamp;
-        otx2_write64(val, OTX2_SSOW_GET_BASE_ADDR(
-                 ws->ws_state[0].getwrk_op) + SSOW_LF_GWS_NW_TIM);
-        otx2_write64(val, OTX2_SSOW_GET_BASE_ADDR(
-                 ws->ws_state[1].getwrk_op) + SSOW_LF_GWS_NW_TIM);
-    } else {
-        struct otx2_ssogws *ws = event_dev->data->ports[port_id];
-        uintptr_t base = OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op);
-
-        rte_memcpy(ws->grps_base, grps_base,
-               sizeof(uintptr_t) * OTX2_SSO_MAX_VHGRP);
-        ws->fc_mem = dev->fc_mem;
-        ws->xaq_lmt = dev->xaq_lmt;
-        ws->tstamp = dev->tstamp;
-        otx2_write64(val, base + SSOW_LF_GWS_NW_TIM);
-    }
-
-    otx2_sso_dbg("Port=%d ws=%p", port_id, event_dev->data->ports[port_id]);
-
-    return 0;
-}
-
-static int
-otx2_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
-               uint64_t *tmo_ticks)
-{
-    RTE_SET_USED(event_dev);
-    *tmo_ticks = NSEC2TICK(ns, rte_get_timer_hz());
-
-    return 0;
-}
-
-static void
-ssogws_dump(struct otx2_ssogws *ws, FILE *f)
-{
-    uintptr_t base = OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op);
-
-    fprintf(f, "SSOW_LF_GWS Base addr 0x%" PRIx64 "\n", (uint64_t)base);
-    fprintf(f, "SSOW_LF_GWS_LINKS 0x%" PRIx64 "\n",
-        otx2_read64(base + SSOW_LF_GWS_LINKS));
-    fprintf(f, "SSOW_LF_GWS_PENDWQP 0x%" PRIx64 "\n",
-        otx2_read64(base + SSOW_LF_GWS_PENDWQP));
-    fprintf(f, "SSOW_LF_GWS_PENDSTATE 0x%" PRIx64 "\n",
-        otx2_read64(base + SSOW_LF_GWS_PENDSTATE));
-    fprintf(f, "SSOW_LF_GWS_NW_TIM 0x%" PRIx64 "\n",
-        otx2_read64(base + SSOW_LF_GWS_NW_TIM));
-    fprintf(f, "SSOW_LF_GWS_TAG 0x%" PRIx64 "\n",
-        otx2_read64(base + SSOW_LF_GWS_TAG));
-    fprintf(f, "SSOW_LF_GWS_WQP 0x%" PRIx64 "\n",
-        otx2_read64(base + SSOW_LF_GWS_TAG));
-    fprintf(f, "SSOW_LF_GWS_SWTP 0x%" PRIx64 "\n",
-        otx2_read64(base + SSOW_LF_GWS_SWTP));
-    fprintf(f, "SSOW_LF_GWS_PENDTAG 0x%" PRIx64 "\n",
-        otx2_read64(base + SSOW_LF_GWS_PENDTAG));
-}
-
-static void
-ssoggrp_dump(uintptr_t base, FILE *f)
-{
-    fprintf(f, "SSO_LF_GGRP Base addr 0x%" PRIx64 "\n", (uint64_t)base);
-    fprintf(f, "SSO_LF_GGRP_QCTL 0x%" PRIx64 "\n",
-        otx2_read64(base + SSO_LF_GGRP_QCTL));
-    fprintf(f, "SSO_LF_GGRP_XAQ_CNT 0x%" PRIx64 "\n",
-        otx2_read64(base + SSO_LF_GGRP_XAQ_CNT));
-    fprintf(f, "SSO_LF_GGRP_INT_THR 0x%" PRIx64 "\n",
-        otx2_read64(base + SSO_LF_GGRP_INT_THR));
-    fprintf(f, "SSO_LF_GGRP_INT_CNT 0x%" PRIX64 "\n",
-        otx2_read64(base + SSO_LF_GGRP_INT_CNT));
-    fprintf(f, "SSO_LF_GGRP_AQ_CNT 0x%" PRIX64 "\n",
-        otx2_read64(base + SSO_LF_GGRP_AQ_CNT));
-    fprintf(f, "SSO_LF_GGRP_AQ_THR 0x%" PRIX64 "\n",
-        otx2_read64(base + SSO_LF_GGRP_AQ_THR));
-    fprintf(f, "SSO_LF_GGRP_MISC_CNT 0x%" PRIx64 "\n",
-        otx2_read64(base + SSO_LF_GGRP_MISC_CNT));
-}
-
-static void
-otx2_sso_dump(struct rte_eventdev *event_dev, FILE *f)
-{
-    struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
-    uint8_t queue;
-    uint8_t port;
-
-    fprintf(f, "[%s] SSO running in [%s] mode\n", __func__, dev->dual_ws ?
-        "dual_ws" : "single_ws");
-    /* Dump SSOW registers */
-    for (port = 0; port < dev->nb_event_ports; port++) {
-        if (dev->dual_ws) {
-            struct otx2_ssogws_dual *ws =
-                event_dev->data->ports[port];
-
-            fprintf(f, "[%s] SSO dual workslot[%d] vws[%d] dump\n",
-                __func__, port, 0);
-            ssogws_dump((struct otx2_ssogws *)&ws->ws_state[0], f);
-            fprintf(f, "[%s]SSO dual workslot[%d] vws[%d] dump\n",
-                __func__, port, 1);
-            ssogws_dump((struct otx2_ssogws *)&ws->ws_state[1], f);
-        } else {
-            fprintf(f, "[%s]SSO single workslot[%d] dump\n",
-                __func__, port);
-            ssogws_dump(event_dev->data->ports[port], f);
-        }
-    }
-
-    /* Dump SSO registers */
-    for (queue = 0; queue < dev->nb_event_queues; queue++) {
-        fprintf(f, "[%s]SSO group[%d] dump\n", __func__, queue);
-        if (dev->dual_ws) {
-            struct otx2_ssogws_dual *ws = event_dev->data->ports[0];
-            ssoggrp_dump(ws->grps_base[queue], f);
-        } else {
-            struct otx2_ssogws *ws = event_dev->data->ports[0];
-            ssoggrp_dump(ws->grps_base[queue], f);
-        }
-    }
-}
-
-static void
-otx2_handle_event(void *arg, struct rte_event event)
-{
-    struct rte_eventdev *event_dev = arg;
-
-    if (event_dev->dev_ops->dev_stop_flush != NULL)
-        event_dev->dev_ops->dev_stop_flush(event_dev->data->dev_id,
-                event, event_dev->data->dev_stop_flush_arg);
-}
-
-static void
-sso_qos_cfg(struct rte_eventdev *event_dev)
-{
-    struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
-    struct sso_grp_qos_cfg *req;
-    uint16_t i;
-
-    for (i = 0; i < dev->qos_queue_cnt; i++) {
-        uint8_t xaq_prcnt = dev->qos_parse_data[i].xaq_prcnt;
-        uint8_t iaq_prcnt = dev->qos_parse_data[i].iaq_prcnt;
-        uint8_t taq_prcnt = dev->qos_parse_data[i].taq_prcnt;
-
-        if (dev->qos_parse_data[i].queue >= dev->nb_event_queues)
-            continue;
-
-        req = otx2_mbox_alloc_msg_sso_grp_qos_config(dev->mbox);
-        req->xaq_limit = (dev->nb_xaq_cfg *
-                  (xaq_prcnt ? xaq_prcnt : 100)) / 100;
-        req->taq_thr = (SSO_HWGRP_IAQ_MAX_THR_MASK *
-                (iaq_prcnt ? iaq_prcnt : 100)) / 100;
-        req->iaq_thr = (SSO_HWGRP_TAQ_MAX_THR_MASK *
-                (taq_prcnt ? taq_prcnt : 100)) / 100;
-    }
-
-    if (dev->qos_queue_cnt)
-        otx2_mbox_process(dev->mbox);
-}
-
-static void
-sso_cleanup(struct rte_eventdev *event_dev, uint8_t enable)
-{
-    struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
-    uint16_t i;
-
-    for (i = 0; i < dev->nb_event_ports; i++) {
-        if (dev->dual_ws) {
-            struct otx2_ssogws_dual *ws;
-
-            ws = event_dev->data->ports[i];
-            ssogws_reset((struct otx2_ssogws *)&ws->ws_state[0]);
-            ssogws_reset((struct otx2_ssogws *)&ws->ws_state[1]);
-            ws->swtag_req = 0;
-            ws->vws = 0;
-            ws->fc_mem = dev->fc_mem;
-            ws->xaq_lmt = dev->xaq_lmt;
-        } else {
-            struct otx2_ssogws *ws;
-
-            ws = event_dev->data->ports[i];
-            ssogws_reset(ws);
-            ws->swtag_req = 0;
-            ws->fc_mem = dev->fc_mem;
-            ws->xaq_lmt = dev->xaq_lmt;
-        }
-    }
-
-    rte_mb();
-    if (dev->dual_ws) {
-        struct otx2_ssogws_dual *ws = event_dev->data->ports[0];
-        struct otx2_ssogws temp_ws;
-
-        memcpy(&temp_ws, &ws->ws_state[0],
-               sizeof(struct otx2_ssogws_state));
-        for (i = 0; i < dev->nb_event_queues; i++) {
-            /* Consume all the events through HWS0 */
-            ssogws_flush_events(&temp_ws, i, ws->grps_base[i],
-                        otx2_handle_event, event_dev);
-            /* Enable/Disable SSO GGRP */
-            otx2_write64(enable, ws->grps_base[i] +
-                     SSO_LF_GGRP_QCTL);
-        }
-    } else {
-        struct otx2_ssogws *ws = event_dev->data->ports[0];
-
-        for (i = 0; i < dev->nb_event_queues; i++) {
-            /* Consume all the events through HWS0 */
-            ssogws_flush_events(ws, i, ws->grps_base[i],
-                        otx2_handle_event, event_dev);
-            /* Enable/Disable SSO GGRP */
-            otx2_write64(enable, ws->grps_base[i] +
-                     SSO_LF_GGRP_QCTL);
-        }
-    }
-
-    /* reset SSO GWS cache */
-    otx2_mbox_alloc_msg_sso_ws_cache_inv(dev->mbox);
-    otx2_mbox_process(dev->mbox);
-}
-
-int
-sso_xae_reconfigure(struct rte_eventdev *event_dev)
-{
-    struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
-    int rc = 0;
-
-    if (event_dev->data->dev_started)
-        sso_cleanup(event_dev, 0);
-
-    rc = sso_ggrp_free_xaq(dev);
-    if (rc < 0) {
-        otx2_err("Failed to free XAQ\n");
-        return rc;
-    }
-
-    rte_mempool_free(dev->xaq_pool);
-    dev->xaq_pool = NULL;
-    rc = sso_xaq_allocate(dev);
-    if (rc < 0) {
-        otx2_err("Failed to alloc xaq pool %d", rc);
-        return rc;
-    }
-    rc = sso_ggrp_alloc_xaq(dev);
-    if (rc < 0) {
-        otx2_err("Failed to alloc xaq to ggrp %d", rc);
-        return rc;
-    }
-
-    rte_mb();
-    if (event_dev->data->dev_started)
-        sso_cleanup(event_dev, 1);
-
-    return 0;
-}
-
-static int
-otx2_sso_start(struct rte_eventdev *event_dev)
-{
-    sso_func_trace();
-    sso_qos_cfg(event_dev);
-    sso_cleanup(event_dev, 1);
-    sso_fastpath_fns_set(event_dev);
-
-    return 0;
-}
-
-static void
-otx2_sso_stop(struct rte_eventdev *event_dev)
-{
-    sso_func_trace();
-    sso_cleanup(event_dev, 0);
-    rte_mb();
-}
-
-static int
-otx2_sso_close(struct rte_eventdev *event_dev)
-{
-    struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
-    uint8_t all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
-    uint16_t i;
-
-    if (!dev->configured)
-        return 0;
-
-    sso_unregister_irqs(event_dev);
-
-    for (i = 0; i < dev->nb_event_queues; i++)
-        all_queues[i] = i;
-
-    for (i = 0; i < dev->nb_event_ports; i++)
-        otx2_sso_port_unlink(event_dev, event_dev->data->ports[i],
-                     all_queues, dev->nb_event_queues);
-
-    sso_lf_teardown(dev, SSO_LF_GGRP);
-    sso_lf_teardown(dev, SSO_LF_GWS);
-    dev->nb_event_ports = 0;
-    dev->nb_event_queues = 0;
-    rte_mempool_free(dev->xaq_pool);
rte_memzone_free(rte_memzone_lookup(OTX2_SSO_FC_NAME)); - - return 0; -} - -/* Initialize and register event driver with DPDK Application */ -static struct eventdev_ops otx2_sso_ops = { - .dev_infos_get = otx2_sso_info_get, - .dev_configure = otx2_sso_configure, - .queue_def_conf = otx2_sso_queue_def_conf, - .queue_setup = otx2_sso_queue_setup, - .queue_release = otx2_sso_queue_release, - .port_def_conf = otx2_sso_port_def_conf, - .port_setup = otx2_sso_port_setup, - .port_release = otx2_sso_port_release, - .port_link = otx2_sso_port_link, - .port_unlink = otx2_sso_port_unlink, - .timeout_ticks = otx2_sso_timeout_ticks, - - .eth_rx_adapter_caps_get = otx2_sso_rx_adapter_caps_get, - .eth_rx_adapter_queue_add = otx2_sso_rx_adapter_queue_add, - .eth_rx_adapter_queue_del = otx2_sso_rx_adapter_queue_del, - .eth_rx_adapter_start = otx2_sso_rx_adapter_start, - .eth_rx_adapter_stop = otx2_sso_rx_adapter_stop, - - .eth_tx_adapter_caps_get = otx2_sso_tx_adapter_caps_get, - .eth_tx_adapter_queue_add = otx2_sso_tx_adapter_queue_add, - .eth_tx_adapter_queue_del = otx2_sso_tx_adapter_queue_del, - - .timer_adapter_caps_get = otx2_tim_caps_get, - - .crypto_adapter_caps_get = otx2_ca_caps_get, - .crypto_adapter_queue_pair_add = otx2_ca_qp_add, - .crypto_adapter_queue_pair_del = otx2_ca_qp_del, - - .xstats_get = otx2_sso_xstats_get, - .xstats_reset = otx2_sso_xstats_reset, - .xstats_get_names = otx2_sso_xstats_get_names, - - .dump = otx2_sso_dump, - .dev_start = otx2_sso_start, - .dev_stop = otx2_sso_stop, - .dev_close = otx2_sso_close, - .dev_selftest = otx2_sso_selftest, -}; - -#define OTX2_SSO_XAE_CNT "xae_cnt" -#define OTX2_SSO_SINGLE_WS "single_ws" -#define OTX2_SSO_GGRP_QOS "qos" -#define OTX2_SSO_FORCE_BP "force_rx_bp" - -static void -parse_queue_param(char *value, void *opaque) -{ - struct otx2_sso_qos queue_qos = {0}; - uint8_t *val = (uint8_t *)&queue_qos; - struct otx2_sso_evdev *dev = opaque; - char *tok = strtok(value, "-"); - struct otx2_sso_qos *old_ptr; - - if 
(!strlen(value)) - return; - - while (tok != NULL) { - *val = atoi(tok); - tok = strtok(NULL, "-"); - val++; - } - - if (val != (&queue_qos.iaq_prcnt + 1)) { - otx2_err("Invalid QoS parameter expected [Qx-XAQ-TAQ-IAQ]"); - return; - } - - dev->qos_queue_cnt++; - old_ptr = dev->qos_parse_data; - dev->qos_parse_data = rte_realloc(dev->qos_parse_data, - sizeof(struct otx2_sso_qos) * - dev->qos_queue_cnt, 0); - if (dev->qos_parse_data == NULL) { - dev->qos_parse_data = old_ptr; - dev->qos_queue_cnt--; - return; - } - dev->qos_parse_data[dev->qos_queue_cnt - 1] = queue_qos; -} - -static void -parse_qos_list(const char *value, void *opaque) -{ - char *s = strdup(value); - char *start = NULL; - char *end = NULL; - char *f = s; - - while (*s) { - if (*s == '[') - start = s; - else if (*s == ']') - end = s; - - if (start && start < end) { - *end = 0; - parse_queue_param(start + 1, opaque); - s = end; - start = end; - } - s++; - } - - free(f); -} - -static int -parse_sso_kvargs_dict(const char *key, const char *value, void *opaque) -{ - RTE_SET_USED(key); - - /* Dict format [Qx-XAQ-TAQ-IAQ][Qz-XAQ-TAQ-IAQ] use '-' cause ',' - * isn't allowed. Everything is expressed in percentages, 0 represents - * default. 
- */ - parse_qos_list(value, opaque); - - return 0; -} - -static void -sso_parse_devargs(struct otx2_sso_evdev *dev, struct rte_devargs *devargs) -{ - struct rte_kvargs *kvlist; - uint8_t single_ws = 0; - - if (devargs == NULL) - return; - kvlist = rte_kvargs_parse(devargs->args, NULL); - if (kvlist == NULL) - return; - - rte_kvargs_process(kvlist, OTX2_SSO_XAE_CNT, &parse_kvargs_value, - &dev->xae_cnt); - rte_kvargs_process(kvlist, OTX2_SSO_SINGLE_WS, &parse_kvargs_flag, - &single_ws); - rte_kvargs_process(kvlist, OTX2_SSO_GGRP_QOS, &parse_sso_kvargs_dict, - dev); - rte_kvargs_process(kvlist, OTX2_SSO_FORCE_BP, &parse_kvargs_flag, - &dev->force_rx_bp); - otx2_parse_common_devargs(kvlist); - dev->dual_ws = !single_ws; - rte_kvargs_free(kvlist); -} - -static int -otx2_sso_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev) -{ - return rte_event_pmd_pci_probe(pci_drv, pci_dev, - sizeof(struct otx2_sso_evdev), - otx2_sso_init); -} - -static int -otx2_sso_remove(struct rte_pci_device *pci_dev) -{ - return rte_event_pmd_pci_remove(pci_dev, otx2_sso_fini); -} - -static const struct rte_pci_id pci_sso_map[] = { - { - RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, - PCI_DEVID_OCTEONTX2_RVU_SSO_TIM_PF) - }, - { - .vendor_id = 0, - }, -}; - -static struct rte_pci_driver pci_sso = { - .id_table = pci_sso_map, - .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA, - .probe = otx2_sso_probe, - .remove = otx2_sso_remove, -}; - -int -otx2_sso_init(struct rte_eventdev *event_dev) -{ - struct free_rsrcs_rsp *rsrc_cnt; - struct rte_pci_device *pci_dev; - struct otx2_sso_evdev *dev; - int rc; - - event_dev->dev_ops = &otx2_sso_ops; - /* For secondary processes, the primary has done all the work */ - if (rte_eal_process_type() != RTE_PROC_PRIMARY) { - sso_fastpath_fns_set(event_dev); - return 0; - } - - dev = sso_pmd_priv(event_dev); - - pci_dev = container_of(event_dev->dev, struct rte_pci_device, device); - - /* Initialize the base otx2_dev object */ - rc = 
otx2_dev_init(pci_dev, dev); - if (rc < 0) { - otx2_err("Failed to initialize otx2_dev rc=%d", rc); - goto error; - } - - /* Get SSO and SSOW MSIX rsrc cnt */ - otx2_mbox_alloc_msg_free_rsrc_cnt(dev->mbox); - rc = otx2_mbox_process_msg(dev->mbox, (void *)&rsrc_cnt); - if (rc < 0) { - otx2_err("Unable to get free rsrc count"); - goto otx2_dev_uninit; - } - otx2_sso_dbg("SSO %d SSOW %d NPA %d provisioned", rsrc_cnt->sso, - rsrc_cnt->ssow, rsrc_cnt->npa); - - dev->max_event_ports = RTE_MIN(rsrc_cnt->ssow, OTX2_SSO_MAX_VHWS); - dev->max_event_queues = RTE_MIN(rsrc_cnt->sso, OTX2_SSO_MAX_VHGRP); - /* Grab the NPA LF if required */ - rc = otx2_npa_lf_init(pci_dev, dev); - if (rc < 0) { - otx2_err("Unable to init NPA lf. It might not be provisioned"); - goto otx2_dev_uninit; - } - - dev->drv_inited = true; - dev->is_timeout_deq = 0; - dev->min_dequeue_timeout_ns = USEC2NSEC(1); - dev->max_dequeue_timeout_ns = USEC2NSEC(0x3FF); - dev->max_num_events = -1; - dev->nb_event_queues = 0; - dev->nb_event_ports = 0; - - if (!dev->max_event_ports || !dev->max_event_queues) { - otx2_err("Not enough eventdev resource queues=%d ports=%d", - dev->max_event_queues, dev->max_event_ports); - rc = -ENODEV; - goto otx2_npa_lf_uninit; - } - - dev->dual_ws = 1; - sso_parse_devargs(dev, pci_dev->device.devargs); - if (dev->dual_ws) { - otx2_sso_dbg("Using dual workslot mode"); - dev->max_event_ports = dev->max_event_ports / 2; - } else { - otx2_sso_dbg("Using single workslot mode"); - } - - otx2_sso_pf_func_set(dev->pf_func); - otx2_sso_dbg("Initializing %s max_queues=%d max_ports=%d", - event_dev->data->name, dev->max_event_queues, - dev->max_event_ports); - - otx2_tim_init(pci_dev, (struct otx2_dev *)dev); - - return 0; - -otx2_npa_lf_uninit: - otx2_npa_lf_fini(); -otx2_dev_uninit: - otx2_dev_fini(pci_dev, dev); -error: - return rc; -} - -int -otx2_sso_fini(struct rte_eventdev *event_dev) -{ - struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev); - struct rte_pci_device *pci_dev; - - /* For 
secondary processes, nothing to be done */ - if (rte_eal_process_type() != RTE_PROC_PRIMARY) - return 0; - - pci_dev = container_of(event_dev->dev, struct rte_pci_device, device); - - if (!dev->drv_inited) - goto dev_fini; - - dev->drv_inited = false; - otx2_npa_lf_fini(); - -dev_fini: - if (otx2_npa_lf_active(dev)) { - otx2_info("Common resource in use by other devices"); - return -EAGAIN; - } - - otx2_tim_fini(); - otx2_dev_fini(pci_dev, dev); - - return 0; -} - -RTE_PMD_REGISTER_PCI(event_octeontx2, pci_sso); -RTE_PMD_REGISTER_PCI_TABLE(event_octeontx2, pci_sso_map); -RTE_PMD_REGISTER_KMOD_DEP(event_octeontx2, "vfio-pci"); -RTE_PMD_REGISTER_PARAM_STRING(event_octeontx2, OTX2_SSO_XAE_CNT "=" - OTX2_SSO_SINGLE_WS "=1" - OTX2_SSO_GGRP_QOS "=" - OTX2_SSO_FORCE_BP "=1" - OTX2_NPA_LOCK_MASK "=<1-65535>"); diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h deleted file mode 100644 index a5d34b7df7..0000000000 --- a/drivers/event/octeontx2/otx2_evdev.h +++ /dev/null @@ -1,430 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2019 Marvell International Ltd. 
- */ - -#ifndef __OTX2_EVDEV_H__ -#define __OTX2_EVDEV_H__ - -#include -#include -#include -#include - -#include "otx2_common.h" -#include "otx2_dev.h" -#include "otx2_ethdev.h" -#include "otx2_mempool.h" -#include "otx2_tim_evdev.h" - -#define EVENTDEV_NAME_OCTEONTX2_PMD event_octeontx2 - -#define sso_func_trace otx2_sso_dbg - -#define OTX2_SSO_MAX_VHGRP RTE_EVENT_MAX_QUEUES_PER_DEV -#define OTX2_SSO_MAX_VHWS (UINT8_MAX) -#define OTX2_SSO_FC_NAME "otx2_evdev_xaq_fc" -#define OTX2_SSO_SQB_LIMIT (0x180) -#define OTX2_SSO_XAQ_SLACK (8) -#define OTX2_SSO_XAQ_CACHE_CNT (0x7) -#define OTX2_SSO_WQE_SG_PTR (9) - -/* SSO LF register offsets (BAR2) */ -#define SSO_LF_GGRP_OP_ADD_WORK0 (0x0ull) -#define SSO_LF_GGRP_OP_ADD_WORK1 (0x8ull) - -#define SSO_LF_GGRP_QCTL (0x20ull) -#define SSO_LF_GGRP_EXE_DIS (0x80ull) -#define SSO_LF_GGRP_INT (0x100ull) -#define SSO_LF_GGRP_INT_W1S (0x108ull) -#define SSO_LF_GGRP_INT_ENA_W1S (0x110ull) -#define SSO_LF_GGRP_INT_ENA_W1C (0x118ull) -#define SSO_LF_GGRP_INT_THR (0x140ull) -#define SSO_LF_GGRP_INT_CNT (0x180ull) -#define SSO_LF_GGRP_XAQ_CNT (0x1b0ull) -#define SSO_LF_GGRP_AQ_CNT (0x1c0ull) -#define SSO_LF_GGRP_AQ_THR (0x1e0ull) -#define SSO_LF_GGRP_MISC_CNT (0x200ull) - -/* SSOW LF register offsets (BAR2) */ -#define SSOW_LF_GWS_LINKS (0x10ull) -#define SSOW_LF_GWS_PENDWQP (0x40ull) -#define SSOW_LF_GWS_PENDSTATE (0x50ull) -#define SSOW_LF_GWS_NW_TIM (0x70ull) -#define SSOW_LF_GWS_GRPMSK_CHG (0x80ull) -#define SSOW_LF_GWS_INT (0x100ull) -#define SSOW_LF_GWS_INT_W1S (0x108ull) -#define SSOW_LF_GWS_INT_ENA_W1S (0x110ull) -#define SSOW_LF_GWS_INT_ENA_W1C (0x118ull) -#define SSOW_LF_GWS_TAG (0x200ull) -#define SSOW_LF_GWS_WQP (0x210ull) -#define SSOW_LF_GWS_SWTP (0x220ull) -#define SSOW_LF_GWS_PENDTAG (0x230ull) -#define SSOW_LF_GWS_OP_ALLOC_WE (0x400ull) -#define SSOW_LF_GWS_OP_GET_WORK (0x600ull) -#define SSOW_LF_GWS_OP_SWTAG_FLUSH (0x800ull) -#define SSOW_LF_GWS_OP_SWTAG_UNTAG (0x810ull) -#define SSOW_LF_GWS_OP_SWTP_CLR (0x820ull) 
-#define SSOW_LF_GWS_OP_UPD_WQP_GRP0 (0x830ull) -#define SSOW_LF_GWS_OP_UPD_WQP_GRP1 (0x838ull) -#define SSOW_LF_GWS_OP_DESCHED (0x880ull) -#define SSOW_LF_GWS_OP_DESCHED_NOSCH (0x8c0ull) -#define SSOW_LF_GWS_OP_SWTAG_DESCHED (0x980ull) -#define SSOW_LF_GWS_OP_SWTAG_NOSCHED (0x9c0ull) -#define SSOW_LF_GWS_OP_CLR_NSCHED0 (0xa00ull) -#define SSOW_LF_GWS_OP_CLR_NSCHED1 (0xa08ull) -#define SSOW_LF_GWS_OP_SWTP_SET (0xc00ull) -#define SSOW_LF_GWS_OP_SWTAG_NORM (0xc10ull) -#define SSOW_LF_GWS_OP_SWTAG_FULL0 (0xc20ull) -#define SSOW_LF_GWS_OP_SWTAG_FULL1 (0xc28ull) -#define SSOW_LF_GWS_OP_GWC_INVAL (0xe00ull) - -#define OTX2_SSOW_GET_BASE_ADDR(_GW) ((_GW) - SSOW_LF_GWS_OP_GET_WORK) -#define OTX2_SSOW_TT_FROM_TAG(x) (((x) >> 32) & SSO_TT_EMPTY) -#define OTX2_SSOW_GRP_FROM_TAG(x) (((x) >> 36) & 0x3ff) - -#define NSEC2USEC(__ns) ((__ns) / 1E3) -#define USEC2NSEC(__us) ((__us) * 1E3) -#define NSEC2TICK(__ns, __freq) (((__ns) * (__freq)) / 1E9) -#define TICK2NSEC(__tck, __freq) (((__tck) * 1E9) / (__freq)) - -enum otx2_sso_lf_type { - SSO_LF_GGRP, - SSO_LF_GWS -}; - -union otx2_sso_event { - uint64_t get_work0; - struct { - uint32_t flow_id:20; - uint32_t sub_event_type:8; - uint32_t event_type:4; - uint8_t op:2; - uint8_t rsvd:4; - uint8_t sched_type:2; - uint8_t queue_id; - uint8_t priority; - uint8_t impl_opaque; - }; -} __rte_aligned(64); - -enum { - SSO_SYNC_ORDERED, - SSO_SYNC_ATOMIC, - SSO_SYNC_UNTAGGED, - SSO_SYNC_EMPTY -}; - -struct otx2_sso_qos { - uint8_t queue; - uint8_t xaq_prcnt; - uint8_t taq_prcnt; - uint8_t iaq_prcnt; -}; - -struct otx2_sso_evdev { - OTX2_DEV; /* Base class */ - uint8_t max_event_queues; - uint8_t max_event_ports; - uint8_t is_timeout_deq; - uint8_t nb_event_queues; - uint8_t nb_event_ports; - uint8_t configured; - uint32_t deq_tmo_ns; - uint32_t min_dequeue_timeout_ns; - uint32_t max_dequeue_timeout_ns; - int32_t max_num_events; - uint64_t *fc_mem; - uint64_t xaq_lmt; - uint64_t nb_xaq_cfg; - rte_iova_t fc_iova; - struct rte_mempool *xaq_pool; 
- uint64_t rx_offloads; - uint64_t tx_offloads; - uint64_t adptr_xae_cnt; - uint16_t rx_adptr_pool_cnt; - uint64_t *rx_adptr_pools; - uint16_t max_port_id; - uint16_t tim_adptr_ring_cnt; - uint16_t *timer_adptr_rings; - uint64_t *timer_adptr_sz; - /* Dev args */ - uint8_t dual_ws; - uint32_t xae_cnt; - uint8_t qos_queue_cnt; - uint8_t force_rx_bp; - struct otx2_sso_qos *qos_parse_data; - /* HW const */ - uint32_t xae_waes; - uint32_t xaq_buf_size; - uint32_t iue; - /* MSIX offsets */ - uint16_t sso_msixoff[OTX2_SSO_MAX_VHGRP]; - uint16_t ssow_msixoff[OTX2_SSO_MAX_VHWS]; - /* PTP timestamp */ - struct otx2_timesync_info *tstamp; -} __rte_cache_aligned; - -#define OTX2_SSOGWS_OPS \ - /* WS ops */ \ - uintptr_t getwrk_op; \ - uintptr_t tag_op; \ - uintptr_t wqp_op; \ - uintptr_t swtag_flush_op; \ - uintptr_t swtag_norm_op; \ - uintptr_t swtag_desched_op; - -/* Event port aka GWS */ -struct otx2_ssogws { - /* Get Work Fastpath data */ - OTX2_SSOGWS_OPS; - /* PTP timestamp */ - struct otx2_timesync_info *tstamp; - void *lookup_mem; - uint8_t swtag_req; - uint8_t port; - /* Add Work Fastpath data */ - uint64_t xaq_lmt __rte_cache_aligned; - uint64_t *fc_mem; - uintptr_t grps_base[OTX2_SSO_MAX_VHGRP]; - /* Tx Fastpath data */ - uint64_t base __rte_cache_aligned; - uint8_t tx_adptr_data[]; -} __rte_cache_aligned; - -struct otx2_ssogws_state { - OTX2_SSOGWS_OPS; -}; - -struct otx2_ssogws_dual { - /* Get Work Fastpath data */ - struct otx2_ssogws_state ws_state[2]; /* Ping and Pong */ - /* PTP timestamp */ - struct otx2_timesync_info *tstamp; - void *lookup_mem; - uint8_t swtag_req; - uint8_t vws; /* Ping pong bit */ - uint8_t port; - /* Add Work Fastpath data */ - uint64_t xaq_lmt __rte_cache_aligned; - uint64_t *fc_mem; - uintptr_t grps_base[OTX2_SSO_MAX_VHGRP]; - /* Tx Fastpath data */ - uint64_t base[2] __rte_cache_aligned; - uint8_t tx_adptr_data[]; -} __rte_cache_aligned; - -static inline struct otx2_sso_evdev * -sso_pmd_priv(const struct rte_eventdev *event_dev) -{ - 
return event_dev->data->dev_private; -} - -struct otx2_ssogws_cookie { - const struct rte_eventdev *event_dev; - bool configured; -}; - -static inline struct otx2_ssogws_cookie * -ssogws_get_cookie(void *ws) -{ - return (struct otx2_ssogws_cookie *) - ((uint8_t *)ws - RTE_CACHE_LINE_SIZE); -} - -static const union mbuf_initializer mbuf_init = { - .fields = { - .data_off = RTE_PKTMBUF_HEADROOM, - .refcnt = 1, - .nb_segs = 1, - .port = 0 - } -}; - -static __rte_always_inline void -otx2_wqe_to_mbuf(uint64_t get_work1, const uint64_t mbuf, uint8_t port_id, - const uint32_t tag, const uint32_t flags, - const void * const lookup_mem) -{ - struct nix_wqe_hdr_s *wqe = (struct nix_wqe_hdr_s *)get_work1; - uint64_t val = mbuf_init.value | (uint64_t)port_id << 48; - - if (flags & NIX_RX_OFFLOAD_TSTAMP_F) - val |= NIX_TIMESYNC_RX_OFFSET; - - otx2_nix_cqe_to_mbuf((struct nix_cqe_hdr_s *)wqe, tag, - (struct rte_mbuf *)mbuf, lookup_mem, - val, flags); - -} - -static inline int -parse_kvargs_flag(const char *key, const char *value, void *opaque) -{ - RTE_SET_USED(key); - - *(uint8_t *)opaque = !!atoi(value); - return 0; -} - -static inline int -parse_kvargs_value(const char *key, const char *value, void *opaque) -{ - RTE_SET_USED(key); - - *(uint32_t *)opaque = (uint32_t)atoi(value); - return 0; -} - -#define SSO_RX_ADPTR_ENQ_FASTPATH_FUNC NIX_RX_FASTPATH_MODES -#define SSO_TX_ADPTR_ENQ_FASTPATH_FUNC NIX_TX_FASTPATH_MODES - -/* Single WS API's */ -uint16_t otx2_ssogws_enq(void *port, const struct rte_event *ev); -uint16_t otx2_ssogws_enq_burst(void *port, const struct rte_event ev[], - uint16_t nb_events); -uint16_t otx2_ssogws_enq_new_burst(void *port, const struct rte_event ev[], - uint16_t nb_events); -uint16_t otx2_ssogws_enq_fwd_burst(void *port, const struct rte_event ev[], - uint16_t nb_events); - -/* Dual WS API's */ -uint16_t otx2_ssogws_dual_enq(void *port, const struct rte_event *ev); -uint16_t otx2_ssogws_dual_enq_burst(void *port, const struct rte_event ev[], - 
uint16_t nb_events); -uint16_t otx2_ssogws_dual_enq_new_burst(void *port, const struct rte_event ev[], - uint16_t nb_events); -uint16_t otx2_ssogws_dual_enq_fwd_burst(void *port, const struct rte_event ev[], - uint16_t nb_events); - -/* Auto generated API's */ -#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ -uint16_t otx2_ssogws_deq_ ##name(void *port, struct rte_event *ev, \ - uint64_t timeout_ticks); \ -uint16_t otx2_ssogws_deq_burst_ ##name(void *port, struct rte_event ev[], \ - uint16_t nb_events, \ - uint64_t timeout_ticks); \ -uint16_t otx2_ssogws_deq_timeout_ ##name(void *port, \ - struct rte_event *ev, \ - uint64_t timeout_ticks); \ -uint16_t otx2_ssogws_deq_timeout_burst_ ##name(void *port, \ - struct rte_event ev[], \ - uint16_t nb_events, \ - uint64_t timeout_ticks); \ -uint16_t otx2_ssogws_deq_seg_ ##name(void *port, struct rte_event *ev, \ - uint64_t timeout_ticks); \ -uint16_t otx2_ssogws_deq_seg_burst_ ##name(void *port, \ - struct rte_event ev[], \ - uint16_t nb_events, \ - uint64_t timeout_ticks); \ -uint16_t otx2_ssogws_deq_seg_timeout_ ##name(void *port, \ - struct rte_event *ev, \ - uint64_t timeout_ticks); \ -uint16_t otx2_ssogws_deq_seg_timeout_burst_ ##name(void *port, \ - struct rte_event ev[], \ - uint16_t nb_events, \ - uint64_t timeout_ticks); \ - \ -uint16_t otx2_ssogws_dual_deq_ ##name(void *port, struct rte_event *ev, \ - uint64_t timeout_ticks); \ -uint16_t otx2_ssogws_dual_deq_burst_ ##name(void *port, \ - struct rte_event ev[], \ - uint16_t nb_events, \ - uint64_t timeout_ticks); \ -uint16_t otx2_ssogws_dual_deq_timeout_ ##name(void *port, \ - struct rte_event *ev, \ - uint64_t timeout_ticks); \ -uint16_t otx2_ssogws_dual_deq_timeout_burst_ ##name(void *port, \ - struct rte_event ev[], \ - uint16_t nb_events, \ - uint64_t timeout_ticks); \ -uint16_t otx2_ssogws_dual_deq_seg_ ##name(void *port, struct rte_event *ev, \ - uint64_t timeout_ticks); \ -uint16_t otx2_ssogws_dual_deq_seg_burst_ ##name(void *port, \ - struct rte_event 
ev[], \ - uint16_t nb_events, \ - uint64_t timeout_ticks); \ -uint16_t otx2_ssogws_dual_deq_seg_timeout_ ##name(void *port, \ - struct rte_event *ev, \ - uint64_t timeout_ticks); \ -uint16_t otx2_ssogws_dual_deq_seg_timeout_burst_ ##name(void *port, \ - struct rte_event ev[], \ - uint16_t nb_events, \ - uint64_t timeout_ticks);\ - -SSO_RX_ADPTR_ENQ_FASTPATH_FUNC -#undef R - -#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \ -uint16_t otx2_ssogws_tx_adptr_enq_ ## name(void *port, struct rte_event ev[],\ - uint16_t nb_events); \ -uint16_t otx2_ssogws_tx_adptr_enq_seg_ ## name(void *port, \ - struct rte_event ev[], \ - uint16_t nb_events); \ -uint16_t otx2_ssogws_dual_tx_adptr_enq_ ## name(void *port, \ - struct rte_event ev[], \ - uint16_t nb_events); \ -uint16_t otx2_ssogws_dual_tx_adptr_enq_seg_ ## name(void *port, \ - struct rte_event ev[], \ - uint16_t nb_events); \ - -SSO_TX_ADPTR_ENQ_FASTPATH_FUNC -#undef T - -void sso_updt_xae_cnt(struct otx2_sso_evdev *dev, void *data, - uint32_t event_type); -int sso_xae_reconfigure(struct rte_eventdev *event_dev); -void sso_fastpath_fns_set(struct rte_eventdev *event_dev); - -int otx2_sso_rx_adapter_caps_get(const struct rte_eventdev *event_dev, - const struct rte_eth_dev *eth_dev, - uint32_t *caps); -int otx2_sso_rx_adapter_queue_add(const struct rte_eventdev *event_dev, - const struct rte_eth_dev *eth_dev, - int32_t rx_queue_id, - const struct rte_event_eth_rx_adapter_queue_conf *queue_conf); -int otx2_sso_rx_adapter_queue_del(const struct rte_eventdev *event_dev, - const struct rte_eth_dev *eth_dev, - int32_t rx_queue_id); -int otx2_sso_rx_adapter_start(const struct rte_eventdev *event_dev, - const struct rte_eth_dev *eth_dev); -int otx2_sso_rx_adapter_stop(const struct rte_eventdev *event_dev, - const struct rte_eth_dev *eth_dev); -int otx2_sso_tx_adapter_caps_get(const struct rte_eventdev *dev, - const struct rte_eth_dev *eth_dev, - uint32_t *caps); -int otx2_sso_tx_adapter_queue_add(uint8_t id, - const struct 
rte_eventdev *event_dev, - const struct rte_eth_dev *eth_dev, - int32_t tx_queue_id); - -int otx2_sso_tx_adapter_queue_del(uint8_t id, - const struct rte_eventdev *event_dev, - const struct rte_eth_dev *eth_dev, - int32_t tx_queue_id); - -/* Event crypto adapter API's */ -int otx2_ca_caps_get(const struct rte_eventdev *dev, - const struct rte_cryptodev *cdev, uint32_t *caps); - -int otx2_ca_qp_add(const struct rte_eventdev *dev, - const struct rte_cryptodev *cdev, int32_t queue_pair_id, - const struct rte_event *event); - -int otx2_ca_qp_del(const struct rte_eventdev *dev, - const struct rte_cryptodev *cdev, int32_t queue_pair_id); - -/* Clean up API's */ -typedef void (*otx2_handle_event_t)(void *arg, struct rte_event ev); -void ssogws_flush_events(struct otx2_ssogws *ws, uint8_t queue_id, - uintptr_t base, otx2_handle_event_t fn, void *arg); -void ssogws_reset(struct otx2_ssogws *ws); -/* Selftest */ -int otx2_sso_selftest(void); -/* Init and Fini API's */ -int otx2_sso_init(struct rte_eventdev *event_dev); -int otx2_sso_fini(struct rte_eventdev *event_dev); -/* IRQ handlers */ -int sso_register_irqs(const struct rte_eventdev *event_dev); -void sso_unregister_irqs(const struct rte_eventdev *event_dev); - -#endif /* __OTX2_EVDEV_H__ */ diff --git a/drivers/event/octeontx2/otx2_evdev_adptr.c b/drivers/event/octeontx2/otx2_evdev_adptr.c deleted file mode 100644 index a91f784b1e..0000000000 --- a/drivers/event/octeontx2/otx2_evdev_adptr.c +++ /dev/null @@ -1,656 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2019-2021 Marvell. 
- */ - -#include "otx2_evdev.h" - -#define NIX_RQ_AURA_THRESH(x) (((x)*95) / 100) - -int -otx2_sso_rx_adapter_caps_get(const struct rte_eventdev *event_dev, - const struct rte_eth_dev *eth_dev, uint32_t *caps) -{ - int rc; - - RTE_SET_USED(event_dev); - rc = strncmp(eth_dev->device->driver->name, "net_octeontx2", 13); - if (rc) - *caps = RTE_EVENT_ETH_RX_ADAPTER_SW_CAP; - else - *caps = RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT | - RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ; - - return 0; -} - -static inline int -sso_rxq_enable(struct otx2_eth_dev *dev, uint16_t qid, uint8_t tt, uint8_t ggrp, - uint16_t eth_port_id) -{ - struct otx2_mbox *mbox = dev->mbox; - struct nix_aq_enq_req *aq; - int rc; - - aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); - aq->qidx = qid; - aq->ctype = NIX_AQ_CTYPE_CQ; - aq->op = NIX_AQ_INSTOP_WRITE; - - aq->cq.ena = 0; - aq->cq.caching = 0; - - otx2_mbox_memset(&aq->cq_mask, 0, sizeof(struct nix_cq_ctx_s)); - aq->cq_mask.ena = ~(aq->cq_mask.ena); - aq->cq_mask.caching = ~(aq->cq_mask.caching); - - rc = otx2_mbox_process(mbox); - if (rc < 0) { - otx2_err("Failed to disable cq context"); - goto fail; - } - - aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); - aq->qidx = qid; - aq->ctype = NIX_AQ_CTYPE_RQ; - aq->op = NIX_AQ_INSTOP_WRITE; - - aq->rq.sso_ena = 1; - aq->rq.sso_tt = tt; - aq->rq.sso_grp = ggrp; - aq->rq.ena_wqwd = 1; - /* Mbuf Header generation : - * > FIRST_SKIP is a super set of WQE_SKIP, dont modify first skip as - * it already has data related to mbuf size, headroom, private area. - * > Using WQE_SKIP we can directly assign - * mbuf = wqe - sizeof(struct mbuf); - * so that mbuf header will not have unpredicted values while headroom - * and private data starts at the beginning of wqe_data. 
- */ - aq->rq.wqe_skip = 1; - aq->rq.wqe_caching = 1; - aq->rq.spb_ena = 0; - aq->rq.flow_tagw = 20; /* 20-bits */ - - /* Flow Tag calculation : - * - * rq_tag <31:24> = good/bad_tag<8:0>; - * rq_tag <23:0> = [ltag] - * - * flow_tag_mask<31:0> = (1 << flow_tagw) - 1; <31:20> - * tag<31:0> = (~flow_tag_mask & rq_tag) | (flow_tag_mask & flow_tag); - * - * Setup : - * ltag<23:0> = (eth_port_id & 0xF) << 20; - * good/bad_tag<8:0> = - * ((eth_port_id >> 4) & 0xF) | (RTE_EVENT_TYPE_ETHDEV << 4); - * - * TAG<31:0> on getwork = <31:28>(RTE_EVENT_TYPE_ETHDEV) | - * <27:20> (eth_port_id) | <20:0> [TAG] - */ - - aq->rq.ltag = (eth_port_id & 0xF) << 20; - aq->rq.good_utag = ((eth_port_id >> 4) & 0xF) | - (RTE_EVENT_TYPE_ETHDEV << 4); - aq->rq.bad_utag = aq->rq.good_utag; - - aq->rq.ena = 0; /* Don't enable RQ yet */ - aq->rq.pb_caching = 0x2; /* First cache aligned block to LLC */ - aq->rq.xqe_imm_size = 0; /* No pkt data copy to CQE */ - - otx2_mbox_memset(&aq->rq_mask, 0, sizeof(struct nix_rq_ctx_s)); - /* mask the bits to write. 
*/ - aq->rq_mask.sso_ena = ~(aq->rq_mask.sso_ena); - aq->rq_mask.sso_tt = ~(aq->rq_mask.sso_tt); - aq->rq_mask.sso_grp = ~(aq->rq_mask.sso_grp); - aq->rq_mask.ena_wqwd = ~(aq->rq_mask.ena_wqwd); - aq->rq_mask.wqe_skip = ~(aq->rq_mask.wqe_skip); - aq->rq_mask.wqe_caching = ~(aq->rq_mask.wqe_caching); - aq->rq_mask.spb_ena = ~(aq->rq_mask.spb_ena); - aq->rq_mask.flow_tagw = ~(aq->rq_mask.flow_tagw); - aq->rq_mask.ltag = ~(aq->rq_mask.ltag); - aq->rq_mask.good_utag = ~(aq->rq_mask.good_utag); - aq->rq_mask.bad_utag = ~(aq->rq_mask.bad_utag); - aq->rq_mask.ena = ~(aq->rq_mask.ena); - aq->rq_mask.pb_caching = ~(aq->rq_mask.pb_caching); - aq->rq_mask.xqe_imm_size = ~(aq->rq_mask.xqe_imm_size); - - rc = otx2_mbox_process(mbox); - if (rc < 0) { - otx2_err("Failed to init rx adapter context"); - goto fail; - } - - return 0; -fail: - return rc; -} - -static inline int -sso_rxq_disable(struct otx2_eth_dev *dev, uint16_t qid) -{ - struct otx2_mbox *mbox = dev->mbox; - struct nix_aq_enq_req *aq; - int rc; - - aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); - aq->qidx = qid; - aq->ctype = NIX_AQ_CTYPE_CQ; - aq->op = NIX_AQ_INSTOP_WRITE; - - aq->cq.ena = 1; - aq->cq.caching = 1; - - otx2_mbox_memset(&aq->cq_mask, 0, sizeof(struct nix_cq_ctx_s)); - aq->cq_mask.ena = ~(aq->cq_mask.ena); - aq->cq_mask.caching = ~(aq->cq_mask.caching); - - rc = otx2_mbox_process(mbox); - if (rc < 0) { - otx2_err("Failed to enable cq context"); - goto fail; - } - - aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); - aq->qidx = qid; - aq->ctype = NIX_AQ_CTYPE_RQ; - aq->op = NIX_AQ_INSTOP_WRITE; - - aq->rq.sso_ena = 0; - aq->rq.sso_tt = SSO_TT_UNTAGGED; - aq->rq.sso_grp = 0; - aq->rq.ena_wqwd = 0; - aq->rq.wqe_caching = 0; - aq->rq.wqe_skip = 0; - aq->rq.spb_ena = 0; - aq->rq.flow_tagw = 0x20; - aq->rq.ltag = 0; - aq->rq.good_utag = 0; - aq->rq.bad_utag = 0; - aq->rq.ena = 1; - aq->rq.pb_caching = 0x2; /* First cache aligned block to LLC */ - aq->rq.xqe_imm_size = 0; /* No pkt data copy to CQE */ - - 
otx2_mbox_memset(&aq->rq_mask, 0, sizeof(struct nix_rq_ctx_s)); - /* mask the bits to write. */ - aq->rq_mask.sso_ena = ~(aq->rq_mask.sso_ena); - aq->rq_mask.sso_tt = ~(aq->rq_mask.sso_tt); - aq->rq_mask.sso_grp = ~(aq->rq_mask.sso_grp); - aq->rq_mask.ena_wqwd = ~(aq->rq_mask.ena_wqwd); - aq->rq_mask.wqe_caching = ~(aq->rq_mask.wqe_caching); - aq->rq_mask.wqe_skip = ~(aq->rq_mask.wqe_skip); - aq->rq_mask.spb_ena = ~(aq->rq_mask.spb_ena); - aq->rq_mask.flow_tagw = ~(aq->rq_mask.flow_tagw); - aq->rq_mask.ltag = ~(aq->rq_mask.ltag); - aq->rq_mask.good_utag = ~(aq->rq_mask.good_utag); - aq->rq_mask.bad_utag = ~(aq->rq_mask.bad_utag); - aq->rq_mask.ena = ~(aq->rq_mask.ena); - aq->rq_mask.pb_caching = ~(aq->rq_mask.pb_caching); - aq->rq_mask.xqe_imm_size = ~(aq->rq_mask.xqe_imm_size); - - rc = otx2_mbox_process(mbox); - if (rc < 0) { - otx2_err("Failed to clear rx adapter context"); - goto fail; - } - - return 0; -fail: - return rc; -} - -void -sso_updt_xae_cnt(struct otx2_sso_evdev *dev, void *data, uint32_t event_type) -{ - int i; - - switch (event_type) { - case RTE_EVENT_TYPE_ETHDEV: - { - struct otx2_eth_rxq *rxq = data; - uint64_t *old_ptr; - - for (i = 0; i < dev->rx_adptr_pool_cnt; i++) { - if ((uint64_t)rxq->pool == dev->rx_adptr_pools[i]) - return; - } - - dev->rx_adptr_pool_cnt++; - old_ptr = dev->rx_adptr_pools; - dev->rx_adptr_pools = rte_realloc(dev->rx_adptr_pools, - sizeof(uint64_t) * - dev->rx_adptr_pool_cnt, 0); - if (dev->rx_adptr_pools == NULL) { - dev->adptr_xae_cnt += rxq->pool->size; - dev->rx_adptr_pools = old_ptr; - dev->rx_adptr_pool_cnt--; - return; - } - dev->rx_adptr_pools[dev->rx_adptr_pool_cnt - 1] = - (uint64_t)rxq->pool; - - dev->adptr_xae_cnt += rxq->pool->size; - break; - } - case RTE_EVENT_TYPE_TIMER: - { - struct otx2_tim_ring *timr = data; - uint16_t *old_ring_ptr; - uint64_t *old_sz_ptr; - - for (i = 0; i < dev->tim_adptr_ring_cnt; i++) { - if (timr->ring_id != dev->timer_adptr_rings[i]) - continue; - if (timr->nb_timers == 
dev->timer_adptr_sz[i]) - return; - dev->adptr_xae_cnt -= dev->timer_adptr_sz[i]; - dev->adptr_xae_cnt += timr->nb_timers; - dev->timer_adptr_sz[i] = timr->nb_timers; - - return; - } - - dev->tim_adptr_ring_cnt++; - old_ring_ptr = dev->timer_adptr_rings; - old_sz_ptr = dev->timer_adptr_sz; - - dev->timer_adptr_rings = rte_realloc(dev->timer_adptr_rings, - sizeof(uint16_t) * - dev->tim_adptr_ring_cnt, - 0); - if (dev->timer_adptr_rings == NULL) { - dev->adptr_xae_cnt += timr->nb_timers; - dev->timer_adptr_rings = old_ring_ptr; - dev->tim_adptr_ring_cnt--; - return; - } - - dev->timer_adptr_sz = rte_realloc(dev->timer_adptr_sz, - sizeof(uint64_t) * - dev->tim_adptr_ring_cnt, - 0); - - if (dev->timer_adptr_sz == NULL) { - dev->adptr_xae_cnt += timr->nb_timers; - dev->timer_adptr_sz = old_sz_ptr; - dev->tim_adptr_ring_cnt--; - return; - } - - dev->timer_adptr_rings[dev->tim_adptr_ring_cnt - 1] = - timr->ring_id; - dev->timer_adptr_sz[dev->tim_adptr_ring_cnt - 1] = - timr->nb_timers; - - dev->adptr_xae_cnt += timr->nb_timers; - break; - } - default: - break; - } -} - -static inline void -sso_updt_lookup_mem(const struct rte_eventdev *event_dev, void *lookup_mem) -{ - struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev); - int i; - - for (i = 0; i < dev->nb_event_ports; i++) { - if (dev->dual_ws) { - struct otx2_ssogws_dual *ws = event_dev->data->ports[i]; - - ws->lookup_mem = lookup_mem; - } else { - struct otx2_ssogws *ws = event_dev->data->ports[i]; - - ws->lookup_mem = lookup_mem; - } - } -} - -static inline void -sso_cfg_nix_mp_bpid(struct otx2_sso_evdev *dev, - struct otx2_eth_dev *otx2_eth_dev, struct otx2_eth_rxq *rxq, - uint8_t ena) -{ - struct otx2_fc_info *fc = &otx2_eth_dev->fc_info; - struct npa_aq_enq_req *req; - struct npa_aq_enq_rsp *rsp; - struct otx2_npa_lf *lf; - struct otx2_mbox *mbox; - uint32_t limit; - int rc; - - if (otx2_dev_is_sdp(otx2_eth_dev)) - return; - - lf = otx2_npa_lf_obj_get(); - if (!lf) - return; - mbox = lf->mbox; - - req = 
otx2_mbox_alloc_msg_npa_aq_enq(mbox); - if (req == NULL) - return; - - req->aura_id = npa_lf_aura_handle_to_aura(rxq->pool->pool_id); - req->ctype = NPA_AQ_CTYPE_AURA; - req->op = NPA_AQ_INSTOP_READ; - - rc = otx2_mbox_process_msg(mbox, (void *)&rsp); - if (rc) - return; - - limit = rsp->aura.limit; - /* BP is already enabled. */ - if (rsp->aura.bp_ena) { - /* If BP ids don't match disable BP. */ - if ((rsp->aura.nix0_bpid != fc->bpid[0]) && !dev->force_rx_bp) { - req = otx2_mbox_alloc_msg_npa_aq_enq(mbox); - if (req == NULL) - return; - - req->aura_id = - npa_lf_aura_handle_to_aura(rxq->pool->pool_id); - req->ctype = NPA_AQ_CTYPE_AURA; - req->op = NPA_AQ_INSTOP_WRITE; - - req->aura.bp_ena = 0; - req->aura_mask.bp_ena = ~(req->aura_mask.bp_ena); - - otx2_mbox_process(mbox); - } - return; - } - - /* BP was previously enabled but now disabled skip. */ - if (rsp->aura.bp) - return; - - req = otx2_mbox_alloc_msg_npa_aq_enq(mbox); - if (req == NULL) - return; - - req->aura_id = npa_lf_aura_handle_to_aura(rxq->pool->pool_id); - req->ctype = NPA_AQ_CTYPE_AURA; - req->op = NPA_AQ_INSTOP_WRITE; - - if (ena) { - req->aura.nix0_bpid = fc->bpid[0]; - req->aura_mask.nix0_bpid = ~(req->aura_mask.nix0_bpid); - req->aura.bp = NIX_RQ_AURA_THRESH( - limit > 128 ? 
256 : limit); /* 95% of size*/ - req->aura_mask.bp = ~(req->aura_mask.bp); - } - - req->aura.bp_ena = !!ena; - req->aura_mask.bp_ena = ~(req->aura_mask.bp_ena); - - otx2_mbox_process(mbox); -} - -int -otx2_sso_rx_adapter_queue_add(const struct rte_eventdev *event_dev, - const struct rte_eth_dev *eth_dev, - int32_t rx_queue_id, - const struct rte_event_eth_rx_adapter_queue_conf *queue_conf) -{ - struct otx2_eth_dev *otx2_eth_dev = eth_dev->data->dev_private; - struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev); - uint16_t port = eth_dev->data->port_id; - struct otx2_eth_rxq *rxq; - int i, rc; - - rc = strncmp(eth_dev->device->driver->name, "net_octeontx2", 13); - if (rc) - return -EINVAL; - - if (rx_queue_id < 0) { - for (i = 0 ; i < eth_dev->data->nb_rx_queues; i++) { - rxq = eth_dev->data->rx_queues[i]; - sso_updt_xae_cnt(dev, rxq, RTE_EVENT_TYPE_ETHDEV); - sso_cfg_nix_mp_bpid(dev, otx2_eth_dev, rxq, true); - rc = sso_xae_reconfigure( - (struct rte_eventdev *)(uintptr_t)event_dev); - rc |= sso_rxq_enable(otx2_eth_dev, i, - queue_conf->ev.sched_type, - queue_conf->ev.queue_id, port); - } - rxq = eth_dev->data->rx_queues[0]; - sso_updt_lookup_mem(event_dev, rxq->lookup_mem); - } else { - rxq = eth_dev->data->rx_queues[rx_queue_id]; - sso_updt_xae_cnt(dev, rxq, RTE_EVENT_TYPE_ETHDEV); - sso_cfg_nix_mp_bpid(dev, otx2_eth_dev, rxq, true); - rc = sso_xae_reconfigure((struct rte_eventdev *) - (uintptr_t)event_dev); - rc |= sso_rxq_enable(otx2_eth_dev, (uint16_t)rx_queue_id, - queue_conf->ev.sched_type, - queue_conf->ev.queue_id, port); - sso_updt_lookup_mem(event_dev, rxq->lookup_mem); - } - - if (rc < 0) { - otx2_err("Failed to configure Rx adapter port=%d, q=%d", port, - queue_conf->ev.queue_id); - return rc; - } - - dev->rx_offloads |= otx2_eth_dev->rx_offload_flags; - dev->tstamp = &otx2_eth_dev->tstamp; - sso_fastpath_fns_set((struct rte_eventdev *)(uintptr_t)event_dev); - - return 0; -} - -int -otx2_sso_rx_adapter_queue_del(const struct rte_eventdev *event_dev, - 
const struct rte_eth_dev *eth_dev, - int32_t rx_queue_id) -{ - struct otx2_eth_dev *otx2_eth_dev = eth_dev->data->dev_private; - struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev); - int i, rc; - - rc = strncmp(eth_dev->device->driver->name, "net_octeontx2", 13); - if (rc) - return -EINVAL; - - if (rx_queue_id < 0) { - for (i = 0; i < eth_dev->data->nb_rx_queues; i++) { - rc = sso_rxq_disable(otx2_eth_dev, i); - sso_cfg_nix_mp_bpid(dev, otx2_eth_dev, - eth_dev->data->rx_queues[i], false); - } - } else { - rc = sso_rxq_disable(otx2_eth_dev, (uint16_t)rx_queue_id); - sso_cfg_nix_mp_bpid(dev, otx2_eth_dev, - eth_dev->data->rx_queues[rx_queue_id], - false); - } - - if (rc < 0) - otx2_err("Failed to clear Rx adapter config port=%d, q=%d", - eth_dev->data->port_id, rx_queue_id); - - return rc; -} - -int -otx2_sso_rx_adapter_start(const struct rte_eventdev *event_dev, - const struct rte_eth_dev *eth_dev) -{ - RTE_SET_USED(event_dev); - RTE_SET_USED(eth_dev); - - return 0; -} - -int -otx2_sso_rx_adapter_stop(const struct rte_eventdev *event_dev, - const struct rte_eth_dev *eth_dev) -{ - RTE_SET_USED(event_dev); - RTE_SET_USED(eth_dev); - - return 0; -} - -int -otx2_sso_tx_adapter_caps_get(const struct rte_eventdev *dev, - const struct rte_eth_dev *eth_dev, uint32_t *caps) -{ - int ret; - - RTE_SET_USED(dev); - ret = strncmp(eth_dev->device->driver->name, "net_octeontx2,", 13); - if (ret) - *caps = 0; - else - *caps = RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT; - - return 0; -} - -static int -sso_sqb_aura_limit_edit(struct rte_mempool *mp, uint16_t nb_sqb_bufs) -{ - struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf; - struct npa_aq_enq_req *aura_req; - - aura_req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox); - aura_req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id); - aura_req->ctype = NPA_AQ_CTYPE_AURA; - aura_req->op = NPA_AQ_INSTOP_WRITE; - - aura_req->aura.limit = nb_sqb_bufs; - aura_req->aura_mask.limit = ~(aura_req->aura_mask.limit); - - return 
otx2_mbox_process(npa_lf->mbox); -} - -static int -sso_add_tx_queue_data(const struct rte_eventdev *event_dev, - uint16_t eth_port_id, uint16_t tx_queue_id, - struct otx2_eth_txq *txq) -{ - struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev); - int i; - - for (i = 0; i < event_dev->data->nb_ports; i++) { - dev->max_port_id = RTE_MAX(dev->max_port_id, eth_port_id); - if (dev->dual_ws) { - struct otx2_ssogws_dual *old_dws; - struct otx2_ssogws_dual *dws; - - old_dws = event_dev->data->ports[i]; - dws = rte_realloc_socket(ssogws_get_cookie(old_dws), - sizeof(struct otx2_ssogws_dual) - + RTE_CACHE_LINE_SIZE + - (sizeof(uint64_t) * - (dev->max_port_id + 1) * - RTE_MAX_QUEUES_PER_PORT), - RTE_CACHE_LINE_SIZE, - event_dev->data->socket_id); - if (dws == NULL) - return -ENOMEM; - - /* First cache line is reserved for cookie */ - dws = (struct otx2_ssogws_dual *) - ((uint8_t *)dws + RTE_CACHE_LINE_SIZE); - - ((uint64_t (*)[RTE_MAX_QUEUES_PER_PORT] - )&dws->tx_adptr_data)[eth_port_id][tx_queue_id] = - (uint64_t)txq; - event_dev->data->ports[i] = dws; - } else { - struct otx2_ssogws *old_ws; - struct otx2_ssogws *ws; - - old_ws = event_dev->data->ports[i]; - ws = rte_realloc_socket(ssogws_get_cookie(old_ws), - sizeof(struct otx2_ssogws) + - RTE_CACHE_LINE_SIZE + - (sizeof(uint64_t) * - (dev->max_port_id + 1) * - RTE_MAX_QUEUES_PER_PORT), - RTE_CACHE_LINE_SIZE, - event_dev->data->socket_id); - if (ws == NULL) - return -ENOMEM; - - /* First cache line is reserved for cookie */ - ws = (struct otx2_ssogws *) - ((uint8_t *)ws + RTE_CACHE_LINE_SIZE); - - ((uint64_t (*)[RTE_MAX_QUEUES_PER_PORT] - )&ws->tx_adptr_data)[eth_port_id][tx_queue_id] = - (uint64_t)txq; - event_dev->data->ports[i] = ws; - } - } - - return 0; -} - -int -otx2_sso_tx_adapter_queue_add(uint8_t id, const struct rte_eventdev *event_dev, - const struct rte_eth_dev *eth_dev, - int32_t tx_queue_id) -{ - struct otx2_eth_dev *otx2_eth_dev = eth_dev->data->dev_private; - struct otx2_sso_evdev *dev = 
sso_pmd_priv(event_dev); - struct otx2_eth_txq *txq; - int i, ret; - - RTE_SET_USED(id); - if (tx_queue_id < 0) { - for (i = 0 ; i < eth_dev->data->nb_tx_queues; i++) { - txq = eth_dev->data->tx_queues[i]; - sso_sqb_aura_limit_edit(txq->sqb_pool, - OTX2_SSO_SQB_LIMIT); - ret = sso_add_tx_queue_data(event_dev, - eth_dev->data->port_id, i, - txq); - if (ret < 0) - return ret; - } - } else { - txq = eth_dev->data->tx_queues[tx_queue_id]; - sso_sqb_aura_limit_edit(txq->sqb_pool, OTX2_SSO_SQB_LIMIT); - ret = sso_add_tx_queue_data(event_dev, eth_dev->data->port_id, - tx_queue_id, txq); - if (ret < 0) - return ret; - } - - dev->tx_offloads |= otx2_eth_dev->tx_offload_flags; - sso_fastpath_fns_set((struct rte_eventdev *)(uintptr_t)event_dev); - - return 0; -} - -int -otx2_sso_tx_adapter_queue_del(uint8_t id, const struct rte_eventdev *event_dev, - const struct rte_eth_dev *eth_dev, - int32_t tx_queue_id) -{ - struct otx2_eth_txq *txq; - int i; - - RTE_SET_USED(id); - RTE_SET_USED(eth_dev); - RTE_SET_USED(event_dev); - if (tx_queue_id < 0) { - for (i = 0 ; i < eth_dev->data->nb_tx_queues; i++) { - txq = eth_dev->data->tx_queues[i]; - sso_sqb_aura_limit_edit(txq->sqb_pool, - txq->nb_sqb_bufs); - } - } else { - txq = eth_dev->data->tx_queues[tx_queue_id]; - sso_sqb_aura_limit_edit(txq->sqb_pool, txq->nb_sqb_bufs); - } - - return 0; -} diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c deleted file mode 100644 index d59d6c53f6..0000000000 --- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c +++ /dev/null @@ -1,132 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright (C) 2020-2021 Marvell. 
- */ - -#include -#include - -#include "otx2_cryptodev.h" -#include "otx2_cryptodev_hw_access.h" -#include "otx2_cryptodev_qp.h" -#include "otx2_cryptodev_mbox.h" -#include "otx2_evdev.h" - -int -otx2_ca_caps_get(const struct rte_eventdev *dev, - const struct rte_cryptodev *cdev, uint32_t *caps) -{ - RTE_SET_USED(dev); - RTE_SET_USED(cdev); - - *caps = RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND | - RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW | - RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD; - - return 0; -} - -static int -otx2_ca_qp_sso_link(const struct rte_cryptodev *cdev, struct otx2_cpt_qp *qp, - uint16_t sso_pf_func) -{ - union otx2_cpt_af_lf_ctl2 af_lf_ctl2; - int ret; - - ret = otx2_cpt_af_reg_read(cdev, OTX2_CPT_AF_LF_CTL2(qp->id), - qp->blkaddr, &af_lf_ctl2.u); - if (ret) - return ret; - - af_lf_ctl2.s.sso_pf_func = sso_pf_func; - ret = otx2_cpt_af_reg_write(cdev, OTX2_CPT_AF_LF_CTL2(qp->id), - qp->blkaddr, af_lf_ctl2.u); - return ret; -} - -static void -otx2_ca_qp_init(struct otx2_cpt_qp *qp, const struct rte_event *event) -{ - if (event) { - qp->qp_ev_bind = 1; - rte_memcpy(&qp->ev, event, sizeof(struct rte_event)); - } else { - qp->qp_ev_bind = 0; - } - qp->ca_enable = 1; -} - -int -otx2_ca_qp_add(const struct rte_eventdev *dev, const struct rte_cryptodev *cdev, - int32_t queue_pair_id, const struct rte_event *event) -{ - struct otx2_sso_evdev *sso_evdev = sso_pmd_priv(dev); - struct otx2_cpt_vf *vf = cdev->data->dev_private; - uint16_t sso_pf_func = otx2_sso_pf_func_get(); - struct otx2_cpt_qp *qp; - uint8_t qp_id; - int ret; - - if (queue_pair_id == -1) { - for (qp_id = 0; qp_id < vf->nb_queues; qp_id++) { - qp = cdev->data->queue_pairs[qp_id]; - ret = otx2_ca_qp_sso_link(cdev, qp, sso_pf_func); - if (ret) { - uint8_t qp_tmp; - for (qp_tmp = 0; qp_tmp < qp_id; qp_tmp++) - otx2_ca_qp_del(dev, cdev, qp_tmp); - return ret; - } - otx2_ca_qp_init(qp, event); - } - } else { - qp = cdev->data->queue_pairs[queue_pair_id]; - ret = 
otx2_ca_qp_sso_link(cdev, qp, sso_pf_func); - if (ret) - return ret; - otx2_ca_qp_init(qp, event); - } - - sso_evdev->rx_offloads |= NIX_RX_OFFLOAD_SECURITY_F; - sso_fastpath_fns_set((struct rte_eventdev *)(uintptr_t)dev); - - /* Update crypto adapter xae count */ - if (queue_pair_id == -1) - sso_evdev->adptr_xae_cnt += - vf->nb_queues * OTX2_CPT_DEFAULT_CMD_QLEN; - else - sso_evdev->adptr_xae_cnt += OTX2_CPT_DEFAULT_CMD_QLEN; - sso_xae_reconfigure((struct rte_eventdev *)(uintptr_t)dev); - - return 0; -} - -int -otx2_ca_qp_del(const struct rte_eventdev *dev, const struct rte_cryptodev *cdev, - int32_t queue_pair_id) -{ - struct otx2_cpt_vf *vf = cdev->data->dev_private; - struct otx2_cpt_qp *qp; - uint8_t qp_id; - int ret; - - RTE_SET_USED(dev); - - ret = 0; - if (queue_pair_id == -1) { - for (qp_id = 0; qp_id < vf->nb_queues; qp_id++) { - qp = cdev->data->queue_pairs[qp_id]; - ret = otx2_ca_qp_sso_link(cdev, qp, 0); - if (ret) - return ret; - qp->ca_enable = 0; - } - } else { - qp = cdev->data->queue_pairs[queue_pair_id]; - ret = otx2_ca_qp_sso_link(cdev, qp, 0); - if (ret) - return ret; - qp->ca_enable = 0; - } - - return 0; -} diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h deleted file mode 100644 index b33cb7e139..0000000000 --- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h +++ /dev/null @@ -1,77 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright (C) 2020 Marvell International Ltd. 
- */ - -#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ -#define _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ - -#include -#include -#include - -#include "cpt_pmd_logs.h" -#include "cpt_ucode.h" - -#include "otx2_cryptodev.h" -#include "otx2_cryptodev_hw_access.h" -#include "otx2_cryptodev_ops_helper.h" -#include "otx2_cryptodev_qp.h" - -static inline void -otx2_ca_deq_post_process(const struct otx2_cpt_qp *qp, - struct rte_crypto_op *cop, uintptr_t *rsp, - uint8_t cc) -{ - if (cop->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) { - if (likely(cc == NO_ERR)) { - /* Verify authentication data if required */ - if (unlikely(rsp[2])) - compl_auth_verify(cop, (uint8_t *)rsp[2], - rsp[3]); - else - cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS; - } else { - if (cc == ERR_GC_ICV_MISCOMPARE) - cop->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED; - else - cop->status = RTE_CRYPTO_OP_STATUS_ERROR; - } - - if (unlikely(cop->sess_type == RTE_CRYPTO_OP_SESSIONLESS)) { - sym_session_clear(otx2_cryptodev_driver_id, - cop->sym->session); - memset(cop->sym->session, 0, - rte_cryptodev_sym_get_existing_header_session_size( - cop->sym->session)); - rte_mempool_put(qp->sess_mp, cop->sym->session); - cop->sym->session = NULL; - } - } - -} - -static inline uint64_t -otx2_handle_crypto_event(uint64_t get_work1) -{ - struct cpt_request_info *req; - const struct otx2_cpt_qp *qp; - struct rte_crypto_op *cop; - uintptr_t *rsp; - void *metabuf; - uint8_t cc; - - req = (struct cpt_request_info *)(get_work1); - cc = otx2_cpt_compcode_get(req); - qp = req->qp; - - rsp = req->op; - metabuf = (void *)rsp[0]; - cop = (void *)rsp[1]; - - otx2_ca_deq_post_process(qp, cop, rsp, cc); - - rte_mempool_put(qp->meta_info.pool, metabuf); - - return (uint64_t)(cop); -} -#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ */ diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h deleted file mode 100644 index 1fc56f903b..0000000000 --- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h +++ 
/dev/null @@ -1,83 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright (C) 2021 Marvell International Ltd. - */ - -#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ -#define _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ - -#include -#include -#include -#include - -#include -#include - -static inline uint16_t -otx2_ca_enq(uintptr_t tag_op, const struct rte_event *ev) -{ - union rte_event_crypto_metadata *m_data; - struct rte_crypto_op *crypto_op; - struct rte_cryptodev *cdev; - struct otx2_cpt_qp *qp; - uint8_t cdev_id; - uint16_t qp_id; - - crypto_op = ev->event_ptr; - if (crypto_op == NULL) - return 0; - - if (crypto_op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) { - m_data = rte_cryptodev_sym_session_get_user_data( - crypto_op->sym->session); - if (m_data == NULL) - goto free_op; - - cdev_id = m_data->request_info.cdev_id; - qp_id = m_data->request_info.queue_pair_id; - } else if (crypto_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS && - crypto_op->private_data_offset) { - m_data = (union rte_event_crypto_metadata *) - ((uint8_t *)crypto_op + - crypto_op->private_data_offset); - cdev_id = m_data->request_info.cdev_id; - qp_id = m_data->request_info.queue_pair_id; - } else { - goto free_op; - } - - cdev = &rte_cryptodevs[cdev_id]; - qp = cdev->data->queue_pairs[qp_id]; - - if (!ev->sched_type) - otx2_ssogws_head_wait(tag_op); - if (qp->ca_enable) - return cdev->enqueue_burst(qp, &crypto_op, 1); - -free_op: - rte_pktmbuf_free(crypto_op->sym->m_src); - rte_crypto_op_free(crypto_op); - rte_errno = EINVAL; - return 0; -} - -static uint16_t __rte_hot -otx2_ssogws_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events) -{ - struct otx2_ssogws *ws = port; - - RTE_SET_USED(nb_events); - - return otx2_ca_enq(ws->tag_op, ev); -} - -static uint16_t __rte_hot -otx2_ssogws_dual_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events) -{ - struct otx2_ssogws_dual *ws = port; - - RTE_SET_USED(nb_events); - - return otx2_ca_enq(ws->ws_state[!ws->vws].tag_op, ev); -} -#endif /* 
_OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ */ diff --git a/drivers/event/octeontx2/otx2_evdev_irq.c b/drivers/event/octeontx2/otx2_evdev_irq.c deleted file mode 100644 index 9b7ad27b04..0000000000 --- a/drivers/event/octeontx2/otx2_evdev_irq.c +++ /dev/null @@ -1,272 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2019 Marvell International Ltd. - */ - -#include "otx2_evdev.h" -#include "otx2_tim_evdev.h" - -static void -sso_lf_irq(void *param) -{ - uintptr_t base = (uintptr_t)param; - uint64_t intr; - uint8_t ggrp; - - ggrp = (base >> 12) & 0xFF; - - intr = otx2_read64(base + SSO_LF_GGRP_INT); - if (intr == 0) - return; - - otx2_err("GGRP %d GGRP_INT=0x%" PRIx64 "", ggrp, intr); - - /* Clear interrupt */ - otx2_write64(intr, base + SSO_LF_GGRP_INT); -} - -static int -sso_lf_register_irq(const struct rte_eventdev *event_dev, uint16_t ggrp_msixoff, - uintptr_t base) -{ - struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev); - struct rte_intr_handle *handle = pci_dev->intr_handle; - int rc, vec; - - vec = ggrp_msixoff + SSO_LF_INT_VEC_GRP; - - /* Clear err interrupt */ - otx2_write64(~0ull, base + SSO_LF_GGRP_INT_ENA_W1C); - /* Set used interrupt vectors */ - rc = otx2_register_irq(handle, sso_lf_irq, (void *)base, vec); - /* Enable hw interrupt */ - otx2_write64(~0ull, base + SSO_LF_GGRP_INT_ENA_W1S); - - return rc; -} - -static void -ssow_lf_irq(void *param) -{ - uintptr_t base = (uintptr_t)param; - uint8_t gws = (base >> 12) & 0xFF; - uint64_t intr; - - intr = otx2_read64(base + SSOW_LF_GWS_INT); - if (intr == 0) - return; - - otx2_err("GWS %d GWS_INT=0x%" PRIx64 "", gws, intr); - - /* Clear interrupt */ - otx2_write64(intr, base + SSOW_LF_GWS_INT); -} - -static int -ssow_lf_register_irq(const struct rte_eventdev *event_dev, uint16_t gws_msixoff, - uintptr_t base) -{ - struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev); - struct rte_intr_handle *handle = pci_dev->intr_handle; - int rc, vec; - - vec = gws_msixoff + 
SSOW_LF_INT_VEC_IOP; - - /* Clear err interrupt */ - otx2_write64(~0ull, base + SSOW_LF_GWS_INT_ENA_W1C); - /* Set used interrupt vectors */ - rc = otx2_register_irq(handle, ssow_lf_irq, (void *)base, vec); - /* Enable hw interrupt */ - otx2_write64(~0ull, base + SSOW_LF_GWS_INT_ENA_W1S); - - return rc; -} - -static void -sso_lf_unregister_irq(const struct rte_eventdev *event_dev, - uint16_t ggrp_msixoff, uintptr_t base) -{ - struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev); - struct rte_intr_handle *handle = pci_dev->intr_handle; - int vec; - - vec = ggrp_msixoff + SSO_LF_INT_VEC_GRP; - - /* Clear err interrupt */ - otx2_write64(~0ull, base + SSO_LF_GGRP_INT_ENA_W1C); - otx2_unregister_irq(handle, sso_lf_irq, (void *)base, vec); -} - -static void -ssow_lf_unregister_irq(const struct rte_eventdev *event_dev, - uint16_t gws_msixoff, uintptr_t base) -{ - struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev); - struct rte_intr_handle *handle = pci_dev->intr_handle; - int vec; - - vec = gws_msixoff + SSOW_LF_INT_VEC_IOP; - - /* Clear err interrupt */ - otx2_write64(~0ull, base + SSOW_LF_GWS_INT_ENA_W1C); - otx2_unregister_irq(handle, ssow_lf_irq, (void *)base, vec); -} - -int -sso_register_irqs(const struct rte_eventdev *event_dev) -{ - struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev); - int i, rc = -EINVAL; - uint8_t nb_ports; - - nb_ports = dev->nb_event_ports * (dev->dual_ws ? 
2 : 1); - - for (i = 0; i < dev->nb_event_queues; i++) { - if (dev->sso_msixoff[i] == MSIX_VECTOR_INVALID) { - otx2_err("Invalid SSOLF MSIX offset[%d] vector: 0x%x", - i, dev->sso_msixoff[i]); - goto fail; - } - } - - for (i = 0; i < nb_ports; i++) { - if (dev->ssow_msixoff[i] == MSIX_VECTOR_INVALID) { - otx2_err("Invalid SSOWLF MSIX offset[%d] vector: 0x%x", - i, dev->ssow_msixoff[i]); - goto fail; - } - } - - for (i = 0; i < dev->nb_event_queues; i++) { - uintptr_t base = dev->bar2 + (RVU_BLOCK_ADDR_SSO << 20 | - i << 12); - rc = sso_lf_register_irq(event_dev, dev->sso_msixoff[i], base); - } - - for (i = 0; i < nb_ports; i++) { - uintptr_t base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | - i << 12); - rc = ssow_lf_register_irq(event_dev, dev->ssow_msixoff[i], - base); - } - -fail: - return rc; -} - -void -sso_unregister_irqs(const struct rte_eventdev *event_dev) -{ - struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev); - uint8_t nb_ports; - int i; - - nb_ports = dev->nb_event_ports * (dev->dual_ws ? 
2 : 1); - - for (i = 0; i < dev->nb_event_queues; i++) { - uintptr_t base = dev->bar2 + (RVU_BLOCK_ADDR_SSO << 20 | - i << 12); - sso_lf_unregister_irq(event_dev, dev->sso_msixoff[i], base); - } - - for (i = 0; i < nb_ports; i++) { - uintptr_t base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | - i << 12); - ssow_lf_unregister_irq(event_dev, dev->ssow_msixoff[i], base); - } -} - -static void -tim_lf_irq(void *param) -{ - uintptr_t base = (uintptr_t)param; - uint64_t intr; - uint8_t ring; - - ring = (base >> 12) & 0xFF; - - intr = otx2_read64(base + TIM_LF_NRSPERR_INT); - otx2_err("TIM RING %d TIM_LF_NRSPERR_INT=0x%" PRIx64 "", ring, intr); - intr = otx2_read64(base + TIM_LF_RAS_INT); - otx2_err("TIM RING %d TIM_LF_RAS_INT=0x%" PRIx64 "", ring, intr); - - /* Clear interrupt */ - otx2_write64(intr, base + TIM_LF_NRSPERR_INT); - otx2_write64(intr, base + TIM_LF_RAS_INT); -} - -static int -tim_lf_register_irq(struct rte_pci_device *pci_dev, uint16_t tim_msixoff, - uintptr_t base) -{ - struct rte_intr_handle *handle = pci_dev->intr_handle; - int rc, vec; - - vec = tim_msixoff + TIM_LF_INT_VEC_NRSPERR_INT; - - /* Clear err interrupt */ - otx2_write64(~0ull, base + TIM_LF_NRSPERR_INT); - /* Set used interrupt vectors */ - rc = otx2_register_irq(handle, tim_lf_irq, (void *)base, vec); - /* Enable hw interrupt */ - otx2_write64(~0ull, base + TIM_LF_NRSPERR_INT_ENA_W1S); - - vec = tim_msixoff + TIM_LF_INT_VEC_RAS_INT; - - /* Clear err interrupt */ - otx2_write64(~0ull, base + TIM_LF_RAS_INT); - /* Set used interrupt vectors */ - rc = otx2_register_irq(handle, tim_lf_irq, (void *)base, vec); - /* Enable hw interrupt */ - otx2_write64(~0ull, base + TIM_LF_RAS_INT_ENA_W1S); - - return rc; -} - -static void -tim_lf_unregister_irq(struct rte_pci_device *pci_dev, uint16_t tim_msixoff, - uintptr_t base) -{ - struct rte_intr_handle *handle = pci_dev->intr_handle; - int vec; - - vec = tim_msixoff + TIM_LF_INT_VEC_NRSPERR_INT; - - /* Clear err interrupt */ - otx2_write64(~0ull, base + 
TIM_LF_NRSPERR_INT_ENA_W1C); - otx2_unregister_irq(handle, tim_lf_irq, (void *)base, vec); - - vec = tim_msixoff + TIM_LF_INT_VEC_RAS_INT; - - /* Clear err interrupt */ - otx2_write64(~0ull, base + TIM_LF_RAS_INT_ENA_W1C); - otx2_unregister_irq(handle, tim_lf_irq, (void *)base, vec); -} - -int -tim_register_irq(uint16_t ring_id) -{ - struct otx2_tim_evdev *dev = tim_priv_get(); - int rc = -EINVAL; - uintptr_t base; - - if (dev->tim_msixoff[ring_id] == MSIX_VECTOR_INVALID) { - otx2_err("Invalid TIMLF MSIX offset[%d] vector: 0x%x", - ring_id, dev->tim_msixoff[ring_id]); - goto fail; - } - - base = dev->bar2 + (RVU_BLOCK_ADDR_TIM << 20 | ring_id << 12); - rc = tim_lf_register_irq(dev->pci_dev, dev->tim_msixoff[ring_id], base); -fail: - return rc; -} - -void -tim_unregister_irq(uint16_t ring_id) -{ - struct otx2_tim_evdev *dev = tim_priv_get(); - uintptr_t base; - - base = dev->bar2 + (RVU_BLOCK_ADDR_TIM << 20 | ring_id << 12); - tim_lf_unregister_irq(dev->pci_dev, dev->tim_msixoff[ring_id], base); -} diff --git a/drivers/event/octeontx2/otx2_evdev_selftest.c b/drivers/event/octeontx2/otx2_evdev_selftest.c deleted file mode 100644 index 48bfaf893d..0000000000 --- a/drivers/event/octeontx2/otx2_evdev_selftest.c +++ /dev/null @@ -1,1517 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2019 Marvell International Ltd. 
- */ - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include "otx2_evdev.h" - -#define NUM_PACKETS (1024) -#define MAX_EVENTS (1024) - -#define OCTEONTX2_TEST_RUN(setup, teardown, test) \ - octeontx_test_run(setup, teardown, test, #test) - -static int total; -static int passed; -static int failed; -static int unsupported; - -static int evdev; -static struct rte_mempool *eventdev_test_mempool; - -struct event_attr { - uint32_t flow_id; - uint8_t event_type; - uint8_t sub_event_type; - uint8_t sched_type; - uint8_t queue; - uint8_t port; -}; - -static uint32_t seqn_list_index; -static int seqn_list[NUM_PACKETS]; - -static inline void -seqn_list_init(void) -{ - RTE_BUILD_BUG_ON(NUM_PACKETS < MAX_EVENTS); - memset(seqn_list, 0, sizeof(seqn_list)); - seqn_list_index = 0; -} - -static inline int -seqn_list_update(int val) -{ - if (seqn_list_index >= NUM_PACKETS) - return -1; - - seqn_list[seqn_list_index++] = val; - rte_smp_wmb(); - return 0; -} - -static inline int -seqn_list_check(int limit) -{ - int i; - - for (i = 0; i < limit; i++) { - if (seqn_list[i] != i) { - otx2_err("Seqn mismatch %d %d", seqn_list[i], i); - return -1; - } - } - return 0; -} - -struct test_core_param { - rte_atomic32_t *total_events; - uint64_t dequeue_tmo_ticks; - uint8_t port; - uint8_t sched_type; -}; - -static int -testsuite_setup(void) -{ - const char *eventdev_name = "event_octeontx2"; - - evdev = rte_event_dev_get_dev_id(eventdev_name); - if (evdev < 0) { - otx2_err("%d: Eventdev %s not found", __LINE__, eventdev_name); - return -1; - } - return 0; -} - -static void -testsuite_teardown(void) -{ - rte_event_dev_close(evdev); -} - -static inline void -devconf_set_default_sane_values(struct rte_event_dev_config *dev_conf, - struct rte_event_dev_info *info) -{ - memset(dev_conf, 0, sizeof(struct rte_event_dev_config)); - dev_conf->dequeue_timeout_ns = 
info->min_dequeue_timeout_ns;
-	dev_conf->nb_event_ports = info->max_event_ports;
-	dev_conf->nb_event_queues = info->max_event_queues;
-	dev_conf->nb_event_queue_flows = info->max_event_queue_flows;
-	dev_conf->nb_event_port_dequeue_depth =
-		info->max_event_port_dequeue_depth;
-	dev_conf->nb_event_port_enqueue_depth =
-		info->max_event_port_enqueue_depth;
-	dev_conf->nb_event_port_enqueue_depth =
-		info->max_event_port_enqueue_depth;
-	dev_conf->nb_events_limit =
-		info->max_num_events;
-}
-
-enum {
-	TEST_EVENTDEV_SETUP_DEFAULT,
-	TEST_EVENTDEV_SETUP_PRIORITY,
-	TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT,
-};
-
-static inline int
-_eventdev_setup(int mode)
-{
-	const char *pool_name = "evdev_octeontx_test_pool";
-	struct rte_event_dev_config dev_conf;
-	struct rte_event_dev_info info;
-	int i, ret;
-
-	/* Create and destrory pool for each test case to make it standalone */
-	eventdev_test_mempool = rte_pktmbuf_pool_create(pool_name, MAX_EVENTS,
-							0, 0, 512,
-							rte_socket_id());
-	if (!eventdev_test_mempool) {
-		otx2_err("ERROR creating mempool");
-		return -1;
-	}
-
-	ret = rte_event_dev_info_get(evdev, &info);
-	RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
-
-	devconf_set_default_sane_values(&dev_conf, &info);
-	if (mode == TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT)
-		dev_conf.event_dev_cfg |= RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT;
-
-	ret = rte_event_dev_configure(evdev, &dev_conf);
-	RTE_TEST_ASSERT_SUCCESS(ret, "Failed to configure eventdev");
-
-	uint32_t queue_count;
-	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
-				RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
-				"Queue count get failed");
-
-	if (mode == TEST_EVENTDEV_SETUP_PRIORITY) {
-		if (queue_count > 8)
-			queue_count = 8;
-
-		/* Configure event queues(0 to n) with
-		 * RTE_EVENT_DEV_PRIORITY_HIGHEST to
-		 * RTE_EVENT_DEV_PRIORITY_LOWEST
-		 */
-		uint8_t step = (RTE_EVENT_DEV_PRIORITY_LOWEST + 1) /
-				queue_count;
-		for (i = 0; i < (int)queue_count; i++) {
-			struct rte_event_queue_conf queue_conf;
-
-			ret = rte_event_queue_default_conf_get(evdev, i,
-							       &queue_conf);
-			RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get def_conf%d",
-						i);
-			queue_conf.priority = i * step;
-			ret = rte_event_queue_setup(evdev, i, &queue_conf);
-			RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d",
-						i);
-		}
-
-	} else {
-		/* Configure event queues with default priority */
-		for (i = 0; i < (int)queue_count; i++) {
-			ret = rte_event_queue_setup(evdev, i, NULL);
-			RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d",
-						i);
-		}
-	}
-	/* Configure event ports */
-	uint32_t port_count;
-	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
-				RTE_EVENT_DEV_ATTR_PORT_COUNT, &port_count),
-				"Port count get failed");
-	for (i = 0; i < (int)port_count; i++) {
-		ret = rte_event_port_setup(evdev, i, NULL);
-		RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup port=%d", i);
-		ret = rte_event_port_link(evdev, i, NULL, NULL, 0);
-		RTE_TEST_ASSERT(ret >= 0, "Failed to link all queues port=%d",
-				i);
-	}
-
-	ret = rte_event_dev_start(evdev);
-	RTE_TEST_ASSERT_SUCCESS(ret, "Failed to start device");
-
-	return 0;
-}
-
-static inline int
-eventdev_setup(void)
-{
-	return _eventdev_setup(TEST_EVENTDEV_SETUP_DEFAULT);
-}
-
-static inline int
-eventdev_setup_priority(void)
-{
-	return _eventdev_setup(TEST_EVENTDEV_SETUP_PRIORITY);
-}
-
-static inline int
-eventdev_setup_dequeue_timeout(void)
-{
-	return _eventdev_setup(TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT);
-}
-
-static inline void
-eventdev_teardown(void)
-{
-	rte_event_dev_stop(evdev);
-	rte_mempool_free(eventdev_test_mempool);
-}
-
-static inline void
-update_event_and_validation_attr(struct rte_mbuf *m, struct rte_event *ev,
-				 uint32_t flow_id, uint8_t event_type,
-				 uint8_t sub_event_type, uint8_t sched_type,
-				 uint8_t queue, uint8_t port)
-{
-	struct event_attr *attr;
-
-	/* Store the event attributes in mbuf for future reference */
-	attr = rte_pktmbuf_mtod(m, struct event_attr *);
-	attr->flow_id = flow_id;
-	attr->event_type = event_type;
-	attr->sub_event_type = sub_event_type;
-	attr->sched_type = sched_type;
-	attr->queue = queue;
-	attr->port = port;
-
-	ev->flow_id = flow_id;
-	ev->sub_event_type = sub_event_type;
-	ev->event_type = event_type;
-	/* Inject the new event */
-	ev->op = RTE_EVENT_OP_NEW;
-	ev->sched_type = sched_type;
-	ev->queue_id = queue;
-	ev->mbuf = m;
-}
-
-static inline int
-inject_events(uint32_t flow_id, uint8_t event_type, uint8_t sub_event_type,
-	      uint8_t sched_type, uint8_t queue, uint8_t port,
-	      unsigned int events)
-{
-	struct rte_mbuf *m;
-	unsigned int i;
-
-	for (i = 0; i < events; i++) {
-		struct rte_event ev = {.event = 0, .u64 = 0};
-
-		m = rte_pktmbuf_alloc(eventdev_test_mempool);
-		RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed");
-
-		*rte_event_pmd_selftest_seqn(m) = i;
-		update_event_and_validation_attr(m, &ev, flow_id, event_type,
-						 sub_event_type, sched_type,
-						 queue, port);
-		rte_event_enqueue_burst(evdev, port, &ev, 1);
-	}
-	return 0;
-}
-
-static inline int
-check_excess_events(uint8_t port)
-{
-	uint16_t valid_event;
-	struct rte_event ev;
-	int i;
-
-	/* Check for excess events, try for a few times and exit */
-	for (i = 0; i < 32; i++) {
-		valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
-
-		RTE_TEST_ASSERT_SUCCESS(valid_event,
-					"Unexpected valid event=%d",
-					*rte_event_pmd_selftest_seqn(ev.mbuf));
-	}
-	return 0;
-}
-
-static inline int
-generate_random_events(const unsigned int total_events)
-{
-	struct rte_event_dev_info info;
-	uint32_t queue_count;
-	unsigned int i;
-	int ret;
-
-	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
-				RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
-				"Queue count get failed");
-
-	ret = rte_event_dev_info_get(evdev, &info);
-	RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
-	for (i = 0; i < total_events; i++) {
-		ret = inject_events(
-			rte_rand() % info.max_event_queue_flows /*flow_id */,
-			RTE_EVENT_TYPE_CPU /* event_type */,
-			rte_rand() % 256 /* sub_event_type */,
-			rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1),
-			rte_rand() % queue_count /* queue */,
-			0 /* port */,
-			1 /* events */);
-		if (ret)
-			return -1;
-	}
-	return ret;
-}
-
-
-static inline int
-validate_event(struct rte_event *ev)
-{
-	struct event_attr *attr;
-
-	attr = rte_pktmbuf_mtod(ev->mbuf, struct event_attr *);
-	RTE_TEST_ASSERT_EQUAL(attr->flow_id, ev->flow_id,
-			      "flow_id mismatch enq=%d deq =%d",
-			      attr->flow_id, ev->flow_id);
-	RTE_TEST_ASSERT_EQUAL(attr->event_type, ev->event_type,
-			      "event_type mismatch enq=%d deq =%d",
-			      attr->event_type, ev->event_type);
-	RTE_TEST_ASSERT_EQUAL(attr->sub_event_type, ev->sub_event_type,
-			      "sub_event_type mismatch enq=%d deq =%d",
-			      attr->sub_event_type, ev->sub_event_type);
-	RTE_TEST_ASSERT_EQUAL(attr->sched_type, ev->sched_type,
-			      "sched_type mismatch enq=%d deq =%d",
-			      attr->sched_type, ev->sched_type);
-	RTE_TEST_ASSERT_EQUAL(attr->queue, ev->queue_id,
-			      "queue mismatch enq=%d deq =%d",
-			      attr->queue, ev->queue_id);
-	return 0;
-}
-
-typedef int (*validate_event_cb)(uint32_t index, uint8_t port,
-				 struct rte_event *ev);
-
-static inline int
-consume_events(uint8_t port, const uint32_t total_events, validate_event_cb fn)
-{
-	uint32_t events = 0, forward_progress_cnt = 0, index = 0;
-	uint16_t valid_event;
-	struct rte_event ev;
-	int ret;
-
-	while (1) {
-		if (++forward_progress_cnt > UINT16_MAX) {
-			otx2_err("Detected deadlock");
-			return -1;
-		}
-
-		valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
-		if (!valid_event)
-			continue;
-
-		forward_progress_cnt = 0;
-		ret = validate_event(&ev);
-		if (ret)
-			return -1;
-
-		if (fn != NULL) {
-			ret = fn(index, port, &ev);
-			RTE_TEST_ASSERT_SUCCESS(ret,
-				"Failed to validate test specific event");
-		}
-
-		++index;
-
-		rte_pktmbuf_free(ev.mbuf);
-		if (++events >= total_events)
-			break;
-	}
-
-	return check_excess_events(port);
-}
-
-static int
-validate_simple_enqdeq(uint32_t index, uint8_t port, struct rte_event *ev)
-{
-	RTE_SET_USED(port);
-	RTE_TEST_ASSERT_EQUAL(index, *rte_event_pmd_selftest_seqn(ev->mbuf),
-			      "index=%d != seqn=%d",
-			      index, *rte_event_pmd_selftest_seqn(ev->mbuf));
-	return 0;
-}
-
-static inline int
-test_simple_enqdeq(uint8_t sched_type)
-{
-	int ret;
-
-	ret = inject_events(0 /*flow_id */,
-			    RTE_EVENT_TYPE_CPU /* event_type */,
-			    0 /* sub_event_type */,
-			    sched_type,
-			    0 /* queue */,
-			    0 /* port */,
-			    MAX_EVENTS);
-	if (ret)
-		return -1;
-
-	return consume_events(0 /* port */, MAX_EVENTS, validate_simple_enqdeq);
-}
-
-static int
-test_simple_enqdeq_ordered(void)
-{
-	return test_simple_enqdeq(RTE_SCHED_TYPE_ORDERED);
-}
-
-static int
-test_simple_enqdeq_atomic(void)
-{
-	return test_simple_enqdeq(RTE_SCHED_TYPE_ATOMIC);
-}
-
-static int
-test_simple_enqdeq_parallel(void)
-{
-	return test_simple_enqdeq(RTE_SCHED_TYPE_PARALLEL);
-}
-
-/*
- * Generate a prescribed number of events and spread them across available
- * queues. On dequeue, using single event port(port 0) verify the enqueued
- * event attributes
- */
-static int
-test_multi_queue_enq_single_port_deq(void)
-{
-	int ret;
-
-	ret = generate_random_events(MAX_EVENTS);
-	if (ret)
-		return -1;
-
-	return consume_events(0 /* port */, MAX_EVENTS, NULL);
-}
-
-/*
- * Inject 0..MAX_EVENTS events over 0..queue_count with modulus
- * operation
- *
- * For example, Inject 32 events over 0..7 queues
- * enqueue events 0, 8, 16, 24 in queue 0
- * enqueue events 1, 9, 17, 25 in queue 1
- * ..
- * ..
- * enqueue events 7, 15, 23, 31 in queue 7
- *
- * On dequeue, Validate the events comes in 0,8,16,24,1,9,17,25..,7,15,23,31
- * order from queue0(highest priority) to queue7(lowest_priority)
- */
-static int
-validate_queue_priority(uint32_t index, uint8_t port, struct rte_event *ev)
-{
-	uint32_t queue_count;
-
-	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
-				RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
-				"Queue count get failed");
-	if (queue_count > 8)
-		queue_count = 8;
-	uint32_t range = MAX_EVENTS / queue_count;
-	uint32_t expected_val = (index % range) * queue_count;
-
-	expected_val += ev->queue_id;
-	RTE_SET_USED(port);
-	RTE_TEST_ASSERT_EQUAL(
-		*rte_event_pmd_selftest_seqn(ev->mbuf), expected_val,
-		"seqn=%d index=%d expected=%d range=%d nb_queues=%d max_event=%d",
-		*rte_event_pmd_selftest_seqn(ev->mbuf), index, expected_val,
-		range, queue_count, MAX_EVENTS);
-	return 0;
-}
-
-static int
-test_multi_queue_priority(void)
-{
-	int i, max_evts_roundoff;
-	/* See validate_queue_priority() comments for priority validate logic */
-	uint32_t queue_count;
-	struct rte_mbuf *m;
-	uint8_t queue;
-
-	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
-				RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
-				"Queue count get failed");
-	if (queue_count > 8)
-		queue_count = 8;
-	max_evts_roundoff = MAX_EVENTS / queue_count;
-	max_evts_roundoff *= queue_count;
-
-	for (i = 0; i < max_evts_roundoff; i++) {
-		struct rte_event ev = {.event = 0, .u64 = 0};
-
-		m = rte_pktmbuf_alloc(eventdev_test_mempool);
-		RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed");
-
-		*rte_event_pmd_selftest_seqn(m) = i;
-		queue = i % queue_count;
-		update_event_and_validation_attr(m, &ev, 0, RTE_EVENT_TYPE_CPU,
-						 0, RTE_SCHED_TYPE_PARALLEL,
-						 queue, 0);
-		rte_event_enqueue_burst(evdev, 0, &ev, 1);
-	}
-
-	return consume_events(0, max_evts_roundoff, validate_queue_priority);
-}
-
-static int
-worker_multi_port_fn(void *arg)
-{
-	struct test_core_param *param = arg;
-	rte_atomic32_t *total_events = param->total_events;
-	uint8_t port = param->port;
-	uint16_t valid_event;
-	struct rte_event ev;
-	int ret;
-
-	while (rte_atomic32_read(total_events) > 0) {
-		valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
-		if (!valid_event)
-			continue;
-
-		ret = validate_event(&ev);
-		RTE_TEST_ASSERT_SUCCESS(ret, "Failed to validate event");
-		rte_pktmbuf_free(ev.mbuf);
-		rte_atomic32_sub(total_events, 1);
-	}
-
-	return 0;
-}
-
-static inline int
-wait_workers_to_join(const rte_atomic32_t *count)
-{
-	uint64_t cycles, print_cycles;
-
-	cycles = rte_get_timer_cycles();
-	print_cycles = cycles;
-	while (rte_atomic32_read(count)) {
-		uint64_t new_cycles = rte_get_timer_cycles();
-
-		if (new_cycles - print_cycles > rte_get_timer_hz()) {
-			otx2_err("Events %d", rte_atomic32_read(count));
-			print_cycles = new_cycles;
-		}
-		if (new_cycles - cycles > rte_get_timer_hz() * 10000000000) {
-			otx2_err("No schedules for seconds, deadlock (%d)",
-				 rte_atomic32_read(count));
-			rte_event_dev_dump(evdev, stdout);
-			cycles = new_cycles;
-			return -1;
-		}
-	}
-	rte_eal_mp_wait_lcore();
-
-	return 0;
-}
-
-static inline int
-launch_workers_and_wait(int (*main_thread)(void *),
-			int (*worker_thread)(void *), uint32_t total_events,
-			uint8_t nb_workers, uint8_t sched_type)
-{
-	rte_atomic32_t atomic_total_events;
-	struct test_core_param *param;
-	uint64_t dequeue_tmo_ticks;
-	uint8_t port = 0;
-	int w_lcore;
-	int ret;
-
-	if (!nb_workers)
-		return 0;
-
-	rte_atomic32_set(&atomic_total_events, total_events);
-	seqn_list_init();
-
-	param = malloc(sizeof(struct test_core_param) * nb_workers);
-	if (!param)
-		return -1;
-
-	ret = rte_event_dequeue_timeout_ticks(evdev,
-					      rte_rand() % 10000000/* 10ms */,
-					      &dequeue_tmo_ticks);
-	if (ret) {
-		free(param);
-		return -1;
-	}
-
-	param[0].total_events = &atomic_total_events;
-	param[0].sched_type = sched_type;
-	param[0].port = 0;
-	param[0].dequeue_tmo_ticks = dequeue_tmo_ticks;
-	rte_wmb();
-
-	w_lcore = rte_get_next_lcore(
-			/* start core */ -1,
-			/* skip main */ 1,
-			/* wrap */ 0);
-	rte_eal_remote_launch(main_thread, &param[0], w_lcore);
-
-	for (port = 1; port < nb_workers; port++) {
-		param[port].total_events = &atomic_total_events;
-		param[port].sched_type = sched_type;
-		param[port].port = port;
-		param[port].dequeue_tmo_ticks = dequeue_tmo_ticks;
-		rte_smp_wmb();
-		w_lcore = rte_get_next_lcore(w_lcore, 1, 0);
-		rte_eal_remote_launch(worker_thread, &param[port], w_lcore);
-	}
-
-	rte_smp_wmb();
-	ret = wait_workers_to_join(&atomic_total_events);
-	free(param);
-
-	return ret;
-}
-
-/*
- * Generate a prescribed number of events and spread them across available
- * queues. Dequeue the events through multiple ports and verify the enqueued
- * event attributes
- */
-static int
-test_multi_queue_enq_multi_port_deq(void)
-{
-	const unsigned int total_events = MAX_EVENTS;
-	uint32_t nr_ports;
-	int ret;
-
-	ret = generate_random_events(total_events);
-	if (ret)
-		return -1;
-
-	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
-				RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports),
-				"Port count get failed");
-	nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
-
-	if (!nr_ports) {
-		otx2_err("Not enough ports=%d or workers=%d", nr_ports,
-			 rte_lcore_count() - 1);
-		return 0;
-	}
-
-	return launch_workers_and_wait(worker_multi_port_fn,
-				       worker_multi_port_fn, total_events,
-				       nr_ports, 0xff /* invalid */);
-}
-
-static
-void flush(uint8_t dev_id, struct rte_event event, void *arg)
-{
-	unsigned int *count = arg;
-
-	RTE_SET_USED(dev_id);
-	if (event.event_type == RTE_EVENT_TYPE_CPU)
-		*count = *count + 1;
-}
-
-static int
-test_dev_stop_flush(void)
-{
-	unsigned int total_events = MAX_EVENTS, count = 0;
-	int ret;
-
-	ret = generate_random_events(total_events);
-	if (ret)
-		return -1;
-
-	ret = rte_event_dev_stop_flush_callback_register(evdev, flush, &count);
-	if (ret)
-		return -2;
-	rte_event_dev_stop(evdev);
-	ret = rte_event_dev_stop_flush_callback_register(evdev, NULL, NULL);
-	if (ret)
-		return -3;
-	RTE_TEST_ASSERT_EQUAL(total_events, count,
-			      "count mismatch total_events=%d count=%d",
-			      total_events, count);
-
-	return 0;
-}
-
-static int
-validate_queue_to_port_single_link(uint32_t index, uint8_t port,
-				   struct rte_event *ev)
-{
-	RTE_SET_USED(index);
-	RTE_TEST_ASSERT_EQUAL(port, ev->queue_id,
-			      "queue mismatch enq=%d deq =%d",
-			      port, ev->queue_id);
-
-	return 0;
-}
-
-/*
- * Link queue x to port x and check correctness of link by checking
- * queue_id == x on dequeue on the specific port x
- */
-static int
-test_queue_to_port_single_link(void)
-{
-	int i, nr_links, ret;
-	uint32_t queue_count;
-	uint32_t port_count;
-
-	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
-				RTE_EVENT_DEV_ATTR_PORT_COUNT, &port_count),
-				"Port count get failed");
-
-	/* Unlink all connections that created in eventdev_setup */
-	for (i = 0; i < (int)port_count; i++) {
-		ret = rte_event_port_unlink(evdev, i, NULL, 0);
-		RTE_TEST_ASSERT(ret >= 0,
-				"Failed to unlink all queues port=%d", i);
-	}
-
-	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
-				RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
-				"Queue count get failed");
-
-	nr_links = RTE_MIN(port_count, queue_count);
-	const unsigned int total_events = MAX_EVENTS / nr_links;
-
-	/* Link queue x to port x and inject events to queue x through port x */
-	for (i = 0; i < nr_links; i++) {
-		uint8_t queue = (uint8_t)i;
-
-		ret = rte_event_port_link(evdev, i, &queue, NULL, 1);
-		RTE_TEST_ASSERT(ret == 1, "Failed to link queue to port %d", i);
-
-		ret = inject_events(0x100 /*flow_id */,
-				    RTE_EVENT_TYPE_CPU /* event_type */,
-				    rte_rand() % 256 /* sub_event_type */,
-				    rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1),
-				    queue /* queue */, i /* port */,
-				    total_events /* events */);
-		if (ret)
-			return -1;
-	}
-
-	/* Verify the events generated from correct queue */
-	for (i = 0; i < nr_links; i++) {
-		ret = consume_events(i /* port */, total_events,
-				     validate_queue_to_port_single_link);
-		if (ret)
-			return -1;
-	}
-
-	return 0;
-}
-
-static int
-validate_queue_to_port_multi_link(uint32_t index, uint8_t port,
-				  struct rte_event *ev)
-{
-	RTE_SET_USED(index);
-	RTE_TEST_ASSERT_EQUAL(port, (ev->queue_id & 0x1),
-			      "queue mismatch enq=%d deq =%d",
-			      port, ev->queue_id);
-
-	return 0;
-}
-
-/*
- * Link all even number of queues to port 0 and all odd number of queues to
- * port 1 and verify the link connection on dequeue
- */
-static int
-test_queue_to_port_multi_link(void)
-{
-	int ret, port0_events = 0, port1_events = 0;
-	uint32_t nr_queues = 0;
-	uint32_t nr_ports = 0;
-	uint8_t queue, port;
-
-	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
-				RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &nr_queues),
-				"Queue count get failed");
-	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
-				RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &nr_queues),
-				"Queue count get failed");
-	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
-				RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports),
-				"Port count get failed");
-
-	if (nr_ports < 2) {
-		otx2_err("Not enough ports to test ports=%d", nr_ports);
-		return 0;
-	}
-
-	/* Unlink all connections that created in eventdev_setup */
-	for (port = 0; port < nr_ports; port++) {
-		ret = rte_event_port_unlink(evdev, port, NULL, 0);
-		RTE_TEST_ASSERT(ret >= 0, "Failed to unlink all queues port=%d",
-				port);
-	}
-
-	const unsigned int total_events = MAX_EVENTS / nr_queues;
-
-	/* Link all even number of queues to port0 and odd numbers to port 1*/
-	for (queue = 0; queue < nr_queues; queue++) {
-		port = queue & 0x1;
-		ret = rte_event_port_link(evdev, port, &queue, NULL, 1);
-		RTE_TEST_ASSERT(ret == 1, "Failed to link queue=%d to port=%d",
-				queue, port);
-
-		ret = inject_events(0x100 /*flow_id */,
-				    RTE_EVENT_TYPE_CPU /* event_type */,
-				    rte_rand() % 256 /* sub_event_type */,
-				    rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1),
-				    queue /* queue */, port /* port */,
-				    total_events /* events */);
-		if (ret)
-			return -1;
-
-		if (port == 0)
-			port0_events += total_events;
-		else
-			port1_events += total_events;
-	}
-
-	ret = consume_events(0 /* port */, port0_events,
-			     validate_queue_to_port_multi_link);
-	if (ret)
-		return -1;
-	ret = consume_events(1 /* port */, port1_events,
-			     validate_queue_to_port_multi_link);
-	if (ret)
-		return -1;
-
-	return 0;
-}
-
-static int
-worker_flow_based_pipeline(void *arg)
-{
-	struct test_core_param *param = arg;
-	uint64_t dequeue_tmo_ticks = param->dequeue_tmo_ticks;
-	rte_atomic32_t *total_events = param->total_events;
-	uint8_t new_sched_type = param->sched_type;
-	uint8_t port = param->port;
-	uint16_t valid_event;
-	struct rte_event ev;
-
-	while (rte_atomic32_read(total_events) > 0) {
-		valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1,
-						      dequeue_tmo_ticks);
-		if (!valid_event)
-			continue;
-
-		/* Events from stage 0 */
-		if (ev.sub_event_type == 0) {
-			/* Move to atomic flow to maintain the ordering */
-			ev.flow_id = 0x2;
-			ev.event_type = RTE_EVENT_TYPE_CPU;
-			ev.sub_event_type = 1; /* stage 1 */
-			ev.sched_type = new_sched_type;
-			ev.op = RTE_EVENT_OP_FORWARD;
-			rte_event_enqueue_burst(evdev, port, &ev, 1);
-		} else if (ev.sub_event_type == 1) { /* Events from stage 1*/
-			uint32_t seqn = *rte_event_pmd_selftest_seqn(ev.mbuf);
-
-			if (seqn_list_update(seqn) == 0) {
-				rte_pktmbuf_free(ev.mbuf);
-				rte_atomic32_sub(total_events, 1);
-			} else {
-				otx2_err("Failed to update seqn_list");
-				return -1;
-			}
-		} else {
-			otx2_err("Invalid ev.sub_event_type = %d",
-				 ev.sub_event_type);
-			return -1;
-		}
-	}
-	return 0;
-}
-
-static int
-test_multiport_flow_sched_type_test(uint8_t in_sched_type,
-				    uint8_t out_sched_type)
-{
-	const unsigned int total_events = MAX_EVENTS;
-	uint32_t nr_ports;
-	int ret;
-
-	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
-				RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports),
-				"Port count get failed");
-	nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
-
-	if (!nr_ports) {
-		otx2_err("Not enough ports=%d or workers=%d", nr_ports,
-			 rte_lcore_count() - 1);
-		return 0;
-	}
-
-	/* Injects events with a 0 sequence number to total_events */
-	ret = inject_events(0x1 /*flow_id */,
-			    RTE_EVENT_TYPE_CPU /* event_type */,
-			    0 /* sub_event_type (stage 0) */,
-			    in_sched_type,
-			    0 /* queue */,
-			    0 /* port */,
-			    total_events /* events */);
-	if (ret)
-		return -1;
-
-	rte_mb();
-	ret = launch_workers_and_wait(worker_flow_based_pipeline,
-				      worker_flow_based_pipeline, total_events,
-				      nr_ports, out_sched_type);
-	if (ret)
-		return -1;
-
-	if (in_sched_type != RTE_SCHED_TYPE_PARALLEL &&
-	    out_sched_type == RTE_SCHED_TYPE_ATOMIC) {
-		/* Check the events order maintained or not */
-		return seqn_list_check(total_events);
-	}
-
-	return 0;
-}
-
-/* Multi port ordered to atomic transaction */
-static int
-test_multi_port_flow_ordered_to_atomic(void)
-{
-	/* Ingress event order test */
-	return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ORDERED,
-						   RTE_SCHED_TYPE_ATOMIC);
-}
-
-static int
-test_multi_port_flow_ordered_to_ordered(void)
-{
-	return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ORDERED,
-						   RTE_SCHED_TYPE_ORDERED);
-}
-
-static int
-test_multi_port_flow_ordered_to_parallel(void)
-{
-	return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ORDERED,
-						   RTE_SCHED_TYPE_PARALLEL);
-}
-
-static int
-test_multi_port_flow_atomic_to_atomic(void)
-{
-	/* Ingress event order test */
-	return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
-						   RTE_SCHED_TYPE_ATOMIC);
-}
-
-static int
-test_multi_port_flow_atomic_to_ordered(void)
-{
-	return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
-						   RTE_SCHED_TYPE_ORDERED);
-}
-
-static int
-test_multi_port_flow_atomic_to_parallel(void)
-{
-	return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
-						   RTE_SCHED_TYPE_PARALLEL);
-}
-
-static int
-test_multi_port_flow_parallel_to_atomic(void)
-{
-	return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
-						   RTE_SCHED_TYPE_ATOMIC);
-}
-
-static int
-test_multi_port_flow_parallel_to_ordered(void)
-{
-	return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
-						   RTE_SCHED_TYPE_ORDERED);
-}
-
-static int
-test_multi_port_flow_parallel_to_parallel(void)
-{
-	return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
-						   RTE_SCHED_TYPE_PARALLEL);
-}
-
-static int
-worker_group_based_pipeline(void *arg)
-{
-	struct test_core_param *param = arg;
-	uint64_t dequeue_tmo_ticks = param->dequeue_tmo_ticks;
-	rte_atomic32_t *total_events = param->total_events;
-	uint8_t new_sched_type = param->sched_type;
-	uint8_t port = param->port;
-	uint16_t valid_event;
-	struct rte_event ev;
-
-	while (rte_atomic32_read(total_events) > 0) {
-		valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1,
-						      dequeue_tmo_ticks);
-		if (!valid_event)
-			continue;
-
-		/* Events from stage 0(group 0) */
-		if (ev.queue_id == 0) {
-			/* Move to atomic flow to maintain the ordering */
-			ev.flow_id = 0x2;
-			ev.event_type = RTE_EVENT_TYPE_CPU;
-			ev.sched_type = new_sched_type;
-			ev.queue_id = 1; /* Stage 1*/
-			ev.op = RTE_EVENT_OP_FORWARD;
-			rte_event_enqueue_burst(evdev, port, &ev, 1);
-		} else if (ev.queue_id == 1) { /* Events from stage 1(group 1)*/
-			uint32_t seqn = *rte_event_pmd_selftest_seqn(ev.mbuf);
-
-			if (seqn_list_update(seqn) == 0) {
-				rte_pktmbuf_free(ev.mbuf);
-				rte_atomic32_sub(total_events, 1);
-			} else {
-				otx2_err("Failed to update seqn_list");
-				return -1;
-			}
-		} else {
-			otx2_err("Invalid ev.queue_id = %d", ev.queue_id);
-			return -1;
-		}
-	}
-
-	return 0;
-}
-
-static int
-test_multiport_queue_sched_type_test(uint8_t in_sched_type,
-				     uint8_t out_sched_type)
-{
-	const unsigned int total_events = MAX_EVENTS;
-	uint32_t queue_count;
-	uint32_t nr_ports;
-	int ret;
-
-	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
-				RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports),
-				"Port count get failed");
-
-	nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
-
-	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
-				RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
-				"Queue count get failed");
-	if (queue_count < 2 || !nr_ports) {
-		otx2_err("Not enough queues=%d ports=%d or workers=%d",
-			 queue_count, nr_ports,
-			 rte_lcore_count() - 1);
-		return 0;
-	}
-
-	/* Injects events with a 0 sequence number to total_events */
-	ret = inject_events(0x1 /*flow_id */,
-			    RTE_EVENT_TYPE_CPU /* event_type */,
-			    0 /* sub_event_type (stage 0) */,
-			    in_sched_type,
-			    0 /* queue */,
-			    0 /* port */,
-			    total_events /* events */);
-	if (ret)
-		return -1;
-
-	ret = launch_workers_and_wait(worker_group_based_pipeline,
-				      worker_group_based_pipeline, total_events,
-				      nr_ports, out_sched_type);
-	if (ret)
-		return -1;
-
-	if (in_sched_type != RTE_SCHED_TYPE_PARALLEL &&
-	    out_sched_type == RTE_SCHED_TYPE_ATOMIC) {
-		/* Check the events order maintained or not */
-		return seqn_list_check(total_events);
-	}
-
-	return 0;
-}
-
-static int
-test_multi_port_queue_ordered_to_atomic(void)
-{
-	/* Ingress event order test */
-	return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ORDERED,
-						    RTE_SCHED_TYPE_ATOMIC);
-}
-
-static int
-test_multi_port_queue_ordered_to_ordered(void)
-{
-	return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ORDERED,
-						    RTE_SCHED_TYPE_ORDERED);
-}
-
-static int
-test_multi_port_queue_ordered_to_parallel(void)
-{
-	return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ORDERED,
-						    RTE_SCHED_TYPE_PARALLEL);
-}
-
-static int
-test_multi_port_queue_atomic_to_atomic(void)
-{
-	/* Ingress event order test */
-	return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
-						    RTE_SCHED_TYPE_ATOMIC);
-}
-
-static int
-test_multi_port_queue_atomic_to_ordered(void)
-{
-	return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
-						    RTE_SCHED_TYPE_ORDERED);
-}
-
-static int
-test_multi_port_queue_atomic_to_parallel(void)
-{
-	return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
-						    RTE_SCHED_TYPE_PARALLEL);
-}
-
-static int
-test_multi_port_queue_parallel_to_atomic(void)
-{
-	return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
-						    RTE_SCHED_TYPE_ATOMIC);
-}
-
-static int
-test_multi_port_queue_parallel_to_ordered(void)
-{
-	return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
-						    RTE_SCHED_TYPE_ORDERED);
-}
-
-static int
-test_multi_port_queue_parallel_to_parallel(void)
-{
-	return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
-						    RTE_SCHED_TYPE_PARALLEL);
-}
-
-static int
-worker_flow_based_pipeline_max_stages_rand_sched_type(void *arg)
-{
-	struct test_core_param *param = arg;
-	rte_atomic32_t *total_events = param->total_events;
-	uint8_t port = param->port;
-	uint16_t valid_event;
-	struct rte_event ev;
-
-	while (rte_atomic32_read(total_events) > 0) {
-		valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
-		if (!valid_event)
-			continue;
-
-		if (ev.sub_event_type == 255) { /* last stage */
-			rte_pktmbuf_free(ev.mbuf);
-			rte_atomic32_sub(total_events, 1);
-		} else {
-			ev.event_type = RTE_EVENT_TYPE_CPU;
-			ev.sub_event_type++;
-			ev.sched_type =
-				rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1);
-			ev.op = RTE_EVENT_OP_FORWARD;
-			rte_event_enqueue_burst(evdev, port, &ev, 1);
-		}
-	}
-
-	return 0;
-}
-
-static int
-launch_multi_port_max_stages_random_sched_type(int (*fn)(void *))
-{
-	uint32_t nr_ports;
-	int ret;
-
-	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
-				RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports),
-				"Port count get failed");
-	nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
-
-	if (!nr_ports) {
-		otx2_err("Not enough ports=%d or workers=%d",
-			 nr_ports, rte_lcore_count() - 1);
-		return 0;
-	}
-
-	/* Injects events with a 0 sequence number to total_events */
-	ret = inject_events(0x1 /*flow_id */,
-			    RTE_EVENT_TYPE_CPU /* event_type */,
-			    0 /* sub_event_type (stage 0) */,
-			    rte_rand() %
-			    (RTE_SCHED_TYPE_PARALLEL + 1) /* sched_type */,
-			    0 /* queue */,
-			    0 /* port */,
-			    MAX_EVENTS /* events */);
-	if (ret)
-		return -1;
-
-	return launch_workers_and_wait(fn, fn, MAX_EVENTS, nr_ports,
-				       0xff /* invalid */);
-}
-
-/* Flow based pipeline with maximum stages with random sched type */
-static int
-test_multi_port_flow_max_stages_random_sched_type(void)
-{
-	return launch_multi_port_max_stages_random_sched_type(
-			worker_flow_based_pipeline_max_stages_rand_sched_type);
-}
-
-static int
-worker_queue_based_pipeline_max_stages_rand_sched_type(void *arg)
-{
-	struct test_core_param *param = arg;
-	uint8_t port = param->port;
-	uint32_t queue_count;
-	uint16_t valid_event;
-	struct rte_event ev;
-
-	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
-				RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
-				"Queue count get failed");
-	uint8_t nr_queues = queue_count;
-	rte_atomic32_t *total_events = param->total_events;
-
-	while (rte_atomic32_read(total_events) > 0) {
-		valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
-		if (!valid_event)
-			continue;
-
-		if (ev.queue_id == nr_queues - 1) { /* last stage */
-			rte_pktmbuf_free(ev.mbuf);
-			rte_atomic32_sub(total_events, 1);
-		} else {
-			ev.event_type = RTE_EVENT_TYPE_CPU;
-			ev.queue_id++;
-			ev.sched_type =
-				rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1);
-			ev.op = RTE_EVENT_OP_FORWARD;
-			rte_event_enqueue_burst(evdev, port, &ev, 1);
-		}
-	}
-
-	return 0;
-}
-
-/* Queue based pipeline with maximum stages with random sched type */
-static int
-test_multi_port_queue_max_stages_random_sched_type(void)
-{
-	return launch_multi_port_max_stages_random_sched_type(
-			worker_queue_based_pipeline_max_stages_rand_sched_type);
-}
-
-static int
-worker_mixed_pipeline_max_stages_rand_sched_type(void *arg)
-{
-	struct test_core_param *param = arg;
-	uint8_t port = param->port;
-	uint32_t queue_count;
-	uint16_t valid_event;
-	struct rte_event ev;
-
-	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
-				RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
-				"Queue count get failed");
-	uint8_t nr_queues = queue_count;
-	rte_atomic32_t *total_events = param->total_events;
-
-	while (rte_atomic32_read(total_events) > 0) {
-		valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
-		if (!valid_event)
-			continue;
-
-		if (ev.queue_id == nr_queues - 1) { /* Last stage */
-			rte_pktmbuf_free(ev.mbuf);
-			rte_atomic32_sub(total_events, 1);
-		} else {
-			ev.event_type = RTE_EVENT_TYPE_CPU;
-			ev.queue_id++;
-			ev.sub_event_type = rte_rand() % 256;
-			ev.sched_type =
-				rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1);
-			ev.op = RTE_EVENT_OP_FORWARD;
-			rte_event_enqueue_burst(evdev, port, &ev, 1);
-		}
-	}
-
-	return 0;
-}
-
-/* Queue and flow based pipeline with maximum stages with random sched type */
-static int
-test_multi_port_mixed_max_stages_random_sched_type(void)
-{
-	return launch_multi_port_max_stages_random_sched_type(
-			worker_mixed_pipeline_max_stages_rand_sched_type);
-}
-
-static int
-worker_ordered_flow_producer(void *arg)
-{
-	struct test_core_param *param = arg;
-	uint8_t port = param->port;
-	struct rte_mbuf *m;
-	int counter = 0;
-
-	while (counter < NUM_PACKETS) {
-		m = rte_pktmbuf_alloc(eventdev_test_mempool);
-		if (m == NULL)
-			continue;
-
-		*rte_event_pmd_selftest_seqn(m) = counter++;
-
-		struct rte_event ev = {.event = 0, .u64 = 0};
-
-		ev.flow_id = 0x1; /* Generate a fat flow */
-		ev.sub_event_type = 0;
-		/* Inject the new event */
-		ev.op = RTE_EVENT_OP_NEW;
-		ev.event_type = RTE_EVENT_TYPE_CPU;
-		ev.sched_type = RTE_SCHED_TYPE_ORDERED;
-		ev.queue_id = 0;
-		ev.mbuf = m;
-		rte_event_enqueue_burst(evdev, port, &ev, 1);
-	}
-
-	return 0;
-}
-
-static inline int
-test_producer_consumer_ingress_order_test(int (*fn)(void *))
-{
-	uint32_t nr_ports;
-
-	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
-				RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports),
-				"Port count get failed");
-	nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
-
-	if (rte_lcore_count() < 3 || nr_ports < 2) {
-		otx2_err("### Not enough cores for test.");
-		return 0;
-	}
-
-	launch_workers_and_wait(worker_ordered_flow_producer, fn,
-				NUM_PACKETS, nr_ports, RTE_SCHED_TYPE_ATOMIC);
-	/* Check the events order maintained or not */
-	return seqn_list_check(NUM_PACKETS);
-}
-
-/* Flow based producer consumer ingress order test */
-static int
-test_flow_producer_consumer_ingress_order_test(void)
-{
-	return test_producer_consumer_ingress_order_test(
-				worker_flow_based_pipeline);
-}
-
-/* Queue based producer consumer ingress order test */
-static int
-test_queue_producer_consumer_ingress_order_test(void)
-{
-	return test_producer_consumer_ingress_order_test(
-				worker_group_based_pipeline);
-}
-
-static void octeontx_test_run(int (*setup)(void), void (*tdown)(void),
-			      int (*test)(void), const char *name)
-{
-	if (setup() < 0) {
-		printf("Error setting up test %s", name);
-		unsupported++;
-	} else {
-		if (test() < 0) {
-			failed++;
-			printf("+ TestCase [%2d] : %s failed\n", total, name);
-		} else {
-			passed++;
-			printf("+ TestCase [%2d] : %s succeeded\n", total,
-			       name);
-		}
-	}
-
-	total++;
-	tdown();
-}
-
-int
-otx2_sso_selftest(void)
-{
-	testsuite_setup();
-
-	OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
-			   test_simple_enqdeq_ordered);
-	OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
-			   test_simple_enqdeq_atomic);
-	OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
-			   test_simple_enqdeq_parallel);
-	OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
-			   test_multi_queue_enq_single_port_deq);
-	OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
-			   test_dev_stop_flush);
-	OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
-			   test_multi_queue_enq_multi_port_deq);
-	OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
-			   test_queue_to_port_single_link);
-	OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
-			   test_queue_to_port_multi_link);
-	OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
-			   test_multi_port_flow_ordered_to_atomic);
-	OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
-			   test_multi_port_flow_ordered_to_ordered);
-	OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
-			   test_multi_port_flow_ordered_to_parallel);
-	OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
-			   test_multi_port_flow_atomic_to_atomic);
-	OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
-			   test_multi_port_flow_atomic_to_ordered);
-	OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
-			   test_multi_port_flow_atomic_to_parallel);
-	OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
-			   test_multi_port_flow_parallel_to_atomic);
-	OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
-			   test_multi_port_flow_parallel_to_ordered);
-	OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
-			   test_multi_port_flow_parallel_to_parallel);
-	OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
-			   test_multi_port_queue_ordered_to_atomic);
-	OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
-			   test_multi_port_queue_ordered_to_ordered);
-	OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
-			   test_multi_port_queue_ordered_to_parallel);
-	OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
-			   test_multi_port_queue_atomic_to_atomic);
-	OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
-			   test_multi_port_queue_atomic_to_ordered);
-	OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
-			   test_multi_port_queue_atomic_to_parallel);
-	OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
-			   test_multi_port_queue_parallel_to_atomic);
-	OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
-			   test_multi_port_queue_parallel_to_ordered);
-	OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
-			   test_multi_port_queue_parallel_to_parallel);
-	OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
-			   test_multi_port_flow_max_stages_random_sched_type);
-	OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
-			   test_multi_port_queue_max_stages_random_sched_type);
-	OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
-			   test_multi_port_mixed_max_stages_random_sched_type);
-	OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
-			   test_flow_producer_consumer_ingress_order_test);
-	OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
-			   test_queue_producer_consumer_ingress_order_test);
-	OCTEONTX2_TEST_RUN(eventdev_setup_priority, eventdev_teardown,
-			   test_multi_queue_priority);
-	OCTEONTX2_TEST_RUN(eventdev_setup_dequeue_timeout, eventdev_teardown,
-			   test_multi_port_flow_ordered_to_atomic);
-	OCTEONTX2_TEST_RUN(eventdev_setup_dequeue_timeout, eventdev_teardown,
-			   test_multi_port_queue_ordered_to_atomic);
-	printf("Total tests : %d\n", total);
-	printf("Passed : %d\n", passed);
-	printf("Failed : %d\n", failed);
-	printf("Not supported : %d\n", unsupported);
-
-	testsuite_teardown();
-
-	if (failed)
-		return -1;
-
-	return 0;
-}
diff --git a/drivers/event/octeontx2/otx2_evdev_stats.h b/drivers/event/octeontx2/otx2_evdev_stats.h
deleted file mode 100644
index 74fcec8a07..0000000000
--- a/drivers/event/octeontx2/otx2_evdev_stats.h
+++ /dev/null
@@ -1,286 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_EVDEV_STATS_H__
-#define __OTX2_EVDEV_STATS_H__
-
-#include "otx2_evdev.h"
-
-struct otx2_sso_xstats_name {
-	const char name[RTE_EVENT_DEV_XSTATS_NAME_SIZE];
-	const size_t offset;
-	const uint64_t mask;
-	const uint8_t shift;
-	uint64_t reset_snap[OTX2_SSO_MAX_VHGRP];
-};
-
-static struct otx2_sso_xstats_name sso_hws_xstats[] = {
-	{"last_grp_serviced", offsetof(struct sso_hws_stats, arbitration),
-		0x3FF, 0, {0} },
-	{"affinity_arbitration_credits",
-		offsetof(struct sso_hws_stats, arbitration),
-		0xF, 16, {0} },
-};
-
-static struct otx2_sso_xstats_name sso_grp_xstats[] = {
-	{"wrk_sched", offsetof(struct sso_grp_stats, ws_pc), ~0x0, 0,
-		{0} },
-	{"xaq_dram", offsetof(struct sso_grp_stats, ext_pc), ~0x0,
-		0, {0} },
-	{"add_wrk", offsetof(struct sso_grp_stats, wa_pc), ~0x0, 0,
-		{0} },
-	{"tag_switch_req", offsetof(struct sso_grp_stats, ts_pc), ~0x0, 0,
-		{0} },
-	{"desched_req", offsetof(struct sso_grp_stats, ds_pc), ~0x0, 0,
-		{0} },
-	{"desched_wrk", offsetof(struct
sso_grp_stats, dq_pc), ~0x0, 0, - {0} }, - {"xaq_cached", offsetof(struct sso_grp_stats, aw_status), 0x3, - 0, {0} }, - {"work_inflight", offsetof(struct sso_grp_stats, aw_status), 0x3F, - 16, {0} }, - {"inuse_pages", offsetof(struct sso_grp_stats, page_cnt), - 0xFFFFFFFF, 0, {0} }, -}; - -#define OTX2_SSO_NUM_HWS_XSTATS RTE_DIM(sso_hws_xstats) -#define OTX2_SSO_NUM_GRP_XSTATS RTE_DIM(sso_grp_xstats) - -#define OTX2_SSO_NUM_XSTATS (OTX2_SSO_NUM_HWS_XSTATS + OTX2_SSO_NUM_GRP_XSTATS) - -static int -otx2_sso_xstats_get(const struct rte_eventdev *event_dev, - enum rte_event_dev_xstats_mode mode, uint8_t queue_port_id, - const unsigned int ids[], uint64_t values[], unsigned int n) -{ - struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev); - struct otx2_sso_xstats_name *xstats; - struct otx2_sso_xstats_name *xstat; - struct otx2_mbox *mbox = dev->mbox; - uint32_t xstats_mode_count = 0; - uint32_t start_offset = 0; - unsigned int i; - uint64_t value; - void *req_rsp; - int rc; - - switch (mode) { - case RTE_EVENT_DEV_XSTATS_DEVICE: - return 0; - case RTE_EVENT_DEV_XSTATS_PORT: - if (queue_port_id >= (signed int)dev->nb_event_ports) - goto invalid_value; - - xstats_mode_count = OTX2_SSO_NUM_HWS_XSTATS; - xstats = sso_hws_xstats; - - req_rsp = otx2_mbox_alloc_msg_sso_hws_get_stats(mbox); - ((struct sso_info_req *)req_rsp)->hws = dev->dual_ws ? 
- 2 * queue_port_id : queue_port_id; - rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp); - if (rc < 0) - goto invalid_value; - - if (dev->dual_ws) { - for (i = 0; i < n && i < xstats_mode_count; i++) { - xstat = &xstats[ids[i] - start_offset]; - values[i] = *(uint64_t *) - ((char *)req_rsp + xstat->offset); - values[i] = (values[i] >> xstat->shift) & - xstat->mask; - } - - req_rsp = otx2_mbox_alloc_msg_sso_hws_get_stats(mbox); - ((struct sso_info_req *)req_rsp)->hws = - (2 * queue_port_id) + 1; - rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp); - if (rc < 0) - goto invalid_value; - } - - break; - case RTE_EVENT_DEV_XSTATS_QUEUE: - if (queue_port_id >= (signed int)dev->nb_event_queues) - goto invalid_value; - - xstats_mode_count = OTX2_SSO_NUM_GRP_XSTATS; - start_offset = OTX2_SSO_NUM_HWS_XSTATS; - xstats = sso_grp_xstats; - - req_rsp = otx2_mbox_alloc_msg_sso_grp_get_stats(mbox); - ((struct sso_info_req *)req_rsp)->grp = queue_port_id; - rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp); - if (rc < 0) - goto invalid_value; - - break; - default: - otx2_err("Invalid mode received"); - goto invalid_value; - }; - - for (i = 0; i < n && i < xstats_mode_count; i++) { - xstat = &xstats[ids[i] - start_offset]; - value = *(uint64_t *)((char *)req_rsp + xstat->offset); - value = (value >> xstat->shift) & xstat->mask; - - if ((mode == RTE_EVENT_DEV_XSTATS_PORT) && dev->dual_ws) - values[i] += value; - else - values[i] = value; - - values[i] -= xstat->reset_snap[queue_port_id]; - } - - return i; -invalid_value: - return -EINVAL; -} - -static int -otx2_sso_xstats_reset(struct rte_eventdev *event_dev, - enum rte_event_dev_xstats_mode mode, - int16_t queue_port_id, const uint32_t ids[], uint32_t n) -{ - struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev); - struct otx2_sso_xstats_name *xstats; - struct otx2_sso_xstats_name *xstat; - struct otx2_mbox *mbox = dev->mbox; - uint32_t xstats_mode_count = 0; - uint32_t start_offset = 0; - unsigned int i; - uint64_t value; - 
void *req_rsp; - int rc; - - switch (mode) { - case RTE_EVENT_DEV_XSTATS_DEVICE: - return 0; - case RTE_EVENT_DEV_XSTATS_PORT: - if (queue_port_id >= (signed int)dev->nb_event_ports) - goto invalid_value; - - xstats_mode_count = OTX2_SSO_NUM_HWS_XSTATS; - xstats = sso_hws_xstats; - - req_rsp = otx2_mbox_alloc_msg_sso_hws_get_stats(mbox); - ((struct sso_info_req *)req_rsp)->hws = dev->dual_ws ? - 2 * queue_port_id : queue_port_id; - rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp); - if (rc < 0) - goto invalid_value; - - if (dev->dual_ws) { - for (i = 0; i < n && i < xstats_mode_count; i++) { - xstat = &xstats[ids[i] - start_offset]; - xstat->reset_snap[queue_port_id] = *(uint64_t *) - ((char *)req_rsp + xstat->offset); - xstat->reset_snap[queue_port_id] = - (xstat->reset_snap[queue_port_id] >> - xstat->shift) & xstat->mask; - } - - req_rsp = otx2_mbox_alloc_msg_sso_hws_get_stats(mbox); - ((struct sso_info_req *)req_rsp)->hws = - (2 * queue_port_id) + 1; - rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp); - if (rc < 0) - goto invalid_value; - } - - break; - case RTE_EVENT_DEV_XSTATS_QUEUE: - if (queue_port_id >= (signed int)dev->nb_event_queues) - goto invalid_value; - - xstats_mode_count = OTX2_SSO_NUM_GRP_XSTATS; - start_offset = OTX2_SSO_NUM_HWS_XSTATS; - xstats = sso_grp_xstats; - - req_rsp = otx2_mbox_alloc_msg_sso_grp_get_stats(mbox); - ((struct sso_info_req *)req_rsp)->grp = queue_port_id; - rc = otx2_mbox_process_msg(mbox, (void *)&req_rsp); - if (rc < 0) - goto invalid_value; - - break; - default: - otx2_err("Invalid mode received"); - goto invalid_value; - }; - - for (i = 0; i < n && i < xstats_mode_count; i++) { - xstat = &xstats[ids[i] - start_offset]; - value = *(uint64_t *)((char *)req_rsp + xstat->offset); - value = (value >> xstat->shift) & xstat->mask; - - if ((mode == RTE_EVENT_DEV_XSTATS_PORT) && dev->dual_ws) - xstat->reset_snap[queue_port_id] += value; - else - xstat->reset_snap[queue_port_id] = value; - } - return i; -invalid_value: - 
return -EINVAL; -} - -static int -otx2_sso_xstats_get_names(const struct rte_eventdev *event_dev, - enum rte_event_dev_xstats_mode mode, - uint8_t queue_port_id, - struct rte_event_dev_xstats_name *xstats_names, - unsigned int *ids, unsigned int size) -{ - struct rte_event_dev_xstats_name xstats_names_copy[OTX2_SSO_NUM_XSTATS]; - struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev); - uint32_t xstats_mode_count = 0; - uint32_t start_offset = 0; - unsigned int xidx = 0; - unsigned int i; - - for (i = 0; i < OTX2_SSO_NUM_HWS_XSTATS; i++) { - snprintf(xstats_names_copy[i].name, - sizeof(xstats_names_copy[i].name), "%s", - sso_hws_xstats[i].name); - } - - for (; i < OTX2_SSO_NUM_XSTATS; i++) { - snprintf(xstats_names_copy[i].name, - sizeof(xstats_names_copy[i].name), "%s", - sso_grp_xstats[i - OTX2_SSO_NUM_HWS_XSTATS].name); - } - - switch (mode) { - case RTE_EVENT_DEV_XSTATS_DEVICE: - break; - case RTE_EVENT_DEV_XSTATS_PORT: - if (queue_port_id >= (signed int)dev->nb_event_ports) - break; - xstats_mode_count = OTX2_SSO_NUM_HWS_XSTATS; - break; - case RTE_EVENT_DEV_XSTATS_QUEUE: - if (queue_port_id >= (signed int)dev->nb_event_queues) - break; - xstats_mode_count = OTX2_SSO_NUM_GRP_XSTATS; - start_offset = OTX2_SSO_NUM_HWS_XSTATS; - break; - default: - otx2_err("Invalid mode received"); - return -EINVAL; - }; - - if (xstats_mode_count > size || !ids || !xstats_names) - return xstats_mode_count; - - for (i = 0; i < xstats_mode_count; i++) { - xidx = i + start_offset; - strncpy(xstats_names[i].name, xstats_names_copy[xidx].name, - sizeof(xstats_names[i].name)); - ids[i] = xidx; - } - - return i; -} - -#endif diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c deleted file mode 100644 index 6da8b14b78..0000000000 --- a/drivers/event/octeontx2/otx2_tim_evdev.c +++ /dev/null @@ -1,735 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2019 Marvell International Ltd. 
- */ - -#include -#include -#include - -#include "otx2_evdev.h" -#include "otx2_tim_evdev.h" - -static struct event_timer_adapter_ops otx2_tim_ops; - -static inline int -tim_get_msix_offsets(void) -{ - struct otx2_tim_evdev *dev = tim_priv_get(); - struct otx2_mbox *mbox = dev->mbox; - struct msix_offset_rsp *msix_rsp; - int i, rc; - - /* Get TIM MSIX vector offsets */ - otx2_mbox_alloc_msg_msix_offset(mbox); - rc = otx2_mbox_process_msg(mbox, (void *)&msix_rsp); - - for (i = 0; i < dev->nb_rings; i++) - dev->tim_msixoff[i] = msix_rsp->timlf_msixoff[i]; - - return rc; -} - -static void -tim_set_fp_ops(struct otx2_tim_ring *tim_ring) -{ - uint8_t prod_flag = !tim_ring->prod_type_sp; - - /* [DFB/FB] [SP][MP]*/ - const rte_event_timer_arm_burst_t arm_burst[2][2][2] = { -#define FP(_name, _f3, _f2, _f1, flags) \ - [_f3][_f2][_f1] = otx2_tim_arm_burst_##_name, - TIM_ARM_FASTPATH_MODES -#undef FP - }; - - const rte_event_timer_arm_tmo_tick_burst_t arm_tmo_burst[2][2] = { -#define FP(_name, _f2, _f1, flags) \ - [_f2][_f1] = otx2_tim_arm_tmo_tick_burst_##_name, - TIM_ARM_TMO_FASTPATH_MODES -#undef FP - }; - - otx2_tim_ops.arm_burst = - arm_burst[tim_ring->enable_stats][tim_ring->ena_dfb][prod_flag]; - otx2_tim_ops.arm_tmo_tick_burst = - arm_tmo_burst[tim_ring->enable_stats][tim_ring->ena_dfb]; - otx2_tim_ops.cancel_burst = otx2_tim_timer_cancel_burst; -} - -static void -otx2_tim_ring_info_get(const struct rte_event_timer_adapter *adptr, - struct rte_event_timer_adapter_info *adptr_info) -{ - struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv; - - adptr_info->max_tmo_ns = tim_ring->max_tout; - adptr_info->min_resolution_ns = tim_ring->ena_periodic ? 
- tim_ring->max_tout : tim_ring->tck_nsec; - rte_memcpy(&adptr_info->conf, &adptr->data->conf, - sizeof(struct rte_event_timer_adapter_conf)); -} - -static int -tim_chnk_pool_create(struct otx2_tim_ring *tim_ring, - struct rte_event_timer_adapter_conf *rcfg) -{ - unsigned int cache_sz = (tim_ring->nb_chunks / 1.5); - unsigned int mp_flags = 0; - char pool_name[25]; - int rc; - - cache_sz /= rte_lcore_count(); - /* Create chunk pool. */ - if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_SP_PUT) { - mp_flags = RTE_MEMPOOL_F_SP_PUT | RTE_MEMPOOL_F_SC_GET; - otx2_tim_dbg("Using single producer mode"); - tim_ring->prod_type_sp = true; - } - - snprintf(pool_name, sizeof(pool_name), "otx2_tim_chunk_pool%d", - tim_ring->ring_id); - - if (cache_sz > RTE_MEMPOOL_CACHE_MAX_SIZE) - cache_sz = RTE_MEMPOOL_CACHE_MAX_SIZE; - - cache_sz = cache_sz != 0 ? cache_sz : 2; - tim_ring->nb_chunks += (cache_sz * rte_lcore_count()); - if (!tim_ring->disable_npa) { - tim_ring->chunk_pool = rte_mempool_create_empty(pool_name, - tim_ring->nb_chunks, tim_ring->chunk_sz, - cache_sz, 0, rte_socket_id(), mp_flags); - - if (tim_ring->chunk_pool == NULL) { - otx2_err("Unable to create chunkpool."); - return -ENOMEM; - } - - rc = rte_mempool_set_ops_byname(tim_ring->chunk_pool, - rte_mbuf_platform_mempool_ops(), - NULL); - if (rc < 0) { - otx2_err("Unable to set chunkpool ops"); - goto free; - } - - rc = rte_mempool_populate_default(tim_ring->chunk_pool); - if (rc < 0) { - otx2_err("Unable to set populate chunkpool."); - goto free; - } - tim_ring->aura = npa_lf_aura_handle_to_aura( - tim_ring->chunk_pool->pool_id); - tim_ring->ena_dfb = tim_ring->ena_periodic ? 
1 : 0; - } else { - tim_ring->chunk_pool = rte_mempool_create(pool_name, - tim_ring->nb_chunks, tim_ring->chunk_sz, - cache_sz, 0, NULL, NULL, NULL, NULL, - rte_socket_id(), - mp_flags); - if (tim_ring->chunk_pool == NULL) { - otx2_err("Unable to create chunkpool."); - return -ENOMEM; - } - tim_ring->ena_dfb = 1; - } - - return 0; - -free: - rte_mempool_free(tim_ring->chunk_pool); - return rc; -} - -static void -tim_err_desc(int rc) -{ - switch (rc) { - case TIM_AF_NO_RINGS_LEFT: - otx2_err("Unable to allocat new TIM ring."); - break; - case TIM_AF_INVALID_NPA_PF_FUNC: - otx2_err("Invalid NPA pf func."); - break; - case TIM_AF_INVALID_SSO_PF_FUNC: - otx2_err("Invalid SSO pf func."); - break; - case TIM_AF_RING_STILL_RUNNING: - otx2_tim_dbg("Ring busy."); - break; - case TIM_AF_LF_INVALID: - otx2_err("Invalid Ring id."); - break; - case TIM_AF_CSIZE_NOT_ALIGNED: - otx2_err("Chunk size specified needs to be multiple of 16."); - break; - case TIM_AF_CSIZE_TOO_SMALL: - otx2_err("Chunk size too small."); - break; - case TIM_AF_CSIZE_TOO_BIG: - otx2_err("Chunk size too big."); - break; - case TIM_AF_INTERVAL_TOO_SMALL: - otx2_err("Bucket traversal interval too small."); - break; - case TIM_AF_INVALID_BIG_ENDIAN_VALUE: - otx2_err("Invalid Big endian value."); - break; - case TIM_AF_INVALID_CLOCK_SOURCE: - otx2_err("Invalid Clock source specified."); - break; - case TIM_AF_GPIO_CLK_SRC_NOT_ENABLED: - otx2_err("GPIO clock source not enabled."); - break; - case TIM_AF_INVALID_BSIZE: - otx2_err("Invalid bucket size."); - break; - case TIM_AF_INVALID_ENABLE_PERIODIC: - otx2_err("Invalid bucket size."); - break; - case TIM_AF_INVALID_ENABLE_DONTFREE: - otx2_err("Invalid Don't free value."); - break; - case TIM_AF_ENA_DONTFRE_NSET_PERIODIC: - otx2_err("Don't free bit not set when periodic is enabled."); - break; - case TIM_AF_RING_ALREADY_DISABLED: - otx2_err("Ring already stopped"); - break; - default: - otx2_err("Unknown Error."); - } -} - -static int 
-otx2_tim_ring_create(struct rte_event_timer_adapter *adptr) -{ - struct rte_event_timer_adapter_conf *rcfg = &adptr->data->conf; - struct otx2_tim_evdev *dev = tim_priv_get(); - struct otx2_tim_ring *tim_ring; - struct tim_config_req *cfg_req; - struct tim_ring_req *free_req; - struct tim_lf_alloc_req *req; - struct tim_lf_alloc_rsp *rsp; - uint8_t is_periodic; - int i, rc; - - if (dev == NULL) - return -ENODEV; - - if (adptr->data->id >= dev->nb_rings) - return -ENODEV; - - req = otx2_mbox_alloc_msg_tim_lf_alloc(dev->mbox); - req->npa_pf_func = otx2_npa_pf_func_get(); - req->sso_pf_func = otx2_sso_pf_func_get(); - req->ring = adptr->data->id; - - rc = otx2_mbox_process_msg(dev->mbox, (void **)&rsp); - if (rc < 0) { - tim_err_desc(rc); - return -ENODEV; - } - - if (NSEC2TICK(RTE_ALIGN_MUL_CEIL(rcfg->timer_tick_ns, 10), - rsp->tenns_clk) < OTX2_TIM_MIN_TMO_TKS) { - if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_ADJUST_RES) - rcfg->timer_tick_ns = TICK2NSEC(OTX2_TIM_MIN_TMO_TKS, - rsp->tenns_clk); - else { - rc = -ERANGE; - goto rng_mem_err; - } - } - - is_periodic = 0; - if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_PERIODIC) { - if (rcfg->max_tmo_ns && - rcfg->max_tmo_ns != rcfg->timer_tick_ns) { - rc = -ERANGE; - goto rng_mem_err; - } - - /* Use 2 buckets to avoid contention */ - rcfg->max_tmo_ns = rcfg->timer_tick_ns; - rcfg->timer_tick_ns /= 2; - is_periodic = 1; - } - - tim_ring = rte_zmalloc("otx2_tim_prv", sizeof(struct otx2_tim_ring), 0); - if (tim_ring == NULL) { - rc = -ENOMEM; - goto rng_mem_err; - } - - adptr->data->adapter_priv = tim_ring; - - tim_ring->tenns_clk_freq = rsp->tenns_clk; - tim_ring->clk_src = (int)rcfg->clk_src; - tim_ring->ring_id = adptr->data->id; - tim_ring->tck_nsec = RTE_ALIGN_MUL_CEIL(rcfg->timer_tick_ns, 10); - tim_ring->max_tout = is_periodic ? 
- rcfg->timer_tick_ns * 2 : rcfg->max_tmo_ns; - tim_ring->nb_bkts = (tim_ring->max_tout / tim_ring->tck_nsec); - tim_ring->chunk_sz = dev->chunk_sz; - tim_ring->nb_timers = rcfg->nb_timers; - tim_ring->disable_npa = dev->disable_npa; - tim_ring->ena_periodic = is_periodic; - tim_ring->enable_stats = dev->enable_stats; - - for (i = 0; i < dev->ring_ctl_cnt ; i++) { - struct otx2_tim_ctl *ring_ctl = &dev->ring_ctl_data[i]; - - if (ring_ctl->ring == tim_ring->ring_id) { - tim_ring->chunk_sz = ring_ctl->chunk_slots ? - ((uint32_t)(ring_ctl->chunk_slots + 1) * - OTX2_TIM_CHUNK_ALIGNMENT) : tim_ring->chunk_sz; - tim_ring->enable_stats = ring_ctl->enable_stats; - tim_ring->disable_npa = ring_ctl->disable_npa; - } - } - - if (tim_ring->disable_npa) { - tim_ring->nb_chunks = - tim_ring->nb_timers / - OTX2_TIM_NB_CHUNK_SLOTS(tim_ring->chunk_sz); - tim_ring->nb_chunks = tim_ring->nb_chunks * tim_ring->nb_bkts; - } else { - tim_ring->nb_chunks = tim_ring->nb_timers; - } - tim_ring->nb_chunk_slots = OTX2_TIM_NB_CHUNK_SLOTS(tim_ring->chunk_sz); - tim_ring->bkt = rte_zmalloc("otx2_tim_bucket", (tim_ring->nb_bkts) * - sizeof(struct otx2_tim_bkt), - RTE_CACHE_LINE_SIZE); - if (tim_ring->bkt == NULL) - goto bkt_mem_err; - - rc = tim_chnk_pool_create(tim_ring, rcfg); - if (rc < 0) - goto chnk_mem_err; - - cfg_req = otx2_mbox_alloc_msg_tim_config_ring(dev->mbox); - - cfg_req->ring = tim_ring->ring_id; - cfg_req->bigendian = false; - cfg_req->clocksource = tim_ring->clk_src; - cfg_req->enableperiodic = tim_ring->ena_periodic; - cfg_req->enabledontfreebuffer = tim_ring->ena_dfb; - cfg_req->bucketsize = tim_ring->nb_bkts; - cfg_req->chunksize = tim_ring->chunk_sz; - cfg_req->interval = NSEC2TICK(tim_ring->tck_nsec, - tim_ring->tenns_clk_freq); - - rc = otx2_mbox_process(dev->mbox); - if (rc < 0) { - tim_err_desc(rc); - goto chnk_mem_err; - } - - tim_ring->base = dev->bar2 + - (RVU_BLOCK_ADDR_TIM << 20 | tim_ring->ring_id << 12); - - rc = tim_register_irq(tim_ring->ring_id); - if (rc < 0) 
- goto chnk_mem_err; - - otx2_write64((uint64_t)tim_ring->bkt, - tim_ring->base + TIM_LF_RING_BASE); - otx2_write64(tim_ring->aura, tim_ring->base + TIM_LF_RING_AURA); - - /* Set fastpath ops. */ - tim_set_fp_ops(tim_ring); - - /* Update SSO xae count. */ - sso_updt_xae_cnt(sso_pmd_priv(dev->event_dev), (void *)tim_ring, - RTE_EVENT_TYPE_TIMER); - sso_xae_reconfigure(dev->event_dev); - - otx2_tim_dbg("Total memory used %"PRIu64"MB\n", - (uint64_t)(((tim_ring->nb_chunks * tim_ring->chunk_sz) - + (tim_ring->nb_bkts * sizeof(struct otx2_tim_bkt))) / - BIT_ULL(20))); - - return rc; - -chnk_mem_err: - rte_free(tim_ring->bkt); -bkt_mem_err: - rte_free(tim_ring); -rng_mem_err: - free_req = otx2_mbox_alloc_msg_tim_lf_free(dev->mbox); - free_req->ring = adptr->data->id; - otx2_mbox_process(dev->mbox); - return rc; -} - -static void -otx2_tim_calibrate_start_tsc(struct otx2_tim_ring *tim_ring) -{ -#define OTX2_TIM_CALIB_ITER 1E6 - uint32_t real_bkt, bucket; - int icount, ecount = 0; - uint64_t bkt_cyc; - - for (icount = 0; icount < OTX2_TIM_CALIB_ITER; icount++) { - real_bkt = otx2_read64(tim_ring->base + TIM_LF_RING_REL) >> 44; - bkt_cyc = tim_cntvct(); - bucket = (bkt_cyc - tim_ring->ring_start_cyc) / - tim_ring->tck_int; - bucket = bucket % (tim_ring->nb_bkts); - tim_ring->ring_start_cyc = bkt_cyc - (real_bkt * - tim_ring->tck_int); - if (bucket != real_bkt) - ecount++; - } - tim_ring->last_updt_cyc = bkt_cyc; - otx2_tim_dbg("Bucket mispredict %3.2f distance %d\n", - 100 - (((double)(icount - ecount) / (double)icount) * 100), - bucket - real_bkt); -} - -static int -otx2_tim_ring_start(const struct rte_event_timer_adapter *adptr) -{ - struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv; - struct otx2_tim_evdev *dev = tim_priv_get(); - struct tim_enable_rsp *rsp; - struct tim_ring_req *req; - int rc; - - if (dev == NULL) - return -ENODEV; - - req = otx2_mbox_alloc_msg_tim_enable_ring(dev->mbox); - req->ring = tim_ring->ring_id; - - rc = 
otx2_mbox_process_msg(dev->mbox, (void **)&rsp); - if (rc < 0) { - tim_err_desc(rc); - goto fail; - } - tim_ring->ring_start_cyc = rsp->timestarted; - tim_ring->tck_int = NSEC2TICK(tim_ring->tck_nsec, tim_cntfrq()); - tim_ring->tot_int = tim_ring->tck_int * tim_ring->nb_bkts; - tim_ring->fast_div = rte_reciprocal_value_u64(tim_ring->tck_int); - tim_ring->fast_bkt = rte_reciprocal_value_u64(tim_ring->nb_bkts); - - otx2_tim_calibrate_start_tsc(tim_ring); - -fail: - return rc; -} - -static int -otx2_tim_ring_stop(const struct rte_event_timer_adapter *adptr) -{ - struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv; - struct otx2_tim_evdev *dev = tim_priv_get(); - struct tim_ring_req *req; - int rc; - - if (dev == NULL) - return -ENODEV; - - req = otx2_mbox_alloc_msg_tim_disable_ring(dev->mbox); - req->ring = tim_ring->ring_id; - - rc = otx2_mbox_process(dev->mbox); - if (rc < 0) { - tim_err_desc(rc); - rc = -EBUSY; - } - - return rc; -} - -static int -otx2_tim_ring_free(struct rte_event_timer_adapter *adptr) -{ - struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv; - struct otx2_tim_evdev *dev = tim_priv_get(); - struct tim_ring_req *req; - int rc; - - if (dev == NULL) - return -ENODEV; - - tim_unregister_irq(tim_ring->ring_id); - - req = otx2_mbox_alloc_msg_tim_lf_free(dev->mbox); - req->ring = tim_ring->ring_id; - - rc = otx2_mbox_process(dev->mbox); - if (rc < 0) { - tim_err_desc(rc); - return -EBUSY; - } - - rte_free(tim_ring->bkt); - rte_mempool_free(tim_ring->chunk_pool); - rte_free(adptr->data->adapter_priv); - - return 0; -} - -static int -otx2_tim_stats_get(const struct rte_event_timer_adapter *adapter, - struct rte_event_timer_adapter_stats *stats) -{ - struct otx2_tim_ring *tim_ring = adapter->data->adapter_priv; - uint64_t bkt_cyc = tim_cntvct() - tim_ring->ring_start_cyc; - - stats->evtim_exp_count = __atomic_load_n(&tim_ring->arm_cnt, - __ATOMIC_RELAXED); - stats->ev_enq_count = stats->evtim_exp_count; - stats->adapter_tick_count = 
rte_reciprocal_divide_u64(bkt_cyc, - &tim_ring->fast_div); - return 0; -} - -static int -otx2_tim_stats_reset(const struct rte_event_timer_adapter *adapter) -{ - struct otx2_tim_ring *tim_ring = adapter->data->adapter_priv; - - __atomic_store_n(&tim_ring->arm_cnt, 0, __ATOMIC_RELAXED); - return 0; -} - -int -otx2_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags, - uint32_t *caps, const struct event_timer_adapter_ops **ops) -{ - struct otx2_tim_evdev *dev = tim_priv_get(); - - RTE_SET_USED(flags); - - if (dev == NULL) - return -ENODEV; - - otx2_tim_ops.init = otx2_tim_ring_create; - otx2_tim_ops.uninit = otx2_tim_ring_free; - otx2_tim_ops.start = otx2_tim_ring_start; - otx2_tim_ops.stop = otx2_tim_ring_stop; - otx2_tim_ops.get_info = otx2_tim_ring_info_get; - - if (dev->enable_stats) { - otx2_tim_ops.stats_get = otx2_tim_stats_get; - otx2_tim_ops.stats_reset = otx2_tim_stats_reset; - } - - /* Store evdev pointer for later use. */ - dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev; - *caps = RTE_EVENT_TIMER_ADAPTER_CAP_INTERNAL_PORT | - RTE_EVENT_TIMER_ADAPTER_CAP_PERIODIC; - *ops = &otx2_tim_ops; - - return 0; -} - -#define OTX2_TIM_DISABLE_NPA "tim_disable_npa" -#define OTX2_TIM_CHNK_SLOTS "tim_chnk_slots" -#define OTX2_TIM_STATS_ENA "tim_stats_ena" -#define OTX2_TIM_RINGS_LMT "tim_rings_lmt" -#define OTX2_TIM_RING_CTL "tim_ring_ctl" - -static void -tim_parse_ring_param(char *value, void *opaque) -{ - struct otx2_tim_evdev *dev = opaque; - struct otx2_tim_ctl ring_ctl = {0}; - char *tok = strtok(value, "-"); - struct otx2_tim_ctl *old_ptr; - uint16_t *val; - - val = (uint16_t *)&ring_ctl; - - if (!strlen(value)) - return; - - while (tok != NULL) { - *val = atoi(tok); - tok = strtok(NULL, "-"); - val++; - } - - if (val != (&ring_ctl.enable_stats + 1)) { - otx2_err( - "Invalid ring param expected [ring-chunk_sz-disable_npa-enable_stats]"); - return; - } - - dev->ring_ctl_cnt++; - old_ptr = dev->ring_ctl_data; - dev->ring_ctl_data = 
rte_realloc(dev->ring_ctl_data, - sizeof(struct otx2_tim_ctl) * - dev->ring_ctl_cnt, 0); - if (dev->ring_ctl_data == NULL) { - dev->ring_ctl_data = old_ptr; - dev->ring_ctl_cnt--; - return; - } - - dev->ring_ctl_data[dev->ring_ctl_cnt - 1] = ring_ctl; -} - -static void -tim_parse_ring_ctl_list(const char *value, void *opaque) -{ - char *s = strdup(value); - char *start = NULL; - char *end = NULL; - char *f = s; - - while (*s) { - if (*s == '[') - start = s; - else if (*s == ']') - end = s; - - if (start && start < end) { - *end = 0; - tim_parse_ring_param(start + 1, opaque); - start = end; - s = end; - } - s++; - } - - free(f); -} - -static int -tim_parse_kvargs_dict(const char *key, const char *value, void *opaque) -{ - RTE_SET_USED(key); - - /* Dict format [ring-chunk_sz-disable_npa-enable_stats] use '-' as ',' - * isn't allowed. 0 represents default. - */ - tim_parse_ring_ctl_list(value, opaque); - - return 0; -} - -static void -tim_parse_devargs(struct rte_devargs *devargs, struct otx2_tim_evdev *dev) -{ - struct rte_kvargs *kvlist; - - if (devargs == NULL) - return; - - kvlist = rte_kvargs_parse(devargs->args, NULL); - if (kvlist == NULL) - return; - - rte_kvargs_process(kvlist, OTX2_TIM_DISABLE_NPA, - &parse_kvargs_flag, &dev->disable_npa); - rte_kvargs_process(kvlist, OTX2_TIM_CHNK_SLOTS, - &parse_kvargs_value, &dev->chunk_slots); - rte_kvargs_process(kvlist, OTX2_TIM_STATS_ENA, &parse_kvargs_flag, - &dev->enable_stats); - rte_kvargs_process(kvlist, OTX2_TIM_RINGS_LMT, &parse_kvargs_value, - &dev->min_ring_cnt); - rte_kvargs_process(kvlist, OTX2_TIM_RING_CTL, - &tim_parse_kvargs_dict, &dev); - - rte_kvargs_free(kvlist); -} - -void -otx2_tim_init(struct rte_pci_device *pci_dev, struct otx2_dev *cmn_dev) -{ - struct rsrc_attach_req *atch_req; - struct rsrc_detach_req *dtch_req; - struct free_rsrcs_rsp *rsrc_cnt; - const struct rte_memzone *mz; - struct otx2_tim_evdev *dev; - int rc; - - if (rte_eal_process_type() != RTE_PROC_PRIMARY) - return; - - mz = 
rte_memzone_reserve(RTE_STR(OTX2_TIM_EVDEV_NAME), - sizeof(struct otx2_tim_evdev), - rte_socket_id(), 0); - if (mz == NULL) { - otx2_tim_dbg("Unable to allocate memory for TIM Event device"); - return; - } - - dev = mz->addr; - dev->pci_dev = pci_dev; - dev->mbox = cmn_dev->mbox; - dev->bar2 = cmn_dev->bar2; - - tim_parse_devargs(pci_dev->device.devargs, dev); - - otx2_mbox_alloc_msg_free_rsrc_cnt(dev->mbox); - rc = otx2_mbox_process_msg(dev->mbox, (void *)&rsrc_cnt); - if (rc < 0) { - otx2_err("Unable to get free rsrc count."); - goto mz_free; - } - - dev->nb_rings = dev->min_ring_cnt ? - RTE_MIN(dev->min_ring_cnt, rsrc_cnt->tim) : rsrc_cnt->tim; - - if (!dev->nb_rings) { - otx2_tim_dbg("No TIM Logical functions provisioned."); - goto mz_free; - } - - atch_req = otx2_mbox_alloc_msg_attach_resources(dev->mbox); - atch_req->modify = true; - atch_req->timlfs = dev->nb_rings; - - rc = otx2_mbox_process(dev->mbox); - if (rc < 0) { - otx2_err("Unable to attach TIM rings."); - goto mz_free; - } - - rc = tim_get_msix_offsets(); - if (rc < 0) { - otx2_err("Unable to get MSIX offsets for TIM."); - goto detach; - } - - if (dev->chunk_slots && - dev->chunk_slots <= OTX2_TIM_MAX_CHUNK_SLOTS && - dev->chunk_slots >= OTX2_TIM_MIN_CHUNK_SLOTS) { - dev->chunk_sz = (dev->chunk_slots + 1) * - OTX2_TIM_CHUNK_ALIGNMENT; - } else { - dev->chunk_sz = OTX2_TIM_RING_DEF_CHUNK_SZ; - } - - return; - -detach: - dtch_req = otx2_mbox_alloc_msg_detach_resources(dev->mbox); - dtch_req->partial = true; - dtch_req->timlfs = true; - - otx2_mbox_process(dev->mbox); -mz_free: - rte_memzone_free(mz); -} - -void -otx2_tim_fini(void) -{ - struct otx2_tim_evdev *dev = tim_priv_get(); - struct rsrc_detach_req *dtch_req; - - if (rte_eal_process_type() != RTE_PROC_PRIMARY) - return; - - dtch_req = otx2_mbox_alloc_msg_detach_resources(dev->mbox); - dtch_req->partial = true; - dtch_req->timlfs = true; - - otx2_mbox_process(dev->mbox); - rte_memzone_free(rte_memzone_lookup(RTE_STR(OTX2_TIM_EVDEV_NAME))); -} 
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.h b/drivers/event/octeontx2/otx2_tim_evdev.h deleted file mode 100644 index dac642e0e1..0000000000 --- a/drivers/event/octeontx2/otx2_tim_evdev.h +++ /dev/null @@ -1,256 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2019 Marvell International Ltd. - */ - -#ifndef __OTX2_TIM_EVDEV_H__ -#define __OTX2_TIM_EVDEV_H__ - -#include -#include -#include - -#include "otx2_dev.h" - -#define OTX2_TIM_EVDEV_NAME otx2_tim_eventdev - -#define otx2_tim_func_trace otx2_tim_dbg - -#define TIM_LF_RING_AURA (0x0) -#define TIM_LF_RING_BASE (0x130) -#define TIM_LF_NRSPERR_INT (0x200) -#define TIM_LF_NRSPERR_INT_W1S (0x208) -#define TIM_LF_NRSPERR_INT_ENA_W1S (0x210) -#define TIM_LF_NRSPERR_INT_ENA_W1C (0x218) -#define TIM_LF_RAS_INT (0x300) -#define TIM_LF_RAS_INT_W1S (0x308) -#define TIM_LF_RAS_INT_ENA_W1S (0x310) -#define TIM_LF_RAS_INT_ENA_W1C (0x318) -#define TIM_LF_RING_REL (0x400) - -#define TIM_BUCKET_W1_S_CHUNK_REMAINDER (48) -#define TIM_BUCKET_W1_M_CHUNK_REMAINDER ((1ULL << (64 - \ - TIM_BUCKET_W1_S_CHUNK_REMAINDER)) - 1) -#define TIM_BUCKET_W1_S_LOCK (40) -#define TIM_BUCKET_W1_M_LOCK ((1ULL << \ - (TIM_BUCKET_W1_S_CHUNK_REMAINDER - \ - TIM_BUCKET_W1_S_LOCK)) - 1) -#define TIM_BUCKET_W1_S_RSVD (35) -#define TIM_BUCKET_W1_S_BSK (34) -#define TIM_BUCKET_W1_M_BSK ((1ULL << \ - (TIM_BUCKET_W1_S_RSVD - \ - TIM_BUCKET_W1_S_BSK)) - 1) -#define TIM_BUCKET_W1_S_HBT (33) -#define TIM_BUCKET_W1_M_HBT ((1ULL << \ - (TIM_BUCKET_W1_S_BSK - \ - TIM_BUCKET_W1_S_HBT)) - 1) -#define TIM_BUCKET_W1_S_SBT (32) -#define TIM_BUCKET_W1_M_SBT ((1ULL << \ - (TIM_BUCKET_W1_S_HBT - \ - TIM_BUCKET_W1_S_SBT)) - 1) -#define TIM_BUCKET_W1_S_NUM_ENTRIES (0) -#define TIM_BUCKET_W1_M_NUM_ENTRIES ((1ULL << \ - (TIM_BUCKET_W1_S_SBT - \ - TIM_BUCKET_W1_S_NUM_ENTRIES)) - 1) - -#define TIM_BUCKET_SEMA (TIM_BUCKET_CHUNK_REMAIN) - -#define TIM_BUCKET_CHUNK_REMAIN \ - (TIM_BUCKET_W1_M_CHUNK_REMAINDER << TIM_BUCKET_W1_S_CHUNK_REMAINDER) - -#define 
TIM_BUCKET_LOCK \ - (TIM_BUCKET_W1_M_LOCK << TIM_BUCKET_W1_S_LOCK) - -#define TIM_BUCKET_SEMA_WLOCK \ - (TIM_BUCKET_CHUNK_REMAIN | (1ull << TIM_BUCKET_W1_S_LOCK)) - -#define OTX2_MAX_TIM_RINGS (256) -#define OTX2_TIM_MAX_BUCKETS (0xFFFFF) -#define OTX2_TIM_RING_DEF_CHUNK_SZ (4096) -#define OTX2_TIM_CHUNK_ALIGNMENT (16) -#define OTX2_TIM_MAX_BURST (RTE_CACHE_LINE_SIZE / \ - OTX2_TIM_CHUNK_ALIGNMENT) -#define OTX2_TIM_NB_CHUNK_SLOTS(sz) (((sz) / OTX2_TIM_CHUNK_ALIGNMENT) - 1) -#define OTX2_TIM_MIN_CHUNK_SLOTS (0x8) -#define OTX2_TIM_MAX_CHUNK_SLOTS (0x1FFE) -#define OTX2_TIM_MIN_TMO_TKS (256) - -#define OTX2_TIM_SP 0x1 -#define OTX2_TIM_MP 0x2 -#define OTX2_TIM_ENA_FB 0x10 -#define OTX2_TIM_ENA_DFB 0x20 -#define OTX2_TIM_ENA_STATS 0x40 - -enum otx2_tim_clk_src { - OTX2_TIM_CLK_SRC_10NS = RTE_EVENT_TIMER_ADAPTER_CPU_CLK, - OTX2_TIM_CLK_SRC_GPIO = RTE_EVENT_TIMER_ADAPTER_EXT_CLK0, - OTX2_TIM_CLK_SRC_GTI = RTE_EVENT_TIMER_ADAPTER_EXT_CLK1, - OTX2_TIM_CLK_SRC_PTP = RTE_EVENT_TIMER_ADAPTER_EXT_CLK2, -}; - -struct otx2_tim_bkt { - uint64_t first_chunk; - union { - uint64_t w1; - struct { - uint32_t nb_entry; - uint8_t sbt:1; - uint8_t hbt:1; - uint8_t bsk:1; - uint8_t rsvd:5; - uint8_t lock; - int16_t chunk_remainder; - }; - }; - uint64_t current_chunk; - uint64_t pad; -} __rte_packed __rte_aligned(32); - -struct otx2_tim_ent { - uint64_t w0; - uint64_t wqe; -} __rte_packed; - -struct otx2_tim_ctl { - uint16_t ring; - uint16_t chunk_slots; - uint16_t disable_npa; - uint16_t enable_stats; -}; - -struct otx2_tim_evdev { - struct rte_pci_device *pci_dev; - struct rte_eventdev *event_dev; - struct otx2_mbox *mbox; - uint16_t nb_rings; - uint32_t chunk_sz; - uintptr_t bar2; - /* Dev args */ - uint8_t disable_npa; - uint16_t chunk_slots; - uint16_t min_ring_cnt; - uint8_t enable_stats; - uint16_t ring_ctl_cnt; - struct otx2_tim_ctl *ring_ctl_data; - /* HW const */ - /* MSIX offsets */ - uint16_t tim_msixoff[OTX2_MAX_TIM_RINGS]; -}; - -struct otx2_tim_ring { - uintptr_t base; - 
uint16_t nb_chunk_slots; - uint32_t nb_bkts; - uint64_t last_updt_cyc; - uint64_t ring_start_cyc; - uint64_t tck_int; - uint64_t tot_int; - struct otx2_tim_bkt *bkt; - struct rte_mempool *chunk_pool; - struct rte_reciprocal_u64 fast_div; - struct rte_reciprocal_u64 fast_bkt; - uint64_t arm_cnt; - uint8_t prod_type_sp; - uint8_t enable_stats; - uint8_t disable_npa; - uint8_t ena_dfb; - uint8_t ena_periodic; - uint16_t ring_id; - uint32_t aura; - uint64_t nb_timers; - uint64_t tck_nsec; - uint64_t max_tout; - uint64_t nb_chunks; - uint64_t chunk_sz; - uint64_t tenns_clk_freq; - enum otx2_tim_clk_src clk_src; -} __rte_cache_aligned; - -static inline struct otx2_tim_evdev * -tim_priv_get(void) -{ - const struct rte_memzone *mz; - - mz = rte_memzone_lookup(RTE_STR(OTX2_TIM_EVDEV_NAME)); - if (mz == NULL) - return NULL; - - return mz->addr; -} - -#ifdef RTE_ARCH_ARM64 -static inline uint64_t -tim_cntvct(void) -{ - return __rte_arm64_cntvct(); -} - -static inline uint64_t -tim_cntfrq(void) -{ - return __rte_arm64_cntfrq(); -} -#else -static inline uint64_t -tim_cntvct(void) -{ - return 0; -} - -static inline uint64_t -tim_cntfrq(void) -{ - return 0; -} -#endif - -#define TIM_ARM_FASTPATH_MODES \ - FP(sp, 0, 0, 0, OTX2_TIM_ENA_DFB | OTX2_TIM_SP) \ - FP(mp, 0, 0, 1, OTX2_TIM_ENA_DFB | OTX2_TIM_MP) \ - FP(fb_sp, 0, 1, 0, OTX2_TIM_ENA_FB | OTX2_TIM_SP) \ - FP(fb_mp, 0, 1, 1, OTX2_TIM_ENA_FB | OTX2_TIM_MP) \ - FP(stats_mod_sp, 1, 0, 0, \ - OTX2_TIM_ENA_STATS | OTX2_TIM_ENA_DFB | OTX2_TIM_SP) \ - FP(stats_mod_mp, 1, 0, 1, \ - OTX2_TIM_ENA_STATS | OTX2_TIM_ENA_DFB | OTX2_TIM_MP) \ - FP(stats_mod_fb_sp, 1, 1, 0, \ - OTX2_TIM_ENA_STATS | OTX2_TIM_ENA_FB | OTX2_TIM_SP) \ - FP(stats_mod_fb_mp, 1, 1, 1, \ - OTX2_TIM_ENA_STATS | OTX2_TIM_ENA_FB | OTX2_TIM_MP) - -#define TIM_ARM_TMO_FASTPATH_MODES \ - FP(dfb, 0, 0, OTX2_TIM_ENA_DFB) \ - FP(fb, 0, 1, OTX2_TIM_ENA_FB) \ - FP(stats_dfb, 1, 0, OTX2_TIM_ENA_STATS | OTX2_TIM_ENA_DFB) \ - FP(stats_fb, 1, 1, OTX2_TIM_ENA_STATS | 
OTX2_TIM_ENA_FB) - -#define FP(_name, _f3, _f2, _f1, flags) \ - uint16_t otx2_tim_arm_burst_##_name( \ - const struct rte_event_timer_adapter *adptr, \ - struct rte_event_timer **tim, const uint16_t nb_timers); -TIM_ARM_FASTPATH_MODES -#undef FP - -#define FP(_name, _f2, _f1, flags) \ - uint16_t otx2_tim_arm_tmo_tick_burst_##_name( \ - const struct rte_event_timer_adapter *adptr, \ - struct rte_event_timer **tim, const uint64_t timeout_tick, \ - const uint16_t nb_timers); -TIM_ARM_TMO_FASTPATH_MODES -#undef FP - -uint16_t otx2_tim_timer_cancel_burst( - const struct rte_event_timer_adapter *adptr, - struct rte_event_timer **tim, const uint16_t nb_timers); - -int otx2_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags, - uint32_t *caps, - const struct event_timer_adapter_ops **ops); - -void otx2_tim_init(struct rte_pci_device *pci_dev, struct otx2_dev *cmn_dev); -void otx2_tim_fini(void); - -/* TIM IRQ */ -int tim_register_irq(uint16_t ring_id); -void tim_unregister_irq(uint16_t ring_id); - -#endif /* __OTX2_TIM_EVDEV_H__ */ diff --git a/drivers/event/octeontx2/otx2_tim_worker.c b/drivers/event/octeontx2/otx2_tim_worker.c deleted file mode 100644 index 9ee07958fd..0000000000 --- a/drivers/event/octeontx2/otx2_tim_worker.c +++ /dev/null @@ -1,192 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2019 Marvell International Ltd. - */ - -#include "otx2_tim_evdev.h" -#include "otx2_tim_worker.h" - -static inline int -tim_arm_checks(const struct otx2_tim_ring * const tim_ring, - struct rte_event_timer * const tim) -{ - if (unlikely(tim->state)) { - tim->state = RTE_EVENT_TIMER_ERROR; - rte_errno = EALREADY; - goto fail; - } - - if (unlikely(!tim->timeout_ticks || - tim->timeout_ticks >= tim_ring->nb_bkts)) { - tim->state = tim->timeout_ticks ? 
RTE_EVENT_TIMER_ERROR_TOOLATE - : RTE_EVENT_TIMER_ERROR_TOOEARLY; - rte_errno = EINVAL; - goto fail; - } - - return 0; - -fail: - return -EINVAL; -} - -static inline void -tim_format_event(const struct rte_event_timer * const tim, - struct otx2_tim_ent * const entry) -{ - entry->w0 = (tim->ev.event & 0xFFC000000000) >> 6 | - (tim->ev.event & 0xFFFFFFFFF); - entry->wqe = tim->ev.u64; -} - -static inline void -tim_sync_start_cyc(struct otx2_tim_ring *tim_ring) -{ - uint64_t cur_cyc = tim_cntvct(); - uint32_t real_bkt; - - if (cur_cyc - tim_ring->last_updt_cyc > tim_ring->tot_int) { - real_bkt = otx2_read64(tim_ring->base + TIM_LF_RING_REL) >> 44; - cur_cyc = tim_cntvct(); - - tim_ring->ring_start_cyc = cur_cyc - - (real_bkt * tim_ring->tck_int); - tim_ring->last_updt_cyc = cur_cyc; - } - -} - -static __rte_always_inline uint16_t -tim_timer_arm_burst(const struct rte_event_timer_adapter *adptr, - struct rte_event_timer **tim, - const uint16_t nb_timers, - const uint8_t flags) -{ - struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv; - struct otx2_tim_ent entry; - uint16_t index; - int ret; - - tim_sync_start_cyc(tim_ring); - for (index = 0; index < nb_timers; index++) { - if (tim_arm_checks(tim_ring, tim[index])) - break; - - tim_format_event(tim[index], &entry); - if (flags & OTX2_TIM_SP) - ret = tim_add_entry_sp(tim_ring, - tim[index]->timeout_ticks, - tim[index], &entry, flags); - if (flags & OTX2_TIM_MP) - ret = tim_add_entry_mp(tim_ring, - tim[index]->timeout_ticks, - tim[index], &entry, flags); - - if (unlikely(ret)) { - rte_errno = -ret; - break; - } - } - - if (flags & OTX2_TIM_ENA_STATS) - __atomic_fetch_add(&tim_ring->arm_cnt, index, __ATOMIC_RELAXED); - - return index; -} - -static __rte_always_inline uint16_t -tim_timer_arm_tmo_brst(const struct rte_event_timer_adapter *adptr, - struct rte_event_timer **tim, - const uint64_t timeout_tick, - const uint16_t nb_timers, const uint8_t flags) -{ - struct otx2_tim_ent entry[OTX2_TIM_MAX_BURST] 
__rte_cache_aligned; - struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv; - uint16_t set_timers = 0; - uint16_t arr_idx = 0; - uint16_t idx; - int ret; - - if (unlikely(!timeout_tick || timeout_tick >= tim_ring->nb_bkts)) { - const enum rte_event_timer_state state = timeout_tick ? - RTE_EVENT_TIMER_ERROR_TOOLATE : - RTE_EVENT_TIMER_ERROR_TOOEARLY; - for (idx = 0; idx < nb_timers; idx++) - tim[idx]->state = state; - - rte_errno = EINVAL; - return 0; - } - - tim_sync_start_cyc(tim_ring); - while (arr_idx < nb_timers) { - for (idx = 0; idx < OTX2_TIM_MAX_BURST && (arr_idx < nb_timers); - idx++, arr_idx++) { - tim_format_event(tim[arr_idx], &entry[idx]); - } - ret = tim_add_entry_brst(tim_ring, timeout_tick, - &tim[set_timers], entry, idx, flags); - set_timers += ret; - if (ret != idx) - break; - } - if (flags & OTX2_TIM_ENA_STATS) - __atomic_fetch_add(&tim_ring->arm_cnt, set_timers, - __ATOMIC_RELAXED); - - return set_timers; -} - -#define FP(_name, _f3, _f2, _f1, _flags) \ -uint16_t __rte_noinline \ -otx2_tim_arm_burst_ ## _name(const struct rte_event_timer_adapter *adptr, \ - struct rte_event_timer **tim, \ - const uint16_t nb_timers) \ -{ \ - return tim_timer_arm_burst(adptr, tim, nb_timers, _flags); \ -} -TIM_ARM_FASTPATH_MODES -#undef FP - -#define FP(_name, _f2, _f1, _flags) \ -uint16_t __rte_noinline \ -otx2_tim_arm_tmo_tick_burst_ ## _name( \ - const struct rte_event_timer_adapter *adptr, \ - struct rte_event_timer **tim, \ - const uint64_t timeout_tick, \ - const uint16_t nb_timers) \ -{ \ - return tim_timer_arm_tmo_brst(adptr, tim, timeout_tick, \ - nb_timers, _flags); \ -} -TIM_ARM_TMO_FASTPATH_MODES -#undef FP - -uint16_t -otx2_tim_timer_cancel_burst(const struct rte_event_timer_adapter *adptr, - struct rte_event_timer **tim, - const uint16_t nb_timers) -{ - uint16_t index; - int ret; - - RTE_SET_USED(adptr); - rte_atomic_thread_fence(__ATOMIC_ACQUIRE); - for (index = 0; index < nb_timers; index++) { - if (tim[index]->state == 
RTE_EVENT_TIMER_CANCELED) { - rte_errno = EALREADY; - break; - } - - if (tim[index]->state != RTE_EVENT_TIMER_ARMED) { - rte_errno = EINVAL; - break; - } - ret = tim_rm_entry(tim[index]); - if (ret) { - rte_errno = -ret; - break; - } - } - - return index; -} diff --git a/drivers/event/octeontx2/otx2_tim_worker.h b/drivers/event/octeontx2/otx2_tim_worker.h deleted file mode 100644 index efe88a8692..0000000000 --- a/drivers/event/octeontx2/otx2_tim_worker.h +++ /dev/null @@ -1,598 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2019 Marvell International Ltd. - */ - -#ifndef __OTX2_TIM_WORKER_H__ -#define __OTX2_TIM_WORKER_H__ - -#include "otx2_tim_evdev.h" - -static inline uint8_t -tim_bkt_fetch_lock(uint64_t w1) -{ - return (w1 >> TIM_BUCKET_W1_S_LOCK) & - TIM_BUCKET_W1_M_LOCK; -} - -static inline int16_t -tim_bkt_fetch_rem(uint64_t w1) -{ - return (w1 >> TIM_BUCKET_W1_S_CHUNK_REMAINDER) & - TIM_BUCKET_W1_M_CHUNK_REMAINDER; -} - -static inline int16_t -tim_bkt_get_rem(struct otx2_tim_bkt *bktp) -{ - return __atomic_load_n(&bktp->chunk_remainder, __ATOMIC_ACQUIRE); -} - -static inline void -tim_bkt_set_rem(struct otx2_tim_bkt *bktp, uint16_t v) -{ - __atomic_store_n(&bktp->chunk_remainder, v, __ATOMIC_RELAXED); -} - -static inline void -tim_bkt_sub_rem(struct otx2_tim_bkt *bktp, uint16_t v) -{ - __atomic_fetch_sub(&bktp->chunk_remainder, v, __ATOMIC_RELAXED); -} - -static inline uint8_t -tim_bkt_get_hbt(uint64_t w1) -{ - return (w1 >> TIM_BUCKET_W1_S_HBT) & TIM_BUCKET_W1_M_HBT; -} - -static inline uint8_t -tim_bkt_get_bsk(uint64_t w1) -{ - return (w1 >> TIM_BUCKET_W1_S_BSK) & TIM_BUCKET_W1_M_BSK; -} - -static inline uint64_t -tim_bkt_clr_bsk(struct otx2_tim_bkt *bktp) -{ - /* Clear everything except lock. 
*/ - const uint64_t v = TIM_BUCKET_W1_M_LOCK << TIM_BUCKET_W1_S_LOCK; - - return __atomic_fetch_and(&bktp->w1, v, __ATOMIC_ACQ_REL); -} - -static inline uint64_t -tim_bkt_fetch_sema_lock(struct otx2_tim_bkt *bktp) -{ - return __atomic_fetch_add(&bktp->w1, TIM_BUCKET_SEMA_WLOCK, - __ATOMIC_ACQUIRE); -} - -static inline uint64_t -tim_bkt_fetch_sema(struct otx2_tim_bkt *bktp) -{ - return __atomic_fetch_add(&bktp->w1, TIM_BUCKET_SEMA, __ATOMIC_RELAXED); -} - -static inline uint64_t -tim_bkt_inc_lock(struct otx2_tim_bkt *bktp) -{ - const uint64_t v = 1ull << TIM_BUCKET_W1_S_LOCK; - - return __atomic_fetch_add(&bktp->w1, v, __ATOMIC_ACQUIRE); -} - -static inline void -tim_bkt_dec_lock(struct otx2_tim_bkt *bktp) -{ - __atomic_fetch_sub(&bktp->lock, 1, __ATOMIC_RELEASE); -} - -static inline void -tim_bkt_dec_lock_relaxed(struct otx2_tim_bkt *bktp) -{ - __atomic_fetch_sub(&bktp->lock, 1, __ATOMIC_RELAXED); -} - -static inline uint32_t -tim_bkt_get_nent(uint64_t w1) -{ - return (w1 >> TIM_BUCKET_W1_S_NUM_ENTRIES) & - TIM_BUCKET_W1_M_NUM_ENTRIES; -} - -static inline void -tim_bkt_inc_nent(struct otx2_tim_bkt *bktp) -{ - __atomic_add_fetch(&bktp->nb_entry, 1, __ATOMIC_RELAXED); -} - -static inline void -tim_bkt_add_nent(struct otx2_tim_bkt *bktp, uint32_t v) -{ - __atomic_add_fetch(&bktp->nb_entry, v, __ATOMIC_RELAXED); -} - -static inline uint64_t -tim_bkt_clr_nent(struct otx2_tim_bkt *bktp) -{ - const uint64_t v = ~(TIM_BUCKET_W1_M_NUM_ENTRIES << - TIM_BUCKET_W1_S_NUM_ENTRIES); - - return __atomic_and_fetch(&bktp->w1, v, __ATOMIC_ACQ_REL); -} - -static inline uint64_t -tim_bkt_fast_mod(uint64_t n, uint64_t d, struct rte_reciprocal_u64 R) -{ - return (n - (d * rte_reciprocal_divide_u64(n, &R))); -} - -static __rte_always_inline void -tim_get_target_bucket(struct otx2_tim_ring *const tim_ring, - const uint32_t rel_bkt, struct otx2_tim_bkt **bkt, - struct otx2_tim_bkt **mirr_bkt) -{ - const uint64_t bkt_cyc = tim_cntvct() - tim_ring->ring_start_cyc; - uint64_t bucket = - 
rte_reciprocal_divide_u64(bkt_cyc, &tim_ring->fast_div) + - rel_bkt; - uint64_t mirr_bucket = 0; - - bucket = - tim_bkt_fast_mod(bucket, tim_ring->nb_bkts, tim_ring->fast_bkt); - mirr_bucket = tim_bkt_fast_mod(bucket + (tim_ring->nb_bkts >> 1), - tim_ring->nb_bkts, tim_ring->fast_bkt); - *bkt = &tim_ring->bkt[bucket]; - *mirr_bkt = &tim_ring->bkt[mirr_bucket]; -} - -static struct otx2_tim_ent * -tim_clr_bkt(struct otx2_tim_ring * const tim_ring, - struct otx2_tim_bkt * const bkt) -{ -#define TIM_MAX_OUTSTANDING_OBJ 64 - void *pend_chunks[TIM_MAX_OUTSTANDING_OBJ]; - struct otx2_tim_ent *chunk; - struct otx2_tim_ent *pnext; - uint8_t objs = 0; - - - chunk = ((struct otx2_tim_ent *)(uintptr_t)bkt->first_chunk); - chunk = (struct otx2_tim_ent *)(uintptr_t)(chunk + - tim_ring->nb_chunk_slots)->w0; - while (chunk) { - pnext = (struct otx2_tim_ent *)(uintptr_t) - ((chunk + tim_ring->nb_chunk_slots)->w0); - if (objs == TIM_MAX_OUTSTANDING_OBJ) { - rte_mempool_put_bulk(tim_ring->chunk_pool, pend_chunks, - objs); - objs = 0; - } - pend_chunks[objs++] = chunk; - chunk = pnext; - } - - if (objs) - rte_mempool_put_bulk(tim_ring->chunk_pool, pend_chunks, - objs); - - return (struct otx2_tim_ent *)(uintptr_t)bkt->first_chunk; -} - -static struct otx2_tim_ent * -tim_refill_chunk(struct otx2_tim_bkt * const bkt, - struct otx2_tim_bkt * const mirr_bkt, - struct otx2_tim_ring * const tim_ring) -{ - struct otx2_tim_ent *chunk; - - if (bkt->nb_entry || !bkt->first_chunk) { - if (unlikely(rte_mempool_get(tim_ring->chunk_pool, - (void **)&chunk))) - return NULL; - if (bkt->nb_entry) { - *(uint64_t *)(((struct otx2_tim_ent *) - mirr_bkt->current_chunk) + - tim_ring->nb_chunk_slots) = - (uintptr_t)chunk; - } else { - bkt->first_chunk = (uintptr_t)chunk; - } - } else { - chunk = tim_clr_bkt(tim_ring, bkt); - bkt->first_chunk = (uintptr_t)chunk; - } - *(uint64_t *)(chunk + tim_ring->nb_chunk_slots) = 0; - - return chunk; -} - -static struct otx2_tim_ent * -tim_insert_chunk(struct 
otx2_tim_bkt * const bkt, - struct otx2_tim_bkt * const mirr_bkt, - struct otx2_tim_ring * const tim_ring) -{ - struct otx2_tim_ent *chunk; - - if (unlikely(rte_mempool_get(tim_ring->chunk_pool, (void **)&chunk))) - return NULL; - - *(uint64_t *)(chunk + tim_ring->nb_chunk_slots) = 0; - if (bkt->nb_entry) { - *(uint64_t *)(((struct otx2_tim_ent *)(uintptr_t) - mirr_bkt->current_chunk) + - tim_ring->nb_chunk_slots) = (uintptr_t)chunk; - } else { - bkt->first_chunk = (uintptr_t)chunk; - } - return chunk; -} - -static __rte_always_inline int -tim_add_entry_sp(struct otx2_tim_ring * const tim_ring, - const uint32_t rel_bkt, - struct rte_event_timer * const tim, - const struct otx2_tim_ent * const pent, - const uint8_t flags) -{ - struct otx2_tim_bkt *mirr_bkt; - struct otx2_tim_ent *chunk; - struct otx2_tim_bkt *bkt; - uint64_t lock_sema; - int16_t rem; - -__retry: - tim_get_target_bucket(tim_ring, rel_bkt, &bkt, &mirr_bkt); - - /* Get Bucket sema*/ - lock_sema = tim_bkt_fetch_sema_lock(bkt); - - /* Bucket related checks. */ - if (unlikely(tim_bkt_get_hbt(lock_sema))) { - if (tim_bkt_get_nent(lock_sema) != 0) { - uint64_t hbt_state; -#ifdef RTE_ARCH_ARM64 - asm volatile(" ldxr %[hbt], [%[w1]] \n" - " tbz %[hbt], 33, dne%= \n" - " sevl \n" - "rty%=: wfe \n" - " ldxr %[hbt], [%[w1]] \n" - " tbnz %[hbt], 33, rty%= \n" - "dne%=: \n" - : [hbt] "=&r"(hbt_state) - : [w1] "r"((&bkt->w1)) - : "memory"); -#else - do { - hbt_state = __atomic_load_n(&bkt->w1, - __ATOMIC_RELAXED); - } while (hbt_state & BIT_ULL(33)); -#endif - - if (!(hbt_state & BIT_ULL(34))) { - tim_bkt_dec_lock(bkt); - goto __retry; - } - } - } - /* Insert the work. 
*/ - rem = tim_bkt_fetch_rem(lock_sema); - - if (!rem) { - if (flags & OTX2_TIM_ENA_FB) - chunk = tim_refill_chunk(bkt, mirr_bkt, tim_ring); - if (flags & OTX2_TIM_ENA_DFB) - chunk = tim_insert_chunk(bkt, mirr_bkt, tim_ring); - - if (unlikely(chunk == NULL)) { - bkt->chunk_remainder = 0; - tim->impl_opaque[0] = 0; - tim->impl_opaque[1] = 0; - tim->state = RTE_EVENT_TIMER_ERROR; - tim_bkt_dec_lock(bkt); - return -ENOMEM; - } - mirr_bkt->current_chunk = (uintptr_t)chunk; - bkt->chunk_remainder = tim_ring->nb_chunk_slots - 1; - } else { - chunk = (struct otx2_tim_ent *)mirr_bkt->current_chunk; - chunk += tim_ring->nb_chunk_slots - rem; - } - - /* Copy work entry. */ - *chunk = *pent; - - tim->impl_opaque[0] = (uintptr_t)chunk; - tim->impl_opaque[1] = (uintptr_t)bkt; - __atomic_store_n(&tim->state, RTE_EVENT_TIMER_ARMED, __ATOMIC_RELEASE); - tim_bkt_inc_nent(bkt); - tim_bkt_dec_lock_relaxed(bkt); - - return 0; -} - -static __rte_always_inline int -tim_add_entry_mp(struct otx2_tim_ring * const tim_ring, - const uint32_t rel_bkt, - struct rte_event_timer * const tim, - const struct otx2_tim_ent * const pent, - const uint8_t flags) -{ - struct otx2_tim_bkt *mirr_bkt; - struct otx2_tim_ent *chunk; - struct otx2_tim_bkt *bkt; - uint64_t lock_sema; - int16_t rem; - -__retry: - tim_get_target_bucket(tim_ring, rel_bkt, &bkt, &mirr_bkt); - /* Get Bucket sema*/ - lock_sema = tim_bkt_fetch_sema_lock(bkt); - - /* Bucket related checks. 
*/ - if (unlikely(tim_bkt_get_hbt(lock_sema))) { - if (tim_bkt_get_nent(lock_sema) != 0) { - uint64_t hbt_state; -#ifdef RTE_ARCH_ARM64 - asm volatile(" ldxr %[hbt], [%[w1]] \n" - " tbz %[hbt], 33, dne%= \n" - " sevl \n" - "rty%=: wfe \n" - " ldxr %[hbt], [%[w1]] \n" - " tbnz %[hbt], 33, rty%= \n" - "dne%=: \n" - : [hbt] "=&r"(hbt_state) - : [w1] "r"((&bkt->w1)) - : "memory"); -#else - do { - hbt_state = __atomic_load_n(&bkt->w1, - __ATOMIC_RELAXED); - } while (hbt_state & BIT_ULL(33)); -#endif - - if (!(hbt_state & BIT_ULL(34))) { - tim_bkt_dec_lock(bkt); - goto __retry; - } - } - } - - rem = tim_bkt_fetch_rem(lock_sema); - if (rem < 0) { - tim_bkt_dec_lock(bkt); -#ifdef RTE_ARCH_ARM64 - uint64_t w1; - asm volatile(" ldxr %[w1], [%[crem]] \n" - " tbz %[w1], 63, dne%= \n" - " sevl \n" - "rty%=: wfe \n" - " ldxr %[w1], [%[crem]] \n" - " tbnz %[w1], 63, rty%= \n" - "dne%=: \n" - : [w1] "=&r"(w1) - : [crem] "r"(&bkt->w1) - : "memory"); -#else - while (__atomic_load_n((int64_t *)&bkt->w1, __ATOMIC_RELAXED) < - 0) - ; -#endif - goto __retry; - } else if (!rem) { - /* Only one thread can be here*/ - if (flags & OTX2_TIM_ENA_FB) - chunk = tim_refill_chunk(bkt, mirr_bkt, tim_ring); - if (flags & OTX2_TIM_ENA_DFB) - chunk = tim_insert_chunk(bkt, mirr_bkt, tim_ring); - - if (unlikely(chunk == NULL)) { - tim->impl_opaque[0] = 0; - tim->impl_opaque[1] = 0; - tim->state = RTE_EVENT_TIMER_ERROR; - tim_bkt_set_rem(bkt, 0); - tim_bkt_dec_lock(bkt); - return -ENOMEM; - } - *chunk = *pent; - if (tim_bkt_fetch_lock(lock_sema)) { - do { - lock_sema = __atomic_load_n(&bkt->w1, - __ATOMIC_RELAXED); - } while (tim_bkt_fetch_lock(lock_sema) - 1); - rte_atomic_thread_fence(__ATOMIC_ACQUIRE); - } - mirr_bkt->current_chunk = (uintptr_t)chunk; - __atomic_store_n(&bkt->chunk_remainder, - tim_ring->nb_chunk_slots - 1, __ATOMIC_RELEASE); - } else { - chunk = (struct otx2_tim_ent *)mirr_bkt->current_chunk; - chunk += tim_ring->nb_chunk_slots - rem; - *chunk = *pent; - } - - tim->impl_opaque[0] = 
(uintptr_t)chunk; - tim->impl_opaque[1] = (uintptr_t)bkt; - __atomic_store_n(&tim->state, RTE_EVENT_TIMER_ARMED, __ATOMIC_RELEASE); - tim_bkt_inc_nent(bkt); - tim_bkt_dec_lock_relaxed(bkt); - - return 0; -} - -static inline uint16_t -tim_cpy_wrk(uint16_t index, uint16_t cpy_lmt, - struct otx2_tim_ent *chunk, - struct rte_event_timer ** const tim, - const struct otx2_tim_ent * const ents, - const struct otx2_tim_bkt * const bkt) -{ - for (; index < cpy_lmt; index++) { - *chunk = *(ents + index); - tim[index]->impl_opaque[0] = (uintptr_t)chunk++; - tim[index]->impl_opaque[1] = (uintptr_t)bkt; - tim[index]->state = RTE_EVENT_TIMER_ARMED; - } - - return index; -} - -/* Burst mode functions */ -static inline int -tim_add_entry_brst(struct otx2_tim_ring * const tim_ring, - const uint16_t rel_bkt, - struct rte_event_timer ** const tim, - const struct otx2_tim_ent *ents, - const uint16_t nb_timers, const uint8_t flags) -{ - struct otx2_tim_ent *chunk = NULL; - struct otx2_tim_bkt *mirr_bkt; - struct otx2_tim_bkt *bkt; - uint16_t chunk_remainder; - uint16_t index = 0; - uint64_t lock_sema; - int16_t rem, crem; - uint8_t lock_cnt; - -__retry: - tim_get_target_bucket(tim_ring, rel_bkt, &bkt, &mirr_bkt); - - /* Only one thread beyond this. */ - lock_sema = tim_bkt_inc_lock(bkt); - lock_cnt = (uint8_t) - ((lock_sema >> TIM_BUCKET_W1_S_LOCK) & TIM_BUCKET_W1_M_LOCK); - - if (lock_cnt) { - tim_bkt_dec_lock(bkt); -#ifdef RTE_ARCH_ARM64 - asm volatile(" ldxrb %w[lock_cnt], [%[lock]] \n" - " tst %w[lock_cnt], 255 \n" - " beq dne%= \n" - " sevl \n" - "rty%=: wfe \n" - " ldxrb %w[lock_cnt], [%[lock]] \n" - " tst %w[lock_cnt], 255 \n" - " bne rty%= \n" - "dne%=: \n" - : [lock_cnt] "=&r"(lock_cnt) - : [lock] "r"(&bkt->lock) - : "memory"); -#else - while (__atomic_load_n(&bkt->lock, __ATOMIC_RELAXED)) - ; -#endif - goto __retry; - } - - /* Bucket related checks. 
*/ - if (unlikely(tim_bkt_get_hbt(lock_sema))) { - if (tim_bkt_get_nent(lock_sema) != 0) { - uint64_t hbt_state; -#ifdef RTE_ARCH_ARM64 - asm volatile(" ldxr %[hbt], [%[w1]] \n" - " tbz %[hbt], 33, dne%= \n" - " sevl \n" - "rty%=: wfe \n" - " ldxr %[hbt], [%[w1]] \n" - " tbnz %[hbt], 33, rty%= \n" - "dne%=: \n" - : [hbt] "=&r"(hbt_state) - : [w1] "r"((&bkt->w1)) - : "memory"); -#else - do { - hbt_state = __atomic_load_n(&bkt->w1, - __ATOMIC_RELAXED); - } while (hbt_state & BIT_ULL(33)); -#endif - - if (!(hbt_state & BIT_ULL(34))) { - tim_bkt_dec_lock(bkt); - goto __retry; - } - } - } - - chunk_remainder = tim_bkt_fetch_rem(lock_sema); - rem = chunk_remainder - nb_timers; - if (rem < 0) { - crem = tim_ring->nb_chunk_slots - chunk_remainder; - if (chunk_remainder && crem) { - chunk = ((struct otx2_tim_ent *) - mirr_bkt->current_chunk) + crem; - - index = tim_cpy_wrk(index, chunk_remainder, chunk, tim, - ents, bkt); - tim_bkt_sub_rem(bkt, chunk_remainder); - tim_bkt_add_nent(bkt, chunk_remainder); - } - - if (flags & OTX2_TIM_ENA_FB) - chunk = tim_refill_chunk(bkt, mirr_bkt, tim_ring); - if (flags & OTX2_TIM_ENA_DFB) - chunk = tim_insert_chunk(bkt, mirr_bkt, tim_ring); - - if (unlikely(chunk == NULL)) { - tim_bkt_dec_lock(bkt); - rte_errno = ENOMEM; - tim[index]->state = RTE_EVENT_TIMER_ERROR; - return crem; - } - *(uint64_t *)(chunk + tim_ring->nb_chunk_slots) = 0; - mirr_bkt->current_chunk = (uintptr_t)chunk; - tim_cpy_wrk(index, nb_timers, chunk, tim, ents, bkt); - - rem = nb_timers - chunk_remainder; - tim_bkt_set_rem(bkt, tim_ring->nb_chunk_slots - rem); - tim_bkt_add_nent(bkt, rem); - } else { - chunk = (struct otx2_tim_ent *)mirr_bkt->current_chunk; - chunk += (tim_ring->nb_chunk_slots - chunk_remainder); - - tim_cpy_wrk(index, nb_timers, chunk, tim, ents, bkt); - tim_bkt_sub_rem(bkt, nb_timers); - tim_bkt_add_nent(bkt, nb_timers); - } - - tim_bkt_dec_lock(bkt); - - return nb_timers; -} - -static int -tim_rm_entry(struct rte_event_timer *tim) -{ - struct 
otx2_tim_ent *entry; - struct otx2_tim_bkt *bkt; - uint64_t lock_sema; - - if (tim->impl_opaque[1] == 0 || tim->impl_opaque[0] == 0) - return -ENOENT; - - entry = (struct otx2_tim_ent *)(uintptr_t)tim->impl_opaque[0]; - if (entry->wqe != tim->ev.u64) { - tim->impl_opaque[0] = 0; - tim->impl_opaque[1] = 0; - return -ENOENT; - } - - bkt = (struct otx2_tim_bkt *)(uintptr_t)tim->impl_opaque[1]; - lock_sema = tim_bkt_inc_lock(bkt); - if (tim_bkt_get_hbt(lock_sema) || !tim_bkt_get_nent(lock_sema)) { - tim->impl_opaque[0] = 0; - tim->impl_opaque[1] = 0; - tim_bkt_dec_lock(bkt); - return -ENOENT; - } - - entry->w0 = 0; - entry->wqe = 0; - tim->state = RTE_EVENT_TIMER_CANCELED; - tim->impl_opaque[0] = 0; - tim->impl_opaque[1] = 0; - tim_bkt_dec_lock(bkt); - - return 0; -} - -#endif /* __OTX2_TIM_WORKER_H__ */ diff --git a/drivers/event/octeontx2/otx2_worker.c b/drivers/event/octeontx2/otx2_worker.c deleted file mode 100644 index 95139d27a3..0000000000 --- a/drivers/event/octeontx2/otx2_worker.c +++ /dev/null @@ -1,372 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2019 Marvell International Ltd. 
- */ - -#include "otx2_worker.h" - -static __rte_noinline uint8_t -otx2_ssogws_new_event(struct otx2_ssogws *ws, const struct rte_event *ev) -{ - const uint32_t tag = (uint32_t)ev->event; - const uint8_t new_tt = ev->sched_type; - const uint64_t event_ptr = ev->u64; - const uint16_t grp = ev->queue_id; - - if (ws->xaq_lmt <= *ws->fc_mem) - return 0; - - otx2_ssogws_add_work(ws, event_ptr, tag, new_tt, grp); - - return 1; -} - -static __rte_always_inline void -otx2_ssogws_fwd_swtag(struct otx2_ssogws *ws, const struct rte_event *ev) -{ - const uint32_t tag = (uint32_t)ev->event; - const uint8_t new_tt = ev->sched_type; - const uint8_t cur_tt = OTX2_SSOW_TT_FROM_TAG(otx2_read64(ws->tag_op)); - - /* 96XX model - * cur_tt/new_tt SSO_SYNC_ORDERED SSO_SYNC_ATOMIC SSO_SYNC_UNTAGGED - * - * SSO_SYNC_ORDERED norm norm untag - * SSO_SYNC_ATOMIC norm norm untag - * SSO_SYNC_UNTAGGED norm norm NOOP - */ - - if (new_tt == SSO_SYNC_UNTAGGED) { - if (cur_tt != SSO_SYNC_UNTAGGED) - otx2_ssogws_swtag_untag(ws); - } else { - otx2_ssogws_swtag_norm(ws, tag, new_tt); - } - - ws->swtag_req = 1; -} - -static __rte_always_inline void -otx2_ssogws_fwd_group(struct otx2_ssogws *ws, const struct rte_event *ev, - const uint16_t grp) -{ - const uint32_t tag = (uint32_t)ev->event; - const uint8_t new_tt = ev->sched_type; - - otx2_write64(ev->u64, OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op) + - SSOW_LF_GWS_OP_UPD_WQP_GRP1); - rte_smp_wmb(); - otx2_ssogws_swtag_desched(ws, tag, new_tt, grp); -} - -static __rte_always_inline void -otx2_ssogws_forward_event(struct otx2_ssogws *ws, const struct rte_event *ev) -{ - const uint8_t grp = ev->queue_id; - - /* Group hasn't changed, Use SWTAG to forward the event */ - if (OTX2_SSOW_GRP_FROM_TAG(otx2_read64(ws->tag_op)) == grp) - otx2_ssogws_fwd_swtag(ws, ev); - else - /* - * Group has been changed for group based work pipelining, - * Use deschedule/add_work operation to transfer the event to - * new group/core - */ - otx2_ssogws_fwd_group(ws, ev, grp); -} - 
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ -uint16_t __rte_hot \ -otx2_ssogws_deq_ ##name(void *port, struct rte_event *ev, \ - uint64_t timeout_ticks) \ -{ \ - struct otx2_ssogws *ws = port; \ - \ - RTE_SET_USED(timeout_ticks); \ - \ - if (ws->swtag_req) { \ - ws->swtag_req = 0; \ - otx2_ssogws_swtag_wait(ws); \ - return 1; \ - } \ - \ - return otx2_ssogws_get_work(ws, ev, flags, ws->lookup_mem); \ -} \ - \ -uint16_t __rte_hot \ -otx2_ssogws_deq_burst_ ##name(void *port, struct rte_event ev[], \ - uint16_t nb_events, \ - uint64_t timeout_ticks) \ -{ \ - RTE_SET_USED(nb_events); \ - \ - return otx2_ssogws_deq_ ##name(port, ev, timeout_ticks); \ -} \ - \ -uint16_t __rte_hot \ -otx2_ssogws_deq_timeout_ ##name(void *port, struct rte_event *ev, \ - uint64_t timeout_ticks) \ -{ \ - struct otx2_ssogws *ws = port; \ - uint16_t ret = 1; \ - uint64_t iter; \ - \ - if (ws->swtag_req) { \ - ws->swtag_req = 0; \ - otx2_ssogws_swtag_wait(ws); \ - return ret; \ - } \ - \ - ret = otx2_ssogws_get_work(ws, ev, flags, ws->lookup_mem); \ - for (iter = 1; iter < timeout_ticks && (ret == 0); iter++) \ - ret = otx2_ssogws_get_work(ws, ev, flags, \ - ws->lookup_mem); \ - \ - return ret; \ -} \ - \ -uint16_t __rte_hot \ -otx2_ssogws_deq_timeout_burst_ ##name(void *port, struct rte_event ev[],\ - uint16_t nb_events, \ - uint64_t timeout_ticks) \ -{ \ - RTE_SET_USED(nb_events); \ - \ - return otx2_ssogws_deq_timeout_ ##name(port, ev, timeout_ticks);\ -} \ - \ -uint16_t __rte_hot \ -otx2_ssogws_deq_seg_ ##name(void *port, struct rte_event *ev, \ - uint64_t timeout_ticks) \ -{ \ - struct otx2_ssogws *ws = port; \ - \ - RTE_SET_USED(timeout_ticks); \ - \ - if (ws->swtag_req) { \ - ws->swtag_req = 0; \ - otx2_ssogws_swtag_wait(ws); \ - return 1; \ - } \ - \ - return otx2_ssogws_get_work(ws, ev, flags | NIX_RX_MULTI_SEG_F, \ - ws->lookup_mem); \ -} \ - \ -uint16_t __rte_hot \ -otx2_ssogws_deq_seg_burst_ ##name(void *port, struct rte_event ev[], \ - uint16_t nb_events, \ - uint64_t 
timeout_ticks) \ -{ \ - RTE_SET_USED(nb_events); \ - \ - return otx2_ssogws_deq_seg_ ##name(port, ev, timeout_ticks); \ -} \ - \ -uint16_t __rte_hot \ -otx2_ssogws_deq_seg_timeout_ ##name(void *port, struct rte_event *ev, \ - uint64_t timeout_ticks) \ -{ \ - struct otx2_ssogws *ws = port; \ - uint16_t ret = 1; \ - uint64_t iter; \ - \ - if (ws->swtag_req) { \ - ws->swtag_req = 0; \ - otx2_ssogws_swtag_wait(ws); \ - return ret; \ - } \ - \ - ret = otx2_ssogws_get_work(ws, ev, flags | NIX_RX_MULTI_SEG_F, \ - ws->lookup_mem); \ - for (iter = 1; iter < timeout_ticks && (ret == 0); iter++) \ - ret = otx2_ssogws_get_work(ws, ev, \ - flags | NIX_RX_MULTI_SEG_F, \ - ws->lookup_mem); \ - \ - return ret; \ -} \ - \ -uint16_t __rte_hot \ -otx2_ssogws_deq_seg_timeout_burst_ ##name(void *port, \ - struct rte_event ev[], \ - uint16_t nb_events, \ - uint64_t timeout_ticks) \ -{ \ - RTE_SET_USED(nb_events); \ - \ - return otx2_ssogws_deq_seg_timeout_ ##name(port, ev, \ - timeout_ticks); \ -} - -SSO_RX_ADPTR_ENQ_FASTPATH_FUNC -#undef R - -uint16_t __rte_hot -otx2_ssogws_enq(void *port, const struct rte_event *ev) -{ - struct otx2_ssogws *ws = port; - - switch (ev->op) { - case RTE_EVENT_OP_NEW: - rte_smp_mb(); - return otx2_ssogws_new_event(ws, ev); - case RTE_EVENT_OP_FORWARD: - otx2_ssogws_forward_event(ws, ev); - break; - case RTE_EVENT_OP_RELEASE: - otx2_ssogws_swtag_flush(ws->tag_op, ws->swtag_flush_op); - break; - default: - return 0; - } - - return 1; -} - -uint16_t __rte_hot -otx2_ssogws_enq_burst(void *port, const struct rte_event ev[], - uint16_t nb_events) -{ - RTE_SET_USED(nb_events); - return otx2_ssogws_enq(port, ev); -} - -uint16_t __rte_hot -otx2_ssogws_enq_new_burst(void *port, const struct rte_event ev[], - uint16_t nb_events) -{ - struct otx2_ssogws *ws = port; - uint16_t i, rc = 1; - - rte_smp_mb(); - if (ws->xaq_lmt <= *ws->fc_mem) - return 0; - - for (i = 0; i < nb_events && rc; i++) - rc = otx2_ssogws_new_event(ws, &ev[i]); - - return nb_events; -} - 
-uint16_t __rte_hot -otx2_ssogws_enq_fwd_burst(void *port, const struct rte_event ev[], - uint16_t nb_events) -{ - struct otx2_ssogws *ws = port; - - RTE_SET_USED(nb_events); - otx2_ssogws_forward_event(ws, ev); - - return 1; -} - -#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \ -uint16_t __rte_hot \ -otx2_ssogws_tx_adptr_enq_ ## name(void *port, struct rte_event ev[], \ - uint16_t nb_events) \ -{ \ - struct otx2_ssogws *ws = port; \ - uint64_t cmd[sz]; \ - \ - RTE_SET_USED(nb_events); \ - return otx2_ssogws_event_tx(ws->base, &ev[0], cmd, \ - (const uint64_t \ - (*)[RTE_MAX_QUEUES_PER_PORT]) \ - &ws->tx_adptr_data, \ - flags); \ -} -SSO_TX_ADPTR_ENQ_FASTPATH_FUNC -#undef T - -#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \ -uint16_t __rte_hot \ -otx2_ssogws_tx_adptr_enq_seg_ ## name(void *port, struct rte_event ev[],\ - uint16_t nb_events) \ -{ \ - uint64_t cmd[(sz) + NIX_TX_MSEG_SG_DWORDS - 2]; \ - struct otx2_ssogws *ws = port; \ - \ - RTE_SET_USED(nb_events); \ - return otx2_ssogws_event_tx(ws->base, &ev[0], cmd, \ - (const uint64_t \ - (*)[RTE_MAX_QUEUES_PER_PORT]) \ - &ws->tx_adptr_data, \ - (flags) | NIX_TX_MULTI_SEG_F); \ -} -SSO_TX_ADPTR_ENQ_FASTPATH_FUNC -#undef T - -void -ssogws_flush_events(struct otx2_ssogws *ws, uint8_t queue_id, uintptr_t base, - otx2_handle_event_t fn, void *arg) -{ - uint64_t cq_ds_cnt = 1; - uint64_t aq_cnt = 1; - uint64_t ds_cnt = 1; - struct rte_event ev; - uint64_t enable; - uint64_t val; - - enable = otx2_read64(base + SSO_LF_GGRP_QCTL); - if (!enable) - return; - - val = queue_id; /* GGRP ID */ - val |= BIT_ULL(18); /* Grouped */ - val |= BIT_ULL(16); /* WAIT */ - - aq_cnt = otx2_read64(base + SSO_LF_GGRP_AQ_CNT); - ds_cnt = otx2_read64(base + SSO_LF_GGRP_MISC_CNT); - cq_ds_cnt = otx2_read64(base + SSO_LF_GGRP_INT_CNT); - cq_ds_cnt &= 0x3FFF3FFF0000; - - while (aq_cnt || cq_ds_cnt || ds_cnt) { - otx2_write64(val, ws->getwrk_op); - otx2_ssogws_get_work_empty(ws, &ev, 0); - if (fn != NULL && ev.u64 != 0) - 
fn(arg, ev); - if (ev.sched_type != SSO_TT_EMPTY) - otx2_ssogws_swtag_flush(ws->tag_op, ws->swtag_flush_op); - rte_mb(); - aq_cnt = otx2_read64(base + SSO_LF_GGRP_AQ_CNT); - ds_cnt = otx2_read64(base + SSO_LF_GGRP_MISC_CNT); - cq_ds_cnt = otx2_read64(base + SSO_LF_GGRP_INT_CNT); - /* Extract cq and ds count */ - cq_ds_cnt &= 0x3FFF3FFF0000; - } - - otx2_write64(0, OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op) + - SSOW_LF_GWS_OP_GWC_INVAL); - rte_mb(); -} - -void -ssogws_reset(struct otx2_ssogws *ws) -{ - uintptr_t base = OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op); - uint64_t pend_state; - uint8_t pend_tt; - uint64_t tag; - - /* Wait till getwork/swtp/waitw/desched completes. */ - do { - pend_state = otx2_read64(base + SSOW_LF_GWS_PENDSTATE); - rte_mb(); - } while (pend_state & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(58))); - - tag = otx2_read64(base + SSOW_LF_GWS_TAG); - pend_tt = (tag >> 32) & 0x3; - if (pend_tt != SSO_TT_EMPTY) { /* Work was pending */ - if (pend_tt == SSO_SYNC_ATOMIC || pend_tt == SSO_SYNC_ORDERED) - otx2_ssogws_swtag_untag(ws); - otx2_ssogws_desched(ws); - } - rte_mb(); - - /* Wait for desched to complete. */ - do { - pend_state = otx2_read64(base + SSOW_LF_GWS_PENDSTATE); - rte_mb(); - } while (pend_state & BIT_ULL(58)); -} diff --git a/drivers/event/octeontx2/otx2_worker.h b/drivers/event/octeontx2/otx2_worker.h deleted file mode 100644 index aa766c6602..0000000000 --- a/drivers/event/octeontx2/otx2_worker.h +++ /dev/null @@ -1,339 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2019 Marvell International Ltd. 
- */ - -#ifndef __OTX2_WORKER_H__ -#define __OTX2_WORKER_H__ - -#include -#include - -#include -#include "otx2_evdev.h" -#include "otx2_evdev_crypto_adptr_rx.h" -#include "otx2_ethdev_sec_tx.h" - -/* SSO Operations */ - -static __rte_always_inline uint16_t -otx2_ssogws_get_work(struct otx2_ssogws *ws, struct rte_event *ev, - const uint32_t flags, const void * const lookup_mem) -{ - union otx2_sso_event event; - uint64_t tstamp_ptr; - uint64_t get_work1; - uint64_t mbuf; - - otx2_write64(BIT_ULL(16) | /* wait for work. */ - 1, /* Use Mask set 0. */ - ws->getwrk_op); - - if (flags & NIX_RX_OFFLOAD_PTYPE_F) - rte_prefetch_non_temporal(lookup_mem); -#ifdef RTE_ARCH_ARM64 - asm volatile( - " ldr %[tag], [%[tag_loc]] \n" - " ldr %[wqp], [%[wqp_loc]] \n" - " tbz %[tag], 63, done%= \n" - " sevl \n" - "rty%=: wfe \n" - " ldr %[tag], [%[tag_loc]] \n" - " ldr %[wqp], [%[wqp_loc]] \n" - " tbnz %[tag], 63, rty%= \n" - "done%=: dmb ld \n" - " prfm pldl1keep, [%[wqp], #8] \n" - " sub %[mbuf], %[wqp], #0x80 \n" - " prfm pldl1keep, [%[mbuf]] \n" - : [tag] "=&r" (event.get_work0), - [wqp] "=&r" (get_work1), - [mbuf] "=&r" (mbuf) - : [tag_loc] "r" (ws->tag_op), - [wqp_loc] "r" (ws->wqp_op) - ); -#else - event.get_work0 = otx2_read64(ws->tag_op); - while ((BIT_ULL(63)) & event.get_work0) - event.get_work0 = otx2_read64(ws->tag_op); - - get_work1 = otx2_read64(ws->wqp_op); - rte_prefetch0((const void *)get_work1); - mbuf = (uint64_t)((char *)get_work1 - sizeof(struct rte_mbuf)); - rte_prefetch0((const void *)mbuf); -#endif - - event.get_work0 = (event.get_work0 & (0x3ull << 32)) << 6 | - (event.get_work0 & (0x3FFull << 36)) << 4 | - (event.get_work0 & 0xffffffff); - - if (event.sched_type != SSO_TT_EMPTY) { - if ((flags & NIX_RX_OFFLOAD_SECURITY_F) && - (event.event_type == RTE_EVENT_TYPE_CRYPTODEV)) { - get_work1 = otx2_handle_crypto_event(get_work1); - } else if (event.event_type == RTE_EVENT_TYPE_ETHDEV) { - otx2_wqe_to_mbuf(get_work1, mbuf, event.sub_event_type, - (uint32_t) 
event.get_work0, flags, - lookup_mem); - /* Extracting tstamp, if PTP enabled*/ - tstamp_ptr = *(uint64_t *)(((struct nix_wqe_hdr_s *) - get_work1) + - OTX2_SSO_WQE_SG_PTR); - otx2_nix_mbuf_to_tstamp((struct rte_mbuf *)mbuf, - ws->tstamp, flags, - (uint64_t *)tstamp_ptr); - get_work1 = mbuf; - } - } - - ev->event = event.get_work0; - ev->u64 = get_work1; - - return !!get_work1; -} - -/* Used in cleaning up workslot. */ -static __rte_always_inline uint16_t -otx2_ssogws_get_work_empty(struct otx2_ssogws *ws, struct rte_event *ev, - const uint32_t flags) -{ - union otx2_sso_event event; - uint64_t tstamp_ptr; - uint64_t get_work1; - uint64_t mbuf; - -#ifdef RTE_ARCH_ARM64 - asm volatile( - " ldr %[tag], [%[tag_loc]] \n" - " ldr %[wqp], [%[wqp_loc]] \n" - " tbz %[tag], 63, done%= \n" - " sevl \n" - "rty%=: wfe \n" - " ldr %[tag], [%[tag_loc]] \n" - " ldr %[wqp], [%[wqp_loc]] \n" - " tbnz %[tag], 63, rty%= \n" - "done%=: dmb ld \n" - " prfm pldl1keep, [%[wqp], #8] \n" - " sub %[mbuf], %[wqp], #0x80 \n" - " prfm pldl1keep, [%[mbuf]] \n" - : [tag] "=&r" (event.get_work0), - [wqp] "=&r" (get_work1), - [mbuf] "=&r" (mbuf) - : [tag_loc] "r" (ws->tag_op), - [wqp_loc] "r" (ws->wqp_op) - ); -#else - event.get_work0 = otx2_read64(ws->tag_op); - while ((BIT_ULL(63)) & event.get_work0) - event.get_work0 = otx2_read64(ws->tag_op); - - get_work1 = otx2_read64(ws->wqp_op); - rte_prefetch_non_temporal((const void *)get_work1); - mbuf = (uint64_t)((char *)get_work1 - sizeof(struct rte_mbuf)); - rte_prefetch_non_temporal((const void *)mbuf); -#endif - - event.get_work0 = (event.get_work0 & (0x3ull << 32)) << 6 | - (event.get_work0 & (0x3FFull << 36)) << 4 | - (event.get_work0 & 0xffffffff); - - if (event.sched_type != SSO_TT_EMPTY && - event.event_type == RTE_EVENT_TYPE_ETHDEV) { - otx2_wqe_to_mbuf(get_work1, mbuf, event.sub_event_type, - (uint32_t) event.get_work0, flags, NULL); - /* Extracting tstamp, if PTP enabled*/ - tstamp_ptr = *(uint64_t *)(((struct nix_wqe_hdr_s *)get_work1) - 
+ OTX2_SSO_WQE_SG_PTR); - otx2_nix_mbuf_to_tstamp((struct rte_mbuf *)mbuf, ws->tstamp, - flags, (uint64_t *)tstamp_ptr); - get_work1 = mbuf; - } - - ev->event = event.get_work0; - ev->u64 = get_work1; - - return !!get_work1; -} - -static __rte_always_inline void -otx2_ssogws_add_work(struct otx2_ssogws *ws, const uint64_t event_ptr, - const uint32_t tag, const uint8_t new_tt, - const uint16_t grp) -{ - uint64_t add_work0; - - add_work0 = tag | ((uint64_t)(new_tt) << 32); - otx2_store_pair(add_work0, event_ptr, ws->grps_base[grp]); -} - -static __rte_always_inline void -otx2_ssogws_swtag_desched(struct otx2_ssogws *ws, uint32_t tag, uint8_t new_tt, - uint16_t grp) -{ - uint64_t val; - - val = tag | ((uint64_t)(new_tt & 0x3) << 32) | ((uint64_t)grp << 34); - otx2_write64(val, ws->swtag_desched_op); -} - -static __rte_always_inline void -otx2_ssogws_swtag_norm(struct otx2_ssogws *ws, uint32_t tag, uint8_t new_tt) -{ - uint64_t val; - - val = tag | ((uint64_t)(new_tt & 0x3) << 32); - otx2_write64(val, ws->swtag_norm_op); -} - -static __rte_always_inline void -otx2_ssogws_swtag_untag(struct otx2_ssogws *ws) -{ - otx2_write64(0, OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op) + - SSOW_LF_GWS_OP_SWTAG_UNTAG); -} - -static __rte_always_inline void -otx2_ssogws_swtag_flush(uint64_t tag_op, uint64_t flush_op) -{ - if (OTX2_SSOW_TT_FROM_TAG(otx2_read64(tag_op)) == SSO_TT_EMPTY) - return; - otx2_write64(0, flush_op); -} - -static __rte_always_inline void -otx2_ssogws_desched(struct otx2_ssogws *ws) -{ - otx2_write64(0, OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op) + - SSOW_LF_GWS_OP_DESCHED); -} - -static __rte_always_inline void -otx2_ssogws_swtag_wait(struct otx2_ssogws *ws) -{ -#ifdef RTE_ARCH_ARM64 - uint64_t swtp; - - asm volatile(" ldr %[swtb], [%[swtp_loc]] \n" - " tbz %[swtb], 62, done%= \n" - " sevl \n" - "rty%=: wfe \n" - " ldr %[swtb], [%[swtp_loc]] \n" - " tbnz %[swtb], 62, rty%= \n" - "done%=: \n" - : [swtb] "=&r" (swtp) - : [swtp_loc] "r" (ws->tag_op)); -#else - /* Wait for the 
SWTAG/SWTAG_FULL operation */ - while (otx2_read64(ws->tag_op) & BIT_ULL(62)) - ; -#endif -} - -static __rte_always_inline void -otx2_ssogws_head_wait(uint64_t tag_op) -{ -#ifdef RTE_ARCH_ARM64 - uint64_t tag; - - asm volatile ( - " ldr %[tag], [%[tag_op]] \n" - " tbnz %[tag], 35, done%= \n" - " sevl \n" - "rty%=: wfe \n" - " ldr %[tag], [%[tag_op]] \n" - " tbz %[tag], 35, rty%= \n" - "done%=: \n" - : [tag] "=&r" (tag) - : [tag_op] "r" (tag_op) - ); -#else - /* Wait for the HEAD to be set */ - while (!(otx2_read64(tag_op) & BIT_ULL(35))) - ; -#endif -} - -static __rte_always_inline const struct otx2_eth_txq * -otx2_ssogws_xtract_meta(struct rte_mbuf *m, - const uint64_t txq_data[][RTE_MAX_QUEUES_PER_PORT]) -{ - return (const struct otx2_eth_txq *)txq_data[m->port][ - rte_event_eth_tx_adapter_txq_get(m)]; -} - -static __rte_always_inline void -otx2_ssogws_prepare_pkt(const struct otx2_eth_txq *txq, struct rte_mbuf *m, - uint64_t *cmd, const uint32_t flags) -{ - otx2_lmt_mov(cmd, txq->cmd, otx2_nix_tx_ext_subs(flags)); - otx2_nix_xmit_prepare(m, cmd, flags, txq->lso_tun_fmt); -} - -static __rte_always_inline uint16_t -otx2_ssogws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd, - const uint64_t txq_data[][RTE_MAX_QUEUES_PER_PORT], - const uint32_t flags) -{ - struct rte_mbuf *m = ev->mbuf; - const struct otx2_eth_txq *txq; - uint16_t ref_cnt = m->refcnt; - - if ((flags & NIX_TX_OFFLOAD_SECURITY_F) && - (m->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD)) { - txq = otx2_ssogws_xtract_meta(m, txq_data); - return otx2_sec_event_tx(base, ev, m, txq, flags); - } - - /* Perform header writes before barrier for TSO */ - otx2_nix_xmit_prepare_tso(m, flags); - /* Lets commit any changes in the packet here in case when - * fast free is set as no further changes will be made to mbuf. - * In case of fast free is not set, both otx2_nix_prepare_mseg() - * and otx2_nix_xmit_prepare() has a barrier after refcnt update. 
- */ - if (!(flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)) - rte_io_wmb(); - txq = otx2_ssogws_xtract_meta(m, txq_data); - otx2_ssogws_prepare_pkt(txq, m, cmd, flags); - - if (flags & NIX_TX_MULTI_SEG_F) { - const uint16_t segdw = otx2_nix_prepare_mseg(m, cmd, flags); - otx2_nix_xmit_prepare_tstamp(cmd, &txq->cmd[0], - m->ol_flags, segdw, flags); - if (!ev->sched_type) { - otx2_nix_xmit_mseg_prep_lmt(cmd, txq->lmt_addr, segdw); - otx2_ssogws_head_wait(base + SSOW_LF_GWS_TAG); - if (otx2_nix_xmit_submit_lmt(txq->io_addr) == 0) - otx2_nix_xmit_mseg_one(cmd, txq->lmt_addr, - txq->io_addr, segdw); - } else { - otx2_nix_xmit_mseg_one(cmd, txq->lmt_addr, - txq->io_addr, segdw); - } - } else { - /* Passing no of segdw as 4: HDR + EXT + SG + SMEM */ - otx2_nix_xmit_prepare_tstamp(cmd, &txq->cmd[0], - m->ol_flags, 4, flags); - - if (!ev->sched_type) { - otx2_nix_xmit_prep_lmt(cmd, txq->lmt_addr, flags); - otx2_ssogws_head_wait(base + SSOW_LF_GWS_TAG); - if (otx2_nix_xmit_submit_lmt(txq->io_addr) == 0) - otx2_nix_xmit_one(cmd, txq->lmt_addr, - txq->io_addr, flags); - } else { - otx2_nix_xmit_one(cmd, txq->lmt_addr, txq->io_addr, - flags); - } - } - - if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) { - if (ref_cnt > 1) - return 1; - } - - otx2_ssogws_swtag_flush(base + SSOW_LF_GWS_TAG, - base + SSOW_LF_GWS_OP_SWTAG_FLUSH); - - return 1; -} - -#endif diff --git a/drivers/event/octeontx2/otx2_worker_dual.c b/drivers/event/octeontx2/otx2_worker_dual.c deleted file mode 100644 index 81af4ca904..0000000000 --- a/drivers/event/octeontx2/otx2_worker_dual.c +++ /dev/null @@ -1,345 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2019 Marvell International Ltd. 
- */ - -#include "otx2_worker_dual.h" -#include "otx2_worker.h" - -static __rte_noinline uint8_t -otx2_ssogws_dual_new_event(struct otx2_ssogws_dual *ws, - const struct rte_event *ev) -{ - const uint32_t tag = (uint32_t)ev->event; - const uint8_t new_tt = ev->sched_type; - const uint64_t event_ptr = ev->u64; - const uint16_t grp = ev->queue_id; - - if (ws->xaq_lmt <= *ws->fc_mem) - return 0; - - otx2_ssogws_dual_add_work(ws, event_ptr, tag, new_tt, grp); - - return 1; -} - -static __rte_always_inline void -otx2_ssogws_dual_fwd_swtag(struct otx2_ssogws_state *ws, - const struct rte_event *ev) -{ - const uint8_t cur_tt = OTX2_SSOW_TT_FROM_TAG(otx2_read64(ws->tag_op)); - const uint32_t tag = (uint32_t)ev->event; - const uint8_t new_tt = ev->sched_type; - - /* 96XX model - * cur_tt/new_tt SSO_SYNC_ORDERED SSO_SYNC_ATOMIC SSO_SYNC_UNTAGGED - * - * SSO_SYNC_ORDERED norm norm untag - * SSO_SYNC_ATOMIC norm norm untag - * SSO_SYNC_UNTAGGED norm norm NOOP - */ - if (new_tt == SSO_SYNC_UNTAGGED) { - if (cur_tt != SSO_SYNC_UNTAGGED) - otx2_ssogws_swtag_untag((struct otx2_ssogws *)ws); - } else { - otx2_ssogws_swtag_norm((struct otx2_ssogws *)ws, tag, new_tt); - } -} - -static __rte_always_inline void -otx2_ssogws_dual_fwd_group(struct otx2_ssogws_state *ws, - const struct rte_event *ev, const uint16_t grp) -{ - const uint32_t tag = (uint32_t)ev->event; - const uint8_t new_tt = ev->sched_type; - - otx2_write64(ev->u64, OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op) + - SSOW_LF_GWS_OP_UPD_WQP_GRP1); - rte_smp_wmb(); - otx2_ssogws_swtag_desched((struct otx2_ssogws *)ws, tag, new_tt, grp); -} - -static __rte_always_inline void -otx2_ssogws_dual_forward_event(struct otx2_ssogws_dual *ws, - struct otx2_ssogws_state *vws, - const struct rte_event *ev) -{ - const uint8_t grp = ev->queue_id; - - /* Group hasn't changed, Use SWTAG to forward the event */ - if (OTX2_SSOW_GRP_FROM_TAG(otx2_read64(vws->tag_op)) == grp) { - otx2_ssogws_dual_fwd_swtag(vws, ev); - ws->swtag_req = 1; - } else { - /* 
- * Group has been changed for group based work pipelining, - * Use deschedule/add_work operation to transfer the event to - * new group/core - */ - otx2_ssogws_dual_fwd_group(vws, ev, grp); - } -} - -uint16_t __rte_hot -otx2_ssogws_dual_enq(void *port, const struct rte_event *ev) -{ - struct otx2_ssogws_dual *ws = port; - struct otx2_ssogws_state *vws = &ws->ws_state[!ws->vws]; - - switch (ev->op) { - case RTE_EVENT_OP_NEW: - rte_smp_mb(); - return otx2_ssogws_dual_new_event(ws, ev); - case RTE_EVENT_OP_FORWARD: - otx2_ssogws_dual_forward_event(ws, vws, ev); - break; - case RTE_EVENT_OP_RELEASE: - otx2_ssogws_swtag_flush(vws->tag_op, vws->swtag_flush_op); - break; - default: - return 0; - } - - return 1; -} - -uint16_t __rte_hot -otx2_ssogws_dual_enq_burst(void *port, const struct rte_event ev[], - uint16_t nb_events) -{ - RTE_SET_USED(nb_events); - return otx2_ssogws_dual_enq(port, ev); -} - -uint16_t __rte_hot -otx2_ssogws_dual_enq_new_burst(void *port, const struct rte_event ev[], - uint16_t nb_events) -{ - struct otx2_ssogws_dual *ws = port; - uint16_t i, rc = 1; - - rte_smp_mb(); - if (ws->xaq_lmt <= *ws->fc_mem) - return 0; - - for (i = 0; i < nb_events && rc; i++) - rc = otx2_ssogws_dual_new_event(ws, &ev[i]); - - return nb_events; -} - -uint16_t __rte_hot -otx2_ssogws_dual_enq_fwd_burst(void *port, const struct rte_event ev[], - uint16_t nb_events) -{ - struct otx2_ssogws_dual *ws = port; - struct otx2_ssogws_state *vws = &ws->ws_state[!ws->vws]; - - RTE_SET_USED(nb_events); - otx2_ssogws_dual_forward_event(ws, vws, ev); - - return 1; -} - -#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ -uint16_t __rte_hot \ -otx2_ssogws_dual_deq_ ##name(void *port, struct rte_event *ev, \ - uint64_t timeout_ticks) \ -{ \ - struct otx2_ssogws_dual *ws = port; \ - uint8_t gw; \ - \ - rte_prefetch_non_temporal(ws); \ - RTE_SET_USED(timeout_ticks); \ - if (ws->swtag_req) { \ - otx2_ssogws_swtag_wait((struct otx2_ssogws *) \ - &ws->ws_state[!ws->vws]); \ - ws->swtag_req 
= 0; \ - return 1; \ - } \ - \ - gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], \ - &ws->ws_state[!ws->vws], ev, \ - flags, ws->lookup_mem, \ - ws->tstamp); \ - ws->vws = !ws->vws; \ - \ - return gw; \ -} \ - \ -uint16_t __rte_hot \ -otx2_ssogws_dual_deq_burst_ ##name(void *port, struct rte_event ev[], \ - uint16_t nb_events, \ - uint64_t timeout_ticks) \ -{ \ - RTE_SET_USED(nb_events); \ - \ - return otx2_ssogws_dual_deq_ ##name(port, ev, timeout_ticks); \ -} \ - \ -uint16_t __rte_hot \ -otx2_ssogws_dual_deq_timeout_ ##name(void *port, struct rte_event *ev, \ - uint64_t timeout_ticks) \ -{ \ - struct otx2_ssogws_dual *ws = port; \ - uint64_t iter; \ - uint8_t gw; \ - \ - if (ws->swtag_req) { \ - otx2_ssogws_swtag_wait((struct otx2_ssogws *) \ - &ws->ws_state[!ws->vws]); \ - ws->swtag_req = 0; \ - return 1; \ - } \ - \ - gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], \ - &ws->ws_state[!ws->vws], ev, \ - flags, ws->lookup_mem, \ - ws->tstamp); \ - ws->vws = !ws->vws; \ - for (iter = 1; iter < timeout_ticks && (gw == 0); iter++) { \ - gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], \ - &ws->ws_state[!ws->vws], \ - ev, flags, \ - ws->lookup_mem, \ - ws->tstamp); \ - ws->vws = !ws->vws; \ - } \ - \ - return gw; \ -} \ - \ -uint16_t __rte_hot \ -otx2_ssogws_dual_deq_timeout_burst_ ##name(void *port, \ - struct rte_event ev[], \ - uint16_t nb_events, \ - uint64_t timeout_ticks) \ -{ \ - RTE_SET_USED(nb_events); \ - \ - return otx2_ssogws_dual_deq_timeout_ ##name(port, ev, \ - timeout_ticks); \ -} \ - \ -uint16_t __rte_hot \ -otx2_ssogws_dual_deq_seg_ ##name(void *port, struct rte_event *ev, \ - uint64_t timeout_ticks) \ -{ \ - struct otx2_ssogws_dual *ws = port; \ - uint8_t gw; \ - \ - RTE_SET_USED(timeout_ticks); \ - if (ws->swtag_req) { \ - otx2_ssogws_swtag_wait((struct otx2_ssogws *) \ - &ws->ws_state[!ws->vws]); \ - ws->swtag_req = 0; \ - return 1; \ - } \ - \ - gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], \ - 
&ws->ws_state[!ws->vws], ev, \ - flags | NIX_RX_MULTI_SEG_F, \ - ws->lookup_mem, \ - ws->tstamp); \ - ws->vws = !ws->vws; \ - \ - return gw; \ -} \ - \ -uint16_t __rte_hot \ -otx2_ssogws_dual_deq_seg_burst_ ##name(void *port, \ - struct rte_event ev[], \ - uint16_t nb_events, \ - uint64_t timeout_ticks) \ -{ \ - RTE_SET_USED(nb_events); \ - \ - return otx2_ssogws_dual_deq_seg_ ##name(port, ev, \ - timeout_ticks); \ -} \ - \ -uint16_t __rte_hot \ -otx2_ssogws_dual_deq_seg_timeout_ ##name(void *port, \ - struct rte_event *ev, \ - uint64_t timeout_ticks) \ -{ \ - struct otx2_ssogws_dual *ws = port; \ - uint64_t iter; \ - uint8_t gw; \ - \ - if (ws->swtag_req) { \ - otx2_ssogws_swtag_wait((struct otx2_ssogws *) \ - &ws->ws_state[!ws->vws]); \ - ws->swtag_req = 0; \ - return 1; \ - } \ - \ - gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], \ - &ws->ws_state[!ws->vws], ev, \ - flags | NIX_RX_MULTI_SEG_F, \ - ws->lookup_mem, \ - ws->tstamp); \ - ws->vws = !ws->vws; \ - for (iter = 1; iter < timeout_ticks && (gw == 0); iter++) { \ - gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], \ - &ws->ws_state[!ws->vws], \ - ev, flags | \ - NIX_RX_MULTI_SEG_F, \ - ws->lookup_mem, \ - ws->tstamp); \ - ws->vws = !ws->vws; \ - } \ - \ - return gw; \ -} \ - \ -uint16_t __rte_hot \ -otx2_ssogws_dual_deq_seg_timeout_burst_ ##name(void *port, \ - struct rte_event ev[], \ - uint16_t nb_events, \ - uint64_t timeout_ticks) \ -{ \ - RTE_SET_USED(nb_events); \ - \ - return otx2_ssogws_dual_deq_seg_timeout_ ##name(port, ev, \ - timeout_ticks); \ -} - -SSO_RX_ADPTR_ENQ_FASTPATH_FUNC -#undef R - -#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \ -uint16_t __rte_hot \ -otx2_ssogws_dual_tx_adptr_enq_ ## name(void *port, \ - struct rte_event ev[], \ - uint16_t nb_events) \ -{ \ - struct otx2_ssogws_dual *ws = port; \ - uint64_t cmd[sz]; \ - \ - RTE_SET_USED(nb_events); \ - return otx2_ssogws_event_tx(ws->base[!ws->vws], &ev[0], \ - cmd, (const uint64_t \ - 
(*)[RTE_MAX_QUEUES_PER_PORT]) \ - &ws->tx_adptr_data, flags); \ -} -SSO_TX_ADPTR_ENQ_FASTPATH_FUNC -#undef T - -#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \ -uint16_t __rte_hot \ -otx2_ssogws_dual_tx_adptr_enq_seg_ ## name(void *port, \ - struct rte_event ev[], \ - uint16_t nb_events) \ -{ \ - uint64_t cmd[(sz) + NIX_TX_MSEG_SG_DWORDS - 2]; \ - struct otx2_ssogws_dual *ws = port; \ - \ - RTE_SET_USED(nb_events); \ - return otx2_ssogws_event_tx(ws->base[!ws->vws], &ev[0], \ - cmd, (const uint64_t \ - (*)[RTE_MAX_QUEUES_PER_PORT]) \ - &ws->tx_adptr_data, \ - (flags) | NIX_TX_MULTI_SEG_F);\ -} -SSO_TX_ADPTR_ENQ_FASTPATH_FUNC -#undef T diff --git a/drivers/event/octeontx2/otx2_worker_dual.h b/drivers/event/octeontx2/otx2_worker_dual.h deleted file mode 100644 index 36ae4dd88f..0000000000 --- a/drivers/event/octeontx2/otx2_worker_dual.h +++ /dev/null @@ -1,110 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2019 Marvell International Ltd. - */ - -#ifndef __OTX2_WORKER_DUAL_H__ -#define __OTX2_WORKER_DUAL_H__ - -#include -#include - -#include -#include "otx2_evdev.h" -#include "otx2_evdev_crypto_adptr_rx.h" - -/* SSO Operations */ -static __rte_always_inline uint16_t -otx2_ssogws_dual_get_work(struct otx2_ssogws_state *ws, - struct otx2_ssogws_state *ws_pair, - struct rte_event *ev, const uint32_t flags, - const void * const lookup_mem, - struct otx2_timesync_info * const tstamp) -{ - const uint64_t set_gw = BIT_ULL(16) | 1; - union otx2_sso_event event; - uint64_t tstamp_ptr; - uint64_t get_work1; - uint64_t mbuf; - - if (flags & NIX_RX_OFFLOAD_PTYPE_F) - rte_prefetch_non_temporal(lookup_mem); -#ifdef RTE_ARCH_ARM64 - asm volatile( - "rty%=: \n" - " ldr %[tag], [%[tag_loc]] \n" - " ldr %[wqp], [%[wqp_loc]] \n" - " tbnz %[tag], 63, rty%= \n" - "done%=: str %[gw], [%[pong]] \n" - " dmb ld \n" - " prfm pldl1keep, [%[wqp], #8]\n" - " sub %[mbuf], %[wqp], #0x80 \n" - " prfm pldl1keep, [%[mbuf]] \n" - : [tag] "=&r" (event.get_work0), - [wqp] 
"=&r" (get_work1), - [mbuf] "=&r" (mbuf) - : [tag_loc] "r" (ws->tag_op), - [wqp_loc] "r" (ws->wqp_op), - [gw] "r" (set_gw), - [pong] "r" (ws_pair->getwrk_op) - ); -#else - event.get_work0 = otx2_read64(ws->tag_op); - while ((BIT_ULL(63)) & event.get_work0) - event.get_work0 = otx2_read64(ws->tag_op); - get_work1 = otx2_read64(ws->wqp_op); - otx2_write64(set_gw, ws_pair->getwrk_op); - - rte_prefetch0((const void *)get_work1); - mbuf = (uint64_t)((char *)get_work1 - sizeof(struct rte_mbuf)); - rte_prefetch0((const void *)mbuf); -#endif - event.get_work0 = (event.get_work0 & (0x3ull << 32)) << 6 | - (event.get_work0 & (0x3FFull << 36)) << 4 | - (event.get_work0 & 0xffffffff); - - if (event.sched_type != SSO_TT_EMPTY) { - if ((flags & NIX_RX_OFFLOAD_SECURITY_F) && - (event.event_type == RTE_EVENT_TYPE_CRYPTODEV)) { - get_work1 = otx2_handle_crypto_event(get_work1); - } else if (event.event_type == RTE_EVENT_TYPE_ETHDEV) { - uint8_t port = event.sub_event_type; - - event.sub_event_type = 0; - otx2_wqe_to_mbuf(get_work1, mbuf, port, - event.flow_id, flags, lookup_mem); - /* Extracting tstamp, if PTP enabled. CGX will prepend - * the timestamp at starting of packet data and it can - * be derieved from WQE 9 dword which corresponds to SG - * iova. - * rte_pktmbuf_mtod_offset can be used for this purpose - * but it brings down the performance as it reads - * mbuf->buf_addr which is not part of cache in general - * fast path. 
- */ - tstamp_ptr = *(uint64_t *)(((struct nix_wqe_hdr_s *) - get_work1) + - OTX2_SSO_WQE_SG_PTR); - otx2_nix_mbuf_to_tstamp((struct rte_mbuf *)mbuf, tstamp, - flags, (uint64_t *)tstamp_ptr); - get_work1 = mbuf; - } - } - - ev->event = event.get_work0; - ev->u64 = get_work1; - - return !!get_work1; -} - -static __rte_always_inline void -otx2_ssogws_dual_add_work(struct otx2_ssogws_dual *ws, const uint64_t event_ptr, - const uint32_t tag, const uint8_t new_tt, - const uint16_t grp) -{ - uint64_t add_work0; - - add_work0 = tag | ((uint64_t)(new_tt) << 32); - otx2_store_pair(add_work0, event_ptr, ws->grps_base[grp]); -} - -#endif diff --git a/drivers/event/octeontx2/version.map b/drivers/event/octeontx2/version.map deleted file mode 100644 index c2e0723b4c..0000000000 --- a/drivers/event/octeontx2/version.map +++ /dev/null @@ -1,3 +0,0 @@ -DPDK_22 { - local: *; -}; diff --git a/drivers/mempool/cnxk/cnxk_mempool.c b/drivers/mempool/cnxk/cnxk_mempool.c index 57be33b862..ea473552dd 100644 --- a/drivers/mempool/cnxk/cnxk_mempool.c +++ b/drivers/mempool/cnxk/cnxk_mempool.c @@ -161,48 +161,20 @@ npa_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev) } static const struct rte_pci_id npa_pci_map[] = { - { - .class_id = RTE_CLASS_ANY_ID, - .vendor_id = PCI_VENDOR_ID_CAVIUM, - .device_id = PCI_DEVID_CNXK_RVU_NPA_PF, - .subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM, - .subsystem_device_id = PCI_SUBSYSTEM_DEVID_CN10KA, - }, - { - .class_id = RTE_CLASS_ANY_ID, - .vendor_id = PCI_VENDOR_ID_CAVIUM, - .device_id = PCI_DEVID_CNXK_RVU_NPA_PF, - .subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM, - .subsystem_device_id = PCI_SUBSYSTEM_DEVID_CN10KAS, - }, - { - .class_id = RTE_CLASS_ANY_ID, - .vendor_id = PCI_VENDOR_ID_CAVIUM, - .device_id = PCI_DEVID_CNXK_RVU_NPA_PF, - .subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM, - .subsystem_device_id = PCI_SUBSYSTEM_DEVID_CNF10KA, - }, - { - .class_id = RTE_CLASS_ANY_ID, - .vendor_id = PCI_VENDOR_ID_CAVIUM, - .device_id = 
PCI_DEVID_CNXK_RVU_NPA_VF, - .subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM, - .subsystem_device_id = PCI_SUBSYSTEM_DEVID_CN10KA, - }, - { - .class_id = RTE_CLASS_ANY_ID, - .vendor_id = PCI_VENDOR_ID_CAVIUM, - .device_id = PCI_DEVID_CNXK_RVU_NPA_VF, - .subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM, - .subsystem_device_id = PCI_SUBSYSTEM_DEVID_CN10KAS, - }, - { - .class_id = RTE_CLASS_ANY_ID, - .vendor_id = PCI_VENDOR_ID_CAVIUM, - .device_id = PCI_DEVID_CNXK_RVU_NPA_VF, - .subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM, - .subsystem_device_id = PCI_SUBSYSTEM_DEVID_CNF10KA, - }, + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KA, PCI_DEVID_CNXK_RVU_NPA_PF), + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KAS, PCI_DEVID_CNXK_RVU_NPA_PF), + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KA, PCI_DEVID_CNXK_RVU_NPA_PF), + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KB, PCI_DEVID_CNXK_RVU_NPA_PF), + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KC, PCI_DEVID_CNXK_RVU_NPA_PF), + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KD, PCI_DEVID_CNXK_RVU_NPA_PF), + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KE, PCI_DEVID_CNXK_RVU_NPA_PF), + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KA, PCI_DEVID_CNXK_RVU_NPA_VF), + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KAS, PCI_DEVID_CNXK_RVU_NPA_VF), + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KA, PCI_DEVID_CNXK_RVU_NPA_VF), + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KB, PCI_DEVID_CNXK_RVU_NPA_VF), + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KC, PCI_DEVID_CNXK_RVU_NPA_VF), + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KD, PCI_DEVID_CNXK_RVU_NPA_VF), + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KE, PCI_DEVID_CNXK_RVU_NPA_VF), { .vendor_id = 0, }, diff --git a/drivers/mempool/meson.build b/drivers/mempool/meson.build index d295263b87..dc88812585 100644 --- a/drivers/mempool/meson.build +++ b/drivers/mempool/meson.build @@ -7,7 +7,6 @@ drivers = [ 'dpaa', 'dpaa2', 'octeontx', - 'octeontx2', 'ring', 'stack', ] diff --git a/drivers/mempool/octeontx2/meson.build b/drivers/mempool/octeontx2/meson.build deleted file mode 100644 index a4bea6d364..0000000000 --- 
a/drivers/mempool/octeontx2/meson.build +++ /dev/null @@ -1,18 +0,0 @@ -# SPDX-License-Identifier: BSD-3-Clause -# Copyright(C) 2019 Marvell International Ltd. -# - -if not is_linux or not dpdk_conf.get('RTE_ARCH_64') - build = false - reason = 'only supported on 64-bit Linux' - subdir_done() -endif - -sources = files( - 'otx2_mempool.c', - 'otx2_mempool_debug.c', - 'otx2_mempool_irq.c', - 'otx2_mempool_ops.c', -) - -deps += ['eal', 'mbuf', 'kvargs', 'bus_pci', 'common_octeontx2', 'mempool'] diff --git a/drivers/mempool/octeontx2/otx2_mempool.c b/drivers/mempool/octeontx2/otx2_mempool.c deleted file mode 100644 index f63dc06ef2..0000000000 --- a/drivers/mempool/octeontx2/otx2_mempool.c +++ /dev/null @@ -1,457 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2019 Marvell International Ltd. - */ - -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include "otx2_common.h" -#include "otx2_dev.h" -#include "otx2_mempool.h" - -#define OTX2_NPA_DEV_NAME RTE_STR(otx2_npa_dev_) -#define OTX2_NPA_DEV_NAME_LEN (sizeof(OTX2_NPA_DEV_NAME) + PCI_PRI_STR_SIZE) - -static inline int -npa_lf_alloc(struct otx2_npa_lf *lf) -{ - struct otx2_mbox *mbox = lf->mbox; - struct npa_lf_alloc_req *req; - struct npa_lf_alloc_rsp *rsp; - int rc; - - req = otx2_mbox_alloc_msg_npa_lf_alloc(mbox); - req->aura_sz = lf->aura_sz; - req->nr_pools = lf->nr_pools; - - rc = otx2_mbox_process_msg(mbox, (void *)&rsp); - if (rc) - return NPA_LF_ERR_ALLOC; - - lf->stack_pg_ptrs = rsp->stack_pg_ptrs; - lf->stack_pg_bytes = rsp->stack_pg_bytes; - lf->qints = rsp->qints; - - return 0; -} - -static int -npa_lf_free(struct otx2_mbox *mbox) -{ - otx2_mbox_alloc_msg_npa_lf_free(mbox); - - return otx2_mbox_process(mbox); -} - -static int -npa_lf_init(struct otx2_npa_lf *lf, uintptr_t base, uint8_t aura_sz, - uint32_t nr_pools, struct otx2_mbox *mbox) -{ - uint32_t i, bmp_sz; - int rc; - - /* Sanity checks */ - if (!lf || !base || !mbox || !nr_pools) - 
return NPA_LF_ERR_PARAM; - - if (base & AURA_ID_MASK) - return NPA_LF_ERR_BASE_INVALID; - - if (aura_sz == NPA_AURA_SZ_0 || aura_sz >= NPA_AURA_SZ_MAX) - return NPA_LF_ERR_PARAM; - - memset(lf, 0x0, sizeof(*lf)); - lf->base = base; - lf->aura_sz = aura_sz; - lf->nr_pools = nr_pools; - lf->mbox = mbox; - - rc = npa_lf_alloc(lf); - if (rc) - goto exit; - - bmp_sz = rte_bitmap_get_memory_footprint(nr_pools); - - /* Allocate memory for bitmap */ - lf->npa_bmp_mem = rte_zmalloc("npa_bmp_mem", bmp_sz, - RTE_CACHE_LINE_SIZE); - if (lf->npa_bmp_mem == NULL) { - rc = -ENOMEM; - goto lf_free; - } - - /* Initialize pool resource bitmap array */ - lf->npa_bmp = rte_bitmap_init(nr_pools, lf->npa_bmp_mem, bmp_sz); - if (lf->npa_bmp == NULL) { - rc = -EINVAL; - goto bmap_mem_free; - } - - /* Mark all pools available */ - for (i = 0; i < nr_pools; i++) - rte_bitmap_set(lf->npa_bmp, i); - - /* Allocate memory for qint context */ - lf->npa_qint_mem = rte_zmalloc("npa_qint_mem", - sizeof(struct otx2_npa_qint) * nr_pools, 0); - if (lf->npa_qint_mem == NULL) { - rc = -ENOMEM; - goto bmap_free; - } - - /* Allocate memory for nap_aura_lim memory */ - lf->aura_lim = rte_zmalloc("npa_aura_lim_mem", - sizeof(struct npa_aura_lim) * nr_pools, 0); - if (lf->aura_lim == NULL) { - rc = -ENOMEM; - goto qint_free; - } - - /* Init aura start & end limits */ - for (i = 0; i < nr_pools; i++) { - lf->aura_lim[i].ptr_start = UINT64_MAX; - lf->aura_lim[i].ptr_end = 0x0ull; - } - - return 0; - -qint_free: - rte_free(lf->npa_qint_mem); -bmap_free: - rte_bitmap_free(lf->npa_bmp); -bmap_mem_free: - rte_free(lf->npa_bmp_mem); -lf_free: - npa_lf_free(lf->mbox); -exit: - return rc; -} - -static int -npa_lf_fini(struct otx2_npa_lf *lf) -{ - if (!lf) - return NPA_LF_ERR_PARAM; - - rte_free(lf->aura_lim); - rte_free(lf->npa_qint_mem); - rte_bitmap_free(lf->npa_bmp); - rte_free(lf->npa_bmp_mem); - - return npa_lf_free(lf->mbox); - -} - -static inline uint32_t -otx2_aura_size_to_u32(uint8_t val) -{ - if (val == 
NPA_AURA_SZ_0) - return 128; - if (val >= NPA_AURA_SZ_MAX) - return BIT_ULL(20); - - return 1 << (val + 6); -} - -static int -parse_max_pools(const char *key, const char *value, void *extra_args) -{ - RTE_SET_USED(key); - uint32_t val; - - val = atoi(value); - if (val < otx2_aura_size_to_u32(NPA_AURA_SZ_128)) - val = 128; - if (val > otx2_aura_size_to_u32(NPA_AURA_SZ_1M)) - val = BIT_ULL(20); - - *(uint8_t *)extra_args = rte_log2_u32(val) - 6; - return 0; -} - -#define OTX2_MAX_POOLS "max_pools" - -static uint8_t -otx2_parse_aura_size(struct rte_devargs *devargs) -{ - uint8_t aura_sz = NPA_AURA_SZ_128; - struct rte_kvargs *kvlist; - - if (devargs == NULL) - goto exit; - kvlist = rte_kvargs_parse(devargs->args, NULL); - if (kvlist == NULL) - goto exit; - - rte_kvargs_process(kvlist, OTX2_MAX_POOLS, &parse_max_pools, &aura_sz); - otx2_parse_common_devargs(kvlist); - rte_kvargs_free(kvlist); -exit: - return aura_sz; -} - -static inline int -npa_lf_attach(struct otx2_mbox *mbox) -{ - struct rsrc_attach_req *req; - - req = otx2_mbox_alloc_msg_attach_resources(mbox); - req->npalf = true; - - return otx2_mbox_process(mbox); -} - -static inline int -npa_lf_detach(struct otx2_mbox *mbox) -{ - struct rsrc_detach_req *req; - - req = otx2_mbox_alloc_msg_detach_resources(mbox); - req->npalf = true; - - return otx2_mbox_process(mbox); -} - -static inline int -npa_lf_get_msix_offset(struct otx2_mbox *mbox, uint16_t *npa_msixoff) -{ - struct msix_offset_rsp *msix_rsp; - int rc; - - /* Get NPA and NIX MSIX vector offsets */ - otx2_mbox_alloc_msg_msix_offset(mbox); - - rc = otx2_mbox_process_msg(mbox, (void *)&msix_rsp); - - *npa_msixoff = msix_rsp->npa_msixoff; - - return rc; -} - -/** - * @internal - * Finalize NPA LF. 
- */ -int -otx2_npa_lf_fini(void) -{ - struct otx2_idev_cfg *idev; - int rc = 0; - - idev = otx2_intra_dev_get_cfg(); - if (idev == NULL) - return -ENOMEM; - - if (rte_atomic16_add_return(&idev->npa_refcnt, -1) == 0) { - otx2_npa_unregister_irqs(idev->npa_lf); - rc |= npa_lf_fini(idev->npa_lf); - rc |= npa_lf_detach(idev->npa_lf->mbox); - otx2_npa_set_defaults(idev); - } - - return rc; -} - -/** - * @internal - * Initialize NPA LF. - */ -int -otx2_npa_lf_init(struct rte_pci_device *pci_dev, void *otx2_dev) -{ - struct otx2_dev *dev = otx2_dev; - struct otx2_idev_cfg *idev; - struct otx2_npa_lf *lf; - uint16_t npa_msixoff; - uint32_t nr_pools; - uint8_t aura_sz; - int rc; - - idev = otx2_intra_dev_get_cfg(); - if (idev == NULL) - return -ENOMEM; - - /* Is NPA LF initialized by any another driver? */ - if (rte_atomic16_add_return(&idev->npa_refcnt, 1) == 1) { - - rc = npa_lf_attach(dev->mbox); - if (rc) - goto fail; - - rc = npa_lf_get_msix_offset(dev->mbox, &npa_msixoff); - if (rc) - goto npa_detach; - - aura_sz = otx2_parse_aura_size(pci_dev->device.devargs); - nr_pools = otx2_aura_size_to_u32(aura_sz); - - lf = &dev->npalf; - rc = npa_lf_init(lf, dev->bar2 + (RVU_BLOCK_ADDR_NPA << 20), - aura_sz, nr_pools, dev->mbox); - - if (rc) - goto npa_detach; - - lf->pf_func = dev->pf_func; - lf->npa_msixoff = npa_msixoff; - lf->intr_handle = pci_dev->intr_handle; - lf->pci_dev = pci_dev; - - idev->npa_pf_func = dev->pf_func; - idev->npa_lf = lf; - rte_smp_wmb(); - rc = otx2_npa_register_irqs(lf); - if (rc) - goto npa_fini; - - rte_mbuf_set_platform_mempool_ops("octeontx2_npa"); - otx2_npa_dbg("npa_lf=%p pools=%d sz=%d pf_func=0x%x msix=0x%x", - lf, nr_pools, aura_sz, lf->pf_func, npa_msixoff); - } - - return 0; - -npa_fini: - npa_lf_fini(idev->npa_lf); -npa_detach: - npa_lf_detach(dev->mbox); -fail: - rte_atomic16_dec(&idev->npa_refcnt); - return rc; -} - -static inline char* -otx2_npa_dev_to_name(struct rte_pci_device *pci_dev, char *name) -{ - snprintf(name, 
OTX2_NPA_DEV_NAME_LEN,
-		 OTX2_NPA_DEV_NAME PCI_PRI_FMT,
-		 pci_dev->addr.domain, pci_dev->addr.bus,
-		 pci_dev->addr.devid, pci_dev->addr.function);
-
-	return name;
-}
-
-static int
-otx2_npa_init(struct rte_pci_device *pci_dev)
-{
-	char name[OTX2_NPA_DEV_NAME_LEN];
-	const struct rte_memzone *mz;
-	struct otx2_dev *dev;
-	int rc = -ENOMEM;
-
-	mz = rte_memzone_reserve_aligned(otx2_npa_dev_to_name(pci_dev, name),
-					 sizeof(*dev), SOCKET_ID_ANY,
-					 0, OTX2_ALIGN);
-	if (mz == NULL)
-		goto error;
-
-	dev = mz->addr;
-
-	/* Initialize the base otx2_dev object */
-	rc = otx2_dev_init(pci_dev, dev);
-	if (rc)
-		goto malloc_fail;
-
-	/* Grab the NPA LF if required */
-	rc = otx2_npa_lf_init(pci_dev, dev);
-	if (rc)
-		goto dev_uninit;
-
-	dev->drv_inited = true;
-	return 0;
-
-dev_uninit:
-	otx2_npa_lf_fini();
-	otx2_dev_fini(pci_dev, dev);
-malloc_fail:
-	rte_memzone_free(mz);
-error:
-	otx2_err("Failed to initialize npa device rc=%d", rc);
-	return rc;
-}
-
-static int
-otx2_npa_fini(struct rte_pci_device *pci_dev)
-{
-	char name[OTX2_NPA_DEV_NAME_LEN];
-	const struct rte_memzone *mz;
-	struct otx2_dev *dev;
-
-	mz = rte_memzone_lookup(otx2_npa_dev_to_name(pci_dev, name));
-	if (mz == NULL)
-		return -EINVAL;
-
-	dev = mz->addr;
-	if (!dev->drv_inited)
-		goto dev_fini;
-
-	dev->drv_inited = false;
-	otx2_npa_lf_fini();
-
-dev_fini:
-	if (otx2_npa_lf_active(dev)) {
-		otx2_info("%s: common resource in use by other devices",
-			  pci_dev->name);
-		return -EAGAIN;
-	}
-
-	otx2_dev_fini(pci_dev, dev);
-	rte_memzone_free(mz);
-
-	return 0;
-}
-
-static int
-npa_remove(struct rte_pci_device *pci_dev)
-{
-	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
-		return 0;
-
-	return otx2_npa_fini(pci_dev);
-}
-
-static int
-npa_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
-{
-	RTE_SET_USED(pci_drv);
-
-	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
-		return 0;
-
-	return otx2_npa_init(pci_dev);
-}
-
-static const struct rte_pci_id pci_npa_map[] = {
-	{
-		RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
-			       PCI_DEVID_OCTEONTX2_RVU_NPA_PF)
-	},
-	{
-		RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
-			       PCI_DEVID_OCTEONTX2_RVU_NPA_VF)
-	},
-	{
-		.vendor_id = 0,
-	},
-};
-
-static struct rte_pci_driver pci_npa = {
-	.id_table = pci_npa_map,
-	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA,
-	.probe = npa_probe,
-	.remove = npa_remove,
-};
-
-RTE_PMD_REGISTER_PCI(mempool_octeontx2, pci_npa);
-RTE_PMD_REGISTER_PCI_TABLE(mempool_octeontx2, pci_npa_map);
-RTE_PMD_REGISTER_KMOD_DEP(mempool_octeontx2, "vfio-pci");
-RTE_PMD_REGISTER_PARAM_STRING(mempool_octeontx2,
-			      OTX2_MAX_POOLS "=<128-1048576>"
-			      OTX2_NPA_LOCK_MASK "=<1-65535>");
diff --git a/drivers/mempool/octeontx2/otx2_mempool.h b/drivers/mempool/octeontx2/otx2_mempool.h
deleted file mode 100644
index 8aa548248d..0000000000
--- a/drivers/mempool/octeontx2/otx2_mempool.h
+++ /dev/null
@@ -1,221 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_MEMPOOL_H__
-#define __OTX2_MEMPOOL_H__
-
-#include
-#include
-#include
-#include
-
-#include "otx2_common.h"
-#include "otx2_mbox.h"
-
-enum npa_lf_status {
-	NPA_LF_ERR_PARAM	= -512,
-	NPA_LF_ERR_ALLOC	= -513,
-	NPA_LF_ERR_INVALID_BLOCK_SZ = -514,
-	NPA_LF_ERR_AURA_ID_ALLOC = -515,
-	NPA_LF_ERR_AURA_POOL_INIT = -516,
-	NPA_LF_ERR_AURA_POOL_FINI = -517,
-	NPA_LF_ERR_BASE_INVALID = -518,
-};
-
-struct otx2_npa_lf;
-struct otx2_npa_qint {
-	struct otx2_npa_lf *lf;
-	uint8_t qintx;
-};
-
-struct npa_aura_lim {
-	uint64_t ptr_start;
-	uint64_t ptr_end;
-};
-
-struct otx2_npa_lf {
-	uint16_t qints;
-	uintptr_t base;
-	uint8_t aura_sz;
-	uint16_t pf_func;
-	uint32_t nr_pools;
-	void *npa_bmp_mem;
-	void *npa_qint_mem;
-	uint16_t npa_msixoff;
-	struct otx2_mbox *mbox;
-	uint32_t stack_pg_ptrs;
-	uint32_t stack_pg_bytes;
-	struct rte_bitmap *npa_bmp;
-	struct npa_aura_lim *aura_lim;
-	struct rte_pci_device *pci_dev;
-	struct rte_intr_handle *intr_handle;
-};
-
-#define
AURA_ID_MASK (BIT_ULL(16) - 1) - -/* - * Generate 64bit handle to have optimized alloc and free aura operation. - * 0 - AURA_ID_MASK for storing the aura_id. - * AURA_ID_MASK+1 - (2^64 - 1) for storing the lf base address. - * This scheme is valid when OS can give AURA_ID_MASK - * aligned address for lf base address. - */ -static inline uint64_t -npa_lf_aura_handle_gen(uint32_t aura_id, uintptr_t addr) -{ - uint64_t val; - - val = aura_id & AURA_ID_MASK; - return (uint64_t)addr | val; -} - -static inline uint64_t -npa_lf_aura_handle_to_aura(uint64_t aura_handle) -{ - return aura_handle & AURA_ID_MASK; -} - -static inline uintptr_t -npa_lf_aura_handle_to_base(uint64_t aura_handle) -{ - return (uintptr_t)(aura_handle & ~AURA_ID_MASK); -} - -static inline uint64_t -npa_lf_aura_op_alloc(uint64_t aura_handle, const int drop) -{ - uint64_t wdata = npa_lf_aura_handle_to_aura(aura_handle); - - if (drop) - wdata |= BIT_ULL(63); /* DROP */ - - return otx2_atomic64_add_nosync(wdata, - (int64_t *)(npa_lf_aura_handle_to_base(aura_handle) + - NPA_LF_AURA_OP_ALLOCX(0))); -} - -static inline void -npa_lf_aura_op_free(uint64_t aura_handle, const int fabs, uint64_t iova) -{ - uint64_t reg = npa_lf_aura_handle_to_aura(aura_handle); - - if (fabs) - reg |= BIT_ULL(63); /* FABS */ - - otx2_store_pair(iova, reg, - npa_lf_aura_handle_to_base(aura_handle) + NPA_LF_AURA_OP_FREE0); -} - -static inline uint64_t -npa_lf_aura_op_cnt_get(uint64_t aura_handle) -{ - uint64_t wdata; - uint64_t reg; - - wdata = npa_lf_aura_handle_to_aura(aura_handle) << 44; - - reg = otx2_atomic64_add_nosync(wdata, - (int64_t *)(npa_lf_aura_handle_to_base(aura_handle) + - NPA_LF_AURA_OP_CNT)); - - if (reg & BIT_ULL(42) /* OP_ERR */) - return 0; - else - return reg & 0xFFFFFFFFF; -} - -static inline void -npa_lf_aura_op_cnt_set(uint64_t aura_handle, const int sign, uint64_t count) -{ - uint64_t reg = count & (BIT_ULL(36) - 1); - - if (sign) - reg |= BIT_ULL(43); /* CNT_ADD */ - - reg |= 
(npa_lf_aura_handle_to_aura(aura_handle) << 44); - - otx2_write64(reg, - npa_lf_aura_handle_to_base(aura_handle) + NPA_LF_AURA_OP_CNT); -} - -static inline uint64_t -npa_lf_aura_op_limit_get(uint64_t aura_handle) -{ - uint64_t wdata; - uint64_t reg; - - wdata = npa_lf_aura_handle_to_aura(aura_handle) << 44; - - reg = otx2_atomic64_add_nosync(wdata, - (int64_t *)(npa_lf_aura_handle_to_base(aura_handle) + - NPA_LF_AURA_OP_LIMIT)); - - if (reg & BIT_ULL(42) /* OP_ERR */) - return 0; - else - return reg & 0xFFFFFFFFF; -} - -static inline void -npa_lf_aura_op_limit_set(uint64_t aura_handle, uint64_t limit) -{ - uint64_t reg = limit & (BIT_ULL(36) - 1); - - reg |= (npa_lf_aura_handle_to_aura(aura_handle) << 44); - - otx2_write64(reg, - npa_lf_aura_handle_to_base(aura_handle) + NPA_LF_AURA_OP_LIMIT); -} - -static inline uint64_t -npa_lf_aura_op_available(uint64_t aura_handle) -{ - uint64_t wdata; - uint64_t reg; - - wdata = npa_lf_aura_handle_to_aura(aura_handle) << 44; - - reg = otx2_atomic64_add_nosync(wdata, - (int64_t *)(npa_lf_aura_handle_to_base( - aura_handle) + NPA_LF_POOL_OP_AVAILABLE)); - - if (reg & BIT_ULL(42) /* OP_ERR */) - return 0; - else - return reg & 0xFFFFFFFFF; -} - -static inline void -npa_lf_aura_op_range_set(uint64_t aura_handle, uint64_t start_iova, - uint64_t end_iova) -{ - uint64_t reg = npa_lf_aura_handle_to_aura(aura_handle); - struct otx2_npa_lf *lf = otx2_npa_lf_obj_get(); - struct npa_aura_lim *lim = lf->aura_lim; - - lim[reg].ptr_start = RTE_MIN(lim[reg].ptr_start, start_iova); - lim[reg].ptr_end = RTE_MAX(lim[reg].ptr_end, end_iova); - - otx2_store_pair(lim[reg].ptr_start, reg, - npa_lf_aura_handle_to_base(aura_handle) + - NPA_LF_POOL_OP_PTR_START0); - otx2_store_pair(lim[reg].ptr_end, reg, - npa_lf_aura_handle_to_base(aura_handle) + - NPA_LF_POOL_OP_PTR_END0); -} - -/* NPA LF */ -__rte_internal -int otx2_npa_lf_init(struct rte_pci_device *pci_dev, void *otx2_dev); -__rte_internal -int otx2_npa_lf_fini(void); - -/* IRQ */ -int 
otx2_npa_register_irqs(struct otx2_npa_lf *lf); -void otx2_npa_unregister_irqs(struct otx2_npa_lf *lf); - -/* Debug */ -int otx2_mempool_ctx_dump(struct otx2_npa_lf *lf); - -#endif /* __OTX2_MEMPOOL_H__ */ diff --git a/drivers/mempool/octeontx2/otx2_mempool_debug.c b/drivers/mempool/octeontx2/otx2_mempool_debug.c deleted file mode 100644 index 279ea2e25f..0000000000 --- a/drivers/mempool/octeontx2/otx2_mempool_debug.c +++ /dev/null @@ -1,135 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2019 Marvell International Ltd. - */ - -#include "otx2_mempool.h" - -#define npa_dump(fmt, ...) fprintf(stderr, fmt "\n", ##__VA_ARGS__) - -static inline void -npa_lf_pool_dump(__otx2_io struct npa_pool_s *pool) -{ - npa_dump("W0: Stack base\t\t0x%"PRIx64"", pool->stack_base); - npa_dump("W1: ena \t\t%d\nW1: nat_align \t\t%d\nW1: stack_caching \t%d", - pool->ena, pool->nat_align, pool->stack_caching); - npa_dump("W1: stack_way_mask\t%d\nW1: buf_offset\t\t%d", - pool->stack_way_mask, pool->buf_offset); - npa_dump("W1: buf_size \t\t%d", pool->buf_size); - - npa_dump("W2: stack_max_pages \t%d\nW2: stack_pages\t\t%d", - pool->stack_max_pages, pool->stack_pages); - - npa_dump("W3: op_pc \t\t0x%"PRIx64"", (uint64_t)pool->op_pc); - - npa_dump("W4: stack_offset\t%d\nW4: shift\t\t%d\nW4: avg_level\t\t%d", - pool->stack_offset, pool->shift, pool->avg_level); - npa_dump("W4: avg_con \t\t%d\nW4: fc_ena\t\t%d\nW4: fc_stype\t\t%d", - pool->avg_con, pool->fc_ena, pool->fc_stype); - npa_dump("W4: fc_hyst_bits\t%d\nW4: fc_up_crossing\t%d", - pool->fc_hyst_bits, pool->fc_up_crossing); - npa_dump("W4: update_time\t\t%d\n", pool->update_time); - - npa_dump("W5: fc_addr\t\t0x%"PRIx64"\n", pool->fc_addr); - - npa_dump("W6: ptr_start\t\t0x%"PRIx64"\n", pool->ptr_start); - - npa_dump("W7: ptr_end\t\t0x%"PRIx64"\n", pool->ptr_end); - npa_dump("W8: err_int\t\t%d\nW8: err_int_ena\t\t%d", - pool->err_int, pool->err_int_ena); - npa_dump("W8: thresh_int\t\t%d", pool->thresh_int); - - 
npa_dump("W8: thresh_int_ena\t%d\nW8: thresh_up\t\t%d", - pool->thresh_int_ena, pool->thresh_up); - npa_dump("W8: thresh_qint_idx\t%d\nW8: err_qint_idx\t%d", - pool->thresh_qint_idx, pool->err_qint_idx); -} - -static inline void -npa_lf_aura_dump(__otx2_io struct npa_aura_s *aura) -{ - npa_dump("W0: Pool addr\t\t0x%"PRIx64"\n", aura->pool_addr); - - npa_dump("W1: ena\t\t\t%d\nW1: pool caching\t%d\nW1: pool way mask\t%d", - aura->ena, aura->pool_caching, aura->pool_way_mask); - npa_dump("W1: avg con\t\t%d\nW1: pool drop ena\t%d", - aura->avg_con, aura->pool_drop_ena); - npa_dump("W1: aura drop ena\t%d", aura->aura_drop_ena); - npa_dump("W1: bp_ena\t\t%d\nW1: aura drop\t\t%d\nW1: aura shift\t\t%d", - aura->bp_ena, aura->aura_drop, aura->shift); - npa_dump("W1: avg_level\t\t%d\n", aura->avg_level); - - npa_dump("W2: count\t\t%"PRIx64"\nW2: nix0_bpid\t\t%d", - (uint64_t)aura->count, aura->nix0_bpid); - npa_dump("W2: nix1_bpid\t\t%d", aura->nix1_bpid); - - npa_dump("W3: limit\t\t%"PRIx64"\nW3: bp\t\t\t%d\nW3: fc_ena\t\t%d\n", - (uint64_t)aura->limit, aura->bp, aura->fc_ena); - npa_dump("W3: fc_up_crossing\t%d\nW3: fc_stype\t\t%d", - aura->fc_up_crossing, aura->fc_stype); - - npa_dump("W3: fc_hyst_bits\t%d", aura->fc_hyst_bits); - - npa_dump("W4: fc_addr\t\t0x%"PRIx64"\n", aura->fc_addr); - - npa_dump("W5: pool_drop\t\t%d\nW5: update_time\t\t%d", - aura->pool_drop, aura->update_time); - npa_dump("W5: err_int\t\t%d", aura->err_int); - npa_dump("W5: err_int_ena\t\t%d\nW5: thresh_int\t\t%d", - aura->err_int_ena, aura->thresh_int); - npa_dump("W5: thresh_int_ena\t%d", aura->thresh_int_ena); - - npa_dump("W5: thresh_up\t\t%d\nW5: thresh_qint_idx\t%d", - aura->thresh_up, aura->thresh_qint_idx); - npa_dump("W5: err_qint_idx\t%d", aura->err_qint_idx); - - npa_dump("W6: thresh\t\t%"PRIx64"\n", (uint64_t)aura->thresh); -} - -int -otx2_mempool_ctx_dump(struct otx2_npa_lf *lf) -{ - struct npa_aq_enq_req *aq; - struct npa_aq_enq_rsp *rsp; - uint32_t q; - int rc = 0; - - for (q = 0; q 
< lf->nr_pools; q++) {
-		/* Skip disabled POOL */
-		if (rte_bitmap_get(lf->npa_bmp, q))
-			continue;
-
-		aq = otx2_mbox_alloc_msg_npa_aq_enq(lf->mbox);
-		aq->aura_id = q;
-		aq->ctype = NPA_AQ_CTYPE_POOL;
-		aq->op = NPA_AQ_INSTOP_READ;
-
-		rc = otx2_mbox_process_msg(lf->mbox, (void *)&rsp);
-		if (rc) {
-			otx2_err("Failed to get pool(%d) context", q);
-			return rc;
-		}
-		npa_dump("============== pool=%d ===============\n", q);
-		npa_lf_pool_dump(&rsp->pool);
-	}
-
-	for (q = 0; q < lf->nr_pools; q++) {
-		/* Skip disabled AURA */
-		if (rte_bitmap_get(lf->npa_bmp, q))
-			continue;
-
-		aq = otx2_mbox_alloc_msg_npa_aq_enq(lf->mbox);
-		aq->aura_id = q;
-		aq->ctype = NPA_AQ_CTYPE_AURA;
-		aq->op = NPA_AQ_INSTOP_READ;
-
-		rc = otx2_mbox_process_msg(lf->mbox, (void *)&rsp);
-		if (rc) {
-			otx2_err("Failed to get aura(%d) context", q);
-			return rc;
-		}
-		npa_dump("============== aura=%d ===============\n", q);
-		npa_lf_aura_dump(&rsp->aura);
-	}
-
-	return rc;
-}
diff --git a/drivers/mempool/octeontx2/otx2_mempool_irq.c b/drivers/mempool/octeontx2/otx2_mempool_irq.c
deleted file mode 100644
index 5fa22b9612..0000000000
--- a/drivers/mempool/octeontx2/otx2_mempool_irq.c
+++ /dev/null
@@ -1,303 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */ - -#include - -#include -#include - -#include "otx2_common.h" -#include "otx2_irq.h" -#include "otx2_mempool.h" - -static void -npa_lf_err_irq(void *param) -{ - struct otx2_npa_lf *lf = (struct otx2_npa_lf *)param; - uint64_t intr; - - intr = otx2_read64(lf->base + NPA_LF_ERR_INT); - if (intr == 0) - return; - - otx2_err("Err_intr=0x%" PRIx64 "", intr); - - /* Clear interrupt */ - otx2_write64(intr, lf->base + NPA_LF_ERR_INT); -} - -static int -npa_lf_register_err_irq(struct otx2_npa_lf *lf) -{ - struct rte_intr_handle *handle = lf->intr_handle; - int rc, vec; - - vec = lf->npa_msixoff + NPA_LF_INT_VEC_ERR_INT; - - /* Clear err interrupt */ - otx2_write64(~0ull, lf->base + NPA_LF_ERR_INT_ENA_W1C); - /* Register err interrupt vector */ - rc = otx2_register_irq(handle, npa_lf_err_irq, lf, vec); - - /* Enable hw interrupt */ - otx2_write64(~0ull, lf->base + NPA_LF_ERR_INT_ENA_W1S); - - return rc; -} - -static void -npa_lf_unregister_err_irq(struct otx2_npa_lf *lf) -{ - struct rte_intr_handle *handle = lf->intr_handle; - int vec; - - vec = lf->npa_msixoff + NPA_LF_INT_VEC_ERR_INT; - - /* Clear err interrupt */ - otx2_write64(~0ull, lf->base + NPA_LF_ERR_INT_ENA_W1C); - otx2_unregister_irq(handle, npa_lf_err_irq, lf, vec); -} - -static void -npa_lf_ras_irq(void *param) -{ - struct otx2_npa_lf *lf = (struct otx2_npa_lf *)param; - uint64_t intr; - - intr = otx2_read64(lf->base + NPA_LF_RAS); - if (intr == 0) - return; - - otx2_err("Ras_intr=0x%" PRIx64 "", intr); - - /* Clear interrupt */ - otx2_write64(intr, lf->base + NPA_LF_RAS); -} - -static int -npa_lf_register_ras_irq(struct otx2_npa_lf *lf) -{ - struct rte_intr_handle *handle = lf->intr_handle; - int rc, vec; - - vec = lf->npa_msixoff + NPA_LF_INT_VEC_POISON; - - /* Clear err interrupt */ - otx2_write64(~0ull, lf->base + NPA_LF_RAS_ENA_W1C); - /* Set used interrupt vectors */ - rc = otx2_register_irq(handle, npa_lf_ras_irq, lf, vec); - /* Enable hw interrupt */ - otx2_write64(~0ull, lf->base + 
NPA_LF_RAS_ENA_W1S); - - return rc; -} - -static void -npa_lf_unregister_ras_irq(struct otx2_npa_lf *lf) -{ - int vec; - struct rte_intr_handle *handle = lf->intr_handle; - - vec = lf->npa_msixoff + NPA_LF_INT_VEC_POISON; - - /* Clear err interrupt */ - otx2_write64(~0ull, lf->base + NPA_LF_RAS_ENA_W1C); - otx2_unregister_irq(handle, npa_lf_ras_irq, lf, vec); -} - -static inline uint8_t -npa_lf_q_irq_get_and_clear(struct otx2_npa_lf *lf, uint32_t q, - uint32_t off, uint64_t mask) -{ - uint64_t reg, wdata; - uint8_t qint; - - wdata = (uint64_t)q << 44; - reg = otx2_atomic64_add_nosync(wdata, (int64_t *)(lf->base + off)); - - if (reg & BIT_ULL(42) /* OP_ERR */) { - otx2_err("Failed execute irq get off=0x%x", off); - return 0; - } - - qint = reg & 0xff; - wdata &= mask; - otx2_write64(wdata | qint, lf->base + off); - - return qint; -} - -static inline uint8_t -npa_lf_pool_irq_get_and_clear(struct otx2_npa_lf *lf, uint32_t p) -{ - return npa_lf_q_irq_get_and_clear(lf, p, NPA_LF_POOL_OP_INT, ~0xff00); -} - -static inline uint8_t -npa_lf_aura_irq_get_and_clear(struct otx2_npa_lf *lf, uint32_t a) -{ - return npa_lf_q_irq_get_and_clear(lf, a, NPA_LF_AURA_OP_INT, ~0xff00); -} - -static void -npa_lf_q_irq(void *param) -{ - struct otx2_npa_qint *qint = (struct otx2_npa_qint *)param; - struct otx2_npa_lf *lf = qint->lf; - uint8_t irq, qintx = qint->qintx; - uint32_t q, pool, aura; - uint64_t intr; - - intr = otx2_read64(lf->base + NPA_LF_QINTX_INT(qintx)); - if (intr == 0) - return; - - otx2_err("queue_intr=0x%" PRIx64 " qintx=%d", intr, qintx); - - /* Handle pool queue interrupts */ - for (q = 0; q < lf->nr_pools; q++) { - /* Skip disabled POOL */ - if (rte_bitmap_get(lf->npa_bmp, q)) - continue; - - pool = q % lf->qints; - irq = npa_lf_pool_irq_get_and_clear(lf, pool); - - if (irq & BIT_ULL(NPA_POOL_ERR_INT_OVFLS)) - otx2_err("Pool=%d NPA_POOL_ERR_INT_OVFLS", pool); - - if (irq & BIT_ULL(NPA_POOL_ERR_INT_RANGE)) - otx2_err("Pool=%d NPA_POOL_ERR_INT_RANGE", pool); - - if (irq 
& BIT_ULL(NPA_POOL_ERR_INT_PERR)) - otx2_err("Pool=%d NPA_POOL_ERR_INT_PERR", pool); - } - - /* Handle aura queue interrupts */ - for (q = 0; q < lf->nr_pools; q++) { - - /* Skip disabled AURA */ - if (rte_bitmap_get(lf->npa_bmp, q)) - continue; - - aura = q % lf->qints; - irq = npa_lf_aura_irq_get_and_clear(lf, aura); - - if (irq & BIT_ULL(NPA_AURA_ERR_INT_AURA_ADD_OVER)) - otx2_err("Aura=%d NPA_AURA_ERR_INT_ADD_OVER", aura); - - if (irq & BIT_ULL(NPA_AURA_ERR_INT_AURA_ADD_UNDER)) - otx2_err("Aura=%d NPA_AURA_ERR_INT_ADD_UNDER", aura); - - if (irq & BIT_ULL(NPA_AURA_ERR_INT_AURA_FREE_UNDER)) - otx2_err("Aura=%d NPA_AURA_ERR_INT_FREE_UNDER", aura); - - if (irq & BIT_ULL(NPA_AURA_ERR_INT_POOL_DIS)) - otx2_err("Aura=%d NPA_AURA_ERR_POOL_DIS", aura); - } - - /* Clear interrupt */ - otx2_write64(intr, lf->base + NPA_LF_QINTX_INT(qintx)); - otx2_mempool_ctx_dump(lf); -} - -static int -npa_lf_register_queue_irqs(struct otx2_npa_lf *lf) -{ - struct rte_intr_handle *handle = lf->intr_handle; - int vec, q, qs, rc = 0; - - /* Figure out max qintx required */ - qs = RTE_MIN(lf->qints, lf->nr_pools); - - for (q = 0; q < qs; q++) { - vec = lf->npa_msixoff + NPA_LF_INT_VEC_QINT_START + q; - - /* Clear QINT CNT */ - otx2_write64(0, lf->base + NPA_LF_QINTX_CNT(q)); - - /* Clear interrupt */ - otx2_write64(~0ull, lf->base + NPA_LF_QINTX_ENA_W1C(q)); - - struct otx2_npa_qint *qintmem = lf->npa_qint_mem; - qintmem += q; - - qintmem->lf = lf; - qintmem->qintx = q; - - /* Sync qints_mem update */ - rte_smp_wmb(); - - /* Register queue irq vector */ - rc = otx2_register_irq(handle, npa_lf_q_irq, qintmem, vec); - if (rc) - break; - - otx2_write64(0, lf->base + NPA_LF_QINTX_CNT(q)); - otx2_write64(0, lf->base + NPA_LF_QINTX_INT(q)); - /* Enable QINT interrupt */ - otx2_write64(~0ull, lf->base + NPA_LF_QINTX_ENA_W1S(q)); - } - - return rc; -} - -static void -npa_lf_unregister_queue_irqs(struct otx2_npa_lf *lf) -{ - struct rte_intr_handle *handle = lf->intr_handle; - int vec, q, qs; - - /* 
Figure out max qintx required */ - qs = RTE_MIN(lf->qints, lf->nr_pools); - - for (q = 0; q < qs; q++) { - vec = lf->npa_msixoff + NPA_LF_INT_VEC_QINT_START + q; - - /* Clear QINT CNT */ - otx2_write64(0, lf->base + NPA_LF_QINTX_CNT(q)); - otx2_write64(0, lf->base + NPA_LF_QINTX_INT(q)); - - /* Clear interrupt */ - otx2_write64(~0ull, lf->base + NPA_LF_QINTX_ENA_W1C(q)); - - struct otx2_npa_qint *qintmem = lf->npa_qint_mem; - qintmem += q; - - /* Unregister queue irq vector */ - otx2_unregister_irq(handle, npa_lf_q_irq, qintmem, vec); - - qintmem->lf = NULL; - qintmem->qintx = 0; - } -} - -int -otx2_npa_register_irqs(struct otx2_npa_lf *lf) -{ - int rc; - - if (lf->npa_msixoff == MSIX_VECTOR_INVALID) { - otx2_err("Invalid NPALF MSIX vector offset vector: 0x%x", - lf->npa_msixoff); - return -EINVAL; - } - - /* Register lf err interrupt */ - rc = npa_lf_register_err_irq(lf); - /* Register RAS interrupt */ - rc |= npa_lf_register_ras_irq(lf); - /* Register queue interrupts */ - rc |= npa_lf_register_queue_irqs(lf); - - return rc; -} - -void -otx2_npa_unregister_irqs(struct otx2_npa_lf *lf) -{ - npa_lf_unregister_err_irq(lf); - npa_lf_unregister_ras_irq(lf); - npa_lf_unregister_queue_irqs(lf); -} diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c b/drivers/mempool/octeontx2/otx2_mempool_ops.c deleted file mode 100644 index 332e4f1cb2..0000000000 --- a/drivers/mempool/octeontx2/otx2_mempool_ops.c +++ /dev/null @@ -1,901 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2019 Marvell International Ltd. 
- */ - -#include -#include - -#include "otx2_mempool.h" - -static int __rte_hot -otx2_npa_enq(struct rte_mempool *mp, void * const *obj_table, unsigned int n) -{ - unsigned int index; const uint64_t aura_handle = mp->pool_id; - const uint64_t reg = npa_lf_aura_handle_to_aura(aura_handle); - const uint64_t addr = npa_lf_aura_handle_to_base(aura_handle) + - NPA_LF_AURA_OP_FREE0; - - /* Ensure mbuf init changes are written before the free pointers - * are enqueued to the stack. - */ - rte_io_wmb(); - for (index = 0; index < n; index++) - otx2_store_pair((uint64_t)obj_table[index], reg, addr); - - return 0; -} - -static __rte_noinline int -npa_lf_aura_op_alloc_one(const int64_t wdata, int64_t * const addr, - void **obj_table, uint8_t i) -{ - uint8_t retry = 4; - - do { - obj_table[i] = (void *)otx2_atomic64_add_nosync(wdata, addr); - if (obj_table[i] != NULL) - return 0; - - } while (retry--); - - return -ENOENT; -} - -#if defined(RTE_ARCH_ARM64) -static __rte_noinline int -npa_lf_aura_op_search_alloc(const int64_t wdata, int64_t * const addr, - void **obj_table, unsigned int n) -{ - uint8_t i; - - for (i = 0; i < n; i++) { - if (obj_table[i] != NULL) - continue; - if (npa_lf_aura_op_alloc_one(wdata, addr, obj_table, i)) - return -ENOENT; - } - - return 0; -} - -static __rte_noinline int -npa_lf_aura_op_alloc_bulk(const int64_t wdata, int64_t * const addr, - unsigned int n, void **obj_table) -{ - register const uint64_t wdata64 __asm("x26") = wdata; - register const uint64_t wdata128 __asm("x27") = wdata; - uint64x2_t failed = vdupq_n_u64(~0); - - switch (n) { - case 32: - { - asm volatile ( - ".cpu generic+lse\n" - "casp x0, x1, %[wdata64], %[wdata128], [%[loc]]\n" - "casp x2, x3, %[wdata64], %[wdata128], [%[loc]]\n" - "casp x4, x5, %[wdata64], %[wdata128], [%[loc]]\n" - "casp x6, x7, %[wdata64], %[wdata128], [%[loc]]\n" - "casp x8, x9, %[wdata64], %[wdata128], [%[loc]]\n" - "casp x10, x11, %[wdata64], %[wdata128], [%[loc]]\n" - "casp x12, x13, %[wdata64], 
%[wdata128], [%[loc]]\n" - "casp x14, x15, %[wdata64], %[wdata128], [%[loc]]\n" - "casp x16, x17, %[wdata64], %[wdata128], [%[loc]]\n" - "casp x18, x19, %[wdata64], %[wdata128], [%[loc]]\n" - "casp x20, x21, %[wdata64], %[wdata128], [%[loc]]\n" - "casp x22, x23, %[wdata64], %[wdata128], [%[loc]]\n" - "fmov d16, x0\n" - "fmov v16.D[1], x1\n" - "casp x0, x1, %[wdata64], %[wdata128], [%[loc]]\n" - "fmov d17, x2\n" - "fmov v17.D[1], x3\n" - "casp x2, x3, %[wdata64], %[wdata128], [%[loc]]\n" - "fmov d18, x4\n" - "fmov v18.D[1], x5\n" - "casp x4, x5, %[wdata64], %[wdata128], [%[loc]]\n" - "fmov d19, x6\n" - "fmov v19.D[1], x7\n" - "casp x6, x7, %[wdata64], %[wdata128], [%[loc]]\n" - "and %[failed].16B, %[failed].16B, v16.16B\n" - "and %[failed].16B, %[failed].16B, v17.16B\n" - "and %[failed].16B, %[failed].16B, v18.16B\n" - "and %[failed].16B, %[failed].16B, v19.16B\n" - "fmov d20, x8\n" - "fmov v20.D[1], x9\n" - "fmov d21, x10\n" - "fmov v21.D[1], x11\n" - "fmov d22, x12\n" - "fmov v22.D[1], x13\n" - "fmov d23, x14\n" - "fmov v23.D[1], x15\n" - "and %[failed].16B, %[failed].16B, v20.16B\n" - "and %[failed].16B, %[failed].16B, v21.16B\n" - "and %[failed].16B, %[failed].16B, v22.16B\n" - "and %[failed].16B, %[failed].16B, v23.16B\n" - "st1 { v16.2d, v17.2d, v18.2d, v19.2d}, [%[dst]], 64\n" - "st1 { v20.2d, v21.2d, v22.2d, v23.2d}, [%[dst]], 64\n" - "fmov d16, x16\n" - "fmov v16.D[1], x17\n" - "fmov d17, x18\n" - "fmov v17.D[1], x19\n" - "fmov d18, x20\n" - "fmov v18.D[1], x21\n" - "fmov d19, x22\n" - "fmov v19.D[1], x23\n" - "and %[failed].16B, %[failed].16B, v16.16B\n" - "and %[failed].16B, %[failed].16B, v17.16B\n" - "and %[failed].16B, %[failed].16B, v18.16B\n" - "and %[failed].16B, %[failed].16B, v19.16B\n" - "fmov d20, x0\n" - "fmov v20.D[1], x1\n" - "fmov d21, x2\n" - "fmov v21.D[1], x3\n" - "fmov d22, x4\n" - "fmov v22.D[1], x5\n" - "fmov d23, x6\n" - "fmov v23.D[1], x7\n" - "and %[failed].16B, %[failed].16B, v20.16B\n" - "and %[failed].16B, %[failed].16B, 
v21.16B\n" - "and %[failed].16B, %[failed].16B, v22.16B\n" - "and %[failed].16B, %[failed].16B, v23.16B\n" - "st1 { v16.2d, v17.2d, v18.2d, v19.2d}, [%[dst]], 64\n" - "st1 { v20.2d, v21.2d, v22.2d, v23.2d}, [%[dst]], 64\n" - : "+Q" (*addr), [failed] "=&w" (failed) - : [wdata64] "r" (wdata64), [wdata128] "r" (wdata128), - [dst] "r" (obj_table), [loc] "r" (addr) - : "memory", "x0", "x1", "x2", "x3", "x4", "x5", "x6", "x7", - "x8", "x9", "x10", "x11", "x12", "x13", "x14", "x15", "x16", - "x17", "x18", "x19", "x20", "x21", "x22", "x23", "v16", "v17", - "v18", "v19", "v20", "v21", "v22", "v23" - ); - break; - } - case 16: - { - asm volatile ( - ".cpu generic+lse\n" - "casp x0, x1, %[wdata64], %[wdata128], [%[loc]]\n" - "casp x2, x3, %[wdata64], %[wdata128], [%[loc]]\n" - "casp x4, x5, %[wdata64], %[wdata128], [%[loc]]\n" - "casp x6, x7, %[wdata64], %[wdata128], [%[loc]]\n" - "casp x8, x9, %[wdata64], %[wdata128], [%[loc]]\n" - "casp x10, x11, %[wdata64], %[wdata128], [%[loc]]\n" - "casp x12, x13, %[wdata64], %[wdata128], [%[loc]]\n" - "casp x14, x15, %[wdata64], %[wdata128], [%[loc]]\n" - "fmov d16, x0\n" - "fmov v16.D[1], x1\n" - "fmov d17, x2\n" - "fmov v17.D[1], x3\n" - "fmov d18, x4\n" - "fmov v18.D[1], x5\n" - "fmov d19, x6\n" - "fmov v19.D[1], x7\n" - "and %[failed].16B, %[failed].16B, v16.16B\n" - "and %[failed].16B, %[failed].16B, v17.16B\n" - "and %[failed].16B, %[failed].16B, v18.16B\n" - "and %[failed].16B, %[failed].16B, v19.16B\n" - "fmov d20, x8\n" - "fmov v20.D[1], x9\n" - "fmov d21, x10\n" - "fmov v21.D[1], x11\n" - "fmov d22, x12\n" - "fmov v22.D[1], x13\n" - "fmov d23, x14\n" - "fmov v23.D[1], x15\n" - "and %[failed].16B, %[failed].16B, v20.16B\n" - "and %[failed].16B, %[failed].16B, v21.16B\n" - "and %[failed].16B, %[failed].16B, v22.16B\n" - "and %[failed].16B, %[failed].16B, v23.16B\n" - "st1 { v16.2d, v17.2d, v18.2d, v19.2d}, [%[dst]], 64\n" - "st1 { v20.2d, v21.2d, v22.2d, v23.2d}, [%[dst]], 64\n" - : "+Q" (*addr), [failed] "=&w" (failed) - : 
[wdata64] "r" (wdata64), [wdata128] "r" (wdata128), - [dst] "r" (obj_table), [loc] "r" (addr) - : "memory", "x0", "x1", "x2", "x3", "x4", "x5", "x6", "x7", - "x8", "x9", "x10", "x11", "x12", "x13", "x14", "x15", "v16", - "v17", "v18", "v19", "v20", "v21", "v22", "v23" - ); - break; - } - case 8: - { - asm volatile ( - ".cpu generic+lse\n" - "casp x0, x1, %[wdata64], %[wdata128], [%[loc]]\n" - "casp x2, x3, %[wdata64], %[wdata128], [%[loc]]\n" - "casp x4, x5, %[wdata64], %[wdata128], [%[loc]]\n" - "casp x6, x7, %[wdata64], %[wdata128], [%[loc]]\n" - "fmov d16, x0\n" - "fmov v16.D[1], x1\n" - "fmov d17, x2\n" - "fmov v17.D[1], x3\n" - "fmov d18, x4\n" - "fmov v18.D[1], x5\n" - "fmov d19, x6\n" - "fmov v19.D[1], x7\n" - "and %[failed].16B, %[failed].16B, v16.16B\n" - "and %[failed].16B, %[failed].16B, v17.16B\n" - "and %[failed].16B, %[failed].16B, v18.16B\n" - "and %[failed].16B, %[failed].16B, v19.16B\n" - "st1 { v16.2d, v17.2d, v18.2d, v19.2d}, [%[dst]], 64\n" - : "+Q" (*addr), [failed] "=&w" (failed) - : [wdata64] "r" (wdata64), [wdata128] "r" (wdata128), - [dst] "r" (obj_table), [loc] "r" (addr) - : "memory", "x0", "x1", "x2", "x3", "x4", "x5", "x6", "x7", - "v16", "v17", "v18", "v19" - ); - break; - } - case 4: - { - asm volatile ( - ".cpu generic+lse\n" - "casp x0, x1, %[wdata64], %[wdata128], [%[loc]]\n" - "casp x2, x3, %[wdata64], %[wdata128], [%[loc]]\n" - "fmov d16, x0\n" - "fmov v16.D[1], x1\n" - "fmov d17, x2\n" - "fmov v17.D[1], x3\n" - "and %[failed].16B, %[failed].16B, v16.16B\n" - "and %[failed].16B, %[failed].16B, v17.16B\n" - "st1 { v16.2d, v17.2d}, [%[dst]], 32\n" - : "+Q" (*addr), [failed] "=&w" (failed) - : [wdata64] "r" (wdata64), [wdata128] "r" (wdata128), - [dst] "r" (obj_table), [loc] "r" (addr) - : "memory", "x0", "x1", "x2", "x3", "v16", "v17" - ); - break; - } - case 2: - { - asm volatile ( - ".cpu generic+lse\n" - "casp x0, x1, %[wdata64], %[wdata128], [%[loc]]\n" - "fmov d16, x0\n" - "fmov v16.D[1], x1\n" - "and %[failed].16B, 
%[failed].16B, v16.16B\n" - "st1 { v16.2d}, [%[dst]], 16\n" - : "+Q" (*addr), [failed] "=&w" (failed) - : [wdata64] "r" (wdata64), [wdata128] "r" (wdata128), - [dst] "r" (obj_table), [loc] "r" (addr) - : "memory", "x0", "x1", "v16" - ); - break; - } - case 1: - return npa_lf_aura_op_alloc_one(wdata, addr, obj_table, 0); - } - - if (unlikely(!(vgetq_lane_u64(failed, 0) & vgetq_lane_u64(failed, 1)))) - return npa_lf_aura_op_search_alloc(wdata, addr, (void **) - ((char *)obj_table - (sizeof(uint64_t) * n)), n); - - return 0; -} - -static __rte_noinline void -otx2_npa_clear_alloc(struct rte_mempool *mp, void **obj_table, unsigned int n) -{ - unsigned int i; - - for (i = 0; i < n; i++) { - if (obj_table[i] != NULL) { - otx2_npa_enq(mp, &obj_table[i], 1); - obj_table[i] = NULL; - } - } -} - -static __rte_noinline int __rte_hot -otx2_npa_deq_arm64(struct rte_mempool *mp, void **obj_table, unsigned int n) -{ - const int64_t wdata = npa_lf_aura_handle_to_aura(mp->pool_id); - void **obj_table_bak = obj_table; - const unsigned int nfree = n; - unsigned int parts; - - int64_t * const addr = (int64_t * const) - (npa_lf_aura_handle_to_base(mp->pool_id) + - NPA_LF_AURA_OP_ALLOCX(0)); - while (n) { - parts = n > 31 ? 
32 : rte_align32prevpow2(n); - n -= parts; - if (unlikely(npa_lf_aura_op_alloc_bulk(wdata, addr, - parts, obj_table))) { - otx2_npa_clear_alloc(mp, obj_table_bak, nfree - n); - return -ENOENT; - } - obj_table += parts; - } - - return 0; -} - -#else - -static inline int __rte_hot -otx2_npa_deq(struct rte_mempool *mp, void **obj_table, unsigned int n) -{ - const int64_t wdata = npa_lf_aura_handle_to_aura(mp->pool_id); - unsigned int index; - uint64_t obj; - - int64_t * const addr = (int64_t *) - (npa_lf_aura_handle_to_base(mp->pool_id) + - NPA_LF_AURA_OP_ALLOCX(0)); - for (index = 0; index < n; index++, obj_table++) { - obj = npa_lf_aura_op_alloc_one(wdata, addr, obj_table, 0); - if (obj == 0) { - for (; index > 0; index--) { - obj_table--; - otx2_npa_enq(mp, obj_table, 1); - } - return -ENOENT; - } - *obj_table = (void *)obj; - } - - return 0; -} - -#endif - -static unsigned int -otx2_npa_get_count(const struct rte_mempool *mp) -{ - return (unsigned int)npa_lf_aura_op_available(mp->pool_id); -} - -static int -npa_lf_aura_pool_init(struct otx2_mbox *mbox, uint32_t aura_id, - struct npa_aura_s *aura, struct npa_pool_s *pool) -{ - struct npa_aq_enq_req *aura_init_req, *pool_init_req; - struct npa_aq_enq_rsp *aura_init_rsp, *pool_init_rsp; - struct otx2_mbox_dev *mdev = &mbox->dev[0]; - struct otx2_idev_cfg *idev; - int rc, off; - - idev = otx2_intra_dev_get_cfg(); - if (idev == NULL) - return -ENOMEM; - - aura_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox); - - aura_init_req->aura_id = aura_id; - aura_init_req->ctype = NPA_AQ_CTYPE_AURA; - aura_init_req->op = NPA_AQ_INSTOP_INIT; - otx2_mbox_memcpy(&aura_init_req->aura, aura, sizeof(*aura)); - - pool_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox); - - pool_init_req->aura_id = aura_id; - pool_init_req->ctype = NPA_AQ_CTYPE_POOL; - pool_init_req->op = NPA_AQ_INSTOP_INIT; - otx2_mbox_memcpy(&pool_init_req->pool, pool, sizeof(*pool)); - - otx2_mbox_msg_send(mbox, 0); - rc = otx2_mbox_wait_for_rsp(mbox, 0); - if (rc < 0) - 
return rc; - - off = mbox->rx_start + - RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN); - aura_init_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off); - off = mbox->rx_start + aura_init_rsp->hdr.next_msgoff; - pool_init_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off); - - if (rc == 2 && aura_init_rsp->hdr.rc == 0 && pool_init_rsp->hdr.rc == 0) - return 0; - else - return NPA_LF_ERR_AURA_POOL_INIT; - - if (!(idev->npa_lock_mask & BIT_ULL(aura_id))) - return 0; - - aura_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox); - aura_init_req->aura_id = aura_id; - aura_init_req->ctype = NPA_AQ_CTYPE_AURA; - aura_init_req->op = NPA_AQ_INSTOP_LOCK; - - pool_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox); - if (!pool_init_req) { - /* The shared memory buffer can be full. - * Flush it and retry - */ - otx2_mbox_msg_send(mbox, 0); - rc = otx2_mbox_wait_for_rsp(mbox, 0); - if (rc < 0) { - otx2_err("Failed to LOCK AURA context"); - return -ENOMEM; - } - - pool_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox); - if (!pool_init_req) { - otx2_err("Failed to LOCK POOL context"); - return -ENOMEM; - } - } - pool_init_req->aura_id = aura_id; - pool_init_req->ctype = NPA_AQ_CTYPE_POOL; - pool_init_req->op = NPA_AQ_INSTOP_LOCK; - - rc = otx2_mbox_process(mbox); - if (rc < 0) { - otx2_err("Failed to lock POOL ctx to NDC"); - return -ENOMEM; - } - - return 0; -} - -static int -npa_lf_aura_pool_fini(struct otx2_mbox *mbox, - uint32_t aura_id, - uint64_t aura_handle) -{ - struct npa_aq_enq_req *aura_req, *pool_req; - struct npa_aq_enq_rsp *aura_rsp, *pool_rsp; - struct otx2_mbox_dev *mdev = &mbox->dev[0]; - struct ndc_sync_op *ndc_req; - struct otx2_idev_cfg *idev; - int rc, off; - - idev = otx2_intra_dev_get_cfg(); - if (idev == NULL) - return -EINVAL; - - /* Procedure for disabling an aura/pool */ - rte_delay_us(10); - npa_lf_aura_op_alloc(aura_handle, 0); - - pool_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox); - pool_req->aura_id = aura_id; - pool_req->ctype = 
NPA_AQ_CTYPE_POOL; - pool_req->op = NPA_AQ_INSTOP_WRITE; - pool_req->pool.ena = 0; - pool_req->pool_mask.ena = ~pool_req->pool_mask.ena; - - aura_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox); - aura_req->aura_id = aura_id; - aura_req->ctype = NPA_AQ_CTYPE_AURA; - aura_req->op = NPA_AQ_INSTOP_WRITE; - aura_req->aura.ena = 0; - aura_req->aura_mask.ena = ~aura_req->aura_mask.ena; - - otx2_mbox_msg_send(mbox, 0); - rc = otx2_mbox_wait_for_rsp(mbox, 0); - if (rc < 0) - return rc; - - off = mbox->rx_start + - RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN); - pool_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off); - - off = mbox->rx_start + pool_rsp->hdr.next_msgoff; - aura_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off); - - if (rc != 2 || aura_rsp->hdr.rc != 0 || pool_rsp->hdr.rc != 0) - return NPA_LF_ERR_AURA_POOL_FINI; - - /* Sync NDC-NPA for LF */ - ndc_req = otx2_mbox_alloc_msg_ndc_sync_op(mbox); - ndc_req->npa_lf_sync = 1; - - rc = otx2_mbox_process(mbox); - if (rc) { - otx2_err("Error on NDC-NPA LF sync, rc %d", rc); - return NPA_LF_ERR_AURA_POOL_FINI; - } - - if (!(idev->npa_lock_mask & BIT_ULL(aura_id))) - return 0; - - aura_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox); - aura_req->aura_id = aura_id; - aura_req->ctype = NPA_AQ_CTYPE_AURA; - aura_req->op = NPA_AQ_INSTOP_UNLOCK; - - rc = otx2_mbox_process(mbox); - if (rc < 0) { - otx2_err("Failed to unlock AURA ctx to NDC"); - return -EINVAL; - } - - pool_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox); - pool_req->aura_id = aura_id; - pool_req->ctype = NPA_AQ_CTYPE_POOL; - pool_req->op = NPA_AQ_INSTOP_UNLOCK; - - rc = otx2_mbox_process(mbox); - if (rc < 0) { - otx2_err("Failed to unlock POOL ctx to NDC"); - return -EINVAL; - } - - return 0; -} - -static inline char* -npa_lf_stack_memzone_name(struct otx2_npa_lf *lf, int pool_id, char *name) -{ - snprintf(name, RTE_MEMZONE_NAMESIZE, "otx2_npa_stack_%x_%d", - lf->pf_func, pool_id); - - return name; -} - -static inline const struct rte_memzone * 
-npa_lf_stack_dma_alloc(struct otx2_npa_lf *lf, char *name, - int pool_id, size_t size) -{ - return rte_memzone_reserve_aligned( - npa_lf_stack_memzone_name(lf, pool_id, name), size, 0, - RTE_MEMZONE_IOVA_CONTIG, OTX2_ALIGN); -} - -static inline int -npa_lf_stack_dma_free(struct otx2_npa_lf *lf, char *name, int pool_id) -{ - const struct rte_memzone *mz; - - mz = rte_memzone_lookup(npa_lf_stack_memzone_name(lf, pool_id, name)); - if (mz == NULL) - return -EINVAL; - - return rte_memzone_free(mz); -} - -static inline int -bitmap_ctzll(uint64_t slab) -{ - if (slab == 0) - return 0; - - return __builtin_ctzll(slab); -} - -static int -npa_lf_aura_pool_pair_alloc(struct otx2_npa_lf *lf, const uint32_t block_size, - const uint32_t block_count, struct npa_aura_s *aura, - struct npa_pool_s *pool, uint64_t *aura_handle) -{ - int rc, aura_id, pool_id, stack_size, alloc_size; - char name[RTE_MEMZONE_NAMESIZE]; - const struct rte_memzone *mz; - uint64_t slab; - uint32_t pos; - - /* Sanity check */ - if (!lf || !block_size || !block_count || - !pool || !aura || !aura_handle) - return NPA_LF_ERR_PARAM; - - /* Block size should be cache line aligned and in range of 128B-128KB */ - if (block_size % OTX2_ALIGN || block_size < 128 || - block_size > 128 * 1024) - return NPA_LF_ERR_INVALID_BLOCK_SZ; - - pos = slab = 0; - /* Scan from the beginning */ - __rte_bitmap_scan_init(lf->npa_bmp); - /* Scan bitmap to get the free pool */ - rc = rte_bitmap_scan(lf->npa_bmp, &pos, &slab); - /* Empty bitmap */ - if (rc == 0) { - otx2_err("Mempools exhausted, 'max_pools' devargs to increase"); - return -ERANGE; - } - - /* Get aura_id from resource bitmap */ - aura_id = pos + bitmap_ctzll(slab); - /* Mark pool as reserved */ - rte_bitmap_clear(lf->npa_bmp, aura_id); - - /* Configuration based on each aura has separate pool(aura-pool pair) */ - pool_id = aura_id; - rc = (aura_id < 0 || pool_id >= (int)lf->nr_pools || aura_id >= - (int)BIT_ULL(6 + lf->aura_sz)) ? 
NPA_LF_ERR_AURA_ID_ALLOC : 0; - if (rc) - goto exit; - - /* Allocate stack memory */ - stack_size = (block_count + lf->stack_pg_ptrs - 1) / lf->stack_pg_ptrs; - alloc_size = stack_size * lf->stack_pg_bytes; - - mz = npa_lf_stack_dma_alloc(lf, name, pool_id, alloc_size); - if (mz == NULL) { - rc = -ENOMEM; - goto aura_res_put; - } - - /* Update aura fields */ - aura->pool_addr = pool_id;/* AF will translate to associated poolctx */ - aura->ena = 1; - aura->shift = rte_log2_u32(block_count); - aura->shift = aura->shift < 8 ? 0 : aura->shift - 8; - aura->limit = block_count; - aura->pool_caching = 1; - aura->err_int_ena = BIT(NPA_AURA_ERR_INT_AURA_ADD_OVER); - aura->err_int_ena |= BIT(NPA_AURA_ERR_INT_AURA_ADD_UNDER); - aura->err_int_ena |= BIT(NPA_AURA_ERR_INT_AURA_FREE_UNDER); - aura->err_int_ena |= BIT(NPA_AURA_ERR_INT_POOL_DIS); - /* Many to one reduction */ - aura->err_qint_idx = aura_id % lf->qints; - - /* Update pool fields */ - pool->stack_base = mz->iova; - pool->ena = 1; - pool->buf_size = block_size / OTX2_ALIGN; - pool->stack_max_pages = stack_size; - pool->shift = rte_log2_u32(block_count); - pool->shift = pool->shift < 8 ? 
0 : pool->shift - 8; - pool->ptr_start = 0; - pool->ptr_end = ~0; - pool->stack_caching = 1; - pool->err_int_ena = BIT(NPA_POOL_ERR_INT_OVFLS); - pool->err_int_ena |= BIT(NPA_POOL_ERR_INT_RANGE); - pool->err_int_ena |= BIT(NPA_POOL_ERR_INT_PERR); - - /* Many to one reduction */ - pool->err_qint_idx = pool_id % lf->qints; - - /* Issue AURA_INIT and POOL_INIT op */ - rc = npa_lf_aura_pool_init(lf->mbox, aura_id, aura, pool); - if (rc) - goto stack_mem_free; - - *aura_handle = npa_lf_aura_handle_gen(aura_id, lf->base); - - /* Update aura count */ - npa_lf_aura_op_cnt_set(*aura_handle, 0, block_count); - /* Read it back to make sure aura count is updated */ - npa_lf_aura_op_cnt_get(*aura_handle); - - return 0; - -stack_mem_free: - rte_memzone_free(mz); -aura_res_put: - rte_bitmap_set(lf->npa_bmp, aura_id); -exit: - return rc; -} - -static int -npa_lf_aura_pool_pair_free(struct otx2_npa_lf *lf, uint64_t aura_handle) -{ - char name[RTE_MEMZONE_NAMESIZE]; - int aura_id, pool_id, rc; - - if (!lf || !aura_handle) - return NPA_LF_ERR_PARAM; - - aura_id = pool_id = npa_lf_aura_handle_to_aura(aura_handle); - rc = npa_lf_aura_pool_fini(lf->mbox, aura_id, aura_handle); - rc |= npa_lf_stack_dma_free(lf, name, pool_id); - - rte_bitmap_set(lf->npa_bmp, aura_id); - - return rc; -} - -static int -npa_lf_aura_range_update_check(uint64_t aura_handle) -{ - uint64_t aura_id = npa_lf_aura_handle_to_aura(aura_handle); - struct otx2_npa_lf *lf = otx2_npa_lf_obj_get(); - struct npa_aura_lim *lim = lf->aura_lim; - __otx2_io struct npa_pool_s *pool; - struct npa_aq_enq_req *req; - struct npa_aq_enq_rsp *rsp; - int rc; - - req = otx2_mbox_alloc_msg_npa_aq_enq(lf->mbox); - - req->aura_id = aura_id; - req->ctype = NPA_AQ_CTYPE_POOL; - req->op = NPA_AQ_INSTOP_READ; - - rc = otx2_mbox_process_msg(lf->mbox, (void *)&rsp); - if (rc) { - otx2_err("Failed to get pool(0x%"PRIx64") context", aura_id); - return rc; - } - - pool = &rsp->pool; - - if (lim[aura_id].ptr_start != pool->ptr_start || - 
lim[aura_id].ptr_end != pool->ptr_end) { - otx2_err("Range update failed on pool(0x%"PRIx64")", aura_id); - return -ERANGE; - } - - return 0; -} - -static int -otx2_npa_alloc(struct rte_mempool *mp) -{ - uint32_t block_size, block_count; - uint64_t aura_handle = 0; - struct otx2_npa_lf *lf; - struct npa_aura_s aura; - struct npa_pool_s pool; - size_t padding; - int rc; - - lf = otx2_npa_lf_obj_get(); - if (lf == NULL) { - rc = -EINVAL; - goto error; - } - - block_size = mp->elt_size + mp->header_size + mp->trailer_size; - /* - * OCTEON TX2 has 8 sets, 41 ways L1D cache, VA<9:7> bits dictate - * the set selection. - * Add additional padding to ensure that the element size always - * occupies odd number of cachelines to ensure even distribution - * of elements among L1D cache sets. - */ - padding = ((block_size / RTE_CACHE_LINE_SIZE) % 2) ? 0 : - RTE_CACHE_LINE_SIZE; - mp->trailer_size += padding; - block_size += padding; - - block_count = mp->size; - - if (block_size % OTX2_ALIGN != 0) { - otx2_err("Block size should be multiple of 128B"); - rc = -ERANGE; - goto error; - } - - memset(&aura, 0, sizeof(struct npa_aura_s)); - memset(&pool, 0, sizeof(struct npa_pool_s)); - pool.nat_align = 1; - pool.buf_offset = 1; - - if ((uint32_t)pool.buf_offset * OTX2_ALIGN != mp->header_size) { - otx2_err("Unsupported mp->header_size=%d", mp->header_size); - rc = -EINVAL; - goto error; - } - - /* Use driver specific mp->pool_config to override aura config */ - if (mp->pool_config != NULL) - memcpy(&aura, mp->pool_config, sizeof(struct npa_aura_s)); - - rc = npa_lf_aura_pool_pair_alloc(lf, block_size, block_count, - &aura, &pool, &aura_handle); - if (rc) { - otx2_err("Failed to alloc pool or aura rc=%d", rc); - goto error; - } - - /* Store aura_handle for future queue operations */ - mp->pool_id = aura_handle; - otx2_npa_dbg("lf=%p block_sz=%d block_count=%d aura_handle=0x%"PRIx64, - lf, block_size, block_count, aura_handle); - - /* Just hold the reference of the object */ - 
otx2_npa_lf_obj_ref(); - return 0; -error: - return rc; -} - -static void -otx2_npa_free(struct rte_mempool *mp) -{ - struct otx2_npa_lf *lf = otx2_npa_lf_obj_get(); - int rc = 0; - - otx2_npa_dbg("lf=%p aura_handle=0x%"PRIx64, lf, mp->pool_id); - if (lf != NULL) - rc = npa_lf_aura_pool_pair_free(lf, mp->pool_id); - - if (rc) - otx2_err("Failed to free pool or aura rc=%d", rc); - - /* Release the reference of npalf */ - otx2_npa_lf_fini(); -} - -static ssize_t -otx2_npa_calc_mem_size(const struct rte_mempool *mp, uint32_t obj_num, - uint32_t pg_shift, size_t *min_chunk_size, size_t *align) -{ - size_t total_elt_sz; - - /* Need space for one more obj on each chunk to fulfill - * alignment requirements. - */ - total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size; - return rte_mempool_op_calc_mem_size_helper(mp, obj_num, pg_shift, - total_elt_sz, min_chunk_size, - align); -} - -static uint8_t -otx2_npa_l1d_way_set_get(uint64_t iova) -{ - return (iova >> rte_log2_u32(RTE_CACHE_LINE_SIZE)) & 0x7; -} - -static int -otx2_npa_populate(struct rte_mempool *mp, unsigned int max_objs, void *vaddr, - rte_iova_t iova, size_t len, - rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg) -{ -#define OTX2_L1D_NB_SETS 8 - uint64_t distribution[OTX2_L1D_NB_SETS]; - rte_iova_t start_iova; - size_t total_elt_sz; - uint8_t set; - size_t off; - int i; - - if (iova == RTE_BAD_IOVA) - return -EINVAL; - - total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size; - - /* Align object start address to a multiple of total_elt_sz */ - off = total_elt_sz - ((((uintptr_t)vaddr - 1) % total_elt_sz) + 1); - - if (len < off) - return -EINVAL; - - - vaddr = (char *)vaddr + off; - iova += off; - len -= off; - - memset(distribution, 0, sizeof(uint64_t) * OTX2_L1D_NB_SETS); - start_iova = iova; - while (start_iova < iova + len) { - set = otx2_npa_l1d_way_set_get(start_iova + mp->header_size); - distribution[set]++; - start_iova += total_elt_sz; - } - - otx2_npa_dbg("iova %"PRIx64", 
aligned iova %"PRIx64"", iova - off, - iova); - otx2_npa_dbg("length %"PRIu64", aligned length %"PRIu64"", - (uint64_t)(len + off), (uint64_t)len); - otx2_npa_dbg("element size %"PRIu64"", (uint64_t)total_elt_sz); - otx2_npa_dbg("requested objects %"PRIu64", possible objects %"PRIu64"", - (uint64_t)max_objs, (uint64_t)(len / total_elt_sz)); - otx2_npa_dbg("L1D set distribution :"); - for (i = 0; i < OTX2_L1D_NB_SETS; i++) - otx2_npa_dbg("set[%d] : objects : %"PRIu64"", i, - distribution[i]); - - npa_lf_aura_op_range_set(mp->pool_id, iova, iova + len); - - if (npa_lf_aura_range_update_check(mp->pool_id) < 0) - return -EBUSY; - - return rte_mempool_op_populate_helper(mp, - RTE_MEMPOOL_POPULATE_F_ALIGN_OBJ, - max_objs, vaddr, iova, len, - obj_cb, obj_cb_arg); -} - -static struct rte_mempool_ops otx2_npa_ops = { - .name = "octeontx2_npa", - .alloc = otx2_npa_alloc, - .free = otx2_npa_free, - .enqueue = otx2_npa_enq, - .get_count = otx2_npa_get_count, - .calc_mem_size = otx2_npa_calc_mem_size, - .populate = otx2_npa_populate, -#if defined(RTE_ARCH_ARM64) - .dequeue = otx2_npa_deq_arm64, -#else - .dequeue = otx2_npa_deq, -#endif -}; - -RTE_MEMPOOL_REGISTER_OPS(otx2_npa_ops); diff --git a/drivers/mempool/octeontx2/version.map b/drivers/mempool/octeontx2/version.map deleted file mode 100644 index e6887ceb8f..0000000000 --- a/drivers/mempool/octeontx2/version.map +++ /dev/null @@ -1,8 +0,0 @@ -INTERNAL { - global: - - otx2_npa_lf_fini; - otx2_npa_lf_init; - - local: *; -}; diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c index f8f3d3895e..d34bc6898f 100644 --- a/drivers/net/cnxk/cn9k_ethdev.c +++ b/drivers/net/cnxk/cn9k_ethdev.c @@ -579,6 +579,21 @@ cn9k_nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev) } static const struct rte_pci_id cn9k_pci_nix_map[] = { + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KA, PCI_DEVID_CNXK_RVU_PF), + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KB, PCI_DEVID_CNXK_RVU_PF), + 
CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KC, PCI_DEVID_CNXK_RVU_PF), + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KD, PCI_DEVID_CNXK_RVU_PF), + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KE, PCI_DEVID_CNXK_RVU_PF), + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KA, PCI_DEVID_CNXK_RVU_VF), + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KB, PCI_DEVID_CNXK_RVU_VF), + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KC, PCI_DEVID_CNXK_RVU_VF), + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KD, PCI_DEVID_CNXK_RVU_VF), + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KE, PCI_DEVID_CNXK_RVU_VF), + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KA, PCI_DEVID_CNXK_RVU_AF_VF), + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KB, PCI_DEVID_CNXK_RVU_AF_VF), + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KC, PCI_DEVID_CNXK_RVU_AF_VF), + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KD, PCI_DEVID_CNXK_RVU_AF_VF), + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KE, PCI_DEVID_CNXK_RVU_AF_VF), { .vendor_id = 0, }, diff --git a/drivers/net/meson.build b/drivers/net/meson.build index 2355d1cde8..e35652fe63 100644 --- a/drivers/net/meson.build +++ b/drivers/net/meson.build @@ -45,7 +45,6 @@ drivers = [ 'ngbe', 'null', 'octeontx', - 'octeontx2', 'octeontx_ep', 'pcap', 'pfe', diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build deleted file mode 100644 index ab15844cbc..0000000000 --- a/drivers/net/octeontx2/meson.build +++ /dev/null @@ -1,47 +0,0 @@ -# SPDX-License-Identifier: BSD-3-Clause -# Copyright(C) 2019 Marvell International Ltd. 
-# - -if not is_linux or not dpdk_conf.get('RTE_ARCH_64') - build = false - reason = 'only supported on 64-bit Linux' - subdir_done() -endif - -sources = files( - 'otx2_rx.c', - 'otx2_tx.c', - 'otx2_tm.c', - 'otx2_rss.c', - 'otx2_mac.c', - 'otx2_ptp.c', - 'otx2_flow.c', - 'otx2_link.c', - 'otx2_vlan.c', - 'otx2_stats.c', - 'otx2_mcast.c', - 'otx2_lookup.c', - 'otx2_ethdev.c', - 'otx2_flow_ctrl.c', - 'otx2_flow_dump.c', - 'otx2_flow_parse.c', - 'otx2_flow_utils.c', - 'otx2_ethdev_irq.c', - 'otx2_ethdev_ops.c', - 'otx2_ethdev_sec.c', - 'otx2_ethdev_debug.c', - 'otx2_ethdev_devargs.c', -) - -deps += ['bus_pci', 'cryptodev', 'eventdev', 'security'] -deps += ['common_octeontx2', 'mempool_octeontx2'] - -extra_flags = ['-flax-vector-conversions'] -foreach flag: extra_flags - if cc.has_argument(flag) - cflags += flag - endif -endforeach - -includes += include_directories('../../common/cpt') -includes += include_directories('../../crypto/octeontx2') diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c deleted file mode 100644 index 4f1c0b98de..0000000000 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ /dev/null @@ -1,2814 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2019 Marvell International Ltd. 
- */ - -#include - -#include -#include -#include -#include -#include -#include - -#include "otx2_ethdev.h" -#include "otx2_ethdev_sec.h" - -static inline uint64_t -nix_get_rx_offload_capa(struct otx2_eth_dev *dev) -{ - uint64_t capa = NIX_RX_OFFLOAD_CAPA; - - if (otx2_dev_is_vf(dev) || - dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_HIGIG) - capa &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP; - - return capa; -} - -static inline uint64_t -nix_get_tx_offload_capa(struct otx2_eth_dev *dev) -{ - uint64_t capa = NIX_TX_OFFLOAD_CAPA; - - /* TSO not supported for earlier chip revisions */ - if (otx2_dev_is_96xx_A0(dev) || otx2_dev_is_95xx_Ax(dev)) - capa &= ~(RTE_ETH_TX_OFFLOAD_TCP_TSO | - RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | - RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | - RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO); - return capa; -} - -static const struct otx2_dev_ops otx2_dev_ops = { - .link_status_update = otx2_eth_dev_link_status_update, - .ptp_info_update = otx2_eth_dev_ptp_info_update, - .link_status_get = otx2_eth_dev_link_status_get, -}; - -static int -nix_lf_alloc(struct otx2_eth_dev *dev, uint32_t nb_rxq, uint32_t nb_txq) -{ - struct otx2_mbox *mbox = dev->mbox; - struct nix_lf_alloc_req *req; - struct nix_lf_alloc_rsp *rsp; - int rc; - - req = otx2_mbox_alloc_msg_nix_lf_alloc(mbox); - req->rq_cnt = nb_rxq; - req->sq_cnt = nb_txq; - req->cq_cnt = nb_rxq; - /* XQE_SZ should be in Sync with NIX_CQ_ENTRY_SZ */ - RTE_BUILD_BUG_ON(NIX_CQ_ENTRY_SZ != 128); - req->xqe_sz = NIX_XQESZ_W16; - req->rss_sz = dev->rss_info.rss_size; - req->rss_grps = NIX_RSS_GRPS; - req->npa_func = otx2_npa_pf_func_get(); - req->sso_func = otx2_sso_pf_func_get(); - req->rx_cfg = BIT_ULL(35 /* DIS_APAD */); - if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | - RTE_ETH_RX_OFFLOAD_UDP_CKSUM)) { - req->rx_cfg |= BIT_ULL(37 /* CSUM_OL4 */); - req->rx_cfg |= BIT_ULL(36 /* CSUM_IL4 */); - } - req->rx_cfg |= (BIT_ULL(32 /* DROP_RE */) | - BIT_ULL(33 /* Outer L2 Length */) | - BIT_ULL(38 /* Inner L4 UDP Length */) | - 
BIT_ULL(39 /* Inner L3 Length */) | - BIT_ULL(40 /* Outer L4 UDP Length */) | - BIT_ULL(41 /* Outer L3 Length */)); - - if (dev->rss_tag_as_xor == 0) - req->flags = NIX_LF_RSS_TAG_LSB_AS_ADDER; - - rc = otx2_mbox_process_msg(mbox, (void *)&rsp); - if (rc) - return rc; - - dev->sqb_size = rsp->sqb_size; - dev->tx_chan_base = rsp->tx_chan_base; - dev->rx_chan_base = rsp->rx_chan_base; - dev->rx_chan_cnt = rsp->rx_chan_cnt; - dev->tx_chan_cnt = rsp->tx_chan_cnt; - dev->lso_tsov4_idx = rsp->lso_tsov4_idx; - dev->lso_tsov6_idx = rsp->lso_tsov6_idx; - dev->lf_tx_stats = rsp->lf_tx_stats; - dev->lf_rx_stats = rsp->lf_rx_stats; - dev->cints = rsp->cints; - dev->qints = rsp->qints; - dev->npc_flow.channel = dev->rx_chan_base; - dev->ptp_en = rsp->hw_rx_tstamp_en; - - return 0; -} - -static int -nix_lf_switch_header_type_enable(struct otx2_eth_dev *dev, bool enable) -{ - struct otx2_mbox *mbox = dev->mbox; - struct npc_set_pkind *req; - struct msg_resp *rsp; - int rc; - - if (dev->npc_flow.switch_header_type == 0) - return 0; - - /* Notify AF about higig2 config */ - req = otx2_mbox_alloc_msg_npc_set_pkind(mbox); - req->mode = dev->npc_flow.switch_header_type; - if (dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_CH_LEN_90B) { - req->mode = OTX2_PRIV_FLAGS_CUSTOM; - req->pkind = NPC_RX_CHLEN90B_PKIND; - } else if (dev->npc_flow.switch_header_type == - OTX2_PRIV_FLAGS_CH_LEN_24B) { - req->mode = OTX2_PRIV_FLAGS_CUSTOM; - req->pkind = NPC_RX_CHLEN24B_PKIND; - } else if (dev->npc_flow.switch_header_type == - OTX2_PRIV_FLAGS_EXDSA) { - req->mode = OTX2_PRIV_FLAGS_CUSTOM; - req->pkind = NPC_RX_EXDSA_PKIND; - } else if (dev->npc_flow.switch_header_type == - OTX2_PRIV_FLAGS_VLAN_EXDSA) { - req->mode = OTX2_PRIV_FLAGS_CUSTOM; - req->pkind = NPC_RX_VLAN_EXDSA_PKIND; - } - - if (enable == 0) - req->mode = OTX2_PRIV_FLAGS_DEFAULT; - req->dir = PKIND_RX; - rc = otx2_mbox_process_msg(mbox, (void *)&rsp); - if (rc) - return rc; - req = otx2_mbox_alloc_msg_npc_set_pkind(mbox); - 
req->mode = dev->npc_flow.switch_header_type; - if (dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_CH_LEN_90B || - dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_CH_LEN_24B) - req->mode = OTX2_PRIV_FLAGS_DEFAULT; - - if (enable == 0) - req->mode = OTX2_PRIV_FLAGS_DEFAULT; - req->dir = PKIND_TX; - return otx2_mbox_process_msg(mbox, (void *)&rsp); -} - -static int -nix_lf_free(struct otx2_eth_dev *dev) -{ - struct otx2_mbox *mbox = dev->mbox; - struct nix_lf_free_req *req; - struct ndc_sync_op *ndc_req; - int rc; - - /* Sync NDC-NIX for LF */ - ndc_req = otx2_mbox_alloc_msg_ndc_sync_op(mbox); - ndc_req->nix_lf_tx_sync = 1; - ndc_req->nix_lf_rx_sync = 1; - rc = otx2_mbox_process(mbox); - if (rc) - otx2_err("Error on NDC-NIX-[TX, RX] LF sync, rc %d", rc); - - req = otx2_mbox_alloc_msg_nix_lf_free(mbox); - /* Let AF driver free all this nix lf's - * NPC entries allocated using NPC MBOX. - */ - req->flags = 0; - - return otx2_mbox_process(mbox); -} - -int -otx2_cgx_rxtx_start(struct otx2_eth_dev *dev) -{ - struct otx2_mbox *mbox = dev->mbox; - - if (otx2_dev_is_vf_or_sdp(dev)) - return 0; - - otx2_mbox_alloc_msg_cgx_start_rxtx(mbox); - - return otx2_mbox_process(mbox); -} - -int -otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev) -{ - struct otx2_mbox *mbox = dev->mbox; - - if (otx2_dev_is_vf_or_sdp(dev)) - return 0; - - otx2_mbox_alloc_msg_cgx_stop_rxtx(mbox); - - return otx2_mbox_process(mbox); -} - -static int -npc_rx_enable(struct otx2_eth_dev *dev) -{ - struct otx2_mbox *mbox = dev->mbox; - - otx2_mbox_alloc_msg_nix_lf_start_rx(mbox); - - return otx2_mbox_process(mbox); -} - -static int -npc_rx_disable(struct otx2_eth_dev *dev) -{ - struct otx2_mbox *mbox = dev->mbox; - - otx2_mbox_alloc_msg_nix_lf_stop_rx(mbox); - - return otx2_mbox_process(mbox); -} - -static int -nix_cgx_start_link_event(struct otx2_eth_dev *dev) -{ - struct otx2_mbox *mbox = dev->mbox; - - if (otx2_dev_is_vf_or_sdp(dev)) - return 0; - - otx2_mbox_alloc_msg_cgx_start_linkevents(mbox); - - 
return otx2_mbox_process(mbox); -} - -static int -cgx_intlbk_enable(struct otx2_eth_dev *dev, bool en) -{ - struct otx2_mbox *mbox = dev->mbox; - - if (en && otx2_dev_is_vf_or_sdp(dev)) - return -ENOTSUP; - - if (en) - otx2_mbox_alloc_msg_cgx_intlbk_enable(mbox); - else - otx2_mbox_alloc_msg_cgx_intlbk_disable(mbox); - - return otx2_mbox_process(mbox); -} - -static int -nix_cgx_stop_link_event(struct otx2_eth_dev *dev) -{ - struct otx2_mbox *mbox = dev->mbox; - - if (otx2_dev_is_vf_or_sdp(dev)) - return 0; - - otx2_mbox_alloc_msg_cgx_stop_linkevents(mbox); - - return otx2_mbox_process(mbox); -} - -static inline void -nix_rx_queue_reset(struct otx2_eth_rxq *rxq) -{ - rxq->head = 0; - rxq->available = 0; -} - -static inline uint32_t -nix_qsize_to_val(enum nix_q_size_e qsize) -{ - return (16UL << (qsize * 2)); -} - -static inline enum nix_q_size_e -nix_qsize_clampup_get(struct otx2_eth_dev *dev, uint32_t val) -{ - int i; - - if (otx2_ethdev_fixup_is_min_4k_q(dev)) - i = nix_q_size_4K; - else - i = nix_q_size_16; - - for (; i < nix_q_size_max; i++) - if (val <= nix_qsize_to_val(i)) - break; - - if (i >= nix_q_size_max) - i = nix_q_size_max - 1; - - return i; -} - -static int -nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev, - uint16_t qid, struct otx2_eth_rxq *rxq, struct rte_mempool *mp) -{ - struct otx2_mbox *mbox = dev->mbox; - const struct rte_memzone *rz; - uint32_t ring_size, cq_size; - struct nix_aq_enq_req *aq; - uint16_t first_skip; - int rc; - - cq_size = rxq->qlen; - ring_size = cq_size * NIX_CQ_ENTRY_SZ; - rz = rte_eth_dma_zone_reserve(eth_dev, "cq", qid, ring_size, - NIX_CQ_ALIGN, dev->node); - if (rz == NULL) { - otx2_err("Failed to allocate mem for cq hw ring"); - return -ENOMEM; - } - memset(rz->addr, 0, rz->len); - rxq->desc = (uintptr_t)rz->addr; - rxq->qmask = cq_size - 1; - - aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); - aq->qidx = qid; - aq->ctype = NIX_AQ_CTYPE_CQ; - aq->op = NIX_AQ_INSTOP_INIT; - - aq->cq.ena = 1; - 
aq->cq.caching = 1; - aq->cq.qsize = rxq->qsize; - aq->cq.base = rz->iova; - aq->cq.avg_level = 0xff; - aq->cq.cq_err_int_ena = BIT(NIX_CQERRINT_CQE_FAULT); - aq->cq.cq_err_int_ena |= BIT(NIX_CQERRINT_DOOR_ERR); - - /* Many to one reduction */ - aq->cq.qint_idx = qid % dev->qints; - /* Map CQ0 [RQ0] to CINT0 and so on till max 64 irqs */ - aq->cq.cint_idx = qid; - - if (otx2_ethdev_fixup_is_limit_cq_full(dev)) { - const float rx_cq_skid = NIX_CQ_FULL_ERRATA_SKID; - uint16_t min_rx_drop; - - min_rx_drop = ceil(rx_cq_skid / (float)cq_size); - aq->cq.drop = min_rx_drop; - aq->cq.drop_ena = 1; - rxq->cq_drop = min_rx_drop; - } else { - rxq->cq_drop = NIX_CQ_THRESH_LEVEL; - aq->cq.drop = rxq->cq_drop; - aq->cq.drop_ena = 1; - } - - /* TX pause frames enable flowctrl on RX side */ - if (dev->fc_info.tx_pause) { - /* Single bpid is allocated for all rx channels for now */ - aq->cq.bpid = dev->fc_info.bpid[0]; - aq->cq.bp = rxq->cq_drop; - aq->cq.bp_ena = 1; - } - - rc = otx2_mbox_process(mbox); - if (rc) { - otx2_err("Failed to init cq context"); - return rc; - } - - aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); - aq->qidx = qid; - aq->ctype = NIX_AQ_CTYPE_RQ; - aq->op = NIX_AQ_INSTOP_INIT; - - aq->rq.sso_ena = 0; - - if (rxq->offloads & RTE_ETH_RX_OFFLOAD_SECURITY) - aq->rq.ipsech_ena = 1; - - aq->rq.cq = qid; /* RQ to CQ 1:1 mapped */ - aq->rq.spb_ena = 0; - aq->rq.lpb_aura = npa_lf_aura_handle_to_aura(mp->pool_id); - first_skip = (sizeof(struct rte_mbuf)); - first_skip += RTE_PKTMBUF_HEADROOM; - first_skip += rte_pktmbuf_priv_size(mp); - rxq->data_off = first_skip; - - first_skip /= 8; /* Expressed in number of dwords */ - aq->rq.first_skip = first_skip; - aq->rq.later_skip = (sizeof(struct rte_mbuf) / 8); - aq->rq.flow_tagw = 32; /* 32-bits */ - aq->rq.lpb_sizem1 = mp->elt_size / 8; - aq->rq.lpb_sizem1 -= 1; /* Expressed in size minus one */ - aq->rq.ena = 1; - aq->rq.pb_caching = 0x2; /* First cache aligned block to LLC */ - aq->rq.xqe_imm_size = 0; /* No pkt data copy 
to CQE */ - aq->rq.rq_int_ena = 0; - /* Many to one reduction */ - aq->rq.qint_idx = qid % dev->qints; - - aq->rq.xqe_drop_ena = 1; - - rc = otx2_mbox_process(mbox); - if (rc) { - otx2_err("Failed to init rq context"); - return rc; - } - - if (dev->lock_rx_ctx) { - aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); - aq->qidx = qid; - aq->ctype = NIX_AQ_CTYPE_CQ; - aq->op = NIX_AQ_INSTOP_LOCK; - - aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); - if (!aq) { - /* The shared memory buffer can be full. - * Flush it and retry - */ - otx2_mbox_msg_send(mbox, 0); - rc = otx2_mbox_wait_for_rsp(mbox, 0); - if (rc < 0) { - otx2_err("Failed to LOCK cq context"); - return rc; - } - - aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); - if (!aq) { - otx2_err("Failed to LOCK rq context"); - return -ENOMEM; - } - } - aq->qidx = qid; - aq->ctype = NIX_AQ_CTYPE_RQ; - aq->op = NIX_AQ_INSTOP_LOCK; - rc = otx2_mbox_process(mbox); - if (rc < 0) { - otx2_err("Failed to LOCK rq context"); - return rc; - } - } - - return 0; -} - -static int -nix_rq_enb_dis(struct rte_eth_dev *eth_dev, - struct otx2_eth_rxq *rxq, const bool enb) -{ - struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); - struct otx2_mbox *mbox = dev->mbox; - struct nix_aq_enq_req *aq; - - /* Pkts will be dropped silently if RQ is disabled */ - aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); - aq->qidx = rxq->rq; - aq->ctype = NIX_AQ_CTYPE_RQ; - aq->op = NIX_AQ_INSTOP_WRITE; - - aq->rq.ena = enb; - aq->rq_mask.ena = ~(aq->rq_mask.ena); - - return otx2_mbox_process(mbox); -} - -static int -nix_cq_rq_uninit(struct rte_eth_dev *eth_dev, struct otx2_eth_rxq *rxq) -{ - struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); - struct otx2_mbox *mbox = dev->mbox; - struct nix_aq_enq_req *aq; - int rc; - - /* RQ is already disabled */ - /* Disable CQ */ - aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); - aq->qidx = rxq->rq; - aq->ctype = NIX_AQ_CTYPE_CQ; - aq->op = NIX_AQ_INSTOP_WRITE; - - aq->cq.ena = 0; - aq->cq_mask.ena = ~(aq->cq_mask.ena); - - rc = 
otx2_mbox_process(mbox); - if (rc < 0) { - otx2_err("Failed to disable cq context"); - return rc; - } - - if (dev->lock_rx_ctx) { - aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); - aq->qidx = rxq->rq; - aq->ctype = NIX_AQ_CTYPE_CQ; - aq->op = NIX_AQ_INSTOP_UNLOCK; - - aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); - if (!aq) { - /* The shared memory buffer can be full. - * Flush it and retry - */ - otx2_mbox_msg_send(mbox, 0); - rc = otx2_mbox_wait_for_rsp(mbox, 0); - if (rc < 0) { - otx2_err("Failed to UNLOCK cq context"); - return rc; - } - - aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); - if (!aq) { - otx2_err("Failed to UNLOCK rq context"); - return -ENOMEM; - } - } - aq->qidx = rxq->rq; - aq->ctype = NIX_AQ_CTYPE_RQ; - aq->op = NIX_AQ_INSTOP_UNLOCK; - rc = otx2_mbox_process(mbox); - if (rc < 0) { - otx2_err("Failed to UNLOCK rq context"); - return rc; - } - } - - return 0; -} - -static inline int -nix_get_data_off(struct otx2_eth_dev *dev) -{ - return otx2_ethdev_is_ptp_en(dev) ? NIX_TIMESYNC_RX_OFFSET : 0; -} -