From patchwork Tue Jul 18 14:44:19 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeremy Spewock X-Patchwork-Id: 129631 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from postal.iol.unh.edu (postal.iol.unh.edu.
[2606:4100:3880:1234::84]) by smtp-relay.gmail.com with ESMTPS id b16-20020ab06e90000000b0078f11b40758sm243538uav.14.2023.07.18.07.48.37 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Tue, 18 Jul 2023 07:48:37 -0700 (PDT) X-Relaying-Domain: iol.unh.edu Received: from iol.unh.edu (unknown [IPv6:2606:4100:3880:1271:90f9:1b64:f6e6:867f]) by postal.iol.unh.edu (Postfix) with ESMTP id 5BB63605246B; Tue, 18 Jul 2023 10:48:37 -0400 (EDT) From: jspewock@iol.unh.edu To: Honnappa.Nagarahalli@arm.com, thomas@monjalon.net, lijuan.tu@intel.com, wathsala.vithanage@arm.com, probb@iol.unh.edu, juraj.linkes@pantheon.tech Cc: dev@dpdk.org, Jeremy Spewock Subject: [PATCH v9 1/1] dts: add smoke tests Date: Tue, 18 Jul 2023 10:44:19 -0400 Message-ID: <20230718144659.14328-4-jspewock@iol.unh.edu> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230718144659.14328-3-jspewock@iol.unh.edu> References: <20230718144659.14328-3-jspewock@iol.unh.edu> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Jeremy Spewock Adds a new test suite for running smoke tests that verify general configuration aspects of the system under test. If any of these tests fail, the DTS execution terminates as part of a "fail-fast" model. Signed-off-by: Jeremy Spewock --- dts/conf.yaml | 17 +- dts/framework/config/__init__.py | 79 ++++++-- dts/framework/config/conf_yaml_schema.json | 142 ++++++++++++++- dts/framework/dts.py | 84 ++++++--- dts/framework/exception.py | 12 ++ dts/framework/remote_session/__init__.py | 13 +- dts/framework/remote_session/linux_session.py | 3 +- dts/framework/remote_session/os_session.py | 49 ++++- dts/framework/remote_session/posix_session.py | 29 ++- .../remote_session/remote/__init__.py | 10 ++ .../remote/interactive_remote_session.py | 82 +++++++++ .../remote/interactive_shell.py | 131 ++++++++++++++ .../remote_session/remote/testpmd_shell.py | 49 +++++ dts/framework/test_result.py | 24 ++- dts/framework/test_suite.py | 10 +- dts/framework/testbed_model/node.py | 43 ++++- dts/framework/testbed_model/sut_node.py | 169 +++++++++++++----- dts/framework/utils.py | 3 + dts/tests/TestSuite_smoke_tests.py | 114 ++++++++++++ 19 files changed, 969 insertions(+), 94 deletions(-) create mode 100644 dts/framework/remote_session/remote/interactive_remote_session.py create mode 100644 dts/framework/remote_session/remote/interactive_shell.py create mode 100644 dts/framework/remote_session/remote/testpmd_shell.py create mode 100644 dts/tests/TestSuite_smoke_tests.py diff --git a/dts/conf.yaml b/dts/conf.yaml index 129801d87c..0825d958a6 100644 --- a/dts/conf.yaml +++ b/dts/conf.yaml @@ -10,9 +10,13 @@ executions: compiler_wrapper: ccache perf: false func: true + skip_smoke_tests: false # optional flag that allows you to skip smoke tests test_suites: - hello_world - system_under_test: "SUT 1" + system_under_test: + node_name: "SUT 1" + vdevs: # optional; if removed, vdevs won't be used in the execution + - "crypto_openssl" nodes: - name: "SUT 1" hostname: sut1.change.me.localhost @@ -25,3 +29,14 @@ nodes: hugepages: # optional; if removed, will use system hugepage configuration amount: 256 force_first_numa: false + ports: + - pci: "0000:00:08.0" + os_driver_for_dpdk: vfio-pci # OS driver that DPDK will use + os_driver: i40e + peer_node: "TG 1" + peer_pci: "0000:00:08.0" + - pci: "0000:00:08.1" + os_driver_for_dpdk: vfio-pci + 
os_driver: i40e + peer_node: "TG 1" + peer_pci: "0000:00:08.1" diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py index a4b73483e6..aa7ff358a2 100644 --- a/dts/framework/config/__init__.py +++ b/dts/framework/config/__init__.py @@ -65,6 +65,20 @@ class HugepageConfiguration: force_first_numa: bool +@dataclass(slots=True, frozen=True) +class PortConfig: + node: str + pci: str + os_driver_for_dpdk: str + os_driver: str + peer_node: str + peer_pci: str + + @staticmethod + def from_dict(node: str, d: dict) -> "PortConfig": + return PortConfig(node=node, **d) + + @dataclass(slots=True, frozen=True) class NodeConfiguration: name: str @@ -77,6 +91,7 @@ class NodeConfiguration: use_first_core: bool memory_channels: int hugepages: HugepageConfiguration | None + ports: list[PortConfig] @staticmethod def from_dict(d: dict) -> "NodeConfiguration": @@ -85,19 +100,36 @@ def from_dict(d: dict) -> "NodeConfiguration": if "force_first_numa" not in hugepage_config: hugepage_config["force_first_numa"] = False hugepage_config = HugepageConfiguration(**hugepage_config) + common_config = { + "name": d["name"], + "hostname": d["hostname"], + "user": d["user"], + "password": d.get("password"), + "arch": Architecture(d["arch"]), + "os": OS(d["os"]), + "lcores": d.get("lcores", "1"), + "use_first_core": d.get("use_first_core", False), + "memory_channels": d.get("memory_channels", 1), + "hugepages": hugepage_config, + "ports": [PortConfig.from_dict(d["name"], port) for port in d["ports"]], + } + + return NodeConfiguration(**common_config) - return NodeConfiguration( - name=d["name"], - hostname=d["hostname"], - user=d["user"], - password=d.get("password"), - arch=Architecture(d["arch"]), - os=OS(d["os"]), - lcores=d.get("lcores", "1"), - use_first_core=d.get("use_first_core", False), - memory_channels=d.get("memory_channels", 1), - hugepages=hugepage_config, - ) + +@dataclass(slots=True, frozen=True) +class NodeInfo: + """Class to hold important versions within the node. + + This class, unlike the NodeConfiguration class, cannot be generated at the start. + This is because we need to initialize a connection with the node before we can + collect the information needed in this class. Therefore, it cannot be a part of + the configuration class above. + """ + + os_name: str + os_version: str + kernel_version: str @dataclass(slots=True, frozen=True) @@ -121,6 +153,18 @@ def from_dict(d: dict) -> "BuildTargetConfiguration": ) +@dataclass(slots=True, frozen=True) +class BuildTargetInfo: + """Class to hold important versions within the build target. + + This is very similar to the NodeInfo class, it just instead holds information + for the build target. 
+ """ + + dpdk_version: str + compiler_version: str + + class TestSuiteConfigDict(TypedDict): suite: str cases: list[str] @@ -150,6 +194,8 @@ class ExecutionConfiguration: func: bool test_suites: list[TestSuiteConfig] system_under_test: NodeConfiguration + vdevs: list[str] + skip_smoke_tests: bool @staticmethod def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration": @@ -159,15 +205,20 @@ def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration": test_suites: list[TestSuiteConfig] = list( map(TestSuiteConfig.from_dict, d["test_suites"]) ) - sut_name = d["system_under_test"] + sut_name = d["system_under_test"]["node_name"] + skip_smoke_tests = d.get("skip_smoke_tests", False) assert sut_name in node_map, f"Unknown SUT {sut_name} in execution {d}" - + vdevs = ( + d["system_under_test"]["vdevs"] if "vdevs" in d["system_under_test"] else [] + ) return ExecutionConfiguration( build_targets=build_targets, perf=d["perf"], func=d["func"], + skip_smoke_tests=skip_smoke_tests, test_suites=test_suites, system_under_test=node_map[sut_name], + vdevs=vdevs, ) diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json index ca2d4a1ef2..67722213dd 100644 --- a/dts/framework/config/conf_yaml_schema.json +++ b/dts/framework/config/conf_yaml_schema.json @@ -6,6 +6,76 @@ "type": "string", "description": "A unique identifier for a node" }, + "NIC": { + "type": "string", + "enum": [ + "ALL", + "ConnectX3_MT4103", + "ConnectX4_LX_MT4117", + "ConnectX4_MT4115", + "ConnectX5_MT4119", + "ConnectX5_MT4121", + "I40E_10G-10G_BASE_T_BC", + "I40E_10G-10G_BASE_T_X722", + "I40E_10G-SFP_X722", + "I40E_10G-SFP_XL710", + "I40E_10G-X722_A0", + "I40E_1G-1G_BASE_T_X722", + "I40E_25G-25G_SFP28", + "I40E_40G-QSFP_A", + "I40E_40G-QSFP_B", + "IAVF-ADAPTIVE_VF", + "IAVF-VF", + "IAVF_10G-X722_VF", + "ICE_100G-E810C_QSFP", + "ICE_25G-E810C_SFP", + "ICE_25G-E810_XXV_SFP", + "IGB-I350_VF", + "IGB_1G-82540EM", + "IGB_1G-82545EM_COPPER", + "IGB_1G-82571EB_COPPER", + "IGB_1G-82574L", + "IGB_1G-82576", + "IGB_1G-82576_QUAD_COPPER", + "IGB_1G-82576_QUAD_COPPER_ET2", + "IGB_1G-82580_COPPER", + "IGB_1G-I210_COPPER", + "IGB_1G-I350_COPPER", + "IGB_1G-I354_SGMII", + "IGB_1G-PCH_LPTLP_I218_LM", + "IGB_1G-PCH_LPTLP_I218_V", + "IGB_1G-PCH_LPT_I217_LM", + "IGB_1G-PCH_LPT_I217_V", + "IGB_2.5G-I354_BACKPLANE_2_5GBPS", + "IGC-I225_LM", + "IGC-I226_LM", + "IXGBE_10G-82599_SFP", + "IXGBE_10G-82599_SFP_SF_QP", + "IXGBE_10G-82599_T3_LOM", + "IXGBE_10G-82599_VF", + "IXGBE_10G-X540T", + "IXGBE_10G-X540_VF", + "IXGBE_10G-X550EM_A_SFP", + "IXGBE_10G-X550EM_X_10G_T", + "IXGBE_10G-X550EM_X_SFP", + "IXGBE_10G-X550EM_X_VF", + "IXGBE_10G-X550T", + "IXGBE_10G-X550_VF", + "brcm_57414", + "brcm_P2100G", + "cavium_0011", + "cavium_a034", + "cavium_a063", + "cavium_a064", + "fastlinq_ql41000", + "fastlinq_ql41000_vf", + "fastlinq_ql45000", + "fastlinq_ql45000_vf", + "hi1822", + "virtio" + ] + }, + "ARCH": { "type": "string", "enum": [ @@ -94,6 +164,19 @@ "amount" ] }, + "pci_address": { + "type": "string", + "pattern": "^[\\da-fA-F]{4}:[\\da-fA-F]{2}:[\\da-fA-F]{2}.\\d:?\\w*$" + }, + "port_peer_address": { + "description": "Peer is a TRex port, and IXIA port or a PCI address", + "oneOf": [ + { + "description": "PCI peer port", + "$ref": "#/definitions/pci_address" + } + ] + }, "test_suite": { "type": "string", "enum": [ @@ -165,6 +248,44 @@ }, "hugepages": { "$ref": "#/definitions/hugepages" + }, + "ports": { + "type": "array", + "items": { + "type": "object", + "description": "Each port should be described 
on both sides of the connection. This makes configuration slightly more verbose but greatly simplifies implementation. If there are inconsistencies, then DTS will not run until that issue is fixed. An example inconsistency would be port 1, node 1 says it is connected to port 1, node 2, but port 1, node 2 says it is connected to port 2, node 1.", + "properties": { + "pci": { + "$ref": "#/definitions/pci_address", + "description": "The local PCI address of the port" + }, + "os_driver_for_dpdk": { + "type": "string", + "description": "The driver that the kernel should bind this device to for DPDK to use it. (ex: vfio-pci)" + }, + "os_driver": { + "type": "string", + "description": "The driver normally used by this port (ex: i40e)" + }, + "peer_node": { + "type": "string", + "description": "The name of the node the peer port is on" + }, + "peer_pci": { + "$ref": "#/definitions/pci_address", + "description": "The PCI address of the peer port" + } + }, + "additionalProperties": false, + "required": [ + "pci", + "os_driver_for_dpdk", + "os_driver", + "peer_node", + "peer_pci" + ] + }, + "minimum": 1 } }, "additionalProperties": false, @@ -211,8 +332,27 @@ ] } }, + "skip_smoke_tests": { + "description": "Optional field that allows you to skip smoke testing", + "type": "boolean" + }, "system_under_test": { - "$ref": "#/definitions/node_name" + "type":"object", + "properties": { + "node_name": { + "$ref": "#/definitions/node_name" + }, + "vdevs": { + "description": "Optional list of names of vdevs to be used in execution", + "type": "array", + "items": { + "type": "string" + } + } + }, + "required": [ + "node_name" + ] } }, "additionalProperties": false, diff --git a/dts/framework/dts.py b/dts/framework/dts.py index 0502284580..3ba7fd2478 100644 --- a/dts/framework/dts.py +++ b/dts/framework/dts.py @@ -5,7 +5,13 @@ import sys -from .config import CONFIGURATION, BuildTargetConfiguration, ExecutionConfiguration +from .config import ( + CONFIGURATION, + BuildTargetConfiguration, + ExecutionConfiguration, + TestSuiteConfig, +) +from .exception import BlockingTestSuiteError from .logger import DTSLOG, getLogger from .test_result import BuildTargetResult, DTSResult, ExecutionResult, Result from .test_suite import get_test_suites @@ -83,6 +89,7 @@ def _run_execution( """ dts_logger.info(f"Running execution with SUT '{execution.system_under_test.name}'.") execution_result = result.add_execution(sut_node.config) + execution_result.add_sut_info(sut_node.node_info) try: sut_node.set_up_execution(execution) @@ -119,13 +126,14 @@ def _run_build_target( try: sut_node.set_up_build_target(build_target) result.dpdk_version = sut_node.dpdk_version + build_target_result.add_build_target_info(sut_node.get_build_target_info()) build_target_result.update_setup(Result.PASS) except Exception as e: dts_logger.exception("Build target setup failed.") build_target_result.update_setup(Result.FAIL, e) else: - _run_suites(sut_node, execution, build_target_result) + _run_all_suites(sut_node, execution, build_target_result) finally: try: @@ -136,7 +144,7 @@ def _run_build_target( build_target_result.update_teardown(Result.FAIL, e) -def _run_suites( +def _run_all_suites( sut_node: SutNode, execution: ExecutionConfiguration, build_target_result: BuildTargetResult, @@ -146,27 +154,61 @@ def _run_suites( with possibly only a subset of test cases. If no subset is specified, run all test cases. 
""" + end_build_target = False + if not execution.skip_smoke_tests: + execution.test_suites[:0] = [TestSuiteConfig.from_dict("smoke_tests")] for test_suite_config in execution.test_suites: try: - full_suite_path = f"tests.TestSuite_{test_suite_config.test_suite}" - test_suite_classes = get_test_suites(full_suite_path) - suites_str = ", ".join((x.__name__ for x in test_suite_classes)) - dts_logger.debug( - f"Found test suites '{suites_str}' in '{full_suite_path}'." + _run_single_suite( + sut_node, execution, build_target_result, test_suite_config ) - except Exception as e: - dts_logger.exception("An error occurred when searching for test suites.") - result.update_setup(Result.ERROR, e) - - else: - for test_suite_class in test_suite_classes: - test_suite = test_suite_class( - sut_node, - test_suite_config.test_cases, - execution.func, - build_target_result, - ) - test_suite.run() + except BlockingTestSuiteError as e: + dts_logger.exception( + f"An error occurred within {test_suite_config.test_suite}. " + "Skipping build target..." + ) + result.add_error(e) + end_build_target = True + # if a blocking test failed and we need to bail out of suite executions + if end_build_target: + break + + +def _run_single_suite( + sut_node: SutNode, + execution: ExecutionConfiguration, + build_target_result: BuildTargetResult, + test_suite_config: TestSuiteConfig, +) -> None: + """Runs a single test suite. + + Args: + sut_node: Node to run tests on. + execution: Execution the test case belongs to. + build_target_result: Build target configuration test case is run on + test_suite_config: Test suite configuration + + Raises: + BlockingTestSuiteError: If a test suite that was marked as blocking fails. + """ + try: + full_suite_path = f"tests.TestSuite_{test_suite_config.test_suite}" + test_suite_classes = get_test_suites(full_suite_path) + suites_str = ", ".join((x.__name__ for x in test_suite_classes)) + dts_logger.debug(f"Found test suites '{suites_str}' in '{full_suite_path}'.") + except Exception as e: + dts_logger.exception("An error occurred when searching for test suites.") + result.update_setup(Result.ERROR, e) + + else: + for test_suite_class in test_suite_classes: + test_suite = test_suite_class( + sut_node, + test_suite_config.test_cases, + execution.func, + build_target_result, + ) + test_suite.run() def _exit_dts() -> None: diff --git a/dts/framework/exception.py b/dts/framework/exception.py index 44ff4e979a..001a5a5496 100644 --- a/dts/framework/exception.py +++ b/dts/framework/exception.py @@ -25,6 +25,7 @@ class ErrorSeverity(IntEnum): SSH_ERR = 4 DPDK_BUILD_ERR = 10 TESTCASE_VERIFY_ERR = 20 + BLOCKING_TESTSUITE_ERR = 25 class DTSError(Exception): @@ -150,3 +151,14 @@ def __init__(self, value: str): def __str__(self) -> str: return repr(self.value) + + +class BlockingTestSuiteError(DTSError): + suite_name: str + severity: ClassVar[ErrorSeverity] = ErrorSeverity.BLOCKING_TESTSUITE_ERR + + def __init__(self, suite_name: str) -> None: + self.suite_name = suite_name + + def __str__(self) -> str: + return f"Blocking suite {self.suite_name} failed." diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py index ee221503df..1155dd8318 100644 --- a/dts/framework/remote_session/__init__.py +++ b/dts/framework/remote_session/__init__.py @@ -1,5 +1,6 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2023 PANTHEON.tech s.r.o. 
+# Copyright(c) 2023 University of New Hampshire """ The package provides modules for managing remote connections to a remote host (node), @@ -16,8 +17,16 @@ from framework.logger import DTSLOG from .linux_session import LinuxSession -from .os_session import OSSession -from .remote import CommandResult, RemoteSession, SSHSession +from .os_session import InteractiveShellType, OSSession +from .remote import ( + CommandResult, + InteractiveRemoteSession, + InteractiveShell, + RemoteSession, + SSHSession, + TestPmdDevice, + TestPmdShell, +) def create_session( diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/remote_session/linux_session.py index f13f399121..f64aa8efb0 100644 --- a/dts/framework/remote_session/linux_session.py +++ b/dts/framework/remote_session/linux_session.py @@ -14,7 +14,8 @@ class LinuxSession(PosixSession): The implementation of non-Posix compliant parts of Linux remote sessions. """ - def _get_privileged_command(self, command: str) -> str: + @staticmethod + def _get_privileged_command(command: str) -> str: return f"sudo -- sh -c '{command}'" def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]: diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py index bfd70bd480..f543ce3acc 100644 --- a/dts/framework/remote_session/os_session.py +++ b/dts/framework/remote_session/os_session.py @@ -5,14 +5,24 @@ from abc import ABC, abstractmethod from collections.abc import Iterable from pathlib import PurePath +from typing import Type, TypeVar -from framework.config import Architecture, NodeConfiguration +from framework.config import Architecture, NodeConfiguration, NodeInfo from framework.logger import DTSLOG +from framework.remote_session.remote import InteractiveShell from framework.settings import SETTINGS from framework.testbed_model import LogicalCore from framework.utils import MesonArgs -from .remote import CommandResult, RemoteSession, create_remote_session +from .remote import ( + CommandResult, + InteractiveRemoteSession, + RemoteSession, + create_interactive_session, + create_remote_session, +) + +InteractiveShellType = TypeVar("InteractiveShellType", bound=InteractiveShell) class OSSession(ABC): @@ -26,6 +36,7 @@ class OSSession(ABC): name: str _logger: DTSLOG remote_session: RemoteSession + interactive_session: InteractiveRemoteSession def __init__( self, @@ -37,6 +48,7 @@ def __init__( self.name = name self._logger = logger self.remote_session = create_remote_session(node_config, name, logger) + self.interactive_session = create_interactive_session(node_config, logger) def close(self, force: bool = False) -> None: """ @@ -68,8 +80,27 @@ def send_command( return self.remote_session.send_command(command, timeout, verify, env) + def create_interactive_shell( + self, + shell_cls: Type[InteractiveShellType], + eal_parameters: str, + timeout: float, + privileged: bool, + ) -> InteractiveShellType: + """ + See "create_interactive_shell" in SutNode + """ + return shell_cls( + self.interactive_session.session, + self._logger, + self._get_privileged_command if privileged else None, + eal_parameters, + timeout, + ) + + @staticmethod @abstractmethod - def _get_privileged_command(self, command: str) -> str: + def _get_privileged_command(command: str) -> str: """Modify the command so that it executes with administrative privileges. Args: @@ -206,3 +237,15 @@ def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None: if needed and mount the hugepages if needed. 
If force_first_numa is True, configure hugepages just on the first socket. """ + + @abstractmethod + def get_compiler_version(self, compiler_name: str) -> str: + """ + Get installed version of compiler used for DPDK + """ + + @abstractmethod + def get_node_info(self) -> NodeInfo: + """ + Collect information about the node + """ diff --git a/dts/framework/remote_session/posix_session.py b/dts/framework/remote_session/posix_session.py index 8ca0acb429..5da0516e05 100644 --- a/dts/framework/remote_session/posix_session.py +++ b/dts/framework/remote_session/posix_session.py @@ -6,7 +6,7 @@ from collections.abc import Iterable from pathlib import PurePath, PurePosixPath -from framework.config import Architecture +from framework.config import Architecture, NodeInfo from framework.exception import DPDKBuildError, RemoteCommandExecutionError from framework.settings import SETTINGS from framework.utils import MesonArgs @@ -221,3 +221,30 @@ def _remove_dpdk_runtime_dirs( def get_dpdk_file_prefix(self, dpdk_prefix) -> str: return "" + + def get_compiler_version(self, compiler_name: str) -> str: + match compiler_name: + case "gcc": + return self.send_command( + f"{compiler_name} --version", SETTINGS.timeout + ).stdout.split("\n")[0] + case "clang": + return self.send_command( + f"{compiler_name} --version", SETTINGS.timeout + ).stdout.split("\n")[0] + case "msvc": + return self.send_command("cl", SETTINGS.timeout).stdout + case "icc": + return self.send_command(f"{compiler_name} -V", SETTINGS.timeout).stdout + case _: + raise ValueError(f"Unknown compiler {compiler_name}") + + def get_node_info(self) -> NodeInfo: + os_release_info = self.send_command( + "awk -F= '$1 ~ /^NAME$|^VERSION$/ {print $2}' /etc/os-release", + SETTINGS.timeout, + ).stdout.split("\n") + kernel_version = self.send_command("uname -r", SETTINGS.timeout).stdout + return NodeInfo( + os_release_info[0].strip(), os_release_info[1].strip(), kernel_version + ) diff --git a/dts/framework/remote_session/remote/__init__.py b/dts/framework/remote_session/remote/__init__.py index 8a1512210a..1d29c3ea0d 100644 --- a/dts/framework/remote_session/remote/__init__.py +++ b/dts/framework/remote_session/remote/__init__.py @@ -1,16 +1,26 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2023 PANTHEON.tech s.r.o. 
+# Copyright(c) 2023 University of New Hampshire # pylama:ignore=W0611 from framework.config import NodeConfiguration from framework.logger import DTSLOG +from .interactive_remote_session import InteractiveRemoteSession +from .interactive_shell import InteractiveShell from .remote_session import CommandResult, RemoteSession from .ssh_session import SSHSession +from .testpmd_shell import TestPmdDevice, TestPmdShell def create_remote_session( node_config: NodeConfiguration, name: str, logger: DTSLOG ) -> RemoteSession: return SSHSession(node_config, name, logger) + + +def create_interactive_session( + node_config: NodeConfiguration, logger: DTSLOG +) -> InteractiveRemoteSession: + return InteractiveRemoteSession(node_config, logger) diff --git a/dts/framework/remote_session/remote/interactive_remote_session.py b/dts/framework/remote_session/remote/interactive_remote_session.py new file mode 100644 index 0000000000..2d94daf2a7 --- /dev/null +++ b/dts/framework/remote_session/remote/interactive_remote_session.py @@ -0,0 +1,82 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2023 University of New Hampshire + +import socket +import traceback + +from paramiko import AutoAddPolicy, SSHClient, Transport # type: ignore +from paramiko.ssh_exception import ( # type: ignore + AuthenticationException, + BadHostKeyException, + NoValidConnectionsError, + SSHException, +) + +from framework.config import NodeConfiguration +from framework.exception import SSHConnectionError +from framework.logger import DTSLOG + + +class InteractiveRemoteSession: + hostname: str + ip: str + port: int + username: str + password: str + _logger: DTSLOG + _node_config: NodeConfiguration + session: SSHClient + _transport: Transport | None + + def __init__(self, node_config: NodeConfiguration, _logger: DTSLOG) -> None: + self._node_config = node_config + self._logger = _logger + self.hostname = node_config.hostname + self.username = node_config.user + self.password = node_config.password if node_config.password else "" + port = "22" + self.ip = node_config.hostname + if ":" in node_config.hostname: + self.ip, port = node_config.hostname.split(":") + self.port = int(port) + self._logger.info( + f"Initializing interactive connection for {self.username}@{self.hostname}" + ) + self._connect() + self._logger.info( + f"Interactive connection successful for {self.username}@{self.hostname}" + ) + + def _connect(self) -> None: + client = SSHClient() + client.set_missing_host_key_policy(AutoAddPolicy) + self.session = client + retry_attempts = 10 + for retry_attempt in range(retry_attempts): + try: + client.connect( + self.ip, + username=self.username, + port=self.port, + password=self.password, + timeout=20 if self.port else 10, + ) + except (TypeError, BadHostKeyException, AuthenticationException) as e: + self._logger.exception(e) + raise SSHConnectionError(self.hostname) from e + except (NoValidConnectionsError, socket.error, SSHException) as e: + self._logger.debug(traceback.format_exc()) + self._logger.warning(e) + self._logger.info( + "Retrying interactive session connection: " + f"retry number {retry_attempt +1}" + ) + else: + break + else: + raise SSHConnectionError(self.hostname) + # Interactive sessions are used on an "as needed" basis so we have + # to set a keepalive + self._transport = self.session.get_transport() + if self._transport is not None: + self._transport.set_keepalive(30) diff --git a/dts/framework/remote_session/remote/interactive_shell.py b/dts/framework/remote_session/remote/interactive_shell.py new 
file mode 100644 index 0000000000..dd2d0628af --- /dev/null +++ b/dts/framework/remote_session/remote/interactive_shell.py @@ -0,0 +1,131 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2023 University of New Hampshire + +"""Common functionality for interactive shell handling. + +This base class, InteractiveShell, is meant to be extended by other classes that +contain functionality specific to that shell type. These derived classes will often +modify things like the prompt to expect or the arguments to pass into the application, +but still utilize the same method for sending a command and collecting output. How +this output is handled however is often application specific. If an application needs +elevated privileges to start it is expected that the method for gaining those +privileges is provided when initializing the class. +""" + +from pathlib import PurePath +from typing import Callable + +from paramiko import Channel, SSHClient, channel # type: ignore + +from framework.logger import DTSLOG +from framework.settings import SETTINGS + + +class InteractiveShell: + """The base class for managing interactive shells. + + This class shouldn't be instantiated directly, but instead be extended. It contains + methods for starting interactive shells as well as sending commands to these shells + and collecting input until reaching a certain prompt. + + Arguments: + interactive_session: The SSH session dedicated to interactive shells. + logger: Logger used for displaying information in the console. + get_privileged_command: Method for modifying a command to allow it to use + elevated privileges. If this is None, the application will not be started + with elevated privileges. + app_args: Command line arguments to be passed to the application on startup. + timeout: Timeout used for the SSH channel that is dedicated to this interactive + shell. This timeout is for collecting output, so if reading from the buffer + and no output is gathered within the timeout, an exception is thrown. + + Attributes + _default_prompt: Prompt to expect at the end of output when sending a command. + This is often overridden by derived classes. + _command_extra_chars: Extra characters to add to the end of every command + before sending them. This is often overridden by derived classes and is + most commonly an additional newline character. + path: Path to the executable to start the interactive application. + dpdk_app: Whether this application is a DPDK app. If it is, the build + directory for DPDK on the node will be prepended to the path to the + executable. 
+ """ + + _interactive_session: SSHClient + _stdin: channel.ChannelStdinFile + _stdout: channel.ChannelFile + _ssh_channel: Channel + _logger: DTSLOG + _timeout: float + _app_args: str + _default_prompt: str = "" + # Allows for app specific extra characters to be appended to commands + _command_extra_chars: str = "" + path: PurePath + dpdk_app: bool = False + + def __init__( + self, + interactive_session: SSHClient, + logger: DTSLOG, + get_privileged_command: Callable[[str], str] | None, + app_args: str = "", + timeout: float = SETTINGS.timeout, + ) -> None: + self._interactive_session = interactive_session + self._ssh_channel = self._interactive_session.invoke_shell() + self._stdin = self._ssh_channel.makefile_stdin("w") + self._stdout = self._ssh_channel.makefile("r") + self._ssh_channel.settimeout(timeout) + self._ssh_channel.set_combine_stderr(True) # combines stdout and stderr streams + self._logger = logger + self._timeout = timeout + self._app_args = app_args + self._start_application(get_privileged_command) + + def _start_application( + self, get_privileged_command: Callable[[str], str] | None + ) -> None: + """Starts a new interactive application based on _startup_command. + + This method is often overridden by subclasses as their process for + starting may look different. + """ + start_command = f"{self.path} {self._app_args}" + if get_privileged_command is not None: + start_command = get_privileged_command(start_command) + self.send_command(start_command) + + def send_command(self, command: str, prompt: str | None = None) -> str: + """Send a command and get all output before the expected ending string. + + Lines that expect input are not included in the stdout buffer, so they cannot + be used for expect. For example, if you were prompted to log into something + with a username and password, you cannot expect "username:" because it won't + yet be in the stdout buffer. A workaround for this could be consuming an + extra newline character to force the current prompt into the stdout buffer. 
+ + Returns: + All output in the buffer before expected string + """ + self._logger.info(f"Sending: '{command}'") + if prompt is None: + prompt = self._default_prompt + self._stdin.write(f"{command}{self._command_extra_chars}\n") + self._stdin.flush() + out: str = "" + for line in self._stdout: + out += line + if prompt in line and not line.rstrip().endswith( + command.rstrip() + ): # ignore line that sent command + break + self._logger.debug(f"Got output: {out}") + return out + + def close(self) -> None: + self._stdin.close() + self._ssh_channel.close() + + def __del__(self) -> None: + self.close() diff --git a/dts/framework/remote_session/remote/testpmd_shell.py b/dts/framework/remote_session/remote/testpmd_shell.py new file mode 100644 index 0000000000..1455b5a199 --- /dev/null +++ b/dts/framework/remote_session/remote/testpmd_shell.py @@ -0,0 +1,49 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2023 University of New Hampshire + +from pathlib import PurePath +from typing import Callable + +from .interactive_shell import InteractiveShell + + +class TestPmdDevice(object): + pci_address: str + + def __init__(self, pci_address_line: str): + self.pci_address = pci_address_line.strip().split(": ")[1].strip() + + def __str__(self) -> str: + return self.pci_address + + +class TestPmdShell(InteractiveShell): + path: PurePath = PurePath("app", "dpdk-testpmd") + dpdk_app: bool = True + _default_prompt: str = "testpmd>" + _command_extra_chars: str = ( + "\n" # We want to append an extra newline to every command + ) + + def _start_application( + self, get_privileged_command: Callable[[str], str] | None + ) -> None: + """See "_start_application" in InteractiveShell.""" + self._app_args += " -- -i" + super()._start_application(get_privileged_command) + + def get_devices(self) -> list[TestPmdDevice]: + """Get a list of device names that are known to testpmd + + Uses the device info listed in testpmd and then parses the output to + return only the names of the devices. + + Returns: + A list of strings representing device names (e.g. 0000:14:00.1) + """ + dev_info: str = self.send_command("show device info all") + dev_list: list[TestPmdDevice] = [] + for line in dev_info.split("\n"): + if "device name:" in line.lower(): + dev_list.append(TestPmdDevice(line)) + return dev_list diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py index 743919820c..aa9c5975c6 100644 --- a/dts/framework/test_result.py +++ b/dts/framework/test_result.py @@ -1,5 +1,6 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2023 PANTHEON.tech s.r.o. +# Copyright(c) 2023 University of New Hampshire """ Generic result container and reporters @@ -13,9 +14,11 @@ OS, Architecture, BuildTargetConfiguration, + BuildTargetInfo, Compiler, CPUType, NodeConfiguration, + NodeInfo, ) from .exception import DTSError, ErrorSeverity from .logger import DTSLOG @@ -67,7 +70,7 @@ class Statistics(dict): Using a dict provides a convenient way to format the data. 
""" - def __init__(self, dpdk_version): + def __init__(self, dpdk_version: str | None): super(Statistics, self).__init__() for result in Result: self[result.name] = 0 @@ -206,6 +209,8 @@ class BuildTargetResult(BaseResult): os: OS cpu: CPUType compiler: Compiler + compiler_version: str | None + dpdk_version: str | None def __init__(self, build_target: BuildTargetConfiguration): super(BuildTargetResult, self).__init__() @@ -213,6 +218,12 @@ def __init__(self, build_target: BuildTargetConfiguration): self.os = build_target.os self.cpu = build_target.cpu self.compiler = build_target.compiler + self.compiler_version = None + self.dpdk_version = None + + def add_build_target_info(self, versions: BuildTargetInfo) -> None: + self.compiler_version = versions.compiler_version + self.dpdk_version = versions.dpdk_version def add_test_suite(self, test_suite_name: str) -> TestSuiteResult: test_suite_result = TestSuiteResult(test_suite_name) @@ -228,6 +239,9 @@ class ExecutionResult(BaseResult): """ sut_node: NodeConfiguration + sut_os_name: str + sut_os_version: str + sut_kernel_version: str def __init__(self, sut_node: NodeConfiguration): super(ExecutionResult, self).__init__() @@ -240,6 +254,11 @@ def add_build_target( self._inner_results.append(build_target_result) return build_target_result + def add_sut_info(self, sut_info: NodeInfo): + self.sut_os_name = sut_info.os_name + self.sut_os_version = sut_info.os_version + self.sut_kernel_version = sut_info.kernel_version + class DTSResult(BaseResult): """ @@ -258,6 +277,7 @@ class DTSResult(BaseResult): """ dpdk_version: str | None + output: dict | None _logger: DTSLOG _errors: list[Exception] _return_code: ErrorSeverity @@ -267,6 +287,7 @@ class DTSResult(BaseResult): def __init__(self, logger: DTSLOG): super(DTSResult, self).__init__() self.dpdk_version = None + self.output = None self._logger = logger self._errors = [] self._return_code = ErrorSeverity.NO_ERR @@ -297,6 +318,7 @@ def process(self) -> None: self._logger.debug(repr(error)) self._stats_result = Statistics(self.dpdk_version) + # add information gathered from the smoke tests to the statistics self.add_stats(self._stats_result) with open(self._stats_filename, "w+") as stats_file: stats_file.write(str(self._stats_result)) diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py index 0705f38f98..de94c9332d 100644 --- a/dts/framework/test_suite.py +++ b/dts/framework/test_suite.py @@ -11,7 +11,12 @@ import re from types import MethodType -from .exception import ConfigurationError, SSHTimeoutError, TestCaseVerifyError +from .exception import ( + BlockingTestSuiteError, + ConfigurationError, + SSHTimeoutError, + TestCaseVerifyError, +) from .logger import DTSLOG, getLogger from .settings import SETTINGS from .test_result import BuildTargetResult, Result, TestCaseResult, TestSuiteResult @@ -37,6 +42,7 @@ class TestSuite(object): """ sut_node: SutNode + is_blocking = False _logger: DTSLOG _test_cases_to_run: list[str] _func: bool @@ -118,6 +124,8 @@ def run(self) -> None: f"the next test suite may be affected." ) self._result.update_setup(Result.ERROR, e) + if len(self._result.get_errors()) > 0 and self.is_blocking: + raise BlockingTestSuiteError(test_suite_name) def _execute_test_suite(self) -> None: """ diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py index d48fafe65d..d237b3f75b 100644 --- a/dts/framework/testbed_model/node.py +++ b/dts/framework/testbed_model/node.py @@ -7,7 +7,7 @@ A node is a generic host that DTS connects to and manages. 
""" -from typing import Any, Callable +from typing import Any, Callable, Type from framework.config import ( BuildTargetConfiguration, @@ -15,7 +15,7 @@ NodeConfiguration, ) from framework.logger import DTSLOG, getLogger -from framework.remote_session import OSSession, create_session +from framework.remote_session import InteractiveShellType, OSSession, create_session from framework.settings import SETTINGS from .hw import ( @@ -23,6 +23,7 @@ LogicalCoreCount, LogicalCoreList, LogicalCoreListFilter, + VirtualDevice, lcore_filter, ) @@ -40,6 +41,8 @@ class Node(object): lcores: list[LogicalCore] _logger: DTSLOG _other_sessions: list[OSSession] + _execution_config: ExecutionConfiguration + virtual_devices: list[VirtualDevice] def __init__(self, node_config: NodeConfiguration): self.config = node_config @@ -54,6 +57,7 @@ def __init__(self, node_config: NodeConfiguration): ).filter() self._other_sessions = [] + self.virtual_devices = [] self._logger.info(f"Created node: {self.name}") @@ -64,6 +68,9 @@ def set_up_execution(self, execution_config: ExecutionConfiguration) -> None: """ self._setup_hugepages() self._set_up_execution(execution_config) + self._execution_config = execution_config + for vdev in execution_config.vdevs: + self.virtual_devices.append(VirtualDevice(vdev)) def _set_up_execution(self, execution_config: ExecutionConfiguration) -> None: """ @@ -76,6 +83,7 @@ def tear_down_execution(self) -> None: Perform the execution teardown that will be done after each execution this node is part of concludes. """ + self.virtual_devices = [] self._tear_down_execution() def _tear_down_execution(self) -> None: @@ -127,6 +135,37 @@ def create_session(self, name: str) -> OSSession: self._other_sessions.append(connection) return connection + def create_interactive_shell( + self, + shell_cls: Type[InteractiveShellType], + timeout: float = SETTINGS.timeout, + privileged: bool = False, + app_args: str = "", + ) -> InteractiveShellType: + """Create a handler for an interactive session. + + Instantiate shell_cls according to the remote OS specifics. + + Args: + shell_cls: The class of the shell. + timeout: Timeout for reading output from the SSH channel. If you are + reading from the buffer and don't receive any data within the timeout + it will throw an error. + privileged: Whether to run the shell with administrative privileges. + app_args: The arguments to be passed to the application. + Returns: + Instance of the desired interactive application. + """ + if not shell_cls.dpdk_app: + shell_cls.path = self.main_session.join_remote_path(shell_cls.path) + + return self.main_session.create_interactive_shell( + shell_cls, + app_args, + timeout, + privileged, + ) + def filter_lcores( self, filter_specifier: LogicalCoreCount | LogicalCoreList, diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py index 9dbc390848..e3227d9bc2 100644 --- a/dts/framework/testbed_model/sut_node.py +++ b/dts/framework/testbed_model/sut_node.py @@ -1,14 +1,21 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2010-2014 Intel Corporation # Copyright(c) 2023 PANTHEON.tech s.r.o. 
+# Copyright(c) 2023 University of New Hampshire import os import tarfile import time from pathlib import PurePath - -from framework.config import BuildTargetConfiguration, NodeConfiguration -from framework.remote_session import CommandResult, OSSession +from typing import Type + +from framework.config import ( + BuildTargetConfiguration, + BuildTargetInfo, + NodeConfiguration, + NodeInfo, +) +from framework.remote_session import CommandResult, InteractiveShellType, OSSession from framework.settings import SETTINGS from framework.utils import MesonArgs @@ -16,6 +23,52 @@ from .node import Node +class EalParameters(object): + def __init__( + self, + lcore_list: LogicalCoreList, + memory_channels: int, + prefix: str, + no_pci: bool, + vdevs: list[VirtualDevice], + other_eal_param: str, + ): + """ + Generate eal parameters character string; + :param lcore_list: the list of logical cores to use. + :param memory_channels: the number of memory channels to use. + :param prefix: set file prefix string, eg: + prefix='vf' + :param no_pci: switch of disable PCI bus eg: + no_pci=True + :param vdevs: virtual device list, eg: + vdevs=[ + VirtualDevice('net_ring0'), + VirtualDevice('net_ring1') + ] + :param other_eal_param: user defined DPDK eal parameters, eg: + other_eal_param='--single-file-segments' + """ + self._lcore_list = f"-l {lcore_list}" + self._memory_channels = f"-n {memory_channels}" + self._prefix = prefix + if prefix: + self._prefix = f"--file-prefix={prefix}" + self._no_pci = "--no-pci" if no_pci else "" + self._vdevs = " ".join(f"--vdev {vdev}" for vdev in vdevs) + self._other_eal_param = other_eal_param + + def __str__(self) -> str: + return ( + f"{self._lcore_list} " + f"{self._memory_channels} " + f"{self._prefix} " + f"{self._no_pci} " + f"{self._vdevs} " + f"{self._other_eal_param}" + ) + + class SutNode(Node): """ A class for managing connections to the System under Test, providing @@ -30,9 +83,11 @@ class SutNode(Node): _env_vars: dict _remote_tmp_dir: PurePath __remote_dpdk_dir: PurePath | None - _dpdk_version: str | None _app_compile_timeout: float _dpdk_kill_session: OSSession | None + _dpdk_version: str | None + _node_info: NodeInfo | None + _compiler_version: str | None def __init__(self, node_config: NodeConfiguration): super(SutNode, self).__init__(node_config) @@ -41,12 +96,14 @@ def __init__(self, node_config: NodeConfiguration): self._env_vars = {} self._remote_tmp_dir = self.main_session.get_remote_tmp_dir() self.__remote_dpdk_dir = None - self._dpdk_version = None self._app_compile_timeout = 90 self._dpdk_kill_session = None self._dpdk_timestamp = ( f"{str(os.getpid())}_{time.strftime('%Y%m%d%H%M%S', time.localtime())}" ) + self._dpdk_version = None + self._node_info = None + self._compiler_version = None @property def _remote_dpdk_dir(self) -> PurePath: @@ -75,6 +132,32 @@ def dpdk_version(self) -> str: ) return self._dpdk_version + @property + def node_info(self) -> NodeInfo: + if self._node_info is None: + self._node_info = self.main_session.get_node_info() + return self._node_info + + @property + def compiler_version(self) -> str: + if self._compiler_version is None: + if self._build_target_config is not None: + self._compiler_version = self.main_session.get_compiler_version( + self._build_target_config.compiler.name + ) + else: + self._logger.warning( + "Failed to get compiler version because" + "_build_target_config is None." 
+ ) + return "" + return self._compiler_version + + def get_build_target_info(self) -> BuildTargetInfo: + return BuildTargetInfo( + dpdk_version=self.dpdk_version, compiler_version=self.compiler_version + ) + def _guess_dpdk_remote_dir(self) -> PurePath: return self.main_session.guess_dpdk_remote_dir(self._remote_tmp_dir) @@ -84,6 +167,10 @@ def _set_up_build_target( """ Setup DPDK on the SUT node. """ + # we want to ensure that dpdk_version and compiler_version is reset for new + # build targets + self._dpdk_version = None + self._compiler_version = None self._configure_build_target(build_target_config) self._copy_dpdk_tarball() self._build_dpdk() @@ -262,48 +349,38 @@ def run_dpdk_app( f"{app_path} {eal_args}", timeout, privileged=True, verify=True ) - -class EalParameters(object): - def __init__( + def create_interactive_shell( self, - lcore_list: LogicalCoreList, - memory_channels: int, - prefix: str, - no_pci: bool, - vdevs: list[VirtualDevice], - other_eal_param: str, - ): - """ - Generate eal parameters character string; - :param lcore_list: the list of logical cores to use. - :param memory_channels: the number of memory channels to use. - :param prefix: set file prefix string, eg: - prefix='vf' - :param no_pci: switch of disable PCI bus eg: - no_pci=True - :param vdevs: virtual device list, eg: - vdevs=[ - VirtualDevice('net_ring0'), - VirtualDevice('net_ring1') - ] - :param other_eal_param: user defined DPDK eal parameters, eg: - other_eal_param='--single-file-segments' + shell_cls: Type[InteractiveShellType], + timeout: float = SETTINGS.timeout, + privileged: bool = False, + eal_parameters: EalParameters | str | None = None, + ) -> InteractiveShellType: + """Factory method for creating a handler for an interactive session. + + Instantiate shell_cls according to the remote OS specifics. + + Args: + shell_cls: The class of the shell. + timeout: Timeout for reading output from the SSH channel. If you are + reading from the buffer and don't receive any data within the timeout + it will throw an error. + privileged: Whether to run the shell with administrative privileges. + eal_parameters: List of EAL parameters to use to launch the app. If this + isn't provided or an empty string is passed, it will default to calling + create_eal_parameters(). + Returns: + Instance of the desired interactive application. 
""" - self._lcore_list = f"-l {lcore_list}" - self._memory_channels = f"-n {memory_channels}" - self._prefix = prefix - if prefix: - self._prefix = f"--file-prefix={prefix}" - self._no_pci = "--no-pci" if no_pci else "" - self._vdevs = " ".join(f"--vdev {vdev}" for vdev in vdevs) - self._other_eal_param = other_eal_param + if not eal_parameters: + eal_parameters = self.create_eal_parameters() - def __str__(self) -> str: - return ( - f"{self._lcore_list} " - f"{self._memory_channels} " - f"{self._prefix} " - f"{self._no_pci} " - f"{self._vdevs} " - f"{self._other_eal_param}" + # We need to append the build directory for DPDK apps + if shell_cls.dpdk_app: + shell_cls.path = self.main_session.join_remote_path( + self.remote_dpdk_build_dir, shell_cls.path + ) + + return super().create_interactive_shell( + shell_cls, timeout, privileged, str(eal_parameters) ) diff --git a/dts/framework/utils.py b/dts/framework/utils.py index 510e2e3788..60abe46edf 100644 --- a/dts/framework/utils.py +++ b/dts/framework/utils.py @@ -25,6 +25,9 @@ def __str__(self) -> str: return self.name +REGEX_FOR_PCI_ADDRESS = "/[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}/" + + def check_dts_python_version() -> None: if sys.version_info.major < 3 or ( sys.version_info.major == 3 and sys.version_info.minor < 10 diff --git a/dts/tests/TestSuite_smoke_tests.py b/dts/tests/TestSuite_smoke_tests.py new file mode 100644 index 0000000000..4a269df75b --- /dev/null +++ b/dts/tests/TestSuite_smoke_tests.py @@ -0,0 +1,114 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2023 University of New Hampshire + +import re + +from framework.config import PortConfig +from framework.remote_session import TestPmdShell +from framework.settings import SETTINGS +from framework.test_suite import TestSuite +from framework.utils import REGEX_FOR_PCI_ADDRESS + + +class SmokeTests(TestSuite): + is_blocking = True + # dicts in this list are expected to have two keys: + # "pci_address" and "current_driver" + nics_in_node: list[PortConfig] = [] + + def set_up_suite(self) -> None: + """ + Setup: + Set the build directory path and generate a list of NICs in the SUT node. + """ + self.dpdk_build_dir_path = self.sut_node.remote_dpdk_build_dir + self.nics_in_node = self.sut_node.config.ports + + def test_unit_tests(self) -> None: + """ + Test: + Run the fast-test unit-test suite through meson. + """ + self.sut_node.main_session.send_command( + f"meson test -C {self.dpdk_build_dir_path} --suite fast-tests -t 60", + 480, + verify=True, + privileged=True, + ) + + def test_driver_tests(self) -> None: + """ + Test: + Run the driver-test unit-test suite through meson. + """ + vdev_args = "" + for dev in self.sut_node.virtual_devices: + vdev_args += f"--vdev {dev} " + vdev_args = vdev_args[:-1] + driver_tests_command = ( + f"meson test -C {self.dpdk_build_dir_path} --suite driver-tests" + ) + if vdev_args: + self._logger.info( + "Running driver tests with the following virtual " + f"devices: {vdev_args}" + ) + driver_tests_command += f' --test-args "{vdev_args}"' + + self.sut_node.main_session.send_command( + driver_tests_command, + 300, + verify=True, + privileged=True, + ) + + def test_devices_listed_in_testpmd(self) -> None: + """ + Test: + Uses testpmd driver to verify that devices have been found by testpmd. 
+ """ + testpmd_driver = self.sut_node.create_interactive_shell( + TestPmdShell, privileged=True + ) + dev_list = [str(x) for x in testpmd_driver.get_devices()] + for nic in self.nics_in_node: + self.verify( + nic.pci in dev_list, + f"Device {nic.pci} was not listed in testpmd's available devices, " + "please check your configuration", + ) + + def test_device_bound_to_driver(self) -> None: + """ + Test: + Ensure that all drivers listed in the config are bound to the correct + driver. + """ + path_to_devbind = self.sut_node.main_session.join_remote_path( + self.sut_node._remote_dpdk_dir, "usertools", "dpdk-devbind.py" + ) + + all_nics_in_dpdk_devbind = self.sut_node.main_session.send_command( + f"{path_to_devbind} --status | awk '{REGEX_FOR_PCI_ADDRESS}'", + SETTINGS.timeout, + ).stdout + + for nic in self.nics_in_node: + # This regular expression finds the line in the above string that starts + # with the address for the nic we are on in the loop and then captures the + # name of the driver in a group + devbind_info_for_nic = re.search( + f"{nic.pci}[^\\n]*drv=([\\d\\w]*) [^\\n]*", + all_nics_in_dpdk_devbind, + ) + self.verify( + devbind_info_for_nic is not None, + f"Failed to find configured device ({nic.pci}) using dpdk-devbind.py", + ) + # We know this isn't None, but mypy doesn't + assert devbind_info_for_nic is not None + self.verify( + devbind_info_for_nic.group(1) == nic.os_driver, + f"Driver for device {nic.pci} does not match driver listed in " + f"configuration (bound to {devbind_info_for_nic.group(1)})", + )