From patchwork Fri Mar 3 10:24:58 2023
X-Patchwork-Submitter: Juraj Linkeš
X-Patchwork-Id: 124782
X-Patchwork-Delegate: thomas@monjalon.net
From: Juraj Linkeš
To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, lijuan.tu@intel.com,
 bruce.richardson@intel.com, probb@iol.unh.edu
Cc: dev@dpdk.org, Juraj Linkeš
Subject: [PATCH v6 01/10] dts: add node and os abstractions
Date: Fri, 3 Mar 2023 11:24:58 +0100
Message-Id: <20230303102507.527790-2-juraj.linkes@pantheon.tech>
In-Reply-To: <20230303102507.527790-1-juraj.linkes@pantheon.tech>
References: <20230223152840.634183-1-juraj.linkes@pantheon.tech>
 <20230303102507.527790-1-juraj.linkes@pantheon.tech>
List-Id: DPDK patches and discussions

The abstraction model in DTS is as follows:

Node, defining and implementing methods common to, and being the base of,
SUT (system under test) Node and TG (traffic generator) Node.
Remote Session, defining and implementing methods common to any remote
session implementation, such as SSH Session.

OSSession, defining and implementing methods common to any operating
system/distribution, such as Linux.

OSSession uses a derived Remote Session, and Node in turn uses a derived
OSSession. This split delegates OS-specific and connection-specific code
to specialized classes designed to handle the differences.

The base classes implement the methods, or parts of methods, that are
common to all implementations and define abstract methods that must be
implemented by derived classes.

Part of the abstractions is the DTS test execution skeleton:
execution setup, build setup and then test execution.

Signed-off-by: Juraj Linkeš
---
 dts/conf.yaml                                 |  11 +-
 dts/framework/config/__init__.py              |  73 +++++++-
 dts/framework/config/conf_yaml_schema.json    |  76 +++++++-
 dts/framework/dts.py                          | 162 ++++++++++++++----
 dts/framework/exception.py                    |  46 ++++-
 dts/framework/logger.py                       |  24 +--
 dts/framework/remote_session/__init__.py      |  30 +++-
 dts/framework/remote_session/linux_session.py |  11 ++
 dts/framework/remote_session/os_session.py    |  46 +++++
 dts/framework/remote_session/posix_session.py |  12 ++
 .../remote_session/remote/__init__.py         |  16 ++
 .../{ => remote}/remote_session.py            |  41 +++--
 .../{ => remote}/ssh_session.py               |  20 +--
 dts/framework/settings.py                     |  15 +-
 dts/framework/testbed_model/__init__.py       |  10 +-
 dts/framework/testbed_model/node.py           | 109 +++++++---
 dts/framework/testbed_model/sut_node.py       |  13 ++
 17 files changed, 594 insertions(+), 121 deletions(-)
 create mode 100644 dts/framework/remote_session/linux_session.py
 create mode 100644 dts/framework/remote_session/os_session.py
 create mode 100644 dts/framework/remote_session/posix_session.py
 create mode 100644 dts/framework/remote_session/remote/__init__.py
 rename dts/framework/remote_session/{ => remote}/remote_session.py (61%)
 rename dts/framework/remote_session/{ => remote}/ssh_session.py (91%)
 create mode 100644 dts/framework/testbed_model/sut_node.py

diff --git a/dts/conf.yaml b/dts/conf.yaml
index 1aaa593612..03696d2bab 100644
--- a/dts/conf.yaml
+++ b/dts/conf.yaml
@@ -1,9 +1,16 @@
 # SPDX-License-Identifier: BSD-3-Clause
-# Copyright 2022 The DPDK contributors
+# Copyright 2022-2023 The DPDK contributors

 executions:
-  - system_under_test: "SUT 1"
+  - build_targets:
+      - arch: x86_64
+        os: linux
+        cpu: native
+        compiler: gcc
+        compiler_wrapper: ccache
+    system_under_test: "SUT 1"
 nodes:
   - name: "SUT 1"
     hostname: sut1.change.me.localhost
     user: root
+    os: linux

diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index 214be8e7f4..e3e2d74eac 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -1,15 +1,17 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2010-2021 Intel Corporation
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 University of New Hampshire
+# Copyright(c) 2023 PANTHEON.tech s.r.o.

 """
-Generic port and topology nodes configuration file load function
+Yaml config parsing methods
 """

 import json
 import os.path
 import pathlib
 from dataclasses import dataclass
+from enum import Enum, auto, unique
 from typing import Any

 import warlock  # type: ignore
@@ -18,6 +20,47 @@ from framework.settings import SETTINGS


+class StrEnum(Enum):
+    @staticmethod
+    def _generate_next_value_(
+        name: str, start: int, count: int, last_values: object
+    ) -> str:
+        return name
+
+
+@unique
+class Architecture(StrEnum):
+    i686 = auto()
+    x86_64 = auto()
+    x86_32 = auto()
+    arm64 = auto()
+    ppc64le = auto()
+
+
+@unique
+class OS(StrEnum):
+    linux = auto()
+    freebsd = auto()
+    windows = auto()
+
+
+@unique
+class CPUType(StrEnum):
+    native = auto()
+    armv8a = auto()
+    dpaa2 = auto()
+    thunderx = auto()
+    xgene1 = auto()
+
+
+@unique
+class Compiler(StrEnum):
+    gcc = auto()
+    clang = auto()
+    icc = auto()
+    msvc = auto()
+
+
 # Slots enables some optimizations, by pre-allocating space for the defined
 # attributes in the underlying data structure.
 #
@@ -29,6 +72,7 @@ class NodeConfiguration:
     hostname: str
     user: str
     password: str | None
+    os: OS

     @staticmethod
     def from_dict(d: dict) -> "NodeConfiguration":
@@ -37,19 +81,44 @@ def from_dict(d: dict) -> "NodeConfiguration":
             hostname=d["hostname"],
             user=d["user"],
             password=d.get("password"),
+            os=OS(d["os"]),
+        )
+
+
+@dataclass(slots=True, frozen=True)
+class BuildTargetConfiguration:
+    arch: Architecture
+    os: OS
+    cpu: CPUType
+    compiler: Compiler
+    name: str
+
+    @staticmethod
+    def from_dict(d: dict) -> "BuildTargetConfiguration":
+        return BuildTargetConfiguration(
+            arch=Architecture(d["arch"]),
+            os=OS(d["os"]),
+            cpu=CPUType(d["cpu"]),
+            compiler=Compiler(d["compiler"]),
+            name=f"{d['arch']}-{d['os']}-{d['cpu']}-{d['compiler']}",
         )


 @dataclass(slots=True, frozen=True)
 class ExecutionConfiguration:
+    build_targets: list[BuildTargetConfiguration]
     system_under_test: NodeConfiguration

     @staticmethod
     def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration":
+        build_targets: list[BuildTargetConfiguration] = list(
+            map(BuildTargetConfiguration.from_dict, d["build_targets"])
+        )
         sut_name = d["system_under_test"]
         assert sut_name in node_map, f"Unknown SUT {sut_name} in execution {d}"
         return ExecutionConfiguration(
+            build_targets=build_targets,
             system_under_test=node_map[sut_name],
         )

diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
index 6b8d6ccd05..9170307fbe 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -5,6 +5,68 @@
     "node_name": {
       "type": "string",
       "description": "A unique identifier for a node"
+    },
+    "OS": {
+      "type": "string",
+      "enum": [
+        "linux"
+      ]
+    },
+    "cpu": {
+      "type": "string",
+      "description": "Native should be the default on x86",
+      "enum": [
+        "native",
+        "armv8a",
+        "dpaa2",
+        "thunderx",
+        "xgene1"
+      ]
+    },
+    "compiler": {
+      "type": "string",
+      "enum": [
+        "gcc",
+        "clang",
+        "icc",
+        "msvc"
+      ]
+    },
+    "build_target": {
+      "type": "object",
+      "description": "Targets supported by DTS",
+      "properties": {
+        "arch": {
+          "type": "string",
+          "enum": [
+            "ALL",
+            "x86_64",
+            "arm64",
+            "ppc64le",
+            "other"
+          ]
+        },
+        "os": {
+          "$ref": "#/definitions/OS"
+        },
+        "cpu": {
+          "$ref": "#/definitions/cpu"
+        },
+        "compiler": {
+          "$ref": "#/definitions/compiler"
+        },
+        "compiler_wrapper": {
+          "type": "string",
+          "description": "This will be added before compiler to the CC variable when building DPDK. Optional."
+        }
+      },
+      "additionalProperties": false,
+      "required": [
+        "arch",
+        "os",
+        "cpu",
+        "compiler"
+      ]
     }
   },
   "type": "object",
@@ -29,13 +91,17 @@
         "password": {
           "type": "string",
           "description": "The password to use on this node. Use only as a last resort. SSH keys are STRONGLY preferred."
+        },
+        "os": {
+          "$ref": "#/definitions/OS"
         }
       },
       "additionalProperties": false,
       "required": [
         "name",
         "hostname",
-        "user"
+        "user",
+        "os"
       ]
     },
     "minimum": 1
@@ -45,12 +111,20 @@
     "items": {
       "type": "object",
       "properties": {
+        "build_targets": {
+          "type": "array",
+          "items": {
+            "$ref": "#/definitions/build_target"
+          },
+          "minimum": 1
+        },
         "system_under_test": {
           "$ref": "#/definitions/node_name"
         }
       },
       "additionalProperties": false,
       "required": [
+        "build_targets",
         "system_under_test"
       ]
     },

diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index d23cfc4526..3d4170d10f 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -1,67 +1,157 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2010-2019 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 University of New Hampshire

 import sys
-import traceback
-from collections.abc import Iterable

-from framework.testbed_model.node import Node
-
-from .config import CONFIGURATION
+from .config import CONFIGURATION, BuildTargetConfiguration, ExecutionConfiguration
+from .exception import DTSError, ErrorSeverity
 from .logger import DTSLOG, getLogger
+from .testbed_model import SutNode
 from .utils import check_dts_python_version

-dts_logger: DTSLOG | None = None
+dts_logger: DTSLOG = getLogger("DTSRunner")
+errors = []


 def run_all() -> None:
     """
-    Main process of DTS, it will run all test suites in the config file.
+    The main process of DTS. Runs all build targets in all executions from the main
+    config file.
     """
-    global dts_logger
+    global errors

     # check the python version of the server that run dts
     check_dts_python_version()

-    dts_logger = getLogger("dts")
-
-    nodes = {}
-    # This try/finally block means "Run the try block, if there is an exception,
-    # run the finally block before passing it upward. If there is not an exception,
-    # run the finally block after the try block is finished." This helps avoid the
-    # problem of python's interpreter exit context, which essentially prevents you
-    # from making certain system calls. This makes cleaning up resources difficult,
-    # since most of the resources in DTS are network-based, which is restricted.
+    nodes: dict[str, SutNode] = {}
     try:
         # for all Execution sections
         for execution in CONFIGURATION.executions:
-            sut_config = execution.system_under_test
-            if sut_config.name not in nodes:
-                node = Node(sut_config)
-                nodes[sut_config.name] = node
-                node.send_command("echo Hello World")
+            sut_node = None
+            if execution.system_under_test.name in nodes:
+                # a Node with the same name already exists
+                sut_node = nodes[execution.system_under_test.name]
+            else:
+                # the SUT has not been initialized yet
+                try:
+                    sut_node = SutNode(execution.system_under_test)
+                except Exception as e:
+                    dts_logger.exception(
+                        f"Connection to node {execution.system_under_test} failed."
+                    )
+                    errors.append(e)
+                else:
+                    nodes[sut_node.name] = sut_node
+
+            if sut_node:
+                _run_execution(sut_node, execution)
+
+    except Exception as e:
+        dts_logger.exception("An unexpected error has occurred.")
+        errors.append(e)
+        raise
+
+    finally:
+        try:
+            for node in nodes.values():
+                node.close()
+        except Exception as e:
+            dts_logger.exception("Final cleanup of nodes failed.")
+            errors.append(e)
+
+    # we need to put the sys.exit call outside the finally clause to make sure
+    # that unexpected exceptions will propagate
+    # in that case, the error that should be reported is the uncaught exception as
+    # that is a severe error originating from the framework
+    # at that point, we'll only have partial results which could be impacted by the
+    # error causing the uncaught exception, making them uninterpretable
+    _exit_dts()
+
+
+def _run_execution(sut_node: SutNode, execution: ExecutionConfiguration) -> None:
+    """
+    Run the given execution. This involves running the execution setup as well as
+    running all build targets in the given execution.
+    """
+    dts_logger.info(f"Running execution with SUT '{execution.system_under_test.name}'.")
+
+    try:
+        sut_node.set_up_execution(execution)
     except Exception as e:
-        # sys.exit() doesn't produce a stack trace, need to print it explicitly
-        traceback.print_exc()
-        raise e
+        dts_logger.exception("Execution setup failed.")
+        errors.append(e)
+
+    else:
+        for build_target in execution.build_targets:
+            _run_build_target(sut_node, build_target, execution)

     finally:
-        quit_execution(nodes.values())
+        try:
+            sut_node.tear_down_execution()
+        except Exception as e:
+            dts_logger.exception("Execution teardown failed.")
+            errors.append(e)


-def quit_execution(sut_nodes: Iterable[Node]) -> None:
+def _run_build_target(
+    sut_node: SutNode,
+    build_target: BuildTargetConfiguration,
+    execution: ExecutionConfiguration,
+) -> None:
     """
-    Close session to SUT and TG before quit.
-    Return exit status when failure occurred.
+    Run the given build target.
     """
-    for sut_node in sut_nodes:
-        # close all session
-        sut_node.node_exit()
+    dts_logger.info(f"Running build target '{build_target.name}'.")
+
+    try:
+        sut_node.set_up_build_target(build_target)
+    except Exception as e:
+        dts_logger.exception("Build target setup failed.")
+        errors.append(e)
+
+    else:
+        _run_suites(sut_node, execution)
+
+    finally:
+        try:
+            sut_node.tear_down_build_target()
+        except Exception as e:
+            dts_logger.exception("Build target teardown failed.")
+            errors.append(e)
+
+
+def _run_suites(
+    sut_node: SutNode,
+    execution: ExecutionConfiguration,
+) -> None:
+    """
+    Use the given build_target to run execution's test suites
+    with possibly only a subset of test cases.
+    If no subset is specified, run all test cases.
+    """
+
+
+def _exit_dts() -> None:
+    """
+    Process all errors and exit with the proper exit code.
+    """
+    if errors and dts_logger:
+        dts_logger.debug("Summary of errors:")
+        for error in errors:
+            dts_logger.debug(repr(error))
+
+    return_code = ErrorSeverity.NO_ERR
+    for error in errors:
+        error_return_code = ErrorSeverity.GENERIC_ERR
+        if isinstance(error, DTSError):
+            error_return_code = error.severity
+
+        if error_return_code > return_code:
+            return_code = error_return_code

-    if dts_logger is not None:
+    if dts_logger:
         dts_logger.info("DTS execution has ended.")
-    sys.exit(0)
+    sys.exit(return_code)

diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index 8b2f08a8f0..121a0f7296 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -1,20 +1,46 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2010-2014 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 University of New Hampshire

 """
 User-defined exceptions used across the framework.
 """

+from enum import IntEnum, unique
+from typing import ClassVar

-class SSHTimeoutError(Exception):
+
+@unique
+class ErrorSeverity(IntEnum):
+    """
+    The severity of errors that occur during DTS execution.
+    All exceptions are caught and the most severe error is used as return code.
+    """
+
+    NO_ERR = 0
+    GENERIC_ERR = 1
+    CONFIG_ERR = 2
+    SSH_ERR = 3
+
+
+class DTSError(Exception):
+    """
+    The base exception from which all DTS exceptions are derived.
+    Stores error severity.
+    """
+
+    severity: ClassVar[ErrorSeverity] = ErrorSeverity.GENERIC_ERR
+
+
+class SSHTimeoutError(DTSError):
     """
     Command execution timeout.
     """

     command: str
     output: str
+    severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR

     def __init__(self, command: str, output: str):
         self.command = command
@@ -27,12 +53,13 @@ def get_output(self) -> str:
         return self.output


-class SSHConnectionError(Exception):
+class SSHConnectionError(DTSError):
     """
     SSH connection error.
     """

     host: str
+    severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR

     def __init__(self, host: str):
         self.host = host
@@ -41,16 +68,25 @@ def __str__(self) -> str:
         return f"Error trying to connect with {self.host}"


-class SSHSessionDeadError(Exception):
+class SSHSessionDeadError(DTSError):
     """
     SSH session is not alive.
     It can no longer be used.
     """

     host: str
+    severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR

     def __init__(self, host: str):
         self.host = host

     def __str__(self) -> str:
         return f"SSH session with {self.host} has died"
+
+
+class ConfigurationError(DTSError):
+    """
+    Raised when an invalid configuration is encountered.
+    """
+
+    severity: ClassVar[ErrorSeverity] = ErrorSeverity.CONFIG_ERR

diff --git a/dts/framework/logger.py b/dts/framework/logger.py
index a31fcc8242..bb2991e994 100644
--- a/dts/framework/logger.py
+++ b/dts/framework/logger.py
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2010-2014 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 University of New Hampshire

 """
 DTS logger module with several log level. DTS framework and TestSuite logs
@@ -33,17 +33,17 @@ class DTSLOG(logging.LoggerAdapter):
     DTS log class for framework and testsuite.
     """

-    logger: logging.Logger
+    _logger: logging.Logger
     node: str
     sh: logging.StreamHandler
     fh: logging.FileHandler
     verbose_fh: logging.FileHandler

     def __init__(self, logger: logging.Logger, node: str = "suite"):
-        self.logger = logger
+        self._logger = logger
         # 1 means log everything, this will be used by file handlers if their level
         # is not set
-        self.logger.setLevel(1)
+        self._logger.setLevel(1)

         self.node = node
@@ -55,9 +55,13 @@ def __init__(self, logger: logging.Logger, node: str = "suite"):
         if SETTINGS.verbose is True:
             sh.setLevel(logging.DEBUG)

-        self.logger.addHandler(sh)
+        self._logger.addHandler(sh)
         self.sh = sh

+        # prepare the output folder
+        if not os.path.exists(SETTINGS.output_dir):
+            os.mkdir(SETTINGS.output_dir)
+
         logging_path_prefix = os.path.join(SETTINGS.output_dir, node)

         fh = logging.FileHandler(f"{logging_path_prefix}.log")
@@ -68,7 +72,7 @@ def __init__(self, logger: logging.Logger, node: str = "suite"):
             )
         )

-        self.logger.addHandler(fh)
+        self._logger.addHandler(fh)
         self.fh = fh

         # This outputs EVERYTHING, intended for post-mortem debugging
@@ -82,10 +86,10 @@ def __init__(self, logger: logging.Logger, node: str = "suite"):
             )
         )

-        self.logger.addHandler(verbose_fh)
+        self._logger.addHandler(verbose_fh)
         self.verbose_fh = verbose_fh

-        super(DTSLOG, self).__init__(self.logger, dict(node=self.node))
+        super(DTSLOG, self).__init__(self._logger, dict(node=self.node))

     def logger_exit(self) -> None:
         """
@@ -93,7 +97,7 @@ def logger_exit(self) -> None:
         """
         for handler in (self.sh, self.fh, self.verbose_fh):
             handler.flush()
-            self.logger.removeHandler(handler)
+            self._logger.removeHandler(handler)


 def getLogger(name: str, node: str = "suite") -> DTSLOG:

diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
index a227d8db22..747316c78a 100644
--- a/dts/framework/remote_session/__init__.py
+++ b/dts/framework/remote_session/__init__.py
@@ -1,14 +1,30 @@
 # SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
+# Copyright(c) 2023 PANTHEON.tech s.r.o.

-from framework.config import NodeConfiguration
+"""
+The package provides modules for managing remote connections to a remote host (node),
+differentiated by OS.
+The package provides a factory function, create_session, that returns the appropriate
+remote connection based on the passed configuration. The differences are in the
+underlying transport protocol (e.g. SSH) and remote OS (e.g. Linux).
+"""
+
+# pylama:ignore=W0611
+
+from framework.config import OS, NodeConfiguration
+from framework.exception import ConfigurationError
 from framework.logger import DTSLOG

-from .remote_session import RemoteSession
-from .ssh_session import SSHSession
+from .linux_session import LinuxSession
+from .os_session import OSSession
+from .remote import RemoteSession, SSHSession


-def create_remote_session(
+def create_session(
     node_config: NodeConfiguration, name: str, logger: DTSLOG
-) -> RemoteSession:
-    return SSHSession(node_config, name, logger)
+) -> OSSession:
+    match node_config.os:
+        case OS.linux:
+            return LinuxSession(node_config, name, logger)
+        case _:
+            raise ConfigurationError(f"Unsupported OS {node_config.os}")

diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/remote_session/linux_session.py
new file mode 100644
index 0000000000..9d14166077
--- /dev/null
+++ b/dts/framework/remote_session/linux_session.py
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2023 University of New Hampshire
+
+from .posix_session import PosixSession
+
+
+class LinuxSession(PosixSession):
+    """
+    The implementation of non-Posix compliant parts of Linux remote sessions.
+    """

diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py
new file mode 100644
index 0000000000..7a4cc5e669
--- /dev/null
+++ b/dts/framework/remote_session/os_session.py
@@ -0,0 +1,46 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2023 University of New Hampshire
+
+from abc import ABC
+
+from framework.config import NodeConfiguration
+from framework.logger import DTSLOG
+
+from .remote import RemoteSession, create_remote_session
+
+
+class OSSession(ABC):
+    """
+    The OS classes create a DTS node remote session and implement OS specific
+    behavior. There are a few control methods implemented by the base class, the rest
+    need to be implemented by derived classes.
+    """
+
+    _config: NodeConfiguration
+    name: str
+    _logger: DTSLOG
+    remote_session: RemoteSession
+
+    def __init__(
+        self,
+        node_config: NodeConfiguration,
+        name: str,
+        logger: DTSLOG,
+    ):
+        self._config = node_config
+        self.name = name
+        self._logger = logger
+        self.remote_session = create_remote_session(node_config, name, logger)
+
+    def close(self, force: bool = False) -> None:
+        """
+        Close the remote session.
+        """
+        self.remote_session.close(force)
+
+    def is_alive(self) -> bool:
+        """
+        Check whether the remote session is still responding.
+        """
+        return self.remote_session.is_alive()

diff --git a/dts/framework/remote_session/posix_session.py b/dts/framework/remote_session/posix_session.py
new file mode 100644
index 0000000000..110b6a4804
--- /dev/null
+++ b/dts/framework/remote_session/posix_session.py
@@ -0,0 +1,12 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2023 University of New Hampshire
+
+from .os_session import OSSession
+
+
+class PosixSession(OSSession):
+    """
+    An intermediary class implementing the Posix compliant parts of
+    Linux and other OS remote sessions.
+    """

diff --git a/dts/framework/remote_session/remote/__init__.py b/dts/framework/remote_session/remote/__init__.py
new file mode 100644
index 0000000000..f3092f8bbe
--- /dev/null
+++ b/dts/framework/remote_session/remote/__init__.py
@@ -0,0 +1,16 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+# pylama:ignore=W0611
+
+from framework.config import NodeConfiguration
+from framework.logger import DTSLOG
+
+from .remote_session import RemoteSession
+from .ssh_session import SSHSession
+
+
+def create_remote_session(
+    node_config: NodeConfiguration, name: str, logger: DTSLOG
+) -> RemoteSession:
+    return SSHSession(node_config, name, logger)

diff --git a/dts/framework/remote_session/remote_session.py b/dts/framework/remote_session/remote/remote_session.py
similarity index 61%
rename from dts/framework/remote_session/remote_session.py
rename to dts/framework/remote_session/remote/remote_session.py
index 33047d9d0a..7c7b30225f 100644
--- a/dts/framework/remote_session/remote_session.py
+++ b/dts/framework/remote_session/remote/remote_session.py
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2010-2014 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 University of New Hampshire

 import dataclasses
 from abc import ABC, abstractmethod
@@ -19,14 +19,23 @@ class HistoryRecord:


 class RemoteSession(ABC):
+    """
+    The base class for defining which methods must be implemented in order to connect
+    to a remote host (node) and maintain a remote session. The derived classes are
+    supposed to implement/use some underlying transport protocol (e.g. SSH) to
+    implement the methods. On top of that, it provides some basic services common to
+    all derived classes, such as keeping history and logging what's being executed
+    on the remote node.
+    """
+
     name: str
     hostname: str
     ip: str
     port: int | None
     username: str
     password: str
-    logger: DTSLOG
     history: list[HistoryRecord]
+    _logger: DTSLOG
     _node_config: NodeConfiguration

     def __init__(
@@ -46,31 +55,34 @@ def __init__(
         self.port = int(port)
         self.username = node_config.user
         self.password = node_config.password or ""
-        self.logger = logger
         self.history = []

-        self.logger.info(f"Connecting to {self.username}@{self.hostname}.")
+        self._logger = logger
+        self._logger.info(f"Connecting to {self.username}@{self.hostname}.")
         self._connect()
-        self.logger.info(f"Connection to {self.username}@{self.hostname} successful.")
+        self._logger.info(f"Connection to {self.username}@{self.hostname} successful.")

     @abstractmethod
     def _connect(self) -> None:
         """
         Create connection to assigned node.
         """
-        pass

     def send_command(self, command: str, timeout: float = SETTINGS.timeout) -> str:
-        self.logger.info(f"Sending: {command}")
+        """
+        Send a command and return the output.
+        """
+        self._logger.info(f"Sending: {command}")
         out = self._send_command(command, timeout)
-        self.logger.debug(f"Received from {command}: {out}")
+        self._logger.debug(f"Received from {command}: {out}")
         self._history_add(command=command, output=out)
         return out

     @abstractmethod
     def _send_command(self, command: str, timeout: float) -> str:
         """
-        Send a command and return the output.
+        Use the underlying protocol to execute the command and return the output
+        of the command.
         """

     def _history_add(self, command: str, output: str) -> None:
@@ -79,17 +91,20 @@ def _history_add(self, command: str, output: str) -> None:
         )

     def close(self, force: bool = False) -> None:
-        self.logger.logger_exit()
+        """
+        Close the remote session and free all used resources.
+        """
+        self._logger.logger_exit()
         self._close(force)

     @abstractmethod
     def _close(self, force: bool = False) -> None:
         """
-        Close the remote session, freeing all used resources.
+        Execute protocol specific steps needed to close the session properly.
         """

     @abstractmethod
     def is_alive(self) -> bool:
         """
-        Check whether the session is still responding.
+        Check whether the remote session is still responding.
         """

diff --git a/dts/framework/remote_session/ssh_session.py b/dts/framework/remote_session/remote/ssh_session.py
similarity index 91%
rename from dts/framework/remote_session/ssh_session.py
rename to dts/framework/remote_session/remote/ssh_session.py
index 7ec327054d..96175f5284 100644
--- a/dts/framework/remote_session/ssh_session.py
+++ b/dts/framework/remote_session/remote/ssh_session.py
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2010-2014 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 University of New Hampshire

 import time

@@ -17,7 +17,7 @@

 class SSHSession(RemoteSession):
     """
-    Module for creating Pexpect SSH sessions to a node.
+    Module for creating Pexpect SSH remote sessions.
     """

     session: pxssh.pxssh
@@ -56,9 +56,9 @@ def _connect(self) -> None:
                 )
                 break
             except Exception as e:
-                self.logger.warning(e)
+                self._logger.warning(e)
                 time.sleep(2)
-                self.logger.info(
+                self._logger.info(
                     f"Retrying connection: retry number {retry_attempt + 1}."
                 )
         else:
@@ -67,13 +67,13 @@ def _connect(self) -> None:
             self.send_expect("stty -echo", "#")
             self.send_expect("stty columns 1000", "#")
         except Exception as e:
-            self.logger.error(RED(str(e)))
+            self._logger.error(RED(str(e)))
             if getattr(self, "port", None):
                 suggestion = (
                     f"\nSuggestion: Check if the firewall on {self.hostname} is "
                     f"stopped.\n"
                 )
-                self.logger.info(GREEN(suggestion))
+                self._logger.info(GREEN(suggestion))

             raise SSHConnectionError(self.hostname)

@@ -87,8 +87,8 @@ def send_expect(
             try:
                 retval = int(ret_status)
                 if retval:
-                    self.logger.error(f"Command: {command} failure!")
-                    self.logger.error(ret)
+                    self._logger.error(f"Command: {command} failure!")
+                    self._logger.error(ret)
                     return retval
                 else:
                     return ret
@@ -97,7 +97,7 @@
             else:
                 return ret
         except Exception as e:
-            self.logger.error(
+            self._logger.error(
                 f"Exception happened in [{command}] and output is "
                 f"[{self._get_output()}]"
             )

diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index 800f2c7b7f..6422b23499 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -1,6 +1,6 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2010-2021 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022 University of New Hampshire

 import argparse
@@ -23,7 +23,7 @@ def __init__(
         default: str = None,
         type: Callable[[str], _T | argparse.FileType | None] = None,
         choices: Iterable[_T] | None = None,
-        required: bool = True,
+        required: bool = False,
         help: str | None = None,
         metavar: str | tuple[str, ...] | None = None,
     ) -> None:
@@ -63,13 +63,17 @@ class _Settings:


 def _get_parser() -> argparse.ArgumentParser:
-    parser = argparse.ArgumentParser(description="DPDK test framework.")
+    parser = argparse.ArgumentParser(
+        description="Run DPDK test suites. All options may be specified with "
+        "the environment variables provided in brackets. "
+        "Command line arguments have higher priority.",
+        formatter_class=argparse.ArgumentDefaultsHelpFormatter,
+    )

     parser.add_argument(
         "--config-file",
         action=_env_arg("DTS_CFG_FILE"),
         default="conf.yaml",
-        required=False,
         help="[DTS_CFG_FILE] configuration file that describes the test cases, SUTs "
         "and targets.",
     )
@@ -79,7 +83,6 @@ def _get_parser() -> argparse.ArgumentParser:
         "--output",
         action=_env_arg("DTS_OUTPUT_DIR"),
         default="output",
-        required=False,
         help="[DTS_OUTPUT_DIR] Output directory where dts logs and results are saved.",
     )

@@ -88,7 +91,6 @@ def _get_parser() -> argparse.ArgumentParser:
         "--timeout",
         action=_env_arg("DTS_TIMEOUT"),
         default=15,
-        required=False,
         help="[DTS_TIMEOUT] The default timeout for all DTS operations except for "
         "compiling DPDK.",
     )

@@ -98,7 +100,6 @@ def _get_parser() -> argparse.ArgumentParser:
         "--verbose",
         action=_env_arg("DTS_VERBOSE"),
         default="N",
-        required=False,
         help="[DTS_VERBOSE] Set to 'Y' to enable verbose output, logging all messages "
         "to the console.",
     )

diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
index c5512e5812..8ead9db482 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -1,7 +1,13 @@
 # SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 University of New Hampshire
+# Copyright(c) 2023 PANTHEON.tech s.r.o.

 """
-This module contains the classes used to model the physical traffic generator,
+This package contains the classes used to model the physical traffic generator,
 system under test and any other components that need to be interacted with.
""" + +# pylama:ignore=W0611 + +from .node import Node +from .sut_node import SutNode diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py index 8437975416..e1f06bc389 100644 --- a/dts/framework/testbed_model/node.py +++ b/dts/framework/testbed_model/node.py @@ -1,62 +1,119 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2010-2014 Intel Corporation -# Copyright(c) 2022 PANTHEON.tech s.r.o. -# Copyright(c) 2022 University of New Hampshire +# Copyright(c) 2022-2023 PANTHEON.tech s.r.o. +# Copyright(c) 2022-2023 University of New Hampshire """ A node is a generic host that DTS connects to and manages. """ -from framework.config import NodeConfiguration +from framework.config import ( + BuildTargetConfiguration, + ExecutionConfiguration, + NodeConfiguration, +) from framework.logger import DTSLOG, getLogger -from framework.remote_session import RemoteSession, create_remote_session -from framework.settings import SETTINGS +from framework.remote_session import OSSession, create_session class Node(object): """ - Basic module for node management. This module implements methods that + Basic class for node management. This class implements methods that manage a node, such as information gathering (of CPU/PCI/NIC) and environment setup. 
""" + main_session: OSSession + config: NodeConfiguration name: str - main_session: RemoteSession - logger: DTSLOG - _config: NodeConfiguration - _other_sessions: list[RemoteSession] + _logger: DTSLOG + _other_sessions: list[OSSession] def __init__(self, node_config: NodeConfiguration): - self._config = node_config + self.config = node_config + self.name = node_config.name + self._logger = getLogger(self.name) + self.main_session = create_session(self.config, self.name, self._logger) + self._other_sessions = [] - self.name = node_config.name - self.logger = getLogger(self.name) - self.logger.info(f"Created node: {self.name}") - self.main_session = create_remote_session(self._config, self.name, self.logger) + self._logger.info(f"Created node: {self.name}") - def send_command(self, cmds: str, timeout: float = SETTINGS.timeout) -> str: + def set_up_execution(self, execution_config: ExecutionConfiguration) -> None: """ - Send commands to node and return string before timeout. + Perform the execution setup that will be done for each execution + this node is part of. """ + self._set_up_execution(execution_config) - return self.main_session.send_command(cmds, timeout) + def _set_up_execution(self, execution_config: ExecutionConfiguration) -> None: + """ + This method exists to be optionally overwritten by derived classes and + is not decorated so that the derived class doesn't have to use the decorator. + """ - def create_session(self, name: str) -> RemoteSession: - connection = create_remote_session( - self._config, - name, - getLogger(name, node=self.name), + def tear_down_execution(self) -> None: + """ + Perform the execution teardown that will be done after each execution + this node is part of concludes. + """ + self._tear_down_execution() + + def _tear_down_execution(self) -> None: + """ + This method exists to be optionally overwritten by derived classes and + is not decorated so that the derived class doesn't have to use the decorator. 
+ """ + + def set_up_build_target( + self, build_target_config: BuildTargetConfiguration + ) -> None: + """ + Perform the build target setup that will be done for each build target + tested on this node. + """ + self._set_up_build_target(build_target_config) + + def _set_up_build_target( + self, build_target_config: BuildTargetConfiguration + ) -> None: + """ + This method exists to be optionally overwritten by derived classes and + is not decorated so that the derived class doesn't have to use the decorator. + """ + + def tear_down_build_target(self) -> None: + """ + Perform the build target teardown that will be done after each build target + tested on this node. + """ + self._tear_down_build_target() + + def _tear_down_build_target(self) -> None: + """ + This method exists to be optionally overwritten by derived classes and + is not decorated so that the derived class doesn't have to use the decorator. + """ + + def create_session(self, name: str) -> OSSession: + """ + Create and return a new OSSession tailored to the remote OS. + """ + session_name = f"{self.name} {name}" + connection = create_session( + self.config, + session_name, + getLogger(session_name, node=self.name), ) self._other_sessions.append(connection) return connection - def node_exit(self) -> None: + def close(self) -> None: """ - Recover all resource before node exit + Close all connections and free other resources. """ if self.main_session: self.main_session.close() for session in self._other_sessions: session.close() - self.logger.logger_exit() + self._logger.logger_exit() diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py new file mode 100644 index 0000000000..42acb6f9b2 --- /dev/null +++ b/dts/framework/testbed_model/sut_node.py @@ -0,0 +1,13 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2010-2014 Intel Corporation +# Copyright(c) 2023 PANTHEON.tech s.r.o. 
+ +from .node import Node + + +class SutNode(Node): + """ + A class for managing connections to the System under Test, providing + methods that retrieve the necessary information about the node (such as + CPU, memory and NIC details) and configuration capabilities. + """
From patchwork Fri Mar 3 10:24:59 2023
X-Patchwork-Id: 124783
From: Juraj Linkeš <juraj.linkes@pantheon.tech>
Subject: [PATCH v6 02/10] dts: add ssh command verification
Date: Fri, 3 Mar 2023 11:24:59 +0100
Message-Id: <20230303102507.527790-3-juraj.linkes@pantheon.tech>

This is a basic capability needed to check whether the command execution was successful or not. If not, raise a RemoteCommandExecutionError. When a failure is expected, the caller is supposed to catch the exception.
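The verification flow described above can be sketched as follows. This is a standalone illustration with simplified stand-ins for the types this patch adds, not the framework code itself: `send_command` here receives a pre-built result instead of executing anything over SSH.

```python
# Minimal sketch of the ssh command verification added in this patch.
# CommandResult and RemoteCommandExecutionError mirror the types the patch
# introduces, but are simplified stand-ins for illustration only.
from dataclasses import dataclass


class RemoteCommandExecutionError(Exception):
    """Raised when a command executed on a node returns a non-zero exit status."""

    def __init__(self, command: str, command_return_code: int):
        self.command = command
        self.command_return_code = command_return_code
        super().__init__(
            f"Command {command} returned a non-zero exit code: "
            f"{command_return_code}"
        )


@dataclass(frozen=True)
class CommandResult:
    command: str
    stdout: str
    stderr: str
    return_code: int


def send_command(result: CommandResult, verify: bool = False) -> CommandResult:
    # With verify=True a non-zero return code raises; with the default
    # verify=False the caller inspects return_code itself.
    if verify and result.return_code:
        raise RemoteCommandExecutionError(result.command, result.return_code)
    return result


# An expected failure is caught by the caller, as the commit message describes:
try:
    send_command(CommandResult("false", "", "", 1), verify=True)
except RemoteCommandExecutionError as e:
    assert e.command_return_code == 1
```

The point of the default `verify=False` is backward compatibility: callers that want the old fire-and-forget behaviour keep it, while test code opts into strict checking per call.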
Signed-off-by: Juraj Linkeš --- dts/framework/exception.py | 23 +++++++- .../remote_session/remote/remote_session.py | 55 +++++++++++++------ .../remote_session/remote/ssh_session.py | 12 +++- 3 files changed, 69 insertions(+), 21 deletions(-) diff --git a/dts/framework/exception.py b/dts/framework/exception.py index 121a0f7296..e776b42bd9 100644 --- a/dts/framework/exception.py +++ b/dts/framework/exception.py @@ -21,7 +21,8 @@ class ErrorSeverity(IntEnum): NO_ERR = 0 GENERIC_ERR = 1 CONFIG_ERR = 2 - SSH_ERR = 3 + REMOTE_CMD_EXEC_ERR = 3 + SSH_ERR = 4 class DTSError(Exception): @@ -90,3 +91,23 @@ class ConfigurationError(DTSError): """ severity: ClassVar[ErrorSeverity] = ErrorSeverity.CONFIG_ERR + + +class RemoteCommandExecutionError(DTSError): + """ + Raised when a command executed on a Node returns a non-zero exit status. + """ + + command: str + command_return_code: int + severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR + + def __init__(self, command: str, command_return_code: int): + self.command = command + self.command_return_code = command_return_code + + def __str__(self) -> str: + return ( + f"Command {self.command} returned a non-zero exit code: " + f"{self.command_return_code}" + ) diff --git a/dts/framework/remote_session/remote/remote_session.py b/dts/framework/remote_session/remote/remote_session.py index 7c7b30225f..5ac395ec79 100644 --- a/dts/framework/remote_session/remote/remote_session.py +++ b/dts/framework/remote_session/remote/remote_session.py @@ -7,15 +7,29 @@ from abc import ABC, abstractmethod from framework.config import NodeConfiguration +from framework.exception import RemoteCommandExecutionError from framework.logger import DTSLOG from framework.settings import SETTINGS @dataclasses.dataclass(slots=True, frozen=True) -class HistoryRecord: +class CommandResult: + """ + The result of remote execution of a command. 
+ """ + name: str command: str - output: str | int + stdout: str + stderr: str + return_code: int + + def __str__(self) -> str: + return ( + f"stdout: '{self.stdout}'\n" + f"stderr: '{self.stderr}'\n" + f"return_code: '{self.return_code}'" + ) class RemoteSession(ABC): @@ -34,7 +48,7 @@ class RemoteSession(ABC): port: int | None username: str password: str - history: list[HistoryRecord] + history: list[CommandResult] _logger: DTSLOG _node_config: NodeConfiguration @@ -68,28 +82,33 @@ def _connect(self) -> None: Create connection to assigned node. """ - def send_command(self, command: str, timeout: float = SETTINGS.timeout) -> str: + def send_command( + self, command: str, timeout: float = SETTINGS.timeout, verify: bool = False + ) -> CommandResult: """ - Send a command and return the output. + Send a command to the connected node and return CommandResult. + If verify is True, check the return code of the executed command + and raise a RemoteCommandExecutionError if the command failed. """ - self._logger.info(f"Sending: {command}") - out = self._send_command(command, timeout) - self._logger.debug(f"Received from {command}: {out}") - self._history_add(command=command, output=out) - return out + self._logger.info(f"Sending: '{command}'") + result = self._send_command(command, timeout) + if verify and result.return_code: + self._logger.debug( + f"Command '{command}' failed with return code '{result.return_code}'" + ) + self._logger.debug(f"stdout: '{result.stdout}'") + self._logger.debug(f"stderr: '{result.stderr}'") + raise RemoteCommandExecutionError(command, result.return_code) + self._logger.debug(f"Received from '{command}':\n{result}") + self.history.append(result) + return result @abstractmethod - def _send_command(self, command: str, timeout: float) -> str: + def _send_command(self, command: str, timeout: float) -> CommandResult: """ - Use the underlying protocol to execute the command and return the output - of the command. 
+ Use the underlying protocol to execute the command and return CommandResult. """ - def _history_add(self, command: str, output: str) -> None: - self.history.append( - HistoryRecord(name=self.name, command=command, output=output) - ) - def close(self, force: bool = False) -> None: """ Close the remote session and free all used resources. diff --git a/dts/framework/remote_session/remote/ssh_session.py b/dts/framework/remote_session/remote/ssh_session.py index 96175f5284..c2362e2fdf 100644 --- a/dts/framework/remote_session/remote/ssh_session.py +++ b/dts/framework/remote_session/remote/ssh_session.py @@ -12,7 +12,7 @@ from framework.logger import DTSLOG from framework.utils import GREEN, RED -from .remote_session import RemoteSession +from .remote_session import CommandResult, RemoteSession class SSHSession(RemoteSession): @@ -66,6 +66,7 @@ def _connect(self) -> None: self.send_expect("stty -echo", "#") self.send_expect("stty columns 1000", "#") + self.send_expect("bind 'set enable-bracketed-paste off'", "#") except Exception as e: self._logger.error(RED(str(e))) if getattr(self, "port", None): @@ -163,7 +164,14 @@ def _flush(self) -> None: def is_alive(self) -> bool: return self.session.isalive() - def _send_command(self, command: str, timeout: float) -> str: + def _send_command(self, command: str, timeout: float) -> CommandResult: + output = self._send_command_get_output(command, timeout) + return_code = int(self._send_command_get_output("echo $?", timeout)) + + # we're capturing only stdout + return CommandResult(self.name, command, output, "", return_code) + + def _send_command_get_output(self, command: str, timeout: float) -> str: try: self._clean_session() self._send_line(command)
From patchwork Fri Mar 3 10:25:00 2023
X-Patchwork-Id: 124784
From: Juraj Linkeš <juraj.linkes@pantheon.tech>
Subject: [PATCH v6 03/10] dts: add dpdk build on sut
Date: Fri, 3 Mar 2023 11:25:00 +0100
Message-Id: <20230303102507.527790-4-juraj.linkes@pantheon.tech>

Add the ability to build DPDK and apps on the SUT, using a configured target.
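The configure-then-build sequence this patch implements in `PosixSession.build_dpdk` can be sketched as follows. This is a simplified stand-in that only collects the shell commands instead of sending them over the remote session; the names mirror the patch, but it is an illustration, not the framework code.

```python
# Sketch of the DPDK build flow added in this patch: a fresh build removes
# the build dir and runs `meson setup`, a rebuild runs `meson configure`,
# and both finish with `ninja`. Commands are returned as strings here
# rather than executed over SSH.
def dpdk_build_commands(
    remote_dpdk_dir: str,
    remote_dpdk_build_dir: str,
    meson_args: str = "",
    rebuild: bool = False,
) -> list[str]:
    commands = []
    if rebuild:
        # reconfigure the existing build directory, then build
        commands.append(f"meson configure {meson_args} {remote_dpdk_build_dir}")
    else:
        # fresh build: remove the target dir first, then set up from scratch
        commands.append(f"rm -rf {remote_dpdk_build_dir}")
        commands.append(
            f"meson setup {meson_args} {remote_dpdk_dir} {remote_dpdk_build_dir}"
        )
    # build step is the same in both cases
    commands.append(f"ninja -C {remote_dpdk_build_dir}")
    return commands
```

In the patch itself each of these commands is sent with `verify=True`, so a failing `meson` or `ninja` step surfaces as a `RemoteCommandExecutionError`, which `build_dpdk` converts into a `DPDKBuildError`.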
Signed-off-by: Juraj Linkeš --- dts/framework/config/__init__.py | 2 + dts/framework/exception.py | 17 ++ dts/framework/remote_session/os_session.py | 89 +++++++++- dts/framework/remote_session/posix_session.py | 126 ++++++++++++++ .../remote_session/remote/remote_session.py | 38 ++++- .../remote_session/remote/ssh_session.py | 66 +++++++- dts/framework/settings.py | 44 ++++- dts/framework/testbed_model/node.py | 10 ++ dts/framework/testbed_model/sut_node.py | 158 ++++++++++++++++++ dts/framework/utils.py | 36 +++- 10 files changed, 570 insertions(+), 16 deletions(-) diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py index e3e2d74eac..ca61cb10fe 100644 --- a/dts/framework/config/__init__.py +++ b/dts/framework/config/__init__.py @@ -91,6 +91,7 @@ class BuildTargetConfiguration: os: OS cpu: CPUType compiler: Compiler + compiler_wrapper: str name: str @staticmethod @@ -100,6 +101,7 @@ def from_dict(d: dict) -> "BuildTargetConfiguration": os=OS(d["os"]), cpu=CPUType(d["cpu"]), compiler=Compiler(d["compiler"]), + compiler_wrapper=d.get("compiler_wrapper", ""), name=f"{d['arch']}-{d['os']}-{d['cpu']}-{d['compiler']}", ) diff --git a/dts/framework/exception.py b/dts/framework/exception.py index e776b42bd9..b4545a5a40 100644 --- a/dts/framework/exception.py +++ b/dts/framework/exception.py @@ -23,6 +23,7 @@ class ErrorSeverity(IntEnum): CONFIG_ERR = 2 REMOTE_CMD_EXEC_ERR = 3 SSH_ERR = 4 + DPDK_BUILD_ERR = 10 class DTSError(Exception): @@ -111,3 +112,19 @@ def __str__(self) -> str: f"Command {self.command} returned a non-zero exit code: " f"{self.command_return_code}" ) + + +class RemoteDirectoryExistsError(DTSError): + """ + Raised when a remote directory to be created already exists. + """ + + severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR + + +class DPDKBuildError(DTSError): + """ + Raised when DPDK build fails for any reason. 
+ """ + + severity: ClassVar[ErrorSeverity] = ErrorSeverity.DPDK_BUILD_ERR diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py index 7a4cc5e669..47e9f2889b 100644 --- a/dts/framework/remote_session/os_session.py +++ b/dts/framework/remote_session/os_session.py @@ -2,10 +2,13 @@ # Copyright(c) 2023 PANTHEON.tech s.r.o. # Copyright(c) 2023 University of New Hampshire -from abc import ABC +from abc import ABC, abstractmethod +from pathlib import PurePath -from framework.config import NodeConfiguration +from framework.config import Architecture, NodeConfiguration from framework.logger import DTSLOG +from framework.settings import SETTINGS +from framework.utils import EnvVarsDict, MesonArgs from .remote import RemoteSession, create_remote_session @@ -44,3 +47,85 @@ def is_alive(self) -> bool: Check whether the remote session is still responding. """ return self.remote_session.is_alive() + + @abstractmethod + def guess_dpdk_remote_dir(self, remote_dir) -> PurePath: + """ + Try to find DPDK remote dir in remote_dir. + """ + + @abstractmethod + def get_remote_tmp_dir(self) -> PurePath: + """ + Get the path of the temporary directory of the remote OS. + """ + + @abstractmethod + def get_dpdk_build_env_vars(self, arch: Architecture) -> dict: + """ + Create extra environment variables needed for the target architecture. Get + information from the node if needed. + """ + + @abstractmethod + def join_remote_path(self, *args: str | PurePath) -> PurePath: + """ + Join path parts using the path separator that fits the remote OS. + """ + + @abstractmethod + def copy_file( + self, + source_file: str | PurePath, + destination_file: str | PurePath, + source_remote: bool = False, + ) -> None: + """ + Copy source_file from local filesystem to destination_file + on the remote Node associated with the remote session. 
+ If source_remote is True, reverse the direction - copy source_file from the + associated remote Node to destination_file on local storage. + """ + + @abstractmethod + def remove_remote_dir( + self, + remote_dir_path: str | PurePath, + recursive: bool = True, + force: bool = True, + ) -> None: + """ + Remove remote directory, by default remove recursively and forcefully. + """ + + @abstractmethod + def extract_remote_tarball( + self, + remote_tarball_path: str | PurePath, + expected_dir: str | PurePath | None = None, + ) -> None: + """ + Extract remote tarball in place. If expected_dir is a non-empty string, check + whether the dir exists after extracting the archive. + """ + + @abstractmethod + def build_dpdk( + self, + env_vars: EnvVarsDict, + meson_args: MesonArgs, + remote_dpdk_dir: str | PurePath, + remote_dpdk_build_dir: str | PurePath, + rebuild: bool = False, + timeout: float = SETTINGS.compile_timeout, + ) -> None: + """ + Build DPDK in the input dir with specified environment variables and meson + arguments. + """ + + @abstractmethod + def get_dpdk_version(self, version_path: str | PurePath) -> str: + """ + Inspect DPDK version on the remote node from version_path. + """ diff --git a/dts/framework/remote_session/posix_session.py b/dts/framework/remote_session/posix_session.py index 110b6a4804..c2580f6a42 100644 --- a/dts/framework/remote_session/posix_session.py +++ b/dts/framework/remote_session/posix_session.py @@ -2,6 +2,13 @@ # Copyright(c) 2023 PANTHEON.tech s.r.o. 
# Copyright(c) 2023 University of New Hampshire +from pathlib import PurePath, PurePosixPath + +from framework.config import Architecture +from framework.exception import DPDKBuildError, RemoteCommandExecutionError +from framework.settings import SETTINGS +from framework.utils import EnvVarsDict, MesonArgs + from .os_session import OSSession @@ -10,3 +17,122 @@ class PosixSession(OSSession): An intermediary class implementing the Posix compliant parts of Linux and other OS remote sessions. """ + + @staticmethod + def combine_short_options(**opts: bool) -> str: + ret_opts = "" + for opt, include in opts.items(): + if include: + ret_opts = f"{ret_opts}{opt}" + + if ret_opts: + ret_opts = f" -{ret_opts}" + + return ret_opts + + def guess_dpdk_remote_dir(self, remote_dir) -> PurePosixPath: + remote_guess = self.join_remote_path(remote_dir, "dpdk-*") + result = self.remote_session.send_command(f"ls -d {remote_guess} | tail -1") + return PurePosixPath(result.stdout) + + def get_remote_tmp_dir(self) -> PurePosixPath: + return PurePosixPath("/tmp") + + def get_dpdk_build_env_vars(self, arch: Architecture) -> dict: + """ + Create extra environment variables needed for i686 arch build. Get information + from the node if needed. 
+ """ + env_vars = {} + if arch == Architecture.i686: + # find the pkg-config path and store it in PKG_CONFIG_LIBDIR + out = self.remote_session.send_command("find /usr -type d -name pkgconfig") + pkg_path = "" + res_path = out.stdout.split("\r\n") + for cur_path in res_path: + if "i386" in cur_path: + pkg_path = cur_path + break + assert pkg_path != "", "i386 pkg-config path not found" + + env_vars["CFLAGS"] = "-m32" + env_vars["PKG_CONFIG_LIBDIR"] = pkg_path + + return env_vars + + def join_remote_path(self, *args: str | PurePath) -> PurePosixPath: + return PurePosixPath(*args) + + def copy_file( + self, + source_file: str | PurePath, + destination_file: str | PurePath, + source_remote: bool = False, + ) -> None: + self.remote_session.copy_file(source_file, destination_file, source_remote) + + def remove_remote_dir( + self, + remote_dir_path: str | PurePath, + recursive: bool = True, + force: bool = True, + ) -> None: + opts = PosixSession.combine_short_options(r=recursive, f=force) + self.remote_session.send_command(f"rm{opts} {remote_dir_path}") + + def extract_remote_tarball( + self, + remote_tarball_path: str | PurePath, + expected_dir: str | PurePath | None = None, + ) -> None: + self.remote_session.send_command( + f"tar xfm {remote_tarball_path} " + f"-C {PurePosixPath(remote_tarball_path).parent}", + 60, + ) + if expected_dir: + self.remote_session.send_command(f"ls {expected_dir}", verify=True) + + def build_dpdk( + self, + env_vars: EnvVarsDict, + meson_args: MesonArgs, + remote_dpdk_dir: str | PurePath, + remote_dpdk_build_dir: str | PurePath, + rebuild: bool = False, + timeout: float = SETTINGS.compile_timeout, + ) -> None: + try: + if rebuild: + # reconfigure, then build + self._logger.info("Reconfiguring DPDK build.") + self.remote_session.send_command( + f"meson configure {meson_args} {remote_dpdk_build_dir}", + timeout, + verify=True, + env=env_vars, + ) + else: + # fresh build - remove target dir first, then build from scratch + 
self._logger.info("Configuring DPDK build from scratch.") + self.remove_remote_dir(remote_dpdk_build_dir) + self.remote_session.send_command( + f"meson setup " + f"{meson_args} {remote_dpdk_dir} {remote_dpdk_build_dir}", + timeout, + verify=True, + env=env_vars, + ) + + self._logger.info("Building DPDK.") + self.remote_session.send_command( + f"ninja -C {remote_dpdk_build_dir}", timeout, verify=True, env=env_vars + ) + except RemoteCommandExecutionError as e: + raise DPDKBuildError(f"DPDK build failed when doing '{e.command}'.") + + def get_dpdk_version(self, build_dir: str | PurePath) -> str: + out = self.remote_session.send_command( + f"cat {self.join_remote_path(build_dir, 'VERSION')}", verify=True + ) + return out.stdout diff --git a/dts/framework/remote_session/remote/remote_session.py b/dts/framework/remote_session/remote/remote_session.py index 5ac395ec79..91dee3cb4f 100644 --- a/dts/framework/remote_session/remote/remote_session.py +++ b/dts/framework/remote_session/remote/remote_session.py @@ -5,11 +5,13 @@ import dataclasses from abc import ABC, abstractmethod +from pathlib import PurePath from framework.config import NodeConfiguration from framework.exception import RemoteCommandExecutionError from framework.logger import DTSLOG from framework.settings import SETTINGS +from framework.utils import EnvVarsDict @dataclasses.dataclass(slots=True, frozen=True) @@ -83,15 +85,22 @@ def _connect(self) -> None: """ def send_command( - self, command: str, timeout: float = SETTINGS.timeout, verify: bool = False + self, + command: str, + timeout: float = SETTINGS.timeout, + verify: bool = False, + env: EnvVarsDict | None = None, ) -> CommandResult: """ - Send a command to the connected node and return CommandResult. + Send a command to the connected node using optional env vars + and return CommandResult. If verify is True, check the return code of the executed command and raise a RemoteCommandExecutionError if the command failed. 
""" - self._logger.info(f"Sending: '{command}'") - result = self._send_command(command, timeout) + self._logger.info( + f"Sending: '{command}'" + (f" with env vars: '{env}'" if env else "") + ) + result = self._send_command(command, timeout, env) if verify and result.return_code: self._logger.debug( f"Command '{command}' failed with return code '{result.return_code}'" @@ -104,9 +113,12 @@ def send_command( return result @abstractmethod - def _send_command(self, command: str, timeout: float) -> CommandResult: + def _send_command( + self, command: str, timeout: float, env: EnvVarsDict | None + ) -> CommandResult: """ - Use the underlying protocol to execute the command and return CommandResult. + Use the underlying protocol to execute the command using optional env vars + and return CommandResult. """ def close(self, force: bool = False) -> None: @@ -127,3 +139,17 @@ def is_alive(self) -> bool: """ Check whether the remote session is still responding. """ + + @abstractmethod + def copy_file( + self, + source_file: str | PurePath, + destination_file: str | PurePath, + source_remote: bool = False, + ) -> None: + """ + Copy source_file from local filesystem to destination_file on the remote Node + associated with the remote session. + If source_remote is True, reverse the direction - copy source_file from the + associated Node to destination_file on local filesystem. 
+ """ diff --git a/dts/framework/remote_session/remote/ssh_session.py b/dts/framework/remote_session/remote/ssh_session.py index c2362e2fdf..42ff9498a2 100644 --- a/dts/framework/remote_session/remote/ssh_session.py +++ b/dts/framework/remote_session/remote/ssh_session.py @@ -4,13 +4,15 @@ # Copyright(c) 2022-2023 University of New Hampshire import time +from pathlib import PurePath +import pexpect # type: ignore from pexpect import pxssh # type: ignore from framework.config import NodeConfiguration from framework.exception import SSHConnectionError, SSHSessionDeadError, SSHTimeoutError from framework.logger import DTSLOG -from framework.utils import GREEN, RED +from framework.utils import GREEN, RED, EnvVarsDict from .remote_session import CommandResult, RemoteSession @@ -164,16 +166,22 @@ def _flush(self) -> None: def is_alive(self) -> bool: return self.session.isalive() - def _send_command(self, command: str, timeout: float) -> CommandResult: - output = self._send_command_get_output(command, timeout) - return_code = int(self._send_command_get_output("echo $?", timeout)) + def _send_command( + self, command: str, timeout: float, env: EnvVarsDict | None + ) -> CommandResult: + output = self._send_command_get_output(command, timeout, env) + return_code = int(self._send_command_get_output("echo $?", timeout, None)) # we're capturing only stdout return CommandResult(self.name, command, output, "", return_code) - def _send_command_get_output(self, command: str, timeout: float) -> str: + def _send_command_get_output( + self, command: str, timeout: float, env: EnvVarsDict | None + ) -> str: try: self._clean_session() + if env: + command = f"{env} {command}" self._send_line(command) except Exception as e: raise e @@ -190,3 +198,51 @@ def _close(self, force: bool = False) -> None: else: if self.is_alive(): self.session.logout() + + def copy_file( + self, + source_file: str | PurePath, + destination_file: str | PurePath, + source_remote: bool = False, + ) -> None: + """ + 
Send a local file to a remote host. + """ + if source_remote: + source_file = f"{self.username}@{self.ip}:{source_file}" + else: + destination_file = f"{self.username}@{self.ip}:{destination_file}" + + port = "" + if self.port: + port = f" -P {self.port}" + + command = ( + f"scp -v{port} -o NoHostAuthenticationForLocalhost=yes" + f" {source_file} {destination_file}" + ) + + self._spawn_scp(command) + + def _spawn_scp(self, scp_cmd: str) -> None: + """ + Transfer a file with SCP + """ + self._logger.info(scp_cmd) + p: pexpect.spawn = pexpect.spawn(scp_cmd) + time.sleep(0.5) + ssh_newkey: str = "Are you sure you want to continue connecting" + i: int = p.expect( + [ssh_newkey, "[pP]assword", "# ", pexpect.EOF, pexpect.TIMEOUT], 120 + ) + if i == 0: # add once in trust list + p.sendline("yes") + i = p.expect([ssh_newkey, "[pP]assword", pexpect.EOF], 2) + + if i == 1: + time.sleep(0.5) + p.sendline(self.password) + p.expect("Exit status 0", 60) + if i == 4: + self._logger.error("SCP TIMEOUT error %d" % i) + p.close() diff --git a/dts/framework/settings.py b/dts/framework/settings.py index 6422b23499..f787187ade 100644 --- a/dts/framework/settings.py +++ b/dts/framework/settings.py @@ -7,8 +7,11 @@ import os from collections.abc import Callable, Iterable, Sequence from dataclasses import dataclass +from pathlib import Path from typing import Any, TypeVar +from .exception import ConfigurationError + _T = TypeVar("_T") @@ -60,6 +63,9 @@ class _Settings: output_dir: str timeout: float verbose: bool + skip_setup: bool + dpdk_tarball_path: Path + compile_timeout: float def _get_parser() -> argparse.ArgumentParser: @@ -91,6 +97,7 @@ def _get_parser() -> argparse.ArgumentParser: "--timeout", action=_env_arg("DTS_TIMEOUT"), default=15, + type=float, help="[DTS_TIMEOUT] The default timeout for all DTS operations except for " "compiling DPDK.", ) @@ -104,16 +111,51 @@ def _get_parser() -> argparse.ArgumentParser: "to the console.", ) + parser.add_argument( + "-s", + 
"--skip-setup", + action=_env_arg("DTS_SKIP_SETUP"), + default="N", + help="[DTS_SKIP_SETUP] Set to 'Y' to skip all setup steps on SUT and TG nodes.", + ) + + parser.add_argument( + "--tarball", + "--snapshot", + action=_env_arg("DTS_DPDK_TARBALL"), + default="dpdk.tar.xz", + type=Path, + help="[DTS_DPDK_TARBALL] Path to DPDK source code tarball " + "which will be used in testing.", + ) + + parser.add_argument( + "--compile-timeout", + action=_env_arg("DTS_COMPILE_TIMEOUT"), + default=1200, + type=float, + help="[DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK.", + ) + return parser +def _check_tarball_path(parsed_args: argparse.Namespace) -> None: + if not os.path.exists(parsed_args.tarball): + raise ConfigurationError(f"DPDK tarball '{parsed_args.tarball}' doesn't exist.") + + def _get_settings() -> _Settings: parsed_args = _get_parser().parse_args() + _check_tarball_path(parsed_args) return _Settings( config_file_path=parsed_args.config_file, output_dir=parsed_args.output_dir, - timeout=float(parsed_args.timeout), + timeout=parsed_args.timeout, verbose=(parsed_args.verbose == "Y"), + skip_setup=(parsed_args.skip_setup == "Y"), + dpdk_tarball_path=parsed_args.tarball, + compile_timeout=parsed_args.compile_timeout, ) diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py index e1f06bc389..a7059b5856 100644 --- a/dts/framework/testbed_model/node.py +++ b/dts/framework/testbed_model/node.py @@ -7,6 +7,8 @@ A node is a generic host that DTS connects to and manages. 
""" +from typing import Any, Callable + from framework.config import ( BuildTargetConfiguration, ExecutionConfiguration, @@ -14,6 +16,7 @@ ) from framework.logger import DTSLOG, getLogger from framework.remote_session import OSSession, create_session +from framework.settings import SETTINGS class Node(object): @@ -117,3 +120,10 @@ def close(self) -> None: for session in self._other_sessions: session.close() self._logger.logger_exit() + + @staticmethod + def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]: + if SETTINGS.skip_setup: + return lambda *args: None + else: + return func diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py index 42acb6f9b2..21da33d6b3 100644 --- a/dts/framework/testbed_model/sut_node.py +++ b/dts/framework/testbed_model/sut_node.py @@ -2,6 +2,14 @@ # Copyright(c) 2010-2014 Intel Corporation # Copyright(c) 2023 PANTHEON.tech s.r.o. +import os +import tarfile +from pathlib import PurePath + +from framework.config import BuildTargetConfiguration, NodeConfiguration +from framework.settings import SETTINGS +from framework.utils import EnvVarsDict, MesonArgs + from .node import Node @@ -10,4 +18,154 @@ class SutNode(Node): A class for managing connections to the System under Test, providing methods that retrieve the necessary information about the node (such as CPU, memory and NIC details) and configuration capabilities. + Another key capability is building DPDK according to given build target. 
""" + + _build_target_config: BuildTargetConfiguration | None + _env_vars: EnvVarsDict + _remote_tmp_dir: PurePath + __remote_dpdk_dir: PurePath | None + _dpdk_version: str | None + _app_compile_timeout: float + + def __init__(self, node_config: NodeConfiguration): + super(SutNode, self).__init__(node_config) + self._build_target_config = None + self._env_vars = EnvVarsDict() + self._remote_tmp_dir = self.main_session.get_remote_tmp_dir() + self.__remote_dpdk_dir = None + self._dpdk_version = None + self._app_compile_timeout = 90 + + @property + def _remote_dpdk_dir(self) -> PurePath: + if self.__remote_dpdk_dir is None: + self.__remote_dpdk_dir = self._guess_dpdk_remote_dir() + return self.__remote_dpdk_dir + + @_remote_dpdk_dir.setter + def _remote_dpdk_dir(self, value: PurePath) -> None: + self.__remote_dpdk_dir = value + + @property + def remote_dpdk_build_dir(self) -> PurePath: + if self._build_target_config: + return self.main_session.join_remote_path( + self._remote_dpdk_dir, self._build_target_config.name + ) + else: + return self.main_session.join_remote_path(self._remote_dpdk_dir, "build") + + @property + def dpdk_version(self) -> str: + if self._dpdk_version is None: + self._dpdk_version = self.main_session.get_dpdk_version( + self._remote_dpdk_dir + ) + return self._dpdk_version + + def _guess_dpdk_remote_dir(self) -> PurePath: + return self.main_session.guess_dpdk_remote_dir(self._remote_tmp_dir) + + def _set_up_build_target( + self, build_target_config: BuildTargetConfiguration + ) -> None: + """ + Setup DPDK on the SUT node. + """ + self._configure_build_target(build_target_config) + self._copy_dpdk_tarball() + self._build_dpdk() + + def _configure_build_target( + self, build_target_config: BuildTargetConfiguration + ) -> None: + """ + Populate common environment variables and set build target config. 
+ """ + self._env_vars = EnvVarsDict() + self._build_target_config = build_target_config + self._env_vars.update( + self.main_session.get_dpdk_build_env_vars(build_target_config.arch) + ) + self._env_vars["CC"] = build_target_config.compiler.name + if build_target_config.compiler_wrapper: + self._env_vars["CC"] = ( + f"'{build_target_config.compiler_wrapper} " + f"{build_target_config.compiler.name}'" + ) + + @Node.skip_setup + def _copy_dpdk_tarball(self) -> None: + """ + Copy to and extract DPDK tarball on the SUT node. + """ + self._logger.info("Copying DPDK tarball to SUT.") + self.main_session.copy_file(SETTINGS.dpdk_tarball_path, self._remote_tmp_dir) + + # construct remote tarball path + # the basename is the same on local host and on remote Node + remote_tarball_path = self.main_session.join_remote_path( + self._remote_tmp_dir, os.path.basename(SETTINGS.dpdk_tarball_path) + ) + + # construct remote path after extracting + with tarfile.open(SETTINGS.dpdk_tarball_path) as dpdk_tar: + dpdk_top_dir = dpdk_tar.getnames()[0] + self._remote_dpdk_dir = self.main_session.join_remote_path( + self._remote_tmp_dir, dpdk_top_dir + ) + + self._logger.info( + f"Extracting DPDK tarball on SUT: " + f"'{remote_tarball_path}' into '{self._remote_dpdk_dir}'." + ) + # clean remote path where we're extracting + self.main_session.remove_remote_dir(self._remote_dpdk_dir) + + # then extract to remote path + self.main_session.extract_remote_tarball( + remote_tarball_path, self._remote_dpdk_dir + ) + + @Node.skip_setup + def _build_dpdk(self) -> None: + """ + Build DPDK. Uses the already configured target. Assumes that the tarball has + already been copied to and extracted on the SUT node. 
+ """ + self.main_session.build_dpdk( + self._env_vars, + MesonArgs(default_library="static", enable_kmods=True, libdir="lib"), + self._remote_dpdk_dir, + self.remote_dpdk_build_dir, + ) + + def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePath: + """ + Build one or all DPDK apps. Requires DPDK to be already built on the SUT node. + When app_name is 'all', build all example apps. + When app_name is any other string, tries to build that example app. + Return the directory path of the built app. If building all apps, return + the path to the examples directory (where all apps reside). + The meson_dpdk_args are keyword arguments + found in meson_option.txt in root DPDK directory. Do not use -D with them, + for example: enable_kmods=True. + """ + self.main_session.build_dpdk( + self._env_vars, + MesonArgs(examples=app_name, **meson_dpdk_args), # type: ignore [arg-type] + # ^^ https://github.com/python/mypy/issues/11583 + self._remote_dpdk_dir, + self.remote_dpdk_build_dir, + rebuild=True, + timeout=self._app_compile_timeout, + ) + + if app_name == "all": + return self.main_session.join_remote_path( + self.remote_dpdk_build_dir, "examples" + ) + return self.main_session.join_remote_path( + self.remote_dpdk_build_dir, "examples", f"dpdk-{app_name}" + ) diff --git a/dts/framework/utils.py b/dts/framework/utils.py index c28c8f1082..0ed591ac23 100644 --- a/dts/framework/utils.py +++ b/dts/framework/utils.py @@ -1,7 +1,7 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2010-2014 Intel Corporation -# Copyright(c) 2022 PANTHEON.tech s.r.o. -# Copyright(c) 2022 University of New Hampshire +# Copyright(c) 2022-2023 PANTHEON.tech s.r.o. 
+# Copyright(c) 2022-2023 University of New Hampshire import sys @@ -28,3 +28,35 @@ def GREEN(text: str) -> str: def RED(text: str) -> str: return f"\u001B[31;1m{str(text)}\u001B[0m" + + +class EnvVarsDict(dict): + def __str__(self) -> str: + return " ".join(["=".join(item) for item in self.items()]) + + +class MesonArgs(object): + """ + Aggregate the arguments needed to build DPDK: + default_library: Default library type, Meson allows "shared", "static" and "both". + Defaults to None, in which case the argument won't be used. + Keyword arguments: The arguments found in meson_options.txt in root DPDK directory. + Do not use -D with them, for example: + meson_args = MesonArgs(enable_kmods=True). + """ + + _default_library: str + + def __init__(self, default_library: str | None = None, **dpdk_args: str | bool): + self._default_library = ( + f"--default-library={default_library}" if default_library else "" + ) + self._dpdk_args = " ".join( + ( + f"-D{dpdk_arg_name}={dpdk_arg_value}" + for dpdk_arg_name, dpdk_arg_value in dpdk_args.items() + ) + ) + + def __str__(self) -> str: + return " ".join(f"{self._default_library} {self._dpdk_args}".split()) From patchwork Fri Mar 3 10:25:01 2023 X-Patchwork-Submitter: Juraj Linkeš X-Patchwork-Id: 124785 X-Patchwork-Delegate: thomas@monjalon.net
From: Juraj Linkeš To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, lijuan.tu@intel.com, bruce.richardson@intel.com, probb@iol.unh.edu Cc: dev@dpdk.org, Juraj Linkeš Subject: [PATCH v6 04/10] dts: add dpdk execution handling Date: Fri, 3 Mar 2023 11:25:01 +0100 Message-Id: <20230303102507.527790-5-juraj.linkes@pantheon.tech> In-Reply-To: <20230303102507.527790-1-juraj.linkes@pantheon.tech> References: <20230223152840.634183-1-juraj.linkes@pantheon.tech> <20230303102507.527790-1-juraj.linkes@pantheon.tech> List-Id: DPDK patches and discussions Add methods for setting up and shutting down DPDK apps and for constructing EAL parameters.
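For context, "constructing EAL parameters" here means composing the standard DPDK EAL command-line options from the node configuration this patch adds (lcores, memory_channels, a runtime file prefix). A minimal sketch follows; the helper name and option subset are hypothetical and not the patch's actual API, though -l, -n and --file-prefix are standard EAL flags:

```python
def make_eal_args(lcore_list: str, memory_channels: int, file_prefix: str = "") -> str:
    """Compose a typical EAL argument string from testbed configuration.

    -l selects logical cores, -n sets the number of memory channels and
    --file-prefix isolates DPDK runtime dirs when multiple apps run at once.
    """
    args = [f"-l {lcore_list}", f"-n {memory_channels}"]
    if file_prefix:
        args.append(f"--file-prefix={file_prefix}")
    return " ".join(args)

print(make_eal_args("1-4", 4, "dts"))  # -l 1-4 -n 4 --file-prefix=dts
```

The file prefix matters for the cleanup path below: kill_cleanup_dpdk_apps() looks up runtime directories under /var/run/dpdk by exactly these prefixes.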
Signed-off-by: Juraj Linkeš --- dts/conf.yaml | 4 + dts/framework/config/__init__.py | 8 + dts/framework/config/conf_yaml_schema.json | 25 ++ dts/framework/remote_session/linux_session.py | 18 ++ dts/framework/remote_session/os_session.py | 22 ++ dts/framework/remote_session/posix_session.py | 83 ++++++ dts/framework/testbed_model/__init__.py | 8 + dts/framework/testbed_model/hw/__init__.py | 27 ++ dts/framework/testbed_model/hw/cpu.py | 274 ++++++++++++++++++ .../testbed_model/hw/virtual_device.py | 16 + dts/framework/testbed_model/node.py | 43 +++ dts/framework/testbed_model/sut_node.py | 128 ++++++++ dts/framework/utils.py | 20 ++ 13 files changed, 676 insertions(+) create mode 100644 dts/framework/testbed_model/hw/__init__.py create mode 100644 dts/framework/testbed_model/hw/cpu.py create mode 100644 dts/framework/testbed_model/hw/virtual_device.py diff --git a/dts/conf.yaml b/dts/conf.yaml index 03696d2bab..1648e5c3c5 100644 --- a/dts/conf.yaml +++ b/dts/conf.yaml @@ -13,4 +13,8 @@ nodes: - name: "SUT 1" hostname: sut1.change.me.localhost user: root + arch: x86_64 os: linux + lcores: "" + use_first_core: false + memory_channels: 4 diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py index ca61cb10fe..17b917f3b3 100644 --- a/dts/framework/config/__init__.py +++ b/dts/framework/config/__init__.py @@ -72,7 +72,11 @@ class NodeConfiguration: hostname: str user: str password: str | None + arch: Architecture os: OS + lcores: str + use_first_core: bool + memory_channels: int @staticmethod def from_dict(d: dict) -> "NodeConfiguration": @@ -81,7 +85,11 @@ def from_dict(d: dict) -> "NodeConfiguration": hostname=d["hostname"], user=d["user"], password=d.get("password"), + arch=Architecture(d["arch"]), os=OS(d["os"]), + lcores=d.get("lcores", "1"), + use_first_core=d.get("use_first_core", False), + memory_channels=d.get("memory_channels", 1), ) diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json 
index 9170307fbe..334b4bd8ab 100644 --- a/dts/framework/config/conf_yaml_schema.json +++ b/dts/framework/config/conf_yaml_schema.json @@ -6,6 +6,14 @@ "type": "string", "description": "A unique identifier for a node" }, + "ARCH": { + "type": "string", + "enum": [ + "x86_64", + "arm64", + "ppc64le" + ] + }, "OS": { "type": "string", "enum": [ @@ -92,8 +100,24 @@ "type": "string", "description": "The password to use on this node. Use only as a last resort. SSH keys are STRONGLY preferred." }, + "arch": { + "$ref": "#/definitions/ARCH" + }, "os": { "$ref": "#/definitions/OS" + }, + "lcores": { + "type": "string", + "pattern": "^(([0-9]+|([0-9]+-[0-9]+))(,([0-9]+|([0-9]+-[0-9]+)))*)?$", + "description": "Optional comma-separated list of logical cores to use, e.g.: 1,2,3,4,5,18-22. Defaults to 1. An empty string means use all lcores." + }, + "use_first_core": { + "type": "boolean", + "description": "Indicate whether DPDK should use the first physical core. It won't be used by default." + }, + "memory_channels": { + "type": "integer", + "description": "How many memory channels to use. Optional, defaults to 1." } }, "additionalProperties": false, @@ -101,6 +125,7 @@ "name", "hostname", "user", + "arch", "os" ] }, diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/remote_session/linux_session.py index 9d14166077..c49b6bb1d7 100644 --- a/dts/framework/remote_session/linux_session.py +++ b/dts/framework/remote_session/linux_session.py @@ -2,6 +2,8 @@ # Copyright(c) 2023 PANTHEON.tech s.r.o. # Copyright(c) 2023 University of New Hampshire +from framework.testbed_model import LogicalCore + from .posix_session import PosixSession @@ -9,3 +11,19 @@ class LinuxSession(PosixSession): """ The implementation of non-Posix compliant parts of Linux remote sessions. 
""" + + def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]: + cpu_info = self.remote_session.send_command( + "lscpu -p=CPU,CORE,SOCKET,NODE|grep -v \\#" + ).stdout + lcores = [] + for cpu_line in cpu_info.splitlines(): + lcore, core, socket, node = map(int, cpu_line.split(",")) + if core == 0 and socket == 0 and not use_first_core: + self._logger.info("Not using the first physical core.") + continue + lcores.append(LogicalCore(lcore, core, socket, node)) + return lcores + + def get_dpdk_file_prefix(self, dpdk_prefix) -> str: + return dpdk_prefix diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py index 47e9f2889b..0a42f40a86 100644 --- a/dts/framework/remote_session/os_session.py +++ b/dts/framework/remote_session/os_session.py @@ -3,11 +3,13 @@ # Copyright(c) 2023 University of New Hampshire from abc import ABC, abstractmethod +from collections.abc import Iterable from pathlib import PurePath from framework.config import Architecture, NodeConfiguration from framework.logger import DTSLOG from framework.settings import SETTINGS +from framework.testbed_model import LogicalCore from framework.utils import EnvVarsDict, MesonArgs from .remote import RemoteSession, create_remote_session @@ -129,3 +131,23 @@ def get_dpdk_version(self, version_path: str | PurePath) -> str: """ Inspect DPDK version on the remote node from version_path. """ + + @abstractmethod + def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]: + """ + Compose a list of LogicalCores present on the remote node. + If use_first_core is False, the first physical core won't be used. + """ + + @abstractmethod + def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None: + """ + Kill and cleanup all DPDK apps identified by dpdk_prefix_list. If + dpdk_prefix_list is empty, attempt to find running DPDK apps to kill and clean. 
+ """ + + @abstractmethod + def get_dpdk_file_prefix(self, dpdk_prefix) -> str: + """ + Get the DPDK file prefix that will be used when running DPDK apps. + """ diff --git a/dts/framework/remote_session/posix_session.py b/dts/framework/remote_session/posix_session.py index c2580f6a42..d38062e8d6 100644 --- a/dts/framework/remote_session/posix_session.py +++ b/dts/framework/remote_session/posix_session.py @@ -2,6 +2,8 @@ # Copyright(c) 2023 PANTHEON.tech s.r.o. # Copyright(c) 2023 University of New Hampshire +import re +from collections.abc import Iterable from pathlib import PurePath, PurePosixPath from framework.config import Architecture @@ -136,3 +138,84 @@ def get_dpdk_version(self, build_dir: str | PurePath) -> str: f"cat {self.join_remote_path(build_dir, 'VERSION')}", verify=True ) return out.stdout + + def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None: + self._logger.info("Cleaning up DPDK apps.") + dpdk_runtime_dirs = self._get_dpdk_runtime_dirs(dpdk_prefix_list) + if dpdk_runtime_dirs: + # kill and cleanup only if DPDK is running + dpdk_pids = self._get_dpdk_pids(dpdk_runtime_dirs) + for dpdk_pid in dpdk_pids: + self.remote_session.send_command(f"kill -9 {dpdk_pid}", 20) + self._check_dpdk_hugepages(dpdk_runtime_dirs) + self._remove_dpdk_runtime_dirs(dpdk_runtime_dirs) + + def _get_dpdk_runtime_dirs( + self, dpdk_prefix_list: Iterable[str] + ) -> list[PurePosixPath]: + prefix = PurePosixPath("/var", "run", "dpdk") + if not dpdk_prefix_list: + remote_prefixes = self._list_remote_dirs(prefix) + if not remote_prefixes: + dpdk_prefix_list = [] + else: + dpdk_prefix_list = remote_prefixes + + return [PurePosixPath(prefix, dpdk_prefix) for dpdk_prefix in dpdk_prefix_list] + + def _list_remote_dirs(self, remote_path: str | PurePath) -> list[str] | None: + """ + Return a list of directories of the remote_dir. + If remote_path doesn't exist, return None. 
+ """ + out = self.remote_session.send_command( + f"ls -l {remote_path} | awk '/^d/ {{print $NF}}'" + ).stdout + if "No such file or directory" in out: + return None + else: + return out.splitlines() + + def _get_dpdk_pids(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> list[int]: + pids = [] + pid_regex = r"p(\d+)" + for dpdk_runtime_dir in dpdk_runtime_dirs: + dpdk_config_file = PurePosixPath(dpdk_runtime_dir, "config") + if self._remote_files_exists(dpdk_config_file): + out = self.remote_session.send_command( + f"lsof -Fp {dpdk_config_file}" + ).stdout + if out and "No such file or directory" not in out: + for out_line in out.splitlines(): + match = re.match(pid_regex, out_line) + if match: + pids.append(int(match.group(1))) + return pids + + def _remote_files_exists(self, remote_path: PurePath) -> bool: + result = self.remote_session.send_command(f"test -e {remote_path}") + return not result.return_code + + def _check_dpdk_hugepages( + self, dpdk_runtime_dirs: Iterable[str | PurePath] + ) -> None: + for dpdk_runtime_dir in dpdk_runtime_dirs: + hugepage_info = PurePosixPath(dpdk_runtime_dir, "hugepage_info") + if self._remote_files_exists(hugepage_info): + out = self.remote_session.send_command( + f"lsof -Fp {hugepage_info}" + ).stdout + if out and "No such file or directory" not in out: + self._logger.warning("Some DPDK processes did not free hugepages.") + self._logger.warning("*******************************************") + self._logger.warning(out) + self._logger.warning("*******************************************") + + def _remove_dpdk_runtime_dirs( + self, dpdk_runtime_dirs: Iterable[str | PurePath] + ) -> None: + for dpdk_runtime_dir in dpdk_runtime_dirs: + self.remove_remote_dir(dpdk_runtime_dir) + + def get_dpdk_file_prefix(self, dpdk_prefix) -> str: + return "" diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py index 8ead9db482..5be3e4c48d 100644 --- a/dts/framework/testbed_model/__init__.py +++ 
b/dts/framework/testbed_model/__init__.py @@ -9,5 +9,13 @@ # pylama:ignore=W0611 +from .hw import ( + LogicalCore, + LogicalCoreCount, + LogicalCoreList, + LogicalCoreListFilter, + VirtualDevice, + lcore_filter, +) from .node import Node from .sut_node import SutNode diff --git a/dts/framework/testbed_model/hw/__init__.py b/dts/framework/testbed_model/hw/__init__.py new file mode 100644 index 0000000000..88ccac0b0e --- /dev/null +++ b/dts/framework/testbed_model/hw/__init__.py @@ -0,0 +1,27 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2023 PANTHEON.tech s.r.o. + +# pylama:ignore=W0611 + +from .cpu import ( + LogicalCore, + LogicalCoreCount, + LogicalCoreCountFilter, + LogicalCoreFilter, + LogicalCoreList, + LogicalCoreListFilter, +) +from .virtual_device import VirtualDevice + + +def lcore_filter( + core_list: list[LogicalCore], + filter_specifier: LogicalCoreCount | LogicalCoreList, + ascending: bool, +) -> LogicalCoreFilter: + if isinstance(filter_specifier, LogicalCoreList): + return LogicalCoreListFilter(core_list, filter_specifier, ascending) + elif isinstance(filter_specifier, LogicalCoreCount): + return LogicalCoreCountFilter(core_list, filter_specifier, ascending) + else: + raise ValueError(f"Unsupported filter r{filter_specifier}") diff --git a/dts/framework/testbed_model/hw/cpu.py b/dts/framework/testbed_model/hw/cpu.py new file mode 100644 index 0000000000..d1918a12dc --- /dev/null +++ b/dts/framework/testbed_model/hw/cpu.py @@ -0,0 +1,274 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2023 PANTHEON.tech s.r.o. + +import dataclasses +from abc import ABC, abstractmethod +from collections.abc import Iterable, ValuesView +from dataclasses import dataclass + +from framework.utils import expand_range + + +@dataclass(slots=True, frozen=True) +class LogicalCore(object): + """ + Representation of a CPU core. A physical core is represented in OS + by multiple logical cores (lcores) if CPU multithreading is enabled. 
+ """ + + lcore: int + core: int + socket: int + node: int + + def __int__(self) -> int: + return self.lcore + + +class LogicalCoreList(object): + """ + Convert these options into a list of logical core ids. + lcore_list=[LogicalCore1, LogicalCore2] - a list of LogicalCores + lcore_list=[0,1,2,3] - a list of int indices + lcore_list=['0','1','2-3'] - a list of str indices; ranges are supported + lcore_list='0,1,2-3' - a comma delimited str of indices; ranges are supported + + The class creates a unified format used across the framework and allows + the user to use either a str representation (using str(instance) or directly + in f-strings) or a list representation (by accessing instance.lcore_list). + Empty lcore_list is allowed. + """ + + _lcore_list: list[int] + _lcore_str: str + + def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | str): + self._lcore_list = [] + if isinstance(lcore_list, str): + lcore_list = lcore_list.split(",") + for lcore in lcore_list: + if isinstance(lcore, str): + self._lcore_list.extend(expand_range(lcore)) + else: + self._lcore_list.append(int(lcore)) + + # the input lcores may not be sorted + self._lcore_list.sort() + self._lcore_str = ( + f'{",".join(self._get_consecutive_lcores_range(self._lcore_list))}' + ) + + @property + def lcore_list(self) -> list[int]: + return self._lcore_list + + def _get_consecutive_lcores_range(self, lcore_ids_list: list[int]) -> list[str]: + formatted_core_list = [] + segment = lcore_ids_list[:1] + for lcore_id in lcore_ids_list[1:]: + if lcore_id - segment[-1] == 1: + segment.append(lcore_id) + else: + formatted_core_list.append( + f"{segment[0]}-{segment[-1]}" + if len(segment) > 1 + else f"{segment[0]}" + ) + current_core_index = lcore_ids_list.index(lcore_id) + formatted_core_list.extend( + self._get_consecutive_lcores_range( + lcore_ids_list[current_core_index:] + ) + ) + segment.clear() + break + if len(segment) > 0: + formatted_core_list.append( + f"{segment[0]}-{segment[-1]}" 
if len(segment) > 1 else f"{segment[0]}" + ) + return formatted_core_list + + def __str__(self) -> str: + return self._lcore_str + + +@dataclasses.dataclass(slots=True, frozen=True) +class LogicalCoreCount(object): + """ + Define the number of logical cores to use. + If sockets is not None, socket_count is ignored. + """ + + lcores_per_core: int = 1 + cores_per_socket: int = 2 + socket_count: int = 1 + sockets: list[int] | None = None + + +class LogicalCoreFilter(ABC): + """ + Filter according to the input filter specifier. Each filter needs to be + implemented in a derived class. + This class only implements operations common to all filters, such as sorting + the list to be filtered beforehand. + """ + + _filter_specifier: LogicalCoreCount | LogicalCoreList + _lcores_to_filter: list[LogicalCore] + + def __init__( + self, + lcore_list: list[LogicalCore], + filter_specifier: LogicalCoreCount | LogicalCoreList, + ascending: bool = True, + ): + self._filter_specifier = filter_specifier + + # sorting by core is needed in case hyperthreading is enabled + self._lcores_to_filter = sorted( + lcore_list, key=lambda x: x.core, reverse=not ascending + ) + self.filter() + + @abstractmethod + def filter(self) -> list[LogicalCore]: + """ + Use self._filter_specifier to filter self._lcores_to_filter + and return the list of filtered LogicalCores. + self._lcores_to_filter is a sorted copy of the original list, + so it may be modified. + """ + + +class LogicalCoreCountFilter(LogicalCoreFilter): + """ + Filter the input list of LogicalCores according to specified rules: + Use cores from the specified number of sockets or from the specified socket ids. + If sockets is specified, it takes precedence over socket_count. + From each of those sockets, use only cores_per_socket of cores. + And for each core, use lcores_per_core of logical cores. Hypertheading + must be enabled for this to take effect. 
+ If ascending is True, use cores with the lowest numerical id first + and continue in ascending order. If False, start with the highest + id and continue in descending order. This ordering affects which + sockets to consider first as well. + """ + + _filter_specifier: LogicalCoreCount + + def filter(self) -> list[LogicalCore]: + sockets_to_filter = self._filter_sockets(self._lcores_to_filter) + filtered_lcores = [] + for socket_to_filter in sockets_to_filter: + filtered_lcores.extend(self._filter_cores_from_socket(socket_to_filter)) + return filtered_lcores + + def _filter_sockets( + self, lcores_to_filter: Iterable[LogicalCore] + ) -> ValuesView[list[LogicalCore]]: + """ + Remove all lcores that don't match the specified socket(s). + If self._filter_specifier.sockets is not None, keep lcores from those sockets, + otherwise keep lcores from the first + self._filter_specifier.socket_count sockets. + """ + allowed_sockets: set[int] = set() + socket_count = self._filter_specifier.socket_count + if self._filter_specifier.sockets: + socket_count = len(self._filter_specifier.sockets) + allowed_sockets = set(self._filter_specifier.sockets) + + filtered_lcores: dict[int, list[LogicalCore]] = {} + for lcore in lcores_to_filter: + if not self._filter_specifier.sockets: + if len(allowed_sockets) < socket_count: + allowed_sockets.add(lcore.socket) + if lcore.socket in allowed_sockets: + if lcore.socket in filtered_lcores: + filtered_lcores[lcore.socket].append(lcore) + else: + filtered_lcores[lcore.socket] = [lcore] + + if len(allowed_sockets) < socket_count: + raise ValueError( + f"The actual number of sockets from which to use cores " + f"({len(allowed_sockets)}) is lower than required ({socket_count})." + ) + + return filtered_lcores.values() + + def _filter_cores_from_socket( + self, lcores_to_filter: Iterable[LogicalCore] + ) -> list[LogicalCore]: + """ + Keep only the first self._filter_specifier.cores_per_socket cores. 
+        In multithreaded environments, keep only
+        the first self._filter_specifier.lcores_per_core lcores of those cores.
+        """
+
+        # no need for an ordered dict: since Python 3.7,
+        # regular dicts preserve insertion order
+        lcore_count_per_core_map: dict[int, int] = {}
+        filtered_lcores = []
+        for lcore in lcores_to_filter:
+            if lcore.core in lcore_count_per_core_map:
+                current_core_lcore_count = lcore_count_per_core_map[lcore.core]
+                if self._filter_specifier.lcores_per_core > current_core_lcore_count:
+                    # only add lcores of the given core
+                    lcore_count_per_core_map[lcore.core] += 1
+                    filtered_lcores.append(lcore)
+                else:
+                    # we have enough lcores per this core
+                    continue
+            elif self._filter_specifier.cores_per_socket > len(
+                lcore_count_per_core_map
+            ):
+                # only add cores if we need more
+                lcore_count_per_core_map[lcore.core] = 1
+                filtered_lcores.append(lcore)
+            else:
+                # we have enough cores
+                break
+
+        cores_per_socket = len(lcore_count_per_core_map)
+        if cores_per_socket < self._filter_specifier.cores_per_socket:
+            raise ValueError(
+                f"The actual number of cores per socket ({cores_per_socket}) "
+                f"is lower than required ({self._filter_specifier.cores_per_socket})."
+            )
+
+        lcores_per_core = lcore_count_per_core_map[filtered_lcores[-1].core]
+        if lcores_per_core < self._filter_specifier.lcores_per_core:
+            raise ValueError(
+                f"The actual number of logical cores per core ({lcores_per_core}) "
+                f"is lower than required ({self._filter_specifier.lcores_per_core})."
+            )
+
+        return filtered_lcores
+
+
+class LogicalCoreListFilter(LogicalCoreFilter):
+    """
+    Filter the input list of Logical Cores according to the input list of
+    lcore indices.
+    An empty LogicalCoreList won't filter anything.
+ """ + + _filter_specifier: LogicalCoreList + + def filter(self) -> list[LogicalCore]: + if not len(self._filter_specifier.lcore_list): + return self._lcores_to_filter + + filtered_lcores = [] + for core in self._lcores_to_filter: + if core.lcore in self._filter_specifier.lcore_list: + filtered_lcores.append(core) + + if len(filtered_lcores) != len(self._filter_specifier.lcore_list): + raise ValueError( + f"Not all logical cores from {self._filter_specifier.lcore_list} " + f"were found among {self._lcores_to_filter}" + ) + + return filtered_lcores diff --git a/dts/framework/testbed_model/hw/virtual_device.py b/dts/framework/testbed_model/hw/virtual_device.py new file mode 100644 index 0000000000..eb664d9f17 --- /dev/null +++ b/dts/framework/testbed_model/hw/virtual_device.py @@ -0,0 +1,16 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2023 PANTHEON.tech s.r.o. + + +class VirtualDevice(object): + """ + Base class for virtual devices used by DPDK. + """ + + name: str + + def __init__(self, name: str): + self.name = name + + def __str__(self) -> str: + return self.name diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py index a7059b5856..f63b755801 100644 --- a/dts/framework/testbed_model/node.py +++ b/dts/framework/testbed_model/node.py @@ -18,6 +18,14 @@ from framework.remote_session import OSSession, create_session from framework.settings import SETTINGS +from .hw import ( + LogicalCore, + LogicalCoreCount, + LogicalCoreList, + LogicalCoreListFilter, + lcore_filter, +) + class Node(object): """ @@ -29,6 +37,7 @@ class Node(object): main_session: OSSession config: NodeConfiguration name: str + lcores: list[LogicalCore] _logger: DTSLOG _other_sessions: list[OSSession] @@ -38,6 +47,12 @@ def __init__(self, node_config: NodeConfiguration): self._logger = getLogger(self.name) self.main_session = create_session(self.config, self.name, self._logger) + self._get_remote_cpus() + # filter the node lcores according to user 
config + self.lcores = LogicalCoreListFilter( + self.lcores, LogicalCoreList(self.config.lcores) + ).filter() + self._other_sessions = [] self._logger.info(f"Created node: {self.name}") @@ -111,6 +126,34 @@ def create_session(self, name: str) -> OSSession: self._other_sessions.append(connection) return connection + def filter_lcores( + self, + filter_specifier: LogicalCoreCount | LogicalCoreList, + ascending: bool = True, + ) -> list[LogicalCore]: + """ + Filter the LogicalCores found on the Node according to + a LogicalCoreCount or a LogicalCoreList. + + If ascending is True, use cores with the lowest numerical id first + and continue in ascending order. If False, start with the highest + id and continue in descending order. This ordering affects which + sockets to consider first as well. + """ + self._logger.debug(f"Filtering {filter_specifier} from {self.lcores}.") + return lcore_filter( + self.lcores, + filter_specifier, + ascending, + ).filter() + + def _get_remote_cpus(self) -> None: + """ + Scan CPUs in the remote OS and store a list of LogicalCores. + """ + self._logger.info("Getting CPU information.") + self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core) + def close(self) -> None: """ Close all connections and free other resources. 
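The lcore spec handling introduced above (accepting ints, strings and ranges, then emitting a compacted string) can be sketched in isolation. The functions below are simplified stand-ins mirroring the semantics of expand_range and LogicalCoreList, not the framework classes themselves:

```python
def expand_range(range_str):
    # "2" -> [2]; "1-3" -> [1, 2, 3]; "" -> []
    if not range_str:
        return []
    lo, _, hi = range_str.partition("-")
    return list(range(int(lo), int(hi or lo) + 1))

def parse_lcores(spec):
    # accept "0,1,2-3", ["0", "2-3"] or [0, 1, 2]; return sorted ids
    if isinstance(spec, str):
        spec = spec.split(",")
    ids = []
    for item in spec:
        ids.extend(expand_range(item) if isinstance(item, str) else [int(item)])
    return sorted(ids)

def compact(ids):
    # group consecutive ids back into "n-m" segments
    segments = []
    for i in ids:
        if segments and i - segments[-1][-1] == 1:
            segments[-1].append(i)
        else:
            segments.append([i])
    return ",".join(f"{s[0]}-{s[-1]}" if len(s) > 1 else f"{s[0]}" for s in segments)

print(compact(parse_lcores("3,0,1-2,5")))  # -> 0-3,5
```

The round-trip shows why sorting first matters: the compacted string is only correct once the ids are in ascending order.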
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py index 21da33d6b3..3672f5f6e5 100644 --- a/dts/framework/testbed_model/sut_node.py +++ b/dts/framework/testbed_model/sut_node.py @@ -4,12 +4,15 @@ import os import tarfile +import time from pathlib import PurePath from framework.config import BuildTargetConfiguration, NodeConfiguration +from framework.remote_session import OSSession from framework.settings import SETTINGS from framework.utils import EnvVarsDict, MesonArgs +from .hw import LogicalCoreCount, LogicalCoreList, VirtualDevice from .node import Node @@ -21,21 +24,29 @@ class SutNode(Node): Another key capability is building DPDK according to given build target. """ + _dpdk_prefix_list: list[str] + _dpdk_timestamp: str _build_target_config: BuildTargetConfiguration | None _env_vars: EnvVarsDict _remote_tmp_dir: PurePath __remote_dpdk_dir: PurePath | None _dpdk_version: str | None _app_compile_timeout: float + _dpdk_kill_session: OSSession | None def __init__(self, node_config: NodeConfiguration): super(SutNode, self).__init__(node_config) + self._dpdk_prefix_list = [] self._build_target_config = None self._env_vars = EnvVarsDict() self._remote_tmp_dir = self.main_session.get_remote_tmp_dir() self.__remote_dpdk_dir = None self._dpdk_version = None self._app_compile_timeout = 90 + self._dpdk_kill_session = None + self._dpdk_timestamp = ( + f"{str(os.getpid())}_{time.strftime('%Y%m%d%H%M%S', time.localtime())}" + ) @property def _remote_dpdk_dir(self) -> PurePath: @@ -169,3 +180,120 @@ def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePa return self.main_session.join_remote_path( self.remote_dpdk_build_dir, "examples", f"dpdk-{app_name}" ) + + def kill_cleanup_dpdk_apps(self) -> None: + """ + Kill all dpdk applications on the SUT. Cleanup hugepages. 
+ """ + if self._dpdk_kill_session and self._dpdk_kill_session.is_alive(): + # we can use the session if it exists and responds + self._dpdk_kill_session.kill_cleanup_dpdk_apps(self._dpdk_prefix_list) + else: + # otherwise, we need to (re)create it + self._dpdk_kill_session = self.create_session("dpdk_kill") + self._dpdk_prefix_list = [] + + def create_eal_parameters( + self, + lcore_filter_specifier: LogicalCoreCount | LogicalCoreList = LogicalCoreCount(), + ascending_cores: bool = True, + prefix: str = "dpdk", + append_prefix_timestamp: bool = True, + no_pci: bool = False, + vdevs: list[VirtualDevice] = None, + other_eal_param: str = "", + ) -> "EalParameters": + """ + Generate eal parameters character string; + :param lcore_filter_specifier: a number of lcores/cores/sockets to use + or a list of lcore ids to use. + The default will select one lcore for each of two cores + on one socket, in ascending order of core ids. + :param ascending_cores: True, use cores with the lowest numerical id first + and continue in ascending order. If False, start with the + highest id and continue in descending order. This ordering + affects which sockets to consider first as well. + :param prefix: set file prefix string, eg: + prefix='vf' + :param append_prefix_timestamp: if True, will append a timestamp to + DPDK file prefix. 
+        :param no_pci: if True, disable the PCI bus (adds --no-pci), e.g.:
+            no_pci=True
+        :param vdevs: virtual device list, e.g.:
+            vdevs=[
+                VirtualDevice('net_ring0'),
+                VirtualDevice('net_ring1')
+            ]
+        :param other_eal_param: user-defined DPDK EAL parameters, e.g.:
+            other_eal_param='--single-file-segments'
+        :return: the EAL parameter string, e.g.:
+            '-c 0xf -a 0000:88:00.0 --file-prefix=dpdk_1112_20190809143420'
+        """
+
+        lcore_list = LogicalCoreList(
+            self.filter_lcores(lcore_filter_specifier, ascending_cores)
+        )
+
+        if append_prefix_timestamp:
+            prefix = f"{prefix}_{self._dpdk_timestamp}"
+        prefix = self.main_session.get_dpdk_file_prefix(prefix)
+        if prefix:
+            self._dpdk_prefix_list.append(prefix)
+
+        if vdevs is None:
+            vdevs = []
+
+        return EalParameters(
+            lcore_list=lcore_list,
+            memory_channels=self.config.memory_channels,
+            prefix=prefix,
+            no_pci=no_pci,
+            vdevs=vdevs,
+            other_eal_param=other_eal_param,
+        )
+
+
+class EalParameters(object):
+    def __init__(
+        self,
+        lcore_list: LogicalCoreList,
+        memory_channels: int,
+        prefix: str,
+        no_pci: bool,
+        vdevs: list[VirtualDevice],
+        other_eal_param: str,
+    ):
+        """
+        Store the EAL parameters and render them as a string.
+        :param lcore_list: the list of logical cores to use.
+        :param memory_channels: the number of memory channels to use.
+        :param prefix: file prefix string, e.g.:
+            prefix='vf'
+        :param no_pci: if True, disable the PCI bus (adds --no-pci), e.g.:
+            no_pci=True
+        :param vdevs: virtual device list, e.g.:
+            vdevs=[
+                VirtualDevice('net_ring0'),
+                VirtualDevice('net_ring1')
+            ]
+        :param other_eal_param: user-defined DPDK EAL parameters, e.g.:
+            other_eal_param='--single-file-segments'
+        """
+        self._lcore_list = f"-l {lcore_list}"
+        self._memory_channels = f"-n {memory_channels}"
+        self._prefix = prefix
+        if prefix:
+            self._prefix = f"--file-prefix={prefix}"
+        self._no_pci = "--no-pci" if no_pci else ""
+        self._vdevs = " ".join(f"--vdev {vdev}" for vdev in vdevs)
+        self._other_eal_param = other_eal_param
+
+    def __str__(self) -> str:
+        return (
+            f"{self._lcore_list} "
+            f"{self._memory_channels} "
+            f"{self._prefix} "
+            f"{self._no_pci} "
+            f"{self._vdevs} "
+            f"{self._other_eal_param}"
+        )
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index 0ed591ac23..55e0b0ef0e 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -22,6 +22,26 @@ def check_dts_python_version() -> None:
         print(RED("Please use Python >= 3.10 instead"), file=sys.stderr)

+def expand_range(range_str: str) -> list[int]:
+    """
+    Process a range string into a list of integers. There are two possible formats:
+        n   - a single integer
+        n-m - a range of integers
+
+    The returned range includes both n and m. An empty string returns an empty list.
+ """ + expanded_range: list[int] = [] + if range_str: + range_boundaries = range_str.split("-") + # will throw an exception when items in range_boundaries can't be converted, + # serving as type check + expanded_range.extend( + range(int(range_boundaries[0]), int(range_boundaries[-1]) + 1) + ) + + return expanded_range + + def GREEN(text: str) -> str: return f"\u001B[32;1m{str(text)}\u001B[0m" From patchwork Fri Mar 3 10:25:02 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 124786 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 6544A41DC6; Fri, 3 Mar 2023 11:25:55 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 7BFB642D32; Fri, 3 Mar 2023 11:25:20 +0100 (CET) Received: from mail-ed1-f43.google.com (mail-ed1-f43.google.com [209.85.208.43]) by mails.dpdk.org (Postfix) with ESMTP id 7CB6742B7E for ; Fri, 3 Mar 2023 11:25:16 +0100 (CET) Received: by mail-ed1-f43.google.com with SMTP id cw28so8328750edb.5 for ; Fri, 03 Mar 2023 02:25:16 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon-tech.20210112.gappssmtp.com; s=20210112; t=1677839116; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=uny7UNuyfO2yw71vaDPsYze0+L26AjhOb3Uecg+HSwI=; b=K10kHHiTA0zJxuh6tTyQZ2qqw7ooIZMxBtFB4SzzQi2MnV7BN3pcKE85EqIF0RqrSH /4CYIfwGh/G3rkU8G2+egqPOFptgkWj7yY7H2MbMH+laTDtoSL/B4s9zYXcLJrZkfGcy wIFjwgm7SOk2l63wU6xGzj7u3GQPZyvKi8ohErS8zQRY7oGOFJ+9eN8AnyBxL1cQG9uU NhM3csWXFcMFZGNJYWbVSloROoFCZ6W1zbolMWyNMEfg6aaR7OwEqzDGDNPwQ1qzMGLe yObZkasMKXNzOTnokyrDtWrnH+APnyNzfBfbeLW8OZod++6Z+VwcFS4gZOPefCgi/jCa Tk7Q== 
From: Juraj Linkeš
To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, lijuan.tu@intel.com, bruce.richardson@intel.com, probb@iol.unh.edu
Subject: [PATCH v6 05/10] dts: add node memory setup
Date: Fri, 3 Mar 2023 11:25:02 +0100
Message-Id: <20230303102507.527790-6-juraj.linkes@pantheon.tech>

Setup hugepages on nodes. This is useful not only on SUT nodes, but also
on TG nodes which use TGs that utilize hugepages. The setup is opt-in,
i.e. users need to supply hugepage configuration to instruct DTS to
configure them. If it is not configured, hugepage configuration will be
skipped. This is helpful if users don't want DTS to tamper with hugepages
on their system.
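The opt-in behaviour described here can be illustrated with a small sketch mirroring the HugepageConfiguration dataclass and the NodeConfiguration.from_dict handling in this patch; parse_hugepages is a hypothetical helper name, not a function from the patch:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HugepageConfiguration:  # mirrors the dataclass added in config/__init__.py
    amount: int
    force_first_numa: bool

def parse_hugepages(node_dict):
    cfg = node_dict.get("hugepages")
    if cfg is None:
        return None  # key absent: DTS leaves the system hugepage setup untouched
    return HugepageConfiguration(cfg["amount"], cfg.get("force_first_numa", False))

print(parse_hugepages({"hugepages": {"amount": 256}}))
# -> HugepageConfiguration(amount=256, force_first_numa=False)
print(parse_hugepages({}))  # -> None
```

Returning None rather than a default configuration is what makes the feature opt-in: downstream code only calls setup_hugepages when the attribute is set.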
Signed-off-by: Juraj Linkeš --- dts/conf.yaml | 3 + dts/framework/config/__init__.py | 14 ++++ dts/framework/config/conf_yaml_schema.json | 21 +++++ dts/framework/remote_session/linux_session.py | 78 +++++++++++++++++++ dts/framework/remote_session/os_session.py | 8 ++ dts/framework/testbed_model/node.py | 12 +++ 6 files changed, 136 insertions(+) diff --git a/dts/conf.yaml b/dts/conf.yaml index 1648e5c3c5..6540a45ef7 100644 --- a/dts/conf.yaml +++ b/dts/conf.yaml @@ -18,3 +18,6 @@ nodes: lcores: "" use_first_core: false memory_channels: 4 + hugepages: # optional; if removed, will use system hugepage configuration + amount: 256 + force_first_numa: false diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py index 17b917f3b3..0e5f493c5d 100644 --- a/dts/framework/config/__init__.py +++ b/dts/framework/config/__init__.py @@ -66,6 +66,12 @@ class Compiler(StrEnum): # # Frozen makes the object immutable. This enables further optimizations, # and makes it thread safe should we every want to move in that direction. 
+@dataclass(slots=True, frozen=True) +class HugepageConfiguration: + amount: int + force_first_numa: bool + + @dataclass(slots=True, frozen=True) class NodeConfiguration: name: str @@ -77,9 +83,16 @@ class NodeConfiguration: lcores: str use_first_core: bool memory_channels: int + hugepages: HugepageConfiguration | None @staticmethod def from_dict(d: dict) -> "NodeConfiguration": + hugepage_config = d.get("hugepages") + if hugepage_config: + if "force_first_numa" not in hugepage_config: + hugepage_config["force_first_numa"] = False + hugepage_config = HugepageConfiguration(**hugepage_config) + return NodeConfiguration( name=d["name"], hostname=d["hostname"], @@ -90,6 +103,7 @@ def from_dict(d: dict) -> "NodeConfiguration": lcores=d.get("lcores", "1"), use_first_core=d.get("use_first_core", False), memory_channels=d.get("memory_channels", 1), + hugepages=hugepage_config, ) diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json index 334b4bd8ab..56f93def36 100644 --- a/dts/framework/config/conf_yaml_schema.json +++ b/dts/framework/config/conf_yaml_schema.json @@ -75,6 +75,24 @@ "cpu", "compiler" ] + }, + "hugepages": { + "type": "object", + "description": "Optional hugepage configuration. If not specified, hugepages won't be configured and DTS will use system configuration.", + "properties": { + "amount": { + "type": "integer", + "description": "The amount of hugepages to configure. Hugepage size will be the system default." + }, + "force_first_numa": { + "type": "boolean", + "description": "Set to True to force configuring hugepages on the first NUMA node. Defaults to False." + } + }, + "additionalProperties": false, + "required": [ + "amount" + ] } }, "type": "object", @@ -118,6 +136,9 @@ "memory_channels": { "type": "integer", "description": "How many memory channels to use. Optional, defaults to 1." 
+ }, + "hugepages": { + "$ref": "#/definitions/hugepages" } }, "additionalProperties": false, diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/remote_session/linux_session.py index c49b6bb1d7..a1e3bc3a92 100644 --- a/dts/framework/remote_session/linux_session.py +++ b/dts/framework/remote_session/linux_session.py @@ -2,7 +2,9 @@ # Copyright(c) 2023 PANTHEON.tech s.r.o. # Copyright(c) 2023 University of New Hampshire +from framework.exception import RemoteCommandExecutionError from framework.testbed_model import LogicalCore +from framework.utils import expand_range from .posix_session import PosixSession @@ -27,3 +29,79 @@ def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]: def get_dpdk_file_prefix(self, dpdk_prefix) -> str: return dpdk_prefix + + def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None: + self._logger.info("Getting Hugepage information.") + hugepage_size = self._get_hugepage_size() + hugepages_total = self._get_hugepages_total() + self._numa_nodes = self._get_numa_nodes() + + if force_first_numa or hugepages_total != hugepage_amount: + # when forcing numa, we need to clear existing hugepages regardless + # of size, so they can be moved to the first numa node + self._configure_huge_pages(hugepage_amount, hugepage_size, force_first_numa) + else: + self._logger.info("Hugepages already configured.") + self._mount_huge_pages() + + def _get_hugepage_size(self) -> int: + hugepage_size = self.remote_session.send_command( + "awk '/Hugepagesize/ {print $2}' /proc/meminfo" + ).stdout + return int(hugepage_size) + + def _get_hugepages_total(self) -> int: + hugepages_total = self.remote_session.send_command( + "awk '/HugePages_Total/ { print $2 }' /proc/meminfo" + ).stdout + return int(hugepages_total) + + def _get_numa_nodes(self) -> list[int]: + try: + numa_count = self.remote_session.send_command( + "cat /sys/devices/system/node/online", verify=True + ).stdout + numa_range = 
expand_range(numa_count) + except RemoteCommandExecutionError: + # the file doesn't exist, meaning the node doesn't support numa + numa_range = [] + return numa_range + + def _mount_huge_pages(self) -> None: + self._logger.info("Re-mounting Hugepages.") + hugapge_fs_cmd = "awk '/hugetlbfs/ { print $2 }' /proc/mounts" + self.remote_session.send_command(f"umount $({hugapge_fs_cmd})") + result = self.remote_session.send_command(hugapge_fs_cmd) + if result.stdout == "": + remote_mount_path = "/mnt/huge" + self.remote_session.send_command(f"mkdir -p {remote_mount_path}") + self.remote_session.send_command( + f"mount -t hugetlbfs nodev {remote_mount_path}" + ) + + def _supports_numa(self) -> bool: + # the system supports numa if self._numa_nodes is non-empty and there are more + # than one numa node (in the latter case it may actually support numa, but + # there's no reason to do any numa specific configuration) + return len(self._numa_nodes) > 1 + + def _configure_huge_pages( + self, amount: int, size: int, force_first_numa: bool + ) -> None: + self._logger.info("Configuring Hugepages.") + hugepage_config_path = ( + f"/sys/kernel/mm/hugepages/hugepages-{size}kB/nr_hugepages" + ) + if force_first_numa and self._supports_numa(): + # clear non-numa hugepages + self.remote_session.send_command( + f"echo 0 | sudo tee {hugepage_config_path}" + ) + hugepage_config_path = ( + f"/sys/devices/system/node/node{self._numa_nodes[0]}/hugepages" + f"/hugepages-{size}kB/nr_hugepages" + ) + + self.remote_session.send_command( + f"echo {amount} | sudo tee {hugepage_config_path}" + ) diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py index 0a42f40a86..048bf7178e 100644 --- a/dts/framework/remote_session/os_session.py +++ b/dts/framework/remote_session/os_session.py @@ -151,3 +151,11 @@ def get_dpdk_file_prefix(self, dpdk_prefix) -> str: """ Get the DPDK file prefix that will be used when running DPDK apps. 
""" + + @abstractmethod + def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None: + """ + Get the node's Hugepage Size, configure the specified amount of hugepages + if needed and mount the hugepages if needed. + If force_first_numa is True, configure hugepages just on the first socket. + """ diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py index f63b755801..d48fafe65d 100644 --- a/dts/framework/testbed_model/node.py +++ b/dts/framework/testbed_model/node.py @@ -62,6 +62,7 @@ def set_up_execution(self, execution_config: ExecutionConfiguration) -> None: Perform the execution setup that will be done for each execution this node is part of. """ + self._setup_hugepages() self._set_up_execution(execution_config) def _set_up_execution(self, execution_config: ExecutionConfiguration) -> None: @@ -154,6 +155,17 @@ def _get_remote_cpus(self) -> None: self._logger.info("Getting CPU information.") self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core) + def _setup_hugepages(self): + """ + Setup hugepages on the Node. Different architectures can supply different + amounts of memory for hugepages and numa-based hugepage allocation may need + to be considered. + """ + if self.config.hugepages: + self.main_session.setup_hugepages( + self.config.hugepages.amount, self.config.hugepages.force_first_numa + ) + def close(self) -> None: """ Close all connections and free other resources. 
From patchwork Fri Mar 3 10:25:03 2023
From: Juraj Linkeš
To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, lijuan.tu@intel.com, bruce.richardson@intel.com, probb@iol.unh.edu
Subject: [PATCH v6 06/10] dts: add test suite module
Date: Fri, 3 Mar 2023 11:25:03 +0100
Message-Id: <20230303102507.527790-7-juraj.linkes@pantheon.tech>

The module implements the base class that all test suites inherit from.
It implements methods common to all test suites. The derived test suites
implement test cases and any particular setup needed for the suite or
tests.
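A minimal sketch of how a derived suite plugs into this base class; MiniTestSuite and HelloWorldSuite are illustrative stand-ins (the real TestSuite additionally handles logging, result collection, fixtures and SUT sessions):

```python
class MiniTestSuite:
    """Stand-in for framework.test_suite.TestSuite."""

    def verify(self, condition, failure_description):
        # the real verify() also logs recent SUT commands before raising
        if not condition:
            raise AssertionError(failure_description)

    def run_functional(self):
        # collect test_* methods, skipping test_perf_* (performance cases)
        ran = []
        for name in sorted(dir(self)):
            if name.startswith("test_") and not name.startswith("test_perf_"):
                getattr(self, name)()
                ran.append(name)
        return ran

class HelloWorldSuite(MiniTestSuite):
    def test_echo(self):
        output = "hello"  # would come from a command executed on the SUT node
        self.verify(output == "hello", "echo output mismatch")

    def test_perf_throughput(self):
        pass  # perf case: only selected when performance testing is enabled

print(HelloWorldSuite().run_functional())  # -> ['test_echo']
```

This shows the naming convention the commit message describes: the test_ prefix marks a test case, and the test_perf_ prefix further marks it as a performance case.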
Signed-off-by: Juraj Linkeš
---
 dts/conf.yaml                              |   2 +
 dts/framework/config/__init__.py           |   4 +
 dts/framework/config/conf_yaml_schema.json |  10 +
 dts/framework/exception.py                 |  16 ++
 dts/framework/settings.py                  |  24 +++
 dts/framework/test_suite.py                | 228 +++++++++++++++++++++
 6 files changed, 284 insertions(+)
 create mode 100644 dts/framework/test_suite.py

diff --git a/dts/conf.yaml b/dts/conf.yaml
index 6540a45ef7..75e33e8ccf 100644
--- a/dts/conf.yaml
+++ b/dts/conf.yaml
@@ -8,6 +8,8 @@ executions:
         cpu: native
         compiler: gcc
         compiler_wrapper: ccache
+    perf: false
+    func: true
     system_under_test: "SUT 1"
 nodes:
   - name: "SUT 1"
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index 0e5f493c5d..544fceca6a 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -131,6 +131,8 @@ def from_dict(d: dict) -> "BuildTargetConfiguration":
 @dataclass(slots=True, frozen=True)
 class ExecutionConfiguration:
     build_targets: list[BuildTargetConfiguration]
+    perf: bool
+    func: bool
     system_under_test: NodeConfiguration

     @staticmethod
@@ -143,6 +145,8 @@ def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration":

         return ExecutionConfiguration(
             build_targets=build_targets,
+            perf=d["perf"],
+            func=d["func"],
             system_under_test=node_map[sut_name],
         )
diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
index 56f93def36..878ca3aec2 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -164,6 +164,14 @@
         },
         "minimum": 1
       },
+      "perf": {
+        "type": "boolean",
+        "description": "Enable performance testing."
+      },
+      "func": {
+        "type": "boolean",
+        "description": "Enable functional testing."
+      },
       "system_under_test": {
         "$ref": "#/definitions/node_name"
       }
@@ -171,6 +179,8 @@
     "additionalProperties": false,
     "required": [
       "build_targets",
+      "perf",
+      "func",
       "system_under_test"
     ]
   },
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index b4545a5a40..ca353d98fc 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -24,6 +24,7 @@ class ErrorSeverity(IntEnum):
     REMOTE_CMD_EXEC_ERR = 3
     SSH_ERR = 4
     DPDK_BUILD_ERR = 10
+    TESTCASE_VERIFY_ERR = 20


 class DTSError(Exception):
@@ -128,3 +129,18 @@ class DPDKBuildError(DTSError):
     """

     severity: ClassVar[ErrorSeverity] = ErrorSeverity.DPDK_BUILD_ERR
+
+
+class TestCaseVerifyError(DTSError):
+    """
+    Used in test cases to verify the expected behavior.
+    """
+
+    value: str
+    severity: ClassVar[ErrorSeverity] = ErrorSeverity.TESTCASE_VERIFY_ERR
+
+    def __init__(self, value: str):
+        self.value = value
+
+    def __str__(self) -> str:
+        return repr(self.value)
diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index f787187ade..4ccc98537d 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -66,6 +66,8 @@ class _Settings:
     skip_setup: bool
     dpdk_tarball_path: Path
     compile_timeout: float
+    test_cases: list
+    re_run: int


 def _get_parser() -> argparse.ArgumentParser:
@@ -137,6 +139,26 @@ def _get_parser() -> argparse.ArgumentParser:
         help="[DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK.",
     )

+    parser.add_argument(
+        "--test-cases",
+        action=_env_arg("DTS_TESTCASES"),
+        default="",
+        required=False,
+        help="[DTS_TESTCASES] Comma-separated list of test cases to execute. "
+        "Unknown test cases will be silently ignored.",
+    )
+
+    parser.add_argument(
+        "--re-run",
+        "--re_run",
+        action=_env_arg("DTS_RERUN"),
+        default=0,
+        type=int,
+        required=False,
+        help="[DTS_RERUN] Re-run each test case the specified amount of times "
+        "if a test failure occurs",
+    )
+
     return parser


@@ -156,6 +178,8 @@ def _get_settings() -> _Settings:
         skip_setup=(parsed_args.skip_setup == "Y"),
         dpdk_tarball_path=parsed_args.tarball,
         compile_timeout=parsed_args.compile_timeout,
+        test_cases=parsed_args.test_cases.split(",") if parsed_args.test_cases else [],
+        re_run=parsed_args.re_run,
     )
diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
new file mode 100644
index 0000000000..9002d43297
--- /dev/null
+++ b/dts/framework/test_suite.py
@@ -0,0 +1,228 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+"""
+Base class for creating DTS test cases.
+"""
+
+import inspect
+import re
+from collections.abc import MutableSequence
+from types import MethodType
+
+from .exception import SSHTimeoutError, TestCaseVerifyError
+from .logger import DTSLOG, getLogger
+from .settings import SETTINGS
+from .testbed_model import SutNode
+
+
+class TestSuite(object):
+    """
+    The base TestSuite class provides methods for handling basic flow of a test suite:
+        * test case filtering and collection
+        * test suite setup/cleanup
+        * test setup/cleanup
+        * test case execution
+        * error handling and results storage
+    Test cases are implemented by derived classes. Test cases are all methods
+    starting with test_, further divided into performance test cases
+    (starting with test_perf_) and functional test cases (all other test cases).
+    By default, all test cases will be executed. A list of testcase str names
+    may be specified in conf.yaml or on the command line
+    to filter which test cases to run.
+    The methods named [set_up|tear_down]_[suite|test_case] should be overridden
+    in derived classes if the appropriate suite/test case fixtures are needed.
+    """
+
+    sut_node: SutNode
+    _logger: DTSLOG
+    _test_cases_to_run: list[str]
+    _func: bool
+    _errors: MutableSequence[Exception]
+
+    def __init__(
+        self,
+        sut_node: SutNode,
+        test_cases: list[str],
+        func: bool,
+        errors: MutableSequence[Exception],
+    ):
+        self.sut_node = sut_node
+        self._logger = getLogger(self.__class__.__name__)
+        self._test_cases_to_run = test_cases
+        self._test_cases_to_run.extend(SETTINGS.test_cases)
+        self._func = func
+        self._errors = errors
+
+    def set_up_suite(self) -> None:
+        """
+        Set up test fixtures common to all test cases; this is done before
+        any test case is run.
+        """
+
+    def tear_down_suite(self) -> None:
+        """
+        Tear down the previously created test fixtures common to all test cases.
+        """
+
+    def set_up_test_case(self) -> None:
+        """
+        Set up test fixtures before each test case.
+        """
+
+    def tear_down_test_case(self) -> None:
+        """
+        Tear down the previously created test fixtures after each test case.
+        """
+
+    def verify(self, condition: bool, failure_description: str) -> None:
+        if not condition:
+            self._logger.debug(
+                "A test case failed, showing the last 10 commands executed on SUT:"
+            )
+            for command_res in self.sut_node.main_session.remote_session.history[-10:]:
+                self._logger.debug(command_res.command)
+            raise TestCaseVerifyError(failure_description)
+
+    def run(self) -> None:
+        """
+        Setup, execute and teardown the whole suite.
+        Suite execution consists of running all test cases scheduled to be executed.
+        A test case run consists of setup, execution and teardown of said test case.
+        """
+        test_suite_name = self.__class__.__name__
+
+        try:
+            self._logger.info(f"Starting test suite setup: {test_suite_name}")
+            self.set_up_suite()
+            self._logger.info(f"Test suite setup successful: {test_suite_name}")
+        except Exception as e:
+            self._logger.exception(f"Test suite setup ERROR: {test_suite_name}")
+            self._errors.append(e)
+
+        else:
+            self._execute_test_suite()
+
+        finally:
+            try:
+                self.tear_down_suite()
+                self.sut_node.kill_cleanup_dpdk_apps()
+            except Exception as e:
+                self._logger.exception(f"Test suite teardown ERROR: {test_suite_name}")
+                self._logger.warning(
+                    f"Test suite '{test_suite_name}' teardown failed, "
+                    f"the next test suite may be affected."
+                )
+                self._errors.append(e)
+
+    def _execute_test_suite(self) -> None:
+        """
+        Execute all test cases scheduled to be executed in this suite.
+        """
+        if self._func:
+            for test_case_method in self._get_functional_test_cases():
+                all_attempts = SETTINGS.re_run + 1
+                attempt_nr = 1
+                while (
+                    not self._run_test_case(test_case_method)
+                    and attempt_nr < all_attempts
+                ):
+                    attempt_nr += 1
+                    self._logger.info(
+                        f"Re-running FAILED test case '{test_case_method.__name__}'. "
+                        f"Attempt number {attempt_nr} out of {all_attempts}."
+                    )
+
+    def _get_functional_test_cases(self) -> list[MethodType]:
+        """
+        Get all functional test cases.
+        """
+        return self._get_test_cases(r"test_(?!perf_)")
+
+    def _get_test_cases(self, test_case_regex: str) -> list[MethodType]:
+        """
+        Return a list of test cases matching test_case_regex.
+        """
+        self._logger.debug(f"Searching for test cases in {self.__class__.__name__}.")
+        filtered_test_cases = []
+        for test_case_name, test_case in inspect.getmembers(self, inspect.ismethod):
+            if self._should_be_executed(test_case_name, test_case_regex):
+                filtered_test_cases.append(test_case)
+        cases_str = ", ".join((x.__name__ for x in filtered_test_cases))
+        self._logger.debug(
+            f"Found test cases '{cases_str}' in {self.__class__.__name__}."
+        )
+        return filtered_test_cases
+
+    def _should_be_executed(self, test_case_name: str, test_case_regex: str) -> bool:
+        """
+        Check whether the test case should be executed.
+        """
+        match = bool(re.match(test_case_regex, test_case_name))
+        if self._test_cases_to_run:
+            return match and test_case_name in self._test_cases_to_run
+
+        return match
+
+    def _run_test_case(self, test_case_method: MethodType) -> bool:
+        """
+        Setup, execute and teardown a test case in this suite.
+        Exceptions are caught and recorded in logs.
+        """
+        test_case_name = test_case_method.__name__
+        result = False
+
+        try:
+            # run set_up function for each case
+            self.set_up_test_case()
+        except SSHTimeoutError as e:
+            self._logger.exception(f"Test case setup FAILED: {test_case_name}")
+            self._errors.append(e)
+        except Exception as e:
+            self._logger.exception(f"Test case setup ERROR: {test_case_name}")
+            self._errors.append(e)
+
+        else:
+            # run test case if setup was successful
+            result = self._execute_test_case(test_case_method)
+
+        finally:
+            try:
+                self.tear_down_test_case()
+            except Exception as e:
+                self._logger.exception(f"Test case teardown ERROR: {test_case_name}")
+                self._logger.warning(
+                    f"Test case '{test_case_name}' teardown failed, "
+                    f"the next test case may be affected."
+                )
+                self._errors.append(e)
+                result = False
+
+        return result
+
+    def _execute_test_case(self, test_case_method: MethodType) -> bool:
+        """
+        Execute one test case and handle failures.
+ """ + test_case_name = test_case_method.__name__ + result = False + try: + self._logger.info(f"Starting test case execution: {test_case_name}") + test_case_method() + result = True + self._logger.info(f"Test case execution PASSED: {test_case_name}") + + except TestCaseVerifyError as e: + self._logger.exception(f"Test case execution FAILED: {test_case_name}") + self._errors.append(e) + except Exception as e: + self._logger.exception(f"Test case execution ERROR: {test_case_name}") + self._errors.append(e) + except KeyboardInterrupt: + self._logger.error( + f"Test case execution INTERRUPTED by user: {test_case_name}" + ) + raise KeyboardInterrupt("Stop DTS") + + return result From patchwork Fri Mar 3 10:25:04 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 124788 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 035EA41DC6; Fri, 3 Mar 2023 11:26:08 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id AD8F642D3E; Fri, 3 Mar 2023 11:25:22 +0100 (CET) Received: from mail-ed1-f47.google.com (mail-ed1-f47.google.com [209.85.208.47]) by mails.dpdk.org (Postfix) with ESMTP id 53965427EE for ; Fri, 3 Mar 2023 11:25:18 +0100 (CET) Received: by mail-ed1-f47.google.com with SMTP id f13so8304087edz.6 for ; Fri, 03 Mar 2023 02:25:18 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon-tech.20210112.gappssmtp.com; s=20210112; t=1677839118; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=8sUh98ZWp0MwGaBe4pHvAZTJTMPDTyuFgLID5PGH4Yw=; b=L1r5/d4fHxXIb4C6s8uqiSyYj3R1NOpB/9T3WseHrfAUhM7g2V1Qw5WRf9j7rndzs8 
boNDeeVk3k7WGBMcfdvSJMA5jGNYTTwklmcZfHy9PfW6Qm4gWFGigLfCkF4ih5GC1OLO 3KuNx2so4QWD8GtOa7JxQzODu5pxmzjTHLbCr0ASC+GGqygLZBTeeT6BNg5Ca2uTEogI 8DfRsItmjmTo+npwpW2tqc2maXxLEkqIb/EpINuXEHzxpRVusQVIudHhTc6w/OoyZeLd FvJ7ZteVDvs66lEwa+GdyIMXy2tp2DZ2fjFcVsZ7hAMWgkxUoi6/FfpFBfBRJ6BJjau6 KL6g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; t=1677839118; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=8sUh98ZWp0MwGaBe4pHvAZTJTMPDTyuFgLID5PGH4Yw=; b=5u+sQpDFUUwtniZGbty7bp8kCaaE6Y9U/RxUPIwPLGZbd65FkK4iZxJEqDDu81agE4 TyNRU9l0PJ5dNigKCIrZy6I0uHBEmVlhl33VWqH2K+3bmOQYyiQGxfk2T8I6KxX6tPNJ ks6YOdBwJm0n2eRxEd75/j1QlbtJF2QKbDzl2qQ0Xl63THKlEjKiUv/V8619mMMGnW57 GqoRqMVQxG0kbHCPiw/idCGRJUMMSHGjtuImy3uFZR6KqD0kCa56k9fAM4wHoxrcWFCA y+Ldxms3bnd+QHIlR3Fs50jQjthneYuUT1OTsPRQpTT8tfaTVHFcglQV4+93cwaqVpQ4 j2KQ== X-Gm-Message-State: AO0yUKWgcS7Nj6EQpW9SZnGnQuU3M8BGpvcp2zRy3rwAsDXE6jNry65T w6tUqUgwg66AQ4EIJ0avZ7vQq27axPY+dPKe X-Google-Smtp-Source: AK7set+AnhLEV4sM9xrjd5s0ecN3LSxALuWNBe7O90izyja7mCUUwPBTWO/3LcJnF4ZTnZDS0ZcwJQ== X-Received: by 2002:aa7:c391:0:b0:4ac:c3ea:47e0 with SMTP id k17-20020aa7c391000000b004acc3ea47e0mr1441701edq.14.1677839118077; Fri, 03 Mar 2023 02:25:18 -0800 (PST) Received: from localhost.localdomain (ip-46.34.234.35.o2inet.sk. 
[46.34.234.35]) by smtp.gmail.com with ESMTPSA id j19-20020a508a93000000b004c3e3a6136dsm984028edj.21.2023.03.03.02.25.17 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 03 Mar 2023 02:25:17 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, lijuan.tu@intel.com, bruce.richardson@intel.com, probb@iol.unh.edu Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v6 07/10] dts: add hello world testsuite Date: Fri, 3 Mar 2023 11:25:04 +0100 Message-Id: <20230303102507.527790-8-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20230303102507.527790-1-juraj.linkes@pantheon.tech> References: <20230223152840.634183-1-juraj.linkes@pantheon.tech> <20230303102507.527790-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org The test suite implements test cases defined in the corresponding test plan. 
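A derived suite such as this one has its test cases collected by the base TestSuite class from patch 01 with a negative-lookahead regex: method names starting with `test_` but not `test_perf_` are functional cases. A standalone restatement of that filter (the candidate names below are illustrative):

```python
import re

# The pattern used by _get_functional_test_cases() in the base class.
FUNCTIONAL_CASE_REGEX = r"test_(?!perf_)"


def is_functional_case(name: str) -> bool:
    # re.match anchors at the start of the name, as in _should_be_executed().
    return bool(re.match(FUNCTIONAL_CASE_REGEX, name))


candidates = [
    "test_hello_world_single_core",  # functional -> collected
    "test_perf_throughput",          # performance -> excluded by the lookahead
    "set_up_suite",                  # fixture method -> never collected
]
functional = [n for n in candidates if is_functional_case(n)]
print(functional)  # ['test_hello_world_single_core']
```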
Signed-off-by: Juraj Linkeš
---
 dts/framework/remote_session/__init__.py   |  2 +-
 dts/framework/remote_session/os_session.py | 16 ++++-
 .../remote_session/remote/__init__.py      |  2 +-
 dts/framework/testbed_model/__init__.py    |  1 +
 dts/framework/testbed_model/sut_node.py    | 12 +++-
 dts/tests/TestSuite_hello_world.py         | 64 +++++++++++++++++++
 6 files changed, 93 insertions(+), 4 deletions(-)
 create mode 100644 dts/tests/TestSuite_hello_world.py

diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
index 747316c78a..ee221503df 100644
--- a/dts/framework/remote_session/__init__.py
+++ b/dts/framework/remote_session/__init__.py
@@ -17,7 +17,7 @@
 from .linux_session import LinuxSession
 from .os_session import OSSession
-from .remote import RemoteSession, SSHSession
+from .remote import CommandResult, RemoteSession, SSHSession


 def create_session(
diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py
index 048bf7178e..4c48ae2567 100644
--- a/dts/framework/remote_session/os_session.py
+++ b/dts/framework/remote_session/os_session.py
@@ -12,7 +12,7 @@
 from framework.testbed_model import LogicalCore
 from framework.utils import EnvVarsDict, MesonArgs

-from .remote import RemoteSession, create_remote_session
+from .remote import CommandResult, RemoteSession, create_remote_session


 class OSSession(ABC):
@@ -50,6 +50,20 @@ def is_alive(self) -> bool:
         """
         return self.remote_session.is_alive()

+    def send_command(
+        self,
+        command: str,
+        timeout: float,
+        verify: bool = False,
+        env: EnvVarsDict | None = None,
+    ) -> CommandResult:
+        """
+        An all-purpose API in case the command to be executed is already
+        OS-agnostic, such as when the path to the executed command has been
+        constructed beforehand.
+        """
+        return self.remote_session.send_command(command, timeout, verify, env)
+
     @abstractmethod
     def guess_dpdk_remote_dir(self, remote_dir) -> PurePath:
         """
diff --git a/dts/framework/remote_session/remote/__init__.py b/dts/framework/remote_session/remote/__init__.py
index f3092f8bbe..8a1512210a 100644
--- a/dts/framework/remote_session/remote/__init__.py
+++ b/dts/framework/remote_session/remote/__init__.py
@@ -6,7 +6,7 @@
 from framework.config import NodeConfiguration
 from framework.logger import DTSLOG

-from .remote_session import RemoteSession
+from .remote_session import CommandResult, RemoteSession
 from .ssh_session import SSHSession
diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
index 5be3e4c48d..f54a947051 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -12,6 +12,7 @@
 from .hw import (
     LogicalCore,
     LogicalCoreCount,
+    LogicalCoreCountFilter,
     LogicalCoreList,
     LogicalCoreListFilter,
     VirtualDevice,
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index 3672f5f6e5..2b2b50d982 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -8,7 +8,7 @@
 from pathlib import PurePath

 from framework.config import BuildTargetConfiguration, NodeConfiguration
-from framework.remote_session import OSSession
+from framework.remote_session import CommandResult, OSSession
 from framework.settings import SETTINGS
 from framework.utils import EnvVarsDict, MesonArgs

@@ -252,6 +252,16 @@ def create_eal_parameters(
             other_eal_param=other_eal_param,
         )

+    def run_dpdk_app(
+        self, app_path: PurePath, eal_args: "EalParameters", timeout: float = 30
+    ) -> CommandResult:
+        """
+        Run DPDK application on the remote node.
+        """
+        return self.main_session.send_command(
+            f"{app_path} {eal_args}", timeout, verify=True
+        )
+

 class EalParameters(object):
     def __init__(
diff --git a/dts/tests/TestSuite_hello_world.py b/dts/tests/TestSuite_hello_world.py
new file mode 100644
index 0000000000..7e3d95c0cf
--- /dev/null
+++ b/dts/tests/TestSuite_hello_world.py
@@ -0,0 +1,64 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+
+"""
+Run the helloworld example app and verify it prints a message for each used core.
+No other EAL parameters apart from cores are used.
+"""
+
+from framework.test_suite import TestSuite
+from framework.testbed_model import (
+    LogicalCoreCount,
+    LogicalCoreCountFilter,
+    LogicalCoreList,
+)
+
+
+class TestHelloWorld(TestSuite):
+    def set_up_suite(self) -> None:
+        """
+        Setup:
+            Build the app we're about to test - helloworld.
+        """
+        self.app_helloworld_path = self.sut_node.build_dpdk_app("helloworld")
+
+    def test_hello_world_single_core(self) -> None:
+        """
+        Steps:
+            Run the helloworld app on the first usable logical core.
+        Verify:
+            The app prints a message from the used core:
+            "hello from core "
+        """
+
+        # get the first usable core
+        lcore_amount = LogicalCoreCount(1, 1, 1)
+        lcores = LogicalCoreCountFilter(self.sut_node.lcores, lcore_amount).filter()
+        eal_para = self.sut_node.create_eal_parameters(
+            lcore_filter_specifier=lcore_amount
+        )
+        result = self.sut_node.run_dpdk_app(self.app_helloworld_path, eal_para)
+        self.verify(
+            f"hello from core {int(lcores[0])}" in result.stdout,
+            f"helloworld didn't start on lcore{lcores[0]}",
+        )
+
+    def test_hello_world_all_cores(self) -> None:
+        """
+        Steps:
+            Run the helloworld app on all usable logical cores.
+        Verify:
+            The app prints a message from all used cores:
+            "hello from core "
+        """
+
+        # get the maximum logical core number
+        eal_para = self.sut_node.create_eal_parameters(
+            lcore_filter_specifier=LogicalCoreList(self.sut_node.lcores)
+        )
+        result = self.sut_node.run_dpdk_app(self.app_helloworld_path, eal_para, 50)
+        for lcore in self.sut_node.lcores:
+            self.verify(
+                f"hello from core {int(lcore)}" in result.stdout,
+                f"helloworld didn't start on lcore{lcore}",
+            )

From patchwork Fri Mar 3 10:25:05 2023
X-Patchwork-Submitter: Juraj Linkeš
X-Patchwork-Id: 124789
X-Patchwork-Delegate: thomas@monjalon.net
From: Juraj Linkeš
To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, lijuan.tu@intel.com, bruce.richardson@intel.com, probb@iol.unh.edu
Cc: dev@dpdk.org, Juraj Linkeš
Subject: [PATCH v6 08/10] dts: add test suite config and runner
Date: Fri, 3 Mar 2023 11:25:05 +0100
Message-Id: <20230303102507.527790-9-juraj.linkes@pantheon.tech>
In-Reply-To: <20230303102507.527790-1-juraj.linkes@pantheon.tech>
References: <20230223152840.634183-1-juraj.linkes@pantheon.tech> <20230303102507.527790-1-juraj.linkes@pantheon.tech>

The config allows users to specify which test suites and test cases within
test suites to run. Also add test suite running capabilities to dts runner.
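The two accepted shapes of a `test_suites` entry in conf.yaml (a bare suite name, or a mapping selecting a subset of cases) are handled by `TestSuiteConfig.from_dict` in this patch. A condensed, standalone restatement of that parsing logic:

```python
from dataclasses import dataclass


# Mirrors TestSuiteConfig.from_dict from the patch below: a string entry means
# "run the whole suite"; a dict entry names the suite and a case subset.
@dataclass(frozen=True)
class TestSuiteConfig:
    test_suite: str
    test_cases: list

    @staticmethod
    def from_dict(entry):
        if isinstance(entry, str):
            return TestSuiteConfig(test_suite=entry, test_cases=[])
        elif isinstance(entry, dict):
            return TestSuiteConfig(test_suite=entry["suite"], test_cases=entry["cases"])
        raise TypeError(f"{type(entry)} is not valid for a test suite config.")


# The two YAML shapes, as already parsed into Python objects:
whole_suite = TestSuiteConfig.from_dict("hello_world")
case_subset = TestSuiteConfig.from_dict(
    {"suite": "hello_world", "cases": ["test_hello_world_single_core"]}
)
```

An empty `test_cases` list means no filtering, so every collected test case in the suite runs.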
Signed-off-by: Juraj Linkeš
---
 dts/conf.yaml                              |  2 ++
 dts/framework/config/__init__.py           | 29 +++++++++++++++-
 dts/framework/config/conf_yaml_schema.json | 40 ++++++++++++++++++++++
 dts/framework/dts.py                       | 19 ++++++++++
 dts/framework/test_suite.py                | 24 ++++++++++++-
 5 files changed, 112 insertions(+), 2 deletions(-)

diff --git a/dts/conf.yaml b/dts/conf.yaml
index 75e33e8ccf..a9bd8a3ecf 100644
--- a/dts/conf.yaml
+++ b/dts/conf.yaml
@@ -10,6 +10,8 @@ executions:
         compiler_wrapper: ccache
     perf: false
     func: true
+    test_suites:
+      - hello_world
     system_under_test: "SUT 1"
 nodes:
   - name: "SUT 1"
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index 544fceca6a..ebb0823ff5 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -12,7 +12,7 @@
 import pathlib
 from dataclasses import dataclass
 from enum import Enum, auto, unique
-from typing import Any
+from typing import Any, TypedDict

 import warlock  # type: ignore
 import yaml
@@ -128,11 +128,34 @@ def from_dict(d: dict) -> "BuildTargetConfiguration":
     )


+class TestSuiteConfigDict(TypedDict):
+    suite: str
+    cases: list[str]
+
+
+@dataclass(slots=True, frozen=True)
+class TestSuiteConfig:
+    test_suite: str
+    test_cases: list[str]
+
+    @staticmethod
+    def from_dict(
+        entry: str | TestSuiteConfigDict,
+    ) -> "TestSuiteConfig":
+        if isinstance(entry, str):
+            return TestSuiteConfig(test_suite=entry, test_cases=[])
+        elif isinstance(entry, dict):
+            return TestSuiteConfig(test_suite=entry["suite"], test_cases=entry["cases"])
+        else:
+            raise TypeError(f"{type(entry)} is not valid for a test suite config.")
+
+
 @dataclass(slots=True, frozen=True)
 class ExecutionConfiguration:
     build_targets: list[BuildTargetConfiguration]
     perf: bool
     func: bool
+    test_suites: list[TestSuiteConfig]
     system_under_test: NodeConfiguration

     @staticmethod
@@ -140,6 +163,9 @@ def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration":
         build_targets: list[BuildTargetConfiguration] = list(
             map(BuildTargetConfiguration.from_dict, d["build_targets"])
         )
+        test_suites: list[TestSuiteConfig] = list(
+            map(TestSuiteConfig.from_dict, d["test_suites"])
+        )
         sut_name = d["system_under_test"]
         assert sut_name in node_map, f"Unknown SUT {sut_name} in execution {d}"

@@ -147,6 +173,7 @@ def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration":
             build_targets=build_targets,
             perf=d["perf"],
             func=d["func"],
+            test_suites=test_suites,
             system_under_test=node_map[sut_name],
         )
diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
index 878ca3aec2..ca2d4a1ef2 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -93,6 +93,32 @@
       "required": [
         "amount"
       ]
+    },
+    "test_suite": {
+      "type": "string",
+      "enum": [
+        "hello_world"
+      ]
+    },
+    "test_target": {
+      "type": "object",
+      "properties": {
+        "suite": {
+          "$ref": "#/definitions/test_suite"
+        },
+        "cases": {
+          "type": "array",
+          "description": "If specified, only this subset of test suite's test cases will be run. Unknown test cases will be silently ignored.",
+          "items": {
+            "type": "string"
+          },
+          "minimum": 1
+        }
+      },
+      "required": [
+        "suite"
+      ],
+      "additionalProperties": false
     }
   },
   "type": "object",
@@ -172,6 +198,19 @@
       "type": "boolean",
       "description": "Enable functional testing."
     },
+    "test_suites": {
+      "type": "array",
+      "items": {
+        "oneOf": [
+          {
+            "$ref": "#/definitions/test_suite"
+          },
+          {
+            "$ref": "#/definitions/test_target"
+          }
+        ]
+      }
+    },
     "system_under_test": {
       "$ref": "#/definitions/node_name"
     }
@@ -181,6 +220,7 @@
       "build_targets",
       "perf",
       "func",
+      "test_suites",
       "system_under_test"
     ]
   },
diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index 3d4170d10f..9012a499a3 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -8,6 +8,7 @@
 from .config import CONFIGURATION, BuildTargetConfiguration, ExecutionConfiguration
 from .exception import DTSError, ErrorSeverity
 from .logger import DTSLOG, getLogger
+from .test_suite import get_test_suites
 from .testbed_model import SutNode
 from .utils import check_dts_python_version

@@ -132,6 +133,24 @@ def _run_suites(
     with possibly only a subset of test cases.
     If no subset is specified, run all test cases.
     """
+    for test_suite_config in execution.test_suites:
+        try:
+            full_suite_path = f"tests.TestSuite_{test_suite_config.test_suite}"
+            test_suite_classes = get_test_suites(full_suite_path)
+            suites_str = ", ".join((x.__name__ for x in test_suite_classes))
+            dts_logger.debug(
+                f"Found test suites '{suites_str}' in '{full_suite_path}'."
+            )
+        except Exception as e:
+            dts_logger.exception("An error occurred when searching for test suites.")
+            errors.append(e)
+
+        else:
+            for test_suite_class in test_suite_classes:
+                test_suite = test_suite_class(
+                    sut_node, test_suite_config.test_cases, execution.func, errors
+                )
+                test_suite.run()


 def _exit_dts() -> None:
diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index 9002d43297..12bf3b6420 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -6,12 +6,13 @@
 Base class for creating DTS test cases.
""" +import importlib import inspect import re from collections.abc import MutableSequence from types import MethodType -from .exception import SSHTimeoutError, TestCaseVerifyError +from .exception import ConfigurationError, SSHTimeoutError, TestCaseVerifyError from .logger import DTSLOG, getLogger from .settings import SETTINGS from .testbed_model import SutNode @@ -226,3 +227,24 @@ def _execute_test_case(self, test_case_method: MethodType) -> bool: raise KeyboardInterrupt("Stop DTS") return result + + +def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]: + def is_test_suite(object) -> bool: + try: + if issubclass(object, TestSuite) and object is not TestSuite: + return True + except TypeError: + return False + return False + + try: + testcase_module = importlib.import_module(testsuite_module_path) + except ModuleNotFoundError as e: + raise ConfigurationError( + f"Test suite '{testsuite_module_path}' not found." + ) from e + return [ + test_suite_class + for _, test_suite_class in inspect.getmembers(testcase_module, is_test_suite) + ] From patchwork Fri Mar 3 10:25:06 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 124790 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 0503641DC6; Fri, 3 Mar 2023 11:26:21 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 0D99542D39; Fri, 3 Mar 2023 11:25:25 +0100 (CET) Received: from mail-ed1-f50.google.com (mail-ed1-f50.google.com [209.85.208.50]) by mails.dpdk.org (Postfix) with ESMTP id 22BBA42BC9 for ; Fri, 3 Mar 2023 11:25:20 +0100 (CET) Received: by mail-ed1-f50.google.com with SMTP id cy23so8161201edb.12 for ; Fri, 03 Mar 2023 02:25:20 -0800 
(PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon-tech.20210112.gappssmtp.com; s=20210112; t=1677839120; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=e++Ltt17Nq/1UO2DGCtqsiWssHRF6XD5PW4V8DDWR+8=; b=qXeqqM6tAuyqdmuBPNlgg24grqRbwoO1jg/zzT4IS1Q2Z/6BN/2fk8vg49dTkY5TPS r3slpETLqcUcUQlNRcnIYUHWcP7KzjahdhKO6kR16bTn/HCAlvt/dNwOiqTEL9BlvblB kkHYpXUkNgks0C8PE+tMSL6ytf1ZzsaMXzF97yoV9Jn0+5UTOMururnmru5/rxTVuNVx AbbEyS/RhQYNR2dGQSOhnvgeYnSsf2XFTbWzfTDBPrz2a9EDSA7Ff9+5m9Y3AltRy8VZ rswdfOCk7FCdB0mkPMdR3Lk5h1dKD+OzkvdFzc7tAs0V/lHYHAsdx4PI2RT2xYttMiND fTjw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; t=1677839120; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=e++Ltt17Nq/1UO2DGCtqsiWssHRF6XD5PW4V8DDWR+8=; b=sH79Lb8tSFtNJ891zuQ4B2+GyZDSK0ZVKXdTNDYzzHhNHsVeXkRzTJ5hoCOZoX0MAc 8uTJU7ZjaVkJv+gLVlO/W3eu/e3sj88wzyGv4MdczgXdvskccWHDnpfWeC+sLa6RXru1 Ku+aq5oPdeFKpYQ0miJLp83Nyy7ioJyGu7m67OcIOT9tSkcCVdR4l8xVKwSQRkX9Dxco 8wpfSNQ0M+4RC0JrYRUSfkqh26W1zMeIekMwMub3aoS01mGFidBze7cACCuuNeSpqlgv ioIc9ttSV4U4KrMnVGmCY+HsxgkYWKzk7K2PoqVDP8I9QIz58PavXJWIH8d+HHMDrkn/ kfOw== X-Gm-Message-State: AO0yUKW39n5iTcoN13xJw1SN8c1BylTDn6QlmdQ/x1CiOOzqDFz2xqwj 5H4IJIJWwezWa01NsP96GM7+qT9XAgcvMBR6 X-Google-Smtp-Source: AK7set+KJfGxDqRv1qATaH2btaxgfkPXbuu3yDoDxPHfUOsb7X/eCZ9x7sRcG5vy3xB4dtsED6n0Wg== X-Received: by 2002:a17:906:fe0c:b0:8b3:b74:aeb5 with SMTP id wy12-20020a170906fe0c00b008b30b74aeb5mr1595458ejb.30.1677839119751; Fri, 03 Mar 2023 02:25:19 -0800 (PST) Received: from localhost.localdomain (ip-46.34.234.35.o2inet.sk. 
[46.34.234.35]) by smtp.gmail.com with ESMTPSA id j19-20020a508a93000000b004c3e3a6136dsm984028edj.21.2023.03.03.02.25.19 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 03 Mar 2023 02:25:19 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, lijuan.tu@intel.com, bruce.richardson@intel.com, probb@iol.unh.edu Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v6 09/10] dts: add test results module Date: Fri, 3 Mar 2023 11:25:06 +0100 Message-Id: <20230303102507.527790-10-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20230303102507.527790-1-juraj.linkes@pantheon.tech> References: <20230223152840.634183-1-juraj.linkes@pantheon.tech> <20230303102507.527790-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org The module stores the results and errors from all executions, build targets, test suites and test cases. The result consist of the result of the setup and the teardown of each testing stage (listed above) and the results of the inner stages. The innermost stage is the case, which also contains the result of test case itself. The modules also produces a brief overview of the results and the number of executed tests. It also finds the proper return code to exit with from among the stored errors. 
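Finding the return code from the stored errors can be sketched as taking the highest severity seen. The severity values below are the ones this series defines in `dts/framework/exception.py`; `NO_ERR = 0`, the fallback severity of 1 for plain exceptions, and the aggregation function itself are illustrative assumptions, not the module's exact code:

```python
from enum import IntEnum


# Severity values from this series' exception.py; NO_ERR is assumed here.
class ErrorSeverity(IntEnum):
    NO_ERR = 0
    REMOTE_CMD_EXEC_ERR = 3
    SSH_ERR = 4
    DPDK_BUILD_ERR = 10
    TESTCASE_VERIFY_ERR = 20


def get_return_code(errors: list) -> int:
    # Pick the worst severity among all recorded errors; exceptions without a
    # severity attribute count as generic failures (assumed severity 1).
    worst = ErrorSeverity.NO_ERR
    for error in errors:
        worst = max(worst, getattr(error, "severity", 1))
    return int(worst)
```

With this scheme a `TestCaseVerifyError` anywhere in the run dominates lower-severity errors such as an SSH failure, so the process exit code reflects the most serious problem encountered.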
Signed-off-by: Juraj Linkeš
---
 dts/framework/dts.py         |  64 +++----
 dts/framework/settings.py    |   2 -
 dts/framework/test_result.py | 316 +++++++++++++++++++++++++++++++++++
 dts/framework/test_suite.py  |  60 +++----
 4 files changed, 382 insertions(+), 60 deletions(-)
 create mode 100644 dts/framework/test_result.py

diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index 9012a499a3..0502284580 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -6,14 +6,14 @@
 import sys

 from .config import CONFIGURATION, BuildTargetConfiguration, ExecutionConfiguration
-from .exception import DTSError, ErrorSeverity
 from .logger import DTSLOG, getLogger
+from .test_result import BuildTargetResult, DTSResult, ExecutionResult, Result
 from .test_suite import get_test_suites
 from .testbed_model import SutNode
 from .utils import check_dts_python_version

 dts_logger: DTSLOG = getLogger("DTSRunner")
-errors = []
+result: DTSResult = DTSResult(dts_logger)


 def run_all() -> None:
@@ -22,7 +22,7 @@ def run_all() -> None:
     config file.
     """
     global dts_logger
-    global errors
+    global result

     # check the python version of the server that run dts
     check_dts_python_version()
@@ -39,29 +39,31 @@ def run_all() -> None:
                 # the SUT has not been initialized yet
                 try:
                     sut_node = SutNode(execution.system_under_test)
+                    result.update_setup(Result.PASS)
                 except Exception as e:
                     dts_logger.exception(
                         f"Connection to node {execution.system_under_test} failed."
                     )
-                    errors.append(e)
+                    result.update_setup(Result.FAIL, e)
                 else:
                     nodes[sut_node.name] = sut_node

                 if sut_node:
-                    _run_execution(sut_node, execution)
+                    _run_execution(sut_node, execution, result)

     except Exception as e:
         dts_logger.exception("An unexpected error has occurred.")
-        errors.append(e)
+        result.add_error(e)
         raise

     finally:
         try:
             for node in nodes.values():
                 node.close()
+            result.update_teardown(Result.PASS)
         except Exception as e:
             dts_logger.exception("Final cleanup of nodes failed.")
-            errors.append(e)
+            result.update_teardown(Result.ERROR, e)

     # we need to put the sys.exit call outside the finally clause to make sure
     # that unexpected exceptions will propagate
@@ -72,61 +74,72 @@ def run_all() -> None:
     _exit_dts()


-def _run_execution(sut_node: SutNode, execution: ExecutionConfiguration) -> None:
+def _run_execution(
+    sut_node: SutNode, execution: ExecutionConfiguration, result: DTSResult
+) -> None:
     """
     Run the given execution. This involves running the execution setup as well as
     running all build targets in the given execution.
     """
     dts_logger.info(f"Running execution with SUT '{execution.system_under_test.name}'.")
+    execution_result = result.add_execution(sut_node.config)

     try:
         sut_node.set_up_execution(execution)
+        execution_result.update_setup(Result.PASS)
     except Exception as e:
         dts_logger.exception("Execution setup failed.")
-        errors.append(e)
+        execution_result.update_setup(Result.FAIL, e)

     else:
         for build_target in execution.build_targets:
-            _run_build_target(sut_node, build_target, execution)
+            _run_build_target(sut_node, build_target, execution, execution_result)

     finally:
         try:
             sut_node.tear_down_execution()
+            execution_result.update_teardown(Result.PASS)
         except Exception as e:
             dts_logger.exception("Execution teardown failed.")
-            errors.append(e)
+            execution_result.update_teardown(Result.FAIL, e)


 def _run_build_target(
     sut_node: SutNode,
     build_target: BuildTargetConfiguration,
     execution: ExecutionConfiguration,
+    execution_result: ExecutionResult,
 ) -> None:
     """
     Run the given build target.
     """
     dts_logger.info(f"Running build target '{build_target.name}'.")
+    build_target_result = execution_result.add_build_target(build_target)

     try:
         sut_node.set_up_build_target(build_target)
+        result.dpdk_version = sut_node.dpdk_version
+        build_target_result.update_setup(Result.PASS)
     except Exception as e:
         dts_logger.exception("Build target setup failed.")
-        errors.append(e)
+        build_target_result.update_setup(Result.FAIL, e)

     else:
-        _run_suites(sut_node, execution)
+        _run_suites(sut_node, execution, build_target_result)

     finally:
         try:
             sut_node.tear_down_build_target()
+            build_target_result.update_teardown(Result.PASS)
         except Exception as e:
             dts_logger.exception("Build target teardown failed.")
-            errors.append(e)
+            build_target_result.update_teardown(Result.FAIL, e)


 def _run_suites(
     sut_node: SutNode,
     execution: ExecutionConfiguration,
+    build_target_result: BuildTargetResult,
 ) -> None:
     """
     Use the given build_target to run execution's test suites
@@ -143,12 +156,15 @@ def _run_suites(
     )
 except
Exception as e: dts_logger.exception("An error occurred when searching for test suites.") - errors.append(e) + result.update_setup(Result.ERROR, e) else: for test_suite_class in test_suite_classes: test_suite = test_suite_class( - sut_node, test_suite_config.test_cases, execution.func, errors + sut_node, + test_suite_config.test_cases, + execution.func, + build_target_result, ) test_suite.run() @@ -157,20 +173,8 @@ def _exit_dts() -> None: """ Process all errors and exit with the proper exit code. """ - if errors and dts_logger: - dts_logger.debug("Summary of errors:") - for error in errors: - dts_logger.debug(repr(error)) - - return_code = ErrorSeverity.NO_ERR - for error in errors: - error_return_code = ErrorSeverity.GENERIC_ERR - if isinstance(error, DTSError): - error_return_code = error.severity - - if error_return_code > return_code: - return_code = error_return_code + result.process() if dts_logger: dts_logger.info("DTS execution has ended.") - sys.exit(return_code) + sys.exit(result.get_return_code()) diff --git a/dts/framework/settings.py b/dts/framework/settings.py index 4ccc98537d..71955f4581 100644 --- a/dts/framework/settings.py +++ b/dts/framework/settings.py @@ -143,7 +143,6 @@ def _get_parser() -> argparse.ArgumentParser: "--test-cases", action=_env_arg("DTS_TESTCASES"), default="", - required=False, help="[DTS_TESTCASES] Comma-separated list of test cases to execute. " "Unknown test cases will be silently ignored.", ) @@ -154,7 +153,6 @@ def _get_parser() -> argparse.ArgumentParser: action=_env_arg("DTS_RERUN"), default=0, type=int, - required=False, help="[DTS_RERUN] Re-run each test case the specified amount of times " "if a test failure occurs", ) diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py new file mode 100644 index 0000000000..743919820c --- /dev/null +++ b/dts/framework/test_result.py @@ -0,0 +1,316 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2023 PANTHEON.tech s.r.o. 
+ +""" +Generic result container and reporters +""" + +import os.path +from collections.abc import MutableSequence +from enum import Enum, auto + +from .config import ( + OS, + Architecture, + BuildTargetConfiguration, + Compiler, + CPUType, + NodeConfiguration, +) +from .exception import DTSError, ErrorSeverity +from .logger import DTSLOG +from .settings import SETTINGS + + +class Result(Enum): + """ + An Enum defining the possible states that + a setup, a teardown or a test case may end up in. + """ + + PASS = auto() + FAIL = auto() + ERROR = auto() + SKIP = auto() + + def __bool__(self) -> bool: + return self is self.PASS + + +class FixtureResult(object): + """ + A record that stores the result of a setup or a teardown. + The default is FAIL because immediately after creating the object + the setup of the corresponding stage will be executed, which also guarantees + the execution of teardown. + """ + + result: Result + error: Exception | None = None + + def __init__( + self, + result: Result = Result.FAIL, + error: Exception | None = None, + ): + self.result = result + self.error = error + + def __bool__(self) -> bool: + return bool(self.result) + + +class Statistics(dict): + """ + A helper class used to store the number of test cases by their result, + along with a few other pieces of basic information. + Using a dict provides a convenient way to format the data. + """ + + def __init__(self, dpdk_version): + super(Statistics, self).__init__() + for result in Result: + self[result.name] = 0 + self["PASS RATE"] = 0.0 + self["DPDK VERSION"] = dpdk_version + + def __iadd__(self, other: Result) -> "Statistics": + """ + Add a Result to the final count. + """ + self[other.name] += 1 + self["PASS RATE"] = ( + float(self[Result.PASS.name]) + * 100 + / sum(self[result.name] for result in Result) + ) + return self + + def __str__(self) -> str: + """ + Provide a string representation of the data.
+ """ + stats_str = "" + for key, value in self.items(): + stats_str += f"{key:<12} = {value}\n" + # according to docs, we should use \n when writing to text files + # on all platforms + return stats_str + + +class BaseResult(object): + """ + The Base class for all results. Stores the results of + the setup and teardown portions of the corresponding stage + and a list of results from each inner stage in _inner_results. + """ + + setup_result: FixtureResult + teardown_result: FixtureResult + _inner_results: MutableSequence["BaseResult"] + + def __init__(self): + self.setup_result = FixtureResult() + self.teardown_result = FixtureResult() + self._inner_results = [] + + def update_setup(self, result: Result, error: Exception | None = None) -> None: + self.setup_result.result = result + self.setup_result.error = error + + def update_teardown(self, result: Result, error: Exception | None = None) -> None: + self.teardown_result.result = result + self.teardown_result.error = error + + def _get_setup_teardown_errors(self) -> list[Exception]: + errors = [] + if self.setup_result.error: + errors.append(self.setup_result.error) + if self.teardown_result.error: + errors.append(self.teardown_result.error) + return errors + + def _get_inner_errors(self) -> list[Exception]: + return [ + error + for inner_result in self._inner_results + for error in inner_result.get_errors() + ] + + def get_errors(self) -> list[Exception]: + return self._get_setup_teardown_errors() + self._get_inner_errors() + + def add_stats(self, statistics: Statistics) -> None: + for inner_result in self._inner_results: + inner_result.add_stats(statistics) + + +class TestCaseResult(BaseResult, FixtureResult): + """ + The test case specific result. + Stores the result of the actual test case. + Also stores the test case name. 
+ """ + + test_case_name: str + + def __init__(self, test_case_name: str): + super(TestCaseResult, self).__init__() + self.test_case_name = test_case_name + + def update(self, result: Result, error: Exception | None = None) -> None: + self.result = result + self.error = error + + def _get_inner_errors(self) -> list[Exception]: + if self.error: + return [self.error] + return [] + + def add_stats(self, statistics: Statistics) -> None: + statistics += self.result + + def __bool__(self) -> bool: + return ( + bool(self.setup_result) and bool(self.teardown_result) and bool(self.result) + ) + + +class TestSuiteResult(BaseResult): + """ + The test suite specific result. + The _inner_results list stores results of test cases in a given test suite. + Also stores the test suite name. + """ + + suite_name: str + + def __init__(self, suite_name: str): + super(TestSuiteResult, self).__init__() + self.suite_name = suite_name + + def add_test_case(self, test_case_name: str) -> TestCaseResult: + test_case_result = TestCaseResult(test_case_name) + self._inner_results.append(test_case_result) + return test_case_result + + +class BuildTargetResult(BaseResult): + """ + The build target specific result. + The _inner_results list stores results of test suites in a given build target. + Also stores build target specifics, such as compiler used to build DPDK. + """ + + arch: Architecture + os: OS + cpu: CPUType + compiler: Compiler + + def __init__(self, build_target: BuildTargetConfiguration): + super(BuildTargetResult, self).__init__() + self.arch = build_target.arch + self.os = build_target.os + self.cpu = build_target.cpu + self.compiler = build_target.compiler + + def add_test_suite(self, test_suite_name: str) -> TestSuiteResult: + test_suite_result = TestSuiteResult(test_suite_name) + self._inner_results.append(test_suite_result) + return test_suite_result + + +class ExecutionResult(BaseResult): + """ + The execution specific result. 
+ The _inner_results list stores results of build targets in a given execution. + Also stores the SUT node configuration. + """ + + sut_node: NodeConfiguration + + def __init__(self, sut_node: NodeConfiguration): + super(ExecutionResult, self).__init__() + self.sut_node = sut_node + + def add_build_target( + self, build_target: BuildTargetConfiguration + ) -> BuildTargetResult: + build_target_result = BuildTargetResult(build_target) + self._inner_results.append(build_target_result) + return build_target_result + + +class DTSResult(BaseResult): + """ + Stores environment information and test results from a DTS run, which are: + * Execution level information, such as SUT and TG hardware. + * Build target level information, such as compiler, target OS and cpu. + * Test suite results. + * All errors that are caught and recorded during DTS execution. + + The information is stored in nested objects. + + The class is capable of computing the return code used to exit DTS with + from the stored error. + + It also provides a brief statistical summary of passed/failed test cases. + """ + + dpdk_version: str | None + _logger: DTSLOG + _errors: list[Exception] + _return_code: ErrorSeverity + _stats_result: Statistics | None + _stats_filename: str + + def __init__(self, logger: DTSLOG): + super(DTSResult, self).__init__() + self.dpdk_version = None + self._logger = logger + self._errors = [] + self._return_code = ErrorSeverity.NO_ERR + self._stats_result = None + self._stats_filename = os.path.join(SETTINGS.output_dir, "statistics.txt") + + def add_execution(self, sut_node: NodeConfiguration) -> ExecutionResult: + execution_result = ExecutionResult(sut_node) + self._inner_results.append(execution_result) + return execution_result + + def add_error(self, error) -> None: + self._errors.append(error) + + def process(self) -> None: + """ + Process the data after a DTS run. + The data is added to nested objects during runtime and this parent object + is not updated at that time. 
This requires us to process the nested data + after it's all been gathered. + + The processing gathers all errors and the result statistics of test cases. + """ + self._errors += self.get_errors() + if self._errors and self._logger: + self._logger.debug("Summary of errors:") + for error in self._errors: + self._logger.debug(repr(error)) + + self._stats_result = Statistics(self.dpdk_version) + self.add_stats(self._stats_result) + with open(self._stats_filename, "w+") as stats_file: + stats_file.write(str(self._stats_result)) + + def get_return_code(self) -> int: + """ + Go through all stored Exceptions and return the highest error code found. + """ + for error in self._errors: + error_return_code = ErrorSeverity.GENERIC_ERR + if isinstance(error, DTSError): + error_return_code = error.severity + + if error_return_code > self._return_code: + self._return_code = error_return_code + + return int(self._return_code) diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py index 12bf3b6420..0705f38f98 100644 --- a/dts/framework/test_suite.py +++ b/dts/framework/test_suite.py @@ -9,12 +9,12 @@ import importlib import inspect import re -from collections.abc import MutableSequence from types import MethodType from .exception import ConfigurationError, SSHTimeoutError, TestCaseVerifyError from .logger import DTSLOG, getLogger from .settings import SETTINGS +from .test_result import BuildTargetResult, Result, TestCaseResult, TestSuiteResult from .testbed_model import SutNode @@ -40,21 +40,21 @@ class TestSuite(object): _logger: DTSLOG _test_cases_to_run: list[str] _func: bool - _errors: MutableSequence[Exception] + _result: TestSuiteResult def __init__( self, sut_node: SutNode, test_cases: list[str], func: bool, - errors: MutableSequence[Exception], + build_target_result: BuildTargetResult, ): self.sut_node = sut_node self._logger = getLogger(self.__class__.__name__) self._test_cases_to_run = test_cases self._test_cases_to_run.extend(SETTINGS.test_cases) 
self._func = func - self._errors = errors + self._result = build_target_result.add_test_suite(self.__class__.__name__) def set_up_suite(self) -> None: """ @@ -97,10 +97,11 @@ def run(self) -> None: try: self._logger.info(f"Starting test suite setup: {test_suite_name}") self.set_up_suite() + self._result.update_setup(Result.PASS) self._logger.info(f"Test suite setup successful: {test_suite_name}") except Exception as e: self._logger.exception(f"Test suite setup ERROR: {test_suite_name}") - self._errors.append(e) + self._result.update_setup(Result.ERROR, e) else: self._execute_test_suite() @@ -109,13 +110,14 @@ def run(self) -> None: try: self.tear_down_suite() self.sut_node.kill_cleanup_dpdk_apps() + self._result.update_teardown(Result.PASS) except Exception as e: self._logger.exception(f"Test suite teardown ERROR: {test_suite_name}") self._logger.warning( f"Test suite '{test_suite_name}' teardown failed, " f"the next test suite may be affected." ) - self._errors.append(e) + self._result.update_teardown(Result.ERROR, e) def _execute_test_suite(self) -> None: """ @@ -123,17 +125,18 @@ def _execute_test_suite(self) -> None: """ if self._func: for test_case_method in self._get_functional_test_cases(): + test_case_name = test_case_method.__name__ + test_case_result = self._result.add_test_case(test_case_name) all_attempts = SETTINGS.re_run + 1 attempt_nr = 1 - while ( - not self._run_test_case(test_case_method) - and attempt_nr < all_attempts - ): + self._run_test_case(test_case_method, test_case_result) + while not test_case_result and attempt_nr < all_attempts: attempt_nr += 1 self._logger.info( - f"Re-running FAILED test case '{test_case_method.__name__}'. " + f"Re-running FAILED test case '{test_case_name}'. " f"Attempt number {attempt_nr} out of {all_attempts}."
) + self._run_test_case(test_case_method, test_case_result) def _get_functional_test_cases(self) -> list[MethodType]: """ @@ -166,68 +169,69 @@ def _should_be_executed(self, test_case_name: str, test_case_regex: str) -> bool return match - def _run_test_case(self, test_case_method: MethodType) -> bool: + def _run_test_case( + self, test_case_method: MethodType, test_case_result: TestCaseResult + ) -> None: """ Setup, execute and teardown a test case in this suite. - Exceptions are caught and recorded in logs. + Exceptions are caught and recorded in logs and results. """ test_case_name = test_case_method.__name__ - result = False try: # run set_up function for each case self.set_up_test_case() + test_case_result.update_setup(Result.PASS) except SSHTimeoutError as e: self._logger.exception(f"Test case setup FAILED: {test_case_name}") - self._errors.append(e) + test_case_result.update_setup(Result.FAIL, e) except Exception as e: self._logger.exception(f"Test case setup ERROR: {test_case_name}") - self._errors.append(e) + test_case_result.update_setup(Result.ERROR, e) else: # run test case if setup was successful - result = self._execute_test_case(test_case_method) + self._execute_test_case(test_case_method, test_case_result) finally: try: self.tear_down_test_case() + test_case_result.update_teardown(Result.PASS) except Exception as e: self._logger.exception(f"Test case teardown ERROR: {test_case_name}") self._logger.warning( f"Test case '{test_case_name}' teardown failed, " f"the next test case may be affected." ) - self._errors.append(e) - result = False + test_case_result.update_teardown(Result.ERROR, e) + test_case_result.update(Result.ERROR) - return result - - def _execute_test_case(self, test_case_method: MethodType) -> bool: + def _execute_test_case( + self, test_case_method: MethodType, test_case_result: TestCaseResult + ) -> None: """ Execute one test case and handle failures. 
""" test_case_name = test_case_method.__name__ - result = False try: self._logger.info(f"Starting test case execution: {test_case_name}") test_case_method() - result = True + test_case_result.update(Result.PASS) self._logger.info(f"Test case execution PASSED: {test_case_name}") except TestCaseVerifyError as e: self._logger.exception(f"Test case execution FAILED: {test_case_name}") - self._errors.append(e) + test_case_result.update(Result.FAIL, e) except Exception as e: self._logger.exception(f"Test case execution ERROR: {test_case_name}") - self._errors.append(e) + test_case_result.update(Result.ERROR, e) except KeyboardInterrupt: self._logger.error( f"Test case execution INTERRUPTED by user: {test_case_name}" ) + test_case_result.update(Result.SKIP) raise KeyboardInterrupt("Stop DTS") - return result - def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]: def is_test_suite(object) -> bool: From patchwork Fri Mar 3 10:25:07 2023 X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 124791 X-Patchwork-Delegate: thomas@monjalon.net
From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, lijuan.tu@intel.com, bruce.richardson@intel.com, probb@iol.unh.edu Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v6 10/10] doc: update dts setup and test suite cookbook Date: Fri, 3 Mar 2023 11:25:07 +0100 Message-Id: <20230303102507.527790-11-juraj.linkes@pantheon.tech> In-Reply-To: <20230303102507.527790-1-juraj.linkes@pantheon.tech> References: <20230223152840.634183-1-juraj.linkes@pantheon.tech> <20230303102507.527790-1-juraj.linkes@pantheon.tech> Document how to configure and run DTS. Also add documentation related to new features: SUT setup and a brief test suite implementation cookbook. Signed-off-by: Juraj Linkeš Tested-by: Patrick Robb --- doc/guides/tools/dts.rst | 165 ++++++++++++++++++++++++++++++++++++++- 1 file changed, 163 insertions(+), 2 deletions(-) diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst index daf54359ed..ebd6dceb6a 100644 --- a/doc/guides/tools/dts.rst +++ b/doc/guides/tools/dts.rst @@ -1,5 +1,5 @@ .. SPDX-License-Identifier: BSD-3-Clause - Copyright(c) 2022 PANTHEON.tech s.r.o. + Copyright(c) 2022-2023 PANTHEON.tech s.r.o. DPDK Test Suite =============== @@ -56,7 +56,7 @@ DTS runtime environment or just plain DTS environment are used interchangeably. Setting up DTS environment --------------------------- +~~~~~~~~~~~~~~~~~~~~~~~~~~ #. **Python Version** @@ -93,6 +93,167 @@ Setting up DTS environment poetry install poetry shell +#.
**SSH Connection** + + DTS uses Python pexpect for SSH connections between the DTS environment and the other hosts. + The pexpect implementation is a wrapper around the ssh command in the DTS environment. + This means it'll use the SSH agent providing the ssh command and its keys. + + +Setting up System Under Test +---------------------------- + +There are a few areas that need to be set up on a System Under Test: + +#. **DPDK dependencies** + + DPDK will be built and run on the SUT. + Consult the Getting Started guides for the list of dependencies for each distribution. + +#. **Hardware dependencies** + + Any hardware DPDK uses needs a proper driver + and most OS distributions provide those, but the version may not be satisfactory. + It's up to each user to install the driver they're interested in testing. + The hardware may also need firmware upgrades, which is likewise left to the user's discretion. + +#. **Hugepages** + + There are two ways to configure hugepages: + + * DTS configuration + + You may specify the optional hugepage configuration in the DTS config file. + If you do, DTS will take care of configuring hugepages, + overwriting your current SUT hugepage configuration. + + * System under test configuration + + It's possible to use the hugepage configuration already present on the SUT. + If you wish to do so, don't specify the hugepage configuration in the DTS config file. + + +Running DTS +----------- + +DTS needs to know which nodes to connect to and what hardware to use on those nodes. +Once that's configured, DTS needs a DPDK tarball and it's ready to run. + +Configuring DTS +~~~~~~~~~~~~~~~ + +DTS configuration is split into nodes and executions, and build targets within executions. +By default, DTS will try to use the ``dts/conf.yaml`` config file, +which is a template that illustrates what can be configured in DTS: + + ..
literalinclude:: ../../../dts/conf.yaml + :language: yaml + :start-at: executions: + + +The user must be root or any other user with prompt starting with ``#``. +The other fields are mostly self-explanatory +and documented in more detail in ``dts/framework/config/conf_yaml_schema.json``. + +DTS Execution +~~~~~~~~~~~~~ + +DTS is run with ``main.py`` located in the ``dts`` directory after entering Poetry shell:: + + usage: main.py [-h] [--config-file CONFIG_FILE] [--output-dir OUTPUT_DIR] [-t TIMEOUT] + [-v VERBOSE] [-s SKIP_SETUP] [--tarball TARBALL] + [--compile-timeout COMPILE_TIMEOUT] [--test-cases TEST_CASES] + [--re-run RE_RUN] + + Run DPDK test suites. All options may be specified with the environment variables provided in + brackets. Command line arguments have higher priority. + + options: + -h, --help show this help message and exit + --config-file CONFIG_FILE + [DTS_CFG_FILE] configuration file that describes the test cases, SUTs + and targets. (default: conf.yaml) + --output-dir OUTPUT_DIR, --output OUTPUT_DIR + [DTS_OUTPUT_DIR] Output directory where dts logs and results are + saved. (default: output) + -t TIMEOUT, --timeout TIMEOUT + [DTS_TIMEOUT] The default timeout for all DTS operations except for + compiling DPDK. (default: 15) + -v VERBOSE, --verbose VERBOSE + [DTS_VERBOSE] Set to 'Y' to enable verbose output, logging all + messages to the console. (default: N) + -s SKIP_SETUP, --skip-setup SKIP_SETUP + [DTS_SKIP_SETUP] Set to 'Y' to skip all setup steps on SUT and TG + nodes. (default: N) + --tarball TARBALL, --snapshot TARBALL + [DTS_DPDK_TARBALL] Path to DPDK source code tarball which will be + used in testing. (default: dpdk.tar.xz) + --compile-timeout COMPILE_TIMEOUT + [DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK. (default: 1200) + --test-cases TEST_CASES + [DTS_TESTCASES] Comma-separated list of test cases to execute. + Unknown test cases will be silently ignored. 
(default: ) + --re-run RE_RUN, --re_run RE_RUN + [DTS_RERUN] Re-run each test case the specified amount of times if a + test failure occurs (default: 0) + + +The brackets contain the names of environment variables that set the same thing. +The minimum DTS needs is a config file and a DPDK tarball. +You may pass those to DTS using the command line arguments or use the default paths. + + +DTS Results +~~~~~~~~~~~ + +Results are stored in the output dir by default, +which can be changed with the ``--output-dir`` command line argument. +The results contain basic statistics of passed/failed test cases and the DPDK version. + + +How To Write a Test Suite +------------------------- + +All test suites inherit from ``TestSuite`` defined in ``dts/framework/test_suite.py``. +There are four types of methods that comprise a test suite: + +#. **Test cases** + + | Test cases are methods that start with a particular prefix. + | Functional test cases start with ``test_``, e.g. ``test_hello_world_single_core``. + | Performance test cases start with ``test_perf_``, e.g. ``test_perf_nic_single_core``. + | A test suite may have any number of functional and/or performance test cases. + However, these test cases must test the same feature, + following the rule of one feature = one test suite. + Test cases for one feature don't need to be grouped in just one test suite, though. + If covering the feature requires many testing scenarios, + the test cases would be better off spread over multiple test suites + so that each test suite doesn't take too long to execute. + +#. **Setup and Teardown methods** + + | There are setup and teardown methods for the whole test suite and each individual test case. + | Methods ``set_up_suite`` and ``tear_down_suite`` will be executed + before any and after all test cases have been executed, respectively. + | Methods ``set_up_test_case`` and ``tear_down_test_case`` will be executed + before and after each test case, respectively.
+ | These methods don't need to be implemented if there's no need for them in a test suite. + In that case, nothing will happen when they're executed. + +#. **Test case verification** + + Test case verification should be done with the ``verify`` method, which records the result. + The method should be called at the end of each test case. + +#. **Other methods** + + Of course, all test suite code should adhere to coding standards. + Only the above methods will be treated specially and any other methods may be defined + (which should be mostly private methods needed by each particular test suite). + Any specific features (such as NIC configuration) required by a test suite + should be implemented in the ``SutNode`` class (and the underlying classes that ``SutNode`` uses) + and used by the test suite via the ``sut_node`` field. + DTS Developer Tools --------------------