[v6,2/2] dts: reformat to 100 line length

Message ID 20231120123646.43994-2-juraj.linkes@pantheon.tech (mailing list archive)
State Accepted, archived
Delegated to: Thomas Monjalon
Series [v6,1/2] doc: increase python max line length to 100

Checks

Context                        Check    Description
ci/checkpatch                  success  coding style OK
ci/loongarch-compilation       success  Compilation OK
ci/loongarch-unit-testing      success  Unit Testing PASS
ci/Intel-compilation           success  Compilation OK
ci/iol-compile-amd64-testing   success  Testing PASS
ci/iol-intel-Performance       success  Performance Testing PASS
ci/iol-broadcom-Performance    success  Performance Testing PASS
ci/iol-mellanox-Performance    success  Performance Testing PASS
ci/iol-unit-amd64-testing      success  Testing PASS
ci/iol-intel-Functional        success  Functional Testing PASS
ci/intel-Testing               success  Testing PASS
ci/iol-sample-apps-testing     success  Testing PASS
ci/iol-broadcom-Functional     success  Functional Testing PASS
ci/iol-unit-arm64-testing      success  Testing PASS
ci/github-robot: build         success  github build: passed
ci/intel-Functional            success  Functional PASS
ci/iol-compile-arm64-testing   success  Testing PASS

Commit Message

Juraj Linkeš Nov. 20, 2023, 12:36 p.m. UTC
Reformat to 100 from the previous 88 to unify with C recommendations.

In the C guidelines, 100 is the hard maximum and 80 is the ideal. The Python
tools don't support this kind of flexibility, so we settle on the maximum.

We require all patches with DTS code to be validated with the
devtools/dts-check-format.sh script, part of which is the black
formatting tool. We've set up black to format all of the codebase and
the reformat is needed so that future submitters are not affected.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
Acked-by: Jeremy Spewock <jspewock@iol.unh.edu>
---
 dts/framework/config/__init__.py              | 20 ++-----
 dts/framework/dts.py                          | 15 ++---
 dts/framework/exception.py                    |  5 +-
 dts/framework/remote_session/__init__.py      |  4 +-
 dts/framework/remote_session/linux_session.py | 39 ++++---------
 dts/framework/remote_session/posix_session.py | 30 +++-------
 .../remote/interactive_remote_session.py      |  7 +--
 .../remote/interactive_shell.py               |  4 +-
 .../remote_session/remote/remote_session.py   |  8 +--
 .../remote_session/remote/ssh_session.py      | 16 ++----
 .../remote_session/remote/testpmd_shell.py    |  8 +--
 dts/framework/settings.py                     | 15 ++---
 dts/framework/test_result.py                  | 16 ++----
 dts/framework/test_suite.py                   | 57 +++++--------------
 .../capturing_traffic_generator.py            |  7 +--
 dts/framework/testbed_model/hw/cpu.py         | 20 ++-----
 dts/framework/testbed_model/node.py           |  8 +--
 dts/framework/testbed_model/scapy.py          | 19 ++-----
 dts/framework/testbed_model/sut_node.py       | 40 ++++---------
 dts/framework/testbed_model/tg_node.py        |  7 +--
 dts/framework/utils.py                        | 20 ++-----
 dts/tests/TestSuite_hello_world.py            |  4 +-
 dts/tests/TestSuite_smoke_tests.py            | 11 +---
 23 files changed, 99 insertions(+), 281 deletions(-)
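
For reference, the check described in the commit message boils down to running black with the
new 100-character limit over the dts tree; devtools/dts-check-format.sh wraps black together
with the other tools. A rough sketch of the equivalent manual steps (the --line-length flag is
assumed from black's standard CLI; in practice the script picks the limit up from configuration):

    $ black --line-length 100 dts/      # reformat in place with the new limit
    $ ./devtools/dts-check-format.sh    # the validation required for DTS patches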
  

Comments

Thomas Monjalon Nov. 20, 2023, 4:50 p.m. UTC | #1
20/11/2023 13:36, Juraj Linkeš:
> Reformat to 100 from the previous 88 to unify with C recommendations.
> 
> In the C guidelines, 100 is the hard maximum and 80 is the ideal. The Python
> tools don't support this kind of flexibility, so we settle on the maximum.
> 
> We require all patches with DTS code to be validated with the
> devtools/dts-check-format.sh script, part of which is the black
> formatting tool. We've set up black to format all of the codebase and
> the reformat is needed so that future submitters are not affected.
> 
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> Acked-by: Jeremy Spewock <jspewock@iol.unh.edu>

In general, I don't like doing large cosmetic changes,
but it looks mandatory to allow automatic formatting with black.

Applied, thanks.

Note that my pylama still emits warnings when a line goes longer than 79.
It may be a problem in my environment, but it's something to check.
By the way, why are we using pylama in the script
instead of directly calling the linters we are interested in?
  
Juraj Linkeš Nov. 21, 2023, 9:27 a.m. UTC | #2
On Mon, Nov 20, 2023 at 5:50 PM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> 20/11/2023 13:36, Juraj Linkeš:
> > Reformat to 100 from the previous 88 to unify with C recommendations.
> >
> > In the C guidelines, 100 is the hard maximum and 80 is the ideal. The Python
> > tools don't support this kind of flexibility, so we settle on the maximum.
> >
> > We require all patches with DTS code to be validated with the
> > devtools/dts-check-format.sh script, part of which is the black
> > formatting tool. We've set up black to format all of the codebase and
> > the reformat is needed so that future submitters are not affected.
> >
> > Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> > Acked-by: Jeremy Spewock <jspewock@iol.unh.edu>
>
> In general, I don't like doing large cosmetic changes,
> but it looks mandatory to allow automatic formatting with black.
>
> Applied, thanks.
>
> Note that my pylama still emits warnings when a line goes longer than 79.
> It may be a problem in my environment, but it's something to check.
> By the way, why are we using pylama in the script
> instead of directly calling the linters we are interested in?
>

This is a good point. It's mainly a matter of convenience: it's easier to run
just one tool than several, each with its own possible config file. But seeing
as pylama doesn't seem to be well maintained (it doesn't work properly with
mypy and there are issues with pydocstyle as well), we may eschew it and run
the linters individually instead - I'll note this for the future.
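
For illustration only, calling the linters directly instead of going through pylama might look
roughly like the following; isort, pycodestyle and all of the flags are assumptions based on the
tools' standard CLIs, not necessarily what devtools/dts-check-format.sh does today:

    $ black --check --diff dts/                        # formatting, 100-character limit
    $ isort --check-only --diff dts/                   # import ordering (assumed to be in use)
    $ mypy dts/framework                               # type checking
    $ pydocstyle dts/framework                         # docstring style
    $ pycodestyle --max-line-length=100 dts/framework  # pycodestyle defaults to 79 otherwise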

  

Patch

diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index cb7e00ba34..9b32cf0532 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -140,9 +140,7 @@  def from_dict(d: dict) -> Union["SutNodeConfiguration", "TGNodeConfiguration"]:
 
         if "traffic_generator" in d:
             return TGNodeConfiguration(
-                traffic_generator=TrafficGeneratorConfig.from_dict(
-                    d["traffic_generator"]
-                ),
+                traffic_generator=TrafficGeneratorConfig.from_dict(d["traffic_generator"]),
                 **common_config,
             )
         else:
@@ -249,9 +247,7 @@  def from_dict(
         build_targets: list[BuildTargetConfiguration] = list(
             map(BuildTargetConfiguration.from_dict, d["build_targets"])
         )
-        test_suites: list[TestSuiteConfig] = list(
-            map(TestSuiteConfig.from_dict, d["test_suites"])
-        )
+        test_suites: list[TestSuiteConfig] = list(map(TestSuiteConfig.from_dict, d["test_suites"]))
         sut_name = d["system_under_test_node"]["node_name"]
         skip_smoke_tests = d.get("skip_smoke_tests", False)
         assert sut_name in node_map, f"Unknown SUT {sut_name} in execution {d}"
@@ -268,9 +264,7 @@  def from_dict(
         ), f"Invalid TG configuration {traffic_generator_node}"
 
         vdevs = (
-            d["system_under_test_node"]["vdevs"]
-            if "vdevs" in d["system_under_test_node"]
-            else []
+            d["system_under_test_node"]["vdevs"] if "vdevs" in d["system_under_test_node"] else []
         )
         return ExecutionConfiguration(
             build_targets=build_targets,
@@ -299,9 +293,7 @@  def from_dict(d: dict) -> "Configuration":
         assert len(nodes) == len(node_map), "Duplicate node names are not allowed"
 
         executions: list[ExecutionConfiguration] = list(
-            map(
-                ExecutionConfiguration.from_dict, d["executions"], [node_map for _ in d]
-            )
+            map(ExecutionConfiguration.from_dict, d["executions"], [node_map for _ in d])
         )
 
         return Configuration(executions=executions)
@@ -315,9 +307,7 @@  def load_config() -> Configuration:
     with open(SETTINGS.config_file_path, "r") as f:
         config_data = yaml.safe_load(f)
 
-    schema_path = os.path.join(
-        pathlib.Path(__file__).parent.resolve(), "conf_yaml_schema.json"
-    )
+    schema_path = os.path.join(pathlib.Path(__file__).parent.resolve(), "conf_yaml_schema.json")
 
     with open(schema_path, "r") as f:
         schema = json.load(f)
diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index f773f0c38d..25d6942d81 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -92,9 +92,7 @@  def _run_execution(
     Run the given execution. This involves running the execution setup as well as
     running all build targets in the given execution.
     """
-    dts_logger.info(
-        f"Running execution with SUT '{execution.system_under_test_node.name}'."
-    )
+    dts_logger.info(f"Running execution with SUT '{execution.system_under_test_node.name}'.")
     execution_result = result.add_execution(sut_node.config)
     execution_result.add_sut_info(sut_node.node_info)
 
@@ -107,9 +105,7 @@  def _run_execution(
 
     else:
         for build_target in execution.build_targets:
-            _run_build_target(
-                sut_node, tg_node, build_target, execution, execution_result
-            )
+            _run_build_target(sut_node, tg_node, build_target, execution, execution_result)
 
     finally:
         try:
@@ -170,13 +166,10 @@  def _run_all_suites(
         execution.test_suites[:0] = [TestSuiteConfig.from_dict("smoke_tests")]
     for test_suite_config in execution.test_suites:
         try:
-            _run_single_suite(
-                sut_node, tg_node, execution, build_target_result, test_suite_config
-            )
+            _run_single_suite(sut_node, tg_node, execution, build_target_result, test_suite_config)
         except BlockingTestSuiteError as e:
             dts_logger.exception(
-                f"An error occurred within {test_suite_config.test_suite}. "
-                "Skipping build target..."
+                f"An error occurred within {test_suite_config.test_suite}. Skipping build target."
             )
             result.add_error(e)
             end_build_target = True
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index 001a5a5496..b362e42924 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -116,10 +116,7 @@  def __init__(self, command: str, command_return_code: int):
         self.command_return_code = command_return_code
 
     def __str__(self) -> str:
-        return (
-            f"Command {self.command} returned a non-zero exit code: "
-            f"{self.command_return_code}"
-        )
+        return f"Command {self.command} returned a non-zero exit code: {self.command_return_code}"
 
 
 class RemoteDirectoryExistsError(DTSError):
diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
index 00b6d1f03a..6124417bd7 100644
--- a/dts/framework/remote_session/__init__.py
+++ b/dts/framework/remote_session/__init__.py
@@ -30,9 +30,7 @@ 
 )
 
 
-def create_session(
-    node_config: NodeConfiguration, name: str, logger: DTSLOG
-) -> OSSession:
+def create_session(node_config: NodeConfiguration, name: str, logger: DTSLOG) -> OSSession:
     match node_config.os:
         case OS.linux:
             return LinuxSession(node_config, name, logger)
diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/remote_session/linux_session.py
index a3f1a6bf3b..fd877fbfae 100644
--- a/dts/framework/remote_session/linux_session.py
+++ b/dts/framework/remote_session/linux_session.py
@@ -82,9 +82,7 @@  def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None:
         self._mount_huge_pages()
 
     def _get_hugepage_size(self) -> int:
-        hugepage_size = self.send_command(
-            "awk '/Hugepagesize/ {print $2}' /proc/meminfo"
-        ).stdout
+        hugepage_size = self.send_command("awk '/Hugepagesize/ {print $2}' /proc/meminfo").stdout
         return int(hugepage_size)
 
     def _get_hugepages_total(self) -> int:
@@ -120,13 +118,9 @@  def _supports_numa(self) -> bool:
         # there's no reason to do any numa specific configuration)
         return len(self._numa_nodes) > 1
 
-    def _configure_huge_pages(
-        self, amount: int, size: int, force_first_numa: bool
-    ) -> None:
+    def _configure_huge_pages(self, amount: int, size: int, force_first_numa: bool) -> None:
         self._logger.info("Configuring Hugepages.")
-        hugepage_config_path = (
-            f"/sys/kernel/mm/hugepages/hugepages-{size}kB/nr_hugepages"
-        )
+        hugepage_config_path = f"/sys/kernel/mm/hugepages/hugepages-{size}kB/nr_hugepages"
         if force_first_numa and self._supports_numa():
             # clear non-numa hugepages
             self.send_command(f"echo 0 | tee {hugepage_config_path}", privileged=True)
@@ -135,24 +129,18 @@  def _configure_huge_pages(
                 f"/hugepages-{size}kB/nr_hugepages"
             )
 
-        self.send_command(
-            f"echo {amount} | tee {hugepage_config_path}", privileged=True
-        )
+        self.send_command(f"echo {amount} | tee {hugepage_config_path}", privileged=True)
 
     def update_ports(self, ports: list[Port]) -> None:
         self._logger.debug("Gathering port info.")
         for port in ports:
-            assert (
-                port.node == self.name
-            ), "Attempted to gather port info on the wrong node"
+            assert port.node == self.name, "Attempted to gather port info on the wrong node"
 
         port_info_list = self._get_lshw_info()
         for port in ports:
             for port_info in port_info_list:
                 if f"pci@{port.pci}" == port_info.get("businfo"):
-                    self._update_port_attr(
-                        port, port_info.get("logicalname"), "logical_name"
-                    )
+                    self._update_port_attr(port, port_info.get("logicalname"), "logical_name")
                     self._update_port_attr(port, port_info.get("serial"), "mac_address")
                     port_info_list.remove(port_info)
                     break
@@ -163,25 +151,18 @@  def _get_lshw_info(self) -> list[LshwOutput]:
         output = self.send_command("lshw -quiet -json -C network", verify=True)
         return json.loads(output.stdout)
 
-    def _update_port_attr(
-        self, port: Port, attr_value: str | None, attr_name: str
-    ) -> None:
+    def _update_port_attr(self, port: Port, attr_value: str | None, attr_name: str) -> None:
         if attr_value:
             setattr(port, attr_name, attr_value)
-            self._logger.debug(
-                f"Found '{attr_name}' of port {port.pci}: '{attr_value}'."
-            )
+            self._logger.debug(f"Found '{attr_name}' of port {port.pci}: '{attr_value}'.")
         else:
             self._logger.warning(
-                f"Attempted to get '{attr_name}' of port {port.pci}, "
-                f"but it doesn't exist."
+                f"Attempted to get '{attr_name}' of port {port.pci}, but it doesn't exist."
             )
 
     def configure_port_state(self, port: Port, enable: bool) -> None:
         state = "up" if enable else "down"
-        self.send_command(
-            f"ip link set dev {port.logical_name} {state}", privileged=True
-        )
+        self.send_command(f"ip link set dev {port.logical_name} {state}", privileged=True)
 
     def configure_port_ip_address(
         self,
diff --git a/dts/framework/remote_session/posix_session.py b/dts/framework/remote_session/posix_session.py
index 5da0516e05..a29e2e8280 100644
--- a/dts/framework/remote_session/posix_session.py
+++ b/dts/framework/remote_session/posix_session.py
@@ -94,8 +94,7 @@  def extract_remote_tarball(
         expected_dir: str | PurePath | None = None,
     ) -> None:
         self.send_command(
-            f"tar xfm {remote_tarball_path} "
-            f"-C {PurePosixPath(remote_tarball_path).parent}",
+            f"tar xfm {remote_tarball_path} -C {PurePosixPath(remote_tarball_path).parent}",
             60,
         )
         if expected_dir:
@@ -125,8 +124,7 @@  def build_dpdk(
                 self._logger.info("Configuring DPDK build from scratch.")
                 self.remove_remote_dir(remote_dpdk_build_dir)
                 self.send_command(
-                    f"meson setup "
-                    f"{meson_args} {remote_dpdk_dir} {remote_dpdk_build_dir}",
+                    f"meson setup {meson_args} {remote_dpdk_dir} {remote_dpdk_build_dir}",
                     timeout,
                     verify=True,
                     env=env_vars,
@@ -140,9 +138,7 @@  def build_dpdk(
             raise DPDKBuildError(f"DPDK build failed when doing '{e.command}'.")
 
     def get_dpdk_version(self, build_dir: str | PurePath) -> str:
-        out = self.send_command(
-            f"cat {self.join_remote_path(build_dir, 'VERSION')}", verify=True
-        )
+        out = self.send_command(f"cat {self.join_remote_path(build_dir, 'VERSION')}", verify=True)
         return out.stdout
 
     def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
@@ -156,9 +152,7 @@  def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
             self._check_dpdk_hugepages(dpdk_runtime_dirs)
             self._remove_dpdk_runtime_dirs(dpdk_runtime_dirs)
 
-    def _get_dpdk_runtime_dirs(
-        self, dpdk_prefix_list: Iterable[str]
-    ) -> list[PurePosixPath]:
+    def _get_dpdk_runtime_dirs(self, dpdk_prefix_list: Iterable[str]) -> list[PurePosixPath]:
         prefix = PurePosixPath("/var", "run", "dpdk")
         if not dpdk_prefix_list:
             remote_prefixes = self._list_remote_dirs(prefix)
@@ -174,9 +168,7 @@  def _list_remote_dirs(self, remote_path: str | PurePath) -> list[str] | None:
         Return a list of directories of the remote_dir.
         If remote_path doesn't exist, return None.
         """
-        out = self.send_command(
-            f"ls -l {remote_path} | awk '/^d/ {{print $NF}}'"
-        ).stdout
+        out = self.send_command(f"ls -l {remote_path} | awk '/^d/ {{print $NF}}'").stdout
         if "No such file or directory" in out:
             return None
         else:
@@ -200,9 +192,7 @@  def _remote_files_exists(self, remote_path: PurePath) -> bool:
         result = self.send_command(f"test -e {remote_path}")
         return not result.return_code
 
-    def _check_dpdk_hugepages(
-        self, dpdk_runtime_dirs: Iterable[str | PurePath]
-    ) -> None:
+    def _check_dpdk_hugepages(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> None:
         for dpdk_runtime_dir in dpdk_runtime_dirs:
             hugepage_info = PurePosixPath(dpdk_runtime_dir, "hugepage_info")
             if self._remote_files_exists(hugepage_info):
@@ -213,9 +203,7 @@  def _check_dpdk_hugepages(
                     self._logger.warning(out)
                     self._logger.warning("*******************************************")
 
-    def _remove_dpdk_runtime_dirs(
-        self, dpdk_runtime_dirs: Iterable[str | PurePath]
-    ) -> None:
+    def _remove_dpdk_runtime_dirs(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> None:
         for dpdk_runtime_dir in dpdk_runtime_dirs:
             self.remove_remote_dir(dpdk_runtime_dir)
 
@@ -245,6 +233,4 @@  def get_node_info(self) -> NodeInfo:
             SETTINGS.timeout,
         ).stdout.split("\n")
         kernel_version = self.send_command("uname -r", SETTINGS.timeout).stdout
-        return NodeInfo(
-            os_release_info[0].strip(), os_release_info[1].strip(), kernel_version
-        )
+        return NodeInfo(os_release_info[0].strip(), os_release_info[1].strip(), kernel_version)
diff --git a/dts/framework/remote_session/remote/interactive_remote_session.py b/dts/framework/remote_session/remote/interactive_remote_session.py
index 9085a668e8..098ded1bb0 100644
--- a/dts/framework/remote_session/remote/interactive_remote_session.py
+++ b/dts/framework/remote_session/remote/interactive_remote_session.py
@@ -73,9 +73,7 @@  def __init__(self, node_config: NodeConfiguration, _logger: DTSLOG) -> None:
             f"Initializing interactive connection for {self.username}@{self.hostname}"
         )
         self._connect()
-        self._logger.info(
-            f"Interactive connection successful for {self.username}@{self.hostname}"
-        )
+        self._logger.info(f"Interactive connection successful for {self.username}@{self.hostname}")
 
     def _connect(self) -> None:
         """Establish a connection to the node.
@@ -108,8 +106,7 @@  def _connect(self) -> None:
                 self._logger.debug(traceback.format_exc())
                 self._logger.warning(e)
                 self._logger.info(
-                    "Retrying interactive session connection: "
-                    f"retry number {retry_attempt +1}"
+                    f"Retrying interactive session connection: retry number {retry_attempt +1}"
                 )
             else:
                 break
diff --git a/dts/framework/remote_session/remote/interactive_shell.py b/dts/framework/remote_session/remote/interactive_shell.py
index c24376b2a8..4db19fb9b3 100644
--- a/dts/framework/remote_session/remote/interactive_shell.py
+++ b/dts/framework/remote_session/remote/interactive_shell.py
@@ -85,9 +85,7 @@  def __init__(
         self._app_args = app_args
         self._start_application(get_privileged_command)
 
-    def _start_application(
-        self, get_privileged_command: Callable[[str], str] | None
-    ) -> None:
+    def _start_application(self, get_privileged_command: Callable[[str], str] | None) -> None:
         """Starts a new interactive application based on the path to the app.
 
         This method is often overridden by subclasses as their process for
diff --git a/dts/framework/remote_session/remote/remote_session.py b/dts/framework/remote_session/remote/remote_session.py
index 0647d93de4..719f7d1ef7 100644
--- a/dts/framework/remote_session/remote/remote_session.py
+++ b/dts/framework/remote_session/remote/remote_session.py
@@ -96,9 +96,7 @@  def send_command(
         If verify is True, check the return code of the executed command
         and raise a RemoteCommandExecutionError if the command failed.
         """
-        self._logger.info(
-            f"Sending: '{command}'" + (f" with env vars: '{env}'" if env else "")
-        )
+        self._logger.info(f"Sending: '{command}'" + (f" with env vars: '{env}'" if env else ""))
         result = self._send_command(command, timeout, env)
         if verify and result.return_code:
             self._logger.debug(
@@ -112,9 +110,7 @@  def send_command(
         return result
 
     @abstractmethod
-    def _send_command(
-        self, command: str, timeout: float, env: dict | None
-    ) -> CommandResult:
+    def _send_command(self, command: str, timeout: float, env: dict | None) -> CommandResult:
         """
         Use the underlying protocol to execute the command using optional env vars
         and return CommandResult.
diff --git a/dts/framework/remote_session/remote/ssh_session.py b/dts/framework/remote_session/remote/ssh_session.py
index 8d127f1601..1a7ee649ab 100644
--- a/dts/framework/remote_session/remote/ssh_session.py
+++ b/dts/framework/remote_session/remote/ssh_session.py
@@ -80,9 +80,7 @@  def _connect(self) -> None:
                 if error not in errors:
                     errors.append(error)
 
-                self._logger.info(
-                    f"Retrying connection: retry number {retry_attempt + 1}."
-                )
+                self._logger.info(f"Retrying connection: retry number {retry_attempt + 1}.")
 
             else:
                 break
@@ -92,9 +90,7 @@  def _connect(self) -> None:
     def is_alive(self) -> bool:
         return self.session.is_connected
 
-    def _send_command(
-        self, command: str, timeout: float, env: dict | None
-    ) -> CommandResult:
+    def _send_command(self, command: str, timeout: float, env: dict | None) -> CommandResult:
         """Send a command and return the result of the execution.
 
         Args:
@@ -107,9 +103,7 @@  def _send_command(
             SSHTimeoutError: The command execution timed out.
         """
         try:
-            output = self.session.run(
-                command, env=env, warn=True, hide=True, timeout=timeout
-            )
+            output = self.session.run(command, env=env, warn=True, hide=True, timeout=timeout)
 
         except (UnexpectedExit, ThreadException) as e:
             self._logger.exception(e)
@@ -119,9 +113,7 @@  def _send_command(
             self._logger.exception(e)
             raise SSHTimeoutError(command, e.result.stderr) from e
 
-        return CommandResult(
-            self.name, command, output.stdout, output.stderr, output.return_code
-        )
+        return CommandResult(self.name, command, output.stdout, output.stderr, output.return_code)
 
     def copy_from(
         self,
diff --git a/dts/framework/remote_session/remote/testpmd_shell.py b/dts/framework/remote_session/remote/testpmd_shell.py
index 1455b5a199..08ac311016 100644
--- a/dts/framework/remote_session/remote/testpmd_shell.py
+++ b/dts/framework/remote_session/remote/testpmd_shell.py
@@ -21,13 +21,9 @@  class TestPmdShell(InteractiveShell):
     path: PurePath = PurePath("app", "dpdk-testpmd")
     dpdk_app: bool = True
     _default_prompt: str = "testpmd>"
-    _command_extra_chars: str = (
-        "\n"  # We want to append an extra newline to every command
-    )
+    _command_extra_chars: str = "\n"  # We want to append an extra newline to every command
 
-    def _start_application(
-        self, get_privileged_command: Callable[[str], str] | None
-    ) -> None:
+    def _start_application(self, get_privileged_command: Callable[[str], str] | None) -> None:
         """See "_start_application" in InteractiveShell."""
         self._app_args += " -- -i"
         super()._start_application(get_privileged_command)
diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index cfa39d011b..974793a11a 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -72,9 +72,8 @@  class _Settings:
 
 def _get_parser() -> argparse.ArgumentParser:
     parser = argparse.ArgumentParser(
-        description="Run DPDK test suites. All options may be specified with "
-        "the environment variables provided in brackets. "
-        "Command line arguments have higher priority.",
+        description="Run DPDK test suites. All options may be specified with the environment "
+        "variables provided in brackets. Command line arguments have higher priority.",
         formatter_class=argparse.ArgumentDefaultsHelpFormatter,
     )
 
@@ -82,8 +81,7 @@  def _get_parser() -> argparse.ArgumentParser:
         "--config-file",
         action=_env_arg("DTS_CFG_FILE"),
         default="conf.yaml",
-        help="[DTS_CFG_FILE] configuration file that describes the test cases, SUTs "
-        "and targets.",
+        help="[DTS_CFG_FILE] configuration file that describes the test cases, SUTs and targets.",
     )
 
     parser.add_argument(
@@ -100,8 +98,7 @@  def _get_parser() -> argparse.ArgumentParser:
         action=_env_arg("DTS_TIMEOUT"),
         default=15,
         type=float,
-        help="[DTS_TIMEOUT] The default timeout for all DTS operations except for "
-        "compiling DPDK.",
+        help="[DTS_TIMEOUT] The default timeout for all DTS operations except for compiling DPDK.",
     )
 
     parser.add_argument(
@@ -170,9 +167,7 @@  def _get_settings() -> _Settings:
         timeout=parsed_args.timeout,
         verbose=(parsed_args.verbose == "Y"),
         skip_setup=(parsed_args.skip_setup == "Y"),
-        dpdk_tarball_path=Path(
-            DPDKGitTarball(parsed_args.tarball, parsed_args.output_dir)
-        )
+        dpdk_tarball_path=Path(DPDKGitTarball(parsed_args.tarball, parsed_args.output_dir))
         if not os.path.exists(parsed_args.tarball)
         else Path(parsed_args.tarball),
         compile_timeout=parsed_args.compile_timeout,
diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
index f0fbe80f6f..4c2e7e2418 100644
--- a/dts/framework/test_result.py
+++ b/dts/framework/test_result.py
@@ -83,9 +83,7 @@  def __iadd__(self, other: Result) -> "Statistics":
         """
         self[other.name] += 1
         self["PASS RATE"] = (
-            float(self[Result.PASS.name])
-            * 100
-            / sum(self[result.name] for result in Result)
+            float(self[Result.PASS.name]) * 100 / sum(self[result.name] for result in Result)
         )
         return self
 
@@ -135,9 +133,7 @@  def _get_setup_teardown_errors(self) -> list[Exception]:
 
     def _get_inner_errors(self) -> list[Exception]:
         return [
-            error
-            for inner_result in self._inner_results
-            for error in inner_result.get_errors()
+            error for inner_result in self._inner_results for error in inner_result.get_errors()
         ]
 
     def get_errors(self) -> list[Exception]:
@@ -174,9 +170,7 @@  def add_stats(self, statistics: Statistics) -> None:
         statistics += self.result
 
     def __bool__(self) -> bool:
-        return (
-            bool(self.setup_result) and bool(self.teardown_result) and bool(self.result)
-        )
+        return bool(self.setup_result) and bool(self.teardown_result) and bool(self.result)
 
 
 class TestSuiteResult(BaseResult):
@@ -247,9 +241,7 @@  def __init__(self, sut_node: NodeConfiguration):
         super(ExecutionResult, self).__init__()
         self.sut_node = sut_node
 
-    def add_build_target(
-        self, build_target: BuildTargetConfiguration
-    ) -> BuildTargetResult:
+    def add_build_target(self, build_target: BuildTargetConfiguration) -> BuildTargetResult:
         build_target_result = BuildTargetResult(build_target)
         self._inner_results.append(build_target_result)
         return build_target_result
diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index 3b890c0451..4a7907ec33 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -102,9 +102,7 @@  def _process_links(self) -> None:
                     tg_port.peer,
                     tg_port.identifier,
                 ):
-                    self._port_links.append(
-                        PortLink(sut_port=sut_port, tg_port=tg_port)
-                    )
+                    self._port_links.append(PortLink(sut_port=sut_port, tg_port=tg_port))
 
     def set_up_suite(self) -> None:
         """
@@ -151,9 +149,7 @@  def configure_testbed_ipv4(self, restore: bool = False) -> None:
     def _configure_ipv4_forwarding(self, enable: bool) -> None:
         self.sut_node.configure_ipv4_forwarding(enable)
 
-    def send_packet_and_capture(
-        self, packet: Packet, duration: float = 1
-    ) -> list[Packet]:
+    def send_packet_and_capture(self, packet: Packet, duration: float = 1) -> list[Packet]:
         """
         Send a packet through the appropriate interface and
         receive on the appropriate interface.
@@ -202,21 +198,15 @@  def verify(self, condition: bool, failure_description: str) -> None:
             self._fail_test_case_verify(failure_description)
 
     def _fail_test_case_verify(self, failure_description: str) -> None:
-        self._logger.debug(
-            "A test case failed, showing the last 10 commands executed on SUT:"
-        )
+        self._logger.debug("A test case failed, showing the last 10 commands executed on SUT:")
         for command_res in self.sut_node.main_session.remote_session.history[-10:]:
             self._logger.debug(command_res.command)
-        self._logger.debug(
-            "A test case failed, showing the last 10 commands executed on TG:"
-        )
+        self._logger.debug("A test case failed, showing the last 10 commands executed on TG:")
         for command_res in self.tg_node.main_session.remote_session.history[-10:]:
             self._logger.debug(command_res.command)
         raise TestCaseVerifyError(failure_description)
 
-    def verify_packets(
-        self, expected_packet: Packet, received_packets: list[Packet]
-    ) -> None:
+    def verify_packets(self, expected_packet: Packet, received_packets: list[Packet]) -> None:
         for received_packet in received_packets:
             if self._compare_packets(expected_packet, received_packet):
                 break
@@ -225,17 +215,11 @@  def verify_packets(
                 f"The expected packet {get_packet_summaries(expected_packet)} "
                 f"not found among received {get_packet_summaries(received_packets)}"
             )
-            self._fail_test_case_verify(
-                "An expected packet not found among received packets."
-            )
+            self._fail_test_case_verify("An expected packet not found among received packets.")
 
-    def _compare_packets(
-        self, expected_packet: Packet, received_packet: Packet
-    ) -> bool:
+    def _compare_packets(self, expected_packet: Packet, received_packet: Packet) -> bool:
         self._logger.debug(
-            "Comparing packets: \n"
-            f"{expected_packet.summary()}\n"
-            f"{received_packet.summary()}"
+            f"Comparing packets: \n{expected_packet.summary()}\n{received_packet.summary()}"
         )
 
         l3 = IP in expected_packet.layers()
@@ -262,14 +246,10 @@  def _compare_packets(
             expected_payload = expected_payload.payload
 
         if expected_payload:
-            self._logger.debug(
-                f"The expected packet did not contain {expected_payload}."
-            )
+            self._logger.debug(f"The expected packet did not contain {expected_payload}.")
             return False
         if received_payload and received_payload.__class__ != Padding:
-            self._logger.debug(
-                "The received payload had extra layers which were not padding."
-            )
+            self._logger.debug("The received payload had extra layers which were not padding.")
             return False
         return True
 
@@ -296,10 +276,7 @@  def _verify_l2_frame(self, received_packet: Ether, l3: bool) -> bool:
 
     def _verify_l3_packet(self, received_packet: IP, expected_packet: IP) -> bool:
         self._logger.debug("Looking at the IP layer.")
-        if (
-            received_packet.src != expected_packet.src
-            or received_packet.dst != expected_packet.dst
-        ):
+        if received_packet.src != expected_packet.src or received_packet.dst != expected_packet.dst:
             return False
         return True
 
@@ -373,9 +350,7 @@  def _get_test_cases(self, test_case_regex: str) -> list[MethodType]:
             if self._should_be_executed(test_case_name, test_case_regex):
                 filtered_test_cases.append(test_case)
         cases_str = ", ".join((x.__name__ for x in filtered_test_cases))
-        self._logger.debug(
-            f"Found test cases '{cases_str}' in {self.__class__.__name__}."
-        )
+        self._logger.debug(f"Found test cases '{cases_str}' in {self.__class__.__name__}.")
         return filtered_test_cases
 
     def _should_be_executed(self, test_case_name: str, test_case_regex: str) -> bool:
@@ -445,9 +420,7 @@  def _execute_test_case(
             self._logger.exception(f"Test case execution ERROR: {test_case_name}")
             test_case_result.update(Result.ERROR, e)
         except KeyboardInterrupt:
-            self._logger.error(
-                f"Test case execution INTERRUPTED by user: {test_case_name}"
-            )
+            self._logger.error(f"Test case execution INTERRUPTED by user: {test_case_name}")
             test_case_result.update(Result.SKIP)
             raise KeyboardInterrupt("Stop DTS")
 
@@ -464,9 +437,7 @@  def is_test_suite(object) -> bool:
     try:
         testcase_module = importlib.import_module(testsuite_module_path)
     except ModuleNotFoundError as e:
-        raise ConfigurationError(
-            f"Test suite '{testsuite_module_path}' not found."
-        ) from e
+        raise ConfigurationError(f"Test suite '{testsuite_module_path}' not found.") from e
     return [
         test_suite_class
         for _, test_suite_class in inspect.getmembers(testcase_module, is_test_suite)
diff --git a/dts/framework/testbed_model/capturing_traffic_generator.py b/dts/framework/testbed_model/capturing_traffic_generator.py
index ab98987f8e..e6512061d7 100644
--- a/dts/framework/testbed_model/capturing_traffic_generator.py
+++ b/dts/framework/testbed_model/capturing_traffic_generator.py
@@ -100,8 +100,7 @@  def send_packets_and_capture(
         """
         self._logger.debug(get_packet_summaries(packets))
         self._logger.debug(
-            f"Sending packet on {send_port.logical_name}, "
-            f"receiving on {receive_port.logical_name}."
+            f"Sending packet on {send_port.logical_name}, receiving on {receive_port.logical_name}."
         )
         received_packets = self._send_packets_and_capture(
             packets,
@@ -110,9 +109,7 @@  def send_packets_and_capture(
             duration,
         )
 
-        self._logger.debug(
-            f"Received packets: {get_packet_summaries(received_packets)}"
-        )
+        self._logger.debug(f"Received packets: {get_packet_summaries(received_packets)}")
         self._write_capture_from_packets(capture_name, received_packets)
         return received_packets
 
diff --git a/dts/framework/testbed_model/hw/cpu.py b/dts/framework/testbed_model/hw/cpu.py
index d1918a12dc..cbc5fe7fff 100644
--- a/dts/framework/testbed_model/hw/cpu.py
+++ b/dts/framework/testbed_model/hw/cpu.py
@@ -54,9 +54,7 @@  def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | str):
 
         # the input lcores may not be sorted
         self._lcore_list.sort()
-        self._lcore_str = (
-            f'{",".join(self._get_consecutive_lcores_range(self._lcore_list))}'
-        )
+        self._lcore_str = f'{",".join(self._get_consecutive_lcores_range(self._lcore_list))}'
 
     @property
     def lcore_list(self) -> list[int]:
@@ -70,15 +68,11 @@  def _get_consecutive_lcores_range(self, lcore_ids_list: list[int]) -> list[str]:
                 segment.append(lcore_id)
             else:
                 formatted_core_list.append(
-                    f"{segment[0]}-{segment[-1]}"
-                    if len(segment) > 1
-                    else f"{segment[0]}"
+                    f"{segment[0]}-{segment[-1]}" if len(segment) > 1 else f"{segment[0]}"
                 )
                 current_core_index = lcore_ids_list.index(lcore_id)
                 formatted_core_list.extend(
-                    self._get_consecutive_lcores_range(
-                        lcore_ids_list[current_core_index:]
-                    )
+                    self._get_consecutive_lcores_range(lcore_ids_list[current_core_index:])
                 )
                 segment.clear()
                 break
@@ -125,9 +119,7 @@  def __init__(
         self._filter_specifier = filter_specifier
 
         # sorting by core is needed in case hyperthreading is enabled
-        self._lcores_to_filter = sorted(
-            lcore_list, key=lambda x: x.core, reverse=not ascending
-        )
+        self._lcores_to_filter = sorted(lcore_list, key=lambda x: x.core, reverse=not ascending)
         self.filter()
 
     @abstractmethod
@@ -220,9 +212,7 @@  def _filter_cores_from_socket(
                 else:
                     # we have enough lcores per this core
                     continue
-            elif self._filter_specifier.cores_per_socket > len(
-                lcore_count_per_core_map
-            ):
+            elif self._filter_specifier.cores_per_socket > len(lcore_count_per_core_map):
                 # only add cores if we need more
                 lcore_count_per_core_map[lcore.core] = 1
                 filtered_lcores.append(lcore)
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index fc01e0bf8e..ef700d8114 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -103,18 +103,14 @@  def _tear_down_execution(self) -> None:
         is not decorated so that the derived class doesn't have to use the decorator.
         """
 
-    def set_up_build_target(
-        self, build_target_config: BuildTargetConfiguration
-    ) -> None:
+    def set_up_build_target(self, build_target_config: BuildTargetConfiguration) -> None:
         """
         Perform the build target setup that will be done for each build target
         tested on this node.
         """
         self._set_up_build_target(build_target_config)
 
-    def _set_up_build_target(
-        self, build_target_config: BuildTargetConfiguration
-    ) -> None:
+    def _set_up_build_target(self, build_target_config: BuildTargetConfiguration) -> None:
         """
         This method exists to be optionally overwritten by derived classes and
         is not decorated so that the derived class doesn't have to use the decorator.
diff --git a/dts/framework/testbed_model/scapy.py b/dts/framework/testbed_model/scapy.py
index af0d4dbb25..9083e92b3d 100644
--- a/dts/framework/testbed_model/scapy.py
+++ b/dts/framework/testbed_model/scapy.py
@@ -96,9 +96,7 @@  def scapy_send_packets_and_capture(
     return [scapy_packet.build() for scapy_packet in sniffer.stop(join=True)]
 
 
-def scapy_send_packets(
-    xmlrpc_packets: list[xmlrpc.client.Binary], send_iface: str
-) -> None:
+def scapy_send_packets(xmlrpc_packets: list[xmlrpc.client.Binary], send_iface: str) -> None:
     """RPC function to send packets.
 
     The function is meant to be executed on the remote TG node.
@@ -197,9 +195,7 @@  class ScapyTrafficGenerator(CapturingTrafficGenerator):
     def __init__(self, tg_node: TGNode, config: ScapyTrafficGeneratorConfig):
         self._config = config
         self._tg_node = tg_node
-        self._logger = getLogger(
-            f"{self._tg_node.name} {self._config.traffic_generator_type}"
-        )
+        self._logger = getLogger(f"{self._tg_node.name} {self._config.traffic_generator_type}")
 
         assert (
             self._tg_node.config.os == OS.linux
@@ -218,9 +214,7 @@  def __init__(self, tg_node: TGNode, config: ScapyTrafficGeneratorConfig):
         self._start_xmlrpc_server_in_remote_python(xmlrpc_server_listen_port)
 
         # connect to the server
-        server_url = (
-            f"http://{self._tg_node.config.hostname}:{xmlrpc_server_listen_port}"
-        )
+        server_url = f"http://{self._tg_node.config.hostname}:{xmlrpc_server_listen_port}"
         self.rpc_server_proxy = xmlrpc.client.ServerProxy(
             server_url, allow_none=True, verbose=SETTINGS.verbose
         )
@@ -240,17 +234,14 @@  def _start_xmlrpc_server_in_remote_python(self, listen_port: int):
         src = inspect.getsource(QuittableXMLRPCServer)
         # Lines with only whitespace break the repl if in the middle of a function
         # or class, so strip all lines containing only whitespace
-        src = "\n".join(
-            [line for line in src.splitlines() if not line.isspace() and line != ""]
-        )
+        src = "\n".join([line for line in src.splitlines() if not line.isspace() and line != ""])
 
         spacing = "\n" * 4
 
         # execute it in the python terminal
         self.session.send_command(spacing + src + spacing)
         self.session.send_command(
-            f"server = QuittableXMLRPCServer(('0.0.0.0', {listen_port}));"
-            f"server.serve_forever()",
+            f"server = QuittableXMLRPCServer(('0.0.0.0', {listen_port}));server.serve_forever()",
             "XMLRPC OK",
         )
 
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index 4161d3a4d5..7f75043bd3 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -131,9 +131,7 @@  def remote_dpdk_build_dir(self) -> PurePath:
     @property
     def dpdk_version(self) -> str:
         if self._dpdk_version is None:
-            self._dpdk_version = self.main_session.get_dpdk_version(
-                self._remote_dpdk_dir
-            )
+            self._dpdk_version = self.main_session.get_dpdk_version(self._remote_dpdk_dir)
         return self._dpdk_version
 
     @property
@@ -151,8 +149,7 @@  def compiler_version(self) -> str:
                 )
             else:
                 self._logger.warning(
-                    "Failed to get compiler version because"
-                    "_build_target_config is None."
+                    "Failed to get compiler version because _build_target_config is None."
                 )
                 return ""
         return self._compiler_version
@@ -173,9 +170,7 @@  def get_build_target_info(self) -> BuildTargetInfo:
     def _guess_dpdk_remote_dir(self) -> PurePath:
         return self.main_session.guess_dpdk_remote_dir(self._remote_tmp_dir)
 
-    def _set_up_build_target(
-        self, build_target_config: BuildTargetConfiguration
-    ) -> None:
+    def _set_up_build_target(self, build_target_config: BuildTargetConfiguration) -> None:
         """
         Setup DPDK on the SUT node.
         """
@@ -195,23 +190,18 @@  def _tear_down_build_target(self) -> None:
         """
         self.bind_ports_to_driver(for_dpdk=False)
 
-    def _configure_build_target(
-        self, build_target_config: BuildTargetConfiguration
-    ) -> None:
+    def _configure_build_target(self, build_target_config: BuildTargetConfiguration) -> None:
         """
         Populate common environment variables and set build target config.
         """
         self._env_vars = {}
         self._build_target_config = build_target_config
-        self._env_vars.update(
-            self.main_session.get_dpdk_build_env_vars(build_target_config.arch)
-        )
+        self._env_vars.update(self.main_session.get_dpdk_build_env_vars(build_target_config.arch))
         self._env_vars["CC"] = build_target_config.compiler.name
         if build_target_config.compiler_wrapper:
             self._env_vars["CC"] = (
-                f"'{build_target_config.compiler_wrapper} "
-                f"{build_target_config.compiler.name}'"
-            )
+                f"'{build_target_config.compiler_wrapper} {build_target_config.compiler.name}'"
+            )  # fmt: skip
 
     @Node.skip_setup
     def _copy_dpdk_tarball(self) -> None:
@@ -242,9 +232,7 @@  def _copy_dpdk_tarball(self) -> None:
         self.main_session.remove_remote_dir(self._remote_dpdk_dir)
 
         # then extract to remote path
-        self.main_session.extract_remote_tarball(
-            remote_tarball_path, self._remote_dpdk_dir
-        )
+        self.main_session.extract_remote_tarball(remote_tarball_path, self._remote_dpdk_dir)
 
     @Node.skip_setup
     def _build_dpdk(self) -> None:
@@ -281,9 +269,7 @@  def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePa
         )
 
         if app_name == "all":
-            return self.main_session.join_remote_path(
-                self.remote_dpdk_build_dir, "examples"
-            )
+            return self.main_session.join_remote_path(self.remote_dpdk_build_dir, "examples")
         return self.main_session.join_remote_path(
             self.remote_dpdk_build_dir, "examples", f"dpdk-{app_name}"
         )
@@ -337,9 +323,7 @@  def create_eal_parameters(
                 '-c 0xf -a 0000:88:00.0 --file-prefix=dpdk_1112_20190809143420';
         """
 
-        lcore_list = LogicalCoreList(
-            self.filter_lcores(lcore_filter_specifier, ascending_cores)
-        )
+        lcore_list = LogicalCoreList(self.filter_lcores(lcore_filter_specifier, ascending_cores))
 
         if append_prefix_timestamp:
             prefix = f"{prefix}_{self._dpdk_timestamp}"
@@ -404,9 +388,7 @@  def create_interactive_shell(
                 self.remote_dpdk_build_dir, shell_cls.path
             )
 
-        return super().create_interactive_shell(
-            shell_cls, timeout, privileged, str(eal_parameters)
-        )
+        return super().create_interactive_shell(shell_cls, timeout, privileged, str(eal_parameters))
 
     def bind_ports_to_driver(self, for_dpdk: bool = True) -> None:
         """Bind all ports on the SUT to a driver.
diff --git a/dts/framework/testbed_model/tg_node.py b/dts/framework/testbed_model/tg_node.py
index 27025cfa31..79a55663b5 100644
--- a/dts/framework/testbed_model/tg_node.py
+++ b/dts/framework/testbed_model/tg_node.py
@@ -45,9 +45,7 @@  class TGNode(Node):
 
     def __init__(self, node_config: TGNodeConfiguration):
         super(TGNode, self).__init__(node_config)
-        self.traffic_generator = create_traffic_generator(
-            self, node_config.traffic_generator
-        )
+        self.traffic_generator = create_traffic_generator(self, node_config.traffic_generator)
         self._logger.info(f"Created node: {self.name}")
 
     def send_packet_and_capture(
@@ -94,6 +92,5 @@  def create_traffic_generator(
             return ScapyTrafficGenerator(tg_node, traffic_generator_config)
         case _:
             raise ConfigurationError(
-                "Unknown traffic generator: "
-                f"{traffic_generator_config.traffic_generator_type}"
+                f"Unknown traffic generator: {traffic_generator_config.traffic_generator_type}"
             )
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index d27c2c5b5f..d098d364ff 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -19,9 +19,7 @@ 
 
 class StrEnum(Enum):
     @staticmethod
-    def _generate_next_value_(
-        name: str, start: int, count: int, last_values: object
-    ) -> str:
+    def _generate_next_value_(name: str, start: int, count: int, last_values: object) -> str:
         return name
 
     def __str__(self) -> str:
@@ -32,9 +30,7 @@  def __str__(self) -> str:
 
 
 def check_dts_python_version() -> None:
-    if sys.version_info.major < 3 or (
-        sys.version_info.major == 3 and sys.version_info.minor < 10
-    ):
+    if sys.version_info.major < 3 or (sys.version_info.major == 3 and sys.version_info.minor < 10):
         print(
             RED(
                 (
@@ -60,9 +56,7 @@  def expand_range(range_str: str) -> list[int]:
         range_boundaries = range_str.split("-")
         # will throw an exception when items in range_boundaries can't be converted,
         # serving as type check
-        expanded_range.extend(
-            range(int(range_boundaries[0]), int(range_boundaries[-1]) + 1)
-        )
+        expanded_range.extend(range(int(range_boundaries[0]), int(range_boundaries[-1]) + 1))
 
     return expanded_range
 
@@ -71,9 +65,7 @@  def get_packet_summaries(packets: list[Packet]):
     if len(packets) == 1:
         packet_summaries = packets[0].summary()
     else:
-        packet_summaries = json.dumps(
-            list(map(lambda pkt: pkt.summary(), packets)), indent=4
-        )
+        packet_summaries = json.dumps(list(map(lambda pkt: pkt.summary(), packets)), indent=4)
     return f"Packet contents: \n{packet_summaries}"
 
 
@@ -94,9 +86,7 @@  class MesonArgs(object):
     _default_library: str
 
     def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
-        self._default_library = (
-            f"--default-library={default_library}" if default_library else ""
-        )
+        self._default_library = f"--default-library={default_library}" if default_library else ""
         self._dpdk_args = " ".join(
             (
                 f"-D{dpdk_arg_name}={dpdk_arg_value}"
diff --git a/dts/tests/TestSuite_hello_world.py b/dts/tests/TestSuite_hello_world.py
index 7e3d95c0cf..768ba1cfa8 100644
--- a/dts/tests/TestSuite_hello_world.py
+++ b/dts/tests/TestSuite_hello_world.py
@@ -34,9 +34,7 @@  def test_hello_world_single_core(self) -> None:
         # get the first usable core
         lcore_amount = LogicalCoreCount(1, 1, 1)
         lcores = LogicalCoreCountFilter(self.sut_node.lcores, lcore_amount).filter()
-        eal_para = self.sut_node.create_eal_parameters(
-            lcore_filter_specifier=lcore_amount
-        )
+        eal_para = self.sut_node.create_eal_parameters(lcore_filter_specifier=lcore_amount)
         result = self.sut_node.run_dpdk_app(self.app_helloworld_path, eal_para)
         self.verify(
             f"hello from core {int(lcores[0])}" in result.stdout,
diff --git a/dts/tests/TestSuite_smoke_tests.py b/dts/tests/TestSuite_smoke_tests.py
index e8016d1b54..8958f58dac 100644
--- a/dts/tests/TestSuite_smoke_tests.py
+++ b/dts/tests/TestSuite_smoke_tests.py
@@ -45,13 +45,10 @@  def test_driver_tests(self) -> None:
         for dev in self.sut_node.virtual_devices:
             vdev_args += f"--vdev {dev} "
         vdev_args = vdev_args[:-1]
-        driver_tests_command = (
-            f"meson test -C {self.dpdk_build_dir_path} --suite driver-tests"
-        )
+        driver_tests_command = f"meson test -C {self.dpdk_build_dir_path} --suite driver-tests"
         if vdev_args:
             self._logger.info(
-                "Running driver tests with the following virtual "
-                f"devices: {vdev_args}"
+                f"Running driver tests with the following virtual devices: {vdev_args}"
             )
             driver_tests_command += f' --test-args "{vdev_args}"'
 
@@ -67,9 +64,7 @@  def test_devices_listed_in_testpmd(self) -> None:
         Test:
             Uses testpmd driver to verify that devices have been found by testpmd.
         """
-        testpmd_driver = self.sut_node.create_interactive_shell(
-            TestPmdShell, privileged=True
-        )
+        testpmd_driver = self.sut_node.create_interactive_shell(TestPmdShell, privileged=True)
         dev_list = [str(x) for x in testpmd_driver.get_devices()]
         for nic in self.nics_in_node:
             self.verify(