From patchwork Thu Sep 22 14:29:50 2022
X-Patchwork-Submitter: "Li, HongboX"
X-Patchwork-Id: 116624
From: Hongbo Li
To: dts@dpdk.org
Cc: Hongbo Li
Subject: [PATCH V1 7/7] tests/multiprocess: Separated performance cases
Date: Thu, 22 Sep 2022 14:29:50 +0000
Message-Id: <20220922142950.398902-7-hongbox.li@intel.com>
In-Reply-To: <20220922142950.398902-1-hongbox.li@intel.com>
References: <20220922142950.398902-1-hongbox.li@intel.com>
List-Id: test suite reviews and discussions

Separated performance cases

Signed-off-by: Hongbo Li <hongbox.li@intel.com>
---
 test_plans/index.rst                       |   7 +
 test_plans/multiprocess_test_plan.rst      |  48 ---
 test_plans/perf_multiprocess_test_plan.rst | 194 ++++++++++++
 tests/TestSuite_multiprocess.py            | 210 -------------
 tests/TestSuite_perf_multiprocess.py       | 333 +++++++++++++++++++++
 5 files changed, 534 insertions(+), 258 deletions(-)
 create mode 100644 test_plans/perf_multiprocess_test_plan.rst
 create mode 100644 tests/TestSuite_perf_multiprocess.py

diff --git a/test_plans/index.rst b/test_plans/index.rst
index 8e2634bd..a834d767 100644
--- a/test_plans/index.rst
+++ b/test_plans/index.rst
@@ -108,6 +108,13 @@ The following are the test plans for the DPDK DTS automated test system.
ntb_test_plan nvgre_test_plan perf_virtio_user_loopback_test_plan + perf_efd_test_plan + perf_ipfrag_test_plan + perf_kni_test_plan + perf_l2fwd_test_plan + perf_multiprocess_test_plan + perf_tso_test_plan + perf_vxlan_test_plan pf_smoke_test_plan pipeline_test_plan pvp_virtio_user_multi_queues_port_restart_test_plan diff --git a/test_plans/multiprocess_test_plan.rst b/test_plans/multiprocess_test_plan.rst index bfef1ca9..699938ed 100644 --- a/test_plans/multiprocess_test_plan.rst +++ b/test_plans/multiprocess_test_plan.rst @@ -196,26 +196,6 @@ run should remain the same, except for the ``num-procs`` value, which should be adjusted appropriately. -Test Case: Performance Tests ----------------------------- - -Run the multiprocess application using standard IP traffic - varying source -and destination address information to allow RSS to evenly distribute packets -among RX queues. Record traffic throughput results as below. - -+-------------------+-----+-----+-----+-----+-----+-----+ -| Num-procs | 1 | 2 | 2 | 4 | 4 | 8 | -+-------------------+-----+-----+-----+-----+-----+-----+ -| Cores/Threads | 1/1 | 1/2 | 2/1 | 2/2 | 4/1 | 4/2 | -+-------------------+-----+-----+-----+-----+-----+-----+ -| Num Ports | 2 | 2 | 2 | 2 | 2 | 2 | -+-------------------+-----+-----+-----+-----+-----+-----+ -| Packet Size | 64 | 64 | 64 | 64 | 64 | 64 | -+-------------------+-----+-----+-----+-----+-----+-----+ -| %-age Line Rate | X | X | X | X | X | X | -+-------------------+-----+-----+-----+-----+-----+-----+ -| Packet Rate(mpps) | X | X | X | X | X | X | -+-------------------+-----+-----+-----+-----+-----+-----+ Test Case: Function Tests ------------------------- @@ -294,34 +274,6 @@ An example commands to run 8 client processes is as follows:: root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 40000 --proc-type=secondary -- -n 6 & root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 100000 --proc-type=secondary -- -n 7 & -Test Case: Performance Measurement ----------------------------------- - -- On the traffic generator set up a traffic flow in both directions specifying - IP traffic. -- Run the server and client applications as above. -- Start the traffic and record the throughput for transmitted and received packets. - -An example set of results is shown below. - -+----------------------+-----+-----+-----+-----+-----+-----+ -| Server threads | 1 | 1 | 1 | 1 | 1 | 1 | -+----------------------+-----+-----+-----+-----+-----+-----+ -| Server Cores/Threads | 1/1 | 1/1 | 1/1 | 1/1 | 1/1 | 1/1 | -+----------------------+-----+-----+-----+-----+-----+-----+ -| Num-clients | 1 | 2 | 2 | 4 | 4 | 8 | -+----------------------+-----+-----+-----+-----+-----+-----+ -| Client Cores/Threads | 1/1 | 1/2 | 2/1 | 2/2 | 4/1 | 4/2 | -+----------------------+-----+-----+-----+-----+-----+-----+ -| Num Ports | 2 | 2 | 2 | 2 | 2 | 2 | -+----------------------+-----+-----+-----+-----+-----+-----+ -| Packet Size | 64 | 64 | 64 | 64 | 64 | 64 | -+----------------------+-----+-----+-----+-----+-----+-----+ -| %-age Line Rate | X | X | X | X | X | X | -+----------------------+-----+-----+-----+-----+-----+-----+ -| Packet Rate(mpps) | X | X | X | X | X | X | -+----------------------+-----+-----+-----+-----+-----+-----+ - Test Case: Function Tests ------------------------- start server process and 2 client process, send some packets, the number of packets is a random value between 20 and 256. 
diff --git a/test_plans/perf_multiprocess_test_plan.rst b/test_plans/perf_multiprocess_test_plan.rst
new file mode 100644
index 00000000..4cca63de
--- /dev/null
+++ b/test_plans/perf_multiprocess_test_plan.rst
@@ -0,0 +1,194 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(c) 2010-2017 Intel Corporation
+
+=======================================
+Sample Application Tests: Multi-Process
+=======================================
+
+Simple MP Application Test
+==========================
+
+Description
+-----------
+
+This test is a basic multi-process test which demonstrates the basics of sharing
+information between DPDK processes. The same application binary is run
+twice - once as a primary instance, and once as a secondary instance. Messages
+are sent from primary to secondary and vice versa, demonstrating that the processes
+are sharing memory and can communicate using rte_ring structures.
+
+Prerequisites
+-------------
+
+If using vfio, the kernel must be version 3.6 or newer and VT-d must be enabled
+in the BIOS. When using vfio, use the following commands to load the vfio driver
+and bind it to the device under test::
+
+    modprobe vfio
+    modprobe vfio-pci
+    usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
+
+It is assumed that a DPDK build has been set up and the multi-process sample
+applications have been built.
+
+Symmetric MP Application Test
+=============================
+
+Description
+-----------
+
+This test is a multi-process test which demonstrates how multiple processes can
+work together to perform packet I/O and packet processing in parallel, much as
+other example applications work by using multiple threads. In this example, each
+process reads packets from all network ports being used - though from a different
+RX queue in each case. Those packets are then forwarded by each process which
+sends them out by writing them directly to a suitable TX queue.
+
+Prerequisites
+-------------
+
+It is assumed that an Intel® DPDK build has been set up and the multi-process sample
+applications have been built. It is also assumed that a traffic generator has
+been configured and plugged in to the NIC ports 0 and 1.
+
+Test Methodology
+----------------
+
+As with the simple_mp example, the first instance of the symmetric_mp process
+must be run as the primary instance, though with a number of other application
+specific parameters also provided after the EAL arguments. These additional
+parameters are:
+
+* -p <portmask>, where portmask is a hexadecimal bitmask of which ports on the
+  system are to be used. For example: -p 3 to use ports 0 and 1 only.
+* --num-procs <N>, where N is the total number of symmetric_mp instances that
+  will be run side-by-side to perform packet processing. This parameter is used to
+  configure the appropriate number of receive queues on each network port.
+* --proc-id <n>, where n is a numeric value in the range 0 <= n < N (number of
+  processes, specified above). This identifies which symmetric_mp instance is being
+  run, so that each process can read a unique receive queue on each network port.
+
+The secondary symmetric_mp instances must also have these parameters specified,
+and the first two must be the same as those passed to the primary instance, or
+errors will result.
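+
+A minimal sketch of the queue layout these parameters imply (plain Python for
+illustration only; the variable names are not taken from the sample code)::
+
+    num_procs = 4                      # --num-procs
+    ports = [0, 1]                     # -p 3
+    for proc_id in range(num_procs):   # one symmetric_mp instance each
+        # Instance proc_id polls RX queue proc_id on every port, so no two
+        # instances ever share a receive queue.
+        rx_queues = [(port, proc_id) for port in ports]
+        print("instance %d reads RX queues %s" % (proc_id, rx_queues))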
+
+For example, to run a set of four symmetric_mp instances, running on lcores 1-4, all
+performing level-2 forwarding of packets between ports 0 and 1, the following
+commands can be used (assuming run as root)::
+
+   ./x86_64-native-linuxapp-gcc/examples/dpdk-symmetric_mp -c 2 --proc-type=auto -- -p 3 --num-procs=4 --proc-id=0
+   ./x86_64-native-linuxapp-gcc/examples/dpdk-symmetric_mp -c 4 --proc-type=auto -- -p 3 --num-procs=4 --proc-id=1
+   ./x86_64-native-linuxapp-gcc/examples/dpdk-symmetric_mp -c 8 --proc-type=auto -- -p 3 --num-procs=4 --proc-id=2
+   ./x86_64-native-linuxapp-gcc/examples/dpdk-symmetric_mp -c 10 --proc-type=auto -- -p 3 --num-procs=4 --proc-id=3
+
+To run only 1 or 2 instances, the parameters to the instances being run should
+remain the same, except for the ``num-procs`` value, which should be adjusted
+appropriately.
+
+
+Test Case: Performance Tests
+----------------------------
+
+Run the multiprocess application using standard IP traffic - varying source
+and destination address information to allow RSS to evenly distribute packets
+among RX queues. Record traffic throughput results in the table below.
+
++-------------------+-----+-----+-----+-----+-----+-----+
+| Num-procs         |  1  |  2  |  2  |  4  |  4  |  8  |
++-------------------+-----+-----+-----+-----+-----+-----+
+| Cores/Threads     | 1/1 | 1/2 | 2/1 | 2/2 | 4/1 | 4/2 |
++-------------------+-----+-----+-----+-----+-----+-----+
+| Num Ports         |  2  |  2  |  2  |  2  |  2  |  2  |
++-------------------+-----+-----+-----+-----+-----+-----+
+| Packet Size       |  64 |  64 |  64 |  64 |  64 |  64 |
++-------------------+-----+-----+-----+-----+-----+-----+
+| %-age Line Rate   |  X  |  X  |  X  |  X  |  X  |  X  |
++-------------------+-----+-----+-----+-----+-----+-----+
+| Packet Rate(mpps) |  X  |  X  |  X  |  X  |  X  |  X  |
++-------------------+-----+-----+-----+-----+-----+-----+
+
+Client Server Multiprocess Tests
+================================
+
+Description
+-----------
+
+The client-server sample application demonstrates the ability of Intel® DPDK
+to use multiple processes, in which a server process performs packet I/O and one
+or multiple client processes perform packet processing. The server process
+controls load balancing on the traffic received from a number of input ports to
+a user-specified number of clients. The client processes forward the received
+traffic, outputting the packets directly by writing them to the TX rings of the
+outgoing ports.
+
+Prerequisites
+-------------
+
+It is assumed that an Intel® DPDK build has been set up and the multi-process
+sample application has been built. It is also assumed that a traffic generator
+is connected to ports 0 and 1.
+
+It is important to run the server application before the client application,
+as the server application manages both the NIC ports with packet transmission
+and reception, as well as shared memory areas and client queues.
+
+Run the Server Application (a short sketch of how the bitmask values used
+below are derived follows this list):
+
+- Provide the core mask on which the server process is to run using -c, e.g. -c 3 (bitmask number).
+- Set the number of ports to be engaged using -p, e.g. -p 3 refers to ports 0 & 1.
+- Define the maximum number of clients using -n, e.g. -n 8.
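+
+The -c and -p values in the examples that follow are plain bitmasks: bit n is
+set for every lcore or port n that is enabled. A minimal sketch (the
+``coremask`` helper is hypothetical, for illustration only)::
+
+    # Build a bitmask from a list of lcore (or port) numbers.
+    def coremask(items):
+        mask = 0
+        for n in items:
+            mask |= 1 << n
+        return hex(mask)
+
+    print(coremask([0, 1]))           # 0x3  - "-p 3", ports 0 and 1
+    print(coremask([1]))              # 0x2  - "-c 2", server on lcore 1
+    for i in range(8):                # eight clients on every second lcore,
+        print(coremask([6 + 2 * i]))  # starting at 6: 0x40, 0x100, 0x400, ...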
+
+The command line below is an example of how to start the server process on
+logical core 1 (coremask 0x2) to handle a maximum of 8 client processes
+configured to run on socket 0 to handle traffic from NIC ports 0 and 1::
+
+    root@host:mp_server# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_server -c 2 -- -p 3 -n 8
+
+NOTE: If an additional second core is given in the coremask to the server process,
+that second core will be used to print statistics. When benchmarking, only a
+single lcore is needed for the server process.
+
+Run the Client application:
+
+- In another terminal run the client application.
+- Give each client a distinct core mask with -c.
+- Give each client a unique client-id with -n.
+
+Example commands to run 8 client processes are as follows::
+
+    root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 40 --proc-type=secondary -- -n 0 &
+    root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 100 --proc-type=secondary -- -n 1 &
+    root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 400 --proc-type=secondary -- -n 2 &
+    root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 1000 --proc-type=secondary -- -n 3 &
+    root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 4000 --proc-type=secondary -- -n 4 &
+    root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 10000 --proc-type=secondary -- -n 5 &
+    root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 40000 --proc-type=secondary -- -n 6 &
+    root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 100000 --proc-type=secondary -- -n 7 &
+
+Test Case: Performance Measurement
+----------------------------------
+
+- On the traffic generator set up a traffic flow in both directions specifying
+  IP traffic.
+- Run the server and client applications as above.
+- Start the traffic and record the throughput for transmitted and received packets.
+
+An example set of results is shown below.
+
++----------------------+-----+-----+-----+-----+-----+-----+
+| Server threads       |  1  |  1  |  1  |  1  |  1  |  1  |
++----------------------+-----+-----+-----+-----+-----+-----+
+| Server Cores/Threads | 1/1 | 1/1 | 1/1 | 1/1 | 1/1 | 1/1 |
++----------------------+-----+-----+-----+-----+-----+-----+
+| Num-clients          |  1  |  2  |  2  |  4  |  4  |  8  |
++----------------------+-----+-----+-----+-----+-----+-----+
+| Client Cores/Threads | 1/1 | 1/2 | 2/1 | 2/2 | 4/1 | 4/2 |
++----------------------+-----+-----+-----+-----+-----+-----+
+| Num Ports            |  2  |  2  |  2  |  2  |  2  |  2  |
++----------------------+-----+-----+-----+-----+-----+-----+
+| Packet Size          |  64 |  64 |  64 |  64 |  64 |  64 |
++----------------------+-----+-----+-----+-----+-----+-----+
+| %-age Line Rate      |  X  |  X  |  X  |  X  |  X  |  X  |
++----------------------+-----+-----+-----+-----+-----+-----+
+| Packet Rate(mpps)    |  X  |  X  |  X  |  X  |  X  |  X  |
++----------------------+-----+-----+-----+-----+-----+-----+
diff --git a/tests/TestSuite_multiprocess.py b/tests/TestSuite_multiprocess.py
index da382a41..ed0933b6 100644
--- a/tests/TestSuite_multiprocess.py
+++ b/tests/TestSuite_multiprocess.py
@@ -1689,216 +1689,6 @@ class TestMultiprocess(TestCase):
         }
         self.rte_flow(mac_ipv4_symmetric, self.multiprocess_rss_data, **pmd_param)
 
-    def test_perf_multiprocess_performance(self):
-        """
-        Benchmark Multiprocess performance.
- #""" - packet_count = 16 - self.dut.send_expect("fg", "# ") - txPort = self.tester.get_local_port(self.dut_ports[0]) - rxPort = self.tester.get_local_port(self.dut_ports[1]) - mac = self.tester.get_mac(txPort) - dmac = self.dut.get_mac_address(self.dut_ports[0]) - tgenInput = [] - - # create mutative src_ip+dst_ip package - for i in range(packet_count): - package = ( - r'flows = [Ether(src="%s", dst="%s")/IP(src="192.168.1.%d", dst="192.168.1.%d")/("X"*26)]' - % (mac, dmac, i + 1, i + 2) - ) - self.tester.scapy_append(package) - pcap = os.sep.join([self.output_path, "test_%d.pcap" % i]) - self.tester.scapy_append('wrpcap("%s", flows)' % pcap) - tgenInput.append([txPort, rxPort, pcap]) - self.tester.scapy_execute() - - # run multiple symmetric_mp process - validExecutions = [] - for execution in executions: - if len(self.dut.get_core_list(execution["cores"])) == execution["nprocs"]: - validExecutions.append(execution) - - portMask = utils.create_mask(self.dut_ports) - - for n in range(len(validExecutions)): - execution = validExecutions[n] - # get coreList form execution['cores'] - coreList = self.dut.get_core_list(execution["cores"], socket=self.socket) - # to run a set of symmetric_mp instances, like test plan - dutSessionList = [] - for index in range(len(coreList)): - dut_new_session = self.dut.new_session() - dutSessionList.append(dut_new_session) - # add -a option when tester and dut in same server - dut_new_session.send_expect( - self.app_symmetric_mp - + " -c %s --proc-type=auto %s -- -p %s --num-procs=%d --proc-id=%d" - % ( - utils.create_mask([coreList[index]]), - self.eal_param, - portMask, - execution["nprocs"], - index, - ), - "Finished Process Init", - ) - - # clear streams before add new streams - self.tester.pktgen.clear_streams() - # run packet generator - streams = self.pktgen_helper.prepare_stream_from_tginput( - tgenInput, 100, None, self.tester.pktgen - ) - _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams) - - execution["pps"] = pps - - # close all symmetric_mp process - self.dut.send_expect("killall symmetric_mp", "# ") - # close all dut sessions - for dut_session in dutSessionList: - self.dut.close_session(dut_session) - - # get rate and mpps data - for n in range(len(executions)): - self.verify(executions[n]["pps"] is not 0, "No traffic detected") - self.result_table_create( - [ - "Num-procs", - "Sockets/Cores/Threads", - "Num Ports", - "Frame Size", - "%-age Line Rate", - "Packet Rate(mpps)", - ] - ) - - for execution in validExecutions: - self.result_table_add( - [ - execution["nprocs"], - execution["cores"], - 2, - 64, - execution["pps"] / float(100000000 / (8 * 84)), - execution["pps"] / float(1000000), - ] - ) - - self.result_table_print() - - def test_perf_multiprocess_client_serverperformance(self): - """ - Benchmark Multiprocess client-server performance. 
- """ - self.dut.kill_all() - self.dut.send_expect("fg", "# ") - txPort = self.tester.get_local_port(self.dut_ports[0]) - rxPort = self.tester.get_local_port(self.dut_ports[1]) - mac = self.tester.get_mac(txPort) - - self.tester.scapy_append( - 'dmac="%s"' % self.dut.get_mac_address(self.dut_ports[0]) - ) - self.tester.scapy_append('smac="%s"' % mac) - self.tester.scapy_append( - 'flows = [Ether(src=smac, dst=dmac)/IP(src="192.168.1.1", dst="192.168.1.1")/("X"*26)]' - ) - - pcap = os.sep.join([self.output_path, "test.pcap"]) - self.tester.scapy_append('wrpcap("%s", flows)' % pcap) - self.tester.scapy_execute() - - validExecutions = [] - for execution in executions: - if len(self.dut.get_core_list(execution["cores"])) == execution["nprocs"]: - validExecutions.append(execution) - - for execution in validExecutions: - coreList = self.dut.get_core_list(execution["cores"], socket=self.socket) - # get core with socket parameter to specified which core dut used when tester and dut in same server - coreMask = utils.create_mask( - self.dut.get_core_list("1S/1C/1T", socket=self.socket) - ) - portMask = utils.create_mask(self.dut_ports) - # specified mp_server core and add -a option when tester and dut in same server - self.dut.send_expect( - self.app_mp_server - + " -n %d -c %s %s -- -p %s -n %d" - % ( - self.dut.get_memory_channels(), - coreMask, - self.eal_param, - portMask, - execution["nprocs"], - ), - "Finished Process Init", - 20, - ) - self.dut.send_expect("^Z", "\r\n") - self.dut.send_expect("bg", "# ") - - for n in range(execution["nprocs"]): - time.sleep(5) - # use next core as mp_client core, different from mp_server - coreMask = utils.create_mask([str(int(coreList[n]) + 1)]) - self.dut.send_expect( - self.app_mp_client - + " -n %d -c %s --proc-type=secondary %s -- -n %d" - % (self.dut.get_memory_channels(), coreMask, self.eal_param, n), - "Finished Process Init", - ) - self.dut.send_expect("^Z", "\r\n") - self.dut.send_expect("bg", "# ") - - tgenInput = [] - tgenInput.append([txPort, rxPort, pcap]) - - # clear streams before add new streams - self.tester.pktgen.clear_streams() - # run packet generator - streams = self.pktgen_helper.prepare_stream_from_tginput( - tgenInput, 100, None, self.tester.pktgen - ) - _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams) - - execution["pps"] = pps - self.dut.kill_all() - time.sleep(5) - - for n in range(len(executions)): - self.verify(executions[n]["pps"] is not 0, "No traffic detected") - - self.result_table_create( - [ - "Server threads", - "Server Cores/Threads", - "Num-procs", - "Sockets/Cores/Threads", - "Num Ports", - "Frame Size", - "%-age Line Rate", - "Packet Rate(mpps)", - ] - ) - - for execution in validExecutions: - self.result_table_add( - [ - 1, - "1S/1C/1T", - execution["nprocs"], - execution["cores"], - 2, - 64, - execution["pps"] / float(100000000 / (8 * 84)), - execution["pps"] / float(1000000), - ] - ) - - self.result_table_print() - def set_fields(self): """set ip protocol field behavior""" fields_config = { diff --git a/tests/TestSuite_perf_multiprocess.py b/tests/TestSuite_perf_multiprocess.py new file mode 100644 index 00000000..d03bb2f6 --- /dev/null +++ b/tests/TestSuite_perf_multiprocess.py @@ -0,0 +1,333 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2010-2014 Intel Corporation +# + +""" +DPDK Test suite. +Multi-process Test. 
+""" + +import copy +import os +import random +import re +import time +import traceback +from collections import OrderedDict + +import framework.utils as utils +from framework.exception import VerifyFailure +from framework.packet import Packet +from framework.pktgen import PacketGeneratorHelper +from framework.pmd_output import PmdOutput +from framework.test_case import TestCase, check_supported_nic +from framework.utils import GREEN, RED + +from .rte_flow_common import FdirProcessing as fdirprocess +from .rte_flow_common import RssProcessing as rssprocess + +executions = [] + + +class TestMultiprocess(TestCase): + + support_nic = ["ICE_100G-E810C_QSFP", "ICE_25G-E810C_SFP", "ICE_25G-E810_XXV_SFP"] + + def set_up_all(self): + """ + Run at the start of each test suite. + + Multiprocess prerequisites. + Requirements: + OS is not freeBSD + DUT core number >= 4 + multi_process build pass + """ + # self.verify('bsdapp' not in self.target, "Multiprocess not support freebsd") + + self.verify(len(self.dut.get_all_cores()) >= 4, "Not enough Cores") + self.pkt = Packet() + self.dut_ports = self.dut.get_ports() + self.socket = self.dut.get_numa_id(self.dut_ports[0]) + extra_option = "-Dexamples='multi_process/client_server_mp/mp_server,multi_process/client_server_mp/mp_client,multi_process/simple_mp,multi_process/symmetric_mp'" + self.dut.build_install_dpdk(target=self.target, extra_options=extra_option) + self.app_mp_client = self.dut.apps_name["mp_client"] + self.app_mp_server = self.dut.apps_name["mp_server"] + self.app_simple_mp = self.dut.apps_name["simple_mp"] + self.app_symmetric_mp = self.dut.apps_name["symmetric_mp"] + + executions.append({"nprocs": 1, "cores": "1S/1C/1T", "pps": 0}) + executions.append({"nprocs": 2, "cores": "1S/1C/2T", "pps": 0}) + executions.append({"nprocs": 2, "cores": "1S/2C/1T", "pps": 0}) + executions.append({"nprocs": 4, "cores": "1S/2C/2T", "pps": 0}) + executions.append({"nprocs": 4, "cores": "1S/4C/1T", "pps": 0}) + executions.append({"nprocs": 8, "cores": "1S/4C/2T", "pps": 0}) + + self.eal_param = "" + for i in self.dut_ports: + self.eal_param += " -a %s" % self.dut.ports_info[i]["pci"] + + self.eal_para = self.dut.create_eal_parameters(cores="1S/2C/1T") + # start new session to run secondary + self.session_secondary = self.dut.new_session() + + # get dts output path + if self.logger.log_path.startswith(os.sep): + self.output_path = self.logger.log_path + else: + cur_path = os.path.dirname(os.path.dirname(os.path.realpath(__file__))) + self.output_path = os.sep.join([cur_path, self.logger.log_path]) + # create an instance to set stream field setting + self.pktgen_helper = PacketGeneratorHelper() + self.dport_info0 = self.dut.ports_info[self.dut_ports[0]] + self.pci0 = self.dport_info0["pci"] + self.tester_ifaces = [ + self.tester.get_interface(self.dut.ports_map[port]) + for port in self.dut_ports + ] + rxq = 1 + self.session_list = [] + self.logfmt = "*" * 20 + + def set_up(self): + """ + Run before each test case. + """ + pass + + def test_perf_multiprocess_performance(self): + """ + Benchmark Multiprocess performance. 
+ #""" + packet_count = 16 + self.dut.send_expect("fg", "# ") + txPort = self.tester.get_local_port(self.dut_ports[0]) + rxPort = self.tester.get_local_port(self.dut_ports[1]) + mac = self.tester.get_mac(txPort) + dmac = self.dut.get_mac_address(self.dut_ports[0]) + tgenInput = [] + + # create mutative src_ip+dst_ip package + for i in range(packet_count): + package = ( + r'flows = [Ether(src="%s", dst="%s")/IP(src="192.168.1.%d", dst="192.168.1.%d")/("X"*26)]' + % (mac, dmac, i + 1, i + 2) + ) + self.tester.scapy_append(package) + pcap = os.sep.join([self.output_path, "test_%d.pcap" % i]) + self.tester.scapy_append('wrpcap("%s", flows)' % pcap) + tgenInput.append([txPort, rxPort, pcap]) + self.tester.scapy_execute() + + # run multiple symmetric_mp process + validExecutions = [] + for execution in executions: + if len(self.dut.get_core_list(execution["cores"])) == execution["nprocs"]: + validExecutions.append(execution) + + portMask = utils.create_mask(self.dut_ports) + + for n in range(len(validExecutions)): + execution = validExecutions[n] + # get coreList form execution['cores'] + coreList = self.dut.get_core_list(execution["cores"], socket=self.socket) + # to run a set of symmetric_mp instances, like test plan + dutSessionList = [] + for index in range(len(coreList)): + dut_new_session = self.dut.new_session() + dutSessionList.append(dut_new_session) + # add -a option when tester and dut in same server + dut_new_session.send_expect( + self.app_symmetric_mp + + " -c %s --proc-type=auto %s -- -p %s --num-procs=%d --proc-id=%d" + % ( + utils.create_mask([coreList[index]]), + self.eal_param, + portMask, + execution["nprocs"], + index, + ), + "Finished Process Init", + ) + + # clear streams before add new streams + self.tester.pktgen.clear_streams() + # run packet generator + streams = self.pktgen_helper.prepare_stream_from_tginput( + tgenInput, 100, None, self.tester.pktgen + ) + _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams) + + execution["pps"] = pps + + # close all symmetric_mp process + self.dut.send_expect("killall symmetric_mp", "# ") + # close all dut sessions + for dut_session in dutSessionList: + self.dut.close_session(dut_session) + + # get rate and mpps data + for n in range(len(executions)): + self.verify(executions[n]["pps"] is not 0, "No traffic detected") + self.result_table_create( + [ + "Num-procs", + "Sockets/Cores/Threads", + "Num Ports", + "Frame Size", + "%-age Line Rate", + "Packet Rate(mpps)", + ] + ) + + for execution in validExecutions: + self.result_table_add( + [ + execution["nprocs"], + execution["cores"], + 2, + 64, + execution["pps"] / float(100000000 / (8 * 84)), + execution["pps"] / float(1000000), + ] + ) + + self.result_table_print() + + def test_perf_multiprocess_client_serverperformance(self): + """ + Benchmark Multiprocess client-server performance. 
+ """ + self.dut.kill_all() + self.dut.send_expect("fg", "# ") + txPort = self.tester.get_local_port(self.dut_ports[0]) + rxPort = self.tester.get_local_port(self.dut_ports[1]) + mac = self.tester.get_mac(txPort) + + self.tester.scapy_append( + 'dmac="%s"' % self.dut.get_mac_address(self.dut_ports[0]) + ) + self.tester.scapy_append('smac="%s"' % mac) + self.tester.scapy_append( + 'flows = [Ether(src=smac, dst=dmac)/IP(src="192.168.1.1", dst="192.168.1.1")/("X"*26)]' + ) + + pcap = os.sep.join([self.output_path, "test.pcap"]) + self.tester.scapy_append('wrpcap("%s", flows)' % pcap) + self.tester.scapy_execute() + + validExecutions = [] + for execution in executions: + if len(self.dut.get_core_list(execution["cores"])) == execution["nprocs"]: + validExecutions.append(execution) + + for execution in validExecutions: + coreList = self.dut.get_core_list(execution["cores"], socket=self.socket) + # get core with socket parameter to specified which core dut used when tester and dut in same server + coreMask = utils.create_mask( + self.dut.get_core_list("1S/1C/1T", socket=self.socket) + ) + portMask = utils.create_mask(self.dut_ports) + # specified mp_server core and add -a option when tester and dut in same server + self.dut.send_expect( + self.app_mp_server + + " -n %d -c %s %s -- -p %s -n %d" + % ( + self.dut.get_memory_channels(), + coreMask, + self.eal_param, + portMask, + execution["nprocs"], + ), + "Finished Process Init", + 20, + ) + self.dut.send_expect("^Z", "\r\n") + self.dut.send_expect("bg", "# ") + + for n in range(execution["nprocs"]): + time.sleep(5) + # use next core as mp_client core, different from mp_server + coreMask = utils.create_mask([str(int(coreList[n]) + 1)]) + self.dut.send_expect( + self.app_mp_client + + " -n %d -c %s --proc-type=secondary %s -- -n %d" + % (self.dut.get_memory_channels(), coreMask, self.eal_param, n), + "Finished Process Init", + ) + self.dut.send_expect("^Z", "\r\n") + self.dut.send_expect("bg", "# ") + + tgenInput = [] + tgenInput.append([txPort, rxPort, pcap]) + + # clear streams before add new streams + self.tester.pktgen.clear_streams() + # run packet generator + streams = self.pktgen_helper.prepare_stream_from_tginput( + tgenInput, 100, None, self.tester.pktgen + ) + _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams) + + execution["pps"] = pps + self.dut.kill_all() + time.sleep(5) + + for n in range(len(executions)): + self.verify(executions[n]["pps"] is not 0, "No traffic detected") + + self.result_table_create( + [ + "Server threads", + "Server Cores/Threads", + "Num-procs", + "Sockets/Cores/Threads", + "Num Ports", + "Frame Size", + "%-age Line Rate", + "Packet Rate(mpps)", + ] + ) + + for execution in validExecutions: + self.result_table_add( + [ + 1, + "1S/1C/1T", + execution["nprocs"], + execution["cores"], + 2, + 64, + execution["pps"] / float(100000000 / (8 * 84)), + execution["pps"] / float(1000000), + ] + ) + + self.result_table_print() + + def set_fields(self): + """set ip protocol field behavior""" + fields_config = { + "ip": { + "src": {"range": 64, "action": "inc"}, + "dst": {"range": 64, "action": "inc"}, + }, + } + + return fields_config + + def tear_down(self): + """ + Run after each test case. + """ + if self.session_list: + for sess in self.session_list: + self.dut.close_session(sess) + self.dut.kill_all() + + def tear_down_all(self): + """ + Run after each test suite. + """ + self.dut.kill_all() + pass