[V1,7/7] tests/multiprocess:Separated performance cases

Message ID 20220922142950.398902-7-hongbox.li@intel.com (mailing list archive)
State Superseded
Series [V1,1/7] tests/efd:Separated performance cases

Checks

Context Check Description
ci/Intel-dts-suite-test fail Apply issues

Commit Message

Li, HongboX Sept. 22, 2022, 2:29 p.m. UTC
  Separated performance cases

Signed-off-by: Hongbo Li <hongbox.li@intel.com>
---
 test_plans/index.rst                       |   7 +
 test_plans/multiprocess_test_plan.rst      |  48 ---
 test_plans/perf_multiprocess_test_plan.rst | 194 ++++++++++++
 tests/TestSuite_multiprocess.py            | 210 -------------
 tests/TestSuite_perf_multiprocess.py       | 333 +++++++++++++++++++++
 5 files changed, 534 insertions(+), 258 deletions(-)
 create mode 100644 test_plans/perf_multiprocess_test_plan.rst
 create mode 100644 tests/TestSuite_perf_multiprocess.py
  

Patch

diff --git a/test_plans/index.rst b/test_plans/index.rst
index 8e2634bd..a834d767 100644
--- a/test_plans/index.rst
+++ b/test_plans/index.rst
@@ -108,6 +108,13 @@  The following are the test plans for the DPDK DTS automated test system.
     ntb_test_plan
     nvgre_test_plan
     perf_virtio_user_loopback_test_plan
+    perf_efd_test_plan
+    perf_ipfrag_test_plan
+    perf_kni_test_plan
+    perf_l2fwd_test_plan
+    perf_multiprocess_test_plan
+    perf_tso_test_plan
+    perf_vxlan_test_plan
     pf_smoke_test_plan
     pipeline_test_plan
     pvp_virtio_user_multi_queues_port_restart_test_plan
diff --git a/test_plans/multiprocess_test_plan.rst b/test_plans/multiprocess_test_plan.rst
index bfef1ca9..699938ed 100644
--- a/test_plans/multiprocess_test_plan.rst
+++ b/test_plans/multiprocess_test_plan.rst
@@ -196,26 +196,6 @@  run should remain the same, except for the ``num-procs`` value, which should be
 adjusted appropriately.
 
 
-Test Case: Performance Tests
-----------------------------
-
-Run the multiprocess application using standard IP traffic - varying source
-and destination address information to allow RSS to evenly distribute packets
-among RX queues. Record traffic throughput results as below.
-
-+-------------------+-----+-----+-----+-----+-----+-----+
-| Num-procs         |  1  |  2  |  2  |  4  |  4  |  8  |
-+-------------------+-----+-----+-----+-----+-----+-----+
-| Cores/Threads     | 1/1 | 1/2 | 2/1 | 2/2 | 4/1 | 4/2 |
-+-------------------+-----+-----+-----+-----+-----+-----+
-| Num Ports         |  2  |  2  |  2  |  2  |  2  |  2  |
-+-------------------+-----+-----+-----+-----+-----+-----+
-| Packet Size       |  64 |  64 |  64 |  64 |  64 |  64 |
-+-------------------+-----+-----+-----+-----+-----+-----+
-| %-age Line Rate   |  X  |  X  |  X  |  X  |  X  |  X  |
-+-------------------+-----+-----+-----+-----+-----+-----+
-| Packet Rate(mpps) |  X  |  X  |  X  |  X  |  X  |  X  |
-+-------------------+-----+-----+-----+-----+-----+-----+
 
 Test Case: Function Tests
 -------------------------
@@ -294,34 +274,6 @@  An example commands to run 8 client processes is as follows::
    root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 40000 --proc-type=secondary -- -n 6 &
    root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 100000 --proc-type=secondary -- -n 7 &
 
-Test Case: Performance Measurement
-----------------------------------
-
-- On the traffic generator set up a traffic flow in both directions specifying
-  IP traffic.
-- Run the server and client applications as above.
-- Start the traffic and record the throughput for transmitted and received packets.
-
-An example set of results is shown below.
-
-+----------------------+-----+-----+-----+-----+-----+-----+
-| Server threads       |  1  |  1  |  1  |  1  |  1  |  1  |
-+----------------------+-----+-----+-----+-----+-----+-----+
-| Server Cores/Threads | 1/1 | 1/1 | 1/1 | 1/1 | 1/1 | 1/1 |
-+----------------------+-----+-----+-----+-----+-----+-----+
-| Num-clients          |  1  |  2  |  2  |  4  |  4  |  8  |
-+----------------------+-----+-----+-----+-----+-----+-----+
-| Client Cores/Threads | 1/1 | 1/2 | 2/1 | 2/2 | 4/1 | 4/2 |
-+----------------------+-----+-----+-----+-----+-----+-----+
-| Num Ports            |  2  |  2  |  2  |  2  |  2  |  2  |
-+----------------------+-----+-----+-----+-----+-----+-----+
-| Packet Size          |  64 |  64 |  64 |  64 |  64 |  64 |
-+----------------------+-----+-----+-----+-----+-----+-----+
-| %-age Line Rate      |  X  |  X  |  X  |  X  |  X  |  X  |
-+----------------------+-----+-----+-----+-----+-----+-----+
-| Packet Rate(mpps)    |  X  |  X  |  X  |  X  |  X  |  X  |
-+----------------------+-----+-----+-----+-----+-----+-----+
-
 Test Case: Function Tests
 -------------------------
 start server process and 2 client process, send some packets, the number of packets is a random value between 20 and 256.
diff --git a/test_plans/perf_multiprocess_test_plan.rst b/test_plans/perf_multiprocess_test_plan.rst
new file mode 100644
index 00000000..4cca63de
--- /dev/null
+++ b/test_plans/perf_multiprocess_test_plan.rst
@@ -0,0 +1,194 @@ 
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(c) 2010-2017 Intel Corporation
+
+=======================================
+Sample Application Tests: Multi-Process
+=======================================
+
+Simple MP Application Test
+==========================
+
+Description
+-----------
+
+This is a basic multi-process test which demonstrates the basics of sharing
+information between DPDK processes. The same application binary is run
+twice - once as a primary instance and once as a secondary instance. Messages
+are sent from the primary to the secondary and vice versa, demonstrating that
+the processes share memory and can communicate using rte_ring structures.
+
+Prerequisites
+-------------
+
+If using vfio, the kernel must be >= 3.6 and VT-d must be enabled in the BIOS.
+When using vfio, use the following commands to load the vfio driver and bind
+it to the device under test::
+
+   modprobe vfio
+   modprobe vfio-pci
+   usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
+
+It is assumed that a DPDK build has been set up and the multi-process sample
+applications have been built.
+
+Symmetric MP Application Test
+=============================
+
+Description
+-----------
+
+This is a multi-process test which demonstrates how multiple processes can
+work together to perform packet I/O and packet processing in parallel, much as
+other example applications work by using multiple threads. In this example,
+each process reads packets from all network ports being used - though from a
+different RX queue in each case. Those packets are then forwarded by each
+process, which sends them out by writing them directly to a suitable TX queue.
+
+Prerequisites
+-------------
+
+It is assumed that an Intel® DPDK build has been set up and the multi-process
+sample applications have been built. It is also assumed that a traffic
+generator has been configured and plugged into NIC ports 0 and 1.
+
+Test Methodology
+----------------
+
+As with the simple_mp example, the first instance of the symmetric_mp process
+must be run as the primary instance, though with a number of other application
+specific parameters also provided after the EAL arguments. These additional
+parameters are:
+
+* -p <portmask>, where portmask is a hexadecimal bitmask of what ports on the
+  system are to be used. For example: -p 3 to use ports 0 and 1 only.
+* --num-procs <N>, where N is the total number of symmetric_mp instances that
+  will be run side-by-side to perform packet processing. This parameter is used to
+  configure the appropriate number of receive queues on each network port.
+* --proc-id <n>, where n is a numeric value in the range 0 <= n < N (number of
+  processes, specified above). This identifies which symmetric_mp instance is being
+  run, so that each process can read a unique receive queue on each network port.
+
+The secondary symmetric_mp instances must also have these parameters specified,
+and the first two must be the same as those passed to the primary instance, or errors
+result.
+
+For example, to run a set of four symmetric_mp instances, running on lcores 1-4, all
+performing level-2 forwarding of packets between ports 0 and 1, the following
+commands can be used (assuming run as root)::
+
+   ./x86_64-native-linuxapp-gcc/examples/dpdk-symmetric_mp -c 2 --proc-type=auto -- -p 3 --num-procs=4 --proc-id=0
+   ./x86_64-native-linuxapp-gcc/examples/dpdk-symmetric_mp -c 4 --proc-type=auto -- -p 3 --num-procs=4 --proc-id=1
+   ./x86_64-native-linuxapp-gcc/examples/dpdk-symmetric_mp -c 8 --proc-type=auto -- -p 3 --num-procs=4 --proc-id=2
+   ./x86_64-native-linuxapp-gcc/examples/dpdk-symmetric_mp -c 10 --proc-type=auto -- -p 3 --num-procs=4 --proc-id=3
+
+To run only 1 or 2 instances, the above parameters to the 1 or 2 instances being
+run should remain the same, except for the ``num-procs`` value, which should be
+adjusted appropriately.
+
+
+Test Case: Performance Tests
+----------------------------
+
+Run the multiprocess application using standard IP traffic - varying source
+and destination address information to allow RSS to evenly distribute packets
+among RX queues. Record traffic throughput results as below.
+
++-------------------+-----+-----+-----+-----+-----+-----+
+| Num-procs         |  1  |  2  |  2  |  4  |  4  |  8  |
++-------------------+-----+-----+-----+-----+-----+-----+
+| Cores/Threads     | 1/1 | 1/2 | 2/1 | 2/2 | 4/1 | 4/2 |
++-------------------+-----+-----+-----+-----+-----+-----+
+| Num Ports         |  2  |  2  |  2  |  2  |  2  |  2  |
++-------------------+-----+-----+-----+-----+-----+-----+
+| Packet Size       |  64 |  64 |  64 |  64 |  64 |  64 |
++-------------------+-----+-----+-----+-----+-----+-----+
+| %-age Line Rate   |  X  |  X  |  X  |  X  |  X  |  X  |
++-------------------+-----+-----+-----+-----+-----+-----+
+| Packet Rate(mpps) |  X  |  X  |  X  |  X  |  X  |  X  |
++-------------------+-----+-----+-----+-----+-----+-----+
+
+Client Server Multiprocess Tests
+================================
+
+Description
+-----------
+
+The client-server sample application demonstrates the ability of Intel® DPDK
+to use multiple processes in which a server process performs packet I/O and one
+or multiple client processes perform packet processing. The server process
+controls load balancing on the traffic received from a number of input ports to
+a user-specified number of clients. The client processes forward the received
+traffic, outputting the packets directly by writing them to the TX rings of the
+outgoing ports.
+
+Prerequisites
+-------------
+
+It is assumed that an Intel® DPDK build has been set up and the multi-process
+sample application has been built. It is also assumed that a traffic generator
+is connected to ports 0 and 1.
+
+It is important to run the server application before the client application,
+as the server application manages the NIC ports (packet transmission and
+reception) as well as the shared memory areas and client queues.
+
+Run the Server Application:
+
+- Provide the core mask on which the server process is to run using -c, e.g. -c 3 (bitmask number).
+- Set the number of ports to be engaged using -p, e.g. -p 3 refers to ports 0 & 1.
+- Define the maximum number of clients using -n, e.g. -n 8.
+
+The command line below is an example of how to start the server process on
+logical core 2 to handle a maximum of 8 client processes configured to
+run on socket 0, handling traffic from NIC ports 0 and 1::
+
+    root@host:mp_server# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_server -c 2 -- -p 3 -n 8
+
+NOTE: If an additional second core is given in the coremask to the server
+process, that second core will be used to print statistics. When benchmarking,
+only a single lcore is needed for the server process.
+
+Run the Client application:
+
+- In another terminal run the client application.
+- Give each client a distinct core mask with -c.
+- Give each client a unique client-id with -n.
+
+Example commands to run 8 client processes are as follows::
+
+   root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 40 --proc-type=secondary -- -n 0 &
+   root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 100 --proc-type=secondary -- -n 1 &
+   root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 400 --proc-type=secondary -- -n 2 &
+   root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 1000 --proc-type=secondary -- -n 3 &
+   root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 4000 --proc-type=secondary -- -n 4 &
+   root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 10000 --proc-type=secondary -- -n 5 &
+   root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 40000 --proc-type=secondary -- -n 6 &
+   root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 100000 --proc-type=secondary -- -n 7 &
+
+Test Case: Performance Measurement
+----------------------------------
+
+- On the traffic generator set up a traffic flow in both directions specifying
+  IP traffic.
+- Run the server and client applications as above.
+- Start the traffic and record the throughput for transmitted and received packets.
+
+An example set of results is shown below.
+
++----------------------+-----+-----+-----+-----+-----+-----+
+| Server threads       |  1  |  1  |  1  |  1  |  1  |  1  |
++----------------------+-----+-----+-----+-----+-----+-----+
+| Server Cores/Threads | 1/1 | 1/1 | 1/1 | 1/1 | 1/1 | 1/1 |
++----------------------+-----+-----+-----+-----+-----+-----+
+| Num-clients          |  1  |  2  |  2  |  4  |  4  |  8  |
++----------------------+-----+-----+-----+-----+-----+-----+
+| Client Cores/Threads | 1/1 | 1/2 | 2/1 | 2/2 | 4/1 | 4/2 |
++----------------------+-----+-----+-----+-----+-----+-----+
+| Num Ports            |  2  |  2  |  2  |  2  |  2  |  2  |
++----------------------+-----+-----+-----+-----+-----+-----+
+| Packet Size          |  64 |  64 |  64 |  64 |  64 |  64 |
++----------------------+-----+-----+-----+-----+-----+-----+
+| %-age Line Rate      |  X  |  X  |  X  |  X  |  X  |  X  |
++----------------------+-----+-----+-----+-----+-----+-----+
+| Packet Rate(mpps)    |  X  |  X  |  X  |  X  |  X  |  X  |
++----------------------+-----+-----+-----+-----+-----+-----+
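The hexadecimal bitmask arguments used throughout this plan (``-p`` for ports, ``-c`` for lcores) are plain one-bit-per-index masks. A minimal sketch of the arithmetic (the helper name ``make_mask`` is illustrative only; the DTS suites use ``utils.create_mask()`` for the same purpose)::

```python
def make_mask(indices):
    """Build a hex bitmask string with one bit set per port/lcore index."""
    mask = 0
    for i in indices:
        mask |= 1 << i
    return hex(mask)

print(make_mask([0, 1]))  # 0x3      -> "-p 3" selects ports 0 and 1
print(make_mask([2]))     # 0x4      -> "-c 4" pins a process to lcore 2
print(make_mask([20]))    # 0x100000 -> the "-c 100000" of the 8th mp_client
```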
diff --git a/tests/TestSuite_multiprocess.py b/tests/TestSuite_multiprocess.py
index da382a41..ed0933b6 100644
--- a/tests/TestSuite_multiprocess.py
+++ b/tests/TestSuite_multiprocess.py
@@ -1689,216 +1689,6 @@  class TestMultiprocess(TestCase):
         }
         self.rte_flow(mac_ipv4_symmetric, self.multiprocess_rss_data, **pmd_param)
 
-    def test_perf_multiprocess_performance(self):
-        """
-        Benchmark Multiprocess performance.
-        #"""
-        packet_count = 16
-        self.dut.send_expect("fg", "# ")
-        txPort = self.tester.get_local_port(self.dut_ports[0])
-        rxPort = self.tester.get_local_port(self.dut_ports[1])
-        mac = self.tester.get_mac(txPort)
-        dmac = self.dut.get_mac_address(self.dut_ports[0])
-        tgenInput = []
-
-        # create mutative src_ip+dst_ip package
-        for i in range(packet_count):
-            package = (
-                r'flows = [Ether(src="%s", dst="%s")/IP(src="192.168.1.%d", dst="192.168.1.%d")/("X"*26)]'
-                % (mac, dmac, i + 1, i + 2)
-            )
-            self.tester.scapy_append(package)
-            pcap = os.sep.join([self.output_path, "test_%d.pcap" % i])
-            self.tester.scapy_append('wrpcap("%s", flows)' % pcap)
-            tgenInput.append([txPort, rxPort, pcap])
-        self.tester.scapy_execute()
-
-        # run multiple symmetric_mp process
-        validExecutions = []
-        for execution in executions:
-            if len(self.dut.get_core_list(execution["cores"])) == execution["nprocs"]:
-                validExecutions.append(execution)
-
-        portMask = utils.create_mask(self.dut_ports)
-
-        for n in range(len(validExecutions)):
-            execution = validExecutions[n]
-            # get coreList form execution['cores']
-            coreList = self.dut.get_core_list(execution["cores"], socket=self.socket)
-            # to run a set of symmetric_mp instances, like test plan
-            dutSessionList = []
-            for index in range(len(coreList)):
-                dut_new_session = self.dut.new_session()
-                dutSessionList.append(dut_new_session)
-                # add -a option when tester and dut in same server
-                dut_new_session.send_expect(
-                    self.app_symmetric_mp
-                    + " -c %s --proc-type=auto %s -- -p %s --num-procs=%d --proc-id=%d"
-                    % (
-                        utils.create_mask([coreList[index]]),
-                        self.eal_param,
-                        portMask,
-                        execution["nprocs"],
-                        index,
-                    ),
-                    "Finished Process Init",
-                )
-
-            # clear streams before add new streams
-            self.tester.pktgen.clear_streams()
-            # run packet generator
-            streams = self.pktgen_helper.prepare_stream_from_tginput(
-                tgenInput, 100, None, self.tester.pktgen
-            )
-            _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams)
-
-            execution["pps"] = pps
-
-            # close all symmetric_mp process
-            self.dut.send_expect("killall symmetric_mp", "# ")
-            # close all dut sessions
-            for dut_session in dutSessionList:
-                self.dut.close_session(dut_session)
-
-        # get rate and mpps data
-        for n in range(len(executions)):
-            self.verify(executions[n]["pps"] is not 0, "No traffic detected")
-        self.result_table_create(
-            [
-                "Num-procs",
-                "Sockets/Cores/Threads",
-                "Num Ports",
-                "Frame Size",
-                "%-age Line Rate",
-                "Packet Rate(mpps)",
-            ]
-        )
-
-        for execution in validExecutions:
-            self.result_table_add(
-                [
-                    execution["nprocs"],
-                    execution["cores"],
-                    2,
-                    64,
-                    execution["pps"] / float(100000000 / (8 * 84)),
-                    execution["pps"] / float(1000000),
-                ]
-            )
-
-        self.result_table_print()
-
-    def test_perf_multiprocess_client_serverperformance(self):
-        """
-        Benchmark Multiprocess client-server performance.
-        """
-        self.dut.kill_all()
-        self.dut.send_expect("fg", "# ")
-        txPort = self.tester.get_local_port(self.dut_ports[0])
-        rxPort = self.tester.get_local_port(self.dut_ports[1])
-        mac = self.tester.get_mac(txPort)
-
-        self.tester.scapy_append(
-            'dmac="%s"' % self.dut.get_mac_address(self.dut_ports[0])
-        )
-        self.tester.scapy_append('smac="%s"' % mac)
-        self.tester.scapy_append(
-            'flows = [Ether(src=smac, dst=dmac)/IP(src="192.168.1.1", dst="192.168.1.1")/("X"*26)]'
-        )
-
-        pcap = os.sep.join([self.output_path, "test.pcap"])
-        self.tester.scapy_append('wrpcap("%s", flows)' % pcap)
-        self.tester.scapy_execute()
-
-        validExecutions = []
-        for execution in executions:
-            if len(self.dut.get_core_list(execution["cores"])) == execution["nprocs"]:
-                validExecutions.append(execution)
-
-        for execution in validExecutions:
-            coreList = self.dut.get_core_list(execution["cores"], socket=self.socket)
-            # get core with socket parameter to specified which core dut used when tester and dut in same server
-            coreMask = utils.create_mask(
-                self.dut.get_core_list("1S/1C/1T", socket=self.socket)
-            )
-            portMask = utils.create_mask(self.dut_ports)
-            # specified mp_server core and add -a option when tester and dut in same server
-            self.dut.send_expect(
-                self.app_mp_server
-                + " -n %d -c %s %s -- -p %s -n %d"
-                % (
-                    self.dut.get_memory_channels(),
-                    coreMask,
-                    self.eal_param,
-                    portMask,
-                    execution["nprocs"],
-                ),
-                "Finished Process Init",
-                20,
-            )
-            self.dut.send_expect("^Z", "\r\n")
-            self.dut.send_expect("bg", "# ")
-
-            for n in range(execution["nprocs"]):
-                time.sleep(5)
-                # use next core as mp_client core, different from mp_server
-                coreMask = utils.create_mask([str(int(coreList[n]) + 1)])
-                self.dut.send_expect(
-                    self.app_mp_client
-                    + " -n %d -c %s --proc-type=secondary %s -- -n %d"
-                    % (self.dut.get_memory_channels(), coreMask, self.eal_param, n),
-                    "Finished Process Init",
-                )
-                self.dut.send_expect("^Z", "\r\n")
-                self.dut.send_expect("bg", "# ")
-
-            tgenInput = []
-            tgenInput.append([txPort, rxPort, pcap])
-
-            # clear streams before add new streams
-            self.tester.pktgen.clear_streams()
-            # run packet generator
-            streams = self.pktgen_helper.prepare_stream_from_tginput(
-                tgenInput, 100, None, self.tester.pktgen
-            )
-            _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams)
-
-            execution["pps"] = pps
-            self.dut.kill_all()
-            time.sleep(5)
-
-        for n in range(len(executions)):
-            self.verify(executions[n]["pps"] is not 0, "No traffic detected")
-
-        self.result_table_create(
-            [
-                "Server threads",
-                "Server Cores/Threads",
-                "Num-procs",
-                "Sockets/Cores/Threads",
-                "Num Ports",
-                "Frame Size",
-                "%-age Line Rate",
-                "Packet Rate(mpps)",
-            ]
-        )
-
-        for execution in validExecutions:
-            self.result_table_add(
-                [
-                    1,
-                    "1S/1C/1T",
-                    execution["nprocs"],
-                    execution["cores"],
-                    2,
-                    64,
-                    execution["pps"] / float(100000000 / (8 * 84)),
-                    execution["pps"] / float(1000000),
-                ]
-            )
-
-        self.result_table_print()
-
     def set_fields(self):
         """set ip protocol field behavior"""
         fields_config = {
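A small aside on the traffic checks removed above: the original `self.verify(executions[n]["pps"] is not 0, ...)` tests object identity rather than numeric value, which is unreliable for the float pps returned by the generator. A minimal illustration of the difference (plain Python, not DTS code):

```python
# `is not` compares object identity; a measured pps value is a float,
# and a float object is never the same object as the int literal 0,
# so `pps is not 0` is True even when no traffic was received.
pps = 0.0            # what a throughput measurement returns with no traffic
print(pps is not 0)  # True  - the identity check wrongly passes
print(pps != 0)      # False - the value check correctly fails
```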
diff --git a/tests/TestSuite_perf_multiprocess.py b/tests/TestSuite_perf_multiprocess.py
new file mode 100644
index 00000000..d03bb2f6
--- /dev/null
+++ b/tests/TestSuite_perf_multiprocess.py
@@ -0,0 +1,333 @@ 
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+#
+
+"""
+DPDK Test suite.
+Multi-process Test.
+"""
+
+import copy
+import os
+import random
+import re
+import time
+import traceback
+from collections import OrderedDict
+
+import framework.utils as utils
+from framework.exception import VerifyFailure
+from framework.packet import Packet
+from framework.pktgen import PacketGeneratorHelper
+from framework.pmd_output import PmdOutput
+from framework.test_case import TestCase, check_supported_nic
+from framework.utils import GREEN, RED
+
+from .rte_flow_common import FdirProcessing as fdirprocess
+from .rte_flow_common import RssProcessing as rssprocess
+
+executions = []
+
+
+class TestMultiprocess(TestCase):
+
+    support_nic = ["ICE_100G-E810C_QSFP", "ICE_25G-E810C_SFP", "ICE_25G-E810_XXV_SFP"]
+
+    def set_up_all(self):
+        """
+        Run at the start of each test suite.
+
+        Multiprocess prerequisites.
+        Requirements:
+            OS is not freeBSD
+            DUT core number >= 4
+            multi_process build pass
+        """
+        # self.verify('bsdapp' not in self.target, "Multiprocess not support freebsd")
+
+        self.verify(len(self.dut.get_all_cores()) >= 4, "Not enough Cores")
+        self.pkt = Packet()
+        self.dut_ports = self.dut.get_ports()
+        self.socket = self.dut.get_numa_id(self.dut_ports[0])
+        extra_option = "-Dexamples='multi_process/client_server_mp/mp_server,multi_process/client_server_mp/mp_client,multi_process/simple_mp,multi_process/symmetric_mp'"
+        self.dut.build_install_dpdk(target=self.target, extra_options=extra_option)
+        self.app_mp_client = self.dut.apps_name["mp_client"]
+        self.app_mp_server = self.dut.apps_name["mp_server"]
+        self.app_simple_mp = self.dut.apps_name["simple_mp"]
+        self.app_symmetric_mp = self.dut.apps_name["symmetric_mp"]
+
+        executions.append({"nprocs": 1, "cores": "1S/1C/1T", "pps": 0})
+        executions.append({"nprocs": 2, "cores": "1S/1C/2T", "pps": 0})
+        executions.append({"nprocs": 2, "cores": "1S/2C/1T", "pps": 0})
+        executions.append({"nprocs": 4, "cores": "1S/2C/2T", "pps": 0})
+        executions.append({"nprocs": 4, "cores": "1S/4C/1T", "pps": 0})
+        executions.append({"nprocs": 8, "cores": "1S/4C/2T", "pps": 0})
+
+        self.eal_param = ""
+        for i in self.dut_ports:
+            self.eal_param += " -a %s" % self.dut.ports_info[i]["pci"]
+
+        self.eal_para = self.dut.create_eal_parameters(cores="1S/2C/1T")
+        # start new session to run secondary
+        self.session_secondary = self.dut.new_session()
+
+        # get dts output path
+        if self.logger.log_path.startswith(os.sep):
+            self.output_path = self.logger.log_path
+        else:
+            cur_path = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
+            self.output_path = os.sep.join([cur_path, self.logger.log_path])
+        # create an instance to set stream field setting
+        self.pktgen_helper = PacketGeneratorHelper()
+        self.dport_info0 = self.dut.ports_info[self.dut_ports[0]]
+        self.pci0 = self.dport_info0["pci"]
+        self.tester_ifaces = [
+            self.tester.get_interface(self.dut.ports_map[port])
+            for port in self.dut_ports
+        ]
+        rxq = 1
+        self.session_list = []
+        self.logfmt = "*" * 20
+
+    def set_up(self):
+        """
+        Run before each test case.
+        """
+        pass
+
+    def test_perf_multiprocess_performance(self):
+        """
+        Benchmark Multiprocess performance.
+        #"""
+        packet_count = 16
+        self.dut.send_expect("fg", "# ")
+        txPort = self.tester.get_local_port(self.dut_ports[0])
+        rxPort = self.tester.get_local_port(self.dut_ports[1])
+        mac = self.tester.get_mac(txPort)
+        dmac = self.dut.get_mac_address(self.dut_ports[0])
+        tgenInput = []
+
+        # create packets with varying src_ip + dst_ip
+        for i in range(packet_count):
+            package = (
+                r'flows = [Ether(src="%s", dst="%s")/IP(src="192.168.1.%d", dst="192.168.1.%d")/("X"*26)]'
+                % (mac, dmac, i + 1, i + 2)
+            )
+            self.tester.scapy_append(package)
+            pcap = os.sep.join([self.output_path, "test_%d.pcap" % i])
+            self.tester.scapy_append('wrpcap("%s", flows)' % pcap)
+            tgenInput.append([txPort, rxPort, pcap])
+        self.tester.scapy_execute()
+
+        # run multiple symmetric_mp process
+        validExecutions = []
+        for execution in executions:
+            if len(self.dut.get_core_list(execution["cores"])) == execution["nprocs"]:
+                validExecutions.append(execution)
+
+        portMask = utils.create_mask(self.dut_ports)
+
+        for n in range(len(validExecutions)):
+            execution = validExecutions[n]
+            # get coreList from execution['cores']
+            coreList = self.dut.get_core_list(execution["cores"], socket=self.socket)
+            # run a set of symmetric_mp instances, as described in the test plan
+            dutSessionList = []
+            for index in range(len(coreList)):
+                dut_new_session = self.dut.new_session()
+                dutSessionList.append(dut_new_session)
+                # add -a option when tester and dut in same server
+                dut_new_session.send_expect(
+                    self.app_symmetric_mp
+                    + " -c %s --proc-type=auto %s -- -p %s --num-procs=%d --proc-id=%d"
+                    % (
+                        utils.create_mask([coreList[index]]),
+                        self.eal_param,
+                        portMask,
+                        execution["nprocs"],
+                        index,
+                    ),
+                    "Finished Process Init",
+                )
+
+            # clear streams before add new streams
+            self.tester.pktgen.clear_streams()
+            # run packet generator
+            streams = self.pktgen_helper.prepare_stream_from_tginput(
+                tgenInput, 100, None, self.tester.pktgen
+            )
+            _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams)
+
+            execution["pps"] = pps
+
+            # close all symmetric_mp process
+            self.dut.send_expect("killall symmetric_mp", "# ")
+            # close all dut sessions
+            for dut_session in dutSessionList:
+                self.dut.close_session(dut_session)
+
+        # get rate and mpps data
+        for execution in validExecutions:
+            self.verify(execution["pps"] != 0, "No traffic detected")
+        self.result_table_create(
+            [
+                "Num-procs",
+                "Sockets/Cores/Threads",
+                "Num Ports",
+                "Frame Size",
+                "%-age Line Rate",
+                "Packet Rate(mpps)",
+            ]
+        )
+
+        for execution in validExecutions:
+            self.result_table_add(
+                [
+                    execution["nprocs"],
+                    execution["cores"],
+                    2,
+                    64,
+                    execution["pps"] / float(100000000 / (8 * 84)),
+                    execution["pps"] / float(1000000),
+                ]
+            )
+
+        self.result_table_print()
+
+    def test_perf_multiprocess_client_serverperformance(self):
+        """
+        Benchmark Multiprocess client-server performance.
+        """
+        self.dut.kill_all()
+        self.dut.send_expect("fg", "# ")
+        txPort = self.tester.get_local_port(self.dut_ports[0])
+        rxPort = self.tester.get_local_port(self.dut_ports[1])
+        mac = self.tester.get_mac(txPort)
+
+        self.tester.scapy_append(
+            'dmac="%s"' % self.dut.get_mac_address(self.dut_ports[0])
+        )
+        self.tester.scapy_append('smac="%s"' % mac)
+        self.tester.scapy_append(
+            'flows = [Ether(src=smac, dst=dmac)/IP(src="192.168.1.1", dst="192.168.1.1")/("X"*26)]'
+        )
+
+        pcap = os.sep.join([self.output_path, "test.pcap"])
+        self.tester.scapy_append('wrpcap("%s", flows)' % pcap)
+        self.tester.scapy_execute()
+
+        validExecutions = []
+        for execution in executions:
+            if len(self.dut.get_core_list(execution["cores"])) == execution["nprocs"]:
+                validExecutions.append(execution)
+
+        for execution in validExecutions:
+            coreList = self.dut.get_core_list(execution["cores"], socket=self.socket)
+            # get core with socket parameter to specified which core dut used when tester and dut in same server
+            coreMask = utils.create_mask(
+                self.dut.get_core_list("1S/1C/1T", socket=self.socket)
+            )
+            portMask = utils.create_mask(self.dut_ports)
+            # specified mp_server core and add -a option when tester and dut in same server
+            self.dut.send_expect(
+                self.app_mp_server
+                + " -n %d -c %s %s -- -p %s -n %d"
+                % (
+                    self.dut.get_memory_channels(),
+                    coreMask,
+                    self.eal_param,
+                    portMask,
+                    execution["nprocs"],
+                ),
+                "Finished Process Init",
+                20,
+            )
+            self.dut.send_expect("^Z", "\r\n")
+            self.dut.send_expect("bg", "# ")
+
+            for n in range(execution["nprocs"]):
+                time.sleep(5)
+                # use next core as mp_client core, different from mp_server
+                coreMask = utils.create_mask([str(int(coreList[n]) + 1)])
+                self.dut.send_expect(
+                    self.app_mp_client
+                    + " -n %d -c %s --proc-type=secondary %s -- -n %d"
+                    % (self.dut.get_memory_channels(), coreMask, self.eal_param, n),
+                    "Finished Process Init",
+                )
+                self.dut.send_expect("^Z", "\r\n")
+                self.dut.send_expect("bg", "# ")
+
+            tgenInput = []
+            tgenInput.append([txPort, rxPort, pcap])
+
+            # clear streams before add new streams
+            self.tester.pktgen.clear_streams()
+            # run packet generator
+            streams = self.pktgen_helper.prepare_stream_from_tginput(
+                tgenInput, 100, None, self.tester.pktgen
+            )
+            _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams)
+
+            execution["pps"] = pps
+            self.dut.kill_all()
+            time.sleep(5)
+
+        for execution in validExecutions:
+            self.verify(execution["pps"] != 0, "No traffic detected")
+
+        self.result_table_create(
+            [
+                "Server threads",
+                "Server Cores/Threads",
+                "Num-procs",
+                "Sockets/Cores/Threads",
+                "Num Ports",
+                "Frame Size",
+                "%-age Line Rate",
+                "Packet Rate(mpps)",
+            ]
+        )
+
+        for execution in validExecutions:
+            self.result_table_add(
+                [
+                    1,
+                    "1S/1C/1T",
+                    execution["nprocs"],
+                    execution["cores"],
+                    2,
+                    64,
+                    execution["pps"] / float(100000000 / (8 * 84)),
+                    execution["pps"] / float(1000000),
+                ]
+            )
+
+        self.result_table_print()
+
+    def set_fields(self):
+        """set ip protocol field behavior"""
+        fields_config = {
+            "ip": {
+                "src": {"range": 64, "action": "inc"},
+                "dst": {"range": 64, "action": "inc"},
+            },
+        }
+
+        return fields_config
+
+    def tear_down(self):
+        """
+        Run after each test case.
+        """
+        if self.session_list:
+            for sess in self.session_list:
+                self.dut.close_session(sess)
+        self.dut.kill_all()
+
+    def tear_down_all(self):
+        """
+        Run after each test suite.
+        """
+        self.dut.kill_all()
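Finally, a note on the result tables: both suites derive "%-age Line Rate" and "Packet Rate(mpps)" from the measured pps. A minimal sketch of the standard math for 64-byte frames, assuming a 10 Gbit/s link (the link speed is an assumption here, and the suite hard-codes its own divisor; the 20 extra bytes per frame are preamble, start-of-frame delimiter, and inter-frame gap):

```python
# Sketch of the line-rate math behind the "%-age Line Rate" and
# "Packet Rate(mpps)" columns. Assumes a 10 Gbit/s link; 84 bytes on the
# wire = 64-byte frame + 20 bytes of preamble/SFD/inter-frame gap.

LINK_BPS = 10_000_000_000   # assumed 10GbE link speed
FRAME_WIRE_BYTES = 64 + 20  # 64B frame + 20B per-frame overhead

def theoretical_pps(link_bps=LINK_BPS, wire_bytes=FRAME_WIRE_BYTES):
    """Maximum packets per second the link can carry at this frame size."""
    return link_bps / (wire_bytes * 8)

def to_table_row(measured_pps):
    """Convert a measured pps figure into the two reported columns."""
    pct_line_rate = measured_pps / theoretical_pps() * 100
    mpps = measured_pps / 1e6
    return pct_line_rate, mpps

pct, mpps = to_table_row(7_440_476)  # roughly half of 64B line rate
print(round(pct), round(mpps, 2))    # prints: 50 7.44
```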