From patchwork Tue Apr 26 05:50:26 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Jun Dong
X-Patchwork-Id: 110250
From: Jun Dong
To: dts@dpdk.org
Cc: lijuan.tu@intel.com, qingx.sun@intel.com,
 junx.dong@intel.com
Subject: [dts] [RFC 5/6] test_plans/*: replace codename in test plans
Date: Tue, 26 Apr 2022 13:50:26 +0800
Message-Id: <20220426055027.6932-6-junx.dong@intel.com>
In-Reply-To: <20220426055027.6932-1-junx.dong@intel.com>
References: <20220426055027.6932-1-junx.dong@intel.com>
X-Mailer: git-send-email 2.33.1.windows.1
List-Id: test suite reviews and discussions
Errors-To: dts-bounces@dpdk.org

Signed-off-by: Jun Dong
---
 test_plans/NICStatistics_test_plan.rst        |  4 +-
 test_plans/bbdev_test_plan.rst                | 19 +++---
 test_plans/blocklist_test_plan.rst            |  7 --
 .../cloud_filter_with_l4_port_test_plan.rst   |  2 +-
 test_plans/dcf_lifecycle_test_plan.rst        |  6 +-
 test_plans/ddp_gtp_qregion_test_plan.rst      | 10 +--
 test_plans/ddp_gtp_test_plan.rst              | 19 +++---
 test_plans/ddp_l2tpv3_test_plan.rst           |  5 +-
 test_plans/ddp_mpls_test_plan.rst             | 19 +++---
 test_plans/ddp_ppp_l2tp_test_plan.rst         |  9 +--
 test_plans/dual_vlan_test_plan.rst            |  2 +-
 test_plans/dynamic_flowtype_test_plan.rst     | 10 +--
 test_plans/eeprom_dump_test_plan.rst          |  4 +-
 ...ckage_download_in_ice_driver_test_plan.rst |  6 +-
 test_plans/floating_veb_test_plan.rst         | 21 +++---
 test_plans/generic_flow_api_test_plan.rst     | 48 +++++++-------
 ..._plan.rst => i40e_rss_input_test_plan.rst} |  6 +-
 test_plans/iavf_fdir_test_plan.rst            |  4 +-
 .../iavf_flexible_descriptor_test_plan.rst    |  9 +--
 ..._package_driver_error_handle_test_plan.rst | 14 ++--
 test_plans/iavf_test_plan.rst                 |  4 +-
 ...plan.rst => ice_1pps_signal_test_plan.rst} | 14 ++--
 ...e_advanced_iavf_rss_gtpogre_test_plan.rst} |  6 +-
 ...ice_advanced_iavf_rss_gtpu_test_plan.rst}  |  9 +--
 ...anced_iavf_rss_pppol2tpoudp_test_plan.rst} |  7 +-
 ...st => ice_advanced_iavf_rss_test_plan.rst} | 21 +++---
 ...f_rss_vlan_esp_ah_l2tp_pfcp_test_plan.rst} |  4 +-
 ...=> ice_advanced_rss_gtpogre_test_plan.rst} |  9 +--
 ...st => ice_advanced_rss_gtpu_test_plan.rst} |  9 +--
 ...t => ice_advanced_rss_pppoe_test_plan.rst} |  4 +-
 ...lan.rst => ice_advanced_rss_test_plan.rst} | 13 ++--
 ...d_rss_vlan_esp_ah_l2tp_pfcp_test_plan.rst} |  4 +-
 ...n.rst => ice_dcf_acl_filter_test_plan.rst} |  4 +-
 ...an.rst => ice_dcf_date_path_test_plan.rst} |  0
 ...st => ice_dcf_flow_priority_test_plan.rst} |  4 +-
 ...est_plan.rst => ice_dcf_qos_test_plan.rst} |  5 +-
 ...ice_dcf_switch_filter_gtpu_test_plan.rst}  |  6 +-
 ...ice_dcf_switch_filter_pppoe_test_plan.rst} |  6 +-
 ...st => ice_dcf_switch_filter_test_plan.rst} | 10 +--
 ..._test_plan.rst => ice_ecpri_test_plan.rst} | 10 +--
 ...r_test_plan.rst => ice_fdir_test_plan.rst} |  4 +-
 ...an.rst => ice_flow_priority_test_plan.rst} | 17 ++---
 ...st => ice_iavf_fdir_gtpogre_test_plan.rst} |  6 +-
 ...ice_iavf_fdir_pppol2tpoudp_test_plan.rst}  |  4 +-
 ...e_iavf_ip_fragment_rte_flow_test_plan.rst} |  4 +-
 ...t => ice_iavf_rss_configure_test_plan.rst} |  2 +-
 ...=> ice_ip_fragment_rte_flow_test_plan.rst} |  4 +-
 ...rst => ice_limit_value_test_test_plan.rst} | 18 ++---
 ...q_test_plan.rst => ice_qinq_test_plan.rst} | 14 ++--
 ...an.rst => ice_rss_configure_test_plan.rst} |  6 +-
 ...=> ice_switch_filter_pppoe_test_plan.rst}  |  6 +-
 ...an.rst => ice_switch_filter_test_plan.rst} | 10 +--
 ...f_support_multicast_address_test_plan.rst} |  4 +-
 test_plans/index.rst                          | 66 +++++++++----------
 test_plans/inline_ipsec_test_plan.rst         |  2 +-
 test_plans/ipfrag_test_plan.rst               |  2 +-
 test_plans/ipgre_test_plan.rst                | 12 +++-
 test_plans/ipv4_reassembly_test_plan.rst      |  2 +-
 ..._get_extra_queue_information_test_plan.rst |  2 +-
 test_plans/l2tp_esp_coverage_test_plan.rst    |  8 ++-
 test_plans/l3fwd_test_plan.rst                |  2 +-
 test_plans/large_vf_test_plan.rst             |  4 +-
 test_plans/macsec_for_ixgbe_test_plan.rst     | 12 ++--
 ...ious_driver_event_indication_test_plan.rst |  6 +-
 test_plans/metrics_test_plan.rst              |  2 +-
 test_plans/multicast_test_plan.rst            |  2 +-
 test_plans/nic_single_core_perf_test_plan.rst | 43 ++++++------
 test_plans/nvgre_test_plan.rst                | 22 +++----
 test_plans/pf_smoke_test_plan.rst             |  2 +-
 test_plans/pmd_bonded_test_plan.rst           |  4 +-
 test_plans/pmd_test_plan.rst                  |  8 +--
 test_plans/pmdpcap_test_plan.rst              |  4 +-
 test_plans/pmdrss_hash_test_plan.rst          | 25 +++----
 test_plans/pmdrssreta_test_plan.rst           |  6 +-
 test_plans/port_control_test_plan.rst         |  6 +-
 test_plans/ptpclient_test_plan.rst            |  6 +-
 test_plans/ptype_mapping_test_plan.rst        |  6 +-
 test_plans/pvp_share_lib_test_plan.rst        | 12 ++--
 test_plans/qinq_filter_test_plan.rst          | 30 ++++-----
 test_plans/queue_region_test_plan.rst         | 29 ++++----
 test_plans/queue_start_stop_test_plan.rst     |  2 +-
 test_plans/rss_to_rte_flow_test_plan.rst      |  4 +-
 test_plans/rteflow_priority_test_plan.rst     |  3 +-
 ...ntime_vf_queue_number_kernel_test_plan.rst |  2 +-
 ...time_vf_queue_number_maxinum_test_plan.rst | 12 ++--
 .../runtime_vf_queue_number_test_plan.rst     |  2 +-
 test_plans/rxtx_offload_test_plan.rst         | 10 +--
 test_plans/stats_checks_test_plan.rst         |  4 +-
 test_plans/tso_test_plan.rst                  |  2 +-
 test_plans/tx_preparation_test_plan.rst       |  4 +-
 test_plans/uni_pkt_test_plan.rst              | 66 +++++++++----------
 test_plans/userspace_ethtool_test_plan.rst    | 10 +-
 test_plans/veb_switch_test_plan.rst           | 21 +++---
 test_plans/vf_daemon_test_plan.rst            |  2 +-
 test_plans/vf_kernel_test_plan.rst            |  6 +-
 test_plans/vf_pf_reset_test_plan.rst          |  2 +-
 test_plans/vf_rss_test_plan.rst               |  8 +--
 test_plans/vf_single_core_perf_test_plan.rst  | 20 +++---
 test_plans/vf_smoke_test_plan.rst             |  2 +-
 .../vlan_ethertype_config_test_plan.rst       |  2 +-
 test_plans/vm_power_manager_test_plan.rst     |  2 +-
 test_plans/vmdq_dcb_test_plan.rst             |  6 +-
 test_plans/vmdq_test_plan.rst                 | 14 ++--
 test_plans/vxlan_test_plan.rst                | 22 +++----
 104 files changed, 529 insertions(+), 496 deletions(-)
 rename test_plans/{fortville_rss_input_test_plan.rst => i40e_rss_input_test_plan.rst} (99%)
 rename test_plans/{cvl_1pps_signal_test_plan.rst => ice_1pps_signal_test_plan.rst} (90%)
 rename test_plans/{cvl_advanced_iavf_rss_gtpogre_test_plan.rst => ice_advanced_iavf_rss_gtpogre_test_plan.rst} (99%)
 rename test_plans/{cvl_advanced_iavf_rss_gtpu_test_plan.rst => ice_advanced_iavf_rss_gtpu_test_plan.rst} (99%)
 rename test_plans/{cvl_advanced_iavf_rss_pppol2tpoudp_test_plan.rst => ice_advanced_iavf_rss_pppol2tpoudp_test_plan.rst} (99%)
 rename test_plans/{cvl_advanced_iavf_rss_test_plan.rst => ice_advanced_iavf_rss_test_plan.rst} (99%)
 mode change 100755 => 100644
 rename test_plans/{cvl_advanced_iavf_rss_vlan_esp_ah_l2tp_pfcp_test_plan.rst => ice_advanced_iavf_rss_vlan_esp_ah_l2tp_pfcp_test_plan.rst} (99%)
 rename test_plans/{cvl_advanced_rss_gtpogre_test_plan.rst => ice_advanced_rss_gtpogre_test_plan.rst} (99%)
 rename test_plans/{cvl_advanced_rss_gtpu_test_plan.rst => ice_advanced_rss_gtpu_test_plan.rst} (99%)
 rename test_plans/{cvl_advanced_rss_pppoe_test_plan.rst => ice_advanced_rss_pppoe_test_plan.rst} (99%)
 rename test_plans/{cvl_advanced_rss_test_plan.rst => ice_advanced_rss_test_plan.rst} (99%)
 rename test_plans/{cvl_advanced_rss_vlan_esp_ah_l2tp_pfcp_test_plan.rst => ice_advanced_rss_vlan_esp_ah_l2tp_pfcp_test_plan.rst} (99%)
 rename test_plans/{cvl_dcf_acl_filter_test_plan.rst => ice_dcf_acl_filter_test_plan.rst} (99%)
 rename test_plans/{cvl_dcf_date_path_test_plan.rst => ice_dcf_date_path_test_plan.rst} (100%)
 mode change 100755 => 100644
 rename test_plans/{cvl_dcf_flow_priority_test_plan.rst => ice_dcf_flow_priority_test_plan.rst} (99%)
 mode change 100755 => 100644
 rename test_plans/{cvl_dcf_qos_test_plan.rst => ice_dcf_qos_test_plan.rst} (99%)
 rename test_plans/{cvl_dcf_switch_filter_gtpu_test_plan.rst => ice_dcf_switch_filter_gtpu_test_plan.rst} (99%)
 rename test_plans/{cvl_dcf_switch_filter_pppoe_test_plan.rst => ice_dcf_switch_filter_pppoe_test_plan.rst} (99%)
 rename test_plans/{cvl_dcf_switch_filter_test_plan.rst => ice_dcf_switch_filter_test_plan.rst} (99%)
 rename test_plans/{cvl_ecpri_test_plan.rst => ice_ecpri_test_plan.rst} (99%)
 rename test_plans/{cvl_fdir_test_plan.rst => ice_fdir_test_plan.rst} (99%)
 rename test_plans/{cvl_flow_priority_test_plan.rst => ice_flow_priority_test_plan.rst} (98%)
 rename test_plans/{iavf_fdir_gtpogre_test_plan.rst => ice_iavf_fdir_gtpogre_test_plan.rst} (99%)
 rename test_plans/{cvl_iavf_fdir_pppol2tpoudp_test_plan.rst => ice_iavf_fdir_pppol2tpoudp_test_plan.rst} (99%)
 rename test_plans/{cvl_iavf_ip_fragment_rte_flow_test_plan.rst => ice_iavf_ip_fragment_rte_flow_test_plan.rst} (99%)
 rename test_plans/{cvl_iavf_rss_configure_test_plan.rst => ice_iavf_rss_configure_test_plan.rst} (99%)
 mode change 100755 => 100644
 rename test_plans/{cvl_ip_fragment_rte_flow_test_plan.rst => ice_ip_fragment_rte_flow_test_plan.rst} (99%)
 rename test_plans/{cvl_limit_value_test_test_plan.rst => ice_limit_value_test_test_plan.rst} (97%)
 rename test_plans/{cvl_qinq_test_plan.rst => ice_qinq_test_plan.rst} (99%)
 rename test_plans/{cvl_rss_configure_test_plan.rst => ice_rss_configure_test_plan.rst} (99%)
 rename test_plans/{cvl_switch_filter_pppoe_test_plan.rst => ice_switch_filter_pppoe_test_plan.rst} (99%)
 rename test_plans/{cvl_switch_filter_test_plan.rst => ice_switch_filter_test_plan.rst} (99%)
 rename test_plans/{cvl_vf_support_multicast_address_test_plan.rst => ice_vf_support_multicast_address_test_plan.rst} (99%)

diff --git a/test_plans/NICStatistics_test_plan.rst b/test_plans/NICStatistics_test_plan.rst
index cfae7068..a5f00981 100644
--- a/test_plans/NICStatistics_test_plan.rst
+++ b/test_plans/NICStatistics_test_plan.rst
@@ -35,10 +35,10 @@ NIC Statistics Tests
 ====================
 
 This document provides benchmark tests for the userland Intel®
-82599 Gigabit Ethernet Controller (Niantic) Poll Mode Driver (PMD).
+82599 Gigabit Ethernet Controller Poll Mode Driver (PMD).
 The userland PMD application runs the ``IO forwarding mode`` test
 described in the PMD test plan document with different parameters for
-the configuration of Niantic NIC ports.
+the configuration of 82599 NIC ports.
 
 The core configuration description is:
diff --git a/test_plans/bbdev_test_plan.rst b/test_plans/bbdev_test_plan.rst
index 2e2a64e5..58e794e7 100644
--- a/test_plans/bbdev_test_plan.rst
+++ b/test_plans/bbdev_test_plan.rst
@@ -30,9 +30,9 @@ ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
 ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
-=============================================================
-Wireless device for ICX-D (bbdev) for Turbo decoding/encoding
-=============================================================
+===========================================================================
+Wireless device for Xeon SP 83xx/63xx-D (bbdev) for Turbo decoding/encoding
+===========================================================================
 
 Description
 ===========
@@ -50,17 +50,16 @@ Description
    and virtual (software) wireless acceleration functions.
 
    Physical bbdev devices are discovered during the PCI probe/enumeration of
-   the EAL function which is executed at DPDK initialization, based on
-   their PCI device identifier, each unique PCI BDF (bus/bridge, device,
-   function).
+   the EAL function which is executed at DPDK initialization, based on their
+   PCI device identifier, each unique PCI BDF (bus/bridge, device, function).
 
    Virtual devices can be created by two mechanisms, either using the EAL
    command line options or from within the application using an EAL API
    directly.
 
-   so it is required to perform validation of the framework divided into
-   2 stages:
+   so it is required to perform validation of the framework divided into 2
+   stages:
 
    Stage 1: Validation of the SW-only solution (turbo_sw)
-   Stage 2: Validation of the HW-accelerated solution (ICX-D TIP) on an ICX-D
-   platform.
+   Stage 2: Validation of the HW-accelerated solution (Xeon SP 83xx/63xx-D TIP)
+   on a Xeon SP 83xx/63xx-D platform.
 
    We now only support stage 1.
 
 Prerequisites
diff --git a/test_plans/blocklist_test_plan.rst b/test_plans/blocklist_test_plan.rst
index f1231331..f6bc854c 100644
--- a/test_plans/blocklist_test_plan.rst
+++ b/test_plans/blocklist_test_plan.rst
@@ -61,19 +61,16 @@ are bound and available::
 
     EAL: Device bound
     EAL: map PCI resource for device 0000:01:00.0
     EAL: PCI memory mapped at 0x7fe6b68c7000
-    EAL: probe driver: 8086:10fb rte_niantic_pmd
     EAL: unbind kernel driver /sys/bus/pci/devices/0000:01:00.1/driver/unbind
     EAL: bind PCI device 0000:01:00.1 to uio driver
     EAL: Device bound
     EAL: map PCI resource for device 0000:01:00.1
     EAL: PCI memory mapped at 0x7fe6b6847000
-    EAL: probe driver: 8086:10fb rte_niantic_pmd
     EAL: unbind kernel driver /sys/bus/pci/devices/0000:02:00.0/driver/unbind
     EAL: bind PCI device 0000:02:00.0 to uio driver
     EAL: Device bound
     EAL: map PCI resource for device 0000:02:00.0
     EAL: PCI memory mapped at 0x7fe6b6580000
-    EAL: probe driver: 8086:10fb rte_niantic_pmd
     EAL: unbind kernel driver /sys/bus/pci/devices/0000:02:00.1/driver/unbind
     EAL: bind PCI device 0000:02:00.1 to uio driver
     EAL: Device bound
@@ -96,19 +93,16 @@ Select first available port to be blocklisted and specify it with -b option. For
 Check that corresponding device is skipped for binding, and only 3 ports
 are available now:::
 
-    EAL: probe driver: 8086:10fb rte_niantic_pmd
     EAL: unbind kernel driver /sys/bus/pci/devices/0000:01:00.1/driver/unbind
     EAL: bind PCI device 0000:01:00.1 to uio driver
     EAL: Device bound
     EAL: map PCI resource for device 0000:01:00.1
     EAL: PCI memory mapped at 0x7f0037912000
-    EAL: probe driver: 8086:10fb rte_niantic_pmd
     EAL: unbind kernel driver /sys/bus/pci/devices/0000:02:00.0/driver/unbind
     EAL: bind PCI device 0000:02:00.0 to uio driver
     EAL: Device bound
     EAL: map PCI resource for device 0000:02:00.0
     EAL: PCI memory mapped at 0x7f0037892000
-    EAL: probe driver: 8086:10fb rte_niantic_pmd
     EAL: unbind kernel driver /sys/bus/pci/devices/0000:02:00.1/driver/unbind
     EAL: bind PCI device 0000:02:00.1 to uio driver
     EAL: Device bound
@@ -131,7 +125,6 @@ For the example above:::
 
 Check that 3 corresponding device is skipped for binding, and only 1 ports
 is available now:::
 
-    EAL: probe driver: 8086:10fb rte_niantic_pmd
     EAL: unbind kernel driver /sys/bus/pci/devices/0000:02:00.1/driver/unbind
     EAL: bind PCI device 0000:02:00.1 to uio driver
     EAL: Device bound
diff --git a/test_plans/cloud_filter_with_l4_port_test_plan.rst b/test_plans/cloud_filter_with_l4_port_test_plan.rst
index e9f226ac..7fbf5130 100644
--- a/test_plans/cloud_filter_with_l4_port_test_plan.rst
+++ b/test_plans/cloud_filter_with_l4_port_test_plan.rst
@@ -38,7 +38,7 @@ Prerequisites
 =============
 
 1. Hardware:
-   Fortville
+   Intel® Ethernet 700 Series
 
 2. software:
    dpdk: http://dpdk.org/git/dpdk
diff --git a/test_plans/dcf_lifecycle_test_plan.rst b/test_plans/dcf_lifecycle_test_plan.rst
index 3e63a3a4..bd5f6927 100644
--- a/test_plans/dcf_lifecycle_test_plan.rst
+++ b/test_plans/dcf_lifecycle_test_plan.rst
@@ -32,7 +32,7 @@
 
 ============================
-CVL DCF Lifecycle Test Suite
+ICE DCF Lifecycle Test Suite
 ============================
 
 Description
@@ -1073,7 +1073,7 @@ TC34: ACL DCF mode is active, add ACL filters by way of host based configuration
 
     Added rule with ID 15871
 
 ===============================
-CVL DCF enable device reset API
+ICE DCF enable device reset API
 ===============================
 
 Description
@@ -1088,7 +1088,7 @@ Prerequisites
 
 Hardware
 --------
 
-Supportted NICs: columbiaville_25g/columbiaville_100g
+Supported NICs: Intel® Ethernet Network Adapter E810-XXVDA4/Intel® Ethernet Network Adapter E810-CQDA2
 
 Software
 --------
diff --git a/test_plans/ddp_gtp_qregion_test_plan.rst b/test_plans/ddp_gtp_qregion_test_plan.rst
index 7e2b1816..6ea1f546 100644
--- a/test_plans/ddp_gtp_qregion_test_plan.rst
+++ b/test_plans/ddp_gtp_qregion_test_plan.rst
@@ -52,11 +52,11 @@ Requirements
 5. Requirements 1 and 2 should be possible for IPv6 addresses to use 64,
    48 or 32-bit prefixes instead of full address.
 
-FVL supports queue regions configuration for RSS, so different traffic
-classes or different packet classification types can be separated to
-different queue regions which includes several queues. Support to set
-hash input set info for RSS flexible payload, then enable new
-protocols' RSS.
+Intel® Ethernet 700 Series supports queue regions configuration for RSS,
+so different traffic classes or different packet classification types
+can be separated to different queue regions which includes several queues.
+Support to set hash input set info for RSS flexible payload, then enable
+new protocols' RSS.
 Dynamic flow type feature introduces GTP pctype and flow type, design and
 add queue region/queue range mapping as below table. For more detailed and
 relative information, please refer to dynamic flow type and queue
diff --git a/test_plans/ddp_gtp_test_plan.rst b/test_plans/ddp_gtp_test_plan.rst
index 0fd5a50d..adf52050 100644
--- a/test_plans/ddp_gtp_test_plan.rst
+++ b/test_plans/ddp_gtp_test_plan.rst
@@ -30,15 +30,16 @@ ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
 ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
-===============================
-Fortville DDP GTP-C/GTP-U Tests
-===============================
-
-FVL6 supports DDP (Dynamic Device Personalization) to program analyzer/parser
-via AdminQ. Profile can be used to update FVL configuration tables via MMIO
-configuration space, not microcode or firmware itself. For microcode/FW
-changes new HW/FW/NVM image must be uploaded to the NIC. Profiles will be
-stored in binary files and need to be passed to AQ to program FVL during
+================================================
+Intel® Ethernet 700 Series DDP GTP-C/GTP-U Tests
+================================================
+
+Intel® Ethernet 700 Series supports DDP (Dynamic Device Personalization) to
+program analyzer/parser via AdminQ. Profile can be used to update Intel®
+Ethernet 700 Series configuration tables via MMIO configuration space, not
+microcode or firmware itself. For microcode/FW changes new HW/FW/NVM image
+must be uploaded to the NIC. Profiles will be stored in binary files and
+need to be passed to AQ to program Intel® Ethernet 700 Series during
 initialization stage.
 
 GPRS Tunneling Protocol (GTP) is a group of IP-based communications
diff --git a/test_plans/ddp_l2tpv3_test_plan.rst b/test_plans/ddp_l2tpv3_test_plan.rst
index d4ae0f55..7039119d 100644
--- a/test_plans/ddp_l2tpv3_test_plan.rst
+++ b/test_plans/ddp_l2tpv3_test_plan.rst
@@ -35,7 +35,8 @@ DDP L2TPV3
 ==========
 
 DDP profile 0x80000004 adds support for directing L2TPv3 packets based on
-their session ID for FVL NIC. For DDP introduction, please refer to :
+their session ID for Intel® Ethernet 700 Series NIC. For DDP introduction,
+please refer to :
 
 https://software.intel.com/en-us/articles/dynamic-device-personalization-for-intel-ethernet-700-series
 
@@ -62,7 +63,7 @@ Requirements as below
 =====================
 
 Flow API support for flow director rules based on L2TPv3 session ID
-The current scope is limited to FVL NIC
+The current scope is limited to Intel® Ethernet 700 Series NIC
 
 Prerequisites
 =============
diff --git a/test_plans/ddp_mpls_test_plan.rst b/test_plans/ddp_mpls_test_plan.rst
index 6c4d0e01..9f544e1e 100644
--- a/test_plans/ddp_mpls_test_plan.rst
+++ b/test_plans/ddp_mpls_test_plan.rst
@@ -30,15 +30,16 @@ ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
 ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
-====================================================
-Fortville DDP (Dynamic Device Personalization) Tests
-====================================================
-
-FVL6 supports DDP (Dynamic Device Personalization) to program analyzer/parser
-via AdminQ. Profile can be used to update FVL configuration tables via MMIO
-configuration space, not microcode or firmware itself. For microcode/FW
-changes new HW/FW/NVM image must be uploaded to the NIC. Profiles will be
-stored in binary files and need to be passed to AQ to program FVL during
+=====================================================================
+Intel® Ethernet 700 Series DDP (Dynamic Device Personalization) Tests
+=====================================================================
+
+Intel® Ethernet 700 Series supports DDP (Dynamic Device Personalization) to
+program analyzer/parser via AdminQ. Profile can be used to update Intel®
+Ethernet 700 Series configuration tables via MMIO configuration space, not
+microcode or firmware itself. For microcode/FW changes new HW/FW/NVM image
+must be uploaded to the NIC. Profiles will be stored in binary files and
+need to be passed to AQ to program Intel® Ethernet 700 Series during
 initialization stage.
 
 With DDP, MPLS (Multi-protocol Label Switching) can be supported by NVM with
diff --git a/test_plans/ddp_ppp_l2tp_test_plan.rst b/test_plans/ddp_ppp_l2tp_test_plan.rst
index 3f9c53b7..ccdddf31 100644
--- a/test_plans/ddp_ppp_l2tp_test_plan.rst
+++ b/test_plans/ddp_ppp_l2tp_test_plan.rst
@@ -34,8 +34,9 @@
 DDP PPPoE/L2TPv2/PPPoL2TPv2
 ===========================
 
-Fortville supports PPPoE/L2TPv2/PPPoL2TPv2 new protocols after loading profile.
-For DDP introduction, please refer to ddp gtp or ddp mpls test plan.
+Intel® Ethernet 700 Series supports PPPoE/L2TPv2/PPPoL2TPv2 new protocols
+after loading profile. For DDP introduction, please refer to ddp gtp or ddp
+mpls test plan.
 
 Requirements as below::
 
@@ -51,8 +52,8 @@ Requirements as below::
 
 Dynamic flow type mapping eliminates usage of number of hard-coded flow
 types in bulky if-else statements. For instance, when configure hash enable
-flags for RSS in i40e_config_hena() function and will make partitioning FVL
-in i40e PMD more scalable.
+flags for RSS in i40e_config_hena() function and will make partitioning
+Intel® Ethernet 700 Series in i40e PMD more scalable.
 
 I40e PCTYPEs are statically mapped to RTE_ETH_FLOW_* types in DPDK, defined
 in rte_eth_ctrl.h, flow types used to define ETH_RSS_* offload types in
diff --git a/test_plans/dual_vlan_test_plan.rst b/test_plans/dual_vlan_test_plan.rst
index a7e03bcc..d8cb28fa 100644
--- a/test_plans/dual_vlan_test_plan.rst
+++ b/test_plans/dual_vlan_test_plan.rst
@@ -294,7 +294,7 @@ Check whether the mode is set successfully::
     qinq(extend) on
 
 Set Tag Protocol ID ``0x1234`` on port ``0``.
-Nic only support inner model, except Fortville::
+NICs only support inner model, except Intel® Ethernet 700 Series::
 
     testpmd> vlan set inner tpid 0x1234 0
 
diff --git a/test_plans/dynamic_flowtype_test_plan.rst b/test_plans/dynamic_flowtype_test_plan.rst
index 5fda715e..eaf98cfe 100644
--- a/test_plans/dynamic_flowtype_test_plan.rst
+++ b/test_plans/dynamic_flowtype_test_plan.rst
@@ -30,9 +30,9 @@ ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
 ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
-========================================================
-Fortville Dynamic Mapping of Flow Types to PCTYPEs Tests
-========================================================
+=========================================================================
+Intel® Ethernet 700 Series Dynamic Mapping of Flow Types to PCTYPEs Tests
+=========================================================================
 
 More protocols can be added dynamically using dynamic device
 personalization profiles (DDP).
@@ -46,8 +46,8 @@ SW RTE_ETH_FLOW type mapping is proposed.
 
 Dynamic flow type mapping will eliminate usage of number of hard-coded flow
 types in bulky if-else statements. For instance, when configure hash enable
-flags for RSS in i40e_config_hena() function and will make partitioning FVL
-in i40e PMD more scalable.
+flags for RSS in i40e_config_hena() function and will make partitioning
+Intel® Ethernet 700 Series in i40e PMD more scalable.
 
 I40e PCTYPEs are statically mapped to RTE_ETH_FLOW_* types in DPDK, defined
 in rte_eth_ctrl.h, and flow types used to define ETH_RSS_* offload types in
diff --git a/test_plans/eeprom_dump_test_plan.rst b/test_plans/eeprom_dump_test_plan.rst
index 6470297a..79d23a3a 100644
--- a/test_plans/eeprom_dump_test_plan.rst
+++ b/test_plans/eeprom_dump_test_plan.rst
@@ -68,7 +68,7 @@ Test Case : EEPROM Dump
 
     ethtool -e raw on length >> .txt
 
-3. If nic is columbiaville, store the output of the first 1000 lines from testpmd and ethtool into two files,
+3. If nic is Intel® Ethernet 800 Series, store the output of the first 1000 lines from testpmd and ethtool into two files,
    else store the output from testpmd and ethtool into two files.
    Then compare both files, verify they are the same.
 
4. Delete all the files created during testing.
@@ -86,7 +86,7 @@ Test Case : Module EEPROM Dump
 
     ethtool -m raw on length >> .txt
 
-3. If nic is columbiaville, store the output of the first 16 lines from testpmd and ethtool into two files,
+3. If nic is Intel® Ethernet 800 Series, store the output of the first 16 lines from testpmd and ethtool into two files,
    else store the output from testpmd and ethtool into two files.
    Then compare both files, verify they are the same.
 
4. Delete all the files created during testing.
diff --git a/test_plans/enable_package_download_in_ice_driver_test_plan.rst b/test_plans/enable_package_download_in_ice_driver_test_plan.rst
index b81e4e79..02837459 100644
--- a/test_plans/enable_package_download_in_ice_driver_test_plan.rst
+++ b/test_plans/enable_package_download_in_ice_driver_test_plan.rst
@@ -30,9 +30,9 @@ ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
 ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
-==========================================================
-Flexible pipeline package processing on CPK NIC mode Tests
-==========================================================
+===========================================================
+Flexible pipeline package processing on E822 NIC mode Tests
+===========================================================
 
 Description
 ===========
diff --git a/test_plans/floating_veb_test_plan.rst b/test_plans/floating_veb_test_plan.rst
index bd6e1cd2..6ab99986 100644
--- a/test_plans/floating_veb_test_plan.rst
+++ b/test_plans/floating_veb_test_plan.rst
@@ -42,24 +42,25 @@ http://www.ieee802.org/802_tutorials/2009-11
 /evb-tutorial-draft-20091116_v09.pdf
 
 Virtual Ethernet Bridge (VEB) - This is an IEEE EVB term. A VEB is a VLAN
-Bridge internal to Fortville that bridges the traffic of multiple VSIs over
-an internal virtual network.
+Bridge internal to Intel® Ethernet 700 Series that bridges the traffic of
+multiple VSIs over an internal virtual network.
 
 Virtual Ethernet Port Aggregator (VEPA) - This is an IEEE EVB term. A VEPA
-multiplexes the traffic of one or more VSIs onto a single Fortville Ethernet
-port. The biggest difference between a VEB and a VEPA is that a VEB can
-switch packets internally between VSIs, whereas a VEPA cannot.
+multiplexes the traffic of one or more VSIs onto a single Intel® Ethernet
+700 Series Ethernet port. The biggest difference between a VEB and a VEPA
+is that a VEB can switch packets internally between VSIs, whereas a VEPA
+cannot.
 
 Virtual Station Interface (VSI) - This is an IEEE EVB term that defines
 the properties of a virtual machine's (or a physical machine's) connection
-to the network. Each downstream v-port on a Fortville VEB or VEPA defines
-a VSI. A standards-based definition of VSI properties enables network
-management tools to perform virtual machine migration and associated network
-re-configuration in a vendor-neutral manner.
+to the network. Each downstream v-port on an Intel® Ethernet 700 Series VEB
+or VEPA defines a VSI. A standards-based definition of VSI properties enables
+network management tools to perform virtual machine migration and associated
+network re-configuration in a vendor-neutral manner.
 
 My understanding of VEB is that it's an in-NIC switch(MAC/VLAN), and it can
 support VF->VF, PF->VF, VF->PF packet forwarding according to the NIC internal
-switch. It's similar as Niantic's SRIOV switch.
+switch. It's similar to 82599's SRIOV switch.
 
 Floating VEB Introduction
 =========================
diff --git a/test_plans/generic_flow_api_test_plan.rst b/test_plans/generic_flow_api_test_plan.rst
index 6fd039a0..1d1fa2dd 100644
--- a/test_plans/generic_flow_api_test_plan.rst
+++ b/test_plans/generic_flow_api_test_plan.rst
@@ -38,7 +38,7 @@ Prerequisites
 =============
 
 1. Hardware:
-   Fortville and Niantic
+   Intel® Ethernet 700 Series and 82599
 
 2. software:
    dpdk: http://dpdk.org/git/dpdk
@@ -52,8 +52,8 @@ Note: validate the rules first before create it in each case.
 All the rules that can be validated correctly should be created successfully.
 The rules can't be validated correctly shouldn't be created successfully.
 
-Test case: Fortville ethertype
-==============================
+Test case: Intel® Ethernet 700 Series ethertype
+===============================================
 
 1. Launch the app ``testpmd`` with the following arguments::
@@ -94,8 +94,8 @@ Test case: Fortville ethertype
 
     testpmd> flow list 0
 
-Test case: Fortville fdir for L2 payload
-========================================
+Test case: Intel® Ethernet 700 Series fdir for L2 payload
+=========================================================
 
 1. Launch the app ``testpmd`` with the following arguments::
@@ -128,8 +128,8 @@ Test case: Fortville fdir for L2 payload
 
     testpmd> flow list 0
 
-Test case: Fortville fdir for flexbytes
-=======================================
+Test case: Intel® Ethernet 700 Series fdir for flexbytes
+========================================================
 
 1. Launch the app ``testpmd`` with the following arguments::
@@ -205,8 +205,8 @@ Test case: Fortville fdir for flexbytes
 
     testpmd> flow create 0 ingress pattern eth / ipv4 src is 2.2.2.4 dst is 2.2.2.5 / sctp src is 42 / raw relative is 1 offset is 2 pattern is abcdefghijklmnop / end actions queue index 5 / end
     testpmd> flow create 0 ingress pattern eth / vlan tci is 1 / ipv6 src is 2001::1 dst is 2001::2 tc is 3 hop is 30 / tcp src is 32 dst is 33 / raw relative is 1 offset is 0 pattern is hijk / raw relative is 1 offset is 8 pattern is abcdefgh / end actions queue index 6 / end
 
-Test case: Fortville fdir for ipv4
-==================================
+Test case: Intel® Ethernet 700 Series fdir for ipv4
+===================================================
 
 Prerequisites:
@@ -310,8 +310,8 @@ Test case: Fortville fdir for ipv4
 
     testpmd> flow list 0
 
-Test case: Fortville fdir for ipv6
-==================================
+Test case: Intel® Ethernet 700 Series fdir for ipv6
+===================================================
 
 Prerequisites:
@@ -396,8 +396,8 @@ Test case: Fortville fdir for ipv6
 
     testpmd> flow list 0
 
-Test case: Fortville fdir wrong parameters
-==========================================
+Test case: Intel® Ethernet 700 Series fdir wrong parameters
+===========================================================
 
 1. Launch the app ``testpmd`` with the following arguments::
@@ -449,8 +449,8 @@ Note:
 
     /// not support IP fragment ///
 
-Test case: Fortville tunnel vxlan
-=================================
+Test case: Intel® Ethernet 700 Series tunnel vxlan
+==================================================
 
 Prerequisites:
@@ -552,8 +552,8 @@ Test case: Fortville tunnel vxlan
 
     testpmd> flow list 0
 
-Test case: Fortville tunnel nvgre
-=================================
+Test case: Intel® Ethernet 700 Series tunnel nvgre
+==================================================
 
 Prerequisites:
@@ -1494,8 +1494,8 @@ Test case: igb flexbytes
 
     testpmd> flow flush 0
     testpmd> flow list 0
 
-Test case: Fortville fdir for l2 mac
-====================================
+Test case: Intel® Ethernet 700 Series fdir for l2 mac
+=====================================================
 
 Prerequisites:
 
    bind the PF to dpdk driver::
@@ -2007,7 +2007,7 @@ Test case: Dual vlan(QinQ)
 
 1. config testpmd on DUT
 
-   1. set up testpmd with Fortville NICs::
+   1. set up testpmd with Intel® Ethernet 700 Series NICs::
 
        ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x1ffff -n 4 -- -i --coremask=0x1fffe --portmask=0x1 --rxq=16 --txq=16 --tx-offloads=0x8fff
@@ -2105,7 +2105,7 @@ Test Case: 10GB Multiple filters
 
 1. config testpmd on DUT
 
-   1. set up testpmd with Fortville NICs::
+   1. set up testpmd with Intel® Ethernet 700 Series NICs::
 
       ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1,2,3,4,5,6,7,8 -n 4 -- -i --disable-rss --rxq=16 --txq=16
@@ -2179,7 +2179,7 @@ queue 1.
 
 Test Case: jumbo framesize filter
 ===================================
-This case is designed for NIC (niantic,I350, 82576 and 82580). Since
+This case is designed for NIC (82599, I350, 82576 and 82580). Since
 ``Testpmd`` could transmits packets with jumbo frame size , it also could
 transmit above packets on assigned queue.
 
 Launch the app ``testpmd`` with the following arguments::
@@ -2216,7 +2216,7 @@ the packet are not received on the queue 2::
 
 Test Case: 64 queues
 ========================
-This case is designed for NIC(niantic). Default use 64 queues for test
+This case is designed for NIC(82599). Default use 64 queues for test
 
 Launch the app ``testpmd`` with the following arguments::
@@ -2230,7 +2230,7 @@ Launch the app ``testpmd`` with the following arguments::
 
     testpmd>vlan set filter off 1
 
 Create the 5-tuple Filters with different queues (32,63) on port 0 for
-niantic::
+82599::
 
     testpmd> set stat_qmap rx 0 32 1
     testpmd> flow create 0 ingress pattern eth / ipv4 dst is 2.2.2.5 src is 2.2.2.4 / tcp dst is 1 src is 1 / end actions queue index 32 / end
diff --git a/test_plans/fortville_rss_input_test_plan.rst b/test_plans/i40e_rss_input_test_plan.rst
similarity index 99%
rename from test_plans/fortville_rss_input_test_plan.rst
rename to test_plans/i40e_rss_input_test_plan.rst
index b8e9ea2f..badc98ad 100644
--- a/test_plans/fortville_rss_input_test_plan.rst
+++ b/test_plans/i40e_rss_input_test_plan.rst
@@ -31,9 +31,9 @@
 
 OF THE POSSIBILITY OF SUCH DAMAGE.
 
-====================================================
-Fortville Configuration of RSS in RTE Flow Tests
-====================================================
+=================================================================
+Intel® Ethernet 700 Series Configuration of RSS in RTE Flow Tests
+=================================================================
 
 Description
 ===========
diff --git a/test_plans/iavf_fdir_test_plan.rst b/test_plans/iavf_fdir_test_plan.rst
index d371ca71..f5940e1c 100644
--- a/test_plans/iavf_fdir_test_plan.rst
+++ b/test_plans/iavf_fdir_test_plan.rst
@@ -31,7 +31,7 @@ OF THE POSSIBILITY OF SUCH DAMAGE.
Launch the app ``testpmd`` with the following arguments:: @@ -2216,7 +2216,7 @@ the packet are not received on the queue 2:: Test Case: 64 queues ======================== -This case is designed for NIC(niantic). Default use 64 queues for test +This case is designed for NIC(82599). Default use 64 queues for test Launch the app ``testpmd`` with the following arguments:: @@ -2230,7 +2230,7 @@ Launch the app ``testpmd`` with the following arguments:: testpmd>vlan set filter off 1 Create the 5-tuple Filters with different queues (32,63) on port 0 for -niantic:: +82599:: testpmd> set stat_qmap rx 0 32 1 testpmd> flow create 0 ingress pattern eth / ipv4 dst is 2.2.2.5 src is 2.2.2.4 / tcp dst is 1 src is 1 / end actions queue index 32 / end diff --git a/test_plans/fortville_rss_input_test_plan.rst b/test_plans/i40e_rss_input_test_plan.rst similarity index 99% rename from test_plans/fortville_rss_input_test_plan.rst rename to test_plans/i40e_rss_input_test_plan.rst index b8e9ea2f..badc98ad 100644 --- a/test_plans/fortville_rss_input_test_plan.rst +++ b/test_plans/i40e_rss_input_test_plan.rst @@ -31,9 +31,9 @@ OF THE POSSIBILITY OF SUCH DAMAGE. -==================================================== -Fortville Configuration of RSS in RTE Flow Tests -==================================================== +================================================================= +Intel® Ethernet 700 Series Configuration of RSS in RTE Flow Tests +================================================================= Description =========== diff --git a/test_plans/iavf_fdir_test_plan.rst b/test_plans/iavf_fdir_test_plan.rst index d371ca71..f5940e1c 100644 --- a/test_plans/iavf_fdir_test_plan.rst +++ b/test_plans/iavf_fdir_test_plan.rst @@ -31,7 +31,7 @@ OF THE POSSIBILITY OF SUCH DAMAGE. 
====================================== -CVL:advanced iavf with FDIR capability +ICE:advanced iavf with FDIR capability ====================================== Support flow director to steering packets to queue/queue group in iavf @@ -206,7 +206,7 @@ Prerequisites ============= 1. Hardware: - columbiaville_25g/columbiaville_100g + Intel® Ethernet Network Adapter E810-XXVDA4/Intel® Ethernet Network Adapter E810-CQDA2 2. Software: DPDK: http://dpdk.org/git/dpdk diff --git a/test_plans/iavf_flexible_descriptor_test_plan.rst b/test_plans/iavf_flexible_descriptor_test_plan.rst index d03fbe41..f03f755a 100644 --- a/test_plans/iavf_flexible_descriptor_test_plan.rst +++ b/test_plans/iavf_flexible_descriptor_test_plan.rst @@ -31,7 +31,7 @@ OF THE POSSIBILITY OF SUCH DAMAGE. ============================ -CVL IAVF Flexible Descriptor +ICE IAVF Flexible Descriptor ============================ @@ -39,15 +39,16 @@ Description =========== To carry more metadata to descriptor and fill them to mbuf to save CPU cost on packet parsing. -CVL flexible descriptor capability can be used. iavf driver could negotiate descriptor format with PF driver by virtchnl. -It is implemented in DPDK20.11, and requires ice base driver >= 1.3.0. +The Intel® Ethernet 800 Series flexible descriptor capability can be used. The iavf driver can +negotiate the descriptor format with the PF driver by virtchnl. It is implemented in DPDK 20.11, and +requires ice base driver >= 1.3.0. Prerequisites ============= 1. NIC requires - - Intel E810 series ethernet cards: columbiaville_25g, columbiaville_100g, etc. + - Intel® Ethernet 800 Series ethernet cards: E810-XXVDA4/E810-CQ, etc. 2.
Toplogy diff --git a/test_plans/iavf_package_driver_error_handle_test_plan.rst b/test_plans/iavf_package_driver_error_handle_test_plan.rst index 6c34b0a8..7d26c368 100644 --- a/test_plans/iavf_package_driver_error_handle_test_plan.rst +++ b/test_plans/iavf_package_driver_error_handle_test_plan.rst @@ -36,15 +36,19 @@ IAVF Flexible Package and driver error handle check Description =========== -1. The feature is to check the CVL 100G and 25G NIC, when the using old version driver and latest DDP package, will cause to the RSS rule create fail, because the old version driver vers does not support RSS feature in iavf. -2. The feature is to check the CVL 100G and 25G NIC, when the using old version ddp package or invalide ddp package and latest version driver, wll cause to the VF start fail, because the old version package or invalid package does not support VF create for the IAVF +1. The feature is to check that on Intel® Ethernet 800 Series 100G and 25G NICs, + using an old version driver with the latest DDP package will cause the + RSS rule creation to fail, because the old version driver does not + support the RSS feature in iavf. +2. The feature is to check that on Intel® Ethernet 800 Series 100G and 25G NICs, + using an old version or invalid DDP package with the latest version + driver will cause the VF start to fail, because the old version or + invalid package does not support VF creation for the IAVF Prerequisites ============= 1. Hardware: - columbiaville_25g/columbiaville_100g + Intel® Ethernet 800 Series: E810-XXVDA4/E810-CQ 2.
Software: dpdk: http://dpdk.org/git/dpdk diff --git a/test_plans/iavf_test_plan.rst b/test_plans/iavf_test_plan.rst index ddd7fbb9..1b2c6e6d 100644 --- a/test_plans/iavf_test_plan.rst +++ b/test_plans/iavf_test_plan.rst @@ -37,8 +37,8 @@ DPDK IAVF API Tests Intel Adaptive Virtual Function(IAVF) Hardwares -======================= -I40E driver NIC (Fortville XXV710, Fortville Spirit, Fortville Eagle) +================================================================ +I40E driver NIC (Intel® Ethernet 700 Series XXV710, XL710, X710) Prerequisites diff --git a/test_plans/cvl_1pps_signal_test_plan.rst b/test_plans/ice_1pps_signal_test_plan.rst similarity index 90% rename from test_plans/cvl_1pps_signal_test_plan.rst rename to test_plans/ice_1pps_signal_test_plan.rst index a6dfd49b..d3b17283 100644 --- a/test_plans/cvl_1pps_signal_test_plan.rst +++ b/test_plans/ice_1pps_signal_test_plan.rst @@ -31,16 +31,16 @@ OF THE POSSIBILITY OF SUCH DAMAGE. ========================= -CVL 1PPS Signal Test Plan +ICE 1PPS Signal Test Plan ========================= Description =========== -The E810 supports a total of four single-ended GPIO signals(SPD[20:23])plus one different GPIO signal (CLK_OUT_P/N), -which is configured by default 1PPS(out). The SPD[20:23] is mapping to pin_id[0:3]. -This test plan is designed to check the value of related registers, which make up the 1PPS signal. -The registers address depends on some hardware config. -The test cases only give the example of Columbiaville_25g and Columbiaville_100g. +The Intel® Ethernet 800 Series supports a total of four single-ended GPIO signals (SPD[20:23]) plus +one differential GPIO signal (CLK_OUT_P/N), which is configured by default as 1PPS (out). The SPD[20:23] +maps to pin_id[0:3]. This test plan is designed to check the values of the related registers, +which make up the 1PPS signal. The register addresses depend on the hardware configuration. +The test cases only give the examples of E810-XXVDA4 and E810-CQ.
Prerequisites @@ -52,7 +52,7 @@ DUT port 0 <----> Tester port 0 Hardware -------- -Supported NICs: columbiaville_25g/columbiaville_100g +Supported NICs: Intel® Ethernet 800 Series E810-XXVDA4/E810-CQ Software -------- diff --git a/test_plans/cvl_advanced_iavf_rss_gtpogre_test_plan.rst b/test_plans/ice_advanced_iavf_rss_gtpogre_test_plan.rst similarity index 99% rename from test_plans/cvl_advanced_iavf_rss_gtpogre_test_plan.rst rename to test_plans/ice_advanced_iavf_rss_gtpogre_test_plan.rst index 24e33b96..9f2b5094 100644 --- a/test_plans/cvl_advanced_iavf_rss_gtpogre_test_plan.rst +++ b/test_plans/ice_advanced_iavf_rss_gtpogre_test_plan.rst @@ -31,7 +31,7 @@ OF THE POSSIBILITY OF SUCH DAMAGE. =============================== -CVL IAVF Support GTPoGRE in RSS +ICE IAVF Support GTPoGRE in RSS =============================== Description @@ -47,7 +47,7 @@ As previous(dpdk-21.05) DDP limitation, the outer IP layer of an GTP over GRE pa while customers need the second IP layer. A new DDP package is required in dpdk-21.08, the new DDP's parser will be able to generate 3 layer's IP protocol header, so it will not allow a GTP over GRE packet to share the same profile with a normal GTP packet. -And DPDK need to support both RSS and FDIR in CVL IAVF. +And DPDK needs to support both RSS and FDIR in Intel® Ethernet 800 Series IAVF. This test plan is designed to check the RSS of GTPoGRE. Supported input set: inner most l3/l4 src/dst @@ -57,7 +57,8 @@ Supported function: toeplitz, symmetric Prerequisites ============= 1. Hardware: - columbiaville_25g/columbiaville_100g + Intel® Ethernet 800 Series E810-XXVDA4/E810-CQ 2.
Software: dpdk: http://dpdk.org/git/dpdk diff --git a/test_plans/cvl_advanced_iavf_rss_gtpu_test_plan.rst b/test_plans/ice_advanced_iavf_rss_gtpu_test_plan.rst similarity index 99% rename from test_plans/cvl_advanced_iavf_rss_gtpu_test_plan.rst rename to test_plans/ice_advanced_iavf_rss_gtpu_test_plan.rst index f5710532..61f23b8d 100644 --- a/test_plans/cvl_advanced_iavf_rss_gtpu_test_plan.rst +++ b/test_plans/ice_advanced_iavf_rss_gtpu_test_plan.rst @@ -31,13 +31,13 @@ OF THE POSSIBILITY OF SUCH DAMAGE. =============================== -CVL IAVF: Advanced RSS For GTPU +ICE IAVF: Advanced RSS For GTPU =============================== Description =========== -Enable RSS in CVL IAVF for GTP-U Up/Down Link sperately. +Enable RSS in Intel® Ethernet 800 Series IAVF for GTP-U Up/Down Link separately. IAVF RSS hash algorithm is based on 5 Tuple (Src IP Address/Dst IP Address/Src Port/Dst Port/l4 Protocol) using the DPDK RTE_FLOW rules for GTP-U packets. It can support ipv4+ipv6 combination of GTP-U packet. @@ -186,7 +186,7 @@ Prerequisites 1. Hardware: - - Intel E810 series ethernet cards: columbiaville_25g/columbiaville_100g/ + - Intel® Ethernet 800 Series: E810-XXVDA4/E810-CQ 2. Software: @@ -195,7 +195,8 @@ Prerequisites .. note:: - This rss feature designed for CVL NIC 25G and 100G, so below cases only support CVL NIC. + This RSS feature is designed for Intel® Ethernet 800 Series 25G and 100G NICs, + so the below cases only support Intel® Ethernet 800 Series NICs. 3.
create a VF from a PF in DUT, set mac address for thi VF:: diff --git a/test_plans/cvl_advanced_iavf_rss_pppol2tpoudp_test_plan.rst b/test_plans/ice_advanced_iavf_rss_pppol2tpoudp_test_plan.rst similarity index 99% rename from test_plans/cvl_advanced_iavf_rss_pppol2tpoudp_test_plan.rst rename to test_plans/ice_advanced_iavf_rss_pppol2tpoudp_test_plan.rst index 11738250..4fa6acd2 100644 --- a/test_plans/cvl_advanced_iavf_rss_pppol2tpoudp_test_plan.rst +++ b/test_plans/ice_advanced_iavf_rss_pppol2tpoudp_test_plan.rst @@ -31,7 +31,7 @@ OF THE POSSIBILITY OF SUCH DAMAGE. ========================================= -CVL IAVF: Advanced RSS For PPPoL2TPv2oUDP +ICE IAVF: Advanced RSS For PPPoL2TPv2oUDP ========================================= Description @@ -45,7 +45,7 @@ Prerequisites 1. Hardware: - - Intel E810 series ethernet cards: columbiaville_25g/columbiaville_100g/ + - Intel® Ethernet 800 Series: E810-XXVDA4/E810-CQ 2. Software: @@ -54,7 +54,8 @@ Prerequisites .. note:: - This rss feature designed for CVL NIC 25G and 100G, so below cases only support CVL NIC. + This RSS feature is designed for Intel® Ethernet 800 Series 25G and 100G NICs, + so the below cases only support Intel® Ethernet 800 Series NICs. 3. load PPPoL2TPv2oUDP package diff --git a/test_plans/cvl_advanced_iavf_rss_test_plan.rst b/test_plans/ice_advanced_iavf_rss_test_plan.rst old mode 100755 new mode 100644 similarity index 99% rename from test_plans/cvl_advanced_iavf_rss_test_plan.rst rename to test_plans/ice_advanced_iavf_rss_test_plan.rst index f991aaf7..fa02ddd6 --- a/test_plans/cvl_advanced_iavf_rss_test_plan.rst +++ b/test_plans/ice_advanced_iavf_rss_test_plan.rst @@ -31,15 +31,15 @@ OF THE POSSIBILITY OF SUCH DAMAGE.
============================== -CVL: IAVF Advanced RSS FOR CVL +ICE: IAVF Advanced RSS FOR ICE ============================== Description =========== -IAVF Advanced RSS support columbiaville nic with ice , throught create rule include related pattern and input-set -to hash IP and ports domain, diversion the packets to the difference queues in VF. +IAVF Advanced RSS supports Intel® Ethernet 800 Series NICs with the ice driver; through creating rules which include the related +pattern and input set to hash IP and port domains, the packets are diverted to the different queues in the VF. * inner header hash for tunnel packets, including comms package. * symmetric hash by rte_flow RSS action. @@ -47,15 +47,15 @@ to hash IP and ports domain, diversion the packets to the difference queues in V * For PFCP protocal, the destination port value of the outer UDP header is equal to 8805(0x2265). PFCP Node headers shall be identified when the Version field is equal to 001 and the S field is equal 0. PFCP Session headers shall be identified when the Version field is equal to 001 and the S field is equal 1. - CVL only support RSS hash for PFCP Session SEID value. + Intel® Ethernet 800 Series only supports RSS hash for PFCP Session SEID value. * For L2TPv3 protocal, the IP proto id is equal to 115(0x73). - CVL only support RSS hash for L2TPv3 Session id value. + Intel® Ethernet 800 Series only supports RSS hash for L2TPv3 Session id value. * For ESP protocal, the IP proto id is equal to 50(0x32). - CVL only support RSS hash for ESP SPI value. + Intel® Ethernet 800 Series only supports RSS hash for ESP SPI value. * For AH protocal, the IP proto id is equal to 51(0x33). - CVL only support RSS hash for AH SPI value. + Intel® Ethernet 800 Series only supports RSS hash for AH SPI value. * For NAT_T-ESP protocal, the destination port value of the outer UDP header is equal to 4500(0x1194). - CVL only support RSS hash for NAT_T-ESP SPI value. + Intel® Ethernet 800 Series only supports RSS hash for NAT_T-ESP SPI value.
Pattern and input set --------------------- @@ -322,7 +322,7 @@ Prerequisites 1. Hardware: - - Intel E810 series ethernet cards: columbiaville_25g/columbiaville_100g/ + - Intel® Ethernet 800 Series: E810-XXVDA4/E810-CQ 2. Software: @@ -331,7 +331,8 @@ Prerequisites .. note:: - This rss feature designed for CVL NIC 25G and 100g, so below the case only support CVL nic. + This RSS feature is designed for Intel® Ethernet 800 Series 25G and 100G NICs, + so the below cases only support Intel® Ethernet 800 Series NICs. 3. create a VF from a PF in DUT, set mac address for thi VF:: diff --git a/test_plans/cvl_advanced_iavf_rss_vlan_esp_ah_l2tp_pfcp_test_plan.rst b/test_plans/ice_advanced_iavf_rss_vlan_esp_ah_l2tp_pfcp_test_plan.rst similarity index 99% rename from test_plans/cvl_advanced_iavf_rss_vlan_esp_ah_l2tp_pfcp_test_plan.rst rename to test_plans/ice_advanced_iavf_rss_vlan_esp_ah_l2tp_pfcp_test_plan.rst index b9229f19..a81329fa 100644 --- a/test_plans/cvl_advanced_iavf_rss_vlan_esp_ah_l2tp_pfcp_test_plan.rst +++ b/test_plans/ice_advanced_iavf_rss_vlan_esp_ah_l2tp_pfcp_test_plan.rst @@ -31,7 +31,7 @@ OF THE POSSIBILITY OF SUCH DAMAGE. ================================================ -CVL IAVF: Advanced RSS FOR VLAN/ESP/AH/L2TP/PFCP +ICE IAVF: Advanced RSS FOR VLAN/ESP/AH/L2TP/PFCP ================================================ Description @@ -90,7 +90,7 @@ Prerequisites 1. Hardware: - - Intel E810 series ethernet cards: columbiaville_25g/columbiaville_100g + - Intel® Ethernet 800 Series: E810-XXVDA4/E810-CQ 2. Software: diff --git a/test_plans/cvl_advanced_rss_gtpogre_test_plan.rst b/test_plans/ice_advanced_rss_gtpogre_test_plan.rst similarity index 99% rename from test_plans/cvl_advanced_rss_gtpogre_test_plan.rst rename to test_plans/ice_advanced_rss_gtpogre_test_plan.rst index 5c3e7c4d..59b49d19 100644 --- a/test_plans/cvl_advanced_rss_gtpogre_test_plan.rst +++ b/test_plans/ice_advanced_rss_gtpogre_test_plan.rst @@ -31,7 +31,7 @@ OF THE POSSIBILITY OF SUCH DAMAGE.
============================= -CVL: Advanced RSS FOR GTPoGRE +ICE: Advanced RSS FOR GTPoGRE ============================= Description @@ -176,7 +176,7 @@ Prerequisites 1. Hardware: - - Intel E810 series ethernet cards: columbiaville_25g/columbiaville_100g/ + - Intel® Ethernet 800 Series: E810-XXVDA4/E810-CQ 2. Software: @@ -185,12 +185,13 @@ Prerequisites .. note:: - This rss feature designed for CVL NIC 25G and 100g, so below the case only support CVL nic. + This RSS feature is designed for Intel® Ethernet 800 Series 25G and 100G NICs, + so the below cases only support Intel® Ethernet 800 Series NICs. 3. Copy gtpogre pkg to /lib/firmware/updates/intel/ice/ddp/ice.pkg Then reload ice driver -4. bind the CVL port to dpdk driver in DUT:: +4. bind the Intel® Ethernet 800 Series port to dpdk driver in DUT:: modprobe vfio-pci usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:3b:00.0 diff --git a/test_plans/cvl_advanced_rss_gtpu_test_plan.rst b/test_plans/ice_advanced_rss_gtpu_test_plan.rst similarity index 99% rename from test_plans/cvl_advanced_rss_gtpu_test_plan.rst rename to test_plans/ice_advanced_rss_gtpu_test_plan.rst index 96dad583..3be05f4c 100644 --- a/test_plans/cvl_advanced_rss_gtpu_test_plan.rst +++ b/test_plans/ice_advanced_rss_gtpu_test_plan.rst @@ -31,7 +31,7 @@ OF THE POSSIBILITY OF SUCH DAMAGE. ========================== -CVL: Advanced RSS FOR GTPU +ICE: Advanced RSS FOR GTPU ========================== Description @@ -171,7 +171,7 @@ Prerequisites 1. Hardware: - - Intel E810 series ethernet cards: columbiaville_25g/columbiaville_100g/ + - Intel® Ethernet 800 Series: E810-XXVDA4/E810-CQ 2. Software: @@ -180,9 +180,10 @@ Prerequisites .. note:: - This rss feature designed for CVL NIC 25G and 100g, so below the case only support CVL nic. + This RSS feature is designed for Intel® Ethernet 800 Series 25G and 100G NICs, + so the below cases only support Intel® Ethernet 800 Series NICs. -3. bind the CVL port to dpdk driver in DUT:: +3.
bind the Intel® Ethernet 800 Series port to dpdk driver in DUT:: modprobe vfio-pci usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:3b:00.0 diff --git a/test_plans/cvl_advanced_rss_pppoe_test_plan.rst b/test_plans/ice_advanced_rss_pppoe_test_plan.rst similarity index 99% rename from test_plans/cvl_advanced_rss_pppoe_test_plan.rst rename to test_plans/ice_advanced_rss_pppoe_test_plan.rst index c3f6ccad..05dd82b6 100644 --- a/test_plans/cvl_advanced_rss_pppoe_test_plan.rst +++ b/test_plans/ice_advanced_rss_pppoe_test_plan.rst @@ -31,7 +31,7 @@ OF THE POSSIBILITY OF SUCH DAMAGE. =========================== -CVL: Advanced RSS FOR PPPOE +ICE: Advanced RSS FOR PPPOE =========================== Description @@ -99,7 +99,7 @@ Prerequisites 1. Hardware: - - Intel E810 series ethernet cards: columbiaville_25g/columbiaville_100g + - Intel® Ethernet 800 Series: E810-XXVDA4/E810-CQ 2. Software: diff --git a/test_plans/cvl_advanced_rss_test_plan.rst b/test_plans/ice_advanced_rss_test_plan.rst similarity index 99% rename from test_plans/cvl_advanced_rss_test_plan.rst rename to test_plans/ice_advanced_rss_test_plan.rst index 4e119422..84495696 100644 --- a/test_plans/cvl_advanced_rss_test_plan.rst +++ b/test_plans/ice_advanced_rss_test_plan.rst @@ -31,14 +31,14 @@ OF THE POSSIBILITY OF SUCH DAMAGE. ========================= -CVL: Advanced RSS FOR CVL +ICE: Advanced RSS FOR ICE ========================= Description =========== -Advanced RSS only support columbiaville nic with ice , through creating rules which include related pattern and input-set -to hash IP and ports domain, diverting the packets to different queues. +Advanced RSS only supports Intel® Ethernet 800 Series NICs with the ice driver, through creating rules which include +the related pattern and input set to hash IP and port domains, diverting the packets to different queues. * inner header hash for tunnel packets, including comms package. * symmetric hash by rte_flow RSS func. @@ -290,7 +290,7 @@ Prerequisites 1.
Hardware: - - Intel E810 series ethernet cards: columbiaville_25g/columbiaville_100g/ + - Intel® Ethernet 800 Series: E810-XXVDA4/E810-CQ 2. Software: @@ -299,9 +299,10 @@ Prerequisites .. note:: - This rss feature designed for CVL NIC 25G and 100g, so below the case only support CVL nic. + This RSS feature is designed for Intel® Ethernet 800 Series 25G and 100G NICs, + so the below cases only support Intel® Ethernet 800 Series NICs. -3. bind the CVL port to dpdk driver in DUT:: +3. bind the Intel® Ethernet 800 Series port to dpdk driver in DUT:: modprobe vfio-pci usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:18:00.0 diff --git a/test_plans/cvl_advanced_rss_vlan_esp_ah_l2tp_pfcp_test_plan.rst b/test_plans/ice_advanced_rss_vlan_esp_ah_l2tp_pfcp_test_plan.rst similarity index 99% rename from test_plans/cvl_advanced_rss_vlan_esp_ah_l2tp_pfcp_test_plan.rst rename to test_plans/ice_advanced_rss_vlan_esp_ah_l2tp_pfcp_test_plan.rst index ca22efda..092d8fe9 100644 --- a/test_plans/cvl_advanced_rss_vlan_esp_ah_l2tp_pfcp_test_plan.rst +++ b/test_plans/ice_advanced_rss_vlan_esp_ah_l2tp_pfcp_test_plan.rst @@ -31,7 +31,7 @@ OF THE POSSIBILITY OF SUCH DAMAGE. =========================================== -CVL: Advanced RSS FOR VLAN/ESP/AH/L2TP/PFCP +ICE: Advanced RSS FOR VLAN/ESP/AH/L2TP/PFCP =========================================== Description @@ -90,7 +90,7 @@ Prerequisites 1. Hardware: - - Intel E810 series ethernet cards: columbiaville_25g/columbiaville_100g + - Intel® Ethernet 800 Series: E810-XXVDA4/E810-CQ 2. Software: diff --git a/test_plans/cvl_dcf_acl_filter_test_plan.rst b/test_plans/ice_dcf_acl_filter_test_plan.rst similarity index 99% rename from test_plans/cvl_dcf_acl_filter_test_plan.rst rename to test_plans/ice_dcf_acl_filter_test_plan.rst index 27a0e332..1f92d659 100644 --- a/test_plans/cvl_dcf_acl_filter_test_plan.rst +++ b/test_plans/ice_dcf_acl_filter_test_plan.rst @@ -31,7 +31,7 @@ OF THE POSSIBILITY OF SUCH DAMAGE.
================== -CVL DCF ACL filter +ICE DCF ACL filter ================== Description @@ -51,7 +51,7 @@ Prerequisites ============= 1. Hardware: - columbiaville_25g/columbiaville_100g + Intel® Ethernet 800 Series: E810-XXVDA4/E810-CQ 2. Software: dpdk: http://dpdk.org/git/dpdk diff --git a/test_plans/cvl_dcf_date_path_test_plan.rst b/test_plans/ice_dcf_date_path_test_plan.rst old mode 100755 new mode 100644 similarity index 100% rename from test_plans/cvl_dcf_date_path_test_plan.rst rename to test_plans/ice_dcf_date_path_test_plan.rst diff --git a/test_plans/cvl_dcf_flow_priority_test_plan.rst b/test_plans/ice_dcf_flow_priority_test_plan.rst old mode 100755 new mode 100644 similarity index 99% rename from test_plans/cvl_dcf_flow_priority_test_plan.rst rename to test_plans/ice_dcf_flow_priority_test_plan.rst index d4623194..522243b3 --- a/test_plans/cvl_dcf_flow_priority_test_plan.rst +++ b/test_plans/ice_dcf_flow_priority_test_plan.rst @@ -31,7 +31,7 @@ OF THE POSSIBILITY OF SUCH DAMAGE. ================================ -CVL Support Flow Priority in DCF +ICE Support Flow Priority in DCF ================================ Description @@ -47,7 +47,7 @@ Description Prerequisites ============= 1. Hardware: - columbiaville_25g/columbiaville_100g + Intel® Ethernet 800 Series: E810-XXVDA4/E810-CQ 2. Software: dpdk: http://dpdk.org/git/dpdk diff --git a/test_plans/cvl_dcf_qos_test_plan.rst b/test_plans/ice_dcf_qos_test_plan.rst similarity index 99% rename from test_plans/cvl_dcf_qos_test_plan.rst rename to test_plans/ice_dcf_qos_test_plan.rst index 1ceaed0c..4498c717 100644 --- a/test_plans/cvl_dcf_qos_test_plan.rst +++ b/test_plans/ice_dcf_qos_test_plan.rst @@ -31,7 +31,7 @@ OF THE POSSIBILITY OF SUCH DAMAGE. =================================== -CVL configure QoS for vf/vsi in DCF +ICE configure QoS for vf/vsi in DCF =================================== Description @@ -81,7 +81,8 @@ Prerequisites ============= 1. 
Hardware: - 1 port from columbiaville_100g(NIC-1), 2 ports from columbiaville_25g(NIC-2); + 1 port from Intel® Ethernet Network Adapter E810-CQDA2(NIC-1), + 2 ports from Intel® Ethernet Network Adapter E810-XXVDA4(NIC-2); one 100G cable, one 10G cable; The connection is as below table:: diff --git a/test_plans/cvl_dcf_switch_filter_gtpu_test_plan.rst b/test_plans/ice_dcf_switch_filter_gtpu_test_plan.rst similarity index 99% rename from test_plans/cvl_dcf_switch_filter_gtpu_test_plan.rst rename to test_plans/ice_dcf_switch_filter_gtpu_test_plan.rst index 5004b1e6..5ba28ec2 100644 --- a/test_plans/cvl_dcf_switch_filter_gtpu_test_plan.rst +++ b/test_plans/ice_dcf_switch_filter_gtpu_test_plan.rst @@ -31,13 +31,13 @@ OF THE POSSIBILITY OF SUCH DAMAGE. ================================ -CVL DCF Switch Filter GTPU Tests +ICE DCF Switch Filter GTPU Tests ================================ Description =========== -This document provides the plan for testing DCF switch filter gtpu of CVL, including: +This document provides the plan for testing DCF switch filter gtpu of Intel® Ethernet 800 Series, including: * Enable DCF switch filter for GTPU, the Pattern and Input Set are shown in the below table @@ -118,7 +118,7 @@ Prerequisites ============= 1. Hardware: - columbiaville_25g/columbiaville_100g + Intel® Ethernet 800 Series: E810-XXVDA4/E810-CQ 2. Software:: diff --git a/test_plans/cvl_dcf_switch_filter_pppoe_test_plan.rst b/test_plans/ice_dcf_switch_filter_pppoe_test_plan.rst similarity index 99% rename from test_plans/cvl_dcf_switch_filter_pppoe_test_plan.rst rename to test_plans/ice_dcf_switch_filter_pppoe_test_plan.rst index a00687c8..323f5e3e 100644 --- a/test_plans/cvl_dcf_switch_filter_pppoe_test_plan.rst +++ b/test_plans/ice_dcf_switch_filter_pppoe_test_plan.rst @@ -31,13 +31,13 @@ OF THE POSSIBILITY OF SUCH DAMAGE.
================================= -CVL DCF Switch Filter PPPOE Tests +ICE DCF Switch Filter PPPOE Tests ================================= Description =========== -This document provides the plan for testing DCF switch filter pppoe of CVL, including: +This document provides the plan for testing DCF switch filter pppoe of Intel® Ethernet 800 Series, including: * Enable DCF switch filter for PPPOES (comm #1 package) @@ -156,7 +156,7 @@ Prerequisites ============= 1. Hardware: - columbiaville_25g/columbiaville_100g + Intel® Ethernet 800 Series: E810-XXVDA4/E810-CQ design the cases with 2 ports card. 2. Software: diff --git a/test_plans/cvl_dcf_switch_filter_test_plan.rst b/test_plans/ice_dcf_switch_filter_test_plan.rst similarity index 99% rename from test_plans/cvl_dcf_switch_filter_test_plan.rst rename to test_plans/ice_dcf_switch_filter_test_plan.rst index dda27740..08485fad 100644 --- a/test_plans/cvl_dcf_switch_filter_test_plan.rst +++ b/test_plans/ice_dcf_switch_filter_test_plan.rst @@ -31,13 +31,13 @@ OF THE POSSIBILITY OF SUCH DAMAGE. =========================== -CVL DCF Switch Filter Tests +ICE DCF Switch Filter Tests =========================== Description =========== -This document provides the plan for testing DCF switch filter of CVL, including: +This document provides the plan for testing DCF switch filter of Intel® Ethernet 800 Series, including: * Enable DCF switch filter for IPv4/IPv6 + TCP/UDP/IGMP (comm #1 package) * Enable DCF switch filter for tunnel: NVGRE (comm #1 package) @@ -186,7 +186,7 @@ Prerequisites ============= 1. Hardware: - columbiaville_25g/columbiaville_100g + Intel® Ethernet 800 Series: E810-XXVDA4/E810-CQ design the cases with 2 ports card. 2. Software: @@ -2661,7 +2661,7 @@ are dropped.
Test case: Max vfs ================== -Description: 256 VFs can be created on a CVL NIC, if 2*100G NIC, each PF +Description: 256 VFs can be created on an Intel® Ethernet 800 Series NIC, if 2*100G NIC, each PF can create 128 VFs, else if 4*25G NIC, each PF can create 64 VFs. This case is used to test when all VFs on a PF are used, switch filter rules can work. This case is designed based on 4*25G NIC. @@ -2741,7 +2741,7 @@ This case is designed based on 4*25G NIC. Test case: max field vectors ============================ -Description: 48 field vectors can be used on a CVL nic. This case is used +Description: 48 field vectors can be used on an Intel® Ethernet 800 Series NIC. This case is used to test when field vectors are run out of, then creating a rule, testpmd will not hang and provide a friendly output. diff --git a/test_plans/cvl_ecpri_test_plan.rst b/test_plans/ice_ecpri_test_plan.rst similarity index 99% rename from test_plans/cvl_ecpri_test_plan.rst rename to test_plans/ice_ecpri_test_plan.rst index 101562cd..9bf88d4d 100644 --- a/test_plans/cvl_ecpri_test_plan.rst +++ b/test_plans/ice_ecpri_test_plan.rst @@ -31,15 +31,15 @@ OF THE POSSIBILITY OF SUCH DAMAGE. ========================== -CVL support eCPRI protocol +ICE support eCPRI protocol ========================== eCPRI protocol is used for exchanging messages within ORAN 5G Front Haul. According to the ORAN FH specification the eCPRI packets are sent over Ethernet. They can be transmitted over standard Ethernet frames, or can use IP/UDP as the transport mechanism. In case of the IP/UDP transport mechanism the ORAN FH standard says that -the UDP destination port used for eCPRI protocol is not fixed, thus, the user should be able to configure the port number dynamically. -A change is required in DPDK APIs to allow it. -And CVL rss and fdir rte_flow APIs are needed to support classfication of eCPRI protocol.
+the UDP destination port used for eCPRI protocol is not fixed, thus, the user should be able to configure +the port number dynamically. A change is required in DPDK APIs to allow it. +And Intel® Ethernet 800 Series rss and fdir rte_flow APIs are needed to support classification of eCPRI protocol. Therefore, this test plan contain 3 parts: * UDP dst port dynamically config for eCPRI * rss supporting for eCPRI @@ -49,7 +49,7 @@ Therefore, this test plan contain 3 parts: Prerequisites ============= 1. Hardware: - columbiaville_25g/columbiaville_100g + Intel® Ethernet 800 Series: E810-XXVDA4/E810-CQ 2. Software: dpdk: http://dpdk.org/git/dpdk diff --git a/test_plans/cvl_fdir_test_plan.rst b/test_plans/ice_fdir_test_plan.rst similarity index 99% rename from test_plans/cvl_fdir_test_plan.rst rename to test_plans/ice_fdir_test_plan.rst index 55625bde..906b3527 100644 --- a/test_plans/cvl_fdir_test_plan.rst +++ b/test_plans/ice_fdir_test_plan.rst @@ -31,7 +31,7 @@ OF THE POSSIBILITY OF SUCH DAMAGE. ================================ -CVL:Classification:Flow Director +ICE:Classification:Flow Director ================================ Enable fdir filter for IPv4/IPv6 + TCP/UDP/SCTP (OS default package) @@ -129,7 +129,7 @@ Prerequisites ============= 1. Hardware: - columbiaville_25g/columbiaville_100g + Intel® Ethernet 800 Series: E810-XXVDA4/E810-CQ design the cases with 2 ports card. 2. Software: diff --git a/test_plans/cvl_flow_priority_test_plan.rst b/test_plans/ice_flow_priority_test_plan.rst similarity index 98% rename from test_plans/cvl_flow_priority_test_plan.rst rename to test_plans/ice_flow_priority_test_plan.rst index a04e12dc..4bce5105 100644 --- a/test_plans/cvl_flow_priority_test_plan.rst +++ b/test_plans/ice_flow_priority_test_plan.rst @@ -31,16 +31,17 @@ OF THE POSSIBILITY OF SUCH DAMAGE.
=============================== -CVL Support Flow Priority in PF +ICE Support Flow Priority in PF =============================== Description =========== -In CVL PF rte_flow distribution mode(non-pipeline mode), a flow with priority = 1 will be programmed into switch filter, -a flow with priority = 0 will be programmed into switch first then fdir. -Currently only support priority 0 and 1. 1 means low priority and 0 means high priority. -When looking up rule table, matched pkt will hit the high priority rule firstly, -it will hit the low priority rule only when there is no high priority rule exist. +In Intel® Ethernet 800 Series PF rte_flow distribution mode (non-pipeline mode), a flow +with priority = 1 will be programmed into the switch filter, and a flow with priority = 0 +will be programmed into switch first, then fdir. Currently only priority 0 and 1 are +supported: 1 means low priority and 0 means high priority. When looking up the rule table, +a matched packet will hit the high priority rule first, and will hit the low priority rule +only when no high priority rule exists. Prerequisites @@ -53,7 +54,7 @@ Topology Hardware -------- -Supportted NICs: columbiaville_25g/columbiaville_100g +Supported NICs: Intel® Ethernet Network Adapter E810-XXVDA4/Intel® Ethernet Network Adapter E810-CQDA2 Software -------- @@ -288,7 +289,7 @@ Support Pattern and Input Set ..note:: - the basic switch function of supported pattern is covered by cvl_switch_filter_test_plan.rst and cvl_switch_filter_pppoe_test_plan.rst. + the basic switch function of supported pattern is covered by ice_switch_filter_test_plan.rst and ice_switch_filter_pppoe_test_plan.rst. this test plan is designed to check the flow priority in switch, so we only select some patterns not all matrix in test plan.
diff --git a/test_plans/iavf_fdir_gtpogre_test_plan.rst b/test_plans/ice_iavf_fdir_gtpogre_test_plan.rst similarity index 99% rename from test_plans/iavf_fdir_gtpogre_test_plan.rst rename to test_plans/ice_iavf_fdir_gtpogre_test_plan.rst index 8e9e187a..4fb5bc34 100644 --- a/test_plans/iavf_fdir_gtpogre_test_plan.rst +++ b/test_plans/ice_iavf_fdir_gtpogre_test_plan.rst @@ -31,7 +31,7 @@ OF THE POSSIBILITY OF SUCH DAMAGE. ================================ -CVL IAVF Support GTPoGRE in FDIR +ICE IAVF Support GTPoGRE in FDIR ================================ Description @@ -47,7 +47,7 @@ As previous(dpdk-21.05) DDP limitation, the outer IP layer of an GTP over GRE pa while customers need the second IP layer. A new DDP package is required in dpdk-21.08, the new DDP's parser will be able to generate 3 layer's IP protocol header, so it will not allow a GTP over GRE packet to share the same profile with a normal GTP packet. -And DPDK need to support both RSS and FDIR in CVL IAVF. +And DPDK needs to support both RSS and FDIR in Intel® Ethernet 800 Series IAVF. This test plan is designed to check the FDIR of GTPoGRE. Supported input set: inner most l3/l4 src/dst, outer l3 src/dst, gtpu teid, qfi @@ -57,7 +57,7 @@ Supported action: queue index, rss queues, passthru, drop, mark rss Prerequisites ============= 1. Hardware: - columbiaville_25g/columbiaville_100g + Intel® Ethernet Network Adapter E810-XXVDA4/Intel® Ethernet Network Adapter E810-CQDA2 2. Software: dpdk: http://dpdk.org/git/dpdk diff --git a/test_plans/cvl_iavf_fdir_pppol2tpoudp_test_plan.rst b/test_plans/ice_iavf_fdir_pppol2tpoudp_test_plan.rst similarity index 99% rename from test_plans/cvl_iavf_fdir_pppol2tpoudp_test_plan.rst rename to test_plans/ice_iavf_fdir_pppol2tpoudp_test_plan.rst index da195ad2..fa3ac934 100644 --- a/test_plans/cvl_iavf_fdir_pppol2tpoudp_test_plan.rst +++ b/test_plans/ice_iavf_fdir_pppol2tpoudp_test_plan.rst @@ -31,7 +31,7 @@ OF THE POSSIBILITY OF SUCH DAMAGE.
================================= -CVL IAVF: FDIR For PPPoL2TPv2oUDP +ICE IAVF: FDIR For PPPoL2TPv2oUDP ================================= Description @@ -50,7 +50,7 @@ Topology Hardware -------- -Supportted NICs: columbiaville_25g/columbiaville_100g +Supported NICs: Intel® Ethernet Network Adapter E810-XXVDA4/Intel® Ethernet Network Adapter E810-CQDA2 Software -------- diff --git a/test_plans/cvl_iavf_ip_fragment_rte_flow_test_plan.rst b/test_plans/ice_iavf_ip_fragment_rte_flow_test_plan.rst similarity index 99% rename from test_plans/cvl_iavf_ip_fragment_rte_flow_test_plan.rst rename to test_plans/ice_iavf_ip_fragment_rte_flow_test_plan.rst index 35e4c03c..d7e4e1b1 100644 --- a/test_plans/cvl_iavf_ip_fragment_rte_flow_test_plan.rst +++ b/test_plans/ice_iavf_ip_fragment_rte_flow_test_plan.rst @@ -31,7 +31,7 @@ OF THE POSSIBILITY OF SUCH DAMAGE. ================================== -CVL IAVF IP FRAGMENT RTE FLOW TEST +ICE IAVF IP FRAGMENT RTE FLOW TEST ================================== Description @@ -63,7 +63,7 @@ Prerequisites ============= 1. Hardware: - columbiaville_25g/columbiaville_100g + Intel® Ethernet 800 Series: E810-XXVDA4/E810-CQ 2. Software: dpdk: http://dpdk.org/git/dpdk diff --git a/test_plans/cvl_iavf_rss_configure_test_plan.rst b/test_plans/ice_iavf_rss_configure_test_plan.rst old mode 100755 new mode 100644 similarity index 99% rename from test_plans/cvl_iavf_rss_configure_test_plan.rst rename to test_plans/ice_iavf_rss_configure_test_plan.rst index ecce0fe8..2473f28b --- a/test_plans/cvl_iavf_rss_configure_test_plan.rst +++ b/test_plans/ice_iavf_rss_configure_test_plan.rst @@ -53,7 +53,7 @@ Prerequisites 1. NIC requires: - - Intel E810 series ethernet cards: columbiaville_25g, columbiaville_100g, etc. + - Intel® Ethernet 800 Series: E810-XXVDA4/E810-CQ, etc. 2. insmod ice.ko, and bind PF to ice.
diff --git a/test_plans/cvl_ip_fragment_rte_flow_test_plan.rst b/test_plans/ice_ip_fragment_rte_flow_test_plan.rst similarity index 99% rename from test_plans/cvl_ip_fragment_rte_flow_test_plan.rst rename to test_plans/ice_ip_fragment_rte_flow_test_plan.rst index 909692cc..18e0ad28 100644 --- a/test_plans/cvl_ip_fragment_rte_flow_test_plan.rst +++ b/test_plans/ice_ip_fragment_rte_flow_test_plan.rst @@ -31,7 +31,7 @@ OF THE POSSIBILITY OF SUCH DAMAGE. ============================= -CVL IP FRAGMENT RTE FLOW TEST +ICE IP FRAGMENT RTE FLOW TEST ============================= Description @@ -63,7 +63,7 @@ Prerequisites ============= 1. Hardware: - columbiaville_25g/columbiaville_100g + Intel® Ethernet 800 Series: E810-XXVDA4/E810-CQ 2. Software: dpdk: http://dpdk.org/git/dpdk diff --git a/test_plans/cvl_limit_value_test_test_plan.rst b/test_plans/ice_limit_value_test_test_plan.rst similarity index 97% rename from test_plans/cvl_limit_value_test_test_plan.rst rename to test_plans/ice_limit_value_test_test_plan.rst index 84e650d5..30c98e77 100644 --- a/test_plans/cvl_limit_value_test_test_plan.rst +++ b/test_plans/ice_limit_value_test_test_plan.rst @@ -31,7 +31,7 @@ OF THE POSSIBILITY OF SUCH DAMAGE. ======================== -CVL Limit Value Test +ICE Limit Value Test ======================== Description @@ -60,7 +60,7 @@ Prerequisites ============= 1. Hardware: - columbiaville_25g/columbiaville_100g + Intel® Ethernet 800 Series: E810-XXVDA4/E810-CQ 2. Software: DPDK: http://dpdk.org/git/dpdk @@ -419,7 +419,7 @@ Prerequisites ============= 1. Hardware: - columbiaville_25g/columbiaville_100g + Intel® Ethernet 800 Series: E810-XXVDA4/E810-CQ design the cases with 2 ports card. 2. Software: @@ -488,7 +488,7 @@ Prerequisites ============= 1. Hardware: - columbiaville_25g/columbiaville_100g + Intel® Ethernet 800 Series: E810-XXVDA4/E810-CQ design the cases with 2 ports card. 2. 
Software: @@ -552,11 +552,11 @@ Prerequisites Test case: max rule number ========================== -Description: 32k switch filter rules can be created on a CVL card, -and all PFs and VFs share the 32k rules. But the system will first create -some MAC_VLAN rules in switch table, and as the number of rules increased, -the hash conflicts in the switch filter table are increased, so we can -create a total of 32500 switch filter rules on a DCF. +Description: 32k switch filter rules can be created on an Intel® Ethernet +800 Series card, and all PFs and VFs share the 32k rules. But the system +will first create some MAC_VLAN rules in the switch table, and as the number +of rules increases, the hash conflicts in the switch filter table also +increase, so we can create a total of 32500 switch filter rules on a DCF. 1. create 32500 rules with the same pattern, but different input set:: diff --git a/test_plans/cvl_qinq_test_plan.rst b/test_plans/ice_qinq_test_plan.rst similarity index 99% rename from test_plans/cvl_qinq_test_plan.rst rename to test_plans/ice_qinq_test_plan.rst index 474d6209..2ad52f10 100644 --- a/test_plans/cvl_qinq_test_plan.rst +++ b/test_plans/ice_qinq_test_plan.rst @@ -31,9 +31,9 @@ OF THE POSSIBILITY OF SUCH DAMAGE. ========================= -CVL support QinQ protocol +ICE support QinQ protocol ========================= -DPDK support QinQ protocol in CVL as below requirements: +DPDK supports QinQ protocol in Intel® Ethernet 800 Series as below requirements: * DCF support QinQ by add steering rule and vlan strip disable. * DCF is able to set port vlan by port representor. * AVF is able to configure inner VLAN filter when port vlan is enabled base on negotiation. @@ -47,7 +47,7 @@ this test plan contain 3 parts to cover above requirements: Prerequisites ============= 1. Hardware: - columbiaville_25g/columbiaville_100g + Intel® Ethernet 800 Series: E810-XXVDA4/E810-CQ 2.
Software: dpdk: http://dpdk.org/git/dpdk @@ -1536,12 +1536,14 @@ Test case 14: AVF CRC strip and Vlan strip co-exists 196.222.232.221 > 127.0.0.1: ip-proto-0 480 ==================================== -CVL DCF QINQ Switch Filter Test Plan +ICE DCF QINQ Switch Filter Test Plan ==================================== Description =========== -CVL support l4 for QinQ switch filter in DCF driver is by dst MAC + outer VLAN id + inner VLAN id + dst IP + dst port, and port can support as eth / vlan / vlan / IP / tcp|udp. +Intel® Ethernet 800 Series supports l4 for QinQ switch filter in the DCF driver by +dst MAC + outer VLAN id + inner VLAN id + dst IP + dst port, and the pattern can be +eth / vlan / vlan / IP / tcp|udp. * Enable QINQ switch filter for IPv4/IPv6, IPv4 + TCP/UDP in non-pipeline mode. * Enable QINQ switch filter for IPv6 + TCP/UDP in pipeline mode. @@ -1550,7 +1552,7 @@ Prerequisites Hardware -------- -Supportted NICs: columbiaville_25g/columbiaville_100g +Supported NICs: Intel® Ethernet Network Adapter E810-XXVDA4/Intel® Ethernet Network Adapter E810-CQDA2 Software -------- diff --git a/test_plans/cvl_rss_configure_test_plan.rst b/test_plans/ice_rss_configure_test_plan.rst similarity index 99% rename from test_plans/cvl_rss_configure_test_plan.rst rename to test_plans/ice_rss_configure_test_plan.rst index 3e9fb96b..7f16398e 100644 --- a/test_plans/cvl_rss_configure_test_plan.rst +++ b/test_plans/ice_rss_configure_test_plan.rst @@ -31,7 +31,7 @@ OF THE POSSIBILITY OF SUCH DAMAGE. ============================= -CVL: RSS CONFIGURE AND UPDATE +ICE: RSS CONFIGURE AND UPDATE ============================= Description @@ -64,14 +64,14 @@ Prerequisites 1. Hardware: - - Intel E810 series ethernet cards: columbiaville + - Intel® Ethernet 800 Series 2. Software: - dpdk: http://dpdk.org/git/dpdk - scapy: http://www.secdev.org/projects/scapy/ -3. bind the CVL port to dpdk driver in DUT:: +3.
bind the Intel® Ethernet 800 Series port to dpdk driver in DUT:: modprobe vfio-pci usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:18:00.0 diff --git a/test_plans/cvl_switch_filter_pppoe_test_plan.rst b/test_plans/ice_switch_filter_pppoe_test_plan.rst similarity index 99% rename from test_plans/cvl_switch_filter_pppoe_test_plan.rst rename to test_plans/ice_switch_filter_pppoe_test_plan.rst index 7485c45d..3d6db445 100644 --- a/test_plans/cvl_switch_filter_pppoe_test_plan.rst +++ b/test_plans/ice_switch_filter_pppoe_test_plan.rst @@ -31,13 +31,13 @@ OF THE POSSIBILITY OF SUCH DAMAGE. ======================= -CVL Switch Filter Tests +ICE Switch Filter Tests ======================= Description =========== -This document provides the plan for testing switch filter feature of CVL, including: +This document provides the plan for testing the switch filter feature of Intel® Ethernet 800 Series, including: * Enable switch filter for PPPOE in non-pipeline/pipeline mode (comm #1 package) @@ -179,7 +179,7 @@ Prerequisites ============= 1. Hardware: - columbiaville_25g/columbiaville_100g + Intel® Ethernet 800 Series: E810-XXVDA4/E810-CQ design the cases with 2 ports card. 2. software: diff --git a/test_plans/cvl_switch_filter_test_plan.rst b/test_plans/ice_switch_filter_test_plan.rst similarity index 99% rename from test_plans/cvl_switch_filter_test_plan.rst rename to test_plans/ice_switch_filter_test_plan.rst index 6688b842..ceda7390 100644 --- a/test_plans/cvl_switch_filter_test_plan.rst +++ b/test_plans/ice_switch_filter_test_plan.rst @@ -31,13 +31,13 @@ OF THE POSSIBILITY OF SUCH DAMAGE.
======================= -CVL Switch Filter Tests +ICE Switch Filter Tests ======================= Description =========== -This document provides the plan for testing switch filter feature of CVL, including: +This document provides the plan for testing the switch filter feature of Intel® Ethernet 800 Series, including: * Enable switch filter for IPv4/IPv6 + TCP/UDP in non-pipeline/pipeline mode (comm #1 package) * Enable switch filter for tunnel : VXLAN / NVGRE in non-pipeline/pipeline mode (comm #1 package) @@ -157,7 +157,7 @@ Prerequisites ============= 1. Hardware: - columbiaville_25g/columbiaville_100g + Intel® Ethernet 800 Series: E810-XXVDA4/E810-CQ design the cases with 2 ports card. 2. software: @@ -5153,7 +5153,9 @@ flow create 0 ingress pattern eth / ipv6 / udp / gtpu / gtp_psc / ipv6 dst is CD Description =========== -CVL support l4 for QinQ switch filter in PF driver is by dst MAC + outer VLAN id + inner VLAN id + dst IP + dst port, and port can support as eth / vlan / vlan / IP / tcp|udp. +Intel® Ethernet 800 Series supports l4 for QinQ switch filter in the PF driver by +dst MAC + outer VLAN id + inner VLAN id + dst IP + dst port, and the pattern can be +eth / vlan / vlan / IP / tcp|udp. * Enable QINQ switch filter for IPv4/IPv6, IPv4 + TCP/UDP in non-pipeline mode. * Enable QINQ switch filter for IPv6 + TCP/UDP in pipeline mode. diff --git a/test_plans/cvl_vf_support_multicast_address_test_plan.rst b/test_plans/ice_vf_support_multicast_address_test_plan.rst similarity index 99% rename from test_plans/cvl_vf_support_multicast_address_test_plan.rst rename to test_plans/ice_vf_support_multicast_address_test_plan.rst index 24a3c600..71ae7f56 100644 --- a/test_plans/cvl_vf_support_multicast_address_test_plan.rst +++ b/test_plans/ice_vf_support_multicast_address_test_plan.rst @@ -31,7 +31,7 @@ OF THE POSSIBILITY OF SUCH DAMAGE.
================================= -CVL: VF support multicast address +ICE: VF support multicast address ================================= VF support adding and removing multicast address @@ -40,7 +40,7 @@ Prerequisites ============= 1. Hardware: - columbiaville_25g/columbiaville_100g + Intel® Ethernet 800 Series: E810-XXVDA4/E810-CQ 2. Software: DPDK: http://dpdk.org/git/dpdk diff --git a/test_plans/index.rst b/test_plans/index.rst index f8118d14..046efc5a 100644 --- a/test_plans/index.rst +++ b/test_plans/index.rst @@ -42,37 +42,37 @@ The following are the test plans for the DPDK DTS automated test system. blocklist_test_plan checksum_offload_test_plan coremask_test_plan - cvl_advanced_rss_test_plan - cvl_advanced_rss_gtpu_test_plan - cvl_advanced_rss_pppoe_test_plan - cvl_advanced_rss_gtpogre_test_plan - cvl_advanced_iavf_rss_test_plan - cvl_advanced_iavf_rss_gtpu_test_plan - cvl_advanced_iavf_rss_gtpogre_test_plan - cvl_advanced_iavf_rss_vlan_esp_ah_l2tp_pfcp_test_plan - cvl_advanced_iavf_rss_pppol2tpoudp_test_plan - cvl_advanced_rss_vlan_esp_ah_l2tp_pfcp_test_plan - cvl_dcf_acl_filter_test_plan - cvl_dcf_date_path_test_plan - cvl_dcf_switch_filter_test_plan - cvl_dcf_switch_filter_pppoe_test_plan - cvl_dcf_switch_filter_gtpu_test_plan - cvl_dcf_flow_priority_test_plan - cvl_flow_priority_test_plan - cvl_dcf_qos_test_plan - cvl_ecpri_test_plan - cvl_fdir_test_plan - cvl_ip_fragment_rte_flow_test_plan - cvl_iavf_ip_fragment_rte_flow_test_plan - cvl_iavf_rss_configure_test_plan - cvl_iavf_fdir_pppol2tpoudp_test_plan - cvl_limit_value_test_test_plan - cvl_qinq_test_plan - cvl_rss_configure_test_plan - cvl_switch_filter_test_plan - cvl_switch_filter_pppoe_test_plan - cvl_vf_support_multicast_address_test_plan - cvl_1pps_signal_test_plan + ice_advanced_rss_test_plan + ice_advanced_rss_gtpu_test_plan + ice_advanced_rss_pppoe_test_plan + ice_advanced_rss_gtpogre_test_plan + ice_advanced_iavf_rss_test_plan + ice_advanced_iavf_rss_gtpu_test_plan + 
ice_advanced_iavf_rss_gtpogre_test_plan + ice_advanced_iavf_rss_vlan_esp_ah_l2tp_pfcp_test_plan + ice_advanced_iavf_rss_pppol2tpoudp_test_plan + ice_advanced_rss_vlan_esp_ah_l2tp_pfcp_test_plan + ice_dcf_acl_filter_test_plan + ice_dcf_date_path_test_plan + ice_dcf_switch_filter_test_plan + ice_dcf_switch_filter_pppoe_test_plan + ice_dcf_switch_filter_gtpu_test_plan + ice_dcf_flow_priority_test_plan + ice_flow_priority_test_plan + ice_dcf_qos_test_plan + ice_ecpri_test_plan + ice_fdir_test_plan + ice_ip_fragment_rte_flow_test_plan + ice_iavf_ip_fragment_rte_flow_test_plan + ice_iavf_rss_configure_test_plan + ice_iavf_fdir_pppol2tpoudp_test_plan + ice_limit_value_test_test_plan + ice_qinq_test_plan + ice_rss_configure_test_plan + ice_switch_filter_test_plan + ice_switch_filter_pppoe_test_plan + ice_vf_support_multicast_address_test_plan + ice_1pps_signal_test_plan cloud_filter_with_l4_port_test_plan dcf_lifecycle_test_plan crypto_perf_cryptodev_perf_test_plan @@ -91,12 +91,12 @@ The following are the test plans for the DPDK DTS automated test system. firmware_version_test_plan floating_veb_test_plan flow_classify_softnic_test_plan - fortville_rss_input_test_plan + i40e_rss_input_test_plan generic_flow_api_test_plan hotplug_mp_test_plan hotplug_test_plan iavf_fdir_test_plan - iavf_fdir_gtpogre_test_plan + ice_iavf_fdir_gtpogre_test_plan iavf_flexible_descriptor_test_plan iavf_package_driver_error_handle_test_plan ieee1588_test_plan diff --git a/test_plans/inline_ipsec_test_plan.rst b/test_plans/inline_ipsec_test_plan.rst index 0b4e45a1..4ce841b3 100644 --- a/test_plans/inline_ipsec_test_plan.rst +++ b/test_plans/inline_ipsec_test_plan.rst @@ -31,7 +31,7 @@ OF THE POSSIBILITY OF SUCH DAMAGE. 
========================== -Niantic Inline IPsec Tests +82599 Inline IPsec Tests ========================== This test plan describe the method of validation inline hardware acceleration diff --git a/test_plans/ipfrag_test_plan.rst b/test_plans/ipfrag_test_plan.rst index 68cf01ca..da8d52d4 100644 --- a/test_plans/ipfrag_test_plan.rst +++ b/test_plans/ipfrag_test_plan.rst @@ -45,7 +45,7 @@ Prerequisites - For each CPU socket, each memory channel should be populated with at least 1x DIMM - Board is populated with at least 2x 1GbE or 10GbE ports. Special PCIe restrictions may be required for performance. For example, the following requirements should be - met for Intel 82599 (Niantic) NICs: + met for Intel 82599 NICs: - NICs are plugged into PCIe Gen2 or Gen3 slots - For PCIe Gen2 slots, the number of lanes should be 8x or higher diff --git a/test_plans/ipgre_test_plan.rst b/test_plans/ipgre_test_plan.rst index 2c652273..6e1234c9 100644 --- a/test_plans/ipgre_test_plan.rst +++ b/test_plans/ipgre_test_plan.rst @@ -35,13 +35,19 @@ Generic Routing Encapsulation (GRE) Tests ========================================= -Generic Routing Encapsulation (GRE) is a tunneling protocol developed by Cisco Systems that can encapsulate a wide variety of network layer protocols inside virtual point-to-point links over an Internet Protocol network. -Fortville support GRE packet detecting, checksum computing and filtering. +Generic Routing Encapsulation (GRE) is a tunneling protocol developed by +Cisco Systems that can encapsulate a wide variety of network layer +protocols inside virtual point-to-point links over an Internet Protocol +network. Intel® Ethernet 700 Series supports GRE packet detecting, checksum +computing and filtering. Prerequisites ============= -Fortville/carlsville/columbiaville nic should be on the DUT. +Intel® Ethernet 700 Series/ +Intel® Ethernet Network Adapter X710-T4L/ +Intel® Ethernet Network Adapter X710-T2L/ +Intel® Ethernet 800 Series NIC should be on the DUT.
Test Case 1: GRE ipv4 packet detect =================================== diff --git a/test_plans/ipv4_reassembly_test_plan.rst b/test_plans/ipv4_reassembly_test_plan.rst index 354dae51..fdd48be8 100644 --- a/test_plans/ipv4_reassembly_test_plan.rst +++ b/test_plans/ipv4_reassembly_test_plan.rst @@ -53,7 +53,7 @@ to the device under test:: modprobe vfio-pci usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id -1x Intel® 82599 (Niantic) NICs (1x 10GbE full duplex optical ports per NIC) +1x 82599 NICs (1x 10GbE full duplex optical ports per NIC) plugged into the available PCIe Gen2 8-lane slots. Build dpdk and examples=ip_reassembly: diff --git a/test_plans/ixgbe_vf_get_extra_queue_information_test_plan.rst b/test_plans/ixgbe_vf_get_extra_queue_information_test_plan.rst index 07c67e76..592e6fdd 100644 --- a/test_plans/ixgbe_vf_get_extra_queue_information_test_plan.rst +++ b/test_plans/ixgbe_vf_get_extra_queue_information_test_plan.rst @@ -31,7 +31,7 @@ OF THE POSSIBILITY OF SUCH DAMAGE. ========================================================== -Niantic ixgbe_get_vf_queue Include Extra Information Tests +82599 ixgbe_get_vf_queue Include Extra Information Tests ========================================================== Description diff --git a/test_plans/l2tp_esp_coverage_test_plan.rst b/test_plans/l2tp_esp_coverage_test_plan.rst index f9edaee9..b3a2fa45 100644 --- a/test_plans/l2tp_esp_coverage_test_plan.rst +++ b/test_plans/l2tp_esp_coverage_test_plan.rst @@ -36,8 +36,9 @@ test coverage for L2TPv3 and ESP Description =========== -For each protocol, below is a list of standard features supported by the Columbiaville hardware and the impact on the feature for each protocol. -Some features are supported in a limited manner as stated below. +For each protocol, below is a list of standard features supported by the +Intel® Ethernet 800 Series hardware and the impact on the feature for +each protocol. Some features are supported in a limited manner as stated below.
IPSec(ESP): L2 Tag offloads @@ -67,7 +68,8 @@ DCB ----Priority Flow Control - No this test plan is designed to check above offloads in L2TPv3 and ESP. -and CVL can't support tx checksum in vector path now, so only test the rx checksum offload. +and Intel® Ethernet 800 Series can't support tx checksum in vector path +now, so only test the rx checksum offload. Prerequisites diff --git a/test_plans/l3fwd_test_plan.rst b/test_plans/l3fwd_test_plan.rst index f0b4c641..efd761dd 100644 --- a/test_plans/l3fwd_test_plan.rst +++ b/test_plans/l3fwd_test_plan.rst @@ -44,7 +44,7 @@ Prerequisites - For each CPU socket, each memory channel should be populated with at least 1x DIMM - Board is populated with 4x 1GbE or 10GbE ports. Special PCIe restrictions may be required for performance. For example, the following requirements should be - met for Intel 82599 (Niantic) NICs: + met for 82599 NICs: - NICs are plugged into PCIe Gen2 or Gen3 slots - For PCIe Gen2 slots, the number of lanes should be 8x or higher diff --git a/test_plans/large_vf_test_plan.rst b/test_plans/large_vf_test_plan.rst index 4e2d0555..1f690ad2 100644 --- a/test_plans/large_vf_test_plan.rst +++ b/test_plans/large_vf_test_plan.rst @@ -31,14 +31,14 @@ OF THE POSSIBILITY OF SUCH DAMAGE. ============================ -CVL: Large VF for 256 queues +ICE: Large VF for 256 queues ============================ Prerequisites ============= 1. Hardware: - columbiaville_25g/columbiaville_100g + Intel® Ethernet Network Adapter E810-XXVDA4/Intel® Ethernet Network Adapter E810-CQDA2 2. Software: DPDK: http://dpdk.org/git/dpdk diff --git a/test_plans/macsec_for_ixgbe_test_plan.rst b/test_plans/macsec_for_ixgbe_test_plan.rst index 68c2c2c8..fc21026b 100644 --- a/test_plans/macsec_for_ixgbe_test_plan.rst +++ b/test_plans/macsec_for_ixgbe_test_plan.rst @@ -31,13 +31,13 @@ OF THE POSSIBILITY OF SUCH DAMAGE.
==================================================== -Niantic Media Access Control Security (MACsec) Tests +82599 Media Access Control Security (MACsec) Tests ==================================================== Description =========== -This document provides test plan for testing the MACsec function of Niantic: +This document provides test plan for testing the MACsec function of 82599: IEEE 802.1AE: https://en.wikipedia.org/wiki/IEEE_802.1AE Media Access Control Security (MACsec) is a Layer 2 security technology @@ -50,9 +50,9 @@ in the hardware. As a hop-to-hop Layer 2 security feature, MACsec can be combined with Layer 3 security technologies such as IPsec for end-to-end data security. -MACsec was removed in Fortville since Data Center customers don’t require it. -MACsec can be used for LAN / VLAN, Campus, Cloud and NFV environments -(Guest and Overlay) to protect and encrypt data on the wire. +MACsec was removed in Intel® Ethernet 700 Series since Data Center customers +don’t require it. MACsec can be used for LAN / VLAN, Campus, Cloud and NFV +environments (Guest and Overlay) to protect and encrypt data on the wire. One benefit of a SW approach to encryption in the cloud is that the payload is encrypted by the tenant, not by the tunnel provider, thus the tenant has full control over the keys. @@ -72,7 +72,7 @@ Prerequisites 1. Hardware: - * 1x Niantic NIC (2x 10G) + * 1x 82599 NIC (2x 10G) :: port0: diff --git a/test_plans/malicious_driver_event_indication_test_plan.rst b/test_plans/malicious_driver_event_indication_test_plan.rst index c97555ba..a1037e20 100644 --- a/test_plans/malicious_driver_event_indication_test_plan.rst +++ b/test_plans/malicious_driver_event_indication_test_plan.rst @@ -30,9 +30,9 @@ ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
-========================================================== -Malicious driver event indication process in FVL PF driver -========================================================== +================================================================================= +Malicious driver event indication process in Intel® Ethernet 700 Series PF driver +================================================================================= Need modify the testpmd APP to generate invalid packets in tx only mode diff --git a/test_plans/metrics_test_plan.rst b/test_plans/metrics_test_plan.rst index 2ba3617d..a2b8a3db 100644 --- a/test_plans/metrics_test_plan.rst +++ b/test_plans/metrics_test_plan.rst @@ -72,7 +72,7 @@ Calculates peak and average bit-rate statistics. Prerequisites ============= -2x Intel® 82599 (Niantic) NICs (2x 10GbE full duplex optical ports per NIC) +2x 82599 NICs (2x 10GbE full duplex optical ports per NIC) plugged into the available PCIe Gen2 8-lane slots in two different configurations. port topology diagram:: diff --git a/test_plans/multicast_test_plan.rst b/test_plans/multicast_test_plan.rst index 11d84347..b9dbfdaa 100644 --- a/test_plans/multicast_test_plan.rst +++ b/test_plans/multicast_test_plan.rst @@ -70,7 +70,7 @@ Prerequisites - Board is populated with 2x 10GbE ports. Special PCIe restrictions may be required for performance. For example, the following requirements should be - met for Intel 82599 (Niantic) NICs: + met for 82599 NICs: - NICs are plugged into PCIe Gen2 or Gen3 slots - For PCIe Gen2 slots, the number of lanes should be 8x or higher diff --git a/test_plans/nic_single_core_perf_test_plan.rst b/test_plans/nic_single_core_perf_test_plan.rst index 7f86312f..404ea6cc 100644 --- a/test_plans/nic_single_core_perf_test_plan.rst +++ b/test_plans/nic_single_core_perf_test_plan.rst @@ -30,19 +30,20 @@ ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
-====================================================================== -Benchmark the performance of single core forwarding with FVL25G/NNT10G -====================================================================== +======================================================================================== +Benchmark the performance of single core forwarding with XXV710 and 82599/500 Series 10G +======================================================================================== Prerequisites ============= 1. Hardware: - 1.1) nic_single_core_perf test for FVL25G: two dual port FVL25G nics, - all installed on the same socket, pick one port per nic - 1.2) nic_single_core_perf test for NNT10G: four 82599 nics, - all installed on the same socket, pick one port per nic + 1.1) nic_single_core_perf test for Intel® Ethernet Network Adapter XXV710-DA2 : + two dual port Intel® Ethernet Network Adapter XXV710-DA2 nics, all installed + on the same socket, pick one port per nic + 1.2) nic_single_core_perf test for 82599/500 Series 10G: four 82599 nics, all + installed on the same socket, pick one port per nic 2. Software:: @@ -60,15 +61,17 @@ Prerequisites 3. Connect all the selected nic ports to traffic generator(IXIA,TREX, PKTGEN) ports(TG ports):: - 2 TG 25g ports for FVL25G ports - 4 TG 10g ports for 4 NNT10G ports + 2 TG 25g ports for Intel® Ethernet Network Adapter XXV710-DA2 ports + 4 TG 10g ports for 4 82599/500 Series 10G ports 4. Case config:: - For FVL40g, if test 16 Byte Descriptor, need to set the "CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=y" + For Intel® Ethernet Converged Network Adapter XL710-QDA2, if test 16 + Byte Descriptor, need to set the "CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=y" in ./config/common_base and re-build DPDK. - For CVL25G, if test 16 Byte Descriptor, need to set the "CONFIG_RTE_LIBRTE_ICE_16BYTE_RX_DESC=y" - in ./config/common_base and re-build DPDK. 
+ For Intel® Ethernet Network Adapter E810-XXVDA4, if test 16 Byte Descriptor, + need to set the "CONFIG_RTE_LIBRTE_ICE_16BYTE_RX_DESC=y" in ./config/common_base + and re-build DPDK. Test Case : Single Core Performance Measurement =============================================== @@ -89,13 +92,13 @@ Test Case : Single Core Performance Measurement 4) check throughput and compare it with the expected value. -5) for NNT10G, repeat above step 1-4 for txd=rxd=512,2048 separately. - for FVL25G nic, just test txd=rxd=512,2048 following above steps - 1-4. +5) for 82599/500 Series 10G, repeat above step 1-4 for txd=rxd=512,2048 separately. + for Intel® Ethernet Network Adapter XXV710-DA2 nic, just test + txd=rxd=512,2048 following above steps 1-4. 6) Result tables for different NICs: - FVL25G: + Intel® Ethernet Network Adapter XXV710-DA2: +------------+---------+-------------+---------+---------------------+ | Frame Size | TXD/RXD | Throughput | Rate | Expected Throughput | @@ -105,7 +108,7 @@ Test Case : Single Core Performance Measurement | 64 | 2048 | xxxxxx Mpps | xxx % | xxx Mpps | +------------+---------+-------------+---------+---------------------+ - NNT10G: + 82599/500 Series 10G: +------------+---------+-------------+---------+---------------------+ | Frame Size | TXD/RXD | Throughput | Rate | Expected Throughput | @@ -119,9 +122,9 @@ Test Case : Single Core Performance Measurement Note : The values for the expected throughput may vary due to different platform and OS, and traffic generator, please correct threshold - values accordingly. (the above expected values for FVL 25G and - NNT10G were got from the combination of Purly,Ubuntu 16.04, and - traffic generator IXIA) + values accordingly. (the above expected values for XXV710 and + 82599/500 Series 10G were obtained from the combination of Purley, + Ubuntu 16.04, and traffic generator IXIA) Case will raise failure if actual throughputs have more than 1Mpps gap from expected ones.
diff --git a/test_plans/nvgre_test_plan.rst b/test_plans/nvgre_test_plan.rst index c05292ee..7c669fe8 100644 --- a/test_plans/nvgre_test_plan.rst +++ b/test_plans/nvgre_test_plan.rst @@ -30,33 +30,33 @@ ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -===================== -Fortville NVGRE Tests -===================== +====================================== +Intel® Ethernet 700 Series NVGRE Tests +====================================== Cloud providers build virtual network overlays over existing network infrastructure that provide tenant isolation and scaling. Tunneling layers added to the packets carry the virtual networking frames over existing Layer 2 and IP networks. Conceptually, this is similar to -creating virtual private networks over the Internet. Fortville will -process these tunneling layers by the hardware. +creating virtual private networks over the Internet. Intel® Ethernet +700 Series will process these tunneling layers by the hardware. -This document provides test plan for Fortville NVGRE packet detecting, -checksum computing and filtering. +This document provides test plan for Intel® Ethernet 700 Series NVGRE +packet detecting, checksum computing and filtering. Prerequisites ============= -1x Intel X710 (Fortville) NICs (2x 40GbE full duplex optical ports per NIC) +1x X710 NICs (2x 40GbE full duplex optical ports per NIC) plugged into the available PCIe Gen3 8-lane slot. -1x Intel XL710-DA4 (Eagle Fountain) (1x 10GbE full duplex optical ports per NIC) +1x XL710-DA4 (1x 10GbE full duplex optical ports per NIC) plugged into the available PCIe Gen3 8-lane slot. DUT board must be two sockets system and each cpu have more than 8 lcores. -For fortville NICs need change the value of CONFIG_RTE_LIBRTE_I40E_INC_VECTOR -in dpdk/config/common_base file to n. +For Intel® Ethernet 700 Series NICs need change the value of +CONFIG_RTE_LIBRTE_I40E_INC_VECTOR in dpdk/config/common_base file to n. 
Test Case: NVGRE ipv4 packet detect =================================== diff --git a/test_plans/pf_smoke_test_plan.rst b/test_plans/pf_smoke_test_plan.rst index ffcd2105..259bc76c 100644 --- a/test_plans/pf_smoke_test_plan.rst +++ b/test_plans/pf_smoke_test_plan.rst @@ -46,7 +46,7 @@ Prerequisites 1. Hardware: - niantic/fortville/columbiaville + 82599/Intel® Ethernet 700 Series/Intel® Ethernet 800 Series 2. Software: diff --git a/test_plans/pmd_bonded_test_plan.rst b/test_plans/pmd_bonded_test_plan.rst index 52ba55ad..e0cb24dd 100644 --- a/test_plans/pmd_bonded_test_plan.rst +++ b/test_plans/pmd_bonded_test_plan.rst @@ -80,8 +80,8 @@ Prerequisites for Bonding * NIC and IXIA ports requirements. - - Tester: have 4 10Gb (Niantic) ports and 4 1Gb ports. - - DUT: have 4 10Gb (Niantic) ports and 4 1Gb ports. All functional tests should be done on both 10G and 1G port. + - Tester: have 4 10Gb (82599) ports and 4 1Gb ports. + - DUT: have 4 10Gb (82599) ports and 4 1Gb ports. All functional tests should be done on both 10G and 1G port. - IXIA: have 4 10G ports and 4 1G ports. IXIA is used for performance test. * BIOS settings on DUT: diff --git a/test_plans/pmd_test_plan.rst b/test_plans/pmd_test_plan.rst index 7b7d994f..9a0452e7 100644 --- a/test_plans/pmd_test_plan.rst +++ b/test_plans/pmd_test_plan.rst @@ -81,11 +81,11 @@ If using igb_uio:: usertools/dpdk-devbind.py --bind=igb_uio device_bus_id Case config:: - For FVL40g, if test 16 Byte Descriptor, need to set the "CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=y" - in ./config/common_base and re-build DPDK. + For Intel® Ethernet Converged Network Adapter XL710-QDA2, if test 16 Byte Descriptor, need to set + the "CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=y" in ./config/common_base and re-build DPDK. - For CVL25G, if test 16 Byte Descriptor, need to set the "CONFIG_RTE_LIBRTE_ICE_16BYTE_RX_DESC=y" - in ./config/common_base and re-build DPDK. 
+ For Intel® Ethernet Network Adapter E810-XXVDA4, if test 16 Byte Descriptor, need to set the + "CONFIG_RTE_LIBRTE_ICE_16BYTE_RX_DESC=y" in ./config/common_base and re-build DPDK. Test Case: Packet Checking ========================== diff --git a/test_plans/pmdpcap_test_plan.rst b/test_plans/pmdpcap_test_plan.rst index 18b85e5e..ad3e4551 100644 --- a/test_plans/pmdpcap_test_plan.rst +++ b/test_plans/pmdpcap_test_plan.rst @@ -34,8 +34,8 @@ TestPMD PCAP Tests ================== -This document provides tests for the userland Intel(R) -82599 Gigabit Ethernet Controller (Niantic) Poll Mode Driver (PMD) when using +This document provides tests for the userland Intel(R) 82599 +Gigabit Ethernet Controller Poll Mode Driver (PMD) when using pcap files as input and output. The core configurations description is: diff --git a/test_plans/pmdrss_hash_test_plan.rst b/test_plans/pmdrss_hash_test_plan.rst index 544d2e51..f23a5a90 100644 --- a/test_plans/pmdrss_hash_test_plan.rst +++ b/test_plans/pmdrss_hash_test_plan.rst @@ -30,24 +30,24 @@ ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -=============================================== -Fortville RSS - Configuring Hash Function Tests -=============================================== +================================================================ +Intel® Ethernet 700 Series RSS - Configuring Hash Function Tests +================================================================ -This document provides test plan for testing the function of Fortville: +This document provides test plan for testing the function of Intel® Ethernet 700 Series: Support configuring hash functions. 
Prerequisites ============= -* 2x Intel® 82599 (Niantic) NICs (2x 10GbE full duplex optical ports per NIC) -* 1x Fortville_eagle NIC (4x 10G) -* 1x Fortville_spirit NIC (2x 40G) -* 2x Fortville_spirit_single NIC (1x 40G) +* 2x 82599 NICs (2x 10GbE full duplex optical ports per NIC) +* 1x X710 NIC (4x 10G) +* 1x XL710 NIC (2x 40G) +* 2x XL710 NIC (1x 40G) -The four ports of the 82599 connect to the Fortville_eagle; -The two ports of Fortville_spirit connect to Fortville_spirit_single. +The four ports of the 82599 connect to the X710; +The two ports of XL710 connect to XL710. The three kinds of NICs are the target NICs. the connected NICs can send packets to these three NICs using scapy. @@ -61,7 +61,8 @@ handled by a different logical core. #. The receive packet is parsed into the header fields used by the hash operation (such as IP addresses, TCP port, etc.) -#. A hash calculation is performed. The Fortville supports four hash function: +#. A hash calculation is performed. The Intel® Ethernet 700 Series supports four + hash function: Toeplitz, simple XOR and their Symmetric RSS. #. The seven LSBs of the hash result are used as an index into a 128/512 entry @@ -75,7 +76,7 @@ Test Case: test_toeplitz Testpmd configuration - 16 RX/TX queues per port ------------------------------------------------ -#. set up testpmd with fortville NICs:: +#. set up testpmd with Intel® Ethernet 700 Series NICs:: ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c fffff -n %d -- -i --coremask=0xffffe --rxq=16 --txq=16 diff --git a/test_plans/pmdrssreta_test_plan.rst b/test_plans/pmdrssreta_test_plan.rst index dd9f1ca1..f165e4a7 100644 --- a/test_plans/pmdrssreta_test_plan.rst +++ b/test_plans/pmdrssreta_test_plan.rst @@ -31,12 +31,12 @@ OF THE POSSIBILITY OF SUCH DAMAGE. 
 ======================================
-Niantic Reta (Redirection table) Tests
+82599 Reta (Redirection table) Tests
 ======================================

 This document provides test plan for benchmarking of Rss reta(Redirection
 table) updating for the Intel® 82599 10 Gigabit Ethernet Controller
-(Niantic) Poll Mode Driver (PMD) in userland runtime configurations.
+Poll Mode Driver (PMD) in userland runtime configurations.
 The content of Rss Redirection table are not defined following reset
 of the Memory Configuration registers. System software must initialize
 the table prior to enabling multiple receive queues .It can also update
@@ -46,7 +46,7 @@ not synchronized with the arrival time of received packets.
 Prerequisites
 =============

-2x Intel® 82599 (Niantic) NICs (2x 10GbE full duplex optical ports per NIC)
+2x Intel® 82599 NICs (2x 10GbE full duplex optical ports per NIC)
 plugged into the available PCIe Gen2 8-lane slots. To avoid PCIe bandwidth
 bottlenecks at high packet rates, a single optical port from each NIC is
 connected to the traffic generator.

diff --git a/test_plans/port_control_test_plan.rst b/test_plans/port_control_test_plan.rst
index 532d363a..4577802d 100644
--- a/test_plans/port_control_test_plan.rst
+++ b/test_plans/port_control_test_plan.rst
@@ -40,9 +40,9 @@ Prerequisites

 1. Hardware:

-   * Fortville
-   * Niantic
-   * Columbiaville
+   * Intel® Ethernet 700 Series NIC
+   * 82599
+   * Intel® Ethernet 800 Series NIC
    * i350 NIC
    * e1000 emulated device

diff --git a/test_plans/ptpclient_test_plan.rst b/test_plans/ptpclient_test_plan.rst
index 31ba2b15..bf3ba91f 100644
--- a/test_plans/ptpclient_test_plan.rst
+++ b/test_plans/ptpclient_test_plan.rst
@@ -51,7 +51,7 @@ Case Config::

     $ CC=gcc meson -Denable_kmods=True -Dlibdir=lib --default-library=static
     $ ninja -C

-The sample should be validated on Forville, Niantic and i350 Nics.
+The sample should be validated on Intel® Ethernet 700 Series, 82599 and i350 NICs.
Test case: ptp client ====================== @@ -64,7 +64,7 @@ Start ptp client on DUT and wait few seconds:: .//examples/dpdk-ptpclient -c f -n 3 -- -T 0 -p 0x1 Check that output message contained T1,T2,T3,T4 clock and time difference -between master and slave time is about 10us in niantic, 20us in Fortville, +between master and slave time is about 10us in 82599, 20us in Intel® Ethernet 700 Series, 8us in i350. Test case: update system @@ -87,5 +87,5 @@ Start ptp client on DUT and wait few seconds:: Make sure DUT system time has been changed to same as tester. Check that output message contained T1,T2,T3,T4 clock and time difference -between master and slave time is about 10us in niantic, 20us in Fortville, +between master and slave time is about 10us in 82599, 20us in Intel® Ethernet 700 Series, 8us in i350. diff --git a/test_plans/ptype_mapping_test_plan.rst b/test_plans/ptype_mapping_test_plan.rst index fdabd191..4f15d9cf 100644 --- a/test_plans/ptype_mapping_test_plan.rst +++ b/test_plans/ptype_mapping_test_plan.rst @@ -38,9 +38,9 @@ All PTYPEs (packet types) in DPDK PMDs before are statically defined using static constant map tables. It makes impossible to add a new packet type without first defining them statically and then recompiling DPDK. New NICs are flexible enough to be reconfigured depending on the -network environment. In case of FVL new PTYPEs can be added -dynamically at device initialization time using corresponding AQ -commands. +network environment. In case of Intel® Ethernet 700 Series new PTYPEs +can be added dynamically at device initialization time using corresponding +AQ commands. Note that the packet types of the same packet recognized by different hardware may be different, as different hardware may have different capabilities of packet type recognition. 
diff --git a/test_plans/pvp_share_lib_test_plan.rst b/test_plans/pvp_share_lib_test_plan.rst index c5a3c9b8..c3d49db2 100644 --- a/test_plans/pvp_share_lib_test_plan.rst +++ b/test_plans/pvp_share_lib_test_plan.rst @@ -39,8 +39,8 @@ Description The feature need compile dpdk as shared libraries, then application should use option ``-d`` to load the dynamic pmd that are built as shared libraries. -Test Case1: Vhost/virtio-user pvp share lib test with niantic -============================================================= +Test Case1: Vhost/virtio-user pvp share lib test with 82599 +=========================================================== 1. Enable the shared lib in DPDK configure file:: @@ -54,7 +54,7 @@ Test Case1: Vhost/virtio-user pvp share lib test with niantic export LD_LIBRARY_PATH=/root/dpdk/x86_64-native-linuxapp-gcc/drivers:$LD_LIBRARY_PATH -4. Bind niantic port with vfio-pci, use option ``-d`` to load the dynamic pmd when launch vhost:: +4. Bind 82599 port with vfio-pci, use option ``-d`` to load the dynamic pmd when launch vhost:: ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x03 -n 4 -d librte_net_vhost.so.21.0 -d librte_net_i40e.so.21.0 -d librte_mempool_ring.so.21.0 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i @@ -70,12 +70,12 @@ Test Case1: Vhost/virtio-user pvp share lib test with niantic testpmd>show port stats all -Test Case2: Vhost/virtio-user pvp share lib test with fortville -=============================================================== +Test Case2: Vhost/virtio-user pvp share lib test with Intel® Ethernet 700 Series +================================================================================ Similar as Test Case1, all steps are similar except step 4: -4. Bind fortville port with vfio-pci, use option ``-d`` to load the dynamic pmd when launch vhost:: +4. 
Bind Intel® Ethernet 700 Series port with vfio-pci, use option ``-d`` to load the dynamic pmd when launch vhost:: ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x03 -n 4 -d librte_net_vhost.so -d librte_net_i40e.so -d librte_mempool_ring.so \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i diff --git a/test_plans/qinq_filter_test_plan.rst b/test_plans/qinq_filter_test_plan.rst index 488596ed..8e3a9231 100644 --- a/test_plans/qinq_filter_test_plan.rst +++ b/test_plans/qinq_filter_test_plan.rst @@ -30,18 +30,18 @@ ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -=============================================== -Fortville Cloud filters for QinQ steering Tests -=============================================== -This document provides test plan for testing the function of Fortville: +================================================================ +Intel® Ethernet 700 Series Cloud filters for QinQ steering Tests +================================================================ +This document provides test plan for testing the function of Intel® Ethernet 700 Series: QinQ filter function Prerequisites ============= 1.Hardware: - Fortville + Intel® Ethernet 700 Series HarborChannel_DP_OEMGEN_8MB_J24798-001_0.65_80002DA4 - firmware-version: 5.70 0x80002da4 1.3908.0(fortville 25G) or 6.0.0+ + firmware-version: 5.70 0x80002da4 1.3908.0(Intel® Ethernet Network Adapter XXV710-DA2) or 6.0.0+ 2.Software: dpdk: http://dpdk.org/git/dpdk @@ -53,10 +53,10 @@ Test Case 1: test qinq packet type Testpmd configuration - 4 RX/TX queues per port ------------------------------------------------ -#. For fortville NICs need change the value of +#. For Intel® Ethernet 700 Series NICs need change the value of CONFIG_RTE_LIBRTE_I40E_INC_VECTOR in dpdk/config/common_base file to n. -#. set up testpmd with fortville NICs:: +#. 
set up testpmd with Intel® Ethernet 700 Series NICs:: ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x1f -n 4 -- -i --rxq=4 --txq=4 --disable-rss @@ -89,7 +89,7 @@ Test Case 2: qinq packet filter to PF queues Testpmd configuration - 4 RX/TX queues per port ----------------------------------------------- -#. set up testpmd with fortville NICs:: +#. set up testpmd with Intel® Ethernet 700 Series NICs:: ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x1f -n 4 -- -i --rxq=4 --txq=4 --disable-rss @@ -132,7 +132,7 @@ Test Case 3: qinq packet filter to VF queues linux cmdline: ./usertools/dpdk-devbind.py -b igb_uio 81:02.0 81:02.1 -#. set up testpmd with fortville PF NICs:: +#. set up testpmd with Intel® Ethernet 700 Series PF NICs:: ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x1f -n 4 --socket-mem=1024,1024 --file-prefix=pf -a 81:00.0 -- -i --rxq=4 --txq=4 @@ -158,7 +158,7 @@ Test Case 3: qinq packet filter to VF queues testpmd command: flow create 0 ingress pattern eth / vlan tci is 2 / vlan tci is 4094 / end actions vf id 1 / queue index 3 / end testpmd command: flow create 0 ingress pattern eth / vlan tci is 3 / vlan tci is 4094 / end actions pf / queue index 1 / end -#. set up testpmd with fortville VF0 NICs:: +#. set up testpmd with Intel® Ethernet 700 Series VF0 NICs:: ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3e0 -n 4 --socket-mem=1024,1024 --file-prefix=vf0 -a 81:02.0 -- -i --rxq=4 --txq=4 @@ -174,7 +174,7 @@ Test Case 3: qinq packet filter to VF queues testpmd command: start -#. set up testpmd with fortville VF1 NICs:: +#. set up testpmd with Intel® Ethernet 700 Series VF1 NICs:: ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7c0 -n 4 --socket-mem=1024,1024 --file-prefix=vf1 -a 81:02.1 -- -i --rxq=4 --txq=4 @@ -209,7 +209,7 @@ Test Case 4: qinq packet filter with different tpid linux cmdline: ./usertools/dpdk-devbind.py -b igb_uio 81:02.0 81:02.1 -#. set up testpmd with fortville PF NICs:: +#. 
set up testpmd with Intel® Ethernet 700 Series PF NICs:: ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x1f -n 4 --socket-mem=1024,1024 --file-prefix=pf -a 81:00.0 -- -i --rxq=4 --txq=4 @@ -239,7 +239,7 @@ Test Case 4: qinq packet filter with different tpid testpmd command: flow create 0 ingress pattern eth / vlan tci is 2 / vlan tci is 4094 / end actions vf id 1 / queue index 3 / end testpmd command: flow create 0 ingress pattern eth / vlan tci is 3 / vlan tci is 4094 / end actions pf / queue index 1 / end -#. set up testpmd with fortville VF0 NICs:: +#. set up testpmd with Intel® Ethernet 700 Series VF0 NICs:: ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3e0 -n 4 --socket-mem=1024,1024 --file-prefix=vf0 -a 81:02.0 -- -i --rxq=4 --txq=4 @@ -255,7 +255,7 @@ Test Case 4: qinq packet filter with different tpid testpmd command: start -#. set up testpmd with fortville VF1 NICs:: +#. set up testpmd with Intel® Ethernet 700 Series VF1 NICs:: ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7c0 -n 4 --socket-mem=1024,1024 --file-prefix=vf1 -a 81:02.1 -- -i --rxq=4 --txq=4 diff --git a/test_plans/queue_region_test_plan.rst b/test_plans/queue_region_test_plan.rst index 7e4c9ca6..41857646 100644 --- a/test_plans/queue_region_test_plan.rst +++ b/test_plans/queue_region_test_plan.rst @@ -30,18 +30,19 @@ ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
-===========================================
-Fortville Configure RSS Queue Regions Tests
-===========================================
+============================================================
+Intel® Ethernet 700 Series Configure RSS Queue Regions Tests
+============================================================

 Description
 ===========

-FVL/FPK and future CVL/CPK NICs support queue regions configuration for
-RSS in PF/VF, so different traffic classes or different packet
-classification types can be separated to different queue regions which
-includes several queues, but traffic classes and packet classification
-cannot co-existing with the support of queue region functionality.
-Different PCtype packets take rss algorithm in different queue regions.
+Intel® Ethernet 700 Series/X722 series and future Intel® Ethernet 800
+Series/E822 NICs support queue regions configuration for RSS in PF/VF,
+so different traffic classes or different packet classification types
+can be separated to different queue regions which includes several
+queues, but traffic classes and packet classification cannot co-exist
+with the support of queue region functionality. Different PCtype packets
+take the RSS algorithm in different queue regions.

 Examples:

@@ -58,14 +59,14 @@ Examples:

 • different traffic classes defined in VLAN PCP bits distributed to
   different queue regions

-For FVL see chapter 7.1.7 of the latest datasheet.
-For FPK/CPK see corresponding EAS sections.
+For Intel® Ethernet 700 Series see chapter 7.1.7 of the latest datasheet.
+For X722 series/E822 see corresponding EAS sections.

 Prerequisites
 =============

 1. Hardware:
-   Fortville
+   Intel® Ethernet 700 Series

 2. software:
    dpdk: http://dpdk.org/git/dpdk

@@ -139,8 +140,8 @@ Test case 1: different pctype packet can enter the expected queue region

 Send the pkt1-pkt9, the packets can't enter the same queue which defined
 in queue region rule. They are distributed to queues according RSS rule.
-Notes: fortville can't parse the TCP SYN type packet, fortpark can parse it.
-So if fortville, pkt2 to queue 6 or queue 7.
+Notes: Intel® Ethernet 700 Series can't parse the TCP SYN type packet, X722 series can parse it.
+So on Intel® Ethernet 700 Series, pkt2 goes to queue 6 or queue 7.

 Test case 2: different user priority packet can enter the expected queue region
 ===============================================================================

diff --git a/test_plans/queue_start_stop_test_plan.rst b/test_plans/queue_start_stop_test_plan.rst
index 82e35280..963cdc9a 100644
--- a/test_plans/queue_start_stop_test_plan.rst
+++ b/test_plans/queue_start_stop_test_plan.rst
@@ -58,7 +58,7 @@ To run the testpmd application in linuxapp environment with 4 lcores,

 Test Case: queue start/stop
 ---------------------------

-This case support PF (fortville), VF (fortville,niantic)
+This case supports PF (Intel® Ethernet 700 Series), VF (Intel® Ethernet 700 Series, 82599)

 #. Update testpmd source code. Add the following C code in ./app/test-pmd/fwdmac.c::

diff --git a/test_plans/rss_to_rte_flow_test_plan.rst b/test_plans/rss_to_rte_flow_test_plan.rst
index cb006d80..481cd032 100644
--- a/test_plans/rss_to_rte_flow_test_plan.rst
+++ b/test_plans/rss_to_rte_flow_test_plan.rst
@@ -713,8 +713,8 @@ Test case: Set queue region in rte_flow with invalid parameter (I40E)

 Test case: Queue region and RSS rule combination (I40E)
 =======================================================

-Notes: Queue region is only supported by fortville, so this case only can
-be implemented with fortville.
+Notes: Queue region is only supported by Intel® Ethernet 700 Series, so this
+case can only be implemented with Intel® Ethernet 700 Series.

 1.
Start the testpmd:: diff --git a/test_plans/rteflow_priority_test_plan.rst b/test_plans/rteflow_priority_test_plan.rst index 405a1bf1..ead54333 100644 --- a/test_plans/rteflow_priority_test_plan.rst +++ b/test_plans/rteflow_priority_test_plan.rst @@ -47,7 +47,8 @@ when priority is not active, flows are created on fdir then switch/ACL. when priority is active, flows are identified into 2 category: High priority as permission stage that maps to switch/ACL, Low priority as distribution stage that maps to fdir, -a no destination high priority rule is not acceptable, since it may be overwritten by a low priority rule due to cvl FXP behavior. +a no destination high priority rule is not acceptable, since it may be overwritten +by a low priority rule due to Intel® Ethernet 800 Series FXP behavior. Note: Since these tests are focus on priority, the patterns in tests are examples. diff --git a/test_plans/runtime_vf_queue_number_kernel_test_plan.rst b/test_plans/runtime_vf_queue_number_kernel_test_plan.rst index d4f01a3b..1e235966 100644 --- a/test_plans/runtime_vf_queue_number_kernel_test_plan.rst +++ b/test_plans/runtime_vf_queue_number_kernel_test_plan.rst @@ -49,7 +49,7 @@ Prerequisites 1. Hardware: -- Forville(X710/XL710/XXV710) +- Intel® Ethernet 700 Series(X710/XL710/XXV710) 2. Software: diff --git a/test_plans/runtime_vf_queue_number_maxinum_test_plan.rst b/test_plans/runtime_vf_queue_number_maxinum_test_plan.rst index 1a9ff77d..aeff1bb9 100644 --- a/test_plans/runtime_vf_queue_number_maxinum_test_plan.rst +++ b/test_plans/runtime_vf_queue_number_maxinum_test_plan.rst @@ -47,9 +47,9 @@ see runtime_vf_queue_number_test_plan.rst The datasheet xl710-10-40-controller-datasheet2017.pdf described in page 10: "The 710 series supports up to 1536 LQPs that can be assigned to PFs or VFs as needed". - For four ports Fortville NIC, each port has 384 queues, + For four ports Intel® Ethernet 700 Series NIC, each port has 384 queues, the total queues number is 384 * 4 = 1536. 
- For two ports Fortville NIC, each port has 768 queues, + For two ports Intel® Ethernet 700 Series NIC, each port has 768 queues, the total queues number is 768 * 2 = 1536. - Queues PF used @@ -76,7 +76,7 @@ Prerequisites 1. Hardware: -- Fortville(X710/XXV710/XL710) +- Intel® Ethernet 700 Series(X710/XXV710/XL710) 2. Software: @@ -90,11 +90,11 @@ Set up scenario =============== 1. Set up max VFs from one PF with DPDK driver - Create 32 vfs on four ports fortville NIC:: + Create 32 vfs on four ports Intel® Ethernet 700 Series NIC:: echo 32 > /sys/bus/pci/devices/0000\:05\:00.0/max_vfs - Create 64 vfs on two ports fortville NIC:: + Create 64 vfs on two ports Intel® Ethernet 700 Series NIC:: echo 64 > /sys/bus/pci/devices/0000\:05\:00.0/max_vfs @@ -152,7 +152,7 @@ Test case 2: set max queue number per vf on one pf port ================================================================ 1. Start the PF testpmd with VF max queue number 16:: As the feature description describe, the max value of queue-num-per-vf is 8 - for Both two and four ports Fortville NIC:: + for Both two and four ports Intel® Ethernet 700 Series NIC:: .//app/dpdk-testpmd -c f -n 4 -a 05:00.0,queue-num-per-vf=16 --file-prefix=test1 \ --socket-mem 1024,1024 -- -i diff --git a/test_plans/runtime_vf_queue_number_test_plan.rst b/test_plans/runtime_vf_queue_number_test_plan.rst index c4aaaed0..1757271a 100644 --- a/test_plans/runtime_vf_queue_number_test_plan.rst +++ b/test_plans/runtime_vf_queue_number_test_plan.rst @@ -62,7 +62,7 @@ Prerequisites 1. Hardware: -- Fortville(X710/XXV710/XL710) +- Intel® Ethernet 700 Series(X710/XXV710/XL710) 2. Software: diff --git a/test_plans/rxtx_offload_test_plan.rst b/test_plans/rxtx_offload_test_plan.rst index 172bb9cd..6739bcb0 100644 --- a/test_plans/rxtx_offload_test_plan.rst +++ b/test_plans/rxtx_offload_test_plan.rst @@ -65,7 +65,7 @@ Prerequisites ============= 1. Hardware: - FVL/NNT + Intel® Ethernet 700 Series and 82599/500 Series 2. 
Software: dpdk: http://dpdk.org/git/dpdk @@ -168,8 +168,8 @@ Test case: Rx offload per-port and per_queue setting Check the configuration and the port can start normally. -Test case: NNT Rx offload per-queue setting -=========================================== +Test case: 82599/500 Series Rx offload per-queue setting +======================================================== 1. Start testpmd:: @@ -514,8 +514,8 @@ Test case: Tx offload per-queue and per-port setting Check the configuration and the port can start normally. -Test case: FVL Tx offload per-queue setting -=========================================== +Test case: Intel® Ethernet 700 Series Tx offload per-queue setting +================================================================== 1. Start testpmd and get the tx_offload capability and configuration:: diff --git a/test_plans/stats_checks_test_plan.rst b/test_plans/stats_checks_test_plan.rst index 5cc3ee7e..7eb120b3 100644 --- a/test_plans/stats_checks_test_plan.rst +++ b/test_plans/stats_checks_test_plan.rst @@ -128,7 +128,7 @@ The fields checked are RX-packets and TX-packets of each queue stats, RX-packets, RX-bytes, TX-packets and TX-bytes of each port stats, rx_good_packets and rx_good_bytes of each port xstats, tx_good_packets and tx_good_bytes of each port xstats, -FVL does not support hardware per queue stats, +Intel® Ethernet 700 Series does not support hardware per queue stats, so we won't check rx and tx per queue stats. Test Case: PF xstatus Checks @@ -260,7 +260,7 @@ Test Case: PF xstatus Checks verify rx_good_packets, RX-packets of port 0 and tx_good_packets, TX-packets of port 1 are both 100. rx_good_bytes, RX-bytes of port 0 and tx_good_bytes, TX-bytes of port 1 are the same. -FVL does not support hardware per queue stats, +Intel® Ethernet 700 Series does not support hardware per queue stats, so rx_qx_packets and rx_qx_bytes are both 0. tx_qx_packets and tx_qx_bytes are both 0 too. 
diff --git a/test_plans/tso_test_plan.rst b/test_plans/tso_test_plan.rst index ee443d10..6b1367d0 100644 --- a/test_plans/tso_test_plan.rst +++ b/test_plans/tso_test_plan.rst @@ -40,7 +40,7 @@ Description This document provides the plan for testing the TSO (Transmit Segmentation Offload, also called Large Send offload - LSO) feature of Intel Ethernet Controller, including Intel 82599 10GbE Ethernet Controller and -Fortville 40GbE Ethernet Controller. TSO enables the TCP/IP stack to +Intel® Ethernet Converged Network Adapter XL710-QDA2. TSO enables the TCP/IP stack to pass to the network device a larger ULP datagram than the Maximum Transmit Unit Size (MTU). NIC divides the large ULP datagram to multiple segments according to the MTU size. diff --git a/test_plans/tx_preparation_test_plan.rst b/test_plans/tx_preparation_test_plan.rst index a6d9a0b9..92ba2fb6 100644 --- a/test_plans/tx_preparation_test_plan.rst +++ b/test_plans/tx_preparation_test_plan.rst @@ -171,8 +171,8 @@ Captured packet:: Note: Generally TSO only supports TCP packets but doesn't support UDP packets due to -hardware segmentation limitation, for example packets are sent on niantic -NIC, but not segmented. +hardware segmentation limitation, for example packets are sent on 82599 NIC, but +not segmented. Packet:: diff --git a/test_plans/uni_pkt_test_plan.rst b/test_plans/uni_pkt_test_plan.rst index 68f12c92..f92e9f3e 100644 --- a/test_plans/uni_pkt_test_plan.rst +++ b/test_plans/uni_pkt_test_plan.rst @@ -64,7 +64,7 @@ Test Case: L2 Packet detect =========================== This case checked that whether Timesync, ARP, LLDP detection supported by -Fortville. +Intel® Ethernet 700 Series. Send time sync packet from tester:: @@ -94,12 +94,12 @@ Test Case: IPv4&L4 packet type detect ===================================== This case checked that whether L3 and L4 packet can be normally detected. -Only Fortville can detect icmp packet. -Only niantic and i350 can detect ipv4 extension packet. 
-Fortville did not detect whether packet contain ipv4 header options, so L3 +Only Intel® Ethernet 700 Series can detect icmp packet. +Only 82599 and i350 can detect ipv4 extension packet. +Intel® Ethernet 700 Series did not detect whether packet contain ipv4 header options, so L3 type will be shown as IPV4_EXT_UNKNOWN. -Fortville will identify all unrecognized L4 packet as L4_NONFRAG. -Only Fortville can identify L4 fragment packet. +Intel® Ethernet 700 Series will identify all unrecognized L4 packet as L4_NONFRAG. +Only Intel® Ethernet 700 Series can identify L4 fragment packet. Send IP only packet and verify L2/L3/L4 corrected:: @@ -127,13 +127,13 @@ Send IP+SCTP packet and verify L2/L3/L4 corrected:: (outer) L4 type: SCTP -Send IP+ICMP packet and verify L2/L3/L4 corrected(Fortville):: +Send IP+ICMP packet and verify L2/L3/L4 corrected(Intel® Ethernet 700 Series):: sendp([Ether()/IP()/ICMP()/Raw('\0'*60)], iface=txItf) (outer) L4 type: ICMP -Send IP fragment+TCP packet and verify L2/L3/L4 corrected(Fortville):: +Send IP fragment+TCP packet and verify L2/L3/L4 corrected(Intel® Ethernet 700 Series):: sendp([Ether()/IP(frag=5)/TCP()/Raw('\0'*60)], iface=txItf) @@ -141,14 +141,14 @@ Send IP fragment+TCP packet and verify L2/L3/L4 corrected(Fortville):: (outer) L3 type: IPV4_EXT_UNKNOWN (outer) L4 type: L4_FRAG -Send IP extension packet and verify L2/L3 corrected(Niantic,i350):: +Send IP extension packet and verify L2/L3 corrected(82599,i350):: sendp([Ether()/IP(ihl=10)/Raw('\0'*40)],iface=txItf) (outer) L3 type: IPV4_EXT (outer) L4 type: Unknown -Send IP extension+SCTP packet and verify L2/L3/L4 corrected(Niantic,i350):: +Send IP extension+SCTP packet and verify L2/L3/L4 corrected(82599,i350):: sendp([Ether()/IP(ihl=10)/SCTP()/Raw('\0'*40)],iface=txItf) @@ -159,10 +159,10 @@ Test Case: IPv6&L4 packet type detect ===================================== This case checked that whether IPv6 and L4 packet can be normally detected. 
-Fortville did not detect whether packet contain ipv6 extension options, so L3 -type will be shown as IPV6_EXT_UNKNOWN. -Fortville will identify all unrecognized L4 packet as L4_NONFRAG. -Only Fortville can identify L4 fragment packet. +Intel® Ethernet 700 Series did not detect whether packet contain ipv6 extension +options, so L3 type will be shown as IPV6_EXT_UNKNOWN. +Intel® Ethernet 700 Series will identify all unrecognized L4 packet as L4_NONFRAG. +Only Intel® Ethernet 700 Series can identify L4 fragment packet. Send IPv6 only packet and verify L2/L3/L4 corrected:: @@ -184,14 +184,14 @@ Send IPv6+TCP packet and verify L2/L3/L4 corrected:: (outer) L4 type: TCP -Send IPv6 fragment packet and verify L2/L3/L4 corrected(Fortville):: +Send IPv6 fragment packet and verify L2/L3/L4 corrected(Intel® Ethernet 700 Series):: sendp([Ether()/IPv6()/IPv6ExtHdrFragment()/Raw('\0'*60)],iface=txItf) (outer) L3 type: IPV6_EXT_UNKNOWN (outer) L4 type: L4_FRAG -Send IPv6 fragment packet and verify L2/L3/L4 corrected(Niantic,i350):: +Send IPv6 fragment packet and verify L2/L3/L4 corrected(82599,i350):: sendp([Ether()/IPv6()/IPv6ExtHdrFragment()/Raw('\0'*60)],iface=txItf) @@ -202,7 +202,7 @@ Test Case: IP in IPv4 tunnel packet type detect =============================================== This case checked that whether IP in IPv4 tunnel packet can be normally -detected by Fortville. +detected by Intel® Ethernet 700 Series. 
 Send IPv4+IPv4 fragment packet and verify inner and outer L2/L3/L4 corrected::

@@ -283,11 +283,11 @@ Send IPv4+IPv6+ICMP packet and verify inner and outer L2/L3/L4 corrected::

     Inner L4 type: ICMP

-Test Case: IPv6 in IPv4 tunnel packet type detect by niantic and i350
-=====================================================================
+Test Case: IPv6 in IPv4 tunnel packet type detect by 82599 and i350
+===================================================================

 This case checked that whether IPv4 in IPv6 tunnel packet can be normally
-detected by Niantic and i350.
+detected by 82599 and i350.

 Send IPv4+IPv6 packet and verify inner and outer L2/L3/L4 corrected::

@@ -340,7 +340,7 @@ Test Case: IP in IPv6 tunnel packet type detect
 ===============================================

 This case checked that whether IP in IPv6 tunnel packet can be normally
-detected by Fortville.
+detected by Intel® Ethernet 700 Series.

 Send IPv4+IPv4 fragment packet and verify inner and outer L2/L3/L4 corrected::

@@ -427,9 +427,9 @@ Test Case: NVGRE tunnel packet type detect
 ==========================================

 This case checked that whether NVGRE tunnel packet can be normally detected
-by Fortville.
-Fortville did not distinguish GRE/Teredo/Vxlan packets, all those types will
-be displayed as GRENAT.
+by Intel® Ethernet 700 Series.
+Intel® Ethernet 700 Series does not distinguish GRE/Teredo/Vxlan packets; all
+those types will be displayed as GRENAT.

 Send IPv4+NVGRE fragment packet and verify inner and outer L2/L3/L4 corrected::

@@ -560,9 +560,9 @@ Test Case: NVGRE in IPv6 tunnel packet type detect
 ==================================================

 This case checked that whether NVGRE in IPv6 tunnel packet can be normally
-detected by Fortville.
-Fortville did not distinguish GRE/Teredo/Vxlan packets, all those types will
-be displayed as GRENAT.
+detected by Intel® Ethernet 700 Series.
+Intel® Ethernet 700 Series does not distinguish GRE/Teredo/Vxlan packets; all
+those types will be displayed as GRENAT.

 Send IPV6+NVGRE+MAC packet and verify inner and outer L2/L3/L4 corrected::

@@ -777,9 +777,9 @@ Test Case: GRE tunnel packet type detect
 ========================================

 This case checked that whether GRE tunnel packet can be normally detected by
-Fortville.
-Fortville did not distinguish GRE/Teredo/Vxlan packets, all those types will
-be displayed as GRENAT.
+Intel® Ethernet 700 Series.
+Intel® Ethernet 700 Series does not distinguish GRE/Teredo/Vxlan packets; all
+those types will be displayed as GRENAT.

 Send IPv4+GRE+IPv4 fragment packet and verify inner and outer L2/L3/L4 corrected::

@@ -835,9 +835,9 @@ Test Case: Vxlan tunnel packet type detect
 ==========================================

 This case checked that whether Vxlan tunnel packet can be normally detected by
-Fortville.
-Fortville did not distinguish GRE/Teredo/Vxlan packets, all those types
-will be displayed as GRENAT.
+Intel® Ethernet 700 Series.
+Intel® Ethernet 700 Series does not distinguish GRE/Teredo/Vxlan packets; all
+those types will be displayed as GRENAT.

 Add vxlan tunnel port filter on receive port::

diff --git a/test_plans/userspace_ethtool_test_plan.rst b/test_plans/userspace_ethtool_test_plan.rst
index ab3fbc8f..1cc11cf4 100644
--- a/test_plans/userspace_ethtool_test_plan.rst
+++ b/test_plans/userspace_ethtool_test_plan.rst
@@ -43,15 +43,15 @@ is based upon a simple L2 frame reflector.
 Prerequisites
 =============

-notice: On FVL, test case "test_dump_driver_info" need a physical link disconnect,
-this case must do manually at this condition.
+notice: On Intel® Ethernet 700 Series, test case "test_dump_driver_info"
+needs a physical link disconnect; in this condition the case must be run manually.
 Assume port 0 and 1 are connected to the traffic generator, to run the test
 application in linux app environment with 4 lcores, 2 ports::

     ethtool -c f -n 4

-The sample should be validated on Fortville, Niantic and i350 Nics.
+The sample should be validated on Intel® Ethernet 700 Series, 82599 and i350 NICs.

 other requirements:

@@ -76,7 +76,7 @@ linux's ethtool, were exactly the same::

     firmware-version: 0x61bf0001

 Use "link" command to dump all ports link status.
-Notice:: On FVL, link detect need a physical link disconnect::
+Notice:: On Intel® Ethernet 700 Series, link detection needs a physical link disconnect::

     EthApp> link
     Port 0: Up

@@ -226,7 +226,7 @@ Test Case: Pause tx/rx test(performance test)

 Enable port 0 Rx pause frame and then create two packets flows on IXIA port.
 One flow is 100000 normally packet and the second flow is pause frame.
-Check that dut's port 0 Rx speed dropped status. For example, niantic will drop
+Check that dut's port 0 Rx speed dropped status. For example, 82599 will drop
 from 14.8Mpps to 7.49Mpps::

     EthApp> pause 0 rx

diff --git a/test_plans/veb_switch_test_plan.rst b/test_plans/veb_switch_test_plan.rst
index bf2e1cd0..e27afdd4 100644
--- a/test_plans/veb_switch_test_plan.rst
+++ b/test_plans/veb_switch_test_plan.rst
@@ -41,24 +41,25 @@ IEEE EVB tutorial:
 http://www.ieee802.org/802_tutorials/2009-11/evb-tutorial-draft-20091116_v09.pdf

 Virtual Ethernet Bridge (VEB) - This is an IEEE EVB term. A VEB is a VLAN
-Bridge internal to Fortville that bridges the traffic of multiple VSIs over an
-internal virtual network.
+Bridge internal to Intel® Ethernet 700 Series that bridges the traffic of
+multiple VSIs over an internal virtual network.

 Virtual Ethernet Port Aggregator (VEPA) - This is an IEEE EVB term. A VEPA
-multiplexes the traffic of one or more VSIs onto a single Fortville Ethernet
-port. The biggest difference between a VEB and a VEPA is that a VEB can
-switch packets internally between VSIs, whereas a VEPA cannot.
+multiplexes the traffic of one or more VSIs onto a single Intel® Ethernet
+700 Series Ethernet port. The biggest difference between a VEB and a VEPA
+is that a VEB can switch packets internally between VSIs, whereas a VEPA
+cannot.

 Virtual Station Interface (VSI) - This is an IEEE EVB term that defines
 the properties of a virtual machine's (or a physical machine's) connection
-to the network. Each downstream v-port on a Fortville VEB or VEPA defines
-a VSI. A standards-based definition of VSI properties enables network
-management tools to perform virtual machine migration and associated network
-re-configuration in a vendor-neutral manner.
+to the network. Each downstream v-port on an Intel® Ethernet 700 Series VEB
+or VEPA defines a VSI. A standards-based definition of VSI properties enables
+network management tools to perform virtual machine migration and associated
+network re-configuration in a vendor-neutral manner.

 My understanding of VEB is that it's an in-NIC switch(MAC/VLAN), and it can
 support VF->VF, PF->VF, VF->PF packet forwarding according to the NIC internal
-switch. It's similar as Niantic's SRIOV switch.
+switch. It's similar to 82599's SRIOV switch.

 Prerequisites for VEB testing
 =============================

diff --git a/test_plans/vf_daemon_test_plan.rst b/test_plans/vf_daemon_test_plan.rst
index 42c679ac..8e0e762b 100644
--- a/test_plans/vf_daemon_test_plan.rst
+++ b/test_plans/vf_daemon_test_plan.rst
@@ -71,7 +71,7 @@ Prerequisites
 =============

 1. Host PF in DPDK driver. Create 2 VFs from 1 PF with dpdk driver, take
-   Niantic for example::
+   82599 for example::

     ./usertools/dpdk-devbind.py -b igb_uio 81:00.0
     echo 2 >/sys/bus/pci/devices/0000:81:00.0/max_vfs

diff --git a/test_plans/vf_kernel_test_plan.rst b/test_plans/vf_kernel_test_plan.rst
index f711f54a..16e4fc47 100644
--- a/test_plans/vf_kernel_test_plan.rst
+++ b/test_plans/vf_kernel_test_plan.rst
@@ -48,7 +48,7 @@ below features.
 Test Case 1: Set up environment and load driver
 ===============================================

 1. Get the pci device id of DUT, load ixgbe driver to required version,
-   take Niantic for example::
+   take 82599 for example::

     rmmod ixgbe
     insmod ixgbe.ko

@@ -305,7 +305,7 @@ Steps:
    can be received by DPDK PF

 Note:
-Niantic NIC un-supports this case.
+The 82599 NIC does not support this case.


 Test Case 10: RSS

@@ -330,7 +330,7 @@ Steps:
 4. Check kernel VF each queue can receive packets

 Note:
-Niantic NIC un-supports this case.
+The 82599 NIC does not support this case.


 Test Case 11: DPDK PF + kernel VF + DPDK VF

diff --git a/test_plans/vf_pf_reset_test_plan.rst b/test_plans/vf_pf_reset_test_plan.rst
index d3a9f505..ff3aa326 100644
--- a/test_plans/vf_pf_reset_test_plan.rst
+++ b/test_plans/vf_pf_reset_test_plan.rst
@@ -41,7 +41,7 @@ Prerequisites

 1. Hardware:

-   * Fortville 4*10G NIC (driver: i40e)
+   * Intel® Ethernet 700 Series 4*10G NIC (driver: i40e)
   * tester: ens3f0
   * dut: ens5f0(pf0), ens5f1(pf1)
   * ens3f0 connect with ens5f0 by cable

diff --git a/test_plans/vf_rss_test_plan.rst b/test_plans/vf_rss_test_plan.rst
index 9e575546..5bb2f182 100644
--- a/test_plans/vf_rss_test_plan.rst
+++ b/test_plans/vf_rss_test_plan.rst
@@ -34,7 +34,7 @@ VF RSS - Configuring Hash Function Tests
 ========================================

-This document provides test plan for testing the function of Fortville:
+This document provides the test plan for testing the function of Intel® Ethernet 700 Series:
 Support configuring hash functions.

 Prerequisites

@@ -53,15 +53,15 @@ handled by a different logical core.

 #. The receive packet is parsed into the header fields used by the hash
    operation (such as IP addresses, TCP port, etc.)

-#. A hash calculation is performed. The Fortville supports three hash function:
+#. A hash calculation is performed. The Intel® Ethernet 700 Series supports three hash functions:
    Toeplitz, simple XOR and their Symmetric RSS.

 #.
 Hash results are used as an index into a 128/512 entry 'redirection table'.

-#. Niantic VF only supports simple default hash algorithm(simple). Fortville NICs
+#. 82599 VF only supports the default hash algorithm (simple). Intel® Ethernet 700 Series NICs
    support all hash algorithm only used dpdk driver on host. when used kernel driver on host,
-   fortville NICs only support default hash algorithm(simple).
+   Intel® Ethernet 700 Series NICs only support the default hash algorithm (simple).

 The RSS RETA update feature is designed to make RSS more flexible by allowing
 users to define the correspondence between the seven LSBs of hash result and

diff --git a/test_plans/vf_single_core_perf_test_plan.rst b/test_plans/vf_single_core_perf_test_plan.rst
index 8c4ff727..6de790ea 100644
--- a/test_plans/vf_single_core_perf_test_plan.rst
+++ b/test_plans/vf_single_core_perf_test_plan.rst
@@ -39,12 +39,14 @@ Prerequisites

 1. Nic single core performance test requirements:

-    1.1) FVL25G: two dual port FVL25G nics, all installed on the same socket,
-         pick one port per nic.
-    1.2) NNT10G: four 82599 nics, all installed on the same socket,
-         pick one port per nic.
-    1.3) CVL100G: one CVL100G nics, all installed on the same socket,
-         pick one port per nic.
+    1.1) Intel® Ethernet Network Adapter XXV710-DA2:
+         two dual port Intel® Ethernet Network Adapter XXV710-DA2 nics, all
+         installed on the same socket, pick one port per nic.
+    1.2) 82599/500 Series 10G:
+         four 82599 nics, all installed on the same socket, pick one port per nic.
+    1.3) Intel® Ethernet Network Adapter E810-CQDA2:
+         one Intel® Ethernet Network Adapter E810-CQDA2 nic, all installed on the
+         same socket, pick one port per nic.

 2. Software::

@@ -62,9 +64,9 @@ Prerequisites
 3.
 Connect all the selected nic ports to traffic generator(IXIA,TREX,
    PKTGEN) ports(TG ports)::

-    2 TG 25g ports for FVL25G ports
-    4 TG 10g ports for 4 NNT10G ports
-    1 TG 100g ports for CVL100G port
+    2 TG 25g ports for Intel® Ethernet Network Adapter XXV710-DA2 ports
+    4 TG 10g ports for 4 82599/500 Series 10G ports
+    1 TG 100g port for Intel® Ethernet Network Adapter E810-CQDA2 port

 Test Case : Vf Single Core Performance Measurement
 ==================================================

diff --git a/test_plans/vf_smoke_test_plan.rst b/test_plans/vf_smoke_test_plan.rst
index 43330c10..54cf4c42 100644
--- a/test_plans/vf_smoke_test_plan.rst
+++ b/test_plans/vf_smoke_test_plan.rst
@@ -46,7 +46,7 @@ Prerequisites

 1. Hardware:

-    niantic/fortville/columbiaville
+    82599/Intel® Ethernet 700 Series/Intel® Ethernet 800 Series

 2. Software:

diff --git a/test_plans/vlan_ethertype_config_test_plan.rst b/test_plans/vlan_ethertype_config_test_plan.rst
index e620e6c7..733efdc8 100644
--- a/test_plans/vlan_ethertype_config_test_plan.rst
+++ b/test_plans/vlan_ethertype_config_test_plan.rst
@@ -46,7 +46,7 @@ Prerequisites
 =============

 1. Hardware:
-   one Fortville NIC (4x 10G or 2x10G or 2x40G or 1x10G)
+   one Intel® Ethernet 700 Series NIC (4x 10G or 2x10G or 2x40G or 1x10G)

 2. Software:

diff --git a/test_plans/vm_power_manager_test_plan.rst b/test_plans/vm_power_manager_test_plan.rst
index f85d2ef9..ca66ba06 100644
--- a/test_plans/vm_power_manager_test_plan.rst
+++ b/test_plans/vm_power_manager_test_plan.rst
@@ -64,7 +64,7 @@ Prerequisites

 1. Hardware:
   - CPU: Haswell, IVB(CrownPass)
-  - NIC: Niantic 82599
+  - NIC: 82599

 2. BIOS:

diff --git a/test_plans/vmdq_dcb_test_plan.rst b/test_plans/vmdq_dcb_test_plan.rst
index a4beaa93..4820372b 100644
--- a/test_plans/vmdq_dcb_test_plan.rst
+++ b/test_plans/vmdq_dcb_test_plan.rst
@@ -30,9 +30,9 @@ ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
 OF THE POSSIBILITY OF SUCH DAMAGE.
-===================================================================
-Fortville: Support of RX Packet Filtering using VMDQ & DCB Features
-===================================================================
+====================================================================================
+Intel® Ethernet 700 Series: Support of RX Packet Filtering using VMDQ & DCB Features
+====================================================================================

 The Intel Network Interface Card(e.g. XL710), supports a number of packet
 filtering functions which can be used to distribute incoming packets

diff --git a/test_plans/vmdq_test_plan.rst b/test_plans/vmdq_test_plan.rst
index ef64823c..8f6fd3bc 100644
--- a/test_plans/vmdq_test_plan.rst
+++ b/test_plans/vmdq_test_plan.rst
@@ -34,9 +34,9 @@ VMDQ Tests
 ==========

-The 1G, 10G 82599 and 40G FVL Network Interface Card (NIC), supports a number of packet
-filtering functions which can be used to distribute incoming packets into a
-number of reception (RX) queues. VMDQ is a filtering
+The 1G, 10G 82599 and 40G Intel® Ethernet 700 Series Network Interface Card (NIC)
+supports a number of packet filtering functions which can be used to distribute
+incoming packets into a number of reception (RX) queues. VMDQ is a filtering
 functions which operate on VLAN-tagged packets to distribute those packets
 among up to 512 RX queues.
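As an illustration of how this filtering reaches 512 queues, the VMDQ+DCB split can be modelled as 64 pools selected by VLAN ID times 8 traffic classes selected by the VLAN user priority. The function below is a hypothetical sketch of that mapping, not driver code:

```python
# Hypothetical model of VMDQ+DCB queue selection: the VLAN tag picks one of
# 64 pools, the VLAN user priority picks the traffic-class queue inside that
# pool, giving 64 * 8 = 512 RX queues. Illustrative only, not DPDK code.

NUM_POOLS = 64
QUEUES_PER_POOL = 8

def vmdq_dcb_queue(vlan_id, user_priority):
    """Return the flat RX queue index for a (VLAN ID, priority) pair."""
    pool = vlan_id % NUM_POOLS              # VMDQ: VLAN ID selects the pool
    tc = user_priority % QUEUES_PER_POOL    # DCB: priority selects the queue
    return pool * QUEUES_PER_POOL + tc
```

With this model, VLAN 0/priority 0 lands on queue 0 and VLAN 63/priority 7 on queue 511, matching the "up to 512 RX queues" figure above.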
@@ -63,14 +63,16 @@ Prerequisites
   reception, the other for transmission
 - The traffic generator being used is configured to send to the application RX
   port a stream of packets with VLAN tags, where the VLAN IDs increment from 0
-  to the pools numbers(e.g: for FVL spirit, it's 63, inclusive) as well as the MAC address from
-  52:54:00:12:[port_index]:00 to 52:54:00:12:[port_index]:3e and the VLAN user priority field increments from 0 to 7
+  to the pool numbers (e.g. for Intel® Ethernet Converged Network Adapter XL710-QDA2,
+  it's 63, inclusive) as well as the MAC address from 52:54:00:12:[port_index]:00 to
+  52:54:00:12:[port_index]:3e and the VLAN user priority field increments from 0 to 7
   (inclusive) for each VLAN ID. In our case port_index = 0 or 1.

 Test Case: Measure VMDQ pools queues
 ------------------------------------

 1. Put different number of pools: in the case of 10G 82599 Nic is 64, in the case
-   of FVL spirit is 63,in case of FVL eagle is 34.
+   of Intel® Ethernet Converged Network Adapter XL710-QDA2 is 63, in case of Intel®
+   Ethernet Converged Network Adapter X710-DA4 is 34.
 2. Start traffic transmission using approx 10% of line rate.
 3. After a number of seconds, e.g. 15, stop traffic, and ensure no traffic loss
   (<0.001%) has occurred.

diff --git a/test_plans/vxlan_test_plan.rst b/test_plans/vxlan_test_plan.rst
index f7bdeca3..5ab5be7f 100644
--- a/test_plans/vxlan_test_plan.rst
+++ b/test_plans/vxlan_test_plan.rst
@@ -30,31 +30,31 @@ ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
 OF THE POSSIBILITY OF SUCH DAMAGE.

-=====================
-Fortville Vxlan Tests
-=====================
+======================================
+Intel® Ethernet 700 Series Vxlan Tests
+======================================

 Cloud providers build virtual network overlays over existing network
 infrastructure that provide tenant isolation and scaling.
 Tunneling layers added to the packets carry the virtual networking frames
 over existing Layer 2 and IP networks. Conceptually, this is similar to
-creating virtual private networks over the Internet. Fortville will
-process these tunneling layers by the hardware.
+creating virtual private networks over the Internet. Intel® Ethernet
+700 Series will process these tunneling layers in hardware.

-This document provides test plan for Fortville vxlan packet detecting,
-checksum computing and filtering.
+This document provides the test plan for Intel® Ethernet 700 Series vxlan
+packet detection, checksum computation and filtering.

 Prerequisites
 =============
-1x Intel® X710 (Fortville) NICs (2x 40GbE full duplex optical ports per NIC)
-plugged into the available PCIe Gen3 8-lane slot.
+1x Intel® X710 (Intel® Ethernet 700 Series) NIC (2x 40GbE full duplex
+optical ports per NIC) plugged into the available PCIe Gen3 8-lane slot.

 1x Intel® XL710-DA4 (Eagle Fountain) (1x 10GbE full duplex optical ports
 per NIC) plugged into the available PCIe Gen3 8-lane slot.

 DUT board must be two sockets system and each cpu have more than 8 lcores.

-For fortville NICs need change the value of CONFIG_RTE_LIBRTE_I40E_INC_VECTOR
-in dpdk/config/common_base file to n.
+For Intel® Ethernet 700 Series NICs, change the value of
+CONFIG_RTE_LIBRTE_I40E_INC_VECTOR in the dpdk/config/common_base file to n.

 Test Case: Vxlan ipv4 packet detect
 ===================================
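For reference while reading the detection cases, a VXLAN frame is the inner packet wrapped in an 8-byte VXLAN header (RFC 7348) carried over UDP destination port 4789. A minimal sketch of that header, with hypothetical helper names:

```python
import struct

# Minimal sketch of the 8-byte VXLAN header (RFC 7348): flags byte with the
# I bit set, reserved bytes, and a 24-bit VNI. Helper names are hypothetical;
# this only illustrates the encapsulation the test cases exercise.

VXLAN_UDP_PORT = 4789  # IANA-assigned VXLAN destination port

def vxlan_header(vni):
    """Build the 8-byte VXLAN header: I flag set, 24-bit VNI, reserved byte."""
    return struct.pack("!II", 0x08000000, (vni & 0xFFFFFF) << 8)

def vxlan_vni(header):
    """Parse the VNI back out of an 8-byte VXLAN header."""
    flags, vni_field = struct.unpack("!II", header)
    if not flags & 0x08000000:
        raise ValueError("VNI valid flag (I) not set")
    return vni_field >> 8
```

The tunnel port filter added in the test cases tells the NIC to treat UDP port 4789 payloads as VXLAN so the inner headers can be recognized.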