From patchwork Tue Aug 2 17:51:51 2022
X-Patchwork-Submitter: Hanumanth Pothula
X-Patchwork-Id: 114545
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Hanumanth Pothula
To: Aman Singh, Yuying Zhang
Cc: dev@dpdk.org, Hanumanth Pothula
Subject: [PATCH v2 1/1] app/testpmd: add command line argument 'nic-to-pmd-rx-metadata'
Date: Tue, 2 Aug 2022 23:21:51 +0530
Message-ID: <20220802175151.2277437-1-hpothula@marvell.com>
In-Reply-To: <20220801131338.1710737-1-hpothula@marvell.com>
References: <20220801131338.1710737-1-hpothula@marvell.com>

Presently, Rx metadata is sent to the PMD by default, which leads to a
performance drop because processing it in the Rx path takes extra cycles.
Hence, introduce the command line argument 'nic-to-pmd-rx-metadata' to
control passing Rx metadata to the PMD. It is disabled by default.
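
For illustration only (this is not part of the patch and not testpmd's
actual helper), the per-port negotiation that the new flag gates boils
down to the ethdev call sketched below. The requested feature mask and
the error handling are placeholder choices, and since
rte_eth_rx_metadata_negotiate() was still experimental at the time,
building such code needs ALLOW_EXPERIMENTAL_API:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#include <rte_ethdev.h>

/* Sketch: ask the NIC to deliver Rx metadata (user flag, user mark,
 * tunnel id) to the PMD for a given port. This mirrors what enabling
 * '--nic-to-pmd-rx-metadata' implies before port configuration.
 */
void
negotiate_rx_metadata(uint16_t port_id)
{
	uint64_t features = RTE_ETH_RX_METADATA_USER_FLAG |
			    RTE_ETH_RX_METADATA_USER_MARK |
			    RTE_ETH_RX_METADATA_TUNNEL_ID;
	int ret;

	/* The driver clears the feature bits it cannot deliver; a negative
	 * return value means negotiation is not supported for this port.
	 */
	ret = rte_eth_rx_metadata_negotiate(port_id, &features);
	if (ret != 0)
		printf("port %u: Rx metadata negotiation failed: %d\n",
		       port_id, ret);
	else
		printf("port %u: negotiated Rx metadata features 0x%" PRIx64 "\n",
		       port_id, features);
}

When '--nic-to-pmd-rx-metadata' is not given, this negotiation is skipped
altogether, so the NIC keeps its default behaviour and the Rx path does
not spend extra cycles on per-packet metadata.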
Signed-off-by: Hanumanth Pothula

v2:
 - taken care of alignment issues
 - renamed command line argument from rx-metadata to nic-to-pmd-rx-metadata
 - renamed variable name from rx-metadata to nic_to_pmd_rx_metadata

Acked-by: Aman Singh
---
 app/test-pmd/parameters.c | 4 ++++
 app/test-pmd/testpmd.c    | 6 +++++-
 app/test-pmd/testpmd.h    | 2 ++
 3 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index e3c9757f3f..a381945492 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -213,6 +213,7 @@ usage(char* progname)
 	printf("  --hairpin-mode=0xXX: bitmask set the hairpin port mode.\n"
 	       "    0x10 - explicit Tx rule, 0x02 - hairpin ports paired\n"
 	       "    0x01 - hairpin ports loop, 0x00 - hairpin port self\n");
+	printf("  --nic-to-pmd-rx-metadata: let the NIC deliver per-packet Rx metadata to PMD\n");
 }
 
 #ifdef RTE_LIB_CMDLINE
@@ -710,6 +711,7 @@ launch_args_parse(int argc, char** argv)
 		{ "record-burst-stats",		0, 0, 0 },
 		{ PARAM_NUM_PROCS,		1, 0, 0 },
 		{ PARAM_PROC_ID,		1, 0, 0 },
+		{ "nic-to-pmd-rx-metadata",	0, 0, 0 },
 		{ 0, 0, 0, 0 },
 	};
 
@@ -1510,6 +1512,8 @@ launch_args_parse(int argc, char** argv)
 				num_procs = atoi(optarg);
 			if (!strcmp(lgopts[opt_idx].name, PARAM_PROC_ID))
 				proc_id = atoi(optarg);
+			if (!strcmp(lgopts[opt_idx].name, "nic-to-pmd-rx-metadata"))
+				nic_to_pmd_rx_metadata = 1;
 			break;
 		case 'h':
 			usage(argv[0]);
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index addcbcac85..2b17d4f757 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -411,6 +411,9 @@ uint8_t clear_ptypes = true;
 /* Hairpin ports configuration mode. */
 uint16_t hairpin_mode;
 
+/* Send Rx metadata */
+uint8_t nic_to_pmd_rx_metadata;
+
 /* Pretty printing of ethdev events */
 static const char * const eth_event_desc[] = {
 	[RTE_ETH_EVENT_UNKNOWN] = "unknown",
@@ -1628,7 +1631,8 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
 	int ret;
 	int i;
 
-	eth_rx_metadata_negotiate_mp(pid);
+	if (nic_to_pmd_rx_metadata)
+		eth_rx_metadata_negotiate_mp(pid);
 
 	port->dev_conf.txmode = tx_mode;
 	port->dev_conf.rxmode = rx_mode;
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index fb2f5195d3..294a9c8cf4 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -621,6 +621,8 @@ extern struct rte_ether_addr peer_eth_addrs[RTE_MAX_ETHPORTS];
 extern uint32_t burst_tx_delay_time; /**< Burst tx delay time(us) for mac-retry. */
 extern uint32_t burst_tx_retry_num; /**< Burst tx retry number for mac-retry. */
 
+extern uint8_t nic_to_pmd_rx_metadata;
+
 #ifdef RTE_LIB_GRO
 #define GRO_DEFAULT_ITEM_NUM_PER_FLOW 32
 #define GRO_DEFAULT_FLOW_NUM	(RTE_GRO_MAX_BURST_ITEM_NUM / \