From patchwork Thu Oct 21 05:08:31 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 102516
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Xueming Li
To: Zhang Yuying, Li Xiaoyun
CC: Jerin Jacob, Ferruh Yigit, Andrew Rybchenko, Viacheslav Ovsiienko,
 Thomas Monjalon, Lior Margalit, Ananyev Konstantin, Ajit Khaparde
Date: Thu, 21 Oct 2021 13:08:31 +0800
Message-ID: <20211021050832.2599691-7-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20211021050832.2599691-1-xuemingl@nvidia.com>
References: <20210727034204.20649-1-xuemingl@nvidia.com>
 <20211021050832.2599691-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH v12 6/7] app/testpmd: force shared Rx queue polled on same core

A shared Rx queue must be polled on the same core. This patch checks the
forwarding configuration and stops forwarding if a shared Rx queue is
scheduled on multiple cores. It is suggested to use the same number of Rx
queues and polling cores.

Signed-off-by: Xueming Li
Acked-by: Xiaoyun Li
---
 app/test-pmd/config.c  | 105 +++++++++++++++++++++++++++++++++++++++++
 app/test-pmd/testpmd.c |   5 +-
 app/test-pmd/testpmd.h |   2 +
 3 files changed, 111 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index e4bbf457916..cad78350dcc 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -3067,6 +3067,111 @@ port_rss_hash_key_update(portid_t port_id, char rss_type[], uint8_t *hash_key,
 	}
 }
 
+/*
+ * Check whether a shared Rx queue is scheduled on other lcores.
+ */
+static bool
+fwd_stream_on_other_lcores(uint16_t domain_id, lcoreid_t src_lc,
+			   portid_t src_port, queueid_t src_rxq,
+			   uint32_t share_group, queueid_t share_rxq)
+{
+	streamid_t sm_id;
+	streamid_t nb_fs_per_lcore;
+	lcoreid_t nb_fc;
+	lcoreid_t lc_id;
+	struct fwd_stream *fs;
+	struct rte_port *port;
+	struct rte_eth_dev_info *dev_info;
+	struct rte_eth_rxconf *rxq_conf;
+
+	nb_fc = cur_fwd_config.nb_fwd_lcores;
+	/* Check remaining cores. */
+	for (lc_id = src_lc + 1; lc_id < nb_fc; lc_id++) {
+		sm_id = fwd_lcores[lc_id]->stream_idx;
+		nb_fs_per_lcore = fwd_lcores[lc_id]->stream_nb;
+		for (; sm_id < fwd_lcores[lc_id]->stream_idx + nb_fs_per_lcore;
+		     sm_id++) {
+			fs = fwd_streams[sm_id];
+			port = &ports[fs->rx_port];
+			dev_info = &port->dev_info;
+			rxq_conf = &port->rx_conf[fs->rx_queue];
+			if ((dev_info->dev_capa & RTE_ETH_DEV_CAPA_RXQ_SHARE)
+			    == 0 || rxq_conf->share_group == 0)
+				/* Not shared rxq. */
+				continue;
+			if (domain_id != port->dev_info.switch_info.domain_id)
+				continue;
+			if (rxq_conf->share_group != share_group)
+				continue;
+			if (rxq_conf->share_qid != share_rxq)
+				continue;
+			printf("Shared Rx queue group %u queue %hu can't be scheduled on different cores:\n",
+			       share_group, share_rxq);
+			printf("  lcore %hhu Port %hu queue %hu\n",
+			       src_lc, src_port, src_rxq);
+			printf("  lcore %hhu Port %hu queue %hu\n",
+			       lc_id, fs->rx_port, fs->rx_queue);
+			printf("Please use --nb-cores=%hu to limit number of forwarding cores\n",
+			       nb_rxq);
+			return true;
+		}
+	}
+	return false;
+}
+
+/*
+ * Check shared Rx queue configuration.
+ *
+ * A shared group must not be scheduled on different cores.
+ */
+bool
+pkt_fwd_shared_rxq_check(void)
+{
+	streamid_t sm_id;
+	streamid_t nb_fs_per_lcore;
+	lcoreid_t nb_fc;
+	lcoreid_t lc_id;
+	struct fwd_stream *fs;
+	uint16_t domain_id;
+	struct rte_port *port;
+	struct rte_eth_dev_info *dev_info;
+	struct rte_eth_rxconf *rxq_conf;
+
+	if (rxq_share == 0)
+		return true;
+	nb_fc = cur_fwd_config.nb_fwd_lcores;
+	/*
+	 * Check streams on each core; make sure the same switch domain +
+	 * group + queue is not scheduled on other cores.
+	 */
+	for (lc_id = 0; lc_id < nb_fc; lc_id++) {
+		sm_id = fwd_lcores[lc_id]->stream_idx;
+		nb_fs_per_lcore = fwd_lcores[lc_id]->stream_nb;
+		for (; sm_id < fwd_lcores[lc_id]->stream_idx + nb_fs_per_lcore;
+		     sm_id++) {
+			fs = fwd_streams[sm_id];
+			/* Update lcore info of the stream being scheduled. */
+			fs->lcore = fwd_lcores[lc_id];
+			port = &ports[fs->rx_port];
+			dev_info = &port->dev_info;
+			rxq_conf = &port->rx_conf[fs->rx_queue];
+			if ((dev_info->dev_capa & RTE_ETH_DEV_CAPA_RXQ_SHARE)
+			    == 0 || rxq_conf->share_group == 0)
+				/* Not shared rxq. */
+				continue;
+			/* Check that shared rxq is not scheduled on remaining cores. */
+			domain_id = port->dev_info.switch_info.domain_id;
+			if (fwd_stream_on_other_lcores(domain_id, lc_id,
+						       fs->rx_port,
+						       fs->rx_queue,
+						       rxq_conf->share_group,
+						       rxq_conf->share_qid))
+				return false;
+		}
+	}
+	return true;
+}
+
 /*
  * Setup forwarding configuration for each logical core.
  */
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 80337bad382..d76d298a4b9 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2309,6 +2309,10 @@ start_packet_forwarding(int with_tx_first)
 
 	fwd_config_setup();
 
+	pkt_fwd_config_display(&cur_fwd_config);
+	if (!pkt_fwd_shared_rxq_check())
+		return;
+
 	port_fwd_begin = cur_fwd_config.fwd_eng->port_fwd_begin;
 	if (port_fwd_begin != NULL) {
 		for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) {
@@ -2338,7 +2342,6 @@ start_packet_forwarding(int with_tx_first)
 	if(!no_flush_rx)
 		flush_fwd_rx_queues();
 
-	pkt_fwd_config_display(&cur_fwd_config);
 	rxtx_config_display();
 	fwd_stats_reset();
 
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 63f9913deb6..9482dab3071 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -147,6 +147,7 @@ struct fwd_stream {
 	uint64_t core_cycles; /**< used for RX and TX processing */
 	struct pkt_burst_stats rx_burst_stats;
 	struct pkt_burst_stats tx_burst_stats;
+	struct fwd_lcore *lcore; /**< Lcore being scheduled. */
 };
 
 /**
@@ -842,6 +843,7 @@ void port_summary_header_display(void);
 void rx_queue_infos_display(portid_t port_idi, uint16_t queue_id);
 void tx_queue_infos_display(portid_t port_idi, uint16_t queue_id);
 void fwd_lcores_config_display(void);
+bool pkt_fwd_shared_rxq_check(void);
 void pkt_fwd_config_display(struct fwd_config *cfg);
 void rxtx_config_display(void);
 void fwd_config_setup(void);
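
A sketch of how the check can be exercised, for reviewers. This invocation is
illustrative only: the --rxq-share option comes from an earlier patch in this
series, and the device arguments, core counts, and queue counts below are
hypothetical, not taken from this patch:

    dpdk-testpmd -l 0-3 -a <PCI_BDF>,representor=pf0vf[0-1] -- -i \
        --rxq-share=2 --rxq=2 --txq=2 --nb-cores=2

If the forwarding configuration maps streams of one shared Rx queue group
(same switch domain + share_group + share_qid) onto different lcores,
start_packet_forwarding() now prints the conflicting lcore/port/queue tuples
and refuses to start; lowering --nb-cores so each shared group stays on a
single polling core clears the error.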