From patchwork Sun Mar 6 23:23:09 2022
X-Patchwork-Submitter: Dmitry Kozlyuk
X-Patchwork-Id: 108555
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Dmitry Kozlyuk
Cc: Xiaoyun Li, Yuying Zhang, Aman Deep Singh, Ferruh Yigit, Andrew Rybchenko, Thomas Monjalon, Matan Azrad
Subject: [PATCH v2 1/2] ethdev: prohibit polling of a stopped queue
Date: Mon, 7 Mar 2022 01:23:09 +0200
Message-ID: <20220306232310.613552-2-dkozlyuk@nvidia.com>
In-Reply-To: <20220306232310.613552-1-dkozlyuk@nvidia.com>
References: <20220113092103.282538-1-dkozlyuk@nvidia.com> <20220306232310.613552-1-dkozlyuk@nvidia.com>
List-Id: DPDK patches and discussions

Whether it is allowed to call Rx/Tx functions for a stopped queue
was undocumented. Some PMDs make this a no-op, either by explicitly
checking the queue state or by the way their routines are implemented
or the HW works.

No-op behavior may be convenient for application developers, but it
also means that pollers of stopped queues go all the way down to the
PMD Rx/Tx routines, wasting cycles. Some PMDs would check the queue
state on the data path even though a particular application may never
need it. Also, the use cases for stopping queues or starting them
deferred do not logically require polling stopped queues.

Use case 1: a secondary process that was polling the queue has
crashed, and the primary is doing a recovery to free all mbufs.
By definition, the queue to be restarted is not polled.

Use case 2: deferred queue start or queue reconfiguration. The polling
thread must be synchronized anyway, because queue start and stop are
non-atomic.

Prohibit calling Rx/Tx functions on stopped queues.

Fixes: 0748be2cf9a2 ("ethdev: queue start and stop")
Cc: stable@dpdk.org

Signed-off-by: Dmitry Kozlyuk
Acked-by: Matan Azrad
---
 lib/ethdev/rte_ethdev.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index c2d1f9a972..9f12a6043c 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -74,7 +74,7 @@
  * rte_eth_rx_queue_setup()), it must call rte_eth_dev_stop() first to stop the
  * device and then do the reconfiguration before calling rte_eth_dev_start()
  * again. The transmit and receive functions should not be invoked when the
- * device is stopped.
+ * device is stopped or when the queue is stopped (for that queue).
  *
  * Please note that some configuration is not stored between calls to
  * rte_eth_dev_stop()/rte_eth_dev_start(). The following configuration will

From patchwork Sun Mar 6 23:23:10 2022
X-Patchwork-Submitter: Dmitry Kozlyuk
X-Patchwork-Id: 108554
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Dmitry Kozlyuk
Cc: Xiaoyun Li, Yuying Zhang, Aman Deep Singh, Ferruh Yigit, Andrew Rybchenko, Thomas Monjalon, Matan Azrad
Subject: [PATCH v2 2/2] app/testpmd: do not poll stopped queues
Date: Mon, 7 Mar 2022 01:23:10 +0200
Message-ID: <20220306232310.613552-3-dkozlyuk@nvidia.com>
In-Reply-To: <20220306232310.613552-1-dkozlyuk@nvidia.com>
References: <20220113092103.282538-1-dkozlyuk@nvidia.com> <20220306232310.613552-1-dkozlyuk@nvidia.com>
List-Id: DPDK patches and discussions

Calling Rx/Tx functions on a stopped queue is not supported.
Do not run packet forwarding for streams that use stopped queues.

Each stream has a read-only "disabled" field, so that the lcore
function can skip such streams. Forwarding engines can set this field
using a new "stream_init" callback function by checking the relevant
queue states. A helper function is provided to check whether a given
Rx queue, Tx queue, or both of them are stopped.

Fixes: 5f4ec54f1d16 ("testpmd: queue start and stop")
Cc: stable@dpdk.org

Signed-off-by: Dmitry Kozlyuk
Acked-by: Matan Azrad
---
 app/test-pmd/5tswap.c         | 13 ++++++++
 app/test-pmd/csumonly.c       | 13 ++++++++
 app/test-pmd/flowgen.c        | 13 ++++++++
 app/test-pmd/icmpecho.c       | 13 ++++++++
 app/test-pmd/ieee1588fwd.c    | 13 ++++++++
 app/test-pmd/iofwd.c          | 13 ++++++++
 app/test-pmd/macfwd.c         | 13 ++++++++
 app/test-pmd/noisy_vnf.c      | 13 ++++++++
 app/test-pmd/rxonly.c         | 13 ++++++++
 app/test-pmd/shared_rxq_fwd.c | 13 ++++++++
 app/test-pmd/testpmd.c        | 57 ++++++++++++++++++++++++++++++++++-
 app/test-pmd/testpmd.h        |  4 +++
 app/test-pmd/txonly.c         | 13 ++++++++
 13 files changed, 203 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/5tswap.c b/app/test-pmd/5tswap.c
index 629d3e0d31..2aa1f1843b 100644
--- a/app/test-pmd/5tswap.c
+++ b/app/test-pmd/5tswap.c
@@ -185,9 +185,22 @@ pkt_burst_5tuple_swap(struct fwd_stream *fs)
 	get_end_cycles(fs, start_tsc);
 }
 
+static int
+stream_init_5tuple_swap(struct fwd_stream *fs)
+{
+	bool rx_stopped, tx_stopped;
+	int ret;
+
+	ret = fwd_stream_get_stopped_queues(fs, &rx_stopped, &tx_stopped);
+	if (ret == 0)
+		fs->disabled = rx_stopped || tx_stopped;
+	return ret;
+}
+
 struct fwd_engine five_tuple_swap_fwd_engine = {
 	.fwd_mode_name = "5tswap",
 	.port_fwd_begin = NULL,
 	.port_fwd_end = NULL,
+	.stream_init = stream_init_5tuple_swap,
 	.packet_fwd = pkt_burst_5tuple_swap,
 };

diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 5274d498ee..a031cae2ca 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -1178,9 +1178,22 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 	get_end_cycles(fs, start_tsc);
 }
 
+static int
+stream_init_checksum_forward(struct fwd_stream *fs)
+{
+	bool rx_stopped, tx_stopped;
+	int ret;
+
+	ret = fwd_stream_get_stopped_queues(fs, &rx_stopped, &tx_stopped);
+	if (ret == 0)
+		fs->disabled = rx_stopped || tx_stopped;
+	return ret;
+}
+
 struct fwd_engine csum_fwd_engine = {
 	.fwd_mode_name = "csum",
 	.port_fwd_begin = NULL,
 	.port_fwd_end = NULL,
+	.stream_init = stream_init_checksum_forward,
 	.packet_fwd = pkt_burst_checksum_forward,
 };

diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index 9ceef3b54a..e2c1bfd82c 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -207,9 +207,22 @@ flowgen_begin(portid_t pi)
 	return 0;
 }
 
+static int
+flowgen_stream_init(struct fwd_stream *fs)
+{
+	bool rx_stopped, tx_stopped;
+	int ret;
+
+	ret = fwd_stream_get_stopped_queues(fs, &rx_stopped, &tx_stopped);
+	if (ret == 0)
+		fs->disabled = rx_stopped || tx_stopped;
+	return ret;
+}
+
 struct fwd_engine flow_gen_engine = {
 	.fwd_mode_name = "flowgen",
 	.port_fwd_begin = flowgen_begin,
 	.port_fwd_end = NULL,
+	.stream_init = flowgen_stream_init,
 	.packet_fwd = pkt_burst_flow_gen,
 };

diff --git a/app/test-pmd/icmpecho.c b/app/test-pmd/icmpecho.c
index 99c94cb282..dd3699ff3b 100644
--- a/app/test-pmd/icmpecho.c
+++ b/app/test-pmd/icmpecho.c
@@ -512,9 +512,22 @@ reply_to_icmp_echo_rqsts(struct fwd_stream *fs)
 	get_end_cycles(fs, start_tsc);
 }
 
+static int
+icmpecho_stream_init(struct fwd_stream *fs)
+{
+	bool rx_stopped, tx_stopped;
+	int ret;
+
+	ret = fwd_stream_get_stopped_queues(fs, &rx_stopped, &tx_stopped);
+	if (ret == 0)
+		fs->disabled = rx_stopped || tx_stopped;
+	return ret;
+}
+
 struct fwd_engine icmp_echo_engine = {
 	.fwd_mode_name = "icmpecho",
 	.port_fwd_begin = NULL,
 	.port_fwd_end = NULL,
+	.stream_init = icmpecho_stream_init,
 	.packet_fwd = reply_to_icmp_echo_rqsts,
 };

diff --git a/app/test-pmd/ieee1588fwd.c b/app/test-pmd/ieee1588fwd.c
index 9ff817aa68..f9f73f2c14 100644
--- a/app/test-pmd/ieee1588fwd.c
+++ b/app/test-pmd/ieee1588fwd.c
@@ -211,9 +211,22 @@ port_ieee1588_fwd_end(portid_t pi)
 	rte_eth_timesync_disable(pi);
 }
 
+static int
+port_ieee1588_stream_init(struct fwd_stream *fs)
+{
+	bool rx_stopped, tx_stopped;
+	int ret;
+
+	ret = fwd_stream_get_stopped_queues(fs, &rx_stopped, &tx_stopped);
+	if (ret == 0)
+		fs->disabled = rx_stopped || tx_stopped;
+	return ret;
+}
+
 struct fwd_engine ieee1588_fwd_engine = {
 	.fwd_mode_name = "ieee1588",
 	.port_fwd_begin = port_ieee1588_fwd_begin,
 	.port_fwd_end = port_ieee1588_fwd_end,
+	.stream_init = port_ieee1588_stream_init,
 	.packet_fwd = ieee1588_packet_fwd,
 };

diff --git a/app/test-pmd/iofwd.c b/app/test-pmd/iofwd.c
index 19cd920f70..b736a2a3bc 100644
--- a/app/test-pmd/iofwd.c
+++ b/app/test-pmd/iofwd.c
@@ -88,9 +88,22 @@ pkt_burst_io_forward(struct fwd_stream *fs)
 	get_end_cycles(fs, start_tsc);
 }
 
+static int
+stream_init_forward(struct fwd_stream *fs)
+{
+	bool rx_stopped, tx_stopped;
+	int ret;
+
+	ret = fwd_stream_get_stopped_queues(fs, &rx_stopped, &tx_stopped);
+	if (ret == 0)
+		fs->disabled = rx_stopped || tx_stopped;
+	return ret;
+}
+
 struct fwd_engine io_fwd_engine = {
 	.fwd_mode_name = "io",
 	.port_fwd_begin = NULL,
 	.port_fwd_end = NULL,
+	.stream_init = stream_init_forward,
 	.packet_fwd = pkt_burst_io_forward,
 };

diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c
index 812a0c721f..64b65c8c51 100644
--- a/app/test-pmd/macfwd.c
+++ b/app/test-pmd/macfwd.c
@@ -119,9 +119,22 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
 	get_end_cycles(fs, start_tsc);
 }
 
+static int
+stream_init_mac_forward(struct fwd_stream *fs)
+{
+	bool rx_stopped, tx_stopped;
+	int ret;
+
+	ret = fwd_stream_get_stopped_queues(fs, &rx_stopped, &tx_stopped);
+	if (ret == 0)
+		fs->disabled = rx_stopped || tx_stopped;
+	return ret;
+}
+
 struct fwd_engine mac_fwd_engine = {
 	.fwd_mode_name = "mac",
 	.port_fwd_begin = NULL,
 	.port_fwd_end = NULL,
+	.stream_init = stream_init_mac_forward,
 	.packet_fwd = pkt_burst_mac_forward,
 };

diff --git a/app/test-pmd/noisy_vnf.c b/app/test-pmd/noisy_vnf.c
index e4434bea95..58f53212a4 100644
--- a/app/test-pmd/noisy_vnf.c
+++ b/app/test-pmd/noisy_vnf.c
@@ -277,9 +277,22 @@ noisy_fwd_begin(portid_t pi)
 	return 0;
 }
 
+static int
+stream_init_noisy_vnf(struct fwd_stream *fs)
+{
+	bool rx_stopped, tx_stopped;
+	int ret;
+
+	ret = fwd_stream_get_stopped_queues(fs, &rx_stopped, &tx_stopped);
+	if (ret == 0)
+		fs->disabled = rx_stopped || tx_stopped;
+	return ret;
+}
+
 struct fwd_engine noisy_vnf_engine = {
 	.fwd_mode_name = "noisy",
 	.port_fwd_begin = noisy_fwd_begin,
 	.port_fwd_end = noisy_fwd_end,
+	.stream_init = stream_init_noisy_vnf,
 	.packet_fwd = pkt_burst_noisy_vnf,
 };

diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c
index d1a579d8d8..945ea2d27a 100644
--- a/app/test-pmd/rxonly.c
+++ b/app/test-pmd/rxonly.c
@@ -68,9 +68,22 @@ pkt_burst_receive(struct fwd_stream *fs)
 	get_end_cycles(fs, start_tsc);
 }
 
+static int
+stream_init_receive(struct fwd_stream *fs)
+{
+	bool rx_stopped;
+	int ret;
+
+	ret = fwd_stream_get_stopped_queues(fs, &rx_stopped, NULL);
+	if (ret == 0)
+		fs->disabled = rx_stopped;
+	return ret;
+}
+
 struct fwd_engine rx_only_engine = {
 	.fwd_mode_name = "rxonly",
 	.port_fwd_begin = NULL,
 	.port_fwd_end = NULL,
+	.stream_init = stream_init_receive,
 	.packet_fwd = pkt_burst_receive,
 };

diff --git a/app/test-pmd/shared_rxq_fwd.c b/app/test-pmd/shared_rxq_fwd.c
index da54a383fd..9389df2627 100644
--- a/app/test-pmd/shared_rxq_fwd.c
+++ b/app/test-pmd/shared_rxq_fwd.c
@@ -107,9 +107,22 @@ shared_rxq_fwd(struct fwd_stream *fs)
 	get_end_cycles(fs, start_tsc);
 }
 
+static int
+shared_rxq_stream_init(struct fwd_stream *fs)
+{
+	bool rx_stopped;
+	int ret;
+
+	ret = fwd_stream_get_stopped_queues(fs, &rx_stopped, NULL);
+	if (ret == 0)
+		fs->disabled = rx_stopped;
+	return ret;
+}
+
 struct fwd_engine shared_rxq_engine = {
 	.fwd_mode_name = "shared_rxq",
 	.port_fwd_begin = NULL,
 	.port_fwd_end = NULL,
+	.stream_init = shared_rxq_stream_init,
 	.packet_fwd = shared_rxq_fwd,
 };

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index fe2ce19f99..b3e360121a 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -1763,6 +1763,37 @@ reconfig(portid_t new_port_id, unsigned socket_id)
 	init_port_config();
 }
 
+int
+fwd_stream_get_stopped_queues(struct fwd_stream *fs, bool *rx, bool *tx)
+{
+	struct rte_eth_rxq_info rx_qinfo;
+	struct rte_eth_txq_info tx_qinfo;
+	int ret;
+
+	if (rx != NULL) {
+		ret = rte_eth_rx_queue_info_get(fs->rx_port, fs->rx_queue,
+						&rx_qinfo);
+		if (ret < 0) {
+			RTE_LOG(ERR, USER1, "Cannot get port %d RX queue %d info: %s\n",
+				fs->rx_port, fs->rx_queue,
+				rte_strerror(rte_errno));
+			return ret;
+		}
+		*rx = rx_qinfo.queue_state == RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+	if (tx != NULL) {
+		ret = rte_eth_tx_queue_info_get(fs->tx_port, fs->tx_queue,
+						&tx_qinfo);
+		if (ret < 0) {
+			TESTPMD_LOG(ERR, "Cannot get port %d TX queue %d info: %s\n",
+				fs->tx_port, fs->tx_queue,
+				rte_strerror(rte_errno));
+			return ret;
+		}
+		*tx = tx_qinfo.queue_state == RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+	return 0;
+}
 
 int
 init_fwd_streams(void)
@@ -2155,6 +2186,21 @@ flush_fwd_rx_queues(void)
 	for (j = 0; j < 2; j++) {
 		for (rxp = 0; rxp < cur_fwd_config.nb_fwd_ports; rxp++) {
 			for (rxq = 0; rxq < nb_rxq; rxq++) {
+				struct rte_eth_rxq_info rx_qinfo;
+				int ret;
+
+				ret = rte_eth_rx_queue_info_get(rxp, rxq,
+								&rx_qinfo);
+				if (ret < 0) {
+					TESTPMD_LOG(ERR, "Cannot get port %d RX queue %d info: %s\n",
+						rxp, rxq,
+						rte_strerror(rte_errno));
+					return;
+				}
+				if (rx_qinfo.queue_state ==
+				    RTE_ETH_QUEUE_STATE_STOPPED)
+					continue;
+
 				port_id = fwd_ports_ids[rxp];
 				/**
 				 * testpmd can stuck in the below do while loop
@@ -2201,7 +2247,8 @@ run_pkt_fwd_on_lcore(struct fwd_lcore *fc, packet_fwd_t pkt_fwd)
 	nb_fs = fc->stream_nb;
 	do {
 		for (sm_id = 0; sm_id < nb_fs; sm_id++)
-			(*pkt_fwd)(fsm[sm_id]);
+			if (!fsm[sm_id]->disabled)
+				(*pkt_fwd)(fsm[sm_id]);
 #ifdef RTE_LIB_BITRATESTATS
 		if (bitrate_enabled != 0 &&
 			bitrate_lcore_id == rte_lcore_id()) {
@@ -2283,6 +2330,7 @@ start_packet_forwarding(int with_tx_first)
 {
 	port_fwd_begin_t port_fwd_begin;
 	port_fwd_end_t port_fwd_end;
+	stream_init_t stream_init = cur_fwd_eng->stream_init;
 	unsigned int i;
 
 	if (strcmp(cur_fwd_eng->fwd_mode_name, "rxonly") == 0 && !nb_rxq)
@@ -2313,6 +2361,13 @@ start_packet_forwarding(int with_tx_first)
 	if (!pkt_fwd_shared_rxq_check())
 		return;
 
+	if (stream_init != NULL)
+		for (i = 0; i < cur_fwd_config.nb_fwd_streams; i++)
+			if (stream_init(fwd_streams[i]) < 0) {
+				TESTPMD_LOG(ERR, "Cannot init stream\n");
+				return;
+			}
+
 	port_fwd_begin = cur_fwd_config.fwd_eng->port_fwd_begin;
 	if (port_fwd_begin != NULL) {
 		for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) {

diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 31f766c965..59edae645e 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -134,6 +134,7 @@ struct fwd_stream {
 	portid_t   tx_port;   /**< forwarding port of received packets */
 	queueid_t  tx_queue;  /**< TX queue to send forwarded packets */
 	streamid_t peer_addr; /**< index of peer ethernet address of packets */
+	bool       disabled;  /**< the stream is disabled and should not run */
 
 	unsigned int retry_enabled;
@@ -323,12 +324,14 @@ struct fwd_lcore {
  */
 typedef int (*port_fwd_begin_t)(portid_t pi);
 typedef void (*port_fwd_end_t)(portid_t pi);
+typedef int (*stream_init_t)(struct fwd_stream *fs);
 typedef void (*packet_fwd_t)(struct fwd_stream *fs);
 
 struct fwd_engine {
 	const char       *fwd_mode_name; /**< Forwarding mode name. */
 	port_fwd_begin_t port_fwd_begin; /**< NULL if nothing special to do. */
 	port_fwd_end_t   port_fwd_end;   /**< NULL if nothing special to do. */
+	stream_init_t    stream_init;    /**< NULL if nothing special to do. */
 	packet_fwd_t     packet_fwd;     /**< Mandatory. */
 };
@@ -887,6 +890,7 @@ void rxtx_config_display(void);
 void fwd_config_setup(void);
 void set_def_fwd_config(void);
 void reconfig(portid_t new_port_id, unsigned socket_id);
+int fwd_stream_get_stopped_queues(struct fwd_stream *fs, bool *rx, bool *tx);
 int init_fwd_streams(void);
 void update_fwd_ports(portid_t new_pid);

diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index fc039a622c..1fa5238896 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -504,9 +504,22 @@ tx_only_begin(portid_t pi)
 	return 0;
 }
 
+static int
+tx_only_stream_init(struct fwd_stream *fs)
+{
+	bool tx_stopped;
+	int ret;
+
+	ret = fwd_stream_get_stopped_queues(fs, NULL, &tx_stopped);
+	if (ret == 0)
+		fs->disabled = tx_stopped;
+	return ret;
+}
+
 struct fwd_engine tx_only_engine = {
 	.fwd_mode_name = "txonly",
 	.port_fwd_begin = tx_only_begin,
 	.port_fwd_end = NULL,
+	.stream_init = tx_only_stream_init,
 	.packet_fwd = pkt_burst_transmit,
 };