From patchwork Fri Sep 17 08:01:14 2021
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 99069
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Xueming Li
Cc: Jerin Jacob, Ferruh Yigit, Andrew Rybchenko, Viacheslav Ovsiienko, Thomas Monjalon, Lior Margalit
Date: Fri, 17 Sep 2021 16:01:14 +0800
Message-ID: <20210917080121.329373-2-xuemingl@nvidia.com>
In-Reply-To: <20210917080121.329373-1-xuemingl@nvidia.com>
References: <20210727034204.20649-1-xuemingl@nvidia.com> <20210917080121.329373-1-xuemingl@nvidia.com>
List-Id: DPDK patches and discussions
Subject: [dpdk-dev] [PATCH v3 1/8] ethdev: introduce shared Rx queue

In the current DPDK framework, each Rx queue is pre-loaded with mbufs for
incoming packets. When the number of representors scales out in a switch
domain, memory consumption becomes significant. More importantly, polling
all ports leads to high cache-miss rates, high latency and low throughput.

This patch introduces the shared Rx queue. Ports with the same configuration
in a switch domain can share an Rx queue set by specifying a sharing group.
Polling any queue that uses the same shared Rx queue receives packets from
all member ports. The source port is identified by mbuf->port.

The queue number of every port in a shared group should be identical, and
queue indexes are mapped 1:1 within the group. A shared Rx queue must be
polled on a single thread or core. Multiple groups are supported via the
group ID.

Signed-off-by: Xueming Li
Cc: Jerin Jacob
---
An Rx queue object can be used as a shared Rx queue object; it is important
to review all queue control callback APIs that use the queue object:
https://mails.dpdk.org/archives/dev/2021-July/215574.html
---
 doc/guides/nics/features.rst                    | 11 +++++++++++
 doc/guides/nics/features/default.ini            |  1 +
 doc/guides/prog_guide/switch_representation.rst | 10 ++++++++++
 lib/ethdev/rte_ethdev.c                         |  1 +
 lib/ethdev/rte_ethdev.h                         |  7 +++++++
 5 files changed, 30 insertions(+)

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index a96e12d155..2e2a9b1554 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -624,6 +624,17 @@ Supports inner packet L4 checksum.
   ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_UDP_CKSUM``.
 
 
+.. _nic_features_shared_rx_queue:
+
+Shared Rx queue
+---------------
+
+Supports shared Rx queue for ports in same switch domain.
+
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_SHARED_RXQ``.
+* **[provides] mbuf**: ``mbuf.port``.
+
+
 .. _nic_features_packet_type_parsing:
 
 Packet type parsing

diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini
index 754184ddd4..ebeb4c1851 100644
--- a/doc/guides/nics/features/default.ini
+++ b/doc/guides/nics/features/default.ini
@@ -19,6 +19,7 @@ Free Tx mbuf on demand =
 Queue start/stop     =
 Runtime Rx queue setup =
 Runtime Tx queue setup =
+Shared Rx queue      =
 Burst mode info      =
 Power mgmt address monitor =
 MTU update           =

diff --git a/doc/guides/prog_guide/switch_representation.rst b/doc/guides/prog_guide/switch_representation.rst
index ff6aa91c80..45bf5a3a10 100644
--- a/doc/guides/prog_guide/switch_representation.rst
+++ b/doc/guides/prog_guide/switch_representation.rst
@@ -123,6 +123,16 @@ thought as a software "patch panel" front-end for applications.
 
 .. [1] `Ethernet switch device driver model (switchdev) `_
 
+- Memory usage of representors is huge when number of representor grows,
+  because PMD always allocate mbuf for each descriptor of Rx queue.
+  Polling the large number of ports brings more CPU load, cache miss and
+  latency. Shared Rx queue can be used to share Rx queue between PF and
+  representors in same switch domain. ``RTE_ETH_RX_OFFLOAD_SHARED_RXQ``
+  is present in Rx offloading capability of device info. Setting the
+  offloading flag in device Rx mode or Rx queue configuration to enable
+  shared Rx queue. Polling any member port of shared Rx queue can return
+  packets of all ports in group, port ID is saved in ``mbuf.port``.
+
 Basic SR-IOV
 ------------

diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index a7c090ce79..b3a58d5e65 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -127,6 +127,7 @@ static const struct {
 	RTE_RX_OFFLOAD_BIT2STR(OUTER_UDP_CKSUM),
 	RTE_RX_OFFLOAD_BIT2STR(RSS_HASH),
 	RTE_ETH_RX_OFFLOAD_BIT2STR(BUFFER_SPLIT),
+	RTE_ETH_RX_OFFLOAD_BIT2STR(SHARED_RXQ),
 };
 
 #undef RTE_RX_OFFLOAD_BIT2STR

diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index d2b27c351f..a578c9db9d 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1047,6 +1047,7 @@ struct rte_eth_rxconf {
 	uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */
 	uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
 	uint16_t rx_nseg; /**< Number of descriptions in rx_seg array. */
+	uint32_t shared_group; /**< Shared port group index in switch domain. */
 	/**
 	 * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
 	 * Only offloads set on rx_queue_offload_capa or rx_offload_capa
@@ -1373,6 +1374,12 @@ struct rte_eth_conf {
 #define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM  0x00040000
 #define DEV_RX_OFFLOAD_RSS_HASH         0x00080000
 #define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT 0x00100000
+/**
+ * Rx queue is shared among ports in same switch domain to save memory,
+ * avoid polling each port. Any port in group can be used to receive packets.
+ * Real source port number saved in mbuf->port field.
+ */
+#define RTE_ETH_RX_OFFLOAD_SHARED_RXQ   0x00200000
 
 #define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
                                  DEV_RX_OFFLOAD_UDP_CKSUM | \

From patchwork Fri Sep 17 08:01:15 2021
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 99070
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Xueming Li
Cc: Jerin Jacob, Ferruh Yigit, Andrew Rybchenko, Viacheslav Ovsiienko, Thomas Monjalon, Lior Margalit, Ray Kinsella
Date: Fri, 17 Sep 2021 16:01:15 +0800
Message-ID: <20210917080121.329373-3-xuemingl@nvidia.com>
In-Reply-To: <20210917080121.329373-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH v3 2/8] ethdev: new API to aggregate shared Rx queue group
This patch introduces a new API to aggregate ports in the same shared Rx
queue group. Only queues with the specified share group are aggregated.
Rx burst and device close are expected to be supported by the new device.

Signed-off-by: Xueming Li
---
 lib/ethdev/ethdev_driver.h | 23 ++++++++++++++++++++++-
 lib/ethdev/rte_ethdev.c    | 22 ++++++++++++++++++++++
 lib/ethdev/rte_ethdev.h    | 16 ++++++++++++++++
 lib/ethdev/version.map     |  3 +++
 4 files changed, 63 insertions(+), 1 deletion(-)

diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 524757cf6f..72156a4153 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -786,10 +786,28 @@ typedef int (*eth_get_monitor_addr_t)(void *rxq,
  * @return
  *   Negative errno value on error, number of info entries otherwise.
  */
-
 typedef int (*eth_representor_info_get_t)(struct rte_eth_dev *dev,
	struct rte_eth_representor_info *info);
 
+/**
+ * @internal
+ * Aggregate shared Rx queue.
+ *
+ * Create a new port used for shared Rx queue polling.
+ *
+ * Only queues with specified share group are aggregated.
+ * At least Rx burst and device close should be supported.
+ *
+ * @param dev
+ *   Ethdev handle of port.
+ * @param group
+ *   Shared Rx queue group to aggregate.
+ * @return
+ *   UINT16_MAX if failed, otherwise aggregated port number.
+ */
+typedef int (*eth_shared_rxq_aggregate_t)(struct rte_eth_dev *dev,
+	uint32_t group);
+
 /**
  * @internal A structure containing the functions exported by an Ethernet driver.
  */
@@ -950,6 +968,9 @@ struct eth_dev_ops {
 	eth_representor_info_get_t representor_info_get;
 	/**< Get representor info. */
+
+	eth_shared_rxq_aggregate_t shared_rxq_aggregate;
+	/**< Aggregate shared Rx queue. */
 };

diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index b3a58d5e65..9f2ef58309 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -6301,6 +6301,28 @@ rte_eth_representor_info_get(uint16_t port_id,
 	return eth_err(port_id, (*dev->dev_ops->representor_info_get)(dev, info));
 }
 
+uint16_t
+rte_eth_shared_rxq_aggregate(uint16_t port_id, uint32_t group)
+{
+	struct rte_eth_dev *dev;
+	uint64_t offloads;
+
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	dev = &rte_eth_devices[port_id];
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->shared_rxq_aggregate,
+				UINT16_MAX);
+
+	offloads = dev->data->dev_conf.rxmode.offloads;
+	if ((offloads & RTE_ETH_RX_OFFLOAD_SHARED_RXQ) == 0) {
+		RTE_ETHDEV_LOG(ERR, "port_id=%u doesn't support Rx offload\n",
+			       port_id);
+		return UINT16_MAX;
+	}
+
+	return (*dev->dev_ops->shared_rxq_aggregate)(dev, group);
+}
+
 RTE_LOG_REGISTER_DEFAULT(rte_eth_dev_logtype, INFO);
 
 RTE_INIT(ethdev_init_telemetry)

diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index a578c9db9d..f15d2142b2 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -4895,6 +4895,22 @@ __rte_experimental
 int rte_eth_representor_info_get(uint16_t port_id,
				 struct rte_eth_representor_info *info);
 
+/**
+ * Aggregate shared Rx queue ports to one port for polling.
+ *
+ * Only queues with specified share group is aggregated.
+ * Any operation besides Rx burst and device close is unexpected.
+ *
+ * @param port_id
+ *   The port identifier of the device from shared Rx queue group.
+ * @param group
+ *   Shared Rx queue group to aggregate.
+ * @return
+ *   UINT16_MAX if failed, otherwise aggregated port number.
+ */
+__rte_experimental
+uint16_t rte_eth_shared_rxq_aggregate(uint16_t port_id, uint32_t group);
+
 #include
 
 /**

diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 3eece75b72..97a2233508 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -249,6 +249,9 @@ EXPERIMENTAL {
 	rte_mtr_meter_policy_delete;
 	rte_mtr_meter_policy_update;
 	rte_mtr_meter_policy_validate;
+
+	# added in 21.11
+	rte_eth_shared_rxq_aggregate;
 };
 
 INTERNAL {

From patchwork Fri Sep 17 08:01:16 2021
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 99071
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Xueming Li
Cc: Jerin Jacob, Ferruh Yigit, Andrew Rybchenko, Viacheslav Ovsiienko, Thomas Monjalon, Lior Margalit, Xiaoyun Li
Date: Fri, 17 Sep 2021 16:01:16 +0800
Message-ID: <20210917080121.329373-4-xuemingl@nvidia.com>
In-Reply-To: <20210917080121.329373-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH v3 3/8] app/testpmd: dump port and queue info for each packet

With a shared Rx queue, the port number of the mbufs returned by one Rx
burst can differ from packet to packet. To support shared Rx queues, this
patch dumps mbuf->port and the queue for each packet.

Signed-off-by: Xueming Li
---
 app/test-pmd/util.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/app/test-pmd/util.c b/app/test-pmd/util.c
index 14a9a251fb..b85fbf75a5 100644
--- a/app/test-pmd/util.c
+++ b/app/test-pmd/util.c
@@ -100,6 +100,7 @@ dump_pkt_burst(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[],
 		struct rte_flow_restore_info info = { 0, };
 
 		mb = pkts[i];
+		MKDUMPSTR(print_buf, buf_size, cur_len, "port %u, ", mb->port);
 		eth_hdr = rte_pktmbuf_read(mb, 0, sizeof(_eth_hdr), &_eth_hdr);
 		eth_type = RTE_BE_TO_CPU_16(eth_hdr->ether_type);
 		packet_type = mb->packet_type;

From patchwork Fri Sep 17 08:01:17 2021
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 99072
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Xueming Li
CC: Jerin Jacob, Ferruh Yigit, Andrew Rybchenko, Viacheslav Ovsiienko,
 Thomas Monjalon, Lior Margalit, Xiaoyun Li
Date: Fri, 17 Sep 2021 16:01:17 +0800
Message-ID: <20210917080121.329373-5-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20210917080121.329373-1-xuemingl@nvidia.com>
References: <20210727034204.20649-1-xuemingl@nvidia.com>
 <20210917080121.329373-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH v3 4/8] app/testpmd: new parameter to enable shared Rx queue

Add a "--rxq-share" parameter to enable shared Rx queues. Shared Rx queue
group 0 is used by default; Rx queues in the same switch domain share the
same Rx queue according to queue index. A shared Rx queue is enabled only
if the device supports the offload flag RTE_ETH_RX_OFFLOAD_SHARED_RXQ.

Signed-off-by: Xueming Li
---
 app/test-pmd/config.c                 |  6 +++++-
 app/test-pmd/parameters.c             | 13 +++++++++++++
 app/test-pmd/testpmd.c                | 18 ++++++++++++++++++
 app/test-pmd/testpmd.h                |  2 ++
 doc/guides/testpmd_app_ug/run_app.rst |  5 +++++
 5 files changed, 43 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index f5765b34f7..8ec5f87ef3 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2707,7 +2707,11 @@ rxtx_config_display(void)
 			printf("      RX threshold registers: pthresh=%d hthresh=%d "
 				" wthresh=%d\n",
 				pthresh_tmp, hthresh_tmp, wthresh_tmp);
-		printf("      RX Offloads=0x%"PRIx64"\n", offloads_tmp);
+		printf("      RX Offloads=0x%"PRIx64, offloads_tmp);
+		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SHARED_RXQ)
+			printf(" share group=%u",
+				rx_conf->shared_group);
+		printf("\n");
 	}

 	/* per tx queue config only for first queue to be less verbose */
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 3f94a82e32..de0f1d28cc 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@
-167,6 +167,7 @@ usage(char* progname)
 	printf("  --tx-ip=src,dst: IP addresses in Tx-only mode\n");
 	printf("  --tx-udp=src[,dst]: UDP ports in Tx-only mode\n");
 	printf("  --eth-link-speed: force link speed.\n");
+	printf("  --rxq-share: number of ports per shared rxq groups\n");
 	printf("  --disable-link-check: disable check on link status when "
 	       "starting/stopping ports.\n");
 	printf("  --disable-device-start: do not automatically start port\n");
@@ -607,6 +608,7 @@ launch_args_parse(int argc, char** argv)
 		{ "rxpkts",			1, 0, 0 },
 		{ "txpkts",			1, 0, 0 },
 		{ "txonly-multi-flow",		0, 0, 0 },
+		{ "rxq-share",			0, 0, 0 },
 		{ "eth-link-speed",		1, 0, 0 },
 		{ "disable-link-check",		0, 0, 0 },
 		{ "disable-device-start",	0, 0, 0 },
@@ -1271,6 +1273,17 @@ launch_args_parse(int argc, char** argv)
 			}
 			if (!strcmp(lgopts[opt_idx].name, "txonly-multi-flow"))
 				txonly_multi_flow = 1;
+			if (!strcmp(lgopts[opt_idx].name, "rxq-share")) {
+				if (optarg == NULL) {
+					rxq_share = UINT32_MAX;
+				} else {
+					n = atoi(optarg);
+					if (n >= 0)
+						rxq_share = (uint32_t)n;
+					else
+						rte_exit(EXIT_FAILURE,
+							 "rxq-share must be >= 0\n");
+				}
+			}
 			if (!strcmp(lgopts[opt_idx].name, "no-flush-rx"))
 				no_flush_rx = 1;
 			if (!strcmp(lgopts[opt_idx].name, "eth-link-speed")) {
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 97ae52e17e..417e92ade1 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -498,6 +498,11 @@ uint8_t record_core_cycles;
  */
 uint8_t record_burst_stats;

+/*
+ * Number of ports per shared Rx queue group, 0 disable.
+ */
+uint32_t rxq_share;
+
 unsigned int num_sockets = 0;
 unsigned int socket_ids[RTE_MAX_NUMA_NODES];
@@ -1506,6 +1511,11 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
 		port->dev_conf.txmode.offloads &=
 			~DEV_TX_OFFLOAD_MBUF_FAST_FREE;

+	if (rxq_share > 0 &&
+	    (port->dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_SHARED_RXQ))
+		port->dev_conf.rxmode.offloads |=
+			RTE_ETH_RX_OFFLOAD_SHARED_RXQ;
+
 	/* Apply Rx offloads configuration */
 	for (i = 0; i < port->dev_info.max_rx_queues; i++)
 		port->rx_conf[i].offloads = port->dev_conf.rxmode.offloads;
@@ -3401,6 +3411,14 @@ rxtx_port_config(struct rte_port *port)
 	for (qid = 0; qid < nb_rxq; qid++) {
 		offloads = port->rx_conf[qid].offloads;
 		port->rx_conf[qid] = port->dev_info.default_rxconf;
+
+		if (rxq_share > 0 &&
+		    (port->dev_info.rx_offload_capa &
+		     RTE_ETH_RX_OFFLOAD_SHARED_RXQ)) {
+			offloads |= RTE_ETH_RX_OFFLOAD_SHARED_RXQ;
+			port->rx_conf[qid].shared_group = nb_ports / rxq_share;
+		}
+
 		if (offloads != 0)
 			port->rx_conf[qid].offloads = offloads;
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 5863b2f43f..3dfaaad94c 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -477,6 +477,8 @@ extern enum tx_pkt_split tx_pkt_split;

 extern uint8_t txonly_multi_flow;

+extern uint32_t rxq_share;
+
 extern uint16_t nb_pkt_per_burst;
 extern uint16_t nb_pkt_flowgen_clones;
 extern int nb_flows_flowgen;
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index 640eadeff7..1b9f715608 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -389,6 +389,11 @@ The command line options are:

     Generate multiple flows in txonly mode.

+*   ``--rxq-share=[X]``
+
+    Create all queues in shared Rx queue mode if the device supports it.
+    The group number grows per X ports; defaults to group 0 if X is not specified.
+
 * ``--eth-link-speed``

   Set a forced link speed to the ethernet port::

From patchwork Fri Sep 17 08:01:18 2021
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 99073
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Xueming Li
CC: Jerin Jacob, Ferruh Yigit, Andrew Rybchenko, Viacheslav Ovsiienko,
 Thomas Monjalon, Lior Margalit, Xiaoyun Li
Date: Fri, 17 Sep 2021 16:01:18 +0800
Message-ID: <20210917080121.329373-6-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20210917080121.329373-1-xuemingl@nvidia.com>
References: <20210727034204.20649-1-xuemingl@nvidia.com>
 <20210917080121.329373-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH v3 5/8] app/testpmd: force shared Rx queue polled
 on same core

Shared Rx queues in group 0 share one set of Rx queues. A shared Rx queue
must be polled from a single core. Check and stop forwarding if a shared
Rx queue is scheduled on multiple cores.

Signed-off-by: Xueming Li
---
 app/test-pmd/config.c  | 96 ++++++++++++++++++++++++++++++++++++++++++
 app/test-pmd/testpmd.c |  4 +-
 app/test-pmd/testpmd.h |  2 +
 3 files changed, 101 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 8ec5f87ef3..035247c33f 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2883,6 +2883,102 @@ port_rss_hash_key_update(portid_t port_id, char rss_type[], uint8_t *hash_key,
 	}
 }

+/*
+ * Check whether a shared rxq scheduled on other lcores.
+ */
+static bool
+fwd_stream_on_other_lcores(uint16_t domain_id, portid_t src_port,
+			   queueid_t src_rxq, lcoreid_t src_lc,
+			   uint32_t shared_group)
+{
+	streamid_t sm_id;
+	streamid_t nb_fs_per_lcore;
+	lcoreid_t  nb_fc;
+	lcoreid_t  lc_id;
+	struct fwd_stream *fs;
+	struct rte_port *port;
+	struct rte_eth_rxconf *rxq_conf;
+
+	nb_fc = cur_fwd_config.nb_fwd_lcores;
+	for (lc_id = src_lc + 1; lc_id < nb_fc; lc_id++) {
+		sm_id = fwd_lcores[lc_id]->stream_idx;
+		nb_fs_per_lcore = fwd_lcores[lc_id]->stream_nb;
+		for (; sm_id < fwd_lcores[lc_id]->stream_idx + nb_fs_per_lcore;
+		     sm_id++) {
+			fs = fwd_streams[sm_id];
+			port = &ports[fs->rx_port];
+			rxq_conf = &port->rx_conf[fs->rx_queue];
+			if ((rxq_conf->offloads & RTE_ETH_RX_OFFLOAD_SHARED_RXQ)
+			    == 0)
+				/* Not shared rxq.
+				 */
+				continue;
+			if (domain_id != port->dev_info.switch_info.domain_id)
+				continue;
+			if (fs->rx_queue != src_rxq)
+				continue;
+			if (rxq_conf->shared_group != shared_group)
+				continue;
+			printf("Shared RX queue group %u can't be scheduled on different cores:\n",
+			       shared_group);
+			printf("  lcore %hhu Port %hu queue %hu\n",
+			       src_lc, src_port, src_rxq);
+			printf("  lcore %hhu Port %hu queue %hu\n",
+			       lc_id, fs->rx_port, fs->rx_queue);
+			printf("  please use --nb-cores=%hu to limit forwarding cores\n",
+			       nb_rxq);
+			return true;
+		}
+	}
+	return false;
+}
+
+/*
+ * Check shared rxq configuration.
+ *
+ * Shared group must not being scheduled on different core.
+ */
+bool
+pkt_fwd_shared_rxq_check(void)
+{
+	streamid_t sm_id;
+	streamid_t nb_fs_per_lcore;
+	lcoreid_t  nb_fc;
+	lcoreid_t  lc_id;
+	struct fwd_stream *fs;
+	uint16_t domain_id;
+	struct rte_port *port;
+	struct rte_eth_rxconf *rxq_conf;
+
+	nb_fc = cur_fwd_config.nb_fwd_lcores;
+	/*
+	 * Check streams on each core, make sure the same switch domain +
+	 * group + queue doesn't get scheduled on other cores.
+	 */
+	for (lc_id = 0; lc_id < nb_fc; lc_id++) {
+		sm_id = fwd_lcores[lc_id]->stream_idx;
+		nb_fs_per_lcore = fwd_lcores[lc_id]->stream_nb;
+		for (; sm_id < fwd_lcores[lc_id]->stream_idx + nb_fs_per_lcore;
+		     sm_id++) {
+			fs = fwd_streams[sm_id];
+			/* Update lcore info stream being scheduled. */
+			fs->lcore = fwd_lcores[lc_id];
+			port = &ports[fs->rx_port];
+			rxq_conf = &port->rx_conf[fs->rx_queue];
+			if ((rxq_conf->offloads & RTE_ETH_RX_OFFLOAD_SHARED_RXQ)
+			    == 0)
+				/* Not shared rxq. */
+				continue;
+			/* Check shared rxq not scheduled on remaining cores. */
+			domain_id = port->dev_info.switch_info.domain_id;
+			if (fwd_stream_on_other_lcores(domain_id, fs->rx_port,
+						       fs->rx_queue, lc_id,
+						       rxq_conf->shared_group))
+				return false;
+		}
+	}
+	return true;
+}
+
 /*
  * Setup forwarding configuration for each logical core.
  */
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 417e92ade1..cab4b36b04 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2241,10 +2241,12 @@ start_packet_forwarding(int with_tx_first)
 	fwd_config_setup();
+	pkt_fwd_config_display(&cur_fwd_config);
+	if (!pkt_fwd_shared_rxq_check())
+		return;
 	if(!no_flush_rx)
 		flush_fwd_rx_queues();

-	pkt_fwd_config_display(&cur_fwd_config);
 	rxtx_config_display();
 	fwd_stats_reset();
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 3dfaaad94c..f121a2da90 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -144,6 +144,7 @@ struct fwd_stream {
 	uint64_t core_cycles; /**< used for RX and TX processing */
 	struct pkt_burst_stats rx_burst_stats;
 	struct pkt_burst_stats tx_burst_stats;
+	struct fwd_lcore *lcore; /**< Lcore being scheduled. */
 };

 /**
@@ -795,6 +796,7 @@ void port_summary_header_display(void);
 void rx_queue_infos_display(portid_t port_idi, uint16_t queue_id);
 void tx_queue_infos_display(portid_t port_idi, uint16_t queue_id);
 void fwd_lcores_config_display(void);
+bool pkt_fwd_shared_rxq_check(void);
 void pkt_fwd_config_display(struct fwd_config *cfg);
 void rxtx_config_display(void);
 void fwd_config_setup(void);

From patchwork Fri Sep 17 08:01:19 2021
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 99074
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Xueming Li
CC: Xiaoyu Min, Jerin Jacob, Ferruh Yigit, Andrew Rybchenko,
 Viacheslav Ovsiienko, Thomas Monjalon, Lior Margalit, Xiaoyun Li
Date: Fri, 17 Sep 2021 16:01:19 +0800
Message-ID: <20210917080121.329373-7-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20210917080121.329373-1-xuemingl@nvidia.com>
References: <20210727034204.20649-1-xuemingl@nvidia.com>
 <20210917080121.329373-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH v3 6/8] app/testpmd: add common fwd wrapper

From: Xiaoyu Min

Add a common forwarding wrapper function for all fwd engines, which does
the following in common:

- record core cycles
- call rte_eth_rx_burst(...,nb_pkt_per_burst)
- update received packets
- handle received mbufs with a callback function

For better performance, the function is defined as a macro.

Signed-off-by: Xiaoyu Min
Signed-off-by: Xueming Li
---
 app/test-pmd/5tswap.c   | 25 +++++--------------------
 app/test-pmd/csumonly.c | 25 ++++++-------------------
 app/test-pmd/flowgen.c  | 20 +++++---------------
 app/test-pmd/icmpecho.c | 30 ++++++++----------------------
 app/test-pmd/iofwd.c    | 24 +++++-------------------
 app/test-pmd/macfwd.c   | 24 +++++-------------------
 app/test-pmd/macswap.c  | 23 +++++------------------
 app/test-pmd/rxonly.c   | 32 ++++++++------------------------
 app/test-pmd/testpmd.h  | 19 +++++++++++++++++++
 9 files changed, 66 insertions(+), 156 deletions(-)

diff --git a/app/test-pmd/5tswap.c b/app/test-pmd/5tswap.c
index e8cef9623b..8fe940294f 100644
--- a/app/test-pmd/5tswap.c
+++ b/app/test-pmd/5tswap.c
@@ -82,18 +82,16 @@ swap_udp(struct rte_udp_hdr *udp_hdr)
  * Parses each layer and swaps it.
When the next layer doesn't match it stops. */ static void -pkt_burst_5tuple_swap(struct fwd_stream *fs) +_5tuple_swap_stream(struct fwd_stream *fs, uint16_t nb_rx, + struct rte_mbuf **pkts_burst) { - struct rte_mbuf *pkts_burst[MAX_PKT_BURST]; struct rte_port *txp; struct rte_mbuf *mb; uint16_t next_proto; uint64_t ol_flags; uint16_t proto; - uint16_t nb_rx; uint16_t nb_tx; uint32_t retry; - int i; union { struct rte_ether_hdr *eth; @@ -105,20 +103,6 @@ pkt_burst_5tuple_swap(struct fwd_stream *fs) uint8_t *byte; } h; - uint64_t start_tsc = 0; - - get_start_cycles(&start_tsc); - - /* - * Receive a burst of packets and forward them. - */ - nb_rx = rte_eth_rx_burst(fs->rx_port, fs->rx_queue, pkts_burst, - nb_pkt_per_burst); - inc_rx_burst_stats(fs, nb_rx); - if (unlikely(nb_rx == 0)) - return; - - fs->rx_packets += nb_rx; txp = &ports[fs->tx_port]; ol_flags = ol_flags_init(txp->dev_conf.txmode.offloads); vlan_qinq_set(pkts_burst, nb_rx, ol_flags, @@ -182,12 +166,13 @@ pkt_burst_5tuple_swap(struct fwd_stream *fs) rte_pktmbuf_free(pkts_burst[nb_tx]); } while (++nb_tx < nb_rx); } - get_end_cycles(fs, start_tsc); } +PKT_BURST_FWD(_5tuple_swap_stream); + struct fwd_engine five_tuple_swap_fwd_engine = { .fwd_mode_name = "5tswap", .port_fwd_begin = NULL, .port_fwd_end = NULL, - .packet_fwd = pkt_burst_5tuple_swap, + .packet_fwd = pkt_burst_fwd, }; diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c index 38cc256533..9bfc7d10dc 100644 --- a/app/test-pmd/csumonly.c +++ b/app/test-pmd/csumonly.c @@ -763,7 +763,7 @@ pkt_copy_split(const struct rte_mbuf *pkt) } /* - * Receive a burst of packets, and for each packet: + * For each packet in received mbuf: * - parse packet, and try to recognize a supported packet type (1) * - if it's not a supported packet type, don't touch the packet, else: * - reprocess the checksum of all supported layers. This is done in SW @@ -792,9 +792,9 @@ pkt_copy_split(const struct rte_mbuf *pkt) * OUTER_IP is only useful for tunnel packets. 
*/ static void -pkt_burst_checksum_forward(struct fwd_stream *fs) +checksum_forward_stream(struct fwd_stream *fs, uint16_t nb_rx, + struct rte_mbuf **pkts_burst) { - struct rte_mbuf *pkts_burst[MAX_PKT_BURST]; struct rte_mbuf *gso_segments[GSO_MAX_PKT_BURST]; struct rte_gso_ctx *gso_ctx; struct rte_mbuf **tx_pkts_burst; @@ -805,7 +805,6 @@ pkt_burst_checksum_forward(struct fwd_stream *fs) void **gro_ctx; uint16_t gro_pkts_num; uint8_t gro_enable; - uint16_t nb_rx; uint16_t nb_tx; uint16_t nb_prep; uint16_t i; @@ -820,18 +819,6 @@ pkt_burst_checksum_forward(struct fwd_stream *fs) uint16_t nb_segments = 0; int ret; - uint64_t start_tsc = 0; - - get_start_cycles(&start_tsc); - - /* receive a burst of packet */ - nb_rx = rte_eth_rx_burst(fs->rx_port, fs->rx_queue, pkts_burst, - nb_pkt_per_burst); - inc_rx_burst_stats(fs, nb_rx); - if (unlikely(nb_rx == 0)) - return; - - fs->rx_packets += nb_rx; rx_bad_ip_csum = 0; rx_bad_l4_csum = 0; rx_bad_outer_l4_csum = 0; @@ -1138,13 +1125,13 @@ pkt_burst_checksum_forward(struct fwd_stream *fs) rte_pktmbuf_free(tx_pkts_burst[nb_tx]); } while (++nb_tx < nb_rx); } - - get_end_cycles(fs, start_tsc); } +PKT_BURST_FWD(checksum_forward_stream); + struct fwd_engine csum_fwd_engine = { .fwd_mode_name = "csum", .port_fwd_begin = NULL, .port_fwd_end = NULL, - .packet_fwd = pkt_burst_checksum_forward, + .packet_fwd = pkt_burst_fwd, }; diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c index 0d3664a64d..aa45948b4c 100644 --- a/app/test-pmd/flowgen.c +++ b/app/test-pmd/flowgen.c @@ -61,10 +61,10 @@ RTE_DEFINE_PER_LCORE(int, _next_flow); * still do so in order to maintain traffic statistics. 
*/ static void -pkt_burst_flow_gen(struct fwd_stream *fs) +flow_gen_stream(struct fwd_stream *fs, uint16_t nb_rx, + struct rte_mbuf **pkts_burst) { unsigned pkt_size = tx_pkt_length - 4; /* Adjust FCS */ - struct rte_mbuf *pkts_burst[MAX_PKT_BURST]; struct rte_mempool *mbp; struct rte_mbuf *pkt = NULL; struct rte_ether_hdr *eth_hdr; @@ -72,7 +72,6 @@ pkt_burst_flow_gen(struct fwd_stream *fs) struct rte_udp_hdr *udp_hdr; uint16_t vlan_tci, vlan_tci_outer; uint64_t ol_flags = 0; - uint16_t nb_rx; uint16_t nb_tx; uint16_t nb_dropped; uint16_t nb_pkt; @@ -80,17 +79,9 @@ pkt_burst_flow_gen(struct fwd_stream *fs) uint16_t i; uint32_t retry; uint64_t tx_offloads; - uint64_t start_tsc = 0; int next_flow = RTE_PER_LCORE(_next_flow); - get_start_cycles(&start_tsc); - - /* Receive a burst of packets and discard them. */ - nb_rx = rte_eth_rx_burst(fs->rx_port, fs->rx_queue, pkts_burst, - nb_pkt_per_burst); inc_rx_burst_stats(fs, nb_rx); - fs->rx_packets += nb_rx; - for (i = 0; i < nb_rx; i++) rte_pktmbuf_free(pkts_burst[i]); @@ -195,12 +186,11 @@ pkt_burst_flow_gen(struct fwd_stream *fs) rte_pktmbuf_free(pkts_burst[nb_tx]); } while (++nb_tx < nb_pkt); } - RTE_PER_LCORE(_next_flow) = next_flow; - - get_end_cycles(fs, start_tsc); } +PKT_BURST_FWD(flow_gen_stream); + static void flowgen_begin(portid_t pi) { @@ -211,5 +201,5 @@ struct fwd_engine flow_gen_engine = { .fwd_mode_name = "flowgen", .port_fwd_begin = flowgen_begin, .port_fwd_end = NULL, - .packet_fwd = pkt_burst_flow_gen, + .packet_fwd = pkt_burst_fwd, }; diff --git a/app/test-pmd/icmpecho.c b/app/test-pmd/icmpecho.c index 8948f28eb5..467ba330aa 100644 --- a/app/test-pmd/icmpecho.c +++ b/app/test-pmd/icmpecho.c @@ -267,13 +267,13 @@ ipv4_hdr_cksum(struct rte_ipv4_hdr *ip_h) (((rte_be_to_cpu_32((ipv4_addr)) >> 24) & 0x000000FF) == 0xE0) /* - * Receive a burst of packets, lookup for ICMP echo requests, and, if any, - * send back ICMP echo replies. 
+ * Lookup for ICMP echo requests in received mbuf and, if any, + * send back ICMP echo replies to corresponding Tx port. */ static void -reply_to_icmp_echo_rqsts(struct fwd_stream *fs) +reply_to_icmp_echo_rqsts_stream(struct fwd_stream *fs, uint16_t nb_rx, + struct rte_mbuf **pkts_burst) { - struct rte_mbuf *pkts_burst[MAX_PKT_BURST]; struct rte_mbuf *pkt; struct rte_ether_hdr *eth_h; struct rte_vlan_hdr *vlan_h; @@ -283,7 +283,6 @@ reply_to_icmp_echo_rqsts(struct fwd_stream *fs) struct rte_ether_addr eth_addr; uint32_t retry; uint32_t ip_addr; - uint16_t nb_rx; uint16_t nb_tx; uint16_t nb_replies; uint16_t eth_type; @@ -291,22 +290,9 @@ reply_to_icmp_echo_rqsts(struct fwd_stream *fs) uint16_t arp_op; uint16_t arp_pro; uint32_t cksum; - uint8_t i; + uint16_t i; int l2_len; - uint64_t start_tsc = 0; - get_start_cycles(&start_tsc); - - /* - * First, receive a burst of packets. - */ - nb_rx = rte_eth_rx_burst(fs->rx_port, fs->rx_queue, pkts_burst, - nb_pkt_per_burst); - inc_rx_burst_stats(fs, nb_rx); - if (unlikely(nb_rx == 0)) - return; - - fs->rx_packets += nb_rx; nb_replies = 0; for (i = 0; i < nb_rx; i++) { if (likely(i < nb_rx - 1)) @@ -509,13 +495,13 @@ reply_to_icmp_echo_rqsts(struct fwd_stream *fs) } while (++nb_tx < nb_replies); } } - - get_end_cycles(fs, start_tsc); } +PKT_BURST_FWD(reply_to_icmp_echo_rqsts_stream); + struct fwd_engine icmp_echo_engine = { .fwd_mode_name = "icmpecho", .port_fwd_begin = NULL, .port_fwd_end = NULL, - .packet_fwd = reply_to_icmp_echo_rqsts, + .packet_fwd = pkt_burst_fwd, }; diff --git a/app/test-pmd/iofwd.c b/app/test-pmd/iofwd.c index 83d098adcb..dbd78167b4 100644 --- a/app/test-pmd/iofwd.c +++ b/app/test-pmd/iofwd.c @@ -44,25 +44,11 @@ * to packets data. 
*/ static void -pkt_burst_io_forward(struct fwd_stream *fs) +io_forward_stream(struct fwd_stream *fs, uint16_t nb_rx, + struct rte_mbuf **pkts_burst) { - struct rte_mbuf *pkts_burst[MAX_PKT_BURST]; - uint16_t nb_rx; uint16_t nb_tx; uint32_t retry; - uint64_t start_tsc = 0; - - get_start_cycles(&start_tsc); - - /* - * Receive a burst of packets and forward them. - */ - nb_rx = rte_eth_rx_burst(fs->rx_port, fs->rx_queue, - pkts_burst, nb_pkt_per_burst); - inc_rx_burst_stats(fs, nb_rx); - if (unlikely(nb_rx == 0)) - return; - fs->rx_packets += nb_rx; nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx); @@ -85,13 +71,13 @@ pkt_burst_io_forward(struct fwd_stream *fs) rte_pktmbuf_free(pkts_burst[nb_tx]); } while (++nb_tx < nb_rx); } - - get_end_cycles(fs, start_tsc); } +PKT_BURST_FWD(io_forward_stream); + struct fwd_engine io_fwd_engine = { .fwd_mode_name = "io", .port_fwd_begin = NULL, .port_fwd_end = NULL, - .packet_fwd = pkt_burst_io_forward, + .packet_fwd = pkt_burst_fwd, }; diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c index 0568ea794d..b0728c7597 100644 --- a/app/test-pmd/macfwd.c +++ b/app/test-pmd/macfwd.c @@ -44,32 +44,18 @@ * before forwarding them. */ static void -pkt_burst_mac_forward(struct fwd_stream *fs) +mac_forward_stream(struct fwd_stream *fs, uint16_t nb_rx, + struct rte_mbuf **pkts_burst) { - struct rte_mbuf *pkts_burst[MAX_PKT_BURST]; struct rte_port *txp; struct rte_mbuf *mb; struct rte_ether_hdr *eth_hdr; uint32_t retry; - uint16_t nb_rx; uint16_t nb_tx; uint16_t i; uint64_t ol_flags = 0; uint64_t tx_offloads; - uint64_t start_tsc = 0; - get_start_cycles(&start_tsc); - - /* - * Receive a burst of packets and forward them. 
- */ - nb_rx = rte_eth_rx_burst(fs->rx_port, fs->rx_queue, pkts_burst, - nb_pkt_per_burst); - inc_rx_burst_stats(fs, nb_rx); - if (unlikely(nb_rx == 0)) - return; - - fs->rx_packets += nb_rx; txp = &ports[fs->tx_port]; tx_offloads = txp->dev_conf.txmode.offloads; if (tx_offloads & DEV_TX_OFFLOAD_VLAN_INSERT) @@ -116,13 +102,13 @@ pkt_burst_mac_forward(struct fwd_stream *fs) rte_pktmbuf_free(pkts_burst[nb_tx]); } while (++nb_tx < nb_rx); } - - get_end_cycles(fs, start_tsc); } +PKT_BURST_FWD(mac_forward_stream); + struct fwd_engine mac_fwd_engine = { .fwd_mode_name = "mac", .port_fwd_begin = NULL, .port_fwd_end = NULL, - .packet_fwd = pkt_burst_mac_forward, + .packet_fwd = pkt_burst_fwd, }; diff --git a/app/test-pmd/macswap.c b/app/test-pmd/macswap.c index 310bca06af..cc208944d7 100644 --- a/app/test-pmd/macswap.c +++ b/app/test-pmd/macswap.c @@ -50,27 +50,13 @@ * addresses of packets before forwarding them. */ static void -pkt_burst_mac_swap(struct fwd_stream *fs) +mac_swap_stream(struct fwd_stream *fs, uint16_t nb_rx, + struct rte_mbuf **pkts_burst) { - struct rte_mbuf *pkts_burst[MAX_PKT_BURST]; struct rte_port *txp; - uint16_t nb_rx; uint16_t nb_tx; uint32_t retry; - uint64_t start_tsc = 0; - get_start_cycles(&start_tsc); - - /* - * Receive a burst of packets and forward them. 
- */ - nb_rx = rte_eth_rx_burst(fs->rx_port, fs->rx_queue, pkts_burst, - nb_pkt_per_burst); - inc_rx_burst_stats(fs, nb_rx); - if (unlikely(nb_rx == 0)) - return; - - fs->rx_packets += nb_rx; txp = &ports[fs->tx_port]; do_macswap(pkts_burst, nb_rx, txp); @@ -95,12 +81,13 @@ pkt_burst_mac_swap(struct fwd_stream *fs) rte_pktmbuf_free(pkts_burst[nb_tx]); } while (++nb_tx < nb_rx); } - get_end_cycles(fs, start_tsc); } +PKT_BURST_FWD(mac_swap_stream); + struct fwd_engine mac_swap_engine = { .fwd_mode_name = "macswap", .port_fwd_begin = NULL, .port_fwd_end = NULL, - .packet_fwd = pkt_burst_mac_swap, + .packet_fwd = pkt_burst_fwd, }; diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c index c78fc4609a..a7354596b5 100644 --- a/app/test-pmd/rxonly.c +++ b/app/test-pmd/rxonly.c @@ -41,37 +41,21 @@ #include "testpmd.h" /* - * Received a burst of packets. + * Process a burst of received packets from same stream. */ static void -pkt_burst_receive(struct fwd_stream *fs) +rxonly_forward_stream(struct fwd_stream *fs, uint16_t nb_rx, + struct rte_mbuf **pkts_burst) { - struct rte_mbuf *pkts_burst[MAX_PKT_BURST]; - uint16_t nb_rx; - uint16_t i; - uint64_t start_tsc = 0; - - get_start_cycles(&start_tsc); - - /* - * Receive a burst of packets. 
- */ - nb_rx = rte_eth_rx_burst(fs->rx_port, fs->rx_queue, pkts_burst, - nb_pkt_per_burst); - inc_rx_burst_stats(fs, nb_rx); - if (unlikely(nb_rx == 0)) - return; - - fs->rx_packets += nb_rx; - for (i = 0; i < nb_rx; i++) - rte_pktmbuf_free(pkts_burst[i]); - - get_end_cycles(fs, start_tsc); + RTE_SET_USED(fs); + rte_pktmbuf_free_bulk(pkts_burst, nb_rx); } +PKT_BURST_FWD(rxonly_forward_stream) + struct fwd_engine rx_only_engine = { .fwd_mode_name = "rxonly", .port_fwd_begin = NULL, .port_fwd_end = NULL, - .packet_fwd = pkt_burst_receive, + .packet_fwd = pkt_burst_fwd, }; diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index f121a2da90..4792bef03b 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -1028,6 +1028,25 @@ void add_tx_dynf_callback(portid_t portid); void remove_tx_dynf_callback(portid_t portid); int update_jumbo_frame_offload(portid_t portid); +#define PKT_BURST_FWD(cb) \ +static void \ +pkt_burst_fwd(struct fwd_stream *fs) \ +{ \ + struct rte_mbuf *pkts_burst[nb_pkt_per_burst]; \ + uint16_t nb_rx; \ + uint64_t start_tsc = 0; \ + \ + get_start_cycles(&start_tsc); \ + nb_rx = rte_eth_rx_burst(fs->rx_port, fs->rx_queue, \ + pkts_burst, nb_pkt_per_burst); \ + inc_rx_burst_stats(fs, nb_rx); \ + if (unlikely(nb_rx == 0)) \ + return; \ + fs->rx_packets += nb_rx; \ + cb(fs, nb_rx, pkts_burst); \ + get_end_cycles(fs, start_tsc); \ +} + /* * Work-around of a compilation error with ICC on invocations of the * rte_be_to_cpu_16() function. 
From patchwork Fri Sep 17 08:01:20 2021
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 99075
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Xueming Li
CC: Jerin Jacob, Ferruh Yigit, Andrew Rybchenko, Viacheslav Ovsiienko, Thomas Monjalon, Lior Margalit, Xiaoyun Li
Date: Fri, 17 Sep 2021 16:01:20 +0800
Message-ID: <20210917080121.329373-8-xuemingl@nvidia.com>
In-Reply-To: <20210917080121.329373-1-xuemingl@nvidia.com>
References: <20210727034204.20649-1-xuemingl@nvidia.com> <20210917080121.329373-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH v3 7/8] app/testpmd: improve forwarding
cache miss

To minimize cache misses, add the flags and the burst size used in
forwarding to the stream structure, and turn the conditions tested
during forwarding into per-stream flags.

Signed-off-by: Xueming Li
---
 app/test-pmd/config.c    | 18 ++++++++++++++----
 app/test-pmd/flowgen.c   |  6 +++---
 app/test-pmd/noisy_vnf.c |  2 +-
 app/test-pmd/testpmd.h   | 21 ++++++++++++---------
 app/test-pmd/txonly.c    |  8 ++++----
 5 files changed, 34 insertions(+), 21 deletions(-)

diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 035247c33f..5cdf8fa082 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -3050,6 +3050,16 @@ fwd_topology_tx_port_get(portid_t rxp)
 	}
 }
 
+static void
+fwd_stream_set_common(struct fwd_stream *fs)
+{
+	fs->nb_pkt_per_burst = nb_pkt_per_burst;
+	fs->record_burst_stats = !!record_burst_stats;
+	fs->record_core_cycles = !!record_core_cycles;
+	fs->retry_enabled = !!retry_enabled;
+	fs->rxq_share = !!rxq_share;
+}
+
 static void
 simple_fwd_config_setup(void)
 {
@@ -3079,7 +3089,7 @@ simple_fwd_config_setup(void)
 			fwd_ports_ids[fwd_topology_tx_port_get(i)];
 		fwd_streams[i]->tx_queue = 0;
 		fwd_streams[i]->peer_addr = fwd_streams[i]->tx_port;
-		fwd_streams[i]->retry_enabled = retry_enabled;
+		fwd_stream_set_common(fwd_streams[i]);
 	}
 }
 
@@ -3140,7 +3150,7 @@ rss_fwd_config_setup(void)
 		fs->tx_port = fwd_ports_ids[txp];
 		fs->tx_queue = rxq;
 		fs->peer_addr = fs->tx_port;
-		fs->retry_enabled = retry_enabled;
+		fwd_stream_set_common(fs);
 		rxp++;
 		if (rxp < nb_fwd_ports)
 			continue;
@@ -3255,7 +3265,7 @@ dcb_fwd_config_setup(void)
 				fs->tx_port = fwd_ports_ids[txp];
 				fs->tx_queue = txq + j % nb_tx_queue;
 				fs->peer_addr = fs->tx_port;
-				fs->retry_enabled = retry_enabled;
+				fwd_stream_set_common(fs);
 			}
 			fwd_lcores[lc_id]->stream_nb +=
 				rxp_dcb_info.tc_queue.tc_rxq[i][tc].nb_queue;
@@ -3326,7 +3336,7 @@ icmp_echo_config_setup(void)
 		fs->tx_port = fs->rx_port;
 		fs->tx_queue = rxq;
 		fs->peer_addr = fs->tx_port;
-		fs->retry_enabled = retry_enabled;
+		fwd_stream_set_common(fs);
 		if (verbose_level > 0)
 			printf("  stream=%d port=%d rxq=%d txq=%d\n",
 			       sm_id, fs->rx_port, fs->rx_queue,

diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index aa45948b4c..c282f3bcb1 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -97,12 +97,12 @@ flow_gen_stream(struct fwd_stream *fs, uint16_t nb_rx,
 	if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
 		ol_flags |= PKT_TX_MACSEC;
 
-	for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) {
+	for (nb_pkt = 0; nb_pkt < fs->nb_pkt_per_burst; nb_pkt++) {
 		if (!nb_pkt || !nb_clones) {
 			nb_clones = nb_pkt_flowgen_clones;
 			/* Logic limitation */
-			if (nb_clones > nb_pkt_per_burst)
-				nb_clones = nb_pkt_per_burst;
+			if (nb_clones > fs->nb_pkt_per_burst)
+				nb_clones = fs->nb_pkt_per_burst;
 
 			pkt = rte_mbuf_raw_alloc(mbp);
 			if (!pkt)

diff --git a/app/test-pmd/noisy_vnf.c b/app/test-pmd/noisy_vnf.c
index 382a4c2aae..56bf6a4e70 100644
--- a/app/test-pmd/noisy_vnf.c
+++ b/app/test-pmd/noisy_vnf.c
@@ -153,7 +153,7 @@ pkt_burst_noisy_vnf(struct fwd_stream *fs)
 	uint64_t now;
 
 	nb_rx = rte_eth_rx_burst(fs->rx_port, fs->rx_queue,
-			pkts_burst, nb_pkt_per_burst);
+			pkts_burst, fs->nb_pkt_per_burst);
 	inc_rx_burst_stats(fs, nb_rx);
 	if (unlikely(nb_rx == 0))
 		goto flush;

diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 4792bef03b..3b8796a7a5 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -128,12 +128,17 @@ struct fwd_stream {
 	queueid_t  tx_queue;  /**< TX queue to send forwarded packets */
 	streamid_t peer_addr; /**< index of peer ethernet address of packets */
-	unsigned int retry_enabled;
+	uint16_t nb_pkt_per_burst;
+	unsigned int record_burst_stats:1;
+	unsigned int record_core_cycles:1;
+	unsigned int retry_enabled:1;
+	unsigned int rxq_share:1;
 
 	/* "read-write" results */
 	uint64_t rx_packets;  /**< received packets */
 	uint64_t tx_packets;  /**< received packets transmitted */
 	uint64_t fwd_dropped; /**< received packets not forwarded */
+	uint64_t core_cycles; /**< used for RX and TX processing */
 	uint64_t rx_bad_ip_csum ; /**< received packets has bad ip checksum */
 	uint64_t rx_bad_l4_csum ; /**< received packets has bad l4 checksum */
 	uint64_t rx_bad_outer_l4_csum;
@@ -141,7 +146,6 @@ struct fwd_stream {
 	uint64_t rx_bad_outer_ip_csum;
 	/**< received packets having bad outer ip checksum */
 	unsigned int gro_times;	/**< GRO operation times */
-	uint64_t core_cycles; /**< used for RX and TX processing */
 	struct pkt_burst_stats rx_burst_stats;
 	struct pkt_burst_stats tx_burst_stats;
 	struct fwd_lcore *lcore; /**< Lcore being scheduled. */
@@ -750,28 +754,27 @@ port_pci_reg_write(struct rte_port *port, uint32_t reg_off, uint32_t reg_v)
 static inline void
 get_start_cycles(uint64_t *start_tsc)
 {
-	if (record_core_cycles)
-		*start_tsc = rte_rdtsc();
+	*start_tsc = rte_rdtsc();
 }
 
 static inline void
 get_end_cycles(struct fwd_stream *fs, uint64_t start_tsc)
 {
-	if (record_core_cycles)
+	if (unlikely(fs->record_core_cycles))
 		fs->core_cycles += rte_rdtsc() - start_tsc;
 }
 
 static inline void
 inc_rx_burst_stats(struct fwd_stream *fs, uint16_t nb_rx)
 {
-	if (record_burst_stats)
+	if (unlikely(fs->record_burst_stats))
 		fs->rx_burst_stats.pkt_burst_spread[nb_rx]++;
 }
 
 static inline void
 inc_tx_burst_stats(struct fwd_stream *fs, uint16_t nb_tx)
 {
-	if (record_burst_stats)
+	if (unlikely(fs->record_burst_stats))
 		fs->tx_burst_stats.pkt_burst_spread[nb_tx]++;
 }
 
@@ -1032,13 +1035,13 @@ int update_jumbo_frame_offload(portid_t portid);
 static void \
 pkt_burst_fwd(struct fwd_stream *fs) \
 { \
-	struct rte_mbuf *pkts_burst[nb_pkt_per_burst]; \
+	struct rte_mbuf *pkts_burst[fs->nb_pkt_per_burst]; \
 	uint16_t nb_rx; \
 	uint64_t start_tsc = 0; \
 \
 	get_start_cycles(&start_tsc); \
 	nb_rx = rte_eth_rx_burst(fs->rx_port, fs->rx_queue, \
-			pkts_burst, nb_pkt_per_burst); \
+			pkts_burst, fs->nb_pkt_per_burst); \
 	inc_rx_burst_stats(fs, nb_rx); \
 	if (unlikely(nb_rx == 0)) \
 		return; \

diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index aed820f5d3..db6130421c 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -367,8 +367,8 @@ pkt_burst_transmit(struct fwd_stream *fs)
 	eth_hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 
 	if (rte_mempool_get_bulk(mbp, (void **)pkts_burst,
-				nb_pkt_per_burst) == 0) {
-		for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) {
+				fs->nb_pkt_per_burst) == 0) {
+		for (nb_pkt = 0; nb_pkt < fs->nb_pkt_per_burst; nb_pkt++) {
 			if (unlikely(!pkt_burst_prepare(pkts_burst[nb_pkt], mbp,
 							&eth_hdr, vlan_tci,
 							vlan_tci_outer,
@@ -376,12 +376,12 @@ pkt_burst_transmit(struct fwd_stream *fs)
 							nb_pkt, fs))) {
 				rte_mempool_put_bulk(mbp,
 						(void **)&pkts_burst[nb_pkt],
-						nb_pkt_per_burst - nb_pkt);
+						fs->nb_pkt_per_burst - nb_pkt);
 				break;
 			}
 		}
 	} else {
-		for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) {
+		for (nb_pkt = 0; nb_pkt < fs->nb_pkt_per_burst; nb_pkt++) {
 			pkt = rte_mbuf_raw_alloc(mbp);
 			if (pkt == NULL)
 				break;

From patchwork Fri Sep 17 08:01:21 2021
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 99076
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Xueming Li
CC: Jerin Jacob, Ferruh Yigit, Andrew Rybchenko, Viacheslav Ovsiienko, Thomas Monjalon, Lior Margalit, Xiaoyun Li
Date: Fri, 17 Sep 2021 16:01:21 +0800
Message-ID: <20210917080121.329373-9-xuemingl@nvidia.com>
In-Reply-To: <20210917080121.329373-1-xuemingl@nvidia.com>
References: <20210727034204.20649-1-xuemingl@nvidia.com> <20210917080121.329373-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH v3 8/8] app/testpmd: support shared Rx queue forwarding
List-Id: DPDK patches and discussions

With a shared Rx queue enabled, received packets may come from any
member port of the same shared Rx queue.

This patch adds a common forwarding function for shared Rx queues. It
groups received packets by source forwarding stream, matching the local
streams on the current lcore against each packet's source port
(mbuf->port) and queue, then invokes a callback to handle the received
packets of each source stream.

Signed-off-by: Xueming Li
---
 app/test-pmd/ieee1588fwd.c | 30 +++++++++++------
 app/test-pmd/testpmd.c     | 69 ++++++++++++++++++++++++++++++++++++++
 app/test-pmd/testpmd.h     |  9 ++++-
 3 files changed, 97 insertions(+), 11 deletions(-)

diff --git a/app/test-pmd/ieee1588fwd.c b/app/test-pmd/ieee1588fwd.c
index 034f238c34..0151d6de74 100644
--- a/app/test-pmd/ieee1588fwd.c
+++ b/app/test-pmd/ieee1588fwd.c
@@ -90,23 +90,17 @@ port_ieee1588_tx_timestamp_check(portid_t pi)
 }
 
 static void
-ieee1588_packet_fwd(struct fwd_stream *fs)
+ieee1588_fwd_stream(struct fwd_stream *fs, uint16_t nb_rx,
+		struct rte_mbuf **pkt)
 {
-	struct rte_mbuf *mb;
+	struct rte_mbuf *mb = (*pkt);
 	struct rte_ether_hdr *eth_hdr;
 	struct rte_ether_addr addr;
 	struct ptpv2_msg *ptp_hdr;
 	uint16_t eth_type;
 	uint32_t timesync_index;
 
-	/*
-	 * Receive 1 packet at a time.
-	 */
-	if (rte_eth_rx_burst(fs->rx_port, fs->rx_queue, &mb, 1) == 0)
-		return;
-
-	fs->rx_packets += 1;
-
+	RTE_SET_USED(nb_rx);
 	/*
 	 * Check that the received packet is a PTP packet that was detected
 	 * by the hardware.
@@ -198,6 +192,22 @@ ieee1588_packet_fwd(struct fwd_stream *fs)
 	port_ieee1588_tx_timestamp_check(fs->rx_port);
 }
 
+/*
+ * Wrapper of the real fwd engine.
+ */
+static void
+ieee1588_packet_fwd(struct fwd_stream *fs)
+{
+	struct rte_mbuf *mb;
+
+	if (rte_eth_rx_burst(fs->rx_port, fs->rx_queue, &mb, 1) == 0)
+		return;
+	if (unlikely(fs->rxq_share > 0))
+		forward_shared_rxq(fs, 1, &mb, ieee1588_fwd_stream);
+	else
+		ieee1588_fwd_stream(fs, 1, &mb);
+}
+
 static void
 port_ieee1588_fwd_begin(portid_t pi)
 {
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index cab4b36b04..1d82397831 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2106,6 +2106,75 @@ flush_fwd_rx_queues(void)
 	}
 }
 
+/**
+ * Get the packet source stream by source port and queue.
+ * All streams of the same shared Rx queue are located on the same core.
+ */
+static struct fwd_stream *
+forward_stream_get(struct fwd_stream *fs, uint16_t port)
+{
+	streamid_t sm_id;
+	struct fwd_lcore *fc;
+	struct fwd_stream **fsm;
+	streamid_t nb_fs;
+
+	fc = fs->lcore;
+	fsm = &fwd_streams[fc->stream_idx];
+	nb_fs = fc->stream_nb;
+	for (sm_id = 0; sm_id < nb_fs; sm_id++) {
+		if (fsm[sm_id]->rx_port == port &&
+		    fsm[sm_id]->rx_queue == fs->rx_queue)
+			return fsm[sm_id];
+	}
+	return NULL;
+}
+
+/**
+ * Forward packets by source port and queue.
+ */
+static void
+forward_by_port(struct fwd_stream *src_fs, uint16_t port, uint16_t nb_rx,
+		struct rte_mbuf **pkts, packet_fwd_cb fwd)
+{
+	struct fwd_stream *fs = forward_stream_get(src_fs, port);
+
+	if (fs != NULL) {
+		fs->rx_packets += nb_rx;
+		fwd(fs, nb_rx, pkts);
+	} else {
+		/* Source stream not found, drop all packets. */
+		src_fs->fwd_dropped += nb_rx;
+		while (nb_rx > 0)
+			rte_pktmbuf_free(pkts[--nb_rx]);
+	}
+}
+
+/**
+ * Forward packets from a shared Rx queue.
+ *
+ * The source port of each packet is identified by mbuf->port.
+ */
+void
+forward_shared_rxq(struct fwd_stream *fs, uint16_t nb_rx,
+		   struct rte_mbuf **pkts_burst, packet_fwd_cb fwd)
+{
+	uint16_t i, nb_fs_rx = 1, port;
+
+	/* Locate the real source fs according to mbuf->port. */
+	for (i = 0; i < nb_rx; ++i) {
+		rte_prefetch0(pkts_burst[i + 1]);
+		port = pkts_burst[i]->port;
+		if (i + 1 == nb_rx || pkts_burst[i + 1]->port != port) {
+			/* Forward packets with the same source port. */
+			forward_by_port(fs, port, nb_fs_rx,
+					&pkts_burst[i + 1 - nb_fs_rx], fwd);
+			nb_fs_rx = 1;
+		} else {
+			nb_fs_rx++;
+		}
+	}
+}
+
 static void
 run_pkt_fwd_on_lcore(struct fwd_lcore *fc, packet_fwd_t pkt_fwd)
 {
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 3b8796a7a5..7869f61f74 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -276,6 +276,8 @@ struct fwd_lcore {
 typedef void (*port_fwd_begin_t)(portid_t pi);
 typedef void (*port_fwd_end_t)(portid_t pi);
 typedef void (*packet_fwd_t)(struct fwd_stream *fs);
+typedef void (*packet_fwd_cb)(struct fwd_stream *fs, uint16_t nb_rx,
+		struct rte_mbuf **pkts);
 
 struct fwd_engine {
 	const char *fwd_mode_name; /**< Forwarding mode name. */
@@ -910,6 +912,8 @@ char *list_pkt_forwarding_modes(void);
 char *list_pkt_forwarding_retry_modes(void);
 void set_pkt_forwarding_mode(const char *fwd_mode);
 void start_packet_forwarding(int with_tx_first);
+void forward_shared_rxq(struct fwd_stream *fs, uint16_t nb_rx,
+		struct rte_mbuf **pkts_burst, packet_fwd_cb fwd);
 void fwd_stats_display(void);
 void fwd_stats_reset(void);
 void stop_packet_forwarding(void);
@@ -1046,7 +1050,10 @@ pkt_burst_fwd(struct fwd_stream *fs)	\
 	if (unlikely(nb_rx == 0))	\
 		return;	\
 	fs->rx_packets += nb_rx;	\
-	cb(fs, nb_rx, pkts_burst);	\
+	if (fs->rxq_share)	\
+		forward_shared_rxq(fs, nb_rx, pkts_burst, cb);	\
+	else	\
+		cb(fs, nb_rx, pkts_burst);	\
 	get_end_cycles(fs, start_tsc);	\
 }
 
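For reviewers: the core of forward_shared_rxq() is a run-grouping pass over the burst, batching consecutive packets that share mbuf->port and handing each batch to the per-stream callback in a single call. A minimal standalone sketch of that loop follows. It is illustrative only: mock_mbuf, mock_fwd, and group_by_port are hypothetical stand-ins for rte_mbuf, the packet_fwd_cb callback, and the loop in the patch (the prefetch and stream lookup are omitted).

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for rte_mbuf: only the port field matters here. */
struct mock_mbuf {
	uint16_t port;
};

/* Toy bookkeeping: packets forwarded per port and number of callback calls. */
static uint16_t fwd_count[8];
static int fwd_calls;

/* Stand-in for the packet_fwd_cb callback: invoked once per run of
 * packets that share a source port. */
static void
mock_fwd(uint16_t port, uint16_t nb_rx, struct mock_mbuf **pkts)
{
	(void)pkts;
	fwd_count[port] += nb_rx;
	fwd_calls++;
}

/* Sketch of the grouping loop in forward_shared_rxq(): walk the burst,
 * extend the current run while the next packet has the same port, and
 * flush the run (starting at i + 1 - run, spanning run packets) when
 * the port changes or the burst ends. */
static void
group_by_port(struct mock_mbuf **pkts, uint16_t nb_rx)
{
	uint16_t i, run = 1;

	for (i = 0; i < nb_rx; ++i) {
		uint16_t port = pkts[i]->port;

		if (i + 1 == nb_rx || pkts[i + 1]->port != port) {
			/* Run ends here: forward the whole batch at once. */
			mock_fwd(port, run, &pkts[i + 1 - run]);
			run = 1;
		} else {
			run++;
		}
	}
}
```

For a burst with ports {1, 1, 2, 2, 1}, this yields three callback invocations: a batch of two for port 1, a batch of two for port 2, then a batch of one for port 1. Note that only consecutive packets are batched; an interleaved burst still resolves to the correct per-port totals, at the cost of more callback calls.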