From patchwork Wed Oct 20 07:53:13 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: "Xueming(Steven) Li"
X-Patchwork-Id: 102376
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Xueming Li
To: Zhang Yuying
CC: Jerin Jacob, Ferruh Yigit, Andrew Rybchenko, Viacheslav Ovsiienko,
 Thomas Monjalon, Lior Margalit, "Ananyev Konstantin", Ajit Khaparde
Date: Wed, 20 Oct 2021 15:53:13 +0800
Message-ID: <20211020075319.2397551-2-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20211020075319.2397551-1-xuemingl@nvidia.com>
References: <20210727034204.20649-1-xuemingl@nvidia.com> <20211020075319.2397551-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH v11 1/7] ethdev: introduce shared Rx queue

In the current DPDK framework, each Rx queue is pre-loaded with mbufs to
save incoming packets. For some PMDs, when the number of representors in
a switch domain scales out, this memory consumption becomes significant.
Polling all ports also leads to high cache-miss rates, high latency and
low throughput.

This patch introduces the shared Rx queue. Ports in the same Rx domain
and switch domain can share an Rx queue set by specifying a non-zero
share group in the Rx queue configuration. The shared Rx queue is
identified by the share_qid field of the Rx queue configuration: port A
RxQ X can share an Rx queue with port B RxQ Y by using the same shared
Rx queue ID.

No special API is defined to receive packets from a shared Rx queue.
Polling any member port of a shared Rx queue receives packets of that
queue for all member ports; the source port is identified by mbuf->port.
The PMD is responsible for resolving the shared Rx queue from the device
and queue data. A shared Rx queue must be polled from a single thread or
core; polling the queue via any member port's queue ID is essentially
the same operation.

Multiple share groups are supported. A PMD should support mixed
configurations, allowing multiple share groups together with non-shared
Rx queues on one port. Example grouping and polling model to reflect
service priority:

 Group1, 2 shared Rx queues per port: PF, rep0, rep1
 Group2, 1 shared Rx queue per port: rep2, rep3, ... rep127

 Core0: poll PF queue0
 Core1: poll PF queue1
 Core2: poll rep2 queue0

PMDs advertise the shared Rx queue capability via
RTE_ETH_DEV_CAPA_RXQ_SHARE. The PMD is responsible for shared Rx queue
consistency checks so that member ports' configurations do not
contradict each other.
Signed-off-by: Xueming Li
Reviewed-by: Andrew Rybchenko
Acked-by: Ajit Khaparde
---
 doc/guides/nics/features.rst                 | 13 ++++++++++
 doc/guides/nics/features/default.ini         |  1 +
 .../prog_guide/switch_representation.rst     | 11 +++++++++
 doc/guides/rel_notes/release_21_11.rst       |  6 +++++
 lib/ethdev/rte_ethdev.c                      |  8 +++++++
 lib/ethdev/rte_ethdev.h                      | 24 +++++++++++++++++++
 6 files changed, 63 insertions(+)

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index e346018e4b8..89f9accbca1 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -615,6 +615,19 @@ Supports inner packet L4 checksum.
   ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_UDP_CKSUM``.
 
 
+.. _nic_features_shared_rx_queue:
+
+Shared Rx queue
+---------------
+
+Supports shared Rx queue for ports in same Rx domain of a switch domain.
+
+* **[uses] rte_eth_dev_info**: ``dev_capa:RTE_ETH_DEV_CAPA_RXQ_SHARE``.
+* **[uses] rte_eth_dev_info, rte_eth_switch_info**: ``rx_domain``, ``domain_id``.
+* **[uses] rte_eth_rxconf**: ``share_group``, ``share_qid``.
+* **[provides] mbuf**: ``mbuf.port``.
+
+
 .. _nic_features_packet_type_parsing:
 
 Packet type parsing
diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini
index d473b94091a..93f5d1b46f4 100644
--- a/doc/guides/nics/features/default.ini
+++ b/doc/guides/nics/features/default.ini
@@ -19,6 +19,7 @@ Free Tx mbuf on demand =
 Queue start/stop     =
 Runtime Rx queue setup =
 Runtime Tx queue setup =
+Shared Rx queue      =
 Burst mode info      =
 Power mgmt address monitor =
 MTU update           =
diff --git a/doc/guides/prog_guide/switch_representation.rst b/doc/guides/prog_guide/switch_representation.rst
index ff6aa91c806..4f2532a91ea 100644
--- a/doc/guides/prog_guide/switch_representation.rst
+++ b/doc/guides/prog_guide/switch_representation.rst
@@ -123,6 +123,17 @@ thought as a software "patch panel" front-end for applications.
 ..
    [1] `Ethernet switch device driver model (switchdev) `_
 
+- For some PMDs, memory usage of representors is huge when number of
+  representor grows, mbufs are allocated for each descriptor of Rx queue.
+  Polling large number of ports brings more CPU load, cache miss and
+  latency. Shared Rx queue can be used to share Rx queue between PF and
+  representors among same Rx domain. ``RTE_ETH_DEV_CAPA_RXQ_SHARE`` in
+  device info is used to indicate the capability. Setting non-zero share
+  group in Rx queue configuration to enable share, share_qid is used to
+  identify the shared Rx queue in group. Polling any member port can
+  receive packets of all member ports in the group, port ID is saved in
+  ``mbuf.port``.
+
 Basic SR-IOV
 ------------
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 3362c52a738..caf82242f2e 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -75,6 +75,12 @@ New Features
   operations.
 
   * Added multi-process support.
 
+* **Added ethdev shared Rx queue support.**
+
+  * Added new device capability flag and Rx domain field to switch info.
+  * Added share group and share queue ID to Rx queue configuration.
+  * Added testpmd support and dedicate forwarding engine.
+
 * **Added new RSS offload types for IPv4/L4 checksum in RSS flow.**
 
   Added macros ETH_RSS_IPV4_CHKSUM and ETH_RSS_L4_CHKSUM, now IPv4 and
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 028907bc4b9..bc55f899f72 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -2159,6 +2159,14 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 		return -EINVAL;
 	}
 
+	if (local_conf.share_group > 0 &&
+	    (dev_info.dev_capa & RTE_ETH_DEV_CAPA_RXQ_SHARE) == 0) {
+		RTE_ETHDEV_LOG(ERR,
+			"Ethdev port_id=%d rx_queue_id=%d, enabled share_group=%hu while device doesn't support Rx queue share\n",
+			port_id, rx_queue_id, local_conf.share_group);
+		return -EINVAL;
+	}
+
 	/*
 	 * If LRO is enabled, check that the maximum aggregated packet
 	 * size is supported by the configured device.
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 6d80514ba7a..34acc91273d 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1044,6 +1044,14 @@ struct rte_eth_rxconf {
 	uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */
 	uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
 	uint16_t rx_nseg; /**< Number of descriptions in rx_seg array. */
+	/**
+	 * Share group index in Rx domain and switch domain.
+	 * Non-zero value to enable Rx queue share, zero value disable share.
+	 * PMD is responsible for Rx queue consistency checks to avoid member
+	 * port's configuration contradict to each other.
+	 */
+	uint16_t share_group;
+	uint16_t share_qid; /**< Shared Rx queue ID in group. */
 	/**
 	 * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
 	 * Only offloads set on rx_queue_offload_capa or rx_offload_capa
@@ -1445,6 +1453,16 @@ struct rte_eth_conf {
 #define RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP 0x00000001
 /** Device supports Tx queue setup after device started. */
 #define RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP 0x00000002
+/**
+ * Device supports shared Rx queue among ports within Rx domain and
+ * switch domain. Mbufs are consumed by shared Rx queue instead of
+ * each queue. Multiple groups are supported by share_group of Rx
+ * queue configuration. Shared Rx queue is identified by PMD using
+ * share_qid of Rx queue configuration. Polling any port in the group
+ * receive packets of all member ports, source port identified by
+ * mbuf->port field.
+ */
+#define RTE_ETH_DEV_CAPA_RXQ_SHARE RTE_BIT64(2)
 /**@}*/
 
 /*
@@ -1488,6 +1506,12 @@ struct rte_eth_switch_info {
 	 * but each driver should explicitly define the mapping of switch
 	 * port identifier to that physical interconnect/switch
 	 */
+	/**
+	 * Shared Rx queue sub-domain boundary. Only ports in same Rx domain
+	 * and switch domain can share Rx queue. Valid only if device advertised
+	 * RTE_ETH_DEV_CAPA_RXQ_SHARE capability.
+	 */
+	uint16_t rx_domain;
 };
 
 /**