From patchwork Thu Nov 4 12:33:15 2021
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 103756
X-Patchwork-Delegate: rasland@nvidia.com
From: Xueming Li <xuemingl@nvidia.com>
To: dev@dpdk.org
CC: Lior Margalit, Slava Ovsiienko, Matan Azrad
Date: Thu, 4 Nov 2021 20:33:15 +0800
Message-ID: <20211104123320.1638915-10-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20211104123320.1638915-1-xuemingl@nvidia.com>
References: <20210727034204.20649-1-xuemingl@nvidia.com>
 <20211104123320.1638915-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH v4 09/14] net/mlx5: move Rx queue hairpin info to private data
Hairpin info of an Rx queue can't be shared, so move it to the private queue data.

Signed-off-by: Xueming Li
Acked-by: Slava Ovsiienko
---
 drivers/net/mlx5/mlx5_rx.h      |  4 ++--
 drivers/net/mlx5/mlx5_rxq.c     | 13 +++++--------
 drivers/net/mlx5/mlx5_trigger.c | 24 ++++++++++++------------
 3 files changed, 19 insertions(+), 22 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index eccfbf1108d..b21918223b8 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -162,8 +162,6 @@ struct mlx5_rxq_ctrl {
 	uint32_t flow_tunnels_n[MLX5_FLOW_TUNNEL]; /* Tunnels counters. */
 	uint32_t wqn; /* WQ number. */
 	uint16_t dump_file_n; /* Number of dump files. */
-	struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */
-	uint32_t hairpin_status; /* Hairpin binding status. */
 };
 
 /* RX queue private data. */
@@ -173,6 +171,8 @@ struct mlx5_rxq_priv {
 	struct mlx5_rxq_ctrl *ctrl; /* Shared Rx Queue. */
 	LIST_ENTRY(mlx5_rxq_priv) owner_entry; /* Entry in shared rxq_ctrl. */
 	struct mlx5_priv *priv; /* Back pointer to private data. */
+	struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */
+	uint32_t hairpin_status; /* Hairpin binding status. */
 };
 
 /* mlx5_rxq.c */
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 8071ddbd61c..7b637fda643 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1695,8 +1695,8 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
 	tmpl->rxq.elts_n = log2above(desc);
 	tmpl->rxq.elts = NULL;
 	tmpl->rxq.mr_ctrl.cache_bh = (struct mlx5_mr_btree) { 0 };
-	tmpl->hairpin_conf = *hairpin_conf;
 	tmpl->rxq.idx = idx;
+	rxq->hairpin_conf = *hairpin_conf;
 	mlx5_rxq_ref(dev, idx);
 	LIST_INSERT_HEAD(&priv->rxqsctrl, tmpl, next);
 	return tmpl;
@@ -1913,14 +1913,11 @@ const struct rte_eth_hairpin_conf *
 mlx5_rxq_get_hairpin_conf(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_ctrl *rxq_ctrl = NULL;
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
 
-	if (idx < priv->rxqs_n && (*priv->rxqs)[idx]) {
-		rxq_ctrl = container_of((*priv->rxqs)[idx],
-					struct mlx5_rxq_ctrl,
-					rxq);
-		if (rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
-			return &rxq_ctrl->hairpin_conf;
+	if (idx < priv->rxqs_n && rxq != NULL) {
+		if (rxq->ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
+			return &rxq->hairpin_conf;
 	}
 	return NULL;
 }
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index e5d74d275f8..a124f74fcda 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -324,7 +324,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 		}
 		rxq_ctrl = rxq->ctrl;
 		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN ||
-		    rxq_ctrl->hairpin_conf.peers[0].queue != i) {
+		    rxq->hairpin_conf.peers[0].queue != i) {
 			rte_errno = ENOMEM;
 			DRV_LOG(ERR, "port %u Tx queue %d can't be binded to "
 				"Rx queue %d", dev->data->port_id,
@@ -354,7 +354,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 		if (ret)
 			goto error;
 		/* Qs with auto-bind will be destroyed directly. */
-		rxq_ctrl->hairpin_status = 1;
+		rxq->hairpin_status = 1;
 		txq_ctrl->hairpin_status = 1;
 		mlx5_txq_release(dev, i);
 	}
@@ -457,9 +457,9 @@ mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue,
 		}
 		peer_info->qp_id = rxq_ctrl->obj->rq->id;
 		peer_info->vhca_id = priv->config.hca_attr.vhca_id;
-		peer_info->peer_q = rxq_ctrl->hairpin_conf.peers[0].queue;
-		peer_info->tx_explicit = rxq_ctrl->hairpin_conf.tx_explicit;
-		peer_info->manual_bind = rxq_ctrl->hairpin_conf.manual_bind;
+		peer_info->peer_q = rxq->hairpin_conf.peers[0].queue;
+		peer_info->tx_explicit = rxq->hairpin_conf.tx_explicit;
+		peer_info->manual_bind = rxq->hairpin_conf.manual_bind;
 	}
 	return 0;
 }
@@ -581,20 +581,20 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
 			dev->data->port_id, cur_queue);
 		return -rte_errno;
 	}
-	if (rxq_ctrl->hairpin_status != 0) {
+	if (rxq->hairpin_status != 0) {
 		DRV_LOG(DEBUG, "port %u Rx queue %d is already bound",
 			dev->data->port_id, cur_queue);
 		return 0;
 	}
 	if (peer_info->tx_explicit !=
-	    rxq_ctrl->hairpin_conf.tx_explicit) {
+	    rxq->hairpin_conf.tx_explicit) {
 		rte_errno = EINVAL;
 		DRV_LOG(ERR, "port %u Rx queue %d and peer Tx rule mode"
 			" mismatch", dev->data->port_id, cur_queue);
 		return -rte_errno;
 	}
 	if (peer_info->manual_bind !=
-	    rxq_ctrl->hairpin_conf.manual_bind) {
+	    rxq->hairpin_conf.manual_bind) {
 		rte_errno = EINVAL;
 		DRV_LOG(ERR, "port %u Rx queue %d and peer binding mode"
 			" mismatch", dev->data->port_id, cur_queue);
@@ -606,7 +606,7 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
 		rq_attr.hairpin_peer_vhca = peer_info->vhca_id;
 		ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr);
 		if (ret == 0)
-			rxq_ctrl->hairpin_status = 1;
+			rxq->hairpin_status = 1;
 	}
 	return ret;
 }
@@ -688,7 +688,7 @@ mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue,
 			dev->data->port_id, cur_queue);
 		return -rte_errno;
 	}
-	if (rxq_ctrl->hairpin_status == 0) {
+	if (rxq->hairpin_status == 0) {
 		DRV_LOG(DEBUG, "port %u Rx queue %d is already unbound",
 			dev->data->port_id, cur_queue);
 		return 0;
@@ -703,7 +703,7 @@ mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue,
 		rq_attr.rq_state = MLX5_SQC_STATE_RST;
 		ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr);
 		if (ret == 0)
-			rxq_ctrl->hairpin_status = 0;
+			rxq->hairpin_status = 0;
 	}
 	return ret;
 }
@@ -1041,7 +1041,7 @@ mlx5_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports,
 		rxq_ctrl = rxq->ctrl;
 		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN)
 			continue;
-		pp = rxq_ctrl->hairpin_conf.peers[0].port;
+		pp = rxq->hairpin_conf.peers[0].port;
 		if (pp >= RTE_MAX_ETHPORTS) {
 			rte_errno = ERANGE;
 			DRV_LOG(ERR, "port %hu queue %u peer port "
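
For context, the net effect of the patch is that hairpin state now lives next to the per-queue private data (struct mlx5_rxq_priv) instead of the shareable control structure (struct mlx5_rxq_ctrl), while the queue type is still read through the ctrl back pointer. Below is a minimal, self-contained sketch of that layout; all type and function names here (hairpin_conf, rxq_ctrl, rxq_priv, rxq_get_hairpin_conf, RXQ_TYPE_HAIRPIN) are simplified stand-ins, not the real mlx5 or rte_ethdev definitions.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Simplified stand-in for rte_eth_hairpin_conf. */
struct hairpin_conf {
	uint16_t peer_queue;
	uint8_t  tx_explicit;
	uint8_t  manual_bind;
};

/* Shareable control data: after the patch it carries no hairpin fields. */
struct rxq_ctrl {
	int type; /* e.g. RXQ_TYPE_HAIRPIN */
};

/* Per-queue private data: hairpin info is stored here now. */
struct rxq_priv {
	uint16_t idx;
	struct rxq_ctrl *ctrl;          /* back pointer to the shared ctrl */
	struct hairpin_conf hairpin_conf;
	uint32_t hairpin_status;        /* 0 = unbound, 1 = bound */
};

#define RXQ_TYPE_HAIRPIN 1

/* Mirrors the post-patch lookup: the queue type is checked through the
 * shared ctrl, but the hairpin configuration comes from the private data. */
static const struct hairpin_conf *
rxq_get_hairpin_conf(const struct rxq_priv *rxq)
{
	if (rxq != NULL && rxq->ctrl->type == RXQ_TYPE_HAIRPIN)
		return &rxq->hairpin_conf;
	return NULL;
}

int main(void)
{
	struct rxq_ctrl ctrl = { .type = RXQ_TYPE_HAIRPIN };
	struct rxq_priv rxq = {
		.idx = 0,
		.ctrl = &ctrl,
		.hairpin_conf = { .peer_queue = 3, .manual_bind = 1 },
		.hairpin_status = 0,
	};
	const struct hairpin_conf *conf = rxq_get_hairpin_conf(&rxq);

	if (conf != NULL)
		printf("Rx queue %u peers with Tx queue %u\n",
		       rxq.idx, conf->peer_queue);
	return 0;
}

With this split, several queues can reference one shared ctrl while each keeps its own hairpin peer and binding status, which is why the fields had to leave the control structure.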