From patchwork Sat Oct 16 09:12:08 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Xueming(Steven) Li"
X-Patchwork-Id: 101882
X-Patchwork-Delegate: rasland@nvidia.com
From: Xueming Li
To:
CC: , Lior Margalit , Matan Azrad , Viacheslav Ovsiienko
Date: Sat, 16 Oct 2021 17:12:08 +0800
Message-ID: <20211016091214.1831902-9-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20211016091214.1831902-1-xuemingl@nvidia.com>
References: <20210926111904.237736-1-xuemingl@nvidia.com>
 <20211016091214.1831902-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH v2 08/13] net/mlx5: move Rx queue hairpin info to private data
List-Id: DPDK patches and discussions
Sender: "dev"

Hairpin info of an Rx queue can't be shared, so move it to the private
queue data.

Signed-off-by: Xueming Li
---
 drivers/net/mlx5/mlx5_rx.h      |  4 ++--
 drivers/net/mlx5/mlx5_rxq.c     | 13 +++++--------
 drivers/net/mlx5/mlx5_trigger.c | 24 ++++++++++++------------
 3 files changed, 19 insertions(+), 22 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index fe19414c130..2ed544556f5 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -171,8 +171,6 @@ struct mlx5_rxq_ctrl {
 	uint32_t flow_tunnels_n[MLX5_FLOW_TUNNEL]; /* Tunnels counters. */
 	uint32_t wqn; /* WQ number. */
 	uint16_t dump_file_n; /* Number of dump files. */
-	struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */
-	uint32_t hairpin_status; /* Hairpin binding status. */
 };
 
 /* RX queue private data. */
@@ -182,6 +180,8 @@ struct mlx5_rxq_priv {
 	struct mlx5_rxq_ctrl *ctrl; /* Shared Rx Queue. */
 	LIST_ENTRY(mlx5_rxq_priv) owner_entry; /* Entry in shared rxq_ctrl. */
 	struct mlx5_priv *priv; /* Back pointer to private data. */
+	struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */
+	uint32_t hairpin_status; /* Hairpin binding status. */
 };
 
 /* mlx5_rxq.c */
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 17077762ecd..2b9ab7b3fc4 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1649,8 +1649,8 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
 	tmpl->rxq.elts_n = log2above(desc);
 	tmpl->rxq.elts = NULL;
 	tmpl->rxq.mr_ctrl.cache_bh = (struct mlx5_mr_btree) { 0 };
-	tmpl->hairpin_conf = *hairpin_conf;
 	tmpl->rxq.idx = idx;
+	rxq->hairpin_conf = *hairpin_conf;
 	mlx5_rxq_ref(dev, idx);
 	LIST_INSERT_HEAD(&priv->rxqsctrl, tmpl, next);
 	return tmpl;
@@ -1869,14 +1869,11 @@ const struct rte_eth_hairpin_conf *
 mlx5_rxq_get_hairpin_conf(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_ctrl *rxq_ctrl = NULL;
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
 
-	if (idx < priv->rxqs_n && (*priv->rxqs)[idx]) {
-		rxq_ctrl = container_of((*priv->rxqs)[idx],
-					struct mlx5_rxq_ctrl,
-					rxq);
-		if (rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
-			return &rxq_ctrl->hairpin_conf;
+	if (idx < priv->rxqs_n && rxq != NULL) {
+		if (rxq->ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
+			return &rxq->hairpin_conf;
 	}
 	return NULL;
 }
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index a49254c96f6..f376f4d6fc4 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -273,7 +273,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 		}
 		rxq_ctrl = rxq->ctrl;
 		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN ||
-		    rxq_ctrl->hairpin_conf.peers[0].queue != i) {
+		    rxq->hairpin_conf.peers[0].queue != i) {
 			rte_errno = ENOMEM;
 			DRV_LOG(ERR, "port %u Tx queue %d can't be binded to "
 				"Rx queue %d", dev->data->port_id,
@@ -303,7 +303,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 		if (ret)
 			goto error;
 		/* Qs with auto-bind will be destroyed directly. */
-		rxq_ctrl->hairpin_status = 1;
+		rxq->hairpin_status = 1;
 		txq_ctrl->hairpin_status = 1;
 		mlx5_txq_release(dev, i);
 	}
@@ -406,9 +406,9 @@ mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue,
 		}
 		peer_info->qp_id = rxq_ctrl->obj->rq->id;
 		peer_info->vhca_id = priv->config.hca_attr.vhca_id;
-		peer_info->peer_q = rxq_ctrl->hairpin_conf.peers[0].queue;
-		peer_info->tx_explicit = rxq_ctrl->hairpin_conf.tx_explicit;
-		peer_info->manual_bind = rxq_ctrl->hairpin_conf.manual_bind;
+		peer_info->peer_q = rxq->hairpin_conf.peers[0].queue;
+		peer_info->tx_explicit = rxq->hairpin_conf.tx_explicit;
+		peer_info->manual_bind = rxq->hairpin_conf.manual_bind;
 	}
 	return 0;
 }
@@ -530,20 +530,20 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
 				dev->data->port_id, cur_queue);
 			return -rte_errno;
 		}
-		if (rxq_ctrl->hairpin_status != 0) {
+		if (rxq->hairpin_status != 0) {
 			DRV_LOG(DEBUG, "port %u Rx queue %d is already bound",
 				dev->data->port_id, cur_queue);
 			return 0;
 		}
 		if (peer_info->tx_explicit !=
-		    rxq_ctrl->hairpin_conf.tx_explicit) {
+		    rxq->hairpin_conf.tx_explicit) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u Rx queue %d and peer Tx rule mode"
 				" mismatch", dev->data->port_id, cur_queue);
 			return -rte_errno;
 		}
 		if (peer_info->manual_bind !=
-		    rxq_ctrl->hairpin_conf.manual_bind) {
+		    rxq->hairpin_conf.manual_bind) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u Rx queue %d and peer binding mode"
 				" mismatch", dev->data->port_id, cur_queue);
@@ -555,7 +555,7 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
 		rq_attr.hairpin_peer_vhca = peer_info->vhca_id;
 		ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr);
 		if (ret == 0)
-			rxq_ctrl->hairpin_status = 1;
+			rxq->hairpin_status = 1;
 	}
 	return ret;
 }
@@ -637,7 +637,7 @@ mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue,
 				dev->data->port_id, cur_queue);
 			return -rte_errno;
 		}
-		if (rxq_ctrl->hairpin_status == 0) {
+		if (rxq->hairpin_status == 0) {
 			DRV_LOG(DEBUG, "port %u Rx queue %d is already unbound",
 				dev->data->port_id, cur_queue);
 			return 0;
@@ -652,7 +652,7 @@ mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue,
 		rq_attr.rq_state = MLX5_SQC_STATE_RST;
 		ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr);
 		if (ret == 0)
-			rxq_ctrl->hairpin_status = 0;
+			rxq->hairpin_status = 0;
 	}
 	return ret;
 }
@@ -990,7 +990,7 @@ mlx5_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports,
 		rxq_ctrl = rxq->ctrl;
 		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN)
 			continue;
-		pp = rxq_ctrl->hairpin_conf.peers[0].port;
+		pp = rxq->hairpin_conf.peers[0].port;
 		if (pp >= RTE_MAX_ETHPORTS) {
 			rte_errno = ERANGE;
 			DRV_LOG(ERR, "port %hu queue %u peer port "