From patchwork Wed Feb 23 18:48:33 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Michael Baum
X-Patchwork-Id: 108195
X-Patchwork-Delegate: rasland@nvidia.com
From: Michael Baum
To:
CC: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
Subject: [PATCH v2 4/6] net/mlx5: optimize RxQ/TxQ control structure
Date: Wed, 23 Feb 2022 20:48:33 +0200
Message-ID: <20220223184835.3061161-5-michaelba@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220223184835.3061161-1-michaelba@nvidia.com>
References: <20220222210416.2669519-1-michaelba@nvidia.com>
 <20220223184835.3061161-1-michaelba@nvidia.com>
MIME-Version: 1.0
List-Id: DPDK patches and discussions

The RxQ/TxQ control structure has a field named "type". This field is an
enum whose only meaningful values are standard and hairpin, and its sole
use is to check whether a queue is a hairpin queue or a standard one.

This patch replaces the enum with a boolean "is_hairpin" flag.

Signed-off-by: Michael Baum
---
 drivers/net/mlx5/mlx5_devx.c    | 26 ++++++++++--------------
 drivers/net/mlx5/mlx5_ethdev.c  |  2 +-
 drivers/net/mlx5/mlx5_flow.c    | 14 ++++++-------
 drivers/net/mlx5/mlx5_flow_dv.c | 14 +++++--------
 drivers/net/mlx5/mlx5_rx.h      | 13 +++---------
 drivers/net/mlx5/mlx5_rxq.c     | 33 +++++++++++-------------------
 drivers/net/mlx5/mlx5_trigger.c | 36 ++++++++++++++++-----------------
 drivers/net/mlx5/mlx5_tx.h      |  7 +------
 drivers/net/mlx5/mlx5_txq.c     | 14 ++++++-------
 9 files changed, 64 insertions(+), 95 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index a9b8c2a1b7..e4bc90a30e 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -88,7 +88,7 @@ mlx5_devx_modify_rq(struct mlx5_rxq_priv *rxq, uint8_t type)
 	default:
 		break;
 	}
-	if (rxq->ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
+	if (rxq->ctrl->is_hairpin)
 		return mlx5_devx_cmd_modify_rq(rxq->ctrl->obj->rq, &rq_attr);
 	return mlx5_devx_cmd_modify_rq(rxq->devx_rq.rq, &rq_attr);
 }
@@ -162,7 +162,7 @@ mlx5_rxq_devx_obj_release(struct mlx5_rxq_priv *rxq)
 
 	if (rxq_obj == NULL)
 		return;
-	if (rxq_obj->rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN) {
+	if (rxq_obj->rxq_ctrl->is_hairpin) {
 		if (rxq_obj->rq == NULL)
 			return;
 		mlx5_devx_modify_rq(rxq, MLX5_RXQ_MOD_RDY2RST);
@@ -476,7 +476,7 @@ mlx5_rxq_devx_obj_new(struct mlx5_rxq_priv *rxq)
 
 	MLX5_ASSERT(rxq_data);
 	MLX5_ASSERT(tmpl);
-	if (rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
+	if (rxq_ctrl->is_hairpin)
 		return mlx5_rxq_obj_hairpin_new(rxq);
 	tmpl->rxq_ctrl = rxq_ctrl;
 	if (rxq_ctrl->irq && !rxq_ctrl->started) {
@@ -583,7 +583,7 @@ mlx5_devx_ind_table_create_rqt_attr(struct rte_eth_dev *dev,
 			struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, queues[i]);
 
 			MLX5_ASSERT(rxq != NULL);
-			if (rxq->ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
+			if (rxq->ctrl->is_hairpin)
 				rqt_attr->rq_list[i] = rxq->ctrl->obj->rq->id;
 			else
 				rqt_attr->rq_list[i] = rxq->devx_rq.rq->id;
@@ -706,17 +706,13 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,
 		       int tunnel, struct mlx5_devx_tir_attr *tir_attr)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	enum mlx5_rxq_type rxq_obj_type;
+	bool is_hairpin;
 	bool lro = true;
 	uint32_t i;
 
 	/* NULL queues designate drop queue. */
 	if (ind_tbl->queues != NULL) {
-		struct mlx5_rxq_ctrl *rxq_ctrl =
-				mlx5_rxq_ctrl_get(dev, ind_tbl->queues[0]);
-		rxq_obj_type = rxq_ctrl != NULL ? rxq_ctrl->type :
-				MLX5_RXQ_TYPE_STANDARD;
-
+		is_hairpin = mlx5_rxq_is_hairpin(dev, ind_tbl->queues[0]);
 		/* Enable TIR LRO only if all the queues were configured for. */
 		for (i = 0; i < ind_tbl->queues_n; ++i) {
 			struct mlx5_rxq_data *rxq_i =
@@ -728,7 +724,7 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,
 			}
 		}
 	} else {
-		rxq_obj_type = priv->drop_queue.rxq->ctrl->type;
+		is_hairpin = priv->drop_queue.rxq->ctrl->is_hairpin;
 	}
 	memset(tir_attr, 0, sizeof(*tir_attr));
 	tir_attr->disp_type = MLX5_TIRC_DISP_TYPE_INDIRECT;
@@ -759,7 +755,7 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,
 			(!!(hash_fields & MLX5_L4_DST_IBV_RX_HASH)) <<
 			 MLX5_RX_HASH_FIELD_SELECT_SELECTED_FIELDS_L4_DPORT;
 	}
-	if (rxq_obj_type == MLX5_RXQ_TYPE_HAIRPIN)
+	if (is_hairpin)
 		tir_attr->transport_domain = priv->sh->td->id;
 	else
 		tir_attr->transport_domain = priv->sh->tdn;
@@ -932,7 +928,7 @@ mlx5_rxq_devx_obj_drop_create(struct rte_eth_dev *dev)
 		goto error;
 	}
 	rxq_obj->rxq_ctrl = rxq_ctrl;
-	rxq_ctrl->type = MLX5_RXQ_TYPE_STANDARD;
+	rxq_ctrl->is_hairpin = false;
 	rxq_ctrl->sh = priv->sh;
 	rxq_ctrl->obj = rxq_obj;
 	rxq->ctrl = rxq_ctrl;
@@ -1232,7 +1228,7 @@ mlx5_txq_devx_obj_new(struct rte_eth_dev *dev, uint16_t idx)
 	struct mlx5_txq_ctrl *txq_ctrl =
 			container_of(txq_data, struct mlx5_txq_ctrl, txq);
 
-	if (txq_ctrl->type == MLX5_TXQ_TYPE_HAIRPIN)
+	if (txq_ctrl->is_hairpin)
 		return mlx5_txq_obj_hairpin_new(dev, idx);
 #if !defined(HAVE_MLX5DV_DEVX_UAR_OFFSET) && defined(HAVE_INFINIBAND_VERBS_H)
 	DRV_LOG(ERR, "Port %u Tx queue %u cannot create with DevX, no UAR.",
@@ -1371,7 +1367,7 @@ void
 mlx5_txq_devx_obj_release(struct mlx5_txq_obj *txq_obj)
 {
 	MLX5_ASSERT(txq_obj);
-	if (txq_obj->txq_ctrl->type == MLX5_TXQ_TYPE_HAIRPIN) {
+	if (txq_obj->txq_ctrl->is_hairpin) {
 		if (txq_obj->tis)
 			claim_zero(mlx5_devx_cmd_destroy(txq_obj->tis));
 #if defined(HAVE_MLX5DV_DEVX_UAR_OFFSET) || !defined(HAVE_INFINIBAND_VERBS_H)
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 72bf8ac914..406761ccf8 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -173,7 +173,7 @@ mlx5_dev_configure_rss_reta(struct rte_eth_dev *dev)
 	for (i = 0, j = 0; i < rxqs_n; i++) {
 		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i);
 
-		if (rxq_ctrl && rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD)
+		if (rxq_ctrl && !rxq_ctrl->is_hairpin)
 			rss_queue_arr[j++] = i;
 	}
 	rss_queue_n = j;
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index a87ac8e6d7..58f0aba294 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1676,7 +1676,7 @@ mlx5_validate_rss_queues(struct rte_eth_dev *dev,
 			 const char **error, uint32_t *queue_idx)
 {
 	const struct mlx5_priv *priv = dev->data->dev_private;
-	enum mlx5_rxq_type rxq_type = MLX5_RXQ_TYPE_UNDEFINED;
+	bool is_hairpin = false;
 	uint32_t i;
 
 	for (i = 0; i != queues_n; ++i) {
@@ -1693,9 +1693,9 @@ mlx5_validate_rss_queues(struct rte_eth_dev *dev,
 			*queue_idx = i;
 			return -EINVAL;
 		}
-		if (i == 0)
-			rxq_type = rxq_ctrl->type;
-		if (rxq_type != rxq_ctrl->type) {
+		if (i == 0 && rxq_ctrl->is_hairpin)
+			is_hairpin = true;
+		if (is_hairpin != rxq_ctrl->is_hairpin) {
 			*error = "combining hairpin and regular RSS queues is not supported";
 			*queue_idx = i;
 			return -ENOTSUP;
@@ -5767,15 +5767,13 @@ flow_create_split_metadata(struct rte_eth_dev *dev,
 			const struct rte_flow_action_queue *queue;
 
 			queue = qrss->conf;
-			if (mlx5_rxq_get_type(dev, queue->index) ==
-			    MLX5_RXQ_TYPE_HAIRPIN)
+			if (mlx5_rxq_is_hairpin(dev, queue->index))
 				qrss = NULL;
 		} else if (qrss->type == RTE_FLOW_ACTION_TYPE_RSS) {
 			const struct rte_flow_action_rss *rss;
 
 			rss = qrss->conf;
-			if (mlx5_rxq_get_type(dev, rss->queue[0]) ==
-			    MLX5_RXQ_TYPE_HAIRPIN)
+			if (mlx5_rxq_is_hairpin(dev, rss->queue[0]))
 				qrss = NULL;
 		}
 	}
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index abd1c27538..c4cd5c894b 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -5771,8 +5771,7 @@ flow_dv_validate_action_sample(uint64_t *action_flags,
 	}
 	/* Continue validation for Xcap actions.*/
 	if ((sub_action_flags & MLX5_FLOW_XCAP_ACTIONS) &&
-	    (queue_index == 0xFFFF ||
-	     mlx5_rxq_get_type(dev, queue_index) != MLX5_RXQ_TYPE_HAIRPIN)) {
+	    (queue_index == 0xFFFF || !mlx5_rxq_is_hairpin(dev, queue_index))) {
 		if ((sub_action_flags & MLX5_FLOW_XCAP_ACTIONS) ==
 		     MLX5_FLOW_XCAP_ACTIONS)
 			return rte_flow_error_set(error, ENOTSUP,
@@ -7957,8 +7956,7 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 	 */
 	if ((action_flags & (MLX5_FLOW_XCAP_ACTIONS |
 			     MLX5_FLOW_VLAN_ACTIONS)) &&
-	    (queue_index == 0xFFFF ||
-	     mlx5_rxq_get_type(dev, queue_index) != MLX5_RXQ_TYPE_HAIRPIN ||
+	    (queue_index == 0xFFFF || !mlx5_rxq_is_hairpin(dev, queue_index) ||
 	     ((conf = mlx5_rxq_get_hairpin_conf(dev, queue_index)) != NULL &&
 	      conf->tx_explicit != 0))) {
 		if ((action_flags & MLX5_FLOW_XCAP_ACTIONS) ==
@@ -10948,10 +10946,8 @@ flow_dv_translate_item_tx_queue(struct rte_eth_dev *dev,
 {
 	const struct mlx5_rte_flow_item_tx_queue *queue_m;
 	const struct mlx5_rte_flow_item_tx_queue *queue_v;
-	void *misc_m =
-		MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters);
-	void *misc_v =
-		MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
+	void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters);
+	void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
 	struct mlx5_txq_ctrl *txq;
 	uint32_t queue, mask;
 
@@ -10962,7 +10958,7 @@ flow_dv_translate_item_tx_queue(struct rte_eth_dev *dev,
 	txq = mlx5_txq_get(dev, queue_v->queue);
 	if (!txq)
 		return;
-	if (txq->type == MLX5_TXQ_TYPE_HAIRPIN)
+	if (txq->is_hairpin)
 		queue = txq->obj->sq->id;
 	else
 		queue = txq->obj->sq_obj.sq->id;
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 38335fd744..1fdf4ff161 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -141,12 +141,6 @@ struct mlx5_rxq_data {
 	/* Buffer split segment descriptions - sizes, offsets, pools. */
 } __rte_cache_aligned;
 
-enum mlx5_rxq_type {
-	MLX5_RXQ_TYPE_STANDARD, /* Standard Rx queue. */
-	MLX5_RXQ_TYPE_HAIRPIN, /* Hairpin Rx queue. */
-	MLX5_RXQ_TYPE_UNDEFINED,
-};
-
 /* RX queue control descriptor. */
 struct mlx5_rxq_ctrl {
 	struct mlx5_rxq_data rxq; /* Data path structure. */
@@ -154,7 +148,7 @@ struct mlx5_rxq_ctrl {
 	LIST_HEAD(priv, mlx5_rxq_priv) owners; /* Owner rxq list. */
 	struct mlx5_rxq_obj *obj; /* Verbs/DevX elements. */
 	struct mlx5_dev_ctx_shared *sh; /* Shared context. */
-	enum mlx5_rxq_type type; /* Rxq type. */
+	bool is_hairpin; /* Whether RxQ type is Hairpin. */
 	unsigned int socket; /* CPU socket ID for allocations. */
 	LIST_ENTRY(mlx5_rxq_ctrl) share_entry; /* Entry in shared RXQ list. */
 	uint32_t share_group; /* Group ID of shared RXQ. */
@@ -253,7 +247,7 @@ uint32_t mlx5_hrxq_get(struct rte_eth_dev *dev,
 		       struct mlx5_flow_rss_desc *rss_desc);
 int mlx5_hrxq_release(struct rte_eth_dev *dev, uint32_t hxrq_idx);
 uint32_t mlx5_hrxq_verify(struct rte_eth_dev *dev);
-enum mlx5_rxq_type mlx5_rxq_get_type(struct rte_eth_dev *dev, uint16_t idx);
+bool mlx5_rxq_is_hairpin(struct rte_eth_dev *dev, uint16_t idx);
 const struct rte_eth_hairpin_conf *mlx5_rxq_get_hairpin_conf
 	(struct rte_eth_dev *dev, uint16_t idx);
 struct mlx5_hrxq *mlx5_drop_action_create(struct rte_eth_dev *dev);
@@ -627,8 +621,7 @@ mlx5_mprq_enabled(struct rte_eth_dev *dev)
 	for (i = 0; i < priv->rxqs_n; ++i) {
 		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i);
 
-		if (rxq_ctrl == NULL ||
-		    rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
+		if (rxq_ctrl == NULL || rxq_ctrl->is_hairpin)
 			continue;
 		n_ibv++;
 		if (mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq))
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 809006f66a..796497ab1a 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1391,8 +1391,7 @@ mlx5_mprq_alloc_mp(struct rte_eth_dev *dev)
 		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i);
 		struct mlx5_rxq_data *rxq;
 
-		if (rxq_ctrl == NULL ||
-		    rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
+		if (rxq_ctrl == NULL || rxq_ctrl->is_hairpin)
 			continue;
 		rxq = &rxq_ctrl->rxq;
 		n_ibv++;
@@ -1480,8 +1479,7 @@ mlx5_mprq_alloc_mp(struct rte_eth_dev *dev)
 	for (i = 0; i != priv->rxqs_n; ++i) {
 		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i);
 
-		if (rxq_ctrl == NULL ||
-		    rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
+		if (rxq_ctrl == NULL || rxq_ctrl->is_hairpin)
 			continue;
 		rxq_ctrl->rxq.mprq_mp = mp;
 	}
@@ -1798,7 +1796,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		rte_errno = ENOSPC;
 		goto error;
 	}
-	tmpl->type = MLX5_RXQ_TYPE_STANDARD;
+	tmpl->is_hairpin = false;
 	if (mlx5_mr_ctrl_init(&tmpl->rxq.mr_ctrl,
 			      &priv->sh->cdev->mr_scache.dev_gen, socket)) {
 		/* rte_errno is already set. */
@@ -1969,7 +1967,7 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
 	LIST_INIT(&tmpl->owners);
 	rxq->ctrl = tmpl;
 	LIST_INSERT_HEAD(&tmpl->owners, rxq, owner_entry);
-	tmpl->type = MLX5_RXQ_TYPE_HAIRPIN;
+	tmpl->is_hairpin = true;
 	tmpl->socket = SOCKET_ID_ANY;
 	tmpl->rxq.rss_hash = 0;
 	tmpl->rxq.port_id = dev->data->port_id;
@@ -2120,7 +2118,7 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 			mlx5_free(rxq_ctrl->obj);
 			rxq_ctrl->obj = NULL;
 		}
-		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
+		if (!rxq_ctrl->is_hairpin) {
 			if (!rxq_ctrl->started)
 				rxq_free_elts(rxq_ctrl);
 			dev->data->rx_queue_state[idx] =
@@ -2129,7 +2127,7 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 	} else { /* Refcnt zero, closing device. */
 		LIST_REMOVE(rxq, owner_entry);
 		if (LIST_EMPTY(&rxq_ctrl->owners)) {
-			if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD)
+			if (!rxq_ctrl->is_hairpin)
 				mlx5_mr_btree_free
 					(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
 			if (rxq_ctrl->rxq.shared)
@@ -2169,7 +2167,7 @@ mlx5_rxq_verify(struct rte_eth_dev *dev)
 }
 
 /**
- * Get a Rx queue type.
+ * Check whether RxQ type is Hairpin.
  *
  * @param dev
  *   Pointer to Ethernet device.
@@ -2177,17 +2175,15 @@ mlx5_rxq_verify(struct rte_eth_dev *dev)
  *   Rx queue index.
  *
  * @return
- *   The Rx queue type.
+ *   True if Rx queue type is Hairpin, otherwise False.
  */
-enum mlx5_rxq_type
-mlx5_rxq_get_type(struct rte_eth_dev *dev, uint16_t idx)
+bool
+mlx5_rxq_is_hairpin(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, idx);
 
-	if (idx < priv->rxqs_n && rxq_ctrl != NULL)
-		return rxq_ctrl->type;
-	return MLX5_RXQ_TYPE_UNDEFINED;
+	return (idx < priv->rxqs_n && rxq_ctrl != NULL && rxq_ctrl->is_hairpin);
 }
 
 /*
@@ -2204,14 +2200,9 @@ mlx5_rxq_get_type(struct rte_eth_dev *dev, uint16_t idx)
 const struct rte_eth_hairpin_conf *
 mlx5_rxq_get_hairpin_conf(struct rte_eth_dev *dev, uint16_t idx)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
 
-	if (idx < priv->rxqs_n && rxq != NULL) {
-		if (rxq->ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
-			return &rxq->hairpin_conf;
-	}
-	return NULL;
+	return mlx5_rxq_is_hairpin(dev, idx) ? &rxq->hairpin_conf : NULL;
 }
 
 /**
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 74c3bc8a13..fe8b42c414 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -59,7 +59,7 @@ mlx5_txq_start(struct rte_eth_dev *dev)
 
 		if (!txq_ctrl)
 			continue;
-		if (txq_ctrl->type == MLX5_TXQ_TYPE_STANDARD)
+		if (!txq_ctrl->is_hairpin)
 			txq_alloc_elts(txq_ctrl);
 		MLX5_ASSERT(!txq_ctrl->obj);
 		txq_ctrl->obj = mlx5_malloc(flags, sizeof(struct mlx5_txq_obj),
@@ -77,7 +77,7 @@ mlx5_txq_start(struct rte_eth_dev *dev)
 			txq_ctrl->obj = NULL;
 			goto error;
 		}
-		if (txq_ctrl->type == MLX5_TXQ_TYPE_STANDARD) {
+		if (!txq_ctrl->is_hairpin) {
 			size_t size = txq_data->cqe_s * sizeof(*txq_data->fcqs);
 
 			txq_data->fcqs = mlx5_malloc(flags, size,
@@ -167,7 +167,7 @@ mlx5_rxq_ctrl_prepare(struct rte_eth_dev *dev, struct mlx5_rxq_ctrl *rxq_ctrl,
 {
 	int ret = 0;
 
-	if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
+	if (!rxq_ctrl->is_hairpin) {
 		/*
 		 * Pre-register the mempools. Regardless of whether
 		 * the implicit registration is enabled or not,
@@ -280,7 +280,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 		txq_ctrl = mlx5_txq_get(dev, i);
 		if (!txq_ctrl)
 			continue;
-		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN ||
+		if (!txq_ctrl->is_hairpin ||
 		    txq_ctrl->hairpin_conf.peers[0].port != self_port) {
 			mlx5_txq_release(dev, i);
 			continue;
@@ -299,7 +299,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 		if (!txq_ctrl)
 			continue;
 		/* Skip hairpin queues with other peer ports. */
-		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN ||
+		if (!txq_ctrl->is_hairpin ||
 		    txq_ctrl->hairpin_conf.peers[0].port != self_port) {
 			mlx5_txq_release(dev, i);
 			continue;
@@ -322,7 +322,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 			return -rte_errno;
 		}
 		rxq_ctrl = rxq->ctrl;
-		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN ||
+		if (!rxq_ctrl->is_hairpin ||
 		    rxq->hairpin_conf.peers[0].queue != i) {
 			rte_errno = ENOMEM;
 			DRV_LOG(ERR, "port %u Tx queue %d can't be binded to "
@@ -412,7 +412,7 @@ mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue,
 			dev->data->port_id, peer_queue);
 		return -rte_errno;
 	}
-	if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+	if (!txq_ctrl->is_hairpin) {
 		rte_errno = EINVAL;
 		DRV_LOG(ERR, "port %u queue %d is not a hairpin Txq",
 			dev->data->port_id, peer_queue);
@@ -444,7 +444,7 @@ mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue,
 		return -rte_errno;
 	}
 	rxq_ctrl = rxq->ctrl;
-	if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
+	if (!rxq_ctrl->is_hairpin) {
 		rte_errno = EINVAL;
 		DRV_LOG(ERR, "port %u queue %d is not a hairpin Rxq",
 			dev->data->port_id, peer_queue);
@@ -510,7 +510,7 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
 			dev->data->port_id, cur_queue);
 		return -rte_errno;
 	}
-	if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+	if (!txq_ctrl->is_hairpin) {
 		rte_errno = EINVAL;
 		DRV_LOG(ERR, "port %u queue %d not a hairpin Txq",
 			dev->data->port_id, cur_queue);
@@ -570,7 +570,7 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
 		return -rte_errno;
 	}
 	rxq_ctrl = rxq->ctrl;
-	if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
+	if (!rxq_ctrl->is_hairpin) {
 		rte_errno = EINVAL;
 		DRV_LOG(ERR, "port %u queue %d not a hairpin Rxq",
 			dev->data->port_id, cur_queue);
@@ -644,7 +644,7 @@ mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue,
 			dev->data->port_id, cur_queue);
 		return -rte_errno;
 	}
-	if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+	if (!txq_ctrl->is_hairpin) {
 		rte_errno = EINVAL;
 		DRV_LOG(ERR, "port %u queue %d not a hairpin Txq",
 			dev->data->port_id, cur_queue);
@@ -683,7 +683,7 @@ mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue,
 		return -rte_errno;
 	}
 	rxq_ctrl = rxq->ctrl;
-	if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
+	if (!rxq_ctrl->is_hairpin) {
 		rte_errno = EINVAL;
 		DRV_LOG(ERR, "port %u queue %d not a hairpin Rxq",
 			dev->data->port_id, cur_queue);
@@ -751,7 +751,7 @@ mlx5_hairpin_bind_single_port(struct rte_eth_dev *dev, uint16_t rx_port)
 		txq_ctrl = mlx5_txq_get(dev, i);
 		if (txq_ctrl == NULL)
 			continue;
-		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+		if (!txq_ctrl->is_hairpin) {
 			mlx5_txq_release(dev, i);
 			continue;
 		}
@@ -791,7 +791,7 @@ mlx5_hairpin_bind_single_port(struct rte_eth_dev *dev, uint16_t rx_port)
 		txq_ctrl = mlx5_txq_get(dev, i);
 		if (txq_ctrl == NULL)
 			continue;
-		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+		if (!txq_ctrl->is_hairpin) {
 			mlx5_txq_release(dev, i);
 			continue;
 		}
@@ -886,7 +886,7 @@ mlx5_hairpin_unbind_single_port(struct rte_eth_dev *dev, uint16_t rx_port)
 		txq_ctrl = mlx5_txq_get(dev, i);
 		if (txq_ctrl == NULL)
 			continue;
-		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+		if (!txq_ctrl->is_hairpin) {
 			mlx5_txq_release(dev, i);
 			continue;
 		}
@@ -1016,7 +1016,7 @@ mlx5_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports,
 		txq_ctrl = mlx5_txq_get(dev, i);
 		if (!txq_ctrl)
 			continue;
-		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+		if (!txq_ctrl->is_hairpin) {
 			mlx5_txq_release(dev, i);
 			continue;
 		}
@@ -1040,7 +1040,7 @@ mlx5_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports,
 		if (rxq == NULL)
 			continue;
 		rxq_ctrl = rxq->ctrl;
-		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN)
+		if (!rxq_ctrl->is_hairpin)
 			continue;
 		pp = rxq->hairpin_conf.peers[0].port;
 		if (pp >= RTE_MAX_ETHPORTS) {
@@ -1318,7 +1318,7 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 		if (!txq_ctrl)
 			continue;
 		/* Only Tx implicit mode requires the default Tx flow. */
-		if (txq_ctrl->type == MLX5_TXQ_TYPE_HAIRPIN &&
+		if (txq_ctrl->is_hairpin &&
 		    txq_ctrl->hairpin_conf.tx_explicit == 0 &&
 		    txq_ctrl->hairpin_conf.peers[0].port ==
 		    priv->dev_data->port_id) {
diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index 0adc3f4839..89dac0c65a 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -169,17 +169,12 @@ struct mlx5_txq_data {
 	/* Storage for queued packets, must be the last field. */
 } __rte_cache_aligned;
 
-enum mlx5_txq_type {
-	MLX5_TXQ_TYPE_STANDARD, /* Standard Tx queue. */
-	MLX5_TXQ_TYPE_HAIRPIN, /* Hairpin Tx queue. */
-};
-
 /* TX queue control descriptor. */
 struct mlx5_txq_ctrl {
 	LIST_ENTRY(mlx5_txq_ctrl) next; /* Pointer to the next element. */
 	uint32_t refcnt; /* Reference counter. */
 	unsigned int socket; /* CPU socket ID for allocations. */
-	enum mlx5_txq_type type; /* The txq ctrl type. */
+	bool is_hairpin; /* Whether TxQ type is Hairpin. */
 	unsigned int max_inline_data; /* Max inline data. */
 	unsigned int max_tso_header; /* Max TSO header size. */
 	struct mlx5_txq_obj *obj; /* Verbs/DevX queue object. */
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index f128c3d1a5..0140f8b3b2 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -527,7 +527,7 @@ txq_uar_init_secondary(struct mlx5_txq_ctrl *txq_ctrl, int fd)
 		return -rte_errno;
 	}
 
-	if (txq_ctrl->type != MLX5_TXQ_TYPE_STANDARD)
+	if (txq_ctrl->is_hairpin)
 		return 0;
 	MLX5_ASSERT(ppriv);
 	/*
@@ -570,7 +570,7 @@ txq_uar_uninit_secondary(struct mlx5_txq_ctrl *txq_ctrl)
 		rte_errno = ENOMEM;
 	}
 
-	if (txq_ctrl->type != MLX5_TXQ_TYPE_STANDARD)
+	if (txq_ctrl->is_hairpin)
 		return;
 	addr = ppriv->uar_table[txq_ctrl->txq.idx].db;
 	rte_mem_unmap(RTE_PTR_ALIGN_FLOOR(addr, page_size), page_size);
@@ -631,7 +631,7 @@ mlx5_tx_uar_init_secondary(struct rte_eth_dev *dev, int fd)
 			continue;
 		txq = (*priv->txqs)[i];
 		txq_ctrl = container_of(txq, struct mlx5_txq_ctrl, txq);
-		if (txq_ctrl->type != MLX5_TXQ_TYPE_STANDARD)
+		if (txq_ctrl->is_hairpin)
 			continue;
 		MLX5_ASSERT(txq->idx == (uint16_t)i);
 		ret = txq_uar_init_secondary(txq_ctrl, fd);
@@ -1107,7 +1107,7 @@ mlx5_txq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		goto error;
 	}
 	__atomic_fetch_add(&tmpl->refcnt, 1, __ATOMIC_RELAXED);
-	tmpl->type = MLX5_TXQ_TYPE_STANDARD;
+	tmpl->is_hairpin = false;
 	LIST_INSERT_HEAD(&priv->txqsctrl, tmpl, next);
 	return tmpl;
 error:
@@ -1150,7 +1150,7 @@ mlx5_txq_hairpin_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	tmpl->txq.port_id = dev->data->port_id;
 	tmpl->txq.idx = idx;
 	tmpl->hairpin_conf = *hairpin_conf;
-	tmpl->type = MLX5_TXQ_TYPE_HAIRPIN;
+	tmpl->is_hairpin = true;
 	__atomic_fetch_add(&tmpl->refcnt, 1, __ATOMIC_RELAXED);
 	LIST_INSERT_HEAD(&priv->txqsctrl, tmpl, next);
 	return tmpl;
@@ -1209,7 +1209,7 @@ mlx5_txq_release(struct rte_eth_dev *dev, uint16_t idx)
 		mlx5_free(txq_ctrl->obj);
 		txq_ctrl->obj = NULL;
 	}
-	if (txq_ctrl->type == MLX5_TXQ_TYPE_STANDARD) {
+	if (!txq_ctrl->is_hairpin) {
 		if (txq_ctrl->txq.fcqs) {
 			mlx5_free(txq_ctrl->txq.fcqs);
 			txq_ctrl->txq.fcqs = NULL;
@@ -1218,7 +1218,7 @@ mlx5_txq_release(struct rte_eth_dev *dev, uint16_t idx)
 		dev->data->tx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STOPPED;
 	}
 	if (!__atomic_load_n(&txq_ctrl->refcnt, __ATOMIC_RELAXED)) {
-		if (txq_ctrl->type == MLX5_TXQ_TYPE_STANDARD)
+		if (!txq_ctrl->is_hairpin)
 			mlx5_mr_btree_free(&txq_ctrl->txq.mr_ctrl.cache_bh);
 		LIST_REMOVE(txq_ctrl, next);
 		mlx5_free(txq_ctrl);