From patchwork Fri Jan 27 03:22:11 2023
X-Patchwork-Submitter: Alexander Kozyrev
X-Patchwork-Id: 122599
X-Patchwork-Delegate: rasland@nvidia.com
From: Alexander Kozyrev
Subject: [PATCH] net/mlx5: fix error CQE dumping for vectorized Rx burst
Date: Fri, 27 Jan 2023 05:22:11 +0200
Message-ID: <20230127032211.3990018-1-akozyrev@nvidia.com>
X-Mailer: git-send-email 2.18.2
There is a dump file with debug information created for an error CQE
to help with troubleshooting later. It starts with the last CQE, which
presumably is the error CQE. But this is only true for the scalar Rx
burst routine, since CQEs are handled there one by one and the error is
detected immediately. For vectorized Rx bursts, we may have already
moved past the error CQE by the time the error is detected, since CQEs
are handled in batches there. Go back to the error CQE in this case to
dump the proper CQE.

Fixes: 88c0733535 ("net/mlx5: extend Rx completion with error handling")
Cc: stable@dpdk.org

Signed-off-by: Alexander Kozyrev
Acked-by: Matan Azrad
---
 drivers/net/mlx5/mlx5_rx.c       | 16 +++++++++++-----
 drivers/net/mlx5/mlx5_rx.h       |  3 ++-
 drivers/net/mlx5/mlx5_rxtx_vec.c | 12 +++++++-----
 3 files changed, 20 insertions(+), 11 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index 917c517b83..7612d15f01 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -425,12 +425,14 @@ mlx5_rxq_initialize(struct mlx5_rxq_data *rxq)
  * @param[in] vec
  *   1 when called from vectorized Rx burst, need to prepare mbufs for the RQ.
  *   0 when called from non-vectorized Rx burst.
+ * @param[in] err_n
+ *   Number of CQEs to check for an error.
  *
  * @return
  *   MLX5_RECOVERY_ERROR_RET in case of recovery error, otherwise the CQE status.
  */
 int
-mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec)
+mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec, uint16_t err_n)
 {
 	const uint16_t cqe_n = 1 << rxq->cqe_n;
 	const uint16_t cqe_mask = cqe_n - 1;
@@ -442,13 +444,18 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec)
 		volatile struct mlx5_cqe *cqe;
 		volatile struct mlx5_err_cqe *err_cqe;
 	} u = {
-		.cqe = &(*rxq->cqes)[rxq->cq_ci & cqe_mask],
+		.cqe = &(*rxq->cqes)[(rxq->cq_ci - vec) & cqe_mask],
 	};
 	struct mlx5_mp_arg_queue_state_modify sm;
-	int ret;
+	int ret, i;
 
 	switch (rxq->err_state) {
 	case MLX5_RXQ_ERR_STATE_NO_ERROR:
+		for (i = 0; i < (int)err_n; i++) {
+			u.cqe = &(*rxq->cqes)[(rxq->cq_ci - vec - i) & cqe_mask];
+			if (MLX5_CQE_OPCODE(u.cqe->op_own) == MLX5_CQE_RESP_ERR)
+				break;
+		}
 		rxq->err_state = MLX5_RXQ_ERR_STATE_NEED_RESET;
 		/* Fall-through */
 	case MLX5_RXQ_ERR_STATE_NEED_RESET:
@@ -507,7 +514,6 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec)
 					rxq->elts_ci : rxq->rq_ci;
 			uint32_t elt_idx;
 			struct rte_mbuf **elt;
-			int i;
 			unsigned int n = elts_n - (elts_ci -
 						   rxq->rq_pi);
 
@@ -628,7 +634,7 @@ mlx5_rx_poll_len(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
 	if (unlikely(ret != MLX5_CQE_STATUS_SW_OWN)) {
 		if (unlikely(ret == MLX5_CQE_STATUS_ERR ||
 			     rxq->err_state)) {
-			ret = mlx5_rx_err_handle(rxq, 0);
+			ret = mlx5_rx_err_handle(rxq, 0, 1);
 			if (ret == MLX5_CQE_STATUS_HW_OWN ||
 			    ret == MLX5_RECOVERY_ERROR_RET)
 				return MLX5_ERROR_CQE_RET;
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index e078aaf3dc..4ba53ebc48 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -286,7 +286,8 @@ int mlx5_hrxq_modify(struct rte_eth_dev *dev, uint32_t hxrq_idx,
 uint16_t mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts,
 		       uint16_t pkts_n);
 void mlx5_rxq_initialize(struct mlx5_rxq_data *rxq);
-__rte_noinline int mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec);
+__rte_noinline int mlx5_rx_err_handle(struct mlx5_rxq_data *rxq,
+				      uint8_t vec, uint16_t err_n);
 void mlx5_mprq_buf_free(struct mlx5_mprq_buf *buf);
 uint16_t mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts,
 			    uint16_t pkts_n);
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.c b/drivers/net/mlx5/mlx5_rxtx_vec.c
index 0e2eab068a..c6be2be763 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.c
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.c
@@ -74,7 +74,7 @@ rxq_handle_pending_error(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts,
 	rxq->stats.ipackets -= (pkts_n - n);
 	rxq->stats.ibytes -= err_bytes;
 #endif
-	mlx5_rx_err_handle(rxq, 1);
+	mlx5_rx_err_handle(rxq, 1, pkts_n);
 	return n;
 }
 
@@ -253,8 +253,6 @@ rxq_copy_mprq_mbuf_v(struct mlx5_rxq_data *rxq,
 	}
 	rxq->rq_pi += i;
 	rxq->cq_ci += i;
-	rte_io_wmb();
-	*rxq->cq_db = rte_cpu_to_be_32(rxq->cq_ci);
 	if (rq_ci != rxq->rq_ci) {
 		rxq->rq_ci = rq_ci;
 		rte_io_wmb();
@@ -361,8 +359,6 @@ rxq_burst_v(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts,
 			rxq->decompressed -= n;
 		}
 	}
-	rte_io_wmb();
-	*rxq->cq_db = rte_cpu_to_be_32(rxq->cq_ci);
 	*no_cq = !rcvd_pkt;
 	return rcvd_pkt;
 }
@@ -390,6 +386,7 @@ mlx5_rx_burst_vec(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 	bool no_cq = false;
 
 	do {
+		err = 0;
 		nb_rx = rxq_burst_v(rxq, pkts + tn, pkts_n - tn,
 				    &err, &no_cq);
 		if (unlikely(err | rxq->err_state))
@@ -397,6 +394,8 @@ mlx5_rx_burst_vec(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 		tn += nb_rx;
 		if (unlikely(no_cq))
 			break;
+		rte_io_wmb();
+		*rxq->cq_db = rte_cpu_to_be_32(rxq->cq_ci);
 	} while (tn != pkts_n);
 	return tn;
 }
@@ -524,6 +523,7 @@ mlx5_rx_burst_mprq_vec(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 	bool no_cq = false;
 
 	do {
+		err = 0;
 		nb_rx = rxq_burst_mprq_v(rxq, pkts + tn, pkts_n - tn,
 					 &err, &no_cq);
 		if (unlikely(err | rxq->err_state))
@@ -531,6 +531,8 @@ mlx5_rx_burst_mprq_vec(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 		tn += nb_rx;
 		if (unlikely(no_cq))
 			break;
+		rte_io_wmb();
+		*rxq->cq_db = rte_cpu_to_be_32(rxq->cq_ci);
 	} while (tn != pkts_n);
 	return tn;
 }
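
For readers following the recovery path, the core of the change is the
backward scan in mlx5_rx_err_handle(): instead of dumping whatever CQE the
consumer index currently points at, the handler now steps back over the last
err_n completions to find the one that actually reports an error. Below is a
minimal standalone sketch of that lookup; the toy_cqe type, the TOY_* macros
and the find_error_cqe() helper are illustrative stand-ins, not the driver's
real definitions (the real code uses struct mlx5_cqe, MLX5_CQE_OPCODE() and
MLX5_CQE_RESP_ERR).

#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-ins for the driver's CQE layout. */
struct toy_cqe {
	uint8_t op_own; /* opcode in the high nibble, ownership in the low bits */
};
#define TOY_OPCODE(op_own)	((op_own) >> 4)
#define TOY_RESP_ERR		0xe /* placeholder value, not the PRM constant */

/*
 * Walk backwards over the last err_n entries of a power-of-two CQ ring,
 * starting at cq_ci - vec, and return the index of the first CQE whose
 * opcode reports an error. If none matches, the oldest checked entry is
 * returned, mirroring how the patched loop leaves u.cqe at the last
 * inspected CQE.
 */
static uint16_t
find_error_cqe(const struct toy_cqe *ring, uint16_t cqe_n,
	       uint16_t cq_ci, uint8_t vec, uint16_t err_n)
{
	const uint16_t mask = cqe_n - 1;
	uint16_t idx = (uint16_t)(cq_ci - vec) & mask;
	uint16_t i;

	for (i = 0; i < err_n; i++) {
		idx = (uint16_t)(cq_ci - vec - i) & mask;
		if (TOY_OPCODE(ring[idx].op_own) == TOY_RESP_ERR)
			break;
	}
	return idx;
}

int
main(void)
{
	struct toy_cqe ring[8] = {0};

	/* Pretend the CQE three slots behind the consumer index failed. */
	ring[(10 - 3) & 7].op_own = TOY_RESP_ERR << 4;
	/* A vectorized burst already advanced cq_ci past the error. */
	printf("error CQE at ring index %u\n",
	       find_error_cqe(ring, 8, 10, 1, 4));
	return 0;
}

This also explains the two call sites in the diff: the vectorized path passes
pkts_n as err_n because any CQE in the just-processed batch may be the failing
one, while the scalar path keeps err_n = 1 since it detects the error on the
CQE it is currently handling.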