From patchwork Fri Jan 27 03:22:43 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alexander Kozyrev
X-Patchwork-Id: 122600
X-Patchwork-Delegate: rasland@nvidia.com
From: Alexander Kozyrev
Subject: [PATCH] net/mlx5: ignore non-critical syndromes for Rx queue
Date: Fri, 27 Jan 2023 05:22:43 +0200
Message-ID: <20230127032243.3990099-1-akozyrev@nvidia.com>
X-Mailer: git-send-email 2.18.2
For non-fatal syndromes like LOCAL_LENGTH_ERR, an Rx queue reset
shouldn't be triggered: the Rx queue can continue with the following
packets without any recovery. Only three syndromes warrant an Rx queue
reset: LOCAL_QP_OP_ERR, LOCAL_PROT_ERR and WR_FLUSH_ERR.
Do not initiate an Rx queue reset in any other case.
Skip all non-critical error CQEs and continue with packet processing.

Fixes: 88c0733535 ("net/mlx5: extend Rx completion with error handling")
Cc: stable@dpdk.org

Signed-off-by: Alexander Kozyrev
Acked-by: Matan Azrad
---
 drivers/net/mlx5/mlx5_rx.c       | 123 ++++++++++++++++++++++++-------
 drivers/net/mlx5/mlx5_rx.h       |   5 +-
 drivers/net/mlx5/mlx5_rxtx_vec.c |   3 +-
 3 files changed, 102 insertions(+), 29 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index 7612d15f01..99a08ef5f1 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -39,7 +39,8 @@ rxq_cq_to_pkt_type(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
 
 static __rte_always_inline int
 mlx5_rx_poll_len(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
-                 uint16_t cqe_cnt, volatile struct mlx5_mini_cqe8 **mcqe);
+                 uint16_t cqe_cnt, volatile struct mlx5_mini_cqe8 **mcqe,
+                 uint16_t *skip_cnt, bool mprq);
 
 static __rte_always_inline uint32_t
 rxq_cq_to_ol_flags(volatile struct mlx5_cqe *cqe);
@@ -408,10 +409,14 @@ mlx5_rxq_initialize(struct mlx5_rxq_data *rxq)
     *rxq->rq_db = rte_cpu_to_be_32(rxq->rq_ci);
 }
 
+#define MLX5_ERROR_CQE_MASK 0x40000000
 /* Must be negative. */
-#define MLX5_ERROR_CQE_RET (-1)
+#define MLX5_REGULAR_ERROR_CQE_RET (-5)
+#define MLX5_CRITICAL_ERROR_CQE_RET (-4)
 /* Must not be negative. */
 #define MLX5_RECOVERY_ERROR_RET 0
+#define MLX5_RECOVERY_IGNORE_RET 1
+#define MLX5_RECOVERY_COMPLETED_RET 2
 
 /**
  * Handle a Rx error.
@@ -429,10 +434,14 @@ mlx5_rxq_initialize(struct mlx5_rxq_data *rxq)
  *   Number of CQEs to check for an error.
  *
  * @return
- *   MLX5_RECOVERY_ERROR_RET in case of recovery error, otherwise the CQE status.
+ *   MLX5_RECOVERY_ERROR_RET in case of recovery error,
+ *   MLX5_RECOVERY_IGNORE_RET in case of non-critical error syndrome,
+ *   MLX5_RECOVERY_COMPLETED_RET in case of recovery is completed,
+ *   otherwise the CQE status after ignored error syndrome or queue reset.
  */
 int
-mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec, uint16_t err_n)
+mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec,
+                   uint16_t err_n, uint16_t *skip_cnt)
 {
     const uint16_t cqe_n = 1 << rxq->cqe_n;
     const uint16_t cqe_mask = cqe_n - 1;
@@ -447,14 +456,35 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec, uint16_t err_n)
         .cqe = &(*rxq->cqes)[(rxq->cq_ci - vec) & cqe_mask],
     };
     struct mlx5_mp_arg_queue_state_modify sm;
+    bool critical_syndrome = false;
     int ret, i;
 
     switch (rxq->err_state) {
+    case MLX5_RXQ_ERR_STATE_IGNORE:
+        ret = check_cqe(u.cqe, cqe_n, rxq->cq_ci - vec);
+        if (ret != MLX5_CQE_STATUS_ERR) {
+            rxq->err_state = MLX5_RXQ_ERR_STATE_NO_ERROR;
+            return ret;
+        }
+        /* Fall-through */
     case MLX5_RXQ_ERR_STATE_NO_ERROR:
         for (i = 0; i < (int)err_n; i++) {
             u.cqe = &(*rxq->cqes)[(rxq->cq_ci - vec - i) & cqe_mask];
-            if (MLX5_CQE_OPCODE(u.cqe->op_own) == MLX5_CQE_RESP_ERR)
+            if (MLX5_CQE_OPCODE(u.cqe->op_own) == MLX5_CQE_RESP_ERR) {
+                if (u.err_cqe->syndrome == MLX5_CQE_SYNDROME_LOCAL_QP_OP_ERR ||
+                    u.err_cqe->syndrome == MLX5_CQE_SYNDROME_LOCAL_PROT_ERR ||
+                    u.err_cqe->syndrome == MLX5_CQE_SYNDROME_WR_FLUSH_ERR)
+                    critical_syndrome = true;
                 break;
+            }
+        }
+        if (!critical_syndrome) {
+            if (rxq->err_state == MLX5_RXQ_ERR_STATE_NO_ERROR) {
+                *skip_cnt = 0;
+                if (i == err_n)
+                    rxq->err_state = MLX5_RXQ_ERR_STATE_IGNORE;
+            }
+            return MLX5_RECOVERY_IGNORE_RET;
         }
         rxq->err_state = MLX5_RXQ_ERR_STATE_NEED_RESET;
         /* Fall-through */
@@ -546,6 +576,7 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec, uint16_t err_n)
             }
             mlx5_rxq_initialize(rxq);
             rxq->err_state = MLX5_RXQ_ERR_STATE_NO_ERROR;
+            return MLX5_RECOVERY_COMPLETED_RET;
         }
         return ret;
     default:
@@ -565,19 +596,24 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec, uint16_t err_n)
  * @param[out] mcqe
  *   Store pointer to mini-CQE if compressed. Otherwise, the pointer is not
  *   written.
- *
+ * @param[out] skip_cnt
+ *   Number of packets skipped due to recoverable errors.
+ * @param mprq
+ *   Indication if it is called from MPRQ.
  * @return
- *   0 in case of empty CQE, MLX5_ERROR_CQE_RET in case of error CQE,
- *   otherwise the packet size in regular RxQ, and striding byte
- *   count format in mprq case.
+ *   0 in case of empty CQE, MLX5_REGULAR_ERROR_CQE_RET in case of error CQE,
+ *   MLX5_CRITICAL_ERROR_CQE_RET in case of error CQE lead to Rx queue reset,
+ *   otherwise the packet size in regular RxQ,
+ *   and striding byte count format in mprq case.
  */
 static inline int
 mlx5_rx_poll_len(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
-                 uint16_t cqe_cnt, volatile struct mlx5_mini_cqe8 **mcqe)
+                 uint16_t cqe_cnt, volatile struct mlx5_mini_cqe8 **mcqe,
+                 uint16_t *skip_cnt, bool mprq)
 {
     struct rxq_zip *zip = &rxq->zip;
     uint16_t cqe_n = cqe_cnt + 1;
-    int len;
+    int len = 0, ret = 0;
     uint16_t idx, end;
 
     do {
@@ -626,7 +662,6 @@ mlx5_rx_poll_len(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
          * compressed.
          */
     } else {
-        int ret;
         int8_t op_own;
         uint32_t cq_ci;
 
@@ -634,10 +669,12 @@ mlx5_rx_poll_len(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
         if (unlikely(ret != MLX5_CQE_STATUS_SW_OWN)) {
             if (unlikely(ret == MLX5_CQE_STATUS_ERR ||
                          rxq->err_state)) {
-                ret = mlx5_rx_err_handle(rxq, 0, 1);
-                if (ret == MLX5_CQE_STATUS_HW_OWN ||
-                    ret == MLX5_RECOVERY_ERROR_RET)
-                    return MLX5_ERROR_CQE_RET;
+                ret = mlx5_rx_err_handle(rxq, 0, 1, skip_cnt);
+                if (ret == MLX5_CQE_STATUS_HW_OWN)
+                    return MLX5_ERROR_CQE_MASK;
+                if (ret == MLX5_RECOVERY_ERROR_RET ||
+                    ret == MLX5_RECOVERY_COMPLETED_RET)
+                    return MLX5_CRITICAL_ERROR_CQE_RET;
             } else {
                 return 0;
             }
@@ -690,8 +727,15 @@ mlx5_rx_poll_len(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
         }
     }
     if (unlikely(rxq->err_state)) {
+        if (rxq->err_state == MLX5_RXQ_ERR_STATE_IGNORE &&
+            ret == MLX5_CQE_STATUS_SW_OWN) {
+            rxq->err_state = MLX5_RXQ_ERR_STATE_NO_ERROR;
+            return len & MLX5_ERROR_CQE_MASK;
+        }
         cqe = &(*rxq->cqes)[rxq->cq_ci & cqe_cnt];
         ++rxq->stats.idropped;
+        (*skip_cnt) += mprq ? (len & MLX5_MPRQ_STRIDE_NUM_MASK) >>
+            MLX5_MPRQ_STRIDE_NUM_SHIFT : 1;
     } else {
         return len;
     }
@@ -843,6 +887,7 @@ mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
     int len = 0; /* keep its value across iterations. */
 
     while (pkts_n) {
+        uint16_t skip_cnt;
         unsigned int idx = rq_ci & wqe_cnt;
         volatile struct mlx5_wqe_data_seg *wqe =
             &((volatile struct mlx5_wqe_data_seg *)rxq->wqes)[idx];
@@ -881,11 +926,24 @@ mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
         }
         if (!pkt) {
             cqe = &(*rxq->cqes)[rxq->cq_ci & cqe_cnt];
-            len = mlx5_rx_poll_len(rxq, cqe, cqe_cnt, &mcqe);
-            if (len <= 0) {
-                rte_mbuf_raw_free(rep);
-                if (unlikely(len == MLX5_ERROR_CQE_RET))
+            len = mlx5_rx_poll_len(rxq, cqe, cqe_cnt, &mcqe, &skip_cnt, false);
+            if (unlikely(len & MLX5_ERROR_CQE_MASK)) {
+                if (len == MLX5_CRITICAL_ERROR_CQE_RET) {
+                    rte_mbuf_raw_free(rep);
                     rq_ci = rxq->rq_ci << sges_n;
+                    break;
+                }
+                rq_ci >>= sges_n;
+                rq_ci += skip_cnt;
+                rq_ci <<= sges_n;
+                idx = rq_ci & wqe_cnt;
+                wqe = &((volatile struct mlx5_wqe_data_seg *)rxq->wqes)[idx];
+                seg = (*rxq->elts)[idx];
+                cqe = &(*rxq->cqes)[rxq->cq_ci & cqe_cnt];
+                len = len & ~MLX5_ERROR_CQE_MASK;
+            }
+            if (len == 0) {
+                rte_mbuf_raw_free(rep);
                 break;
             }
             pkt = seg;
@@ -1095,6 +1153,7 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
         uint16_t strd_cnt;
         uint16_t strd_idx;
         uint32_t byte_cnt;
+        uint16_t skip_cnt;
         volatile struct mlx5_mini_cqe8 *mcqe = NULL;
         enum mlx5_rqx_code rxq_code;
@@ -1107,14 +1166,26 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
             buf = (*rxq->mprq_bufs)[rq_ci & wq_mask];
         }
         cqe = &(*rxq->cqes)[rxq->cq_ci & cq_mask];
-        ret = mlx5_rx_poll_len(rxq, cqe, cq_mask, &mcqe);
+        ret = mlx5_rx_poll_len(rxq, cqe, cq_mask, &mcqe, &skip_cnt, true);
+        if (unlikely(ret & MLX5_ERROR_CQE_MASK)) {
+            if (ret == MLX5_CRITICAL_ERROR_CQE_RET) {
+                rq_ci = rxq->rq_ci;
+                consumed_strd = rxq->consumed_strd;
+                break;
+            }
+            consumed_strd += skip_cnt;
+            while (consumed_strd >= strd_n) {
+                /* Replace WQE if the buffer is still in use. */
+                mprq_buf_replace(rxq, rq_ci & wq_mask);
+                /* Advance to the next WQE. */
+                consumed_strd -= strd_n;
+                ++rq_ci;
+                buf = (*rxq->mprq_bufs)[rq_ci & wq_mask];
+            }
+            cqe = &(*rxq->cqes)[rxq->cq_ci & cq_mask];
+        }
         if (ret == 0)
             break;
-        if (unlikely(ret == MLX5_ERROR_CQE_RET)) {
-            rq_ci = rxq->rq_ci;
-            consumed_strd = rxq->consumed_strd;
-            break;
-        }
         byte_cnt = ret;
         len = (byte_cnt & MLX5_MPRQ_LEN_MASK) >> MLX5_MPRQ_LEN_SHIFT;
         MLX5_ASSERT((int)len >= (rxq->crc_present << 2));
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 4ba53ebc48..6b42e27c89 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -62,6 +62,7 @@ enum mlx5_rxq_err_state {
     MLX5_RXQ_ERR_STATE_NO_ERROR = 0,
     MLX5_RXQ_ERR_STATE_NEED_RESET,
     MLX5_RXQ_ERR_STATE_NEED_READY,
+    MLX5_RXQ_ERR_STATE_IGNORE,
 };
 
 enum mlx5_rqx_code {
@@ -286,8 +287,8 @@ int mlx5_hrxq_modify(struct rte_eth_dev *dev, uint32_t hxrq_idx,
 uint16_t mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n);
 void mlx5_rxq_initialize(struct mlx5_rxq_data *rxq);
-__rte_noinline int mlx5_rx_err_handle(struct mlx5_rxq_data *rxq,
-                                      uint8_t vec, uint16_t err_n);
+__rte_noinline int mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec,
+                                      uint16_t err_n, uint16_t *skip_cnt);
 void mlx5_mprq_buf_free(struct mlx5_mprq_buf *buf);
 uint16_t mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts,
                             uint16_t pkts_n);
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.c b/drivers/net/mlx5/mlx5_rxtx_vec.c
index c6be2be763..667475a93e 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.c
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.c
@@ -51,6 +51,7 @@ rxq_handle_pending_error(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts,
                          uint16_t pkts_n)
 {
     uint16_t n = 0;
+    uint16_t skip_cnt;
     unsigned int i;
 #ifdef MLX5_PMD_SOFT_COUNTERS
     uint32_t err_bytes = 0;
@@ -74,7 +75,7 @@ rxq_handle_pending_error(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts,
     rxq->stats.ipackets -= (pkts_n - n);
     rxq->stats.ibytes -= err_bytes;
 #endif
-    mlx5_rx_err_handle(rxq, 1, pkts_n);
+    mlx5_rx_err_handle(rxq, 1, pkts_n, &skip_cnt);
     return n;
 }
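
For context, the heart of the change is the syndrome check added to
mlx5_rx_err_handle(): only LOCAL_QP_OP_ERR, LOCAL_PROT_ERR and WR_FLUSH_ERR
are treated as critical, everything else is skipped. A minimal standalone
sketch of that classification, assuming SYND_* names and values that mirror
the MLX5_CQE_SYNDROME_* definitions in drivers/common/mlx5/mlx5_prm.h (check
that header for the authoritative values), could look like this:

/* Illustration only, not part of the patch. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum {
    /* Assumed values, loosely mirroring mlx5_prm.h. */
    SYND_LOCAL_LENGTH_ERR = 0x01, /* non-critical, e.g. packet larger than the posted buffer */
    SYND_LOCAL_QP_OP_ERR  = 0x02, /* critical */
    SYND_LOCAL_PROT_ERR   = 0x04, /* critical */
    SYND_WR_FLUSH_ERR     = 0x05, /* critical */
};

/* Return true when the error syndrome warrants an Rx queue reset. */
static bool
rx_syndrome_is_critical(uint8_t syndrome)
{
    switch (syndrome) {
    case SYND_LOCAL_QP_OP_ERR:
    case SYND_LOCAL_PROT_ERR:
    case SYND_WR_FLUSH_ERR:
        return true;
    default:
        /* Non-critical: drop the packet, account it in skip_cnt
         * and keep receiving without resetting the queue. */
        return false;
    }
}

int
main(void)
{
    printf("LOCAL_LENGTH_ERR critical? %d\n",
           rx_syndrome_is_critical(SYND_LOCAL_LENGTH_ERR)); /* prints 0 */
    printf("WR_FLUSH_ERR critical? %d\n",
           rx_syndrome_is_critical(SYND_WR_FLUSH_ERR));     /* prints 1 */
    return 0;
}

In the patch itself this decision is surfaced to the burst routines through
MLX5_ERROR_CQE_MASK: a critical syndrome ends up as
MLX5_CRITICAL_ERROR_CQE_RET and takes the queue-reset path, while a
non-critical one only increments skip_cnt so that mlx5_rx_burst() and
mlx5_rx_burst_mprq() can advance past the skipped entries and continue
receiving.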