From patchwork Mon Aug 15 07:31:30 2022
X-Patchwork-Submitter: Qi Zhang
X-Patchwork-Id: 115062
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Qi Zhang
To: qiming.yang@intel.com
Cc: dev@dpdk.org, Qi Zhang,
 stable@dpdk.org, Roman Storozhenko
Subject: [PATCH v2 34/70] net/ice/base: fix null pointer dereference during
Date: Mon, 15 Aug 2022 03:31:30 -0400
Message-Id: <20220815073206.2917968-35-qi.z.zhang@intel.com>
In-Reply-To: <20220815073206.2917968-1-qi.z.zhang@intel.com>
References: <20220815071306.2910599-1-qi.z.zhang@intel.com>
 <20220815073206.2917968-1-qi.z.zhang@intel.com>
List-Id: DPDK patches and discussions

Sometimes a PCIe unrecoverable error occurs during the shutdown
process. This leads to a NULL pointer dereference while the hardware
tables are being cleared.

Fix this bug by checking every table pointer against NULL before
dereferencing it, as some of the tables may already have been cleared.

Fixes: 969890d505b1 ("net/ice/base: enable clearing of HW tables")
Cc: stable@dpdk.org

Signed-off-by: Roman Storozhenko
Signed-off-by: Qi Zhang
---
 drivers/net/ice/base/ice_flex_pipe.c | 332 +++++++++++++++------------
 1 file changed, 179 insertions(+), 153 deletions(-)

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index aea0d97b9d..2d95ce4d74 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -2144,6 +2144,129 @@ void ice_init_flow_profs(struct ice_hw *hw, u8 blk_idx)
 	INIT_LIST_HEAD(&hw->fl_profs[blk_idx]);
 }
 
+/**
+ * ice_init_hw_tbls - init hardware table memory
+ * @hw: pointer to the hardware structure
+ */
+enum ice_status ice_init_hw_tbls(struct ice_hw *hw)
+{
+	u8 i;
+
+	ice_init_lock(&hw->rss_locks);
+	INIT_LIST_HEAD(&hw->rss_list_head);
+	if (!hw->dcf_enabled)
+		ice_init_all_prof_masks(hw);
+	for (i = 0; i < ICE_BLK_COUNT; i++) {
+		struct ice_prof_redir *prof_redir = &hw->blk[i].prof_redir;
+		struct ice_prof_tcam *prof = &hw->blk[i].prof;
+		struct ice_xlt1 *xlt1 = &hw->blk[i].xlt1;
+		struct ice_xlt2 *xlt2 = &hw->blk[i].xlt2;
+		struct ice_es *es = &hw->blk[i].es;
+		u16 j;
+
+		if (hw->blk[i].is_list_init)
+			continue;
+
+		ice_init_flow_profs(hw, i);
+		ice_init_lock(&es->prof_map_lock);
+		INIT_LIST_HEAD(&es->prof_map);
+		hw->blk[i].is_list_init = true;
+
+		hw->blk[i].overwrite = blk_sizes[i].overwrite;
+		es->reverse = blk_sizes[i].reverse;
+
+		xlt1->sid = ice_blk_sids[i][ICE_SID_XLT1_OFF];
+		xlt1->count = blk_sizes[i].xlt1;
+
+		xlt1->ptypes = (struct ice_ptg_ptype *)
+			ice_calloc(hw, xlt1->count, sizeof(*xlt1->ptypes));
+
+		if (!xlt1->ptypes)
+			goto err;
+
+		xlt1->ptg_tbl = (struct ice_ptg_entry *)
+			ice_calloc(hw, ICE_MAX_PTGS, sizeof(*xlt1->ptg_tbl));
+
+		if (!xlt1->ptg_tbl)
+			goto err;
+
+		xlt1->t = (u8 *)ice_calloc(hw, xlt1->count, sizeof(*xlt1->t));
+		if (!xlt1->t)
+			goto err;
+
+		xlt2->sid = ice_blk_sids[i][ICE_SID_XLT2_OFF];
+		xlt2->count = blk_sizes[i].xlt2;
+
+		xlt2->vsis = (struct ice_vsig_vsi *)
+			ice_calloc(hw, xlt2->count, sizeof(*xlt2->vsis));
+
+		if (!xlt2->vsis)
+			goto err;
+
+		xlt2->vsig_tbl = (struct ice_vsig_entry *)
+			ice_calloc(hw, xlt2->count, sizeof(*xlt2->vsig_tbl));
+		if (!xlt2->vsig_tbl)
+			goto err;
+
+		for (j = 0; j < xlt2->count; j++)
+			INIT_LIST_HEAD(&xlt2->vsig_tbl[j].prop_lst);
+
+		xlt2->t = (u16 *)ice_calloc(hw, xlt2->count, sizeof(*xlt2->t));
+		if (!xlt2->t)
+			goto err;
+
+		prof->sid = ice_blk_sids[i][ICE_SID_PR_OFF];
+		prof->count = blk_sizes[i].prof_tcam;
+		prof->max_prof_id = blk_sizes[i].prof_id;
+		prof->cdid_bits = blk_sizes[i].prof_cdid_bits;
+		prof->t = (struct ice_prof_tcam_entry *)
+			ice_calloc(hw, prof->count, sizeof(*prof->t));
+
+		if (!prof->t)
+			goto err;
+
+		prof_redir->sid = ice_blk_sids[i][ICE_SID_PR_REDIR_OFF];
+		prof_redir->count = blk_sizes[i].prof_redir;
+		prof_redir->t = (u8 *)ice_calloc(hw, prof_redir->count,
+						 sizeof(*prof_redir->t));
+
+		if (!prof_redir->t)
+			goto err;
+
+		es->sid = ice_blk_sids[i][ICE_SID_ES_OFF];
+		es->count = blk_sizes[i].es;
+		es->fvw = blk_sizes[i].fvw;
+		es->t = (struct ice_fv_word *)
+			ice_calloc(hw, (u32)(es->count * es->fvw),
+				   sizeof(*es->t));
+		if (!es->t)
+			goto err;
+
+		es->ref_count = (u16 *)
+			ice_calloc(hw, es->count, sizeof(*es->ref_count));
+
+		if (!es->ref_count)
+			goto err;
+
+		es->written = (u8 *)
+			ice_calloc(hw, es->count, sizeof(*es->written));
+
+		if (!es->written)
+			goto err;
+
+		es->mask_ena = (u32 *)
+			ice_calloc(hw, es->count, sizeof(*es->mask_ena));
+
+		if (!es->mask_ena)
+			goto err;
+	}
+	return ICE_SUCCESS;
+
+err:
+	ice_free_hw_tbls(hw);
+	return ICE_ERR_NO_MEMORY;
+}
+
 /**
  * ice_fill_blk_tbls - Read package context for tables
  * @hw: pointer to the hardware structure
@@ -2308,162 +2431,65 @@ void ice_clear_hw_tbls(struct ice_hw *hw)
 
 		ice_free_vsig_tbl(hw, (enum ice_block)i);
 
-		ice_memset(xlt1->ptypes, 0, xlt1->count * sizeof(*xlt1->ptypes),
-			   ICE_NONDMA_MEM);
-		ice_memset(xlt1->ptg_tbl, 0,
-			   ICE_MAX_PTGS * sizeof(*xlt1->ptg_tbl),
-			   ICE_NONDMA_MEM);
-		ice_memset(xlt1->t, 0, xlt1->count * sizeof(*xlt1->t),
-			   ICE_NONDMA_MEM);
-
-		ice_memset(xlt2->vsis, 0, xlt2->count * sizeof(*xlt2->vsis),
-			   ICE_NONDMA_MEM);
-		ice_memset(xlt2->vsig_tbl, 0,
-			   xlt2->count * sizeof(*xlt2->vsig_tbl),
-			   ICE_NONDMA_MEM);
-		ice_memset(xlt2->t, 0, xlt2->count * sizeof(*xlt2->t),
-			   ICE_NONDMA_MEM);
-
-		ice_memset(prof->t, 0, prof->count * sizeof(*prof->t),
-			   ICE_NONDMA_MEM);
-		ice_memset(prof_redir->t, 0,
-			   prof_redir->count * sizeof(*prof_redir->t),
-			   ICE_NONDMA_MEM);
-
-		ice_memset(es->t, 0, es->count * sizeof(*es->t) * es->fvw,
-			   ICE_NONDMA_MEM);
-		ice_memset(es->ref_count, 0, es->count * sizeof(*es->ref_count),
-			   ICE_NONDMA_MEM);
-		ice_memset(es->written, 0, es->count * sizeof(*es->written),
-			   ICE_NONDMA_MEM);
-		ice_memset(es->mask_ena, 0, es->count * sizeof(*es->mask_ena),
-			   ICE_NONDMA_MEM);
+		if (xlt1->ptypes)
+			ice_memset(xlt1->ptypes, 0,
+				   xlt1->count * sizeof(*xlt1->ptypes),
+				   ICE_NONDMA_MEM);
+
+		if (xlt1->ptg_tbl)
+			ice_memset(xlt1->ptg_tbl, 0,
+				   ICE_MAX_PTGS * sizeof(*xlt1->ptg_tbl),
+				   ICE_NONDMA_MEM);
+
+		if (xlt1->t)
+			ice_memset(xlt1->t, 0, xlt1->count * sizeof(*xlt1->t),
+				   ICE_NONDMA_MEM);
+
+		if (xlt2->vsis)
+			ice_memset(xlt2->vsis, 0,
+				   xlt2->count * sizeof(*xlt2->vsis),
+				   ICE_NONDMA_MEM);
+
+		if (xlt2->vsig_tbl)
+			ice_memset(xlt2->vsig_tbl, 0,
+				   xlt2->count * sizeof(*xlt2->vsig_tbl),
+				   ICE_NONDMA_MEM);
+
+		if (xlt2->t)
+			ice_memset(xlt2->t, 0, xlt2->count * sizeof(*xlt2->t),
+				   ICE_NONDMA_MEM);
+
+		if (prof->t)
+			ice_memset(prof->t, 0, prof->count * sizeof(*prof->t),
+				   ICE_NONDMA_MEM);
+
+		if (prof_redir->t)
+			ice_memset(prof_redir->t, 0,
+				   prof_redir->count * sizeof(*prof_redir->t),
+				   ICE_NONDMA_MEM);
+
+		if (es->t)
+			ice_memset(es->t, 0,
+				   es->count * sizeof(*es->t) * es->fvw,
+				   ICE_NONDMA_MEM);
+
+		if (es->ref_count)
+			ice_memset(es->ref_count, 0,
+				   es->count * sizeof(*es->ref_count),
+				   ICE_NONDMA_MEM);
+
+		if (es->written)
+			ice_memset(es->written, 0,
+				   es->count * sizeof(*es->written),
+				   ICE_NONDMA_MEM);
+
+		if (es->mask_ena)
+			ice_memset(es->mask_ena, 0,
+				   es->count * sizeof(*es->mask_ena),
+				   ICE_NONDMA_MEM);
 	}
 }
 
-/**
- * ice_init_hw_tbls - init hardware table memory
- * @hw: pointer to the hardware structure
- */
-enum ice_status ice_init_hw_tbls(struct ice_hw *hw)
-{
-	u8 i;
-
-	ice_init_lock(&hw->rss_locks);
-	INIT_LIST_HEAD(&hw->rss_list_head);
-	if (!hw->dcf_enabled)
-		ice_init_all_prof_masks(hw);
-	for (i = 0; i < ICE_BLK_COUNT; i++) {
-		struct ice_prof_redir *prof_redir = &hw->blk[i].prof_redir;
-		struct ice_prof_tcam *prof = &hw->blk[i].prof;
-		struct ice_xlt1 *xlt1 = &hw->blk[i].xlt1;
-		struct ice_xlt2 *xlt2 = &hw->blk[i].xlt2;
-		struct ice_es *es = &hw->blk[i].es;
-		u16 j;
-
-		if (hw->blk[i].is_list_init)
-			continue;
-
-		ice_init_flow_profs(hw, i);
-		ice_init_lock(&es->prof_map_lock);
-		INIT_LIST_HEAD(&es->prof_map);
-		hw->blk[i].is_list_init = true;
-
-		hw->blk[i].overwrite = blk_sizes[i].overwrite;
-		es->reverse = blk_sizes[i].reverse;
-
-		xlt1->sid = ice_blk_sids[i][ICE_SID_XLT1_OFF];
-		xlt1->count = blk_sizes[i].xlt1;
-
-		xlt1->ptypes = (struct ice_ptg_ptype *)
-			ice_calloc(hw, xlt1->count, sizeof(*xlt1->ptypes));
-
-		if (!xlt1->ptypes)
-			goto err;
-
-		xlt1->ptg_tbl = (struct ice_ptg_entry *)
-			ice_calloc(hw, ICE_MAX_PTGS, sizeof(*xlt1->ptg_tbl));
-
-		if (!xlt1->ptg_tbl)
-			goto err;
-
-		xlt1->t = (u8 *)ice_calloc(hw, xlt1->count, sizeof(*xlt1->t));
-		if (!xlt1->t)
-			goto err;
-
-		xlt2->sid = ice_blk_sids[i][ICE_SID_XLT2_OFF];
-		xlt2->count = blk_sizes[i].xlt2;
-
-		xlt2->vsis = (struct ice_vsig_vsi *)
-			ice_calloc(hw, xlt2->count, sizeof(*xlt2->vsis));
-
-		if (!xlt2->vsis)
-			goto err;
-
-		xlt2->vsig_tbl = (struct ice_vsig_entry *)
-			ice_calloc(hw, xlt2->count, sizeof(*xlt2->vsig_tbl));
-		if (!xlt2->vsig_tbl)
-			goto err;
-
-		for (j = 0; j < xlt2->count; j++)
-			INIT_LIST_HEAD(&xlt2->vsig_tbl[j].prop_lst);
-
-		xlt2->t = (u16 *)ice_calloc(hw, xlt2->count, sizeof(*xlt2->t));
-		if (!xlt2->t)
-			goto err;
-
-		prof->sid = ice_blk_sids[i][ICE_SID_PR_OFF];
-		prof->count = blk_sizes[i].prof_tcam;
-		prof->max_prof_id = blk_sizes[i].prof_id;
-		prof->cdid_bits = blk_sizes[i].prof_cdid_bits;
-		prof->t = (struct ice_prof_tcam_entry *)
-			ice_calloc(hw, prof->count, sizeof(*prof->t));
-
-		if (!prof->t)
-			goto err;
-
-		prof_redir->sid = ice_blk_sids[i][ICE_SID_PR_REDIR_OFF];
-		prof_redir->count = blk_sizes[i].prof_redir;
-		prof_redir->t = (u8 *)ice_calloc(hw, prof_redir->count,
-						 sizeof(*prof_redir->t));
-
-		if (!prof_redir->t)
-			goto err;
-
-		es->sid = ice_blk_sids[i][ICE_SID_ES_OFF];
-		es->count = blk_sizes[i].es;
-		es->fvw = blk_sizes[i].fvw;
-		es->t = (struct ice_fv_word *)
-			ice_calloc(hw, (u32)(es->count * es->fvw),
-				   sizeof(*es->t));
-		if (!es->t)
-			goto err;
-
-		es->ref_count = (u16 *)
-			ice_calloc(hw, es->count, sizeof(*es->ref_count));
-
-		if (!es->ref_count)
-			goto err;
-
-		es->written = (u8 *)
-			ice_calloc(hw, es->count, sizeof(*es->written));
-
-		if (!es->written)
-			goto err;
-
-		es->mask_ena = (u32 *)
-			ice_calloc(hw, es->count, sizeof(*es->mask_ena));
-
-		if (!es->mask_ena)
-			goto err;
-	}
-	return ICE_SUCCESS;
-
-err:
-	ice_free_hw_tbls(hw);
-	return ICE_ERR_NO_MEMORY;
-}
-
 /**
  * ice_prof_gen_key - generate profile ID key
  * @hw: pointer to the HW struct