From patchwork Mon Nov 6 09:59:34 2023
X-Patchwork-Submitter: Wenjing Qiao
X-Patchwork-Id: 133887
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: wenjing.qiao@intel.com
To: jingjing.wu@intel.com, beilei.xing@intel.com, qi.z.zhang@intel.com
Cc: dev@dpdk.org, yuying.zhang@intel.com, Wenjing Qiao
Subject: [PATCH] net/cpfl: fix coverity issues
Date: Mon, 6 Nov 2023 09:59:34 +0000
Message-Id: <20231106095933.962565-1-wenjing.qiao@intel.com>

From: Wenjing Qiao

Fix the integer handling, tainted_scalar, uninitialized variable, overrun
and control flow issues reported by Coverity scan.
Coverity issue: 403259
Coverity issue: 403261
Coverity issue: 403266
Coverity issue: 403267
Coverity issue: 403271
Coverity issue: 403274
Fixes: db042ef09d26 ("net/cpfl: implement FXP rule creation and destroying")
Fixes: 03f976012304 ("net/cpfl: adapt FXP to flow engine")

Signed-off-by: Wenjing Qiao
Acked-by: Qi Zhang
---
 drivers/net/cpfl/cpfl_ethdev.c          |  2 +-
 drivers/net/cpfl/cpfl_flow_engine_fxp.c | 12 +++---------
 drivers/net/cpfl/cpfl_fxp_rule.c        |  6 +++---
 drivers/net/cpfl/cpfl_rules.c           |  3 +--
 4 files changed, 8 insertions(+), 15 deletions(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index eb168eee51..7697aea0ce 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -2478,7 +2478,7 @@ cpfl_alloc_dma_mem_batch(struct idpf_dma_mem *orig_dma, struct idpf_dma_mem *dma
 {
 	int i;
 
-	if (!idpf_alloc_dma_mem(NULL, orig_dma, size * (1 + batch_size))) {
+	if (!idpf_alloc_dma_mem(NULL, orig_dma, (uint64_t)size * (1 + batch_size))) {
 		PMD_INIT_LOG(ERR, "Could not alloc dma memory");
 		return -ENOMEM;
 	}
diff --git a/drivers/net/cpfl/cpfl_flow_engine_fxp.c b/drivers/net/cpfl/cpfl_flow_engine_fxp.c
index ddede2f553..4d3cdf813e 100644
--- a/drivers/net/cpfl/cpfl_flow_engine_fxp.c
+++ b/drivers/net/cpfl/cpfl_flow_engine_fxp.c
@@ -107,13 +107,6 @@ cpfl_fxp_create(struct rte_eth_dev *dev,
 	return ret;
 }
 
-static inline void
-cpfl_fxp_rule_free(struct rte_flow *flow)
-{
-	rte_free(flow->rule);
-	flow->rule = NULL;
-}
-
 static int
 cpfl_fxp_destroy(struct rte_eth_dev *dev,
 		 struct rte_flow *flow,
@@ -128,7 +121,7 @@ cpfl_fxp_destroy(struct rte_eth_dev *dev,
 	struct cpfl_vport *vport;
 	struct cpfl_repr *repr;
 
-	rim = flow->rule;
+	rim = (struct cpfl_rule_info_meta *)flow->rule;
 	if (!rim) {
 		rte_flow_error_set(error, EINVAL,
 				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
@@ -164,7 +157,8 @@ cpfl_fxp_destroy(struct rte_eth_dev *dev,
 	for (i = rim->pr_num; i < rim->rule_num; i++)
 		cpfl_fxp_mod_idx_free(ad, rim->rules[i].mod.mod_index);
 err:
-	cpfl_fxp_rule_free(flow);
+	rte_free(rim);
+	flow->rule = NULL;
 	return ret;
 }
 
diff --git a/drivers/net/cpfl/cpfl_fxp_rule.c b/drivers/net/cpfl/cpfl_fxp_rule.c
index ea65e20507..ba3a036e7a 100644
--- a/drivers/net/cpfl/cpfl_fxp_rule.c
+++ b/drivers/net/cpfl/cpfl_fxp_rule.c
@@ -76,8 +76,8 @@ cpfl_receive_ctlq_msg(struct idpf_hw *hw, struct idpf_ctlq_info *cq, u16 num_q_m
 		rte_delay_us_sleep(10);
 		ret = cpfl_vport_ctlq_recv(cq, &num_q_msg, &q_msg[0]);
-		if (ret && ret != CPFL_ERR_CTLQ_NO_WORK &&
-		    ret != CPFL_ERR_CTLQ_ERROR) {
+		if (ret && ret != CPFL_ERR_CTLQ_NO_WORK && ret != CPFL_ERR_CTLQ_ERROR &&
+		    ret != CPFL_ERR_CTLQ_EMPTY) {
 			PMD_INIT_LOG(ERR, "failed to recv ctrlq msg. err: 0x%4x\n", ret);
 			retries++;
 			continue;
 		}
@@ -165,7 +165,7 @@ cpfl_default_rule_pack(struct cpfl_rule_info *rinfo, struct idpf_dma_mem *dma,
 {
 	union cpfl_rule_cfg_pkt_record *blob = NULL;
 	enum cpfl_ctlq_rule_cfg_opc opc;
-	struct cpfl_rule_cfg_data cfg;
+	struct cpfl_rule_cfg_data cfg = {0};
 	uint16_t cfg_ctrl;
 
 	if (!dma->va) {
diff --git a/drivers/net/cpfl/cpfl_rules.c b/drivers/net/cpfl/cpfl_rules.c
index 3d259d3da8..6c0e435b1d 100644
--- a/drivers/net/cpfl/cpfl_rules.c
+++ b/drivers/net/cpfl/cpfl_rules.c
@@ -116,8 +116,7 @@ cpfl_prep_sem_rule_blob(const uint8_t *key,
 	uint32_t i;
 
 	idpf_memset(rule_blob, 0, sizeof(*rule_blob), IDPF_DMA_MEM);
-	idpf_memcpy(rule_blob->sem_rule.key, key, key_byte_len,
-		    CPFL_NONDMA_TO_DMA);
+	memcpy(rule_blob->sem_rule.key, key, key_byte_len);
 	for (i = 0; i < act_byte_len / sizeof(uint32_t); i++)
 		*act_dst++ = CPU_TO_LE32(*act_src++);
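
Editor's note on the cpfl_ethdev.c hunk, for readers less familiar with C
integer promotion: when both operands are 32-bit, size * (1 + batch_size) is
evaluated in 32-bit arithmetic and can wrap before the result is widened to
the allocator's 64-bit size argument; casting one operand to uint64_t forces
the multiplication itself to be done in 64 bits. The standalone sketch below
illustrates the difference. It is not code from the driver, and the operand
types and values are hypothetical.

    /* Illustrative sketch only, not part of the patch: shows why the
     * (uint64_t) cast matters. Variable names mirror the hunk above;
     * the 32-bit types and the values are assumptions for the demo.
     */
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
    	uint32_t size = 4u * 1024 * 1024;  /* assumed 32-bit element size */
    	uint32_t batch_size = 1024;        /* assumed 32-bit batch count */

    	/* Product computed in 32-bit arithmetic: wraps, then is widened. */
    	uint64_t wrapped = size * (1 + batch_size);
    	/* Cast one operand first, so the multiplication is 64-bit. */
    	uint64_t widened = (uint64_t)size * (1 + batch_size);

    	printf("wrapped=%" PRIu64 " widened=%" PRIu64 "\n", wrapped, widened);
    	return 0;
    }

With these example values the 32-bit product wraps back to 4194304 while the
64-bit product is 4299161600. Under the same reading, the cfg = {0} hunk gives
the on-stack struct a defined value before it is consumed later in the packing
path, which is the usual way to address an uninitialized-variable finding.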