From patchwork Sun Dec 10 01:24:49 2023
X-Patchwork-Submitter: Ajit Khaparde
X-Patchwork-Id: 134987
X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Ajit Khaparde
To: dev@dpdk.org
Cc: stable@dpdk.org, Damodharam Ammepalli
Subject: [PATCH v2 08/14] net/bnxt: fix array overflow
Date: Sat, 9 Dec 2023 17:24:49 -0800
Message-Id: <20231210012455.20229-9-ajit.khaparde@broadcom.com>
In-Reply-To: <20231210012455.20229-1-ajit.khaparde@broadcom.com>
References: <20231210012455.20229-1-ajit.khaparde@broadcom.com>

In some cases the number of elements in the context memory
array can exceed MAX_CTX_PAGES, which can cause the statically sized
members ctx_pg_arr and ctx_dma_arr to overflow. Allocate them
dynamically to prevent this overflow.

Cc: stable@dpdk.org
Fixes: f8168ca0e690 ("net/bnxt: support thor controller")
Signed-off-by: Ajit Khaparde
Reviewed-by: Damodharam Ammepalli
---
 drivers/net/bnxt/bnxt.h        |  4 ++--
 drivers/net/bnxt/bnxt_ethdev.c | 42 +++++++++++++++++++++++++++-------
 2 files changed, 36 insertions(+), 10 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 7439ecf4fa..3fbdf1ddcc 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -455,8 +455,8 @@ struct bnxt_ring_mem_info {
 
 struct bnxt_ctx_pg_info {
 	uint32_t entries;
-	void *ctx_pg_arr[MAX_CTX_PAGES];
-	rte_iova_t ctx_dma_arr[MAX_CTX_PAGES];
+	void **ctx_pg_arr;
+	rte_iova_t *ctx_dma_arr;
 	struct bnxt_ring_mem_info ring_mem;
 };
 
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 2a41fafa02..c585373ba3 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -4767,7 +4767,7 @@ static int bnxt_alloc_ctx_mem_blk(struct bnxt *bp,
 {
 	struct bnxt_ring_mem_info *rmem = &ctx_pg->ring_mem;
 	const struct rte_memzone *mz = NULL;
-	char mz_name[RTE_MEMZONE_NAMESIZE];
+	char name[RTE_MEMZONE_NAMESIZE];
 	rte_iova_t mz_phys_addr;
 	uint64_t valid_bits = 0;
 	uint32_t sz;
@@ -4779,6 +4779,19 @@ static int bnxt_alloc_ctx_mem_blk(struct bnxt *bp,
 	rmem->nr_pages = RTE_ALIGN_MUL_CEIL(mem_size, BNXT_PAGE_SIZE) /
 			 BNXT_PAGE_SIZE;
 	rmem->page_size = BNXT_PAGE_SIZE;
+
+	snprintf(name, RTE_MEMZONE_NAMESIZE, "bnxt_ctx_pg_arr%s_%x_%d",
+		 suffix, idx, bp->eth_dev->data->port_id);
+	ctx_pg->ctx_pg_arr = rte_zmalloc(name, sizeof(void *) * rmem->nr_pages, 0);
+	if (ctx_pg->ctx_pg_arr == NULL)
+		return -ENOMEM;
+
+	snprintf(name, RTE_MEMZONE_NAMESIZE, "bnxt_ctx_dma_arr%s_%x_%d",
+		 suffix, idx, bp->eth_dev->data->port_id);
+	ctx_pg->ctx_dma_arr = rte_zmalloc(name, sizeof(rte_iova_t *) * rmem->nr_pages, 0);
+	if (ctx_pg->ctx_dma_arr == NULL)
+		return -ENOMEM;
+
 	rmem->pg_arr = ctx_pg->ctx_pg_arr;
 	rmem->dma_arr = ctx_pg->ctx_dma_arr;
 	rmem->flags = BNXT_RMEM_VALID_PTE_FLAG;
@@ -4786,13 +4799,13 @@ static int bnxt_alloc_ctx_mem_blk(struct bnxt *bp,
 		valid_bits = PTU_PTE_VALID;
 
 	if (rmem->nr_pages > 1) {
-		snprintf(mz_name, RTE_MEMZONE_NAMESIZE,
+		snprintf(name, RTE_MEMZONE_NAMESIZE,
 			 "bnxt_ctx_pg_tbl%s_%x_%d",
 			 suffix, idx, bp->eth_dev->data->port_id);
-		mz_name[RTE_MEMZONE_NAMESIZE - 1] = 0;
-		mz = rte_memzone_lookup(mz_name);
+		name[RTE_MEMZONE_NAMESIZE - 1] = 0;
+		mz = rte_memzone_lookup(name);
 		if (!mz) {
-			mz = rte_memzone_reserve_aligned(mz_name,
+			mz = rte_memzone_reserve_aligned(name,
 					rmem->nr_pages * 8,
 					bp->eth_dev->device->numa_node,
 					RTE_MEMZONE_2MB |
@@ -4811,11 +4824,11 @@ static int bnxt_alloc_ctx_mem_blk(struct bnxt *bp,
 		rmem->pg_tbl_mz = mz;
 	}
 
-	snprintf(mz_name, RTE_MEMZONE_NAMESIZE, "bnxt_ctx_%s_%x_%d",
+	snprintf(name, RTE_MEMZONE_NAMESIZE, "bnxt_ctx_%s_%x_%d",
 		 suffix, idx, bp->eth_dev->data->port_id);
-	mz = rte_memzone_lookup(mz_name);
+	mz = rte_memzone_lookup(name);
 	if (!mz) {
-		mz = rte_memzone_reserve_aligned(mz_name,
+		mz = rte_memzone_reserve_aligned(name,
 				mem_size,
 				bp->eth_dev->device->numa_node,
 				RTE_MEMZONE_1GB |
@@ -4861,6 +4874,17 @@ static void bnxt_free_ctx_mem(struct bnxt *bp)
 		return;
 
 	bp->ctx->flags &= ~BNXT_CTX_FLAG_INITED;
+	rte_free(bp->ctx->qp_mem.ctx_pg_arr);
+	rte_free(bp->ctx->srq_mem.ctx_pg_arr);
+	rte_free(bp->ctx->cq_mem.ctx_pg_arr);
+	rte_free(bp->ctx->vnic_mem.ctx_pg_arr);
+	rte_free(bp->ctx->stat_mem.ctx_pg_arr);
+	rte_free(bp->ctx->qp_mem.ctx_dma_arr);
+	rte_free(bp->ctx->srq_mem.ctx_dma_arr);
+	rte_free(bp->ctx->cq_mem.ctx_dma_arr);
+	rte_free(bp->ctx->vnic_mem.ctx_dma_arr);
+	rte_free(bp->ctx->stat_mem.ctx_dma_arr);
+
 	rte_memzone_free(bp->ctx->qp_mem.ring_mem.mz);
 	rte_memzone_free(bp->ctx->srq_mem.ring_mem.mz);
 	rte_memzone_free(bp->ctx->cq_mem.ring_mem.mz);
@@ -4873,6 +4897,8 @@ static void bnxt_free_ctx_mem(struct bnxt *bp)
 	rte_memzone_free(bp->ctx->stat_mem.ring_mem.pg_tbl_mz);
 
 	for (i = 0; i < bp->ctx->tqm_fp_rings_count + 1; i++) {
+		rte_free(bp->ctx->tqm_mem[i]->ctx_pg_arr);
+		rte_free(bp->ctx->tqm_mem[i]->ctx_dma_arr);
 		if (bp->ctx->tqm_mem[i])
 			rte_memzone_free(bp->ctx->tqm_mem[i]->ring_mem.mz);
 	}
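---

For readers outside the driver, a minimal standalone sketch of the failure mode this
patch addresses follows: when the rounded-up page count for a context region exceeds a
fixed compile-time maximum, a fixed-size page array overflows, whereas sizing the
allocation from the computed page count does not. The names and constants below
(PAGE_SIZE_SKETCH, MAX_CTX_PAGES_SKETCH, alloc_ctx_pages) are illustrative stand-ins,
not symbols from the bnxt driver, and plain calloc() stands in for rte_zmalloc().

/*
 * Standalone sketch (not bnxt driver code): size the page-pointer array
 * from the computed number of pages instead of a compile-time maximum.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE_SKETCH     4096u   /* stand-in for BNXT_PAGE_SIZE */
#define MAX_CTX_PAGES_SKETCH 8u      /* stand-in for MAX_CTX_PAGES */

struct ctx_pg_sketch {
	uint32_t nr_pages;
	void **pg_arr;   /* dynamically sized, one slot per page */
};

static int alloc_ctx_pages(struct ctx_pg_sketch *ctx, uint64_t mem_size)
{
	/* Round the region size up to whole pages. */
	ctx->nr_pages = (uint32_t)((mem_size + PAGE_SIZE_SKETCH - 1) /
				   PAGE_SIZE_SKETCH);

	if (ctx->nr_pages > MAX_CTX_PAGES_SKETCH)
		printf("fixed array of %u entries would overflow (need %u)\n",
		       MAX_CTX_PAGES_SKETCH, ctx->nr_pages);

	/* Allocate exactly nr_pages slots; no fixed upper bound to exceed. */
	ctx->pg_arr = calloc(ctx->nr_pages, sizeof(*ctx->pg_arr));
	return ctx->pg_arr == NULL ? -1 : 0;
}

int main(void)
{
	struct ctx_pg_sketch ctx;

	/* A region larger than MAX_CTX_PAGES_SKETCH pages triggers the case. */
	if (alloc_ctx_pages(&ctx, 64u * PAGE_SIZE_SKETCH) == 0)
		free(ctx.pg_arr);
	return 0;
}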