From patchwork Tue May 23 09:13:56 2023
X-Patchwork-Submitter: Ashwin Sekhar T K
X-Patchwork-Id: 127196
From: Ashwin Sekhar T K
To: Ashwin Sekhar T K, Pavan Nikhilesh, Nithin Dabilpuram, Kiran Kumar K,
 Sunil Kumar Kori, Satha Rao
Subject: [PATCH v2 1/5] mempool/cnxk: use pool config to pass flags
Date: Tue, 23 May 2023 14:43:56 +0530
Message-ID: <20230523091400.717834-1-asekhar@marvell.com>
In-Reply-To: <20230411075528.1125799-1-asekhar@marvell.com>
References: <20230411075528.1125799-1-asekhar@marvell.com>
List-Id: DPDK patches and discussions

Use the lower bits of pool_config to pass flags specific to the cnxk
mempool PMD ops.

Signed-off-by: Ashwin Sekhar T K
---
 drivers/mempool/cnxk/cnxk_mempool.h     | 24 ++++++++++++++++++++++++
 drivers/mempool/cnxk/cnxk_mempool_ops.c | 17 ++++++++++-------
 drivers/net/cnxk/cnxk_ethdev_sec.c      | 25 ++++++-------------------
 3 files changed, 40 insertions(+), 26 deletions(-)

diff --git a/drivers/mempool/cnxk/cnxk_mempool.h b/drivers/mempool/cnxk/cnxk_mempool.h
index 3405aa7663..fc2e4b5b70 100644
--- a/drivers/mempool/cnxk/cnxk_mempool.h
+++ b/drivers/mempool/cnxk/cnxk_mempool.h
@@ -7,6 +7,30 @@

 #include

+enum cnxk_mempool_flags {
+	/* This flag is used to ensure that only aura zero is allocated.
+	 * If aura zero is not available, then mempool creation fails.
+	 */
+	CNXK_MEMPOOL_F_ZERO_AURA = RTE_BIT64(0),
+	/* Here the pool create will use the npa_aura_s structure passed
+	 * as pool config to create the pool.
+	 */
+	CNXK_MEMPOOL_F_CUSTOM_AURA = RTE_BIT64(1),
+};
+
+#define CNXK_MEMPOOL_F_MASK 0xFUL
+
+#define CNXK_MEMPOOL_FLAGS(_m)                                                 \
+	(PLT_U64_CAST((_m)->pool_config) & CNXK_MEMPOOL_F_MASK)
+#define CNXK_MEMPOOL_CONFIG(_m)                                                \
+	(PLT_PTR_CAST(PLT_U64_CAST((_m)->pool_config) & ~CNXK_MEMPOOL_F_MASK))
+#define CNXK_MEMPOOL_SET_FLAGS(_m, _f)                                         \
+	do {                                                                   \
+		void *_c = CNXK_MEMPOOL_CONFIG(_m);                            \
+		uint64_t _flags = CNXK_MEMPOOL_FLAGS(_m) | (_f);               \
+		(_m)->pool_config = PLT_PTR_CAST(PLT_U64_CAST(_c) | _flags);   \
+	} while (0)
+
 unsigned int cnxk_mempool_get_count(const struct rte_mempool *mp);
 ssize_t cnxk_mempool_calc_mem_size(const struct rte_mempool *mp,
				    uint32_t obj_num, uint32_t pg_shift,
diff --git a/drivers/mempool/cnxk/cnxk_mempool_ops.c b/drivers/mempool/cnxk/cnxk_mempool_ops.c
index 3769afd3d1..1b6c4591bb 100644
--- a/drivers/mempool/cnxk/cnxk_mempool_ops.c
+++ b/drivers/mempool/cnxk/cnxk_mempool_ops.c
@@ -72,7 +72,7 @@ cnxk_mempool_calc_mem_size(const struct rte_mempool *mp, uint32_t obj_num,
 int
 cnxk_mempool_alloc(struct rte_mempool *mp)
 {
-	uint32_t block_count, flags = 0;
+	uint32_t block_count, flags, roc_flags = 0;
 	uint64_t aura_handle = 0;
 	struct npa_aura_s aura;
 	struct npa_pool_s pool;
@@ -96,15 +96,18 @@ cnxk_mempool_alloc(struct rte_mempool *mp)
 	pool.nat_align = 1;
 	pool.buf_offset = mp->header_size / ROC_ALIGN;

-	/* Use driver specific mp->pool_config to override aura config */
-	if (mp->pool_config != NULL)
-		memcpy(&aura, mp->pool_config, sizeof(struct npa_aura_s));
+	flags = CNXK_MEMPOOL_FLAGS(mp);
+	if (flags & CNXK_MEMPOOL_F_ZERO_AURA) {
+		roc_flags = ROC_NPA_ZERO_AURA_F;
+	} else if (flags & CNXK_MEMPOOL_F_CUSTOM_AURA) {
+		struct npa_aura_s *paura;

-	if (aura.ena && aura.pool_addr == 0)
-		flags = ROC_NPA_ZERO_AURA_F;
+		paura = CNXK_MEMPOOL_CONFIG(mp);
+		memcpy(&aura, paura, sizeof(struct npa_aura_s));
+	}

 	rc = roc_npa_pool_create(&aura_handle, block_size, block_count, &aura,
-				 &pool, flags);
+				 &pool, roc_flags);
 	if (rc) {
 		plt_err("Failed to alloc pool or aura rc=%d", rc);
 		goto error;
diff --git a/drivers/net/cnxk/cnxk_ethdev_sec.c b/drivers/net/cnxk/cnxk_ethdev_sec.c
index aa8a378a00..cd64daacc0 100644
--- a/drivers/net/cnxk/cnxk_ethdev_sec.c
+++ b/drivers/net/cnxk/cnxk_ethdev_sec.c
@@ -3,6 +3,7 @@
  */

 #include
+#include

 #define CNXK_NIX_INL_META_POOL_NAME "NIX_INL_META_POOL"
@@ -43,7 +44,6 @@ cnxk_nix_inl_meta_pool_cb(uint64_t *aura_handle, uintptr_t *mpool, uint32_t buf_
 {
 	const char *mp_name = NULL;
 	struct rte_pktmbuf_pool_private mbp_priv;
-	struct npa_aura_s *aura;
 	struct rte_mempool *mp;
 	uint16_t first_skip;
 	int rc;
@@ -65,7 +65,6 @@ cnxk_nix_inl_meta_pool_cb(uint64_t *aura_handle, uintptr_t *mpool, uint32_t buf_
 		return -EINVAL;
 	}

-	plt_free(mp->pool_config);
 	rte_mempool_free(mp);

 	*aura_handle = 0;
@@ -84,22 +83,12 @@ cnxk_nix_inl_meta_pool_cb(uint64_t *aura_handle, uintptr_t *mpool, uint32_t buf_
 		return -EIO;
 	}

-	/* Indicate to allocate zero aura */
-	aura = plt_zmalloc(sizeof(struct npa_aura_s), 0);
-	if (!aura) {
-		rc = -ENOMEM;
-		goto free_mp;
-	}
-	aura->ena = 1;
-	if (!mempool_name)
-		aura->pool_addr = 0;
-	else
-		aura->pool_addr = 1; /* Any non zero value, so that alloc from next free Index */
-
-	rc = rte_mempool_set_ops_byname(mp, rte_mbuf_platform_mempool_ops(), aura);
+	rc = rte_mempool_set_ops_byname(mp, rte_mbuf_platform_mempool_ops(),
+					mempool_name ?
+					NULL : PLT_PTR_CAST(CNXK_MEMPOOL_F_ZERO_AURA));
 	if (rc) {
 		plt_err("Failed to setup mempool ops for meta, rc=%d", rc);
-		goto free_aura;
+		goto free_mp;
 	}

 	/* Init mempool private area */
@@ -113,15 +102,13 @@ cnxk_nix_inl_meta_pool_cb(uint64_t *aura_handle, uintptr_t *mpool, uint32_t buf_
 	rc = rte_mempool_populate_default(mp);
 	if (rc < 0) {
 		plt_err("Failed to create inline meta pool, rc=%d", rc);
-		goto free_aura;
+		goto free_mp;
 	}

 	rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);

 	*aura_handle = mp->pool_id;
 	*mpool = (uintptr_t)mp;
 	return 0;
-free_aura:
-	plt_free(aura);
 free_mp:
 	rte_mempool_free(mp);
 	return rc;
From patchwork Tue May 23 09:13:57 2023
X-Patchwork-Submitter: Ashwin Sekhar T K
X-Patchwork-Id: 127198
From: Ashwin Sekhar T K
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Subject: [PATCH v2 2/5] common/cnxk: add NPA aura create/destroy ROC APIs
Date: Tue, 23 May 2023 14:43:57 +0530
Message-ID: <20230523091400.717834-2-asekhar@marvell.com>
In-Reply-To: <20230523091400.717834-1-asekhar@marvell.com>
References: <20230411075528.1125799-1-asekhar@marvell.com>
 <20230523091400.717834-1-asekhar@marvell.com>

Add ROC APIs that allow creating NPA auras independently and attaching
them to an existing NPA pool. Also add an API to destroy NPA auras
independently.

Signed-off-by: Ashwin Sekhar T K
---
 drivers/common/cnxk/roc_npa.c   | 219 ++++++++++++++++++++++++++++++++
 drivers/common/cnxk/roc_npa.h   |   4 +
 drivers/common/cnxk/version.map |   2 +
 3 files changed, 225 insertions(+)

diff --git a/drivers/common/cnxk/roc_npa.c b/drivers/common/cnxk/roc_npa.c
index 20637fbf65..e3c925ddd1 100644
--- a/drivers/common/cnxk/roc_npa.c
+++ b/drivers/common/cnxk/roc_npa.c
@@ -85,6 +85,36 @@ npa_aura_pool_init(struct mbox *m_box, uint32_t aura_id, struct npa_aura_s *aura
 	return rc;
 }

+static int
+npa_aura_init(struct mbox *m_box, uint32_t aura_id, struct npa_aura_s *aura)
+{
+	struct npa_aq_enq_req *aura_init_req;
+	struct npa_aq_enq_rsp *aura_init_rsp;
+	struct mbox *mbox;
+	int rc = -ENOSPC;
+
+	mbox = mbox_get(m_box);
+	aura_init_req = mbox_alloc_msg_npa_aq_enq(mbox);
+	if (aura_init_req == NULL)
+		goto exit;
+	aura_init_req->aura_id = aura_id;
+	aura_init_req->ctype = NPA_AQ_CTYPE_AURA;
+	aura_init_req->op = NPA_AQ_INSTOP_INIT;
+	mbox_memcpy(&aura_init_req->aura, aura, sizeof(*aura));
+
+	rc = mbox_process_msg(mbox, (void **)&aura_init_rsp);
+	if (rc < 0)
+		goto exit;
+
+	if (aura_init_rsp->hdr.rc == 0)
+		rc = 0;
+	else
+		rc = NPA_ERR_AURA_POOL_INIT;
+exit:
+	mbox_put(mbox);
+	return rc;
+}
+
 static int
 npa_aura_pool_fini(struct mbox *m_box, uint32_t aura_id, uint64_t aura_handle)
 {
@@ -156,6 +186,54 @@ npa_aura_pool_fini(struct mbox *m_box, uint32_t aura_id, uint64_t aura_handle)
 	return rc;
 }

+static int
+npa_aura_fini(struct mbox *m_box, uint32_t aura_id)
+{
+	struct npa_aq_enq_req *aura_req;
+	struct npa_aq_enq_rsp *aura_rsp;
+	struct ndc_sync_op *ndc_req;
+	struct mbox *mbox;
+	int rc = -ENOSPC;
+
+	/* Procedure for disabling an aura/pool */
+	plt_delay_us(10);
+
+	mbox = mbox_get(m_box);
+	aura_req = mbox_alloc_msg_npa_aq_enq(mbox);
+	if (aura_req == NULL)
+		goto exit;
+	aura_req->aura_id = aura_id;
+	aura_req->ctype = NPA_AQ_CTYPE_AURA;
+	aura_req->op = NPA_AQ_INSTOP_WRITE;
+	aura_req->aura.ena = 0;
+	aura_req->aura_mask.ena = ~aura_req->aura_mask.ena;
+
+	rc = mbox_process_msg(mbox, (void **)&aura_rsp);
+	if (rc < 0)
+		goto exit;
+
+	if (aura_rsp->hdr.rc != 0)
+		return NPA_ERR_AURA_POOL_FINI;
+
+	/* Sync NDC-NPA for LF */
+	ndc_req = mbox_alloc_msg_ndc_sync_op(mbox);
+	if (ndc_req == NULL) {
+		rc = -ENOSPC;
+		goto exit;
+	}
+	ndc_req->npa_lf_sync = 1;
+	rc = mbox_process(mbox);
+	if (rc) {
+		plt_err("Error on NDC-NPA LF sync, rc %d", rc);
+		rc = NPA_ERR_AURA_POOL_FINI;
+		goto exit;
+	}
+	rc = 0;
+exit:
+	mbox_put(mbox);
+	return rc;
+}
+
 int
 roc_npa_pool_op_pc_reset(uint64_t aura_handle)
 {
@@ -493,6 +571,108 @@ roc_npa_pool_create(uint64_t *aura_handle, uint32_t block_size,
 	return rc;
 }

+static int
+npa_aura_alloc(struct npa_lf *lf, const uint32_t block_count, int pool_id,
+	       struct npa_aura_s *aura, uint64_t *aura_handle, uint32_t flags)
+{
+	int rc, aura_id;
+
+	/* Sanity check */
+	if (!lf || !aura || !aura_handle)
+		return NPA_ERR_PARAM;
+
+	roc_npa_dev_lock();
+	/* Get aura_id from resource bitmap */
+	aura_id = find_free_aura(lf, flags);
+	if (aura_id < 0) {
+		roc_npa_dev_unlock();
+		return NPA_ERR_AURA_ID_ALLOC;
+	}
+
+	/* Mark aura as reserved */
+	plt_bitmap_clear(lf->npa_bmp, aura_id);
+
+	roc_npa_dev_unlock();
+	rc = (aura_id < 0 || pool_id >= (int)lf->nr_pools ||
+	      aura_id >= (int)BIT_ULL(6 + lf->aura_sz)) ?
+			   NPA_ERR_AURA_ID_ALLOC :
+			   0;
+	if (rc)
+		goto exit;
+
+	/* Update aura fields */
+	aura->pool_addr = pool_id; /* AF will translate to associated poolctx */
+	aura->ena = 1;
+	aura->shift = plt_log2_u32(block_count);
+	aura->shift = aura->shift < 8 ? 0 : aura->shift - 8;
+	aura->limit = block_count;
+	aura->pool_caching = 1;
+	aura->err_int_ena = BIT(NPA_AURA_ERR_INT_AURA_ADD_OVER);
+	aura->err_int_ena |= BIT(NPA_AURA_ERR_INT_AURA_ADD_UNDER);
+	aura->err_int_ena |= BIT(NPA_AURA_ERR_INT_AURA_FREE_UNDER);
+	aura->err_int_ena |= BIT(NPA_AURA_ERR_INT_POOL_DIS);
+	aura->avg_con = 0;
+	/* Many to one reduction */
+	aura->err_qint_idx = aura_id % lf->qints;
+
+	/* Issue AURA_INIT op */
+	rc = npa_aura_init(lf->mbox, aura_id, aura);
+	if (rc)
+		return rc;
+
+	*aura_handle = roc_npa_aura_handle_gen(aura_id, lf->base);
+
+	return 0;
+
+exit:
+	return rc;
+}
+
+int
+roc_npa_aura_create(uint64_t *aura_handle, uint32_t block_count,
+		    struct npa_aura_s *aura, int pool_id, uint32_t flags)
+{
+	struct npa_aura_s defaura;
+	struct idev_cfg *idev;
+	struct npa_lf *lf;
+	int rc;
+
+	lf = idev_npa_obj_get();
+	if (lf == NULL) {
+		rc = NPA_ERR_DEVICE_NOT_BOUNDED;
+		goto error;
+	}
+
+	idev = idev_get_cfg();
+	if (idev == NULL) {
+		rc = NPA_ERR_ALLOC;
+		goto error;
+	}
+
+	if (flags & ROC_NPA_ZERO_AURA_F && !lf->zero_aura_rsvd) {
+		rc = NPA_ERR_ALLOC;
+		goto error;
+	}
+
+	if (aura == NULL) {
+		memset(&defaura, 0, sizeof(struct npa_aura_s));
+		aura = &defaura;
+	}
+
+	rc = npa_aura_alloc(lf, block_count, pool_id, aura, aura_handle, flags);
+	if (rc) {
+		plt_err("Failed to alloc aura rc=%d", rc);
+		goto error;
+	}
+
+	plt_npa_dbg("lf=%p aura_handle=0x%" PRIx64, lf, *aura_handle);
+
+	/* Just hold the reference of the object */
+	__atomic_fetch_add(&idev->npa_refcnt, 1, __ATOMIC_SEQ_CST);
+error:
+	return rc;
+}
+
 int
 roc_npa_aura_limit_modify(uint64_t aura_handle, uint16_t aura_limit)
 {
@@ -561,6 +741,45 @@ roc_npa_pool_destroy(uint64_t aura_handle)
 	return rc;
 }

+static int
+npa_aura_free(struct npa_lf *lf, uint64_t aura_handle)
+{
+	int aura_id, rc;
+
+	if (!lf || !aura_handle)
+		return NPA_ERR_PARAM;
+
+	aura_id = roc_npa_aura_handle_to_aura(aura_handle);
+	rc = npa_aura_fini(lf->mbox, aura_id);
+
+	if (rc)
+		return rc;
+
+	memset(&lf->aura_attr[aura_id], 0, sizeof(struct npa_aura_attr));
+
+	roc_npa_dev_lock();
+	plt_bitmap_set(lf->npa_bmp, aura_id);
+	roc_npa_dev_unlock();
+
+	return rc;
+}
+
+int
+roc_npa_aura_destroy(uint64_t aura_handle)
+{
+	struct npa_lf *lf = idev_npa_obj_get();
+	int rc = 0;
+
+	plt_npa_dbg("lf=%p aura_handle=0x%" PRIx64, lf, aura_handle);
+	rc = npa_aura_free(lf, aura_handle);
+	if (rc)
+		plt_err("Failed to destroy aura rc=%d", rc);
+
+	/* Release the reference of npa */
+	rc |= npa_lf_fini();
+	return rc;
+}
+
 int
 roc_npa_pool_range_update_check(uint64_t aura_handle)
 {
diff --git a/drivers/common/cnxk/roc_npa.h b/drivers/common/cnxk/roc_npa.h
index dd588b0322..df15dabe92 100644
--- a/drivers/common/cnxk/roc_npa.h
+++ b/drivers/common/cnxk/roc_npa.h
@@ -732,6 +732,10 @@ int __roc_api roc_npa_pool_range_update_check(uint64_t aura_handle);
 void __roc_api roc_npa_aura_op_range_set(uint64_t aura_handle,
					  uint64_t start_iova,
					  uint64_t end_iova);
+int __roc_api roc_npa_aura_create(uint64_t *aura_handle, uint32_t block_count,
+				  struct npa_aura_s *aura, int pool_id,
+				  uint32_t flags);
+int __roc_api roc_npa_aura_destroy(uint64_t aura_handle);
 uint64_t __roc_api roc_npa_zero_aura_handle(void);
 int __roc_api roc_npa_buf_type_update(uint64_t aura_handle, enum roc_npa_buf_type type, int cnt);
 uint64_t __roc_api roc_npa_buf_type_mask(uint64_t aura_handle);
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index b298a21b84..9414b55e9c 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -347,6 +347,8 @@ INTERNAL {
	roc_nix_vlan_mcam_entry_write;
	roc_nix_vlan_strip_vtag_ena_dis;
	roc_nix_vlan_tpid_set;
+	roc_npa_aura_create;
+	roc_npa_aura_destroy;
	roc_npa_buf_type_mask;
	roc_npa_buf_type_limit_get;
	roc_npa_buf_type_update;
From patchwork Tue May 23 09:27:39 2023
X-Patchwork-Submitter: Ashwin Sekhar T K
X-Patchwork-Id: 127202
From: Ashwin Sekhar T K
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
 Ashwin Sekhar T K, Pavan Nikhilesh
Subject: [PATCH v2 3/5] mempool/cnxk: add NPA aura range get/set APIs
Date: Tue, 23 May 2023 14:57:39 +0530
Message-ID: <20230523092739.718214-1-asekhar@marvell.com>
In-Reply-To: <20230523091400.717834-1-asekhar@marvell.com>
References: <20230523091400.717834-1-asekhar@marvell.com>

The current API to set the range on an aura modifies both the aura
range limits in software and the pool range limits in NPA hardware.
The newly added ROC APIs allow setting/getting the aura range limits
in software alone, without modifying hardware. The existing aura range
set functionality has been moved into a pool range set API.

Signed-off-by: Ashwin Sekhar T K
---
 drivers/common/cnxk/roc_nix_queue.c     |  2 +-
 drivers/common/cnxk/roc_npa.c           | 35 ++++++++++++++++++++++++-
 drivers/common/cnxk/roc_npa.h           |  6 +++++
 drivers/common/cnxk/roc_sso.c           |  2 +-
 drivers/common/cnxk/version.map         |  2 ++
 drivers/mempool/cnxk/cnxk_mempool_ops.c |  2 +-
 6 files changed, 45 insertions(+), 4 deletions(-)

diff --git a/drivers/common/cnxk/roc_nix_queue.c b/drivers/common/cnxk/roc_nix_queue.c
index 21bfe7d498..ac4d9856c1 100644
--- a/drivers/common/cnxk/roc_nix_queue.c
+++ b/drivers/common/cnxk/roc_nix_queue.c
@@ -1050,7 +1050,7 @@ sqb_pool_populate(struct roc_nix *roc_nix, struct roc_nix_sq *sq)
 		goto npa_fail;
 	}

-	roc_npa_aura_op_range_set(sq->aura_handle, (uint64_t)sq->sqe_mem, iova);
+	roc_npa_pool_op_range_set(sq->aura_handle, (uint64_t)sq->sqe_mem, iova);
 	roc_npa_aura_limit_modify(sq->aura_handle, nb_sqb_bufs);
 	sq->aura_sqb_bufs = nb_sqb_bufs;
diff --git a/drivers/common/cnxk/roc_npa.c b/drivers/common/cnxk/roc_npa.c
index e3c925ddd1..3b0f95a304 100644
--- a/drivers/common/cnxk/roc_npa.c
+++ b/drivers/common/cnxk/roc_npa.c
@@ -18,7 +18,7 @@ roc_npa_lf_init_cb_register(roc_npa_lf_init_cb_t cb)
 }

 void
-roc_npa_aura_op_range_set(uint64_t aura_handle, uint64_t start_iova,
+roc_npa_pool_op_range_set(uint64_t aura_handle, uint64_t start_iova,
			  uint64_t end_iova)
 {
 	const uint64_t start = roc_npa_aura_handle_to_base(aura_handle) +
@@ -32,6 +32,7 @@ roc_npa_aura_op_range_set(uint64_t aura_handle, uint64_t start_iova,
 	PLT_ASSERT(lf);
 	lim = lf->aura_lim;

+	/* Change the range bookkeeping in software as well as in hardware */
 	lim[reg].ptr_start = PLT_MIN(lim[reg].ptr_start, start_iova);
 	lim[reg].ptr_end = PLT_MAX(lim[reg].ptr_end, end_iova);

@@ -39,6 +40,38 @@ roc_npa_aura_op_range_set(uint64_t aura_handle, uint64_t start_iova,
 	roc_store_pair(lim[reg].ptr_end, reg, end);
 }

+void
+roc_npa_aura_op_range_set(uint64_t aura_handle, uint64_t start_iova,
+			  uint64_t end_iova)
+{
+	uint64_t reg = roc_npa_aura_handle_to_aura(aura_handle);
+	struct npa_lf *lf = idev_npa_obj_get();
+	struct npa_aura_lim *lim;
+
+	PLT_ASSERT(lf);
+	lim = lf->aura_lim;
+
+	/* Change only the bookkeeping in software */
+	lim[reg].ptr_start = PLT_MIN(lim[reg].ptr_start, start_iova);
+	lim[reg].ptr_end = PLT_MAX(lim[reg].ptr_end, end_iova);
+}
+
+void
+roc_npa_aura_op_range_get(uint64_t aura_handle, uint64_t *start_iova,
+			  uint64_t *end_iova)
+{
+	uint64_t aura_id = roc_npa_aura_handle_to_aura(aura_handle);
+	struct npa_aura_lim *lim;
+	struct npa_lf *lf;
+
+	lf = idev_npa_obj_get();
+	PLT_ASSERT(lf);
+
+	lim = lf->aura_lim;
+	*start_iova = lim[aura_id].ptr_start;
+	*end_iova = lim[aura_id].ptr_end;
+}
+
 static int
 npa_aura_pool_init(struct mbox *m_box, uint32_t aura_id, struct npa_aura_s *aura,
		    struct npa_pool_s *pool)
diff --git a/drivers/common/cnxk/roc_npa.h b/drivers/common/cnxk/roc_npa.h
index df15dabe92..21608a40d9 100644
--- a/drivers/common/cnxk/roc_npa.h
+++ b/drivers/common/cnxk/roc_npa.h
@@ -732,6 +732,12 @@ int __roc_api roc_npa_pool_range_update_check(uint64_t aura_handle);
 void __roc_api roc_npa_aura_op_range_set(uint64_t aura_handle,
					  uint64_t start_iova,
					  uint64_t end_iova);
+void __roc_api roc_npa_aura_op_range_get(uint64_t aura_handle,
+					 uint64_t *start_iova,
+					 uint64_t *end_iova);
+void __roc_api roc_npa_pool_op_range_set(uint64_t aura_handle,
+					 uint64_t start_iova,
+					 uint64_t end_iova);
 int __roc_api roc_npa_aura_create(uint64_t *aura_handle, uint32_t block_count,
				   struct npa_aura_s *aura, int pool_id,
				   uint32_t flags);
diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c
index 4a6a5080f7..c376bd837f 100644
--- a/drivers/common/cnxk/roc_sso.c
+++ b/drivers/common/cnxk/roc_sso.c
@@ -523,7 +523,7 @@ sso_hwgrp_init_xaq_aura(struct dev *dev, struct roc_sso_xaq_data *xaq,
 		roc_npa_aura_op_free(xaq->aura_handle, 0, iova);
 		iova += xaq_buf_size;
 	}
-	roc_npa_aura_op_range_set(xaq->aura_handle, (uint64_t)xaq->mem, iova);
+	roc_npa_pool_op_range_set(xaq->aura_handle, (uint64_t)xaq->mem, iova);

 	if (roc_npa_aura_op_available_wait(xaq->aura_handle, xaq->nb_xaq, 0) !=
	    xaq->nb_xaq) {
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 9414b55e9c..5281c71550 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -354,6 +354,7 @@ INTERNAL {
	roc_npa_buf_type_update;
	roc_npa_aura_drop_set;
	roc_npa_aura_limit_modify;
+	roc_npa_aura_op_range_get;
	roc_npa_aura_op_range_set;
	roc_npa_ctx_dump;
	roc_npa_dev_fini;
@@ -365,6 +366,7 @@ INTERNAL {
	roc_npa_pool_create;
	roc_npa_pool_destroy;
	roc_npa_pool_op_pc_reset;
+	roc_npa_pool_op_range_set;
	roc_npa_pool_range_update_check;
	roc_npa_zero_aura_handle;
	roc_npc_fini;
diff --git a/drivers/mempool/cnxk/cnxk_mempool_ops.c b/drivers/mempool/cnxk/cnxk_mempool_ops.c
index 1b6c4591bb..a1aeaee746 100644
--- a/drivers/mempool/cnxk/cnxk_mempool_ops.c
+++ b/drivers/mempool/cnxk/cnxk_mempool_ops.c
@@ -174,7 +174,7 @@ cnxk_mempool_populate(struct rte_mempool *mp, unsigned int max_objs,
	plt_npa_dbg("requested objects %" PRIu64 ", possible objects %" PRIu64 "",
		    (uint64_t)max_objs, (uint64_t)num_elts);

-	roc_npa_aura_op_range_set(mp->pool_id, iova,
+	roc_npa_pool_op_range_set(mp->pool_id, iova,
				   iova + num_elts * total_elt_sz);

	if (roc_npa_pool_range_update_check(mp->pool_id) < 0)