Patch Detail
GET /api/patches/114786/?format=api
Patch: [07/23] common/cnxk: reserve aura zero on cn10ka NPA
Project: DPDK (dev@dpdk.org)
Submitter: Nithin Dabilpuram <ndabilpuram@marvell.com>
Delegate: Jerin Jacob <jerinj@marvell.com>
Date: 2022-08-09 18:48:51
Message-ID: <20220809184908.24030-7-ndabilpuram@marvell.com>
State: changes-requested (archived)
Series: [01/23] common/cnxk: fix part value for cn10k (v1)
Checks: success
List archive: https://inbox.dpdk.org/dev/20220809184908.24030-7-ndabilpuram@marvell.com
Web: http://patchwork.dpdk.org/project/dpdk/patch/20220809184908.24030-7-ndabilpuram@marvell.com/
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
To: Nithin Dabilpuram <ndabilpuram@marvell.com>, Kiran Kumar K <kirankumark@marvell.com>, Sunil Kumar Kori <skori@marvell.com>, Satha Rao <skoteshwar@marvell.com>, Ray Kinsella <mdr@ashroe.eu>, Ashwin Sekhar T K <asekhar@marvell.com>, Pavan Nikhilesh <pbhagavatula@marvell.com>
CC: jerinj@marvell.com, dev@dpdk.org
Subject: [PATCH 07/23] common/cnxk: reserve aura zero on cn10ka NPA
Date: Wed, 10 Aug 2022 00:18:51 +0530
Message-ID: <20220809184908.24030-7-ndabilpuram@marvell.com>
In-Reply-To: <20220809184908.24030-1-ndabilpuram@marvell.com>

Reserve aura id 0 on cn10k and provide a mechanism to specifically
allocate and free it via the roc_npa_* APIs.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
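The core of the change is an allocation policy: the free-aura bitmap scan must skip the reserved aura 0 unless the caller explicitly requests it. The following is a hypothetical, self-contained model of that policy in plain C (using a single `uint64_t` as the bitmap instead of the driver's `plt_bitmap` API; `ZERO_AURA_F`, `bmp`, and `zero_aura_rsvd` are illustrative stand-ins, not the driver's identifiers):

```c
#include <assert.h>
#include <stdint.h>

#define ZERO_AURA_F (1u << 0)

/* Model: a 64-entry aura bitmap; a set bit means that aura id is free. */
static uint64_t bmp = ~0ULL;    /* all auras free initially */
static int zero_aura_rsvd = 1;  /* aura 0 reserved, as on cn10k */

/* Returns an allocated aura id, or -1 on failure. Mirrors the patch's
 * policy: with ZERO_AURA_F only aura 0 may be returned; without it,
 * aura 0 is hidden from the scan while it remains reserved. */
static int find_free_aura(uint32_t flags)
{
	uint64_t scan = bmp;
	int idx;

	if (flags & ZERO_AURA_F) {
		if (!(bmp & 1ULL))
			return -1;      /* zero aura already in use */
		bmp &= ~1ULL;
		return 0;
	}

	if (zero_aura_rsvd)
		scan &= ~1ULL;          /* hide bit 0 from the scan */

	if (scan == 0)
		return -1;              /* auras exhausted */

	idx = __builtin_ctzll(scan);    /* lowest free id */
	bmp &= ~(1ULL << idx);          /* mark as in use */
	return idx;
}
```

Note the real `find_free_aura()` in the patch instead temporarily clears and restores bit 0 around `plt_bitmap_scan()`, since that API has no mask argument; the observable behaviour is the same.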
 drivers/common/cnxk/roc_dpi.c           |   2 +-
 drivers/common/cnxk/roc_nix_queue.c     |   2 +-
 drivers/common/cnxk/roc_npa.c           | 100 ++++++++++++++++++++++++++------
 drivers/common/cnxk/roc_npa.h           |   6 +-
 drivers/common/cnxk/roc_npa_priv.h      |   1 +
 drivers/common/cnxk/roc_sso.c           |   2 +-
 drivers/common/cnxk/version.map         |   1 +
 drivers/mempool/cnxk/cnxk_mempool_ops.c |   7 ++-
 8 files changed, 97 insertions(+), 24 deletions(-)

diff --git a/drivers/common/cnxk/roc_dpi.c b/drivers/common/cnxk/roc_dpi.c
index 23b2cc4..93c8318 100644
--- a/drivers/common/cnxk/roc_dpi.c
+++ b/drivers/common/cnxk/roc_dpi.c
@@ -75,7 +75,7 @@ roc_dpi_configure(struct roc_dpi *roc_dpi)
 
 	memset(&aura, 0, sizeof(aura));
 	rc = roc_npa_pool_create(&aura_handle, DPI_CMD_QUEUE_SIZE,
-				 DPI_CMD_QUEUE_BUFS, &aura, &pool);
+				 DPI_CMD_QUEUE_BUFS, &aura, &pool, 0);
 	if (rc) {
 		plt_err("Failed to create NPA pool, err %d\n", rc);
 		return rc;
diff --git a/drivers/common/cnxk/roc_nix_queue.c b/drivers/common/cnxk/roc_nix_queue.c
index 692b134..70b4516 100644
--- a/drivers/common/cnxk/roc_nix_queue.c
+++ b/drivers/common/cnxk/roc_nix_queue.c
@@ -713,7 +713,7 @@ sqb_pool_populate(struct roc_nix *roc_nix, struct roc_nix_sq *sq)
 	aura.fc_addr = (uint64_t)sq->fc;
 	aura.fc_hyst_bits = 0; /* Store count on all updates */
 	rc = roc_npa_pool_create(&sq->aura_handle, blk_sz, nb_sqb_bufs, &aura,
-				 &pool);
+				 &pool, 0);
 	if (rc)
 		goto fail;
 
diff --git a/drivers/common/cnxk/roc_npa.c b/drivers/common/cnxk/roc_npa.c
index 1e60f44..760a231 100644
--- a/drivers/common/cnxk/roc_npa.c
+++ b/drivers/common/cnxk/roc_npa.c
@@ -261,15 +261,59 @@ bitmap_ctzll(uint64_t slab)
 }
 
 static int
+find_free_aura(struct npa_lf *lf, uint32_t flags)
+{
+	struct plt_bitmap *bmp = lf->npa_bmp;
+	uint64_t aura0_state = 0;
+	uint64_t slab;
+	uint32_t pos;
+	int idx = -1;
+	int rc;
+
+	if (flags & ROC_NPA_ZERO_AURA_F) {
+		/* Only look for zero aura */
+		if (plt_bitmap_get(bmp, 0))
+			return 0;
+		plt_err("Zero aura already in use");
+		return -1;
+	}
+
+	if (lf->zero_aura_rsvd) {
+		/* Save and clear zero aura bit if needed */
+		aura0_state = plt_bitmap_get(bmp, 0);
+		if (aura0_state)
+			plt_bitmap_clear(bmp, 0);
+	}
+
+	pos = 0;
+	slab = 0;
+	/* Scan from the beginning */
+	plt_bitmap_scan_init(bmp);
+	/* Scan bitmap to get the free pool */
+	rc = plt_bitmap_scan(bmp, &pos, &slab);
+	/* Empty bitmap */
+	if (rc == 0) {
+		plt_err("Aura's exhausted");
+		goto empty;
+	}
+
+	idx = pos + bitmap_ctzll(slab);
+empty:
+	if (lf->zero_aura_rsvd && aura0_state)
+		plt_bitmap_set(bmp, 0);
+
+	return idx;
+}
+
+static int
 npa_aura_pool_pair_alloc(struct npa_lf *lf, const uint32_t block_size,
 			 const uint32_t block_count, struct npa_aura_s *aura,
-			 struct npa_pool_s *pool, uint64_t *aura_handle)
+			 struct npa_pool_s *pool, uint64_t *aura_handle,
+			 uint32_t flags)
 {
 	int rc, aura_id, pool_id, stack_size, alloc_size;
 	char name[PLT_MEMZONE_NAMESIZE];
 	const struct plt_memzone *mz;
-	uint64_t slab;
-	uint32_t pos;
 
 	/* Sanity check */
 	if (!lf || !block_size || !block_count || !pool || !aura ||
@@ -281,20 +325,11 @@ npa_aura_pool_pair_alloc(struct npa_lf *lf, const uint32_t block_size,
 	    block_size > ROC_NPA_MAX_BLOCK_SZ)
 		return NPA_ERR_INVALID_BLOCK_SZ;
 
-	pos = 0;
-	slab = 0;
-	/* Scan from the beginning */
-	plt_bitmap_scan_init(lf->npa_bmp);
-	/* Scan bitmap to get the free pool */
-	rc = plt_bitmap_scan(lf->npa_bmp, &pos, &slab);
-	/* Empty bitmap */
-	if (rc == 0) {
-		plt_err("Mempools exhausted");
-		return NPA_ERR_AURA_ID_ALLOC;
-	}
-
 	/* Get aura_id from resource bitmap */
-	aura_id = pos + bitmap_ctzll(slab);
+	aura_id = find_free_aura(lf, flags);
+	if (aura_id < 0)
+		return NPA_ERR_AURA_ID_ALLOC;
+
 	/* Mark pool as reserved */
 	plt_bitmap_clear(lf->npa_bmp, aura_id);
 
@@ -374,7 +409,7 @@ npa_aura_pool_pair_alloc(struct npa_lf *lf, const uint32_t block_size,
 int
 roc_npa_pool_create(uint64_t *aura_handle, uint32_t block_size,
 		    uint32_t block_count, struct npa_aura_s *aura,
-		    struct npa_pool_s *pool)
+		    struct npa_pool_s *pool, uint32_t flags)
 {
 	struct npa_aura_s defaura;
 	struct npa_pool_s defpool;
@@ -394,6 +429,11 @@ roc_npa_pool_create(uint64_t *aura_handle, uint32_t block_size,
 		goto error;
 	}
 
+	if (flags & ROC_NPA_ZERO_AURA_F && !lf->zero_aura_rsvd) {
+		rc = NPA_ERR_ALLOC;
+		goto error;
+	}
+
 	if (aura == NULL) {
 		memset(&defaura, 0, sizeof(struct npa_aura_s));
 		aura = &defaura;
@@ -406,7 +446,7 @@ roc_npa_pool_create(uint64_t *aura_handle, uint32_t block_size,
 	}
 
 	rc = npa_aura_pool_pair_alloc(lf, block_size, block_count, aura, pool,
-				      aura_handle);
+				      aura_handle, flags);
 	if (rc) {
 		plt_err("Failed to alloc pool or aura rc=%d", rc);
 		goto error;
@@ -522,6 +562,26 @@ roc_npa_pool_range_update_check(uint64_t aura_handle)
 	return 0;
 }
 
+uint64_t
+roc_npa_zero_aura_handle(void)
+{
+	struct idev_cfg *idev;
+	struct npa_lf *lf;
+
+	lf = idev_npa_obj_get();
+	if (lf == NULL)
+		return NPA_ERR_DEVICE_NOT_BOUNDED;
+
+	idev = idev_get_cfg();
+	if (idev == NULL)
+		return NPA_ERR_ALLOC;
+
+	/* Return aura handle only if reserved */
+	if (lf->zero_aura_rsvd)
+		return roc_npa_aura_handle_gen(0, lf->base);
+	return 0;
+}
+
 static inline int
 npa_attach(struct mbox *mbox)
 {
@@ -672,6 +732,10 @@ npa_dev_init(struct npa_lf *lf, uintptr_t base, struct mbox *mbox)
 	for (i = 0; i < nr_pools; i++)
 		plt_bitmap_set(lf->npa_bmp, i);
 
+	/* Reserve zero aura for all models other than CN9K */
+	if (!roc_model_is_cn9k())
+		lf->zero_aura_rsvd = true;
+
 	/* Allocate memory for qint context */
 	lf->npa_qint_mem = plt_zmalloc(sizeof(struct npa_qint) * nr_pools, 0);
 	if (lf->npa_qint_mem == NULL) {
diff --git a/drivers/common/cnxk/roc_npa.h b/drivers/common/cnxk/roc_npa.h
index 59d13d8..69129cb 100644
--- a/drivers/common/cnxk/roc_npa.h
+++ b/drivers/common/cnxk/roc_npa.h
@@ -711,10 +711,13 @@ struct roc_npa {
 int __roc_api roc_npa_dev_init(struct roc_npa *roc_npa);
 int __roc_api roc_npa_dev_fini(struct roc_npa *roc_npa);
 
+/* Flags to pool create */
+#define ROC_NPA_ZERO_AURA_F BIT(0)
+
 /* NPA pool */
 int __roc_api roc_npa_pool_create(uint64_t *aura_handle, uint32_t block_size,
 				  uint32_t block_count, struct npa_aura_s *aura,
-				  struct npa_pool_s *pool);
+				  struct npa_pool_s *pool, uint32_t flags);
 int __roc_api roc_npa_aura_limit_modify(uint64_t aura_handle,
 					uint16_t aura_limit);
 int __roc_api roc_npa_pool_destroy(uint64_t aura_handle);
@@ -722,6 +725,7 @@ int __roc_api roc_npa_pool_range_update_check(uint64_t aura_handle);
 void __roc_api roc_npa_aura_op_range_set(uint64_t aura_handle,
 					 uint64_t start_iova,
 					 uint64_t end_iova);
+uint64_t __roc_api roc_npa_zero_aura_handle(void);
 
 /* Init callbacks */
 typedef int (*roc_npa_lf_init_cb_t)(struct plt_pci_device *pci_dev);
diff --git a/drivers/common/cnxk/roc_npa_priv.h b/drivers/common/cnxk/roc_npa_priv.h
index 5a02a61..de3d544 100644
--- a/drivers/common/cnxk/roc_npa_priv.h
+++ b/drivers/common/cnxk/roc_npa_priv.h
@@ -32,6 +32,7 @@ struct npa_lf {
 	uint8_t aura_sz;
 	uint32_t qints;
 	uintptr_t base;
+	bool zero_aura_rsvd;
 };
 
 struct npa_qint {
diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c
index 126a9cb..4bee5a9 100644
--- a/drivers/common/cnxk/roc_sso.c
+++ b/drivers/common/cnxk/roc_sso.c
@@ -473,7 +473,7 @@ sso_hwgrp_init_xaq_aura(struct dev *dev, struct roc_sso_xaq_data *xaq,
 	aura.fc_addr = (uint64_t)xaq->fc;
 	aura.fc_hyst_bits = 0; /* Store count on all updates */
 	rc = roc_npa_pool_create(&xaq->aura_handle, xaq_buf_size, xaq->nb_xaq,
-				 &aura, &pool);
+				 &aura, &pool, 0);
 	if (rc) {
 		plt_err("Failed to create XAQ pool");
 		goto npa_fail;
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 6d43e37..6c05e89 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -318,6 +318,7 @@ INTERNAL {
 	roc_npa_pool_destroy;
 	roc_npa_pool_op_pc_reset;
 	roc_npa_pool_range_update_check;
+	roc_npa_zero_aura_handle;
 	roc_npc_fini;
 	roc_npc_flow_create;
 	roc_npc_flow_destroy;
diff --git a/drivers/mempool/cnxk/cnxk_mempool_ops.c b/drivers/mempool/cnxk/cnxk_mempool_ops.c
index c7b75f0..a0b94bb 100644
--- a/drivers/mempool/cnxk/cnxk_mempool_ops.c
+++ b/drivers/mempool/cnxk/cnxk_mempool_ops.c
@@ -72,10 +72,10 @@ cnxk_mempool_calc_mem_size(const struct rte_mempool *mp, uint32_t obj_num,
 int
 cnxk_mempool_alloc(struct rte_mempool *mp)
 {
+	uint32_t block_count, flags = 0;
 	uint64_t aura_handle = 0;
 	struct npa_aura_s aura;
 	struct npa_pool_s pool;
-	uint32_t block_count;
 	size_t block_size;
 	int rc = -ERANGE;
 
@@ -100,8 +100,11 @@ cnxk_mempool_alloc(struct rte_mempool *mp)
 	if (mp->pool_config != NULL)
 		memcpy(&aura, mp->pool_config, sizeof(struct npa_aura_s));
 
+	if (aura.ena && aura.pool_addr == 0)
+		flags = ROC_NPA_ZERO_AURA_F;
+
 	rc = roc_npa_pool_create(&aura_handle, block_size, block_count, &aura,
-				 &pool);
+				 &pool, flags);
 	if (rc) {
 		plt_err("Failed to alloc pool or aura rc=%d", rc);
 		goto error;
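The mempool-side change in cnxk_mempool_ops.c decides when to request the zero aura: a user-supplied aura config that is enabled but has no backing pool address is treated as a zero-aura request. A minimal sketch of that decision, where `struct aura_cfg` is a hypothetical stand-in for just the two `struct npa_aura_s` fields the patch inspects:

```c
#include <assert.h>
#include <stdint.h>

#define ROC_NPA_ZERO_AURA_F (1u << 0)

/* Hypothetical stand-in for the fields cnxk_mempool_alloc() inspects. */
struct aura_cfg {
	int ena;            /* user supplied an enabled aura config */
	uint64_t pool_addr; /* 0 = no backing pool attached yet */
};

/* Mirrors the heuristic the patch adds to cnxk_mempool_alloc(): an
 * enabled aura config with no pool address asks for the zero aura. */
static uint32_t pick_pool_flags(const struct aura_cfg *aura)
{
	if (aura->ena && aura->pool_addr == 0)
		return ROC_NPA_ZERO_AURA_F;
	return 0;
}
```

Note that roc_npa_pool_create() rejects ROC_NPA_ZERO_AURA_F with NPA_ERR_ALLOC when `zero_aura_rsvd` is false, so on CN9K (where the reservation is not made) this request fails cleanly rather than silently handing out aura 0.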