From patchwork Mon May 29 09:25:44 2023
X-Patchwork-Submitter: Ashwin Sekhar T K
X-Patchwork-Id: 127661
X-Patchwork-Delegate: jerinj@marvell.com
From: Ashwin Sekhar T K
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
 Ashwin Sekhar T K, Pavan Nikhilesh
Subject: [PATCH v2 1/2] mempool/cnxk: fix indefinite wait in batch alloc
Date: Mon, 29 May 2023 14:55:44 +0530
Message-ID: <20230529092545.959180-1-asekhar@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230526134507.885354-1-asekhar@marvell.com>
References: <20230526134507.885354-1-asekhar@marvell.com>
List-Id: DPDK patches and discussions

Avoid waiting indefinitely when counting batch allocated pointers by
adding a wait timeout.

Fixes: 50d08d3934ec ("common/cnxk: fix batch alloc completion poll logic")
Signed-off-by: Ashwin Sekhar T K
---
 drivers/common/cnxk/roc_npa.h            | 15 +++++++++------
 drivers/mempool/cnxk/cn10k_mempool_ops.c |  3 ++-
 2 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/drivers/common/cnxk/roc_npa.h b/drivers/common/cnxk/roc_npa.h
index 21608a40d9..d3caa71586 100644
--- a/drivers/common/cnxk/roc_npa.h
+++ b/drivers/common/cnxk/roc_npa.h
@@ -241,19 +241,23 @@ roc_npa_aura_batch_alloc_issue(uint64_t aura_handle, uint64_t *buf,
 }
 
 static inline void
-roc_npa_batch_alloc_wait(uint64_t *cache_line)
+roc_npa_batch_alloc_wait(uint64_t *cache_line, unsigned int wait_us)
 {
+	const uint64_t ticks = (uint64_t)wait_us * plt_tsc_hz() / (uint64_t)1E6;
+	const uint64_t start = plt_tsc_cycles();
+
 	/* Batch alloc status code is updated in bits [5:6] of the first word
 	 * of the 128 byte cache line.
 	 */
 	while (((__atomic_load_n(cache_line, __ATOMIC_RELAXED) >> 5) & 0x3) ==
 	       ALLOC_CCODE_INVAL)
-		;
+		if (wait_us && (plt_tsc_cycles() - start) >= ticks)
+			break;
 }
 
 static inline unsigned int
 roc_npa_aura_batch_alloc_count(uint64_t *aligned_buf, unsigned int num,
-			       unsigned int do_wait)
+			       unsigned int wait_us)
 {
 	unsigned int count, i;
 
@@ -267,8 +271,7 @@ roc_npa_aura_batch_alloc_count(uint64_t *aligned_buf, unsigned int num,
 
 		status = (struct npa_batch_alloc_status_s *)&aligned_buf[i];
 
-		if (do_wait)
-			roc_npa_batch_alloc_wait(&aligned_buf[i]);
+		roc_npa_batch_alloc_wait(&aligned_buf[i], wait_us);
 
 		count += status->count;
 	}
@@ -293,7 +296,7 @@ roc_npa_aura_batch_alloc_extract(uint64_t *buf, uint64_t *aligned_buf,
 
 		status = (struct npa_batch_alloc_status_s *)&aligned_buf[i];
 
-		roc_npa_batch_alloc_wait(&aligned_buf[i]);
+		roc_npa_batch_alloc_wait(&aligned_buf[i], 0);
 
 		line_count = status->count;
 
diff --git a/drivers/mempool/cnxk/cn10k_mempool_ops.c b/drivers/mempool/cnxk/cn10k_mempool_ops.c
index ba826f0f01..ff0015d8de 100644
--- a/drivers/mempool/cnxk/cn10k_mempool_ops.c
+++ b/drivers/mempool/cnxk/cn10k_mempool_ops.c
@@ -9,6 +9,7 @@
 
 #define BATCH_ALLOC_SZ ROC_CN10K_NPA_BATCH_ALLOC_MAX_PTRS
 #define BATCH_OP_DATA_TABLE_MZ_NAME "batch_op_data_table_mz"
+#define BATCH_ALLOC_WAIT_US 5
 
 enum batch_op_status {
 	BATCH_ALLOC_OP_NOT_ISSUED = 0,
@@ -178,7 +179,7 @@ cn10k_mempool_get_count(const struct rte_mempool *mp)
 
 		if (mem->status == BATCH_ALLOC_OP_ISSUED)
 			count += roc_npa_aura_batch_alloc_count(
-				mem->objs, BATCH_ALLOC_SZ, 1);
+				mem->objs, BATCH_ALLOC_SZ, BATCH_ALLOC_WAIT_US);
 
 		if (mem->status == BATCH_ALLOC_OP_DONE)
 			count += mem->sz;
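
Below is a minimal caller sketch of the timed count added by this patch.
It is illustrative only and not part of the patch: the function name, the
buffer sizing and the 5 us budget are assumptions.

#include "roc_npa.h"	/* cnxk common code; provides the APIs used below */

/* Hypothetical caller: issue a batch alloc and count the allocated
 * pointers with a bounded wait instead of spinning forever.  The
 * aura_handle is assumed to come from the aura/pool setup done elsewhere.
 */
static inline int
example_batch_alloc_count(uint64_t aura_handle)
{
	/* The NPA writes whole 128-byte cache lines, so keep the scratch
	 * buffer ROC_ALIGN (cache line) aligned.
	 */
	uint64_t aligned_buf[ROC_CN10K_NPA_BATCH_ALLOC_MAX_PTRS]
		__attribute__((aligned(ROC_ALIGN)));
	unsigned int num = ROC_CN10K_NPA_BATCH_ALLOC_MAX_PTRS;

	if (roc_npa_aura_batch_alloc_issue(aura_handle, aligned_buf, num,
					   0 /* dis_wait */, 0 /* drop */))
		return -1;

	/* Wait at most 5 us per cache line; a wait of 0 keeps the old
	 * behaviour of waiting indefinitely.
	 */
	return (int)roc_npa_aura_batch_alloc_count(aligned_buf, num, 5);
}
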
From patchwork Mon May 29 09:25:45 2023
X-Patchwork-Submitter: Ashwin Sekhar T K
X-Patchwork-Id: 127662
X-Patchwork-Delegate: jerinj@marvell.com
From: Ashwin Sekhar T K
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Subject: [PATCH v2 2/2] common/cnxk: add new APIs for batch operations
Date: Mon, 29 May 2023 14:55:45 +0530
Message-ID: <20230529092545.959180-2-asekhar@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230529092545.959180-1-asekhar@marvell.com>
References: <20230526134507.885354-1-asekhar@marvell.com>
 <20230529092545.959180-1-asekhar@marvell.com>
List-Id: DPDK patches and discussions

Add new APIs for counting and extracting allocated objects from a
single cache line in the batch alloc memory.

Signed-off-by: Ashwin Sekhar T K
---
 drivers/common/cnxk/roc_npa.h | 78 ++++++++++++++++++++++++++++++-----
 1 file changed, 67 insertions(+), 11 deletions(-)

diff --git a/drivers/common/cnxk/roc_npa.h b/drivers/common/cnxk/roc_npa.h
index d3caa71586..0653531198 100644
--- a/drivers/common/cnxk/roc_npa.h
+++ b/drivers/common/cnxk/roc_npa.h
@@ -209,7 +209,6 @@ roc_npa_aura_batch_alloc_issue(uint64_t aura_handle, uint64_t *buf,
 			       unsigned int num, const int dis_wait,
 			       const int drop)
 {
-	unsigned int i;
 	int64_t *addr;
 	uint64_t res;
 	union {
@@ -220,10 +219,6 @@ roc_npa_aura_batch_alloc_issue(uint64_t aura_handle, uint64_t *buf,
 	if (num > ROC_CN10K_NPA_BATCH_ALLOC_MAX_PTRS)
 		return -1;
 
-	/* Zero first word of every cache line */
-	for (i = 0; i < num; i += (ROC_ALIGN / sizeof(uint64_t)))
-		buf[i] = 0;
-
 	addr = (int64_t *)(roc_npa_aura_handle_to_base(aura_handle) +
 			   NPA_LF_AURA_BATCH_ALLOC);
 	cmp.u = 0;
@@ -240,6 +235,9 @@ roc_npa_aura_batch_alloc_issue(uint64_t aura_handle, uint64_t *buf,
 	return 0;
 }
 
+/*
+ * Wait for a batch alloc operation on a cache line to complete.
+ */
 static inline void
 roc_npa_batch_alloc_wait(uint64_t *cache_line, unsigned int wait_us)
 {
@@ -255,6 +253,23 @@ roc_npa_batch_alloc_wait(uint64_t *cache_line, unsigned int wait_us)
 			break;
 }
 
+/*
+ * Count the number of pointers in a single batch alloc cache line.
+ */
+static inline unsigned int
+roc_npa_aura_batch_alloc_count_line(uint64_t *line, unsigned int wait_us)
+{
+	struct npa_batch_alloc_status_s *status;
+
+	status = (struct npa_batch_alloc_status_s *)line;
+	roc_npa_batch_alloc_wait(line, wait_us);
+
+	return status->count;
+}
+
+/*
+ * Count the number of pointers in a sequence of batch alloc cache lines.
+ */
 static inline unsigned int
 roc_npa_aura_batch_alloc_count(uint64_t *aligned_buf, unsigned int num,
 			       unsigned int wait_us)
@@ -279,6 +294,40 @@ roc_npa_aura_batch_alloc_count(uint64_t *aligned_buf, unsigned int num,
 	return count;
 }
 
+/*
+ * Extract allocated pointers from a single batch alloc cache line. This API
+ * only extracts the required number of pointers from the cache line and
+ * adjusts status->count so that a subsequent call can extract the remaining
+ * pointers in the cache line appropriately.
+ */
+static inline unsigned int
+roc_npa_aura_batch_alloc_extract_line(uint64_t *buf, uint64_t *line,
+				      unsigned int num, unsigned int *rem)
+{
+	struct npa_batch_alloc_status_s *status;
+	unsigned int avail;
+
+	status = (struct npa_batch_alloc_status_s *)line;
+	roc_npa_batch_alloc_wait(line, 0);
+	avail = status->count;
+	num = avail > num ? num : avail;
+	if (num)
+		memcpy(buf, &line[avail - num], num * sizeof(uint64_t));
+	avail -= num;
+	if (avail == 0) {
+		/* Clear the lowest 7 bits of the first pointer */
+		buf[0] &= ~0x7FUL;
+		status->ccode = 0;
+	}
+	status->count = avail;
+	*rem = avail;
+
+	return num;
+}
+
+/*
+ * Extract all allocated pointers from a sequence of batch alloc cache lines.
+ */
 static inline unsigned int
 roc_npa_aura_batch_alloc_extract(uint64_t *buf, uint64_t *aligned_buf,
 				 unsigned int num)
@@ -330,11 +379,15 @@ roc_npa_aura_op_bulk_free(uint64_t aura_handle, uint64_t const *buf,
 	}
 }
 
+/*
+ * Issue a batch alloc operation on a sequence of cache lines, wait for the
+ * batch alloc to complete and copy the pointers out into the user buffer.
+ */
 static inline unsigned int
 roc_npa_aura_op_batch_alloc(uint64_t aura_handle, uint64_t *buf,
-			    uint64_t *aligned_buf, unsigned int num,
-			    const int dis_wait, const int drop,
-			    const int partial)
+			    unsigned int num, uint64_t *aligned_buf,
+			    unsigned int aligned_buf_sz, const int dis_wait,
+			    const int drop, const int partial)
 {
 	unsigned int count, chunk, num_alloc;
 
@@ -344,9 +397,12 @@ roc_npa_aura_op_batch_alloc(uint64_t aura_handle, uint64_t *buf,
 
 	count = 0;
 	while (num) {
-		chunk = (num > ROC_CN10K_NPA_BATCH_ALLOC_MAX_PTRS) ?
-				ROC_CN10K_NPA_BATCH_ALLOC_MAX_PTRS :
-				num;
+		/* Make sure that the pointers allocated fit into the cache
+		 * lines reserved.
+		 */
+		chunk = aligned_buf_sz / sizeof(uint64_t);
+		chunk = PLT_MIN(num, chunk);
+		chunk = PLT_MIN((int)chunk, ROC_CN10K_NPA_BATCH_ALLOC_MAX_PTRS);
 
 		if (roc_npa_aura_batch_alloc_issue(aura_handle, aligned_buf,
 						   chunk, dis_wait, drop))
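
Below is a minimal sketch of how the new per-line helpers could be used
together. It is illustrative only and not part of the patch: the wrapper
name and the 5 us wait budget are assumptions.

#include "roc_npa.h"	/* cnxk common code; provides the per-line helpers */

/* Hypothetical wrapper: wait on one batch alloc cache line and pull out
 * at most "want" pointers from it.  Assumes a batch alloc was already
 * issued to "line" via roc_npa_aura_batch_alloc_issue().
 */
static inline unsigned int
example_extract_from_line(uint64_t *line, uint64_t *out, unsigned int want)
{
	unsigned int avail, rem;

	/* Wait up to 5 us for the line to complete, then read its count */
	avail = roc_npa_aura_batch_alloc_count_line(line, 5);
	if (avail == 0)
		return 0;

	/* Copy out at most "want" pointers; "rem" reports how many are
	 * still left in the line for a later call.
	 */
	return roc_npa_aura_batch_alloc_extract_line(out, line, want, &rem);
}

Because the extract helper adjusts status->count, a later call with the
remaining count can drain the rest of the same cache line.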