From patchwork Mon Oct 18 07:51:39 2021
X-Patchwork-Submitter: Anoob Joseph
X-Patchwork-Id: 101939
X-Patchwork-Delegate: gakhil@marvell.com
From: Anoob Joseph
To: Akhil Goyal, Jerin Jacob
CC: Anoob Joseph, Archana Muniganti, Tejasree Kondoj,
Date: Mon, 18 Oct 2021 13:21:39 +0530
Message-ID: <1634543500-128-1-git-send-email-anoobj@marvell.com>
Subject: [dpdk-dev] [PATCH 1/2] common/cnxk: align CPT queue depth to power of 2

Use CPT LF queue depth as power of 2 to aid in masked checks for pending
queue.
Signed-off-by: Anoob Joseph
Acked-by: Jerin Jacob Kollanukkaran
---
 drivers/common/cnxk/roc_cpt.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/common/cnxk/roc_cpt.c b/drivers/common/cnxk/roc_cpt.c
index 74ada6e..5674418 100644
--- a/drivers/common/cnxk/roc_cpt.c
+++ b/drivers/common/cnxk/roc_cpt.c
@@ -568,6 +568,9 @@ cpt_lf_init(struct roc_cpt_lf *lf)
 	if (lf->nb_desc == 0 || lf->nb_desc > CPT_LF_MAX_NB_DESC)
 		lf->nb_desc = CPT_LF_DEFAULT_NB_DESC;
 
+	/* Update nb_desc to next power of 2 to aid in pending queue checks */
+	lf->nb_desc = plt_align32pow2(lf->nb_desc);
+
 	/* Allocate memory for instruction queue for CPT LF. */
 	iq_mem = plt_zmalloc(cpt_lf_iq_mem_calc(lf->nb_desc), ROC_ALIGN);
 	if (iq_mem == NULL)
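For context on the change above: once nb_desc is rounded up to a power of
2, a queue index wraps with a single AND against nb_desc - 1 instead of a
compare-and-reset such as the driver's MOD_INC(). A minimal sketch, where
align32pow2() is a local stand-in for plt_align32pow2(), whose
implementation is not part of this patch:

#include <stdint.h>

/* Stand-in for plt_align32pow2(): round x up to the next power of 2. */
static uint32_t
align32pow2(uint32_t x)
{
	x--;
	x |= x >> 1;
	x |= x >> 2;
	x |= x >> 4;
	x |= x >> 8;
	x |= x >> 16;
	return x + 1;
}

int main(void)
{
	uint32_t nb_desc = align32pow2(1000);	/* rounds up to 1024 */
	uint32_t mask = nb_desc - 1;		/* 0x3ff */
	uint32_t idx = 1023;

	idx = (idx + 1) & mask;			/* wraps to 0, no branch */
	return (int)idx;
}

Patch 2/2 below relies on this property for all of its head/tail index
arithmetic.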
From patchwork Mon Oct 18 07:51:40 2021
X-Patchwork-Submitter: Anoob Joseph
X-Patchwork-Id: 101940
X-Patchwork-Delegate: gakhil@marvell.com
From: Anoob Joseph
To: Akhil Goyal, Jerin Jacob
CC: Anoob Joseph, Archana Muniganti, Tejasree Kondoj,
Date: Mon, 18 Oct 2021 13:21:40 +0530
Message-ID: <1634543500-128-2-git-send-email-anoobj@marvell.com>
In-Reply-To: <1634543500-128-1-git-send-email-anoobj@marvell.com>
References: <1634543500-128-1-git-send-email-anoobj@marvell.com>
Subject: [dpdk-dev] [PATCH 2/2] crypto/cnxk: rework pending queue

Rework pending queue to allow producer and consumer cores to be
different. The shared pending_count is replaced with independent head and
tail indices, paired with release/acquire fences, so the lcore that
enqueues to a queue pair no longer has to be the one that dequeues from
it.

Signed-off-by: Anoob Joseph
---
 doc/guides/cryptodevs/cnxk.rst            |  6 ---
 drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 36 +++++++++++-------
 drivers/crypto/cnxk/cn9k_cryptodev_ops.c  | 63 ++++++++++++++-----------------
 drivers/crypto/cnxk/cnxk_cryptodev_ops.c  | 20 +++++++---
 drivers/crypto/cnxk/cnxk_cryptodev_ops.h  | 37 +++++++++++++++---
 5 files changed, 97 insertions(+), 65 deletions(-)

diff --git a/doc/guides/cryptodevs/cnxk.rst b/doc/guides/cryptodevs/cnxk.rst
index 752316f..1fb0a88 100644
--- a/doc/guides/cryptodevs/cnxk.rst
+++ b/doc/guides/cryptodevs/cnxk.rst
@@ -244,9 +244,3 @@ CN10XX Features supported
 * UDP Encapsulation
 * AES-128/192/256-GCM
 * AES-128/192/256-CBC-SHA1-HMAC
-
-Limitations
------------
-
-Multiple lcores may not operate on the same crypto queue pair. The lcore that
-enqueues to a queue pair is the one that must dequeue from it.
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index c25c8e6..7f724de 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -196,11 +196,15 @@ cn10k_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
 	struct pending_queue *pend_q;
 	struct cpt_inst_s *inst;
 	uint16_t lmt_id;
+	uint64_t head;
 	int ret, i;
 
 	pend_q = &qp->pend_q;
 
-	nb_allowed = qp->lf.nb_desc - pend_q->pending_count;
+	const uint64_t pq_mask = pend_q->pq_mask;
+
+	head = pend_q->head;
+	nb_allowed = pending_queue_free_cnt(head, pend_q->tail, pq_mask);
 	nb_ops = RTE_MIN(nb_ops, nb_allowed);
 
 	if (unlikely(nb_ops == 0))
@@ -214,18 +218,18 @@ cn10k_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
 
 again:
 	for (i = 0; i < RTE_MIN(PKTS_PER_LOOP, nb_ops); i++) {
-		infl_req = &pend_q->req_queue[pend_q->enq_tail];
+		infl_req = &pend_q->req_queue[head];
 		infl_req->op_flags = 0;
 
 		ret = cn10k_cpt_fill_inst(qp, ops + i, &inst[2 * i], infl_req);
 		if (unlikely(ret != 1)) {
 			plt_dp_err("Could not process op: %p", ops + i);
 			if (i == 0)
-				goto update_pending;
+				goto pend_q_commit;
 			break;
 		}
 
-		MOD_INC(pend_q->enq_tail, qp->lf.nb_desc);
+		pending_queue_advance(&head, pq_mask);
 	}
 
 	if (i > PKTS_PER_STEORL) {
@@ -251,9 +255,10 @@ cn10k_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
 		goto again;
 	}
 
-update_pending:
-	pend_q->pending_count += count + i;
+pend_q_commit:
+	rte_atomic_thread_fence(__ATOMIC_RELEASE);
+
+	pend_q->head = head;
 	pend_q->time_out = rte_get_timer_cycles() +
 			   DEFAULT_COMMAND_TIMEOUT * rte_get_timer_hz();
@@ -512,18 +517,23 @@ cn10k_cpt_dequeue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
 	struct cnxk_cpt_qp *qp = qptr;
 	struct pending_queue *pend_q;
 	struct cpt_cn10k_res_s *res;
+	uint64_t infl_cnt, pq_tail;
 	struct rte_crypto_op *cop;
-	int i, nb_pending;
+	int i;
 
 	pend_q = &qp->pend_q;
 
-	nb_pending = pend_q->pending_count;
+	const uint64_t pq_mask = pend_q->pq_mask;
+
+	pq_tail = pend_q->tail;
+	infl_cnt = pending_queue_infl_cnt(pend_q->head, pq_tail, pq_mask);
+	nb_ops = RTE_MIN(nb_ops, infl_cnt);
 
-	if (nb_ops > nb_pending)
-		nb_ops = nb_pending;
+	/* Ensure infl_cnt isn't read before data lands */
+	rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
 
 	for (i = 0; i < nb_ops; i++) {
-		infl_req = &pend_q->req_queue[pend_q->deq_head];
+		infl_req = &pend_q->req_queue[pq_tail];
 
 		res = (struct cpt_cn10k_res_s *)&infl_req->res;
@@ -538,7 +548,7 @@ cn10k_cpt_dequeue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
 			break;
 		}
 
-		MOD_INC(pend_q->deq_head, qp->lf.nb_desc);
+		pending_queue_advance(&pq_tail, pq_mask);
 
 		cop = infl_req->cop;
@@ -550,7 +560,7 @@ cn10k_cpt_dequeue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
 			rte_mempool_put(qp->meta_info.pool, infl_req->mdata);
 	}
 
-	pend_q->pending_count -= i;
+	pend_q->tail = pq_tail;
 
 	return i;
 }
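The pattern above is a single-producer/single-consumer ring: the enqueue
side publishes pend_q->head only after a release fence, and the dequeue
side issues an acquire fence between reading the inflight count and
reading the request data. A minimal C11 sketch of the same pairing (the
spsc_ring type and function names are illustrative, not the driver's;
like the driver, it uses plain head/tail accesses plus fences, which
assumes aligned 64-bit loads and stores are single-copy atomic on the
target, as DPDK conventionally does):

#include <stdatomic.h>
#include <stdint.h>

struct slot {
	uint64_t payload;
};

struct spsc_ring {
	struct slot *req_queue;
	uint64_t head;    /* advanced only by the enqueue lcore */
	uint64_t tail;    /* advanced only by the dequeue lcore */
	uint64_t pq_mask; /* nb_desc - 1, nb_desc a power of 2 */
};

/* Producer: fill the slot, fence, then publish the new head, so the
 * slot contents are visible before the index that exposes them.
 */
static int
ring_enqueue(struct spsc_ring *r, uint64_t payload)
{
	uint64_t head = r->head;

	if ((r->pq_mask - ((head - r->tail) & r->pq_mask)) == 0)
		return 0; /* full: at most nb_desc - 1 in flight */

	r->req_queue[head].payload = payload;
	atomic_thread_fence(memory_order_release);
	r->head = (head + 1) & r->pq_mask;
	return 1;
}

/* Consumer: observe head via the inflight count, fence, then read the
 * slot, pairing with the producer's release fence.
 */
static int
ring_dequeue(struct spsc_ring *r, uint64_t *payload)
{
	uint64_t tail = r->tail;

	if (((r->head - tail) & r->pq_mask) == 0)
		return 0; /* empty */

	atomic_thread_fence(memory_order_acquire);
	*payload = r->req_queue[tail].payload;
	r->tail = (tail + 1) & r->pq_mask;
	return 1;
}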
diff --git a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
index 7527793..449208d 100644
--- a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
@@ -218,14 +218,14 @@ cn9k_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
 	uint16_t nb_allowed, count = 0;
 	struct cnxk_cpt_qp *qp = qptr;
 	struct pending_queue *pend_q;
-	uint64_t enq_tail;
+	uint64_t head;
 	int ret;
 
-	const uint32_t nb_desc = qp->lf.nb_desc;
+	pend_q = &qp->pend_q;
+
 	const uint64_t lmt_base = qp->lf.lmt_base;
 	const uint64_t io_addr = qp->lf.io_addr;
-
-	pend_q = &qp->pend_q;
+	const uint64_t pq_mask = pend_q->pq_mask;
 
 	/* Clear w0, w2, w3 of both inst */
@@ -236,14 +236,13 @@ cn9k_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
 	inst[1].w2.u64 = 0;
 	inst[1].w3.u64 = 0;
 
-	nb_allowed = qp->lf.nb_desc - pend_q->pending_count;
+	head = pend_q->head;
+	nb_allowed = pending_queue_free_cnt(head, pend_q->tail, pq_mask);
 	nb_ops = RTE_MIN(nb_ops, nb_allowed);
 
-	enq_tail = pend_q->enq_tail;
-
 	if (unlikely(nb_ops & 1)) {
 		op_1 = ops[0];
-		infl_req_1 = &pend_q->req_queue[enq_tail];
+		infl_req_1 = &pend_q->req_queue[head];
 		infl_req_1->op_flags = 0;
@@ -257,7 +256,7 @@ cn9k_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
 		inst[0].res_addr = (uint64_t)&infl_req_1->res;
 
 		cn9k_cpt_inst_submit(&inst[0], lmt_base, io_addr);
-		MOD_INC(enq_tail, nb_desc);
+		pending_queue_advance(&head, pq_mask);
 		count++;
 	}
@@ -265,10 +264,10 @@ cn9k_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
 		op_1 = ops[count];
 		op_2 = ops[count + 1];
 
-		infl_req_1 = &pend_q->req_queue[enq_tail];
-		MOD_INC(enq_tail, nb_desc);
-		infl_req_2 = &pend_q->req_queue[enq_tail];
-		MOD_INC(enq_tail, nb_desc);
+		infl_req_1 = &pend_q->req_queue[head];
+		pending_queue_advance(&head, pq_mask);
+		infl_req_2 = &pend_q->req_queue[head];
+		pending_queue_advance(&head, pq_mask);
 
 		infl_req_1->cop = op_1;
 		infl_req_2->cop = op_2;
@@ -284,23 +283,14 @@ cn9k_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
 		ret = cn9k_cpt_inst_prep(qp, op_1, infl_req_1, &inst[0]);
 		if (unlikely(ret)) {
 			plt_dp_err("Could not process op: %p", op_1);
-			if (enq_tail == 0)
-				enq_tail = nb_desc - 2;
-			else if (enq_tail == 1)
-				enq_tail = nb_desc - 1;
-			else
-				enq_tail--;
+			pending_queue_retreat(&head, pq_mask, 2);
 			break;
 		}
 
 		ret = cn9k_cpt_inst_prep(qp, op_2, infl_req_2, &inst[1]);
 		if (unlikely(ret)) {
 			plt_dp_err("Could not process op: %p", op_2);
-			if (enq_tail == 0)
-				enq_tail = nb_desc - 1;
-			else
-				enq_tail--;
-
+			pending_queue_retreat(&head, pq_mask, 1);
 			cn9k_cpt_inst_submit(&inst[0], lmt_base, io_addr);
 			count++;
 			break;
@@ -311,8 +301,9 @@ cn9k_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
 		count += 2;
 	}
 
-	pend_q->enq_tail = enq_tail;
-	pend_q->pending_count += count;
+	rte_atomic_thread_fence(__ATOMIC_RELEASE);
+
+	pend_q->head = head;
 
 	pend_q->time_out = rte_get_timer_cycles() +
 			   DEFAULT_COMMAND_TIMEOUT * rte_get_timer_hz();
@@ -522,20 +513,23 @@ cn9k_cpt_dequeue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
 	struct cnxk_cpt_qp *qp = qptr;
 	struct pending_queue *pend_q;
 	struct cpt_cn9k_res_s *res;
+	uint64_t infl_cnt, pq_tail;
 	struct rte_crypto_op *cop;
-	uint32_t pq_deq_head;
 	int i;
 
-	const uint32_t nb_desc = qp->lf.nb_desc;
-
 	pend_q = &qp->pend_q;
 
-	nb_ops = RTE_MIN(nb_ops, pend_q->pending_count);
+	const uint64_t pq_mask = pend_q->pq_mask;
+
+	pq_tail = pend_q->tail;
+	infl_cnt = pending_queue_infl_cnt(pend_q->head, pq_tail, pq_mask);
+	nb_ops = RTE_MIN(nb_ops, infl_cnt);
 
-	pq_deq_head = pend_q->deq_head;
+	/* Ensure infl_cnt isn't read before data lands */
+	rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
 
 	for (i = 0; i < nb_ops; i++) {
-		infl_req = &pend_q->req_queue[pq_deq_head];
+		infl_req = &pend_q->req_queue[pq_tail];
 
 		res = (struct cpt_cn9k_res_s *)&infl_req->res;
@@ -550,7 +544,7 @@ cn9k_cpt_dequeue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
 			break;
 		}
 
-		MOD_INC(pq_deq_head, nb_desc);
+		pending_queue_advance(&pq_tail, pq_mask);
 
 		cop = infl_req->cop;
@@ -562,8 +556,7 @@ cn9k_cpt_dequeue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
 			rte_mempool_put(qp->meta_info.pool, infl_req->mdata);
 	}
 
-	pend_q->pending_count -= i;
-	pend_q->deq_head = pq_deq_head;
+	pend_q->tail = pq_tail;
 
 	return i;
 }
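The cn9k error paths above replace open-coded index rollback with
pending_queue_retreat(). With a power-of-2 ring, unsigned underflow
followed by the mask gives the correct wrap in a single expression. A
small self-contained check, assuming an arbitrary nb_desc of 8:

#include <assert.h>
#include <stdint.h>

/* Copy of the patch's helper; mask is nb_desc - 1. */
static void
pending_queue_retreat(uint64_t *index, const uint64_t mask, uint64_t nb_entry)
{
	*index = (*index - nb_entry) & mask;
}

int main(void)
{
	const uint64_t mask = 8 - 1;	/* nb_desc = 8 */
	uint64_t idx = 1;

	pending_queue_retreat(&idx, mask, 2);	/* 1 - 2 wraps to 7 */
	assert(idx == 7);

	idx = 0;
	pending_queue_retreat(&idx, mask, 1);	/* 0 - 1 wraps to 7 */
	assert(idx == 7);
	return 0;
}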
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
index 41d8fe4..2705c87 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
@@ -171,9 +171,10 @@ cnxk_cpt_metabuf_mempool_create(const struct rte_cryptodev *dev,
 {
 	char mempool_name[RTE_MEMPOOL_NAMESIZE];
 	struct cpt_qp_meta_info *meta_info;
+	int lcore_cnt = rte_lcore_count();
 	struct rte_mempool *pool;
+	int mb_pool_sz, mlen = 8;
 	uint32_t cache_sz;
-	int mlen = 8;
 
 	if (dev->feature_flags & RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO) {
 		/* Get meta len */
@@ -186,14 +187,22 @@ cnxk_cpt_metabuf_mempool_create(const struct rte_cryptodev *dev,
 		mlen = RTE_MAX(mlen, cnxk_cpt_asym_get_mlen());
 	}
 
+	mb_pool_sz = nb_elements;
 	cache_sz = RTE_MIN(RTE_MEMPOOL_CACHE_MAX_SIZE, nb_elements / 1.5);
 
+	/* For poll mode, core that enqueues and core that dequeues can be
+	 * different. For event mode, all cores are allowed to use same crypto
+	 * queue pair.
+	 */
+
+	mb_pool_sz += (RTE_MAX(2, lcore_cnt) * cache_sz);
+
 	/* Allocate mempool */
 
 	snprintf(mempool_name, RTE_MEMPOOL_NAMESIZE, "cnxk_cpt_mb_%u:%u",
 		 dev->data->dev_id, qp_id);
 
-	pool = rte_mempool_create(mempool_name, nb_elements, mlen, cache_sz, 0,
+	pool = rte_mempool_create(mempool_name, mb_pool_sz, mlen, cache_sz, 0,
 				  NULL, NULL, NULL, NULL, rte_socket_id(), 0);
 
 	if (pool == NULL) {
@@ -266,9 +275,8 @@ cnxk_cpt_qp_create(const struct rte_cryptodev *dev, uint16_t qp_id,
 
 	/* Initialize pending queue */
 	qp->pend_q.req_queue = pq_mem->addr;
-	qp->pend_q.enq_tail = 0;
-	qp->pend_q.deq_head = 0;
-	qp->pend_q.pending_count = 0;
+	qp->pend_q.head = 0;
+	qp->pend_q.tail = 0;
 
 	return qp;
@@ -369,6 +377,8 @@ cnxk_cpt_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 		goto exit;
 	}
 
+	qp->pend_q.pq_mask = qp->lf.nb_desc - 1;
+
 	roc_cpt->lf[qp_id] = &qp->lf;
 
 	ret = roc_cpt_lmtline_init(roc_cpt, &qp->lmtline, qp_id);
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
index c5332de..0d36365 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
@@ -53,14 +53,14 @@ struct cpt_inflight_req {
 } __rte_aligned(16);
 
 struct pending_queue {
-	/** Pending requests count */
-	uint64_t pending_count;
 	/** Array of pending requests */
 	struct cpt_inflight_req *req_queue;
-	/** Tail of queue to be used for enqueue */
-	uint16_t enq_tail;
-	/** Head of queue to be used for dequeue */
-	uint16_t deq_head;
+	/** Head of the queue to be used for enqueue */
+	uint64_t head;
+	/** Tail of the queue to be used for dequeue */
+	uint64_t tail;
+	/** Pending queue mask */
+	uint64_t pq_mask;
 	/** Timeout to track h/w being unresponsive */
 	uint64_t time_out;
 };
@@ -151,4 +151,29 @@ cnxk_event_crypto_mdata_get(struct rte_crypto_op *op)
 	return ec_mdata;
 }
 
+static __rte_always_inline void
+pending_queue_advance(uint64_t *index, const uint64_t mask)
+{
+	*index = (*index + 1) & mask;
+}
+
+static __rte_always_inline void
+pending_queue_retreat(uint64_t *index, const uint64_t mask, uint64_t nb_entry)
+{
+	*index = (*index - nb_entry) & mask;
+}
+
+static __rte_always_inline uint64_t
+pending_queue_infl_cnt(uint64_t head, uint64_t tail, const uint64_t mask)
+{
+	return (head - tail) & mask;
+}
+
+static __rte_always_inline uint64_t
+pending_queue_free_cnt(uint64_t head, uint64_t tail, const uint64_t mask)
+{
+	/* mask is nb_desc - 1 */
+	return mask - pending_queue_infl_cnt(head, tail, mask);
+}
+
 #endif /* _CNXK_CRYPTODEV_OPS_H_ */
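Two notes on the header changes above. First, head and tail are now
written only by the enqueue and dequeue lcores respectively, so no shared
counter is updated from both sides. Second, pending_queue_free_cnt()
subtracts from mask rather than nb_desc: one slot is deliberately left
unused so a full ring stays distinguishable from an empty one, capping
in-flight requests at nb_desc - 1. A small standalone check (helper
bodies copied from cnxk_cryptodev_ops.h; the 16-entry ring is an
arbitrary example):

#include <assert.h>
#include <stdint.h>

static uint64_t
pending_queue_infl_cnt(uint64_t head, uint64_t tail, const uint64_t mask)
{
	return (head - tail) & mask;
}

static uint64_t
pending_queue_free_cnt(uint64_t head, uint64_t tail, const uint64_t mask)
{
	/* mask is nb_desc - 1 */
	return mask - pending_queue_infl_cnt(head, tail, mask);
}

int main(void)
{
	const uint64_t mask = 16 - 1;	/* nb_desc = 16 */
	uint64_t head = 0, tail = 0;

	/* Empty queue: 0 inflight, nb_desc - 1 free. */
	assert(pending_queue_infl_cnt(head, tail, mask) == 0);
	assert(pending_queue_free_cnt(head, tail, mask) == 15);

	/* Enqueue 5, dequeue 2: head and tail wrap independently. */
	head = (head + 5) & mask;
	tail = (tail + 2) & mask;
	assert(pending_queue_infl_cnt(head, tail, mask) == 3);
	assert(pending_queue_free_cnt(head, tail, mask) == 12);
	return 0;
}

Relatedly, cnxk_cpt_metabuf_mempool_create() grows the meta buffer pool
by RTE_MAX(2, lcore_cnt) * cache_sz: with distinct enqueue and dequeue
lcores (or, in event mode, any lcore touching the queue pair), each
lcore's mempool cache can strand up to cache_sz objects that are
unavailable to the others.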