From patchwork Fri Dec 2 10:13:03 2022
X-Patchwork-Submitter: Volodymyr Fialko
X-Patchwork-Id: 120435
X-Patchwork-Delegate: jerinj@marvell.com
From: Volodymyr Fialko
To: , Jerin Jacob
CC: , , Volodymyr Fialko
Subject: [PATCH] app/testeventdev: add crypto producer burst mode
Date: Fri, 2 Dec 2022 11:13:03 +0100
Message-ID: <20221202101303.1523261-1-vfialko@marvell.com>
X-Mailer: git-send-email 2.25.1
List-Id: DPDK patches and discussions

Add ability to set enqueue burst size for crypto producer. Existing
parameter `--prod_enq_burst_sz` can be used in combination with
`--prod_type_cryptodev` to enable burst enqueue for crypto producer.

Example: ./dpdk-test-eventdev -l 0-2 -a -a -- \
    --prod_type_cryptodev --crypto_adptr_mode 1 --test=perf_atq \
    --stlist=a --wlcores 1 --plcores 2 --prod_enq_burst_sz 32

Signed-off-by: Volodymyr Fialko
Acked-by: Shijith Thotton
---
 app/test-eventdev/test_perf_common.c | 235 ++++++++++++++++++++++++++-
 doc/guides/tools/testeventdev.rst    |   3 +-
 2 files changed, 235 insertions(+), 3 deletions(-)

diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
index 8d7e483c55..c54f0ba1df 100644
--- a/app/test-eventdev/test_perf_common.c
+++ b/app/test-eventdev/test_perf_common.c
@@ -554,6 +554,233 @@ perf_event_crypto_producer(void *arg)
         return 0;
 }
 
+static void
+crypto_adapter_enq_op_new_burst(struct prod_data *p)
+{
+        const struct test_perf *t = p->t;
+        const struct evt_options *opt = t->opt;
+
+        struct rte_mbuf *m, *pkts_burst[MAX_PROD_ENQ_BURST_SIZE];
+        struct rte_crypto_op *ops_burst[MAX_PROD_ENQ_BURST_SIZE];
+        const uint32_t burst_size = opt->prod_enq_burst_sz;
+        uint8_t *result[MAX_PROD_ENQ_BURST_SIZE];
+        const uint32_t nb_flows = t->nb_flows;
+        const uint64_t nb_pkts = t->nb_pkts;
+        uint16_t len, enq, nb_alloc, offset;
+        struct rte_mempool *pool = t->pool;
+        uint16_t qp_id = p->ca.cdev_qp_id;
+        uint8_t cdev_id = p->ca.cdev_id;
+        uint64_t alloc_failures = 0;
+        uint32_t flow_counter = 0;
+        uint64_t count = 0;
+        uint32_t i;
+
+        if (opt->verbose_level > 1)
+                printf("%s(): lcore %d queue %d cdev_id %u cdev_qp_id %u\n",
+                        __func__, rte_lcore_id(), p->queue_id, p->ca.cdev_id,
+                        p->ca.cdev_qp_id);
+
+        offset = sizeof(struct perf_elt);
+        len = RTE_MAX(RTE_ETHER_MIN_LEN + offset, opt->mbuf_sz);
+
+        while (count < nb_pkts && t->done == false) {
+                if (opt->crypto_op_type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
+                        struct rte_crypto_sym_op *sym_op;
+                        int ret;
+
+                        nb_alloc = rte_crypto_op_bulk_alloc(t->ca_op_pool,
+                                        RTE_CRYPTO_OP_TYPE_SYMMETRIC, ops_burst, burst_size);
+                        if (unlikely(nb_alloc != burst_size)) {
+                                alloc_failures++;
+                                continue;
+                        }
+
+                        ret = rte_pktmbuf_alloc_bulk(pool, pkts_burst, burst_size);
+                        if (unlikely(ret != 0)) {
+                                alloc_failures++;
+                                rte_mempool_put_bulk(t->ca_op_pool, (void **)ops_burst, burst_size);
+                                continue;
+                        }
+
+                        for (i = 0; i < burst_size; i++) {
+                                m = pkts_burst[i];
+                                rte_pktmbuf_append(m, len);
+                                sym_op = ops_burst[i]->sym;
+                                sym_op->m_src = m;
+                                sym_op->cipher.data.offset = offset;
+                                sym_op->cipher.data.length = len - offset;
+                                rte_crypto_op_attach_sym_session(ops_burst[i],
+                                                p->ca.crypto_sess[flow_counter++ % nb_flows]);
+                        }
+                } else {
+                        struct rte_crypto_asym_op *asym_op;
+
+                        nb_alloc = rte_crypto_op_bulk_alloc(t->ca_op_pool,
+                                        RTE_CRYPTO_OP_TYPE_ASYMMETRIC, ops_burst, burst_size);
+                        if (unlikely(nb_alloc != burst_size)) {
+                                alloc_failures++;
+                                continue;
+                        }
+
+                        if (rte_mempool_get_bulk(pool, (void **)result, burst_size)) {
+                                alloc_failures++;
+                                rte_mempool_put_bulk(t->ca_op_pool, (void **)ops_burst, burst_size);
+                                continue;
+                        }
+
+                        for (i = 0; i < burst_size; i++) {
+                                asym_op = ops_burst[i]->asym;
+                                asym_op->modex.base.data = modex_test_case.base.data;
+                                asym_op->modex.base.length = modex_test_case.base.len;
+                                asym_op->modex.result.data = result[i];
+                                asym_op->modex.result.length = modex_test_case.result_len;
+                                rte_crypto_op_attach_asym_session(ops_burst[i],
+                                                p->ca.crypto_sess[flow_counter++ % nb_flows]);
+                        }
+                }
+
+                enq = 0;
+                while (!t->done) {
+                        enq += rte_cryptodev_enqueue_burst(cdev_id, qp_id, ops_burst + enq,
+                                        burst_size - enq);
+                        if (enq == burst_size)
+                                break;
+                }
+
+                count += burst_size;
+        }
+
+        if (opt->verbose_level > 1 && alloc_failures)
+                printf("%s(): lcore %d allocation failures: %"PRIu64"\n",
+                        __func__, rte_lcore_id(), alloc_failures);
+}
+
+static void
+crypto_adapter_enq_op_fwd_burst(struct prod_data *p)
+{
+        const struct test_perf *t = p->t;
+        const struct evt_options *opt = t->opt;
+
+        struct rte_mbuf *m, *pkts_burst[MAX_PROD_ENQ_BURST_SIZE];
+        struct rte_crypto_op *ops_burst[MAX_PROD_ENQ_BURST_SIZE];
+        const uint32_t burst_size = opt->prod_enq_burst_sz;
+        struct rte_event ev[MAX_PROD_ENQ_BURST_SIZE];
+        uint8_t *result[MAX_PROD_ENQ_BURST_SIZE];
+        const uint32_t nb_flows = t->nb_flows;
+        const uint64_t nb_pkts = t->nb_pkts;
+        uint16_t len, enq, nb_alloc, offset;
+        struct rte_mempool *pool = t->pool;
+        const uint8_t dev_id = p->dev_id;
+        const uint8_t port = p->port_id;
+        uint64_t alloc_failures = 0;
+        uint32_t flow_counter = 0;
+        uint64_t count = 0;
+        uint32_t i;
+
+        if (opt->verbose_level > 1)
+                printf("%s(): lcore %d port %d queue %d cdev_id %u cdev_qp_id %u\n",
+                        __func__, rte_lcore_id(), port, p->queue_id,
+                        p->ca.cdev_id, p->ca.cdev_qp_id);
+
+        offset = sizeof(struct perf_elt);
+        len = RTE_MAX(RTE_ETHER_MIN_LEN + offset, opt->mbuf_sz);
+
+        for (i = 0; i < burst_size; i++) {
+                ev[i].event = 0;
+                ev[i].op = RTE_EVENT_OP_NEW;
+                ev[i].queue_id = p->queue_id;
+                ev[i].sched_type = RTE_SCHED_TYPE_ATOMIC;
+                ev[i].event_type = RTE_EVENT_TYPE_CPU;
+        }
+
+        while (count < nb_pkts && t->done == false) {
+                if (opt->crypto_op_type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
+                        struct rte_crypto_sym_op *sym_op;
+                        int ret;
+
+                        nb_alloc = rte_crypto_op_bulk_alloc(t->ca_op_pool,
+                                        RTE_CRYPTO_OP_TYPE_SYMMETRIC, ops_burst, burst_size);
+                        if (unlikely(nb_alloc != burst_size)) {
+                                alloc_failures++;
+                                continue;
+                        }
+
+                        ret = rte_pktmbuf_alloc_bulk(pool, pkts_burst, burst_size);
+                        if (unlikely(ret != 0)) {
+                                alloc_failures++;
+                                rte_mempool_put_bulk(t->ca_op_pool, (void **)ops_burst, burst_size);
+                                continue;
+                        }
+
+                        for (i = 0; i < burst_size; i++) {
+                                m = pkts_burst[i];
+                                rte_pktmbuf_append(m, len);
+                                sym_op = ops_burst[i]->sym;
+                                sym_op->m_src = m;
+                                sym_op->cipher.data.offset = offset;
+                                sym_op->cipher.data.length = len - offset;
+                                rte_crypto_op_attach_sym_session(ops_burst[i],
+                                                p->ca.crypto_sess[flow_counter++ % nb_flows]);
+                                ev[i].event_ptr = ops_burst[i];
+                        }
+                } else {
+                        struct rte_crypto_asym_op *asym_op;
+
+                        nb_alloc = rte_crypto_op_bulk_alloc(t->ca_op_pool,
+                                        RTE_CRYPTO_OP_TYPE_ASYMMETRIC, ops_burst, burst_size);
+                        if (unlikely(nb_alloc != burst_size)) {
+                                alloc_failures++;
+                                continue;
+                        }
+
+                        if (rte_mempool_get_bulk(pool, (void **)result, burst_size)) {
+                                alloc_failures++;
+                                rte_mempool_put_bulk(t->ca_op_pool, (void **)ops_burst, burst_size);
+                                continue;
+                        }
+
+                        for (i = 0; i < burst_size; i++) {
+                                asym_op = ops_burst[i]->asym;
+                                asym_op->modex.base.data = modex_test_case.base.data;
+                                asym_op->modex.base.length = modex_test_case.base.len;
+                                asym_op->modex.result.data = result[i];
+                                asym_op->modex.result.length = modex_test_case.result_len;
+                                rte_crypto_op_attach_asym_session(ops_burst[i],
+                                                p->ca.crypto_sess[flow_counter++ % nb_flows]);
+                                ev[i].event_ptr = ops_burst[i];
+                        }
+                }
+
+                enq = 0;
+                while (!t->done) {
+                        enq += rte_event_crypto_adapter_enqueue(dev_id, port, ev + enq,
+                                        burst_size - enq);
+                        if (enq == burst_size)
+                                break;
+                }
+
+                count += burst_size;
+        }
+
+        if (opt->verbose_level > 1 && alloc_failures)
+                printf("%s(): lcore %d allocation failures: %"PRIu64"\n",
+                        __func__, rte_lcore_id(), alloc_failures);
+}
+
+static inline int
+perf_event_crypto_producer_burst(void *arg)
+{
+        struct prod_data *p = arg;
+        struct evt_options *opt = p->t->opt;
+
+        if (opt->crypto_adptr_mode == RTE_EVENT_CRYPTO_ADAPTER_OP_NEW)
+                crypto_adapter_enq_op_new_burst(p);
+        else
+                crypto_adapter_enq_op_fwd_burst(p);
+
+        return 0;
+}
+
 static int
 perf_producer_wrapper(void *arg)
 {
@@ -580,8 +807,12 @@ perf_producer_wrapper(void *arg)
         else if (t->opt->prod_type == EVT_PROD_TYPE_EVENT_TIMER_ADPTR &&
                         t->opt->timdev_use_burst)
                 return perf_event_timer_producer_burst(arg);
-        else if (t->opt->prod_type == EVT_PROD_TYPE_EVENT_CRYPTO_ADPTR)
-                return perf_event_crypto_producer(arg);
+        else if (t->opt->prod_type == EVT_PROD_TYPE_EVENT_CRYPTO_ADPTR) {
+                if (t->opt->prod_enq_burst_sz > 1)
+                        return perf_event_crypto_producer_burst(arg);
+                else
+                        return perf_event_crypto_producer(arg);
+        }
 
         return 0;
 }
diff --git a/doc/guides/tools/testeventdev.rst b/doc/guides/tools/testeventdev.rst
index 6f065b9752..33cbe04d70 100644
--- a/doc/guides/tools/testeventdev.rst
+++ b/doc/guides/tools/testeventdev.rst
@@ -176,7 +176,8 @@ The following are the application command-line options:
 
         Set producer enqueue burst size. Can be used to configure the number
         of events the producer(s) will enqueue as a burst to the event device.
-        Only applicable for `perf_queue` test.
+        Only applicable for `perf_queue` and `perf_atq` test in combination with
+        CPU (default) or crypto device (``--prod_type_cryptodev``) producers.
 
 * ``--nb_eth_queues``
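
For reference, a usage sketch of the documented option (not taken from the patch itself): the command mirrors the example in the commit message above but runs the `perf_queue` test, which the updated documentation also covers. The event and crypto device arguments after `-a` and the lcore assignments are illustrative placeholders for the target platform.

    ./dpdk-test-eventdev -l 0-2 -a <eventdev> -a <cryptodev> -- \
        --prod_type_cryptodev --crypto_adptr_mode 1 --test=perf_queue \
        --stlist=a --wlcores 1 --plcores 2 --prod_enq_burst_sz 32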