From patchwork Thu Aug 4 09:59:05 2022
X-Patchwork-Submitter: Volodymyr Fialko
X-Patchwork-Id: 114603
X-Patchwork-Delegate: gakhil@marvell.com
From: Volodymyr Fialko
To: , Jerin Jacob , Abhinandan Gujjar , Pavan Nikhilesh , Shijith Thotton , Hemant Agrawal , Sachin Saxena , "Jay Jayatheerthan"
CC: , , Volodymyr Fialko
Subject: [PATCH 1/3] eventdev: introduce event cryptodev vector type
Date: Thu, 4 Aug 2022 11:59:05 +0200
Message-ID: <20220804095907.97895-2-vfialko@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220804095907.97895-1-vfialko@marvell.com>
References: <20220622013839.405771-1-vfialko@marvell.com> <20220804095907.97895-1-vfialko@marvell.com>

Introduce the ability to aggregate crypto operations processed by the event
crypto adapter into a single event containing an rte_event_vector whose
event type is
RTE_EVENT_TYPE_CRYPTODEV_VECTOR. To enable vectorization, the application should set RTE_EVENT_CRYPTO_ADAPTER_EVENT_VECTOR in rte_event_crypto_adapter_queue_conf::flags and provide a vector configuration within the limits given by rte_event_crypto_adapter_vector_limits, which can be obtained by calling rte_event_crypto_adapter_vector_limits_get. The event crypto adapter is responsible for vectorizing the crypto operations based on the response information provided in rte_event_crypto_metadata::response_info.

Update drivers and tests to use the new API.

Signed-off-by: Volodymyr Fialko
---
 app/test-eventdev/test_perf_common.c | 10 +-
 app/test/test_event_crypto_adapter.c | 12 ++-
 .../prog_guide/event_crypto_adapter.rst | 23 +++-
 drivers/event/cnxk/cn10k_eventdev.c | 4 +-
 drivers/event/cnxk/cn9k_eventdev.c | 5 +-
 drivers/event/dpaa/dpaa_eventdev.c | 9 +-
 drivers/event/dpaa2/dpaa2_eventdev.c | 9 +-
 drivers/event/octeontx/ssovf_evdev.c | 4 +-
 lib/eventdev/eventdev_pmd.h | 35 +++++-
 lib/eventdev/eventdev_trace.h | 6 +-
 lib/eventdev/rte_event_crypto_adapter.c | 90 ++++++++++++++--
 lib/eventdev/rte_event_crypto_adapter.h | 101 +++++++++++++++++-
 lib/eventdev/rte_event_eth_rx_adapter.h | 3 +-
 lib/eventdev/rte_eventdev.h | 8 ++
 14 files changed, 276 insertions(+), 43 deletions(-)
diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c index 81420be73a..c770bc93f6 100644 --- a/app/test-eventdev/test_perf_common.c +++ b/app/test-eventdev/test_perf_common.c @@ -837,14 +837,14 @@ perf_event_crypto_adapter_setup(struct test_perf *t, struct prod_data *p) } if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND) { - struct rte_event response_info; + struct rte_event_crypto_adapter_queue_conf conf; - response_info.event = 0; - response_info.sched_type = RTE_SCHED_TYPE_ATOMIC; - response_info.queue_id = p->queue_id; + memset(&conf, 0, sizeof(conf)); + conf.ev.sched_type = RTE_SCHED_TYPE_ATOMIC; + conf.ev.queue_id = p->queue_id; ret = rte_event_crypto_adapter_queue_pair_add( TEST_PERF_CA_ID, p->ca.cdev_id, p->ca.cdev_qp_id, - &response_info); + &conf); } else { ret = rte_event_crypto_adapter_queue_pair_add( TEST_PERF_CA_ID, p->ca.cdev_id, p->ca.cdev_qp_id, NULL); diff --git a/app/test/test_event_crypto_adapter.c b/app/test/test_event_crypto_adapter.c index 2ecc7e2cea..bb617c1042 100644 --- a/app/test/test_event_crypto_adapter.c +++ b/app/test/test_event_crypto_adapter.c @@ -1175,6 +1175,10 @@ test_crypto_adapter_create(void) static int test_crypto_adapter_qp_add_del(void) { + struct rte_event_crypto_adapter_queue_conf queue_conf = { + .ev = response_info, + }; + uint32_t cap; int ret; @@ -1183,7 +1187,7 @@ test_crypto_adapter_qp_add_del(void) if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND) { ret = rte_event_crypto_adapter_queue_pair_add(TEST_ADAPTER_ID, - TEST_CDEV_ID, TEST_CDEV_QP_ID, &response_info); + TEST_CDEV_ID, TEST_CDEV_QP_ID, &queue_conf); } else ret = rte_event_crypto_adapter_queue_pair_add(TEST_ADAPTER_ID, TEST_CDEV_ID, TEST_CDEV_QP_ID, NULL); @@ -1206,6 +1210,10 @@ configure_event_crypto_adapter(enum rte_event_crypto_adapter_mode mode) .new_event_threshold = 1200, }; + struct rte_event_crypto_adapter_queue_conf queue_conf = { + .ev = response_info, + }; + uint32_t cap; int ret; @@ -1238,7 +1246,7 @@ configure_event_crypto_adapter(enum rte_event_crypto_adapter_mode mode) if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND) { ret = rte_event_crypto_adapter_queue_pair_add(TEST_ADAPTER_ID, - TEST_CDEV_ID, TEST_CDEV_QP_ID, &response_info); +
TEST_CDEV_ID, TEST_CDEV_QP_ID, &queue_conf); } else ret = rte_event_crypto_adapter_queue_pair_add(TEST_ADAPTER_ID, TEST_CDEV_ID, TEST_CDEV_QP_ID, NULL); diff --git a/doc/guides/prog_guide/event_crypto_adapter.rst b/doc/guides/prog_guide/event_crypto_adapter.rst index 4fb5c688e0..554df7e358 100644 --- a/doc/guides/prog_guide/event_crypto_adapter.rst +++ b/doc/guides/prog_guide/event_crypto_adapter.rst @@ -201,10 +201,10 @@ capability, event information must be passed to the add API. ret = rte_event_crypto_adapter_caps_get(id, evdev, &cap); if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND) { - struct rte_event event; + struct rte_event_crypto_adapter_queue_conf conf; - // Fill in event information & pass it to add API - rte_event_crypto_adapter_queue_pair_add(id, cdev_id, qp_id, &event); + // Fill in conf.ev information & pass it to add API + rte_event_crypto_adapter_queue_pair_add(id, cdev_id, qp_id, &conf); } else rte_event_crypto_adapter_queue_pair_add(id, cdev_id, qp_id, NULL); @@ -291,6 +291,23 @@ the ``rte_crypto_op``. rte_memcpy(op + len, &m_data, sizeof(m_data)); }
+Enable event vectorization
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The event crypto adapter can aggregate the processed crypto operations, based
+on the response information provided in ``rte_event_crypto_metadata::response_info``,
+into a ``rte_event`` containing an ``rte_event_vector`` whose event type
+is ``RTE_EVENT_TYPE_CRYPTODEV_VECTOR``.
+To enable vectorization, the application should set
+RTE_EVENT_CRYPTO_ADAPTER_EVENT_VECTOR in
+``rte_event_crypto_adapter_queue_conf::flags`` and provide the vector
+configuration (size, mempool, etc.) within the limits given by
+``rte_event_crypto_adapter_vector_limits``, which can be obtained by calling
+``rte_event_crypto_adapter_vector_limits_get()``.
+
+The RTE_EVENT_CRYPTO_ADAPTER_CAP_EVENT_VECTOR capability indicates whether
+the PMD supports this feature.
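As an illustration of the flow described in the section above, a minimal sketch of the application-side setup is given here (not part of the patch; the adapter/device ids, the event queue id and the mempool size of 1024 are assumed values):

    struct rte_event_crypto_adapter_vector_limits limits;
    struct rte_event_crypto_adapter_queue_conf conf;
    struct rte_mempool *vector_mp = NULL;
    uint32_t cap;

    memset(&conf, 0, sizeof(conf));
    rte_event_crypto_adapter_caps_get(evdev_id, cdev_id, &cap);

    if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_EVENT_VECTOR) {
            rte_event_crypto_adapter_vector_limits_get(evdev_id, cdev_id,
                                                       &limits);
            /* Pool of rte_event_vector containers, each able to carry up to
             * limits.max_sz pointers to processed crypto operations.
             */
            vector_mp = rte_event_vector_pool_create("crypto_vec_pool", 1024,
                                                     0, limits.max_sz,
                                                     rte_socket_id());
            conf.flags |= RTE_EVENT_CRYPTO_ADAPTER_EVENT_VECTOR;
            conf.vector_sz = limits.max_sz;
            conf.vector_timeout_ns = limits.min_timeout_ns;
            conf.vector_mp = vector_mp;
    }

    /* Event information used when the PMD binds the queue pair internally. */
    conf.ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
    conf.ev.queue_id = ev_queue_id;

    rte_event_crypto_adapter_queue_pair_add(adapter_id, cdev_id, qp_id, &conf);

The adapter validates vector_sz, vector_timeout_ns and the mempool element size against the reported limits when the queue pair is added.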
+ Start the adapter instance ~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c index 5a0cab40a9..e74ec57382 100644 --- a/drivers/event/cnxk/cn10k_eventdev.c +++ b/drivers/event/cnxk/cn10k_eventdev.c @@ -889,11 +889,11 @@ static int cn10k_crypto_adapter_qp_add(const struct rte_eventdev *event_dev, const struct rte_cryptodev *cdev, int32_t queue_pair_id, - const struct rte_event *event) + const struct rte_event_crypto_adapter_queue_conf *conf) { struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev); - RTE_SET_USED(event); + RTE_SET_USED(conf); CNXK_VALID_DEV_OR_ERR_RET(event_dev->dev, "event_cn10k"); CNXK_VALID_DEV_OR_ERR_RET(cdev->device, "crypto_cn10k"); diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c index 2e27030049..45ed547cb0 100644 --- a/drivers/event/cnxk/cn9k_eventdev.c +++ b/drivers/event/cnxk/cn9k_eventdev.c @@ -1120,11 +1120,12 @@ cn9k_crypto_adapter_caps_get(const struct rte_eventdev *event_dev, static int cn9k_crypto_adapter_qp_add(const struct rte_eventdev *event_dev, const struct rte_cryptodev *cdev, - int32_t queue_pair_id, const struct rte_event *event) + int32_t queue_pair_id, + const struct rte_event_crypto_adapter_queue_conf *conf) { struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev); - RTE_SET_USED(event); + RTE_SET_USED(conf); CNXK_VALID_DEV_OR_ERR_RET(event_dev->dev, "event_cn9k"); CNXK_VALID_DEV_OR_ERR_RET(cdev->device, "crypto_cn9k"); diff --git a/drivers/event/dpaa/dpaa_eventdev.c b/drivers/event/dpaa/dpaa_eventdev.c index ff6cc0be18..2b9ecd9fbf 100644 --- a/drivers/event/dpaa/dpaa_eventdev.c +++ b/drivers/event/dpaa/dpaa_eventdev.c @@ -26,6 +26,7 @@ #include #include #include +#include #include #include #include @@ -775,10 +776,10 @@ static int dpaa_eventdev_crypto_queue_add(const struct rte_eventdev *dev, const struct rte_cryptodev *cryptodev, int32_t rx_queue_id, - const struct rte_event *ev) + const struct rte_event_crypto_adapter_queue_conf *conf) { struct dpaa_eventdev *priv = dev->data->dev_private; - uint8_t ev_qid = ev->queue_id; + uint8_t ev_qid = conf->ev.queue_id; u16 ch_id = priv->evq_info[ev_qid].ch_id; int ret; @@ -786,10 +787,10 @@ dpaa_eventdev_crypto_queue_add(const struct rte_eventdev *dev, if (rx_queue_id == -1) return dpaa_eventdev_crypto_queue_add_all(dev, - cryptodev, ev); + cryptodev, &conf->ev); ret = dpaa_sec_eventq_attach(cryptodev, rx_queue_id, - ch_id, ev); + ch_id, &conf->ev); if (ret) { DPAA_EVENTDEV_ERR( "dpaa_sec_eventq_attach failed: ret: %d\n", ret); diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c index ffc7b8b073..0137736794 100644 --- a/drivers/event/dpaa2/dpaa2_eventdev.c +++ b/drivers/event/dpaa2/dpaa2_eventdev.c @@ -26,6 +26,7 @@ #include #include #include +#include #include #include @@ -865,10 +866,10 @@ static int dpaa2_eventdev_crypto_queue_add(const struct rte_eventdev *dev, const struct rte_cryptodev *cryptodev, int32_t rx_queue_id, - const struct rte_event *ev) + const struct rte_event_crypto_adapter_queue_conf *conf) { struct dpaa2_eventdev *priv = dev->data->dev_private; - uint8_t ev_qid = ev->queue_id; + uint8_t ev_qid = conf->ev.queue_id; struct dpaa2_dpcon_dev *dpcon = priv->evq_info[ev_qid].dpcon; int ret; @@ -876,10 +877,10 @@ dpaa2_eventdev_crypto_queue_add(const struct rte_eventdev *dev, if (rx_queue_id == -1) return dpaa2_eventdev_crypto_queue_add_all(dev, - cryptodev, ev); + cryptodev, &conf->ev); ret = dpaa2_sec_eventq_attach(cryptodev, rx_queue_id, - dpcon, 
ev); + dpcon, &conf->ev); if (ret) { DPAA2_EVENTDEV_ERR( "dpaa2_sec_eventq_attach failed: ret: %d\n", ret); diff --git a/drivers/event/octeontx/ssovf_evdev.c b/drivers/event/octeontx/ssovf_evdev.c index 9e14e35d10..17acd8ef64 100644 --- a/drivers/event/octeontx/ssovf_evdev.c +++ b/drivers/event/octeontx/ssovf_evdev.c @@ -745,12 +745,12 @@ static int ssovf_crypto_adapter_qp_add(const struct rte_eventdev *dev, const struct rte_cryptodev *cdev, int32_t queue_pair_id, - const struct rte_event *event) + const struct rte_event_crypto_adapter_queue_conf *conf) { struct cpt_instance *qp; uint8_t qp_id; - RTE_SET_USED(event); + RTE_SET_USED(conf); if (queue_pair_id == -1) { for (qp_id = 0; qp_id < cdev->data->nb_queue_pairs; qp_id++) { diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h index 69402668d8..bcfc9cbcb2 100644 --- a/lib/eventdev/eventdev_pmd.h +++ b/lib/eventdev/eventdev_pmd.h @@ -907,6 +907,7 @@ rte_event_pmd_selftest_seqn(struct rte_mbuf *mbuf) } struct rte_cryptodev; +struct rte_event_crypto_adapter_queue_conf; /** * This API may change without prior notice @@ -961,11 +962,11 @@ typedef int (*eventdev_crypto_adapter_caps_get_t) * - <0: Error code returned by the driver function. * */ -typedef int (*eventdev_crypto_adapter_queue_pair_add_t) - (const struct rte_eventdev *dev, - const struct rte_cryptodev *cdev, - int32_t queue_pair_id, - const struct rte_event *event); +typedef int (*eventdev_crypto_adapter_queue_pair_add_t)( + const struct rte_eventdev *dev, + const struct rte_cryptodev *cdev, + int32_t queue_pair_id, + const struct rte_event_crypto_adapter_queue_conf *queue_conf); /** @@ -1074,6 +1075,27 @@ typedef int (*eventdev_crypto_adapter_stats_reset) (const struct rte_eventdev *dev, const struct rte_cryptodev *cdev); +struct rte_event_crypto_adapter_vector_limits; +/** + * Get event vector limits for a given event, crypto device pair. + * + * @param dev + * Event device pointer + * + * @param cdev + * Crypto device pointer + * + * @param[out] limits + * Pointer to the limits structure to be filled. + * + * @return + * - 0: Success. + * - <0: Error code returned by the driver function. + */ +typedef int (*eventdev_crypto_adapter_vector_limits_get_t)( + const struct rte_eventdev *dev, const struct rte_cryptodev *cdev, + struct rte_event_crypto_adapter_vector_limits *limits); + /** * Retrieve the event device's eth Tx adapter capabilities. 
* @@ -1339,6 +1361,9 @@ struct eventdev_ops { /**< Get crypto stats */ eventdev_crypto_adapter_stats_reset crypto_adapter_stats_reset; /**< Reset crypto stats */ + eventdev_crypto_adapter_vector_limits_get_t + crypto_adapter_vector_limits_get; + /**< Get event vector limits for the crypto adapter */ eventdev_eth_rx_adapter_q_stats_get eth_rx_adapter_queue_stats_get; /**< Get ethernet Rx queue stats */ diff --git a/lib/eventdev/eventdev_trace.h b/lib/eventdev/eventdev_trace.h index 5ec43d80ee..d48cd58850 100644 --- a/lib/eventdev/eventdev_trace.h +++ b/lib/eventdev/eventdev_trace.h @@ -18,6 +18,7 @@ extern "C" { #include #include "rte_eventdev.h" +#include "rte_event_crypto_adapter.h" #include "rte_event_eth_rx_adapter.h" #include "rte_event_timer_adapter.h" @@ -271,11 +272,12 @@ RTE_TRACE_POINT( RTE_TRACE_POINT( rte_eventdev_trace_crypto_adapter_queue_pair_add, RTE_TRACE_POINT_ARGS(uint8_t adptr_id, uint8_t cdev_id, - const void *event, int32_t queue_pair_id), + int32_t queue_pair_id, + const struct rte_event_crypto_adapter_queue_conf *conf), rte_trace_point_emit_u8(adptr_id); rte_trace_point_emit_u8(cdev_id); rte_trace_point_emit_i32(queue_pair_id); - rte_trace_point_emit_ptr(event); + rte_trace_point_emit_ptr(conf); ) RTE_TRACE_POINT( diff --git a/lib/eventdev/rte_event_crypto_adapter.c b/lib/eventdev/rte_event_crypto_adapter.c index 7c695176f4..73a4f231e2 100644 --- a/lib/eventdev/rte_event_crypto_adapter.c +++ b/lib/eventdev/rte_event_crypto_adapter.c @@ -921,11 +921,12 @@ int rte_event_crypto_adapter_queue_pair_add(uint8_t id, uint8_t cdev_id, int32_t queue_pair_id, - const struct rte_event *event) + const struct rte_event_crypto_adapter_queue_conf *conf) { + struct rte_event_crypto_adapter_vector_limits limits; struct event_crypto_adapter *adapter; - struct rte_eventdev *dev; struct crypto_device_info *dev_info; + struct rte_eventdev *dev; uint32_t cap; int ret; @@ -951,11 +952,47 @@ rte_event_crypto_adapter_queue_pair_add(uint8_t id, } if ((cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND) && - (event == NULL)) { + (conf == NULL)) { RTE_EDEV_LOG_ERR("Conf value can not be NULL for dev_id=%u", cdev_id); return -EINVAL; } + if ((conf != NULL) && + (conf->flags & RTE_EVENT_CRYPTO_ADAPTER_EVENT_VECTOR)) { + if ((cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_EVENT_VECTOR) == 0) { + RTE_EDEV_LOG_ERR("Event vectorization is not supported," + "dev %" PRIu8 " cdev %" PRIu8, id, + cdev_id); + return -ENOTSUP; + } + + ret = rte_event_crypto_adapter_vector_limits_get( + adapter->eventdev_id, cdev_id, &limits); + if (ret < 0) { + RTE_EDEV_LOG_ERR("Failed to get event device vector " + "limits, dev %" PRIu8 " cdev %" PRIu8, + id, cdev_id); + return -EINVAL; + } + if (conf->vector_sz < limits.min_sz || + conf->vector_sz > limits.max_sz || + conf->vector_timeout_ns < limits.min_timeout_ns || + conf->vector_timeout_ns > limits.max_timeout_ns || + conf->vector_mp == NULL) { + RTE_EDEV_LOG_ERR("Invalid event vector configuration," + " dev %" PRIu8 " cdev %" PRIu8, + id, cdev_id); + return -EINVAL; + } + if (conf->vector_mp->elt_size < + (sizeof(struct rte_event_vector) + + (sizeof(uintptr_t) * conf->vector_sz))) { + RTE_EDEV_LOG_ERR("Invalid event vector configuration," + " dev %" PRIu8 " cdev %" PRIu8, + id, cdev_id); + return -EINVAL; + } + } dev_info = &adapter->cdevs[cdev_id]; @@ -990,7 +1027,7 @@ rte_event_crypto_adapter_queue_pair_add(uint8_t id, ret = (*dev->dev_ops->crypto_adapter_queue_pair_add)(dev, dev_info->dev, queue_pair_id, - event); + conf); if (ret) return ret; @@ -1030,8 +1067,8 @@ 
rte_event_crypto_adapter_queue_pair_add(uint8_t id, rte_service_component_runstate_set(adapter->service_id, 1); } - rte_eventdev_trace_crypto_adapter_queue_pair_add(id, cdev_id, event, - queue_pair_id); + rte_eventdev_trace_crypto_adapter_queue_pair_add(id, cdev_id, + queue_pair_id, conf); return 0; } @@ -1290,3 +1327,44 @@ rte_event_crypto_adapter_event_port_get(uint8_t id, uint8_t *event_port_id) return 0; } + +int +rte_event_crypto_adapter_vector_limits_get( + uint8_t dev_id, uint16_t cdev_id, + struct rte_event_crypto_adapter_vector_limits *limits) +{ + struct rte_cryptodev *cdev; + struct rte_eventdev *dev; + uint32_t cap; + int ret; + + RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); + + if (!rte_cryptodev_is_valid_dev(cdev_id)) { + RTE_EDEV_LOG_ERR("Invalid dev_id=%" PRIu8, cdev_id); + return -EINVAL; + } + + if (limits == NULL) + return -EINVAL; + + dev = &rte_eventdevs[dev_id]; + cdev = rte_cryptodev_pmd_get_dev(cdev_id); + + ret = rte_event_crypto_adapter_caps_get(dev_id, cdev_id, &cap); + if (ret) { + RTE_EDEV_LOG_ERR("Failed to get adapter caps edev %" PRIu8 + "cdev %" PRIu16, dev_id, cdev_id); + return ret; + } + + if (!(cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_EVENT_VECTOR)) + return -ENOTSUP; + + RTE_FUNC_PTR_OR_ERR_RET( + *dev->dev_ops->crypto_adapter_vector_limits_get, + -ENOTSUP); + + return dev->dev_ops->crypto_adapter_vector_limits_get( + dev, cdev, limits); +} diff --git a/lib/eventdev/rte_event_crypto_adapter.h b/lib/eventdev/rte_event_crypto_adapter.h index d90a19e72c..7dd6171b9b 100644 --- a/lib/eventdev/rte_event_crypto_adapter.h +++ b/lib/eventdev/rte_event_crypto_adapter.h @@ -253,6 +253,78 @@ struct rte_event_crypto_adapter_conf { */ }; +#define RTE_EVENT_CRYPTO_ADAPTER_EVENT_VECTOR 0x1 +/**< This flag indicates that crypto operations processed on the crypto + * adapter need to be vectorized + * @see rte_event_crypto_adapter_queue_conf::flags + */ + +/** + * Adapter queue configuration structure + */ +struct rte_event_crypto_adapter_queue_conf { + uint32_t flags; + /**< Flags for handling crypto operations + * @see RTE_EVENT_CRYPTO_ADAPTER_EVENT_VECTOR + */ + struct rte_event ev; + /**< If HW supports cryptodev queue pair to event queue binding, + * application is expected to fill in event information. + * @see RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND + */ + uint16_t vector_sz; + /**< Indicates the maximum number for crypto operations to combine and + * form a vector. + * @see rte_event_crypto_adapter_vector_limits::min_sz + * @see rte_event_crypto_adapter_vector_limits::max_sz + * Valid when RTE_EVENT_CRYPTO_ADAPTER_EVENT_VECTOR flag is set in + * @see rte_event_crypto_adapter_queue_conf::rx_queue_flags + */ + uint64_t vector_timeout_ns; + /**< + * Indicates the maximum number of nanoseconds to wait for aggregating + * crypto operations. Should be within vectorization limits of the + * adapter + * @see rte_event_crypto_adapter_vector_limits::min_timeout_ns + * @see rte_event_crypto_adapter_vector_limits::max_timeout_ns + * Valid when RTE_EVENT_CRYPTO_ADAPTER_EVENT_VECTOR flag is set in + * @see rte_event_crypto_adapter_queue_conf::flags + */ + struct rte_mempool *vector_mp; + /**< Indicates the mempool that should be used for allocating + * rte_event_vector container. + * Should be created by using `rte_event_vector_pool_create`. + * Valid when RTE_EVENT_CRYPTO_ADAPTER_EVENT_VECTOR flag is set in + * @see rte_event_crypto_adapter_queue_conf::flags. + */ +}; + +/** + * A structure used to retrieve event crypto adapter vector limits. 
+ */ +struct rte_event_crypto_adapter_vector_limits { + uint16_t min_sz; + /**< Minimum vector limit configurable. + * @see rte_event_crypto_adapter_queue_conf::vector_sz + */ + uint16_t max_sz; + /**< Maximum vector limit configurable. + * @see rte_event_crypto_adapter_queue_conf::vector_sz + */ + uint8_t log2_sz; + /**< True if the size configured should be in log2. + * @see rte_event_crypto_adapter_queue_conf::vector_sz + */ + uint64_t min_timeout_ns; + /**< Minimum vector timeout configurable. + * @see rte_event_crypto_adapter_queue_conf::vector_timeout_ns + */ + uint64_t max_timeout_ns; + /**< Maximum vector timeout configurable. + * @see rte_event_crypto_adapter_queue_conf::vector_timeout_ns + */ +}; + /** * Function type used for adapter configuration callback. The callback is * used to fill in members of the struct rte_event_crypto_adapter_conf, this @@ -392,10 +464,9 @@ rte_event_crypto_adapter_free(uint8_t id); * Cryptodev queue pair identifier. If queue_pair_id is set -1, * adapter adds all the pre configured queue pairs to the instance. * - * @param event - * if HW supports cryptodev queue pair to event queue binding, application is - * expected to fill in event information, else it will be NULL. - * @see RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND + * @param conf + * Additional configuration structure of type + * *rte_event_crypto_adapter_queue_conf* * * @return * - 0: Success, queue pair added correctly. @@ -405,7 +476,7 @@ int rte_event_crypto_adapter_queue_pair_add(uint8_t id, uint8_t cdev_id, int32_t queue_pair_id, - const struct rte_event *event); + const struct rte_event_crypto_adapter_queue_conf *conf); /** * Delete a queue pair from an event crypto adapter. @@ -523,6 +594,26 @@ rte_event_crypto_adapter_service_id_get(uint8_t id, uint32_t *service_id); int rte_event_crypto_adapter_event_port_get(uint8_t id, uint8_t *event_port_id); +/** + * Retrieve vector limits for a given event dev and crypto dev pair. + * @see rte_event_crypto_adapter_vector_limits + * + * @param dev_id + * Event device identifier. + * @param cdev_id + * Crypto device identifier. + * @param [out] limits + * A pointer to rte_event_crypto_adapter_vector_limits structure that has to + * be filled. + * + * @return + * - 0: Success. + * - <0: Error code on failure. + */ +int rte_event_crypto_adapter_vector_limits_get( + uint8_t dev_id, uint16_t cdev_id, + struct rte_event_crypto_adapter_vector_limits *limits); + /** * Enqueue a burst of crypto operations as event objects supplied in *rte_event* * structure on an event crypto adapter designated by its event *dev_id* through diff --git a/lib/eventdev/rte_event_eth_rx_adapter.h b/lib/eventdev/rte_event_eth_rx_adapter.h index 3608a7b2cf..c8f2936866 100644 --- a/lib/eventdev/rte_event_eth_rx_adapter.h +++ b/lib/eventdev/rte_event_eth_rx_adapter.h @@ -457,7 +457,8 @@ int rte_event_eth_rx_adapter_free(uint8_t id); * @see RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ * * @param conf - * Additional configuration structure of type *rte_event_eth_rx_adapter_conf* + * Additional configuration structure of type + * *rte_event_eth_rx_adapter_queue_conf* * * @return * - 0: Success, Receive queue added correctly. 
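For the limits query declared above, a hedged usage sketch (not part of the patch; evdev_id, cdev_id and the preferred size of 64 are assumed values) shows how an application could clamp its preferred vector size; a non-zero log2_sz means the configured size must be a power of two:

    struct rte_event_crypto_adapter_vector_limits limits;
    uint16_t vec_sz = 64;

    if (rte_event_crypto_adapter_vector_limits_get(evdev_id, cdev_id,
                                                   &limits) == 0) {
            /* Keep the requested size within the range supported by the PMD. */
            if (vec_sz < limits.min_sz)
                    vec_sz = limits.min_sz;
            if (vec_sz > limits.max_sz)
                    vec_sz = limits.max_sz;
            /* Round up to a power of two when the PMD requires it. */
            if (limits.log2_sz)
                    vec_sz = (uint16_t)RTE_MIN(rte_align32pow2(vec_sz),
                                               (uint32_t)limits.max_sz);
    }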
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h index 6a6f6ea4c1..1a737bf851 100644 --- a/lib/eventdev/rte_eventdev.h +++ b/lib/eventdev/rte_eventdev.h @@ -1203,6 +1203,9 @@ struct rte_event_vector { #define RTE_EVENT_TYPE_ETH_RX_ADAPTER_VECTOR \ (RTE_EVENT_TYPE_VECTOR | RTE_EVENT_TYPE_ETH_RX_ADAPTER) /**< The event vector generated from eth Rx adapter. */ +#define RTE_EVENT_TYPE_CRYPTODEV_VECTOR \ + (RTE_EVENT_TYPE_VECTOR | RTE_EVENT_TYPE_CRYPTODEV) +/**< The event vector generated from cryptodev adapter. */ #define RTE_EVENT_TYPE_MAX 0x10 /**< Maximum number of event types */ @@ -1420,6 +1423,11 @@ rte_event_timer_adapter_caps_get(uint8_t dev_id, uint32_t *caps); * the private data information along with the crypto session. */ +#define RTE_EVENT_CRYPTO_ADAPTER_CAP_EVENT_VECTOR 0x10 +/**< Flag indicates HW is capable of aggregating processed + * crypto operations into rte_event_vector. + */ + /** * Retrieve the event device's crypto adapter capabilities for the * specified cryptodev device
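On the dequeue path, the new RTE_EVENT_TYPE_CRYPTODEV_VECTOR event type defined above could be handled as in the following sketch (illustrative only, not part of the patch; evdev_id and port_id are assumed, and completed operations are simply freed):

    struct rte_event ev;

    while (rte_event_dequeue_burst(evdev_id, port_id, &ev, 1, 0)) {
            if (ev.event_type == RTE_EVENT_TYPE_CRYPTODEV_VECTOR) {
                    struct rte_event_vector *vec = ev.vec;
                    uint16_t i;

                    for (i = 0; i < vec->nb_elem; i++) {
                            struct rte_crypto_op *op = vec->ptrs[i];
                            /* Consume the processed crypto operation. */
                            rte_crypto_op_free(op);
                    }
                    /* Return the vector container to its mempool. */
                    rte_mempool_put(rte_mempool_from_obj(vec), vec);
            } else if (ev.event_type == RTE_EVENT_TYPE_CRYPTODEV) {
                    /* Non-vectorized completion. */
                    rte_crypto_op_free(ev.event_ptr);
            }
    }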