From patchwork Thu Aug 4 10:36:25 2022
X-Patchwork-Submitter: Volodymyr Fialko
X-Patchwork-Id: 114611
X-Patchwork-Delegate: gakhil@marvell.com
From: Volodymyr Fialko
To: Radu Nicolau, Akhil Goyal
Cc: Volodymyr Fialko
Subject: [PATCH 5/6] examples/ipsec-secgw: add event vector support for lookaside
Date: Thu, 4 Aug 2022 12:36:25 +0200
Message-ID: <20220804103626.102688-6-vfialko@marvell.com>
In-Reply-To: <20220804103626.102688-1-vfialko@marvell.com>
References: <20220804103626.102688-1-vfialko@marvell.com>
List-Id: DPDK patches and discussions

Add vector support for event crypto adapter in lookaside mode.
Once --event-vector is enabled, the event crypto adapter will group
processed crypto operations into an rte_event_vector event with type
RTE_EVENT_TYPE_CRYPTODEV_VECTOR.
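For illustration, a minimal sketch of how a worker could consume such a
vector event is shown below. It is not taken from this patch: the
eventdev/port ids and the on_decrypted() hook are placeholders, and the
actual handling added by the patch lives in
ipsec_ev_cryptodev_vector_process().

/*
 * Illustrative sketch only: unwrap a RTE_EVENT_TYPE_CRYPTODEV_VECTOR
 * event produced by the crypto adapter. In ipsec-secgw the crypto op is
 * stored in the mbuf private area, so freeing the mbuf also releases it.
 */
#include <rte_cryptodev.h>
#include <rte_eventdev.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

static void
on_decrypted(struct rte_mbuf *pkt)
{
	/* Placeholder: route/transmit the processed packet */
	rte_pktmbuf_free(pkt);
}

static void
drain_crypto_vector_events(uint8_t evdev_id, uint8_t port_id)
{
	struct rte_event ev;

	while (rte_event_dequeue_burst(evdev_id, port_id, &ev, 1, 0) != 0) {
		if (ev.event_type != RTE_EVENT_TYPE_CRYPTODEV_VECTOR)
			continue; /* other event types handled elsewhere */

		struct rte_event_vector *vec = ev.vec;
		uint16_t i;

		/* Each vector element is a completed rte_crypto_op */
		for (i = 0; i < vec->nb_elem; i++) {
			struct rte_crypto_op *cop = vec->ptrs[i];
			struct rte_mbuf *pkt = cop->sym->m_src;

			if (cop->status == RTE_CRYPTO_OP_STATUS_SUCCESS)
				on_decrypted(pkt);
			else
				rte_pktmbuf_free(pkt);
		}
		/* Return the vector to the adapter's vector mempool */
		rte_mempool_put(rte_mempool_from_obj(vec), vec);
	}
}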
Signed-off-by: Volodymyr Fialko
---
 doc/guides/sample_app_ug/ipsec_secgw.rst |   3 +
 examples/ipsec-secgw/event_helper.c      |  34 ++-
 examples/ipsec-secgw/ipsec-secgw.c       |   2 +-
 examples/ipsec-secgw/ipsec-secgw.h       |   1 +
 examples/ipsec-secgw/ipsec_worker.c      | 281 ++++++++++++++++++-----
 5 files changed, 265 insertions(+), 56 deletions(-)

diff --git a/doc/guides/sample_app_ug/ipsec_secgw.rst b/doc/guides/sample_app_ug/ipsec_secgw.rst
index c7b87889f1..2a1aeae7c5 100644
--- a/doc/guides/sample_app_ug/ipsec_secgw.rst
+++ b/doc/guides/sample_app_ug/ipsec_secgw.rst
@@ -94,6 +94,9 @@ The application supports two modes of operation: poll mode and event mode.
   (default vector-size is 16) and vector-tmo (default vector-tmo is 102400ns).
   By default event vectorization is disabled and it can be enabled using event-vector
   option.
+  For event device and crypto device pairs which support the capability
+  ``RTE_EVENT_CRYPTO_ADAPTER_CAP_EVENT_VECTOR``, vector aggregation can also
+  be enabled using the event-vector option.
 
 Additionally the event mode introduces two submodes of processing packets:
 
diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c
index 9c20a05da8..635e6f24bf 100644
--- a/examples/ipsec-secgw/event_helper.c
+++ b/examples/ipsec-secgw/event_helper.c
@@ -790,12 +790,15 @@ eh_start_eventdev(struct eventmode_conf *em_conf)
 static int
 eh_initialize_crypto_adapter(struct eventmode_conf *em_conf)
 {
+	struct rte_event_crypto_adapter_queue_conf queue_conf;
 	struct rte_event_dev_info evdev_default_conf = {0};
 	struct rte_event_port_conf port_conf = {0};
 	struct eventdev_params *eventdev_config;
+	char mp_name[RTE_MEMPOOL_NAMESIZE];
+	const uint8_t nb_qp_per_cdev = 1;
 	uint8_t eventdev_id, cdev_id, n;
-	uint32_t cap;
-	int ret;
+	uint32_t cap, nb_elem;
+	int ret, socket_id;
 
 	if (!em_conf->enable_event_crypto_adapter)
 		return 0;
@@ -850,10 +853,35 @@ eh_initialize_crypto_adapter(struct eventmode_conf *em_conf)
 			return ret;
 		}
 
+		memset(&queue_conf, 0, sizeof(queue_conf));
+		if ((cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_EVENT_VECTOR) &&
+		    (em_conf->ext_params.event_vector)) {
+			queue_conf.flags |= RTE_EVENT_CRYPTO_ADAPTER_EVENT_VECTOR;
+			queue_conf.vector_sz = em_conf->ext_params.vector_size;
+			/*
+			 * Currently all sessions are configured with the same
+			 * response info fields, so packets will be aggregated
+			 * into the same vector. This allows us to size the
+			 * vector pool to hold all queue pair descriptors.
+ */ + nb_elem = (qp_desc_nb / queue_conf.vector_sz) + 1; + nb_elem *= nb_qp_per_cdev; + socket_id = rte_cryptodev_socket_id(cdev_id); + snprintf(mp_name, RTE_MEMPOOL_NAMESIZE, + "QP_VEC_%u_%u", socket_id, cdev_id); + queue_conf.vector_mp = rte_event_vector_pool_create( + mp_name, nb_elem, 0, + queue_conf.vector_sz, socket_id); + if (queue_conf.vector_mp == NULL) { + EH_LOG_ERR("failed to create event vector pool"); + return -ENOMEM; + } + } + /* Add crypto queue pairs to event crypto adapter */ ret = rte_event_crypto_adapter_queue_pair_add(cdev_id, eventdev_id, -1, /* adds all the pre configured queue pairs to the instance */ - NULL); + &queue_conf); if (ret < 0) { EH_LOG_ERR("Failed to add queue pairs to event crypto adapter %d", ret); return ret; diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index 0bd1f15ae5..02b1fabaf5 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -85,7 +85,7 @@ static uint16_t nb_txd = IPSEC_SECGW_TX_DESC_DEFAULT; /* * Configurable number of descriptors per queue pair */ -static uint32_t qp_desc_nb = 2048; +uint32_t qp_desc_nb = 2048; #define ETHADDR_TO_UINT64(addr) __BYTES_TO_UINT64( \ (addr)->addr_bytes[0], (addr)->addr_bytes[1], \ diff --git a/examples/ipsec-secgw/ipsec-secgw.h b/examples/ipsec-secgw/ipsec-secgw.h index f02736075b..c6d11f3aac 100644 --- a/examples/ipsec-secgw/ipsec-secgw.h +++ b/examples/ipsec-secgw/ipsec-secgw.h @@ -145,6 +145,7 @@ extern bool per_port_pool; extern uint32_t mtu_size; extern uint32_t frag_tbl_sz; +extern uint32_t qp_desc_nb; #define SS_F (1U << 0) /* Single SA mode */ #define INL_PR_F (1U << 1) /* Inline Protocol */ diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c index f94ab10a5b..466bb03bde 100644 --- a/examples/ipsec-secgw/ipsec_worker.c +++ b/examples/ipsec-secgw/ipsec_worker.c @@ -267,6 +267,21 @@ pkt_l3_len_set(struct rte_mbuf *pkt) } } +static inline void +ipsec_sad_lookup(struct ipsec_sad *sad, struct rte_mbuf *pkts[], void *sa[], uint16_t nb_pkts) +{ + uint16_t i; + + if (nb_pkts == 0) + return; + + for (i = 0; i < nb_pkts; i++) { + rte_pktmbuf_adj(pkts[i], RTE_ETHER_HDR_LEN); + pkt_l3_len_set(pkts[i]); + } + sad_lookup(sad, pkts, sa, nb_pkts); +} + static inline uint16_t route4_pkt(struct rte_mbuf *pkt, struct rt_ctx *rt_ctx) { @@ -348,18 +363,11 @@ crypto_op_reset(const struct rte_ipsec_session *ss, } } -static inline int -event_crypto_enqueue(struct ipsec_ctx *ctx __rte_unused, struct rte_mbuf *pkt, - struct ipsec_sa *sa, const struct eh_event_link_info *ev_link) +static inline void +crypto_prepare_event(struct rte_mbuf *pkt, struct rte_ipsec_session *sess, struct rte_event *ev) { struct ipsec_mbuf_metadata *priv; - struct rte_ipsec_session *sess; struct rte_crypto_op *cop; - struct rte_event cev; - int ret; - - /* Get IPsec session */ - sess = ipsec_get_primary_session(sa); /* Get pkt private data */ priv = get_priv(pkt); @@ -369,13 +377,39 @@ event_crypto_enqueue(struct ipsec_ctx *ctx __rte_unused, struct rte_mbuf *pkt, crypto_op_reset(sess, &pkt, &cop, 1); /* Update event_ptr with rte_crypto_op */ - cev.event = 0; - cev.event_ptr = cop; + ev->event = 0; + ev->event_ptr = cop; +} + +static inline void +free_pkts_from_events(struct rte_event events[], uint16_t count) +{ + struct rte_crypto_op *cop; + int i; + + for (i = 0; i < count; i++) { + cop = events[i].event_ptr; + free_pkts(&cop->sym->m_src, 1); + } +} + +static inline int +event_crypto_enqueue(struct rte_mbuf *pkt, + struct ipsec_sa *sa, 
const struct eh_event_link_info *ev_link) +{ + struct rte_ipsec_session *sess; + struct rte_event ev; + int ret; + + /* Get IPsec session */ + sess = ipsec_get_primary_session(sa); + + crypto_prepare_event(pkt, sess, &ev); /* Enqueue event to crypto adapter */ ret = rte_event_crypto_adapter_enqueue(ev_link->eventdev_id, - ev_link->event_port_id, &cev, 1); - if (unlikely(ret <= 0)) { + ev_link->event_port_id, &ev, 1); + if (unlikely(ret != 1)) { /* pkt will be freed by the caller */ RTE_LOG_DP(DEBUG, IPSEC, "Cannot enqueue event: %i (errno: %i)\n", ret, rte_errno); return rte_errno; @@ -449,7 +483,7 @@ process_ipsec_ev_inbound(struct ipsec_ctx *ctx, struct route_table *rt, goto drop_pkt_and_exit; } - if (unlikely(event_crypto_enqueue(ctx, pkt, sa, ev_link))) + if (unlikely(event_crypto_enqueue(pkt, sa, ev_link))) goto drop_pkt_and_exit; return PKT_POSTED; @@ -596,7 +630,7 @@ process_ipsec_ev_outbound(struct ipsec_ctx *ctx, struct route_table *rt, /* prepare pkt - advance start to L3 */ rte_pktmbuf_adj(pkt, RTE_ETHER_HDR_LEN); - if (likely(event_crypto_enqueue(ctx, pkt, sa, ev_link) == 0)) + if (likely(event_crypto_enqueue(pkt, sa, ev_link) == 0)) return PKT_POSTED; drop_pkt_and_exit: @@ -607,14 +641,12 @@ process_ipsec_ev_outbound(struct ipsec_ctx *ctx, struct route_table *rt, } static inline int -ipsec_ev_route_pkts(struct rte_event_vector *vec, struct route_table *rt, - struct ipsec_traffic *t, struct sa_ctx *sa_ctx) +ipsec_ev_route_ip_pkts(struct rte_event_vector *vec, struct route_table *rt, + struct ipsec_traffic *t) { - struct rte_ipsec_session *sess; - uint32_t sa_idx, i, j = 0; - uint16_t port_id = 0; struct rte_mbuf *pkt; - struct ipsec_sa *sa; + uint16_t port_id = 0; + uint32_t i, j = 0; /* Route IPv4 packets */ for (i = 0; i < t->ip4.num; i++) { @@ -646,34 +678,111 @@ ipsec_ev_route_pkts(struct rte_event_vector *vec, struct route_table *rt, free_pkts(&pkt, 1); } + return j; +} + +static inline int +ipsec_ev_inbound_route_pkts(struct rte_event_vector *vec, + struct route_table *rt, + struct ipsec_traffic *t, + const struct eh_event_link_info *ev_link) +{ + uint32_t ret, i, j, ev_len = 0; + struct rte_event events[MAX_PKTS]; + struct rte_ipsec_session *sess; + struct rte_mbuf *pkt; + struct ipsec_sa *sa; + + j = ipsec_ev_route_ip_pkts(vec, rt, t); + /* Route ESP packets */ + for (i = 0; i < t->ipsec.num; i++) { + pkt = t->ipsec.pkts[i]; + sa = ipsec_mask_saptr(t->ipsec.saptr[i]); + if (unlikely(sa == NULL)) { + free_pkts(&pkt, 1); + continue; + } + sess = ipsec_get_primary_session(sa); + crypto_prepare_event(pkt, sess, &events[ev_len]); + ev_len++; + } + + if (ev_len) { + ret = rte_event_crypto_adapter_enqueue(ev_link->eventdev_id, + ev_link->event_port_id, events, ev_len); + if (ret < ev_len) { + RTE_LOG_DP(DEBUG, IPSEC, "Cannot enqueue events: %i (errno: %i)\n", + ev_len, rte_errno); + free_pkts_from_events(&events[ret], ev_len - ret); + return -rte_errno; + } + } + + return j; +} + +static inline int +ipsec_ev_outbound_route_pkts(struct rte_event_vector *vec, struct route_table *rt, + struct ipsec_traffic *t, struct sa_ctx *sa_ctx, + const struct eh_event_link_info *ev_link) +{ + uint32_t sa_idx, ret, i, j, ev_len = 0; + struct rte_event events[MAX_PKTS]; + struct rte_ipsec_session *sess; + uint16_t port_id = 0; + struct rte_mbuf *pkt; + struct ipsec_sa *sa; + + j = ipsec_ev_route_ip_pkts(vec, rt, t); + + /* Handle IPsec packets. + * For lookaside IPsec packets, submit to cryptodev queue. + * For inline IPsec packets, route the packet. 
+ */ for (i = 0; i < t->ipsec.num; i++) { /* Validate sa_idx */ sa_idx = t->ipsec.res[i]; pkt = t->ipsec.pkts[i]; - if (unlikely(sa_idx >= sa_ctx->nb_sa)) + if (unlikely(sa_idx >= sa_ctx->nb_sa)) { free_pkts(&pkt, 1); - else { - /* Else the packet has to be protected */ - sa = &(sa_ctx->sa[sa_idx]); - /* Get IPsec session */ - sess = ipsec_get_primary_session(sa); - /* Allow only inline protocol for now */ - if (unlikely(sess->type != - RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)) { - RTE_LOG(ERR, IPSEC, "SA type not supported\n"); - free_pkts(&pkt, 1); - continue; - } + continue; + } + /* Else the packet has to be protected */ + sa = &(sa_ctx->sa[sa_idx]); + /* Get IPsec session */ + sess = ipsec_get_primary_session(sa); + switch (sess->type) { + case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL: + rte_pktmbuf_adj(pkt, RTE_ETHER_HDR_LEN); + crypto_prepare_event(pkt, sess, &events[ev_len]); + ev_len++; + break; + case RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL: rte_security_set_pkt_metadata(sess->security.ctx, sess->security.ses, pkt, NULL); - pkt->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD; port_id = sa->portid; update_mac_addrs(pkt, port_id); ipsec_event_pre_forward(pkt, port_id); ev_vector_attr_update(vec, pkt); vec->mbufs[j++] = pkt; + break; + default: + RTE_LOG(ERR, IPSEC, "SA type not supported\n"); + free_pkts(&pkt, 1); + break; + } + } + + if (ev_len) { + ret = rte_event_crypto_adapter_enqueue(ev_link->eventdev_id, + ev_link->event_port_id, events, ev_len); + if (ret < ev_len) { + RTE_LOG_DP(DEBUG, IPSEC, "Cannot enqueue events: %i (errno: %i)\n", + ev_len, rte_errno); + free_pkts_from_events(&events[ret], ev_len - ret); + return -rte_errno; } } @@ -698,6 +807,10 @@ classify_pkt(struct rte_mbuf *pkt, struct ipsec_traffic *t) t->ip6.data[t->ip6.num] = nlp; t->ip6.pkts[(t->ip6.num)++] = pkt; break; + case PKT_TYPE_IPSEC_IPV4: + case PKT_TYPE_IPSEC_IPV6: + t->ipsec.pkts[(t->ipsec.num)++] = pkt; + break; default: RTE_LOG_DP(DEBUG, IPSEC_ESP, "Unsupported packet type = %d\n", type); @@ -708,7 +821,8 @@ classify_pkt(struct rte_mbuf *pkt, struct ipsec_traffic *t) static inline int process_ipsec_ev_inbound_vector(struct ipsec_ctx *ctx, struct route_table *rt, - struct rte_event_vector *vec) + struct rte_event_vector *vec, + const struct eh_event_link_info *ev_link) { struct ipsec_traffic t; struct rte_mbuf *pkt; @@ -738,12 +852,15 @@ process_ipsec_ev_inbound_vector(struct ipsec_ctx *ctx, struct route_table *rt, check_sp_sa_bulk(ctx->sp4_ctx, ctx->sa_ctx, &t.ip4); check_sp_sa_bulk(ctx->sp6_ctx, ctx->sa_ctx, &t.ip6); - return ipsec_ev_route_pkts(vec, rt, &t, ctx->sa_ctx); + ipsec_sad_lookup(&ctx->sa_ctx->sad, t.ipsec.pkts, t.ipsec.saptr, t.ipsec.num); + + return ipsec_ev_inbound_route_pkts(vec, rt, &t, ev_link); } static inline int process_ipsec_ev_outbound_vector(struct ipsec_ctx *ctx, struct route_table *rt, - struct rte_event_vector *vec) + struct rte_event_vector *vec, + const struct eh_event_link_info *ev_link) { struct ipsec_traffic t; struct rte_mbuf *pkt; @@ -766,7 +883,7 @@ process_ipsec_ev_outbound_vector(struct ipsec_ctx *ctx, struct route_table *rt, check_sp_bulk(ctx->sp4_ctx, &t.ip4, &t.ipsec); check_sp_bulk(ctx->sp6_ctx, &t.ip6, &t.ipsec); - return ipsec_ev_route_pkts(vec, rt, &t, ctx->sa_ctx); + return ipsec_ev_outbound_route_pkts(vec, rt, &t, ctx->sa_ctx, ev_link); } static inline int @@ -817,12 +934,13 @@ ipsec_ev_vector_process(struct lcore_conf_ev_tx_int_port_wrkr *lconf, ev_vector_attr_init(vec); core_stats_update_rx(vec->nb_elem); + if (is_unprotected_port(pkt->port)) ret = 
process_ipsec_ev_inbound_vector(&lconf->inbound, - &lconf->rt, vec); + &lconf->rt, vec, links); else ret = process_ipsec_ev_outbound_vector(&lconf->outbound, - &lconf->rt, vec); + &lconf->rt, vec, links); if (likely(ret > 0)) { core_stats_update_tx(vec->nb_elem); @@ -857,24 +975,19 @@ ipsec_ev_vector_drv_mode_process(struct eh_event_link_info *links, } static inline int -ipsec_ev_cryptodev_process(const struct lcore_conf_ev_tx_int_port_wrkr *lconf, - struct rte_event *ev) +ipsec_ev_cryptodev_process_one_pkt( + const struct lcore_conf_ev_tx_int_port_wrkr *lconf, + const struct rte_crypto_op *cop, struct rte_mbuf *pkt) { struct rte_ether_hdr *ethhdr; - struct rte_crypto_op *cop; - struct rte_mbuf *pkt; uint16_t port_id; struct ip *ip; - /* Get pkt data */ - cop = ev->event_ptr; - pkt = cop->sym->m_src; - - /* If operation was not successful, drop the packet */ + /* If operation was not successful, free the packet */ if (unlikely(cop->status != RTE_CRYPTO_OP_STATUS_SUCCESS)) { RTE_LOG_DP(INFO, IPSEC, "Crypto operation failed\n"); free_pkts(&pkt, 1); - return PKT_DROPPED; + return -1; } ip = rte_pktmbuf_mtod(pkt, struct ip *); @@ -904,13 +1017,74 @@ ipsec_ev_cryptodev_process(const struct lcore_conf_ev_tx_int_port_wrkr *lconf, if (unlikely(port_id == RTE_MAX_ETHPORTS)) { RTE_LOG_DP(DEBUG, IPSEC, "Cannot route processed packet\n"); free_pkts(&pkt, 1); - return PKT_DROPPED; + return -1; } /* Update Ether with port's MAC addresses */ memcpy(ðhdr->src_addr, ðaddr_tbl[port_id].src, sizeof(struct rte_ether_addr)); memcpy(ðhdr->dst_addr, ðaddr_tbl[port_id].dst, sizeof(struct rte_ether_addr)); + ipsec_event_pre_forward(pkt, port_id); + + return 0; +} + +static inline void +ipsec_ev_cryptodev_vector_process( + const struct lcore_conf_ev_tx_int_port_wrkr *lconf, + const struct eh_event_link_info *links, + struct rte_event *ev) +{ + struct rte_event_vector *vec = ev->vec; + const uint16_t nb_events = 1; + struct rte_crypto_op *cop; + struct rte_mbuf *pkt; + uint16_t enqueued; + int i, n = 0; + + ev_vector_attr_init(vec); + /* Transform cop vec into pkt vec */ + for (i = 0; i < vec->nb_elem; i++) { + /* Get pkt data */ + cop = vec->ptrs[i]; + pkt = cop->sym->m_src; + if (ipsec_ev_cryptodev_process_one_pkt(lconf, cop, pkt)) + continue; + + vec->mbufs[n++] = pkt; + ev_vector_attr_update(vec, pkt); + } + + if (n == 0) { + rte_mempool_put(rte_mempool_from_obj(vec), vec); + return; + } + + vec->nb_elem = n; + enqueued = rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id, + links[0].event_port_id, ev, nb_events, 0); + if (enqueued != nb_events) { + RTE_LOG_DP(INFO, IPSEC, "Failed to enqueue to tx, ret = %u," + " errno = %i\n", enqueued, rte_errno); + free_pkts(vec->mbufs, vec->nb_elem); + rte_mempool_put(rte_mempool_from_obj(vec), vec); + } +} + +static inline int +ipsec_ev_cryptodev_process(const struct lcore_conf_ev_tx_int_port_wrkr *lconf, + struct rte_event *ev) +{ + struct rte_crypto_op *cop; + struct rte_mbuf *pkt; + + /* Get pkt data */ + cop = ev->event_ptr; + pkt = cop->sym->m_src; + + if (ipsec_ev_cryptodev_process_one_pkt(lconf, cop, pkt)) + return PKT_DROPPED; + /* Update event */ ev->mbuf = pkt; @@ -1154,6 +1328,9 @@ ipsec_wrkr_non_burst_int_port_app_mode(struct eh_event_link_info *links, if (unlikely(ret != PKT_FORWARDED)) continue; break; + case RTE_EVENT_TYPE_CRYPTODEV_VECTOR: + ipsec_ev_cryptodev_vector_process(&lconf, links, &ev); + continue; default: RTE_LOG(ERR, IPSEC, "Invalid event type %u", ev.event_type);