From patchwork Thu Aug  4 10:36:23 2022
X-Patchwork-Submitter: Volodymyr Fialko
X-Patchwork-Id: 114609
X-Patchwork-Delegate: gakhil@marvell.com
From: Volodymyr Fialko <vfialko@marvell.com>
To: dev@dpdk.org, Radu Nicolau, Akhil Goyal
Cc: Volodymyr Fialko
Subject: [PATCH 3/6] examples/ipsec-secgw: add lookaside event mode
Date: Thu, 4 Aug 2022 12:36:23 +0200
Message-ID: <20220804103626.102688-4-vfialko@marvell.com>
In-Reply-To: <20220804103626.102688-1-vfialko@marvell.com>
References: <20220804103626.102688-1-vfialko@marvell.com>
List-Id: DPDK patches and discussions <dev.dpdk.org>

Add base support for lookaside event mode. Events coming from ethdev will
be enqueued to the event crypto adapter, processed, and enqueued back to
ethdev for transmission.
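For reference, the per-packet flow this patch introduces is, in outline, the
following (a condensed sketch of the worker changes below, not additional
functionality; eventdev_id/event_port_id come from the worker's event link
and cop is the crypto op prepared in the packet's private area):

    /* Submit the packet's crypto op to the event crypto adapter. */
    struct rte_event cev;

    cev.event = 0;       /* reset event metadata */
    cev.event_ptr = cop; /* carry the rte_crypto_op */
    rte_event_crypto_adapter_enqueue(eventdev_id, event_port_id, &cev, 1);

    /* The worker later receives an RTE_EVENT_TYPE_CRYPTODEV event,
     * routes the processed packet and enqueues it to ethdev Tx.
     */

The mode is selected with the existing event-mode options described in
ipsec_secgw.rst; an illustrative invocation (core list, port masks and
config file are placeholders):

    dpdk-ipsec-secgw -l 1 -n 4 -- -P -p 0x3 -u 0x1 -f ep0.cfg \
        --transfer-mode event --event-schedule-type parallel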
Signed-off-by: Volodymyr Fialko <vfialko@marvell.com>
---
 doc/guides/sample_app_ug/ipsec_secgw.rst |   4 +-
 examples/ipsec-secgw/ipsec-secgw.c       |   3 +-
 examples/ipsec-secgw/ipsec.c             |  35 +++-
 examples/ipsec-secgw/ipsec.h             |   8 +-
 examples/ipsec-secgw/ipsec_worker.c      | 224 +++++++++++++++++++++--
 examples/ipsec-secgw/sa.c                |  23 ++-
 6 files changed, 262 insertions(+), 35 deletions(-)

diff --git a/doc/guides/sample_app_ug/ipsec_secgw.rst b/doc/guides/sample_app_ug/ipsec_secgw.rst
index 07686d2285..c7b87889f1 100644
--- a/doc/guides/sample_app_ug/ipsec_secgw.rst
+++ b/doc/guides/sample_app_ug/ipsec_secgw.rst
@@ -83,8 +83,8 @@ The application supports two modes of operation: poll mode and event mode.
   every type of event device without affecting existing paths/use cases. The worker
   to be used will be determined by the operating conditions and the underlying device
   capabilities. **Currently the application provides non-burst, internal port worker
-  threads and supports inline protocol only.** It also provides infrastructure for
-  non-internal port however does not define any worker threads.
+  threads.** It also provides infrastructure for non-internal port however does not
+  define any worker threads.
 
   Event mode also supports event vectorization. The event devices, ethernet device
   pairs which support the capability ``RTE_EVENT_ETH_RX_ADAPTER_CAP_EVENT_VECTOR`` can
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 4ca5936bdf..0bd1f15ae5 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -3121,7 +3121,8 @@ main(int32_t argc, char **argv)
 		if ((socket_ctx[socket_id].session_pool != NULL) &&
 				(socket_ctx[socket_id].sa_in == NULL) &&
 				(socket_ctx[socket_id].sa_out == NULL)) {
-			sa_init(&socket_ctx[socket_id], socket_id, lcore_conf);
+			sa_init(&socket_ctx[socket_id], socket_id, lcore_conf,
+				eh_conf->mode_params);
 			sp4_init(&socket_ctx[socket_id], socket_id);
 			sp6_init(&socket_ctx[socket_id], socket_id);
 			rt_init(&socket_ctx[socket_id], socket_id);
diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c
index 7b7bfff696..030cfe7a82 100644
--- a/examples/ipsec-secgw/ipsec.c
+++ b/examples/ipsec-secgw/ipsec.c
@@ -6,6 +6,7 @@
 #include <netinet/in.h>
 #include <netinet/ip.h>
 
+#include <rte_event_crypto_adapter.h>
 #include <rte_branch_prediction.h>
 #include <rte_log.h>
 #include <rte_crypto.h>
@@ -56,14 +57,17 @@ set_ipsec_conf(struct ipsec_sa *sa, struct rte_security_ipsec_xform *ipsec)
 
 int
 create_lookaside_session(struct ipsec_ctx *ipsec_ctx_lcore[],
-		struct socket_ctx *skt_ctx, struct ipsec_sa *sa,
-		struct rte_ipsec_session *ips)
+		struct socket_ctx *skt_ctx, const struct eventmode_conf *em_conf,
+		struct ipsec_sa *sa, struct rte_ipsec_session *ips)
 {
 	uint16_t cdev_id = RTE_CRYPTO_MAX_DEVS;
+	enum rte_crypto_op_sess_type sess_type;
 	struct rte_cryptodev_info cdev_info;
+	enum rte_crypto_op_type op_type;
 	unsigned long cdev_id_qp = 0;
-	struct cdev_key key = { 0 };
 	struct ipsec_ctx *ipsec_ctx;
+	struct cdev_key key = { 0 };
+	void *sess = NULL;
 	uint32_t lcore_id;
 	int32_t ret = 0;
@@ -159,6 +163,10 @@ create_lookaside_session(struct ipsec_ctx *ipsec_ctx_lcore[],
 			return -1;
 		}
 		ips->security.ctx = ctx;
+
+		sess = ips->security.ses;
+		op_type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+		sess_type = RTE_CRYPTO_OP_SECURITY_SESSION;
 	} else {
 		RTE_LOG(ERR, IPSEC, "Inline not supported\n");
 		return -1;
@@ -183,6 +191,27 @@ create_lookaside_session(struct ipsec_ctx *ipsec_ctx_lcore[],
 		rte_cryptodev_info_get(cdev_id, &cdev_info);
 	}
 
+	/* Setup meta data required by event crypto adapter */
+	if (em_conf->enable_event_crypto_adapter && sess != NULL) {
+		union rte_event_crypto_metadata m_data = {0};
+		const struct eventdev_params *eventdev_conf;
+
+		eventdev_conf = &(em_conf->eventdev_config[0]);
+
+		/* Fill in response information */
+		m_data.response_info.sched_type = em_conf->ext_params.sched_type;
+		m_data.response_info.op = RTE_EVENT_OP_NEW;
+		m_data.response_info.queue_id = eventdev_conf->ev_cpt_queue_id;
+
+		/* Fill in request information */
+		m_data.request_info.cdev_id = cdev_id;
+		m_data.request_info.queue_pair_id = 0;
+
+		/* Attach meta info to session */
+		rte_cryptodev_session_event_mdata_set(cdev_id, sess, op_type,
+				sess_type, &m_data, sizeof(m_data));
+	}
+
 	return 0;
 }
diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h
index 2005ae8fec..5ef63e8fc4 100644
--- a/examples/ipsec-secgw/ipsec.h
+++ b/examples/ipsec-secgw/ipsec.h
@@ -14,6 +14,7 @@
 #include <rte_flow.h>
 #include <rte_ipsec.h>
 
+#include "event_helper.h"
 #include "ipsec-secgw.h"
 
 #define RTE_LOGTYPE_IPSEC_ESP RTE_LOGTYPE_USER2
@@ -424,7 +425,8 @@ sa_spi_present(struct sa_ctx *sa_ctx, uint32_t spi, int inbound);
 
 void
 sa_init(struct socket_ctx *ctx, int32_t socket_id,
-		struct lcore_conf *lcore_conf);
+		struct lcore_conf *lcore_conf,
+		const struct eventmode_conf *em_conf);
 
 void
 rt_init(struct socket_ctx *ctx, int32_t socket_id);
@@ -441,8 +443,8 @@ enqueue_cop_burst(struct cdev_qp *cqp);
 
 int
 create_lookaside_session(struct ipsec_ctx *ipsec_ctx[],
-		struct socket_ctx *skt_ctx, struct ipsec_sa *sa,
-		struct rte_ipsec_session *ips);
+		struct socket_ctx *skt_ctx, const struct eventmode_conf *em_conf,
+		struct ipsec_sa *sa, struct rte_ipsec_session *ips);
 
 int
 create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa,
diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c
index 803157d8ee..2661f0275f 100644
--- a/examples/ipsec-secgw/ipsec_worker.c
+++ b/examples/ipsec-secgw/ipsec_worker.c
@@ -3,6 +3,7 @@ * Copyright (C) 2020 Marvell International Ltd.
  */
 #include <rte_acl.h>
+#include <rte_event_crypto_adapter.h>
 #include <rte_event_eth_tx_adapter.h>
 #include <rte_lpm.h>
 #include <rte_lpm6.h>
@@ -11,6 +12,7 @@
 #include "ipsec.h"
 #include "ipsec-secgw.h"
 #include "ipsec_worker.h"
+#include "sad.h"
 
 #if defined(__ARM_NEON)
 #include "ipsec_lpm_neon.h"
@@ -228,6 +230,43 @@ check_sp_sa_bulk(struct sp_ctx *sp, struct sa_ctx *sa_ctx,
 	ip->num = j;
 }
 
+static inline void
+pkt_l3_len_set(struct rte_mbuf *pkt)
+{
+	struct rte_ipv4_hdr *ipv4;
+	struct rte_ipv6_hdr *ipv6;
+	size_t l3len, ext_len;
+	uint32_t l3_type;
+	int next_proto;
+	uint8_t *p;
+
+	l3_type = pkt->packet_type & RTE_PTYPE_L3_MASK;
+	if (l3_type == RTE_PTYPE_L3_IPV4) {
+		ipv4 = rte_pktmbuf_mtod(pkt, struct rte_ipv4_hdr *);
+		pkt->l3_len = ipv4->ihl * 4;
+	} else if (l3_type & RTE_PTYPE_L3_IPV6) {
+		ipv6 = rte_pktmbuf_mtod(pkt, struct rte_ipv6_hdr *);
+		l3len = sizeof(struct rte_ipv6_hdr);
+		if (l3_type == RTE_PTYPE_L3_IPV6_EXT ||
+				l3_type == RTE_PTYPE_L3_IPV6_EXT_UNKNOWN) {
+			p = rte_pktmbuf_mtod(pkt, uint8_t *);
+			next_proto = ipv6->proto;
+			while (next_proto != IPPROTO_ESP &&
+					l3len < pkt->data_len &&
+					(next_proto = rte_ipv6_get_next_ext(p + l3len,
+							next_proto, &ext_len)) >= 0)
+				l3len += ext_len;
+
+			/* Drop pkt when IPv6 header exceeds first seg size */
+			if (unlikely(l3len > pkt->data_len)) {
+				free_pkts(&pkt, 1);
+				return;
+			}
+		}
+		pkt->l3_len = l3len;
+	}
+}
+
 static inline uint16_t
 route4_pkt(struct rte_mbuf *pkt, struct rt_ctx *rt_ctx)
 {
@@ -287,9 +326,67 @@ get_route(struct rte_mbuf *pkt, struct route_table *rt, enum pkt_type type)
 	return RTE_MAX_ETHPORTS;
 }
 
+static inline void
+crypto_op_reset(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+		struct rte_crypto_op *cop[], uint16_t num)
+{
+	struct rte_crypto_sym_op *sop;
+	uint32_t i;
+
+	const struct rte_crypto_op unproc_cop = {
+		.type = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		.status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED,
+		.sess_type = RTE_CRYPTO_OP_SECURITY_SESSION,
+	};
+
+	for (i = 0; i != num; i++) {
+		cop[i]->raw = unproc_cop.raw;
+		sop = cop[i]->sym;
+		sop->m_src = mb[i];
+		sop->m_dst = NULL;
+		__rte_security_attach_session(sop, ss->security.ses);
+	}
+}
+
+static inline int
+event_crypto_enqueue(struct ipsec_ctx *ctx __rte_unused, struct rte_mbuf *pkt,
+		struct ipsec_sa *sa, const struct eh_event_link_info *ev_link)
+{
+	struct ipsec_mbuf_metadata *priv;
+	struct rte_ipsec_session *sess;
+	struct rte_crypto_op *cop;
+	struct rte_event cev;
+	int ret;
+
+	/* Get IPsec session */
+	sess = ipsec_get_primary_session(sa);
+
+	/* Get pkt private data */
+	priv = get_priv(pkt);
+	cop = &priv->cop;
+
+	/* Reset crypto operation data */
+	crypto_op_reset(sess, &pkt, &cop, 1);
+
+	/* Update event_ptr with rte_crypto_op */
+	cev.event = 0;
+	cev.event_ptr = cop;
+
+	/* Enqueue event to crypto adapter */
+	ret = rte_event_crypto_adapter_enqueue(ev_link->eventdev_id,
+			ev_link->event_port_id, &cev, 1);
+	if (unlikely(ret <= 0)) {
+		/* pkt will be freed by the caller */
+		RTE_LOG_DP(DEBUG, IPSEC, "Cannot enqueue event: %i (errno: %i)\n", ret, rte_errno);
+		return rte_errno;
+	}
+
+	return 0;
+}
+
 static inline int
 process_ipsec_ev_inbound(struct ipsec_ctx *ctx, struct route_table *rt,
-		struct rte_event *ev)
+		const struct eh_event_link_info *ev_link, struct rte_event *ev)
 {
 	struct ipsec_sa *sa = NULL;
 	struct rte_mbuf *pkt;
@@ -340,7 +437,22 @@ process_ipsec_ev_inbound(struct ipsec_ctx *ctx, struct route_table *rt,
 			goto drop_pkt_and_exit;
 		}
 		break;
+	case PKT_TYPE_IPSEC_IPV4:
+	case PKT_TYPE_IPSEC_IPV6:
+		rte_pktmbuf_adj(pkt, RTE_ETHER_HDR_LEN);
+		pkt_l3_len_set(pkt);
+
+		sad_lookup(&ctx->sa_ctx->sad, &pkt, (void **)&sa, 1);
+		sa = ipsec_mask_saptr(sa);
+		if (unlikely(sa == NULL)) {
+			RTE_LOG_DP(DEBUG, IPSEC, "Cannot find sa\n");
+			goto drop_pkt_and_exit;
+		}
+
+		if (unlikely(event_crypto_enqueue(ctx, pkt, sa, ev_link)))
+			goto drop_pkt_and_exit;
+
+		return PKT_POSTED;
 	default:
 		RTE_LOG_DP(DEBUG, IPSEC_ESP, "Unsupported packet type = %d\n",
 			   type);
@@ -389,7 +501,7 @@ process_ipsec_ev_inbound(struct ipsec_ctx *ctx, struct route_table *rt,
 
 static inline int
 process_ipsec_ev_outbound(struct ipsec_ctx *ctx, struct route_table *rt,
-		struct rte_event *ev)
+		const struct eh_event_link_info *ev_link, struct rte_event *ev)
 {
 	struct rte_ipsec_session *sess;
 	struct sa_ctx *sa_ctx;
@@ -456,11 +568,9 @@ process_ipsec_ev_outbound(struct ipsec_ctx *ctx, struct route_table *rt,
 	/* Get IPsec session */
 	sess = ipsec_get_primary_session(sa);
 
-	/* Allow only inline protocol for now */
-	if (unlikely(sess->type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)) {
-		RTE_LOG(ERR, IPSEC, "SA type not supported\n");
-		goto drop_pkt_and_exit;
-	}
+	/* Determine protocol type */
+	if (sess->type == RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL)
+		goto lookaside;
 
 	rte_security_set_pkt_metadata(sess->security.ctx,
 				      sess->security.ses, pkt, NULL);
@@ -482,6 +592,13 @@ process_ipsec_ev_outbound(struct ipsec_ctx *ctx, struct route_table *rt,
 	ipsec_event_pre_forward(pkt, port_id);
 	return PKT_FORWARDED;
 
+lookaside:
+	/* prepare pkt - advance start to L3 */
+	rte_pktmbuf_adj(pkt, RTE_ETHER_HDR_LEN);
+
+	if (likely(event_crypto_enqueue(ctx, pkt, sa, ev_link) == 0))
+		return PKT_POSTED;
+
 drop_pkt_and_exit:
 	RTE_LOG(ERR, IPSEC, "Outbound packet dropped\n");
 	rte_pktmbuf_free(pkt);
@@ -737,6 +854,67 @@ ipsec_ev_vector_drv_mode_process(struct eh_event_link_info *links,
 	rte_mempool_put(rte_mempool_from_obj(vec), vec);
 }
 
+static inline int
+ipsec_ev_cryptodev_process(const struct lcore_conf_ev_tx_int_port_wrkr *lconf,
+			   struct rte_event *ev)
+{
+	struct rte_ether_hdr *ethhdr;
+	struct rte_crypto_op *cop;
+	struct rte_mbuf *pkt;
+	uint16_t port_id;
+	struct ip *ip;
+
+	/* Get pkt data */
+	cop = ev->event_ptr;
+	pkt = cop->sym->m_src;
+
+	/* If operation was not successful, drop the packet */
+	if (unlikely(cop->status != RTE_CRYPTO_OP_STATUS_SUCCESS)) {
+		RTE_LOG_DP(INFO, IPSEC, "Crypto operation failed\n");
+		free_pkts(&pkt, 1);
+		return PKT_DROPPED;
+	}
+
+	ip = rte_pktmbuf_mtod(pkt, struct ip *);
+
+	/* Prepend Ether layer */
+	ethhdr = (struct rte_ether_hdr *)rte_pktmbuf_prepend(pkt, RTE_ETHER_HDR_LEN);
+
+	/* Route pkt and update required fields */
+	if (ip->ip_v == IPVERSION) {
+		pkt->ol_flags |= lconf->outbound.ipv4_offloads;
+		pkt->l3_len = sizeof(struct ip);
+		pkt->l2_len = RTE_ETHER_HDR_LEN;
+
+		ethhdr->ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+
+		port_id = route4_pkt(pkt, lconf->rt.rt4_ctx);
+	} else {
+		pkt->ol_flags |= lconf->outbound.ipv6_offloads;
+		pkt->l3_len = sizeof(struct ip6_hdr);
+		pkt->l2_len = RTE_ETHER_HDR_LEN;
+
+		ethhdr->ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+
+		port_id = route6_pkt(pkt, lconf->rt.rt6_ctx);
+	}
+
+	if (unlikely(port_id == RTE_MAX_ETHPORTS)) {
+		RTE_LOG_DP(DEBUG, IPSEC, "Cannot route processed packet\n");
+		free_pkts(&pkt, 1);
+		return PKT_DROPPED;
+	}
+
+	/* Update Ether with port's MAC addresses */
+	memcpy(&ethhdr->src_addr, &ethaddr_tbl[port_id].src, sizeof(struct rte_ether_addr));
+	memcpy(&ethhdr->dst_addr, &ethaddr_tbl[port_id].dst, sizeof(struct rte_ether_addr));
+
+	/* Update event */
+	ev->mbuf = pkt;
+
+	return PKT_FORWARDED;
+}
+
 /*
  * Event mode exposes various operating modes depending on the
  * capabilities of the event device and the operating mode
@@ -924,6 +1102,14 @@ ipsec_wrkr_non_burst_int_port_app_mode(struct eh_event_link_info *links,
 		"Launching event mode worker (non-burst - Tx internal port - "
 		"app mode) on lcore %d\n", lcore_id);
 
+	ret = ipsec_sad_lcore_cache_init(app_sa_prm.cache_sz);
+	if (ret != 0) {
+		RTE_LOG(ERR, IPSEC,
+			"SAD cache init on lcore %u, failed with code: %d\n",
+			lcore_id, ret);
+		return;
+	}
+
 	/* Check if it's single link */
 	if (nb_links != 1) {
 		RTE_LOG(INFO, IPSEC,
@@ -950,6 +1136,20 @@ ipsec_wrkr_non_burst_int_port_app_mode(struct eh_event_link_info *links,
 			ipsec_ev_vector_process(&lconf, links, &ev);
 			continue;
 		case RTE_EVENT_TYPE_ETHDEV:
+			if (is_unprotected_port(ev.mbuf->port))
+				ret = process_ipsec_ev_inbound(&lconf.inbound,
+						&lconf.rt, links, &ev);
+			else
+				ret = process_ipsec_ev_outbound(&lconf.outbound,
+						&lconf.rt, links, &ev);
+			if (ret != 1)
+				/* The pkt has been dropped or posted */
+				continue;
+			break;
+		case RTE_EVENT_TYPE_CRYPTODEV:
+			ret = ipsec_ev_cryptodev_process(&lconf, &ev);
+			if (unlikely(ret != PKT_FORWARDED))
+				continue;
 			break;
 		default:
 			RTE_LOG(ERR, IPSEC, "Invalid event type %u",
@@ -957,16 +1157,6 @@ ipsec_wrkr_non_burst_int_port_app_mode(struct eh_event_link_info *links,
 			continue;
 		}
 
-		if (is_unprotected_port(ev.mbuf->port))
-			ret = process_ipsec_ev_inbound(&lconf.inbound,
-							&lconf.rt, &ev);
-		else
-			ret = process_ipsec_ev_outbound(&lconf.outbound,
-							&lconf.rt, &ev);
-		if (ret != 1)
-			/* The pkt has been dropped */
-			continue;
-
 		/*
 		 * Since tx internal port is available, events can be
 		 * directly enqueued to the adapter and it would be
diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c
index 5dca578790..7a0c528f75 100644
--- a/examples/ipsec-secgw/sa.c
+++ b/examples/ipsec-secgw/sa.c
@@ -1235,7 +1235,8 @@ static int
 sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
 		uint32_t nb_entries, uint32_t inbound,
 		struct socket_ctx *skt_ctx,
-		struct ipsec_ctx *ips_ctx[])
+		struct ipsec_ctx *ips_ctx[],
+		const struct eventmode_conf *em_conf)
 {
 	struct ipsec_sa *sa;
 	uint32_t i, idx;
@@ -1408,7 +1409,8 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
 				return -EINVAL;
 			}
 		} else {
-			rc = create_lookaside_session(ips_ctx, skt_ctx, sa, ips);
+			rc = create_lookaside_session(ips_ctx, skt_ctx,
+					em_conf, sa, ips);
 			if (rc != 0) {
 				RTE_LOG(ERR, IPSEC_ESP,
 					"create_lookaside_session() failed\n");
@@ -1431,17 +1433,19 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
 static inline int
 sa_out_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
 		uint32_t nb_entries, struct socket_ctx *skt_ctx,
-		struct ipsec_ctx *ips_ctx[])
+		struct ipsec_ctx *ips_ctx[],
+		const struct eventmode_conf *em_conf)
 {
-	return sa_add_rules(sa_ctx, entries, nb_entries, 0, skt_ctx, ips_ctx);
+	return sa_add_rules(sa_ctx, entries, nb_entries, 0, skt_ctx, ips_ctx, em_conf);
 }
 
 static inline int
 sa_in_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
 		uint32_t nb_entries, struct socket_ctx *skt_ctx,
-		struct ipsec_ctx *ips_ctx[])
+		struct ipsec_ctx *ips_ctx[],
+		const struct eventmode_conf *em_conf)
 {
-	return sa_add_rules(sa_ctx, entries, nb_entries, 1, skt_ctx, ips_ctx);
+	return sa_add_rules(sa_ctx, entries, nb_entries, 1, skt_ctx, ips_ctx, em_conf);
 }
 
 /*
@@ -1673,7 +1677,8 @@ sa_spi_present(struct sa_ctx *sa_ctx, uint32_t spi, int inbound)
 
 void
 sa_init(struct socket_ctx *ctx, int32_t socket_id,
-		struct lcore_conf *lcore_conf)
+		struct lcore_conf *lcore_conf,
+		const struct eventmode_conf *em_conf)
 {
 	int32_t rc;
 	const char *name;
@@ -1705,7 +1710,7 @@ sa_init(struct socket_ctx *ctx, int32_t socket_id,
 			rte_exit(EXIT_FAILURE, "failed to init SAD\n");
 		RTE_LCORE_FOREACH(lcore_id)
 			ipsec_ctx[lcore_id] = &lcore_conf[lcore_id].inbound;
-		sa_in_add_rules(ctx->sa_in, sa_in, nb_sa_in, ctx, ipsec_ctx);
+		sa_in_add_rules(ctx->sa_in, sa_in, nb_sa_in, ctx, ipsec_ctx, em_conf);
 
 		if (app_sa_prm.enable != 0) {
 			rc = ipsec_satbl_init(ctx->sa_in, nb_sa_in,
@@ -1727,7 +1732,7 @@ sa_init(struct socket_ctx *ctx, int32_t socket_id,
 
 		RTE_LCORE_FOREACH(lcore_id)
 			ipsec_ctx[lcore_id] = &lcore_conf[lcore_id].outbound;
-		sa_out_add_rules(ctx->sa_out, sa_out, nb_sa_out, ctx, ipsec_ctx);
+		sa_out_add_rules(ctx->sa_out, sa_out, nb_sa_out, ctx, ipsec_ctx, em_conf);
 
 		if (app_sa_prm.enable != 0) {
 			rc = ipsec_satbl_init(ctx->sa_out, nb_sa_out,