From patchwork Tue Feb 4 13:58:40 2020
X-Patchwork-Submitter: "Lukas Bartosik [C]"
X-Patchwork-Id: 65557
X-Patchwork-Delegate: gakhil@marvell.com
From: Lukasz Bartosik
To: Akhil Goyal, Radu Nicolau, Thomas Monjalon
CC: Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph,
 Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev
Date: Tue, 4 Feb 2020 14:58:40 +0100
Message-ID: <1580824721-21527-13-git-send-email-lbartosik@marvell.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1580824721-21527-1-git-send-email-lbartosik@marvell.com>
References: <1579527918-360-1-git-send-email-anoobj@marvell.com>
 <1580824721-21527-1-git-send-email-lbartosik@marvell.com>
Subject: [dpdk-dev] [PATCH v3 12/13] examples/ipsec-secgw: add app mode worker

Add application inbound/outbound worker thread and IPsec application
processing code for event mode.
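For reviewers, a condensed, illustrative sketch (not part of the diff) of the
per-event flow implemented by the new non-burst, Tx-internal-port app mode
worker added below; the names used here (lconf, links, force_quit,
is_unprotected_port(), process_ipsec_ev_inbound()/_outbound()) are those of
the patch, and only the surrounding setup code is omitted:

  /* Condensed from ipsec_wrkr_non_burst_int_port_app_mode() below */
  while (!force_quit) {
          struct rte_event ev;
          int ret;

          /* Non-burst: dequeue a single event from the first link */
          if (rte_event_dequeue_burst(links[0].eventdev_id,
                          links[0].event_port_id, &ev, 1, 0) == 0)
                  continue;

          /* Unprotected port -> inbound IPsec path,
           * protected port -> outbound IPsec path */
          if (is_unprotected_port(ev.mbuf->port))
                  ret = process_ipsec_ev_inbound(&lconf.inbound,
                                  &lconf.rt, &ev);
          else
                  ret = process_ipsec_ev_outbound(&lconf.outbound,
                                  &lconf.rt, &ev);
          if (ret != 1)
                  continue; /* packet was dropped */

          /* Tx internal port is available: hand the event straight to
           * the eth Tx adapter, which submits it to the eth device */
          rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id,
                          links[0].event_port_id, &ev, 1, 0);
  }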
Example ipsec-secgw command in app mode:
ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 --log-level=8 -c 0x1 -- -P -p 0x3 -u 0x1 --config "(1,0,0),(0,0,0)" -f aes-gcm.cfg --transfer-mode event --event-schedule-type parallel

Signed-off-by: Anoob Joseph
Signed-off-by: Ankur Dwivedi
Signed-off-by: Lukasz Bartosik
---
 examples/ipsec-secgw/ipsec-secgw.c | 31 +--
 examples/ipsec-secgw/ipsec-secgw.h | 65 ++++++
 examples/ipsec-secgw/ipsec.h | 22 --
 examples/ipsec-secgw/ipsec_worker.c | 420 +++++++++++++++++++++++++++++++++++-
 examples/ipsec-secgw/ipsec_worker.h | 39 ++++
 5 files changed, 523 insertions(+), 54 deletions(-)
 create mode 100644 examples/ipsec-secgw/ipsec_worker.h

diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index acd7135..862a7f0 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -50,12 +50,11 @@ #include "event_helper.h" #include "ipsec.h" +#include "ipsec_worker.h" #include "parser.h" volatile bool force_quit; -#define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 - #define MAX_JUMBO_PKT_LEN 9600 #define MEMPOOL_CACHE_SIZE 256 @@ -85,29 +84,6 @@ volatile bool force_quit; static uint16_t nb_rxd = IPSEC_SECGW_RX_DESC_DEFAULT; static uint16_t nb_txd = IPSEC_SECGW_TX_DESC_DEFAULT; -#if RTE_BYTE_ORDER != RTE_LITTLE_ENDIAN -#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \ - (((uint64_t)((a) & 0xff) << 56) | \ - ((uint64_t)((b) & 0xff) << 48) | \ - ((uint64_t)((c) & 0xff) << 40) | \ - ((uint64_t)((d) & 0xff) << 32) | \ - ((uint64_t)((e) & 0xff) << 24) | \ - ((uint64_t)((f) & 0xff) << 16) | \ - ((uint64_t)((g) & 0xff) << 8) | \ - ((uint64_t)(h) & 0xff)) -#else -#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \ - (((uint64_t)((h) & 0xff) << 56) | \ - ((uint64_t)((g) & 0xff) << 48) | \ - ((uint64_t)((f) & 0xff) << 40) | \ - ((uint64_t)((e) & 0xff) << 32) | \ - ((uint64_t)((d) & 0xff) << 24) | \ - ((uint64_t)((c) & 0xff) << 16) | \ - ((uint64_t)((b) & 0xff) << 8) | \ - ((uint64_t)(a) & 0xff)) -#endif -#define ETHADDR(a, b, c, d, e, f) (__BYTES_TO_UINT64(a, b, c, d, e, f, 0, 0)) - #define ETHADDR_TO_UINT64(addr) __BYTES_TO_UINT64( \ (addr)->addr_bytes[0], (addr)->addr_bytes[1], \ (addr)->addr_bytes[2], (addr)->addr_bytes[3], \ @@ -119,11 +95,6 @@ static uint16_t nb_txd = IPSEC_SECGW_TX_DESC_DEFAULT; #define MTU_TO_FRAMELEN(x) ((x) + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN) -/* port/source ethernet addr and destination ethernet addr */ -struct ethaddr_info { - uint64_t src, dst; -}; - struct ethaddr_info ethaddr_tbl[RTE_MAX_ETHPORTS] = { { 0, ETHADDR(0x00, 0x16, 0x3e, 0x7e, 0x94, 0x9a) }, { 0, ETHADDR(0x00, 0x16, 0x3e, 0x22, 0xa1, 0xd9) }, diff --git a/examples/ipsec-secgw/ipsec-secgw.h b/examples/ipsec-secgw/ipsec-secgw.h index 06995cf..2638c8f 100644 --- a/examples/ipsec-secgw/ipsec-secgw.h +++ b/examples/ipsec-secgw/ipsec-secgw.h @@ -4,8 +4,73 @@ #ifndef _IPSEC_SECGW_H_ #define _IPSEC_SECGW_H_ +#include + #define NB_SOCKETS 4 +#define MAX_PKT_BURST 32 + +#define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 + +#if RTE_BYTE_ORDER != RTE_LITTLE_ENDIAN +#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \ + (((uint64_t)((a) & 0xff) << 56) | \ + ((uint64_t)((b) & 0xff) << 48) | \ + ((uint64_t)((c) & 0xff) << 40) | \ + ((uint64_t)((d) & 0xff) << 32) | \ + ((uint64_t)((e) & 0xff) << 24) | \ + ((uint64_t)((f) & 0xff) << 16) | \ + ((uint64_t)((g) & 0xff) << 8) | \ + ((uint64_t)(h) & 0xff)) +#else +#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \ + (((uint64_t)((h) & 
0xff) << 56) | \ + ((uint64_t)((g) & 0xff) << 48) | \ + ((uint64_t)((f) & 0xff) << 40) | \ + ((uint64_t)((e) & 0xff) << 32) | \ + ((uint64_t)((d) & 0xff) << 24) | \ + ((uint64_t)((c) & 0xff) << 16) | \ + ((uint64_t)((b) & 0xff) << 8) | \ + ((uint64_t)(a) & 0xff)) +#endif + +#define ETHADDR(a, b, c, d, e, f) (__BYTES_TO_UINT64(a, b, c, d, e, f, 0, 0)) + +struct traffic_type { + const uint8_t *data[MAX_PKT_BURST * 2]; + struct rte_mbuf *pkts[MAX_PKT_BURST * 2]; + void *saptr[MAX_PKT_BURST * 2]; + uint32_t res[MAX_PKT_BURST * 2]; + uint32_t num; +}; + +struct ipsec_traffic { + struct traffic_type ipsec; + struct traffic_type ip4; + struct traffic_type ip6; +}; + +/* Fields optimized for devices without burst */ +struct traffic_type_nb { + const uint8_t *data; + struct rte_mbuf *pkt; + uint32_t res; + uint32_t num; +}; + +struct ipsec_traffic_nb { + struct traffic_type_nb ipsec; + struct traffic_type_nb ip4; + struct traffic_type_nb ip6; +}; + +/* port/source ethernet addr and destination ethernet addr */ +struct ethaddr_info { + uint64_t src, dst; +}; + +extern struct ethaddr_info ethaddr_tbl[RTE_MAX_ETHPORTS]; + /* Port mask to identify the unprotected ports */ extern uint32_t unprotected_port_mask; diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h index 15360fb..447e936 100644 --- a/examples/ipsec-secgw/ipsec.h +++ b/examples/ipsec-secgw/ipsec.h @@ -15,11 +15,9 @@ #include "ipsec-secgw.h" -#define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 #define RTE_LOGTYPE_IPSEC_ESP RTE_LOGTYPE_USER2 #define RTE_LOGTYPE_IPSEC_IPIP RTE_LOGTYPE_USER3 -#define MAX_PKT_BURST 32 #define MAX_INFLIGHT 128 #define MAX_QP_PER_LCORE 256 @@ -246,29 +244,9 @@ struct cnt_blk { uint32_t cnt; } __attribute__((packed)); -struct traffic_type { - const uint8_t *data[MAX_PKT_BURST * 2]; - struct rte_mbuf *pkts[MAX_PKT_BURST * 2]; - void *saptr[MAX_PKT_BURST * 2]; - uint32_t res[MAX_PKT_BURST * 2]; - uint32_t num; -}; - -struct ipsec_traffic { - struct traffic_type ipsec; - struct traffic_type ip4; - struct traffic_type ip6; -}; - /* Socket ctx */ extern struct socket_ctx socket_ctx[NB_SOCKETS]; -void -ipsec_poll_mode_worker(void); - -int -ipsec_launch_one_lcore(void *args); - extern struct ipsec_sa sa_out[IPSEC_SA_MAX_ENTRIES]; extern uint32_t nb_sa_out; diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c index 3f63ab0..715774b 100644 --- a/examples/ipsec-secgw/ipsec_worker.c +++ b/examples/ipsec-secgw/ipsec_worker.c @@ -15,6 +15,7 @@ #include #include +#include #include #include #include @@ -29,13 +30,52 @@ #include #include #include +#include +#include #include "event_helper.h" #include "ipsec.h" #include "ipsec-secgw.h" +#include "ipsec_worker.h" extern volatile bool force_quit; +static inline enum pkt_type +process_ipsec_get_pkt_type(struct rte_mbuf *pkt, uint8_t **nlp) +{ + struct rte_ether_hdr *eth; + + eth = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *); + if (eth->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4)) { + *nlp = RTE_PTR_ADD(eth, RTE_ETHER_HDR_LEN + + offsetof(struct ip, ip_p)); + if (**nlp == IPPROTO_ESP) + return PKT_TYPE_IPSEC_IPV4; + else + return PKT_TYPE_PLAIN_IPV4; + } else if (eth->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6)) { + *nlp = RTE_PTR_ADD(eth, RTE_ETHER_HDR_LEN + + offsetof(struct ip6_hdr, ip6_nxt)); + if (**nlp == IPPROTO_ESP) + return PKT_TYPE_IPSEC_IPV6; + else + return PKT_TYPE_PLAIN_IPV6; + } + + /* Unknown/Unsupported type */ + return PKT_TYPE_INVALID; +} + +static inline void +update_mac_addrs(struct rte_mbuf *pkt, 
uint16_t portid) +{ + struct rte_ether_hdr *ethhdr; + + ethhdr = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *); + memcpy(ðhdr->s_addr, ðaddr_tbl[portid].src, RTE_ETHER_ADDR_LEN); + memcpy(ðhdr->d_addr, ðaddr_tbl[portid].dst, RTE_ETHER_ADDR_LEN); +} + static inline void ipsec_event_pre_forward(struct rte_mbuf *m, unsigned int port_id) { @@ -86,6 +126,286 @@ prepare_out_sessions_tbl(struct sa_ctx *sa_out, } } +static inline int +check_sp(struct sp_ctx *sp, const uint8_t *nlp, uint32_t *sa_idx) +{ + uint32_t res; + + if (unlikely(sp == NULL)) + return 0; + + rte_acl_classify((struct rte_acl_ctx *)sp, &nlp, &res, 1, + DEFAULT_MAX_CATEGORIES); + + if (unlikely(res == 0)) { + /* No match */ + return 0; + } + + if (res == DISCARD) + return 0; + else if (res == BYPASS) { + *sa_idx = 0; + return 1; + } + + *sa_idx = SPI2IDX(res); + if (*sa_idx < IPSEC_SA_MAX_ENTRIES) + return 1; + + /* Invalid SA IDX */ + return 0; +} + +static inline uint16_t +route4_pkt(struct rte_mbuf *pkt, struct rt_ctx *rt_ctx) +{ + uint32_t dst_ip; + uint16_t offset; + uint32_t hop; + int ret; + + offset = RTE_ETHER_HDR_LEN + offsetof(struct ip, ip_dst); + dst_ip = *rte_pktmbuf_mtod_offset(pkt, uint32_t *, offset); + dst_ip = rte_be_to_cpu_32(dst_ip); + + ret = rte_lpm_lookup((struct rte_lpm *)rt_ctx, dst_ip, &hop); + + if (ret == 0) { + /* We have a hit */ + return hop; + } + + /* else */ + return RTE_MAX_ETHPORTS; +} + +/* TODO: To be tested */ +static inline uint16_t +route6_pkt(struct rte_mbuf *pkt, struct rt_ctx *rt_ctx) +{ + uint8_t dst_ip[16]; + uint8_t *ip6_dst; + uint16_t offset; + uint32_t hop; + int ret; + + offset = RTE_ETHER_HDR_LEN + offsetof(struct ip6_hdr, ip6_dst); + ip6_dst = rte_pktmbuf_mtod_offset(pkt, uint8_t *, offset); + memcpy(&dst_ip[0], ip6_dst, 16); + + ret = rte_lpm6_lookup((struct rte_lpm6 *)rt_ctx, dst_ip, &hop); + + if (ret == 0) { + /* We have a hit */ + return hop; + } + + /* else */ + return RTE_MAX_ETHPORTS; +} + +static inline uint16_t +get_route(struct rte_mbuf *pkt, struct route_table *rt, enum pkt_type type) +{ + if (type == PKT_TYPE_PLAIN_IPV4 || type == PKT_TYPE_IPSEC_IPV4) + return route4_pkt(pkt, rt->rt4_ctx); + else if (type == PKT_TYPE_PLAIN_IPV6 || type == PKT_TYPE_IPSEC_IPV6) + return route6_pkt(pkt, rt->rt6_ctx); + + return RTE_MAX_ETHPORTS; +} + +static inline int +process_ipsec_ev_inbound(struct ipsec_ctx *ctx, struct route_table *rt, + struct rte_event *ev) +{ + struct ipsec_sa *sa = NULL; + struct rte_mbuf *pkt; + uint16_t port_id = 0; + enum pkt_type type; + uint32_t sa_idx; + uint8_t *nlp; + + /* Get pkt from event */ + pkt = ev->mbuf; + + /* Check the packet type */ + type = process_ipsec_get_pkt_type(pkt, &nlp); + + switch (type) { + case PKT_TYPE_PLAIN_IPV4: + if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) { + if (unlikely(pkt->ol_flags & + PKT_RX_SEC_OFFLOAD_FAILED)) { + RTE_LOG(ERR, IPSEC, + "Inbound security offload failed\n"); + goto drop_pkt_and_exit; + } + sa = pkt->userdata; + } + + /* Check if we have a match */ + if (check_sp(ctx->sp4_ctx, nlp, &sa_idx) == 0) { + /* No valid match */ + goto drop_pkt_and_exit; + } + break; + + case PKT_TYPE_PLAIN_IPV6: + if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) { + if (unlikely(pkt->ol_flags & + PKT_RX_SEC_OFFLOAD_FAILED)) { + RTE_LOG(ERR, IPSEC, + "Inbound security offload failed\n"); + goto drop_pkt_and_exit; + } + sa = pkt->userdata; + } + + /* Check if we have a match */ + if (check_sp(ctx->sp6_ctx, nlp, &sa_idx) == 0) { + /* No valid match */ + goto drop_pkt_and_exit; + } + break; + + default: + RTE_LOG(ERR, IPSEC, "Unsupported 
packet type = %d\n", type); + goto drop_pkt_and_exit; + } + + /* Check if the packet has to be bypassed */ + if (sa_idx == 0) + goto route_and_send_pkt; + + /* Else the packet has to be protected with SA */ + + /* If the packet was IPsec processed, then SA pointer should be set */ + if (sa == NULL) + goto drop_pkt_and_exit; + + /* SPI on the packet should match with the one in SA */ + if (unlikely(sa->spi != sa_idx)) + goto drop_pkt_and_exit; + +route_and_send_pkt: + port_id = get_route(pkt, rt, type); + if (unlikely(port_id == RTE_MAX_ETHPORTS)) { + /* no match */ + goto drop_pkt_and_exit; + } + /* else, we have a matching route */ + + /* Update mac addresses */ + update_mac_addrs(pkt, port_id); + + /* Update the event with the dest port */ + ipsec_event_pre_forward(pkt, port_id); + return 1; + +drop_pkt_and_exit: + RTE_LOG(ERR, IPSEC, "Inbound packet dropped\n"); + rte_pktmbuf_free(pkt); + ev->mbuf = NULL; + return 0; +} + +static inline int +process_ipsec_ev_outbound(struct ipsec_ctx *ctx, struct route_table *rt, + struct rte_event *ev) +{ + struct rte_ipsec_session *sess; + struct sa_ctx *sa_ctx; + struct rte_mbuf *pkt; + uint16_t port_id = 0; + struct ipsec_sa *sa; + enum pkt_type type; + uint32_t sa_idx; + uint8_t *nlp; + + /* Get pkt from event */ + pkt = ev->mbuf; + + /* Check the packet type */ + type = process_ipsec_get_pkt_type(pkt, &nlp); + + switch (type) { + case PKT_TYPE_PLAIN_IPV4: + /* Check if we have a match */ + if (check_sp(ctx->sp4_ctx, nlp, &sa_idx) == 0) { + /* No valid match */ + goto drop_pkt_and_exit; + } + break; + case PKT_TYPE_PLAIN_IPV6: + /* Check if we have a match */ + if (check_sp(ctx->sp6_ctx, nlp, &sa_idx) == 0) { + /* No valid match */ + goto drop_pkt_and_exit; + } + break; + default: + /* + * Only plain IPv4 & IPv6 packets are allowed + * on protected port. Drop the rest. 
+ */ + RTE_LOG(ERR, IPSEC, "Unsupported packet type = %d\n", type); + goto drop_pkt_and_exit; + } + + /* Check if the packet has to be bypassed */ + if (sa_idx == 0) { + port_id = get_route(pkt, rt, type); + if (unlikely(port_id == RTE_MAX_ETHPORTS)) { + /* no match */ + goto drop_pkt_and_exit; + } + /* else, we have a matching route */ + goto send_pkt; + } + + /* Else the packet has to be protected */ + + /* Get SA ctx*/ + sa_ctx = ctx->sa_ctx; + + /* Get SA */ + sa = &(sa_ctx->sa[sa_idx]); + + /* Get IPsec session */ + sess = ipsec_get_primary_session(sa); + + /* Allow only inline protocol for now */ + if (sess->type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) { + RTE_LOG(ERR, IPSEC, "SA type not supported\n"); + goto drop_pkt_and_exit; + } + + if (sess->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA) + pkt->userdata = sess->security.ses; + + /* Mark the packet for Tx security offload */ + pkt->ol_flags |= PKT_TX_SEC_OFFLOAD; + + /* Get the port to which this pkt need to be submitted */ + port_id = sa->portid; + +send_pkt: + /* Update mac addresses */ + update_mac_addrs(pkt, port_id); + + /* Update the event with the dest port */ + ipsec_event_pre_forward(pkt, port_id); + return 1; + +drop_pkt_and_exit: + RTE_LOG(ERR, IPSEC, "Outbound packet dropped\n"); + rte_pktmbuf_free(pkt); + ev->mbuf = NULL; + return 0; +} + /* * Event mode exposes various operating modes depending on the * capabilities of the event device and the operating mode @@ -93,7 +413,7 @@ prepare_out_sessions_tbl(struct sa_ctx *sa_out, */ /* Workers registered */ -#define IPSEC_EVENTMODE_WORKERS 1 +#define IPSEC_EVENTMODE_WORKERS 2 /* * Event mode worker @@ -171,7 +491,7 @@ ipsec_wrkr_non_burst_int_port_drv_mode(struct eh_event_link_info *links, } /* Save security session */ - pkt->udata64 = (uint64_t) sess_tbl[port_id]; + pkt->userdata = sess_tbl[port_id]; /* Mark the packet for Tx security offload */ pkt->ol_flags |= PKT_TX_SEC_OFFLOAD; @@ -190,6 +510,94 @@ ipsec_wrkr_non_burst_int_port_drv_mode(struct eh_event_link_info *links, } } +/* + * Event mode worker + * Operating parameters : non-burst - Tx internal port - app mode + */ +static void +ipsec_wrkr_non_burst_int_port_app_mode(struct eh_event_link_info *links, + uint8_t nb_links) +{ + struct lcore_conf_ev_tx_int_port_wrkr lconf; + unsigned int nb_rx = 0; + struct rte_event ev; + uint32_t lcore_id; + int32_t socket_id; + int ret; + + /* Check if we have links registered for this lcore */ + if (nb_links == 0) { + /* No links registered - exit */ + return; + } + + /* We have valid links */ + + /* Get core ID */ + lcore_id = rte_lcore_id(); + + /* Get socket ID */ + socket_id = rte_lcore_to_socket_id(lcore_id); + + /* Save routing table */ + lconf.rt.rt4_ctx = socket_ctx[socket_id].rt_ip4; + lconf.rt.rt6_ctx = socket_ctx[socket_id].rt_ip6; + lconf.inbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_in; + lconf.inbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_in; + lconf.inbound.sa_ctx = socket_ctx[socket_id].sa_in; + lconf.inbound.session_pool = socket_ctx[socket_id].session_pool; + lconf.outbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_out; + lconf.outbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_out; + lconf.outbound.sa_ctx = socket_ctx[socket_id].sa_out; + lconf.outbound.session_pool = socket_ctx[socket_id].session_pool; + + RTE_LOG(INFO, IPSEC, + "Launching event mode worker (non-burst - Tx internal port - " + "app mode) on lcore %d\n", lcore_id); + + /* Check if it's single link */ + if (nb_links != 1) { + RTE_LOG(INFO, IPSEC, + "Multiple links not supported. 
Using first link\n"); + } + + RTE_LOG(INFO, IPSEC, " -- lcoreid=%u event_port_id=%u\n", lcore_id, + links[0].event_port_id); + + while (!force_quit) { + /* Read packet from event queues */ + nb_rx = rte_event_dequeue_burst(links[0].eventdev_id, + links[0].event_port_id, + &ev, /* events */ + 1, /* nb_events */ + 0 /* timeout_ticks */); + + if (nb_rx == 0) + continue; + + if (is_unprotected_port(ev.mbuf->port)) + ret = process_ipsec_ev_inbound(&lconf.inbound, + &lconf.rt, &ev); + else + ret = process_ipsec_ev_outbound(&lconf.outbound, + &lconf.rt, &ev); + if (ret != 1) + /* The pkt has been dropped */ + continue; + + /* + * Since tx internal port is available, events can be + * directly enqueued to the adapter and it would be + * internally submitted to the eth device. + */ + rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id, + links[0].event_port_id, + &ev, /* events */ + 1, /* nb_events */ + 0 /* flags */); + } +} + static uint8_t ipsec_eventmode_populate_wrkr_params(struct eh_app_worker_params *wrkrs) { @@ -205,6 +613,14 @@ ipsec_eventmode_populate_wrkr_params(struct eh_app_worker_params *wrkrs) wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; wrkr->worker_thread = ipsec_wrkr_non_burst_int_port_drv_mode; wrkr++; + nb_wrkr_param++; + + /* Non-burst - Tx internal port - app mode */ + wrkr->cap.burst = EH_RX_TYPE_NON_BURST; + wrkr->cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT; + wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_APP; + wrkr->worker_thread = ipsec_wrkr_non_burst_int_port_app_mode; + nb_wrkr_param++; return nb_wrkr_param; } diff --git a/examples/ipsec-secgw/ipsec_worker.h b/examples/ipsec-secgw/ipsec_worker.h new file mode 100644 index 0000000..1b18b3c --- /dev/null +++ b/examples/ipsec-secgw/ipsec_worker.h @@ -0,0 +1,39 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C) 2020 Marvell International Ltd. + */ +#ifndef _IPSEC_WORKER_H_ +#define _IPSEC_WORKER_H_ + +#include "ipsec.h" + +enum pkt_type { + PKT_TYPE_PLAIN_IPV4 = 1, + PKT_TYPE_IPSEC_IPV4, + PKT_TYPE_PLAIN_IPV6, + PKT_TYPE_IPSEC_IPV6, + PKT_TYPE_INVALID +}; + +struct route_table { + struct rt_ctx *rt4_ctx; + struct rt_ctx *rt6_ctx; +}; + +/* + * Conf required by event mode worker with tx internal port + */ +struct lcore_conf_ev_tx_int_port_wrkr { + struct ipsec_ctx inbound; + struct ipsec_ctx outbound; + struct route_table rt; +} __rte_cache_aligned; + +/* TODO + * + * Move this function to ipsec_worker.c + */ +void ipsec_poll_mode_worker(void); + +int ipsec_launch_one_lcore(void *args); + +#endif /* _IPSEC_WORKER_H_ */