From patchwork Tue Apr 25 13:15:13 2023
X-Patchwork-Submitter: Vamsi Krishna Attunuru
X-Patchwork-Id: 126508
X-Patchwork-Delegate: thomas@monjalon.net
From: Vamsi Attunuru
Subject: [PATCH v2 1/4] node: add pkt punt to kernel node
Date: Tue, 25 Apr 2023 06:15:13 -0700
Message-ID: <20230425131516.3308612-2-vattunuru@marvell.com>
In-Reply-To: <20230425131516.3308612-1-vattunuru@marvell.com>
References: <20230421060245.3136217-1-vattunuru@marvell.com> <20230425131516.3308612-1-vattunuru@marvell.com>

Patch adds a node to punt the packets to kernel over a raw socket.
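The punt path uses a plain Linux raw socket rather than a KNI or virtio-user style interface: the node opens an AF_INET/SOCK_RAW/IPPROTO_RAW socket at init time and, for each mbuf, calls sendto() with the destination address taken from the packet's own IPv4 header. As a rough standalone illustration of that mechanism (not part of the patch, and it needs CAP_NET_RAW to run):

/*
 * Sketch only, not part of the patch: punt one already-built IPv4
 * packet to the kernel stack through a raw socket, the same way the
 * punt_kernel node does per mbuf.
 */
#include <netinet/in.h>
#include <netinet/ip.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

static int punt_ipv4_packet(const void *pkt, size_t len)
{
	const struct iphdr *ip = pkt;
	struct sockaddr_in sin = {0};
	int sock, rc = 0;

	/* IPPROTO_RAW implies IP_HDRINCL: the caller supplies the IP header. */
	sock = socket(AF_INET, SOCK_RAW, IPPROTO_RAW);
	if (sock < 0) {
		perror("socket");
		return -1;
	}

	sin.sin_family = AF_INET;
	sin.sin_addr.s_addr = ip->daddr; /* kernel routes on this address */

	if (sendto(sock, pkt, len, 0, (struct sockaddr *)&sin, sizeof(sin)) < 0) {
		perror("sendto");
		rc = -1;
	}

	close(sock);
	return rc;
}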
Signed-off-by: Vamsi Attunuru --- doc/guides/prog_guide/graph_lib.rst | 10 +++ lib/node/meson.build | 1 + lib/node/punt_kernel.c | 125 ++++++++++++++++++++++++++++ lib/node/punt_kernel_priv.h | 36 ++++++++ 4 files changed, 172 insertions(+) diff --git a/doc/guides/prog_guide/graph_lib.rst b/doc/guides/prog_guide/graph_lib.rst index 1cfdc86433..b3b5b14827 100644 --- a/doc/guides/prog_guide/graph_lib.rst +++ b/doc/guides/prog_guide/graph_lib.rst @@ -392,3 +392,13 @@ null ~~~~ This node ignores the set of objects passed to it and reports that all are processed. + +punt_kernel +~~~~~~~~~~~ +This node punts the packets to kernel using a raw socket interface. For sending +the received packets, raw socket uses the packet's destination IP address in +sockaddr_in address structure and node uses ``sendto`` function to send data +on the raw socket. + +Aftering sending the burst of packets to kernel, this node redirects the same +objects to pkt_drop node to free up the packet buffers. diff --git a/lib/node/meson.build b/lib/node/meson.build index dbdf673c86..48c2da73f7 100644 --- a/lib/node/meson.build +++ b/lib/node/meson.build @@ -17,6 +17,7 @@ sources = files( 'null.c', 'pkt_cls.c', 'pkt_drop.c', + 'punt_kernel.c', ) headers = files('rte_node_ip4_api.h', 'rte_node_eth_api.h') # Strict-aliasing rules are violated by uint8_t[] to context size casts. diff --git a/lib/node/punt_kernel.c b/lib/node/punt_kernel.c new file mode 100644 index 0000000000..e5dd15b759 --- /dev/null +++ b/lib/node/punt_kernel.c @@ -0,0 +1,125 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell International Ltd. + */ + +#include +#include +#include +#include + +#include +#include +#include +#include +#include + +#include "node_private.h" +#include "punt_kernel_priv.h" + +static __rte_always_inline void +punt_kernel_process_mbuf(struct rte_node *node, struct rte_mbuf **mbufs, uint16_t cnt) +{ + punt_kernel_node_ctx_t *ctx = (punt_kernel_node_ctx_t *)node->ctx; + struct sockaddr_in sin = {0}; + struct rte_ipv4_hdr *ip4; + size_t len; + char *buf; + int i; + + for (i = 0; i < cnt; i++) { + ip4 = rte_pktmbuf_mtod(mbufs[i], struct rte_ipv4_hdr *); + len = rte_pktmbuf_data_len(mbufs[i]); + buf = (char *)ip4; + + sin.sin_family = AF_INET; + sin.sin_port = 0; + sin.sin_addr.s_addr = ip4->dst_addr; + + if (sendto(ctx->sock, buf, len, 0, (struct sockaddr *)&sin, sizeof(sin)) < 0) + node_err("punt_kernel", "Unable to send packets: %s\n", strerror(errno)); + } +} + +static uint16_t +punt_kernel_node_process(struct rte_graph *graph __rte_unused, struct rte_node *node, void **objs, + uint16_t nb_objs) +{ + struct rte_mbuf **pkts = (struct rte_mbuf **)objs; + uint16_t obj_left = nb_objs; + +#define PREFETCH_CNT 4 + + while (obj_left >= 12) { + /* Prefetch next-next mbufs */ + rte_prefetch0(pkts[8]); + rte_prefetch0(pkts[9]); + rte_prefetch0(pkts[10]); + rte_prefetch0(pkts[11]); + + /* Prefetch next mbuf data */ + rte_prefetch0(rte_pktmbuf_mtod_offset(pkts[4], void *, pkts[4]->l2_len)); + rte_prefetch0(rte_pktmbuf_mtod_offset(pkts[5], void *, pkts[5]->l2_len)); + rte_prefetch0(rte_pktmbuf_mtod_offset(pkts[6], void *, pkts[6]->l2_len)); + rte_prefetch0(rte_pktmbuf_mtod_offset(pkts[7], void *, pkts[7]->l2_len)); + + punt_kernel_process_mbuf(node, pkts, PREFETCH_CNT); + + obj_left -= PREFETCH_CNT; + pkts += PREFETCH_CNT; + } + + while (obj_left > 0) { + punt_kernel_process_mbuf(node, pkts, 1); + + obj_left--; + pkts++; + } + + rte_node_next_stream_move(graph, node, PUNT_KERNEL_NEXT_PKT_DROP); + + return nb_objs; +} + 
+static int +punt_kernel_node_init(const struct rte_graph *graph __rte_unused, struct rte_node *node) +{ + punt_kernel_node_ctx_t *ctx = (punt_kernel_node_ctx_t *)node->ctx; + + ctx->sock = socket(AF_INET, SOCK_RAW, IPPROTO_RAW); + if (ctx->sock < 0) + node_err("punt_kernel", "Unable to open RAW socket\n"); + + return 0; +} + +static void +punt_kernel_node_fini(const struct rte_graph *graph __rte_unused, struct rte_node *node) +{ + punt_kernel_node_ctx_t *ctx = (punt_kernel_node_ctx_t *)node->ctx; + + if (ctx->sock >= 0) { + close(ctx->sock); + ctx->sock = -1; + } +} + +static struct rte_node_register punt_kernel_node_base = { + .process = punt_kernel_node_process, + .name = "punt_kernel", + + .init = punt_kernel_node_init, + .fini = punt_kernel_node_fini, + + .nb_edges = PUNT_KERNEL_NEXT_MAX, + .next_nodes = { + [PUNT_KERNEL_NEXT_PKT_DROP] = "pkt_drop", + }, +}; + +struct rte_node_register * +punt_kernel_node_get(void) +{ + return &punt_kernel_node_base; +} + +RTE_NODE_REGISTER(punt_kernel_node_base); diff --git a/lib/node/punt_kernel_priv.h b/lib/node/punt_kernel_priv.h new file mode 100644 index 0000000000..25ba2072e7 --- /dev/null +++ b/lib/node/punt_kernel_priv.h @@ -0,0 +1,36 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell International Ltd. + */ + +#ifndef __INCLUDE_PUNT_KERNEL_PRIV_H__ +#define __INCLUDE_PUNT_KERNEL_PRIV_H__ + +struct punt_kernel_node_elem; +struct punt_kernel_node_ctx; +typedef struct punt_kernel_node_elem punt_kernel_node_elem_t; + +/** + * @internal + * + * PUNT Kernel node context structure. + */ +typedef struct punt_kernel_node_ctx { + int sock; +} punt_kernel_node_ctx_t; + +enum punt_kernel_next_nodes { + PUNT_KERNEL_NEXT_PKT_DROP, + PUNT_KERNEL_NEXT_MAX, +}; + +/** + * @internal + * + * Get the PUNT Kernel node. + * + * @return + * Pointer to the PUNT Kernel node. 
+ */ +struct rte_node_register *punt_kernel_node_get(void); + +#endif /* __INCLUDE_PUNT_KERNEL_PRIV_H__ */
From patchwork Tue Apr 25 13:15:14 2023
X-Patchwork-Submitter: Vamsi Krishna Attunuru
X-Patchwork-Id: 126509
X-Patchwork-Delegate: thomas@monjalon.net
From: Vamsi Attunuru
Subject: [PATCH v2 2/4] node: add a node to receive pkts from kernel
Date: Tue, 25 Apr 2023 06:15:14 -0700
Message-ID: <20230425131516.3308612-3-vattunuru@marvell.com>
In-Reply-To: <20230425131516.3308612-1-vattunuru@marvell.com>
References: <20230421060245.3136217-1-vattunuru@marvell.com> <20230425131516.3308612-1-vattunuru@marvell.com>

Patch adds a node to receive packets from kernel over a raw socket.
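On the receive side the node again relies on a raw socket: the graph walk polls the socket fd with a zero timeout so it never blocks, and while POLLIN is reported it read()s packets into freshly allocated mbufs before moving them to the next node. A rough standalone sketch of that loop (illustration only, not part of the patch; deliver() is a hypothetical stand-in for filling an mbuf and enqueuing it):

/*
 * Sketch only, not part of the patch: non-blocking drain of a packet
 * socket with poll() + read(), mirroring the kernel_recv node's
 * receive loop. deliver() is a placeholder for the mbuf handling.
 */
#include <poll.h>
#include <unistd.h>

#define PKT_BUF_SZ 2048

static int drain_socket(int sock, void (*deliver)(const char *pkt, ssize_t len))
{
	struct pollfd fds = { .fd = sock, .events = POLLIN };
	char buf[PKT_BUF_SZ];
	int npkts = 0;

	/* Zero timeout: return immediately when nothing is pending. */
	while (poll(&fds, 1, 0) > 0 && (fds.revents & POLLIN)) {
		ssize_t len = read(sock, buf, sizeof(buf));

		if (len <= 0)
			break;
		deliver(buf, len);
		npkts++;
	}

	return npkts;
}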
Signed-off-by: Vamsi Attunuru --- doc/guides/prog_guide/graph_lib.rst | 7 + lib/node/kernel_recv.c | 276 ++++++++++++++++++++++++++++ lib/node/kernel_recv_priv.h | 74 ++++++++ lib/node/meson.build | 1 + 4 files changed, 358 insertions(+) diff --git a/doc/guides/prog_guide/graph_lib.rst b/doc/guides/prog_guide/graph_lib.rst index b3b5b14827..1057f16de8 100644 --- a/doc/guides/prog_guide/graph_lib.rst +++ b/doc/guides/prog_guide/graph_lib.rst @@ -402,3 +402,10 @@ on the raw socket. Aftering sending the burst of packets to kernel, this node redirects the same objects to pkt_drop node to free up the packet buffers. + +kernel_recv +~~~~~~~~~~~ +This node receives packets from kernel over a raw socket interface. Uses ``poll`` +function to poll on the socket fd for ``POLLIN`` events to read the packets from +raw socket to stream buffer and does ``rte_node_next_stream_move()`` when there +are received packets. diff --git a/lib/node/kernel_recv.c b/lib/node/kernel_recv.c new file mode 100644 index 0000000000..9b28ad76d3 --- /dev/null +++ b/lib/node/kernel_recv.c @@ -0,0 +1,276 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell International Ltd. + */ + +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "ethdev_rx_priv.h" +#include "kernel_recv_priv.h" +#include "node_private.h" + +static struct kernel_recv_node_main kernel_recv_main; + +static inline struct rte_mbuf * +alloc_rx_mbuf(kernel_recv_node_ctx_t *ctx) +{ + kernel_recv_info_t *rx = ctx->recv_info; + + if (rx->idx >= rx->cnt) { + uint16_t cnt; + + rx->idx = 0; + rx->cnt = 0; + + cnt = rte_pktmbuf_alloc_bulk(ctx->pktmbuf_pool, rx->rx_bufs, KERN_RECV_CACHE_COUNT); + if (cnt <= 0) + return NULL; + + rx->cnt = cnt; + } + + return rx->rx_bufs[rx->idx++]; +} + +static inline void +mbuf_update(struct rte_mbuf **mbufs, uint16_t nb_pkts) +{ + struct rte_net_hdr_lens hdr_lens; + struct rte_mbuf *m; + int i; + + for (i = 0; i < nb_pkts; i++) { + m = mbufs[i]; + + m->packet_type = rte_net_get_ptype(m, &hdr_lens, RTE_PTYPE_ALL_MASK); + + m->ol_flags = 0; + m->tx_offload = 0; + + m->l2_len = hdr_lens.l2_len; + m->l3_len = hdr_lens.l3_len; + m->l4_len = hdr_lens.l4_len; + } +} + +static uint16_t +recv_pkt_parse(void **objs, uint16_t nb_pkts) +{ + uint16_t pkts_left = nb_pkts; + struct rte_mbuf **pkts; + int i; + + pkts = (struct rte_mbuf **)objs; + + if (pkts_left >= 4) { + for (i = 0; i < 4; i++) + rte_prefetch0(rte_pktmbuf_mtod(pkts[i], void *)); + } + + while (pkts_left >= 12) { + /* Prefetch next-next mbufs */ + rte_prefetch0(pkts[8]); + rte_prefetch0(pkts[9]); + rte_prefetch0(pkts[10]); + rte_prefetch0(pkts[11]); + + /* Prefetch next mbuf data */ + rte_prefetch0(rte_pktmbuf_mtod(pkts[4], void *)); + rte_prefetch0(rte_pktmbuf_mtod(pkts[5], void *)); + rte_prefetch0(rte_pktmbuf_mtod(pkts[6], void *)); + rte_prefetch0(rte_pktmbuf_mtod(pkts[7], void *)); + + /* Extract ptype of mbufs */ + mbuf_update(pkts, 4); + + pkts += 4; + pkts_left -= 4; + } + + if (pkts_left > 0) + mbuf_update(pkts, pkts_left); + + return nb_pkts; +} + +static uint16_t +kernel_recv_node_do(struct rte_graph *graph, struct rte_node *node, kernel_recv_node_ctx_t *ctx) +{ + kernel_recv_info_t *rx; + uint16_t next_index; + int fd; + + rx = ctx->recv_info; + next_index = rx->cls_next; + + fd = rx->sock; + if (fd > 0) { + struct rte_mbuf **mbufs; + uint16_t len = 0, count = 0; + int nb_cnt, i; + + nb_cnt = (node->size >= RTE_GRAPH_BURST_SIZE) ? 
RTE_GRAPH_BURST_SIZE : node->size; + + mbufs = (struct rte_mbuf **)node->objs; + for (i = 0; i < nb_cnt; i++) { + struct rte_mbuf *m = alloc_rx_mbuf(ctx); + + if (!m) + break; + + len = read(fd, rte_pktmbuf_mtod(m, char *), rte_pktmbuf_tailroom(m)); + if (len == 0 || len == 0xFFFF) { + rte_pktmbuf_free(m); + if (rx->idx <= 0) + node_dbg("kernel_recv", "rx_mbuf array is empty\n"); + rx->idx--; + break; + } + *mbufs++ = m; + + m->port = node->id; + rte_pktmbuf_data_len(m) = len; + + count++; + } + + if (count) { + recv_pkt_parse(node->objs, count); + node->idx = count; + + /* Enqueue to next node */ + rte_node_next_stream_move(graph, node, next_index); + } + + return count; + } + + return 0; +} + +static uint16_t +kernel_recv_node_process(struct rte_graph *graph, struct rte_node *node, void **objs, + uint16_t nb_objs) +{ + kernel_recv_node_ctx_t *ctx = (kernel_recv_node_ctx_t *)node->ctx; + int fd; + + RTE_SET_USED(objs); + RTE_SET_USED(nb_objs); + + if (!ctx) + return 0; + + fd = ctx->recv_info->sock; + if (fd > 0) { + struct pollfd fds = {.fd = fd, .events = POLLIN}; + + if (poll(&fds, 1, 0) > 0) { + if (fds.revents & POLLIN) + return kernel_recv_node_do(graph, node, ctx); + } + } + + return 0; +} + +static int +kernel_recv_node_init(const struct rte_graph *graph, struct rte_node *node) +{ + kernel_recv_node_ctx_t *ctx = (kernel_recv_node_ctx_t *)node->ctx; + kernel_recv_node_elem_t *elem = kernel_recv_main.head; + kernel_recv_info_t *recv_info; + int sock; + + while (elem) { + if (elem->nid == node->id) { + /* Update node specific context */ + memcpy(ctx, &elem->ctx, sizeof(kernel_recv_node_ctx_t)); + break; + } + elem = elem->next; + } + + RTE_VERIFY(elem != NULL); + + if (ctx->pktmbuf_pool == NULL) { + node_err("kernel_recv", "Invalid mbuf pool on graph %s\n", graph->name); + return -EINVAL; + } + + recv_info = + rte_zmalloc("kernel_recv_info", sizeof(kernel_recv_info_t), RTE_CACHE_LINE_SIZE); + if (!recv_info) { + node_err("kernel_recv", "Kernel recv_info is NULL\n"); + return -ENOMEM; + } + + sock = socket(AF_INET, SOCK_RAW, IPPROTO_RAW); + if (sock < 0) { + node_err("kernel_recv", "Unable to open RAW socket\n"); + return sock; + } + + recv_info->sock = sock; + ctx->recv_info = recv_info; + + return 0; +} + +static void +kernel_recv_node_fini(const struct rte_graph *graph __rte_unused, struct rte_node *node) +{ + kernel_recv_node_ctx_t *ctx = (kernel_recv_node_ctx_t *)node->ctx; + + if (ctx->recv_info) { + close(ctx->recv_info->sock); + ctx->recv_info->sock = -1; + rte_free(ctx->recv_info); + } + + ctx->recv_info = NULL; +} + +struct kernel_recv_node_main * +kernel_recv_node_data_get(void) +{ + return &kernel_recv_main; +} + +static struct rte_node_register kernel_recv_node_base = { + .process = kernel_recv_node_process, + .flags = RTE_NODE_SOURCE_F, + .name = "kernel_recv", + + .init = kernel_recv_node_init, + .fini = kernel_recv_node_fini, + + .nb_edges = KERNEL_RECV_NEXT_MAX, + .next_nodes = { + /* Default pkt classification node */ + [KERNEL_RECV_NEXT_PKT_CLS] = "pkt_cls", + [KERNEL_RECV_NEXT_IP4_LOOKUP] = "ip4_lookup", + }, +}; + +struct rte_node_register * +kernel_recv_node_get(void) +{ + return &kernel_recv_node_base; +} + +RTE_NODE_REGISTER(kernel_recv_node_base); diff --git a/lib/node/kernel_recv_priv.h b/lib/node/kernel_recv_priv.h new file mode 100644 index 0000000000..c2bfbf40bd --- /dev/null +++ b/lib/node/kernel_recv_priv.h @@ -0,0 +1,74 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell International Ltd. 
+ */ + +#ifndef __INCLUDE_KERNEL_RECV_PRIV_H__ +#define __INCLUDE_KERNEL_RECV_PRIV_H__ + +#define KERN_RECV_CACHE_COUNT 64 + +typedef struct kernel_recv_info { + struct rte_mbuf *rx_bufs[KERN_RECV_CACHE_COUNT]; + uint16_t cls_next; + uint16_t idx; + uint16_t cnt; + int sock; +} kernel_recv_info_t; + +/** + * @internal + * + * Kernel Recv node context structure. + */ +typedef struct kernel_recv_node_ctx { + struct rte_mempool *pktmbuf_pool; + kernel_recv_info_t *recv_info; +} kernel_recv_node_ctx_t; + +/** + * @internal + * + * Kernel Recv node list element structure. + */ +typedef struct kernel_recv_node_elem { + struct kernel_recv_node_elem *next; /**< Pointer to the next node element. */ + struct kernel_recv_node_ctx ctx; /**< Kernel Recv node context. */ + rte_node_t nid; /**< Node identifier of the Kernel Recv node. */ +} kernel_recv_node_elem_t; + +enum kernel_recv_next_nodes { + KERNEL_RECV_NEXT_IP4_LOOKUP, + KERNEL_RECV_NEXT_PKT_CLS, + KERNEL_RECV_NEXT_MAX, +}; + +/** + * @internal + * + * Kernel Recv node main structure. + */ +struct kernel_recv_node_main { + kernel_recv_node_elem_t *head; /**< Pointer to the head node element. */ +}; + +/** + * @internal + * + * Get the Kernel Recv node data. + * + * @return + * Pointer to Kernel Recv node data. + */ +struct kernel_recv_node_main *kernel_recv_node_data_get(void); + +/** + * @internal + * + * Get the Kernel Recv node. + * + * @return + * Pointer to the Kernel Recv node. + */ +struct rte_node_register *kernel_recv_node_get(void); + +#endif /* __INCLUDE_KERNEL_RECV_PRIV_H__ */ diff --git a/lib/node/meson.build b/lib/node/meson.build index 48c2da73f7..8b1b024f61 100644 --- a/lib/node/meson.build +++ b/lib/node/meson.build @@ -13,6 +13,7 @@ sources = files( 'ethdev_tx.c', 'ip4_lookup.c', 'ip4_rewrite.c', + 'kernel_recv.c', 'log.c', 'null.c', 'pkt_cls.c', From patchwork Tue Apr 25 13:15:15 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vamsi Krishna Attunuru X-Patchwork-Id: 126510 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 3A5E1429EB; Tue, 25 Apr 2023 15:16:06 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 1236C42D33; Tue, 25 Apr 2023 15:15:51 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 3945F42D12 for ; Tue, 25 Apr 2023 15:15:49 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 33PC7PXC004832; Tue, 25 Apr 2023 06:15:48 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=1A62cTdvOBmAEuF1KAOqe6dLjj9imzCUvhfWsk3X8SY=; b=BzcKj3hSnixjKXIkxVoo81WAobUG4MpOnDVJCQsoZ7Z0WISY6mH6jAa0RAmI72blCiSU kWbGzEI09IJBdWqw5tofcYu2pkG1oabsR9QuJwB0UQ31NmwPzOY2UsccBghrveAxpvUV 72IryZ3QZwWN3ZNI6wT/kH3fTDwbBpw/U/8fT0Y1yqpOwqeY2/9a7PbZk24O9qxJOPR7 78o39u/cfCXSWIWoY+YIR7fimRNceExs1OQmxmr0WXgnhqfoFqrFtJqcRsiAwMXIXRmZ UnzjE9P4CGZ/hOlscfYHFHmkBw9VMOSN1qIjPbTqzLp1Jjt7H4tEiA7HkSIfiFPHmshd EA== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com (PPS) with 
ESMTPS id 3q4f3pa6ar-3 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Tue, 25 Apr 2023 06:15:48 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Tue, 25 Apr 2023 06:15:46 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Tue, 25 Apr 2023 06:15:46 -0700 Received: from localhost.localdomain (unknown [10.28.36.156]) by maili.marvell.com (Postfix) with ESMTP id 74B055B6931; Tue, 25 Apr 2023 06:15:44 -0700 (PDT) From: Vamsi Attunuru To: , , CC: , Subject: [PATCH v2 3/4] node: remove hardcoded node next details Date: Tue, 25 Apr 2023 06:15:15 -0700 Message-ID: <20230425131516.3308612-4-vattunuru@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230425131516.3308612-1-vattunuru@marvell.com> References: <20230421060245.3136217-1-vattunuru@marvell.com> <20230425131516.3308612-1-vattunuru@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: bXwpzkTD8_-ZeOIK_mRCJKFhbE4m_xcs X-Proofpoint-GUID: bXwpzkTD8_-ZeOIK_mRCJKFhbE4m_xcs X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.942,Hydra:6.0.573,FMLib:17.11.170.22 definitions=2023-04-25_05,2023-04-25_01,2023-02-09_01 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org For ethdev_rx node, node_next details can be populated during node cloning time and same gets assigned to node context structure during node initialization. Patch removes overriding node_next details in node init(). Signed-off-by: Vamsi Attunuru --- lib/node/ethdev_rx.c | 2 -- 1 file changed, 2 deletions(-) diff --git a/lib/node/ethdev_rx.c b/lib/node/ethdev_rx.c index a19237b42f..85816c489c 100644 --- a/lib/node/ethdev_rx.c +++ b/lib/node/ethdev_rx.c @@ -194,8 +194,6 @@ ethdev_rx_node_init(const struct rte_graph *graph, struct rte_node *node) RTE_VERIFY(elem != NULL); - ctx->cls_next = ETHDEV_RX_NEXT_PKT_CLS; - /* Check and setup ptype */ return ethdev_ptype_setup(ctx->port_id, ctx->queue_id); } From patchwork Tue Apr 25 13:15:16 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vamsi Krishna Attunuru X-Patchwork-Id: 126511 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id B7F0D429EB; Tue, 25 Apr 2023 15:16:12 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 3732B42D12; Tue, 25 Apr 2023 15:15:55 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 0148342D12 for ; Tue, 25 Apr 2023 15:15:52 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 33PCAm6K010073; Tue, 25 Apr 2023 06:15:52 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=J39trFeDpN4dIEbrXa4b7chli6twzcYemYZllrBy31A=; 
b=MIsOnvdEeRhkUs586SrxwRDBQRRRZ/yKkLXeD35XjSFj7+12bZK4cLOG56iPwSJgO776 B54aCd2RidHM5G48FDi/xOMtkswEVqNP29T1lgpt/V9XEDVZw0AzS73gFjU6xdSzgqFt tTC/JxeTuttSH2k9K/99X3kQaEUZbwi1jgp5Dwg3JPL1LDWvybZXmxeu+mFMRHBYRB7Q zMnI3jmM3EPvKtPMaqB64dh0VE2vx/dYSAS+JOEh2azUVqA2gZ/1Su4qv1aLeRXrfuNV YeExm74uAA/SLWu5+hcZrEn/EsSE2ffc3tmzLMdWMckqQYXHVVjtiyPkZpu4ZP5JqWKg Og== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3q4f3pa6bf-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Tue, 25 Apr 2023 06:15:51 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Tue, 25 Apr 2023 06:15:49 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Tue, 25 Apr 2023 06:15:49 -0700 Received: from localhost.localdomain (unknown [10.28.36.156]) by maili.marvell.com (Postfix) with ESMTP id BD6913F7057; Tue, 25 Apr 2023 06:15:46 -0700 (PDT) From: Vamsi Attunuru To: , , CC: , Subject: [PATCH v2 4/4] app: add testgraph application Date: Tue, 25 Apr 2023 06:15:16 -0700 Message-ID: <20230425131516.3308612-5-vattunuru@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230425131516.3308612-1-vattunuru@marvell.com> References: <20230421060245.3136217-1-vattunuru@marvell.com> <20230425131516.3308612-1-vattunuru@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: K1qK5yUKxS1lZurT7pGXsamxzT1aKWfz X-Proofpoint-GUID: K1qK5yUKxS1lZurT7pGXsamxzT1aKWfz X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.942,Hydra:6.0.573,FMLib:17.11.170.22 definitions=2023-04-25_05,2023-04-25_01,2023-02-09_01 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Patch adds test-graph application to validate graph and node libraries. Signed-off-by: Vamsi Attunuru --- app/meson.build | 1 + app/test-graph/cmdline.c | 211 +++++ app/test-graph/cmdline_graph.c | 294 +++++++ app/test-graph/cmdline_graph.h | 19 + app/test-graph/meson.build | 14 + app/test-graph/parameters.c | 157 ++++ app/test-graph/testgraph.c | 1426 ++++++++++++++++++++++++++++++++ app/test-graph/testgraph.h | 91 ++ doc/guides/tools/index.rst | 1 + doc/guides/tools/testgraph.rst | 131 +++ 10 files changed, 2345 insertions(+) diff --git a/app/meson.build b/app/meson.build index 74d2420f67..6c7b24e604 100644 --- a/app/meson.build +++ b/app/meson.build @@ -22,6 +22,7 @@ apps = [ 'test-eventdev', 'test-fib', 'test-flow-perf', + 'test-graph', 'test-gpudev', 'test-mldev', 'test-pipeline', diff --git a/app/test-graph/cmdline.c b/app/test-graph/cmdline.c new file mode 100644 index 0000000000..d9474d827a --- /dev/null +++ b/app/test-graph/cmdline.c @@ -0,0 +1,211 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell International Ltd. + */ + +#include + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "cmdline_graph.h" +#include "testgraph.h" + +static struct cmdline *testgraph_cl; +static cmdline_parse_ctx_t *main_ctx; + +/* *** Help command with introduction. 
*** */ +struct cmd_help_brief_result { + cmdline_fixed_string_t help; +}; + +static void +cmd_help_brief_parsed(__rte_unused void *parsed_result, struct cmdline *cl, __rte_unused void *data) +{ + cmdline_printf(cl, + "\n" + "Help is available for the following sections:\n\n" + " help control : Start and stop graph walk.\n" + " help display : Displaying port, stats and config " + "information.\n" + " help config : Configuration information.\n" + " help all : All of the above sections.\n\n"); +} + +static cmdline_parse_token_string_t cmd_help_brief_help = + TOKEN_STRING_INITIALIZER(struct cmd_help_brief_result, help, "help"); + +static cmdline_parse_inst_t cmd_help_brief = { + .f = cmd_help_brief_parsed, + .data = NULL, + .help_str = "help: Show help", + .tokens = { + (void *)&cmd_help_brief_help, + NULL, + }, +}; + +/* *** Help command with help sections. *** */ +struct cmd_help_long_result { + cmdline_fixed_string_t help; + cmdline_fixed_string_t section; +}; + +static void +cmd_help_long_parsed(void *parsed_result, struct cmdline *cl, __rte_unused void *data) +{ + int show_all = 0; + struct cmd_help_long_result *res = parsed_result; + + if (!strcmp(res->section, "all")) + show_all = 1; + + if (show_all || !strcmp(res->section, "control")) { + + cmdline_printf(cl, "\n" + "Control forwarding:\n" + "-------------------\n\n" + + "start graph_walk\n" + " Start graph_walk on worker threads.\n\n" + + "stop graph_walk\n" + " Stop worker threads from running graph_walk.\n\n" + + "quit\n" + " Quit to prompt.\n\n"); + } + + if (show_all || !strcmp(res->section, "display")) { + + cmdline_printf(cl, + "\n" + "Display:\n" + "--------\n\n" + + "show node_list\n" + " Display the list of supported nodes.\n\n" + + "show graph_stats\n" + " Display the node statistics of graph cluster.\n\n"); + } + + if (show_all || !strcmp(res->section, "config")) { + cmdline_printf(cl, "\n" + "Configuration:\n" + "--------------\n" + "set lcore_config (port_id0,rxq0,lcore_idX),..." 
+ ".....,(port_idX,rxqX,lcoreidY)\n" + " Set lcore configuration.\n\n" + + "create_graph (node0_name,node1_name,...,nodeX_name)\n" + " Create graph instances using the provided node details.\n\n" + + "destroy_graph\n" + " Destroy the graph instances.\n\n"); + } +} + +static cmdline_parse_token_string_t cmd_help_long_help = + TOKEN_STRING_INITIALIZER(struct cmd_help_long_result, help, "help"); + +static cmdline_parse_token_string_t cmd_help_long_section = TOKEN_STRING_INITIALIZER( + struct cmd_help_long_result, section, "all#control#display#config"); + +static cmdline_parse_inst_t cmd_help_long = { + .f = cmd_help_long_parsed, + .data = NULL, + .help_str = "help all|control|display|config: " + "Show help", + .tokens = { + (void *)&cmd_help_long_help, + (void *)&cmd_help_long_section, + NULL, + }, +}; + +/* *** QUIT *** */ +struct cmd_quit_result { + cmdline_fixed_string_t quit; +}; + +static void +cmd_quit_parsed(__rte_unused void *parsed_result, struct cmdline *cl, __rte_unused void *data) +{ + cmdline_quit(cl); +} + +static cmdline_parse_token_string_t cmd_quit_quit = + TOKEN_STRING_INITIALIZER(struct cmd_quit_result, quit, "quit"); + +static cmdline_parse_inst_t cmd_quit = { + .f = cmd_quit_parsed, + .data = NULL, + .help_str = "quit: Exit application", + .tokens = { + (void *)&cmd_quit_quit, + NULL, + }, +}; + +/* list of instructions */ +static cmdline_parse_ctx_t builtin_ctx[] = { + (cmdline_parse_inst_t *)&cmd_help_brief, + (cmdline_parse_inst_t *)&cmd_help_long, + (cmdline_parse_inst_t *)&cmd_quit, + (cmdline_parse_inst_t *)&cmd_show_node_list, + (cmdline_parse_inst_t *)&cmd_set_lcore_config, + (cmdline_parse_inst_t *)&cmd_create_graph, + (cmdline_parse_inst_t *)&cmd_destroy_graph, + (cmdline_parse_inst_t *)&cmd_start_graph_walk, + (cmdline_parse_inst_t *)&cmd_stop_graph_walk, + (cmdline_parse_inst_t *)&cmd_show_graph_stats, + NULL, +}; + +int +init_cmdline(void) +{ + unsigned int count; + unsigned int i; + + count = 0; + for (i = 0; builtin_ctx[i] != NULL; i++) + count++; + + /* cmdline expects a NULL terminated array */ + main_ctx = calloc(count + 1, sizeof(main_ctx[0])); + if (main_ctx == NULL) + return -1; + + count = 0; + for (i = 0; builtin_ctx[i] != NULL; i++, count++) + main_ctx[count] = builtin_ctx[i]; + + return 0; +} + +void +prompt_exit(void) +{ + cmdline_quit(testgraph_cl); +} + +/* prompt function, called from main on MAIN lcore */ +void +prompt(void) +{ + testgraph_cl = cmdline_stdin_new(main_ctx, "testgraph> "); + if (testgraph_cl == NULL) { + fprintf(stderr, "Failed to create stdin based cmdline context\n"); + return; + } + + cmdline_interact(testgraph_cl); + cmdline_stdin_exit(testgraph_cl); +} diff --git a/app/test-graph/cmdline_graph.c b/app/test-graph/cmdline_graph.c new file mode 100644 index 0000000000..d66149a224 --- /dev/null +++ b/app/test-graph/cmdline_graph.c @@ -0,0 +1,294 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell International Ltd. 
+ */ + +#include +#include + +#include +#include +#include + +#include "cmdline_graph.h" +#include "testgraph.h" + +/* *** Show supported node details *** */ +struct cmd_show_node_list_result { + cmdline_fixed_string_t show; + cmdline_fixed_string_t node_list; +}; + +static cmdline_parse_token_string_t cmd_show_node_list_show = + TOKEN_STRING_INITIALIZER(struct cmd_show_node_list_result, show, "show"); +static cmdline_parse_token_string_t cmd_show_node_list_node_list = + TOKEN_STRING_INITIALIZER(struct cmd_show_node_list_result, node_list, "node_list"); + +static void +cmd_show_node_list_parsed(__rte_unused void *parsed_result, __rte_unused struct cmdline *cl, + __rte_unused void *data) +{ + rte_node_t node_cnt = rte_node_max_count(); + rte_node_t id; + + printf("\n**** Supported Graph Nodes ****\n"); + for (id = 0; id < node_cnt; id++) + printf("%s\n", rte_node_id_to_name(id)); + + printf("********************************\n"); +} + +cmdline_parse_inst_t cmd_show_node_list = { + .f = cmd_show_node_list_parsed, + .data = NULL, + .help_str = "show node_list", + .tokens = { + (void *)&cmd_show_node_list_show, + (void *)&cmd_show_node_list_node_list, + NULL, + }, +}; + +/* *** Set lcore config *** */ +struct cmd_set_lcore_config_result { + cmdline_fixed_string_t set; + cmdline_fixed_string_t lcore_config; + cmdline_multi_string_t token_string; +}; + +static cmdline_parse_token_string_t cmd_set_lcore_config_set = + TOKEN_STRING_INITIALIZER(struct cmd_set_lcore_config_result, set, "set"); +static cmdline_parse_token_string_t cmd_set_lcore_config_lcore_config = + TOKEN_STRING_INITIALIZER(struct cmd_set_lcore_config_result, lcore_config, "lcore_config"); +static cmdline_parse_token_string_t cmd_set_lcore_config_token_string = TOKEN_STRING_INITIALIZER( + struct cmd_set_lcore_config_result, token_string, TOKEN_STRING_MULTI); + +static void +cmd_set_lcore_config_parsed(void *parsed_result, __rte_unused struct cmdline *cl, + __rte_unused void *data) +{ + struct cmd_set_lcore_config_result *res = parsed_result; + const char *t_str = res->token_string; + int ret; + + /* Parse string */ + ret = parse_config(t_str); + if (ret) { + printf(" lcore_config string parse error\n"); + return; + } + + validate_config(); +} + +cmdline_parse_inst_t cmd_set_lcore_config = { + .f = cmd_set_lcore_config_parsed, + .data = NULL, + .help_str = "set lcore_config " + "(port,queue,lcore),[(port,queue,lcore) ... 
(port,queue,lcore)]", + .tokens = { + (void *)&cmd_set_lcore_config_set, + (void *)&cmd_set_lcore_config_lcore_config, + (void *)&cmd_set_lcore_config_token_string, + NULL, + }, +}; + +/* *** Create graph *** */ +struct cmd_create_graph_result { + cmdline_fixed_string_t create_graph; + cmdline_multi_string_t token_string; +}; + +static cmdline_parse_token_string_t cmd_create_graph_create_graph = + TOKEN_STRING_INITIALIZER(struct cmd_create_graph_result, create_graph, "create_graph"); +static cmdline_parse_token_string_t cmd_create_graph_token_string = + TOKEN_STRING_INITIALIZER(struct cmd_create_graph_result, token_string, TOKEN_STRING_MULTI); + +static void +cmd_create_graph_parsed(void *parsed_result, __rte_unused struct cmdline *cl, + __rte_unused void *data) +{ + struct cmd_create_graph_result *res = parsed_result; + const char *t_str = res->token_string; + uint64_t valid_nodes = 0; + int ret; + + ret = parse_node_patterns(t_str); + if (ret) { + printf("parse_node_patterns failed\n"); + cleanup_node_pattern(); + return; + } + + ret = validate_node_names(&valid_nodes); + if (ret) { + printf("validate_node_names() failed\n"); + cleanup_node_pattern(); + return; + } + + nb_conf = ethdev_ports_setup(); + + ethdev_rxq_configure(); + ethdev_txq_configure(); + + ret = configure_graph_nodes(valid_nodes); + if (ret) + rte_exit(EXIT_FAILURE, "configure_graph_nodes: err=%d\n", ret); + + ret = create_graph(valid_nodes); + if (ret) + rte_exit(EXIT_FAILURE, "create_graph: err=%d\n", ret); + + stats = create_graph_cluster_stats(); + if (stats == NULL) + rte_exit(EXIT_FAILURE, "create_graph_cluster_stats() failed\n"); + + check_all_ports_link_status(enabled_port_mask); + start_eth_ports(); +} + +cmdline_parse_inst_t cmd_create_graph = { + .f = cmd_create_graph_parsed, + .data = NULL, + .help_str = "create_graph " + "[node_name0,node_name1,node_name2 ... 
node_nameX]", + .tokens = { + (void *)&cmd_create_graph_create_graph, + (void *)&cmd_create_graph_token_string, + NULL, + }, +}; + +/**** Destroy graph ****/ +struct cmd_destroy_graph_result { + cmdline_fixed_string_t destroy_graph; +}; + +static cmdline_parse_token_string_t cmd_destroy_graph_destroy_graph = + TOKEN_STRING_INITIALIZER(struct cmd_destroy_graph_result, destroy_graph, "destroy_graph"); + +static void +cmd_destroy_graph_parsed(__rte_unused void *parsed_result, __rte_unused struct cmdline *cl, + __rte_unused void *data) +{ + uint32_t lcore_id; + + run_graph_walk = false; + graph_walk_quit = true; + + RTE_LCORE_FOREACH_WORKER(lcore_id) + rte_eal_wait_lcore(lcore_id); + + destroy_graph(); + stop_eth_ports(); +} + +cmdline_parse_inst_t cmd_destroy_graph = { + .f = cmd_destroy_graph_parsed, + .data = NULL, + .help_str = "destroy_graph", + .tokens = { + (void *)&cmd_destroy_graph_destroy_graph, + NULL, + }, +}; + +/**** Start graph_walk ****/ +struct cmd_start_graph_walk_result { + cmdline_fixed_string_t start; + cmdline_fixed_string_t graph_walk; +}; + +static cmdline_parse_token_string_t cmd_start_graph_walk_start = + TOKEN_STRING_INITIALIZER(struct cmd_start_graph_walk_result, start, "start"); +static cmdline_parse_token_string_t cmd_start_graph_walk_graph_walk = + TOKEN_STRING_INITIALIZER(struct cmd_start_graph_walk_result, graph_walk, "graph_walk"); + +static void +cmd_start_graph_walk_parsed(__rte_unused void *parsed_result, __rte_unused struct cmdline *cl, + __rte_unused void *data) +{ + static bool launch_graph_walk; + + if (!launch_graph_walk) { + rte_eal_mp_remote_launch(graph_main_loop, NULL, SKIP_MAIN); + launch_graph_walk = true; + } + + run_graph_walk = true; +} + +cmdline_parse_inst_t cmd_start_graph_walk = { + .f = cmd_start_graph_walk_parsed, + .data = NULL, + .help_str = "start graph_walk", + .tokens = { + (void *)&cmd_start_graph_walk_start, + (void *)&cmd_start_graph_walk_graph_walk, + NULL, + }, +}; + +/**** Stop graph_walk ****/ +struct cmd_stop_graph_walk_result { + cmdline_fixed_string_t stop; + cmdline_fixed_string_t graph_walk; +}; + +static cmdline_parse_token_string_t cmd_stop_graph_walk_stop = + TOKEN_STRING_INITIALIZER(struct cmd_stop_graph_walk_result, stop, "stop"); +static cmdline_parse_token_string_t cmd_stop_graph_walk_graph_walk = + TOKEN_STRING_INITIALIZER(struct cmd_stop_graph_walk_result, graph_walk, "graph_walk"); + +static void +cmd_stop_graph_walk_parsed(__rte_unused void *parsed_result, __rte_unused struct cmdline *cl, + __rte_unused void *data) +{ + run_graph_walk = false; +} + +cmdline_parse_inst_t cmd_stop_graph_walk = { + .f = cmd_stop_graph_walk_parsed, + .data = NULL, + .help_str = "stop graph_walk", + .tokens = { + (void *)&cmd_stop_graph_walk_stop, + (void *)&cmd_stop_graph_walk_graph_walk, + NULL, + }, +}; + +/**** Show graph_stats ****/ +struct cmd_show_graph_stats_result { + cmdline_fixed_string_t show; + cmdline_fixed_string_t graph_stats; +}; + +static cmdline_parse_token_string_t cmd_show_graph_stats_show = + TOKEN_STRING_INITIALIZER(struct cmd_show_graph_stats_result, show, "show"); +static cmdline_parse_token_string_t cmd_show_graph_stats_graph_stats = + TOKEN_STRING_INITIALIZER(struct cmd_show_graph_stats_result, graph_stats, "graph_stats"); + +static void +cmd_show_graph_stats_parsed(__rte_unused void *parsed_result, __rte_unused struct cmdline *cl, + __rte_unused void *data) +{ + if (rte_graph_has_stats_feature()) { + if (stats) + rte_graph_cluster_stats_get(stats, 0); + } else { + printf(" graph stats feature not enabled 
in rte_config.\n"); + } +} + +cmdline_parse_inst_t cmd_show_graph_stats = { + .f = cmd_show_graph_stats_parsed, + .data = NULL, + .help_str = "show graph_stats", + .tokens = { + (void *)&cmd_show_graph_stats_show, + (void *)&cmd_show_graph_stats_graph_stats, + NULL, + }, +}; diff --git a/app/test-graph/cmdline_graph.h b/app/test-graph/cmdline_graph.h new file mode 100644 index 0000000000..2846ff5425 --- /dev/null +++ b/app/test-graph/cmdline_graph.h @@ -0,0 +1,19 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell International Ltd. + */ + +#ifndef _CMDLINE_GRAPH_H_ +#define _CMDLINE_GRAPH_H_ + +extern cmdline_parse_inst_t cmd_show_node_list; +extern cmdline_parse_inst_t cmd_set_lcore_config; + +extern cmdline_parse_inst_t cmd_create_graph; +extern cmdline_parse_inst_t cmd_destroy_graph; + +extern cmdline_parse_inst_t cmd_start_graph_walk; +extern cmdline_parse_inst_t cmd_stop_graph_walk; + +extern cmdline_parse_inst_t cmd_show_graph_stats; + +#endif /* _CMDLINE_GRAPH_H_ */ diff --git a/app/test-graph/meson.build b/app/test-graph/meson.build new file mode 100644 index 0000000000..d93802a975 --- /dev/null +++ b/app/test-graph/meson.build @@ -0,0 +1,14 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(C) 2023 Marvell International Ltd. + +# override default name to drop the hyphen +name = 'test-graph' +cflags += '-Wno-deprecated-declarations' +sources = files( + 'cmdline.c', + 'cmdline_graph.c', + 'parameters.c', + 'testgraph.c', +) + +deps += ['ethdev', 'cmdline', 'graph', 'node', 'eal'] diff --git a/app/test-graph/parameters.c b/app/test-graph/parameters.c new file mode 100644 index 0000000000..b990ca4a1c --- /dev/null +++ b/app/test-graph/parameters.c @@ -0,0 +1,157 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell International Ltd. 
+ */ + +#include +#include + +#include "testgraph.h" + +static const char short_options[] = "p:" /* portmask */ + "P" /* promiscuous */ + "i" /* interactive */ + ; + +#define CMD_LINE_OPT_CONFIG "config" +#define CMD_LINE_OPT_NODE_PATTERN "node-pattern" +#define CMD_LINE_OPT_INTERACTIVE "interactive" +#define CMD_LINE_OPT_NO_NUMA "no-numa" +#define CMD_LINE_OPT_PER_PORT_POOL "per-port-pool" +enum { + /* Long options mapped to a short option */ + + /* First long only option value must be >= 256, so that we won't + * conflict with short options + */ + CMD_LINE_OPT_MIN_NUM = 256, + CMD_LINE_OPT_CONFIG_NUM, + CMD_LINE_OPT_NODE_PATTERN_NUM, + CMD_LINE_OPT_INTERACTIVE_NUM, + CMD_LINE_OPT_NO_NUMA_NUM, + CMD_LINE_OPT_PARSE_PER_PORT_POOL, +}; + +static const struct option lgopts[] = { + {CMD_LINE_OPT_CONFIG, 1, 0, CMD_LINE_OPT_CONFIG_NUM}, + {CMD_LINE_OPT_NODE_PATTERN, 1, 0, CMD_LINE_OPT_NODE_PATTERN_NUM}, + {CMD_LINE_OPT_INTERACTIVE, 0, 0, CMD_LINE_OPT_INTERACTIVE_NUM}, + {CMD_LINE_OPT_NO_NUMA, 0, 0, CMD_LINE_OPT_NO_NUMA_NUM}, + {CMD_LINE_OPT_PER_PORT_POOL, 0, 0, CMD_LINE_OPT_PARSE_PER_PORT_POOL}, + {NULL, 0, 0, 0}, +}; + +/* Display usage */ +static void +print_usage(const char *prgname) +{ + fprintf(stderr, + "%s [EAL options] --" + " -p PORTMASK" + " [-P]" + " [-i]" + " --config (port,queue,lcore)[,(port,queue,lcore)]" + " --node-pattern (node_name0,node_name1[,node_nameX)]" + " [--no-numa]" + " [--per-port-pool]" + " [--interactive]" + + " -p PORTMASK: Hexadecimal bitmask of ports to configure\n" + " -P : Enable promiscuous mode\n" + " -i : Enter interactive mode\n" + " --config (port,queue,lcore): Rx queue configuration\n" + " --node-pattern (node_names): node patterns to create graph\n" + " --no-numa: Disable numa awareness\n" + " --per-port-pool: Use separate buffer pool per port\n" + " --interactive: Enter interactive mode\n", + prgname); +} + +static int +parse_portmask(const char *portmask) +{ + char *end = NULL; + unsigned long pm; + + /* Parse hexadecimal string */ + pm = strtoul(portmask, &end, 16); + if ((portmask[0] == '\0') || (end == NULL) || (*end != '\0')) + return 0; + + return pm; +} + +/* Parse the argument given in the command line of the application */ +int +parse_cmdline_args(int argc, char **argv) +{ + char *prgname = argv[0]; + int option_index; + char **argvopt; + int opt, ret; + + argvopt = argv; + + /* Error or normal output strings. 
*/ + while ((opt = getopt_long(argc, argvopt, short_options, lgopts, &option_index)) != EOF) { + + switch (opt) { + /* Portmask */ + case 'p': + enabled_port_mask = parse_portmask(optarg); + if (enabled_port_mask == 0) { + fprintf(stderr, "Invalid portmask\n"); + print_usage(prgname); + return -1; + } + break; + + case 'P': + promiscuous_on = 1; + break; + + /* Long options */ + case CMD_LINE_OPT_CONFIG_NUM: + ret = parse_config(optarg); + if (ret) { + fprintf(stderr, "Invalid config\n"); + print_usage(prgname); + return -1; + } + break; + case CMD_LINE_OPT_NODE_PATTERN_NUM: + ret = parse_node_patterns(optarg); + if (ret) { + fprintf(stderr, "Invalid node_patterns\n"); + print_usage(prgname); + return -1; + } + break; + + case CMD_LINE_OPT_INTERACTIVE_NUM: + case 'i': + printf("Interactive-mode selected\n"); + interactive = 1; + break; + + case CMD_LINE_OPT_NO_NUMA_NUM: + numa_on = 0; + break; + + case CMD_LINE_OPT_PARSE_PER_PORT_POOL: + printf("Per port buffer pool is enabled\n"); + per_port_pool = 1; + break; + + default: + print_usage(prgname); + return -1; + } + } + + if (optind >= 0) + argv[optind - 1] = prgname; + ret = optind - 1; + optind = 1; /* Reset getopt lib */ + + return ret; +} diff --git a/app/test-graph/testgraph.c b/app/test-graph/testgraph.c new file mode 100644 index 0000000000..aff921acf2 --- /dev/null +++ b/app/test-graph/testgraph.c @@ -0,0 +1,1426 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell International Ltd. + */ + +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "testgraph.h" + +/* Log type */ +#define RTE_LOGTYPE_TEST_GRAPH RTE_LOGTYPE_USER1 + +/* + * Configurable number of RX/TX ring descriptors + */ +#define RX_DESC_DEFAULT 1024 +#define TX_DESC_DEFAULT 1024 + +#define MAX_TX_QUEUE_PER_PORT RTE_MAX_ETHPORTS +#define MAX_RX_QUEUE_PER_PORT 128 + +#define MAX_RX_QUEUE_PER_LCORE 16 + +#define NB_SOCKETS 8 + +/* Static global variables used within this file. */ +uint16_t nb_rxd = RX_DESC_DEFAULT; +uint16_t nb_txd = TX_DESC_DEFAULT; + +static volatile bool force_quit; +volatile bool graph_walk_quit; +volatile bool run_graph_walk = true; + +char node_pattern[MAX_NODE_PATTERNS][RTE_NODE_NAMESIZE] = {0}; +const char **node_patterns; +uint8_t num_patterns; + +uint8_t interactive; /**< interactive mode is off by default */ +int promiscuous_on; /**< Ports set in promiscuous mode off by default. */ +int numa_on = 1; /**< NUMA is enabled by default. 
*/ +int per_port_pool; /**< Use separate buffer pools per port, disabled by default */ +int testgraph_logtype; /**< Log type for testgraph logs */ + +struct rte_graph_cluster_stats *stats; + +uint32_t enabled_port_mask; /**< Mask of enabled ports */ + +uint32_t nb_conf; + +struct lcore_rx_queue { + uint16_t port_id; + uint8_t queue_id; + char node_name[RTE_NODE_NAMESIZE]; +}; + +/* Lcore conf */ +struct lcore_conf { + uint16_t n_rx_queue; + struct lcore_rx_queue rx_queue_list[MAX_RX_QUEUE_PER_LCORE]; + char punt_kernel_node_name[RTE_NODE_NAMESIZE]; + char kernel_recv_node_name[RTE_NODE_NAMESIZE]; + + struct rte_graph *graph; + char name[RTE_GRAPH_NAMESIZE]; + rte_graph_t graph_id; +} __rte_cache_aligned; + +static struct lcore_conf lcore_conf[RTE_MAX_LCORE]; + +struct lcore_params lcore_params_array[MAX_LCORE_PARAMS]; +static struct lcore_params lcore_params_array_default[] = { + {0, 0, 2}, {1, 0, 2}, {2, 0, 2}, {0, 1, 3}, {1, 1, 3}, + {2, 1, 3}, {0, 2, 4}, {1, 2, 4}, {2, 2, 4}, +}; + +struct lcore_params *lcore_params = lcore_params_array_default; +uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default); + +static struct rte_eth_conf port_conf = { + .rxmode = { + .mq_mode = RTE_ETH_MQ_RX_RSS, + }, + .rx_adv_conf = { + .rss_conf = { + .rss_key = NULL, + .rss_hf = RTE_ETH_RSS_IP, + }, + }, + .txmode = { + .mq_mode = RTE_ETH_MQ_TX_NONE, + }, +}; + +static const struct node_list test_node_list[] = {{{"ethdev_rx", "ethdev_tx"}, 0, 2}, + {{"ethdev_rx", "punt_kernel"}, 0, 2}, + {{"kernel_recv", "ethdev_tx"}, 0, 2} }; + +static const struct node_list supported_nodes[] = {{{"ethdev_rx"}, TEST_GRAPH_ETHDEV_RX_NODE, 1}, + {{"ethdev_tx"}, TEST_GRAPH_ETHDEV_TX_NODE, 1}, + {{"punt_kernel"}, TEST_GRAPH_PUNT_KERNEL_NODE, 1}, + {{"kernel_recv"}, TEST_GRAPH_KERNEL_RECV_NODE, 1}, + {{"ip4_lookup"}, TEST_GRAPH_IP4_LOOKUP_NODE, 1}, + {{"ip4_rewrite"}, TEST_GRAPH_IP4_REWRITE_NODE, 1}, + {{"pkt_cls"}, TEST_GRAPH_PKT_CLS_NODE, 1}, + {{"pkt_drop"}, TEST_GRAPH_PKT_DROP_NODE, 1}, + {{"NULL"}, TEST_GRAPH_NULL_NODE, 1} }; + +static struct rte_mempool *pktmbuf_pool[RTE_MAX_ETHPORTS][NB_SOCKETS]; + +struct rte_node_ethdev_config ethdev_conf[RTE_MAX_ETHPORTS]; + +static int +check_lcore_params(void) +{ + uint8_t queue, lcore; + int socketid; + uint16_t i; + + for (i = 0; i < nb_lcore_params; ++i) { + queue = lcore_params[i].queue_id; + if (queue >= MAX_RX_QUEUE_PER_PORT) { + printf("Invalid queue number: %hhu\n", queue); + return -1; + } + lcore = lcore_params[i].lcore_id; + if (!rte_lcore_is_enabled(lcore)) { + printf("Error: lcore %hhu is not enabled in lcore mask\n", lcore); + return -1; + } + + if (lcore == rte_get_main_lcore()) { + printf("Error: lcore %u is main lcore\n", lcore); + return -1; + } + socketid = rte_lcore_to_socket_id(lcore); + if ((socketid != 0) && (numa_on == 0)) { + printf("Warning: lcore %hhu is on socket %d with numa off\n", lcore, + socketid); + } + } + + return 0; +} + +static int +check_port_config(void) +{ + uint16_t portid; + uint16_t i; + + for (i = 0; i < nb_lcore_params; ++i) { + portid = lcore_params[i].port_id; + if ((enabled_port_mask & (1 << portid)) == 0) { + printf("Port %u is not enabled in port mask\n", portid); + return -1; + } + if (!rte_eth_dev_is_valid_port(portid)) { + printf("Port %u is not present on the board\n", portid); + return -1; + } + } + + return 0; +} + +static uint8_t +get_port_n_rx_queues(const uint16_t port) +{ + int queue = -1; + uint16_t i; + + for (i = 0; i < nb_lcore_params; ++i) { + if (lcore_params[i].port_id == port) { + if 
(lcore_params[i].queue_id == queue + 1) + queue = lcore_params[i].queue_id; + else + rte_exit(EXIT_FAILURE, "Queue ids of the port %d must be" + " in sequence and must start with 0\n", + lcore_params[i].port_id); + } + } + + return (uint8_t)(++queue); +} + +static int +init_lcore_rx_queues(void) +{ + uint16_t i, nb_rx_queue; + uint8_t lcore; + + for (i = 0; i < nb_lcore_params; ++i) { + lcore = lcore_params[i].lcore_id; + nb_rx_queue = lcore_conf[lcore].n_rx_queue; + if (nb_rx_queue >= MAX_RX_QUEUE_PER_LCORE) { + printf("Error: too many queues (%u) for lcore: %u\n", + (unsigned int)nb_rx_queue + 1, (unsigned int)lcore); + return -1; + } + + lcore_conf[lcore].rx_queue_list[nb_rx_queue].port_id = lcore_params[i].port_id; + lcore_conf[lcore].rx_queue_list[nb_rx_queue].queue_id = lcore_params[i].queue_id; + lcore_conf[lcore].n_rx_queue++; + } + + return 0; +} + +int +validate_config(void) +{ + int rc = -1; + + if (check_lcore_params() < 0) { + printf("check_lcore_params() failed\n"); + goto exit; + } + + if (init_lcore_rx_queues() < 0) { + printf("init_lcore_rx_queues() failed\n"); + goto exit; + } + + if (check_port_config() < 0) { + printf("check_port_config() failed\n"); + goto exit; + } + + return 0; + +exit: + return rc; +} + +#define MEMPOOL_CACHE_SIZE 256 + +/* + * This expression is used to calculate the number of mbufs needed + * depending on user input, taking into account memory for rx and + * tx hardware rings, cache per lcore and mtable per port per lcore. + * RTE_MAX is used to ensure that NB_MBUF never goes below a minimum + * value of 8192 + */ +#define NB_MBUF(nports) \ + RTE_MAX((nports * nb_rx_queue * nb_rxd + nports * nb_lcores * RTE_GRAPH_BURST_SIZE + \ + nports * nb_tx_queue * nb_txd + nb_lcores * MEMPOOL_CACHE_SIZE), \ + 8192u) + +static int +init_mem(uint16_t portid, uint32_t nb_mbuf) +{ + uint32_t lcore_id; + int socketid; + char s[64]; + + for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { + if (rte_lcore_is_enabled(lcore_id) == 0) + continue; + + if (numa_on) + socketid = rte_lcore_to_socket_id(lcore_id); + else + socketid = 0; + + if (socketid >= NB_SOCKETS) { + rte_exit(EXIT_FAILURE, "Socket %d of lcore %u is out of range %d\n", + socketid, lcore_id, NB_SOCKETS); + } + + if (pktmbuf_pool[portid][socketid] == NULL) { + snprintf(s, sizeof(s), "mbuf_pool_%d:%d", portid, socketid); + /* Create a pool with priv size of a cacheline */ + pktmbuf_pool[portid][socketid] = rte_pktmbuf_pool_create( + s, nb_mbuf, MEMPOOL_CACHE_SIZE, RTE_CACHE_LINE_SIZE, + RTE_MBUF_DEFAULT_BUF_SIZE, socketid); + if (pktmbuf_pool[portid][socketid] == NULL) + rte_exit(EXIT_FAILURE, "Cannot init mbuf pool on socket %d\n", + socketid); + else + printf("Allocated mbuf pool on socket %d\n", socketid); + } + } + + return 0; +} + +void +check_all_ports_link_status(uint32_t port_mask) +{ +#define CHECK_INTERVAL 100 /* 100ms */ +#define MAX_CHECK_TIME 90 /* 9s (90 * 100ms) in total */ + uint8_t count, all_ports_up, print_flag = 0; + struct rte_eth_link link; + uint16_t portid; + int ret; + char link_status_text[RTE_ETH_LINK_MAX_STR_LEN]; + + printf("\nChecking link status"); + fflush(stdout); + for (count = 0; count <= MAX_CHECK_TIME; count++) { + if (force_quit) + return; + all_ports_up = 1; + RTE_ETH_FOREACH_DEV(portid) { + if (force_quit) + return; + if ((port_mask & (1 << portid)) == 0) + continue; + memset(&link, 0, sizeof(link)); + ret = rte_eth_link_get_nowait(portid, &link); + if (ret < 0) { + all_ports_up = 0; + if (print_flag == 1) + printf("Port %u link get failed: %s\n", portid, + 
rte_strerror(-ret)); + continue; + } + /* Print link status if flag set */ + if (print_flag == 1) { + rte_eth_link_to_str(link_status_text, sizeof(link_status_text), + &link); + printf("Port %d %s\n", portid, link_status_text); + continue; + } + /* Clear all_ports_up flag if any link down */ + if (link.link_status == RTE_ETH_LINK_DOWN) { + all_ports_up = 0; + break; + } + } + /* After finally printing all link status, get out */ + if (print_flag == 1) + break; + + if (all_ports_up == 0) { + printf("."); + fflush(stdout); + rte_delay_ms(CHECK_INTERVAL); + } + + /* Set the print_flag if all ports up or timeout */ + if (all_ports_up == 1 || count == (MAX_CHECK_TIME - 1)) { + print_flag = 1; + printf("Done\n"); + } + } +} + +static void +signal_handler(int signum) +{ + if (signum == SIGINT || signum == SIGTERM) { + printf("\n\nSignal %d received, preparing to exit...\n", signum); + force_quit = true; + } + prompt_exit(); +} + +int +graph_main_loop(void *conf) +{ + struct lcore_conf *qconf; + struct rte_graph *graph; + uint32_t lcore_id; + + RTE_SET_USED(conf); + + lcore_id = rte_lcore_id(); + qconf = &lcore_conf[lcore_id]; + graph = qconf->graph; + + if (!graph) { + RTE_LOG(INFO, TEST_GRAPH, "Lcore %u has nothing to do\n", lcore_id); + return 0; + } + + RTE_LOG(INFO, TEST_GRAPH, "Entering main loop on lcore %u, graph %s(%p)\n", lcore_id, + qconf->name, graph); + + while (likely(!force_quit && !graph_walk_quit)) { + if (likely(run_graph_walk)) + rte_graph_walk(graph); + } + + return 0; +} + +struct rte_graph_cluster_stats * +create_graph_cluster_stats(void) +{ + struct rte_graph_cluster_stats_param s_param; + struct rte_graph_cluster_stats *stat; + const char *pattern = "worker_*"; + + /* Prepare stats object */ + memset(&s_param, 0, sizeof(s_param)); + s_param.f = stdout; + s_param.socket_id = SOCKET_ID_ANY; + s_param.graph_patterns = &pattern; + s_param.nb_graph_patterns = 1; + + stat = rte_graph_cluster_stats_create(&s_param); + if (stat == NULL) + rte_exit(EXIT_FAILURE, "Unable to create stats object\n"); + + return stat; +} + +static void +print_stats(void) +{ + const char topLeft[] = {27, '[', '1', ';', '1', 'H', '\0'}; + const char clr[] = {27, '[', '2', 'J', '\0'}; + + while (!force_quit) { + /* Clear screen and move to top left */ + printf("%s%s", clr, topLeft); + rte_graph_cluster_stats_get(stats, 0); + rte_delay_ms(1E3); + } + + rte_graph_cluster_stats_destroy(stats); +} + +int +parse_config(const char *q_arg) +{ + enum fieldnames { FLD_PORT = 0, FLD_QUEUE, FLD_LCORE, _NUM_FLD }; + unsigned long int_fld[_NUM_FLD]; + const char *p, *p0 = q_arg; + char *str_fld[_NUM_FLD]; + uint32_t size; + char s[256]; + char *end; + int i; + + nb_lcore_params = 0; + + while ((p = strchr(p0, '(')) != NULL) { + ++p; + p0 = strchr(p, ')'); + if (p0 == NULL) + goto exit; + + size = p0 - p; + if (size >= sizeof(s)) + goto exit; + + memcpy(s, p, size); + s[size] = '\0'; + if (rte_strsplit(s, sizeof(s), str_fld, _NUM_FLD, ',') != _NUM_FLD) + goto exit; + for (i = 0; i < _NUM_FLD; i++) { + errno = 0; + int_fld[i] = strtoul(str_fld[i], &end, 0); + if (errno != 0 || end == str_fld[i]) + goto exit; + } + + if (nb_lcore_params >= MAX_LCORE_PARAMS) { + printf("Exceeded max number of lcore params: %hu\n", nb_lcore_params); + goto exit; + } + + if (int_fld[FLD_PORT] >= RTE_MAX_ETHPORTS || int_fld[FLD_LCORE] >= RTE_MAX_LCORE) { + printf("Invalid port/lcore id\n"); + goto exit; + } + + lcore_params_array[nb_lcore_params].port_id = (uint8_t)int_fld[FLD_PORT]; + lcore_params_array[nb_lcore_params].queue_id = 
(uint8_t)int_fld[FLD_QUEUE]; + lcore_params_array[nb_lcore_params].lcore_id = (uint8_t)int_fld[FLD_LCORE]; + ++nb_lcore_params; + } + lcore_params = lcore_params_array; + + return 0; +exit: + /* Revert to default config */ + lcore_params = lcore_params_array_default; + nb_lcore_params = RTE_DIM(lcore_params_array_default); + + return -1; +} + +int +parse_node_patterns(const char *q_arg) +{ + const char *p, *p0 = q_arg; + int ret = -EINVAL; + uint32_t size; + + num_patterns = 0; + + p = strchr(p0, '('); + if (p != NULL) { + ++p; + while ((p0 = strchr(p, ',')) != NULL) { + size = p0 - p; + if (size >= RTE_NODE_NAMESIZE) + goto exit; + + if (num_patterns >= MAX_NODE_PATTERNS) { + printf("Too many nodes passed.\n"); + goto exit; + } + + memcpy(node_pattern[num_patterns++], p, size); + p = p0 + 1; + } + + p0 = strchr(p, ')'); + if (p0 != NULL) { + size = p0 - p; + if (size >= RTE_NODE_NAMESIZE) + goto exit; + + if (num_patterns >= MAX_NODE_PATTERNS) { + printf("Too many nodes passed.\n"); + goto exit; + } + + memcpy(node_pattern[num_patterns++], p, size); + } else { + goto exit; + } + } else { + goto exit; + } + + return 0; +exit: + return ret; +} + +static void +set_default_node_pattern(void) +{ + uint16_t idx; + + for (idx = 0; idx < test_node_list[0].size; idx++) + strcpy(node_pattern[num_patterns++], test_node_list[0].nodes[idx]); +} + +int +validate_node_names(uint64_t *valid_nodes) +{ + rte_node_t node_cnt = rte_node_max_count(); + bool pattern_matched = false; + rte_node_t id = 0; + int ret = -EINVAL; + uint16_t idx, i, j; + + for (idx = 0; idx < num_patterns; idx++) { + for (id = 0; id < node_cnt; id++) { + if (strncmp(node_pattern[idx], rte_node_id_to_name(id), + RTE_GRAPH_NAMESIZE) == 0) + break; + } + if (node_cnt == id) { + printf("Invalid node name passed\n"); + return ret; + } + } + + for (i = 0; i < RTE_DIM(test_node_list); i++) { + idx = 0; + if (test_node_list[i].size == num_patterns) { + for (j = 0; j < num_patterns; j++) { + if (strncmp(node_pattern[j], test_node_list[i].nodes[j], + RTE_GRAPH_NAMESIZE) == 0) + idx++; + } + if (idx == num_patterns) + pattern_matched = true; + } + } + + if (!pattern_matched) { + printf("Unsupported node pattern passed\n\n"); + printf("Test supported node patterns are:\n"); + for (i = 0; i < RTE_DIM(test_node_list); i++) { + printf("("); + for (j = 0; j < (test_node_list[i].size - 1); j++) + printf("%s,", test_node_list[i].nodes[j]); + printf("%s", test_node_list[i].nodes[j]); + printf(")\n"); + } + + return ret; + } + + for (i = 0; i < RTE_DIM(supported_nodes); i++) { + for (j = 0; j < num_patterns; j++) { + if (strncmp(node_pattern[j], supported_nodes[i].nodes[0], + RTE_GRAPH_NAMESIZE) == 0) { + *valid_nodes |= supported_nodes[i].test_id; + break; + } + } + } + + return 0; +} + +void +cleanup_node_pattern(void) +{ + while (num_patterns) { + memset(node_pattern[num_patterns - 1], 0, RTE_GRAPH_NAMESIZE); + num_patterns--; + } +} + +static int +ethdev_tx_node_configure(void) +{ + struct ethdev_tx_node_main *tx_node_data; + struct rte_node_register *tx_node; + char name[RTE_NODE_NAMESIZE]; + uint16_t port_id; + uint32_t id; + + tx_node_data = ethdev_tx_node_data_get(); + tx_node = ethdev_tx_node_get(); + + RTE_ETH_FOREACH_DEV(port_id) { + /* Skip ports that are not enabled */ + if ((enabled_port_mask & (1 << port_id)) == 0) { + printf("\nSkipping disabled port %d\n", port_id); + continue; + } + + if (!rte_eth_dev_is_valid_port(port_id)) + return -EINVAL; + + /* Create a per port tx node from base node */ + snprintf(name, sizeof(name), "%u", 
port_id); + id = rte_node_clone(tx_node->id, name); + tx_node_data->nodes[port_id] = id; + + printf("ethdev:: Tx node %s-%s: is at %u\n", tx_node->name, name, id); + } + + return 0; +} + +static int +punt_kernel_node_configure(void) +{ + struct rte_node_register *punt_node; + char name[RTE_NODE_NAMESIZE]; + struct lcore_conf *qconf; + uint32_t lcore_id; + uint32_t id; + + punt_node = punt_kernel_node_get(); + + for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { + + if (rte_lcore_is_enabled(lcore_id) == 0) + continue; + + qconf = &lcore_conf[lcore_id]; + + /* Create a per lcore punt_kernel node from base node */ + snprintf(name, sizeof(name), "%u", lcore_id); + id = rte_node_clone(punt_node->id, name); + strcpy(qconf->punt_kernel_node_name, rte_node_id_to_name(id)); + + printf("punt_kernel node %s-%s: is at %u\n", punt_node->name, name, id); + } + + return 0; +} + +static int +kernel_recv_node_configure(void) +{ + struct rte_node_register *recv_node; + char name[RTE_NODE_NAMESIZE]; + struct lcore_conf *qconf; + uint32_t lcore_id; + uint32_t id; + + recv_node = kernel_recv_node_get(); + + for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { + + if (rte_lcore_is_enabled(lcore_id) == 0) + continue; + + qconf = &lcore_conf[lcore_id]; + + /* Create a per lcore kernel_recv node from base node */ + snprintf(name, sizeof(name), "%u", lcore_id); + id = rte_node_clone(recv_node->id, name); + strcpy(qconf->kernel_recv_node_name, rte_node_id_to_name(id)); + + printf("kernel_recv node %s-%s: is at %u\n", recv_node->name, name, id); + } + + return 0; +} + +static int +ethdev_rx_node_configure(struct rte_node_ethdev_config *conf, uint16_t nb_confs) +{ + char name[RTE_NODE_NAMESIZE]; + uint16_t i, j, port_id; + uint32_t id; + + for (i = 0; i < nb_confs; i++) { + port_id = conf[i].port_id; + + if (!rte_eth_dev_is_valid_port(port_id)) + return -EINVAL; + + /* Create node for each rx port queue pair */ + for (j = 0; j < conf[i].num_rx_queues; j++) { + struct ethdev_rx_node_main *rx_node_data; + struct rte_node_register *rx_node; + ethdev_rx_node_elem_t *elem; + + rx_node_data = ethdev_rx_get_node_data_get(); + rx_node = ethdev_rx_node_get(); + snprintf(name, sizeof(name), "%u-%u", port_id, j); + /* Clone a new rx node with same edges as parent */ + id = rte_node_clone(rx_node->id, name); + if (id == RTE_NODE_ID_INVALID) + return -EIO; + + /* Add it to list of ethdev rx nodes for lookup */ + elem = malloc(sizeof(ethdev_rx_node_elem_t)); + if (elem == NULL) + return -ENOMEM; + memset(elem, 0, sizeof(ethdev_rx_node_elem_t)); + elem->ctx.port_id = port_id; + elem->ctx.queue_id = j; + elem->nid = id; + elem->next = rx_node_data->head; + rx_node_data->head = elem; + + printf("ethdev:: Rx node %s-%s: is at %u\n", rx_node->name, name, id); + } + } + + return 0; +} + +static int +update_ethdev_rx_node_next(rte_node_t id, const char *edge_name) +{ + struct ethdev_rx_node_main *rx_node_data; + ethdev_rx_node_elem_t *elem; + char *next_nodes[16]; + rte_edge_t count; + uint16_t i; + + count = rte_node_edge_count(id); + rte_node_edge_get(id, next_nodes); + + for (i = 0; i < count; i++) { + if (fnmatch(edge_name, next_nodes[i], 0) == 0) { + rx_node_data = ethdev_rx_get_node_data_get(); + elem = rx_node_data->head; + while (elem->next != rx_node_data->head) { + if (elem->nid == id) + break; + elem = elem->next; + } + + if (elem->nid == id) + elem->ctx.cls_next = i; + break; + } + } + + return 0; +} + +static int +link_ethdev_rx_to_tx_node(uint32_t lcore_id) +{ + const char * const pattern[] = {"ethdev_tx-*"}; + 
uint16_t queue, queue_id, port_id; + char name[RTE_NODE_NAMESIZE]; + const char *next_node = name; + struct lcore_conf *qconf; + uint32_t rx_id; + + qconf = &lcore_conf[lcore_id]; + + for (queue = 0; queue < qconf->n_rx_queue; ++queue) { + port_id = qconf->rx_queue_list[queue].port_id; + queue_id = qconf->rx_queue_list[queue].queue_id; + + snprintf(name, sizeof(name), "ethdev_rx-%u-%u", port_id, queue_id); + rx_id = rte_node_from_name(name); + + /* Fill node pattern */ + strcpy(node_pattern[num_patterns++], name); + + /* Prepare the actual name of the cloned node */ + snprintf(name, sizeof(name), "ethdev_tx-%u", port_id); + + /* Update ethdev_rx node edges */ + rte_node_edge_update(rx_id, RTE_EDGE_ID_INVALID, &next_node, 1); + + /* Fill node pattern */ + strcpy(node_pattern[num_patterns++], name); + + /* Update node_next details */ + update_ethdev_rx_node_next(rx_id, pattern[0]); + } + + return 0; +} + +static int +link_ethdev_rx_to_punt_kernel_node(uint32_t lcore_id) +{ + const char * const pattern[] = {"punt_kernel-*"}; + uint16_t queueid, portid, queue; + char name[RTE_NODE_NAMESIZE]; + const char *next_node = name; + struct lcore_conf *qconf; + rte_node_t rx_id; + + qconf = &lcore_conf[lcore_id]; + + for (queue = 0; queue < qconf->n_rx_queue; ++queue) { + + portid = qconf->rx_queue_list[queue].port_id; + queueid = qconf->rx_queue_list[queue].queue_id; + + snprintf(name, sizeof(name), "ethdev_rx-%u-%u", portid, queueid); + rx_id = rte_node_from_name(name); + + /* Fill node pattern */ + strcpy(node_pattern[num_patterns++], name); + + next_node = qconf->punt_kernel_node_name; + + /* Fill node pattern */ + strcpy(node_pattern[num_patterns++], next_node); + + /* Update ethdev_rx node edges */ + rte_node_edge_update(rx_id, RTE_EDGE_ID_INVALID, &next_node, 1); + + /* Update node_next details */ + update_ethdev_rx_node_next(rx_id, pattern[0]); + } + + return 0; +} + +static int +update_kernel_recv_node_next(rte_node_t id, const char *edge_name) +{ + struct kernel_recv_node_main *rx_node_data; + kernel_recv_node_elem_t *elem; + char *next_nodes[16]; + rte_edge_t count; + uint16_t i; + + count = rte_node_edge_count(id); + rte_node_edge_get(id, next_nodes); + + for (i = 0; i < count; i++) { + if (fnmatch(edge_name, next_nodes[i], 0) == 0) { + rx_node_data = kernel_recv_node_data_get(); + elem = rx_node_data->head; + while (elem->next != rx_node_data->head) { + if (elem->nid == id) + break; + elem = elem->next; + } + + if (elem->nid == id) + elem->ctx.recv_info->cls_next = i; + break; + } + } + + return 0; +} + +static int +link_kernel_recv_to_ethdev_tx_node(uint32_t lcore_id) +{ + const char * const pattern[] = {"ethdev_tx-*"}; + char name[RTE_NODE_NAMESIZE]; + const char *next_node = name; + struct lcore_conf *qconf; + uint16_t port_id; + rte_node_t id; + + qconf = &lcore_conf[lcore_id]; + + id = rte_node_from_name(qconf->kernel_recv_node_name); + + /* Fill node pattern */ + strcpy(node_pattern[num_patterns++], qconf->kernel_recv_node_name); + + port_id = lcore_id; + + if ((enabled_port_mask & (1 << port_id)) == 0) { + /* Use available port_id */ + RTE_ETH_FOREACH_DEV(port_id) { + if ((enabled_port_mask & (1 << port_id)) != 0) + break; + } + } + + if (!rte_eth_dev_is_valid_port(port_id)) { + printf("Port %u is not present on the board\n", port_id); + return -1; + } + + /* Prepare the actual name of the cloned node */ + snprintf(name, sizeof(name), "ethdev_tx-%u", port_id); + + /* Update ethdev_rx node edges */ + rte_node_edge_update(id, RTE_EDGE_ID_INVALID, &next_node, 1); + + /* Fill node 
pattern */ + strcpy(node_pattern[num_patterns++], name); + + /* Update node_next details */ + update_kernel_recv_node_next(id, pattern[0]); + + return 0; +} + +uint32_t +ethdev_ports_setup(void) +{ + struct rte_eth_dev_info dev_info; + uint32_t nb_tx_queue, nb_lcores; + uint32_t nb_ports, nb_conf = 0; + uint8_t nb_rx_queue; + uint16_t portid; + int ret; + + nb_ports = rte_eth_dev_count_avail(); + nb_lcores = rte_lcore_count(); + + RTE_ETH_FOREACH_DEV(portid) { + struct rte_eth_conf local_port_conf = port_conf; + + /* Skip ports that are not enabled */ + if ((enabled_port_mask & (1 << portid)) == 0) { + printf("\nSkipping disabled port %d\n", portid); + continue; + } + + /* Init port */ + printf("Initializing port %d ... ", portid); + fflush(stdout); + + nb_rx_queue = get_port_n_rx_queues(portid); + nb_tx_queue = nb_lcores; + if (nb_tx_queue > MAX_TX_QUEUE_PER_PORT) + nb_tx_queue = MAX_TX_QUEUE_PER_PORT; + printf("Creating queues: nb_rxq=%d nb_txq=%u...\n", nb_rx_queue, nb_tx_queue); + + rte_eth_dev_info_get(portid, &dev_info); + + if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) + local_port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE; + + local_port_conf.rx_adv_conf.rss_conf.rss_hf &= dev_info.flow_type_rss_offloads; + if (local_port_conf.rx_adv_conf.rss_conf.rss_hf != + port_conf.rx_adv_conf.rss_conf.rss_hf) { + printf("Port %u modified RSS hash function based on hardware support," + "requested:%#" PRIx64 " configured:%#" PRIx64 "\n", + portid, port_conf.rx_adv_conf.rss_conf.rss_hf, + local_port_conf.rx_adv_conf.rss_conf.rss_hf); + } + + ret = rte_eth_dev_configure(portid, nb_rx_queue, nb_tx_queue, &local_port_conf); + if (ret < 0) + rte_exit(EXIT_FAILURE, "Cannot configure device: err=%d, port=%d\n", ret, + portid); + + ret = rte_eth_dev_adjust_nb_rx_tx_desc(portid, &nb_rxd, &nb_txd); + if (ret < 0) + rte_exit(EXIT_FAILURE, + "Cannot adjust number of descriptors: err=%d," + "port=%d\n", + ret, portid); + + /* Init memory */ + if (!per_port_pool) + ret = init_mem(0, NB_MBUF(nb_ports)); + else + ret = init_mem(portid, NB_MBUF(1)); + + if (ret < 0) + rte_exit(EXIT_FAILURE, "init_mem() failed\n"); + + /* Setup ethdev node config */ + ethdev_conf[nb_conf].port_id = portid; + ethdev_conf[nb_conf].num_rx_queues = nb_rx_queue; + ethdev_conf[nb_conf].num_tx_queues = nb_tx_queue; + if (!per_port_pool) + ethdev_conf[nb_conf].mp = pktmbuf_pool[0]; + + else + ethdev_conf[nb_conf].mp = pktmbuf_pool[portid]; + ethdev_conf[nb_conf].mp_count = NB_SOCKETS; + + nb_conf++; + } + + return nb_conf; +} + +void +ethdev_rxq_configure(void) +{ + struct rte_eth_dev_info dev_info; + uint16_t queueid, portid; + struct lcore_conf *qconf; + uint8_t queue, socketid; + uint32_t lcore_id; + int ret; + + for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { + if (rte_lcore_is_enabled(lcore_id) == 0) + continue; + qconf = &lcore_conf[lcore_id]; + printf("\nInitializing rx queues on lcore %u ... 
", lcore_id); + fflush(stdout); + + /* Init RX queues */ + for (queue = 0; queue < qconf->n_rx_queue; ++queue) { + struct rte_eth_rxconf rxq_conf; + + portid = qconf->rx_queue_list[queue].port_id; + queueid = qconf->rx_queue_list[queue].queue_id; + + if (numa_on) + socketid = (uint8_t)rte_lcore_to_socket_id(lcore_id); + else + socketid = 0; + + printf("rxq=%d,%d,%d ", portid, queueid, socketid); + fflush(stdout); + + rte_eth_dev_info_get(portid, &dev_info); + rxq_conf = dev_info.default_rxconf; + rxq_conf.offloads = port_conf.rxmode.offloads; + if (!per_port_pool) + ret = rte_eth_rx_queue_setup(portid, queueid, nb_rxd, socketid, + &rxq_conf, pktmbuf_pool[0][socketid]); + else + ret = rte_eth_rx_queue_setup(portid, queueid, nb_rxd, socketid, + &rxq_conf, + pktmbuf_pool[portid][socketid]); + if (ret < 0) + rte_exit(EXIT_FAILURE, "rte_eth_rx_queue_setup: err=%d, port=%d\n", + ret, portid); + + snprintf(qconf->rx_queue_list[queue].node_name, RTE_NODE_NAMESIZE, + "ethdev_rx-%u-%u", portid, queueid); + } + } + printf("\n"); +} + +void +ethdev_txq_configure(void) +{ + struct rte_eth_dev_info dev_info; + struct rte_eth_txconf *txconf; + uint16_t queueid, portid; + uint32_t lcore_id; + uint8_t socketid; + int ret; + + RTE_ETH_FOREACH_DEV(portid) { + struct rte_eth_conf local_port_conf = port_conf; + + /* Skip ports that are not enabled */ + if ((enabled_port_mask & (1 << portid)) == 0) { + printf("\nSkipping disabled port %d\n", portid); + continue; + } + + rte_eth_dev_info_get(portid, &dev_info); + + /* Init one TX queue per (lcore,port) pair */ + queueid = 0; + for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { + if (rte_lcore_is_enabled(lcore_id) == 0) + continue; + + if (numa_on) + socketid = (uint8_t)rte_lcore_to_socket_id(lcore_id); + else + socketid = 0; + + printf("txq=%u,%d,%d ", lcore_id, queueid, socketid); + fflush(stdout); + + txconf = &dev_info.default_txconf; + txconf->offloads = local_port_conf.txmode.offloads; + ret = rte_eth_tx_queue_setup(portid, queueid, nb_txd, socketid, txconf); + if (ret < 0) + rte_exit(EXIT_FAILURE, "rte_eth_tx_queue_setup: err=%d, port=%d\n", + ret, portid); + queueid++; + } + } + printf("\n"); +} + +int +configure_graph_nodes(uint64_t valid_nodes) +{ + int ret = 0; + + if (valid_nodes & TEST_GRAPH_ETHDEV_RX_NODE) { + ret = ethdev_rx_node_configure(ethdev_conf, nb_conf); + if (ret) { + printf("ethdev_rx_node_configure: err=%d\n", ret); + goto exit; + } + } + + if (valid_nodes & TEST_GRAPH_ETHDEV_TX_NODE) { + ret = ethdev_tx_node_configure(); + if (ret) { + printf("ethdev_tx_node_configure: err=%d\n", ret); + goto exit; + } + } + + if (valid_nodes & TEST_GRAPH_PUNT_KERNEL_NODE) { + ret = punt_kernel_node_configure(); + if (ret) { + printf("punt_kernel_node_configure: err=%d\n", ret); + goto exit; + } + } + + if (valid_nodes & TEST_GRAPH_KERNEL_RECV_NODE) { + ret = kernel_recv_node_configure(); + if (ret) { + printf("kernel_recv_node_configure: err=%d\n", ret); + goto exit; + } + } + +exit: + return ret; +} + +static int +link_graph_nodes(uint64_t valid_nodes, uint32_t lcore_id) +{ + int ret = 0; + + num_patterns = 0; + + if (valid_nodes == (TEST_GRAPH_ETHDEV_TX_NODE | TEST_GRAPH_ETHDEV_RX_NODE)) { + ret = link_ethdev_rx_to_tx_node(lcore_id); + if (ret) { + printf("link_ethdev_rx_to_tx_node: err=%d\n", ret); + goto exit; + } + } else if (valid_nodes == (TEST_GRAPH_ETHDEV_RX_NODE | TEST_GRAPH_PUNT_KERNEL_NODE)) { + link_ethdev_rx_to_punt_kernel_node(lcore_id); + } else if (valid_nodes == (TEST_GRAPH_ETHDEV_TX_NODE | TEST_GRAPH_KERNEL_RECV_NODE)) { + 
link_kernel_recv_to_ethdev_tx_node(lcore_id); + } else { + printf("Invalid node map\n"); + ret = -EINVAL; + } + +exit: + return ret; +} + +void +start_eth_ports(void) +{ + uint16_t portid; + int ret; + + RTE_ETH_FOREACH_DEV(portid) { + if ((enabled_port_mask & (1 << portid)) == 0) + continue; + + ret = rte_eth_dev_start(portid); + if (ret < 0) + rte_exit(EXIT_FAILURE, "rte_eth_dev_start: err=%d, port=%d\n", ret, portid); + + if (promiscuous_on) + rte_eth_promiscuous_enable(portid); + } +} + +void +stop_eth_ports(void) +{ + uint16_t portid; + int ret; + + RTE_ETH_FOREACH_DEV(portid) { + if ((enabled_port_mask & (1 << portid)) == 0) + continue; + printf("Closing port %d...", portid); + ret = rte_eth_dev_stop(portid); + if (ret != 0) + printf("Failed to stop port %u: %s\n", portid, rte_strerror(-ret)); + rte_eth_dev_close(portid); + } +} + +int +create_graph(uint64_t valid_nodes) +{ + struct rte_graph_param graph_conf; + struct lcore_conf *qconf; + uint32_t lcore_id; + int ret; + + node_patterns = malloc(MAX_NODE_PATTERNS * sizeof(*node_patterns)); + if (!node_patterns) + return -ENOMEM; + + memset(&graph_conf, 0, sizeof(graph_conf)); + graph_conf.node_patterns = node_patterns; + + for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { + rte_graph_t graph_id; + rte_edge_t i; + + if (rte_lcore_is_enabled(lcore_id) == 0) + continue; + + qconf = &lcore_conf[lcore_id]; + + /* Skip graph creation if no source exists */ + if (!qconf->n_rx_queue) + continue; + + ret = link_graph_nodes(valid_nodes, lcore_id); + if (ret) + rte_exit(EXIT_FAILURE, "link_graph_nodes(): failed\n"); + + for (i = 0; i < num_patterns; i++) { + graph_conf.node_patterns[i] = node_pattern[i]; + printf("%s,", graph_conf.node_patterns[i]); + } + printf("\n"); + + graph_conf.nb_node_patterns = num_patterns; + + graph_conf.socket_id = rte_lcore_to_socket_id(lcore_id); + + snprintf(qconf->name, sizeof(qconf->name), "worker_%u", lcore_id); + + graph_id = rte_graph_create(qconf->name, &graph_conf); + if (graph_id == RTE_GRAPH_ID_INVALID) + rte_exit(EXIT_FAILURE, "rte_graph_create(): graph_id invalid for lcore%u\n", + lcore_id); + + qconf->graph_id = graph_id; + qconf->graph = rte_graph_lookup(qconf->name); + + if (!qconf->graph) + rte_exit(EXIT_FAILURE, "rte_graph_lookup(): graph %s not found\n", + qconf->name); + } + + return 0; +} + +int +destroy_graph(void) +{ + uint32_t lcore_id; + rte_graph_t id; + int ret; + + RTE_LCORE_FOREACH_WORKER(lcore_id) + { + if (lcore_conf[lcore_id].graph) { + id = rte_graph_from_name(lcore_conf[lcore_id].name); + if (rte_graph_destroy(id)) { + printf("graph_id %u destroy failed.\n", id); + ret = -1; + } + } + } + + if (node_patterns) + free(node_patterns); + + return ret; +} + +int +main(int argc, char **argv) +{ + uint64_t valid_nodes; + uint32_t lcore_id; + int ret; + + graph_walk_quit = false; + force_quit = false; + interactive = 0; + + node_patterns = NULL; + + signal(SIGINT, signal_handler); + signal(SIGTERM, signal_handler); + + testgraph_logtype = rte_log_register("testgraph"); + if (testgraph_logtype < 0) + rte_exit(EXIT_FAILURE, "Cannot register log type"); + + set_default_node_pattern(); + + rte_log_set_level(testgraph_logtype, RTE_LOG_DEBUG); + + ret = rte_eal_init(argc, argv); + if (ret < 0) + rte_exit(EXIT_FAILURE, "Cannot init EAL: %s\n", rte_strerror(rte_errno)); + argc -= ret; + argv += ret; + + if (argc > 1) { + ret = parse_cmdline_args(argc, argv); + if (ret < 0) + rte_exit(EXIT_FAILURE, "Invalid command line parameters\n"); + } + +#ifdef RTE_LIB_CMDLINE + if (init_cmdline() != 
0) + rte_exit(EXIT_FAILURE, "Could not initialise cmdline context.\n"); + + if (interactive == 1) { + prompt(); + } else +#endif + { + if (validate_config() < 0) + rte_exit(EXIT_FAILURE, "Config validation failed.\n"); + + ret = validate_node_names(&valid_nodes); + if (ret) + rte_exit(EXIT_FAILURE, "validate_node_names: err=%d\n", ret); + + nb_conf = ethdev_ports_setup(); + + ethdev_rxq_configure(); + + ethdev_txq_configure(); + + ret = configure_graph_nodes(valid_nodes); + if (ret) + rte_exit(EXIT_FAILURE, "configure_graph_nodes: err=%d\n", ret); + + ret = create_graph(valid_nodes); + if (ret) + rte_exit(EXIT_FAILURE, "create_graph: err=%d\n", ret); + + stats = create_graph_cluster_stats(); + if (stats == NULL) + rte_exit(EXIT_FAILURE, "create_graph_cluster_stats() failed\n"); + + check_all_ports_link_status(enabled_port_mask); + + start_eth_ports(); + + /* Launch per-lcore init on every worker lcore */ + rte_eal_mp_remote_launch(graph_main_loop, NULL, SKIP_MAIN); + + /* Accumulate and print stats on main until exit */ + if (rte_graph_has_stats_feature()) + print_stats(); + + /* Wait for worker cores to exit */ + RTE_LCORE_FOREACH_WORKER(lcore_id) { + ret = rte_eal_wait_lcore(lcore_id); + if (ret < 0) + break; + } + + ret = destroy_graph(); + + stop_eth_ports(); + } + + /* clean up the EAL */ + ret = rte_eal_cleanup(); + if (ret != 0) + rte_exit(EXIT_FAILURE, "EAL cleanup failed: %s\n", strerror(-ret)); + + return EXIT_SUCCESS; +} diff --git a/app/test-graph/testgraph.h b/app/test-graph/testgraph.h new file mode 100644 index 0000000000..ea1f8ef4fa --- /dev/null +++ b/app/test-graph/testgraph.h @@ -0,0 +1,91 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell International Ltd. + */ + +#ifndef _TESTGRAPH_H_ +#define _TESTGRAPH_H_ + +#include + +#include +#include +#include + +#include +#include + + +#define MAX_LCORE_PARAMS 1024 +#define MAX_NODE_PATTERNS 128 + +#ifndef BIT_ULL +#define BIT_ULL(nr) (1ULL << (nr)) +#endif + +#define TEST_GRAPH_ETHDEV_RX_NODE BIT_ULL(0) +#define TEST_GRAPH_ETHDEV_TX_NODE BIT_ULL(1) +#define TEST_GRAPH_PUNT_KERNEL_NODE BIT_ULL(2) +#define TEST_GRAPH_KERNEL_RECV_NODE BIT_ULL(3) +#define TEST_GRAPH_IP4_LOOKUP_NODE BIT_ULL(4) +#define TEST_GRAPH_IP4_REWRITE_NODE BIT_ULL(5) +#define TEST_GRAPH_PKT_CLS_NODE BIT_ULL(6) +#define TEST_GRAPH_PKT_DROP_NODE BIT_ULL(7) +#define TEST_GRAPH_NULL_NODE BIT_ULL(8) + +static volatile bool force_quit; + +extern struct rte_node_ethdev_config ethdev_conf[RTE_MAX_ETHPORTS]; +extern uint32_t enabled_port_mask; +extern uint32_t nb_conf; + +extern int promiscuous_on; /**< Ports set in promiscuous mode off by default. */ +extern uint8_t interactive; /**< interactive mode is disabled by default. */ +extern uint32_t enabled_port_mask; /**< Mask of enabled ports */ +extern int numa_on; /**< NUMA is enabled by default. 
*/
+extern int per_port_pool;
+
+extern volatile bool graph_walk_quit;
+extern volatile bool run_graph_walk;
+extern struct rte_graph_cluster_stats *stats;
+
+extern char node_pattern[MAX_NODE_PATTERNS][RTE_NODE_NAMESIZE];
+extern uint8_t num_patterns;
+
+struct node_list {
+	const char *nodes[MAX_NODE_PATTERNS];
+	uint64_t test_id;
+	uint8_t size;
+};
+
+struct lcore_params {
+	uint16_t port_id;
+	uint8_t queue_id;
+	uint8_t lcore_id;
+} __rte_cache_aligned;
+
+void prompt(void);
+void prompt_exit(void);
+int init_cmdline(void);
+int validate_config(void);
+int parse_cmdline_args(int argc, char **argv);
+uint32_t ethdev_ports_setup(void);
+void ethdev_rxq_configure(void);
+void ethdev_txq_configure(void);
+void start_eth_ports(void);
+void stop_eth_ports(void);
+int create_graph(uint64_t valid_nodes);
+int destroy_graph(void);
+int graph_main_loop(void *conf);
+struct rte_graph_cluster_stats *create_graph_cluster_stats(void);
+void check_all_ports_link_status(uint32_t port_mask);
+int configure_graph_nodes(uint64_t valid_nodes);
+
+int parse_config(const char *q_arg);
+int parse_node_patterns(const char *q_arg);
+int validate_node_names(uint64_t *valid_nodes);
+void cleanup_node_pattern(void);
+
+#define TESTGRAPH_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_##level, testgraph_logtype, "testgraph: " fmt, ##args)
+
+#endif /* _TESTGRAPH_H_ */
diff --git a/doc/guides/tools/index.rst b/doc/guides/tools/index.rst
index 6f84fc31ff..f18c508fa2 100644
--- a/doc/guides/tools/index.rst
+++ b/doc/guides/tools/index.rst
@@ -20,6 +20,7 @@ DPDK Tools User Guides
     cryptoperf
     comp_perf
     testeventdev
+    testgraph
     testregex
     testmldev
     dts
diff --git a/doc/guides/tools/testgraph.rst b/doc/guides/tools/testgraph.rst
new file mode 100644
index 0000000000..3c1e058724
--- /dev/null
+++ b/doc/guides/tools/testgraph.rst
@@ -0,0 +1,131 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(C) 2023 Marvell International Ltd.
+
+dpdk-test-graph Application
+===========================
+
+The ``dpdk-test-graph`` tool is a Data Plane Development Kit (DPDK) application that allows
+exercising various graph library features. The application provides a generic framework to add
+new test configurations and expand test coverage, verifying the functionality of graph nodes
+and observing the graph cluster statistics.
+
+Running the Application
+-----------------------
+
+The application has a number of command line options:
+
+.. code-block:: console
+
+   dpdk-test-graph [EAL Options] -- [application options]
+
+EAL Options
+~~~~~~~~~~~
+
+The following are the EAL command-line options that can be used in conjunction
+with the ``dpdk-test-graph`` application.
+See the DPDK Getting Started Guides for more information on these options.
+
+* ``-c <COREMASK>`` or ``-l <CORELIST>``
+
+  Set the hexadecimal bitmask of the cores to run on (``-c``),
+  or the list of cores to use (``-l``).
+
+Application Options
+~~~~~~~~~~~~~~~~~~~
+
+The following are the application command-line options:
+
+* ``-p <PORTMASK>``
+
+  Set the ethdev port mask.
+
+* ``-P``
+
+  Set the ethdev ports in promiscuous mode.
+
+* ``--config <config>``
+
+  Set the Rx queue configuration.
+  (i.e. ``--config (port_id,rxq,lcore_id)[,(port_id,rxq,lcore_id)]``).
+
+* ``--node-pattern <patterns>``
+
+  Set the node patterns to use in graph creation.
+  (i.e. ``--node-pattern (node_name0,node_name1[,node_nameX])``).
+
+* ``--per-port-pool``
+
+  Use a separate buffer pool per port.
+
+* ``--no-numa``
+
+  Disable NUMA awareness.
+
+* ``--interactive``
+
+  Switch to interactive mode.
+
+Running the Tool
+~~~~~~~~~~~~~~~~
+
+Here is a sample command line to run a simple iofwd test::
+
+   ./dpdk-test-graph -a 0002:03:00.0 -a 0002:04:00.0 -c 0xF -- -p 0x3 -P \
+   --config "(0,0,2),(1,0,2)" --node-pattern "(ethdev_rx,ethdev_tx)"
+
+Below is a sample command line to punt Rx packets to the kernel::
+
+   ./dpdk-test-graph -a 0002:03:00.0 -a 0002:04:00.0 -c 0xF -- -p 0x3 -P \
+   --config "(0,0,2),(1,0,2)" --node-pattern "(ethdev_rx,punt_kernel)"
+
+Interactive mode
+~~~~~~~~~~~~~~~~
+
+The tool uses the ``--interactive`` command line option to enter interactive mode, where cmdline
+commands are used to set up the required node configuration, create the graph and then start the
+graph walk.
+
+
+testgraph> help
+
+Help is available for the following sections:
+
+    help control  : Start and stop graph walk.
+    help display  : Displaying port, stats and config information.
+    help config   : Configuration information.
+    help all      : All of the above sections.
+
+testgraph> help all
+
+Control forwarding:
+
+start graph_walk
+  Start graph_walk on worker threads.
+
+stop graph_walk
+  Stop worker threads from running graph_walk.
+
+quit
+  Quit to prompt.
+
+
+Display:
+
+show node_list
+  Display the list of supported nodes.
+
+show graph_stats
+  Display the node statistics of the graph cluster.
+
+
+Configuration:
+
+set lcore_config (port_id0,rxq0,lcore_idX),...,(port_idX,rxqX,lcore_idY)
+  Set the lcore configuration.
+
+create_graph (node0_name,node1_name,...,nodeX_name)
+  Create graph instances using the provided node details.
+
+destroy_graph
+  Destroy the graph instances.
+
+testgraph>
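+
+Below is a minimal, illustrative interactive session that chains the commands listed above; the
+port, queue and lcore values are assumptions matching the earlier sample command line, and the
+exact prompt output depends on the setup::
+
+   ./dpdk-test-graph -a 0002:03:00.0 -a 0002:04:00.0 -c 0xF -- -p 0x3 -P --interactive
+   testgraph> set lcore_config (0,0,2),(1,0,2)
+   testgraph> create_graph (ethdev_rx,ethdev_tx)
+   testgraph> start graph_walk
+   testgraph> show graph_stats
+   testgraph> stop graph_walk
+   testgraph> destroy_graph
+   testgraph> quit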