From patchwork Wed Mar 18 21:35:51 2020
X-Patchwork-Submitter: Jerin Jacob Kollanukkaran
X-Patchwork-Id: 66920
X-Patchwork-Delegate: thomas@monjalon.net
To: Marko Kovacevic, Ori Kam, Bruce Richardson, Radu Nicolau,
	Akhil Goyal, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh,
	Nithin Dabilpuram
Date: Thu, 19 Mar 2020 03:05:51 +0530
Message-ID: <20200318213551.3489504-27-jerinj@marvell.com>
In-Reply-To: <20200318213551.3489504-1-jerinj@marvell.com>
References: <20200318213551.3489504-1-jerinj@marvell.com>
Subject: [dpdk-dev] [PATCH v1 26/26] l3fwd-graph: add graph config and main loop

From: Nithin Dabilpuram

Add graph creation and configuration logic, and the graph main loop.
The graph main loop runs on every slave lcore and calls
rte_graph_walk() to walk over the lcore-specific rte_graph. The master
core accumulates and prints the graph walk stats of all the lcores'
graphs.
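For readers new to librte_graph, the essence of the main loop added
below can be sketched as a standalone worker function (illustrative
only: walk_loop and quit are this sketch's own names, and it would be
launched e.g. via rte_eal_remote_launch() with the graph name as
argument; rte_graph_lookup() and rte_graph_walk() are the real
librte_graph calls the patch uses):

  #include <stdbool.h>

  #include <rte_graph.h>        /* rte_graph_lookup() */
  #include <rte_graph_worker.h> /* rte_graph_walk() */

  static volatile bool quit; /* set from a signal handler, like force_quit */

  /* Per-worker loop: spin on rte_graph_walk(), which invokes the
   * process() callback of every node with pending work in this lcore's
   * private graph (ethdev_rx -> ip4_lookup -> ip4_rewrite ->
   * ethdev_tx / pkt_drop).
   */
  static int
  walk_loop(void *arg)
  {
  	struct rte_graph *graph = rte_graph_lookup((const char *)arg);

  	if (graph == NULL)
  		return 0; /* no source queue mapped to this lcore */

  	while (!quit)
  		rte_graph_walk(graph);

  	return 0;
  }

Because each worker lcore walks only its own "worker_<lcore_id>" graph,
the hot loop needs no locking; the master core merely aggregates the
per-graph statistics.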
Signed-off-by: Nithin Dabilpuram
---
 examples/l3fwd-graph/main.c | 241 +++++++++++++++++++++++++++++++++++-
 1 file changed, 239 insertions(+), 2 deletions(-)

diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c
index 436987877..104a4d8db 100644
--- a/examples/l3fwd-graph/main.c
+++ b/examples/l3fwd-graph/main.c
@@ -22,9 +22,13 @@
 #include <rte_cycles.h>
 #include <rte_eal.h>
 #include <rte_ethdev.h>
+#include <rte_graph.h>
+#include <rte_graph_worker.h>
 #include <rte_launch.h>
 #include <rte_lcore.h>
 #include <rte_log.h>
+#include <rte_node_eth_api.h>
+#include <rte_node_ip4_api.h>
 #include <rte_per_lcore.h>
 #include <rte_string_fns.h>
 #include <rte_vect.h>
@@ -74,12 +78,17 @@ static uint32_t enabled_port_mask;
 struct lcore_rx_queue {
 	uint16_t port_id;
 	uint8_t queue_id;
+	char node_name[RTE_NODE_NAMESIZE];
 };

 /* Lcore conf */
 struct lcore_conf {
 	uint16_t n_rx_queue;
 	struct lcore_rx_queue rx_queue_list[MAX_RX_QUEUE_PER_LCORE];
+
+	struct rte_graph *graph;
+	char name[RTE_GRAPH_NAMESIZE];
+	rte_graph_t graph_id;
 } __rte_cache_aligned;

 static struct lcore_conf lcore_conf[RTE_MAX_LCORE];
@@ -118,6 +127,25 @@ static struct rte_eth_conf port_conf = {

 static struct rte_mempool *pktmbuf_pool[RTE_MAX_ETHPORTS][NB_SOCKETS];

+static struct rte_node_ethdev_config ethdev_conf[RTE_MAX_ETHPORTS];
+
+struct ipv4_l3fwd_lpm_route {
+	uint32_t ip;
+	uint8_t depth;
+	uint8_t if_out;
+};
+
+#define IPV4_L3FWD_LPM_NUM_ROUTES                                      \
+	(sizeof(ipv4_l3fwd_lpm_route_array) /                           \
+	 sizeof(ipv4_l3fwd_lpm_route_array[0]))
+
+/* 198.18.0.0/16 is set aside for RFC2544 benchmarking. */
+static struct ipv4_l3fwd_lpm_route ipv4_l3fwd_lpm_route_array[] = {
+	{RTE_IPV4(198, 18, 0, 0), 24, 0}, {RTE_IPV4(198, 18, 1, 0), 24, 1},
+	{RTE_IPV4(198, 18, 2, 0), 24, 2}, {RTE_IPV4(198, 18, 3, 0), 24, 3},
+	{RTE_IPV4(198, 18, 4, 0), 24, 4}, {RTE_IPV4(198, 18, 5, 0), 24, 5},
+	{RTE_IPV4(198, 18, 6, 0), 24, 6}, {RTE_IPV4(198, 18, 7, 0), 24, 7},
+};
+
 static int
 check_lcore_params(void)
 {
@@ -627,17 +655,87 @@ signal_handler(int signum)
 	}
 }

+static void
+print_stats(void)
+{
+	const char topLeft[] = {27, '[', '1', ';', '1', 'H', '\0'};
+	const char clr[] = {27, '[', '2', 'J', '\0'};
+	struct rte_graph_cluster_stats_param s_param;
+	struct rte_graph_cluster_stats *stats;
+	const char *pattern = "worker_*";
+
+	/* Prepare stats object */
+	memset(&s_param, 0, sizeof(s_param));
+	s_param.f = stdout;
+	s_param.socket_id = SOCKET_ID_ANY;
+	s_param.graph_patterns = &pattern;
+	s_param.nb_graph_patterns = 1;
+
+	stats = rte_graph_cluster_stats_create(&s_param);
+	if (stats == NULL)
+		rte_exit(EXIT_FAILURE, "Unable to create stats object\n");
+
+	while (!force_quit) {
+		/* Clear screen and move to top left */
+		printf("%s%s", clr, topLeft);
+		rte_graph_cluster_stats_get(stats, 0);
+		rte_delay_ms(1E3);
+	}
+
+	rte_graph_cluster_stats_destroy(stats);
+}
+
+/* Main processing loop */
+static int
+graph_main_loop(void *conf)
+{
+	struct lcore_conf *qconf;
+	struct rte_graph *graph;
+	uint32_t lcore_id;
+
+	RTE_SET_USED(conf);
+
+	lcore_id = rte_lcore_id();
+	qconf = &lcore_conf[lcore_id];
+	graph = qconf->graph;
+
+	if (!graph) {
+		RTE_LOG(INFO, L3FWD_GRAPH, "Lcore %u has nothing to do\n",
+			lcore_id);
+		return 0;
+	}
+
+	RTE_LOG(INFO, L3FWD_GRAPH,
+		"Entering main loop on lcore %u, graph %s(%p)\n", lcore_id,
+		qconf->name, graph);
+
+	while (likely(!force_quit))
+		rte_graph_walk(graph);
+
+	return 0;
+}
+
 int
 main(int argc, char **argv)
 {
+	const char *node_patterns[64] = {
+		"ip4*",
+		"ethdev_tx-*",
+		"pkt_drop",
+	};
+	uint8_t rewrite_data[2 * sizeof(struct rte_ether_addr)];
 	uint8_t nb_rx_queue, queue, socketid;
+	struct rte_graph_param graph_conf;
 	struct rte_eth_dev_info dev_info;
+	uint32_t nb_ports, nb_conf = 0;
 	uint32_t n_tx_queue, nb_lcores;
 	struct rte_eth_txconf *txconf;
-	uint16_t queueid, portid;
+	uint16_t queueid, portid, i;
 	struct lcore_conf *qconf;
+	uint16_t nb_patterns = 3;
+	uint16_t nb_graphs = 0;
+	uint8_t rewrite_len;
 	uint32_t lcore_id;
-	uint32_t nb_ports;
 	int ret;

 	/* Init EAL */
@@ -786,6 +884,18 @@
 			queueid++;
 		}

+		/* Setup ethdev node config */
+		ethdev_conf[nb_conf].port_id = portid;
+		ethdev_conf[nb_conf].num_rx_queues = nb_rx_queue;
+		ethdev_conf[nb_conf].num_tx_queues = n_tx_queue;
+		if (!per_port_pool)
+			ethdev_conf[nb_conf].mp = pktmbuf_pool[0];
+		else
+			ethdev_conf[nb_conf].mp = pktmbuf_pool[portid];
+		ethdev_conf[nb_conf].mp_count = NB_SOCKETS;
+
+		nb_conf++;
 		printf("\n");
 	}

@@ -829,11 +939,26 @@
 					 "port=%d\n",
 					 ret, portid);

+			/* Add this queue node to its graph */
+			snprintf(qconf->rx_queue_list[queue].node_name,
+				 RTE_NODE_NAMESIZE, "ethdev_rx-%u-%u", portid,
+				 queueid);
 		}
+
+		/* Alloc a graph to this lcore only if source exists */
+		if (qconf->n_rx_queue) {
+			qconf->graph_id = nb_graphs;
+			nb_graphs++;
+		}
 	}
 	printf("\n");

+	/* Ethdev node config, skip rx queue mapping */
+	ret = rte_node_eth_config(ethdev_conf, nb_conf, nb_graphs);
+	if (ret)
+		rte_exit(EXIT_FAILURE, "rte_node_eth_config: err=%d\n", ret);
+
 	/* Start ports */
 	RTE_ETH_FOREACH_DEV(portid) {
@@ -861,6 +986,118 @@

 	check_all_ports_link_status(enabled_port_mask);

+	/* Graph initialization */
+	memset(&graph_conf, 0, sizeof(graph_conf));
+	graph_conf.node_patterns = node_patterns;
+
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		rte_graph_t graph_id;
+		rte_edge_t i;
+
+		if (rte_lcore_is_enabled(lcore_id) == 0)
+			continue;
+
+		qconf = &lcore_conf[lcore_id];
+
+		/* Skip graph creation if no source exists */
+		if (!qconf->n_rx_queue)
+			continue;
+
+		/* Add rx node patterns of this lcore */
+		for (i = 0; i < qconf->n_rx_queue; i++) {
+			graph_conf.node_patterns[nb_patterns + i] =
+				qconf->rx_queue_list[i].node_name;
+		}
+
+		graph_conf.nb_node_patterns = nb_patterns + i;
+		graph_conf.socket_id = rte_lcore_to_socket_id(lcore_id);
+
+		snprintf(qconf->name, sizeof(qconf->name), "worker_%u",
+			 lcore_id);
+
+		graph_id = rte_graph_create(qconf->name, &graph_conf);
+		if (graph_id != qconf->graph_id)
+			rte_exit(EXIT_FAILURE,
+				 "rte_graph_create(): graph_id=%d not as "
+				 "expected for lcore %u(%u)\n",
+				 graph_id, lcore_id, qconf->graph_id);
+
+		qconf->graph = rte_graph_lookup(qconf->name);
+		if (!qconf->graph)
+			rte_exit(EXIT_FAILURE,
				 "rte_graph_lookup(): graph %s not found\n",
+				 qconf->name);
+	}
+
+	memset(&rewrite_data, 0, sizeof(rewrite_data));
+	rewrite_len = sizeof(rewrite_data);
+
+	/* Add route to ip4 graph infra */
+	for (i = 0; i < IPV4_L3FWD_LPM_NUM_ROUTES; i++) {
+		char route_str[INET6_ADDRSTRLEN * 4];
+		char abuf[INET6_ADDRSTRLEN];
+		struct in_addr in;
+		uint32_t dst_port;
+		uint16_t next_hop;
+
+		/* Skip unused ports */
+		if (((1 << ipv4_l3fwd_lpm_route_array[i].if_out) &
+		     enabled_port_mask) == 0)
+			continue;
+
+		dst_port = ipv4_l3fwd_lpm_route_array[i].if_out;
+		next_hop = i;
+
+		in.s_addr = htonl(ipv4_l3fwd_lpm_route_array[i].ip);
+		snprintf(route_str, sizeof(route_str), "%s / %d (%d)",
+			 inet_ntop(AF_INET, &in, abuf, sizeof(abuf)),
+			 ipv4_l3fwd_lpm_route_array[i].depth,
+			 ipv4_l3fwd_lpm_route_array[i].if_out);
+
+		ret = rte_node_ip4_route_add(
+			ipv4_l3fwd_lpm_route_array[i].ip,
+			ipv4_l3fwd_lpm_route_array[i].depth, next_hop,
+			RTE_NODE_IP4_LOOKUP_NEXT_REWRITE);
+
+		if (ret < 0)
+			rte_exit(EXIT_FAILURE,
+				 "Unable to add ip4 route %s to graph\n",
+				 route_str);
+
+		memcpy(rewrite_data, val_eth + dst_port, rewrite_len);
+
+		/* Add next hop for a given destination */
+		ret = rte_node_ip4_rewrite_add(next_hop, rewrite_data,
+					       rewrite_len, dst_port);
+		if (ret < 0)
+			rte_exit(EXIT_FAILURE,
+				 "Unable to add next hop %u for route %s\n",
+				 next_hop, route_str);
+
+		RTE_LOG(INFO, L3FWD_GRAPH, "Added route %s, next_hop %u\n",
+			route_str, next_hop);
+	}
+
+	/* Launch per-lcore init on every slave lcore */
+	rte_eal_mp_remote_launch(graph_main_loop, NULL, SKIP_MASTER);
+
+	/* Accumulate and print stats on master until exit */
+	if (rte_graph_has_stats_feature())
+		print_stats();
+
+	/* Wait for slave cores to exit */
+	ret = 0;
+	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+		ret = rte_eal_wait_lcore(lcore_id);
+		/* Destroy graph */
+		rte_graph_destroy(lcore_conf[lcore_id].name);
+		if (ret < 0) {
+			ret = -1;
+			break;
+		}
+	}
+
 	/* Stop ports */
 	RTE_ETH_FOREACH_DEV(portid) {
 		if ((enabled_port_mask & (1 << portid)) == 0)