From patchwork Wed Oct 9 15:10:09 2019
X-Patchwork-Submitter: Anoob Joseph
X-Patchwork-Id: 60812
X-Patchwork-Delegate: gakhil@marvell.com
From: Anoob Joseph
To: Akhil Goyal, Radu Nicolau
CC: Anoob Joseph, Thomas Monjalon, Jerin Jacob, Narayana Prasad, Lukasz Bartosik
Date: Wed, 9 Oct 2019 20:40:09 +0530
Message-ID: <1570633816-4706-7-git-send-email-anoobj@marvell.com>
In-Reply-To: <1570633816-4706-1-git-send-email-anoobj@marvell.com>
References: <1570633816-4706-1-git-send-email-anoobj@marvell.com>
Subject: [dpdk-dev] [RFC PATCH 06/13] examples/ipsec-secgw: add routines to launch workers

With eventmode, workers can be drafted differently according to the
capabilities of the underlying event device. The added functions will
receive an array of such workers and probe the eventmode properties to
choose the worker.
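
For illustration, the registration side could look like the sketch below.
This is only a sketch: the app_wrkr_burst()/app_wrkr_non_burst() functions
and the app_wrkrs[] table are hypothetical application code, while
struct eh_app_worker_params, EH_RX_TYPE_BURST/EH_RX_TYPE_NON_BURST and the
worker_thread signature are the ones added to event_helper.h by this patch.

    #include "event_helper.h"

    /* Hypothetical worker for event devices with burst dequeue support */
    static void
    app_wrkr_burst(struct eh_conf *conf, struct eh_event_link_info *links,
                   uint8_t nb_links)
    {
            /* Dequeue events in bursts from 'links' and process the packets */
            (void)conf; (void)links; (void)nb_links;
    }

    /* Hypothetical worker for event devices without burst dequeue support */
    static void
    app_wrkr_non_burst(struct eh_conf *conf, struct eh_event_link_info *links,
                       uint8_t nb_links)
    {
            /* Dequeue one event at a time from 'links' and process the packets */
            (void)conf; (void)links; (void)nb_links;
    }

    /* Table handed to eh_launch_worker(); the helper runs the entry whose
     * capabilities match those of the configured event device. */
    static struct eh_app_worker_params app_wrkrs[] = {
            { .cap.burst = EH_RX_TYPE_BURST,     .worker_thread = app_wrkr_burst },
            { .cap.burst = EH_RX_TYPE_NON_BURST, .worker_thread = app_wrkr_non_burst },
    };
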
Signed-off-by: Anoob Joseph Signed-off-by: Lukasz Bartosik --- examples/ipsec-secgw/event_helper.c | 347 +++++++++++++++++++++++++++++++++++- examples/ipsec-secgw/event_helper.h | 48 +++++ 2 files changed, 391 insertions(+), 4 deletions(-) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index c43360e..6993092 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -1,6 +1,7 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (C) 2019 Marvell International Ltd. */ +#include #include #include #include @@ -9,6 +10,8 @@ #include "event_helper.h" +static volatile bool eth_core_running; + static int eh_get_enabled_cores(struct rte_bitmap *eth_core_mask) { @@ -98,6 +101,16 @@ eh_get_eventdev_params(struct eventmode_conf *em_conf, return &(em_conf->eventdev_config[i]); } +static inline bool +eh_dev_has_burst_mode(uint8_t dev_id) +{ + struct rte_event_dev_info dev_info; + + rte_event_dev_info_get(dev_id, &dev_info); + return (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_BURST_MODE) ? + true : false; +} + static int eh_validate_user_params(struct eventmode_conf *em_conf) { @@ -692,10 +705,8 @@ eh_rx_adapter_configure(struct eventmode_conf *em_conf, } /* - * TODO * Rx core will invoke the service when required. The runstate check * is not required. - * */ rte_service_set_runstate_mapped_check(service_id, 0); @@ -727,6 +738,262 @@ eh_initialize_rx_adapter(struct eventmode_conf *em_conf) return 0; } +static int32_t +eh_start_worker_eth_core(struct eventmode_conf *em_conf, uint32_t lcore_id) +{ + uint32_t service_id[EVENT_MODE_MAX_ADAPTERS_PER_RX_CORE]; + struct rx_adapter_conf *rx_adapter; + struct tx_adapter_conf *tx_adapter; + int service_count = 0; + int adapter_id; + int32_t ret; + int i; + + EH_LOG_INFO("Entering eth_core processing on lcore %u", lcore_id); + + /* + * Need to parse adapter conf to see which of all Rx adapters need + * to be handled by this core. + */ + for (i = 0; i < em_conf->nb_rx_adapter; i++) { + /* Check if we have exceeded the max allowed */ + if (service_count > EVENT_MODE_MAX_ADAPTERS_PER_RX_CORE) { + EH_LOG_ERR( + "Exceeded the max allowed adapters per rx core"); + break; + } + + rx_adapter = &(em_conf->rx_adapter[i]); + if (rx_adapter->rx_core_id != lcore_id) + continue; + + /* Adapter need to be handled by this core */ + adapter_id = rx_adapter->adapter_id; + + /* Get the service ID for the adapters */ + ret = rte_event_eth_rx_adapter_service_id_get(adapter_id, + &(service_id[service_count])); + + if (ret != -ESRCH && ret != 0) { + EH_LOG_ERR( + "Error getting service ID used by Rx adapter"); + return ret; + } + + /* Update service count */ + service_count++; + } + + /* + * Need to parse adapter conf to see which all Tx adapters need to be + * handled this core. 
+ */ + for (i = 0; i < em_conf->nb_tx_adapter; i++) { + /* Check if we have exceeded the max allowed */ + if (service_count > EVENT_MODE_MAX_ADAPTERS_PER_TX_CORE) { + EH_LOG_ERR( + "Exceeded the max allowed adapters per Tx core"); + break; + } + + tx_adapter = &(em_conf->tx_adapter[i]); + if (tx_adapter->tx_core_id != lcore_id) + continue; + + /* Adapter need to be handled by this core */ + adapter_id = tx_adapter->adapter_id; + + /* Get the service ID for the adapters */ + ret = rte_event_eth_tx_adapter_service_id_get(adapter_id, + &(service_id[service_count])); + + if (ret != -ESRCH && ret != 0) { + EH_LOG_ERR( + "Error getting service ID used by Tx adapter"); + return ret; + } + + /* Update service count */ + service_count++; + } + + eth_core_running = true; + + while (eth_core_running) { + for (i = 0; i < service_count; i++) { + /* Initiate adapter service */ + rte_service_run_iter_on_app_lcore(service_id[i], 0); + } + } + + return 0; +} + +static int32_t +eh_stop_worker_eth_core(void) +{ + if (eth_core_running) { + EH_LOG_INFO("Stopping eth cores"); + eth_core_running = false; + } + return 0; +} + +static struct eh_app_worker_params * +eh_find_worker(uint32_t lcore_id, struct eh_conf *conf, + struct eh_app_worker_params *app_wrkrs, uint8_t nb_wrkr_param) +{ + struct eh_event_link_info *link = NULL; + struct eventmode_conf *em_conf; + uint8_t eventdev_id; + int i; + struct eh_app_worker_params curr_conf = { + {{0} }, NULL}; + struct eh_app_worker_params *tmp_wrkr; + + /* Get eventmode conf */ + em_conf = (struct eventmode_conf *)(conf->mode_params); + + /* + * Event device to be used will be derived from the first lcore-event + * link. + * + * Assumption: All lcore-event links tied to a core would be using the + * same event device. in other words, one core would be polling on + * queues of a single event device only. 
+ */ + + /* Get a link for this lcore */ + for (i = 0; i < em_conf->nb_link; i++) { + link = &(em_conf->link[i]); + if (link->lcore_id == lcore_id) + break; + } + + if (link == NULL) { + EH_LOG_ERR( + "No valid link found for lcore(%d)", lcore_id); + return NULL; + } + + /* Get event dev ID */ + eventdev_id = link->eventdev_id; + + /* Populate the curr_conf with the capabilities */ + + /* Check for burst mode */ + if (eh_dev_has_burst_mode(eventdev_id)) + curr_conf.cap.burst = EH_RX_TYPE_BURST; + else + curr_conf.cap.burst = EH_RX_TYPE_NON_BURST; + + /* Now parse the passed list and see if we have matching capabilities */ + + /* Initialize the pointer used to traverse the list */ + tmp_wrkr = app_wrkrs; + + for (i = 0; i < nb_wrkr_param; i++, tmp_wrkr++) { + + /* Skip this if capabilities are not matching */ + if (tmp_wrkr->cap.u64 != curr_conf.cap.u64) + continue; + + /* If the checks pass, we have a match */ + return tmp_wrkr; + } + + return NULL; +} + +static int +eh_verify_match_worker( + struct eh_app_worker_params *match_wrkr) +{ + /* Verify registered worker */ + if (match_wrkr->worker_thread == NULL) { + EH_LOG_ERR("No worker registered for second stage"); + return 0; + } + + /* Success */ + return 1; +} + +static uint8_t +eh_get_event_lcore_links(uint32_t lcore_id, struct eh_conf *mode_conf, + struct eh_event_link_info **links) +{ + int i; + int index = 0; + uint8_t lcore_nb_link = 0; + struct eh_event_link_info *link; + struct eh_event_link_info *link_cache; + struct eventmode_conf *em_conf = NULL; + size_t cache_size; + size_t single_link_size; + + if (mode_conf == NULL || links == NULL) { + EH_LOG_ERR("Invalid args"); + return -EINVAL; + } + + /* Get eventmode conf */ + em_conf = (struct eventmode_conf *)(mode_conf->mode_params); + + if (em_conf == NULL) { + EH_LOG_ERR("Invalid event mode conf"); + return -EINVAL; + } + + /* Get the number of links registered */ + for (i = 0; i < em_conf->nb_link; i++) { + + /* Get link */ + link = &(em_conf->link[i]); + + /* Check if we have link intended for this lcore */ + if (link->lcore_id == lcore_id) { + + /* Update the number of links for this core */ + lcore_nb_link++; + + } + } + + /* Compute size of one entry to be copied */ + single_link_size = sizeof(struct eh_event_link_info); + + /* Compute size of the buffer required */ + cache_size = lcore_nb_link * + sizeof(struct eh_event_link_info); + + /* Compute size of the buffer required */ + link_cache = calloc(1, cache_size); + + /* Get the number of links registered */ + for (i = 0; i < em_conf->nb_link; i++) { + + /* Get link */ + link = &(em_conf->link[i]); + + /* Check if we have link intended for this lcore */ + if (link->lcore_id == lcore_id) { + + /* Cache the link */ + memcpy(&link_cache[index], link, single_link_size); + + /* Update index */ + index++; + } + } + + /* Update the links for application to use the cached links */ + *links = link_cache; + + /* Return the number of cached links */ + return lcore_nb_link; +} + static int eh_tx_adapter_configure(struct eventmode_conf *em_conf, struct tx_adapter_conf *adapter) @@ -833,10 +1100,8 @@ eh_tx_adapter_configure(struct eventmode_conf *em_conf, } /* - * TODO * Tx core will invoke the service when required. The runstate check * is not required. 
- * */ rte_service_set_runstate_mapped_check(service_id, 0); @@ -1246,6 +1511,80 @@ eh_devs_uninit(struct eh_conf *mode_conf) return 0; } +void +eh_launch_worker(struct eh_conf *mode_conf, + struct eh_app_worker_params *app_wrkr, uint8_t nb_wrkr_param) +{ + struct eh_app_worker_params *match_wrkr; + struct eh_event_link_info *links = NULL; + struct eventmode_conf *em_conf; + uint32_t lcore_id; + uint8_t nb_links; + + if (mode_conf == NULL) { + EH_LOG_ERR("Invalid conf"); + return; + } + + if (mode_conf->mode_params == NULL) { + EH_LOG_ERR("Invalid mode params"); + return; + } + + /* Get eventmode conf */ + em_conf = (struct eventmode_conf *)(mode_conf->mode_params); + + /* Get core ID */ + lcore_id = rte_lcore_id(); + + /* Check if this is eth core */ + if (rte_bitmap_get(em_conf->eth_core_mask, lcore_id)) { + eh_start_worker_eth_core(em_conf, lcore_id); + return; + } + + if (app_wrkr == NULL || nb_wrkr_param == 0) { + EH_LOG_ERR("Invalid args"); + return; + } + + /* + * This is a regular worker thread. The application would be + * registering multiple workers with various capabilities. The + * worker to be run will be selected by the capabilities of the + * event device configured. + */ + + /* Get the first matching worker for the event device */ + match_wrkr = eh_find_worker(lcore_id, mode_conf, app_wrkr, + nb_wrkr_param); + if (match_wrkr == NULL) { + EH_LOG_ERR( + "No matching worker registered for lcore %d", lcore_id); + goto clean_and_exit; + } + + /* Verify sanity of the matched worker */ + if (eh_verify_match_worker(match_wrkr) != 1) { + EH_LOG_ERR("Error in validating the matched worker\n"); + goto clean_and_exit; + } + + /* Get worker links */ + nb_links = eh_get_event_lcore_links(lcore_id, mode_conf, &links); + + /* Launch the worker thread */ + match_wrkr->worker_thread(mode_conf, links, nb_links); + + /* Free links info memory */ + free(links); + +clean_and_exit: + + /* Flag eth_cores to stop, if started */ + eh_stop_worker_eth_core(); +} + uint8_t eh_get_tx_queue(struct eh_conf *mode_conf, uint8_t eventdev_id) { diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index c7fdd8f..1d5a087 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -54,6 +54,9 @@ extern "C" { #define EVENT_MODE_MAX_LCORE_LINKS \ (EVENT_MODE_MAX_EVENT_DEVS * EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV) +/* Max adapters that one Rx core can handle */ +#define EVENT_MODE_MAX_ADAPTERS_PER_RX_CORE EVENT_MODE_MAX_RX_ADAPTERS + /* Max adapters that one Tx core can handle */ #define EVENT_MODE_MAX_ADAPTERS_PER_TX_CORE EVENT_MODE_MAX_TX_ADAPTERS @@ -65,6 +68,14 @@ enum eh_pkt_transfer_mode { EH_PKT_TRANSFER_MODE_EVENT, }; +/** + * Event mode packet rx types + */ +enum eh_rx_types { + EH_RX_TYPE_NON_BURST = 0, + EH_RX_TYPE_BURST +}; + /* Event dev params */ struct eventdev_params { uint8_t eventdev_id; @@ -175,6 +186,22 @@ struct eh_conf { /**< Mode specific parameters */ }; +/* Workers registered by the application */ +struct eh_app_worker_params { + union { + RTE_STD_C11 + struct { + uint64_t burst : 1; + /**< Specify status of rx type burst */ + }; + uint64_t u64; + } cap; + /**< Capabilities of this worker */ + void (*worker_thread)(struct eh_conf *mode_conf, + struct eh_event_link_info *links, uint8_t nb_links); + /**< Worker thread */ +}; + /** * Initialize event mode devices * @@ -242,6 +269,27 @@ eh_get_tx_queue(struct eh_conf *mode_conf, uint8_t eventdev_id); void eh_display_conf(struct eh_conf *mode_conf); + +/** + * Launch eventmode 
worker + * + * The application can request the eventmode helper subsystem to launch the + * worker based on the capabilities of event device and the options selected + * while initializing the eventmode. + * + * @param mode_conf + * Configuration of the mode in which app is doing packet handling + * @param app_wrkr + * List of all the workers registered by application, along with its + * capabilities + * @param nb_wrkr_param + * Number of workers passed by the application + * + */ +void +eh_launch_worker(struct eh_conf *mode_conf, + struct eh_app_worker_params *app_wrkr, uint8_t nb_wrkr_param); + #ifdef __cplusplus } #endif
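
As a closing note (not part of the patch): one way the application could
drive eh_launch_worker() from its lcores is sketched below. The app_wrkr()
worker, the app_wrkrs[] table and app_lcore_main() are hypothetical
application code; eh_launch_worker(), struct eh_app_worker_params and
EH_RX_TYPE_BURST come from this patch, and rte_eal_mp_remote_launch(),
RTE_DIM() and CALL_MASTER are the standard DPDK EAL facilities assumed here.

    #include <rte_common.h>
    #include <rte_launch.h>

    #include "event_helper.h"

    /* Hypothetical single-capability worker; a real application would
     * register one entry per capability combination it supports. */
    static void
    app_wrkr(struct eh_conf *conf, struct eh_event_link_info *links,
             uint8_t nb_links)
    {
            /* Event dequeue + packet processing loop */
            (void)conf; (void)links; (void)nb_links;
    }

    static struct eh_app_worker_params app_wrkrs[] = {
            { .cap.burst = EH_RX_TYPE_BURST, .worker_thread = app_wrkr },
    };

    static int
    app_lcore_main(void *args)
    {
            /* Lcores present in the eth core mask run the Rx/Tx adapter
             * services inside this call; every other lcore runs the matched
             * worker. The call blocks until the workers return and the eth
             * cores are flagged to stop. */
            eh_launch_worker(args, app_wrkrs, RTE_DIM(app_wrkrs));
            return 0;
    }

    /*
     * In main(), after eh_devs_init(conf) has succeeded:
     *
     *      rte_eal_mp_remote_launch(app_lcore_main, conf, CALL_MASTER);
     */
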