From patchwork Wed Oct 9 15:10:06 2019
X-Patchwork-Submitter: Anoob Joseph
X-Patchwork-Id: 60809
X-Patchwork-Delegate: gakhil@marvell.com
From: Anoob Joseph
To: Akhil Goyal, Radu Nicolau
CC: Anoob Joseph, Thomas Monjalon, Jerin Jacob, Narayana Prasad,
 Lukasz Bartosik
Date: Wed, 9 Oct 2019 20:40:06 +0530
Message-ID: <1570633816-4706-4-git-send-email-anoobj@marvell.com>
In-Reply-To: <1570633816-4706-1-git-send-email-anoobj@marvell.com>
References: <1570633816-4706-1-git-send-email-anoobj@marvell.com>
Subject: [dpdk-dev] [RFC PATCH 03/13] examples/ipsec-secgw: add Rx adapter
 support

Add Rx adapter support. The event helper init routine will initialize
the Rx adapter according to the configuration. If no Rx adapter config
is present, it will generate a default config: it checks the available
eth ports and event queues and maps them 1:1, so one eth port is
connected to one event queue. This way, the event queue ID can be used
to determine the port on which a packet arrived.
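As an illustration (not part of this patch): with the default 1:1 mapping, a
worker can recover the ingress port directly from the event. A minimal sketch
follows; rte_event_dequeue_burst() and the rte_event fields are standard DPDK
API, while the function and its parameters are hypothetical.

#include <rte_eventdev.h>
#include <rte_mbuf.h>

/*
 * Hypothetical worker loop (illustration only): with the default 1:1
 * eth port to event queue mapping, the event queue ID directly
 * identifies the eth port the packet arrived on.
 */
static void
worker_loop(uint8_t eventdev_id, uint8_t event_port_id)
{
	struct rte_event ev;

	while (1) {
		/* Dequeue one event; timeout of 0 polls non-blocking */
		if (rte_event_dequeue_burst(eventdev_id, event_port_id,
					    &ev, 1, 0) == 0)
			continue;

		/* Under the 1:1 mapping, queue_id == ingress eth port */
		uint16_t ingress_port = ev.queue_id;
		struct rte_mbuf *pkt = ev.mbuf;

		/* ... process pkt, knowing its ingress port ... */
		(void)ingress_port;
		(void)pkt;
	}
}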
Signed-off-by: Anoob Joseph
Signed-off-by: Lukasz Bartosik
---
 examples/ipsec-secgw/event_helper.c | 267 +++++++++++++++++++++++++++++++++++-
 examples/ipsec-secgw/event_helper.h |  29 ++++
 2 files changed, 294 insertions(+), 2 deletions(-)

diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c
index 3080625..b5417a0 100644
--- a/examples/ipsec-secgw/event_helper.c
+++ b/examples/ipsec-secgw/event_helper.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (C) 2019 Marvell International Ltd.
  */
-#include
+#include
 #include
 #include
 #include
@@ -9,6 +9,29 @@
 #include "event_helper.h"
 
 static inline unsigned int
+eh_get_next_rx_core(struct eventmode_conf *em_conf,
+		unsigned int prev_core)
+{
+	unsigned int next_core;
+
+get_next_core:
+	/* Get the next core */
+	next_core = rte_get_next_lcore(prev_core, 0, 0);
+
+	/* Check if we have reached max lcores */
+	if (next_core == RTE_MAX_LCORE)
+		return next_core;
+
+	/* Only some cores would be marked as rx cores. Skip others */
+	if (!(rte_bitmap_get(em_conf->eth_core_mask, next_core))) {
+		prev_core = next_core;
+		goto get_next_core;
+	}
+
+	return next_core;
+}
+
+static inline unsigned int
 eh_get_next_active_core(struct eventmode_conf *em_conf,
 		unsigned int prev_core)
 {
@@ -167,6 +190,87 @@ eh_set_default_conf_link(struct eventmode_conf *em_conf)
 }
 
 static int
+eh_set_default_conf_rx_adapter(struct eventmode_conf *em_conf)
+{
+	int nb_eth_dev;
+	int i;
+	int adapter_id;
+	int eventdev_id;
+	int conn_id;
+	struct rx_adapter_conf *adapter;
+	struct rx_adapter_connection_info *conn;
+	struct eventdev_params *eventdev_config;
+
+	/* Create one adapter with all eth queues mapped to event queues 1:1 */
+
+	if (em_conf->nb_eventdev == 0) {
+		EH_LOG_ERR("No event devs registered\n");
+		return -EINVAL;
+	}
+
+	/* Get the number of eth devs */
+	nb_eth_dev = rte_eth_dev_count_avail();
+
+	/* Use the first event dev */
+	eventdev_config = &(em_conf->eventdev_config[0]);
+
+	/* Get eventdev ID */
+	eventdev_id = eventdev_config->eventdev_id;
+	adapter_id = 0;
+
+	/* Get adapter conf */
+	adapter = &(em_conf->rx_adapter[adapter_id]);
+
+	/* Set adapter conf */
+	adapter->eventdev_id = eventdev_id;
+	adapter->adapter_id = adapter_id;
+	adapter->rx_core_id = eh_get_next_rx_core(em_conf, -1);
+
+	/*
+	 * All queues of one eth device (port) will be mapped to one event
+	 * queue. Each port will have an individual connection.
+	 *
+	 */
+
+	/* Make sure there are enough event queues for 1:1 mapping */
+	if (nb_eth_dev > eventdev_config->nb_eventqueue) {
+		EH_LOG_ERR("Not enough event queues for 1:1 mapping "
+				"[eth devs: %d, event queues: %d]\n",
+				nb_eth_dev,
+				eventdev_config->nb_eventqueue);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < nb_eth_dev; i++) {
+
+		/* Use only the ports enabled */
+		if ((em_conf->eth_portmask & (1 << i)) == 0)
+			continue;
+
+		/* Get the connection id */
+		conn_id = adapter->nb_connections;
+
+		/* Get the connection */
+		conn = &(adapter->conn[conn_id]);
+
+		/* Set 1:1 mapping between eth ports & event queues */
+		conn->ethdev_id = i;
+		conn->eventq_id = i;
+
+		/* Add all eth queues of one eth port to one event queue */
+		conn->ethdev_rx_qid = -1;
+
+		/* Update no of connections */
+		adapter->nb_connections++;
+
+	}
+
+	/* We have set up one adapter */
+	em_conf->nb_rx_adapter = 1;
+
+	return 0;
+}
+
+static int
 eh_validate_conf(struct eventmode_conf *em_conf)
 {
 	int ret;
@@ -196,6 +300,16 @@ eh_validate_conf(struct eventmode_conf *em_conf)
 		return ret;
 	}
 
+	/*
+	 * See if rx adapters are specified. Else generate a default conf
+	 * with one rx adapter and all eth queues mapped to event queues 1:1.
+	 */
+	if (em_conf->nb_rx_adapter == 0) {
+		ret = eh_set_default_conf_rx_adapter(em_conf);
+		if (ret != 0)
+			return ret;
+	}
+
 	return 0;
 }
 
@@ -353,6 +467,118 @@ eh_initialize_eventdev(struct eventmode_conf *em_conf)
 	return 0;
 }
 
+static int
+eh_rx_adapter_configure(struct eventmode_conf *em_conf,
+		struct rx_adapter_conf *adapter)
+{
+	int j;
+	int ret;
+	uint8_t eventdev_id;
+	uint32_t service_id;
+	struct rx_adapter_connection_info *conn;
+	struct rte_event_port_conf port_conf = {0};
+	struct rte_event_eth_rx_adapter_queue_conf queue_conf = {0};
+	struct rte_event_dev_info evdev_default_conf = {0};
+
+	/* Get event dev ID */
+	eventdev_id = adapter->eventdev_id;
+
+	/* Get default configuration of event dev */
+	ret = rte_event_dev_info_get(eventdev_id, &evdev_default_conf);
+	if (ret < 0) {
+		EH_LOG_ERR("Error in getting event device info [devID:%d]",
+				eventdev_id);
+		return ret;
+	}
+
+	/* Setup port conf */
+	port_conf.new_event_threshold = 1200;
+	port_conf.dequeue_depth =
+			evdev_default_conf.max_event_port_dequeue_depth;
+	port_conf.enqueue_depth =
+			evdev_default_conf.max_event_port_enqueue_depth;
+
+	/* Create Rx adapter */
+	ret = rte_event_eth_rx_adapter_create(adapter->adapter_id,
+			adapter->eventdev_id,
+			&port_conf);
+	if (ret < 0) {
+		EH_LOG_ERR("Error in rx adapter creation");
+		return ret;
+	}
+
+	/* Setup various connections in the adapter */
+
+	queue_conf.rx_queue_flags =
+			RTE_EVENT_ETH_RX_ADAPTER_QUEUE_FLOW_ID_VALID;
+
+	for (j = 0; j < adapter->nb_connections; j++) {
+		/* Get connection */
+		conn = &(adapter->conn[j]);
+
+		/* Setup queue conf */
+		queue_conf.ev.queue_id = conn->eventq_id;
+		queue_conf.ev.sched_type = em_conf->ext_params.sched_type;
+
+		/* Set flow ID as ethdev ID */
+		queue_conf.ev.flow_id = conn->ethdev_id;
+
+		/* Add queue to the adapter */
+		ret = rte_event_eth_rx_adapter_queue_add(
+				adapter->adapter_id,
+				conn->ethdev_id,
+				conn->ethdev_rx_qid,
+				&queue_conf);
+		if (ret < 0) {
+			EH_LOG_ERR("Error in adding eth queue in Rx adapter");
+			return ret;
+		}
+	}
+
+	/* Get the service ID used by rx adapter */
+	ret = rte_event_eth_rx_adapter_service_id_get(adapter->adapter_id,
+			&service_id);
+	if (ret != -ESRCH && ret != 0) {
+		EH_LOG_ERR("Error getting service ID used by Rx adapter");
+		return ret;
+	}
+
+	/*
+	 * TODO
+	 * Rx core will invoke the service when required. The runstate check
+	 * is not required.
+	 *
+	 */
+	rte_service_set_runstate_mapped_check(service_id, 0);
+
+	/* Start adapter */
+	ret = rte_event_eth_rx_adapter_start(adapter->adapter_id);
+	if (ret) {
+		EH_LOG_ERR("Error in starting rx adapter");
+		return ret;
+	}
+
+	return 0;
+}
+
+static int
+eh_initialize_rx_adapter(struct eventmode_conf *em_conf)
+{
+	int i, ret;
+	struct rx_adapter_conf *adapter;
+
+	/* Configure rx adapters */
+	for (i = 0; i < em_conf->nb_rx_adapter; i++) {
+		adapter = &(em_conf->rx_adapter[i]);
+		ret = eh_rx_adapter_configure(em_conf, adapter);
+		if (ret < 0) {
+			EH_LOG_ERR("Rx adapter configuration failed");
+			return ret;
+		}
+	}
+	return 0;
+}
+
 int32_t
 eh_devs_init(struct eh_conf *mode_conf)
 {
@@ -376,6 +602,9 @@ eh_devs_init(struct eh_conf *mode_conf)
 	/* Get eventmode conf */
 	em_conf = (struct eventmode_conf *)(mode_conf->mode_params);
 
+	/* Eventmode conf would need eth portmask */
+	em_conf->eth_portmask = mode_conf->eth_portmask;
+
 	/* Validate the conf requested */
 	if (eh_validate_conf(em_conf) != 0) {
 		EH_LOG_ERR("Failed while validating the conf requested");
@@ -397,6 +626,11 @@ eh_devs_init(struct eh_conf *mode_conf)
 	if (ret != 0)
 		return ret;
 
+	/* Setup Rx adapter */
+	ret = eh_initialize_rx_adapter(em_conf);
+	if (ret != 0)
+		return ret;
+
 	/* Start eth devices after setting up adapter */
 	RTE_ETH_FOREACH_DEV(portid) {
 
@@ -417,7 +651,7 @@
 int32_t
 eh_devs_uninit(struct eh_conf *mode_conf)
 {
-	int ret, i;
+	int ret, i, j;
 	uint16_t id;
 	struct eventmode_conf *em_conf;
 
@@ -437,6 +671,35 @@ eh_devs_uninit(struct eh_conf *mode_conf)
 	/* Get eventmode conf */
 	em_conf = (struct eventmode_conf *)(mode_conf->mode_params);
 
+	/* Stop and release rx adapters */
+	for (i = 0; i < em_conf->nb_rx_adapter; i++) {
+
+		id = em_conf->rx_adapter[i].adapter_id;
+		ret = rte_event_eth_rx_adapter_stop(id);
+		if (ret < 0) {
+			EH_LOG_ERR("Error stopping rx adapter %d", id);
+			return ret;
+		}
+
+		for (j = 0; j < em_conf->rx_adapter[i].nb_connections; j++) {
+
+			ret = rte_event_eth_rx_adapter_queue_del(id,
+				em_conf->rx_adapter[i].conn[j].ethdev_id, -1);
+			if (ret < 0) {
+				EH_LOG_ERR(
+					"Error deleting rx adapter queues %d",
+					id);
+				return ret;
+			}
+		}
+
+		ret = rte_event_eth_rx_adapter_free(id);
+		if (ret < 0) {
+			EH_LOG_ERR("Error freeing rx adapter %d", id);
+			return ret;
+		}
+	}
+
 	/* Stop and release event devices */
 	for (i = 0; i < em_conf->nb_eventdev; i++) {
 
diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h
index 052ff25..4233b42 100644
--- a/examples/ipsec-secgw/event_helper.h
+++ b/examples/ipsec-secgw/event_helper.h
@@ -20,6 +20,12 @@ extern "C" {
 /* Max event devices supported */
 #define EVENT_MODE_MAX_EVENT_DEVS RTE_EVENT_MAX_DEVS
 
+/* Max Rx adapters supported */
+#define EVENT_MODE_MAX_RX_ADAPTERS RTE_EVENT_MAX_DEVS
+
+/* Max Rx adapter connections */
+#define EVENT_MODE_MAX_CONNECTIONS_PER_ADAPTER 16
+
 /* Max event queues supported per event device */
 #define EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV RTE_EVENT_MAX_QUEUES_PER_DEV
 
@@ -57,12 +63,33 @@ struct eh_event_link_info {
 	/**< Lcore to be polling on this port */
 };
 
+/* Rx adapter connection info */
+struct rx_adapter_connection_info {
+	uint8_t ethdev_id;
+	uint8_t eventq_id;
+	int32_t ethdev_rx_qid;
+};
+
+/* Rx adapter conf */
+struct rx_adapter_conf {
+	int32_t eventdev_id;
+	int32_t adapter_id;
+	uint32_t rx_core_id;
+	uint8_t nb_connections;
+	struct rx_adapter_connection_info
+			conn[EVENT_MODE_MAX_CONNECTIONS_PER_ADAPTER];
+};
+
 /* Eventmode conf data */
 struct eventmode_conf {
 	int nb_eventdev;
 	/**< No of event devs */
 	struct eventdev_params eventdev_config[EVENT_MODE_MAX_EVENT_DEVS];
 	/**< Per event dev conf */
+	uint8_t nb_rx_adapter;
+	/**< No of Rx adapters */
+	struct rx_adapter_conf rx_adapter[EVENT_MODE_MAX_RX_ADAPTERS];
+	/**< Rx adapter conf */
 	uint8_t nb_link;
 	/**< No of links */
 	struct eh_event_link_info
@@ -70,6 +97,8 @@ struct eventmode_conf {
 	/**< Per link conf */
 	struct rte_bitmap *eth_core_mask;
 	/**< Core mask of cores to be used for software Rx and Tx */
+	uint32_t eth_portmask;
+	/**< Mask of the eth ports to be used */
 	union {
 		RTE_STD_C11
 		struct {
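
For context (not part of the patch): a rough sketch of how an application
might drive this helper. The eh_conf/eventmode_conf fields and eh_devs_init()
follow the code above; the wrapper function and its parameters are
illustrative assumptions.

/*
 * Hypothetical init path: populate the eventmode conf, or leave the Rx
 * adapter conf empty so that eh_set_default_conf_rx_adapter() builds
 * the default 1:1 mapping, then bring everything up.
 */
static int
app_event_mode_init(struct eh_conf *mode_conf, uint32_t portmask)
{
	struct eventmode_conf *em_conf =
			(struct eventmode_conf *)(mode_conf->mode_params);

	/* Ports the helper is allowed to connect to the Rx adapter */
	mode_conf->eth_portmask = portmask;

	/* nb_rx_adapter == 0 triggers default conf generation */
	em_conf->nb_rx_adapter = 0;

	/* Validates conf, sets up event devs & Rx adapter, starts eth devs */
	return eh_devs_init(mode_conf);
}

On shutdown, the matching call would be eh_devs_uninit(mode_conf), which, per
the patch, stops the adapters, deletes their queues, and frees them before
releasing the event devices.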