From patchwork Sat Sep 18 13:11:36 2021
X-Patchwork-Submitter: "Naga Harish K, S V"
X-Patchwork-Id: 99281
X-Patchwork-Delegate: jerinj@marvell.com
From: Naga Harish K S V
To: jerinj@marvell.com, jay.jayatheerthan@intel.com
Cc: dev@dpdk.org, Ganapati Kundapura
Date: Sat, 18 Sep 2021 08:11:36 -0500
Message-Id: <20210918131140.3543317-1-s.v.naga.harish.k@intel.com>
X-Mailer: git-send-email 2.23.0
Subject: [dpdk-dev] [PATCH v1 1/5] eventdev: rx_adapter: add support to configure event buffer size
List-Id: DPDK patches and discussions
Sender: "dev" <dev-bounces@dpdk.org>

Currently, the Rx event buffer is a static array with a default size
of 192 (6 * BATCH_SIZE).
A new ``rte_event_eth_rx_adapter_create2`` API is added; it takes a
``struct rte_event_eth_rx_adapter_params`` argument to configure the
event buffer size in addition to the other parameters. The event buffer
is then allocated dynamically at run time, with the requested size
aligned up to BATCH_SIZE plus 2 * BATCH_SIZE of headroom.

Signed-off-by: Naga Harish K S V
Signed-off-by: Ganapati Kundapura
---
 .../prog_guide/event_ethernet_rx_adapter.rst |  7 ++
 lib/eventdev/rte_event_eth_rx_adapter.c      | 87 +++++++++++++++++--
 lib/eventdev/rte_event_eth_rx_adapter.h      | 45 +++++++++-
 lib/eventdev/version.map                     |  2 +
 4 files changed, 133 insertions(+), 8 deletions(-)

diff --git a/doc/guides/prog_guide/event_ethernet_rx_adapter.rst b/doc/guides/prog_guide/event_ethernet_rx_adapter.rst
index 0780b6f711..cbf694c66b 100644
--- a/doc/guides/prog_guide/event_ethernet_rx_adapter.rst
+++ b/doc/guides/prog_guide/event_ethernet_rx_adapter.rst
@@ -62,6 +62,13 @@ service function and needs to create an event port for it. The callback is
 expected to fill the ``struct rte_event_eth_rx_adapter_conf structure``
 passed to it.
 
+If the application desires to control the event buffer size, it can use the
+``rte_event_eth_rx_adapter_create2()`` API. The event buffer size is
+specified using ``struct rte_event_eth_rx_adapter_params::event_buf_size``.
+The function is passed the event device to be associated with the adapter
+and the port configuration for the adapter to set up an event port if the
+adapter needs to use a service function.
+
 Adding Rx Queues to the Adapter Instance
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c
index f2dc69503d..f567a83223 100644
--- a/lib/eventdev/rte_event_eth_rx_adapter.c
+++ b/lib/eventdev/rte_event_eth_rx_adapter.c
@@ -82,7 +82,9 @@ struct rte_eth_event_enqueue_buffer {
	/* Count of events in this buffer */
	uint16_t count;
	/* Array of events in this buffer */
-	struct rte_event events[ETH_EVENT_BUFFER_SIZE];
+	struct rte_event *events;
+	/* Size of event buffer */
+	uint16_t events_size;
	/* Event enqueue happens from head */
	uint16_t head;
	/* New packets from rte_eth_rx_burst is enqued from tail */
@@ -919,7 +921,7 @@ rxa_buffer_mbufs(struct rte_event_eth_rx_adapter *rx_adapter,
		dropped = 0;
		nb_cb = dev_info->cb_fn(eth_dev_id, rx_queue_id,
				       buf->last |
-				       (RTE_DIM(buf->events) & ~buf->last_mask),
+				       (buf->events_size & ~buf->last_mask),
				       buf->count >= BATCH_SIZE ?
						buf->count - BATCH_SIZE : 0,
				       &buf->events[buf->tail],
@@ -945,7 +947,7 @@ rxa_pkt_buf_available(struct rte_eth_event_enqueue_buffer *buf)
	uint32_t nb_req = buf->tail + BATCH_SIZE;
 
	if (!buf->last) {
-		if (nb_req <= RTE_DIM(buf->events))
+		if (nb_req <= buf->events_size)
			return true;
 
		if (buf->head >= BATCH_SIZE) {
@@ -2164,12 +2166,15 @@ rxa_ctrl(uint8_t id, int start)
	return 0;
 }
 
-int
-rte_event_eth_rx_adapter_create_ext(uint8_t id, uint8_t dev_id,
-				rte_event_eth_rx_adapter_conf_cb conf_cb,
-				void *conf_arg)
+static int
+rxa_create(uint8_t id, uint8_t dev_id,
+	   struct rte_event_eth_rx_adapter_params *rxa_params,
+	   rte_event_eth_rx_adapter_conf_cb conf_cb,
+	   void *conf_arg)
 {
	struct rte_event_eth_rx_adapter *rx_adapter;
+	struct rte_eth_event_enqueue_buffer *buf;
+	struct rte_event *events;
	int ret;
	int socket_id;
	uint16_t i;
@@ -2184,6 +2189,7 @@ rte_event_eth_rx_adapter_create_ext(uint8_t id, uint8_t dev_id,
 
	RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+
	if (conf_cb == NULL)
		return -EINVAL;
 
@@ -2231,11 +2237,30 @@ rte_event_eth_rx_adapter_create_ext(uint8_t id, uint8_t dev_id,
		rte_free(rx_adapter);
		return -ENOMEM;
	}
+
	rte_spinlock_init(&rx_adapter->rx_lock);
+
	for (i = 0; i < RTE_MAX_ETHPORTS; i++)
		rx_adapter->eth_devices[i].dev = &rte_eth_devices[i];
 
+	/* Rx adapter event buffer allocation */
+	buf = &rx_adapter->event_enqueue_buffer;
+	buf->events_size = RTE_ALIGN(rxa_params->event_buf_size, BATCH_SIZE);
+
+	events = rte_zmalloc_socket(rx_adapter->mem_name,
+				    buf->events_size * sizeof(*events),
+				    0, socket_id);
+	if (events == NULL) {
+		RTE_EDEV_LOG_ERR("Failed to allocate memory for event buffer");
+		rte_free(rx_adapter->eth_devices);
+		rte_free(rx_adapter);
+		return -ENOMEM;
+	}
+
+	rx_adapter->event_enqueue_buffer.events = events;
+
	event_eth_rx_adapter[id] = rx_adapter;
+
	if (conf_cb == rxa_default_conf_cb)
		rx_adapter->default_cb_arg = 1;
	rte_eventdev_trace_eth_rx_adapter_create(id, dev_id, conf_cb,
@@ -2243,6 +2268,50 @@ rte_event_eth_rx_adapter_create_ext(uint8_t id, uint8_t dev_id,
	return 0;
 }
 
+int
+rte_event_eth_rx_adapter_create_ext(uint8_t id, uint8_t dev_id,
+				rte_event_eth_rx_adapter_conf_cb conf_cb,
+				void *conf_arg)
+{
+	struct rte_event_eth_rx_adapter_params rxa_params;
+
+	/* Event buffer with default size = 6*BATCH_SIZE */
+	rxa_params.event_buf_size = ETH_EVENT_BUFFER_SIZE;
+	return rxa_create(id, dev_id, &rxa_params, conf_cb, conf_arg);
+}
+
+int
+rte_event_eth_rx_adapter_create2(uint8_t id, uint8_t dev_id,
+			struct rte_event_eth_rx_adapter_params *rxa_params,
+			struct rte_event_port_conf *port_config)
+{
+	struct rte_event_port_conf *pc;
+	int ret;
+
+	if (port_config == NULL)
+		return -EINVAL;
+
+	if (rxa_params == NULL || rxa_params->event_buf_size == 0)
+		return -EINVAL;
+
+	pc = rte_malloc(NULL, sizeof(*pc), 0);
+	if (pc == NULL)
+		return -ENOMEM;
+
+	*pc = *port_config;
+
+	/* Event buffer size: requested size aligned up to BATCH_SIZE,
+	 * plus 2 * BATCH_SIZE of headroom
+	 */
+	rxa_params->event_buf_size =
+		RTE_ALIGN(rxa_params->event_buf_size, BATCH_SIZE);
+	rxa_params->event_buf_size += BATCH_SIZE + BATCH_SIZE;
+
+	ret = rxa_create(id, dev_id, rxa_params, rxa_default_conf_cb, pc);
+	if (ret)
+		rte_free(pc);
+
+	return ret;
+}
+
 int
 rte_event_eth_rx_adapter_create(uint8_t id, uint8_t dev_id,
		struct rte_event_port_conf *port_config)
@@ -2252,12 +2321,14 @@ rte_event_eth_rx_adapter_create(uint8_t id, uint8_t dev_id,
	if (port_config == NULL)
		return -EINVAL;
+
	RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
 
	pc = rte_malloc(NULL, sizeof(*pc), 0);
	if (pc == NULL)
		return -ENOMEM;
	*pc = *port_config;
+
	ret = rte_event_eth_rx_adapter_create_ext(id, dev_id,
			rxa_default_conf_cb, pc);
@@ -2286,6 +2357,7 @@ rte_event_eth_rx_adapter_free(uint8_t id)
	if (rx_adapter->default_cb_arg)
		rte_free(rx_adapter->conf_arg);
	rte_free(rx_adapter->eth_devices);
+	rte_free(rx_adapter->event_enqueue_buffer.events);
	rte_free(rx_adapter);
	event_eth_rx_adapter[id] = NULL;
@@ -2658,6 +2730,7 @@ rte_event_eth_rx_adapter_stats_get(uint8_t id,
	stats->rx_packets += dev_stats_sum.rx_packets;
	stats->rx_enq_count += dev_stats_sum.rx_enq_count;
+
	return 0;
 }
 
diff --git a/lib/eventdev/rte_event_eth_rx_adapter.h b/lib/eventdev/rte_event_eth_rx_adapter.h
index 3f8b362295..a1b5e0ed37 100644
--- a/lib/eventdev/rte_event_eth_rx_adapter.h
+++ b/lib/eventdev/rte_event_eth_rx_adapter.h
@@ -26,6 +26,7 @@
 * The ethernet Rx event adapter's functions are:
 *  - rte_event_eth_rx_adapter_create_ext()
 *  - rte_event_eth_rx_adapter_create()
+ *  - rte_event_eth_rx_adapter_create2()
 *  - rte_event_eth_rx_adapter_free()
 *  - rte_event_eth_rx_adapter_queue_add()
 *  - rte_event_eth_rx_adapter_queue_del()
@@ -36,7 +37,7 @@
 *
 * The application creates an ethernet to event adapter using
 * rte_event_eth_rx_adapter_create_ext() or rte_event_eth_rx_adapter_create()
- * functions.
+ * or rte_event_eth_rx_adapter_create2() functions.
 * The adapter needs to know which ethernet rx queues to poll for mbufs as well
 * as event device parameters such as the event queue identifier, event
 * priority and scheduling type that the adapter should use when constructing
@@ -256,6 +257,14 @@ struct rte_event_eth_rx_adapter_vector_limits {
	 */
 };
 
+/**
+ * A structure to hold adapter configuration parameters
+ */
+struct rte_event_eth_rx_adapter_params {
+	uint16_t event_buf_size;
+	/**< Size of the event buffer for the adapter */
+};
+
 /**
 *
 * Callback function invoked by the SW adapter before it continues
@@ -330,6 +339,40 @@ int rte_event_eth_rx_adapter_create_ext(uint8_t id, uint8_t dev_id,
				rte_event_eth_rx_adapter_conf_cb conf_cb,
				void *conf_arg);
 
+/**
+ * Create a new ethernet Rx event adapter with the specified identifier.
+ * This function allocates the Rx adapter event buffer with the size specified
+ * in rxa_params, aligned up to BATCH_SIZE plus 2 * BATCH_SIZE of headroom, and
+ * uses an internal configuration function that creates an event port.
+ * This default function reconfigures the event device with an
+ * additional event port and sets up the event port using the port config
+ * parameter passed into this function. In case the application needs more
+ * control over the configuration of the service, it should use the
+ * rte_event_eth_rx_adapter_create_ext() version.
+ *
+ * @param id
+ *  The identifier of the ethernet Rx event adapter.
+ *
+ * @param dev_id
+ *  The identifier of the event device to configure.
+ *
+ * @param rxa_params
+ *  Pointer to struct rte_event_eth_rx_adapter_params containing
+ *  the size of the Rx event buffer to allocate.
+ *
+ * @param port_config
+ *  Argument of type *rte_event_port_conf* that is passed to the conf_cb
+ *  function.
+ *
+ * @return
+ *  - 0: Success
+ *  - <0: Error code on failure
+ */
+__rte_experimental
+int rte_event_eth_rx_adapter_create2(uint8_t id, uint8_t dev_id,
+			struct rte_event_eth_rx_adapter_params *rxa_params,
+			struct rte_event_port_conf *port_config);
+
 /**
 * Create a new ethernet Rx event adapter with the specified identifier.
 * This function uses an internal configuration function that creates an event
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index cd86d2d908..868d352eb3 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -138,6 +138,8 @@ EXPERIMENTAL {
	__rte_eventdev_trace_port_setup;
	# added in 20.11
	rte_event_pmd_pci_probe_named;
+	# added in 21.11
+	rte_event_eth_rx_adapter_create2;
 
	#added in 21.05
	rte_event_vector_pool_create;