From patchwork Tue Sep 21 09:21:42 2021
X-Patchwork-Submitter: "Naga Harish K, S V"
X-Patchwork-Id: 99333
X-Patchwork-Delegate: jerinj@marvell.com
From: Naga Harish K S V
To: jerinj@marvell.com, jay.jayatheerthan@intel.com
Cc: dev@dpdk.org, Ganapati Kundapura
Date: Tue, 21 Sep 2021 04:21:42 -0500
Message-Id: <20210921092146.1778421-1-s.v.naga.harish.k@intel.com>
In-Reply-To: <20210918131140.3543317-1-s.v.naga.harish.k@intel.com>
References: <20210918131140.3543317-1-s.v.naga.harish.k@intel.com>
Subject: [dpdk-dev] [PATCH v2 1/5] eventdev/rx_adapter: add support to configure event buffer size

Currently, the Rx event buffer is a static array with a default size of
192 (6 * BATCH_SIZE).

The ``rte_event_eth_rx_adapter_create_with_params`` API is added; it takes
a ``struct rte_event_eth_rx_adapter_params`` argument to configure the
event buffer size in addition to the other parameters. The event buffer is
allocated from the heap after aligning the requested size to BATCH_SIZE and
adding 2 * BATCH_SIZE. If the params argument is NULL, the default event
buffer size is used.

Signed-off-by: Naga Harish K S V
Signed-off-by: Ganapati Kundapura
---
 .../prog_guide/event_ethernet_rx_adapter.rst |  7 ++
 lib/eventdev/rte_event_eth_rx_adapter.c      | 94 +++++++++++++++++--
 lib/eventdev/rte_event_eth_rx_adapter.h      | 40 +++++++-
 lib/eventdev/version.map                     |  2 +
 4 files changed, 135 insertions(+), 8 deletions(-)

diff --git a/doc/guides/prog_guide/event_ethernet_rx_adapter.rst b/doc/guides/prog_guide/event_ethernet_rx_adapter.rst
index 0780b6f711..dd753613bd 100644
--- a/doc/guides/prog_guide/event_ethernet_rx_adapter.rst
+++ b/doc/guides/prog_guide/event_ethernet_rx_adapter.rst
@@ -62,6 +62,13 @@ service function and needs to create an event port for it. The callback is
 expected to fill the ``struct rte_event_eth_rx_adapter_conf structure``
 passed to it.
 
+If the application desires to control the event buffer size, it can use the
+``rte_event_eth_rx_adapter_create_with_params()`` api. The event buffer size is
+specified using ``struct rte_event_eth_rx_adapter_params::event_buf_size``.
+The function is passed the event device to be associated with the adapter +and port configuration for the adapter to setup an event port if the +adapter needs to use a service function. + Adding Rx Queues to the Adapter Instance ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c index f2dc69503d..df1653b497 100644 --- a/lib/eventdev/rte_event_eth_rx_adapter.c +++ b/lib/eventdev/rte_event_eth_rx_adapter.c @@ -82,7 +82,9 @@ struct rte_eth_event_enqueue_buffer { /* Count of events in this buffer */ uint16_t count; /* Array of events in this buffer */ - struct rte_event events[ETH_EVENT_BUFFER_SIZE]; + struct rte_event *events; + /* size of event buffer */ + uint16_t events_size; /* Event enqueue happens from head */ uint16_t head; /* New packets from rte_eth_rx_burst is enqued from tail */ @@ -919,7 +921,7 @@ rxa_buffer_mbufs(struct rte_event_eth_rx_adapter *rx_adapter, dropped = 0; nb_cb = dev_info->cb_fn(eth_dev_id, rx_queue_id, buf->last | - (RTE_DIM(buf->events) & ~buf->last_mask), + (buf->events_size & ~buf->last_mask), buf->count >= BATCH_SIZE ? buf->count - BATCH_SIZE : 0, &buf->events[buf->tail], @@ -945,7 +947,7 @@ rxa_pkt_buf_available(struct rte_eth_event_enqueue_buffer *buf) uint32_t nb_req = buf->tail + BATCH_SIZE; if (!buf->last) { - if (nb_req <= RTE_DIM(buf->events)) + if (nb_req <= buf->events_size) return true; if (buf->head >= BATCH_SIZE) { @@ -2164,12 +2166,15 @@ rxa_ctrl(uint8_t id, int start) return 0; } -int -rte_event_eth_rx_adapter_create_ext(uint8_t id, uint8_t dev_id, - rte_event_eth_rx_adapter_conf_cb conf_cb, - void *conf_arg) +static int +rxa_create(uint8_t id, uint8_t dev_id, + struct rte_event_eth_rx_adapter_params *rxa_params, + rte_event_eth_rx_adapter_conf_cb conf_cb, + void *conf_arg) { struct rte_event_eth_rx_adapter *rx_adapter; + struct rte_eth_event_enqueue_buffer *buf; + struct rte_event *events; int ret; int socket_id; uint16_t i; @@ -2184,6 +2189,7 @@ rte_event_eth_rx_adapter_create_ext(uint8_t id, uint8_t dev_id, RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); + if (conf_cb == NULL) return -EINVAL; @@ -2231,11 +2237,30 @@ rte_event_eth_rx_adapter_create_ext(uint8_t id, uint8_t dev_id, rte_free(rx_adapter); return -ENOMEM; } + rte_spinlock_init(&rx_adapter->rx_lock); + for (i = 0; i < RTE_MAX_ETHPORTS; i++) rx_adapter->eth_devices[i].dev = &rte_eth_devices[i]; + /* Rx adapter event buffer allocation */ + buf = &rx_adapter->event_enqueue_buffer; + buf->events_size = RTE_ALIGN(rxa_params->event_buf_size, BATCH_SIZE); + + events = rte_zmalloc_socket(rx_adapter->mem_name, + buf->events_size * sizeof(*events), + 0, socket_id); + if (events == NULL) { + RTE_EDEV_LOG_ERR("Failed to allocate mem for event buffer\n"); + rte_free(rx_adapter->eth_devices); + rte_free(rx_adapter); + return -ENOMEM; + } + + rx_adapter->event_enqueue_buffer.events = events; + event_eth_rx_adapter[id] = rx_adapter; + if (conf_cb == rxa_default_conf_cb) rx_adapter->default_cb_arg = 1; rte_eventdev_trace_eth_rx_adapter_create(id, dev_id, conf_cb, @@ -2243,6 +2268,57 @@ rte_event_eth_rx_adapter_create_ext(uint8_t id, uint8_t dev_id, return 0; } +int +rte_event_eth_rx_adapter_create_ext(uint8_t id, uint8_t dev_id, + rte_event_eth_rx_adapter_conf_cb conf_cb, + void *conf_arg) +{ + struct rte_event_eth_rx_adapter_params rxa_params; + + /* Event buffer with default size = 6*BATCH_SIZE */ + rxa_params.event_buf_size = ETH_EVENT_BUFFER_SIZE; 
+ return rxa_create(id, dev_id, &rxa_params, conf_cb, conf_arg); +} + +int +rte_event_eth_rx_adapter_create_with_params(uint8_t id, uint8_t dev_id, + struct rte_event_port_conf *port_config, + struct rte_event_eth_rx_adapter_params *rxa_params) +{ + struct rte_event_port_conf *pc; + int ret; + struct rte_event_eth_rx_adapter_params temp_params = {0}; + + if (port_config == NULL) + return -EINVAL; + + /* use default values if rxa_parmas is NULL */ + if (rxa_params == NULL) { + rxa_params = &temp_params; + rxa_params->event_buf_size = ETH_EVENT_BUFFER_SIZE; + } + + if (rxa_params->event_buf_size == 0) + return -EINVAL; + + pc = rte_malloc(NULL, sizeof(*pc), 0); + if (pc == NULL) + return -ENOMEM; + + *pc = *port_config; + + /* event buff size aligned to BATCH_SIZE + 2*BATCH_SIZE */ + rxa_params->event_buf_size = RTE_ALIGN(rxa_params->event_buf_size, + BATCH_SIZE); + rxa_params->event_buf_size += BATCH_SIZE + BATCH_SIZE; + + ret = rxa_create(id, dev_id, rxa_params, rxa_default_conf_cb, pc); + if (ret) + rte_free(pc); + + return ret; +} + int rte_event_eth_rx_adapter_create(uint8_t id, uint8_t dev_id, struct rte_event_port_conf *port_config) @@ -2252,12 +2328,14 @@ rte_event_eth_rx_adapter_create(uint8_t id, uint8_t dev_id, if (port_config == NULL) return -EINVAL; + RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); pc = rte_malloc(NULL, sizeof(*pc), 0); if (pc == NULL) return -ENOMEM; *pc = *port_config; + ret = rte_event_eth_rx_adapter_create_ext(id, dev_id, rxa_default_conf_cb, pc); @@ -2286,6 +2364,7 @@ rte_event_eth_rx_adapter_free(uint8_t id) if (rx_adapter->default_cb_arg) rte_free(rx_adapter->conf_arg); rte_free(rx_adapter->eth_devices); + rte_free(rx_adapter->event_enqueue_buffer.events); rte_free(rx_adapter); event_eth_rx_adapter[id] = NULL; @@ -2658,6 +2737,7 @@ rte_event_eth_rx_adapter_stats_get(uint8_t id, stats->rx_packets += dev_stats_sum.rx_packets; stats->rx_enq_count += dev_stats_sum.rx_enq_count; + return 0; } diff --git a/lib/eventdev/rte_event_eth_rx_adapter.h b/lib/eventdev/rte_event_eth_rx_adapter.h index 3f8b362295..a7881097b4 100644 --- a/lib/eventdev/rte_event_eth_rx_adapter.h +++ b/lib/eventdev/rte_event_eth_rx_adapter.h @@ -26,6 +26,7 @@ * The ethernet Rx event adapter's functions are: * - rte_event_eth_rx_adapter_create_ext() * - rte_event_eth_rx_adapter_create() + * - rte_event_eth_rx_adapter_create_with_params() * - rte_event_eth_rx_adapter_free() * - rte_event_eth_rx_adapter_queue_add() * - rte_event_eth_rx_adapter_queue_del() @@ -36,7 +37,7 @@ * * The application creates an ethernet to event adapter using * rte_event_eth_rx_adapter_create_ext() or rte_event_eth_rx_adapter_create() - * functions. + * or rte_event_eth_rx_adapter_create_with_params() functions. * The adapter needs to know which ethernet rx queues to poll for mbufs as well * as event device parameters such as the event queue identifier, event * priority and scheduling type that the adapter should use when constructing @@ -256,6 +257,16 @@ struct rte_event_eth_rx_adapter_vector_limits { */ }; +/** + * A structure to hold adapter config params + */ +struct rte_event_eth_rx_adapter_params { + uint16_t event_buf_size; + /**< size of event buffer for the adapter. 
+	 * the size is aligned to BATCH_SIZE and added (2 * BATCH_SIZE)
+	 */
+};
+
 /**
  *
  * Callback function invoked by the SW adapter before it continues
@@ -356,6 +367,33 @@ int rte_event_eth_rx_adapter_create_ext(uint8_t id, uint8_t dev_id,
 int rte_event_eth_rx_adapter_create(uint8_t id, uint8_t dev_id,
 				struct rte_event_port_conf *port_config);
 
+/**
+ * This is a variant of rte_event_eth_rx_adapter_create() with additional
+ * adapter params specified in ``struct rte_event_eth_rx_adapter_params``.
+ *
+ * @param id
+ *  The identifier of the ethernet Rx event adapter.
+ *
+ * @param dev_id
+ *  The identifier of the event device to configure.
+ *
+ * @param port_config
+ *  Argument of type *rte_event_port_conf* that is passed to the conf_cb
+ *  function.
+ *
+ * @param rxa_params
+ *  Pointer to struct rte_event_eth_rx_adapter_params.
+ *  In case of NULL, default values are used.
+ *
+ * @return
+ *  - 0: Success
+ *  - <0: Error code on failure
+ */
+__rte_experimental
+int rte_event_eth_rx_adapter_create_with_params(uint8_t id, uint8_t dev_id,
+			struct rte_event_port_conf *port_config,
+			struct rte_event_eth_rx_adapter_params *rxa_params);
+
 /**
  * Free an event adapter
  *

diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index cd86d2d908..87586de879 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -138,6 +138,8 @@ EXPERIMENTAL {
 	__rte_eventdev_trace_port_setup;
 	# added in 20.11
 	rte_event_pmd_pci_probe_named;
+	# added in 21.11
+	rte_event_eth_rx_adapter_create_with_params;
 
 	#added in 21.05
 	rte_event_vector_pool_create;
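[Editor's note: a minimal usage sketch of the API added by this patch. The
adapter/device identifiers and the port configuration values are illustrative
assumptions, mirroring the unit test added in patch 2/5.]

    #include <rte_eventdev.h>
    #include <rte_event_eth_rx_adapter.h>

    /* Sketch: create an Rx adapter whose event buffer holds 1024 events.
     * Internally the adapter aligns the requested size to BATCH_SIZE and
     * adds 2 * BATCH_SIZE of headroom, as described in this patch.
     */
    static int
    create_adapter_with_buf_size(uint8_t id, uint8_t dev_id)
    {
        struct rte_event_port_conf port_conf = {
            .new_event_threshold = 1200,
            .dequeue_depth = 8,
            .enqueue_depth = 8,
        };
        struct rte_event_eth_rx_adapter_params rxa_params = {
            .event_buf_size = 1024,
        };

        /* A NULL rxa_params selects the default size of 192 events
         * (6 * BATCH_SIZE); an event_buf_size of 0 is rejected with
         * -EINVAL.
         */
        return rte_event_eth_rx_adapter_create_with_params(id, dev_id,
                        &port_conf, &rxa_params);
    }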
From patchwork Tue Sep 21 09:21:43 2021
X-Patchwork-Submitter: "Naga Harish K, S V"
X-Patchwork-Id: 99331
X-Patchwork-Delegate: jerinj@marvell.com
From: Naga Harish K S V
To: jerinj@marvell.com, jay.jayatheerthan@intel.com
Cc: dev@dpdk.org
Date: Tue, 21 Sep 2021 04:21:43 -0500
Message-Id: <20210921092146.1778421-2-s.v.naga.harish.k@intel.com>
In-Reply-To: <20210921092146.1778421-1-s.v.naga.harish.k@intel.com>
References: <20210918131140.3543317-1-s.v.naga.harish.k@intel.com> <20210921092146.1778421-1-s.v.naga.harish.k@intel.com>
Subject: [dpdk-dev] [PATCH v2 2/5] test/event: add unit test for event buffer size config api

This patch adds a unit test for the rte_event_eth_rx_adapter_create_with_params
API and validates all possible input combinations.

Signed-off-by: Naga Harish K S V
---
 app/test/test_event_eth_rx_adapter.c | 53 +++++++++++++++++++++++++---
 1 file changed, 49 insertions(+), 4 deletions(-)

diff --git a/app/test/test_event_eth_rx_adapter.c b/app/test/test_event_eth_rx_adapter.c
index add4d8a678..3c0f0ad7cc 100644
--- a/app/test/test_event_eth_rx_adapter.c
+++ b/app/test/test_event_eth_rx_adapter.c
@@ -428,6 +428,50 @@ adapter_create_free(void)
 	return TEST_SUCCESS;
 }
 
+static int
+adapter_create_free_v2(void)
+{
+	int err;
+
+	struct rte_event_port_conf rx_p_conf = {
+		.dequeue_depth = 8,
+		.enqueue_depth = 8,
+		.new_event_threshold = 1200,
+	};
+
+	struct rte_event_eth_rx_adapter_params rxa_params = {
+		.event_buf_size = 1024
+	};
+
+	err = rte_event_eth_rx_adapter_create_with_params(TEST_INST_ID,
+				TEST_DEV_ID, NULL, NULL);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	err = rte_event_eth_rx_adapter_create_with_params(TEST_INST_ID,
+				TEST_DEV_ID, &rx_p_conf, &rxa_params);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_rx_adapter_create_with_params(TEST_INST_ID,
+				TEST_DEV_ID, &rx_p_conf, &rxa_params);
+	TEST_ASSERT(err == -EEXIST, "Expected -EEXIST %d got %d", -EEXIST, err);
+
+	rxa_params.event_buf_size = 0;
+	err = rte_event_eth_rx_adapter_create_with_params(TEST_INST_ID,
+				TEST_DEV_ID, &rx_p_conf, &rxa_params);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	err = rte_event_eth_rx_adapter_free(TEST_INST_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_rx_adapter_free(TEST_INST_ID);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL %d got %d", -EINVAL, err);
+
+	err = rte_event_eth_rx_adapter_free(1);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL %d got %d", -EINVAL, err);
+
+	return TEST_SUCCESS;
+}
+
 static int
 adapter_queue_add_del(void)
 {
@@ -435,7 +479,7 @@ adapter_queue_add_del(void)
 	struct rte_event ev;
 	uint32_t cap;
 
-	struct rte_event_eth_rx_adapter_queue_conf queue_config;
+	struct rte_event_eth_rx_adapter_queue_conf queue_config = {0};
 
 	err = rte_event_eth_rx_adapter_caps_get(TEST_DEV_ID, TEST_ETHDEV_ID,
 					&cap);
@@ -523,7 +567,7 @@ adapter_multi_eth_add_del(void)
 	uint16_t port_index, port_index_base, drv_id = 0;
 	char driver_name[50];
 
-	struct rte_event_eth_rx_adapter_queue_conf queue_config;
+	struct rte_event_eth_rx_adapter_queue_conf queue_config = {0};
 
 	ev.queue_id = 0;
 	ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
@@ -594,7 +638,7 @@ adapter_intr_queue_add_del(void)
 	struct rte_event ev;
 	uint32_t cap;
 	uint16_t eth_port;
-	struct rte_event_eth_rx_adapter_queue_conf queue_config;
+	struct rte_event_eth_rx_adapter_queue_conf queue_config = {0};
 
 	if (!default_params.rx_intr_port_inited)
 		return 0;
@@ -687,7 +731,7 @@ adapter_start_stop(void)
 	ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
 	ev.priority = 0;
 
-	struct rte_event_eth_rx_adapter_queue_conf queue_config;
+	struct rte_event_eth_rx_adapter_queue_conf queue_config = {0};
 
 	queue_config.rx_queue_flags = 0;
 	if (default_params.caps &
@@ -753,6 +797,7 @@ static struct unit_test_suite event_eth_rx_tests = {
 	.teardown = testsuite_teardown,
 	.unit_test_cases = {
		TEST_CASE_ST(NULL, NULL, adapter_create_free),
+		TEST_CASE_ST(NULL, NULL, adapter_create_free_v2),
 		TEST_CASE_ST(adapter_create, adapter_free,
 			adapter_queue_add_del),
 		TEST_CASE_ST(adapter_create, adapter_free,
From patchwork Tue Sep 21 09:21:44 2021
X-Patchwork-Submitter: "Naga Harish K, S V"
X-Patchwork-Id: 99332
X-Patchwork-Delegate: jerinj@marvell.com
From: Naga Harish K S V
To: jerinj@marvell.com, jay.jayatheerthan@intel.com
Cc: dev@dpdk.org
Date: Tue, 21 Sep 2021 04:21:44 -0500
Message-Id: <20210921092146.1778421-3-s.v.naga.harish.k@intel.com>
In-Reply-To: <20210921092146.1778421-1-s.v.naga.harish.k@intel.com>
References: <20210918131140.3543317-1-s.v.naga.harish.k@intel.com> <20210921092146.1778421-1-s.v.naga.harish.k@intel.com>
Subject: [dpdk-dev] [PATCH v2 3/5] eventdev/rx_adapter: add per queue event buffer configure support

To configure the per-queue event buffer size, the application sets the
``rte_event_eth_rx_adapter_params::use_queue_event_buf`` flag to true and
passes it to the ``rte_event_eth_rx_adapter_create_with_params`` API. The
per-queue event buffer size is populated in
``rte_event_eth_rx_adapter_queue_conf::event_buf_size`` and passed to the
``rte_event_eth_rx_adapter_queue_add`` API.

Signed-off-by: Naga Harish K S V
---
 .../prog_guide/event_ethernet_rx_adapter.rst | 19 ++++++++++++-------
 lib/eventdev/rte_event_eth_rx_adapter.h      |  4 ++++
 2 files changed, 16 insertions(+), 7 deletions(-)

diff --git a/doc/guides/prog_guide/event_ethernet_rx_adapter.rst b/doc/guides/prog_guide/event_ethernet_rx_adapter.rst
index dd753613bd..333e6f8192 100644
--- a/doc/guides/prog_guide/event_ethernet_rx_adapter.rst
+++ b/doc/guides/prog_guide/event_ethernet_rx_adapter.rst
@@ -62,12 +62,14 @@ service function and needs to create an event port for it. The callback is
 expected to fill the ``struct rte_event_eth_rx_adapter_conf structure``
 passed to it.
 
-If the application desires to control the event buffer size, it can use the
-``rte_event_eth_rx_adapter_create_with_params()`` api. The event buffer size is
-specified using ``struct rte_event_eth_rx_adapter_params::event_buf_size``.
-The function is passed the event device to be associated with the adapter
-and port configuration for the adapter to setup an event port if the
-adapter needs to use a service function.
+If the application desires to control the event buffer size at adapter level,
+it can use the ``rte_event_eth_rx_adapter_create_with_params()`` api. The event
+buffer size is specified using ``struct rte_event_eth_rx_adapter_params::
+event_buf_size``. To configure the event buffer size at queue level, the boolean
+flag ``struct rte_event_eth_rx_adapter_params::use_queue_event_buf`` need to be
+set to true. The function is passed the event device to be associated with
+the adapter and port configuration for the adapter to setup an event port
+if the adapter needs to use a service function.
 
 Adding Rx Queues to the Adapter Instance
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -79,7 +81,9 @@ parameter. Event information for packets from this Rx queue is encoded in the
 ``ev`` field of ``struct rte_event_eth_rx_adapter_queue_conf``. The
 servicing_weight member of the struct rte_event_eth_rx_adapter_queue_conf
 is the relative polling frequency of the Rx queue and is applicable when the
-adapter uses a service core function.
+adapter uses a service core function. The applications can configure queue
+event buffer size in ``struct rte_event_eth_rx_adapter_queue_conf::event_buf_size``
+parameter.
 
 .. code-block:: c
 
@@ -90,6 +94,7 @@ adapter uses a service core function.
         queue_config.rx_queue_flags = 0;
         queue_config.ev = ev;
         queue_config.servicing_weight = 1;
+        queue_config.event_buf_size = 1024;
 
         err = rte_event_eth_rx_adapter_queue_add(id,
                                                  eth_dev_id,

diff --git a/lib/eventdev/rte_event_eth_rx_adapter.h b/lib/eventdev/rte_event_eth_rx_adapter.h
index a7881097b4..b9f0563244 100644
--- a/lib/eventdev/rte_event_eth_rx_adapter.h
+++ b/lib/eventdev/rte_event_eth_rx_adapter.h
@@ -199,6 +199,8 @@ struct rte_event_eth_rx_adapter_queue_conf {
 	 * Valid when RTE_EVENT_ETH_RX_ADAPTER_QUEUE_EVENT_VECTOR flag is set in
 	 * @see rte_event_eth_rx_adapter_queue_conf::rx_queue_flags.
 	 */
+	uint16_t event_buf_size;
+	/**< event buffer size for this queue */
 };
 
 /**
@@ -265,6 +267,8 @@ struct rte_event_eth_rx_adapter_params {
 	/**< size of event buffer for the adapter.
 	 * the size is aligned to BATCH_SIZE and added (2 * BATCH_SIZE)
 	 */
+	bool use_queue_event_buf;
+	/**< flag to indicate that event buffer is separate for each queue */
 };
 
 /**

From patchwork Tue Sep 21 09:21:45 2021
X-Patchwork-Submitter: "Naga Harish K, S V"
X-Patchwork-Id: 99334
X-Patchwork-Delegate: jerinj@marvell.com
From: Naga Harish K S V
To: jerinj@marvell.com, jay.jayatheerthan@intel.com
Cc: dev@dpdk.org
Date: Tue, 21 Sep 2021 04:21:45 -0500
Message-Id: <20210921092146.1778421-4-s.v.naga.harish.k@intel.com>
In-Reply-To: <20210921092146.1778421-1-s.v.naga.harish.k@intel.com>
References: <20210918131140.3543317-1-s.v.naga.harish.k@intel.com> <20210921092146.1778421-1-s.v.naga.harish.k@intel.com>
Subject: [dpdk-dev] [PATCH v2 4/5] eventdev/rx_adapter: implement per queue event buffer

This patch implements the per-queue event buffer after the required
validations.
Signed-off-by: Naga Harish K S V --- lib/eventdev/rte_event_eth_rx_adapter.c | 188 ++++++++++++++++++------ 1 file changed, 139 insertions(+), 49 deletions(-) diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c index df1653b497..20ea440275 100644 --- a/lib/eventdev/rte_event_eth_rx_adapter.c +++ b/lib/eventdev/rte_event_eth_rx_adapter.c @@ -99,10 +99,12 @@ struct rte_event_eth_rx_adapter { uint8_t rss_key_be[RSS_KEY_SIZE]; /* Event device identifier */ uint8_t eventdev_id; - /* Per ethernet device structure */ - struct eth_device_info *eth_devices; /* Event port identifier */ uint8_t event_port_id; + /* Flag indicating per rxq event buffer */ + bool use_queue_event_buf; + /* Per ethernet device structure */ + struct eth_device_info *eth_devices; /* Lock to serialize config updates with service function */ rte_spinlock_t rx_lock; /* Max mbufs processed in any service function invocation */ @@ -238,6 +240,7 @@ struct eth_rx_queue_info { uint32_t flow_id_mask; /* Set to ~0 if app provides flow id else 0 */ uint64_t event; struct eth_rx_vector_data vector_data; + struct rte_eth_event_enqueue_buffer *event_buf; }; static struct rte_event_eth_rx_adapter **event_eth_rx_adapter; @@ -753,10 +756,9 @@ rxa_enq_block_end_ts(struct rte_event_eth_rx_adapter *rx_adapter, /* Enqueue buffered events to event device */ static inline uint16_t -rxa_flush_event_buffer(struct rte_event_eth_rx_adapter *rx_adapter) +rxa_flush_event_buffer(struct rte_event_eth_rx_adapter *rx_adapter, + struct rte_eth_event_enqueue_buffer *buf) { - struct rte_eth_event_enqueue_buffer *buf = - &rx_adapter->event_enqueue_buffer; struct rte_event_eth_rx_adapter_stats *stats = &rx_adapter->stats; uint16_t count = buf->last ? buf->last - buf->head : buf->count; @@ -874,15 +876,14 @@ rxa_buffer_mbufs(struct rte_event_eth_rx_adapter *rx_adapter, uint16_t eth_dev_id, uint16_t rx_queue_id, struct rte_mbuf **mbufs, - uint16_t num) + uint16_t num, + struct rte_eth_event_enqueue_buffer *buf) { uint32_t i; struct eth_device_info *dev_info = &rx_adapter->eth_devices[eth_dev_id]; struct eth_rx_queue_info *eth_rx_queue_info = &dev_info->rx_queue[rx_queue_id]; - struct rte_eth_event_enqueue_buffer *buf = - &rx_adapter->event_enqueue_buffer; uint16_t new_tail = buf->tail; uint64_t event = eth_rx_queue_info->event; uint32_t flow_id_mask = eth_rx_queue_info->flow_id_mask; @@ -968,11 +969,10 @@ rxa_eth_rx(struct rte_event_eth_rx_adapter *rx_adapter, uint16_t queue_id, uint32_t rx_count, uint32_t max_rx, - int *rxq_empty) + int *rxq_empty, + struct rte_eth_event_enqueue_buffer *buf) { struct rte_mbuf *mbufs[BATCH_SIZE]; - struct rte_eth_event_enqueue_buffer *buf = - &rx_adapter->event_enqueue_buffer; struct rte_event_eth_rx_adapter_stats *stats = &rx_adapter->stats; uint16_t n; @@ -985,7 +985,7 @@ rxa_eth_rx(struct rte_event_eth_rx_adapter *rx_adapter, */ while (rxa_pkt_buf_available(buf)) { if (buf->count >= BATCH_SIZE) - rxa_flush_event_buffer(rx_adapter); + rxa_flush_event_buffer(rx_adapter, buf); stats->rx_poll_count++; n = rte_eth_rx_burst(port_id, queue_id, mbufs, BATCH_SIZE); @@ -994,14 +994,14 @@ rxa_eth_rx(struct rte_event_eth_rx_adapter *rx_adapter, *rxq_empty = 1; break; } - rxa_buffer_mbufs(rx_adapter, port_id, queue_id, mbufs, n); + rxa_buffer_mbufs(rx_adapter, port_id, queue_id, mbufs, n, buf); nb_rx += n; if (rx_count + nb_rx > max_rx) break; } if (buf->count > 0) - rxa_flush_event_buffer(rx_adapter); + rxa_flush_event_buffer(rx_adapter, buf); return nb_rx; } @@ -1142,7 +1142,7 @@ 
rxa_intr_ring_dequeue(struct rte_event_eth_rx_adapter *rx_adapter) ring_lock = &rx_adapter->intr_ring_lock; if (buf->count >= BATCH_SIZE) - rxa_flush_event_buffer(rx_adapter); + rxa_flush_event_buffer(rx_adapter, buf); while (rxa_pkt_buf_available(buf)) { struct eth_device_info *dev_info; @@ -1194,7 +1194,7 @@ rxa_intr_ring_dequeue(struct rte_event_eth_rx_adapter *rx_adapter) continue; n = rxa_eth_rx(rx_adapter, port, i, nb_rx, rx_adapter->max_nb_rx, - &rxq_empty); + &rxq_empty, buf); nb_rx += n; enq_buffer_full = !rxq_empty && n == 0; @@ -1215,7 +1215,7 @@ rxa_intr_ring_dequeue(struct rte_event_eth_rx_adapter *rx_adapter) } else { n = rxa_eth_rx(rx_adapter, port, queue, nb_rx, rx_adapter->max_nb_rx, - &rxq_empty); + &rxq_empty, buf); rx_adapter->qd_valid = !rxq_empty; nb_rx += n; if (nb_rx > rx_adapter->max_nb_rx) @@ -1246,13 +1246,12 @@ rxa_poll(struct rte_event_eth_rx_adapter *rx_adapter) { uint32_t num_queue; uint32_t nb_rx = 0; - struct rte_eth_event_enqueue_buffer *buf; + struct rte_eth_event_enqueue_buffer *buf = NULL; uint32_t wrr_pos; uint32_t max_nb_rx; wrr_pos = rx_adapter->wrr_pos; max_nb_rx = rx_adapter->max_nb_rx; - buf = &rx_adapter->event_enqueue_buffer; /* Iterate through a WRR sequence */ for (num_queue = 0; num_queue < rx_adapter->wrr_len; num_queue++) { @@ -1260,24 +1259,36 @@ rxa_poll(struct rte_event_eth_rx_adapter *rx_adapter) uint16_t qid = rx_adapter->eth_rx_poll[poll_idx].eth_rx_qid; uint16_t d = rx_adapter->eth_rx_poll[poll_idx].eth_dev_id; + if (rx_adapter->use_queue_event_buf) { + struct eth_device_info *dev_info = + &rx_adapter->eth_devices[d]; + buf = dev_info->rx_queue[qid].event_buf; + } else + buf = &rx_adapter->event_enqueue_buffer; + /* Don't do a batch dequeue from the rx queue if there isn't * enough space in the enqueue buffer. 
*/ if (buf->count >= BATCH_SIZE) - rxa_flush_event_buffer(rx_adapter); + rxa_flush_event_buffer(rx_adapter, buf); if (!rxa_pkt_buf_available(buf)) { - rx_adapter->wrr_pos = wrr_pos; - return nb_rx; + if (rx_adapter->use_queue_event_buf) + goto poll_next_entry; + else { + rx_adapter->wrr_pos = wrr_pos; + return nb_rx; + } } nb_rx += rxa_eth_rx(rx_adapter, d, qid, nb_rx, max_nb_rx, - NULL); + NULL, buf); if (nb_rx > max_nb_rx) { rx_adapter->wrr_pos = (wrr_pos + 1) % rx_adapter->wrr_len; break; } +poll_next_entry: if (++wrr_pos == rx_adapter->wrr_len) wrr_pos = 0; } @@ -1288,12 +1299,18 @@ static void rxa_vector_expire(struct eth_rx_vector_data *vec, void *arg) { struct rte_event_eth_rx_adapter *rx_adapter = arg; - struct rte_eth_event_enqueue_buffer *buf = - &rx_adapter->event_enqueue_buffer; + struct rte_eth_event_enqueue_buffer *buf = NULL; struct rte_event *ev; + if (rx_adapter->use_queue_event_buf) { + struct eth_device_info *dev_info = + &rx_adapter->eth_devices[vec->port]; + buf = dev_info->rx_queue[vec->queue].event_buf; + } else + buf = &rx_adapter->event_enqueue_buffer; + if (buf->count) - rxa_flush_event_buffer(rx_adapter); + rxa_flush_event_buffer(rx_adapter, buf); if (vec->vector_ev->nb_elem == 0) return; @@ -1905,9 +1922,16 @@ rxa_sw_del(struct rte_event_eth_rx_adapter *rx_adapter, rx_adapter->num_rx_intr -= intrq; dev_info->nb_rx_intr -= intrq; dev_info->nb_shared_intr -= intrq && sintrq; + if (rx_adapter->use_queue_event_buf) { + struct rte_eth_event_enqueue_buffer *event_buf = + dev_info->rx_queue[rx_queue_id].event_buf; + rte_free(event_buf->events); + rte_free(event_buf); + dev_info->rx_queue[rx_queue_id].event_buf = NULL; + } } -static void +static int rxa_add_queue(struct rte_event_eth_rx_adapter *rx_adapter, struct eth_device_info *dev_info, int32_t rx_queue_id, @@ -1919,15 +1943,21 @@ rxa_add_queue(struct rte_event_eth_rx_adapter *rx_adapter, int intrq; int sintrq; struct rte_event *qi_ev; + struct rte_eth_event_enqueue_buffer *new_rx_buf = NULL; + uint16_t eth_dev_id = dev_info->dev->data->port_id; + int ret; if (rx_queue_id == -1) { uint16_t nb_rx_queues; uint16_t i; nb_rx_queues = dev_info->dev->data->nb_rx_queues; - for (i = 0; i < nb_rx_queues; i++) - rxa_add_queue(rx_adapter, dev_info, i, conf); - return; + for (i = 0; i < nb_rx_queues; i++) { + ret = rxa_add_queue(rx_adapter, dev_info, i, conf); + if (ret) + return ret; + } + return 0; } pollq = rxa_polled_queue(dev_info, rx_queue_id); @@ -1990,6 +2020,37 @@ rxa_add_queue(struct rte_event_eth_rx_adapter *rx_adapter, dev_info->next_q_idx = 0; } } + + if (!rx_adapter->use_queue_event_buf) + return 0; + + new_rx_buf = rte_zmalloc_socket("rx_buffer_meta", + sizeof(*new_rx_buf), 0, + rte_eth_dev_socket_id(eth_dev_id)); + if (new_rx_buf == NULL) { + RTE_EDEV_LOG_ERR("Failed to allocate event buffer meta for " + "dev_id: %d queue_id: %d", + eth_dev_id, rx_queue_id); + return -ENOMEM; + } + + new_rx_buf->events_size = RTE_ALIGN(conf->event_buf_size, BATCH_SIZE); + new_rx_buf->events_size += (2 * BATCH_SIZE); + new_rx_buf->events = rte_zmalloc_socket("rx_buffer", + sizeof(struct rte_event) * + new_rx_buf->events_size, 0, + rte_eth_dev_socket_id(eth_dev_id)); + if (new_rx_buf->events == NULL) { + rte_free(new_rx_buf); + RTE_EDEV_LOG_ERR("Failed to allocate event buffer for " + "dev_id: %d queue_id: %d", + eth_dev_id, rx_queue_id); + return -ENOMEM; + } + + queue_info->event_buf = new_rx_buf; + + return 0; } static int rxa_sw_add(struct rte_event_eth_rx_adapter *rx_adapter, @@ -2018,6 +2079,16 @@ static int 
rxa_sw_add(struct rte_event_eth_rx_adapter *rx_adapter, temp_conf.servicing_weight = 1; } queue_conf = &temp_conf; + + if (queue_conf->servicing_weight == 0 && + rx_adapter->use_queue_event_buf) { + + RTE_EDEV_LOG_ERR("Use of queue level event buffer " + "not supported for interrupt queues " + "dev_id: %d queue_id: %d", + eth_dev_id, rx_queue_id); + return -EINVAL; + } } nb_rx_queues = dev_info->dev->data->nb_rx_queues; @@ -2097,7 +2168,9 @@ static int rxa_sw_add(struct rte_event_eth_rx_adapter *rx_adapter, - rxa_add_queue(rx_adapter, dev_info, rx_queue_id, queue_conf); + ret = rxa_add_queue(rx_adapter, dev_info, rx_queue_id, queue_conf); + if (ret) + goto err_free_rxqueue; rxa_calc_wrr_sequence(rx_adapter, rx_poll, rx_wrr); rte_free(rx_adapter->eth_rx_poll); @@ -2118,7 +2191,7 @@ static int rxa_sw_add(struct rte_event_eth_rx_adapter *rx_adapter, rte_free(rx_poll); rte_free(rx_wrr); - return 0; + return ret; } static int @@ -2244,20 +2317,26 @@ rxa_create(uint8_t id, uint8_t dev_id, rx_adapter->eth_devices[i].dev = &rte_eth_devices[i]; /* Rx adapter event buffer allocation */ - buf = &rx_adapter->event_enqueue_buffer; - buf->events_size = RTE_ALIGN(rxa_params->event_buf_size, BATCH_SIZE); - - events = rte_zmalloc_socket(rx_adapter->mem_name, - buf->events_size * sizeof(*events), - 0, socket_id); - if (events == NULL) { - RTE_EDEV_LOG_ERR("Failed to allocate mem for event buffer\n"); - rte_free(rx_adapter->eth_devices); - rte_free(rx_adapter); - return -ENOMEM; - } + rx_adapter->use_queue_event_buf = rxa_params->use_queue_event_buf; + + if (!rx_adapter->use_queue_event_buf) { + buf = &rx_adapter->event_enqueue_buffer; + buf->events_size = RTE_ALIGN(rxa_params->event_buf_size, + BATCH_SIZE); + + events = rte_zmalloc_socket(rx_adapter->mem_name, + buf->events_size * sizeof(*events), + 0, socket_id); + if (events == NULL) { + RTE_EDEV_LOG_ERR("Failed to allocate memory " + "for adapter event buffer"); + rte_free(rx_adapter->eth_devices); + rte_free(rx_adapter); + return -ENOMEM; + } - rx_adapter->event_enqueue_buffer.events = events; + rx_adapter->event_enqueue_buffer.events = events; + } event_eth_rx_adapter[id] = rx_adapter; @@ -2277,6 +2356,8 @@ rte_event_eth_rx_adapter_create_ext(uint8_t id, uint8_t dev_id, /* Event buffer with default size = 6*BATCH_SIZE */ rxa_params.event_buf_size = ETH_EVENT_BUFFER_SIZE; + rxa_params.use_queue_event_buf = false; + return rxa_create(id, dev_id, &rxa_params, conf_cb, conf_arg); } @@ -2296,9 +2377,9 @@ rte_event_eth_rx_adapter_create_with_params(uint8_t id, uint8_t dev_id, if (rxa_params == NULL) { rxa_params = &temp_params; rxa_params->event_buf_size = ETH_EVENT_BUFFER_SIZE; - } - - if (rxa_params->event_buf_size == 0) + rxa_params->use_queue_event_buf = false; + } else if ((!rxa_params->use_queue_event_buf && + rxa_params->event_buf_size == 0)) return -EINVAL; pc = rte_malloc(NULL, sizeof(*pc), 0); @@ -2364,7 +2445,8 @@ rte_event_eth_rx_adapter_free(uint8_t id) if (rx_adapter->default_cb_arg) rte_free(rx_adapter->conf_arg); rte_free(rx_adapter->eth_devices); - rte_free(rx_adapter->event_enqueue_buffer.events); + if (!rx_adapter->use_queue_event_buf) + rte_free(rx_adapter->event_enqueue_buffer.events); rte_free(rx_adapter); event_eth_rx_adapter[id] = NULL; @@ -2468,6 +2550,14 @@ rte_event_eth_rx_adapter_queue_add(uint8_t id, return -EINVAL; } + if ((rx_adapter->use_queue_event_buf && + queue_conf->event_buf_size == 0) || + (!rx_adapter->use_queue_event_buf && + queue_conf->event_buf_size != 0)) { + RTE_EDEV_LOG_ERR("Invalid Event buffer size for the 
queue"); + return -EINVAL; + } + dev_info = &rx_adapter->eth_devices[eth_dev_id]; if (cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT) { From patchwork Tue Sep 21 09:21:46 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Naga Harish K, S V" X-Patchwork-Id: 99335 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 6F128A0C4C; Tue, 21 Sep 2021 11:22:26 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 9EFFB40E5A; Tue, 21 Sep 2021 11:22:15 +0200 (CEST) Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by mails.dpdk.org (Postfix) with ESMTP id EB9684111E for ; Tue, 21 Sep 2021 11:22:13 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10113"; a="223349452" X-IronPort-AV: E=Sophos;i="5.85,310,1624345200"; d="scan'208";a="223349452" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Sep 2021 02:22:13 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.85,310,1624345200"; d="scan'208";a="549412026" Received: from txandevlnx322.an.intel.com ([10.123.117.44]) by FMSMGA003.fm.intel.com with ESMTP; 21 Sep 2021 02:22:12 -0700 From: Naga Harish K S V To: jerinj@marvell.com, jay.jayatheerthan@intel.com Cc: dev@dpdk.org Date: Tue, 21 Sep 2021 04:21:46 -0500 Message-Id: <20210921092146.1778421-5-s.v.naga.harish.k@intel.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210921092146.1778421-1-s.v.naga.harish.k@intel.com> References: <20210918131140.3543317-1-s.v.naga.harish.k@intel.com> <20210921092146.1778421-1-s.v.naga.harish.k@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v2 5/5] test/eventdev: add per rx queue event buffer unit X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" this patch adds unit tests for per rx queue event buffer Signed-off-by: Naga Harish K S V --- app/test/test_event_eth_rx_adapter.c | 86 ++++++++++++++++++++++++++++ 1 file changed, 86 insertions(+) diff --git a/app/test/test_event_eth_rx_adapter.c b/app/test/test_event_eth_rx_adapter.c index 3c0f0ad7cc..11110564d0 100644 --- a/app/test/test_event_eth_rx_adapter.c +++ b/app/test/test_event_eth_rx_adapter.c @@ -387,6 +387,90 @@ adapter_create(void) return err; } +static int +adapter_create_with_params(void) +{ + int err; + struct rte_event_dev_info dev_info; + struct rte_event_port_conf rx_p_conf; + struct rte_event_eth_rx_adapter_params rxa_params; + + memset(&rx_p_conf, 0, sizeof(rx_p_conf)); + + err = rte_event_dev_info_get(TEST_DEV_ID, &dev_info); + TEST_ASSERT(err == 0, "Expected 0 got %d", err); + + rx_p_conf.new_event_threshold = dev_info.max_num_events; + rx_p_conf.dequeue_depth = dev_info.max_event_port_dequeue_depth; + rx_p_conf.enqueue_depth = dev_info.max_event_port_enqueue_depth; + + rxa_params.use_queue_event_buf = false; + rxa_params.event_buf_size = 0; + + err = rte_event_eth_rx_adapter_create_with_params(TEST_INST_ID, + TEST_DEV_ID, &rx_p_conf, &rxa_params); + TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err); + + rxa_params.use_queue_event_buf = true; + + err = 
rte_event_eth_rx_adapter_create_with_params(TEST_INST_ID, + TEST_DEV_ID, &rx_p_conf, &rxa_params); + TEST_ASSERT(err == 0, "Expected 0 got %d", err); + + err = rte_event_eth_rx_adapter_create_with_params(TEST_INST_ID, + TEST_DEV_ID, &rx_p_conf, &rxa_params); + TEST_ASSERT(err == -EEXIST, "Expected -EEXIST got %d", err); + + return TEST_SUCCESS; +} + +static int +adapter_queue_event_buf_test(void) +{ + int err; + struct rte_event ev; + uint32_t cap; + + struct rte_event_eth_rx_adapter_queue_conf queue_config = {0}; + + err = rte_event_eth_rx_adapter_caps_get(TEST_DEV_ID, TEST_ETHDEV_ID, + &cap); + TEST_ASSERT(err == 0, "Expected 0 got %d", err); + + ev.queue_id = 0; + ev.sched_type = RTE_SCHED_TYPE_ATOMIC; + ev.priority = 0; + + queue_config.rx_queue_flags = 0; + if (cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_OVERRIDE_FLOW_ID) { + ev.flow_id = 1; + queue_config.rx_queue_flags = + RTE_EVENT_ETH_RX_ADAPTER_QUEUE_FLOW_ID_VALID; + } + queue_config.ev = ev; + queue_config.servicing_weight = 1; + queue_config.event_buf_size = 0; + + err = rte_event_eth_rx_adapter_queue_add(TEST_INST_ID, + TEST_ETHDEV_ID, 0, + &queue_config); + TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err); + + queue_config.event_buf_size = 1024; + + err = rte_event_eth_rx_adapter_queue_add(TEST_INST_ID, + TEST_ETHDEV_ID, 0, + &queue_config); + TEST_ASSERT(err == 0, "Expected 0 got %d", err); + + err = rte_event_eth_rx_adapter_queue_del(TEST_INST_ID, + TEST_ETHDEV_ID, + 0); + TEST_ASSERT(err == 0, "Expected 0 got %d", err); + + return TEST_SUCCESS; +} + static void adapter_free(void) { @@ -804,6 +888,8 @@ static struct unit_test_suite event_eth_rx_tests = { adapter_multi_eth_add_del), TEST_CASE_ST(adapter_create, adapter_free, adapter_start_stop), TEST_CASE_ST(adapter_create, adapter_free, adapter_stats), + TEST_CASE_ST(adapter_create_with_params, adapter_free, + adapter_queue_event_buf_test), TEST_CASES_END() /**< NULL terminate unit test array */ } };
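[Editor's note: to round off the series, a minimal sketch of the per-queue
event buffer usage added by patches 3-5. The identifiers and sizes are
illustrative assumptions, not part of the patches.]

    #include <rte_eventdev.h>
    #include <rte_event_eth_rx_adapter.h>

    /* Sketch: create an adapter with per-queue event buffers and add Rx
     * queue 0 of an ethdev with a 1024-event buffer. When
     * use_queue_event_buf is true, the adapter-level event_buf_size is
     * not used, queue_conf.event_buf_size must be non-zero, and queues
     * with servicing_weight == 0 (interrupt mode) are rejected, per
     * patches 3 and 4.
     */
    static int
    add_queue_with_event_buf(uint8_t id, uint8_t dev_id,
                             uint16_t eth_dev_id,
                             struct rte_event_port_conf *port_conf)
    {
        struct rte_event_eth_rx_adapter_params rxa_params = {
            .use_queue_event_buf = true,
        };
        struct rte_event_eth_rx_adapter_queue_conf queue_conf = {0};
        int ret;

        ret = rte_event_eth_rx_adapter_create_with_params(id, dev_id,
                        port_conf, &rxa_params);
        if (ret)
            return ret;

        queue_conf.ev.queue_id = 0;
        queue_conf.ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
        queue_conf.servicing_weight = 1;
        queue_conf.event_buf_size = 1024;

        ret = rte_event_eth_rx_adapter_queue_add(id, eth_dev_id, 0,
                        &queue_conf);
        if (ret)
            rte_event_eth_rx_adapter_free(id);
        return ret;
    }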