From patchwork Sat Apr 9 15:13:20 2022
X-Patchwork-Submitter: Timothy McDaniel <timothy.mcdaniel@intel.com>
X-Patchwork-Id: 109543
X-Patchwork-Delegate: jerinj@marvell.com
From: Timothy McDaniel <timothy.mcdaniel@intel.com>
To: jerinj@marvell.com
Cc: dev@dpdk.org
Subject: [PATCH] event/dlb2: allow CQ depths up to 1024
Date: Sat, 9 Apr 2022 10:13:20 -0500
Message-Id: <20220409151320.1007320-1-timothy.mcdaniel@intel.com>
X-Mailer: git-send-email 2.23.0

Update the DLB2 PMD to allow overriding the default CQ depth of 32 via
the new "max_cq_depth" devarg. Because the DLB hardware provides only
2048 history list entries, increasing the CQ depth reduces the number of
available LDB ports to 2048/max_cq_depth. The resource query takes this
into account and reports the correct maximum number of LDB ports.
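For example, the override can be supplied as a device argument on the EAL
command line; the form below follows the driver's existing devargs usage, and
the PCI address is illustrative only:

    --allow ea:00.0,max_cq_depth=128

Values outside the 32 to 1024 range, or values that are not a power of 2, are
rejected by the new set_max_cq_depth() parser.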
Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/dlb2.c       | 57 ++++++++++++++++++++++++++++++---
 drivers/event/dlb2/dlb2_priv.h  | 10 ++++--
 drivers/event/dlb2/pf/dlb2_pf.c |  3 +-
 3 files changed, 62 insertions(+), 8 deletions(-)

diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
index 7789dd74e0..36f07d0061 100644
--- a/drivers/event/dlb2/dlb2.c
+++ b/drivers/event/dlb2/dlb2.c
@@ -55,7 +55,7 @@ static struct rte_event_dev_info evdev_dlb2_default_info = {
 	.max_event_queue_priority_levels = DLB2_QID_PRIORITIES,
 	.max_event_priority_levels = DLB2_QID_PRIORITIES,
 	.max_event_ports = DLB2_MAX_NUM_LDB_PORTS,
-	.max_event_port_dequeue_depth = DLB2_MAX_CQ_DEPTH,
+	.max_event_port_dequeue_depth = DLB2_DEFAULT_CQ_DEPTH,
 	.max_event_port_enqueue_depth = DLB2_MAX_ENQUEUE_DEPTH,
 	.max_event_port_links = DLB2_MAX_NUM_QIDS_PER_LDB_CQ,
 	.max_num_events = DLB2_MAX_NUM_LDB_CREDITS,
@@ -111,6 +111,7 @@ dlb2_hw_query_resources(struct dlb2_eventdev *dlb2)
 {
 	struct dlb2_hw_dev *handle = &dlb2->qm_instance;
 	struct dlb2_hw_resource_info *dlb2_info = &handle->info;
+	int num_ldb_ports;
 	int ret;
 
 	/* Query driver resources provisioned for this device */
@@ -127,11 +128,15 @@ dlb2_hw_query_resources(struct dlb2_eventdev *dlb2)
 	 * The capabilities (CAPs) were set at compile time.
 	 */
 
+	if (dlb2->max_cq_depth != DLB2_DEFAULT_CQ_DEPTH)
+		num_ldb_ports = DLB2_MAX_HL_ENTRIES / dlb2->max_cq_depth;
+	else
+		num_ldb_ports = dlb2->hw_rsrc_query_results.num_ldb_ports;
+
 	evdev_dlb2_default_info.max_event_queues =
 		dlb2->hw_rsrc_query_results.num_ldb_queues;
 
-	evdev_dlb2_default_info.max_event_ports =
-		dlb2->hw_rsrc_query_results.num_ldb_ports;
+	evdev_dlb2_default_info.max_event_ports = num_ldb_ports;
 
 	if (dlb2->version == DLB2_HW_V2_5) {
 		evdev_dlb2_default_info.max_num_events =
@@ -159,8 +164,7 @@ dlb2_hw_query_resources(struct dlb2_eventdev *dlb2)
 	handle->info.hw_rsrc_max.num_ldb_queues =
 		dlb2->hw_rsrc_query_results.num_ldb_queues;
 
-	handle->info.hw_rsrc_max.num_ldb_ports =
-		dlb2->hw_rsrc_query_results.num_ldb_ports;
+	handle->info.hw_rsrc_max.num_ldb_ports = num_ldb_ports;
 
 	handle->info.hw_rsrc_max.num_dir_ports =
 		dlb2->hw_rsrc_query_results.num_dir_ports;
@@ -212,6 +216,36 @@ set_numa_node(const char *key __rte_unused, const char *value, void *opaque)
 	return 0;
 }
 
+
+static int
+set_max_cq_depth(const char *key __rte_unused,
+		 const char *value,
+		 void *opaque)
+{
+	int *max_cq_depth = opaque;
+	int ret;
+
+	if (value == NULL || opaque == NULL) {
+		DLB2_LOG_ERR("NULL pointer\n");
+		return -EINVAL;
+	}
+
+	ret = dlb2_string_to_int(max_cq_depth, value);
+	if (ret < 0)
+		return ret;
+
+	if (*max_cq_depth < DLB2_MIN_CQ_DEPTH_OVERRIDE ||
+	    *max_cq_depth > DLB2_MAX_CQ_DEPTH_OVERRIDE ||
+	    !rte_is_power_of_2(*max_cq_depth)) {
+		DLB2_LOG_ERR("dlb2: max_cq_depth must be between %d and %d and a power of 2\n",
+			     DLB2_MIN_CQ_DEPTH_OVERRIDE,
+			     DLB2_MAX_CQ_DEPTH_OVERRIDE);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 static int
 set_max_num_events(const char *key __rte_unused,
 		   const char *value,
@@ -4504,6 +4538,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
 	dlb2->hw_credit_quanta = dlb2_args->hw_credit_quanta;
 	dlb2->default_depth_thresh = dlb2_args->default_depth_thresh;
 	dlb2->vector_opts_enabled = dlb2_args->vector_opts_enabled;
+	dlb2->max_cq_depth = dlb2_args->max_cq_depth;
 
 	err = dlb2_iface_open(&dlb2->qm_instance, name);
 	if (err < 0) {
@@ -4609,6 +4644,7 @@ dlb2_parse_params(const char *params,
					     DLB2_HW_CREDIT_QUANTA_ARG,
					     DLB2_DEPTH_THRESH_ARG,
					     DLB2_VECTOR_OPTS_ENAB_ARG,
+					     DLB2_MAX_CQ_DEPTH,
					     NULL };
 
 	if (params != NULL && params[0] != '\0') {
@@ -4744,6 +4780,17 @@ dlb2_parse_params(const char *params,
 			return ret;
 		}
 
+		ret = rte_kvargs_process(kvlist,
+					 DLB2_MAX_CQ_DEPTH,
+					 set_max_cq_depth,
+					 &dlb2_args->max_cq_depth);
+		if (ret != 0) {
+			DLB2_LOG_ERR("%s: Error parsing max cq depth",
+				     name);
+			rte_kvargs_free(kvlist);
+			return ret;
+		}
+
 		rte_kvargs_free(kvlist);
 	}
 }
diff --git a/drivers/event/dlb2/dlb2_priv.h b/drivers/event/dlb2/dlb2_priv.h
index 7837ae8733..3e47e4776b 100644
--- a/drivers/event/dlb2/dlb2_priv.h
+++ b/drivers/event/dlb2/dlb2_priv.h
@@ -28,6 +28,8 @@
 #define DLB2_SW_CREDIT_P_QUANTA_DEFAULT 256 /* Producer */
 #define DLB2_SW_CREDIT_C_QUANTA_DEFAULT 256 /* Consumer */
 #define DLB2_DEPTH_THRESH_DEFAULT 256
+#define DLB2_MIN_CQ_DEPTH_OVERRIDE 32
+#define DLB2_MAX_CQ_DEPTH_OVERRIDE 1024
 
 /* command line arg strings */
 #define NUMA_NODE_ARG "numa_node"
@@ -41,6 +43,7 @@
 #define DLB2_HW_CREDIT_QUANTA_ARG "hw_credit_quanta"
 #define DLB2_DEPTH_THRESH_ARG "default_depth_thresh"
 #define DLB2_VECTOR_OPTS_ENAB_ARG "vector_opts_enable"
+#define DLB2_MAX_CQ_DEPTH "max_cq_depth"
 
 /* Begin HW related defines and structs */
 
@@ -87,11 +90,12 @@
  * depth must be a power of 2 and must also be >= HIST LIST entries.
  * As a result we just limit the maximum dequeue depth to 32.
  */
+#define DLB2_MAX_HL_ENTRIES 2048
 #define DLB2_MIN_CQ_DEPTH 1
-#define DLB2_MAX_CQ_DEPTH 32
+#define DLB2_DEFAULT_CQ_DEPTH 32
 #define DLB2_MIN_HARDWARE_CQ_DEPTH 8
 #define DLB2_NUM_HIST_LIST_ENTRIES_PER_LDB_PORT \
-	DLB2_MAX_CQ_DEPTH
+	DLB2_DEFAULT_CQ_DEPTH
 
 #define DLB2_HW_DEVICE_FROM_PCI_ID(_pdev) \
 	(((_pdev->id.device_id == PCI_DEVICE_ID_INTEL_DLB2_5_PF) || \
@@ -572,6 +576,7 @@ struct dlb2_eventdev {
 	int max_num_events_override;
 	int num_dir_credits_override;
 	bool vector_opts_enabled;
+	int max_cq_depth;
 	volatile enum dlb2_run_state run_state;
 	uint16_t num_dir_queues; /* total num of evdev dir queues requested */
 	union {
@@ -632,6 +637,7 @@ struct dlb2_devargs {
 	int hw_credit_quanta;
 	int default_depth_thresh;
 	bool vector_opts_enabled;
+	int max_cq_depth;
 };
 
 /* End Eventdev related defines and structs */
diff --git a/drivers/event/dlb2/pf/dlb2_pf.c b/drivers/event/dlb2/pf/dlb2_pf.c
index dba6f3d5f7..5c80c724f1 100644
--- a/drivers/event/dlb2/pf/dlb2_pf.c
+++ b/drivers/event/dlb2/pf/dlb2_pf.c
@@ -619,7 +619,8 @@ dlb2_eventdev_pci_init(struct rte_eventdev *eventdev)
 		.poll_interval = DLB2_POLL_INTERVAL_DEFAULT,
 		.sw_credit_quanta = DLB2_SW_CREDIT_QUANTA_DEFAULT,
 		.hw_credit_quanta = DLB2_SW_CREDIT_BATCH_SZ,
-		.default_depth_thresh = DLB2_DEPTH_THRESH_DEFAULT
+		.default_depth_thresh = DLB2_DEPTH_THRESH_DEFAULT,
+		.max_cq_depth = DLB2_DEFAULT_CQ_DEPTH
 	};
 	struct dlb2_eventdev *dlb2;
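
As a minimal usage sketch (not part of the diff above): an application could
request the deeper CQ through the standard eventdev port setup path. The
helper name is illustrative, the device is assumed to be configured already
with valid dev_id/port_id, and error handling is abbreviated.

#include <rte_eventdev.h>

static int
setup_deep_cq_port(uint8_t dev_id, uint8_t port_id)
{
	struct rte_event_port_conf conf;
	int ret;

	/* Start from the PMD's default port configuration. */
	ret = rte_event_port_default_conf_get(dev_id, port_id, &conf);
	if (ret < 0)
		return ret;

	/*
	 * Request a 1024-entry CQ. This is intended to be used together with
	 * the max_cq_depth=1024 devarg added by this patch (value must be a
	 * power of 2 between 32 and 1024), and the device configuration must
	 * also allow this dequeue depth.
	 */
	conf.dequeue_depth = 1024;

	return rte_event_port_setup(dev_id, port_id, &conf);
}

Deeper CQs trade port count for depth: with max_cq_depth=1024 the resource
query reports at most 2048/1024 = 2 LDB ports.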