From patchwork Wed Sep 21 16:19:11 2022
X-Patchwork-Submitter: "Sevincer, Abdullah"
X-Patchwork-Id: 116584
X-Patchwork-Delegate: jerinj@marvell.com
From: Abdullah Sevincer
To: dev@dpdk.org
Cc: jerinj@marvell.com, rashmi.shetty@intel.com, pravin.pathak@intel.com,
 mike.ximing.chen@intel.com, timothy.mcdaniel@intel.com,
 shivani.doneria@intel.com, tirthendu.sarkar@intel.com,
 Abdullah Sevincer
Subject: [PATCH v2] event/dlb2: fix max cq_depth/enq_depth cli override
Date: Wed, 21 Sep 2022 11:19:11 -0500
Message-Id: <20220921161911.737899-1-abdullah.sevincer@intel.com>
X-Mailer: git-send-email 2.25.1

This patch addresses an issue where more than max_enq_depth events could
be enqueued, and fewer than max_cq_depth events could be dequeued, in a
single call of rte_event_enqueue_burst() and rte_event_dequeue_burst().

Fix this by restricting enqueues to max_enq_depth, so that a single
rte_event_enqueue_burst() call enqueues at most max_enq_depth events.
Also set the per-port and per-domain history list sizes based on
cq_depth, so that a dequeue returns the correct number of events as set
by max_cq_depth.
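
For context (not part of the diff below), here is a minimal caller-side
sketch of the new behaviour: with enqueues clamped to the port's
configured enqueue_depth, rte_event_enqueue_burst() may accept fewer
events than requested, so applications should loop on the return value.
The helper name and the dev_id/port_id/ev parameters are illustrative
only, not part of this patch or of the eventdev API.

#include <rte_eventdev.h>

/* Illustrative helper: enqueue all nb_events, looping because
 * rte_event_enqueue_burst() may accept only part of the burst
 * (at most the port's enqueue_depth per call with this fix).
 */
static inline uint16_t
enqueue_all(uint8_t dev_id, uint8_t port_id,
	    const struct rte_event *ev, uint16_t nb_events)
{
	uint16_t sent = 0;

	while (sent < nb_events) {
		uint16_t n = rte_event_enqueue_burst(dev_id, port_id,
						     &ev[sent],
						     nb_events - sent);
		if (n == 0)
			break; /* back-pressure; let the caller decide how to retry */
		sent += n;
	}
	return sent;
}
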
Signed-off-by: Abdullah Sevincer
---
 drivers/event/dlb2/dlb2.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
index 5a443acff8..e8c21c41fd 100644
--- a/drivers/event/dlb2/dlb2.c
+++ b/drivers/event/dlb2/dlb2.c
@@ -813,7 +813,7 @@ dlb2_hw_create_sched_domain(struct dlb2_eventdev *dlb2,
 		cfg->num_ldb_queues;
 
 	cfg->num_hist_list_entries = resources_asked->num_ldb_ports *
-		DLB2_NUM_HIST_LIST_ENTRIES_PER_LDB_PORT;
+		evdev_dlb2_default_info.max_event_port_dequeue_depth;
 
 	if (device_version == DLB2_HW_V2_5) {
 		DLB2_LOG_DBG("sched domain create - ldb_qs=%d, ldb_ports=%d, dir_ports=%d, atomic_inflights=%d, hist_list_entries=%d, credits=%d\n",
@@ -1538,7 +1538,7 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
 	cfg.cq_depth = rte_align32pow2(dequeue_depth);
 	cfg.cq_depth_threshold = 1;
 
-	cfg.cq_history_list_size = DLB2_NUM_HIST_LIST_ENTRIES_PER_LDB_PORT;
+	cfg.cq_history_list_size = cfg.cq_depth;
 
 	cfg.cos_id = ev_port->cos_id;
 	cfg.cos_strict = 0;/* best effots */
@@ -2966,6 +2966,7 @@ __dlb2_event_enqueue_burst(void *event_port,
 	struct dlb2_port *qm_port = &ev_port->qm_port;
 	struct process_local_port_data *port_data;
 	int retries = ev_port->enq_retries;
+	int num_tx;
 	int i;
 
 	RTE_ASSERT(ev_port->enq_configured);
@@ -2974,8 +2975,8 @@ __dlb2_event_enqueue_burst(void *event_port,
 	i = 0;
 
 	port_data = &dlb2_port[qm_port->id][PORT_TYPE(qm_port)];
-
-	while (i < num) {
+	num_tx = RTE_MIN(num, ev_port->conf.enqueue_depth);
+	while (i < num_tx) {
 		uint8_t sched_types[DLB2_NUM_QES_PER_CACHE_LINE];
 		uint8_t queue_ids[DLB2_NUM_QES_PER_CACHE_LINE];
 		int pop_offs = 0;
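
As a follow-up note (not part of the patch): applications that want to
size their bursts to what the clamped enqueue path will accept can query
the configured depths at runtime. A small sketch, assuming the standard
eventdev port attribute IDs; dev_id and port_id are placeholders.

#include <stdio.h>
#include <rte_eventdev.h>

/* Illustrative only: print the enqueue/dequeue depths configured on a
 * port, which bound how many events a single burst call will handle.
 */
static void
print_port_depths(uint8_t dev_id, uint8_t port_id)
{
	uint32_t enq_depth = 0, deq_depth = 0;

	rte_event_port_attr_get(dev_id, port_id,
				RTE_EVENT_PORT_ATTR_ENQ_DEPTH, &enq_depth);
	rte_event_port_attr_get(dev_id, port_id,
				RTE_EVENT_PORT_ATTR_DEQ_DEPTH, &deq_depth);

	printf("port %u: enqueue_depth=%u dequeue_depth=%u\n",
	       port_id, enq_depth, deq_depth);
}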