From patchwork Wed Jul 6 23:28:29 2022
X-Patchwork-Submitter: "Chautru, Nicolas"
X-Patchwork-Id: 113767
X-Patchwork-Delegate: gakhil@marvell.com
From: Nicolas Chautru
To: dev@dpdk.org, thomas@monjalon.net, gakhil@marvell.com, hemant.agrawal@nxp.com
Cc: maxime.coquelin@redhat.com, trix@redhat.com, mdr@ashroe.eu,
 bruce.richardson@intel.com, david.marchand@redhat.com,
 stephen@networkplumber.org, Nicolas Chautru
Subject: [PATCH v5 6/7] bbdev: add queue related warning and status information
Date: Wed, 6 Jul 2022 16:28:29 -0700
Message-Id: <1657150110-69957-7-git-send-email-nicolas.chautru@intel.com>
In-Reply-To: <1657150110-69957-1-git-send-email-nicolas.chautru@intel.com>
References: <1655491040-183649-6-git-send-email-nicolas.chautru@intel.com>
 <1657150110-69957-1-git-send-email-nicolas.chautru@intel.com>

This allows exposing more information on any queue-related failure or
warning that cannot be reported through the existing API.
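For illustration only (not part of the diff below), a minimal sketch of how an
application could read the new counters through the existing
rte_bbdev_stats_get() API and translate an enqueue status to a string; the
helper name and the printing loop are hypothetical:

#include <inttypes.h>
#include <stdio.h>
#include <rte_bbdev.h>

/* Hypothetical helper: print the new warning counters and the per-status
 * enqueue breakdown for one already-configured device. */
static void
print_bbdev_warn_stats(uint16_t dev_id)
{
	struct rte_bbdev_stats stats;
	enum rte_bbdev_enqueue_status s;

	if (rte_bbdev_stats_get(dev_id, &stats) != 0)
		return;

	printf("enqueue warnings: %" PRIu64 ", dequeue warnings: %" PRIu64 "\n",
			stats.enqueue_warn_count, stats.dequeue_warn_count);

	/* Per-status counters; whether a driver fills these is device specific. */
	for (s = RTE_BBDEV_ENQ_STATUS_NONE; s <= RTE_BBDEV_ENQ_STATUS_INVALID_OP; s++)
		printf("  %s: %" PRIu64 "\n", rte_bbdev_enqueue_status_str(s),
				stats.enqueue_status_count[s]);
}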
Signed-off-by: Nicolas Chautru
---
 app/test-bbdev/test_bbdev_perf.c |  2 ++
 lib/bbdev/rte_bbdev.c            | 19 +++++++++++++++++++
 lib/bbdev/rte_bbdev.h            | 34 ++++++++++++++++++++++++++++++++++
 lib/bbdev/version.map            |  1 +
 4 files changed, 56 insertions(+)

diff --git a/app/test-bbdev/test_bbdev_perf.c b/app/test-bbdev/test_bbdev_perf.c
index 1abda2d..653b21f 100644
--- a/app/test-bbdev/test_bbdev_perf.c
+++ b/app/test-bbdev/test_bbdev_perf.c
@@ -4360,6 +4360,8 @@ typedef int (test_case_function)(struct active_device *ad,
 	stats->dequeued_count = q_stats->dequeued_count;
 	stats->enqueue_err_count = q_stats->enqueue_err_count;
 	stats->dequeue_err_count = q_stats->dequeue_err_count;
+	stats->enqueue_warn_count = q_stats->enqueue_warn_count;
+	stats->dequeue_warn_count = q_stats->dequeue_warn_count;
 	stats->acc_offload_cycles = q_stats->acc_offload_cycles;
 
 	return 0;
diff --git a/lib/bbdev/rte_bbdev.c b/lib/bbdev/rte_bbdev.c
index 9d65ba8..bdd7c2f 100644
--- a/lib/bbdev/rte_bbdev.c
+++ b/lib/bbdev/rte_bbdev.c
@@ -721,6 +721,8 @@ struct rte_bbdev *
 		stats->dequeued_count += q_stats->dequeued_count;
 		stats->enqueue_err_count += q_stats->enqueue_err_count;
 		stats->dequeue_err_count += q_stats->dequeue_err_count;
+		stats->enqueue_warn_count += q_stats->enqueue_warn_count;
+		stats->dequeue_warn_count += q_stats->dequeue_warn_count;
 	}
 	rte_bbdev_log_debug("Got stats on %u", dev->data->dev_id);
 }
@@ -1163,3 +1165,20 @@ struct rte_mempool *
 	rte_bbdev_log(ERR, "Invalid device status");
 	return NULL;
 }
+
+const char *
+rte_bbdev_enqueue_status_str(enum rte_bbdev_enqueue_status status)
+{
+	static const char * const enq_sta_string[] = {
+		"RTE_BBDEV_ENQ_STATUS_NONE",
+		"RTE_BBDEV_ENQ_STATUS_QUEUE_FULL",
+		"RTE_BBDEV_ENQ_STATUS_RING_FULL",
+		"RTE_BBDEV_ENQ_STATUS_INVALID_OP",
+	};
+
+	if (status < sizeof(enq_sta_string) / sizeof(char *))
+		return enq_sta_string[status];
+
+	rte_bbdev_log(ERR, "Invalid enqueue status");
+	return NULL;
+}
diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h
index ed528b8..b7ecf94 100644
--- a/lib/bbdev/rte_bbdev.h
+++ b/lib/bbdev/rte_bbdev.h
@@ -224,6 +224,19 @@ struct rte_bbdev_queue_conf {
 rte_bbdev_queue_stop(uint16_t dev_id, uint16_t queue_id);
 
 /**
+ * Flags indicate the reason why a previous enqueue may not have
+ * consumed all requested operations.
+ * In case of multiple reasons, the latter supersedes a previous one.
+ */
+enum rte_bbdev_enqueue_status {
+	RTE_BBDEV_ENQ_STATUS_NONE, /**< Nothing to report */
+	RTE_BBDEV_ENQ_STATUS_QUEUE_FULL, /**< Not enough room in queue */
+	RTE_BBDEV_ENQ_STATUS_RING_FULL, /**< Not enough room in ring */
+	RTE_BBDEV_ENQ_STATUS_INVALID_OP, /**< Operation was rejected as invalid */
+	RTE_BBDEV_ENQ_STATUS_PADDED_MAX = 6, /**< Maximum enq status number including padding */
+};
+
+/**
  * Flags indicate the status of the device
  */
 enum rte_bbdev_device_status {
@@ -246,6 +259,12 @@ struct rte_bbdev_stats {
 	uint64_t enqueue_err_count;
 	/** Total error count on operations dequeued */
 	uint64_t dequeue_err_count;
+	/** Total warning count on operations enqueued */
+	uint64_t enqueue_warn_count;
+	/** Total warning count on operations dequeued */
+	uint64_t dequeue_warn_count;
+	/** Total enqueue status count based on rte_bbdev_enqueue_status enum */
+	uint64_t enqueue_status_count[RTE_BBDEV_ENQ_STATUS_PADDED_MAX];
 	/** CPU cycles consumed by the (HW/SW) accelerator device to offload
 	 *  the enqueue request to its internal queues.
 	 *  - For a HW device this is the cycles consumed in MMIO write
@@ -386,6 +405,7 @@ struct rte_bbdev_queue_data {
 	void *queue_private;  /**< Driver-specific per-queue data */
 	struct rte_bbdev_queue_conf conf;  /**< Current configuration */
 	struct rte_bbdev_stats queue_stats;  /**< Queue statistics */
+	enum rte_bbdev_enqueue_status enqueue_status; /**< Enqueue status when op is rejected */
 	bool started;  /**< Queue state */
 };
 
@@ -938,6 +958,20 @@ typedef void (*rte_bbdev_cb_fn)(uint16_t dev_id,
 const char*
 rte_bbdev_device_status_str(enum rte_bbdev_device_status status);
 
+/**
+ * Converts enqueue status from enum to string.
+ *
+ * @param status
+ *   Enqueue status as enum.
+ *
+ * @returns
+ *   Enqueue status as string or NULL if the status is invalid.
+ *
+ */
+__rte_experimental
+const char*
+rte_bbdev_enqueue_status_str(enum rte_bbdev_enqueue_status status);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/bbdev/version.map b/lib/bbdev/version.map
index efae50b..1c06738 100644
--- a/lib/bbdev/version.map
+++ b/lib/bbdev/version.map
@@ -44,6 +44,7 @@ EXPERIMENTAL {
 	global:
 
 	rte_bbdev_device_status_str;
+	rte_bbdev_enqueue_status_str;
 	rte_bbdev_enqueue_fft_ops;
 	rte_bbdev_dequeue_fft_ops;
 	rte_bbdev_fft_op_alloc_bulk;
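
Not part of the patch, but as a sketch of the driver-side usage implied by the
new rte_bbdev_queue_data::enqueue_status field and counters: a PMD enqueue
callback could record why it consumed fewer operations than requested. The
availability check below is a made-up placeholder; real drivers use their own
ring bookkeeping.

#include <rte_common.h>
#include <rte_bbdev.h>
#include <rte_bbdev_op.h>

/* Hypothetical PMD enqueue callback matching rte_bbdev_enqueue_enc_ops_t. */
static uint16_t
dummy_pmd_enqueue_enc(struct rte_bbdev_queue_data *q_data,
		struct rte_bbdev_enc_op **ops, uint16_t num)
{
	uint16_t avail = 4; /* placeholder for free descriptors in the HW ring */
	uint16_t nb = RTE_MIN(num, avail);

	RTE_SET_USED(ops); /* a real driver would submit ops[0..nb-1] here */

	if (nb < num) {
		/* Record why the enqueue was only partially consumed. */
		q_data->enqueue_status = RTE_BBDEV_ENQ_STATUS_RING_FULL;
		q_data->queue_stats.enqueue_status_count[RTE_BBDEV_ENQ_STATUS_RING_FULL]++;
		q_data->queue_stats.enqueue_warn_count++;
	} else {
		q_data->enqueue_status = RTE_BBDEV_ENQ_STATUS_NONE;
	}

	q_data->queue_stats.enqueued_count += nb;
	return nb;
}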