From patchwork Wed Aug 30 15:53:02 2023
X-Patchwork-Submitter: Srikanth Yalavarthi <syalavarthi@marvell.com>
X-Patchwork-Id: 130881
X-Patchwork-Delegate: thomas@monjalon.net
From: Srikanth Yalavarthi <syalavarthi@marvell.com>
To: Srikanth Yalavarthi <syalavarthi@marvell.com>
Subject: [PATCH v1 3/3] mldev: drop input and output size get APIs
Date: Wed, 30 Aug 2023 08:53:02 -0700
Message-ID: <20230830155303.30380-4-syalavarthi@marvell.com>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20230830155303.30380-1-syalavarthi@marvell.com>
References: <20230830155303.30380-1-syalavarthi@marvell.com>

Drop support for and use of the ML input and output size get functions,
rte_ml_io_input_size_get() and rte_ml_io_output_size_get(). These
functions are no longer required, as the input and output buffer sizes
can be computed from the fields of the updated rte_ml_io_info structure.
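
As a migration reference, the sketch below shows one way an application
could derive aggregate input buffer sizes from the per-tensor info
returned by rte_ml_model_info_get() instead of calling the dropped APIs.
The field names used here (nb_inputs, input_info, nb_elements, size) are
assumptions based on the reworked rte_ml_io_info from this series, not
definitions added by this patch; output sizes can be derived the same way
from nb_outputs and output_info.

/* Sketch only: compute quantized/dequantized input buffer sizes from
 * model info. Field names are assumed from the updated rte_ml_io_info
 * and rte_ml_model_info layouts and may differ in the final API.
 */
#include <stdint.h>

#include <rte_mldev.h>

static int
app_ml_input_sizes_get(int16_t dev_id, uint16_t model_id, uint64_t *qsize, uint64_t *dsize)
{
        struct rte_ml_model_info info;
        uint32_t i;
        int ret;

        ret = rte_ml_model_info_get(dev_id, model_id, &info);
        if (ret != 0)
                return ret;

        *qsize = 0;
        *dsize = 0;
        for (i = 0; i < info.nb_inputs; i++) {
                /* size is the quantized tensor size in bytes; the dequantized
                 * size follows from the element count and the application's
                 * data type (float32 assumed here).
                 */
                *qsize += info.input_info[i].size;
                *dsize += info.input_info[i].nb_elements * sizeof(float);
        }

        return 0;
}
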
Signed-off-by: Srikanth Yalavarthi <syalavarthi@marvell.com>
---
 drivers/ml/cnxk/cn10k_ml_ops.c | 50 ----------------------------
 lib/mldev/rte_mldev.c          | 38 ---------------------
 lib/mldev/rte_mldev.h          | 60 ----------------------------------
 lib/mldev/rte_mldev_core.h     | 54 ------------------------------
 lib/mldev/version.map          |  2 --
 5 files changed, 204 deletions(-)

diff --git a/drivers/ml/cnxk/cn10k_ml_ops.c b/drivers/ml/cnxk/cn10k_ml_ops.c
index 1d72fb52a6a..4abf4ae0d39 100644
--- a/drivers/ml/cnxk/cn10k_ml_ops.c
+++ b/drivers/ml/cnxk/cn10k_ml_ops.c
@@ -2110,54 +2110,6 @@ cn10k_ml_model_params_update(struct rte_ml_dev *dev, uint16_t model_id, void *bu
         return 0;
 }
 
-static int
-cn10k_ml_io_input_size_get(struct rte_ml_dev *dev, uint16_t model_id, uint32_t nb_batches,
-                           uint64_t *input_qsize, uint64_t *input_dsize)
-{
-        struct cn10k_ml_model *model;
-
-        model = dev->data->models[model_id];
-
-        if (model == NULL) {
-                plt_err("Invalid model_id = %u", model_id);
-                return -EINVAL;
-        }
-
-        if (input_qsize != NULL)
-                *input_qsize = PLT_U64_CAST(model->addr.total_input_sz_q *
-                                            PLT_DIV_CEIL(nb_batches, model->batch_size));
-
-        if (input_dsize != NULL)
-                *input_dsize = PLT_U64_CAST(model->addr.total_input_sz_d *
-                                            PLT_DIV_CEIL(nb_batches, model->batch_size));
-
-        return 0;
-}
-
-static int
-cn10k_ml_io_output_size_get(struct rte_ml_dev *dev, uint16_t model_id, uint32_t nb_batches,
-                            uint64_t *output_qsize, uint64_t *output_dsize)
-{
-        struct cn10k_ml_model *model;
-
-        model = dev->data->models[model_id];
-
-        if (model == NULL) {
-                plt_err("Invalid model_id = %u", model_id);
-                return -EINVAL;
-        }
-
-        if (output_qsize != NULL)
-                *output_qsize = PLT_U64_CAST(model->addr.total_output_sz_q *
-                                             PLT_DIV_CEIL(nb_batches, model->batch_size));
-
-        if (output_dsize != NULL)
-                *output_dsize = PLT_U64_CAST(model->addr.total_output_sz_d *
-                                             PLT_DIV_CEIL(nb_batches, model->batch_size));
-
-        return 0;
-}
-
 static int
 cn10k_ml_io_quantize(struct rte_ml_dev *dev, uint16_t model_id, struct rte_ml_buff_seg **dbuffer,
                      struct rte_ml_buff_seg **qbuffer)
@@ -2636,8 +2588,6 @@ struct rte_ml_dev_ops cn10k_ml_ops = {
         .model_params_update = cn10k_ml_model_params_update,
 
         /* I/O ops */
-        .io_input_size_get = cn10k_ml_io_input_size_get,
-        .io_output_size_get = cn10k_ml_io_output_size_get,
         .io_quantize = cn10k_ml_io_quantize,
         .io_dequantize = cn10k_ml_io_dequantize,
 };
diff --git a/lib/mldev/rte_mldev.c b/lib/mldev/rte_mldev.c
index 9a48ed3e944..cc5f2e0cc63 100644
--- a/lib/mldev/rte_mldev.c
+++ b/lib/mldev/rte_mldev.c
@@ -691,44 +691,6 @@ rte_ml_model_params_update(int16_t dev_id, uint16_t model_id, void *buffer)
         return (*dev->dev_ops->model_params_update)(dev, model_id, buffer);
 }
 
-int
-rte_ml_io_input_size_get(int16_t dev_id, uint16_t model_id, uint32_t nb_batches,
-                         uint64_t *input_qsize, uint64_t *input_dsize)
-{
-        struct rte_ml_dev *dev;
-
-        if (!rte_ml_dev_is_valid_dev(dev_id)) {
-                RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id);
-                return -EINVAL;
-        }
-
-        dev = rte_ml_dev_pmd_get_dev(dev_id);
-        if (*dev->dev_ops->io_input_size_get == NULL)
-                return -ENOTSUP;
-
-        return (*dev->dev_ops->io_input_size_get)(dev, model_id, nb_batches, input_qsize,
-                                                  input_dsize);
-}
-
-int
-rte_ml_io_output_size_get(int16_t dev_id, uint16_t model_id, uint32_t nb_batches,
-                          uint64_t *output_qsize, uint64_t *output_dsize)
-{
-        struct rte_ml_dev *dev;
-
-        if (!rte_ml_dev_is_valid_dev(dev_id)) {
-                RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id);
-                return -EINVAL;
-        }
-
-        dev = rte_ml_dev_pmd_get_dev(dev_id);
-        if (*dev->dev_ops->io_output_size_get == NULL)
-                return -ENOTSUP;
-
-        return (*dev->dev_ops->io_output_size_get)(dev, model_id, nb_batches, output_qsize,
-                                                   output_dsize);
-}
-
 int
 rte_ml_io_quantize(int16_t dev_id, uint16_t model_id, struct rte_ml_buff_seg **dbuffer,
                    struct rte_ml_buff_seg **qbuffer)
diff --git a/lib/mldev/rte_mldev.h b/lib/mldev/rte_mldev.h
index 316c6fd0188..63b2670bb04 100644
--- a/lib/mldev/rte_mldev.h
+++ b/lib/mldev/rte_mldev.h
@@ -1008,66 +1008,6 @@ rte_ml_model_params_update(int16_t dev_id, uint16_t model_id, void *buffer);
 
 /* IO operations */
 
-/**
- * Get size of quantized and dequantized input buffers.
- *
- * Calculate the size of buffers required for quantized and dequantized input data.
- * This API would return the buffer sizes for the number of batches provided and would
- * consider the alignment requirements as per the PMD. Input sizes computed by this API can
- * be used by the application to allocate buffers.
- *
- * @param[in] dev_id
- *   The identifier of the device.
- * @param[in] model_id
- *   Identifier for the model created
- * @param[in] nb_batches
- *   Number of batches of input to be processed in a single inference job
- * @param[out] input_qsize
- *   Quantized input size pointer.
- *   NULL value is allowed, in which case input_qsize is not calculated by the driver.
- * @param[out] input_dsize
- *   Dequantized input size pointer.
- *   NULL value is allowed, in which case input_dsize is not calculated by the driver.
- *
- * @return
- *   - Returns 0 on success
- *   - Returns negative value on failure
- */
-__rte_experimental
-int
-rte_ml_io_input_size_get(int16_t dev_id, uint16_t model_id, uint32_t nb_batches,
-                         uint64_t *input_qsize, uint64_t *input_dsize);
-
-/**
- * Get size of quantized and dequantized output buffers.
- *
- * Calculate the size of buffers required for quantized and dequantized output data.
- * This API would return the buffer sizes for the number of batches provided and would consider
- * the alignment requirements as per the PMD. Output sizes computed by this API can be used by the
- * application to allocate buffers.
- *
- * @param[in] dev_id
- *   The identifier of the device.
- * @param[in] model_id
- *   Identifier for the model created
- * @param[in] nb_batches
- *   Number of batches of input to be processed in a single inference job
- * @param[out] output_qsize
- *   Quantized output size pointer.
- *   NULL value is allowed, in which case output_qsize is not calculated by the driver.
- * @param[out] output_dsize
- *   Dequantized output size pointer.
- *   NULL value is allowed, in which case output_dsize is not calculated by the driver.
- *
- * @return
- *   - Returns 0 on success
- *   - Returns negative value on failure
- */
-__rte_experimental
-int
-rte_ml_io_output_size_get(int16_t dev_id, uint16_t model_id, uint32_t nb_batches,
-                          uint64_t *output_qsize, uint64_t *output_dsize);
-
 /**
  * Quantize input data.
  *
diff --git a/lib/mldev/rte_mldev_core.h b/lib/mldev/rte_mldev_core.h
index 8530b073162..2279b1dcecb 100644
--- a/lib/mldev/rte_mldev_core.h
+++ b/lib/mldev/rte_mldev_core.h
@@ -466,54 +466,6 @@ typedef int (*mldev_model_info_get_t)(struct rte_ml_dev *dev, uint16_t model_id,
  */
 typedef int (*mldev_model_params_update_t)(struct rte_ml_dev *dev, uint16_t model_id, void *buffer);
 
-/**
- * @internal
- *
- * Get size of input buffers.
- *
- * @param dev
- *   ML device pointer.
- * @param model_id
- *   Model ID to use.
- * @param nb_batches
- *   Number of batches.
- * @param input_qsize
- *   Size of quantized input.
- * @param input_dsize
- *   Size of dequantized input.
- *
- * @return
- *   - 0 on success.
- *   - <0, error on failure.
- */
-typedef int (*mldev_io_input_size_get_t)(struct rte_ml_dev *dev, uint16_t model_id,
-                                          uint32_t nb_batches, uint64_t *input_qsize,
-                                          uint64_t *input_dsize);
-
-/**
- * @internal
- *
- * Get size of output buffers.
- *
- * @param dev
- *   ML device pointer.
- * @param model_id
- *   Model ID to use.
- * @param nb_batches
- *   Number of batches.
- * @param output_qsize
- *   Size of quantized output.
- * @param output_dsize
- *   Size of dequantized output.
- *
- * @return
- *   - 0 on success.
- *   - <0, error on failure.
- */
-typedef int (*mldev_io_output_size_get_t)(struct rte_ml_dev *dev, uint16_t model_id,
-                                           uint32_t nb_batches, uint64_t *output_qsize,
-                                           uint64_t *output_dsize);
-
 /**
  * @internal
  *
@@ -627,12 +579,6 @@ struct rte_ml_dev_ops {
         /** Update model params. */
         mldev_model_params_update_t model_params_update;
 
-        /** Get input buffer size. */
-        mldev_io_input_size_get_t io_input_size_get;
-
-        /** Get output buffer size. */
-        mldev_io_output_size_get_t io_output_size_get;
-
         /** Quantize data */
         mldev_io_quantize_t io_quantize;
 
diff --git a/lib/mldev/version.map b/lib/mldev/version.map
index 40ff27f4b95..99841db6aa9 100644
--- a/lib/mldev/version.map
+++ b/lib/mldev/version.map
@@ -23,8 +23,6 @@ EXPERIMENTAL {
         rte_ml_dev_xstats_reset;
         rte_ml_enqueue_burst;
         rte_ml_io_dequantize;
-        rte_ml_io_input_size_get;
-        rte_ml_io_output_size_get;
         rte_ml_io_quantize;
         rte_ml_model_info_get;
         rte_ml_model_load;