From patchwork Wed Sep 20 07:25:21 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Srikanth Yalavarthi
X-Patchwork-Id: 131704
X-Patchwork-Delegate: jerinj@marvell.com
From: Srikanth Yalavarthi
To: Srikanth Yalavarthi
Subject: [PATCH v2 30/34] ml/cnxk: implement I/O alloc and free callbacks
Date: Wed, 20 Sep 2023 00:25:21 -0700
Message-ID: <20230920072528.14185-31-syalavarthi@marvell.com>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20230920072528.14185-1-syalavarthi@marvell.com>
References: <20230830155927.3566-1-syalavarthi@marvell.com> <20230920072528.14185-1-syalavarthi@marvell.com>
List-Id: DPDK patches and discussions

Implemented callback functions for I/O allocation and free for Glow layers. These are registered as TVM runtime (tvmrt) callbacks so that quantized input/output buffers for Glow (MRVL) layers inside TVM models can be reserved and released on demand.
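Illustration only, not part of this patch: a minimal sketch of how the new pair is expected to be driven, assuming access to the driver-internal cn10k_ml_ops.h header. The wrapper function below is hypothetical and the inference step is elided; only the cn10k_ml_io_alloc()/cn10k_ml_io_free() signatures come from the patch.

#include <stdint.h>

#include "cn10k_ml_ops.h"

/* Hypothetical caller: reserve quantized I/O buffers for one Glow (MRVL)
 * layer, run it, then release the backing memzone.
 */
static int
example_run_glow_layer(void *device, uint16_t model_id, const char *layer_name)
{
	uint64_t *input_qbuffer;
	uint64_t *output_qbuffer;
	int ret;

	ret = cn10k_ml_io_alloc(device, model_id, layer_name,
				&input_qbuffer, &output_qbuffer);
	if (ret != 0)
		return ret;

	/* ... quantize inputs into input_qbuffer, enqueue the layer,
	 * dequantize results from output_qbuffer ...
	 */

	return cn10k_ml_io_free(device, model_id, layer_name);
}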
Signed-off-by: Srikanth Yalavarthi
---
 drivers/ml/cnxk/cn10k_ml_ops.c | 123 +++++++++++++++++++++++++++++++++
 drivers/ml/cnxk/cn10k_ml_ops.h |   3 +
 drivers/ml/cnxk/mvtvm_ml_ops.c |   2 +
 3 files changed, 128 insertions(+)

diff --git a/drivers/ml/cnxk/cn10k_ml_ops.c b/drivers/ml/cnxk/cn10k_ml_ops.c
index f70383b128..23e98b96c5 100644
--- a/drivers/ml/cnxk/cn10k_ml_ops.c
+++ b/drivers/ml/cnxk/cn10k_ml_ops.c
@@ -1399,3 +1399,126 @@ cn10k_ml_inference_sync(void *device, uint16_t index, void *input, void *output,
 error_enqueue:
 	return ret;
 }
+
+int
+cn10k_ml_io_alloc(void *device, uint16_t model_id, const char *layer_name, uint64_t **input_qbuffer,
+		  uint64_t **output_qbuffer)
+{
+	struct cnxk_ml_dev *cnxk_mldev;
+	struct cnxk_ml_model *model;
+	struct cnxk_ml_layer *layer;
+
+	char str[RTE_MEMZONE_NAMESIZE];
+	const struct plt_memzone *mz;
+	uint16_t layer_id = 0;
+	uint64_t output_size;
+	uint64_t input_size;
+
+#ifndef RTE_MLDEV_CNXK_ENABLE_MVTVM
+	PLT_SET_USED(layer_name);
+#endif
+
+	cnxk_mldev = (struct cnxk_ml_dev *)device;
+	if (cnxk_mldev == NULL) {
+		plt_err("Invalid device = %p", cnxk_mldev);
+		return -EINVAL;
+	}
+
+	model = cnxk_mldev->mldev->data->models[model_id];
+	if (model == NULL) {
+		plt_err("Invalid model_id = %u", model_id);
+		return -EINVAL;
+	}
+
+#ifdef RTE_MLDEV_CNXK_ENABLE_MVTVM
+	if (model->type == ML_CNXK_MODEL_TYPE_TVM) {
+		for (layer_id = 0; layer_id < model->mvtvm.metadata.model.nb_layers; layer_id++) {
+			if (strcmp(model->layer[layer_id].name, layer_name) == 0)
+				break;
+		}
+
+		if (layer_id == model->mvtvm.metadata.model.nb_layers) {
+			plt_err("Invalid layer name: %s", layer_name);
+			return -EINVAL;
+		}
+
+		if (model->layer[layer_id].type != ML_CNXK_LAYER_TYPE_MRVL) {
+			plt_err("Invalid layer name / type: %s", layer_name);
+			return -EINVAL;
+		}
+	}
+#endif
+
+	layer = &model->layer[layer_id];
+	input_size = PLT_ALIGN_CEIL(layer->info.total_input_sz_q, ML_CN10K_ALIGN_SIZE);
+	output_size = PLT_ALIGN_CEIL(layer->info.total_output_sz_q, ML_CN10K_ALIGN_SIZE);
+
+	sprintf(str, "cn10k_ml_io_mz_%u_%u", model_id, layer_id);
+	mz = plt_memzone_reserve_aligned(str, input_size + output_size, 0, ML_CN10K_ALIGN_SIZE);
+	if (mz == NULL) {
+		plt_err("io_alloc failed: Unable to allocate memory: model_id = %u, layer_name = %s",
+			model_id, layer_name);
+		return -ENOMEM;
+	}
+
+	*input_qbuffer = mz->addr;
+	*output_qbuffer = PLT_PTR_ADD(mz->addr, input_size);
+
+	return 0;
+}
+
+int
+cn10k_ml_io_free(void *device, uint16_t model_id, const char *layer_name)
+{
+	struct cnxk_ml_dev *cnxk_mldev;
+	struct cnxk_ml_model *model;
+
+	char str[RTE_MEMZONE_NAMESIZE];
+	const struct plt_memzone *mz;
+	uint16_t layer_id = 0;
+
+#ifndef RTE_MLDEV_CNXK_ENABLE_MVTVM
+	PLT_SET_USED(layer_name);
+#endif
+
+	cnxk_mldev = (struct cnxk_ml_dev *)device;
+	if (cnxk_mldev == NULL) {
+		plt_err("Invalid device = %p", cnxk_mldev);
+		return -EINVAL;
+	}
+
+	model = cnxk_mldev->mldev->data->models[model_id];
+	if (model == NULL) {
+		plt_err("Invalid model_id = %u", model_id);
+		return -EINVAL;
+	}
+
+#ifdef RTE_MLDEV_CNXK_ENABLE_MVTVM
+	if (model->type == ML_CNXK_MODEL_TYPE_TVM) {
+		for (layer_id = 0; layer_id < model->mvtvm.metadata.model.nb_layers; layer_id++) {
+			if (strcmp(model->layer[layer_id].name, layer_name) == 0)
+				break;
+		}
+
+		if (layer_id == model->mvtvm.metadata.model.nb_layers) {
+			plt_err("Invalid layer name: %s", layer_name);
+			return -EINVAL;
+		}
+
+		if (model->layer[layer_id].type != ML_CNXK_LAYER_TYPE_MRVL) {
+			plt_err("Invalid layer name / type: %s", layer_name);
+			return -EINVAL;
+		}
+	}
+#endif
+
+	sprintf(str, "cn10k_ml_io_mz_%u_%u", model_id, layer_id);
+	mz = plt_memzone_lookup(str);
+	if (mz == NULL) {
+		plt_err("io_free failed: Memzone not found: model_id = %u, layer_name = %s",
+			model_id, layer_name);
+		return -EINVAL;
+	}
+
+	return plt_memzone_free(mz);
+}
diff --git a/drivers/ml/cnxk/cn10k_ml_ops.h b/drivers/ml/cnxk/cn10k_ml_ops.h
index 3e75cae65a..055651eaa2 100644
--- a/drivers/ml/cnxk/cn10k_ml_ops.h
+++ b/drivers/ml/cnxk/cn10k_ml_ops.h
@@ -328,5 +328,8 @@ int cn10k_ml_layer_load(void *device, uint16_t model_id, const char *layer_name,
 int cn10k_ml_layer_unload(void *device, uint16_t model_id, const char *layer_name);
 int cn10k_ml_layer_start(void *device, uint16_t model_id, const char *layer_name);
 int cn10k_ml_layer_stop(void *device, uint16_t model_id, const char *layer_name);
+int cn10k_ml_io_alloc(void *device, uint16_t model_id, const char *layer_name,
+		      uint64_t **input_qbuffer, uint64_t **output_qbuffer);
+int cn10k_ml_io_free(void *device, uint16_t model_id, const char *layer_name);
 
 #endif /* _CN10K_ML_OPS_H_ */
diff --git a/drivers/ml/cnxk/mvtvm_ml_ops.c b/drivers/ml/cnxk/mvtvm_ml_ops.c
index d4518412be..a41ba4d343 100644
--- a/drivers/ml/cnxk/mvtvm_ml_ops.c
+++ b/drivers/ml/cnxk/mvtvm_ml_ops.c
@@ -164,6 +164,8 @@ mvtvm_ml_model_load(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_model_params *
 		callback = &model->mvtvm.cb;
 		callback->tvmrt_glow_layer_load = cn10k_ml_layer_load;
 		callback->tvmrt_glow_layer_unload = cn10k_ml_layer_unload;
+		callback->tvmrt_io_alloc = cn10k_ml_io_alloc;
+		callback->tvmrt_io_free = cn10k_ml_io_free;
 	} else {
 		callback = NULL;
 	}
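
Note (illustration, not part of the patch): both callbacks key off the same memzone name, so io_free() looks up exactly what io_alloc() reserved. A standalone sketch of the layout arithmetic follows, in plain C with no driver headers; EXAMPLE_ALIGN is a made-up stand-in for ML_CN10K_ALIGN_SIZE, the sizes are example values, and ALIGN_CEIL mirrors PLT_ALIGN_CEIL.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define EXAMPLE_ALIGN 128ULL /* stand-in value for ML_CN10K_ALIGN_SIZE */
#define ALIGN_CEIL(v, a) (((v) + (a) - 1) / (a) * (a)) /* mirrors PLT_ALIGN_CEIL */

int
main(void)
{
	char name[64];
	uint16_t model_id = 0, layer_id = 0;
	uint64_t total_input_sz_q = 1000;  /* example quantized input size */
	uint64_t total_output_sz_q = 500;  /* example quantized output size */
	uint64_t input_size, output_size;

	/* Sizes are rounded up individually so both buffers stay aligned. */
	input_size = ALIGN_CEIL(total_input_sz_q, EXAMPLE_ALIGN);
	output_size = ALIGN_CEIL(total_output_sz_q, EXAMPLE_ALIGN);

	/* Same name is used by io_alloc (reserve) and io_free (lookup + free). */
	snprintf(name, sizeof(name), "cn10k_ml_io_mz_%u_%u", model_id, layer_id);

	/* One reservation backs both buffers: input at offset 0, output right after. */
	printf("%s: reserve %" PRIu64 " bytes (input @0, output @%" PRIu64 ")\n",
	       name, input_size + output_size, input_size);

	return 0;
}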