From patchwork Tue Oct 26 04:12:59 2021
X-Patchwork-Submitter: Radha Chintakuntla
X-Patchwork-Id: 102834
X-Patchwork-Delegate: thomas@monjalon.net
From: Radha Mohan Chintakuntla <radhac@marvell.com>
Date: Mon, 25 Oct 2021 21:12:59 -0700
Message-ID: <20211026041300.28924-3-radhac@marvell.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20211026041300.28924-1-radhac@marvell.com>
References: <20211026041300.28924-1-radhac@marvell.com>
Subject: [dpdk-dev] [PATCH 3/4] dma/cnxk: add dma channel operations

Add functions for the dmadev vchan setup and DMA operations.
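As an illustration (not part of the diff below), here is a minimal sketch of
how an application could exercise these operations through the generic dmadev
API. The helper name, device id, vchan 0, the nb_desc value and the busy-wait
loop are assumptions made for the example, not something this patch provides:

  #include <stdbool.h>
  #include <rte_dmadev.h>
  #include <rte_pause.h>

  static int
  dma_copy_one(int16_t dev_id, rte_iova_t src, rte_iova_t dst, uint32_t len)
  {
  	struct rte_dma_conf dev_conf = { .nb_vchans = 1 };
  	struct rte_dma_vchan_conf qconf = {
  		.direction = RTE_DMA_DIR_MEM_TO_MEM,
  		.nb_desc = 15,	/* assumed; DPI_MAX_DESC in this driver */
  	};
  	uint16_t last_idx;
  	bool has_error = false;

  	if (rte_dma_configure(dev_id, &dev_conf) < 0 ||
  	    rte_dma_vchan_setup(dev_id, 0, &qconf) < 0 ||
  	    rte_dma_start(dev_id) < 0)
  		return -1;

  	/* Enqueue the copy and ring the doorbell in the same call. */
  	if (rte_dma_copy(dev_id, 0, src, dst, len,
  			 RTE_DMA_OP_FLAG_SUBMIT) < 0)
  		return -1;

  	/* Busy-poll until the single completion is reported. */
  	while (rte_dma_completed(dev_id, 0, 1, &last_idx, &has_error) == 0)
  		rte_pause();

  	return has_error ? -1 : 0;
  }

RTE_DMA_OP_FLAG_SUBMIT makes the enqueue ring the DPI doorbell immediately;
without it, the work is only made visible to hardware by a later
rte_dma_submit() call, which lands in cnxk_dmadev_submit() below.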
Signed-off-by: Radha Mohan Chintakuntla <radhac@marvell.com>
---
 drivers/dma/cnxk/cnxk_dmadev.c | 322 +++++++++++++++++++++++++++++++++
 drivers/dma/cnxk/cnxk_dmadev.h |  53 ++++++
 drivers/dma/cnxk/version.map   |   3 +
 3 files changed, 378 insertions(+)
 create mode 100644 drivers/dma/cnxk/version.map

diff --git a/drivers/dma/cnxk/cnxk_dmadev.c b/drivers/dma/cnxk/cnxk_dmadev.c
index 620766743d..8434579aa2 100644
--- a/drivers/dma/cnxk/cnxk_dmadev.c
+++ b/drivers/dma/cnxk/cnxk_dmadev.c
@@ -18,6 +18,322 @@
 #include
 #include
 
+static int
+cnxk_dmadev_info_get(const struct rte_dma_dev *dev,
+		     struct rte_dma_info *dev_info, uint32_t size)
+{
+	RTE_SET_USED(dev);
+	RTE_SET_USED(size);
+
+	dev_info->max_vchans = 1;
+	dev_info->nb_vchans = 1;
+	dev_info->dev_capa = RTE_DMA_CAPA_MEM_TO_MEM |
+			     RTE_DMA_CAPA_MEM_TO_DEV |
+			     RTE_DMA_CAPA_DEV_TO_MEM |
+			     RTE_DMA_CAPA_OPS_COPY;
+	dev_info->max_desc = DPI_MAX_DESC;
+	dev_info->min_desc = 1;
+	dev_info->max_sges = DPI_MAX_POINTER;
+
+	return 0;
+}
+
+static int
+cnxk_dmadev_configure(struct rte_dma_dev *dev,
+		      const struct rte_dma_conf *conf, uint32_t conf_sz)
+{
+	struct cnxk_dpi_vf_s *dpivf = NULL;
+	int rc = 0;
+
+	RTE_SET_USED(conf);
+	RTE_SET_USED(conf_sz);
+
+	dpivf = dev->fp_obj->dev_private;
+	rc = roc_dpi_queue_configure(&dpivf->rdpi);
+	if (rc < 0)
+		plt_err("DMA queue configure failed err = %d", rc);
+
+	return rc;
+}
+
+static int
+cnxk_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan,
+			const struct rte_dma_vchan_conf *conf,
+			uint32_t conf_sz)
+{
+	struct cnxk_dpi_vf_s *dpivf = dev->fp_obj->dev_private;
+	struct cnxk_dpi_compl_s *comp_data;
+	int i;
+
+	RTE_SET_USED(vchan);
+	RTE_SET_USED(conf_sz);
+
+	switch (conf->direction) {
+	case RTE_DMA_DIR_DEV_TO_MEM:
+		dpivf->conf.direction = DPI_XTYPE_INBOUND;
+		dpivf->conf.src_port = conf->src_port.pcie.coreid;
+		dpivf->conf.dst_port = 0;
+		break;
+	case RTE_DMA_DIR_MEM_TO_DEV:
+		dpivf->conf.direction = DPI_XTYPE_OUTBOUND;
+		dpivf->conf.src_port = 0;
+		dpivf->conf.dst_port = conf->dst_port.pcie.coreid;
+		break;
+	case RTE_DMA_DIR_MEM_TO_MEM:
+		dpivf->conf.direction = DPI_XTYPE_INTERNAL_ONLY;
+		dpivf->conf.src_port = 0;
+		dpivf->conf.dst_port = 0;
+		break;
+	case RTE_DMA_DIR_DEV_TO_DEV:
+		dpivf->conf.direction = DPI_XTYPE_EXTERNAL_ONLY;
+		dpivf->conf.src_port = conf->src_port.pcie.coreid;
+		dpivf->conf.dst_port = conf->dst_port.pcie.coreid;
+		break;
+	}
+
+	for (i = 0; i < conf->nb_desc; i++) {
+		comp_data = rte_zmalloc(NULL, sizeof(*comp_data), 0);
+		if (comp_data == NULL)
+			return -ENOMEM;
+		dpivf->conf.c_desc.compl_ptr[i] = comp_data;
+	}
+	dpivf->conf.c_desc.max_cnt = DPI_MAX_DESC;
+	dpivf->conf.c_desc.head = 0;
+	dpivf->conf.c_desc.tail = 0;
+
+	return 0;
+}
+
+static int
+cnxk_dmadev_start(struct rte_dma_dev *dev)
+{
+	struct cnxk_dpi_vf_s *dpivf = dev->fp_obj->dev_private;
+
+	roc_dpi_queue_start(&dpivf->rdpi);
+
+	return 0;
+}
+
+static int
+cnxk_dmadev_stop(struct rte_dma_dev *dev)
+{
+	struct cnxk_dpi_vf_s *dpivf = dev->fp_obj->dev_private;
+
+	roc_dpi_queue_stop(&dpivf->rdpi);
+
+	return 0;
+}
+
+static int
+cnxk_dmadev_close(struct rte_dma_dev *dev)
+{
+	struct cnxk_dpi_vf_s *dpivf = dev->fp_obj->dev_private;
+
+	roc_dpi_queue_stop(&dpivf->rdpi);
+	roc_dpi_dev_fini(&dpivf->rdpi);
+
+	return 0;
+}
+
+static inline int
+__dpi_queue_write(struct roc_dpi *dpi, uint64_t *cmds, int cmd_count)
+{
+	uint64_t *ptr = dpi->chunk_base;
+
+	if ((cmd_count < DPI_MIN_CMD_SIZE) || (cmd_count > DPI_MAX_CMD_SIZE) ||
+	    cmds == NULL)
+		return -EINVAL;
+
+	/*
+	 * Normally there is plenty of room in the current buffer for the
command + */ + if (dpi->chunk_head + cmd_count < dpi->pool_size_m1) { + ptr += dpi->chunk_head; + dpi->chunk_head += cmd_count; + while (cmd_count--) + *ptr++ = *cmds++; + } else { + int count; + uint64_t *new_buff = dpi->chunk_next; + + dpi->chunk_next = + (void *)roc_npa_aura_op_alloc(dpi->aura_handle, 0); + if (!dpi->chunk_next) { + plt_err("Failed to alloc next buffer from NPA"); + return -ENOMEM; + } + + /* + * Figure out how many cmd words will fit in this buffer. + * One location will be needed for the next buffer pointer. + */ + count = dpi->pool_size_m1 - dpi->chunk_head; + ptr += dpi->chunk_head; + cmd_count -= count; + while (count--) + *ptr++ = *cmds++; + + /* + * chunk next ptr is 2 DWORDS + * second DWORD is reserved. + */ + *ptr++ = (uint64_t)new_buff; + *ptr = 0; + + /* + * The current buffer is full and has a link to the next + * buffers. Time to write the rest of the commands into the new + * buffer. + */ + dpi->chunk_base = new_buff; + dpi->chunk_head = cmd_count; + ptr = new_buff; + while (cmd_count--) + *ptr++ = *cmds++; + + /* queue index may be greater than pool size */ + if (dpi->chunk_head >= dpi->pool_size_m1) { + new_buff = dpi->chunk_next; + dpi->chunk_next = + (void *)roc_npa_aura_op_alloc(dpi->aura_handle, + 0); + if (!dpi->chunk_next) { + plt_err("Failed to alloc next buffer from NPA"); + return -ENOMEM; + } + /* Write next buffer address */ + *ptr = (uint64_t)new_buff; + dpi->chunk_base = new_buff; + dpi->chunk_head = 0; + } + } + + return 0; +} + +static int +cnxk_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, + rte_iova_t dst, uint32_t length, uint64_t flags) +{ + uint64_t cmd[DPI_MAX_CMD_SIZE] = {0}; + union dpi_instr_hdr_s *header = (union dpi_instr_hdr_s *)&cmd[0]; + rte_iova_t fptr, lptr; + struct cnxk_dpi_vf_s *dpivf = dev_private; + struct cnxk_dpi_compl_s *comp_ptr; + int num_words = 0; + int rc; + + RTE_SET_USED(vchan); + + header->s.xtype = dpivf->conf.direction; + header->s.pt = DPI_HDR_PT_ZBW_CA; + comp_ptr = dpivf->conf.c_desc.compl_ptr[dpivf->conf.c_desc.tail]; + comp_ptr->cdata = DPI_REQ_CDATA; + header->s.ptr = (uint64_t)comp_ptr; + STRM_INC(dpivf->conf.c_desc); + + /* pvfe should be set for inbound and outbound only */ + if (header->s.xtype <= 1) + header->s.pvfe = 1; + num_words += 4; + + header->s.nfst = 1; + header->s.nlst = 1; + /* + * For inbound case, src pointers are last pointers. + * For all other cases, src pointers are first pointers. 
+	 */
+	if (header->s.xtype == DPI_XTYPE_INBOUND) {
+		fptr = dst;
+		lptr = src;
+		header->s.fport = dpivf->conf.dst_port & 0x3;
+		header->s.lport = dpivf->conf.src_port & 0x3;
+	} else {
+		fptr = src;
+		lptr = dst;
+		header->s.fport = dpivf->conf.src_port & 0x3;
+		header->s.lport = dpivf->conf.dst_port & 0x3;
+	}
+
+	cmd[num_words++] = length;
+	cmd[num_words++] = fptr;
+	cmd[num_words++] = length;
+	cmd[num_words++] = lptr;
+
+	rc = __dpi_queue_write(&dpivf->rdpi, cmd, num_words);
+	if (!rc) {
+		if (flags & RTE_DMA_OP_FLAG_SUBMIT) {
+			rte_wmb();
+			plt_write64(num_words,
+				    dpivf->rdpi.rbase + DPI_VDMA_DBELL);
+		}
+		dpivf->num_words = num_words;
+	}
+
+	return rc;
+}
+
+static uint16_t
+cnxk_dmadev_completed(void *dev_private, uint16_t vchan,
+		      const uint16_t nb_cpls, uint16_t *last_idx,
+		      bool *has_error)
+{
+	struct cnxk_dpi_vf_s *dpivf = dev_private;
+	int cnt;
+
+	RTE_SET_USED(vchan);
+	RTE_SET_USED(last_idx);
+	RTE_SET_USED(has_error);
+
+	for (cnt = 0; cnt < nb_cpls; cnt++) {
+		struct cnxk_dpi_compl_s *comp_ptr =
+			dpivf->conf.c_desc.compl_ptr[cnt];
+
+		if (comp_ptr->cdata)
+			break;
+	}
+
+	dpivf->conf.c_desc.tail = cnt;
+
+	return cnt;
+}
+
+static uint16_t
+cnxk_dmadev_completed_status(void *dev_private, uint16_t vchan,
+			     const uint16_t nb_cpls, uint16_t *last_idx,
+			     enum rte_dma_status_code *status)
+{
+	struct cnxk_dpi_vf_s *dpivf = dev_private;
+	int cnt;
+
+	RTE_SET_USED(vchan);
+	RTE_SET_USED(last_idx);
+
+	for (cnt = 0; cnt < nb_cpls; cnt++) {
+		struct cnxk_dpi_compl_s *comp_ptr =
+			dpivf->conf.c_desc.compl_ptr[cnt];
+
+		status[cnt] = comp_ptr->cdata;
+	}
+
+	dpivf->conf.c_desc.tail = 0;
+
+	return cnt;
+}
+
+static int
+cnxk_dmadev_submit(void *dev_private, uint16_t vchan __rte_unused)
+{
+	struct cnxk_dpi_vf_s *dpivf = dev_private;
+
+	rte_wmb();
+	plt_write64(dpivf->num_words, dpivf->rdpi.rbase + DPI_VDMA_DBELL);
+
+	return 0;
+}
+
+static const struct rte_dma_dev_ops cnxk_dmadev_ops = {
+	.dev_info_get = cnxk_dmadev_info_get,
+	.dev_configure = cnxk_dmadev_configure,
+	.dev_start = cnxk_dmadev_start,
+	.dev_stop = cnxk_dmadev_stop,
+	.vchan_setup = cnxk_dmadev_vchan_setup,
+	.dev_close = cnxk_dmadev_close,
+};
+
 static int
 cnxk_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused,
 		  struct rte_pci_device *pci_dev)
@@ -50,6 +366,12 @@ cnxk_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	dmadev->device = &pci_dev->device;
 	dmadev->fp_obj->dev_private = dpivf;
+	dmadev->dev_ops = &cnxk_dmadev_ops;
+
+	dmadev->fp_obj->copy = cnxk_dmadev_copy;
+	dmadev->fp_obj->submit = cnxk_dmadev_submit;
+	dmadev->fp_obj->completed = cnxk_dmadev_completed;
+	dmadev->fp_obj->completed_status = cnxk_dmadev_completed_status;
 
 	rdpi = &dpivf->rdpi;

diff --git a/drivers/dma/cnxk/cnxk_dmadev.h b/drivers/dma/cnxk/cnxk_dmadev.h
index 9e0bb7b2ce..ce301a5945 100644
--- a/drivers/dma/cnxk/cnxk_dmadev.h
+++ b/drivers/dma/cnxk/cnxk_dmadev.h
@@ -4,8 +4,61 @@
 #ifndef _CNXK_DMADEV_H_
 #define _CNXK_DMADEV_H_
 
+#define DPI_MAX_POINTER		15
+#define DPI_QUEUE_STOP		0x0
+#define DPI_QUEUE_START		0x1
+#define STRM_INC(s)		((s).tail = ((s).tail + 1) % (s).max_cnt)
+#define DPI_MAX_DESC		DPI_MAX_POINTER
+
+/* DPI transfer type, pointer type in DPI_DMA_INSTR_HDR_S[XTYPE] */
+#define DPI_XTYPE_OUTBOUND		(0)
+#define DPI_XTYPE_INBOUND		(1)
+#define DPI_XTYPE_INTERNAL_ONLY		(2)
+#define DPI_XTYPE_EXTERNAL_ONLY		(3)
+#define DPI_XTYPE_MASK			0x3
+#define DPI_HDR_PT_ZBW_CA		0x0
+#define DPI_HDR_PT_ZBW_NC		0x1
+#define DPI_HDR_PT_WQP			0x2
+#define DPI_HDR_PT_WQP_NOSTATUS		0x0
+#define DPI_HDR_PT_WQP_STATUSCA		0x1
+#define DPI_HDR_PT_WQP_STATUSNC		0x3
+#define DPI_HDR_PT_CNT			0x3
+#define DPI_HDR_PT_MASK			0x3
+#define DPI_W0_TT_MASK			0x3
+#define DPI_W0_GRP_MASK			0x3FF
+
+/*
+ * Completion data is set to 0xFF when a request is submitted; upon
+ * successful completion the engine overwrites it with the completion
+ * status.
+ */
+#define DPI_REQ_CDATA		0xFF
+
+#define DPI_MIN_CMD_SIZE	8
+#define DPI_MAX_CMD_SIZE	64
+
+struct cnxk_dpi_compl_s {
+	uint64_t cdata;
+	void *cb_data;
+};
+
+struct cnxk_dpi_cdesc_data_s {
+	struct cnxk_dpi_compl_s *compl_ptr[DPI_MAX_DESC];
+	uint16_t max_cnt;
+	uint16_t head;
+	uint16_t tail;
+};
+
+struct cnxk_dpi_queue_conf {
+	uint8_t direction;
+	uint8_t src_port;
+	uint8_t dst_port;
+	uint64_t comp_ptr;
+	struct cnxk_dpi_cdesc_data_s c_desc;
+};
+
 struct cnxk_dpi_vf_s {
 	struct roc_dpi rdpi;
+	struct cnxk_dpi_queue_conf conf;
+	uint32_t num_words;
 };
 
 #endif

diff --git a/drivers/dma/cnxk/version.map b/drivers/dma/cnxk/version.map
new file mode 100644
index 0000000000..4a76d1d52d
--- /dev/null
+++ b/drivers/dma/cnxk/version.map
@@ -0,0 +1,3 @@
+DPDK_21 {
+	local: *;
+};