From patchwork Thu May 23 08:13:34 2019
X-Patchwork-Submitter: Jerin Jacob Kollanukkaran
X-Patchwork-Id: 53659
Date: Thu, 23 May 2019 13:43:34 +0530
Message-ID: <20190523081339.56348-23-jerinj@marvell.com>
In-Reply-To: <20190523081339.56348-1-jerinj@marvell.com>
References: <20190523081339.56348-1-jerinj@marvell.com>
Subject: [dpdk-dev] [PATCH v1 22/27] mempool/octeontx2: add mempool free op

From: Jerin Jacob

The DPDK mempool free operation frees the HW AURA and POOL reserved in
the alloc operation. In addition, it frees all the memory resources
allocated during the mempool alloc operation.
Cc: Olivier Matz

Signed-off-by: Jerin Jacob
---
 drivers/mempool/octeontx2/otx2_mempool_ops.c | 104 +++++++++++++++++++
 1 file changed, 104 insertions(+)

diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c b/drivers/mempool/octeontx2/otx2_mempool_ops.c
index 0e7b7a77c..94570319a 100644
--- a/drivers/mempool/octeontx2/otx2_mempool_ops.c
+++ b/drivers/mempool/octeontx2/otx2_mempool_ops.c
@@ -47,6 +47,62 @@ npa_lf_aura_pool_init(struct otx2_mbox *mbox, uint32_t aura_id,
 	return NPA_LF_ERR_AURA_POOL_INIT;
 }
 
+static int
+npa_lf_aura_pool_fini(struct otx2_mbox *mbox,
+		      uint32_t aura_id,
+		      uint64_t aura_handle)
+{
+	struct npa_aq_enq_req *aura_req, *pool_req;
+	struct npa_aq_enq_rsp *aura_rsp, *pool_rsp;
+	struct otx2_mbox_dev *mdev = &mbox->dev[0];
+	struct ndc_sync_op *ndc_req;
+	int rc, off;
+
+	/* Procedure for disabling an aura/pool */
+	rte_delay_us(10);
+	npa_lf_aura_op_alloc(aura_handle, 0);
+
+	pool_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+	pool_req->aura_id = aura_id;
+	pool_req->ctype = NPA_AQ_CTYPE_POOL;
+	pool_req->op = NPA_AQ_INSTOP_WRITE;
+	pool_req->pool.ena = 0;
+	pool_req->pool_mask.ena = ~pool_req->pool_mask.ena;
+
+	aura_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+	aura_req->aura_id = aura_id;
+	aura_req->ctype = NPA_AQ_CTYPE_AURA;
+	aura_req->op = NPA_AQ_INSTOP_WRITE;
+	aura_req->aura.ena = 0;
+	aura_req->aura_mask.ena = ~aura_req->aura_mask.ena;
+
+	otx2_mbox_msg_send(mbox, 0);
+	rc = otx2_mbox_wait_for_rsp(mbox, 0);
+	if (rc < 0)
+		return rc;
+
+	off = mbox->rx_start +
+		RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
+	pool_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off);
+
+	off = mbox->rx_start + pool_rsp->hdr.next_msgoff;
+	aura_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off);
+
+	if (rc != 2 || aura_rsp->hdr.rc != 0 || pool_rsp->hdr.rc != 0)
+		return NPA_LF_ERR_AURA_POOL_FINI;
+
+	/* Sync NDC-NPA for LF */
+	ndc_req = otx2_mbox_alloc_msg_ndc_sync_op(mbox);
+	ndc_req->npa_lf_sync = 1;
+
+	rc = otx2_mbox_process(mbox);
+	if (rc) {
+		otx2_err("Error on NDC-NPA LF sync, rc %d", rc);
+		return NPA_LF_ERR_AURA_POOL_FINI;
+	}
+	return 0;
+}
+
 static inline char*
 npa_lf_stack_memzone_name(struct otx2_npa_lf *lf, int pool_id, char *name)
 {
@@ -65,6 +121,18 @@ npa_lf_stack_dma_alloc(struct otx2_npa_lf *lf, char *name,
 				 RTE_MEMZONE_IOVA_CONTIG, OTX2_ALIGN);
 }
 
+static inline int
+npa_lf_stack_dma_free(struct otx2_npa_lf *lf, char *name, int pool_id)
+{
+	const struct rte_memzone *mz;
+
+	mz = rte_memzone_lookup(npa_lf_stack_memzone_name(lf, pool_id, name));
+	if (mz == NULL)
+		return -EINVAL;
+
+	return rte_memzone_free(mz);
+}
+
 static inline int
 bitmap_ctzll(uint64_t slab)
 {
@@ -179,6 +247,24 @@ npa_lf_aura_pool_pair_alloc(struct otx2_npa_lf *lf, const uint32_t block_size,
 	return rc;
 }
 
+static int
+npa_lf_aura_pool_pair_free(struct otx2_npa_lf *lf, uint64_t aura_handle)
+{
+	char name[RTE_MEMZONE_NAMESIZE];
+	int aura_id, pool_id, rc;
+
+	if (!lf || !aura_handle)
+		return NPA_LF_ERR_PARAM;
+
+	aura_id = pool_id = npa_lf_aura_handle_to_aura(aura_handle);
+	rc = npa_lf_aura_pool_fini(lf->mbox, aura_id, aura_handle);
+	rc |= npa_lf_stack_dma_free(lf, name, pool_id);
+
+	rte_bitmap_set(lf->npa_bmp, aura_id);
+
+	return rc;
+}
+
 static int
 otx2_npa_alloc(struct rte_mempool *mp)
 {
@@ -238,9 +324,27 @@ otx2_npa_alloc(struct rte_mempool *mp)
 	return rc;
 }
 
+static void
+otx2_npa_free(struct rte_mempool *mp)
+{
+	struct otx2_npa_lf *lf = otx2_npa_lf_obj_get();
+	int rc = 0;
+
+	otx2_npa_dbg("lf=%p aura_handle=0x%"PRIx64, lf, mp->pool_id);
+	if (lf != NULL)
+		rc = npa_lf_aura_pool_pair_free(lf, mp->pool_id);
+
+	if (rc)
+		otx2_err("Failed to free pool or aura rc=%d", rc);
+
+	/* Release the reference of npalf */
+	otx2_npa_lf_fini();
+}
+
 static struct rte_mempool_ops otx2_npa_ops = {
 	.name = "octeontx2_npa",
 	.alloc = otx2_npa_alloc,
+	.free = otx2_npa_free,
 };
 
 MEMPOOL_REGISTER_OPS(otx2_npa_ops);
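For reference, the sketch below is not part of the patch; it only illustrates how an
application would reach the new .free op through the generic rte_mempool API once the
"octeontx2_npa" ops are registered. The helper name and the element count, element
size and cache size are illustrative assumptions, and EAL is assumed to be initialized.

#include <errno.h>

#include <rte_errno.h>
#include <rte_memory.h>
#include <rte_mempool.h>

static int
create_and_destroy_npa_pool(void)
{
	struct rte_mempool *mp;

	/* Create an empty mempool; sizes here are arbitrary example values. */
	mp = rte_mempool_create_empty("npa_test", 4096, 2048,
				      256, 0, SOCKET_ID_ANY, 0);
	if (mp == NULL)
		return -rte_errno;

	/* Bind the pool to the octeontx2_npa mempool ops added by this driver. */
	if (rte_mempool_set_ops_byname(mp, "octeontx2_npa", NULL) != 0) {
		rte_mempool_free(mp);
		return -EINVAL;
	}

	/* Populating the pool invokes the .alloc op (otx2_npa_alloc). */
	if (rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return -ENOMEM;
	}

	/*
	 * rte_mempool_free() dispatches to the new .free op (otx2_npa_free),
	 * which disables and frees the HW AURA/POOL pair, releases the stack
	 * memzone reserved during alloc, and drops the NPA LF reference.
	 */
	rte_mempool_free(mp);
	return 0;
}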