From patchwork Tue Mar 24 16:53:40 2020
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 67084
Subject: [dpdk-dev] [dpdk-dev v2] [PATCH 1/2] mempool/octeontx2: add devargs to lock ctx in cache
Date: Tue, 24 Mar 2020 22:23:40 +0530
Message-ID: <20200324165342.2055-1-pbhagavatula@marvell.com>
In-Reply-To: <20200306163524.1650-1-pbhagavatula@marvell.com>
References: <20200306163524.1650-1-pbhagavatula@marvell.com>
To: Pavan Nikhilesh, John McNamara, Marko Kovacevic, Nithin Dabilpuram, Vamsi Attunuru, Kiran Kumar K
List-Id: DPDK patches and discussions

From: Pavan Nikhilesh

Add device arguments to lock NPA aura and pool contexts in the NDC cache.
The devargs take a hexadecimal bitmask where each bit represents the
corresponding aura/pool id.
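[Editorial note] For the npa_lock_mask=0xf example shown below, the following
minimal sketch (not part of the patch) illustrates how the driver interprets
the mask: it mirrors the base-16 strtoull() in parse_npa_lock_mask() and the
BIT_ULL(aura_id) checks added later in this patch; the standalone main()
harness is purely illustrative.

/* Illustrative only: shows how a hexadecimal npa_lock_mask value selects
 * which aura/pool ids get their contexts locked in NDC.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define BIT_ULL(nr) (1ULL << (nr))

int main(void)
{
	const char *devarg_value = "0xf"; /* e.g. npa_lock_mask=0xf */
	uint64_t npa_lock_mask = strtoull(devarg_value, NULL, 16);
	uint32_t aura_id;

	/* With 0xf, bits 0-3 are set, so aura/pool ids 0-3 are locked. */
	for (aura_id = 0; aura_id < 8; aura_id++)
		printf("aura/pool %u: %s\n", aura_id,
		       (npa_lock_mask & BIT_ULL(aura_id)) ?
		       "lock ctx in NDC" : "leave unlocked");

	return 0;
}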
Example: -w 0002:02:00.0,npa_lock_mask=0xf // Lock first 4 aura/pool ctx Signed-off-by: Pavan Nikhilesh --- v2 Changes: - Fix formatting in doc(Andrzej). - Add error returns for all failures(Andrzej). - Fix devargs parameter list(Andrzej). doc/guides/eventdevs/octeontx2.rst | 10 +++ doc/guides/mempool/octeontx2.rst | 10 +++ doc/guides/nics/octeontx2.rst | 12 +++ drivers/common/octeontx2/Makefile | 2 +- drivers/common/octeontx2/meson.build | 2 +- drivers/common/octeontx2/otx2_common.c | 34 +++++++++ drivers/common/octeontx2/otx2_common.h | 5 ++ .../rte_common_octeontx2_version.map | 7 ++ drivers/event/octeontx2/otx2_evdev.c | 5 +- drivers/mempool/octeontx2/otx2_mempool.c | 4 +- drivers/mempool/octeontx2/otx2_mempool_ops.c | 74 +++++++++++++++++++ drivers/net/octeontx2/otx2_ethdev_devargs.c | 4 +- 12 files changed, 163 insertions(+), 6 deletions(-) -- 2.17.1 diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst index d4b2515ce..6502f6415 100644 --- a/doc/guides/eventdevs/octeontx2.rst +++ b/doc/guides/eventdevs/octeontx2.rst @@ -148,6 +148,16 @@ Runtime Config Options -w 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0] +- ``Lock NPA contexts in NDC`` + + Lock NPA aura and pool contexts in NDC cache. + The device args take hexadecimal bitmask where each bit represent the + corresponding aura/pool id. + + For example:: + + -w 0002:0e:00.0,npa_lock_mask=0xf + Debugging Options ~~~~~~~~~~~~~~~~~ diff --git a/doc/guides/mempool/octeontx2.rst b/doc/guides/mempool/octeontx2.rst index 2c9a0953b..49b45a04e 100644 --- a/doc/guides/mempool/octeontx2.rst +++ b/doc/guides/mempool/octeontx2.rst @@ -61,6 +61,16 @@ Runtime Config Options provide ``max_pools`` parameter to the first PCIe device probed by the given application. +- ``Lock NPA contexts in NDC`` + + Lock NPA aura and pool contexts in NDC cache. + The device args take hexadecimal bitmask where each bit represent the + corresponding aura/pool id. + + For example:: + + -w 0002:02:00.0,npa_lock_mask=0xf + Debugging Options ~~~~~~~~~~~~~~~~~ diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst index 60187ec72..c2d87c9d0 100644 --- a/doc/guides/nics/octeontx2.rst +++ b/doc/guides/nics/octeontx2.rst @@ -194,6 +194,7 @@ Runtime Config Options Setting this flag to 1 to select the legacy mode. For example to select the legacy mode(RSS tag adder as XOR):: + -w 0002:02:00.0,tag_as_xor=1 - ``Max SPI for inbound inline IPsec`` (default ``1``) @@ -202,6 +203,7 @@ Runtime Config Options ``ipsec_in_max_spi`` ``devargs`` parameter. For example:: + -w 0002:02:00.0,ipsec_in_max_spi=128 With the above configuration, application can enable inline IPsec processing @@ -213,6 +215,16 @@ Runtime Config Options parameters to all the PCIe devices if application requires to configure on all the ethdev ports. +- ``Lock NPA contexts in NDC`` + + Lock NPA aura and pool contexts in NDC cache. + The device args take hexadecimal bitmask where each bit represent the + corresponding aura/pool id. 
+ + For example:: + + -w 0002:02:00.0,npa_lock_mask=0xf + Limitations ----------- diff --git a/drivers/common/octeontx2/Makefile b/drivers/common/octeontx2/Makefile index 48f033dc6..64c5e60e2 100644 --- a/drivers/common/octeontx2/Makefile +++ b/drivers/common/octeontx2/Makefile @@ -35,6 +35,6 @@ SRCS-y += otx2_common.c SRCS-y += otx2_sec_idev.c LDLIBS += -lrte_eal -LDLIBS += -lrte_ethdev +LDLIBS += -lrte_ethdev -lrte_kvargs include $(RTE_SDK)/mk/rte.lib.mk diff --git a/drivers/common/octeontx2/meson.build b/drivers/common/octeontx2/meson.build index cc2c26123..bc4917b8c 100644 --- a/drivers/common/octeontx2/meson.build +++ b/drivers/common/octeontx2/meson.build @@ -23,6 +23,6 @@ foreach flag: extra_flags endif endforeach -deps = ['eal', 'pci', 'ethdev'] +deps = ['eal', 'pci', 'ethdev', 'kvargs'] includes += include_directories('../../common/octeontx2', '../../mempool/octeontx2', '../../bus/pci') diff --git a/drivers/common/octeontx2/otx2_common.c b/drivers/common/octeontx2/otx2_common.c index 1a257cf07..5e7272f69 100644 --- a/drivers/common/octeontx2/otx2_common.c +++ b/drivers/common/octeontx2/otx2_common.c @@ -169,6 +169,40 @@ int otx2_npa_lf_obj_ref(void) return cnt ? 0 : -EINVAL; } +static int +parse_npa_lock_mask(const char *key, const char *value, void *extra_args) +{ + RTE_SET_USED(key); + uint64_t val; + + val = strtoull(value, NULL, 16); + + *(uint64_t *)extra_args = val; + + return 0; +} + +/* + * @internal + * Parse common device arguments + */ +void otx2_parse_common_devargs(struct rte_kvargs *kvlist) +{ + + struct otx2_idev_cfg *idev; + uint64_t npa_lock_mask = 0; + + idev = otx2_intra_dev_get_cfg(); + + if (idev == NULL) + return; + + rte_kvargs_process(kvlist, OTX2_NPA_LOCK_MASK, + &parse_npa_lock_mask, &npa_lock_mask); + + idev->npa_lock_mask = npa_lock_mask; +} + /** * @internal */ diff --git a/drivers/common/octeontx2/otx2_common.h b/drivers/common/octeontx2/otx2_common.h index bf5ea86b3..b3fdefe95 100644 --- a/drivers/common/octeontx2/otx2_common.h +++ b/drivers/common/octeontx2/otx2_common.h @@ -8,6 +8,7 @@ #include #include #include +#include #include #include #include @@ -49,6 +50,8 @@ (~0ULL >> (BITS_PER_LONG_LONG - 1 - (h)))) #endif +#define OTX2_NPA_LOCK_MASK "npa_lock_mask" + /* Compiler attributes */ #ifndef __hot #define __hot __attribute__((hot)) @@ -65,6 +68,7 @@ struct otx2_idev_cfg { rte_atomic16_t npa_refcnt; uint16_t npa_refcnt_u16; }; + uint64_t npa_lock_mask; }; struct otx2_idev_cfg *otx2_intra_dev_get_cfg(void); @@ -75,6 +79,7 @@ struct otx2_npa_lf *otx2_npa_lf_obj_get(void); void otx2_npa_set_defaults(struct otx2_idev_cfg *idev); int otx2_npa_lf_active(void *dev); int otx2_npa_lf_obj_ref(void); +void otx2_parse_common_devargs(struct rte_kvargs *kvlist); /* Log */ extern int otx2_logtype_base; diff --git a/drivers/common/octeontx2/rte_common_octeontx2_version.map b/drivers/common/octeontx2/rte_common_octeontx2_version.map index 8f2404bd9..e070e898c 100644 --- a/drivers/common/octeontx2/rte_common_octeontx2_version.map +++ b/drivers/common/octeontx2/rte_common_octeontx2_version.map @@ -45,6 +45,13 @@ DPDK_20.0.1 { otx2_sec_idev_tx_cpt_qp_put; } DPDK_20.0; +DPDK_20.0.2 { + global: + + otx2_parse_common_devargs; + +} DPDK_20.0; + EXPERIMENTAL { global: diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c index d20213d78..630073de5 100644 --- a/drivers/event/octeontx2/otx2_evdev.c +++ b/drivers/event/octeontx2/otx2_evdev.c @@ -1659,7 +1659,7 @@ sso_parse_devargs(struct otx2_sso_evdev *dev, struct rte_devargs *devargs) 
&single_ws); rte_kvargs_process(kvlist, OTX2_SSO_GGRP_QOS, &parse_sso_kvargs_dict, dev); - + otx2_parse_common_devargs(kvlist); dev->dual_ws = !single_ws; rte_kvargs_free(kvlist); } @@ -1821,4 +1821,5 @@ RTE_PMD_REGISTER_KMOD_DEP(event_octeontx2, "vfio-pci"); RTE_PMD_REGISTER_PARAM_STRING(event_octeontx2, OTX2_SSO_XAE_CNT "=" OTX2_SSO_SINGLE_WS "=1" OTX2_SSO_GGRP_QOS "=" - OTX2_SSO_SELFTEST "=1"); + OTX2_SSO_SELFTEST "=1" + OTX2_NPA_LOCK_MASK "=<1-65535>"); diff --git a/drivers/mempool/octeontx2/otx2_mempool.c b/drivers/mempool/octeontx2/otx2_mempool.c index 3a4a9425f..fb630fecf 100644 --- a/drivers/mempool/octeontx2/otx2_mempool.c +++ b/drivers/mempool/octeontx2/otx2_mempool.c @@ -191,6 +191,7 @@ otx2_parse_aura_size(struct rte_devargs *devargs) goto exit; rte_kvargs_process(kvlist, OTX2_MAX_POOLS, &parse_max_pools, &aura_sz); + otx2_parse_common_devargs(kvlist); rte_kvargs_free(kvlist); exit: return aura_sz; @@ -452,4 +453,5 @@ RTE_PMD_REGISTER_PCI(mempool_octeontx2, pci_npa); RTE_PMD_REGISTER_PCI_TABLE(mempool_octeontx2, pci_npa_map); RTE_PMD_REGISTER_KMOD_DEP(mempool_octeontx2, "vfio-pci"); RTE_PMD_REGISTER_PARAM_STRING(mempool_octeontx2, - OTX2_MAX_POOLS "=<128-1048576>"); + OTX2_MAX_POOLS "=<128-1048576>" + OTX2_NPA_LOCK_MASK "=<1-65535>"); diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c b/drivers/mempool/octeontx2/otx2_mempool_ops.c index ac2d61861..1cc34f0d1 100644 --- a/drivers/mempool/octeontx2/otx2_mempool_ops.c +++ b/drivers/mempool/octeontx2/otx2_mempool_ops.c @@ -348,8 +348,13 @@ npa_lf_aura_pool_init(struct otx2_mbox *mbox, uint32_t aura_id, struct npa_aq_enq_req *aura_init_req, *pool_init_req; struct npa_aq_enq_rsp *aura_init_rsp, *pool_init_rsp; struct otx2_mbox_dev *mdev = &mbox->dev[0]; + struct otx2_idev_cfg *idev; int rc, off; + idev = otx2_intra_dev_get_cfg(); + if (idev == NULL) + return -ENOMEM; + aura_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox); aura_init_req->aura_id = aura_id; @@ -379,6 +384,44 @@ npa_lf_aura_pool_init(struct otx2_mbox *mbox, uint32_t aura_id, return 0; else return NPA_LF_ERR_AURA_POOL_INIT; + + if (!(idev->npa_lock_mask & BIT_ULL(aura_id))) + return 0; + + aura_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox); + aura_init_req->aura_id = aura_id; + aura_init_req->ctype = NPA_AQ_CTYPE_AURA; + aura_init_req->op = NPA_AQ_INSTOP_LOCK; + + pool_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox); + if (!pool_init_req) { + /* The shared memory buffer can be full. 
+ * Flush it and retry + */ + otx2_mbox_msg_send(mbox, 0); + rc = otx2_mbox_wait_for_rsp(mbox, 0); + if (rc < 0) { + otx2_err("Failed to LOCK AURA context"); + return -ENOMEM; + } + + pool_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox); + if (!pool_init_req) { + otx2_err("Failed to LOCK POOL context"); + return -ENOMEM; + } + } + pool_init_req->aura_id = aura_id; + pool_init_req->ctype = NPA_AQ_CTYPE_POOL; + pool_init_req->op = NPA_AQ_INSTOP_LOCK; + + rc = otx2_mbox_process(mbox); + if (rc < 0) { + otx2_err("Failed to lock POOL ctx to NDC"); + return -ENOMEM; + } + + return 0; } static int @@ -390,8 +433,13 @@ npa_lf_aura_pool_fini(struct otx2_mbox *mbox, struct npa_aq_enq_rsp *aura_rsp, *pool_rsp; struct otx2_mbox_dev *mdev = &mbox->dev[0]; struct ndc_sync_op *ndc_req; + struct otx2_idev_cfg *idev; int rc, off; + idev = otx2_intra_dev_get_cfg(); + if (idev == NULL) + return -EINVAL; + /* Procedure for disabling an aura/pool */ rte_delay_us(10); npa_lf_aura_op_alloc(aura_handle, 0); @@ -434,6 +482,32 @@ npa_lf_aura_pool_fini(struct otx2_mbox *mbox, otx2_err("Error on NDC-NPA LF sync, rc %d", rc); return NPA_LF_ERR_AURA_POOL_FINI; } + + if (!(idev->npa_lock_mask & BIT_ULL(aura_id))) + return 0; + + aura_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox); + aura_req->aura_id = aura_id; + aura_req->ctype = NPA_AQ_CTYPE_AURA; + aura_req->op = NPA_AQ_INSTOP_UNLOCK; + + rc = otx2_mbox_process(mbox); + if (rc < 0) { + otx2_err("Failed to unlock AURA ctx to NDC"); + return -EINVAL; + } + + pool_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox); + pool_req->aura_id = aura_id; + pool_req->ctype = NPA_AQ_CTYPE_POOL; + pool_req->op = NPA_AQ_INSTOP_UNLOCK; + + rc = otx2_mbox_process(mbox); + if (rc < 0) { + otx2_err("Failed to unlock POOL ctx to NDC"); + return -EINVAL; + } + return 0; } diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c b/drivers/net/octeontx2/otx2_ethdev_devargs.c index f29f01564..5390eb217 100644 --- a/drivers/net/octeontx2/otx2_ethdev_devargs.c +++ b/drivers/net/octeontx2/otx2_ethdev_devargs.c @@ -161,6 +161,7 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev) &parse_switch_header_type, &switch_header_type); rte_kvargs_process(kvlist, OTX2_RSS_TAG_AS_XOR, &parse_flag, &rss_tag_as_xor); + otx2_parse_common_devargs(kvlist); rte_kvargs_free(kvlist); null_devargs: @@ -186,4 +187,5 @@ RTE_PMD_REGISTER_PARAM_STRING(net_octeontx2, OTX2_FLOW_PREALLOC_SIZE "=<1-32>" OTX2_FLOW_MAX_PRIORITY "=<1-32>" OTX2_SWITCH_HEADER_TYPE "=" - OTX2_RSS_TAG_AS_XOR "=1"); + OTX2_RSS_TAG_AS_XOR "=1" + OTX2_NPA_LOCK_MASK "=<1-65535>"); From patchwork Tue Mar 24 16:53:41 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 67085 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 25745A058A; Tue, 24 Mar 2020 17:54:07 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id F12491BE8C; Tue, 24 Mar 2020 17:53:59 +0100 (CET) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id C4FEF1BEE5 for ; Tue, 24 Mar 2020 17:53:55 +0100 (CET) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id 02OGaNGY016121; Tue, 24 Mar 2020 09:53:54 -0700 DKIM-Signature: v=1; a=rsa-sha256; 
c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=Jd3EEz+rVhLl1bvbjSUgtOTwmQe/tdXxIjCSSXpakL8=; b=JrkeEkzxnnGIyxe9BnDQRXbbg4RgajOfVEZTZvXexR9vKgP/m5pMQOZoi5UYp4XeHSGg gXuTPAfGPNYv4uGtoBeKnG1e/jOIWGGT/3ehtYxQnWPLUl1InW74qSXeI3KIO4kC0DXx 4OTDP9DBScgixOkWv8xZCsVZJogQmWcNAFf9vhNR36ROzPA/njZJg86AyV45CbN5DuHk tXH1k7lKqPLxGIS7z+KoAVfqDDW9FFhwzTNnXmx8zateQAfTiRDi3xFxe6atXQ/2U0hd szai9ziUjIFG8EaeffBQkVUggRSgjbo6orbTD4rdpUybCL7Q3+cKEDut3FPH7PbXS6rF 4A== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0a-0016f401.pphosted.com with ESMTP id 2ywg9nm8e5-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Tue, 24 Mar 2020 09:53:54 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Tue, 24 Mar 2020 09:53:53 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Tue, 24 Mar 2020 09:53:53 -0700 Received: from BG-LT7430.marvell.com (BG-LT7430.marvell.com [10.28.163.117]) by maili.marvell.com (Postfix) with ESMTP id 17B7F3F703F; Tue, 24 Mar 2020 09:53:49 -0700 (PDT) From: To: , , Nithin Dabilpuram , Kiran Kumar K , "John McNamara" , Marko Kovacevic CC: , Pavan Nikhilesh Date: Tue, 24 Mar 2020 22:23:41 +0530 Message-ID: <20200324165342.2055-2-pbhagavatula@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200324165342.2055-1-pbhagavatula@marvell.com> References: <20200306163524.1650-1-pbhagavatula@marvell.com> <20200324165342.2055-1-pbhagavatula@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.138, 18.0.645 definitions=2020-03-24_05:2020-03-23, 2020-03-24 signatures=0 Subject: [dpdk-dev] [dpdk-dev v2] [PATCH 2/2] net/octeontx2: add devargs to lock Rx/Tx ctx X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Pavan Nikhilesh Add device arguments to lock Rx/Tx contexts. Application can either choose to lock Rx or Tx contexts by using 'lock_rx_ctx' or 'lock_tx_ctx' respectively per each port. Example: -w 0002:02:00.0,lock_rx_ctx=1 -w 0002:03:00.0,lock_tx_ctx=1 Signed-off-by: Pavan Nikhilesh --- doc/guides/nics/octeontx2.rst | 16 ++ drivers/net/octeontx2/otx2_ethdev.c | 187 +++++++++++++++++++- drivers/net/octeontx2/otx2_ethdev.h | 2 + drivers/net/octeontx2/otx2_ethdev_devargs.c | 16 +- drivers/net/octeontx2/otx2_rss.c | 23 +++ 5 files changed, 241 insertions(+), 3 deletions(-) diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst index c2d87c9d0..df19443e3 100644 --- a/doc/guides/nics/octeontx2.rst +++ b/doc/guides/nics/octeontx2.rst @@ -209,6 +209,22 @@ Runtime Config Options With the above configuration, application can enable inline IPsec processing on 128 SAs (SPI 0-127). +- ``Lock Rx contexts in NDC cache`` + + Lock Rx contexts in NDC cache by using ``lock_rx_ctx`` parameter. + + For example:: + + -w 0002:02:00.0,lock_rx_ctx=1 + +- ``Lock Tx contexts in NDC cache`` + + Lock Tx contexts in NDC cache by using ``lock_tx_ctx`` parameter. + + For example:: + + -w 0002:02:00.0,lock_tx_ctx=1 + .. 
note:: Above devarg parameters are configurable per device, user needs to pass the diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index e60f4901c..6369c2fa9 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -381,6 +381,40 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev, goto fail; } + if (dev->lock_rx_ctx) { + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = qid; + aq->ctype = NIX_AQ_CTYPE_CQ; + aq->op = NIX_AQ_INSTOP_LOCK; + + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + if (!aq) { + /* The shared memory buffer can be full. + * Flush it and retry + */ + otx2_mbox_msg_send(mbox, 0); + rc = otx2_mbox_wait_for_rsp(mbox, 0); + if (rc < 0) { + otx2_err("Failed to LOCK cq context"); + goto fail; + } + + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + if (!aq) { + otx2_err("Failed to LOCK rq context"); + return -ENOMEM; + } + } + aq->qidx = qid; + aq->ctype = NIX_AQ_CTYPE_RQ; + aq->op = NIX_AQ_INSTOP_LOCK; + rc = otx2_mbox_process(mbox); + if (rc < 0) { + otx2_err("Failed to LOCK rq context"); + goto fail; + } + } + return 0; fail: return rc; @@ -430,6 +464,40 @@ nix_cq_rq_uninit(struct rte_eth_dev *eth_dev, struct otx2_eth_rxq *rxq) return rc; } + if (dev->lock_rx_ctx) { + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = rxq->rq; + aq->ctype = NIX_AQ_CTYPE_CQ; + aq->op = NIX_AQ_INSTOP_UNLOCK; + + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + if (!aq) { + /* The shared memory buffer can be full. + * Flush it and retry + */ + otx2_mbox_msg_send(mbox, 0); + rc = otx2_mbox_wait_for_rsp(mbox, 0); + if (rc < 0) { + otx2_err("Failed to UNLOCK cq context"); + return rc; + } + + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + if (!aq) { + otx2_err("Failed to UNLOCK rq context"); + return -ENOMEM; + } + } + aq->qidx = rxq->rq; + aq->ctype = NIX_AQ_CTYPE_RQ; + aq->op = NIX_AQ_INSTOP_UNLOCK; + rc = otx2_mbox_process(mbox); + if (rc < 0) { + otx2_err("Failed to UNLOCK rq context"); + return rc; + } + } + return 0; } @@ -715,6 +783,94 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev) return flags; } +static int +nix_sqb_lock(struct rte_mempool *mp) +{ + struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf; + struct npa_aq_enq_req *req; + int rc; + + req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox); + req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id); + req->ctype = NPA_AQ_CTYPE_AURA; + req->op = NPA_AQ_INSTOP_LOCK; + + req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox); + if (!req) { + /* The shared memory buffer can be full. 
+ * Flush it and retry + */ + otx2_mbox_msg_send(npa_lf->mbox, 0); + rc = otx2_mbox_wait_for_rsp(npa_lf->mbox, 0); + if (rc < 0) { + otx2_err("Failed to LOCK AURA context"); + return rc; + } + + req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox); + if (!req) { + otx2_err("Failed to LOCK POOL context"); + return -ENOMEM; + } + } + + req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id); + req->ctype = NPA_AQ_CTYPE_POOL; + req->op = NPA_AQ_INSTOP_LOCK; + + rc = otx2_mbox_process(npa_lf->mbox); + if (rc < 0) { + otx2_err("Unable to lock POOL in NDC"); + return rc; + } + + return 0; +} + +static int +nix_sqb_unlock(struct rte_mempool *mp) +{ + struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf; + struct npa_aq_enq_req *req; + int rc; + + req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox); + req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id); + req->ctype = NPA_AQ_CTYPE_AURA; + req->op = NPA_AQ_INSTOP_UNLOCK; + + req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox); + if (!req) { + /* The shared memory buffer can be full. + * Flush it and retry + */ + otx2_mbox_msg_send(npa_lf->mbox, 0); + rc = otx2_mbox_wait_for_rsp(npa_lf->mbox, 0); + if (rc < 0) { + otx2_err("Failed to UNLOCK AURA context"); + return rc; + } + + req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox); + if (!req) { + otx2_err("Failed to UNLOCK POOL context"); + return -ENOMEM; + } + } + req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox); + req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id); + req->ctype = NPA_AQ_CTYPE_POOL; + req->op = NPA_AQ_INSTOP_UNLOCK; + + rc = otx2_mbox_process(npa_lf->mbox); + if (rc < 0) { + otx2_err("Unable to UNLOCK AURA in NDC"); + return rc; + } + + return 0; +} + static int nix_sq_init(struct otx2_eth_txq *txq) { @@ -757,7 +913,20 @@ nix_sq_init(struct otx2_eth_txq *txq) /* Many to one reduction */ sq->sq.qint_idx = txq->sq % dev->qints; - return otx2_mbox_process(mbox); + rc = otx2_mbox_process(mbox); + if (rc < 0) + return rc; + + if (dev->lock_tx_ctx) { + sq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + sq->qidx = txq->sq; + sq->ctype = NIX_AQ_CTYPE_SQ; + sq->op = NIX_AQ_INSTOP_LOCK; + + rc = otx2_mbox_process(mbox); + } + + return rc; } static int @@ -800,6 +969,20 @@ nix_sq_uninit(struct otx2_eth_txq *txq) if (rc) return rc; + if (dev->lock_tx_ctx) { + /* Unlock sq */ + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = txq->sq; + aq->ctype = NIX_AQ_CTYPE_SQ; + aq->op = NIX_AQ_INSTOP_UNLOCK; + + rc = otx2_mbox_process(mbox); + if (rc < 0) + return rc; + + nix_sqb_unlock(txq->sqb_pool); + } + /* Read SQ and free sqb's */ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); aq->qidx = txq->sq; @@ -921,6 +1104,8 @@ nix_alloc_sqb_pool(int port, struct otx2_eth_txq *txq, uint16_t nb_desc) } nix_sqb_aura_limit_cfg(txq->sqb_pool, txq->nb_sqb_bufs); + if (dev->lock_tx_ctx) + nix_sqb_lock(txq->sqb_pool); return 0; fail: diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index e5684f9f0..90ca8cbed 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -272,6 +272,8 @@ struct otx2_eth_dev { uint8_t max_mac_entries; uint8_t lf_tx_stats; uint8_t lf_rx_stats; + uint8_t lock_rx_ctx; + uint8_t lock_tx_ctx; uint16_t flags; uint16_t cints; uint16_t qints; diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c b/drivers/net/octeontx2/otx2_ethdev_devargs.c index 5390eb217..e8eba3d91 100644 --- a/drivers/net/octeontx2/otx2_ethdev_devargs.c +++ b/drivers/net/octeontx2/otx2_ethdev_devargs.c @@ -124,6 +124,8 @@ 
parse_switch_header_type(const char *key, const char *value, void *extra_args) #define OTX2_FLOW_MAX_PRIORITY "flow_max_priority" #define OTX2_SWITCH_HEADER_TYPE "switch_header" #define OTX2_RSS_TAG_AS_XOR "tag_as_xor" +#define OTX2_LOCK_RX_CTX "lock_rx_ctx" +#define OTX2_LOCK_TX_CTX "lock_tx_ctx" int otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev) @@ -134,9 +136,11 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev) uint16_t switch_header_type = 0; uint16_t flow_max_priority = 3; uint16_t ipsec_in_max_spi = 1; - uint16_t scalar_enable = 0; uint16_t rss_tag_as_xor = 0; + uint16_t scalar_enable = 0; struct rte_kvargs *kvlist; + uint8_t lock_rx_ctx = 0; + uint8_t lock_tx_ctx = 0; if (devargs == NULL) goto null_devargs; @@ -161,6 +165,10 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev) &parse_switch_header_type, &switch_header_type); rte_kvargs_process(kvlist, OTX2_RSS_TAG_AS_XOR, &parse_flag, &rss_tag_as_xor); + rte_kvargs_process(kvlist, OTX2_LOCK_RX_CTX, + &parse_flag, &lock_rx_ctx); + rte_kvargs_process(kvlist, OTX2_LOCK_TX_CTX, + &parse_flag, &lock_tx_ctx); otx2_parse_common_devargs(kvlist); rte_kvargs_free(kvlist); @@ -169,6 +177,8 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev) dev->scalar_ena = scalar_enable; dev->rss_tag_as_xor = rss_tag_as_xor; dev->max_sqb_count = sqb_count; + dev->lock_rx_ctx = lock_rx_ctx; + dev->lock_tx_ctx = lock_tx_ctx; dev->rss_info.rss_size = rss_size; dev->npc_flow.flow_prealloc_size = flow_prealloc_size; dev->npc_flow.flow_max_priority = flow_max_priority; @@ -188,4 +198,6 @@ RTE_PMD_REGISTER_PARAM_STRING(net_octeontx2, OTX2_FLOW_MAX_PRIORITY "=<1-32>" OTX2_SWITCH_HEADER_TYPE "=" OTX2_RSS_TAG_AS_XOR "=1" - OTX2_NPA_LOCK_MASK "=<1-65535>"); + OTX2_NPA_LOCK_MASK "=<1-65535>" + OTX2_LOCK_RX_CTX "=1" + OTX2_LOCK_TX_CTX "=1"); diff --git a/drivers/net/octeontx2/otx2_rss.c b/drivers/net/octeontx2/otx2_rss.c index 7a8c8f3de..34005ef02 100644 --- a/drivers/net/octeontx2/otx2_rss.c +++ b/drivers/net/octeontx2/otx2_rss.c @@ -33,6 +33,29 @@ otx2_nix_rss_tbl_init(struct otx2_eth_dev *dev, req->qidx = (group * rss->rss_size) + idx; req->ctype = NIX_AQ_CTYPE_RSS; req->op = NIX_AQ_INSTOP_INIT; + + if (!dev->lock_rx_ctx) + continue; + + req = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + if (!req) { + /* The shared memory buffer can be full. + * Flush it and retry + */ + otx2_mbox_msg_send(mbox, 0); + rc = otx2_mbox_wait_for_rsp(mbox, 0); + if (rc < 0) + return rc; + + req = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + if (!req) + return -ENOMEM; + } + req->rss.rq = ind_tbl[idx]; + /* Fill AQ info */ + req->qidx = (group * rss->rss_size) + idx; + req->ctype = NIX_AQ_CTYPE_RSS; + req->op = NIX_AQ_INSTOP_LOCK; } otx2_mbox_msg_send(mbox, 0);