From patchwork Thu Jan 20 16:59:46 2022
X-Patchwork-Submitter: Sunil Kumar Kori
X-Patchwork-Id: 106149
X-Patchwork-Delegate: jerinj@marvell.com
From: Sunil Kumar Kori
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Ray Kinsella
Subject: [PATCH v4 1/2] common/cnxk: support priority flow ctrl config API
Date: Thu, 20 Jan 2022 22:29:46 +0530
Message-ID: <20220120165947.1388662-1-skori@marvell.com>
In-Reply-To: <20220118132858.1260496-2-skori@marvell.com>
References: <20220118132858.1260496-2-skori@marvell.com>
List-Id: DPDK patches and discussions

CNXK platforms support priority flow control (802.1Qbb) to pause
traffic per class on a link. This patch adds a RoC interface to
configure priority flow control on the MAC block, i.e. CGX on cn9k
and RPM on cn10k.
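For illustration (not part of the patch), below is a minimal sketch of
how a caller might drive the RoC PFC API that this patch introduces.
The types and functions (struct roc_nix_pfc_cfg, roc_nix_pfc_mode_set(),
roc_nix_pfc_mode_get()) are taken from the diff; the helper name and the
surrounding driver context (includes, a probed roc_nix handle) are
assumptions.

static int
example_pfc_enable_tc(struct roc_nix *roc_nix, uint16_t tc)
{
	struct roc_nix_pfc_cfg pfc_cfg;
	int rc;

	memset(&pfc_cfg, 0, sizeof(pfc_cfg));
	pfc_cfg.mode = ROC_NIX_FC_FULL; /* pause in both directions */
	pfc_cfg.tc = tc;                /* for SET, a TC index in [0, 15] */

	rc = roc_nix_pfc_mode_set(roc_nix, &pfc_cfg);
	if (rc)
		return rc;

	/* For GET, pfc_cfg.tc is reported back as a bitmap of enabled TCs. */
	return roc_nix_pfc_mode_get(roc_nix, &pfc_cfg);
}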
Signed-off-by: Sunil Kumar Kori --- v1..v2: - fix RoC API naming convention. v2..v3: - fix pause quanta configuration for cn10k. - remove unnecessary code v3..v4: - fix PFC configuration with other type of TM tree i.e. default, user and rate limit tree. drivers/common/cnxk/roc_mbox.h | 19 ++- drivers/common/cnxk/roc_nix.h | 21 ++++ drivers/common/cnxk/roc_nix_fc.c | 95 +++++++++++++-- drivers/common/cnxk/roc_nix_priv.h | 6 +- drivers/common/cnxk/roc_nix_tm.c | 171 ++++++++++++++++++++++++++- drivers/common/cnxk/roc_nix_tm_ops.c | 14 ++- drivers/common/cnxk/version.map | 4 + 7 files changed, 310 insertions(+), 20 deletions(-) diff --git a/drivers/common/cnxk/roc_mbox.h b/drivers/common/cnxk/roc_mbox.h index e97d93e261..39f63c9271 100644 --- a/drivers/common/cnxk/roc_mbox.h +++ b/drivers/common/cnxk/roc_mbox.h @@ -95,6 +95,8 @@ struct mbox_msghdr { msg_rsp) \ M(CGX_STATS_RST, 0x21A, cgx_stats_rst, msg_req, msg_rsp) \ M(RPM_STATS, 0x21C, rpm_stats, msg_req, rpm_stats_rsp) \ + M(CGX_PRIO_FLOW_CTRL_CFG, 0x21F, cgx_prio_flow_ctrl_cfg, cgx_pfc_cfg, \ + cgx_pfc_rsp) \ /* NPA mbox IDs (range 0x400 - 0x5FF) */ \ M(NPA_LF_ALLOC, 0x400, npa_lf_alloc, npa_lf_alloc_req, \ npa_lf_alloc_rsp) \ @@ -550,6 +552,19 @@ struct cgx_pause_frm_cfg { uint8_t __io tx_pause; }; +struct cgx_pfc_cfg { + struct mbox_msghdr hdr; + uint8_t __io rx_pause; + uint8_t __io tx_pause; + uint16_t __io pfc_en; /* bitmap indicating enabled traffic classes */ +}; + +struct cgx_pfc_rsp { + struct mbox_msghdr hdr; + uint8_t __io rx_pause; + uint8_t __io tx_pause; +}; + struct sfp_eeprom_s { #define SFP_EEPROM_SIZE 256 uint16_t __io sff_id; @@ -1124,7 +1139,9 @@ struct nix_bp_cfg_req { /* PF can be mapped to either CGX or LBK interface, * so maximum 64 channels are possible. */ -#define NIX_MAX_CHAN 64 +#define NIX_MAX_CHAN 64 +#define NIX_CGX_MAX_CHAN 16 +#define NIX_LBK_MAX_CHAN NIX_MAX_CHAN struct nix_bp_cfg_rsp { struct mbox_msghdr hdr; /* Channel and bpid mapping */ diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h index 69a5e8e7b4..e05b7b7dd8 100644 --- a/drivers/common/cnxk/roc_nix.h +++ b/drivers/common/cnxk/roc_nix.h @@ -165,16 +165,27 @@ struct roc_nix_fc_cfg { struct { uint32_t rq; + uint16_t tc; uint16_t cq_drop; bool enable; } cq_cfg; struct { + uint32_t sq; + uint16_t tc; bool enable; } tm_cfg; }; }; +struct roc_nix_pfc_cfg { + enum roc_nix_fc_mode mode; + /* For SET, tc must be [0, 15]. 
+ * For GET, TC will represent bitmap + */ + uint16_t tc; +}; + struct roc_nix_eeprom_info { #define ROC_NIX_EEPROM_SIZE 256 uint16_t sff_id; @@ -478,6 +489,7 @@ void __roc_api roc_nix_unregister_cq_irqs(struct roc_nix *roc_nix); enum roc_nix_tm_tree { ROC_NIX_TM_DEFAULT = 0, ROC_NIX_TM_RLIMIT, + ROC_NIX_TM_PFC, ROC_NIX_TM_USER, ROC_NIX_TM_TREE_MAX, }; @@ -624,6 +636,7 @@ roc_nix_tm_shaper_default_red_algo(struct roc_nix_tm_node *node, int __roc_api roc_nix_tm_lvl_cnt_get(struct roc_nix *roc_nix); int __roc_api roc_nix_tm_lvl_have_link_access(struct roc_nix *roc_nix, int lvl); int __roc_api roc_nix_tm_prepare_rate_limited_tree(struct roc_nix *roc_nix); +int __roc_api roc_nix_tm_pfc_prepare_tree(struct roc_nix *roc_nix); bool __roc_api roc_nix_tm_is_user_hierarchy_enabled(struct roc_nix *nix); int __roc_api roc_nix_tm_tree_type_get(struct roc_nix *nix); @@ -736,6 +749,14 @@ int __roc_api roc_nix_fc_config_get(struct roc_nix *roc_nix, int __roc_api roc_nix_fc_mode_set(struct roc_nix *roc_nix, enum roc_nix_fc_mode mode); +int __roc_api roc_nix_pfc_mode_set(struct roc_nix *roc_nix, + struct roc_nix_pfc_cfg *pfc_cfg); + +int __roc_api roc_nix_pfc_mode_get(struct roc_nix *roc_nix, + struct roc_nix_pfc_cfg *pfc_cfg); + +uint16_t __roc_api roc_nix_chan_count_get(struct roc_nix *roc_nix); + enum roc_nix_fc_mode __roc_api roc_nix_fc_mode_get(struct roc_nix *roc_nix); void __roc_api rox_nix_fc_npa_bp_cfg(struct roc_nix *roc_nix, uint64_t pool_id, diff --git a/drivers/common/cnxk/roc_nix_fc.c b/drivers/common/cnxk/roc_nix_fc.c index ca29cd2bf9..814ccab839 100644 --- a/drivers/common/cnxk/roc_nix_fc.c +++ b/drivers/common/cnxk/roc_nix_fc.c @@ -36,7 +36,7 @@ nix_fc_rxchan_bpid_set(struct roc_nix *roc_nix, bool enable) struct mbox *mbox = get_mbox(roc_nix); struct nix_bp_cfg_req *req; struct nix_bp_cfg_rsp *rsp; - int rc = -ENOSPC; + int rc = -ENOSPC, i; if (roc_nix_is_sdp(roc_nix)) return 0; @@ -45,22 +45,28 @@ nix_fc_rxchan_bpid_set(struct roc_nix *roc_nix, bool enable) req = mbox_alloc_msg_nix_bp_enable(mbox); if (req == NULL) return rc; + req->chan_base = 0; - req->chan_cnt = 1; - req->bpid_per_chan = 0; + if (roc_nix_is_lbk(roc_nix)) + req->chan_cnt = NIX_LBK_MAX_CHAN; + else + req->chan_cnt = NIX_CGX_MAX_CHAN; + + req->bpid_per_chan = true; rc = mbox_process_msg(mbox, (void *)&rsp); if (rc || (req->chan_cnt != rsp->chan_cnt)) goto exit; - nix->bpid[0] = rsp->chan_bpid[0]; nix->chan_cnt = rsp->chan_cnt; + for (i = 0; i < rsp->chan_cnt; i++) + nix->bpid[i] = rsp->chan_bpid[i] & 0x1FF; } else { req = mbox_alloc_msg_nix_bp_disable(mbox); if (req == NULL) return rc; req->chan_base = 0; - req->chan_cnt = 1; + req->chan_cnt = nix->chan_cnt; rc = mbox_process(mbox); if (rc) @@ -152,7 +158,7 @@ nix_fc_cq_config_set(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg) aq->op = NIX_AQ_INSTOP_WRITE; if (fc_cfg->cq_cfg.enable) { - aq->cq.bpid = nix->bpid[0]; + aq->cq.bpid = nix->bpid[fc_cfg->cq_cfg.tc]; aq->cq_mask.bpid = ~(aq->cq_mask.bpid); aq->cq.bp = fc_cfg->cq_cfg.cq_drop; aq->cq_mask.bp = ~(aq->cq_mask.bp); @@ -169,7 +175,7 @@ nix_fc_cq_config_set(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg) aq->op = NIX_AQ_INSTOP_WRITE; if (fc_cfg->cq_cfg.enable) { - aq->cq.bpid = nix->bpid[0]; + aq->cq.bpid = nix->bpid[fc_cfg->cq_cfg.tc]; aq->cq_mask.bpid = ~(aq->cq_mask.bpid); aq->cq.bp = fc_cfg->cq_cfg.cq_drop; aq->cq_mask.bp = ~(aq->cq_mask.bp); @@ -210,7 +216,9 @@ roc_nix_fc_config_set(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg) return nix_fc_rxchan_bpid_set(roc_nix, 
fc_cfg->rxchan_cfg.enable); else if (fc_cfg->type == ROC_NIX_FC_TM_CFG) - return nix_tm_bp_config_set(roc_nix, fc_cfg->tm_cfg.enable); + return nix_tm_bp_config_set(roc_nix, fc_cfg->tm_cfg.sq, + fc_cfg->tm_cfg.tc, + fc_cfg->tm_cfg.enable); return -EINVAL; } @@ -391,3 +399,74 @@ rox_nix_fc_npa_bp_cfg(struct roc_nix *roc_nix, uint64_t pool_id, uint8_t ena, mbox_process(mbox); } + +int +roc_nix_pfc_mode_set(struct roc_nix *roc_nix, struct roc_nix_pfc_cfg *pfc_cfg) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct mbox *mbox = get_mbox(roc_nix); + uint8_t tx_pause, rx_pause; + struct cgx_pfc_cfg *req; + struct cgx_pfc_rsp *rsp; + int rc = -ENOSPC; + + if (roc_nix_is_lbk(roc_nix)) + return NIX_ERR_OP_NOTSUP; + + rx_pause = (pfc_cfg->mode == ROC_NIX_FC_FULL) || + (pfc_cfg->mode == ROC_NIX_FC_RX); + tx_pause = (pfc_cfg->mode == ROC_NIX_FC_FULL) || + (pfc_cfg->mode == ROC_NIX_FC_TX); + + req = mbox_alloc_msg_cgx_prio_flow_ctrl_cfg(mbox); + if (req == NULL) + goto exit; + + req->pfc_en = pfc_cfg->tc; + req->rx_pause = rx_pause; + req->tx_pause = tx_pause; + + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + goto exit; + + nix->rx_pause = rsp->rx_pause; + nix->tx_pause = rsp->tx_pause; + if (rsp->tx_pause) + nix->cev |= BIT(pfc_cfg->tc); + else + nix->cev &= ~BIT(pfc_cfg->tc); + +exit: + return rc; +} + +int +roc_nix_pfc_mode_get(struct roc_nix *roc_nix, struct roc_nix_pfc_cfg *pfc_cfg) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + + if (roc_nix_is_lbk(roc_nix)) + return NIX_ERR_OP_NOTSUP; + + pfc_cfg->tc = nix->cev; + + if (nix->rx_pause && nix->tx_pause) + pfc_cfg->mode = ROC_NIX_FC_FULL; + else if (nix->rx_pause) + pfc_cfg->mode = ROC_NIX_FC_RX; + else if (nix->tx_pause) + pfc_cfg->mode = ROC_NIX_FC_TX; + else + pfc_cfg->mode = ROC_NIX_FC_NONE; + + return 0; +} + +uint16_t +roc_nix_chan_count_get(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + + return nix->chan_cnt; +} diff --git a/drivers/common/cnxk/roc_nix_priv.h b/drivers/common/cnxk/roc_nix_priv.h index 04575af295..db34bcadd0 100644 --- a/drivers/common/cnxk/roc_nix_priv.h +++ b/drivers/common/cnxk/roc_nix_priv.h @@ -33,6 +33,7 @@ struct nix_qint { /* Traffic Manager */ #define NIX_TM_MAX_HW_TXSCHQ 512 #define NIX_TM_HW_ID_INVALID UINT32_MAX +#define NIX_TM_CHAN_INVALID UINT16_MAX /* TM flags */ #define NIX_TM_HIERARCHY_ENA BIT_ULL(0) @@ -56,6 +57,7 @@ struct nix_tm_node { uint32_t priority; uint32_t weight; uint16_t lvl; + uint16_t rel_chan; uint32_t parent_id; uint32_t shaper_profile_id; void (*free_fn)(void *node); @@ -139,6 +141,7 @@ struct nix { uint16_t msixoff; uint8_t rx_pause; uint8_t tx_pause; + uint16_t cev; uint64_t rx_cfg; struct dev dev; uint16_t cints; @@ -376,7 +379,8 @@ int nix_rq_cfg(struct dev *dev, struct roc_nix_rq *rq, uint16_t qints, bool cfg, bool ena); int nix_rq_ena_dis(struct dev *dev, struct roc_nix_rq *rq, bool enable); int nix_tm_bp_config_get(struct roc_nix *roc_nix, bool *is_enabled); -int nix_tm_bp_config_set(struct roc_nix *roc_nix, bool enable); +int nix_tm_bp_config_set(struct roc_nix *roc_nix, uint16_t sq, uint16_t tc, + bool enable); /* * TM priv utils. 
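For illustration (not part of the patch), the per-TC flow control
plumbing added above could be exercised roughly as follows. The
roc_nix_fc_cfg fields match the roc_nix.h diff in this patch; the
helper name and the queue ids are assumptions from a hypothetical
driver context.

static int
example_fc_bind_tc(struct roc_nix *roc_nix, uint32_t rq, uint32_t sq,
		   uint16_t tc, uint16_t cq_drop)
{
	struct roc_nix_fc_cfg fc_cfg;
	int rc;

	/* CQ side: back-pressure the RQ/CQ using the per-TC bpid
	 * (nix->bpid[tc]) so congestion is signalled per class.
	 */
	memset(&fc_cfg, 0, sizeof(fc_cfg));
	fc_cfg.type = ROC_NIX_FC_CQ_CFG;
	fc_cfg.cq_cfg.rq = rq;
	fc_cfg.cq_cfg.tc = tc;
	fc_cfg.cq_cfg.cq_drop = cq_drop;
	fc_cfg.cq_cfg.enable = true;
	rc = roc_nix_fc_config_set(roc_nix, &fc_cfg);
	if (rc)
		return rc;

	/* TM side: map the SQ's link-level scheduler to the same TC. */
	memset(&fc_cfg, 0, sizeof(fc_cfg));
	fc_cfg.type = ROC_NIX_FC_TM_CFG;
	fc_cfg.tm_cfg.sq = sq;
	fc_cfg.tm_cfg.tc = tc;
	fc_cfg.tm_cfg.enable = true;
	return roc_nix_fc_config_set(roc_nix, &fc_cfg);
}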
diff --git a/drivers/common/cnxk/roc_nix_tm.c b/drivers/common/cnxk/roc_nix_tm.c index b3d8ebd3c2..89d1478486 100644 --- a/drivers/common/cnxk/roc_nix_tm.c +++ b/drivers/common/cnxk/roc_nix_tm.c @@ -121,7 +121,7 @@ nix_tm_txsch_reg_config(struct nix *nix, enum roc_nix_tm_tree tree) if (is_pf_or_lbk && !skip_bp && node->hw_lvl == nix->tm_link_cfg_lvl) { node->bp_capa = 1; - skip_bp = true; + skip_bp = false; } rc = nix_tm_node_reg_conf(nix, node); @@ -317,21 +317,38 @@ nix_tm_clear_path_xoff(struct nix *nix, struct nix_tm_node *node) } int -nix_tm_bp_config_set(struct roc_nix *roc_nix, bool enable) +nix_tm_bp_config_set(struct roc_nix *roc_nix, uint16_t sq, uint16_t tc, + bool enable) { struct nix *nix = roc_nix_to_nix_priv(roc_nix); enum roc_nix_tm_tree tree = nix->tm_tree; struct mbox *mbox = (&nix->dev)->mbox; struct nix_txschq_config *req = NULL; struct nix_tm_node_list *list; + struct nix_tm_node *sq_node; + struct nix_tm_node *parent; struct nix_tm_node *node; uint8_t k = 0; uint16_t link; int rc = 0; + sq_node = nix_tm_node_search(nix, sq, nix->tm_tree); + parent = sq_node->parent; + while (parent) { + if (parent->lvl == ROC_TM_LVL_SCH2) + break; + + parent = parent->parent; + } + list = nix_tm_node_list(nix, tree); link = nix->tx_link; + if (parent->rel_chan != NIX_TM_CHAN_INVALID && parent->rel_chan != tc) { + rc = -EINVAL; + goto err; + } + TAILQ_FOREACH(node, list, node) { if (node->hw_lvl != nix->tm_link_cfg_lvl) continue; @@ -339,6 +356,9 @@ nix_tm_bp_config_set(struct roc_nix *roc_nix, bool enable) if (!(node->flags & NIX_TM_NODE_HWRES) || !node->bp_capa) continue; + if (node->hw_id != parent->hw_id) + continue; + if (!req) { req = mbox_alloc_msg_nix_txschq_cfg(mbox); req->lvl = nix->tm_link_cfg_lvl; @@ -346,8 +366,9 @@ nix_tm_bp_config_set(struct roc_nix *roc_nix, bool enable) } req->reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(node->hw_id, link); - req->regval[k] = enable ? BIT_ULL(13) : 0; - req->regval_mask[k] = ~BIT_ULL(13); + req->regval[k] = enable ? tc : 0; + req->regval[k] |= enable ? 
BIT_ULL(13) : 0; + req->regval_mask[k] = ~(BIT_ULL(13) | GENMASK_ULL(7, 0)); k++; if (k >= MAX_REGS_PER_MBOX_MSG) { @@ -366,6 +387,7 @@ nix_tm_bp_config_set(struct roc_nix *roc_nix, bool enable) goto err; } + parent->rel_chan = tc; return 0; err: plt_err("Failed to %s bp on link %u, rc=%d(%s)", @@ -602,7 +624,7 @@ nix_tm_sq_flush_pre(struct roc_nix_sq *sq) } /* Disable backpressure */ - rc = nix_tm_bp_config_set(roc_nix, false); + rc = nix_tm_bp_config_set(roc_nix, sq->qid, 0, false); if (rc) { plt_err("Failed to disable backpressure for flush, rc=%d", rc); return rc; @@ -731,7 +753,7 @@ nix_tm_sq_flush_post(struct roc_nix_sq *sq) return 0; /* Restore backpressure */ - rc = nix_tm_bp_config_set(roc_nix, true); + rc = nix_tm_bp_config_set(roc_nix, sq->qid, 0, true); if (rc) { plt_err("Failed to restore backpressure, rc=%d", rc); return rc; @@ -1293,6 +1315,7 @@ nix_tm_prepare_default_tree(struct roc_nix *roc_nix) node->shaper_profile_id = ROC_NIX_TM_SHAPER_PROFILE_NONE; node->lvl = lvl; node->tree = ROC_NIX_TM_DEFAULT; + node->rel_chan = NIX_TM_CHAN_INVALID; rc = nix_tm_node_add(roc_nix, node); if (rc) @@ -1319,6 +1342,7 @@ nix_tm_prepare_default_tree(struct roc_nix *roc_nix) node->shaper_profile_id = ROC_NIX_TM_SHAPER_PROFILE_NONE; node->lvl = leaf_lvl; node->tree = ROC_NIX_TM_DEFAULT; + node->rel_chan = NIX_TM_CHAN_INVALID; rc = nix_tm_node_add(roc_nix, node); if (rc) @@ -1359,6 +1383,7 @@ roc_nix_tm_prepare_rate_limited_tree(struct roc_nix *roc_nix) node->shaper_profile_id = ROC_NIX_TM_SHAPER_PROFILE_NONE; node->lvl = lvl; node->tree = ROC_NIX_TM_RLIMIT; + node->rel_chan = NIX_TM_CHAN_INVALID; rc = nix_tm_node_add(roc_nix, node); if (rc) @@ -1384,6 +1409,7 @@ roc_nix_tm_prepare_rate_limited_tree(struct roc_nix *roc_nix) node->shaper_profile_id = ROC_NIX_TM_SHAPER_PROFILE_NONE; node->lvl = lvl; node->tree = ROC_NIX_TM_RLIMIT; + node->rel_chan = NIX_TM_CHAN_INVALID; rc = nix_tm_node_add(roc_nix, node); if (rc) @@ -1408,6 +1434,139 @@ roc_nix_tm_prepare_rate_limited_tree(struct roc_nix *roc_nix) node->shaper_profile_id = ROC_NIX_TM_SHAPER_PROFILE_NONE; node->lvl = leaf_lvl; node->tree = ROC_NIX_TM_RLIMIT; + node->rel_chan = NIX_TM_CHAN_INVALID; + + rc = nix_tm_node_add(roc_nix, node); + if (rc) + goto error; + } + + return 0; +error: + nix_tm_node_free(node); + return rc; +} + +int +roc_nix_tm_pfc_prepare_tree(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + uint32_t nonleaf_id = nix->nb_tx_queues; + struct nix_tm_node *node = NULL; + uint8_t leaf_lvl, lvl, lvl_end; + uint32_t tl2_node_id; + uint32_t parent, i; + int rc = -ENOMEM; + + parent = ROC_NIX_TM_NODE_ID_INVALID; + lvl_end = ROC_TM_LVL_SCH3; + leaf_lvl = ROC_TM_LVL_QUEUE; + + /* TL1 node */ + node = nix_tm_node_alloc(); + if (!node) + goto error; + + node->id = nonleaf_id; + node->parent_id = parent; + node->priority = 0; + node->weight = NIX_TM_DFLT_RR_WT; + node->shaper_profile_id = ROC_NIX_TM_SHAPER_PROFILE_NONE; + node->lvl = ROC_TM_LVL_ROOT; + node->tree = ROC_NIX_TM_PFC; + node->rel_chan = NIX_TM_CHAN_INVALID; + + rc = nix_tm_node_add(roc_nix, node); + if (rc) + goto error; + + parent = nonleaf_id; + nonleaf_id++; + + /* TL2 node */ + rc = -ENOMEM; + node = nix_tm_node_alloc(); + if (!node) + goto error; + + node->id = nonleaf_id; + node->parent_id = parent; + node->priority = 0; + node->weight = NIX_TM_DFLT_RR_WT; + node->shaper_profile_id = ROC_NIX_TM_SHAPER_PROFILE_NONE; + node->lvl = ROC_TM_LVL_SCH1; + node->tree = ROC_NIX_TM_PFC; + node->rel_chan = NIX_TM_CHAN_INVALID; + + rc = 
nix_tm_node_add(roc_nix, node); + if (rc) + goto error; + + tl2_node_id = nonleaf_id; + nonleaf_id++; + + for (i = 0; i < nix->nb_tx_queues; i++) { + parent = tl2_node_id; + for (lvl = ROC_TM_LVL_SCH2; lvl <= lvl_end; lvl++) { + rc = -ENOMEM; + node = nix_tm_node_alloc(); + if (!node) + goto error; + + node->id = nonleaf_id; + node->parent_id = parent; + node->priority = 0; + node->weight = NIX_TM_DFLT_RR_WT; + node->shaper_profile_id = + ROC_NIX_TM_SHAPER_PROFILE_NONE; + node->lvl = lvl; + node->tree = ROC_NIX_TM_PFC; + node->rel_chan = NIX_TM_CHAN_INVALID; + + rc = nix_tm_node_add(roc_nix, node); + if (rc) + goto error; + + parent = nonleaf_id; + nonleaf_id++; + } + + lvl = ROC_TM_LVL_SCH4; + + rc = -ENOMEM; + node = nix_tm_node_alloc(); + if (!node) + goto error; + + node->id = nonleaf_id; + node->parent_id = parent; + node->priority = 0; + node->weight = NIX_TM_DFLT_RR_WT; + node->shaper_profile_id = ROC_NIX_TM_SHAPER_PROFILE_NONE; + node->lvl = lvl; + node->tree = ROC_NIX_TM_PFC; + node->rel_chan = NIX_TM_CHAN_INVALID; + + rc = nix_tm_node_add(roc_nix, node); + if (rc) + goto error; + + parent = nonleaf_id; + nonleaf_id++; + + rc = -ENOMEM; + node = nix_tm_node_alloc(); + if (!node) + goto error; + + node->id = i; + node->parent_id = parent; + node->priority = 0; + node->weight = NIX_TM_DFLT_RR_WT; + node->shaper_profile_id = ROC_NIX_TM_SHAPER_PROFILE_NONE; + node->lvl = leaf_lvl; + node->tree = ROC_NIX_TM_PFC; + node->rel_chan = NIX_TM_CHAN_INVALID; rc = nix_tm_node_add(roc_nix, node); if (rc) diff --git a/drivers/common/cnxk/roc_nix_tm_ops.c b/drivers/common/cnxk/roc_nix_tm_ops.c index 3d81247a12..d3d39eeb99 100644 --- a/drivers/common/cnxk/roc_nix_tm_ops.c +++ b/drivers/common/cnxk/roc_nix_tm_ops.c @@ -464,10 +464,16 @@ roc_nix_tm_hierarchy_disable(struct roc_nix *roc_nix) /* Disable backpressure, it will be enabled back if needed on * hierarchy enable */ - rc = nix_tm_bp_config_set(roc_nix, false); - if (rc) { - plt_err("Failed to disable backpressure for flush, rc=%d", rc); - goto cleanup; + for (i = 0; i < sq_cnt; i++) { + sq = nix->sqs[i]; + if (!sq) + continue; + + rc = nix_tm_bp_config_set(roc_nix, sq->qid, 0, false); + if (rc) { + plt_err("Failed to disable backpressure, rc=%d", rc); + goto cleanup; + } } /* Flush all tx queues */ diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index 5a03b91784..f36a662911 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -106,6 +106,7 @@ INTERNAL { roc_nix_bpf_stats_reset; roc_nix_bpf_stats_to_idx; roc_nix_bpf_timeunit_get; + roc_nix_chan_count_get; roc_nix_cq_dump; roc_nix_cq_fini; roc_nix_cq_init; @@ -196,6 +197,8 @@ INTERNAL { roc_nix_npc_promisc_ena_dis; roc_nix_npc_rx_ena_dis; roc_nix_npc_mcast_config; + roc_nix_pfc_mode_set; + roc_nix_pfc_mode_get; roc_nix_ptp_clock_read; roc_nix_ptp_info_cb_register; roc_nix_ptp_info_cb_unregister; @@ -260,6 +263,7 @@ INTERNAL { roc_nix_tm_node_stats_get; roc_nix_tm_node_suspend_resume; roc_nix_tm_prealloc_res; + roc_nix_tm_pfc_prepare_tree; roc_nix_tm_prepare_rate_limited_tree; roc_nix_tm_rlimit_sq; roc_nix_tm_root_has_sp; From patchwork Thu Jan 20 16:59:47 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sunil Kumar Kori X-Patchwork-Id: 106148 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org 
(Postfix) with ESMTP id CC96DA034E; Thu, 20 Jan 2022 18:00:00 +0100 (CET)
From: Sunil Kumar Kori
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Subject: [PATCH v4 2/2] net/cnxk: support priority flow control
Date: Thu, 20 Jan 2022 22:29:47 +0530
Message-ID: <20220120165947.1388662-2-skori@marvell.com>
In-Reply-To: <20220120165947.1388662-1-skori@marvell.com>
References: <20220118132858.1260496-2-skori@marvell.com> <20220120165947.1388662-1-skori@marvell.com>
List-Id: DPDK patches and discussions

This patch implements priority flow control support for CNXK platforms.

Signed-off-by: Sunil Kumar Kori
---
v1..v2:
 - fix application restart issue.
v2..v3:
 - fix pause quanta configuration for cn10k.
 - fix review comments.
v3..v4:
 - fix PFC configuration with other type of TM tree i.e. default,
   user and rate limit tree.
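For illustration (not part of the patch), the new
priority_flow_ctrl_queue_set ethdev op could be invoked along these
lines. The rte_eth_pfc_queue_conf fields match the driver code below;
the helper name and the TC/queue numbers are placeholders, and a real
application would reach the op through the generic ethdev PFC API
rather than the PMD symbol.

static int
example_pfc_queue_cfg(struct rte_eth_dev *eth_dev)
{
	struct rte_eth_pfc_queue_conf pfc_conf;

	memset(&pfc_conf, 0, sizeof(pfc_conf));
	pfc_conf.mode = RTE_ETH_FC_FULL;
	/* Tx pause: send pause frames for TC 3 when Rx queue 0 congests. */
	pfc_conf.tx_pause.tc = 3;
	pfc_conf.tx_pause.rx_qid = 0;
	pfc_conf.tx_pause.pause_time = 0x7FF;
	/* Rx pause: throttle Tx queue 0 when a pause frame for TC 3 arrives. */
	pfc_conf.rx_pause.tc = 3;
	pfc_conf.rx_pause.tx_qid = 0;

	return cnxk_nix_priority_flow_ctrl_queue_set(eth_dev, &pfc_conf);
}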
drivers/net/cnxk/cnxk_ethdev.c | 27 +++++ drivers/net/cnxk/cnxk_ethdev.h | 18 +++ drivers/net/cnxk/cnxk_ethdev_ops.c | 177 +++++++++++++++++++++++++++-- 3 files changed, 213 insertions(+), 9 deletions(-) diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c index 74f625553d..4248267a12 100644 --- a/drivers/net/cnxk/cnxk_ethdev.c +++ b/drivers/net/cnxk/cnxk_ethdev.c @@ -1260,6 +1260,8 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev) goto cq_fini; } + /* Initialize TC to SQ mapping as invalid */ + memset(dev->pfc_tc_sq_map, 0xFF, sizeof(dev->pfc_tc_sq_map)); /* * Restore queue config when reconfigure followed by * reconfigure and no queue configure invoked from application case. @@ -1548,6 +1550,7 @@ struct eth_dev_ops cnxk_eth_dev_ops = { .tx_burst_mode_get = cnxk_nix_tx_burst_mode_get, .flow_ctrl_get = cnxk_nix_flow_ctrl_get, .flow_ctrl_set = cnxk_nix_flow_ctrl_set, + .priority_flow_ctrl_queue_set = cnxk_nix_priority_flow_ctrl_queue_set, .dev_set_link_up = cnxk_nix_set_link_up, .dev_set_link_down = cnxk_nix_set_link_down, .get_module_info = cnxk_nix_get_module_info, @@ -1721,6 +1724,8 @@ cnxk_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool reset) { struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev); const struct eth_dev_ops *dev_ops = eth_dev->dev_ops; + struct rte_eth_pfc_queue_conf pfc_conf = {0}; + struct rte_eth_fc_conf fc_conf = {0}; struct roc_nix *nix = &dev->nix; int rc, i; @@ -1736,6 +1741,28 @@ cnxk_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool reset) roc_nix_npc_rx_ena_dis(nix, false); + /* Restore 802.3 Flow control configuration */ + fc_conf.mode = RTE_ETH_FC_NONE; + rc = cnxk_nix_flow_ctrl_set(eth_dev, &fc_conf); + + pfc_conf.mode = RTE_ETH_FC_NONE; + for (i = 0; i < CNXK_NIX_PFC_CHAN_COUNT; i++) { + if (dev->pfc_tc_sq_map[i] != 0xFFFF) { + pfc_conf.rx_pause.tx_qid = dev->pfc_tc_sq_map[i]; + pfc_conf.rx_pause.tc = i; + pfc_conf.tx_pause.rx_qid = i; + pfc_conf.tx_pause.tc = i; + rc = cnxk_nix_priority_flow_ctrl_queue_set(eth_dev, + &pfc_conf); + if (rc) + plt_err("Failed to reset PFC. 
error code(%d)", + rc); + } + } + + fc_conf.mode = RTE_ETH_FC_FULL; + rc = cnxk_nix_flow_ctrl_set(eth_dev, &fc_conf); + /* Disable and free rte_meter entries */ nix_meter_fini(dev); diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h index 5bfda3d815..c4f28625f3 100644 --- a/drivers/net/cnxk/cnxk_ethdev.h +++ b/drivers/net/cnxk/cnxk_ethdev.h @@ -137,12 +137,24 @@ /* SPI will be in 20 bits of tag */ #define CNXK_ETHDEV_SPI_TAG_MASK 0xFFFFFUL +#define CNXK_NIX_PFC_CHAN_COUNT 16 + struct cnxk_fc_cfg { enum rte_eth_fc_mode mode; uint8_t rx_pause; uint8_t tx_pause; }; +struct cnxk_pfc_cfg { + struct cnxk_fc_cfg fc_cfg; + uint16_t class_en; + uint16_t pause_time; + uint8_t rx_tc; + uint8_t rx_qid; + uint8_t tx_tc; + uint8_t tx_qid; +}; + struct cnxk_eth_qconf { union { struct rte_eth_txconf tx; @@ -366,6 +378,8 @@ struct cnxk_eth_dev { struct cnxk_eth_qconf *rx_qconf; /* Flow control configuration */ + uint16_t pfc_tc_sq_map[CNXK_NIX_PFC_CHAN_COUNT]; + struct cnxk_pfc_cfg pfc_cfg; struct cnxk_fc_cfg fc_cfg; /* PTP Counters */ @@ -467,6 +481,8 @@ int cnxk_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev, struct rte_eth_fc_conf *fc_conf); int cnxk_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev, struct rte_eth_fc_conf *fc_conf); +int cnxk_nix_priority_flow_ctrl_queue_set(struct rte_eth_dev *eth_dev, + struct rte_eth_pfc_queue_conf *pfc_conf); int cnxk_nix_set_link_up(struct rte_eth_dev *eth_dev); int cnxk_nix_set_link_down(struct rte_eth_dev *eth_dev); int cnxk_nix_get_module_info(struct rte_eth_dev *eth_dev, @@ -606,6 +622,8 @@ int nix_mtr_color_action_validate(struct rte_eth_dev *eth_dev, uint32_t id, uint32_t *prev_id, uint32_t *next_id, struct cnxk_mtr_policy_node *policy, int *tree_level); +int nix_priority_flow_ctrl_configure(struct rte_eth_dev *eth_dev, + struct cnxk_pfc_cfg *conf); /* Inlines */ static __rte_always_inline uint64_t diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c index ce5f1f7240..1b47fe9dc3 100644 --- a/drivers/net/cnxk/cnxk_ethdev_ops.c +++ b/drivers/net/cnxk/cnxk_ethdev_ops.c @@ -69,6 +69,8 @@ cnxk_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo) devinfo->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP | RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP; devinfo->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP; + + devinfo->pfc_queue_tc_max = roc_nix_chan_count_get(&dev->nix); return 0; } @@ -230,6 +232,8 @@ nix_fc_cq_config_set(struct cnxk_eth_dev *dev, uint16_t qid, bool enable) cq = &dev->cqs[qid]; fc_cfg.type = ROC_NIX_FC_CQ_CFG; fc_cfg.cq_cfg.enable = enable; + /* Map all CQs to last channel */ + fc_cfg.cq_cfg.tc = roc_nix_chan_count_get(nix) - 1; fc_cfg.cq_cfg.rq = qid; fc_cfg.cq_cfg.cq_drop = cq->drop_thresh; @@ -248,6 +252,8 @@ cnxk_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev, struct rte_eth_dev_data *data = eth_dev->data; struct cnxk_fc_cfg *fc = &dev->fc_cfg; struct roc_nix *nix = &dev->nix; + struct cnxk_eth_rxq_sp *rxq; + struct cnxk_eth_txq_sp *txq; uint8_t rx_pause, tx_pause; int rc, i; @@ -282,7 +288,12 @@ cnxk_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev, } for (i = 0; i < data->nb_rx_queues; i++) { - rc = nix_fc_cq_config_set(dev, i, tx_pause); + struct roc_nix_fc_cfg fc_cfg; + + memset(&fc_cfg, 0, sizeof(struct roc_nix_fc_cfg)); + rxq = ((struct cnxk_eth_rxq_sp *)data->rx_queues[i]) - + 1; + rc = nix_fc_cq_config_set(dev, rxq->qid, !!tx_pause); if (rc) return rc; } @@ -290,14 +301,19 @@ cnxk_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev, /* Check if RX pause frame is enabled or 
not */ if (fc->rx_pause ^ rx_pause) { - struct roc_nix_fc_cfg fc_cfg; - - memset(&fc_cfg, 0, sizeof(struct roc_nix_fc_cfg)); - fc_cfg.type = ROC_NIX_FC_TM_CFG; - fc_cfg.tm_cfg.enable = !!rx_pause; - rc = roc_nix_fc_config_set(nix, &fc_cfg); - if (rc) - return rc; + for (i = 0; i < data->nb_tx_queues; i++) { + struct roc_nix_fc_cfg fc_cfg; + + memset(&fc_cfg, 0, sizeof(struct roc_nix_fc_cfg)); + txq = ((struct cnxk_eth_txq_sp *)data->tx_queues[i]) - + 1; + fc_cfg.type = ROC_NIX_FC_TM_CFG; + fc_cfg.tm_cfg.sq = txq->qid; + fc_cfg.tm_cfg.enable = !!rx_pause; + rc = roc_nix_fc_config_set(nix, &fc_cfg); + if (rc) + return rc; + } } rc = roc_nix_fc_mode_set(nix, mode_map[fc_conf->mode]); @@ -311,6 +327,29 @@ cnxk_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev, return rc; } +int +cnxk_nix_priority_flow_ctrl_queue_set(struct rte_eth_dev *eth_dev, + struct rte_eth_pfc_queue_conf *pfc_conf) +{ + struct cnxk_pfc_cfg conf = {0}; + int rc; + + conf.fc_cfg.mode = pfc_conf->mode; + + conf.pause_time = pfc_conf->tx_pause.pause_time; + conf.rx_tc = pfc_conf->tx_pause.tc; + conf.rx_qid = pfc_conf->tx_pause.rx_qid; + + conf.tx_tc = pfc_conf->rx_pause.tc; + conf.tx_qid = pfc_conf->rx_pause.tx_qid; + + rc = nix_priority_flow_ctrl_configure(eth_dev, &conf); + if (rc) + return rc; + + return rc; +} + int cnxk_nix_flow_ops_get(struct rte_eth_dev *eth_dev, const struct rte_flow_ops **ops) @@ -911,3 +950,123 @@ cnxk_nix_mc_addr_list_configure(struct rte_eth_dev *eth_dev, return 0; } + +int +nix_priority_flow_ctrl_configure(struct rte_eth_dev *eth_dev, + struct cnxk_pfc_cfg *conf) +{ + enum roc_nix_fc_mode mode_map[] = {ROC_NIX_FC_NONE, ROC_NIX_FC_RX, + ROC_NIX_FC_TX, ROC_NIX_FC_FULL}; + struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev); + struct rte_eth_dev_data *data = eth_dev->data; + struct cnxk_pfc_cfg *pfc = &dev->pfc_cfg; + struct roc_nix *nix = &dev->nix; + struct roc_nix_pfc_cfg pfc_cfg; + struct roc_nix_fc_cfg fc_cfg; + struct cnxk_eth_rxq_sp *rxq; + struct cnxk_eth_txq_sp *txq; + uint8_t rx_pause, tx_pause; + enum rte_eth_fc_mode mode; + struct roc_nix_cq *cq; + struct roc_nix_sq *sq; + int rc; + + if (roc_nix_is_vf_or_sdp(nix)) { + plt_err("Prio flow ctrl config is not allowed on VF and SDP"); + return -ENOTSUP; + } + + if (roc_model_is_cn96_ax() && data->dev_started) { + /* On Ax, CQ should be in disabled state + * while setting flow control configuration. 
+ */ + plt_info("Stop the port=%d for setting flow control", + data->port_id); + return 0; + } + + if (dev->pfc_tc_sq_map[conf->tx_tc] != 0xFFFF && + dev->pfc_tc_sq_map[conf->tx_tc] != conf->tx_qid) { + plt_err("Same TC can not be configured on multiple SQs"); + return -ENOTSUP; + } + + mode = conf->fc_cfg.mode; + rx_pause = (mode == RTE_FC_FULL) || (mode == RTE_FC_RX_PAUSE); + tx_pause = (mode == RTE_FC_FULL) || (mode == RTE_FC_TX_PAUSE); + + /* Configure CQs */ + memset(&fc_cfg, 0, sizeof(struct roc_nix_fc_cfg)); + rxq = ((struct cnxk_eth_rxq_sp *)data->rx_queues[conf->rx_qid]) - 1; + cq = &dev->cqs[rxq->qid]; + fc_cfg.type = ROC_NIX_FC_CQ_CFG; + fc_cfg.cq_cfg.tc = conf->rx_tc; + fc_cfg.cq_cfg.enable = !!tx_pause; + fc_cfg.cq_cfg.rq = cq->qid; + fc_cfg.cq_cfg.cq_drop = cq->drop_thresh; + rc = roc_nix_fc_config_set(nix, &fc_cfg); + if (rc) + goto exit; + + /* Check if RX pause frame is enabled or not */ + if (pfc->fc_cfg.rx_pause ^ rx_pause) { + if (conf->tx_qid >= eth_dev->data->nb_tx_queues) + goto exit; + + if ((roc_nix_tm_tree_type_get(nix) == ROC_NIX_TM_DEFAULT) && + eth_dev->data->nb_tx_queues > 1) { + /* + * Disabled xmit will be enabled when + * new topology is available. + */ + rc = roc_nix_tm_hierarchy_disable(nix); + if (rc) + goto exit; + + rc = roc_nix_tm_pfc_prepare_tree(nix); + if (rc) + goto exit; + + rc = roc_nix_tm_hierarchy_enable(nix, ROC_NIX_TM_PFC, + true); + if (rc) + goto exit; + } + } + + txq = ((struct cnxk_eth_txq_sp *)data->tx_queues[conf->tx_qid]) - 1; + sq = &dev->sqs[txq->qid]; + memset(&fc_cfg, 0, sizeof(struct roc_nix_fc_cfg)); + fc_cfg.type = ROC_NIX_FC_TM_CFG; + fc_cfg.tm_cfg.sq = sq->qid; + fc_cfg.tm_cfg.tc = conf->tx_tc; + fc_cfg.tm_cfg.enable = !!rx_pause; + rc = roc_nix_fc_config_set(nix, &fc_cfg); + if (rc) + return rc; + + dev->pfc_tc_sq_map[conf->tx_tc] = sq->qid; + + /* Configure MAC block */ + if (tx_pause) + pfc->class_en |= BIT(conf->rx_tc); + else + pfc->class_en &= ~BIT(conf->rx_tc); + + if (pfc->class_en) + mode = RTE_ETH_FC_FULL; + + memset(&pfc_cfg, 0, sizeof(struct roc_nix_pfc_cfg)); + pfc_cfg.mode = mode_map[mode]; + pfc_cfg.tc = pfc->class_en; + rc = roc_nix_pfc_mode_set(nix, &pfc_cfg); + if (rc) + return rc; + + pfc->fc_cfg.rx_pause = rx_pause; + pfc->fc_cfg.tx_pause = tx_pause; + pfc->fc_cfg.mode = mode; + +exit: + return rc; +}
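For illustration (not part of the patch), tearing this configuration
down mirrors the per-TC reset loop in cnxk_eth_dev_uninit() above; the
helper name is an assumption.

static int
example_pfc_reset_tc(struct rte_eth_dev *eth_dev, uint8_t tc,
		     uint16_t tx_qid)
{
	struct rte_eth_pfc_queue_conf pfc_conf;

	memset(&pfc_conf, 0, sizeof(pfc_conf));
	pfc_conf.mode = RTE_ETH_FC_NONE;
	pfc_conf.rx_pause.tc = tc;
	pfc_conf.rx_pause.tx_qid = tx_qid;
	pfc_conf.tx_pause.tc = tc;
	pfc_conf.tx_pause.rx_qid = tc; /* uninit maps rx_qid to the TC index */

	return cnxk_nix_priority_flow_ctrl_queue_set(eth_dev, &pfc_conf);
}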