From patchwork Tue May 23 20:03:54 2023
X-Patchwork-Submitter: Akhil Goyal
X-Patchwork-Id: 127256
X-Patchwork-Delegate: jerinj@marvell.com
From: Akhil Goyal
Subject: [PATCH 08/15] common/cnxk: add MACsec port configuration
Date: Wed, 24 May 2023 01:33:54 +0530
Message-ID: <20230523200401.1945974-9-gakhil@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230523200401.1945974-1-gakhil@marvell.com>
References: <20220928124516.93050-1-gakhil@marvell.com> <20230523200401.1945974-1-gakhil@marvell.com>

Added ROC APIs for MACsec port configurations

Signed-off-by: Ankur Dwivedi
Signed-off-by: Vamsi Attunuru
Signed-off-by: Akhil Goyal
---
 drivers/common/cnxk/roc_mbox.h  |  40 ++++
 drivers/common/cnxk/roc_mcs.c   | 345 ++++++++++++++++++++++++++++++++
 drivers/common/cnxk/roc_mcs.h   |  48 +++++
 drivers/common/cnxk/version.map |   4 +
 4 files changed, 437 insertions(+)

diff --git a/drivers/common/cnxk/roc_mbox.h b/drivers/common/cnxk/roc_mbox.h
index 6e2b32a43f..96515deafd 100644
--- a/drivers/common/cnxk/roc_mbox.h
+++ b/drivers/common/cnxk/roc_mbox.h
@@ -299,6 +299,9 @@ struct mbox_msghdr {
 	M(MCS_INTR_CFG, 0xa012, mcs_intr_cfg, mcs_intr_cfg, msg_rsp) \
 	M(MCS_SET_LMAC_MODE, 0xa013, mcs_set_lmac_mode, mcs_set_lmac_mode, msg_rsp) \
 	M(MCS_SET_PN_THRESHOLD, 0xa014, mcs_set_pn_threshold, mcs_set_pn_threshold, msg_rsp) \
+	M(MCS_PORT_RESET, 0xa018, mcs_port_reset, mcs_port_reset_req, msg_rsp) \
+	M(MCS_PORT_CFG_SET, 0xa019, mcs_port_cfg_set, mcs_port_cfg_set_req, msg_rsp) \
+	M(MCS_PORT_CFG_GET, 0xa020, mcs_port_cfg_get, mcs_port_cfg_get_req, mcs_port_cfg_get_rsp) \
 
 /* Messages initiated by AF (range 0xC00 - 0xDFF) */
 #define MBOX_UP_CGX_MESSAGES \
@@ -899,6 +902,43 @@ struct mcs_set_pn_threshold {
 	uint64_t __io rsvd;
 };
 
+struct mcs_port_cfg_set_req {
+	struct mbox_msghdr hdr;
+	uint8_t __io cstm_tag_rel_mode_sel;
+	uint8_t __io custom_hdr_enb;
+	uint8_t __io fifo_skid;
+	uint8_t __io lmac_mode;
+	uint8_t __io lmac_id;
+	uint8_t __io mcs_id;
+	uint64_t __io rsvd;
+};
+
+struct mcs_port_cfg_get_req {
+	struct mbox_msghdr hdr;
+	uint8_t __io lmac_id;
+	uint8_t __io mcs_id;
+	uint64_t __io rsvd;
+};
+
+struct mcs_port_cfg_get_rsp {
+	struct mbox_msghdr hdr;
+	uint8_t __io cstm_tag_rel_mode_sel;
+	uint8_t __io custom_hdr_enb;
+	uint8_t __io fifo_skid;
+	uint8_t __io lmac_mode;
+	uint8_t __io lmac_id;
+	uint8_t __io mcs_id;
+	uint64_t __io rsvd;
+};
+
+struct mcs_port_reset_req {
+	struct mbox_msghdr hdr;
+	uint8_t __io reset;
+	uint8_t __io mcs_id;
+	uint8_t __io lmac_id;
+	uint64_t __io rsvd;
+};
+
 struct mcs_stats_req {
 	struct mbox_msghdr hdr;
 	uint8_t __io id;
diff --git a/drivers/common/cnxk/roc_mcs.c b/drivers/common/cnxk/roc_mcs.c
index c2f0a46f23..32cb8d106d 100644
--- a/drivers/common/cnxk/roc_mcs.c
+++ b/drivers/common/cnxk/roc_mcs.c
@@ -80,6 +80,25 @@ roc_mcs_active_lmac_set(struct roc_mcs *mcs, struct roc_mcs_set_active_lmac *lma
 	return mbox_process_msg(mcs->mbox, (void *)&rsp);
 }
 
+static int
+mcs_port_reset_set(struct roc_mcs *mcs, struct roc_mcs_port_reset_req *port, uint8_t reset)
+{
+	struct mcs_port_reset_req *req;
+	struct msg_rsp *rsp;
+
+	MCS_SUPPORT_CHECK;
+
+	req = mbox_alloc_msg_mcs_port_reset(mcs->mbox);
+	if (req == NULL)
+		return -ENOMEM;
+
+	req->reset = reset;
+	req->lmac_id = port->port_id;
+	req->mcs_id = mcs->idx;
+
+	return mbox_process_msg(mcs->mbox, (void *)&rsp);
+}
+
 int
 roc_mcs_lmac_mode_set(struct roc_mcs *mcs, struct roc_mcs_set_lmac_mode *port)
 {
@@ -125,6 +144,64 @@ roc_mcs_pn_threshold_set(struct roc_mcs *mcs, struct roc_mcs_set_pn_threshold *p
 	return mbox_process_msg(mcs->mbox, (void *)&rsp);
 }
 
+int
+roc_mcs_port_cfg_set(struct roc_mcs *mcs, struct roc_mcs_port_cfg_set_req *req)
+{
+	struct mcs_port_cfg_set_req *set_req;
+	struct msg_rsp *rsp;
+
+	MCS_SUPPORT_CHECK;
+
+	if (req == NULL)
+		return -EINVAL;
+
+	set_req = mbox_alloc_msg_mcs_port_cfg_set(mcs->mbox);
+	if (set_req == NULL)
+		return -ENOMEM;
+
+	set_req->cstm_tag_rel_mode_sel = req->cstm_tag_rel_mode_sel;
+	set_req->custom_hdr_enb = req->custom_hdr_enb;
+	set_req->fifo_skid = req->fifo_skid;
+	set_req->lmac_mode = req->port_mode;
+	set_req->lmac_id = req->port_id;
+	set_req->mcs_id = mcs->idx;
+
+	return mbox_process_msg(mcs->mbox, (void *)&rsp);
+}
+
+int
+roc_mcs_port_cfg_get(struct roc_mcs *mcs, struct roc_mcs_port_cfg_get_req *req,
+		     struct roc_mcs_port_cfg_get_rsp *rsp)
+{
+	struct mcs_port_cfg_get_req *get_req;
+	struct mcs_port_cfg_get_rsp *get_rsp;
+	int rc;
+
+	MCS_SUPPORT_CHECK;
+
+	if (req == NULL)
+		return -EINVAL;
+
+	get_req = mbox_alloc_msg_mcs_port_cfg_get(mcs->mbox);
+	if (get_req == NULL)
+		return -ENOMEM;
+
+	get_req->lmac_id = req->port_id;
+	get_req->mcs_id = mcs->idx;
+
+	rc = mbox_process_msg(mcs->mbox, (void *)&get_rsp);
+	if (rc)
+		return rc;
+
+	rsp->cstm_tag_rel_mode_sel = get_rsp->cstm_tag_rel_mode_sel;
+	rsp->custom_hdr_enb = get_rsp->custom_hdr_enb;
+	rsp->fifo_skid = get_rsp->fifo_skid;
+	rsp->port_mode = get_rsp->lmac_mode;
+	rsp->port_id = get_rsp->lmac_id;
+
+	return 0;
+}
+
 int
 roc_mcs_intr_configure(struct roc_mcs *mcs, struct roc_mcs_intr_cfg *config)
 {
@@ -146,6 +223,274 @@ roc_mcs_intr_configure(struct roc_mcs *mcs, struct roc_mcs_intr_cfg *config)
 	return mbox_process_msg(mcs->mbox, (void *)&rsp);
 }
 
+int
+roc_mcs_port_recovery(struct roc_mcs *mcs, union roc_mcs_event_data *mdata, uint8_t port_id)
+{
+	struct mcs_priv *priv = roc_mcs_to_mcs_priv(mcs);
+	struct roc_mcs_pn_table_write_req pn_table = {0};
+	struct roc_mcs_rx_sc_sa_map rx_map = {0};
+	struct roc_mcs_tx_sc_sa_map tx_map = {0};
+	struct roc_mcs_port_reset_req port = {0};
+	struct roc_mcs_clear_stats stats = {0};
+	int tx_cnt = 0, rx_cnt = 0, rc = 0;
+	uint64_t set;
+
+	port.port_id = port_id;
+	rc = mcs_port_reset_set(mcs, &port, 1);
+
+	/* Reset TX/RX PN tables */
+	for (int i = 0; i < (priv->sa_entries << 1); i++) {
+		set = plt_bitmap_get(priv->port_rsrc[port_id].sa_bmap, i);
+		if (set) {
+			pn_table.pn_id = i;
+			pn_table.next_pn = 1;
+			pn_table.dir = MCS_RX;
+			if (i >= priv->sa_entries) {
+				pn_table.dir = MCS_TX;
+				pn_table.pn_id -= priv->sa_entries;
+			}
+			rc = roc_mcs_pn_table_write(mcs, &pn_table);
+			if (rc)
+				return rc;
+
+			if (i >= priv->sa_entries)
+				tx_cnt++;
+			else
+				rx_cnt++;
+		}
+	}
+
+	if (tx_cnt || rx_cnt) {
+		mdata->tx_sa_array = plt_zmalloc(tx_cnt * sizeof(uint16_t), 0);
+		if (tx_cnt && (mdata->tx_sa_array == NULL)) {
+			rc = -ENOMEM;
+			goto exit;
+		}
+		mdata->rx_sa_array = plt_zmalloc(rx_cnt * sizeof(uint16_t), 0);
+		if (rx_cnt && (mdata->rx_sa_array == NULL)) {
+			rc = -ENOMEM;
+			goto exit;
+		}
+
+		mdata->num_tx_sa = tx_cnt;
+		mdata->num_rx_sa = rx_cnt;
+		for (int i = 0; i < (priv->sa_entries << 1); i++) {
+			set = plt_bitmap_get(priv->port_rsrc[port_id].sa_bmap, i);
+			if (set) {
+				if (i >= priv->sa_entries)
+					mdata->tx_sa_array[--tx_cnt] = i - priv->sa_entries;
+				else
+					mdata->rx_sa_array[--rx_cnt] = i;
+			}
+		}
+	}
+	tx_cnt = 0;
+	rx_cnt = 0;
+
+	/* Reset Tx active SA to index:0 */
+	for (int i = priv->sc_entries; i < (priv->sc_entries << 1); i++) {
+		set = plt_bitmap_get(priv->port_rsrc[port_id].sc_bmap, i);
+		if (set) {
+			uint16_t sc_id = i - priv->sc_entries;
+
+			tx_map.sa_index0 = priv->port_rsrc[port_id].sc_conf[sc_id].tx.sa_idx0;
+			tx_map.sa_index1 = priv->port_rsrc[port_id].sc_conf[sc_id].tx.sa_idx1;
+			tx_map.rekey_ena = priv->port_rsrc[port_id].sc_conf[sc_id].tx.rekey_enb;
+			tx_map.sectag_sci = priv->port_rsrc[port_id].sc_conf[sc_id].tx.sci;
+			tx_map.sa_index0_vld = 1;
+			tx_map.sa_index1_vld = 0;
+			tx_map.tx_sa_active = 0;
+			tx_map.sc_id = sc_id;
+			rc = roc_mcs_tx_sc_sa_map_write(mcs, &tx_map);
+			if (rc)
+				return rc;
+
+			tx_cnt++;
+		}
+	}
+
+	if (tx_cnt) {
+		mdata->tx_sc_array = plt_zmalloc(tx_cnt * sizeof(uint16_t), 0);
+		if (tx_cnt && (mdata->tx_sc_array == NULL)) {
+			rc = -ENOMEM;
+			goto exit;
+		}
+
+		mdata->num_tx_sc = tx_cnt;
+		for (int i = priv->sc_entries; i < (priv->sc_entries << 1); i++) {
+			set = plt_bitmap_get(priv->port_rsrc[port_id].sc_bmap, i);
+			if (set)
+				mdata->tx_sc_array[--tx_cnt] = i - priv->sc_entries;
+		}
+	}
+
+	/* Clear SA_IN_USE for active ANs in RX CPM */
+	for (int i = 0; i < priv->sc_entries; i++) {
+		set = plt_bitmap_get(priv->port_rsrc[port_id].sc_bmap, i);
+		if (set) {
+			rx_map.sa_index = priv->port_rsrc[port_id].sc_conf[i].rx.sa_idx;
+			rx_map.an = priv->port_rsrc[port_id].sc_conf[i].rx.an;
+			rx_map.sa_in_use = 0;
+			rx_map.sc_id = i;
+			rc = roc_mcs_rx_sc_sa_map_write(mcs, &rx_map);
+			if (rc)
+				return rc;
+
+			rx_cnt++;
+		}
+	}
+
+	/* Reset flow(flow/secy/sc/sa) stats mapped to this PORT */
+	for (int i = 0; i < (priv->tcam_entries << 1); i++) {
+		set = plt_bitmap_get(priv->port_rsrc[port_id].tcam_bmap, i);
+		if (set) {
+			stats.type = MCS_FLOWID_STATS;
+			stats.id = i;
+			stats.dir = MCS_RX;
+			if (i >= priv->sa_entries) {
+				stats.dir = MCS_TX;
+				stats.id -= priv->tcam_entries;
+			}
+			rc = roc_mcs_stats_clear(mcs, &stats);
+			if (rc)
+				return rc;
+		}
+	}
+	for (int i = 0; i < (priv->secy_entries << 1); i++) {
+		set = plt_bitmap_get(priv->port_rsrc[port_id].secy_bmap, i);
+		if (set) {
+			stats.type = MCS_SECY_STATS;
+			stats.id = i;
+			stats.dir = MCS_RX;
+			if (i >= priv->sa_entries) {
+				stats.dir = MCS_TX;
+				stats.id -= priv->secy_entries;
+			}
+			rc = roc_mcs_stats_clear(mcs, &stats);
+			if (rc)
+				return rc;
+		}
+	}
+	for (int i = 0; i < (priv->sc_entries << 1); i++) {
+		set = plt_bitmap_get(priv->port_rsrc[port_id].sc_bmap, i);
+		if (set) {
+			stats.type = MCS_SC_STATS;
+			stats.id = i;
+			stats.dir = MCS_RX;
+			if (i >= priv->sa_entries) {
+				stats.dir = MCS_TX;
+				stats.id -= priv->sc_entries;
+			}
+			rc = roc_mcs_stats_clear(mcs, &stats);
+			if (rc)
+				return rc;
+		}
+	}
+	if (roc_model_is_cn10kb_a0()) {
+		for (int i = 0; i < (priv->sa_entries << 1); i++) {
+			set = plt_bitmap_get(priv->port_rsrc[port_id].sa_bmap, i);
+			if (set) {
+				stats.type = MCS_SA_STATS;
+				stats.id = i;
+				stats.dir = MCS_RX;
+				if (i >= priv->sa_entries) {
+					stats.dir = MCS_TX;
+					stats.id -= priv->sa_entries;
+				}
+				rc = roc_mcs_stats_clear(mcs, &stats);
+				if (rc)
+					return rc;
+			}
+		}
+	}
+	{
+		stats.type = MCS_PORT_STATS;
+		stats.id = port_id;
+		rc = roc_mcs_stats_clear(mcs, &stats);
+		if (rc)
+			return rc;
+	}
+
+	if (rx_cnt) {
+		mdata->rx_sc_array = plt_zmalloc(rx_cnt * sizeof(uint16_t), 0);
+		if (mdata->rx_sc_array == NULL) {
+			rc = -ENOMEM;
+			goto exit;
+		}
+		mdata->sc_an_array = plt_zmalloc(rx_cnt * sizeof(uint8_t), 0);
+		if (mdata->sc_an_array == NULL) {
+			rc = -ENOMEM;
+			goto exit;
+		}
+
+		mdata->num_rx_sc = rx_cnt;
+	}
+
+	/* Reactivate in-use ANs for active SCs in RX CPM */
+	for (int i = 0; i < priv->sc_entries; i++) {
+		set = plt_bitmap_get(priv->port_rsrc[port_id].sc_bmap, i);
+		if (set) {
+			rx_map.sa_index = priv->port_rsrc[port_id].sc_conf[i].rx.sa_idx;
+			rx_map.an = priv->port_rsrc[port_id].sc_conf[i].rx.an;
+			rx_map.sa_in_use = 1;
+			rx_map.sc_id = i;
+			rc = roc_mcs_rx_sc_sa_map_write(mcs, &rx_map);
+			if (rc)
+				return rc;
+
+			mdata->rx_sc_array[--rx_cnt] = i;
+			mdata->sc_an_array[rx_cnt] = priv->port_rsrc[port_id].sc_conf[i].rx.an;
+		}
+	}
+
+	port.port_id = port_id;
+	rc = mcs_port_reset_set(mcs, &port, 0);
+
+	return rc;
+exit:
+	if (mdata->num_tx_sa)
+		plt_free(mdata->tx_sa_array);
+	if (mdata->num_rx_sa)
+		plt_free(mdata->rx_sa_array);
+	if (mdata->num_tx_sc)
+		plt_free(mdata->tx_sc_array);
+	if (mdata->num_rx_sc) {
+		plt_free(mdata->rx_sc_array);
+		plt_free(mdata->sc_an_array);
+	}
+	return rc;
+}
+
+int
+roc_mcs_port_reset(struct roc_mcs *mcs, struct roc_mcs_port_reset_req *port)
+{
+	struct roc_mcs_event_desc desc = {0};
+	int rc;
+
+	/* Initiate port reset and software recovery */
+	rc = roc_mcs_port_recovery(mcs, &desc.metadata, port->port_id);
+	if (rc)
+		goto exit;
+
+	desc.type = ROC_MCS_EVENT_PORT_RESET_RECOVERY;
+	/* Notify the entity details to the application which are recovered */
+	mcs_event_cb_process(mcs, &desc);
+
+exit:
+	if (desc.metadata.num_tx_sa)
+		plt_free(desc.metadata.tx_sa_array);
+	if (desc.metadata.num_rx_sa)
+		plt_free(desc.metadata.rx_sa_array);
+	if (desc.metadata.num_tx_sc)
+		plt_free(desc.metadata.tx_sc_array);
+	if (desc.metadata.num_rx_sc) {
+		plt_free(desc.metadata.rx_sc_array);
+		plt_free(desc.metadata.sc_an_array);
+	}
+
+	return rc;
+}
+
 int
 roc_mcs_event_cb_register(struct roc_mcs *mcs, enum roc_mcs_event_type event,
 			  roc_mcs_dev_cb_fn cb_fn, void *cb_arg, void *userdata)
diff --git a/drivers/common/cnxk/roc_mcs.h b/drivers/common/cnxk/roc_mcs.h
index d53aeb6b1e..77f2cee681 100644
--- a/drivers/common/cnxk/roc_mcs.h
+++ b/drivers/common/cnxk/roc_mcs.h
@@ -163,6 +163,43 @@ struct roc_mcs_set_pn_threshold {
 	uint64_t rsvd;
 };
 
+struct roc_mcs_port_cfg_set_req {
+	/* Index of custom tag (= cstm_indx[x] in roc_mcs_custom_tag_cfg_get_rsp struct) to use
+	 * when TX SECY_PLCY_MEMX[SECTAG_INSERT_MODE] = 0 (relative offset mode)
+	 */
+	uint8_t cstm_tag_rel_mode_sel;
+	/* In ingress path, custom_hdr_enb = 1 when the port is expected to receive pkts
+	 * that have 8B custom header before DMAC
+	 */
+	uint8_t custom_hdr_enb;
+	/* Valid fifo skid values are 14,28,56 for 25G,50G,100G respectively
+	 * FIFOs need to be configured based on the port_mode, valid only for 105N
+	 */
+	uint8_t fifo_skid;
+	uint8_t port_mode; /* 2'b00 - 25G or less, 2'b01 - 50G, 2'b10 - 100G */
+	uint8_t port_id;
+	uint64_t rsvd;
+};
+
+struct roc_mcs_port_cfg_get_req {
+	uint8_t port_id;
+	uint64_t rsvd;
+};
+
+struct roc_mcs_port_cfg_get_rsp {
+	uint8_t cstm_tag_rel_mode_sel;
+	uint8_t custom_hdr_enb;
+	uint8_t fifo_skid;
+	uint8_t port_mode;
+	uint8_t port_id;
+	uint64_t rsvd;
+};
+
+struct roc_mcs_port_reset_req {
+	uint8_t port_id;
+	uint64_t rsvd;
+};
+
 struct roc_mcs_stats_req {
 	uint8_t id;
 	uint8_t dir;
@@ -365,6 +402,13 @@ __roc_api int roc_mcs_active_lmac_set(struct roc_mcs *mcs, struct roc_mcs_set_ac
 __roc_api int roc_mcs_lmac_mode_set(struct roc_mcs *mcs, struct roc_mcs_set_lmac_mode *port);
 /* (X)PN threshold set */
 __roc_api int roc_mcs_pn_threshold_set(struct roc_mcs *mcs, struct roc_mcs_set_pn_threshold *pn);
+/* Reset port */
+__roc_api int roc_mcs_port_reset(struct roc_mcs *mcs, struct roc_mcs_port_reset_req *port);
+/* Set port config */
+__roc_api int roc_mcs_port_cfg_set(struct roc_mcs *mcs, struct roc_mcs_port_cfg_set_req *req);
+/* Get port config */
+__roc_api int roc_mcs_port_cfg_get(struct roc_mcs *mcs, struct roc_mcs_port_cfg_get_req *req,
+				   struct roc_mcs_port_cfg_get_rsp *rsp);
 
 /* Resource allocation and free */
 __roc_api int roc_mcs_alloc_rsrc(struct roc_mcs *mcs, struct roc_mcs_alloc_rsrc_req *req,
@@ -434,4 +478,8 @@ __roc_api int roc_mcs_event_cb_unregister(struct roc_mcs *mcs, enum roc_mcs_even
 /* Configure interrupts */
 __roc_api int roc_mcs_intr_configure(struct roc_mcs *mcs, struct roc_mcs_intr_cfg *config);
 
+/* Port recovery from fatal errors */
+__roc_api int roc_mcs_port_recovery(struct roc_mcs *mcs, union roc_mcs_event_data *mdata,
+				    uint8_t port_id);
+
 #endif /* _ROC_MCS_H_ */
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 97f107dd05..9ba804a04f 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -151,6 +151,10 @@ INTERNAL {
 	roc_mcs_pn_table_write;
 	roc_mcs_pn_table_read;
 	roc_mcs_pn_threshold_set;
+	roc_mcs_port_cfg_get;
+	roc_mcs_port_cfg_set;
+	roc_mcs_port_recovery;
+	roc_mcs_port_reset;
 	roc_mcs_port_stats_get;
 	roc_mcs_rx_sc_cam_enable;
 	roc_mcs_rx_sc_cam_read;
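
Usage sketch (illustrative, not part of the diff above): one way a caller could drive the new
port APIs - read the current per-port configuration, re-program it, and finally reset the port.
The wrapper name mcs_port_example(), the roc_api.h include and the assumption that the
struct roc_mcs handle is already initialized (e.g. via roc_mcs_dev_init()) are illustrative;
only roc_mcs_port_cfg_get()/roc_mcs_port_cfg_set()/roc_mcs_port_reset() and their
request/response fields come from this patch.

	#include "roc_api.h"

	static int
	mcs_port_example(struct roc_mcs *mcs, uint8_t port_id)
	{
		struct roc_mcs_port_cfg_get_req get_req = {0};
		struct roc_mcs_port_cfg_get_rsp get_rsp = {0};
		struct roc_mcs_port_cfg_set_req set_req = {0};
		struct roc_mcs_port_reset_req reset_req = {0};
		int rc;

		/* Read back the current per-port configuration. */
		get_req.port_id = port_id;
		rc = roc_mcs_port_cfg_get(mcs, &get_req, &get_rsp);
		if (rc)
			return rc;

		/* Re-program the port, e.g. enable the 8B custom header on RX while
		 * keeping the reported port mode, FIFO skid and custom tag selection.
		 */
		set_req.port_id = port_id;
		set_req.port_mode = get_rsp.port_mode;
		set_req.fifo_skid = get_rsp.fifo_skid;
		set_req.cstm_tag_rel_mode_sel = get_rsp.cstm_tag_rel_mode_sel;
		set_req.custom_hdr_enb = 1;
		rc = roc_mcs_port_cfg_set(mcs, &set_req);
		if (rc)
			return rc;

		/* On a fatal error, reset the port and let the recovery path run. */
		reset_req.port_id = port_id;
		return roc_mcs_port_reset(mcs, &reset_req);
	}

roc_mcs_port_reset() internally runs roc_mcs_port_recovery() and reports the recovered
SA/SC/AN lists through the registered callback as a ROC_MCS_EVENT_PORT_RESET_RECOVERY
event, so callers normally only need the reset entry point.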