From patchwork Thu Feb 1 13:07:36 2024
X-Patchwork-Submitter: Harman Kalra
X-Patchwork-Id: 136264
X-Patchwork-Delegate: jerinj@marvell.com
From: Harman Kalra
To: Thomas Monjalon, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori,
 Satha Rao, Harman Kalra, Anatoly Burakov
Subject: [PATCH v3 05/23] net/cnxk: probing representor ports
Date: Thu, 1 Feb 2024 18:37:36 +0530
Message-ID: <20240201130754.194352-6-hkalra@marvell.com>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20240201130754.194352-1-hkalra@marvell.com>
References: <20230811163419.165790-1-hkalra@marvell.com>
 <20240201130754.194352-1-hkalra@marvell.com>
List-Id: DPDK patches and discussions

Basic skeleton for probing representor devices. If the PF device is passed
with "representor" devargs, representor ports get probed as separate ethdev
devices.
Signed-off-by: Harman Kalra
---
 MAINTAINERS                     |   1 +
 doc/guides/nics/cnxk.rst        |  35 +++++
 drivers/net/cnxk/cnxk_eswitch.c |  12 ++
 drivers/net/cnxk/cnxk_eswitch.h |   8 +-
 drivers/net/cnxk/cnxk_rep.c     | 256 ++++++++++++++++++++++++++++++++
 drivers/net/cnxk/cnxk_rep.h     |  50 +++++++
 drivers/net/cnxk/cnxk_rep_ops.c | 129 ++++++++++++++++
 drivers/net/cnxk/meson.build    |   2 +
 8 files changed, 492 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/cnxk/cnxk_rep.c
 create mode 100644 drivers/net/cnxk/cnxk_rep.h
 create mode 100644 drivers/net/cnxk/cnxk_rep_ops.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 0d1c8126e3..2716178e18 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -827,6 +827,7 @@ M: Nithin Dabilpuram
 M: Kiran Kumar K
 M: Sunil Kumar Kori
 M: Satha Rao
+M: Harman Kalra
 T: git://dpdk.org/next/dpdk-next-net-mrvl
 F: drivers/common/cnxk/
 F: drivers/net/cnxk/
diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
index 58cb8e2283..496474913f 100644
--- a/doc/guides/nics/cnxk.rst
+++ b/doc/guides/nics/cnxk.rst
@@ -37,6 +37,7 @@ Features of the CNXK Ethdev PMD are:
 - Inline IPsec processing support
 - Ingress meter support
 - Queue based priority flow control support
+- Port representors

 Prerequisites
 -------------
@@ -613,6 +614,40 @@ Runtime Config Options for inline device
   With the above configuration, driver would poll for aging flows every 50
   seconds.

+Port Representors
+-----------------
+
+The CNXK driver supports the port representor model by adding virtual Ethernet
+ports providing a logical representation in DPDK for physical function (PF) or
+SR-IOV virtual function (VF) devices for control and monitoring.
+
+The base (parent) device underneath the representor ports is an eswitch
+device, which is not a cnxk Ethernet device but has NIC Rx and Tx
+capabilities. Each representor port is backed by an RQ and SQ pair of this
+eswitch device.
+
+The implementation supports representors for both physical functions and
+virtual functions.
+
+Port representor ethdev instances can be spawned on an as-needed basis
+through configuration parameters passed to the driver of the underlying
+base device using devargs ``-a ,representor=pf*vf*``.
+
+.. note::
+
+   Representor ports to be created for respective representees should be
+   defined via the standard representor devargs patterns.
+   E.g. to create a representor for representee PF1VF0, the devargs to be
+   passed is ``-a ,representor=pf1vf0``
+
+   The implementation supports creation of multiple port representors with
+   a pattern such as:
+   ``-a ,representor=[pf0vf[1,2],pf1vf[2-5]]``
+
+The port representor PMD supports the following operations:
+
+- Get PF/VF statistics
+- Flow operations - create, validate, destroy, query, flush, dump
+
 Debugging Options
 -----------------
diff --git a/drivers/net/cnxk/cnxk_eswitch.c b/drivers/net/cnxk/cnxk_eswitch.c
index df1011cf7a..4b2c907f9f 100644
--- a/drivers/net/cnxk/cnxk_eswitch.c
+++ b/drivers/net/cnxk/cnxk_eswitch.c
@@ -3,6 +3,7 @@
  */

 #include
+#include

 #define CNXK_NIX_DEF_SQ_COUNT 512

@@ -62,6 +63,10 @@ cnxk_eswitch_dev_remove(struct rte_pci_device *pci_dev)
 		goto exit;
 	}

+	/* Remove representor devices associated with PF */
+	if (eswitch_dev->repr_cnt.nb_repr_created)
+		cnxk_rep_dev_remove(eswitch_dev);
+
 	/* Cleanup HW resources */
 	eswitch_hw_rsrc_cleanup(eswitch_dev, pci_dev);

@@ -655,6 +660,13 @@ cnxk_eswitch_dev_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pc
 		    eswitch_dev->repr_cnt.max_repr, eswitch_dev->repr_cnt.nb_repr_created,
 		    roc_nix_get_pf_func(&eswitch_dev->nix));

+	/* Probe representor ports */
+	rc = cnxk_rep_dev_probe(pci_dev, eswitch_dev);
+	if (rc) {
+		plt_err("Failed to probe representor ports");
+		goto rsrc_cleanup;
+	}
+
 	/* Spinlock for synchronization between representors traffic and control
 	 * messages
 	 */
diff --git a/drivers/net/cnxk/cnxk_eswitch.h b/drivers/net/cnxk/cnxk_eswitch.h
index 6ff296399e..dcd5add6d0 100644
@@ -66,6 +66,11
@@ struct cnxk_eswitch_repr_cnt {
 	uint16_t nb_repr_started;
 };

+struct cnxk_eswitch_switch_domain {
+	uint16_t switch_domain_id;
+	uint16_t pf;
+};
+
 struct cnxk_rep_info {
 	struct rte_eth_dev *rep_eth_dev;
 };
@@ -121,7 +126,8 @@ struct cnxk_eswitch_dev {
 	/* Port representor fields */
 	rte_spinlock_t rep_lock;
-	uint16_t switch_domain_id;
+	uint16_t nb_switch_domain;
+	struct cnxk_eswitch_switch_domain sw_dom[RTE_MAX_ETHPORTS];
 	uint16_t eswitch_vdev;
 	struct cnxk_rep_info *rep_info;
 };
diff --git a/drivers/net/cnxk/cnxk_rep.c b/drivers/net/cnxk/cnxk_rep.c
new file mode 100644
index 0000000000..55156f5b56
--- /dev/null
+++ b/drivers/net/cnxk/cnxk_rep.c
@@ -0,0 +1,256 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2024 Marvell.
+ */
+#include
+
+#define PF_SHIFT 10
+#define PF_MASK  0x3F
+
+static uint16_t
+get_pf(uint16_t hw_func)
+{
+	return (hw_func >> PF_SHIFT) & PF_MASK;
+}
+
+static uint16_t
+switch_domain_id_allocate(struct cnxk_eswitch_dev *eswitch_dev, uint16_t pf)
+{
+	int i = 0;
+
+	for (i = 0; i < eswitch_dev->nb_switch_domain; i++) {
+		if (eswitch_dev->sw_dom[i].pf == pf)
+			return eswitch_dev->sw_dom[i].switch_domain_id;
+	}
+
+	return RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID;
+}
+
+int
+cnxk_rep_dev_uninit(struct rte_eth_dev *ethdev)
+{
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+
+	plt_rep_dbg("Representor port:%d uninit", ethdev->data->port_id);
+	rte_free(ethdev->data->mac_addrs);
+	ethdev->data->mac_addrs = NULL;
+
+	return 0;
+}
+
+int
+cnxk_rep_dev_remove(struct cnxk_eswitch_dev *eswitch_dev)
+{
+	int i, rc = 0;
+
+	for (i = 0; i < eswitch_dev->nb_switch_domain; i++) {
+		rc = rte_eth_switch_domain_free(eswitch_dev->sw_dom[i].switch_domain_id);
+		if (rc)
+			plt_err("Failed to free switch domain: %d", rc);
+	}
+
+	return rc;
+}
+
+static int
+cnxk_rep_parent_setup(struct cnxk_eswitch_dev *eswitch_dev)
+{
+	uint16_t pf, prev_pf = 0, switch_domain_id;
+	int rc, i, j = 0;
+
+	if (eswitch_dev->rep_info)
+		return 0;
+
+	eswitch_dev->rep_info =
+		plt_zmalloc(sizeof(eswitch_dev->rep_info[0]) * eswitch_dev->repr_cnt.max_repr, 0);
+	if (!eswitch_dev->rep_info) {
+		plt_err("Failed to alloc memory for rep info");
+		rc = -ENOMEM;
+		goto fail;
+	}
+
+	/* Allocate switch domain for all PFs (VFs will be under same domain as PF) */
+	for (i = 0; i < eswitch_dev->repr_cnt.max_repr; i++) {
+		pf = get_pf(eswitch_dev->nix.rep_pfvf_map[i]);
+		if (pf == prev_pf)
+			continue;
+
+		rc = rte_eth_switch_domain_alloc(&switch_domain_id);
+		if (rc) {
+			plt_err("Failed to alloc switch domain: %d", rc);
+			goto fail;
+		}
+		plt_rep_dbg("Allocated switch domain id %d for pf %d", switch_domain_id, pf);
+		eswitch_dev->sw_dom[j].switch_domain_id = switch_domain_id;
+		eswitch_dev->sw_dom[j].pf = pf;
+		prev_pf = pf;
+		j++;
+	}
+	eswitch_dev->nb_switch_domain = j;
+
+	return 0;
+fail:
+	return rc;
+}
+
+static uint16_t
+cnxk_rep_tx_burst(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	PLT_SET_USED(tx_queue);
+	PLT_SET_USED(tx_pkts);
+	PLT_SET_USED(nb_pkts);
+
+	return 0;
+}
+
+static uint16_t
+cnxk_rep_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	PLT_SET_USED(rx_queue);
+	PLT_SET_USED(rx_pkts);
+	PLT_SET_USED(nb_pkts);
+
+	return 0;
+}
+
+static int
+cnxk_rep_dev_init(struct rte_eth_dev *eth_dev, void *params)
+{
+	struct cnxk_rep_dev *rep_params = (struct cnxk_rep_dev *)params;
+	struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(eth_dev);
+
+	rep_dev->port_id = rep_params->port_id;
+	rep_dev->switch_domain_id = rep_params->switch_domain_id;
+	rep_dev->parent_dev = rep_params->parent_dev;
+	rep_dev->hw_func = rep_params->hw_func;
+	rep_dev->rep_id = rep_params->rep_id;
+
+	eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
+	eth_dev->data->representor_id = rep_params->port_id;
+	eth_dev->data->backer_port_id = eth_dev->data->port_id;
+
+	eth_dev->data->mac_addrs = plt_zmalloc(RTE_ETHER_ADDR_LEN, 0);
+	if (!eth_dev->data->mac_addrs) {
+		plt_err("Failed to allocate memory for mac addr");
+		return -ENOMEM;
+	}
+
+	rte_eth_random_addr(rep_dev->mac_addr);
+	memcpy(eth_dev->data->mac_addrs, rep_dev->mac_addr, RTE_ETHER_ADDR_LEN);
+
+	/* Set the device operations */
+	eth_dev->dev_ops = &cnxk_rep_dev_ops;
+
+	/* Rx/Tx functions stubs to avoid crashing */
+	eth_dev->rx_pkt_burst = cnxk_rep_rx_burst;
+	eth_dev->tx_pkt_burst = cnxk_rep_tx_burst;
+
+	/* Only single queues for representor devices */
+	eth_dev->data->nb_rx_queues = 1;
+	eth_dev->data->nb_tx_queues = 1;
+
+	eth_dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	eth_dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
+	eth_dev->data->dev_link.link_autoneg = RTE_ETH_LINK_FIXED;
+
+	return 0;
+}
+
+static int
+create_representor_ethdev(struct rte_pci_device *pci_dev, struct cnxk_eswitch_dev *eswitch_dev,
+			  struct cnxk_eswitch_devargs *esw_da, int idx)
+{
+	char name[RTE_ETH_NAME_MAX_LEN];
+	struct rte_eth_dev *rep_eth_dev;
+	uint16_t hw_func;
+	int rc = 0;
+
+	struct cnxk_rep_dev rep = {.port_id = eswitch_dev->repr_cnt.nb_repr_probed,
+				   .parent_dev = eswitch_dev};
+
+	if (esw_da->type == CNXK_ESW_DA_TYPE_PFVF) {
+		hw_func = esw_da->repr_hw_info[idx].hw_func;
+		rep.switch_domain_id = switch_domain_id_allocate(eswitch_dev, get_pf(hw_func));
+		if (rep.switch_domain_id == RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID) {
+			plt_err("Failed to get a valid switch domain id");
+			rc = -EINVAL;
+			goto fail;
+		}
+
+		esw_da->repr_hw_info[idx].port_id = rep.port_id;
+		/* Representor port net_bdf_port */
+		snprintf(name, sizeof(name), "net_%s_hw_%x_representor_%d", pci_dev->device.name,
+			 hw_func, rep.port_id);
+
+		rep.hw_func = hw_func;
+		rep.rep_id = esw_da->repr_hw_info[idx].rep_id;
+	} else {
+		snprintf(name, sizeof(name), "net_%s_representor_%d", pci_dev->device.name,
+			 rep.port_id);
+		rep.switch_domain_id = RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID;
+	}
+
+	rc = rte_eth_dev_create(&pci_dev->device, name, sizeof(struct cnxk_rep_dev), NULL, NULL,
+				cnxk_rep_dev_init, &rep);
+	if (rc) {
+		plt_err("Failed to create cnxk vf representor %s", name);
+		rc = -EINVAL;
+		goto fail;
+	}
+
+	rep_eth_dev = rte_eth_dev_allocated(name);
+	if (!rep_eth_dev) {
+		plt_err("Failed to find the eth_dev for VF-Rep: %s.", name);
+		rc = -ENODEV;
+		goto fail;
+	}
+
+	plt_rep_dbg("Representor portid %d (%s) type %d probe done", rep_eth_dev->data->port_id,
+		    name, esw_da->da.type);
+	eswitch_dev->rep_info[rep.port_id].rep_eth_dev = rep_eth_dev;
+	eswitch_dev->repr_cnt.nb_repr_probed++;
+
+	return 0;
+fail:
+	return rc;
+}
+
+int
+cnxk_rep_dev_probe(struct rte_pci_device *pci_dev, struct cnxk_eswitch_dev *eswitch_dev)
+{
+	struct cnxk_eswitch_devargs *esw_da;
+	uint16_t num_rep;
+	int i, j, rc;
+
+	if (eswitch_dev->repr_cnt.nb_repr_created > RTE_MAX_ETHPORTS) {
+		plt_err("nb_representor_ports %d > %d MAX ETHPORTS",
+			eswitch_dev->repr_cnt.nb_repr_created, RTE_MAX_ETHPORTS);
+		rc = -EINVAL;
+		goto fail;
+	}
+
+	/* Initialize the internals of representor ports */
+	rc = cnxk_rep_parent_setup(eswitch_dev);
+	if (rc) {
+		plt_err("Failed to setup the parent device, err %d", rc);
+		goto fail;
+	}
+
+	for (i = eswitch_dev->last_probed; i < eswitch_dev->nb_esw_da; i++) {
+		esw_da = &eswitch_dev->esw_da[i];
+		/* Check the representor devargs */
+		num_rep = esw_da->nb_repr_ports;
+		for (j = 0; j < num_rep; j++) {
+			rc = create_representor_ethdev(pci_dev, eswitch_dev, esw_da, j);
+			if (rc)
+				goto fail;
+		}
+	}
+	eswitch_dev->last_probed = i;
+
+	return 0;
+fail:
+	return rc;
+}
diff --git a/drivers/net/cnxk/cnxk_rep.h b/drivers/net/cnxk/cnxk_rep.h
new file mode 100644
index 0000000000..b802c44b33
--- /dev/null
+++ b/drivers/net/cnxk/cnxk_rep.h
@@ -0,0 +1,50 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2024 Marvell.
+ */
+#include
+#include
+
+#ifndef __CNXK_REP_H__
+#define __CNXK_REP_H__
+
+/* Common ethdev ops */
+extern struct eth_dev_ops cnxk_rep_dev_ops;
+
+struct cnxk_rep_dev {
+	uint16_t port_id;
+	uint16_t rep_id;
+	uint16_t switch_domain_id;
+	struct cnxk_eswitch_dev *parent_dev;
+	uint16_t hw_func;
+	uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
+};
+
+static inline struct cnxk_rep_dev *
+cnxk_rep_pmd_priv(const struct rte_eth_dev *eth_dev)
+{
+	return eth_dev->data->dev_private;
+}
+
+int cnxk_rep_dev_probe(struct rte_pci_device *pci_dev, struct cnxk_eswitch_dev *eswitch_dev);
+int cnxk_rep_dev_remove(struct cnxk_eswitch_dev *eswitch_dev);
+int cnxk_rep_dev_uninit(struct rte_eth_dev *ethdev);
+int cnxk_rep_dev_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *dev_info);
+int cnxk_rep_representor_info_get(struct rte_eth_dev *dev, struct rte_eth_representor_info *info);
+int cnxk_rep_dev_configure(struct rte_eth_dev *eth_dev);
+
+int cnxk_rep_link_update(struct rte_eth_dev *eth_dev, int wait_to_compl);
+int cnxk_rep_dev_start(struct rte_eth_dev *eth_dev);
+int cnxk_rep_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t queue_idx, uint16_t nb_desc,
+			    unsigned int socket_id, const struct rte_eth_rxconf *rx_conf,
+			    struct rte_mempool *mp);
+int cnxk_rep_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t queue_idx, uint16_t nb_desc,
+			    unsigned int socket_id, const struct rte_eth_txconf *tx_conf);
+void cnxk_rep_rx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx);
+void cnxk_rep_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx);
+int cnxk_rep_dev_stop(struct rte_eth_dev *eth_dev);
+int cnxk_rep_dev_close(struct rte_eth_dev *eth_dev);
+int cnxk_rep_stats_get(struct rte_eth_dev *eth_dev, struct rte_eth_stats *stats);
+int cnxk_rep_stats_reset(struct rte_eth_dev *eth_dev);
+int cnxk_rep_flow_ops_get(struct rte_eth_dev *ethdev, const struct rte_flow_ops **ops);
+
+#endif /* __CNXK_REP_H__ */
diff --git a/drivers/net/cnxk/cnxk_rep_ops.c b/drivers/net/cnxk/cnxk_rep_ops.c
new file mode 100644
index 0000000000..15448688ce
--- /dev/null
+++ b/drivers/net/cnxk/cnxk_rep_ops.c
@@ -0,0 +1,129 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2024 Marvell.
+ */
+
+#include
+
+int
+cnxk_rep_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
+{
+	PLT_SET_USED(ethdev);
+	PLT_SET_USED(wait_to_complete);
+	return 0;
+}
+
+int
+cnxk_rep_dev_info_get(struct rte_eth_dev *ethdev, struct rte_eth_dev_info *devinfo)
+{
+	PLT_SET_USED(ethdev);
+	PLT_SET_USED(devinfo);
+	return 0;
+}
+
+int
+cnxk_rep_dev_configure(struct rte_eth_dev *ethdev)
+{
+	PLT_SET_USED(ethdev);
+	return 0;
+}
+
+int
+cnxk_rep_dev_start(struct rte_eth_dev *ethdev)
+{
+	PLT_SET_USED(ethdev);
+	return 0;
+}
+
+int
+cnxk_rep_dev_close(struct rte_eth_dev *ethdev)
+{
+	PLT_SET_USED(ethdev);
+	return 0;
+}
+
+int
+cnxk_rep_dev_stop(struct rte_eth_dev *ethdev)
+{
+	PLT_SET_USED(ethdev);
+	return 0;
+}
+
+int
+cnxk_rep_rx_queue_setup(struct rte_eth_dev *ethdev, uint16_t rx_queue_id, uint16_t nb_rx_desc,
+			unsigned int socket_id, const struct rte_eth_rxconf *rx_conf,
+			struct rte_mempool *mb_pool)
+{
+	PLT_SET_USED(ethdev);
+	PLT_SET_USED(rx_queue_id);
+	PLT_SET_USED(nb_rx_desc);
+	PLT_SET_USED(socket_id);
+	PLT_SET_USED(rx_conf);
+	PLT_SET_USED(mb_pool);
+	return 0;
+}
+
+void
+cnxk_rep_rx_queue_release(struct rte_eth_dev *ethdev, uint16_t queue_id)
+{
+	PLT_SET_USED(ethdev);
+	PLT_SET_USED(queue_id);
+}
+
+int
+cnxk_rep_tx_queue_setup(struct rte_eth_dev *ethdev, uint16_t tx_queue_id, uint16_t nb_tx_desc,
+			unsigned int socket_id, const struct rte_eth_txconf *tx_conf)
+{
+	PLT_SET_USED(ethdev);
+	PLT_SET_USED(tx_queue_id);
+	PLT_SET_USED(nb_tx_desc);
+	PLT_SET_USED(socket_id);
+	PLT_SET_USED(tx_conf);
+	return 0;
+}
+
+void
+cnxk_rep_tx_queue_release(struct rte_eth_dev *ethdev, uint16_t queue_id)
+{
+	PLT_SET_USED(ethdev);
+	PLT_SET_USED(queue_id);
+}
+
+int
+cnxk_rep_stats_get(struct rte_eth_dev *ethdev, struct rte_eth_stats *stats)
+{
+	PLT_SET_USED(ethdev);
+	PLT_SET_USED(stats);
+	return 0;
+}
+
+int
+cnxk_rep_stats_reset(struct rte_eth_dev *ethdev)
+{
+	PLT_SET_USED(ethdev);
+	return 0;
+}
+
+int
+cnxk_rep_flow_ops_get(struct rte_eth_dev *ethdev, const struct rte_flow_ops **ops)
+{
+	PLT_SET_USED(ethdev);
+	PLT_SET_USED(ops);
+	return 0;
+}
+
+/* CNXK platform representor dev ops */
+struct eth_dev_ops cnxk_rep_dev_ops = {
+	.dev_infos_get = cnxk_rep_dev_info_get,
+	.dev_configure = cnxk_rep_dev_configure,
+	.dev_start = cnxk_rep_dev_start,
+	.rx_queue_setup = cnxk_rep_rx_queue_setup,
+	.rx_queue_release = cnxk_rep_rx_queue_release,
+	.tx_queue_setup = cnxk_rep_tx_queue_setup,
+	.tx_queue_release = cnxk_rep_tx_queue_release,
+	.link_update = cnxk_rep_link_update,
+	.dev_close = cnxk_rep_dev_close,
+	.dev_stop = cnxk_rep_dev_stop,
+	.stats_get = cnxk_rep_stats_get,
+	.stats_reset = cnxk_rep_stats_reset,
+	.flow_ops_get = cnxk_rep_flow_ops_get
+};
diff --git a/drivers/net/cnxk/meson.build b/drivers/net/cnxk/meson.build
index ea7e363e89..fcd5d3d569 100644
--- a/drivers/net/cnxk/meson.build
+++ b/drivers/net/cnxk/meson.build
@@ -34,6 +34,8 @@ sources = files(
         'cnxk_lookup.c',
         'cnxk_ptp.c',
         'cnxk_flow.c',
+        'cnxk_rep.c',
+        'cnxk_rep_ops.c',
         'cnxk_stats.c',
         'cnxk_tm.c',
 )