From patchwork Fri Aug 11 16:34:11 2023
X-Patchwork-Submitter: Harman Kalra
X-Patchwork-Id: 130172
X-Patchwork-Delegate: jerinj@marvell.com
From: Harman Kalra
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
CC: Harman Kalra
Subject: [PATCH 1/9] common/cnxk: debug log type for representors
Date: Fri, 11 Aug 2023 22:04:11 +0530
Message-ID: <20230811163419.165790-2-hkalra@marvell.com>
In-Reply-To: <20230811163419.165790-1-hkalra@marvell.com>
References: <20230811163419.165790-1-hkalra@marvell.com>

Register an exclusive debug log type for representors.

Signed-off-by: Harman Kalra
---
 drivers/common/cnxk/roc_platform.c | 1 +
 drivers/common/cnxk/roc_platform.h | 2 ++
 drivers/common/cnxk/version.map    | 1 +
 3 files changed, 4 insertions(+)
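Not part of the patch: once this log type is in place, the representor logs can also be raised at run time, in addition to the EAL command line documented in patch 2. A minimal sketch, assuming the log name pmd.net.cnxk.rep that patch 2's documentation advertises:

    #include <stdio.h>
    #include <rte_log.h>

    /* Hedged sketch: enable cnxk representor debug logs programmatically.
     * The "pmd.net.cnxk.rep" pattern comes from the cnxk.rst table added in
     * patch 2 of this series, not from this patch itself.
     */
    static void
    cnxk_rep_enable_debug_logs(void)
    {
    	if (rte_log_set_level_pattern("pmd.net.cnxk.rep", RTE_LOG_DEBUG) != 0)
    		fprintf(stderr, "cnxk rep log type not registered yet\n");
    }
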
diff --git a/drivers/common/cnxk/roc_platform.c b/drivers/common/cnxk/roc_platform.c
index f91b95ceab..2016be8354 100644
--- a/drivers/common/cnxk/roc_platform.c
+++ b/drivers/common/cnxk/roc_platform.c
@@ -70,4 +70,5 @@ RTE_LOG_REGISTER(cnxk_logtype_npc, pmd.net.cnxk.flow, NOTICE);
 RTE_LOG_REGISTER(cnxk_logtype_sso, pmd.event.cnxk, NOTICE);
 RTE_LOG_REGISTER(cnxk_logtype_tim, pmd.event.cnxk.timer, NOTICE);
 RTE_LOG_REGISTER(cnxk_logtype_tm, pmd.net.cnxk.tm, NOTICE);
+RTE_LOG_REGISTER_DEFAULT(cnxk_logtype_rep, NOTICE);
 RTE_LOG_REGISTER_DEFAULT(cnxk_logtype_ree, NOTICE);
diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h
index 9884398a99..a8077cd7bc 100644
--- a/drivers/common/cnxk/roc_platform.h
+++ b/drivers/common/cnxk/roc_platform.h
@@ -243,6 +243,7 @@ extern int cnxk_logtype_sso;
 extern int cnxk_logtype_tim;
 extern int cnxk_logtype_tm;
 extern int cnxk_logtype_ree;
+extern int cnxk_logtype_rep;
 
 #define plt_err(fmt, args...) \
 	RTE_LOG(ERR, PMD, "%s():%u " fmt "\n", __func__, __LINE__, ##args)
@@ -271,6 +272,7 @@ extern int cnxk_logtype_ree;
 #define plt_tim_dbg(fmt, ...) plt_dbg(tim, fmt, ##__VA_ARGS__)
 #define plt_tm_dbg(fmt, ...) plt_dbg(tm, fmt, ##__VA_ARGS__)
 #define plt_ree_dbg(fmt, ...) plt_dbg(ree, fmt, ##__VA_ARGS__)
+#define plt_rep_dbg(fmt, ...) plt_dbg(rep, fmt, ##__VA_ARGS__)
 
 /* Datapath logs */
 #define plt_dp_err(fmt, args...) \
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 8c71497df8..1d6e306848 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -13,6 +13,7 @@ INTERNAL {
 	cnxk_logtype_npa;
 	cnxk_logtype_npc;
 	cnxk_logtype_ree;
+	cnxk_logtype_rep;
 	cnxk_logtype_sso;
 	cnxk_logtype_tim;
 	cnxk_logtype_tm;
From patchwork Fri Aug 11 16:34:12 2023
X-Patchwork-Submitter: Harman Kalra
X-Patchwork-Id: 130174
X-Patchwork-Delegate: jerinj@marvell.com
From: Harman Kalra
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Anatoly Burakov
CC: Harman Kalra
Subject: [PATCH 2/9] net/cnxk: probing representor ports
Date: Fri, 11 Aug 2023 22:04:12 +0530
Message-ID: <20230811163419.165790-3-hkalra@marvell.com>
In-Reply-To: <20230811163419.165790-1-hkalra@marvell.com>
References: <20230811163419.165790-1-hkalra@marvell.com>

Basic skeleton for probing representor devices. If the PF device is passed
with "representor" devargs, representor ports get probed as separate
ethdev devices.

Signed-off-by: Harman Kalra
---
 doc/guides/nics/cnxk.rst        |  39 +++++
 drivers/net/cnxk/cn10k_ethdev.c |   4 +-
 drivers/net/cnxk/cn9k_ethdev.c  |   4 +-
 drivers/net/cnxk/cnxk_ethdev.c  |  42 ++++-
 drivers/net/cnxk/cnxk_ethdev.h  |  12 ++
 drivers/net/cnxk/cnxk_rep.c     | 262 ++++++++++++++++++++++++++++++++
 drivers/net/cnxk/cnxk_rep.h     |  51 +++++++
 drivers/net/cnxk/cnxk_rep_ops.c | 112 ++++++++++++++
 drivers/net/cnxk/meson.build    |   2 +
 9 files changed, 516 insertions(+), 12 deletions(-)
 create mode 100644 drivers/net/cnxk/cnxk_rep.c
 create mode 100644 drivers/net/cnxk/cnxk_rep.h
 create mode 100644 drivers/net/cnxk/cnxk_rep_ops.c

diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
index 9229056f6f..dd14102efa 100644
--- a/doc/guides/nics/cnxk.rst
+++ b/doc/guides/nics/cnxk.rst
@@ -37,6 +37,8 @@ Features of the CNXK Ethdev PMD are:
 - Inline IPsec processing support
 - Ingress meter support
 - Queue based priority flow control support
+- Virtual function representors
+- Represented port pattern matching and action
 
 Prerequisites
 -------------
@@ -581,6 +583,41 @@ Runtime Config Options for inline device
    With the above configuration, driver would poll for soft expiry events every
    1000 usec.
 
+Virtual Function Representors
+-----------------------------
+
+The CNXK driver supports the port representor model by adding virtual
+ethernet ports that provide a logical representation in DPDK of SR-IOV
+virtual function (VF) devices, for control and monitoring.
+
+These port representor ethdev instances can be spawned on an as-needed basis
+through configuration parameters passed to the driver of the underlying
+base device using the devargs ``-a pci:dbdf,representor=[0]``.
+
+.. note::
+
+   The base device is the PF whose VFs will be represented by these
+   representors.
+
+   The above devarg parameters can be provided either as a range of
+   representor devices ``-a pci:dbdf,representor=[0-3]`` or as a single
+   representor device on a need basis ``-a pci:dbdf,representor=[0]``.
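Not part of the patch: the probe path further below turns this representor devarg into a list of VF ids with rte_eth_devargs_parse(), using the same two-argument form the series itself calls. A minimal sketch of that step (ethdev_driver.h is the internal ethdev header providing the API):

    #include <stdio.h>
    #include <ethdev_driver.h>

    /* Hedged sketch: expand "representor=[0-3]" the way cnxk_nix_probe() does. */
    static void
    rep_devargs_demo(void)
    {
    	struct rte_eth_devargs eth_da = {.nb_representor_ports = 0};
    	uint16_t i;

    	if (rte_eth_devargs_parse("representor=[0-3]", &eth_da) != 0)
    		return;

    	/* eth_da.type is RTE_ETH_REPRESENTOR_VF; the ids expand to 0,1,2,3 */
    	for (i = 0; i < eth_da.nb_representor_ports; i++)
    		printf("VF representor id %u\n", eth_da.representor_ports[i]);
    }
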
+
+In the exception path (i.e. until the flow definition is offloaded to the
+hardware), packets transmitted by the VFs shall be received by these
+representor ports, while packets transmitted by the representor ports shall
+be received by the respective VFs.
+
+On receiving VF traffic via these representor ports, applications holding
+the representor ports can decide to offload the traffic flow into the HW.
+From then on, the matching traffic shall be steered directly to the
+respective VFs without being received by the application.
+
+The virtual representor port PMD currently supports the following
+operations:
+
+- Get and clear VF statistics
+- Set MAC address
+- Flow operations - create, validate, destroy, query, flush, dump
+
 Debugging Options
 -----------------
@@ -595,3 +632,5 @@ Debugging Options
    +---+------------+-------------------------------------------------------+
    | 2 | NPC        | --log-level='pmd\.net.cnxk\.flow,8'                   |
    +---+------------+-------------------------------------------------------+
+   | 3 | REP        | --log-level='pmd\.net.cnxk\.rep,8'                    |
+   +---+------------+-------------------------------------------------------+
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index 4c4acc7cf0..a6a4665af1 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -912,8 +912,8 @@ static const struct rte_pci_id cn10k_pci_nix_map[] = {
 
 static struct rte_pci_driver cn10k_pci_nix = {
 	.id_table = cn10k_pci_nix_map,
-	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA |
-		     RTE_PCI_DRV_INTR_LSC,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA | RTE_PCI_DRV_INTR_LSC |
+		     RTE_PCI_DRV_PROBE_AGAIN,
 	.probe = cn10k_nix_probe,
 	.remove = cn10k_nix_remove,
 };
diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
index bae4dda5e2..0448d7e219 100644
--- a/drivers/net/cnxk/cn9k_ethdev.c
+++ b/drivers/net/cnxk/cn9k_ethdev.c
@@ -834,8 +834,8 @@ static const struct rte_pci_id cn9k_pci_nix_map[] = {
 
 static struct rte_pci_driver cn9k_pci_nix = {
 	.id_table = cn9k_pci_nix_map,
-	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA |
-		     RTE_PCI_DRV_INTR_LSC,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA | RTE_PCI_DRV_INTR_LSC |
+		     RTE_PCI_DRV_PROBE_AGAIN,
 	.probe = cn9k_nix_probe,
 	.remove = cn9k_nix_remove,
 };
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 4b98faa729..902e6df72d 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -2102,6 +2102,10 @@ cnxk_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool reset)
 	rte_free(eth_dev->data->mac_addrs);
 	eth_dev->data->mac_addrs = NULL;
 
+	/* Remove representor devices associated with the PF */
+	if (dev->num_reps)
+		cnxk_rep_dev_remove(eth_dev);
+
 	rc = roc_nix_dev_fini(nix);
 	/* Can be freed later by PMD if NPA LF is in use */
 	if (rc == -EAGAIN) {
@@ -2180,18 +2184,40 @@ cnxk_nix_remove(struct rte_pci_device *pci_dev)
 int
 cnxk_nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
 {
-	int rc;
+	struct rte_eth_devargs eth_da = {.nb_representor_ports = 0};
+	struct rte_eth_dev *pf_ethdev;
+	uint16_t num_rep;
+	int rc = 0;
 
 	RTE_SET_USED(pci_drv);
 
-	rc = rte_eth_dev_pci_generic_probe(pci_dev, sizeof(struct cnxk_eth_dev),
-					   cnxk_eth_dev_init);
+	if (pci_dev->device.devargs) {
+		rc = rte_eth_devargs_parse(pci_dev->device.devargs->args, &eth_da);
+		if (rc)
+			return rc;
+	}
+
+	num_rep = eth_da.nb_representor_ports;
+	plt_rep_dbg("nb_representor_ports = %d", num_rep);
 
-	/* On error on secondary, recheck if port exists in primary or
-	 * in mid of detach state.
+	/* This probing API may get invoked even after the first level of probe
+	 * is done, as part of application bring-up (OVS-DPDK vswitchd); check
+	 * whether an eth_dev is already allocated for the PF device
 	 */
-	if (rte_eal_process_type() != RTE_PROC_PRIMARY && rc)
-		if (!rte_eth_dev_allocated(pci_dev->device.name))
-			return 0;
+	pf_ethdev = rte_eth_dev_allocated(pci_dev->device.name);
+	if (pf_ethdev == NULL) {
+		rc = rte_eth_dev_pci_generic_probe(pci_dev, sizeof(struct cnxk_eth_dev),
+						   cnxk_eth_dev_init);
+		if (rc || !num_rep)
+			return rc;
+
+		pf_ethdev = rte_eth_dev_allocated(pci_dev->device.name);
+	}
+
+	if (!num_rep)
+		return rc;
+
+	rc = cnxk_rep_dev_probe(pci_dev, pf_ethdev, &eth_da);
+
+	return rc;
 }
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index ed531fb277..3896db38e1 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -22,7 +22,9 @@
 #include
 
 #include "roc_api.h"
+
 #include
+#include
 
 #define CNXK_ETH_DEV_PMD_VERSION "1.0"
@@ -307,6 +309,10 @@ struct cnxk_macsec_sess {
 };
 TAILQ_HEAD(cnxk_macsec_sess_list, cnxk_macsec_sess);
 
+struct cnxk_rep_info {
+	struct rte_eth_dev *rep_eth_dev;
+};
+
 struct cnxk_eth_dev {
 	/* ROC NIX */
 	struct roc_nix nix;
@@ -414,6 +420,12 @@ struct cnxk_eth_dev {
 	/* MCS device */
 	struct cnxk_mcs_dev *mcs_dev;
 	struct cnxk_macsec_sess_list mcs_list;
+
+	/* Port representor fields */
+	uint16_t switch_domain_id;
+	uint16_t num_reps;
+	uint16_t rep_xport_vdev;
+	struct cnxk_rep_info *rep_info;
 };
 
 struct cnxk_eth_rxq_sp {
diff --git a/drivers/net/cnxk/cnxk_rep.c b/drivers/net/cnxk/cnxk_rep.c
new file mode 100644
index 0000000000..ebefc34ac8
--- /dev/null
+++ b/drivers/net/cnxk/cnxk_rep.c
@@ -0,0 +1,262 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Marvell.
+ */ +#include + +/* CNXK platform representor dev ops */ +struct eth_dev_ops cnxk_rep_dev_ops = { + .dev_infos_get = cnxk_rep_dev_info_get, + .dev_configure = cnxk_rep_dev_configure, + .dev_start = cnxk_rep_dev_start, + .rx_queue_setup = cnxk_rep_rx_queue_setup, + .rx_queue_release = cnxk_rep_rx_queue_release, + .tx_queue_setup = cnxk_rep_tx_queue_setup, + .tx_queue_release = cnxk_rep_tx_queue_release, + .link_update = cnxk_rep_link_update, + .dev_close = cnxk_rep_dev_close, + .dev_stop = cnxk_rep_dev_stop, + .stats_get = cnxk_rep_stats_get, + .stats_reset = cnxk_rep_stats_reset, + .flow_ops_get = cnxk_rep_flow_ops_get +}; + +int +cnxk_rep_dev_uninit(struct rte_eth_dev *ethdev) +{ + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return 0; + + plt_rep_dbg("Representor port:%d uninit", ethdev->data->port_id); + rte_free(ethdev->data->mac_addrs); + ethdev->data->mac_addrs = NULL; + + return 0; +} + +int +cnxk_rep_dev_remove(struct rte_eth_dev *pf_ethdev) +{ + struct cnxk_eth_dev *pf_dev = cnxk_eth_pmd_priv(pf_ethdev); + int rc = 0; + + rc = rte_eth_switch_domain_free(pf_dev->switch_domain_id); + if (rc) + plt_err("Failed to alloc switch domain: %d", rc); + + return rc; +} + +static int +hotplug_rep_xport_vdev(struct cnxk_eth_dev *pf_dev) +{ + char rep_xport_devargs[] = CNXK_REP_XPORT_VDEV_DEVARGS; + char name[] = CNXK_REP_XPORT_VDEV_NAME; + uint16_t portid; + int rc = 0; + + rc = rte_eth_dev_get_port_by_name(name, &portid); + if (rc != 0) { + if (rc == -ENODEV) { + /* rep_xport device should get added once during first PF probe */ + rc = rte_eal_hotplug_add("vdev", name, rep_xport_devargs); + if (rc) { + plt_err("rep base hotplug failed %d", -rte_errno); + goto fail; + } + + /* Get the portID of rep_xport port */ + if (rte_eth_dev_get_port_by_name(name, &portid)) { + plt_err("cannot find added vdev %s", name); + goto free; + } + } else { + plt_err("cannot find added vdev %s", name); + goto free; + } + } + + plt_rep_dbg("rep_xport vdev port %d, name %s", portid, name); + pf_dev->rep_xport_vdev = portid; + + return 0; +free: + rte_eal_hotplug_remove("vdev", name); +fail: + return rc; +} + +static int +cnxk_init_rep_internal(struct cnxk_eth_dev *pf_dev) +{ + int rc; + + if (pf_dev->rep_info) + return 0; + + pf_dev->rep_info = + plt_zmalloc(sizeof(pf_dev->rep_info[0]) * CNXK_MAX_REP_PORTS, 0); + if (!pf_dev->rep_info) { + plt_err("Failed to alloc memory for rep info"); + rc = -ENOMEM; + goto fail; + } + + /* Allocate switch domain for this PF */ + rc = rte_eth_switch_domain_alloc(&pf_dev->switch_domain_id); + if (rc) { + plt_err("Failed to alloc switch domain: %d", rc); + goto fail; + } + + rc = hotplug_rep_xport_vdev(pf_dev); + if (rc) { + plt_err("Failed to hotplug representor base port, err %d", rc); + goto fail; + } + + return 0; +fail: + return rc; +} + +static uint16_t +cnxk_rep_tx_burst(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) +{ + PLT_SET_USED(tx_queue); + PLT_SET_USED(tx_pkts); + PLT_SET_USED(nb_pkts); + + return 0; +} + +static uint16_t +cnxk_rep_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) +{ + PLT_SET_USED(rx_queue); + PLT_SET_USED(rx_pkts); + PLT_SET_USED(nb_pkts); + + return 0; +} + +static int +cnxk_rep_dev_init(struct rte_eth_dev *eth_dev, void *params) +{ + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(eth_dev); + struct cnxk_rep_dev *rep_params = (struct cnxk_rep_dev *)params; + struct rte_eth_link *link; + struct cnxk_eth_dev *pf_dev; + + rep_dev->vf_id = rep_params->vf_id; + rep_dev->switch_domain_id = 
rep_params->switch_domain_id; + rep_dev->parent_dev = rep_params->parent_dev; + + eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR; + eth_dev->data->representor_id = rep_params->vf_id; + eth_dev->data->backer_port_id = rep_params->parent_dev->data->port_id; + + eth_dev->data->mac_addrs = plt_zmalloc(RTE_ETHER_ADDR_LEN, 0); + if (!eth_dev->data->mac_addrs) { + plt_err("Failed to allocate memory for mac addr"); + return -ENOMEM; + } + + rte_eth_random_addr(rep_dev->mac_addr); + memcpy(eth_dev->data->mac_addrs, rep_dev->mac_addr, RTE_ETHER_ADDR_LEN); + + /* Set the device operations */ + eth_dev->dev_ops = &cnxk_rep_dev_ops; + + /* Rx/Tx functions stubs to avoid crashing */ + eth_dev->rx_pkt_burst = cnxk_rep_rx_burst; + eth_dev->tx_pkt_burst = cnxk_rep_tx_burst; + + /* Link state. Inherited from PF */ + pf_dev = cnxk_eth_pmd_priv(rep_dev->parent_dev); + link = &pf_dev->eth_dev->data->dev_link; + + eth_dev->data->dev_link.link_speed = link->link_speed; + eth_dev->data->dev_link.link_duplex = link->link_duplex; + eth_dev->data->dev_link.link_status = link->link_status; + eth_dev->data->dev_link.link_autoneg = link->link_autoneg; + + return 0; +} + +int +cnxk_rep_dev_probe(struct rte_pci_device *pci_dev, struct rte_eth_dev *pf_ethdev, + struct rte_eth_devargs *eth_da) +{ + char name[RTE_ETH_NAME_MAX_LEN]; + struct rte_eth_dev *rep_eth_dev; + struct cnxk_eth_dev *pf_dev; + uint16_t num_rep; + int i, rc; + + /* Get the PF device */ + pf_dev = cnxk_eth_pmd_priv(pf_ethdev); + + /* Check the representor devargs */ + if (eth_da->type == RTE_ETH_REPRESENTOR_NONE) + return 0; + if (eth_da->type != RTE_ETH_REPRESENTOR_VF) { + plt_err("unsupported representor type %d\n", eth_da->type); + return -ENOTSUP; + } + num_rep = eth_da->nb_representor_ports; + if (num_rep > CNXK_MAX_REP_PORTS) { + plt_err("nb_representor_ports = %d > %d MAX VF REPS\n", num_rep, + CNXK_MAX_REP_PORTS); + return -EINVAL; + } + + if (num_rep >= RTE_MAX_ETHPORTS) { + plt_err("nb_representor_ports = %d > %d MAX ETHPORTS\n", num_rep, RTE_MAX_ETHPORTS); + return -EINVAL; + } + + /* Initialize the internals of representor ports */ + if (cnxk_init_rep_internal(pf_dev)) + return 0; + + for (i = 0; i < num_rep; i++) { + struct cnxk_rep_dev representor = {.vf_id = eth_da->representor_ports[i], + .switch_domain_id = pf_dev->switch_domain_id, + .parent_dev = pf_ethdev}; + + if (representor.vf_id >= pci_dev->max_vfs) { + plt_err("VF-Rep id %d >= %d pci dev max vfs\n", representor.vf_id, + pci_dev->max_vfs); + continue; + } + + /* Representor port net_bdf_port */ + snprintf(name, sizeof(name), "net_%s_representor_%d", pci_dev->device.name, + eth_da->representor_ports[i]); + + rc = rte_eth_dev_create(&pci_dev->device, name, sizeof(struct cnxk_rep_dev), NULL, + NULL, cnxk_rep_dev_init, &representor); + if (rc) { + plt_err("failed to create cnxk vf representor %s", name); + rc = -EINVAL; + goto err; + } + + rep_eth_dev = rte_eth_dev_allocated(name); + if (!rep_eth_dev) { + plt_err("Failed to find the eth_dev for VF-Rep: %s.", name); + rc = -ENODEV; + goto err; + } + + plt_rep_dbg("PF portid %d switch domain %d representor portid %d (%s) probe done", + pf_ethdev->data->port_id, pf_dev->switch_domain_id, + rep_eth_dev->data->port_id, name); + pf_dev->rep_info[representor.vf_id].rep_eth_dev = rep_eth_dev; + pf_dev->num_reps++; + } + + return 0; +err: + return rc; +} diff --git a/drivers/net/cnxk/cnxk_rep.h b/drivers/net/cnxk/cnxk_rep.h new file mode 100644 index 0000000000..24adb9649b --- /dev/null +++ b/drivers/net/cnxk/cnxk_rep.h @@ -0,0 +1,51 @@ 
+/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell. + */ +#include + +#ifndef __CNXK_REP_H__ +#define __CNXK_REP_H__ + +#define CNXK_REP_XPORT_VDEV_DEVARGS "role=server" +#define CNXK_REP_XPORT_VDEV_NAME "net_memif" +#define CNXK_MAX_REP_PORTS 128 + +/* Common ethdev ops */ +extern struct eth_dev_ops cnxk_rep_dev_ops; + +struct cnxk_rep_dev { + uint16_t vf_id; + uint16_t switch_domain_id; + struct rte_eth_dev *parent_dev; + uint8_t mac_addr[RTE_ETHER_ADDR_LEN]; +}; + +static inline struct cnxk_rep_dev * +cnxk_rep_pmd_priv(const struct rte_eth_dev *eth_dev) +{ + return eth_dev->data->dev_private; +} + +int cnxk_rep_dev_probe(struct rte_pci_device *pci_dev, struct rte_eth_dev *pf_ethdev, + struct rte_eth_devargs *eth_da); +int cnxk_rep_dev_remove(struct rte_eth_dev *pf_ethdev); +int cnxk_rep_dev_uninit(struct rte_eth_dev *ethdev); +int cnxk_rep_dev_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *dev_info); +int cnxk_rep_dev_configure(struct rte_eth_dev *eth_dev); + +int cnxk_rep_link_update(struct rte_eth_dev *eth_dev, int wait_to_compl); +int cnxk_rep_dev_start(struct rte_eth_dev *eth_dev); +int cnxk_rep_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t queue_idx, uint16_t nb_desc, + unsigned int socket_id, const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp); +int cnxk_rep_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t queue_idx, uint16_t nb_desc, + unsigned int socket_id, const struct rte_eth_txconf *tx_conf); +void cnxk_rep_rx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx); +void cnxk_rep_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx); +int cnxk_rep_dev_stop(struct rte_eth_dev *eth_dev); +int cnxk_rep_dev_close(struct rte_eth_dev *eth_dev); +int cnxk_rep_stats_get(struct rte_eth_dev *eth_dev, struct rte_eth_stats *stats); +int cnxk_rep_stats_reset(struct rte_eth_dev *eth_dev); +int cnxk_rep_flow_ops_get(struct rte_eth_dev *ethdev, const struct rte_flow_ops **ops); + +#endif /* __CNXK_REP_H__ */ diff --git a/drivers/net/cnxk/cnxk_rep_ops.c b/drivers/net/cnxk/cnxk_rep_ops.c new file mode 100644 index 0000000000..3f1aab077b --- /dev/null +++ b/drivers/net/cnxk/cnxk_rep_ops.c @@ -0,0 +1,112 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell. 
+ */
+
+#include
+
+int
+cnxk_rep_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
+{
+	PLT_SET_USED(ethdev);
+	PLT_SET_USED(wait_to_complete);
+	return 0;
+}
+
+int
+cnxk_rep_dev_info_get(struct rte_eth_dev *ethdev, struct rte_eth_dev_info *devinfo)
+{
+	PLT_SET_USED(ethdev);
+	PLT_SET_USED(devinfo);
+	return 0;
+}
+
+int
+cnxk_rep_dev_configure(struct rte_eth_dev *ethdev)
+{
+	PLT_SET_USED(ethdev);
+	return 0;
+}
+
+int
+cnxk_rep_dev_start(struct rte_eth_dev *ethdev)
+{
+	PLT_SET_USED(ethdev);
+	return 0;
+}
+
+int
+cnxk_rep_dev_close(struct rte_eth_dev *ethdev)
+{
+	PLT_SET_USED(ethdev);
+	return 0;
+}
+
+int
+cnxk_rep_dev_stop(struct rte_eth_dev *ethdev)
+{
+	PLT_SET_USED(ethdev);
+	return 0;
+}
+
+int
+cnxk_rep_rx_queue_setup(struct rte_eth_dev *ethdev, uint16_t rx_queue_id, uint16_t nb_rx_desc,
+			unsigned int socket_id, const struct rte_eth_rxconf *rx_conf,
+			struct rte_mempool *mb_pool)
+{
+	PLT_SET_USED(ethdev);
+	PLT_SET_USED(rx_queue_id);
+	PLT_SET_USED(nb_rx_desc);
+	PLT_SET_USED(socket_id);
+	PLT_SET_USED(rx_conf);
+	PLT_SET_USED(mb_pool);
+	return 0;
+}
+
+void
+cnxk_rep_rx_queue_release(struct rte_eth_dev *ethdev, uint16_t queue_id)
+{
+	PLT_SET_USED(ethdev);
+	PLT_SET_USED(queue_id);
+}
+
+int
+cnxk_rep_tx_queue_setup(struct rte_eth_dev *ethdev, uint16_t tx_queue_id, uint16_t nb_tx_desc,
+			unsigned int socket_id, const struct rte_eth_txconf *tx_conf)
+{
+	PLT_SET_USED(ethdev);
+	PLT_SET_USED(tx_queue_id);
+	PLT_SET_USED(nb_tx_desc);
+	PLT_SET_USED(socket_id);
+	PLT_SET_USED(tx_conf);
+	return 0;
+}
+
+void
+cnxk_rep_tx_queue_release(struct rte_eth_dev *ethdev, uint16_t queue_id)
+{
+	PLT_SET_USED(ethdev);
+	PLT_SET_USED(queue_id);
+}
+
+int
+cnxk_rep_stats_get(struct rte_eth_dev *ethdev, struct rte_eth_stats *stats)
+{
+	PLT_SET_USED(ethdev);
+	PLT_SET_USED(stats);
+	return 0;
+}
+
+int
+cnxk_rep_stats_reset(struct rte_eth_dev *ethdev)
+{
+	PLT_SET_USED(ethdev);
+	return 0;
+}
+
+int
+cnxk_rep_flow_ops_get(struct rte_eth_dev *ethdev, const struct rte_flow_ops **ops)
+{
+	PLT_SET_USED(ethdev);
+	PLT_SET_USED(ops);
+	return 0;
+}
diff --git a/drivers/net/cnxk/meson.build b/drivers/net/cnxk/meson.build
index e83f3c9050..38dde54ce9 100644
--- a/drivers/net/cnxk/meson.build
+++ b/drivers/net/cnxk/meson.build
@@ -32,6 +32,8 @@ sources = files(
         'cnxk_lookup.c',
         'cnxk_ptp.c',
         'cnxk_flow.c',
+        'cnxk_rep.c',
+        'cnxk_rep_ops.c',
         'cnxk_stats.c',
         'cnxk_tm.c',
 )
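Not part of the patch: once cnxk_rep_dev_probe() has run, each representor is an ordinary ethdev carrying the RTE_ETH_DEV_REPRESENTOR flag set in cnxk_rep_dev_init() above. A hedged, driver-agnostic sketch of how an application can locate them, using only stable ethdev API:

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* List every port that was created as a representor. */
    static void
    list_representor_ports(void)
    {
    	char name[RTE_ETH_NAME_MAX_LEN];
    	struct rte_eth_dev_info info;
    	uint16_t port;

    	RTE_ETH_FOREACH_DEV(port) {
    		if (rte_eth_dev_info_get(port, &info) != 0)
    			continue;
    		if (!(*info.dev_flags & RTE_ETH_DEV_REPRESENTOR))
    			continue;
    		if (rte_eth_dev_get_name_by_port(port, name) == 0)
    			printf("port %u (%s) is a representor\n", port, name);
    	}
    }
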
From patchwork Fri Aug 11 16:34:13 2023
X-Patchwork-Submitter: Harman Kalra
X-Patchwork-Id: 130175
X-Patchwork-Delegate: jerinj@marvell.com
From: Harman Kalra
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
CC: Harman Kalra
Subject: [PATCH 3/9] common/cnxk: maintaining representor state
Date: Fri, 11 Aug 2023 22:04:13 +0530
Message-ID: <20230811163419.165790-4-hkalra@marvell.com>
In-Reply-To: <20230811163419.165790-1-hkalra@marvell.com>
References: <20230811163419.165790-1-hkalra@marvell.com>

Maintain the state of each representor, describing whether it has an active
VF and what the func id of the representee is. Implement a mbox between VF
and PF so a VF can learn whether representors are available.
Signed-off-by: Harman Kalra --- drivers/common/cnxk/roc_dev.c | 167 +++++++++++++++++++++++------ drivers/common/cnxk/roc_dev_priv.h | 7 +- drivers/common/cnxk/roc_nix.c | 23 ++++ drivers/common/cnxk/roc_nix.h | 22 ++-- drivers/common/cnxk/version.map | 3 + 5 files changed, 182 insertions(+), 40 deletions(-) diff --git a/drivers/common/cnxk/roc_dev.c b/drivers/common/cnxk/roc_dev.c index 4b0ba218ed..4057380eb7 100644 --- a/drivers/common/cnxk/roc_dev.c +++ b/drivers/common/cnxk/roc_dev.c @@ -216,15 +216,120 @@ af_pf_wait_msg(struct dev *dev, uint16_t vf, int num_msg) return req_hdr->num_msgs; } +static int +foward_msg_to_af(struct dev *dev, struct mbox_msghdr *msg, size_t size) +{ + struct mbox_msghdr *af_req; + + /* Reserve AF/PF mbox message */ + size = PLT_ALIGN(size, MBOX_MSG_ALIGN); + af_req = mbox_alloc_msg(dev->mbox, 0, size); + if (af_req == NULL) + return -ENOSPC; + mbox_req_init(msg->id, af_req); + + /* Copy message from VF<->PF mbox to PF<->AF mbox */ + mbox_memcpy((uint8_t *)af_req + sizeof(struct mbox_msghdr), + (uint8_t *)msg + sizeof(struct mbox_msghdr), + size - sizeof(struct mbox_msghdr)); + af_req->pcifunc = msg->pcifunc; + + return 0; +} + +static int +process_vf_ready_msg(struct dev *dev, struct mbox *mbox, struct mbox_msghdr *msg, + uint16_t vf) +{ + uint16_t max_bits = sizeof(dev->active_vfs[0]) * 8; + struct ready_msg_rsp *rsp; + int rc; + + /* Handle READY message in PF */ + dev->active_vfs[vf / max_bits] |= BIT_ULL(vf % max_bits); + rsp = (struct ready_msg_rsp *)mbox_alloc_msg(mbox, vf, sizeof(*rsp)); + if (!rsp) { + plt_err("Failed to alloc VF%d READY message", vf); + return -1; + } + + mbox_rsp_init(msg->id, rsp); + + /* PF/VF function ID */ + rsp->hdr.pcifunc = msg->pcifunc; + rsp->hdr.rc = 0; + + /* Set pffunc value to its representor, op = 0 */ + if (dev->ops && dev->ops->rep_state) { + rc = dev->ops->rep_state(dev->roc_nix, msg->pcifunc, 0); + if (rc < 0) + plt_err("Failed to set repr status, pcifunc 0x%x", + msg->pcifunc); + } + + return 0; +} + +static int +process_vf_read_base_rule_msg(struct dev *dev, struct mbox *mbox, struct mbox_msghdr *msg, + uint16_t vf, size_t size, int *routed) +{ + struct npc_mcam_read_base_rule_rsp *rsp; + int route = *routed; + int rc = 0; + + /* Check if pcifunc has representor, op = 1 */ + if (dev->ops && dev->ops->rep_state) { + rc = dev->ops->rep_state(dev->roc_nix, msg->pcifunc, 1); + if (rc < 0) { + plt_err("Failed to get repr status, pcifunc 0x%x", + msg->pcifunc); + return rc; + } + } + + /* If ret is 1 meaning pci func has a representor, + * return without forwarding base rule mbox + */ + if (rc == 1) { + rsp = (struct npc_mcam_read_base_rule_rsp *)mbox_alloc_msg( + mbox, vf, sizeof(*rsp)); + if (!rsp) { + plt_err("Failed to alloc VF%d rep status message", vf); + return -1; + } + + mbox_rsp_init(msg->id, rsp); + + /* PF/VF function ID */ + rsp->hdr.pcifunc = msg->pcifunc; + rsp->hdr.rc = 0; + } else { + /* If ret is 0, default case i.e. forwarding to AF + * should happen. 
+ */ + rc = foward_msg_to_af(dev, msg, size); + if (rc) { + plt_err("Failed to forward msg ID %d to af, err %d", + msg->id, rc); + return rc; + } + route++; + } + *routed = route; + + return 0; +} + /* PF receives mbox DOWN messages from VF and forwards to AF */ static int vf_pf_process_msgs(struct dev *dev, uint16_t vf) { struct mbox *mbox = &dev->mbox_vfpf; struct mbox_dev *mdev = &mbox->dev[vf]; + int offset, routed = 0, ret = 0; struct mbox_hdr *req_hdr; struct mbox_msghdr *msg; - int offset, routed = 0; size_t size; uint16_t i; @@ -242,42 +347,31 @@ vf_pf_process_msgs(struct dev *dev, uint16_t vf) /* RVU_PF_FUNC_S */ msg->pcifunc = dev_pf_func(dev->pf, vf); - if (msg->id == MBOX_MSG_READY) { - struct ready_msg_rsp *rsp; - uint16_t max_bits = sizeof(dev->active_vfs[0]) * 8; - - /* Handle READY message in PF */ - dev->active_vfs[vf / max_bits] |= - BIT_ULL(vf % max_bits); - rsp = (struct ready_msg_rsp *)mbox_alloc_msg( - mbox, vf, sizeof(*rsp)); - if (!rsp) { - plt_err("Failed to alloc VF%d READY message", - vf); + switch (msg->id) { + case MBOX_MSG_READY: + ret = process_vf_ready_msg(dev, mbox, msg, vf); + if (ret) { + plt_err("Failed to process ready msg for vf %d", vf); continue; } - mbox_rsp_init(msg->id, rsp); + break; + case MBOX_MSG_NPC_MCAM_READ_BASE_RULE: + ret = process_vf_read_base_rule_msg(dev, mbox, msg, vf, size, &routed); + if (ret) { + plt_err("Failed to process base rule for vf %d, err %d", vf, ret); + continue; + } - /* PF/VF function ID */ - rsp->hdr.pcifunc = msg->pcifunc; - rsp->hdr.rc = 0; - } else { - struct mbox_msghdr *af_req; - /* Reserve AF/PF mbox message */ - size = PLT_ALIGN(size, MBOX_MSG_ALIGN); - af_req = mbox_alloc_msg(dev->mbox, 0, size); - if (af_req == NULL) - return -ENOSPC; - mbox_req_init(msg->id, af_req); - - /* Copy message from VF<->PF mbox to PF<->AF mbox */ - mbox_memcpy((uint8_t *)af_req + - sizeof(struct mbox_msghdr), - (uint8_t *)msg + sizeof(struct mbox_msghdr), - size - sizeof(struct mbox_msghdr)); - af_req->pcifunc = msg->pcifunc; + break; + default: { + ret = foward_msg_to_af(dev, msg, size); + if (ret) { + plt_err("Failed to forward msg ID %d to af, err %d", msg->id, ret); + return ret; + } routed++; + } break; } offset = mbox->rx_start + msg->next_msgoff; } @@ -1051,6 +1145,7 @@ vf_flr_handle_msg(void *param, dev_intr_t *flr) { uint16_t vf, max_vf, max_bits; struct dev *dev = param; + int ret; max_bits = sizeof(flr->bits[0]) * sizeof(uint64_t); max_vf = max_bits * MAX_VFPF_DWORD_BITS; @@ -1063,6 +1158,14 @@ vf_flr_handle_msg(void *param, dev_intr_t *flr) vf_flr_send_msg(dev, vf); flr->bits[vf / max_bits] &= ~(BIT_ULL(vf % max_bits)); + /* Reset VF representors state, op = 2 */ + if (dev->ops && dev->ops->rep_state) { + ret = dev->ops->rep_state(dev->roc_nix, dev_pf_func(dev->pf, vf), + 2); + if (ret < 0) + plt_err("Failed to set repr status, for vf %x", vf); + } + /* Signal FLR finish */ plt_write64(BIT_ULL(vf % max_bits), dev->bar2 + RVU_PF_VFTRPENDX(vf / max_bits)); diff --git a/drivers/common/cnxk/roc_dev_priv.h b/drivers/common/cnxk/roc_dev_priv.h index 1f84f74ff3..50a7a67d42 100644 --- a/drivers/common/cnxk/roc_dev_priv.h +++ b/drivers/common/cnxk/roc_dev_priv.h @@ -34,14 +34,17 @@ typedef int (*ptp_info_t)(void *roc_nix, bool enable); typedef void (*q_err_cb_t)(void *roc_nix, void *data); /* Link status get callback */ -typedef void (*link_status_get_t)(void *roc_nix, - struct cgx_link_user_info *link); +typedef void (*link_status_get_t)(void *roc_nix, struct cgx_link_user_info *link); + +/* Process representor status callback 
*/ +typedef int (*rep_state_t)(void *roc_nix, uint16_t pf_func, uint8_t op); struct dev_ops { link_info_t link_status_update; ptp_info_t ptp_info_update; link_status_get_t link_status_get; q_err_cb_t q_err_cb; + rep_state_t rep_state; }; #define dev_is_vf(dev) ((dev)->hwcap & DEV_HWCAP_F_VF) diff --git a/drivers/common/cnxk/roc_nix.c b/drivers/common/cnxk/roc_nix.c index 152ef7269e..0ee534f188 100644 --- a/drivers/common/cnxk/roc_nix.c +++ b/drivers/common/cnxk/roc_nix.c @@ -522,3 +522,26 @@ roc_nix_dev_fini(struct roc_nix *roc_nix) rc |= dev_fini(&nix->dev, nix->pci_dev); return rc; } + +int +roc_nix_process_rep_state_cb_register(struct roc_nix *roc_nix, + process_rep_state_t proc_rep_st) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct dev *dev = &nix->dev; + + if (proc_rep_st == NULL) + return NIX_ERR_PARAM; + + dev->ops->rep_state = (rep_state_t)proc_rep_st; + return 0; +} + +void +roc_nix_process_rep_state_cb_unregister(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct dev *dev = &nix->dev; + + dev->ops->rep_state = NULL; +} diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h index 9c2ba9a685..47ab3560ea 100644 --- a/drivers/common/cnxk/roc_nix.h +++ b/drivers/common/cnxk/roc_nix.h @@ -443,8 +443,14 @@ typedef int (*ptp_info_update_t)(struct roc_nix *roc_nix, bool enable); typedef void (*q_err_get_t)(struct roc_nix *roc_nix, void *data); /* Link status get callback */ -typedef void (*link_info_get_t)(struct roc_nix *roc_nix, - struct roc_nix_link_info *link); +typedef void (*link_info_get_t)(struct roc_nix *roc_nix, struct roc_nix_link_info *link); + +/* Process representor status callback: + * op = 0 update pffunc of vf being represented + * op = 1 check if any representor is representing pffunc + * op = 2 vf is going down, reset rep state + */ +typedef int (*process_rep_state_t)(void *roc_nix, uint16_t pf_func, uint8_t op); TAILQ_HEAD(roc_nix_list, roc_nix); @@ -520,6 +526,9 @@ roc_nix_tm_max_shaper_burst_get(void) /* Dev */ int __roc_api roc_nix_dev_init(struct roc_nix *roc_nix); int __roc_api roc_nix_dev_fini(struct roc_nix *roc_nix); +int __roc_api roc_nix_process_rep_state_cb_register(struct roc_nix *roc_nix, + process_rep_state_t proc_rep_st); +void __roc_api roc_nix_process_rep_state_cb_unregister(struct roc_nix *roc_nix); /* Type */ bool __roc_api roc_nix_is_lbk(struct roc_nix *roc_nix); @@ -532,13 +541,14 @@ int __roc_api roc_nix_get_vf(struct roc_nix *roc_nix); uint16_t __roc_api roc_nix_get_pf_func(struct roc_nix *roc_nix); uint16_t __roc_api roc_nix_get_vwqe_interval(struct roc_nix *roc_nix); int __roc_api roc_nix_max_pkt_len(struct roc_nix *roc_nix); +bool __roc_api roc_nix_has_rep(struct roc_nix *roc_nix); /* LF ops */ -int __roc_api roc_nix_lf_alloc(struct roc_nix *roc_nix, uint32_t nb_rxq, - uint32_t nb_txq, uint64_t rx_cfg); +int __roc_api roc_nix_lf_alloc(struct roc_nix *roc_nix, uint32_t nb_rxq, uint32_t nb_txq, + uint64_t rx_cfg); int __roc_api roc_nix_lf_free(struct roc_nix *roc_nix); -int __roc_api roc_nix_lf_inl_ipsec_cfg(struct roc_nix *roc_nix, - struct roc_nix_ipsec_cfg *cfg, bool enb); +int __roc_api roc_nix_lf_inl_ipsec_cfg(struct roc_nix *roc_nix, struct roc_nix_ipsec_cfg *cfg, + bool enb); int __roc_api roc_nix_cpt_ctx_cache_sync(struct roc_nix *roc_nix); int __roc_api roc_nix_rx_drop_re_set(struct roc_nix *roc_nix, bool ena); diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index 1d6e306848..327840429f 100644 --- a/drivers/common/cnxk/version.map +++ 
b/drivers/common/cnxk/version.map
@@ -205,6 +205,8 @@ INTERNAL {
 	roc_nix_cqe_dump;
 	roc_nix_dev_fini;
 	roc_nix_dev_init;
+	roc_nix_process_rep_state_cb_register;
+	roc_nix_process_rep_state_cb_unregister;
 	roc_nix_dump;
 	roc_nix_err_intr_ena_dis;
 	roc_nix_fc_config_get;
@@ -217,6 +219,7 @@ INTERNAL {
 	roc_nix_get_pf_func;
 	roc_nix_get_vf;
 	roc_nix_get_vwqe_interval;
+	roc_nix_has_rep;
 	roc_nix_inl_cb_register;
 	roc_nix_inl_cb_unregister;
 	roc_nix_inl_ctx_write;
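Not part of the patch: to make the op contract above concrete before patch 4 supplies the real implementation (cnxk_process_representor_status()), a hedged skeleton of what a process_rep_state_t callback is expected to do for each op code; the comments restate the roc_nix.h documentation, the bodies are intentionally empty:

    /* Skeleton of the rep_state callback contract defined in roc_nix.h. */
    static int
    example_rep_state_cb(void *roc_nix, uint16_t pf_func, uint8_t op)
    {
    	(void)roc_nix;
    	(void)pf_func;

    	switch (op) {
    	case 0: /* a VF came up: record pf_func on its waiting representor */
    		return 0;
    	case 1: /* query: return 1 if some representor represents pf_func */
    		return 0;
    	case 2: /* VF FLR: clear the representor state tied to pf_func */
    		return 0;
    	default:
    		return -1;
    	}
    }
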
From patchwork Fri Aug 11 16:34:14 2023
X-Patchwork-Submitter: Harman Kalra
X-Patchwork-Id: 130173
X-Patchwork-Delegate: jerinj@marvell.com
From: Harman Kalra
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
CC: Harman Kalra
Subject: [PATCH 4/9] net/cnxk: callbacks for representor state
Date: Fri, 11 Aug 2023 22:04:14 +0530
Message-ID: <20230811163419.165790-5-hkalra@marvell.com>
In-Reply-To: <20230811163419.165790-1-hkalra@marvell.com>
References: <20230811163419.165790-1-hkalra@marvell.com>

Implement the callbacks for processing representor state. Three operations
are currently supported:
- set a representor active if its VF is enabled, and set the appropriate
  pf_func value on it.
- check whether the VF that sent a mbox has a representor.
- clear the representor state when its VF goes down.

Signed-off-by: Harman Kalra
---
 drivers/net/cnxk/cnxk_rep.c | 65 +++++++++++++++++++++++++++++++++++++
 drivers/net/cnxk/cnxk_rep.h |  4 +++
 2 files changed, 69 insertions(+)

diff --git a/drivers/net/cnxk/cnxk_rep.c b/drivers/net/cnxk/cnxk_rep.c
index ebefc34ac8..4dd564058c 100644
--- a/drivers/net/cnxk/cnxk_rep.c
+++ b/drivers/net/cnxk/cnxk_rep.c
@@ -39,6 +39,7 @@ cnxk_rep_dev_remove(struct rte_eth_dev *pf_ethdev)
 	struct cnxk_eth_dev *pf_dev = cnxk_eth_pmd_priv(pf_ethdev);
 	int rc = 0;
 
+	roc_nix_process_rep_state_cb_unregister(&pf_dev->nix);
 	rc = rte_eth_switch_domain_free(pf_dev->switch_domain_id);
 	if (rc)
 		plt_err("Failed to alloc switch domain: %d", rc);
@@ -183,6 +184,63 @@ cnxk_rep_dev_init(struct rte_eth_dev *eth_dev, void *params)
 	return 0;
 }
 
+static int
+cnxk_process_representor_status(void *roc_nix, uint16_t pf_func, uint8_t op)
+{
+	struct cnxk_eth_dev *pf_dev = (struct cnxk_eth_dev *)roc_nix;
+	struct cnxk_rep_dev *rep_dev = NULL;
+	struct rte_eth_dev *rep_eth_dev;
+	uint16_t match = 0, func_val;
+	bool is_vf_active;
+	int i, rc = 0;
+
+	if (!pf_dev) {
+		plt_err("Failed to get PF ethdev handle");
+		return -1;
+	}
+
+	switch (op) {
+	case 0: /* update pffunc of vf being represented */
+		match = 0;
+		func_val = pf_func;
+		is_vf_active = true;
+		break;
+	case 1: /* check if any representor is representing pffunc */
+		match = pf_func;
+		func_val = pf_func;
+		is_vf_active = true;
+		break;
+	case 2: /* vf is going down, reset rep state */
+		match = pf_func;
+		func_val = 0;
+		is_vf_active = false;
+		break;
+	default:
+		plt_err("Invalid op received %d pf_func %x", op, pf_func);
+		return -1;
+	};
+
+	for (i = 0; i < pf_dev->num_reps; i++) {
+		rep_eth_dev = pf_dev->rep_info[i].rep_eth_dev;
+		if (!rep_eth_dev) {
+			plt_err("Failed to get rep ethdev handle");
+			return -1;
+		}
+
+		rep_dev = cnxk_rep_pmd_priv(rep_eth_dev);
+		if (rep_dev->pf_func == match) {
+			plt_base_dbg("Representor port %d op %d match %d func_val %d vf_active %d",
+				     i, op, match, func_val, is_vf_active);
+			rep_dev->pf_func = func_val;
+			rep_dev->is_vf_active = is_vf_active;
+			rc = 1;
+			break;
+		}
+	}
+
+	return rc;
+}
+
 int
 cnxk_rep_dev_probe(struct rte_pci_device *pci_dev, struct rte_eth_dev *pf_ethdev,
 		   struct rte_eth_devargs *eth_da)
@@ -256,6 +314,13 @@ cnxk_rep_dev_probe(struct rte_pci_device *pci_dev, struct rte_eth_dev *pf_ethdev
 		pf_dev->num_reps++;
 	}
 
+	/* Register up msg callbacks for processing representor information */
+	if (roc_nix_process_rep_state_cb_register(&pf_dev->nix, cnxk_process_representor_status)) {
+		plt_err("Failed to register callback for representor status");
+		rc = -EINVAL;
+		goto err;
+	}
+
 	return 0;
 err:
 	return rc;
diff --git a/drivers/net/cnxk/cnxk_rep.h b/drivers/net/cnxk/cnxk_rep.h
index 24adb9649b..e3fc717a58 100644
--- a/drivers/net/cnxk/cnxk_rep.h
+++ b/drivers/net/cnxk/cnxk_rep.h
@@ -17,6 +17,10 @@ struct cnxk_rep_dev {
 	uint16_t vf_id;
 	uint16_t switch_domain_id;
 	struct rte_eth_dev *parent_dev;
+	struct rte_mempool *ctrl_chan_pool;
+	uint16_t rep_xport_vdev;
+	bool is_vf_active;
+	uint16_t pf_func;
 	uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
 };
From patchwork Fri Aug 11 16:34:15 2023
X-Patchwork-Submitter: Harman Kalra
X-Patchwork-Id: 130176
X-Patchwork-Delegate: jerinj@marvell.com
From: Harman Kalra
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
CC: Harman Kalra
Subject: [PATCH 5/9] net/cnxk: add representor control plane
Date: Fri, 11 Aug 2023 22:04:15 +0530
Message-ID: <20230811163419.165790-6-hkalra@marvell.com>
In-Reply-To: <20230811163419.165790-1-hkalra@marvell.com>
References: <20230811163419.165790-1-hkalra@marvell.com>

Implement the control path for representor ports, through which the
represented ports can be configured using TLV messaging.
Signed-off-by: Harman Kalra --- drivers/net/cnxk/cnxk_ethdev.c | 8 + drivers/net/cnxk/cnxk_ethdev.h | 3 + drivers/net/cnxk/cnxk_rep.c | 13 +- drivers/net/cnxk/cnxk_rep.h | 1 + drivers/net/cnxk/cnxk_rep_msg.c | 559 ++++++++++++++++++++++++++++++++ drivers/net/cnxk/cnxk_rep_msg.h | 78 +++++ drivers/net/cnxk/meson.build | 1 + 7 files changed, 662 insertions(+), 1 deletion(-) create mode 100644 drivers/net/cnxk/cnxk_rep_msg.c create mode 100644 drivers/net/cnxk/cnxk_rep_msg.h diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c index 902e6df72d..a63c020c0e 100644 --- a/drivers/net/cnxk/cnxk_ethdev.c +++ b/drivers/net/cnxk/cnxk_ethdev.c @@ -1645,6 +1645,14 @@ cnxk_nix_dev_stop(struct rte_eth_dev *eth_dev) memset(&link, 0, sizeof(link)); rte_eth_linkstatus_set(eth_dev, &link); + /* Exiting the rep msg ctrl thread */ + if (dev->num_reps) { + if (dev->start_rep_thread) { + dev->start_rep_thread = false; + pthread_join(dev->rep_ctrl_msg_thread, NULL); + } + } + return 0; } diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h index 3896db38e1..0a1a4e377d 100644 --- a/drivers/net/cnxk/cnxk_ethdev.h +++ b/drivers/net/cnxk/cnxk_ethdev.h @@ -425,6 +425,9 @@ struct cnxk_eth_dev { uint16_t switch_domain_id; uint16_t num_reps; uint16_t rep_xport_vdev; + rte_spinlock_t rep_lock; + bool start_rep_thread; + pthread_t rep_ctrl_msg_thread; struct cnxk_rep_info *rep_info; }; diff --git a/drivers/net/cnxk/cnxk_rep.c b/drivers/net/cnxk/cnxk_rep.c index 4dd564058c..e6f5790adc 100644 --- a/drivers/net/cnxk/cnxk_rep.c +++ b/drivers/net/cnxk/cnxk_rep.c @@ -2,6 +2,7 @@ * Copyright(C) 2023 Marvell. */ #include +#include /* CNXK platform representor dev ops */ struct eth_dev_ops cnxk_rep_dev_ops = { @@ -203,7 +204,7 @@ cnxk_process_representor_status(void *roc_nix, uint16_t pf_func, uint8_t op) case 0: /* update pffunc of vf being represented */ match = 0; func_val = pf_func; - is_vf_active = true; + is_vf_active = false; break; case 1: /* check if any representor is representing pffunc */ match = pf_func; @@ -314,6 +315,9 @@ cnxk_rep_dev_probe(struct rte_pci_device *pci_dev, struct rte_eth_dev *pf_ethdev pf_dev->num_reps++; } + /* Spinlock for synchronization between the control messages */ + plt_spinlock_init(&pf_dev->rep_lock); + /* Register up msg callbacks for processing representor information */ if (roc_nix_process_rep_state_cb_register(&pf_dev->nix, cnxk_process_representor_status)) { plt_err("Failed to register callback for representor status"); @@ -321,6 +325,13 @@ cnxk_rep_dev_probe(struct rte_pci_device *pci_dev, struct rte_eth_dev *pf_ethdev goto err; } + /* Launch a thread to handle control messages */ + rc = cnxk_rep_control_thread_launch(pf_dev); + if (rc) { + plt_err("Failed to launch message ctrl thread"); + goto err; + } + return 0; err: return rc; diff --git a/drivers/net/cnxk/cnxk_rep.h b/drivers/net/cnxk/cnxk_rep.h index e3fc717a58..8825fa1cf2 100644 --- a/drivers/net/cnxk/cnxk_rep.h +++ b/drivers/net/cnxk/cnxk_rep.h @@ -8,6 +8,7 @@ #define CNXK_REP_XPORT_VDEV_DEVARGS "role=server" #define CNXK_REP_XPORT_VDEV_NAME "net_memif" +#define CNXK_REP_VDEV_CTRL_QUEUE 0 #define CNXK_MAX_REP_PORTS 128 /* Common ethdev ops */ diff --git a/drivers/net/cnxk/cnxk_rep_msg.c b/drivers/net/cnxk/cnxk_rep_msg.c new file mode 100644 index 0000000000..ca3b6b014e --- /dev/null +++ b/drivers/net/cnxk/cnxk_rep_msg.c @@ -0,0 +1,559 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell. 
+ */ + +#include +#include + +#define CTRL_MSG_RCV_TIMEOUT_MS 2000 +#define CTRL_MSG_READY_WAIT_US 2000 +#define CTRL_MSG_THRD_NAME_LEN 35 +#define CTRL_MSG_BUFFER_SZ 1500 +#define CTRL_MSG_SIGNATURE 0xcdacdeadbeefcadc + +static int +send_message(void *buffer, size_t len, struct rte_mempool *mb_pool, uint16_t portid) +{ + struct rte_mbuf *m = NULL; + uint8_t nb_pkt; + int rc = 0; + char *data; + + m = rte_pktmbuf_alloc(mb_pool); + if (m == NULL) { + plt_err("Cannot allocate mbuf"); + rc = -rte_errno; + goto fail; + } + + if (rte_pktmbuf_pkt_len(m) != 0) { + plt_err("Bad length"); + rc = -EINVAL; + goto fail; + } + + /* append data */ + data = rte_pktmbuf_append(m, len); + if (data == NULL) { + plt_err("Cannot append data\n"); + rc = -EINVAL; + goto fail; + } + if (rte_pktmbuf_pkt_len(m) != len) { + plt_err("Bad pkt length\n"); + rc = -EINVAL; + goto fail; + } + + if (rte_pktmbuf_data_len(m) != len) { + plt_err("Bad data length\n"); + rc = -EINVAL; + goto fail; + } + + rte_memcpy(data, buffer, len); + + /* Send the control message */ + nb_pkt = rte_eth_tx_burst(portid, CNXK_REP_VDEV_CTRL_QUEUE, (struct rte_mbuf **)&m, 1); + if (nb_pkt == 0) { + plt_err("Failed to send message"); + rc = -EINVAL; + goto fail; + } + + return 0; +fail: + return rc; +} + +void +cnxk_rep_msg_populate_msg_end(void *buffer, uint32_t *length) +{ + cnxk_rep_msg_populate_command(buffer, length, CNXK_REP_MSG_END, 0); +} + +void +cnxk_rep_msg_populate_type(void *buffer, uint32_t *length, cnxk_type_t type, uint32_t sz) +{ + uint32_t len = *length; + cnxk_type_data_t data; + + /* Prepare type data */ + data.type = type; + data.length = sz; + + /* Populate the type data */ + rte_memcpy(RTE_PTR_ADD(buffer, len), &data, sizeof(cnxk_type_data_t)); + len += sizeof(cnxk_type_data_t); + + *length = len; +} + +void +cnxk_rep_msg_populate_header(void *buffer, uint32_t *length) +{ + cnxk_header_t hdr; + int len; + + cnxk_rep_msg_populate_type(buffer, length, CNXK_TYPE_HEADER, sizeof(cnxk_header_t)); + + len = *length; + /* Prepare header data */ + hdr.signature = CTRL_MSG_SIGNATURE; + + /* Populate header data */ + rte_memcpy(RTE_PTR_ADD(buffer, len), &hdr, sizeof(cnxk_header_t)); + len += sizeof(cnxk_header_t); + + *length = len; +} + +void +cnxk_rep_msg_populate_command(void *buffer, uint32_t *length, cnxk_rep_msg_t type, uint32_t size) +{ + cnxk_rep_msg_data_t msg_data; + uint32_t len; + uint16_t sz = sizeof(cnxk_rep_msg_data_t); + + cnxk_rep_msg_populate_type(buffer, length, CNXK_TYPE_MSG, sz); + + len = *length; + /* Prepare command data */ + msg_data.type = type; + msg_data.length = size; + + /* Populate the command */ + rte_memcpy(RTE_PTR_ADD(buffer, len), &msg_data, sz); + len += sz; + + *length = len; +} + +void +cnxk_rep_msg_populate_command_meta(void *buffer, uint32_t *length, void *msg_meta, uint32_t sz, + cnxk_rep_msg_t msg) +{ + uint32_t len; + + cnxk_rep_msg_populate_command(buffer, length, msg, sz); + + len = *length; + /* Populate command data */ + rte_memcpy(RTE_PTR_ADD(buffer, len), msg_meta, sz); + len += sz; + + *length = len; +} + +static int +parse_validate_header(void *msg_buf, uint32_t *buf_trav_len) +{ + cnxk_type_data_t *tdata = NULL; + cnxk_header_t *hdr = NULL; + void *data = NULL; + uint16_t len = 0; + + /* Read first bytes of type data */ + data = msg_buf; + tdata = (cnxk_type_data_t *)data; + if (tdata->type != CNXK_TYPE_HEADER) { + plt_err("Invalid type %d, type header expected", tdata->type); + goto fail; + } + + /* Get the header value */ + data = RTE_PTR_ADD(msg_buf, sizeof(cnxk_type_data_t)); + 
+ len += sizeof(cnxk_type_data_t); + + /* Validate the header */ + hdr = (cnxk_header_t *)data; + if (hdr->signature != CTRL_MSG_SIGNATURE) { + plt_err("Invalid signature detected: 0x%lx", hdr->signature); + goto fail; + } + + /* Update the length read so far */ + len += tdata->length; + + *buf_trav_len = len; + return 0; +fail: + return -EINVAL; +} + +static cnxk_rep_msg_data_t * +message_data_extract(void *msg_buf, uint32_t *buf_trav_len) +{ + cnxk_type_data_t *tdata = NULL; + cnxk_rep_msg_data_t *msg = NULL; + uint16_t len = *buf_trav_len; + void *data; + + tdata = (cnxk_type_data_t *)RTE_PTR_ADD(msg_buf, len); + if (tdata->type != CNXK_TYPE_MSG) { + plt_err("Invalid type %d, type MSG expected", tdata->type); + goto fail; + } + + /* Get the message type */ + len += sizeof(cnxk_type_data_t); + data = RTE_PTR_ADD(msg_buf, len); + msg = (cnxk_rep_msg_data_t *)data; + + /* Advance to actual message data */ + len += tdata->length; + *buf_trav_len = len; + + return msg; +fail: + return NULL; +} + +static void +process_ack_message(void *msg_buf, uint32_t *buf_trav_len, uint32_t msg_len, void *data) +{ + cnxk_rep_msg_ack_data_t *adata = (cnxk_rep_msg_ack_data_t *)data; + uint16_t len = *buf_trav_len; + void *buf; + + /* Get the message type data viz ack data */ + buf = RTE_PTR_ADD(msg_buf, len); + adata->size = msg_len; + if (adata->size == sizeof(uint64_t)) { + rte_memcpy(&adata->u.data, buf, msg_len); + } else { + adata->u.data = rte_zmalloc("Ack data", msg_len, 0); + rte_memcpy(adata->u.data, buf, msg_len); + } + plt_rep_dbg("Address %p val 0x%lx sval %ld msg_len %d", adata->u.data, adata->u.val, + adata->u.sval, msg_len); + + /* Advance length to the next message */ + len += msg_len; + *buf_trav_len = len; +} + +static int +notify_rep_dev_ready(void *data, bool state) +{ + struct cnxk_eth_dev *pf_dev = (struct cnxk_eth_dev *)data; + struct cnxk_rep_dev *rep_dev = NULL; + struct rte_eth_dev *rep_eth_dev; + int i; + + for (i = 0; i < pf_dev->num_reps; i++) { + rep_eth_dev = pf_dev->rep_info[i].rep_eth_dev; + if (!rep_eth_dev) + continue; + + rep_dev = cnxk_rep_pmd_priv(rep_eth_dev); + rep_dev->is_vf_active = state; + } + + return 0; +} + +static void +process_ready_message(void *msg_buf, uint32_t *buf_trav_len, uint32_t msg_len, void *data) +{ + cnxk_rep_msg_ready_data_t *rdata = NULL; + uint16_t len = *buf_trav_len; + void *buf; + + /* Get the message type data viz ready data */ + buf = RTE_PTR_ADD(msg_buf, len); + rdata = (cnxk_rep_msg_ready_data_t *)buf; + + plt_rep_dbg("Ready data received %d", rdata->val); + + /* Wait required to ensure the other side is ready for receiving the ack */ + usleep(CTRL_MSG_READY_WAIT_US); + /* Update all representors about the ready message */ + if (rdata->val) + notify_rep_dev_ready(data, true); + + /* Advance length to the next message */ + len += msg_len; + *buf_trav_len = len; +} + +static void +process_exit_message(void *msg_buf, uint32_t *buf_trav_len, uint32_t msg_len, void *data) +{ + cnxk_rep_msg_exit_data_t *edata = NULL; + uint16_t len = *buf_trav_len; + void *buf; + + /* Get the message type data viz exit data */ + buf = RTE_PTR_ADD(msg_buf, len); + edata = (cnxk_rep_msg_exit_data_t *)buf; + + plt_rep_dbg("Exit data received %d", edata->val); + + /* Update all representors about the exit message */ + if (edata->val) + notify_rep_dev_ready(data, false); + + /* Advance length to the next message */ + len += msg_len; + *buf_trav_len = len; +} + +static void +populate_ack_msg(void *buffer, uint32_t *length, cnxk_rep_msg_ack_data_t *adata) +{ + uint32_t sz = sizeof(cnxk_rep_msg_ack_data_t);
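+ /* The ack data is serialized verbatim: eight-byte payloads travel inline + * in the union (u.val/u.sval), larger payloads via the u.data heap buffer + * (see process_ack_message above). + */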
+ uint32_t len; + + cnxk_rep_msg_populate_command(buffer, length, CNXK_REP_MSG_ACK, sz); + + len = *length; + + rte_memcpy(RTE_PTR_ADD(buffer, len), adata, sz); + + len += sz; + + *length = len; +} + +static int +send_ack_message(cnxk_rep_msg_ack_data_t *adata, struct rte_mempool *mb_pool, uint16_t portid) +{ + uint32_t len = 0, size; + void *buffer; + int rc = 0; + + /* Allocate memory for preparing a message */ + size = CTRL_MSG_BUFFER_SZ; + buffer = rte_zmalloc("ACK msg", size, 0); + if (!buffer) { + plt_err("Failed to allocate mem"); + return -ENOMEM; + } + + /* Prepare the ACK message */ + cnxk_rep_msg_populate_header(buffer, &len); + populate_ack_msg(buffer, &len, adata); + cnxk_rep_msg_populate_msg_end(buffer, &len); + + /* Send it to the peer */ + rc = send_message(buffer, len, mb_pool, portid); + if (rc) + plt_err("Failed to send the ack"); + + return rc; +} + +static int +process_message(void *msg_buf, uint32_t *buf_trav_len, void *data, struct rte_mempool *mb_pool, + uint16_t portid) +{ + cnxk_rep_msg_data_t *msg = NULL; + cnxk_rep_msg_ack_data_t adata; + bool send_ack; + int rc = 0; + + /* Get the message data */ + msg = message_data_extract(msg_buf, buf_trav_len); + if (!msg) { + plt_err("Failed to get message data"); + rc = -EINVAL; + goto fail; + } + + /* Different message type processing */ + while (msg->type != CNXK_REP_MSG_END) { + send_ack = true; + switch (msg->type) { + case CNXK_REP_MSG_ACK: + process_ack_message(msg_buf, buf_trav_len, msg->length, data); + send_ack = false; + break; + case CNXK_REP_MSG_READY: + process_ready_message(msg_buf, buf_trav_len, msg->length, data); + adata.type = CNXK_REP_MSG_READY; + adata.u.val = 0; + adata.size = sizeof(uint64_t); + break; + case CNXK_REP_MSG_EXIT: + process_exit_message(msg_buf, buf_trav_len, msg->length, data); + adata.type = CNXK_REP_MSG_EXIT; + adata.u.val = 0; + adata.size = sizeof(uint64_t); + break; + default: + plt_err("Invalid message type: %d", msg->type); + rc = -EINVAL; + goto fail; + } + + /* Send ACK */ + if (send_ack) + send_ack_message(&adata, mb_pool, portid); + + /* Advance to the next message */ + msg = message_data_extract(msg_buf, buf_trav_len); + if (!msg) { + plt_err("Failed to get message data"); + rc = -EINVAL; + goto fail; + } + } + + return 0; +fail: + return rc; +} + +static int +process_control_packet(struct rte_mbuf *mbuf, void *data, uint16_t portid) +{ + uint32_t len = mbuf->data_len; + uint32_t buf_trav_len = 0; + void *msg_buf; + int rc; + + msg_buf = plt_zmalloc(len, 0); + if (!msg_buf) { + plt_err("Failed to allocate mem for msg_buf"); + rc = -ENOMEM; + goto fail; + } + + /* Extract the packet data which contains the message */ + rte_memcpy(msg_buf, rte_pktmbuf_mtod(mbuf, void *), len); + + /* Validate the received message header */ + rc = parse_validate_header(msg_buf, &buf_trav_len); + if (rc) { + plt_err("Invalid message header, err %d", rc); + goto fail; + } + + /* Detect message and process */ + rc = process_message(msg_buf, &buf_trav_len, data, mbuf->pool, portid); + if (rc) { + plt_err("Failed to process message"); + goto fail; + } + + /* Ensure the entire message has been processed */ + if (len != buf_trav_len) { + plt_err("Out of %d bytes, only %d bytes of msg_buf processed", len, buf_trav_len); + rc = -EFAULT; + goto fail; + } + + rte_free(msg_buf); + + return 0; +fail: + rte_free(msg_buf); + return rc; +} + +static int +receive_control_msg_resp(uint16_t portid, void *data) +{ + uint32_t wait_us = CTRL_MSG_RCV_TIMEOUT_MS * 1000; + uint32_t timeout = 0, sleep = 1; + struct rte_mbuf *m = NULL; + uint8_t rx = 0; + int rc = -1; + + do { + rx = rte_eth_rx_burst(portid, CNXK_REP_VDEV_CTRL_QUEUE, &m, 1); + if (rx != 0) + break; + + /* Timeout after CTRL_MSG_RCV_TIMEOUT_MS */ + if (timeout >= 
wait_us) { + plt_err("Control message wait timed out"); + return -ETIMEDOUT; + } + + plt_delay_us(sleep); + timeout += sleep; + } while (rx == 0); + + if (rx) { + rc = process_control_packet(m, data, portid); + /* Freeing the allocated buffer */ + rte_pktmbuf_free(m); + } + + return rc; +} + +int +cnxk_rep_msg_send_process(struct cnxk_rep_dev *rep_dev, void *buffer, uint32_t len, + cnxk_rep_msg_ack_data_t *adata) +{ + struct cnxk_eth_dev *pf_dev; + int rc = 0; + + pf_dev = cnxk_eth_pmd_priv(rep_dev->parent_dev); + if (!pf_dev) { + plt_err("Failed to get parent pf handle"); + rc = -1; + goto fail; + } + + plt_spinlock_lock(&pf_dev->rep_lock); + rc = send_message(buffer, len, rep_dev->ctrl_chan_pool, rep_dev->rep_xport_vdev); + if (rc) { + plt_err("Failed to send the message, err %d", rc); + goto free; + } + + rc = receive_control_msg_resp(rep_dev->rep_xport_vdev, adata); + if (rc) { + plt_err("Failed to receive the response, err %d", rc); + goto free; + } + plt_spinlock_unlock(&pf_dev->rep_lock); + + return 0; +free: + plt_spinlock_unlock(&pf_dev->rep_lock); +fail: + return rc; +} + +static void +poll_for_control_msg(void *data) +{ + struct cnxk_eth_dev *pf_dev = (struct cnxk_eth_dev *)data; + uint16_t portid = pf_dev->rep_xport_vdev; + struct rte_mbuf *m = NULL; + uint8_t rx = 0; + + do { + rx = rte_eth_rx_burst(portid, CNXK_REP_VDEV_CTRL_QUEUE, &m, 1); + if (rx != 0) + break; + } while (rx == 0 && pf_dev->start_rep_thread); + + if (rx) { + plt_spinlock_lock(&pf_dev->rep_lock); + process_control_packet(m, data, portid); + /* Freeing the allocated buffer */ + rte_pktmbuf_free(m); + plt_spinlock_unlock(&pf_dev->rep_lock); + } +} + +static void * +rep_ctrl_msg_thread_main(void *arg) +{ + struct cnxk_eth_dev *pf_dev = (struct cnxk_eth_dev *)arg; + + while (pf_dev->start_rep_thread) + poll_for_control_msg(pf_dev); + + return NULL; +} + +int +cnxk_rep_control_thread_launch(struct cnxk_eth_dev *pf_dev) +{ + char name[CTRL_MSG_THRD_NAME_LEN]; + int rc = 0; + + rte_strscpy(name, "rep_ctrl_msg_hndlr", CTRL_MSG_THRD_NAME_LEN); + pf_dev->start_rep_thread = true; + rc = plt_ctrl_thread_create(&pf_dev->rep_ctrl_msg_thread, name, NULL, + rep_ctrl_msg_thread_main, pf_dev); + if (rc != 0) + plt_err("Failed to create rep control message handling thread"); + + return rc; +} diff --git a/drivers/net/cnxk/cnxk_rep_msg.h b/drivers/net/cnxk/cnxk_rep_msg.h new file mode 100644 index 0000000000..a28c63f762 --- /dev/null +++ b/drivers/net/cnxk/cnxk_rep_msg.h @@ -0,0 +1,78 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell.
+ */ + +#ifndef __CNXK_REP_MSG_H__ +#define __CNXK_REP_MSG_H__ + +#include + +#define CNXK_REP_MSG_MAX_BUFFER_SZ 1500 + +typedef enum CNXK_TYPE { + CNXK_TYPE_HEADER = 0, + CNXK_TYPE_MSG, +} cnxk_type_t; + +typedef enum CNXK_REP_MSG { + /* General sync messages */ + CNXK_REP_MSG_READY = 0, + CNXK_REP_MSG_ACK, + CNXK_REP_MSG_EXIT, + /* End of messaging sequence */ + CNXK_REP_MSG_END, +} cnxk_rep_msg_t; + +/* Types */ +typedef struct cnxk_type_data { + cnxk_type_t type; + uint32_t length; + uint64_t data[]; +} __rte_packed cnxk_type_data_t; + +/* Header */ +typedef struct cnxk_header { + uint64_t signature; + uint16_t nb_hops; +} __rte_packed cnxk_header_t; + +/* Message meta */ +typedef struct cnxk_rep_msg_data { + cnxk_rep_msg_t type; + uint32_t length; + uint64_t data[]; +} __rte_packed cnxk_rep_msg_data_t; + +/* Ack msg */ +typedef struct cnxk_rep_msg_ack_data { + cnxk_rep_msg_t type; + uint32_t size; + union { + void *data; + uint64_t val; + int64_t sval; + } u; +} __rte_packed cnxk_rep_msg_ack_data_t; + +/* Ready msg */ +typedef struct cnxk_rep_msg_ready_data { + uint8_t val; +} __rte_packed cnxk_rep_msg_ready_data_t; + +/* Exit msg */ +typedef struct cnxk_rep_msg_exit_data { + uint8_t val; +} __rte_packed cnxk_rep_msg_exit_data_t; + +void cnxk_rep_msg_populate_command(void *buffer, uint32_t *length, cnxk_rep_msg_t type, + uint32_t size); +void cnxk_rep_msg_populate_command_meta(void *buffer, uint32_t *length, void *msg_meta, uint32_t sz, + cnxk_rep_msg_t msg); +void cnxk_rep_msg_populate_msg_end(void *buffer, uint32_t *length); +void cnxk_rep_msg_populate_type(void *buffer, uint32_t *length, cnxk_type_t type, uint32_t sz); +void cnxk_rep_msg_populate_header(void *buffer, uint32_t *length); +int cnxk_rep_msg_send_process(struct cnxk_rep_dev *rep_dev, void *buffer, uint32_t len, + cnxk_rep_msg_ack_data_t *adata); +int cnxk_rep_control_thread_launch(struct cnxk_eth_dev *pf_dev); + +#endif /* __CNXK_REP_MSG_H__ */ diff --git a/drivers/net/cnxk/meson.build b/drivers/net/cnxk/meson.build index 38dde54ce9..0e7334f5cd 100644 --- a/drivers/net/cnxk/meson.build +++ b/drivers/net/cnxk/meson.build @@ -33,6 +33,7 @@ sources = files( 'cnxk_ptp.c', 'cnxk_flow.c', 'cnxk_rep.c', + 'cnxk_rep_msg.c', 'cnxk_rep_ops.c', 'cnxk_stats.c', 'cnxk_tm.c', From patchwork Fri Aug 11 16:34:16 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Harman Kalra X-Patchwork-Id: 130177 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id CF72E43036; Fri, 11 Aug 2023 18:35:45 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 9D2A94327B; Fri, 11 Aug 2023 18:35:16 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 3489043273 for ; Fri, 11 Aug 2023 18:35:14 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 37BEGHYK021465 for ; Fri, 11 Aug 2023 09:35:13 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=pfpt0220; bh=HPBu6oQAif/lUxtbnZ7C5yu6r6XRwh43Uo48qhby5PY=; 
b=M1000LfmlA2Lu0dVPFJDbHf8g+eH+ud8BmojR1hkNIcxb6QJ7/oKcPq8DfWWiefygd2j PXOokNwinJC09bDYS0PRz8CI9+7UYWo0t2ojc5z1OGRyI/KvhBU6aciNZfp9KYw0E522 Ec6XgQXsFIYJeBT9fakb/+L8uIx9vhiwzj8mLWdzw+1Y9a/0wx+PjYQZ8W5KY27EtWBI ecF7cHm11B1xNLuLVb6TnL6tPUM/oo5ckMhMrz7SBc/61HpKNHMxqSPZme1H/eg2kezx wjlxZowLwODW8WkFVigTGKcPyrAjyLOFIMZCsljsV2dUvzhISMcnKFrUpm7qUGu6TsFw ew== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3sd8ya2quj-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Fri, 11 Aug 2023 09:35:13 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Fri, 11 Aug 2023 09:35:11 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Fri, 11 Aug 2023 09:35:11 -0700 Received: from localhost.localdomain (unknown [10.29.52.211]) by maili.marvell.com (Postfix) with ESMTP id 9A1973F7055; Fri, 11 Aug 2023 09:35:08 -0700 (PDT) From: Harman Kalra To: , Nithin Dabilpuram , "Kiran Kumar K" , Sunil Kumar Kori , Satha Rao CC: , Harman Kalra Subject: [PATCH 6/9] net/cnxk: representor ethdev ops Date: Fri, 11 Aug 2023 22:04:16 +0530 Message-ID: <20230811163419.165790-7-hkalra@marvell.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20230811163419.165790-1-hkalra@marvell.com> References: <20230811163419.165790-1-hkalra@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: GY4kkK-xGf3KNFX3VkWJh8jJdHjsC0B6 X-Proofpoint-GUID: GY4kkK-xGf3KNFX3VkWJh8jJdHjsC0B6 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.957,Hydra:6.0.591,FMLib:17.11.176.26 definitions=2023-08-11_08,2023-08-10_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Implementing ethernet device operation callbacks for port representors PMD Signed-off-by: Harman Kalra --- drivers/net/cnxk/cnxk_rep.c | 62 +-- drivers/net/cnxk/cnxk_rep.h | 36 ++ drivers/net/cnxk/cnxk_rep_msg.h | 15 + drivers/net/cnxk/cnxk_rep_ops.c | 655 ++++++++++++++++++++++++++++++-- 4 files changed, 713 insertions(+), 55 deletions(-) diff --git a/drivers/net/cnxk/cnxk_rep.c b/drivers/net/cnxk/cnxk_rep.c index e6f5790adc..5ee7e93ab9 100644 --- a/drivers/net/cnxk/cnxk_rep.c +++ b/drivers/net/cnxk/cnxk_rep.c @@ -13,6 +13,9 @@ struct eth_dev_ops cnxk_rep_dev_ops = { .rx_queue_release = cnxk_rep_rx_queue_release, .tx_queue_setup = cnxk_rep_tx_queue_setup, .tx_queue_release = cnxk_rep_tx_queue_release, + .promiscuous_enable = cnxk_rep_promiscuous_enable, + .promiscuous_disable = cnxk_rep_promiscuous_disable, + .mac_addr_set = cnxk_rep_mac_addr_set, .link_update = cnxk_rep_link_update, .dev_close = cnxk_rep_dev_close, .dev_stop = cnxk_rep_dev_stop, @@ -24,14 +27,36 @@ struct eth_dev_ops cnxk_rep_dev_ops = { int cnxk_rep_dev_uninit(struct rte_eth_dev *ethdev) { + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(ethdev); + rep_xport_vdev_cfg_t *rep_xport_vdev_cfg = NULL; + const struct plt_memzone *mz; + if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; + mz = plt_memzone_lookup(CNXK_REP_XPORT_VDEV_CFG_MZ); + if (!mz) { + plt_err("Failed to lookup a memzone, rep id %d, err %d", + rep_dev->vf_id, rte_errno); + goto fail; + } + + rep_xport_vdev_cfg = mz->addr; plt_rep_dbg("Representor port:%d uninit", 
ethdev->data->port_id); rte_free(ethdev->data->mac_addrs); ethdev->data->mac_addrs = NULL; + rep_xport_vdev_cfg->nb_rep_ports--; + /* Once all representors are closed, cleanup rep base vdev config */ + if (!rep_xport_vdev_cfg->nb_rep_ports) { + plt_free(rep_xport_vdev_cfg->q_bmap_mem); + plt_free(rep_xport_vdev_cfg->mdevinfo); + plt_memzone_free(mz); + } + return 0; +fail: + return rte_errno; } int @@ -121,26 +146,6 @@ cnxk_init_rep_internal(struct cnxk_eth_dev *pf_dev) return rc; } -static uint16_t -cnxk_rep_tx_burst(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) -{ - PLT_SET_USED(tx_queue); - PLT_SET_USED(tx_pkts); - PLT_SET_USED(nb_pkts); - - return 0; -} - -static uint16_t -cnxk_rep_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) -{ - PLT_SET_USED(rx_queue); - PLT_SET_USED(rx_pkts); - PLT_SET_USED(nb_pkts); - - return 0; -} - static int cnxk_rep_dev_init(struct rte_eth_dev *eth_dev, void *params) { @@ -152,6 +157,11 @@ cnxk_rep_dev_init(struct rte_eth_dev *eth_dev, void *params) rep_dev->vf_id = rep_params->vf_id; rep_dev->switch_domain_id = rep_params->switch_domain_id; rep_dev->parent_dev = rep_params->parent_dev; + rep_dev->u.rxq = UINT16_MAX; + rep_dev->u.txq = UINT16_MAX; + + pf_dev = cnxk_eth_pmd_priv(rep_dev->parent_dev); + rep_dev->rep_xport_vdev = pf_dev->rep_xport_vdev; eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR; eth_dev->data->representor_id = rep_params->vf_id; @@ -170,11 +180,10 @@ cnxk_rep_dev_init(struct rte_eth_dev *eth_dev, void *params) eth_dev->dev_ops = &cnxk_rep_dev_ops; /* Rx/Tx functions stubs to avoid crashing */ - eth_dev->rx_pkt_burst = cnxk_rep_rx_burst; - eth_dev->tx_pkt_burst = cnxk_rep_tx_burst; + eth_dev->rx_pkt_burst = cnxk_rep_rx_burst_dummy; + eth_dev->tx_pkt_burst = cnxk_rep_tx_burst_dummy; /* Link state. 
Inherited from PF */ - pf_dev = cnxk_eth_pmd_priv(rep_dev->parent_dev); link = &pf_dev->eth_dev->data->dev_link; eth_dev->data->dev_link.link_speed = link->link_speed; @@ -325,13 +334,6 @@ cnxk_rep_dev_probe(struct rte_pci_device *pci_dev, struct rte_eth_dev *pf_ethdev goto err; } - /* Launch a thread to handle control messages */ - rc = cnxk_rep_control_thread_launch(pf_dev); - if (rc) { - plt_err("Failed to launch message ctrl thread"); - goto err; - } - return 0; err: return rc; diff --git a/drivers/net/cnxk/cnxk_rep.h b/drivers/net/cnxk/cnxk_rep.h index 8825fa1cf2..2b6403f003 100644 --- a/drivers/net/cnxk/cnxk_rep.h +++ b/drivers/net/cnxk/cnxk_rep.h @@ -6,6 +6,7 @@ #ifndef __CNXK_REP_H__ #define __CNXK_REP_H__ +#define CNXK_REP_XPORT_VDEV_CFG_MZ "rep_xport_vdev_cfg" #define CNXK_REP_XPORT_VDEV_DEVARGS "role=server" #define CNXK_REP_XPORT_VDEV_NAME "net_memif" #define CNXK_REP_VDEV_CTRL_QUEUE 0 @@ -14,6 +15,18 @@ /* Common ethdev ops */ extern struct eth_dev_ops cnxk_rep_dev_ops; +/* Representor base device configurations */ +typedef struct rep_xport_vdev_cfg_s { + struct plt_bitmap *q_map; + void *q_bmap_mem; + uint8_t nb_rep_ports; + uint8_t nb_rep_started; + struct rte_mempool *ctrl_chan_pool; + struct rte_eth_dev_info *mdevinfo; + bool rep_xport_configured; +} rep_xport_vdev_cfg_t; + +/* Representor port configurations */ struct cnxk_rep_dev { uint16_t vf_id; uint16_t switch_domain_id; @@ -22,15 +35,33 @@ struct cnxk_rep_dev { uint16_t rep_xport_vdev; bool is_vf_active; uint16_t pf_func; + union { + uint16_t rxq; + uint16_t txq; + uint16_t rep_portid; + } u; uint8_t mac_addr[RTE_ETHER_ADDR_LEN]; }; +/* Inline functions */ static inline struct cnxk_rep_dev * cnxk_rep_pmd_priv(const struct rte_eth_dev *eth_dev) { return eth_dev->data->dev_private; } +static inline struct rte_eth_dev * +cnxk_rep_xport_eth_dev(uint16_t portid) +{ + if (!rte_eth_dev_is_valid_port(portid)) { + plt_err("Invalid port_id=%u", portid); + return NULL; + } + + return &rte_eth_devices[portid]; +} + +/* Prototypes */ int cnxk_rep_dev_probe(struct rte_pci_device *pci_dev, struct rte_eth_dev *pf_ethdev, struct rte_eth_devargs *eth_da); int cnxk_rep_dev_remove(struct rte_eth_dev *pf_ethdev); @@ -52,5 +83,10 @@ int cnxk_rep_dev_close(struct rte_eth_dev *eth_dev); int cnxk_rep_stats_get(struct rte_eth_dev *eth_dev, struct rte_eth_stats *stats); int cnxk_rep_stats_reset(struct rte_eth_dev *eth_dev); int cnxk_rep_flow_ops_get(struct rte_eth_dev *ethdev, const struct rte_flow_ops **ops); +int cnxk_rep_promiscuous_enable(struct rte_eth_dev *ethdev); +int cnxk_rep_promiscuous_disable(struct rte_eth_dev *ethdev); +int cnxk_rep_mac_addr_set(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr); +uint16_t cnxk_rep_tx_burst_dummy(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); +uint16_t cnxk_rep_rx_burst_dummy(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); #endif /* __CNXK_REP_H__ */ diff --git a/drivers/net/cnxk/cnxk_rep_msg.h b/drivers/net/cnxk/cnxk_rep_msg.h index a28c63f762..554122d7f8 100644 --- a/drivers/net/cnxk/cnxk_rep_msg.h +++ b/drivers/net/cnxk/cnxk_rep_msg.h @@ -19,6 +19,10 @@ typedef enum CNXK_REP_MSG { CNXK_REP_MSG_READY = 0, CNXK_REP_MSG_ACK, CNXK_REP_MSG_EXIT, + /* Ethernet operation msgs */ + CNXK_REP_MSG_ETH_SET_MAC, + CNXK_REP_MSG_ETH_STATS_GET, + CNXK_REP_MSG_ETH_STATS_CLEAR, /* End of messaging sequence */ CNXK_REP_MSG_END, } cnxk_rep_msg_t; @@ -64,6 +68,17 @@ typedef struct cnxk_rep_msg_exit_data { uint8_t val; } __rte_packed cnxk_rep_msg_exit_data_t; +/* Ethernet op - 
set mac */ +typedef struct cnxk_rep_msg_eth_mac_set_meta { + uint16_t portid; + uint8_t addr_bytes[RTE_ETHER_ADDR_LEN]; +} __rte_packed cnxk_rep_msg_eth_set_mac_meta_t; + +/* Ethernet op - get/clear stats */ +typedef struct cnxk_rep_msg_eth_stats_meta { + uint16_t portid; +} __rte_packed cnxk_rep_msg_eth_stats_meta_t; + void cnxk_rep_msg_populate_command(void *buffer, uint32_t *length, cnxk_rep_msg_t type, uint32_t size); void cnxk_rep_msg_populate_command_meta(void *buffer, uint32_t *length, void *msg_meta, uint32_t sz, diff --git a/drivers/net/cnxk/cnxk_rep_ops.c b/drivers/net/cnxk/cnxk_rep_ops.c index 3f1aab077b..022a5137df 100644 --- a/drivers/net/cnxk/cnxk_rep_ops.c +++ b/drivers/net/cnxk/cnxk_rep_ops.c @@ -3,6 +3,54 @@ */ #include +#include + +#define MEMPOOL_CACHE_SIZE 256 +#define TX_DESC_PER_QUEUE 512 +#define RX_DESC_PER_QUEUE 256 +#define NB_REP_VDEV_MBUF 1024 + +static uint16_t +cnxk_rep_tx_burst(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) +{ + struct cnxk_rep_dev *rep_dev = tx_queue; + + nb_pkts = rte_eth_tx_burst(rep_dev->rep_xport_vdev, rep_dev->u.txq, tx_pkts, nb_pkts); + + return nb_pkts; +} + +static uint16_t +cnxk_rep_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) +{ + struct cnxk_rep_dev *rep_dev = rx_queue; + + nb_pkts = rte_eth_rx_burst(rep_dev->rep_xport_vdev, rep_dev->u.rxq, rx_pkts, nb_pkts); + + return nb_pkts; +} + +uint16_t +cnxk_rep_tx_burst_dummy(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) +{ + PLT_SET_USED(tx_queue); + PLT_SET_USED(tx_pkts); + PLT_SET_USED(nb_pkts); + + return 0; +} + +uint16_t +cnxk_rep_rx_burst_dummy(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) +{ + PLT_SET_USED(rx_queue); + PLT_SET_USED(rx_pkts); + PLT_SET_USED(nb_pkts); + + return 0; +} int cnxk_rep_link_update(struct rte_eth_dev *ethdev, int wait_to_complete) @@ -13,39 +61,379 @@ int -cnxk_rep_dev_info_get(struct rte_eth_dev *ethdev, struct rte_eth_dev_info *devinfo) +cnxk_rep_dev_info_get(struct rte_eth_dev *ethdev, struct rte_eth_dev_info *dev_info) { - PLT_SET_USED(ethdev); - PLT_SET_USED(devinfo); + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(ethdev); + rep_xport_vdev_cfg_t *rep_xport_vdev_cfg = NULL; + struct rte_eth_dev_info mdevinfo; + const struct plt_memzone *mz; + int rc = 0; + + mz = plt_memzone_lookup(CNXK_REP_XPORT_VDEV_CFG_MZ); + if (!mz) { + mz = plt_memzone_reserve_cache_align(CNXK_REP_XPORT_VDEV_CFG_MZ, + sizeof(rep_xport_vdev_cfg_t)); + if (!mz) { + plt_err("Failed to reserve a memzone, rep id %d, err %d", + rep_dev->vf_id, rte_errno); + goto fail; + } + } + + rep_xport_vdev_cfg = mz->addr; + /* Get the rep base vdev devinfo */ + if (!rep_xport_vdev_cfg->mdevinfo) { + rc = rte_eth_dev_info_get(rep_dev->rep_xport_vdev, &mdevinfo); + if (rc) { + plt_err("Failed to get rep_xport port dev info, err %d", rc); + goto fail; + } + rep_xport_vdev_cfg->mdevinfo = plt_zmalloc(sizeof(struct rte_eth_dev_info), 0); + if (!rep_xport_vdev_cfg->mdevinfo) { + plt_err("Failed to alloc memory for dev info"); + goto fail; + } + rte_memcpy(rep_xport_vdev_cfg->mdevinfo, &mdevinfo, + sizeof(struct rte_eth_dev_info)); + } + + /* Use rep_xport device info */ + dev_info->max_mac_addrs = rep_xport_vdev_cfg->mdevinfo->max_mac_addrs; + dev_info->max_rx_pktlen = rep_xport_vdev_cfg->mdevinfo->max_rx_pktlen; + dev_info->min_rx_bufsize = rep_xport_vdev_cfg->mdevinfo->min_rx_bufsize;
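+ /* Offload capabilities and MTU bounds below mirror the rep_xport device, + * since all representor traffic is funneled through its queues. + */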
+ dev_info->tx_offload_capa = rep_xport_vdev_cfg->mdevinfo->tx_offload_capa; + + /* For the sake of symmetry, max_rx_queues = max_tx_queues */ + dev_info->max_rx_queues = 1; + dev_info->max_tx_queues = 1; + + /* MTU specifics */ + dev_info->max_mtu = rep_xport_vdev_cfg->mdevinfo->max_mtu; + dev_info->min_mtu = rep_xport_vdev_cfg->mdevinfo->min_mtu; + + /* Switch info specific */ + dev_info->switch_info.name = ethdev->device->name; + dev_info->switch_info.domain_id = rep_dev->switch_domain_id; + dev_info->switch_info.port_id = rep_dev->vf_id; + return 0; +fail: + return rc; +} + +static inline int +bitmap_ctzll(uint64_t slab) +{ + if (slab == 0) + return 0; + + return __builtin_ctzll(slab); +} + +static uint16_t +alloc_rep_xport_qid(struct plt_bitmap *bmp) +{ + uint16_t idx, rc; + uint64_t slab; + uint32_t pos; + + pos = 0; + slab = 0; + /* Scan from the beginning */ + plt_bitmap_scan_init(bmp); + /* Scan the bitmap for a free slab */ + rc = plt_bitmap_scan(bmp, &pos, &slab); + /* Empty bitmap */ + if (rc == 0) + return UINT16_MAX; + + idx = pos + bitmap_ctzll(slab); + plt_bitmap_clear(bmp, idx); + return idx; +} + +static int +configure_rep_xport_queues_map(rep_xport_vdev_cfg_t *rep_xport_vdev_cfg) +{ + int id, rc = 0, q_max; + uint32_t bmap_sz; + void *bmap_mem; + + q_max = CNXK_MAX_REP_PORTS + 1; + /* Defensive check, q_max is a compile-time constant and always non-zero */ + if (!q_max) + return 0; + + bmap_sz = plt_bitmap_get_memory_footprint(q_max); + + /* Allocate memory for the rep_xport queue bitmap */ + bmap_mem = plt_zmalloc(bmap_sz, RTE_CACHE_LINE_SIZE); + if (bmap_mem == NULL) { + plt_err("Failed to allocate memory for rep_xport queue bitmap"); + rc = -ENOMEM; + goto exit; + } + rep_xport_vdev_cfg->q_bmap_mem = bmap_mem; + + /* Initialize the rep_xport queue bitmap */ + rep_xport_vdev_cfg->q_map = plt_bitmap_init(q_max, bmap_mem, bmap_sz); + if (!rep_xport_vdev_cfg->q_map) { + plt_err("Failed to initialize rep_xport queue bitmap"); + rc = -EIO; + goto exit; + } + + /* Mark all the queues free initially */ + for (id = 0; id < q_max; id++) + plt_bitmap_set(rep_xport_vdev_cfg->q_bmap_mem, id); + + return 0; +exit: + return rc; +} + +static uint16_t +cnxk_rep_eth_dev_count_total(void) +{ + uint16_t port, count = 0; + struct rte_eth_dev *ethdev; + + RTE_ETH_FOREACH_DEV(port) { + ethdev = &rte_eth_devices[port]; + if (ethdev->data->dev_flags & RTE_ETH_DEV_REPRESENTOR) + count++; + } + + return count; +} + +static int +configure_control_channel(rep_xport_vdev_cfg_t *rep_xport_vdev_cfg, uint16_t portid) +{ + struct rte_mempool *ctrl_chan_pool = NULL; + int rc; + + /* Allocate a qid for the control channel */ + alloc_rep_xport_qid(rep_xport_vdev_cfg->q_map); + + /* Create the mbuf pool. 
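+ * Sized at NB_REP_VDEV_MBUF mbufs with a MEMPOOL_CACHE_SIZE per-lcore + * cache; it backs only control-channel traffic and is shared by all + * representor ports.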
*/ + ctrl_chan_pool = rte_pktmbuf_pool_create("rep_xport_ctrl_pool", NB_REP_VDEV_MBUF, + MEMPOOL_CACHE_SIZE, RTE_CACHE_LINE_SIZE, + RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id()); + + if (ctrl_chan_pool == NULL) { + plt_err("Cannot init mbuf pool"); + rc = -ENOMEM; + goto fail; + } + + /* Setup a RX queue for control channel */ + rc = rte_eth_rx_queue_setup(portid, CNXK_REP_VDEV_CTRL_QUEUE, RX_DESC_PER_QUEUE, + rte_eth_dev_socket_id(portid), NULL, ctrl_chan_pool); + if (rc < 0) { + plt_err("rte_eth_rx_queue_setup:err=%d, port=%u\n", rc, portid); + goto fail; + } + + /* Setup a TX queue for control channel */ + rc = rte_eth_tx_queue_setup(portid, CNXK_REP_VDEV_CTRL_QUEUE, TX_DESC_PER_QUEUE, + rte_eth_dev_socket_id(portid), NULL); + if (rc < 0) { + plt_err("TX queue setup failed, err %d port %d", rc, portid); + goto fail; + } + + rep_xport_vdev_cfg->ctrl_chan_pool = ctrl_chan_pool; + + return 0; +fail: + return rc; +} + +static int +configure_rep_xport_dev(rep_xport_vdev_cfg_t *rep_xport_vdev_cfg, uint16_t portid) +{ + struct rte_eth_dev *rep_xport_ethdev = cnxk_rep_xport_eth_dev(portid); + static struct rte_eth_conf port_conf_default; + uint16_t nb_rxq, nb_txq, nb_rep_ports; + int rc = 0; + + /* If rep_xport port already started, stop it and reconfigure */ + if (rep_xport_ethdev->data->dev_started) + rte_eth_dev_stop(portid); + + /* Get the no of representors probed */ + nb_rep_ports = cnxk_rep_eth_dev_count_total(); + if (nb_rep_ports > CNXK_MAX_REP_PORTS) { + plt_err("Representors probed %d > Max supported %d", nb_rep_ports, + CNXK_MAX_REP_PORTS); + goto fail; + } + + /* Each queue of rep_xport describes representor port. 1 additional queue is + * configured as control channel to configure flows, etc. + */ + nb_rxq = CNXK_MAX_REP_PORTS + 1; + nb_txq = CNXK_MAX_REP_PORTS + 1; + + rc = rte_eth_dev_configure(portid, nb_rxq, nb_txq, &port_conf_default); + if (rc) { + plt_err("Failed to configure rep_xport port: %d", rc); + goto fail; + } + + rep_xport_vdev_cfg->rep_xport_configured = true; + rep_xport_vdev_cfg->nb_rep_ports = nb_rep_ports; + + return 0; +fail: + return rc; } int cnxk_rep_dev_configure(struct rte_eth_dev *ethdev) { - PLT_SET_USED(ethdev); + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(ethdev); + rep_xport_vdev_cfg_t *rep_xport_vdev_cfg = NULL; + const struct plt_memzone *mz; + int rc = -1; + + mz = plt_memzone_lookup(CNXK_REP_XPORT_VDEV_CFG_MZ); + if (!mz) { + mz = plt_memzone_reserve_cache_align(CNXK_REP_XPORT_VDEV_CFG_MZ, + sizeof(rep_xport_vdev_cfg_t)); + if (!mz) { + plt_err("Failed to reserve a memzone, rep id %d, err %d", + rep_dev->vf_id, rte_errno); + goto fail; + } + } + + rep_xport_vdev_cfg = mz->addr; + /* Return if rep_xport dev already configured */ + if (rep_xport_vdev_cfg->rep_xport_configured) { + rep_dev->ctrl_chan_pool = rep_xport_vdev_cfg->ctrl_chan_pool; + return 0; + } + + /* Configure rep_xport pmd */ + rc = configure_rep_xport_dev(rep_xport_vdev_cfg, rep_dev->rep_xport_vdev); + if (rc) { + plt_err("Configuring rep_xport port failed"); + goto free; + } + + /* Setup a bitmap for rep_xport queues */ + rc = configure_rep_xport_queues_map(rep_xport_vdev_cfg); + if (rc != 0) { + plt_err("Failed to setup rep_xport queue map, err %d", rc); + goto free; + } + + /* Setup a queue for control channel */ + rc = configure_control_channel(rep_xport_vdev_cfg, rep_dev->rep_xport_vdev); + if (rc != 0) { + plt_err("Failed to setup control channgel, err %d", rc); + goto free; + } + rep_dev->ctrl_chan_pool = rep_xport_vdev_cfg->ctrl_chan_pool; + return 0; +free: + 
plt_memzone_free(mz); +fail: + return rc; } int -cnxk_rep_dev_start(struct rte_eth_dev *ethdev) +cnxk_rep_promiscuous_enable(struct rte_eth_dev *ethdev) { PLT_SET_USED(ethdev); return 0; } int -cnxk_rep_dev_close(struct rte_eth_dev *ethdev) +cnxk_rep_promiscuous_disable(struct rte_eth_dev *ethdev) { PLT_SET_USED(ethdev); return 0; } +int +cnxk_rep_dev_start(struct rte_eth_dev *ethdev) +{ + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(ethdev); + rep_xport_vdev_cfg_t *rep_xport_vdev_cfg = NULL; + const struct plt_memzone *mz; + int rc = 0; + + mz = plt_memzone_lookup(CNXK_REP_XPORT_VDEV_CFG_MZ); + if (!mz) { + plt_err("Failed to lookup a memzone, rep id %d, err %d", + rep_dev->vf_id, rte_errno); + goto fail; + } + + rep_xport_vdev_cfg = mz->addr; + ethdev->rx_pkt_burst = cnxk_rep_rx_burst; + ethdev->tx_pkt_burst = cnxk_rep_tx_burst; + + /* Start rep_xport device only once after first representor gets active */ + if (!rep_xport_vdev_cfg->nb_rep_started) { + rc = rte_eth_dev_start(rep_dev->rep_xport_vdev); + if (rc) { + plt_err("Rep base vdev portid %d start failed, err %d", + rep_dev->rep_xport_vdev, rc); + goto fail; + } + + /* Launch a thread to handle control messages */ + rc = cnxk_rep_control_thread_launch(cnxk_eth_pmd_priv(rep_dev->parent_dev)); + if (rc) { + plt_err("Failed to launch message ctrl thread"); + goto fail; + } + } + + rep_xport_vdev_cfg->nb_rep_started++; + + return 0; +fail: + return rc; +} + +int +cnxk_rep_dev_close(struct rte_eth_dev *ethdev) +{ + return cnxk_rep_dev_uninit(ethdev); +} + int cnxk_rep_dev_stop(struct rte_eth_dev *ethdev) { - PLT_SET_USED(ethdev); + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(ethdev); + rep_xport_vdev_cfg_t *rep_xport_vdev_cfg = NULL; + const struct plt_memzone *mz; + + mz = plt_memzone_lookup(CNXK_REP_XPORT_VDEV_CFG_MZ); + if (!mz) { + plt_err("Failed to lookup a memzone, rep id %d, err %d", + rep_dev->vf_id, rte_errno); + goto fail; + } + + rep_xport_vdev_cfg = mz->addr; + ethdev->rx_pkt_burst = cnxk_rep_rx_burst_dummy; + ethdev->tx_pkt_burst = cnxk_rep_tx_burst_dummy; + rep_xport_vdev_cfg->nb_rep_started--; + + /* Stop rep_xport device only after all other devices stopped */ + if (!rep_xport_vdev_cfg->nb_rep_started) + rte_eth_dev_stop(rep_dev->rep_xport_vdev); + return 0; +fail: + return rte_errno; } int @@ -53,54 +441,220 @@ cnxk_rep_rx_queue_setup(struct rte_eth_dev *ethdev, uint16_t rx_queue_id, uint16 unsigned int socket_id, const struct rte_eth_rxconf *rx_conf, struct rte_mempool *mb_pool) { - PLT_SET_USED(ethdev); - PLT_SET_USED(rx_queue_id); - PLT_SET_USED(nb_rx_desc); - PLT_SET_USED(socket_id); - PLT_SET_USED(rx_conf); - PLT_SET_USED(mb_pool); + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(ethdev); + rep_xport_vdev_cfg_t *rep_xport_vdev_cfg = NULL; + const struct plt_memzone *mz; + int rc = 0; + + mz = plt_memzone_lookup(CNXK_REP_XPORT_VDEV_CFG_MZ); + if (!mz) { + plt_err("Failed to lookup a memzone, rep id %d, err %d", + rep_dev->vf_id, rte_errno); + goto fail; + } + + rep_xport_vdev_cfg = mz->addr; + /* Allocate a qid, if tx queue setup already done use the same qid */ + if (rep_dev->u.rxq == UINT16_MAX && rep_dev->u.txq == UINT16_MAX) + rep_dev->u.rxq = alloc_rep_xport_qid(rep_xport_vdev_cfg->q_map); + else + rep_dev->u.rxq = rep_dev->u.txq; + + /* Setup the RX queue */ + rc = rte_eth_rx_queue_setup(rep_dev->rep_xport_vdev, rep_dev->u.rxq, nb_rx_desc, socket_id, + rx_conf, mb_pool); + if (rc < 0) { + plt_err("rte_eth_rx_queue_setup:err=%d, port=%u\n", rc, rep_dev->rep_xport_vdev); + goto fail; + } + + 
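/* Store the representor handle as the queue object; the Rx/Tx burst + * callbacks derive the rep_xport port and queue id from it. + */ + 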
ethdev->data->rx_queues[rx_queue_id] = rep_dev; + plt_info("Representor id %d portid %d rxq %d", rep_dev->vf_id, ethdev->data->port_id, + rep_dev->u.rxq); + return 0; +fail: + return rc; } void cnxk_rep_rx_queue_release(struct rte_eth_dev *ethdev, uint16_t queue_id) { - PLT_SET_USED(ethdev); - PLT_SET_USED(queue_id); + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(ethdev); + rep_xport_vdev_cfg_t *rep_xport_vdev_cfg = NULL; + const struct plt_memzone *mz; + RTE_SET_USED(queue_id); + + mz = plt_memzone_lookup(CNXK_REP_XPORT_VDEV_CFG_MZ); + if (!mz) { + plt_err("Failed to lookup a memzone, rep id %d, err %d", + rep_dev->vf_id, rte_errno); + return; + } + + rep_xport_vdev_cfg = mz->addr; + plt_bitmap_clear(rep_xport_vdev_cfg->q_bmap_mem, rep_dev->u.rxq); } int cnxk_rep_tx_queue_setup(struct rte_eth_dev *ethdev, uint16_t tx_queue_id, uint16_t nb_tx_desc, unsigned int socket_id, const struct rte_eth_txconf *tx_conf) { - PLT_SET_USED(ethdev); - PLT_SET_USED(tx_queue_id); - PLT_SET_USED(nb_tx_desc); - PLT_SET_USED(socket_id); - PLT_SET_USED(tx_conf); + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(ethdev); + rep_xport_vdev_cfg_t *rep_xport_vdev_cfg = NULL; + const struct plt_memzone *mz; + int rc = 0; + + mz = plt_memzone_lookup(CNXK_REP_XPORT_VDEV_CFG_MZ); + if (!mz) { + plt_err("Failed to lookup a memzone, rep id %d, err %d", + rep_dev->vf_id, rte_errno); + goto fail; + } + + rep_xport_vdev_cfg = mz->addr; + /* Allocate a qid, if rx queue setup already done use the same qid */ + if (rep_dev->u.rxq == UINT16_MAX && rep_dev->u.txq == UINT16_MAX) + rep_dev->u.txq = alloc_rep_xport_qid(rep_xport_vdev_cfg->q_map); + else + rep_dev->u.txq = rep_dev->u.rxq; + + /* Setup the TX queue */ + rc = rte_eth_tx_queue_setup(rep_dev->rep_xport_vdev, rep_dev->u.txq, nb_tx_desc, socket_id, + tx_conf); + if (rc < 0) { + plt_err("TX queue setup failed, err %d port %d", rc, rep_dev->rep_xport_vdev); + goto fail; + } + + ethdev->data->tx_queues[tx_queue_id] = rep_dev; + plt_info("Representor id %d portid %d txq %d", rep_dev->vf_id, ethdev->data->port_id, + rep_dev->u.txq); + return 0; +fail: + return rc; } void cnxk_rep_tx_queue_release(struct rte_eth_dev *ethdev, uint16_t queue_id) { - PLT_SET_USED(ethdev); + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(ethdev); + rep_xport_vdev_cfg_t *rep_xport_vdev_cfg = NULL; + const struct plt_memzone *mz; PLT_SET_USED(queue_id); + + mz = plt_memzone_lookup(CNXK_REP_XPORT_VDEV_CFG_MZ); + if (!mz) { + plt_err("Failed to lookup a memzone, rep id %d, err %d", + rep_dev->vf_id, rte_errno); + return; + } + + rep_xport_vdev_cfg = mz->addr; + plt_bitmap_clear(rep_xport_vdev_cfg->q_bmap_mem, rep_dev->u.txq); +} + +static int +process_eth_stats(struct cnxk_rep_dev *rep_dev, cnxk_rep_msg_ack_data_t *adata, cnxk_rep_msg_t msg) +{ + cnxk_rep_msg_eth_stats_meta_t msg_st_meta; + uint32_t len = 0, rc; + void *buffer; + size_t size; + + size = CNXK_REP_MSG_MAX_BUFFER_SZ; + buffer = plt_zmalloc(size, 0); + if (!buffer) { + plt_err("Failed to allocate mem"); + rc = -ENOMEM; + goto fail; + } + + cnxk_rep_msg_populate_header(buffer, &len); + + msg_st_meta.portid = rep_dev->u.rxq; + cnxk_rep_msg_populate_command_meta(buffer, &len, &msg_st_meta, + sizeof(cnxk_rep_msg_eth_stats_meta_t), msg); + cnxk_rep_msg_populate_msg_end(buffer, &len); + + rc = cnxk_rep_msg_send_process(rep_dev, buffer, len, adata); + if (rc) { + plt_err("Failed to process the message, err %d", rc); + goto fail; + } + + rte_free(buffer); + + return 0; +fail: + rte_free(buffer); + return rc; } int cnxk_rep_stats_get(struct 
rte_eth_dev *ethdev, struct rte_eth_stats *stats) { - PLT_SET_USED(ethdev); - PLT_SET_USED(stats); + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(ethdev); + struct rte_eth_stats vf_stats; + cnxk_rep_msg_ack_data_t adata; + int rc; + + /* If representor not representing any active VF, return 0 */ + if (!rep_dev->is_vf_active) + return 0; + + rc = process_eth_stats(rep_dev, &adata, CNXK_REP_MSG_ETH_STATS_GET); + if (rc || adata.u.sval < 0) { + if (adata.u.sval < 0) + rc = adata.u.sval; + + plt_err("Failed to get stats for vf rep %x, err %d", rep_dev->vf_id, rc); + goto fail; + } + + if (adata.size != sizeof(struct rte_eth_stats)) { + rc = -EINVAL; + plt_err("Incomplete stats received for vf rep %d", rep_dev->vf_id); + goto fail; + } + + rte_memcpy(&vf_stats, adata.u.data, adata.size); + + stats->q_ipackets[0] = vf_stats.ipackets; + stats->q_ibytes[0] = vf_stats.ibytes; + stats->ipackets = vf_stats.ipackets; + stats->ibytes = vf_stats.ibytes; + + stats->q_opackets[0] = vf_stats.opackets; + stats->q_obytes[0] = vf_stats.obytes; + stats->opackets = vf_stats.opackets; + stats->obytes = vf_stats.obytes; + return 0; +fail: + return rc; } int cnxk_rep_stats_reset(struct rte_eth_dev *ethdev) { - PLT_SET_USED(ethdev); - return 0; + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(ethdev); + cnxk_rep_msg_ack_data_t adata; + int rc = 0; + + /* If representor not representing any active VF, return 0 */ + if (!rep_dev->is_vf_active) + return 0; + + rc = process_eth_stats(rep_dev, &adata, CNXK_REP_MSG_ETH_STATS_CLEAR); + if (rc || adata.u.sval < 0) { + if (adata.u.sval < 0) + rc = adata.u.sval; + + plt_err("Failed to clear stats for vf rep %x, err %d", rep_dev->vf_id, rc); + } + + return rc; } int @@ -110,3 +664,54 @@ cnxk_rep_flow_ops_get(struct rte_eth_dev *ethdev, const struct rte_flow_ops **op PLT_SET_USED(ops); return 0; } + +int +cnxk_rep_mac_addr_set(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr) +{ + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(eth_dev); + cnxk_rep_msg_eth_set_mac_meta_t msg_sm_meta; + cnxk_rep_msg_ack_data_t adata; + uint32_t len = 0; + int rc; + void *buffer; + size_t size; + + /* If representor not representing any VF, return 0 */ + if (!rep_dev->is_vf_active) + return 0; + + size = CNXK_REP_MSG_MAX_BUFFER_SZ; + buffer = plt_zmalloc(size, 0); + if (!buffer) { + plt_err("Failed to allocate mem"); + rc = -ENOMEM; + goto fail; + } + + cnxk_rep_msg_populate_header(buffer, &len); + + msg_sm_meta.portid = rep_dev->u.rxq; + rte_memcpy(&msg_sm_meta.addr_bytes, addr->addr_bytes, RTE_ETHER_ADDR_LEN); + cnxk_rep_msg_populate_command_meta(buffer, &len, &msg_sm_meta, + sizeof(cnxk_rep_msg_eth_set_mac_meta_t), + CNXK_REP_MSG_ETH_SET_MAC); + cnxk_rep_msg_populate_msg_end(buffer, &len); + + rc = cnxk_rep_msg_send_process(rep_dev, buffer, len, &adata); + if (rc) { + plt_err("Failed to process the message, err %d", rc); + goto fail; + } + + if (adata.u.sval < 0) { + rc = adata.u.sval; + plt_err("Failed to set mac address, err %d", rc); + goto fail; + } + + rte_free(buffer); + + return 0; +fail: + rte_free(buffer); + return rc; +} From patchwork Fri Aug 11 16:34:17 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Harman Kalra X-Patchwork-Id: 130178 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id C6F3943036; Fri, 11 Aug 2023 18:35:52
+0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 7EBBC4327D; Fri, 11 Aug 2023 18:35:18 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id D8F994327D for ; Fri, 11 Aug 2023 18:35:16 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 37BBmUgq001610 for ; Fri, 11 Aug 2023 09:35:16 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=pfpt0220; bh=leJPPRq1hJLXIBmKpFWiaMH0a9dB7FHyd6mcCT3l4y8=; b=fLSxCm6r+dHZ9d/QVluUdI7qLUZ+y5uEx/zRxIALf/AonmfDyBb7bG9HZZj5FS+svvGK DYEzlSfu4aiYTPo90PFipzAlUntEN94r7nUdlKlfFy48EwK9Hvy2Guwo4pkBsmFOlyne CCV5CQhwY2YRPblkEnGksTRLmJGmAKe61HHSKU7fFWizqkKd2cbrlcyaCq7Yzyj9cKzq VjUHQIs/MTrYAgeULNmBpDxntm+7jzBofVZvSRLp3iGTRypc6wj3dbd93FVsoQcbFGHV sB5tZiVL7gbOyjpFx4A39yiBjdDnxetjCpW/DVZRYnLSYLMgVuqlMnpy/QE5aYPgWE9j Zw== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 3sd8ypb92p-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Fri, 11 Aug 2023 09:35:15 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Fri, 11 Aug 2023 09:35:14 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Fri, 11 Aug 2023 09:35:14 -0700 Received: from localhost.localdomain (unknown [10.29.52.211]) by maili.marvell.com (Postfix) with ESMTP id C7DC23F705F; Fri, 11 Aug 2023 09:35:11 -0700 (PDT) From: Harman Kalra To: , Nithin Dabilpuram , "Kiran Kumar K" , Sunil Kumar Kori , Satha Rao CC: , Harman Kalra Subject: [PATCH 7/9] net/cnxk: representor flow ops Date: Fri, 11 Aug 2023 22:04:17 +0530 Message-ID: <20230811163419.165790-8-hkalra@marvell.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20230811163419.165790-1-hkalra@marvell.com> References: <20230811163419.165790-1-hkalra@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: Cn6biVzL4X0No_h5EiOut6FTtfPJJZha X-Proofpoint-GUID: Cn6biVzL4X0No_h5EiOut6FTtfPJJZha X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.957,Hydra:6.0.591,FMLib:17.11.176.26 definitions=2023-08-11_08,2023-08-10_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Implementing flow operation callbacks for port representors PMD Signed-off-by: Harman Kalra --- drivers/net/cnxk/cnxk_flow.h | 9 +- drivers/net/cnxk/cnxk_rep.h | 3 + drivers/net/cnxk/cnxk_rep_flow.c | 715 +++++++++++++++++++++++++++++++ drivers/net/cnxk/cnxk_rep_msg.h | 58 +++ drivers/net/cnxk/cnxk_rep_ops.c | 3 +- drivers/net/cnxk/meson.build | 1 + 6 files changed, 786 insertions(+), 3 deletions(-) create mode 100644 drivers/net/cnxk/cnxk_rep_flow.c diff --git a/drivers/net/cnxk/cnxk_flow.h b/drivers/net/cnxk/cnxk_flow.h index bb23629819..303002176b 100644 --- a/drivers/net/cnxk/cnxk_flow.h +++ b/drivers/net/cnxk/cnxk_flow.h @@ -16,8 +16,13 @@ struct cnxk_rte_flow_term_info { uint16_t item_size; }; -struct roc_npc_flow *cnxk_flow_create(struct rte_eth_dev *dev, - 
const struct rte_flow_attr *attr, +struct cnxk_rte_flow_action_info { + uint16_t conf_size; +}; + +extern const struct cnxk_rte_flow_term_info term[]; + +struct roc_npc_flow *cnxk_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error); diff --git a/drivers/net/cnxk/cnxk_rep.h b/drivers/net/cnxk/cnxk_rep.h index 2b6403f003..4886527f83 100644 --- a/drivers/net/cnxk/cnxk_rep.h +++ b/drivers/net/cnxk/cnxk_rep.h @@ -15,6 +15,9 @@ /* Common ethdev ops */ extern struct eth_dev_ops cnxk_rep_dev_ops; +/* Flow ops for representor ports */ +extern struct rte_flow_ops cnxk_rep_flow_ops; + /* Representor base device configurations */ typedef struct rep_xport_vdev_cfg_s { struct plt_bitmap *q_map; diff --git a/drivers/net/cnxk/cnxk_rep_flow.c b/drivers/net/cnxk/cnxk_rep_flow.c new file mode 100644 index 0000000000..9e181f5173 --- /dev/null +++ b/drivers/net/cnxk/cnxk_rep_flow.c @@ -0,0 +1,715 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell. + */ + +#include +#include + +#include +#include +#include + +#define DEFAULT_DUMP_FILE_NAME "/tmp/fdump" +#define MAX_BUFFER_SIZE 1500 + +const struct cnxk_rte_flow_action_info action_info[] = { + [RTE_FLOW_ACTION_TYPE_MARK] = {sizeof(struct rte_flow_action_mark)}, + [RTE_FLOW_ACTION_TYPE_VF] = {sizeof(struct rte_flow_action_vf)}, + [RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT] = {sizeof(struct rte_flow_action_port_id)}, + [RTE_FLOW_ACTION_TYPE_PORT_ID] = {sizeof(struct rte_flow_action_port_id)}, + [RTE_FLOW_ACTION_TYPE_QUEUE] = {sizeof(struct rte_flow_action_queue)}, + [RTE_FLOW_ACTION_TYPE_RSS] = {sizeof(struct rte_flow_action_rss)}, + [RTE_FLOW_ACTION_TYPE_SECURITY] = {sizeof(struct rte_flow_action_security)}, + [RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID] = {sizeof(struct rte_flow_action_of_set_vlan_vid)}, + [RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN] = {sizeof(struct rte_flow_action_of_push_vlan)}, + [RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP] = {sizeof(struct rte_flow_action_of_set_vlan_pcp)}, + [RTE_FLOW_ACTION_TYPE_METER] = {sizeof(struct rte_flow_action_meter)}, + [RTE_FLOW_ACTION_TYPE_OF_POP_MPLS] = {sizeof(struct rte_flow_action_of_pop_mpls)}, + [RTE_FLOW_ACTION_TYPE_OF_PUSH_MPLS] = {sizeof(struct rte_flow_action_of_push_mpls)}, + [RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP] = {sizeof(struct rte_flow_action_vxlan_encap)}, + [RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP] = {sizeof(struct rte_flow_action_nvgre_encap)}, + [RTE_FLOW_ACTION_TYPE_RAW_ENCAP] = {sizeof(struct rte_flow_action_raw_encap)}, + [RTE_FLOW_ACTION_TYPE_RAW_DECAP] = {sizeof(struct rte_flow_action_raw_decap)}, + [RTE_FLOW_ACTION_TYPE_COUNT] = {sizeof(struct rte_flow_action_count)}, +}; + +static void +cnxk_flow_params_count(const struct rte_flow_item pattern[], const struct rte_flow_action actions[], + uint16_t *n_pattern, uint16_t *n_action) +{ + int i = 0; + + for (; pattern->type != RTE_FLOW_ITEM_TYPE_END; pattern++) + i++; + + *n_pattern = ++i; + plt_rep_dbg("Total patterns is %d", *n_pattern); + + i = 0; + for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) + i++; + *n_action = ++i; + plt_rep_dbg("Total actions is %d", *n_action); +} + +static void +populate_attr_data(void *buffer, uint32_t *length, const struct rte_flow_attr *attr) +{ + uint32_t sz = sizeof(struct rte_flow_attr); + uint32_t len; + + cnxk_rep_msg_populate_type(buffer, length, CNXK_TYPE_ATTR, sz); + + len = *length; + /* Populate the attribute data */ + rte_memcpy(RTE_PTR_ADD(buffer, len), attr, sz); + len 
+= sz; + + *length = len; +} + +static uint16_t +prepare_pattern_data(const struct rte_flow_item *pattern, uint16_t nb_pattern, + uint64_t *pattern_data) +{ + cnxk_pattern_hdr_t hdr; + uint16_t len = 0; + int i = 0; + + for (i = 0; i < nb_pattern; i++) { + /* Populate the pattern type hdr */ + memset(&hdr, 0, sizeof(cnxk_pattern_hdr_t)); + hdr.type = pattern->type; + if (pattern->spec) { + hdr.spec_sz = term[pattern->type].item_size; + hdr.last_sz = 0; + hdr.mask_sz = term[pattern->type].item_size; + } + + rte_memcpy(RTE_PTR_ADD(pattern_data, len), &hdr, sizeof(cnxk_pattern_hdr_t)); + len += sizeof(cnxk_pattern_hdr_t); + + /* Copy pattern spec data */ + if (pattern->spec) { + rte_memcpy(RTE_PTR_ADD(pattern_data, len), pattern->spec, + term[pattern->type].item_size); + len += term[pattern->type].item_size; + } + + /* Copy pattern last data */ + if (pattern->last) { + rte_memcpy(RTE_PTR_ADD(pattern_data, len), pattern->last, + term[pattern->type].item_size); + len += term[pattern->type].item_size; + } + + /* Copy pattern mask data */ + if (pattern->mask) { + rte_memcpy(RTE_PTR_ADD(pattern_data, len), pattern->mask, + term[pattern->type].item_size); + len += term[pattern->type].item_size; + } + pattern++; + } + + return len; +} + +static void +populate_pattern_data(void *buffer, uint32_t *length, const struct rte_flow_item *pattern, + uint16_t nb_pattern) +{ + uint64_t pattern_data[BUFSIZ]; + uint32_t len; + uint32_t sz; + + /* Prepare pattern_data */ + sz = prepare_pattern_data(pattern, nb_pattern, pattern_data); + + cnxk_rep_msg_populate_type(buffer, length, CNXK_TYPE_PATTERN, sz); + + len = *length; + /* Populate the pattern data */ + rte_memcpy(RTE_PTR_ADD(buffer, len), pattern_data, sz); + len += sz; + + *length = len; +} + +static uint16_t +populate_rss_action_conf(const struct rte_flow_action_rss *conf, void + *rss_action_conf) +{ + int len, sz; + + len = sizeof(struct rte_flow_action_rss) - sizeof(conf->key) - + sizeof(conf->queue); + + if (rss_action_conf) + rte_memcpy(rss_action_conf, conf, len); + + if (conf->key) { + sz = conf->key_len; + if (rss_action_conf) + rte_memcpy(RTE_PTR_ADD(rss_action_conf, len), conf->key, + sz); + len += sz; + } + + if (conf->queue) { + sz = conf->queue_num * sizeof(conf->queue); + if (rss_action_conf) + rte_memcpy(RTE_PTR_ADD(rss_action_conf, len), + conf->queue, sz); + len += sz; + } + + return len; +} + +static uint16_t +prepare_action_data(const struct rte_flow_action *action, uint16_t nb_action, uint64_t *action_data) +{ + void *action_conf_data = NULL; + cnxk_action_hdr_t hdr; + uint16_t len = 0, sz = 0; + int i = 0; + + for (i = 0; i < nb_action; i++) { + if (action->conf) { + if (action->type == RTE_FLOW_ACTION_TYPE_RSS) { + sz = populate_rss_action_conf(action->conf, NULL); + action_conf_data = plt_zmalloc(sz, 0); + if (populate_rss_action_conf(action->conf, + action_conf_data) != sz) { + plt_err("Populating RSS action config failed"); + return 0; + } + } else { + sz = action_info[action->type].conf_size; + action_conf_data = plt_zmalloc(sz, 0); + rte_memcpy(action_conf_data, action->conf, sz); + } + } + + /* Populate the action type hdr */ + memset(&hdr, 0, sizeof(cnxk_action_hdr_t)); + hdr.type = action->type; + hdr.conf_sz = sz; + + rte_memcpy(RTE_PTR_ADD(action_data, len), &hdr, sizeof(cnxk_action_hdr_t)); + len += sizeof(cnxk_action_hdr_t); + + /* Copy action conf data */ + if (action_conf_data) { + rte_memcpy(RTE_PTR_ADD(action_data, len), + action_conf_data, sz); + len += sz; + plt_free(action_conf_data); + action_conf_data = NULL; + } 
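+ /* Each action is serialized as cnxk_action_hdr_t followed by its conf + * payload; RSS conf is flattened (key and queue arrays appended) since + * the original structure embeds pointers. + */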
+ + action++; + } + + return len; +} + +static void +populate_action_data(void *buffer, uint32_t *length, const struct rte_flow_action *action, + uint16_t nb_action) +{ + uint64_t action_data[BUFSIZ]; + uint32_t len; + uint32_t sz; + + /* Prepare action_data */ + sz = prepare_action_data(action, nb_action, action_data); + + cnxk_rep_msg_populate_type(buffer, length, CNXK_TYPE_ACTION, sz); + + len = *length; + /* Populate the action data */ + rte_memcpy(RTE_PTR_ADD(buffer, len), action_data, sz); + len += sz; + + *length = len; +} + +static int +process_flow_destroy(struct cnxk_rep_dev *rep_dev, void *flow, cnxk_rep_msg_ack_data_t *adata) +{ + cnxk_rep_msg_flow_destroy_meta_t msg_fd_meta; + uint32_t len = 0, rc; + void *buffer; + size_t size; + + /* If representor not representing any active VF, return 0 */ + if (!rep_dev->is_vf_active) + return 0; + + size = MAX_BUFFER_SIZE; + buffer = plt_zmalloc(size, 0); + if (!buffer) { + plt_err("Failed to allocate mem"); + rc = -ENOMEM; + goto fail; + } + + cnxk_rep_msg_populate_header(buffer, &len); + + msg_fd_meta.portid = rep_dev->u.rep_portid; + msg_fd_meta.flow = (uint64_t)flow; + plt_rep_dbg("Flow Destroy: flow 0x%lx, portid %d", msg_fd_meta.flow, msg_fd_meta.portid); + cnxk_rep_msg_populate_command_meta(buffer, &len, &msg_fd_meta, + sizeof(cnxk_rep_msg_flow_destroy_meta_t), + CNXK_REP_MSG_FLOW_DESTROY); + cnxk_rep_msg_populate_msg_end(buffer, &len); + + rc = cnxk_rep_msg_send_process(rep_dev, buffer, len, adata); + if (rc) { + plt_err("Failed to process the message, err %d", rc); + goto fail; + } + + return 0; +fail: + return rc; +} + +static int +copy_flow_dump_file(FILE *target) +{ + FILE *source = NULL; + int pos; + char ch; + + source = fopen(DEFAULT_DUMP_FILE_NAME, "r"); + if (source == NULL) { + plt_err("Failed to read default dump file: %s, err %d", DEFAULT_DUMP_FILE_NAME, + errno); + return errno; + } + + fseek(source, 0L, SEEK_END); + pos = ftell(source); + fseek(source, 0L, SEEK_SET); + while (pos--) { + ch = fgetc(source); + fputc(ch, target); + } + + fclose(source); + + /* Remove the default file after reading */ + remove(DEFAULT_DUMP_FILE_NAME); + + return 0; +} + +static int +process_flow_dump(struct cnxk_rep_dev *rep_dev, struct rte_flow *flow, FILE *file, + cnxk_rep_msg_ack_data_t *adata) +{ + cnxk_rep_msg_flow_dump_meta_t msg_fp_meta; + uint32_t len = 0, rc; + void *buffer; + size_t size; + + size = MAX_BUFFER_SIZE; + buffer = plt_zmalloc(size, 0); + if (!buffer) { + plt_err("Failed to allocate mem"); + rc = -ENOMEM; + goto fail; + } + + cnxk_rep_msg_populate_header(buffer, &len); + + msg_fp_meta.portid = rep_dev->u.rep_portid; + msg_fp_meta.flow = (uint64_t)flow; + msg_fp_meta.is_stdout = (file == stdout) ? 
1 : 0; + + plt_rep_dbg("Flow Dump: flow 0x%lx, portid %d stdout %d", msg_fp_meta.flow, + msg_fp_meta.portid, msg_fp_meta.is_stdout); + cnxk_rep_msg_populate_command_meta(buffer, &len, &msg_fp_meta, + sizeof(cnxk_rep_msg_flow_dump_meta_t), + CNXK_REP_MSG_FLOW_DUMP); + cnxk_rep_msg_populate_msg_end(buffer, &len); + + rc = cnxk_rep_msg_send_process(rep_dev, buffer, len, adata); + if (rc) { + plt_err("Failed to process the message, err %d", rc); + goto fail; + } + + /* Copy contents from default file to user file */ + if (file != stdout) + copy_flow_dump_file(file); + + return 0; +fail: + return rc; +} + +static int +process_flow_flush(struct cnxk_rep_dev *rep_dev, cnxk_rep_msg_ack_data_t *adata) +{ + cnxk_rep_msg_flow_flush_meta_t msg_ff_meta; + uint32_t len = 0, rc; + void *buffer; + size_t size; + + size = MAX_BUFFER_SIZE; + buffer = plt_zmalloc(size, 0); + if (!buffer) { + plt_err("Failed to allocate mem"); + rc = -ENOMEM; + goto fail; + } + + cnxk_rep_msg_populate_header(buffer, &len); + + msg_ff_meta.portid = rep_dev->u.rep_portid; + plt_rep_dbg("Flow Flush: portid %d", msg_ff_meta.portid); + cnxk_rep_msg_populate_command_meta(buffer, &len, &msg_ff_meta, + sizeof(cnxk_rep_msg_flow_flush_meta_t), + CNXK_REP_MSG_FLOW_FLUSH); + cnxk_rep_msg_populate_msg_end(buffer, &len); + + rc = cnxk_rep_msg_send_process(rep_dev, buffer, len, adata); + if (rc) { + plt_err("Failed to process the message, err %d", rc); + goto fail; + } + + return 0; +fail: + return rc; +} + +static int +process_flow_query(struct cnxk_rep_dev *rep_dev, struct rte_flow *flow, + const struct rte_flow_action *action, cnxk_rep_msg_ack_data_t *adata) +{ + cnxk_rep_msg_flow_query_meta_t *msg_fq_meta; + uint32_t len = 0, rc, sz, total_sz; + uint64_t action_data[BUFSIZ]; + void *buffer; + size_t size; + + size = MAX_BUFFER_SIZE; + buffer = plt_zmalloc(size, 0); + if (!buffer) { + plt_err("Failed to allocate mem"); + rc = -ENOMEM; + goto fail; + } + + cnxk_rep_msg_populate_header(buffer, &len); + + sz = prepare_action_data(action, 1, action_data); + total_sz = sz + sizeof(cnxk_rep_msg_flow_query_meta_t); + + msg_fq_meta = plt_zmalloc(total_sz, 0); + if (!msg_fq_meta) { + plt_err("Failed to allocate memory"); + rc = -ENOMEM; + goto fail; + } + + msg_fq_meta->portid = rep_dev->u.rep_portid; + msg_fq_meta->flow = (uint64_t)flow; + /* Populate the action data */ + rte_memcpy(msg_fq_meta->action_data, action_data, sz); + msg_fq_meta->action_data_sz = sz; + + plt_rep_dbg("Flow query: flow 0x%lx, portid %d, action type %d total sz %d action sz %d", + msg_fq_meta->flow, msg_fq_meta->portid, action->type, total_sz, sz); + cnxk_rep_msg_populate_command_meta(buffer, &len, msg_fq_meta, total_sz, + CNXK_REP_MSG_FLOW_QUERY); + cnxk_rep_msg_populate_msg_end(buffer, &len); + + rc = cnxk_rep_msg_send_process(rep_dev, buffer, len, adata); + if (rc) { + plt_err("Failed to process the message, err %d", rc); + goto free; + } + + rte_free(msg_fq_meta); + + return 0; + +free: + rte_free(msg_fq_meta); +fail: + return rc; +} + +static int +process_flow_rule(struct cnxk_rep_dev *rep_dev, const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], const struct rte_flow_action actions[], + struct rte_flow_error *error, cnxk_rep_msg_ack_data_t *adata, cnxk_rep_msg_t msg) +{ + cnxk_rep_msg_flow_create_meta_t msg_fc_meta; + uint16_t n_pattern, n_action; + uint32_t len = 0, rc = 0; + void *buffer; + size_t size; + + RTE_SET_USED(error); + size = MAX_BUFFER_SIZE; + buffer = plt_zmalloc(size, 0); + if (!buffer) { + plt_err("Failed to allocate mem"); + 
rc = -ENOMEM; + goto fail; + } + + /* Get number of actions and patterns */ + cnxk_flow_params_count(pattern, actions, &n_pattern, &n_action); + + /* Adding the header */ + cnxk_rep_msg_populate_header(buffer, &len); + + /* Representor port identified as rep_xport queue */ + msg_fc_meta.portid = rep_dev->u.rep_portid; + msg_fc_meta.nb_pattern = n_pattern; + msg_fc_meta.nb_action = n_action; + + cnxk_rep_msg_populate_command_meta(buffer, &len, &msg_fc_meta, + sizeof(cnxk_rep_msg_flow_create_meta_t), msg); + + /* Populate flow create parameters data */ + populate_attr_data(buffer, &len, attr); + populate_pattern_data(buffer, &len, pattern, n_pattern); + populate_action_data(buffer, &len, actions, n_action); + + cnxk_rep_msg_populate_msg_end(buffer, &len); + + rc = cnxk_rep_msg_send_process(rep_dev, buffer, len, adata); + if (rc) { + plt_err("Failed to process the message, err %d", rc); + goto fail; + } + + return 0; +fail: + return rc; +} + +static struct rte_flow * +cnxk_rep_flow_create(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(eth_dev); + struct rte_flow *flow = NULL; + cnxk_rep_msg_ack_data_t adata; + int rc = 0; + + /* If representor not representing any active VF, return 0 */ + if (!rep_dev->is_vf_active) + return 0; + + rc = process_flow_rule(rep_dev, attr, pattern, actions, error, &adata, + CNXK_REP_MSG_FLOW_CREATE); + if (!rc || adata.u.sval < 0) { + if (adata.u.sval < 0) { + rc = (int)adata.u.sval; + rte_flow_error_set(error, adata.u.sval, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "Failed to create flow"); + goto fail; + } + + flow = adata.u.data; + if (!flow) { + rte_flow_error_set(error, adata.u.sval, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "Failed to create flow"); + goto fail; + } + } else { + rte_flow_error_set(error, rc, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "Failed to create flow"); + } + plt_rep_dbg("Flow %p created successfully", adata.u.data); + + return flow; +fail: + return NULL; +} + +static int +cnxk_rep_flow_validate(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(eth_dev); + cnxk_rep_msg_ack_data_t adata; + int rc = 0; + + /* If representor not representing any active VF, return 0 */ + if (!rep_dev->is_vf_active) + return 0; + + rc = process_flow_rule(rep_dev, attr, pattern, actions, error, &adata, + CNXK_REP_MSG_FLOW_VALIDATE); + if (!rc || adata.u.sval < 0) { + if (adata.u.sval < 0) { + rc = (int)adata.u.sval; + rte_flow_error_set(error, adata.u.sval, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "Failed to validate flow"); + goto fail; + } + } else { + rte_flow_error_set(error, rc, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "Failed to validate flow"); + } + + plt_rep_dbg("Flow %p validated successfully", adata.u.data); + +fail: + return rc; +} + +static int +cnxk_rep_flow_destroy(struct rte_eth_dev *eth_dev, struct rte_flow *flow, + struct rte_flow_error *error) +{ + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(eth_dev); + cnxk_rep_msg_ack_data_t adata; + int rc; + + RTE_SET_USED(error); + /* If representor not representing any active VF, return 0 */ + if (!rep_dev->is_vf_active) + return 0; + + rc = process_flow_destroy(rep_dev, flow, &adata); + if (rc || adata.u.sval < 0) { + if (adata.u.sval < 0) + rc
= adata.u.sval; + + rte_flow_error_set(error, rc, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "Failed to destroy flow"); + goto fail; + } + + return 0; +fail: + return rc; +} + +static int +cnxk_rep_flow_query(struct rte_eth_dev *eth_dev, struct rte_flow *flow, + const struct rte_flow_action *action, void *data, struct rte_flow_error *error) +{ + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(eth_dev); + cnxk_rep_msg_ack_data_t adata; + int rc; + + /* If representor not representing any active VF, return 0 */ + if (!rep_dev->is_vf_active) + return 0; + + if (action->type != RTE_FLOW_ACTION_TYPE_COUNT) { + rc = -ENOTSUP; + rte_flow_error_set(error, rc, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "Only COUNT is supported in query"); + goto fail; + } + + rc = process_flow_query(rep_dev, flow, action, &adata); + if (rc || adata.u.sval < 0) { + if (adata.u.sval < 0) + rc = adata.u.sval; + + rte_flow_error_set(error, rc, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "Failed to query the flow"); + goto fail; + } + + rte_memcpy(data, adata.u.data, adata.size); + + return 0; +fail: + return rc; +} + +static int +cnxk_rep_flow_flush(struct rte_eth_dev *eth_dev, struct rte_flow_error *error) +{ + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(eth_dev); + cnxk_rep_msg_ack_data_t adata; + int rc; + + /* If representor not representing any active VF, return 0 */ + if (!rep_dev->is_vf_active) + return 0; + + rc = process_flow_flush(rep_dev, &adata); + if (rc || adata.u.sval < 0) { + if (adata.u.sval < 0) + rc = adata.u.sval; + + rte_flow_error_set(error, rc, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "Failed to flush flows"); + goto fail; + } + + return 0; +fail: + return rc; +} + +static int +cnxk_rep_flow_dev_dump(struct rte_eth_dev *eth_dev, struct rte_flow *flow, FILE *file, + struct rte_flow_error *error) +{ + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(eth_dev); + cnxk_rep_msg_ack_data_t adata; + int rc; + + RTE_SET_USED(error); + /* If representor not representing any active VF, return 0 */ + if (!rep_dev->is_vf_active) + return 0; + + rc = process_flow_dump(rep_dev, flow, file, &adata); + if (rc || adata.u.sval < 0) { + if (adata.u.sval < 0) + rc = adata.u.sval; + + rte_flow_error_set(error, rc, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "Failed to dump flow"); + goto fail; + } + + return 0; +fail: + return rc; +} + +static int +cnxk_rep_flow_isolate(struct rte_eth_dev *eth_dev __rte_unused, int enable __rte_unused, + struct rte_flow_error *error) +{ + /* If isolation were supported, the default mcam + * entry for this port would need to be uninstalled.
+ */ + + rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "Flow isolation not supported"); + + return -rte_errno; +} + +struct rte_flow_ops cnxk_rep_flow_ops = { + .validate = cnxk_rep_flow_validate, + .create = cnxk_rep_flow_create, + .destroy = cnxk_rep_flow_destroy, + .query = cnxk_rep_flow_query, + .flush = cnxk_rep_flow_flush, + .isolate = cnxk_rep_flow_isolate, + .dev_dump = cnxk_rep_flow_dev_dump, +}; diff --git a/drivers/net/cnxk/cnxk_rep_msg.h b/drivers/net/cnxk/cnxk_rep_msg.h index 554122d7f8..23fd72434c 100644 --- a/drivers/net/cnxk/cnxk_rep_msg.h +++ b/drivers/net/cnxk/cnxk_rep_msg.h @@ -12,6 +12,10 @@ typedef enum CNXK_TYPE { CNXK_TYPE_HEADER = 0, CNXK_TYPE_MSG, + CNXK_TYPE_ATTR, + CNXK_TYPE_PATTERN, + CNXK_TYPE_ACTION, + CNXK_TYPE_FLOW } cnxk_type_t; typedef enum CNXK_REP_MSG { @@ -23,6 +27,13 @@ typedef enum CNXK_REP_MSG { CNXK_REP_MSG_ETH_SET_MAC, CNXK_REP_MSG_ETH_STATS_GET, CNXK_REP_MSG_ETH_STATS_CLEAR, + /* Flow operation msgs */ + CNXK_REP_MSG_FLOW_CREATE, + CNXK_REP_MSG_FLOW_DESTROY, + CNXK_REP_MSG_FLOW_VALIDATE, + CNXK_REP_MSG_FLOW_FLUSH, + CNXK_REP_MSG_FLOW_DUMP, + CNXK_REP_MSG_FLOW_QUERY, /* End of messaging sequence */ CNXK_REP_MSG_END, } cnxk_rep_msg_t; @@ -79,6 +90,53 @@ typedef struct cnxk_rep_msg_eth_stats_meta { uint16_t portid; } __rte_packed cnxk_rep_msg_eth_stats_meta_t; +/* Flow create msg meta */ +typedef struct cnxk_rep_msg_flow_create_meta { + uint16_t portid; + uint16_t nb_pattern; + uint16_t nb_action; +} __rte_packed cnxk_rep_msg_flow_create_meta_t; + +/* Flow destroy msg meta */ +typedef struct cnxk_rep_msg_flow_destroy_meta { + uint64_t flow; + uint16_t portid; +} __rte_packed cnxk_rep_msg_flow_destroy_meta_t; + +/* Flow flush msg meta */ +typedef struct cnxk_rep_msg_flow_flush_meta { + uint16_t portid; +} __rte_packed cnxk_rep_msg_flow_flush_meta_t; + +/* Flow dump msg meta */ +typedef struct cnxk_rep_msg_flow_dump_meta { + uint64_t flow; + uint16_t portid; + uint8_t is_stdout; +} __rte_packed cnxk_rep_msg_flow_dump_meta_t; + +/* Flow query msg meta */ +typedef struct cnxk_rep_msg_flow_query_meta { + uint64_t flow; + uint16_t portid; + uint32_t action_data_sz; + uint8_t action_data[]; +} __rte_packed cnxk_rep_msg_flow_query_meta_t; + +/* Type pattern meta */ +typedef struct cnxk_pattern_hdr { + uint16_t type; + uint16_t spec_sz; + uint16_t last_sz; + uint16_t mask_sz; +} __rte_packed cnxk_pattern_hdr_t; + +/* Type action meta */ +typedef struct cnxk_action_hdr { + uint16_t type; + uint16_t conf_sz; +} __rte_packed cnxk_action_hdr_t; + void cnxk_rep_msg_populate_command(void *buffer, uint32_t *length, cnxk_rep_msg_t type, uint32_t size); void cnxk_rep_msg_populate_command_meta(void *buffer, uint32_t *length, void *msg_meta, uint32_t sz, diff --git a/drivers/net/cnxk/cnxk_rep_ops.c b/drivers/net/cnxk/cnxk_rep_ops.c index 022a5137df..c418ecf383 100644 --- a/drivers/net/cnxk/cnxk_rep_ops.c +++ b/drivers/net/cnxk/cnxk_rep_ops.c @@ -661,7 +661,8 @@ int cnxk_rep_flow_ops_get(struct rte_eth_dev *ethdev, const struct rte_flow_ops **ops) { PLT_SET_USED(ethdev); - PLT_SET_USED(ops); + *ops = &cnxk_rep_flow_ops; + return 0; } diff --git a/drivers/net/cnxk/meson.build b/drivers/net/cnxk/meson.build index 0e7334f5cd..d4b1110f38 100644 --- a/drivers/net/cnxk/meson.build +++ b/drivers/net/cnxk/meson.build @@ -35,6 +35,7 @@ sources = files( 'cnxk_rep.c', 'cnxk_rep_msg.c', 'cnxk_rep_ops.c', + 'cnxk_rep_flow.c', 'cnxk_stats.c', 'cnxk_tm.c', ) From patchwork Fri Aug 11 16:34:18 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Harman Kalra X-Patchwork-Id: 130179 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 046A543036; Fri, 11 Aug 2023 18:36:01 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id C6B3943261; Fri, 11 Aug 2023 18:35:23 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 13C1240144 for ; Fri, 11 Aug 2023 18:35:22 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 37BEFxAt020668 for ; Fri, 11 Aug 2023 09:35:22 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=pfpt0220; bh=XBM5zILv64amtO4c9OnklYnL8SzmACRQpuA5PGlzrvM=; b=DK3pPvbZDlK35iaPH8v6/soYE9CtOiUR1wvEdjh94vWQEVYPPDW2oCOhn7kpDYmiHkWS qiLDPZZ+1g1jux3wySv3wWteeylkNlSopOVCi0hq8D7ClaJz6BiNqbUdOwwJHhy5U6gD /CAE5gR7XNp90wuixT8ywG+SQBUZ5Ta3HLqwHBmV5/h/LVixq2E+pmrNh+RVRA0B2xFG zfY4GUPR1Bqh1VdnfaRgfYQ3Mb/v+wPMsWYtOJp/mrsqy4bFGENDmZPuwSg0F1+bxUMr 0eNCUI+wtF1dGHHEHDs7SXOMy6j28qNAJ9h4KoaKASdwQKNu+Grn28gViqs8IvjYZHXe aA== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3sd8ya2qv6-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Fri, 11 Aug 2023 09:35:22 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Fri, 11 Aug 2023 09:35:20 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Fri, 11 Aug 2023 09:35:20 -0700 Received: from localhost.localdomain (unknown [10.29.52.211]) by maili.marvell.com (Postfix) with ESMTP id 01EBE3F7068; Fri, 11 Aug 2023 09:35:14 -0700 (PDT) From: Harman Kalra To: , Nithin Dabilpuram , "Kiran Kumar K" , Sunil Kumar Kori , Satha Rao CC: , Harman Kalra Subject: [PATCH 8/9] common/cnxk: support represented port for cnxk Date: Fri, 11 Aug 2023 22:04:18 +0530 Message-ID: <20230811163419.165790-9-hkalra@marvell.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20230811163419.165790-1-hkalra@marvell.com> References: <20230811163419.165790-1-hkalra@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: jPc9YsfliIhRNFdGNGWalVzOBn8GVGdg X-Proofpoint-GUID: jPc9YsfliIhRNFdGNGWalVzOBn8GVGdg X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.957,Hydra:6.0.591,FMLib:17.11.176.26 definitions=2023-08-11_08,2023-08-10_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Adding represented port item and action support for cnxk device. Flow operations can be performed via representor ports as well as represented ports.
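For illustration, a transfer rule exercising the new item and action could be written from the application side roughly as below. This is a minimal sketch, not part of this patch: the helper name and the represented port ids are hypothetical placeholders, and only the generic rte_flow API is assumed.

#include <rte_flow.h>

/* Hypothetical ethdev port ids of two represented (VF) ports. */
#define SRC_REP_PORT 1
#define DST_REP_PORT 2

static struct rte_flow *
represented_port_rule(uint16_t proxy_port, struct rte_flow_error *error)
{
	/* Transfer attribute: rule is matched at the embedded switch level */
	struct rte_flow_attr attr = { .transfer = 1 };
	struct rte_flow_item_ethdev src = { .port_id = SRC_REP_PORT };
	struct rte_flow_action_ethdev dst = { .port_id = DST_REP_PORT };
	struct rte_flow_item pattern[] = {
		/* Match traffic entering the switch from SRC_REP_PORT */
		{ .type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT, .spec = &src },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		/* Deliver matched traffic to DST_REP_PORT */
		{ .type = RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT, .conf = &dst },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	/* The rule is created through a port in the same switch domain,
	 * e.g. the PF or one of its representor ports. */
	return rte_flow_create(proxy_port, &attr, pattern, actions, error);
}
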
Signed-off-by: Kiran Kumar K Signed-off-by: Harman Kalra --- drivers/common/cnxk/roc_npc.c | 54 ++++++++++++++++------ drivers/common/cnxk/roc_npc.h | 16 ++++++- drivers/common/cnxk/roc_npc_mcam.c | 69 ++++++++++++++--------------- drivers/common/cnxk/roc_npc_parse.c | 14 ++++++ drivers/common/cnxk/roc_npc_priv.h | 1 + drivers/net/cnxk/cnxk_flow.c | 27 +++++++++-- drivers/net/cnxk/cnxk_rep.h | 13 ++++++ 7 files changed, 140 insertions(+), 54 deletions(-) diff --git a/drivers/common/cnxk/roc_npc.c b/drivers/common/cnxk/roc_npc.c index 586bc55791..0a5bc5c2b1 100644 --- a/drivers/common/cnxk/roc_npc.c +++ b/drivers/common/cnxk/roc_npc.c @@ -779,10 +779,14 @@ npc_parse_pattern(struct npc *npc, const struct roc_npc_item_info pattern[], struct roc_npc_flow *flow, struct npc_parse_state *pst) { npc_parse_stage_func_t parse_stage_funcs[] = { - npc_parse_meta_items, npc_parse_mark_item, npc_parse_pre_l2, npc_parse_cpt_hdr, - npc_parse_higig2_hdr, npc_parse_tx_queue, npc_parse_la, npc_parse_lb, - npc_parse_lc, npc_parse_ld, npc_parse_le, npc_parse_lf, - npc_parse_lg, npc_parse_lh, + npc_parse_meta_items, npc_parse_represented_port_id, + npc_parse_mark_item, npc_parse_pre_l2, + npc_parse_cpt_hdr, npc_parse_higig2_hdr, + npc_parse_tx_queue, npc_parse_la, + npc_parse_lb, npc_parse_lc, + npc_parse_ld, npc_parse_le, + npc_parse_lf, npc_parse_lg, + npc_parse_lh, }; uint8_t layer = 0; int key_offset; @@ -843,11 +847,11 @@ npc_parse_attr(struct npc *npc, const struct roc_npc_attr *attr, return NPC_ERR_PARAM; else if (attr->priority >= npc->flow_max_priority) return NPC_ERR_PARAM; - else if ((!attr->egress && !attr->ingress) || - (attr->egress && attr->ingress)) + else if ((!attr->egress && !attr->ingress && !attr->transfer) || + (attr->egress && attr->ingress && attr->transfer)) return NPC_ERR_PARAM; - if (attr->ingress) + if (attr->ingress || attr->transfer) flow->nix_intf = ROC_NPC_INTF_RX; else flow->nix_intf = ROC_NPC_INTF_TX; @@ -1002,15 +1006,18 @@ npc_rss_action_program(struct roc_npc *roc_npc, struct roc_npc_flow *flow) { const struct roc_npc_action_rss *rss; + struct roc_npc *npc = roc_npc; uint32_t rss_grp; uint8_t alg_idx; int rc; + if (flow->has_rep) + npc = roc_npc->rep_npc; + for (; actions->type != ROC_NPC_ACTION_TYPE_END; actions++) { if (actions->type == ROC_NPC_ACTION_TYPE_RSS) { rss = (const struct roc_npc_action_rss *)actions->conf; - rc = npc_rss_action_configure(roc_npc, rss, &alg_idx, - &rss_grp, flow->mcam_id); + rc = npc_rss_action_configure(npc, rss, &alg_idx, &rss_grp, flow->mcam_id); if (rc) return rc; @@ -1448,6 +1455,17 @@ roc_npc_flow_create(struct roc_npc *roc_npc, const struct roc_npc_attr *attr, memset(flow, 0, sizeof(*flow)); memset(&parse_state, 0, sizeof(parse_state)); + flow->port_id = -1; + if (roc_npc->rep_npc) { + flow->rep_channel = roc_nix_to_nix_priv(roc_npc->rep_npc->roc_nix)->rx_chan_base; + flow->rep_pf_func = roc_npc->rep_pf_func; + flow->rep_mbox = roc_npc_to_npc_priv(roc_npc->rep_npc)->mbox; + flow->has_rep = true; + flow->is_rep_vf = !roc_nix_is_pf(roc_npc->rep_npc->roc_nix); + flow->port_id = roc_npc->rep_port_id; + flow->rep_npc = roc_npc_to_npc_priv(roc_npc->rep_npc); + } + parse_state.dst_pf_func = dst_pf_func; rc = npc_parse_rule(roc_npc, attr, pattern, actions, flow, &parse_state); @@ -1475,6 +1493,7 @@ roc_npc_flow_create(struct roc_npc *roc_npc, const struct roc_npc_attr *attr, *errcode = rc; goto set_rss_failed; } + roc_npc->rep_npc = NULL; if (flow->use_pre_alloc == 0) list = &npc->flow_list[flow->priority]; @@ -1484,6 +1503,7 @@ 
roc_npc_flow_create(struct roc_npc *roc_npc, const struct roc_npc_attr *attr, TAILQ_FOREACH(flow_iter, list, next) { if (flow_iter->mcam_id > flow->mcam_id) { TAILQ_INSERT_BEFORE(flow_iter, flow, next); + roc_npc->rep_npc = NULL; return flow; } } @@ -1491,6 +1511,7 @@ roc_npc_flow_create(struct roc_npc *roc_npc, const struct roc_npc_attr *attr, return flow; set_rss_failed: + roc_npc->rep_npc = NULL; if (flow->use_pre_alloc == 0) { rc = roc_npc_mcam_free_entry(roc_npc, flow->mcam_id); if (rc != 0) { @@ -1502,6 +1523,7 @@ roc_npc_flow_create(struct roc_npc *roc_npc, const struct roc_npc_attr *attr, npc_inline_dev_ipsec_action_free(npc, flow); } err_exit: + roc_npc->rep_npc = NULL; plt_free(flow); return NULL; } @@ -1509,15 +1531,19 @@ roc_npc_flow_create(struct roc_npc *roc_npc, const struct roc_npc_attr *attr, int npc_rss_group_free(struct npc *npc, struct roc_npc_flow *flow) { + struct npc *lnpc = npc; uint32_t rss_grp; + if (flow->has_rep) + lnpc = flow->rep_npc; + if ((flow->npc_action & 0xF) == NIX_RX_ACTIONOP_RSS) { rss_grp = (flow->npc_action >> NPC_RSS_ACT_GRP_OFFSET) & NPC_RSS_ACT_GRP_MASK; if (rss_grp == 0 || rss_grp >= npc->rss_grps) return -EINVAL; - plt_bitmap_clear(npc->rss_grp_entries, rss_grp); + plt_bitmap_clear(lnpc->rss_grp_entries, rss_grp); } return 0; @@ -1591,7 +1617,7 @@ roc_npc_flow_destroy(struct roc_npc *roc_npc, struct roc_npc_flow *flow) } void -roc_npc_flow_dump(FILE *file, struct roc_npc *roc_npc) +roc_npc_flow_dump(FILE *file, struct roc_npc *roc_npc, int rep_port_id) { struct npc *npc = roc_npc_to_npc_priv(roc_npc); struct roc_npc_flow *flow_iter; @@ -1605,12 +1631,14 @@ roc_npc_flow_dump(FILE *file, struct roc_npc *roc_npc) /* List in ascending order of mcam entries */ TAILQ_FOREACH(flow_iter, list, next) { - roc_npc_flow_mcam_dump(file, roc_npc, flow_iter); + if (rep_port_id == -1 || rep_port_id == flow_iter->port_id) + roc_npc_flow_mcam_dump(file, roc_npc, flow_iter); } } TAILQ_FOREACH(flow_iter, &npc->ipsec_list, next) { - roc_npc_flow_mcam_dump(file, roc_npc, flow_iter); + if (rep_port_id == -1 || rep_port_id == flow_iter->port_id) + roc_npc_flow_mcam_dump(file, roc_npc, flow_iter); } } diff --git a/drivers/common/cnxk/roc_npc.h b/drivers/common/cnxk/roc_npc.h index 2ada774934..0f50c55175 100644 --- a/drivers/common/cnxk/roc_npc.h +++ b/drivers/common/cnxk/roc_npc.h @@ -41,6 +41,7 @@ enum roc_npc_item_type { ROC_NPC_ITEM_TYPE_MARK, ROC_NPC_ITEM_TYPE_TX_QUEUE, ROC_NPC_ITEM_TYPE_IPV6_ROUTING_EXT, + ROC_NPC_ITEM_TYPE_REPRESENTED_PORT, ROC_NPC_ITEM_TYPE_END, }; @@ -272,7 +273,8 @@ struct roc_npc_attr { uint32_t priority; /**< Rule priority level within group. */ uint32_t ingress : 1; /**< Rule applies to ingress traffic. */ uint32_t egress : 1; /**< Rule applies to egress traffic. */ - uint32_t reserved : 30; /**< Reserved, must be zero. */ + uint32_t transfer : 1; /**< Rule applies to transfer traffic. */ + uint32_t reserved : 29; /**< Reserved, must be zero. 
*/ }; struct roc_npc_flow_dump_data { @@ -312,6 +314,13 @@ struct roc_npc_flow { uint16_t match_id; uint8_t is_inline_dev; bool use_pre_alloc; + uint16_t rep_pf_func; + uint16_t rep_channel; + struct mbox *rep_mbox; + bool has_rep; + bool is_rep_vf; + struct npc *rep_npc; + int port_id; TAILQ_ENTRY(roc_npc_flow) next; }; @@ -366,6 +375,9 @@ struct roc_npc { bool is_sdp_mask_set; uint16_t sdp_channel; uint16_t sdp_channel_mask; + struct roc_npc *rep_npc; + uint16_t rep_pf_func; + int rep_port_id; #define ROC_NPC_MEM_SZ (6 * 1024) uint8_t reserved[ROC_NPC_MEM_SZ]; @@ -401,7 +413,7 @@ int __roc_api roc_npc_mcam_clear_counter(struct roc_npc *roc_npc, uint32_t ctr_i int __roc_api roc_npc_inl_mcam_read_counter(uint32_t ctr_id, uint64_t *count); int __roc_api roc_npc_inl_mcam_clear_counter(uint32_t ctr_id); int __roc_api roc_npc_mcam_free_all_resources(struct roc_npc *roc_npc); -void __roc_api roc_npc_flow_dump(FILE *file, struct roc_npc *roc_npc); +void __roc_api roc_npc_flow_dump(FILE *file, struct roc_npc *roc_npc, int rep_port_id); void __roc_api roc_npc_flow_mcam_dump(FILE *file, struct roc_npc *roc_npc, struct roc_npc_flow *mcam); int __roc_api roc_npc_mark_actions_get(struct roc_npc *roc_npc); diff --git a/drivers/common/cnxk/roc_npc_mcam.c b/drivers/common/cnxk/roc_npc_mcam.c index 62e0ce21b2..d25e82c652 100644 --- a/drivers/common/cnxk/roc_npc_mcam.c +++ b/drivers/common/cnxk/roc_npc_mcam.c @@ -143,8 +143,8 @@ npc_lid_lt_in_kex(struct npc *npc, uint8_t lid, uint8_t lt) } static void -npc_construct_ldata_mask(struct npc *npc, struct plt_bitmap *bmap, uint8_t lid, - uint8_t lt, uint8_t ld) +npc_construct_ldata_mask(struct npc *npc, struct plt_bitmap *bmap, uint8_t lid, uint8_t lt, + uint8_t ld) { struct npc_xtract_info *x_info, *infoflag; int hdr_off, keylen; @@ -197,8 +197,7 @@ npc_construct_ldata_mask(struct npc *npc, struct plt_bitmap *bmap, uint8_t lid, * @param len length of the match */ static bool -npc_is_kex_enabled(struct npc *npc, uint8_t lid, uint8_t lt, int offset, - int len) +npc_is_kex_enabled(struct npc *npc, uint8_t lid, uint8_t lt, int offset, int len) { struct plt_bitmap *bmap; uint32_t bmap_sz; @@ -349,8 +348,8 @@ npc_mcam_alloc_entries(struct mbox *mbox, int ref_mcam, int *alloc_entry, int re } int -npc_mcam_alloc_entry(struct npc *npc, struct roc_npc_flow *mcam, - struct roc_npc_flow *ref_mcam, int prio, int *resp_count) +npc_mcam_alloc_entry(struct npc *npc, struct roc_npc_flow *mcam, struct roc_npc_flow *ref_mcam, + int prio, int *resp_count) { struct npc_mcam_alloc_entry_req *req; struct npc_mcam_alloc_entry_rsp *rsp; @@ -450,22 +449,17 @@ npc_mcam_write_entry(struct mbox *mbox, struct roc_npc_flow *mcam) static void npc_mcam_process_mkex_cfg(struct npc *npc, struct npc_get_kex_cfg_rsp *kex_rsp) { - volatile uint64_t( - *q)[NPC_MAX_INTF][NPC_MAX_LID][NPC_MAX_LT][NPC_MAX_LD]; + volatile uint64_t(*q)[NPC_MAX_INTF][NPC_MAX_LID][NPC_MAX_LT][NPC_MAX_LD]; struct npc_xtract_info *x_info = NULL; int lid, lt, ld, fl, ix; npc_dxcfg_t *p; uint64_t keyw; uint64_t val; - npc->keyx_supp_nmask[NPC_MCAM_RX] = - kex_rsp->rx_keyx_cfg & 0x7fffffffULL; - npc->keyx_supp_nmask[NPC_MCAM_TX] = - kex_rsp->tx_keyx_cfg & 0x7fffffffULL; - npc->keyx_len[NPC_MCAM_RX] = - npc_supp_key_len(npc->keyx_supp_nmask[NPC_MCAM_RX]); - npc->keyx_len[NPC_MCAM_TX] = - npc_supp_key_len(npc->keyx_supp_nmask[NPC_MCAM_TX]); + npc->keyx_supp_nmask[NPC_MCAM_RX] = kex_rsp->rx_keyx_cfg & 0x7fffffffULL; + npc->keyx_supp_nmask[NPC_MCAM_TX] = kex_rsp->tx_keyx_cfg & 0x7fffffffULL; + npc->keyx_len[NPC_MCAM_RX] = 
npc_supp_key_len(npc->keyx_supp_nmask[NPC_MCAM_RX]); + npc->keyx_len[NPC_MCAM_TX] = npc_supp_key_len(npc->keyx_supp_nmask[NPC_MCAM_TX]); keyw = (kex_rsp->rx_keyx_cfg >> 32) & 0x7ULL; npc->keyw[NPC_MCAM_RX] = keyw; @@ -485,8 +479,7 @@ npc_mcam_process_mkex_cfg(struct npc *npc, struct npc_get_kex_cfg_rsp *kex_rsp) /* Update LID, LT and LDATA cfg */ p = &npc->prx_dxcfg; - q = (volatile uint64_t(*)[][NPC_MAX_LID][NPC_MAX_LT][NPC_MAX_LD])( - &kex_rsp->intf_lid_lt_ld); + q = (volatile uint64_t(*)[][NPC_MAX_LID][NPC_MAX_LT][NPC_MAX_LD])(&kex_rsp->intf_lid_lt_ld); for (ix = 0; ix < NPC_MAX_INTF; ix++) { for (lid = 0; lid < NPC_MAX_LID; lid++) { for (lt = 0; lt < NPC_MAX_LT; lt++) { @@ -539,8 +532,7 @@ npc_mcam_fetch_kex_cfg(struct npc *npc) goto done; } - mbox_memcpy((char *)npc->profile_name, kex_rsp->mkex_pfl_name, - MKEX_NAME_LEN); + mbox_memcpy((char *)npc->profile_name, kex_rsp->mkex_pfl_name, MKEX_NAME_LEN); npc->exact_match_ena = (kex_rsp->rx_keyx_cfg >> 40) & 0xF; npc_mcam_process_mkex_cfg(npc, kex_rsp); @@ -551,9 +543,8 @@ npc_mcam_fetch_kex_cfg(struct npc *npc) } static void -npc_mcam_set_channel(struct roc_npc_flow *flow, - struct npc_mcam_write_entry_req *req, uint16_t channel, - uint16_t chan_mask, bool is_second_pass) +npc_mcam_set_channel(struct roc_npc_flow *flow, struct npc_mcam_write_entry_req *req, + uint16_t channel, uint16_t chan_mask, bool is_second_pass) { uint16_t chan = 0, mask = 0; @@ -733,6 +724,14 @@ npc_mcam_alloc_and_write(struct npc *npc, struct roc_npc_flow *flow, struct npc_ npc_mcam_set_channel(flow, req, inl_dev->channel, inl_dev->chan_mask, false); + } else if (flow->has_rep) { + pf_func = flow->rep_pf_func; + req->entry_data.action &= ~(GENMASK(19, 4)); + req->entry_data.action |= (uint64_t)pf_func << 4; + flow->npc_action &= ~(GENMASK(19, 4)); + flow->npc_action |= (uint64_t)pf_func << 4; + npc_mcam_set_channel(flow, req, flow->rep_channel, (BIT_ULL(12) - 1), + false); } else if (npc->is_sdp_link) { npc_mcam_set_channel(flow, req, npc->sdp_channel, npc->sdp_channel_mask, pst->is_second_pass_rule); @@ -789,9 +788,8 @@ npc_set_vlan_ltype(struct npc_parse_state *pst) uint64_t val, mask; uint8_t lb_offset; - lb_offset = - __builtin_popcount(pst->npc->keyx_supp_nmask[pst->nix_intf] & - ((1ULL << NPC_LTYPE_LB_OFFSET) - 1)); + lb_offset = __builtin_popcount(pst->npc->keyx_supp_nmask[pst->nix_intf] & + ((1ULL << NPC_LTYPE_LB_OFFSET) - 1)); lb_offset *= 4; mask = ~((0xfULL << lb_offset)); @@ -811,9 +809,8 @@ npc_set_ipv6ext_ltype_mask(struct npc_parse_state *pst) uint8_t lc_offset, lcflag_offset; uint64_t val, mask; - lc_offset = - __builtin_popcount(pst->npc->keyx_supp_nmask[pst->nix_intf] & - ((1ULL << NPC_LTYPE_LC_OFFSET) - 1)); + lc_offset = __builtin_popcount(pst->npc->keyx_supp_nmask[pst->nix_intf] & + ((1ULL << NPC_LTYPE_LC_OFFSET) - 1)); lc_offset *= 4; mask = ~((0xfULL << lc_offset)); @@ -903,13 +900,11 @@ npc_program_mcam(struct npc *npc, struct npc_parse_state *pst, bool mcam_alloc) data_off = 0; index++; } - key_data[index] |= - ((uint64_t)data << data_off); + key_data[index] |= ((uint64_t)data << data_off); if (lt == 0) mask = 0; - key_mask[index] |= - ((uint64_t)mask << data_off); + key_mask[index] |= ((uint64_t)mask << data_off); data_off += 4; } } @@ -934,8 +929,12 @@ npc_program_mcam(struct npc *npc, struct npc_parse_state *pst, bool mcam_alloc) (pst->flow->npc_action & NIX_RX_ACTIONOP_UCAST_IPSEC)) skip_base_rule = true; - if (pst->is_vf && pst->flow->nix_intf == NIX_INTF_RX && !skip_base_rule) { - mbox = mbox_get(npc->mbox); + if ((pst->is_vf || 
pst->flow->is_rep_vf) && pst->flow->nix_intf == NIX_INTF_RX && + !skip_base_rule) { + if (pst->flow->has_rep) + mbox = mbox_get(pst->flow->rep_mbox); + else + mbox = mbox_get(npc->mbox); (void)mbox_alloc_msg_npc_read_base_steer_rule(mbox); rc = mbox_process_msg(mbox, (void *)&base_rule_rsp); if (rc) { diff --git a/drivers/common/cnxk/roc_npc_parse.c b/drivers/common/cnxk/roc_npc_parse.c index ecd1b3e13b..f59a053761 100644 --- a/drivers/common/cnxk/roc_npc_parse.c +++ b/drivers/common/cnxk/roc_npc_parse.c @@ -35,6 +35,20 @@ npc_parse_mark_item(struct npc_parse_state *pst) return 0; } +int +npc_parse_represented_port_id(struct npc_parse_state *pst) +{ + if (pst->pattern->type != ROC_NPC_ITEM_TYPE_REPRESENTED_PORT) + return 0; + + if (pst->flow->nix_intf != NIX_INTF_RX) + return -EINVAL; + + pst->pattern++; + + return 0; +} + static int npc_flow_raw_item_prepare(const struct roc_npc_flow_item_raw *raw_spec, const struct roc_npc_flow_item_raw *raw_mask, diff --git a/drivers/common/cnxk/roc_npc_priv.h b/drivers/common/cnxk/roc_npc_priv.h index 593dca353b..50c54b895d 100644 --- a/drivers/common/cnxk/roc_npc_priv.h +++ b/drivers/common/cnxk/roc_npc_priv.h @@ -448,6 +448,7 @@ int npc_mask_is_supported(const char *mask, const char *hw_mask, int len); int npc_parse_item_basic(const struct roc_npc_item_info *item, struct npc_parse_item_info *info); int npc_parse_meta_items(struct npc_parse_state *pst); int npc_parse_mark_item(struct npc_parse_state *pst); +int npc_parse_represented_port_id(struct npc_parse_state *pst); int npc_parse_pre_l2(struct npc_parse_state *pst); int npc_parse_higig2_hdr(struct npc_parse_state *pst); int npc_parse_cpt_hdr(struct npc_parse_state *pst); diff --git a/drivers/net/cnxk/cnxk_flow.c b/drivers/net/cnxk/cnxk_flow.c index 970daec035..c08b09338d 100644 --- a/drivers/net/cnxk/cnxk_flow.c +++ b/drivers/net/cnxk/cnxk_flow.c @@ -187,9 +187,28 @@ cnxk_map_actions(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, plt_err("Output port not under same driver"); goto err_exit; } - hw_dst = portid_eth_dev->data->dev_private; - roc_npc_dst = &hw_dst->npc; - *dst_pf_func = roc_npc_dst->pf_func; + + if (cnxk_ethdev_is_representor(if_name)) { + struct rte_eth_dev *rep_eth_dev = portid_eth_dev; + struct rte_flow_action_mark *act_mark; + struct cnxk_rep_dev *rep_dev; + + rep_dev = cnxk_rep_pmd_priv(rep_eth_dev); + *dst_pf_func = rep_dev->pf_func; + plt_rep_dbg("Representor port %d act port %d rep_dev->pf_func 0x%x", + port_act->id, act_ethdev->port_id, rep_dev->pf_func); + + /* Add Mark action */ + i++; + act_mark = plt_zmalloc(sizeof(struct rte_flow_action_mark), 0); + act_mark->id = 1; + in_actions[i].type = ROC_NPC_ACTION_TYPE_MARK; + in_actions[i].conf = (struct rte_flow_action_mark *)act_mark; + } else { + hw_dst = portid_eth_dev->data->dev_private; + roc_npc_dst = &hw_dst->npc; + *dst_pf_func = roc_npc_dst->pf_func; + } break; case RTE_FLOW_ACTION_TYPE_QUEUE: @@ -477,7 +496,7 @@ cnxk_flow_dev_dump(struct rte_eth_dev *eth_dev, struct rte_flow *flow, return -EINVAL; } - roc_npc_flow_dump(file, npc); + roc_npc_flow_dump(file, npc, -1); return 0; } diff --git a/drivers/net/cnxk/cnxk_rep.h b/drivers/net/cnxk/cnxk_rep.h index 4886527f83..1b3de65329 100644 --- a/drivers/net/cnxk/cnxk_rep.h +++ b/drivers/net/cnxk/cnxk_rep.h @@ -6,6 +6,8 @@ #ifndef __CNXK_REP_H__ #define __CNXK_REP_H__ +#include + #define CNXK_REP_XPORT_VDEV_CFG_MZ "rep_xport_vdev_cfg" #define CNXK_REP_XPORT_VDEV_DEVARGS "role=server" #define CNXK_REP_XPORT_VDEV_NAME "net_memif" @@ -64,6 +66,17 @@ 
cnxk_rep_xport_eth_dev(uint16_t portid) return &rte_eth_devices[portid]; } +static inline int +cnxk_ethdev_is_representor(const char *if_name) +{ + regex_t regex; + int val; + + val = regcomp(®ex, "net_.*_representor_.*", 0); + val = regexec(®ex, if_name, 0, NULL, 0); + return (val == 0); +} + /* Prototypes */ int cnxk_rep_dev_probe(struct rte_pci_device *pci_dev, struct rte_eth_dev *pf_ethdev, struct rte_eth_devargs *eth_da); From patchwork Fri Aug 11 16:34:19 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Harman Kalra X-Patchwork-Id: 130180 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id CFE9943036; Fri, 11 Aug 2023 18:36:07 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id CC58843275; Fri, 11 Aug 2023 18:35:26 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 9AB7C40144 for ; Fri, 11 Aug 2023 18:35:23 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 37BEGFrW021082 for ; Fri, 11 Aug 2023 09:35:23 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=pfpt0220; bh=BAwXTObH9Zs0Pyr0Bg0Gv94ZEBDPmDGQfXsrKTn4Z1o=; b=Rx2WZwHJSsq8gxdNb+NyZcfOy9KsBnPI5YrHu7lXud+nqxTfXLtaSYpD0y7vDDct5wAa oSQsIOI38r1UHcfFURPxb546LqSVnOoR3rI49UnHYzro6jzxnMAHjB2Mb9dD/CVFW3AB x3GfG1ICbbkyCFXx1ZQgWyNOy5QzMLNV0+tZhTu83QEZZbJn9Uq/kbtn02B1Sqh4+znS sDb58lOZ49l3nMzTD0vV7f6i5ZQpMYuhmObFNhjl2vdl5HIKHeU9g5oHdWcj7XC3t0r0 rvi7B9jGXvh7iw59dGgGbpHreiKiTrc8xTcUgqqbBVUxaVNwyOHgPX/ZtrBOUp7YCxez lw== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3sd8ya2qv8-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Fri, 11 Aug 2023 09:35:22 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Fri, 11 Aug 2023 09:35:20 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Fri, 11 Aug 2023 09:35:20 -0700 Received: from localhost.localdomain (unknown [10.29.52.211]) by maili.marvell.com (Postfix) with ESMTP id 2FF843F7055; Fri, 11 Aug 2023 09:35:17 -0700 (PDT) From: Harman Kalra To: , Nithin Dabilpuram , "Kiran Kumar K" , Sunil Kumar Kori , Satha Rao CC: , Harman Kalra Subject: [PATCH 9/9] net/cnxk: add represented port for cnxk Date: Fri, 11 Aug 2023 22:04:19 +0530 Message-ID: <20230811163419.165790-10-hkalra@marvell.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20230811163419.165790-1-hkalra@marvell.com> References: <20230811163419.165790-1-hkalra@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: ZDvieZlTOMQgvIP4_1o7C2_4hM_JTfe2 X-Proofpoint-GUID: ZDvieZlTOMQgvIP4_1o7C2_4hM_JTfe2 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.957,Hydra:6.0.591,FMLib:17.11.176.26 definitions=2023-08-11_08,2023-08-10_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: 
DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Adding represented port item matching and action support for cnxk device. Signed-off-by: Kiran Kumar K Signed-off-by: Harman Kalra --- doc/guides/nics/features/cnxk.ini | 1 + doc/guides/nics/features/cnxk_vec.ini | 2 + doc/guides/nics/features/cnxk_vf.ini | 2 + drivers/net/cnxk/cnxk_flow.c | 161 ++++++++++++++++---------- 4 files changed, 107 insertions(+), 59 deletions(-) diff --git a/doc/guides/nics/features/cnxk.ini b/doc/guides/nics/features/cnxk.ini index f5ff692c27..3b74c6739e 100644 --- a/doc/guides/nics/features/cnxk.ini +++ b/doc/guides/nics/features/cnxk.ini @@ -79,6 +79,7 @@ udp = Y vlan = Y vxlan = Y vxlan_gpe = Y +represented_port = Y [rte_flow actions] count = Y diff --git a/doc/guides/nics/features/cnxk_vec.ini b/doc/guides/nics/features/cnxk_vec.ini index e2cac64e4b..b33eaaf5d9 100644 --- a/doc/guides/nics/features/cnxk_vec.ini +++ b/doc/guides/nics/features/cnxk_vec.ini @@ -73,6 +73,7 @@ udp = Y vlan = Y vxlan = Y vxlan_gpe = Y +represented_port = Y [rte_flow actions] count = Y @@ -86,5 +87,6 @@ of_set_vlan_vid = Y pf = Y queue = Y rss = Y +represented_port = Y security = Y vf = Y diff --git a/doc/guides/nics/features/cnxk_vf.ini b/doc/guides/nics/features/cnxk_vf.ini index 5579007831..70d7772b92 100644 --- a/doc/guides/nics/features/cnxk_vf.ini +++ b/doc/guides/nics/features/cnxk_vf.ini @@ -70,6 +70,7 @@ udp = Y vlan = Y vxlan = Y vxlan_gpe = Y +represented_port = Y [rte_flow actions] count = Y @@ -83,6 +84,7 @@ of_set_vlan_pcp = Y of_set_vlan_vid = Y pf = Y queue = Y +represented_port = Y rss = Y security = Y skip_cman = Y diff --git a/drivers/net/cnxk/cnxk_flow.c b/drivers/net/cnxk/cnxk_flow.c index c08b09338d..d8213f92e8 100644 --- a/drivers/net/cnxk/cnxk_flow.c +++ b/drivers/net/cnxk/cnxk_flow.c @@ -4,69 +4,48 @@ #include const struct cnxk_rte_flow_term_info term[] = { - [RTE_FLOW_ITEM_TYPE_ETH] = {ROC_NPC_ITEM_TYPE_ETH, - sizeof(struct rte_flow_item_eth)}, - [RTE_FLOW_ITEM_TYPE_VLAN] = {ROC_NPC_ITEM_TYPE_VLAN, - sizeof(struct rte_flow_item_vlan)}, - [RTE_FLOW_ITEM_TYPE_E_TAG] = {ROC_NPC_ITEM_TYPE_E_TAG, - sizeof(struct rte_flow_item_e_tag)}, - [RTE_FLOW_ITEM_TYPE_IPV4] = {ROC_NPC_ITEM_TYPE_IPV4, - sizeof(struct rte_flow_item_ipv4)}, - [RTE_FLOW_ITEM_TYPE_IPV6] = {ROC_NPC_ITEM_TYPE_IPV6, - sizeof(struct rte_flow_item_ipv6)}, - [RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT] = { - ROC_NPC_ITEM_TYPE_IPV6_FRAG_EXT, - sizeof(struct rte_flow_item_ipv6_frag_ext)}, - [RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4] = { - ROC_NPC_ITEM_TYPE_ARP_ETH_IPV4, - sizeof(struct rte_flow_item_arp_eth_ipv4)}, - [RTE_FLOW_ITEM_TYPE_MPLS] = {ROC_NPC_ITEM_TYPE_MPLS, - sizeof(struct rte_flow_item_mpls)}, - [RTE_FLOW_ITEM_TYPE_ICMP] = {ROC_NPC_ITEM_TYPE_ICMP, - sizeof(struct rte_flow_item_icmp)}, - [RTE_FLOW_ITEM_TYPE_UDP] = {ROC_NPC_ITEM_TYPE_UDP, - sizeof(struct rte_flow_item_udp)}, - [RTE_FLOW_ITEM_TYPE_TCP] = {ROC_NPC_ITEM_TYPE_TCP, - sizeof(struct rte_flow_item_tcp)}, - [RTE_FLOW_ITEM_TYPE_SCTP] = {ROC_NPC_ITEM_TYPE_SCTP, - sizeof(struct rte_flow_item_sctp)}, - [RTE_FLOW_ITEM_TYPE_ESP] = {ROC_NPC_ITEM_TYPE_ESP, - sizeof(struct rte_flow_item_esp)}, - [RTE_FLOW_ITEM_TYPE_GRE] = {ROC_NPC_ITEM_TYPE_GRE, - sizeof(struct rte_flow_item_gre)}, - [RTE_FLOW_ITEM_TYPE_NVGRE] = {ROC_NPC_ITEM_TYPE_NVGRE, - sizeof(struct rte_flow_item_nvgre)}, - [RTE_FLOW_ITEM_TYPE_VXLAN] = {ROC_NPC_ITEM_TYPE_VXLAN, - sizeof(struct rte_flow_item_vxlan)}, - [RTE_FLOW_ITEM_TYPE_GTPC] = {ROC_NPC_ITEM_TYPE_GTPC, - 
sizeof(struct rte_flow_item_gtp)}, - [RTE_FLOW_ITEM_TYPE_GTPU] = {ROC_NPC_ITEM_TYPE_GTPU, - sizeof(struct rte_flow_item_gtp)}, + [RTE_FLOW_ITEM_TYPE_ETH] = {ROC_NPC_ITEM_TYPE_ETH, sizeof(struct rte_flow_item_eth)}, + [RTE_FLOW_ITEM_TYPE_VLAN] = {ROC_NPC_ITEM_TYPE_VLAN, sizeof(struct rte_flow_item_vlan)}, + [RTE_FLOW_ITEM_TYPE_E_TAG] = {ROC_NPC_ITEM_TYPE_E_TAG, sizeof(struct rte_flow_item_e_tag)}, + [RTE_FLOW_ITEM_TYPE_IPV4] = {ROC_NPC_ITEM_TYPE_IPV4, sizeof(struct rte_flow_item_ipv4)}, + [RTE_FLOW_ITEM_TYPE_IPV6] = {ROC_NPC_ITEM_TYPE_IPV6, sizeof(struct rte_flow_item_ipv6)}, + [RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT] = {ROC_NPC_ITEM_TYPE_IPV6_FRAG_EXT, + sizeof(struct rte_flow_item_ipv6_frag_ext)}, + [RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4] = {ROC_NPC_ITEM_TYPE_ARP_ETH_IPV4, + sizeof(struct rte_flow_item_arp_eth_ipv4)}, + [RTE_FLOW_ITEM_TYPE_MPLS] = {ROC_NPC_ITEM_TYPE_MPLS, sizeof(struct rte_flow_item_mpls)}, + [RTE_FLOW_ITEM_TYPE_ICMP] = {ROC_NPC_ITEM_TYPE_ICMP, sizeof(struct rte_flow_item_icmp)}, + [RTE_FLOW_ITEM_TYPE_UDP] = {ROC_NPC_ITEM_TYPE_UDP, sizeof(struct rte_flow_item_udp)}, + [RTE_FLOW_ITEM_TYPE_TCP] = {ROC_NPC_ITEM_TYPE_TCP, sizeof(struct rte_flow_item_tcp)}, + [RTE_FLOW_ITEM_TYPE_SCTP] = {ROC_NPC_ITEM_TYPE_SCTP, sizeof(struct rte_flow_item_sctp)}, + [RTE_FLOW_ITEM_TYPE_ESP] = {ROC_NPC_ITEM_TYPE_ESP, sizeof(struct rte_flow_item_esp)}, + [RTE_FLOW_ITEM_TYPE_GRE] = {ROC_NPC_ITEM_TYPE_GRE, sizeof(struct rte_flow_item_gre)}, + [RTE_FLOW_ITEM_TYPE_NVGRE] = {ROC_NPC_ITEM_TYPE_NVGRE, sizeof(struct rte_flow_item_nvgre)}, + [RTE_FLOW_ITEM_TYPE_VXLAN] = {ROC_NPC_ITEM_TYPE_VXLAN, sizeof(struct rte_flow_item_vxlan)}, + [RTE_FLOW_ITEM_TYPE_GTPC] = {ROC_NPC_ITEM_TYPE_GTPC, sizeof(struct rte_flow_item_gtp)}, + [RTE_FLOW_ITEM_TYPE_GTPU] = {ROC_NPC_ITEM_TYPE_GTPU, sizeof(struct rte_flow_item_gtp)}, [RTE_FLOW_ITEM_TYPE_GENEVE] = {ROC_NPC_ITEM_TYPE_GENEVE, sizeof(struct rte_flow_item_geneve)}, - [RTE_FLOW_ITEM_TYPE_VXLAN_GPE] = { - ROC_NPC_ITEM_TYPE_VXLAN_GPE, - sizeof(struct rte_flow_item_vxlan_gpe)}, + [RTE_FLOW_ITEM_TYPE_VXLAN_GPE] = {ROC_NPC_ITEM_TYPE_VXLAN_GPE, + sizeof(struct rte_flow_item_vxlan_gpe)}, [RTE_FLOW_ITEM_TYPE_IPV6_EXT] = {ROC_NPC_ITEM_TYPE_IPV6_EXT, sizeof(struct rte_flow_item_ipv6_ext)}, [RTE_FLOW_ITEM_TYPE_VOID] = {ROC_NPC_ITEM_TYPE_VOID, 0}, [RTE_FLOW_ITEM_TYPE_ANY] = {ROC_NPC_ITEM_TYPE_ANY, 0}, - [RTE_FLOW_ITEM_TYPE_GRE_KEY] = {ROC_NPC_ITEM_TYPE_GRE_KEY, - sizeof(uint32_t)}, + [RTE_FLOW_ITEM_TYPE_GRE_KEY] = {ROC_NPC_ITEM_TYPE_GRE_KEY, sizeof(uint32_t)}, [RTE_FLOW_ITEM_TYPE_HIGIG2] = {ROC_NPC_ITEM_TYPE_HIGIG2, sizeof(struct rte_flow_item_higig2_hdr)}, - [RTE_FLOW_ITEM_TYPE_RAW] = {ROC_NPC_ITEM_TYPE_RAW, - sizeof(struct rte_flow_item_raw)}, - [RTE_FLOW_ITEM_TYPE_MARK] = {ROC_NPC_ITEM_TYPE_MARK, - sizeof(struct rte_flow_item_mark)}, + [RTE_FLOW_ITEM_TYPE_RAW] = {ROC_NPC_ITEM_TYPE_RAW, sizeof(struct rte_flow_item_raw)}, + [RTE_FLOW_ITEM_TYPE_MARK] = {ROC_NPC_ITEM_TYPE_MARK, sizeof(struct rte_flow_item_mark)}, [RTE_FLOW_ITEM_TYPE_IPV6_ROUTING_EXT] = {ROC_NPC_ITEM_TYPE_IPV6_ROUTING_EXT, - sizeof(struct rte_flow_item_ipv6_routing_ext)}, + sizeof(struct rte_flow_item_ipv6_routing_ext)}, [RTE_FLOW_ITEM_TYPE_TX_QUEUE] = {ROC_NPC_ITEM_TYPE_TX_QUEUE, - sizeof(struct rte_flow_item_tx_queue)}}; + sizeof(struct rte_flow_item_tx_queue)}, + [RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT] = {ROC_NPC_ITEM_TYPE_REPRESENTED_PORT, + sizeof(struct rte_flow_item_ethdev)}}; static int -npc_rss_action_validate(struct rte_eth_dev *eth_dev, - const struct rte_flow_attr *attr, +npc_rss_action_validate(struct rte_eth_dev 
*eth_dev, const struct rte_flow_attr *attr, const struct rte_flow_action *act) { const struct rte_flow_action_rss *rss; @@ -274,28 +253,92 @@ cnxk_map_actions(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, } static int -cnxk_map_flow_data(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, - const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - struct roc_npc_attr *in_attr, struct roc_npc_item_info in_pattern[], - struct roc_npc_action in_actions[], uint32_t *flowkey_cfg, uint16_t *dst_pf_func) +cnxk_map_pattern(struct rte_eth_dev *eth_dev, const struct rte_flow_item pattern[], + struct roc_npc_item_info in_pattern[]) { + struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev); + const struct rte_flow_item_ethdev *rep_eth_dev; + struct rte_eth_dev *portid_eth_dev; + char if_name[RTE_ETH_NAME_MAX_LEN]; + struct cnxk_eth_dev *hw_dst; int i = 0; - in_attr->priority = attr->priority; - in_attr->ingress = attr->ingress; - in_attr->egress = attr->egress; - while (pattern->type != RTE_FLOW_ITEM_TYPE_END) { in_pattern[i].spec = pattern->spec; in_pattern[i].last = pattern->last; in_pattern[i].mask = pattern->mask; in_pattern[i].type = term[pattern->type].item_type; in_pattern[i].size = term[pattern->type].item_size; + if (pattern->type == RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT) { + rep_eth_dev = (const struct rte_flow_item_ethdev *)pattern->spec; + if (rte_eth_dev_get_name_by_port(rep_eth_dev->port_id, if_name)) { + plt_err("Name not found for output port id"); + goto fail; + } + portid_eth_dev = rte_eth_dev_allocated(if_name); + if (!portid_eth_dev) { + plt_err("eth_dev not found for output port id"); + goto fail; + } + if (strcmp(portid_eth_dev->device->driver->name, + eth_dev->device->driver->name) != 0) { + plt_err("Output port not under same driver"); + goto fail; + } + if (cnxk_ethdev_is_representor(if_name)) { + /* Case where represented port not part of same + * app and represented by a representor port. + */ + struct cnxk_rep_dev *rep_dev; + struct cnxk_eth_dev *pf_dev; + + rep_dev = cnxk_rep_pmd_priv(portid_eth_dev); + pf_dev = cnxk_eth_pmd_priv(rep_dev->parent_dev); + dev->npc.rep_npc = &pf_dev->npc; + dev->npc.rep_port_id = rep_eth_dev->port_id; + dev->npc.rep_pf_func = rep_dev->pf_func; + plt_rep_dbg("Represented port %d act port %d rep_dev->pf_func 0x%x", + rep_eth_dev->port_id, eth_dev->data->port_id, + rep_dev->pf_func); + } else { + /* Case where represented port part of same app + * as PF. + */ + hw_dst = portid_eth_dev->data->dev_private; + dev->npc.rep_npc = &hw_dst->npc; + dev->npc.rep_port_id = rep_eth_dev->port_id; + dev->npc.rep_pf_func = hw_dst->npc.pf_func; + } + } pattern++; i++; } in_pattern[i].type = ROC_NPC_ITEM_TYPE_END; + return 0; +fail: + return -EINVAL; +} + +static int +cnxk_map_flow_data(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], const struct rte_flow_action actions[], + struct roc_npc_attr *in_attr, struct roc_npc_item_info in_pattern[], + struct roc_npc_action in_actions[], uint32_t *flowkey_cfg, uint16_t *dst_pf_func) +{ + int rc; + + in_attr->priority = attr->priority; + in_attr->ingress = attr->ingress; + in_attr->egress = attr->egress; + in_attr->transfer = attr->transfer; + + rc = cnxk_map_pattern(eth_dev, pattern, in_pattern); + if (rc) { + plt_err("Failed to map pattern list"); + return rc; + } + return cnxk_map_actions(eth_dev, attr, actions, in_actions, flowkey_cfg, dst_pf_func); }
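As a closing usage note on the dump-path change earlier in the series: roc_npc_flow_dump() now takes a representor port id as a filter, and cnxk_flow_dev_dump() passes -1 to keep the previous dump-everything behaviour. A minimal caller-side sketch (not part of this patch; the wrapper name is hypothetical), assuming npc points at the device's struct roc_npc handle:

/* Sketch only: dump either all MCAM flows (rep_port_id == -1) or just
 * the flows whose roc_npc_flow::port_id matches one representor port. */
static void
dump_flows(FILE *file, struct roc_npc *npc, int rep_port_id)
{
	roc_npc_flow_dump(file, npc, rep_port_id);
}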