From patchwork Tue Feb 8 19:44:36 2022 X-Patchwork-Submitter: Ferruh Yigit X-Patchwork-Id: 107043 X-Patchwork-Delegate: ferruh.yigit@amd.com From: Ferruh Yigit To: Shepard Siegel , Ed Czeck , John 
Miller , Rasesh Mody , Shahed Shaikh , Ajit Khaparde , Somnath Kotur , Nithin Dabilpuram , Kiran Kumar K , Sunil Kumar Kori , Satha Rao , Hemant Agrawal , Sachin Saxena , John Daley , Hyong Youb Kim , "Min Hu (Connor)" , Yisen Zhuang , Lijun Ou , Matan Azrad , Viacheslav Ovsiienko , Gagandeep Singh , Devendra Singh Rawat , Thomas Monjalon , Andrew Rybchenko Cc: dev@dpdk.org, Ferruh Yigit , Ciara Loftus Subject: [PATCH] ethdev: introduce generic dummy packet burst function Date: Tue, 8 Feb 2022 19:44:36 +0000 Message-Id: <20220208194437.426143-1-ferruh.yigit@intel.com> X-Mailer: git-send-email 2.34.1 MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Multiple PMDs have dummy/noop Rx/Tx packet burst functions. These dummy functions are very simple, introduce a common function in the ethdev and update drivers to use it instead of each driver having its own functions. 
Signed-off-by: Ferruh Yigit Acked-by: Morten Brørup Acked-by: Viacheslav Ovsiienko --- Cc: Ciara Loftus --- drivers/net/ark/ark_ethdev.c | 8 ++--- drivers/net/ark/ark_ethdev_rx.c | 9 ----- drivers/net/ark/ark_ethdev_rx.h | 2 -- drivers/net/ark/ark_ethdev_tx.c | 9 ----- drivers/net/ark/ark_ethdev_tx.h | 3 -- drivers/net/bnx2x/bnx2x_rxtx.c | 12 ++----- drivers/net/bnxt/bnxt.h | 4 --- drivers/net/bnxt/bnxt_cpr.c | 4 +-- drivers/net/bnxt/bnxt_rxr.c | 14 -------- drivers/net/bnxt/bnxt_txr.c | 14 -------- drivers/net/cnxk/cnxk_ethdev.c | 14 ++------ drivers/net/dpaa2/dpaa2_ethdev.c | 2 +- drivers/net/dpaa2/dpaa2_ethdev.h | 1 - drivers/net/dpaa2/dpaa2_rxtx.c | 25 -------------- drivers/net/enic/enic.h | 3 -- drivers/net/enic/enic_ethdev.c | 2 +- drivers/net/enic/enic_main.c | 2 +- drivers/net/enic/enic_rxtx.c | 11 ------ drivers/net/hns3/hns3_rxtx.c | 18 +++------- drivers/net/hns3/hns3_rxtx.h | 3 -- drivers/net/mlx4/mlx4.c | 8 ++--- drivers/net/mlx4/mlx4_mp.c | 4 +-- drivers/net/mlx4/mlx4_rxtx.c | 52 ----------------------------- drivers/net/mlx4/mlx4_rxtx.h | 4 --- drivers/net/mlx5/linux/mlx5_mp_os.c | 4 +-- drivers/net/mlx5/linux/mlx5_os.c | 4 +-- drivers/net/mlx5/mlx5.c | 4 +-- drivers/net/mlx5/mlx5_rx.c | 27 +-------------- drivers/net/mlx5/mlx5_rx.h | 2 -- drivers/net/mlx5/mlx5_trigger.c | 4 +-- drivers/net/mlx5/mlx5_tx.c | 25 -------------- drivers/net/mlx5/mlx5_tx.h | 2 -- drivers/net/mlx5/windows/mlx5_os.c | 4 +-- drivers/net/pfe/pfe_ethdev.c | 20 ++--------- drivers/net/qede/qede_ethdev.c | 4 +-- drivers/net/qede/qede_rxtx.c | 9 ----- drivers/net/qede/qede_rxtx.h | 3 -- lib/ethdev/ethdev_driver.h | 19 +++++++++++ 38 files changed, 58 insertions(+), 301 deletions(-) diff --git a/drivers/net/ark/ark_ethdev.c b/drivers/net/ark/ark_ethdev.c index b618cba3f023..230a1272e986 100644 --- a/drivers/net/ark/ark_ethdev.c +++ b/drivers/net/ark/ark_ethdev.c @@ -271,8 +271,8 @@ eth_ark_dev_init(struct rte_eth_dev *dev) dev->data->dev_flags |= 
RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS; /* Use dummy function until setup */ - dev->rx_pkt_burst = &eth_ark_recv_pkts_noop; - dev->tx_pkt_burst = &eth_ark_xmit_pkts_noop; + dev->rx_pkt_burst = rte_eth_pkt_burst_dummy; + dev->tx_pkt_burst = rte_eth_pkt_burst_dummy; ark->bar0 = (uint8_t *)pci_dev->mem_resource[0].addr; ark->a_bar = (uint8_t *)pci_dev->mem_resource[2].addr; @@ -605,8 +605,8 @@ eth_ark_dev_stop(struct rte_eth_dev *dev) if (ark->start_pg) ark_pktgen_pause(ark->pg); - dev->rx_pkt_burst = &eth_ark_recv_pkts_noop; - dev->tx_pkt_burst = &eth_ark_xmit_pkts_noop; + dev->rx_pkt_burst = rte_eth_pkt_burst_dummy; + dev->tx_pkt_burst = rte_eth_pkt_burst_dummy; /* STOP TX Side */ for (i = 0; i < dev->data->nb_tx_queues; i++) { diff --git a/drivers/net/ark/ark_ethdev_rx.c b/drivers/net/ark/ark_ethdev_rx.c index 98658ce621e2..37a88cbedee4 100644 --- a/drivers/net/ark/ark_ethdev_rx.c +++ b/drivers/net/ark/ark_ethdev_rx.c @@ -228,15 +228,6 @@ eth_ark_dev_rx_queue_setup(struct rte_eth_dev *dev, return 0; } -/* ************************************************************************* */ -uint16_t -eth_ark_recv_pkts_noop(void *rx_queue __rte_unused, - struct rte_mbuf **rx_pkts __rte_unused, - uint16_t nb_pkts __rte_unused) -{ - return 0; -} - /* ************************************************************************* */ uint16_t eth_ark_recv_pkts(void *rx_queue, diff --git a/drivers/net/ark/ark_ethdev_rx.h b/drivers/net/ark/ark_ethdev_rx.h index 859fcf1e6f71..f64b3dd137b3 100644 --- a/drivers/net/ark/ark_ethdev_rx.h +++ b/drivers/net/ark/ark_ethdev_rx.h @@ -20,8 +20,6 @@ int eth_ark_dev_rx_queue_setup(struct rte_eth_dev *dev, uint32_t eth_ark_dev_rx_queue_count(void *rx_queue); int eth_ark_rx_stop_queue(struct rte_eth_dev *dev, uint16_t queue_id); int eth_ark_rx_start_queue(struct rte_eth_dev *dev, uint16_t queue_id); -uint16_t eth_ark_recv_pkts_noop(void *rx_queue, struct rte_mbuf **rx_pkts, - uint16_t nb_pkts); uint16_t eth_ark_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, 
uint16_t nb_pkts); void eth_ark_dev_rx_queue_release(void *rx_queue); diff --git a/drivers/net/ark/ark_ethdev_tx.c b/drivers/net/ark/ark_ethdev_tx.c index 676e4115d3bf..abdce6a8cc0d 100644 --- a/drivers/net/ark/ark_ethdev_tx.c +++ b/drivers/net/ark/ark_ethdev_tx.c @@ -105,15 +105,6 @@ eth_ark_tx_desc_fill(struct ark_tx_queue *queue, } -/* ************************************************************************* */ -uint16_t -eth_ark_xmit_pkts_noop(void *vtxq __rte_unused, - struct rte_mbuf **tx_pkts __rte_unused, - uint16_t nb_pkts __rte_unused) -{ - return 0; -} - /* ************************************************************************* */ uint16_t eth_ark_xmit_pkts(void *vtxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) diff --git a/drivers/net/ark/ark_ethdev_tx.h b/drivers/net/ark/ark_ethdev_tx.h index 12c71a7158a9..7134dbfeed81 100644 --- a/drivers/net/ark/ark_ethdev_tx.h +++ b/drivers/net/ark/ark_ethdev_tx.h @@ -10,9 +10,6 @@ #include -uint16_t eth_ark_xmit_pkts_noop(void *vtxq, - struct rte_mbuf **tx_pkts, - uint16_t nb_pkts); uint16_t eth_ark_xmit_pkts(void *vtxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); diff --git a/drivers/net/bnx2x/bnx2x_rxtx.c b/drivers/net/bnx2x/bnx2x_rxtx.c index 66b0512c8695..cb5733c5972b 100644 --- a/drivers/net/bnx2x/bnx2x_rxtx.c +++ b/drivers/net/bnx2x/bnx2x_rxtx.c @@ -465,18 +465,10 @@ bnx2x_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) return nb_rx; } -static uint16_t -bnx2x_rxtx_pkts_dummy(__rte_unused void *p_rxq, - __rte_unused struct rte_mbuf **rx_pkts, - __rte_unused uint16_t nb_pkts) -{ - return 0; -} - void bnx2x_dev_rxtx_init_dummy(struct rte_eth_dev *dev) { - dev->rx_pkt_burst = bnx2x_rxtx_pkts_dummy; - dev->tx_pkt_burst = bnx2x_rxtx_pkts_dummy; + dev->rx_pkt_burst = rte_eth_pkt_burst_dummy; + dev->tx_pkt_burst = rte_eth_pkt_burst_dummy; } void bnx2x_dev_rxtx_init(struct rte_eth_dev *dev) diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h index 433f1c80bee8..851b3bb2be2a 100644 
--- a/drivers/net/bnxt/bnxt.h +++ b/drivers/net/bnxt/bnxt.h @@ -1014,10 +1014,6 @@ void bnxt_print_link_info(struct rte_eth_dev *eth_dev); uint16_t bnxt_rss_hash_tbl_size(const struct bnxt *bp); int bnxt_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_complete); -uint16_t bnxt_dummy_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, - uint16_t nb_pkts); -uint16_t bnxt_dummy_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, - uint16_t nb_pkts); extern const struct rte_flow_ops bnxt_flow_ops; diff --git a/drivers/net/bnxt/bnxt_cpr.c b/drivers/net/bnxt/bnxt_cpr.c index 9b9285b79903..99af0f9e87ee 100644 --- a/drivers/net/bnxt/bnxt_cpr.c +++ b/drivers/net/bnxt/bnxt_cpr.c @@ -408,8 +408,8 @@ bool bnxt_is_recovery_enabled(struct bnxt *bp) void bnxt_stop_rxtx(struct rte_eth_dev *eth_dev) { - eth_dev->rx_pkt_burst = &bnxt_dummy_recv_pkts; - eth_dev->tx_pkt_burst = &bnxt_dummy_xmit_pkts; + eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy; + eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy; rte_eth_fp_ops[eth_dev->data->port_id].rx_pkt_burst = eth_dev->rx_pkt_burst; diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c index b60c2470f39e..5a9cf48e6739 100644 --- a/drivers/net/bnxt/bnxt_rxr.c +++ b/drivers/net/bnxt/bnxt_rxr.c @@ -1147,20 +1147,6 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, return nb_rx_pkts; } -/* - * Dummy DPDK callback for RX. - * - * This function is used to temporarily replace the real callback during - * unsafe control operations on the queue, or in case of error. 
- */ -uint16_t -bnxt_dummy_recv_pkts(void *rx_queue __rte_unused, - struct rte_mbuf **rx_pkts __rte_unused, - uint16_t nb_pkts __rte_unused) -{ - return 0; -} - void bnxt_free_rx_rings(struct bnxt *bp) { int i; diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c index 3b8f2382f92e..7a7196a23731 100644 --- a/drivers/net/bnxt/bnxt_txr.c +++ b/drivers/net/bnxt/bnxt_txr.c @@ -527,20 +527,6 @@ uint16_t bnxt_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, return nb_tx_pkts; } -/* - * Dummy DPDK callback for TX. - * - * This function is used to temporarily replace the real callback during - * unsafe control operations on the queue, or in case of error. - */ -uint16_t -bnxt_dummy_xmit_pkts(void *tx_queue __rte_unused, - struct rte_mbuf **tx_pkts __rte_unused, - uint16_t nb_pkts __rte_unused) -{ - return 0; -} - int bnxt_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) { struct bnxt *bp = dev->data->dev_private; diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c index 53dfb5eae80e..c6a9ada05bb4 100644 --- a/drivers/net/cnxk/cnxk_ethdev.c +++ b/drivers/net/cnxk/cnxk_ethdev.c @@ -942,16 +942,6 @@ nix_restore_queue_cfg(struct rte_eth_dev *eth_dev) return rc; } -static uint16_t -nix_eth_nop_burst(void *queue, struct rte_mbuf **mbufs, uint16_t pkts) -{ - RTE_SET_USED(queue); - RTE_SET_USED(mbufs); - RTE_SET_USED(pkts); - - return 0; -} - static void nix_set_nop_rxtx_function(struct rte_eth_dev *eth_dev) { @@ -962,8 +952,8 @@ nix_set_nop_rxtx_function(struct rte_eth_dev *eth_dev) * which caused app crash since rx/tx burst is still * on different lcores */ - eth_dev->tx_pkt_burst = nix_eth_nop_burst; - eth_dev->rx_pkt_burst = nix_eth_nop_burst; + eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy; + eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy; rte_mb(); } diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c index 379daec5f4e8..5be4fef8fe68 100644 --- a/drivers/net/dpaa2/dpaa2_ethdev.c +++ 
b/drivers/net/dpaa2/dpaa2_ethdev.c @@ -2005,7 +2005,7 @@ dpaa2_dev_set_link_down(struct rte_eth_dev *dev) } /*changing tx burst function to avoid any more enqueues */ - dev->tx_pkt_burst = dummy_dev_tx; + dev->tx_pkt_burst = rte_eth_pkt_burst_dummy; /* Loop while dpni_disable() attempts to drain the egress FQs * and confirm them back to us. diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h index 1b49f43103a7..e79a7fc2e286 100644 --- a/drivers/net/dpaa2/dpaa2_ethdev.h +++ b/drivers/net/dpaa2/dpaa2_ethdev.h @@ -264,7 +264,6 @@ __rte_internal uint16_t dpaa2_dev_tx_multi_txq_ordered(void **queue, struct rte_mbuf **bufs, uint16_t nb_pkts); -uint16_t dummy_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts); void dpaa2_dev_free_eqresp_buf(uint16_t eqresp_ci); void dpaa2_flow_clean(struct rte_eth_dev *dev); uint16_t dpaa2_dev_tx_conf(void *queue) __rte_unused; diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c index 81b28e20cb47..b8844fbdf107 100644 --- a/drivers/net/dpaa2/dpaa2_rxtx.c +++ b/drivers/net/dpaa2/dpaa2_rxtx.c @@ -1802,31 +1802,6 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) return num_tx; } -/** - * Dummy DPDK callback for TX. - * - * This function is used to temporarily replace the real callback during - * unsafe control operations on the queue, or in case of error. - * - * @param dpdk_txq - * Generic pointer to TX queue structure. - * @param[in] pkts - * Packets to transmit. - * @param pkts_n - * Number of packets in array. - * - * @return - * Number of packets successfully transmitted (<= pkts_n). 
- */ -uint16_t -dummy_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) -{ - (void)queue; - (void)bufs; - (void)nb_pkts; - return 0; -} - #if defined(RTE_TOOLCHAIN_GCC) #pragma GCC diagnostic push #pragma GCC diagnostic ignored "-Wcast-qual" diff --git a/drivers/net/enic/enic.h b/drivers/net/enic/enic.h index d5493c98345d..163a1f037e26 100644 --- a/drivers/net/enic/enic.h +++ b/drivers/net/enic/enic.h @@ -426,9 +426,6 @@ uint16_t enic_recv_pkts_64(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); uint16_t enic_noscatter_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); -uint16_t enic_dummy_recv_pkts(void *rx_queue, - struct rte_mbuf **rx_pkts, - uint16_t nb_pkts); uint16_t enic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); uint16_t enic_simple_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c index 163be09809b1..a8d470e8ac93 100644 --- a/drivers/net/enic/enic_ethdev.c +++ b/drivers/net/enic/enic_ethdev.c @@ -538,7 +538,7 @@ static const uint32_t *enicpmd_dev_supported_ptypes_get(struct rte_eth_dev *dev) RTE_PTYPE_UNKNOWN }; - if (dev->rx_pkt_burst != enic_dummy_recv_pkts && + if (dev->rx_pkt_burst != rte_eth_pkt_burst_dummy && dev->rx_pkt_burst != NULL) { struct enic *enic = pmd_priv(dev); if (enic->overlay_offload) diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c index 97d97ea793f2..9f351de72eb4 100644 --- a/drivers/net/enic/enic_main.c +++ b/drivers/net/enic/enic_main.c @@ -1664,7 +1664,7 @@ int enic_set_mtu(struct enic *enic, uint16_t new_mtu) } /* replace Rx function with a no-op to avoid getting stale pkts */ - eth_dev->rx_pkt_burst = enic_dummy_recv_pkts; + eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy; rte_eth_fp_ops[enic->port_id].rx_pkt_burst = eth_dev->rx_pkt_burst; rte_mb(); diff --git a/drivers/net/enic/enic_rxtx.c b/drivers/net/enic/enic_rxtx.c index 74a90694c718..7a66d72275d9 
100644 --- a/drivers/net/enic/enic_rxtx.c +++ b/drivers/net/enic/enic_rxtx.c @@ -31,17 +31,6 @@ #define rte_packet_prefetch(p) do {} while (0) #endif -/* dummy receive function to replace actual function in - * order to do safe reconfiguration operations. - */ -uint16_t -enic_dummy_recv_pkts(__rte_unused void *rx_queue, - __rte_unused struct rte_mbuf **rx_pkts, - __rte_unused uint16_t nb_pkts) -{ - return 0; -} - static inline uint16_t enic_recv_pkts_common(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts, const bool use_64b_desc) diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c index 3b72c2375a60..8dc6cfac704d 100644 --- a/drivers/net/hns3/hns3_rxtx.c +++ b/drivers/net/hns3/hns3_rxtx.c @@ -4383,14 +4383,6 @@ hns3_get_tx_function(struct rte_eth_dev *dev, eth_tx_prep_t *prep) return hns3_xmit_pkts; } -uint16_t -hns3_dummy_rxtx_burst(void *dpdk_txq __rte_unused, - struct rte_mbuf **pkts __rte_unused, - uint16_t pkts_n __rte_unused) -{ - return 0; -} - static void hns3_trace_rxtx_function(struct rte_eth_dev *dev) { @@ -4432,14 +4424,14 @@ hns3_set_rxtx_function(struct rte_eth_dev *eth_dev) eth_dev->rx_pkt_burst = hns3_get_rx_function(eth_dev); eth_dev->rx_descriptor_status = hns3_dev_rx_descriptor_status; eth_dev->tx_pkt_burst = hw->set_link_down ? 
- hns3_dummy_rxtx_burst : + rte_eth_pkt_burst_dummy : hns3_get_tx_function(eth_dev, &prep); eth_dev->tx_pkt_prepare = prep; eth_dev->tx_descriptor_status = hns3_dev_tx_descriptor_status; hns3_trace_rxtx_function(eth_dev); } else { - eth_dev->rx_pkt_burst = hns3_dummy_rxtx_burst; - eth_dev->tx_pkt_burst = hns3_dummy_rxtx_burst; + eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy; + eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy; eth_dev->tx_pkt_prepare = NULL; } @@ -4632,7 +4624,7 @@ hns3_tx_done_cleanup(void *txq, uint32_t free_cnt) if (dev->tx_pkt_burst == hns3_xmit_pkts) return hns3_tx_done_cleanup_full(q, free_cnt); - else if (dev->tx_pkt_burst == hns3_dummy_rxtx_burst) + else if (dev->tx_pkt_burst == rte_eth_pkt_burst_dummy) return 0; else return -ENOTSUP; @@ -4742,7 +4734,7 @@ hns3_enable_rxd_adv_layout(struct hns3_hw *hw) void hns3_stop_tx_datapath(struct rte_eth_dev *dev) { - dev->tx_pkt_burst = hns3_dummy_rxtx_burst; + dev->tx_pkt_burst = rte_eth_pkt_burst_dummy; dev->tx_pkt_prepare = NULL; hns3_eth_dev_fp_ops_config(dev); diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h index 094b65b7de70..a000318357ab 100644 --- a/drivers/net/hns3/hns3_rxtx.h +++ b/drivers/net/hns3/hns3_rxtx.h @@ -729,9 +729,6 @@ void hns3_init_rx_ptype_tble(struct rte_eth_dev *dev); void hns3_set_rxtx_function(struct rte_eth_dev *eth_dev); eth_tx_burst_t hns3_get_tx_function(struct rte_eth_dev *dev, eth_tx_prep_t *prep); -uint16_t hns3_dummy_rxtx_burst(void *dpdk_txq __rte_unused, - struct rte_mbuf **pkts __rte_unused, - uint16_t pkts_n __rte_unused); uint32_t hns3_get_tqp_intr_reg_offset(uint16_t tqp_intr_id); void hns3_set_queue_intr_gl(struct hns3_hw *hw, uint16_t queue_id, diff --git a/drivers/net/mlx4/mlx4.c b/drivers/net/mlx4/mlx4.c index 3f3c4a7c7214..910b76a92c42 100644 --- a/drivers/net/mlx4/mlx4.c +++ b/drivers/net/mlx4/mlx4.c @@ -350,8 +350,8 @@ mlx4_dev_stop(struct rte_eth_dev *dev) return 0; DEBUG("%p: detaching flows from all RX queues", (void *)dev); 
priv->started = 0; - dev->tx_pkt_burst = mlx4_tx_burst_removed; - dev->rx_pkt_burst = mlx4_rx_burst_removed; + dev->tx_pkt_burst = rte_eth_pkt_burst_dummy; + dev->rx_pkt_burst = rte_eth_pkt_burst_dummy; rte_wmb(); /* Disable datapath on secondary process. */ mlx4_mp_req_stop_rxtx(dev); @@ -383,8 +383,8 @@ mlx4_dev_close(struct rte_eth_dev *dev) DEBUG("%p: closing device \"%s\"", (void *)dev, ((priv->ctx != NULL) ? priv->ctx->device->name : "")); - dev->rx_pkt_burst = mlx4_rx_burst_removed; - dev->tx_pkt_burst = mlx4_tx_burst_removed; + dev->rx_pkt_burst = rte_eth_pkt_burst_dummy; + dev->tx_pkt_burst = rte_eth_pkt_burst_dummy; rte_wmb(); /* Disable datapath on secondary process. */ mlx4_mp_req_stop_rxtx(dev); diff --git a/drivers/net/mlx4/mlx4_mp.c b/drivers/net/mlx4/mlx4_mp.c index 8fcfb5490ee9..1da64910aadd 100644 --- a/drivers/net/mlx4/mlx4_mp.c +++ b/drivers/net/mlx4/mlx4_mp.c @@ -150,8 +150,8 @@ mp_secondary_handle(const struct rte_mp_msg *mp_msg, const void *peer) break; case MLX4_MP_REQ_STOP_RXTX: INFO("port %u stopping datapath", dev->data->port_id); - dev->tx_pkt_burst = mlx4_tx_burst_removed; - dev->rx_pkt_burst = mlx4_rx_burst_removed; + dev->tx_pkt_burst = rte_eth_pkt_burst_dummy; + dev->rx_pkt_burst = rte_eth_pkt_burst_dummy; rte_mb(); mp_init_msg(dev, &mp_res, param->type); res->result = 0; diff --git a/drivers/net/mlx4/mlx4_rxtx.c b/drivers/net/mlx4/mlx4_rxtx.c index ed9e41fcdea9..059e432a63fc 100644 --- a/drivers/net/mlx4/mlx4_rxtx.c +++ b/drivers/net/mlx4/mlx4_rxtx.c @@ -1338,55 +1338,3 @@ mlx4_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n) rxq->stats.ipackets += i; return i; } - -/** - * Dummy DPDK callback for Tx. - * - * This function is used to temporarily replace the real callback during - * unsafe control operations on the queue, or in case of error. - * - * @param dpdk_txq - * Generic pointer to Tx queue structure. - * @param[in] pkts - * Packets to transmit. - * @param pkts_n - * Number of packets in array. 
- * - * @return - * Number of packets successfully transmitted (<= pkts_n). - */ -uint16_t -mlx4_tx_burst_removed(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n) -{ - (void)dpdk_txq; - (void)pkts; - (void)pkts_n; - rte_mb(); - return 0; -} - -/** - * Dummy DPDK callback for Rx. - * - * This function is used to temporarily replace the real callback during - * unsafe control operations on the queue, or in case of error. - * - * @param dpdk_rxq - * Generic pointer to Rx queue structure. - * @param[out] pkts - * Array to store received packets. - * @param pkts_n - * Maximum number of packets in array. - * - * @return - * Number of packets successfully received (<= pkts_n). - */ -uint16_t -mlx4_rx_burst_removed(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n) -{ - (void)dpdk_rxq; - (void)pkts; - (void)pkts_n; - rte_mb(); - return 0; -} diff --git a/drivers/net/mlx4/mlx4_rxtx.h b/drivers/net/mlx4/mlx4_rxtx.h index 83e9534cd0a7..70f3cd868058 100644 --- a/drivers/net/mlx4/mlx4_rxtx.h +++ b/drivers/net/mlx4/mlx4_rxtx.h @@ -149,10 +149,6 @@ uint16_t mlx4_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n); uint16_t mlx4_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n); -uint16_t mlx4_tx_burst_removed(void *dpdk_txq, struct rte_mbuf **pkts, - uint16_t pkts_n); -uint16_t mlx4_rx_burst_removed(void *dpdk_rxq, struct rte_mbuf **pkts, - uint16_t pkts_n); /* mlx4_txq.c */ diff --git a/drivers/net/mlx5/linux/mlx5_mp_os.c b/drivers/net/mlx5/linux/mlx5_mp_os.c index c448a3e9eb87..e607089e0e20 100644 --- a/drivers/net/mlx5/linux/mlx5_mp_os.c +++ b/drivers/net/mlx5/linux/mlx5_mp_os.c @@ -192,8 +192,8 @@ struct rte_mp_msg mp_res; break; case MLX5_MP_REQ_STOP_RXTX: DRV_LOG(INFO, "port %u stopping datapath", dev->data->port_id); - dev->rx_pkt_burst = removed_rx_burst; - dev->tx_pkt_burst = removed_tx_burst; + dev->rx_pkt_burst = rte_eth_pkt_burst_dummy; + dev->tx_pkt_burst = rte_eth_pkt_burst_dummy; rte_mb(); mp_init_msg(&priv->mp_id, 
&mp_res, param->type); res->result = 0; diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index aecdc5a68abb..bbe05bb837e0 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -1623,8 +1623,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, DRV_LOG(DEBUG, "port %u MTU is %u", eth_dev->data->port_id, priv->mtu); /* Initialize burst functions to prevent crashes before link-up. */ - eth_dev->rx_pkt_burst = removed_rx_burst; - eth_dev->tx_pkt_burst = removed_tx_burst; + eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy; + eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy; eth_dev->dev_ops = &mlx5_dev_ops; eth_dev->rx_descriptor_status = mlx5_rx_descriptor_status; eth_dev->tx_descriptor_status = mlx5_tx_descriptor_status; diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 67eda41a60a5..5571e9067787 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1559,8 +1559,8 @@ mlx5_dev_close(struct rte_eth_dev *dev) mlx5_action_handle_flush(dev); mlx5_flow_meter_flush(dev, NULL); /* Prevent crashes when queues are still in use. */ - dev->rx_pkt_burst = removed_rx_burst; - dev->tx_pkt_burst = removed_tx_burst; + dev->rx_pkt_burst = rte_eth_pkt_burst_dummy; + dev->tx_pkt_burst = rte_eth_pkt_burst_dummy; rte_wmb(); /* Disable datapath on secondary process. */ mlx5_mp_os_req_stop_rxtx(dev); diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c index f388fcc31395..11ea935d72f0 100644 --- a/drivers/net/mlx5/mlx5_rx.c +++ b/drivers/net/mlx5/mlx5_rx.c @@ -252,7 +252,7 @@ mlx5_rx_queue_count(void *rx_queue) dev = &rte_eth_devices[rxq->port_id]; if (dev->rx_pkt_burst == NULL || - dev->rx_pkt_burst == removed_rx_burst) { + dev->rx_pkt_burst == rte_eth_pkt_burst_dummy) { rte_errno = ENOTSUP; return -rte_errno; } @@ -1153,31 +1153,6 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n) return i; } -/** - * Dummy DPDK callback for RX. 
- * - * This function is used to temporarily replace the real callback during - * unsafe control operations on the queue, or in case of error. - * - * @param dpdk_rxq - * Generic pointer to RX queue structure. - * @param[out] pkts - * Array to store received packets. - * @param pkts_n - * Maximum number of packets in array. - * - * @return - * Number of packets successfully received (<= pkts_n). - */ -uint16_t -removed_rx_burst(void *dpdk_rxq __rte_unused, - struct rte_mbuf **pkts __rte_unused, - uint16_t pkts_n __rte_unused) -{ - rte_mb(); - return 0; -} - /* * Vectorized Rx routines are not compiled in when required vector instructions * are not supported on a target architecture. diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h index cb5d51340db7..7e417819f7e8 100644 --- a/drivers/net/mlx5/mlx5_rx.h +++ b/drivers/net/mlx5/mlx5_rx.h @@ -275,8 +275,6 @@ __rte_noinline int mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec); void mlx5_mprq_buf_free(struct mlx5_mprq_buf *buf); uint16_t mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n); -uint16_t removed_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, - uint16_t pkts_n); int mlx5_rx_descriptor_status(void *rx_queue, uint16_t offset); uint32_t mlx5_rx_queue_count(void *rx_queue); void mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c index 74c9c0a4fff8..3a59237b1a7a 100644 --- a/drivers/net/mlx5/mlx5_trigger.c +++ b/drivers/net/mlx5/mlx5_trigger.c @@ -1244,8 +1244,8 @@ mlx5_dev_stop(struct rte_eth_dev *dev) dev->data->dev_started = 0; /* Prevent crashes when queues are still in use. */ - dev->rx_pkt_burst = removed_rx_burst; - dev->tx_pkt_burst = removed_tx_burst; + dev->rx_pkt_burst = rte_eth_pkt_burst_dummy; + dev->tx_pkt_burst = rte_eth_pkt_burst_dummy; rte_wmb(); /* Disable datapath on secondary process. 
*/ mlx5_mp_os_req_stop_rxtx(dev); diff --git a/drivers/net/mlx5/mlx5_tx.c b/drivers/net/mlx5/mlx5_tx.c index fd2cf2096753..8453b2701a9f 100644 --- a/drivers/net/mlx5/mlx5_tx.c +++ b/drivers/net/mlx5/mlx5_tx.c @@ -135,31 +135,6 @@ mlx5_tx_error_cqe_handle(struct mlx5_txq_data *__rte_restrict txq, return 0; } -/** - * Dummy DPDK callback for TX. - * - * This function is used to temporarily replace the real callback during - * unsafe control operations on the queue, or in case of error. - * - * @param dpdk_txq - * Generic pointer to TX queue structure. - * @param[in] pkts - * Packets to transmit. - * @param pkts_n - * Number of packets in array. - * - * @return - * Number of packets successfully transmitted (<= pkts_n). - */ -uint16_t -removed_tx_burst(void *dpdk_txq __rte_unused, - struct rte_mbuf **pkts __rte_unused, - uint16_t pkts_n __rte_unused) -{ - rte_mb(); - return 0; -} - /** * Update completion queue consuming index via doorbell * and flush the completed data buffers. diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h index 099e72935a3a..31eb0a1ce28e 100644 --- a/drivers/net/mlx5/mlx5_tx.h +++ b/drivers/net/mlx5/mlx5_tx.h @@ -221,8 +221,6 @@ void mlx5_txq_dynf_timestamp_set(struct rte_eth_dev *dev); /* mlx5_tx.c */ -uint16_t removed_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, - uint16_t pkts_n); void mlx5_tx_handle_completion(struct mlx5_txq_data *__rte_restrict txq, unsigned int olx __rte_unused); int mlx5_tx_descriptor_status(void *tx_queue, uint16_t offset); diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c index ac0af0ff7d43..7f3532426f1f 100644 --- a/drivers/net/mlx5/windows/mlx5_os.c +++ b/drivers/net/mlx5/windows/mlx5_os.c @@ -574,8 +574,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, DRV_LOG(DEBUG, "port %u MTU is %u.", eth_dev->data->port_id, priv->mtu); /* Initialize burst functions to prevent crashes before link-up. 
*/ - eth_dev->rx_pkt_burst = removed_rx_burst; - eth_dev->tx_pkt_burst = removed_tx_burst; + eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy; + eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy; eth_dev->dev_ops = &mlx5_dev_ops; eth_dev->rx_descriptor_status = mlx5_rx_descriptor_status; eth_dev->tx_descriptor_status = mlx5_tx_descriptor_status; diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c index edf32aa70da6..c2991ab1ccaa 100644 --- a/drivers/net/pfe/pfe_ethdev.c +++ b/drivers/net/pfe/pfe_ethdev.c @@ -235,22 +235,6 @@ pfe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) return nb_pkts; } -static uint16_t -pfe_dummy_xmit_pkts(__rte_unused void *tx_queue, - __rte_unused struct rte_mbuf **tx_pkts, - __rte_unused uint16_t nb_pkts) -{ - return 0; -} - -static uint16_t -pfe_dummy_recv_pkts(__rte_unused void *rxq, - __rte_unused struct rte_mbuf **rx_pkts, - __rte_unused uint16_t nb_pkts) -{ - return 0; -} - static int pfe_eth_open(struct rte_eth_dev *dev) { @@ -383,8 +367,8 @@ pfe_eth_stop(struct rte_eth_dev *dev/*, int wake*/) gemac_disable(priv->EMAC_baseaddr); gpi_disable(priv->GPI_baseaddr); - dev->rx_pkt_burst = &pfe_dummy_recv_pkts; - dev->tx_pkt_burst = &pfe_dummy_xmit_pkts; + dev->rx_pkt_burst = rte_eth_pkt_burst_dummy; + dev->tx_pkt_burst = rte_eth_pkt_burst_dummy; return 0; } diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c index a1122a297e6b..ea6b71f09355 100644 --- a/drivers/net/qede/qede_ethdev.c +++ b/drivers/net/qede/qede_ethdev.c @@ -322,8 +322,8 @@ qede_assign_rxtx_handlers(struct rte_eth_dev *dev, bool is_dummy) bool use_tx_offload = false; if (is_dummy) { - dev->rx_pkt_burst = qede_rxtx_pkts_dummy; - dev->tx_pkt_burst = qede_rxtx_pkts_dummy; + dev->rx_pkt_burst = rte_eth_pkt_burst_dummy; + dev->tx_pkt_burst = rte_eth_pkt_burst_dummy; return; } diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c index 7088c57b501d..85784f4a82a6 100644 --- 
a/drivers/net/qede/qede_rxtx.c +++ b/drivers/net/qede/qede_rxtx.c @@ -2734,15 +2734,6 @@ qede_xmit_pkts_cmt(void *p_fp_cmt, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) return eng0_pkts + eng1_pkts; } -uint16_t -qede_rxtx_pkts_dummy(__rte_unused void *p_rxq, - __rte_unused struct rte_mbuf **pkts, - __rte_unused uint16_t nb_pkts) -{ - return 0; -} - - /* this function does a fake walk through over completion queue * to calculate number of BDs used by HW. * At the end, it restores the state of completion queue. diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h index 11ed1d9b9c50..013a4a07c716 100644 --- a/drivers/net/qede/qede_rxtx.h +++ b/drivers/net/qede/qede_rxtx.h @@ -272,9 +272,6 @@ uint16_t qede_recv_pkts_cmt(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); -uint16_t qede_rxtx_pkts_dummy(void *p_rxq, - struct rte_mbuf **pkts, - uint16_t nb_pkts); int qede_start_queues(struct rte_eth_dev *eth_dev); diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h index 8f0ac0adf0ae..075f97a4b37a 100644 --- a/lib/ethdev/ethdev_driver.h +++ b/lib/ethdev/ethdev_driver.h @@ -1432,6 +1432,25 @@ rte_eth_linkstatus_get(const struct rte_eth_dev *dev, *dst = __atomic_load_n(src, __ATOMIC_SEQ_CST); } +/** + * @internal + * Dummy DPDK callback for Rx/Tx packet burst. + * + * @param queue + * Pointer to Rx/Tx queue + * @param pkts + * Packet array + * @param nb_pkts + * Number of packets in packet array + */ +static inline uint16_t +rte_eth_pkt_burst_dummy(void *queue __rte_unused, + struct rte_mbuf **pkts __rte_unused, + uint16_t nb_pkts __rte_unused) +{ + return 0; +} + /** * Allocate an unique switch domain identifier. 
* From patchwork Fri Feb 11 17:14:41 2022 X-Patchwork-Submitter: Ferruh Yigit X-Patchwork-Id: 107380 X-Patchwork-Delegate: ferruh.yigit@amd.com From: Ferruh Yigit To: 
Thomas Monjalon , Andrew Rybchenko , Anatoly Burakov Cc: dev@dpdk.org, Ferruh Yigit Subject: [PATCH v3 2/2] ethdev: move driver interface functions to its own file Date: Fri, 11 Feb 2022 17:14:41 +0000 Message-Id: <20220211171441.2717010-2-ferruh.yigit@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20220211171441.2717010-1-ferruh.yigit@intel.com> References: <20220208194437.426143-1-ferruh.yigit@intel.com> <20220211171441.2717010-1-ferruh.yigit@intel.com> Relevant functions moved to ethdev_driver.c. No functional change. Signed-off-by: Ferruh Yigit Acked-by: Thomas Monjalon --- lib/ethdev/ethdev_driver.c | 758 ++++++++++++++++++++++++++++++ lib/ethdev/ethdev_private.c | 131 ++++++ lib/ethdev/ethdev_private.h | 36 ++ lib/ethdev/rte_ethdev.c | 901 ------------------------------------ 4 files changed, 925 insertions(+), 901 deletions(-) diff --git a/lib/ethdev/ethdev_driver.c b/lib/ethdev/ethdev_driver.c index fb7323f4d327..e0ea30be5fe9 100644 --- a/lib/ethdev/ethdev_driver.c +++ b/lib/ethdev/ethdev_driver.c @@ -2,7 +2,633 @@ * Copyright(c) 2022 Intel Corporation */ +#include +#include + #include "ethdev_driver.h" +#include "ethdev_private.h" + +/** + * A set of values to describe the possible states of a switch domain. + */ +enum rte_eth_switch_domain_state { + RTE_ETH_SWITCH_DOMAIN_UNUSED = 0, + RTE_ETH_SWITCH_DOMAIN_ALLOCATED +}; + +/** + * Array of switch domains available for allocation. Array is sized to + * RTE_MAX_ETHPORTS elements as there cannot be more active switch domains than + * ethdev ports in a single process. 
+ */ +static struct rte_eth_dev_switch { + enum rte_eth_switch_domain_state state; +} eth_dev_switch_domains[RTE_MAX_ETHPORTS]; + +static struct rte_eth_dev * +eth_dev_allocated(const char *name) +{ + uint16_t i; + + RTE_BUILD_BUG_ON(RTE_MAX_ETHPORTS >= UINT16_MAX); + + for (i = 0; i < RTE_MAX_ETHPORTS; i++) { + if (rte_eth_devices[i].data != NULL && + strcmp(rte_eth_devices[i].data->name, name) == 0) + return &rte_eth_devices[i]; + } + return NULL; +} + +static uint16_t +eth_dev_find_free_port(void) +{ + uint16_t i; + + for (i = 0; i < RTE_MAX_ETHPORTS; i++) { + /* Using shared name field to find a free port. */ + if (eth_dev_shared_data->data[i].name[0] == '\0') { + RTE_ASSERT(rte_eth_devices[i].state == + RTE_ETH_DEV_UNUSED); + return i; + } + } + return RTE_MAX_ETHPORTS; +} + +static struct rte_eth_dev * +eth_dev_get(uint16_t port_id) +{ + struct rte_eth_dev *eth_dev = &rte_eth_devices[port_id]; + + eth_dev->data = ð_dev_shared_data->data[port_id]; + + return eth_dev; +} + +struct rte_eth_dev * +rte_eth_dev_allocate(const char *name) +{ + uint16_t port_id; + struct rte_eth_dev *eth_dev = NULL; + size_t name_len; + + name_len = strnlen(name, RTE_ETH_NAME_MAX_LEN); + if (name_len == 0) { + RTE_ETHDEV_LOG(ERR, "Zero length Ethernet device name\n"); + return NULL; + } + + if (name_len >= RTE_ETH_NAME_MAX_LEN) { + RTE_ETHDEV_LOG(ERR, "Ethernet device name is too long\n"); + return NULL; + } + + eth_dev_shared_data_prepare(); + + /* Synchronize port creation between primary and secondary threads. 
*/ + rte_spinlock_lock(ð_dev_shared_data->ownership_lock); + + if (eth_dev_allocated(name) != NULL) { + RTE_ETHDEV_LOG(ERR, + "Ethernet device with name %s already allocated\n", + name); + goto unlock; + } + + port_id = eth_dev_find_free_port(); + if (port_id == RTE_MAX_ETHPORTS) { + RTE_ETHDEV_LOG(ERR, + "Reached maximum number of Ethernet ports\n"); + goto unlock; + } + + eth_dev = eth_dev_get(port_id); + strlcpy(eth_dev->data->name, name, sizeof(eth_dev->data->name)); + eth_dev->data->port_id = port_id; + eth_dev->data->backer_port_id = RTE_MAX_ETHPORTS; + eth_dev->data->mtu = RTE_ETHER_MTU; + pthread_mutex_init(ð_dev->data->flow_ops_mutex, NULL); + +unlock: + rte_spinlock_unlock(ð_dev_shared_data->ownership_lock); + + return eth_dev; +} + +struct rte_eth_dev * +rte_eth_dev_allocated(const char *name) +{ + struct rte_eth_dev *ethdev; + + eth_dev_shared_data_prepare(); + + rte_spinlock_lock(ð_dev_shared_data->ownership_lock); + + ethdev = eth_dev_allocated(name); + + rte_spinlock_unlock(ð_dev_shared_data->ownership_lock); + + return ethdev; +} + +/* + * Attach to a port already registered by the primary process, which + * makes sure that the same device would have the same port ID both + * in the primary and secondary process. + */ +struct rte_eth_dev * +rte_eth_dev_attach_secondary(const char *name) +{ + uint16_t i; + struct rte_eth_dev *eth_dev = NULL; + + eth_dev_shared_data_prepare(); + + /* Synchronize port attachment to primary port creation and release. 
*/ + rte_spinlock_lock(ð_dev_shared_data->ownership_lock); + + for (i = 0; i < RTE_MAX_ETHPORTS; i++) { + if (strcmp(eth_dev_shared_data->data[i].name, name) == 0) + break; + } + if (i == RTE_MAX_ETHPORTS) { + RTE_ETHDEV_LOG(ERR, + "Device %s is not driven by the primary process\n", + name); + } else { + eth_dev = eth_dev_get(i); + RTE_ASSERT(eth_dev->data->port_id == i); + } + + rte_spinlock_unlock(ð_dev_shared_data->ownership_lock); + return eth_dev; +} + +int +rte_eth_dev_callback_process(struct rte_eth_dev *dev, + enum rte_eth_event_type event, void *ret_param) +{ + struct rte_eth_dev_callback *cb_lst; + struct rte_eth_dev_callback dev_cb; + int rc = 0; + + rte_spinlock_lock(ð_dev_cb_lock); + TAILQ_FOREACH(cb_lst, &(dev->link_intr_cbs), next) { + if (cb_lst->cb_fn == NULL || cb_lst->event != event) + continue; + dev_cb = *cb_lst; + cb_lst->active = 1; + if (ret_param != NULL) + dev_cb.ret_param = ret_param; + + rte_spinlock_unlock(ð_dev_cb_lock); + rc = dev_cb.cb_fn(dev->data->port_id, dev_cb.event, + dev_cb.cb_arg, dev_cb.ret_param); + rte_spinlock_lock(ð_dev_cb_lock); + cb_lst->active = 0; + } + rte_spinlock_unlock(ð_dev_cb_lock); + return rc; +} + +void +rte_eth_dev_probing_finish(struct rte_eth_dev *dev) +{ + if (dev == NULL) + return; + + /* + * for secondary process, at that point we expect device + * to be already 'usable', so shared data and all function pointers + * for fast-path devops have to be setup properly inside rte_eth_dev. 
+ */ + if (rte_eal_process_type() == RTE_PROC_SECONDARY) + eth_dev_fp_ops_setup(rte_eth_fp_ops + dev->data->port_id, dev); + + rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_NEW, NULL); + + dev->state = RTE_ETH_DEV_ATTACHED; +} + +int +rte_eth_dev_release_port(struct rte_eth_dev *eth_dev) +{ + if (eth_dev == NULL) + return -EINVAL; + + eth_dev_shared_data_prepare(); + + if (eth_dev->state != RTE_ETH_DEV_UNUSED) + rte_eth_dev_callback_process(eth_dev, + RTE_ETH_EVENT_DESTROY, NULL); + + eth_dev_fp_ops_reset(rte_eth_fp_ops + eth_dev->data->port_id); + + rte_spinlock_lock(ð_dev_shared_data->ownership_lock); + + eth_dev->state = RTE_ETH_DEV_UNUSED; + eth_dev->device = NULL; + eth_dev->process_private = NULL; + eth_dev->intr_handle = NULL; + eth_dev->rx_pkt_burst = NULL; + eth_dev->tx_pkt_burst = NULL; + eth_dev->tx_pkt_prepare = NULL; + eth_dev->rx_queue_count = NULL; + eth_dev->rx_descriptor_status = NULL; + eth_dev->tx_descriptor_status = NULL; + eth_dev->dev_ops = NULL; + + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + rte_free(eth_dev->data->rx_queues); + rte_free(eth_dev->data->tx_queues); + rte_free(eth_dev->data->mac_addrs); + rte_free(eth_dev->data->hash_mac_addrs); + rte_free(eth_dev->data->dev_private); + pthread_mutex_destroy(ð_dev->data->flow_ops_mutex); + memset(eth_dev->data, 0, sizeof(struct rte_eth_dev_data)); + } + + rte_spinlock_unlock(ð_dev_shared_data->ownership_lock); + + return 0; +} + +int +rte_eth_dev_create(struct rte_device *device, const char *name, + size_t priv_data_size, + ethdev_bus_specific_init ethdev_bus_specific_init, + void *bus_init_params, + ethdev_init_t ethdev_init, void *init_params) +{ + struct rte_eth_dev *ethdev; + int retval; + + RTE_FUNC_PTR_OR_ERR_RET(*ethdev_init, -EINVAL); + + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + ethdev = rte_eth_dev_allocate(name); + if (!ethdev) + return -ENODEV; + + if (priv_data_size) { + ethdev->data->dev_private = rte_zmalloc_socket( + name, priv_data_size, 
RTE_CACHE_LINE_SIZE, + device->numa_node); + + if (!ethdev->data->dev_private) { + RTE_ETHDEV_LOG(ERR, + "failed to allocate private data\n"); + retval = -ENOMEM; + goto probe_failed; + } + } + } else { + ethdev = rte_eth_dev_attach_secondary(name); + if (!ethdev) { + RTE_ETHDEV_LOG(ERR, + "secondary process attach failed, ethdev doesn't exist\n"); + return -ENODEV; + } + } + + ethdev->device = device; + + if (ethdev_bus_specific_init) { + retval = ethdev_bus_specific_init(ethdev, bus_init_params); + if (retval) { + RTE_ETHDEV_LOG(ERR, + "ethdev bus specific initialisation failed\n"); + goto probe_failed; + } + } + + retval = ethdev_init(ethdev, init_params); + if (retval) { + RTE_ETHDEV_LOG(ERR, "ethdev initialisation failed\n"); + goto probe_failed; + } + + rte_eth_dev_probing_finish(ethdev); + + return retval; + +probe_failed: + rte_eth_dev_release_port(ethdev); + return retval; +} + +int +rte_eth_dev_destroy(struct rte_eth_dev *ethdev, + ethdev_uninit_t ethdev_uninit) +{ + int ret; + + ethdev = rte_eth_dev_allocated(ethdev->data->name); + if (!ethdev) + return -ENODEV; + + RTE_FUNC_PTR_OR_ERR_RET(*ethdev_uninit, -EINVAL); + + ret = ethdev_uninit(ethdev); + if (ret) + return ret; + + return rte_eth_dev_release_port(ethdev); +} + +struct rte_eth_dev * +rte_eth_dev_get_by_name(const char *name) +{ + uint16_t pid; + + if (rte_eth_dev_get_port_by_name(name, &pid)) + return NULL; + + return &rte_eth_devices[pid]; +} + +int +rte_eth_dev_is_rx_hairpin_queue(struct rte_eth_dev *dev, uint16_t queue_id) +{ + if (dev->data->rx_queue_state[queue_id] == RTE_ETH_QUEUE_STATE_HAIRPIN) + return 1; + return 0; +} + +int +rte_eth_dev_is_tx_hairpin_queue(struct rte_eth_dev *dev, uint16_t queue_id) +{ + if (dev->data->tx_queue_state[queue_id] == RTE_ETH_QUEUE_STATE_HAIRPIN) + return 1; + return 0; +} + +void +rte_eth_dev_internal_reset(struct rte_eth_dev *dev) +{ + if (dev->data->dev_started) { + RTE_ETHDEV_LOG(ERR, "Port %u must be stopped to allow reset\n", + dev->data->port_id); 
+ return; + } + + eth_dev_rx_queue_config(dev, 0); + eth_dev_tx_queue_config(dev, 0); + + memset(&dev->data->dev_conf, 0, sizeof(dev->data->dev_conf)); +} + +static int +eth_dev_devargs_tokenise(struct rte_kvargs *arglist, const char *str_in) +{ + int state; + struct rte_kvargs_pair *pair; + char *letter; + + arglist->str = strdup(str_in); + if (arglist->str == NULL) + return -ENOMEM; + + letter = arglist->str; + state = 0; + arglist->count = 0; + pair = &arglist->pairs[0]; + while (1) { + switch (state) { + case 0: /* Initial */ + if (*letter == '=') + return -EINVAL; + else if (*letter == '\0') + return 0; + + state = 1; + pair->key = letter; + /* fall-thru */ + + case 1: /* Parsing key */ + if (*letter == '=') { + *letter = '\0'; + pair->value = letter + 1; + state = 2; + } else if (*letter == ',' || *letter == '\0') + return -EINVAL; + break; + + + case 2: /* Parsing value */ + if (*letter == '[') + state = 3; + else if (*letter == ',') { + *letter = '\0'; + arglist->count++; + pair = &arglist->pairs[arglist->count]; + state = 0; + } else if (*letter == '\0') { + letter--; + arglist->count++; + pair = &arglist->pairs[arglist->count]; + state = 0; + } + break; + + case 3: /* Parsing list */ + if (*letter == ']') + state = 2; + else if (*letter == '\0') + return -EINVAL; + break; + } + letter++; + } +} + +int +rte_eth_devargs_parse(const char *dargs, struct rte_eth_devargs *eth_da) +{ + struct rte_kvargs args; + struct rte_kvargs_pair *pair; + unsigned int i; + int result = 0; + + memset(eth_da, 0, sizeof(*eth_da)); + + result = eth_dev_devargs_tokenise(&args, dargs); + if (result < 0) + goto parse_cleanup; + + for (i = 0; i < args.count; i++) { + pair = &args.pairs[i]; + if (strcmp("representor", pair->key) == 0) { + if (eth_da->type != RTE_ETH_REPRESENTOR_NONE) { + RTE_LOG(ERR, EAL, "duplicated representor key: %s\n", + dargs); + result = -1; + goto parse_cleanup; + } + result = rte_eth_devargs_parse_representor_ports( + pair->value, eth_da); + if (result < 0) 
+ goto parse_cleanup; + } + } + +parse_cleanup: + if (args.str) + free(args.str); + + return result; +} + +static inline int +eth_dev_dma_mzone_name(char *name, size_t len, uint16_t port_id, uint16_t queue_id, + const char *ring_name) +{ + return snprintf(name, len, "eth_p%d_q%d_%s", + port_id, queue_id, ring_name); +} + +int +rte_eth_dma_zone_free(const struct rte_eth_dev *dev, const char *ring_name, + uint16_t queue_id) +{ + char z_name[RTE_MEMZONE_NAMESIZE]; + const struct rte_memzone *mz; + int rc = 0; + + rc = eth_dev_dma_mzone_name(z_name, sizeof(z_name), dev->data->port_id, + queue_id, ring_name); + if (rc >= RTE_MEMZONE_NAMESIZE) { + RTE_ETHDEV_LOG(ERR, "ring name too long\n"); + return -ENAMETOOLONG; + } + + mz = rte_memzone_lookup(z_name); + if (mz) + rc = rte_memzone_free(mz); + else + rc = -ENOENT; + + return rc; +} + +const struct rte_memzone * +rte_eth_dma_zone_reserve(const struct rte_eth_dev *dev, const char *ring_name, + uint16_t queue_id, size_t size, unsigned int align, + int socket_id) +{ + char z_name[RTE_MEMZONE_NAMESIZE]; + const struct rte_memzone *mz; + int rc; + + rc = eth_dev_dma_mzone_name(z_name, sizeof(z_name), dev->data->port_id, + queue_id, ring_name); + if (rc >= RTE_MEMZONE_NAMESIZE) { + RTE_ETHDEV_LOG(ERR, "ring name too long\n"); + rte_errno = ENAMETOOLONG; + return NULL; + } + + mz = rte_memzone_lookup(z_name); + if (mz) { + if ((socket_id != SOCKET_ID_ANY && socket_id != mz->socket_id) || + size > mz->len || + ((uintptr_t)mz->addr & (align - 1)) != 0) { + RTE_ETHDEV_LOG(ERR, + "memzone %s does not justify the requested attributes\n", + mz->name); + return NULL; + } + + return mz; + } + + return rte_memzone_reserve_aligned(z_name, size, socket_id, + RTE_MEMZONE_IOVA_CONTIG, align); +} + +int +rte_eth_hairpin_queue_peer_bind(uint16_t cur_port, uint16_t cur_queue, + struct rte_hairpin_peer_info *peer_info, + uint32_t direction) +{ + struct rte_eth_dev *dev; + + if (peer_info == NULL) + return -EINVAL; + + /* No need to check the 
validity again. */ + dev = &rte_eth_devices[cur_port]; + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->hairpin_queue_peer_bind, + -ENOTSUP); + + return (*dev->dev_ops->hairpin_queue_peer_bind)(dev, cur_queue, + peer_info, direction); +} + +int +rte_eth_hairpin_queue_peer_unbind(uint16_t cur_port, uint16_t cur_queue, + uint32_t direction) +{ + struct rte_eth_dev *dev; + + /* No need to check the validity again. */ + dev = &rte_eth_devices[cur_port]; + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->hairpin_queue_peer_unbind, + -ENOTSUP); + + return (*dev->dev_ops->hairpin_queue_peer_unbind)(dev, cur_queue, + direction); +} + +int +rte_eth_hairpin_queue_peer_update(uint16_t peer_port, uint16_t peer_queue, + struct rte_hairpin_peer_info *cur_info, + struct rte_hairpin_peer_info *peer_info, + uint32_t direction) +{ + struct rte_eth_dev *dev; + + /* Current queue information is not mandatory. */ + if (peer_info == NULL) + return -EINVAL; + + /* No need to check the validity again. */ + dev = &rte_eth_devices[peer_port]; + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->hairpin_queue_peer_update, + -ENOTSUP); + + return (*dev->dev_ops->hairpin_queue_peer_update)(dev, peer_queue, + cur_info, peer_info, direction); +} + +int +rte_eth_ip_reassembly_dynfield_register(int *field_offset, int *flag_offset) +{ + static const struct rte_mbuf_dynfield field_desc = { + .name = RTE_MBUF_DYNFIELD_IP_REASSEMBLY_NAME, + .size = sizeof(rte_eth_ip_reassembly_dynfield_t), + .align = __alignof__(rte_eth_ip_reassembly_dynfield_t), + }; + static const struct rte_mbuf_dynflag ip_reassembly_dynflag = { + .name = RTE_MBUF_DYNFLAG_IP_REASSEMBLY_INCOMPLETE_NAME, + }; + int offset; + + offset = rte_mbuf_dynfield_register(&field_desc); + if (offset < 0) + return -1; + if (field_offset != NULL) + *field_offset = offset; + + offset = rte_mbuf_dynflag_register(&ip_reassembly_dynflag); + if (offset < 0) + return -1; + if (flag_offset != NULL) + *flag_offset = offset; + + return 0; +} uint16_t rte_eth_pkt_burst_dummy(void 
*queue __rte_unused, @@ -11,3 +637,135 @@ rte_eth_pkt_burst_dummy(void *queue __rte_unused, { return 0; } + +int +rte_eth_representor_id_get(uint16_t port_id, + enum rte_eth_representor_type type, + int controller, int pf, int representor_port, + uint16_t *repr_id) +{ + int ret, n, count; + uint32_t i; + struct rte_eth_representor_info *info = NULL; + size_t size; + + if (type == RTE_ETH_REPRESENTOR_NONE) + return 0; + if (repr_id == NULL) + return -EINVAL; + + /* Get PMD representor range info. */ + ret = rte_eth_representor_info_get(port_id, NULL); + if (ret == -ENOTSUP && type == RTE_ETH_REPRESENTOR_VF && + controller == -1 && pf == -1) { + /* Direct mapping for legacy VF representor. */ + *repr_id = representor_port; + return 0; + } else if (ret < 0) { + return ret; + } + n = ret; + size = sizeof(*info) + n * sizeof(info->ranges[0]); + info = calloc(1, size); + if (info == NULL) + return -ENOMEM; + info->nb_ranges_alloc = n; + ret = rte_eth_representor_info_get(port_id, info); + if (ret < 0) + goto out; + + /* Default controller and pf to caller. */ + if (controller == -1) + controller = info->controller; + if (pf == -1) + pf = info->pf; + + /* Locate representor ID. 
*/ + ret = -ENOENT; + for (i = 0; i < info->nb_ranges; ++i) { + if (info->ranges[i].type != type) + continue; + if (info->ranges[i].controller != controller) + continue; + if (info->ranges[i].id_end < info->ranges[i].id_base) { + RTE_LOG(WARNING, EAL, "Port %hu invalid representor ID Range %u - %u, entry %d\n", + port_id, info->ranges[i].id_base, + info->ranges[i].id_end, i); + continue; + + } + count = info->ranges[i].id_end - info->ranges[i].id_base + 1; + switch (info->ranges[i].type) { + case RTE_ETH_REPRESENTOR_PF: + if (pf < info->ranges[i].pf || + pf >= info->ranges[i].pf + count) + continue; + *repr_id = info->ranges[i].id_base + + (pf - info->ranges[i].pf); + ret = 0; + goto out; + case RTE_ETH_REPRESENTOR_VF: + if (info->ranges[i].pf != pf) + continue; + if (representor_port < info->ranges[i].vf || + representor_port >= info->ranges[i].vf + count) + continue; + *repr_id = info->ranges[i].id_base + + (representor_port - info->ranges[i].vf); + ret = 0; + goto out; + case RTE_ETH_REPRESENTOR_SF: + if (info->ranges[i].pf != pf) + continue; + if (representor_port < info->ranges[i].sf || + representor_port >= info->ranges[i].sf + count) + continue; + *repr_id = info->ranges[i].id_base + + (representor_port - info->ranges[i].sf); + ret = 0; + goto out; + default: + break; + } + } +out: + free(info); + return ret; +} + +int +rte_eth_switch_domain_alloc(uint16_t *domain_id) +{ + uint16_t i; + + *domain_id = RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID; + + for (i = 0; i < RTE_MAX_ETHPORTS; i++) { + if (eth_dev_switch_domains[i].state == + RTE_ETH_SWITCH_DOMAIN_UNUSED) { + eth_dev_switch_domains[i].state = + RTE_ETH_SWITCH_DOMAIN_ALLOCATED; + *domain_id = i; + return 0; + } + } + + return -ENOSPC; +} + +int +rte_eth_switch_domain_free(uint16_t domain_id) +{ + if (domain_id == RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID || + domain_id >= RTE_MAX_ETHPORTS) + return -EINVAL; + + if (eth_dev_switch_domains[domain_id].state != + RTE_ETH_SWITCH_DOMAIN_ALLOCATED) + return -EINVAL; + + 
eth_dev_switch_domains[domain_id].state = RTE_ETH_SWITCH_DOMAIN_UNUSED; + + return 0; +} + diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c index 8fca20c7d45b..84dc0b320ed0 100644 --- a/lib/ethdev/ethdev_private.c +++ b/lib/ethdev/ethdev_private.c @@ -3,10 +3,22 @@ */ #include + #include "rte_ethdev.h" #include "ethdev_driver.h" #include "ethdev_private.h" +static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data"; + +/* Shared memory between primary and secondary processes. */ +struct eth_dev_shared *eth_dev_shared_data; + +/* spinlock for shared data allocation */ +static rte_spinlock_t eth_dev_shared_data_lock = RTE_SPINLOCK_INITIALIZER; + +/* spinlock for eth device callbacks */ +rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER; + uint16_t eth_dev_to_id(const struct rte_eth_dev *dev) { @@ -302,3 +314,122 @@ rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id, return nb_pkts; } + +void +eth_dev_shared_data_prepare(void) +{ + const unsigned int flags = 0; + const struct rte_memzone *mz; + + rte_spinlock_lock(ð_dev_shared_data_lock); + + if (eth_dev_shared_data == NULL) { + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + /* Allocate port data and ownership shared memory. 
*/ + mz = rte_memzone_reserve(MZ_RTE_ETH_DEV_DATA, + sizeof(*eth_dev_shared_data), + rte_socket_id(), flags); + } else + mz = rte_memzone_lookup(MZ_RTE_ETH_DEV_DATA); + if (mz == NULL) + rte_panic("Cannot allocate ethdev shared data\n"); + + eth_dev_shared_data = mz->addr; + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + eth_dev_shared_data->next_owner_id = + RTE_ETH_DEV_NO_OWNER + 1; + rte_spinlock_init(ð_dev_shared_data->ownership_lock); + memset(eth_dev_shared_data->data, 0, + sizeof(eth_dev_shared_data->data)); + } + } + + rte_spinlock_unlock(ð_dev_shared_data_lock); +} + +void +eth_dev_rxq_release(struct rte_eth_dev *dev, uint16_t qid) +{ + void **rxq = dev->data->rx_queues; + + if (rxq[qid] == NULL) + return; + + if (dev->dev_ops->rx_queue_release != NULL) + (*dev->dev_ops->rx_queue_release)(dev, qid); + rxq[qid] = NULL; +} + +void +eth_dev_txq_release(struct rte_eth_dev *dev, uint16_t qid) +{ + void **txq = dev->data->tx_queues; + + if (txq[qid] == NULL) + return; + + if (dev->dev_ops->tx_queue_release != NULL) + (*dev->dev_ops->tx_queue_release)(dev, qid); + txq[qid] = NULL; +} + +int +eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues) +{ + uint16_t old_nb_queues = dev->data->nb_rx_queues; + unsigned int i; + + if (dev->data->rx_queues == NULL && nb_queues != 0) { /* first time configuration */ + dev->data->rx_queues = rte_zmalloc("ethdev->rx_queues", + sizeof(dev->data->rx_queues[0]) * + RTE_MAX_QUEUES_PER_PORT, + RTE_CACHE_LINE_SIZE); + if (dev->data->rx_queues == NULL) { + dev->data->nb_rx_queues = 0; + return -(ENOMEM); + } + } else if (dev->data->rx_queues != NULL && nb_queues != 0) { /* re-configure */ + for (i = nb_queues; i < old_nb_queues; i++) + eth_dev_rxq_release(dev, i); + + } else if (dev->data->rx_queues != NULL && nb_queues == 0) { + for (i = nb_queues; i < old_nb_queues; i++) + eth_dev_rxq_release(dev, i); + + rte_free(dev->data->rx_queues); + dev->data->rx_queues = NULL; + } + dev->data->nb_rx_queues = nb_queues; + 
return 0; +} + +int +eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues) +{ + uint16_t old_nb_queues = dev->data->nb_tx_queues; + unsigned int i; + + if (dev->data->tx_queues == NULL && nb_queues != 0) { /* first time configuration */ + dev->data->tx_queues = rte_zmalloc("ethdev->tx_queues", + sizeof(dev->data->tx_queues[0]) * + RTE_MAX_QUEUES_PER_PORT, + RTE_CACHE_LINE_SIZE); + if (dev->data->tx_queues == NULL) { + dev->data->nb_tx_queues = 0; + return -(ENOMEM); + } + } else if (dev->data->tx_queues != NULL && nb_queues != 0) { /* re-configure */ + for (i = nb_queues; i < old_nb_queues; i++) + eth_dev_txq_release(dev, i); + + } else if (dev->data->tx_queues != NULL && nb_queues == 0) { + for (i = nb_queues; i < old_nb_queues; i++) + eth_dev_txq_release(dev, i); + + rte_free(dev->data->tx_queues); + dev->data->tx_queues = NULL; + } + dev->data->nb_tx_queues = nb_queues; + return 0; +} + diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h index cc91025e8d9b..cc9879907ce5 100644 --- a/lib/ethdev/ethdev_private.h +++ b/lib/ethdev/ethdev_private.h @@ -5,10 +5,38 @@ #ifndef _ETH_PRIVATE_H_ #define _ETH_PRIVATE_H_ +#include + +#include #include #include "rte_ethdev.h" +struct eth_dev_shared { + uint64_t next_owner_id; + rte_spinlock_t ownership_lock; + struct rte_eth_dev_data data[RTE_MAX_ETHPORTS]; +}; + +extern struct eth_dev_shared *eth_dev_shared_data; + +/** + * The user application callback description. + * + * It contains callback address to be registered by user application, + * the pointer to the parameters for callback, and the event type. 
+ */ +struct rte_eth_dev_callback { + TAILQ_ENTRY(rte_eth_dev_callback) next; /**< Callbacks list */ + rte_eth_dev_cb_fn cb_fn; /**< Callback address */ + void *cb_arg; /**< Parameter for callback */ + void *ret_param; /**< Return parameter */ + enum rte_eth_event_type event; /**< Interrupt event type */ + uint32_t active; /**< Callback is executing */ +}; + +extern rte_spinlock_t eth_dev_cb_lock; + /* * Convert rte_eth_dev pointer to port ID. * NULL will be translated to RTE_MAX_ETHPORTS. @@ -33,4 +61,12 @@ void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo); void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo, const struct rte_eth_dev *dev); + +void eth_dev_shared_data_prepare(void); + +void eth_dev_rxq_release(struct rte_eth_dev *dev, uint16_t qid); +void eth_dev_txq_release(struct rte_eth_dev *dev, uint16_t qid); +int eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues); +int eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues); + #endif /* _ETH_PRIVATE_H_ */ diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index 2a479bea2128..70c850a2f18a 100644 --- a/lib/ethdev/rte_ethdev.c +++ b/lib/ethdev/rte_ethdev.c @@ -30,7 +30,6 @@ #include #include #include -#include #include #include #include @@ -41,37 +40,23 @@ #include "ethdev_profile.h" #include "ethdev_private.h" -static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data"; struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS]; /* public fast-path API */ struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS]; -/* spinlock for eth device callbacks */ -static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER; - /* spinlock for add/remove Rx callbacks */ static rte_spinlock_t eth_dev_rx_cb_lock = RTE_SPINLOCK_INITIALIZER; /* spinlock for add/remove Tx callbacks */ static rte_spinlock_t eth_dev_tx_cb_lock = RTE_SPINLOCK_INITIALIZER; -/* spinlock for shared data allocation */ -static rte_spinlock_t eth_dev_shared_data_lock = RTE_SPINLOCK_INITIALIZER; - /* store 
statistics names and its offset in stats structure */ struct rte_eth_xstats_name_off { char name[RTE_ETH_XSTATS_NAME_SIZE]; unsigned offset; }; -/* Shared memory between primary and secondary processes. */ -static struct { - uint64_t next_owner_id; - rte_spinlock_t ownership_lock; - struct rte_eth_dev_data data[RTE_MAX_ETHPORTS]; -} *eth_dev_shared_data; - static const struct rte_eth_xstats_name_off eth_dev_stats_strings[] = { {"rx_good_packets", offsetof(struct rte_eth_stats, ipackets)}, {"tx_good_packets", offsetof(struct rte_eth_stats, opackets)}, @@ -175,21 +160,6 @@ static const struct { {RTE_ETH_DEV_CAPA_FLOW_SHARED_OBJECT_KEEP, "FLOW_SHARED_OBJECT_KEEP"}, }; -/** - * The user application callback description. - * - * It contains callback address to be registered by user application, - * the pointer to the parameters for callback, and the event type. - */ -struct rte_eth_dev_callback { - TAILQ_ENTRY(rte_eth_dev_callback) next; /**< Callbacks list */ - rte_eth_dev_cb_fn cb_fn; /**< Callback address */ - void *cb_arg; /**< Parameter for callback */ - void *ret_param; /**< Return parameter */ - enum rte_eth_event_type event; /**< Interrupt event type */ - uint32_t active; /**< Callback is executing */ -}; - enum { STAT_QMAP_TX = 0, STAT_QMAP_RX @@ -399,227 +369,12 @@ rte_eth_find_next_sibling(uint16_t port_id, uint16_t ref_port_id) rte_eth_devices[ref_port_id].device); } -static void -eth_dev_shared_data_prepare(void) -{ - const unsigned flags = 0; - const struct rte_memzone *mz; - - rte_spinlock_lock(ð_dev_shared_data_lock); - - if (eth_dev_shared_data == NULL) { - if (rte_eal_process_type() == RTE_PROC_PRIMARY) { - /* Allocate port data and ownership shared memory. 
 */
-		mz = rte_memzone_reserve(MZ_RTE_ETH_DEV_DATA,
-				sizeof(*eth_dev_shared_data),
-				rte_socket_id(), flags);
-	} else
-		mz = rte_memzone_lookup(MZ_RTE_ETH_DEV_DATA);
-	if (mz == NULL)
-		rte_panic("Cannot allocate ethdev shared data\n");
-
-	eth_dev_shared_data = mz->addr;
-	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
-		eth_dev_shared_data->next_owner_id =
-				RTE_ETH_DEV_NO_OWNER + 1;
-		rte_spinlock_init(&eth_dev_shared_data->ownership_lock);
-		memset(eth_dev_shared_data->data, 0,
-		       sizeof(eth_dev_shared_data->data));
-		}
-	}
-
-	rte_spinlock_unlock(&eth_dev_shared_data_lock);
-}
-
 static bool
 eth_dev_is_allocated(const struct rte_eth_dev *ethdev)
 {
 	return ethdev->data->name[0] != '\0';
 }
 
-static struct rte_eth_dev *
-eth_dev_allocated(const char *name)
-{
-	uint16_t i;
-
-	RTE_BUILD_BUG_ON(RTE_MAX_ETHPORTS >= UINT16_MAX);
-
-	for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
-		if (rte_eth_devices[i].data != NULL &&
-		    strcmp(rte_eth_devices[i].data->name, name) == 0)
-			return &rte_eth_devices[i];
-	}
-	return NULL;
-}
-
-struct rte_eth_dev *
-rte_eth_dev_allocated(const char *name)
-{
-	struct rte_eth_dev *ethdev;
-
-	eth_dev_shared_data_prepare();
-
-	rte_spinlock_lock(&eth_dev_shared_data->ownership_lock);
-
-	ethdev = eth_dev_allocated(name);
-
-	rte_spinlock_unlock(&eth_dev_shared_data->ownership_lock);
-
-	return ethdev;
-}
-
-static uint16_t
-eth_dev_find_free_port(void)
-{
-	uint16_t i;
-
-	for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
-		/* Using shared name field to find a free port. */
-		if (eth_dev_shared_data->data[i].name[0] == '\0') {
-			RTE_ASSERT(rte_eth_devices[i].state ==
-				   RTE_ETH_DEV_UNUSED);
-			return i;
-		}
-	}
-	return RTE_MAX_ETHPORTS;
-}
-
-static struct rte_eth_dev *
-eth_dev_get(uint16_t port_id)
-{
-	struct rte_eth_dev *eth_dev = &rte_eth_devices[port_id];
-
-	eth_dev->data = &eth_dev_shared_data->data[port_id];
-
-	return eth_dev;
-}
-
-struct rte_eth_dev *
-rte_eth_dev_allocate(const char *name)
-{
-	uint16_t port_id;
-	struct rte_eth_dev *eth_dev = NULL;
-	size_t name_len;
-
-	name_len = strnlen(name, RTE_ETH_NAME_MAX_LEN);
-	if (name_len == 0) {
-		RTE_ETHDEV_LOG(ERR, "Zero length Ethernet device name\n");
-		return NULL;
-	}
-
-	if (name_len >= RTE_ETH_NAME_MAX_LEN) {
-		RTE_ETHDEV_LOG(ERR, "Ethernet device name is too long\n");
-		return NULL;
-	}
-
-	eth_dev_shared_data_prepare();
-
-	/* Synchronize port creation between primary and secondary threads. */
-	rte_spinlock_lock(&eth_dev_shared_data->ownership_lock);
-
-	if (eth_dev_allocated(name) != NULL) {
-		RTE_ETHDEV_LOG(ERR,
-			"Ethernet device with name %s already allocated\n",
-			name);
-		goto unlock;
-	}
-
-	port_id = eth_dev_find_free_port();
-	if (port_id == RTE_MAX_ETHPORTS) {
-		RTE_ETHDEV_LOG(ERR,
-			"Reached maximum number of Ethernet ports\n");
-		goto unlock;
-	}
-
-	eth_dev = eth_dev_get(port_id);
-	strlcpy(eth_dev->data->name, name, sizeof(eth_dev->data->name));
-	eth_dev->data->port_id = port_id;
-	eth_dev->data->backer_port_id = RTE_MAX_ETHPORTS;
-	eth_dev->data->mtu = RTE_ETHER_MTU;
-	pthread_mutex_init(&eth_dev->data->flow_ops_mutex, NULL);
-
-unlock:
-	rte_spinlock_unlock(&eth_dev_shared_data->ownership_lock);
-
-	return eth_dev;
-}
-
-/*
- * Attach to a port already registered by the primary process, which
- * makes sure that the same device would have the same port ID both
- * in the primary and secondary process.
- */
-struct rte_eth_dev *
-rte_eth_dev_attach_secondary(const char *name)
-{
-	uint16_t i;
-	struct rte_eth_dev *eth_dev = NULL;
-
-	eth_dev_shared_data_prepare();
-
-	/* Synchronize port attachment to primary port creation and release. */
-	rte_spinlock_lock(&eth_dev_shared_data->ownership_lock);
-
-	for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
-		if (strcmp(eth_dev_shared_data->data[i].name, name) == 0)
-			break;
-	}
-	if (i == RTE_MAX_ETHPORTS) {
-		RTE_ETHDEV_LOG(ERR,
-			"Device %s is not driven by the primary process\n",
-			name);
-	} else {
-		eth_dev = eth_dev_get(i);
-		RTE_ASSERT(eth_dev->data->port_id == i);
-	}
-
-	rte_spinlock_unlock(&eth_dev_shared_data->ownership_lock);
-	return eth_dev;
-}
-
-int
-rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
-{
-	if (eth_dev == NULL)
-		return -EINVAL;
-
-	eth_dev_shared_data_prepare();
-
-	if (eth_dev->state != RTE_ETH_DEV_UNUSED)
-		rte_eth_dev_callback_process(eth_dev,
-				RTE_ETH_EVENT_DESTROY, NULL);
-
-	eth_dev_fp_ops_reset(rte_eth_fp_ops + eth_dev->data->port_id);
-
-	rte_spinlock_lock(&eth_dev_shared_data->ownership_lock);
-
-	eth_dev->state = RTE_ETH_DEV_UNUSED;
-	eth_dev->device = NULL;
-	eth_dev->process_private = NULL;
-	eth_dev->intr_handle = NULL;
-	eth_dev->rx_pkt_burst = NULL;
-	eth_dev->tx_pkt_burst = NULL;
-	eth_dev->tx_pkt_prepare = NULL;
-	eth_dev->rx_queue_count = NULL;
-	eth_dev->rx_descriptor_status = NULL;
-	eth_dev->tx_descriptor_status = NULL;
-	eth_dev->dev_ops = NULL;
-
-	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
-		rte_free(eth_dev->data->rx_queues);
-		rte_free(eth_dev->data->tx_queues);
-		rte_free(eth_dev->data->mac_addrs);
-		rte_free(eth_dev->data->hash_mac_addrs);
-		rte_free(eth_dev->data->dev_private);
-		pthread_mutex_destroy(&eth_dev->data->flow_ops_mutex);
-		memset(eth_dev->data, 0, sizeof(struct rte_eth_dev_data));
-	}
-
-	rte_spinlock_unlock(&eth_dev_shared_data->ownership_lock);
-
-	return 0;
-}
-
 int
 rte_eth_dev_is_valid_port(uint16_t port_id)
 {
@@ -894,17 +649,6 @@ rte_eth_dev_get_port_by_name(const char *name, uint16_t *port_id)
 	return -ENODEV;
 }
 
-struct rte_eth_dev *
-rte_eth_dev_get_by_name(const char *name)
-{
-	uint16_t pid;
-
-	if (rte_eth_dev_get_port_by_name(name, &pid))
-		return NULL;
-
-	return &rte_eth_devices[pid];
-}
-
 static int
 eth_err(uint16_t port_id, int ret)
 {
@@ -915,62 +659,6 @@ eth_err(uint16_t port_id, int ret)
 	return ret;
 }
 
-static void
-eth_dev_rxq_release(struct rte_eth_dev *dev, uint16_t qid)
-{
-	void **rxq = dev->data->rx_queues;
-
-	if (rxq[qid] == NULL)
-		return;
-
-	if (dev->dev_ops->rx_queue_release != NULL)
-		(*dev->dev_ops->rx_queue_release)(dev, qid);
-	rxq[qid] = NULL;
-}
-
-static void
-eth_dev_txq_release(struct rte_eth_dev *dev, uint16_t qid)
-{
-	void **txq = dev->data->tx_queues;
-
-	if (txq[qid] == NULL)
-		return;
-
-	if (dev->dev_ops->tx_queue_release != NULL)
-		(*dev->dev_ops->tx_queue_release)(dev, qid);
-	txq[qid] = NULL;
-}
-
-static int
-eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
-{
-	uint16_t old_nb_queues = dev->data->nb_rx_queues;
-	unsigned i;
-
-	if (dev->data->rx_queues == NULL && nb_queues != 0) { /* first time configuration */
-		dev->data->rx_queues = rte_zmalloc("ethdev->rx_queues",
-				sizeof(dev->data->rx_queues[0]) *
-				RTE_MAX_QUEUES_PER_PORT,
-				RTE_CACHE_LINE_SIZE);
-		if (dev->data->rx_queues == NULL) {
-			dev->data->nb_rx_queues = 0;
-			return -(ENOMEM);
-		}
-	} else if (dev->data->rx_queues != NULL && nb_queues != 0) { /* re-configure */
-		for (i = nb_queues; i < old_nb_queues; i++)
-			eth_dev_rxq_release(dev, i);
-
-	} else if (dev->data->rx_queues != NULL && nb_queues == 0) {
-		for (i = nb_queues; i < old_nb_queues; i++)
-			eth_dev_rxq_release(dev, i);
-
-		rte_free(dev->data->rx_queues);
-		dev->data->rx_queues = NULL;
-	}
-	dev->data->nb_rx_queues = nb_queues;
-	return 0;
-}
-
 static int
 eth_dev_validate_rx_queue(const struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
@@ -1161,36 +849,6 @@ rte_eth_dev_tx_queue_stop(uint16_t port_id, uint16_t tx_queue_id)
 	return eth_err(port_id, dev->dev_ops->tx_queue_stop(dev, tx_queue_id));
 }
 
-static int
-eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
-{
-	uint16_t old_nb_queues = dev->data->nb_tx_queues;
-	unsigned i;
-
-	if (dev->data->tx_queues == NULL && nb_queues != 0) { /* first time configuration */
-		dev->data->tx_queues = rte_zmalloc("ethdev->tx_queues",
-				sizeof(dev->data->tx_queues[0]) *
-				RTE_MAX_QUEUES_PER_PORT,
-				RTE_CACHE_LINE_SIZE);
-		if (dev->data->tx_queues == NULL) {
-			dev->data->nb_tx_queues = 0;
-			return -(ENOMEM);
-		}
-	} else if (dev->data->tx_queues != NULL && nb_queues != 0) { /* re-configure */
-		for (i = nb_queues; i < old_nb_queues; i++)
-			eth_dev_txq_release(dev, i);
-
-	} else if (dev->data->tx_queues != NULL && nb_queues == 0) {
-		for (i = nb_queues; i < old_nb_queues; i++)
-			eth_dev_txq_release(dev, i);
-
-		rte_free(dev->data->tx_queues);
-		dev->data->tx_queues = NULL;
-	}
-	dev->data->nb_tx_queues = nb_queues;
-	return 0;
-}
-
 uint32_t
 rte_eth_speed_bitflag(uint32_t speed, int duplex)
 {
@@ -1682,21 +1340,6 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	return ret;
 }
 
-void
-rte_eth_dev_internal_reset(struct rte_eth_dev *dev)
-{
-	if (dev->data->dev_started) {
-		RTE_ETHDEV_LOG(ERR, "Port %u must be stopped to allow reset\n",
-			dev->data->port_id);
-		return;
-	}
-
-	eth_dev_rx_queue_config(dev, 0);
-	eth_dev_tx_queue_config(dev, 0);
-
-	memset(&dev->data->dev_conf, 0, sizeof(dev->data->dev_conf));
-}
-
 static void
 eth_dev_mac_restore(struct rte_eth_dev *dev,
 			struct rte_eth_dev_info *dev_info)
@@ -4914,52 +4557,6 @@ rte_eth_dev_callback_unregister(uint16_t port_id,
 	return ret;
 }
 
-int
-rte_eth_dev_callback_process(struct rte_eth_dev *dev,
-	enum rte_eth_event_type event, void *ret_param)
-{
-	struct rte_eth_dev_callback *cb_lst;
-	struct rte_eth_dev_callback dev_cb;
-	int rc = 0;
-
-	rte_spinlock_lock(&eth_dev_cb_lock);
-	TAILQ_FOREACH(cb_lst, &(dev->link_intr_cbs), next) {
-		if (cb_lst->cb_fn == NULL || cb_lst->event != event)
-			continue;
-		dev_cb = *cb_lst;
-		cb_lst->active = 1;
-		if (ret_param != NULL)
-			dev_cb.ret_param = ret_param;
-
-		rte_spinlock_unlock(&eth_dev_cb_lock);
-		rc = dev_cb.cb_fn(dev->data->port_id, dev_cb.event,
-				dev_cb.cb_arg, dev_cb.ret_param);
-		rte_spinlock_lock(&eth_dev_cb_lock);
-		cb_lst->active = 0;
-	}
-	rte_spinlock_unlock(&eth_dev_cb_lock);
-	return rc;
-}
-
-void
-rte_eth_dev_probing_finish(struct rte_eth_dev *dev)
-{
-	if (dev == NULL)
-		return;
-
-	/*
-	 * for secondary process, at that point we expect device
-	 * to be already 'usable', so shared data and all function pointers
-	 * for fast-path devops have to be setup properly inside rte_eth_dev.
-	 */
-	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
-		eth_dev_fp_ops_setup(rte_eth_fp_ops + dev->data->port_id, dev);
-
-	rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_NEW, NULL);
-
-	dev->state = RTE_ETH_DEV_ATTACHED;
-}
-
 int
 rte_eth_dev_rx_intr_ctl(uint16_t port_id, int epfd, int op, void *data)
 {
@@ -5032,156 +4629,6 @@ rte_eth_dev_rx_intr_ctl_q_get_fd(uint16_t port_id, uint16_t queue_id)
 	return fd;
 }
 
-static inline int
-eth_dev_dma_mzone_name(char *name, size_t len, uint16_t port_id, uint16_t queue_id,
-		const char *ring_name)
-{
-	return snprintf(name, len, "eth_p%d_q%d_%s",
-			port_id, queue_id, ring_name);
-}
-
-const struct rte_memzone *
-rte_eth_dma_zone_reserve(const struct rte_eth_dev *dev, const char *ring_name,
-			 uint16_t queue_id, size_t size, unsigned align,
-			 int socket_id)
-{
-	char z_name[RTE_MEMZONE_NAMESIZE];
-	const struct rte_memzone *mz;
-	int rc;
-
-	rc = eth_dev_dma_mzone_name(z_name, sizeof(z_name), dev->data->port_id,
-			queue_id, ring_name);
-	if (rc >= RTE_MEMZONE_NAMESIZE) {
-		RTE_ETHDEV_LOG(ERR, "ring name too long\n");
-		rte_errno = ENAMETOOLONG;
-		return NULL;
-	}
-
-	mz = rte_memzone_lookup(z_name);
-	if (mz) {
-		if ((socket_id != SOCKET_ID_ANY && socket_id != mz->socket_id) ||
-				size > mz->len ||
-				((uintptr_t)mz->addr & (align - 1)) != 0) {
-			RTE_ETHDEV_LOG(ERR,
-				"memzone %s does not justify the requested attributes\n",
-				mz->name);
-			return NULL;
-		}
-
-		return mz;
-	}
-
-	return rte_memzone_reserve_aligned(z_name, size, socket_id,
-			RTE_MEMZONE_IOVA_CONTIG, align);
-}
-
-int
-rte_eth_dma_zone_free(const struct rte_eth_dev *dev, const char *ring_name,
-		uint16_t queue_id)
-{
-	char z_name[RTE_MEMZONE_NAMESIZE];
-	const struct rte_memzone *mz;
-	int rc = 0;
-
-	rc = eth_dev_dma_mzone_name(z_name, sizeof(z_name), dev->data->port_id,
-			queue_id, ring_name);
-	if (rc >= RTE_MEMZONE_NAMESIZE) {
-		RTE_ETHDEV_LOG(ERR, "ring name too long\n");
-		return -ENAMETOOLONG;
-	}
-
-	mz = rte_memzone_lookup(z_name);
-	if (mz)
-		rc = rte_memzone_free(mz);
-	else
-		rc = -ENOENT;
-
-	return rc;
-}
-
-int
-rte_eth_dev_create(struct rte_device *device, const char *name,
-	size_t priv_data_size,
-	ethdev_bus_specific_init ethdev_bus_specific_init,
-	void *bus_init_params,
-	ethdev_init_t ethdev_init, void *init_params)
-{
-	struct rte_eth_dev *ethdev;
-	int retval;
-
-	RTE_FUNC_PTR_OR_ERR_RET(*ethdev_init, -EINVAL);
-
-	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
-		ethdev = rte_eth_dev_allocate(name);
-		if (!ethdev)
-			return -ENODEV;
-
-		if (priv_data_size) {
-			ethdev->data->dev_private = rte_zmalloc_socket(
-				name, priv_data_size, RTE_CACHE_LINE_SIZE,
-				device->numa_node);
-
-			if (!ethdev->data->dev_private) {
-				RTE_ETHDEV_LOG(ERR,
-					"failed to allocate private data\n");
-				retval = -ENOMEM;
-				goto probe_failed;
-			}
-		}
-	} else {
-		ethdev = rte_eth_dev_attach_secondary(name);
-		if (!ethdev) {
-			RTE_ETHDEV_LOG(ERR,
-				"secondary process attach failed, ethdev doesn't exist\n");
-			return -ENODEV;
-		}
-	}
-
-	ethdev->device = device;
-
-	if (ethdev_bus_specific_init) {
-		retval = ethdev_bus_specific_init(ethdev, bus_init_params);
-		if (retval) {
-			RTE_ETHDEV_LOG(ERR,
-				"ethdev bus specific initialisation failed\n");
-			goto probe_failed;
-		}
-	}
-
-	retval = ethdev_init(ethdev, init_params);
-	if (retval) {
-		RTE_ETHDEV_LOG(ERR, "ethdev initialisation failed\n");
-		goto probe_failed;
-	}
-
-	rte_eth_dev_probing_finish(ethdev);
-
-	return retval;
-
-probe_failed:
-	rte_eth_dev_release_port(ethdev);
-	return retval;
-}
-
-int
-rte_eth_dev_destroy(struct rte_eth_dev *ethdev,
-	ethdev_uninit_t ethdev_uninit)
-{
-	int ret;
-
-	ethdev = rte_eth_dev_allocated(ethdev->data->name);
-	if (!ethdev)
-		return -ENODEV;
-
-	RTE_FUNC_PTR_OR_ERR_RET(*ethdev_uninit, -EINVAL);
-
-	ret = ethdev_uninit(ethdev);
-	if (ret)
-		return ret;
-
-	return rte_eth_dev_release_port(ethdev);
-}
-
 int
 rte_eth_dev_rx_intr_ctl_q(uint16_t port_id, uint16_t queue_id,
 			  int epfd, int op, void *data)
@@ -6005,22 +5452,6 @@ rte_eth_dev_hairpin_capability_get(uint16_t port_id,
 	return eth_err(port_id, (*dev->dev_ops->hairpin_cap_get)(dev, cap));
 }
 
-int
-rte_eth_dev_is_rx_hairpin_queue(struct rte_eth_dev *dev, uint16_t queue_id)
-{
-	if (dev->data->rx_queue_state[queue_id] == RTE_ETH_QUEUE_STATE_HAIRPIN)
-		return 1;
-	return 0;
-}
-
-int
-rte_eth_dev_is_tx_hairpin_queue(struct rte_eth_dev *dev, uint16_t queue_id)
-{
-	if (dev->data->tx_queue_state[queue_id] == RTE_ETH_QUEUE_STATE_HAIRPIN)
-		return 1;
-	return 0;
-}
-
 int
 rte_eth_dev_pool_ops_supported(uint16_t port_id, const char *pool)
 {
@@ -6042,255 +5473,6 @@ rte_eth_dev_pool_ops_supported(uint16_t port_id, const char *pool)
 	return (*dev->dev_ops->pool_ops_supported)(dev, pool);
}
 
-/**
- * A set of values to describe the possible states of a switch domain.
- */
-enum rte_eth_switch_domain_state {
-	RTE_ETH_SWITCH_DOMAIN_UNUSED = 0,
-	RTE_ETH_SWITCH_DOMAIN_ALLOCATED
-};
-
-/**
- * Array of switch domains available for allocation. Array is sized to
- * RTE_MAX_ETHPORTS elements as there cannot be more active switch domains than
- * ethdev ports in a single process.
- */
-static struct rte_eth_dev_switch {
-	enum rte_eth_switch_domain_state state;
-} eth_dev_switch_domains[RTE_MAX_ETHPORTS];
-
-int
-rte_eth_switch_domain_alloc(uint16_t *domain_id)
-{
-	uint16_t i;
-
-	*domain_id = RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID;
-
-	for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
-		if (eth_dev_switch_domains[i].state ==
-			RTE_ETH_SWITCH_DOMAIN_UNUSED) {
-			eth_dev_switch_domains[i].state =
-				RTE_ETH_SWITCH_DOMAIN_ALLOCATED;
-			*domain_id = i;
-			return 0;
-		}
-	}
-
-	return -ENOSPC;
-}
-
-int
-rte_eth_switch_domain_free(uint16_t domain_id)
-{
-	if (domain_id == RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID ||
-		domain_id >= RTE_MAX_ETHPORTS)
-		return -EINVAL;
-
-	if (eth_dev_switch_domains[domain_id].state !=
-		RTE_ETH_SWITCH_DOMAIN_ALLOCATED)
-		return -EINVAL;
-
-	eth_dev_switch_domains[domain_id].state = RTE_ETH_SWITCH_DOMAIN_UNUSED;
-
-	return 0;
-}
-
-static int
-eth_dev_devargs_tokenise(struct rte_kvargs *arglist, const char *str_in)
-{
-	int state;
-	struct rte_kvargs_pair *pair;
-	char *letter;
-
-	arglist->str = strdup(str_in);
-	if (arglist->str == NULL)
-		return -ENOMEM;
-
-	letter = arglist->str;
-	state = 0;
-	arglist->count = 0;
-	pair = &arglist->pairs[0];
-	while (1) {
-		switch (state) {
-		case 0: /* Initial */
-			if (*letter == '=')
-				return -EINVAL;
-			else if (*letter == '\0')
-				return 0;
-
-			state = 1;
-			pair->key = letter;
-			/* fall-thru */
-
-		case 1: /* Parsing key */
-			if (*letter == '=') {
-				*letter = '\0';
-				pair->value = letter + 1;
-				state = 2;
-			} else if (*letter == ',' || *letter == '\0')
-				return -EINVAL;
-			break;
-
-
-		case 2: /* Parsing value */
-			if (*letter == '[')
-				state = 3;
-			else if (*letter == ',') {
-				*letter = '\0';
-				arglist->count++;
-				pair = &arglist->pairs[arglist->count];
-				state = 0;
-			} else if (*letter == '\0') {
-				letter--;
-				arglist->count++;
-				pair = &arglist->pairs[arglist->count];
-				state = 0;
-			}
-			break;
-
-		case 3: /* Parsing list */
-			if (*letter == ']')
-				state = 2;
-			else if (*letter == '\0')
-				return -EINVAL;
-			break;
-		}
-		letter++;
-	}
-}
-
-int
-rte_eth_devargs_parse(const char *dargs, struct rte_eth_devargs *eth_da)
-{
-	struct rte_kvargs args;
-	struct rte_kvargs_pair *pair;
-	unsigned int i;
-	int result = 0;
-
-	memset(eth_da, 0, sizeof(*eth_da));
-
-	result = eth_dev_devargs_tokenise(&args, dargs);
-	if (result < 0)
-		goto parse_cleanup;
-
-	for (i = 0; i < args.count; i++) {
-		pair = &args.pairs[i];
-		if (strcmp("representor", pair->key) == 0) {
-			if (eth_da->type != RTE_ETH_REPRESENTOR_NONE) {
-				RTE_LOG(ERR, EAL, "duplicated representor key: %s\n",
-					dargs);
-				result = -1;
-				goto parse_cleanup;
-			}
-			result = rte_eth_devargs_parse_representor_ports(
-					pair->value, eth_da);
-			if (result < 0)
-				goto parse_cleanup;
-		}
-	}
-
-parse_cleanup:
-	if (args.str)
-		free(args.str);
-
-	return result;
-}
-
-int
-rte_eth_representor_id_get(uint16_t port_id,
-			   enum rte_eth_representor_type type,
-			   int controller, int pf, int representor_port,
-			   uint16_t *repr_id)
-{
-	int ret, n, count;
-	uint32_t i;
-	struct rte_eth_representor_info *info = NULL;
-	size_t size;
-
-	if (type == RTE_ETH_REPRESENTOR_NONE)
-		return 0;
-	if (repr_id == NULL)
-		return -EINVAL;
-
-	/* Get PMD representor range info. */
-	ret = rte_eth_representor_info_get(port_id, NULL);
-	if (ret == -ENOTSUP && type == RTE_ETH_REPRESENTOR_VF &&
-	    controller == -1 && pf == -1) {
-		/* Direct mapping for legacy VF representor. */
-		*repr_id = representor_port;
-		return 0;
-	} else if (ret < 0) {
-		return ret;
-	}
-	n = ret;
-	size = sizeof(*info) + n * sizeof(info->ranges[0]);
-	info = calloc(1, size);
-	if (info == NULL)
-		return -ENOMEM;
-	info->nb_ranges_alloc = n;
-	ret = rte_eth_representor_info_get(port_id, info);
-	if (ret < 0)
-		goto out;
-
-	/* Default controller and pf to caller. */
-	if (controller == -1)
-		controller = info->controller;
-	if (pf == -1)
-		pf = info->pf;
-
-	/* Locate representor ID. */
-	ret = -ENOENT;
-	for (i = 0; i < info->nb_ranges; ++i) {
-		if (info->ranges[i].type != type)
-			continue;
-		if (info->ranges[i].controller != controller)
-			continue;
-		if (info->ranges[i].id_end < info->ranges[i].id_base) {
-			RTE_LOG(WARNING, EAL, "Port %hu invalid representor ID Range %u - %u, entry %d\n",
-				port_id, info->ranges[i].id_base,
-				info->ranges[i].id_end, i);
-			continue;
-
-		}
-		count = info->ranges[i].id_end - info->ranges[i].id_base + 1;
-		switch (info->ranges[i].type) {
-		case RTE_ETH_REPRESENTOR_PF:
-			if (pf < info->ranges[i].pf ||
-			    pf >= info->ranges[i].pf + count)
-				continue;
-			*repr_id = info->ranges[i].id_base +
-				   (pf - info->ranges[i].pf);
-			ret = 0;
-			goto out;
-		case RTE_ETH_REPRESENTOR_VF:
-			if (info->ranges[i].pf != pf)
-				continue;
-			if (representor_port < info->ranges[i].vf ||
-			    representor_port >= info->ranges[i].vf + count)
-				continue;
-			*repr_id = info->ranges[i].id_base +
-				   (representor_port - info->ranges[i].vf);
-			ret = 0;
-			goto out;
-		case RTE_ETH_REPRESENTOR_SF:
-			if (info->ranges[i].pf != pf)
-				continue;
-			if (representor_port < info->ranges[i].sf ||
-			    representor_port >= info->ranges[i].sf + count)
-				continue;
-			*repr_id = info->ranges[i].id_base +
-				   (representor_port - info->ranges[i].sf);
-			ret = 0;
-			goto out;
-		default:
-			break;
-		}
-	}
-out:
-	free(info);
-	return ret;
-}
-
 static int
 eth_dev_handle_port_list(const char *cmd __rte_unused,
 		const char *params __rte_unused,
@@ -6533,61 +5715,6 @@ eth_dev_handle_port_info(const char *cmd __rte_unused,
 	return 0;
 }
 
-int
-rte_eth_hairpin_queue_peer_update(uint16_t peer_port, uint16_t peer_queue,
-				  struct rte_hairpin_peer_info *cur_info,
-				  struct rte_hairpin_peer_info *peer_info,
-				  uint32_t direction)
-{
-	struct rte_eth_dev *dev;
-
-	/* Current queue information is not mandatory. */
-	if (peer_info == NULL)
-		return -EINVAL;
-
-	/* No need to check the validity again. */
-	dev = &rte_eth_devices[peer_port];
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->hairpin_queue_peer_update,
-				-ENOTSUP);
-
-	return (*dev->dev_ops->hairpin_queue_peer_update)(dev, peer_queue,
-			cur_info, peer_info, direction);
-}
-
-int
-rte_eth_hairpin_queue_peer_bind(uint16_t cur_port, uint16_t cur_queue,
-				struct rte_hairpin_peer_info *peer_info,
-				uint32_t direction)
-{
-	struct rte_eth_dev *dev;
-
-	if (peer_info == NULL)
-		return -EINVAL;
-
-	/* No need to check the validity again. */
-	dev = &rte_eth_devices[cur_port];
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->hairpin_queue_peer_bind,
-				-ENOTSUP);
-
-	return (*dev->dev_ops->hairpin_queue_peer_bind)(dev, cur_queue,
-			peer_info, direction);
-}
-
-int
-rte_eth_hairpin_queue_peer_unbind(uint16_t cur_port, uint16_t cur_queue,
-				  uint32_t direction)
-{
-	struct rte_eth_dev *dev;
-
-	/* No need to check the validity again. */
-	dev = &rte_eth_devices[cur_port];
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->hairpin_queue_peer_unbind,
-				-ENOTSUP);
-
-	return (*dev->dev_ops->hairpin_queue_peer_unbind)(dev, cur_queue,
-			direction);
-}
-
 int
 rte_eth_representor_info_get(uint16_t port_id,
 			     struct rte_eth_representor_info *info)
@@ -6722,34 +5849,6 @@ rte_eth_ip_reassembly_conf_set(uint16_t port_id,
 			(*dev->dev_ops->ip_reassembly_conf_set)(dev, conf));
 }
 
-int
-rte_eth_ip_reassembly_dynfield_register(int *field_offset, int *flag_offset)
-{
-	static const struct rte_mbuf_dynfield field_desc = {
-		.name = RTE_MBUF_DYNFIELD_IP_REASSEMBLY_NAME,
-		.size = sizeof(rte_eth_ip_reassembly_dynfield_t),
-		.align = __alignof__(rte_eth_ip_reassembly_dynfield_t),
-	};
-	static const struct rte_mbuf_dynflag ip_reassembly_dynflag = {
-		.name = RTE_MBUF_DYNFLAG_IP_REASSEMBLY_INCOMPLETE_NAME,
-	};
-	int offset;
-
-	offset = rte_mbuf_dynfield_register(&field_desc);
-	if (offset < 0)
-		return -1;
-	if (field_offset != NULL)
-		*field_offset = offset;
-
-	offset = rte_mbuf_dynflag_register(&ip_reassembly_dynflag);
-	if (offset < 0)
-		return -1;
-	if (flag_offset != NULL)
-		*flag_offset = offset;
-
-	return 0;
-}
-
 int
 rte_eth_dev_priv_dump(uint16_t port_id, FILE *file)
 {