From patchwork Thu Jun 6 17:33:24 2019
X-Patchwork-Submitter: Andrew Rybchenko
X-Patchwork-Id: 54515
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Andrew Rybchenko
CC: Georgiy Levashov
Date: Thu, 6 Jun 2019 18:33:24 +0100
Message-ID: <1559842405-7987-1-git-send-email-arybchenko@solarflare.com>
Subject: [dpdk-dev] [PATCH 1/2] net/sfc: add Rx interrupts support for efx datapath

From: Georgiy Levashov

When Rx interrupts are disabled, we simply stop rearming the event queue
the next time the interrupt fires. So, the next packet will still trigger
an interrupt (if one has not already fired after the previous Rx burst
processing).

Signed-off-by: Georgiy Levashov
Signed-off-by: Andrew Rybchenko
---
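For context, the feature added by this patch is consumed through the generic
ethdev Rx interrupt API together with EAL epoll. Below is a minimal sketch of
the application side; the port/queue numbers, burst size and the helper name
rx_wait_and_poll() are illustrative assumptions, error handling is omitted,
and this code is not part of the patch:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/*
 * Block until traffic arrives on the given Rx queue, then switch back to
 * polling mode and drain the queue.  Assumes the port was configured with
 * dev_conf.intr_conf.rxq = 1 and has already been started.
 */
static void
rx_wait_and_poll(uint16_t port_id, uint16_t queue_id)
{
	struct rte_epoll_event event;
	struct rte_mbuf *pkts[32];
	uint16_t nb_rx, i;

	/* Register the queue interrupt with the per-thread epoll instance */
	rte_eth_dev_rx_intr_ctl_q(port_id, queue_id, RTE_EPOLL_PER_THREAD,
				  RTE_INTR_EVENT_ADD, NULL);

	/* Arm the interrupt (primes the event queue) and sleep */
	rte_eth_dev_rx_intr_enable(port_id, queue_id);
	rte_epoll_wait(RTE_EPOLL_PER_THREAD, &event, 1, -1);

	/* Back to polling: stop rearming and drain whatever is pending */
	rte_eth_dev_rx_intr_disable(port_id, queue_id);
	do {
		nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, 32);
		for (i = 0; i < nb_rx; i++)
			rte_pktmbuf_free(pkts[i]); /* real code would process them */
	} while (nb_rx > 0);
}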
 doc/guides/nics/features/sfc_efx.ini   |  1 +
 doc/guides/nics/sfc_efx.rst            |  4 +-
 doc/guides/rel_notes/release_19_08.rst |  6 +++
 drivers/net/sfc/sfc.c                  |  3 +-
 drivers/net/sfc/sfc.h                  |  1 +
 drivers/net/sfc/sfc_dp_rx.h            | 10 +++++
 drivers/net/sfc/sfc_ethdev.c           | 28 ++++++++++++++
 drivers/net/sfc/sfc_ev.c               |  3 +-
 drivers/net/sfc/sfc_ev.h               |  1 +
 drivers/net/sfc/sfc_intr.c             | 37 ++++++++++++++++--
 drivers/net/sfc/sfc_rx.c               | 68 ++++++++++++++++++++++++++++++++--
 drivers/net/sfc/sfc_rx.h               |  1 +
 12 files changed, 153 insertions(+), 10 deletions(-)

diff --git a/doc/guides/nics/features/sfc_efx.ini b/doc/guides/nics/features/sfc_efx.ini
index d1aa833..eca1427 100644
--- a/doc/guides/nics/features/sfc_efx.ini
+++ b/doc/guides/nics/features/sfc_efx.ini
@@ -7,6 +7,7 @@
 Speed capabilities   = Y
 Link status          = Y
 Link status event    = Y
+Rx interrupt         = Y
 Fast mbuf free       = Y
 Queue start/stop     = Y
 Runtime Rx queue setup = Y
diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
index 6d01d05..5f35bc5 100644
--- a/doc/guides/nics/sfc_efx.rst
+++ b/doc/guides/nics/sfc_efx.rst
@@ -82,6 +82,8 @@ SFC EFX PMD has support for:
 
 - Scattered Rx DMA for packet that are larger that a single Rx descriptor
 
+- Receive queue interrupts
+
 - Deferred receive and transmit queue start
 
 - Transmit VLAN insertion (if running firmware variant supports it)
@@ -96,8 +98,6 @@ Non-supported Features
 
 The features not yet supported include:
 
-- Receive queue interrupts
-
 - Priority-based flow control
 
 - Configurable RX CRC stripping (always stripped)
diff --git a/doc/guides/rel_notes/release_19_08.rst b/doc/guides/rel_notes/release_19_08.rst
index 8fcd4a7..72d74c5 100644
--- a/doc/guides/rel_notes/release_19_08.rst
+++ b/doc/guides/rel_notes/release_19_08.rst
@@ -66,6 +66,12 @@ New Features
   Added the new Shared Memory Packet Interface (``memif``) PMD.
   See the :doc:`../nics/memif` guide for more details on this new driver.
 
+* **Updated Solarflare network PMD.**
+
+  Updated the Solarflare ``sfc_efx`` driver with changes including:
+
+  * Added support for Rx interrupts.
+
 
 Removed Items
 -------------
diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c
index dea57b2..141c767 100644
--- a/drivers/net/sfc/sfc.c
+++ b/drivers/net/sfc/sfc.c
@@ -148,7 +148,8 @@
 		rc = EINVAL;
 	}
 
-	if (conf->intr_conf.rxq != 0) {
+	if (conf->intr_conf.rxq != 0 &&
+	    (sa->priv.dp_rx->features & SFC_DP_RX_FEAT_INTR) == 0) {
 		sfc_err(sa, "Receive queue interrupt not supported");
 		rc = EINVAL;
 	}
diff --git a/drivers/net/sfc/sfc.h b/drivers/net/sfc/sfc.h
index dde25c5..cc52228 100644
--- a/drivers/net/sfc/sfc.h
+++ b/drivers/net/sfc/sfc.h
@@ -108,6 +108,7 @@ struct sfc_intr {
 	efx_intr_type_t			type;
 	rte_intr_callback_fn		handler;
 	boolean_t			lsc_intr;
+	boolean_t			rxq_intr;
 };
 
 struct sfc_rxq;
diff --git a/drivers/net/sfc/sfc_dp_rx.h b/drivers/net/sfc/sfc_dp_rx.h
index a374311..73e5857 100644
--- a/drivers/net/sfc/sfc_dp_rx.h
+++ b/drivers/net/sfc/sfc_dp_rx.h
@@ -74,6 +74,8 @@ struct sfc_dp_rx_qcreate_info {
 	/** DMA-mapped Rx descriptors ring */
 	void			*rxq_hw_ring;
 
+	/** Event queue index in hardware */
+	unsigned int		evq_hw_index;
 	/** Associated event queue size */
 	unsigned int		evq_entries;
 	/** Hardware event ring */
@@ -193,6 +195,11 @@ typedef bool (sfc_dp_rx_qrx_ps_ev_t)(struct sfc_dp_rxq *dp_rxq,
 /** Check Rx descriptor status */
 typedef int (sfc_dp_rx_qdesc_status_t)(struct sfc_dp_rxq *dp_rxq,
 				       uint16_t offset);
+/** Enable Rx interrupts */
+typedef int (sfc_dp_rx_intr_enable_t)(struct sfc_dp_rxq *dp_rxq);
+
+/** Disable Rx interrupts */
+typedef int (sfc_dp_rx_intr_disable_t)(struct sfc_dp_rxq *dp_rxq);
 
 /** Receive datapath definition */
 struct sfc_dp_rx {
@@ -202,6 +209,7 @@ struct sfc_dp_rx {
 #define SFC_DP_RX_FEAT_MULTI_PROCESS	0x1
 #define SFC_DP_RX_FEAT_FLOW_FLAG	0x2
 #define SFC_DP_RX_FEAT_FLOW_MARK	0x4
+#define SFC_DP_RX_FEAT_INTR		0x8
 	/**
 	 * Rx offload capabilities supported by the datapath on device
 	 * level only if HW/FW supports it.
@@ -225,6 +233,8 @@ struct sfc_dp_rx {
 	sfc_dp_rx_supported_ptypes_get_t	*supported_ptypes_get;
 	sfc_dp_rx_qdesc_npending_t		*qdesc_npending;
 	sfc_dp_rx_qdesc_status_t		*qdesc_status;
+	sfc_dp_rx_intr_enable_t			*intr_enable;
+	sfc_dp_rx_intr_disable_t		*intr_disable;
 	eth_rx_burst_t				pkt_burst;
 };
 
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 661432e..be185d5 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1713,6 +1713,32 @@ enum sfc_udp_tunnel_op_e {
 	return sap->dp_rx->pool_ops_supported(pool);
 }
 
+static int
+sfc_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+	const struct sfc_adapter_priv *sap = sfc_adapter_priv_by_eth_dev(dev);
+	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
+	struct sfc_rxq_info *rxq_info;
+
+	SFC_ASSERT(queue_id < sas->rxq_count);
+	rxq_info = &sas->rxq_info[queue_id];
+
+	return sap->dp_rx->intr_enable(rxq_info->dp);
+}
+
+static int
+sfc_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+	const struct sfc_adapter_priv *sap = sfc_adapter_priv_by_eth_dev(dev);
+	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
+	struct sfc_rxq_info *rxq_info;
+
+	SFC_ASSERT(queue_id < sas->rxq_count);
+	rxq_info = &sas->rxq_info[queue_id];
+
+	return sap->dp_rx->intr_disable(rxq_info->dp);
+}
+
 static const struct eth_dev_ops sfc_eth_dev_ops = {
 	.dev_configure			= sfc_dev_configure,
 	.dev_start			= sfc_dev_start,
@@ -1743,6 +1769,8 @@ enum sfc_udp_tunnel_op_e {
 	.rx_descriptor_done		= sfc_rx_descriptor_done,
 	.rx_descriptor_status		= sfc_rx_descriptor_status,
 	.tx_descriptor_status		= sfc_tx_descriptor_status,
+	.rx_queue_intr_enable		= sfc_rx_queue_intr_enable,
+	.rx_queue_intr_disable		= sfc_rx_queue_intr_disable,
 	.tx_queue_setup			= sfc_tx_queue_setup,
 	.tx_queue_release		= sfc_tx_queue_release,
 	.flow_ctrl_get			= sfc_flow_ctrl_get,
diff --git a/drivers/net/sfc/sfc_ev.c b/drivers/net/sfc/sfc_ev.c
index 5992833..0f216da 100644
--- a/drivers/net/sfc/sfc_ev.c
+++ b/drivers/net/sfc/sfc_ev.c
@@ -602,7 +602,8 @@
 	(void)memset((void *)esmp->esm_base, 0xff,
 		     efx_evq_size(sa->nic, evq->entries));
 
-	if (sa->intr.lsc_intr && hw_index == sa->mgmt_evq_index)
+	if ((sa->intr.lsc_intr && hw_index == sa->mgmt_evq_index) ||
+	    (sa->intr.rxq_intr && evq->dp_rxq != NULL))
 		evq_flags |= EFX_EVQ_FLAGS_NOTIFY_INTERRUPT;
 	else
 		evq_flags |= EFX_EVQ_FLAGS_NOTIFY_DISABLED;
diff --git a/drivers/net/sfc/sfc_ev.h b/drivers/net/sfc/sfc_ev.h
index 5d070b1..2c401c7 100644
--- a/drivers/net/sfc/sfc_ev.h
+++ b/drivers/net/sfc/sfc_ev.h
@@ -46,6 +46,7 @@ struct sfc_evq {
 	efx_evq_t			*common;
 	const efx_ev_callbacks_t	*callbacks;
 	unsigned int			read_ptr;
+	unsigned int			read_ptr_primed;
 	boolean_t			exception;
 	efsys_mem_t			mem;
 	struct sfc_dp_rxq		*dp_rxq;
diff --git a/drivers/net/sfc/sfc_intr.c b/drivers/net/sfc/sfc_intr.c
index 0fbcd61..1f4969b 100644
--- a/drivers/net/sfc/sfc_intr.c
+++ b/drivers/net/sfc/sfc_intr.c
@@ -162,6 +162,27 @@
 	intr_handle = &pci_dev->intr_handle;
 
 	if (intr->handler != NULL) {
+		if (intr->rxq_intr && rte_intr_cap_multiple(intr_handle)) {
+			uint32_t intr_vector;
+
+			intr_vector = sa->eth_dev->data->nb_rx_queues;
+			rc = rte_intr_efd_enable(intr_handle, intr_vector);
+			if (rc != 0)
+				goto fail_rte_intr_efd_enable;
+		}
+		if (rte_intr_dp_is_en(intr_handle)) {
+			intr_handle->intr_vec =
+				rte_calloc("intr_vec",
+					   sa->eth_dev->data->nb_rx_queues, sizeof(int),
+					   0);
+			if (intr_handle->intr_vec == NULL) {
+				sfc_err(sa,
+					"Failed to allocate %d rx_queues intr_vec",
+					sa->eth_dev->data->nb_rx_queues);
+				goto fail_intr_vector_alloc;
+			}
+		}
+
 		sfc_log_init(sa, "rte_intr_callback_register");
 		rc = rte_intr_callback_register(intr_handle, intr->handler,
 						(void *)sa);
@@ -202,6 +223,12 @@
 	rte_intr_callback_unregister(intr_handle, intr->handler, (void *)sa);
 
 fail_rte_intr_cb_reg:
+	rte_free(intr_handle->intr_vec);
+
+fail_intr_vector_alloc:
+	rte_intr_efd_disable(intr_handle);
+
+fail_rte_intr_efd_enable:
 	efx_intr_fini(sa->nic);
 
 fail_intr_init:
@@ -224,6 +251,10 @@
 	efx_intr_disable(sa->nic);
 
 	intr_handle = &pci_dev->intr_handle;
+
+	rte_free(intr_handle->intr_vec);
+	rte_intr_efd_disable(intr_handle);
+
 	if (rte_intr_disable(intr_handle) != 0)
 		sfc_err(sa, "cannot disable interrupts");
 
@@ -250,10 +281,10 @@
 	intr->handler = NULL;
 	intr->lsc_intr = (sa->eth_dev->data->dev_conf.intr_conf.lsc != 0);
 
-	if (!intr->lsc_intr) {
-		sfc_notice(sa, "LSC tracking using interrupts is disabled");
+	intr->rxq_intr = (sa->eth_dev->data->dev_conf.intr_conf.rxq != 0);
+
+	if (!intr->lsc_intr && !intr->rxq_intr)
 		goto done;
-	}
 
 	switch (intr->type) {
 	case EFX_INTR_MESSAGE:
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 70e7614..23dff09 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -52,6 +52,19 @@
 	rxq_info->state &= ~SFC_RXQ_FLUSHING;
 }
 
+static int
+sfc_efx_rx_qprime(struct sfc_efx_rxq *rxq)
+{
+	int rc = 0;
+
+	if (rxq->evq->read_ptr_primed != rxq->evq->read_ptr) {
+		rc = efx_ev_qprime(rxq->evq->common, rxq->evq->read_ptr);
+		if (rc == 0)
+			rxq->evq->read_ptr_primed = rxq->evq->read_ptr;
+	}
+	return rc;
+}
+
 static void
 sfc_efx_rx_qrefill(struct sfc_efx_rxq *rxq)
 {
@@ -306,6 +319,9 @@
 
 	sfc_efx_rx_qrefill(rxq);
 
+	if (rxq->flags & SFC_EFX_RXQ_FLAG_INTR_EN)
+		sfc_efx_rx_qprime(rxq);
+
 	return done_pkts;
 }
 
@@ -493,6 +509,12 @@ struct sfc_rxq *
 	rte_free(rxq);
 }
 
+
+/* Use qstop and qstart functions in the case of qstart failure */
+static sfc_dp_rx_qstop_t sfc_efx_rx_qstop;
+static sfc_dp_rx_qpurge_t sfc_efx_rx_qpurge;
+
+
 static sfc_dp_rx_qstart_t sfc_efx_rx_qstart;
 static int
 sfc_efx_rx_qstart(struct sfc_dp_rxq *dp_rxq,
@@ -501,6 +523,7 @@ struct sfc_rxq *
 	/* libefx-based datapath is specific to libefx-based PMD */
 	struct sfc_efx_rxq *rxq = sfc_efx_rxq_by_dp_rxq(dp_rxq);
 	struct sfc_rxq *crxq = sfc_rxq_by_dp_rxq(dp_rxq);
+	int rc;
 
 	rxq->common = crxq->common;
 
@@ -510,10 +533,20 @@ struct sfc_rxq *
 	rxq->flags |= (SFC_EFX_RXQ_FLAG_STARTED |
 		       SFC_EFX_RXQ_FLAG_RUNNING);
 
+	if (rxq->flags & SFC_EFX_RXQ_FLAG_INTR_EN) {
+		rc = sfc_efx_rx_qprime(rxq);
+		if (rc != 0)
+			goto fail_rx_qprime;
+	}
+
 	return 0;
+
+fail_rx_qprime:
+	sfc_efx_rx_qstop(dp_rxq, NULL);
+	sfc_efx_rx_qpurge(dp_rxq);
+	return rc;
 }
 
-static sfc_dp_rx_qstop_t sfc_efx_rx_qstop;
 static void
 sfc_efx_rx_qstop(struct sfc_dp_rxq *dp_rxq,
 		 __rte_unused unsigned int *evq_read_ptr)
@@ -528,7 +561,6 @@ struct sfc_rxq *
 	 */
 }
 
-static sfc_dp_rx_qpurge_t sfc_efx_rx_qpurge;
 static void
 sfc_efx_rx_qpurge(struct sfc_dp_rxq *dp_rxq)
 {
@@ -551,13 +583,40 @@ struct sfc_rxq *
 	rxq->flags &= ~SFC_EFX_RXQ_FLAG_STARTED;
 }
 
+static sfc_dp_rx_intr_enable_t sfc_efx_rx_intr_enable;
+static int
+sfc_efx_rx_intr_enable(struct sfc_dp_rxq *dp_rxq)
+{
+	struct sfc_efx_rxq *rxq = sfc_efx_rxq_by_dp_rxq(dp_rxq);
+	int rc = 0;
+
+	rxq->flags |= SFC_EFX_RXQ_FLAG_INTR_EN;
+	if (rxq->flags & SFC_EFX_RXQ_FLAG_STARTED) {
+		rc = sfc_efx_rx_qprime(rxq);
+		if (rc != 0)
+			rxq->flags &= ~SFC_EFX_RXQ_FLAG_INTR_EN;
+	}
+	return rc;
+}
+
+static sfc_dp_rx_intr_disable_t sfc_efx_rx_intr_disable;
+static int
+sfc_efx_rx_intr_disable(struct sfc_dp_rxq *dp_rxq)
+{
+	struct sfc_efx_rxq *rxq = sfc_efx_rxq_by_dp_rxq(dp_rxq);
+
+	/* Cannot disarm, just disable rearm */
+	rxq->flags &= ~SFC_EFX_RXQ_FLAG_INTR_EN;
+	return 0;
+}
+
 struct sfc_dp_rx sfc_efx_rx = {
 	.dp = {
 		.name		= SFC_KVARG_DATAPATH_EFX,
 		.type		= SFC_DP_RX,
 		.hw_fw_caps	= 0,
 	},
-	.features		= 0,
+	.features		= SFC_DP_RX_FEAT_INTR,
 	.dev_offload_capa	= DEV_RX_OFFLOAD_CHECKSUM,
 	.queue_offload_capa	= DEV_RX_OFFLOAD_SCATTER,
 	.qsize_up_rings		= sfc_efx_rx_qsize_up_rings,
@@ -569,6 +628,8 @@ struct sfc_dp_rx sfc_efx_rx = {
 	.supported_ptypes_get	= sfc_efx_supported_ptypes_get,
 	.qdesc_npending		= sfc_efx_rx_qdesc_npending,
 	.qdesc_status		= sfc_efx_rx_qdesc_status,
+	.intr_enable		= sfc_efx_rx_intr_enable,
+	.intr_disable		= sfc_efx_rx_intr_disable,
 	.pkt_burst		= sfc_efx_recv_pkts,
 };
 
@@ -1094,6 +1155,7 @@ struct sfc_dp_rx sfc_efx_rx = {
 
 	info.rxq_entries = rxq_info->entries;
 	info.rxq_hw_ring = rxq->mem.esm_base;
+	info.evq_hw_index = sfc_evq_index_by_rxq_sw_index(sa, sw_index);
 	info.evq_entries = evq_entries;
 	info.evq_hw_ring = evq->mem.esm_base;
 	info.hw_index = rxq->hw_index;
diff --git a/drivers/net/sfc/sfc_rx.h b/drivers/net/sfc/sfc_rx.h
index aca0925..42b16e2 100644
--- a/drivers/net/sfc/sfc_rx.h
+++ b/drivers/net/sfc/sfc_rx.h
@@ -73,6 +73,7 @@ struct sfc_efx_rxq {
 #define SFC_EFX_RXQ_FLAG_STARTED	0x1
 #define SFC_EFX_RXQ_FLAG_RUNNING	0x2
 #define SFC_EFX_RXQ_FLAG_RSS_HASH	0x4
+#define SFC_EFX_RXQ_FLAG_INTR_EN	0x8
 	unsigned int			ptr_mask;
 	unsigned int			pending;
 	unsigned int			completed;


From patchwork Thu Jun 6 17:33:25 2019
X-Patchwork-Submitter: Andrew Rybchenko
X-Patchwork-Id: 54516
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Andrew Rybchenko
CC: Georgiy Levashov
Date: Thu, 6 Jun 2019 18:33:25 +0100
Message-ID: <1559842405-7987-2-git-send-email-arybchenko@solarflare.com>
In-Reply-To: <1559842405-7987-1-git-send-email-arybchenko@solarflare.com>
References: <1559842405-7987-1-git-send-email-arybchenko@solarflare.com>
Subject: [dpdk-dev] [PATCH 2/2] net/sfc: add Rx interrupts support for ef10 datapath

From: Georgiy Levashov

Similar to the efx datapath support, disabling Rx interrupts just avoids
rearming the event queue the next time.

Signed-off-by: Georgiy Levashov
Signed-off-by: Andrew Rybchenko
---
 drivers/net/sfc/sfc_ef10.h    | 12 +++++++++++
 drivers/net/sfc/sfc_ef10_rx.c | 48 ++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 59 insertions(+), 1 deletion(-)

diff --git a/drivers/net/sfc/sfc_ef10.h b/drivers/net/sfc/sfc_ef10.h
index a73e0bd..deb134d 100644
--- a/drivers/net/sfc/sfc_ef10.h
+++ b/drivers/net/sfc/sfc_ef10.h
@@ -109,6 +109,18 @@
 	rte_write32(dword.ed_u32[0], doorbell);
 }
 
+static inline void
+sfc_ef10_ev_qprime(volatile void *qprime, unsigned int read_ptr,
+		   unsigned int ptr_mask)
+{
+	efx_dword_t dword;
+
+	EFX_POPULATE_DWORD_1(dword, ERF_DZ_EVQ_RPTR, read_ptr & ptr_mask);
+
+	rte_write32_relaxed(dword.ed_u32[0], qprime);
+	rte_wmb();
+}
+
 const uint32_t *
 sfc_ef10_supported_ptypes_get(uint32_t tunnel_encaps);
 
diff --git a/drivers/net/sfc/sfc_ef10_rx.c b/drivers/net/sfc/sfc_ef10_rx.c
index b294b43..f2fc6e7 100644
--- a/drivers/net/sfc/sfc_ef10_rx.c
+++ b/drivers/net/sfc/sfc_ef10_rx.c
@@ -56,14 +56,17 @@ struct sfc_ef10_rxq {
 #define SFC_EF10_RXQ_NOT_RUNNING	0x2
 #define SFC_EF10_RXQ_EXCEPTION		0x4
 #define SFC_EF10_RXQ_RSS_HASH		0x8
+#define SFC_EF10_RXQ_FLAG_INTR_EN	0x10
 	unsigned int			ptr_mask;
 	unsigned int			pending;
 	unsigned int			completed;
 	unsigned int			evq_read_ptr;
+	unsigned int			evq_read_ptr_primed;
 	efx_qword_t			*evq_hw_ring;
 	struct sfc_ef10_rx_sw_desc	*sw_ring;
 	uint64_t			rearm_data;
 	struct rte_mbuf			*scatter_pkt;
+	volatile void			*evq_prime;
 	uint16_t			prefix_size;
 
 	/* Used on refill */
@@ -86,6 +89,13 @@ struct sfc_ef10_rxq {
 }
 
 static void
+sfc_ef10_rx_qprime(struct sfc_ef10_rxq *rxq)
+{
+	sfc_ef10_ev_qprime(rxq->evq_prime, rxq->evq_read_ptr, rxq->ptr_mask);
+	rxq->evq_read_ptr_primed = rxq->evq_read_ptr;
+}
+
+static void
 sfc_ef10_rx_qrefill(struct sfc_ef10_rxq *rxq)
 {
 	const unsigned int ptr_mask = rxq->ptr_mask;
@@ -436,6 +446,10 @@ struct sfc_ef10_rxq {
 	/* It is not a problem if we refill in the case of exception */
 	sfc_ef10_rx_qrefill(rxq);
 
+	if ((rxq->flags & SFC_EF10_RXQ_FLAG_INTR_EN) &&
+	    rxq->evq_read_ptr_primed != rxq->evq_read_ptr)
+		sfc_ef10_rx_qprime(rxq);
+
 done:
 	return nb_pkts - (rx_pkts_end - rx_pkts);
 }
@@ -653,6 +667,9 @@ struct sfc_ef10_rxq {
 	rxq->doorbell = (volatile uint8_t *)info->mem_bar +
 			ER_DZ_RX_DESC_UPD_REG_OFST +
 			(info->hw_index << info->vi_window_shift);
+	rxq->evq_prime = (volatile uint8_t *)info->mem_bar +
+			 ER_DZ_EVQ_RPTR_REG_OFST +
+			 (info->evq_hw_index << info->vi_window_shift);
 
 	*dp_rxqp = &rxq->dp;
 	return 0;
@@ -692,6 +709,9 @@ struct sfc_ef10_rxq {
 	rxq->flags |= SFC_EF10_RXQ_STARTED;
 	rxq->flags &= ~(SFC_EF10_RXQ_NOT_RUNNING | SFC_EF10_RXQ_EXCEPTION);
 
+	if (rxq->flags & SFC_EF10_RXQ_FLAG_INTR_EN)
+		sfc_ef10_rx_qprime(rxq);
+
 	return 0;
 }
 
@@ -744,13 +764,37 @@ struct sfc_ef10_rxq {
 	rxq->flags &= ~SFC_EF10_RXQ_STARTED;
 }
 
+static sfc_dp_rx_intr_enable_t sfc_ef10_rx_intr_enable;
+static int
+sfc_ef10_rx_intr_enable(struct sfc_dp_rxq *dp_rxq)
+{
+	struct sfc_ef10_rxq *rxq = sfc_ef10_rxq_by_dp_rxq(dp_rxq);
+
+	rxq->flags |= SFC_EF10_RXQ_FLAG_INTR_EN;
+	if (rxq->flags & SFC_EF10_RXQ_STARTED)
+		sfc_ef10_rx_qprime(rxq);
+	return 0;
+}
+
+static sfc_dp_rx_intr_disable_t sfc_ef10_rx_intr_disable;
+static int
+sfc_ef10_rx_intr_disable(struct sfc_dp_rxq *dp_rxq)
+{
+	struct sfc_ef10_rxq *rxq = sfc_ef10_rxq_by_dp_rxq(dp_rxq);
+
+	/* Cannot disarm, just disable rearm */
+	rxq->flags &= ~SFC_EF10_RXQ_FLAG_INTR_EN;
+	return 0;
+}
+
 struct sfc_dp_rx sfc_ef10_rx = {
 	.dp = {
 		.name		= SFC_KVARG_DATAPATH_EF10,
 		.type		= SFC_DP_RX,
 		.hw_fw_caps	= SFC_DP_HW_FW_CAP_EF10,
 	},
-	.features		= SFC_DP_RX_FEAT_MULTI_PROCESS,
+	.features		= SFC_DP_RX_FEAT_MULTI_PROCESS |
+				  SFC_DP_RX_FEAT_INTR,
 	.dev_offload_capa	= DEV_RX_OFFLOAD_CHECKSUM |
 				  DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM,
 	.queue_offload_capa	= DEV_RX_OFFLOAD_SCATTER,
@@ -765,5 +809,7 @@ struct sfc_dp_rx sfc_ef10_rx = {
 	.supported_ptypes_get	= sfc_ef10_supported_ptypes_get,
 	.qdesc_npending		= sfc_ef10_rx_qdesc_npending,
 	.qdesc_status		= sfc_ef10_rx_qdesc_status,
+	.intr_enable		= sfc_ef10_rx_intr_enable,
+	.intr_disable		= sfc_ef10_rx_intr_disable,
 	.pkt_burst		= sfc_ef10_recv_pkts,
 };
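Both patches share the same rearm-gating idea: a primed event queue cannot be
disarmed, so "disable" only stops re-priming, and after each Rx burst the
event queue is re-primed only when interrupts are enabled and the read pointer
has actually advanced. The following is a simplified, illustrative sketch of
that pattern; struct rxq_sketch, rxq_prime(), rxq_after_burst() and
rxq_intr_disable() are invented names for illustration, not the actual driver
code:

#include <stdbool.h>

struct rxq_sketch {
	bool intr_en;            /* analogue of SFC_*_RXQ_FLAG_INTR_EN */
	unsigned int read_ptr;   /* event queue read pointer */
	unsigned int primed;     /* read pointer value last primed */
};

static void
rxq_prime(struct rxq_sketch *rxq)
{
	/* Hardware-specific in the driver: efx_ev_qprime() for the efx
	 * datapath, an EVQ_RPTR doorbell write for ef10. */
	rxq->primed = rxq->read_ptr;
}

static void
rxq_after_burst(struct rxq_sketch *rxq)
{
	/* Rearm only if enabled and new events have been consumed */
	if (rxq->intr_en && rxq->primed != rxq->read_ptr)
		rxq_prime(rxq);
}

static void
rxq_intr_disable(struct rxq_sketch *rxq)
{
	/* Cannot disarm, just disable rearm: one more interrupt may fire */
	rxq->intr_en = false;
}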