From patchwork Tue Oct 19 10:56:24 2021
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 102189
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Michal Krawczyk
To: ferruh.yigit@intel.com
Cc: dev@dpdk.org, upstream@semihalf.com, shaibran@amazon.com,
 ndagan@amazon.com, igorch@amazon.com, Michal Krawczyk
Date: Tue, 19 Oct 2021 12:56:24 +0200
Message-Id: <20211019105629.11731-3-mk@semihalf.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20211019105629.11731-1-mk@semihalf.com>
References: <20211015162701.16324-1-mk@semihalf.com>
 <20211019105629.11731-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 2/7] net/ena: support Tx/Rx free thresholds

The caller can pass a Tx or Rx free threshold value in the configuration
structure for each ring. It determines when the Tx/Rx function should
start cleaning up or refilling the descriptors. The ENA PMD was ignoring
this value and doing its own calculations.

Now the user can configure the PMD's behavior using this parameter. If
the value is not set, the PMD will fall back to the old behavior and use
its own threshold value.

The default value is not reported by ena_infos_get(), as it is determined
dynamically, depending on the requested ring size.

Note that the NULL check for tx_conf was removed from
ena_tx_queue_setup(): at this point the configuration is either provided
by the user or filled in with the defaults, which is handled by the upper
(rte_ethdev) layer.

The Tx threshold shouldn't be used as the Tx cleanup budget, as it can be
too small for the burst size in use. The PMD now tries to release mbufs
from the ring until it is depleted.

Signed-off-by: Michal Krawczyk
Reviewed-by: Igor Chauskin
Reviewed-by: Shai Brandes
---
v2:
* Fix calculation of the default tx_free_thresh if it wasn't provided by
  the user: RTE_MIN was replaced with RTE_MAX.

 doc/guides/rel_notes/release_21_11.rst |  7 ++++
 drivers/net/ena/ena_ethdev.c           | 44 ++++++++++++++++++--------
 drivers/net/ena/ena_ethdev.h           |  5 +++
 3 files changed, 42 insertions(+), 14 deletions(-)

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index bd6a388c9d..8341d979aa 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -102,6 +102,13 @@ New Features

   * Disabled secondary process support.

+* **Updated Amazon ENA PMD.**
+
+  Updated the Amazon ENA PMD. The new driver version (v2.5.0) introduced
+  bug fixes and improvements, including:
+
+  * Support for the tx_free_thresh and rx_free_thresh configuration parameters.
+
 * **Updated Broadcom bnxt PMD.**

   * Added flow offload support for Thor.
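For context, an application opts in to these thresholds through the
generic ethdev configuration structures rather than any ENA-specific
API. A minimal sketch follows; the helper name setup_ena_queues(), the
queue index, ring size and threshold values are illustrative only, not
part of this patch:

#include <rte_ethdev.h>

/* Configure one Tx and one Rx queue with explicit free thresholds.
 * Error handling is trimmed for brevity. */
static int
setup_ena_queues(uint16_t port_id, struct rte_mempool *mb_pool)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_txconf txconf;
	struct rte_eth_rxconf rxconf;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	/* Start from the driver defaults, then override the thresholds. */
	txconf = dev_info.default_txconf;
	rxconf = dev_info.default_rxconf;

	/* Tx: start cleanup once fewer than 960 descriptors are free
	 * (on a 1024-entry ring). Rx: refill once at least 128 are free. */
	txconf.tx_free_thresh = 960;
	rxconf.rx_free_thresh = 128;

	ret = rte_eth_tx_queue_setup(port_id, 0, 1024, SOCKET_ID_ANY, &txconf);
	if (ret != 0)
		return ret;

	return rte_eth_rx_queue_setup(port_id, 0, 1024, SOCKET_ID_ANY,
				      &rxconf, mb_pool);
}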
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 197cb7ecd4..fe9bac8888 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -1128,6 +1128,7 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev,
 	struct ena_ring *txq = NULL;
 	struct ena_adapter *adapter = dev->data->dev_private;
 	unsigned int i;
+	uint16_t dyn_thresh;

 	txq = &adapter->tx_ring[queue_idx];

@@ -1194,10 +1195,18 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev,
 	for (i = 0; i < txq->ring_size; i++)
 		txq->empty_tx_reqs[i] = i;

-	if (tx_conf != NULL) {
-		txq->offloads =
-			tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
+	txq->offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
+
+	/* Check if caller provided the Tx cleanup threshold value. */
+	if (tx_conf->tx_free_thresh != 0) {
+		txq->tx_free_thresh = tx_conf->tx_free_thresh;
+	} else {
+		dyn_thresh = txq->ring_size -
+			txq->ring_size / ENA_REFILL_THRESH_DIVIDER;
+		txq->tx_free_thresh = RTE_MAX(dyn_thresh,
+			txq->ring_size - ENA_REFILL_THRESH_PACKET);
 	}
+
 	/* Store pointer to this queue in upper layer */
 	txq->configured = 1;
 	dev->data->tx_queues[queue_idx] = txq;
@@ -1216,6 +1225,7 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev,
 	struct ena_ring *rxq = NULL;
 	size_t buffer_size;
 	int i;
+	uint16_t dyn_thresh;

 	rxq = &adapter->rx_ring[queue_idx];
 	if (rxq->configured) {
@@ -1295,6 +1305,14 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->offloads = rx_conf->offloads |
 		dev->data->dev_conf.rxmode.offloads;

+	if (rx_conf->rx_free_thresh != 0) {
+		rxq->rx_free_thresh = rx_conf->rx_free_thresh;
+	} else {
+		dyn_thresh = rxq->ring_size / ENA_REFILL_THRESH_DIVIDER;
+		rxq->rx_free_thresh = RTE_MIN(dyn_thresh,
+			(uint16_t)(ENA_REFILL_THRESH_PACKET));
+	}
+
 	/* Store pointer to this queue in upper layer */
 	rxq->configured = 1;
 	dev->data->rx_queues[queue_idx] = rxq;
@@ -2124,7 +2142,6 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 {
 	struct ena_ring *rx_ring = (struct ena_ring *)(rx_queue);
 	unsigned int free_queue_entries;
-	unsigned int refill_threshold;
 	uint16_t next_to_clean = rx_ring->next_to_clean;
 	uint16_t descs_in_use;
 	struct rte_mbuf *mbuf;
@@ -2206,12 +2223,9 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	rx_ring->next_to_clean = next_to_clean;

 	free_queue_entries = ena_com_free_q_entries(rx_ring->ena_com_io_sq);
-	refill_threshold =
-		RTE_MIN(rx_ring->ring_size / ENA_REFILL_THRESH_DIVIDER,
-		(unsigned int)ENA_REFILL_THRESH_PACKET);

 	/* Burst refill to save doorbells, memory barriers, const interval */
-	if (free_queue_entries > refill_threshold) {
+	if (free_queue_entries >= rx_ring->rx_free_thresh) {
 		ena_com_update_dev_comp_head(rx_ring->ena_com_io_cq);
 		ena_populate_rx_queue(rx_ring, free_queue_entries);
 	}
@@ -2578,12 +2592,12 @@ static int ena_xmit_mbuf(struct ena_ring *tx_ring, struct rte_mbuf *mbuf)

 static void ena_tx_cleanup(struct ena_ring *tx_ring)
 {
-	unsigned int cleanup_budget;
 	unsigned int total_tx_descs = 0;
+	uint16_t cleanup_budget;
 	uint16_t next_to_clean = tx_ring->next_to_clean;

-	cleanup_budget = RTE_MIN(tx_ring->ring_size / ENA_REFILL_THRESH_DIVIDER,
-		(unsigned int)ENA_REFILL_THRESH_PACKET);
+	/* Attempt to release all Tx descriptors (ring_size - 1 -> size_mask) */
+	cleanup_budget = tx_ring->size_mask;

 	while (likely(total_tx_descs < cleanup_budget)) {
 		struct rte_mbuf *mbuf;
@@ -2624,6 +2638,7 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 				  uint16_t nb_pkts)
 {
 	struct ena_ring *tx_ring = (struct ena_ring *)(tx_queue);
+	int available_desc;
 	uint16_t sent_idx = 0;

 #ifdef RTE_ETHDEV_DEBUG_TX
@@ -2643,8 +2658,8 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			tx_ring->size_mask)]);
 	}

-	tx_ring->tx_stats.available_desc =
-		ena_com_free_q_entries(tx_ring->ena_com_io_sq);
+	available_desc = ena_com_free_q_entries(tx_ring->ena_com_io_sq);
+	tx_ring->tx_stats.available_desc = available_desc;

 	/* If there are ready packets to be xmitted... */
 	if (likely(tx_ring->pkts_without_db)) {
@@ -2654,7 +2669,8 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		tx_ring->pkts_without_db = false;
 	}

-	ena_tx_cleanup(tx_ring);
+	if (available_desc < tx_ring->tx_free_thresh)
+		ena_tx_cleanup(tx_ring);

 	tx_ring->tx_stats.available_desc =
 		ena_com_free_q_entries(tx_ring->ena_com_io_sq);
diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h
index 26d425a893..176d713dff 100644
--- a/drivers/net/ena/ena_ethdev.h
+++ b/drivers/net/ena/ena_ethdev.h
@@ -142,6 +142,11 @@ struct ena_ring {
 	struct ena_com_io_cq *ena_com_io_cq;
 	struct ena_com_io_sq *ena_com_io_sq;

+	union {
+		uint16_t tx_free_thresh;
+		uint16_t rx_free_thresh;
+	};
+
 	struct ena_com_rx_buf_info ena_bufs[ENA_PKT_MAX_BUFS] __rte_cache_aligned;
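As a worked example of the dynamic defaults computed above, assuming the
in-tree values ENA_REFILL_THRESH_DIVIDER == 8 and
ENA_REFILL_THRESH_PACKET == 256 (see ena_ethdev.h for the authoritative
definitions):

/* Standalone recomputation of the default thresholds for a 1024-entry
 * ring; the two macro values below are assumptions mirroring
 * ena_ethdev.h, not part of this patch. */
#include <stdint.h>
#include <stdio.h>

#define ENA_REFILL_THRESH_DIVIDER 8	/* assumed in-tree value */
#define ENA_REFILL_THRESH_PACKET 256	/* assumed in-tree value */

#define MAX(a, b) ((a) > (b) ? (a) : (b))
#define MIN(a, b) ((a) < (b) ? (a) : (b))

int main(void)
{
	uint16_t ring_size = 1024;

	/* Tx: clean up once free descriptors fall below this value. */
	uint16_t tx_free_thresh = MAX(
		(uint16_t)(ring_size - ring_size / ENA_REFILL_THRESH_DIVIDER),
		(uint16_t)(ring_size - ENA_REFILL_THRESH_PACKET));

	/* Rx: refill once at least this many descriptors are free. */
	uint16_t rx_free_thresh = MIN(
		(uint16_t)(ring_size / ENA_REFILL_THRESH_DIVIDER),
		(uint16_t)ENA_REFILL_THRESH_PACKET);

	/* Prints: tx_free_thresh=896 rx_free_thresh=128 */
	printf("tx_free_thresh=%u rx_free_thresh=%u\n",
	       (unsigned int)tx_free_thresh, (unsigned int)rx_free_thresh);
	return 0;
}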