From patchwork Tue Nov 20 15:31:58 2018
X-Patchwork-Submitter: Rafal Kozik
X-Patchwork-Id: 48209
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Rafal Kozik
To: dev@dpdk.org
Cc: mw@semihalf.com, mk@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com,
 matua@amazon.com, igorch@amazon.com, root, stable@dpdk.org, Rafal Kozik
Date: Tue, 20 Nov 2018 16:31:58 +0100
Message-Id: <1542727918-8254-1-git-send-email-rk@semihalf.com>
Subject: [dpdk-dev] [PATCH] net/ena: fix out of order completion

From: root

rx_buffer_info should be refilled not linearly, but out of order, with IDs
taken from the empty_rx_reqs array. rx_refill_buffer is introduced as
temporary storage for the bulk of mbufs taken from the pool. In case of
error, unused mbufs are put back to the pool.
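To make the scheme concrete before the diff, below is a minimal,
self-contained sketch of the idea, not the driver code: the ID ordering,
buffer size, and the malloc/free "pool" are invented for illustration, and
only the array names mirror the driver's fields. Buffer slots are filled at
positions named by free request IDs rather than linearly, and a separate
staging array holds the bulk-allocated buffers so an unused tail can be
returned in one operation.

/* build: cc -o refill_sketch refill_sketch.c */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define RING_SIZE 8u
#define RING_MASK (RING_SIZE - 1)

int main(void)
{
	/* slot -> buffer; indexed by req_id, so it fills out of order */
	void *rx_buffer_info[RING_SIZE] = { NULL };
	/* free request IDs, in whatever order completions released them */
	uint16_t empty_rx_reqs[RING_SIZE] = { 5, 2, 7, 0, 1, 3, 4, 6 };
	/* staging array for the bulk allocation (the new rx_refill_buffer) */
	void *rx_refill_buffer[RING_SIZE];
	uint16_t next_to_use = 0;
	unsigned int count = 6, fail_at = 4, i;

	/* 1) bulk-allocate 'count' buffers into the staging array; in the
	 *    driver this is a single rte_mempool_get_bulk() call */
	for (i = 0; i < count; i++)
		rx_refill_buffer[i] = malloc(64);

	/* 2) hand each staged buffer to the slot named by the next free ID;
	 *    simulate a descriptor submission error part way through */
	for (i = 0; i < count; i++) {
		uint16_t req_id;

		if (i == fail_at)
			break;
		req_id = empty_rx_reqs[next_to_use & RING_MASK];
		rx_buffer_info[req_id] = rx_refill_buffer[i];
		printf("staged buffer %u -> slot %u\n", i, (unsigned)req_id);
		next_to_use++;
	}

	/* 3) the unused tail rx_refill_buffer[i..count-1] is contiguous, so
	 *    one bulk put returns it to the pool (plain free() here) */
	for (; i < count; i++)
		free(rx_refill_buffer[i]);

	/* toy cleanup of what the ring still holds; free(NULL) is a no-op */
	for (i = 0; i < RING_SIZE; i++)
		free(rx_buffer_info[i]);
	return 0;
}

The essential property is that rx_buffer_info[] is keyed by req_id rather
than by a linear ring index, so a slot can be refilled as soon as an
out-of-order completion returns its ID to empty_rx_reqs.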
Fixes: c2034976673d ("net/ena: add Rx out of order completion")
Cc: stable@dpdk.org

Signed-off-by: Rafal Kozik
Acked-by: Michal Krawczyk
---
 drivers/net/ena/ena_ethdev.c | 40 ++++++++++++++++++++++++++++------------
 drivers/net/ena/ena_ethdev.h |  1 +
 2 files changed, 29 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 3690afe..3a5cce9 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -776,6 +776,10 @@ static void ena_rx_queue_release(void *queue)
 	rte_free(ring->rx_buffer_info);
 	ring->rx_buffer_info = NULL;
 
+	if (ring->rx_refill_buffer)
+		rte_free(ring->rx_refill_buffer);
+	ring->rx_refill_buffer = NULL;
+
 	if (ring->empty_rx_reqs)
 		rte_free(ring->empty_rx_reqs);
 	ring->empty_rx_reqs = NULL;
@@ -1318,6 +1322,17 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev,
 		return -ENOMEM;
 	}
 
+	rxq->rx_refill_buffer = rte_zmalloc("rxq->rx_refill_buffer",
+					    sizeof(struct rte_mbuf *) * nb_desc,
+					    RTE_CACHE_LINE_SIZE);
+
+	if (!rxq->rx_refill_buffer) {
+		RTE_LOG(ERR, PMD, "failed to alloc mem for rx refill buffer\n");
+		rte_free(rxq->rx_buffer_info);
+		rxq->rx_buffer_info = NULL;
+		return -ENOMEM;
+	}
+
 	rxq->empty_rx_reqs = rte_zmalloc("rxq->empty_rx_reqs",
 					 sizeof(uint16_t) * nb_desc,
 					 RTE_CACHE_LINE_SIZE);
@@ -1325,6 +1340,8 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev,
 		RTE_LOG(ERR, PMD, "failed to alloc mem for empty rx reqs\n");
 		rte_free(rxq->rx_buffer_info);
 		rxq->rx_buffer_info = NULL;
+		rte_free(rxq->rx_refill_buffer);
+		rxq->rx_refill_buffer = NULL;
 		return -ENOMEM;
 	}
 
@@ -1346,7 +1363,7 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count)
 	uint16_t ring_mask = ring_size - 1;
 	uint16_t next_to_use = rxq->next_to_use;
 	uint16_t in_use, req_id;
-	struct rte_mbuf **mbufs = &rxq->rx_buffer_info[0];
+	struct rte_mbuf **mbufs = rxq->rx_refill_buffer;
 
 	if (unlikely(!count))
 		return 0;
@@ -1354,13 +1371,8 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count)
 	in_use = rxq->next_to_use - rxq->next_to_clean;
 	ena_assert_msg(((in_use + count) < ring_size), "bad ring state");
 
-	count = RTE_MIN(count,
-			(uint16_t)(ring_size - (next_to_use & ring_mask)));
-
 	/* get resources for incoming packets */
-	rc = rte_mempool_get_bulk(rxq->mb_pool,
-				  (void **)(&mbufs[next_to_use & ring_mask]),
-				  count);
+	rc = rte_mempool_get_bulk(rxq->mb_pool, (void **)mbufs, count);
 	if (unlikely(rc < 0)) {
 		rte_atomic64_inc(&rxq->adapter->drv_stats->rx_nombuf);
 		PMD_RX_LOG(DEBUG, "there are no enough free buffers");
@@ -1369,15 +1381,17 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count)
 
 	for (i = 0; i < count; i++) {
 		uint16_t next_to_use_masked = next_to_use & ring_mask;
-		struct rte_mbuf *mbuf = mbufs[next_to_use_masked];
+		struct rte_mbuf *mbuf = mbufs[i];
 		struct ena_com_buf ebuf;
 
-		rte_prefetch0(mbufs[((next_to_use + 4) & ring_mask)]);
+		if (likely(i + 4 < count))
+			rte_prefetch0(mbufs[i + 4]);
 
 		req_id = rxq->empty_rx_reqs[next_to_use_masked];
 		rc = validate_rx_req_id(rxq, req_id);
 		if (unlikely(rc < 0))
 			break;
+		rxq->rx_buffer_info[req_id] = mbuf;
 
 		/* prepare physical address for DMA transaction */
 		ebuf.paddr = mbuf->buf_iova + RTE_PKTMBUF_HEADROOM;
@@ -1386,17 +1400,19 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count)
 		rc = ena_com_add_single_rx_desc(rxq->ena_com_io_sq,
 						&ebuf, req_id);
 		if (unlikely(rc)) {
-			rte_mempool_put_bulk(rxq->mb_pool, (void **)(&mbuf),
-					     count - i);
 			RTE_LOG(WARNING, PMD, "failed adding rx desc\n");
+			rxq->rx_buffer_info[req_id] = NULL;
 			break;
 		}
 		next_to_use++;
 	}
 
-	if (unlikely(i < count))
+	if (unlikely(i < count)) {
 		RTE_LOG(WARNING, PMD, "refilled rx qid %d with only %d "
 			"buffers (from %d)\n", rxq->id, i, count);
+		rte_mempool_put_bulk(rxq->mb_pool, (void **)(&mbufs[i]),
+				     count - i);
+	}
 
 	/* When we submitted free recources to device... */
 	if (likely(i > 0)) {
diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h
index 2dc8129..322e90a 100644
--- a/drivers/net/ena/ena_ethdev.h
+++ b/drivers/net/ena/ena_ethdev.h
@@ -87,6 +87,7 @@ struct ena_ring {
 		struct ena_tx_buffer *tx_buffer_info; /* contex of tx packet */
 		struct rte_mbuf **rx_buffer_info; /* contex of rx packet */
 	};
+	struct rte_mbuf **rx_refill_buffer;
 	unsigned int ring_size; /* number of tx/rx_buffer_info's entries */
 	struct ena_com_io_cq *ena_com_io_cq;
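A subtle point in the diff above is the relocated rte_mempool_put_bulk().
The API expects an array of n object pointers, but the old error path passed
&mbuf, the address of a single local pointer, with count - i elements, so
for count - i > 1 it read past that one pointer. After the fix the unused
mbufs sit contiguously at the tail of the staging array, so &mbufs[i] really
is an array of count - i valid pointers. The following standalone sketch
illustrates this under the same simplified assumptions as the earlier one;
toy_put_bulk() is a made-up stand-in with the same array-of-pointers
contract as rte_mempool_put_bulk().

/* build: cc -o putbulk_sketch putbulk_sketch.c */
#include <stdlib.h>

static void toy_put_bulk(void *const *obj_table, unsigned int n)
{
	unsigned int j;

	/* valid only if obj_table really has n entries */
	for (j = 0; j < n; j++)
		free(obj_table[j]);
}

int main(void)
{
	unsigned int count = 6, i = 2;	/* refill stopped after 2 buffers */
	void *mbufs[6];
	unsigned int j;

	/* stands in for the bulk get into rx_refill_buffer */
	for (j = 0; j < count; j++)
		mbufs[j] = malloc(64);

	/* Old error path (broken): &mbuf was the address of ONE local
	 * pointer, yet count - i entries were read from it:
	 *
	 *	void *mbuf = mbufs[i];
	 *	toy_put_bulk(&mbuf, count - i);	// reads past one pointer
	 *
	 * New error path: the unused tail of the staging array is a real,
	 * contiguous array of count - i pointers. */
	toy_put_bulk(&mbufs[i], count - i);

	/* the first i buffers stay owned by the ring; release them here
	 * only to keep the sketch leak-free */
	toy_put_bulk(&mbufs[0], i);
	return 0;
}

Deferring the bulk put until after the loop also means a single return of
the whole unused tail instead of a put on every failing iteration, which is
why the fix moves the call next to the "refilled rx qid" warning.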