From patchwork Tue Nov 20 16:26:25 2018
X-Patchwork-Submitter: Rafal Kozik
X-Patchwork-Id: 48211
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Rafal Kozik
To: dev@dpdk.org
Cc: mw@semihalf.com, mk@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com,
 matua@amazon.com, igorch@amazon.com, Rafal Kozik, stable@dpdk.org
Date: Tue, 20 Nov 2018 17:26:25 +0100
Message-Id: <1542731185-10136-1-git-send-email-rk@semihalf.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1542727918-8254-1-git-send-email-rk@semihalf.com>
References: <1542727918-8254-1-git-send-email-rk@semihalf.com>
Subject: [dpdk-dev] [PATCH v2] net/ena: fix out of order completion
List-Id: DPDK patches and discussions

rx_buffer_info should be refilled not linearly, but out of order. The IDs
should be taken from the empty_rx_reqs array.

rx_refill_buffer is introduced to temporarily store the bulk of mbufs taken
from the pool. In case of an error, the unused mbufs are put back to the
pool.
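To make the scheme concrete, below is a minimal sketch of the refill flow
described above. It is illustrative only: struct refill_ring and refill()
are hypothetical, reduced stand-ins for struct ena_ring and
ena_populate_rx_queue(), the descriptor post to the device is elided to a
comment, and only the rte_mempool_*() and rte_prefetch0() calls are the
real DPDK API.

    #include <stdint.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>
    #include <rte_prefetch.h>

    struct refill_ring {                    /* reduced stand-in, not ena_ring */
        struct rte_mempool *mb_pool;
        struct rte_mbuf **rx_buffer_info;   /* indexed by req_id, not position */
        struct rte_mbuf **rx_refill_buffer; /* scratch for the bulk allocation */
        uint16_t *empty_rx_reqs;            /* free IDs, consumed out of order */
        uint16_t ring_size;                 /* power of two */
        uint16_t next_to_use;
    };

    static int refill(struct refill_ring *r, uint16_t count)
    {
        struct rte_mbuf **mbufs = r->rx_refill_buffer;
        uint16_t mask = r->ring_size - 1;
        uint16_t i;

        /* One bulk allocation into the scratch buffer; unlike the old code
         * no wrap-around split is needed, because the mbufs are no longer
         * written directly at next_to_use in rx_buffer_info. */
        if (rte_mempool_get_bulk(r->mb_pool, (void **)mbufs, count) < 0)
            return -1;

        for (i = 0; i < count; i++) {
            uint16_t req_id = r->empty_rx_reqs[r->next_to_use & mask];

            if (i + 4 < count)
                rte_prefetch0(mbufs[i + 4]);

            /* The slot is chosen by the free ID, so the device may
             * complete buffers in any order. */
            r->rx_buffer_info[req_id] = mbufs[i];

            /* Here the driver posts (req_id, DMA address) to the HW queue;
             * on failure it clears rx_buffer_info[req_id] and breaks,
             * leaving mbufs[i..count-1] unposted. */

            r->next_to_use++;
        }

        /* Anything not handed to the device goes back to the pool. */
        if (i < count)
            rte_mempool_put_bulk(r->mb_pool, (void **)&mbufs[i], count - i);

        return i;
    }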
Fixes: c2034976673d ("net/ena: add Rx out of order completion")
Cc: stable@dpdk.org

Signed-off-by: Rafal Kozik
Acked-by: Michal Krawczyk
---
Fix commit author.
---
 drivers/net/ena/ena_ethdev.c | 40 ++++++++++++++++++++++++++++------------
 drivers/net/ena/ena_ethdev.h |  1 +
 2 files changed, 29 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 3690afe..3a5cce9 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -776,6 +776,10 @@ static void ena_rx_queue_release(void *queue)
 		rte_free(ring->rx_buffer_info);
 	ring->rx_buffer_info = NULL;
 
+	if (ring->rx_refill_buffer)
+		rte_free(ring->rx_refill_buffer);
+	ring->rx_refill_buffer = NULL;
+
 	if (ring->empty_rx_reqs)
 		rte_free(ring->empty_rx_reqs);
 	ring->empty_rx_reqs = NULL;
@@ -1318,6 +1322,17 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev,
 		return -ENOMEM;
 	}
 
+	rxq->rx_refill_buffer = rte_zmalloc("rxq->rx_refill_buffer",
+					    sizeof(struct rte_mbuf *) * nb_desc,
+					    RTE_CACHE_LINE_SIZE);
+
+	if (!rxq->rx_refill_buffer) {
+		RTE_LOG(ERR, PMD, "failed to alloc mem for rx refill buffer\n");
+		rte_free(rxq->rx_buffer_info);
+		rxq->rx_buffer_info = NULL;
+		return -ENOMEM;
+	}
+
 	rxq->empty_rx_reqs = rte_zmalloc("rxq->empty_rx_reqs",
 					 sizeof(uint16_t) * nb_desc,
 					 RTE_CACHE_LINE_SIZE);
@@ -1325,6 +1340,8 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev,
 		RTE_LOG(ERR, PMD, "failed to alloc mem for empty rx reqs\n");
 		rte_free(rxq->rx_buffer_info);
 		rxq->rx_buffer_info = NULL;
+		rte_free(rxq->rx_refill_buffer);
+		rxq->rx_refill_buffer = NULL;
 		return -ENOMEM;
 	}
 
@@ -1346,7 +1363,7 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count)
 	uint16_t ring_mask = ring_size - 1;
 	uint16_t next_to_use = rxq->next_to_use;
 	uint16_t in_use, req_id;
-	struct rte_mbuf **mbufs = &rxq->rx_buffer_info[0];
+	struct rte_mbuf **mbufs = rxq->rx_refill_buffer;
 
 	if (unlikely(!count))
 		return 0;
@@ -1354,13 +1371,8 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count)
 	in_use = rxq->next_to_use - rxq->next_to_clean;
 	ena_assert_msg(((in_use + count) < ring_size), "bad ring state");
 
-	count = RTE_MIN(count,
-			(uint16_t)(ring_size - (next_to_use & ring_mask)));
-
 	/* get resources for incoming packets */
-	rc = rte_mempool_get_bulk(rxq->mb_pool,
-				  (void **)(&mbufs[next_to_use & ring_mask]),
-				  count);
+	rc = rte_mempool_get_bulk(rxq->mb_pool, (void **)mbufs, count);
 	if (unlikely(rc < 0)) {
 		rte_atomic64_inc(&rxq->adapter->drv_stats->rx_nombuf);
 		PMD_RX_LOG(DEBUG, "there are no enough free buffers");
@@ -1369,15 +1381,17 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count)
 
 	for (i = 0; i < count; i++) {
 		uint16_t next_to_use_masked = next_to_use & ring_mask;
-		struct rte_mbuf *mbuf = mbufs[next_to_use_masked];
+		struct rte_mbuf *mbuf = mbufs[i];
 		struct ena_com_buf ebuf;
 
-		rte_prefetch0(mbufs[((next_to_use + 4) & ring_mask)]);
+		if (likely(i + 4 < count))
+			rte_prefetch0(mbufs[i + 4]);
 
 		req_id = rxq->empty_rx_reqs[next_to_use_masked];
 		rc = validate_rx_req_id(rxq, req_id);
 		if (unlikely(rc < 0))
 			break;
+		rxq->rx_buffer_info[req_id] = mbuf;
 
 		/* prepare physical address for DMA transaction */
 		ebuf.paddr = mbuf->buf_iova + RTE_PKTMBUF_HEADROOM;
@@ -1386,17 +1400,19 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count)
 		rc = ena_com_add_single_rx_desc(rxq->ena_com_io_sq,
 						&ebuf, req_id);
 		if (unlikely(rc)) {
-			rte_mempool_put_bulk(rxq->mb_pool, (void **)(&mbuf),
-					     count - i);
 			RTE_LOG(WARNING, PMD, "failed adding rx desc\n");
+			rxq->rx_buffer_info[req_id] = NULL;
 			break;
 		}
 		next_to_use++;
 	}
 
-	if (unlikely(i < count))
+	if (unlikely(i < count)) {
 		RTE_LOG(WARNING, PMD, "refilled rx qid %d with only %d "
 			"buffers (from %d)\n", rxq->id, i, count);
+		rte_mempool_put_bulk(rxq->mb_pool, (void **)(&mbufs[i]),
+				     count - i);
+	}
 
 	/* When we submitted free recources to device... */
 	if (likely(i > 0)) {
diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h
index 2dc8129..322e90a 100644
--- a/drivers/net/ena/ena_ethdev.h
+++ b/drivers/net/ena/ena_ethdev.h
@@ -87,6 +87,7 @@ struct ena_ring {
 		struct ena_tx_buffer *tx_buffer_info; /* contex of tx packet */
 		struct rte_mbuf **rx_buffer_info; /* contex of rx packet */
 	};
+	struct rte_mbuf **rx_refill_buffer;
 
 	unsigned int ring_size; /* number of tx/rx_buffer_info's entries */
 	struct ena_com_io_cq *ena_com_io_cq;
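For context only (not part of this diff), the completion side that the
refill scheme pairs with can be pictured as follows. Again a simplified
sketch reusing the hypothetical refill_ring from the earlier example; in
the driver the corresponding logic lives in eth_ena_recv_pkts().

    /* The device reports a req_id: it selects the mbuf regardless of ring
     * position, and the ID is recycled into empty_rx_reqs so a later
     * refill can reuse the slot. This is what makes out of order
     * completion work. (In the driver, next_to_clean is a ring field that
     * is advanced after each completion.) */
    static struct rte_mbuf *complete_one(struct refill_ring *r,
                                         uint16_t req_id,
                                         uint16_t next_to_clean)
    {
        struct rte_mbuf *mbuf = r->rx_buffer_info[req_id];

        r->rx_buffer_info[req_id] = NULL;
        r->empty_rx_reqs[next_to_clean & (r->ring_size - 1)] = req_id;
        return mbuf;
    }

Note that rx_refill_buffer is added outside the tx/rx union, so Tx rings
carry the (unused, NULL) pointer as well; the memory cost is one pointer
per ring plus an nb_desc-sized scratch array allocated only for Rx queues.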