From patchwork Thu Oct 12 12:19:38 2017
X-Patchwork-Submitter: Adrien Mazarguil
X-Patchwork-Id: 30275
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Adrien Mazarguil
To: Ferruh Yigit
Cc: Nelio Laranjeiro, dev@dpdk.org
Date: Thu, 12 Oct 2017 14:19:38 +0200
Message-Id: <6a50316ed3b2b7a8607baa6c8cf8414e2400a980.1507809961.git.adrien.mazarguil@6wind.com>
X-Mailer: git-send-email 2.1.4
Subject: [dpdk-dev] [PATCH v2 24/29] net/mlx4: allocate queues and mbuf rings together

Since live Tx and Rx queues cannot be reused anymore without being
destroyed first, mbuf ring sizes are fixed and known from the start.
This allows a single allocation for queue data structures and mbuf ring
together, saving space and bringing them closer in memory.

Signed-off-by: Adrien Mazarguil
Acked-by: Nelio Laranjeiro
---
 drivers/net/mlx4/mlx4_rxq.c  |  71 +++++++++++--------------
 drivers/net/mlx4/mlx4_rxtx.h |   2 +
 drivers/net/mlx4/mlx4_txq.c  | 109 +++++++++++---------------------------
 3 files changed, 65 insertions(+), 117 deletions(-)

diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
index 30b0654..9978e5d 100644
--- a/drivers/net/mlx4/mlx4_rxq.c
+++ b/drivers/net/mlx4/mlx4_rxq.c
@@ -69,36 +69,30 @@
  *
  * @param rxq
  *   Pointer to Rx queue structure.
- * @param elts_n
- *   Number of elements to allocate.
  *
  * @return
  *   0 on success, negative errno value otherwise and rte_errno is set.
  */
 static int
-mlx4_rxq_alloc_elts(struct rxq *rxq, unsigned int elts_n)
+mlx4_rxq_alloc_elts(struct rxq *rxq)
 {
+	struct rxq_elt (*elts)[rxq->elts_n] = rxq->elts;
 	unsigned int i;
-	struct rxq_elt (*elts)[elts_n] =
-		rte_calloc_socket("RXQ elements", 1, sizeof(*elts), 0,
-				  rxq->socket);

-	if (elts == NULL) {
-		rte_errno = ENOMEM;
-		ERROR("%p: can't allocate packets array", (void *)rxq);
-		goto error;
-	}
 	/* For each WR (packet). */
-	for (i = 0; (i != elts_n); ++i) {
+	for (i = 0; i != RTE_DIM(*elts); ++i) {
 		struct rxq_elt *elt = &(*elts)[i];
 		struct ibv_recv_wr *wr = &elt->wr;
 		struct ibv_sge *sge = &(*elts)[i].sge;
 		struct rte_mbuf *buf = rte_pktmbuf_alloc(rxq->mp);

 		if (buf == NULL) {
+			while (i--) {
+				rte_pktmbuf_free_seg((*elts)[i].buf);
+				(*elts)[i].buf = NULL;
+			}
 			rte_errno = ENOMEM;
-			ERROR("%p: empty mbuf pool", (void *)rxq);
-			goto error;
+			return -rte_errno;
 		}
 		elt->buf = buf;
 		wr->next = &(*elts)[(i + 1)].wr;
@@ -121,21 +115,7 @@ mlx4_rxq_alloc_elts(struct rxq *rxq, unsigned int elts_n)
 	}
 	/* The last WR pointer must be NULL. */
 	(*elts)[(i - 1)].wr.next = NULL;
-	DEBUG("%p: allocated and configured %u single-segment WRs",
-	      (void *)rxq, elts_n);
-	rxq->elts_n = elts_n;
-	rxq->elts_head = 0;
-	rxq->elts = elts;
 	return 0;
-error:
-	if (elts != NULL) {
-		for (i = 0; (i != RTE_DIM(*elts)); ++i)
-			rte_pktmbuf_free_seg((*elts)[i].buf);
-		rte_free(elts);
-	}
-	DEBUG("%p: failed, freed everything", (void *)rxq);
-	assert(rte_errno > 0);
-	return -rte_errno;
 }

 /**
@@ -148,17 +128,15 @@ static void
 mlx4_rxq_free_elts(struct rxq *rxq)
 {
 	unsigned int i;
-	unsigned int elts_n = rxq->elts_n;
-	struct rxq_elt (*elts)[elts_n] = rxq->elts;
+	struct rxq_elt (*elts)[rxq->elts_n] = rxq->elts;

 	DEBUG("%p: freeing WRs", (void *)rxq);
-	rxq->elts_n = 0;
-	rxq->elts = NULL;
-	if (elts == NULL)
-		return;
-	for (i = 0; (i != RTE_DIM(*elts)); ++i)
+	for (i = 0; (i != RTE_DIM(*elts)); ++i) {
+		if (!(*elts)[i].buf)
+			continue;
 		rte_pktmbuf_free_seg((*elts)[i].buf);
-	rte_free(elts);
+		(*elts)[i].buf = NULL;
+	}
 }

 /**
@@ -187,8 +165,21 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 {
 	struct priv *priv = dev->data->dev_private;
 	uint32_t mb_len = rte_pktmbuf_data_room_size(mp);
+	struct rxq_elt (*elts)[desc];
 	struct rte_flow_error error;
 	struct rxq *rxq;
+	struct mlx4_malloc_vec vec[] = {
+		{
+			.align = RTE_CACHE_LINE_SIZE,
+			.size = sizeof(*rxq),
+			.addr = (void **)&rxq,
+		},
+		{
+			.align = RTE_CACHE_LINE_SIZE,
+			.size = sizeof(*elts),
+			.addr = (void **)&elts,
+		},
+	};
 	int ret;

 	(void)conf; /* Thresholds configuration (ignored). */
@@ -213,9 +204,8 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		return -rte_errno;
 	}
 	/* Allocate and initialize Rx queue. */
-	rxq = rte_calloc_socket("RXQ", 1, sizeof(*rxq), 0, socket);
+	mlx4_zmallocv_socket("RXQ", vec, RTE_DIM(vec), socket);
 	if (!rxq) {
-		rte_errno = ENOMEM;
 		ERROR("%p: unable to allocate queue index %u",
 		      (void *)dev, idx);
 		return -rte_errno;
@@ -224,6 +214,9 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		.priv = priv,
 		.mp = mp,
 		.port_id = dev->data->port_id,
+		.elts_n = desc,
+		.elts_head = 0,
+		.elts = elts,
 		.stats.idx = idx,
 		.socket = socket,
 	};
@@ -307,7 +300,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		      (void *)dev, strerror(rte_errno));
 		goto error;
 	}
-	ret = mlx4_rxq_alloc_elts(rxq, desc);
+	ret = mlx4_rxq_alloc_elts(rxq);
 	if (ret) {
 		ERROR("%p: RXQ allocation failed: %s",
 		      (void *)dev, strerror(rte_errno));
diff --git a/drivers/net/mlx4/mlx4_rxtx.h b/drivers/net/mlx4/mlx4_rxtx.h
index d62120e..d90f2f9 100644
--- a/drivers/net/mlx4/mlx4_rxtx.h
+++ b/drivers/net/mlx4/mlx4_rxtx.h
@@ -81,6 +81,7 @@ struct rxq {
 	struct rxq_elt (*elts)[]; /**< Rx elements. */
 	struct mlx4_rxq_stats stats; /**< Rx queue counters. */
 	unsigned int socket; /**< CPU socket ID for allocations. */
+	uint8_t data[]; /**< Remaining queue resources. */
 };

 /** Tx element. */
@@ -118,6 +119,7 @@ struct txq {
 	unsigned int elts_comp_cd_init; /**< Initial value for countdown. */
 	struct mlx4_txq_stats stats; /**< Tx queue counters. */
 	unsigned int socket; /**< CPU socket ID for allocations. */
+	uint8_t data[]; /**< Remaining queue resources. */
 };

 /* mlx4_rxq.c */
diff --git a/drivers/net/mlx4/mlx4_txq.c b/drivers/net/mlx4/mlx4_txq.c
index f102c68..915f8d7 100644
--- a/drivers/net/mlx4/mlx4_txq.c
+++ b/drivers/net/mlx4/mlx4_txq.c
@@ -64,59 +64,6 @@
 #include "mlx4_utils.h"

 /**
- * Allocate Tx queue elements.
- *
- * @param txq
- *   Pointer to Tx queue structure.
- * @param elts_n
- *   Number of elements to allocate.
- *
- * @return
- *   0 on success, negative errno value otherwise and rte_errno is set.
- */
-static int
-mlx4_txq_alloc_elts(struct txq *txq, unsigned int elts_n)
-{
-	unsigned int i;
-	struct txq_elt (*elts)[elts_n] =
-		rte_calloc_socket("TXQ", 1, sizeof(*elts), 0, txq->socket);
-	int ret = 0;
-
-	if (elts == NULL) {
-		ERROR("%p: can't allocate packets array", (void *)txq);
-		ret = ENOMEM;
-		goto error;
-	}
-	for (i = 0; (i != elts_n); ++i) {
-		struct txq_elt *elt = &(*elts)[i];
-
-		elt->buf = NULL;
-	}
-	DEBUG("%p: allocated and configured %u WRs", (void *)txq, elts_n);
-	txq->elts_n = elts_n;
-	txq->elts = elts;
-	txq->elts_head = 0;
-	txq->elts_tail = 0;
-	txq->elts_comp = 0;
-	/*
-	 * Request send completion every MLX4_PMD_TX_PER_COMP_REQ packets or
-	 * at least 4 times per ring.
-	 */
-	txq->elts_comp_cd_init =
-		((MLX4_PMD_TX_PER_COMP_REQ < (elts_n / 4)) ?
-		 MLX4_PMD_TX_PER_COMP_REQ : (elts_n / 4));
-	txq->elts_comp_cd = txq->elts_comp_cd_init;
-	assert(ret == 0);
-	return 0;
-error:
-	rte_free(elts);
-	DEBUG("%p: failed, freed everything", (void *)txq);
-	assert(ret > 0);
-	rte_errno = ret;
-	return -rte_errno;
-}
-
-/**
  * Free Tx queue elements.
  *
  * @param txq
@@ -125,34 +72,21 @@ mlx4_txq_alloc_elts(struct txq *txq, unsigned int elts_n)
 static void
 mlx4_txq_free_elts(struct txq *txq)
 {
-	unsigned int elts_n = txq->elts_n;
 	unsigned int elts_head = txq->elts_head;
 	unsigned int elts_tail = txq->elts_tail;
-	struct txq_elt (*elts)[elts_n] = txq->elts;
+	struct txq_elt (*elts)[txq->elts_n] = txq->elts;

 	DEBUG("%p: freeing WRs", (void *)txq);
-	txq->elts_n = 0;
-	txq->elts_head = 0;
-	txq->elts_tail = 0;
-	txq->elts_comp = 0;
-	txq->elts_comp_cd = 0;
-	txq->elts_comp_cd_init = 0;
-	txq->elts = NULL;
-	if (elts == NULL)
-		return;
 	while (elts_tail != elts_head) {
 		struct txq_elt *elt = &(*elts)[elts_tail];

 		assert(elt->buf != NULL);
 		rte_pktmbuf_free(elt->buf);
-#ifndef NDEBUG
-		/* Poisoning. */
-		memset(elt, 0x77, sizeof(*elt));
-#endif
-		if (++elts_tail == elts_n)
+		elt->buf = NULL;
+		if (++elts_tail == RTE_DIM(*elts))
 			elts_tail = 0;
 	}
-	rte_free(elts);
+	txq->elts_tail = txq->elts_head;
 }

 struct txq_mp2mr_mbuf_check_data {
@@ -235,8 +169,21 @@ mlx4_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		    unsigned int socket, const struct rte_eth_txconf *conf)
 {
 	struct priv *priv = dev->data->dev_private;
+	struct txq_elt (*elts)[desc];
 	struct ibv_qp_init_attr qp_init_attr;
 	struct txq *txq;
+	struct mlx4_malloc_vec vec[] = {
+		{
+			.align = RTE_CACHE_LINE_SIZE,
+			.size = sizeof(*txq),
+			.addr = (void **)&txq,
+		},
+		{
+			.align = RTE_CACHE_LINE_SIZE,
+			.size = sizeof(*elts),
+			.addr = (void **)&elts,
+		},
+	};
 	int ret;

 	(void)conf; /* Thresholds configuration (ignored). */
@@ -261,9 +208,8 @@ mlx4_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		return -rte_errno;
 	}
 	/* Allocate and initialize Tx queue. */
-	txq = rte_calloc_socket("TXQ", 1, sizeof(*txq), 0, socket);
+	mlx4_zmallocv_socket("TXQ", vec, RTE_DIM(vec), socket);
 	if (!txq) {
-		rte_errno = ENOMEM;
 		ERROR("%p: unable to allocate queue index %u",
 		      (void *)dev, idx);
 		return -rte_errno;
@@ -272,6 +218,19 @@ mlx4_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		.priv = priv,
 		.stats.idx = idx,
 		.socket = socket,
+		.elts_n = desc,
+		.elts = elts,
+		.elts_head = 0,
+		.elts_tail = 0,
+		.elts_comp = 0,
+		/*
+		 * Request send completion every MLX4_PMD_TX_PER_COMP_REQ
+		 * packets or at least 4 times per ring.
+		 */
+		.elts_comp_cd =
+			RTE_MIN(MLX4_PMD_TX_PER_COMP_REQ, desc / 4),
+		.elts_comp_cd_init =
+			RTE_MIN(MLX4_PMD_TX_PER_COMP_REQ, desc / 4),
 	};
 	txq->cq = ibv_create_cq(priv->ctx, desc, NULL, NULL, 0);
 	if (!txq->cq) {
@@ -314,12 +273,6 @@ mlx4_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		      (void *)dev, strerror(rte_errno));
 		goto error;
 	}
-	ret = mlx4_txq_alloc_elts(txq, desc);
-	if (ret) {
-		ERROR("%p: TXQ allocation failed: %s",
-		      (void *)dev, strerror(rte_errno));
-		goto error;
-	}
 	ret = ibv_modify_qp
 		(txq->qp, &(struct ibv_qp_attr){
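The core of this change is the vectored allocation: instead of two separate rte_calloc_socket() calls, the queue structure and its mbuf ring are carved out of one zeroed block described by an array of {align, size, addr} entries. mlx4_zmallocv_socket() and struct mlx4_malloc_vec are introduced by an earlier patch in this series and are not shown here; the following is a rough standalone sketch of the same idea using plain calloc() and hypothetical names, with the caveat that it only aligns offsets relative to the block start (the real helper also controls the base address alignment through the DPDK allocator).

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical stand-in for struct mlx4_malloc_vec: one entry per
 * object carved out of the single allocation. */
struct malloc_vec {
	size_t align; /* alignment of the object, must be a power of two */
	size_t size;  /* size of the object in bytes */
	void **addr;  /* where to store the resulting pointer */
};

/*
 * Compute the padded size of all entries, make one zeroed allocation
 * and hand out sub-regions. Only offsets (not the base address) are
 * aligned in this sketch. Returns the total size on success, 0 on
 * allocation failure.
 */
static size_t
zmallocv(const struct malloc_vec *vec, unsigned int cnt)
{
	size_t size = 0;
	unsigned int i;
	uint8_t *base;

	for (i = 0; i < cnt; ++i) {
		size = (size + vec[i].align - 1) & ~(vec[i].align - 1);
		size += vec[i].size;
	}
	base = calloc(1, size);
	if (base == NULL)
		return 0;
	size = 0;
	for (i = 0; i < cnt; ++i) {
		size = (size + vec[i].align - 1) & ~(vec[i].align - 1);
		*vec[i].addr = base + size;
		size += vec[i].size;
	}
	return size;
}
```

Because everything lives in one block, freeing the first pointer releases the whole queue, which is why the patch can drop the separate rte_free() of the elements array: the free_elts routines now only return mbufs to the pool instead of tearing down storage.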