From patchwork Wed Oct 11 14:35:26 2017
X-Patchwork-Submitter: Adrien Mazarguil
X-Patchwork-Id: 30146
From: Adrien Mazarguil
To: Ferruh Yigit
Cc: dev@dpdk.org
Date: Wed, 11 Oct 2017 16:35:26 +0200
Message-Id: <6dfb09414f548d39e06c38da6dc717a3e3d6314d.1507730496.git.adrien.mazarguil@6wind.com>
Subject: [dpdk-dev] [PATCH v1 24/29] net/mlx4: allocate queues and mbuf rings together

Since live Tx and Rx queues cannot be reused anymore without being
destroyed first, mbuf ring sizes are fixed and known from the start.

This allows a single allocation for queue data structures and mbuf ring
together, saving space and bringing them closer in memory.

Signed-off-by: Adrien Mazarguil
Acked-by: Nelio Laranjeiro
---
 drivers/net/mlx4/mlx4_rxq.c  |  71 +++++++++++--------
 drivers/net/mlx4/mlx4_rxtx.h |   2 +
 drivers/net/mlx4/mlx4_txq.c  | 109 +++++++++++---------------------
 3 files changed, 65 insertions(+), 117 deletions(-)

diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
index 30b0654..03e6af5 100644
--- a/drivers/net/mlx4/mlx4_rxq.c
+++ b/drivers/net/mlx4/mlx4_rxq.c
@@ -69,36 +69,30 @@
  *
  * @param rxq
  *   Pointer to Rx queue structure.
- * @param elts_n
- *   Number of elements to allocate.
  *
  * @return
  *   0 on success, negative errno value otherwise and rte_errno is set.
  */
 static int
-mlx4_rxq_alloc_elts(struct rxq *rxq, unsigned int elts_n)
+mlx4_rxq_alloc_elts(struct rxq *rxq)
 {
+	struct rxq_elt (*elts)[rxq->elts_n] = rxq->elts;
 	unsigned int i;
-	struct rxq_elt (*elts)[elts_n] =
-		rte_calloc_socket("RXQ elements", 1, sizeof(*elts), 0,
-				  rxq->socket);
 
-	if (elts == NULL) {
-		rte_errno = ENOMEM;
-		ERROR("%p: can't allocate packets array", (void *)rxq);
-		goto error;
-	}
 	/* For each WR (packet). */
-	for (i = 0; (i != elts_n); ++i) {
+	for (i = 0; i != RTE_DIM(*elts); ++i) {
 		struct rxq_elt *elt = &(*elts)[i];
 		struct ibv_recv_wr *wr = &elt->wr;
 		struct ibv_sge *sge = &(*elts)[i].sge;
 		struct rte_mbuf *buf = rte_pktmbuf_alloc(rxq->mp);
 
 		if (buf == NULL) {
+			while (i--) {
+				rte_pktmbuf_free_seg((*elts)[i].buf);
+				(*elts)[i].buf = NULL;
+			}
 			rte_errno = ENOMEM;
-			ERROR("%p: empty mbuf pool", (void *)rxq);
-			goto error;
+			return -rte_errno;
 		}
 		elt->buf = buf;
 		wr->next = &(*elts)[(i + 1)].wr;
@@ -121,21 +115,7 @@ mlx4_rxq_alloc_elts(struct rxq *rxq, unsigned int elts_n)
 	}
 	/* The last WR pointer must be NULL. */
 	(*elts)[(i - 1)].wr.next = NULL;
-	DEBUG("%p: allocated and configured %u single-segment WRs",
-	      (void *)rxq, elts_n);
-	rxq->elts_n = elts_n;
-	rxq->elts_head = 0;
-	rxq->elts = elts;
 	return 0;
-error:
-	if (elts != NULL) {
-		for (i = 0; (i != RTE_DIM(*elts)); ++i)
-			rte_pktmbuf_free_seg((*elts)[i].buf);
-		rte_free(elts);
-	}
-	DEBUG("%p: failed, freed everything", (void *)rxq);
-	assert(rte_errno > 0);
-	return -rte_errno;
 }
 
 /**
@@ -148,17 +128,15 @@
 static void
 mlx4_rxq_free_elts(struct rxq *rxq)
 {
 	unsigned int i;
-	unsigned int elts_n = rxq->elts_n;
-	struct rxq_elt (*elts)[elts_n] = rxq->elts;
+	struct rxq_elt (*elts)[rxq->elts_n] = rxq->elts;
 
 	DEBUG("%p: freeing WRs", (void *)rxq);
-	rxq->elts_n = 0;
-	rxq->elts = NULL;
-	if (elts == NULL)
-		return;
-	for (i = 0; (i != RTE_DIM(*elts)); ++i)
+	for (i = 0; (i != RTE_DIM(*elts)); ++i) {
+		if (!(*elts)[i].buf)
+			continue;
 		rte_pktmbuf_free_seg((*elts)[i].buf);
-	rte_free(elts);
+		(*elts)[i].buf = NULL;
+	}
 }
 
 /**
@@ -187,8 +165,21 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 {
 	struct priv *priv = dev->data->dev_private;
 	uint32_t mb_len = rte_pktmbuf_data_room_size(mp);
+	struct rxq_elt (*elts)[desc];
 	struct rte_flow_error error;
 	struct rxq *rxq;
+	struct rte_malloc_vec vec[] = {
+		{
+			.align = RTE_CACHE_LINE_SIZE,
+			.size = sizeof(*rxq),
+			.addr = (void **)&rxq,
+		},
+		{
+			.align = RTE_CACHE_LINE_SIZE,
+			.size = sizeof(*elts),
+			.addr = (void **)&elts,
+		},
+	};
 	int ret;
 
 	(void)conf; /* Thresholds configuration (ignored). */
@@ -213,9 +204,8 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		return -rte_errno;
 	}
 	/* Allocate and initialize Rx queue. */
-	rxq = rte_calloc_socket("RXQ", 1, sizeof(*rxq), 0, socket);
+	rte_zmallocv_socket("RXQ", vec, RTE_DIM(vec), socket);
 	if (!rxq) {
-		rte_errno = ENOMEM;
 		ERROR("%p: unable to allocate queue index %u",
 		      (void *)dev, idx);
 		return -rte_errno;
@@ -224,6 +214,9 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		.priv = priv,
 		.mp = mp,
 		.port_id = dev->data->port_id,
+		.elts_n = desc,
+		.elts_head = 0,
+		.elts = elts,
 		.stats.idx = idx,
 		.socket = socket,
 	};
@@ -307,7 +300,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		      (void *)dev, strerror(rte_errno));
 		goto error;
 	}
-	ret = mlx4_rxq_alloc_elts(rxq, desc);
+	ret = mlx4_rxq_alloc_elts(rxq);
 	if (ret) {
 		ERROR("%p: RXQ allocation failed: %s",
 		      (void *)dev, strerror(rte_errno));
diff --git a/drivers/net/mlx4/mlx4_rxtx.h b/drivers/net/mlx4/mlx4_rxtx.h
index d62120e..d90f2f9 100644
--- a/drivers/net/mlx4/mlx4_rxtx.h
+++ b/drivers/net/mlx4/mlx4_rxtx.h
@@ -81,6 +81,7 @@ struct rxq {
 	struct rxq_elt (*elts)[]; /**< Rx elements. */
 	struct mlx4_rxq_stats stats; /**< Rx queue counters. */
 	unsigned int socket; /**< CPU socket ID for allocations. */
+	uint8_t data[]; /**< Remaining queue resources. */
 };
 
 /** Tx element. */
@@ -118,6 +119,7 @@ struct txq {
 	unsigned int elts_comp_cd_init; /**< Initial value for countdown. */
 	struct mlx4_txq_stats stats; /**< Tx queue counters. */
 	unsigned int socket; /**< CPU socket ID for allocations. */
+	uint8_t data[]; /**< Remaining queue resources. */
 };
 
 /* mlx4_rxq.c */
diff --git a/drivers/net/mlx4/mlx4_txq.c b/drivers/net/mlx4/mlx4_txq.c
index f102c68..7042cd9 100644
--- a/drivers/net/mlx4/mlx4_txq.c
+++ b/drivers/net/mlx4/mlx4_txq.c
@@ -64,59 +64,6 @@
 #include "mlx4_utils.h"
 
 /**
- * Allocate Tx queue elements.
- *
- * @param txq
- *   Pointer to Tx queue structure.
- * @param elts_n
- *   Number of elements to allocate.
- *
- * @return
- *   0 on success, negative errno value otherwise and rte_errno is set.
- */
-static int
-mlx4_txq_alloc_elts(struct txq *txq, unsigned int elts_n)
-{
-	unsigned int i;
-	struct txq_elt (*elts)[elts_n] =
-		rte_calloc_socket("TXQ", 1, sizeof(*elts), 0, txq->socket);
-	int ret = 0;
-
-	if (elts == NULL) {
-		ERROR("%p: can't allocate packets array", (void *)txq);
-		ret = ENOMEM;
-		goto error;
-	}
-	for (i = 0; (i != elts_n); ++i) {
-		struct txq_elt *elt = &(*elts)[i];
-
-		elt->buf = NULL;
-	}
-	DEBUG("%p: allocated and configured %u WRs", (void *)txq, elts_n);
-	txq->elts_n = elts_n;
-	txq->elts = elts;
-	txq->elts_head = 0;
-	txq->elts_tail = 0;
-	txq->elts_comp = 0;
-	/*
-	 * Request send completion every MLX4_PMD_TX_PER_COMP_REQ packets or
-	 * at least 4 times per ring.
-	 */
-	txq->elts_comp_cd_init =
-		((MLX4_PMD_TX_PER_COMP_REQ < (elts_n / 4)) ?
-		 MLX4_PMD_TX_PER_COMP_REQ : (elts_n / 4));
-	txq->elts_comp_cd = txq->elts_comp_cd_init;
-	assert(ret == 0);
-	return 0;
-error:
-	rte_free(elts);
-	DEBUG("%p: failed, freed everything", (void *)txq);
-	assert(ret > 0);
-	rte_errno = ret;
-	return -rte_errno;
-}
-
-/**
  * Free Tx queue elements.
  *
  * @param txq
@@ -125,34 +72,21 @@ mlx4_txq_alloc_elts(struct txq *txq, unsigned int elts_n)
 static void
 mlx4_txq_free_elts(struct txq *txq)
 {
-	unsigned int elts_n = txq->elts_n;
 	unsigned int elts_head = txq->elts_head;
 	unsigned int elts_tail = txq->elts_tail;
-	struct txq_elt (*elts)[elts_n] = txq->elts;
+	struct txq_elt (*elts)[txq->elts_n] = txq->elts;
 
 	DEBUG("%p: freeing WRs", (void *)txq);
-	txq->elts_n = 0;
-	txq->elts_head = 0;
-	txq->elts_tail = 0;
-	txq->elts_comp = 0;
-	txq->elts_comp_cd = 0;
-	txq->elts_comp_cd_init = 0;
-	txq->elts = NULL;
-	if (elts == NULL)
-		return;
 	while (elts_tail != elts_head) {
 		struct txq_elt *elt = &(*elts)[elts_tail];
 
 		assert(elt->buf != NULL);
 		rte_pktmbuf_free(elt->buf);
-#ifndef NDEBUG
-		/* Poisoning. */
-		memset(elt, 0x77, sizeof(*elt));
-#endif
-		if (++elts_tail == elts_n)
+		elt->buf = NULL;
+		if (++elts_tail == RTE_DIM(*elts))
 			elts_tail = 0;
 	}
-	rte_free(elts);
+	txq->elts_tail = txq->elts_head;
 }
 
 struct txq_mp2mr_mbuf_check_data {
@@ -235,8 +169,21 @@ mlx4_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	       unsigned int socket, const struct rte_eth_txconf *conf)
 {
 	struct priv *priv = dev->data->dev_private;
+	struct txq_elt (*elts)[desc];
 	struct ibv_qp_init_attr qp_init_attr;
 	struct txq *txq;
+	struct rte_malloc_vec vec[] = {
+		{
+			.align = RTE_CACHE_LINE_SIZE,
+			.size = sizeof(*txq),
+			.addr = (void **)&txq,
+		},
+		{
+			.align = RTE_CACHE_LINE_SIZE,
+			.size = sizeof(*elts),
+			.addr = (void **)&elts,
+		},
+	};
 	int ret;
 
 	(void)conf; /* Thresholds configuration (ignored). */
@@ -261,9 +208,8 @@ mlx4_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		return -rte_errno;
 	}
 	/* Allocate and initialize Tx queue. */
-	txq = rte_calloc_socket("TXQ", 1, sizeof(*txq), 0, socket);
+	rte_zmallocv_socket("TXQ", vec, RTE_DIM(vec), socket);
 	if (!txq) {
-		rte_errno = ENOMEM;
 		ERROR("%p: unable to allocate queue index %u",
 		      (void *)dev, idx);
 		return -rte_errno;
@@ -272,6 +218,19 @@ mlx4_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		.priv = priv,
 		.stats.idx = idx,
 		.socket = socket,
+		.elts_n = desc,
+		.elts = elts,
+		.elts_head = 0,
+		.elts_tail = 0,
+		.elts_comp = 0,
+		/*
+		 * Request send completion every MLX4_PMD_TX_PER_COMP_REQ
+		 * packets or at least 4 times per ring.
+		 */
+		.elts_comp_cd =
+			RTE_MIN(MLX4_PMD_TX_PER_COMP_REQ, desc / 4),
+		.elts_comp_cd_init =
+			RTE_MIN(MLX4_PMD_TX_PER_COMP_REQ, desc / 4),
 	};
 	txq->cq = ibv_create_cq(priv->ctx, desc, NULL, NULL, 0);
 	if (!txq->cq) {
@@ -314,12 +273,6 @@ mlx4_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		      (void *)dev, strerror(rte_errno));
 		goto error;
 	}
-	ret = mlx4_txq_alloc_elts(txq, desc);
-	if (ret) {
-		ERROR("%p: TXQ allocation failed: %s",
-		      (void *)dev, strerror(rte_errno));
-		goto error;
-	}
 	ret = ibv_modify_qp
 		(txq->qp,
 		 &(struct ibv_qp_attr){