From patchwork Wed Aug 2 14:10:27 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Nélio Laranjeiro
X-Patchwork-Id: 27377
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
To: dev@dpdk.org
Cc: adrien.mazarguil@6wind.com
Date: Wed, 2 Aug 2017 16:10:27 +0200
Message-Id: <5be9116b1c00129e284068eb63c57e3a544d5aa8.1501681927.git.nelio.laranjeiro@6wind.com>
X-Mailer: git-send-email 2.1.4
Subject: [dpdk-dev] [PATCH v1 11/21] net/mlx5: add reference counter on DPDK Rx queues

Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
---
 drivers/net/mlx5/mlx5.c         |  16 +-
 drivers/net/mlx5/mlx5.h         |   1 +
 drivers/net/mlx5/mlx5_rxq.c     | 492 +++++++++++++++++++++-------------------
 drivers/net/mlx5/mlx5_rxtx.h    |  10 +
 drivers/net/mlx5/mlx5_trigger.c |  45 ++++
 5 files changed, 321 insertions(+), 243 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index c8be196..b37292c 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -156,17 +156,8 @@ mlx5_dev_close(struct rte_eth_dev *dev)
 	if (priv->rxqs != NULL) {
 		/* XXX race condition if mlx5_rx_burst() is still running. */
 		usleep(1000);
-		for (i = 0; (i != priv->rxqs_n); ++i) {
-			struct mlx5_rxq_data *rxq = (*priv->rxqs)[i];
-			struct mlx5_rxq_ctrl *rxq_ctrl;
-
-			if (rxq == NULL)
-				continue;
-			rxq_ctrl = container_of(rxq, struct mlx5_rxq_ctrl, rxq);
-			(*priv->rxqs)[i] = NULL;
-			mlx5_rxq_cleanup(rxq_ctrl);
-			rte_free(rxq_ctrl);
-		}
+		for (i = 0; (i != priv->rxqs_n); ++i)
+			mlx5_priv_rxq_release(priv, i);
 		priv->rxqs_n = 0;
 		priv->rxqs = NULL;
 	}
@@ -194,6 +185,9 @@ mlx5_dev_close(struct rte_eth_dev *dev)
 	i = mlx5_priv_rxq_ibv_verify(priv);
 	if (i)
 		WARN("%p: some Verbs Rx queue still remain", (void *)priv);
+	i = mlx5_priv_rxq_verify(priv);
+	if (i)
+		WARN("%p: some Rx queues still remain", (void *)priv);
 	i = mlx5_priv_txq_ibv_verify(priv);
 	if (i)
 		WARN("%p: some Verbs Tx queue still remain", (void *)priv);
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 4dd432b..448995e 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -146,6 +146,7 @@ struct priv {
 	struct rte_flow_drop *flow_drop_queue; /* Flow drop queue. */
 	TAILQ_HEAD(mlx5_flows, rte_flow) flows; /* RTE Flow rules. */
 	LIST_HEAD(mr, mlx5_mr) mr; /* Memory region. */
+	LIST_HEAD(rxq, mlx5_rxq_ctrl) rxqsctrl; /* DPDK Rx queues. */
 	LIST_HEAD(rxqibv, mlx5_rxq_ibv) rxqsibv; /* Verbs Rx queues. */
 	LIST_HEAD(txq, mlx5_txq_ctrl) txqsctrl; /* DPDK Tx queues. */
 	LIST_HEAD(txqibv, mlx5_txq_ibv) txqsibv; /* Verbs Tx queues. */
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 1663734..3b75a7e 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -37,6 +37,7 @@
 #include
 #include
 #include
+#include
 
 /* Verbs header. */
 /* ISO C doesn't support unnamed structs/unions, disabling -pedantic. */
@@ -631,16 +632,15 @@ priv_rehash_flows(struct priv *priv)
  *
  * @param rxq_ctrl
  *   Pointer to RX queue structure.
- * @param elts_n
- *   Number of elements to allocate.
  *
  * @return
  *   0 on success, errno value on failure.
  */
-static int
-rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl, unsigned int elts_n)
+int
+rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl)
 {
 	const unsigned int sges_n = 1 << rxq_ctrl->rxq.sges_n;
+	unsigned int elts_n = 1 << rxq_ctrl->rxq.elts_n;
 	unsigned int i;
 	int ret = 0;
@@ -669,9 +669,11 @@ rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl, unsigned int elts_n)
 		NB_SEGS(buf) = 1;
 		(*rxq_ctrl->rxq.elts)[i] = buf;
 	}
+	/* If Rx vector is activated. */
 	if (rxq_check_vec_support(&rxq_ctrl->rxq) > 0) {
 		struct mlx5_rxq_data *rxq = &rxq_ctrl->rxq;
 		struct rte_mbuf *mbuf_init = &rxq->fake_mbuf;
+		int j;
 
 		assert(rxq->elts_n == rxq->cqe_n);
 		/* Initialize default rearm_data for vPMD. */
@@ -684,10 +686,12 @@ rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl, unsigned int elts_n)
 		 * rearm_data covers previous fields.
 		 */
 		rte_compiler_barrier();
-		rxq->mbuf_initializer = *(uint64_t *)&mbuf_init->rearm_data;
+		rxq->mbuf_initializer =
+			*(uint64_t *)&mbuf_init->rearm_data;
 		/* Padding with a fake mbuf for vectorized Rx. */
-		for (i = 0; i < MLX5_VPMD_DESCS_PER_LOOP; ++i)
-			(*rxq->elts)[elts_n + i] = &rxq->fake_mbuf;
+		for (j = 0; j < MLX5_VPMD_DESCS_PER_LOOP; ++j)
+			(*rxq->elts)[elts_n + j] = &rxq->fake_mbuf;
+		/* Mark that it needs to be cleaned up by rxq_free_elts(). */
 	}
 	DEBUG("%p: allocated and configured %u segments (max %u packets)",
 	      (void *)rxq_ctrl, elts_n, elts_n / (1 << rxq_ctrl->rxq.sges_n));
@@ -740,174 +744,6 @@ rxq_free_elts(struct mlx5_rxq_ctrl *rxq_ctrl)
 }
 
 /**
- * Clean up a RX queue.
- *
- * Destroy objects, free allocated memory and reset the structure for reuse.
- *
- * @param rxq_ctrl
- *   Pointer to RX queue structure.
- */
-void
-mlx5_rxq_cleanup(struct mlx5_rxq_ctrl *rxq_ctrl)
-{
-	DEBUG("cleaning up %p", (void *)rxq_ctrl);
-	rxq_free_elts(rxq_ctrl);
-	if (rxq_ctrl->ibv)
-		mlx5_priv_rxq_ibv_release(rxq_ctrl->priv, rxq_ctrl->ibv);
-	memset(rxq_ctrl, 0, sizeof(*rxq_ctrl));
-}
-
-/**
- * Configure a RX queue.
- *
- * @param dev
- *   Pointer to Ethernet device structure.
- * @param rxq_ctrl
- *   Pointer to RX queue structure.
- * @param desc
- *   Number of descriptors to configure in queue.
- * @param socket
- *   NUMA socket on which memory must be allocated.
- * @param[in] conf
- *   Thresholds parameters.
- * @param mp
- *   Memory pool for buffer allocations.
- *
- * @return
- *   0 on success, errno value on failure.
- */
-int
-mlx5_rxq_ctrl_setup(struct rte_eth_dev *dev, struct mlx5_rxq_ctrl *rxq_ctrl,
-		    uint16_t desc, unsigned int socket,
-		    const struct rte_eth_rxconf *conf, struct rte_mempool *mp)
-{
-	struct priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_ctrl tmpl = {
-		.priv = priv,
-		.socket = socket,
-		.rxq = {
-			.elts = rte_calloc_socket("RXQ", 1,
-						  desc *
-						  sizeof(struct rte_mbuf *), 0,
-						  socket),
-			.elts_n = log2above(desc),
-			.mp = mp,
-			.rss_hash = priv->rxqs_n > 1,
-		},
-	};
-	unsigned int mb_len = rte_pktmbuf_data_room_size(mp);
-	const uint16_t desc_n =
-		desc + priv->rx_vec_en * MLX5_VPMD_DESCS_PER_LOOP;
-	struct rte_mbuf *(*elts)[desc_n] = NULL;
-	int ret = 0;
-
-	(void)conf; /* Thresholds configuration (ignored). */
-	if (dev->data->dev_conf.intr_conf.rxq)
-		tmpl.memory_channel = 1;
-	/* Enable scattered packets support for this queue if necessary. */
-	assert(mb_len >= RTE_PKTMBUF_HEADROOM);
-	if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
-	    (mb_len - RTE_PKTMBUF_HEADROOM)) {
-		tmpl.rxq.sges_n = 0;
-	} else if (dev->data->dev_conf.rxmode.enable_scatter) {
-		unsigned int size =
-			RTE_PKTMBUF_HEADROOM +
-			dev->data->dev_conf.rxmode.max_rx_pkt_len;
-		unsigned int sges_n;
-
-		/*
-		 * Determine the number of SGEs needed for a full packet
-		 * and round it to the next power of two.
-		 */
-		sges_n = log2above((size / mb_len) + !!(size % mb_len));
-		tmpl.rxq.sges_n = sges_n;
-		/* Make sure rxq.sges_n did not overflow. */
-		size = mb_len * (1 << tmpl.rxq.sges_n);
-		size -= RTE_PKTMBUF_HEADROOM;
-		if (size < dev->data->dev_conf.rxmode.max_rx_pkt_len) {
-			ERROR("%p: too many SGEs (%u) needed to handle"
-			      " requested maximum packet size %u",
-			      (void *)dev,
-			      1 << sges_n,
-			      dev->data->dev_conf.rxmode.max_rx_pkt_len);
-			return EOVERFLOW;
-		}
-	} else {
-		WARN("%p: the requested maximum Rx packet size (%u) is"
-		     " larger than a single mbuf (%u) and scattered"
-		     " mode has not been requested",
-		     (void *)dev,
-		     dev->data->dev_conf.rxmode.max_rx_pkt_len,
-		     mb_len - RTE_PKTMBUF_HEADROOM);
-	}
-	DEBUG("%p: maximum number of segments per packet: %u",
-	      (void *)dev, 1 << tmpl.rxq.sges_n);
-	if (desc % (1 << tmpl.rxq.sges_n)) {
-		ERROR("%p: number of RX queue descriptors (%u) is not a"
-		      " multiple of SGEs per packet (%u)",
-		      (void *)dev,
-		      desc,
-		      1 << tmpl.rxq.sges_n);
-		return EINVAL;
-	}
-	/* Toggle RX checksum offload if hardware supports it. */
-	if (priv->hw_csum)
-		tmpl.rxq.csum = !!dev->data->dev_conf.rxmode.hw_ip_checksum;
-	if (priv->hw_csum_l2tun)
-		tmpl.rxq.csum_l2tun =
-			!!dev->data->dev_conf.rxmode.hw_ip_checksum;
-	/* Configure VLAN stripping. */
-	tmpl.rxq.vlan_strip = (priv->hw_vlan_strip &&
-			       !!dev->data->dev_conf.rxmode.hw_vlan_strip);
-	/* By default, FCS (CRC) is stripped by hardware. */
-	if (dev->data->dev_conf.rxmode.hw_strip_crc) {
-		tmpl.rxq.crc_present = 0;
-	} else if (priv->hw_fcs_strip) {
-		tmpl.rxq.crc_present = 1;
-	} else {
-		WARN("%p: CRC stripping has been disabled but will still"
-		     " be performed by hardware, make sure MLNX_OFED and"
-		     " firmware are up to date",
-		     (void *)dev);
-		tmpl.rxq.crc_present = 0;
-	}
-	DEBUG("%p: CRC stripping is %s, %u bytes will be subtracted from"
-	      " incoming frames to hide it",
-	      (void *)dev,
-	      tmpl.rxq.crc_present ? "disabled" : "enabled",
-	      tmpl.rxq.crc_present << 2);
-	/* Save port ID. */
-	tmpl.rxq.port_id = dev->data->port_id;
-	DEBUG("%p: RTE port ID: %u", (void *)rxq_ctrl, tmpl.rxq.port_id);
-	ret = rxq_alloc_elts(&tmpl, desc);
-	if (ret) {
-		ERROR("%p: RXQ allocation failed: %s",
-		      (void *)dev, strerror(ret));
-		goto error;
-	}
-	/* Clean up rxq in case we're reinitializing it. */
-	DEBUG("%p: cleaning-up old rxq just in case", (void *)rxq_ctrl);
-	mlx5_rxq_cleanup(rxq_ctrl);
-	/* Move mbuf pointers to dedicated storage area in RX queue. */
-	elts = (void *)(rxq_ctrl + 1);
-	rte_memcpy(elts, tmpl.rxq.elts, sizeof(*elts));
-#ifndef NDEBUG
-	memset(tmpl.rxq.elts, 0x55, sizeof(*elts));
-#endif
-	rte_free(tmpl.rxq.elts);
-	tmpl.rxq.elts = elts;
-	*rxq_ctrl = tmpl;
-	DEBUG("%p: rxq updated with %p", (void *)rxq_ctrl, (void *)&tmpl);
-	assert(ret == 0);
-	return 0;
-error:
-	rte_free(tmpl.rxq.elts);
-	mlx5_rxq_cleanup(&tmpl);
-	assert(ret > 0);
-	return ret;
-}
-
-/**
  * DPDK callback to configure a RX queue.
  *
  * @param dev
@@ -935,13 +771,11 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	struct mlx5_rxq_data *rxq = (*priv->rxqs)[idx];
 	struct mlx5_rxq_ctrl *rxq_ctrl =
 		container_of(rxq, struct mlx5_rxq_ctrl, rxq);
-	const uint16_t desc_n =
-		desc + priv->rx_vec_en * MLX5_VPMD_DESCS_PER_LOOP;
-	int ret;
+	int ret = 0;
 
+	(void)conf;
 	if (mlx5_is_secondary())
 		return -E_RTE_SECONDARY;
-
 	priv_lock(priv);
 	if (!rte_is_power_of_2(desc)) {
 		desc = 1 << log2above(desc);
@@ -957,54 +791,23 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		priv_unlock(priv);
 		return -EOVERFLOW;
 	}
-	if (rxq != NULL) {
-		DEBUG("%p: reusing already allocated queue index %u (%p)",
-		      (void *)dev, idx, (void *)rxq);
-		if (dev->data->dev_started) {
-			priv_unlock(priv);
-			return -EEXIST;
-		}
-		(*priv->rxqs)[idx] = NULL;
-		mlx5_rxq_cleanup(rxq_ctrl);
-		/* Resize if rxq size is changed. */
-		if (rxq_ctrl->rxq.elts_n != log2above(desc)) {
-			rxq_ctrl = rte_realloc(rxq_ctrl,
-					       sizeof(*rxq_ctrl) + desc_n *
-					       sizeof(struct rte_mbuf *),
-					       RTE_CACHE_LINE_SIZE);
-			if (!rxq_ctrl) {
-				ERROR("%p: unable to reallocate queue index %u",
-				      (void *)dev, idx);
-				priv_unlock(priv);
-				return -ENOMEM;
-			}
-		}
-	} else {
-		rxq_ctrl = rte_calloc_socket("RXQ", 1, sizeof(*rxq_ctrl) +
-					     desc_n *
-					     sizeof(struct rte_mbuf *),
-					     0, socket);
-		if (rxq_ctrl == NULL) {
-			ERROR("%p: unable to allocate queue index %u",
-			      (void *)dev, idx);
-			priv_unlock(priv);
-			return -ENOMEM;
-		}
+	if (!mlx5_priv_rxq_releasable(priv, idx)) {
+		ret = EBUSY;
+		ERROR("%p: unable to release queue index %u",
+		      (void *)dev, idx);
+		goto out;
 	}
-	ret = mlx5_rxq_ctrl_setup(dev, rxq_ctrl, desc, socket, conf, mp);
-	if (ret) {
-		rte_free(rxq_ctrl);
+	mlx5_priv_rxq_release(priv, idx);
+	rxq_ctrl = mlx5_priv_rxq_new(priv, idx, desc, socket, mp);
+	if (!rxq_ctrl) {
+		ERROR("%p: unable to allocate queue index %u",
+		      (void *)dev, idx);
+		ret = ENOMEM;
 		goto out;
 	}
-	rxq_ctrl->rxq.stats.idx = idx;
 	DEBUG("%p: adding RX queue %p to list",
 	      (void *)dev, (void *)rxq_ctrl);
 	(*priv->rxqs)[idx] = &rxq_ctrl->rxq;
-	rxq_ctrl->ibv = mlx5_priv_rxq_ibv_new(priv, idx);
-	if (!rxq_ctrl->ibv) {
-		ret = EAGAIN;
-		goto out;
-	}
 out:
 	priv_unlock(priv);
 	return -ret;
@@ -1022,7 +825,6 @@ mlx5_rx_queue_release(void *dpdk_rxq)
 	struct mlx5_rxq_data *rxq = (struct mlx5_rxq_data *)dpdk_rxq;
 	struct mlx5_rxq_ctrl *rxq_ctrl;
 	struct priv *priv;
-	unsigned int i;
 
 	if (mlx5_is_secondary())
 		return;
@@ -1032,18 +834,10 @@ mlx5_rx_queue_release(void *dpdk_rxq)
 	rxq_ctrl = container_of(rxq, struct mlx5_rxq_ctrl, rxq);
 	priv = rxq_ctrl->priv;
 	priv_lock(priv);
-	if (!mlx5_priv_rxq_ibv_releasable(priv, rxq_ctrl->ibv))
+	if (!mlx5_priv_rxq_releasable(priv, rxq_ctrl->rxq.stats.idx))
 		rte_panic("Rx queue %p is still used by a flow and cannot be"
 			  " removed\n", (void *)rxq_ctrl);
-	for (i = 0; (i != priv->rxqs_n); ++i)
-		if ((*priv->rxqs)[i] == rxq) {
-			DEBUG("%p: removing RX queue %p from list",
-			      (void *)priv->dev, (void *)rxq_ctrl);
-			(*priv->rxqs)[i] = NULL;
-			break;
-		}
-	mlx5_rxq_cleanup(rxq_ctrl);
-	rte_free(rxq_ctrl);
+	mlx5_priv_rxq_release(priv, rxq_ctrl->rxq.stats.idx);
 	priv_unlock(priv);
 }
@@ -1511,3 +1305,237 @@ mlx5_priv_rxq_ibv_releasable(struct priv *priv, struct mlx5_rxq_ibv *rxq)
 	(void)priv;
 	return (rte_atomic32_read(&rxq->refcnt) == 1);
 }
+
+/**
+ * Create a DPDK Rx queue.
+ *
+ * @param priv
+ *   Pointer to private structure.
+ * @param idx
+ *   RX queue index.
+ * @param desc
+ *   Number of descriptors to configure in queue.
+ * @param socket
+ *   NUMA socket on which memory must be allocated.
+ *
+ * @return
+ *   A DPDK queue object on success.
+ */
+struct mlx5_rxq_ctrl*
+mlx5_priv_rxq_new(struct priv *priv, uint16_t idx, uint16_t desc,
+		  unsigned int socket, struct rte_mempool *mp)
+{
+	struct rte_eth_dev *dev = priv->dev;
+	struct mlx5_rxq_ctrl *tmpl;
+	unsigned int mb_len = rte_pktmbuf_data_room_size(mp);
+
+	tmpl = rte_calloc_socket("RXQ", 1,
+				 sizeof(*tmpl) +
+				 desc * sizeof(struct rte_mbuf *),
+				 0, socket);
+	if (!tmpl)
+		return NULL;
+	if (priv->dev->data->dev_conf.intr_conf.rxq)
+		tmpl->memory_channel = 1;
+	/* Enable scattered packets support for this queue if necessary. */
+	assert(mb_len >= RTE_PKTMBUF_HEADROOM);
+	if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
+	    (mb_len - RTE_PKTMBUF_HEADROOM)) {
+		tmpl->rxq.sges_n = 0;
+	} else if (dev->data->dev_conf.rxmode.enable_scatter) {
+		unsigned int size =
+			RTE_PKTMBUF_HEADROOM +
+			dev->data->dev_conf.rxmode.max_rx_pkt_len;
+		unsigned int sges_n;
+
+		/*
+		 * Determine the number of SGEs needed for a full packet
+		 * and round it to the next power of two.
+		 */
+		sges_n = log2above((size / mb_len) + !!(size % mb_len));
+		tmpl->rxq.sges_n = sges_n;
+		/* Make sure rxq.sges_n did not overflow. */
+		size = mb_len * (1 << tmpl->rxq.sges_n);
+		size -= RTE_PKTMBUF_HEADROOM;
+		if (size < dev->data->dev_conf.rxmode.max_rx_pkt_len) {
+			ERROR("%p: too many SGEs (%u) needed to handle"
+			      " requested maximum packet size %u",
+			      (void *)dev,
+			      1 << sges_n,
+			      dev->data->dev_conf.rxmode.max_rx_pkt_len);
+			goto error;
+		}
+	} else {
+		WARN("%p: the requested maximum Rx packet size (%u) is"
+		     " larger than a single mbuf (%u) and scattered"
+		     " mode has not been requested",
+		     (void *)dev,
+		     dev->data->dev_conf.rxmode.max_rx_pkt_len,
+		     mb_len - RTE_PKTMBUF_HEADROOM);
+	}
+	DEBUG("%p: maximum number of segments per packet: %u",
+	      (void *)dev, 1 << tmpl->rxq.sges_n);
+	if (desc % (1 << tmpl->rxq.sges_n)) {
+		ERROR("%p: number of RX queue descriptors (%u) is not a"
+		      " multiple of SGEs per packet (%u)",
+		      (void *)dev,
+		      desc,
+		      1 << tmpl->rxq.sges_n);
+		goto error;
+	}
+	/* Toggle RX checksum offload if hardware supports it. */
+	if (priv->hw_csum)
+		tmpl->rxq.csum = !!dev->data->dev_conf.rxmode.hw_ip_checksum;
+	if (priv->hw_csum_l2tun)
+		tmpl->rxq.csum_l2tun =
+			!!dev->data->dev_conf.rxmode.hw_ip_checksum;
+	/* Configure VLAN stripping. */
+	tmpl->rxq.vlan_strip = (priv->hw_vlan_strip &&
+				!!dev->data->dev_conf.rxmode.hw_vlan_strip);
+	/* By default, FCS (CRC) is stripped by hardware. */
+	if (dev->data->dev_conf.rxmode.hw_strip_crc) {
+		tmpl->rxq.crc_present = 0;
+	} else if (priv->hw_fcs_strip) {
+		tmpl->rxq.crc_present = 1;
+	} else {
+		WARN("%p: CRC stripping has been disabled but will still"
+		     " be performed by hardware, make sure MLNX_OFED and"
+		     " firmware are up to date",
+		     (void *)dev);
+		tmpl->rxq.crc_present = 0;
+	}
+	DEBUG("%p: CRC stripping is %s, %u bytes will be subtracted from"
+	      " incoming frames to hide it",
+	      (void *)dev,
+	      tmpl->rxq.crc_present ? "disabled" : "enabled",
+	      tmpl->rxq.crc_present << 2);
+	/* Save port ID. */
+	tmpl->rxq.port_id = dev->data->port_id;
+	tmpl->priv = priv;
+	tmpl->rxq.mp = mp;
+	tmpl->rxq.stats.idx = idx;
+	tmpl->rxq.elts_n = log2above(desc);
+	tmpl->rxq.elts =
+		(struct rte_mbuf *(*)[1 << tmpl->rxq.elts_n])(tmpl + 1);
+	rte_atomic32_inc(&tmpl->refcnt);
+	DEBUG("%p: Rx queue %p: refcnt %d", (void *)priv,
+	      (void *)tmpl, rte_atomic32_read(&tmpl->refcnt));
+	LIST_INSERT_HEAD(&priv->rxqsctrl, tmpl, next);
+	return tmpl;
+error:
+	rte_free(tmpl);
+	return NULL;
+}
+
+/**
+ * Get a Rx queue.
+ *
+ * @param priv
+ *   Pointer to private structure.
+ * @param idx
+ *   RX queue index.
+ *
+ * @return
+ *   A pointer to the queue if it exists.
+ */
+struct mlx5_rxq_ctrl*
+mlx5_priv_rxq_get(struct priv *priv, uint16_t idx)
+{
+	struct mlx5_rxq_ctrl *ctrl = NULL;
+
+	if ((*priv->rxqs)[idx]) {
+		ctrl = container_of((*priv->rxqs)[idx], struct mlx5_rxq_ctrl,
+				    rxq);
+		struct mlx5_rxq_ibv *ibv;
+
+		(void)ibv;
+		ibv = mlx5_priv_rxq_ibv_get(priv, idx);
+		rte_atomic32_inc(&ctrl->refcnt);
+		DEBUG("%p: Rx queue %p: refcnt %d", (void *)priv,
+		      (void *)ctrl, rte_atomic32_read(&ctrl->refcnt));
+	}
+	return ctrl;
+}
+
+/**
+ * Release a Rx queue.
+ *
+ * @param priv
+ *   Pointer to private structure.
+ * @param idx
+ *   RX queue index.
+ *
+ * @return
+ *   0 on success, errno value on failure.
+ */
+int
+mlx5_priv_rxq_release(struct priv *priv, uint16_t idx)
+{
+	struct mlx5_rxq_ctrl *rxq;
+
+	if (!(*priv->rxqs)[idx])
+		return 0;
+	rxq = container_of((*priv->rxqs)[idx], struct mlx5_rxq_ctrl, rxq);
+	assert(rxq->priv);
+	if (rxq->ibv) {
+		int ret;
+
+		ret = mlx5_priv_rxq_ibv_release(rxq->priv, rxq->ibv);
+		if (!ret)
+			rxq->ibv = NULL;
+	}
+	DEBUG("%p: Rx queue %p: refcnt %d", (void *)priv,
+	      (void *)rxq, rte_atomic32_read(&rxq->refcnt));
+	if (rte_atomic32_dec_and_test(&rxq->refcnt)) {
+		rxq_free_elts(rxq);
+		LIST_REMOVE(rxq, next);
+		rte_free(rxq);
+		(*priv->rxqs)[idx] = NULL;
+		return 0;
+	}
+	return EBUSY;
+}
+
+/**
+ * Verify if the queue can be released.
+ *
+ * @param priv
+ *   Pointer to private structure.
+ * @param idx
+ *   RX queue index.
+ *
+ * @return
+ *   1 if the queue can be released.
+ */
+int
+mlx5_priv_rxq_releasable(struct priv *priv, uint16_t idx)
+{
+	struct mlx5_rxq_ctrl *rxq;
+
+	if (!(*priv->rxqs)[idx])
+		return -1;
+	rxq = container_of((*priv->rxqs)[idx], struct mlx5_rxq_ctrl, rxq);
+	return (rte_atomic32_read(&rxq->refcnt) == 1);
+}
+
+/**
+ * Verify the Rx queue list is empty.
+ *
+ * @param priv
+ *   Pointer to private structure.
+ *
+ * @return
+ *   The number of objects not released.
+ */
+int
+mlx5_priv_rxq_verify(struct priv *priv)
+{
+	struct mlx5_rxq_ctrl *rxq;
+	int ret = 0;
+
+	LIST_FOREACH(rxq, &priv->rxqsctrl, next) {
+		DEBUG("%p: Rx queue %p still referenced", (void *)priv,
+		      (void *)rxq);
+		++ret;
+	}
+	return ret;
+}
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index 13b50a1..672793a 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -143,6 +143,8 @@ struct mlx5_rxq_ibv {
 
 /* RX queue control descriptor. */
 struct mlx5_rxq_ctrl {
+	LIST_ENTRY(mlx5_rxq_ctrl) next; /* Pointer to the next element. */
+	rte_atomic32_t refcnt; /* Reference counter. */
 	struct priv *priv; /* Back pointer to private data. */
 	struct mlx5_rxq_ibv *ibv; /* Verbs elements. */
 	struct mlx5_rxq_data rxq; /* Data path structure. */
@@ -335,6 +337,14 @@
 struct mlx5_rxq_ibv* mlx5_priv_rxq_ibv_get(struct priv *priv, uint16_t idx);
 int mlx5_priv_rxq_ibv_release(struct priv *priv, struct mlx5_rxq_ibv *rxq);
 int mlx5_priv_rxq_ibv_releasable(struct priv *priv, struct mlx5_rxq_ibv *rxq);
 int mlx5_priv_rxq_ibv_verify(struct priv *priv);
+struct mlx5_rxq_ctrl* mlx5_priv_rxq_new(struct priv *priv, uint16_t idx,
+					uint16_t desc, unsigned int socket,
+					struct rte_mempool *mp);
+struct mlx5_rxq_ctrl* mlx5_priv_rxq_get(struct priv *priv, uint16_t idx);
+int mlx5_priv_rxq_release(struct priv *priv, uint16_t idx);
+int mlx5_priv_rxq_releasable(struct priv *priv, uint16_t idx);
+int mlx5_priv_rxq_verify(struct priv *priv);
+int rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl);
 
 /* mlx5_txq.c */
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 7df85aa..bedce0a 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -78,6 +78,41 @@ priv_txq_start(struct priv *priv)
 	return -ret;
 }
 
+static void
+priv_rxq_stop(struct priv *priv)
+{
+	unsigned int i;
+
+	for (i = 0; i != priv->rxqs_n; ++i)
+		mlx5_priv_rxq_release(priv, i);
+}
+
+static int
+priv_rxq_start(struct priv *priv)
+{
+	unsigned int i;
+	int ret = 0;
+
+	for (i = 0; i != priv->rxqs_n; ++i) {
+		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_priv_rxq_get(priv, i);
+
+		if (!rxq_ctrl)
+			continue;
+		ret = rxq_alloc_elts(rxq_ctrl);
+		if (ret)
+			goto error;
+		rxq_ctrl->ibv = mlx5_priv_rxq_ibv_new(priv, i);
+		if (!rxq_ctrl->ibv) {
+			ret = ENOMEM;
+			goto error;
+		}
+	}
+	return -ret;
+error:
+	priv_rxq_stop(priv);
+	return -ret;
+}
+
 /**
  * DPDK callback to start the device.
  *
@@ -113,6 +148,14 @@ mlx5_dev_start(struct rte_eth_dev *dev)
 	}
 	/* Update send callback. */
 	priv_select_tx_function(priv);
+	err = priv_rxq_start(priv);
+	if (err) {
+		ERROR("%p: RXQ allocation failed: %s",
+		      (void *)dev, strerror(err));
+		goto error;
+	}
+	/* Update receive callback. */
+	priv_select_rx_function(priv);
 	err = priv_create_hash_rxqs(priv);
 	if (!err)
 		err = priv_rehash_flows(priv);
@@ -147,6 +190,7 @@ mlx5_dev_start(struct rte_eth_dev *dev)
 	priv_mac_addrs_disable(priv);
 	priv_destroy_hash_rxqs(priv);
 	priv_flow_stop(priv);
+	priv_rxq_stop(priv);
 	priv_txq_stop(priv);
 	priv_unlock(priv);
 	return -err;
@@ -180,6 +224,7 @@ mlx5_dev_stop(struct rte_eth_dev *dev)
 		priv_mr_release(priv, mr);
 	}
 	priv_txq_stop(priv);
+	priv_rxq_stop(priv);
 	priv_dev_interrupt_handler_uninstall(priv, dev);
 	priv_unlock(priv);
 }
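
Editor's note on the life cycle this patch introduces: a queue is created with one
reference owned by the control path; every additional user (the Verbs queue, a flow,
later an indirection table) takes its own reference through mlx5_priv_rxq_get(), and
the control structure is freed only when mlx5_priv_rxq_release() drops the last one.
Below is a minimal standalone sketch of that scheme, not part of the patch: it uses a
hypothetical "queue" type and C11 atomics in place of rte_atomic32_t, with names and
the EBUSY convention chosen purely for illustration.

#include <assert.h>
#include <errno.h>
#include <stdatomic.h>
#include <stdlib.h>

struct queue {
	atomic_int refcnt; /* number of current users */
};

/* Like mlx5_priv_rxq_new(): the creator holds the first reference. */
static struct queue *
queue_new(void)
{
	struct queue *q = calloc(1, sizeof(*q));

	if (!q)
		return NULL;
	atomic_store(&q->refcnt, 1);
	return q;
}

/* Like mlx5_priv_rxq_get(): each additional user takes a reference. */
static struct queue *
queue_get(struct queue *q)
{
	if (q)
		atomic_fetch_add(&q->refcnt, 1);
	return q;
}

/*
 * Like mlx5_priv_rxq_release(): the object is freed only when the last
 * reference is dropped; otherwise the caller learns it is still in use.
 */
static int
queue_release(struct queue *q)
{
	if (atomic_fetch_sub(&q->refcnt, 1) == 1) {
		free(q);
		return 0;
	}
	return EBUSY;
}

int
main(void)
{
	struct queue *q = queue_new();

	queue_get(q);                      /* second user, e.g. a flow */
	assert(queue_release(q) == EBUSY); /* still held by the creator */
	assert(queue_release(q) == 0);     /* last reference: freed */
	return 0;
}

This is also why mlx5_dev_start()/mlx5_dev_stop() pair up the way they do:
priv_rxq_start() takes a reference per configured queue (allocating elements and the
Verbs queue under it), while priv_rxq_stop() and mlx5_dev_close() simply release
references and let the final release perform the actual cleanup.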