From patchwork Tue Aug 1 08:05:34 2017
X-Patchwork-Submitter: Nélio Laranjeiro
X-Patchwork-Id: 27302
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Nelio Laranjeiro
To: dev@dpdk.org
Cc: Yongseok Koh
Date: Tue, 1 Aug 2017 10:05:34 +0200
Message-Id: <81d82e10384ed50ebb298896461b291acda75077.1501574380.git.nelio.laranjeiro@6wind.com>
Subject: [dpdk-dev] [PATCH 4/5] net/mlx5: prepare vector Rx ring at setup time

To use the vector PMD, four extra mbufs need to be added to the Rx mbuf
ring to avoid memory corruption.  These additional mbufs were allocated
in dev_start() whereas all other mbufs are allocated at queue setup
time.  This patch brings this allocation back to the same place as the
other mbuf allocations.
Signed-off-by: Nelio Laranjeiro
Acked-by: Yongseok Koh
---
 drivers/net/mlx5/mlx5_ethdev.c       |  1 -
 drivers/net/mlx5/mlx5_rxq.c          | 43 ++++++++++++++++++++++++++++--------
 drivers/net/mlx5/mlx5_rxtx.c         |  6 -----
 drivers/net/mlx5/mlx5_rxtx.h         |  1 -
 drivers/net/mlx5/mlx5_rxtx_vec_sse.c | 38 -------------------------------
 5 files changed, 34 insertions(+), 55 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index a233a73..ad5b28a 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -1540,7 +1540,6 @@ void
 priv_select_rx_function(struct priv *priv)
 {
 	if (priv_check_vec_rx_support(priv) > 0) {
-		priv_prep_vec_rx_function(priv);
 		priv->dev->rx_pkt_burst = mlx5_rx_burst_vec;
 		INFO("selected RX vectorized function");
 	} else {
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index b54d7b0..c90ec81 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -720,6 +720,27 @@ rxq_alloc_elts(struct rxq_ctrl *rxq_ctrl, unsigned int elts_n)
 		};
 		(*rxq_ctrl->rxq.elts)[i] = buf;
 	}
+	if (rxq_check_vec_support(&rxq_ctrl->rxq) > 0) {
+		struct rxq *rxq = &rxq_ctrl->rxq;
+		struct rte_mbuf *mbuf_init = &rxq->fake_mbuf;
+
+		assert(rxq->elts_n == rxq->cqe_n);
+		/* Initialize default rearm_data for vPMD. */
+		mbuf_init->data_off = RTE_PKTMBUF_HEADROOM;
+		rte_mbuf_refcnt_set(mbuf_init, 1);
+		mbuf_init->nb_segs = 1;
+		mbuf_init->port = rxq->port_id;
+		/*
+		 * prevent compiler reordering:
+		 * rearm_data covers previous fields.
+		 */
+		rte_compiler_barrier();
+		rxq->mbuf_initializer = *(uint64_t *)&mbuf_init->rearm_data;
+		/* Padding with a fake mbuf for vectorized Rx. */
+		for (i = 0; i < MLX5_VPMD_DESCS_PER_LOOP; ++i)
+			(*rxq->elts)[elts_n + i] = &rxq->fake_mbuf;
+		rxq->trim_elts = 1;
+	}
 	DEBUG("%p: allocated and configured %u segments (max %u packets)",
 	      (void *)rxq_ctrl, elts_n, elts_n / (1 << rxq_ctrl->rxq.sges_n));
 	assert(ret == 0);
@@ -799,9 +820,11 @@ rxq_setup(struct rxq_ctrl *tmpl)
 	struct ibv_cq *ibcq = tmpl->cq;
 	struct ibv_mlx5_cq_info cq_info;
 	struct mlx5_rwq *rwq = container_of(tmpl->wq, struct mlx5_rwq, wq);
-	struct rte_mbuf *(*elts)[1 << tmpl->rxq.elts_n] =
+	const uint16_t desc_n =
+		(1 << tmpl->rxq.elts_n) + tmpl->priv->rx_vec_en *
+		MLX5_VPMD_DESCS_PER_LOOP;
+	struct rte_mbuf *(*elts)[desc_n] =
 		rte_calloc_socket("RXQ", 1, sizeof(*elts), 0, tmpl->socket);
-
 	if (ibv_mlx5_exp_get_cq_info(ibcq, &cq_info)) {
 		ERROR("Unable to query CQ info. check your OFED.");
 		return ENOTSUP;
@@ -871,7 +894,9 @@ rxq_ctrl_setup(struct rte_eth_dev *dev, struct rxq_ctrl *rxq_ctrl,
 	} attr;
 	unsigned int mb_len = rte_pktmbuf_data_room_size(mp);
 	unsigned int cqe_n = desc - 1;
-	struct rte_mbuf *(*elts)[desc] = NULL;
+	const uint16_t desc_n =
+		desc + priv->rx_vec_en * MLX5_VPMD_DESCS_PER_LOOP;
+	struct rte_mbuf *(*elts)[desc_n] = NULL;
 	int ret = 0;

 	(void)conf; /* Thresholds configuration (ignored). */
@@ -1129,7 +1154,8 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	struct priv *priv = dev->data->dev_private;
 	struct rxq *rxq = (*priv->rxqs)[idx];
 	struct rxq_ctrl *rxq_ctrl = container_of(rxq, struct rxq_ctrl, rxq);
-	const uint16_t desc_pad = MLX5_VPMD_DESCS_PER_LOOP; /* For vPMD. */
+	const uint16_t desc_n =
+		desc + priv->rx_vec_en * MLX5_VPMD_DESCS_PER_LOOP;
 	int ret;

 	if (mlx5_is_secondary())
@@ -1162,9 +1188,8 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		/* Resize if rxq size is changed. */
 		if (rxq_ctrl->rxq.elts_n != log2above(desc)) {
 			rxq_ctrl = rte_realloc(rxq_ctrl,
-					       sizeof(*rxq_ctrl) +
-					       (desc + desc_pad) *
-					       sizeof(struct rte_mbuf *),
+					       sizeof(*rxq_ctrl) + desc_n *
+					       sizeof(struct rte_mbuf *),
					       RTE_CACHE_LINE_SIZE);
 			if (!rxq_ctrl) {
 				ERROR("%p: unable to reallocate queue index %u",
@@ -1175,8 +1200,8 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		}
 	} else {
 		rxq_ctrl = rte_calloc_socket("RXQ", 1, sizeof(*rxq_ctrl) +
-					     (desc + desc_pad) *
-					     sizeof(struct rte_mbuf *),
+					     desc_n *
+					     sizeof(struct rte_mbuf *),
 					     0, socket);
 		if (rxq_ctrl == NULL) {
 			ERROR("%p: unable to allocate queue index %u",
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 2572a16..83101f6 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -2027,9 +2027,3 @@ priv_check_vec_rx_support(struct priv *priv)
 	(void)priv;
 	return -ENOTSUP;
 }
-
-void __attribute__((weak))
-priv_prep_vec_rx_function(struct priv *priv)
-{
-	(void)priv;
-}
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index eda8b00..690b308 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -351,7 +351,6 @@ int priv_check_raw_vec_tx_support(struct priv *);
 int priv_check_vec_tx_support(struct priv *);
 int rxq_check_vec_support(struct rxq *);
 int priv_check_vec_rx_support(struct priv *);
-void priv_prep_vec_rx_function(struct priv *);
 uint16_t mlx5_tx_burst_raw_vec(void *, struct rte_mbuf **, uint16_t);
 uint16_t mlx5_tx_burst_vec(void *, struct rte_mbuf **, uint16_t);
 uint16_t mlx5_rx_burst_vec(void *, struct rte_mbuf **, uint16_t);
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_sse.c b/drivers/net/mlx5/mlx5_rxtx_vec_sse.c
index 40915f2..3339ac1 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_sse.c
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_sse.c
@@ -1363,41 +1363,3 @@ priv_check_vec_rx_support(struct priv *priv)
 		return -ENOTSUP;
 	return 1;
 }
-
-/**
- * Prepare for vectorized RX.
- *
- * @param priv
- *   Pointer to private structure.
- */
-void
-priv_prep_vec_rx_function(struct priv *priv)
-{
-	uint16_t i;
-
-	for (i = 0; i < priv->rxqs_n; ++i) {
-		struct rxq *rxq = (*priv->rxqs)[i];
-		struct rte_mbuf *mbuf_init = &rxq->fake_mbuf;
-		const uint16_t desc = 1 << rxq->elts_n;
-		int j;
-
-		assert(rxq->elts_n == rxq->cqe_n);
-		/* Initialize default rearm_data for vPMD. */
-		mbuf_init->data_off = RTE_PKTMBUF_HEADROOM;
-		rte_mbuf_refcnt_set(mbuf_init, 1);
-		mbuf_init->nb_segs = 1;
-		mbuf_init->port = rxq->port_id;
-		/*
-		 * prevent compiler reordering:
-		 * rearm_data covers previous fields.
-		 */
-		rte_compiler_barrier();
-		rxq->mbuf_initializer =
-			*(uint64_t *)&mbuf_init->rearm_data;
-		/* Padding with a fake mbuf for vectorized Rx. */
-		for (j = 0; j < MLX5_VPMD_DESCS_PER_LOOP; ++j)
-			(*rxq->elts)[desc + j] = &rxq->fake_mbuf;
-		/* Mark that it need to be cleaned up for rxq_alloc_elts(). */
-		rxq->trim_elts = 1;
-	}
-}