From patchwork Wed Aug 2 14:10:30 2017
X-Patchwork-Submitter: Nélio Laranjeiro
X-Patchwork-Id: 27380
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
To: dev@dpdk.org
Cc: adrien.mazarguil@6wind.com
Date: Wed, 2 Aug 2017 16:10:30 +0200
Message-Id: <77dd76fe2b05830a85911e708095d88299969607.1501681927.git.nelio.laranjeiro@6wind.com>
X-Mailer: git-send-email 2.1.4
Subject: [dpdk-dev] [PATCH v1 14/21] net/mlx5: add Hash Rx queue object

Hash Rx queue keeps the RSS key, the Verbs hash fields and a reference
to an indirection table inside a single reference-counted object backed
by a Verbs QP.  Flows using the same RSS configuration can then share
it instead of each owning a dedicated QP.

Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
---
 drivers/net/mlx5/mlx5.c      |   3 +
 drivers/net/mlx5/mlx5.h      |   1 +
 drivers/net/mlx5/mlx5_flow.c | 137 +++++++++++++++----------------
 drivers/net/mlx5/mlx5_rxq.c  | 172 +++++++++++++++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_rxtx.h |  19 +++++
 5 files changed, 250 insertions(+), 82 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index d5cb6e4..52cbb20 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -182,6 +182,9 @@ mlx5_dev_close(struct rte_eth_dev *dev)
 	}
 	if (priv->reta_idx != NULL)
 		rte_free(priv->reta_idx);
+	i = mlx5_priv_hrxq_ibv_verify(priv);
+	if (i)
+		WARN("%p: some hash Rx queues still remain", (void *)priv);
 	i = mlx5_priv_ind_table_ibv_verify(priv);
 	if (i)
 		WARN("%p: some Indirection table still remain", (void*)priv);
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 081c2c6..3c2e719 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -147,6 +147,7 @@ struct priv {
 	LIST_HEAD(mr, mlx5_mr) mr; /* Memory region. */
 	LIST_HEAD(rxq, mlx5_rxq_ctrl) rxqsctrl; /* DPDK Rx queues. */
 	LIST_HEAD(rxqibv, mlx5_rxq_ibv) rxqsibv; /* Verbs Rx queues. */
+	LIST_HEAD(hrxq, mlx5_hrxq) hrxqs; /* Verbs Hash Rx queues. */
 	LIST_HEAD(txq, mlx5_txq_ctrl) txqsctrl; /* DPDK Tx queues. */
 	LIST_HEAD(txqibv, mlx5_txq_ibv) txqsibv; /* Verbs Tx queues. */
 	/* Verbs Indirection tables. */
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 049a8e2..f258567 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -90,13 +90,9 @@ mlx5_flow_create_vxlan(const struct rte_flow_item *item,
 struct rte_flow {
 	TAILQ_ENTRY(rte_flow) next; /**< Pointer to the next flow structure. */
 	struct ibv_exp_flow_attr *ibv_attr; /**< Pointer to Verbs attributes. */
-	struct mlx5_ind_table_ibv *ind_table; /**< Indirection table. */
-	struct ibv_qp *qp; /**< Verbs queue pair. */
 	struct ibv_exp_flow *ibv_flow; /**< Verbs flow. */
-	struct ibv_exp_wq *wq; /**< Verbs work queue. */
-	struct ibv_cq *cq; /**< Verbs completion queue. */
+	struct mlx5_hrxq *hrxq; /**< Hash Rx queue. */
 	uint32_t mark:1; /**< Set if the flow is marked. */
-	uint64_t hash_fields; /**< Fields that participate in the hash. */
 };
 
 /** Static initializer for items. */
@@ -1033,56 +1029,36 @@ priv_flow_create_action_queue(struct priv *priv,
 				   NULL, "cannot allocate flow memory");
 		return NULL;
 	}
-	for (i = 0; i != flow->actions.queues_n; ++i) {
-		struct mlx5_rxq_data *q = (*priv->rxqs)[flow->actions.queues[i]];
-
-		q->mark |= flow->actions.mark;
-	}
 	rte_flow->mark = flow->actions.mark;
 	rte_flow->ibv_attr = flow->ibv_attr;
-	rte_flow->hash_fields = flow->hash_fields;
-	rte_flow->ind_table =
-		mlx5_priv_ind_table_ibv_get(priv, flow->actions.queues,
+	rte_flow->hrxq = mlx5_priv_hrxq_get(priv, rss_hash_default_key,
+					    rss_hash_default_key_len,
+					    flow->hash_fields,
+					    flow->actions.queues,
 					    flow->actions.queues_n);
-	if (!rte_flow->ind_table) {
-		rte_flow->ind_table =
-			mlx5_priv_ind_table_ibv_new(priv, flow->actions.queues,
-						    flow->actions.queues_n);
-		if (!rte_flow->ind_table) {
-			rte_flow_error_set(error, ENOMEM,
-					   RTE_FLOW_ERROR_TYPE_HANDLE,
-					   NULL,
-					   "cannot allocate indirection table");
-			goto error;
-		}
+	if (rte_flow->hrxq) {
+		rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_HANDLE,
+				   NULL, "duplicated flow");
+		goto error;
 	}
-	rte_flow->qp = ibv_exp_create_qp(
-		priv->ctx,
-		&(struct ibv_exp_qp_init_attr){
-			.qp_type = IBV_QPT_RAW_PACKET,
-			.comp_mask =
-				IBV_EXP_QP_INIT_ATTR_PD |
-				IBV_EXP_QP_INIT_ATTR_PORT |
-				IBV_EXP_QP_INIT_ATTR_RX_HASH,
-			.pd = priv->pd,
-			.rx_hash_conf = &(struct ibv_exp_rx_hash_conf){
-				.rx_hash_function =
-					IBV_EXP_RX_HASH_FUNC_TOEPLITZ,
-				.rx_hash_key_len = rss_hash_default_key_len,
-				.rx_hash_key = rss_hash_default_key,
-				.rx_hash_fields_mask = rte_flow->hash_fields,
-				.rwq_ind_tbl = rte_flow->ind_table->ind_table,
-			},
-			.port_num = priv->port,
-		});
-	if (!rte_flow->qp) {
+	rte_flow->hrxq = mlx5_priv_hrxq_new(priv, rss_hash_default_key,
+					    rss_hash_default_key_len,
+					    flow->hash_fields,
+					    flow->actions.queues,
+					    flow->actions.queues_n);
+	if (!rte_flow->hrxq) {
 		rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_HANDLE,
-				   NULL, "cannot allocate QP");
+				   NULL, "cannot create hash rxq");
 		goto error;
 	}
+	for (i = 0; i != flow->actions.queues_n; ++i) {
+		struct mlx5_rxq_data *q = (*priv->rxqs)[flow->actions.queues[i]];
+
+		q->mark |= flow->actions.mark;
+	}
 	if (!priv->dev->data->dev_started)
 		return rte_flow;
-	rte_flow->ibv_flow = ibv_exp_create_flow(rte_flow->qp,
+	rte_flow->ibv_flow = ibv_exp_create_flow(rte_flow->hrxq->qp,
 						 rte_flow->ibv_attr);
 	if (!rte_flow->ibv_flow) {
 		rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_HANDLE,
@@ -1092,10 +1068,8 @@ priv_flow_create_action_queue(struct priv *priv,
 	return rte_flow;
 error:
 	assert(rte_flow);
-	if (rte_flow->qp)
-		ibv_destroy_qp(rte_flow->qp);
-	if (rte_flow->ind_table)
-		mlx5_priv_ind_table_ibv_release(priv, rte_flow->ind_table);
+	if (rte_flow->hrxq)
+		mlx5_priv_hrxq_release(priv, rte_flow->hrxq);
 	rte_free(rte_flow);
 	return NULL;
 }
@@ -1210,40 +1184,41 @@ priv_flow_destroy(struct priv *priv,
 		  struct rte_flow *flow)
 {
 	unsigned int i;
-
-	TAILQ_REMOVE(&priv->flows, flow, next);
-	if (flow->ibv_flow)
-		claim_zero(ibv_exp_destroy_flow(flow->ibv_flow));
-	if (flow->qp)
-		claim_zero(ibv_destroy_qp(flow->qp));
-	for (i = 0; i != flow->ind_table->queues_n; ++i) {
+	uint16_t *queues;
+	uint16_t queues_n;
+
+	queues = flow->hrxq->ind_table->queues;
+	queues_n = flow->hrxq->ind_table->queues_n;
+	TAILQ_REMOVE(&priv->flows, flow, next);
+	if (!flow->mark)
+		goto out;
+	for (i = 0; i != queues_n; ++i) {
 		struct rte_flow *tmp;
-		struct mlx5_rxq_data *rxq =
-			(*priv->rxqs)[flow->ind_table->queues[i]];
+		struct mlx5_rxq_data *rxq = (*priv->rxqs)[queues[i]];
+		int mark = 0;
 
 		/*
 		 * To remove the mark from the queue, the queue must not be
 		 * present in any other marked flow (RSS or not).
 		 */
-		if (flow->mark) {
-			int mark = 0;
-
-			TAILQ_FOREACH(tmp, &priv->flows, next) {
-				unsigned int j;
-
-				if (!tmp->mark)
-					continue;
-				for (j = 0;
-				     (j != tmp->ind_table->queues_n) && !mark;
-				     j++)
-					if (tmp->ind_table->queues[j] ==
-					    flow->ind_table->queues[i])
-						mark = 1;
-			}
-			rxq->mark = mark;
+		TAILQ_FOREACH(tmp, &priv->flows, next) {
+			unsigned int j;
+
+			if (!tmp->mark)
+				continue;
+			for (j = 0;
+			     (j != tmp->hrxq->ind_table->queues_n) && !mark;
+			     j++)
+				if (tmp->hrxq->ind_table->queues[j] ==
+				    queues[i])
+					mark = 1;
 		}
+		rxq->mark = mark;
 	}
-	mlx5_priv_ind_table_ibv_release(priv, flow->ind_table);
+out:
+	if (flow->ibv_flow)
+		claim_zero(ibv_exp_destroy_flow(flow->ibv_flow));
+	mlx5_priv_hrxq_release(priv, flow->hrxq);
 	rte_free(flow->ibv_attr);
 	DEBUG("Flow destroyed %p", (void *)flow);
 	rte_free(flow);
@@ -1345,10 +1320,8 @@ priv_flow_start(struct priv *priv)
 	struct rte_flow *flow;
 
 	TAILQ_FOREACH(flow, &priv->flows, next) {
-		struct ibv_qp *qp;
-
-		qp = flow->qp;
-		flow->ibv_flow = ibv_exp_create_flow(qp, flow->ibv_attr);
+		flow->ibv_flow = ibv_exp_create_flow(flow->hrxq->qp,
+						     flow->ibv_attr);
 		if (!flow->ibv_flow) {
 			DEBUG("Flow %p cannot be applied", (void *)flow);
 			rte_errno = EINVAL;
@@ -1358,8 +1331,8 @@ priv_flow_start(struct priv *priv)
 		if (flow->mark) {
 			unsigned int n;
 
-			for (n = 0; n < flow->ind_table->queues_n; ++n) {
-				uint16_t idx = flow->ind_table->queues[n];
+			for (n = 0; n < flow->hrxq->ind_table->queues_n; ++n) {
+				uint16_t idx = flow->hrxq->ind_table->queues[n];
 				(*priv->rxqs)[idx]->mark = 1;
 			}
 		}
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index bd6f966..076b575 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1688,3 +1688,175 @@ mlx5_priv_ind_table_ibv_verify(struct priv *priv)
 	}
 	return ret;
 }
+
+/**
+ * Create an Rx Hash queue.
+ *
+ * @param priv
+ *   Pointer to private structure.
+ * @param rss_key
+ *   RSS key for the Rx hash queue.
+ * @param rss_key_len
+ *   RSS key length.
+ * @param hash_fields
+ *   Verbs protocol hash field to make the RSS on.
+ * @param queues
+ *   Queues entering in the hash Rx queue.
+ * @param queues_n
+ *   Number of queues.
+ *
+ * @return
+ *   A hash Rx queue on success, NULL otherwise.
+ */
+struct mlx5_hrxq *
+mlx5_priv_hrxq_new(struct priv *priv, uint8_t *rss_key, uint8_t rss_key_len,
+		   uint64_t hash_fields, uint16_t queues[], uint16_t queues_n)
+{
+	struct mlx5_hrxq *hrxq;
+	struct mlx5_ind_table_ibv *ind_tbl;
+	struct ibv_qp *qp;
+
+	ind_tbl = mlx5_priv_ind_table_ibv_get(priv, queues, queues_n);
+	if (!ind_tbl)
+		ind_tbl = mlx5_priv_ind_table_ibv_new(priv, queues, queues_n);
+	if (!ind_tbl)
+		return NULL;
+	qp = ibv_exp_create_qp(
+		priv->ctx,
+		&(struct ibv_exp_qp_init_attr){
+			.qp_type = IBV_QPT_RAW_PACKET,
+			.comp_mask =
+				IBV_EXP_QP_INIT_ATTR_PD |
+				IBV_EXP_QP_INIT_ATTR_PORT |
+				IBV_EXP_QP_INIT_ATTR_RX_HASH,
+			.pd = priv->pd,
+			.rx_hash_conf = &(struct ibv_exp_rx_hash_conf){
+				.rx_hash_function =
+					IBV_EXP_RX_HASH_FUNC_TOEPLITZ,
+				.rx_hash_key_len = rss_key_len,
+				.rx_hash_key = rss_key,
+				.rx_hash_fields_mask = hash_fields,
+				.rwq_ind_tbl = ind_tbl->ind_table,
+			},
+			.port_num = priv->port,
+		});
+	if (!qp)
+		goto error;
+	hrxq = rte_calloc(__func__, 1, sizeof(*hrxq) + rss_key_len, 0);
+	if (!hrxq)
+		goto error;
+	hrxq->ind_table = ind_tbl;
+	hrxq->qp = qp;
+	hrxq->rss_key_len = rss_key_len;
+	hrxq->hash_fields = hash_fields;
+	memcpy(hrxq->rss_key, rss_key, rss_key_len);
+	rte_atomic32_inc(&hrxq->refcnt);
+	LIST_INSERT_HEAD(&priv->hrxqs, hrxq, next);
+	DEBUG("%p: Hash Rx queue %p: refcnt %d", (void *)priv,
+	      (void *)hrxq, rte_atomic32_read(&hrxq->refcnt));
+	return hrxq;
+error:
+	mlx5_priv_ind_table_ibv_release(priv, ind_tbl);
+	if (qp)
+		claim_zero(ibv_destroy_qp(qp));
+	return NULL;
+}
+
+/**
+ * Get an Rx Hash queue.
+ *
+ * @param priv
+ *   Pointer to private structure.
+ * @param rss_key
+ *   RSS key for the Rx hash queue.
+ * @param rss_key_len
+ *   RSS key length.
+ * @param hash_fields
+ *   Verbs protocol hash field to make the RSS on.
+ * @param queues
+ *   Queues entering in the hash Rx queue.
+ * @param queues_n
+ *   Number of queues.
+ *
+ * @return
+ *   A hash Rx queue on success, NULL otherwise.
+ */
+struct mlx5_hrxq *
+mlx5_priv_hrxq_get(struct priv *priv, uint8_t *rss_key, uint8_t rss_key_len,
+		   uint64_t hash_fields, uint16_t queues[], uint16_t queues_n)
+{
+	struct mlx5_hrxq *hrxq;
+
+	LIST_FOREACH(hrxq, &priv->hrxqs, next) {
+		struct mlx5_ind_table_ibv *ind_tbl;
+
+		if (hrxq->rss_key_len != rss_key_len)
+			continue;
+		if (memcmp(hrxq->rss_key, rss_key, rss_key_len))
+			continue;
+		if (hrxq->hash_fields != hash_fields)
+			continue;
+		ind_tbl = mlx5_priv_ind_table_ibv_get(priv, queues, queues_n);
+		if (!ind_tbl)
+			continue;
+		if (ind_tbl != hrxq->ind_table) {
+			mlx5_priv_ind_table_ibv_release(priv, ind_tbl);
+			continue;
+		}
+		rte_atomic32_inc(&hrxq->refcnt);
+		DEBUG("%p: Hash Rx queue %p: refcnt %d", (void *)priv,
+		      (void *)hrxq, rte_atomic32_read(&hrxq->refcnt));
+		return hrxq;
+	}
+	return NULL;
+}
+
+/**
+ * Release the hash Rx queue.
+ *
+ * @param priv
+ *   Pointer to private structure.
+ * @param hrxq
+ *   Pointer to Hash Rx queue to release.
+ *
+ * @return
+ *   0 on success, errno value on failure.
+ */
+int
+mlx5_priv_hrxq_release(struct priv *priv, struct mlx5_hrxq *hrxq)
+{
+	DEBUG("%p: Hash Rx queue %p: refcnt %d", (void *)priv,
+	      (void *)hrxq, rte_atomic32_read(&hrxq->refcnt));
+	if (rte_atomic32_dec_and_test(&hrxq->refcnt)) {
+		claim_zero(ibv_destroy_qp(hrxq->qp));
+		mlx5_priv_ind_table_ibv_release(priv, hrxq->ind_table);
+		LIST_REMOVE(hrxq, next);
+		rte_free(hrxq);
+		return 0;
+	}
+	claim_nonzero(mlx5_priv_ind_table_ibv_release(priv, hrxq->ind_table));
+	return EBUSY;
+}
+
+/**
+ * Verify the hash Rx queue list is empty.
+ *
+ * @param priv
+ *   Pointer to private structure.
+ *
+ * @return
+ *   The number of objects not released.
+ */
+int
+mlx5_priv_hrxq_ibv_verify(struct priv *priv)
+{
+	struct mlx5_hrxq *hrxq;
+	int ret = 0;
+
+	LIST_FOREACH(hrxq, &priv->hrxqs, next) {
+		DEBUG("%p: Verbs Hash Rx queue %p still referenced",
+		      (void *)priv, (void *)hrxq);
+		++ret;
+	}
+	return ret;
+}
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index 2b48a01..6397a50 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -161,6 +161,17 @@ struct mlx5_ind_table_ibv {
 	uint16_t queues[]; /**< Queue list. */
 };
 
+/* Hash Rx queue. */
+struct mlx5_hrxq {
+	LIST_ENTRY(mlx5_hrxq) next; /* Pointer to the next element. */
+	rte_atomic32_t refcnt; /* Reference counter. */
+	struct mlx5_ind_table_ibv *ind_table; /* Indirection table. */
+	struct ibv_qp *qp; /* Verbs queue pair. */
+	uint64_t hash_fields; /* Verbs Hash fields. */
+	uint8_t rss_key_len; /* Hash key length in bytes. */
+	uint8_t rss_key[]; /* Hash key. */
+};
+
 /* Hash RX queue types. */
 enum hash_rxq_type {
 	HASH_RXQ_TCPV4,
@@ -363,6 +374,14 @@ struct mlx5_ind_table_ibv* mlx5_priv_ind_table_ibv_get(struct priv *priv,
 int mlx5_priv_ind_table_ibv_release(struct priv * priv,
 				    struct mlx5_ind_table_ibv *ind_table);
 int mlx5_priv_ind_table_ibv_verify(struct priv *priv);
+struct mlx5_hrxq *mlx5_priv_hrxq_new(struct priv *priv, uint8_t *rss_key,
+				     uint8_t rss_key_len, uint64_t hash_fields,
+				     uint16_t queues[], uint16_t queues_n);
+struct mlx5_hrxq *mlx5_priv_hrxq_get(struct priv *priv, uint8_t *rss_key,
+				     uint8_t rss_key_len, uint64_t hash_fields,
+				     uint16_t queues[], uint16_t queues_n);
+int mlx5_priv_hrxq_release(struct priv *priv, struct mlx5_hrxq *hrxq);
+int mlx5_priv_hrxq_ibv_verify(struct priv *priv);
 
 /* mlx5_txq.c */
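
The new API mirrors the get/new/release pattern already used for
indirection tables: mlx5_priv_hrxq_get() returns an existing hash Rx
queue matching the RSS key, hash fields and queue list, taking a
reference on it and on its indirection table; mlx5_priv_hrxq_new()
builds one on top of a (possibly shared) indirection table; and
mlx5_priv_hrxq_release() drops both references, destroying the QP once
the last user is gone.  A minimal sketch of a hypothetical in-driver
caller, mirroring priv_flow_create_action_queue() above (illustrative
only, not part of the patch; the function name and error codes are
made up):

	static int
	hrxq_usage_sketch(struct priv *priv, uint16_t queues[],
			  uint16_t queues_n, uint64_t hash_fields)
	{
		struct mlx5_hrxq *hrxq;

		/* A successful get means an identical configuration is
		 * already in use; return the reference taken by _get()
		 * before bailing out, as the flow error path does. */
		hrxq = mlx5_priv_hrxq_get(priv, rss_hash_default_key,
					  rss_hash_default_key_len,
					  hash_fields, queues, queues_n);
		if (hrxq) {
			mlx5_priv_hrxq_release(priv, hrxq);
			return EEXIST;
		}
		/* _new() reuses an existing indirection table covering
		 * the same queue list and wraps it in a RAW_PACKET QP
		 * configured for Toeplitz RSS hashing. */
		hrxq = mlx5_priv_hrxq_new(priv, rss_hash_default_key,
					  rss_hash_default_key_len,
					  hash_fields, queues, queues_n);
		if (!hrxq)
			return ENOMEM;
		/* ... attach Verbs flows to hrxq->qp ... */
		return mlx5_priv_hrxq_release(priv, hrxq);
	}

Teardown in mlx5_dev_close() relies on this ordering: destroying the
flows drops the hash Rx queue references, which in turn release the
indirection tables and the Verbs Rx queues they reference; the
*_verify() helpers then warn about anything still referenced.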