From patchwork Wed Oct 11 14:35:27 2017
X-Patchwork-Submitter: Adrien Mazarguil
X-Patchwork-Id: 30148
From: Adrien Mazarguil
To: Ferruh Yigit
Cc: dev@dpdk.org
Date: Wed, 11 Oct 2017 16:35:27 +0200
Message-Id: <195ac97dd5748212aa284d5cac8691ff168437c8.1507730496.git.adrien.mazarguil@6wind.com>
Subject: [dpdk-dev] [PATCH v1 25/29] net/mlx4: convert Rx path to work queues

Work queues (WQs) are lower-level than standard queue pairs (QPs). They are
dedicated to one traffic direction and have to be used in conjunction with
indirection tables and special "hash" QPs to get the same level of
functionality.

These extra objects, however, are the building blocks for the RSS support
brought by subsequent commits, as a single "hash" QP can manage several WQs
through an indirection table according to a hash algorithm and other
parameters.
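[Editor's note] For readers unfamiliar with these verbs objects, the sketch
below illustrates the creation and teardown order the patch adopts: CQ, then
a receive-only WQ moved to the RDY state, then a single-entry indirection
table, then a "hash" QP referencing it. It is a minimal standalone example,
not driver code: the names setup_rx_objects()/teardown_rx_objects(),
struct rx_objects, RX_DESC and the reduced error handling are assumptions
made for this illustration (RSS_HASH_KEY_SIZE mirrors the patch's
MLX4_RSS_HASH_KEY_SIZE).

/* Illustrative only: mirrors the verbs call order used by the patch. */
#include <stdint.h>
#include <string.h>
#include <infiniband/verbs.h>

#define RX_DESC 256                /* illustrative descriptor count */
#define RSS_HASH_KEY_SIZE 40       /* fixed Toeplitz key size, as in the patch */

struct rx_objects {
	struct ibv_cq *cq;
	struct ibv_wq *wq;
	struct ibv_rwq_ind_table *ind;
	struct ibv_qp *qp;
};

/* Build CQ -> WQ -> indirection table -> hash QP, in that order. */
static int
setup_rx_objects(struct ibv_context *ctx, struct ibv_pd *pd,
		 struct rx_objects *rx)
{
	static uint8_t hash_key[RSS_HASH_KEY_SIZE]; /* all-zero key: no spreading yet */

	memset(rx, 0, sizeof(*rx));
	rx->cq = ibv_create_cq(ctx, RX_DESC, NULL, NULL, 0);
	if (!rx->cq)
		return -1;
	/* Receive-only work queue replacing the former RAW_PACKET QP's RQ. */
	rx->wq = ibv_create_wq(ctx, &(struct ibv_wq_init_attr){
		.wq_type = IBV_WQT_RQ,
		.max_wr = RX_DESC,
		.max_sge = 1,
		.pd = pd,
		.cq = rx->cq,
	});
	if (!rx->wq)
		return -1;
	/* A WQ starts in RESET state; it must be moved to RDY before use. */
	if (ibv_modify_wq(rx->wq, &(struct ibv_wq_attr){
		.attr_mask = IBV_WQ_ATTR_STATE,
		.wq_state = IBV_WQS_RDY,
	}))
		return -1;
	/* Single-entry indirection table (2^0 == 1 WQ). */
	rx->ind = ibv_create_rwq_ind_table(ctx,
		&(struct ibv_rwq_ind_table_init_attr){
			.log_ind_tbl_size = 0,
			.ind_tbl = (struct ibv_wq *[]){ rx->wq },
			.comp_mask = 0,
		});
	if (!rx->ind)
		return -1;
	/* "Hash" QP: spreads traffic over the table according to the RSS
	 * configuration; with a zero fields mask everything lands on WQ #0. */
	rx->qp = ibv_create_qp_ex(ctx, &(struct ibv_qp_init_attr_ex){
		.comp_mask = IBV_QP_INIT_ATTR_PD |
			     IBV_QP_INIT_ATTR_RX_HASH |
			     IBV_QP_INIT_ATTR_IND_TABLE,
		.qp_type = IBV_QPT_RAW_PACKET,
		.pd = pd,
		.rwq_ind_tbl = rx->ind,
		.rx_hash_conf = {
			.rx_hash_function = IBV_RX_HASH_FUNC_TOEPLITZ,
			.rx_hash_key_len = RSS_HASH_KEY_SIZE,
			.rx_hash_key = hash_key,
			.rx_hash_fields_mask = 0,
		},
	});
	return rx->qp ? 0 : -1;
}

/* Teardown in reverse order of creation, as the release path does. */
static void
teardown_rx_objects(struct rx_objects *rx)
{
	if (rx->qp)
		ibv_destroy_qp(rx->qp);
	if (rx->ind)
		ibv_destroy_rwq_ind_table(rx->ind);
	if (rx->wq)
		ibv_destroy_wq(rx->wq);
	if (rx->cq)
		ibv_destroy_cq(rx->cq);
}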
Signed-off-by: Adrien Mazarguil
Acked-by: Nelio Laranjeiro
---
 drivers/net/mlx4/mlx4.h      |  3 ++
 drivers/net/mlx4/mlx4_rxq.c  | 74 ++++++++++++++++++++++++++++++++-------
 drivers/net/mlx4/mlx4_rxtx.c |  2 +-
 drivers/net/mlx4/mlx4_rxtx.h |  2 ++
 4 files changed, 68 insertions(+), 13 deletions(-)

diff --git a/drivers/net/mlx4/mlx4.h b/drivers/net/mlx4/mlx4.h
index a27399a..b04a104 100644
--- a/drivers/net/mlx4/mlx4.h
+++ b/drivers/net/mlx4/mlx4.h
@@ -61,6 +61,9 @@
 /** Maximum size for inline data. */
 #define MLX4_PMD_MAX_INLINE 0
 
+/** Fixed RSS hash key size in bytes. Cannot be modified. */
+#define MLX4_RSS_HASH_KEY_SIZE 40
+
 /**
  * Maximum number of cached Memory Pools (MPs) per TX queue. Each RTE MP
  * from which buffers are to be transmitted will have to be mapped by this
diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
index 03e6af5..b56f1ff 100644
--- a/drivers/net/mlx4/mlx4_rxq.c
+++ b/drivers/net/mlx4/mlx4_rxq.c
@@ -268,18 +268,64 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		      (void *)dev, strerror(rte_errno));
 		goto error;
 	}
-	rxq->qp = ibv_create_qp
-		(priv->pd,
-		 &(struct ibv_qp_init_attr){
-			.send_cq = rxq->cq,
-			.recv_cq = rxq->cq,
-			.cap = {
-				.max_recv_wr =
-					RTE_MIN(priv->device_attr.max_qp_wr,
-						desc),
-				.max_recv_sge = 1,
+	rxq->wq = ibv_create_wq
+		(priv->ctx,
+		 &(struct ibv_wq_init_attr){
+			.wq_type = IBV_WQT_RQ,
+			.max_wr = RTE_MIN(priv->device_attr.max_qp_wr, desc),
+			.max_sge = 1,
+			.pd = priv->pd,
+			.cq = rxq->cq,
+		 });
+	if (!rxq->wq) {
+		rte_errno = errno ? errno : EINVAL;
+		ERROR("%p: WQ creation failure: %s",
+		      (void *)dev, strerror(rte_errno));
+		goto error;
+	}
+	ret = ibv_modify_wq
+		(rxq->wq,
+		 &(struct ibv_wq_attr){
+			.attr_mask = IBV_WQ_ATTR_STATE,
+			.wq_state = IBV_WQS_RDY,
+		 });
+	if (ret) {
+		rte_errno = ret;
+		ERROR("%p: WQ state to IBV_WPS_RDY failed: %s",
+		      (void *)dev, strerror(rte_errno));
+		goto error;
+	}
+	rxq->ind = ibv_create_rwq_ind_table
+		(priv->ctx,
+		 &(struct ibv_rwq_ind_table_init_attr){
+			.log_ind_tbl_size = 0,
+			.ind_tbl = (struct ibv_wq *[]){
+				rxq->wq,
 			},
+			.comp_mask = 0,
+		 });
+	if (!rxq->ind) {
+		rte_errno = errno ? errno : EINVAL;
+		ERROR("%p: indirection table creation failure: %s",
+		      (void *)dev, strerror(errno));
+		goto error;
+	}
+	rxq->qp = ibv_create_qp_ex
+		(priv->ctx,
+		 &(struct ibv_qp_init_attr_ex){
+			.comp_mask = (IBV_QP_INIT_ATTR_PD |
+				      IBV_QP_INIT_ATTR_RX_HASH |
+				      IBV_QP_INIT_ATTR_IND_TABLE),
 			.qp_type = IBV_QPT_RAW_PACKET,
+			.pd = priv->pd,
+			.rwq_ind_tbl = rxq->ind,
+			.rx_hash_conf = {
+				.rx_hash_function = IBV_RX_HASH_FUNC_TOEPLITZ,
+				.rx_hash_key_len = MLX4_RSS_HASH_KEY_SIZE,
+				.rx_hash_key =
+					(uint8_t [MLX4_RSS_HASH_KEY_SIZE]){ 0 },
+				.rx_hash_fields_mask = 0,
+			},
 		 });
 	if (!rxq->qp) {
 		rte_errno = errno ? errno : EINVAL;
@@ -306,8 +352,8 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		      (void *)dev, strerror(rte_errno));
 		goto error;
 	}
-	ret = ibv_post_recv(rxq->qp, &(*rxq->elts)[0].wr,
-			    &(struct ibv_recv_wr *){ NULL });
+	ret = ibv_post_wq_recv(rxq->wq, &(*rxq->elts)[0].wr,
+			       &(struct ibv_recv_wr *){ NULL });
 	if (ret) {
 		rte_errno = ret;
 		ERROR("%p: ibv_post_recv() failed: %s",
@@ -373,6 +419,10 @@ mlx4_rx_queue_release(void *dpdk_rxq)
 	mlx4_rxq_free_elts(rxq);
 	if (rxq->qp)
 		claim_zero(ibv_destroy_qp(rxq->qp));
+	if (rxq->ind)
+		claim_zero(ibv_destroy_rwq_ind_table(rxq->ind));
+	if (rxq->wq)
+		claim_zero(ibv_destroy_wq(rxq->wq));
 	if (rxq->cq)
 		claim_zero(ibv_destroy_cq(rxq->cq));
 	if (rxq->channel)
diff --git a/drivers/net/mlx4/mlx4_rxtx.c b/drivers/net/mlx4/mlx4_rxtx.c
index b5e7777..859f1bd 100644
--- a/drivers/net/mlx4/mlx4_rxtx.c
+++ b/drivers/net/mlx4/mlx4_rxtx.c
@@ -459,7 +459,7 @@ mlx4_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 	/* Repost WRs. */
 	*wr_next = NULL;
 	assert(wr_head);
-	ret = ibv_post_recv(rxq->qp, wr_head, &wr_bad);
+	ret = ibv_post_wq_recv(rxq->wq, wr_head, &wr_bad);
 	if (unlikely(ret)) {
 		/* Inability to repost WRs is fatal. */
 		DEBUG("%p: recv_burst(): failed (ret=%d)",
diff --git a/drivers/net/mlx4/mlx4_rxtx.h b/drivers/net/mlx4/mlx4_rxtx.h
index d90f2f9..897fd2a 100644
--- a/drivers/net/mlx4/mlx4_rxtx.h
+++ b/drivers/net/mlx4/mlx4_rxtx.h
@@ -73,6 +73,8 @@ struct rxq {
 	struct rte_mempool *mp; /**< Memory pool for allocations. */
 	struct ibv_mr *mr; /**< Memory region (for mp). */
 	struct ibv_cq *cq; /**< Completion queue. */
+	struct ibv_wq *wq; /**< Work queue. */
+	struct ibv_rwq_ind_table *ind; /**< Indirection table. */
 	struct ibv_qp *qp; /**< Queue pair. */
 	struct ibv_comp_channel *channel; /**< Rx completion channel. */
 	unsigned int port_id; /**< Port ID for incoming packets. */
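
[Editor's note] As a companion to the mlx4_rxtx.c hunk above, here is a
hedged, self-contained sketch of posting a receive buffer directly to a WQ
with ibv_post_wq_recv() instead of ibv_post_recv() on a QP. The helper
post_one_rx_buffer() and its parameters are made up for illustration; the
actual driver reposts a pre-built chain of work requests taken from
rxq->elts, as the diff shows.

/* Illustrative only: one buffer, one SGE, posted to a RDY work queue. */
#include <stdint.h>
#include <infiniband/verbs.h>

static int
post_one_rx_buffer(struct ibv_wq *wq, struct ibv_mr *mr,
		   void *buf, uint32_t len)
{
	struct ibv_sge sge = {
		.addr = (uintptr_t)buf,
		.length = len,
		.lkey = mr->lkey, /* buffer must belong to a registered MR */
	};
	struct ibv_recv_wr wr = {
		.next = NULL,
		.sg_list = &sge,
		.num_sge = 1,
	};
	struct ibv_recv_wr *bad_wr = NULL;

	/* The WQ must already be in the RDY state. */
	return ibv_post_wq_recv(wq, &wr, &bad_wr);
}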