From patchwork Wed Jun 14 11:49:13 2017
X-Patchwork-Submitter: Adrien Mazarguil
X-Patchwork-Id: 25314
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Adrien Mazarguil
To: dev@dpdk.org
Date: Wed, 14 Jun 2017 13:49:13 +0200
Message-Id: <2aa2646d50ddebad8f4cb0715e69f5b1a579d682.1497439616.git.adrien.mazarguil@6wind.com>
Subject: [dpdk-dev] [PATCH 3/7] net/mlx4: fix Rx interrupts management
List-Id: DPDK patches and discussions

This commit addresses various issues that may lead to undefined behavior
when configuring Rx interrupts.

While failure to create an Rx queue completion channel in rxq_setup()
prevents that queue from being created, existing queues still have
theirs. Since the error handler disables dev_conf.intr_conf.rxq as well,
subsequent calls to rxq_setup() create Rx queues without interrupts. This
leads to a scenario where not all Rx queues support interrupts; missing
checks on the presence of completion channels may crash the application.

Considering that the PMD is not supposed to disable user-provided
configuration parameters (dev_conf.intr_conf.rxq), and that these can
change for subsequent rxq_setup() calls anyway, properly supporting a
mixed mode where not all Rx queues have interrupts enabled is a better
approach.

To do so with a minimum set of changes, priv_intr_efd_enable() and
priv_create_intr_vec() are first refactored as a single
priv_rx_intr_vec_enable() function (same for their "disable"
counterparts). Since they had to be used together, there was no point in
keeping them separate.
Remaining changes:

- Always clean up before reconfiguring interrupts to avoid memory leaks.
- Always clean up when closing the device.
- Use malloc()/free() instead of their rte_*() counterparts since there
  is no need to store the vector in huge pages-backed memory.
- Allow more Rx queues than the size of the event file descriptor array
  as long as Rx interrupts are not requested on all of them.
- Properly clean up the interrupt handle when disabling Rx interrupts
  (nb_efd and intr_vec reset to 0).
- Check completion channel presence while toggling Rx interrupts on a
  given queue.

Fixes: 9f05a4b81809 ("net/mlx4: support user space Rx interrupt event")

Signed-off-by: Adrien Mazarguil
Acked-by: Moti Haimovsky
---
 drivers/net/mlx4/mlx4.c | 166 +++++++++++++++++++------------------------
 1 file changed, 72 insertions(+), 94 deletions(-)

diff --git a/drivers/net/mlx4/mlx4.c b/drivers/net/mlx4/mlx4.c
index 2b4722f..050d646 100644
--- a/drivers/net/mlx4/mlx4.c
+++ b/drivers/net/mlx4/mlx4.c
@@ -135,16 +135,10 @@ static int
 mlx4_rx_intr_disable(struct rte_eth_dev *dev, uint16_t idx);
 
 static int
-priv_intr_efd_enable(struct priv *priv);
+priv_rx_intr_vec_enable(struct priv *priv);
 
 static void
-priv_intr_efd_disable(struct priv *priv);
-
-static int
-priv_create_intr_vec(struct priv *priv);
-
-static void
-priv_destroy_intr_vec(struct priv *priv);
+priv_rx_intr_vec_disable(struct priv *priv);
 
 /**
  * Check if running as a secondary process.
@@ -3720,9 +3714,9 @@ rxq_setup(struct rte_eth_dev *dev, struct rxq *rxq, uint16_t desc,
 	if (dev->data->dev_conf.intr_conf.rxq) {
 		tmpl.channel = ibv_create_comp_channel(priv->ctx);
 		if (tmpl.channel == NULL) {
-			dev->data->dev_conf.intr_conf.rxq = 0;
 			ret = ENOMEM;
-			ERROR("%p: Comp Channel creation failure: %s",
+			ERROR("%p: Rx interrupt completion channel creation"
+			      " failure: %s",
 			      (void *)dev, strerror(ret));
 			goto error;
 		}
@@ -4037,10 +4031,11 @@ mlx4_dev_start(struct rte_eth_dev *dev)
 		      (void *)dev);
 		goto err;
 	}
-	if (dev->data->dev_conf.intr_conf.rxq) {
-		ret = priv_intr_efd_enable(priv);
-		if (!ret)
-			ret = priv_create_intr_vec(priv);
+	ret = priv_rx_intr_vec_enable(priv);
+	if (ret) {
+		ERROR("%p: Rx interrupt vector creation failed",
+		      (void *)dev);
+		goto err;
 	}
 	ret = mlx4_priv_flow_start(priv);
 	if (ret) {
@@ -4234,10 +4229,7 @@ mlx4_dev_close(struct rte_eth_dev *dev)
 	assert(priv->ctx == NULL);
 	priv_dev_removal_interrupt_handler_uninstall(priv, dev);
 	priv_dev_link_interrupt_handler_uninstall(priv, dev);
-	if (priv->dev->data->dev_conf.intr_conf.rxq) {
-		priv_destroy_intr_vec(priv);
-		priv_intr_efd_disable(priv);
-	}
+	priv_rx_intr_vec_disable(priv);
 	priv_unlock(priv);
 	memset(priv, 0, sizeof(*priv));
 }
@@ -5635,7 +5627,7 @@ priv_dev_removal_interrupt_handler_install(struct priv *priv,
 }
 
 /**
- * Fill epoll fd list for rxq interrupts.
+ * Allocate queue vector and fill epoll fd list for Rx interrupts.
  *
  * @param priv
  *   Pointer to private structure.
@@ -5644,108 +5636,89 @@ priv_dev_removal_interrupt_handler_install(struct priv *priv,
  *   0 on success, negative on failure.
  */
 static int
-priv_intr_efd_enable(struct priv *priv)
+priv_rx_intr_vec_enable(struct priv *priv)
 {
 	unsigned int i;
 	unsigned int rxqs_n = priv->rxqs_n;
 	unsigned int n = RTE_MIN(rxqs_n, (uint32_t)RTE_MAX_RXTX_INTR_VEC_ID);
+	unsigned int count = 0;
 	struct rte_intr_handle *intr_handle = priv->dev->intr_handle;
 
-	if (n == 0)
+	if (!priv->dev->data->dev_conf.intr_conf.rxq)
 		return 0;
-	if (n < rxqs_n) {
-		WARN("rxqs num is larger than EAL max interrupt vector "
-		     "%u > %u unable to supprt rxq interrupts",
-		     rxqs_n, (uint32_t)RTE_MAX_RXTX_INTR_VEC_ID);
-		return -EINVAL;
+	priv_rx_intr_vec_disable(priv);
+	intr_handle->intr_vec = malloc(sizeof(intr_handle->intr_vec[rxqs_n]));
+	if (intr_handle->intr_vec == NULL) {
+		ERROR("failed to allocate memory for interrupt vector,"
+		      " Rx interrupts will not be supported");
+		return -ENOMEM;
 	}
 	intr_handle->type = RTE_INTR_HANDLE_EXT;
 	for (i = 0; i != n; ++i) {
 		struct rxq *rxq = (*priv->rxqs)[i];
-		int fd = rxq->channel->fd;
+		int fd;
 		int flags;
 		int rc;
 
+		/* Skip queues that cannot request interrupts. */
+		if (!rxq || !rxq->channel) {
+			/* Use invalid intr_vec[] index to disable entry. */
+			intr_handle->intr_vec[i] =
+				RTE_INTR_VEC_RXTX_OFFSET +
+				RTE_MAX_RXTX_INTR_VEC_ID;
+			continue;
+		}
+		if (count >= RTE_MAX_RXTX_INTR_VEC_ID) {
+			ERROR("too many Rx queues for interrupt vector size"
+			      " (%d), Rx interrupts cannot be enabled",
+			      RTE_MAX_RXTX_INTR_VEC_ID);
+			priv_rx_intr_vec_disable(priv);
+			return -1;
+		}
+		fd = rxq->channel->fd;
 		flags = fcntl(fd, F_GETFL);
 		rc = fcntl(fd, F_SETFL, flags | O_NONBLOCK);
 		if (rc < 0) {
-			WARN("failed to change rxq interrupt file "
-			     "descriptor %d for queue index %d", fd, i);
+			ERROR("failed to make Rx interrupt file descriptor"
+			      " %d non-blocking for queue index %d", fd, i);
+			priv_rx_intr_vec_disable(priv);
 			return rc;
 		}
-		intr_handle->efds[i] = fd;
+		intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count;
+		intr_handle->efds[count] = fd;
+		count++;
 	}
-	intr_handle->nb_efd = n;
+	if (!count)
+		priv_rx_intr_vec_disable(priv);
+	else
+		intr_handle->nb_efd = count;
 	return 0;
 }
 
 /**
- * Clean epoll fd list for rxq interrupts.
+ * Clean up Rx interrupts handler.
  *
  * @param priv
  *   Pointer to private structure.
  */
 static void
-priv_intr_efd_disable(struct priv *priv)
+priv_rx_intr_vec_disable(struct priv *priv)
 {
 	struct rte_intr_handle *intr_handle = priv->dev->intr_handle;
 
 	rte_intr_free_epoll_fd(intr_handle);
+	free(intr_handle->intr_vec);
+	intr_handle->nb_efd = 0;
+	intr_handle->intr_vec = NULL;
 }
 
 /**
- * Create and init interrupt vector array.
- *
- * @param priv
- *   Pointer to private structure.
- *
- * @return
- *   0 on success, negative on failure.
- */
-static int
-priv_create_intr_vec(struct priv *priv)
-{
-	unsigned int rxqs_n = priv->rxqs_n;
-	unsigned int i;
-	struct rte_intr_handle *intr_handle = priv->dev->intr_handle;
-
-	if (rxqs_n == 0)
-		return 0;
-	intr_handle->intr_vec = (int *)
-		rte_malloc("intr_vec", rxqs_n * sizeof(int), 0);
-	if (intr_handle->intr_vec == NULL) {
-		WARN("Failed to allocate memory for intr_vec "
-		     "rxq interrupt will not be supported");
-		return -ENOMEM;
-	}
-	for (i = 0; i != rxqs_n; ++i) {
-		/* 1:1 mapping between rxq and interrupt. */
-		intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + i;
-	}
-	return 0;
-}
-
-/**
- * Destroy init interrupt vector array.
- *
- * @param priv
- *   Pointer to private structure.
- */
-static void
-priv_destroy_intr_vec(struct priv *priv)
-{
-	struct rte_intr_handle *intr_handle = priv->dev->intr_handle;
-
-	rte_free(intr_handle->intr_vec);
-}
-
-/**
- * DPDK callback for rx queue interrupt enable.
+ * DPDK callback for Rx queue interrupt enable.
  *
  * @param dev
  *   Pointer to Ethernet device structure.
  * @param idx
- *   RX queue index.
+ *   Rx queue index.
  *
  * @return
  *   0 on success, negative on failure.
@@ -5755,22 +5728,24 @@ mlx4_rx_intr_enable(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct priv *priv = dev->data->dev_private;
 	struct rxq *rxq = (*priv->rxqs)[idx];
-	struct ibv_cq *cq = rxq->cq;
-	int ret = 0;
+	int ret;
 
-	ret = ibv_req_notify_cq(cq, 0);
+	if (!rxq || !rxq->channel)
+		ret = EINVAL;
+	else
+		ret = ibv_req_notify_cq(rxq->cq, 0);
 	if (ret)
 		WARN("unable to arm interrupt on rx queue %d", idx);
 	return -ret;
 }
 
 /**
- * DPDK callback for rx queue interrupt disable.
+ * DPDK callback for Rx queue interrupt disable.
  *
  * @param dev
  *   Pointer to Ethernet device structure.
  * @param idx
- *   RX queue index.
+ *   Rx queue index.
  *
  * @return
  *   0 on success, negative on failure.
@@ -5780,20 +5755,23 @@ mlx4_rx_intr_disable(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct priv *priv = dev->data->dev_private;
 	struct rxq *rxq = (*priv->rxqs)[idx];
-	struct ibv_cq *cq = rxq->cq;
 	struct ibv_cq *ev_cq;
 	void *ev_ctx;
-	int ret = 0;
+	int ret;
 
-	ret = ibv_get_cq_event(cq->channel, &ev_cq, &ev_ctx);
-	if (ret || ev_cq != cq)
-		ret = -1;
-	else
-		ibv_ack_cq_events(cq, 1);
+	if (!rxq || !rxq->channel) {
+		ret = EINVAL;
+	} else {
+		ret = ibv_get_cq_event(rxq->cq->channel, &ev_cq, &ev_ctx);
+		if (ret || ev_cq != rxq->cq)
+			ret = EINVAL;
+	}
 	if (ret)
 		WARN("unable to disable interrupt on rx queue %d",
		     idx);
-	return ret;
+	else
+		ibv_ack_cq_events(rxq->cq, 1);
+	return -ret;
 }
 
 /**