From patchwork Thu Oct 12 12:19:29 2017
X-Patchwork-Submitter: Adrien Mazarguil
X-Patchwork-Id: 30266
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Adrien Mazarguil
To: Ferruh Yigit
Cc: Nelio Laranjeiro, dev@dpdk.org
Date: Thu, 12 Oct 2017 14:19:29 +0200
Subject: [dpdk-dev] [PATCH v2 15/29] net/mlx4: simplify trigger code for flow rules
List-Id: DPDK patches and discussions

Since the flow rules synchronization function mlx4_flow_sync() takes the
state of the device into account (whether it is started), the trigger
functions mlx4_flow_start() and mlx4_flow_stop() are redundant. Standardize
on mlx4_flow_sync().

Use this opportunity to enhance this function with better error reporting,
as the inability to start the device due to a problem with a flow rule
otherwise results in a nondescript error code.
Signed-off-by: Adrien Mazarguil
Acked-by: Nelio Laranjeiro
---
 drivers/net/mlx4/mlx4.c      | 25 ++++++++-----
 drivers/net/mlx4/mlx4_flow.c | 76 +++++++++------------------------------
 drivers/net/mlx4/mlx4_flow.h |  4 +--
 drivers/net/mlx4/mlx4_rxq.c  | 14 ++++++--
 4 files changed, 46 insertions(+), 73 deletions(-)

diff --git a/drivers/net/mlx4/mlx4.c b/drivers/net/mlx4/mlx4.c
index 40c0ee2..256aa3d 100644
--- a/drivers/net/mlx4/mlx4.c
+++ b/drivers/net/mlx4/mlx4.c
@@ -60,6 +60,7 @@
 #include
 #include
 #include
+#include <rte_flow.h>
 #include
 #include
 #include
@@ -97,13 +98,17 @@ static int
 mlx4_dev_configure(struct rte_eth_dev *dev)
 {
 	struct priv *priv = dev->data->dev_private;
+	struct rte_flow_error error;
 	int ret;
 
 	/* Prepare internal flow rules. */
-	ret = mlx4_flow_sync(priv);
-	if (ret)
-		ERROR("cannot set up internal flow rules: %s",
-		      strerror(-ret));
+	ret = mlx4_flow_sync(priv, &error);
+	if (ret) {
+		ERROR("cannot set up internal flow rules (code %d, \"%s\"),"
+		      " flow error type %d, cause %p, message: %s",
+		      -ret, strerror(-ret), error.type, error.cause,
+		      error.message ? error.message : "(unspecified)");
+	}
 	return ret;
 }
 
@@ -122,6 +127,7 @@ static int
 mlx4_dev_start(struct rte_eth_dev *dev)
 {
 	struct priv *priv = dev->data->dev_private;
+	struct rte_flow_error error;
 	int ret;
 
 	if (priv->started)
@@ -134,10 +140,13 @@ mlx4_dev_start(struct rte_eth_dev *dev)
 		      (void *)dev);
 		goto err;
 	}
-	ret = mlx4_flow_start(priv);
+	ret = mlx4_flow_sync(priv, &error);
 	if (ret) {
-		ERROR("%p: flow start failed: %s",
-		      (void *)dev, strerror(ret));
+		ERROR("%p: cannot attach flow rules (code %d, \"%s\"),"
+		      " flow error type %d, cause %p, message: %s",
+		      (void *)dev,
+		      -ret, strerror(-ret), error.type, error.cause,
+		      error.message ? error.message : "(unspecified)");
 		goto err;
 	}
 	return 0;
@@ -164,7 +173,7 @@ mlx4_dev_stop(struct rte_eth_dev *dev)
 		return;
 	DEBUG("%p: detaching flows from all RX queues", (void *)dev);
 	priv->started = 0;
-	mlx4_flow_stop(priv);
+	mlx4_flow_sync(priv, NULL);
 	mlx4_intr_uninstall(priv);
 }
 
diff --git a/drivers/net/mlx4/mlx4_flow.c b/drivers/net/mlx4/mlx4_flow.c
index e1290a8..ec6c28f 100644
--- a/drivers/net/mlx4/mlx4_flow.c
+++ b/drivers/net/mlx4/mlx4_flow.c
@@ -956,14 +956,9 @@ mlx4_flow_isolate(struct rte_eth_dev *dev,
 	if (!!enable == !!priv->isolated)
 		return 0;
 	priv->isolated = !!enable;
-	if (mlx4_flow_sync(priv)) {
+	if (mlx4_flow_sync(priv, error)) {
 		priv->isolated = !enable;
-		return rte_flow_error_set(error, rte_errno,
-					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-					  NULL,
-					  enable ?
-					  "cannot enter isolated mode" :
-					  "cannot leave isolated mode");
+		return -rte_errno;
 	}
 	return 0;
 }
@@ -1075,12 +1070,14 @@ mlx4_flow_internal(struct priv *priv, struct rte_flow_error *error)
  *
  * @param priv
  *   Pointer to private structure.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
  *
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 int
-mlx4_flow_sync(struct priv *priv)
+mlx4_flow_sync(struct priv *priv, struct rte_flow_error *error)
 {
 	struct rte_flow *flow;
 	int ret;
@@ -1094,20 +1091,27 @@ mlx4_flow_sync(struct priv *priv)
 		for (flow = LIST_FIRST(&priv->flows);
 		     flow && flow->internal;
 		     flow = LIST_FIRST(&priv->flows))
-			claim_zero(mlx4_flow_destroy(priv->dev, flow, NULL));
+			claim_zero(mlx4_flow_destroy(priv->dev, flow, error));
 	} else if (!LIST_FIRST(&priv->flows) ||
 		   !LIST_FIRST(&priv->flows)->internal) {
 		/*
 		 * If the first rule is not internal outside isolated mode,
 		 * they must be added back.
 		 */
-		ret = mlx4_flow_internal(priv, NULL);
+		ret = mlx4_flow_internal(priv, error);
+		if (ret)
+			return ret;
+	}
+	/* Toggle the remaining flow rules . */
+	for (flow = LIST_FIRST(&priv->flows);
+	     flow;
+	     flow = LIST_NEXT(flow, next)) {
+		ret = mlx4_flow_toggle(priv, flow, priv->started, error);
 		if (ret)
 			return ret;
 	}
-	if (priv->started)
-		return mlx4_flow_start(priv);
-	mlx4_flow_stop(priv);
+	if (!priv->started)
+		assert(!priv->drop);
 	return 0;
 }
 
@@ -1129,52 +1133,6 @@ mlx4_flow_clean(struct priv *priv)
 		mlx4_flow_destroy(priv->dev, flow, NULL);
 }
 
-/**
- * Disable flow rules.
- *
- * @param priv
- *   Pointer to private structure.
- */
-void
-mlx4_flow_stop(struct priv *priv)
-{
-	struct rte_flow *flow;
-
-	for (flow = LIST_FIRST(&priv->flows);
-	     flow;
-	     flow = LIST_NEXT(flow, next)) {
-		claim_zero(mlx4_flow_toggle(priv, flow, 0, NULL));
-	}
-	assert(!priv->drop);
-}
-
-/**
- * Enable flow rules.
- *
- * @param priv
- *   Pointer to private structure.
- *
- * @return
- *   0 on success, a negative errno value otherwise and rte_errno is set.
- */
-int
-mlx4_flow_start(struct priv *priv)
-{
-	int ret;
-	struct rte_flow *flow;
-
-	for (flow = LIST_FIRST(&priv->flows);
-	     flow;
-	     flow = LIST_NEXT(flow, next)) {
-		ret = mlx4_flow_toggle(priv, flow, 1, NULL);
-		if (unlikely(ret)) {
-			mlx4_flow_stop(priv);
-			return ret;
-		}
-	}
-	return 0;
-}
-
 static const struct rte_flow_ops mlx4_flow_ops = {
 	.validate = mlx4_flow_validate,
 	.create = mlx4_flow_create,
diff --git a/drivers/net/mlx4/mlx4_flow.h b/drivers/net/mlx4/mlx4_flow.h
index c2ffa8d..13495d7 100644
--- a/drivers/net/mlx4/mlx4_flow.h
+++ b/drivers/net/mlx4/mlx4_flow.h
@@ -72,10 +72,8 @@ struct rte_flow {
 
 /* mlx4_flow.c */
 
-int mlx4_flow_sync(struct priv *priv);
+int mlx4_flow_sync(struct priv *priv, struct rte_flow_error *error);
 void mlx4_flow_clean(struct priv *priv);
-int mlx4_flow_start(struct priv *priv);
-void mlx4_flow_stop(struct priv *priv);
 int mlx4_filter_ctrl(struct rte_eth_dev *dev,
 		     enum rte_filter_type filter_type,
 		     enum rte_filter_op filter_op,
diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
index 7bb2f9e..bcb7b94 100644
--- a/drivers/net/mlx4/mlx4_rxq.c
+++ b/drivers/net/mlx4/mlx4_rxq.c
@@ -54,6 +54,7 @@
 #include
 #include
 #include
+#include <rte_flow.h>
 #include
 #include
 #include
@@ -401,7 +402,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		}
 		dev->data->rx_queues[idx] = NULL;
 		/* Disable associated flows. */
-		mlx4_flow_sync(priv);
+		mlx4_flow_sync(priv, NULL);
 		mlx4_rxq_cleanup(rxq);
 	} else {
 		rxq = rte_calloc_socket("RXQ", 1, sizeof(*rxq), 0, socket);
@@ -416,13 +417,20 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		if (ret) {
 			rte_free(rxq);
 		} else {
+			struct rte_flow_error error;
+
 			rxq->stats.idx = idx;
 			DEBUG("%p: adding Rx queue %p to list",
 			      (void *)dev, (void *)rxq);
 			dev->data->rx_queues[idx] = rxq;
 			/* Re-enable associated flows. */
-			ret = mlx4_flow_sync(priv);
+			ret = mlx4_flow_sync(priv, &error);
 			if (ret) {
+				ERROR("cannot re-attach flow rules to queue %u"
+				      " (code %d, \"%s\"), flow error type %d,"
+				      " cause %p, message: %s", idx,
+				      -ret, strerror(-ret), error.type, error.cause,
+				      error.message ? error.message : "(unspecified)");
 				dev->data->rx_queues[idx] = NULL;
 				mlx4_rxq_cleanup(rxq);
 				rte_free(rxq);
@@ -457,7 +465,7 @@ mlx4_rx_queue_release(void *dpdk_rxq)
 			priv->dev->data->rx_queues[i] = NULL;
 			break;
 		}
-	mlx4_flow_sync(priv);
+	mlx4_flow_sync(priv, NULL);
 	mlx4_rxq_cleanup(rxq);
 	rte_free(rxq);
 }
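
For readers skimming the diff, the consolidated logic can be summarized with the stand-alone sketch below. It is a simplified model for illustration only, not the mlx4 PMD code: the structures and the helper names flow_toggle() and flow_sync() are stand-ins, while the real driver walks struct rte_flow objects, manages internal rules and drop resources, and reports failures through struct rte_flow_error.

/*
 * Simplified, self-contained model of the consolidated synchronization
 * entry point. All types and helpers are illustrative stand-ins, not the
 * mlx4 PMD structures.
 */
#include <stdbool.h>
#include <stdio.h>

struct flow {              /* stand-in for struct rte_flow */
	struct flow *next;
	bool internal;     /* rule created by the PMD itself */
	const char *name;
};

struct priv {              /* stand-in for the driver's private data */
	struct flow *flows;   /* all rules, internal ones first */
	bool started;         /* device state taken into account by sync */
};

/* Attach or detach one rule; plays the role of mlx4_flow_toggle(). */
static int
flow_toggle(struct flow *flow, bool enable)
{
	printf("%-10s -> %s\n", flow->name, enable ? "attached" : "detached");
	return 0;
}

/*
 * Single entry point replacing separate start/stop helpers: every rule is
 * toggled to match the current device state, so callers no longer need to
 * know whether they are starting or stopping the port.
 */
static int
flow_sync(struct priv *priv)
{
	struct flow *flow;
	int ret;

	for (flow = priv->flows; flow != NULL; flow = flow->next) {
		ret = flow_toggle(flow, priv->started);
		if (ret)
			return ret;
	}
	return 0;
}

int
main(void)
{
	struct flow user = { NULL, false, "user rule" };
	struct flow promisc = { &user, true, "promisc" };
	struct priv priv = { &promisc, false };

	priv.started = true;    /* port start: attach everything */
	flow_sync(&priv);
	priv.started = false;   /* port stop: detach everything */
	flow_sync(&priv);
	return 0;
}

Running the sketch merely prints which rules get attached on start and detached on stop, mirroring how a single mlx4_flow_sync() call now stands in for both mlx4_flow_start() and mlx4_flow_stop() in the patch above.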