From patchwork Wed Oct 11 14:35:17 2017
X-Patchwork-Submitter: Adrien Mazarguil
X-Patchwork-Id: 30136
From: Adrien Mazarguil
To: Ferruh Yigit
Cc: dev@dpdk.org
Date: Wed, 11 Oct 2017 16:35:17 +0200
Message-Id: <31b0fb1d1233fc2fed7855ed8dac004cf03a9c9f.1507730496.git.adrien.mazarguil@6wind.com>
X-Mailer: git-send-email 2.1.4
Subject: [dpdk-dev] [PATCH v1 15/29] net/mlx4: simplify trigger code for flow rules

Since the flow rule synchronization function mlx4_flow_sync() takes the
state of the device into account (whether it is started), the trigger
functions mlx4_flow_start() and mlx4_flow_stop() are redundant.
Standardize on mlx4_flow_sync().

Use this opportunity to enhance this function with better error reporting,
as the inability to start the device due to a problem with a flow rule
otherwise results in a nondescript error code.
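To illustrate the resulting calling convention (a rough sketch only, not
part of the diff below; the helper name is made up, and ERROR() stands for
the driver's usual logging macro), callers that want verbose diagnostics
pass a struct rte_flow_error and dump its contents on failure, while
callers that do not care simply pass NULL:

  /*
   * Hypothetical helper; assumes the mlx4 PMD context (struct priv,
   * mlx4_flow_sync() as modified below, the ERROR() logging macro),
   * plus <string.h> for strerror() and <rte_flow.h> for
   * struct rte_flow_error.
   */
  static int
  example_flow_sync_report(struct priv *priv)
  {
          struct rte_flow_error error;
          int ret;

          /* Attach or detach all flow rules according to device state. */
          ret = mlx4_flow_sync(priv, &error);
          if (ret)
                  ERROR("flow rules synchronization failed (code %d, \"%s\"),"
                        " flow error type %d, cause %p, message: %s",
                        -ret, strerror(-ret), error.type, error.cause,
                        error.message ? error.message : "(unspecified)");
          /* Negative errno value on failure, 0 on success. */
          return ret;
  }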
Signed-off-by: Adrien Mazarguil
Acked-by: Nelio Laranjeiro
---
 drivers/net/mlx4/mlx4.c      | 25 ++++++++-----
 drivers/net/mlx4/mlx4_flow.c | 76 +++++++++------------------------------
 drivers/net/mlx4/mlx4_flow.h |  4 +--
 drivers/net/mlx4/mlx4_rxq.c  | 14 ++++++--
 4 files changed, 46 insertions(+), 73 deletions(-)

diff --git a/drivers/net/mlx4/mlx4.c b/drivers/net/mlx4/mlx4.c
index 40c0ee2..256aa3d 100644
--- a/drivers/net/mlx4/mlx4.c
+++ b/drivers/net/mlx4/mlx4.c
@@ -60,6 +60,7 @@
 #include
 #include
 #include
+#include <rte_flow.h>
 #include
 #include
 #include
@@ -97,13 +98,17 @@ static int
 mlx4_dev_configure(struct rte_eth_dev *dev)
 {
         struct priv *priv = dev->data->dev_private;
+        struct rte_flow_error error;
         int ret;

         /* Prepare internal flow rules. */
-        ret = mlx4_flow_sync(priv);
-        if (ret)
-                ERROR("cannot set up internal flow rules: %s",
-                      strerror(-ret));
+        ret = mlx4_flow_sync(priv, &error);
+        if (ret) {
+                ERROR("cannot set up internal flow rules (code %d, \"%s\"),"
+                      " flow error type %d, cause %p, message: %s",
+                      -ret, strerror(-ret), error.type, error.cause,
+                      error.message ? error.message : "(unspecified)");
+        }
         return ret;
 }

@@ -122,6 +127,7 @@ static int
 mlx4_dev_start(struct rte_eth_dev *dev)
 {
         struct priv *priv = dev->data->dev_private;
+        struct rte_flow_error error;
         int ret;

         if (priv->started)
@@ -134,10 +140,13 @@ mlx4_dev_start(struct rte_eth_dev *dev)
                       (void *)dev);
                 goto err;
         }
-        ret = mlx4_flow_start(priv);
+        ret = mlx4_flow_sync(priv, &error);
         if (ret) {
-                ERROR("%p: flow start failed: %s",
-                      (void *)dev, strerror(ret));
+                ERROR("%p: cannot attach flow rules (code %d, \"%s\"),"
+                      " flow error type %d, cause %p, message: %s",
+                      (void *)dev,
+                      -ret, strerror(-ret), error.type, error.cause,
+                      error.message ? error.message : "(unspecified)");
                 goto err;
         }
         return 0;
@@ -164,7 +173,7 @@ mlx4_dev_stop(struct rte_eth_dev *dev)
                 return;
         DEBUG("%p: detaching flows from all RX queues", (void *)dev);
         priv->started = 0;
-        mlx4_flow_stop(priv);
+        mlx4_flow_sync(priv, NULL);
         mlx4_intr_uninstall(priv);
 }

diff --git a/drivers/net/mlx4/mlx4_flow.c b/drivers/net/mlx4/mlx4_flow.c
index c4de9d9..218b23f 100644
--- a/drivers/net/mlx4/mlx4_flow.c
+++ b/drivers/net/mlx4/mlx4_flow.c
@@ -956,14 +956,9 @@ mlx4_flow_isolate(struct rte_eth_dev *dev,
         if (!!enable == !!priv->isolated)
                 return 0;
         priv->isolated = !!enable;
-        if (mlx4_flow_sync(priv)) {
+        if (mlx4_flow_sync(priv, error)) {
                 priv->isolated = !enable;
-                return rte_flow_error_set(error, rte_errno,
-                                          RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-                                          NULL,
-                                          enable ?
-                                          "cannot enter isolated mode" :
-                                          "cannot leave isolated mode");
+                return -rte_errno;
         }
         return 0;
 }
@@ -1075,12 +1070,14 @@ mlx4_flow_internal(struct priv *priv, struct rte_flow_error *error)
  *
  * @param priv
  *   Pointer to private structure.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
  *
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 int
-mlx4_flow_sync(struct priv *priv)
+mlx4_flow_sync(struct priv *priv, struct rte_flow_error *error)
 {
         struct rte_flow *flow;
         int ret;
@@ -1094,20 +1091,27 @@ mlx4_flow_sync(struct priv *priv)
                 for (flow = LIST_FIRST(&priv->flows);
                      flow && flow->internal;
                      flow = LIST_FIRST(&priv->flows))
-                        claim_zero(mlx4_flow_destroy(priv->dev, flow, NULL));
+                        claim_zero(mlx4_flow_destroy(priv->dev, flow, error));
         } else if (!LIST_FIRST(&priv->flows) ||
                    !LIST_FIRST(&priv->flows)->internal) {
                 /*
                  * If the first rule is not internal outside isolated mode,
                  * they must be added back.
                  */
-                ret = mlx4_flow_internal(priv, NULL);
+                ret = mlx4_flow_internal(priv, error);
+                if (ret)
+                        return ret;
+        }
+        /* Toggle the remaining flow rules . */
+        for (flow = LIST_FIRST(&priv->flows);
+             flow;
+             flow = LIST_NEXT(flow, next)) {
+                ret = mlx4_flow_toggle(priv, flow, priv->started, error);
                 if (ret)
                         return ret;
         }
-        if (priv->started)
-                return mlx4_flow_start(priv);
-        mlx4_flow_stop(priv);
+        if (!priv->started)
+                assert(!priv->drop);
         return 0;
 }

@@ -1129,52 +1133,6 @@ mlx4_flow_clean(struct priv *priv)
                 mlx4_flow_destroy(priv->dev, flow, NULL);
 }

-/**
- * Disable flow rules.
- *
- * @param priv
- *   Pointer to private structure.
- */
-void
-mlx4_flow_stop(struct priv *priv)
-{
-        struct rte_flow *flow;
-
-        for (flow = LIST_FIRST(&priv->flows);
-             flow;
-             flow = LIST_NEXT(flow, next)) {
-                claim_zero(mlx4_flow_toggle(priv, flow, 0, NULL));
-        }
-        assert(!priv->drop);
-}
-
-/**
- * Enable flow rules.
- *
- * @param priv
- *   Pointer to private structure.
- *
- * @return
- *   0 on success, a negative errno value otherwise and rte_errno is set.
- */
-int
-mlx4_flow_start(struct priv *priv)
-{
-        int ret;
-        struct rte_flow *flow;
-
-        for (flow = LIST_FIRST(&priv->flows);
-             flow;
-             flow = LIST_NEXT(flow, next)) {
-                ret = mlx4_flow_toggle(priv, flow, 1, NULL);
-                if (unlikely(ret)) {
-                        mlx4_flow_stop(priv);
-                        return ret;
-                }
-        }
-        return 0;
-}
-
 static const struct rte_flow_ops mlx4_flow_ops = {
         .validate = mlx4_flow_validate,
         .create = mlx4_flow_create,
diff --git a/drivers/net/mlx4/mlx4_flow.h b/drivers/net/mlx4/mlx4_flow.h
index c2ffa8d..13495d7 100644
--- a/drivers/net/mlx4/mlx4_flow.h
+++ b/drivers/net/mlx4/mlx4_flow.h
@@ -72,10 +72,8 @@ struct rte_flow {

 /* mlx4_flow.c */

-int mlx4_flow_sync(struct priv *priv);
+int mlx4_flow_sync(struct priv *priv, struct rte_flow_error *error);
 void mlx4_flow_clean(struct priv *priv);
-int mlx4_flow_start(struct priv *priv);
-void mlx4_flow_stop(struct priv *priv);
 int mlx4_filter_ctrl(struct rte_eth_dev *dev,
                      enum rte_filter_type filter_type,
                      enum rte_filter_op filter_op,
diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
index 7bb2f9e..bcb7b94 100644
--- a/drivers/net/mlx4/mlx4_rxq.c
+++ b/drivers/net/mlx4/mlx4_rxq.c
@@ -54,6 +54,7 @@
 #include
 #include
 #include
+#include <rte_flow.h>
 #include
 #include
 #include
@@ -401,7 +402,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
                 }
                 dev->data->rx_queues[idx] = NULL;
                 /* Disable associated flows. */
-                mlx4_flow_sync(priv);
+                mlx4_flow_sync(priv, NULL);
                 mlx4_rxq_cleanup(rxq);
         } else {
                 rxq = rte_calloc_socket("RXQ", 1, sizeof(*rxq), 0, socket);
@@ -416,13 +417,20 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
                 if (ret) {
                         rte_free(rxq);
                 } else {
+                        struct rte_flow_error error;
+
                         rxq->stats.idx = idx;
                         DEBUG("%p: adding Rx queue %p to list",
                               (void *)dev, (void *)rxq);
                         dev->data->rx_queues[idx] = rxq;
                         /* Re-enable associated flows. */
-                        ret = mlx4_flow_sync(priv);
+                        ret = mlx4_flow_sync(priv, &error);
                         if (ret) {
+                                ERROR("cannot re-attach flow rules to queue %u"
+                                      " (code %d, \"%s\"), flow error type %d,"
+                                      " cause %p, message: %s", idx,
+                                      -ret, strerror(-ret), error.type, error.cause,
+                                      error.message ? error.message : "(unspecified)");
                                 dev->data->rx_queues[idx] = NULL;
                                 mlx4_rxq_cleanup(rxq);
                                 rte_free(rxq);
@@ -457,7 +465,7 @@ mlx4_rx_queue_release(void *dpdk_rxq)
                         priv->dev->data->rx_queues[i] = NULL;
                         break;
                 }
-        mlx4_flow_sync(priv);
+        mlx4_flow_sync(priv, NULL);
         mlx4_rxq_cleanup(rxq);
         rte_free(rxq);
 }