From patchwork Wed Oct 11 14:35:09 2017
X-Patchwork-Submitter: Adrien Mazarguil
X-Patchwork-Id: 30129
From: Adrien Mazarguil
To: Ferruh Yigit
Cc: dev@dpdk.org
Date: Wed, 11 Oct 2017 16:35:09 +0200
Message-Id: <1035743f25d12d36d89448e16e86843c0236647e.1507730496.git.adrien.mazarguil@6wind.com>
Subject: [dpdk-dev] [PATCH v1 07/29] net/mlx4: tidy up flow rule handling code

- Remove unnecessary casts.
- Replace consecutive if/else blocks with switch statements.
- Use proper big endian definitions for mask values.
- Make end marker checks of item and action lists less verbose since they
  are explicitly documented as being equal to 0.
- Remove unnecessary NULL check on action configuration structure.

This commit does not cause any functional change.
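For context, the end-marker simplification works because RTE_FLOW_ITEM_TYPE_END
and RTE_FLOW_ACTION_TYPE_END are defined as 0 in rte_flow.h, so action->type and
item->type can be tested directly, while RTE_BE16()/RTE_BE32() from
rte_byteorder.h yield big-endian constants that are correct on hosts of either
endianness. Below is a minimal sketch of the same pattern; it is not part of the
patch, and the helper name count_queue_actions, its nb_rx_queues parameter and
the standalone vlan_vid_mask object are illustrative only:

	#include <stdint.h>

	#include <rte_byteorder.h>
	#include <rte_flow.h>

	/*
	 * Illustrative helper: walk an action list the way the reworked code
	 * does, relying on RTE_FLOW_ACTION_TYPE_END being 0 so "action->type"
	 * doubles as the end-of-list test.
	 */
	static int
	count_queue_actions(const struct rte_flow_action *actions,
			    uint16_t nb_rx_queues)
	{
		const struct rte_flow_action *action;
		int queues = 0;

		for (action = actions; action->type; ++action) {
			switch (action->type) {
			case RTE_FLOW_ACTION_TYPE_QUEUE: {
				const struct rte_flow_action_queue *queue =
					action->conf;

				/* Same bound check as the patch, minus the
				 * removed NULL test on action->conf. */
				if (queue->index >= nb_rx_queues)
					return -1;
				++queues;
				break;
			}
			default:
				break;
			}
		}
		return queues;
	}

	/*
	 * RTE_BE16()/RTE_BE32() produce constants in network order on both
	 * little- and big-endian hosts, replacing the "-1" values and the
	 * RTE_BYTE_ORDER #ifdef that the patch removes.
	 */
	static const struct rte_flow_item_vlan vlan_vid_mask = {
		.tci = RTE_BE16(0x0fff),
	};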
Signed-off-by: Adrien Mazarguil
Acked-by: Nelio Laranjeiro
---
 drivers/net/mlx4/mlx4_flow.c | 115 ++++++++++++++++++--------------------
 1 file changed, 53 insertions(+), 62 deletions(-)

diff --git a/drivers/net/mlx4/mlx4_flow.c b/drivers/net/mlx4/mlx4_flow.c
index e5854c6..fa56419 100644
--- a/drivers/net/mlx4/mlx4_flow.c
+++ b/drivers/net/mlx4/mlx4_flow.c
@@ -53,6 +53,7 @@
 #pragma GCC diagnostic error "-Wpedantic"
 #endif
 
+#include 
 #include 
 #include 
 #include 
@@ -108,7 +109,7 @@ struct mlx4_flow_proc_item {
 	 *   rte_flow item to convert.
 	 * @param default_mask
 	 *   Default bit-masks to use when item->mask is not provided.
-	 * @param data
+	 * @param flow
 	 *   Internal structure to store the conversion.
 	 *
 	 * @return
@@ -116,7 +117,7 @@ struct mlx4_flow_proc_item {
 	 */
 	int (*convert)(const struct rte_flow_item *item,
 		       const void *default_mask,
-		       void *data);
+		       struct mlx4_flow *flow);
 	/** Size in bytes of the destination structure. */
 	const unsigned int dst_sz;
 	/** List of possible subsequent items. */
@@ -135,17 +136,16 @@ struct rte_flow_drop {
  *   Item specification.
  * @param default_mask[in]
  *   Default bit-masks to use when item->mask is not provided.
- * @param data[in, out]
- *   User structure.
+ * @param flow[in, out]
+ *   Conversion result.
  */
 static int
 mlx4_flow_create_eth(const struct rte_flow_item *item,
 		     const void *default_mask,
-		     void *data)
+		     struct mlx4_flow *flow)
 {
 	const struct rte_flow_item_eth *spec = item->spec;
 	const struct rte_flow_item_eth *mask = item->mask;
-	struct mlx4_flow *flow = (struct mlx4_flow *)data;
 	struct ibv_flow_spec_eth *eth;
 	const unsigned int eth_size = sizeof(struct ibv_flow_spec_eth);
 	unsigned int i;
@@ -182,17 +182,16 @@ mlx4_flow_create_eth(const struct rte_flow_item *item,
  *   Item specification.
  * @param default_mask[in]
  *   Default bit-masks to use when item->mask is not provided.
- * @param data[in, out]
- *   User structure.
+ * @param flow[in, out]
+ *   Conversion result.
 */
 static int
 mlx4_flow_create_vlan(const struct rte_flow_item *item,
 		      const void *default_mask,
-		      void *data)
+		      struct mlx4_flow *flow)
 {
 	const struct rte_flow_item_vlan *spec = item->spec;
 	const struct rte_flow_item_vlan *mask = item->mask;
-	struct mlx4_flow *flow = (struct mlx4_flow *)data;
 	struct ibv_flow_spec_eth *eth;
 	const unsigned int eth_size = sizeof(struct ibv_flow_spec_eth);
 
@@ -214,17 +213,16 @@ mlx4_flow_create_vlan(const struct rte_flow_item *item,
  *   Item specification.
  * @param default_mask[in]
  *   Default bit-masks to use when item->mask is not provided.
- * @param data[in, out]
- *   User structure.
+ * @param flow[in, out]
+ *   Conversion result.
 */
 static int
 mlx4_flow_create_ipv4(const struct rte_flow_item *item,
 		      const void *default_mask,
-		      void *data)
+		      struct mlx4_flow *flow)
 {
 	const struct rte_flow_item_ipv4 *spec = item->spec;
 	const struct rte_flow_item_ipv4 *mask = item->mask;
-	struct mlx4_flow *flow = (struct mlx4_flow *)data;
 	struct ibv_flow_spec_ipv4 *ipv4;
 	unsigned int ipv4_size = sizeof(struct ibv_flow_spec_ipv4);
 
@@ -260,17 +258,16 @@ mlx4_flow_create_ipv4(const struct rte_flow_item *item,
  *   Item specification.
  * @param default_mask[in]
  *   Default bit-masks to use when item->mask is not provided.
- * @param data[in, out]
- *   User structure.
+ * @param flow[in, out]
+ *   Conversion result.
 */
 static int
 mlx4_flow_create_udp(const struct rte_flow_item *item,
 		     const void *default_mask,
-		     void *data)
+		     struct mlx4_flow *flow)
 {
 	const struct rte_flow_item_udp *spec = item->spec;
 	const struct rte_flow_item_udp *mask = item->mask;
-	struct mlx4_flow *flow = (struct mlx4_flow *)data;
 	struct ibv_flow_spec_tcp_udp *udp;
 	unsigned int udp_size = sizeof(struct ibv_flow_spec_tcp_udp);
 
@@ -302,17 +299,16 @@ mlx4_flow_create_udp(const struct rte_flow_item *item,
  *   Item specification.
  * @param default_mask[in]
  *   Default bit-masks to use when item->mask is not provided.
- * @param data[in, out]
- *   User structure.
+ * @param flow[in, out]
+ *   Conversion result.
 */
 static int
 mlx4_flow_create_tcp(const struct rte_flow_item *item,
 		     const void *default_mask,
-		     void *data)
+		     struct mlx4_flow *flow)
 {
 	const struct rte_flow_item_tcp *spec = item->spec;
 	const struct rte_flow_item_tcp *mask = item->mask;
-	struct mlx4_flow *flow = (struct mlx4_flow *)data;
 	struct ibv_flow_spec_tcp_udp *tcp;
 	unsigned int tcp_size = sizeof(struct ibv_flow_spec_tcp_udp);
 
@@ -496,12 +492,8 @@ static const struct mlx4_flow_proc_item mlx4_flow_proc_item_list[] = {
 	[RTE_FLOW_ITEM_TYPE_VLAN] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_IPV4),
 		.mask = &(const struct rte_flow_item_vlan){
-			/* rte_flow_item_vlan_mask is invalid for mlx4. */
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
-			.tci = 0x0fff,
-#else
-			.tci = 0xff0f,
-#endif
+			/* Only TCI VID matching is supported. */
+			.tci = RTE_BE16(0x0fff),
 		},
 		.mask_sz = sizeof(struct rte_flow_item_vlan),
 		.validate = mlx4_flow_validate_vlan,
@@ -513,8 +505,8 @@ static const struct mlx4_flow_proc_item mlx4_flow_proc_item_list[] = {
 				       RTE_FLOW_ITEM_TYPE_TCP),
 		.mask = &(const struct rte_flow_item_ipv4){
 			.hdr = {
-				.src_addr = -1,
-				.dst_addr = -1,
+				.src_addr = RTE_BE32(0xffffffff),
+				.dst_addr = RTE_BE32(0xffffffff),
 			},
 		},
 		.default_mask = &rte_flow_item_ipv4_mask,
@@ -526,8 +518,8 @@ static const struct mlx4_flow_proc_item mlx4_flow_proc_item_list[] = {
 	[RTE_FLOW_ITEM_TYPE_UDP] = {
 		.mask = &(const struct rte_flow_item_udp){
 			.hdr = {
-				.src_port = -1,
-				.dst_port = -1,
+				.src_port = RTE_BE16(0xffff),
+				.dst_port = RTE_BE16(0xffff),
 			},
 		},
 		.default_mask = &rte_flow_item_udp_mask,
@@ -539,8 +531,8 @@ static const struct mlx4_flow_proc_item mlx4_flow_proc_item_list[] = {
 	[RTE_FLOW_ITEM_TYPE_TCP] = {
 		.mask = &(const struct rte_flow_item_tcp){
 			.hdr = {
-				.src_port = -1,
-				.dst_port = -1,
+				.src_port = RTE_BE16(0xffff),
+				.dst_port = RTE_BE16(0xffff),
 			},
 		},
 		.default_mask = &rte_flow_item_tcp_mask,
@@ -627,7 +619,7 @@ mlx4_flow_prepare(struct priv *priv,
 		return -rte_errno;
 	}
 	/* Go over pattern. */
-	for (item = pattern; item->type != RTE_FLOW_ITEM_TYPE_END; ++item) {
+	for (item = pattern; item->type; ++item) {
 		const struct mlx4_flow_proc_item *next = NULL;
 		unsigned int i;
 		int err;
@@ -641,7 +633,7 @@ mlx4_flow_prepare(struct priv *priv,
 		if (!item->spec && item->type == RTE_FLOW_ITEM_TYPE_ETH) {
 			const struct rte_flow_item *next = item + 1;
 
-			if (next->type != RTE_FLOW_ITEM_TYPE_END) {
+			if (next->type) {
 				rte_flow_error_set(error, ENOTSUP,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -650,10 +642,7 @@ mlx4_flow_prepare(struct priv *priv,
 				return -rte_errno;
 			}
 		}
-		for (i = 0;
-		     proc->next_item &&
-		     proc->next_item[i] != RTE_FLOW_ITEM_TYPE_END;
-		     ++i) {
+		for (i = 0; proc->next_item && proc->next_item[i]; ++i) {
 			if (proc->next_item[i] == item->type) {
 				next = &mlx4_flow_proc_item_list[item->type];
 				break;
@@ -680,22 +669,22 @@ mlx4_flow_prepare(struct priv *priv,
 	if (priv->isolated && flow->ibv_attr)
 		flow->ibv_attr->priority = priority_override;
 	/* Go over actions list. */
-	for (action = actions;
-	     action->type != RTE_FLOW_ACTION_TYPE_END;
-	     ++action) {
-		if (action->type == RTE_FLOW_ACTION_TYPE_VOID) {
+	for (action = actions; action->type; ++action) {
+		switch (action->type) {
+			const struct rte_flow_action_queue *queue;
+
+		case RTE_FLOW_ACTION_TYPE_VOID:
 			continue;
-		} else if (action->type == RTE_FLOW_ACTION_TYPE_DROP) {
+		case RTE_FLOW_ACTION_TYPE_DROP:
 			target.drop = 1;
-		} else if (action->type == RTE_FLOW_ACTION_TYPE_QUEUE) {
-			const struct rte_flow_action_queue *queue =
-				action->conf;
-
-			if (!queue || (queue->index >
-				       (priv->dev->data->nb_rx_queues - 1)))
+			break;
+		case RTE_FLOW_ACTION_TYPE_QUEUE:
+			queue = action->conf;
+			if (queue->index >= priv->dev->data->nb_rx_queues)
 				goto exit_action_not_supported;
 			target.queue = 1;
-		} else {
+			break;
+		default:
 			goto exit_action_not_supported;
 		}
 	}
@@ -907,19 +896,21 @@ mlx4_flow_create(struct rte_eth_dev *dev,
 		.queue = 0,
 		.drop = 0,
 	};
-	for (action = actions;
-	     action->type != RTE_FLOW_ACTION_TYPE_END;
-	     ++action) {
-		if (action->type == RTE_FLOW_ACTION_TYPE_VOID) {
+	for (action = actions; action->type; ++action) {
+		switch (action->type) {
+			const struct rte_flow_action_queue *queue;
+
+		case RTE_FLOW_ACTION_TYPE_VOID:
 			continue;
-		} else if (action->type == RTE_FLOW_ACTION_TYPE_QUEUE) {
+		case RTE_FLOW_ACTION_TYPE_QUEUE:
+			queue = action->conf;
 			target.queue = 1;
-			target.queue_id =
-				((const struct rte_flow_action_queue *)
-				 action->conf)->index;
-		} else if (action->type == RTE_FLOW_ACTION_TYPE_DROP) {
+			target.queue_id = queue->index;
+			break;
+		case RTE_FLOW_ACTION_TYPE_DROP:
 			target.drop = 1;
-		} else {
+			break;
+		default:
 			rte_flow_error_set(error, ENOTSUP,
 					   RTE_FLOW_ERROR_TYPE_ACTION,
 					   action, "unsupported action");