From patchwork Thu Oct 12 12:19:21 2017
X-Patchwork-Submitter: Adrien Mazarguil
X-Patchwork-Id: 30258
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Adrien Mazarguil
To: Ferruh Yigit
Cc: Nelio Laranjeiro, dev@dpdk.org
Date: Thu, 12 Oct 2017 14:19:21 +0200
Message-Id: <9178a16f674af17afcace6da284352b978c49459.1507809961.git.adrien.mazarguil@6wind.com>
Subject: [dpdk-dev] [PATCH v2 07/29] net/mlx4: tidy up flow rule handling code

- Remove unnecessary casts.
- Replace consecutive if/else blocks with switch statements.
- Use proper big endian definitions for mask values.
- Make end marker checks of item and action lists less verbose since
  they are explicitly documented as being equal to 0.
- Remove unnecessary NULL check on action configuration structure.

This commit does not cause any functional change.

Signed-off-by: Adrien Mazarguil
Acked-by: Nelio Laranjeiro
---
 drivers/net/mlx4/mlx4_flow.c | 115 ++++++++++++++++++--------------------
 1 file changed, 53 insertions(+), 62 deletions(-)

diff --git a/drivers/net/mlx4/mlx4_flow.c b/drivers/net/mlx4/mlx4_flow.c
index e5854c6..fa56419 100644
--- a/drivers/net/mlx4/mlx4_flow.c
+++ b/drivers/net/mlx4/mlx4_flow.c
@@ -53,6 +53,7 @@
 #pragma GCC diagnostic error "-Wpedantic"
 #endif
 
+#include
 #include
 #include
 #include
@@ -108,7 +109,7 @@ struct mlx4_flow_proc_item {
 	 * rte_flow item to convert.
 	 * @param default_mask
 	 *   Default bit-masks to use when item->mask is not provided.
-	 * @param data
+	 * @param flow
 	 *   Internal structure to store the conversion.
 	 *
 	 * @return
@@ -116,7 +117,7 @@ struct mlx4_flow_proc_item {
 	 */
 	int (*convert)(const struct rte_flow_item *item,
 		       const void *default_mask,
-		       void *data);
+		       struct mlx4_flow *flow);
 	/** Size in bytes of the destination structure. */
 	const unsigned int dst_sz;
 	/** List of possible subsequent items. */
@@ -135,17 +136,16 @@ struct rte_flow_drop {
  *   Item specification.
  * @param default_mask[in]
  *   Default bit-masks to use when item->mask is not provided.
- * @param data[in, out]
- *   User structure.
+ * @param flow[in, out]
+ *   Conversion result.
  */
 static int
 mlx4_flow_create_eth(const struct rte_flow_item *item,
 		     const void *default_mask,
-		     void *data)
+		     struct mlx4_flow *flow)
 {
 	const struct rte_flow_item_eth *spec = item->spec;
 	const struct rte_flow_item_eth *mask = item->mask;
-	struct mlx4_flow *flow = (struct mlx4_flow *)data;
 	struct ibv_flow_spec_eth *eth;
 	const unsigned int eth_size = sizeof(struct ibv_flow_spec_eth);
 	unsigned int i;
@@ -182,17 +182,16 @@ mlx4_flow_create_eth(const struct rte_flow_item *item,
  *   Item specification.
  * @param default_mask[in]
  *   Default bit-masks to use when item->mask is not provided.
- * @param data[in, out]
- *   User structure.
+ * @param flow[in, out]
+ *   Conversion result.
  */
 static int
 mlx4_flow_create_vlan(const struct rte_flow_item *item,
 		      const void *default_mask,
-		      void *data)
+		      struct mlx4_flow *flow)
 {
 	const struct rte_flow_item_vlan *spec = item->spec;
 	const struct rte_flow_item_vlan *mask = item->mask;
-	struct mlx4_flow *flow = (struct mlx4_flow *)data;
 	struct ibv_flow_spec_eth *eth;
 	const unsigned int eth_size = sizeof(struct ibv_flow_spec_eth);
@@ -214,17 +213,16 @@ mlx4_flow_create_vlan(const struct rte_flow_item *item,
  *   Item specification.
  * @param default_mask[in]
  *   Default bit-masks to use when item->mask is not provided.
- * @param data[in, out]
- *   User structure.
+ * @param flow[in, out]
+ *   Conversion result.
  */
 static int
 mlx4_flow_create_ipv4(const struct rte_flow_item *item,
 		      const void *default_mask,
-		      void *data)
+		      struct mlx4_flow *flow)
 {
 	const struct rte_flow_item_ipv4 *spec = item->spec;
 	const struct rte_flow_item_ipv4 *mask = item->mask;
-	struct mlx4_flow *flow = (struct mlx4_flow *)data;
 	struct ibv_flow_spec_ipv4 *ipv4;
 	unsigned int ipv4_size = sizeof(struct ibv_flow_spec_ipv4);
@@ -260,17 +258,16 @@ mlx4_flow_create_ipv4(const struct rte_flow_item *item,
  *   Item specification.
  * @param default_mask[in]
  *   Default bit-masks to use when item->mask is not provided.
- * @param data[in, out]
- *   User structure.
+ * @param flow[in, out]
+ *   Conversion result.
  */
 static int
 mlx4_flow_create_udp(const struct rte_flow_item *item,
 		     const void *default_mask,
-		     void *data)
+		     struct mlx4_flow *flow)
 {
 	const struct rte_flow_item_udp *spec = item->spec;
 	const struct rte_flow_item_udp *mask = item->mask;
-	struct mlx4_flow *flow = (struct mlx4_flow *)data;
 	struct ibv_flow_spec_tcp_udp *udp;
 	unsigned int udp_size = sizeof(struct ibv_flow_spec_tcp_udp);
@@ -302,17 +299,16 @@ mlx4_flow_create_udp(const struct rte_flow_item *item,
  *   Item specification.
  * @param default_mask[in]
  *   Default bit-masks to use when item->mask is not provided.
- * @param data[in, out]
- *   User structure.
+ * @param flow[in, out]
+ *   Conversion result.
  */
 static int
 mlx4_flow_create_tcp(const struct rte_flow_item *item,
 		     const void *default_mask,
-		     void *data)
+		     struct mlx4_flow *flow)
 {
 	const struct rte_flow_item_tcp *spec = item->spec;
 	const struct rte_flow_item_tcp *mask = item->mask;
-	struct mlx4_flow *flow = (struct mlx4_flow *)data;
 	struct ibv_flow_spec_tcp_udp *tcp;
 	unsigned int tcp_size = sizeof(struct ibv_flow_spec_tcp_udp);
@@ -496,12 +492,8 @@ static const struct mlx4_flow_proc_item mlx4_flow_proc_item_list[] = {
 	[RTE_FLOW_ITEM_TYPE_VLAN] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_IPV4),
 		.mask = &(const struct rte_flow_item_vlan){
-			/* rte_flow_item_vlan_mask is invalid for mlx4. */
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
-			.tci = 0x0fff,
-#else
-			.tci = 0xff0f,
-#endif
+			/* Only TCI VID matching is supported. */
+			.tci = RTE_BE16(0x0fff),
 		},
 		.mask_sz = sizeof(struct rte_flow_item_vlan),
 		.validate = mlx4_flow_validate_vlan,
@@ -513,8 +505,8 @@ static const struct mlx4_flow_proc_item mlx4_flow_proc_item_list[] = {
 				  RTE_FLOW_ITEM_TYPE_TCP),
 		.mask = &(const struct rte_flow_item_ipv4){
 			.hdr = {
-				.src_addr = -1,
-				.dst_addr = -1,
+				.src_addr = RTE_BE32(0xffffffff),
+				.dst_addr = RTE_BE32(0xffffffff),
 			},
 		},
 		.default_mask = &rte_flow_item_ipv4_mask,
@@ -526,8 +518,8 @@ static const struct mlx4_flow_proc_item mlx4_flow_proc_item_list[] = {
 	[RTE_FLOW_ITEM_TYPE_UDP] = {
 		.mask = &(const struct rte_flow_item_udp){
 			.hdr = {
-				.src_port = -1,
-				.dst_port = -1,
+				.src_port = RTE_BE16(0xffff),
+				.dst_port = RTE_BE16(0xffff),
 			},
 		},
 		.default_mask = &rte_flow_item_udp_mask,
@@ -539,8 +531,8 @@ static const struct mlx4_flow_proc_item mlx4_flow_proc_item_list[] = {
 	[RTE_FLOW_ITEM_TYPE_TCP] = {
 		.mask = &(const struct rte_flow_item_tcp){
 			.hdr = {
-				.src_port = -1,
-				.dst_port = -1,
+				.src_port = RTE_BE16(0xffff),
+				.dst_port = RTE_BE16(0xffff),
 			},
 		},
 		.default_mask = &rte_flow_item_tcp_mask,
@@ -627,7 +619,7 @@ mlx4_flow_prepare(struct priv *priv,
 		return -rte_errno;
 	}
 	/* Go over pattern. */
-	for (item = pattern; item->type != RTE_FLOW_ITEM_TYPE_END; ++item) {
+	for (item = pattern; item->type; ++item) {
 		const struct mlx4_flow_proc_item *next = NULL;
 		unsigned int i;
 		int err;
@@ -641,7 +633,7 @@
 		if (!item->spec && item->type == RTE_FLOW_ITEM_TYPE_ETH) {
 			const struct rte_flow_item *next = item + 1;
 
-			if (next->type != RTE_FLOW_ITEM_TYPE_END) {
+			if (next->type) {
 				rte_flow_error_set(error, ENOTSUP,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -650,10 +642,7 @@
 				return -rte_errno;
 			}
 		}
-		for (i = 0;
-		     proc->next_item &&
-		     proc->next_item[i] != RTE_FLOW_ITEM_TYPE_END;
-		     ++i) {
+		for (i = 0; proc->next_item && proc->next_item[i]; ++i) {
 			if (proc->next_item[i] == item->type) {
 				next = &mlx4_flow_proc_item_list[item->type];
 				break;
@@ -680,22 +669,22 @@
 	if (priv->isolated && flow->ibv_attr)
 		flow->ibv_attr->priority = priority_override;
 	/* Go over actions list. */
-	for (action = actions;
-	     action->type != RTE_FLOW_ACTION_TYPE_END;
-	     ++action) {
-		if (action->type == RTE_FLOW_ACTION_TYPE_VOID) {
+	for (action = actions; action->type; ++action) {
+		switch (action->type) {
+		const struct rte_flow_action_queue *queue;
+
+		case RTE_FLOW_ACTION_TYPE_VOID:
 			continue;
-		} else if (action->type == RTE_FLOW_ACTION_TYPE_DROP) {
+		case RTE_FLOW_ACTION_TYPE_DROP:
 			target.drop = 1;
-		} else if (action->type == RTE_FLOW_ACTION_TYPE_QUEUE) {
-			const struct rte_flow_action_queue *queue =
-				action->conf;
-
-			if (!queue || (queue->index >
-				       (priv->dev->data->nb_rx_queues - 1)))
+			break;
+		case RTE_FLOW_ACTION_TYPE_QUEUE:
+			queue = action->conf;
+			if (queue->index >= priv->dev->data->nb_rx_queues)
 				goto exit_action_not_supported;
 			target.queue = 1;
-		} else {
+			break;
+		default:
 			goto exit_action_not_supported;
 		}
 	}
@@ -907,19 +896,21 @@ mlx4_flow_create(struct rte_eth_dev *dev,
 		.queue = 0,
 		.drop = 0,
 	};
-	for (action = actions;
-	     action->type != RTE_FLOW_ACTION_TYPE_END;
-	     ++action) {
-		if (action->type == RTE_FLOW_ACTION_TYPE_VOID) {
+	for (action = actions; action->type; ++action) {
+		switch (action->type) {
+		const struct rte_flow_action_queue *queue;
+
+		case RTE_FLOW_ACTION_TYPE_VOID:
 			continue;
-		} else if (action->type == RTE_FLOW_ACTION_TYPE_QUEUE) {
+		case RTE_FLOW_ACTION_TYPE_QUEUE:
+			queue = action->conf;
 			target.queue = 1;
-			target.queue_id =
-				((const struct rte_flow_action_queue *)
-				 action->conf)->index;
-		} else if (action->type == RTE_FLOW_ACTION_TYPE_DROP) {
+			target.queue_id = queue->index;
+			break;
+		case RTE_FLOW_ACTION_TYPE_DROP:
 			target.drop = 1;
-		} else {
+			break;
+		default:
 			rte_flow_error_set(error, ENOTSUP,
 					   RTE_FLOW_ERROR_TYPE_ACTION,
 					   action, "unsupported action");