From patchwork Wed Oct 11 14:35:16 2017
X-Patchwork-Submitter: Adrien Mazarguil
X-Patchwork-Id: 30135
From: Adrien Mazarguil
To: Ferruh Yigit
Cc: dev@dpdk.org
Date: Wed, 11 Oct 2017 16:35:16 +0200
Message-Id: <07652488f01610d5cdb24466d900ae1d4e07c468.1507730496.git.adrien.mazarguil@6wind.com>
Subject: [dpdk-dev] [PATCH v1 14/29] net/mlx4: generalize flow rule priority support

Since both internal and user-defined flow rules are handled by a common
implementation, flow rule priority overlaps are easier to detect. There is
no need to restrict their use to isolated mode anymore.

With this patch, only the lowest priority level remains inaccessible to
users outside isolated mode.

Also, the PMD no longer automatically assigns a fixed priority level to
user-defined flow rules. This means collisions between overlapping rules
matching a different number of protocol layers at a given priority level
are no longer avoided (e.g. "eth" vs. "eth / ipv4 / udp").
As a reminder, the outcome of overlapping rules for a given priority level
was, and still is, undefined territory according to API documentation.

Signed-off-by: Adrien Mazarguil
Acked-by: Nelio Laranjeiro
---
 drivers/net/mlx4/mlx4_flow.c | 34 ++++++++++++++++++----------------
 1 file changed, 18 insertions(+), 16 deletions(-)

diff --git a/drivers/net/mlx4/mlx4_flow.c b/drivers/net/mlx4/mlx4_flow.c
index be644a4..c4de9d9 100644
--- a/drivers/net/mlx4/mlx4_flow.c
+++ b/drivers/net/mlx4/mlx4_flow.c
@@ -155,7 +155,6 @@ mlx4_flow_create_eth(const struct rte_flow_item *item,
 	unsigned int i;
 
 	++flow->ibv_attr->num_of_specs;
-	flow->ibv_attr->priority = 2;
 	eth = (void *)((uintptr_t)flow->ibv_attr + flow->ibv_attr_size);
 	*eth = (struct ibv_flow_spec_eth) {
 		.type = IBV_FLOW_SPEC_ETH,
@@ -232,7 +231,6 @@ mlx4_flow_create_ipv4(const struct rte_flow_item *item,
 	unsigned int ipv4_size = sizeof(struct ibv_flow_spec_ipv4);
 
 	++flow->ibv_attr->num_of_specs;
-	flow->ibv_attr->priority = 1;
 	ipv4 = (void *)((uintptr_t)flow->ibv_attr + flow->ibv_attr_size);
 	*ipv4 = (struct ibv_flow_spec_ipv4) {
 		.type = IBV_FLOW_SPEC_IPV4,
@@ -277,7 +275,6 @@ mlx4_flow_create_udp(const struct rte_flow_item *item,
 	unsigned int udp_size = sizeof(struct ibv_flow_spec_tcp_udp);
 
 	++flow->ibv_attr->num_of_specs;
-	flow->ibv_attr->priority = 0;
 	udp = (void *)((uintptr_t)flow->ibv_attr + flow->ibv_attr_size);
 	*udp = (struct ibv_flow_spec_tcp_udp) {
 		.type = IBV_FLOW_SPEC_UDP,
@@ -318,7 +315,6 @@ mlx4_flow_create_tcp(const struct rte_flow_item *item,
 	unsigned int tcp_size = sizeof(struct ibv_flow_spec_tcp_udp);
 
 	++flow->ibv_attr->num_of_specs;
-	flow->ibv_attr->priority = 0;
 	tcp = (void *)((uintptr_t)flow->ibv_attr + flow->ibv_attr_size);
 	*tcp = (struct ibv_flow_spec_tcp_udp) {
 		.type = IBV_FLOW_SPEC_TCP,
@@ -581,19 +577,11 @@ mlx4_flow_prepare(struct priv *priv,
 	const struct mlx4_flow_proc_item *proc;
 	struct rte_flow temp = { .ibv_attr_size = sizeof(*temp.ibv_attr) };
 	struct rte_flow *flow = &temp;
-	uint32_t priority_override = 0;
 
 	if (attr->group)
 		return rte_flow_error_set
 			(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ATTR_GROUP,
 			 NULL, "groups are not supported");
-	if (priv->isolated)
-		priority_override = attr->priority;
-	else if (attr->priority)
-		return rte_flow_error_set
-			(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
-			 NULL,
-			 "priorities are not supported outside isolated mode");
 	if (attr->priority > MLX4_FLOW_PRIORITY_LAST)
 		return rte_flow_error_set
 			(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
@@ -659,9 +647,6 @@ mlx4_flow_prepare(struct priv *priv,
 		}
 		flow->ibv_attr_size += proc->dst_sz;
 	}
-	/* Use specified priority level when in isolated mode. */
-	if (priv->isolated && flow != &temp)
-		flow->ibv_attr->priority = priority_override;
 	/* Go over actions list. */
 	for (action = actions; action->type; ++action) {
 		switch (action->type) {
@@ -718,6 +703,7 @@ mlx4_flow_prepare(struct priv *priv,
 		*flow->ibv_attr = (struct ibv_flow_attr){
 			.type = IBV_FLOW_ATTR_NORMAL,
 			.size = sizeof(*flow->ibv_attr),
+			.priority = attr->priority,
 			.port = priv->port,
 		};
 		goto fill;
@@ -854,6 +840,22 @@ mlx4_flow_toggle(struct priv *priv,
 			mlx4_drop_put(priv->drop);
 		return 0;
 	}
+	assert(flow->ibv_attr);
+	if (!flow->internal &&
+	    !priv->isolated &&
+	    flow->ibv_attr->priority == MLX4_FLOW_PRIORITY_LAST) {
+		if (flow->ibv_flow) {
+			claim_zero(ibv_destroy_flow(flow->ibv_flow));
+			flow->ibv_flow = NULL;
+			if (flow->drop)
+				mlx4_drop_put(priv->drop);
+		}
+		err = EACCES;
+		msg = ("priority level "
+		       MLX4_STR_EXPAND(MLX4_FLOW_PRIORITY_LAST)
+		       " is reserved when not in isolated mode");
+		goto error;
+	}
 	if (flow->queue) {
 		struct rxq *rxq = NULL;
 
@@ -883,7 +885,6 @@ mlx4_flow_toggle(struct priv *priv,
 		qp = priv->drop->qp;
 	}
 	assert(qp);
-	assert(flow->ibv_attr);
 	if (flow->ibv_flow)
 		return 0;
 	flow->ibv_flow = ibv_create_flow(qp, flow->ibv_attr);
@@ -1028,6 +1029,7 @@ static int
 mlx4_flow_internal(struct priv *priv, struct rte_flow_error *error)
 {
 	struct rte_flow_attr attr = {
+		.priority = MLX4_FLOW_PRIORITY_LAST,
 		.ingress = 1,
 	};
 	struct rte_flow_item pattern[] = {
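
For illustration only, and not part of the patch: a minimal sketch of what
this change enables from an application's point of view, namely creating a
user-defined flow rule with an explicit priority level through the public
rte_flow API without enabling isolated mode. The helper name, port and queue
indices, and match pattern are made up; only the last priority level stays
reserved for the PMD's internal rules.

/* Hypothetical usage sketch; identifiers below are illustrative only. */
#include <rte_flow.h>

static struct rte_flow *
create_prioritized_rule(uint16_t port_id, uint16_t queue_id)
{
	struct rte_flow_error error;
	struct rte_flow_attr attr = {
		/* Any level except the last (reserved) one may now be used
		 * outside isolated mode; the PMD passes it through to the
		 * Verbs flow attribute instead of overriding it per layer. */
		.priority = 1,
		.ingress = 1,
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_UDP },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = queue_id };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	/* Overlapping rules at the same priority level remain undefined
	 * territory, as noted in the commit message above. */
	return rte_flow_create(port_id, &attr, pattern, actions, &error);
}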