From patchwork Thu Oct 12 12:19:28 2017
X-Patchwork-Submitter: Adrien Mazarguil
X-Patchwork-Id: 30265
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Adrien Mazarguil
To: Ferruh Yigit
Cc: Nelio Laranjeiro, dev@dpdk.org
Date: Thu, 12 Oct 2017 14:19:28 +0200
Message-Id: <8a7315445e35f1d72748da692a2d124a5d9e7e3a.1507809961.git.adrien.mazarguil@6wind.com>
Subject: [dpdk-dev] [PATCH v2 14/29] net/mlx4: generalize flow rule priority support

Since both internal and user-defined flow rules are handled by a common
implementation, flow rule priority overlaps are easier to detect. No need
to restrict their use to isolated mode only.

With this patch, only the lowest priority level remains inaccessible to
users outside isolated mode.
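As an illustration of what this enables on the application side, a minimal
sketch of a user-defined rule created at an explicit priority level outside
isolated mode through the public rte_flow API; the helper name, queue index
and chosen priority level are arbitrary, any level strictly below
MLX4_FLOW_PRIORITY_LAST would do:

#include <rte_flow.h>

/* Match all Ethernet frames at priority 2 and send them to queue 0.
 * Outside isolated mode, only MLX4_FLOW_PRIORITY_LAST (the lowest level)
 * remains reserved for the PMD's internal rules.
 */
static struct rte_flow *
create_prioritized_rule(uint16_t port_id, struct rte_flow_error *err)
{
	struct rte_flow_attr attr = {
		.priority = 2,
		.ingress = 1,
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 0 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}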
Also, the PMD no longer automatically assigns a fixed priority level to
user-defined flow rules, which means collisions between overlapping rules
matching a different number of protocol layers at a given priority level
won't be avoided anymore (e.g. "eth" vs. "eth / ipv4 / udp"; a sketch
after the patch illustrates this). As a reminder, the outcome of
overlapping rules for a given priority level was, and still is, undefined
territory according to API documentation.

Signed-off-by: Adrien Mazarguil
Acked-by: Nelio Laranjeiro
---
 drivers/net/mlx4/mlx4_flow.c | 34 ++++++++++++++++++----------------
 1 file changed, 18 insertions(+), 16 deletions(-)

diff --git a/drivers/net/mlx4/mlx4_flow.c b/drivers/net/mlx4/mlx4_flow.c
index fb38179..e1290a8 100644
--- a/drivers/net/mlx4/mlx4_flow.c
+++ b/drivers/net/mlx4/mlx4_flow.c
@@ -155,7 +155,6 @@ mlx4_flow_create_eth(const struct rte_flow_item *item,
 	unsigned int i;
 
 	++flow->ibv_attr->num_of_specs;
-	flow->ibv_attr->priority = 2;
 	eth = (void *)((uintptr_t)flow->ibv_attr + flow->ibv_attr_size);
 	*eth = (struct ibv_flow_spec_eth) {
 		.type = IBV_FLOW_SPEC_ETH,
@@ -232,7 +231,6 @@ mlx4_flow_create_ipv4(const struct rte_flow_item *item,
 	unsigned int ipv4_size = sizeof(struct ibv_flow_spec_ipv4);
 
 	++flow->ibv_attr->num_of_specs;
-	flow->ibv_attr->priority = 1;
 	ipv4 = (void *)((uintptr_t)flow->ibv_attr + flow->ibv_attr_size);
 	*ipv4 = (struct ibv_flow_spec_ipv4) {
 		.type = IBV_FLOW_SPEC_IPV4,
@@ -277,7 +275,6 @@ mlx4_flow_create_udp(const struct rte_flow_item *item,
 	unsigned int udp_size = sizeof(struct ibv_flow_spec_tcp_udp);
 
 	++flow->ibv_attr->num_of_specs;
-	flow->ibv_attr->priority = 0;
 	udp = (void *)((uintptr_t)flow->ibv_attr + flow->ibv_attr_size);
 	*udp = (struct ibv_flow_spec_tcp_udp) {
 		.type = IBV_FLOW_SPEC_UDP,
@@ -318,7 +315,6 @@ mlx4_flow_create_tcp(const struct rte_flow_item *item,
 	unsigned int tcp_size = sizeof(struct ibv_flow_spec_tcp_udp);
 
 	++flow->ibv_attr->num_of_specs;
-	flow->ibv_attr->priority = 0;
 	tcp = (void *)((uintptr_t)flow->ibv_attr + flow->ibv_attr_size);
 	*tcp = (struct ibv_flow_spec_tcp_udp) {
 		.type = IBV_FLOW_SPEC_TCP,
@@ -581,19 +577,11 @@ mlx4_flow_prepare(struct priv *priv,
 	const struct mlx4_flow_proc_item *proc;
 	struct rte_flow temp = { .ibv_attr_size = sizeof(*temp.ibv_attr) };
 	struct rte_flow *flow = &temp;
-	uint32_t priority_override = 0;
 
 	if (attr->group)
 		return rte_flow_error_set
 			(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ATTR_GROUP,
 			 NULL, "groups are not supported");
-	if (priv->isolated)
-		priority_override = attr->priority;
-	else if (attr->priority)
-		return rte_flow_error_set
-			(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
-			 NULL,
-			 "priorities are not supported outside isolated mode");
 	if (attr->priority > MLX4_FLOW_PRIORITY_LAST)
 		return rte_flow_error_set
 			(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
@@ -659,9 +647,6 @@ mlx4_flow_prepare(struct priv *priv,
 		}
 		flow->ibv_attr_size += proc->dst_sz;
 	}
-	/* Use specified priority level when in isolated mode. */
-	if (priv->isolated && flow != &temp)
-		flow->ibv_attr->priority = priority_override;
 	/* Go over actions list. */
 	for (action = actions; action->type; ++action) {
 		switch (action->type) {
@@ -718,6 +703,7 @@ mlx4_flow_prepare(struct priv *priv,
 		*flow->ibv_attr = (struct ibv_flow_attr){
 			.type = IBV_FLOW_ATTR_NORMAL,
 			.size = sizeof(*flow->ibv_attr),
+			.priority = attr->priority,
 			.port = priv->port,
 		};
 		goto fill;
@@ -854,6 +840,22 @@ mlx4_flow_toggle(struct priv *priv,
 			mlx4_drop_put(priv->drop);
 		return 0;
 	}
+	assert(flow->ibv_attr);
+	if (!flow->internal &&
+	    !priv->isolated &&
+	    flow->ibv_attr->priority == MLX4_FLOW_PRIORITY_LAST) {
+		if (flow->ibv_flow) {
+			claim_zero(ibv_destroy_flow(flow->ibv_flow));
+			flow->ibv_flow = NULL;
+			if (flow->drop)
+				mlx4_drop_put(priv->drop);
+		}
+		err = EACCES;
+		msg = ("priority level "
+		       MLX4_STR_EXPAND(MLX4_FLOW_PRIORITY_LAST)
+		       " is reserved when not in isolated mode");
+		goto error;
+	}
 	if (flow->queue) {
 		struct rxq *rxq = NULL;
 
@@ -883,7 +885,6 @@ mlx4_flow_toggle(struct priv *priv,
 		qp = priv->drop->qp;
 	}
 	assert(qp);
-	assert(flow->ibv_attr);
 	if (flow->ibv_flow)
 		return 0;
 	flow->ibv_flow = ibv_create_flow(qp, flow->ibv_attr);
@@ -1028,6 +1029,7 @@ static int
 mlx4_flow_internal(struct priv *priv, struct rte_flow_error *error)
 {
 	struct rte_flow_attr attr = {
+		.priority = MLX4_FLOW_PRIORITY_LAST,
 		.ingress = 1,
 	};
 	struct rte_flow_item pattern[] = {
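To make the overlap caveat from the commit log concrete, a hedged sketch
(helper name, priority level and queue indices are arbitrary): two rules
installed at the same priority level, one matching any Ethernet frame and
one matching eth / ipv4 / udp. Since the PMD no longer spreads such rules
across per-layer priority levels, which of the two a UDP-over-IPv4 packet
hits is undefined, as the rte_flow API documentation already states for
overlapping rules at a given priority:

#include <rte_flow.h>

/* Two overlapping rules at the same priority level; a UDP-over-IPv4
 * packet may match either one. Returned rule handles are ignored here
 * for brevity.
 */
static void
create_overlapping_rules(uint16_t port_id, struct rte_flow_error *err)
{
	struct rte_flow_attr attr = { .priority = 1, .ingress = 1 };
	struct rte_flow_item eth_only[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_item eth_ipv4_udp[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_UDP },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue q0 = { .index = 0 };
	struct rte_flow_action_queue q1 = { .index = 1 };
	struct rte_flow_action to_q0[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &q0 },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_action to_q1[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &q1 },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	rte_flow_create(port_id, &attr, eth_only, to_q0, err);
	rte_flow_create(port_id, &attr, eth_ipv4_udp, to_q1, err);
}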