From patchwork Mon Oct 23 14:49:56 2017
X-Patchwork-Submitter: Nélio Laranjeiro
X-Patchwork-Id: 30722
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Nelio Laranjeiro
To: dev@dpdk.org
Cc: Yongseok Koh, Adrien Mazarguil
Date: Mon, 23 Oct 2017 16:49:56 +0200
X-Mailer: git-send-email 2.11.0
Subject: [dpdk-dev] [PATCH v2 6/7] net/mlx5: fix reception when VLAN is added
List-Id: DPDK patches and discussions
Sender: "dev"

When a VLAN filter is enabled on the Rx side, only packets matching this
VLAN are expected; this also includes the broadcast and all multicast
packets.
Fixes: 272733b5ebfd ("net/mlx5: use flow to enable unicast traffic")
Fixes: 6a6b6828fe6a ("net/mlx5: use flow to enable all multi mode")

Signed-off-by: Nelio Laranjeiro
---
 drivers/net/mlx5/mlx5_trigger.c | 125 +++++++++++++++++++++-------------------
 1 file changed, 66 insertions(+), 59 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 29167badd..9b62c0e6a 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -251,6 +251,29 @@ mlx5_dev_stop(struct rte_eth_dev *dev)
 int
 priv_dev_traffic_enable(struct priv *priv, struct rte_eth_dev *dev)
 {
+	struct rte_flow_item_eth bcast = {
+		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	};
+	struct rte_flow_item_eth ipv6_multi_spec = {
+		.dst.addr_bytes = "\x33\x33\x00\x00\x00\x00",
+	};
+	struct rte_flow_item_eth ipv6_multi_mask = {
+		.dst.addr_bytes = "\xff\xff\x00\x00\x00\x00",
+	};
+	struct rte_flow_item_eth unicast = {
+		.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	};
+	struct rte_flow_item_eth unicast_mask = {
+		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	};
+	const unsigned int vlan_filter_n = priv->vlan_filter_n;
+	const struct ether_addr cmp = {
+		.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	};
+	unsigned int i;
+	unsigned int j;
+	int ret;
+
 	if (priv->isolated)
 		return 0;
 	if (dev->data->promiscuous) {
@@ -261,75 +284,59 @@ priv_dev_traffic_enable(struct priv *priv, struct rte_eth_dev *dev)
 		};

 		claim_zero(mlx5_ctrl_flow(dev, &promisc, &promisc));
-	} else if (dev->data->all_multicast) {
+		return 0;
+	}
+	if (dev->data->all_multicast) {
 		struct rte_flow_item_eth multicast = {
 			.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
-			.src.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+			.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
 			.type = 0,
 		};

 		claim_zero(mlx5_ctrl_flow(dev, &multicast, &multicast));
-	} else {
-		struct rte_flow_item_eth bcast = {
-			.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-		};
-		struct rte_flow_item_eth ipv6_multi_spec = {
-			.dst.addr_bytes = "\x33\x33\x00\x00\x00\x00",
-		};
-		struct rte_flow_item_eth ipv6_multi_mask = {
-			.dst.addr_bytes = "\xff\xff\x00\x00\x00\x00",
-		};
-		struct rte_flow_item_eth unicast = {
-			.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-		};
-		struct rte_flow_item_eth unicast_mask = {
-			.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-		};
-		const unsigned int vlan_filter_n = priv->vlan_filter_n;
-		const struct ether_addr cmp = {
-			.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-		};
-		unsigned int i;
-		unsigned int j;
-		unsigned int unicast_flow = 0;
-		int ret;
-
-		for (i = 0; i != MLX5_MAX_MAC_ADDRESSES; ++i) {
-			struct ether_addr *mac = &dev->data->mac_addrs[i];
+	}
+	for (i = 0; i != MLX5_MAX_MAC_ADDRESSES; ++i) {
+		struct ether_addr *mac = &dev->data->mac_addrs[i];

-			if (!memcmp(mac, &cmp, sizeof(*mac)))
-				continue;
-			memcpy(&unicast.dst.addr_bytes,
-			       mac->addr_bytes,
-			       ETHER_ADDR_LEN);
-			for (j = 0; j != vlan_filter_n; ++j) {
-				uint16_t vlan = priv->vlan_filter[j];
+		if (!memcmp(mac, &cmp, sizeof(*mac)))
+			continue;
+		memcpy(&unicast.dst.addr_bytes,
+		       mac->addr_bytes,
+		       ETHER_ADDR_LEN);
+		for (j = 0; j != vlan_filter_n; ++j) {
+			uint16_t vlan = priv->vlan_filter[j];

-				struct rte_flow_item_vlan vlan_spec = {
-					.tci = rte_cpu_to_be_16(vlan),
-				};
-				struct rte_flow_item_vlan vlan_mask = {
-					.tci = 0xffff,
-				};
+			struct rte_flow_item_vlan vlan_spec = {
+				.tci = rte_cpu_to_be_16(vlan),
+			};
+			struct rte_flow_item_vlan vlan_mask = {
+				.tci = 0xffff,
+			};

-				ret = mlx5_ctrl_flow_vlan(dev, &unicast,
-							  &unicast_mask,
-							  &vlan_spec,
-							  &vlan_mask);
-				if (ret)
-					goto error;
-				unicast_flow = 1;
-			}
-			if (!vlan_filter_n) {
-				ret = mlx5_ctrl_flow(dev, &unicast,
-						     &unicast_mask);
-				if (ret)
-					goto error;
-				unicast_flow = 1;
-			}
+			ret = mlx5_ctrl_flow_vlan(dev, &unicast,
+						  &unicast_mask,
+						  &vlan_spec,
+						  &vlan_mask);
+			if (ret)
+				goto error;
+			ret = mlx5_ctrl_flow_vlan(dev, &bcast, &bcast,
+						  &vlan_spec, &vlan_mask);
+			if (ret)
+				goto error;
+			ret = mlx5_ctrl_flow_vlan(dev, &ipv6_multi_spec,
+						  &ipv6_multi_mask,
+						  &vlan_spec, &vlan_mask);
+			if (ret)
+				goto error;
 		}
-		if (!unicast_flow)
-			return 0;
+		if (!vlan_filter_n) {
+			ret = mlx5_ctrl_flow(dev, &unicast,
+					     &unicast_mask);
+			if (ret)
+				goto error;
+		}
+	}
+	if (!dev->data->all_multicast && !vlan_filter_n) {
 		ret = mlx5_ctrl_flow(dev, &bcast, &bcast);
 		if (ret)
 			goto error;
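For context, the control-flow installed by the patched function can be
summarized with a minimal sketch. The struct and function names below are
hypothetical stand-ins, not the real DPDK/mlx5 API: the sketch only counts
which control flow rules would be installed for a given port configuration,
mirroring the branch structure of priv_dev_traffic_enable() after this patch.

```c
/* Hypothetical model of the patched traffic-enable logic: counts the
 * control flow rules that would be installed. Illustrative only; not
 * the real DPDK/mlx5 API. */
struct port_cfg {
	int promiscuous;
	int all_multicast;
	unsigned int vlan_filter_n;	/* number of VLAN filters */
	unsigned int mac_n;		/* number of configured MACs */
};

struct flow_rules {
	unsigned int promisc;		/* catch-all rule */
	unsigned int multicast;		/* all-multicast rule */
	unsigned int unicast;		/* per-MAC (x per-VLAN) rules */
	unsigned int bcast_ipv6mc;	/* broadcast + IPv6 multicast rules */
};

static struct flow_rules
traffic_enable(const struct port_cfg *cfg)
{
	struct flow_rules r = {0, 0, 0, 0};

	if (cfg->promiscuous) {
		/* Promiscuous short-circuits: one rule matches everything. */
		r.promisc = 1;
		return r;
	}
	if (cfg->all_multicast)
		r.multicast = 1;
	if (cfg->vlan_filter_n) {
		/* Per MAC and per VLAN: one unicast rule, plus the
		 * broadcast and IPv6-multicast rules this patch adds so
		 * that those packets are still received on the VLAN. */
		r.unicast = cfg->mac_n * cfg->vlan_filter_n;
		r.bcast_ipv6mc = cfg->mac_n * cfg->vlan_filter_n * 2;
	} else {
		r.unicast = cfg->mac_n;
		if (!cfg->all_multicast)
			/* No VLAN filter: unfiltered broadcast and
			 * IPv6-multicast rules, installed once. */
			r.bcast_ipv6mc = 2;
	}
	return r;
}
```

For example, with one MAC address and two VLAN filters this sketch predicts
two unicast rules plus four broadcast/IPv6-multicast rules, matching the
per-MAC, per-VLAN loop in the diff above.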