From patchwork Thu Oct 12 12:19:32 2017
X-Patchwork-Submitter: Adrien Mazarguil
X-Patchwork-Id: 30269
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Adrien Mazarguil
To: Ferruh Yigit
Cc: Nelio Laranjeiro, dev@dpdk.org
Date: Thu, 12 Oct 2017 14:19:32 +0200
Subject: [dpdk-dev] [PATCH v2 18/29] net/mlx4: add VLAN filter configuration support

This commit brings back VLAN filter configuration support without any
artificial limitation on the number of VLANs that can be configured
simultaneously (previously 127).

Since it no longer relies on fixed per-queue arrays to store potential
Verbs flow handles, this version also wastes far less memory (previously
128 * 127 * pointer size, i.e. 130 kiB per Rx queue, only one of which,
the RSS parent queue, actually had any use for this room). The number of
internal flow rules generated still depends on the number of configured
MAC addresses multiplied by that of configured VLAN filters, though.
Signed-off-by: Adrien Mazarguil
Acked-by: Nelio Laranjeiro
---
 doc/guides/nics/features/mlx4.ini |  1 +
 drivers/net/mlx4/mlx4.c           |  1 +
 drivers/net/mlx4/mlx4.h           |  1 +
 drivers/net/mlx4/mlx4_ethdev.c    | 42 +++++++++++++++++++++
 drivers/net/mlx4/mlx4_flow.c      | 67 ++++++++++++++++++++++++++++++++++
 5 files changed, 112 insertions(+)

diff --git a/doc/guides/nics/features/mlx4.ini b/doc/guides/nics/features/mlx4.ini
index d17774f..bfe0eb1 100644
--- a/doc/guides/nics/features/mlx4.ini
+++ b/doc/guides/nics/features/mlx4.ini
@@ -14,6 +14,7 @@ MTU update = Y
 Jumbo frame = Y
 Unicast MAC filter = Y
 SR-IOV = Y
+VLAN filter = Y
 Basic stats = Y
 Stats per queue = Y
 Other kdrv = Y
diff --git a/drivers/net/mlx4/mlx4.c b/drivers/net/mlx4/mlx4.c
index 99c87ff..e25e958 100644
--- a/drivers/net/mlx4/mlx4.c
+++ b/drivers/net/mlx4/mlx4.c
@@ -227,6 +227,7 @@ static const struct eth_dev_ops mlx4_dev_ops = {
 	.stats_get = mlx4_stats_get,
 	.stats_reset = mlx4_stats_reset,
 	.dev_infos_get = mlx4_dev_infos_get,
+	.vlan_filter_set = mlx4_vlan_filter_set,
 	.rx_queue_setup = mlx4_rx_queue_setup,
 	.tx_queue_setup = mlx4_tx_queue_setup,
 	.rx_queue_release = mlx4_rx_queue_release,
diff --git a/drivers/net/mlx4/mlx4.h b/drivers/net/mlx4/mlx4.h
index 15ecd95..cc403ea 100644
--- a/drivers/net/mlx4/mlx4.h
+++ b/drivers/net/mlx4/mlx4.h
@@ -128,6 +128,7 @@ void mlx4_mac_addr_remove(struct rte_eth_dev *dev, uint32_t index);
 int mlx4_mac_addr_add(struct rte_eth_dev *dev, struct ether_addr *mac_addr,
 		      uint32_t index, uint32_t vmdq);
 void mlx4_mac_addr_set(struct rte_eth_dev *dev, struct ether_addr *mac_addr);
+int mlx4_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on);
 int mlx4_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats);
 void mlx4_stats_reset(struct rte_eth_dev *dev);
 void mlx4_dev_infos_get(struct rte_eth_dev *dev,
diff --git a/drivers/net/mlx4/mlx4_ethdev.c b/drivers/net/mlx4/mlx4_ethdev.c
index 52924df..7721f13 100644
--- a/drivers/net/mlx4/mlx4_ethdev.c
+++ b/drivers/net/mlx4/mlx4_ethdev.c
@@ -588,6 +588,48 @@ mlx4_mac_addr_add(struct rte_eth_dev *dev, struct ether_addr *mac_addr,
 }
 
 /**
+ * DPDK callback to configure a VLAN filter.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param vlan_id
+ *   VLAN ID to filter.
+ * @param on
+ *   Toggle filter.
+ *
+ * @return
+ *   0 on success, negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx4_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+{
+	struct priv *priv = dev->data->dev_private;
+	struct rte_flow_error error;
+	unsigned int vidx = vlan_id / 64;
+	unsigned int vbit = vlan_id % 64;
+	uint64_t *v;
+	int ret;
+
+	if (vidx >= RTE_DIM(dev->data->vlan_filter_conf.ids)) {
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+	v = &dev->data->vlan_filter_conf.ids[vidx];
+	*v &= ~(UINT64_C(1) << vbit);
+	*v |= (uint64_t)!!on << vbit;
+	ret = mlx4_flow_sync(priv, &error);
+	if (!ret)
+		return 0;
+	ERROR("failed to synchronize flow rules after %s VLAN filter on ID %u"
+	      " (code %d, \"%s\"), "
+	      " flow error type %d, cause %p, message: %s",
+	      on ? "enabling" : "disabling", vlan_id,
+	      rte_errno, strerror(rte_errno), error.type, error.cause,
+	      error.message ? error.message : "(unspecified)");
+	return ret;
+}
+
+/**
  * DPDK callback to set the primary MAC address.
  *
  * @param dev
diff --git a/drivers/net/mlx4/mlx4_flow.c b/drivers/net/mlx4/mlx4_flow.c
index 14d2ed3..377b48b 100644
--- a/drivers/net/mlx4/mlx4_flow.c
+++ b/drivers/net/mlx4/mlx4_flow.c
@@ -1009,11 +1009,36 @@ mlx4_flow_flush(struct rte_eth_dev *dev,
 }
 
 /**
+ * Helper function to determine the next configured VLAN filter.
+ *
+ * @param priv
+ *   Pointer to private structure.
+ * @param vlan
+ *   VLAN ID to use as a starting point.
+ *
+ * @return
+ *   Next configured VLAN ID or a high value (>= 4096) if there is none.
+ */
+static uint16_t
+mlx4_flow_internal_next_vlan(struct priv *priv, uint16_t vlan)
+{
+	while (vlan < 4096) {
+		if (priv->dev->data->vlan_filter_conf.ids[vlan / 64] &
+		    (UINT64_C(1) << (vlan % 64)))
+			return vlan;
+		++vlan;
+	}
+	return vlan;
+}
+
+/**
  * Generate internal flow rules.
  *
  * - MAC flow rules are generated from @p dev->data->mac_addrs
 *   (@p priv->mac array).
 * - An additional flow rule for Ethernet broadcasts is also generated.
+ * - All these are per-VLAN if @p dev->data->dev_conf.rxmode.hw_vlan_filter
+ *   is enabled and VLAN filters are configured.
  *
  * @param priv
  *   Pointer to private structure.
@@ -1034,6 +1059,10 @@ mlx4_flow_internal(struct priv *priv, struct rte_flow_error *error)
 	const struct rte_flow_item_eth eth_mask = {
 		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 	};
+	struct rte_flow_item_vlan vlan_spec;
+	const struct rte_flow_item_vlan vlan_mask = {
+		.tci = RTE_BE16(0x0fff),
+	};
 	struct rte_flow_item pattern[] = {
 		{
 			.type = MLX4_FLOW_ITEM_TYPE_INTERNAL,
@@ -1044,6 +1073,10 @@ mlx4_flow_internal(struct priv *priv, struct rte_flow_error *error)
 			.mask = &eth_mask,
 		},
 		{
+			/* Replaced with VLAN if filtering is enabled. */
+			.type = RTE_FLOW_ITEM_TYPE_END,
+		},
+		{
 			.type = RTE_FLOW_ITEM_TYPE_END,
 		},
 	};
@@ -1059,10 +1092,33 @@ mlx4_flow_internal(struct priv *priv, struct rte_flow_error *error)
 		},
 	};
 	struct ether_addr *rule_mac = &eth_spec.dst;
+	rte_be16_t *rule_vlan =
+		priv->dev->data->dev_conf.rxmode.hw_vlan_filter ?
+		&vlan_spec.tci :
+		NULL;
+	uint16_t vlan = 0;
 	struct rte_flow *flow;
 	unsigned int i;
 	int err = 0;
 
+	/*
+	 * Set up VLAN item if filtering is enabled and at least one VLAN
+	 * filter is configured.
+	 */
+	if (rule_vlan) {
+		vlan = mlx4_flow_internal_next_vlan(priv, 0);
+		if (vlan < 4096) {
+			pattern[2] = (struct rte_flow_item){
+				.type = RTE_FLOW_ITEM_TYPE_VLAN,
+				.spec = &vlan_spec,
+				.mask = &vlan_mask,
+			};
+next_vlan:
+			*rule_vlan = rte_cpu_to_be_16(vlan);
+		} else {
+			rule_vlan = NULL;
+		}
+	}
 	for (i = 0; i != RTE_DIM(priv->mac) + 1; ++i) {
 		const struct ether_addr *mac;
 
@@ -1087,6 +1143,12 @@ mlx4_flow_internal(struct priv *priv, struct rte_flow_error *error)
 			assert(flow->ibv_attr->type == IBV_FLOW_ATTR_NORMAL);
 			assert(flow->ibv_attr->num_of_specs == 1);
 			assert(eth->type == IBV_FLOW_SPEC_ETH);
+			if (rule_vlan &&
+			    (eth->val.vlan_tag != *rule_vlan ||
+			     eth->mask.vlan_tag != RTE_BE16(0x0fff)))
+				continue;
+			if (!rule_vlan && eth->mask.vlan_tag)
+				continue;
 			for (j = 0; j != sizeof(mac->addr_bytes); ++j)
 				if (eth->val.dst_mac[j] != mac->addr_bytes[j] ||
 				    eth->mask.dst_mac[j] != UINT8_C(0xff) ||
@@ -1109,6 +1171,11 @@ mlx4_flow_internal(struct priv *priv, struct rte_flow_error *error)
 		flow->select = 1;
 		flow->mac = 1;
 	}
+	if (!err && rule_vlan) {
+		vlan = mlx4_flow_internal_next_vlan(priv, vlan + 1);
+		if (vlan < 4096)
+			goto next_vlan;
+	}
 	/* Clear selection and clean up stale MAC flow rules. */
 	flow = LIST_FIRST(&priv->flows);
 	while (flow && flow->internal) {