From patchwork Wed Jun 27 18:08:10 2018
X-Patchwork-Submitter: Adrien Mazarguil
X-Patchwork-Id: 41722
Date: Wed, 27 Jun 2018 20:08:10 +0200
From: Adrien Mazarguil
To: Shahaf Shuler
Cc: Nelio Laranjeiro, Yongseok Koh, dev@dpdk.org
Message-ID: <20180627173355.4718-2-adrien.mazarguil@6wind.com>
In-Reply-To: <20180627173355.4718-1-adrien.mazarguil@6wind.com>
Subject: [dpdk-dev] [PATCH 1/6] net/mlx5: lay groundwork for switch offloads

With mlx5, unlike normal flow rules implemented through Verbs for traffic
emitted and received by the application, those targeting different logical
ports of the device (VF representors for instance) are offloaded at the
switch level and must be configured through Netlink (TC interface).
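In flow API terms, these are the rules carrying the transfer attribute. As
a minimal sketch, using standard rte_flow fields only (illustration, not
code from this patch; the series keys off this attribute, as patch 2 notes):

#include <rte_flow.h>

/* A rule with the transfer bit set is meant to be applied at the switch
 * level rather than on the port's own Rx path; this series routes such
 * rules to the Netlink/TC back-end introduced below. */
static const struct rte_flow_attr switch_attr = {
	.ingress = 1,
	.transfer = 1,
};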
This patch adds preliminary support to manage such flow rules through the flow API (rte_flow). Instead of rewriting tons of Netlink helpers and as previously suggested by Stephen [1], this patch introduces a new dependency to libmnl [2] (LGPL-2.1) when compiling mlx5. [1] https://mails.dpdk.org/archives/dev/2018-March/092676.html [2] https://netfilter.org/projects/libmnl/ Signed-off-by: Adrien Mazarguil --- drivers/net/mlx5/Makefile | 2 + drivers/net/mlx5/mlx5.c | 32 ++++++++ drivers/net/mlx5/mlx5.h | 10 +++ drivers/net/mlx5/mlx5_nl_flow.c | 139 +++++++++++++++++++++++++++++++++++ mk/rte.app.mk | 2 +- 5 files changed, 184 insertions(+), 1 deletion(-) diff --git a/drivers/net/mlx5/Makefile b/drivers/net/mlx5/Makefile index 8a5229e61..3325eed06 100644 --- a/drivers/net/mlx5/Makefile +++ b/drivers/net/mlx5/Makefile @@ -33,6 +33,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_MLX5_PMD) += mlx5_mr.c SRCS-$(CONFIG_RTE_LIBRTE_MLX5_PMD) += mlx5_flow.c SRCS-$(CONFIG_RTE_LIBRTE_MLX5_PMD) += mlx5_socket.c SRCS-$(CONFIG_RTE_LIBRTE_MLX5_PMD) += mlx5_nl.c +SRCS-$(CONFIG_RTE_LIBRTE_MLX5_PMD) += mlx5_nl_flow.c ifeq ($(CONFIG_RTE_LIBRTE_MLX5_DLOPEN_DEPS),y) INSTALL-$(CONFIG_RTE_LIBRTE_MLX5_PMD)-lib += $(LIB_GLUE) @@ -56,6 +57,7 @@ LDLIBS += -ldl else LDLIBS += -libverbs -lmlx5 endif +LDLIBS += -lmnl LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring LDLIBS += -lrte_ethdev -lrte_net -lrte_kvargs LDLIBS += -lrte_bus_pci diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 665a3c31f..d9b9097b1 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -279,6 +279,8 @@ mlx5_dev_close(struct rte_eth_dev *dev) mlx5_nl_mac_addr_flush(dev); if (priv->nl_socket >= 0) close(priv->nl_socket); + if (priv->mnl_socket) + mlx5_nl_flow_socket_destroy(priv->mnl_socket); ret = mlx5_hrxq_ibv_verify(dev); if (ret) DRV_LOG(WARNING, "port %u some hash Rx queue still remain", @@ -1077,6 +1079,34 @@ mlx5_dev_spawn_one(struct rte_device *dpdk_dev, priv->nl_socket = -1; mlx5_nl_mac_addr_sync(eth_dev); } + priv->mnl_socket = mlx5_nl_flow_socket_create(); + if (!priv->mnl_socket) { + err = -rte_errno; + DRV_LOG(WARNING, + "flow rules relying on switch offloads will not be" + " supported: cannot open libmnl socket: %s", + strerror(rte_errno)); + } else { + struct rte_flow_error error; + unsigned int ifindex = mlx5_ifindex(eth_dev); + + if (!ifindex) { + err = -rte_errno; + error.message = + "cannot retrieve network interface index"; + } else { + err = mlx5_nl_flow_init(priv->mnl_socket, ifindex, + &error); + } + if (err) { + DRV_LOG(WARNING, + "flow rules relying on switch offloads will" + " not be supported: %s: %s", + error.message, strerror(rte_errno)); + mlx5_nl_flow_socket_destroy(priv->mnl_socket); + priv->mnl_socket = NULL; + } + } TAILQ_INIT(&priv->flows); TAILQ_INIT(&priv->ctrl_flows); /* Hint libmlx5 to use PMD allocator for data plane resources */ @@ -1127,6 +1157,8 @@ mlx5_dev_spawn_one(struct rte_device *dpdk_dev, if (priv) { unsigned int i; + if (priv->mnl_socket) + mlx5_nl_flow_socket_destroy(priv->mnl_socket); i = mlx5_domain_to_port_id(priv->domain_id, NULL, 0); if (i == 1) claim_zero(rte_eth_switch_domain_free(priv->domain_id)); diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 1d8e156c8..390249adb 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -148,6 +148,8 @@ struct mlx5_drop { struct mlx5_rxq_ibv *rxq; /* Verbs Rx queue. */ }; +struct mnl_socket; + struct priv { LIST_ENTRY(priv) mem_event_cb; /* Called by memory event callback. 
*/ struct rte_eth_dev_data *dev_data; /* Pointer to device data. */ @@ -207,6 +209,7 @@ struct priv { /* Context for Verbs allocator. */ int nl_socket; /* Netlink socket. */ uint32_t nl_sn; /* Netlink message sequence number. */ + struct mnl_socket *mnl_socket; /* Libmnl socket. */ }; #define PORT_ID(priv) ((priv)->dev_data->port_id) @@ -369,4 +372,11 @@ void mlx5_nl_mac_addr_flush(struct rte_eth_dev *dev); int mlx5_nl_promisc(struct rte_eth_dev *dev, int enable); int mlx5_nl_allmulti(struct rte_eth_dev *dev, int enable); +/* mlx5_nl_flow.c */ + +int mlx5_nl_flow_init(struct mnl_socket *nl, unsigned int ifindex, + struct rte_flow_error *error); +struct mnl_socket *mlx5_nl_flow_socket_create(void); +void mlx5_nl_flow_socket_destroy(struct mnl_socket *nl); + #endif /* RTE_PMD_MLX5_H_ */ diff --git a/drivers/net/mlx5/mlx5_nl_flow.c b/drivers/net/mlx5/mlx5_nl_flow.c new file mode 100644 index 000000000..7a8683b03 --- /dev/null +++ b/drivers/net/mlx5/mlx5_nl_flow.c @@ -0,0 +1,139 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2018 6WIND S.A. + * Copyright 2018 Mellanox Technologies, Ltd + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include + +#include "mlx5.h" + +/** + * Send Netlink message with acknowledgment. + * + * @param nl + * Libmnl socket to use. + * @param nlh + * Message to send. This function always raises the NLM_F_ACK flag before + * sending. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +static int +mlx5_nl_flow_nl_ack(struct mnl_socket *nl, struct nlmsghdr *nlh) +{ + alignas(struct nlmsghdr) + uint8_t ans[MNL_SOCKET_BUFFER_SIZE]; + uint32_t seq = random(); + int ret; + + nlh->nlmsg_flags |= NLM_F_ACK; + nlh->nlmsg_seq = seq; + ret = mnl_socket_sendto(nl, nlh, nlh->nlmsg_len); + if (ret != -1) + ret = mnl_socket_recvfrom(nl, ans, sizeof(ans)); + if (ret != -1) + ret = mnl_cb_run + (ans, ret, seq, mnl_socket_get_portid(nl), NULL, NULL); + if (!ret) + return 0; + rte_errno = errno; + return -rte_errno; +} + +/** + * Initialize ingress qdisc of a given network interface. + * + * @param nl + * Libmnl socket of the @p NETLINK_ROUTE kind. + * @param ifindex + * Index of network interface to initialize. + * @param[out] error + * Perform verbose error reporting if not NULL. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +int +mlx5_nl_flow_init(struct mnl_socket *nl, unsigned int ifindex, + struct rte_flow_error *error) +{ + uint8_t buf[MNL_SOCKET_BUFFER_SIZE]; + struct nlmsghdr *nlh; + struct tcmsg *tcm; + + /* Destroy existing ingress qdisc and everything attached to it. */ + nlh = mnl_nlmsg_put_header(buf); + nlh->nlmsg_type = RTM_DELQDISC; + nlh->nlmsg_flags = NLM_F_REQUEST; + tcm = mnl_nlmsg_put_extra_header(nlh, sizeof(*tcm)); + tcm->tcm_family = AF_UNSPEC; + tcm->tcm_ifindex = ifindex; + tcm->tcm_handle = TC_H_MAKE(TC_H_INGRESS, 0); + tcm->tcm_parent = TC_H_INGRESS; + /* Ignore errors when qdisc is already absent. */ + if (mlx5_nl_flow_nl_ack(nl, nlh) && + rte_errno != EINVAL && rte_errno != ENOENT) + return rte_flow_error_set + (error, rte_errno, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "netlink: failed to remove ingress qdisc"); + /* Create fresh ingress qdisc. 
 */
+	nlh = mnl_nlmsg_put_header(buf);
+	nlh->nlmsg_type = RTM_NEWQDISC;
+	nlh->nlmsg_flags = NLM_F_REQUEST | NLM_F_CREATE | NLM_F_EXCL;
+	tcm = mnl_nlmsg_put_extra_header(nlh, sizeof(*tcm));
+	tcm->tcm_family = AF_UNSPEC;
+	tcm->tcm_ifindex = ifindex;
+	tcm->tcm_handle = TC_H_MAKE(TC_H_INGRESS, 0);
+	tcm->tcm_parent = TC_H_INGRESS;
+	mnl_attr_put_strz_check(nlh, sizeof(buf), TCA_KIND, "ingress");
+	if (mlx5_nl_flow_nl_ack(nl, nlh))
+		return rte_flow_error_set
+			(error, rte_errno, RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			 NULL, "netlink: failed to create ingress qdisc");
+	return 0;
+}
+
+/**
+ * Create and configure a libmnl socket for Netlink flow rules.
+ *
+ * @return
+ *   A valid libmnl socket object pointer on success, NULL otherwise and
+ *   rte_errno is set.
+ */
+struct mnl_socket *
+mlx5_nl_flow_socket_create(void)
+{
+	struct mnl_socket *nl = mnl_socket_open(NETLINK_ROUTE);
+
+	if (nl &&
+	    !mnl_socket_setsockopt(nl, NETLINK_CAP_ACK, &(int){ 1 },
+				   sizeof(int)) &&
+	    !mnl_socket_bind(nl, 0, MNL_SOCKET_AUTOPID))
+		return nl;
+	rte_errno = errno;
+	if (nl)
+		mnl_socket_close(nl);
+	return NULL;
+}
+
+/**
+ * Destroy a libmnl socket.
+ */
+void
+mlx5_nl_flow_socket_destroy(struct mnl_socket *nl)
+{
+	mnl_socket_close(nl);
+}
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 7bcf6308d..414f1b967 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -145,7 +145,7 @@ endif
 ifeq ($(CONFIG_RTE_LIBRTE_MLX5_DLOPEN_DEPS),y)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MLX5_PMD) += -lrte_pmd_mlx5 -ldl
 else
-_LDLIBS-$(CONFIG_RTE_LIBRTE_MLX5_PMD) += -lrte_pmd_mlx5 -libverbs -lmlx5
+_LDLIBS-$(CONFIG_RTE_LIBRTE_MLX5_PMD) += -lrte_pmd_mlx5 -libverbs -lmlx5 -lmnl
 endif
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MVPP2_PMD) += -lrte_pmd_mvpp2 -L$(LIBMUSDK_PATH)/lib -lmusdk
 _LDLIBS-$(CONFIG_RTE_LIBRTE_NFP_PMD) += -lrte_pmd_nfp
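Before moving on to the parser in the next patch, a condensed sketch of how
the probe path above (mlx5_dev_spawn_one()) consumes these helpers; the
function names come from this patch, while the wrapper itself is invented
for illustration:

#include <rte_flow.h>
#include "mlx5.h" /* mlx5_nl_flow_*() declarations added by this patch */

/* Hypothetical wrapper: open the libmnl socket and set up the ingress
 * qdisc, returning NULL when switch offloads cannot be supported. */
static struct mnl_socket *
switch_offload_setup(unsigned int ifindex)
{
	struct rte_flow_error error;
	struct mnl_socket *nl = mlx5_nl_flow_socket_create();

	if (!nl)
		return NULL;
	if (!ifindex || mlx5_nl_flow_init(nl, ifindex, &error)) {
		/* Flow rules relying on switch offloads will not be
		 * supported; the PMD keeps working with Verbs flows. */
		mlx5_nl_flow_socket_destroy(nl);
		return NULL;
	}
	return nl;
}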
From patchwork Wed Jun 27 18:08:12 2018
X-Patchwork-Submitter: Adrien Mazarguil
X-Patchwork-Id: 41723
Date: Wed, 27 Jun 2018 20:08:12 +0200
From: Adrien Mazarguil
To: Shahaf Shuler
Cc: Nelio Laranjeiro, Yongseok Koh, dev@dpdk.org
Message-ID: <20180627173355.4718-3-adrien.mazarguil@6wind.com>
In-Reply-To: <20180627173355.4718-1-adrien.mazarguil@6wind.com>
Subject: [dpdk-dev] [PATCH 2/6] net/mlx5: add framework for switch flow rules

Because mlx5 switch flow rules are configured through Netlink (TC
interface) and have little in common with Verbs, this patch adds a
separate parser function to handle them.

- mlx5_nl_flow_transpose() converts a rte_flow rule to its TC equivalent
  and stores the result in a buffer.
- mlx5_nl_flow_brand() gives a unique handle to a flow rule buffer.
- mlx5_nl_flow_create() instantiates a flow rule on the device based on
  such a buffer.
- mlx5_nl_flow_destroy() performs the reverse operation.

These functions are called by the existing implementation when
encountering flow rules which must be offloaded to the switch (currently
relying on the transfer attribute).

Signed-off-by: Adrien Mazarguil
Signed-off-by: Nelio Laranjeiro
Acked-by: Yongseok Koh
---
 drivers/net/mlx5/mlx5.h         |  18 +++
 drivers/net/mlx5/mlx5_flow.c    | 113 ++++++++++++++
 drivers/net/mlx5/mlx5_nl_flow.c | 295 +++++++++++++++++++++++++++++++++++
 3 files changed, 426 insertions(+)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 390249adb..aa16057d6 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -148,6 +148,12 @@ struct mlx5_drop {
 	struct mlx5_rxq_ibv *rxq; /* Verbs Rx queue. */
 };
 
+/** DPDK port to network interface index (ifindex) conversion. */
+struct mlx5_nl_flow_ptoi {
+	uint16_t port_id; /**< DPDK port ID. */
+	unsigned int ifindex; /**< Network interface index.
*/ +}; + struct mnl_socket; struct priv { @@ -374,6 +380,18 @@ int mlx5_nl_allmulti(struct rte_eth_dev *dev, int enable); /* mlx5_nl_flow.c */ +int mlx5_nl_flow_transpose(void *buf, + size_t size, + const struct mlx5_nl_flow_ptoi *ptoi, + const struct rte_flow_attr *attr, + const struct rte_flow_item *pattern, + const struct rte_flow_action *actions, + struct rte_flow_error *error); +void mlx5_nl_flow_brand(void *buf, uint32_t handle); +int mlx5_nl_flow_create(struct mnl_socket *nl, void *buf, + struct rte_flow_error *error); +int mlx5_nl_flow_destroy(struct mnl_socket *nl, void *buf, + struct rte_flow_error *error); int mlx5_nl_flow_init(struct mnl_socket *nl, unsigned int ifindex, struct rte_flow_error *error); struct mnl_socket *mlx5_nl_flow_socket_create(void); diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 9241855be..93b245991 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -4,6 +4,7 @@ */ #include +#include #include #include @@ -271,6 +272,7 @@ struct rte_flow { /**< Store tunnel packet type data to store in Rx queue. */ uint8_t key[40]; /**< RSS hash key. */ uint16_t (*queue)[]; /**< Destination queues to redirect traffic to. */ + void *nl_flow; /**< Netlink flow buffer if relevant. */ }; static const struct rte_flow_ops mlx5_flow_ops = { @@ -2403,6 +2405,106 @@ mlx5_flow_actions(struct rte_eth_dev *dev, } /** + * Validate flow rule and fill flow structure accordingly. + * + * @param dev + * Pointer to Ethernet device. + * @param[out] flow + * Pointer to flow structure. + * @param flow_size + * Size of allocated space for @p flow. + * @param[in] attr + * Flow rule attributes. + * @param[in] pattern + * Pattern specification (list terminated by the END pattern item). + * @param[in] actions + * Associated actions (list terminated by the END action). + * @param[out] error + * Perform verbose error reporting if not NULL. + * + * @return + * A positive value representing the size of the flow object in bytes + * regardless of @p flow_size on success, a negative errno value otherwise + * and rte_errno is set. + */ +static int +mlx5_flow_merge_switch(struct rte_eth_dev *dev, + struct rte_flow *flow, + size_t flow_size, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct priv *priv = dev->data->dev_private; + unsigned int n = mlx5_domain_to_port_id(priv->domain_id, NULL, 0); + uint16_t port_list[!n + n]; + struct mlx5_nl_flow_ptoi ptoi[!n + n + 1]; + size_t off = RTE_ALIGN_CEIL(sizeof(*flow), alignof(max_align_t)); + unsigned int i; + unsigned int own = 0; + int ret; + + /* At least one port is needed when no switch domain is present. */ + if (!n) { + n = 1; + port_list[0] = dev->data->port_id; + } else { + n = mlx5_domain_to_port_id(priv->domain_id, port_list, n); + if (n > RTE_DIM(port_list)) + n = RTE_DIM(port_list); + } + for (i = 0; i != n; ++i) { + struct rte_eth_dev_info dev_info; + + rte_eth_dev_info_get(port_list[i], &dev_info); + if (port_list[i] == dev->data->port_id) + own = i; + ptoi[i].port_id = port_list[i]; + ptoi[i].ifindex = dev_info.if_index; + } + /* Ensure first entry of ptoi[] is the current device. */ + if (own) { + ptoi[n] = ptoi[0]; + ptoi[0] = ptoi[own]; + ptoi[own] = ptoi[n]; + } + /* An entry with zero ifindex terminates ptoi[]. */ + ptoi[n].port_id = 0; + ptoi[n].ifindex = 0; + if (flow_size < off) + flow_size = 0; + ret = mlx5_nl_flow_transpose((uint8_t *)flow + off, + flow_size ? 
flow_size - off : 0, + ptoi, attr, pattern, actions, error); + if (ret < 0) + return ret; + if (flow_size) { + *flow = (struct rte_flow){ + .attributes = *attr, + .nl_flow = (uint8_t *)flow + off, + }; + /* + * Generate a reasonably unique handle based on the address + * of the target buffer. + * + * This is straightforward on 32-bit systems where the flow + * pointer can be used directly. Otherwise, its least + * significant part is taken after shifting it by the + * previous power of two of the pointed buffer size. + */ + if (sizeof(flow) <= 4) + mlx5_nl_flow_brand(flow->nl_flow, (uintptr_t)flow); + else + mlx5_nl_flow_brand + (flow->nl_flow, + (uintptr_t)flow >> + rte_log2_u32(rte_align32prevpow2(flow_size))); + } + return off + ret; +} + +/** * Validate the rule and return a flow structure filled accordingly. * * @param dev @@ -2439,6 +2541,9 @@ mlx5_flow_merge(struct rte_eth_dev *dev, struct rte_flow *flow, int ret; uint32_t i; + if (attr->transfer) + return mlx5_flow_merge_switch(dev, flow, flow_size, + attr, items, actions, error); if (!remain) flow = &local_flow; ret = mlx5_flow_attributes(dev, attr, flow, error); @@ -2554,8 +2659,11 @@ mlx5_flow_validate(struct rte_eth_dev *dev, static void mlx5_flow_fate_remove(struct rte_eth_dev *dev, struct rte_flow *flow) { + struct priv *priv = dev->data->dev_private; struct mlx5_flow_verbs *verbs; + if (flow->nl_flow && priv->mnl_socket) + mlx5_nl_flow_destroy(priv->mnl_socket, flow->nl_flow, NULL); LIST_FOREACH(verbs, &flow->verbs, next) { if (verbs->flow) { claim_zero(mlx5_glue->destroy_flow(verbs->flow)); @@ -2592,6 +2700,7 @@ static int mlx5_flow_fate_apply(struct rte_eth_dev *dev, struct rte_flow *flow, struct rte_flow_error *error) { + struct priv *priv = dev->data->dev_private; struct mlx5_flow_verbs *verbs; int err; @@ -2640,6 +2749,10 @@ mlx5_flow_fate_apply(struct rte_eth_dev *dev, struct rte_flow *flow, goto error; } } + if (flow->nl_flow && + priv->mnl_socket && + mlx5_nl_flow_create(priv->mnl_socket, flow->nl_flow, error)) + goto error; return 0; error: err = rte_errno; /* Save rte_errno before cleanup. */ diff --git a/drivers/net/mlx5/mlx5_nl_flow.c b/drivers/net/mlx5/mlx5_nl_flow.c index 7a8683b03..1fc62fb0a 100644 --- a/drivers/net/mlx5/mlx5_nl_flow.c +++ b/drivers/net/mlx5/mlx5_nl_flow.c @@ -5,7 +5,9 @@ #include #include +#include #include +#include #include #include #include @@ -14,11 +16,248 @@ #include #include +#include #include #include #include "mlx5.h" +/** Parser state definitions for mlx5_nl_flow_trans[]. */ +enum mlx5_nl_flow_trans { + INVALID, + BACK, + ATTR, + PATTERN, + ITEM_VOID, + ACTIONS, + ACTION_VOID, + END, +}; + +#define TRANS(...) (const enum mlx5_nl_flow_trans []){ __VA_ARGS__, INVALID, } + +#define PATTERN_COMMON \ + ITEM_VOID, ACTIONS +#define ACTIONS_COMMON \ + ACTION_VOID, END + +/** Parser state transitions used by mlx5_nl_flow_transpose(). */ +static const enum mlx5_nl_flow_trans *const mlx5_nl_flow_trans[] = { + [INVALID] = NULL, + [BACK] = NULL, + [ATTR] = TRANS(PATTERN), + [PATTERN] = TRANS(PATTERN_COMMON), + [ITEM_VOID] = TRANS(BACK), + [ACTIONS] = TRANS(ACTIONS_COMMON), + [ACTION_VOID] = TRANS(BACK), + [END] = NULL, +}; + +/** + * Transpose flow rule description to rtnetlink message. + * + * This function transposes a flow rule description to a traffic control + * (TC) filter creation message ready to be sent over Netlink. + * + * Target interface is specified as the first entry of the @p ptoi table. 
+ * Subsequent entries enable this function to resolve other DPDK port IDs + * found in the flow rule. + * + * @param[out] buf + * Output message buffer. May be NULL when @p size is 0. + * @param size + * Size of @p buf. Message may be truncated if not large enough. + * @param[in] ptoi + * DPDK port ID to network interface index translation table. This table + * is terminated by an entry with a zero ifindex value. + * @param[in] attr + * Flow rule attributes. + * @param[in] pattern + * Pattern specification. + * @param[in] actions + * Associated actions. + * @param[out] error + * Perform verbose error reporting if not NULL. + * + * @return + * A positive value representing the exact size of the message in bytes + * regardless of the @p size parameter on success, a negative errno value + * otherwise and rte_errno is set. + */ +int +mlx5_nl_flow_transpose(void *buf, + size_t size, + const struct mlx5_nl_flow_ptoi *ptoi, + const struct rte_flow_attr *attr, + const struct rte_flow_item *pattern, + const struct rte_flow_action *actions, + struct rte_flow_error *error) +{ + alignas(struct nlmsghdr) + uint8_t buf_tmp[MNL_SOCKET_BUFFER_SIZE]; + const struct rte_flow_item *item; + const struct rte_flow_action *action; + unsigned int n; + struct nlattr *na_flower; + struct nlattr *na_flower_act; + const enum mlx5_nl_flow_trans *trans; + const enum mlx5_nl_flow_trans *back; + + if (!size) + goto error_nobufs; +init: + item = pattern; + action = actions; + n = 0; + na_flower = NULL; + na_flower_act = NULL; + trans = TRANS(ATTR); + back = trans; +trans: + switch (trans[n++]) { + struct nlmsghdr *nlh; + struct tcmsg *tcm; + + case INVALID: + if (item->type) + return rte_flow_error_set + (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, + item, "unsupported pattern item combination"); + else if (action->type) + return rte_flow_error_set + (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, + action, "unsupported action combination"); + return rte_flow_error_set + (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "flow rule lacks some kind of fate action"); + case BACK: + trans = back; + n = 0; + goto trans; + case ATTR: + /* + * Supported attributes: no groups, some priorities and + * ingress only. Don't care about transfer as it is the + * caller's problem. + */ + if (attr->group) + return rte_flow_error_set + (error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ATTR_GROUP, + attr, "groups are not supported"); + if (attr->priority > 0xfffe) + return rte_flow_error_set + (error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, + attr, "lowest priority level is 0xfffe"); + if (!attr->ingress) + return rte_flow_error_set + (error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, + attr, "only ingress is supported"); + if (attr->egress) + return rte_flow_error_set + (error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, + attr, "egress is not supported"); + if (size < mnl_nlmsg_size(sizeof(*tcm))) + goto error_nobufs; + nlh = mnl_nlmsg_put_header(buf); + nlh->nlmsg_type = 0; + nlh->nlmsg_flags = 0; + nlh->nlmsg_seq = 0; + tcm = mnl_nlmsg_put_extra_header(nlh, sizeof(*tcm)); + tcm->tcm_family = AF_UNSPEC; + tcm->tcm_ifindex = ptoi[0].ifindex; + /* + * Let kernel pick a handle by default. A predictable handle + * can be set by the caller on the resulting buffer through + * mlx5_nl_flow_brand(). + */ + tcm->tcm_handle = 0; + tcm->tcm_parent = TC_H_MAKE(TC_H_INGRESS, TC_H_MIN_INGRESS); + /* + * Priority cannot be zero to prevent the kernel from + * picking one automatically. 
+ */ + tcm->tcm_info = TC_H_MAKE((attr->priority + 1) << 16, + RTE_BE16(ETH_P_ALL)); + break; + case PATTERN: + if (!mnl_attr_put_strz_check(buf, size, TCA_KIND, "flower")) + goto error_nobufs; + na_flower = mnl_attr_nest_start_check(buf, size, TCA_OPTIONS); + if (!na_flower) + goto error_nobufs; + if (!mnl_attr_put_u32_check(buf, size, TCA_FLOWER_FLAGS, + TCA_CLS_FLAGS_SKIP_SW)) + goto error_nobufs; + break; + case ITEM_VOID: + if (item->type != RTE_FLOW_ITEM_TYPE_VOID) + goto trans; + ++item; + break; + case ACTIONS: + if (item->type != RTE_FLOW_ITEM_TYPE_END) + goto trans; + assert(na_flower); + assert(!na_flower_act); + na_flower_act = + mnl_attr_nest_start_check(buf, size, TCA_FLOWER_ACT); + if (!na_flower_act) + goto error_nobufs; + break; + case ACTION_VOID: + if (action->type != RTE_FLOW_ACTION_TYPE_VOID) + goto trans; + ++action; + break; + case END: + if (item->type != RTE_FLOW_ITEM_TYPE_END || + action->type != RTE_FLOW_ACTION_TYPE_END) + goto trans; + if (na_flower_act) + mnl_attr_nest_end(buf, na_flower_act); + if (na_flower) + mnl_attr_nest_end(buf, na_flower); + nlh = buf; + return nlh->nlmsg_len; + } + back = trans; + trans = mlx5_nl_flow_trans[trans[n - 1]]; + n = 0; + goto trans; +error_nobufs: + if (buf != buf_tmp) { + buf = buf_tmp; + size = sizeof(buf_tmp); + goto init; + } + return rte_flow_error_set + (error, ENOBUFS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "generated TC message is too large"); +} + +/** + * Brand rtnetlink buffer with unique handle. + * + * This handle should be unique for a given network interface to avoid + * collisions. + * + * @param buf + * Flow rule buffer previously initialized by mlx5_nl_flow_transpose(). + * @param handle + * Unique 32-bit handle to use. + */ +void +mlx5_nl_flow_brand(void *buf, uint32_t handle) +{ + struct tcmsg *tcm = mnl_nlmsg_get_payload(buf); + + tcm->tcm_handle = handle; +} + /** * Send Netlink message with acknowledgment. * @@ -54,6 +293,62 @@ mlx5_nl_flow_nl_ack(struct mnl_socket *nl, struct nlmsghdr *nlh) } /** + * Create a Netlink flow rule. + * + * @param nl + * Libmnl socket to use. + * @param buf + * Flow rule buffer previously initialized by mlx5_nl_flow_transpose(). + * @param[out] error + * Perform verbose error reporting if not NULL. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +int +mlx5_nl_flow_create(struct mnl_socket *nl, void *buf, + struct rte_flow_error *error) +{ + struct nlmsghdr *nlh = buf; + + nlh->nlmsg_type = RTM_NEWTFILTER; + nlh->nlmsg_flags = NLM_F_REQUEST | NLM_F_CREATE | NLM_F_EXCL; + if (!mlx5_nl_flow_nl_ack(nl, nlh)) + return 0; + return rte_flow_error_set + (error, rte_errno, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "netlink: failed to create TC flow rule"); +} + +/** + * Destroy a Netlink flow rule. + * + * @param nl + * Libmnl socket to use. + * @param buf + * Flow rule buffer previously initialized by mlx5_nl_flow_transpose(). + * @param[out] error + * Perform verbose error reporting if not NULL. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +int +mlx5_nl_flow_destroy(struct mnl_socket *nl, void *buf, + struct rte_flow_error *error) +{ + struct nlmsghdr *nlh = buf; + + nlh->nlmsg_type = RTM_DELTFILTER; + nlh->nlmsg_flags = NLM_F_REQUEST; + if (!mlx5_nl_flow_nl_ack(nl, nlh)) + return 0; + return rte_flow_error_set + (error, errno, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "netlink: failed to destroy TC flow rule"); +} + +/** * Initialize ingress qdisc of a given network interface. 
 *
 * @param nl

From patchwork Wed Jun 27 18:08:14 2018
X-Patchwork-Submitter: Adrien Mazarguil
X-Patchwork-Id: 41724
Date: Wed, 27 Jun 2018 20:08:14 +0200
From: Adrien Mazarguil
To: Shahaf Shuler
Cc: Nelio Laranjeiro, Yongseok Koh, dev@dpdk.org
Message-ID: <20180627173355.4718-4-adrien.mazarguil@6wind.com>
In-Reply-To: <20180627173355.4718-1-adrien.mazarguil@6wind.com>
Subject: [dpdk-dev] [PATCH 3/6] net/mlx5: add fate actions to switch flow rules

This patch enables creation of rte_flow rules that direct matching traffic
to a different port (e.g. another VF representor) or drop it directly at
the switch level (PORT_ID and DROP actions).
Testpmd examples: - Directing all traffic to port ID 0: flow create 1 ingress transfer pattern end actions port_id id 0 / end - Dropping all traffic normally received by port ID 1: flow create 1 ingress transfer pattern end actions drop / end Note the presence of the transfer attribute, which requests them to be applied at the switch level. All traffic is matched due to empty pattern. Signed-off-by: Adrien Mazarguil Acked-by: Yongseok Koh --- drivers/net/mlx5/mlx5_nl_flow.c | 77 +++++++++++++++++++++++++++++++++++- 1 file changed, 75 insertions(+), 2 deletions(-) diff --git a/drivers/net/mlx5/mlx5_nl_flow.c b/drivers/net/mlx5/mlx5_nl_flow.c index 1fc62fb0a..70da85fd5 100644 --- a/drivers/net/mlx5/mlx5_nl_flow.c +++ b/drivers/net/mlx5/mlx5_nl_flow.c @@ -10,6 +10,8 @@ #include #include #include +#include +#include #include #include #include @@ -31,6 +33,8 @@ enum mlx5_nl_flow_trans { ITEM_VOID, ACTIONS, ACTION_VOID, + ACTION_PORT_ID, + ACTION_DROP, END, }; @@ -39,7 +43,9 @@ enum mlx5_nl_flow_trans { #define PATTERN_COMMON \ ITEM_VOID, ACTIONS #define ACTIONS_COMMON \ - ACTION_VOID, END + ACTION_VOID +#define ACTIONS_FATE \ + ACTION_PORT_ID, ACTION_DROP /** Parser state transitions used by mlx5_nl_flow_transpose(). */ static const enum mlx5_nl_flow_trans *const mlx5_nl_flow_trans[] = { @@ -48,8 +54,10 @@ static const enum mlx5_nl_flow_trans *const mlx5_nl_flow_trans[] = { [ATTR] = TRANS(PATTERN), [PATTERN] = TRANS(PATTERN_COMMON), [ITEM_VOID] = TRANS(BACK), - [ACTIONS] = TRANS(ACTIONS_COMMON), + [ACTIONS] = TRANS(ACTIONS_FATE, ACTIONS_COMMON), [ACTION_VOID] = TRANS(BACK), + [ACTION_PORT_ID] = TRANS(ACTION_VOID, END), + [ACTION_DROP] = TRANS(ACTION_VOID, END), [END] = NULL, }; @@ -98,6 +106,7 @@ mlx5_nl_flow_transpose(void *buf, const struct rte_flow_item *item; const struct rte_flow_action *action; unsigned int n; + uint32_t act_index_cur; struct nlattr *na_flower; struct nlattr *na_flower_act; const enum mlx5_nl_flow_trans *trans; @@ -109,14 +118,21 @@ mlx5_nl_flow_transpose(void *buf, item = pattern; action = actions; n = 0; + act_index_cur = 0; na_flower = NULL; na_flower_act = NULL; trans = TRANS(ATTR); back = trans; trans: switch (trans[n++]) { + union { + const struct rte_flow_action_port_id *port_id; + } conf; struct nlmsghdr *nlh; struct tcmsg *tcm; + struct nlattr *act_index; + struct nlattr *act; + unsigned int i; case INVALID: if (item->type) @@ -207,12 +223,69 @@ mlx5_nl_flow_transpose(void *buf, mnl_attr_nest_start_check(buf, size, TCA_FLOWER_ACT); if (!na_flower_act) goto error_nobufs; + act_index_cur = 1; break; case ACTION_VOID: if (action->type != RTE_FLOW_ACTION_TYPE_VOID) goto trans; ++action; break; + case ACTION_PORT_ID: + if (action->type != RTE_FLOW_ACTION_TYPE_PORT_ID) + goto trans; + conf.port_id = action->conf; + if (conf.port_id->original) + i = 0; + else + for (i = 0; ptoi[i].ifindex; ++i) + if (ptoi[i].port_id == conf.port_id->id) + break; + if (!ptoi[i].ifindex) + return rte_flow_error_set + (error, ENODEV, RTE_FLOW_ERROR_TYPE_ACTION_CONF, + conf.port_id, + "missing data to convert port ID to ifindex"); + act_index = + mnl_attr_nest_start_check(buf, size, act_index_cur++); + if (!act_index || + !mnl_attr_put_strz_check(buf, size, TCA_ACT_KIND, "mirred")) + goto error_nobufs; + act = mnl_attr_nest_start_check(buf, size, TCA_ACT_OPTIONS); + if (!act) + goto error_nobufs; + if (!mnl_attr_put_check(buf, size, TCA_MIRRED_PARMS, + sizeof(struct tc_mirred), + &(struct tc_mirred){ + .action = TC_ACT_STOLEN, + .eaction = TCA_EGRESS_REDIR, + .ifindex = ptoi[i].ifindex, + 
					}))
+			goto error_nobufs;
+		mnl_attr_nest_end(buf, act);
+		mnl_attr_nest_end(buf, act_index);
+		++action;
+		break;
+	case ACTION_DROP:
+		if (action->type != RTE_FLOW_ACTION_TYPE_DROP)
+			goto trans;
+		act_index =
+			mnl_attr_nest_start_check(buf, size, act_index_cur++);
+		if (!act_index ||
+		    !mnl_attr_put_strz_check(buf, size, TCA_ACT_KIND, "gact"))
+			goto error_nobufs;
+		act = mnl_attr_nest_start_check(buf, size, TCA_ACT_OPTIONS);
+		if (!act)
+			goto error_nobufs;
+		if (!mnl_attr_put_check(buf, size, TCA_GACT_PARMS,
+					sizeof(struct tc_gact),
+					&(struct tc_gact){
+						.action = TC_ACT_SHOT,
+					}))
+			goto error_nobufs;
+		mnl_attr_nest_end(buf, act);
+		mnl_attr_nest_end(buf, act_index);
+		++action;
+		break;
 	case END:
 		if (item->type != RTE_FLOW_ITEM_TYPE_END ||
 		    action->type != RTE_FLOW_ACTION_TYPE_END)
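For comparison with the testpmd commands above, a hedged sketch of the
first example through the flow API proper; all types are standard rte_flow,
only the wrapper function is invented for illustration:

#include <stdint.h>
#include <rte_flow.h>

/* Equivalent of: flow create 1 ingress transfer
 *                pattern end actions port_id id 0 / end */
static struct rte_flow *
redirect_all_to_port_0(uint16_t port, struct rte_flow_error *error)
{
	struct rte_flow_attr attr = { .ingress = 1, .transfer = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_END }, /* empty: match all */
	};
	struct rte_flow_action_port_id port_id = { .id = 0 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_PORT_ID, .conf = &port_id },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port, &attr, pattern, actions, error);
}

Calling redirect_all_to_port_0(1, &error) mirrors the first example above;
replacing the PORT_ID action with RTE_FLOW_ACTION_TYPE_DROP (no conf
needed) gives the second.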
From patchwork Wed Jun 27 18:08:16 2018
X-Patchwork-Submitter: Adrien Mazarguil
X-Patchwork-Id: 41725
Date: Wed, 27 Jun 2018 20:08:16 +0200
From: Adrien Mazarguil
To: Shahaf Shuler
Cc: Nelio Laranjeiro, Yongseok Koh, dev@dpdk.org
Message-ID: <20180627173355.4718-5-adrien.mazarguil@6wind.com>
In-Reply-To: <20180627173355.4718-1-adrien.mazarguil@6wind.com>
Subject: [dpdk-dev] [PATCH 4/6] net/mlx5: add L2-L4 pattern items to switch flow rules

This enables flow rules to explicitly match supported combinations of
Ethernet, IPv4, IPv6, TCP and UDP headers at the switch level.

Testpmd example:

- Dropping TCPv4 traffic with a specific destination on port ID 2:

  flow create 2 ingress transfer pattern eth / ipv4 / tcp dst is 42 / end
     actions drop / end

Signed-off-by: Adrien Mazarguil
Acked-by: Yongseok Koh
---
 drivers/net/mlx5/mlx5_nl_flow.c | 397 ++++++++++++++++++++++++++++++++++-
 1 file changed, 396 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5_nl_flow.c b/drivers/net/mlx5/mlx5_nl_flow.c
index 70da85fd5..ad1e001c6 100644
--- a/drivers/net/mlx5/mlx5_nl_flow.c
+++ b/drivers/net/mlx5/mlx5_nl_flow.c
@@ -3,6 +3,7 @@
  * Copyright 2018 Mellanox Technologies, Ltd
  */
 
+#include
 #include
 #include
 #include
@@ -12,7 +13,9 @@
 #include
 #include
 #include
+#include
 #include
+#include
 #include
 #include
 #include
@@ -20,6 +23,7 @@
 #include
 #include
+#include
 #include
 
 #include "mlx5.h"
@@ -31,6 +35,11 @@ enum mlx5_nl_flow_trans {
 	ATTR,
 	PATTERN,
 	ITEM_VOID,
+	ITEM_ETH,
+	ITEM_IPV4,
+	ITEM_IPV6,
+	ITEM_TCP,
+	ITEM_UDP,
 	ACTIONS,
 	ACTION_VOID,
 	ACTION_PORT_ID,
@@ -52,8 +61,13 @@ static const enum mlx5_nl_flow_trans *const mlx5_nl_flow_trans[] = {
 	[INVALID] = NULL,
 	[BACK] = NULL,
 	[ATTR] = TRANS(PATTERN),
-	[PATTERN] = TRANS(PATTERN_COMMON),
+	[PATTERN] = TRANS(ITEM_ETH, PATTERN_COMMON),
 	[ITEM_VOID] = TRANS(BACK),
+	[ITEM_ETH] = TRANS(ITEM_IPV4, ITEM_IPV6, PATTERN_COMMON),
+	[ITEM_IPV4] = TRANS(ITEM_TCP, ITEM_UDP, PATTERN_COMMON),
+	[ITEM_IPV6] = TRANS(ITEM_TCP, ITEM_UDP, PATTERN_COMMON),
+	[ITEM_TCP] = TRANS(PATTERN_COMMON),
+	[ITEM_UDP] = TRANS(PATTERN_COMMON),
 	[ACTIONS] = TRANS(ACTIONS_FATE, ACTIONS_COMMON),
 	[ACTION_VOID] = TRANS(BACK),
 	[ACTION_PORT_ID] = TRANS(ACTION_VOID, END),
@@ -61,6 +75,126 @@ static const enum mlx5_nl_flow_trans *const mlx5_nl_flow_trans[] = {
 	[END] = NULL,
 };
 
+/** Empty masks for known item types. */
+static const union {
+	struct rte_flow_item_eth eth;
+	struct rte_flow_item_ipv4 ipv4;
+	struct rte_flow_item_ipv6 ipv6;
+	struct rte_flow_item_tcp tcp;
+	struct rte_flow_item_udp udp;
+} mlx5_nl_flow_mask_empty;
+
+/** Supported masks for known item types.
*/ +static const struct { + struct rte_flow_item_eth eth; + struct rte_flow_item_ipv4 ipv4; + struct rte_flow_item_ipv6 ipv6; + struct rte_flow_item_tcp tcp; + struct rte_flow_item_udp udp; +} mlx5_nl_flow_mask_supported = { + .eth = { + .type = RTE_BE16(0xffff), + .dst.addr_bytes = "\xff\xff\xff\xff\xff\xff", + .src.addr_bytes = "\xff\xff\xff\xff\xff\xff", + }, + .ipv4.hdr = { + .next_proto_id = 0xff, + .src_addr = RTE_BE32(0xffffffff), + .dst_addr = RTE_BE32(0xffffffff), + }, + .ipv6.hdr = { + .proto = 0xff, + .src_addr = + "\xff\xff\xff\xff\xff\xff\xff\xff" + "\xff\xff\xff\xff\xff\xff\xff\xff", + .dst_addr = + "\xff\xff\xff\xff\xff\xff\xff\xff" + "\xff\xff\xff\xff\xff\xff\xff\xff", + }, + .tcp.hdr = { + .src_port = RTE_BE16(0xffff), + .dst_port = RTE_BE16(0xffff), + }, + .udp.hdr = { + .src_port = RTE_BE16(0xffff), + .dst_port = RTE_BE16(0xffff), + }, +}; + +/** + * Retrieve mask for pattern item. + * + * This function does basic sanity checks on a pattern item in order to + * return the most appropriate mask for it. + * + * @param[in] item + * Item specification. + * @param[in] mask_default + * Default mask for pattern item as specified by the flow API. + * @param[in] mask_supported + * Mask fields supported by the implementation. + * @param[in] mask_empty + * Empty mask to return when there is no specification. + * @param[out] error + * Perform verbose error reporting if not NULL. + * + * @return + * Either @p item->mask or one of the mask parameters on success, NULL + * otherwise and rte_errno is set. + */ +static const void * +mlx5_nl_flow_item_mask(const struct rte_flow_item *item, + const void *mask_default, + const void *mask_supported, + const void *mask_empty, + size_t mask_size, + struct rte_flow_error *error) +{ + const uint8_t *mask; + size_t i; + + /* item->last and item->mask cannot exist without item->spec. */ + if (!item->spec && (item->mask || item->last)) { + rte_flow_error_set + (error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item, + "\"mask\" or \"last\" field provided without a" + " corresponding \"spec\""); + return NULL; + } + /* No spec, no mask, no problem. */ + if (!item->spec) + return mask_empty; + mask = item->mask ? item->mask : mask_default; + assert(mask); + /* + * Single-pass check to make sure that: + * - Mask is supported, no bits are set outside mask_supported. + * - Both item->spec and item->last are included in mask. + */ + for (i = 0; i != mask_size; ++i) { + if (!mask[i]) + continue; + if ((mask[i] | ((const uint8_t *)mask_supported)[i]) != + ((const uint8_t *)mask_supported)[i]) { + rte_flow_error_set + (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM_MASK, + mask, "unsupported field found in \"mask\""); + return NULL; + } + if (item->last && + (((const uint8_t *)item->spec)[i] & mask[i]) != + (((const uint8_t *)item->last)[i] & mask[i])) { + rte_flow_error_set + (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM_LAST, + item->last, + "range between \"spec\" and \"last\" not" + " comprised in \"mask\""); + return NULL; + } + } + return mask; +} + /** * Transpose flow rule description to rtnetlink message. 
* @@ -107,6 +241,8 @@ mlx5_nl_flow_transpose(void *buf, const struct rte_flow_action *action; unsigned int n; uint32_t act_index_cur; + bool eth_type_set; + bool ip_proto_set; struct nlattr *na_flower; struct nlattr *na_flower_act; const enum mlx5_nl_flow_trans *trans; @@ -119,6 +255,8 @@ mlx5_nl_flow_transpose(void *buf, action = actions; n = 0; act_index_cur = 0; + eth_type_set = false; + ip_proto_set = false; na_flower = NULL; na_flower_act = NULL; trans = TRANS(ATTR); @@ -126,6 +264,13 @@ mlx5_nl_flow_transpose(void *buf, trans: switch (trans[n++]) { union { + const struct rte_flow_item_eth *eth; + const struct rte_flow_item_ipv4 *ipv4; + const struct rte_flow_item_ipv6 *ipv6; + const struct rte_flow_item_tcp *tcp; + const struct rte_flow_item_udp *udp; + } spec, mask; + union { const struct rte_flow_action_port_id *port_id; } conf; struct nlmsghdr *nlh; @@ -214,6 +359,256 @@ mlx5_nl_flow_transpose(void *buf, goto trans; ++item; break; + case ITEM_ETH: + if (item->type != RTE_FLOW_ITEM_TYPE_ETH) + goto trans; + mask.eth = mlx5_nl_flow_item_mask + (item, &rte_flow_item_eth_mask, + &mlx5_nl_flow_mask_supported.eth, + &mlx5_nl_flow_mask_empty.eth, + sizeof(mlx5_nl_flow_mask_supported.eth), error); + if (!mask.eth) + return -rte_errno; + if (mask.eth == &mlx5_nl_flow_mask_empty.eth) { + ++item; + break; + } + spec.eth = item->spec; + if (mask.eth->type && mask.eth->type != RTE_BE16(0xffff)) + return rte_flow_error_set + (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM_MASK, + mask.eth, + "no support for partial mask on" + " \"type\" field"); + if (mask.eth->type) { + if (!mnl_attr_put_u16_check(buf, size, + TCA_FLOWER_KEY_ETH_TYPE, + spec.eth->type)) + goto error_nobufs; + eth_type_set = 1; + } + if ((!is_zero_ether_addr(&mask.eth->dst) && + (!mnl_attr_put_check(buf, size, + TCA_FLOWER_KEY_ETH_DST, + ETHER_ADDR_LEN, + spec.eth->dst.addr_bytes) || + !mnl_attr_put_check(buf, size, + TCA_FLOWER_KEY_ETH_DST_MASK, + ETHER_ADDR_LEN, + mask.eth->dst.addr_bytes))) || + (!is_zero_ether_addr(&mask.eth->src) && + (!mnl_attr_put_check(buf, size, + TCA_FLOWER_KEY_ETH_SRC, + ETHER_ADDR_LEN, + spec.eth->src.addr_bytes) || + !mnl_attr_put_check(buf, size, + TCA_FLOWER_KEY_ETH_SRC_MASK, + ETHER_ADDR_LEN, + mask.eth->src.addr_bytes)))) + goto error_nobufs; + ++item; + break; + case ITEM_IPV4: + if (item->type != RTE_FLOW_ITEM_TYPE_IPV4) + goto trans; + mask.ipv4 = mlx5_nl_flow_item_mask + (item, &rte_flow_item_ipv4_mask, + &mlx5_nl_flow_mask_supported.ipv4, + &mlx5_nl_flow_mask_empty.ipv4, + sizeof(mlx5_nl_flow_mask_supported.ipv4), error); + if (!mask.ipv4) + return -rte_errno; + if (!eth_type_set && + !mnl_attr_put_u16_check(buf, size, + TCA_FLOWER_KEY_ETH_TYPE, + RTE_BE16(ETH_P_IP))) + goto error_nobufs; + eth_type_set = 1; + if (mask.ipv4 == &mlx5_nl_flow_mask_empty.ipv4) { + ++item; + break; + } + spec.ipv4 = item->spec; + if (mask.ipv4->hdr.next_proto_id && + mask.ipv4->hdr.next_proto_id != 0xff) + return rte_flow_error_set + (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM_MASK, + mask.ipv4, + "no support for partial mask on" + " \"hdr.next_proto_id\" field"); + if (mask.ipv4->hdr.next_proto_id) { + if (!mnl_attr_put_u8_check + (buf, size, TCA_FLOWER_KEY_IP_PROTO, + spec.ipv4->hdr.next_proto_id)) + goto error_nobufs; + ip_proto_set = 1; + } + if ((mask.ipv4->hdr.src_addr && + (!mnl_attr_put_u32_check(buf, size, + TCA_FLOWER_KEY_IPV4_SRC, + spec.ipv4->hdr.src_addr) || + !mnl_attr_put_u32_check(buf, size, + TCA_FLOWER_KEY_IPV4_SRC_MASK, + mask.ipv4->hdr.src_addr))) || + (mask.ipv4->hdr.dst_addr && + 
(!mnl_attr_put_u32_check(buf, size, + TCA_FLOWER_KEY_IPV4_DST, + spec.ipv4->hdr.dst_addr) || + !mnl_attr_put_u32_check(buf, size, + TCA_FLOWER_KEY_IPV4_DST_MASK, + mask.ipv4->hdr.dst_addr)))) + goto error_nobufs; + ++item; + break; + case ITEM_IPV6: + if (item->type != RTE_FLOW_ITEM_TYPE_IPV6) + goto trans; + mask.ipv6 = mlx5_nl_flow_item_mask + (item, &rte_flow_item_ipv6_mask, + &mlx5_nl_flow_mask_supported.ipv6, + &mlx5_nl_flow_mask_empty.ipv6, + sizeof(mlx5_nl_flow_mask_supported.ipv6), error); + if (!mask.ipv6) + return -rte_errno; + if (!eth_type_set && + !mnl_attr_put_u16_check(buf, size, + TCA_FLOWER_KEY_ETH_TYPE, + RTE_BE16(ETH_P_IPV6))) + goto error_nobufs; + eth_type_set = 1; + if (mask.ipv6 == &mlx5_nl_flow_mask_empty.ipv6) { + ++item; + break; + } + spec.ipv6 = item->spec; + if (mask.ipv6->hdr.proto && mask.ipv6->hdr.proto != 0xff) + return rte_flow_error_set + (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM_MASK, + mask.ipv6, + "no support for partial mask on" + " \"hdr.proto\" field"); + if (mask.ipv6->hdr.proto) { + if (!mnl_attr_put_u8_check + (buf, size, TCA_FLOWER_KEY_IP_PROTO, + spec.ipv6->hdr.proto)) + goto error_nobufs; + ip_proto_set = 1; + } + if ((!IN6_IS_ADDR_UNSPECIFIED(mask.ipv6->hdr.src_addr) && + (!mnl_attr_put_check(buf, size, + TCA_FLOWER_KEY_IPV6_SRC, + sizeof(spec.ipv6->hdr.src_addr), + spec.ipv6->hdr.src_addr) || + !mnl_attr_put_check(buf, size, + TCA_FLOWER_KEY_IPV6_SRC_MASK, + sizeof(mask.ipv6->hdr.src_addr), + mask.ipv6->hdr.src_addr))) || + (!IN6_IS_ADDR_UNSPECIFIED(mask.ipv6->hdr.dst_addr) && + (!mnl_attr_put_check(buf, size, + TCA_FLOWER_KEY_IPV6_DST, + sizeof(spec.ipv6->hdr.dst_addr), + spec.ipv6->hdr.dst_addr) || + !mnl_attr_put_check(buf, size, + TCA_FLOWER_KEY_IPV6_DST_MASK, + sizeof(mask.ipv6->hdr.dst_addr), + mask.ipv6->hdr.dst_addr)))) + goto error_nobufs; + ++item; + break; + case ITEM_TCP: + if (item->type != RTE_FLOW_ITEM_TYPE_TCP) + goto trans; + mask.tcp = mlx5_nl_flow_item_mask + (item, &rte_flow_item_tcp_mask, + &mlx5_nl_flow_mask_supported.tcp, + &mlx5_nl_flow_mask_empty.tcp, + sizeof(mlx5_nl_flow_mask_supported.tcp), error); + if (!mask.tcp) + return -rte_errno; + if (!ip_proto_set && + !mnl_attr_put_u8_check(buf, size, + TCA_FLOWER_KEY_IP_PROTO, + IPPROTO_TCP)) + goto error_nobufs; + if (mask.tcp == &mlx5_nl_flow_mask_empty.tcp) { + ++item; + break; + } + spec.tcp = item->spec; + if ((mask.tcp->hdr.src_port && + mask.tcp->hdr.src_port != RTE_BE16(0xffff)) || + (mask.tcp->hdr.dst_port && + mask.tcp->hdr.dst_port != RTE_BE16(0xffff))) + return rte_flow_error_set + (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM_MASK, + mask.tcp, + "no support for partial masks on" + " \"hdr.src_port\" and \"hdr.dst_port\"" + " fields"); + if ((mask.tcp->hdr.src_port && + (!mnl_attr_put_u16_check(buf, size, + TCA_FLOWER_KEY_TCP_SRC, + spec.tcp->hdr.src_port) || + !mnl_attr_put_u16_check(buf, size, + TCA_FLOWER_KEY_TCP_SRC_MASK, + mask.tcp->hdr.src_port))) || + (mask.tcp->hdr.dst_port && + (!mnl_attr_put_u16_check(buf, size, + TCA_FLOWER_KEY_TCP_DST, + spec.tcp->hdr.dst_port) || + !mnl_attr_put_u16_check(buf, size, + TCA_FLOWER_KEY_TCP_DST_MASK, + mask.tcp->hdr.dst_port)))) + goto error_nobufs; + ++item; + break; + case ITEM_UDP: + if (item->type != RTE_FLOW_ITEM_TYPE_UDP) + goto trans; + mask.udp = mlx5_nl_flow_item_mask + (item, &rte_flow_item_udp_mask, + &mlx5_nl_flow_mask_supported.udp, + &mlx5_nl_flow_mask_empty.udp, + sizeof(mlx5_nl_flow_mask_supported.udp), error); + if (!mask.udp) + return -rte_errno; + if (!ip_proto_set && + !mnl_attr_put_u8_check(buf, 
size,
+					       TCA_FLOWER_KEY_IP_PROTO,
+					       IPPROTO_UDP))
+			goto error_nobufs;
+		if (mask.udp == &mlx5_nl_flow_mask_empty.udp) {
+			++item;
+			break;
+		}
+		spec.udp = item->spec;
+		if ((mask.udp->hdr.src_port &&
+		     mask.udp->hdr.src_port != RTE_BE16(0xffff)) ||
+		    (mask.udp->hdr.dst_port &&
+		     mask.udp->hdr.dst_port != RTE_BE16(0xffff)))
+			return rte_flow_error_set
+				(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM_MASK,
+				 mask.udp,
+				 "no support for partial masks on"
+				 " \"hdr.src_port\" and \"hdr.dst_port\""
+				 " fields");
+		if ((mask.udp->hdr.src_port &&
+		     (!mnl_attr_put_u16_check(buf, size,
+					      TCA_FLOWER_KEY_UDP_SRC,
+					      spec.udp->hdr.src_port) ||
+		      !mnl_attr_put_u16_check(buf, size,
+					      TCA_FLOWER_KEY_UDP_SRC_MASK,
+					      mask.udp->hdr.src_port))) ||
+		    (mask.udp->hdr.dst_port &&
+		     (!mnl_attr_put_u16_check(buf, size,
+					      TCA_FLOWER_KEY_UDP_DST,
+					      spec.udp->hdr.dst_port) ||
+		      !mnl_attr_put_u16_check(buf, size,
+					      TCA_FLOWER_KEY_UDP_DST_MASK,
+					      mask.udp->hdr.dst_port))))
+			goto error_nobufs;
+		++item;
+		break;
 	case ACTIONS:
 		if (item->type != RTE_FLOW_ITEM_TYPE_END)
 			goto trans;
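Again for comparison, a hedged flow API sketch of the testpmd example from
this patch (standard rte_flow types; the wrapper name is invented for
illustration):

#include <stdint.h>
#include <rte_byteorder.h>
#include <rte_flow.h>

/* Equivalent of: flow create 2 ingress transfer
 *                pattern eth / ipv4 / tcp dst is 42 / end
 *                actions drop / end */
static struct rte_flow *
drop_tcpv4_dst_42(uint16_t port, struct rte_flow_error *error)
{
	struct rte_flow_attr attr = { .ingress = 1, .transfer = 1 };
	struct rte_flow_item_tcp tcp_spec = {
		.hdr.dst_port = RTE_BE16(42),
	};
	struct rte_flow_item_tcp tcp_mask = {
		.hdr.dst_port = RTE_BE16(0xffff), /* exact match only */
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_TCP,
		  .spec = &tcp_spec, .mask = &tcp_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_DROP },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port, &attr, pattern, actions, error);
}

The exact-match masks mirror the parser above, which rejects partial masks
on TCP and UDP port fields.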
Date: Wed, 27 Jun 2018 20:08:18 +0200
From: Adrien Mazarguil
To: Shahaf Shuler
Cc: Nelio Laranjeiro, Yongseok Koh, dev@dpdk.org
Message-ID: <20180627173355.4718-6-adrien.mazarguil@6wind.com>
In-Reply-To: <20180627173355.4718-1-adrien.mazarguil@6wind.com>
Subject: [dpdk-dev] [PATCH 5/6] net/mlx5: add VLAN item and actions to switch flow rules

This enables flow rules to explicitly match VLAN traffic (VLAN pattern
item) and perform various operations on VLAN headers at the switch level
(OF_POP_VLAN, OF_PUSH_VLAN, OF_SET_VLAN_VID and OF_SET_VLAN_PCP actions).

Testpmd examples:

- Directing all VLAN traffic received on port ID 1 to port ID 0:

  flow create 1 ingress transfer pattern eth / vlan / end actions
     port_id id 0 / end

- Adding a VLAN header to IPv6 traffic received on port ID 1 and directing
  it to port ID 0:

  flow create 1 ingress transfer pattern eth / ipv6 / end actions
     of_push_vlan ethertype 0x8100 / of_set_vlan_vid / port_id id 0 / end
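[Editor's note: the first testpmd command above maps onto the rte_flow API
roughly as follows. This is a minimal sketch, not part of the patch; the
function name and hard-coded port IDs are illustrative only, and error
handling is reduced to the rte_flow_error argument.]

  #include <rte_flow.h>

  /* Match any VLAN traffic received on port 1 and redirect it to the DPDK
   * port with ID 0, at the switch level ("transfer" attribute). */
  static struct rte_flow *
  redirect_vlan_to_port0(void)
  {
  	struct rte_flow_attr attr = { .ingress = 1, .transfer = 1 };
  	struct rte_flow_item pattern[] = {
  		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
  		{ .type = RTE_FLOW_ITEM_TYPE_VLAN }, /* no spec: any VLAN */
  		{ .type = RTE_FLOW_ITEM_TYPE_END },
  	};
  	struct rte_flow_action_port_id port_id = { .id = 0 };
  	struct rte_flow_action actions[] = {
  		{ .type = RTE_FLOW_ACTION_TYPE_PORT_ID, .conf = &port_id },
  		{ .type = RTE_FLOW_ACTION_TYPE_END },
  	};
  	struct rte_flow_error error;

  	return rte_flow_create(1, &attr, pattern, actions, &error);
  }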
Signed-off-by: Adrien Mazarguil
Acked-by: Yongseok Koh
---
 drivers/net/mlx5/mlx5_nl_flow.c | 177 ++++++++++++++++++++++++++++++++++-
 1 file changed, 173 insertions(+), 4 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_nl_flow.c b/drivers/net/mlx5/mlx5_nl_flow.c
index ad1e001c6..a45d94fae 100644
--- a/drivers/net/mlx5/mlx5_nl_flow.c
+++ b/drivers/net/mlx5/mlx5_nl_flow.c
@@ -13,6 +13,7 @@
 #include <...>
 #include <...>
 #include <...>
+#include <linux/tc_act/tc_vlan.h>
 #include <...>
 #include <...>
 #include <...>
@@ -36,6 +37,7 @@ enum mlx5_nl_flow_trans {
 	PATTERN,
 	ITEM_VOID,
 	ITEM_ETH,
+	ITEM_VLAN,
 	ITEM_IPV4,
 	ITEM_IPV6,
 	ITEM_TCP,
@@ -44,6 +46,10 @@ enum mlx5_nl_flow_trans {
 	ACTION_VOID,
 	ACTION_PORT_ID,
 	ACTION_DROP,
+	ACTION_OF_POP_VLAN,
+	ACTION_OF_PUSH_VLAN,
+	ACTION_OF_SET_VLAN_VID,
+	ACTION_OF_SET_VLAN_PCP,
 	END,
 };
@@ -52,7 +58,8 @@ enum mlx5_nl_flow_trans {
 #define PATTERN_COMMON \
 	ITEM_VOID, ACTIONS
 #define ACTIONS_COMMON \
-	ACTION_VOID
+	ACTION_VOID, ACTION_OF_POP_VLAN, ACTION_OF_PUSH_VLAN, \
+	ACTION_OF_SET_VLAN_VID, ACTION_OF_SET_VLAN_PCP
 #define ACTIONS_FATE \
 	ACTION_PORT_ID, ACTION_DROP
@@ -63,7 +70,8 @@ static const enum mlx5_nl_flow_trans *const mlx5_nl_flow_trans[] = {
 	[ATTR] = TRANS(PATTERN),
 	[PATTERN] = TRANS(ITEM_ETH, PATTERN_COMMON),
 	[ITEM_VOID] = TRANS(BACK),
-	[ITEM_ETH] = TRANS(ITEM_IPV4, ITEM_IPV6, PATTERN_COMMON),
+	[ITEM_ETH] = TRANS(ITEM_IPV4, ITEM_IPV6, ITEM_VLAN, PATTERN_COMMON),
+	[ITEM_VLAN] = TRANS(ITEM_IPV4, ITEM_IPV6, PATTERN_COMMON),
 	[ITEM_IPV4] = TRANS(ITEM_TCP, ITEM_UDP, PATTERN_COMMON),
 	[ITEM_IPV6] = TRANS(ITEM_TCP, ITEM_UDP, PATTERN_COMMON),
 	[ITEM_TCP] = TRANS(PATTERN_COMMON),
@@ -72,12 +80,17 @@ static const enum mlx5_nl_flow_trans *const mlx5_nl_flow_trans[] = {
 	[ACTION_VOID] = TRANS(BACK),
 	[ACTION_PORT_ID] = TRANS(ACTION_VOID, END),
 	[ACTION_DROP] = TRANS(ACTION_VOID, END),
+	[ACTION_OF_POP_VLAN] = TRANS(ACTIONS_FATE, ACTIONS_COMMON),
+	[ACTION_OF_PUSH_VLAN] = TRANS(ACTIONS_FATE, ACTIONS_COMMON),
+	[ACTION_OF_SET_VLAN_VID] = TRANS(ACTIONS_FATE, ACTIONS_COMMON),
+	[ACTION_OF_SET_VLAN_PCP] = TRANS(ACTIONS_FATE, ACTIONS_COMMON),
 	[END] = NULL,
 };

 /** Empty masks for known item types. */
 static const union {
 	struct rte_flow_item_eth eth;
+	struct rte_flow_item_vlan vlan;
 	struct rte_flow_item_ipv4 ipv4;
 	struct rte_flow_item_ipv6 ipv6;
 	struct rte_flow_item_tcp tcp;
@@ -87,6 +100,7 @@ static const union {
 /** Supported masks for known item types. */
 static const struct {
 	struct rte_flow_item_eth eth;
+	struct rte_flow_item_vlan vlan;
 	struct rte_flow_item_ipv4 ipv4;
 	struct rte_flow_item_ipv6 ipv6;
 	struct rte_flow_item_tcp tcp;
@@ -97,6 +111,11 @@ static const struct {
 		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 		.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 	},
+	.vlan = {
+		/* PCP and VID only, no DEI. */
+		.tci = RTE_BE16(0xefff),
+		.inner_type = RTE_BE16(0xffff),
+	},
 	.ipv4.hdr = {
 		.next_proto_id = 0xff,
 		.src_addr = RTE_BE32(0xffffffff),
@@ -242,9 +261,13 @@ mlx5_nl_flow_transpose(void *buf,
 	unsigned int n;
 	uint32_t act_index_cur;
 	bool eth_type_set;
+	bool vlan_present;
+	bool vlan_eth_type_set;
 	bool ip_proto_set;
 	struct nlattr *na_flower;
 	struct nlattr *na_flower_act;
+	struct nlattr *na_vlan_id;
+	struct nlattr *na_vlan_priority;
 	const enum mlx5_nl_flow_trans *trans;
 	const enum mlx5_nl_flow_trans *back;
@@ -256,15 +279,20 @@ mlx5_nl_flow_transpose(void *buf,
 	n = 0;
 	act_index_cur = 0;
 	eth_type_set = false;
+	vlan_present = false;
+	vlan_eth_type_set = false;
 	ip_proto_set = false;
 	na_flower = NULL;
 	na_flower_act = NULL;
+	na_vlan_id = NULL;
+	na_vlan_priority = NULL;
 	trans = TRANS(ATTR);
 	back = trans;
 trans:
 	switch (trans[n++]) {
 	union {
 		const struct rte_flow_item_eth *eth;
+		const struct rte_flow_item_vlan *vlan;
 		const struct rte_flow_item_ipv4 *ipv4;
 		const struct rte_flow_item_ipv6 *ipv6;
 		const struct rte_flow_item_tcp *tcp;
@@ -272,6 +300,11 @@ mlx5_nl_flow_transpose(void *buf,
 	} spec, mask;
 	union {
 		const struct rte_flow_action_port_id *port_id;
+		const struct rte_flow_action_of_push_vlan *of_push_vlan;
+		const struct rte_flow_action_of_set_vlan_vid *
+			of_set_vlan_vid;
+		const struct rte_flow_action_of_set_vlan_pcp *
+			of_set_vlan_pcp;
 	} conf;
 	struct nlmsghdr *nlh;
 	struct tcmsg *tcm;
@@ -408,6 +441,58 @@ mlx5_nl_flow_transpose(void *buf,
 			goto error_nobufs;
 		++item;
 		break;
+	case ITEM_VLAN:
+		if (item->type != RTE_FLOW_ITEM_TYPE_VLAN)
+			goto trans;
+		mask.vlan = mlx5_nl_flow_item_mask
+			(item, &rte_flow_item_vlan_mask,
+			 &mlx5_nl_flow_mask_supported.vlan,
+			 &mlx5_nl_flow_mask_empty.vlan,
+			 sizeof(mlx5_nl_flow_mask_supported.vlan), error);
+		if (!mask.vlan)
+			return -rte_errno;
+		if (!eth_type_set &&
+		    !mnl_attr_put_u16_check(buf, size,
+					    TCA_FLOWER_KEY_ETH_TYPE,
+					    RTE_BE16(ETH_P_8021Q)))
+			goto error_nobufs;
+		eth_type_set = 1;
+		vlan_present = 1;
+		if (mask.vlan == &mlx5_nl_flow_mask_empty.vlan) {
+			++item;
+			break;
+		}
+		spec.vlan = item->spec;
+		if ((mask.vlan->tci & RTE_BE16(0xe000) &&
+		     (mask.vlan->tci & RTE_BE16(0xe000)) != RTE_BE16(0xe000)) ||
+		    (mask.vlan->tci & RTE_BE16(0x0fff) &&
+		     (mask.vlan->tci & RTE_BE16(0x0fff)) != RTE_BE16(0x0fff)) ||
+		    (mask.vlan->inner_type &&
+		     mask.vlan->inner_type != RTE_BE16(0xffff)))
+			return rte_flow_error_set
+				(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM_MASK,
+				 mask.vlan,
+				 "no support for partial masks on"
+				 " \"tci\" (PCP and VID parts) and"
+				 " \"inner_type\" fields");
+		if (mask.vlan->inner_type) {
+			if (!mnl_attr_put_u16_check
+			    (buf, size, TCA_FLOWER_KEY_VLAN_ETH_TYPE,
+			     spec.vlan->inner_type))
+				goto error_nobufs;
+			vlan_eth_type_set = 1;
+		}
+		if ((mask.vlan->tci & RTE_BE16(0xe000) &&
+		     !mnl_attr_put_u8_check
+		     (buf, size, TCA_FLOWER_KEY_VLAN_PRIO,
+		      (rte_be_to_cpu_16(spec.vlan->tci) >> 13) & 0x7)) ||
+		    (mask.vlan->tci & RTE_BE16(0x0fff) &&
+		     !mnl_attr_put_u16_check
+		     (buf, size, TCA_FLOWER_KEY_VLAN_ID,
+		      spec.vlan->tci & RTE_BE16(0x0fff))))
+			goto error_nobufs;
+		++item;
+		break;
 	case ITEM_IPV4:
 		if (item->type != RTE_FLOW_ITEM_TYPE_IPV4)
 			goto trans;
@@ -418,12 +503,15 @@ mlx5_nl_flow_transpose(void *buf,
 		     sizeof(mlx5_nl_flow_mask_supported.ipv4), error);
 		if (!mask.ipv4)
 			return -rte_errno;
-		if (!eth_type_set &&
+		if ((!eth_type_set || !vlan_eth_type_set) &&
 		    !mnl_attr_put_u16_check(buf, size,
+					    vlan_present ?
+					    TCA_FLOWER_KEY_VLAN_ETH_TYPE :
 					    TCA_FLOWER_KEY_ETH_TYPE,
 					    RTE_BE16(ETH_P_IP)))
 			goto error_nobufs;
 		eth_type_set = 1;
+		vlan_eth_type_set = 1;
 		if (mask.ipv4 == &mlx5_nl_flow_mask_empty.ipv4) {
 			++item;
 			break;
@@ -470,12 +558,15 @@ mlx5_nl_flow_transpose(void *buf,
 		     sizeof(mlx5_nl_flow_mask_supported.ipv6), error);
 		if (!mask.ipv6)
 			return -rte_errno;
-		if (!eth_type_set &&
+		if ((!eth_type_set || !vlan_eth_type_set) &&
 		    !mnl_attr_put_u16_check(buf, size,
+					    vlan_present ?
+					    TCA_FLOWER_KEY_VLAN_ETH_TYPE :
 					    TCA_FLOWER_KEY_ETH_TYPE,
 					    RTE_BE16(ETH_P_IPV6)))
 			goto error_nobufs;
 		eth_type_set = 1;
+		vlan_eth_type_set = 1;
 		if (mask.ipv6 == &mlx5_nl_flow_mask_empty.ipv6) {
 			++item;
 			break;
@@ -681,6 +772,84 @@ mlx5_nl_flow_transpose(void *buf,
 		mnl_attr_nest_end(buf, act_index);
 		++action;
 		break;
+	case ACTION_OF_POP_VLAN:
+		if (action->type != RTE_FLOW_ACTION_TYPE_OF_POP_VLAN)
+			goto trans;
+		conf.of_push_vlan = NULL;
+		i = TCA_VLAN_ACT_POP;
+		goto action_of_vlan;
+	case ACTION_OF_PUSH_VLAN:
+		if (action->type != RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN)
+			goto trans;
+		conf.of_push_vlan = action->conf;
+		i = TCA_VLAN_ACT_PUSH;
+		goto action_of_vlan;
+	case ACTION_OF_SET_VLAN_VID:
+		if (action->type != RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID)
+			goto trans;
+		conf.of_set_vlan_vid = action->conf;
+		if (na_vlan_id)
+			goto override_na_vlan_id;
+		i = TCA_VLAN_ACT_MODIFY;
+		goto action_of_vlan;
+	case ACTION_OF_SET_VLAN_PCP:
+		if (action->type != RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP)
+			goto trans;
+		conf.of_set_vlan_pcp = action->conf;
+		if (na_vlan_priority)
+			goto override_na_vlan_priority;
+		i = TCA_VLAN_ACT_MODIFY;
+		goto action_of_vlan;
+action_of_vlan:
+		act_index =
+			mnl_attr_nest_start_check(buf, size, act_index_cur++);
+		if (!act_index ||
+		    !mnl_attr_put_strz_check(buf, size, TCA_ACT_KIND, "vlan"))
+			goto error_nobufs;
+		act = mnl_attr_nest_start_check(buf, size, TCA_ACT_OPTIONS);
+		if (!act)
+			goto error_nobufs;
+		if (!mnl_attr_put_check(buf, size, TCA_VLAN_PARMS,
+					sizeof(struct tc_vlan),
+					&(struct tc_vlan){
+						.action = TC_ACT_PIPE,
+						.v_action = i,
+					}))
+			goto error_nobufs;
+		if (i == TCA_VLAN_ACT_POP) {
+			mnl_attr_nest_end(buf, act);
+			++action;
+			break;
+		}
+		if (i == TCA_VLAN_ACT_PUSH &&
+		    !mnl_attr_put_u16_check(buf, size,
+					    TCA_VLAN_PUSH_VLAN_PROTOCOL,
+					    conf.of_push_vlan->ethertype))
+			goto error_nobufs;
+		na_vlan_id = mnl_nlmsg_get_payload_tail(buf);
+		if (!mnl_attr_put_u16_check(buf, size, TCA_VLAN_PAD, 0))
+			goto error_nobufs;
+		na_vlan_priority = mnl_nlmsg_get_payload_tail(buf);
+		if (!mnl_attr_put_u8_check(buf, size, TCA_VLAN_PAD, 0))
+			goto error_nobufs;
+		mnl_attr_nest_end(buf, act);
+		mnl_attr_nest_end(buf, act_index);
+		if (action->type == RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID) {
+override_na_vlan_id:
+			na_vlan_id->nla_type = TCA_VLAN_PUSH_VLAN_ID;
+			*(uint16_t *)mnl_attr_get_payload(na_vlan_id) =
+				rte_be_to_cpu_16
+				(conf.of_set_vlan_vid->vlan_vid);
+		} else if (action->type ==
+			   RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP) {
+override_na_vlan_priority:
+			na_vlan_priority->nla_type =
+				TCA_VLAN_PUSH_VLAN_PRIORITY;
+			*(uint8_t *)mnl_attr_get_payload(na_vlan_priority) =
+				conf.of_set_vlan_pcp->vlan_pcp;
+		}
+		++action;
+		break;
 	case END:
 		if (item->type != RTE_FLOW_ITEM_TYPE_END ||
 		    action->type != RTE_FLOW_ACTION_TYPE_END)
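[Editor's note: the OF_SET_VLAN_VID/OF_SET_VLAN_PCP handling above relies
on a placeholder trick worth spelling out: since these actions may appear
before or after the "vlan" TC action is assembled, the code always emits
TCA_VLAN_PAD attributes of the right payload size, remembers their
locations (na_vlan_id, na_vlan_priority) and rewrites type and payload in
place once the actual values are known. Below is a condensed sketch of the
VID half, using only libmnl calls found in the patch; the helper names are
hypothetical.]

  #include <stdint.h>
  #include <libmnl/libmnl.h>
  #include <linux/tc_act/tc_vlan.h>
  #include <rte_byteorder.h>

  /* Reserve a dummy u16 attribute and remember where it starts. */
  static int
  reserve_vlan_id(struct nlmsghdr *nlh, size_t size, struct nlattr **slot)
  {
  	*slot = mnl_nlmsg_get_payload_tail(nlh);
  	return mnl_attr_put_u16_check(nlh, size, TCA_VLAN_PAD, 0) ? 0 : -1;
  }

  /* Patch the reserved attribute in place once the VID is known. */
  static void
  override_vlan_id(struct nlattr *slot, rte_be16_t vlan_vid)
  {
  	slot->nla_type = TCA_VLAN_PUSH_VLAN_ID;
  	*(uint16_t *)mnl_attr_get_payload(slot) = rte_be_to_cpu_16(vlan_vid);
  }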

From patchwork Wed Jun 27 18:08:20 2018
X-Patchwork-Submitter: Adrien Mazarguil
X-Patchwork-Id: 41727
Date: Wed, 27 Jun 2018 20:08:20 +0200
From: Adrien Mazarguil
To: Shahaf Shuler
Cc: Nelio Laranjeiro, Yongseok Koh, dev@dpdk.org
Message-ID: <20180627173355.4718-7-adrien.mazarguil@6wind.com>
In-Reply-To: <20180627173355.4718-1-adrien.mazarguil@6wind.com>
Subject: [dpdk-dev] [PATCH 6/6] net/mlx5: add port ID pattern item to switch flow rules

This enables flow rules to match traffic coming from a different DPDK port
ID associated with the device (PORT_ID pattern item), mainly for the
convenience of applications that want to deal with a single port ID for
all flow rules associated with some physical device.

Testpmd example:

- Creating a flow rule on port ID 1 to consume all traffic from port ID 0
  and direct it to port ID 2:

  flow create 1 ingress transfer pattern port_id id is 0 / end actions
     port_id id 2 / end
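[Editor's note: as before, a minimal rte_flow sketch of the testpmd
example, not part of the patch; the port numbers follow the example and
the function name is illustrative.]

  #include <rte_flow.h>

  /* On port 1, consume all traffic coming from DPDK port 0 and direct it
   * to DPDK port 2, at the switch level. */
  static struct rte_flow *
  port0_to_port2(void)
  {
  	struct rte_flow_attr attr = { .ingress = 1, .transfer = 1 };
  	struct rte_flow_item_port_id from = { .id = 0 };
  	struct rte_flow_item pattern[] = {
  		{
  			.type = RTE_FLOW_ITEM_TYPE_PORT_ID,
  			.spec = &from,
  			.mask = &rte_flow_item_port_id_mask, /* "id is 0" */
  		},
  		{ .type = RTE_FLOW_ITEM_TYPE_END },
  	};
  	struct rte_flow_action_port_id to = { .id = 2 };
  	struct rte_flow_action actions[] = {
  		{ .type = RTE_FLOW_ACTION_TYPE_PORT_ID, .conf = &to },
  		{ .type = RTE_FLOW_ACTION_TYPE_END },
  	};
  	struct rte_flow_error error;

  	return rte_flow_create(1, &attr, pattern, actions, &error);
  }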
Signed-off-by: Adrien Mazarguil
Acked-by: Yongseok Koh
---
 drivers/net/mlx5/mlx5_nl_flow.c | 57 +++++++++++++++++++++++++++++++++++-
 1 file changed, 56 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5_nl_flow.c b/drivers/net/mlx5/mlx5_nl_flow.c
index a45d94fae..ad7a53d36 100644
--- a/drivers/net/mlx5/mlx5_nl_flow.c
+++ b/drivers/net/mlx5/mlx5_nl_flow.c
@@ -36,6 +36,7 @@ enum mlx5_nl_flow_trans {
 	ATTR,
 	PATTERN,
 	ITEM_VOID,
+	ITEM_PORT_ID,
 	ITEM_ETH,
 	ITEM_VLAN,
 	ITEM_IPV4,
@@ -56,7 +57,7 @@ enum mlx5_nl_flow_trans {
 #define TRANS(...) (const enum mlx5_nl_flow_trans []){ __VA_ARGS__, INVALID, }
 #define PATTERN_COMMON \
-	ITEM_VOID, ACTIONS
+	ITEM_VOID, ITEM_PORT_ID, ACTIONS
 #define ACTIONS_COMMON \
 	ACTION_VOID, ACTION_OF_POP_VLAN, ACTION_OF_PUSH_VLAN, \
 	ACTION_OF_SET_VLAN_VID, ACTION_OF_SET_VLAN_PCP
@@ -70,6 +71,7 @@ static const enum mlx5_nl_flow_trans *const mlx5_nl_flow_trans[] = {
 	[ATTR] = TRANS(PATTERN),
 	[PATTERN] = TRANS(ITEM_ETH, PATTERN_COMMON),
 	[ITEM_VOID] = TRANS(BACK),
+	[ITEM_PORT_ID] = TRANS(BACK),
 	[ITEM_ETH] = TRANS(ITEM_IPV4, ITEM_IPV6, ITEM_VLAN, PATTERN_COMMON),
 	[ITEM_VLAN] = TRANS(ITEM_IPV4, ITEM_IPV6, PATTERN_COMMON),
 	[ITEM_IPV4] = TRANS(ITEM_TCP, ITEM_UDP, PATTERN_COMMON),
@@ -89,6 +91,7 @@ static const enum mlx5_nl_flow_trans *const mlx5_nl_flow_trans[] = {

 /** Empty masks for known item types. */
 static const union {
+	struct rte_flow_item_port_id port_id;
 	struct rte_flow_item_eth eth;
 	struct rte_flow_item_vlan vlan;
 	struct rte_flow_item_ipv4 ipv4;
@@ -99,6 +102,7 @@ static const union {

 /** Supported masks for known item types. */
 static const struct {
+	struct rte_flow_item_port_id port_id;
 	struct rte_flow_item_eth eth;
 	struct rte_flow_item_vlan vlan;
 	struct rte_flow_item_ipv4 ipv4;
@@ -106,6 +110,9 @@ static const struct {
 	struct rte_flow_item_tcp tcp;
 	struct rte_flow_item_udp udp;
 } mlx5_nl_flow_mask_supported = {
+	.port_id = {
+		.id = 0xffffffff,
+	},
 	.eth = {
 		.type = RTE_BE16(0xffff),
 		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
@@ -260,6 +267,7 @@ mlx5_nl_flow_transpose(void *buf,
 	const struct rte_flow_action *action;
 	unsigned int n;
 	uint32_t act_index_cur;
+	bool in_port_id_set;
 	bool eth_type_set;
 	bool vlan_present;
 	bool vlan_eth_type_set;
@@ -278,6 +286,7 @@ mlx5_nl_flow_transpose(void *buf,
 	action = actions;
 	n = 0;
 	act_index_cur = 0;
+	in_port_id_set = false;
 	eth_type_set = false;
 	vlan_present = false;
 	vlan_eth_type_set = false;
@@ -291,6 +300,7 @@ mlx5_nl_flow_transpose(void *buf,
 trans:
 	switch (trans[n++]) {
 	union {
+		const struct rte_flow_item_port_id *port_id;
 		const struct rte_flow_item_eth *eth;
 		const struct rte_flow_item_vlan *vlan;
 		const struct rte_flow_item_ipv4 *ipv4;
@@ -392,6 +402,51 @@ mlx5_nl_flow_transpose(void *buf,
 			goto trans;
 		++item;
 		break;
+	case ITEM_PORT_ID:
+		if (item->type != RTE_FLOW_ITEM_TYPE_PORT_ID)
+			goto trans;
+		mask.port_id = mlx5_nl_flow_item_mask
+			(item, &rte_flow_item_port_id_mask,
+			 &mlx5_nl_flow_mask_supported.port_id,
+			 &mlx5_nl_flow_mask_empty.port_id,
+			 sizeof(mlx5_nl_flow_mask_supported.port_id), error);
+		if (!mask.port_id)
+			return -rte_errno;
+		if (mask.port_id == &mlx5_nl_flow_mask_empty.port_id) {
+			in_port_id_set = 1;
+			++item;
+			break;
+		}
+		spec.port_id = item->spec;
+		if (mask.port_id->id && mask.port_id->id != 0xffffffff)
+			return rte_flow_error_set
+				(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM_MASK,
+				 mask.port_id,
+				 "no support for partial mask on"
+				 " \"id\" field");
+		if (!mask.port_id->id)
+			i = 0;
+		else
+			for (i = 0; ptoi[i].ifindex; ++i)
+				if (ptoi[i].port_id == spec.port_id->id)
+					break;
+		if (!ptoi[i].ifindex)
+			return rte_flow_error_set
+				(error, ENODEV, RTE_FLOW_ERROR_TYPE_ITEM_SPEC,
+				 spec.port_id,
+				 "missing data to convert port ID to ifindex");
+		tcm = mnl_nlmsg_get_payload(buf);
+		if (in_port_id_set &&
+		    ptoi[i].ifindex != (unsigned int)tcm->tcm_ifindex)
+			return rte_flow_error_set
+				(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM_SPEC,
+				 spec.port_id,
+				 "cannot match traffic for several port IDs"
+				 " through a single flow rule");
+		tcm->tcm_ifindex = ptoi[i].ifindex;
+		in_port_id_set = 1;
+		++item;
+		break;
 	case ITEM_ETH:
 		if (item->type != RTE_FLOW_ITEM_TYPE_ETH)
 			goto trans;