From patchwork Thu Jul 2 12:53:41 2020
X-Patchwork-Submitter: Bing Zhao
X-Patchwork-Id: 72829
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Bing Zhao
To: orika@mellanox.com, john.mcnamara@intel.com, marko.kovacevic@intel.com,
 thomas@monjalon.net, ferruh.yigit@intel.com, arybchenko@solarflare.com,
 olivier.matz@6wind.com
Cc: dev@dpdk.org, wenzhuo.lu@intel.com, beilei.xing@intel.com,
 bernard.iremonger@intel.com
Date: Thu, 2 Jul 2020 20:53:41 +0800
Message-Id: <1593694422-299952-2-git-send-email-bingz@mellanox.com>
X-Mailer: git-send-email 2.5.5
In-Reply-To: <1593694422-299952-1-git-send-email-bingz@mellanox.com>
References: <1593672361-285288-1-git-send-email-bingz@mellanox.com>
 <1593694422-299952-1-git-send-email-bingz@mellanox.com>
Subject: [dpdk-dev] [PATCH v3 1/2] rte_flow: add eCPRI key fields to flow API

Add a new item "rte_flow_item_ecpri" in order to match the eCPRI header.

eCPRI is a packet-based protocol used in the fronthaul interface of
5G networks. The header format definition can be found in the
specification via the link below:
https://www.gigalight.com/downloads/standards/ecpri-specification.pdf

An eCPRI message can be carried over the Ethernet layer (.1Q is
supported as well) or over the UDP layer. The message header format is
the same in these two variants.

Signed-off-by: Bing Zhao
Acked-by: Ori Kam
---
 doc/guides/prog_guide/rte_flow.rst |   8 ++
 lib/librte_ethdev/rte_flow.c       |   1 +
 lib/librte_ethdev/rte_flow.h       |  31 +++++++
 lib/librte_net/Makefile            |   1 +
 lib/librte_net/meson.build         |   3 +-
 lib/librte_net/rte_ecpri.h         | 163 +++++++++++++++++++++++++++++++++++++
 lib/librte_net/rte_ether.h         |   1 +
 7 files changed, 207 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_net/rte_ecpri.h

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index d5dd18c..669d519 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1362,6 +1362,14 @@ Matches a PFCP Header.
 - ``seid``: session endpoint identifier.
 - Default ``mask`` matches s_field and seid.
 
+Item: ``ECPRI``
+^^^^^^^^^^^^^^^
+
+Matches an eCPRI header.
+
+- ``hdr``: eCPRI header definition (``rte_ecpri.h``).
+- Default ``mask`` matches the message type of the common header only.
+ Actions ~~~~~~~ diff --git a/lib/librte_ethdev/rte_flow.c b/lib/librte_ethdev/rte_flow.c index 1685be5..f8fdd68 100644 --- a/lib/librte_ethdev/rte_flow.c +++ b/lib/librte_ethdev/rte_flow.c @@ -95,6 +95,7 @@ static const struct rte_flow_desc_data rte_flow_desc_item[] = { MK_FLOW_ITEM(HIGIG2, sizeof(struct rte_flow_item_higig2_hdr)), MK_FLOW_ITEM(L2TPV3OIP, sizeof(struct rte_flow_item_l2tpv3oip)), MK_FLOW_ITEM(PFCP, sizeof(struct rte_flow_item_pfcp)), + MK_FLOW_ITEM(ECPRI, sizeof(struct rte_flow_item_ecpri)), }; /** Generate flow_action[] entry. */ diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h index b0e4199..8a90226 100644 --- a/lib/librte_ethdev/rte_flow.h +++ b/lib/librte_ethdev/rte_flow.h @@ -28,6 +28,7 @@ #include #include #include +#include #include #include @@ -527,6 +528,15 @@ enum rte_flow_item_type { */ RTE_FLOW_ITEM_TYPE_PFCP, + /** + * Matches eCPRI Header. + * + * Configure flow for eCPRI over ETH or UDP packets. + * + * See struct rte_flow_item_ecpri. + */ + RTE_FLOW_ITEM_TYPE_ECPRI, + }; /** @@ -1547,6 +1557,27 @@ static const struct rte_flow_item_pfcp rte_flow_item_pfcp_mask = { #endif /** + * @warning + * @b EXPERIMENTAL: this structure may change without prior notice + * + * RTE_FLOW_ITEM_TYPE_ECPRI + * + * Match eCPRI Header + */ +struct rte_flow_item_ecpri { + struct rte_ecpri_msg_hdr hdr; +}; + +/** Default mask for RTE_FLOW_ITEM_TYPE_ECPRI. */ +#ifndef __cplusplus +static const struct rte_flow_item_ecpri rte_flow_item_ecpri_mask = { + .hdr = { + .dw0 = 0x0, + }, +}; +#endif + +/** * Matching pattern item definition. * * A pattern is formed by stacking items starting from the lowest protocol diff --git a/lib/librte_net/Makefile b/lib/librte_net/Makefile index aa1d6fe..9830e77 100644 --- a/lib/librte_net/Makefile +++ b/lib/librte_net/Makefile @@ -20,5 +20,6 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_NET)-include += rte_sctp.h rte_icmp.h rte_arp.h SYMLINK-$(CONFIG_RTE_LIBRTE_NET)-include += rte_ether.h rte_gre.h rte_net.h SYMLINK-$(CONFIG_RTE_LIBRTE_NET)-include += rte_net_crc.h rte_mpls.h rte_higig.h SYMLINK-$(CONFIG_RTE_LIBRTE_NET)-include += rte_gtp.h rte_vxlan.h +SYMLINK-$(CONFIG_RTE_LIBRTE_NET)-include += rte_ecpri.h include $(RTE_SDK)/mk/rte.lib.mk diff --git a/lib/librte_net/meson.build b/lib/librte_net/meson.build index f799349..24ed825 100644 --- a/lib/librte_net/meson.build +++ b/lib/librte_net/meson.build @@ -15,7 +15,8 @@ headers = files('rte_ip.h', 'rte_net.h', 'rte_net_crc.h', 'rte_mpls.h', - 'rte_higig.h') + 'rte_higig.h', + 'rte_ecpri.h') sources = files('rte_arp.c', 'rte_ether.c', 'rte_net.c', 'rte_net_crc.c') deps += ['mbuf'] diff --git a/lib/librte_net/rte_ecpri.h b/lib/librte_net/rte_ecpri.h new file mode 100644 index 0000000..31974b2 --- /dev/null +++ b/lib/librte_net/rte_ecpri.h @@ -0,0 +1,163 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2020 Mellanox Technologies, Ltd + */ + +#ifndef _RTE_ECPRI_H_ +#define _RTE_ECPRI_H_ + +#include +#include + +#ifdef __cplusplus +extern "C" { +#endif + +/** + * eCPRI Protocol Revision 1.0, 1.1, 1.2, 2.0: 0001b + * Other values are reserved for future + */ +#define RTE_ECPRI_REV_UPTO_20 1 + +/** + * eCPRI message types in specifications + * IWF* types will only be supported from rev.2 + */ +#define RTE_ECPRI_MSG_TYPE_IQ_DATA 0 +#define RTE_ECPRI_MSG_TYPE_BIT_SEQ 1 +#define RTE_ECPRI_MSG_TYPE_RTC_CTRL 2 +#define RTE_ECPRI_MSG_TYPE_GEN_DATA 3 +#define RTE_ECPRI_MSG_TYPE_RM_ACC 4 +#define RTE_ECPRI_MSG_TYPE_DLY_MSR 5 +#define RTE_ECPRI_MSG_TYPE_RMT_RST 6 +#define RTE_ECPRI_MSG_TYPE_EVT_IND 
7 +#define RTE_ECPRI_MSG_TYPE_IWF_UP 8 +#define RTE_ECPRI_MSG_TYPE_IWF_OPT 9 +#define RTE_ECPRI_MSG_TYPE_IWF_MAP 10 +#define RTE_ECPRI_MSG_TYPE_IWF_DCTRL 11 + +/** + * eCPRI Common Header + */ +RTE_STD_C11 +struct rte_ecpri_common_hdr { +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + uint32_t size:16; /**< Payload Size */ + uint32_t type:8; /**< Message Type */ + uint32_t c:1; /**< Concatenation Indicator */ + uint32_t res:3; /**< Reserved */ + uint32_t revision:4; /**< Protocol Revision */ +#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint32_t revision:4; /**< Protocol Revision */ + uint32_t res:3; /**< Reserved */ + uint32_t c:1; /**< Concatenation Indicator */ + uint32_t type:8; /**< Message Type */ + uint32_t size:16; /**< Payload Size */ +#endif +} __rte_packed; + +/** + * eCPRI Message Header of Type #0: IQ Data + */ +struct rte_ecpri_msg_iq_data { + rte_be16_t pc_id; /**< Physical channel ID */ + rte_be16_t seq_id; /**< Sequence ID */ +}; + +/** + * eCPRI Message Header of Type #1: Bit Sequence + */ +struct rte_ecpri_msg_bit_seq { + rte_be16_t pc_id; /**< Physical channel ID */ + rte_be16_t seq_id; /**< Sequence ID */ +}; + +/** + * eCPRI Message Header of Type #2: Real-Time Control Data + */ +struct rte_ecpri_msg_rtc_ctrl { + rte_be16_t rtc_id; /**< Real-Time Control Data ID */ + rte_be16_t seq_id; /**< Sequence ID */ +}; + +/** + * eCPRI Message Header of Type #3: Generic Data Transfer + */ +struct rte_ecpri_msg_gen_data { + rte_be32_t pc_id; /**< Physical channel ID */ + rte_be32_t seq_id; /**< Sequence ID */ +}; + +/** + * eCPRI Message Header of Type #4: Remote Memory Access + */ +RTE_STD_C11 +struct rte_ecpri_msg_rm_access { +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + uint32_t ele_id:16; /**< Element ID */ + uint32_t rr:4; /**< Req/Resp */ + uint32_t rw:4; /**< Read/Write */ + uint32_t rma_id:8; /**< Remote Memory Access ID */ +#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint32_t rma_id:8; /**< Remote Memory Access ID */ + uint32_t rw:4; /**< Read/Write */ + uint32_t rr:4; /**< Req/Resp */ + uint32_t ele_id:16; /**< Element ID */ +#endif + rte_be16_t addr_m; /**< 48-bits address (16 MSB) */ + rte_be32_t addr_l; /**< 48-bits address (32 LSB) */ + rte_be16_t length; /**< number of bytes */ +} __rte_packed; + +/** + * eCPRI Message Header of Type #5: One-Way Delay Measurement + */ +struct rte_ecpri_msg_delay_measure { + uint8_t msr_id; /**< Measurement ID */ + uint8_t act_type; /**< Action Type */ +}; + +/** + * eCPRI Message Header of Type #6: Remote Reset + */ +struct rte_ecpri_msg_remote_reset { + uint8_t msr_id; /**< Measurement ID */ + uint8_t act_type; /**< Action Type */ +}; + +/** + * eCPRI Message Header of Type #7: Event Indication + */ +struct rte_ecpri_msg_event_ind { + uint8_t evt_id; /**< Event ID */ + uint8_t evt_type; /**< Event Type */ + uint8_t seq; /**< Sequence Number */ + uint8_t number; /**< Number of Faults/Notif */ +}; + +/** + * eCPRI Message Header Format: Common Header + Message Types + */ +RTE_STD_C11 +struct rte_ecpri_msg_hdr { + union { + struct rte_ecpri_common_hdr common; + uint32_t dw0; + }; + union { + struct rte_ecpri_msg_iq_data type0; + struct rte_ecpri_msg_bit_seq type1; + struct rte_ecpri_msg_rtc_ctrl type2; + struct rte_ecpri_msg_bit_seq type3; + struct rte_ecpri_msg_rm_access type4; + struct rte_ecpri_msg_delay_measure type5; + struct rte_ecpri_msg_remote_reset type6; + struct rte_ecpri_msg_event_ind type7; + uint32_t dummy[3]; + }; +}; + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_ECPRI_H_ */ diff --git a/lib/librte_net/rte_ether.h 
b/lib/librte_net/rte_ether.h
index 0ae4e75..184a3f9 100644
--- a/lib/librte_net/rte_ether.h
+++ b/lib/librte_net/rte_ether.h
@@ -304,6 +304,7 @@ struct rte_vlan_hdr {
 #define RTE_ETHER_TYPE_LLDP 0x88CC /**< LLDP Protocol. */
 #define RTE_ETHER_TYPE_MPLS 0x8847 /**< MPLS ethertype. */
 #define RTE_ETHER_TYPE_MPLSM 0x8848 /**< MPLS multicast ethertype. */
+#define RTE_ETHER_TYPE_ECPRI 0xAEFE /**< eCPRI ethertype (.1Q supported). */
 
 /**
  * Extract VLAN tag information into mbuf

From patchwork Thu Jul 2 12:53:42 2020
X-Patchwork-Submitter: Bing Zhao
X-Patchwork-Id: 72830
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Bing Zhao
To: orika@mellanox.com, john.mcnamara@intel.com, marko.kovacevic@intel.com,
 thomas@monjalon.net, ferruh.yigit@intel.com, arybchenko@solarflare.com,
 olivier.matz@6wind.com
Cc: dev@dpdk.org, wenzhuo.lu@intel.com, beilei.xing@intel.com,
 bernard.iremonger@intel.com
Date: Thu, 2 Jul 2020 20:53:42 +0800
Message-Id: <1593694422-299952-3-git-send-email-bingz@mellanox.com>
X-Mailer: git-send-email 2.5.5
In-Reply-To: <1593694422-299952-1-git-send-email-bingz@mellanox.com>
References: <1593672361-285288-1-git-send-email-bingz@mellanox.com>
 <1593694422-299952-1-git-send-email-bingz@mellanox.com>
Subject: [dpdk-dev] [PATCH v3 2/2] app/testpmd: add eCPRI in flow creation patterns

In order to verify offloading of the eCPRI protocol via flow rules, the
flow creation command line should support parsing of the eCPRI pattern.

Based on the specification, an eCPRI message consists of a common header
and a payload. The payload format varies according to the type field of
the common header. Fixed strings are used instead of integer values to
make the CLI easy to auto-complete.

The testpmd command line examples of flows matching the eCPRI item are
listed below:

1. flow create 0 ... pattern eth / ecpri / end actions ...
   This matches all eCPRI messages.

2. flow create 0 ... pattern eth / ecpri common type rtc_ctrl / end actions ...
   This matches all eCPRI messages with type #2 - "Real-Time Control Data".

3. flow create 0 ... pattern eth / ecpri common type iq_data pc_id is [U16Int] / end actions ...
   This matches eCPRI messages with type #0 - "IQ Data" whose physical
   channel ID 'pc_id' is a specific value. Since the sequence ID changes
   from message to message, there is no need to match that field in the
   flow.

Currently, only types #0, #2 and #5 are supported. Since eCPRI can be
carried over the Ethernet layer (or after .1Q) or over the UDP layer, it
is the PMD's responsibility to check whether eCPRI is supported at all
and which protocol stack is supported. Network byte order is used for
the eCPRI header, the same as for other headers.
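For reference, the third CLI example above corresponds roughly to the
following use of the rte_flow C API. This is an editor's sketch rather than
part of the patch: the helper name, the queue action and queue index 0 are
illustrative only, error handling is omitted, and the port is assumed to be
configured and started already.

#include <stdint.h>
#include <rte_byteorder.h>
#include <rte_ecpri.h>
#include <rte_flow.h>

/* Match eCPRI "IQ Data" (type #0) messages with a given physical channel ID
 * and steer them to Rx queue 0 of the given port. */
static struct rte_flow *
ecpri_iq_data_flow(uint16_t port_id, uint16_t pc_id)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_ecpri spec = { .hdr = { .dw0 = 0 } };
	struct rte_flow_item_ecpri mask = { .hdr = { .dw0 = 0 } };
	struct rte_flow_action_queue queue = { .index = 0 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_ECPRI,
		  .spec = &spec, .mask = &mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;

	spec.hdr.common.type = RTE_ECPRI_MSG_TYPE_IQ_DATA;
	spec.hdr.type0.pc_id = rte_cpu_to_be_16(pc_id);
	mask.hdr.common.type = 0xff;
	mask.hdr.type0.pc_id = rte_cpu_to_be_16(0xffff);
	/* As in the testpmd parser below, the first 32-bit word of the
	 * common header is handed to the PMD in network byte order. */
	spec.hdr.dw0 = rte_cpu_to_be_32(spec.hdr.dw0);
	mask.hdr.dw0 = rte_cpu_to_be_32(mask.hdr.dw0);

	return rte_flow_create(port_id, &attr, pattern, actions, &error);
}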
Signed-off-by: Bing Zhao Acked-by: Ori Kam --- app/test-pmd/cmdline_flow.c | 143 ++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 143 insertions(+) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 4e2006c..801581e 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -230,6 +230,15 @@ enum index { ITEM_PFCP, ITEM_PFCP_S_FIELD, ITEM_PFCP_SEID, + ITEM_ECPRI, + ITEM_ECPRI_COMMON, + ITEM_ECPRI_COMMON_TYPE, + ITEM_ECPRI_COMMON_TYPE_IQ_DATA, + ITEM_ECPRI_COMMON_TYPE_RTC_CTRL, + ITEM_ECPRI_COMMON_TYPE_DLY_MSR, + ITEM_ECPRI_MSG_IQ_DATA_PCID, + ITEM_ECPRI_MSG_RTC_CTRL_RTCID, + ITEM_ECPRI_MSG_DLY_MSR_MSRID, /* Validate/create actions. */ ACTIONS, @@ -791,6 +800,7 @@ static const enum index next_item[] = { ITEM_ESP, ITEM_AH, ITEM_PFCP, + ITEM_ECPRI, END_SET, ZERO, }; @@ -1101,6 +1111,24 @@ static const enum index item_l2tpv3oip[] = { ZERO, }; +static const enum index item_ecpri[] = { + ITEM_ECPRI_COMMON, + ITEM_NEXT, + ZERO, +}; + +static const enum index item_ecpri_common[] = { + ITEM_ECPRI_COMMON_TYPE, + ZERO, +}; + +static const enum index item_ecpri_common_type[] = { + ITEM_ECPRI_COMMON_TYPE_IQ_DATA, + ITEM_ECPRI_COMMON_TYPE_RTC_CTRL, + ITEM_ECPRI_COMMON_TYPE_DLY_MSR, + ZERO, +}; + static const enum index next_action[] = { ACTION_END, ACTION_VOID, @@ -1409,6 +1437,9 @@ static int parse_vc_spec(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); static int parse_vc_conf(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); +static int parse_vc_item_ecpri_type(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); static int parse_vc_action_rss(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -2802,6 +2833,66 @@ static const struct token token_list[] = { .next = NEXT(item_pfcp, NEXT_ENTRY(UNSIGNED), item_param), .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_pfcp, seid)), }, + [ITEM_ECPRI] = { + .name = "ecpri", + .help = "match eCPRI header", + .priv = PRIV_ITEM(ECPRI, sizeof(struct rte_flow_item_ecpri)), + .next = NEXT(item_ecpri), + .call = parse_vc, + }, + [ITEM_ECPRI_COMMON] = { + .name = "common", + .help = "eCPRI common header", + .next = NEXT(item_ecpri_common), + }, + [ITEM_ECPRI_COMMON_TYPE] = { + .name = "type", + .help = "type of common header", + .next = NEXT(item_ecpri_common_type), + .args = ARGS(ARG_ENTRY_HTON(struct rte_flow_item_ecpri)), + }, + [ITEM_ECPRI_COMMON_TYPE_IQ_DATA] = { + .name = "iq_data", + .help = "Type #0: IQ Data", + .next = NEXT(NEXT_ENTRY(ITEM_ECPRI_MSG_IQ_DATA_PCID, + ITEM_NEXT)), + .call = parse_vc_item_ecpri_type, + }, + [ITEM_ECPRI_MSG_IQ_DATA_PCID] = { + .name = "pc_id", + .help = "Physical Channel ID", + .next = NEXT(item_ecpri, NEXT_ENTRY(UNSIGNED), item_param), + .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ecpri, + hdr.type0.pc_id)), + }, + [ITEM_ECPRI_COMMON_TYPE_RTC_CTRL] = { + .name = "rtc_ctrl", + .help = "Type #2: Real-Time Control Data", + .next = NEXT(NEXT_ENTRY(ITEM_ECPRI_MSG_RTC_CTRL_RTCID, + ITEM_NEXT)), + .call = parse_vc_item_ecpri_type, + }, + [ITEM_ECPRI_MSG_RTC_CTRL_RTCID] = { + .name = "rtc_id", + .help = "Real-Time Control Data ID", + .next = NEXT(item_ecpri, NEXT_ENTRY(UNSIGNED), item_param), + .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ecpri, + hdr.type2.rtc_id)), + }, + [ITEM_ECPRI_COMMON_TYPE_DLY_MSR] = { + .name = "delay_measure", + .help = "Type #5: One-Way Delay Measurement", + .next = 
NEXT(NEXT_ENTRY(ITEM_ECPRI_MSG_DLY_MSR_MSRID, + ITEM_NEXT)), + .call = parse_vc_item_ecpri_type, + }, + [ITEM_ECPRI_MSG_DLY_MSR_MSRID] = { + .name = "msr_id", + .help = "Measurement ID", + .next = NEXT(item_ecpri, NEXT_ENTRY(UNSIGNED), item_param), + .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ecpri, + hdr.type5.msr_id)), + }, /* Validate/create actions. */ [ACTIONS] = { .name = "actions", @@ -4124,6 +4215,58 @@ parse_vc_conf(struct context *ctx, const struct token *token, return len; } +/** Parse eCPRI common header type field. */ +static int +parse_vc_item_ecpri_type(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct rte_flow_item_ecpri *ecpri; + struct rte_flow_item_ecpri *ecpri_mask; + struct rte_flow_item *item; + uint32_t data_size; + uint8_t msg_type; + struct buffer *out = buf; + const struct arg *arg; + + (void)size; + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + switch (ctx->curr) { + case ITEM_ECPRI_COMMON_TYPE_IQ_DATA: + msg_type = RTE_ECPRI_MSG_TYPE_IQ_DATA; + break; + case ITEM_ECPRI_COMMON_TYPE_RTC_CTRL: + msg_type = RTE_ECPRI_MSG_TYPE_RTC_CTRL; + break; + case ITEM_ECPRI_COMMON_TYPE_DLY_MSR: + msg_type = RTE_ECPRI_MSG_TYPE_DLY_MSR; + break; + default: + return -1; + } + if (!ctx->object) + return len; + arg = pop_args(ctx); + if (!arg) + return -1; + ecpri = (struct rte_flow_item_ecpri *)out->args.vc.data; + ecpri->hdr.common.type = msg_type; + data_size = ctx->objdata / 3; /* spec, last, mask */ + ecpri_mask = (struct rte_flow_item_ecpri *)(out->args.vc.data + + (data_size * 2)); + ecpri_mask->hdr.common.type = 0xFF; + if (arg->hton) { + ecpri->hdr.dw0 = rte_cpu_to_be_32(ecpri->hdr.dw0); + ecpri_mask->hdr.dw0 = rte_cpu_to_be_32(ecpri_mask->hdr.dw0); + } + item = &out->args.vc.pattern[out->args.vc.pattern_n - 1]; + item->spec = ecpri; + item->mask = ecpri_mask; + return len; +} + /** Parse RSS action. */ static int parse_vc_action_rss(struct context *ctx, const struct token *token,
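/*
 * Editor's note (not part of the patch): the dw0 conversion done in
 * parse_vc_item_ecpri_type() above is needed because the
 * rte_ecpri_common_hdr bitfields are defined in host byte order, while the
 * eCPRI common header is carried on the wire in network (big-endian) byte
 * order. A minimal sketch of reading the message type from a received
 * buffer - the function name is illustrative only - could look as follows:
 */
#include <stdint.h>
#include <string.h>
#include <rte_byteorder.h>
#include <rte_ecpri.h>

static inline uint8_t
ecpri_peek_msg_type(const void *ecpri_hdr_start)
{
	struct rte_ecpri_common_hdr hdr;
	uint32_t dw0;

	/* Copy the first dword of the header and convert it to host byte
	 * order so that the bitfield layout in rte_ecpri.h lines up. */
	memcpy(&dw0, ecpri_hdr_start, sizeof(dw0));
	dw0 = rte_be_to_cpu_32(dw0);
	memcpy(&hdr, &dw0, sizeof(hdr));
	return hdr.type;
}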