From patchwork Thu Feb 15 06:21:03 2024
X-Patchwork-Submitter: Tyler Retzlaff
X-Patchwork-Id: 136801
X-Patchwork-Delegate: thomas@monjalon.net
List-Id: DPDK patches and discussions
From: Tyler Retzlaff
To: dev@dpdk.org
Cc: Ajit Khaparde, Andrew Boyer, Andrew Rybchenko, Bruce Richardson,
 Chenbo Xia, Chengwen Feng, Dariusz Sosnowski, David Christensen,
 Hyong Youb Kim, Jerin Jacob, Jie Hai, Jingjing Wu, John Daley,
 Kevin Laatz, Kiran Kumar K, Konstantin Ananyev, Maciej Czekaj,
 Matan Azrad, Maxime Coquelin, Nithin Dabilpuram, Ori Kam,
 Ruifeng Wang, Satha Rao, Somnath Kotur, Suanming Mou,
 Sunil Kumar Kori, Viacheslav Ovsiienko, Yisen Zhuang, Yuying Zhang,
 mb@smartsharesystems.com, Tyler Retzlaff
Subject: [PATCH v4 01/18] mbuf: deprecate GCC marker in rte mbuf struct
Date: Wed, 14 Feb 2024 22:21:03 -0800
Message-Id: <1707978080-28859-2-git-send-email-roretzla@linux.microsoft.com>
In-Reply-To: <1707978080-28859-1-git-send-email-roretzla@linux.microsoft.com>
References: <1706657173-26166-1-git-send-email-roretzla@linux.microsoft.com>
 <1707978080-28859-1-git-send-email-roretzla@linux.microsoft.com>

Provide a macro that allows conditional expansion of RTE_MARKER fields
to empty, to allow rte_mbuf to be used with MSVC. It is proposed that we
announce the fields to be __rte_deprecated (currently disabled).

Introduce C11 anonymous unions to permit aliasing of well-known offsets
by name into the rte_mbuf structure by a *new* name and to provide
padding for cache alignment.

Signed-off-by: Tyler Retzlaff
Acked-by: Chengwen Feng
---
 doc/guides/rel_notes/deprecation.rst |  20 ++
 lib/eal/include/rte_common.h         |   6 +
 lib/mbuf/rte_mbuf_core.h             | 375 +++++++++++++++++++----------------
 3 files changed, 233 insertions(+), 168 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 10630ba..8594255 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -17,6 +17,26 @@ Other API and ABI deprecation notices are to be posted below.
 Deprecation Notices
 -------------------
 
+* mbuf: Named zero sized fields of type ``RTE_MARKER`` and ``RTE_MARKER64``
+  will be removed from ``struct rte_mbuf`` and replaced with new fields
+  in anonymous unions.
+
+  The names of the fields impacted are:
+
+    old name                    new name
+
+    ``cacheline0``              ``mbuf_cacheline0``
+    ``rearm_data``              ``mbuf_rearm_data``
+    ``rx_descriptor_fields1``   ``mbuf_rx_descriptor_fields1``
+    ``cacheline1``              ``mbuf_cacheline1``
+
+  Contributions to DPDK should immediately start using the new names;
+  applications should adapt to the new names as soon as possible, as the
+  old names will be removed in a future DPDK release.
+
+  Note: the types of the new names are not API compatible with the old,
+  and some code conversion is required to maintain correct behavior.
+
 * build: The ``enable_kmods`` option is deprecated and will be removed in a future release.
   Setting/clearing the option has no impact on the build.
   Instead, kernel modules will be always built for OS's where out-of-tree kernel modules
diff --git a/lib/eal/include/rte_common.h b/lib/eal/include/rte_common.h
index d7d6390..af73f67 100644
--- a/lib/eal/include/rte_common.h
+++ b/lib/eal/include/rte_common.h
@@ -582,6 +582,12 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
 /** Marker for 8B alignment in a structure. */
 __extension__ typedef uint64_t RTE_MARKER64[0];
 
+#define __rte_marker(type, name) type name /* __rte_deprecated */
+
+#else
+
+#define __rte_marker(type, name)
+
 #endif
 
 /*********** Macros for calculating min and max **********/
diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index 5688683..9e9590b 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -16,7 +16,10 @@
  * New fields and flags should fit in the "dynamic space".
  */
 
+#include
+#include
 #include
+#include
 
 #include
 #include
@@ -464,204 +467,240 @@ enum {
  * The generic rte_mbuf, containing a packet mbuf.
  */
 struct rte_mbuf {
-	RTE_MARKER cacheline0;
-
-	void *buf_addr;	/**< Virtual address of segment buffer. */
+	__rte_marker(RTE_MARKER, cacheline0);
+	union {
+		char mbuf_cacheline0[RTE_CACHE_LINE_MIN_SIZE];
+		__extension__
+		struct {
+			void *buf_addr; /**< Virtual address of segment buffer. */
 #if RTE_IOVA_IN_MBUF
-	/**
-	 * Physical address of segment buffer.
-	 * This field is undefined if the build is configured to use only
-	 * virtual address as IOVA (i.e. RTE_IOVA_IN_MBUF is 0).
-	 * Force alignment to 8-bytes, so as to ensure we have the exact
-	 * same mbuf cacheline0 layout for 32-bit and 64-bit. This makes
-	 * working on vector drivers easier.
-	 */
-	rte_iova_t buf_iova __rte_aligned(sizeof(rte_iova_t));
+			/**
+			 * Physical address of segment buffer.
+			 * This field is undefined if the build is configured to use only
+			 * virtual address as IOVA (i.e. RTE_IOVA_IN_MBUF is 0).
+			 * Force alignment to 8-bytes, so as to ensure we have the exact
+			 * same mbuf cacheline0 layout for 32-bit and 64-bit. This makes
+			 * working on vector drivers easier.
+			 */
+			rte_iova_t buf_iova __rte_aligned(sizeof(rte_iova_t));
#else
-	/**
-	 * Next segment of scattered packet.
-	 * This field is valid when physical address field is undefined.
-	 * Otherwise next pointer in the second cache line will be used.
-	 */
-	struct rte_mbuf *next;
+			/**
+			 * Next segment of scattered packet.
+			 * This field is valid when physical address field is undefined.
+			 * Otherwise next pointer in the second cache line will be used.
+			 */
+			struct rte_mbuf *next;
 #endif
 
-	/* next 8 bytes are initialised on RX descriptor rearm */
-	RTE_MARKER64 rearm_data;
-	uint16_t data_off;
-
-	/**
-	 * Reference counter. Its size should at least equal to the size
-	 * of port field (16 bits), to support zero-copy broadcast.
-	 * It should only be accessed using the following functions:
-	 * rte_mbuf_refcnt_update(), rte_mbuf_refcnt_read(), and
-	 * rte_mbuf_refcnt_set(). The functionality of these functions (atomic,
-	 * or non-atomic) is controlled by the RTE_MBUF_REFCNT_ATOMIC flag.
-	 */
-	RTE_ATOMIC(uint16_t) refcnt;
-
-	/**
-	 * Number of segments. Only valid for the first segment of an mbuf
-	 * chain.
-	 */
-	uint16_t nb_segs;
-
-	/** Input port (16 bits to support more than 256 virtual ports).
-	 * The event eth Tx adapter uses this field to specify the output port.
-	 */
-	uint16_t port;
-
-	uint64_t ol_flags;	/**< Offload features. */
+			/* next 8 bytes are initialised on RX descriptor rearm */
+			__rte_marker(RTE_MARKER64, rearm_data);
+			union {
+				char mbuf_rearm_data[8];
+				__extension__
+				struct {
+					uint16_t data_off;
+
+					/**
+					 * Reference counter. Its size should at least equal to the
+					 * size of port field (16 bits), to support zero-copy
+					 * broadcast.
+					 * It should only be accessed using the following
+					 * functions:
+					 * rte_mbuf_refcnt_update(), rte_mbuf_refcnt_read(), and
+					 * rte_mbuf_refcnt_set(). The functionality of these
+					 * functions (atomic, or non-atomic) is controlled by the
+					 * RTE_MBUF_REFCNT_ATOMIC flag.
+					 */
+					RTE_ATOMIC(uint16_t) refcnt;
+
+					/**
+					 * Number of segments. Only valid for the first segment of
+					 * an mbuf chain.
+					 */
+					uint16_t nb_segs;
+
+					/**
+					 * Input port (16 bits to support more than 256 virtual
+					 * ports). The event eth Tx adapter uses this field to
+					 * specify the output port.
+					 */
+					uint16_t port;
+				};
+			};
 
-	/* remaining bytes are set on RX when pulling packet from descriptor */
-	RTE_MARKER rx_descriptor_fields1;
+			uint64_t ol_flags; /**< Offload features. */
 
-	/*
-	 * The packet type, which is the combination of outer/inner L2, L3, L4
-	 * and tunnel types. The packet_type is about data really present in the
-	 * mbuf. Example: if vlan stripping is enabled, a received vlan packet
-	 * would have RTE_PTYPE_L2_ETHER and not RTE_PTYPE_L2_VLAN because the
-	 * vlan is stripped from the data.
-	 */
-	union {
-		uint32_t packet_type; /**< L2/L3/L4 and tunnel information. */
-		__extension__
-		struct {
-			uint8_t l2_type:4;   /**< (Outer) L2 type. */
-			uint8_t l3_type:4;   /**< (Outer) L3 type. */
-			uint8_t l4_type:4;   /**< (Outer) L4 type. */
-			uint8_t tun_type:4;  /**< Tunnel type. */
-			union {
-				uint8_t inner_esp_next_proto;
-				/**< ESP next protocol type, valid if
-				 * RTE_PTYPE_TUNNEL_ESP tunnel type is set
-				 * on both Tx and Rx.
-				 */
-				__extension__
-				struct {
-					uint8_t inner_l2_type:4;
-					/**< Inner L2 type. */
-					uint8_t inner_l3_type:4;
-					/**< Inner L3 type. */
-				};
-			};
-			uint8_t inner_l4_type:4; /**< Inner L4 type. */
-		};
-	};
-	uint32_t pkt_len;	/**< Total pkt len: sum of all segments. */
-	uint16_t data_len;	/**< Amount of data in segment buffer. */
-	/** VLAN TCI (CPU order), valid if RTE_MBUF_F_RX_VLAN is set. */
-	uint16_t vlan_tci;
+			/* remaining bytes are set on RX when pulling packet from descriptor */
+			__rte_marker(RTE_MARKER, rx_descriptor_fields1);
+			union {
+				char mbuf_rx_descriptor_fields1[8];
+				__extension__
+				struct {
+					/*
+					 * The packet type, which is the combination of outer/inner
+					 * L2, L3, L4 and tunnel types. The packet_type is about
+					 * data really present in the mbuf. Example: if vlan
+					 * stripping is enabled, a received vlan packet would have
+					 * RTE_PTYPE_L2_ETHER and not RTE_PTYPE_L2_VLAN because the
+					 * vlan is stripped from the data.
+					 */
+					union {
+						uint32_t packet_type;
+						/**< L2/L3/L4 and tunnel information. */
+						__extension__
+						struct {
+							uint8_t l2_type:4;
+							/**< (Outer) L2 type. */
+							uint8_t l3_type:4;
+							/**< (Outer) L3 type. */
+							uint8_t l4_type:4;
+							/**< (Outer) L4 type. */
+							uint8_t tun_type:4;
+							/**< Tunnel type. */
+							union {
+								uint8_t inner_esp_next_proto;
+								/**< ESP next protocol type, valid
+								 * if RTE_PTYPE_TUNNEL_ESP tunnel
+								 * type is set on both Tx and Rx.
+								 */
+								__extension__
+								struct {
+									uint8_t inner_l2_type:4;
+									/**< Inner L2 type. */
+									uint8_t inner_l3_type:4;
+									/**< Inner L3 type. */
+								};
+							};
+							uint8_t inner_l4_type:4;
+							/**< Inner L4 type. */
+						};
+					};
+					uint32_t pkt_len;
+					/**< Total pkt len: sum of all segments. */
+				};
+			};
+			uint16_t data_len; /**< Amount of data in segment buffer. */
+			/** VLAN TCI (CPU order), valid if RTE_MBUF_F_RX_VLAN is set. */
+			uint16_t vlan_tci;
 
-	union {
-		union {
-			uint32_t rss;     /**< RSS hash result if RSS enabled */
-			struct {
-				union {
-					struct {
-						uint16_t hash;
-						uint16_t id;
-					};
-					uint32_t lo;
-					/**< Second 4 flexible bytes */
-				};
-				uint32_t hi;
-				/**< First 4 flexible bytes or FD ID, dependent
-				 * on RTE_MBUF_F_RX_FDIR_* flag in ol_flags.
-				 */
-			} fdir;	/**< Filter identifier if FDIR enabled */
-			struct rte_mbuf_sched sched;
-			/**< Hierarchical scheduler : 8 bytes */
-			struct {
-				uint32_t reserved1;
-				uint16_t reserved2;
-				uint16_t txq;
-				/**< The event eth Tx adapter uses this field
-				 * to store Tx queue id.
-				 * @see rte_event_eth_tx_adapter_txq_set()
-				 */
-			} txadapter; /**< Eventdev ethdev Tx adapter */
-			uint32_t usr;
-			/**< User defined tags. See rte_distributor_process() */
-		} hash;	/**< hash information */
-	};
+			union {
+				union {
+					uint32_t rss; /**< RSS hash result if RSS enabled */
+					struct {
+						union {
+							__extension__
+							struct {
+								uint16_t hash;
+								uint16_t id;
+							};
+							uint32_t lo;
+							/**< Second 4 flexible bytes */
+						};
+						uint32_t hi;
+						/**< First 4 flexible bytes or FD ID, dependent
+						 * on RTE_MBUF_F_RX_FDIR_* flag in ol_flags.
+						 */
+					} fdir; /**< Filter identifier if FDIR enabled */
+					struct rte_mbuf_sched sched;
+					/**< Hierarchical scheduler : 8 bytes */
+					struct {
+						uint32_t reserved1;
+						uint16_t reserved2;
+						uint16_t txq;
+						/**< The event eth Tx adapter uses this field
+						 * to store Tx queue id.
+						 * @see rte_event_eth_tx_adapter_txq_set()
+						 */
+					} txadapter; /**< Eventdev ethdev Tx adapter */
+					uint32_t usr;
+					/**< User defined tags. See rte_distributor_process() */
+				} hash; /**< hash information */
+			};
 
-	/** Outer VLAN TCI (CPU order), valid if RTE_MBUF_F_RX_QINQ is set. */
-	uint16_t vlan_tci_outer;
-
-	uint16_t buf_len;	/**< Length of segment buffer. */
-
-	struct rte_mempool *pool; /**< Pool from which mbuf was allocated. */
+			/** Outer VLAN TCI (CPU order), valid if RTE_MBUF_F_RX_QINQ is set. */
+			uint16_t vlan_tci_outer;
+
+			uint16_t buf_len; /**< Length of segment buffer. */
+
+			struct rte_mempool *pool; /**< Pool from which mbuf was allocated. */
+		};
+	};
 
 	/* second cache line - fields only used in slow path or on TX */
-	RTE_MARKER cacheline1 __rte_cache_min_aligned;
-
-#if RTE_IOVA_IN_MBUF
-	/**
-	 * Next segment of scattered packet. Must be NULL in the last
-	 * segment or in case of non-segmented packet.
-	 */
-	struct rte_mbuf *next;
-#else
-	/**
-	 * Reserved for dynamic fields
-	 * when the next pointer is in first cache line (i.e. RTE_IOVA_IN_MBUF is 0).
-	 */
-	uint64_t dynfield2;
-#endif
-
-	/* fields to support TX offloads */
-	union {
-		uint64_t tx_offload;	/**< combined for easy fetch */
-		__extension__
-		struct {
-			uint64_t l2_len:RTE_MBUF_L2_LEN_BITS;
-			/**< L2 (MAC) Header Length for non-tunneling pkt.
-			 * Outer_L4_len + ... + Inner_L2_len for tunneling pkt.
-			 */
-			uint64_t l3_len:RTE_MBUF_L3_LEN_BITS;
-			/**< L3 (IP) Header Length. */
-			uint64_t l4_len:RTE_MBUF_L4_LEN_BITS;
-			/**< L4 (TCP/UDP) Header Length. */
-			uint64_t tso_segsz:RTE_MBUF_TSO_SEGSZ_BITS;
-			/**< TCP TSO segment size */
-
-			/*
-			 * Fields for Tx offloading of tunnels.
-			 * These are undefined for packets which don't request
-			 * any tunnel offloads (outer IP or UDP checksum,
-			 * tunnel TSO).
-			 *
-			 * PMDs should not use these fields unconditionally
-			 * when calculating offsets.
-			 *
-			 * Applications are expected to set appropriate tunnel
-			 * offload flags when they fill in these fields.
-			 */
-			uint64_t outer_l3_len:RTE_MBUF_OUTL3_LEN_BITS;
-			/**< Outer L3 (IP) Hdr Length. */
-			uint64_t outer_l2_len:RTE_MBUF_OUTL2_LEN_BITS;
-			/**< Outer L2 (MAC) Hdr Length. */
-
-			/* uint64_t unused:RTE_MBUF_TXOFLD_UNUSED_BITS; */
-		};
-	};
-
-	/** Shared data for external buffer attached to mbuf. See
-	 * rte_pktmbuf_attach_extbuf().
-	 */
-	struct rte_mbuf_ext_shared_info *shinfo;
-
-	/** Size of the application private data. In case of an indirect
-	 * mbuf, it stores the direct mbuf private data size.
-	 */
-	uint16_t priv_size;
-
-	/** Timesync flags for use with IEEE1588. */
-	uint16_t timesync;
-
-	uint32_t dynfield1[9]; /**< Reserved for dynamic fields. */
+	__rte_marker(RTE_MARKER, cacheline1);
+	union {
+		char mbuf_cacheline1[RTE_CACHE_LINE_MIN_SIZE];
+		__extension__
+		struct {
+#if RTE_IOVA_IN_MBUF
+			/**
+			 * Next segment of scattered packet. Must be NULL in the last
+			 * segment or in case of non-segmented packet.
+			 */
+			struct rte_mbuf *next;
+#else
+			/**
+			 * Reserved for dynamic fields
+			 * when the next pointer is in first cache line
+			 * (i.e. RTE_IOVA_IN_MBUF is 0).
+			 */
+			uint64_t dynfield2;
+#endif
+
+			/* fields to support TX offloads */
+			union {
+				uint64_t tx_offload; /**< combined for easy fetch */
+				__extension__
+				struct {
+					uint64_t l2_len:RTE_MBUF_L2_LEN_BITS;
+					/**< L2 (MAC) Header Length for non-tunneling pkt.
+					 * Outer_L4_len + ... + Inner_L2_len for tunneling pkt.
+					 */
+					uint64_t l3_len:RTE_MBUF_L3_LEN_BITS;
+					/**< L3 (IP) Header Length. */
+					uint64_t l4_len:RTE_MBUF_L4_LEN_BITS;
+					/**< L4 (TCP/UDP) Header Length. */
+					uint64_t tso_segsz:RTE_MBUF_TSO_SEGSZ_BITS;
+					/**< TCP TSO segment size */
+
+					/*
+					 * Fields for Tx offloading of tunnels.
+					 * These are undefined for packets which don't request
+					 * any tunnel offloads (outer IP or UDP checksum,
+					 * tunnel TSO).
+					 *
+					 * PMDs should not use these fields unconditionally
+					 * when calculating offsets.
+					 *
+					 * Applications are expected to set appropriate tunnel
+					 * offload flags when they fill in these fields.
+					 */
+					uint64_t outer_l3_len:RTE_MBUF_OUTL3_LEN_BITS;
+					/**< Outer L3 (IP) Hdr Length. */
+					uint64_t outer_l2_len:RTE_MBUF_OUTL2_LEN_BITS;
+					/**< Outer L2 (MAC) Hdr Length. */
+
+					/* uint64_t unused:RTE_MBUF_TXOFLD_UNUSED_BITS; */
+				};
+			};
+
+			/** Shared data for external buffer attached to mbuf. See
+			 * rte_pktmbuf_attach_extbuf().
+			 */
+			struct rte_mbuf_ext_shared_info *shinfo;
+
+			/** Size of the application private data. In case of an indirect
+			 * mbuf, it stores the direct mbuf private data size.
+			 */
+			uint16_t priv_size;
+
+			/** Timesync flags for use with IEEE1588. */
+			uint16_t timesync;
+
+			uint32_t dynfield1[9]; /**< Reserved for dynamic fields. */
+		};
+	};
 } __rte_cache_aligned;
 
 /**