From patchwork Thu Jun 13 14:23:35 2019
X-Patchwork-Submitter: Neil Horman
X-Patchwork-Id: 54773
X-Patchwork-Delegate: thomas@monjalon.net
From: Neil Horman
To: dev@dpdk.org
Cc: Neil Horman, Jerin Jacob Kollanukkaran, Bruce Richardson, Thomas Monjalon
Date: Thu, 13 Jun 2019 10:23:35 -0400
Message-Id: <20190613142344.9188-2-nhorman@tuxdriver.com>
In-Reply-To: <20190613142344.9188-1-nhorman@tuxdriver.com>
References: <20190525184346.27932-1-nhorman@tuxdriver.com> <20190613142344.9188-1-nhorman@tuxdriver.com>
Subject: [dpdk-dev] [PATCH v2 01/10] Add __rte_internal tag for functions and version target

This tag is meant to be used on function prototypes to identify functions that are intended only for use by internal DPDK libraries (i.e. libraries that are built while building the SDK itself, as identified by the BUILDING_RTE_SDK macro being defined). When that macro is not defined, the tag resolves to an error function attribute, causing a build failure in any compilation unit that attempts to call the tagged function.

Validate the use of this tag in much the same way we validate __rte_experimental. By adding an INTERNAL version to library map files, we can exempt internal-only functions from ABI checking, and handle them so that symbols intended only for internal use between DPDK libraries are properly tagged with __rte_internal.

Note this patch updates the check-experimental-syms.sh script, which previously checked only the EXPERIMENTAL section, so that it now also checks the INTERNAL section.
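For illustration, a minimal sketch of how a tagged prototype is expected to look from both sides of the build; the header and function names below are hypothetical and are not part of this patch:

    /* rte_widget_internal.h - hypothetical header shipped by a DPDK library */
    #include <rte_compat.h>

    /* Internal-only prototype, tagged with the new attribute.  In the
     * library's version map this symbol would be listed under the new
     * INTERNAL section rather than a versioned DPDK_xx.yy section. */
    int __rte_internal rte_widget_internal_setup(int id);

    /* app.c - an external application, built without BUILDING_RTE_SDK */
    int main(void)
    {
        /* The error attribute is active here, so this call is expected
         * to fail at compile time instead of silently linking against
         * an internal symbol. */
        return rte_widget_internal_setup(0);
    }
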
As such its been renamed to the now more appropriate check-special-syms.sh Signed-off-by: Neil Horman CC: Jerin Jacob Kollanukkaran CC: Bruce Richardson CC: Thomas Monjalon --- ...rimental-syms.sh => check-special-syms.sh} | 24 ++++++++++++++++++- lib/librte_eal/common/include/rte_compat.h | 12 ++++++++++ mk/internal/rte.compile-pre.mk | 6 ++--- mk/target/generic/rte.vars.mk | 2 +- 4 files changed, 39 insertions(+), 5 deletions(-) rename buildtools/{check-experimental-syms.sh => check-special-syms.sh} (53%) diff --git a/buildtools/check-experimental-syms.sh b/buildtools/check-special-syms.sh similarity index 53% rename from buildtools/check-experimental-syms.sh rename to buildtools/check-special-syms.sh index 7d1f3a568..63682c677 100755 --- a/buildtools/check-experimental-syms.sh +++ b/buildtools/check-special-syms.sh @@ -31,10 +31,32 @@ do cat >&2 <<- END_OF_MESSAGE $SYM is not flagged as experimental but is listed in version map - Please add __rte_experimental to the definition of $SYM + Please add __rte_experimental to the definition/prototype of $SYM END_OF_MESSAGE exit 1 fi done + +for i in `awk 'BEGIN {found=0} + /.*INTERNAL.*/ {found=1} + /.*}.*;/ {found=0} + /.*;/ {if (found == 1) print $1}' $MAPFILE` +do + SYM=`echo $i | sed -e"s/;//"` + objdump -t $OBJFILE | grep -q "\.text.*$SYM$" + IN_TEXT=$? + objdump -t $OBJFILE | grep -q "\.text\.internal.*$SYM$" + IN_EXP=$? + if [ $IN_TEXT -eq 0 -a $IN_EXP -ne 0 ] + then + cat >&2 <<- END_OF_MESSAGE + $SYM is not flagged as internal + but is listed in version map + Please add __rte_internal to the definition/prototype of $SYM + END_OF_MESSAGE + exit 1 + fi +done + exit 0 diff --git a/lib/librte_eal/common/include/rte_compat.h b/lib/librte_eal/common/include/rte_compat.h index 92ff28faf..739e8485c 100644 --- a/lib/librte_eal/common/include/rte_compat.h +++ b/lib/librte_eal/common/include/rte_compat.h @@ -89,4 +89,16 @@ __attribute__((section(".text.experimental"))) #endif +/* + * __rte_internal tags mark functions as internal only, If specified in public + * header files, this tag will resolve to an error directive, preventing + * external applications from attempting to make calls to functions not meant + * for consumption outside the dpdk library + */ +#ifdef BUILDING_RTE_SDK +#define __rte_internal __attribute__((section(".text.internal"))) +#else +#define __rte_internal __attribute__((error("This function cannot be used outside of the core DPDK library"), \ + section(".text.internal"))) +#endif #endif /* _RTE_COMPAT_H_ */ diff --git a/mk/internal/rte.compile-pre.mk b/mk/internal/rte.compile-pre.mk index 0cf3791b4..f1d97ef76 100644 --- a/mk/internal/rte.compile-pre.mk +++ b/mk/internal/rte.compile-pre.mk @@ -56,8 +56,8 @@ C_TO_O = $(CC) -Wp,-MD,$(call obj2dep,$(@)).tmp $(CPPFLAGS) $(CFLAGS) \ C_TO_O_STR = $(subst ','\'',$(C_TO_O)) #'# fix syntax highlight C_TO_O_DISP = $(if $(V),"$(C_TO_O_STR)"," CC $(@)") endif -EXPERIMENTAL_CHECK = $(RTE_SDK)/buildtools/check-experimental-syms.sh -CHECK_EXPERIMENTAL = $(EXPERIMENTAL_CHECK) $(SRCDIR)/$(EXPORT_MAP) $@ +SPECIAL_SYM_CHECK = $(RTE_SDK)/buildtools/check-special-syms.sh +CHECK_SPECIAL_SYMS = $(SPECIAL_SYM_CHECK) $(SRCDIR)/$(EXPORT_MAP) $@ PMDINFO_GEN = $(RTE_SDK_BIN)/app/dpdk-pmdinfogen $@ $@.pmd.c PMDINFO_CC = $(CC) $(CPPFLAGS) $(CFLAGS) $(EXTRA_CFLAGS) -c -o $@.pmd.o $@.pmd.c @@ -75,7 +75,7 @@ C_TO_O_DO = @set -e; \ echo $(C_TO_O_DISP); \ $(C_TO_O) && \ $(PMDINFO_TO_O) && \ - $(CHECK_EXPERIMENTAL) && \ + $(CHECK_SPECIAL_SYMS) && \ echo $(C_TO_O_CMD) > $(call obj2cmd,$(@)) && \ sed 
's,'$@':,dep_'$@' =,' $(call obj2dep,$(@)).tmp > $(call obj2dep,$(@)) && \ rm -f $(call obj2dep,$(@)).tmp diff --git a/mk/target/generic/rte.vars.mk b/mk/target/generic/rte.vars.mk index 25a578ad7..ed6a0c87b 100644 --- a/mk/target/generic/rte.vars.mk +++ b/mk/target/generic/rte.vars.mk @@ -96,7 +96,7 @@ LDFLAGS += -L$(RTE_OUTPUT)/lib # defined. ifeq ($(BUILDING_RTE_SDK),1) # building sdk -CFLAGS += -include $(RTE_OUTPUT)/include/rte_config.h +CFLAGS += -include $(RTE_OUTPUT)/include/rte_config.h -DBUILDING_RTE_SDK else # if we are building an external application, include SDK's lib and # includes too From patchwork Thu Jun 13 14:23:36 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Neil Horman X-Patchwork-Id: 54774 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id B4B6A1D609; Thu, 13 Jun 2019 16:24:12 +0200 (CEST) Received: from smtp.tuxdriver.com (charlotte.tuxdriver.com [70.61.120.58]) by dpdk.org (Postfix) with ESMTP id 60C191D5E0 for ; Thu, 13 Jun 2019 16:24:07 +0200 (CEST) Received: from [107.15.85.130] (helo=hmswarspite.think-freely.org) by smtp.tuxdriver.com with esmtpsa (TLSv1:AES256-SHA:256) (Exim 4.63) (envelope-from ) id 1hbQdu-0000fy-Pw; Thu, 13 Jun 2019 10:24:04 -0400 Received: from hmswarspite.think-freely.org (localhost [127.0.0.1]) by hmswarspite.think-freely.org (8.15.2/8.15.2) with ESMTP id x5DENm2M009323; Thu, 13 Jun 2019 10:23:48 -0400 Received: (from nhorman@localhost) by hmswarspite.think-freely.org (8.15.2/8.15.2/Submit) id x5DENmg4009322; Thu, 13 Jun 2019 10:23:48 -0400 From: Neil Horman To: dev@dpdk.org Cc: Neil Horman , Jerin Jacob Kollanukkaran , Bruce Richardson , Thomas Monjalon Date: Thu, 13 Jun 2019 10:23:36 -0400 Message-Id: <20190613142344.9188-3-nhorman@tuxdriver.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190613142344.9188-1-nhorman@tuxdriver.com> References: <20190525184346.27932-1-nhorman@tuxdriver.com> <20190613142344.9188-1-nhorman@tuxdriver.com> MIME-Version: 1.0 X-Spam-Score: -2.9 (--) X-Spam-Status: No Subject: [dpdk-dev] [PATCH v2 02/10] meson: add BUILDING_RTE_SDK X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" The __rte_internal macro is defined dependent on the value of the build environment variable BUILDING_RTE_SDK. 
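As a rough, hypothetical illustration of why the define matters (the file and symbol names here are invented, not taken from the tree), the same translation unit behaves differently depending on whether -DBUILDING_RTE_SDK is on the compiler command line:

    /* build_mode_demo.c - hypothetical example; compile once with
     * -DBUILDING_RTE_SDK (as lib/ and drivers/ now do) and once without. */
    #include <stdio.h>
    #include <rte_compat.h>

    int __rte_internal dpdk_only_helper(void);  /* hypothetical internal symbol */

    int main(void)
    {
    #ifdef BUILDING_RTE_SDK
        /* SDK build: __rte_internal only places code in .text.internal,
         * so internal calls compile normally. */
        printf("SDK build: internal calls are permitted\n");
        return dpdk_only_helper();
    #else
        /* External build: __rte_internal carries the error attribute, so a
         * call to dpdk_only_helper() here would be rejected at compile time. */
        printf("external build: internal calls are rejected\n");
        return 0;
    #endif
    }
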
This variable was set in the Makefile environment but not the meson environment, so lets reconcile the two by defining it for meson in the lib and drivers directories, but not the examples/apps directories, which should be treated as they are not part of the core DPDK library Signed-off-by: Neil Horman CC: Jerin Jacob Kollanukkaran CC: Bruce Richardson CC: Thomas Monjalon Acked-by: Bruce Richardson --- drivers/meson.build | 1 + lib/meson.build | 1 + 2 files changed, 2 insertions(+) diff --git a/drivers/meson.build b/drivers/meson.build index 4c444f495..a312277d1 100644 --- a/drivers/meson.build +++ b/drivers/meson.build @@ -23,6 +23,7 @@ endif # specify -D_GNU_SOURCE unconditionally default_cflags += '-D_GNU_SOURCE' +default_cflags += '-DBUILDING_RTE_SDK' foreach class:dpdk_driver_classes drivers = [] diff --git a/lib/meson.build b/lib/meson.build index e067ce5ea..0e398d534 100644 --- a/lib/meson.build +++ b/lib/meson.build @@ -35,6 +35,7 @@ if is_windows endif default_cflags = machine_args +default_cflags += '-DBUILDING_RTE_SDK' if cc.has_argument('-Wno-format-truncation') default_cflags += '-Wno-format-truncation' endif From patchwork Thu Jun 13 14:23:37 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Neil Horman X-Patchwork-Id: 54772 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id A24E81D5F8; Thu, 13 Jun 2019 16:24:07 +0200 (CEST) Received: from smtp.tuxdriver.com (charlotte.tuxdriver.com [70.61.120.58]) by dpdk.org (Postfix) with ESMTP id 352761D5F6 for ; Thu, 13 Jun 2019 16:24:06 +0200 (CEST) Received: from [107.15.85.130] (helo=hmswarspite.think-freely.org) by smtp.tuxdriver.com with esmtpsa (TLSv1:AES256-SHA:256) (Exim 4.63) (envelope-from ) id 1hbQdv-0000fz-PR; Thu, 13 Jun 2019 10:24:01 -0400 Received: from hmswarspite.think-freely.org (localhost [127.0.0.1]) by hmswarspite.think-freely.org (8.15.2/8.15.2) with ESMTP id x5DENnDs009327; Thu, 13 Jun 2019 10:23:49 -0400 Received: (from nhorman@localhost) by hmswarspite.think-freely.org (8.15.2/8.15.2/Submit) id x5DENnjE009326; Thu, 13 Jun 2019 10:23:49 -0400 From: Neil Horman To: dev@dpdk.org Cc: Neil Horman , Jerin Jacob Kollanukkaran , Bruce Richardson , Thomas Monjalon Date: Thu, 13 Jun 2019 10:23:37 -0400 Message-Id: <20190613142344.9188-4-nhorman@tuxdriver.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190613142344.9188-1-nhorman@tuxdriver.com> References: <20190525184346.27932-1-nhorman@tuxdriver.com> <20190613142344.9188-1-nhorman@tuxdriver.com> MIME-Version: 1.0 X-Spam-Score: -2.9 (--) X-Spam-Status: No Subject: [dpdk-dev] [PATCH v2 03/10] Exempt INTERNAL symbols from checking X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" No need to restrict the ABI on symbols that are only used by core libraries Signed-off-by: Neil Horman CC: Jerin Jacob Kollanukkaran CC: Bruce Richardson CC: Thomas Monjalon --- devtools/check-symbol-change.sh | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/devtools/check-symbol-change.sh b/devtools/check-symbol-change.sh index c5434f3bb..83e8b43ae 100755 --- a/devtools/check-symbol-change.sh +++ b/devtools/check-symbol-change.sh @@ -93,6 +93,13 @@ check_for_rule_violations() if [ 
"$ar" = "add" ] then + if [ "$secname" = "INTERNAL" ] + then + # these are absolved from any further checking + echo "Skipping symbol $symname in INTERNAL" + continue + fi + if [ "$secname" = "unknown" ] then # Just inform the user of this occurrence, but From patchwork Thu Jun 13 14:23:38 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Neil Horman X-Patchwork-Id: 54780 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 29F041D627; Thu, 13 Jun 2019 16:24:26 +0200 (CEST) Received: from smtp.tuxdriver.com (charlotte.tuxdriver.com [70.61.120.58]) by dpdk.org (Postfix) with ESMTP id 583E21D60C for ; Thu, 13 Jun 2019 16:24:14 +0200 (CEST) Received: from [107.15.85.130] (helo=hmswarspite.think-freely.org) by smtp.tuxdriver.com with esmtpsa (TLSv1:AES256-SHA:256) (Exim 4.63) (envelope-from ) id 1hbQdw-0000g0-6C; Thu, 13 Jun 2019 10:24:12 -0400 Received: from hmswarspite.think-freely.org (localhost [127.0.0.1]) by hmswarspite.think-freely.org (8.15.2/8.15.2) with ESMTP id x5DENonG009331; Thu, 13 Jun 2019 10:23:50 -0400 Received: (from nhorman@localhost) by hmswarspite.think-freely.org (8.15.2/8.15.2/Submit) id x5DENnp8009330; Thu, 13 Jun 2019 10:23:49 -0400 From: Neil Horman To: dev@dpdk.org Cc: Neil Horman , Jerin Jacob Kollanukkaran , Bruce Richardson , Thomas Monjalon Date: Thu, 13 Jun 2019 10:23:38 -0400 Message-Id: <20190613142344.9188-5-nhorman@tuxdriver.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190613142344.9188-1-nhorman@tuxdriver.com> References: <20190525184346.27932-1-nhorman@tuxdriver.com> <20190613142344.9188-1-nhorman@tuxdriver.com> MIME-Version: 1.0 X-Spam-Score: -2.9 (--) X-Spam-Status: No Subject: [dpdk-dev] [PATCH v2 04/10] mark dpaa driver internal-only symbols with __rte_internal X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" make use of the new __rte_internal tag to specify symbols that should only be used by dpdk provided libraries (as specified by the BUILDING_RTE_SDK cflag Signed-off-by: Neil Horman CC: Jerin Jacob Kollanukkaran CC: Bruce Richardson CC: Thomas Monjalon --- drivers/bus/dpaa/include/fsl_bman.h | 12 ++--- drivers/bus/dpaa/include/fsl_fman.h | 50 +++++++++---------- drivers/bus/dpaa/include/fsl_qman.h | 60 +++++++++++------------ drivers/bus/dpaa/include/fsl_usd.h | 12 ++--- drivers/bus/dpaa/include/netcfg.h | 4 +- drivers/bus/dpaa/include/of.h | 6 +-- drivers/bus/dpaa/rte_bus_dpaa_version.map | 47 +++++++----------- drivers/net/dpaa/dpaa_ethdev.c | 4 +- drivers/net/dpaa/dpaa_ethdev.h | 4 +- drivers/net/dpaa/rte_pmd_dpaa_version.map | 8 +-- 10 files changed, 99 insertions(+), 108 deletions(-) diff --git a/drivers/bus/dpaa/include/fsl_bman.h b/drivers/bus/dpaa/include/fsl_bman.h index 0c74aba44..1835acf16 100644 --- a/drivers/bus/dpaa/include/fsl_bman.h +++ b/drivers/bus/dpaa/include/fsl_bman.h @@ -264,13 +264,13 @@ int bman_shutdown_pool(u32 bpid); * the structure provided by the caller can be released or reused after the * function returns. 
*/ -struct bman_pool *bman_new_pool(const struct bman_pool_params *params); +struct bman_pool __rte_internal *bman_new_pool(const struct bman_pool_params *params); /** * bman_free_pool - Deallocates a Buffer Pool object * @pool: the pool object to release */ -void bman_free_pool(struct bman_pool *pool); +void __rte_internal bman_free_pool(struct bman_pool *pool); /** * bman_get_params - Returns a pool object's parameters. @@ -279,7 +279,7 @@ void bman_free_pool(struct bman_pool *pool); * The returned pointer refers to state within the pool object so must not be * modified and can no longer be read once the pool object is destroyed. */ -const struct bman_pool_params *bman_get_params(const struct bman_pool *pool); +const struct bman_pool_params __rte_internal *bman_get_params(const struct bman_pool *pool); /** * bman_release - Release buffer(s) to the buffer pool @@ -289,7 +289,7 @@ const struct bman_pool_params *bman_get_params(const struct bman_pool *pool); * @flags: bit-mask of BMAN_RELEASE_FLAG_*** options * */ -int bman_release(struct bman_pool *pool, const struct bm_buffer *bufs, u8 num, +int __rte_internal bman_release(struct bman_pool *pool, const struct bm_buffer *bufs, u8 num, u32 flags); /** @@ -302,7 +302,7 @@ int bman_release(struct bman_pool *pool, const struct bm_buffer *bufs, u8 num, * The return value will be the number of buffers obtained from the pool, or a * negative error code if a h/w error or pool starvation was encountered. */ -int bman_acquire(struct bman_pool *pool, struct bm_buffer *bufs, u8 num, +int __rte_internal bman_acquire(struct bman_pool *pool, struct bm_buffer *bufs, u8 num, u32 flags); /** @@ -317,7 +317,7 @@ int bman_query_pools(struct bm_pool_state *state); * * Return the number of the free buffers */ -u32 bman_query_free_buffers(struct bman_pool *pool); +u32 __rte_internal bman_query_free_buffers(struct bman_pool *pool); /** * bman_update_pool_thresholds - Change the buffer pool's depletion thresholds diff --git a/drivers/bus/dpaa/include/fsl_fman.h b/drivers/bus/dpaa/include/fsl_fman.h index 1d1ce8671..bd8218b3d 100644 --- a/drivers/bus/dpaa/include/fsl_fman.h +++ b/drivers/bus/dpaa/include/fsl_fman.h @@ -43,19 +43,19 @@ struct fm_status_t { } __attribute__ ((__packed__)); /* Set MAC address for a particular interface */ -int fman_if_add_mac_addr(struct fman_if *p, uint8_t *eth, uint8_t addr_num); +int __rte_internal fman_if_add_mac_addr(struct fman_if *p, uint8_t *eth, uint8_t addr_num); /* Remove a MAC address for a particular interface */ -void fman_if_clear_mac_addr(struct fman_if *p, uint8_t addr_num); +void __rte_internal fman_if_clear_mac_addr(struct fman_if *p, uint8_t addr_num); /* Get the FMAN statistics */ -void fman_if_stats_get(struct fman_if *p, struct rte_eth_stats *stats); +void __rte_internal fman_if_stats_get(struct fman_if *p, struct rte_eth_stats *stats); /* Reset the FMAN statistics */ -void fman_if_stats_reset(struct fman_if *p); +void __rte_internal fman_if_stats_reset(struct fman_if *p); /* Get all of the FMAN statistics */ -void fman_if_stats_get_all(struct fman_if *p, uint64_t *value, int n); +void __rte_internal fman_if_stats_get_all(struct fman_if *p, uint64_t *value, int n); /* Set ignore pause option for a specific interface */ void fman_if_set_rx_ignore_pause_frames(struct fman_if *p, bool enable); @@ -64,33 +64,33 @@ void fman_if_set_rx_ignore_pause_frames(struct fman_if *p, bool enable); void fman_if_conf_max_frame_len(struct fman_if *p, unsigned int max_frame_len); /* Enable/disable Rx promiscuous mode on specified 
interface */ -void fman_if_promiscuous_enable(struct fman_if *p); -void fman_if_promiscuous_disable(struct fman_if *p); +void __rte_internal fman_if_promiscuous_enable(struct fman_if *p); +void __rte_internal fman_if_promiscuous_disable(struct fman_if *p); /* Enable/disable Rx on specific interfaces */ -void fman_if_enable_rx(struct fman_if *p); -void fman_if_disable_rx(struct fman_if *p); +void __rte_internal fman_if_enable_rx(struct fman_if *p); +void __rte_internal fman_if_disable_rx(struct fman_if *p); /* Enable/disable loopback on specific interfaces */ -void fman_if_loopback_enable(struct fman_if *p); -void fman_if_loopback_disable(struct fman_if *p); +void __rte_internal fman_if_loopback_enable(struct fman_if *p); +void __rte_internal fman_if_loopback_disable(struct fman_if *p); /* Set buffer pool on specific interface */ -void fman_if_set_bp(struct fman_if *fm_if, unsigned int num, int bpid, +void __rte_internal fman_if_set_bp(struct fman_if *fm_if, unsigned int num, int bpid, size_t bufsize); /* Get Flow Control threshold parameters on specific interface */ -int fman_if_get_fc_threshold(struct fman_if *fm_if); +int __rte_internal fman_if_get_fc_threshold(struct fman_if *fm_if); /* Enable and Set Flow Control threshold parameters on specific interface */ -int fman_if_set_fc_threshold(struct fman_if *fm_if, +int __rte_internal fman_if_set_fc_threshold(struct fman_if *fm_if, u32 high_water, u32 low_water, u32 bpid); /* Get Flow Control pause quanta on specific interface */ -int fman_if_get_fc_quanta(struct fman_if *fm_if); +int __rte_internal fman_if_get_fc_quanta(struct fman_if *fm_if); /* Set Flow Control pause quanta on specific interface */ -int fman_if_set_fc_quanta(struct fman_if *fm_if, u16 pause_quanta); +int __rte_internal fman_if_set_fc_quanta(struct fman_if *fm_if, u16 pause_quanta); /* Set default error fqid on specific interface */ void fman_if_set_err_fqid(struct fman_if *fm_if, uint32_t err_fqid); @@ -99,36 +99,36 @@ void fman_if_set_err_fqid(struct fman_if *fm_if, uint32_t err_fqid); int fman_if_get_ic_params(struct fman_if *fm_if, struct fman_if_ic_params *icp); /* Set IC transfer params */ -int fman_if_set_ic_params(struct fman_if *fm_if, +int __rte_internal fman_if_set_ic_params(struct fman_if *fm_if, const struct fman_if_ic_params *icp); /* Get interface fd->offset value */ -int fman_if_get_fdoff(struct fman_if *fm_if); +int __rte_internal fman_if_get_fdoff(struct fman_if *fm_if); /* Set interface fd->offset value */ -void fman_if_set_fdoff(struct fman_if *fm_if, uint32_t fd_offset); +void __rte_internal fman_if_set_fdoff(struct fman_if *fm_if, uint32_t fd_offset); /* Get interface SG enable status value */ -int fman_if_get_sg_enable(struct fman_if *fm_if); +int __rte_internal fman_if_get_sg_enable(struct fman_if *fm_if); /* Set interface SG support mode */ -void fman_if_set_sg(struct fman_if *fm_if, int enable); +void __rte_internal fman_if_set_sg(struct fman_if *fm_if, int enable); /* Get interface Max Frame length (MTU) */ uint16_t fman_if_get_maxfrm(struct fman_if *fm_if); /* Set interface Max Frame length (MTU) */ -void fman_if_set_maxfrm(struct fman_if *fm_if, uint16_t max_frm); +void __rte_internal fman_if_set_maxfrm(struct fman_if *fm_if, uint16_t max_frm); /* Set interface next invoked action for dequeue operation */ void fman_if_set_dnia(struct fman_if *fm_if, uint32_t nia); /* discard error packets on rx */ -void fman_if_discard_rx_errors(struct fman_if *fm_if); +void __rte_internal fman_if_discard_rx_errors(struct fman_if *fm_if); -void 
fman_if_set_mcast_filter_table(struct fman_if *p); +void __rte_internal fman_if_set_mcast_filter_table(struct fman_if *p); -void fman_if_reset_mcast_filter_table(struct fman_if *p); +void __rte_internal fman_if_reset_mcast_filter_table(struct fman_if *p); int fman_if_add_hash_mac_addr(struct fman_if *p, uint8_t *eth); diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h index e5cccbbea..85c0b9e25 100644 --- a/drivers/bus/dpaa/include/fsl_qman.h +++ b/drivers/bus/dpaa/include/fsl_qman.h @@ -1311,7 +1311,7 @@ struct qman_cgr { #define QMAN_CGR_MODE_FRAME 0x00000001 #ifdef CONFIG_FSL_QMAN_FQ_LOOKUP -void qman_set_fq_lookup_table(void **table); +void __rte_internal qman_set_fq_lookup_table(void **table); #endif /** @@ -1319,7 +1319,7 @@ void qman_set_fq_lookup_table(void **table); */ int qman_get_portal_index(void); -u32 qman_portal_dequeue(struct rte_event ev[], unsigned int poll_limit, +u32 __rte_internal qman_portal_dequeue(struct rte_event ev[], unsigned int poll_limit, void **bufs); /** @@ -1330,7 +1330,7 @@ u32 qman_portal_dequeue(struct rte_event ev[], unsigned int poll_limit, * processed via qman_poll_***() functions). Returns zero for success, or * -EINVAL if the current CPU is sharing a portal hosted on another CPU. */ -int qman_irqsource_add(u32 bits); +int __rte_internal qman_irqsource_add(u32 bits); /** * qman_irqsource_remove - remove processing sources from being interrupt-driven @@ -1340,7 +1340,7 @@ int qman_irqsource_add(u32 bits); * instead be processed via qman_poll_***() functions. Returns zero for success, * or -EINVAL if the current CPU is sharing a portal hosted on another CPU. */ -int qman_irqsource_remove(u32 bits); +int __rte_internal qman_irqsource_remove(u32 bits); /** * qman_affine_channel - return the channel ID of an portal @@ -1352,7 +1352,7 @@ int qman_irqsource_remove(u32 bits); */ u16 qman_affine_channel(int cpu); -unsigned int qman_portal_poll_rx(unsigned int poll_limit, +unsigned int __rte_internal qman_portal_poll_rx(unsigned int poll_limit, void **bufs, struct qman_portal *q); /** @@ -1363,7 +1363,7 @@ unsigned int qman_portal_poll_rx(unsigned int poll_limit, * * This function will issue a volatile dequeue command to the QMAN. */ -int qman_set_vdq(struct qman_fq *fq, u16 num, uint32_t vdqcr_flags); +int __rte_internal qman_set_vdq(struct qman_fq *fq, u16 num, uint32_t vdqcr_flags); /** * qman_dequeue - Get the DQRR entry after volatile dequeue command @@ -1373,7 +1373,7 @@ int qman_set_vdq(struct qman_fq *fq, u16 num, uint32_t vdqcr_flags); * is issued. It will keep returning NULL until there is no packet available on * the DQRR. */ -struct qm_dqrr_entry *qman_dequeue(struct qman_fq *fq); +struct qm_dqrr_entry __rte_internal *qman_dequeue(struct qman_fq *fq); /** * qman_dqrr_consume - Consume the DQRR entriy after volatile dequeue @@ -1384,7 +1384,7 @@ struct qm_dqrr_entry *qman_dequeue(struct qman_fq *fq); * This will consume the DQRR enrey and make it available for next volatile * dequeue. */ -void qman_dqrr_consume(struct qman_fq *fq, +void __rte_internal qman_dqrr_consume(struct qman_fq *fq, struct qm_dqrr_entry *dq); /** @@ -1397,7 +1397,7 @@ void qman_dqrr_consume(struct qman_fq *fq, * this function will return -EINVAL, otherwise the return value is >=0 and * represents the number of DQRR entries processed. */ -int qman_poll_dqrr(unsigned int limit); +int __rte_internal qman_poll_dqrr(unsigned int limit); /** * qman_poll @@ -1443,7 +1443,7 @@ void qman_start_dequeues(void); * (SDQCR). 
The requested pools are limited to those the portal has dequeue * access to. */ -void qman_static_dequeue_add(u32 pools, struct qman_portal *qm); +void __rte_internal qman_static_dequeue_add(u32 pools, struct qman_portal *qm); /** * qman_static_dequeue_del - Remove pool channels from the portal SDQCR @@ -1490,7 +1490,7 @@ void qman_dca(const struct qm_dqrr_entry *dq, int park_request); * function must be called from the same CPU as that which processed the DQRR * entry in the first place. */ -void qman_dca_index(u8 index, int park_request); +void __rte_internal qman_dca_index(u8 index, int park_request); /** * qman_eqcr_is_empty - Determine if portal's EQCR is empty @@ -1547,7 +1547,7 @@ void qman_set_dc_ern(qman_cb_dc_ern handler, int affine); * a frame queue object based on that, rather than assuming/requiring that it be * Out of Service. */ -int qman_create_fq(u32 fqid, u32 flags, struct qman_fq *fq); +int __rte_internal qman_create_fq(u32 fqid, u32 flags, struct qman_fq *fq); /** * qman_destroy_fq - Deallocates a FQ @@ -1565,7 +1565,7 @@ void qman_destroy_fq(struct qman_fq *fq, u32 flags); * qman_fq_fqid - Queries the frame queue ID of a FQ object * @fq: the frame queue object to query */ -u32 qman_fq_fqid(struct qman_fq *fq); +u32 __rte_internal qman_fq_fqid(struct qman_fq *fq); /** * qman_fq_state - Queries the state of a FQ object @@ -1577,7 +1577,7 @@ u32 qman_fq_fqid(struct qman_fq *fq); * This captures the state, as seen by the driver, at the time the function * executes. */ -void qman_fq_state(struct qman_fq *fq, enum qman_fq_state *state, u32 *flags); +void __rte_internal qman_fq_state(struct qman_fq *fq, enum qman_fq_state *state, u32 *flags); /** * qman_init_fq - Initialises FQ fields, leaves the FQ "parked" or "scheduled" @@ -1613,7 +1613,7 @@ void qman_fq_state(struct qman_fq *fq, enum qman_fq_state *state, u32 *flags); * context_a.address fields and will leave the stashing fields provided by the * user alone, otherwise it will zero out the context_a.stashing fields. */ -int qman_init_fq(struct qman_fq *fq, u32 flags, struct qm_mcc_initfq *opts); +int __rte_internal qman_init_fq(struct qman_fq *fq, u32 flags, struct qm_mcc_initfq *opts); /** * qman_schedule_fq - Schedules a FQ @@ -1642,7 +1642,7 @@ int qman_schedule_fq(struct qman_fq *fq); * caller should be prepared to accept the callback as the function is called, * not only once it has returned. */ -int qman_retire_fq(struct qman_fq *fq, u32 *flags); +int __rte_internal qman_retire_fq(struct qman_fq *fq, u32 *flags); /** * qman_oos_fq - Puts a FQ "out of service" @@ -1651,7 +1651,7 @@ int qman_retire_fq(struct qman_fq *fq, u32 *flags); * The frame queue must be retired and empty, and if any order restoration list * was released as ERNs at the time of retirement, they must all be consumed. 
*/ -int qman_oos_fq(struct qman_fq *fq); +int __rte_internal qman_oos_fq(struct qman_fq *fq); /** * qman_fq_flow_control - Set the XON/XOFF state of a FQ @@ -1684,14 +1684,14 @@ int qman_query_fq_has_pkts(struct qman_fq *fq); * @fq: the frame queue object to be queried * @np: storage for the queried FQD fields */ -int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np); +int __rte_internal qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np); /** * qman_query_fq_frmcnt - Queries fq frame count * @fq: the frame queue object to be queried * @frm_cnt: number of frames in the queue */ -int qman_query_fq_frm_cnt(struct qman_fq *fq, u32 *frm_cnt); +int __rte_internal qman_query_fq_frm_cnt(struct qman_fq *fq, u32 *frm_cnt); /** * qman_query_wq - Queries work queue lengths @@ -1721,7 +1721,7 @@ int qman_query_wq(u8 query_dedicated, struct qm_mcr_querywq *wq); * callback, or by waiting for the QMAN_FQ_STATE_VDQCR bit to disappear from the * "flags" retrieved from qman_fq_state(). */ -int qman_volatile_dequeue(struct qman_fq *fq, u32 flags, u32 vdqcr); +int __rte_internal qman_volatile_dequeue(struct qman_fq *fq, u32 flags, u32 vdqcr); /** * qman_enqueue - Enqueue a frame to a frame queue @@ -1756,9 +1756,9 @@ int qman_volatile_dequeue(struct qman_fq *fq, u32 flags, u32 vdqcr); * of an already busy hardware resource by throttling many of the to-be-dropped * enqueues "at the source". */ -int qman_enqueue(struct qman_fq *fq, const struct qm_fd *fd, u32 flags); +int __rte_internal qman_enqueue(struct qman_fq *fq, const struct qm_fd *fd, u32 flags); -int qman_enqueue_multi(struct qman_fq *fq, const struct qm_fd *fd, u32 *flags, +int __rte_internal qman_enqueue_multi(struct qman_fq *fq, const struct qm_fd *fd, u32 *flags, int frames_to_send); /** @@ -1772,7 +1772,7 @@ int qman_enqueue_multi(struct qman_fq *fq, const struct qm_fd *fd, u32 *flags, * to be processed by different frame queues. */ int -qman_enqueue_multi_fq(struct qman_fq *fq[], const struct qm_fd *fd, +__rte_internal qman_enqueue_multi_fq(struct qman_fq *fq[], const struct qm_fd *fd, int frames_to_send); typedef int (*qman_cb_precommit) (void *arg); @@ -1859,7 +1859,7 @@ int qman_shutdown_fq(u32 fqid); * @fqid: the base FQID of the range to deallocate * @count: the number of FQIDs in the range */ -int qman_reserve_fqid_range(u32 fqid, unsigned int count); +int __rte_internal qman_reserve_fqid_range(u32 fqid, unsigned int count); static inline int qman_reserve_fqid(u32 fqid) { return qman_reserve_fqid_range(fqid, 1); @@ -1878,7 +1878,7 @@ static inline int qman_reserve_fqid(u32 fqid) * than requested (though alignment will be as requested). If @partial is zero, * the return value will either be 'count' or negative. */ -int qman_alloc_pool_range(u32 *result, u32 count, u32 align, int partial); +int __rte_internal qman_alloc_pool_range(u32 *result, u32 count, u32 align, int partial); static inline int qman_alloc_pool(u32 *result) { int ret = qman_alloc_pool_range(result, 1, 0, 0); @@ -1925,7 +1925,7 @@ void qman_seed_pool_range(u32 id, unsigned int count); * any unspecified parameters) will be used rather than a modify hw hardware * (which only modifies the specified parameters). */ -int qman_create_cgr(struct qman_cgr *cgr, u32 flags, +int __rte_internal qman_create_cgr(struct qman_cgr *cgr, u32 flags, struct qm_mcc_initcgr *opts); /** @@ -1947,7 +1947,7 @@ int qman_create_cgr_to_dcp(struct qman_cgr *cgr, u32 flags, u16 dcp_portal, * is executed. 
This must be excuted on the same affine portal on which it was * created. */ -int qman_delete_cgr(struct qman_cgr *cgr); +int __rte_internal qman_delete_cgr(struct qman_cgr *cgr); /** * qman_modify_cgr - Modify CGR fields @@ -1963,7 +1963,7 @@ int qman_delete_cgr(struct qman_cgr *cgr); * unspecified parameters) will be used rather than a modify hw hardware (which * only modifies the specified parameters). */ -int qman_modify_cgr(struct qman_cgr *cgr, u32 flags, +int __rte_internal qman_modify_cgr(struct qman_cgr *cgr, u32 flags, struct qm_mcc_initcgr *opts); /** @@ -1991,7 +1991,7 @@ int qman_query_congestion(struct qm_mcr_querycongestion *congestion); * than requested (though alignment will be as requested). If @partial is zero, * the return value will either be 'count' or negative. */ -int qman_alloc_cgrid_range(u32 *result, u32 count, u32 align, int partial); +int __rte_internal qman_alloc_cgrid_range(u32 *result, u32 count, u32 align, int partial); static inline int qman_alloc_cgrid(u32 *result) { int ret = qman_alloc_cgrid_range(result, 1, 0, 0); @@ -2004,7 +2004,7 @@ static inline int qman_alloc_cgrid(u32 *result) * @id: the base CGR ID of the range to deallocate * @count: the number of CGR IDs in the range */ -void qman_release_cgrid_range(u32 id, unsigned int count); +void __rte_internal qman_release_cgrid_range(u32 id, unsigned int count); static inline void qman_release_cgrid(u32 id) { qman_release_cgrid_range(id, 1); diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h index ec1ab7cee..062c0ce73 100644 --- a/drivers/bus/dpaa/include/fsl_usd.h +++ b/drivers/bus/dpaa/include/fsl_usd.h @@ -56,7 +56,7 @@ int bman_allocate_raw_portal(struct dpaa_raw_portal *portal); int bman_free_raw_portal(struct dpaa_raw_portal *portal); /* Obtain thread-local UIO file-descriptors */ -int qman_thread_fd(void); +int __rte_internal qman_thread_fd(void); int bman_thread_fd(void); /* Post-process interrupts. NB, the kernel IRQ handler disables the interrupt @@ -64,14 +64,14 @@ int bman_thread_fd(void); * processing is complete. As such, it is essential to call this before going * into another blocking read/select/poll. */ -void qman_thread_irq(void); -void bman_thread_irq(void); +void __rte_internal qman_thread_irq(void); +void __rte_internal bman_thread_irq(void); -void qman_clear_irq(void); +void __rte_internal qman_clear_irq(void); /* Global setup */ -int qman_global_init(void); -int bman_global_init(void); +int __rte_internal qman_global_init(void); +int __rte_internal bman_global_init(void); /* Direct portal create and destroy */ struct qman_portal *fsl_qman_portal_create(void); diff --git a/drivers/bus/dpaa/include/netcfg.h b/drivers/bus/dpaa/include/netcfg.h index 7818de68b..b9da869ae 100644 --- a/drivers/bus/dpaa/include/netcfg.h +++ b/drivers/bus/dpaa/include/netcfg.h @@ -46,12 +46,12 @@ struct netcfg_interface { * cfg_file: FMC config XML file * Returns the configuration information in newly allocated memory. */ -struct netcfg_info *netcfg_acquire(void); +struct netcfg_info __rte_internal *netcfg_acquire(void); /* cfg_ptr: configuration information pointer. * Frees the resources allocated by the configuration layer. */ -void netcfg_release(struct netcfg_info *cfg_ptr); +void __rte_internal netcfg_release(struct netcfg_info *cfg_ptr); #ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER /* cfg_ptr: configuration information pointer. 
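The hunks above follow a consistent pattern for where the tag is placed; a short, hedged sketch of that pattern with made-up names (the real prototypes are the dpaa ones shown in the diff):

    /* Sketch of the tagging pattern used in this patch (hypothetical names). */
    #include <rte_compat.h>

    /* Scalar return type: the tag sits between the return type and the name,
     * e.g. "void __rte_internal fman_if_enable_rx(...)" in the hunks above. */
    int __rte_internal dpaa_example_enable(int port_id);

    /* Pointer return type: the tag is attached after the pointed-to type,
     * mirroring "struct bman_pool __rte_internal *bman_new_pool(...)" above. */
    struct dpaa_example_pool;
    struct dpaa_example_pool __rte_internal *dpaa_example_pool_get(void);
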
diff --git a/drivers/bus/dpaa/include/of.h b/drivers/bus/dpaa/include/of.h index 7ea7608fc..d1cb2f38f 100644 --- a/drivers/bus/dpaa/include/of.h +++ b/drivers/bus/dpaa/include/of.h @@ -87,7 +87,7 @@ struct dt_file { uint64_t buf[OF_FILE_BUF_MAX >> 3]; }; -const struct device_node *of_find_compatible_node( +const __rte_internal struct device_node *of_find_compatible_node( const struct device_node *from, const char *type __always_unused, const char *compatible) @@ -98,7 +98,7 @@ const struct device_node *of_find_compatible_node( dev_node != NULL; \ dev_node = of_find_compatible_node(dev_node, type, compatible)) -const void *of_get_property(const struct device_node *from, const char *name, +const __rte_internal void *of_get_property(const struct device_node *from, const char *name, size_t *lenp) __attribute__((nonnull(2))); bool of_device_is_available(const struct device_node *dev_node); @@ -109,7 +109,7 @@ const struct device_node *of_get_parent(const struct device_node *dev_node); const struct device_node *of_get_next_child(const struct device_node *dev_node, const struct device_node *prev); -const void *of_get_mac_address(const struct device_node *np); +const void __rte_internal *of_get_mac_address(const struct device_node *np); #define for_each_child_node(parent, child) \ for (child = of_get_next_child(parent, NULL); child != NULL; \ diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map index c88deaf7f..4d1b10bca 100644 --- a/drivers/bus/dpaa/rte_bus_dpaa_version.map +++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map @@ -1,4 +1,4 @@ -DPDK_17.11 { +INTERNAL { global: bman_acquire; @@ -57,17 +57,6 @@ DPDK_17.11 { qman_set_vdq; qman_reserve_fqid_range; qman_volatile_dequeue; - rte_dpaa_driver_register; - rte_dpaa_driver_unregister; - rte_dpaa_mem_ptov; - rte_dpaa_portal_init; - - local: *; -}; - -DPDK_18.02 { - global: - dpaa_logtype_eventdev; dpaa_svr_family; per_lcore_dpaa_io; @@ -87,23 +76,10 @@ DPDK_18.02 { qman_release_cgrid_range; qman_retire_fq; qman_static_dequeue_add; - rte_dpaa_portal_fq_close; - rte_dpaa_portal_fq_init; - - local: *; -} DPDK_17.11; - -DPDK_18.08 { - global: fman_if_get_sg_enable; fman_if_set_sg; of_get_mac_address; - local: *; -} DPDK_18.02; - -DPDK_18.11 { - global: bman_thread_irq; fman_if_get_sg_enable; fman_if_set_sg; @@ -113,13 +89,26 @@ DPDK_18.11 { qman_irqsource_remove; qman_thread_fd; qman_thread_irq; + qman_set_fq_lookup_table; +}; + +DPDK_17.11 { + global: + + rte_dpaa_driver_register; + rte_dpaa_driver_unregister; + rte_dpaa_mem_ptov; + rte_dpaa_portal_init; local: *; -} DPDK_18.08; +}; -DPDK_19.05 { +DPDK_18.02 { global: - qman_set_fq_lookup_table; + + rte_dpaa_portal_fq_close; + rte_dpaa_portal_fq_init; local: *; -} DPDK_18.11; +} DPDK_17.11; + diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c index 2e043feb2..33a20ddc5 100644 --- a/drivers/net/dpaa/dpaa_ethdev.c +++ b/drivers/net/dpaa/dpaa_ethdev.c @@ -694,7 +694,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, } int -dpaa_eth_eventq_attach(const struct rte_eth_dev *dev, +__rte_internal dpaa_eth_eventq_attach(const struct rte_eth_dev *dev, int eth_rx_queue_id, u16 ch_id, const struct rte_event_eth_rx_adapter_queue_conf *queue_conf) @@ -758,7 +758,7 @@ dpaa_eth_eventq_attach(const struct rte_eth_dev *dev, } int -dpaa_eth_eventq_detach(const struct rte_eth_dev *dev, +__rte_internal dpaa_eth_eventq_detach(const struct rte_eth_dev *dev, int eth_rx_queue_id) { struct qm_mcc_initfq opts; diff --git 
a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h index e906a0bec..503182c39 100644 --- a/drivers/net/dpaa/dpaa_ethdev.h +++ b/drivers/net/dpaa/dpaa_ethdev.h @@ -166,13 +166,13 @@ struct dpaa_if_stats { }; int -dpaa_eth_eventq_attach(const struct rte_eth_dev *dev, +__rte_internal dpaa_eth_eventq_attach(const struct rte_eth_dev *dev, int eth_rx_queue_id, u16 ch_id, const struct rte_event_eth_rx_adapter_queue_conf *queue_conf); int -dpaa_eth_eventq_detach(const struct rte_eth_dev *dev, +__rte_internal dpaa_eth_eventq_detach(const struct rte_eth_dev *dev, int eth_rx_queue_id); enum qman_cb_dqrr_result diff --git a/drivers/net/dpaa/rte_pmd_dpaa_version.map b/drivers/net/dpaa/rte_pmd_dpaa_version.map index 8cb4500b5..3a3d35c57 100644 --- a/drivers/net/dpaa/rte_pmd_dpaa_version.map +++ b/drivers/net/dpaa/rte_pmd_dpaa_version.map @@ -5,8 +5,10 @@ DPDK_17.11 { DPDK_18.08 { global: - - dpaa_eth_eventq_attach; - dpaa_eth_eventq_detach; rte_pmd_dpaa_set_tx_loopback; } DPDK_17.11; + +INTERNAL { + dpaa_eth_eventq_attach; + dpaa_eth_eventq_detach; +}; From patchwork Thu Jun 13 14:23:39 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Neil Horman X-Patchwork-Id: 54781 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 227B91D635; Thu, 13 Jun 2019 16:24:28 +0200 (CEST) Received: from smtp.tuxdriver.com (charlotte.tuxdriver.com [70.61.120.58]) by dpdk.org (Postfix) with ESMTP id 7F8571D5E7 for ; Thu, 13 Jun 2019 16:24:18 +0200 (CEST) Received: from [107.15.85.130] (helo=hmswarspite.think-freely.org) by smtp.tuxdriver.com with esmtpsa (TLSv1:AES256-SHA:256) (Exim 4.63) (envelope-from ) id 1hbQdx-0000g1-7I; Thu, 13 Jun 2019 10:24:16 -0400 Received: from hmswarspite.think-freely.org (localhost [127.0.0.1]) by hmswarspite.think-freely.org (8.15.2/8.15.2) with ESMTP id x5DENpo0009355; Thu, 13 Jun 2019 10:23:51 -0400 Received: (from nhorman@localhost) by hmswarspite.think-freely.org (8.15.2/8.15.2/Submit) id x5DENoGb009340; Thu, 13 Jun 2019 10:23:50 -0400 From: Neil Horman To: dev@dpdk.org Cc: Neil Horman , Jerin Jacob Kollanukkaran , Bruce Richardson , Thomas Monjalon , Hemant Agrawal , Shreyansh Jain Date: Thu, 13 Jun 2019 10:23:39 -0400 Message-Id: <20190613142344.9188-6-nhorman@tuxdriver.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190613142344.9188-1-nhorman@tuxdriver.com> References: <20190525184346.27932-1-nhorman@tuxdriver.com> <20190613142344.9188-1-nhorman@tuxdriver.com> MIME-Version: 1.0 X-Spam-Score: -2.9 (--) X-Spam-Status: No Subject: [dpdk-dev] [PATCH v2 05/10] fslmc: identify internal only functions and tag them as __rte_internal X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Identify functions in fslmc bus driver which are internal (based on their not having an rte_ prefix) and tag them with __rte_internal Signed-off-by: Neil Horman CC: Jerin Jacob Kollanukkaran CC: Bruce Richardson CC: Thomas Monjalon CC: Hemant Agrawal CC: Shreyansh Jain --- drivers/bus/fslmc/fslmc_bus.c | 2 +- drivers/bus/fslmc/mc/dpbp.c | 12 +- drivers/bus/fslmc/mc/dpci.c | 6 +- drivers/bus/fslmc/mc/dpcon.c | 4 +- drivers/bus/fslmc/mc/dpdmai.c | 16 +- drivers/bus/fslmc/mc/dpio.c | 18 +- 
drivers/bus/fslmc/mc/dpmng.c | 4 +- drivers/bus/fslmc/mc/fsl_dpbp.h | 12 +- drivers/bus/fslmc/mc/fsl_dpci.h | 6 +- drivers/bus/fslmc/mc/fsl_dpcon.h | 4 +- drivers/bus/fslmc/mc/fsl_dpdmai.h | 16 +- drivers/bus/fslmc/mc/fsl_dpio.h | 18 +- drivers/bus/fslmc/mc/fsl_dpmng.h | 4 +- drivers/bus/fslmc/mc/fsl_mc_cmd.h | 4 +- drivers/bus/fslmc/mc/mc_sys.c | 2 +- drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c | 6 +- drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 12 +- drivers/bus/fslmc/portal/dpaa2_hw_dpio.h | 10 +- drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 6 +- .../bus/fslmc/qbman/include/fsl_qbman_debug.h | 4 +- .../fslmc/qbman/include/fsl_qbman_portal.h | 80 +++---- drivers/bus/fslmc/qbman/qbman_debug.c | 4 +- drivers/bus/fslmc/qbman/qbman_portal.c | 82 +++---- drivers/bus/fslmc/rte_bus_fslmc_version.map | 201 +++++++++--------- 24 files changed, 269 insertions(+), 264 deletions(-) diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c index f6e66d22c..777fb66ef 100644 --- a/drivers/bus/fslmc/fslmc_bus.c +++ b/drivers/bus/fslmc/fslmc_bus.c @@ -28,7 +28,7 @@ int dpaa2_logtype_bus; #define FSLMC_BUS_NAME fslmc struct rte_fslmc_bus rte_fslmc_bus; -uint8_t dpaa2_virt_mode; +uint8_t __rte_internal dpaa2_virt_mode; uint32_t rte_fslmc_get_device_count(enum rte_dpaa2_dev_type device_type) diff --git a/drivers/bus/fslmc/mc/dpbp.c b/drivers/bus/fslmc/mc/dpbp.c index d9103409c..0a68e5d50 100644 --- a/drivers/bus/fslmc/mc/dpbp.c +++ b/drivers/bus/fslmc/mc/dpbp.c @@ -26,7 +26,7 @@ * * Return: '0' on Success; Error code otherwise. */ -int dpbp_open(struct fsl_mc_io *mc_io, +int __rte_internal dpbp_open(struct fsl_mc_io *mc_io, uint32_t cmd_flags, int dpbp_id, uint16_t *token) @@ -157,7 +157,7 @@ int dpbp_destroy(struct fsl_mc_io *mc_io, * * Return: '0' on Success; Error code otherwise. */ -int dpbp_enable(struct fsl_mc_io *mc_io, +int __rte_internal dpbp_enable(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token) { @@ -179,7 +179,7 @@ int dpbp_enable(struct fsl_mc_io *mc_io, * * Return: '0' on Success; Error code otherwise. */ -int dpbp_disable(struct fsl_mc_io *mc_io, +int __rte_internal dpbp_disable(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token) { @@ -235,7 +235,7 @@ int dpbp_is_enabled(struct fsl_mc_io *mc_io, * * Return: '0' on Success; Error code otherwise. */ -int dpbp_reset(struct fsl_mc_io *mc_io, +int __rte_internal dpbp_reset(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token) { @@ -258,7 +258,7 @@ int dpbp_reset(struct fsl_mc_io *mc_io, * * Return: '0' on Success; Error code otherwise. */ -int dpbp_get_attributes(struct fsl_mc_io *mc_io, +int __rte_internal dpbp_get_attributes(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token, struct dpbp_attr *attr) @@ -329,7 +329,7 @@ int dpbp_get_api_version(struct fsl_mc_io *mc_io, * Return: '0' on Success; Error code otherwise. */ -int dpbp_get_num_free_bufs(struct fsl_mc_io *mc_io, +int __rte_internal dpbp_get_num_free_bufs(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token, uint32_t *num_free_bufs) diff --git a/drivers/bus/fslmc/mc/dpci.c b/drivers/bus/fslmc/mc/dpci.c index 2874a6196..8e1f72b60 100644 --- a/drivers/bus/fslmc/mc/dpci.c +++ b/drivers/bus/fslmc/mc/dpci.c @@ -314,7 +314,7 @@ int dpci_get_attributes(struct fsl_mc_io *mc_io, * * Return: '0' on Success; Error code otherwise. 
*/ -int dpci_set_rx_queue(struct fsl_mc_io *mc_io, +int __rte_internal dpci_set_rx_queue(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token, uint8_t priority, @@ -476,7 +476,7 @@ int dpci_get_api_version(struct fsl_mc_io *mc_io, * * Return: '0' on Success; Error code otherwise. */ -int dpci_set_opr(struct fsl_mc_io *mc_io, +int __rte_internal dpci_set_opr(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token, uint8_t index, @@ -514,7 +514,7 @@ int dpci_set_opr(struct fsl_mc_io *mc_io, * * Return: '0' on Success; Error code otherwise. */ -int dpci_get_opr(struct fsl_mc_io *mc_io, +int __rte_internal dpci_get_opr(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token, uint8_t index, diff --git a/drivers/bus/fslmc/mc/dpcon.c b/drivers/bus/fslmc/mc/dpcon.c index 3f6e04b97..dfa5f96a7 100644 --- a/drivers/bus/fslmc/mc/dpcon.c +++ b/drivers/bus/fslmc/mc/dpcon.c @@ -25,7 +25,7 @@ * * Return: '0' on Success; Error code otherwise. */ -int dpcon_open(struct fsl_mc_io *mc_io, +int __rte_internal dpcon_open(struct fsl_mc_io *mc_io, uint32_t cmd_flags, int dpcon_id, uint16_t *token) @@ -267,7 +267,7 @@ int dpcon_reset(struct fsl_mc_io *mc_io, * * Return: '0' on Success; Error code otherwise. */ -int dpcon_get_attributes(struct fsl_mc_io *mc_io, +int __rte_internal dpcon_get_attributes(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token, struct dpcon_attr *attr) diff --git a/drivers/bus/fslmc/mc/dpdmai.c b/drivers/bus/fslmc/mc/dpdmai.c index dcb9d516a..466a87ddd 100644 --- a/drivers/bus/fslmc/mc/dpdmai.c +++ b/drivers/bus/fslmc/mc/dpdmai.c @@ -24,7 +24,7 @@ * * Return: '0' on Success; Error code otherwise. */ -int dpdmai_open(struct fsl_mc_io *mc_io, +int __rte_internal dpdmai_open(struct fsl_mc_io *mc_io, uint32_t cmd_flags, int dpdmai_id, uint16_t *token) @@ -62,7 +62,7 @@ int dpdmai_open(struct fsl_mc_io *mc_io, * * Return: '0' on Success; Error code otherwise. */ -int dpdmai_close(struct fsl_mc_io *mc_io, +int __rte_internal dpdmai_close(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token) { @@ -170,7 +170,7 @@ int dpdmai_destroy(struct fsl_mc_io *mc_io, * * Return: '0' on Success; Error code otherwise. */ -int dpdmai_enable(struct fsl_mc_io *mc_io, +int __rte_internal dpdmai_enable(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token) { @@ -193,7 +193,7 @@ int dpdmai_enable(struct fsl_mc_io *mc_io, * * Return: '0' on Success; Error code otherwise. */ -int dpdmai_disable(struct fsl_mc_io *mc_io, +int __rte_internal dpdmai_disable(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token) { @@ -275,7 +275,7 @@ int dpdmai_reset(struct fsl_mc_io *mc_io, * * Return: '0' on Success; Error code otherwise. */ -int dpdmai_get_attributes(struct fsl_mc_io *mc_io, +int __rte_internal dpdmai_get_attributes(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token, struct dpdmai_attr *attr) @@ -318,7 +318,7 @@ int dpdmai_get_attributes(struct fsl_mc_io *mc_io, * * Return: '0' on Success; Error code otherwise. */ -int dpdmai_set_rx_queue(struct fsl_mc_io *mc_io, +int __rte_internal dpdmai_set_rx_queue(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token, uint8_t queue_idx, @@ -360,7 +360,7 @@ int dpdmai_set_rx_queue(struct fsl_mc_io *mc_io, * * Return: '0' on Success; Error code otherwise. 
*/ -int dpdmai_get_rx_queue(struct fsl_mc_io *mc_io, +int __rte_internal dpdmai_get_rx_queue(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token, uint8_t queue_idx, @@ -410,7 +410,7 @@ int dpdmai_get_rx_queue(struct fsl_mc_io *mc_io, * * Return: '0' on Success; Error code otherwise. */ -int dpdmai_get_tx_queue(struct fsl_mc_io *mc_io, +int __rte_internal dpdmai_get_tx_queue(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token, uint8_t queue_idx, diff --git a/drivers/bus/fslmc/mc/dpio.c b/drivers/bus/fslmc/mc/dpio.c index a3382ed14..8008deb2c 100644 --- a/drivers/bus/fslmc/mc/dpio.c +++ b/drivers/bus/fslmc/mc/dpio.c @@ -26,7 +26,7 @@ * * Return: '0' on Success; Error code otherwise. */ -int dpio_open(struct fsl_mc_io *mc_io, +int __rte_internal dpio_open(struct fsl_mc_io *mc_io, uint32_t cmd_flags, int dpio_id, uint16_t *token) @@ -61,7 +61,7 @@ int dpio_open(struct fsl_mc_io *mc_io, * * Return: '0' on Success; Error code otherwise. */ -int dpio_close(struct fsl_mc_io *mc_io, +int __rte_internal dpio_close(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token) { @@ -173,7 +173,7 @@ int dpio_destroy(struct fsl_mc_io *mc_io, * * Return: '0' on Success; Error code otherwise */ -int dpio_enable(struct fsl_mc_io *mc_io, +int __rte_internal dpio_enable(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token) { @@ -196,7 +196,7 @@ int dpio_enable(struct fsl_mc_io *mc_io, * * Return: '0' on Success; Error code otherwise */ -int dpio_disable(struct fsl_mc_io *mc_io, +int __rte_internal dpio_disable(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token) { @@ -253,7 +253,7 @@ int dpio_is_enabled(struct fsl_mc_io *mc_io, * * Return: '0' on Success; Error code otherwise. */ -int dpio_reset(struct fsl_mc_io *mc_io, +int __rte_internal dpio_reset(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token) { @@ -277,7 +277,7 @@ int dpio_reset(struct fsl_mc_io *mc_io, * * Return: '0' on Success; Error code otherwise */ -int dpio_get_attributes(struct fsl_mc_io *mc_io, +int __rte_internal dpio_get_attributes(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token, struct dpio_attr *attr) @@ -322,7 +322,7 @@ int dpio_get_attributes(struct fsl_mc_io *mc_io, * * Return: '0' on Success; Error code otherwise. */ -int dpio_set_stashing_destination(struct fsl_mc_io *mc_io, +int __rte_internal dpio_set_stashing_destination(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token, uint8_t sdest) @@ -386,7 +386,7 @@ int dpio_get_stashing_destination(struct fsl_mc_io *mc_io, * * Return: '0' on Success; Error code otherwise. */ -int dpio_add_static_dequeue_channel(struct fsl_mc_io *mc_io, +int __rte_internal dpio_add_static_dequeue_channel(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token, int dpcon_id, @@ -425,7 +425,7 @@ int dpio_add_static_dequeue_channel(struct fsl_mc_io *mc_io, * * Return: '0' on Success; Error code otherwise. */ -int dpio_remove_static_dequeue_channel(struct fsl_mc_io *mc_io, +int __rte_internal dpio_remove_static_dequeue_channel(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token, int dpcon_id) diff --git a/drivers/bus/fslmc/mc/dpmng.c b/drivers/bus/fslmc/mc/dpmng.c index 277080876..8ad1269ab 100644 --- a/drivers/bus/fslmc/mc/dpmng.c +++ b/drivers/bus/fslmc/mc/dpmng.c @@ -18,7 +18,7 @@ * * Return: '0' on Success; Error code otherwise. 
*/ -int mc_get_version(struct fsl_mc_io *mc_io, +int __rte_internal mc_get_version(struct fsl_mc_io *mc_io, uint32_t cmd_flags, struct mc_version *mc_ver_info) { @@ -57,7 +57,7 @@ int mc_get_version(struct fsl_mc_io *mc_io, * * Return: '0' on Success; Error code otherwise. */ -int mc_get_soc_version(struct fsl_mc_io *mc_io, +int __rte_internal mc_get_soc_version(struct fsl_mc_io *mc_io, uint32_t cmd_flags, struct mc_soc_version *mc_platform_info) { diff --git a/drivers/bus/fslmc/mc/fsl_dpbp.h b/drivers/bus/fslmc/mc/fsl_dpbp.h index 9d405b42c..b4de4c5d1 100644 --- a/drivers/bus/fslmc/mc/fsl_dpbp.h +++ b/drivers/bus/fslmc/mc/fsl_dpbp.h @@ -14,7 +14,7 @@ struct fsl_mc_io; -int dpbp_open(struct fsl_mc_io *mc_io, +int __rte_internal dpbp_open(struct fsl_mc_io *mc_io, uint32_t cmd_flags, int dpbp_id, uint16_t *token); @@ -42,11 +42,11 @@ int dpbp_destroy(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint32_t obj_id); -int dpbp_enable(struct fsl_mc_io *mc_io, +int __rte_internal dpbp_enable(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token); -int dpbp_disable(struct fsl_mc_io *mc_io, +int __rte_internal dpbp_disable(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token); @@ -55,7 +55,7 @@ int dpbp_is_enabled(struct fsl_mc_io *mc_io, uint16_t token, int *en); -int dpbp_reset(struct fsl_mc_io *mc_io, +int __rte_internal dpbp_reset(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token); @@ -70,7 +70,7 @@ struct dpbp_attr { uint16_t bpid; }; -int dpbp_get_attributes(struct fsl_mc_io *mc_io, +int __rte_internal dpbp_get_attributes(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token, struct dpbp_attr *attr); @@ -88,7 +88,7 @@ int dpbp_get_api_version(struct fsl_mc_io *mc_io, uint16_t *major_ver, uint16_t *minor_ver); -int dpbp_get_num_free_bufs(struct fsl_mc_io *mc_io, +int __rte_internal dpbp_get_num_free_bufs(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token, uint32_t *num_free_bufs); diff --git a/drivers/bus/fslmc/mc/fsl_dpci.h b/drivers/bus/fslmc/mc/fsl_dpci.h index cf3d15267..c5d18237f 100644 --- a/drivers/bus/fslmc/mc/fsl_dpci.h +++ b/drivers/bus/fslmc/mc/fsl_dpci.h @@ -180,7 +180,7 @@ struct dpci_rx_queue_cfg { int order_preservation_en; }; -int dpci_set_rx_queue(struct fsl_mc_io *mc_io, +int __rte_internal dpci_set_rx_queue(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token, uint8_t priority, @@ -227,14 +227,14 @@ int dpci_get_api_version(struct fsl_mc_io *mc_io, uint16_t *major_ver, uint16_t *minor_ver); -int dpci_set_opr(struct fsl_mc_io *mc_io, +int __rte_internal dpci_set_opr(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token, uint8_t index, uint8_t options, struct opr_cfg *cfg); -int dpci_get_opr(struct fsl_mc_io *mc_io, +int __rte_internal dpci_get_opr(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token, uint8_t index, diff --git a/drivers/bus/fslmc/mc/fsl_dpcon.h b/drivers/bus/fslmc/mc/fsl_dpcon.h index 36dd5f3c1..b23c77528 100644 --- a/drivers/bus/fslmc/mc/fsl_dpcon.h +++ b/drivers/bus/fslmc/mc/fsl_dpcon.h @@ -19,7 +19,7 @@ struct fsl_mc_io; */ #define DPCON_INVALID_DPIO_ID (int)(-1) -int dpcon_open(struct fsl_mc_io *mc_io, +int __rte_internal dpcon_open(struct fsl_mc_io *mc_io, uint32_t cmd_flags, int dpcon_id, uint16_t *token); @@ -76,7 +76,7 @@ struct dpcon_attr { uint8_t num_priorities; }; -int dpcon_get_attributes(struct fsl_mc_io *mc_io, +int __rte_internal dpcon_get_attributes(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token, struct dpcon_attr *attr); diff --git a/drivers/bus/fslmc/mc/fsl_dpdmai.h 
b/drivers/bus/fslmc/mc/fsl_dpdmai.h index 40469cc13..2f5354161 100644 --- a/drivers/bus/fslmc/mc/fsl_dpdmai.h +++ b/drivers/bus/fslmc/mc/fsl_dpdmai.h @@ -23,12 +23,12 @@ struct fsl_mc_io; */ #define DPDMAI_ALL_QUEUES (uint8_t)(-1) -int dpdmai_open(struct fsl_mc_io *mc_io, +int __rte_internal dpdmai_open(struct fsl_mc_io *mc_io, uint32_t cmd_flags, int dpdmai_id, uint16_t *token); -int dpdmai_close(struct fsl_mc_io *mc_io, +int __rte_internal dpdmai_close(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token); @@ -54,11 +54,11 @@ int dpdmai_destroy(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint32_t object_id); -int dpdmai_enable(struct fsl_mc_io *mc_io, +int __rte_internal dpdmai_enable(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token); -int dpdmai_disable(struct fsl_mc_io *mc_io, +int __rte_internal dpdmai_disable(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token); @@ -82,7 +82,7 @@ struct dpdmai_attr { uint8_t num_of_queues; }; -int dpdmai_get_attributes(struct fsl_mc_io *mc_io, +int __rte_internal dpdmai_get_attributes(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token, struct dpdmai_attr *attr); @@ -148,7 +148,7 @@ struct dpdmai_rx_queue_cfg { }; -int dpdmai_set_rx_queue(struct fsl_mc_io *mc_io, +int __rte_internal dpdmai_set_rx_queue(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token, uint8_t queue_idx, @@ -168,7 +168,7 @@ struct dpdmai_rx_queue_attr { uint32_t fqid; }; -int dpdmai_get_rx_queue(struct fsl_mc_io *mc_io, +int __rte_internal dpdmai_get_rx_queue(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token, uint8_t queue_idx, @@ -184,7 +184,7 @@ struct dpdmai_tx_queue_attr { uint32_t fqid; }; -int dpdmai_get_tx_queue(struct fsl_mc_io *mc_io, +int __rte_internal dpdmai_get_tx_queue(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token, uint8_t queue_idx, diff --git a/drivers/bus/fslmc/mc/fsl_dpio.h b/drivers/bus/fslmc/mc/fsl_dpio.h index 3158f5319..6cf752914 100644 --- a/drivers/bus/fslmc/mc/fsl_dpio.h +++ b/drivers/bus/fslmc/mc/fsl_dpio.h @@ -13,12 +13,12 @@ struct fsl_mc_io; -int dpio_open(struct fsl_mc_io *mc_io, +int __rte_internal dpio_open(struct fsl_mc_io *mc_io, uint32_t cmd_flags, int dpio_id, uint16_t *token); -int dpio_close(struct fsl_mc_io *mc_io, +int __rte_internal dpio_close(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token); @@ -57,11 +57,11 @@ int dpio_destroy(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint32_t object_id); -int dpio_enable(struct fsl_mc_io *mc_io, +int __rte_internal dpio_enable(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token); -int dpio_disable(struct fsl_mc_io *mc_io, +int __rte_internal dpio_disable(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token); @@ -70,11 +70,11 @@ int dpio_is_enabled(struct fsl_mc_io *mc_io, uint16_t token, int *en); -int dpio_reset(struct fsl_mc_io *mc_io, +int __rte_internal dpio_reset(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token); -int dpio_set_stashing_destination(struct fsl_mc_io *mc_io, +int __rte_internal dpio_set_stashing_destination(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token, uint8_t sdest); @@ -84,13 +84,13 @@ int dpio_get_stashing_destination(struct fsl_mc_io *mc_io, uint16_t token, uint8_t *sdest); -int dpio_add_static_dequeue_channel(struct fsl_mc_io *mc_io, +int __rte_internal dpio_add_static_dequeue_channel(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token, int dpcon_id, uint8_t *channel_index); -int dpio_remove_static_dequeue_channel(struct fsl_mc_io *mc_io, +int 
__rte_internal dpio_remove_static_dequeue_channel(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token, int dpcon_id); @@ -119,7 +119,7 @@ struct dpio_attr { uint32_t clk; }; -int dpio_get_attributes(struct fsl_mc_io *mc_io, +int __rte_internal dpio_get_attributes(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token, struct dpio_attr *attr); diff --git a/drivers/bus/fslmc/mc/fsl_dpmng.h b/drivers/bus/fslmc/mc/fsl_dpmng.h index bef2ef095..5cc7601f1 100644 --- a/drivers/bus/fslmc/mc/fsl_dpmng.h +++ b/drivers/bus/fslmc/mc/fsl_dpmng.h @@ -34,7 +34,7 @@ struct mc_version { uint32_t revision; }; -int mc_get_version(struct fsl_mc_io *mc_io, +int __rte_internal mc_get_version(struct fsl_mc_io *mc_io, uint32_t cmd_flags, struct mc_version *mc_ver_info); @@ -48,7 +48,7 @@ struct mc_soc_version { uint32_t pvr; }; -int mc_get_soc_version(struct fsl_mc_io *mc_io, +int __rte_internal mc_get_soc_version(struct fsl_mc_io *mc_io, uint32_t cmd_flags, struct mc_soc_version *mc_platform_info); #endif /* __FSL_DPMNG_H */ diff --git a/drivers/bus/fslmc/mc/fsl_mc_cmd.h b/drivers/bus/fslmc/mc/fsl_mc_cmd.h index ac919610c..2376c0d47 100644 --- a/drivers/bus/fslmc/mc/fsl_mc_cmd.h +++ b/drivers/bus/fslmc/mc/fsl_mc_cmd.h @@ -10,6 +10,8 @@ #include #include +#include "rte_compat.h" + #define MC_CMD_NUM_OF_PARAMS 7 #define phys_addr_t uint64_t @@ -80,7 +82,7 @@ enum mc_cmd_status { #define MC_CMD_HDR_FLAGS_MASK 0xFF00FF00 -int mc_send_command(struct fsl_mc_io *mc_io, struct mc_command *cmd); +int __rte_internal mc_send_command(struct fsl_mc_io *mc_io, struct mc_command *cmd); static inline uint64_t mc_encode_cmd_header(uint16_t cmd_id, uint32_t cmd_flags, diff --git a/drivers/bus/fslmc/mc/mc_sys.c b/drivers/bus/fslmc/mc/mc_sys.c index efafdc310..35274a7e8 100644 --- a/drivers/bus/fslmc/mc/mc_sys.c +++ b/drivers/bus/fslmc/mc/mc_sys.c @@ -51,7 +51,7 @@ static int mc_status_to_error(enum mc_cmd_status status) return -EINVAL; } -int mc_send_command(struct fsl_mc_io *mc_io, struct mc_command *cmd) +int __rte_internal mc_send_command(struct fsl_mc_io *mc_io, struct mc_command *cmd) { enum mc_cmd_status status; uint64_t response; diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c index db49d637f..9cb0923b6 100644 --- a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c +++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c @@ -89,7 +89,7 @@ dpaa2_create_dpbp_device(int vdev_fd __rte_unused, return 0; } -struct dpaa2_dpbp_dev *dpaa2_alloc_dpbp_dev(void) +struct dpaa2_dpbp_dev *__rte_internal dpaa2_alloc_dpbp_dev(void) { struct dpaa2_dpbp_dev *dpbp_dev = NULL; @@ -102,7 +102,7 @@ struct dpaa2_dpbp_dev *dpaa2_alloc_dpbp_dev(void) return dpbp_dev; } -void dpaa2_free_dpbp_dev(struct dpaa2_dpbp_dev *dpbp) +void __rte_internal dpaa2_free_dpbp_dev(struct dpaa2_dpbp_dev *dpbp) { struct dpaa2_dpbp_dev *dpbp_dev = NULL; @@ -115,7 +115,7 @@ void dpaa2_free_dpbp_dev(struct dpaa2_dpbp_dev *dpbp) } } -int dpaa2_dpbp_supported(void) +int __rte_internal dpaa2_dpbp_supported(void) { if (TAILQ_EMPTY(&dpbp_dev_list)) return -1; diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c index 7bcbde840..5e403f2a0 100644 --- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c +++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c @@ -229,7 +229,7 @@ dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev, int lcoreid) return 0; } -static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(int lcoreid) +static struct dpaa2_dpio_dev *__rte_internal dpaa2_get_qbman_swp(int lcoreid) { struct 
dpaa2_dpio_dev *dpio_dev = NULL; int ret; @@ -253,7 +253,7 @@ static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(int lcoreid) } int -dpaa2_affine_qbman_swp(void) +__rte_internal dpaa2_affine_qbman_swp(void) { unsigned int lcore_id = rte_lcore_id(); uint64_t tid = syscall(SYS_gettid); @@ -301,7 +301,7 @@ dpaa2_affine_qbman_swp(void) } int -dpaa2_affine_qbman_ethrx_swp(void) +__rte_internal dpaa2_affine_qbman_ethrx_swp(void) { unsigned int lcore_id = rte_lcore_id(); uint64_t tid = syscall(SYS_gettid); @@ -570,7 +570,7 @@ dpaa2_create_dpio_device(int vdev_fd, } void -dpaa2_free_dq_storage(struct queue_storage_info_t *q_storage) +__rte_internal dpaa2_free_dq_storage(struct queue_storage_info_t *q_storage) { int i = 0; @@ -581,7 +581,7 @@ dpaa2_free_dq_storage(struct queue_storage_info_t *q_storage) } int -dpaa2_alloc_dq_storage(struct queue_storage_info_t *q_storage) +__rte_internal dpaa2_alloc_dq_storage(struct queue_storage_info_t *q_storage) { int i = 0; @@ -601,7 +601,7 @@ dpaa2_alloc_dq_storage(struct queue_storage_info_t *q_storage) } uint32_t -dpaa2_free_eq_descriptors(void) +__rte_internal dpaa2_free_eq_descriptors(void) { struct dpaa2_dpio_dev *dpio_dev = DPAA2_PER_LCORE_DPIO; struct qbman_result *eqresp; diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h index 17e7e4fad..6f847ed57 100644 --- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h +++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h @@ -38,21 +38,21 @@ extern uint8_t dpaa2_eqcr_size; extern struct dpaa2_io_portal_t dpaa2_io_portal[RTE_MAX_LCORE]; /* Affine a DPIO portal to current processing thread */ -int dpaa2_affine_qbman_swp(void); +int __rte_internal dpaa2_affine_qbman_swp(void); /* Affine additional DPIO portal to current crypto processing thread */ -int dpaa2_affine_qbman_ethrx_swp(void); +int __rte_internal dpaa2_affine_qbman_ethrx_swp(void); /* allocate memory for FQ - dq storage */ int -dpaa2_alloc_dq_storage(struct queue_storage_info_t *q_storage); +__rte_internal dpaa2_alloc_dq_storage(struct queue_storage_info_t *q_storage); /* free memory for FQ- dq storage */ void -dpaa2_free_dq_storage(struct queue_storage_info_t *q_storage); +__rte_internal dpaa2_free_dq_storage(struct queue_storage_info_t *q_storage); /* free the enqueue response descriptors */ uint32_t -dpaa2_free_eq_descriptors(void); +__rte_internal dpaa2_free_eq_descriptors(void); #endif /* _DPAA2_HW_DPIO_H_ */ diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h index 0cbde8a9b..2f1e5dde2 100644 --- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h +++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h @@ -418,9 +418,9 @@ void set_swp_active_dqs(uint16_t dpio_index, struct qbman_result *dqs) { rte_global_active_dqs_list[dpio_index].global_active_dqs = dqs; } -struct dpaa2_dpbp_dev *dpaa2_alloc_dpbp_dev(void); -void dpaa2_free_dpbp_dev(struct dpaa2_dpbp_dev *dpbp); -int dpaa2_dpbp_supported(void); +struct dpaa2_dpbp_dev *__rte_internal dpaa2_alloc_dpbp_dev(void); +void __rte_internal dpaa2_free_dpbp_dev(struct dpaa2_dpbp_dev *dpbp); +int __rte_internal dpaa2_dpbp_supported(void); struct dpaa2_dpci_dev *rte_dpaa2_alloc_dpci_dev(void); void rte_dpaa2_free_dpci_dev(struct dpaa2_dpci_dev *dpci); diff --git a/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h b/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h index e010b1b6a..69948a13a 100644 --- a/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h +++ b/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h @@ -24,7 +24,7 @@ uint8_t verb; uint8_t 
reserved2[29]; }; -int qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid, +int __rte_internal qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid, struct qbman_fq_query_np_rslt *r); -uint32_t qbman_fq_state_frame_count(const struct qbman_fq_query_np_rslt *r); +uint32_t __rte_internal qbman_fq_state_frame_count(const struct qbman_fq_query_np_rslt *r); uint32_t qbman_fq_state_byte_count(const struct qbman_fq_query_np_rslt *r); diff --git a/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h b/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h index 07b8a4372..d257801c3 100644 --- a/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h +++ b/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h @@ -108,7 +108,7 @@ uint32_t qbman_swp_interrupt_read_status(struct qbman_swp *p); * @p: the given software portal object. * @mask: The value to set in SWP_ISR register. */ -void qbman_swp_interrupt_clear_status(struct qbman_swp *p, uint32_t mask); +void __rte_internal qbman_swp_interrupt_clear_status(struct qbman_swp *p, uint32_t mask); /** * qbman_swp_dqrr_thrshld_read_status() - Get the data in software portal @@ -277,7 +277,7 @@ void qbman_swp_push_get(struct qbman_swp *s, uint8_t channel_idx, int *enabled); * rather by specifying the index (from 0 to 15) that has been mapped to the * desired channel. */ -void qbman_swp_push_set(struct qbman_swp *s, uint8_t channel_idx, int enable); +void __rte_internal qbman_swp_push_set(struct qbman_swp *s, uint8_t channel_idx, int enable); /* ------------------- */ /* Pull-mode dequeuing */ @@ -316,7 +316,7 @@ enum qbman_pull_type_e { * default/starting state. * @d: the pull dequeue descriptor to be cleared. */ -void qbman_pull_desc_clear(struct qbman_pull_desc *d); +void __rte_internal qbman_pull_desc_clear(struct qbman_pull_desc *d); /** * qbman_pull_desc_set_storage()- Set the pull dequeue storage @@ -331,7 +331,7 @@ void qbman_pull_desc_clear(struct qbman_pull_desc *d); * the caller provides in 'storage_phys'), and 'stash' controls whether or not * those writes to main-memory express a cache-warming attribute. */ -void qbman_pull_desc_set_storage(struct qbman_pull_desc *d, +void __rte_internal qbman_pull_desc_set_storage(struct qbman_pull_desc *d, struct qbman_result *storage, uint64_t storage_phys, int stash); @@ -340,7 +340,7 @@ void qbman_pull_desc_set_storage(struct qbman_pull_desc *d, * @d: the pull dequeue descriptor to be set. * @numframes: number of frames to be set, must be between 1 and 16, inclusive. */ -void qbman_pull_desc_set_numframes(struct qbman_pull_desc *d, +void __rte_internal qbman_pull_desc_set_numframes(struct qbman_pull_desc *d, uint8_t numframes); /** * qbman_pull_desc_set_token() - Set dequeue token for pull command @@ -363,7 +363,7 @@ void qbman_pull_desc_set_token(struct qbman_pull_desc *d, uint8_t token); * qbman_pull_desc_set_fq() - Set fqid from which the dequeue command dequeues. * @fqid: the frame queue index of the given FQ. */ -void qbman_pull_desc_set_fq(struct qbman_pull_desc *d, uint32_t fqid); +void __rte_internal qbman_pull_desc_set_fq(struct qbman_pull_desc *d, uint32_t fqid); /** * qbman_pull_desc_set_wq() - Set wqid from which the dequeue command dequeues. @@ -398,7 +398,7 @@ void qbman_pull_desc_set_rad(struct qbman_pull_desc *d, int rad); * Return 0 for success, and -EBUSY if the software portal is not ready * to do pull dequeue. 
*/ -int qbman_swp_pull(struct qbman_swp *s, struct qbman_pull_desc *d); +int __rte_internal qbman_swp_pull(struct qbman_swp *s, struct qbman_pull_desc *d); /* -------------------------------- */ /* Polling DQRR for dequeue results */ @@ -412,13 +412,13 @@ int qbman_swp_pull(struct qbman_swp *s, struct qbman_pull_desc *d); * only once, so repeated calls can return a sequence of DQRR entries, without * requiring they be consumed immediately or in any particular order. */ -const struct qbman_result *qbman_swp_dqrr_next(struct qbman_swp *p); +const struct qbman_result *__rte_internal qbman_swp_dqrr_next(struct qbman_swp *p); /** * qbman_swp_prefetch_dqrr_next() - prefetch the next DQRR entry. * @s: the software portal object. */ -void qbman_swp_prefetch_dqrr_next(struct qbman_swp *s); +void __rte_internal qbman_swp_prefetch_dqrr_next(struct qbman_swp *s); /** * qbman_swp_dqrr_consume() - Consume DQRR entries previously returned from @@ -426,14 +426,14 @@ void qbman_swp_prefetch_dqrr_next(struct qbman_swp *s); * @s: the software portal object. * @dq: the DQRR entry to be consumed. */ -void qbman_swp_dqrr_consume(struct qbman_swp *s, const struct qbman_result *dq); +void __rte_internal qbman_swp_dqrr_consume(struct qbman_swp *s, const struct qbman_result *dq); /** * qbman_swp_dqrr_idx_consume() - Given the DQRR index consume the DQRR entry * @s: the software portal object. * @dqrr_index: the DQRR index entry to be consumed. */ -void qbman_swp_dqrr_idx_consume(struct qbman_swp *s, uint8_t dqrr_index); +void __rte_internal qbman_swp_dqrr_idx_consume(struct qbman_swp *s, uint8_t dqrr_index); /** * qbman_get_dqrr_idx() - Get dqrr index from the given dqrr @@ -441,7 +441,7 @@ void qbman_swp_dqrr_idx_consume(struct qbman_swp *s, uint8_t dqrr_index); * * Return dqrr index. */ -uint8_t qbman_get_dqrr_idx(const struct qbman_result *dqrr); +uint8_t __rte_internal qbman_get_dqrr_idx(const struct qbman_result *dqrr); /** * qbman_get_dqrr_from_idx() - Use index to get the dqrr entry from the @@ -451,7 +451,7 @@ uint8_t qbman_get_dqrr_idx(const struct qbman_result *dqrr); * * Return dqrr entry object. */ -struct qbman_result *qbman_get_dqrr_from_idx(struct qbman_swp *s, uint8_t idx); +struct qbman_result *__rte_internal qbman_get_dqrr_from_idx(struct qbman_swp *s, uint8_t idx); /* ------------------------------------------------- */ /* Polling user-provided storage for dequeue results */ @@ -476,7 +476,7 @@ struct qbman_result *qbman_get_dqrr_from_idx(struct qbman_swp *s, uint8_t idx); * Return 1 for getting a valid dequeue result, or 0 for not getting a valid * dequeue result. */ -int qbman_result_has_new_result(struct qbman_swp *s, +int __rte_internal qbman_result_has_new_result(struct qbman_swp *s, struct qbman_result *dq); /** @@ -488,9 +488,9 @@ int qbman_result_has_new_result(struct qbman_swp *s, * Return 1 for getting a valid dequeue result, or 0 for not getting a valid * dequeue result. */ -int qbman_check_command_complete(struct qbman_result *dq); +int __rte_internal qbman_check_command_complete(struct qbman_result *dq); -int qbman_check_new_result(struct qbman_result *dq); +int __rte_internal qbman_check_new_result(struct qbman_result *dq); /* -------------------------------------------------------- */ /* Parsing dequeue entries (DQRR and user-provided storage) */ @@ -649,7 +649,7 @@ static inline int qbman_result_DQ_is_pull_complete( * * Return seqnum.
*/ -uint16_t qbman_result_DQ_seqnum(const struct qbman_result *dq); +uint16_t __rte_internal qbman_result_DQ_seqnum(const struct qbman_result *dq); /** * qbman_result_DQ_odpid() - Get the seqnum field in dequeue response @@ -658,7 +658,7 @@ uint16_t qbman_result_DQ_seqnum(const struct qbman_result *dq); * * Return odpid. */ -uint16_t qbman_result_DQ_odpid(const struct qbman_result *dq); +uint16_t __rte_internal qbman_result_DQ_odpid(const struct qbman_result *dq); /** * qbman_result_DQ_fqid() - Get the fqid in dequeue response @@ -690,7 +690,7 @@ uint32_t qbman_result_DQ_frame_count(const struct qbman_result *dq); * * Return the frame queue context. */ -uint64_t qbman_result_DQ_fqd_ctx(const struct qbman_result *dq); +uint64_t __rte_internal qbman_result_DQ_fqd_ctx(const struct qbman_result *dq); /** * qbman_result_DQ_fd() - Get the frame descriptor in dequeue response @@ -698,7 +698,7 @@ uint64_t qbman_result_DQ_fqd_ctx(const struct qbman_result *dq); * * Return the frame descriptor. */ -const struct qbman_fd *qbman_result_DQ_fd(const struct qbman_result *dq); +const struct qbman_fd *__rte_internal qbman_result_DQ_fd(const struct qbman_result *dq); /* State-change notifications (FQDAN/CDAN/CSCN/...). */ @@ -708,7 +708,7 @@ const struct qbman_fd *qbman_result_DQ_fd(const struct qbman_result *dq); * * Return the state in the notifiation. */ -uint8_t qbman_result_SCN_state(const struct qbman_result *scn); +uint8_t __rte_internal qbman_result_SCN_state(const struct qbman_result *scn); /** * qbman_result_SCN_rid() - Get the resource id from the notification @@ -841,7 +841,7 @@ struct qbman_eq_response { * default/starting state. * @d: the given enqueue descriptor. */ -void qbman_eq_desc_clear(struct qbman_eq_desc *d); +void __rte_internal qbman_eq_desc_clear(struct qbman_eq_desc *d); /* Exactly one of the following descriptor "actions" should be set. (Calling * any one of these will replace the effect of any prior call to one of these.) @@ -861,7 +861,7 @@ void qbman_eq_desc_clear(struct qbman_eq_desc *d); * @response_success: 1 = enqueue with response always; 0 = enqueue with * rejections returned on a FQ. */ -void qbman_eq_desc_set_no_orp(struct qbman_eq_desc *d, int respond_success); +void __rte_internal qbman_eq_desc_set_no_orp(struct qbman_eq_desc *d, int respond_success); /** * qbman_eq_desc_set_orp() - Set order-resotration in the enqueue descriptor * @d: the enqueue descriptor. @@ -872,7 +872,7 @@ void qbman_eq_desc_set_no_orp(struct qbman_eq_desc *d, int respond_success); * @incomplete: indiates whether this is the last fragments using the same * sequeue number. */ -void qbman_eq_desc_set_orp(struct qbman_eq_desc *d, int respond_success, +void __rte_internal qbman_eq_desc_set_orp(struct qbman_eq_desc *d, int respond_success, uint16_t opr_id, uint16_t seqnum, int incomplete); /** @@ -906,7 +906,7 @@ void qbman_eq_desc_set_orp_nesn(struct qbman_eq_desc *d, uint16_t opr_id, * data structure.) 'stash' controls whether or not the write to main-memory * expresses a cache-warming attribute. */ -void qbman_eq_desc_set_response(struct qbman_eq_desc *d, +void __rte_internal qbman_eq_desc_set_response(struct qbman_eq_desc *d, uint64_t storage_phys, int stash); @@ -920,7 +920,7 @@ void qbman_eq_desc_set_response(struct qbman_eq_desc *d, * result "storage" before issuing an enqueue, and use any non-zero 'token' * value. 
*/ -void qbman_eq_desc_set_token(struct qbman_eq_desc *d, uint8_t token); +void __rte_internal qbman_eq_desc_set_token(struct qbman_eq_desc *d, uint8_t token); /** * Exactly one of the following descriptor "targets" should be set. (Calling any @@ -935,7 +935,7 @@ void qbman_eq_desc_set_token(struct qbman_eq_desc *d, uint8_t token); * @d: the enqueue descriptor * @fqid: the id of the frame queue to be enqueued. */ -void qbman_eq_desc_set_fq(struct qbman_eq_desc *d, uint32_t fqid); +void __rte_internal qbman_eq_desc_set_fq(struct qbman_eq_desc *d, uint32_t fqid); /** * qbman_eq_desc_set_qd() - Set Queuing Destination for the enqueue command. @@ -944,7 +944,7 @@ void qbman_eq_desc_set_fq(struct qbman_eq_desc *d, uint32_t fqid); * @qd_bin: the queuing destination bin * @qd_prio: the queuing destination priority. */ -void qbman_eq_desc_set_qd(struct qbman_eq_desc *d, uint32_t qdid, +void __rte_internal qbman_eq_desc_set_qd(struct qbman_eq_desc *d, uint32_t qdid, uint16_t qd_bin, uint8_t qd_prio); /** @@ -969,7 +969,7 @@ void qbman_eq_desc_set_eqdi(struct qbman_eq_desc *d, int enable); * held-active (order-preserving) FQ, whether the FQ should be parked instead of * being rescheduled.) */ -void qbman_eq_desc_set_dca(struct qbman_eq_desc *d, int enable, +void __rte_internal qbman_eq_desc_set_dca(struct qbman_eq_desc *d, int enable, uint8_t dqrr_idx, int park); /** @@ -978,7 +978,7 @@ void qbman_eq_desc_set_dca(struct qbman_eq_desc *d, int enable, * * Return the fd pointer. */ -struct qbman_fd *qbman_result_eqresp_fd(struct qbman_result *eqresp); +struct qbman_fd *__rte_internal qbman_result_eqresp_fd(struct qbman_result *eqresp); /** * qbman_result_eqresp_set_rspid() - Set the response id in enqueue response. @@ -988,7 +988,7 @@ struct qbman_fd *qbman_result_eqresp_fd(struct qbman_result *eqresp); * This value is set into the response id before the enqueue command, which, * get overwritten by qbman once the enqueue command is complete. */ -void qbman_result_eqresp_set_rspid(struct qbman_result *eqresp, uint8_t val); +void __rte_internal qbman_result_eqresp_set_rspid(struct qbman_result *eqresp, uint8_t val); /** * qbman_result_eqresp_rspid() - Get the response id. @@ -1000,7 +1000,7 @@ void qbman_result_eqresp_set_rspid(struct qbman_result *eqresp, uint8_t val); * copied into the enqueue response to determine if the command has been * completed, and response has been updated. */ -uint8_t qbman_result_eqresp_rspid(struct qbman_result *eqresp); +uint8_t __rte_internal qbman_result_eqresp_rspid(struct qbman_result *eqresp); /** * qbman_result_eqresp_rc() - determines if enqueue command is sucessful. @@ -1008,7 +1008,7 @@ uint8_t qbman_result_eqresp_rspid(struct qbman_result *eqresp); * * Return 0 when command is sucessful. */ -uint8_t qbman_result_eqresp_rc(struct qbman_result *eqresp); +uint8_t __rte_internal qbman_result_eqresp_rc(struct qbman_result *eqresp); /** * qbman_swp_enqueue() - Issue an enqueue command. @@ -1034,7 +1034,7 @@ int qbman_swp_enqueue(struct qbman_swp *s, const struct qbman_eq_desc *d, * * Return the number of enqueued frames, -EBUSY if the EQCR is not ready. */ -int qbman_swp_enqueue_multiple(struct qbman_swp *s, +int __rte_internal qbman_swp_enqueue_multiple(struct qbman_swp *s, const struct qbman_eq_desc *d, const struct qbman_fd *fd, uint32_t *flags, @@ -1051,7 +1051,7 @@ int qbman_swp_enqueue_multiple(struct qbman_swp *s, * * Return the number of enqueued frames, -EBUSY if the EQCR is not ready. 
*/ -int qbman_swp_enqueue_multiple_fd(struct qbman_swp *s, +int __rte_internal qbman_swp_enqueue_multiple_fd(struct qbman_swp *s, const struct qbman_eq_desc *d, struct qbman_fd **fd, uint32_t *flags, @@ -1067,7 +1067,7 @@ int qbman_swp_enqueue_multiple_fd(struct qbman_swp *s, * * Return the number of enqueued frames, -EBUSY if the EQCR is not ready. */ -int qbman_swp_enqueue_multiple_desc(struct qbman_swp *s, +int __rte_internal qbman_swp_enqueue_multiple_desc(struct qbman_swp *s, const struct qbman_eq_desc *d, const struct qbman_fd *fd, int num_frames); @@ -1108,13 +1108,13 @@ struct qbman_release_desc { * default/starting state. * @d: the qbman release descriptor. */ -void qbman_release_desc_clear(struct qbman_release_desc *d); +void __rte_internal qbman_release_desc_clear(struct qbman_release_desc *d); /** * qbman_release_desc_set_bpid() - Set the ID of the buffer pool to release to * @d: the qbman release descriptor. */ -void qbman_release_desc_set_bpid(struct qbman_release_desc *d, uint16_t bpid); +void __rte_internal qbman_release_desc_set_bpid(struct qbman_release_desc *d, uint16_t bpid); /** * qbman_release_desc_set_rcdi() - Determines whether or not the portal's RCDI @@ -1132,7 +1132,7 @@ void qbman_release_desc_set_rcdi(struct qbman_release_desc *d, int enable); * * Return 0 for success, -EBUSY if the release command ring is not ready. */ -int qbman_swp_release(struct qbman_swp *s, const struct qbman_release_desc *d, +int __rte_internal qbman_swp_release(struct qbman_swp *s, const struct qbman_release_desc *d, const uint64_t *buffers, unsigned int num_buffers); /* TODO: @@ -1157,7 +1157,7 @@ int qbman_swp_release_thresh(struct qbman_swp *s, unsigned int thresh); * Return 0 for success, or negative error code if the acquire command * fails. 
*/ -int qbman_swp_acquire(struct qbman_swp *s, uint16_t bpid, uint64_t *buffers, +int __rte_internal qbman_swp_acquire(struct qbman_swp *s, uint16_t bpid, uint64_t *buffers, unsigned int num_buffers); /*****************/ diff --git a/drivers/bus/fslmc/qbman/qbman_debug.c b/drivers/bus/fslmc/qbman/qbman_debug.c index 0bb2ce880..dba14a7c4 100644 --- a/drivers/bus/fslmc/qbman/qbman_debug.c +++ b/drivers/bus/fslmc/qbman/qbman_debug.c @@ -23,7 +23,7 @@ struct qbman_fq_query_desc { uint8_t reserved2[57]; }; -int qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid, +int __rte_internal qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid, struct qbman_fq_query_np_rslt *r) { struct qbman_fq_query_desc *p; @@ -54,7 +54,7 @@ int qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid, return 0; } -uint32_t qbman_fq_state_frame_count(const struct qbman_fq_query_np_rslt *r) +uint32_t __rte_internal qbman_fq_state_frame_count(const struct qbman_fq_query_np_rslt *r) { return (r->frm_cnt & 0x00FFFFFF); } diff --git a/drivers/bus/fslmc/qbman/qbman_portal.c b/drivers/bus/fslmc/qbman/qbman_portal.c index 20da8b921..be3ac01e0 100644 --- a/drivers/bus/fslmc/qbman/qbman_portal.c +++ b/drivers/bus/fslmc/qbman/qbman_portal.c @@ -328,7 +328,7 @@ uint32_t qbman_swp_interrupt_read_status(struct qbman_swp *p) return qbman_cinh_read(&p->sys, QBMAN_CINH_SWP_ISR); } -void qbman_swp_interrupt_clear_status(struct qbman_swp *p, uint32_t mask) +void __rte_internal qbman_swp_interrupt_clear_status(struct qbman_swp *p, uint32_t mask) { qbman_cinh_write(&p->sys, QBMAN_CINH_SWP_ISR, mask); } @@ -487,12 +487,12 @@ enum qb_enqueue_commands { #define QB_ENQUEUE_CMD_NLIS_SHIFT 14 #define QB_ENQUEUE_CMD_IS_NESN_SHIFT 15 -void qbman_eq_desc_clear(struct qbman_eq_desc *d) +void __rte_internal qbman_eq_desc_clear(struct qbman_eq_desc *d) { memset(d, 0, sizeof(*d)); } -void qbman_eq_desc_set_no_orp(struct qbman_eq_desc *d, int respond_success) +void __rte_internal qbman_eq_desc_set_no_orp(struct qbman_eq_desc *d, int respond_success) { d->eq.verb &= ~(1 << QB_ENQUEUE_CMD_ORP_ENABLE_SHIFT); if (respond_success) @@ -501,7 +501,7 @@ void qbman_eq_desc_set_no_orp(struct qbman_eq_desc *d, int respond_success) d->eq.verb |= enqueue_rejects_to_fq; } -void qbman_eq_desc_set_orp(struct qbman_eq_desc *d, int respond_success, +void __rte_internal qbman_eq_desc_set_orp(struct qbman_eq_desc *d, int respond_success, uint16_t opr_id, uint16_t seqnum, int incomplete) { d->eq.verb |= 1 << QB_ENQUEUE_CMD_ORP_ENABLE_SHIFT; @@ -540,7 +540,7 @@ void qbman_eq_desc_set_orp_nesn(struct qbman_eq_desc *d, uint16_t opr_id, d->eq.seqnum |= 1 << QB_ENQUEUE_CMD_IS_NESN_SHIFT; } -void qbman_eq_desc_set_response(struct qbman_eq_desc *d, +void __rte_internal qbman_eq_desc_set_response(struct qbman_eq_desc *d, dma_addr_t storage_phys, int stash) { @@ -548,18 +548,18 @@ void qbman_eq_desc_set_response(struct qbman_eq_desc *d, d->eq.wae = stash; } -void qbman_eq_desc_set_token(struct qbman_eq_desc *d, uint8_t token) +void __rte_internal qbman_eq_desc_set_token(struct qbman_eq_desc *d, uint8_t token) { d->eq.rspid = token; } -void qbman_eq_desc_set_fq(struct qbman_eq_desc *d, uint32_t fqid) +void __rte_internal qbman_eq_desc_set_fq(struct qbman_eq_desc *d, uint32_t fqid) { d->eq.verb &= ~(1 << QB_ENQUEUE_CMD_TARGET_TYPE_SHIFT); d->eq.tgtid = fqid; } -void qbman_eq_desc_set_qd(struct qbman_eq_desc *d, uint32_t qdid, +void __rte_internal qbman_eq_desc_set_qd(struct qbman_eq_desc *d, uint32_t qdid, uint16_t qd_bin, uint8_t qd_prio) { d->eq.verb |= 1 << 
QB_ENQUEUE_CMD_TARGET_TYPE_SHIFT; @@ -576,7 +576,7 @@ void qbman_eq_desc_set_eqdi(struct qbman_eq_desc *d, int enable) d->eq.verb &= ~(1 << QB_ENQUEUE_CMD_IRQ_ON_DISPATCH_SHIFT); } -void qbman_eq_desc_set_dca(struct qbman_eq_desc *d, int enable, +void __rte_internal qbman_eq_desc_set_dca(struct qbman_eq_desc *d, int enable, uint8_t dqrr_idx, int park) { if (enable) { @@ -876,7 +876,7 @@ static int qbman_swp_enqueue_multiple_mem_back(struct qbman_swp *s, return num_enqueued; } -inline int qbman_swp_enqueue_multiple(struct qbman_swp *s, +inline int __rte_internal qbman_swp_enqueue_multiple(struct qbman_swp *s, const struct qbman_eq_desc *d, const struct qbman_fd *fd, uint32_t *flags, @@ -1014,7 +1014,7 @@ static int qbman_swp_enqueue_multiple_fd_mem_back(struct qbman_swp *s, return num_enqueued; } -inline int qbman_swp_enqueue_multiple_fd(struct qbman_swp *s, +inline int __rte_internal qbman_swp_enqueue_multiple_fd(struct qbman_swp *s, const struct qbman_eq_desc *d, struct qbman_fd **fd, uint32_t *flags, @@ -1143,7 +1143,7 @@ static int qbman_swp_enqueue_multiple_desc_mem_back(struct qbman_swp *s, return num_enqueued; } -inline int qbman_swp_enqueue_multiple_desc(struct qbman_swp *s, +inline int __rte_internal qbman_swp_enqueue_multiple_desc(struct qbman_swp *s, const struct qbman_eq_desc *d, const struct qbman_fd *fd, int num_frames) @@ -1163,7 +1163,7 @@ void qbman_swp_push_get(struct qbman_swp *s, uint8_t channel_idx, int *enabled) *enabled = src | (1 << channel_idx); } -void qbman_swp_push_set(struct qbman_swp *s, uint8_t channel_idx, int enable) +void __rte_internal qbman_swp_push_set(struct qbman_swp *s, uint8_t channel_idx, int enable) { uint16_t dqsrc; @@ -1200,12 +1200,12 @@ enum qb_pull_dt_e { qb_pull_dt_framequeue }; -void qbman_pull_desc_clear(struct qbman_pull_desc *d) +void __rte_internal qbman_pull_desc_clear(struct qbman_pull_desc *d) { memset(d, 0, sizeof(*d)); } -void qbman_pull_desc_set_storage(struct qbman_pull_desc *d, +void __rte_internal qbman_pull_desc_set_storage(struct qbman_pull_desc *d, struct qbman_result *storage, dma_addr_t storage_phys, int stash) @@ -1225,7 +1225,7 @@ void qbman_pull_desc_set_storage(struct qbman_pull_desc *d, d->pull.rsp_addr = storage_phys; } -void qbman_pull_desc_set_numframes(struct qbman_pull_desc *d, +void __rte_internal qbman_pull_desc_set_numframes(struct qbman_pull_desc *d, uint8_t numframes) { d->pull.numf = numframes - 1; @@ -1236,7 +1236,7 @@ void qbman_pull_desc_set_token(struct qbman_pull_desc *d, uint8_t token) d->pull.tok = token; } -void qbman_pull_desc_set_fq(struct qbman_pull_desc *d, uint32_t fqid) +void __rte_internal qbman_pull_desc_set_fq(struct qbman_pull_desc *d, uint32_t fqid) { d->pull.verb |= 1 << QB_VDQCR_VERB_DCT_SHIFT; d->pull.verb |= qb_pull_dt_framequeue << QB_VDQCR_VERB_DT_SHIFT; @@ -1321,7 +1321,7 @@ static int qbman_swp_pull_mem_back(struct qbman_swp *s, return 0; } -inline int qbman_swp_pull(struct qbman_swp *s, struct qbman_pull_desc *d) +inline int __rte_internal qbman_swp_pull(struct qbman_swp *s, struct qbman_pull_desc *d) { return qbman_swp_pull_ptr(s, d); } @@ -1345,7 +1345,7 @@ inline int qbman_swp_pull(struct qbman_swp *s, struct qbman_pull_desc *d) #include -void qbman_swp_prefetch_dqrr_next(struct qbman_swp *s) +void __rte_internal qbman_swp_prefetch_dqrr_next(struct qbman_swp *s) { const struct qbman_result *p; @@ -1358,7 +1358,7 @@ void qbman_swp_prefetch_dqrr_next(struct qbman_swp *s) * only once, so repeated calls can return a sequence of DQRR entries, without * requiring they be consumed 
immediately or in any particular order. */ -inline const struct qbman_result *qbman_swp_dqrr_next(struct qbman_swp *s) +inline const struct qbman_result *__rte_internal qbman_swp_dqrr_next(struct qbman_swp *s) { return qbman_swp_dqrr_next_ptr(s); } @@ -1483,7 +1483,7 @@ const struct qbman_result *qbman_swp_dqrr_next_mem_back(struct qbman_swp *s) } /* Consume DQRR entries previously returned from qbman_swp_dqrr_next(). */ -void qbman_swp_dqrr_consume(struct qbman_swp *s, +void __rte_internal qbman_swp_dqrr_consume(struct qbman_swp *s, const struct qbman_result *dq) { qbman_cinh_write(&s->sys, @@ -1491,7 +1491,7 @@ void qbman_swp_dqrr_consume(struct qbman_swp *s, } /* Consume DQRR entries previously returned from qbman_swp_dqrr_next(). */ -void qbman_swp_dqrr_idx_consume(struct qbman_swp *s, +void __rte_internal qbman_swp_dqrr_idx_consume(struct qbman_swp *s, uint8_t dqrr_index) { qbman_cinh_write(&s->sys, QBMAN_CINH_SWP_DCAP, dqrr_index); @@ -1501,7 +1501,7 @@ void qbman_swp_dqrr_idx_consume(struct qbman_swp *s, /* Polling user-provided storage */ /*********************************/ -int qbman_result_has_new_result(struct qbman_swp *s, +int __rte_internal qbman_result_has_new_result(struct qbman_swp *s, struct qbman_result *dq) { if (dq->dq.tok == 0) @@ -1529,7 +1529,7 @@ int qbman_result_has_new_result(struct qbman_swp *s, return 1; } -int qbman_check_new_result(struct qbman_result *dq) +int __rte_internal qbman_check_new_result(struct qbman_result *dq) { if (dq->dq.tok == 0) return 0; @@ -1544,7 +1544,7 @@ int qbman_check_new_result(struct qbman_result *dq) return 1; } -int qbman_check_command_complete(struct qbman_result *dq) +int __rte_internal qbman_check_command_complete(struct qbman_result *dq) { struct qbman_swp *s; @@ -1631,17 +1631,17 @@ int qbman_result_is_FQPN(const struct qbman_result *dq) /* These APIs assume qbman_result_is_DQ() is TRUE */ -uint8_t qbman_result_DQ_flags(const struct qbman_result *dq) +uint8_t __rte_internal qbman_result_DQ_flags(const struct qbman_result *dq) { return dq->dq.stat; } -uint16_t qbman_result_DQ_seqnum(const struct qbman_result *dq) +uint16_t __rte_internal qbman_result_DQ_seqnum(const struct qbman_result *dq) { return dq->dq.seqnum; } -uint16_t qbman_result_DQ_odpid(const struct qbman_result *dq) +uint16_t __rte_internal qbman_result_DQ_odpid(const struct qbman_result *dq) { return dq->dq.oprid; } @@ -1661,12 +1661,12 @@ uint32_t qbman_result_DQ_frame_count(const struct qbman_result *dq) return dq->dq.fq_frm_cnt; } -uint64_t qbman_result_DQ_fqd_ctx(const struct qbman_result *dq) +uint64_t __rte_internal qbman_result_DQ_fqd_ctx(const struct qbman_result *dq) { return dq->dq.fqd_ctx; } -const struct qbman_fd *qbman_result_DQ_fd(const struct qbman_result *dq) +const struct qbman_fd *__rte_internal qbman_result_DQ_fd(const struct qbman_result *dq) { return (const struct qbman_fd *)&dq->dq.fd[0]; } @@ -1674,7 +1674,7 @@ const struct qbman_fd *qbman_result_DQ_fd(const struct qbman_result *dq) /**************************************/ /* Parsing state-change notifications */ /**************************************/ -uint8_t qbman_result_SCN_state(const struct qbman_result *scn) +uint8_t __rte_internal qbman_result_SCN_state(const struct qbman_result *scn) { return scn->scn.state; } @@ -1733,22 +1733,22 @@ uint64_t qbman_result_cgcu_icnt(const struct qbman_result *scn) /********************/ /* Parsing EQ RESP */ /********************/ -struct qbman_fd *qbman_result_eqresp_fd(struct qbman_result *eqresp) +struct qbman_fd *__rte_internal
qbman_result_eqresp_fd(struct qbman_result *eqresp) { return (struct qbman_fd *)&eqresp->eq_resp.fd[0]; } -void qbman_result_eqresp_set_rspid(struct qbman_result *eqresp, uint8_t val) +void __rte_internal qbman_result_eqresp_set_rspid(struct qbman_result *eqresp, uint8_t val) { eqresp->eq_resp.rspid = val; } -uint8_t qbman_result_eqresp_rspid(struct qbman_result *eqresp) +uint8_t __rte_internal qbman_result_eqresp_rspid(struct qbman_result *eqresp) { return eqresp->eq_resp.rspid; } -uint8_t qbman_result_eqresp_rc(struct qbman_result *eqresp) +uint8_t __rte_internal qbman_result_eqresp_rc(struct qbman_result *eqresp) { if (eqresp->eq_resp.rc == 0xE) return 0; @@ -1762,13 +1762,13 @@ uint8_t qbman_result_eqresp_rc(struct qbman_result *eqresp) #define QB_BR_RC_VALID_SHIFT 5 #define QB_BR_RCDI_SHIFT 6 -void qbman_release_desc_clear(struct qbman_release_desc *d) +void __rte_internal qbman_release_desc_clear(struct qbman_release_desc *d) { memset(d, 0, sizeof(*d)); d->br.verb = 1 << QB_BR_RC_VALID_SHIFT; } -void qbman_release_desc_set_bpid(struct qbman_release_desc *d, uint16_t bpid) +void __rte_internal qbman_release_desc_set_bpid(struct qbman_release_desc *d, uint16_t bpid) { d->br.bpid = bpid; } @@ -1851,7 +1851,7 @@ static int qbman_swp_release_mem_back(struct qbman_swp *s, return 0; } -inline int qbman_swp_release(struct qbman_swp *s, +inline int __rte_internal qbman_swp_release(struct qbman_swp *s, const struct qbman_release_desc *d, const uint64_t *buffers, unsigned int num_buffers) @@ -1879,7 +1879,7 @@ struct qbman_acquire_rslt { uint64_t buf[7]; }; -int qbman_swp_acquire(struct qbman_swp *s, uint16_t bpid, uint64_t *buffers, +int __rte_internal qbman_swp_acquire(struct qbman_swp *s, uint16_t bpid, uint64_t *buffers, unsigned int num_buffers) { struct qbman_acquire_desc *p; @@ -2097,12 +2097,12 @@ int qbman_swp_CDAN_set_context_enable(struct qbman_swp *s, uint16_t channelid, 1, ctx); } -uint8_t qbman_get_dqrr_idx(const struct qbman_result *dqrr) +uint8_t __rte_internal qbman_get_dqrr_idx(const struct qbman_result *dqrr) { return QBMAN_IDX_FROM_DQRR(dqrr); } -struct qbman_result *qbman_get_dqrr_from_idx(struct qbman_swp *s, uint8_t idx) +struct qbman_result *__rte_internal qbman_get_dqrr_from_idx(struct qbman_swp *s, uint8_t idx) { struct qbman_result *dq; diff --git a/drivers/bus/fslmc/rte_bus_fslmc_version.map b/drivers/bus/fslmc/rte_bus_fslmc_version.map index e86007384..26400a008 100644 --- a/drivers/bus/fslmc/rte_bus_fslmc_version.map +++ b/drivers/bus/fslmc/rte_bus_fslmc_version.map @@ -1,44 +1,108 @@ -DPDK_17.05 { +INTERNAL { global: dpaa2_affine_qbman_swp; - dpaa2_alloc_dpbp_dev; - dpaa2_alloc_dq_storage; - dpaa2_free_dpbp_dev; - dpaa2_free_dq_storage; - dpbp_disable; - dpbp_enable; - dpbp_get_attributes; - dpbp_get_num_free_bufs; - dpbp_open; - dpbp_reset; - dpio_close; - dpio_disable; - dpio_enable; - dpio_get_attributes; - dpio_open; - dpio_reset; - dpio_set_stashing_destination; - mc_send_command; - per_lcore__dpaa2_io; - qbman_check_command_complete; - qbman_eq_desc_clear; - qbman_eq_desc_set_fq; - qbman_eq_desc_set_no_orp; - qbman_eq_desc_set_qd; - qbman_eq_desc_set_response; - qbman_pull_desc_clear; - qbman_pull_desc_set_fq; - qbman_pull_desc_set_numframes; - qbman_pull_desc_set_storage; - qbman_release_desc_clear; - qbman_release_desc_set_bpid; - qbman_result_DQ_fd; - qbman_result_DQ_flags; - qbman_result_has_new_result; - qbman_swp_acquire; - qbman_swp_pull; - qbman_swp_release; + dpaa2_alloc_dpbp_dev; + dpaa2_alloc_dq_storage; + dpaa2_free_dpbp_dev; + 
dpaa2_free_dq_storage; + dpbp_disable; + dpbp_enable; + dpbp_get_attributes; + dpbp_get_num_free_bufs; + dpbp_open; + dpbp_reset; + dpio_close; + dpio_disable; + dpio_enable; + dpio_get_attributes; + dpio_open; + dpio_reset; + dpio_set_stashing_destination; + mc_send_command; + per_lcore__dpaa2_io; + qbman_check_command_complete; + qbman_eq_desc_clear; + qbman_eq_desc_set_fq; + qbman_eq_desc_set_no_orp; + qbman_eq_desc_set_qd; + qbman_eq_desc_set_response; + qbman_pull_desc_clear; + qbman_pull_desc_set_fq; + qbman_pull_desc_set_numframes; + qbman_pull_desc_set_storage; + qbman_release_desc_clear; + qbman_release_desc_set_bpid; + qbman_result_DQ_fd; + qbman_result_DQ_flags; + qbman_result_has_new_result; + qbman_swp_acquire; + qbman_swp_pull; + qbman_swp_release; + + dpaa2_io_portal; + dpaa2_get_qbman_swp; + dpci_set_rx_queue; + dpcon_open; + dpcon_get_attributes; + dpio_add_static_dequeue_channel; + dpio_remove_static_dequeue_channel; + mc_get_soc_version; + mc_get_version; + qbman_check_new_result; + qbman_eq_desc_set_dca; + qbman_get_dqrr_from_idx; + qbman_get_dqrr_idx; + qbman_result_DQ_fqd_ctx; + qbman_result_SCN_state; + qbman_swp_dqrr_consume; + qbman_swp_dqrr_next; + qbman_swp_enqueue_multiple; + qbman_swp_enqueue_multiple_desc; + qbman_swp_interrupt_clear_status; + qbman_swp_push_set; + + dpaa2_dpbp_supported; + + dpaa2_svr_family; + dpaa2_virt_mode; + per_lcore_dpaa2_held_bufs; + qbman_fq_query_state; + qbman_fq_state_frame_count; + qbman_swp_dqrr_idx_consume; + qbman_swp_prefetch_dqrr_next; + + dpaa2_affine_qbman_ethrx_swp; + dpdmai_close; + dpdmai_disable; + dpdmai_enable; + dpdmai_get_attributes; + dpdmai_get_rx_queue; + dpdmai_get_tx_queue; + dpdmai_open; + dpdmai_set_rx_queue; + + dpaa2_dqrr_size; + dpaa2_eqcr_size; + dpci_get_opr; + dpci_set_opr; + + dpaa2_free_eq_descriptors; + + qbman_eq_desc_set_orp; + qbman_eq_desc_set_token; + qbman_result_DQ_odpid; + qbman_result_DQ_seqnum; + qbman_result_eqresp_fd; + qbman_result_eqresp_rc; + qbman_result_eqresp_rspid; + qbman_result_eqresp_set_rspid; + qbman_swp_enqueue_multiple_fd; +}; + +DPDK_17.05 { + global: + rte_fslmc_driver_register; rte_fslmc_driver_unregister; rte_fslmc_vfio_dmamap; @@ -50,27 +114,6 @@ DPDK_17.05 { DPDK_17.08 { global: - dpaa2_io_portal; - dpaa2_get_qbman_swp; - dpci_set_rx_queue; - dpcon_open; - dpcon_get_attributes; - dpio_add_static_dequeue_channel; - dpio_remove_static_dequeue_channel; - mc_get_soc_version; - mc_get_version; - qbman_check_new_result; - qbman_eq_desc_set_dca; - qbman_get_dqrr_from_idx; - qbman_get_dqrr_idx; - qbman_result_DQ_fqd_ctx; - qbman_result_SCN_state; - qbman_swp_dqrr_consume; - qbman_swp_dqrr_next; - qbman_swp_enqueue_multiple; - qbman_swp_enqueue_multiple_desc; - qbman_swp_interrupt_clear_status; - qbman_swp_push_set; rte_dpaa2_alloc_dpci_dev; rte_fslmc_object_register; rte_global_active_dqs_list; @@ -80,7 +123,6 @@ DPDK_17.08 { DPDK_17.11 { global: - dpaa2_dpbp_supported; rte_dpaa2_dev_type; rte_dpaa2_intr_disable; rte_dpaa2_intr_enable; @@ -90,13 +132,6 @@ DPDK_17.11 { DPDK_18.02 { global: - dpaa2_svr_family; - dpaa2_virt_mode; - per_lcore_dpaa2_held_bufs; - qbman_fq_query_state; - qbman_fq_state_frame_count; - qbman_swp_dqrr_idx_consume; - qbman_swp_prefetch_dqrr_next; rte_fslmc_get_device_count; } DPDK_17.11; @@ -104,40 +139,8 @@ DPDK_18.02 { DPDK_18.05 { global: - dpaa2_affine_qbman_ethrx_swp; - dpdmai_close; - dpdmai_disable; - dpdmai_enable; - dpdmai_get_attributes; - dpdmai_get_rx_queue; - dpdmai_get_tx_queue; - dpdmai_open; - dpdmai_set_rx_queue; 
rte_dpaa2_free_dpci_dev; rte_dpaa2_memsegs; } DPDK_18.02; -DPDK_18.11 { - global: - dpaa2_dqrr_size; - dpaa2_eqcr_size; - dpci_get_opr; - dpci_set_opr; - -} DPDK_18.05; - -DPDK_19.05 { - global: - dpaa2_free_eq_descriptors; - - qbman_eq_desc_set_orp; - qbman_eq_desc_set_token; - qbman_result_DQ_odpid; - qbman_result_DQ_seqnum; - qbman_result_eqresp_fd; - qbman_result_eqresp_rc; - qbman_result_eqresp_rspid; - qbman_result_eqresp_set_rspid; - qbman_swp_enqueue_multiple_fd; -} DPDK_18.11; From patchwork Thu Jun 13 14:23:40 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Neil Horman X-Patchwork-Id: 54776 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id DAB661D615; Thu, 13 Jun 2019 16:24:16 +0200 (CEST) Received: from smtp.tuxdriver.com (charlotte.tuxdriver.com [70.61.120.58]) by dpdk.org (Postfix) with ESMTP id ADBB01D5FC for ; Thu, 13 Jun 2019 16:24:08 +0200 (CEST) Received: from [107.15.85.130] (helo=hmswarspite.think-freely.org) by smtp.tuxdriver.com with esmtpsa (TLSv1:AES256-SHA:256) (Exim 4.63) (envelope-from ) id 1hbQdy-0000g2-Uf; Thu, 13 Jun 2019 10:24:03 -0400 Received: from hmswarspite.think-freely.org (localhost [127.0.0.1]) by hmswarspite.think-freely.org (8.15.2/8.15.2) with ESMTP id x5DENp7a009359; Thu, 13 Jun 2019 10:23:51 -0400 Received: (from nhorman@localhost) by hmswarspite.think-freely.org (8.15.2/8.15.2/Submit) id x5DENp7G009358; Thu, 13 Jun 2019 10:23:51 -0400 From: Neil Horman To: dev@dpdk.org Cc: Neil Horman , Jerin Jacob Kollanukkaran , Bruce Richardson , Thomas Monjalon , Hemant Agrawal , Shreyansh Jain Date: Thu, 13 Jun 2019 10:23:40 -0400 Message-Id: <20190613142344.9188-7-nhorman@tuxdriver.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190613142344.9188-1-nhorman@tuxdriver.com> References: <20190525184346.27932-1-nhorman@tuxdriver.com> <20190613142344.9188-1-nhorman@tuxdriver.com> MIME-Version: 1.0 X-Spam-Score: -2.9 (--) X-Spam-Status: No Subject: [dpdk-dev] [PATCH v2 06/10] dpaa2: Adjust dpaa2 driver to mark internal symbols with __rte_internal X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Identify functions in the dpaa2 driver which are internal (based on their not having an rte_ prefix) and tag them with __rte_internal Signed-off-by: Neil Horman CC: Jerin Jacob Kollanukkaran CC: Bruce Richardson CC: Thomas Monjalon CC: Hemant Agrawal CC: Shreyansh Jain --- drivers/net/dpaa2/dpaa2_ethdev.c | 4 ++-- drivers/net/dpaa2/dpaa2_ethdev.h | 4 ++-- drivers/net/dpaa2/rte_pmd_dpaa2_version.map | 14 ++++++-------- 3 files changed, 10 insertions(+), 12 deletions(-) diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c index 900182f66..fafdec1f9 100644 --- a/drivers/net/dpaa2/dpaa2_ethdev.c +++ b/drivers/net/dpaa2/dpaa2_ethdev.c @@ -1798,7 +1798,7 @@ dpaa2_dev_rss_hash_conf_get(struct rte_eth_dev *dev, return 0; } -int dpaa2_eth_eventq_attach(const struct rte_eth_dev *dev, +int __rte_internal dpaa2_eth_eventq_attach(const struct rte_eth_dev *dev, int eth_rx_queue_id, uint16_t dpcon_id, const struct rte_event_eth_rx_adapter_queue_conf *queue_conf) @@ -1881,7 +1881,7 @@ int dpaa2_eth_eventq_attach(const struct rte_eth_dev *dev, return 
0; } -int dpaa2_eth_eventq_detach(const struct rte_eth_dev *dev, +int __rte_internal dpaa2_eth_eventq_detach(const struct rte_eth_dev *dev, int eth_rx_queue_id) { struct dpaa2_dev_priv *eth_priv = dev->data->dev_private; diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h index 33b1506aa..b7d691238 100644 --- a/drivers/net/dpaa2/dpaa2_ethdev.h +++ b/drivers/net/dpaa2/dpaa2_ethdev.h @@ -140,12 +140,12 @@ int dpaa2_remove_flow_dist(struct rte_eth_dev *eth_dev, int dpaa2_attach_bp_list(struct dpaa2_dev_priv *priv, void *blist); -int dpaa2_eth_eventq_attach(const struct rte_eth_dev *dev, +int __rte_internal dpaa2_eth_eventq_attach(const struct rte_eth_dev *dev, int eth_rx_queue_id, uint16_t dpcon_id, const struct rte_event_eth_rx_adapter_queue_conf *queue_conf); -int dpaa2_eth_eventq_detach(const struct rte_eth_dev *dev, +int __rte_internal dpaa2_eth_eventq_detach(const struct rte_eth_dev *dev, int eth_rx_queue_id); uint16_t dpaa2_dev_loopback_rx(void *queue, struct rte_mbuf **bufs, diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2_version.map b/drivers/net/dpaa2/rte_pmd_dpaa2_version.map index d1b4cdb23..fcf0b8ac0 100644 --- a/drivers/net/dpaa2/rte_pmd_dpaa2_version.map +++ b/drivers/net/dpaa2/rte_pmd_dpaa2_version.map @@ -1,15 +1,13 @@ -DPDK_17.05 { - - local: *; -}; - -DPDK_17.11 { +INTERNAL { global: dpaa2_eth_eventq_attach; dpaa2_eth_eventq_detach; +}; -} DPDK_17.05; +DPDK_17.05 { + local: *; +}; EXPERIMENTAL { global: @@ -17,4 +15,4 @@ EXPERIMENTAL { rte_pmd_dpaa2_mux_flow_create; rte_pmd_dpaa2_set_custom_hash; rte_pmd_dpaa2_set_timestamp; -} DPDK_17.11; +} DPDK_17.05; From patchwork Thu Jun 13 14:23:41 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Neil Horman X-Patchwork-Id: 54775 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id AB4521D60E; Thu, 13 Jun 2019 16:24:14 +0200 (CEST) Received: from smtp.tuxdriver.com (charlotte.tuxdriver.com [70.61.120.58]) by dpdk.org (Postfix) with ESMTP id 6AD691D5FB for ; Thu, 13 Jun 2019 16:24:08 +0200 (CEST) Received: from [107.15.85.130] (helo=hmswarspite.think-freely.org) by smtp.tuxdriver.com with esmtpsa (TLSv1:AES256-SHA:256) (Exim 4.63) (envelope-from ) id 1hbQdz-0000g3-KM; Thu, 13 Jun 2019 10:24:05 -0400 Received: from hmswarspite.think-freely.org (localhost [127.0.0.1]) by hmswarspite.think-freely.org (8.15.2/8.15.2) with ESMTP id x5DENra4009364; Thu, 13 Jun 2019 10:23:53 -0400 Received: (from nhorman@localhost) by hmswarspite.think-freely.org (8.15.2/8.15.2/Submit) id x5DENrjD009363; Thu, 13 Jun 2019 10:23:53 -0400 From: Neil Horman To: dev@dpdk.org Cc: Neil Horman , Jerin Jacob Kollanukkaran , Bruce Richardson , Thomas Monjalon , Hemant Agrawal , Shreyansh Jain Date: Thu, 13 Jun 2019 10:23:41 -0400 Message-Id: <20190613142344.9188-8-nhorman@tuxdriver.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190613142344.9188-1-nhorman@tuxdriver.com> References: <20190525184346.27932-1-nhorman@tuxdriver.com> <20190613142344.9188-1-nhorman@tuxdriver.com> MIME-Version: 1.0 X-Spam-Score: -2.9 (--) X-Spam-Status: No Subject: [dpdk-dev] [PATCH v2 07/10] dpaax: mark internal functions with __rte_internal X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: 
dev-bounces@dpdk.org Sender: "dev" Identify functions in the dpaax common library which are internal (based on their not having an rte_ prefix) and tag them with __rte_internal Signed-off-by: Neil Horman CC: Jerin Jacob Kollanukkaran CC: Bruce Richardson CC: Thomas Monjalon CC: Hemant Agrawal CC: Shreyansh Jain --- drivers/common/dpaax/dpaax_iova_table.c | 8 ++++---- drivers/common/dpaax/dpaax_iova_table.h | 8 ++++---- drivers/common/dpaax/rte_common_dpaax_version.map | 4 +++- 3 files changed, 11 insertions(+), 9 deletions(-) diff --git a/drivers/common/dpaax/dpaax_iova_table.c b/drivers/common/dpaax/dpaax_iova_table.c index 2dd38a920..0f6a3c2fe 100644 --- a/drivers/common/dpaax/dpaax_iova_table.c +++ b/drivers/common/dpaax/dpaax_iova_table.c @@ -151,7 +151,7 @@ read_memory_node(unsigned int *count) } int -dpaax_iova_table_populate(void) +__rte_internal dpaax_iova_table_populate(void) { int ret; unsigned int i, node_count; @@ -252,7 +252,7 @@ dpaax_iova_table_populate(void) } void -dpaax_iova_table_depopulate(void) +__rte_internal dpaax_iova_table_depopulate(void) { if (dpaax_iova_table_p == NULL) return; @@ -264,7 +264,7 @@ dpaax_iova_table_depopulate(void) } int -dpaax_iova_table_update(phys_addr_t paddr, void *vaddr, size_t length) +__rte_internal dpaax_iova_table_update(phys_addr_t paddr, void *vaddr, size_t length) { int found = 0; unsigned int i; @@ -348,7 +348,7 @@ dpaax_iova_table_update(phys_addr_t paddr, void *vaddr, size_t length) * Not for weak hearted - the tables can get quite large */ void -dpaax_iova_table_dump(void) +__rte_internal dpaax_iova_table_dump(void) { unsigned int i, j; struct dpaax_iovat_element *entry; diff --git a/drivers/common/dpaax/dpaax_iova_table.h b/drivers/common/dpaax/dpaax_iova_table.h index 138827e7b..f89714d26 100644 --- a/drivers/common/dpaax/dpaax_iova_table.h +++ b/drivers/common/dpaax/dpaax_iova_table.h @@ -59,10 +59,10 @@ extern struct dpaax_iova_table *dpaax_iova_table_p; #define DPAAX_MEM_SPLIT_MASK_OFF (DPAAX_MEM_SPLIT - 1) /**< Offset */ /* APIs exposed */ -int dpaax_iova_table_populate(void); -void dpaax_iova_table_depopulate(void); -int dpaax_iova_table_update(phys_addr_t paddr, void *vaddr, size_t length); -void dpaax_iova_table_dump(void); +int __rte_internal dpaax_iova_table_populate(void); +void __rte_internal dpaax_iova_table_depopulate(void); +int __rte_internal dpaax_iova_table_update(phys_addr_t paddr, void *vaddr, size_t length); +void __rte_internal dpaax_iova_table_dump(void); static inline void *dpaax_iova_table_get_va(phys_addr_t paddr) __attribute__((hot)); diff --git a/drivers/common/dpaax/rte_common_dpaax_version.map b/drivers/common/dpaax/rte_common_dpaax_version.map index 8131c9e30..fbda6d638 100644 --- a/drivers/common/dpaax/rte_common_dpaax_version.map +++ b/drivers/common/dpaax/rte_common_dpaax_version.map @@ -1,4 +1,4 @@ -DPDK_18.11 { +INTERNAL { global: dpaax_iova_table_update; @@ -6,6 +6,8 @@ DPDK_18.11 { dpaax_iova_table_dump; dpaax_iova_table_p; dpaax_iova_table_populate; +}; +DPDK_18.11 { local: *; }; From patchwork Thu Jun 13 14:23:42 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Neil Horman X-Patchwork-Id: 54777 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id EADDE1D61D; Thu, 13 Jun 2019 16:24:18 +0200 (CEST) Received: from smtp.tuxdriver.com (charlotte.tuxdriver.com [70.61.120.58])
by dpdk.org (Postfix) with ESMTP id 068E91D5FB for ; Thu, 13 Jun 2019 16:24:09 +0200 (CEST) Received: from [107.15.85.130] (helo=hmswarspite.think-freely.org) by smtp.tuxdriver.com with esmtpsa (TLSv1:AES256-SHA:256) (Exim 4.63) (envelope-from ) id 1hbQe0-0000g5-BD; Thu, 13 Jun 2019 10:24:06 -0400 Received: from hmswarspite.think-freely.org (localhost [127.0.0.1]) by hmswarspite.think-freely.org (8.15.2/8.15.2) with ESMTP id x5DENsbi009368; Thu, 13 Jun 2019 10:23:54 -0400 Received: (from nhorman@localhost) by hmswarspite.think-freely.org (8.15.2/8.15.2/Submit) id x5DENsrh009367; Thu, 13 Jun 2019 10:23:54 -0400 From: Neil Horman To: dev@dpdk.org Cc: Neil Horman , Jerin Jacob Kollanukkaran , Bruce Richardson , Thomas Monjalon , Anoob Joseph Date: Thu, 13 Jun 2019 10:23:42 -0400 Message-Id: <20190613142344.9188-9-nhorman@tuxdriver.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190613142344.9188-1-nhorman@tuxdriver.com> References: <20190525184346.27932-1-nhorman@tuxdriver.com> <20190613142344.9188-1-nhorman@tuxdriver.com> MIME-Version: 1.0 X-Spam-Score: -2.9 (--) X-Spam-Status: No Subject: [dpdk-dev] [PATCH v2 08/10] cpt: mark internal functions with __rte_internal X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Identify functions in the cpt driver which are internal (based on their not having an rte_ prefix) and tag them with __rte_internal Signed-off-by: Neil Horman CC: Jerin Jacob Kollanukkaran CC: Bruce Richardson CC: Thomas Monjalon CC: Anoob Joseph --- drivers/common/cpt/cpt_pmd_ops_helper.c | 4 ++-- drivers/common/cpt/cpt_pmd_ops_helper.h | 4 ++-- drivers/common/cpt/rte_common_cpt_version.map | 6 +++++- 3 files changed, 9 insertions(+), 5 deletions(-) diff --git a/drivers/common/cpt/cpt_pmd_ops_helper.c b/drivers/common/cpt/cpt_pmd_ops_helper.c index 1c18180f8..d8c7add66 100644 --- a/drivers/common/cpt/cpt_pmd_ops_helper.c +++ b/drivers/common/cpt/cpt_pmd_ops_helper.c @@ -13,7 +13,7 @@ #define CPT_OFFSET_CONTROL_BYTES 8 int32_t -cpt_pmd_ops_helper_get_mlen_direct_mode(void) +__rte_internal cpt_pmd_ops_helper_get_mlen_direct_mode(void) { uint32_t len = 0; @@ -27,7 +27,7 @@ cpt_pmd_ops_helper_get_mlen_direct_mode(void) } int -cpt_pmd_ops_helper_get_mlen_sg_mode(void) +__rte_internal cpt_pmd_ops_helper_get_mlen_sg_mode(void) { uint32_t len = 0; diff --git a/drivers/common/cpt/cpt_pmd_ops_helper.h b/drivers/common/cpt/cpt_pmd_ops_helper.h index dd32f9a40..314e3871b 100644 --- a/drivers/common/cpt/cpt_pmd_ops_helper.h +++ b/drivers/common/cpt/cpt_pmd_ops_helper.h @@ -20,7 +20,7 @@ */ int32_t -cpt_pmd_ops_helper_get_mlen_direct_mode(void); +__rte_internal cpt_pmd_ops_helper_get_mlen_direct_mode(void); /* * Get size of contiguous meta buffer to be allocated when working in scatter @@ -30,5 +30,5 @@ cpt_pmd_ops_helper_get_mlen_direct_mode(void); * - length */ int -cpt_pmd_ops_helper_get_mlen_sg_mode(void); +__rte_internal cpt_pmd_ops_helper_get_mlen_sg_mode(void); #endif /* _CPT_PMD_OPS_HELPER_H_ */ diff --git a/drivers/common/cpt/rte_common_cpt_version.map b/drivers/common/cpt/rte_common_cpt_version.map index dec614f0d..7459d551b 100644 --- a/drivers/common/cpt/rte_common_cpt_version.map +++ b/drivers/common/cpt/rte_common_cpt_version.map @@ -1,6 +1,10 @@ -DPDK_18.11 { +INTERNAL { global: cpt_pmd_ops_helper_get_mlen_direct_mode; cpt_pmd_ops_helper_get_mlen_sg_mode; }; + +DPDK_18.11 { + local: *; +}; From 
patchwork Thu Jun 13 14:23:43 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Neil Horman X-Patchwork-Id: 54778 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 4866A4C8E; Thu, 13 Jun 2019 16:24:22 +0200 (CEST) Received: from smtp.tuxdriver.com (charlotte.tuxdriver.com [70.61.120.58]) by dpdk.org (Postfix) with ESMTP id C39B21D602 for ; Thu, 13 Jun 2019 16:24:11 +0200 (CEST) Received: from [107.15.85.130] (helo=hmswarspite.think-freely.org) by smtp.tuxdriver.com with esmtpsa (TLSv1:AES256-SHA:256) (Exim 4.63) (envelope-from ) id 1hbQe6-0000gL-ED; Thu, 13 Jun 2019 10:24:09 -0400 Received: from hmswarspite.think-freely.org (localhost [127.0.0.1]) by hmswarspite.think-freely.org (8.15.2/8.15.2) with ESMTP id x5DENtu9009373; Thu, 13 Jun 2019 10:23:55 -0400 Received: (from nhorman@localhost) by hmswarspite.think-freely.org (8.15.2/8.15.2/Submit) id x5DENs4h009371; Thu, 13 Jun 2019 10:23:54 -0400 From: Neil Horman To: dev@dpdk.org Cc: Neil Horman , Jerin Jacob Kollanukkaran , Bruce Richardson , Thomas Monjalon Date: Thu, 13 Jun 2019 10:23:43 -0400 Message-Id: <20190613142344.9188-10-nhorman@tuxdriver.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190613142344.9188-1-nhorman@tuxdriver.com> References: <20190525184346.27932-1-nhorman@tuxdriver.com> <20190613142344.9188-1-nhorman@tuxdriver.com> MIME-Version: 1.0 X-Spam-Score: -2.9 (--) X-Spam-Status: No Subject: [dpdk-dev] [PATCH v2 09/10] octeonx: mark internal functions with __rte_internal X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Identify functions in the octeon driver which are internal (based on their not having an rte_ prefix) and tag them with __rte_internal Signed-off-by: Neil Horman CC: Jerin Jacob Kollanukkaran CC: Bruce Richardson CC: Thomas Monjalon --- drivers/common/octeontx/octeontx_mbox.c | 6 +++--- drivers/common/octeontx/octeontx_mbox.h | 6 +++--- drivers/common/octeontx/rte_common_octeontx_version.map | 9 ++++++++- 3 files changed, 14 insertions(+), 7 deletions(-) diff --git a/drivers/common/octeontx/octeontx_mbox.c b/drivers/common/octeontx/octeontx_mbox.c index 880f8a40f..02bb593b8 100644 --- a/drivers/common/octeontx/octeontx_mbox.c +++ b/drivers/common/octeontx/octeontx_mbox.c @@ -190,7 +190,7 @@ mbox_send(struct mbox *m, struct octeontx_mbox_hdr *hdr, const void *txmsg, } int -octeontx_mbox_set_ram_mbox_base(uint8_t *ram_mbox_base) +__rte_internal octeontx_mbox_set_ram_mbox_base(uint8_t *ram_mbox_base) { struct mbox *m = &octeontx_mbox; @@ -213,7 +213,7 @@ octeontx_mbox_set_ram_mbox_base(uint8_t *ram_mbox_base) } int -octeontx_mbox_set_reg(uint8_t *reg) +__rte_internal octeontx_mbox_set_reg(uint8_t *reg) { struct mbox *m = &octeontx_mbox; @@ -236,7 +236,7 @@ octeontx_mbox_set_reg(uint8_t *reg) } int -octeontx_mbox_send(struct octeontx_mbox_hdr *hdr, void *txdata, +__rte_internal octeontx_mbox_send(struct octeontx_mbox_hdr *hdr, void *txdata, uint16_t txlen, void *rxdata, uint16_t rxlen) { struct mbox *m = &octeontx_mbox; diff --git a/drivers/common/octeontx/octeontx_mbox.h b/drivers/common/octeontx/octeontx_mbox.h index 43fbda282..1055d30b0 100644 --- a/drivers/common/octeontx/octeontx_mbox.h +++ 
@@ -29,9 +29,9 @@ struct octeontx_mbox_hdr {
 	uint8_t res_code; /* Functional layer response code */
 };
 
-int octeontx_mbox_set_ram_mbox_base(uint8_t *ram_mbox_base);
-int octeontx_mbox_set_reg(uint8_t *reg);
-int octeontx_mbox_send(struct octeontx_mbox_hdr *hdr,
+int __rte_internal octeontx_mbox_set_ram_mbox_base(uint8_t *ram_mbox_base);
+int __rte_internal octeontx_mbox_set_reg(uint8_t *reg);
+int __rte_internal octeontx_mbox_send(struct octeontx_mbox_hdr *hdr,
 		void *txdata, uint16_t txlen, void *rxdata, uint16_t rxlen);
 
 #endif /* __OCTEONTX_MBOX_H__ */
diff --git a/drivers/common/octeontx/rte_common_octeontx_version.map b/drivers/common/octeontx/rte_common_octeontx_version.map
index f04b3b7f8..523444a75 100644
--- a/drivers/common/octeontx/rte_common_octeontx_version.map
+++ b/drivers/common/octeontx/rte_common_octeontx_version.map
@@ -1,7 +1,14 @@
-DPDK_18.05 {
+INTERNAL {
 	global:
 
 	octeontx_mbox_set_ram_mbox_base;
 	octeontx_mbox_set_reg;
 	octeontx_mbox_send;
 };
+
+DPDK_18.05 {
+	global:
+	octeontx_logtype_mbox;
+
+	local: *;
+};

From patchwork Thu Jun 13 14:23:44 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Neil Horman X-Patchwork-Id: 54779 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id A54751D608; Thu, 13 Jun 2019 16:24:24 +0200 (CEST) Received: from smtp.tuxdriver.com (charlotte.tuxdriver.com [70.61.120.58]) by dpdk.org (Postfix) with ESMTP id 478E41D60B for ; Thu, 13 Jun 2019 16:24:13 +0200 (CEST) Received: from [107.15.85.130] (helo=hmswarspite.think-freely.org) by smtp.tuxdriver.com with esmtpsa (TLSv1:AES256-SHA:256) (Exim 4.63) (envelope-from ) id 1hbQe7-0000gM-69; Thu, 13 Jun 2019 10:24:10 -0400 Received: from hmswarspite.think-freely.org (localhost [127.0.0.1]) by hmswarspite.think-freely.org (8.15.2/8.15.2) with ESMTP id x5DEO1SZ009392; Thu, 13 Jun 2019 10:24:01 -0400 Received: (from nhorman@localhost) by hmswarspite.think-freely.org (8.15.2/8.15.2/Submit) id x5DEO0v0009376; Thu, 13 Jun 2019 10:24:00 -0400 From: Neil Horman To: dev@dpdk.org Cc: Neil Horman , Jerin Jacob Kollanukkaran , Bruce Richardson , Thomas Monjalon , Akhil Goyal , Hemant Agrawal Date: Thu, 13 Jun 2019 10:23:44 -0400 Message-Id: <20190613142344.9188-11-nhorman@tuxdriver.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190613142344.9188-1-nhorman@tuxdriver.com> References: <20190525184346.27932-1-nhorman@tuxdriver.com> <20190613142344.9188-1-nhorman@tuxdriver.com> MIME-Version: 1.0 X-Spam-Score: -2.9 (--) X-Spam-Status: No Subject: [dpdk-dev] [PATCH v2 10/10] dpaa2: mark internal functions with __rte_internal X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev"

Identify the functions in the dpaa2_sec driver which are internal
(distinguishable by their lack of an rte_ prefix) and tag them with
__rte_internal. The event queue attach/detach symbols are also moved to
the new INTERNAL block of the version map.

Signed-off-by: Neil Horman
CC: Jerin Jacob Kollanukkaran
CC: Bruce Richardson
CC: Thomas Monjalon
CC: Akhil Goyal
CC: Hemant Agrawal
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c        | 4 ++--
 drivers/crypto/dpaa2_sec/dpaa2_sec_event.h         | 4 ++--
 .../crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map | 13 ++++++-------
 3 files changed, 10 insertions(+), 11 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 0d273bb62..6a4a42d1b 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -3152,7 +3152,7 @@ dpaa2_sec_process_atomic_event(struct qbman_swp *swp __attribute__((unused)),
 }
 
 int
-dpaa2_sec_eventq_attach(const struct rte_cryptodev *dev,
+__rte_internal dpaa2_sec_eventq_attach(const struct rte_cryptodev *dev,
 		int qp_id,
 		uint16_t dpcon_id,
 		const struct rte_event *event)
@@ -3195,7 +3195,7 @@ dpaa2_sec_eventq_attach(const struct rte_cryptodev *dev,
 }
 
 int
-dpaa2_sec_eventq_detach(const struct rte_cryptodev *dev,
+__rte_internal dpaa2_sec_eventq_detach(const struct rte_cryptodev *dev,
 		int qp_id)
 {
 	struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_event.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_event.h
index 977099429..c142646fc 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_event.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_event.h
@@ -7,12 +7,12 @@
 #define _DPAA2_SEC_EVENT_H_
 
 int
-dpaa2_sec_eventq_attach(const struct rte_cryptodev *dev,
+__rte_internal dpaa2_sec_eventq_attach(const struct rte_cryptodev *dev,
 		int qp_id,
 		uint16_t dpcon_id,
 		const struct rte_event *event);
 
-int dpaa2_sec_eventq_detach(const struct rte_cryptodev *dev,
+int __rte_internal dpaa2_sec_eventq_detach(const struct rte_cryptodev *dev,
 		int qp_id);
 
 #endif /* _DPAA2_SEC_EVENT_H_ */
diff --git a/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
index 0bfb986d0..ca0aedf3e 100644
--- a/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
+++ b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
@@ -1,12 +1,11 @@
-DPDK_17.05 {
-
-	local: *;
-};
-
-DPDK_18.11 {
+INTERNAL {
 	global:
 
 	dpaa2_sec_eventq_attach;
 	dpaa2_sec_eventq_detach;
 
+};
+
+DPDK_17.05 {
-} DPDK_17.05;
+	local: *;
+};
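For illustration only (this sketch is not part of the patch): with the version map reworked, dpaa2_sec_eventq_attach and dpaa2_sec_eventq_detach are exported only under the INTERNAL version node, so they stay reachable from other in-tree DPDK components but carry no ABI promise toward applications. The wrapper below is hypothetical and simply exercises the two prototypes from dpaa2_sec_event.h; the qp_id value and the event-adapter framing are placeholders.

/*
 * Hypothetical in-tree caller (e.g. an event-device path) of the two
 * dpaa2_sec functions tagged above. Not part of this patch; assumes a
 * BUILDING_RTE_SDK build linked against the dpaa2_sec PMD.
 */
#include <rte_cryptodev.h>
#include <rte_eventdev.h>

#include "dpaa2_sec_event.h"

static int
example_bind_sec_queue(const struct rte_cryptodev *dev, uint16_t dpcon_id,
		       const struct rte_event *ev)
{
	int qp_id = 0;	/* placeholder queue pair index */
	int ret;

	/* Route completions from this crypto queue pair to the DPCON. */
	ret = dpaa2_sec_eventq_attach(dev, qp_id, dpcon_id, ev);
	if (ret)
		return ret;

	/* ... crypto completions would now be delivered as events ... */

	return dpaa2_sec_eventq_detach(dev, qp_id);
}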